Sample records for computer vision technology

  1. (Computer) Vision without Sight

    PubMed Central

    Manduchi, Roberto; Coughlan, James

    2012-01-01

    Computer vision holds great promise for helping persons with blindness or visual impairments (VI) to interpret and explore the visual world. To this end, it is worthwhile to assess the situation critically by understanding the actual needs of the VI population and which of these needs might be addressed by computer vision. This article reviews the types of assistive technology application areas that have already been developed for VI, and the possible roles that computer vision can play in facilitating these applications. We discuss how appropriate user interfaces are designed to translate the output of computer vision algorithms into information that the user can quickly and safely act upon, and how system-level characteristics affect the overall usability of an assistive technology. Finally, we conclude by highlighting a few novel and intriguing areas of application of computer vision to assistive technology. PMID:22815563

  2. Reinforcement learning in computer vision

    NASA Astrophysics Data System (ADS)

    Bernstein, A. V.; Burnaev, E. V.

    2018-04-01

    Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition and tracking. In many applications, complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving the corresponding computer vision tasks. The solutions of these tasks are used for making decisions about possible future actions. It is not surprising that, when solving computer vision tasks, we should take into account special aspects of their subsequent application in model-based predictive control. Reinforcement learning is a modern machine learning technology in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for solving applied tasks such as the processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes reinforcement learning technology and its use for solving computer vision problems.

  3. Understanding and preventing computer vision syndrome.

    PubMed

    Loh, K. Y.; Reddy, S. C.

    2008-01-01

    The invention of the computer and advances in information technology have revolutionized and benefited society, but at the same time have caused symptoms related to their usage, such as ocular strain, irritation, redness, dryness, blurred vision and double vision. This cluster of symptoms is known as computer vision syndrome, which is characterized by the visual symptoms that result from interaction with a computer display or its environment. Three major mechanisms lead to computer vision syndrome: the extraocular mechanism, the accommodative mechanism and the ocular surface mechanism. Display characteristics such as brightness, resolution, glare and quality are all known factors that contribute to computer vision syndrome. Prevention is the most important strategy in managing computer vision syndrome. Modification of the ergonomics of the working environment, patient education and proper eye care are crucial in managing computer vision syndrome.

  4. Chapter 11. Quality evaluation of apple by computer vision

    USDA-ARS?s Scientific Manuscript database

    Apple is one of the most consumed fruits in the world, and there is a critical need for enhanced computer vision technology for quality assessment of apples. This chapter gives a comprehensive review on recent advances in various computer vision techniques for detecting surface and internal defects ...

  5. Milestones on the road to independence for the blind

    NASA Astrophysics Data System (ADS)

    Reed, Kenneth

    1997-02-01

    Ken will talk about his experiences as an end user of technology. Even moderate technological progress in the fields of pattern recognition and artificial intelligence can be, often surprisingly, of great help to the blind. An example is providing portable bar code scanners so that a blind person knows what he is buying and what color it is. In this age of microprocessors controlling everything, how can a blind person find out what his VCR is doing? Is there some technique that will allow a blind musician to convert print music into MIDI files to drive a synthesizer? Can computer vision help the blind cross a road, including predicting where oncoming traffic will be? Can computer vision technology provide a spoken description of scenes so a blind person can figure out where doors and entrances are located, and what the signage on the building says? He asks, 'Can computer vision help me flip a pancake?' His challenge to those in the computer vision field is, 'Where can we go from here?'

  6. Computer Vision Assisted Virtual Reality Calibration

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  7. Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Young, Steven D.

    2005-01-01

    In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.

  8. Development of a body motion interactive system with a weight voting mechanism and computer vision technology

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien

    2012-09-01

    This study develops a body motion interactive system with computer vision technology. The application combines interactive games, art performance, and an exercise training system. Multiple image processing and computer vision technologies are used in this study. The system can calculate the color characteristics of an object and then perform color segmentation. When an action judgment is wrong, the system avoids the error with a weight voting mechanism, which sets a condition score and weight value for each action judgment and chooses the best judgment from the weighted vote. Finally, this study estimated the reliability of the system in order to make improvements. The results showed that this method achieves good accuracy and stability during operation of the human-machine interface of the sports training system.
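
    The weight voting mechanism described above can be sketched in a few lines. This is an illustrative reconstruction under assumed names (weighted_vote, the detector labels, scores and weights), not the authors' implementation:

```python
# Illustrative sketch: each detector proposes an action label with a
# condition score; votes are accumulated with per-detector weights and
# the best-supported action wins, so one misjudging detector is outvoted.
from collections import defaultdict

def weighted_vote(judgments, weights):
    """judgments: list of (detector, action, score); weights: detector -> weight."""
    totals = defaultdict(float)
    for detector, action, score in judgments:
        totals[action] += weights.get(detector, 1.0) * score
    return max(totals, key=totals.get)

judgments = [
    ("color",  "raise_arm", 0.6),
    ("motion", "raise_arm", 0.7),
    ("shape",  "wave",      0.9),   # a single disagreeing detector
]
weights = {"color": 1.0, "motion": 1.0, "shape": 0.5}
print(weighted_vote(judgments, weights))  # raise_arm (1.3 outweighs 0.45)
```

    Down-weighting the less reliable detector is what lets the vote override a single wrong action judgment, which is the error-avoidance behavior the abstract describes.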

  9. Rationale, Design and Implementation of a Computer Vision-Based Interactive E-Learning System

    ERIC Educational Resources Information Center

    Xu, Richard Y. D.; Jin, Jesse S.

    2007-01-01

    This article presents a schematic application of computer vision technologies to e-learning that is synchronous, peer-to-peer-based, and supports an instructor's interaction with non-computer teaching equipment. The article first discusses the importance of these focused e-learning areas, where the properties include accurate bidirectional…

  10. Another 25 Years of AIED? Challenges and Opportunities for Intelligent Educational Technologies of the Future

    ERIC Educational Resources Information Center

    Pinkwart, Niels

    2016-01-01

    This paper attempts an analysis of some current trends and future developments in computer science, education, and educational technology. Based on these trends, two possible future predictions of AIED are presented in the form of a utopian vision and a dystopian vision. A comparison of these two visions leads to seven challenges that AIED might…

  11. A self-learning camera for the validation of highly variable and pseudorandom patterns

    NASA Astrophysics Data System (ADS)

    Kelley, Michael

    2004-05-01

    Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance and the ability to handle irregular lighting, patterns and color. The next generation of smart cameras will need to evolve into a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural-network-based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing without the need for computer programming. This can address applications of validating highly variable and pseudorandom patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer enables a vision system to 'think' and 'inspect' like a human, with the speed and reliability of a machine.
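
    The stored-prototype matching that this class of neural hardware performs can be modeled in software. The following is an illustrative sketch of prototype classification with influence fields (the L1 distance and radius test are the general technique; all names and values are assumed, and this is not the ZiCAM firmware):

```python
# Illustrative sketch of prototype matching with influence fields:
# exemplars are stored with a radius and a label, and a sample is
# classified by the nearest prototype whose field contains it --
# "programming" the camera means storing examples, not writing rules.
def classify(sample, prototypes):
    """prototypes: list of (vector, radius, label). Returns label or None."""
    best = None
    for vec, radius, label in prototypes:
        d = sum(abs(a - b) for a, b in zip(sample, vec))  # L1 distance
        if d <= radius and (best is None or d < best[0]):
            best = (d, label)
    return best[1] if best else None

prototypes = [([0, 0, 0], 2, "good"), ([9, 9, 9], 2, "defect")]
print(classify([1, 0, 0], prototypes))  # good
print(classify([5, 5, 5], prototypes))  # None: outside all influence fields
```

    Returning None for samples outside every influence field is what lets such a system flag genuinely novel patterns instead of forcing a decision.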

  12. Recent advances in the development and transfer of machine vision technologies for space

    NASA Technical Reports Server (NTRS)

    Defigueiredo, Rui J. P.; Pendleton, Thomas

    1991-01-01

    Recent work concerned with real-time machine vision is briefly reviewed. This work includes methodologies and techniques for optimal illumination, shape-from-shading of general (non-Lambertian) 3D surfaces, laser vision devices and technology, high level vision, sensor fusion, real-time computing, artificial neural network design and use, and motion estimation. Two new methods that are currently being developed for object recognition in clutter and for 3D attitude tracking based on line correspondence are discussed.

  13. High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations

    NASA Technical Reports Server (NTRS)

    Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.

    2003-01-01

    Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.

  14. Employment after Vision Loss: Results of a Collective Case Study.

    ERIC Educational Resources Information Center

    Crudden, Adele

    2002-01-01

    A collective case study approach was used to examine factors that influence the job retention of persons with vision loss. Computer technology was found to be a major positive influence and print access and technology were a source of stress for most participants (n=10). (Contains 7 references.) (Author/CR)

  15. Development of the method of aggregation to determine the current storage area using computer vision and radiofrequency identification

    NASA Astrophysics Data System (ADS)

    Astafiev, A.; Orlov, A.; Privezencev, D.

    2018-01-01

    The article is devoted to the development of technology and software for positioning and control systems in industrial plants, based on aggregation to determine the current storage area using computer vision and radio-frequency identification. It describes the design of the hardware for an industrial product positioning system on the territory of a plant based on a radio-frequency grid, and of the hardware for a positioning system based on computer vision methods. It then describes the development of the aggregation method that determines the current storage area by combining computer vision and radio-frequency identification. Experimental studies in laboratory and production conditions have been conducted and are described in the article.
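
    The aggregation idea, a coarse RFID zone confirmed or refined by a vision position fix, can be sketched as follows; the zone layout, function names and precedence rule are illustrative assumptions, not the paper's method:

```python
# Hypothetical sketch of aggregating the two modalities: an RFID read
# narrows the product to a coarse storage zone, and a vision fix (when
# available) refines it; when the two disagree, the RFID zone is trusted.
def current_area(rfid_zone, vision_xy, zones):
    """zones: area name -> (x0, y0, x1, y1) rectangle on the plant floor."""
    if vision_xy is not None:
        x, y = vision_xy
        for name, (x0, y0, x1, y1) in zones.items():
            if x0 <= x < x1 and y0 <= y < y1:
                # accept the vision area only if consistent with the RFID read
                return name if name == rfid_zone or rfid_zone is None else rfid_zone
    return rfid_zone

zones = {"A": (0, 0, 10, 10), "B": (10, 0, 20, 10)}
print(current_area("A", (3, 4), zones))   # both modalities agree -> A
print(current_area("B", None, zones))     # vision lost -> fall back to RFID zone B
```

    Other precedence rules (e.g. trusting vision when its fix is recent) would fit the same aggregation skeleton.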

  16. Computational thinking and thinking about computing

    PubMed Central

    Wing, Jeannette M.

    2008-01-01

    Computational thinking will influence everyone in every field of endeavour. This vision poses a new educational challenge for our society, especially for our children. In thinking about computing, we need to be attuned to the three drivers of our field: science, technology and society. Accelerating technological advances and monumental societal demands force us to revisit the most basic scientific questions of computing. PMID:18672462

  17. Benefit from NASA

    NASA Image and Video Library

    1985-01-01

    The NASA imaging processing technology, an advanced computer technique to enhance images sent to Earth in digital form by distant spacecraft, helped develop a new vision screening process. The Ocular Vision Screening system, an important step in preventing vision impairment, is a portable device designed especially to detect eye problems in children through the analysis of retinal reflexes.

  18. The Implications of Pervasive Computing on Network Design

    NASA Astrophysics Data System (ADS)

    Briscoe, R.

    Mark Weiser's late-1980s vision of an age of calm technology with pervasive computing disappearing into the fabric of the world [1] has been tempered by an industry-driven vision with more of a feel of conspicuous consumption. In the modified version, everyone carries around consumer electronics to provide natural, seamless interactions both with other people and with the information world, particularly for eCommerce, but still through a pervasive computing fabric.

  19. Reconfigurable vision system for real-time applications

    NASA Astrophysics Data System (ADS)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

    Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for system-on-chip designs and makes it easy to import technology into a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to support such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, together with a mechanism for building systems from this architecture. Regarding the software part of the system, a library of pre-designed, general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.
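
    The window-based operations such a module library targets share one structure, which a short software model makes concrete. The sketch below is a generic sliding-window driver parameterized by the per-window function; the names are assumed and this is not the actual FPGA module interface:

```python
# Minimal software model of a "window-based operator": one generic
# sliding-window driver; swapping the per-window function fn turns it
# into a max filter, box filter, median filter, etc.
def window_op(image, k, fn):
    """Apply fn to every k x k window of a 2-D list image (valid region only)."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h - k + 1):
        row = []
        for x in range(w - k + 1):
            win = [image[y + dy][x + dx] for dy in range(k) for dx in range(k)]
            row.append(fn(win))
        out.append(row)
    return out

img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
print(window_op(img, 3, max))   # 3x3 dilation-like max filter
print(window_op(img, 3, sum))   # un-normalized 3x3 box filter
```

    Factoring the window traversal out of the per-pixel operation mirrors how a common hardware interface can serve many different vision modules.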

  20. Contemporary Technology.

    ERIC Educational Resources Information Center

    Clark, Gilbert, Ed.

    1999-01-01

    This theme issue of "InSEA News" focuses on contemporary technology and art education. The articles are: "International Travel and Contemporary Technology" (Gilbert Clark); "Recollections and Visions for Electronic Computing in Art Education" (Guy Hubbard); "Using Technologies in Art Education: A Review of…

  1. Strategic Computing. New-Generation Computing Technology: A Strategic Plan for Its Development and Application to Critical Problems in Defense

    DTIC Science & Technology

    1983-10-28

    Computing. By seizing an opportunity to leverage recent advances in artificial intelligence, computer science, and microelectronics, the Agency plans...occurred in many separate areas of artificial intelligence, computer science, and microelectronics. Advances in "expert system" technology now...and expert knowledge. Advances in Artificial Intelligence: mechanization of speech recognition, vision, and natural language understanding.

  2. Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffith, Douglas; Greitzer, Frank L.

    We re-address the vision of human-computer symbiosis expressed by J. C. R. Licklider nearly a half-century ago, when he wrote: “The hope is that in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.” (Licklider, 1960). Unfortunately, little progress was made toward this vision over the four decades following Licklider’s challenge, despite significant advancements in the fields of human factors and computer science. Licklider’s vision was largely forgotten. However, recent advances in information science and technology, psychology, and neuroscience have rekindled the potential of making Licklider’s vision a reality. This paper provides a historical context for and updates the vision, and it argues that such a vision is needed as a unifying framework for advancing IS&T.

  3. Microwave vision for robots

    NASA Technical Reports Server (NTRS)

    Lewandowski, Leon; Struckman, Keith

    1994-01-01

    Microwave Vision (MV), a concept originally developed in 1985, could play a significant role in the solution to robotic vision problems. Originally our Microwave Vision concept was based on a pattern matching approach employing computer based stored replica correlation processing. Artificial Neural Network (ANN) processor technology offers an attractive alternative to the correlation processing approach, namely the ability to learn and to adapt to changing environments. This paper describes the Microwave Vision concept, some initial ANN-MV experiments, and the design of an ANN-MV system that has led to a second patent disclosure in the robotic vision field.

  4. School Technology Grows Up.

    ERIC Educational Resources Information Center

    Vail, Kathleen

    2003-01-01

    Practitioners and researchers in the education technology field asked to give their vision of the future list laptop computers, personal digital assistants, electronic testing, wireless networking, and multimedia technology among the technology advances headed soon for schools. A sidebar lists 12 online resources. (MLF)

  5. Two-dimensional (2D) displacement measurement of moving objects using a new MEMS binocular vision system

    NASA Astrophysics Data System (ADS)

    Di, Si; Lin, Hui; Du, Ruxu

    2011-05-01

    Displacement measurement of moving objects is one of the most important issues in the field of computer vision. This paper introduces a new binocular vision system (BVS) based on micro-electro-mechanical system (MEMS) technology. The eyes of the system are two microlenses fabricated on a substrate by MEMS technology. The imaging results of the two microlenses are collected by one complementary metal-oxide-semiconductor (CMOS) array. An algorithm is developed for computing the displacement. Experimental results show that as long as the object is moving in two-dimensional (2D) space, the system can effectively estimate the 2D displacement without camera calibration. It is also shown that the average error of the displacement measurement is about 3.5% at object distances ranging from 10 cm to 35 cm. Because of its low cost, small size and simple setup, this new method is particularly suitable for 2D displacement measurement applications such as vision-based electronics assembly and biomedical cell culture.
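
    Estimating a 2D displacement between two frames can be illustrated with a generic integer-pixel block-matching search. This is a common technique sketched under assumed names, not the authors' algorithm:

```python
# Illustrative 2-D displacement estimate: brute-force search for the
# integer shift (dx, dy) minimizing the mean absolute difference between
# the previous and current frame over their overlapping region.
def displacement(prev, curr, max_shift=2):
    """Return (dx, dy) best aligning prev onto curr."""
    h, w = len(prev), len(prev[0])
    best = None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            sad, n = 0, 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        sad += abs(prev[y][x] - curr[yy][xx])
                        n += 1
            score = sad / n
            if best is None or score < best[0]:
                best = (score, dx, dy)
    return best[1], best[2]

# a textured frame and a copy shifted one pixel right and one down
prev = [[3 * y + x for x in range(5)] for y in range(5)]
curr = [[prev[y - 1][x - 1] if y and x else 0 for x in range(5)]
        for y in range(5)]
print(displacement(prev, curr))  # (1, 1)
```

    Subpixel accuracy, as a real measurement system would need, is typically obtained by interpolating around the best integer shift.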

  6. Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Prinzel, L.J.; Kramer, L.J.

    2009-01-01

    A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.

  7. Synthetic Vision Displays for Planetary and Lunar Lander Vehicles

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Williams, Steven P.; Shelton, Kevin J.; Kramer, Lynda J.; Bailey, Randall E.; Norman, Robert M.

    2008-01-01

    Aviation research has demonstrated that Synthetic Vision (SV) technology can substantially enhance situation awareness, reduce pilot workload, improve aviation safety, and promote flight path control precision. SV, and related flight deck technologies are currently being extended for application in planetary exploration vehicles. SV, in particular, holds significant potential for many planetary missions since the SV presentation provides a computer-generated view for the flight crew of the terrain and other significant environmental characteristics independent of the outside visibility conditions, window locations, or vehicle attributes. SV allows unconstrained control of the computer-generated scene lighting, terrain coloring, and virtual camera angles which may provide invaluable visual cues to pilots/astronauts, not available from other vision technologies. In addition, important vehicle state information may be conformally displayed on the view such as forward and down velocities, altitude, and fuel remaining to enhance trajectory control and vehicle system status. The paper accompanies a conference demonstration that introduced a prototype NASA Synthetic Vision system for lunar lander spacecraft. The paper will describe technical challenges and potential solutions to SV applications for the lunar landing mission, including the requirements for high-resolution lunar terrain maps, accurate positioning and orientation, and lunar cockpit display concepts to support projected mission challenges.

  8. A Review on Making Things See: Augmented Reality for Futuristic Virtual Educator

    ERIC Educational Resources Information Center

    Iqbal, Javid; Sidhu, Manjit Singh

    2017-01-01

    In the past few years many choreographers have focused upon implementation of computer technology to enhance their artistic skills. Computer vision technology presents new methods for learning, instructing, developing, and assessing physical movements as well as provides scope to expand dance resources and rediscover the learning process. This…

  9. Learning with Portable Digital Devices in Australian Schools: 20 Years On!

    ERIC Educational Resources Information Center

    Newhouse, C. Paul

    2014-01-01

    Portable computing technologies such as laptops, tablets, smartphones, wireless networking, voice/stylus input, and plug and play peripheral devices, appear to offer the means of finally realising much of the long heralded vision for computers to support learning in schools. There is the possibility for the technology to finally become a…

  10. Refocusing the Vision: The Future of Instructional Technology

    ERIC Educational Resources Information Center

    Pence, Harry E.; McIntosh, Steven

    2011-01-01

    Two decades ago, many campuses mobilized a major effort to deal with a clear problem: faculty and students needed access to desktop computing technologies. Now the situation is much more complex. Responding to the current challenges, like mobile computing and social networking, will be more difficult but equally important. There is a clear need for…

  11. Integrated Imaging and Vision Techniques for Industrial Inspection: A Special Issue on Machine Vision and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zheng; Ukida, H.; Ramuhalli, Pradeep

    2010-06-05

    Imaging- and vision-based techniques play an important role in industrial inspection. The sophistication of the techniques assures high-quality performance of the manufacturing process through precise positioning, online monitoring, and real-time classification. Advanced systems incorporating multiple imaging and/or vision modalities provide robust solutions to complex situations and problems in industrial applications. A diverse range of industries, including aerospace, automotive, electronics, pharmaceutical, biomedical, semiconductor, and food/beverage, have benefited from recent advances in multi-modal imaging, data fusion, and computer vision technologies. Many of the open problems in this context are in the general area of image analysis methodologies (preferably in an automated fashion). This editorial article introduces a special issue of this journal highlighting recent advances and demonstrating the successful applications of integrated imaging and vision technologies in industrial inspection.

  12. Assistive technology for children and young people with low vision.

    PubMed

    Thomas, Rachel; Barker, Lucy; Rubin, Gary; Dahlmann-Noor, Annegret

    2015-06-18

    Recent technological developments, such as the near-universal spread of mobile phones and portable computers and improvements in the accessibility features of these devices, give children and young people with low vision greater independent access to information. Some electronic technologies, such as closed circuit TV, are well-established low vision aids, and newer versions, such as electronic readers or off-the-shelf tablet computers, may offer similar functionalities with easier portability and at lower cost. To assess the effect of electronic assistive technologies on reading, educational outcomes and quality of life in children and young people with low vision. We searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (2014, Issue 9), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to October 2014), EMBASE (January 1980 to October 2014), the Health Technology Assessment Programme (HTA) (www.hta.ac.uk/), the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com), ClinicalTrials.gov (www.clinicaltrials.gov) and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 30 October 2014. We intended to include randomised controlled trials (RCTs) and quasi-RCTs in this review. We planned to include trials involving children between the ages of 5 and 16 years with low vision as defined by, or equivalent to, the WHO 1992 definition of low vision. We planned to include studies that explore the use of assistive technologies (ATs). These could include all types of closed circuit television/electronic vision enhancement systems (CCTV/EVES), computer technology including tablet computers, and adaptive technologies such as screen readers, screen magnification and optical character recognition (OCR). We intended to compare the use of ATs with standard optical aids, which include distance refractive correction (with appropriate near addition for aphakic (no lens)/pseudophakic (with lens implant) patients), monocular/binoculars for distance, and brightfield magnifiers for near. We also planned to include studies that compare different types of ATs with each other, without or in addition to conventional optical aids, and those that compare ATs given with or without instructions for use. Independently, two review authors reviewed titles and abstracts for eligibility. They divided studies into categories of 'definitely include', 'definitely exclude' and 'possibly include', and the same two authors made final judgements about inclusion/exclusion by obtaining full-text copies of the studies in the 'possibly include' category. We did not identify any randomised controlled trials in this subject area. High-quality evidence about the usefulness of electronic AT for children and young people with visual impairment is needed to inform the choices that healthcare and education providers and families have to make when selecting a technology. Randomised controlled trials are needed to assess the impact of AT. Research protocols should carefully select outcomes relevant not only to the scientific community, but more importantly to families and teachers. Functional outcomes such as reading accuracy, comprehension and speed should be recorded, as well as the impact of AT on independent learning and quality of life.

  13. Machine vision methods for use in grain variety discrimination and quality analysis

    NASA Astrophysics Data System (ADS)

    Winter, Philip W.; Sokhansanj, Shahab; Wood, Hugh C.

    1996-12-01

    The decreasing cost of computer technology has made it feasible to incorporate machine vision technology into the agriculture industry. The biggest attraction of using a machine vision system is the computer's ability to be completely consistent and objective. One use is in the variety discrimination and quality inspection of grains. Algorithms have been developed using Fourier descriptors and neural networks for variety discrimination of barley seeds. RGB and morphology features have been used in the quality analysis of lentils, and probability distribution functions and L, a, b color values for borage dockage testing. These methods have been shown to be very accurate and have high potential for agriculture. This paper presents the techniques used and results obtained from projects including a lentil quality discriminator, a barley variety classifier, a borage dockage tester, a popcorn quality analyzer, and a pistachio nut grading system.
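
    The Fourier-descriptor shape features mentioned above can be sketched briefly. The normalization shown (drop the k=0 coefficient, divide by |c1|, keep magnitudes only) is one common convention and is not taken from the paper; the circular contour is synthetic:

```python
import cmath
import math

# Illustrative Fourier descriptors of a closed contour: DFT of the
# boundary points treated as complex numbers, normalized so the features
# are invariant to translation (k=0 dropped), scale (divide by |c_1|)
# and rotation/starting point (magnitudes only).
def fourier_descriptors(contour, n=4):
    """contour: ordered list of (x, y) boundary points. Returns |c_k|/|c_1|, k=1..n."""
    pts = [complex(x, y) for x, y in contour]
    N = len(pts)
    coeffs = [sum(p * cmath.exp(-2j * math.pi * k * t / N)
                  for t, p in enumerate(pts)) / N
              for k in range(n + 1)]
    scale = abs(coeffs[1]) or 1.0
    return [abs(c) / scale for c in coeffs[1:]]

# a circular "seed" outline: descriptors are (1, 0, 0, ...) whatever the radius
circle = [(5 * math.cos(2 * math.pi * t / 32), 5 * math.sin(2 * math.pi * t / 32))
          for t in range(32)]
d = fourier_descriptors(circle)
print(round(d[0], 6), round(max(d[1:]), 6))  # 1.0 and ~0.0
```

    Feature vectors like d, computed from seed outlines, are the kind of fixed-length input a neural network classifier can be trained on for variety discrimination.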

  14. Computer-Assisted Culture Learning in an Online Augmented Reality Environment Based on Free-Hand Gesture Interaction

    ERIC Educational Resources Information Center

    Yang, Mau-Tsuen; Liao, Wan-Che

    2014-01-01

    The physical-virtual immersion and real-time interaction play an essential role in cultural and language learning. Augmented reality (AR) technology can be used to seamlessly merge virtual objects with real-world images to realize immersions. Additionally, computer vision (CV) technology can recognize free-hand gestures from live images to enable…

  15. Accessibility of e-Learning and Computer and Information Technologies for Students with Visual Impairments in Postsecondary Education

    ERIC Educational Resources Information Center

    Fichten, Catherine S.; Asuncion, Jennison V.; Barile, Maria; Ferraro, Vittoria; Wolforth, Joan

    2009-01-01

    This article presents the results of two studies on the accessibility of e-learning materials and other information and computer and communication technologies for 143 Canadian college and university students with low vision and 29 who were blind. It offers recommendations for enhancing access, creating new learning opportunities, and eliminating…

  16. Technology for Work, Home, and Leisure. Tech Use Guide: Using Computer Technology.

    ERIC Educational Resources Information Center

    Williams, John M.

    This guide provides a brief introduction to several types of technological devices useful to individuals with disabilities and illustrates how some individuals are applying technology in the workplace and at home. Devices described include communication aids, low-vision products, voice-activated systems, environmental controls, and aids for…

  17. Big data computing: Building a vision for ARS information management

    USDA-ARS?s Scientific Manuscript database

    Improvements are needed within the ARS to increase scientific capacity and keep pace with new developments in computer technologies that support data acquisition and analysis. Enhancements in computing power and IT infrastructure are needed to provide scientists better access to high performance com...

  18. Video image processing

    NASA Technical Reports Server (NTRS)

    Murray, N. D.

    1985-01-01

Current technology projections indicate a lack of availability of special purpose computing for Space Station applications. Potential functions for video image special purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Also, architectural approaches are being identified and a conceptual design generated. Computationally simple algorithms will be researched and their image/vision effectiveness determined. Suitable algorithms will be implemented into an overall architectural approach that will provide image/vision processing at video rates that are flexible, selectable, and programmable. Information is given in the form of charts, diagrams and outlines.

  19. Computers for the Disabled.

    ERIC Educational Resources Information Center

    Lazzaro, Joseph J.

    1993-01-01

    Describes adaptive technology for personal computers that accommodate disabled users and may require special equipment including hardware, memory, expansion slots, and ports. Highlights include vision aids, including speech synthesizers, magnification, braille, and optical character recognition (OCR); hearing adaptations; motor-impaired…

  20. Object and Facial Recognition in Augmented and Virtual Reality: Investigation into Software, Hardware and Potential Uses

    NASA Technical Reports Server (NTRS)

    Schulte, Erin

    2017-01-01

    As augmented and virtual reality grows in popularity, and more researchers focus on its development, other fields of technology have grown in the hopes of integrating with the up-and-coming hardware currently on the market. Namely, there has been a focus on how to make an intuitive, hands-free human-computer interaction (HCI) utilizing AR and VR that allows users to control their technology with little to no physical interaction with hardware. Computer vision, which is utilized in devices such as the Microsoft Kinect, webcams and other similar hardware has shown potential in assisting with the development of a HCI system that requires next to no human interaction with computing hardware and software. Object and facial recognition are two subsets of computer vision, both of which can be applied to HCI systems in the fields of medicine, security, industrial development and other similar areas.

  1. Assistive technology applied to education of students with visual impairment.

    PubMed

    Alves, Cássia Cristiane de Freitas; Monteiro, Gelse Beatriz Martins; Rabello, Suzana; Gasparetto, Maria Elisabete Rodrigues Freire; de Carvalho, Keila Monteiro

    2009-08-01

To verify the application of assistive technology, especially information technology, in the education of blind and low-vision students from the perceptions of their teachers. Descriptive survey study in public schools in three municipalities of the state of São Paulo, Brazil. The sample comprised 134 teachers. According to the teachers' opinions, there are differences in the specificities and applicability of assistive technology for blind and low-vision students, for whom specific computer programs are important. Information technology enhances reading and writing skills, as well as communication with the world on an equal basis, thereby improving quality of life and facilitating the learning process. The main reason for not using information technology is the lack of planning courses. The main requirements for the use of information technology in schools are enough computers for all students, advisers to help teachers, and pedagogical support. Assistive technology is applied to the education of students with visual impairment; however, teachers indicate the need for infrastructure and pedagogical support. Information technology is an important tool in the inclusion process and can promote independence and autonomy of students with visual impairment.

  2. Computer vision-based classification of hand grip variations in neurorehabilitation.

    PubMed

    Zariffa, José; Steeves, John D

    2011-01-01

The complexity of hand function is such that most existing upper limb rehabilitation robotic devices use only simplified hand interfaces. This is in contrast to the importance of the hand in regaining function after neurological injury. Computer vision technology has been used to identify hand posture in the field of Human Computer Interaction, but this approach has not been translated to the rehabilitation context. We describe a computer vision-based classifier that can be used to discriminate rehabilitation-relevant hand postures, and could be integrated into a virtual reality-based upper limb rehabilitation system. The proposed system was tested on a set of video recordings from able-bodied individuals performing cylindrical grasps, lateral key grips, and tip-to-tip pinches. The overall classification success rate was 91.2%, and was above 98% for 6 out of the 10 subjects.
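The abstract does not specify which classifier the authors used to discriminate the three grip types; as a generic stand-in, a nearest-centroid rule over hand-crafted posture features might look like this (all class names and feature values below are hypothetical):

```python
import numpy as np

def nearest_centroid_classify(features, centroids):
    """Assign a feature vector to the class with the nearest centroid.
    A generic stand-in: the abstract does not say which classifier the
    authors actually used."""
    names = list(centroids)
    dists = [np.linalg.norm(features - centroids[n]) for n in names]
    return names[int(np.argmin(dists))]

centroids = {  # hypothetical 2-D posture features per grip class
    "cylindrical": np.array([0.9, 0.2]),
    "key_grip": np.array([0.4, 0.7]),
    "pinch": np.array([0.1, 0.9]),
}
label = nearest_centroid_classify(np.array([0.85, 0.25]), centroids)
```

In a real system the features would come from segmented hand images in each video frame rather than being hand-specified.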

  3. Notions of Technology and Visual Literacy

    ERIC Educational Resources Information Center

    Stankiewicz, Mary Ann

    2004-01-01

    For many art educators, the word "technology" conjures up visions of overhead projectors and VCRs, video and digital cameras, computers equipped with graphic programs and presentation software, digital labs where images rendered in pixels replace the debris of charcoal dust and puddled paints. One forgets that visual literacy and technology have…

  4. Applications of Computer Vision for Assessing Quality of Agri-food Products: A Review of Recent Research Advances.

    PubMed

    Ma, Ji; Sun, Da-Wen; Qu, Jia-Huan; Liu, Dan; Pu, Hongbin; Gao, Wen-Hong; Zeng, Xin-An

    2016-01-01

With consumer concerns increasing over food quality and safety, the food industry has begun to pay much more attention to the development of rapid and reliable food-evaluation systems over the years. As a result, there is a great need for manufacturers and retailers to operate effective real-time assessments of food quality and safety during food production and processing. Computer vision, a nondestructive assessment approach, can estimate the characteristics of food products with the advantages of fast speed, ease of use, and minimal sample preparation. Specifically, computer vision systems are feasible for classifying food products into specific grades, detecting defects, and estimating properties such as color, shape, size, surface defects, and contamination. Therefore, in order to track the latest research developments of this technology in the agri-food industry, this review presents the fundamentals and instrumentation of computer vision systems, details applications in quality assessment of agri-food products from 2007 to 2013, and discusses future trends in combination with spectroscopy.
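A minimal sketch of the kind of defect screening this review describes, assuming NumPy; the threshold values and the accept/reject rule are illustrative assumptions, not taken from the review:

```python
import numpy as np

def grade_sample(rgb_image, dark_threshold=60, defect_fraction=0.05):
    """Toy accept/reject rule: flag a sample when too many pixels are
    darker than a threshold (e.g. bruised or contaminated regions).
    The thresholds are illustrative assumptions, not from the review."""
    gray = rgb_image.mean(axis=2)                # naive luminance
    frac_dark = (gray < dark_threshold).mean()   # fraction of dark pixels
    return "reject" if frac_dark > defect_fraction else "accept"

# Synthetic 8x8 samples: one clean, one with a dark blemish.
clean = np.full((8, 8, 3), 200.0)
blemished = clean.copy()
blemished[2:5, 2:5] = 20.0   # 9 of 64 pixels dark (~14%)
```

Production systems would add calibrated color spaces, shape and size features, and per-grade decision rules on top of this basic pattern.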

  5. Survey of computer vision-based natural disaster warning systems

    NASA Astrophysics Data System (ADS)

    Ko, ByoungChul; Kwak, Sooyeong

    2012-07-01

With the rapid development of information technology, natural disaster prevention is growing as a new research field dealing with surveillance systems. To forecast and prevent the damage caused by natural disasters, the development of systems to analyze natural disasters using remote sensing, geographic information systems (GIS), and vision sensors has received widespread interest over the last decade. This paper provides an up-to-date review of five different types of natural disasters and their corresponding warning systems using computer vision and pattern recognition techniques, including wildfire smoke and flame detection, water level detection for flood prevention, coastal zone monitoring, and landslide detection. Finally, we conclude with some thoughts about future research directions.

  6. Ubiquitous computing technology for just-in-time motivation of behavior change.

    PubMed

    Intille, Stephen S

    2004-01-01

    This paper describes a vision of health care where "just-in-time" user interfaces are used to transform people from passive to active consumers of health care. Systems that use computational pattern recognition to detect points of decision, behavior, or consequences automatically can present motivational messages to encourage healthy behavior at just the right time. Further, new ubiquitous computing and mobile computing devices permit information to be conveyed to users at just the right place. In combination, computer systems that present messages at the right time and place can be developed to motivate physical activity and healthy eating. Computational sensing technologies can also be used to measure the impact of the motivational technology on behavior.

  7. Technology Support for Discussion Based Learning: From Computer Supported Collaborative Learning to the Future of Massive Open Online Courses

    ERIC Educational Resources Information Center

    Rosé, Carolyn Penstein; Ferschke, Oliver

    2016-01-01

    This article offers a vision for technology supported collaborative and discussion-based learning at scale. It begins with historical work in the area of tutorial dialogue systems. It traces the history of that area of the field of Artificial Intelligence in Education as it has made an impact on the field of Computer-Supported Collaborative…

  8. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.

  9. Evaluation of the Texas Technology Immersion Pilot: Outcomes for the Third Year (2006-07)

    ERIC Educational Resources Information Center

    Shapley, Kelly; Sheehan, Daniel; Maloney, Catherine; Caranikas-Walker, Fanny

    2008-01-01

    The Technology Immersion Pilot (TIP), created by the Texas Legislature in 2003, set forth a vision for technology immersion in public schools. Senate Bill 396 called for the Texas Education Agency (TEA) to establish a pilot project to "immerse" schools in technology by providing a wireless mobile computing device for each teacher and…

  10. 2020 Vision Project Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordon, K.W.; Scott, K.P.

    2000-11-01

Since the 2020 Vision project began in 1996, students from participating schools have completed and submitted a variety of scenarios describing potential world and regional conditions in the year 2020 and their possible effect on US national security. This report summarizes the students' views and describes trends observed over the course of the 2020 Vision project's five years. It also highlights the main organizational features of the project. An analysis of thematic trends among the scenarios showed interesting shifts in students' thinking, particularly in their views of computer technology, US relations with China, and globalization. In 1996, most students perceived computer technology as highly beneficial to society, but as the year 2000 approached, this technology was viewed with fear and suspicion, even personified as a malicious, uncontrollable being. Yet, after New Year's passed with little disruption, students generally again perceived computer technology as beneficial. Also in 1996, students tended to see US relations with China as potentially positive, with economic interaction proving favorable to both countries. By 2000, this view had transformed into a perception of China emerging as the US' main rival and ''enemy'' in the global geopolitical realm. Regarding globalization, students in the first two years of the project tended to perceive world events as dependent on US action. However, by the end of the project, they saw the US as having little control over world events and therefore, we Americans would need to cooperate and compromise with other nations in order to maintain our own well-being.

  11. Technology Enriched Schools: Nine Case Studies with Reflections.

    ERIC Educational Resources Information Center

    Collis, Betty, Ed.; Carleer, Gerrit, Ed.

    Technology enriched school projects are initiatives in real school settings that try to reduce or remove problems and constraints that hamper the effective use of computers in the schools. These projects are based, not on technology, but on educational need and vision. They examine the atmosphere and functioning of a school that uses technology…

  12. Synthetic vision in the cockpit: 3D systems for general aviation

    NASA Astrophysics Data System (ADS)

    Hansen, Andrew J.; Rybacki, Richard M.; Smith, W. Garth

    2001-08-01

    Synthetic vision has the potential to improve safety in aviation through better pilot situational awareness and enhanced navigational guidance. The technological advances enabling synthetic vision are GPS based navigation (position and attitude) systems and efficient graphical systems for rendering 3D displays in the cockpit. A benefit for military, commercial, and general aviation platforms alike is the relentless drive to miniaturize computer subsystems. Processors, data storage, graphical and digital signal processing chips, RF circuitry, and bus architectures are at or out-pacing Moore's Law with the transition to mobile computing and embedded systems. The tandem of fundamental GPS navigation services such as the US FAA's Wide Area and Local Area Augmentation Systems (WAAS) and commercially viable mobile rendering systems puts synthetic vision well with the the technological reach of general aviation. Given the appropriate navigational inputs, low cost and power efficient graphics solutions are capable of rendering a pilot's out-the-window view into visual databases with photo-specific imagery and geo-specific elevation and feature content. Looking beyond the single airframe, proposed aviation technologies such as ADS-B would provide a communication channel for bringing traffic information on-board and into the cockpit visually via the 3D display for additional pilot awareness. This paper gives a view of current 3D graphics system capability suitable for general aviation and presents a potential road map following the current trends.

  13. Present status and trends of image fusion

    NASA Astrophysics Data System (ADS)

    Xiang, Dachao; Fu, Sheng; Cai, Yiheng

    2009-10-01

Image fusion extracts information from multiple images that is more accurate and reliable than information from a single image, since different images contain different aspects of the measured parts and comprehensive information can be obtained by integrating them. Image fusion is a main branch of the application of data fusion technology. At present, it is widely used in computer vision, remote sensing, robot vision, medical image processing and the military field. This paper presents image fusion's contents and research methods, surveys the status quo at home and abroad, and analyzes development trends.
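A minimal pixel-level fusion sketch in NumPy (the method names and rules here are illustrative; the surveyed literature covers far more sophisticated techniques such as multiresolution and wavelet-based fusion):

```python
import numpy as np

def fuse_images(images, method="mean"):
    """Fuse a list of co-registered, same-size images pixel by pixel,
    either by averaging or by keeping the per-pixel maximum."""
    stack = np.stack(images, axis=0).astype(float)
    if method == "mean":
        return stack.mean(axis=0)
    if method == "max":
        return stack.max(axis=0)
    raise ValueError(f"unknown method: {method}")

# Two complementary 2x2 "exposures" of the same scene.
a = np.array([[10.0, 200.0], [10.0, 200.0]])
b = np.array([[200.0, 10.0], [200.0, 10.0]])
fused = fuse_images([a, b], method="mean")
```

Averaging suppresses noise at the cost of contrast, while per-pixel maximum preserves the brightest detail from either source; real fusion pipelines trade these off per region or per frequency band.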

  14. Hypermediated Art Criticism

    ERIC Educational Resources Information Center

    Taylor, Pamela G.; Carpenter, B. Stephen, II

    2007-01-01

    Technological media catapults our perception into what Marshall McLuhan called "new transforming vision and awareness." As our lives become more and more immersed in such technologies as television, film, and interactive computers, we find ourselves inundated with a heightened sense of mindfulness--an aesthetic experience made possible through…

  15. Unstructured Grid Adaptation: Status, Potential Impacts, and Recommended Investments Toward CFD Vision 2030

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Krakos, Joshua A.; Michal, Todd; Loseille, Adrien; Alonso, Juan J.

    2016-01-01

Unstructured grid adaptation is a powerful tool to control discretization error for Computational Fluid Dynamics (CFD). It has enabled key increases in the accuracy, automation, and capacity of some fluid simulation applications. Slotnick et al. provide a number of case studies in the CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences to illustrate the current state of CFD capability and capacity. The authors forecast the potential impact of the High Performance Computing (HPC) environments anticipated in the year 2030 and identify that mesh generation and adaptivity continue to be significant bottlenecks in the CFD work flow. These bottlenecks may persist because very little government investment has been targeted in these areas. To motivate investment, the impacts of improved grid adaptation technologies are identified. The CFD Vision 2030 Study roadmap and anticipated capabilities in complementary disciplines are quoted to provide context for the progress made in grid adaptation in the past fifteen years, the current status, and a forecast for the next fifteen years with recommended investments. These investments are specific to mesh adaptation and impact other aspects of the CFD process. Finally, a strategy is identified to diffuse grid adaptation technology into production CFD work flows.

  16. Design And Implementation Of Integrated Vision-Based Robotic Workcells

    NASA Astrophysics Data System (ADS)

    Chen, Michael J.

    1985-01-01

Reports have been sparse on large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology for manufacturing of miniaturized electronic components. The concepts of FMS - Flexible Manufacturing Systems, work cells, and work stations and their control hierarchy are illustrated in this paper. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells as examples to illustrate the underlying design philosophy and principles in the fabrication of vision-based robotic systems.

  17. Unification and Enhancement of Planetary Robotic Vision Ground Processing: The EC FP7 Project PRoVisG

    NASA Astrophysics Data System (ADS)

    Paar, G.

    2009-04-01

At present, mainly the US has realized planetary space missions with essential robotics background. Joining institutions, companies and universities from different established groups in Europe and two relevant players from the US, the EC FP7 Project PRoVisG started in autumn 2008 to demonstrate the European ability to realize high-level processing of robotic vision image products from the surface of planetary bodies. PRoVisG will build a unified European framework for Robotic Vision Ground Processing. State-of-the-art computer vision technology will be collected inside and outside Europe to better exploit the image data gathered during past, present and future robotic space missions to the Moon and the Planets. This will lead to a significant enhancement of the scientific, technologic and educational outcome of such missions. We report on the main PRoVisG objectives and the development status: - Past, present and future planetary robotic mission profiles are analysed in terms of existing solutions and requirements for vision processing. - The generic processing chain is based on unified vision sensor descriptions and processing interfaces. Processing components available at the PRoVisG Consortium Partners will be completed by and combined with modules collected within the international computer vision community in the form of Announcements of Opportunity (AOs). - A Web GIS is developed to integrate the processing results obtained with data from planetary surfaces into the global planetary context. - Towards the end of the 39-month project period, PRoVisG will address the public by means of a final robotic field test in representative terrain.
European taxpayers will be able to monitor the imaging and vision processing in a Mars-like environment, thus gaining an insight into the complexity and methods of processing, the potential and decision making of scientific exploitation of such data, and not least the elegance and beauty of the resulting image products and their visualization. - The educational aspect is addressed by two summer schools towards the end of the project, presenting robotic vision to students who are the future providers of European science and technology, inside and outside the space domain.

  18. Adaptive Technology that Provides Access to Computers. DO-IT Program.

    ERIC Educational Resources Information Center

    Washington Univ., Seattle.

    This brochure describes the different types of barriers individuals with mobility impairments, blindness, low vision, hearing impairments, and specific learning disabilities face in providing computer input, interpreting output, and reading documentation. The adaptive hardware and software that has been developed to provide functional alternatives…

  19. Vision 2010: The Future of Higher Education Business and Learning Applications

    ERIC Educational Resources Information Center

    Carey, Patrick; Gleason, Bernard

    2006-01-01

    The global software industry is in the midst of a major evolutionary shift--one based on open computing--and this trend, like many transformative trends in technology, is being led by the IT staffs and academic computing faculty of the higher education industry. The elements of this open computing approach are open source, open standards, open…

  20. Computer vision in roadway transportation systems: a survey

    NASA Astrophysics Data System (ADS)

    Loce, Robert P.; Bernal, Edgar A.; Wu, Wencheng; Bala, Raja

    2013-10-01

    There is a worldwide effort to apply 21st century intelligence to evolving our transportation networks. The goals of smart transportation networks are quite noble and manifold, including safety, efficiency, law enforcement, energy conservation, and emission reduction. Computer vision is playing a key role in this transportation evolution. Video imaging scientists are providing intelligent sensing and processing technologies for a wide variety of applications and services. There are many interesting technical challenges including imaging under a variety of environmental and illumination conditions, data overload, recognition and tracking of objects at high speed, distributed network sensing and processing, energy sources, as well as legal concerns. This paper presents a survey of computer vision techniques related to three key problems in the transportation domain: safety, efficiency, and security and law enforcement. A broad review of the literature is complemented by detailed treatment of a few selected algorithms and systems that the authors believe represent the state-of-the-art.

  1. Computer vision syndrome: A review.

    PubMed

    Gowrisankaran, Sowjanya; Sheedy, James E

    2015-01-01

Computer vision syndrome (CVS) is a collection of symptoms related to prolonged work at a computer display. This article reviews the current knowledge about the symptoms, related factors and treatment modalities for CVS. Relevant literature on CVS published during the past 65 years was analyzed. Symptoms reported by computer users are classified into internal ocular symptoms (strain and ache), external ocular symptoms (dryness, irritation, burning), visual symptoms (blur, double vision) and musculoskeletal symptoms (neck and shoulder pain). The major factors associated with CVS are environmental (improper lighting, display position and viewing distance) and/or dependent on the user's visual abilities (uncorrected refractive error, oculomotor disorders and tear film abnormalities). Although the factors associated with CVS have been identified, the physiological mechanisms that underlie CVS are not completely understood. Additionally, advances in technology have led to the increased use of hand-held devices, which might impose somewhat different visual challenges compared to desktop displays. Further research is required to better understand the physiological mechanisms underlying CVS and the symptoms associated with the use of hand-held and stereoscopic displays.

  2. Data-driven ranch management: A vision for sustainable ranching

    USDA-ARS?s Scientific Manuscript database

    Introduction The 21st century has ushered in an era of tiny, inexpensive electronics with impressive capabilities for sensing the environment. Also emerging are new technologies for communicating data to computer systems where new analytical tools can process the data. Many of these technologies w...

  3. Self-Report Computer-Based Survey of Technology Use by People with Intellectual and Developmental Disabilities

    ERIC Educational Resources Information Center

    Tanis, Emily Shea; Palmer, Susan; Wehmeyer, Michael; Davies, Daniel K.; Stock, Steven E.; Lobb, Kathy; Bishop, Barbara

    2012-01-01

    Advancements of technologies in the areas of mobility, hearing and vision, communication, and daily living for people with intellectual and developmental disabilities has the potential to greatly enhance independence and self-determination. Previous research, however, suggests that there is a technological divide with regard to the use of such…

  4. Development of a Computer Vision Technology for the Forest Products Manufacturing Industry

    Treesearch

    D. Earl Kline; Richard Conners; Philip A. Araman

    1992-01-01

    The goal of this research is to create an automated processing/grading system for hardwood lumber that will be of use to the forest products industry. The objective of creating a full scale machine vision prototype for inspecting hardwood lumber will become a reality in calendar year 1992. Space for the full scale prototype has been created at the Brooks Forest...

  5. A Vision in Aeronautics: The K-12 Wind Tunnel Project

    NASA Technical Reports Server (NTRS)

    1997-01-01

    A Vision in Aeronautics, a project within the NASA Lewis Research Center's Information Infrastructure Technologies and Applications (IITA) K-12 Program, employs small-scale, subsonic wind tunnels to inspire students to explore the world of aeronautics and computers. Recently, two educational K-12 wind tunnels were built in the Cleveland area. During the 1995-1996 school year, preliminary testing occurred in both tunnels.

  6. Education Calls for a New Philosophy.

    ERIC Educational Resources Information Center

    Scheidlinger, Zygmunt

    1999-01-01

    Highlights changes brought on by computers and technological advancement and notes that only those with a vision of the future can direct and participate in the evolution of education. Suggests that virtual reality, simulation, animation and other computer-based features will render traditional class learning futile and that computerized education…

  7. CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Slotnick, Jeffrey; Khodadoust, Abdollah; Alonso, Juan; Darmofal, David; Gropp, William; Lurie, Elizabeth; Mavriplis, Dimitri

    2014-01-01

    This report documents the results of a study to address the long range, strategic planning required by NASA's Revolutionary Computational Aerosciences (RCA) program in the area of computational fluid dynamics (CFD), including future software and hardware requirements for High Performance Computing (HPC). Specifically, the "Vision 2030" CFD study is to provide a knowledge-based forecast of the future computational capabilities required for turbulent, transitional, and reacting flow simulations across a broad Mach number regime, and to lay the foundation for the development of a future framework and/or environment where physics-based, accurate predictions of complex turbulent flows, including flow separation, can be accomplished routinely and efficiently in cooperation with other physics-based simulations to enable multi-physics analysis and design. Specific technical requirements from the aerospace industrial and scientific communities were obtained to determine critical capability gaps, anticipated technical challenges, and impediments to achieving the target CFD capability in 2030. A preliminary development plan and roadmap were created to help focus investments in technology development to help achieve the CFD vision in 2030.

  8. Educational Technology in Voc Ed. Information Series No. 268.

    ERIC Educational Resources Information Center

    Lipson, Joseph I.

    This monograph provides a vision of the future for vocational educators in a position to improve programs, such as teachers and administrators of local educational agencies and state leaders who set priorities in educational agencies. The monograph addresses nationwide technological concerns of the computer, image storage and creation, and…

  9. Computer vision challenges and technologies for agile manufacturing

    NASA Astrophysics Data System (ADS)

    Molley, Perry A.

    1996-02-01

Sandia National Laboratories, a Department of Energy laboratory, is responsible for maintaining the safety, security, reliability, and availability of the nuclear weapons stockpile for the United States. Because of the changing national and global political climates and inevitable budget cuts, Sandia is changing the methods and processes it has traditionally used in the product realization cycle for weapon components. Because of the increasing age of the nuclear stockpile, it is certain that the reliability of these weapons will degrade with time unless eventual action is taken to repair, requalify, or renew them. Furthermore, due to the downsizing of the DOE weapons production sites and loss of technical personnel, the new product realization process is being focused on developing and deploying advanced automation technologies in order to maintain the capability for producing new components. The goal of Sandia's technology development program is to create a product realization environment that is cost effective, has improved quality, and reduces cycle time for small lot sizes. The new environment will rely less on the expertise of humans and more on intelligent systems and automation to perform the production processes. The systems will be robust in order to provide maximum flexibility and responsiveness for rapidly changing component or product mixes. An integrated enterprise will allow ready access to and use of information for effective and efficient product and process design. Concurrent engineering methods will allow a speedup of the product realization cycle, reduce costs, and dramatically lessen the dependency on creating and testing physical prototypes. Virtual manufacturing will allow production processes to be designed, integrated, and programmed off-line before a piece of hardware ever moves. The overriding goal is to be able to build a large variety of new weapons parts on short notice.
Many of these technologies that are being developed are also applicable to commercial production processes and applications. Computer vision will play a critical role in the new agile production environment for automation of processes such as inspection, assembly, welding, material dispensing and other process control tasks. Although there are many academic and commercial solutions that have been developed, none have had widespread adoption considering the huge potential number of applications that could benefit from this technology. The reason for this slow adoption is that the advantages of computer vision for automation can be a double-edged sword. The benefits can be lost if the vision system requires an inordinate amount of time for reprogramming by a skilled operator to account for different parts, changes in lighting conditions, background clutter, changes in optics, etc. Commercially available solutions typically require an operator to manually program the vision system with features used for the recognition. In a recent survey, we asked a number of commercial manufacturers and machine vision companies the question, 'What prevents machine vision systems from being more useful in factories?' The number one (and unanimous) response was that vision systems require too much skill to set up and program to be cost effective.

  10. The use of computer vision in an intelligent environment to support aging-in-place, safety, and independence in the home.

    PubMed

    Mihailidis, Alex; Carmichael, Brent; Boger, Jennifer

    2004-09-01

    This paper discusses the use of computer vision in pervasive healthcare systems, specifically in the design of a sensing agent for an intelligent environment that assists older adults with dementia during an activity of daily living. An overview of the techniques applied in this particular example is provided, along with results from preliminary trials completed using the new sensing agent. A discussion of the results obtained to date is presented, including technical and social issues that remain for the advancement and acceptance of this type of technology within pervasive healthcare.

  11. Computer vision-based technologies and commercial best practices for the advancement of the motion imagery tradecraft

    NASA Astrophysics Data System (ADS)

    Phipps, Marja; Capel, David; Srinivasan, James

    2014-06-01

Motion imagery capabilities within the Department of Defense/Intelligence Community (DoD/IC) have advanced significantly over the last decade, attempting to meet continuously growing data collection, video processing and analytical demands in operationally challenging environments. The motion imagery tradecraft has evolved accordingly, enabling teams of analysts to effectively exploit data and generate intelligence reports across multiple phases in structured Full Motion Video (FMV) Processing Exploitation and Dissemination (PED) cells. Yet now the operational requirements are drastically changing. The exponential growth in motion imagery data continues, but to this the community adds multi-INT data, interoperability with existing and emerging systems, expanded data access, nontraditional users, collaboration, automation, and support for ad hoc configurations beyond the current FMV PED cells. To break from the legacy system lifecycle, we look towards a technology application and commercial adoption model whose course will meet these future Intelligence, Surveillance and Reconnaissance (ISR) challenges. In this paper, we explore the application of cutting-edge computer vision technology to meet existing FMV PED shortfalls and address future capability gaps. For example, real-time georegistration services developed from computer-vision-based feature tracking, multiple-view geometry, and statistical methods allow the fusion of motion imagery with other georeferenced information sources, providing unparalleled situational awareness. We then describe how these motion imagery capabilities may be readily deployed in a dynamically integrated analytical environment, employing an extensible framework, leveraging scalable enterprise-wide infrastructure and following commercial best practices.
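The georegistration services described above build on multiple-view geometry: for a roughly planar scene, a 3x3 homography relates image pixels to georeferenced coordinates. As a hedged illustration only (not the system described in the paper), the classic direct linear transform (DLT) estimate from point correspondences can be sketched in plain NumPy:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via the direct
    linear transform (DLT). src, dst: (N, 2) arrays, N >= 4, not all
    collinear."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H is the null vector of A: the smallest right singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_points(H, pts):
    """Apply H to (N, 2) points with the homogeneous divide."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

With four or more correspondences between tracked image features and map coordinates, `warp_points` then projects any pixel into the georeferenced frame; a production system would add coordinate normalization and robust estimation such as RANSAC.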

  12. The Computer Revolution and Evangelical Mission Research and Strategy: An Historical Overview

    ERIC Educational Resources Information Center

    Jaffarian, Michael

    2014-01-01

    This chapter consists of a short history of the impact of computer technology on Christian evangelical mission research from the 1970's to the present. The trail winds through World Vision, MARC, the "World Christian Encyclopedia," "Operation World," SIL, OC International, DAWN, Global Mapping International, the AD2000 and…

  13. Smoke and Air Resource Management-Peering Through the Haze

    Treesearch

    A. R. Fox Riebau

    1987-01-01

This paper presents a vision of the future rooted in consideration of the past 20 years in the smoke and air resource management field. This future is characterized by rapid technological development of computers for computation, communications, and remote sensing capabilities, and by the possible societal responses to these advances. We discuss intellectual...

  14. Quality Control by Artificial Vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lam, Edmond Y.; Gleason, Shaun Scott; Niel, Kurt S.

    2010-01-01

Computational technology has fundamentally changed many aspects of our lives. One clear piece of evidence is the development of artificial-vision systems, which have effectively automated many manual tasks ranging from quality inspection to quantitative assessment. In many cases, these machine-vision systems are even preferred over manual ones due to their repeatability and high precision. Such advantages come from significant research efforts in advancing sensor technology, illumination, computational hardware, and image-processing algorithms. Similar to the Special Section on Quality Control by Artificial Vision published two years ago in Volume 17, Issue 3 of the Journal of Electronic Imaging, the present one invited papers relevant to fundamental technology improvements that foster quality control by artificial vision, as well as papers that fine-tune the technology for specific applications. We aim to balance both theoretical and applied work pertinent to this special section theme. Consequently, we have seven high-quality papers resulting from the stringent peer-reviewing process in place at the Journal of Electronic Imaging. Some of the papers contain extended treatment of the authors' work presented at the SPIE Image Processing: Machine Vision Applications conference and the International Conference on Quality Control by Artificial Vision. On the broad application side, Liu et al. propose an unsupervised texture image segmentation scheme. Using a multilayer data condensation spectral clustering algorithm together with wavelet transform, they demonstrate the effectiveness of their approach on both texture and synthetic aperture radar images. A problem related to image segmentation is image extraction. For this, O'Leary et al. investigate the theory of polynomial moments and show how these moments can be compared to classical filters.
They also show how to use the discrete polynomial-basis functions for the extraction of 3-D embossed digits, demonstrating superiority over Fourier-basis functions for this task. Image registration is another important task for machine vision. Bingham and Arrowood investigate the implementation and results in applying Fourier phase matching for projection registration, with a particular focus on nondestructive testing using computed tomography. Readers interested in enriching their arsenal of image-processing algorithms for machine-vision tasks should find these papers enriching. Meanwhile, we have four papers dealing with more specific machine-vision tasks. The first one, Yahiaoui et al., is quantitative in nature, using machine vision for real-time passenger counting. Occlusion is a common problem in counting objects and people, and they circumvent this issue with a dense stereovision system, achieving 97 to 99% accuracy in their tests. On the other hand, the second paper by Oswald-Tranta et al. focuses on thermographic crack detection. An infrared camera is used to detect inhomogeneities, which may indicate surface cracks. They describe the various steps in developing fully automated testing equipment aimed at a high throughput. Another paper describing an inspection system is Molleda et al., which handles flatness inspection of rolled products. They employ optical-laser triangulation and 3-D surface reconstruction for this task, showing how these can be achieved in real time. Last but not least, Presles et al. propose a way to monitor the particle-size distribution of batch crystallization processes. This is achieved through a new in situ imaging probe and image-analysis methods. While it is unlikely any reader may be working on these four specific problems at the same time, we are confident that readers will find these papers inspiring and potentially helpful to their own machine-vision system developments.

  15. Assistive Technology for Students with Visual Impairments: In-Service Teacher Training and Its Relationship to Student Access and Usage across Academic Subject Areas

    ERIC Educational Resources Information Center

    Segers, Kathryn S.

    2014-01-01

    Technology is used in almost every school and classroom today and motivates and intrigues students with vision. This same technology is often not accessible to students with visual impairments and blindness. Assistive technology must be used by students with visual impairments and blindness in order to access a computer, the Internet, and print…

  16. Computer Vision Malaria Diagnostic Systems-Progress and Prospects.

    PubMed

    Pollak, Joseph Joel; Houri-Yafin, Arnon; Salpeter, Seth J

    2017-01-01

    Accurate malaria diagnosis is critical to prevent malaria fatalities, curb overuse of antimalarial drugs, and promote appropriate management of other causes of fever. While several diagnostic tests exist, the need for a rapid and highly accurate malaria assay remains. Microscopy and rapid diagnostic tests are the main diagnostic modalities available, yet they can demonstrate poor performance and accuracy. Automated microscopy platforms have the potential to significantly improve and standardize malaria diagnosis. Based on image recognition and machine learning algorithms, these systems maintain the benefits of light microscopy and provide improvements such as quicker scanning time, greater scanning area, and increased consistency brought by automation. While these applications have been in development for over a decade, recently several commercial platforms have emerged. In this review, we discuss the most advanced computer vision malaria diagnostic technologies and investigate several of their features which are central to field use. Additionally, we discuss the technological and policy barriers to implementing these technologies in low-resource settings world-wide.

  17. Mozart to Michelangelo: Software to Hone Your Students' Fine Arts Skills.

    ERIC Educational Resources Information Center

    Smith, Russell

    2000-01-01

    Describes 15 art and music computer software products for classroom use. "Best bets" (mostly secondary level) include Clearvue Inc.'s Art of Seeing, Sunburst Technology's Curious George Paint & Print Studio, Inspiration Software's Inspiration 6.0, Harmonic Vision's Music Ace 2, and Coda Music Technology's PrintMusic! 2000 and SmartMusic Studio.…

  18. 3 Ways that Web-Based Computing Will Change Colleges--And Challenge Them

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    2008-01-01

    Cloud computing, one of the latest technology buzzwords, is so hard to explain that Google drove a bus from campus to campus to walk students through the company's vision of it. After students sat through a demo at computers set up nearby, they boarded the bus and got free T-shirts. The bus only stopped at colleges that had already agreed to hand…

  19. Technology and the Longer View. Proceedings of the Annual National Conference on Technology and Education (1st, Ft. Worth, Texas, April 19-20, 1984).

    ERIC Educational Resources Information Center

    Klier, Betje, Ed.; And Others

    To address the challenges to education posed by the new information technologies, participants from 41 states--publishers, administrators, teachers, subject specialists, and leaders from the field of educational computing--gathered to share ideas and visions. This report of the conference proceedings includes introductory comments by John Roach of…

  20. Technologies for Achieving Field Ubiquitous Computing

    NASA Astrophysics Data System (ADS)

    Nagashima, Akira

    Although the term “ubiquitous” may sound like jargon used in information appliances, ubiquitous computing is an emerging concept in industrial automation. This paper presents the author's visions of field ubiquitous computing, which is based on the novel Internet Protocol IPv6. IPv6-based instrumentation will realize the next generation manufacturing excellence. This paper focuses on the following five key issues: 1. IPv6 standardization; 2. IPv6 interfaces embedded in field devices; 3. Compatibility with FOUNDATION fieldbus; 4. Network securities for field applications; and 5. Wireless technologies to complement IP instrumentation. Furthermore, the principles of digital plant operations and ubiquitous production to support the above key technologies to achieve field ubiquitous systems are discussed.

  1. Vision based assistive technology for people with dementia performing activities of daily living (ADLs): an overview

    NASA Astrophysics Data System (ADS)

    As'ari, M. A.; Sheikh, U. U.

    2012-04-01

The rapid development of intelligent assistive technology for replacing a human caregiver in assisting people with dementia performing activities of daily living (ADLs) promises a reduction in care costs, especially in training and hiring human caregivers. The main problem, however, is that the kind of sensing agent used in such systems depends on the intent (type of ADL) and the environment where the activity is performed. This paper gives an overview of the potential of computer vision-based sensing agents in assistive systems and of how they can be generalized and made invariant to various kinds of ADLs and environments. We find that a gap exists between existing vision-based human action recognition methods and the design of such systems, owing to the cognitive and physical impairments of people with dementia.

  2. Evolution of Embedded Processing for Wide Area Surveillance

    DTIC Science & Technology

    2014-01-01

Subject terms: embedded processing; high-performance computing; general-purpose graphical processing units (GPGPUs). The recoverable abstract fragments describe advancements in intelligence, surveillance, and reconnaissance (ISR) mission capabilities that support and positively affect war fighters' missions, and note that significant improvements in high-performance computing (HPC) technology make these capabilities possible; the report closes with a future vision.

  3. Strategic Vision for Adopting 21st Century Science Methodologies

    EPA Pesticide Factsheets

To better protect human health and the environment, EPA’s OPP is developing and evaluating new technologies in the molecular, cellular, and computational sciences to supplement or replace more traditional methods of toxicity testing and risk assessment.

  4. Third Conference on Artificial Intelligence for Space Applications, part 2

    NASA Technical Reports Server (NTRS)

    Denton, Judith S. (Compiler); Freeman, Michael S. (Compiler); Vereen, Mary (Compiler)

    1988-01-01

    Topics relative to the application of artificial intelligence to space operations are discussed. New technologies for space station automation, design data capture, computer vision, neural nets, automatic programming, and real time applications are discussed.

  5. Computer vision research with new imaging technology

    NASA Astrophysics Data System (ADS)

    Hou, Guangqi; Liu, Fei; Sun, Zhenan

    2015-12-01

Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and the directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods perform well only on strongly textured surfaces; on textureless or low-textured regions the depth map contains numerous holes and large ambiguities. In this paper, we apply light field imaging technology to 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. The depth map is then estimated through the epipolar plane image (EPI) method. Finally, a high-quality 3D face model is recovered via a fusion strategy. We evaluate effectiveness and robustness on face images captured by a light field camera with different poses.
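The EPI method rests on a simple geometric fact: as the viewpoint shifts, a scene point traces a straight line across the epipolar plane image, and the line's slope (disparity per view) is inversely related to depth. A toy sketch with a synthetic single-point EPI, not the authors' pipeline:

```python
import numpy as np

def synthetic_epi(num_views=5, width=30, x0=5, disparity=2):
    """Epipolar-plane image: each row is one 1D view of a single scene
    point that shifts by `disparity` pixels between adjacent views."""
    epi = np.zeros((num_views, width))
    for v in range(num_views):
        epi[v, x0 + disparity * v] = 1.0
    return epi

def estimate_disparity(epi):
    """Fit a line to the point's trajectory across views; its slope is
    the disparity, which is inversely related to scene depth."""
    views = np.arange(epi.shape[0])
    positions = epi.argmax(axis=1)
    slope, _ = np.polyfit(views, positions, 1)
    return slope
```

Real EPIs need subpixel slope estimates (for example via the structure tensor), but the slope-to-depth principle is the same.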

  6. Compact VLSI neural computer integrated with active pixel sensor for real-time ATR applications

    NASA Astrophysics Data System (ADS)

    Fang, Wai-Chi; Udomkesmalee, Gabriel; Alkalai, Leon

    1997-04-01

A compact VLSI neural computer integrated with an active pixel sensor has been under development to mimic what is inherent in biological vision systems. This electronic eye-brain computer is targeted for real-time machine vision applications which require both high-bandwidth communication and high-performance computing for data sensing, synergy of multiple types of sensory information, feature extraction, target detection, target recognition, and control functions. The neural computer is based on a composite structure which combines an Annealing Cellular Neural Network (ACNN) and a Hierarchical Self-Organization Neural Network (HSONN). The ACNN architecture is a programmable and scalable multi-dimensional array of annealing neurons which are locally connected with their neighboring neurons. Meanwhile, the HSONN adopts a hierarchical structure with nonlinear basis functions. The ACNN+HSONN neural computer is effectively designed to perform programmable functions for machine vision processing at all levels with its embedded host processor. It provides a two order-of-magnitude increase in computation power over state-of-the-art microcomputer and DSP microelectronics. The feasibility of a compact current-mode VLSI design of the ACNN+HSONN neural computer is demonstrated by a 3D 16X8X9-cube neural processor chip design in a 2-micrometer CMOS technology. Integration of this neural computer as one slice of a 4'X4' multichip module into the 3D MCM-based avionics architecture for NASA's New Millennium Program is also described.
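A cellular neural network of the kind underlying the ACNN couples each neuron only to a small neighborhood, through a feedback template A and a control template B. A toy discrete-time Euler step of the standard CNN dynamics (the template values, grid size, and step size below are illustrative assumptions, not the chip's parameters):

```python
import numpy as np

def local_corr(img, k):
    """3x3 cross-correlation with zero padding, in plain numpy."""
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def cnn_step(x, u, A, B, z, dt=0.1):
    """One Euler step of CNN dynamics: dx/dt = -x + A*sat(x) + B*u + z,
    where sat is the piecewise-linear saturation and A, B are 3x3
    templates applied over each cell's local neighborhood."""
    y = np.clip(x, -1.0, 1.0)       # saturated neuron output
    return x + dt * (-x + local_corr(y, A) + local_corr(u, B) + z)
```

Annealing variants such as the ACNN additionally schedule a gain or noise term over iterations to escape poor local minima; that schedule is omitted here.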

  7. Journal Writing with Web 2.0 Tools: A Vision for Older Adults

    ERIC Educational Resources Information Center

    Shepherd, Craig E.; Aagard, Steven

    2011-01-01

    This article describes how Web 2.0 technologies may facilitate journaling and related inquiry methods among older adults. Benefits and limitations of journaling are summarized as well as computer skills of older adults. We then describe how Web 2.0 technologies can enhance journaling among older adults by diminishing feelings of isolation,…

  8. A Vision for the Net Generation Media Center. Media Matters

    ERIC Educational Resources Information Center

    Johnson, Doug

    2005-01-01

    Many children today have never lived in a home without a computer. They are the "Net Generation," constantly "connected" by iPod, cell phone, keyboard, digital video camera, or game controller to various technologies. Recent studies have found that Net Genners see technology as "embedded in society," a primary means of connection with friends, and…

  9. Tablet computers versus optical aids to support education and learning in children and young people with low vision: protocol for a pilot randomised controlled trial, CREATE (Children Reading with Electronic Assistance To Educate).

    PubMed

    Crossland, Michael D; Thomas, Rachel; Unwin, Hilary; Bharani, Seelam; Gothwal, Vijaya K; Quartilho, Ana; Bunce, Catey; Dahlmann-Noor, Annegret

    2017-06-21

    Low vision and blindness adversely affect education and independence of children and young people. New 'assistive' technologies such as tablet computers can display text in enlarged font, read text out to the user, allow speech input and conversion into typed text, offer document and spreadsheet processing and give access to wide sources of information such as the internet. Research on these devices in low vision has been limited to case series. We will carry out a pilot randomised controlled trial (RCT) to assess the feasibility of a full RCT of assistive technologies for children/young people with low vision. We will recruit 40 students age 10-18 years in India and the UK, whom we will randomise 1:1 into two parallel groups. The active intervention will be Apple iPads; the control arm will be the local standard low-vision aid care. Primary outcomes will be acceptance/usage, accessibility of the device and trial feasibility measures (time to recruit children, lost to follow-up). Exploratory outcomes will be validated measures of vision-related quality of life for children/young people as well as validated measures of reading and educational outcomes. In addition, we will carry out semistructured interviews with the participants and their teachers. NRES reference 15/NS/0068; dissemination is planned via healthcare and education sector conferences and publications, as well as via patient support organisations. NCT02798848; IRAS ID 179658, UCL reference 15/0570. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
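For intuition, the 1:1 allocation into two parallel arms can be implemented with permuted blocks so that group sizes stay balanced as recruitment proceeds. The sketch below is purely illustrative; the block size of 4 and the arm labels are assumptions, not details taken from the protocol:

```python
import random

def allocate_1_to_1(participant_ids, seed=0):
    """Randomly allocate participants 1:1 to 'iPad' and 'standard' arms
    using permuted blocks of 4 to keep group sizes balanced. If the
    number of participants is not a multiple of 4, the final partial
    block may leave the arms unequal by up to 2."""
    rng = random.Random(seed)
    arms = []
    for _ in range(0, len(participant_ids), 4):
        block = ['iPad', 'iPad', 'standard', 'standard']
        rng.shuffle(block)
        arms.extend(block)
    return dict(zip(participant_ids, arms))
```

With 40 participants the blocks divide evenly, giving exactly 20 per arm; a real trial would also conceal the allocation sequence from recruiters.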

  10. Tablet computers versus optical aids to support education and learning in children and young people with low vision: protocol for a pilot randomised controlled trial, CREATE (Children Reading with Electronic Assistance To Educate)

    PubMed Central

    Crossland, Michael D; Thomas, Rachel; Unwin, Hilary; Bharani, Seelam; Gothwal, Vijaya K; Quartilho, Ana; Bunce, Catey

    2017-01-01

    Introduction Low vision and blindness adversely affect education and independence of children and young people. New ‘assistive’ technologies such as tablet computers can display text in enlarged font, read text out to the user, allow speech input and conversion into typed text, offer document and spreadsheet processing and give access to wide sources of information such as the internet. Research on these devices in low vision has been limited to case series. Methods and analysis We will carry out a pilot randomised controlled trial (RCT) to assess the feasibility of a full RCT of assistive technologies for children/young people with low vision. We will recruit 40 students age 10–18 years in India and the UK, whom we will randomise 1:1 into two parallel groups. The active intervention will be Apple iPads; the control arm will be the local standard low-vision aid care. Primary outcomes will be acceptance/usage, accessibility of the device and trial feasibility measures (time to recruit children, lost to follow-up). Exploratory outcomes will be validated measures of vision-related quality of life for children/young people as well as validated measures of reading and educational outcomes. In addition, we will carry out semistructured interviews with the participants and their teachers. Ethics and dissemination NRES reference 15/NS/0068; dissemination is planned via healthcare and education sector conferences and publications, as well as via patient support organisations. Trial registration number NCT02798848; IRAS ID 179658, UCL reference 15/0570. PMID:28637740

  11. Color line scan camera technology and machine vision: requirements to consider

    NASA Astrophysics Data System (ADS)

    Paernaenen, Pekka H. T.

    1997-08-01

Color machine vision has shown a dynamic uptrend in use within the past few years, as the introduction of new cameras and scanner technologies itself underscores. In the future, the movement from monochrome imaging to color will hasten as machine vision system users demand more knowledge about their product stream. As color has come to machine vision, certain requirements have emerged for the equipment used to digitize color images. Color machine vision needs not only good color separation but also a high dynamic range and a good linear response from the camera used. These features become even more important when the image is converted to another color space, since some information is always lost when converting integer data to another form. Traditionally, color image processing has been much slower than gray-level image processing, owing to the three times greater data amount per image; the same applied to the three times more memory needed. Advancements in computers, memory, and processing units have made it possible to handle even large color images cost-efficiently today. In some cases, image analysis in color images can in fact be easier and faster than with a similar gray-level image, because of the greater information per pixel. Color machine vision sets new requirements for lighting, too: high-intensity, white light is required in order to acquire good images for further image processing or analysis. New developments in lighting technology will eventually bring solutions for color imaging.
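The point about information loss when converting integer image data between color spaces is easy to demonstrate: a linear transform followed by rounding to 8-bit integers is generally not reversible. A minimal sketch using the full-range BT.601 RGB/YCbCr matrices, chosen here purely for illustration:

```python
import numpy as np

# Full-range BT.601 RGB -> YCbCr matrix and channel offsets.
_FWD = np.array([[0.299, 0.587, 0.114],
                 [-0.168736, -0.331264, 0.5],
                 [0.5, -0.418688, -0.081312]])
_OFF = np.array([0.0, 128.0, 128.0])

def rgb_to_ycbcr_u8(rgb):
    """Convert uint8 RGB pixels (N, 3) to uint8 YCbCr, rounding to
    integers; the rounding step is where information is lost."""
    ycc = rgb.astype(float) @ _FWD.T + _OFF
    return np.clip(np.rint(ycc), 0, 255).astype(np.uint8)

def ycbcr_to_rgb_u8(ycc):
    """Inverse transform, again rounded to uint8."""
    rgb = (ycc.astype(float) - _OFF) @ np.linalg.inv(_FWD).T
    return np.clip(np.rint(rgb), 0, 255).astype(np.uint8)
```

For example, the pixel (100, 50, 200) round-trips to (100, 50, 201): the blue channel picks up a one-level error from the two rounding steps, which is why the abstract stresses dynamic range and linearity before any color space conversion.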

  12. Strategic Computing Computer Vision: Taking Image Understanding To The Next Plateau

    NASA Astrophysics Data System (ADS)

    Simpson, R. L., Jr.

    1987-06-01

The overall objective of the Strategic Computing (SC) Program of the Defense Advanced Research Projects Agency (DARPA) is to develop and demonstrate a new generation of machine intelligence technology which can form the basis for more capable military systems in the future and also maintain a position of world leadership for the US in computer technology. Begun in 1983, SC represents a focused research strategy for accelerating the evolution of new technology and its rapid prototyping in realistic military contexts. Among the very ambitious demonstration prototypes being developed within the SC Program are: 1) the Pilot's Associate, which will aid the pilot in route planning, aerial target prioritization, evasion of missile threats, and aircraft emergency safety procedures during flight; 2) two battle management projects: one for the Army, just getting started, called the AirLand Battle Management program (ALBM), which will use knowledge-based systems technology to assist in the generation and evaluation of tactical options and plans at the Corps level; 3) the other, more established program for the Navy, the Fleet Command Center Battle Management Program (FCCBMP) at Pearl Harbor. The FCCBMP is employing knowledge-based systems and natural language technology in an evolutionary testbed situated in an operational command center to demonstrate and evaluate intelligent decision aids which can assist in the evaluation of fleet readiness and explore alternatives during contingencies; and 4) the Autonomous Land Vehicle (ALV), which integrates in a major robotic testbed the technologies for dynamic image understanding, knowledge-based route planning with replanning during execution, hosted on new advanced parallel architectures.
The goal of the Strategic Computing computer vision technology base (SCVision) is to develop generic technology that will enable the construction of complete, robust, high performance image understanding systems to support a wide range of DoD applications. Possible applications include autonomous vehicle navigation, photointerpretation, smart weapons, and robotic manipulation. This paper provides an overview of the technical and program management plans being used in evolving this critical national technology.

  13. Three-Dimensional Images For Robot Vision

    NASA Astrophysics Data System (ADS)

    McFarland, William D.

    1983-12-01

Robots are attracting increased attention amid the industrial productivity crisis. As one significant approach for this nation to maintain technological leadership, the need for robot vision has become critical. The "blind" robot, while occupying an economical niche at present, is severely limited and job specific, being only one step up from numerically controlled machines. To satisfy robot vision requirements, a three-dimensional representation of a real scene must be provided. Several image acquisition techniques are discussed, with emphasis on laser-radar-type instruments. The autonomous vehicle is also discussed as a robot form, and the requirements for these applications are considered. The total computer vision system requirement is reviewed, with some discussion of the major techniques in the literature for three-dimensional scene analysis.

  14. The Study of Learners' Preference for Visual Complexity on Small Screens of Mobile Computers Using Neural Networks

    ERIC Educational Resources Information Center

    Wang, Lan-Ting; Lee, Kun-Chou

    2014-01-01

    Vision plays an important role in educational technologies because it supports and conveys essential functions in teaching and learning. In this paper, learners' preference for visual complexity on the small screens of mobile computers is studied using neural networks. The visual complexity in this study is divided into five…

  15. High-fidelity video and still-image communication based on spectral information: natural vision system and its applications

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Masahiro; Haneishi, Hideaki; Fukuda, Hiroyuki; Kishimoto, Junko; Kanazawa, Hiroshi; Tsuchida, Masaru; Iwama, Ryo; Ohyama, Nagaaki

    2006-01-01

    In addition to the great advances in high-resolution and large-screen imaging technology, color is now receiving considerable attention as an aspect distinct from image resolution. Conventional imaging systems have difficulty reproducing the original color of a subject, and this obstructs applications of visual communication systems in telemedicine, electronic commerce, and digital museums. To break through the limitations of conventional RGB three-primary systems, the "Natural Vision" (NV) project aims at an innovative video and still-image communication technology with high-fidelity color reproduction capability, based on spectral information. This paper summarizes the results of the NV project, including the development of multispectral and multiprimary imaging technologies and experimental investigations of applications to medicine, digital archives, electronic commerce, and computer graphics.

  16. Computer interfaces for the visually impaired

    NASA Technical Reports Server (NTRS)

    Higgins, Gerry

    1991-01-01

    Information access via computer terminals extends to blind and low-vision persons employed in many technical and nontechnical disciplines. Two aspects of providing computer technology for persons with a vision-related handicap are detailed. The first is research into the most effective means of integrating existing adaptive technologies into information systems, conducted to combine off-the-shelf products with adaptive equipment into cohesive, integrated information processing systems. Details are included that describe the type of functionality required in software to facilitate its incorporation into a speech and/or braille system. The second aspect is research into providing audible and tactile access to graphics-based interfaces. Parameters are included for the design and development of the Mercator Project, which will develop a prototype system for audible access to graphics-based interfaces. The system is being built within the public domain architecture of X Windows to show that it is possible to provide access to text-based applications within a graphical environment. This information will be valuable to suppliers of ADP equipment, since new legislation requires manufacturers to provide electronic access for the visually impaired.

  17. Building a robust vehicle detection and classification module

    NASA Astrophysics Data System (ADS)

    Grigoryev, Anton; Khanipov, Timur; Koptelov, Ivan; Bocharov, Dmitry; Postnikov, Vassily; Nikolaev, Dmitry

    2015-12-01

    The growing adoption of intelligent transportation systems (ITS) and autonomous driving requires robust real-time solutions for various event and object detection problems. Most real-world systems still cannot rely on computer vision algorithms alone and employ a range of costly additional hardware such as LIDARs. In this paper we explore engineering challenges encountered in building a highly robust visual vehicle detection and classification module that works under a broad range of environmental and road conditions. The resulting technology is competitive with traditional non-visual means of traffic monitoring. The main focus of the paper is on software and hardware architecture, algorithm selection, and domain-specific heuristics that help the computer vision system avoid implausible answers.
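
    A common first stage in such a module is separating moving vehicles from the static road background before any classification. The numpy-only sketch below (the synthetic frames and thresholds are illustrative stand-ins, not the authors' pipeline) shows a running-average background model with a foreground-area trigger:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average: keeps the background model robust to
    gradual lighting changes, one of the environmental conditions above."""
    return (1 - alpha) * bg + alpha * frame

def vehicle_present(bg, frame, diff_thresh=30, area_thresh=0.01):
    """Flag a frame when the foreground area exceeds a fraction of the image."""
    mask = np.abs(frame - bg) > diff_thresh
    return mask.mean() > area_thresh

# Synthetic grayscale road scene: flat background, then a bright "vehicle".
bg = np.full((120, 160), 90.0)
empty_frame = bg + np.random.default_rng(0).normal(0, 2, bg.shape)
vehicle_frame = empty_frame.copy()
vehicle_frame[40:80, 60:110] = 200.0   # vehicle-sized bright patch

seen_empty = vehicle_present(bg, empty_frame)
seen_vehicle = vehicle_present(bg, vehicle_frame)
bg = update_background(bg, empty_frame)
```

    A production system would layer on the domain-specific plausibility heuristics the paper emphasizes, e.g. rejecting detections of implausible size or position.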

  18. Computer Vision in the Temples of Karnak: Past, Present & Future

    NASA Astrophysics Data System (ADS)

    Tournadre, V.; Labarta, C.; Megard, P.; Garric, A.; Saubestre, E.; Durand, B.

    2017-05-01

    CFEETK, the French-Egyptian Center for the Study of the Temples of Karnak, celebrates the 50th anniversary of its foundation this year. As a multicultural and transdisciplinary research center, it has always been a proving ground for emerging technologies applied to various fields. The rise of automatic computer vision algorithms is an interesting topic, as it allows non-experts to produce high-value results. This article presents the evolution of measurement experiments over the past 50 years, and it describes how cameras are used today. Ultimately, it aims to set the trends of upcoming projects, and it discusses how image processing could contribute further to the study and conservation of cultural heritage.

  19. Pathways of Toxicity

    EPA Science Inventory

    Despite wide-spread consensus on the need to transform toxicology and risk assessment in order to keep pace with technological and computational changes that have revolutionized the life sciences, there remains much work to be done to achieve the vision of toxicology based on a m...

  20. Embedded Systems and TensorFlow Frameworks as Assistive Technology Solutions.

    PubMed

    Mulfari, Davide; Palla, Alessandro; Fanucci, Luca

    2017-01-01

    This paper presents the design of a deep-learning-based wearable computer vision system for visually impaired users. The assistive technology solution exploits a powerful single-board computer and smart glasses with a camera to allow its user to explore the objects in the surrounding environment, and it employs the Google TensorFlow machine learning framework to classify the acquired stills in real time. The proposed aid can thus increase awareness of the explored environment, and it interacts with its user by means of audio messages.
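
    The processing loop of such a wearable aid can be sketched as follows; the linear classifier here is a numpy stand-in for the TensorFlow model, and the label set and confidence threshold are hypothetical, not taken from the paper:

```python
import numpy as np

LABELS = ["chair", "door", "cup"]   # hypothetical label set

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_frame(frame, weights):
    """Stand-in for the TensorFlow classifier: global-average-pool the frame
    into a feature vector, apply a linear layer, take the softmax argmax."""
    feat = frame.reshape(-1, frame.shape[-1]).mean(axis=0)   # (channels,)
    probs = softmax(weights @ feat)
    return LABELS[int(np.argmax(probs))], float(probs.max())

def audio_message(label, conf, threshold=0.5):
    """Only speak confident predictions, mirroring the audio feedback loop."""
    return f"I see a {label}" if conf >= threshold else None

rng = np.random.default_rng(1)
frame = rng.random((224, 224, 3))   # stand-in for a camera still
weights = np.eye(3)                 # stand-in for trained weights
label, conf = classify_frame(frame, weights)
msg = audio_message(label, conf, threshold=0.0)
```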

  1. Computational approaches to vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  2. A Self-Report Computer-Based Survey of Technology Use by People with Intellectual and Developmental Disabilities

    PubMed Central

    Tanis, Emily Shea; Palmer, Susan B.; Wehmeyer, Michael L.; Davies, Danial; Stock, Steven; Lobb, Kathy; Bishop, Barbara

    2014-01-01

    Advancements in technologies in the areas of mobility, hearing and vision, communication, and daily living for people with intellectual and developmental disabilities (IDD) have the potential to greatly enhance independence and self-determination. Previous research, however, suggests that there is a “technological divide” in the use of such technologies by people with IDD when compared with the general public. The present study sought to provide current information on technology use by people with IDD by examining the technology needs, use, and barriers to use experienced by 180 adults with IDD through QuestNet, a self-directed computer survey program. The findings suggest that although there has been progress in technology acquisition and use by people with IDD, these technologies remain underutilized across the population. PMID:22316226

  3. Smartphones as image processing systems for prosthetic vision.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information, as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware while avoiding bulky, straining solutions. Recent publications have reported portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.
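
    The kind of per-frame benchmarking the authors describe can be sketched in a few lines; the box-filter denoiser below is a lightweight numpy stand-in for the much heavier denoising algorithm evaluated in the paper:

```python
import time
import numpy as np

def box_denoise(img, k=3):
    """Simple k-by-k box-filter denoiser via an integral image, so each
    window mean costs O(1). A stand-in for heavier denoising algorithms."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))          # zero row/col for window sums
    h, w = img.shape
    return (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
            - ii[k:k + h, :w] + ii[:h, :w]) / (k * k)

# Benchmark one frame, as one would per-device in a timing study.
rng = np.random.default_rng(0)
noisy = rng.normal(0, 10, (240, 320))          # pure-noise test frame
t0 = time.perf_counter()
den = box_denoise(noisy, k=3)
ms = (time.perf_counter() - t0) * 1e3          # elapsed milliseconds
```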

  4. The Next Wave: Humans, Computers, and Redefining Reality

    NASA Technical Reports Server (NTRS)

    Little, William

    2018-01-01

    The Augmented/Virtual Reality (AVR) Lab at KSC is dedicated to "exploration into the growing computer fields of Extended Reality and the Natural User Interface (it is) a proving ground for new technologies that can be integrated into future NASA projects and programs." The topics of Human Computer Interface, Human Computer Interaction, Augmented Reality, Virtual Reality, and Mixed Reality are defined; examples of work being done in these fields in the AVR Lab are given. Current and future work in Computer Vision, Speech Recognition, and Artificial Intelligence is also outlined.

  5. Dynamic displacement measurement of large-scale structures based on the Lucas-Kanade template tracking algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Jie; Zhu, Chang'an

    2016-01-01

    The development of optics and computer technologies enables the application of vision-based techniques, which use digital cameras, to the displacement measurement of large-scale structures. Compared with traditional contact measurements, the vision-based technique allows remote measurement, is non-intrusive, and does not introduce additional mass. In this study, a high-speed camera system is developed to perform displacement measurement in real time. The system consists of a high-speed camera, which captures images at hundreds of frames per second, and a notebook computer. To process the captured images on the computer, the Lucas-Kanade template tracking algorithm from the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and further improve efficiency. The modified algorithm can accomplish one displacement extraction within 1 ms without requiring any pre-designed target panel to be installed on the structure in advance. The accuracy and efficiency of the system in remote measurement of dynamic displacement are demonstrated in experiments on a motion platform and on a sound barrier on a suspension viaduct. Experimental results show that the proposed algorithm can extract accurate displacement signals and accomplish vibration measurement of large-scale structures.
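
    The core of the Lucas-Kanade idea is Gauss-Newton minimization of the intensity difference between a template and a shifted image window. Below is a minimal numpy sketch of the pure-translation, forward-additive variant (the paper's modified inverse compositional algorithm additionally precomputes the Jacobian for speed); the synthetic image and the (3, 2) pixel shift are illustrative:

```python
import numpy as np

def bilinear(img, X, Y):
    """Sample img at real-valued coordinates (X, Y) by bilinear interpolation."""
    X0 = np.clip(np.floor(X).astype(int), 0, img.shape[1] - 2)
    Y0 = np.clip(np.floor(Y).astype(int), 0, img.shape[0] - 2)
    ax, ay = X - X0, Y - Y0
    return ((1 - ax) * (1 - ay) * img[Y0, X0] + ax * (1 - ay) * img[Y0, X0 + 1]
            + (1 - ax) * ay * img[Y0 + 1, X0] + ax * ay * img[Y0 + 1, X0 + 1])

def lk_translation(img, template, x0, y0, iters=50):
    """Estimate the translation (dx, dy) aligning the window at (x0, y0)
    in img to the template, by Gauss-Newton on the intensity residual."""
    h, w = template.shape
    gy, gx = np.gradient(img)                  # image gradients d/dy, d/dx
    ys, xs = np.mgrid[0:h, 0:w]
    p = np.zeros(2)                            # current estimate (dx, dy)
    for _ in range(iters):
        X, Y = xs + x0 + p[0], ys + y0 + p[1]
        err = (template - bilinear(img, X, Y)).ravel()
        J = np.stack([bilinear(gx, X, Y).ravel(),
                      bilinear(gy, X, Y).ravel()], axis=1)
        dp, *_ = np.linalg.lstsq(J, err, rcond=None)   # Gauss-Newton step
        p += dp
        if np.linalg.norm(dp) < 1e-6:
            break
    return p

# Synthetic smooth scene; the tracked window has moved by (dx, dy) = (3, 2).
yy, xx = np.mgrid[0:100, 0:100]
img = np.exp(-((xx - 50.0) ** 2 + (yy - 50.0) ** 2) / (2 * 15.0 ** 2))
template = img[32:72, 33:73]                   # window at (33, 32)
dx, dy = lk_translation(img, template, x0=30, y0=30)
```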

  6. Help for the Visually Impaired

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Low Vision Enhancement System (LVES) is a video headset that offers people with low vision a view of their surroundings equivalent to the image on a five-foot television screen viewed from four feet away. It will not make the blind see, but for many people with low vision it eases everyday activities such as reading, watching TV and shopping. LVES was developed over almost a decade of cooperation between Stennis Space Center, the Wilmer Eye Institute of the Johns Hopkins Medical Institutions, the Department of Veterans Affairs, and Visionics Corporation. With the aid of Stennis scientists, Wilmer researchers used NASA technology for computer processing of satellite images and head-mounted vision enhancement systems originally intended for the space station. The unit consists of a head-mounted video display, three video cameras, and a control unit for the cameras. The cameras feed images to the video display in the headset.

  7. A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogamous Dual-Core Embedded System Architecture

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS) implemented by a set of embedded software components and modules, and integrates these modules into a component-based system framework on an embedded heterogeneous dual-core platform. To this end, the study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes road-scene frames in front of the host car, captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956

  9. A clinical information systems strategy for a large integrated delivery network.

    PubMed Central

    Kuperman, G. J.; Spurr, C.; Flammini, S.; Bates, D.; Glaser, J.

    2000-01-01

    Integrated delivery networks (IDNs) are an emerging class of health care institutions. IDNs are formed from the affiliation of individual health care institutions and are intended to be more efficient in the current fiscal health care environment. To realize efficiencies and support their strategic visions, IDNs rely critically on excellent information technology (IT). Because of its importance to the mission of the IDN, strategic decisions about IT are made by the top leadership of the IDN. At Partners HealthCare System, a large IDN in Boston, MA, a clinical information systems strategy has been created to support the Partners clinical vision. In this paper, we discuss the Partners' structure, clinical vision, and current IT initiatives in place to address the clinical vision. The initiatives are: a clinical data repository, inpatient process support, electronic medical records, a portal strategy, referral applications, knowledge resources, support for product lines, patient computing, confidentiality, and clinical decision support. We address several of the issues encountered in trying to bring excellent information technology to a large IDN. PMID:11079921

  10. Engineering Social Justice into Traffic Control for Self-Driving Vehicles?

    PubMed

    Mladenovic, Milos N; McPherson, Tristram

    2016-08-01

    The convergence of computing, sensing, and communication technology will soon permit large-scale deployment of self-driving vehicles. This will in turn permit a radical transformation of traffic control technology. This paper makes a case for the importance of addressing questions of social justice in this transformation, and sketches a preliminary framework for doing so. We explain how new forms of traffic control technology have potential implications for several dimensions of social justice, including safety, sustainability, privacy, efficiency, and equal access. Our central focus is on efficiency and equal access as desiderata for traffic control design. We explain the limitations of conventional traffic control in meeting these desiderata, and sketch a preliminary vision for next-generation traffic control tailored to better address the demands of social justice. One component of this vision is cooperative, hierarchically distributed self-organization among vehicles. Another component is a priority system enabling selection of priority levels by the user for each vehicle trip in the network, based on the supporting structure of non-monetary credits.

  11. Comparison of progressive addition lenses for general purpose and for computer vision: an office field study.

    PubMed

    Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique

    2015-05-01

    Two types of progressive addition lenses (PALs) were compared in an office field study: (1) general purpose PALs with continuous clear vision between infinity and near reading distances, and (2) computer vision PALs with a wider zone of clear vision at the monitor and in near vision but no clear distance vision. Twenty-three presbyopic participants wore each type of lens for two weeks in a double-masked four-week quasi-experimental procedure that included an adaptation phase (Weeks 1 and 2) and a test phase (Weeks 3 and 4). Questionnaires on visual and musculoskeletal conditions, as well as preferences regarding the type of lenses, were administered. After eight more weeks of free use of the spectacles, the preferences were assessed again. The ergonomic conditions were analysed from photographs. Head inclination when looking at the monitor was significantly lower, by 2.3 degrees, with the computer vision PALs than with the general purpose PALs. Vision at the monitor was judged significantly better with computer vision PALs, while distance vision was judged better with general purpose PALs; however, the reported advantage of computer vision PALs differed in extent between participants. Accordingly, 61 per cent of the participants preferred the computer vision PALs when asked without information about lens design. After full information about lens characteristics and an additional eight weeks of free spectacle use, 44 per cent preferred the computer vision PALs. On average, computer vision PALs were rated significantly better with respect to vision at the monitor during the experimental part of the study. In the final forced-choice ratings, approximately half of the participants preferred either the computer vision PAL or the general purpose PAL. Individual factors seem to play a role in this preference and in the rated advantage of computer vision PALs. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.

  12. An Operationally Based Vision Assessment Simulator for Domes

    NASA Technical Reports Server (NTRS)

    Archdeacon, John; Gaska, James; Timoner, Samson

    2012-01-01

    The Operational Based Vision Assessment (OBVA) simulator was designed and built by NASA and the United States Air Force (USAF) to provide the Air Force School of Aerospace Medicine (USAFSAM) with a scientific testing laboratory to study human vision and testing standards in an operationally relevant environment. This paper describes the general design objectives and implementation characteristics of the simulator visual system created to meet these requirements. A key design objective for the OBVA research simulator is to develop a real-time computer image generator (IG) and display subsystem that can display and update at 120 frames per second (design target), or at a minimum 60 frames per second, with minimal transport delay, using commercial off-the-shelf (COTS) technology. Three key parts of the OBVA simulator are described in this paper: i) the real-time computer image generator, ii) the various COTS technology used to construct the simulator, and iii) the spherical dome display and real-time distortion correction subsystem. We describe the various issues, possible COTS solutions, and remaining problem areas identified by NASA and the USAF while designing and building the simulator for future vision research. We also describe the critically important relationship of the physical display components, including distortion correction for the dome, consistent with the objective of minimizing latency in the system. The performance of the automatic calibration system used in the dome is also described. Various recommendations for possible future implementations are also discussed.

  13. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    PubMed

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.
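
    One concrete consequence of such a deep hierarchy is the rapid growth of receptive-field size with depth, loosely analogous to the progression from V1 to IT. The standard bookkeeping for a stack of (kernel, stride) layers can be sketched as follows (the two architectures are illustrative, not drawn from the paper):

```python
def receptive_field(layers):
    """Receptive-field size of a stack of (kernel, stride) layers:
    each layer adds (kernel - 1) times the product of earlier strides."""
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
    return rf

shallow = [(3, 1)] * 2           # a "flat", two-level architecture
deep = [(3, 2), (3, 1)] * 5      # a ten-level hierarchy with subsampling
```

    With 3x3 kernels throughout, the flat stack sees only a 5-pixel neighborhood, while the ten-level hierarchy integrates over a 187-pixel extent, mirroring how higher cortical areas pool over ever larger portions of the visual field.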

  14. Evaluation of tablet computers for visual function assessment.

    PubMed

    Bodduluri, Lakshmi; Boon, Mei Ying; Dain, Stephen J

    2017-04-01

    Recent advances in technology and the increased use of tablet computers for mobile health applications, such as vision testing, necessitate an understanding of the behavior of the displays of such devices, to facilitate the reproduction of existing vision assessment tests and the development of new ones. The purpose of this study was to investigate the physical characteristics of one model of tablet computer (iPad mini Retina display) with regard to display consistency across a set of 15 devices and their potential application as clinical vision assessment tools. Once the tablet computer was switched on, it required about 13 min to reach luminance stability, while chromaticity remained constant. The luminance output of the device remained stable down to a battery level of 5%. Luminance varied from the center to peripheral locations of the display and with viewing angle, whereas chromaticity did not. A minimal (1%) variation in luminance was observed due to temperature, and once again chromaticity remained constant. These devices also showed good temporal stability of luminance and chromaticity. All 15 tablet computers showed gamma functions approximating the standard gamma (2.20) and similar color gamut sizes, except for the blue primary, which displayed minimal variations. The physical characteristics across the 15 devices were similar and are known, thereby facilitating the use of this model of tablet computer as a visual stimulus display.
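
    The gamma measurement reported above can be reproduced by fitting luminance readings against gray level in log-log space. The readings below are simulated from the standard 2.2 response; the peak luminance and noise level are illustrative assumptions, not the study's data:

```python
import numpy as np

# Hypothetical photometer readings (cd/m^2) at a ramp of gray levels,
# generated from L = Lmax * (d/255)^2.2 plus slight measurement noise.
levels = np.array([32, 64, 96, 128, 160, 192, 224, 255], dtype=float)
l_max = 420.0                                  # assumed peak luminance
rng = np.random.default_rng(0)
luminance = l_max * (levels / 255.0) ** 2.2 * (1 + rng.normal(0, 0.002, levels.size))

# In log-log space the model is linear:  log L = gamma * log(d/255) + log Lmax,
# so the slope of a least-squares line fit is the display gamma.
slope, intercept = np.polyfit(np.log(levels / 255.0), np.log(luminance), 1)
gamma = slope
```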

  15. Remote sensing of vegetation structure using computer vision

    NASA Astrophysics Data System (ADS)

    Dandois, Jonathan P.

    High-spatial resolution measurements of vegetation structure are needed for improving understanding of ecosystem carbon, water and nutrient dynamics, the response of ecosystems to a changing climate, and for biodiversity mapping and conservation, among many research areas. Our ability to make such measurements has been greatly enhanced by continuing developments in remote sensing technology, allowing researchers to measure numerous forest traits at varying spatial and temporal scales and over large spatial extents with minimal to no field work, which is costly over large areas and logistically difficult in some locations. Despite these advances, several research challenges remain related to the methods by which three-dimensional (3D) and spectral datasets are joined (remote sensing fusion) and to the availability and portability of systems for frequent data collections at small-scale sampling locations. Recent advances in computer vision structure from motion (SFM) and consumer unmanned aerial systems (UAS) offer the potential to address these challenges by enabling repeatable measurements of vegetation structural and spectral traits at the scale of individual trees. However, the potential advances offered by computer vision remote sensing also present unique challenges and questions that need to be addressed before this approach can be used to improve understanding of forest ecosystems. For computer vision remote sensing to be a valuable tool for studying forests, bounding information about the characteristics of the data produced by the system will help researchers understand and interpret results in the context of the forest being studied and of other remote sensing techniques. This research advances understanding of how forest canopy and tree 3D structure and color are accurately measured by a relatively low-cost and portable computer vision personal remote sensing system: 'Ecosynth'.
Recommendations are made for optimal conditions under which forest structure measurements should be obtained with UAS-SFM remote sensing. Ultimately remote sensing of vegetation by computer vision offers the potential to provide an 'ecologist's eye view', capturing not only canopy 3D and spectral properties, but also seeing the trees in the forest and the leaves on the trees.
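
    At the heart of the SFM reconstruction used by such systems is the epipolar constraint between two views. The numpy sketch below builds an essential matrix from an assumed camera motion and verifies the constraint on synthetic points; all numbers are illustrative, not drawn from the Ecosynth system:

```python
import numpy as np

def skew(t):
    """Cross-product matrix, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Assumed motion between two UAS photos: a small yaw plus a sideways step.
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([1.0, 0.0, 0.1])
E = skew(t) @ R                      # essential matrix E = [t]x R

# Random scene points in front of camera 1, projected into both views
# in normalized image coordinates; x2^T E x1 should vanish for all of them.
rng = np.random.default_rng(0)
pts = rng.uniform([-2.0, -2.0, 4.0], [2.0, 2.0, 10.0], size=(50, 3))
residuals = []
for Xw in pts:
    x1 = Xw / Xw[2]                  # view 1 (camera frame = world frame)
    Xc2 = R @ Xw + t                 # same point in camera 2's frame
    x2 = Xc2 / Xc2[2]                # view 2
    residuals.append(abs(x2 @ E @ x1))
max_residual = max(residuals)
```

    SFM pipelines run this logic in reverse: estimate E from matched image features across many photos, then recover camera poses and triangulate the 3D point cloud of the canopy.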

  16. Exploring Techniques for Vision Based Human Activity Recognition: Methods, Systems, and Evaluation

    PubMed Central

    Xu, Xin; Tang, Jinshan; Zhang, Xiaolong; Liu, Xiaoming; Zhang, Hong; Qiu, Yimin

    2013-01-01

    With the wide application of vision-based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activities, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation of the performance of human activity recognition. PMID:23353144

  17. Perceptual organization in computer vision - A review and a proposal for a classificatory structure

    NASA Technical Reports Server (NTRS)

    Sarkar, Sudeep; Boyer, Kim L.

    1993-01-01

    The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for high-order organisms and, analogically, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored here from four vantage points. First, a brief history of perceptual organization research in both human and computer vision is offered. Second, a classificatory structure in which to cast perceptual organization research is proposed, to clarify both the nomenclature and the relationships among the many contributions. Third, the perceptual organization work in computer vision is reviewed in the context of this classificatory structure. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.

  18. Using advanced computer vision algorithms on small mobile robots

    NASA Astrophysics Data System (ADS)

    Kogut, G.; Birchmore, F.; Biagtan Pacis, E.; Everett, H. R.

    2006-05-01

    The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: 1) object classification using a boosted cascade of classifiers trained with the AdaBoost algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an AdaBoost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real time. While working towards a solution to increase the robustness of this system to perform generic object recognition, this paper demonstrates an extension to this application by detecting soda cans in a cluttered indoor environment. The human presence detection from a moving platform system uses a data fusion algorithm which combines results from a scanning laser and a thermal imager. The system is able to detect the presence of humans while both the humans and the robot are moving simultaneously. In both systems, the two aforementioned algorithms were implemented on embedded hardware and optimized for use in real time. Test results are shown for a variety of environments.
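
    The AdaBoost principle behind such boosted classifiers can be shown on a toy problem. Below is a tiny numpy implementation of discrete AdaBoost over decision stumps; the 1-D interval data stands in for the Haar-like image features a real detector would use, so no single stump can solve it:

```python
import numpy as np

def train_adaboost(X, y, rounds=15):
    """Discrete AdaBoost over decision stumps on a 1-D feature: each round
    picks the stump with the smallest weighted error, then up-weights the
    examples that stump got wrong."""
    n = len(X)
    w = np.full(n, 1.0 / n)
    stumps = []                                # (threshold, polarity, alpha)
    for _ in range(rounds):
        best = None
        for thr in np.unique(X):
            for pol in (1, -1):
                pred = pol * np.where(X > thr, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        pred = pol * np.where(X > thr, 1, -1)
        w = w * np.exp(-alpha * y * pred)      # re-weight mistakes upward
        w /= w.sum()
        stumps.append((thr, pol, alpha))
    return stumps

def predict(stumps, X):
    score = sum(alpha * pol * np.where(X > thr, 1, -1) for thr, pol, alpha in stumps)
    return np.where(score >= 0, 1, -1)

# Toy feature: positives live inside an interval, negatives outside.
X = np.linspace(0.0, 1.0, 200)
y = np.where((X > 0.3) & (X < 0.7), 1, -1)
model = train_adaboost(X, y, rounds=15)
accuracy = float((predict(model, X) == y).mean())
```

    A detection cascade chains several such boosted classifiers so that the cheap early stages reject most non-object windows immediately.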

  19. Impact of information and communication technology on child health.

    PubMed

    Woo, Eugenia Hc; White, Peter; Lai, Christopher Wk

    2016-06-01

    This article provides a general framework for understanding the use of information and communication technology in education and discusses the impact of computer usage on students' health and development. Potential beneficial and harmful effects of computer use by children are discussed. Early epidemiological and laboratory studies indicate that children are at similar or greater risk than adults of developing musculoskeletal and vision problems, and that musculoskeletal and visual health problems developed in childhood are likely to persist into adulthood. This article therefore aims to reflect on the deficits of existing policy and to recommend child-specific guidelines for computer use. © 2016 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).

  20. How the Air Force Should Stay Engaged in Computer Vision Technology Development

    DTIC Science & Technology

    2007-04-01

    [Abstract not available; the indexed excerpt consists of citation fragments, including Paddy Comyn, "Sensing Forward to a Driverless Future," The Irish Times, 21 February 2007, and Military Embedded Systems (2006).]

  1. Improved stereo matching applied to digitization of greenhouse plants

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Xu, Lihong; Li, Dawei; Gu, Xiaomeng

    2015-03-01

    The digitization of greenhouse plants is an important aspect of digital agriculture. Its ultimate aim is to reconstruct a visible and interoperable virtual plant model on the computer by using state-of-the-art image process and computer graphics technologies. The most prominent difficulties of the digitization of greenhouse plants include how to acquire the three-dimensional shape data of greenhouse plants and how to carry out its realistic stereo reconstruction. Concerning these issues an effective method for the digitization of greenhouse plants is proposed by using a binocular stereo vision system in this paper. Stereo vision is a technique aiming at inferring depth information from two or more cameras; it consists of four parts: calibration of the cameras, stereo rectification, search of stereo correspondence and triangulation. Through the final triangulation procedure, the 3D point cloud of the plant can be achieved. The proposed stereo vision system can facilitate further segmentation of plant organs such as stems and leaves; moreover, it can provide reliable digital samples for the visualization of greenhouse tomato plants.
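    The triangulation step at the end of the pipeline has a particularly simple closed form for a rectified pair with identical intrinsics: depth follows from similar triangles as Z = fB/d, where d is the disparity between matched pixels. The sketch below assumes that idealized setup; the focal length, baseline, and principal point in the usage example are invented values, not calibration data from the paper.

```python
def triangulate(u_left, v, u_right, f, baseline, cx, cy):
    """Back-project a matched pixel pair from a rectified stereo rig.

    Assumes both matches lie on the same scanline (rectified images) and
    identical camera intrinsics; pixel units throughout, baseline in metres.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    z = f * baseline / disparity   # depth from similar triangles
    x = (u_left - cx) * z / f      # back-project through the left camera
    y = (v - cy) * z / f
    return (x, y, z)
```

    For example, with a 700-pixel focal length, a 0.12 m baseline, and principal point (320, 240), a match at (400, 250) in the left image and (380, 250) in the right gives a disparity of 20 pixels and a depth of 4.2 m.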

  2. Parallel Algorithms for Computer Vision.

    DTIC Science & Technology

    1989-01-01

    [Abstract not available; the indexed excerpt consists of OCR fragments referencing IEEE Transactions on Pattern Analysis and Machine Intelligence; Kirkpatrick, Gelatt, and Vecchi; and T. Poggio, Massachusetts Institute of Technology Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, Massachusetts (report ETL-0529, January 1989).]

  3. Computer and visual display terminals (VDT) vision syndrome (CVDTS).

    PubMed

    Parihar, J K S; Jain, Vaibhav Kumar; Chaturvedi, Piyush; Kaushik, Jaya; Jain, Gunjan; Parihar, Ashwini K S

    2016-07-01

    Computer and visual display terminals have become an essential part of the modern lifestyle. The use of these devices has simplified our lives at home as well as in offices. However, prolonged use of these devices is not without complications. Computer and visual display terminals syndrome is a constellation of ocular and extraocular symptoms associated with prolonged use of visual display terminals. The syndrome is gaining importance in the modern era because of the widespread use of these technologies in day-to-day life. It is associated with asthenopic symptoms, visual blurring, dry eyes, musculoskeletal symptoms such as neck pain, back pain, and shoulder pain, carpal tunnel syndrome, psychosocial factors, venous thromboembolism, shoulder tendonitis, and elbow epicondylitis. Proper identification of symptoms and causative factors is necessary for accurate diagnosis and management. This article focuses on the various aspects of computer and visual display terminals syndrome described in the previous literature. Further research is needed for a better understanding of its complex pathophysiology and management.

  4. Vision 20/20: Single photon counting x-ray detectors in medical imaging

    PubMed Central

    Taguchi, Katsuyuki; Iwanczyk, Jan S.

    2013-01-01

    Photon counting detectors (PCDs) with energy discrimination capabilities have been developed for medical x-ray computed tomography (CT) and x-ray (XR) imaging. Using detection mechanisms that are completely different from the current energy integrating detectors and measuring the material information of the object to be imaged, these PCDs have the potential not only to improve the current CT and XR images, such as dose reduction, but also to open revolutionary novel applications such as molecular CT and XR imaging. The performance of PCDs is not flawless, however, and it seems extremely challenging to develop PCDs with close to ideal characteristics. In this paper, the authors offer our vision for the future of PCD-CT and PCD-XR with the review of the current status and the prediction of (1) detector technologies, (2) imaging technologies, (3) system technologies, and (4) potential clinical benefits with PCDs. PMID:24089889

  5. Digital Libraries--Methods and Applications

    ERIC Educational Resources Information Center

    Huang, Kuo Hung, Ed.

    2011-01-01

    A digital library is commonly seen as a type of information retrieval system that stores and accesses digital content remotely via computer networks. However, the vision of digital libraries is not limited to technology or management; it extends to user experience. This book is an attempt to share the practical experiences of solutions to the operation of…

  6. Assistive technology and home modification for people with neurovisual deficits.

    PubMed

    Copolillo, Al; Ivanoff, Synneve Dahlin

    2011-01-01

    People with neurovisual deficits from acquired brain injuries and other neurological disabilities can benefit from the array of assistive technologies and home modifications available to the larger vision impairment population, especially when symptoms are mild and associated neurological conditions are few. Optics, proper lighting, and magnification to increase the perceived size of both objects and reading material and to improve contrast sensitivity have been shown to be beneficial. Innovative technologies, universally designed for safe independent living and community participation, are gradually developing and show promise for addressing the needs of this population. This article highlights technologies that may be useful for people with neurovisual deficits and describes the evidence to support their training and use. The use of various types of eyewear to reduce falls; prisms and telescopic lenses to improve visual attention and minimize the impact of visual field deficits; and technologies to improve computer use, wayfinding, and home safety are discussed. While there remains substantial need for further research and development focusing on the needs of people with vision impairments from neurological conditions, practitioners can use technology with caution to improve functional outcomes.

  7. Computer vision syndrome: a review.

    PubMed

    Blehm, Clayton; Vishnu, Seema; Khattak, Ashbala; Mitra, Shrabanee; Yee, Richard W

    2005-01-01

    As computers become part of our everyday lives, more and more people are experiencing a variety of ocular symptoms related to computer use. These include eyestrain, tired eyes, irritation, redness, blurred vision, and double vision, collectively referred to as computer vision syndrome. This article describes both the characteristics and the treatment modalities that are available at this time. Computer vision syndrome symptoms may have ocular (ocular-surface abnormalities or accommodative spasm) and/or extraocular (ergonomic) etiologies. However, the major contributor to computer vision syndrome symptoms by far appears to be dry eye. The visual effects of various display characteristics such as lighting, glare, display quality, refresh rates, and radiation are also discussed. Treatment requires a multidirectional approach combining ocular therapy with adjustment of the workstation. Proper lighting, anti-glare filters, ergonomic positioning of the computer monitor, and regular work breaks may help improve visual comfort. Lubricating eye drops and special computer glasses help relieve ocular surface-related symptoms. More work needs to be done to specifically define the processes that cause computer vision syndrome and to develop and improve effective treatments that successfully address these causes.

  8. The High-Performance Computing and Communications program, the national information infrastructure and health care.

    PubMed Central

    Lindberg, D A; Humphreys, B L

    1995-01-01

    The High-Performance Computing and Communications (HPCC) program is a multiagency federal effort to advance the state of computing and communications and to provide the technologic platform on which the National Information Infrastructure (NII) can be built. The HPCC program supports the development of high-speed computers, high-speed telecommunications, related software and algorithms, education and training, and information infrastructure technology and applications. The vision of the NII is to extend access to high-performance computing and communications to virtually every U.S. citizen so that the technology can be used to improve the civil infrastructure, lifelong learning, energy management, health care, etc. Development of the NII will require resolution of complex economic and social issues, including information privacy. Health-related applications supported under the HPCC program and NII initiatives include connection of health care institutions to the Internet; enhanced access to gene sequence data; the "Visible Human" Project; and test-bed projects in telemedicine, electronic patient records, shared informatics tool development, and image systems. PMID:7614116

  9. Near real-time, on-the-move software PED using VPEF

    NASA Astrophysics Data System (ADS)

    Green, Kevin; Geyer, Chris; Burnette, Chris; Agarwal, Sanjeev; Swett, Bruce; Phan, Chung; Deterline, Diane

    2015-05-01

    The scope of the Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System (MOVERS) development effort, managed by the Night Vision and Electronic Sensors Directorate (NVESD), is to develop, integrate, and demonstrate new sensor technologies and algorithms that improve improvised explosive device (IED)/mine detection through efficient and effective exploitation and fusion of sensor data and target cues from existing and future Route Clearance Package (RCP) sensor systems. Unfortunately, the majority of forward-looking Full Motion Video (FMV) and computer vision processing, exploitation, and dissemination (PED) algorithms are developed using proprietary, incompatible software, which makes the insertion of new algorithms difficult due to the lack of standardized processing chains. To overcome these limitations, EOIR developed the Government off-the-shelf (GOTS) Video Processing and Exploitation Framework (VPEF), which provides standardized interfaces (e.g., input/output video formats, sensor metadata, and detected objects) for exploitation software and allows computer vision algorithms to be rapidly integrated and tested. EOIR developed a vehicle-based computing framework within the MOVERS and integrated it with VPEF. VPEF was further enhanced for automated processing, detection, and publishing of detections in near real-time, thus improving the efficiency and effectiveness of RCP sensor systems.

  10. Performance of computer vision in vivo flow cytometry with low fluorescence contrast

    NASA Astrophysics Data System (ADS)

    Markovic, Stacey; Li, Siyuan; Niedre, Mark

    2015-03-01

    Detection and enumeration of circulating cells in the bloodstream of small animals are important in many areas of preclinical biomedical research, including cancer metastasis, immunology, and reproductive medicine. Optical in vivo flow cytometry (IVFC) represents a class of technologies that allow noninvasive and continuous enumeration of circulating cells without drawing blood samples. We recently developed a technique termed computer vision in vivo flow cytometry (CV-IVFC) that uses a high-sensitivity fluorescence camera and an automated computer vision algorithm to interrogate relatively large circulating blood volumes in the ear of a mouse, detecting circulating cells at concentrations as low as 20 cells/mL. In the present work, we characterized the performance of CV-IVFC under low-contrast imaging conditions: (1) weak fluorescent cell labeling, simulated with cell-simulating fluorescent microspheres of varying brightness, and (2) high background tissue autofluorescence, simulated by varying the autofluorescence properties of optical phantoms. Our analysis indicates that CV-IVFC can robustly track and enumerate circulating cells with at least 50% sensitivity even with contrast degraded by two orders of magnitude relative to our previous in vivo work. These results support the significant potential utility of CV-IVFC in a wide range of in vivo biological models.

  11. Neo-Symbiosis: The Next Stage in the Evolution of Human Information Interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffith, Douglas; Greitzer, Frank L.

    The purpose of this paper is to re-address the vision of human-computer symbiosis as originally expressed by J.C.R. Licklider nearly a half-century ago. We describe this vision, place it in some historical context relating to the evolution of human factors research, and observe that the field is now in the process of re-invigorating Licklider's vision. We briefly assess the state of the technology within the context of contemporary theory and practice, and we describe what we regard as this emerging field of neo-symbiosis. We offer some initial thoughts on requirements to define the functionality of neo-symbiotic systems and discuss research challenges associated with their development and evaluation.

  12. A timely reminder about stimulus display times and other presentation parameters on CRTs and newer technologies.

    PubMed

    Bauer, Ben

    2015-09-01

    Scientific experimentation requires specification and control of independent variables with accurate measurement of dependent variables. In Vision Sciences (here broadly including experimental psychology, cognitive neuroscience, psychophysics, and clinical vision), proper specification and control of stimulus rendering (already a thorny issue) may become more problematic as several newer display technologies replace cathode ray tubes (CRTs) in the lab. The present paper alerts researchers to spatiotemporal differences in display technologies and how these might affect various types of experiments. Parallels are drawn to similar challenges and solutions that arose during the change from cabinet-style tachistoscopes to computer-driven CRT tachistoscopes. Technical papers outlining various strengths and limitations of several classes of display devices are introduced as a resource for the reader wanting to select appropriate displays for different presentation requirements. These papers emphasise the need to measure rather than assume display characteristics because manufacturers' specifications and software reports/settings may not correspond with actual performance. This is consistent with the call by several Vision Science and Psychological Science bodies to increase replications and increase detail in Method sections. Finally, several recent tachistoscope-based experiments, which focused on the same question but were implemented with different technologies, are compared for illustrative purposes. (c) 2015 APA, all rights reserved.

  13. Heliophysics Data Environment: What's next? (Invited)

    NASA Astrophysics Data System (ADS)

    Martens, P.

    2010-12-01

    In the last two decades the Heliophysics community has witnessed the societal recognition of the importance of space weather and space climate for our technology and ecology, resulting in a renewed priority for, and investment in, Heliophysics. As a result, together with the explosive development of information technology, Heliophysics has experienced an exponential growth in the amount and variety of data acquired, as well as in the easy electronic storage and distribution of these data. The Heliophysics community has responded well to these challenges. The first, most obvious, and most needed response was the development of Virtual Heliophysics Observatories. While the VxOs of Heliophysics still need a lot of work with respect to the expansion of search options and interoperability, I believe the basic structures and functionalities have been established and that they meet the needs of the community. In the future we'll see a refinement, completion, and integration of VxOs, not a fundamentally different approach -- in my opinion. The challenge posed by the huge increase in the amount of data is not met by VxOs alone. No individual scientist or group, even with the assistance of tons of graduate students, can analyze the torrent of data currently coming down from the fleet of heliospheric observatories. Once more, information technology provides an opportunity: automated feature recognition of solar imagery is feasible, has been implemented in a number of instances, and is strongly supported by NASA. For example, the SDO Feature Finding Team is developing a suite of 16 feature recognition modules for SDO imagery that operates in near-real time, produces space-weather warnings, and populates on-line event catalogs.
    Automated feature recognition -- "computer vision" -- not only saves enormous amounts of time in the analysis of events, it also allows for a shift from the analysis of single events to that of sets of features and events -- the latter being by far the most important implication of computer vision. Consider some specific examples of the possibilities here. From the on-line SDO metadata, a user can produce with a few lines of IDL commands information that previously would have taken years to compile, e.g.:
    - Draw a butterfly diagram for Active Regions,
    - Find all filaments that coincide with sigmoids and correlate the automatically detected sigmoid handedness with filament chirality,
    - Correlate EUV jets with small-scale flux emergence in coronal holes only,
    - Draw PIL maps with regions of high shear and large magnetic field gradients overlaid, to pinpoint potential flaring regions, then correlate with actual flare occurrence.
    I emphasize that access to those metadata will be provided by VxOs, and that the interplay between computer vision codes and data will be facilitated by VxOs. My vision for the near and medium term is for the VxOs to provide a simple and seamless interface between data, cataloged metadata, and computer vision software, either existing or newly developed by the user. Heliospheric virtual observatories and computer vision systems will work together to constantly monitor the Sun, provide space weather warnings, populate catalogs of metadata, analyze trends, and produce real-time on-line imagery of current events.

  14. Quaternions in computer vision and robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pervin, E.; Webb, J.A.

    1982-01-01

    Computer vision and robotics suffer from not having good tools for manipulating three-dimensional objects. Vectors, coordinate geometry, and trigonometry all have deficiencies. Quaternions can be used to solve many of these problems. Many properties of quaternions that are relevant to computer vision and robotics are developed. Examples are given showing how quaternions can be used to simplify derivations in computer vision and robotics.
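    The simplification quaternions offer can be made concrete with the standard rotation formula v' = q v q*, where a unit quaternion encodes an axis and an angle. This is a generic sketch of textbook quaternion algebra, not code from the report:

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(v, axis, angle):
    """Rotate 3-vector v about a unit axis by angle (radians) via q v q*."""
    half = angle / 2.0
    s = math.sin(half)
    q = (math.cos(half), axis[0]*s, axis[1]*s, axis[2]*s)
    q_conj = (q[0], -q[1], -q[2], -q[3])   # conjugate = inverse for unit q
    w, x, y, z = qmul(qmul(q, (0.0,) + tuple(v)), q_conj)
    return (x, y, z)
```

    Rotating (1, 0, 0) about the z-axis by 90 degrees yields (0, 1, 0), with none of the coordinate-geometry or trigonometric bookkeeping the abstract complains about.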

  15. Computer Vision Techniques for Transcatheter Intervention

    PubMed Central

    Zhao, Feng; Roach, Matthew

    2015-01-01

    Minimally invasive transcatheter technologies have demonstrated substantial promise for the diagnosis and the treatment of cardiovascular diseases. For example, transcatheter aortic valve implantation is an alternative to aortic valve replacement for the treatment of severe aortic stenosis, and transcatheter atrial fibrillation ablation is widely used for the treatment and the cure of atrial fibrillation. In addition, catheter-based intravascular ultrasound and optical coherence tomography imaging of coronary arteries provides important information about the coronary lumen, wall, and plaque characteristics. Qualitative and quantitative analysis of these cross-sectional image data will be beneficial to the evaluation and the treatment of coronary artery diseases such as atherosclerosis. In all the phases (preoperative, intraoperative, and postoperative) during the transcatheter intervention procedure, computer vision techniques (e.g., image segmentation and motion tracking) have been largely applied in the field to accomplish tasks like annulus measurement, valve selection, catheter placement control, and vessel centerline extraction. This provides beneficial guidance for the clinicians in surgical planning, disease diagnosis, and treatment assessment. In this paper, we present a systematic review of these state-of-the-art methods. We aim to give a comprehensive overview for researchers in the area of computer vision on the subject of transcatheter intervention. Research in medical computing is by nature multidisciplinary, and hence it is important to understand the application domain, clinical background, and imaging modality, so that methods and quantitative measurements derived from analyzing the imaging data are appropriate and meaningful. We thus provide an overview on the background information of the transcatheter intervention procedures, as well as a review of the computer vision techniques and methodologies applied in this area. PMID:27170893

  16. Benchmarking neuromorphic vision: lessons learnt from computer vision

    PubMed Central

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision. PMID:26528120

  17. Report on Technology Horizons: A Vision for Air Force Science and Technology During 2010-2030. Volume 1

    DTIC Science & Technology

    2010-05-15

    [Abstract not available; the indexed excerpt consists of disjoint fragments mentioning wired and fiber-optic computer networks spanning the air and space domains, dual-flow-path engine designs offering high volumetric efficiency and cruise speed, and emerging "third-stream engine architectures" for reduced fuel consumption.]

  18. Flexible Wing Base Micro Aerial Vehicles: Vision-Guided Flight Stability and Autonomy for Micro Air Vehicles

    NASA Technical Reports Server (NTRS)

    Ettinger, Scott M.; Nechyba, Michael C.; Ifju, Peter G.; Wazak, Martin

    2002-01-01

    Substantial progress has been made recently toward designing, building, and test-flying remotely piloted Micro Air Vehicles (MAVs). We seek to complement this progress in overcoming the aerodynamic obstacles to flight at very small scales with a vision-based stability and autonomy system. The developed system is based on a robust horizon detection algorithm, which we discuss in greater detail in a companion paper. In this paper, we first motivate the use of computer vision for MAV autonomy, arguing that given current sensor technology, vision may be the only practical approach to the problem. We then briefly review our statistical vision-based horizon detection algorithm, which has been demonstrated at 30 Hz with over 99.9% correct horizon identification. Next, we develop robust schemes for the detection of extreme MAV attitudes, where no horizon is visible, and for the detection of horizon estimation errors due to external factors such as video transmission noise. Finally, we discuss our feedback controller for self-stabilized flight and report results on vision-guided autonomous flights of duration exceeding ten minutes.
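    The sky/ground separation idea behind such horizon detection can be caricatured in one dimension: choose the row split that minimizes the combined intensity variance of the two regions. This is a deliberately simplified sketch for intuition only; the actual algorithm searches over arbitrary horizon lines (bank and pitch) and uses richer color statistics.

```python
def find_horizon_row(img):
    """Return the row index that best splits img into sky and ground.

    img is a list of rows of grayscale values. The score is the size-weighted
    intra-class variance of the two regions, minimized over all row splits.
    """
    def variance(vals):
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals) / len(vals)

    best_row, best_score = None, float("inf")
    for r in range(1, len(img)):
        sky = [p for row in img[:r] for p in row]
        ground = [p for row in img[r:] for p in row]
        score = len(sky) * variance(sky) + len(ground) * variance(ground)
        if score < best_score:
            best_row, best_score = r, score
    return best_row
```

    On a synthetic frame of uniformly bright rows over uniformly dark rows, the minimum falls exactly at the brightness boundary.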

  19. Vision 2040: A Roadmap for Integrated, Multiscale Modeling and Simulation of Materials and Systems

    NASA Technical Reports Server (NTRS)

    Liu, Xuan; Furrer, David; Kosters, Jared; Holmes, Jack

    2018-01-01

    Over the last few decades, advances in high-performance computing, new materials characterization methods, and, more recently, an emphasis on integrated computational materials engineering (ICME) and additive manufacturing have been a catalyst for multiscale modeling and simulation-based design of materials and structures in the aerospace industry. While these advances have driven significant progress in the development of aerospace components and systems, that progress has been limited by persistent technology and infrastructure challenges that must be overcome to realize the full potential of integrated materials and systems design and simulation modeling throughout the supply chain. As a result, NASA's Transformational Tools and Technology (TTT) Project sponsored a study (performed by a diverse team led by Pratt & Whitney) to define the potential 25-year future state required for integrated multiscale modeling of materials and systems (e.g., load-bearing structures) to accelerate the pace and reduce the expense of innovation in future aerospace and aeronautical systems. This report describes the findings of this 2040 Vision study (e.g., the 2040 vision state; the required interdependent core technical work areas, or Key Elements (KEs); identified gaps and actions to close those gaps; and major recommendations). It constitutes a community consensus document, as it is the result of input from over 450 professionals obtained via: 1) four society workshops (AIAA, NAFEMS, and two TMS), 2) a community-wide survey, and 3) the establishment of 9 expert panels (one per KE), each consisting on average of 10 non-team members from academia, government, and industry, to review and update content and prioritize gaps and actions.
The study envisions the development of a cyber-physical-social ecosystem comprised of experimentally verified and validated computational models, tools, and techniques, along with the associated digital tapestry, that impacts the entire supply chain to enable cost-effective, rapid, and revolutionary design of fit-for-purpose materials, components, and systems. Although the vision focused on aeronautics and space applications, it is believed that other engineering communities (e.g., automotive, biomedical, etc.) can benefit as well from the proposed framework with only minor modifications. Finally, it is TTT's hope and desire that this vision provides the strategic guidance to both public and private research and development decision makers to make the proposed 2040 vision state a reality and thereby provide a significant advancement in the United States global competitiveness.

  20. Synthetic Vision for Lunar and Planetary Landing Vehicles

    NASA Technical Reports Server (NTRS)

    Williams, Steven P.; Arthur, Jarvis (Trey) J., III; Shelton, Kevin J.; Prinzel, Lawrence J., III; Norman, R. Michael

    2008-01-01

    The Crew Vehicle Interface (CVI) group of the Integrated Intelligent Flight Deck Technologies (IIFDT) has done extensive research in the area of Synthetic Vision (SV), and has shown that SV technology can substantially enhance flight crew situation awareness, reduce pilot workload, promote flight path control precision, and improve aviation safety. SV technology is being extended to evaluate its utility for lunar and planetary exploration vehicles. SV may hold significant potential for many lunar and planetary missions since the SV presentation provides a computer-generated view of the terrain and other significant environment characteristics independent of the outside visibility conditions, window locations, or vehicle attributes. SV allows unconstrained control of the computer-generated scene lighting, terrain coloring, and virtual camera angles, which may provide invaluable visual cues to pilots/astronauts. In addition, important vehicle state information, such as forward and down velocities, altitude, and fuel remaining, may be conformally displayed on the view to enhance trajectory control and awareness of vehicle system status. This paper discusses preliminary SV concepts for tactical and strategic displays for a lunar landing vehicle. The technical challenges and potential solutions to SV applications for the lunar landing mission are explored, including the requirements for high-resolution lunar terrain maps and an accurate position and orientation of the vehicle that is essential in providing lunar Synthetic Vision System (SVS) cockpit displays. The paper also discusses the technical challenge of creating an accurate synthetic terrain portrayal using an ellipsoid lunar digital elevation model, which eliminates projection errors and can be efficiently rendered in real time.

  1. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    NASA Astrophysics Data System (ADS)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.
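    The role of the genetic algorithm in this kind of calibration, searching a parameter space to minimize an error measure, can be sketched generically. The cost function, bounds, and operator settings below are invented for illustration and bear no relation to the paper's actual parameterization.

```python
import random

def genetic_minimize(cost, lo, hi, pop_size=30, generations=60, seed=1):
    """Minimize a 1-D cost over [lo, hi] with a toy genetic algorithm:
    tournament selection, blend crossover, and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    sigma = 0.01 * (hi - lo)                        # mutation scale
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            a = min(rng.sample(pop, 3), key=cost)   # tournament parent 1
            b = min(rng.sample(pop, 3), key=cost)   # tournament parent 2
            w = rng.random()
            child = w * a + (1.0 - w) * b           # blend crossover
            child += rng.gauss(0.0, sigma)          # Gaussian mutation
            nxt.append(min(max(child, lo), hi))     # clamp to bounds
        pop = nxt
    return min(pop, key=cost)
```

    For instance, recovering a hypothetical focal-length-like parameter whose true value is 700 by minimizing a squared-error cost converges to the neighborhood of 700 within a few dozen generations.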

  2. Multidisciplinary unmanned technology teammate (MUTT)

    NASA Astrophysics Data System (ADS)

    Uzunovic, Nenad; Schneider, Anne; Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark

    2013-01-01

    The U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) held an autonomous robot competition called CANINE in June 2012. The goal of the competition was to develop innovative and natural control methods for robots. This paper describes the winning technology, including the vision system, the operator interaction, and the autonomous mobility. The rules stated that only gestures or voice commands could be used for control. The robots would learn a new object at the start of each phase, find the object after it was thrown into a field, and return the object to the operator. Each of the six phases became more difficult, introducing clutter of the same color or shape as the object, moving and stationary obstacles, and finding the operator after moving from the starting location to a new location. The Robotic Research Team integrated techniques in computer vision, speech recognition, object manipulation, and autonomous navigation. A multi-filter computer vision solution reliably detected the objects while rejecting objects of similar color or shape, even while the robot was in motion. A speech-based interface with short commands provided close-to-natural communication of complicated commands from the operator to the robot. An innovative gripper design allowed for efficient object pickup. A robust autonomous mobility and navigation solution for ground robotic platforms provided fast and reliable obstacle avoidance and course navigation. The research approach focused on winning the competition while remaining cognizant of and relevant to real-world applications.
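    The "multi-filter" detection idea, a color gate followed by size and shape gates that reject same-colored clutter, can be sketched roughly as below. The thresholds and the synthetic frames are hypothetical, not the team's actual pipeline:

```python
import numpy as np

def detect_object(rgb, min_fill=0.6, max_aspect=2.0):
    """Chain of simple filters: color gate, then size and shape gates.
    Returns the bounding box (r0, r1, c0, c1) or None."""
    r, g, b = rgb[..., 0].astype(int), rgb[..., 1].astype(int), rgb[..., 2].astype(int)
    # Filter 1: color -- keep only strongly red pixels
    mask = (r > 150) & (r - g > 60) & (r - b > 60)
    if mask.sum() < 20:                      # Filter 2: minimum size
        return None
    rows, cols = np.nonzero(mask)
    r0, r1 = int(rows.min()), int(rows.max()) + 1
    c0, c1 = int(cols.min()), int(cols.max()) + 1
    h, w = r1 - r0, c1 - c0
    aspect = max(h, w) / max(1, min(h, w))
    fill = mask.sum() / (h * w)              # compact blobs fill their box
    # Filter 3: shape -- reject elongated or sparse responses (clutter)
    if aspect > max_aspect or fill < min_fill:
        return None
    return (r0, r1, c0, c1)

# Synthetic frames: a 20x20 red ball, and a thin red line of the same color
target = np.zeros((100, 100, 3), np.uint8)
target[40:60, 40:60] = (200, 30, 30)
clutter = np.zeros((100, 100, 3), np.uint8)
clutter[10, 5:95] = (200, 30, 30)
print(detect_object(target))                 # box around the ball
print(detect_object(clutter))                # None: fails the shape gate
```

    The point of chaining filters is that clutter matching the color still fails a later gate; the real system would add temporal consistency while the robot moves.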

  3. A cost-effective intelligent robotic system with dual-arm dexterous coordination and real-time vision

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Chen, Alexander Y. K.

    1991-01-01

    Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in the development of advanced manufacturing technology. A cost-effective approach for achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve product quality, increase the reliability of the manufacturing process, and augment production procedures for optimizing the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demand and has the potential to address complex jobs as well as highly labor-intensive tasks. The ARS prototype employs the decomposed optimization technique in spatial planning. This technique is implemented within the framework of the sensor-actuator network to establish a general-purpose geometric reasoning system. The development computer system is a multiple-microcomputer network system, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystems provides real-time, vision-based image processing capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of the local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning. The ARS currently has 18 degrees of freedom, comprising two articulated arms, a movable robot head with two charge-coupled device (CCD) cameras for producing stereoscopic views, an articulated cylindrical-type lower body, and an optional mobile base. A functional prototype is demonstrated.

  4. Sensory Interactive Teleoperator Robotic Grasping

    NASA Technical Reports Server (NTRS)

    Alark, Keli; Lumia, Ron

    1997-01-01

    As the technological world strives for efficiency, the need for economical equipment that increases operator proficiency in minimal time is fundamental. This system links a CCD camera, a controller, and a robotic arm to a computer vision system to provide an alternative method of image analysis. The machine vision system employed possesses software tools for acquiring and analyzing images received through a CCD camera. After feature extraction is performed on the object in the image, information about the object's location, orientation, and distance from the robotic gripper is sent to the robot controller so that the robot can manipulate the object.

  5. Global Information Infrastructure: The Birth, Vision, and Architecture.

    ERIC Educational Resources Information Center

    Targowski, Andrew S.

    A new world has arrived in which computer and communications technologies will transform the national and global economies into information-driven economies. This is triggering the Information Revolution, which will have political and societal impacts every bit as profound as those of the Industrial Revolution. The 21st century is viewed as one…

  6. Gender DiVisions Across Technology Advertisements and the WWW: Implications for Educational Equity.

    ERIC Educational Resources Information Center

    Knupfer, Nancy Nelson

    1998-01-01

    Examines images and patterns of gender stereotypes within mediated and electronic advertisements that reach students online or when viewing computer software and educational television and questions decisions made in the construction of these images. The paper explains the importance of teachers, parents, and the community working together to…

  7. Real-Time 3D Visualization

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  8. On the performances of computer vision algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.

    2012-01-01

    Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we consider different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performance of the involved mobile platforms: Nokia N900, LG Optimus One, and Samsung Galaxy SII.
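    A benchmark of this kind reduces to timing a vision kernel across image resolutions. As an illustrative stand-in (not the paper's benchmark code), here is a plain-NumPy Harris corner response, a classic keypoint-extraction kernel, timed at several sizes:

```python
import time
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def harris_response(img, k=0.05):
    """Harris corner response for a grayscale float image."""
    Iy, Ix = np.gradient(img)                # image gradients (rows, cols)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    w = np.ones((3, 3)) / 9.0                # 3x3 averaging window
    def box(a):
        v = sliding_window_view(np.pad(a, 1), (3, 3))
        return (v * w).sum(axis=(2, 3))
    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy              # det and trace of the structure tensor
    trace = Sxx + Syy
    return det - k * trace * trace

rng = np.random.default_rng(0)
for side in (64, 128, 256):                  # stand-ins for device resolutions
    img = rng.random((side, side))
    t0 = time.perf_counter()
    R = harris_response(img)
    ms = (time.perf_counter() - t0) * 1e3
    print(f"{side}x{side}: {ms:.1f} ms")
```

    On an actual handset one would time the platform's native kernels instead, but the shape of the experiment, same kernel, varying resolution, wall-clock timing, is the same.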

  9. Driving into the future: how imaging technology is shaping the future of cars

    NASA Astrophysics Data System (ADS)

    Zhang, Buyue

    2015-03-01

    Fueled by the development of advanced driver assistance system (ADAS), autonomous vehicles, and the proliferation of cameras and sensors, automotive is becoming a rich new domain for innovations in imaging technology. This paper presents an overview of ADAS, the important imaging and computer vision problems to solve for automotive, and examples of how some of these problems are solved, through which we highlight the challenges and opportunities in the automotive imaging space.

  10. Book4All: A Tool to Make an e-Book More Accessible to Students with Vision/Visual-Impairments

    NASA Astrophysics Data System (ADS)

    Calabrò, Antonello; Contini, Elia; Leporini, Barbara

    Empowering people who are blind or otherwise visually impaired includes ensuring that products and electronic materials incorporate a broad range of accessibility features and work well with screen readers and other assistive technology devices. This is particularly important for students with vision impairments. Unfortunately, authors and publishers often do not apply specific accessibility criteria when preparing content. Consequently, e-books can be inadequate for blind and low-vision users, especially students. In this paper we describe a semi-automatic tool developed to support operators who adapt e-documents for visually impaired students. The proposed tool can be used to convert a PDF e-book into a more accessible and usable format readable on desktop computers or mobile devices.

  11. A computer vision-based approach for structural displacement measurement

    NASA Astrophysics Data System (ADS)

    Ji, Yunfeng

    2010-04-01

    Along with the incessant advancement of optics, electronics, and computer technologies during the last three decades, commercial digital video cameras have experienced a remarkable evolution and can now be employed to measure complex motions of objects with sufficient accuracy, greatly assisting structural displacement measurement in civil engineering. This paper proposes a computer vision-based approach for dynamic displacement measurement of structures. A single digital camera is used to capture image sequences of planar targets mounted on vibrating structures. The mathematical relationship between the image plane and real space is established based on computer vision theory, and the structural dynamic displacement at the target locations is then quantified using point reconstruction rules. Compared with traditional displacement measurement methods using sensors such as accelerometers, linear variable differential transformers (LVDTs), and the Global Positioning System (GPS), the proposed approach offers the main advantages of great flexibility, a non-contact working mode, and ease of increasing the number of measurement points. For validation, four tests are performed: sinusoidal motion of a point, free vibration of a cantilever beam, a wind tunnel test of a bridge cross-section model, and a field test of bridge displacement measurement. Results show that the proposed approach attains excellent accuracy compared with analytical results or measurements from conventional transducers, and delivers an innovative, low-cost solution to structural displacement measurement.
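    For a planar target, the mathematical relationship between image plane and real space can be sketched as a homography estimated with the Direct Linear Transform (DLT). The marker layout and pixel coordinates below are invented for illustration, not taken from the paper:

```python
import numpy as np

def homography(src, dst):
    """DLT: 3x3 H with dst ~ H @ src (homogeneous), from >= 4 planar
    point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)                 # null vector of A
    return H / H[2, 2]

def apply_h(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Hypothetical planar target: four corner markers, 100 mm apart, and their
# measured pixel coordinates in one video frame
plane_mm = [(0, 0), (100, 0), (100, 100), (0, 100)]
pixels = [(210, 80), (420, 95), (405, 310), (195, 290)]
H = homography(pixels, plane_mm)             # image plane -> structure plane

# A tracked target point moves by two pixels between frames; map both
# positions to the plane to get physical displacement in millimetres:
d = apply_h(H, (310, 200)) - apply_h(H, (312, 200))
print(np.linalg.norm(d))
```

    With four exact correspondences the homography is recovered exactly; in practice more markers and a least-squares fit would absorb pixel noise.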

  12. Machine vision 1992-1996: technology program to promote research and its utilization in industry

    NASA Astrophysics Data System (ADS)

    Soini, Antti J.

    1994-10-01

    Machine vision technology has attracted strong interest in Finnish research organizations, resulting in many innovative products for industry. Despite this, end users were very skeptical about machine vision and its robustness in harsh industrial environments. Therefore the Technology Development Centre, TEKES, which funds technology-related research and development projects in universities and individual companies, decided to start a national technology program, Machine Vision 1992-1996. Led by industry, the program boosts research in machine vision technology and seeks to put the research results to work in practical industrial applications. The emphasis is on nationally important, demanding applications. The program will create new industry and business for machine vision producers and encourage the process and manufacturing industries to take advantage of this new technology. So far, 60 companies and all major universities and research centers are working on forty different projects. The key themes are process control, robot vision, and quality control.

  13. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    NASA Astrophysics Data System (ADS)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor.
Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from multiple-purpose modules. In the last part of the thesis a well known optimization method (the Broyden-Fletcher-Goldfarb-Shanno memoryless quasi-Newton method) is applied to simple classification problems and shown to be superior to the "error back-propagation" algorithm for numerical stability, automatic selection of parameters, and convergence properties.
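    The coarse-to-fine mechanics of multiscale processing can be sketched for the simplest motion model, a global translation: a wide search at the coarsest pyramid level, then a ±1-pixel refinement at each finer level. This is a toy version of the idea, not the thesis's algorithm:

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (one pyramid level)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def best_shift(a, b, search):
    """Integer (dy, dx) minimizing the SSD of b against roll(a, (dy, dx))."""
    best, arg = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            err = np.mean((b - np.roll(a, (dy, dx), axis=(0, 1))) ** 2)
            if err < best:
                best, arg = err, (dy, dx)
    return arg

def coarse_to_fine_shift(a, b, levels=2):
    """Estimate the translation of b relative to a hierarchically: a wide
    search at the coarsest level, then +/-1 refinement at each finer level,
    which keeps the total search cost small."""
    pyr = [(a, b)]
    for _ in range(levels - 1):
        a, b = downsample(a), downsample(b)
        pyr.append((a, b))
    dy = dx = 0
    for i, (la, lb) in enumerate(reversed(pyr)):
        dy, dx = dy * 2, dx * 2              # propagate estimate to finer level
        rdy, rdx = best_shift(np.roll(la, (dy, dx), axis=(0, 1)), lb,
                              search=4 if i == 0 else 1)
        dy, dx = dy + rdy, dx + rdx
    return dy, dx

rng = np.random.default_rng(1)
base = rng.random((64, 64))
moved = np.roll(base, (6, -8), axis=(0, 1))  # known ground-truth motion
print(coarse_to_fine_shift(base, moved))     # -> (6, -8)
```

    The same propagate-then-refine loop underlies hierarchical optical-flow solvers, where the per-level update is a differential estimate rather than an exhaustive search.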

  14. Electronic health records (EHRs): supporting ASCO's vision of cancer care.

    PubMed

    Yu, Peter; Artz, David; Warner, Jeremy

    2014-01-01

    ASCO's vision for cancer care in 2030 is built on the expanding importance of panomics and big data, and envisions enabling better health for patients with cancer by the rapid transformation of systems biology knowledge into cancer care advances. This vision will be heavily dependent on the use of health information technology for computational biology and clinical decision support systems (CDSS). Computational biology will allow us to construct models of cancer biology that encompass the complexity of cancer panomics data and provide us with better understanding of the mechanisms governing cancer behavior. The Agency for Healthcare Research and Quality promotes CDSS based on clinical practice guidelines, which are knowledge bases that grow too slowly to match the rate of panomic-derived knowledge. CDSS that are based on systems biology models will be more easily adaptable to rapid advancements and translational medicine. We describe the characteristics of health data representation, a model for representing molecular data that supports data extraction and use for panomic-based clinical research, and argue for CDSS that are based on systems biology and are algorithm-based.

  15. Gesture therapy: a vision-based system for upper extremity stroke rehabilitation.

    PubMed

    Sucar, L; Luis, Roger; Leder, Ron; Hernandez, Jorge; Sanchez, Israel

    2010-01-01

    Stroke is the main cause of motor and cognitive disabilities requiring therapy in the world. It is therefore important to develop rehabilitation technology that allows individuals who have suffered a stroke to practice intensive movement training without the expense of an always-present therapist. We have developed a low-cost, vision-based system that allows stroke survivors to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a virtual environment for facilitating repetitive movement training with computer vision algorithms that track the hand of a patient, using an inexpensive camera and a personal computer. This system, called Gesture Therapy, includes a gripper with a pressure sensor to incorporate hand and finger rehabilitation, and it tracks the head of the patient to detect and avoid trunk compensation. It has been evaluated in a controlled clinical trial at the National Institute for Neurology and Neurosurgery in Mexico City, comparing it with conventional occupational therapy. In this paper we describe the latest version of the Gesture Therapy system and summarize the results of the clinical trial.

  16. Classification of Normal and Pathological Gait in Young Children Based on Foot Pressure Data.

    PubMed

    Guo, Guodong; Guffey, Keegan; Chen, Wenbin; Pergami, Paola

    2017-01-01

    Human gait recognition, an active research topic in computer vision, is generally based on data obtained from images/videos. We applied computer vision technology to classify pathology-related changes in gait in young children using a foot-pressure database collected using the GAITRite walkway system. As foot positioning changes with children's development, we also investigated the possibility of age estimation based on this data. Our results demonstrate that the data collected by the GAITRite system can be used for normal/pathological gait classification. Combining age information and normal/pathological gait classification increases the accuracy of the classifier. This novel approach could support the development of an accurate, real-time, and economic measure of gait abnormalities in children, able to provide important feedback to clinicians regarding the effect of rehabilitation interventions, and to support targeted treatment modifications.
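    As a toy illustration of gait classification from tabular foot-pressure-style features, with age included as a feature as the abstract suggests, here is a minimal nearest-centroid baseline on synthetic data. The features, distributions, and classifier choice are hypothetical, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(7)

def make_subjects(n, pathological):
    """Synthetic stand-ins for walkway-derived features:
    [age (months), step length (cm), cadence (steps/min), pressure asymmetry]."""
    age = rng.uniform(24, 96, n)
    step = 0.35 * age + rng.normal(0, 3, n) - (8 if pathological else 0)
    cadence = rng.normal(150 if pathological else 140, 8, n)
    asym = rng.normal(0.25 if pathological else 0.08, 0.05, n)
    return np.column_stack([age, step, cadence, asym])

X = np.vstack([make_subjects(60, False), make_subjects(60, True)])
y = np.array([0] * 60 + [1] * 60)            # 0 = normal, 1 = pathological

# Nearest-centroid classifier on z-scored features: assign each subject to
# the closer class mean in feature space
mu, sd = X.mean(0), X.std(0)
Z = (X - mu) / sd
centroids = np.stack([Z[y == c].mean(0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(-1), axis=1)
acc = (pred == y).mean()
print(f"training accuracy: {acc:.2f}")
```

    Including age matters here because step length grows with age; without it, age-driven variance masks part of the class difference, which mirrors the abstract's observation that age information improves classification.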

  17. Enabling the environmentally clean air transportation of the future: a vision of computational fluid dynamics in 2030

    PubMed Central

    Slotnick, Jeffrey P.; Khodadoust, Abdollah; Alonso, Juan J.; Darmofal, David L.; Gropp, William D.; Lurie, Elizabeth A.; Mavriplis, Dimitri J.; Venkatakrishnan, Venkat

    2014-01-01

    As global air travel expands rapidly to meet demand generated by economic growth, it is essential to continue to improve the efficiency of air transportation to reduce its carbon emissions and address concerns about climate change. Future transports must be ‘cleaner’ and designed to include technologies that will continue to lower engine emissions and reduce community noise. The use of computational fluid dynamics (CFD) will be critical to enable the design of these new concepts. In general, the ability to simulate aerodynamic and reactive flows using CFD has progressed rapidly during the past several decades and has fundamentally changed the aerospace design process. Advanced simulation capabilities not only enable reductions in ground-based and flight-testing requirements, but also provide added physical insight, and enable superior designs at reduced cost and risk. In spite of considerable success, reliable use of CFD has remained confined to a small region of the operating envelope due, in part, to the inability of current methods to reliably predict turbulent, separated flows. Fortunately, the advent of much more powerful computing platforms provides an opportunity to overcome a number of these challenges. This paper summarizes the findings and recommendations from a recent NASA-funded study that provides a vision for CFD in the year 2030, including an assessment of critical technology gaps and needed development, and identifies the key CFD technology advancements that will enable the design and development of much cleaner aircraft in the future. PMID:25024413

  18. 3D Medical Collaboration Technology to Enhance Emergency Healthcare

    PubMed Central

    Welch, Greg; Sonnenwald, Diane H; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Söderholm, Hanna M.; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Ampalam, Manoj; Krishnan, Srinivas; Noel, Vincent; Noland, Michael; Manning, James E.

    2009-01-01

    Two-dimensional (2D) videoconferencing has been explored widely in the past 15–20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals’ viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare. PMID:19521951

  20. Research and implementation of monitoring technology for illegal external connections of classified computers

    NASA Astrophysics Data System (ADS)

    Zhang, Hong

    2017-06-01

    In recent years, with the continuous development and application of network technology, network security has gradually entered the public's field of vision. Unauthorized external connections from hosts on an internal network are an important source of network security threats. At present, most organizations pay a certain degree of attention to network security and have adopted many means and methods to prevent security problems, such as physically isolating the internal network and installing a firewall at the exit point. However, these measures are often undermined by human behavior that does not comply with the safety rules. For example, a host that accesses the Internet wirelessly, or through a second network card, inadvertently forms a two-way connection between the external network and the internal computer [1]. As a result, important or confidential documents may leak even without the user's awareness. Monitoring technology for illegal external connections of classified computers can largely prevent such violations by monitoring the behavior of the offending connection. In this paper, we research and discuss this monitoring technology for classified computers.

  1. Implementation of Protocols To Enable Doctoral Training in Physical and Computational Chemistry of a Blind Graduate Student.

    PubMed

    Minkara, Mona S; Weaver, Michael N; Gorske, Jim; Bowers, Clifford R; Merz, Kenneth M

    2015-08-11

    Blind and low-vision students are sparsely represented in science, technology, engineering, and mathematics (STEM) fields. This is due in part to these individuals being discouraged from pursuing STEM degrees, as well as to a lack of appropriate adaptive resources in upper-level STEM courses and research. Mona Minkara is a rising fifth-year graduate student in computational chemistry at the University of Florida. She is also blind. This account presents efforts conducted by an expansive team of university and student personnel in conjunction with Mona to adapt different portions of the graduate curriculum to meet her needs. The most important consideration is prior preparation of materials to assist with coursework and cumulative exams. Herein we present an account of the first four years of Mona's graduate experience, in the hope that it will assist in the development of protocols for future blind and low-vision graduate students in computational chemistry.

  2. Image Processing Occupancy Sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The Image Processing Occupancy Sensor, or IPOS, is a novel sensor technology developed at the National Renewable Energy Laboratory (NREL). The sensor is based on the low-cost embedded microprocessors widely used by the smartphone industry and leverages mature open-source computer vision software libraries. Compared to the traditional passive infrared (PIR) and ultrasonic motion sensors currently used for occupancy detection, IPOS has shown the potential for improved accuracy and a richer set of feedback signals for occupant-optimized lighting, daylighting, temperature setback, ventilation control, and other occupancy- and location-based uses. Unlike PIR or ultrasonic occupancy sensors, which infer occupancy based only on motion, IPOS uses digital image-based analysis to detect and classify various aspects of occupancy, including the presence of occupants regardless of motion, their number, location, and activity levels, as well as the illuminance properties of the monitored space. The IPOS software leverages the recent availability of low-cost embedded computing platforms, computer vision software libraries, and camera elements.
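    The advantage over motion-only sensing can be sketched with two image-difference cues: frame-to-frame motion and deviation from an empty-room reference, so a motionless occupant still registers. This is an illustrative simplification, not the IPOS implementation:

```python
import numpy as np

def occupied(prev, curr, background, diff_thresh=25, area_thresh=50):
    """Two cues from grayscale frames: (1) frame-to-frame motion, as a PIR
    sensor would infer, and (2) difference from an empty-room background
    image, so an occupant who stops moving is still detected."""
    motion = np.abs(curr.astype(int) - prev.astype(int)) > diff_thresh
    presence = np.abs(curr.astype(int) - background.astype(int)) > diff_thresh
    return bool(motion.sum() > area_thresh or presence.sum() > area_thresh)

empty = np.full((120, 160), 90, dtype=np.uint8)      # empty-room reference
still = empty.copy()
still[40:80, 60:90] = 180                            # occupant, not moving
print(occupied(still, still, empty))                 # True: presence cue fires
print(occupied(empty, empty, empty))                 # False
```

    A deployed sensor would additionally maintain the background model over time and could report occupant count and location from the mask's connected regions.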

  3. A light-stimulated synaptic device based on graphene hybrid phototransistor

    NASA Astrophysics Data System (ADS)

    Qin, Shuchao; Wang, Fengqiu; Liu, Yujie; Wan, Qing; Wang, Xinran; Xu, Yongbing; Shi, Yi; Wang, Xiaomu; Zhang, Rong

    2017-09-01

    Neuromorphic chips refer to an unconventional computing architecture that is modelled on biological brains. They are increasingly employed for processing sensory data for machine vision, context cognition, and decision making. Despite rapid advances, neuromorphic computing has remained largely an electronic technology, making it a challenge to access the superior computing features provided by photons, or to directly process vision data that has increasing importance to artificial intelligence. Here we report a novel light-stimulated synaptic device based on a graphene-carbon nanotube hybrid phototransistor. Significantly, the device can respond to optical stimuli in a highly neuron-like fashion and exhibits flexible tuning of both short- and long-term plasticity. These features combined with the spatiotemporal processability make our device a capable counterpart to today’s electrically-driven artificial synapses, with superior reconfigurable capabilities. In addition, our device allows for generic optical spike processing, which provides a foundation for more sophisticated computing. The silicon-compatible, multifunctional photosensitive synapse opens up a new opportunity for neural networks enabled by photonics and extends current neuromorphic systems in terms of system complexities and functionalities.

  4. Performance of computer vision in vivo flow cytometry with low fluorescence contrast

    PubMed Central

    Markovic, Stacey; Li, Siyuan; Niedre, Mark

    2015-01-01

    Detection and enumeration of circulating cells in the bloodstream of small animals are important in many areas of preclinical biomedical research, including cancer metastasis, immunology, and reproductive medicine. Optical in vivo flow cytometry (IVFC) represents a class of technologies that allow noninvasive and continuous enumeration of circulating cells without drawing blood samples. We recently developed a technique termed computer vision in vivo flow cytometry (CV-IVFC) that uses a high-sensitivity fluorescence camera and an automated computer vision algorithm to interrogate relatively large circulating blood volumes in the ear of a mouse, detecting circulating cells at concentrations as low as 20 cells/mL. In the present work, we characterized the performance of CV-IVFC under low-contrast imaging conditions: (1) weak cell labeling, using cell-simulating fluorescent microspheres of varying brightness, and (2) high background tissue autofluorescence, by varying the autofluorescence properties of optical phantoms. Our analysis indicates that CV-IVFC can robustly track and enumerate circulating cells with at least 50% sensitivity even with contrast degraded by two orders of magnitude relative to our previous in vivo work. These results support the significant potential utility of CV-IVFC in a wide range of in vivo biological models. PMID:25822954

  5. Gesture Therapy: A Vision-Based System for Arm Rehabilitation after Stroke

    NASA Astrophysics Data System (ADS)

    Sucar, L. Enrique; Azcárate, Gildardo; Leder, Ron S.; Reinkensmeyer, David; Hernández, Jorge; Sanchez, Israel; Saucedo, Pedro

    Each year millions of people in the world survive a stroke; in the U.S. alone the figure is over 600,000 people per year. Movement impairments after stroke are typically treated with intensive, hands-on physical and occupational therapy for several weeks after the initial injury. However, due to economic pressures, stroke patients are receiving less therapy and going home sooner, so the potential benefit of the therapy is not completely realized. Thus, it is important to develop rehabilitation technology that allows individuals who have suffered a stroke to practice intensive movement training without the expense of an always-present therapist. Current solutions are too expensive, as they require a robotic system for rehabilitation. We have developed a low-cost computer vision system that allows individuals with stroke to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a web-based virtual environment for facilitating repetitive movement training with state-of-the-art computer vision algorithms that track the hand of a patient and obtain its 3-D coordinates, using two inexpensive cameras and a conventional personal computer. An initial prototype of the system has been evaluated in a pilot clinical study with promising results.
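
Recovering 3-D hand coordinates from two cameras reduces, for an idealized rectified stereo pair, to triangulation from disparity. The sketch below assumes rectified cameras with known focal length and baseline; the actual system's calibration and tracking pipeline is more involved, and all numbers here are hypothetical.

```python
def triangulate(xl, yl, xr, f, baseline):
    """3-D point (X, Y, Z) from a rectified stereo pair.
    xl, yl: left-image pixel coordinates relative to the principal point;
    xr: right-image x coordinate; f: focal length in pixels;
    baseline: horizontal camera separation in metres."""
    disparity = xl - xr           # horizontal shift between the two views
    Z = f * baseline / disparity  # depth is inversely proportional to disparity
    X = xl * Z / f
    Y = yl * Z / f
    return X, Y, Z

# A 10-pixel disparity with f = 500 px and a 10 cm baseline puts the point 5 m away.
X, Y, Z = triangulate(50.0, 20.0, 40.0, 500.0, 0.1)
```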

  6. Pyramidal neurovision architecture for vision machines

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1993-08-01

    The vision system employed by an intelligent robot must be active: active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture, based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway, is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than that of the biological visual pathway, it does retain some essential features, such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, where each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.

  7. Computer vision in the poultry industry

    USDA-ARS?s Scientific Manuscript database

    Computer vision is becoming increasingly important in the poultry industry due to increasing use and speed of automation in processing operations. Growing awareness of food safety concerns has helped add food safety inspection to the list of tasks that automated computer vision can assist. Researc...

  8. Ubiquitous Accessibility for People with Visual Impairments: Are We There Yet?

    PubMed Central

    Billah, Syed Masum; Ashok, Vikas; Porter, Donald E.; Ramakrishnan, IV

    2017-01-01

    Ubiquitous access is an increasingly common vision of computing, wherein users can interact with any computing device or service from anywhere, at any time. In the era of personal computing, users with visual impairments required special-purpose, assistive technologies, such as screen readers, to interact with computers. This paper investigates whether technologies like screen readers have kept pace with, or have created a barrier to, the trend toward ubiquitous access, with a specific focus on desktop computing as this is still the primary way computers are used in education and employment. Towards that, the paper presents a user study with 21 visually-impaired participants, specifically involving the switching of screen readers within and across different computing platforms, and the use of screen readers in remote access scenarios. Among the findings, the study shows that, even for remote desktop access—an early forerunner of true ubiquitous access—screen readers are too limited, if not unusable. The study also identifies several accessibility needs, such as uniformity of navigational experience across devices, and recommends potential solutions. In summary, assistive technologies have not made the jump into the era of ubiquitous access, and multiple, inconsistent screen readers create new practical problems for users with visual impairments. PMID:28782061

  9. Ubiquitous Accessibility for People with Visual Impairments: Are We There Yet?

    PubMed

    Billah, Syed Masum; Ashok, Vikas; Porter, Donald E; Ramakrishnan, I V

    2017-05-01

    Ubiquitous access is an increasingly common vision of computing, wherein users can interact with any computing device or service from anywhere, at any time. In the era of personal computing, users with visual impairments required special-purpose, assistive technologies, such as screen readers, to interact with computers. This paper investigates whether technologies like screen readers have kept pace with, or have created a barrier to, the trend toward ubiquitous access, with a specific focus on desktop computing as this is still the primary way computers are used in education and employment. Towards that, the paper presents a user study with 21 visually-impaired participants, specifically involving the switching of screen readers within and across different computing platforms, and the use of screen readers in remote access scenarios. Among the findings, the study shows that, even for remote desktop access-an early forerunner of true ubiquitous access-screen readers are too limited, if not unusable. The study also identifies several accessibility needs, such as uniformity of navigational experience across devices, and recommends potential solutions. In summary, assistive technologies have not made the jump into the era of ubiquitous access, and multiple, inconsistent screen readers create new practical problems for users with visual impairments.

  10. Automated hardwood lumber grading utilizing a multiple sensor machine vision technology

    Treesearch

    D. Earl Kline; Chris Surak; Philip A. Araman

    2003-01-01

    Over the last 10 years, scientists at the Thomas M. Brooks Forest Products Center, the Bradley Department of Electrical and Computer Engineering, and the USDA Forest Service have been working on lumber scanning systems that can accurately locate and identify defects in hardwood lumber. Current R&D efforts are targeted toward developing automated lumber grading...

  11. Teaching Children Thinking. Artificial Intelligence Memo Number 247.

    ERIC Educational Resources Information Center

    Papert, Seymour

    It is possible to maintain a vision of a technologically oriented educational system which is grander than the current one in which new gadgets are used to teach the old material in a thinly disguised old way. Educational innovation, particularly when computers are included, can find better things for children to do and better ways for the child…

  12. Grading Multiple Choice Exams with Low-Cost and Portable Computer-Vision Techniques

    ERIC Educational Resources Information Center

    Fisteus, Jesus Arias; Pardo, Abelardo; García, Norberto Fernández

    2013-01-01

    Although technology for automatic grading of multiple choice exams has existed for several decades, it is not yet as widely available or affordable as it should be. The main reasons preventing this adoption are the cost and the complexity of the setup procedures. In this paper, "Eyegrade," a system for automatic grading of multiple…

  13. [Comparison study between biological vision and computer vision].

    PubMed

    Liu, W; Yuan, X G; Yang, C X; Liu, Z Q; Wang, R

    2001-08-01

    The structural and mechanistic development of biological vision is discussed, covering in particular the anatomical structure of biological vision, a tentative classification of receptive fields, parallel processing of visual information, and feedback and integration effects in the visual cortex. Recent advances in the field, drawn from studies of the morphology of biological vision, are introduced. Finally, biological vision and computer vision are compared, and their similarities and differences are pointed out.

  14. Effectiveness of Assistive Technologies for Low Vision Rehabilitation: A Systematic Review

    ERIC Educational Resources Information Center

    Jutai, Jeffrey W.; Strong, J. Graham; Russell-Minda, Elizabeth

    2009-01-01

    "Low vision" describes any condition of diminished vision that is uncorrectable by standard eyeglasses, contact lenses, medication, or surgery that disrupts a person's ability to perform common age-appropriate visual tasks. Examples of assistive technologies for vision rehabilitation include handheld magnifiers; electronic vision-enhancement…

  15. A tangible programming tool for children to cultivate computational thinking.

    PubMed

    Wang, Danli; Wang, Tingting; Liu, Zhen

    2014-01-01

    Games and creation are activities with good potential for cultivating computational thinking skills. In this paper we present T-Maze, an economical tangible programming tool for children aged 5-9 to build computer programs in maze games by placing wooden blocks. Through the use of computer vision technology, T-Maze provides a live programming interface with real-time graphical and voice feedback. We conducted a user study with 7 children using T-Maze to play two levels of maze-escape games and create their own mazes. The results show that T-Maze is not only easy to use, but also has the potential to help children cultivate computational thinking skills such as abstraction, problem decomposition, and creativity.

  16. Machine Vision Giving Eyes to Robots. Resources in Technology.

    ERIC Educational Resources Information Center

    Technology Teacher, 1990

    1990-01-01

    This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)

  17. Stereo Vision Inside Tire

    DTIC Science & Technology

    2015-08-21

    using the Open Computer Vision (OpenCV) libraries [6] for computer vision and the Qt library [7] for the user interface. The software has the...depth. The software application calibrates the cameras using the plane-based calibration model from the OpenCV calib3D module and allows the...[6] OpenCV. 2015. OpenCV Open Source Computer Vision. [Online]. Available at: opencv.org [Accessed]: 09/01/2015. [7] Qt. 2015. Qt Project home

  18. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence, the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: (1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and (2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.

  19. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform for a high level application (e.g., object recognition). An IVS normally involves algorithms from low level, intermediate level, and high level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  20. Research on moving object detection based on frog's eyes

    NASA Astrophysics Data System (ADS)

    Fu, Hongwei; Li, Dongguang; Zhang, Xinyuan

    2008-12-01

    Based on the information-processing mechanism of the frog's eye, this paper discusses a bionic detection technology suitable for object information processing modeled on frog vision. First, a bionic detection theory imitating frog vision is established: a parallel processing mechanism comprising acquisition and pretreatment of object information, parallel separation of the digital image, parallel processing, and information synthesis. A computer vision detection system built on this theory detects moving objects of a particular color and shape; experiments indicate that such objects can be detected even against a cluttered, interfering background. A moving-object detection electronic model imitating biological vision based on the frog's eye is then established: the analog video signal is first digitized, then separated in parallel by an FPGA. In the parallel processing stage, video information is captured, processed, and displayed simultaneously, and information fusion is performed over the DSP HPI ports in order to transmit the data processed by the DSP. This system covers a wider visual field and achieves higher image resolution than ordinary monitoring systems. In summary, simulation experiments on edge detection of moving objects with the Canny algorithm indicate that the system can detect the edges of moving objects in real time; the feasibility of the bionic model was fully demonstrated in the engineering system, laying a solid foundation for future study of detection technology imitating biological vision.
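
The core of such moving-object detection can be illustrated with simple frame differencing: pixels whose intensity changes sharply between consecutive frames are flagged as motion. This is a minimal sketch with made-up frames, not the paper's FPGA/DSP pipeline or its Canny-based edge detection.

```python
def moving_pixels(prev, curr, thresh=30):
    """Binary motion mask via absolute frame differencing.
    prev, curr: two consecutive grayscale frames as lists of rows of
    0-255 intensities; a pixel is marked 1 ('moving') when its
    intensity changes by more than `thresh` between frames."""
    return [[1 if abs(c - p) > thresh else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

prev = [[10, 10, 10],
        [10, 10, 10]]
curr = [[10, 200, 10],   # a bright object appears in one pixel
        [10, 10, 10]]
mask = moving_pixels(prev, curr)  # 1 only where the scene changed
```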

  1. Information Systems for NASA's Aeronautics and Space Enterprises

    NASA Technical Reports Server (NTRS)

    Kutler, Paul

    1998-01-01

    The aerospace industry is being challenged to reduce costs and development time as well as utilize new technologies to improve product performance. Information technology (IT) is the key to providing revolutionary solutions to the challenges posed by the increasing complexity of NASA's aeronautics and space missions and the sophisticated nature of the systems that enable them. The NASA Ames vision is to develop technologies enabling the information age, expanding the frontiers of knowledge for aeronautics and space, improving America's competitive position, and inspiring future generations. Ames' missions to accomplish that vision include: 1) performing research to support the American aviation community through the unique integration of computation, experimentation, simulation and flight testing, 2) studying the health of our planet, understanding living systems in space and the origins of the universe, developing technologies for space flight, and 3) researching, developing and delivering information technologies and applications. Information technology may be defined as the use of advanced computing systems to generate data, analyze data, transform data into knowledge, and serve as an aid in the decision-making process. The knowledge from transformed data can be displayed in visual, virtual and multimedia environments. The decision-making process can be fully autonomous or aided by cognitive processes, i.e., computational aids designed to leverage human capacities. IT systems can learn as they go, developing the capability to make decisions or aid the decision-making process on the basis of experiences gained using limited data inputs. In the future, information systems will be used to aid space mission synthesis, virtual aerospace system design, aid damaged aircraft during landing, perform robotic surgery, and monitor the health and status of spacecraft and planetary probes.
NASA Ames, through the Center of Excellence for Information Technology Office, is leading the effort in pursuit of revolutionary, IT-based approaches to satisfying NASA's aeronautics and space requirements. The objective of the effort is to incorporate information technologies within each of the Agency's four Enterprises, i.e., Aeronautics and Space Transportation Technology, Earth Science, Human Exploration and Development of Space, and Space Sciences. The end results of these efforts for Enterprise programs and projects should be reduced cost, enhanced mission capability and expedited mission completion.

  2. Computer vision for foreign body detection and removal in the food industry

    USDA-ARS?s Scientific Manuscript database

    Computer vision inspection systems are often used for quality control, product grading, defect detection and other product evaluation issues. This chapter focuses on the use of computer vision inspection systems that detect foreign bodies and remove them from the product stream. Specifically, we wi...

  3. Deep Learning for Computer Vision: A Brief Review

    PubMed Central

    Doulamis, Nikolaos; Doulamis, Anastasios; Protopapadakis, Eftychios

    2018-01-01

    Over the last few years, deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein. PMID:29487619

  4. Real-time Accurate Surface Reconstruction Pipeline for Vision Guided Planetary Exploration Using Unmanned Ground and Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Almeida, Eduardo DeBrito

    2012-01-01

    This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.

  5. Experimental validation of docking and capture using space robotics testbeds

    NASA Technical Reports Server (NTRS)

    Spofford, John; Schmitz, Eric; Hoff, William

    1991-01-01

    This presentation describes the application of robotic and computer vision systems to validate docking and capture operations for space cargo transfer vehicles. Three applications are discussed: (1) air bearing systems in two dimensions that yield high quality free-flying, flexible, and contact dynamics; (2) validation of docking mechanisms with misalignment and target dynamics; and (3) computer vision technology for target location and real-time tracking. All the testbeds are supported by a network of engineering workstations for dynamic and controls analyses. Dynamic simulation of multibody rigid and elastic systems are performed with the TREETOPS code. MATRIXx/System-Build and PRO-MATLAB/Simulab are the tools for control design and analysis using classical and modern techniques such as H-infinity and LQG/LTR. SANDY is a general design tool to optimize numerically a multivariable robust compensator with a user-defined structure. Mathematica and Macsyma are used to derive symbolically dynamic and kinematic equations.

  6. Beyond the computer-based patient record: re-engineering with a vision.

    PubMed

    Genn, B; Geukers, L

    1995-01-01

    In order to achieve real benefit from the potential offered by a Computer-Based Patient Record, the capabilities of the technology must be applied along with true re-engineering of healthcare delivery processes. University Hospital recognizes this and is using systems implementation projects as the catalyst for transforming the way we care for our patients. Integration is fundamental to the success of these initiatives, and this must be explicitly planned against an organized systems architecture whose standards are market-driven. University Hospital also recognizes that Community Health Information Networks will offer improved quality of patient care at a reduced overall cost to the system. All of these implementation factors are considered up front as the hospital makes its initial decisions on how to computerize its patient records. This improves our chances for success and will provide a consistent vision to guide the hospital's development of new and better patient care.

  7. A computer vision for animal ecology.

    PubMed

    Weinstein, Ben G

    2018-05-01

    A central goal of animal ecology is to observe species in the natural world. The cost and challenge of data collection often limit the breadth and scope of ecological study. Ecologists often use image capture to bolster data collection in time and space. However, the ability to process these images remains a bottleneck. Computer vision can greatly increase the efficiency, repeatability and accuracy of image review. Computer vision uses image features, such as colour, shape and texture, to infer image content. I provide a brief primer on ecological computer vision to outline its goals, tools and applications to animal ecology. I reviewed 187 existing applications of computer vision and divided articles into ecological description, counting and identity tasks. I discuss recommendations for enhancing the collaboration between ecologists and computer scientists and highlight areas for future growth of automated image analysis. © 2017 The Author. Journal of Animal Ecology © 2017 British Ecological Society.
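
As a minimal example of the "colour" features mentioned above, a coarse RGB histogram summarizes image content in a form a classifier can use. The binning scheme here is an arbitrary illustration, not a method from the review.

```python
def color_histogram(pixels, bins=4):
    """Coarse RGB histogram: count pixels per (r, g, b) bin.
    pixels: iterable of 8-bit (r, g, b) tuples; with bins=4 each
    channel is quantized into 4 ranges of 64 intensity values."""
    step = 256 // bins
    hist = {}
    for r, g, b in pixels:
        key = (r // step, g // step, b // step)
        hist[key] = hist.get(key, 0) + 1
    return hist

# Two reddish pixels and one greenish pixel land in two distinct bins.
h = color_histogram([(250, 10, 10), (245, 5, 0), (10, 240, 10)])
```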

  8. Towards Dynamic Authentication in the Grid — Secure and Mobile Business Workflows Using GSet

    NASA Astrophysics Data System (ADS)

    Mangler, Jürgen; Schikuta, Erich; Witzany, Christoph; Jorns, Oliver; Ul Haq, Irfan; Wanek, Helmut

    Until now, the research community has mainly focused on the technical aspects of Grid computing and neglected commercial issues. However, recently the community has tended to accept that the success of the Grid is crucially based on commercial exploitation. In our vision Foster's and Kesselman's statement "The Grid is all about sharing." has to be extended by "... and making money out of it!". To allow for the realization of this vision, the trustworthiness of the underlying technology needs to be ensured. This can be achieved by the use of gSET (Gridified Secure Electronic Transaction) as a basic technology for trust management and secure accounting in the presented Grid based workflow. We present a framework, conceptually and technically, from the area of the Mobile-Grid, which justifies the Grid infrastructure as a viable platform to enable commercially successful business workflows.

  9. A Federal Vision for Future Computing: A Nanotechnology-Inspired Grand Challenge

    DTIC Science & Technology

    2016-07-29

    Science Foundation (NSF), Department of Defense (DOD), National Institute of Standards and Technology (NIST), Intelligence Community (IC) Introduction...multiple Federal agencies: • Intelligent big data sensors that act autonomously and are programmable via the network for increased flexibility, and...intelligence for scientific discovery enabled by rapid extreme-scale data analysis, capable of understanding and making sense of results and thereby

  10. A Summary of Proceedings for the Advanced Deployable Day/Night Simulation Symposium

    DTIC Science & Technology

    2009-07-01

    initiated to design, develop, and deliver transportable visual simulations that jointly provide night-vision and high-resolution daylight capability. The...Deployable Day/Night Simulation (ADDNS) Technology Demonstration Project was initiated to design, develop, and deliver transportable visual...was Dr. Richard Wildes (York University); Mr. Vitaly Zholudev (Department of Computer Science, York University), Mr. X. Zhu (Neptec Design Group), and

  11. Assessing the performance of structure-from-motion photogrammetry and terrestrial lidar 1 at reconstructing soil surface microtopography of naturally vegetated plots

    USDA-ARS?s Scientific Manuscript database

    Soil microtopography or soil roughness is a property of critical importance in many earth surface processes but is often difficult to measure. Advances in computer vision technologies have made image-based 3D depiction of the soil surface or Structure-from-Motion (SfM) available to many scientists ...

  12. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays.

    PubMed

    Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A; Wetzstein, Gordon

    2017-02-28

    From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.
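
The accommodation-matching idea can be illustrated with basic dioptric arithmetic: the vergence of a target at distance d metres is 1/d dioptres, and an uncorrected refractive error shifts the required display focus by the same amount. This is illustrative arithmetic only; the prototypes' actual gaze-driven control is more involved, and the function name is hypothetical.

```python
def focus_demand(fixation_distance_m, refractive_error_D=0.0):
    """Dioptres of display focus needed to place the virtual image at the
    fixation distance, offset by the user's uncorrected refractive error
    (negative for myopia). Vergence of a target at d metres is 1/d D."""
    return 1.0 / fixation_distance_m + refractive_error_D

demand = focus_demand(0.5)                          # emmetrope, 0.5 m target: 2.0 D
myope = focus_demand(0.5, refractive_error_D=-2.0)  # -2 D myope: 0.0 D net demand
```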

  13. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays

    NASA Astrophysics Data System (ADS)

    Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A.; Wetzstein, Gordon

    2017-02-01

    From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.

  14. Toxicity assessment of polluted sediments using swimming behavior alteration test with Daphnia magna

    NASA Astrophysics Data System (ADS)

    Nikitin, O. V.; Nasyrova, E. I.; Nuriakhmetova, V. R.; Stepanova, N. Yu; Danilova, N. V.; Latypova, V. Z.

    2018-01-01

    Behavioral responses of organisms are increasingly used as a reliable and sensitive tool in aquatic toxicology. Behavior-related endpoints allow the effects of sub-lethal exposure to contaminants to be studied efficiently. At present, behavioral parameters are frequently determined by digital analysis of video recordings using computer vision technology. However, most studies evaluate the toxicity of aqueous solutions; owing to methodological difficulties associated with sample preparation, there are few examples of studies assessing the toxicity of other environmental matrices (wastes, sewage sludges, soils, sediments, etc.) by computer vision technology. This paper presents the results of an assessment of swimming behavior alterations of Daphnia magna in elutriates from both uncontaminated natural and artificially chromium-contaminated bottom sediments. In the elutriate from chromium-contaminated bottom sediments (chromium concentration 115±5.7 μg l-1), the swimming speed of daphnids decreased from 0.61 cm s-1 (median speed over the whole period) to 0.50 cm s-1 (median speed during the last minute of the experiment). Relocating Daphnia from the culture medium to the extract from the unpolluted sediments did not essentially change swimming activity.
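
The median-speed endpoint reported above can be computed from tracked positions in a few lines. The sketch assumes a track of (x, y) centroids sampled at a fixed frame rate; the track values and frame rate here are hypothetical, not data from the study.

```python
from statistics import median

def median_speed(track, fps):
    """Median swimming speed (distance units per second) from a list of
    (x, y) positions sampled at `fps` frames per second."""
    speeds = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        step = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5  # per-frame displacement
        speeds.append(step * fps)
    return median(speeds)

# Hypothetical 4-frame track (cm) sampled at 10 frames per second.
v = median_speed([(0.0, 0.0), (0.1, 0.0), (0.2, 0.0), (0.25, 0.0)], fps=10)
```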

  15. Navigation of military and space unmanned ground vehicles in unstructured terrains

    NASA Technical Reports Server (NTRS)

    Lescoe, Paul; Lavery, David; Bedard, Roger

    1991-01-01

Development of unmanned vehicles for local navigation in terrains unstructured by humans is reviewed. Modes of navigation include teleoperation or remote control, computer-assisted remote driving (CARD), and semiautonomous navigation (SAN). A first implementation of a CARD system was successfully tested using the Robotic Technology Test Vehicle developed by the Jet Propulsion Laboratory. Stereo pictures were transmitted to a remotely located human operator, who performed the sensing, perception, and planning functions of navigation. A computer provided range and angle measurements, and the path plan was transmitted to the vehicle, which autonomously executed the path. This implementation is to be enhanced by providing passive stereo vision and a reflex control system for autonomously stopping the vehicle if blocked by an obstacle. SAN achievements include implementation of a navigation testbed on a six-wheel, three-body articulated rover vehicle, development of SAN algorithms and code, integration of the SAN software onto the vehicle, and a successful feasibility demonstration that represents a step toward the technology required for long-range exploration of the lunar or Martian surface. The vehicle includes a passive stereo vision system with real-time area-based stereo image correlation, a terrain matcher, a path planner, and a path execution planner.
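The range measurements in such stereo-based systems follow from the standard triangulation relation Z = fB/d, where f is the focal length in pixels, B the camera baseline, and d the disparity. A minimal sketch of that relation (the parameter values below are illustrative, not those of the JPL vehicle):

```python
def stereo_range(focal_px, baseline_m, disparity_px):
    """Range to a point from stereo disparity via standard triangulation:
    Z = f * B / d. Smaller disparity means a more distant point."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point at infinity otherwise)")
    return focal_px * baseline_m / disparity_px

# 700 px focal length, 0.35 m baseline, 10 px disparity -> 24.5 m range
print(stereo_range(700.0, 0.35, 10.0))  # 24.5
```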

  16. Computer vision-based sorting of Atlantic salmon (Salmo salar) fillets according to their color level.

    PubMed

    Misimi, E; Mathiassen, J R; Erikson, U

    2007-01-01

A computer vision method was used to evaluate the color of Atlantic salmon (Salmo salar) fillets. Computer vision-based sorting of fillets according to their color was studied on 2 separate groups of salmon fillets. Images of the fillets were captured using a high-resolution digital camera, segmented into regions of interest, and analyzed in the red, green, and blue (RGB) and CIE lightness, redness, and yellowness (Lab) color spaces, and the fillets were classified according to the Roche color card industrial standard. Comparison between visual evaluations made by a panel of human inspectors according to the Roche SalmoFan lineal standard and the color scores generated by the computer vision algorithm showed no significant differences between the methods. Overall, computer vision can be used as a powerful tool to sort fillets by color in a fast and nondestructive manner. The low cost of implementing computer vision solutions creates the potential to replace manual labor in fish processing plants with automation.
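The color analysis above depends on mapping camera RGB values into the CIE Lab space, where the a* axis directly encodes redness. Below is a self-contained sketch of the standard sRGB→XYZ→Lab conversion plus a toy grading rule; the grade thresholds are hypothetical placeholders, not the Roche standard's values.

```python
def srgb_to_lab(r, g, b):
    """Convert an 8-bit sRGB triple to CIE L*a*b* (D65 white point),
    using the standard sRGB linearization and XYZ matrix."""
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # XYZ -> Lab relative to the D65 reference white
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

# Hypothetical a* (redness) cut-offs for sorting; NOT the Roche card values.
GRADE_THRESHOLDS = [(30.0, "pale"), (40.0, "medium")]

def redness_grade(a_star):
    for cut, label in GRADE_THRESHOLDS:
        if a_star < cut:
            return label
    return "deep red"

L, a, b = srgb_to_lab(255, 0, 0)   # pure red: a* is strongly positive
print(round(a, 1), redness_grade(a))
```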

  17. Enabling the environmentally clean air transportation of the future: a vision of computational fluid dynamics in 2030.

    PubMed

    Slotnick, Jeffrey P; Khodadoust, Abdollah; Alonso, Juan J; Darmofal, David L; Gropp, William D; Lurie, Elizabeth A; Mavriplis, Dimitri J; Venkatakrishnan, Venkat

    2014-08-13

As global air travel expands rapidly to meet demand generated by economic growth, it is essential to continue to improve the efficiency of air transportation to reduce its carbon emissions and address concerns about climate change. Future transports must be 'cleaner' and designed to include technologies that will continue to lower engine emissions and reduce community noise. The use of computational fluid dynamics (CFD) will be critical to enable the design of these new concepts. In general, the ability to simulate aerodynamic and reactive flows using CFD has progressed rapidly during the past several decades and has fundamentally changed the aerospace design process. Advanced simulation capabilities not only enable reductions in ground-based and flight-testing requirements, but also provide added physical insight, and enable superior designs at reduced cost and risk. In spite of considerable success, reliable use of CFD has remained confined to a small region of the operating envelope due, in part, to the inability of current methods to reliably predict turbulent, separated flows. Fortunately, the advent of much more powerful computing platforms provides an opportunity to overcome a number of these challenges. This paper summarizes the findings and recommendations from a recent NASA-funded study that provides a vision for CFD in the year 2030, including an assessment of critical technology gaps and needed development, and identifies the key CFD technology advancements that will enable the design and development of much cleaner aircraft in the future.

  18. 3D vision system for intelligent milking robot automation

    NASA Astrophysics Data System (ADS)

    Akhloufi, M. A.

    2013-12-01

In a milking robot, correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for approximate estimation of teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups; moreover, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information, from which the 3D position of each teat is computed. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system, with the best performance obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  19. Machine Learning, deep learning and optimization in computer vision

    NASA Astrophysics Data System (ADS)

    Canu, Stéphane

    2017-03-01

As quoted in the Large Scale Computer Vision Systems NIPS workshop, computer vision is a mature field with a long tradition of research, but recent advances in machine learning, deep learning, representation learning and optimization have provided models with new capabilities to better understand visual content. The presentation will go through these new developments in machine learning, covering basic motivations, ideas, models and optimization in deep learning for computer vision, and identifying challenges and opportunities. It will focus on issues related to large-scale learning, that is: high-dimensional features, a large variety of visual classes, and a large number of examples.

  20. Use of Field Programmable Gate Array Technology in Future Space Avionics

    NASA Technical Reports Server (NTRS)

    Ferguson, Roscoe C.; Tate, Robert

    2005-01-01

Fulfilling NASA's new vision for space exploration requires the development of sustainable, flexible and fault tolerant spacecraft control systems. The traditional development paradigm consists of the purchase or fabrication of hardware boards with fixed processor and/or Digital Signal Processing (DSP) components interconnected via a standardized bus system, followed by the purchase and/or development of software. This paradigm has several disadvantages for the development of systems to support NASA's new vision. Building a system to be fault tolerant increases the complexity and decreases the performance of the included software. Standard bus design and conventional implementation produce natural bottlenecks. Configuring hardware components in systems containing common processors and DSPs is difficult initially and expensive or impossible to change later. The existence of Hardware Description Languages (HDLs), the recent increase in performance, density and radiation tolerance of Field Programmable Gate Arrays (FPGAs), and Intellectual Property (IP) Cores provides the technology for reprogrammable Systems on a Chip (SOC). This technology supports a paradigm better suited to NASA's vision. Hardware and software production are melded for more effective development; they can both evolve together over time. Designers incorporating this technology into future avionics can benefit from its flexibility. Systems can be designed with improved fault isolation and tolerance using hardware instead of software, and these designs can be protected from obsolescence problems in which maintenance is compromised by component and vendor availability. To investigate the flexibility of this technology, the core of the Central Processing Unit and the Input/Output Processor of the Space Shuttle AP101S Computer were prototyped in Verilog HDL and synthesized into an Altera Stratix FPGA.

  1. 3-D Signal Processing in a Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Philip A. Araman

    1991-01-01

    This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to 3-dimension. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...

  2. An overview of computer vision

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1982-01-01

    An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. The basic approach to computer vision systems, the techniques utilized, applications, the current existing systems and state-of-the-art issues and research requirements, who is doing it and who is funding it, and future trends and expectations are reviewed.

  3. Experiences Using an Open Source Software Library to Teach Computer Vision Subjects

    ERIC Educational Resources Information Center

    Cazorla, Miguel; Viejo, Diego

    2015-01-01

    Machine vision is an important subject in computer science and engineering degrees. For laboratory experimentation, it is desirable to have a complete and easy-to-use tool. In this work we present a Java library, oriented to teaching computer vision. We have designed and built the library from the scratch with emphasis on readability and…

  4. Real time AI expert system for robotic applications

    NASA Technical Reports Server (NTRS)

    Follin, John F.

    1987-01-01

A computer-controlled multi-robot process cell to demonstrate advanced technologies for the demilitarization of obsolete chemical munitions was developed. The methods through which the vision system and other sensory inputs were used by the artificial intelligence to provide the information required to direct the robots to complete the desired task are discussed. Also discussed are the mechanisms the expert system uses to solve problems (goals), the different rule databases, and the methods for adapting this control system to any device that can be controlled or programmed through a high-level computer interface.

  5. The NASA Computational Fluid Dynamics (CFD) program - Building technology to solve future challenges

    NASA Technical Reports Server (NTRS)

    Richardson, Pamela F.; Dwoyer, Douglas L.; Kutler, Paul; Povinelli, Louis A.

    1993-01-01

    This paper presents the NASA Computational Fluid Dynamics program in terms of a strategic vision and goals as well as NASA's financial commitment and personnel levels. The paper also identifies the CFD program customers and the support to those customers. In addition, the paper discusses technical emphasis and direction of the program and some recent achievements. NASA's Ames, Langley, and Lewis Research Centers are the research hubs of the CFD program while the NASA Headquarters Office of Aeronautics represents and advocates the program.

  6. A Tangible Programming Tool for Children to Cultivate Computational Thinking

    PubMed Central

    Wang, Danli; Liu, Zhen

    2014-01-01

    Game and creation are activities which have good potential for computational thinking skills. In this paper we present T-Maze, an economical tangible programming tool for children aged 5–9 to build computer programs in maze games by placing wooden blocks. Through the use of computer vision technology, T-Maze provides a live programming interface with real-time graphical and voice feedback. We conducted a user study with 7 children using T-Maze to play two levels of maze-escape games and create their own mazes. The results show that T-Maze is not only easy to use, but also has the potential to help children cultivate computational thinking like abstraction, problem decomposition, and creativity. PMID:24719575

  7. Detecting Motion from a Moving Platform; Phase 3: Unification of Control and Sensing for More Advanced Situational Awareness

    DTIC Science & Technology

    2011-11-01

RX-TY-TR-2011-0096-01) summarizes the development of a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica

  8. Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades.

    PubMed

    Orchard, Garrick; Jayawant, Ajinkya; Cohen, Gregory K; Thakor, Nitish

    2015-01-01

    Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labeling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches.

  9. A combined vision-inertial fusion approach for 6-DoF object pose estimation

    NASA Astrophysics Data System (ADS)

    Li, Juan; Bernardos, Ana M.; Tarrío, Paula; Casar, José R.

    2015-02-01

    The estimation of the 3D position and orientation of moving objects (`pose' estimation) is a critical process for many applications in robotics, computer vision or mobile services. Although major research efforts have been carried out to design accurate, fast and robust indoor pose estimation systems, it remains as an open challenge to provide a low-cost, easy to deploy and reliable solution. Addressing this issue, this paper describes a hybrid approach for 6 degrees of freedom (6-DoF) pose estimation that fuses acceleration data and stereo vision to overcome the respective weaknesses of single technology approaches. The system relies on COTS technologies (standard webcams, accelerometers) and printable colored markers. It uses a set of infrastructure cameras, located to have the object to be tracked visible most of the operation time; the target object has to include an embedded accelerometer and be tagged with a fiducial marker. This simple marker has been designed for easy detection and segmentation and it may be adapted to different service scenarios (in shape and colors). Experimental results show that the proposed system provides high accuracy, while satisfactorily dealing with the real-time constraints.
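A common way to fuse drift-prone inertial data with absolute but noisier vision fixes is a complementary filter. The 1-D sketch below illustrates only the general idea behind such hybrid approaches; the paper's actual fusion scheme and the gain value here are assumptions, not taken from it.

```python
def fuse(inertial_delta, vision_pos, prev_est, alpha=0.9):
    """One step of a 1-D complementary filter: integrate the inertial
    increment (smooth but drifting), then pull the estimate toward the
    absolute vision measurement (drift-free but noisy).
    alpha is an illustrative gain, not a tuned value."""
    predicted = prev_est + inertial_delta
    return alpha * predicted + (1.0 - alpha) * vision_pos

# With zero true motion but a biased inertial increment, the fused estimate
# stays anchored near the vision measurement instead of drifting away.
est = 0.0
for _ in range(200):
    est = fuse(inertial_delta=0.01, vision_pos=0.0, prev_est=est)
print(round(est, 3))  # settles near 0.09 rather than drifting to 2.0
```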

  10. [Facing the challenges of ubiquitous computing in the health care sector].

    PubMed

    Georgieff, Peter; Friedewald, Michael

    2010-01-01

The steady progress of microelectronics, communications and information technology will enable the realisation of the vision for "ubiquitous computing" where the Internet extends into the real world embracing everyday objects. The necessary technical basis is already in place. Due to their diminishing size, constantly falling price and declining energy consumption, processors, communications modules and sensors are being increasingly integrated into everyday objects today. This development is opening up huge opportunities for both the economy and individuals. In the present paper we discuss possible applications, but also technical, social and economic barriers to a wide-spread use of ubiquitous computing in the health care sector.

  11. A High-Speed Target-Free Vision-Based Sensor for Bus Rapid Transit Viaduct Vibration Measurements Using CMT and ORB Algorithms.

    PubMed

    Hu, Qijun; He, Songsheng; Wang, Shilong; Liu, Yugang; Zhang, Zutao; He, Leping; Wang, Fubin; Cai, Qijie; Shi, Rendan; Yang, Yuan

    2017-06-06

Bus Rapid Transit (BRT) has become an increasingly important mode of public transportation in modern cities. Traditional contact-based sensing techniques for the health monitoring of BRT viaducts suffer from the deficiency that the normal free flow of traffic must be blocked. Advances in computer vision technology provide a new line of thought for solving this problem. In this study, a high-speed target-free vision-based sensor is proposed to measure the vibration of structures without interrupting traffic. An improved keypoint matching algorithm based on the consensus-based matching and tracking (CMT) object tracking algorithm is adopted and further developed, together with the oriented BRIEF (ORB) keypoint detection algorithm, for practicable and effective tracking of objects. Moreover, by synthesizing the existing scaling factor calculation methods, more rational approaches to reducing errors are implemented. The performance of the vision-based sensor is evaluated through a series of laboratory tests. Experimental tests with different target types, frequencies, amplitudes and motion patterns are conducted. The performance of the method is satisfactory, which indicates that the vision sensor can extract accurate structural vibration signals by tracking either artificial or natural targets. Field tests further demonstrate that the vision sensor is both practicable and reliable.

  12. A High-Speed Target-Free Vision-Based Sensor for Bus Rapid Transit Viaduct Vibration Measurements Using CMT and ORB Algorithms

    PubMed Central

    Hu, Qijun; He, Songsheng; Wang, Shilong; Liu, Yugang; Zhang, Zutao; He, Leping; Wang, Fubin; Cai, Qijie; Shi, Rendan; Yang, Yuan

    2017-01-01

Bus Rapid Transit (BRT) has become an increasingly important mode of public transportation in modern cities. Traditional contact-based sensing techniques for the health monitoring of BRT viaducts suffer from the deficiency that the normal free flow of traffic must be blocked. Advances in computer vision technology provide a new line of thought for solving this problem. In this study, a high-speed target-free vision-based sensor is proposed to measure the vibration of structures without interrupting traffic. An improved keypoint matching algorithm based on the consensus-based matching and tracking (CMT) object tracking algorithm is adopted and further developed, together with the oriented BRIEF (ORB) keypoint detection algorithm, for practicable and effective tracking of objects. Moreover, by synthesizing the existing scaling factor calculation methods, more rational approaches to reducing errors are implemented. The performance of the vision-based sensor is evaluated through a series of laboratory tests. Experimental tests with different target types, frequencies, amplitudes and motion patterns are conducted. The performance of the method is satisfactory, which indicates that the vision sensor can extract accurate structural vibration signals by tracking either artificial or natural targets. Field tests further demonstrate that the vision sensor is both practicable and reliable. PMID:28587275

  13. Vision-Based UAV Flight Control and Obstacle Avoidance

    DTIC Science & Technology

    2006-01-01

denoted it by Vb = (Vb1, Vb2, Vb3). Fig. 2 shows the block diagram of the proposed vision-based motion analysis and obstacle avoidance system. We denote...structure analysis often involve computation-intensive computer vision tasks, such as feature extraction and geometric modeling. Computation-intensive...First, we extract a set of features from each block. 2) Second, we compute the distance between these two sets of features. In conventional motion

  14. Head-Mounted Display Technology for Low Vision Rehabilitation and Vision Enhancement

    PubMed Central

    Ehrlich, Joshua R.; Ojeda, Lauro V.; Wicker, Donna; Day, Sherry; Howson, Ashley; Lakshminarayanan, Vasudevan; Moroi, Sayoko E.

    2017-01-01

    Purpose To describe the various types of head-mounted display technology, their optical and human factors considerations, and their potential for use in low vision rehabilitation and vision enhancement. Design Expert perspective. Methods An overview of head-mounted display technology by an interdisciplinary team of experts drawing on key literature in the field. Results Head-mounted display technologies can be classified based on their display type and optical design. See-through displays such as retinal projection devices have the greatest potential for use as low vision aids. Devices vary by their relationship to the user’s eyes, field of view, illumination, resolution, color, stereopsis, effect on head motion and user interface. These optical and human factors considerations are important when selecting head-mounted displays for specific applications and patient groups. Conclusions Head-mounted display technologies may offer advantages over conventional low vision aids. Future research should compare head-mounted displays to commonly prescribed low vision aids in order to compare their effectiveness in addressing the impairments and rehabilitation goals of diverse patient populations. PMID:28048975

  15. Learning the Digital Way: Evaluating Progress, Tackling Challenges. Technology Counts, 2015. Education Week. Volume 34, Number 35

    ERIC Educational Resources Information Center

    Edwards, Virginia B., Ed.

    2015-01-01

    Lofty ed-tech visions are always tempered by reality. Unexpected problems that arose during a launch of a 1-to-1 computing program and ambitious digital curriculum initiative in Los Angeles led to the dialing back of the effort. Financial, legal, and managerial repercussions continue to swirl in the wake. These problems should not prevent schools…

  16. IO Sphere: The Professional Journal of Joint Information Operations. Special Edition 2008

    DTIC Science & Technology

    2008-01-01

members, disseminate propaganda, videos, brochures, and training materials, as well as to coordinate terrorist acts in an anonymous and...collaboration among larger communities of cyber...Porn versus Terror: Years ago, authorities noticed that child pornography websites, though often...stepping foot on them. Moreover, video information can be analyzed by computer vision algorithms. Based on technology available today, it's not

  17. Understanding the Situation in the Urban Environment

    DTIC Science & Technology

    2001-05-15

second type of information, termed executable information, communicates a clearly understood vision of the operation and desired outcome after a decision...information necessary for the commander as situational awareness information, which creates understanding, and execution information, which communicates a...technological advances yet to take place in such fields as computers or remotely controlled sensors...will be less opaque

  18. Implementation of Protocols to Enable Doctoral Training in Physical and Computational Chemistry of a Blind Graduate Student

    ERIC Educational Resources Information Center

    Minkara, Mona S.; Weaver, Michael N.; Gorske, Jim; Bowers, Clifford R.; Merz, Kenneth M., Jr.

    2015-01-01

    There exists a sparse representation of blind and low-vision students in science, technology, engineering and mathematics (STEM) fields. This is due in part to these individuals being discouraged from pursuing STEM degrees as well as a lack of appropriate adaptive resources in upper level STEM courses and research. Mona Minkara is a rising fifth…

  19. Rapid analysis and quantification of fluorescent brighteners in wheat flour by Tri-step infrared spectroscopy and computer vision technology

    NASA Astrophysics Data System (ADS)

    Guo, Xiao-Xi; Hu, Wei; Liu, Yuan; Gu, Dong-Chen; Sun, Su-Qin; Xu, Chang-Hua; Wang, Xi-Chang

    2015-11-01

Fluorescent brighteners, industrial whitening agents, have been illegally used to whiten wheat flour. In this article, computer vision technology (E-eyes) and colorimetry were employed to investigate color differences among wheat flour samples containing different concentrations of fluorescent brightener, using DMS as an example. Tri-step infrared spectroscopy (Fourier transform infrared spectroscopy coupled with second-derivative infrared spectroscopy (SD-IR) and two-dimensional correlation infrared spectroscopy (2DCOS-IR)) was used to identify and quantitate DMS in wheat flour. According to the color analysis, the whitening effect was significant below 30 mg/g DMS, but above 100 mg/g the flour began to appear greenish; it was therefore speculated that the concentration of DMS in real flour adulterated with DMS should be below 100 mg/g. With increasing concentration, the spectral similarity of wheat flour containing DMS to the DMS standard increased, and SD-IR peaks at 1153 cm-1, 1141 cm-1, 1112 cm-1, 1085 cm-1 and 1025 cm-1 attributed to DMS were regularly enhanced. Furthermore, 2DCOS-IR could differentiate between the DMS standard and wheat flour containing DMS at concentrations as low as 0.05 mg/g, and the bands in the range of 1000-1500 cm-1 could serve as an exclusive range for identifying whether wheat flour contains DMS. Finally, a quantitative prediction model based on IR spectra was successfully established by partial least squares (PLS) regression over a concentration range from 1 mg/g to 100 mg/g. The calibration set gave a determination coefficient of 0.9884 with a standard error (RMSEC) of 5.56, and the validation set gave a determination coefficient of 0.9881 with a standard error of 5.73. It was demonstrated that computer vision technology and colorimetry are effective for estimating the content of DMS in wheat flour, and that Tri-step infrared macro-fingerprinting combined with PLS is applicable for rapid and nondestructive identification and quantitation of fluorescent brighteners.
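The reported determination coefficients and standard errors (RMSEC) of such calibration models are computed from reference versus predicted concentrations with two short formulas. The sketch below applies them to synthetic data; the numbers are illustrative, not the paper's.

```python
import math

def r_squared(y_true, y_pred):
    """Determination coefficient R^2 = 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root-mean-square error; on the calibration set this is the RMSEC."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

# Synthetic calibration data (mg/g): reference vs. model-predicted values.
ref = [1.0, 10.0, 30.0, 60.0, 100.0]
pred = [1.5, 9.0, 31.0, 58.0, 101.0]
print(round(r_squared(ref, pred), 4), round(rmse(ref, pred), 3))
```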

  20. Heterogeneous compute in computer vision: OpenCL in OpenCV

    NASA Astrophysics Data System (ADS)

    Gasparakis, Harris

    2014-02-01

We explore the relevance of Heterogeneous System Architecture (HSA) in Computer Vision, both as a long term vision, and as a near term emerging reality via the recently ratified OpenCL 2.0 Khronos standard. After a brief review of OpenCL 1.2 and 2.0, including HSA features such as Shared Virtual Memory (SVM) and platform atomics, we identify which genres of Computer Vision workloads stand to benefit by leveraging those features, and we suggest a new mental framework that replaces GPU compute with hybrid HSA APU compute. As a case in point, we discuss, in some detail, popular object recognition algorithms (part-based models), emphasizing the interplay and concurrent collaboration between the GPU and CPU. We conclude by describing how OpenCL has been incorporated in OpenCV, a popular open source computer vision library, emphasizing recent work on the Transparent API, to appear in OpenCV 3.0, which unifies the native CPU and OpenCL execution paths under a single API, allowing the same code to execute either on the CPU or on an OpenCL-enabled device, without even recompiling.

  1. Deep learning-based artificial vision for grasp classification in myoelectric hands.

    PubMed

    Ghazaei, Ghazal; Alameer, Ali; Degenaar, Patrick; Morgan, Graham; Nazarpour, Kianoush

    2017-06-01

Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with regards to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) structure was trained with images of over 500 graspable objects. For each object, 72 images, at 5° intervals, were available. Objects were categorised into four grasp classes, namely: pinch, tripod, palmar wrist neutral and palmar wrist pronated. The CNN setting was first tuned and tested offline and then in realtime with objects or object views that were not included in the training set. The classification accuracy in the offline tests reached 85% for the seen and 75% for the novel objects; reflecting the generalisability of grasp classification. We then implemented the proposed framework in realtime on a standard laptop computer and achieved an overall score of 84% in classifying a set of novel as well as seen but randomly-rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb Ultra™ prosthetic hand and a motion control™ prosthetic wrist, augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success of up to 88%. In addition, we show that with training, subjects' performance improved in terms of time required to accomplish a block of 24 trials despite a decreasing level of visual feedback.
The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep-learning based computer vision systems can enhance the grip functionality of myoelectric hands considerably.

  2. The research of edge extraction and target recognition based on inherent feature of objects

    NASA Astrophysics Data System (ADS)

    Xie, Yu-chan; Lin, Yu-chi; Huang, Yin-guo

    2008-03-01

    Current research on computer vision often relies on specific techniques for particular problems. Little use has been made of high-level aspects of computer vision, such as three-dimensional (3D) object recognition, that are appropriate for large classes of problems and situations. In particular, high-level vision often focuses mainly on the extraction of symbolic descriptions and pays little attention to the speed of processing. In order to extract and recognize targets intelligently and rapidly, in this paper we developed a new 3D target recognition method based on the inherent features of objects, taking a cuboid as the model. On the basis of an analysis of the cuboid's natural contour and grey-level distribution characteristics, an overall fuzzy evaluation technique was used to recognize and segment the target. A Hough transform was then used to extract and match the model's main edges, and finally the target edges were reconstructed by stereo techniques. There are three major contributions in this paper. Firstly, the corresponding relations between the parameters of the cuboid model's straight edge lines in the image field and in the transform field are summarized; with these, the aimless computations and searches in Hough transform processing can be greatly reduced and efficiency improved. Secondly, since prior knowledge of the cuboid contour's geometry is available, the intersections of the extracted component edges are taken, and the geometry of candidate edge matches is assessed on the basis of these intersections rather than the extracted edges themselves; the outlines are therefore enhanced and the noise is suppressed. Finally, a 3D target recognition method is proposed. Compared with other recognition methods, this new method has a quick response time and can be achieved with high-level computer vision. The method presented here can be widely used in vision-guided techniques to strengthen their intelligence and generalization, and can also play an important role in object tracking, port AGVs, and robotics. The results of simulation experiments and theoretical analysis demonstrate that the proposed method suppresses noise effectively, extracts target edges robustly, and meets real-time requirements.
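    The Hough voting scheme the abstract relies on can be sketched directly: each edge pixel votes for every line (ρ, θ) that passes through it, and peaks in the accumulator mark dominant straight edges. The grid resolutions, the synthetic edge points, and the peak selection below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def hough_lines(edge_points, img_shape, n_theta=180, n_rho=200):
    """Vote each edge pixel into a (rho, theta) accumulator,
    using the line parameterization rho = x*cos(theta) + y*sin(theta)."""
    h, w = img_shape
    diag = np.hypot(h, w)                      # largest possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in edge_points:
        rho = x * cos_t + y * sin_t            # one rho per candidate theta
        idx = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1      # cast one vote per theta
    return acc, rhos, thetas

# Synthetic edge pixels on the horizontal line y = 10 (theta = 90 deg, rho = 10).
pts = [(x, 10) for x in range(50)]
acc, rhos, thetas = hough_lines(pts, (64, 64))
r_i, t_i = np.unravel_index(acc.argmax(), acc.shape)   # accumulator peak
```

    Restricting the vote to a known range of (ρ, θ) pairs, as the paper's model-based correspondence does, simply means iterating over fewer columns of the accumulator.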

  3. Deep learning-based artificial vision for grasp classification in myoelectric hands

    NASA Astrophysics Data System (ADS)

    Ghazaei, Ghazal; Alameer, Ali; Degenaar, Patrick; Morgan, Graham; Nazarpour, Kianoush

    2017-06-01

    Objective. Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. Approach. We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with regard to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) structure was trained with images of over 500 graspable objects. For each object, 72 images, at 5° intervals, were available. Objects were categorised into four grasp classes, namely: pinch, tripod, palmar wrist neutral and palmar wrist pronated. The CNN setting was first tuned and tested offline and then in real time with objects or object views that were not included in the training set. Main results. The classification accuracy in the offline tests reached 85% for the seen and 75% for the novel objects, reflecting the generalisability of grasp classification. We then implemented the proposed framework in real time on a standard laptop computer and achieved an overall score of 84% in classifying a set of novel as well as seen but randomly rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb Ultra prosthetic hand and a Motion Control prosthetic wrist, augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success of up to 88%. In addition, we show that with training, subjects' performance improved in terms of time required to accomplish a block of 24 trials despite a decreasing level of visual feedback. Significance. The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep learning-based computer vision systems can enhance the grip functionality of myoelectric hands considerably.
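    The pipeline maps an object image straight to one of the four grasp classes. The numpy sketch below mimics the shape of such a network (convolution, ReLU, global average pooling, a linear layer, softmax) with random weights, purely to illustrate the structure; the kernel sizes, layer counts, and weights are placeholders, not the trained CNN from the paper.

```python
import numpy as np

GRASPS = ["pinch", "tripod", "palmar wrist neutral", "palmar wrist pronated"]

def conv2d(img, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def grasp_forward(img, kernels, weights):
    """Conv -> ReLU -> global average pool -> linear layer -> softmax."""
    feats = np.array([np.maximum(conv2d(img, k), 0.0).mean() for k in kernels])
    logits = weights @ feats
    p = np.exp(logits - logits.max())          # numerically stable softmax
    p /= p.sum()
    return GRASPS[int(p.argmax())], p

rng = np.random.default_rng(0)
kernels = [rng.standard_normal((3, 3)) for _ in range(8)]   # placeholder filters
weights = rng.standard_normal((4, 8))                       # placeholder head
label, probs = grasp_forward(rng.random((16, 16)), kernels, weights)
```

    The key design point the abstract makes survives even in this toy: the output layer has one unit per grasp class, not per object identity, so novel objects still map to a usable grasp.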

  4. Quality grading of Atlantic salmon (Salmo salar) by computer vision.

    PubMed

    Misimi, E; Erikson, U; Skavhaug, A

    2008-06-01

    In this study, we present a promising method of computer vision-based quality grading of whole Atlantic salmon (Salmo salar). Using computer vision, it was possible to differentiate among different quality grades of Atlantic salmon based on the external geometrical information contained in the fish images. Initially, before the image acquisition, the fish were subjectively graded and labeled into grading classes by a qualified human inspector in the processing plant. Prior to classification, the salmon images were segmented into binary images, and then feature extraction was performed on the geometrical parameters of the fish from the grading classes. The classification algorithm was a threshold-based classifier, which was designed using linear discriminant analysis. The performance of the classifier was tested by using the leave-one-out cross-validation method, and the classification results showed good agreement between the classification done by the human inspector and by computer vision. The computer vision-based method correctly classified 90% of the salmon from the data set, as compared with the classification by the human inspector. Overall, it was shown that computer vision can be used as a powerful tool to grade Atlantic salmon into quality grades in a fast and nondestructive manner with a relatively simple classifier algorithm. The low cost of implementation of today's advanced computer vision solutions makes this method feasible for industrial purposes in fish plants, as it can replace manual labor, on which grading tasks still rely.
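    A threshold classifier designed with linear discriminant analysis, as the abstract describes, reduces to projecting the geometric feature vectors onto one direction and thresholding. The sketch below uses Fisher's discriminant on two hypothetical grade classes with made-up (area, elongation) features; the feature values and class labels are illustrative, not the paper's data.

```python
import numpy as np

def fisher_lda(X0, X1):
    """Fisher's linear discriminant: w = Sw^-1 (m1 - m0), with the
    decision threshold at the midpoint of the projected class means."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)
    threshold = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())
    return w, threshold

rng = np.random.default_rng(1)
# Hypothetical geometric features per fish: (area, elongation), one row each.
grade_a = rng.normal([4.0, 2.0], 0.2, size=(50, 2))   # one grade class
grade_b = rng.normal([5.5, 3.0], 0.2, size=(50, 2))   # another grade class
w, t = fisher_lda(grade_a, grade_b)
accuracy = ((grade_a @ w < t).mean() + (grade_b @ w > t).mean()) / 2
```

    With more than two grades, the same projection-and-threshold idea extends to one threshold per adjacent pair of classes.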

  5. Augmentation of Cognition and Perception Through Advanced Synthetic Vision Technology

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.; Arthur, Jarvis J.; Williams, Steve P.; McNabb, Jennifer

    2005-01-01

    Synthetic Vision System technology augments reality and creates a virtual visual meteorological condition that extends a pilot's cognitive and perceptual capabilities during flight operations when outside visibility is restricted. The paper describes the NASA Synthetic Vision System for commercial aviation with an emphasis on how the technology achieves Augmented Cognition objectives.

  6. Technology transfer of operator-in-the-loop simulation

    NASA Technical Reports Server (NTRS)

    Yae, K. H.; Lin, H. C.; Lin, T. C.; Frisch, H. P.

    1994-01-01

    The technology developed for operator-in-the-loop simulation in space teleoperation has been applied to Caterpillar's backhoe, wheel loader, and off-highway truck. On an SGI workstation, the simulation integrates computer modeling of kinematics and dynamics, real-time computation and visualization, and an interface with the operator through the operator's console. The console is interfaced with the workstation through an IBM-PC, in which the operator's commands are digitized and sent through an RS-232 serial port. The simulation gave visual feedback adequate for the operator in the loop, with the camera's field of vision projected on a large screen in multiple view windows. The view control can emulate either stationary or moving cameras. This simulator created an innovative engineering design environment by integrating computer software and hardware with the human operator's interactions. The backhoe simulation has been adopted by Caterpillar in building a virtual reality tool for backhoe design.

  7. Reducing the Time and Cost of Testing Engines

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Producing a new aircraft engine currently costs approximately $1 billion, with 3 years of development time for a commercial engine and 10 years for a military engine. The high development time and cost make it extremely difficult to transition advanced technologies for cleaner, quieter, and more efficient new engines. To reduce this time and cost, NASA created a vision for the future where designers would use high-fidelity computer simulations early in the design process in order to resolve critical design issues before building the expensive engine hardware. To accomplish this vision, NASA's Glenn Research Center initiated a collaborative effort with the aerospace industry and academia to develop its Numerical Propulsion System Simulation (NPSS), an advanced engineering environment for the analysis and design of aerospace propulsion systems and components. Partners estimate that using NPSS has the potential to dramatically reduce the time, effort, and expense necessary to design and test jet engines by generating sophisticated computer simulations of an aerospace object or system. These simulations will permit an engineer to test various design options without having to conduct costly and time-consuming real-life tests. By accelerating and streamlining the engine system design analysis and test phases, NPSS facilitates bringing the final product to market faster. NASA's NPSS Version (V)1.X effort was a task within the Agency's Computational Aerospace Sciences project of the High Performance Computing and Communication program, which had a mission to accelerate the availability of high-performance computing hardware and software to the U.S. aerospace community for its use in design processes. The technology brings value back to NASA by improving methods of analyzing and testing space transportation components.

  8. Emerging computer technologies and the news media of the future

    NASA Technical Reports Server (NTRS)

    Vrabel, Debra A.

    1993-01-01

    The media environment of the future may be dramatically different from what exists today. As new computing and communications technologies evolve and synthesize to form a global, integrated communications system of networks, public domain hardware and software, and consumer products, it will be possible for citizens to fulfill most information needs at any time and from any place, to obtain desired information easily and quickly, to obtain information in a variety of forms, and to experience and interact with information in a variety of ways. This system will transform almost every institution, every profession, and every aspect of human life--including the creation, packaging, and distribution of news and information by media organizations. This paper presents one vision of a 21st century global information system and how it might be used by citizens. It surveys some of the technologies now on the market that are paving the way for the new media environment.

  9. Using parallel evolutionary development for a biologically-inspired computer vision system for mobile robots.

    PubMed

    Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J

    2005-01-01

    We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.

  10. Progress in computer vision.

    NASA Astrophysics Data System (ADS)

    Jain, A. K.; Dorai, C.

    Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.

  11. Can Humans Fly? Action Understanding with Multiple Classes of Actors

    DTIC Science & Technology

    2015-06-08

    ...recognition using structure from motion point clouds. In European Conference on Computer Vision, 2008. [5] R. Caruana. Multitask learning. Machine Learning... ...autonomous driving? The KITTI vision benchmark suite. In IEEE Conference on Computer Vision and Pattern Recognition, 2012. [12] L. Gorelick, M. Blank...

  12. ROBIN: a platform for evaluating automatic target recognition algorithms: I. Overview of the project and presentation of the SAGEM DS competition

    NASA Astrophysics Data System (ADS)

    Duclos, D.; Lonnoy, J.; Guillerm, Q.; Jurie, F.; Herbin, S.; D'Angelo, E.

    2008-04-01

    The last five years have seen a renewal of Automatic Target Recognition applications, mainly because of the latest advances in machine learning techniques. In this context, large collections of image datasets are essential for training algorithms as well as for their evaluation. Indeed, the recent proliferation of recognition algorithms, generally applied to slightly different problems, makes comparing them through clean evaluation campaigns necessary. The ROBIN project tries to fulfil these two needs by putting unclassified datasets, ground truths, competitions and metrics for the evaluation of ATR algorithms at the disposal of the scientific community. The scope of this project includes single- and multi-class generic target detection and generic target recognition, in military and security contexts. To our knowledge, it is the first time that a database of this importance (several hundred thousand visible and infrared hand-annotated images) has been publicly released. Funded by the French Ministry of Defence (DGA) and by the French Ministry of Research, ROBIN is one of the ten Techno-vision projects. Techno-vision is a large and ambitious government initiative for building evaluation means for computer vision technologies, for various application contexts. ROBIN's consortium includes major companies and research centres involved in Computer Vision R&D in the field of defence: Bertin Technologies, CNES, ECA, DGA, EADS, INRIA, ONERA, MBDA, SAGEM, THALES. This paper, which first gives an overview of the whole project, focuses on one of ROBIN's key competitions, the SAGEM Defence Security database. This dataset contains more than eight hundred ground and aerial infrared images of six different vehicles in cluttered scenes including distracters. Two different sets of data are available for each target. The first set includes different views of each vehicle at close range in a "simple" background, and can be used to train algorithms. The second set contains many views of the same vehicle in different contexts and situations simulating operational scenarios.

  13. Computer vision in cell biology.

    PubMed

    Danuser, Gaudenz

    2011-11-23

    Computer vision refers to the theory and implementation of artificial systems that extract information from images to understand their content. Although computers are widely used by cell biologists for visualization and measurement, interpretation of image content, i.e., the selection of events worth observing and the definition of what they mean in terms of cellular mechanisms, is mostly left to human intuition. This Essay attempts to outline roles computer vision may play and should play in image-based studies of cellular life. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Vision servo of industrial robot: A review

    NASA Astrophysics Data System (ADS)

    Zhang, Yujin

    2018-04-01

    Robot technology has been applied to many areas of production and daily life, and as robot applications continue to develop, the requirements placed on robots keep rising. To give robots better perception, vision sensors have been widely used in industrial robots. In this paper, application directions of industrial robots are reviewed. The development, classification and application of robot vision servo technology are discussed, and the development prospects of industrial robot vision servo technology are proposed.

  15. Synthetic Vision Systems - Operational Considerations Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-01-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  16. Smart Camera System for Aircraft and Spacecraft

    NASA Technical Reports Server (NTRS)

    Delgado, Frank; White, Janis; Abernathy, Michael F.

    2003-01-01

    This paper describes a new approach to situation awareness that combines video sensor technology and synthetic vision technology in a unique fashion to create a hybrid vision system. Our implementation of the technology, called "SmartCam3D" (SC3D), has been flight tested by both NASA and the Department of Defense with excellent results. This paper details its development and flight test results. Windshields and windows add considerable weight and risk to vehicle design, and because of this, many future vehicles will employ a windowless cockpit design. This windowless cockpit design philosophy prompted us to look at what would be required to develop a system that provides crewmembers and operations personnel an appropriate level of situation awareness. The system created to date provides a real-time 3D perspective display that can be used during all-weather and visibility conditions. While the advantages of a synthetic vision only system are considerable, the major disadvantage of such a system is that it displays the synthetic scene created using "static" data acquired by an aircraft or satellite at some point in the past. The SC3D system we are presenting in this paper is a hybrid synthetic vision system that fuses live video stream information with a computer-generated synthetic scene. This hybrid system can display a dynamic, real-time scene of a region of interest, enriched by information from a synthetic environment system; see figure 1. The SC3D system has been flight tested on several X-38 flight tests performed over the last several years and on an Army Unmanned Aerial Vehicle (UAV) ground control station earlier this year. Additional testing using an assortment of UAV ground control stations and UAV simulators from the Army and Air Force will be conducted later this year.

  17. Synthetic vision systems: operational considerations simulation experiment

    NASA Astrophysics Data System (ADS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-04-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents / accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  18. Computer Vision Syndrome.

    PubMed

    Randolph, Susan A

    2017-07-01

    With the increased use of electronic devices with visual displays, computer vision syndrome is becoming a major public health issue. Improving the visual status of workers using computers results in greater productivity in the workplace and improved visual comfort.

  19. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays

    PubMed Central

    Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Wetzstein, Gordon

    2017-01-01

    From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one. PMID:28193871
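    The focus-cue idea rests on simple Gaussian optics: a focus-tunable lens is driven so the display panel's virtual image lands at the gaze-estimated fixation distance. The sketch below applies the thin-lens relation with the virtual-image distance entered as a magnitude; the numbers are illustrative, and this is not the prototypes' calibration code.

```python
def tunable_focal_length(panel_dist_m, fixation_dist_m):
    """Thin-lens sketch: choose f so a panel at panel_dist forms a virtual
    image at the fixation distance (given as a magnitude):
        1/f = 1/panel_dist - 1/fixation_dist
    A user's spherical refractive correction could be folded in as an
    additional power (diopter) term on the right-hand side."""
    return 1.0 / (1.0 / panel_dist_m - 1.0 / fixation_dist_m)

# Panel 5 cm from the lens, user fixating at 1 m: f is slightly over 5 cm,
# keeping the panel just inside the focal length so the image is virtual.
f = tunable_focal_length(0.05, 1.0)
```

    As the gaze tracker reports nearer fixation distances, the required focal length grows, which is exactly the degree of freedom a focus-tunable lens provides.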

  20. Computer vision and augmented reality in gastrointestinal endoscopy

    PubMed Central

    Mahmud, Nadim; Cohen, Jonah; Tsourides, Kleovoulos; Berzin, Tyler M.

    2015-01-01

    Augmented reality (AR) is an environment-enhancing technology, widely applied in the computer sciences, which has only recently begun to permeate the medical field. Gastrointestinal endoscopy—which relies on the integration of high-definition video data with pathologic correlates—requires endoscopists to assimilate and process a tremendous amount of data in real time. We believe that AR is well positioned to provide computer-guided assistance with a wide variety of endoscopic applications, beginning with polyp detection. In this article, we review the principles of AR, describe its potential integration into an endoscopy set-up, and envisage a series of novel uses. With close collaboration between physicians and computer scientists, AR promises to contribute significant improvements to the field of endoscopy. PMID:26133175

  1. Cloud computing in medical imaging.

    PubMed

    Kagadis, George C; Kloukinas, Christos; Moore, Kevin; Philbin, Jim; Papadimitroulas, Panagiotis; Alexakos, Christos; Nagy, Paul G; Visvikis, Dimitris; Hendee, William R

    2013-07-01

    Over the past century technology has played a decisive role in defining, driving, and reinventing procedures, devices, and pharmaceuticals in healthcare. Cloud computing has been introduced only recently but is already one of the major topics of discussion in research and clinical settings. The provision of extensive, easily accessible, and reconfigurable resources such as virtual systems, platforms, and applications with low service cost has caught the attention of many researchers and clinicians. Healthcare researchers are moving their efforts to the cloud, because they need adequate resources to process, store, exchange, and use large quantities of medical data. This Vision 20/20 paper addresses major questions related to the applicability of advanced cloud computing in medical imaging. The paper also considers security and ethical issues that accompany cloud computing.

  2. Measuring exertion time, duty cycle and hand activity level for industrial tasks using computer vision.

    PubMed

    Akkas, Oguz; Lee, Cheng Hsien; Hu, Yu Hen; Harris Adamson, Carisa; Rempel, David; Radwin, Robert G

    2017-12-01

    Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC) and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and the computer vision DC was -5.8% for the Decision Tree (DT) algorithm, and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence that deviations in DC have on HAL, found HAL unaffected when DC error was less than 5%. Thus, a DC error of less than 10% will change HAL by less than 0.5, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates. Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle and hand activity level from videos of workers performing industrial tasks.
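    Once a tracker has produced a per-frame exertion signal, the quantities the abstract names follow directly: duty cycle is exertion time over total time, exertion frequency is the onset count over total time, and HAL can then be read from a frequency/duty-cycle regression. The signal, frame rate, and the particular regression form below are assumptions for illustration, not the authors' code.

```python
import numpy as np

def duty_cycle_and_frequency(exertion, fps):
    """exertion: per-frame boolean exertion flags from a video tracker."""
    exertion = np.asarray(exertion, dtype=bool)
    total_s = len(exertion) / fps
    duty_cycle = 100.0 * exertion.sum() / len(exertion)        # percent
    # An onset is a rest-to-exertion transition (count the first frame too).
    onsets = np.count_nonzero(~exertion[:-1] & exertion[1:]) + int(exertion[0])
    frequency = onsets / total_s                               # exertions / s
    return duty_cycle, frequency

def hal_estimate(duty_cycle, frequency):
    # One published frequency/duty-cycle regression for HAL (assumed form).
    return 6.56 * np.log(duty_cycle) * frequency**1.31 / (1 + 3.18 * frequency**1.31)

# 1 s exertion / 1 s rest, repeated for 40 s at 15 frames per second:
signal = ([True] * 15 + [False] * 15) * 20
dc, freq = duty_cycle_and_frequency(signal, fps=15)   # 50.0 %, 0.5 exertions/s
```

    The sensitivity result in the abstract then has a concrete reading: perturbing `dc` by a few percent moves `hal_estimate` only fractionally, because HAL depends on duty cycle through a logarithm.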

  3. Feasibility Study of a Vision-Based Landing System for Unmanned Fixed-Wing Aircraft

    DTIC Science & Technology

    2017-06-01

    International Journal of Computer Science and Network Security 7, no. 3: 112-117. Accessed April 7, 2017. http://www.sciencedirect.com/science/article/pii... ...the feasibility of applying computer vision techniques and visual feedback in the control loop for an autonomous system. This thesis examines the... ...integration into an autonomous aircraft control system. 14. SUBJECT TERMS: autonomous systems, auto-land, computer vision, image processing

  4. Surpassing Humans and Computers with JellyBean: Crowd-Vision-Hybrid Counting Algorithms.

    PubMed

    Sarma, Akash Das; Jain, Ayush; Nandi, Arnab; Parameswaran, Aditya; Widom, Jennifer

    2015-11-01

    Counting objects is a fundamental image processing primitive, and has many scientific, health, surveillance, security, and military applications. Existing supervised computer vision techniques typically require large quantities of labeled training data, and even with that, fail to return accurate results in all but the most stylized settings. Using vanilla crowd-sourcing, on the other hand, can lead to significant errors, especially on images with many objects. In this paper, we present our JellyBean suite of algorithms, which combines the best of crowds and computer vision to count objects in images, and uses judicious decomposition of images to greatly improve accuracy at low cost. Our algorithms have several desirable properties: (i) they are theoretically optimal or near-optimal, in that they ask as few questions as possible of humans (under certain intuitively reasonable assumptions that we justify experimentally in our paper); (ii) they operate in stand-alone or hybrid modes, in that they can either work independently of computer vision algorithms or work in concert with them, depending on whether the computer vision techniques are available or useful for the given setting; (iii) they perform very well in practice, returning accurate counts on images that no individual worker or computer vision algorithm can count correctly, while not incurring a high cost.
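    The "judicious decomposition" idea can be sketched as a recursion: ask a worker (a human or a vision algorithm) for a count, and if the tile looks too crowded to count reliably, split it into quadrants and sum the recursive counts. The grid representation, the threshold of four objects per tile, and the exact-count worker below are illustrative assumptions, not the JellyBean implementation.

```python
def crowd_count(tile, worker, max_objects=4):
    """Ask a (possibly noisy) worker for a count; if the tile seems too
    crowded to count reliably, split it into quadrants and recurse."""
    estimate = worker(tile)
    if estimate <= max_objects or (len(tile) == 1 and len(tile[0]) == 1):
        return estimate
    r, c = max(len(tile) // 2, 1), max(len(tile[0]) // 2, 1)
    quadrants = [
        [row[:c] for row in tile[:r]], [row[c:] for row in tile[:r]],
        [row[:c] for row in tile[r:]], [row[c:] for row in tile[r:]],
    ]
    return sum(crowd_count(q, worker, max_objects)
               for q in quadrants if q and q[0])

# 8x8 toy "image" with 10 object markers; the worker here counts exactly,
# standing in for a crowd answer or a vision-algorithm estimate.
grid = [[0] * 8 for _ in range(8)]
for r, c in [(0, 0), (0, 7), (1, 3), (2, 5), (3, 1),
             (4, 4), (5, 6), (6, 2), (7, 0), (7, 7)]:
    grid[r][c] = 1
total = crowd_count(grid, lambda g: sum(map(sum, g)))
```

    The design choice the recursion captures is that workers are accurate on sparse tiles, so splitting trades a few extra cheap questions for much lower counting error on dense images.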

  5. Biological Basis For Computer Vision: Some Perspectives

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.

    1990-03-01

    Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels and in the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies are heavily dependent on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prosthesis for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.

  6. Dynamic Vision for Control

    DTIC Science & Technology

    2006-07-27

    The goal of this project was to develop analytical and computational tools to make vision a viable sensor for... (vision.ucla.edu, July 27, 2006). ...sensors. We have proposed the framework of stereoscopic segmentation, where multiple images of the same objects were jointly processed to extract geometry

  7. MS Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koppenaal, David W.; Barinaga, Charles J.; Denton, M Bonner B.

    2005-11-01

    Good eyesight is often taken for granted, a situation that everyone appreciates once vision begins to fade with age. New eyeglasses or contact lenses are traditional ways to improve vision, but recent new technology, i.e. LASIK laser eye surgery, provides a new and exciting means for marked vision restoration and improvement. In mass spectrometry, detectors are the 'eyes' of the MS instrument. These 'eyes' have also been taken for granted. New detectors and new technologies are likewise needed to correct, improve, and extend ion detection and hence, our 'chemical vision'. The purpose of this report is to review and assess current MS detector technology and to provide a glimpse towards future detector technologies. It is hoped that the report will also serve to motivate interest, prompt ideas, and inspire new visions for ion detection research.

  8. Industrial Inspection with Open Eyes: Advance with Machine Vision Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zheng; Ukida, H.; Niel, Kurt

    Machine vision systems have evolved significantly with the technology advances to tackle the challenges from modern manufacturing industry. A wide range of industrial inspection applications for quality control are benefiting from visual information captured by different types of cameras variously configured in a machine vision system. This chapter screens the state of the art in machine vision technologies in the light of hardware, software tools, and major algorithm advances for industrial inspection. Inspection beyond the visual spectrum offers a significant complement to visual inspection. Combining multiple technologies makes it possible for the inspection to achieve better performance and efficiency in varied applications. The diversity of the applications demonstrates the great potential of machine vision systems for industry.

  9. Proceedings of the Second Joint Technology Workshop on Neural Networks and Fuzzy Logic, volume 2

    NASA Technical Reports Server (NTRS)

    Lea, Robert N. (Editor); Villarreal, James A. (Editor)

    1991-01-01

    Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by NASA and the University of Texas, Houston. Topics addressed included adaptive systems, learning algorithms, network architectures, vision, robotics, neurobiological connections, speech recognition and synthesis, fuzzy set theory and application, control and dynamics processing, space applications, fuzzy logic and neural network computers, approximate reasoning, and multiobject decision making.

  10. Electronic Nose and Electronic Tongue

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Nabarun; Bandhopadhyay, Rajib

    Human beings have five senses, namely vision, hearing, touch, smell and taste. Sensors for vision, hearing and touch have been under development for many years. The need for sensors capable of mimicking the senses of smell and taste has been felt only recently in the food industry, environmental monitoring and several industrial applications. In the ever-widening horizon of frontier research in the field of electronics and advanced computing, the emergence of the electronic nose (E-Nose) and electronic tongue (E-Tongue) has been drawing the attention of scientists and technologists for more than a decade. By intelligent integration of a multitude of technologies like chemometrics, microelectronics and advanced soft computing, human olfaction has been successfully mimicked by new techniques called machine olfaction (Pearce et al. 2002). But the very essence of such research and development efforts has centered on the development of customized electronic nose and electronic tongue solutions specific to individual applications. In fact, current research trends clearly point to the fact that a machine olfaction system as versatile, universal and broadband as the human nose and tongue may not be feasible in the decades to come. But application-specific solutions may definitely be demonstrated and commercialized by modulating sensor design and fine-tuning the soft computing solutions. This chapter deals with the theory, development and applications of E-Nose and E-Tongue technology. A succinct account of future trends of R&D efforts in this field, with the objective of establishing correlation between machine olfaction and human perception, is also included.

  11. A Low-Power High-Speed Smart Sensor Design for Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi

    1997-01-01

    A low-power high-speed smart sensor system based on a large format active pixel sensor (APS) integrated with a programmable neural processor for space exploration missions is presented. The concept of building an advanced smart sensing system is demonstrated by a system-level microchip design that is composed of an APS sensor, a programmable neural processor, and an embedded microprocessor in SOI CMOS technology. This ultra-fast smart sensor system-on-a-chip design mimics what is inherent in biological vision systems. Moreover, it is programmable and capable of performing ultra-fast machine vision processing at all levels, such as image acquisition, image fusion, image analysis, scene interpretation, and control functions. The system provides about one tera-operation-per-second computing power, a two-order-of-magnitude increase over that of state-of-the-art microcomputers. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation.

  12. Neuro-inspired smart image sensor: analog Hmax implementation

    NASA Astrophysics Data System (ADS)

    Paindavoine, Michel; Dubois, Jérôme; Musa, Purnawarman

    2015-03-01

    The neuro-inspired vision approach, based on models from biology, reduces computational complexity. One of these models, the Hmax model, shows that the recognition of an object in the visual cortex mobilizes the V1, V2 and V4 areas. From the computational point of view, V1 corresponds to the area of directional filters (for example Sobel filters, Gabor filters or wavelet filters). This information is then processed in area V2 in order to obtain local maxima. This new information is then sent to an artificial neural network. This neural processing module corresponds to area V4 of the visual cortex and is intended to categorize objects present in the scene. In order to realize autonomous vision systems (consuming a few milliwatts) with such processing inside, we studied and realized, in 0.35 μm CMOS technology, prototypes of two image sensors that achieve the V1 and V2 processing of the Hmax model.
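
    The V1 and V2 stages described above can be sketched in a few lines of software. The following is a minimal NumPy sketch, not the authors' CMOS implementation; the filter size, sigma, wavelength and pooling window are illustrative assumptions.

```python
import numpy as np

def gabor_kernel(size, theta, sigma=2.0, lam=4.0):
    # V1-style oriented filter; sigma and lam are illustrative values.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def convolve2d(img, k):
    # Naive 'valid' 2-D convolution, adequate for a small demo.
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def max_pool(resp, size=2):
    # V2-style local maxima over non-overlapping windows.
    h = resp.shape[0] - resp.shape[0] % size
    w = resp.shape[1] - resp.shape[1] % size
    return resp[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A vertical bar excites the 0-degree (vertical-stripe) filter far more
# strongly than the 90-degree one; pooling keeps the strongest responses.
img = np.zeros((16, 16))
img[:, 8] = 1.0
s1 = [convolve2d(img, gabor_kernel(5, t)) for t in (0.0, np.pi / 2)]
c1 = [max_pool(np.abs(r)) for r in s1]
```

    A hardware version would replace the convolution loop with an in-pixel or column-parallel circuit, but the dataflow (oriented filtering, then local maxima) is the same.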

  13. Traffic light detection and intersection crossing using mobile computer vision

    NASA Astrophysics Data System (ADS)

    Grewei, Lynne; Lagali, Christopher

    2017-05-01

    The solution for intersection detection and crossing to support the development of blindBike, an assisted biking system for the visually impaired, is discussed. Traffic light detection and intersection crossing are key needs in the task of biking. These problems are tackled through the use of mobile computer vision, in the form of a mobile application on an Android phone. This research builds on previous traffic light detection algorithms with a focus on efficiency and compatibility on a resource-limited platform. Light detection is achieved through blob detection algorithms utilizing training data to detect patterns of red, green and yellow in complex real-world scenarios where multiple lights may be present. Issues of obscurity and scale are also addressed. Safe intersection crossing in blindBike is also discussed. This module takes a conservative "assistive" technology approach: to achieve it, blindBike uses not only the Android device but also an external Bluetooth/ANT-enabled bike cadence sensor. Real-world testing results are given and future work is discussed.
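
    Color blob detection of the kind described above can be illustrated with a toy version. This is a hedged sketch: the RGB threshold rules and minimum blob area below are hypothetical stand-ins for the trained color patterns in the paper, and a real system must additionally handle lighting, obscurity and scale.

```python
import numpy as np
from collections import deque

# Hypothetical RGB threshold rules; a real system trains these from data.
COLOR_RULES = {
    "red":    lambda r, g, b: (r > 180) & (g < 100) & (b < 100),
    "yellow": lambda r, g, b: (r > 180) & (g > 150) & (b < 100),
    "green":  lambda r, g, b: (g > 150) & (r < 100) & (b < 120),
}

def find_blobs(mask, min_area=4):
    # 4-connected components on a boolean mask via BFS flood fill.
    seen = np.zeros_like(mask, dtype=bool)
    blobs = []
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        q, area = deque([(sy, sx)]), 0
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            area += 1
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
        if area >= min_area:
            blobs.append(area)
    return blobs

def detect_lights(rgb):
    # Count candidate lamps of each color in an RGB frame.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return {name: len(find_blobs(rule(r, g, b))) for name, rule in COLOR_RULES.items()}

# Synthetic frame with one red lamp and one green lamp.
frame = np.zeros((20, 20, 3), dtype=np.uint8)
frame[2:6, 2:6] = (255, 30, 30)     # red blob
frame[12:16, 2:6] = (30, 200, 40)   # green blob
counts = detect_lights(frame)       # {'red': 1, 'yellow': 0, 'green': 1}
```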

  14. Color line-scan technology in industrial applications

    NASA Astrophysics Data System (ADS)

    Lemstrom, Guy F.

    1995-10-01

    Color machine vision opens new possibilities for industrial on-line quality control applications. With color machine vision it is possible to detect different colors and shades, perform color separation and spectroscopic applications, and at the same time make the same kinds of measurements as with gray-scale technology, such as geometrical measurements of dimensions, shape and texture. Combining these capabilities in a color line-scan camera takes machine vision into new applications and new areas of the machine vision business. Quality and process control requirements in industry become more demanding every day. Color machine vision can be the solution for many simple tasks that have not been realized with gray-scale technology; the inability to detect or measure colors has been one reason why machine vision has not been used in quality control as much as it could have been. Color machine vision has generated growing enthusiasm in industrial machine vision applications. Potential industries include food, wood, mining and minerals, printing, paper, glass, plastics and recycling, with tasks ranging from simple measurement to total process and quality control. Color machine vision is not only for measuring colors; it can also be used for contrast enhancement, object detection, background removal, and structure detection and measurement. Color or spectral separation allows machine vision applications to be worked out in more ways than before: it is a question of how to use the benefit of having two or more data values per measured pixel instead of only one, as with traditional gray-scale technology. There are plenty of potential applications today that can be realized with color vision, and it will give more performance to many traditional gray-scale applications in the near future. But the most important feature is that color machine vision offers a new way of working out applications where machine vision has not been applied before.

  15. Smart storage technologies applied to fresh foods: A review.

    PubMed

    Wang, Jingyu; Zhang, Min; Gao, Zhongxue; Adhikari, Benu

    2017-06-30

    Fresh foods are perishable, seasonal and regional in nature, and their storage, transportation and preservation of freshness are quite challenging. Smart storage technologies can detect and monitor, online, changes in the quality parameters and storage environment of fresh foods during storage, so that operators can make timely adjustments to reduce losses. This article reviews smart storage technologies from two aspects: online detection technologies and smart monitoring technologies for fresh foods. Online detection technologies include the electronic nose, nuclear magnetic resonance (NMR), near infrared spectroscopy (NIRS), hyperspectral imaging and computer vision. Smart monitoring technologies mainly include intelligent indicators for monitoring changes in the storage environment. Smart storage technologies applied to fresh foods need to be highly efficient, nondestructive and competitively priced. In this work, we have critically reviewed the principles, applications, and development trends of smart storage technologies.

  16. [Advances in automatic detection technology for images of thin blood film of malaria parasite].

    PubMed

    Juan-Sheng, Zhang; Di-Qiang, Zhang; Wei, Wang; Xiao-Guang, Wei; Zeng-Guo, Wang

    2017-05-05

    This paper reviews the computer vision and image analysis studies aiming at automated diagnosis or screening of malaria in microscope images of thin blood film smears. After introducing the background and significance of automatic detection technology, the existing detection technologies are summarized and divided into several steps, including image acquisition, pre-processing, morphological analysis, segmentation, counting, and pattern classification components. The principles and implementation methods of each step are then given in detail. In addition, the promotion and application of automatic detection technology to thick blood film smears are put forward as questions worthy of study, and a perspective on the future work needed to realize automated microscopy diagnosis of malaria is provided.
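
    The segmentation step in such pipelines is often a global thresholding of the stained image. As one illustrative choice (not necessarily the method used in the reviewed systems), Otsu's method picks the gray level that maximizes the between-class variance of the binary split.

```python
import numpy as np

def otsu_threshold(img):
    # Otsu's method: choose the gray level that maximizes the
    # between-class variance of the resulting foreground/background split.
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # probability of class 0
    mu = np.cumsum(p * np.arange(256))    # cumulative mean of class 0
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))

# Synthetic thin-film field: dark stained objects on a light background.
smear = np.full((20, 20), 200, dtype=np.uint8)
smear[5:9, 5:9] = 50        # one hypothetical stained region
smear[12:15, 12:15] = 50    # another
t = otsu_threshold(smear)
parasite_mask = smear <= t  # candidate pixels for the counting stage
```

    The resulting mask would then feed the morphological analysis and counting stages named above.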

  17. We Can Watch It for You Wholesale

    NASA Astrophysics Data System (ADS)

    Lipton, Alan J.

    This chapter provides an introduction to video analytics—a branch of computer vision technology that deals with automatic detection of activities and events in surveillance video feeds. Initial applications focused on the security and surveillance space, but as the technology improves it is rapidly finding a home in many other application areas. This chapter looks at some of those spaces, the requirements they impose on video analytics systems, and provides an example architecture and set of technology components to meet those requirements. This exemplary system is put through its paces to see how it stacks up in an embedded environment. Finally, we explore the future of video analytics and examine some of the market requirements that are driving breakthroughs in both video analytics and processor platform technology alike.

  18. The Illinois Century Network: New Dimensions for Education in Illinois. A Vision for Communications and Computing Networking To Retain and Expand Illinois' Position as a World Leader by the Turn of the Century. Report and First-Phase Recommendations of the Higher Education Technology Task Force to the Illinois Board of Higher Education and the Illinois Community College Board.

    ERIC Educational Resources Information Center

    Illinois State Board of Higher Education, Springfield.

    This proposal calls on the state of Illinois to initiate a statewide computing and telecommunications network that would give its residents access to higher education, advanced training, and electronic information resources. The proposed network, entitled Illinois Century Network, would link all higher education institutions in the state to…

  19. Requirements and principles for the implementation and construction of large-scale geographic information systems

    NASA Technical Reports Server (NTRS)

    Smith, Terence R.; Menon, Sudhakar; Star, Jeffrey L.; Estes, John E.

    1987-01-01

    This paper provides a brief survey of the history, structure and functions of 'traditional' geographic information systems (GIS), and then suggests a set of requirements that large-scale GIS should satisfy, together with a set of principles for their satisfaction. These principles, which include the systematic application of techniques from several subfields of computer science to the design and implementation of GIS and the integration of techniques from computer vision and image processing into standard GIS technology, are discussed in some detail. In particular, the paper provides a detailed discussion of questions relating to appropriate data models, data structures and computational procedures for the efficient storage, retrieval and analysis of spatially-indexed data.

  20. Automated Ecological Assessment of Physical Activity: Advancing Direct Observation.

    PubMed

    Carlson, Jordan A; Liu, Bo; Sallis, James F; Kerr, Jacqueline; Hipp, J Aaron; Staggs, Vincent S; Papa, Amy; Dean, Kelsey; Vasconcelos, Nuno M

    2017-12-01

    Technological advances provide opportunities for automating direct observations of physical activity, which allow for continuous monitoring and feedback. This pilot study evaluated the initial validity of computer vision algorithms for ecological assessment of physical activity. The sample comprised 6630 seconds per camera (three cameras in total) of video capturing up to nine participants engaged in sitting, standing, walking, and jogging in an open outdoor space while wearing accelerometers. Computer vision algorithms were developed to assess the number and proportion of people in sedentary, light, moderate, and vigorous activity, and group-based metabolic equivalents of tasks (MET)-minutes. Means and standard deviations (SD) of bias/difference values, and intraclass correlation coefficients (ICC) assessed the criterion validity compared to accelerometry separately for each camera. The number and proportion of participants sedentary and in moderate-to-vigorous physical activity (MVPA) had small biases (within 20% of the criterion mean) and the ICCs were excellent (0.82-0.98). Total MET-minutes were slightly underestimated by 9.3-17.1% and the ICCs were good (0.68-0.79). The standard deviations of the bias estimates were moderate-to-large relative to the means. The computer vision algorithms appeared to have acceptable sample-level validity (i.e., across a sample of time intervals) and are promising for automated ecological assessment of activity in open outdoor settings, but further development and testing is needed before such tools can be used in a diverse range of settings.
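
    The validity statistics used above, mean bias and the intraclass correlation coefficient, can be sketched as follows. This uses a simplified one-way ICC(1) rather than the exact two-way form such studies typically report, and the per-interval MET-minute values are made up for illustration.

```python
import numpy as np

def mean_bias(pred, ref):
    # Average signed difference between the two measurement methods.
    return float(np.mean(np.asarray(pred) - np.asarray(ref)))

def icc_oneway(ratings):
    # One-way random-effects ICC(1); a simplified stand-in for the
    # two-way forms usually reported in validity studies.
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    row_means = ratings.mean(axis=1)
    ms_between = k * np.sum((row_means - ratings.mean()) ** 2) / (n - 1)
    ms_within = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical per-interval MET-minutes: camera estimates vs. accelerometer.
accel = np.array([2.0, 5.0, 9.0, 14.0, 20.0, 27.0])
camera = accel + np.array([0.2, -0.1, 0.3, -0.2, 0.1, -0.3])
bias = mean_bias(camera, accel)                      # near zero here
icc = icc_oneway(np.column_stack([camera, accel]))   # near 1 for close agreement
```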

  1. Automated Ecological Assessment of Physical Activity: Advancing Direct Observation

    PubMed Central

    Carlson, Jordan A.; Liu, Bo; Sallis, James F.; Kerr, Jacqueline; Papa, Amy; Dean, Kelsey; Vasconcelos, Nuno M.

    2017-01-01

    Technological advances provide opportunities for automating direct observations of physical activity, which allow for continuous monitoring and feedback. This pilot study evaluated the initial validity of computer vision algorithms for ecological assessment of physical activity. The sample comprised 6630 seconds per camera (three cameras in total) of video capturing up to nine participants engaged in sitting, standing, walking, and jogging in an open outdoor space while wearing accelerometers. Computer vision algorithms were developed to assess the number and proportion of people in sedentary, light, moderate, and vigorous activity, and group-based metabolic equivalents of tasks (MET)-minutes. Means and standard deviations (SD) of bias/difference values, and intraclass correlation coefficients (ICC) assessed the criterion validity compared to accelerometry separately for each camera. The number and proportion of participants sedentary and in moderate-to-vigorous physical activity (MVPA) had small biases (within 20% of the criterion mean) and the ICCs were excellent (0.82–0.98). Total MET-minutes were slightly underestimated by 9.3–17.1% and the ICCs were good (0.68–0.79). The standard deviations of the bias estimates were moderate-to-large relative to the means. The computer vision algorithms appeared to have acceptable sample-level validity (i.e., across a sample of time intervals) and are promising for automated ecological assessment of activity in open outdoor settings, but further development and testing is needed before such tools can be used in a diverse range of settings. PMID:29194358

  2. Integrated 3-D vision system for autonomous vehicles

    NASA Astrophysics Data System (ADS)

    Hou, Kun M.; Shawky, Mohamed; Tu, Xiaowei

    1992-03-01

    Nowadays, autonomous vehicles have become a multidisciplinary field whose evolution is taking advantage of recent technological progress in computer architectures. As development tools become more sophisticated, the trend is toward more specialized, or even dedicated, architectures. In this paper, we focus on a parallel vision subsystem integrated in the overall system architecture. The system modules work in parallel, communicating through a hierarchical blackboard, an extension of the 'tuple space' from LINDA concepts, where they may exchange data or synchronization messages. The general-purpose processing elements are of different skills, built around 40 MHz i860 Intel RISC processors for high-level processing and pipelined systolic array processors based on PLAs or FPGAs for low-level processing.

  3. Computer vision camera with embedded FPGA processing

    NASA Astrophysics Data System (ADS)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a computer vision camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium-sized one equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a hardware description language (like VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
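
    The multi-scale Laplacian-of-Gaussian edge detector named above can be sketched in software. The paper implements it as an FPGA architecture; this NumPy version only illustrates the algorithm (filter, then find zero crossings of the response), with illustrative kernel sizes and scales.

```python
import numpy as np

def log_kernel(size, sigma):
    # Laplacian-of-Gaussian kernel, normalized to zero sum so that
    # flat image regions produce no response.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x**2 + y**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()

def convolve2d(img, k):
    # Naive 'valid' convolution, standing in for the FPGA pipeline.
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def zero_crossings(resp):
    # Edges sit where the LoG response changes sign between neighbors.
    h = np.signbit(resp[:, :-1]) != np.signbit(resp[:, 1:])
    v = np.signbit(resp[:-1, :]) != np.signbit(resp[1:, :])
    return h[:-1, :] | v[:, :-1]

# Multi-scale run on a synthetic step edge.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
edges = [zero_crossings(convolve2d(img, log_kernel(7, s))) for s in (1.0, 2.0)]
```

    A streaming hardware version computes the same convolution with line buffers and a fixed-point multiplier array, one pixel per clock.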

  4. Advances in display technology III; Proceedings of the Meeting, Los Angeles, CA, January 18, 19, 1983

    NASA Astrophysics Data System (ADS)

    Schlam, E.

    1983-01-01

    Human factors in visible displays are discussed, taking into account an introduction to color vision, a laser optometric assessment of visual display viewability, the quantification of color contrast, human performance evaluations of digital image quality, visual problems of office video display terminals, and contemporary problems in airborne displays. Other topics considered are related to electroluminescent technology, liquid crystal and related technologies, plasma technology, and display terminal and systems. Attention is given to the application of electroluminescent technology to personal computers, electroluminescent driving techniques, thin film electroluminescent devices with memory, the fabrication of very large electroluminescent displays, the operating properties of thermally addressed dye switching liquid crystal display, light field dichroic liquid crystal displays for very large area displays, and hardening military plasma displays for a nuclear environment.

  5. Present Vision--Future Vision.

    ERIC Educational Resources Information Center

    Fitterman, L. Jeffrey

    This paper addresses issues of current and future technology use for and by individuals with visual impairments and blindness in Florida. Present technology applications used in vision programs in Florida are individually described, including video enlarging, speech output, large inkprint, braille print, paperless braille, and tactual output…

  6. Research on three-dimensional reconstruction method based on binocular vision

    NASA Astrophysics Data System (ADS)

    Li, Jinlin; Wang, Zhihui; Wang, Minjun

    2018-03-01

    As a hot and difficult issue in computer vision, binocular stereo vision is an important form of computer vision, with broad application prospects in many fields such as aerial mapping, vision navigation, motion analysis and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction and stereo matching. In the camera calibration module, the internal parameters of a single camera are obtained by using the checkerboard calibration method of Zhang Zhengyou. For image feature extraction and stereo matching, the SURF operator among local feature operators and the SGBM algorithm among global matching algorithms are adopted respectively, and their performance is compared. After the feature points are matched, the correspondence between matching points and 3D object points can be established using the calibrated camera parameters, which yields the 3D information.
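
    The last step, going from matched points plus calibrated parameters to 3D coordinates, reduces for a rectified stereo pair to depth from disparity. A minimal sketch follows, with made-up calibration values (focal length, baseline, principal point) standing in for the calibrated parameters.

```python
import numpy as np

def triangulate(f, baseline, cx, cy, u_left, v, disparity):
    # Depth from disparity for a rectified pair: Z = f * B / d,
    # then back-project (u, v) through the left camera.
    z = f * baseline / disparity
    x = (u_left - cx) * z / f
    y = (v - cy) * z / f
    return np.array([x, y, z])

# Hypothetical calibrated rig: 700 px focal length, 0.12 m baseline.
f, B, cx, cy = 700.0, 0.12, 320.0, 240.0
p = triangulate(f, B, cx, cy, u_left=390.0, v=240.0, disparity=35.0)
# Z = 700 * 0.12 / 35 = 2.4 m; X = (390 - 320) * 2.4 / 700 = 0.24 m; Y = 0
```

    In practice the disparity would come from the SGBM disparity map or from SURF correspondences, and unrectified setups use a full triangulation with both projection matrices.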

  7. Machine learning and computer vision approaches for phenotypic profiling.

    PubMed

    Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J

    2017-01-02

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.

  8. Machine learning and computer vision approaches for phenotypic profiling

    PubMed Central

    Morris, Quaid

    2017-01-01

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. PMID:27940887

  9. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming

    Treesearch

    Philip A. Araman

    1990-01-01

    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  10. Night vision and electro-optics technology transfer, 1972 - 1981

    NASA Astrophysics Data System (ADS)

    Fulton, R. W.; Mason, G. F.

    1981-09-01

    The purpose of this special report, 'Night Vision and Electro-Optics Technology Transfer 1972-1981,' is threefold: To illustrate, through actual case histories, the potential for exploiting a highly developed and available military technology for solving non-military problems. To provide, in a layman's language, the principles behind night vision and electro-optical devices in order that an awareness may be developed relative to the potential for adopting this technology for non-military applications. To obtain maximum dollar return from research and development investments by applying this technology to secondary applications. This includes, but is not limited to, applications by other Government agencies, state and local governments, colleges and universities, and medical organizations. It is desired that this summary of Technology Transfer activities within Night Vision and Electro-Optics Laboratory (NV/EOL) will benefit those who desire to explore one of the vast technological resources available within the Defense Department and the Federal Government.

  11. Beyond Moore's Law: Harnessing spatial-digital disruptive technologies for Digital Earth

    NASA Astrophysics Data System (ADS)

    Foresman, Timothy W.

    2016-11-01

    Moore's law will reach its plateau by 2020. Big data, however, will continue to increase as the Internet of Things and social media converge into the new era of ‘huge data’. Disruptive technologies, including big data and cloud computing, are forces impacting business and government communities. Our collective future is suggested to align with the Digital Earth (DE) vision. The benefits of technological advances will be manifested in business performance improvements based on capitalizing on the locational attributes of corporate and government assets - the foundation of big data. Better governance and better business represent a key foundation for sustainability and therefore should be explicit DE guiding principles.

  12. GSFC Information Systems Technology Developments Supporting the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Hughes, Peter; Dennehy, Cornelius; Mosier, Gary; Smith, Dan; Rykowski, Lisa

    2004-01-01

    The Vision for Space Exploration will guide NASA's future human and robotic space activities. The broad range of human and robotic missions now being planned will require the development of new system-level capabilities enabled by emerging new technologies. Goddard Space Flight Center is actively supporting the Vision for Space Exploration in a number of program management, engineering and technology areas. This paper provides a brief background on the Vision for Space Exploration and a general overview of potential key Goddard contributions. In particular, this paper focuses on describing relevant GSFC information systems capabilities in architecture development; interoperable command, control and communications; and other applied information systems technology/research activities that are applicable to support the Vision for Space Exploration goals. Current GSFC development efforts and task activities are presented together with future plans.

  13. The Employment Effects of High-Technology: A Case Study of Machine Vision. Research Report No. 86-19.

    ERIC Educational Resources Information Center

    Chen, Kan; Stafford, Frank P.

    A case study of machine vision was conducted to identify and analyze the employment effects of high technology in general. (Machine vision is the automatic acquisition and analysis of an image to obtain desired information for use in controlling an industrial activity, such as the visual sensor system that gives eyes to a robot.) Machine vision as…

  14. Visions of Change: Information Technology, Education and Postmodernism.

    ERIC Educational Resources Information Center

    Conlon, Tom

    2000-01-01

    Encourages visionary questions relating to information technology and education. Describes the context of postmodernist change and discusses two contrasting visions of how education could change, paternalism and libertarianism. Concludes that teachers, learners, and communities need to articulate their own visions of education to ensure a…

  15. PICSiP: new system-in-package technology using a high bandwidth photonic interconnection layer for converged microsystems

    NASA Astrophysics Data System (ADS)

    Tekin, Tolga; Töpper, Michael; Reichl, Herbert

    2009-05-01

    Technological frontiers between semiconductor technology, packaging, and system design are disappearing. Scaling down geometries [1] alone does not provide improved performance, less power, smaller size, and lower cost. It will require "More than Moore" [2] through tighter integration of system-level components at the package level. System-in-Package (SiP) will deliver the efficient use of three dimensions (3D) through innovation in packaging and interconnect technology. A key bottleneck to the implementation of high-performance microelectronic systems, including SiP, is the lack of low-latency, high-bandwidth, and high-density off-chip interconnects. Challenges in achieving high-bandwidth chip-to-chip communication using electrical interconnects include the high losses in the substrate dielectric, reflections and impedance discontinuities, and susceptibility to crosstalk [3]. Obviously, the use of photonics to overcome these challenges and leverage low-latency, high-bandwidth communication will enable the vision of optical computing within next-generation architectures. Supercomputers of today offer sustained performance of more than petaflops, which can be increased by utilizing optical interconnects. Next-generation computing architectures are needed with ultra-low power consumption and ultra-high performance based on novel interconnection technologies. In this paper we discuss a CMOS-compatible underlying technology to enable next-generation optical computing architectures. By introducing a new optical layer within the 3D SiP, the development of converged microsystems and their deployment in next-generation optical computing architectures will be leveraged.

  16. No Photon Left Behind: Advanced Optics at ARPA-E for Buildings and Solar Energy

    NASA Astrophysics Data System (ADS)

    Branz, Howard M.

    2015-04-01

    Key technology challenges in building efficiency and solar energy utilization require transformational optics, plasmonics and photonics technologies. We describe advanced optical technologies funded by the Advanced Research Projects Agency - Energy. Buildings technologies include a passive daytime photonic cooler, infrared computer vision mapping for energy audit, and dual-band electrochromic windows based on plasmonic absorption. Solar technologies include novel hybrid energy converters that combine high-efficiency photovoltaics with concentrating solar thermal collection and storage. Because the marginal cost of thermal energy storage is low, these systems enable generation of inexpensive and dispatchable solar energy that can be deployed when the sun doesn't shine. The solar technologies under development include nanoparticle plasmonic spectrum splitting, Rugate filter interference structures and photovoltaic cells that can operate efficiently at over 400 °C.

  17. Smartphone, tablet computer and e-reader use by people with vision impairment.

    PubMed

    Crossland, Michael D; Silva, Rui S; Macedo, Antonio F

    2014-09-01

    Consumer electronic devices such as smartphones, tablet computers, and e-book readers have become far more widely used in recent years. Many of these devices contain accessibility features such as large print and speech. Anecdotal experience suggests people with vision impairment frequently make use of these systems. Here we survey people with self-identified vision impairment to determine their use of this equipment. An internet-based survey was advertised to people with vision impairment by word of mouth, social media, and online. Respondents were asked for demographic information, what devices they owned, what they used these devices for, and what accessibility features they used. One hundred and thirty-two complete responses were received. Twenty-six percent of the sample reported that they had no vision and the remainder reported they had low vision. One hundred and seven people (81%) reported using a smartphone. Those with no vision were as likely to use a smartphone or tablet as those with low vision. Speech was found useful by 59% of smartphone users. Fifty-one percent of smartphone owners used the camera and screen as a magnifier. Forty-eight percent of the sample used a tablet computer, and 17% used an e-book reader. The most frequently cited reasons for not using these devices were cost and lack of interest. Smartphones, tablet computers, and e-book readers can be used by people with vision impairment. Speech is used by people with low vision as well as those with no vision. Many of our (self-selected) group used their smartphone camera and screen as a magnifier, and others used the camera flash as a spotlight. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  18. Machine vision for real time orbital operations

    NASA Technical Reports Server (NTRS)

    Vinz, Frank L.

    1988-01-01

    Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputting visual data to a computer so that image processing may be achieved for real-time control of these orbital operations. A technique has resulted from this research which reduces computer memory requirements and greatly increases typical computational speed such that it has the potential for development into a real-time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).

  19. Development of a Wireless Computer Vision Instrument to Detect Biotic Stress in Wheat

    PubMed Central

    Casanova, Joaquin J.; O'Shaughnessy, Susan A.; Evett, Steven R.; Rush, Charles M.

    2014-01-01

    Knowledge of crop abiotic and biotic stress is important for optimal irrigation management. While spectral reflectance and infrared thermometry provide a means to quantify crop stress remotely, these measurements can be cumbersome. Computer vision offers an inexpensive way to remotely detect crop stress independent of vegetation cover. This paper presents a technique using computer vision to detect disease stress in wheat. Digital images of differentially stressed wheat were segmented into soil and vegetation pixels using expectation maximization (EM). In the first season, the algorithm to segment vegetation from soil and distinguish between healthy and stressed wheat was developed and tested using digital images taken in the field and later processed on a desktop computer. In the second season, a wireless camera with near real-time computer vision capabilities was tested in conjunction with the conventional camera and desktop computer. For wheat irrigated at different levels and inoculated with wheat streak mosaic virus (WSMV), vegetation hue determined by the EM algorithm showed significant effects from irrigation level and infection. Unstressed wheat had a higher hue (118.32) than stressed wheat (111.34). In the second season, the hue and cover measured by the wireless computer vision sensor showed significant effects from infection (p = 0.0014), as did the conventional camera (p < 0.0001). Vegetation hue obtained through a wireless computer vision system in this study is a viable option for determining biotic crop stress in irrigation scheduling. Such a low-cost system could be suitable for use in the field in automated irrigation scheduling applications. PMID:25251410
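As a rough illustration of the hue-based stress measure described above, the sketch below computes per-pixel hue from an RGB image and averages it over vegetation pixels. It is a minimal stand-in, not the authors' pipeline: the expectation-maximization segmentation is replaced by a simple hue-range mask, and the assumed green hue range and the 115° decision cutoff (midpoint of the reported 111.34 and 118.32 values) are hypothetical.

```python
import numpy as np

def rgb_to_hue(img):
    """Per-pixel hue in degrees (0-360) for an RGB float image in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(axis=-1), img.min(axis=-1)
    d = np.where(mx == mn, 1e-9, mx - mn)      # avoid division by zero on gray pixels
    h = np.zeros_like(mx)
    h = np.where(mx == r, ((g - b) / d) % 6, h)
    h = np.where(mx == g, (b - r) / d + 2, h)
    h = np.where(mx == b, (r - g) / d + 4, h)
    return (h * 60) % 360

def mean_vegetation_hue(img):
    """Average hue over 'vegetation' pixels, here crudely masked by a hue range
    (the paper instead segments vegetation from soil with expectation maximization)."""
    hue = rgb_to_hue(img)
    veg = (hue > 60) & (hue < 180)             # assumed green-ish range
    return hue[veg].mean() if veg.any() else float('nan')

def looks_stressed(img, cutoff=115.0):
    """Hypothetical decision rule: lower vegetation hue suggests stress."""
    return mean_vegetation_hue(img) < cutoff
```

In practice the image would first be segmented into soil and vegetation pixels, as in the paper, before averaging hue.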

  20. Computer vision-based analysis of foods: a non-destructive colour measurement tool to monitor quality and safety.

    PubMed

    Mogol, Burçe Ataç; Gökmen, Vural

    2014-05-01

    Computer vision-based image analysis has been widely used in food industry to monitor food quality. It allows low-cost and non-contact measurements of colour to be performed. In this paper, two computer vision-based image analysis approaches are discussed to extract mean colour or featured colour information from the digital images of foods. These types of information may be of particular importance as colour indicates certain chemical changes or physical properties in foods. As exemplified here, the mean CIE a* value or browning ratio determined by means of computer vision-based image analysis algorithms can be correlated with acrylamide content of potato chips or cookies. Or, porosity index as an important physical property of breadcrumb can be calculated easily. In this respect, computer vision-based image analysis provides a useful tool for automatic inspection of food products in a manufacturing line, and it can be actively involved in the decision-making process where rapid quality/safety evaluation is needed. © 2013 Society of Chemical Industry.
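For concreteness, a mean CIE a* value can be extracted from a digital image with a standard sRGB → XYZ → CIELAB conversion (D65 white point), as sketched below. This is a generic colourimetric pipeline, not necessarily the calibration the authors used; any camera-specific colour correction is omitted.

```python
import numpy as np

def srgb_to_lab(img):
    """Convert an sRGB image (floats in [0, 1]) to CIELAB, assuming a D65 white point."""
    # inverse sRGB gamma
    lin = np.where(img <= 0.04045, img / 12.92, ((img + 0.055) / 1.055) ** 2.4)
    # linear RGB -> XYZ (sRGB primaries, D65)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ M.T
    xyz /= np.array([0.95047, 1.0, 1.08883])   # normalize by the white point
    eps = (6 / 29) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def mean_a_star(img):
    """Mean CIE a* over the image, e.g. as a browning indicator for chips or cookies."""
    return float(srgb_to_lab(img)[..., 1].mean())
```

A rising mean a* (toward red) over a batch of product images could then be tracked against acrylamide measurements, as the abstract describes.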

  1. Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants.

    PubMed

    Navarro, Pedro J; Pérez, Fernando; Weiss, Julia; Egea-Cortines, Marcos

    2016-05-05

    Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple; however, data handling and analysis are not as developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to solve automatic phenotype data analysis in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions and trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully, but they may require different ML algorithms for segmentation.
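The kNN step can be illustrated with a minimal pixel classifier: each pixel's feature vector (e.g. its colour channels) is labelled by majority vote among its k nearest training samples. The sketch below is a generic NumPy kNN, not the authors' implementation; the feature dimensions and k are placeholders.

```python
import numpy as np

def knn_predict(train_X, train_y, X, k=3):
    """Classify each row of X by majority vote among the k nearest rows of train_X."""
    # pairwise Euclidean distances, shape (n_query, n_train)
    d = np.linalg.norm(X[:, None, :] - train_X[None, :, :], axis=-1)
    nearest = np.argsort(d, axis=1)[:, :k]         # indices of the k closest samples
    votes = train_y[nearest]
    return np.array([np.bincount(v).argmax() for v in votes])
```

Applied to segmentation, train_X would hold labelled pixel samples (plant vs. background) and X the pixels of a new image.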

  2. Real-time tracking using stereo and motion: Visual perception for space robotics

    NASA Technical Reports Server (NTRS)

    Nishihara, H. Keith; Thomas, Hans; Huber, Eric; Reid, C. Ann

    1994-01-01

    The state-of-the-art in computing technology is rapidly attaining the performance necessary to implement many early vision algorithms at real-time rates. This new capability is helping to accelerate progress in vision research by improving our ability to evaluate the performance of algorithms in dynamic environments. In particular, we are becoming much more aware of the relative stability of various visual measurements in the presence of camera motion and system noise. This new processing speed is also allowing us to raise our sights toward accomplishing much higher-level processing tasks, such as figure-ground separation and active object tracking, in real-time. This paper describes a methodology for using early visual measurements to accomplish higher-level tasks; it then presents an overview of the high-speed accelerators developed at Teleos to support early visual measurements. The final section describes the successful deployment of a real-time vision system to provide visual perception for the Extravehicular Activity Helper/Retriever robotic system in tests aboard NASA's KC135 reduced gravity aircraft.

  3. Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing.

    PubMed

    Kriegeskorte, Nikolaus

    2015-11-24

    Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.

  4. Proceedings of the Augmented VIsual Display (AVID) Research Workshop

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K. (Editor); Sweet, Barbara T. (Editor)

    1993-01-01

    The papers, abstracts, and presentations were presented at a three day workshop focused on sensor modeling and simulation, and image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.

  5. Mapping Parameterized Dataflow Graphs onto FPGA Platforms (Preprint)

    DTIC Science & Technology

    2014-02-01

    [Full abstract not available in this record; the indexed text consists of OCR fragments from the report documentation page and reference list. Recoverable details: authors Shen, Nimish Sane, William Plishker, and Shuvra S. Bhattacharyya (University of Maryland), with Hojin Kee (National Instruments); cited references include K. Rodyushkin, A. Kuranov, and V. Eruhimov, "Computer vision workload analysis: Case study of video surveillance systems," Intel Technology Journal, 9, 2005.]

  6. Job-shop scheduling applied to computer vision

    NASA Astrophysics Data System (ADS)

    Sebastian y Zuniga, Jose M.; Torres-Medina, Fernando; Aracil, Rafael; Reinoso, Oscar; Jimenez, Luis M.; Garcia, David

    1997-09-01

    This paper presents a method for minimizing the total elapsed time spent by n tasks running on m different processors working in parallel. The developed algorithm not only minimizes the total elapsed time but also reduces the idle time and the waiting time of in-process tasks. This is very important in applications of computer vision in which the time to finish the total process is particularly critical: quality control in industrial inspection, real-time computer vision, and guided robots. The scheduling algorithm is based on the use of two matrices obtained from the precedence relationships between tasks, and on the data derived from those matrices. The developed scheduling algorithm has been tested in a quality-control application using computer vision. The results obtained in applying different image processing algorithms have been satisfactory.
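The abstract does not give the matrix details, but the general idea, scheduling precedence-constrained tasks on m parallel processors while keeping makespan and idle time low, can be sketched as a greedy list scheduler. The code below is an illustrative stand-in, not the paper's method.

```python
def schedule(durations, preds, m):
    """Greedy list scheduling: each ready task starts as early as its
    predecessors and the least-loaded processor allow. Returns per-task
    start and finish times and the makespan."""
    n = len(durations)
    succs = [[] for _ in range(n)]
    indeg = [len(p) for p in preds]
    for t, ps in enumerate(preds):
        for p in ps:
            succs[p].append(t)
    ready = [t for t in range(n) if indeg[t] == 0]   # tasks with no predecessors
    proc_free = [0.0] * m                            # when each processor frees up
    start, finish = [0.0] * n, [0.0] * n
    while ready:
        t = ready.pop(0)
        est = max((finish[p] for p in preds[t]), default=0.0)   # earliest start
        j = min(range(m), key=lambda i: max(proc_free[i], est))  # best processor
        start[t] = max(proc_free[j], est)
        finish[t] = start[t] + durations[t]
        proc_free[j] = finish[t]
        for s in succs[t]:                           # release successors
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return start, finish, max(finish)
```

For example, two independent tasks of lengths 2 and 3 feeding a final task of length 2 on two processors yield a makespan of 5.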

  7. The Cognition and Neuroergonomics (CaN) Collaborative Technology Alliance (CTA): Scientific Vision, Approach, and Translational Paths

    DTIC Science & Technology

    2012-09-01

    [Full abstract not available in this record; the indexed text consists of OCR fragments from the report documentation page, which repeat the report title, "The Cognition and Neuroergonomics (CaN) Collaborative Technology Alliance (CTA): Scientific Vision, Approach, and Translational Paths," identify Kelvin S. Oie as an author, and date the report September 2012 (report type: Final).]

  8. The study of stereo vision technique for the autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Li, Pei; Wang, Xi; Wang, Jiang-feng

    2015-08-01

    Stereo vision technology uses two or more cameras to recover 3D information of the field of view. It can effectively help the autonomous navigation system of an unmanned vehicle judge pavement conditions within the field of view and measure obstacles on the road. In this paper, stereo vision for obstacle-avoidance measurement in autonomous vehicles is studied, and the key techniques are analyzed and discussed. The system hardware is built and the software is debugged, and the measurement performance is demonstrated with measured data. Experiments show that stereo vision can effectively reconstruct 3D structure within the field of view and provide the basis for pavement condition judgment. Compared with the navigation radar used in unmanned-vehicle measuring systems, the stereo vision system has the advantages of low cost, measuring distance, and so on, and it has a good application prospect.
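The geometric relation behind such stereo measurement is triangulation: for a rectified camera pair with focal length f (in pixels) and baseline B (in metres), a pixel disparity d gives depth Z = f·B / d. A minimal sketch, with purely illustrative parameter values:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo triangulation for a rectified pair: Z = f * B / d."""
    d = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / d
```

For instance, with f = 700 px and B = 0.12 m, a 35-pixel disparity corresponds to a depth of 2.4 m; the inverse relation also shows why depth resolution degrades for distant objects.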

  9. Enhanced computer vision with Microsoft Kinect sensor: a review.

    PubMed

    Han, Jungong; Shao, Ling; Xu, Dong; Shotton, Jamie

    2013-10-01

    With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers.

  10. Texture and art with deep neural networks.

    PubMed

    Gatys, Leon A; Ecker, Alexander S; Bethge, Matthias

    2017-10-01

    Although the study of biological vision and the study of computer vision attempt to understand powerful visual information processing from different angles, they have a long history of informing each other. Recent advances in texture synthesis that were motivated by visual neuroscience have led to a substantial advance in image synthesis and manipulation in computer vision using convolutional neural networks (CNNs). Here, we review these recent advances and discuss how they can in turn inspire new research in visual perception and computational neuroscience. Copyright © 2017. Published by Elsevier Ltd.
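The texture descriptor at the heart of this line of work is the Gram matrix of CNN feature maps: spatial positions are averaged away, leaving channel co-activation statistics that characterize texture. A minimal NumPy sketch (the normalization convention varies between papers):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature-map stack: G[i, j] is the spatially
    averaged product of channels i and j, discarding spatial layout."""
    C = features.shape[0]
    F = features.reshape(C, -1)       # flatten each channel map to a vector
    return (F @ F.T) / F.shape[1]     # normalize by the number of positions
```

In texture synthesis, a new image is optimized so that its Gram matrices at several CNN layers match those of the target texture.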

  11. Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery

    NASA Astrophysics Data System (ADS)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2012-10-01

    In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application is a combination of the technologies of MonoSLAM (Single Camera Simultaneous Localization and Mapping) and computer graphics. A stereo vision system is developed to construct geometric data of human face for registration with CT images. Reliability and accuracy of the application is enhanced by the use of fiduciary markers fixed to the skull. The MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter. The fiduciary markers provide reference when the AR marker is invisible to the camera. Relationship between the markers on the face and the augmented reality marker is obtained by a registration procedure by the stereo vision system and is updated on-line. A commercially available Android based tablet PC equipped with a 320×240 front-facing camera was used for implementation. The system is able to provide a live view of the patient overlaid by the solid models of tumors or anatomical structures, as well as the missing part of the tool inside the skull.
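MonoSLAM's camera tracking rests on the extended Kalman filter's predict/update cycle. The sketch below shows that cycle for a deliberately simplified case, a linear constant-velocity model tracking one coordinate from position observations; the real system maintains a full 6-DoF pose and map state, and all noise parameters here are arbitrary.

```python
import numpy as np

def kf_track(measurements, dt=1.0, q=1e-3, r=0.1):
    """Linear Kalman filter over a [position, velocity] state; returns the
    filtered position estimates. Stands in for the EKF cycle that MonoSLAM
    runs on camera pose."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q, R = q * np.eye(2), np.array([[r]])
    x, P = np.zeros(2), np.eye(2)
    estimates = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q                  # predict
        S = H @ P @ H.T + R                            # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
        x = x + K @ (z - H @ x)                        # update with innovation
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0]))
    return estimates
```

Fed a steady stream of observations of a stationary target, the estimate converges to the target position as the fiduciary markers would anchor the pose in the paper's setup.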

  12. MIT-NASA Workshop: Transformational Technologies

    NASA Technical Reports Server (NTRS)

    Mankins, J. C. (Editor); Christensen, C. B.; Gresham, E. C.; Simmons, A.; Mullins, C. A.

    2005-01-01

    As a space faring nation, we are at a critical juncture in the evolution of space exploration. NASA has announced its Vision for Space Exploration, a vision of returning humans to the Moon, sending robots and eventually humans to Mars, and exploring the outer solar system via automated spacecraft. However, mission concepts have become increasingly complex, with the potential to yield a wealth of scientific knowledge. Meanwhile, there are significant resource challenges to be met. Launch costs remain a barrier to routine space flight; the ever-changing fiscal and political environments can wreak havoc on mission planning; and technologies are constantly improving, and systems that were state of the art when a program began can quickly become outmoded before a mission is even launched. This Conference Publication describes the workshop and featured presentations by world-class experts presenting leading-edge technologies and applications in the areas of power and propulsion; communications; automation, robotics, computing, and intelligent systems; and transformational techniques for space activities. Workshops such as this one provide an excellent medium for capturing the broadest possible array of insights and expertise, learning from researchers in universities, national laboratories, NASA field Centers, and industry to help better our future in space.

  13. Impact of computer use on children's vision.

    PubMed

    Kozeis, N

    2009-10-01

    Today, millions of children use computers on a daily basis. Extensive viewing of the computer screen can lead to eye discomfort, fatigue, blurred vision and headaches, dry eyes and other symptoms of eyestrain. These symptoms may be caused by poor lighting, glare, an improper work station set-up, vision problems of which the person was not previously aware, or a combination of these factors. Children can experience many of the same symptoms related to computer use as adults. However, some unique aspects of how children use computers may make them more susceptible than adults to the development of these problems. In this study, the most common eye symptoms related to computer use in childhood, the possible causes and ways to avoid them are reviewed.

  14. New ultraportable display technology and applications

    NASA Astrophysics Data System (ADS)

    Alvelda, Phillip; Lewis, Nancy D.

    1998-08-01

    MicroDisplay devices are based on a combination of technologies rooted in the extreme integration capability of conventionally fabricated CMOS active-matrix liquid crystal display substrates. Customized diffraction grating and optical distortion correction technology for lens-system compensation allow the elimination of many lenses and systems-level components. The MicroDisplay Corporation's miniature integrated information display technology is rapidly leading to many new defense and commercial applications. There are no moving parts in MicroDisplay substrates, and the fabrication of the color generating gratings, already part of the CMOS circuit fabrication process, is effectively cost and manufacturing process-free. The entire suite of the MicroDisplay Corporation's technologies was devised to create a line of application-specific integrated circuit single-chip display systems with integrated computing, memory, and communication circuitry. Next-generation portable communication, computer, and consumer electronic devices such as truly portable monitor and TV projectors, eyeglass and head mounted displays, pagers and Personal Communication Services hand-sets, and wristwatch-mounted video phones are among the many target commercial markets for MicroDisplay technology. Defense applications range from maintenance and repair support, to night-vision systems, to portable projectors for mobile command and control centers.

  15. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)

    PubMed Central

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. SoPC technology provides great convenience for accessing many hardware devices, such as DDRII, SSRAM, and Flash, through IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pairs of many different sizes, up to a maximum of 512 K pixels. The machine is designed to focus on real-time stereo vision applications and offers good performance and high efficiency. With a hardware FPGA clock of 90 MHz, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity pixels. PMID:23459385
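The SAD matching that the machine implements in hardware can be expressed in a few lines of software for reference: for each candidate disparity, sum absolute differences over a matching window and keep the lowest-cost winner per pixel. The NumPy sketch below uses the paper's 5 × 5 window and 64-disparity ceiling but is otherwise a generic winner-takes-all matcher, not the FPGA design.

```python
import numpy as np

def sad_disparity(left, right, max_disp=64, win=5):
    """Winner-takes-all SAD block matching over grayscale float images."""
    h, w = left.shape
    r = win // 2
    L = np.pad(left, r, mode='edge')
    best = np.zeros((h, w), dtype=np.int32)
    best_cost = np.full((h, w), np.inf)
    for d in range(max_disp):
        R = np.pad(np.roll(right, d, axis=1), r, mode='edge')
        ad = np.abs(L - R)                    # absolute differences at shift d
        cost = np.zeros((h, w))
        for dy in range(win):                 # box-sum over the window; an
            for dx in range(win):             # integral image would be faster
                cost += ad[dy:dy + h, dx:dx + w]
        better = cost < best_cost             # keep the lowest-cost disparity
        best[better], best_cost[better] = d, cost[better]
    return best
```

The FPGA version evaluates these window sums in parallel pipelines, which is what makes real-time 640 × 480 operation feasible.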

  16. The flight telerobotic servicer and technology transfer

    NASA Technical Reports Server (NTRS)

    Andary, James F.; Bradford, Kayland Z.

    1991-01-01

    The Flight Telerobotic Servicer (FTS) project at the Goddard Space Flight Center is developing an advanced telerobotic system to assist in and reduce crew extravehicular activity (EVA) for Space Station Freedom (SSF). The FTS will provide a telerobotic capability in the early phases of the SSF program and will be employed for assembly, maintenance, and inspection applications. The current state of space technology and the general nature of the FTS tasks dictate that the FTS be designed with sophisticated teleoperational capabilities for its internal primary operating mode. However, technologies such as advanced computer vision and autonomous planning techniques would greatly enhance the FTS capabilities to perform autonomously in less structured work environments. Another objective of the FTS program is to accelerate technology transfer from research to U.S. industry.

  17. Utilization of the Space Vision System as an Augmented Reality System For Mission Operations

    NASA Technical Reports Server (NTRS)

    Maida, James C.; Bowen, Charles

    2003-01-01

    Augmented reality is a technique whereby computer generated images are superimposed on live images for visual enhancement. Augmented reality can also be characterized as dynamic overlays when computer generated images are registered with moving objects in a live image. This technique has been successfully implemented, with low to medium levels of registration precision, in an NRA funded project entitled, "Improving Human Task Performance with Luminance Images and Dynamic Overlays". Future research is already being planned to also utilize a laboratory-based system where more extensive subject testing can be performed. However successful this might be, the problem will still be whether such a technology can be used with flight hardware. To answer this question, the Canadian Space Vision System (SVS) will be tested as an augmented reality system capable of improving human performance where the operation requires indirect viewing. This system has already been certified for flight and is currently flown on each shuttle mission for station assembly. Successful development and utilization of this system in a ground-based experiment will expand its utilization for on-orbit mission operations. Current research and development regarding the use of augmented reality technology is being simulated using ground-based equipment. This is an appropriate approach for development of symbology (graphics and annotation) optimal for human performance and for development of optimal image registration techniques. It is anticipated that this technology will become more pervasive as it matures. Because we know what and where almost everything is on ISS, this reduces the registration problem and improves the computer model of that reality, making augmented reality an attractive tool, provided we know how to use it. This is the basis for current research in this area. However, there is a missing element to this process. 
It is the link from this research to the current ISS video system and to flight hardware capable of utilizing this technology. This is the basis for this proposed Space Human Factors Engineering project, the determination of the display symbology within the performance limits of the Space Vision System that will objectively improve human performance. This utilization of existing flight hardware will greatly reduce the costs of implementation for flight. Besides being used onboard shuttle and space station and as a ground-based system for mission operational support, it also has great potential for science and medical training and diagnostics, remote learning, team learning, video/media conferencing, and educational outreach.

  18. Operational Assessment of Color Vision

    DTIC Science & Technology

    2016-06-20

    [Full abstract not available in this record; the indexed text consists of OCR fragments from the report documentation page. Recoverable details: subject terms include color vision, aviation, cone contrast test, Colour Assessment & Diagnosis, color Dx, and OBVA; the fragments note that color-coded symbologies are frequently used to aid or direct critical activities such as aircraft landing approaches or railroad right-of-way designations, and that computer-generated display systems have facilitated the development of computer-based, automated tests of color vision, including a United Kingdom test whose name is truncated in the record.]

  19. Technology for NASA's Planetary Science Vision 2050.

    NASA Technical Reports Server (NTRS)

    Lakew, B.; Amato, D.; Freeman, A.; Falker, J.; Turtle, Elizabeth; Green, J.; Mackwell, S.; Daou, D.

    2017-01-01

    NASA's Planetary Science Division (PSD) initiated and sponsored a very successful community workshop held from Feb. 27 to Mar. 1, 2017 at NASA Headquarters. The purpose of the workshop was to develop a vision of planetary science research and exploration for the next three decades, until 2050. This abstract summarizes some of the salient technology needs discussed during the three-day workshop and at a technology panel on the final day. It is not meant to be a final report on technology to achieve the science vision for 2050.

  20. Review on Microstructure Analysis of Metals and Alloys Using Image Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Rekha, Suganthini; Bupesh Raja, V. K.

    2017-05-01

    Metals and alloys find vast application in the engineering and domestic sectors. Their mechanical properties are influenced by their microstructure, so microstructural investigation is very critical. Traditionally the microstructure is studied using an optical microscope after suitable metallurgical preparation. Over the past few decades, computers have been applied to the capture and analysis of optical micrographs. The advent of software for digital image processing and computer vision has been a boon to the analysis of microstructure. In this paper, the literature on the various developments in microstructural analysis is surveyed. The conventional optical microscope is complemented by the use of the Scanning Electron Microscope (SEM) and other high-end equipment.
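A simple example of the quantitative measurement that image analysis adds to micrograph inspection is a phase area fraction obtained by grayscale thresholding. This is an illustrative sketch only: the fixed threshold is hypothetical, and real workflows typically use adaptive methods such as Otsu's.

```python
import numpy as np

def phase_fraction(gray, thresh=128):
    """Area fraction of the dark phase in a grayscale micrograph,
    via a fixed, hypothetical intensity threshold."""
    gray = np.asarray(gray)
    return float((gray < thresh).mean())
```

An image that is half dark phase and half bright matrix yields a fraction of 0.5, which can then be related to properties such as hardness.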

  1. Colour based fire detection method with temporal intensity variation filtration

    NASA Astrophysics Data System (ADS)

    Trambitckii, K.; Anding, K.; Musalimov, V.; Linß, G.

    2015-02-01

    The development of video and computing technologies and of computer vision makes automatic fire detection from video information possible. Within this project, different algorithms were implemented to find a more efficient way of detecting fire. This article describes a colour-based fire detection algorithm. Colour information alone, however, is not enough to detect fire reliably, mainly because the scene may contain many objects whose colour is similar to that of fire. The temporal intensity variation of pixels, averaged over a series of several frames, is used to separate such objects from actual fire. The algorithm works robustly and was realised as a computer program using the OpenCV library.
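The two stages described, a colour rule plus a temporal-variation filter, can be sketched as below in NumPy. The specific colour rule (R > G > B with a red threshold) and the variance threshold are assumptions standing in for the article's unstated parameters.

```python
import numpy as np

def fire_colour_mask(frame, r_min=180):
    """Colour stage: flame-like pixels tend to satisfy R > G > B with high red
    (thresholds here are hypothetical)."""
    r, g, b = (frame[..., i].astype(float) for i in range(3))
    return (r > r_min) & (r > g) & (g > b)

def flicker_mask(frames, std_min=20.0):
    """Temporal stage: keep pixels whose intensity varies across the frame
    buffer, rejecting static fire-coloured objects."""
    intensity = frames.astype(float).mean(axis=-1)     # (T, H, W) grayscale
    return intensity.std(axis=0) > std_min

def detect_fire(frames):
    """Combine the colour rule on the latest frame with the temporal filter."""
    return fire_colour_mask(frames[-1]) & flicker_mask(frames)
```

A fire-coloured but static object (e.g. a red sign) passes the colour stage yet fails the flicker stage, which is exactly the false-positive case the temporal filter targets.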

  2. Issues central to a useful image understanding environment

    NASA Astrophysics Data System (ADS)

    Beveridge, J. Ross; Draper, Bruce A.; Hanson, Allen R.; Riseman, Edward M.

    1992-04-01

    A recent DARPA initiative has sparked interest in software environments for computer vision. The goal is a single environment to support both basic research and technology transfer. This paper lays out six fundamental attributes such a system must possess: (1) support for both C and Lisp, (2) extensibility, (3) data sharing, (4) data query facilities tailored to vision, (5) graphics, and (6) code sharing. The first three attributes fundamentally constrain the system design. Support for both C and Lisp demands some form of database or data-store for passing data between languages. Extensibility demands that system support facilities, such as spatial retrieval of data, be readily extended to new user-defined datatypes. Finally, data sharing demands that data saved by one user, including data of a user-defined type, must be readable by another user.

  3. Integration of USB and firewire cameras in machine vision applications

    NASA Astrophysics Data System (ADS)

    Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard

    1999-08-01

Digital cameras have been around for many years, but a new breed of consumer-market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. Many issues must be addressed, such as image quality, maintainable frame rates, image size and resolution, operating system support, and ease of software integration. This paper briefly describes a couple of the consumer digital standards, and then discusses some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.

  4. Computer Vision Syndrome: Implications for the Occupational Health Nurse.

    PubMed

    Lurati, Ann Regina

    2018-02-01

    Computers and other digital devices are commonly used both in the workplace and during leisure time. Computer vision syndrome (CVS) is a new health-related condition that negatively affects workers. This article reviews the pathology of and interventions for CVS with implications for the occupational health nurse.

  5. Traffic Monitor

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Mestech's X-15 "Eye in the Sky," a traffic monitoring system, incorporates NASA imaging and robotic vision technology. A camera or "sensor box" is mounted in a housing. The sensor detects vehicles approaching an intersection and sends the information to a computer, which controls the traffic light according to the traffic rate. Jet Propulsion Laboratory technical support packages aided in the company's development of the system. The X-15's "smart highway" can also be used to count vehicles on a highway and compute the number in each lane and their speeds, important information for freeway control engineers. Additional applications are in airport and railroad operations. The system is intended to replace loop-type traffic detectors.

  6. A fast two-plus-one phase-shifting algorithm for high-speed three-dimensional shape measurement system

    NASA Astrophysics Data System (ADS)

    Wang, Wenyun; Guo, Yingfu

    2008-12-01

Phase-shifting methods for 3-D shape measurement have long been employed in optical metrology for their speed and accuracy. For real-time, accurate 3-D shape measurement, a four-step phase-shifting algorithm, which has the advantage of symmetry, is a good choice; however, its measurement error is sensitive to any fringe-image errors caused by sources such as motion blur. To alleviate this problem, a fast two-plus-one phase-shifting algorithm is proposed in this paper. This kind of technology will benefit many applications such as medical imaging, gaming, animation, computer vision, and computer graphics.
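The abstract does not reproduce the algorithm's equations. In the commonly used two-plus-one formulation (two fringe images phase-shifted by 90 degrees plus one uniform-intensity image), the wrapped phase can be recovered as below; this is a generic sketch of that formulation, not necessarily the exact variant proposed in the paper:

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Two-plus-one phase retrieval (common formulation, assumed here).

    Fringe model: I1 = I' + I'' * cos(phi), I2 = I' + I'' * sin(phi),
    and I3 = I' is the flat (uniform) image. Subtracting I3 removes the
    background, so phi follows directly from the two-quadrant ratio.
    Returns the wrapped phase in (-pi, pi]; unwrapping is a separate step.
    """
    return np.arctan2(i2 - i3, i1 - i3)
```

Only three images per measurement are needed, which is what makes the approach attractive for high-speed capture where motion between frames must be minimized.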

  7. A multidisciplinary approach to solving computer related vision problems.

    PubMed

    Long, Jennifer; Helland, Magne

    2012-09-01

This paper proposes a multidisciplinary approach to solving computer-related vision issues by including optometry as part of the problem-solving team. Computer workstation design is increasing in complexity. At least ten different professions contribute to workstation design or provide advice to improve worker comfort, safety and efficiency. Optometrists have a role in identifying and solving computer-related vision issues and in prescribing appropriate optical devices. However, advice given by optometrists to improve visual comfort may conflict with other requirements and demands within the workplace. A multidisciplinary approach has been advocated for solving computer-related vision issues. There are opportunities for optometrists to collaborate with ergonomists, who coordinate information from physical, cognitive and organisational disciplines to enact holistic solutions to problems. This paper proposes a model of collaboration and gives examples of successful partnerships at a number of professional levels, including individual relationships between optometrists and ergonomists when they have mutual clients/patients, in undergraduate and postgraduate education, and in research. There is also scope for dialogue between the optometry and ergonomics professional associations. A multidisciplinary approach offers the opportunity to solve computer-related vision issues in a cohesive, rather than fragmented, way. Further exploration is required to understand the barriers to these professional relationships. © 2012 The College of Optometrists.

  8. Automated decoding of facial expressions reveals marked differences in children when telling antisocial versus prosocial lies.

    PubMed

    Zanette, Sarah; Gao, Xiaoqing; Brunet, Megan; Bartlett, Marian Stewart; Lee, Kang

    2016-10-01

The current study used computer vision technology to examine the nonverbal facial expressions of children (6-11 years old) telling antisocial and prosocial lies. Children in the antisocial lying group completed a temptation resistance paradigm where they were asked not to peek at a gift being wrapped for them. All children peeked at the gift and subsequently lied about their behavior. Children in the prosocial lying group were given an undesirable gift and asked if they liked it. All children lied about liking the gift. Nonverbal behavior was analyzed using the Computer Expression Recognition Toolbox (CERT), which employs the Facial Action Coding System (FACS), to automatically code children's facial expressions while lying. Using CERT, children's facial expressions during antisocial and prosocial lying were accurately and reliably differentiated significantly above chance-level accuracy. The basic expressions of emotion that distinguished antisocial lies from prosocial lies were joy and contempt. Children expressed joy more in prosocial lying than in antisocial lying. Girls showed more joy and less contempt compared with boys when they told prosocial lies. Boys showed more contempt when they told prosocial lies than when they told antisocial lies. The key action units (AUs) that differentiate children's antisocial and prosocial lies are blink/eye closure, lip pucker, and lip raise on the right side. Together, these findings indicate that children's facial expressions differ while telling antisocial versus prosocial lies. The reliability of CERT in detecting such differences in facial expression suggests the viability of using computer vision technology in deception research. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Navigation studies based on the ubiquitous positioning technologies

    NASA Astrophysics Data System (ADS)

    Ye, Lei; Mi, Weijie; Wang, Defeng

    2007-11-01

This paper summarizes present-day positioning technologies: absolute and relative positioning, indoor and outdoor positioning, and active and passive positioning. Global Navigation Satellite System (GNSS) technologies, including GPS, GLONASS, Galileo and BD-1/2, are introduced as the omnipresent outdoor positioning technologies. After an analysis of the shortcomings of GNSS, indoor positioning technologies are discussed and compared, including A-GPS, cellular networks, infrared, electromagnetism, computer vision cognition, embedded pressure sensors, ultrasonic, RFID (Radio Frequency IDentification), Bluetooth, WLAN, etc. The concept and characteristics of ubiquitous positioning are then proposed. After contrasting and selecting ubiquitous positioning technologies following a systems-engineering methodology, a navigation system model based on an integrated indoor-outdoor positioning solution is proposed and simulated in the Galileo Demonstration for World Expo Shanghai project. In conclusion, the prospects of navigation based on ubiquitous positioning are presented, particularly for meeting the public's need to acquire location information.

  10. Iris features-based heart disease diagnosis by computer vision

    NASA Astrophysics Data System (ADS)

    Nguchu, Benedictor A.; Li, Li

    2017-07-01

The study takes advantage of several recent breakthroughs in computer vision technology to develop a new iris-based biomedical platform that processes iris images for early detection of heart disease. Early detection of heart disease offers the possibility of non-surgical treatment, as suggested by biomedical researchers and associated institutions. However, a clinically practicable solution that is both sensitive and specific enough for early detection is still lacking; as a result, mortality among the vulnerable population continues to rise, compounded by the delays, inefficiency and complications of available diagnostic procedures. This research therefore proposes a novel IFB (Iris Features Based) method for the diagnosis of premature and early-stage heart disease. The method combines computer vision and iridology to obtain a robust, non-contact, non-radioactive and cost-effective diagnostic tool. It analyzes abnormal inherent weakness in tissue and changes in colour and pattern in the specific region of the iris that, according to the Bernard Jensen iris chart, responds to impulses from the heart. Such changes in the iris suggest the presence of degenerative abnormalities in the heart. These changes are precisely detected and analyzed by the IFB method, which includes tensor-based gradients (TBG), multi-orientation Gabor filters (GF), textural oriented features (TOF), and speeded-up robust features (SURF). Kernel and multi-class support vector machine classifiers are used to classify normal and pathological iris features. Experimental results demonstrated that the proposed method not only has better diagnostic performance but also provides insight for the early detection of other diseases.

  11. An overview of artificial intelligence and robotics. Volume 1: Artificial intelligence. Part B: Applications

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1983-01-01

    Artificial Intelligence (AI) is an emerging technology that has recently attracted considerable attention. Many applications are now under development. This report, Part B of a three part report on AI, presents overviews of the key application areas: Expert Systems, Computer Vision, Natural Language Processing, Speech Interfaces, and Problem Solving and Planning. The basic approaches to such systems, the state-of-the-art, existing systems and future trends and expectations are covered.

  12. NASA Advanced Computing Environment for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak

    2017-01-01

Vision: To reach for new heights and reveal the unknown so that what we do and learn will benefit all humankind. Mission: To pioneer the future in space exploration, scientific discovery, and aeronautics research. Aeronautics Research (ARMD): Pioneer and prove new flight technologies for safer, more secure, efficient, and environmentally friendly air transportation. Human Exploration and Operations (HEOMD): Focus on ISS operations, and develop new spacecraft and other capabilities for affordable, sustainable exploration beyond low Earth orbit. Science (SMD): Explore the Earth, solar system, and universe beyond; chart the best route for discovery; and reap the benefits of Earth and space exploration for society. Space Technology (STMD): Rapidly develop, demonstrate, and infuse revolutionary, high-payoff technologies through collaborative partnerships, expanding the boundaries of the aerospace enterprise.

  13. Managing the three 'P's to improve patient safety: nursing administration's role in managing information technology.

    PubMed

    Simpson, Roy L

    2004-08-01

    The Institute of Medicine's landmark report asserted that medical error is seldom the fault of individuals, but the result of faulty healthcare policy/procedure systems. Numerous studies have shown that information technology (IT) can shore up weak systems. For nursing, IT plays a key role in eliminating nursing mistakes. However, managing IT is a function of managing the people who use it. For nursing administrators, successful IT implementations depend on adroit management of the three 'P's: People, processes and (computer) programs. This paper examines critical issues for managing each entity. It discusses the importance of developing trusting organizations, the requirements of process change, how to implement technology in harmony with the organization and the significance of vision.

  14. Real-time image processing for non-contact monitoring of dynamic displacements using smartphone technologies

    NASA Astrophysics Data System (ADS)

    Min, Jae-Hong; Gelo, Nikolas J.; Jo, Hongki

    2016-04-01

The smartphone application newly developed in this study, named RINO, measures absolute dynamic displacements and processes them in real time using state-of-the-art smartphone technologies: a high-performance graphics processing unit (GPU) in addition to the already powerful CPU and memory, an embedded high-speed, high-resolution camera, and open-source computer vision libraries. A carefully designed colour-patterned target and a user-adjustable crop filter enable accurate and fast image processing, allowing up to 240 fps for complete displacement calculation and real-time display. The performance of the developed smartphone application is validated experimentally, showing accuracy comparable to that of a conventional laser displacement sensor.
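The app's internals are not detailed in the abstract. A generic version of the colour-target-plus-crop-filter idea might look like the following sketch, where the crop window, colour bounds and `mm_per_px` scale factor are all illustrative stand-ins for the app's calibrated values:

```python
import numpy as np

def target_centroid(frame, crop, lo, hi):
    """Locate a colour target inside a user-defined crop window and
    return its centroid in full-frame pixel coordinates.

    frame: (H, W, 3) uint8 image; crop: (y0, y1, x0, x1) window that
    limits the search (the "crop filter"); lo/hi: per-channel colour
    bounds approximating the target's colour.
    """
    y0, y1, x0, x1 = crop
    roi = frame[y0:y1, x0:x1]
    mask = np.all((roi >= lo) & (roi <= hi), axis=2)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                      # target not found in the window
    return (y0 + ys.mean(), x0 + xs.mean())

def displacement_mm(c_ref, c_now, mm_per_px):
    """Convert pixel motion of the centroid into physical displacement."""
    return ((c_now[0] - c_ref[0]) * mm_per_px,
            (c_now[1] - c_ref[1]) * mm_per_px)
```

Restricting processing to the crop window is what keeps the per-frame cost low enough for high frame rates; the scale factor would come from a calibration step relating pixel size to the known target dimensions.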

  15. Implementation of Headtracking and 3D Stereo with Unity and VRPN for Computer Simulations

    NASA Technical Reports Server (NTRS)

    Noyes, Matthew A.

    2013-01-01

    This paper explores low-cost hardware and software methods to provide depth cues traditionally absent in monocular displays. The use of a VRPN server in conjunction with a Microsoft Kinect and/or Nintendo Wiimote to provide head tracking information to a Unity application, and NVIDIA 3D Vision for retinal disparity support, is discussed. Methods are suggested to implement this technology with NASA's EDGE simulation graphics package, along with potential caveats. Finally, future applications of this technology to astronaut crew training, particularly when combined with an omnidirectional treadmill for virtual locomotion and NASA's ARGOS system for reduced gravity simulation, are discussed.

  16. Flight Test Evaluation of Situation Awareness Benefits of Integrated Synthetic Vision System Technology for Commercial Aircraft

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J., III

    2005-01-01

    Research was conducted onboard a Gulfstream G-V aircraft to evaluate integrated Synthetic Vision System concepts during flight tests over a 6-week period at the Wallops Flight Facility and Reno/Tahoe International Airport. The NASA Synthetic Vision System incorporates database integrity monitoring, runway incursion prevention alerting, surface maps, enhanced vision sensors, and advanced pathway guidance and synthetic terrain presentation. The paper details the goals and objectives of the flight test with a focus on the situation awareness benefits of integrating synthetic vision system enabling technologies for commercial aircraft.

  17. Beyond the Foundations: The Role of Vision and Belief in Teachers' Preparation for Integration of Technology.

    ERIC Educational Resources Information Center

    Albion, Peter R.; Ertmer, Peggy A.

    2002-01-01

    Discussion of the successful adoption and use of information technology in education focuses on teacher's personal philosophical beliefs and how they influence the successful integration of technology. Highlights include beliefs and teacher behavior; changing teachers' beliefs; and using technology to affect change in teachers' visions and…

  18. Computer vision

    NASA Technical Reports Server (NTRS)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of describing two- and three-dimensional worlds are discussed. The representation of features such as texture, edges, curves, and corners is detailed. Recognition methods are described in which cross-correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.

  19. Comparative randomised active drug controlled clinical trial of a herbal eye drop in computer vision syndrome.

    PubMed

    Chatterjee, Pranab Kr; Bairagi, Debasis; Roy, Sudipta; Majumder, Nilay Kr; Paul, Ratish Ch; Bagchi, Sunil Ch

    2005-07-01

A comparative double-blind placebo-controlled clinical trial of a herbal eye drop (itone) was conducted to determine its efficacy and safety in 120 patients with computer vision syndrome. Patients using computers for more than 3 hours continuously per day, with symptoms of watering, redness, asthenia, irritation and foreign body sensation and signs of conjunctival hyperaemia, corneal filaments and mucus, were studied. The 120 patients were randomly given either placebo, a tears substitute (tears plus) or itone in identical vials with specific code numbers and were instructed to instil one drop four times daily for 6 weeks. Subjective and objective assessments were done at bi-weekly intervals. In computer vision syndrome both subjective and objective improvements were noticed with itone drops. Itone was found to be significantly better than placebo (p<0.01), and almost identical results were observed with tears plus (the difference was not statistically significant). Itone is considered a useful drug in computer vision syndrome.

  20. Reflections on the Development of a Machine Vision Technology for the Forest Products

    Treesearch

    Richard W. Conners; D.Earl Kline; Philip A. Araman; Robert L. Brisbon

    1992-01-01

The authors have approximately 25 years of experience in developing machine vision technology for the forest products industry. Based on this experience, this paper will attempt to realistically predict what the future holds for this technology. In particular, this paper will attempt to describe some of the benefits this technology will offer, describe how the technology...

  1. Integrating Mobile Robotics and Vision with Undergraduate Computer Science

    ERIC Educational Resources Information Center

    Cielniak, G.; Bellotto, N.; Duckett, T.

    2013-01-01

    This paper describes the integration of robotics education into an undergraduate Computer Science curriculum. The proposed approach delivers mobile robotics as well as covering the closely related field of Computer Vision and is directly linked to the research conducted at the authors' institution. The paper describes the most relevant details of…

  2. Parallel computer vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uhr, L.

    1987-01-01

    This book is written by research scientists involved in the development of massively parallel, but hierarchically structured, algorithms, architectures, and programs for image processing, pattern recognition, and computer vision. The book gives an integrated picture of the programs and algorithms that are being developed, and also of the multi-computer hardware architectures for which these systems are designed.

  3. Sensor Control of Robot Arc Welding

    NASA Technical Reports Server (NTRS)

    Sias, F. R., Jr.

    1983-01-01

    The potential for using computer vision as sensory feedback for robot gas-tungsten arc welding is investigated. The basic parameters that must be controlled while directing the movement of an arc welding torch are defined. The actions of a human welder are examined to aid in determining the sensory information that would permit a robot to make reproducible high strength welds. Special constraints imposed by both robot hardware and software are considered. Several sensory modalities that would potentially improve weld quality are examined. Special emphasis is directed to the use of computer vision for controlling gas-tungsten arc welding. Vendors of available automated seam tracking arc welding systems and of computer vision systems are surveyed. An assessment is made of the state of the art and the problems that must be solved in order to apply computer vision to robot controlled arc welding on the Space Shuttle Main Engine.

  4. Tracking by Identification Using Computer Vision and Radio

    PubMed Central

    Mandeljc, Rok; Kovačič, Stanislav; Kristan, Matej; Perš, Janez

    2013-01-01

We present a novel system for the detection, localization and tracking of multiple people, which fuses a multi-view computer vision approach with a radio-based localization system. The proposed fusion combines the best of both worlds: excellent computer-vision-based localization and the strong identity information provided by the radio system. It is therefore able to perform tracking by identification, which makes it impervious to propagated identity switches. We present a comprehensive methodology for evaluating systems that perform person localization in a world coordinate system and use it to evaluate the proposed system as well as its components. Experimental results on a challenging indoor dataset, involving multiple people walking around a realistically cluttered room, confirm that the proposed fusion significantly outperforms its individual components. Compared to the radio-based system, it achieves better localization results, while successfully preventing the propagation of identity switches that occur in pure computer-vision-based tracking. PMID:23262485
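One minimal way to picture the fusion described above — accurate but anonymous vision detections, coarse but identified radio positions — is as a per-frame assignment problem. The greedy sketch below is illustrative only; the function name, gating distance and matching strategy are assumptions, not the authors' method (which a full implementation might replace with, e.g., Hungarian matching):

```python
import numpy as np

def assign_identities(vision_xy, radio_xy, radio_ids, gate=1.5):
    """Attach radio identities to anonymous vision detections.

    vision_xy: (N, 2) positions from the vision system (accurate).
    radio_xy:  (M, 2) positions from the radio system (coarse, identified).
    radio_ids: list of M identity labels.
    gate: maximum matching distance in metres (illustrative value).
    Returns a dict mapping vision detection index -> identity (or None).
    """
    d = np.linalg.norm(vision_xy[:, None, :] - radio_xy[None, :, :], axis=2)
    labels = {i: None for i in range(len(vision_xy))}
    used = set()
    # Greedily accept the globally closest (vision, radio) pairs first.
    for i, j in sorted(np.ndindex(d.shape), key=lambda ij: d[ij]):
        if labels[i] is None and j not in used and d[i, j] <= gate:
            labels[i] = radio_ids[j]
            used.add(j)
    return labels
```

Once a vision track carries a radio identity, subsequent frames can report the accurate vision position under that identity, which is the "tracking by identification" behaviour the record describes.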

  5. The Future of Learning Technology: Some Tentative Predictions

    ERIC Educational Resources Information Center

    Rushby, Nick

    2013-01-01

    This paper is a snapshot of an evolving vision of what the future may hold for learning technology. It offers three personal visions of the future and raises many questions that need to be explored if learning technology is to realise its full potential.

  6. Critical infrastructure monitoring using UAV imagery

    NASA Astrophysics Data System (ADS)

    Maltezos, Evangelos; Skitsas, Michael; Charalambous, Elisavet; Koutras, Nikolaos; Bliziotis, Dimitris; Themistocleous, Kyriacos

    2016-08-01

The constant technological evolution in computer vision has enabled the development of new techniques which, in conjunction with Unmanned Aerial Vehicles (UAVs), can extract high-quality photogrammetric products for several applications. Dense Image Matching (DIM) is a computer vision technique that can generate a dense 3D point cloud of an area or object. The use of UAV systems and DIM techniques is not only a flexible and attractive solution for producing accurate, high-quality photogrammetric results but also a major contribution to cost effectiveness. In this context, this study aims to highlight the benefits of using UAVs for critical infrastructure monitoring by applying DIM. A Multi-View Stereo (MVS) approach using multiple images (RGB digital aerial and oblique images) is implemented to fully cover the area of interest. The application area is an Olympic venue in Attica, Greece, covering 400 acres. The results of our study indicate that the UAV+DIM approach, which provides a 3D point cloud and orthomosaic, responds very well to the increasingly greater demands for accurate and cost-effective applications.

  7. Computer vision for driver assistance systems

    NASA Astrophysics Data System (ADS)

    Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner

    1998-07-01

Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. Especially in the field of driver assistance systems, scientific progress has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror in a car. The approach consists of sequential and parallel sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach comes from the integrative coupling of different algorithms providing partly redundant information.

  8. An Rx for 20/20 Vision: Vision Planning and Education.

    ERIC Educational Resources Information Center

    Chrisman, Gerald J.; Holliday, Clifford R.

    1996-01-01

    Discusses the Dallas Independent School District's decision to adopt an integrated technology infrastructure and the importance of vision planning for long term goals. Outlines the vision planning process: first draft; environmental projection; restatement of vision in terms of market projections, anticipated customer needs, suspected competitor…

  9. Nondestructive and rapid detection of potato black heart based on machine vision technology

    NASA Astrophysics Data System (ADS)

    Tian, Fang; Peng, Yankun; Wei, Wensong

    2016-05-01

Potatoes are one of the major food crops in the world. Black heart is a defect in which the surface of the potato remains intact while the internal tissue turns black. Such potatoes are inedible, but the defect is difficult to detect with conventional methods. A nondestructive detection system based on machine vision technology is proposed in this study to distinguish normal from black-heart potatoes according to their different light transmittance. The detection system comprises a monochrome CCD camera, LED light sources for transmitted illumination and a computer. First, transmission images of normal and black-heart potatoes were taken by the detection system. The images were then processed by an algorithm written in VC++. As the transmitted light intensity is influenced by the radial dimension of the potato sample, the relationship between grayscale value and radial dimension was acquired by analyzing how the grayscale values of the transmission image change. A suitable judging condition was then established, after image preprocessing, to distinguish normal from black-heart potatoes. The results showed that the system, coupled with the processing methods, can detect potato black heart at a considerable accuracy rate. The transmission detection technique based on machine vision is nondestructive and feasible for detecting potato black heart.
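The paper's fitted judging condition is not given in the abstract. A toy version of the idea — thresholding the mean transmitted grayscale after compensating for the tuber's radial dimension, since a larger potato transmits less light — might look like this, with `k` and `threshold` as invented placeholder values:

```python
import numpy as np

def is_black_heart(gray, radius_px, k=0.004, threshold=0.35):
    """Classify a transmission image as black heart (True) or normal.

    gray: (H, W) float image of the transmitted light, 0..255.
    radius_px: radial dimension of the tuber in pixels.
    k: assumed size-compensation factor (a bigger potato is expected to
    look dimmer, so its measured brightness is scaled up before the
    comparison). Both k and threshold are illustrative, not the
    paper's fitted values.
    """
    mean_gray = gray.mean() / 255.0
    compensated = mean_gray * (1.0 + k * radius_px)
    return compensated < threshold
```

In practice the relationship between grayscale value and radial dimension would be fitted from calibration samples, as the study describes, rather than assumed linear.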

  10. Eyes of Things.

    PubMed

    Deniz, Oscar; Vallez, Noelia; Espinosa-Aranda, Jose L; Rico-Saavedra, Jose M; Parra-Patino, Javier; Bueno, Gloria; Moloney, David; Dehghani, Alireza; Dunne, Aubrey; Pagani, Alain; Krauss, Stephan; Reiser, Ruben; Waeny, Martin; Sorci, Matteo; Llewellynn, Tim; Fedorczak, Christian; Larmoire, Thierry; Herbst, Marco; Seirafi, Andre; Seirafi, Kasra

    2017-05-21

    Embedded systems control and monitor a great deal of our reality. While some "classic" features are intrinsically necessary, such as low power consumption, rugged operating ranges, fast response and low cost, these systems have evolved in the last few years to emphasize connectivity functions, thus contributing to the Internet of Things paradigm. A myriad of sensing/computing devices are being attached to everyday objects, each able to send and receive data and to act as a unique node in the Internet. Apart from the obvious necessity to process at least some data at the edge (to increase security and reduce power consumption and latency), a major breakthrough will arguably come when such devices are endowed with some level of autonomous "intelligence". Intelligent computing aims to solve problems for which no efficient exact algorithm can exist or for which we cannot conceive an exact algorithm. Central to such intelligence is Computer Vision (CV), i.e., extracting meaning from images and video. While not everything needs CV, visual information is the richest source of information about the real world: people, places and things. The possibilities of embedded CV are endless if we consider new applications and technologies, such as deep learning, drones, home robotics, intelligent surveillance, intelligent toys, wearable cameras, etc. This paper describes the Eyes of Things (EoT) platform, a versatile computer vision platform tackling those challenges and opportunities.

  12. Person and gesture tracking with smart stereo cameras

    NASA Astrophysics Data System (ADS)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision requires only that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap and consume relatively little power. The TYZX Embedded 3D Vision systems are well suited to provide the low power, small footprint, and low cost required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide some background on the TYZX smart stereo camera platform, describe the person tracking and gesture tracking systems implemented on this platform, and discuss some deployed applications.
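The depth precision described above follows from stereo triangulation. As a hedged illustration (not the TYZX pipeline itself; the focal length and baseline values are assumed, not from the paper), a disparity map from a calibrated stereo pair converts to metric depth via z = f·B/d:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, min_disp=1e-6):
    """Convert a stereo disparity map (pixels) to metric depth (meters).

    Illustrative only: real systems such as the TYZX cameras use their own
    calibrated pipelines; focal_px and baseline_m are assumed parameters.
    Pixels with (near-)zero disparity are marked as infinitely far away.
    """
    d = np.asarray(disparity, dtype=float)
    depth = np.full_like(d, np.inf)
    valid = d > min_disp
    depth[valid] = focal_px * baseline_m / d[valid]  # z = f * B / d
    return depth

# A 10-pixel disparity with a 500 px focal length and a 10 cm baseline
# corresponds to 500 * 0.10 / 10 = 5 m of depth.
depth = disparity_to_depth(np.array([10.0, 0.0]), focal_px=500.0, baseline_m=0.10)
```

The inverse relationship between disparity and depth is why nearby subjects are localized most precisely, consistent with the close-range person-tracking use case described.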

  13. Designing and validating the joint battlespace infosphere

    NASA Astrophysics Data System (ADS)

    Peterson, Gregory D.; Alexander, W. Perry; Birdwell, J. Douglas

    2001-08-01

Fielding and managing the dynamic, complex information systems infrastructure necessary for defense operations presents significant opportunities for revolutionary improvements in capabilities. An example of this technology trend is the creation and validation of the Joint Battlespace Infosphere (JBI) being developed by the Air Force Research Lab. The JBI is a system of systems that integrates, aggregates, and distributes information to users at all echelons, from the command center to the battlefield. The JBI is a key enabler of the Air Force's Joint Vision 2010 core competencies, such as Information Superiority, providing increased situational awareness, planning capabilities, and dynamic execution. At the same time, creating this new operational environment introduces significant risk due to an increased dependency on computational and communications infrastructure combined with more sophisticated and frequent threats. Hence, the challenge facing the nation is to find the most effective means of exploiting new computational and communications technologies while mitigating the impact of attacks, faults, and unanticipated usage patterns.

  14. The Virtual Pelvic Floor, a tele-immersive educational environment.

    PubMed Central

    Pearl, R. K.; Evenhouse, R.; Rasmussen, M.; Dech, F.; Silverstein, J. C.; Prokasy, S.; Panko, W. B.

    1999-01-01

This paper describes the development of the Virtual Pelvic Floor, a new method of teaching the complex anatomy of the pelvic region utilizing virtual reality and advanced networking technology. Virtual reality technology allows improved visualization of three-dimensional structures over conventional media because it supports stereo vision, viewer-centered perspective, large angles of view, and interactivity. Two or more ImmersaDesk systems, drafting table format virtual reality displays, are networked together providing an environment where teacher and students share a high quality three-dimensional anatomical model, and are able to converse, see each other, and to point in three dimensions to indicate areas of interest. This project was realized by the teamwork of surgeons, medical artists and sculptors, computer scientists, and computer visualization experts. It demonstrates the future of virtual reality for surgical education and applications for the Next Generation Internet. PMID:10566378

  15. Toothguide Trainer tests with color vision deficiency simulation monitor.

    PubMed

    Borbély, Judit; Varsányi, Balázs; Fejérdy, Pál; Hermann, Péter; Jakstat, Holger A

    2010-01-01

The aim of this study was to evaluate whether simulated severe red and green color vision deficiency (CVD) influenced color matching results and to investigate whether training with the Toothguide Trainer (TT) computer program enabled better color matching. A total of 31 color-normal dental students participated in the study. Every participant had to pass the Ishihara Test; participants with a red/green color vision deficiency were excluded. A lecture on tooth color matching was given, and individual training with TT was performed. To measure individual tooth color matching results in normal and color-deficient display modes, the TT final exam was displayed on a calibrated monitor that served as a hardware-based method of simulating protanopy and deuteranopy. Data from the TT final exams were collected in normal and in severe red and green CVD-simulating monitor display modes. Color difference values for each participant in each display mode were computed (ΣΔE*ab), and the respective means and standard deviations were calculated. Student's t-test was used in the statistical evaluation. Participants made larger ΔE*ab errors in the severe color vision deficient display modes than in the normal monitor mode. TT tests showed a significant (p<0.05) difference in the tooth color matching results of the severe green color vision deficiency simulation mode compared to the normal vision mode. Students' shade matching results were significantly better after training (p=0.009). Computer-simulated severe color vision deficiency modes resulted in significantly worse color matching quality compared to normal color vision mode. The Toothguide Trainer computer program improved color matching results. Copyright © 2010 Elsevier Ltd. All rights reserved.
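The ΔE*ab values summed above are standard CIELAB (CIE76) color differences, i.e. Euclidean distances in L*a*b* space. A minimal sketch of that computation, with hypothetical shade-tab coordinates that are illustrative and not taken from the study's data:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB (L*, a*, b*) space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical coordinates (not from the study):
chosen = (72.0, 1.5, 18.0)   # L*, a*, b* of the shade a participant picked
target = (70.0, 1.0, 16.0)   # L*, a*, b* of the correct shade tab
error = delta_e_ab(chosen, target)  # sqrt(2^2 + 0.5^2 + 2^2) = sqrt(8.25) ~ 2.87

# A participant's total error (the study's sum of per-tab differences):
pairs = [(chosen, target)]
total = sum(delta_e_ab(c, t) for c, t in pairs)
```

Summing the per-tab differences gives the ΣΔE*ab score per participant and display mode, which is what the study's t-tests then compare across modes.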

  16. Policy and Technology Readiness: Engaging the User and Developer Community to Develop a Research Roadmap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olson, Jarrod; Barr, Jonathan L.; Burtner, Edwin R.

A key challenge for research roadmapping in the crisis response and management domain is the articulation of a shared vision that describes what the future can and should include. Visioning allows for far-reaching stakeholder engagement that can properly align research with stakeholders' needs. Engagement includes feedback from researchers, policy makers, the general public, and end-users on technical and non-technical factors. This work articulates a process and framework for the construction and maintenance of a stakeholder-centric research vision and roadmap in the emergency management domain. This novel roadmapping process integrates three pieces: analysis of the research and technology landscape, visioning, and stakeholder engagement. Our structured engagement process elicits research foci for the roadmap based on relevance to stakeholder mission, identifies collaborators, and builds consensus around the roadmap priorities. We find that the vision process and vision storyboard help SMEs conceptualize and discuss a technology's strengths, weaknesses, and alignment with needs.

  17. Collaborative Biomedicine in the Age of Big Data: The Case of Cancer

    PubMed Central

    Butte, Atul J; Schully, Sheri D; Dalton, William S; Khoury, Muin J; Hesse, Bradford W

    2014-01-01

    Biomedicine is undergoing a revolution driven by high throughput and connective computing that is transforming medical research and practice. Using oncology as an example, the speed and capacity of genomic sequencing technologies is advancing the utility of individual genetic profiles for anticipating risk and targeting therapeutics. The goal is to enable an era of “P4” medicine that will become increasingly more predictive, personalized, preemptive, and participative over time. This vision hinges on leveraging potentially innovative and disruptive technologies in medicine to accelerate discovery and to reorient clinical practice for patient-centered care. Based on a panel discussion at the Medicine 2.0 conference in Boston with representatives from the National Cancer Institute, Moffitt Cancer Center, and Stanford University School of Medicine, this paper explores how emerging sociotechnical frameworks, informatics platforms, and health-related policy can be used to encourage data liquidity and innovation. This builds on the Institute of Medicine’s vision for a “rapid learning health care system” to enable an open source, population-based approach to cancer prevention and control. PMID:24711045

  18. Collaborative biomedicine in the age of big data: the case of cancer.

    PubMed

    Shaikh, Abdul R; Butte, Atul J; Schully, Sheri D; Dalton, William S; Khoury, Muin J; Hesse, Bradford W

    2014-04-07

    Biomedicine is undergoing a revolution driven by high throughput and connective computing that is transforming medical research and practice. Using oncology as an example, the speed and capacity of genomic sequencing technologies is advancing the utility of individual genetic profiles for anticipating risk and targeting therapeutics. The goal is to enable an era of "P4" medicine that will become increasingly more predictive, personalized, preemptive, and participative over time. This vision hinges on leveraging potentially innovative and disruptive technologies in medicine to accelerate discovery and to reorient clinical practice for patient-centered care. Based on a panel discussion at the Medicine 2.0 conference in Boston with representatives from the National Cancer Institute, Moffitt Cancer Center, and Stanford University School of Medicine, this paper explores how emerging sociotechnical frameworks, informatics platforms, and health-related policy can be used to encourage data liquidity and innovation. This builds on the Institute of Medicine's vision for a "rapid learning health care system" to enable an open source, population-based approach to cancer prevention and control.

  19. Laboratory Directed Research and Development Annual Report for 2009

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Pamela J.

This report documents progress made on all LDRD-funded projects during fiscal year 2009. As a US Department of Energy (DOE) Office of Science (SC) national laboratory, Pacific Northwest National Laboratory (PNNL) has an enduring mission to bring molecular and environmental sciences and engineering strengths to bear on DOE missions and national needs. Its vision is to be recognized worldwide and valued nationally for leadership in accelerating the discovery and deployment of solutions to challenges in energy, national security, and the environment. To achieve this mission and vision, PNNL provides distinctive, world-leading science and technology in: (1) the design and scalable synthesis of materials and chemicals; (2) climate change science and emissions management; (3) efficient and secure electricity management from generation to end use; and (4) signature discovery and exploitation for threat detection and reduction. PNNL leadership also extends to operating EMSL, the Environmental Molecular Sciences Laboratory, a national scientific user facility dedicated to providing integrated experimental and computational resources for discovery and technological innovation in the environmental molecular sciences.

  20. [Meibomian gland dysfunction in computer vision syndrome].

    PubMed

    Pimenidi, M K; Polunin, G S; Safonova, T N

    2010-01-01

    This article reviews the etiology and pathogenesis of dry eye syndrome due to meibomian gland dysfunction (MGD). It is shown that blink rate influences meibomian gland functioning and the development of computer vision syndrome. Current diagnostic and treatment options for MGD are presented.

  1. Computational Unification: a Vision for Connecting Researchers

    NASA Astrophysics Data System (ADS)

    Troy, R. M.; Kingrey, O. J.

    2002-12-01

    Computational unification of science, once only a vision, is becoming a reality. This technology is based upon a scientifically defensible, general solution for Earth Science data management and processing. The computational unification of science offers a real opportunity to foster inter- and intra-disciplinary cooperation, and an end to "re-inventing the wheel". As we move forward using computers as tools, it is past time to move from computationally isolating, "one-off" or discipline-specific solutions to a unified framework in which research can be more easily shared, especially with researchers in other disciplines. The author will discuss how distributed meta-data, distributed processing and distributed data objects are structured to constitute a working interdisciplinary system, including how these resources lead to scientific defensibility through the known lineage of all data products. An illustration of how scientific processes are encapsulated and executed shows how previously written processes and functions are integrated into the system efficiently and with minimal effort. Meta-data basics will illustrate how intricate relationships may easily be represented and used to good advantage. Retrieval techniques will be discussed, including the trade-offs of using meta-data versus embedded data, how the two may be integrated, and how simplifying assumptions may or may not help. This system is based upon the experience of the Sequoia 2000 and BigSur research projects at the University of California, Berkeley, whose goal was to find an alternative to the Hughes EOS-DIS system; it is presently offered by the Science Tools corporation, of which the author is a principal.

  2. 78 FR 55086 - Center for Scientific Review; Notice of Closed Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-09

    ...: Emerging Technologies and Training Neurosciences Integrated Review Group; Bioengineering of Neuroscience, Vision and Low Vision Technologies Study Section. Date: October 3-4, 2013. Time: 8:00 a.m. to 11:00 a.m... . Name of Committee: Bioengineering Sciences & Technologies Integrated Review Group; Biomaterials and...

  3. Transforming revenue management.

    PubMed

    Silveria, Richard; Alliegro, Debra; Nudd, Steven

    2008-11-01

    Healthcare organizations that want to undertake a patient administrative/revenue management transformation should: Define the vision with underlying business objectives and key performance measures. Strategically partner with key vendors for business process development and technology design. Create a program organization and governance infrastructure. Develop a corporate design model that defines the standards for operationalizing the vision. Execute the vision through technology deployment and corporate design model implementation.

  4. To See Anew: New Technologies Are Moving Rapidly Toward Restoring or Enabling Vision in the Blind.

    PubMed

    Grifantini, Kristina

    2017-01-01

    Humans have been using technology to improve their vision for many decades. Eyeglasses, contact lenses, and, more recently, laser-based surgeries are commonly employed to remedy vision problems, both minor and major. But options are far fewer for those who have not seen since birth or who have reached stages of blindness in later life.

  5. Analog "neuronal" networks in early vision.

    PubMed Central

    Koch, C; Marroquin, J; Yuille, A

    1986-01-01

    Many problems in early vision can be formulated in terms of minimizing a cost function. Examples are shape from shading, edge detection, motion analysis, structure from motion, and surface interpolation. As shown by Poggio and Koch [Poggio, T. & Koch, C. (1985) Proc. R. Soc. London, Ser. B 226, 303-323], quadratic variational problems, an important subset of early vision tasks, can be "solved" by linear, analog electrical, or chemical networks. However, in the presence of discontinuities, the cost function is nonquadratic, raising the question of designing efficient algorithms for computing the optimal solution. Recently, Hopfield and Tank [Hopfield, J. J. & Tank, D. W. (1985) Biol. Cybern. 52, 141-152] have shown that networks of nonlinear analog "neurons" can be effective in computing the solution of optimization problems. We show how these networks can be generalized to solve the nonconvex energy functionals of early vision. We illustrate this approach by implementing a specific analog network, solving the problem of reconstructing a smooth surface from sparse data while preserving its discontinuities. These results suggest a novel computational strategy for solving early vision problems in both biological and real-time artificial vision systems. PMID:3459172
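As a hedged digital counterpart of the analog computation described above (a sketch of the energy-minimization idea only, omitting the nonlinear line process that handles discontinuities), gradient descent on a discretized one-dimensional membrane energy interpolates sparse data into a smooth reconstruction:

```python
import numpy as np

def reconstruct_1d(data, mask, lam=1.0, steps=5000, lr=0.05):
    """Minimize E(f) = sum_i mask_i*(f_i - data_i)^2 + lam*sum_i (f_{i+1} - f_i)^2
    by gradient descent -- a discrete stand-in for an analog network settling
    into a minimum of its energy function (no discontinuity process here)."""
    f = np.where(mask, data, 0.0).astype(float)
    for _ in range(steps):
        grad = 2.0 * mask * (f - data)   # gradient of the data-fidelity term
        diff = np.diff(f)
        grad[:-1] -= 2.0 * lam * diff    # gradient of the smoothness term
        grad[1:] += 2.0 * lam * diff
        f -= lr * grad
    return f

# Only the two endpoints of a ramp are observed; the interior is filled in
# smoothly (approximately linear between the two data points).
d = np.zeros(9)
m = np.zeros(9, dtype=bool)
d[8] = 8.0
m[0] = m[8] = True
f = reconstruct_1d(d, m)
```

The settling of the iteration toward the energy minimum mirrors, in discrete time, how the analog network's voltages relax to the optimal surface; parameter values here (lam, lr, steps) are illustrative choices, not from the paper.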

  6. Developing Personal Learning Environments Based on Calm Technologies

    NASA Astrophysics Data System (ADS)

    Fiaidhi, Jinan

    Educational technology is constantly evolving and growing, and this progression will inevitably continue to offer new and interesting advances in our world. The use of calm technologies for the delivery of education is one such newly emerging approach. Calm technology aims to reduce the "excitement" of information overload by letting the learner select what information is at the center of their attention and what information remains at the periphery. In this paper we report on the adaptation of calm technologies in an educational setting, with emphasis on the need to cater to the preferences of the individual learner in order to respond to the challenge of providing truly learner-centered, accessible, personalized and flexible learning. Central to the calm computing vision is the notion of representing learning objects as widgets, harvesting widgets from the periphery based on semantic wikis, as well as widget garbage collection from the virtual/central learning memory.

  7. The diagnostic performance of expert dermoscopists vs a computer-vision system on small-diameter melanomas.

    PubMed

    Friedman, Robert J; Gutkowicz-Krusin, Dina; Farber, Michele J; Warycha, Melanie; Schneider-Kels, Lori; Papastathis, Nicole; Mihm, Martin C; Googe, Paul; King, Roy; Prieto, Victor G; Kopf, Alfred W; Polsky, David; Rabinovitz, Harold; Oliviero, Margaret; Cognetta, Armand; Rigel, Darrell S; Marghoob, Ashfaq; Rivers, Jason; Johr, Robert; Grant-Kels, Jane M; Tsao, Hensin

    2008-04-01

    To evaluate the performance of dermoscopists in diagnosing small pigmented skin lesions (diameter

  8. Resources and Information for Parents about Braille

    MedlinePlus


  9. Beyond speculative robot ethics: a vision assessment study on the future of the robotic caretaker.

    PubMed

    van der Plas, Arjanna; Smits, Martijntje; Wehrmann, Caroline

    2010-11-01

    In this article we develop a dialogue model for robot technology experts and designated users to discuss visions of the future of robotics in long-term care. Our vision assessment study aims for more differentiated and better informed visions of future robots. Surprisingly, our experiment also led to some promising co-designed robot concepts in which jointly articulated moral guidelines are embedded. With our model, we believe we have designed an interesting response to a recent call for a less speculative ethics of technology, by encouraging discussions about the quality of positive and negative visions of the future of robotics.

  10. Multi-object detection and tracking technology based on hexagonal opto-electronic detector

    NASA Astrophysics Data System (ADS)

    Song, Yong; Hao, Qun; Li, Xiang

    2008-02-01

    A novel multi-object detection and tracking technology based on a hexagonal opto-electronic detector is proposed, in which (1) a new hexagonal detector composed of six linear CCDs has been developed to achieve a 360-degree field of view, and (2) to achieve high-speed detection and tracking of multiple objects, the object recognition criteria of the Object Signal Width Criterion (OSWC) and the Horizontal Scale Ratio Criterion (HSRC) are proposed. In this paper, simulated experiments have been carried out to verify the validity of the proposed technology. They show that high-speed detection and tracking of multiple objects can be achieved using the proposed hexagonal detector together with the OSWC and HSRC criteria, indicating that the technology offers significant advantages in photoelectric detection, computer vision, virtual reality, augmented reality, etc.

  11. Proceedings of the Second Joint Technology Workshop on Neural Networks and Fuzzy Logic, volume 1

    NASA Technical Reports Server (NTRS)

    Lea, Robert N. (Editor); Villarreal, James (Editor)

    1991-01-01

    Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by NASA and the University of Houston, Clear Lake. The workshop was held April 11 to 13 at the Johnson Space Center. Technical topics addressed included adaptive systems, learning algorithms, network architectures, vision, robotics, neurobiological connections, speech recognition and synthesis, fuzzy set theory and application, control and dynamics processing, space applications, fuzzy logic and neural network computers, approximate reasoning, and multiobject decision making.

  12. Medical Equipment Tele- and Condition-Based Maintenance with Enhanced Remote Diagnostic Access (RDA) and Computer Vision

    DTIC Science & Technology

    2010-04-01

    ...The second is a 'mechanical' part that is controlled by circuit boards and is accessible by the technician via the serial console and running...was the use of conventional remote access solution designed for telecommuters or teleworkers in the Information Technology (IT) world, such as a

  13. Computer vision system: a tool for evaluating the quality of wheat in a grain tank

    NASA Astrophysics Data System (ADS)

    Minkin, Uryi Igorevish; Panchenko, Aleksei Vladimirovich; Shkanaev, Aleksandr Yurievich; Konovalenko, Ivan Andreevich; Putintsev, Dmitry Nikolaevich; Sadekov, Rinat Nailevish

    2018-04-01

    The paper describes a technology for automating the evaluation of grain quality in the grain tank of a combine harvester. A special recognition algorithm analyzes photographic images taken by the camera and provides automatic estimates of the total mass fraction of broken grains and the presence of non-grain material. The paper also presents the operating details of the tank prototype and assesses the accuracy of the designed algorithms.

  14. Placing US Air Force Information Technology Investment Under the Nanoscope A Clear Vision of Nanotechnology’s Impact on Computing in 2030

    DTIC Science & Technology

    2007-04-01

    Another serious problem is the growing power consumption of high-performance logic chips. If increasing clock frequency and IC density...What is your judgment of the potential of the various responses based on your knowledge and the...open research closely coupled to internal development and deployment. (n) Study Effect of Nanomaterials on Environment (o) Long-Term, Balanced IT

  15. Astronomy Aid

    NASA Technical Reports Server (NTRS)

    1995-01-01

    As a Jet Propulsion Laboratory astronomer, John D. Callahan developed a computer program called Multimission Interactive Planner (MIP) to help astronomers analyze scientific and optical data collected on the Voyager's Grand Tour. The commercial version of the program called XonVu is published by XonTech, Inc. Callahan has since developed two more advanced programs based on MIP technology, Grand Tour and Jovian Traveler, which simulate Voyager and Giotto missions. The software allows astronomers and space novices to view the objects seen by the spacecraft, manipulating perspective, distance and field of vision.

  16. Danville Community College Information Technology General Plan, 1998-99.

    ERIC Educational Resources Information Center

    Danville Community Coll., VA.

    This document describes technology usage, infrastructure and planning for Danville Community College. The Plan is divided into four sections: Introduction, Vision and Mission, Applications, and Infrastructure. The four major goals identified in Vision and Mission are: (1) to ensure the successful use of all technologies through continued training…

  17. Taming Crowded Visual Scenes

    DTIC Science & Technology

    2014-08-12

    Nolan Warner, Mubarak Shah. Tracking in Dense Crowds Using Prominence and Neighborhood Motion Concurrence, IEEE Transactions on Pattern Analysis...of computer vision, computer graphics and evacuation dynamics by providing a common platform, and provides...areas that include Computer Vision, Computer Graphics, and Pedestrian Evacuation Dynamics. Despite the

  18. Computer vision syndrome: a review of ocular causes and potential treatments.

    PubMed

    Rosenfield, Mark

    2011-09-01

    Computer vision syndrome (CVS) is the combination of eye and vision problems associated with the use of computers. In modern western society the use of computers for both vocational and avocational activities is almost universal. However, CVS may have a significant impact not only on visual comfort but also on occupational productivity, since between 64% and 90% of computer users experience visual symptoms, which may include eyestrain, headaches, ocular discomfort, dry eye, diplopia and blurred vision either at near or when looking into the distance after prolonged computer use. This paper reviews the principal ocular causes of this condition, namely oculomotor anomalies and dry eye. Accommodation and vergence responses to electronic screens appear to be similar to those found when viewing printed materials, whereas the prevalence of dry eye symptoms is greater during computer operation. The latter is probably due to a decrease in blink rate and blink amplitude, as well as increased corneal exposure resulting from the monitor frequently being positioned in primary gaze. However, the efficacy of proposed treatments to reduce symptoms of CVS is unproven. A better understanding of the physiology underlying CVS is critical to allow more accurate diagnosis and treatment. This will enable practitioners to optimize visual comfort and efficiency during computer operation. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.

  19. An Enduring Dialogue between Computational and Empirical Vision.

    PubMed

    Martinez-Conde, Susana; Macknik, Stephen L; Heeger, David J

    2018-04-01

    In the late 1970s, key discoveries in neurophysiology, psychophysics, computer vision, and image processing had reached a tipping point that would shape visual science for decades to come. David Marr and Ellen Hildreth's 'Theory of edge detection', published in 1980, set out to integrate the newly available wealth of data from behavioral, physiological, and computational approaches in a unifying theory. Although their work had wide and enduring ramifications, their most important contribution may have been to consolidate the foundations of the ongoing dialogue between theoretical and empirical vision science. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. A review of automated image understanding within 3D baggage computed tomography security screening.

    PubMed

    Mouton, Andre; Breckon, Toby P

    2015-01-01

    Baggage inspection is the principal safeguard against the transportation of prohibited and potentially dangerous materials at airport security checkpoints. Although traditionally performed by 2D X-ray based scanning, increasingly stringent security regulations have led to a growing demand for more advanced imaging technologies. The role of X-ray Computed Tomography is thus rapidly expanding beyond the traditional materials-based detection of explosives. The development of computer vision and image processing techniques for the automated understanding of 3D baggage-CT imagery is, however, complicated by poor image resolutions, image clutter and high levels of noise and artefacts. We discuss the recent and most pertinent advancements and identify topics for future research within the challenging domain of automated image understanding for baggage security screening CT.

  1. Low computation vision-based navigation for a Martian rover

    NASA Technical Reports Server (NTRS)

    Gavin, Andrew S.; Brooks, Rodney A.

    1994-01-01

    Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.

  2. Computational models of human vision with applications

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    Perceptual problems in aeronautics were studied. The mechanism by which color constancy is achieved in human vision was examined. A computable algorithm was developed to model the arrangement of retinal cones in spatial vision; the resulting spatial frequency spectra are similar to those of actual cone mosaics. The Hartley transform was evaluated as a tool for image processing, and it is suggested that it could be used in signal processing and image processing applications.

  3. Find Services for People Who Are Blind or Visually Impaired

    MedlinePlus


  4. Color machine vision in industrial process control: case limestone mine

    NASA Astrophysics Data System (ADS)

    Paernaenen, Pekka H. T.; Lemstrom, Guy F.; Koskinen, Seppo

    1994-11-01

    An optical sorter technology based on color line-scan machine vision has been developed to improve the profitability of a mine. The newly adopted technology lengthens the expected lifetime of the limestone mine and improves its efficiency. The project has also proved that today's color line-scan technology can be applied successfully to industrial use in harsh environments.

  5. Emergent technologies: 25 years

    NASA Astrophysics Data System (ADS)

    Rising, Hawley K.

    2013-03-01

    This paper discusses the technologies that have emerged over the 25 years since the Human Vision and Electronic Imaging conference began, technologies that have both shaped and been shaped by the conference. It also looks at those emerging today, such as social networks, haptic technologies, and still-emerging imaging technologies, and at what we might expect in the future. Twenty-five years is a long time, and it is not without difficulty that we remember what was emerging in the late 1980s. The first commercial digital still camera was not yet on the market, although there were hand-held electronic cameras. Personal computers were not displaying standardized images, and image quality was not something that could be discussed in a standardized fashion, if only because image compression algorithms would not be standardized for several years. Standards for movie compression were even further away, and no personal computer on the horizon could have displayed compressed video. Image comparison and search, which later became an emergent technology and filled many sessions, was not yet possible; nor was the currently emerging technology of social networks, since the World Wide Web was still several years off. Printer technology was still devising dithers and image-size manipulations that would consume many years of work, as was scanning technology, and image quality for both was a major issue owing to dither and Fourier noise. From these humble beginnings to the current shifts that are changing computing and the meaning of both electronic devices and human interaction with them, we will trace a course through changing technology that holds some features constant for many years while others come and go.

  6. Implementation of a robotic flexible assembly system

    NASA Technical Reports Server (NTRS)

    Benton, Ronald C.

    1987-01-01

    As part of the Intelligent Task Automation program, a team developed enabling technologies for programmable, sensory-controlled manipulation in unstructured environments. These technologies include 2-D/3-D vision sensing and understanding, force sensing and high-speed force control, 2.5-D vision alignment and control, and multiple-processor architectures. The subsequent design of a flexible, programmable, sensor-controlled robotic assembly system for small electromechanical devices using these technologies is described, along with ongoing implementation and integration efforts. Using vision, the system picks parts dumped randomly in a tray. Using vision and force control, it performs high-speed part mating, in-process monitoring and verification of expected results, and autonomous recovery from some errors. It is programmed off line with semiautomatic action planning.

  7. Computer vision syndrome-A common cause of unexplained visual symptoms in the modern era.

    PubMed

    Munshi, Sunil; Varghese, Ashley; Dhar-Munshi, Sushma

    2017-07-01

    The aim of this study was to assess the evidence and available literature on the clinical, pathogenetic, prognostic and therapeutic aspects of Computer vision syndrome. Information was collected from Medline, Embase & the National Library of Medicine over the last 30 years up to March 2016. The bibliographies of relevant articles were searched for additional references. Patients with Computer vision syndrome present to a variety of different specialists, including General Practitioners, Neurologists, Stroke physicians and Ophthalmologists. While the condition is common, awareness is poor among both the public and health professionals. Recognising this condition in the clinic or in emergency situations like the TIA clinic is crucial. The implications are potentially huge in view of the extensive and widespread use of computers and visual display units. Greater public awareness of Computer vision syndrome and education of health professionals are vital. Preventive strategies should form part of workplace ergonomics routinely. Prompt and correct recognition is important to allow management and avoid unnecessary treatments. © 2017 John Wiley & Sons Ltd.

  8. Comparative randomised controlled clinical trial of a herbal eye drop with artificial tear and placebo in computer vision syndrome.

    PubMed

    Biswas, N R; Nainiwal, S K; Das, G K; Langan, U; Dadeya, S C; Mongre, P K; Ravi, A K; Baidya, P

    2003-03-01

    A comparative randomised double masked multicentric clinical trial was conducted to find out the efficacy and safety of a herbal eye drop preparation, itone eye drops, compared with artificial tear and placebo in 120 patients with computer vision syndrome. Patients using a computer for at least 2 hours continuously per day, having symptoms of irritation, foreign body sensation, watering, redness, headache, eyeache, and signs of conjunctival congestion, mucous/debris, corneal filaments, corneal staining or lacrimal lake were included in this study. Every patient was instructed to put two drops of either the herbal drug, placebo or artificial tear in the eyes regularly four times daily for 6 weeks. Objective and subjective findings were recorded at bi-weekly intervals up to six weeks. Side-effects, if any, were also noted. In computer vision syndrome the herbal eye drop preparation was found significantly better than artificial tear (p < 0.01). No side-effects were noted with any of the drugs. Both subjective and objective improvements were observed in itone treated cases. Thus, itone can be considered a useful drug in computer vision syndrome.

  9. Computer vision syndrome in presbyopia and beginning presbyopia: effects of spectacle lens type.

    PubMed

    Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique

    2015-05-01

    This office field study investigated the effects of different types of spectacle lenses habitually worn by computer users with presbyopia and in the beginning stages of presbyopia. Computer vision syndrome was assessed through reported complaints and ergonomic conditions. A questionnaire regarding the type of habitually worn near-vision lenses at the workplace, visual conditions and the levels of different types of complaints was administered to 175 participants aged 35 years and older (mean ± SD: 52.0 ± 6.7 years). Statistical factor analysis identified five specific aspects of the complaints. Workplace conditions were analysed based on photographs taken in typical working conditions. In the subgroup of 25 users between the ages of 36 and 57 years (mean 44 ± 5 years), who wore distance-vision lenses and performed more demanding occupational tasks, the reported extents of 'ocular strain', 'musculoskeletal strain' and 'headache' increased with the daily duration of computer work and explained up to 44 per cent of the variance (rs = 0.66). In the other subgroups, this effect was smaller, while in the complete sample (n = 175), this correlation was approximately rs = 0.2. The subgroup of 85 general-purpose progressive lens users (mean age 54 years) adopted head inclinations that were approximately seven degrees more elevated than those of the subgroups with single vision lenses. The present questionnaire was able to assess the complaints of computer users depending on the type of spectacle lenses worn. A missing near-vision addition among participants in the early stages of presbyopia was identified as a risk factor for complaints among those with longer daily durations of demanding computer work. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.
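    The rank correlations reported above (e.g. rs = 0.66) follow Spearman's method, which is simply the Pearson correlation of the rank vectors. A minimal Python sketch, assuming no tied values (standard statistics packages handle ties properly):

    ```python
    import numpy as np

    def spearman_rho(x, y):
        """Spearman rank correlation: Pearson correlation of the rank vectors.
        Minimal version that assumes no tied values."""
        def ranks(v):
            order = np.argsort(v)
            r = np.empty(len(v))
            r[order] = np.arange(1, len(v) + 1)  # rank 1 = smallest value
            return r
        rx, ry = ranks(np.asarray(x, float)), ranks(np.asarray(y, float))
        rx -= rx.mean()
        ry -= ry.mean()
        return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
    ```

    Because only ranks enter the computation, any monotone relationship (linear or not) yields a coefficient of 1.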

  10. A novel upper limb rehabilitation system with self-driven virtual arm illusion.

    PubMed

    Aung, Yee Mon; Al-Jumaily, Adel; Anam, Khairul

    2014-01-01

    This paper proposes a novel upper extremity rehabilitation system with a virtual arm illusion. It aims to provide a novel rehabilitation system for patients paralyzed by stroke, targeting fast recovery of lost upper limb function. The system integrates a number of technologies: Augmented Reality (AR) to develop game-like exercises, computer vision to create the illusion scene, 3D modeling and model simulation, and signal processing to detect user intention via EMG signals. The effectiveness of the developed system has been evaluated via a usability study and questionnaires, with results represented by graphical and analytical methods. The evaluation yielded positive results, indicating that the developed system has potential as an effective rehabilitation system for upper limb impairment.

  11. Automated site characterization for robotic sample acquisition systems

    NASA Astrophysics Data System (ADS)

    Scholl, Marija S.; Eberlein, Susan J.

    1993-04-01

    A mobile, semiautonomous vehicle with multiple sensors and on-board intelligence is proposed for performing preliminary scientific investigations on extraterrestrial bodies prior to human exploration. Two technologies, a hybrid optical-digital computer system based on optical correlator technology and an image and instrument data analysis system, provide complementary capabilities that might be part of an instrument package for an intelligent robotic vehicle. The hybrid digital-optical vision system could perform real-time image classification tasks using an optical correlator with programmable matched filters under control of a digital microcomputer. The data analysis system would analyze visible and multiband imagery to extract mineral composition and textural information for geologic characterization. Together these technologies would support the site characterization needs of a robotic vehicle for both navigational and scientific purposes.

  12. Image understanding systems based on the unifying representation of perceptual and conceptual information and the solution of mid-level and high-level vision problems

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2001-10-01

    Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on such principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks. The human brain's ability to emulate similar graph/network models suggests an important paradigm shift in our knowledge about the brain: from neural networks to cortical software. Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region; rather, it uniquely characterizes the region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes such as clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract structures, which represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena such as shape from shading and occlusion are results of such analysis. This approach offers the opportunity not only to explain frequently unexplainable results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the 'what' and 'where' visual pathways. Such systems can open new horizons for the robotics and computer vision industries.

  13. Computer vision syndrome (CVS) - Thermographic Analysis

    NASA Astrophysics Data System (ADS)

    Llamosa-Rincón, L. E.; Jaime-Díaz, J. M.; Ruiz-Cardona, D. F.

    2017-01-01

    The use of computers has grown exponentially in the last decades; the possibility of carrying out several tasks for both professional and leisure purposes has contributed to their great acceptance by users. The consequences and impact of uninterrupted work in front of computer screens or displays on visual health have grabbed researchers' attention. When spending long periods of time in front of a computer screen, human eyes are subjected to great effort, which in turn triggers a set of symptoms known as Computer Vision Syndrome (CVS). The most common of these are blurred vision, visual fatigue and Dry Eye Syndrome (DES) due to inadequate lubrication of the ocular surface as blinking decreases. An experimental protocol was designed and implemented to perform thermographic studies on healthy human eyes during exposure to computer displays, with the main purpose of comparing the differences in temperature variation of healthy ocular surfaces.

  14. Going Below Minimums: The Efficacy of Display Enhanced/Synthetic Vision Fusion for Go-Around Decisions during Non-Normal Operations

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.

    2007-01-01

    The use of enhanced vision systems in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting approach and landing operations. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved enhanced flight vision system that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of synthetic vision systems and enhanced vision system technologies, focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under these newly adopted rules. Experimental results specific to flight crew response to non-normal events using the fused synthetic/enhanced vision system are presented.

  15. Mapping Agricultural Fields in Sub-Saharan Africa with a Computer Vision Approach

    NASA Astrophysics Data System (ADS)

    Debats, S. R.; Luo, D.; Estes, L. D.; Fuchs, T.; Caylor, K. K.

    2014-12-01

    Sub-Saharan Africa is an important focus for food security research, because it is experiencing unprecedented population growth, agricultural activities are largely dominated by smallholder production, and the region is already home to 25% of the world's undernourished. One of the greatest challenges to monitoring and improving food security in this region is obtaining an accurate accounting of the spatial distribution of agriculture. Households are the primary units of agricultural production in smallholder communities and typically rely on small fields of less than 2 hectares. Field sizes are directly related to household crop productivity, management choices, and adoption of new technologies. As population and agriculture expand, it becomes increasingly important to understand both the distribution of field sizes as well as how agricultural communities are spatially embedded in the landscape. In addition, household surveys, a common tool for tracking agricultural productivity in Sub-Saharan Africa, would greatly benefit from spatially explicit accounting of fields. Current gridded land cover data sets do not provide information on individual agricultural fields or the distribution of field sizes. Therefore, we employ cutting edge approaches from the field of computer vision to map fields across Sub-Saharan Africa, including semantic segmentation, discriminative classifiers, and automatic feature selection. Our approach aims to not only improve the binary classification accuracy of cropland, but also to isolate distinct fields, thereby capturing crucial information on size and geometry. Our research focuses on the development of descriptive features across scales to increase the accuracy and geographic range of our computer vision algorithm. Relevant data sets include high-resolution remote sensing imagery and Landsat (30-m) multi-spectral imagery. Training data for field boundaries is derived from hand-digitized data sets as well as crowdsourcing.

  16. Laser speckle imaging for lesion detection on tooth

    NASA Astrophysics Data System (ADS)

    Gavinho, Luciano G.; Silva, João. V. P.; Damazio, João. H.; Sfalcin, Ravana A.; Araujo, Sidnei A.; Pinto, Marcelo M.; Olivan, Silvia R. G.; Prates, Renato A.; Bussadori, Sandra K.; Deana, Alessandro M.

    2018-02-01

    Computer vision technologies for diagnostic imaging applied to oral lesions, specifically carious lesions of the teeth, are in their early years of development. Dental caries is a public health problem that worries countries around the world, as it affects almost the entire population at least once in the life of each individual. The present work demonstrates current techniques for obtaining information about lesions on teeth by segmenting laser speckle images (LSI). A laser speckle image results from the reflection of laser light on a rough surface; once considered noise, it has important features that carry information about the illuminated surface. Even though these are basic images, only a few works have analyzed them with computer vision methods. In this article, we present the latest results of our group, in which computer vision techniques were adapted to segment laser speckle images for diagnostic purposes. These methods are applied to segmenting images into healthy and lesioned regions of the tooth, and have proven effective in the diagnosis of early-stage lesions, often imperceptible to traditional diagnostic methods in clinical practice. The first method uses first-order statistical models, segmenting the image by comparing the mean and standard deviation of pixel intensities. The second method is based on the chi-square (χ²) distance between image histograms, bringing a significant improvement in diagnostic precision, while a third method introduces fractal geometry, exposing, through the fractal dimension, the difference between lesioned and healthy areas of a tooth more precisely than the other segmentation methods. So far, we observe efficient segmentation of the carious regions. A software tool was developed for the execution and demonstration of the applicability of the models.
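    As an illustration of the second method, the chi-square distance between intensity histograms can compare an image region against reference histograms of healthy and lesioned tissue. A hedged sketch, assuming 8-bit intensities; the function names and bin settings are illustrative, not the authors' implementation:

    ```python
    import numpy as np

    def chi_square_distance(h1, h2, eps=1e-12):
        # symmetric chi-square distance between two histograms
        return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

    def classify_region(region, healthy_hist, lesion_hist, bins=32):
        # histogram the region's pixel intensities and pick the closer reference
        hist, _ = np.histogram(region, bins=bins, range=(0, 255), density=True)
        d_healthy = chi_square_distance(hist, healthy_hist)
        d_lesion = chi_square_distance(hist, lesion_hist)
        return "lesion" if d_lesion < d_healthy else "healthy"
    ```

    Sliding this classifier over tiles of the speckle image yields a segmentation map of healthy versus lesioned areas.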

  17. A large-scale solar dynamics observatory image dataset for computer vision applications.

    PubMed

    Kucuk, Ahmet; Banda, Juan M; Angryk, Rafal A

    2017-01-01

    The National Aeronautics and Space Administration (NASA) Solar Dynamics Observatory (SDO) mission has given us unprecedented insight into the Sun's activity. By capturing approximately 70,000 images a day, this mission has created one of the richest and biggest repositories of solar image data available to mankind. With such massive amounts of information, researchers have been able to produce great advances in detecting solar events. In this resource, we compile SDO solar data into a single repository in order to provide the computer vision community with a standardized and curated large-scale dataset of several hundred thousand solar events found on high resolution solar images. This publicly available resource, along with the generation source code, will accelerate computer vision research on NASA's solar image data by reducing the amount of time spent performing data acquisition and curation from the multiple sources we have compiled. By improving the quality of the data with thorough curation, we anticipate wider adoption and interest from both the computer vision and solar physics communities.

  18. Virtual viewpoint synthesis in multi-view video system

    NASA Astrophysics Data System (ADS)

    Li, Fang; Yang, Shiqiang

    2005-07-01

    In this paper, we present a virtual viewpoint video synthesis algorithm designed to satisfy three aims: low computational cost, real-time interpolation and acceptable video quality. In contrast with previous technologies, this method obtains incomplete 3D structure from neighboring video sources instead of recovering full 3D information from all video sources, so the computation is reduced greatly. This allows us to demonstrate our interactive multi-view video synthesis algorithm on a personal computer. Furthermore, by choosing feature points to build correspondences between frames captured by neighboring cameras, we do not require camera calibration. Finally, our method works when the angle between neighboring cameras is 25-30 degrees, much larger than in common computer vision experiments. In this way, our method can be applied to many applications such as live sports broadcasting, video conferencing, etc.
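    The core idea of synthesizing an intermediate view from point correspondences between two neighboring cameras, without calibration, can be sketched as a linear blend of matched feature points. This is a deliberate simplification of the paper's method (real view synthesis also warps and blends pixel data between the matched points):

    ```python
    import numpy as np

    def interpolate_points(pts_left, pts_right, alpha):
        """Linearly interpolate matched feature points between two neighboring views.
        alpha = 0 reproduces the left view's points, alpha = 1 the right view's."""
        pts_left = np.asarray(pts_left, dtype=float)
        pts_right = np.asarray(pts_right, dtype=float)
        return (1.0 - alpha) * pts_left + alpha * pts_right
    ```

    Sweeping alpha from 0 to 1 moves the virtual viewpoint smoothly between the two camera positions.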

  19. Towards Image Documentation of Grave Coverings and Epitaphs for Exhibition Purposes

    NASA Astrophysics Data System (ADS)

    Pomaska, G.; Dementiev, N.

    2015-08-01

    Epitaphs and memorials, as immovable items in sacred spaces, provide with their inscriptions valuable documents of history. Today, photographs are no longer the only suitable presentation material for cultural assets in museums. Computer vision and photogrammetry provide methods for recording, 3D modelling and rendering under artificial light conditions, as well as further options for the analysis and investigation of artistry. For exhibition purposes, epitaphs have been recorded using the structure-from-motion (SFM) method, allowing a comparison of different SFM software distributions to be worked out. The suitability of open source software in the mesh processing chain, from modelling up to displaying on computer monitors, should be answered. A Raspberry Pi, a computer built with SoC technology, works as a media server under Linux running Python scripts. Will the little computer meet the requirements of a museum, and is the handling comfortable enough for staff and visitors? This contribution reports on the case study.

  20. Images Revealing More Than a Thousand Words

    NASA Technical Reports Server (NTRS)

    2003-01-01

    A unique sensor developed by ProVision Technologies, a NASA Commercial Space Center housed by the Institute for Technology Development, produces hyperspectral images with cutting-edge applications in food safety, skin health, forensics, and anti-terrorism activities. While hyperspectral imaging technology continues to make advances with ProVision Technologies, it has also been transferred to the commercial sector through a spinoff company, Photon Industries, Inc.

  1. [Computer eyeglasses--aspects of a confusing topic].

    PubMed

    Huber-Spitzy, V; Janeba, E

    1997-01-01

    With the coming into force of the new Austrian Employee Protection Act, the issue of the so-called "computer glasses" will also gain added importance in our country. Such glasses have been defined as vision aids to be exclusively used for work on computer monitors and include single-vision glasses solely intended for reading the computer screen, glasses with bifocal lenses for reading the computer screen and hard-copy documents, as well as those with varifocal lenses featuring a thickened central section. There is still considerable controversy among those concerned as to who will bear the costs for such glasses--most likely it will be the employer. Prescription of such vision aids will be exclusively restricted to ophthalmologists, based on a thorough ophthalmological examination under adequate consideration of the specific working environment and the workplace requirements of the individual employee concerned.

  2. Computer Vision for High-Throughput Quantitative Phenotyping: A Case Study of Grapevine Downy Mildew Sporulation and Leaf Trichomes.

    PubMed

    Divilov, Konstantin; Wiesner-Hanks, Tyr; Barba, Paola; Cadle-Davidson, Lance; Reisch, Bruce I

    2017-12-01

    Quantitative phenotyping of downy mildew sporulation is frequently used in plant breeding and genetic studies, as well as in studies focused on pathogen biology such as chemical efficacy trials. In these scenarios, phenotyping a large number of genotypes or treatments can be advantageous but is often limited by time and cost. We present a novel computational pipeline dedicated to estimating the percent area of downy mildew sporulation from images of inoculated grapevine leaf discs in a manner that is time and cost efficient. The pipeline was tested on images from leaf disc assay experiments involving two F1 grapevine families, one that had glabrous leaves (Vitis rupestris B38 × 'Horizon' [RH]) and another that had leaf trichomes ('Horizon' × V. cinerea B9 [HC]). Correlations between computer vision and manual visual ratings reached 0.89 in the RH family and 0.43 in the HC family. Additionally, we were able to use the computer vision system prior to sporulation to measure the percent leaf trichome area. We estimate that an experienced rater scoring sporulation would spend at least 90% less time using the computer vision system compared with the manual visual method. This will allow more treatments to be phenotyped in order to better understand the genetic architecture of downy mildew resistance and of leaf trichome density. We anticipate that this computer vision system will find applications in other pathosystems or traits where responses can be imaged with sufficient contrast from the background.
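    The percent-area estimate at the heart of such a pipeline reduces to the fraction of pixels classified as sporulation. A minimal sketch, where the fixed threshold and the assumption that sporulation appears brighter than the leaf disc are illustrative simplifications, not the authors' pipeline:

    ```python
    import numpy as np

    def sporulation_percent(gray, thresh=180):
        """Percent of pixels at or above an intensity threshold.
        Assumes (for illustration) that white, fluffy sporulation is
        brighter than the surrounding leaf disc tissue."""
        mask = np.asarray(gray) >= thresh
        return 100.0 * mask.sum() / mask.size
    ```

    The same area computation applied before inoculation, with a classifier tuned to trichome texture instead of brightness, gives the percent leaf trichome area.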

  3. Ethical, environmental and social issues for machine vision in manufacturing industry

    NASA Astrophysics Data System (ADS)

    Batchelor, Bruce G.; Whelan, Paul F.

    1995-10-01

    Some of the ethical, environmental and social issues relating to the design and use of machine vision systems in manufacturing industry are highlighted. The authors' aim is to emphasize some of the more important issues, and raise general awareness of the need to consider the potential advantages and hazards of machine vision technology. However, in a short article like this, it is impossible to cover the subject comprehensively. This paper should therefore be seen as a discussion document, which it is hoped will provoke more detailed consideration of these very important issues. It follows from an article presented at last year's workshop. Five major topics are discussed: (1) The impact of machine vision systems on the environment; (2) The implications of machine vision for product and factory safety, and the health and well-being of employees; (3) The importance of intellectual integrity in a field requiring a careful balance of advanced ideas and technologies; (4) Commercial and managerial integrity; and (5) The impact of machine vision technology on employment prospects, particularly for people with low skill levels.

  4. Detection and Tracking of Moving Objects with Real-Time Onboard Vision System

    NASA Astrophysics Data System (ADS)

    Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.

    2017-05-01

    Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms which can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimation and compensation of geometric transformations of images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their positions. The results can be applied to onboard vision systems of aircraft, including small and unmanned aircraft.
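    Once the geometric transformation between frames has been estimated and compensated, the detection step commonly reduces to differencing consecutive frames and thresholding. A minimal sketch of that step only (the paper's full algorithm set, including transform estimation and tracking, is more elaborate):

    ```python
    import numpy as np

    def moving_object_mask(prev_frame, curr_frame, thresh=25):
        """Binary mask of pixels that changed between two already-aligned frames.
        Signed arithmetic avoids uint8 wrap-around in the subtraction."""
        diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
        return diff > thresh
    ```

    In a real onboard system the mask would then be cleaned with morphological filtering and grouped into object regions before being handed to the tracker.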

  5. Computer use and vision-related problems among university students in Ajman, United Arab Emirates.

    PubMed

    Shantakumari, N; Eldeeb, R; Sreedharan, J; Gopal, K

    2014-03-01

    The extensive use of computers as a medium of teaching and learning in universities necessitates introspection into the extent of computer related health disorders among the student population. This study was undertaken to assess the pattern of computer usage and related visual problems among university students in Ajman, United Arab Emirates. A total of 500 students studying in Gulf Medical University, Ajman and Ajman University of Science and Technology were recruited into this study. Demographic characteristics, pattern of usage of computers and associated visual symptoms were recorded in a validated self-administered questionnaire. The Chi-square test was used to determine the significance of the observed differences between the variables. The level of statistical significance was set at P < 0.05. The crude odds ratio (OR) was determined using simple binary logistic regression and the adjusted OR was calculated using multiple logistic regression. The mean age of participants was 20.4 (3.2) years. The analysis of racial data reveals that 50% (236/471) of students were from the Middle East, 32% (151/471) from other parts of Asia, 11% (52/471) from Africa, 4% (19/471) from America and 3% (14/471) from Europe. The most common visual problems reported among computer users were headache - 53.3% (251/471), burning sensation in the eyes - 54.8% (258/471) and tired eyes - 48% (226/471). Female students were found to be at a higher risk. Nearly 72% of students reported frequent interruption of computer work. Headache caused interruption of work in 43.85% (110/168) of the students while tired eyes caused interruption of work in 43.5% (98/168) of the students. When the screen was viewed at a distance of more than 50 cm, the prevalence of headaches decreased by 38% (50-100 cm - OR: 0.62, 95% confidence interval [CI]: 0.42-0.92). Prevalence of tired eyes increased by 89% when screen filters were not used (OR: 1.894, 95% CI: 1.065-3.368). 
High prevalence of vision related problems was noted among university students. Sustained periods of close screen work without screen filters were found to be associated with occurrence of the symptoms and increased interruptions of work of the students. There is a need to increase the ergonomic awareness among students and corrective measures need to be implemented to reduce the impact of computer related vision problems.
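    The crude odds ratios and confidence intervals reported above come from standard 2×2-table arithmetic. A sketch of the computation using Woolf's method for the CI; the example counts below are hypothetical, not the study's data:

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Crude odds ratio and confidence interval from a 2x2 table.
        a = exposed cases, b = exposed non-cases,
        c = unexposed cases, d = unexposed non-cases."""
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf's SE of ln(OR)
        low = math.exp(math.log(or_) - z * se_log)
        high = math.exp(math.log(or_) + z * se_log)
        return or_, low, high
    ```

    Adjusted ORs, in contrast, come from a multiple logistic regression that holds the other exposures constant, which is why the study reports both.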

  6. Neurobionics and the brain-computer interface: current applications and future horizons.

    PubMed

    Rosenfeld, Jeffrey V; Wong, Yan Tat

    2017-05-01

    The brain-computer interface (BCI) is an exciting advance in neuroscience and engineering. In a motor BCI, electrical recordings from the motor cortex of paralysed humans are decoded by a computer and used to drive robotic arms or to restore movement in a paralysed hand by stimulating the muscles in the forearm. Simultaneously integrating a BCI with the sensory cortex will further enhance dexterity and fine control. BCIs are also being developed to: provide ambulation for paraplegic patients through controlling robotic exoskeletons; restore vision in people with acquired blindness; detect and control epileptic seizures; and improve control of movement disorders and memory enhancement. High-fidelity connectivity with small groups of neurons requires microelectrode placement in the cerebral cortex. Electrodes placed on the cortical surface are less invasive but produce inferior fidelity. Scalp surface recording using electroencephalography is much less precise. BCI technology is still in an early phase of development and awaits further technical improvements and larger multicentre clinical trials before wider clinical application and impact on the care of people with disabilities. There are also many ethical challenges to explore as this technology evolves.

  7. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.

  8. Building a Semantic Framework for eScience

    NASA Astrophysics Data System (ADS)

    Movva, S.; Ramachandran, R.; Maskey, M.; Li, X.

    2009-12-01

    The e-Science vision focuses on the use of advanced computing technologies to support scientists. Recent research efforts in this area have focused primarily on “enabling” use of infrastructure resources for both data and computational access, especially in the Geosciences. One of the gaps in existing e-Science efforts has been the failure to incorporate stable semantic technologies within the design process itself. In this presentation, we describe our effort in designing a framework for e-Science built using Service Oriented Architecture. Our framework provides users capabilities to create science workflows and mine distributed data. Our e-Science framework is being designed around a mass market tool to promote reusability across many projects. Semantics is an integral part of this framework, and our design goal is to leverage the latest stable semantic technologies. The use of these stable semantic technologies will provide users of our framework with useful features such as: allowing search engines to find their content with RDFa tags; creating an RDF triple store for their content; creating RDF end points to share with others; and semantically mashing their content with other online content available as RDF end points.

  9. Computer Vision Photogrammetry for Underwater Archaeological Site Recording in a Low-Visibility Environment

    NASA Astrophysics Data System (ADS)

    Van Damme, T.

    2015-04-01

    Computer Vision Photogrammetry allows archaeologists to accurately record underwater sites in three dimensions using simple two-dimensional picture or video sequences, automatically processed in dedicated software. In this article, I share my experience in working with one such software package, namely PhotoScan, to record a Dutch shipwreck site. In order to demonstrate the method's reliability and flexibility, the site in question is reconstructed from simple GoPro footage, captured in low-visibility conditions. Based on the results of this case study, Computer Vision Photogrammetry compares very favourably to manual recording methods both in recording efficiency, and in the quality of the final results. In a final section, the significance of Computer Vision Photogrammetry is then assessed from a historical perspective, by placing the current research in the wider context of about half a century of successful use of Analytical and later Digital photogrammetry in the field of underwater archaeology. I conclude that while photogrammetry has been used in our discipline for several decades now, for various reasons the method was only ever used by a relatively small percentage of projects. This is likely to change in the near future since, compared to the `traditional' photogrammetry approaches employed in the past, today Computer Vision Photogrammetry is easier to use, more reliable and more affordable than ever before, while at the same time producing more accurate and more detailed three-dimensional results.

  10. The New Realm of 3-D Vision

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Dimension Technologies Inc. (DTI) developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.

  11. Reservoir Maintenance and Development Task Report for the DOE Geothermal Technologies Office GeoVision Study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowry, Thomas Stephen; Finger, John T.; Carrigan, Charles R.

    This report documents the key findings from the Reservoir Maintenance and Development (RM&D) Task of the U.S. Department of Energy's (DOE) Geothermal Technologies Office (GTO) Geothermal Vision Study (GeoVision Study). The GeoVision Study had the objective of conducting analyses of future geothermal growth based on sets of current and future geothermal technology developments. The RM&D Task is one of seven tasks within the GeoVision Study, the others being Exploration and Confirmation, Potential to Penetration, Institutional Market Barriers, Environmental and Social Impacts, Thermal Applications, and Hybrid Systems. The full set of findings and the details of the GeoVision Study can be found in the final GeoVision Study report on the DOE-GTO website. As applied here, RM&D refers to the activities associated with developing, exploiting, and maintaining a known geothermal resource. It assumes that the site has already been vetted and that the resource has been evaluated to be of sufficient quality to move towards full-scale development. It also assumes that the resource is to be developed for power generation, as opposed to low-temperature or direct-use applications. This document presents the key factors influencing RM&D from both a technological and operational standpoint and provides a baseline of its current state. It also looks forward to describe areas of research and development that must be pursued if the development of geothermal energy is to reach its full potential.

  12. Visions 2020.2: Student Views on Transforming Education and Training through Advanced Technologies

    ERIC Educational Resources Information Center

    US Department of Education, 2004

    2004-01-01

    The U.S. Departments of Commerce and Education (who co-chair the NSTC Working Group) and NetDay formed a partnership aimed at analyzing K-12 student views about technology for learning. These views are analyzed in this second report, "Visions 2020.2: Student Views on Transforming Education and Training Through Advanced Technologies." In…

  13. Giving Machines the Vision

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Amherst Systems manufactures foveal machine vision technology and systems commercially available to end-users and system integrators. This technology was initially developed under NASA contracts NAS9-19335 (Johnson Space Center) and NAS1-20841 (Langley Research Center). This technology is currently being delivered to university research facilities and military sites. More information may be found at www.amherst.com.

  14. Framing Innovation: Does an Instructional Vision Help Superintendents Gain Acceptance for a Large-Scale Technology Initiative?

    ERIC Educational Resources Information Center

    Flanagan, Gina E.

    2014-01-01

    There is limited research that outlines how a superintendent's instructional vision can help to gain acceptance of a large-scale technology initiative. This study explored how superintendents gain acceptance for a large-scale technology initiative (specifically a 1:1 device program) through various leadership actions. The role of the instructional…

  15. Machine Vision Technology for the Forest Products Industry

    Treesearch

    Richard W. Conners; D. Earl Kline; Philip A. Araman; Thomas T. Drayer

    1997-01-01

    From forest to finished product, wood is moved from one processing stage to the next, subject to the decisions of individuals along the way. While this process has worked for hundreds of years, the technology exists today to provide more complete information to the decision makers. Virginia Tech has developed this technology, creating a machine vision prototype for...

  16. Visual-conformal display format for helicopter guidance

    NASA Astrophysics Data System (ADS)

    Doehler, H.-U.; Schmerwitz, Sven; Lueken, Thomas

    2014-06-01

    Helicopter guidance in situations where natural vision is reduced is still a challenging task. Besides newly available sensors, which are able to "see" through darkness, fog and dust, display technology remains one of the key issues of pilot assistance systems. As long as we have pilots within aircraft cockpits, we have to keep them informed about the outside situation. "Situational awareness" in humans is driven mainly by the visual channel. Therefore, display systems which are able to cross-fade seamlessly from natural vision to artificial computer vision and vice versa are of greatest interest within this context. Helmet-mounted displays (HMD) have this property when they apply a head-tracker for measuring the pilot's head orientation relative to the aircraft reference frame. Together with the aircraft's position and orientation relative to the world's reference frame, the on-board graphics computer can generate images which are perfectly aligned with the outside world. We call image elements which match the outside world "visual-conformal". Published display formats for helicopter guidance in degraded visual environments apply mostly 2D symbologies, which fall far short of what is possible. We propose a perspective 3D symbology for a head-tracked HMD which shows as many visual-conformal elements as possible. We implemented and tested our proposal within our fixed-base cockpit simulator as well as in our flying helicopter simulator (FHS). Recently conducted simulation trials with experienced helicopter pilots provide first evaluation results for our proposal.
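    The alignment described above comes down to chaining coordinate transforms: world to aircraft (from the navigation solution), aircraft to head (from the head-tracker), then a perspective projection onto the display. A minimal sketch, with assumed axis conventions and an invented focal length, not the actual FHS implementation:

    ```python
    import numpy as np

    def rot_z(yaw):
        """Rotation about the vertical axis by `yaw` radians."""
        c, s = np.cos(yaw), np.sin(yaw)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def world_to_display(p_world, p_aircraft, R_world_to_body, R_body_to_head, f=1000.0):
        """Project a world point into head-fixed display coordinates.

        p_world / p_aircraft : 3-vectors in the world frame
        R_* : 3x3 rotations from the navigation solution and the head-tracker
        f : assumed focal length of the HMD projection, in pixels
        Axis convention assumed: x forward, y right, z down.
        Returns (u, v) display coordinates, or None if the point is behind the viewer.
        """
        p_body = R_world_to_body @ (p_world - p_aircraft)   # world -> aircraft frame
        p_head = R_body_to_head @ p_body                    # aircraft -> head frame
        x, y, z = p_head
        if x <= 0:
            return None                                     # behind the pilot's head
        return (float(f * y / x), float(f * z / x))         # pinhole projection

    # Aircraft at the origin with zero yaw, pilot looking straight ahead:
    # a point 100 m directly ahead projects to the display center.
    uv = world_to_display(np.array([100.0, 0.0, 0.0]),
                          np.array([0.0, 0.0, 0.0]),
                          rot_z(0.0), rot_z(0.0))
    print(uv)  # (0.0, 0.0)
    ```

    A visual-conformal symbol is simply one drawn at the (u, v) produced by this chain every frame, so it stays locked to the outside world as the head and aircraft move.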

  17. Dynamic Estimation of Rigid Motion from Perspective Views via Recursive Identification of Exterior Differential Systems with Parameters on a Topological Manifold

    DTIC Science & Technology

    1994-02-15

    O. Faugeras. Three-dimensional vision, a geometric viewpoint. MIT Press, 1993. [19] O. D. Faugeras and S. Maybank. Motion from point matches...multiplicity of solutions. Int. J. of Computer Vision, 1990. [20] O.D. Faugeras, Q.T. Luong, and S.J. Maybank. Camera self-calibration: theory and...Kalman filter-based algorithms for estimating depth from image sequences. Int. J. of Computer Vision, 1989. [41] S. Maybank. Theory of

  18. Computational Vision: A Critical Review

    DTIC Science & Technology

    1989-10-01

    Optic News, 15:9-25, 1989. [8] H. B. Barlow and R. W. Levick. The mechanism of directional selectivity in the rabbit's retina. J. Physiol., 173:477...comparison, other formulations, e.g., [64], used 16... Figure 7: An illustration of the aperture problem. Left: a bar E is...Ballard and C. M. Brown. Computer Vision. Prentice-Hall, Englewood Cliffs, NJ, 1982. [7] D. H. Ballard, R. C. Nelson, and B. Yamauchi. Animate vision

  19. Marking parts to aid robot vision

    NASA Technical Reports Server (NTRS)

    Bales, J. W.; Barker, L. K.

    1981-01-01

    The premarking of parts for subsequent identification by a robot vision system appears to be beneficial as an aid in the automation of certain tasks, such as construction in space. A simple, color-coded marking system is presented which allows a computer vision system to locate an object, calculate its orientation, and determine its identity. Such a system has the potential to operate accurately, and because the computer shape analysis problem has been simplified, it can operate in real time.
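    One way such a color-coded scheme can yield location and orientation — a purely illustrative sketch, not the paper's actual marking system — is to detect two distinctly colored marks on the part, take the primary mark's centroid as the location, and take the direction to the secondary mark as the in-plane heading:

    ```python
    import math

    def mark_pose(p_primary, p_secondary):
        """Estimate part position and in-plane orientation from two detected
        color-coded marks (image coordinates). The primary mark gives location;
        the vector toward the secondary mark gives orientation in degrees.
        The mark colors/IDs are assumed to come from an upstream color
        classification step, which is what simplifies the shape analysis.
        """
        x0, y0 = p_primary
        x1, y1 = p_secondary
        theta = math.degrees(math.atan2(y1 - y0, x1 - x0))
        return (x0, y0), theta

    # Hypothetical detections in pixel coordinates
    position, angle = mark_pose((120.0, 80.0), (170.0, 130.0))
    print(position, angle)  # (120.0, 80.0) 45.0
    ```

    Because pose reduces to two centroid fixes and one `atan2`, the per-frame cost is trivial, which is consistent with the abstract's claim of real-time operation.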

  1. eHealth technologies to support nutrition and physical activity behaviors in diabetes self-management

    PubMed Central

    Rollo, Megan E; Aguiar, Elroy J; Williams, Rebecca L; Wynne, Katie; Kriss, Michelle; Callister, Robin; Collins, Clare E

    2016-01-01

    Diabetes is a chronic, complex condition requiring sound knowledge and self-management skills to optimize glycemic control and health outcomes. Dietary intake and physical activity are key diabetes self-management (DSM) behaviors that require tailored education and support. Electronic health (eHealth) technologies have a demonstrated potential for assisting individuals with DSM behaviors. This review provides examples of technologies used to support nutrition and physical activity behaviors in the context of DSM. Technologies covered include those widely used for DSM, such as web-based programs and mobile phone and smartphone applications. In addition, examples of novel tools such as virtual and augmented reality, video games, computer vision for dietary carbohydrate monitoring, and wearable devices are provided. The challenges to, and facilitators for, the use of eHealth technologies in DSM are discussed. Strategies to support the implementation of eHealth technologies within practice and suggestions for future research to enhance nutrition and physical activity behaviors as a part of broader DSM are provided. PMID:27853384

  2. A Practical Solution Using A New Approach To Robot Vision

    NASA Astrophysics Data System (ADS)

    Hudson, David L.

    1984-01-01

    Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower-cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS experience lower development cost when compared with conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens robot vision to more industrial applications that were previously impractical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost-effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another, and lenses and light sources from yet others.
    The user then had to assemble the pieces, and in most instances write all of their own software to test, analyze and process the vision application. The second and most common approach was to contract with the vision equipment vendor for the development and installation of a turnkey inspection or manufacturing system. The robot user and their company paid a premium for their vision system in an effort to assure the success of the system. Since 1981, emphasis on robotics has skyrocketed. New groups have been formed in many manufacturing companies with the charter to learn about, test and initially apply new robot and automation technologies. Machine vision is one of the new technologies being tested and applied. This focused interest has created a need for a robot vision system that makes it easy for manufacturing engineers to learn about, test, and implement a robot vision application. A newly developed vision system addresses those needs. The Vision Development System (VDS) is a complete hardware and software product for the development and testing of robot vision applications. A complementary, low-cost Target Application System (TASK) runs the application program developed with the VDS. An actual robot vision application that demonstrates inspection and pre-assembly for keyboard manufacturing is used to illustrate the VDS/TASK approach.

  3. The challenge of ubiquitous computing in health care: technology, concepts and solutions. Findings from the IMIA Yearbook of Medical Informatics 2005.

    PubMed

    Bott, O J; Ammenwerth, E; Brigl, B; Knaup, P; Lang, E; Pilgram, R; Pfeifer, B; Ruderich, F; Wolff, A C; Haux, R; Kulikowski, C

    2005-01-01

    To review recent research efforts in the field of ubiquitous computing in health care. To identify current research trends and further challenges for medical informatics. Analysis of the contents of the Yearbook of Medical Informatics 2005 of the International Medical Informatics Association (IMIA). The Yearbook of Medical Informatics 2005 includes 34 original papers selected from 22 peer-reviewed scientific journals related to several distinct research areas: health and clinical management, patient records, health information systems, medical signal processing and biomedical imaging, decision support, knowledge representation and management, education and consumer informatics, as well as bioinformatics. A special section on ubiquitous health care systems is devoted to recent developments in the application of ubiquitous computing in health care. Besides additional synoptic reviews of each of the sections, the Yearbook includes invited reviews concerning E-Health strategies, primary care informatics and wearable healthcare. Several publications demonstrate the potential of ubiquitous computing to enhance the effectiveness of health services delivery and organization. But ubiquitous computing is also a societal challenge, caused by the surrounding but unobtrusive character of this technology. Contributions from nearly all of the established sub-disciplines of medical informatics are needed to turn the visions of this promising new research field into reality.

  4. Data Acquisition and Mass Storage

    NASA Astrophysics Data System (ADS)

    Vande Vyvre, P.

    2004-08-01

    The experiments performed at supercolliders will constitute a new challenge in several disciplines of High Energy Physics and Information Technology. This will definitely be the case for data acquisition and mass storage. The microelectronics, communication, and computing industries are maintaining an exponential increase of the performance of their products. The market of commodity products remains the largest and the most competitive market of technology products. This constitutes a strong incentive to use these commodity products extensively as components to build the data acquisition and computing infrastructures of the future generation of experiments. The present generation of experiments in Europe and in the US already constitutes an important step in this direction. The experience acquired in the design and the construction of the present experiments has to be complemented by a large R&D effort executed with good awareness of industry developments. The future experiments will also be expected to follow major trends of our present world: deliver physics results faster and become more and more visible and accessible. The present evolution of the technologies and the burgeoning of GRID projects indicate that these trends will be made possible. This paper includes a brief overview of the technologies currently used for the different tasks of the experimental data chain: data acquisition, selection, storage, processing, and analysis. The major trends of the computing and networking technologies are then indicated with particular attention paid to their influence on the future experiments. Finally, the vision of future data acquisition and processing systems and their promise for future supercolliders is presented.

  5. AstroCV: Astronomy computer vision library

    NASA Astrophysics Data System (ADS)

    González, Roberto E.; Muñoz, Roberto P.; Hernández, Cristian A.

    2018-04-01

    AstroCV processes and analyzes big astronomical datasets, and is intended to provide a community repository of high performance Python and C++ algorithms used for image processing and computer vision. The library offers methods for object recognition, segmentation and classification, with emphasis in the automatic detection and classification of galaxies.
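    The detection step in pipelines of this kind is commonly built on thresholding followed by connected-component labeling. The following toy sketch shows only that core idea — it is not AstroCV's actual API, and real pipelines add background estimation, deblending, and morphology-based classification:

    ```python
    def detect_sources(image, threshold):
        """Find connected groups of pixels above `threshold` in a 2-D list.
        Returns a list of sources, each a list of (row, col) pixel coordinates.
        """
        h, w = len(image), len(image[0])
        seen = [[False] * w for _ in range(h)]
        sources = []
        for y in range(h):
            for x in range(w):
                if image[y][x] > threshold and not seen[y][x]:
                    # Depth-first flood fill over a 4-connected neighborhood
                    stack, pixels = [(y, x)], []
                    seen[y][x] = True
                    while stack:
                        cy, cx = stack.pop()
                        pixels.append((cy, cx))
                        for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                            if 0 <= ny < h and 0 <= nx < w \
                                    and image[ny][nx] > threshold and not seen[ny][nx]:
                                seen[ny][nx] = True
                                stack.append((ny, nx))
                    sources.append(pixels)
        return sources

    # Two bright blobs on a dark background
    frame = [[0, 0, 0, 0, 0],
             [0, 9, 9, 0, 0],
             [0, 0, 0, 0, 8],
             [0, 0, 0, 8, 8]]
    print(len(detect_sources(frame, threshold=5)))  # 2
    ```

    Classification (e.g. star versus galaxy) would then operate on per-source measurements such as pixel count, flux, and shape, which is where the GPU-accelerated C++/Python algorithms of a library like AstroCV earn their keep on big datasets.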

  6. Scene analysis for effective visual search in rough three-dimensional-modeling scenes

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Hu, Xiaopeng

    2016-11-01

    Visual search is a fundamental technology in the computer vision community. It is difficult to find an object in complex scenes when similar distracters exist in the background. We propose a target search method for rough three-dimensional-modeling scenes based on a vision salience theory and a camera imaging model. We define the salience of objects (or features) and explain how salience measurements of objects are calculated. We also present a type of search path that guides to the target through salient objects. Along the search path, as each previous object is localized, the search region of each subsequent object decreases; this region is calculated through the imaging model and an optimization method. The experimental results indicate that the proposed method is capable of resolving the ambiguities resulting from distracters containing visual features similar to the target, leading to an improvement of search speed by over 50%.
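    The ranking idea behind such a search path can be sketched crudely as follows, approximating an object's salience by its feature vector's average distance to the other scene features (the paper's measure also incorporates the camera imaging model; the scene and feature vectors below are invented):

    ```python
    def salience(features, population):
        """Distinctiveness of one feature vector: its mean Euclidean distance
        to all other feature vectors in the scene. A stand-in for the paper's
        salience measure."""
        def dist(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
        others = [p for p in population if p is not features]
        return sum(dist(features, p) for p in others) / len(others)

    def search_path(objects):
        """Order (name, features) pairs most-salient first, so early,
        unambiguous detections can shrink the search region for later,
        harder-to-distinguish ones."""
        feats = [f for _, f in objects]
        return sorted(objects, key=lambda of: -salience(of[1], feats))

    # Invented scene: one distinctive object among near-identical distracters
    scene = [("red door", (0.9, 0.1, 0.1)),
             ("grey wall", (0.5, 0.5, 0.5)),
             ("grey pillar", (0.55, 0.5, 0.5))]
    print([name for name, _ in search_path(scene)])  # "red door" comes first
    ```

    Visiting the distinctive object first is what lets the method disambiguate the similar-looking distracters: once the salient landmark is localized, the geometric search region for each remaining object shrinks.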

  7. An Advanced Framework for Improving Situational Awareness in Electric Power Grid Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Zhou, Ning

    With the deployment of new smart grid technologies and the penetration of renewable energy in power systems, significant uncertainty and variability are being introduced into power grid operation. Traditionally, the Energy Management System (EMS) operates the power grid in a deterministic mode, and thus will not be sufficient for the future control center in a stochastic environment with faster dynamics. One of the main challenges is to improve situational awareness. This paper reviews the current status of power grid operation and presents a vision of improving wide-area situational awareness for a future control center. An advanced framework, consisting of parallel state estimation, state prediction, parallel contingency selection, parallel contingency analysis, and advanced visual analytics, is proposed to provide capabilities needed for better decision support by utilizing high performance computing (HPC) techniques and advanced visual analytic techniques. Research results are presented to support the proposed vision and framework.

  8. Development of dog-like retrieving capability in a ground robot

    NASA Astrophysics Data System (ADS)

    MacKenzie, Douglas C.; Ashok, Rahul; Rehg, James M.; Witus, Gary

    2013-01-01

    This paper presents the Mobile Intelligence Team's approach to addressing the CANINE outdoor ground robot competition. The competition required developing a robot that provided retrieving capabilities similar to a dog, while operating fully autonomously in unstructured environments. The vision team consisted of Mobile Intelligence, the Georgia Institute of Technology, and Wayne State University. Important computer vision aspects of the project were the ability to quickly learn the distinguishing characteristics of novel objects, to search images for the object as the robot drove a search pattern, to identify people near the robot for safe operations, to correctly identify the object among distractors, and to localize the object for retrieval. The classifier used to identify the objects is discussed, including an analysis of its performance, and an overview of the entire system architecture is presented. A discussion of the robot's performance in the competition demonstrates the system's successes in real-world testing.

  9. Fusion of Synthetic and Enhanced Vision for All-Weather Commercial Aviation Operations

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence, III

    2007-01-01

    NASA is developing revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck during low visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was not adversely impacted by the display concepts, although the addition of Enhanced Vision did not, of itself, provide an improvement in runway incursion detection.

  10. Therapist-Assisted Rehabilitation of Visual Function and Hemianopia after Brain Injury: Intervention Study on the Effect of the Neuro Vision Technology Rehabilitation Program.

    PubMed

    Rasmussen, Rune Skovgaard; Schaarup, Anne Marie Heltoft; Overgaard, Karsten

    2018-02-27

    Serious and often lasting vision impairments affect 30% to 35% of people following stroke. Vision may be considered the most important sense in humans, and even small permanent injuries can drastically reduce quality of life. Restoration of visual field impairments occurs only to a small extent during the first month after brain damage, and therefore the time window for spontaneous improvement is limited. One month after a brain injury causing visual impairment, patients usually experience chronically impaired vision, and the need for compensatory vision rehabilitation is substantial. The purpose of this study is to investigate whether rehabilitation with Neuro Vision Technology will result in a significant and lasting improvement in functional capacity in persons with chronic visual impairments after brain injury. Improving eyesight is expected to increase both physical and mental functioning, thus improving quality of life. This is a prospective open-label trial in which participants with chronic visual field impairments are examined before and after the intervention. Participants typically suffer from stroke or traumatic brain injury and will be recruited from hospitals and The Institute for the Blind and Partially Sighted. Treatment is based on Neuro Vision Technology, a supervised training course in which participants are trained in compensatory techniques using specially designed equipment. Through the Neuro Vision Technology procedure, the vision problems of each individual are carefully investigated, and personal data is used to organize individual training sessions. Cognitive face-to-face assessments and self-assessed questionnaires about both life and vision quality are also administered before and after the training. Funding was provided in June 2017. Results are expected to be available in 2020. The sample size is calculated at 23 participants.
    Due to age, difficulty in transport, and the time-consuming intervention, a dropout rate of up to 25% is expected; thus, we aim to include at least 29 participants. This investigation will evaluate the effects of Neuro Vision Technology therapy on compensatory vision rehabilitation. Additionally, quality of life and cognitive improvements associated with increased quality of life will be explored. ClinicalTrials.gov NCT03160131; https://clinicaltrials.gov/ct2/show/NCT03160131 (Archived by WebCite at http://www.webcitation.org/6x3f5HnCv). ©Rune Skovgaard Rasmussen, Anne Marie Heltoft Schaarup, Karsten Overgaard. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 27.02.2018.

  11. Topographic Mapping of Residual Vision by Computer

    ERIC Educational Resources Information Center

    MacKeben, Manfred

    2008-01-01

    Many persons with low vision have diseases that damage the retina only in selected areas, which can lead to scotomas (blind spots) in perception. The most frequent of these diseases is age-related macular degeneration (AMD), in which foveal vision is often impaired by a central scotoma that impairs vision of fine detail and causes problems with…

  12. Integrating interactive computational modeling in biology curricula.

    PubMed

    Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.
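    Models of the kind Cell Collective lets students build and simulate are typically logical (Boolean) networks, where each component is on or off and is updated by a logic rule over the states of its regulators. A toy sketch of a synchronous Boolean update, using an invented two-gene mutual-repression switch rather than any curated Cell Collective model:

    ```python
    def step(state, rules):
        """Synchronously update a Boolean network: each node's next value is
        its logic rule applied to the *current* state of all nodes."""
        return {node: rule(state) for node, rule in rules.items()}

    # Hypothetical mutual-repression switch: each gene turns the other off
    rules = {
        "geneA": lambda s: not s["geneB"],
        "geneB": lambda s: not s["geneA"],
    }

    state = {"geneA": True, "geneB": True}
    for _ in range(3):
        state = step(state, rules)
    print(state)
    ```

    "Building and breaking" a model in this style means editing the rules (e.g. knocking out a gene by forcing it False) and re-simulating, which is exactly the kind of hands-on experiment the class uses in place of memorization, with no calculus prerequisite.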

  13. Transforming American Education

    ERIC Educational Resources Information Center

    Horn, Michael B.; Mackey, Katherine

    2011-01-01

    In this article the authors accept as a given the National Education Technology Plan's vision of a transformed education system powered by technology such that learners receive personalized and engaging learning experiences, and where assessment, teaching, infrastructure, and productivity are redefined. The article analyzes this vision of a…

  14. Evaluation of Fused Synthetic and Enhanced Vision Display Concepts for Low-Visibility Approach and Landing

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III; Wilz, Susan J.

    2009-01-01

    NASA is developing revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next generation air transportation system. A piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. Improvements in lateral path control performance were realized when the Head-Up Display concepts included a tunnel, independent of the imagery (enhanced vision or fusion of enhanced and synthetic vision) presented with it. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions were neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, of itself, provide an improvement in runway incursion detection without being specifically tailored for this application.

  15. Gait disorder rehabilitation using vision and non-vision based sensors: A systematic review

    PubMed Central

    Ali, Asraf; Sundaraj, Kenneth; Ahmad, Badlishah; Ahamed, Nizam; Islam, Anamul

    2012-01-01

    Even though the number of rehabilitation guidelines has never been greater, uncertainty persists regarding the efficiency and effectiveness of the rehabilitation of gait disorders. Progress has been hindered by the lack of accurate measurements of gait disorders. Thus, this article reviews rehabilitation systems for gait disorders that use vision-based sensor technologies, non-vision-based sensor technologies, and combinations of the two. All papers published in the English language between 1990 and June 2012 that had the phrases “gait disorder”, “rehabilitation”, “vision sensor”, or “non vision sensor” in the title, abstract, or keywords were identified from the SpringerLink, ELSEVIER, PubMed, and IEEE databases. Synonyms of these phrases and the logical operators “and”, “or”, and “not” were also used in the search procedure. Of the 91 published articles found, this review identified 84 that described the rehabilitation of gait disorders using different types of sensor technologies. This literature set presented strong evidence for the development of rehabilitation systems using markerless vision-based sensor technology. We therefore believe that the information contained in this review will assist the progress of the development of rehabilitation systems for human gait disorders. PMID:22938548

  16. Artificial intelligence, expert systems, computer vision, and natural language processing

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  17. Computing Real-time Streamflow Using Emerging Technologies: Non-contact Radars and the Probability Concept

    NASA Astrophysics Data System (ADS)

    Fulton, J. W.; Bjerklie, D. M.; Jones, J. W.; Minear, J. T.

    2015-12-01

    Measuring streamflow and developing and maintaining rating curves at new streamgaging stations are both time-consuming and problematic. Hydro 21 was an initiative by the U.S. Geological Survey to provide vision and leadership to identify and evaluate new technologies and methods that had the potential to change the way in which streamgaging is conducted. Since 2014, additional trials have been conducted to evaluate some of the methods promoted by the Hydro 21 Committee. Emerging technologies such as continuous-wave radars and computationally efficient methods such as the Probability Concept require significantly less field time, promote real-time velocity and streamflow measurements, and apply to unsteady flow conditions such as looped ratings and unsteady flood flows. Portable and fixed-mount radars have advanced beyond the development phase, are cost effective, and are readily available in the marketplace. The Probability Concept is based on an alternative velocity-distribution equation developed by C.-L. Chiu, who pioneered the concept. By measuring the surface-water velocity and correcting for environmental influences such as wind drift, radars offer a reliable alternative for measuring and computing real-time streamflow for a variety of hydraulic conditions. If successful, these tools may allow us to establish ratings more efficiently, assess unsteady flow conditions, and report real-time streamflow at new streamgaging stations.
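    Chiu's probability-concept relation reduces a radar-measured surface velocity to a mean channel velocity through a single entropy parameter M. A minimal sketch, assuming the surface velocity approximates the maximum velocity and using a hypothetical, site-calibrated value of M (the wind-drift correction mentioned in the abstract is omitted):

    ```python
    import numpy as np

    def chiu_mean_velocity(u_max, M):
        """Mean velocity from maximum velocity via Chiu's probability concept:
        u_mean = phi(M) * u_max, with phi(M) = e^M / (e^M - 1) - 1/M."""
        phi = np.exp(M) / (np.exp(M) - 1.0) - 1.0 / M
        return phi * u_max

    # Radar surface velocity (m/s) taken as an estimate of u_max (assumption);
    # M = 2.1 is a hypothetical site-calibrated value.
    u_mean = chiu_mean_velocity(u_max=2.0, M=2.1)  # ~1.33 m/s
    ```

    Multiplying the mean velocity by the channel's cross-sectional area would then yield streamflow.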

  18. The Effect of the Usage of Computer-Based Assistive Devices on the Functioning and Quality of Life of Individuals Who Are Blind or Have Low Vision

    ERIC Educational Resources Information Center

    Rosner, Yotam; Perlman, Amotz

    2018-01-01

    Introduction: The Israel Ministry of Social Affairs and Social Services subsidizes computer-based assistive devices for individuals with visual impairments (that is, those who are blind or have low vision) to assist these individuals in their interactions with computers and thus to enhance their independence and quality of life. The aim of this…

  19. Software for Real-Time Analysis of Subsonic Test Shot Accuracy

    DTIC Science & Technology

    2014-03-01

    used the C++ programming language, the Open Source Computer Vision ( OpenCV ®) software library, and Microsoft Windows® Application Programming...video for comparison through OpenCV image analysis tools. Based on the comparison, the software then computed the coordinates of each shot relative to...DWB researchers wanted to use the Open Source Computer Vision ( OpenCV ) software library for capturing and analyzing frames of video. OpenCV contains
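    The shot-location step described above can be approximated with plain frame differencing. A minimal NumPy sketch of the idea (the actual software used OpenCV primitives for capture and comparison; the array sizes and threshold here are illustrative):

    ```python
    import numpy as np

    def detect_shot(before, after, threshold=50):
        """Centroid of pixels that changed between a reference frame and the
        current frame: a plain-differencing stand-in for the kind of image
        comparison the software performed with OpenCV."""
        diff = np.abs(after.astype(int) - before.astype(int))
        ys, xs = np.nonzero(diff > threshold)
        if len(xs) == 0:
            return None  # no new shot in this frame
        return float(xs.mean()), float(ys.mean())  # (x, y) image coordinates

    before = np.zeros((100, 100), dtype=np.uint8)   # reference target frame
    after = before.copy()
    after[40:44, 60:64] = 255                       # simulated new bullet hole
    hit = detect_shot(before, after)                # (61.5, 41.5)
    ```

    Shot coordinates relative to the point of aim then follow by subtracting the aim point's pixel coordinates.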

  20. [Ophthalmologist and "computer vision syndrome"].

    PubMed

    Barar, A; Apatachioaie, Ioana Daniela; Apatachioaie, C; Marceanu-Brasov, L

    2007-01-01

    The authors collected the data available on the Internet about a subject that we consider totally ignored in the Romanian scientific literature and unexpectedly insufficiently treated in the specialized ophthalmologic literature. Known in the specialty literature under the generic name of "Computer vision syndrome", it is defined by the American Optometric Association as a complex of eye and vision problems related to activities that stress near vision and which are experienced during, or in relation to, computer use. During consultations we hear frequent complaints of eye strain - asthenopia, headaches, blurred distance and/or near vision, dry and irritated eyes, slow refocusing, neck and backache, photophobia, and double vision - but because of the lack of information, we overlook them too easily, without going thoroughly into the real motives. In most developed countries, there are recommendations issued by renowned medical associations with regard to the definition, the diagnosis, and the methods for the prevention, treatment, and periodical control of the symptoms found in computer users, in conjunction with extremely detailed ergonomic legislation. We found that these problems attract far too little interest in our country. We would like to rouse the interest of our ophthalmologist colleagues in understanding and recognizing these symptoms and in treating, or at least improving, them through specialized measures or through cooperation with our specialist occupational medicine colleagues.

  1. Operator vision aids for space teleoperation assembly and servicing

    NASA Technical Reports Server (NTRS)

    Brooks, Thurston L.; Ince, Ilhan; Lee, Greg

    1992-01-01

    This paper investigates concepts for visual operator aids required for effective telerobotic control. Operator visual aids, as defined here, mean any operational enhancement that improves man-machine control through the visual system. These concepts were derived as part of a study of vision issues for space teleoperation. Extensive literature on teleoperation, robotics, and human factors was surveyed to definitively specify appropriate requirements. This paper presents these visual aids in three general categories of camera/lighting functions, display enhancements, and operator cues. In the area of camera/lighting functions concepts are discussed for: (1) automatic end effector or task tracking; (2) novel camera designs; (3) computer-generated virtual camera views; (4) computer assisted camera/lighting placement; and (5) voice control. In the technology area of display aids, concepts are presented for: (1) zone displays, such as imminent collision or indexing limits; (2) predictive displays for temporal and spatial location; (3) stimulus-response reconciliation displays; (4) graphical display of depth cues such as 2-D symbolic depth, virtual views, and perspective depth; and (5) view enhancements through image processing and symbolic representations. Finally, operator visual cues (e.g., targets) that help identify size, distance, shape, orientation and location are discussed.

  2. High-fidelity, low-cost, automated method to assess laparoscopic skills objectively.

    PubMed

    Gray, Richard J; Kahol, Kanav; Islam, Gazi; Smith, Marshall; Chapital, Alyssa; Ferrara, John

    2012-01-01

    We sought to define the extent to which a motion analysis-based assessment system constructed with simple equipment could measure technical skill objectively and quantitatively. An "off-the-shelf" digital video system was used to capture the hand and instrument movement of surgical trainees (beginner level = PGY-1, intermediate level = PGY-3, and advanced level = PGY-5/fellows) while they performed a peg transfer exercise. The video data were passed through a custom computer vision algorithm that analyzed incoming pixels to measure movement smoothness objectively. The beginner-level group had the poorest performance, whereas those in the advanced group generated the highest scores. Intermediate-level trainees scored significantly (p < 0.04) better than beginner trainees. Advanced-level trainees scored significantly better than intermediate-level trainees and beginner-level trainees (p < 0.04 and p < 0.03, respectively). A computer vision-based analysis of surgical movements provides an objective basis for technical expertise-level analysis with construct validity. The technology to capture the data is simple, low cost, and readily available, and it obviates the need for expert human assessment in this setting. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
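    The study's exact smoothness measure is not published; a common stand-in from the motion-analysis literature is the negative log dimensionless jerk, sketched here for a 1-D hand trajectory sampled at 30 Hz (metric choice and all values are illustrative assumptions, not the paper's algorithm):

    ```python
    import numpy as np

    def smoothness_score(positions, dt=1.0 / 30.0):
        """Negative log dimensionless jerk of a 1-D trajectory: higher (less
        negative) means smoother movement. A standard motion-analysis metric,
        used here as a stand-in for the study's pixel-based measure."""
        v = np.gradient(positions, dt)                 # velocity
        j = np.gradient(np.gradient(v, dt), dt)        # jerk (3rd derivative)
        duration = dt * (len(positions) - 1)
        dlj = np.sum(j ** 2) * dt * duration ** 3 / np.abs(v).max() ** 2
        return -np.log(dlj)

    t = np.linspace(0.0, 2.0, 61)                      # 2 s sampled at 30 Hz
    smooth = np.sin(np.pi * t / 2.0)                   # one clean reach
    jerky = smooth + 0.02 * np.sin(20.0 * np.pi * t)   # same reach with tremor
    # the clean reach scores higher (smoother) than the tremulous one
    ```

    Ranking trainees by such a score is what gives the construct validity the abstract describes: experts produce smoother, higher-scoring movements.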

  3. Metal surface corrosion grade estimation from single image

    NASA Astrophysics Data System (ADS)

    Chen, Yijun; Qi, Lin; Sun, Huyuan; Fan, Hao; Dong, Junyu

    2018-04-01

    Metal corrosion can cause many problems, and quickly and effectively assessing the grade of metal corrosion for timely remediation is a very important issue. Typically, this is done by trained surveyors at great cost. Assisting them in the inspection process through computer vision and artificial intelligence would decrease the inspection cost. In this paper, we propose a dataset of metal surface corrosion for computer vision detection and present a comparison between standard computer vision techniques using OpenCV and a deep learning method for automatic metal surface corrosion grade estimation from a single image on this dataset. The test was performed by classifying images and calculating the accuracy of the two different approaches.
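    As a flavor of the "standard computer vision techniques" side of such a comparison, here is a hedged sketch that grades corrosion from a single RGB image using a hand-crafted rust-color feature and hypothetical coverage cut-offs (not the paper's actual pipeline, features, or thresholds):

    ```python
    import numpy as np

    def rust_fraction(img_rgb):
        """Fraction of pixels whose red channel clearly dominates green and
        blue: a hand-crafted 'rust colour' feature (illustrative only)."""
        r = img_rgb[..., 0].astype(int)
        g = img_rgb[..., 1].astype(int)
        b = img_rgb[..., 2].astype(int)
        return ((r > g + 30) & (r > b + 30)).mean()

    def corrosion_grade(img_rgb, cutoffs=(0.05, 0.25, 0.5)):
        """Map rust coverage to a discrete grade 0-3 (hypothetical cut-offs)."""
        return int(np.searchsorted(cutoffs, rust_fraction(img_rgb), side="right"))

    clean = np.full((32, 32, 3), 128, dtype=np.uint8)  # uniform grey plate
    rusty = clean.copy()
    rusty[:16, :, 0] = 200                             # top half turns reddish
    ```

    A deep learning baseline would instead learn such features end-to-end from labeled images, which is precisely the comparison the paper reports.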

  4. Office of Biological and Physical Research: Overview Transitioning to the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Crouch, Roger

    2004-01-01

    Viewgraphs on NASA's transition to its vision for space exploration are presented. The topics include: 1) Strategic Directives Guiding the Human Support Technology Program; 2) Progressive Capabilities; 3) A Journey to Inspire, Innovate, and Discover; 4) Risk Mitigation Status Technology Readiness Level (TRL) and Countermeasures Readiness Level (CRL); 5) Biological And Physical Research Enterprise Aligning With The Vision For U.S. Space Exploration; 6) Critical Path Roadmap Reference Missions; 7) Rating Risks; 8) Current Critical Path Roadmap (Draft) Rating Risks: Human Health; 9) Current Critical Path Roadmap (Draft) Rating Risks: System Performance/Efficiency; 10) Biological And Physical Research Enterprise Efforts to Align With Vision For U.S. Space Exploration; 11) Aligning with the Vision: Exploration Research Areas of Emphasis; 12) Code U Efforts To Align With The Vision For U.S. Space Exploration; 13) Types of Critical Path Roadmap Risks; and 14) ISS Human Support Systems Research, Development, and Demonstration. A summary discussing the vision for U.S. space exploration is also provided.

  5. A comparison of symptoms after viewing text on a computer screen and hardcopy.

    PubMed

    Chu, Christina; Rosenfield, Mark; Portello, Joan K; Benzoni, Jaclyn A; Collier, Juanita D

    2011-01-01

    Computer vision syndrome (CVS) is a complex of eye and vision problems experienced during or related to computer use. Ocular symptoms may include asthenopia, accommodative and vergence difficulties and dry eye. CVS occurs in up to 90% of computer workers, and given the almost universal use of these devices, it is important to identify whether these symptoms are specific to computer operation, or are simply a manifestation of performing a sustained near-vision task. This study compared ocular symptoms immediately following a sustained near task. 30 young, visually-normal subjects read text aloud either from a desktop computer screen or a printed hardcopy page at a viewing distance of 50 cm for a continuous 20 min period. Identical text was used in the two sessions, which was matched for size and contrast. Target viewing angle and luminance were similar for the two conditions. Immediately following completion of the reading task, subjects completed a written questionnaire asking about their level of ocular discomfort during the task. When comparing the computer and hardcopy conditions, significant differences in median symptom scores were reported with regard to blurred vision during the task (t = 147.0; p = 0.03) and the mean symptom score (t = 102.5; p = 0.04). In both cases, symptoms were higher during computer use. Symptoms following sustained computer use were significantly worse than those reported after hard copy fixation under similar viewing conditions. A better understanding of the physiology underlying CVS is critical to allow more accurate diagnosis and treatment. This will allow practitioners to optimize visual comfort and efficiency during computer operation.

  6. Heuristics in primary care for recognition of unreported vision loss in older people: a technology development study.

    PubMed

    Wijeyekoon, Skanda; Kharicha, Kalpa; Iliffe, Steve

    2015-09-01

    To evaluate heuristics (rules of thumb) for the recognition of undetected vision loss in older patients in primary care. Vision loss is associated with ageing, and its prevalence is increasing. Visual impairment has a broad impact on health, functioning and well-being. Unrecognised vision loss remains common, and screening interventions have yet to reduce its prevalence. An alternative approach is to enhance practitioners' skills in recognising undetected vision loss, by having a more detailed picture of those who are likely not to act on vision changes, report symptoms or have eye tests. This paper describes a qualitative technology development study to evaluate heuristics for recognition of undetected vision loss in older patients in primary care. Using a previous modelling study, two heuristics in the form of mnemonics were developed to aid pattern recognition and allow general practitioners to identify potential cases of unreported vision loss. These heuristics were then analysed with experts. Findings: It was concluded that their implementation in modern general practice was unsuitable and that an alternative solution should be sought.

  7. IEEE 1982. Proceedings of the international conference on cybernetics and society

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1982-01-01

    The following topics were dealt with: knowledge-based systems; risk analysis; man-machine interactions; human information processing; metaphor, analogy and problem-solving; manual control modelling; transportation systems; simulation; adaptive and learning systems; biocybernetics; cybernetics; mathematical programming; robotics; decision support systems; analysis, design and validation of models; computer vision; systems science; energy systems; environmental modelling and policy; pattern recognition; nuclear warfare; technological forecasting; artificial intelligence; the Turin shroud; optimisation; workloads. Abstracts of individual papers can be found under the relevant classification codes in this or future issues.

  8. Flight Test Comparison of Synthetic Vision Display Concepts at Dallas/Fort Worth International Airport

    NASA Technical Reports Server (NTRS)

    Glaab, Louis J.; Kramer, Lynda J.; Arthur, Trey; Parrish, Russell V.; Barry, John S.

    2003-01-01

    Limited visibility is the single most critical factor affecting the safety and capacity of worldwide aviation operations. Synthetic Vision Systems (SVS) technology can solve this visibility problem with a visibility solution. These displays employ computer-generated terrain imagery to present 3D, perspective out-the-window scenes with sufficient information and realism to enable operations equivalent to those of a bright, clear day, regardless of weather conditions. To introduce SVS display technology into as many existing aircraft as possible, a retrofit approach was defined that employs existing HDD display capabilities for glass cockpits and HUD capabilities for the other aircraft. This retrofit approach was evaluated for typical nighttime airline operations at a major international airport. Overall, 6 evaluation pilots performed 75 research approaches, accumulating 18 hours flight time evaluating SVS display concepts that used the NASA LaRC's Boeing B-757-200 aircraft at Dallas/Fort Worth International Airport. Results from this flight test establish the SVS retrofit concept, regardless of display size, as viable for tested conditions. Future assessments need to extend evaluation of the approach to operations in an appropriate, terrain-challenged environment with daytime test conditions.

  9. Lumber Grading With A Computer Vision System

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Over the past few years significant progress has been made in developing a computer vision system for locating and identifying defects on surfaced hardwood lumber. Unfortunately, until September of 1988 little research had gone into developing methods for analyzing rough lumber. This task is arguably more complex than the analysis of surfaced lumber. The prime...

  10. Range Image Flow using High-Order Polynomial Expansion

    DTIC Science & Technology

    2013-09-01

    included as a default algorithm in the OpenCV library [2]. The research of estimating the motion between range images, or range flow, is much more...Journal of Computer Vision, vol. 92, no. 1, pp. 1‒31. 2. G. Bradski and A. Kaehler. 2008. Learning OpenCV : Computer Vision with the OpenCV Library
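    Polynomial expansion, the idea behind the Farneback optical-flow algorithm shipped with OpenCV, estimates displacement from local quadratic fits. A minimal 1-D illustration of the principle (not the full 2-D algorithm, which adds neighborhood weighting and pyramidal iteration):

    ```python
    import numpy as np

    def poly_expansion_shift(f1, f2, x):
        """Displacement between two 1-D signals from their quadratic fits.
        If f2(x) = f1(x - d) and f1(x) ~ a*x**2 + b*x + c locally, then the
        linear coefficients satisfy b2 = b1 - 2*a*d, so d = (b1 - b2) / (2*a).
        This is the core idea of polynomial-expansion flow."""
        a1, b1, _ = np.polyfit(x, f1, 2)
        a2, b2, _ = np.polyfit(x, f2, 2)
        return (b1 - b2) / (a1 + a2)   # = (b1 - b2) / (2 * a_avg)

    x = np.linspace(-1.0, 1.0, 41)
    d_true = 0.2
    f1 = 3.0 * x**2 + x + 1.0                          # a quadratic patch
    f2 = 3.0 * (x - d_true)**2 + (x - d_true) + 1.0    # same patch, shifted
    d_est = poly_expansion_shift(f1, f2, x)            # ~0.2
    ```

    Range flow generalizes this by fitting such expansions to depth images rather than intensity images.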

  11. Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control.

    DTIC Science & Technology

    1983-08-15

    obtainable from real data, rather than relying on a stock database. Often, computer vision and image processing algorithms become subconsciously tuned to...two coils on the same mount structure. Since it was not possible to reprogram the binary system, we turned to the POPEYE system for both its grey

  12. A Vision for Ice Giant Exploration

    NASA Technical Reports Server (NTRS)

    Hofstadter, M.; Simon, A.; Atreya, S.; Banfield, D.; Fortney, J.; Hayes, A.; Hedman, M.; Hospodarsky, G.; Mandt, K.; Masters, A.; hide

    2017-01-01

    From Voyager to a Vision for 2050: NASA and ESA have just completed a study of candidate missions to Uranus and Neptune, the so-called ice giant planets. It is a Pre-Decadal Survey Study, meant to inform the next Planetary Science Decadal Survey about opportunities for missions launching in the 2020's and early 2030's. There have been no space flight missions to the ice giants since the Voyager 2 flybys of Uranus in 1986 and Neptune in 1989. This paper presents some conclusions of that study (hereafter referred to as The Study), and how the results feed into a vision for where planetary science can be in 2050. Reaching that vision will require investments in technology and ground-based science in the 2020's, flight during the 2030's along with continued technological development of both ground- and space-based capabilities, and data analysis and additional flights in the 2040's. We first discuss why exploring the ice giants is important. We then summarize the science objectives identified by The Study, and our vision of the science goals for 2050. We then review some of the technologies needed to make this vision a reality.

  13. Quality Parameters of Six Cultivars of Blueberry Using Computer Vision

    PubMed Central

    Celis Cofré, Daniela; Silva, Patricia; Enrione, Javier; Osorio, Fernando

    2013-01-01

    Background. Blueberries are considered an important source of health benefits. This work studied six blueberry cultivars: “Duke,” “Brigitta”, “Elliott”, “Centurion”, “Star,” and “Jewel”, measuring quality parameters such as °Brix, pH, moisture content using standard techniques and shape, color, and fungal presence obtained by computer vision. The storage conditions were time (0–21 days), temperature (4 and 15°C), and relative humidity (75 and 90%). Results. Significant differences (P < 0.05) were detected between fresh cultivars in pH, °Brix, shape, and color. However, the main parameters which changed depending on storage conditions, increasing at higher temperature, were color (from blue to red) and fungal presence (from 0 to 15%), both detected using computer vision, which is important to determine a shelf life of 14 days for all cultivars. Similar behavior during storage was obtained for all cultivars. Conclusion. Computer vision proved to be a reliable and simple method to objectively determine blueberry decay during storage that can be used as an alternative approach to currently used subjective measurements. PMID:26904598
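    The blue-to-red colour change the study tracks can be summarized by a simple channel-difference index. A toy sketch with synthetic patches (the paper's colour analysis is more elaborate; this only illustrates the kind of objective measurement computer vision makes possible):

    ```python
    import numpy as np

    def redness_index(img_rgb):
        """Mean red-minus-blue channel difference; values rising over storage
        flag the blue-to-red colour change associated with decay (a
        simplified stand-in for the paper's colour measurements)."""
        img = img_rgb.astype(float)
        return float((img[..., 0] - img[..., 2]).mean())

    fresh = np.zeros((8, 8, 3), dtype=np.uint8)
    fresh[..., 2] = 180                         # bluish berry patch
    stored = np.zeros((8, 8, 3), dtype=np.uint8)
    stored[..., 0] = 150                        # reddish patch after storage
    # redness_index rises from fresh to stored
    ```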

  14. Image processing and pattern recognition with CVIPtools MATLAB toolbox: automatic creation of masks for veterinary thermographic images

    NASA Astrophysics Data System (ADS)

    Mishra, Deependra K.; Umbaugh, Scott E.; Lama, Norsang; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph

    2016-09-01

    CVIPtools is a software package for the exploration of computer vision and image processing developed in the Computer Vision and Image Processing Laboratory at Southern Illinois University Edwardsville. CVIPtools is available in three variants - a) CVIPtools Graphical User Interface, b) CVIPtools C library and c) CVIPtools MATLAB toolbox - which makes it accessible to a variety of different users. It offers students, faculty, researchers and any user a free and easy way to explore computer vision and image processing techniques. Many functions have been implemented and are updated on a regular basis, and the library has reached a level of sophistication that makes it suitable for both educational and research purposes. In this paper, a detailed list of the functions available in the CVIPtools MATLAB toolbox is presented, along with how these functions can be used in image analysis and computer vision applications. The CVIPtools MATLAB toolbox allows the user to gain practical experience to better understand underlying theoretical problems in image processing and pattern recognition. As an example application, the algorithm for the automatic creation of masks for veterinary thermographic images is presented.
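    Automatic mask creation for a thermographic image is often a matter of automatic thresholding. A self-contained Python sketch of Otsu's method, a standard choice for separating a warm subject from a cool background (CVIPtools provides comparable segmentation functions; this is not its specific algorithm):

    ```python
    import numpy as np

    def otsu_threshold(gray):
        """Otsu's threshold: the grey level that maximises the between-class
        variance of the resulting foreground/background split."""
        p = np.bincount(gray.ravel(), minlength=256).astype(float)
        p /= p.sum()
        levels = np.arange(256)
        best_t, best_var = 0, -1.0
        for t in range(1, 256):
            w0, w1 = p[:t].sum(), p[t:].sum()
            if w0 == 0.0 or w1 == 0.0:
                continue
            m0 = (levels[:t] * p[:t]).sum() / w0
            m1 = (levels[t:] * p[t:]).sum() / w1
            var = w0 * w1 * (m0 - m1) ** 2
            if var > best_var:
                best_t, best_var = t, var
        return best_t

    thermal = np.full((20, 20), 40, dtype=np.uint8)  # cool background
    thermal[5:15, 5:15] = 200                        # warm subject
    mask = thermal >= otsu_threshold(thermal)        # True on the subject
    ```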

  15. Landforms of the conterminous United States: a digital shaded-relief portrayal

    USGS Publications Warehouse

    Thelin, Gail P.; Pike, Richard J.

    1991-01-01

    Our map was made by digital image-processing, a technical specialty related to the broader fields of computer graphics and machine vision (Dawson, 1987; Kennie and McLaren, 1988). The technology includes the many spatially based operations first brought together and developed systematically to manipulate Ranger, Mariner, Landsat, and other images that are reassembled from spacecraft telemetry in a raster or scan-line arrangement of square-grid elements (Nathan, 1966; Castleman, 1979; Sheldon, 1987). These computer procedures have been successfully transferred to landform analysis from remote-sensing applications by substituting terrain heights or sea-floor depths for the customary values of electromagnetic radiation obtained from satellites and stored in digital arrays of pixels (Batson and others, 1975).
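    The substitution of terrain heights for radiance values can be sketched as the classic hillshade computation over a DEM grid (the azimuth and altitude defaults below are the conventional cartographic choices, not necessarily those used for this map):

    ```python
    import numpy as np

    def hillshade(dem, azimuth_deg=315.0, altitude_deg=45.0, cellsize=1.0):
        """Classic shaded relief: slope and aspect from the height grid,
        illuminated by a light source at the given azimuth/altitude.
        Output in [0, 1], 1 = fully lit."""
        gy, gx = np.gradient(dem, cellsize)            # dz/dy, dz/dx
        slope = np.arctan(np.hypot(gx, gy))
        aspect = np.arctan2(gy, -gx)
        az = np.radians(360.0 - azimuth_deg + 90.0)    # to math convention
        alt = np.radians(altitude_deg)
        shaded = (np.sin(alt) * np.cos(slope)
                  + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
        return np.clip(shaded, 0.0, 1.0)

    # a small synthetic hill for illustration
    y, x = np.mgrid[0:50, 0:50]
    relief = hillshade(np.exp(-((x - 25) ** 2 + (y - 25) ** 2) / 100.0) * 20.0)
    ```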

  16. The use of morphological characteristics and texture analysis in the identification of tissue composition in prostatic neoplasia.

    PubMed

    Diamond, James; Anderson, Neil H; Bartels, Peter H; Montironi, Rodolfo; Hamilton, Peter W

    2004-09-01

    Quantitative examination of prostate histology offers clues in the diagnostic classification of lesions and in the prediction of response to treatment and prognosis. To facilitate the collection of quantitative data, the development of machine vision systems is necessary. This study explored the use of imaging for identifying tissue abnormalities in prostate histology. Medium-power histological scenes were recorded from whole-mount radical prostatectomy sections at x 40 objective magnification and assessed by a pathologist as exhibiting stroma, normal tissue (nonneoplastic epithelial component), or prostatic carcinoma (PCa). A machine vision system was developed that divided the scenes into subregions of 100 x 100 pixels and subjected each to image-processing techniques. Analysis of morphological characteristics allowed the identification of normal tissue. Analysis of image texture demonstrated that Haralick feature 4 was the most suitable for discriminating stroma from PCa. Using these morphological and texture measurements, it was possible to define a classification scheme for each subregion. The machine vision system is designed to integrate these classification rules and generate digital maps of tissue composition from the classification of subregions; 79.3% of subregions were correctly classified. Established classification rates have demonstrated the validity of the methodology on small scenes; a logical extension was to apply the methodology to whole slide images via scanning technology. The machine vision system is capable of classifying these images. The machine vision system developed in this project facilitates the exploration of morphological and texture characteristics in quantifying tissue composition. It also illustrates the potential of quantitative methods to provide highly discriminatory information in the automated identification of prostatic lesions using computer vision.
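    The texture measurements rest on grey-level co-occurrence statistics. A compact sketch of a horizontal-adjacency GLCM and Haralick feature 4 (sum of squares: variance), the descriptor the abstract singles out (the 8-level quantisation and adjacency choice are illustrative assumptions):

    ```python
    import numpy as np

    def glcm(gray, levels=8):
        """Grey-level co-occurrence matrix of horizontally adjacent pixels
        (distance 1, angle 0), normalised to joint probabilities."""
        q = (gray.astype(float) / 256.0 * levels).astype(int)
        m = np.zeros((levels, levels))
        for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
            m[i, j] += 1.0
        return m / m.sum()

    def haralick_variance(p):
        """Haralick feature 4 (sum of squares: variance) of a GLCM."""
        i = np.arange(p.shape[0])[:, None]
        mu = (i * p).sum()
        return (((i - mu) ** 2) * p).sum()

    flat = np.full((8, 8), 100, dtype=np.uint8)      # textureless subregion
    checker = (np.indices((8, 8)).sum(axis=0) % 2 * 255).astype(np.uint8)
    # the checkerboard's strong texture yields a much larger feature value
    ```

    Computing such features per 100 x 100 subregion and thresholding them is what lets the system label each subregion as stroma, normal tissue, or carcinoma.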

  17. Development of embedded real-time and high-speed vision platform

    NASA Astrophysics Data System (ADS)

    Ouyang, Zhenxing; Dong, Yimin; Yang, Hua

    2015-12-01

    Currently, high-speed vision platforms are widely used in many applications, such as robotics and the automation industry. However, a personal computer (PC), whose large size is unsuitable for compact systems, is an indispensable component for human-computer interaction in traditional high-speed vision platforms. Therefore, this paper develops an embedded real-time and high-speed vision platform, ER-HVP Vision, which is able to work completely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a DSP and FPGA board is developed to implement image parallel algorithms in the FPGA and image sequential algorithms in the DSP. Hence, ER-HVP Vision, with a size of 320mm x 250mm x 87mm, offers this capability in a more compact form. Experimental results also indicate that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels are feasible on this newly developed vision platform.

  18. Computer vision syndrome: a study of the knowledge, attitudes and practices in Indian ophthalmologists.

    PubMed

    Bali, Jatinder; Navin, Neeraj; Thakur, Bali Renu

    2007-01-01

    To study the knowledge, attitude and practices (KAP) towards computer vision syndrome prevalent in Indian ophthalmologists and to assess whether 'computer use by practitioners' had any bearing on the knowledge and practices in computer vision syndrome (CVS). A random KAP survey was carried out on 300 Indian ophthalmologists using a 34-point spot-questionnaire in January 2005. All the doctors who responded were aware of CVS. The chief presenting symptoms were eyestrain (97.8%), headache (82.1%), tiredness and burning sensation (79.1%), watering (66.4%) and redness (61.2%). Ophthalmologists using computers reported that focusing from distance to near and vice versa (P =0.006, chi2 test), blurred vision at a distance (P =0.016, chi2 test) and blepharospasm (P =0.026, chi2 test) formed part of the syndrome. The main mode of treatment used was tear substitutes. Half of ophthalmologists (50.7%) were not prescribing any spectacles. They did not have any preference for any special type of glasses (68.7%) or spectral filters. Computer-users were more likely to prescribe sedatives/anxiolytics (P = 0.04, chi2 test), spectacles (P = 0.02, chi2 test) and conscious frequent blinking (P = 0.003, chi2 test) than the non-computer-users. All respondents were aware of CVS. Confusion regarding treatment guidelines was observed in both groups. Computer-using ophthalmologists were more informed of symptoms and diagnostic signs but were misinformed about treatment modalities.
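    The group comparisons in the survey rely on the Pearson chi-square test on 2x2 contingency tables. A minimal sketch with hypothetical counts (not the study's data):

    ```python
    import numpy as np

    def chi2_2x2(table):
        """Pearson chi-square statistic for a 2x2 contingency table,
        without continuity correction."""
        t = np.asarray(table, dtype=float)
        expected = np.outer(t.sum(axis=1), t.sum(axis=0)) / t.sum()
        return float(((t - expected) ** 2 / expected).sum())

    # hypothetical counts: rows = computer-users / non-users,
    # columns = prescribe spectacles yes / no (NOT the study's data)
    stat = chi2_2x2([[30, 20], [15, 35]])   # ~9.09, above the 3.84
                                            # critical value at 1 dof
    ```

    A statistic above the critical value for the chosen significance level is what underlies reported results such as "P = 0.02, chi2 test".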

  19. Buildings of the Future Scoping Study: A Framework for Vision Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Na; Goins, John D.

    2015-02-01

    The Buildings of the Future Scoping Study, funded by the U.S. Department of Energy (DOE) Building Technologies Office, seeks to develop a vision for what U.S. mainstream commercial and residential buildings could become in 100 years. This effort is not intended to predict the future or develop a specific building design solution. Rather, it will explore future building attributes and offer possible pathways of future development. Whether we achieve a more sustainable built environment depends not just on technologies themselves, but on how effectively we envision the future and integrate these technologies in a balanced way that generates economic, social, and environmental value. A clear, compelling vision of future buildings will attract the right strategies, inspire innovation, and motivate action. This project will create a cross-disciplinary forum of thought leaders to share their views. The collective views will be integrated into a future building vision and published in September 2015. This report presents a research framework for the vision development effort based on a literature survey and gap analysis. This document has four objectives. First, it defines the project scope. Next, it identifies gaps in the existing visions and goals for buildings and discusses the possible reasons why some visions did not work out as hoped. Third, it proposes a framework to address those gaps in the vision development. Finally, it presents a plan for a series of panel discussions and interviews to explore a vision that mitigates problems with past building paradigms while addressing key areas that will affect buildings going forward.

  20. Fusion of Multiple Sensing Modalities for Machine Vision

    DTIC Science & Technology

    1994-05-31

    "Modeling of Non-Homogeneous 3-D Objects for Thermal and Visual Image Synthesis," Pattern Recognition, in press. [11] Nair, Dinesh, and J. K. Aggarwal...20th AIPR Workshop: Computer Vision--Meeting the Challenges, McLean, Virginia, October 1991. Nair, Dinesh, and J. K. Aggarwal, "An Object Recognition...Computer Engineering August 1992 Sunil Gupta Ph.D. Student Mohan Kumar M.S. Student Sandeep Kumar M.S. Student Xavier Lebegue Ph.D., Computer

  1. Vision Science and Technology at NASA: Results of a Workshop

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Editor); Mulligan, Jeffrey B. (Editor)

    1990-01-01

    A broad review is given of vision science and technology within NASA. The subject is defined and its applications in both NASA and the nation at large are noted. A survey of current NASA efforts is given, noting strengths and weaknesses of the NASA program.

  2. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes overall system calibration even more difficult. Indeed, in CAD-based manipulation the control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, one needs to map a pair of 2-D video images directly into the robot configuration space. Neural-network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both robot and vision subsystem calibrations. The use of computer vision to enable robust fine-motion manipulation in a poorly structured world, work currently in progress, is described along with preliminary results and problems encountered.
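
    The stereo-to-Cartesian mapping that such calibration must realize reduces, for an idealized rectified camera pair, to triangulation. A minimal sketch (hypothetical focal length and baseline, not the system described above):

```python
def triangulate(x_left, x_right, y, focal_px, baseline_m):
    """Recover a 3-D point (camera frame) from a rectified stereo pair.

    x_left, x_right, y : pixel coordinates (same y in both images after
    rectification); focal_px : focal length in pixels; baseline_m :
    camera separation in meters. All values here are illustrative.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point at or beyond infinity")
    Z = focal_px * baseline_m / disparity   # depth along the optical axis
    X = x_left * Z / focal_px               # lateral offset
    Y = y * Z / focal_px                    # vertical offset
    return X, Y, Z

# A point with 10 px of disparity, f = 500 px, B = 0.1 m lies 5 m out.
X, Y, Z = triangulate(50, 40, 20, 500.0, 0.1)
```

    The calibration difficulty discussed above arises because this camera-frame result must still be chained through a separately calibrated absolute frame before it yields joint angles.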

  3. Riemann tensor of motion vision revisited.

    PubMed

    Brill, M

    2001-07-02

    This note shows that the Riemann-space interpretation of motion vision developed by Barth and Watson is neither necessary for their results, nor sufficient to handle an intrinsic coordinate problem. Recasting the Barth-Watson framework as a classical velocity-solver (as in computer vision) solves these problems.

  4. Evaluation of the Waggoner Computerized Color Vision Test.

    PubMed

    Ng, Jason S; Self, Eriko; Vanston, John E; Nguyen, Andrew L; Crognale, Michael A

    2015-04-01

    Clinical color vision evaluation has been based primarily on the same set of tests for the past several decades. Recently, computer-based color vision tests have been devised, and these have several advantages but are still not widely used. In this study, we evaluated the Waggoner Computerized Color Vision Test (CCVT), which was developed for widespread use with common computer systems. A sample of subjects with (n = 59) and without (n = 361) color vision deficiency (CVD) were tested on the CCVT, the anomaloscope, the Richmond HRR (Hardy-Rand-Rittler) (4th edition), and the Ishihara test. The CCVT was administered in two ways: (1) on a computer monitor using its default settings and (2) on one standardized to a correlated color temperature (CCT) of 6500 K. Twenty-four subjects with CVD performed the CCVT both ways. Sensitivity, specificity, and correct classification rates were determined. The screening performance of the CCVT was good (95% sensitivity, 100% specificity). The CCVT classified subjects as deutan or protan in agreement with anomaloscopy 89% of the time. It generally classified subjects as having a more severe defect compared with other tests. Results from 18 of the 24 subjects with CVD tested under both default and calibrated CCT conditions were the same, whereas the results from 6 subjects had better agreement with other test results when the CCT was set. The Waggoner CCVT is an adequate color vision screening test with several advantages and appears to provide a fairly accurate diagnosis of deficiency type. Used in conjunction with other color vision tests, it may be a useful addition to a color vision test battery.
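
    The sensitivity and specificity quoted above reduce to counts of true and false positives and negatives. A minimal sketch, with illustrative counts chosen to be consistent with (but not taken from) the reported 95%/100% figures:

```python
def screening_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and correct-classification rate
    from screening counts (tp/fn among deficient subjects,
    tn/fp among normals)."""
    sensitivity = tp / (tp + fn)          # deficient subjects flagged
    specificity = tn / (tn + fp)          # normal subjects passed
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical outcome: 56 of 59 CVD subjects flagged, all 361 normals passed.
sens, spec, acc = screening_metrics(tp=56, fn=3, tn=361, fp=0)
# sens ~ 0.95, spec = 1.0
```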

  5. Leveraging multi-channel x-ray detector technology to improve quality metrics for industrial and security applications

    NASA Astrophysics Data System (ADS)

    Jimenez, Edward S.; Thompson, Kyle R.; Stohn, Adriana; Goodner, Ryan N.

    2017-09-01

    Sandia National Laboratories has recently developed the capability to acquire multi-channel radiographs for multiple research and development applications in industry and security. This capability allows x-ray radiographs or sinogram data to be acquired at up to 300 keV with up to 128 channels per pixel. This work will investigate whether multiple quality metrics for computed tomography can benefit from binned projection data compared to traditionally acquired grayscale sinogram data. Features and metrics to be evaluated include the ability to distinguish between two different materials with similar absorption properties, artifact reduction, and signal-to-noise for both raw data and reconstructed volumetric data. The impact of this technology on non-destructive evaluation, national security, and industry is wide-ranging and has the potential to improve upon many inspection methods such as dual-energy methods, material identification, object segmentation, and computer vision on radiographs.
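
    Why binned channels can separate materials that an energy-integrating (grayscale) detector cannot is easy to illustrate with a toy example (entirely synthetic numbers, not detector data): two materials with identical total attenuation but different spectral signatures.

```python
# Per-channel attenuation of two hypothetical materials in four energy
# bins. Their total absorption is identical, so a grayscale detector,
# which integrates over energy, cannot tell them apart.
material_a = [0.9, 0.7, 0.3, 0.1]
material_b = [0.4, 0.6, 0.6, 0.4]

grayscale_a = sum(material_a)   # 2.0
grayscale_b = sum(material_b)   # 2.0

# A multi-channel detector keeps the per-bin signature, so a simple
# per-channel comparison separates the two materials.
distinguishable = any(abs(a - b) > 1e-9
                      for a, b in zip(material_a, material_b))
```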

  6. Mobile Life - Innovation in the Wild

    NASA Astrophysics Data System (ADS)

    Höök, Kristina

    After a decade of work in our research labs on mobile and ubiquitous technology, often shaped by the early visions of ubiquitous computing and the urge to move interaction from the desktop out into the wild, these technologies have now moved out into the world - into the wild. We are in the middle of a second IT-revolution, caused by the spread of mobile and ubiquitous services in combination with a broad consumer-oriented market pull. The first IT-revolution, the introduction and deployment of the Internet and the World Wide Web during the 1990s, had a major impact on all parts of our society. As mobile, ubiquitous technology now becomes widespread, the design and evaluation of mobile services - i.e. information technology that can be accessed and used in virtually any setting - represents an important business arena for the IT and telecom industry. Together we have to look for a sustainable web of work, leisure and ubiquitous technology we can call the mobile life.

  7. COBALT: Development of a Platform to Flight Test Lander GN&C Technologies on Suborbital Rockets

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Seubert, Carl R.; Amzajerdian, Farzin; Bergh, Chuck; Kourchians, Ara; Restrepo, Carolina I.; Villapando, Carlos Y.; O'Neal, Travis V.; Robertson, Edward A.; Pierrottet, Diego; hide

    2017-01-01

    The NASA COBALT Project (CoOperative Blending of Autonomous Landing Technologies) is developing and integrating new precision-landing Guidance, Navigation and Control (GN&C) technologies, along with a terrestrial flight-test platform for Technology Readiness Level (TRL) maturation. The current technologies include a third-generation Navigation Doppler Lidar (NDL) sensor for ultra-precise velocity and line-of-sight (LOS) range measurements, and the Lander Vision System (LVS) that provides passive-optical Terrain Relative Navigation (TRN) estimates of map-relative position. The COBALT platform is self-contained and includes the NDL and LVS sensors, blending filter, a custom compute element, power unit, and communication system. The platform incorporates a structural frame that has been designed to integrate with the payload frame onboard the new Masten Xodiac vertical takeoff, vertical landing (VTVL) terrestrial rocket vehicle. Ground integration and testing is underway, and terrestrial flight testing onboard Xodiac is planned for 2017 with two flight campaigns: one open-loop and one closed-loop.

  8. Recent developments of artificial intelligence in drying of fresh food: A review.

    PubMed

    Sun, Qing; Zhang, Min; Mujumdar, Arun S

    2018-03-01

    Intellectualization is an important direction of drying development, and artificial intelligence (AI) technologies have been widely used to solve problems of nonlinear function approximation, pattern detection, data interpretation, optimization, simulation, diagnosis, control, data sorting, clustering, and noise reduction in different food drying technologies, owing to their self-learning ability, adaptivity, strong fault tolerance, and high robustness in mapping the nonlinear structures of arbitrarily complex and dynamic phenomena. This article presents a comprehensive review of intelligent drying technologies and their applications. The paper starts with an introduction to the basic theory of ANNs, fuzzy logic, and expert systems. Then, we summarize AI applications in modeling, predicting, and optimizing heat and mass transfer, thermodynamic performance parameters, and quality indicators as well as physicochemical properties of dried products in artificial biomimetic technology (electronic nose, computer vision) and different conventional drying technologies. Furthermore, opportunities and limitations of AI techniques in drying are also outlined to provide more ideas for researchers in this area.

  9. Liquid lens: advances in adaptive optics

    NASA Astrophysics Data System (ADS)

    Casey, Shawn Patrick

    2010-12-01

    'Liquid lens' technologies promise significant advancements in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications are used to exemplify the versatile nature of this technology. Utilization of liquid lens elements allows the cost effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.

  10. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  11. Dynamical Systems and Motion Vision.

    DTIC Science & Technology

    1988-04-01

    Fragmentary front matter from MIT Artificial Intelligence Laboratory A.I. Memo No. 1037, April 1988: "Dynamical Systems and Motion Vision," Joachim Heel. The work was carried out at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology, 545 Technology Square, Cambridge, MA 02139, with support for the Laboratory's artificial intelligence research.

  12. Principles for the wise use of computers by children.

    PubMed

    Straker, L; Pollock, C; Maslen, B

    2009-11-01

    Computer use by children at home and school is now common in many countries. Child computer exposure varies with the type of computer technology available and the child's age, gender and social group. This paper reviews the current exposure data and the evidence for positive and negative effects of computer use by children. Potential positive effects of computer use by children include enhanced cognitive development and school achievement, reduced barriers to social interaction, enhanced fine motor skills and visual processing and effective rehabilitation. Potential negative effects include threats to child safety, inappropriate content, exposure to violence, bullying, Internet 'addiction', displacement of moderate/vigorous physical activity, exposure to junk food advertising, sleep displacement, vision problems and musculoskeletal problems. The case for child-specific evidence-based guidelines for wise use of computers is presented, based on children using computers differently to adults, being physically, cognitively and socially different to adults, being in a state of change and development, and the potential for childhood computer use to affect later adult risk. Progress towards child-specific guidelines is reported. Finally, a set of guideline principles is presented as the basis for more detailed guidelines on the physical, cognitive and social impact of computer use by children. The principles cover computer literacy, technology safety, child safety and privacy and appropriate social, cognitive and physical development. The majority of children in affluent communities now have substantial exposure to computers. This is likely to have significant effects on child physical, cognitive and social development. Ergonomics can provide and promote guidelines for wise use of computers by children and by doing so promote the positive effects and reduce the negative effects of computer-child, and subsequent computer-adult, interaction.

  13. Detecting personnel around UGVs using stereo vision

    NASA Astrophysics Data System (ADS)

    Bajracharya, Max; Moghaddam, Baback; Howard, Andrew; Matthies, Larry H.

    2008-04-01

    Detecting people around unmanned ground vehicles (UGVs) to facilitate safe operation of UGVs is one of the highest priority issues in the development of perception technology for autonomous navigation. Research to date has not achieved the detection ranges or reliability needed in deployed systems to detect upright pedestrians in flat, relatively uncluttered terrain, let alone in more complex environments and with people in postures that are more difficult to detect. Range data is essential to solve this problem. Combining range data with high resolution imagery may enable higher performance than range data alone because image appearance can complement shape information in range data and because cameras may offer higher angular resolution than typical range sensors. This makes stereo vision a promising approach for several reasons: image resolution is high and will continue to increase, the physical size and power dissipation of the cameras and computers will continue to decrease, and stereo cameras provide range data and imagery that are automatically spatially and temporally registered. We describe a stereo vision-based pedestrian detection system, focusing on recent improvements to a shape-based classifier applied to the range data, and present frame-level performance results that show great promise for the overall approach.
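
    One reason detection range is the binding constraint for stereo is that range uncertainty grows quadratically with distance: differentiating Z = fB/d with respect to disparity d gives |dZ| ≈ Z²·Δd/(fB). A quick sketch with a hypothetical camera rig (parameters are illustrative, not from the system above):

```python
def range_uncertainty(Z, focal_px, baseline_m, disparity_err_px=0.25):
    """Stereo range error |dZ| ~ Z^2 * dd / (f * B) for a target at Z m,
    assuming a fixed disparity-estimation error dd (in pixels)."""
    return Z * Z * disparity_err_px / (focal_px * baseline_m)

# Hypothetical rig: f = 1000 px, B = 0.3 m, quarter-pixel disparity noise.
err_10m = range_uncertainty(10, 1000, 0.3)   # ~0.083 m
err_40m = range_uncertainty(40, 1000, 0.3)   # ~1.33 m: 16x worse at 4x range
```

    This quadratic growth is why higher image resolution (more pixels per disparity) translates directly into longer usable pedestrian-detection ranges.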

  14. A Local Vision on Soil Hydrology (John Dalton Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Roth, K.

    2012-04-01

    After looking back briefly at some research trails of the past decades, and touching on the role of soils in our environmental machinery, a vision of the future of soil hydrology is offered. It is local in the sense of being based on limited experience, as well as in the sense of focusing on local spatial scales, from 1 m to 1 km. Cornerstones of this vision are (i) rapid developments in quantitative observation technology, illustrated with the example of ground-penetrating radar (GPR), and (ii) the availability of ever more powerful computing facilities, which allow increasingly complicated model representations to be simulated in unprecedented detail. Together, they open a powerful and flexible approach to the quantitative understanding of soil hydrology in which two lines are fitted: (i) potentially diverse measurements of the system of interest and their analysis, and (ii) a comprehensive model representation, including architecture, material properties, forcings, and potentially unknown aspects, together with the same analysis as for (i). This approach pushes traditional inversion to operate on analyses, not on the underlying state variables, and to become flexible with respect to architecture and unknown aspects. The approach will be demonstrated for simple situations at test sites.

  15. Commercial Flight Crew Decision-Making during Low-Visibility Approach Operations Using Fused Synthetic/Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Prinzel, Lawrence J., III

    2007-01-01

    NASA is investigating revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A fixed-base piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact, within a two-crew flight deck, on the crew's decision-making process during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, unto itself, provide an improvement in runway incursion detection without being specifically tailored for this application. Existing enhanced vision system procedures were effectively used in the crew decision-making process during approach and missed approach operations, but having to forcibly transition from an excellent FLIR image to natural vision by 100 ft above field level was awkward for the pilot-flying.

  16. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras.

    PubMed

    Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A

    2017-07-25

    Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research shows the relevant imaging characteristics and the applicability of such imaging technology for PSP. Camera performance is benchmarked against standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.

  17. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras

    PubMed Central

    Spinosa, Emanuele; Roberts, David A.

    2017-01-01

    Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research shows the relevant imaging characteristics and the applicability of such imaging technology for PSP. Camera performance is benchmarked against standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access. PMID:28757553

  18. A dental vision system for accurate 3D tooth modeling.

    PubMed

    Zhang, Li; Alemzadeh, K

    2006-01-01

    This paper describes an active-vision-based reverse engineering approach to extract three-dimensional (3D) geometric information from dental teeth and transfer this information into Computer-Aided Design/Computer-Aided Manufacture (CAD/CAM) systems, to improve the accuracy of 3D teeth models and, at the same time, the quality of the construction units, helping patient care. The vision system involves the development of a dental vision rig, edge detection, boundary tracing, and fast, accurate 3D modeling from a sequence of sliced silhouettes of physical models. The rig is designed using engineering design methods such as a concept selection matrix and a weighted-objectives evaluation chart. Reconstruction results and an accuracy evaluation are presented for digitizing different teeth models.
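
    Reconstruction from a sequence of silhouettes is, at its core, volume intersection: a voxel is kept only if it projects inside every silhouette. A toy carving sketch (two hypothetical orthogonal views on a small grid, not the authors' rig or algorithm):

```python
def carve(nx, ny, nz, silhouette_xy, silhouette_xz):
    """Keep voxels whose (x, y) projection lies in the front silhouette
    and whose (x, z) projection lies in the side silhouette."""
    return {
        (x, y, z)
        for x in range(nx) for y in range(ny) for z in range(nz)
        if (x, y) in silhouette_xy and (x, z) in silhouette_xz
    }

# Two 2x2 square silhouettes carve a 2x2x2 cube out of a 4x4x4 grid.
front = {(x, y) for x in (1, 2) for y in (1, 2)}
side = {(x, z) for x in (1, 2) for z in (1, 2)}
model = carve(4, 4, 4, front, side)   # 8 surviving voxels
```

    Real systems refine this visual hull with many silhouette slices and surface fitting; the intersection principle is the same.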

  19. Research on an autonomous vision-guided helicopter

    NASA Technical Reports Server (NTRS)

    Amidi, Omead; Mesaki, Yuji; Kanade, Takeo

    1994-01-01

    Integration of computer vision with on-board sensors to autonomously fly helicopters was researched. The key components developed were custom designed vision processing hardware and an indoor testbed. The custom designed hardware provided flexible integration of on-board sensors with real-time image processing resulting in a significant improvement in vision-based state estimation. The indoor testbed provided convenient calibrated experimentation in constructing real autonomous systems.

  20. Real-time unconstrained object recognition: a processing pipeline based on the mammalian visual system.

    PubMed

    Aguilar, Mario; Peot, Mark A; Zhou, Jiangying; Simons, Stephen; Liao, Yuwei; Metwalli, Nader; Anderson, Mark B

    2012-03-01

    The mammalian visual system is still the gold standard for recognition accuracy, flexibility, efficiency, and speed. Ongoing advances in our understanding of function and mechanisms in the visual system can now be leveraged to pursue the design of computer vision architectures that will revolutionize the state of the art in computer vision.

  1. Automated Grading of Rough Hardwood Lumber

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Any automatic hardwood grading system must have two components. The first of these is a computer vision system for locating and identifying defects on rough lumber. The second is a system for automatically grading boards based on the output of the computer vision system. This paper presents research results aimed at developing the first of these components. The...

  2. Computer Vision Systems for Hardwood Logs and Lumber

    Treesearch

    Philip A. Araman; Tai-Hoon Cho; D. Zhu; R. Conners

    1991-01-01

    Computer vision systems being developed at Virginia Tech University with the support and cooperation from the U.S. Forest Service are presented. Researchers at Michigan State University, West Virginia University, and Mississippi State University are also members of the research team working on various parts of this research. Our goals are to help U.S. hardwood...

  3. In Situ Fabrication Technologies: Meeting the Challenge for Exploration

    NASA Technical Reports Server (NTRS)

    Howard, Richard W.

    2005-01-01

    A viewgraph presentation on Lunar and Martian in situ fabrication technologies meeting the challenges for exploration is shown. The topics include: 1) Exploration Vision; 2) Vision Requirements Early in the Program; 3) Vision Requirements Today; 4) Why is ISFR Technology Needed? 5) ISFR and In Situ Resource Utilization (ISRU); 6) Fabrication Feedstock Considerations; 7) Planetary Resource Primer; 8) Average Chemical Element Abundances in Lunar Soil; 9) Chemical Elements in Aerospace Engineering Materials; 10) Schematic of Raw Regolith Processing into Constituent Components; 11) Iron, Aluminum, and Basalt Processing from Separated Elements and Compounds; 12) Space Power Systems; 13) Power Source Applicability; 14) Fabrication Systems Technologies; 15) Repair and Nondestructive Evaluation (NDE); and 16) Habitat Structures. A development overview of Lunar and Martian repair and nondestructive evaluation is also presented.

  4. Quantification of color vision using a tablet display.

    PubMed

    Chacon, Alicia; Rabin, Jeff; Yu, Dennis; Johnston, Shawn; Bradshaw, Timothy

    2015-01-01

    Accurate color vision is essential for optimal performance in aviation and space environments using nonredundant color coding to convey critical information. Most color tests detect color vision deficiency (CVD) but fail to diagnose type or severity of CVD, which are important to link performance to occupational demands. The computer-based Cone Contrast Test (CCT) diagnoses type and severity of CVD. It is displayed on a netbook computer for clinical application, but a more portable version may prove useful for deployments, space and aviation cockpits, as well as accident and sports medicine settings. Our purpose was to determine if the CCT can be conducted on a tablet display (Windows 8, Microsoft, Seattle, WA) using touch-screen response input. The CCT presents colored letters visible only to red (R), green (G), and blue (B) sensitive retinal cones to determine the lowest R, G, and B cone contrast visible to the observer. The CCT was measured in 16 color vision normals (CVN) and 16 CVDs using the standard netbook computer and a Windows 8 tablet display calibrated to produce equal color contrasts. Both displays showed 100% specificity for confirming CVN and 100% sensitivity for detecting CVD. In CVNs there was no difference between scores on netbook vs. tablet displays. G cone CVDs showed slightly lower G cone CCT scores on the tablet. CVD can be diagnosed with a tablet display. Ease-of-use, portability, and complete computer capabilities make tablets ideal for multiple settings, including aviation, space, military deployments, accidents and rescue missions, and sports vision. Chacon A, Rabin J, Yu D, Johnston S, Bradshaw T. Quantification of color vision using a tablet display.

  5. Forensic Odontology: Automatic Identification of Persons Comparing Antemortem and Postmortem Panoramic Radiographs Using Computer Vision.

    PubMed

    Heinrich, Andreas; Güttler, Felix; Wendt, Sebastian; Schenkl, Sebastian; Hubig, Michael; Wagner, Rebecca; Mall, Gita; Teichgräber, Ulf

    2018-06-18

    In forensic odontology, the comparison between antemortem and postmortem panoramic radiographs (PRs) is a reliable method for person identification. The purpose of this study was to improve and automate identification of unknown people by comparison between antemortem and postmortem PRs using computer vision. The study includes 43 467 PRs from 24 545 patients (46 % female/54 % male). All PRs were filtered and evaluated with Matlab R2014b, including the Image Processing and Computer Vision System toolboxes. The matching process used SURF features to find the corresponding points between two PRs (unknown person and database entry) across the whole database. Of 40 randomly selected persons, 34 (85 %) could be reliably identified by corresponding PR matching points between an already existing scan in the database and the most recent PR. The systematic matching yielded a maximum of 259 points for a successful identification between two different PRs of the same person, and a maximum of 12 corresponding matching points for other, non-identical persons in the database. Hence 12 matching points is the threshold for reliable assignment. Operating with an automatic PR system and computer vision could be a successful and reliable tool for identification purposes. The applied method distinguishes itself by its fast and reliable identification of persons by PR. This identification method is suitable even if dental characteristics were removed or added in the past, and the system seems to be robust for large amounts of data. Key points: computer vision allows an automated antemortem and postmortem comparison of panoramic radiographs (PRs) for person identification; the present method is able to find identical matching partners among huge datasets (big data) in a short computing time; the identification method is suitable even if dental characteristics were removed or added. Citation: Heinrich A, Güttler F, Wendt S et al. Forensic Odontology: Automatic Identification of Persons Comparing Antemortem and Postmortem Panoramic Radiographs Using Computer Vision. Fortschr Röntgenstr 2018; DOI: 10.1055/a-0632-4744. © Georg Thieme Verlag KG Stuttgart · New York.
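
    The matching step described above - local features, correspondence counting, and a decision threshold of 12 points - can be sketched in miniature. The study used SURF in Matlab; the toy below substitutes ORB-style binary descriptors and a Lowe-style ratio test, so everything except the 12-point threshold is illustrative:

```python
import random

def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match_count(descs_query, descs_db, ratio=0.8):
    """Count ratio-test matches between two binary descriptor sets
    (32-bit toys here, not real SURF descriptors)."""
    count = 0
    for d in descs_query:
        dists = sorted(hamming(d, e) for e in descs_db)
        best, second = dists[0], dists[1]
        if best < ratio * max(second, 1):   # Lowe-style ratio test
            count += 1
    return count

def same_person(descs_query, descs_db, threshold=12):
    """Reliable assignment requires more than ~12 corresponding points."""
    return match_count(descs_query, descs_db) > threshold

random.seed(0)
person = [random.getrandbits(32) for _ in range(40)]
stranger = [random.getrandbits(32) for _ in range(40)]
# A repeat scan of the same person shares (nearly) identical descriptors,
# so match_count(person, person) is high; an unrelated scan should yield
# far fewer ratio-test matches.
impostor_matches = match_count(person, stranger)
```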

  6. Air and Water System (AWS) Design and Technology Selection for the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Jones, Harry; Kliss, Mark

    2005-01-01

    This paper considers technology selection for the crew air and water recycling systems to be used in long duration human space exploration. The specific objectives are to identify the most probable air and water technologies for the vision for space exploration and to identify the alternate technologies that might be developed. The approach is to conduct a preliminary first cut systems engineering analysis, beginning with the Air and Water System (AWS) requirements and the system mass balance, and then define the functional architecture, review the International Space Station (ISS) technologies, and discuss alternate technologies. The life support requirements for air and water are well known. The results of the mass flow and mass balance analysis help define the system architectural concept. The AWS includes five subsystems: Oxygen Supply, Condensate Purification, Urine Purification, Hygiene Water Purification, and Clothes Wash Purification. AWS technologies have been evaluated in the life support design for ISS node 3, and in earlier space station design studies, in proposals for the upgrade or evolution of the space station, and in studies of potential lunar or Mars missions. The leading candidate technologies for the vision for space exploration are those planned for Node 3 of the ISS. The ISS life support was designed to utilize Space Station Freedom (SSF) hardware to the maximum extent possible. The SSF final technology selection process, criteria, and results are discussed. Would it be cost-effective for the vision for space exploration to develop alternate technology? This paper will examine this and other questions associated with AWS design and technology selection.

  7. Enhanced Vision for All-Weather Operations Under NextGen

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Kramer, Lynda J.; Williams, Steven P.

    2010-01-01

    Recent research in Synthetic/Enhanced Vision technology is analyzed with respect to existing Category II/III performance and certification guidance. The goal is to start the development of performance-based vision systems technology requirements to support future all-weather operations and the NextGen goal of Equivalent Visual Operations. This work shows that existing criteria to operate in Category III weather and visibility are not directly applicable since, unlike today, the primary reference for maneuvering the airplane is based on what the pilot sees visually through the "vision system." New criteria are consequently needed. Several possible criteria are discussed, but more importantly, the factors associated with landing system performance using automatic and manual landings are delineated.

  8. Understanding initial undergraduate expectations and identity in computing studies

    NASA Astrophysics Data System (ADS)

    Kinnunen, Päivi; Butler, Matthew; Morgan, Michael; Nylen, Aletta; Peters, Anne-Kathrin; Sinclair, Jane; Kalvala, Sara; Pesonen, Erkki

    2018-03-01

    There is growing appreciation of the importance of understanding the student perspective in Higher Education (HE) at both institutional and international levels. This is particularly important in Science, Technology, Engineering and Mathematics subjects such as Computer Science (CS) and Engineering, in which industry needs are high but so are student dropout rates. An important factor to consider is the management of students' initial expectations of university study and career. This paper reports on a study of CS first-year students' expectations across three European countries using qualitative data from student surveys and essays. Expectation is examined from both short-term (topics to be studied) and long-term (career goals) perspectives. Tackling these issues will help paint a picture of computing education through students' eyes and explore their vision of its role, and their own role, in society. It will also help educators prepare students more effectively for university study and improve the student experience.

  9. Review On Applications Of Neural Network To Computer Vision

    NASA Astrophysics Data System (ADS)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memory, multilayer back-propagation perceptrons, self-stabilized adaptive resonance networks, the hierarchically structured neocognitron, high-order correlators, networks with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers. Some have been implemented with special hardware. Some systems use image features, such as edges and profiles, as the input data form. Other systems use raw data as input signals to the networks. We present some novel ideas contained in these approaches and provide a comparison of the methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating low-level functions into a high-level cognitive system, and achieving invariances. Perspectives on applications of human vision models and neural network models are analyzed.

  10. Machine vision for digital microfluidics

    NASA Astrophysics Data System (ADS)

    Shin, Yong-Jun; Lee, Jeong-Bong

    2010-01-01

    Machine vision is widely used in an industrial environment today. It can perform various tasks, such as inspecting and controlling production processes, that may require humanlike intelligence. The importance of imaging technology for biological research or medical diagnosis is greater than ever. For example, fluorescent reporter imaging enables scientists to study the dynamics of gene networks with high spatial and temporal resolution. Such high-throughput imaging is increasingly demanding the use of machine vision for real-time analysis and control. Digital microfluidics is a relatively new technology with expectations of becoming a true lab-on-a-chip platform. Utilizing digital microfluidics, only small amounts of biological samples are required and the experimental procedures can be automatically controlled. There is a strong need today for the development of a digital microfluidics system integrated with machine vision for innovative biological research. In this paper, we show how machine vision can be applied to digital microfluidics by demonstrating two applications: machine vision-based measurement of the kinetics of biomolecular interactions and machine vision-based droplet motion control. It is expected that a digital microfluidics-based machine vision system will add intelligence and automation to high-throughput biological imaging in the future.
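As a hedged illustration of the kind of vision-based droplet position measurement described (not the authors' implementation; function and variable names are made up), thresholding a frame and taking the centroid of the bright pixels yields a droplet position for the motion controller:

```python
import numpy as np

def droplet_centroid(frame, threshold=0.5):
    """Return the (row, col) centroid of pixels brighter than `threshold`,
    or None when no droplet is visible in the frame."""
    mask = frame > threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Synthetic 64x64 frame with a bright "droplet" centred at (20, 40).
frame = np.zeros((64, 64))
frame[18:23, 38:43] = 1.0
```

Feeding successive centroids back to the electrode-actuation logic is what closes the droplet-motion control loop.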

  11. Understanding of how older adults with low vision obtain, process, and understand health information and services.

    PubMed

    Kim, Hyung Nam

    2017-10-16

    Twenty-five years after the Americans with Disabilities Act, there has still been a lack of advancement in healthcare accessibility for people with visual impairments, particularly older adults with low vision. This study aims to advance understanding of how older adults with low vision obtain, process, and use health information and services, and to seek opportunities for information technology to support them. A convenience sample of 10 older adults with low vision participated in semi-structured phone interviews, which were audio-recorded and transcribed verbatim for analysis. Participants shared various concerns about accessing, understanding, and using health information, care services, and multimedia technologies. Two main themes and nine subthemes emerged from the analysis. Because of these concerns, older adults with low vision often failed to obtain the full range of health information and services needed to meet their specific needs. Since those with low vision still rely on residual vision, multimedia-based information can be useful, but it must be designed to ensure accessibility, usability, and understandability.

  12. The role of information technology in the development of community pharmacy services: visions and strategic views of international experts.

    PubMed

    Westerling, Anna M; Haikala, Veikko; Airaksinen, Marja

    2011-12-01

    Community pharmacy's strategic vision has been to extend practice responsibilities beyond dispensing and provide patient care services. Few studies have evaluated the strategic and long-term development of information technology (IT) systems to support this vision. The objective of this study was to explore international experts' visions and strategic views on IT development needs in relation to service provision in community pharmacies. Semistructured interviews were conducted with a purposive sample of 14 experts from 8 countries in 2007-2008. These experts had expertise in the development of community pharmacy services and IT. The interviews were content analyzed using a constant comparison approach, and a SWOT (strengths, weaknesses, opportunities, threats) analysis was undertaken. Most of the experts shared the vision of community pharmacy adopting a patient care orientation, supported by IT-based documentation, new technological solutions, access to information, and shared patient data. Opportunities to achieve this vision included IT solutions, professional skills, and interprofessional collaboration. Threats included costs, pharmacists' attitudes, and the absence of IT solutions. Those responsible for IT development in the community pharmacy sector should create long-term IT development strategies that are in line with community pharmacy service development strategies. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Vision 2015: The West Virginia Science and Technology Strategic Plan. Progress Report

    ERIC Educational Resources Information Center

    West Virginia Higher Education Policy Commission, 2014

    2014-01-01

    In 2005, West Virginia science and education leaders developed a strategic plan entitled: "Vision 2015: The West Virginia Science and Technology Strategic Plan." The plan is comprised of five (5) target areas for infrastructure development, with 14 goals for action by designated leaders from higher education, state government, and…

  14. AsTeRICS.

    PubMed

    Drajsajtl, Tomáš; Struk, Petr; Bednárová, Alice

    2013-01-01

    AsTeRICS - "The Assistive Technology Rapid Integration & Construction Set" is a construction set for assistive technologies which can be adapted to the motor abilities of end-users. AsTeRICS allows access to different devices such as PCs, cell phones and smart home devices, with all of them integrated in a platform adapted as much as possible to each user. People with motor disabilities in the upper limbs, with no cognitive impairment, no perceptual limitations (neither visual nor auditory) and with basic skills in using technologies such as PCs, cell phones, electronic agendas, etc. have available a flexible and adaptable technology which enables them to access the Human-Machine-Interfaces (HMI) on the standard desktop and beyond. AsTeRICS provides graphical model design tools, a middleware and hardware support for the creation of tailored AT-solutions involving bioelectric signal acquisition, Brain-/Neural Computer Interfaces, Computer-Vision techniques and standardized actuator and device controls, and allows combining several off-the-shelf AT-devices in any desired combination. Novel, end-user ready solutions can be created and adapted via a graphical editor without additional programming efforts. The AsTeRICS open-source framework provides resources for utilization and extension of the system to developers and researchers. AsTeRICS was developed by the AsTeRICS project and was partially funded by the EC.

  15. Social energy: mining energy from the society

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jun Jason; Gao, David Wenzhong; Zhang, Yingchen

    The inherent nature of energy, i.e., physicality, sociality and informatization, implies the inevitable and intensive interaction between energy systems and social systems. From this perspective, we define 'social energy' as a complex sociotechnical system of energy systems, social systems and the derived artificial virtual systems which characterize the intense intersystem and intra-system interactions. The recent advancement in intelligent technology, including artificial intelligence and machine learning technologies, sensing and communication in Internet of Things technologies, and massive high performance computing and extreme-scale data analytics technologies, enables the possibility of substantial advancement in socio-technical system optimization, scheduling, control and management. In this paper, we provide a discussion on the nature of energy, and then propose the concept and intention of social energy systems for electrical power. A general methodology of establishing and investigating social energy is proposed, which is based on the ACP approach, i.e., 'artificial systems' (A), 'computational experiments' (C) and 'parallel execution' (P), and parallel system methodology. A case study on the University of Denver (DU) campus grid is provided and studied to demonstrate the social energy concept. In the concluding remarks, we discuss the technical pathway, in both social and nature sciences, to social energy, and our vision on its future.

  16. Computing Optic Flow with ArduEye Vision Sensor

    DTIC Science & Technology

    2013-01-01

    Subject terms: optical flow, ArduEye, vision-based control. The report describes the ArduEye vision chip, mounted on a Stonyman breakout board and connected to an Arduino Mega, and optic-flow processing algorithms that can be applied to the flight control of robotic platforms. There is a significant need for small, light, less power-hungry sensors and sensory data processing algorithms in order to control such platforms.
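The entry concerns optic-flow computation for flight control. As an illustrative sketch (not the ArduEye firmware), a global Lucas-Kanade-style estimate solves the brightness-constancy equation in least squares over the whole frame:

```python
import numpy as np

def optic_flow_global(i1, i2):
    """Estimate one global (dx, dy) translation between two frames by solving
    the brightness-constancy equation Ix*dx + Iy*dy = -It in least squares."""
    Iy, Ix = np.gradient(i1)   # np.gradient returns row (y), then column (x) gradients
    It = i2 - i1               # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    (dx, dy), *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return dx, dy

# Smooth synthetic scene and a copy shifted one pixel to the right.
x = np.linspace(0, 2 * np.pi, 64)
scene = np.sin(x)[None, :] * np.cos(x)[:, None]
shifted = np.roll(scene, 1, axis=1)
dx, dy = optic_flow_global(scene, shifted)   # expect dx near 1, dy near 0
```

Embedded sensors typically compute such estimates over small windows at low resolution, which is what keeps the power and memory budget small.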

  17. 78 FR 26376 - Center for Scientific Review; Notice of Closed Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-06

    ...; Bioengineering of Neuroscience, Vision and Low Vision Technologies Study Section. Date: May 30-31, 2013. Time: 8... of Committee: Integrative, Functional and Cognitive Neuroscience Integrated Review Group..., [email protected] . Name of Committee: Center for Scientific Review Special Emphasis Panel; Vision...

  18. Why Vision 2020?

    ERIC Educational Resources Information Center

    Hinckley, June

    2000-01-01

    Discusses changes in technology, information, and people and the impact on music programs. The Vision 2020 project focuses on the future of music education. Addresses the events that created Vision 2020. Includes "The Housewright Declaration," a summarization of agreements from the Housewright Symposium on the Future of Music Education. (CMK)

  19. Insect vision as model for machine vision

    NASA Astrophysics Data System (ADS)

    Osorio, D.; Sobey, Peter J.

    1992-11-01

    The neural architecture, neurophysiology and behavioral abilities of insect vision are described and compared with those of mammals. Insects have a hardwired neural architecture of highly differentiated neurons, quite different from the cerebral cortex, yet their behavioral abilities are in important respects similar to those of mammals. These observations challenge the view, dominant since Pitts and McCulloch's seminal work in the 1940s, that the key to the power of biological neural computation is distributed processing by a plastic, highly interconnected network of individually undifferentiated and unreliable neurons.

  20. Optimized feature-detection for on-board vision-based surveillance

    NASA Astrophysics Data System (ADS)

    Gond, Laetitia; Monnin, David; Schneider, Armin

    2012-06-01

    The detection and matching of robust features in images is an important step in many computer vision applications. In this paper, the importance of the keypoint detection algorithms and their inherent parameters in the particular context of an image-based change detection system for IED detection is studied. Through extensive application-oriented experiments, we present an evaluation and comparison of the most popular feature detectors proposed by the computer vision community. We analyze how to automatically adjust these algorithms to changing imaging conditions and suggest improvements in order to achieve more flexibility and robustness in their practical implementation.
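One of the popular detectors such evaluations compare is the Harris corner detector. The sketch below is a generic, dependency-light implementation (not the authors' system); it exposes the empirical sensitivity parameter k, the kind of inherent parameter that application-oriented tuning would adjust:

```python
import numpy as np

def box3(a):
    """3x3 box filter via shifted sums (crude but dependency-free)."""
    out = np.zeros_like(a)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out += np.roll(np.roll(a, dr, axis=0), dc, axis=1)
    return out / 9.0

def harris_response(img, k=0.05):
    """Harris corner response det(M) - k * trace(M)^2 over the smoothed
    structure tensor M; k is the empirical parameter a tuner would adjust."""
    Iy, Ix = np.gradient(img.astype(float))
    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# A white square on black: the response peaks near the square's corners.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)
```

Raising k or the detection threshold suppresses weak corners, which is one way such a detector is adapted to noisier imaging conditions.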

  1. Binary pressure-sensitive paint measurements using miniaturised, colour, machine vision cameras

    NASA Astrophysics Data System (ADS)

    Quinn, Mark Kenneth

    2018-05-01

    Recent advances in machine vision technology and capability have led to machine vision cameras becoming applicable for scientific imaging. This study aims to demonstrate the applicability of machine vision colour cameras for the measurement of dual-component pressure-sensitive paint (PSP). The presence of a second luminophore component in the PSP mixture significantly reduces its inherent temperature sensitivity, increasing its applicability at low speeds. All of the devices tested are smaller than the cooled CCD cameras traditionally used and most are of significantly lower cost, thereby increasing the accessibility of such technology and techniques. Comparisons between three machine vision cameras, a three-CCD camera, and a commercially available specialist PSP camera are made on a range of parameters, and a detailed PSP calibration is conducted in a static calibration chamber. The findings demonstrate that colour machine vision cameras can be used for quantitative, dual-component, pressure measurements. These results give rise to the possibility of performing on-board dual-component PSP measurements in wind tunnels or on real flight/road vehicles.
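Dual-component PSP data are commonly reduced by ratioing the pressure-sensitive channel against the reference channel and inverting a Stern-Volmer-type calibration. The coefficients below are made-up illustrative numbers, not values from this study:

```python
import numpy as np

# Illustrative Stern-Volmer calibration I_ref / I_sig = A + B * (P / P_ref);
# A, B and P_REF are made-up numbers, not coefficients from this study.
A, B, P_REF = 0.2, 0.8, 101.3   # P_REF in kPa

def pressure_from_channels(signal, reference):
    """Ratio the pressure-sensitive channel against the reference channel
    (cancelling illumination and paint-thickness variation), then invert
    the Stern-Volmer relation to recover pressure in kPa."""
    ratio = reference / signal          # signal dims as pressure rises (O2 quenching)
    return P_REF * (ratio - A) / B

sig = np.array([1.0, 0.5])              # two pixels: ambient, then dimmer
ref = np.array([1.0, 1.0])
pressures = pressure_from_channels(sig, ref)
```

With a colour camera, the two channels come from different colour planes of the same sensor, which is what makes single-camera dual-component measurement possible.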

  2. Differences in children and adolescents' ability of reporting two CVS-related visual problems.

    PubMed

    Hu, Liang; Yan, Zheng; Ye, Tiantian; Lu, Fan; Xu, Peng; Chen, Hao

    2013-01-01

    The present study examined whether children and adolescents can correctly report dry eyes and blurred distance vision, two visual problems associated with computer vision syndrome. Participants were 913 children and adolescents aged 6-17. They were asked to report their visual problems, including dry eyes and blurred distance vision, and received an eye examination, including tear film break-up time (TFBUT) and visual acuity (VA). Inconsistency was found between participants' reports of dry eyes and TFBUT results among all 913 participants as well as for all four subgroups. In contrast, consistency was found between participants' reports of blurred distance vision and VA results among 873 participants who had never worn glasses as well as for the four subgroups. It was concluded that children and adolescents are unable to report dry eyes correctly; however, they are able to report blurred distance vision correctly. Three practical implications of the findings were discussed. Little is known about children's ability to report their visual problems, an issue critical to the diagnosis and treatment of children's computer vision syndrome. This study compared children's self-reports with clinic examination results and found that children can correctly report blurred distance vision but not dry eyes.

  3. DOE Big Idea Summit III: Solving the Information Technology Challenge Beyond Moore's Law: A New Path to Scaling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCormick, Frederick B.; Shalf, John; Mitchell, Alan

    This report captures the initial conclusions of the DOE seven National Lab team collaborating on the “Solving the Information Technology Energy Challenge Beyond Moore’s Law” initiative from the DOE Big Idea Summit III held in April of 2016. The seven Labs held a workshop in Albuquerque, NM in late July 2016 and gathered 40 researchers into 5 working groups: 4 groups spanning the levels of the co-design framework shown below, and a 5th working group focused on extending and advancing manufacturing approaches and coupling their constraints to all of the framework levels. These working groups have identified unique capabilities within the Labs to support the key challenges of this Beyond Moore’s Law Computing (BMC) vision, as well as example first steps and potential roadmaps for technology development.

  4. US general aviation: The ingredients for a renaissance. A vision and technology strategy for US industry, NASA, FAA, universities

    NASA Technical Reports Server (NTRS)

    Holmes, Bruce

    1993-01-01

    General aviation today is a vital component in the nation's air transportation system. It is threatened for survival but has enormous potential for expansion in utility and use. This potential for expansion is fueled by new satellite navigation and communication systems, small computers, flat panel displays, and advanced aerodynamics, materials and manufacturing methods, and propulsion technologies which create opportunities for new levels of environmental and economic acceptability. Expanded general aviation utility and use could have a large impact on the nation's jobs, commerce, industry, airspace capacity, trade balance, and quality of life. This paper presents, in viewgraph form, a general overview of U.S. general aviation. Topics covered include general aviation shipment and billings; airport and general aviation infrastructure; cockpit, airplane, and airspace technologies; market demand; air traffic operations and aviation accidents; fuel efficiency comparisons; and general aviation goals and strategy.

  5. Analysis of Global Properties of Shapes

    DTIC Science & Technology

    2010-06-01

    Conference on Computer Vision (ICCV) (Beijing, China, 2005), IEEE. [113] Thrun, S., and Wegbreit, B. Shape from symmetry. In Proceedings of the International Conference on Computer Vision (ICCV) (Beijing, China, 2005), IEEE. [114] Toshev, A., Shi, J., and Daniilidis, K. Image matching via saliency... applications ranging from sampling points to finding correspondences to shape simplification. Discrete variants of the Laplace-Beltrami operator [108] and

  6. The Development of a Robot-Based Learning Companion: A User-Centered Design Approach

    ERIC Educational Resources Information Center

    Hsieh, Yi-Zeng; Su, Mu-Chun; Chen, Sherry Y.; Chen, Gow-Dong

    2015-01-01

    A computer-vision-based method is widely employed to support the development of a variety of applications. In this vein, this study uses a computer-vision-based method to develop a playful learning system, which is a robot-based learning companion named RobotTell. Unlike existing playful learning systems, a user-centered design (UCD) approach is…

  7. Computational Nanoelectronics and Nanotechnology at NASA ARC

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Kutler, Paul (Technical Monitor)

    1998-01-01

    Both physical and economic considerations indicate that the scaling era of CMOS will run out of steam around the year 2010. However, physical laws also indicate that it is possible to compute at a rate a billion times present speeds with the expenditure of only one Watt of electrical power. NASA has long-term needs where ultra-small semiconductor devices are needed for critical applications: high performance, low power, compact computers for intelligent autonomous vehicles and Petaflop computing technology are some key examples. To advance the design, development, and production of future generation micro- and nano-devices, the IT Modeling and Simulation Group has been started at NASA Ames with a goal to develop an integrated simulation environment that addresses problems related to nanoelectronics and molecular nanotechnology. An overview of nanoelectronics and nanotechnology research activities being carried out at Ames Research Center will be presented. We will also present the vision and the research objectives of the IT Modeling and Simulation Group, including the applications of nanoelectronics-based devices relevant to NASA missions.

  8. Crew and Display Concepts Evaluation for Synthetic / Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III

    2006-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that strive to eliminate low-visibility conditions as a causal factor in civil aircraft accidents and replicate the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. Enhanced Vision System (EVS) technologies are analogous and complementary in many respects to SVS, with the principal difference being that EVS is an imaging sensor presentation, as opposed to a database-derived image. The use of EVS in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting operations to civil airports. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved EVS that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of SVS and EVS technologies, specifically focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under the newly adopted FAA rules which provide operating credit for EVS. Overall, the experimental data showed that significant improvements in situation awareness (SA) without concomitant increases in workload and display clutter could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying.

  9. Computer Use and Vision-Related Problems Among University Students In Ajman, United Arab Emirate

    PubMed Central

    Shantakumari, N; Eldeeb, R; Sreedharan, J; Gopal, K

    2014-01-01

    Background: The extensive use of computers as a medium of teaching and learning in universities necessitates introspection into the extent of computer-related health disorders among the student population. Aim: This study was undertaken to assess the pattern of computer usage and related visual problems among university students in Ajman, United Arab Emirates. Materials and Methods: A total of 500 students studying in Gulf Medical University, Ajman and Ajman University of Science and Technology were recruited into this study. Demographic characteristics, pattern of usage of computers and associated visual symptoms were recorded in a validated self-administered questionnaire. Chi-square test was used to determine the significance of the observed differences between the variables. The level of statistical significance was at P < 0.05. The crude odds ratio (OR) was determined using simple binary logistic regression and the adjusted OR was calculated using multiple logistic regression. Results: The mean age of participants was 20.4 (3.2) years. The analysis of demographic data revealed that 50% (236/471) of students were from the Middle East, 32% (151/471) from other parts of Asia, 11% (52/471) from Africa, 4% (19/471) from America and 3% (14/471) from Europe. The most common visual problems reported among computer users were headache - 53.3% (251/471), burning sensation in the eyes - 54.8% (258/471) and tired eyes - 48% (226/471). Female students were found to be at a higher risk. Nearly 72% of students reported frequent interruption of computer work. Headache caused interruption of work in 43.85% (110/168) of the students while tired eyes caused interruption of work in 43.5% (98/168) of the students. When the screen was viewed at a distance of more than 50 cm, the prevalence of headaches decreased by 38% (50-100 cm; OR: 0.62, 95% confidence interval [CI]: 0.42-0.92). Prevalence of tired eyes increased by 89% when screen filters were not used (OR: 1.894, 95% CI: 1.065-3.368). Conclusion: A high prevalence of vision-related problems was noted among university students. Sustained periods of close screen work without screen filters were found to be associated with the occurrence of symptoms and increased interruption of the students' work. There is a need to increase ergonomic awareness among students, and corrective measures need to be implemented to reduce the impact of computer-related vision problems. PMID:24761249
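The crude odds ratios quoted above are of the kind computed from a 2x2 exposure table. A minimal sketch with a Woolf-type 95% confidence interval, using illustrative counts rather than the study's data:

```python
import math

def crude_or_ci(a, b, c, d):
    """Crude odds ratio for the 2x2 table [[a, b], [c, d]] (exposed: a cases,
    b non-cases; unexposed: c cases, d non-cases) with a Woolf-type 95% CI."""
    odds_ratio = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = odds_ratio * math.exp(-1.96 * se_log)
    hi = odds_ratio * math.exp(1.96 * se_log)
    return odds_ratio, lo, hi

# Illustrative counts (NOT the study's data): symptom with vs without exposure.
or_, lo, hi = crude_or_ci(120, 80, 60, 80)
```

An interval that excludes 1.0, as in the study's screen-filter result, is what supports the claim of an association.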

  10. The role of vision processing in prosthetic vision.

    PubMed

    Barnes, Nick; He, Xuming; McCarthy, Chris; Horne, Lachlan; Kim, Junae; Scott, Adele; Lieby, Paulette

    2012-01-01

    Prosthetic vision provides vision which is reduced in resolution and dynamic range compared to normal human vision. This comes about both due to residual damage to the visual system from the condition that caused vision loss, and due to limitations of current technology. However, even with limitations, prosthetic vision may still be able to support functional performance which is sufficient for tasks which are key to restoring independent living and quality of life. Here vision processing can play a key role, ensuring that information which is critical to the performance of key tasks is available within the capability of the available prosthetic vision. In this paper, we frame vision processing for prosthetic vision, highlight some key areas which present problems in terms of quality of life, and present examples where vision processing can help achieve better outcomes.
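A simple instance of such vision processing is reducing a camera image to the low-resolution, low-dynamic-range raster a prosthesis can render. The grid size and level count below are illustrative assumptions, not parameters from this paper:

```python
import numpy as np

def phosphene_downsample(img, grid=(8, 8), levels=4):
    """Reduce an image to a coarse, low-dynamic-range 'phosphene' raster:
    block-average down to the grid, then quantise to a few brightness levels."""
    h, w = img.shape
    gh, gw = grid
    blocks = img[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
    coarse = blocks.mean(axis=(1, 3))
    return np.round(coarse * (levels - 1)) / (levels - 1)

img = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)   # smooth test ramp
out = phosphene_downsample(img)                        # 8x8 raster, 4 grey levels
```

Task-oriented processing, e.g. emphasising obstacles or faces before this reduction, is where the choices discussed in the paper decide what survives the limited channel.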

  11. Intelligent transportation systems and intermodal freight transportation. Final report, May-December 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aylward, A.D.

    1996-12-01

    This paper describes the various advanced technologies already in use in the intermodal freight transportation industry and addresses the opportunity for improved communication between the public and private sector regarding technology applications to the freight transportation system that could enhance the capacity of the system as a whole. The current public interest in freight transportation policy creates an opportunity to develop a shared vision of the future needs of international intermodal freight transportation in the United States. The Federal government can impact this vision by taking action in the following areas: Provide Infrastructure Funding to Support Efficiency and Global Competitiveness; Support Regional and Corridor Efforts; Understand the Freight Sector and Develop a Shared Vision of Technology Benefits; Lead Transportation Technology Efforts of Federal Agencies; and Maintain Commitment to Open ITS Architecture.

  12. Monitoring system of multiple fire fighting based on computer vision

    NASA Astrophysics Data System (ADS)

    Li, Jinlong; Wang, Li; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke

    2010-10-01

    With the high demand for fire control in spacious buildings, computer vision is playing an increasingly important role. This paper presents a new monitoring system for multiple fire fighting based on computer vision and color detection. The system can orient itself to the fire position and then extinguish the fire by itself. The system structure, working principle, fire orientation, hydrant angle adjustment, and system calibration are described in detail, and the design of the relevant hardware and software is introduced. The principle and process of color detection and image processing are given as well. The system ran well in testing; it offers high reliability, low cost, and easy node expansion, giving it bright prospects for application and popularization.
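The colour-detection stage of such a system might be sketched as a heuristic rule on RGB pixels followed by a centroid for aiming. The thresholds and function names here are illustrative assumptions, not the paper's method:

```python
import numpy as np

def fire_mask(rgb):
    """Heuristic colour rule for flame pixels: red channel dominant and
    warm ordering R > G > B (thresholds are illustrative, not the paper's)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (r > 0.7) & (r > g) & (g > b)

def fire_centroid(rgb):
    """Centroid of flagged pixels, usable to aim a hydrant; None if no fire."""
    mask = fire_mask(rgb)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

frame = np.zeros((48, 48, 3))
frame[10:14, 30:34] = (0.95, 0.55, 0.10)   # flame-coloured patch
```

Mapping the image centroid to a hydrant pan/tilt angle is exactly what the calibration step in such a system establishes.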

  13. Intelligent systems technology infrastructure for integrated systems

    NASA Technical Reports Server (NTRS)

    Lum, Henry

    1991-01-01

    A system infrastructure must be properly designed and integrated from the conceptual development phase to accommodate evolutionary intelligent technologies. Several technology development activities were identified that may have application to rendezvous and capture systems. Optical correlators in conjunction with fuzzy logic control might be used for the identification, tracking, and capture of either cooperative or non-cooperative targets without the intensive computational requirements associated with vision processing. A hybrid digital/analog system was developed and tested with a robotic arm. An aircraft refueling application demonstration is planned within two years. Initially this demonstration will be ground based with a follow-on air based demonstration. System dependability measurement and modeling techniques are being developed for fault management applications. This involves usage of incremental solution/evaluation techniques and modularized systems to facilitate reuse and to take advantage of natural partitions in system models. Though not yet commercially available and currently subject to accuracy limitations, technology is being developed to perform optical matrix operations to enhance computational speed. Optical terrain recognition using camera image sequencing processed with optical correlators is being developed to determine position and velocity in support of lander guidance. The system is planned for testing in conjunction with Dryden Flight Research Facility. Advanced architecture technology is defining open architecture design constraints, test bed concepts (processors, multiple hardware/software and multi-dimensional user support, knowledge/tool sharing infrastructure), and software engineering interface issues.

  14. 3D laptop for defense applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  15. SPACE: Vision and Reality: Face to Face. Proceedings Report

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The proceedings of the 11th National Space Symposium, entitled 'Vision and Reality: Face to Face,' are presented. Technological areas discussed include the following sections: Vision for the future; Positioning for the future; Remote sensing, the emerging era; Space opportunities; Competitive vision with acquisition reality; National security requirements in space; The world is into space; and The outlook for space. An appendix is also attached.

  16. First-in-Human Trial of a Novel Suprachoroidal Retinal Prosthesis

    PubMed Central

    Ayton, Lauren N.; Blamey, Peter J.; Guymer, Robyn H.; Luu, Chi D.; Nayagam, David A. X.; Sinclair, Nicholas C.; Shivdasani, Mohit N.; Yeoh, Jonathan; McCombe, Mark F.; Briggs, Robert J.; Opie, Nicholas L.; Villalobos, Joel; Dimitrov, Peter N.; Varsamidis, Mary; Petoe, Matthew A.; McCarthy, Chris D.; Walker, Janine G.; Barnes, Nick; Burkitt, Anthony N.; Williams, Chris E.; Shepherd, Robert K.; Allen, Penelope J.

    2014-01-01

    Retinal visual prostheses (“bionic eyes”) have the potential to restore vision to blind or profoundly vision-impaired patients. The medical bionic technology used to design, manufacture and implant such prostheses is still in its relative infancy, with various technologies and surgical approaches being evaluated. We hypothesised that a suprachoroidal implant location (between the sclera and choroid of the eye) would provide significant surgical and safety benefits for patients, allowing them to maintain preoperative residual vision as well as gaining prosthetic vision input from the device. This report details the first-in-human Phase 1 trial to investigate the use of retinal implants in the suprachoroidal space in three human subjects with end-stage retinitis pigmentosa. The success of the suprachoroidal surgical approach and its associated safety benefits, coupled with twelve-month post-operative efficacy data, holds promise for the field of vision restoration. Trial Registration Clinicaltrials.gov NCT01603576 PMID:25521292

  17. Choroideremia

    MedlinePlus

    ... Until a treatment is discovered, help is available through low-vision aids, including optical, electronic, and computer-based devices. ...

  18. Synthetic Vision CFIT Experiments for GA and Commercial Aircraft: "A Picture Is Worth A Thousand Lives"

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Hughes, Monica F.; Arthur, Jarvis J., III; Kramer, Lynda J.; Glaab, Louis J.; Bailey, Randy E.; Parrish, Russell V.; Uenking, Michael D.

    2003-01-01

    Because restricted visibility has been implicated in the majority of commercial and general aviation accidents, solutions will need to focus on how to enhance safety during instrument meteorological conditions (IMC). The NASA Synthetic Vision Systems (SVS) project is developing technologies to help achieve these goals through the synthetic presentation of how the outside world would look to the pilot if vision were not reduced. The potential safety outcome would be a significant reduction in several accident categories, such as controlled-flight-into-terrain (CFIT), that have restricted visibility as a causal factor. The paper describes two experiments that demonstrated the efficacy of synthetic vision technology to prevent CFIT accidents for both general aviation and commercial aircraft.

  19. Eyesight quality and Computer Vision Syndrome.

    PubMed

    Bogdănici, Camelia Margareta; Săndulache, Diana Elena; Nechita, Corina Andreea

    2017-01-01

    The aim of the study was to analyze the effects that gadgets have on eyesight quality. A prospective observational study was conducted from January to July 2016, on 60 people who were divided into two groups: Group 1 - 30 middle school pupils with a mean age of 11.9 ± 1.86 years and Group 2 - 30 patients evaluated in the Ophthalmology Clinic, "Sf. Spiridon" Hospital, Iași, with a mean age of 21.36 ± 7.16 years. The clinical parameters observed were the following: visual acuity (VA), objective refraction, binocular vision (BV), fusional amplitude (FA), and Schirmer's test. A questionnaire was also distributed, which contained 8 questions that highlighted the gadgets' impact on eyesight. The use of different gadgets, such as computers, laptops, mobile phones, or other displays, has become part of everyday life, and people experience a variety of ocular symptoms or vision problems related to these. Computer Vision Syndrome (CVS) represents a group of visual and extraocular symptoms associated with sustained use of visual display terminals. Headache, blurred vision, and ocular congestion are the most frequent manifestations caused by prolonged use of gadgets. Mobile phones and laptops are the most frequently used gadgets. People who use gadgets for long periods sustain a prolonged accommodative effort. A small amount of refractive error (especially myopic shift) was objectively recorded by various studies on near work. Dry eye syndrome could also be identified, and an improvement of visual comfort could be observed after the instillation of artificial tear drops. Computer Vision Syndrome is still under-diagnosed, and people should be made aware of the adverse effects that prolonged use of gadgets has on eyesight.

  20. Eyesight quality and Computer Vision Syndrome

    PubMed Central

    Bogdănici, Camelia Margareta; Săndulache, Diana Elena; Nechita, Corina Andreea

    2017-01-01

    The aim of the study was to analyze the effects that gadgets have on eyesight quality. A prospective observational study was conducted from January to July 2016, on 60 people who were divided into two groups: Group 1 – 30 middle school pupils with a mean age of 11.9 ± 1.86 years and Group 2 – 30 patients evaluated in the Ophthalmology Clinic, “Sf. Spiridon” Hospital, Iași, with a mean age of 21.36 ± 7.16 years. The clinical parameters observed were the following: visual acuity (VA), objective refraction, binocular vision (BV), fusional amplitude (FA), and Schirmer’s test. A questionnaire was also distributed, which contained 8 questions that highlighted the gadgets’ impact on eyesight. The use of different gadgets, such as computers, laptops, mobile phones, or other displays, has become part of everyday life, and people experience a variety of ocular symptoms or vision problems related to these. Computer Vision Syndrome (CVS) represents a group of visual and extraocular symptoms associated with sustained use of visual display terminals. Headache, blurred vision, and ocular congestion are the most frequent manifestations caused by prolonged use of gadgets. Mobile phones and laptops are the most frequently used gadgets. People who use gadgets for long periods sustain a prolonged accommodative effort. A small amount of refractive error (especially myopic shift) was objectively recorded by various studies on near work. Dry eye syndrome could also be identified, and an improvement of visual comfort could be observed after the instillation of artificial tear drops. Computer Vision Syndrome is still under-diagnosed, and people should be made aware of the adverse effects that prolonged use of gadgets has on eyesight. PMID:29450383
