Knowledge-based machine vision systems for space station automation
NASA Technical Reports Server (NTRS)
Ranganath, Heggere S.; Chipman, Laure J.
1989-01-01
Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.
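The scene-matching capability described above can be illustrated with a modern analogue; the original system ran on a Perceptics 9200e image processor, so the OpenCV sketch below (with hypothetical file names and threshold) is only an illustration of the idea, not the system's actual implementation.

```python
# Hypothetical scene-matching step: locate a known panel feature in a
# photograph by normalized cross-correlation. File names and the acceptance
# threshold are placeholders, not details of the original system.
import cv2

scene = cv2.imread("panel_photo.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("switch_template.png", cv2.IMREAD_GRAYSCALE)

# Correlation map; its peak marks the best-matching location.
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.8:  # empirical acceptance threshold
    print(f"feature matched at {max_loc} (score {max_val:.2f})")
else:
    print("no confident match in this scene")
```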
A Vision in Jeopardy: Royal Navy Maritime Autonomous Systems (MAS)
2017-03-31
Chapter 6 will propose a new MAS vision for the RN. However, before doing so, a fresh look at the problem is required. Consensus of the Problem, Not the... Despite continuous investment and assessment, the RN has failed to deliver any sustainable MAS operational capability. A vision for MAS finally materialized in 2014. Yet, the vision...
GSFC Information Systems Technology Developments Supporting the Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Hughes, Peter; Dennehy, Cornelius; Mosier, Gary; Smith, Dan; Rykowski, Lisa
2004-01-01
The Vision for Space Exploration will guide NASA's future human and robotic space activities. The broad range of human and robotic missions now being planned will require the development of new system-level capabilities enabled by emerging new technologies. Goddard Space Flight Center is actively supporting the Vision for Space Exploration in a number of program management, engineering and technology areas. This paper provides a brief background on the Vision for Space Exploration and a general overview of potential key Goddard contributions. In particular, this paper focuses on describing relevant GSFC information systems capabilities in architecture development; interoperable command, control and communications; and other applied information systems technology/research activities that are applicable to support the Vision for Space Exploration goals. Current GSFC development efforts and task activities are presented together with future plans.
An embedded vision system for an unmanned four-rotor helicopter
NASA Astrophysics Data System (ADS)
Lillywhite, Kirt; Lee, Dah-Jye; Tippetts, Beau; Fowers, Spencer; Dennis, Aaron; Nelson, Brent; Archibald, James
2006-10-01
In this paper an embedded vision system and control module is introduced that is capable of controlling an unmanned four-rotor helicopter and processing live video for various law enforcement, security, military, and civilian applications. The vision system is implemented on a newly designed compact FPGA board (Helios). The Helios board contains a Xilinx Virtex-4 FPGA chip and memory, making it capable of implementing real-time vision algorithms. A Smooth Automated Intelligent Leveling (SAIL) daughter board, attached to the Helios board, collects attitude and heading information to be processed in order to control the unmanned helicopter. The SAIL board uses an electrolytic tilt sensor, compass, voltage level converters, and analog-to-digital converters to perform its operations. While level flight can be maintained, problems stemming from the characteristics of the tilt sensor limit the maneuverability of the helicopter. The embedded vision system has given very good results on a number of real-time robotic vision algorithms.
Evolving EO-1 Sensor Web Testbed Capabilities in Pursuit of GEOSS
NASA Technical Reports Server (NTRS)
Mandl, Dan; Ly, Vuong; Frye, Stuart; Younis, Mohamed
2006-01-01
A viewgraph presentation on evolving sensor web capabilities in pursuit of the Global Earth Observing System of Systems (GEOSS) is shown. The topics include: 1) Vision to Enable Sensor Webs with "Hot Spots"; 2) Vision Extended for Communication/Control Architecture for Missions to Mars; 3) Key Capabilities Implemented to Enable EO-1 Sensor Webs; 4) One of Three Experiments Conducted by UMBC Undergraduate Class 12-14-05 (1 - 3); 5) Closer Look at our Mini-Rovers and Simulated Mars Landscape at GSFC; 6) Beginning to Implement Experiments with Standards-Vision for Integrated Sensor Web Environment; 7) Goddard Mission Services Evolution Center (GMSEC); 8) GMSEC Component Catalog; 9) Core Flight System (CFS) and Extension for GMSEC for Flight SW; 10) Sensor Modeling Language; 11) Seamless Ground to Space Integrated Message Bus Demonstration (completed December 2005); 12) Other Experiments in Queue; 13) Acknowledgements; and 14) References.
NASA Astrophysics Data System (ADS)
Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.
1989-03-01
The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system; using a neurally based computing substrate, it completes all necessary visual tasks in real-time.
Augmentation of Cognition and Perception Through Advanced Synthetic Vision Technology
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.; Arthur, Jarvis J.; Williams, Steve P.; McNabb, Jennifer
2005-01-01
Synthetic Vision System technology augments reality and creates a virtual visual meteorological condition that extends a pilot's cognitive and perceptual capabilities during flight operations when outside visibility is restricted. The paper describes the NASA Synthetic Vision System for commercial aviation with an emphasis on how the technology achieves Augmented Cognition objectives.
Vision technology/algorithms for space robotics applications
NASA Technical Reports Server (NTRS)
Krishen, Kumar; Defigueiredo, Rui J. P.
1987-01-01
Automation and robotics have been proposed for space applications to increase productivity, improve reliability, increase flexibility, and improve safety: automating time-consuming tasks, increasing the productivity and performance of crew-accomplished tasks, and performing tasks beyond the capability of the crew. This paper provides a review of efforts currently in progress in the area of robotic vision. Both systems and algorithms are discussed. The evolution of future vision/sensing is projected to include the fusion of multisensors ranging from microwave to optical with multimode capability to include position, attitude, recognition, and motion parameters. The key features of the overall system design will be small size and weight, fast signal processing, robust algorithms, and accurate parameter determination. These aspects of vision/sensing are also discussed.
A lightweight, inexpensive robotic system for insect vision.
Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex
2017-09-01
Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware available. Suitable hardware is either prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, and/or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress in understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of vision-based navigation for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world, showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Advanced helmet vision system (AHVS) integrated night vision helmet mounted display (HMD)
NASA Astrophysics Data System (ADS)
Ashcraft, Todd W.; Atac, Robert
2012-06-01
Gentex Corporation, under contract to Naval Air Systems Command (AIR 4.0T), designed the Advanced Helmet Vision System to provide aircrew with 24-hour, visor-projected binocular night vision and HMD capability. AHVS integrates numerous key technologies, including high brightness Light Emitting Diode (LED)-based digital light engines, advanced lightweight optical materials and manufacturing processes, and innovations in graphics processing software. This paper reviews the current status of miniaturization and integration with the latest two-part Gentex modular helmet, highlights the lessons learned from previous AHVS phases, and discusses plans for qualification and flight testing.
NASA Technical Reports Server (NTRS)
Drake, Bret G.; Josten, B. Kent; Monell, Donald W.
2004-01-01
The Vision for Space Exploration provides direction for the National Aeronautics and Space Administration to embark on a robust space exploration program that will advance the Nation's scientific, security, and economic interests. This plan calls for a progressive expansion of human capabilities beyond low Earth orbit, seeking to answer profound scientific and philosophical questions while responding to discoveries along the way. In addition, the Vision articulates the strategy for developing the revolutionary new technologies and capabilities required for the future exploration of the solar system. The National Aeronautics and Space Administration faces new challenges in successfully implementing the Vision. In order to implement a sustained and affordable exploration endeavor, it is vital for NASA to do business differently. This paper provides an overview of the strategy-to-task-to-technology process being used by NASA's Exploration Systems Mission Directorate to develop the requirements and system acquisition details necessary for implementing a sustainable exploration vision.
NASA Astrophysics Data System (ADS)
Razdan, Vikram; Bateman, Richard
2015-05-01
This study investigates the use of a Smartphone and its camera vision capabilities in engineering metrology and flaw detection, with a view to developing a low-cost alternative to machine vision systems, which are out of reach for small-scale manufacturers. A Smartphone has to provide a similar level of accuracy as machine vision devices like smart cameras. The objective set out was to develop an App on an Android Smartphone, incorporating advanced computer vision algorithms written in Java code. The App could then be used for recording measurements of twist drill bits and hole geometry, and analysing the results for accuracy. A detailed literature review was carried out for in-depth study of machine vision systems and their capabilities, including a comparison between the HTC One X Android Smartphone and the Teledyne Dalsa BOA smart camera. A review of the existing metrology Apps in the market was also undertaken. In addition, the drilling operation was evaluated to establish key measurement parameters of a twist drill bit, especially flank wear and diameter. The methodology covers software development of the Android App, including the use of image processing algorithms such as Gaussian Blur, Sobel and Canny available from the OpenCV software library, as well as designing and developing the experimental set-up for carrying out the measurements. The results obtained from the experimental set-up were analysed for geometry of twist drill bits and holes, including diametrical measurements and flaw detection. The results show that Smartphones like the HTC One X have the processing power and the camera capability to carry out metrological tasks, although the dimensional accuracy achievable from the Smartphone App is below the level provided by machine vision devices like smart cameras. A Smartphone with mechanical attachments, capable of image processing and having a reasonable level of accuracy in dimensional measurement, has the potential to become a handy low-cost machine vision system for small-scale manufacturers, especially in field metrology and flaw detection.
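The abstract names the OpenCV operators the App relies on; as a concrete illustration, here is a minimal sketch of such a blur/Sobel/Canny measurement pipeline in Python-OpenCV (the original App is Java on Android). The file name, thresholds, and mm-per-pixel calibration factor are illustrative assumptions.

```python
# Hypothetical sketch of the App's edge pipeline (Gaussian Blur -> Sobel ->
# Canny) applied to a drill-bit image; thresholds and calibration are assumed.
import cv2

img = cv2.imread("drill_bit.jpg", cv2.IMREAD_GRAYSCALE)

blurred = cv2.GaussianBlur(img, (5, 5), 1.4)            # suppress sensor noise
grad_x = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)  # horizontal gradient
grad_y = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)  # vertical gradient
magnitude = cv2.magnitude(grad_x, grad_y)               # gradient strength map
edges = cv2.Canny(blurred, 50, 150)                     # thin, linked edges

# Estimate the bit diameter from the outermost edges on the middle scanline,
# then convert pixels to millimetres with a calibration factor obtained from
# a reference object of known size.
cols = edges[edges.shape[0] // 2].nonzero()[0]
if cols.size >= 2:
    mm_per_px = 0.02  # hypothetical calibration factor
    print(f"diameter ≈ {(cols[-1] - cols[0]) * mm_per_px:.2f} mm")
```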
Pyramidal neurovision architecture for vision machines
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1993-08-01
The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, whereupon each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.
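The parallel-hierarchical idea can be illustrated with a conventional image pyramid: coarse levels summarize the scene cheaply, and only promising regions are re-examined at finer levels. A minimal sketch assuming OpenCV; it illustrates the converging multilayered structure, not the authors' neuro-vision processors.

```python
# Illustrative Gaussian pyramid: each level halves the resolution, mirroring
# the converging multilayered structure described in the paper.
import cv2

frame = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

pyramid = [frame]
for _ in range(3):
    pyramid.append(cv2.pyrDown(pyramid[-1]))  # blur + downsample by 2

# Coarse-to-fine processing: start at the smallest level, then refine only
# the regions flagged there at progressively finer levels.
for level, img in enumerate(reversed(pyramid)):
    print(f"level {level}: {img.shape[1]}x{img.shape[0]} pixels")
```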
NASA Technical Reports Server (NTRS)
Clinton, R. G., Jr.; Szofran, Frank; Bassler, Julie A.; Schlagheck, Ronald A.; Cook, Mary Beth
2005-01-01
The Microgravity Materials Science Program established a strong research capability through partnerships between NASA and the scientific research community. With the announcement of the vision for space exploration, additional emphasis in strategic materials science areas was necessary. The President's Commission recognized that achieving its exploration objectives would require significant technical innovation, research, and development in focal areas defined as "enabling technologies." Among the 17 enabling technologies identified for initial focus were: advanced structures; advanced power and propulsion; closed-loop life support and habitability; extravehicular activity systems; autonomous systems and robotics; scientific data collection and analysis; biomedical risk mitigation; and planetary in situ resource utilization. Mission success may depend upon use of local resources to fabricate a replacement part to repair a critical system. Future propulsion systems will require materials with a wide range of mechanical, thermophysical, and thermochemical properties, many of them well beyond capabilities of today's materials systems. Materials challenges have also been identified by experts working to develop advanced life support systems. In responding to the vision for space exploration, the Microgravity Materials Science Program aggressively transformed its research portfolio and focused materials science areas of emphasis to include space radiation shielding; in situ fabrication and repair for life support systems; in situ resource utilization for life support consumables; and advanced materials for exploration, including materials science for space propulsion systems and for life support systems. The purpose of this paper is to inform the scientific community of these new research directions and opportunities to utilize their materials science expertise and capabilities to support the vision for space exploration.
Biomimetic machine vision system.
Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael
2005-01-01
Real-time digital imaging for machine vision has proven prohibitive within control systems that employ low-power single processors, unless the scope of vision or the resolution of captured images is compromised. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. Development of a single sensor is accomplished, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data, resulting in minimal data processing to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues.
A computer vision system for the recognition of trees in aerial photographs
NASA Technical Reports Server (NTRS)
Pinz, Axel J.
1991-01-01
Increasing forest damage in Central Europe has created demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented, which is capable of finding trees in color infrared aerial photographs. The concept and architecture of VES are discussed briefly. The system is applied to a multisource test data set. The processing of this multisource data set leads to multiple interpretation results for one scene. An integration of these results provides a better scene description by the vision system. This is achieved by an implementation of Steven's correlation algorithm.
Basic design principles of colorimetric vision systems
NASA Astrophysics Data System (ADS)
Mumzhiu, Alex M.
1998-10-01
Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. The color measurement instruments used for production quality control, such as colorimeters and spectrophotometers, have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them in vision systems. The subject far exceeds the limitations of a journal paper, so only the most important aspects are discussed. An overview of the major areas of application for colorimetric vision systems is given. Finally, the reasons why some customers are happy with their vision systems and some are not are analyzed.
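The colorimetric principle the paper insists on can be made concrete: a camera's device-dependent RGB values must be transformed into a device-independent space such as CIE XYZ before any color measurement is meaningful. A minimal sketch, assuming linear (gamma-decoded) sRGB input and the standard D65 matrix; a real colorimetric vision system would substitute a characterization matrix measured against reference color patches.

```python
# Minimal illustration of the colorimetric step the paper argues vision
# systems must not skip: mapping linear RGB to device-independent CIE XYZ.
# The 3x3 matrix below is the standard sRGB/D65 matrix; a real camera needs
# its own characterization matrix.
import numpy as np

M_SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def rgb_to_xyz(rgb_linear):
    """rgb_linear: three values in [0, 1], already gamma-decoded."""
    return M_SRGB_TO_XYZ @ np.asarray(rgb_linear)

print(rgb_to_xyz([1.0, 1.0, 1.0]))  # ~D65 white point (0.9505, 1.0000, 1.0890)
```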
Eye vision system using programmable micro-optics and micro-electronics
NASA Astrophysics Data System (ADS)
Riza, Nabeel A.; Amin, M. Junaid; Riza, Mehdi N.
2014-02-01
Proposed is a novel eye vision system that combines advanced micro-optic and micro-electronic technologies, including programmable micro-optic devices, pico-projectors, Radio Frequency (RF) and optical wireless communication and control links, energy harvesting and storage devices, and remote wireless energy transfer capabilities. This portable, lightweight system can measure eye refractive powers, optimize light conditions for the eye under test, conduct color-blindness tests, and implement eye strain relief and eye muscle exercises via time-sequenced imaging. Described are the basic design of the proposed system and first-stage experimental results for vision spherical lens refractive error correction.
The role of vision processing in prosthetic vision.
Barnes, Nick; He, Xuming; McCarthy, Chris; Horne, Lachlan; Kim, Junae; Scott, Adele; Lieby, Paulette
2012-01-01
Prosthetic vision provides vision which is reduced in resolution and dynamic range compared to normal human vision. This comes about both due to residual damage to the visual system from the condition that caused vision loss, and due to limitations of current technology. However, even with limitations, prosthetic vision may still be able to support functional performance which is sufficient for tasks which are key to restoring independent living and quality of life. Here vision processing can play a key role, ensuring that information which is critical to the performance of key tasks is available within the capability of the available prosthetic vision. In this paper, we frame vision processing for prosthetic vision, highlight some key areas which present problems in terms of quality of life, and present examples where vision processing can help achieve better outcomes.
Data-Fusion for a Vision-Aided Radiological Detection System: Sensor dependence and Source Tracking
NASA Astrophysics Data System (ADS)
Stadnikia, Kelsey; Martin, Allan; Henderson, Kristofer; Koppal, Sanjeev; Enqvist, Andreas
2018-01-01
The University of Florida is taking a multidisciplinary approach to fuse the data between 3D vision sensors and radiological sensors in hopes of creating a system capable of not only detecting the presence of a radiological threat, but also tracking it. The key to developing such a vision-aided radiological detection system lies in the fact that the count rate is inversely proportional to the square of the distance between source and detector. Presented in this paper are the results of the calibration algorithm used to predict the location of the radiological detectors based on the 3D distance from the source to the detector (vision data) and the detector's count rate (radiological data). Also presented are the results of two correlation methods used to explore source tracking.
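To make the inverse-square relation concrete, a minimal sketch of the calibration idea: fit an effective source strength S from paired (distance, count-rate) samples, then invert C = S/d² to predict distance from a new count rate. All numbers are illustrative, not from the paper.

```python
# Inverse-square calibration sketch: C = S / d**2, with detector efficiency
# folded into S. Distances come from the vision system, count rates from the
# radiological detector. Values are illustrative.
import numpy as np

d = np.array([0.5, 1.0, 1.5, 2.0])          # source-detector distances (m)
c = np.array([4000.0, 1010.0, 445.0, 250.0])  # matching count rates (cps)

# Least-squares fit of S in C = S/d^2 reduces to averaging C*d^2.
s_hat = np.mean(c * d**2)

# Predict distance from a new count-rate reading.
c_new = 620.0
d_pred = np.sqrt(s_hat / c_new)
print(f"S ≈ {s_hat:.0f} cps·m², predicted distance ≈ {d_pred:.2f} m")
```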
The 3D laser radar vision processor system
NASA Astrophysics Data System (ADS)
Sebok, T. M.
1990-10-01
Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets using three-dimensional laser radar imagery, for use with a robotic-type system. This processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide it with the information needed to fetch and grasp targets in a space-type scenario.
Synthetic vision in the cockpit: 3D systems for general aviation
NASA Astrophysics Data System (ADS)
Hansen, Andrew J.; Rybacki, Richard M.; Smith, W. Garth
2001-08-01
Synthetic vision has the potential to improve safety in aviation through better pilot situational awareness and enhanced navigational guidance. The technological advances enabling synthetic vision are GPS-based navigation (position and attitude) systems and efficient graphical systems for rendering 3D displays in the cockpit. A benefit for military, commercial, and general aviation platforms alike is the relentless drive to miniaturize computer subsystems. Processors, data storage, graphical and digital signal processing chips, RF circuitry, and bus architectures are keeping pace with or out-pacing Moore's Law with the transition to mobile computing and embedded systems. The tandem of fundamental GPS navigation services, such as the US FAA's Wide Area and Local Area Augmentation Systems (WAAS/LAAS), and commercially viable mobile rendering systems puts synthetic vision well within the technological reach of general aviation. Given the appropriate navigational inputs, low-cost and power-efficient graphics solutions are capable of rendering a pilot's out-the-window view into visual databases with photo-specific imagery and geo-specific elevation and feature content. Looking beyond the single airframe, proposed aviation technologies such as ADS-B would provide a communication channel for bringing traffic information on-board and into the cockpit visually via the 3D display for additional pilot awareness. This paper gives a view of current 3D graphics system capability suitable for general aviation and presents a potential road map following the current trends.
Multispectral Image Processing for Plants
NASA Technical Reports Server (NTRS)
Miles, Gaines E.
1991-01-01
The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.
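A standard example of multispectral sensing of plant health is the Normalized Difference Vegetation Index, NDVI = (NIR - Red)/(NIR + Red); healthy vegetation reflects strongly in the near-infrared and absorbs red light. The sketch below uses synthetic reflectance values; the paper does not specify this particular index.

```python
# Illustrative multispectral plant-health index (NDVI). A real system would
# read the red and near-infrared bands from the multispectral camera.
import numpy as np

red = np.array([[0.10, 0.40], [0.12, 0.45]])  # red reflectance, [0, 1]
nir = np.array([[0.60, 0.42], [0.58, 0.44]])  # near-infrared reflectance

# NDVI = (NIR - Red) / (NIR + Red): healthy vegetation scores high (~0.6-0.9),
# stressed plants and bare soil score low.
ndvi = (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero
print(ndvi)
```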
Vertically integrated photonic multichip module architecture for vision applications
NASA Astrophysics Data System (ADS)
Tanguay, Armand R., Jr.; Jenkins, B. Keith; von der Malsburg, Christoph; Mel, Bartlett; Holt, Gary; O'Brien, John D.; Biederman, Irving; Madhukar, Anupam; Nasiatka, Patrick; Huang, Yunsong
2000-05-01
The development of a truly smart camera, with inherent capability for low latency semi-autonomous object recognition, tracking, and optimal image capture, has remained an elusive goal notwithstanding tremendous advances in the processing power afforded by VLSI technologies. These features are essential for a number of emerging multimedia-based applications, including enhanced augmented reality systems. Recent advances in understanding of the mechanisms of biological vision systems, together with similar advances in hybrid electronic/photonic packaging technology, offer the possibility of artificial biologically-inspired vision systems with significantly different, yet complementary, strengths and weaknesses. We describe herein several system implementation architectures based on spatial and temporal integration techniques within a multilayered structure, as well as the corresponding hardware implementation of these architectures based on the hybrid vertical integration of multiple silicon VLSI vision chips by means of dense 3D photonic interconnections.
Panoramic stereo sphere vision
NASA Astrophysics Data System (ADS)
Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian
2013-01-01
Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is lost because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system, built around a specially combined fish-eye lens module, which is capable of producing 3D coordinate information for the whole observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose the geometric model, mathematical model and parameter calibration method in this paper. Video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments and attitude estimation are some of the applications which will benefit from PSSV.
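For readers unfamiliar with fish-eye geometry, one common single-lens model (not necessarily the one used by PSSV) is the equidistant projection r = f·θ, which maps the angle θ between a 3D ray and the optical axis to a radial distance in the image. A minimal sketch with assumed calibration values:

```python
# Equidistant fish-eye projection sketch (r = f * theta); f and the image
# center are assumed calibration values, not the paper's actual model.
import numpy as np

def project_equidistant(xyz, f=300.0, cx=512.0, cy=512.0):
    """Project a 3D direction (camera frame, z forward) to fish-eye pixels."""
    x, y, z = xyz
    theta = np.arccos(z / np.linalg.norm(xyz))  # angle from the optical axis
    phi = np.arctan2(y, x)                      # azimuth in the image plane
    r = f * theta                               # equidistant radial mapping
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

print(project_equidistant(np.array([0.2, 0.1, 1.0])))
```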
Real Time Target Tracking Using Dedicated Vision Hardware
NASA Astrophysics Data System (ADS)
Kambies, Keith; Walsh, Peter
1988-03-01
This paper describes a real-time vision target tracking system developed by Adaptive Automation, Inc. and delivered to NASA's Launch Equipment Test Facility, Kennedy Space Center, Florida. The target tracking system is part of the Robotic Application Development Laboratory (RADL), which was designed to provide NASA with a general-purpose robotic research and development test bed for the integration of robot and sensor systems. One of the first RADL system applications is the closing of a position control loop around a six-axis articulated-arm industrial robot, using a camera and dedicated vision processor as the input sensor, so that the robot can locate and track a moving target. The vision system is inside the loop closure of the robot tracking system; therefore, tight throughput and latency constraints are imposed on the vision system that can only be met with specialized hardware and a concurrent approach to the processing algorithms. State-of-the-art VME-based vision boards capable of processing images at frame rate were used with a real-time, multi-tasking operating system to achieve the performance required. This paper describes the high-speed vision-based tracking task, the system throughput requirements, the use of a dedicated vision hardware architecture, and the implementation design details. Important to the overall philosophy of the complete system was the hierarchical and modular approach applied to all aspects of the system, hardware and software alike, so special emphasis is placed on this topic in the paper.
Effective implementation of health information technologies in U.S. hospitals.
Khatri, Naresh; Gupta, Vishal
2016-01-01
Two issues pertaining to the effective implementation of health information technologies (HITs) in U.S. hospitals are examined. First, which information technology (IT) system is better: a homegrown or an outsourced one? Second, the critical role of in-house IT expertise/capabilities in the effective implementation of HITs is investigated. The data on type of HIT system and IT expertise/capabilities were collected from a national sample of senior executives of U.S. hospitals. The data on quality of patient care were gathered from the Hospital Compare Web site. The quality of patient care was significantly higher in hospitals deploying a homegrown HIT system than in hospitals deploying an outsourced HIT system. Furthermore, the professional competence and compelling vision of the chief information officer were found to be a major driver of another key IT capability of hospitals: the professionalism of IT staff. The positive relationship of professionalism of IT staff with quality of patient care was mediated by proactive employee behavior. A homegrown HIT system achieves better quality of patient care than an outsourced one. The chief information officer's IT vision and the professional expertise and professionalism of IT staff are important IT capabilities in U.S. hospitals.
Implementing an International Consultation on Earth System Research Priorities Using Web 2.0 Tools
NASA Astrophysics Data System (ADS)
Goldfarb, L.; Yang, A.
2009-12-01
Leah Goldfarb, Paul Cutler, Andrew Yang*, Mustapha Mokrane, Jacinta Legg and Deliang Chen. The scientific community has been engaged in developing an international strategy on Earth system research. The initial consultation in this “visioning” process focused on gathering suggestions for Earth system research priorities that are interdisciplinary and address the most pressing societal issues. This was implemented through a website that utilized Web 2.0 capabilities. The website (http://www.icsu-visioning.org/) collected input from 15 July to 1 September 2009. This consultation was the first in which the international scientific community was asked to help shape the future of a research theme. The site attracted over 7000 visitors from 133 countries, more than 1000 of whom registered and took advantage of the site’s functionality to contribute research questions (~300 questions), comment on posts, and/or vote on questions. To facilitate analysis of results, the site captured a small set of voluntary information about each contributor and their contribution. A group of ~50 international experts was invited to analyze the inputs at a “Visioning Earth System Research” meeting held in September 2009. The outcome of this meeting, a prioritized list of research questions to be investigated over the next decade, was then posted on the visioning website for additional comment from the community through an online survey tool. In general, many lessons were learned in the development and implementation of this website, both in terms of the opportunities offered by Web 2.0 capabilities and the application of these capabilities. It is hoped that this process may serve as a model for other scientific communities. The International Council for Science (ICSU), in cooperation with the International Social Science Council (ISSC), is responsible for organizing this Earth system visioning process.
Computer vision for automatic inspection of agricultural produce
NASA Astrophysics Data System (ADS)
Molto, Enrique; Blasco, Jose; Benlloch, Jose V.
1999-01-01
Fruit and vegetables undergo various manipulations from the field to the final consumer, basically oriented towards cleaning and selecting the product into homogeneous categories. For this reason, several research projects aimed at fast, adequate produce sorting and quality control are currently under development around the world, and manual and semi-automatic commercial systems capable of reasonably performing these tasks can be found. However, in many cases their accuracy is incompatible with current European market demands, which are constantly increasing. IVIA, the Valencian Research Institute of Agriculture, located in Spain, has been involved in several European projects related to machine vision for real-time inspection of various agricultural produce. This paper focuses on work related to two products that have different requirements: fruit and olives. In the case of fruit, the Institute has developed a vision system capable of providing an assessment of the external quality of single fruit to a robot that also receives information from other sensors. The system uses four different views of each fruit and has been tested on peaches, apples and citrus. Processing time for each image is under 500 ms using a conventional PC. The system provides information about primary and secondary color, blemishes and their extension, and stem presence and position, which allows further automatic orientation of the fruit in the final box using a robotic manipulator. The work carried out on olives was devoted to fast sorting of table olives. A prototype has been developed to demonstrate the feasibility of a machine vision system capable of automatically sorting 2500 kg/h of olives using low-cost conventional hardware.
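As an illustration of one building block such a system needs, here is a hedged sketch of color-based blemish segmentation with OpenCV; the HSV thresholds, kernel size, and file name are assumptions, not IVIA's actual parameters.

```python
# Hypothetical blemish-segmentation step for one of the four fruit views;
# HSV thresholds are illustrative, not the values used by the IVIA system.
import cv2
import numpy as np

fruit = cv2.imread("peach_view1.png")
hsv = cv2.cvtColor(fruit, cv2.COLOR_BGR2HSV)

# Mask of the fruit's primary color (assumed orange-ish range), closed into
# a solid silhouette so dark blemishes inside it are not lost.
fruit_mask = cv2.inRange(hsv, (5, 80, 80), (25, 255, 255))
silhouette = cv2.morphologyEx(fruit_mask, cv2.MORPH_CLOSE,
                              np.ones((15, 15), np.uint8))

# Dark, low-value pixels inside the silhouette are blemish candidates.
dark = cv2.inRange(hsv, (0, 0, 0), (180, 255, 70))
blemish = cv2.bitwise_and(dark, silhouette)

extent = cv2.countNonZero(blemish) / max(cv2.countNonZero(silhouette), 1)
print(f"blemish extent in this view: {100 * extent:.1f}%")
```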
NASA Technical Reports Server (NTRS)
Crouch, Roger
2004-01-01
Viewgraphs on NASA's transition to its vision for space exploration are presented. The topics include: 1) Strategic Directives Guiding the Human Support Technology Program; 2) Progressive Capabilities; 3) A Journey to Inspire, Innovate, and Discover; 4) Risk Mitigation Status Technology Readiness Level (TRL) and Countermeasures Readiness Level (CRL); 5) Biological And Physical Research Enterprise Aligning With The Vision For U.S. Space Exploration; 6) Critical Path Roadmap Reference Missions; 7) Rating Risks; 8) Current Critical Path Roadmap (Draft) Rating Risks: Human Health; 9) Current Critical Path Roadmap (Draft) Rating Risks: System Performance/Efficiency; 10) Biological And Physical Research Enterprise Efforts to Align With Vision For U.S. Space Exploration; 11) Aligning with the Vision: Exploration Research Areas of Emphasis; 12) Code U Efforts To Align With The Vision For U.S. Space Exploration; 13) Types of Critical Path Roadmap Risks; and 14) ISS Human Support Systems Research, Development, and Demonstration. A summary discussing the vision for U.S. space exploration is also provided.
Damage Detection and Verification System (DDVS) for In-Situ Health Monitoring
NASA Technical Reports Server (NTRS)
Williams, Martha K.; Lewis, Mark; Szafran, J.; Shelton, C.; Ludwig, L.; Gibson, T.; Lane, J.; Trautwein, T.
2015-01-01
Project presentation for the Game Changing Program Smart Book release. The Damage Detection and Verification System (DDVS) expands the Flat Surface Damage Detection System (FSDDS) sensory panels' damage detection capabilities and adds an autonomous inspection capability utilizing cameras and dynamic computer vision algorithms to verify system health. The objectives of this formulation task are to establish the concept of operations, formulate the system requirements for a potential ISS flight experiment, and develop a preliminary design of an autonomous inspection capability system that will be demonstrated as a proof-of-concept ground-based damage detection and inspection system.
Microscope self-calibration based on micro laser line imaging and soft computing algorithms
NASA Astrophysics Data System (ADS)
Apolinar Muñoz Rodríguez, J.
2018-06-01
A technique to perform microscope self-calibration via micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.
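The Bezier approximation idea can be sketched as fitting Bernstein-basis control values that map a normalized laser-line position to depth; in the sketch below, ordinary least squares stands in for the paper's genetic algorithm, and the calibration samples are hypothetical.

```python
# Bernstein-basis (Bezier) approximation of a calibration map from normalized
# laser-line position u in [0, 1] to depth; data and degree are illustrative.
import numpy as np
from math import comb

def bernstein_matrix(u, n):
    """Rows: samples of u; columns: Bernstein basis values B_{i,n}(u)."""
    u = np.asarray(u, dtype=float)[:, None]
    i = np.arange(n + 1)[None, :]
    coeffs = np.array([comb(n, k) for k in range(n + 1)])[None, :]
    return coeffs * u**i * (1.0 - u)**(n - i)

# Hypothetical calibration samples: laser-line position -> known depth (µm).
u_cal = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
z_cal = np.array([0.0, 14.0, 31.0, 52.0, 80.0])

B = bernstein_matrix(u_cal, n=3)
ctrl, *_ = np.linalg.lstsq(B, z_cal, rcond=None)  # fitted control values

# Evaluate the depth at a new laser-line position.
z_new = bernstein_matrix(np.array([0.6]), n=3) @ ctrl
print(f"depth ≈ {z_new[0]:.1f} µm")
```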
Machine Learning, deep learning and optimization in computer vision
NASA Astrophysics Data System (ADS)
Canu, Stéphane
2017-03-01
As noted at the Large Scale Computer Vision Systems NIPS workshop, computer vision is a mature field with a long tradition of research, but recent advances in machine learning, deep learning, representation learning and optimization have provided models with new capabilities to better understand visual content. The presentation goes through these new developments in machine learning, covering basic motivations, ideas, models and optimization in deep learning for computer vision, and identifying challenges and opportunities. It focuses on issues related to large-scale learning, that is: high-dimensional features, a large variety of visual classes, and a large number of examples.
Understanding of and applications for robot vision guidance at KSC
NASA Technical Reports Server (NTRS)
Shawaga, Lawrence M.
1988-01-01
The primary thrust of robotics at KSC is the servicing of Space Shuttle remote umbilical docking functions. For this to occur, robots performing servicing operations must be capable of tracking a swaying Orbiter in six degrees of freedom (6-DOF). Currently, in NASA KSC's Robotic Applications Development Laboratory (RADL), an ASEA IRB-90 industrial robot is being equipped with a real-time computer vision (hardware and software) system to allow it to track a simulated Orbiter interface (target) in 6-DOF. The real-time computer vision system effectively becomes the eyes of the lab robot, guiding it through a closed-loop visual feedback system to move with the simulated Orbiter interface. This paper describes this vision guidance system and how it will be applied to remote umbilical servicing at KSC. In addition, other current and future applications are addressed.
A color-coded vision scheme for robotics
NASA Technical Reports Server (NTRS)
Johnson, Kelley Tina
1991-01-01
Most vision systems for robotic applications rely entirely on the extraction of information from gray-level images. Humans, however, regularly depend on color to discriminate between objects. Therefore, the inclusion of color in a robot vision system seems a natural extension of the existing gray-level capabilities. A method for robot object recognition using a color-coding classification scheme is discussed. The scheme is based on an algebraic system in which a two-dimensional color image is represented as a polynomial of two variables. The system is then used to find the color contour of objects. In a controlled environment, such as that of the in-orbit space station, a particular class of objects can thus be quickly recognized by its color.
Remote-controlled vision-guided mobile robot system
NASA Astrophysics Data System (ADS)
Ande, Raymond; Samu, Tayib; Hall, Ernest L.
1997-09-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle; the vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high-speed tracking device, which communicates the X, Y coordinates of blobs along the lane markers to the computer. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track, with positive results showing that at five mph the vehicle can follow a line while avoiding obstacles.
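A minimal sketch of the guidance idea described above: convert the tracker's blob X coordinates along the lane marker into a proportional steering command. The gain, image width, and motor interface are assumptions for illustration, not the Bearcat's actual control law.

```python
# Proportional line-following sketch: steer toward the lane-marker centroid.
IMAGE_WIDTH = 640  # assumed camera resolution (pixels)
KP = 0.01          # assumed steering gain (rad per pixel of lateral error)

def steering_command(blob_xs):
    """blob_xs: X pixel coordinates of lane-marker blobs from the tracker."""
    if not blob_xs:
        return 0.0  # no marker visible: hold course (a real system would stop)
    lane_center = sum(blob_xs) / len(blob_xs)
    error = lane_center - IMAGE_WIDTH / 2  # pixels off the image center
    return -KP * error                     # steer back toward the line

print(steering_command([300, 310, 320]))  # marker left of center -> steer left
```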
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Chen, Alexander Y. K.
1991-01-01
Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in the development of advanced manufacturing technology. A cost-effective approach for achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve product quality, increase the reliability of the manufacturing process, and augment production procedures for optimizing the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demand and has the potential to address both complex jobs as well as highly labor-intensive tasks. The ARS prototype employs the decomposed optimization technique in spatial planning. This technique is implemented within the framework of the sensor-actuator network to establish the general-purpose geometric reasoning system. The development computer system is a multiple-microcomputer network system, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystem provides real-time, vision-based image processing capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of the local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning. The ARS currently has 18 degrees of freedom, made up of two articulated arms, one movable robot head, two charge-coupled device (CCD) cameras for producing stereoscopic views, an articulated cylindrical-type lower body, and an optional mobile base. A functional prototype is demonstrated.
Latency in Visionic Systems: Test Methods and Requirements
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.
2005-01-01
A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint, and to provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays or latencies, including those of the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated depending upon the piloting task, the role in which the visionics device is used in this task, and the characteristics of the visionics cockpit display device, including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.
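One common way to measure such end-to-end latency is to timestamp a physical stimulus (e.g., an LED flash in front of the sensor) and the corresponding change on the display (e.g., via a photodiode on the screen), then average the differences. A schematic sketch with illustrative numbers, not a method prescribed by the paper:

```python
# Latency bookkeeping sketch: stimulus event timestamps vs. the matching
# display-response timestamps. All values are illustrative.
stimulus_ts = [0.000, 1.000, 2.000, 3.000]  # seconds, LED fired at the sensor
display_ts = [0.055, 1.048, 2.061, 3.052]   # seconds, photodiode saw the change

latencies_ms = [(d - s) * 1000.0 for s, d in zip(stimulus_ts, display_ts)]
mean_latency = sum(latencies_ms) / len(latencies_ms)

# Compare against the task's requirement, e.g. the 20 ms Head-Worn case.
print(f"mean latency: {mean_latency:.1f} ms")
```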
FELIN: tailored optronics and systems solutions for dismounted combat
NASA Astrophysics Data System (ADS)
Milcent, A. M.
2009-05-01
The FELIN French modernization program for dismounted combat provides the armed forces with info-centric systems which dramatically enhance the performance of the soldier and the platoon. Sagem now offers a portfolio of equipment providing C4I, digital data and voice communication, and enhanced vision for day and night operations through compact, high-performance electro-optics. The FELIN system provides the infantryman with a high-tech, integrated and modular system which significantly increases detection, recognition and identification capabilities, as well as situation awareness and information sharing, in any dismounted close combat situation. Among the key technologies used in this system, infrared and intensified vision provide a significant improvement in capability, observation performance and protection of ground soldiers. This paper presents the developed equipment in detail, with an emphasis on lessons learned from technical and operational feedback from dismounted close combat field tests.
Virtual environment assessment for laser-based vision surface profiling
NASA Astrophysics Data System (ADS)
ElSoussi, Adnane; Al Alami, Abed ElRahman; Abu-Nabah, Bassam A.
2015-03-01
Oil and gas businesses have been raising demand on original equipment manufacturers (OEMs) to implement a reliable metrology method for assessing surface profiles of welds before and after grinding. This mandates a departure from the commonly used surface measurement gauges, which are not only operator dependent but also limited to discrete measurements along the weld. Due to their potential accuracy and speed, laser-based vision surface profiling systems have been progressively gaining ground in manufacturing quality control. This effort presents a virtual environment that lends itself to developing and evaluating existing laser vision sensor (LVS) calibration and measurement techniques. A combination of two known calibration techniques is implemented to deliver a calibrated LVS system. System calibration is implemented virtually and experimentally to scan simulated and 3D-printed features of known profiles, respectively. Scanned data are inverted and compared with the input profiles to validate the virtual environment's capability for LVS surface profiling and to preliminarily assess the measurement technique for weld profiling applications. Moreover, this effort brings 3D scanning capability a step closer towards robust quality control applications in a manufacturing environment.
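The validation step, comparing an inverted scan against the known input profile, reduces to resampling one profile onto the other's grid and computing a deviation metric. A minimal sketch with synthetic data standing in for the simulated and scanned profiles:

```python
# Profile-comparison sketch: resample the scanned weld profile onto the
# reference x-grid and report the RMS deviation. Data are illustrative.
import numpy as np

# Reference (input) profile and a noisy scan returned by the virtual LVS.
x_ref = np.linspace(0.0, 10.0, 11)
z_ref = np.sin(x_ref / 3.0)
x_scan = np.linspace(0.0, 10.0, 37)
z_scan = np.sin(x_scan / 3.0) + np.random.normal(0.0, 0.01, x_scan.size)

z_resampled = np.interp(x_ref, x_scan, z_scan)  # align the two profiles
rms_error = np.sqrt(np.mean((z_resampled - z_ref) ** 2))
print(f"RMS profile deviation: {rms_error:.4f} (same units as z)")
```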
NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization
NASA Technical Reports Server (NTRS)
Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.
Automatic rule generation for high-level vision
NASA Technical Reports Server (NTRS)
Rhee, Frank Chung-Hoon; Krishnapuram, Raghu
1992-01-01
Many high-level vision systems use rule-based approaches to solving problems such as autonomous navigation and image understanding. The rules are usually elaborated by experts. However, this procedure may be rather tedious. In this paper, we propose a method to generate such rules automatically from training data. The proposed method is also capable of filtering out irrelevant features and criteria from the rules.
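As a concrete (if simplified) analogue of generating rules from training data, a shallow decision tree can be read off directly as if-then rules, with pruning playing the role of the irrelevant-feature filtering the paper describes; the authors' own method differs, so this sklearn sketch with hypothetical features is only illustrative.

```python
# Rule generation from training data via a shallow decision tree; feature
# names, data, and labels are hypothetical stand-ins for a navigation task.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.7], [0.1, 0.9]]  # [brightness, texture]
y = ["road", "road", "vegetation", "vegetation"]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# A depth-limited tree prints directly as nested if-then rules; limiting the
# depth discards features that do not help discriminate the classes.
print(export_text(tree, feature_names=["brightness", "texture"]))
```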
2004-05-13
KENNEDY SPACE CENTER, FLA. -- Adm. Craig E. Steidle (center), NASA’s associate administrator, Office of Exploration Systems, tours the Orbiter Processing Facility on a visit to KSC. At right (hands up) is Conrad Nagel, chief of the Shuttle Project Office. They are standing under the orbiter Discovery. The Office of Exploration Systems was established to set priorities and direct the identification, development and validation of exploration systems and related technologies to support the future space vision for America. Steidle’s visit included a tour of KSC to review the facilities and capabilities to be used to support the vision.
Distant touch hydrodynamic imaging with an artificial lateral line.
Yang, Yingchen; Chen, Jack; Engel, Jonathan; Pandya, Saunvit; Chen, Nannan; Tucker, Craig; Coombs, Sheryl; Jones, Douglas L; Liu, Chang
2006-12-12
Nearly all underwater vehicles and surface ships today use sonar and vision for imaging and navigation. However, sonar and vision systems face various limitations, e.g., sonar blind zones, dark or murky environments, etc. Evolved over millions of years, fish use the lateral line, a distributed linear array of flow sensing organs, for underwater hydrodynamic imaging and information extraction. We demonstrate here a proof-of-concept artificial lateral line system. It enables a distant touch hydrodynamic imaging capability to critically augment sonar and vision systems. We show that the artificial lateral line can successfully perform dipole source localization and hydrodynamic wake detection. The development of the artificial lateral line is aimed at fundamentally enhancing human ability to detect, navigate, and survive in the underwater environment.
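A strongly simplified sketch of the dipole localization task described above: assume the measured amplitude falls off roughly as 1/r³ with distance (the far-field dipole trend) and grid-search for the source position that best explains the array readings. The geometry, data, and decay model are illustrative assumptions, not the authors' algorithm.

```python
# Grid-search dipole localization with a linear sensor array, assuming a
# 1/r**3 amplitude decay. Synthetic, noiseless data for illustration.
import numpy as np

sensor_x = np.linspace(0.0, 0.15, 16)  # 16 flow sensors along 15 cm, at y = 0
true_src = (0.06, 0.04)                # hidden source position (m)

r_true = np.hypot(sensor_x - true_src[0], true_src[1])
readings = 1.0 / r_true**3             # synthetic amplitudes at each sensor

best, best_err = None, np.inf
for sx in np.linspace(0.0, 0.15, 61):
    for sy in np.linspace(0.01, 0.10, 46):      # sy > 0 avoids r = 0
        r = np.hypot(sensor_x - sx, sy)
        model = 1.0 / r**3
        scale = readings @ model / (model @ model)  # best-fit amplitude scale
        err = np.sum((readings - scale * model) ** 2)
        if err < best_err:
            best, best_err = (sx, sy), err

print(f"estimated source at {best}, true at {true_src}")
```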
Automation and robotics for Space Station in the twenty-first century
NASA Technical Reports Server (NTRS)
Willshire, K. F.; Pivirotto, D. L.
1986-01-01
Space Station telerobotics will evolve beyond the initial capability into a smarter and more capable system as we enter the twenty-first century. Current technology programs including several proposed ground and flight experiments to enable development of this system are described. Advancements in the areas of machine vision, smart sensors, advanced control architecture, manipulator joint design, end effector design, and artificial intelligence will provide increasingly more autonomous telerobotic systems.
Aspects of Synthetic Vision Display Systems and the Best Practices of the NASA's SVS Project
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Kramer, Lynda J.; Jones, Denise R.; Young, Steven D.; Arthur, Jarvis J.; Prinzel, Lawrence J.; Glaab, Louis J.; Harrah, Steven D.; Parrish, Russell V.
2008-01-01
NASA's Synthetic Vision Systems (SVS) Project conducted research aimed at eliminating visibility-induced errors and low visibility conditions as causal factors in civil aircraft accidents while enabling the operational benefits of clear day flight operations regardless of actual outside visibility. SVS takes advantage of many enabling technologies to achieve this capability including, for example, the Global Positioning System (GPS), data links, radar, imaging sensors, geospatial databases, advanced display media and three-dimensional video graphics processors. Integration of these technologies to achieve the SVS concept provides pilots with high-integrity information that improves situational awareness with respect to terrain, obstacles, traffic, and flight path. This paper attempts to emphasize the system aspects of SVS - true systems, rather than just terrain on a flight display - and to document from an historical viewpoint many of the best practices that evolved during the SVS Project from the perspective of some of the NASA researchers most heavily involved in its execution. The Integrated SVS Concepts are envisagements of what production-grade Synthetic Vision systems might, or perhaps should, be in order to provide the desired functional capabilities that eliminate low visibility as a causal factor to accidents and enable clear-day operational benefits regardless of visibility conditions.
A Low-Power High-Speed Smart Sensor Design for Space Exploration Missions
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi
1997-01-01
A low-power high-speed smart sensor system based on a large-format active pixel sensor (APS) integrated with a programmable neural processor for space exploration missions is presented. The concept of building an advanced smart sensing system is demonstrated by a system-level microchip design that is composed of an APS sensor, a programmable neural processor, and an embedded microprocessor in an SOI CMOS technology. This ultra-fast smart sensor system-on-a-chip design mimics what is inherent in biological vision systems. Moreover, it is programmable and capable of performing ultra-fast machine vision processing at all levels, such as image acquisition, image fusion, image analysis, scene interpretation, and control functions. The system provides about one tera-operation-per-second computing power, a two-order-of-magnitude increase over that of state-of-the-art microcomputers. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation.
Small Aircraft Transportation System Concept and Technologies
NASA Technical Reports Server (NTRS)
Holmes, Bruce J.; Durham, Michael H.; Tarry, Scott E.
2005-01-01
This paper summarizes both the vision and the early public-private collaborative research for the Small Aircraft Transportation System (SATS). The paper outlines an operational definition of SATS, describes how SATS conceptually differs from current air transportation capabilities, introduces four SATS operating capabilities, and explains the relation between the SATS operating capabilities and the potential for expanded air mobility. The SATS technology roadmap encompasses on-demand, widely distributed, point-to-point air mobility, through hired-pilot modes in the nearer-term, and through self-operated user modes in the farther-term. The nearer-term concept is based on aircraft and airspace technologies being developed to make the use of smaller, more widely distributed community reliever and general aviation airports and their runways more useful in more weather conditions, in commercial hired-pilot service modes. The farther-term vision is based on technical concepts that could be developed to simplify or automate many of the operational functions in the aircraft and the airspace for meeting future public transportation needs, in personally operated modes. NASA technology strategies form a roadmap between the nearer-term concept and the farther-term vision. This paper outlines a roadmap for scalable, on-demand, distributed air mobility technologies for vehicle and airspace systems. The audiences for the paper include General Aviation manufacturers, small aircraft transportation service providers, the flight training industry, airport and transportation authorities at the Federal, state and local levels, and organizations involved in planning for future National Airspace System advancements.
Data acquisition and analysis of range-finding systems for space construction
NASA Technical Reports Server (NTRS)
Shen, C. N.
1981-01-01
For future space missions, completely autonomous robotic machines will be required to free astronauts from routine chores such as equipment maintenance and servicing of faulty systems, and to extend human capabilities in hazardous environments full of cosmic and other harmful radiation. In places with high radiation and uncontrollable ambient illumination, TV-camera-based vision systems cannot work effectively. However, a vision system utilizing range information measured directly with a time-of-flight laser rangefinder can operate successfully in these environments. Such a system is independent of illumination conditions, and the interfering effects of intense radiation of all kinds are eliminated by the tuned input of the laser instrument. By processing the range data according to decision, stochastic-estimation, and heuristic schemes, the laser-based vision system can recognize known objects and thus provide sufficient information to the robot's control system, which can then develop strategies for various objectives.
Comparison of a multispectral vision system and a colorimeter for the assessment of meat color.
Trinderup, Camilla H; Dahl, Anders; Jensen, Kirsten; Carstensen, Jens Michael; Conradsen, Knut
2015-04-01
The color assessment ability of a multispectral vision system is investigated in a comparison study with color measurements from a traditional colorimeter. The experiment involves fresh and processed meat samples. Meat is a complex, heterogeneous material with varying scattering and reflectance properties, so several factors can influence the instrumental assessment of meat color. In order to assess whether two methods are equivalent, the variation due to these factors must be taken into account. A statistical analysis was conducted and showed that on a calibration sheet the two instruments are equally capable of measuring color. Moreover, the vision system provides a more color-rich assessment of fresh meat samples with a glossier surface than the colorimeter does. Careful studies of the different sources of variation enable an assessment of the order of magnitude of the variability between methods, accounting for the other sources of variation, and lead to the conclusion that color assessment using a multispectral vision system is superior to traditional colorimeter assessments. Copyright © 2014 Elsevier Ltd. All rights reserved.
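For concreteness, a minimal sketch of the kind of instrument comparison involved: per-sample CIELAB color differences (ΔE*ab) between paired readings, followed by a paired test on one channel. The Lab readings here are simulated; the study's actual statistical model is richer.

# Sketch: comparing two color-measurement instruments via CIELAB color
# difference and a paired test. In the study, each sample would be measured
# by both the vision system and the colorimeter.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
lab_colorimeter = rng.normal([45, 18, 8], [3, 2, 1], size=(30, 3))
lab_vision = lab_colorimeter + rng.normal(0.3, 0.5, size=(30, 3))  # slight offset

delta_e = np.linalg.norm(lab_vision - lab_colorimeter, axis=1)  # per-sample dE*ab
print(f"mean dE*ab = {delta_e.mean():.2f}")

# Paired comparison of lightness L* between the two instruments.
t, p = ttest_rel(lab_vision[:, 0], lab_colorimeter[:, 0])
print(f"L* paired t-test: t={t:.2f}, p={p:.3f}")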
NASA Astrophysics Data System (ADS)
Durfee, David; Johnson, Walter; McLeod, Scott
2007-04-01
Uncooled microbolometer sensors used in modern infrared night vision systems such as driver vehicle enhancement (DVE) or thermal weapons sights (TWS) require a mechanical shutter. Although much consideration is given to the performance requirements of the sensor, supporting electronic components, and imaging optics, the shutter technology required to survive in combat is typically the last consideration in the system design. Electro-mechanical shutters used in military IR applications must be reliable in temperature extremes from a low of -40°C to a high of +70°C. They must be extremely lightweight while having the ability to withstand the high vibration and shock forces associated with systems mounted in military combat vehicles, weapon telescopic sights, or downed unmanned aerial vehicles (UAVs). Electro-mechanical shutters must have minimal power consumption and contain circuitry integrated into the shutter to manage battery power while simultaneously adapting to changes in electrical component operating parameters caused by extreme temperature variations. Producing a miniature electro-mechanical shutter with these capabilities that can fit into a rifle scope requires innovations in mechanical design, materials science, and electronics. This paper describes a new, miniature electro-mechanical shutter technology with integrated power-management electronics designed for extreme-service infrared night vision systems.
Robust algebraic image enhancement for intelligent control systems
NASA Technical Reports Server (NTRS)
Lerner, Bao-Ting; Morrelli, Michael
1993-01-01
Robust vision capability for intelligent control systems has been an elusive goal in image processing. The computationally intensive techniques necessary for conventional image processing make real-time applications, such as object tracking and collision avoidance, difficult. In order to endow an intelligent control system with the needed vision robustness, an adequate image enhancement subsystem capable of compensating for the wide variety of real-world degradations must exist between the image-capturing and the object-recognition subsystems. This enhancement stage must be adaptive and must operate with consistency in the presence of both statistical and shape-based noise. To deal with this problem, we have developed an innovative algebraic approach which provides a sound mathematical framework for image representation and manipulation. Our image model provides a natural platform from which to pursue dynamic scene analysis, and its incorporation into a vision system would serve as the front end to an intelligent control system. We have developed a unique polynomial representation of gray-level imagery and applied this representation to develop polynomial operators on complex gray-level scenes. This approach is highly advantageous since polynomials can be manipulated very easily and are readily understood, thus providing a very convenient environment for image processing. Our model presents a highly structured and compact algebraic representation of gray-level images which can be viewed as fuzzy sets.
LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval
NASA Astrophysics Data System (ADS)
Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan
2013-01-01
As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.
A vision system planner for increasing the autonomy of the Extravehicular Activity Helper/Retriever
NASA Technical Reports Server (NTRS)
Magee, Michael
1993-01-01
The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools and equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever such as fetching a tool that is required for servicing or maintenance operations. This paper documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP then plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios are discussed. Specific topics to be addressed will include object search strategies, repositioning of the EVAR to improve the quality of information obtained from the sensors, and complementary usage of the sensors and redundant capabilities.
Computer Vision System For Locating And Identifying Defects In Hardwood Lumber
NASA Astrophysics Data System (ADS)
Conners, Richard W.; Ng, Chong T.; Cho, Tai-Hoon; McMillin, Charles W.
1989-03-01
This paper describes research aimed at developing an automatic cutup system for use in the rough mills of the hardwood furniture and fixture industry. In particular, this paper describes attempts to create the vision system that will power this automatic cutup system. There are a number of factors that make the development of such a vision system a challenge. First, there is the innate variability of the wood material itself. No two species look exactly the same; indeed, there can be significant differences in visual appearance among species. Yet a truly robust vision system must be able to handle a variety of such species, preferably with no operator intervention required when changing from one species to another. Second, there is a good deal of variability in the definition of what constitutes a removable defect. The hardwood furniture and fixture industry is diverse in the nature of the products that it makes. The products range from hardwood flooring to fancy hardwood furniture, from simple millwork to kitchen cabinets. Thus, depending on the manufacturer, the product, and the quality of the product, the nature of what constitutes a removable defect can and does vary. The vision system must be such that it can be tailored to meet each of these unique needs, preferably without any additional program modifications. This paper will describe the vision system that has been developed, assess the current system capabilities, and discuss directions for future research. It will be argued that artificial intelligence methods provide a natural mechanism for attacking this computer vision application.
NASA Technical Reports Server (NTRS)
Schoeberl, Mark; Rychekewkitsch, Michael; Andrucyk, Dennis; McConaughy, Gail; Meeson, Blanche; Hildebrand, Peter; Einaudi, Franco (Technical Monitor)
2000-01-01
NASA's Earth Science Enterprise's long range vision is to enable the development of a national proactive environmental predictive capability through targeted scientific research and technological innovation. Proactive environmental prediction means the prediction of environmental events and their secondary consequences. These consequences range from disasters and disease outbreak to improved food production and reduced transportation, energy and insurance costs. The economic advantage of this predictive capability will greatly outweigh the cost of development. Developing this predictive capability requires a greatly improved understanding of the earth system and the interaction of the various components of that system. It also requires a change in our approach to gathering data about the earth and a change in our current methodology in processing that data including its delivery to the customers. And, most importantly, it requires a renewed partnership between NASA and its sister agencies. We identify six application themes that summarize the potential of proactive environmental prediction. We also identify four technology themes that articulate our approach to implementing proactive environmental prediction.
New Horizons through Systems Design.
ERIC Educational Resources Information Center
Banathy, Bela H.
1991-01-01
Continuing use of outdated design is the main source of the crisis in education. The existing system should be "trans-formed" rather than "re-formed." Transformation requires the development of organizational capacity and collective capability to engage in systems design with a broad vision of what should be. (Author/JOW)
Recent progress in millimeter-wave sensor system capabilities for enhanced (synthetic) vision
NASA Astrophysics Data System (ADS)
Hellemann, Karlheinz; Zachai, Reinhard
1999-07-01
Weather- and daylight-independent operation of modern traffic systems is strongly required for optimized and economic availability. Helicopters, small aircraft, and military transport aircraft, which frequently operate close to the ground, have a particular need for effective and cost-effective Enhanced Vision sensors. Technical progress in sensor technology and processing speed today makes new concepts realizable. Against this background, the paper reports on the improvements under development within the HiVision program at DaimlerChrysler Aerospace. A sensor demonstrator based on FMCW radar technology, with a high information update rate and operating in the mm-wave band, has been upgraded to improve performance and fitted for flight on an experimental basis. The results achieved so far demonstrate the capability to produce weather-independent enhanced vision. In addition, the demonstrator has been tested on board a high-speed ferry in the Baltic Sea, because fast vessels have a similar need for weather-independent operation and anti-collision measures. In the future, one sensor type may serve both 'worlds' and help make traffic easier and safer. The described demonstrator fills the technology gap between optical sensors (infrared) and standard pulse radars with its specific features, such as high-speed scanning and weather penetration, with the additional benefit of cost-effectiveness.
2004-05-13
KENNEDY SPACE CENTER, FLA. -- Adm. Craig E. Steidle (center), NASA’s associate administrator, Office of Exploration Systems, tours the Orbiter Processing Facility on a visit to KSC. At left is Conrad Nagel, chief of the Shuttle Project Office. They are standing under the left wing and wheel well of the orbiter Discovery. The Office of Exploration Systems was established to set priorities and direct the identification, development and validation of exploration systems and related technologies to support the future space vision for America. Steidle’s visit included a tour of KSC to review the facilities and capabilities to be used to support the vision.
2004-05-13
KENNEDY SPACE CENTER, FLA. -- Adm. Craig E. Steidle (center), NASA’s associate administrator, Office of Exploration Systems, listens to Conrad Nagel, chief of the Shuttle Project Office (right), during a tour of the Orbiter Processing Facility on a visit to KSC. They are standing under the orbiter Discovery. The Office of Exploration Systems was established to set priorities and direct the identification, development and validation of exploration systems and related technologies to support the future space vision for America. Steidle’s visit included a tour of KSC to review the facilities and capabilities to be used to support the vision.
A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.
Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G
2015-02-01
Surgical quality in phonomicrosurgery can be improved by combining open-loop laser control (e.g., high-speed scanning capabilities) with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning was accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. Evaluation of the new system was performed using CO2 laser cutting trials on artificial targets and ex-vivo tissue. The system produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction cluttered the camera view. In realistic test scenarios, trajectory-following RMS errors were reduced by almost 80% with respect to open-loop system performance, reaching mean error values around 30 μm and maximum observed errors on the order of 60 μm. The new vision-based laser microsurgical control system was shown to be effective and promising, with significant potential to improve the safety and quality of laser microsurgeries.
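The reported figures are straightforward to reproduce as metrics. A minimal sketch, with synthetic reference and executed paths standing in for the measured trajectories:

# Sketch: scoring a laser-scanning trial as the paper reports it, via the
# RMS and maximum deviation between executed and reference trajectories.
import numpy as np

t = np.linspace(0, 2 * np.pi, 500)
reference = np.stack([np.cos(t), np.sin(t)], axis=1) * 1e-3          # 1 mm circle
executed = reference + np.random.normal(0, 30e-6, reference.shape)   # ~30 um scatter

errors = np.linalg.norm(executed - reference, axis=1)
rms = np.sqrt(np.mean(errors ** 2))
print(f"RMS error: {rms * 1e6:.1f} um, max error: {errors.max() * 1e6:.1f} um")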
Optomechatronic System For Automated Intra Cytoplasmic Sperm Injection
NASA Astrophysics Data System (ADS)
Shulev, Assen; Tiankov, Tihomir; Ignatova, Detelina; Kostadinov, Kostadin; Roussev, Ilia; Trifonov, Dimitar; Penchev, Valentin
2015-12-01
This paper presents a complex optomechatronic system for In-Vitro Fertilization (IVF), offering almost complete automation of the Intra Cytoplasmic Sperm Injection (ICSI) procedure. The compound parts and sub-systems, as well as some of the computer vision algorithms, are described below. System capabilities for ICSI have been demonstrated on infertile oocyte cells.
Machine Vision Applied to Navigation of Confined Spaces
NASA Technical Reports Server (NTRS)
Briscoe, Jeri M.; Broderick, David J.; Howard, Ricky; Corder, Eric L.
2004-01-01
The reliability of space-related assets has been emphasized after the second loss of a Space Shuttle. The intricate nature of the hardware being inspected often requires complete disassembly to perform a thorough inspection, which can be difficult as well as costly. Furthermore, it is imperative that the hardware under inspection not be altered in any manner other than that which is intended. In these cases the use of machine vision can allow for inspection with greater frequency using less intrusive methods. Such systems can provide feedback to guide not only manually controlled instrumentation but autonomous robotic platforms as well. This paper details a method using machine vision to provide such sensing capabilities in a compact package. A single camera is used in conjunction with a projected reference grid to ascertain precise distance measurements. The design of the sensor focuses on the use of conventional components in an unconventional manner, with the goal of providing a solution for systems that do not require or cannot accommodate more complex vision systems.
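The single-camera ranging principle reduces to structured-light triangulation: depth is the focal length times the camera-projector baseline divided by the observed displacement of a grid feature. A minimal sketch, with made-up calibration numbers:

# Sketch: single-camera ranging with a projected reference grid, reduced to
# structured-light triangulation z = f*b/d. Focal length, baseline, and the
# measured grid-line displacements are illustrative values.
import numpy as np

f_px = 1400.0       # focal length in pixels (hypothetical calibration)
baseline_m = 0.05   # projector-to-camera baseline (m)

def depth_from_disparity(disparity_px):
    """Depth for each grid intersection from its observed pixel displacement."""
    return f_px * baseline_m / disparity_px

disparities = np.array([70.0, 35.0, 23.3])   # observed grid shifts (px)
print(depth_from_disparity(disparities))      # -> roughly [1.0, 2.0, 3.0] m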
NASA Astrophysics Data System (ADS)
Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod
2015-10-01
In conventional tool-positioning techniques, sensors embedded in the motion stages provide accurate tool-position information. In this paper, a machine-vision-based system and image processing technique for measuring the motion of a lathe tool from two-dimensional sequential images, captured using a charge-coupled device (CCD) camera with a resolution of 250 microns, is described. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of the errors due to the machine vision system, calibration, environmental factors, etc. in lathe tool movement was carried out using two soft computing techniques, namely artificial immune system (AIS) and particle swarm optimization (PSO). The results show a better capability of AIS over PSO.
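A minimal particle swarm optimization sketch in the spirit of the error-optimization step: it fits a hypothetical linear (gain plus offset) correction to simulated vision readouts by minimizing mean squared error. The error model and all constants are assumptions, and the AIS variant is not shown.

# Sketch: minimal PSO correcting a systematic error in vision-based
# tool-position estimates. The linear error model is an assumption.
import numpy as np

rng = np.random.default_rng(1)
true_mm = np.linspace(0, 50, 40)                               # commanded travel
observed_mm = 1.03 * true_mm + 0.8 + rng.normal(0, 0.05, 40)   # vision readout

def cost(params):                                              # params = (gain, offset)
    corrected = params[0] * observed_mm + params[1]
    return np.mean((corrected - true_mm) ** 2)

# Standard PSO update: v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
x = rng.uniform([-2, -5], [2, 5], size=(n, 2))
v = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[pbest_cost.argmin()]

for _ in range(100):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    costs = np.array([cost(p) for p in x])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()]

print("gain, offset =", gbest, "residual MSE =", cost(gbest))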
Kennedy Space Center - "America's Gateway to Space"
NASA Technical Reports Server (NTRS)
Petro, Janet; Chevalier, Mary Ann; Hurst, Chery
2011-01-01
KSC fits into the overall NASA vision and mission by moving forward so that what we do and learn will benefit all here on Earth. In January of last year, KSC revised its Mission and Vision statements to articulate our identity as we align with this new direction the Agency is heading. Currently KSC is endeavoring to form partnerships with industry, Government, and academia, utilizing institutional assets and technical capabilities to support current and future missions. With a goal of safe, low-cost, and readily available access to space, KSC seeks to leverage emerging industries to initiate development of a new space launch system, oversee the development of a multipurpose crew vehicle, and assist with the efficient and timely evolution of commercial crew transportation capabilities. At the same time, KSC is pursuing the modernization of the Center's infrastructure and the creation of a multi-user launch complex with increased onsite processing and integration capabilities.
Development of dog-like retrieving capability in a ground robot
NASA Astrophysics Data System (ADS)
MacKenzie, Douglas C.; Ashok, Rahul; Rehg, James M.; Witus, Gary
2013-01-01
This paper presents the Mobile Intelligence Team's approach to addressing the CANINE outdoor ground robot competition. The competition required developing a robot that provided retrieving capabilities similar to a dog, while operating fully autonomously in unstructured environments. The vision team consisted of Mobile Intelligence, the Georgia Institute of Technology, and Wayne State University. Important computer vision aspects of the project were the ability to quickly learn the distinguishing characteristics of novel objects, to search images for the object as the robot drove a search pattern, to identify people near the robot for safe operations, to correctly identify the object among distractors, and to localize the object for retrieval. The classifier used to identify the objects is discussed, including an analysis of its performance, and an overview of the entire system architecture is presented. A discussion of the robot's performance in the competition demonstrates the system's successes in real-world testing.
A vision of network-centric military communications
NASA Astrophysics Data System (ADS)
Conklin, Ross, Jr.; Burbank, Jack; Nichols, Robert, Jr.
2005-05-01
This paper presents a vision for a future capability-based military communications system that considers user requirements. Historically, the military has developed and fielded many specialized communications systems. While these systems solved immediate communications problems, they were not designed to operate with other systems. As information has become more important to the execution of war, the "stove-pipe" nature of the communications systems deployed by the military is no longer acceptable. Realizing this, the military has begun the transformation of communications to a network-centric communications paradigm. However, the specialized communications systems were developed in response to the widely varying environments related to military communications. These environments, and the necessity for effective communications within them, do not disappear under the network-centric paradigm. In fact, network-centric communications allows one message to cross many of these environments by transiting multiple networks. The military would also like one communications approach that is capable of working well in multiple environments. This paper presents preliminary work on the creation of a framework that allows for a reconfigurable device capable of adapting to the physical and network environments. The framework returns to the Open Systems Interconnection (OSI) architecture with the addition of a standardized intra-layer control interface for control information exchange, a standardized data interface, and a proposed device architecture based on software radio.
2009-04-01
Significant and interrelated problems are hindering the Air Force's development of cyber warfare capabilities. The first is a lack of awareness about... why the AF has chosen to take cyber warfare on as a core capability on par with air and space. The second stems from the lack of a commonly... the cyber capabilities needed in the future? The contributions of this research include a strategic vision for future cyber warfare capabilities that...
Image processing for a tactile/vision substitution system using digital CNN.
Lin, Chien-Nan; Yu, Sung-Nien; Hu, Jin-Cheng
2006-01-01
In view of the parallel processing and easy implementation properties of CNNs, we propose to use a digital CNN as the image processor of a tactile/vision substitution system (TVSS). The digital CNN processor is used to execute the wavelet down-sampling filtering and the half-toning operations, aiming to extract important features from the images. A template combination method is used to embed the two image processing functions into a single CNN processor. The digital CNN processor is implemented as an intellectual property (IP) core and realized on a XILINX VIRTEX II 2000 FPGA board. Experiments are designed to test the capability of the CNN processor in the recognition of characters and human subjects in different environments. The experiments demonstrate impressive results, which prove the proposed digital CNN processor to be a powerful component in the design of efficient tactile/vision substitution systems for visually impaired people.
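The two embedded operations are standard and can be sketched serially in NumPy for reference: a one-level Haar approximation (down-sampling) and Floyd-Steinberg error-diffusion half-toning. The CNN template formulation itself is not reproduced here.

# Sketch: the two operations the CNN processor embeds, written serially for
# clarity: Haar (wavelet) down-sampling and error-diffusion half-toning.
import numpy as np

def haar_downsample(img):
    """Approximation (LL) band: average of each 2x2 block."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    x = img[:h, :w].astype(float)
    return (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4

def halftone(img):
    """Floyd-Steinberg error diffusion to a binary image."""
    x = img.astype(float).copy()
    out = np.zeros_like(x)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = 255.0 if x[i, j] >= 128 else 0.0
            err = x[i, j] - out[i, j]
            if j + 1 < w: x[i, j + 1] += err * 7 / 16
            if i + 1 < h and j > 0: x[i + 1, j - 1] += err * 3 / 16
            if i + 1 < h: x[i + 1, j] += err * 5 / 16
            if i + 1 < h and j + 1 < w: x[i + 1, j + 1] += err * 1 / 16
    return out

img = np.random.rand(64, 64) * 255
print(halftone(haar_downsample(img)).shape)  # (32, 32) binary image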
Fusing Quantitative Requirements Analysis with Model-based Systems Engineering
NASA Technical Reports Server (NTRS)
Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven
2006-01-01
A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.
Integrated long-range UAV/UGV collaborative target tracking
NASA Astrophysics Data System (ADS)
Moseley, Mark B.; Grocholsky, Benjamin P.; Cheung, Carol; Singh, Sanjiv
2009-05-01
Coordinated operations between unmanned air and ground assets allow leveraging of multi-domain sensing and increase opportunities for improving line-of-sight communications. While numerous military missions would benefit from coordinated UAV-UGV operations, foundational capabilities that integrate stove-piped tactical systems and share available sensor data are required and not yet available. iRobot, AeroVironment, and Carnegie Mellon University are working together, partially SBIR-funded through ARDEC's small unit network lethality initiative, to develop collaborative capabilities for surveillance, targeting, and improved communications based on the PackBot UGV and Raven UAV platforms. We integrate newly available technologies into computational, vision, and communications payloads and develop sensing algorithms to support vision-based target tracking. We first simulated, and then deployed on real tactical platforms, an implementation of Decentralized Data Fusion, a novel technique for fusing track estimates from the PackBot and Raven platforms for a moving target in an open environment. In addition, integration of AeroVironment's Digital Data Link onto both air and ground platforms has extended the communications range at which the PackBot can be operated and increased video and data throughput. The system is brought together through a unified Operator Control Unit (OCU) for the PackBot and Raven that provides simultaneous waypoint navigation and traditional teleoperation. We also present several recent capability accomplishments toward PackBot-Raven coordinated operations, including single-OCU display design and operation, early target track results, and Digital Data Link integration efforts, as well as our near-term capability goals.
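A minimal sketch of one Decentralized Data Fusion update in information (inverse-covariance) form, the usual channel-filter formulation: each platform's track is converted to information form, the two are added, and the information they already share is subtracted to avoid double counting. All numbers are illustrative.

# Sketch: channel-filter fusion of two track estimates in information form.
import numpy as np

def to_info(mean, cov):
    Y = np.linalg.inv(cov)
    return Y, Y @ mean

# Track of the same moving target from UGV and UAV (position x, y in meters).
Y_ugv, y_ugv = to_info(np.array([10.2, 4.9]), np.diag([0.5, 0.8]))
Y_uav, y_uav = to_info(np.array([9.8, 5.3]), np.diag([0.9, 0.4]))
Y_common, y_common = to_info(np.array([10.0, 5.0]), np.diag([4.0, 4.0]))  # shared prior

# Add information from both platforms, subtract what they already shared.
Y_fused = Y_ugv + Y_uav - Y_common
y_fused = y_ugv + y_uav - y_common

cov_fused = np.linalg.inv(Y_fused)
print("fused estimate:", cov_fused @ y_fused)
print("fused covariance:\n", cov_fused)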
2007-06-01
management issues he encountered ruled out the Expanion as a viable option for thin-client computing in the Navy. An improvement in thin-client... 44 Requirements to capabilities (2004). Retrieved April 29, 2007, from Vision Presence Power: A Program Guide to the U.S. Navy – 2004 Edition, p. 128. Web site: http://www.chinfo.navy.mil
NASA Astrophysics Data System (ADS)
Stetson, Suzanne; Weber, Hadley; Crosby, Frank J.; Tinsley, Kenneth; Kloess, Edmund; Nevis, Andrew J.; Holloway, John H., Jr.; Witherspoon, Ned H.
2004-09-01
The Airborne Littoral Reconnaissance Technologies (ALRT) project has developed and tested a nighttime operational minefield detection capability using commercial off-the-shelf high-power Laser Diode Arrays (LDAs). The Coastal System Station's ALRT project, under funding from the Office of Naval Research (ONR), has been designing, developing, integrating, and testing commercial arrays using a Cessna airborne platform over the last several years. This has led to the development of the Airborne Laser Diode Array Illuminator wide field-of-view (ALDAI-W) imaging test bed system. The ALRT project tested ALDAI-W at the Army's Night Vision Lab's Airborne Mine Detection Arid Test. By participating in Night Vision's test, ALRT was able to collect initial prototype nighttime operational data using ALDAI-W, showing impressive results and pioneering the way for the final test bed demonstration conducted in September 2003. This paper describes the ALDAI-W Arid Test and results, along with the processing steps used to generate imagery.
Square tracking sensor for autonomous helicopter hover stabilization
NASA Astrophysics Data System (ADS)
Oertel, Carl-Henrik
1995-06-01
Sensors for synthetic vision are needed to extend the mission profiles of helicopters. A special task for various applications is the autonomous position hold of a helicopter above a ground-fixed or moving target. As a proof of concept for a general synthetic vision solution, a restricted machine vision system capable of locating and tracking a special target was developed by the Institute of Flight Mechanics of Deutsche Forschungsanstalt für Luft- und Raumfahrt e.V. (German Aerospace Research Establishment). This sensor, which is specialized to detect and track a square, was integrated into the fly-by-wire helicopter ATTHeS (Advanced Technology Testing Helicopter System). An existing model-following controller for the forward flight condition was adapted for the hover and low-speed requirements of the flight vehicle. The special target, a black square with a side length of one meter, was mounted on top of a car. Flight tests demonstrated the automatic stabilization of the helicopter above the moving car by synthetic vision.
Sensing Super-position: Visual Instrument Sensor Replacement
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2006-01-01
The coming decade of fast, cheap, and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution by providing an auditory representation in addition to the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
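A minimal sketch of a column-scan image-to-sound mapping of this general kind: each image column becomes a time slice, each row a sinusoid whose frequency rises with height and whose amplitude is the pixel brightness. The frequency range and scan duration are arbitrary choices, not the VISOR parameters.

# Sketch: image-to-sound mapping; columns sweep over time, rows map to pitch.
import numpy as np

def image_to_audio(img, fs=22050, scan_s=1.0, f_lo=200.0, f_hi=5000.0):
    rows, cols = img.shape
    freqs = np.logspace(np.log10(f_lo), np.log10(f_hi), rows)[::-1]  # top = high pitch
    n_col = int(fs * scan_s / cols)          # samples per column
    t = np.arange(n_col) / fs
    audio = []
    for c in range(cols):
        tones = np.sin(2 * np.pi * freqs[:, None] * t)   # (rows, n_col)
        audio.append(img[:, c] @ tones)                   # brightness-weighted sum
    out = np.concatenate(audio)
    return out / (np.abs(out).max() + 1e-12)              # normalize to [-1, 1]

img = np.random.rand(32, 64)
print(image_to_audio(img).shape)   # one-second audio sweep of the image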
NASA Astrophysics Data System (ADS)
Phipps, Marja; Capel, David; Srinivasan, James
2014-06-01
Motion imagery capabilities within the Department of Defense/Intelligence Community (DoD/IC) have advanced significantly over the last decade, attempting to meet continuously growing data collection, video processing, and analytical demands in operationally challenging environments. The motion imagery tradecraft has evolved accordingly, enabling teams of analysts to effectively exploit data and generate intelligence reports across multiple phases in structured Full Motion Video (FMV) Processing Exploitation and Dissemination (PED) cells. Yet the operational requirements are now drastically changing. The exponential growth in motion imagery data continues, but to this the community adds multi-INT data, interoperability with existing and emerging systems, expanded data access, nontraditional users, collaboration, automation, and support for ad hoc configurations beyond the current FMV PED cells. To break from the legacy system lifecycle, we look towards a technology application and commercial adoption model that will meet these future Intelligence, Surveillance and Reconnaissance (ISR) challenges. In this paper, we explore the application of cutting-edge computer vision technology to meet existing FMV PED shortfalls and address future capability gaps. For example, real-time georegistration services built on computer-vision-based feature tracking, multiple-view geometry, and statistical methods allow the fusion of motion imagery with other georeferenced information sources, providing unparalleled situational awareness. We then describe how these motion imagery capabilities may be readily deployed in a dynamically integrated analytical environment, employing an extensible framework, leveraging scalable enterprise-wide infrastructure, and following commercial best practices.
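The georegistration core can be illustrated with standard computer-vision building blocks. The sketch below matches ORB features between a synthetic "frame" and a reference image using OpenCV and estimates a homography with RANSAC; a real pipeline would add feature tracking over time, full multiple-view geometry, and statistical filtering.

# Sketch: feature-based frame-to-reference registration with OpenCV.
# The images are synthetic stand-ins for a video frame and a georeferenced
# reference image.
import cv2
import numpy as np

reference = (np.random.rand(480, 640) * 255).astype(np.uint8)
reference = cv2.GaussianBlur(reference, (5, 5), 0)
M = cv2.getRotationMatrix2D((320, 240), 5.0, 1.0)   # 5-degree rotation
frame = cv2.warpAffine(reference, M, (640, 480))    # the "video frame"

orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(frame, None)
k2, d2 = orb.detectAndCompute(reference, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
matches = sorted(matches, key=lambda m: m.distance)[:300]

src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
print("frame-to-reference homography:\n", H.round(3))
print("inlier ratio:", float(inliers.mean()))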
2013 Progress Report -- DOE Joint Genome Institute
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2013-11-01
In October 2012, we introduced a 10-Year Strategic Vision [http://bit.ly/JGI-Vision] for the Institute. A central focus of this Strategic Vision is to bridge the gap between sequenced genomes and an understanding of biological functions at the organism and ecosystem level. This involves the continued massive-scale generation of sequence data, complemented by orthogonal new capabilities to functionally annotate these large sequence data sets. Our Strategic Vision lays out a path to guide our decisions and ensure that the evolving set of experimental and computational capabilities available to DOE JGI users will continue to enable groundbreaking science.
EVA Communications Avionics and Informatics
NASA Technical Reports Server (NTRS)
Carek, David Andrew
2005-01-01
The Glenn Research Center is investigating and developing technologies for communications, avionics, and information systems that will significantly enhance extravehicular activity capabilities to support the Vision for Space Exploration. Several of the ongoing research and development efforts are described within this presentation, including system requirements formulation, technology development efforts, trade studies, and operational concept demonstrations.
Development of embedded real-time and high-speed vision platform
NASA Astrophysics Data System (ADS)
Ouyang, Zhenxing; Dong, Yimin; Yang, Hua
2015-12-01
Currently, high-speed vision platforms are widely used in many applications, such as robotics and the automation industry. However, traditional high-speed vision platforms rely on a personal computer (PC) for human-computer interaction, and its large size makes it unsuitable for compact systems. Therefore, this paper develops an embedded real-time and high-speed vision platform, ER-HVP Vision, which is able to work completely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a DSP and FPGA board is developed for implementing image-parallel algorithms in the FPGA and image-sequential algorithms in the DSP. Hence, the capability of ER-HVP Vision, with a size of 320 mm x 250 mm x 87 mm, can be delivered in a more compact form. Experimental results are also given to indicate that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels under this newly developed vision platform is feasible.
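A minimal sketch of the detect-and-count step, written as plain frame differencing with connected-component labeling; the thresholds and synthetic frames stand in for the platform's FPGA/DSP pipeline.

# Sketch: moving-target detection and counting by frame differencing.
import numpy as np
from scipy import ndimage

def count_moving_targets(prev, curr, diff_thresh=30, min_area=20):
    motion = np.abs(curr.astype(int) - prev.astype(int)) > diff_thresh
    labels, n = ndimage.label(motion)                 # connected components
    areas = ndimage.sum(motion, labels, index=range(1, n + 1))
    return int(np.sum(areas >= min_area))             # ignore tiny blobs

prev = np.zeros((512, 512), dtype=np.uint8)
curr = prev.copy()
curr[100:120, 200:220] = 255                          # synthetic moving blob
print(count_moving_targets(prev, curr))               # -> 1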
Development of a Vision-Based Situational Awareness Capability for Unmanned Surface Vessels
2017-09-01
used to provide an SA capability for USVs. This thesis addresses the following research questions: (1) Can a computer vision-based technique be... This research demonstrated the feasibility of using a computer vision-based... VISION-BASED SITUATIONAL AWARENESS CAPABILITY FOR UNMANNED SURFACE VESSELS by Ying Jie Benjemin Toh, September 2017. Thesis Advisor: Oleg
Vision-based system identification technique for building structures using a motion capture system
NASA Astrophysics Data System (ADS)
Oh, Byung Kwan; Hwang, Jin Woo; Kim, Yousok; Cho, Tongjun; Park, Hyo Seon
2015-11-01
This paper presents a new vision-based system identification (SI) technique for building structures by using a motion capture system (MCS). The MCS, with outstanding capabilities for dynamic response measurement, can provide gage-free measurements of vibrations through the convenient installation of multiple markers. In this technique, the dynamic characteristics (natural frequencies, mode shapes, and damping ratios) of building structures are extracted from the dynamic displacement responses measured by the MCS, after converting the displacements to accelerations and conducting SI by frequency domain decomposition (FDD). A free-vibration experiment on a three-story shear frame was conducted to validate the proposed technique. The SI results from the conventional accelerometer-based method were compared with those from the proposed technique and showed good agreement, which confirms the validity and applicability of the proposed vision-based SI technique for building structures. Furthermore, SI applying the MCS-measured displacements directly to FDD was performed and showed results identical to those of the conventional SI method.
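The processing chain (displacement, then acceleration, then FDD) can be sketched end to end. Below, a simulated three-channel displacement record is differentiated twice, the cross-spectral density matrix is assembled, and the per-frequency SVD yields singular-value peaks (natural frequencies) and first singular vectors (mode shapes). The mode shapes, frequencies, and peak picking here are illustrative, not the paper's data.

# Sketch: displacement -> acceleration -> frequency domain decomposition.
import numpy as np
from scipy.signal import csd

fs, t = 200.0, np.arange(0, 60, 1 / 200.0)
modes = np.array([[0.45, 0.80, 1.00], [1.00, 0.45, -0.80]])   # assumed shapes
disp = sum(m[:, None] * np.sin(2 * np.pi * f * t) for m, f in zip(modes, [1.2, 3.5]))
disp = disp + 0.01 * np.random.randn(*disp.shape)              # (3 floors, N)

# Twice-differentiated displacement approximates acceleration.
acc = np.gradient(np.gradient(disp, 1 / fs, axis=1), 1 / fs, axis=1)

# Cross-spectral density matrix G(f) over all channel pairs.
n_ch, nper = acc.shape[0], 1024
f, _ = csd(acc[0], acc[0], fs=fs, nperseg=nper)
G = np.empty((len(f), n_ch, n_ch), dtype=complex)
for i in range(n_ch):
    for j in range(n_ch):
        _, G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=nper)

U, S, _ = np.linalg.svd(G)          # per-frequency SVD (batched)
peak = S[:, 0].argmax()             # strongest singular-value peak
print(f"mode at {f[peak]:.2f} Hz, shape {np.real(U[peak, :, 0]).round(2)}")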
Brewer, Margo
2016-09-01
Creating a vision (visioning) and sensemaking have been described as key leadership practices in the leadership literature. A vision provides clarity, motivation, and direction for staff, and is essential particularly in times of significant change. Closely related to visioning is sensemaking (the organisation of stimuli into a framework allowing people to understand, explain, attribute, extrapolate, and predict). The application of these strategies to leadership within the interprofessional field is yet to be scrutinised. This study examines an interprofessional capability framework as a visioning and sensemaking tool for use by leaders within a university health science curriculum. Interviews with 11 faculty members revealed that the framework had been embedded across multiple years and contexts within the curriculum. Furthermore, a range of responses to the framework were evoked in relation to its use to make sense of interprofessional practice and to provide a vision, guide, and focus for faculty. Overall the findings indicate that the framework can function as both a visioning and sensemaking tool.
Integrated navigation, flight guidance, and synthetic vision system for low-level flight
NASA Astrophysics Data System (ADS)
Mehler, Felix E.
2000-06-01
Future military transport aircraft will require a new approach to the avionics suite to fulfill an ever-changing variety of missions. The most demanding phases of these missions are typically the low-level flight segments, including tactical terrain following/avoidance, payload drop, and/or on-board autonomous landing at forward operating strips without ground-based infrastructure. As a consequence, individual components and systems must become more integrated to offer a higher degree of reliability, integrity, flexibility, and autonomy over existing systems while reducing crew workload. The integration of digital terrain data not only introduces synthetic vision into the cockpit, but also enhances navigation and guidance capabilities. At DaimlerChrysler Aerospace AG Military Aircraft Division (Dasa-M), an integrated navigation, flight guidance, and synthetic vision system based on digital terrain data has been developed to fulfill the requirements of the Future Transport Aircraft (FTA). The fusion of three independent navigation sensors provides a more reliable and precise solution to both the 4D flight guidance and the display component, which comprises a Head-up and a Head-down Display with synthetic vision. This paper presents the system, its integration into the DLR's VFW 614 Advanced Technology Testing Aircraft System (ATTAS), and the results of the flight-test campaign.
Vision requirements for Space Station applications
NASA Technical Reports Server (NTRS)
Crouse, K. R.
1985-01-01
Problems which will be encountered by computer vision systems in Space Station operations are discussed, along with solutions being examined at the Johnson Space Center. Lighting cannot be controlled in space, nor can the random presence of reflective surfaces. Task-oriented capabilities are to include docking to moving objects, identification of unexpected objects during autonomous flights to different orbits, and diagnoses of damage and repair requirements for autonomous Space Station inspection robots. The approaches being examined to provide these and other capabilities are television and IR sensors, advanced pattern recognition programs fed by data from laser probes, laser radar for robot eyesight, and arrays of SMART sensors for automated location and tracking of target objects. Attention is also being given to liquid crystal light valves for optical processing of images for comparisons with on-board electronic libraries of images.
NASA Technical Reports Server (NTRS)
Decker, T. A.; Williams, R. E.; Kuether, C. L.; Logar, N. D.; Wyman-Cornsweet, D.
1975-01-01
A computer-operated binocular vision testing device was developed as one part of a system designed for NASA to evaluate the visual function of astronauts during spaceflight. This particular device, called the Mark 3 Haploscope, employs semi-automated psychophysical test procedures to measure visual acuity, stereopsis, phoria, fixation disparity, refractive state and accommodation/convergence relationships. Test procedures are self-administered and can be used repeatedly without subject memorization. The Haploscope was designed as one module of the complete NASA Vision Testing System. However, it is capable of stand-alone operation. Moreover, the compactness and portability of the Haploscope make possible its use in a broad variety of testing environments.
Telerobotic controller development
NASA Technical Reports Server (NTRS)
Otaguro, W. S.; Kesler, L. O.; Land, Ken; Rhoades, Don
1987-01-01
To meet the needs and growth of NASA's space station, a modular and generic approach to robotic control was developed which provides near-term implementation with low development cost and the capability for growth into more autonomous systems. The method uses a vision-based robotic controller and compliant hand integrated with the Remote Manipulator System arm on the Orbiter. A description of the hardware and its system integration is presented.
Pérez i de Lanuza, Guillem; Font, Enrique
2014-08-15
Ultraviolet (UV) vision and UV colour patches have been reported in a wide range of taxa and are increasingly appreciated as an integral part of vertebrate visual perception and communication systems. Previous studies with Lacertidae, a lizard family with diverse and complex coloration, have revealed the existence of UV-reflecting patches that may function as social signals. However, confirmation of the signalling role of UV coloration requires demonstrating that the lizards are capable of vision in the UV waveband. Here we use a multidisciplinary approach to characterize the visual sensitivity of a diverse sample of lacertid species. Spectral transmission measurements of the ocular media show that wavelengths down to 300 nm are transmitted in all the species sampled. Four retinal oil droplet types can be identified in the lacertid retina. Two types are pigmented and two are colourless. Fluorescence microscopy reveals that a type of colourless droplet is UV-transmitting and may thus be associated with UV-sensitive cones. DNA sequencing shows that lacertids have a functional SWS1 opsin, very similar at 13 critical sites to that in the presumed ancestral vertebrate (which was UV sensitive) and other UV-sensitive lizards. Finally, males of Podarcis muralis are capable of discriminating between two views of the same stimulus that differ only in the presence/absence of UV radiance. Taken together, these results provide convergent evidence of UV vision in lacertids, very likely by means of an independent photopigment. Moreover, the presence of four oil droplet types suggests that lacertids have a four-cone colour vision system. © 2014. Published by The Company of Biologists Ltd.
Neural system applied on an invariant industrial character recognition
NASA Astrophysics Data System (ADS)
Lecoeuche, Stephane; Deguillemont, Denis; Dubus, Jean-Paul
1997-04-01
Besides the variety of fonts, character recognition systems for the industrial world are confronted with specific problems such as the variety of supports (metal, wood, paper, ceramics, ...), the variety of marking methods (printing, engraving, ...), and the conditions of lighting. We present a system that is able to solve a part of this problem. It implements a collaboration between two neural networks. The first network, specialized in vision, allows the system to extract the character from an image. Besides this capability, we have equipped our system with characteristics allowing it to obtain an invariant model from the presented character. Thus, whatever the position, size, and orientation of the character during capture, the model presented to the input of the second network will be identical. The second network, thanks to a learning phase, permits us to obtain a character recognition system independent of the type of font used. Furthermore, its capabilities of generalization permit us to recognize degraded and/or distorted characters. A feedback loop between the two networks permits the first one to modify the quality of vision. The cooperation between these two networks allows us to recognize characters whatever the support and the marking.
Computer vision camera with embedded FPGA processing
NASA Astrophysics Data System (ADS)
Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel
2000-03-01
Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted at real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium-size one, equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (such as VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
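For reference, the multi-scale Laplacian-of-Gaussian edge detection that the architecture implements can be expressed compactly in software: filter at several sigmas and mark zero crossings of the response. The sigmas and test image are examples, not the camera's configuration.

# Sketch: multi-scale Laplacian-of-Gaussian edge detection via zero crossings.
import numpy as np
from scipy import ndimage

def log_edges(img, sigma):
    response = ndimage.gaussian_laplace(img.astype(float), sigma)
    # An edge is where the LoG response changes sign between neighbors.
    sign = response > 0
    zc_h = np.pad(sign[:, :-1] != sign[:, 1:], ((0, 0), (0, 1)))
    zc_v = np.pad(sign[:-1, :] != sign[1:, :], ((0, 1), (0, 0)))
    return zc_h | zc_v

img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0                        # synthetic square
for sigma in (1.0, 2.0, 4.0):                  # fine-to-coarse scales
    print(sigma, int(log_edges(img, sigma).sum()), "edge pixels")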
NASA Technical Reports Server (NTRS)
Kearney, Lara
2004-01-01
In January 2004, the President announced a new Vision for Space Exploration. NASA's Office of Exploration Systems has identified Extravehicular Activity (EVA) as a critical capability for supporting the Vision for Space Exploration. EVA is required for all phases of the Vision, both in-space and planetary. Supporting the human outside the protective environment of the vehicle or habitat and allowing him/her to perform efficient and effective work requires an integrated EVA "System of systems." The EVA System includes EVA suits, airlocks, tools and mobility aids, and human rovers. At the core of the EVA System is the highly technical EVA suit, which is comprised mainly of a life support system and a pressure/environmental protection garment. The EVA suit, in essence, is a miniature spacecraft, which combines many different sub-systems such as life support, power, communications, avionics, robotics, pressure systems, and thermal systems into a single autonomous unit. Development of a new EVA suit requires technology advancements similar to those required in the development of a new space vehicle. A majority of the technologies necessary to develop advanced EVA systems are currently at a low Technology Readiness Level of 1-3. This is particularly true for the long-pole technologies of the life support system.
Compact Microscope Imaging System with Intelligent Controls
NASA Technical Reports Server (NTRS)
McDowell, Mark
2004-01-01
The figure presents selected views of a compact microscope imaging system (CMIS) that includes a miniature video microscope, a Cartesian robot (a computer-controlled three-dimensional translation stage), and machine-vision and control subsystems. The CMIS was built from commercial off-the-shelf instrumentation, computer hardware and software, and custom machine-vision software. The machine-vision and control subsystems include adaptive neural networks that afford a measure of artificial intelligence. The CMIS can perform several automated tasks with accuracy and repeatability: tasks that, heretofore, have required the full attention of human technicians using relatively bulky conventional microscopes. In addition, the automation and control capabilities of the system inherently include a capability for remote control. Unlike human technicians, the CMIS is not at risk of becoming fatigued or distracted: theoretically, it can perform continuously at the level of the best human technicians. In its capabilities for remote control and for relieving human technicians of tedious routine tasks, the CMIS is expected to be especially useful in biomedical research, materials science, inspection of parts on industrial production lines, and space science. The CMIS can automatically focus on and scan a microscope sample, find areas of interest, record the resulting images, and analyze images from multiple samples simultaneously. Automatic focusing is an iterative process: The translation stage is used to move the microscope along its optical axis in a succession of coarse, medium, and fine steps. A fast Fourier transform (FFT) of the image is computed at each step, and the FFT is analyzed for its spatial-frequency content. The microscope position that results in the greatest dispersal of FFT content toward high spatial frequencies (indicating that the image shows the greatest amount of detail) is deemed to be the focal position.
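The focus metric described above is easy to express in code. In the sketch below, each stage position is scored by the fraction of FFT energy at high spatial frequencies and the best-scoring position is taken as focus; the function names, search loop, and low-frequency cutoff are illustrative assumptions, not the CMIS implementation.

```python
# FFT-based focus scoring: a sharply focused image pushes more spectral
# energy away from the low-frequency core of its 2-D FFT.
import numpy as np

def focus_score(image, low_freq_radius=10):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    y, x = np.ogrid[:spectrum.shape[0], :spectrum.shape[1]]
    high = (y - cy) ** 2 + (x - cx) ** 2 > low_freq_radius ** 2
    return spectrum[high].sum() / spectrum.sum()

def autofocus(capture_at, positions):
    # capture_at(z) is an assumed callback returning an image at stage
    # position z; coarse/medium/fine passes would call this repeatedly
    # with decreasing step sizes around the previous best position.
    return max(positions, key=lambda z: focus_score(capture_at(z)))
```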
NASA Technical Reports Server (NTRS)
Van Baalen, Mary; Mason, Sara; Foy, Millennia; Wear, Mary; Taiym, Wafa; Moynihan, Shannan; Alexander, David; Hart, Steve; Tarver, William
2015-01-01
Due to recently identified vision changes associated with space flight, JSC Space and Clinical Operations (SCO) implemented broad mission-related vision testing starting in 2009. Optical Coherence Tomography (OCT), 3-Tesla brain and orbit MRIs, and optical biometry were implemented terrestrially for clinical monitoring. While no in-flight vision testing was in place, already-available on-orbit technology was leveraged to facilitate in-flight clinical monitoring, including visual acuity, Amsler grid, tonometry, and ultrasonography. In 2013, on-orbit testing capabilities were expanded to include contrast sensitivity testing and OCT. As these additional testing capabilities have been added, resource prioritization, particularly of crew time, is under evaluation.
National Aeronautics and Space Administration Exploration Systems Interim Strategy
NASA Technical Reports Server (NTRS)
2004-01-01
Contents include the following: 1. The Exploration Systems Mission Directorate within NASA. Enabling the Vision for Space Exploration. The Role of the Directorate. 2. Strategic Context and Approach. Corporate Focus. Focused, Prioritized Requirements. Spiral Transformation. Management Rigor. 3. Achieving Directorate Objectives. Strategy to Task Process. Capability Development. Research and Technology Development. 4. Beyond the Horizon. Appendices.
Synthetic Vision Enhances Situation Awareness and RNP Capabilities for Terrain-Challenged Approaches
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Prinzel, Lawrence J., III; Bailey, Randall E.; Arthur, Jarvis J., III
2003-01-01
The Synthetic Vision Systems (SVS) Project of NASA's Aviation Safety Program is striving to eliminate poor visibility as a causal factor in aircraft accidents, as well as to enhance the operational capabilities of all aircraft, through the display of computer-generated imagery derived from an onboard database of terrain, obstacle, and airport information. To achieve these objectives, NASA 757 flight test research was conducted at the Eagle-Vail, Colorado airport to evaluate three SVS display types (Head-Up Display, Head-Down Size A, Head-Down Size X) and two terrain texture methods (photo-realistic, generic) in comparison to the simulated Baseline Boeing-757 Electronic Attitude Direction Indicator and Navigation / Terrain Awareness and Warning System displays. These independent variables were evaluated for situation awareness, path error, and workload while making approaches to Runways 25 and 07 and during simulated engine-out Cottonwood 2 and KREMM departures. The results of the experiment showed significantly improved situation awareness, performance, and workload for SVS concepts compared to the Baseline displays and confirmed the retrofit capability of the Head-Up Display and Size A SVS concepts. The research also demonstrated that the pathway and pursuit guidance used within the SVS concepts achieved required navigation performance (RNP) criteria.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noakes, Mark W; Garcia, Pablo; Rosen, Jacob
The Trauma Pod (TP) vision is to develop a rapidly deployable robotic system to perform critical acute stabilization and/or surgical procedures autonomously or in a teleoperative mode on wounded soldiers in the battlefield who might otherwise die before treatment in a combat hospital can be provided. In the first phase of a project pursuing this vision, a robotic TP system was developed and its capability demonstrated by performing select surgical procedures on a patient phantom. The system demonstrates the feasibility of performing acute stabilization procedures with the patient being the only human in the surgical cell. The teleoperated surgical robot is supported by autonomous arms that carry out scrub-nurse and circulating-nurse functions. Tool change and supply delivery are performed automatically and at least as fast as those performed manually by nurses. The TP system also includes a tomographic X-ray facility for patient diagnosis and 2-D fluoroscopic data to support interventions. The vast amount of clinical protocols generated in the TP system are recorded automatically. These capabilities form the basis for a more comprehensive acute diagnostic and management platform that will provide life-saving care in environments where surgical personnel are not present.
Toward Head-Up and Head-Worn Displays for Equivalent Visual Operations
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Arthur, Jarvis J.; Bailey, Randall E.; Shelton, Kevin J.; Kramer, Lynda J.; Jones, Denise R.; Williams, Steven P.; Harrison, Stephanie J.; Ellis, Kyle K.
2015-01-01
A key capability envisioned for the future air transportation system is the concept of equivalent visual operations (EVO). EVO is the capability to achieve the safety of current-day Visual Flight Rules (VFR) operations and maintain the operational tempos of VFR irrespective of the weather and visibility conditions. Enhanced Flight Vision Systems (EFVS) offer a path to achieve EVO. NASA has successfully tested EFVS for commercial flight operations, helping to establish the technical merits of EFVS, without reliance on natural vision, for operations to runways without Category II/III ground-based navigation and lighting infrastructure. The research has tested EFVS operations with both Head-Up Displays (HUDs) and "HUD-equivalent" Head-Worn Displays (HWDs). The paper describes the EVO concept and representative NASA EFVS research that demonstrates the potential of these technologies to safely conduct operations in visibilities as low as 1000 feet Runway Visual Range (RVR). Future directions are described, including efforts to enable low-visibility approach, landing, and roll-out using EFVS under conditions as low as 300 feet RVR.
Flight Test Evaluation of Synthetic Vision Concepts at a Terrain Challenged Airport
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Prinzel, Lawrence J., III; Bailey, Randall E.; Arthur, Jarvis J., III; Parrish, Russell V.
2004-01-01
NASA's Synthetic Vision Systems (SVS) Project is striving to eliminate poor visibility as a causal factor in aircraft accidents, as well as to enhance the operational capabilities of all aircraft, through the display of computer-generated imagery derived from an onboard database of terrain, obstacle, and airport information. To achieve these objectives, NASA 757 flight test research was conducted at the Eagle-Vail, Colorado airport to evaluate three SVS display types (Head-Up Display, Head-Down Size A, Head-Down Size X) and two terrain texture methods (photo-realistic, generic) in comparison to the simulated Baseline Boeing-757 Electronic Attitude Direction Indicator and Navigation/Terrain Awareness and Warning System displays. The results of the experiment showed significantly improved situation awareness, performance, and workload for SVS concepts compared to the Baseline displays and confirmed the retrofit capability of the Head-Up Display and Size A SVS concepts. The research also demonstrated that the tunnel guidance display concept used within the SVS concepts achieved required navigation performance (RNP) criteria.
Technology Assessment in Support of the Presidential Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Weisbin, Charles R.; Lincoln, William; Mrozinski, Joe; Hua, Hook; Merida, Sofia; Shelton, Kacie; Adumitroaie, Virgil; Derleth, Jason; Silberg, Robert
2006-01-01
This paper discusses the process and results of technology assessment in support of the United States Vision for Space Exploration of the Moon, Mars and Beyond. The paper begins by reviewing the Presidential Vision: a major endeavor in building systems of systems. It discusses why we wish to return to the Moon, and the exploration architecture for getting there safely, sustaining a presence, and safely returning. Next, a methodology for optimal technology investment is proposed with discussion of inputs including a capability hierarchy, mission importance weightings, available resource profiles as a function of time, likelihoods of development success, and an objective function. A temporal optimization formulation is offered, and the investment recommendations presented along with sensitivity analyses. Key questions addressed are sensitivity of budget allocations to cost uncertainties, reduction in available budget levels, and shifting funding within constraints imposed by mission timeline.
A Synthetic Vision Preliminary Integrated Safety Analysis
NASA Technical Reports Server (NTRS)
Hemm, Robert; Houser, Scott
2001-01-01
This report documents efforts to analyze a sample of aviation safety programs, using the LMI-developed integrated safety analysis tool to determine the change in system risk resulting from Aviation Safety Program (AvSP) technology implementation. Specifically, we have worked to modify existing system safety tools to address the safety impact of synthetic vision (SV) technology. Safety metrics include reliability, availability, and resultant hazard. This analysis of SV technology is intended to be part of a larger effort to develop a model that is capable of "providing further support to the product design and development team as additional information becomes available". The reliability analysis portion of the effort is complete and is fully documented in this report. The simulation analysis is still underway; it will be documented in a subsequent report. The specific goal of this effort is to apply the integrated safety analysis to SV technology. This report also contains a brief discussion of data necessary to expand the human performance capability of the model, as well as a discussion of human behavior and its implications for system risk assessment in this modeling environment.
Development of a machine vision system for automated structural assembly
NASA Technical Reports Server (NTRS)
Sydow, P. Daniel; Cooper, Eric G.
1992-01-01
Research is being conducted at the LaRC to develop a telerobotic assembly system designed to construct large space truss structures. This research program was initiated within the past several years, and a ground-based test-bed was developed to evaluate and expand the state of the art. Test-bed operations currently use predetermined ('taught') points for truss structural assembly. Total dependence on the use of taught points for joint receptacle capture and strut installation is neither robust nor reliable enough for space operations. Therefore, a machine vision sensor guidance system is being developed to locate and guide the robot to a passive target mounted on the truss joint receptacle. The vision system hardware includes a miniature video camera, passive targets mounted on the joint receptacles, target illumination hardware, and an image processing system. Discrimination of the target from background clutter is accomplished through standard digital processing techniques. Once the target is identified, a pose estimation algorithm is invoked to determine the location, in three-dimensional space, of the target relative to the robot's end-effector. Preliminary test results of the vision system in the Automated Structural Assembly Laboratory with a range of lighting and background conditions indicate that it is fully capable of successfully identifying joint receptacle targets throughout the required operational range. Controlled optical bench test results indicate that the system can also provide the pose estimation accuracy needed to define the target position.
2006-11-01
engines will involve a family of common components. It will consist of a real-time operating system and partitioned application software (AS...system will employ a standard hardware and software architecture. It will consist of a real-time operating system and partitioned application...Inputs - Enables Large Cost Reduction 3. Software - FAA Certified Auto Code - Real-Time Operating System - Commercial
Real-time tracking using stereo and motion: Visual perception for space robotics
NASA Technical Reports Server (NTRS)
Nishihara, H. Keith; Thomas, Hans; Huber, Eric; Reid, C. Ann
1994-01-01
The state-of-the-art in computing technology is rapidly attaining the performance necessary to implement many early vision algorithms at real-time rates. This new capability is helping to accelerate progress in vision research by improving our ability to evaluate the performance of algorithms in dynamic environments. In particular, we are becoming much more aware of the relative stability of various visual measurements in the presence of camera motion and system noise. This new processing speed is also allowing us to raise our sights toward accomplishing much higher-level processing tasks, such as figure-ground separation and active object tracking, in real time. This paper describes a methodology for using early visual measurements to accomplish higher-level tasks; it then presents an overview of the high-speed accelerators developed at Teleos to support early visual measurements. The final section describes the successful deployment of a real-time vision system to provide visual perception for the Extravehicular Activity Helper/Retriever robotic system in tests aboard NASA's KC-135 reduced-gravity aircraft.
Identification Of Cells With A Compact Microscope Imaging System With Intelligent Controls
NASA Technical Reports Server (NTRS)
McDowell, Mark (Inventor)
2006-01-01
A Compact Microscope Imaging System (CMIS) with intelligent controls is disclosed that provides techniques for scanning, identifying, detecting and tracking microscopic changes in selected characteristics or features of various surfaces including, but not limited to, cells, spheres, and manufactured products subject to difficult-to-see imperfections. The practice of the present invention provides applications that include colloidal hard spheres experiments, biological cell detection for patch clamping, cell movement and tracking, as well as defect identification in products, such as semiconductor devices, where surface damage can be significant, but difficult to detect. The CMIS system is a machine vision system, which combines intelligent image processing with remote control capabilities and provides the ability to autofocus on a microscope sample, automatically scan an image, and perform machine vision analysis on multiple samples simultaneously.
NASA Astrophysics Data System (ADS)
Yamaguchi, Masahiro; Haneishi, Hideaki; Fukuda, Hiroyuki; Kishimoto, Junko; Kanazawa, Hiroshi; Tsuchida, Masaru; Iwama, Ryo; Ohyama, Nagaaki
2006-01-01
In addition to the great advancement of high-resolution and large-screen imaging technology, the issue of color is now receiving considerable attention as an aspect distinct from image resolution. It is difficult to reproduce the original color of a subject in conventional imaging systems, and this obstructs the application of visual communication systems in telemedicine, electronic commerce, and digital museums. To break through the limitation of conventional RGB three-primary systems, the "Natural Vision" project aims at an innovative video and still-image communication technology with high-fidelity color reproduction capability, based on spectral information. This paper summarizes the results of the NV project, including the development of multispectral and multiprimary imaging technologies and experimental investigations of applications to medicine, digital archives, electronic commerce, and computer graphics.
Tracking of Cells with a Compact Microscope Imaging System with Intelligent Controls
NASA Technical Reports Server (NTRS)
McDowell, Mark (Inventor)
2007-01-01
A Compact Microscope Imaging System (CMIS) with intelligent controls is disclosed that provides techniques for scanning, identifying, detecting and tracking microscopic changes in selected characteristics or features of various surfaces including, but not limited to, cells, spheres, and manufactured products subject to difficult-to-see imperfections. The practice of the present invention provides applications that include colloidal hard spheres experiments, biological cell detection for patch clamping, cell movement and tracking, as well as defect identification in products, such as semiconductor devices, where surface damage can be significant, but difficult to detect. The CMIS system is a machine vision system, which combines intelligent image processing with remote control capabilities and provides the ability to autofocus on a microscope sample, automatically scan an image, and perform machine vision analysis on multiple samples simultaneously.
Tracking of cells with a compact microscope imaging system with intelligent controls
NASA Technical Reports Server (NTRS)
McDowell, Mark (Inventor)
2007-01-01
A Compact Microscope Imaging System (CMIS) with intelligent controls is disclosed that provides techniques for scanning, identifying, detecting and tracking microscopic changes in selected characteristics or features of various surfaces including, but not limited to, cells, spheres, and manufactured products subject to difficult-to-see imperfections. The practice of the present invention provides applications that include colloidal hard spheres experiments, biological cell detection for patch clamping, cell movement and tracking, as well as defect identification in products, such as semiconductor devices, where surface damage can be significant, but difficult to detect. The CMIS system is a machine vision system, which combines intelligent image processing with remote control capabilities and provides the ability to auto-focus on a microscope sample, automatically scan an image, and perform machine vision analysis on multiple samples simultaneously.
Operation of a Cartesian Robotic System in a Compact Microscope with Intelligent Controls
NASA Technical Reports Server (NTRS)
McDowell, Mark (Inventor)
2006-01-01
A Compact Microscope Imaging System (CMIS) with intelligent controls is disclosed that provides techniques for scanning, identifying, detecting and tracking microscopic changes in selected characteristics or features of various surfaces including, but not limited to, cells, spheres, and manufactured products subject to difficult-to-see imperfections. The practice of the present invention provides applications that include colloidal hard spheres experiments, biological cell detection for patch clamping, cell movement and tracking, as well as defect identification in products, such as semiconductor devices, where surface damage can be significant, but difficult to detect. The CMIS system is a machine vision system, which combines intelligent image processing with remote control capabilities and provides the ability to autofocus on a microscope sample, automatically scan an image, and perform machine vision analysis on multiple samples simultaneously.
Huang, Kuo-Sen; Mark, David; Gandenberger, Frank Ulrich
2006-01-01
The plate::vision is a high-throughput multimode reader capable of reading absorbance, fluorescence, fluorescence polarization, time-resolved fluorescence, and luminescence. Its performance has been shown to be quite comparable with that of other readers. When the reader is integrated into the plate::explorer, an ultrahigh-throughput screening system with event-driven software and parallel plate-handling devices, it becomes possible to run complicated assays with kinetic readouts in high-density microtiter plate formats for high-throughput screening. For the past 5 years, we have used the plate::vision and the plate::explorer to run screens and have generated more than 30 million data points. Their throughput, performance, and robustness have greatly sped up our drug discovery process.
Simulation Based Acquisition for NASA's Office of Exploration Systems
NASA Technical Reports Server (NTRS)
Hale, Joe
2004-01-01
In January 2004, President George W. Bush unveiled his vision for NASA to advance U.S. scientific, security, and economic interests through a robust space exploration program. This vision includes the goal to extend human presence across the solar system, starting with a human return to the Moon no later than 2020, in preparation for human exploration of Mars and other destinations. In response to this vision, NASA has created the Office of Exploration Systems (OExS) to develop the innovative technologies, knowledge, and infrastructures to explore and support decisions about human exploration destinations, including the development of a new Crew Exploration Vehicle (CEV). Within the OExS organization, NASA is implementing Simulation Based Acquisition (SBA), a robust Modeling & Simulation (M&S) environment integrated across all acquisition phases and programs/teams, to make the realization of the President's vision more certain. Executed properly, SBA will foster better informed, timelier, and more defensible decisions throughout the acquisition life cycle. By doing so, SBA will improve the quality of NASA systems and speed their development, at less cost and risk than would otherwise be the case. SBA is a comprehensive, Enterprise-wide endeavor that necessitates an evolved culture, a revised spiral acquisition process, and an infrastructure of advanced Information Technology (IT) capabilities. SBA encompasses all project phases (from requirements analysis and concept formulation through design, manufacture, training, and operations), professional disciplines, and activities that can benefit from employing SBA capabilities. SBA capabilities include: developing and assessing system concepts and designs; planning manufacturing, assembly, transport, and launch; training crews, maintainers, launch personnel, and controllers; planning and monitoring missions; responding to emergencies by evaluating effects and exploring solutions; and communicating across the OExS enterprise, within the Government, and with the general public. The SBA process features empowered collaborative teams (including industry partners) to integrate requirements, acquisition, training, operations, and sustainment. The SBA process also utilizes an increased reliance on and investment in M&S to reduce design risk. SBA originated as a joint Industry and Department of Defense (DoD) initiative to define and integrate an acquisition process that employs robust, collaborative use of M&S technology across acquisition phases and programs. The SBA process was successfully implemented in the Air Force's Joint Strike Fighter (JSF) Program.
Are visual peripheries forever young?
Burnat, Kalina
2015-01-01
The paper presents a concept of lifelong plasticity of peripheral vision. Central vision processing is accepted as critical and irreplaceable for normal perception in humans. While peripheral processing chiefly carries information about motion stimuli features and redirects foveal attention to new objects, it can also take over functions typical for central vision. Here I review the data showing the plasticity of peripheral vision found in functional, developmental, and comparative studies. Even though it is well established that afferent projections from central and peripheral retinal regions are not established simultaneously during early postnatal life, central vision is commonly used as a general model of development of the visual system. Based on clinical studies and visually deprived animal models, I describe how central and peripheral visual field representations separately rely on early visual experience. Peripheral visual processing (motion) is more affected by binocular visual deprivation than central visual processing (spatial resolution). In addition, our own experimental findings show the possible recruitment of coarse peripheral vision for fine spatial analysis. Accordingly, I hypothesize that the balance between central and peripheral visual processing, established in the course of development, is susceptible to plastic adaptations during the entire life span, with peripheral vision capable of taking over central processing.
Data-driven ranch management: A vision for sustainable ranching
USDA-ARS?s Scientific Manuscript database
Introduction The 21st century has ushered in an era of tiny, inexpensive electronics with impressive capabilities for sensing the environment. Also emerging are new technologies for communicating data to computer systems where new analytical tools can process the data. Many of these technologies w...
Data, Analysis, and Visualization | Computational Science | NREL
At NREL, our data management, data analysis, and scientific visualization capabilities help move computational science forward, including approaches to image analysis and computer vision, along with systems, software, and tools for data management and big data.
A self-learning camera for the validation of highly variable and pseudorandom patterns
NASA Astrophysics Data System (ADS)
Kelley, Michael
2004-05-01
Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability, and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To be able to address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance, and the ability to address applications with irregular lighting, patterns, and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural network based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.
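ZISC chips realize a nearest-prototype (RCE/RBF-style) classifier in silicon: stored prototype vectors carry an influence field, and an input is assigned to the class of a sufficiently close prototype. The sketch below is a minimal software analogue for illustration, assuming L1 distance and per-prototype radii; it is not the ZiCAM firmware or API.

```python
# Software analogue of a ZISC-style nearest-prototype classifier.
import numpy as np

class NearestPrototypeClassifier:
    def __init__(self):
        self.prototypes, self.labels, self.radii = [], [], []

    def learn(self, vector, label, radius=64.0):
        # Store a prototype with its class label and influence radius.
        self.prototypes.append(np.asarray(vector, dtype=float))
        self.labels.append(label)
        self.radii.append(radius)

    def classify(self, vector):
        vector = np.asarray(vector, dtype=float)
        # L1 distance mirrors the simple norms used in such hardware.
        dists = [np.abs(vector - p).sum() for p in self.prototypes]
        best = int(np.argmin(dists))
        # No match if the input falls outside every influence field.
        return self.labels[best] if dists[best] <= self.radii[best] else None
```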
Diamond, James; Anderson, Neil H; Bartels, Peter H; Montironi, Rodolfo; Hamilton, Peter W
2004-09-01
Quantitative examination of prostate histology offers clues in the diagnostic classification of lesions and in the prediction of response to treatment and prognosis. To facilitate the collection of quantitative data, the development of machine vision systems is necessary. This study explored the use of imaging for identifying tissue abnormalities in prostate histology. Medium-power histological scenes were recorded from whole-mount radical prostatectomy sections at x40 objective magnification and assessed by a pathologist as exhibiting stroma, normal tissue (nonneoplastic epithelial component), or prostatic carcinoma (PCa). A machine vision system was developed that divided the scenes into subregions of 100 x 100 pixels and subjected each to image-processing techniques. Analysis of morphological characteristics allowed the identification of normal tissue. Analysis of image texture demonstrated that Haralick feature 4 was the most suitable for discriminating stroma from PCa. Using these morphological and texture measurements, it was possible to define a classification scheme for each subregion. The machine vision system is designed to integrate these classification rules and generate digital maps of tissue composition from the classification of subregions; 79.3% of subregions were correctly classified. Established classification rates have demonstrated the validity of the methodology on small scenes; a logical extension was to apply the methodology to whole slide images via scanning technology. The machine vision system is capable of classifying these images. The machine vision system developed in this project facilitates the exploration of morphological and texture characteristics in quantifying tissue composition. It also illustrates the potential of quantitative methods to provide highly discriminatory information in the automated identification of prostatic lesions using computer vision.
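The texture-analysis step can be reproduced in outline with scikit-image's grey-level co-occurrence utilities. The sketch below computes standard GLCM properties for 100 x 100 subregions; it is an assumed illustration of the general approach, not the paper's exact feature set or classifier.

```python
# GLCM (Haralick-style) texture features per 100x100 subregion.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def subregion_texture(gray_image, size=100):
    # gray_image: 2-D uint8 array (values 0-255).
    features = []
    h, w = gray_image.shape
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            patch = gray_image[y:y + size, x:x + size]
            glcm = graycomatrix(patch, distances=[1],
                                angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            features.append({
                "contrast": graycoprops(glcm, "contrast").mean(),
                "correlation": graycoprops(glcm, "correlation").mean(),
            })
    return features
```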
Tick, David; Satici, Aykut C; Shen, Jinglin; Gans, Nicholas
2013-08-01
This paper presents a novel navigation and control system for autonomous mobile robots that includes path planning, localization, and control. A unique vision-based pose and velocity estimation scheme utilizing both the continuous and discrete forms of the Euclidean homography matrix is fused with inertial and optical encoder measurements to estimate the pose, orientation, and velocity of the robot and ensure accurate localization and control signals. A depth estimation system is integrated in order to overcome the loss of scale inherent in vision-based estimation. A path following control system is introduced that is capable of guiding the robot along a designated curve. Stability analysis is provided for the control system and experimental results are presented that prove the combined localization and control system performs with high accuracy.
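One ingredient of this scheme, recovering relative pose from the discrete Euclidean homography, is available off the shelf in OpenCV. The sketch below shows that step alone; the paper's full estimator also uses the continuous homography form and fuses inertial and wheel-encoder measurements, which are omitted here.

```python
# Relative camera pose from a homography between two views of a plane.
import cv2
import numpy as np

def pose_from_homography(pts_prev, pts_curr, K):
    # pts_prev, pts_curr: Nx2 float arrays of matched image points;
    # K: 3x3 camera intrinsics matrix.
    H, _ = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC)
    n, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    # Up to four mathematical solutions are returned; disambiguating them
    # (e.g., requiring the plane normal to face the camera) is left to the
    # application.
    return rotations, translations, normals
```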
Visual Advantage of Enhanced Flight Vision System During NextGen Flight Test Evaluation
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Harrison, Stephanie J.; Bailey, Randall E.; Shelton, Kevin J.; Ellis, Kyle K.
2014-01-01
Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment. Simulation and flight tests were jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA) to evaluate potential safety and operational benefits of SVS/EFVS technologies in low visibility Next Generation Air Transportation System (NextGen) operations. The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SVS/EFVS operational and system-level performance capabilities. Nine test flights were flown in Gulfstream's G450 flight test aircraft outfitted with the SVS/EFVS technologies under low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 feet to 3600 feet reported visibility) under different obscurants (mist, fog, drizzle fog, frozen fog) and sky cover (broken, overcast). Flight test videos were evaluated at three different altitudes (decision altitude, 100 feet radar altitude, and touchdown) to determine the visual advantage afforded to the pilot using the EFVS/Forward-Looking InfraRed (FLIR) imagery compared to natural vision. Results indicate the EFVS provided a visual advantage of two to three times over that of the out-the-window (OTW) view. The EFVS allowed pilots to view the runway environment, specifically runway lights, before they would be able to OTW with natural vision.
Effects of realistic force feedback in a robotic assisted minimally invasive surgery system.
Moradi Dalvand, Mohsen; Shirinzadeh, Bijan; Nahavandi, Saeid; Smith, Julian
2014-06-01
Robotic assisted minimally invasive surgery systems not only have the advantages of traditional laparoscopic procedures but also restore the surgeon's hand-eye coordination and improve the surgeon's precision by filtering hand tremors. Unfortunately, these benefits have come at the expense of the surgeon's ability to feel. Several research efforts have already attempted to restore this feature and study the effects of force feedback in robotic systems. The proposed methods and studies have some shortcomings. The main focus of this research is to overcome some of these limitations and to study the effects of force feedback in palpation in a more realistic fashion. A parallel robot assisted minimally invasive surgery system (PRAMiSS) with force feedback capabilities was employed to study the effects of realistic force feedback in palpation of artificial tissue samples. PRAMiSS is capable of actually measuring the tip/tissue interaction forces directly from the surgery site. Four sets of experiments using only vision feedback, only force feedback, simultaneous force and vision feedback, and direct manipulation were conducted to evaluate the role of sensory feedback from sideways tip/tissue interaction forces, with a scale factor of 100%, in characterizing tissues of varying stiffness. Twenty human subjects were involved in the experiments for at least 1440 trials. Friedman and Wilcoxon signed-rank tests were employed to statistically analyse the experimental results. Providing realistic force feedback in robotic assisted surgery systems improves the quality of tissue characterization procedures. Force feedback capability also increases the certainty of characterizing soft tissues compared with direct palpation using the lateral sides of index fingers. The force feedback capability can improve the quality of palpation and characterization of soft tissues of varying stiffness by restoring the sense of touch in robotic assisted minimally invasive surgery operations.
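The statistical workflow reported here, a Friedman test across the four related conditions followed by Wilcoxon signed-rank comparisons, is readily run with SciPy. The data values in the sketch below are made-up placeholders purely so the example runs; they are not the study's measurements.

```python
# Friedman test plus a Wilcoxon signed-rank follow-up, as reported above.
from scipy import stats

# One characterization-accuracy score per subject per condition
# (illustrative placeholder values only).
vision_only       = [0.70, 0.65, 0.72, 0.68, 0.71]
force_only        = [0.78, 0.74, 0.80, 0.75, 0.77]
force_plus_vision = [0.88, 0.85, 0.90, 0.86, 0.89]
direct            = [0.92, 0.90, 0.93, 0.91, 0.94]

# Friedman test across the four related (repeated-measures) conditions.
chi2, p_overall = stats.friedmanchisquare(
    vision_only, force_only, force_plus_vision, direct)

# Pairwise follow-up on two conditions.
w, p_pair = stats.wilcoxon(force_plus_vision, vision_only)
print(f"Friedman p={p_overall:.4f}, Wilcoxon p={p_pair:.4f}")
```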
Miniaturized unified imaging system using bio-inspired fluidic lens
NASA Astrophysics Data System (ADS)
Tsai, Frank S.; Cho, Sung Hwan; Qiao, Wen; Kim, Nam-Hyong; Lo, Yu-Hwa
2008-08-01
Miniaturized imaging systems have become ubiquitous as they are found in an ever-increasing number of devices, such as cellular phones, personal digital assistants, and web cameras. Until now, the design and fabrication methodology of such systems has not been significantly different from that of conventional cameras. The only established method to achieve focusing is by varying the lens distance. On the other hand, the variable-shape crystalline lens found in animal eyes offers inspiration for a more natural way of achieving an optical system with high functionality. Learning from the working concepts of the optics in the animal kingdom, we developed bio-inspired fluidic lenses for a miniature universal imager with auto-focusing, macro, and super-macro capabilities. Because of the enormous dynamic range of fluidic lenses, the miniature camera can even function as a microscope. To compensate for the image quality difference between central vision and peripheral vision, and the shape difference between a solid-state image sensor and a curved retina, we adopted a hybrid design consisting of fluidic lenses for tunability and fixed lenses for aberration and color dispersion correction. A design for the world's smallest surgical camera with 3X optical zoom capability is also demonstrated using the hybrid-lens approach.
The influence of active vision on the exoskeleton of intelligent agents
NASA Astrophysics Data System (ADS)
Smith, Patrice; Terry, Theodore B.
2016-04-01
Chameleonization occurs when the active vision of a self-learning autonomous mobile system (SLAMR) scans the surface on which it is perched, causing the exoskeleton to change colors and exhibit a chameleon effect. An intelligent agent's ability to adapt to its environment and exhibit the key survivability characteristics of that environment would be due largely to the use of active vision. Active vision would allow the intelligent agent to scan its environment and adapt as needed in order to avoid detection. The SLAMR system would have an exoskeleton that changes based on the surface on which it is perched; this is known as the "chameleon effect," not in the common sense of the term, but in the techno-bio-inspired meaning addressed in our previous paper. Active vision, utilizing stereoscopic color sensing functionality, would enable the intelligent agent to scan an object within close proximity, determine the color scheme, and match it, allowing the agent to blend with its environment. Through the use of its optical capabilities, the SLAMR system would be able to further determine its position, taking into account spatial and temporal correlation and the spatial frequency content of neighboring structures, further ensuring successful background blending. The complex visual tasks of identifying objects, using edge detection, image filtering, and feature extraction, are essential for an intelligent agent to gain additional knowledge about its environmental surroundings.
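The paper gives no algorithm for the color-matching step; one plausible realization is to cluster the pixels of the scanned surface patch and adopt the dominant cluster colors. The scikit-learn sketch below illustrates that assumed approach, with illustrative names and parameters.

```python
# Dominant-color extraction from a scanned surface patch via k-means.
import numpy as np
from sklearn.cluster import KMeans

def dominant_colors(rgb_patch, k=3):
    # rgb_patch: HxWx3 array of the surface the agent has scanned.
    pixels = rgb_patch.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=10).fit(pixels)
    # Rank cluster centers by how many pixels each covers.
    counts = np.bincount(km.labels_, minlength=k)
    order = np.argsort(counts)[::-1]
    return km.cluster_centers_[order]  # k RGB triples, most common first
```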
Multi-spectrum-based enhanced synthetic vision system for aircraft DVE operations
NASA Astrophysics Data System (ADS)
Kashyap, Sudesh K.; Naidu, V. P. S.; Shanthakumar, N.
2016-04-01
This paper focuses on R&D being carried out at CSIR-NAL on an Enhanced Synthetic Vision System (ESVS) for the Indian regional transport aircraft, to enhance all-weather operational capabilities with safety and pilot Situation Awareness (SA) improvements. A flight simulator has been developed to study ESVS-related technologies, to develop ESVS operational concepts for all-weather approach and landing, and to provide quantitative and qualitative information that could be used to develop criteria for all-weather approach and landing at regional airports in India. An Enhanced Vision System (EVS) hardware prototype with a long-wave infrared sensor and a low-light CMOS camera was used to carry out a few field trials on a ground vehicle on an airport runway under different visibility conditions. A data acquisition and playback system has been developed to capture EVS sensor data (images) in time sync with the test vehicle's inertial navigation data during EVS field experiments, and to play back the experimental data on the ESVS flight simulator for ESVS research and concept studies. Efforts are underway to conduct EVS flight experiments on CSIR-NAL's research aircraft HANSA in a Degraded Visual Environment (DVE).
Artificial vision support system (AVS(2)) for improved prosthetic vision.
Fink, Wolfgang; Tarbell, Mark A
2014-11-01
State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
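The core operation, enhancing contrast transitions and then reducing the frame to the electrode-array resolution, can be sketched as follows. The grid size, enhancement weight, and function name are illustrative assumptions, not the actual AVS(2) modules.

```python
# Edge-emphasized pixelation of a grayscale frame down to an electrode grid.
import cv2
import numpy as np

def pixelate_for_implant(frame_gray, grid=(10, 6)):
    # Emphasize contrast transitions before throwing away fine detail.
    edges = cv2.Laplacian(frame_gray, cv2.CV_32F, ksize=3)
    enhanced = np.clip(frame_gray.astype(np.float32) + 0.5 * np.abs(edges),
                       0, 255)
    # One value per electrode via area averaging; dsize is (width, height).
    return cv2.resize(enhanced, grid,
                      interpolation=cv2.INTER_AREA).astype(np.uint8)
```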
Trauma Pod: a semi-automated telerobotic surgical system.
Garcia, Pablo; Rosen, Jacob; Kapoor, Chetan; Noakes, Mark; Elbert, Greg; Treat, Michael; Ganous, Tim; Hanson, Matt; Manak, Joe; Hasser, Chris; Rohler, David; Satava, Richard
2009-06-01
The Trauma Pod (TP) vision is to develop a rapidly deployable robotic system to perform critical acute stabilization and/or surgical procedures, autonomously or in a teleoperative mode, on wounded soldiers in the battlefield who might otherwise die before treatment in a combat hospital could be provided. In the first phase of a project pursuing this vision, a robotic TP system was developed and its capability demonstrated by performing selected surgical procedures on a patient phantom. The system demonstrates the feasibility of performing acute stabilization procedures with the patient being the only human in the surgical cell. The teleoperated surgical robot is supported by autonomous robotic arms and subsystems that carry out scrub-nurse and circulating-nurse functions. Tool change and supply delivery are performed automatically and at least as fast as performed manually by nurses. Tracking and counting of the supplies is performed automatically. The TP system also includes a tomographic X-ray facility for patient diagnosis and two-dimensional (2D) fluoroscopic data to support interventions. The vast amount of clinical protocols generated in the TP system are recorded automatically. Automation and teleoperation capabilities form the basis for a more comprehensive acute diagnostic and management platform that will provide life-saving care in environments where surgical personnel are not present.
HALO: a reconfigurable image enhancement and multisensor fusion system
NASA Astrophysics Data System (ADS)
Wu, F.; Hickman, D. L.; Parker, Steve J.
2014-06-01
Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.
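As a point of reference for the ICE function, adaptive histogram equalization is a common contrast-enhancement building block; the OpenCV sketch below is an assumed stand-in, not HALO's proprietary processing.

```python
# Local contrast enhancement with CLAHE (contrast-limited adaptive
# histogram equalization).
import cv2

def enhance_contrast(gray_frame):
    # gray_frame: 8-bit single-channel image.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray_frame)
```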
Exploration Medical Capability System Engineering Introduction and Vision
NASA Technical Reports Server (NTRS)
Mindock, J.; Reilly, J.
2017-01-01
Human exploration missions to beyond low Earth orbit destinations such as Mars will require more autonomous capability compared to current low Earth orbit operations. For the medical system, lack of consumable resupply, evacuation opportunities, and real-time ground support are key drivers toward greater autonomy. Recognition of the limited mission and vehicle resources available to carry out exploration missions motivates the Exploration Medical Capability (ExMC) Element's approach to enabling the necessary autonomy. The Element's work must integrate with the overall exploration mission and vehicle design efforts to successfully provide exploration medical capabilities. ExMC is applying systems engineering principles and practices to accomplish its integrative goals. This talk will briefly introduce the discipline of systems engineering and key points in its application to exploration medical capability development. It will elucidate technical medical system needs to be met by the systems engineering work, and the structured and integrative science and engineering approach to satisfying those needs, including the development of shared mental and qualitative models within and external to the human health and performance community. These efforts are underway to ensure relevancy to exploration system maturation and to establish medical system development that is collaborative with vehicle and mission design and engineering efforts.
Bray, Nathan; Brand, Andrew; Taylor, John; Hoare, Zoe; Dickinson, Christine; Edwards, Rhiannon T
2017-08-01
To determine the incremental cost-effectiveness of portable electronic vision enhancement system (p-EVES) devices compared with optical low vision aids (LVAs), for improving near vision visual function, quality of life and well-being of people with a visual impairment. An AB/BA randomized crossover trial design was used. Eighty-two participants completed the study. Participants were current users of optical LVAs who had not tried a p-EVES device before and had a stable visual impairment. The trial intervention was the addition of a p-EVES device to the participant's existing optical LVA(s) for 2 months, and the control intervention was optical LVA use only, for 2 months. Cost-effectiveness and cost-utility analyses were conducted from a societal perspective. The mean cost of the p-EVES intervention was £448. Carer costs were £30 (4.46 hr) less for the p-EVES intervention compared with the LVA only control. The mean difference in total costs was £417. Bootstrapping gave an incremental cost-effectiveness ratio (ICER) of £736 (95% CI £481 to £1525) for a 7% improvement in near vision visual function. Cost per quality-adjusted life year (QALY) ranged from £56 991 (lower 95% CI = £19 801) to £66 490 (lower 95% CI = £23 055). Sensitivity analysis varying the commercial price of the p-EVES device reduced ICERs by up to 75%, with cost per QALYs falling below £30 000. Portable electronic vision enhancement system (p-EVES) devices are likely to be a cost-effective use of healthcare resources for improving near vision visual function, but this does not translate into cost-effective improvements in quality of life, capability or well-being. © 2016 The Authors. Acta Ophthalmologica published by John Wiley & Sons Ltd on behalf of Acta Ophthalmologica Scandinavica Foundation and European Association for Vision & Eye Research.
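The headline statistic here, a bootstrapped ICER, is the mean incremental cost divided by the mean incremental effect, with resampling to obtain a confidence interval. The sketch below shows the computation on per-participant paired differences; the inputs are illustrative placeholders, not the trial data.

```python
# Bootstrapped incremental cost-effectiveness ratio (ICER).
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_icer(cost_diff, effect_diff, n_boot=5000):
    # cost_diff, effect_diff: NumPy arrays of per-participant paired
    # differences (intervention minus control).
    n = len(cost_diff)
    icers = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample participants with replacement
        icers[i] = cost_diff[idx].mean() / effect_diff[idx].mean()
    icers.sort()
    point = cost_diff.mean() / effect_diff.mean()
    # Point estimate and a 95% percentile confidence interval.
    return point, (icers[int(0.025 * n_boot)], icers[int(0.975 * n_boot)])
```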
Three-dimensional displays and stereo vision
Westheimer, Gerald
2011-01-01
Procedures for three-dimensional image reconstruction that are based on the optical and neural apparatus of human stereoscopic vision have to be designed to work in conjunction with it. The principal methods of implementing stereo displays are described. Properties of the human visual system are outlined as they relate to depth discrimination capabilities and achieving optimal performance in stereo tasks. The concept of depth rendition is introduced to define the change in the parameters of three-dimensional configurations for cases in which the physical disposition of the stereo camera with respect to the viewed object differs from that of the observer's eyes. PMID:21490023
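Depth rendition can be made concrete with small-angle stereo geometry: disparity scales with the capture baseline, so imagery captured with baseline b and viewed with interocular distance e has its reconstructed depth rescaled by e/b. The sketch below encodes that approximation; the symbols are illustrative, and the relation holds only under the stated small-angle assumptions with matched viewing geometry.

```python
# Small-angle depth-rendition approximation: disparity d ~ b * f / Z, so
# viewing imagery captured with baseline b through eyes separated by e
# rescales apparent depth by e / b.
def rendered_depth(true_depth_m, camera_baseline_m, interocular_m=0.065):
    return true_depth_m * interocular_m / camera_baseline_m
```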
The Role of X-Rays in Future Space Navigation and Communication
NASA Technical Reports Server (NTRS)
Winternitz, Luke M. B.; Gendreau, Keith C.; Hasouneh, Monther A.; Mitchell, Jason W.; Fong, Wai H.; Lee, Wing-Tsz; Gavriil, Fotis; Arzoumanian, Zaven
2013-01-01
In the near future, applications using X-rays will enable autonomous navigation and time distribution throughout the solar system, high capacity and low-power space data links, highly accurate attitude sensing, and extremely high-precision formation flying capabilities. Each of these applications alone has the potential to revolutionize mission capabilities, particularly beyond Earth orbit. This paper will outline the NASA Goddard Space Flight Center vision and efforts toward realizing the full potential of X-ray navigation and communications.
NASA Capability Roadmaps Executive Summary
NASA Technical Reports Server (NTRS)
Willcoxon, Rita; Thronson, Harley; Varsi, Guilio; Mueller, Robert; Regenie, Victoria; Inman, Tom; Crooke, Julie; Coulter, Dan
2005-01-01
This document is the result of eight months of hard work and dedication from NASA, industry, other government agencies, and academic experts from across the nation. It provides a summary of the capabilities necessary to execute the Vision for Space Exploration and the key architecture decisions that drive the direction for those capabilities. This report is being provided to the Exploration Systems Architecture Study (ESAS) team for consideration in the development of an architecture approach and investment strategy to support NASA's future missions, programs, and budget requests. In addition, it will be an excellent reference for NASA's strategic planning. A more detailed set of roadmaps at the technology and sub-capability levels is available on CD. These detailed products include key driving assumptions, capability maturation assessments, and technology and capability development roadmaps.
Episodic Reasoning for Vision-Based Human Action Recognition
Martinez-del-Rincon, Jesus
2014-01-01
Smart Spaces, Ambient Intelligence, and Ambient Assisted Living are environmental paradigms that strongly depend on their capability to recognize human actions. While most solutions rest on sensor value interpretations and video analysis applications, few have realized the importance of incorporating common-sense capabilities to support the recognition process. Unfortunately, human action recognition cannot be successfully accomplished by only analyzing body postures. On the contrary, this task should be supported by profound knowledge of human agency nature and its tight connection to the reasons and motivations that explain it. The combination of this knowledge and the knowledge about how the world works is essential for recognizing and understanding human actions without committing common-senseless mistakes. This work demonstrates the impact that episodic reasoning has in improving the accuracy of a computer vision system for human action recognition. This work also presents formalization, implementation, and evaluation details of the knowledge model that supports the episodic reasoning. PMID:24959602
The Advanced Modeling, Simulation and Analysis Capability Roadmap Vision for Engineering
NASA Technical Reports Server (NTRS)
Zang, Thomas; Lieber, Mike; Norton, Charles; Fucik, Karen
2006-01-01
This paper summarizes a subset of the Advanced Modeling Simulation and Analysis (AMSA) Capability Roadmap that was developed for NASA in 2005. The AMSA Capability Roadmap Team was chartered "to identify what is needed to enhance NASA's capabilities to produce leading-edge exploration and science missions by improving engineering system development, operations, and science understanding through broad application of advanced modeling, simulation and analysis techniques." The AMSA roadmap stressed the need for integration, not just within the science, engineering and operations domains themselves, but also across these domains. Here we discuss the roadmap element pertaining to integration within the engineering domain, with a particular focus on implications for future observatory missions. The AMSA products supporting the system engineering function are mission information, bounds on information quality, and system validation guidance. The Engineering roadmap element contains five sub-elements: (1) Large-Scale Systems Models, (2) Anomalous Behavior Models, (3) Advanced Uncertainty Models, (4) Virtual Testing Models, and (5) Space-Based Robotics Manufacture and Servicing Models.
Overview of the Small Aircraft Transportation System Project Four Enabling Operating Capabilities
NASA Technical Reports Server (NTRS)
Viken, Sally A.; Brooks, Frederick M.; Johnson, Sally C.
2005-01-01
It has become evident that our commercial air transportation system is reaching its peak in terms of capacity, with numerous delays in the system and demand still steadily increasing. NASA, the FAA, and the National Consortium for Aviation Mobility (NCAM) have partnered to aid in increasing mobility throughout the United States through the Small Aircraft Transportation System (SATS) project. The SATS project has been a five-year effort to provide the technical and economic basis for further national investment and policy decisions to support a small aircraft transportation system. The SATS vision is to enable people and goods to have the convenience of on-demand point-to-point travel, anywhere, anytime, for both personal and business travel. This vision can be achieved by expanding near all-weather access to more than 3,400 small community airports that are currently under-utilized throughout the United States. SATS has focused its efforts on four key operating capabilities that have addressed new emerging technologies, procedures, and concepts to pave the way for small aircraft to operate in nearly all weather conditions at virtually any runway in the United States. These four key operating capabilities are: Higher Volume Operations at Non-Towered/Non-Radar Airports, En Route Procedures and Systems for Integrated Fleet Operations, Lower Landing Minimums at Minimally Equipped Landing Facilities, and Increased Single Pilot Performance. The SATS project culminated with the 2005 SATS Public Demonstration in Danville, Virginia, on June 5-7, showcasing the accomplishments achieved throughout the project and demonstrating that a small aircraft transportation system could be viable. The technologies, procedures, and concepts were successfully demonstrated to show that they were safe, effective, and affordable for small aircraft in near all-weather conditions. The focus of this paper is to provide an overview of the technical and operational feasibility of the four operating capabilities, and to explain how they can enable a small aircraft transportation system.
A High Performance Micro Channel Interface for Real-Time Industrial Image Processing
Thomas H. Drayer; Joseph G. Tront; Richard W. Conners
1995-01-01
Data collection and transfer devices are critical to the performance of any machine vision system. The interface described in this paper collects image data from a color line scan camera and transfers the data obtained into the system memory of a Micro Channel-based host computer. A maximum data transfer rate of 20 Mbytes/sec can be achieved using the DMA capabilities...
Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system
NASA Astrophysics Data System (ADS)
Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping
2015-05-01
Irregular shape objects with different three-dimensional (3D) appearances are difficult to shape into a customized uniform pattern with current laser machining approaches. A laser galvanometric scanning system (LGS) is a potential candidate, since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool that is capable of generating a 3D-adjusted laser processing path by measuring the 3D geometry of those irregular shape objects. This paper proposes the stereo vision laser galvanometric scanning system (SLGS), which combines the advantages of the stereo vision solution and the conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. In order to achieve precise visual-servoed laser fabrication, these two independent components are integrated through a system calibration method using a plastic thin-film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.
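The abstract does not spell out the triangulation step, but the core of any stereo-guided scanner is recovering 3D points from matched pixels. Below is a minimal sketch for a rectified stereo pair; the focal length, baseline, principal point, and pixel coordinates are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def triangulate_rectified(xl, xr, y, f_px, baseline_m, cx, cy):
    """Recover a 3D point (left-camera frame) from a rectified stereo pair.

    xl, xr : x-coordinates (pixels) of the same feature in left/right images
    f_px   : focal length in pixels; baseline_m : camera separation in metres
    """
    d = xl - xr                       # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    Z = f_px * baseline_m / d         # depth from similar triangles
    X = (xl - cx) * Z / f_px          # lateral offset
    Y = (y - cy) * Z / f_px           # vertical offset
    return np.array([X, Y, Z])

# Hypothetical numbers: f = 1600 px, 12 cm baseline, 40 px disparity.
p = triangulate_rectified(xl=812.0, xr=772.0, y=604.0,
                          f_px=1600.0, baseline_m=0.12, cx=640.0, cy=512.0)
print(p)  # point roughly 4.8 m in front of the left camera
```

Points recovered this way would then be mapped into the galvanometer frame by the system calibration the paper describes.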
Testing and evaluation of a wearable augmented reality system for natural outdoor environments
NASA Astrophysics Data System (ADS)
Roberts, David; Menozzi, Alberico; Cook, James; Sherrill, Todd; Snarski, Stephen; Russler, Pat; Clipp, Brian; Karl, Robert; Wenger, Eric; Bennett, Matthew; Mauger, Jennifer; Church, William; Towles, Herman; MacCabe, Stephen; Webb, Jeffrey; Lupo, Jasper; Frahm, Jan-Michael; Dunn, Enrique; Leslie, Christopher; Welch, Greg
2013-05-01
This paper describes performance evaluation of a wearable augmented reality system for natural outdoor environments. Applied Research Associates (ARA), as prime integrator on the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program, is developing a soldier-worn system to provide intuitive `heads-up' visualization of tactically-relevant geo-registered icons. Our system combines a novel pose estimation capability, a helmet-mounted see-through display, and a wearable processing unit to accurately overlay geo-registered iconography (e.g., navigation waypoints, sensor points of interest, blue forces, aircraft) on the soldier's view of reality. We achieve accurate pose estimation through fusion of inertial, magnetic, GPS, terrain data, and computer-vision inputs. We leverage a helmet-mounted camera and custom computer vision algorithms to provide terrain-based measurements of absolute orientation (i.e., orientation of the helmet with respect to the earth). These orientation measurements, which leverage mountainous terrain horizon geometry and mission planning landmarks, enable our system to operate robustly in the presence of external and body-worn magnetic disturbances. Current field testing activities across a variety of mountainous environments indicate that we can achieve high icon geo-registration accuracy (<10mrad) using these vision-based methods.
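The paper's full fusion filter is not given in the abstract; as a rough illustration of the drift-correction idea, the sketch below blends integrated gyro yaw with an absolute heading fix of the kind the horizon matcher provides. The function name and the 0.98 blend weight are illustrative assumptions.

```python
import math

def fuse_heading(prev_heading, gyro_yaw_rate, vision_heading, dt, alpha=0.98):
    """One complementary-filter step: integrate the gyro for smooth short-term
    motion, then pull toward the vision-derived absolute heading so gyro drift
    and magnetic disturbances cannot accumulate. Angles are in radians."""
    propagated = prev_heading + gyro_yaw_rate * dt
    # shortest signed angular difference, so blending behaves across +/-pi
    err = math.atan2(math.sin(vision_heading - propagated),
                     math.cos(vision_heading - propagated))
    return propagated + (1.0 - alpha) * err
```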
Robot computer problem solving system
NASA Technical Reports Server (NTRS)
Merriam, E. W.; Becker, J. D.
1973-01-01
A robot computer problem solving system which represents a robot exploration vehicle in a simulated Mars environment is described. The model exhibits changes and improvements made on a previously designed robot in a city environment. The Martian environment is modeled in Cartesian coordinates; objects are scattered about a plane; arbitrary restrictions on the robot's vision have been removed; and the robot's path contains arbitrary curves. New environmental features, particularly the visual occlusion of objects by other objects, were added to the model. Two different algorithms were developed for computing occlusion. Movement and vision capabilities of the robot were established in the Mars environment, using a LISP/FORTRAN interface for computational efficiency. The graphical display program was redesigned to reflect the change to the Mars-like environment.
Transition of Attention in Terminal Area NextGen Operations Using Synthetic Vision Systems
NASA Technical Reports Server (NTRS)
Ellis, Kyle K. E.; Kramer, Lynda J.; Shelton, Kevin J.; Arthur, J. J., III; Prinzel, Lance J., III; Norman, Robert M.
2011-01-01
This experiment investigates the capability of Synthetic Vision Systems (SVS) to provide significant situation awareness in terminal area operations, specifically in low-visibility conditions. The use of a Head-Up Display (HUD) and Head-Down Displays (HDD) with SVS is contrasted with baseline standard head-down displays in terms of induced workload and pilot behavior at 1400 RVR visibility levels. Variances across performance and pilot behavior were reviewed for acceptability when using the HUD or HDD with SVS under reduced minimums to acquire the necessary visual references to continue to land. The data suggest superior performance for HUD implementations. Improved attentional behavior is also suggested for HDD implementations of SVS for low-visibility approach and landing operations.
Helicopter flights with night-vision goggles: Human factors aspects
NASA Technical Reports Server (NTRS)
Brickner, Michael S.
1989-01-01
Night-vision goggles (NVGs) and, in particular, the advanced, helmet-mounted Aviator's Night Vision Imaging System (ANVIS) allow helicopter pilots to perform low-level flight at night. They consist of light intensifier tubes which amplify low-intensity ambient illumination (star and moon light) and an optical system which together produce a bright image of the scene. However, these NVGs do not turn night into day, and, while they may often provide significant advantages over unaided night flight, they may also result in visual fatigue, high workload, and safety hazards. These problems reflect both system limitations and human-factors issues. A brief description of the technical characteristics of NVGs and of human night-vision capabilities is followed by a description and analysis of specific perceptual problems which occur with the use of NVGs in flight. Some of the issues addressed include: limitations imposed by a restricted field of view; problems related to binocular rivalry; the consequences of inappropriate focusing of the eye; the effects of ambient illumination levels and of various types of terrain on image quality; difficulties in distance and slope estimation; effects of dazzling; and visual fatigue and superimposed symbology. These issues are described and analyzed in terms of their possible consequences on helicopter pilot performance. The additional influence of individual differences among pilots is emphasized. Thermal imaging systems (forward looking infrared (FLIR)) are described briefly and compared to light intensifier systems (NVGs). Many of the phenomena which are described are not readily understood. More research is required to better understand the human-factors problems created by the use of NVGs and other night-vision aids, to enhance system design, and to improve training methods and simulation techniques.
A light-stimulated synaptic device based on graphene hybrid phototransistor
NASA Astrophysics Data System (ADS)
Qin, Shuchao; Wang, Fengqiu; Liu, Yujie; Wan, Qing; Wang, Xinran; Xu, Yongbing; Shi, Yi; Wang, Xiaomu; Zhang, Rong
2017-09-01
Neuromorphic chips refer to an unconventional computing architecture that is modelled on biological brains. They are increasingly employed for processing sensory data for machine vision, context cognition, and decision making. Despite rapid advances, neuromorphic computing has remained largely an electronic technology, making it a challenge to access the superior computing features provided by photons, or to directly process vision data that has increasing importance to artificial intelligence. Here we report a novel light-stimulated synaptic device based on a graphene-carbon nanotube hybrid phototransistor. Significantly, the device can respond to optical stimuli in a highly neuron-like fashion and exhibits flexible tuning of both short- and long-term plasticity. These features combined with the spatiotemporal processability make our device a capable counterpart to today’s electrically-driven artificial synapses, with superior reconfigurable capabilities. In addition, our device allows for generic optical spike processing, which provides a foundation for more sophisticated computing. The silicon-compatible, multifunctional photosensitive synapse opens up a new opportunity for neural networks enabled by photonics and extends current neuromorphic systems in terms of system complexities and functionalities.
Jensen, Jan L; Travers, Andrew H
2017-05-01
Nationally, emphasis on the importance of evidence-based practice (EBP) in emergency medicine and emergency medical services (EMS) has continuously increased. However, meaningful incorporation of effective and sustainable EBP into clinical and administrative decision-making remains a challenge. We propose a vision for EBP in EMS: Canadian EMS clinicians and leaders will understand and use the best available evidence for clinical and administrative decision-making, to improve patient health outcomes, the capability and quality of EMS systems of care, and safety of patients and EMS professionals. This vision can be implemented with the use of a structure, process, system, and outcome taxonomy to identify current barriers to true EBP, to recognize the opportunities that exist, and propose corresponding recommended strategies for local EMS agencies and at the national level. Framing local and national discussions with this approach will be useful for developing a cohesive and collaborative Canadian EBP strategy.
The 3-D vision system integrated dexterous hand
NASA Technical Reports Server (NTRS)
Luo, Ren C.; Han, Youn-Sik
1989-01-01
Most multifingered hands use a tendon mechanism to minimize the size and weight of the hand. Such tendon mechanisms suffer from the problems of stiction and friction of the tendons, resulting in a reduction of control accuracy. A design for a 3-D vision system integrated dexterous hand with motor control is described which overcomes these problems. The proposed hand is composed of three three-jointed grasping fingers with tactile sensors on their tips, and a two-jointed eye finger with a cross-shaped laser beam emitting diode in its distal part. The two non-grasping fingers allow 3-D vision capability and can rotate around the hand to see and measure the sides of grasped objects and the task environment. An algorithm that determines the range and local orientation of the contact surface using a cross-shaped laser beam is introduced along with some potential applications. An efficient method for finger force calculation is presented which uses the measured contact surface normals of an object.
Laser electro-optic system for rapid three-dimensional /3-D/ topographic mapping of surfaces
NASA Technical Reports Server (NTRS)
Altschuler, M. D.; Altschuler, B. R.; Taboada, J.
1981-01-01
It is pointed out that the generic utility of a robot in a factory/assembly environment could be substantially enhanced by providing a vision capability to the robot. A standard videocamera for robot vision provides a two-dimensional image which contains insufficient information for a detailed three-dimensional reconstruction of an object. Approaches which supply the additional information needed for the three-dimensional mapping of objects with complex surface shapes are briefly considered and a description is presented of a laser-based system which can provide three-dimensional vision to a robot. The system consists of a laser beam array generator, an optical image recorder, and software for controlling the required operations. The projection of a laser beam array onto a surface produces a dot pattern image which is viewed from one or more suitable perspectives. Attention is given to the mathematical method employed, the space coding technique, the approaches used for obtaining the transformation parameters, the optics for laser beam array generation, the hardware for beam array coding, and aspects of image acquisition.
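The abstract names the space-coding and triangulation steps without formulas. A minimal sketch of the geometric core, assuming each coded beam has been calibrated as a plane n.p = d in the camera frame (all values below are placeholders):

```python
import numpy as np

def pixel_ray(u, v, f_px, cx, cy):
    """Unit direction of the camera ray through pixel (u, v), pinhole model."""
    d = np.array([(u - cx) / f_px, (v - cy) / f_px, 1.0])
    return d / np.linalg.norm(d)

def laser_point(u, v, f_px, cx, cy, plane_n, plane_d):
    """Intersect the pixel ray with the calibrated laser plane n . p = d.
    This is the triangulation applied to each imaged dot once the space code
    has identified which beam produced it."""
    ray = pixel_ray(u, v, f_px, cx, cy)
    t = plane_d / (plane_n @ ray)    # ray parameter where the plane is hit
    return t * ray                   # 3D point in the camera frame

# Hypothetical calibration of one beam plane, for illustration only.
n = np.array([0.0, -0.5, 0.8660254])
p = laser_point(700.0, 420.0, 1500.0, 640.0, 480.0, n, plane_d=0.25)
```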
Vision communications based on LED array and imaging sensor
NASA Astrophysics Data System (ADS)
Yoo, Jong-Ho; Jung, Sung-Yoon
2012-11-01
In this paper, we propose a brand new communication concept called "vision communication," based on an LED array and an image sensor. This system consists of an LED array as the transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as the receiver. In order to transmit data, the proposed communication scheme simultaneously uses digital image processing and optical wireless communication techniques. Therefore, a cognitive communication scheme is possible with the help of recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED in the array can emit a multi-spectral optical signal, such as visible, infrared, or ultraviolet light, the data rate can be increased in a manner similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of sync data and information data. The sync data is used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical modulation rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot based on image processing and optical wireless communication techniques. Through experiments on a practical test bed system, we confirm the feasibility of the proposed vision communications based on an LED array and an image sensor.
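As a concrete illustration of the per-snapshot decoding step, the sketch below thresholds each cell of an already-localized, rectified LED-array region into a bit; the grid size and threshold are assumptions, and the sync-based localization the abstract describes is taken as given.

```python
import numpy as np

def decode_led_frame(gray, grid=(8, 8), threshold=128):
    """Decode one image snapshot of a rows x cols LED array into a bit vector.
    `gray` is the rectified transmitter region as a 2D uint8 array; each grid
    cell maps to one LED, declared on/off by its mean brightness."""
    h, w = gray.shape
    rows, cols = grid
    bits = np.zeros(grid, dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            cell = gray[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
            bits[r, c] = cell.mean() > threshold
    return bits.flatten()
```

With the LED modulation rate matched to the camera frame rate, each snapshot yields one such codeword.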
Intelligent surgical laser system configuration and software implementation
NASA Astrophysics Data System (ADS)
Hsueh, Chi-Fu T.; Bille, Josef F.
1992-06-01
An intelligent surgical laser system, which can help the ophthalmologist achieve higher precision and control during procedures, has been developed by ISL as model CLS 4001. In addition to the laser and laser delivery system, the system is also equipped with a vision system (IPU), robotic motion control (MCU), and a closed-loop tracking system (ETS) that tracks the eye in three dimensions (X, Y and Z). The initial patient setup is computer controlled with guidance from the vision system. The tracking system is automatically engaged when the target is in position. A multi-level tracking system, developed by integrating our vision and tracking systems, has been able to maintain the laser beam precisely on target. The capabilities of automatic eye setup and tracking in three dimensions provide improved accuracy and measurement repeatability. The system is operated through the Surgical Control Unit (SCU). The SCU communicates with the IPU and the MCU through both Ethernet and RS232. Various scanning patterns (line, curve, circle, spiral, etc.) can be selected with given parameters. When a warning is activated, a voice message is played that normally requires a panel touch acknowledgement. The reliability of the system is ensured at three levels: (1) hardware, (2) software real-time monitoring, and (3) user. The system is currently under clinical validation.
Robot vision system programmed in Prolog
NASA Astrophysics Data System (ADS)
Batchelor, Bruce G.; Hack, Ralf
1995-10-01
This is the latest in a series of publications which develop the theme of programming a machine vision system using the artificial intelligence language Prolog. The article states the long-term objective of the research program of which this work forms part. Many but not yet all of the goals laid out in this plan have already been achieved in an integrated system, which uses a multi-layer control hierarchy. The purpose of the present paper is to demonstrate that a system based upon a Prolog controller is capable of making complex decisions and operating a standard robot. The authors chose, as a vehicle for this exercise, the task of playing dominoes against a human opponent. This game was selected for this demonstration since it models a range of industrial assembly tasks, where parts are to be mated together. (For example, a 'daisy chain' of electronic equipment and the interconnecting cables/adapters may be likened to a chain of dominoes.)
Theory on data processing and instrumentation. [remote sensing
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1978-01-01
A selection of NASA Earth observations programs are reviewed, emphasizing hardware capabilities. Sampling theory, noise and detection considerations, and image evaluation are discussed for remote sensor imagery. Vision and perception are considered, leading to numerical image processing. The use of multispectral scanners and of multispectral data processing systems, including digital image processing, is depicted. Multispectral sensing and analysis in application with land use and geographical data systems are also covered.
1988-04-30
Keywords: haptic hand, touch, vision, robot, object recognition, categorization. ...established that the haptic system has remarkable capabilities for object recognition. We define haptics as purposive touch. The basic tactual system...gathered ratings of the importance of dimensions for categorizing common objects by touch. Texture and hardness ratings strongly co-vary, which is
Efficacy of Low Vision Services for Visually Impaired Children.
ERIC Educational Resources Information Center
Hofstetter, H. W.
1991-01-01
Low vision children (ages 4-19, n=137) were screened, and 77 percent were advised to have comprehensive clinical evaluations or ophthalmology services. The visual capability of the referred children was determined, low vision aids were prescribed for 56 children, and the degree of successful utilization of aids was evaluated. (JDD)
NASA Astrophysics Data System (ADS)
Hofmann, Ulrich; Siedersberger, Karl-Heinz
2003-09-01
Driving cross-country, detection and state estimation of negative obstacles like ditches and creeks are mandatory for safe operation. Very often, ditches can be detected both by different photometric properties (soil vs. vegetation) and by range (disparity) discontinuities. Therefore, algorithms should make use of both the photometric and geometric properties to reliably detect obstacles. This has been achieved in UBM's EMS-Vision system (Expectation-based, Multifocal, Saccadic) for autonomous vehicles. The perception system uses Sarnoff's image processing hardware for real-time stereo vision. This sensor provides both gray value and disparity information for each pixel at high resolution and frame rates. In order to perform an autonomous jink, the boundaries of an obstacle have to be measured accurately for calculating a safe driving trajectory. Ditches in particular are often very extended, so due to the restricted field of view of the cameras, active gaze control is necessary to explore the boundaries of an obstacle. For successful measurements of image features the system has to satisfy conditions defined by the perception expert. It has to deal with the time constraints of the active camera platform while performing saccades and to keep the geometric conditions defined by the locomotion expert for performing a jink. Therefore, the experts have to cooperate. This cooperation is controlled by a central decision unit (CD), which has knowledge about the mission, the capabilities available in the system, and their limitations. The central decision unit reacts depending on the result of situation assessment by starting, parameterizing or stopping actions (instances of capabilities). The approach has been tested with the 5-ton van VaMoRs. Experimental results are shown for driving in a typical off-road scenario.
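The abstract describes using disparity discontinuities to find ditches; below is a toy version of that cue, assuming a dense disparity map and locally flat ground (both simplifications of mine, not taken from the paper).

```python
import numpy as np

def find_ditch_spans(disparity_col, min_drop=4.0, min_width=5):
    """Scan one image column of a dense disparity map and flag negative-
    obstacle candidates: runs of pixels whose disparity drops sharply below
    the ground profile (the inside of a ditch looks 'farther away' than the
    surrounding ground), wide enough to matter for the vehicle."""
    ground = np.median(disparity_col)          # crude flat-ground reference
    below = disparity_col < (ground - min_drop)
    spans, start = [], None
    for i, b in enumerate(below):
        if b and start is None:
            start = i                          # drop begins
        elif not b and start is not None:
            if i - start >= min_width:
                spans.append((start, i))       # drop wide enough: candidate
            start = None
    return spans
```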
Alternatives for Future U.S. Space-Launch Capabilities
2006-10-01
A directive issued on January 14, 2004, called the new Vision for Space Exploration (VSE), set out goals for future exploration of the solar system using manned spacecraft. Among those goals was a proposal to return humans to the moon no later than 2020. The ultimate goal...U.S. launch capacity exclude the Sea Launch system operated by Boeing in partnership with RSC-Energia (based in Moscow), Kvaerner ASA (based in Oslo
Instrumentation, Control, and Intelligent Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2005-09-01
Abundant and affordable energy is required for U.S. economic stability and national security. Advanced nuclear power plants offer the best near-term potential to generate abundant, affordable, and sustainable electricity and hydrogen without appreciable generation of greenhouse gases. To that end, Idaho National Laboratory (INL) has been charged with leading the revitalization of nuclear power in the U.S. The INL vision is to become the preeminent nuclear energy laboratory with synergistic, world-class, multi-program capabilities and partnerships by 2015. The vision focuses on four essential destinations: (1) Be the preeminent internationally-recognized nuclear energy research, development, and demonstration laboratory; (2) Be a major center for national security technology development and demonstration; (3) Be a multi-program national laboratory with world-class capabilities; (4) Foster academic, industry, government, and international collaborations to produce the needed investment, programs, and expertise. Crucial to that effort is the inclusion of research in advanced instrumentation, control, and intelligent systems (ICIS) for use in current and advanced power and energy security systems to enable increased performance, reliability, security, and safety. For nuclear energy plants, ICIS will extend the lifetime of power plant systems, increase performance and power output, and ensure reliable operation within the system's safety margin; for national security applications, ICIS will enable increased protection of our nation's critical infrastructure. In general, ICIS will cost-effectively increase performance for all energy security systems.
Demonstration of Four Operating Capabilities to Enable a Small Aircraft Transportation System
NASA Technical Reports Server (NTRS)
Viken, Sally A.; Brooks, Frederick M.
2005-01-01
The Small Aircraft Transportation System (SATS) project has been a five-year effort fostering research and development that could lead to the transformation of our country's air transportation system. It has become evident that our commercial air transportation system is reaching its peak in terms of capacity, with numerous delays in the system while demand keeps steadily increasing. The SATS vision is to increase mobility in our nation's transportation system by expanding access to more than 3,400 small community airports that are currently under-utilized. The SATS project has focused its efforts on four key operating capabilities that have addressed new emerging technologies and procedures to pave the way for a new way of air travel. The four key operating capabilities are: Higher Volume Operations at Non-Towered/Non-Radar Airports, En Route Procedures and Systems for Integrated Fleet Operations, Lower Landing Minimums at Minimally Equipped Landing Facilities, and Increased Single Pilot Performance. These four capabilities are key to enabling low-cost, on-demand, point-to-point transportation of goods and passengers utilizing small aircraft operating from small airports. The focus of this paper is to discuss the technical and operational feasibility of the four operating capabilities and demonstrate how they can enable a small aircraft transportation system.
Security Applications Of Computer Motion Detection
NASA Astrophysics Data System (ADS)
Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry
1987-05-01
An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data as to the frequency of illegal border crossings. Because most detection and tracking routines assume rigid body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed - the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, mean and median tests and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location and trajectory information as input and determines if the moving object is indeed representative of an illegal border crossing.
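Of the detection variants listed, frame differencing is the simplest to illustrate. A minimal sketch with OpenCV, using a median filter as the speckle cleanup; the thresholds are arbitrary placeholders, not the paper's tuned values.

```python
import cv2

def detect_motion(prev_gray, cur_gray, diff_thresh=25, min_area=50):
    """Frame-differencing motion detector: pixels that changed by more than
    diff_thresh between consecutive grayscale frames are grouped into
    candidate moving objects (connected components above min_area)."""
    diff = cv2.absdiff(prev_gray, cur_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.medianBlur(mask, 5)   # median-style cleanup of isolated speckle
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # label 0 is the background; keep centroids of sufficiently large blobs
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```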
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raj, Sunny; Jha, Sumit Kumar; Pullum, Laura L.
Validating the correctness of human detection vision systems is crucial for safety applications such as pedestrian collision avoidance in autonomous vehicles. The enormous space of possible inputs to such an intelligent system makes it difficult to design test cases for such systems. In this report, we present our tool MAYA that uses an error model derived from a convolutional neural network (CNN) to explore the space of images similar to a given input image, and then tests the correctness of a given human or object detection system on such perturbed images. We demonstrate the capability of our tool on the pre-trained Histogram-of-Oriented-Gradients (HOG) human detection algorithm implemented in the popular OpenCV toolset and the Caffe object detection system pre-trained on the ImageNet benchmark. Our tool may serve as a testing resource for the designers of intelligent human and object detection systems.
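The report's CNN-derived error model is not reproduced in this abstract, so the sketch below substitutes plain Gaussian noise as the perturbation while keeping the same test structure against OpenCV's stock HOG people detector.

```python
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def survives_noise(image, sigma=8.0, trials=10):
    """Robustness probe: perturb an image known to contain a person and
    count how often the HOG detector still fires. Gaussian noise is a
    stand-in here for the report's learned error model."""
    hits = 0
    for _ in range(trials):
        noisy = image.astype(np.float32) + np.random.normal(0, sigma, image.shape)
        noisy = np.clip(noisy, 0, 255).astype(np.uint8)
        rects, _ = hog.detectMultiScale(noisy)
        hits += int(len(rects) > 0)
    return hits / trials
```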
Modeling, Simulation, and Characterization of Distributed Multi-Agent Systems
2012-01-01
capabilities (vision, LIDAR, differential global positioning, ultrasonic proximity sensing, etc.), the agents comprising a MAS tend to have somewhat lesser...on the simultaneous localization and mapping (SLAM) problem [19]. SLAM acknowledges that externally-provided localization information is not...continually-updated mapping databases, generates a comprehensive representation of the spatial and spectral environment. Many times though, inherent SLAM
Nanotechnology for the forest products industry: vision and technology roadmap
Prepared by Energetics, Inc., Atlanta
2005-01-01
Nanotechnology is defined as the manipulation of materials measuring 100 nanometers or less in at least one dimension. Nanotechnology is expected to be a critical driver of global economic growth and development in this century. Already, this broad multi-disciplinary field is providing glimpses of exciting new capabilities, enabling materials, devices, and systems that...
The Combined Armor Regiment: The Future of USMC Armor?
2010-05-13
dollars per vehicle? STRATEGIC SETTING: The CMC's Vision Statement from the Marine Corps Vision and Strategy 2025 publication represents the construct... strategy of the United States Marine Corps. It is especially important for the Marine Corps to be able to adapt to these enduring requirements as...Secretary of Defense and later codified his vision for balancing the Marine Corps' future capabilities in the Marine Corps Vision and Strategy 2025
Humanoids for lunar and planetary surface operations
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Keymeulen, Didier; Csaszar, Ambrus; Gan, Quan; Hidalgo, Timothy; Moore, Jeff; Newton, Jason; Sandoval, Steven; Xu, Jiajing
2005-01-01
This paper presents a vision of humanoid robots as humans' key partners in future space exploration, in particular for construction, maintenance/repair, and operation of lunar/planetary habitats, bases, and settlements. It integrates this vision with recent plans for human and robotic exploration, aligning a set of milestones for operational capability of humanoids with the schedule for the next decades and the development spirals of Project Constellation. These milestones relate to a set of incremental challenges whose solution requires new humanoid technologies. A system-of-systems integrative approach that would lead to readiness of cooperating humanoid crews is sketched. Robot fostering, training/education techniques, and improved cognitive/sensory/motor development techniques are considered essential elements for achieving intelligent humanoids. A pilot project in this direction is outlined.
Man-machine interactive imaging and data processing using high-speed digital mass storage
NASA Technical Reports Server (NTRS)
Alsberg, H.; Nathan, R.
1975-01-01
The role of vision in teleoperation has been recognized as an important element in the man-machine control loop. In most applications of remote manipulation, direct vision cannot be used. To overcome this handicap, the human operator's control capabilities are augmented by a television system. This medium provides a practical and useful link between the workspace and the control station from which the operator performs his tasks. Human performance deteriorates when the images are degraded as a result of instrumental and transmission limitations. Image enhancement is used to bring out selected qualities in a picture to increase the perception of the observer. A general-purpose digital computer with an extensive special-purpose software system is used to perform an almost unlimited repertoire of processing operations.
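As one concrete example of the kind of enhancement operation described, adaptive histogram equalization recovers local contrast in degraded video; this particular method is an illustration, not necessarily the one used in the paper.

```python
import cv2

def enhance_for_teleoperation(frame_gray):
    """Contrast-limited adaptive histogram equalization (CLAHE): stretches
    local contrast in a degraded grayscale frame before operator display."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(frame_gray)
```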
NASA Astrophysics Data System (ADS)
Boye, Michael W.; Zwick, Harry; Stuck, Bruce E.; Edsall, Peter R.; Akers, Andre
2007-02-01
The need for tools that can assist in evaluating visual function is an essential and growing requirement as lasers on the modern battlefield mature and proliferate. The requirement for rapid and sensitive vision assessment under field conditions produced the USAMRD Aidman Vision Screener (AVS), designed to be used as a field diagnostic tool for assessing laser-induced retinal damage. In this paper, we describe additions to the AVS designed to provide a more sensitive assessment of laser-induced retinal dysfunction. The AVS incorporates spectral LogMAR acuity targets without and with neural opponent chromatic backgrounds. Thus, it provides the capability of detecting selective photoreceptor damage and its functional consequences at the level of both the outer and inner retina. Modifications to the original achromatic AVS have been implemented to detect selective cone system dysfunction by providing LogMAR acuity Landolt rings associated with the peak spectral absorption regions of the S (short), M (middle), and L (long) wavelength cone photoreceptor systems. Evaluation of inner retinal dysfunction associated with selective outer cone damage employs LogMAR spectral acuity charts with backgrounds that are neurally opponent. Thus, the AVS provides the capability to assess the effect of selective cone dysfunction on the normal neural balance at the level of the inner retinal interactions. Test and opponent background spectra have been optimized by using color space metrics. A minimal number of three AVS evaluations will be utilized to provide an estimate of false alarm level.
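For reference, LogMAR acuity is just the base-10 log of the minimum angle of resolution in arc minutes; a quick worked computation (the test distance and gap size below are example values, not the AVS chart's specification):

```python
import math

def gap_arcmin(gap_mm, distance_m):
    """Visual angle subtended by a Landolt-ring gap of gap_mm at distance_m."""
    return math.degrees(math.atan(gap_mm / 1000.0 / distance_m)) * 60.0

def logmar(gap_mm, distance_m):
    """LogMAR score: 0.0 corresponds to a 1 arcmin gap (20/20 vision)."""
    return math.log10(gap_arcmin(gap_mm, distance_m))

print(logmar(1.75, 6.0))  # ~0.0: a 1.75 mm gap at 6 m subtends about 1 arcmin
```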
New frontiers for intelligent content-based retrieval
NASA Astrophysics Data System (ADS)
Benitez, Ana B.; Smith, John R.
2001-01-01
In this paper, we examine emerging frontiers in the evolution of content-based retrieval systems that rely on an intelligent infrastructure. Here, we refer to intelligence as the capabilities of the systems to build and maintain situational or world models, utilize dynamic knowledge representation, exploit context, and leverage advanced reasoning and learning capabilities. We argue that these elements are essential to producing effective systems for retrieving audio-visual content at semantic levels matching those of human perception and cognition. In this paper, we review relevant research on the understanding of human intelligence and the construction of intelligent systems in the fields of cognitive psychology, artificial intelligence, semiotics, and computer vision. We also discuss how some of the principal ideas from these fields lead to new opportunities and capabilities for content-based retrieval systems. Finally, we describe some of our efforts in these directions. In particular, we present MediaNet, a multimedia knowledge representation framework, and some MPEG-7 description tools that facilitate and enable intelligent content-based retrieval.
Safe traffic : Vision Zero on the move
DOT National Transportation Integrated Search
2006-03-01
Vision Zero is composed of several basic elements, each of which affects safety in road traffic. These concern ethics, human capability and tolerance, responsibility, scientific facts, and a realisation that the different components in the ...
NASA Technical Reports Server (NTRS)
Mondt, Jack F.; Zubrin, Robert M.
1996-01-01
The vision for the future of the planetary exploration program includes the capability to deliver 'constellations' or 'fleets' of microspacecraft to a planetary destination. These fleets will act in a coordinated manner to gather science data from a variety of locations on or around the target body, thus providing detailed, global coverage without requiring development of a single large, complex and costly spacecraft. Such constellations of spacecraft, coupled with advanced information processing and visualization techniques and high-rate communications, could provide the basis for development of a 'virtual presence' in the solar system. A goal could be the near real-time delivery of planetary images and video to a wide variety of users in the general public and the science community. This will be a major step in making the solar system accessible to the public and will help make solar system exploration a part of the human experience on Earth.
Data Fusion for a Vision-Radiological System for Source Tracking and Discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enqvist, Andreas; Koppal, Sanjeev
2015-07-01
A multidisciplinary approach to allow the tracking of the movement of radioactive sources by fusing data from multiple radiological and visual sensors is under development. The goal is to improve the ability to detect, locate, track and identify nuclear/radiological threats. The key concept is that such widely available visual and depth sensors can impact radiological detection, since the intensity fall-off in the count rate can be correlated to movement in three dimensions. To enable this, we pose an important question: what is the right combination of sensing modalities and vision algorithms that can best complement a radiological sensor, for the purpose of detection and tracking of radioactive material? Similarly, what are the best radiation detection methods and unfolding algorithms suited for data fusion with tracking data? Data fusion of multi-sensor data for radiation detection has seen some interesting developments lately. Significant examples include intelligent radiation sensor systems (IRSS), which are based on larger numbers of distributed similar or identical radiation sensors coupled with position data in a network capable of detecting and locating radiation sources. Other developments are gamma-ray imaging systems based on Compton scatter in segmented detector arrays. Similar developments using coded apertures or scatter cameras for neutrons have recently occurred. The main limitation of such systems is not so much their capability but rather their complexity and cost, which is prohibitive for large-scale deployment. Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking of multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of two separate calibration algorithms for characterizing the fused sensor system. The deviation from a simple inverse-square fall-off of radiation intensity is explored and accounted for. In particular, the computer vision system enables a map of the distance-dependence of the sources being tracked. Infrared, laser or stereoscopic vision sensors are all options for computer-vision implementation depending on interior vs. exterior deployment, resolution desired, and other factors. Similarly, the radiation sensors will be focused on gamma-ray or neutron detection due to the long travel length and ability to penetrate even moderate shielding. There is a significant difference between the vision sensors and radiation sensors in the way the 'source' or signals are generated. A vision sensor needs an external light source to illuminate the object and then detects the re-emitted illumination (or lack thereof). However, for a radiation detector, the radioactive material is the source itself. The only exception to this is the field of active interrogation, where radiation is beamed into a material to entice new/additional radiation emission beyond what the material would emit spontaneously. The fact that the nuclear material is the source itself means that all other objects in the environment are 'illuminated' or irradiated by the source. Most radiation will readily penetrate regular material, scatter in new directions or be absorbed. Thus if a radiation source is located near a larger object, that object will in turn scatter some radiation that was initially emitted in a direction other than the direction of the radiation detector; this can add to the count rate that is observed.
The effect of this scatter is a deviation from the traditional distance dependence of the radiation signal and is a key challenge that needs a combined system calibration solution and algorithms. Thus both an algebraic approach and a statistical approach have been developed and independently evaluated to investigate the sensitivity to this deviation from the simplified radiation fall-off as a function of distance. The resulting calibrated system algorithms are used and demonstrated in various laboratory scenarios, and later in realistic tracking scenarios. The selection and testing of radiological and computer-vision sensors for the additional specific scenarios will be the subject of ongoing and future work. (authors)
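Neither calibration algorithm is spelled out in the abstract; a minimal stand-in for the algebraic flavor is to fit the count rate as an inverse-square term plus a distance-independent scatter/background term, with distances supplied by the vision tracker. The model form and the numbers below are illustrative assumptions.

```python
import numpy as np

def fit_falloff(distances, count_rates):
    """Least-squares fit of count_rate = A / d^2 + B, where B absorbs the
    roughly distance-independent room-scatter/background component that makes
    real data deviate from a pure inverse-square law. Distances come from the
    vision tracker; count rates from the radiation detector."""
    d = np.asarray(distances, dtype=float)
    X = np.column_stack([1.0 / d**2, np.ones_like(d)])
    (A, B), *_ = np.linalg.lstsq(X, np.asarray(count_rates, float), rcond=None)
    return A, B

# Hypothetical calibration run: rates measured at vision-tracked distances.
A, B = fit_falloff([1.0, 1.5, 2.0, 3.0], [412.0, 192.0, 115.0, 58.0])
```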
NASA Astrophysics Data System (ADS)
McGuire, P. C.; Gross, C.; Wendt, L.; Bonnici, A.; Souza-Egipsy, V.; Ormö, J.; Díaz-Martínez, E.; Foing, B. H.; Bose, R.; Walter, S.; Oesker, M.; Ontrup, J.; Haschke, R.; Ritter, H.
2010-01-01
In previous work, a platform was developed for testing computer-vision algorithms for robotic planetary exploration. This platform consisted of a digital video camera connected to a wearable computer for real-time processing of images at geological and astrobiological field sites. The real-time processing included image segmentation and the generation of interest points based upon uncommonness in the segmentation maps. Also in previous work, this platform for testing computer-vision algorithms has been ported to a more ergonomic alternative platform, consisting of a phone camera connected via the Global System for Mobile Communications (GSM) network to a remote-server computer. The wearable-computer platform has been tested at geological and astrobiological field sites in Spain (Rivas Vaciamadrid and Riba de Santiuste), and the phone camera has been tested at a geological field site in Malta. In this work, we (i) apply a Hopfield neural-network algorithm for novelty detection based upon colour, (ii) integrate a field-capable digital microscope on the wearable computer platform, (iii) test this novelty detection with the digital microscope at Rivas Vaciamadrid, (iv) develop a Bluetooth communication mode for the phone-camera platform, in order to allow access to a mobile processing computer at the field sites, and (v) test the novelty detection on the Bluetooth-enabled phone camera connected to a netbook computer at the Mars Desert Research Station in Utah. This systems engineering and field testing have together allowed us to develop a real-time computer-vision system that is capable, for example, of identifying lichens as novel within a series of images acquired in semi-arid desert environments. We acquired sequences of images of geologic outcrops in Utah and Spain consisting of various rock types and colours to test this algorithm. The algorithm robustly recognized previously observed units by their colour, while requiring only a single image or a few images to learn colours as familiar, demonstrating its fast learning capability.
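As a rough sketch of Hopfield-style novelty detection on color (the paper's exact descriptor and update schedule are not given in the abstract), one can store binarized color signatures as attractor patterns and call an input novel when recall moves far away from it:

```python
import numpy as np

class HopfieldNovelty:
    """Minimal Hopfield-style novelty detector over binarized colour
    descriptors (e.g. a thresholded hue histogram with +/-1 entries):
    familiar colours settle onto a stored pattern, novel ones do not."""
    def __init__(self, n):
        self.W = np.zeros((n, n))

    def learn(self, pattern):
        """Hebbian storage of one +/-1 pattern as an attractor."""
        p = np.asarray(pattern, float)
        self.W += np.outer(p, p) / len(p)
        np.fill_diagonal(self.W, 0.0)

    def is_novel(self, pattern, max_flips=0.1, steps=20):
        """Run synchronous recall; if the converged state differs from the
        input on more than max_flips of the units, the colour is novel."""
        s = np.asarray(pattern, float)
        for _ in range(steps):
            s = np.sign(self.W @ s)
            s[s == 0] = 1.0
        return np.mean(s != pattern) > max_flips
```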
NASA Astrophysics Data System (ADS)
Crawford, Bobby Grant
In an effort to field smaller and cheaper Uninhabited Aerial Vehicles (UAVs), the Army has expressed an interest in an ability of the vehicle to autonomously detect and avoid obstacles. Current systems are not suitable for small aircraft. NASA Langley Research Center has developed a vision sensing system that uses small semiconductor cameras. The feasibility of using this sensor for the purpose of autonomous obstacle avoidance by a UAV is the focus of the research presented in this document. The vision sensor characteristics are modeled and incorporated into guidance and control algorithms designed to generate flight commands based on obstacle information received from the sensor. The system is evaluated by simulating the response to these flight commands using a six degree-of-freedom, non-linear simulation of a small, fixed wing UAV. The simulation is written using the MATLAB application and runs on a PC. Simulations were conducted to test the longitudinal and lateral capabilities of the flight control for a range of airspeeds, camera characteristics, and wind speeds. Results indicate that the control system is suitable for obstacle avoiding flight control using the simulated vision system. In addition, a method for designing and evaluating the performance of such a system has been developed that allows the user to easily change component characteristics and evaluate new systems through simulation.
Revolutionary Propulsion Systems for 21st Century Aviation
NASA Technical Reports Server (NTRS)
Sehra, Arun K.; Shin, Jaiwon
2003-01-01
The air transportation system for the new millennium will require revolutionary solutions to meet public demand for improving safety, reliability, environmental compatibility, and affordability. NASA's vision for 21st century aircraft is to develop propulsion systems that are intelligent, virtually inaudible (outside the airport boundaries), and have near-zero harmful emissions (CO2 and NOx). This vision includes intelligent engines that will be capable of adapting to changing internal and external conditions to optimally accomplish the mission with minimal human intervention. Distributed vectored propulsion will replace two to four wing-mounted or fuselage-mounted engines with a large number of small, mini, or micro engines, and electric-drive propulsion based on fuel cell power will generate electric power, which in turn will drive propulsors to produce the desired thrust. Such a system will completely eliminate the harmful emissions. This paper reviews future propulsion and power concepts that are currently under development at NASA Glenn Research Center.
Mobile Autonomous Humanoid Assistant
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Ambrose, R. O.; Tyree, K. S.; Goza, S. M.; Huber, E. L.
2004-01-01
A mobile autonomous humanoid robot is assisting human co-workers at the Johnson Space Center with tool handling tasks. This robot combines the upper body of the National Aeronautics and Space Administration (NASA)/Defense Advanced Research Projects Agency (DARPA) Robonaut system with a Segway(TradeMark) Robotic Mobility Platform yielding a dexterous, maneuverable humanoid perfect for aiding human co-workers in a range of environments. This system uses stereo vision to locate human team mates and tools and a navigation system that uses laser range and vision data to follow humans while avoiding obstacles. Tactile sensors provide information to grasping algorithms for efficient tool exchanges. The autonomous architecture utilizes these pre-programmed skills to form human assistant behaviors. The initial behavior demonstrates a robust capability to assist a human by acquiring a tool from a remotely located individual and then following the human in a cluttered environment with the tool for future use.
Model-based object classification using unification grammars and abstract representations
NASA Astrophysics Data System (ADS)
Liburdy, Kathleen A.; Schalkoff, Robert J.
1993-04-01
The design and implementation of a high level computer vision system which performs object classification is described. General object labelling and functional analysis require models of classes which display a wide range of geometric variations. A large representational gap exists between abstract criteria such as `graspable' and current geometric image descriptions. The vision system developed and described in this work addresses this problem and implements solutions based on a fusion of semantics, unification, and formal language theory. Object models are represented using unification grammars, which provide a framework for the integration of structure and semantics. A methodology for the derivation of symbolic image descriptions capable of interacting with the grammar-based models is described and implemented. A unification-based parser developed for this system achieves object classification by determining if the symbolic image description can be unified with the abstract criteria of an object model. Future research directions are indicated.
Beyond the computer-based patient record: re-engineering with a vision.
Genn, B; Geukers, L
1995-01-01
In order to achieve real benefit from the potential offered by a Computer-Based Patient Record, the capabilities of the technology must be applied along with true re-engineering of healthcare delivery processes. University Hospital recognizes this and is using systems implementation projects as the catalyst for transforming the way we care for our patients. Integration is fundamental to the success of these initiatives, and this must be explicitly planned against an organized systems architecture whose standards are market-driven. University Hospital also recognizes that Community Health Information Networks will offer improved quality of patient care at a reduced overall cost to the system. All of these implementation factors are considered up front as the hospital makes its initial decisions on how to computerize its patient records. This improves our chances for success and will provide a consistent vision to guide the hospital's development of new and better patient care.
A compact CCD-monitored atomic force microscope with optical vision and improved performances.
Mingyue, Liu; Haijun, Zhang; Dongxian, Zhang
2013-09-01
A novel CCD-monitored atomic force microscope (AFM) with optical vision and improved performances has been developed. Compact optical paths are specifically devised for both tip-sample microscopic monitoring and cantilever's deflection detecting with minimized volume and optimal light-amplifying ratio. The ingeniously designed AFM probe with such optical paths enables quick and safe tip-sample approaching, convenient and effective tip-sample positioning, and high quality image scanning. An image stitching method is also developed to build a wider-range AFM image under monitoring. Experiments show that this AFM system can offer real-time optical vision for tip-sample monitoring with wide visual field and/or high lateral optical resolution by simply switching the objective; meanwhile, it has the elegant performances of nanometer resolution, high stability, and high scan speed. Furthermore, it is capable of conducting wider-range image measurement while keeping nanometer resolution.
A Vision for the Future of Environmental Research: Creating Environmental Intelligence Centers
NASA Astrophysics Data System (ADS)
Barron, E. J.
2002-12-01
The nature of the environmental issues facing our nation demands a capability that allows us to enhance economic vitality, maintain environmental quality, and limit threats to life and property through more fundamental understanding of the Earth. It is "advanced" knowledge of how the system may respond that gives environmental information most of its power and utility. This fact is evident in the demand for new forecasting products, involving air quality, energy demand, water quality and quantity, ultraviolet radiation, and human health indexes. As we demonstrate feasibility and benefit, society is likely to demand a growing number of new operational forecast products on prediction time scales of days to decades into the future. The driving forces that govern our environment are widely recognized, involving primarily weather and climate, patterns of land use and land cover, and resource use with its associated waste products. The importance of these driving forces has been demonstrated by a decade of research on greenhouse gas emissions, ozone depletion and deforestation, and through the birth of Earth System Science. But, there are also major challenges. We find the strongest intersection between human activity, environmental stresses, system interactions and human decision-making in regional analysis coupled to larger spatial scales. In addition, most regions are influenced by multiple-stresses. Multiple, cumulative, and interactive stresses are clearly the most difficult to understand and hence the most difficult to assess and to manage. Currently, we are incapable of addressing these issues in a truly integrated fashion at global scales. The lack of an ability to combine global and regional forcing and to assess the response of the system to multiple stresses at the spatial and temporal scales of interest to humans limits our ability to assess the impacts of specific human perturbations, to assess advantages and risks, and to enhance economic and societal well being in the context of global, national and regional stewardship. These societal needs lead to a vision that uses a regional framework as a stepping-stone to a comprehensive national or global capability. The development of a comprehensive regional framework depends on a new approach to environmental research - the creation of regional Environmental Intelligence Centers. A key objective is to bring a demanding level of discipline to "forecasting" in a broad arena of environmental issues. The regional vision described above is designed to address a broad range of current and future environmental issues by creating a capability based on integrating diverse observing systems, making data readily accessible, developing an increasingly comprehensive predictive capability at the spatial and temporal scales appropriate for examining societal issues, and creating a vigorous intersection with decision-makers. With demonstrated success over a few large-scale regions of the U.S., this strategy will very likely grow into a national capability that far exceeds current capabilities.
Development of a Wireless Computer Vision Instrument to Detect Biotic Stress in Wheat
Casanova, Joaquin J.; O'Shaughnessy, Susan A.; Evett, Steven R.; Rush, Charles M.
2014-01-01
Knowledge of crop abiotic and biotic stress is important for optimal irrigation management. While spectral reflectance and infrared thermometry provide a means to quantify crop stress remotely, these measurements can be cumbersome. Computer vision offers an inexpensive way to remotely detect crop stress independent of vegetation cover. This paper presents a technique using computer vision to detect disease stress in wheat. Digital images of differentially stressed wheat were segmented into soil and vegetation pixels using expectation maximization (EM). In the first season, the algorithm to segment vegetation from soil and distinguish between healthy and stressed wheat was developed and tested using digital images taken in the field and later processed on a desktop computer. In the second season, a wireless camera with near real-time computer vision capabilities was tested in conjunction with the conventional camera and desktop computer. For wheat irrigated at different levels and inoculated with wheat streak mosaic virus (WSMV), vegetation hue determined by the EM algorithm showed significant effects from irrigation level and infection. Unstressed wheat had a higher hue (118.32) than stressed wheat (111.34). In the second season, the hue and cover measured by the wireless computer vision sensor showed significant effects from infection (p = 0.0014), as did the conventional camera (p < 0.0001). Vegetation hue obtained through a wireless computer vision system in this study is a viable option for determining biotic crop stress in irrigation scheduling. Such a low-cost system could be suitable for use in the field in automated irrigation scheduling applications. PMID:25251410
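A minimal analogue of the segmentation-plus-hue measurement can be written with scikit-learn's EM implementation (GaussianMixture); the feature choice, component count, and the "higher hue = vegetation" rule are simplifying assumptions of this sketch, not the paper's exact pipeline.

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

def vegetation_hue(rgb_image):
    """Two-component EM segmentation of soil vs vegetation pixels, then the
    mean hue of the greener component. Returns (mean hue, cover fraction).
    OpenCV hue runs 0-179; green sits well above the reddish soil hues."""
    hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
    pixels = hsv.reshape(-1, 3).astype(float)
    gm = GaussianMixture(n_components=2, covariance_type='full',
                         random_state=0).fit(pixels)
    labels = gm.predict(pixels)
    veg = int(gm.means_[:, 0].argmax())   # component with the higher mean hue
    mask = labels == veg
    return pixels[mask, 0].mean(), mask.mean()
```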
NASA Astrophysics Data System (ADS)
Erickson, David; Lacheray, Hervé; Lai, Gilbert; Haddadi, Amir
2014-06-01
This paper presents the latest advancements of the Haptics-based Immersive Tele-robotic System (HITS) project, a next generation Improvised Explosive Device (IED) disposal (IEDD) robotic interface containing an immersive telepresence environment for a remotely-controlled three-articulated-robotic-arm system. While the haptic feedback enhances the operator's perception of the remote environment, a third teleoperated dexterous arm, equipped with multiple vision sensors and cameras, provides stereo vision with proper visual cues, and a 3D photo-realistic model of the potential IED. This decentralized system combines various capabilities including stable and scaled motion, singularity avoidance, cross-coupled hybrid control, active collision detection and avoidance, compliance control and constrained motion to provide a safe and intuitive control environment for the operators. Experimental results and validation of the current system are presented through various essential IEDD tasks. This project demonstrates that a two-armed anthropomorphic Explosive Ordnance Disposal (EOD) robot interface can achieve complex neutralization techniques against realistic IEDs without the operator approaching at any time.
Baranov, A V; Troianovskiĭ, R L
2012-01-01
Functional results of staged surgical treatment of advanced retinopathy of prematurity (ROP) performed between 2005 and 2010 in the ophthalmology department of a city children's hospital (St. Petersburg) are analyzed. A total of 154 children (303 eyes) were operated on. Visual function was assessed using a proposed original method. In eyes with stage 4B ROP (20 eyes, 6.6%), subject vision was achieved in 65%. Fair anatomic results were achieved in 131 of the 283 eyes of children with stage 5 ROP: light perception was preserved in 52 eyes (39.7%), the capability to distinguish large objects appeared in 40 eyes (30.5%), and subject vision developed in 39 eyes (29.8%). A correlation was found between visual function and environmental conditions, in particular the presence or absence of a long-term period of training in distinguishing colors and individual objects. In the group of children who received such training, fair function (subject vision or the capability to distinguish large objects) was achieved in 81.2% of patients, whereas in the group without training the same capabilities developed in only 31.8% of cases. Functional outcomes were also found to depend on CNS condition and the timing of surgery.
Simple laser vision sensor calibration for surface profiling applications
NASA Astrophysics Data System (ADS)
Abu-Nabah, Bassam A.; ElSoussi, Adnane O.; Al Alami, Abed ElRahman K.
2016-09-01
Due to the relatively large structures in the Oil and Gas industry, original equipment manufacturers (OEMs) have been implementing custom-designed laser vision sensor (LVS) surface profiling systems as part of quality control in their manufacturing processes. The rough manufacturing environment and the continuous movement and misalignment of these custom-designed tools adversely affect the accuracy of laser-based vision surface profiling applications. Accordingly, Oil and Gas businesses have been raising the demand from the OEMs to implement practical and robust LVS calibration techniques prior to running any visual inspections. This effort introduces an LVS calibration technique representing a simplified version of two known calibration techniques, which are commonly implemented to obtain a calibrated LVS system for surface profiling applications. Both calibration techniques are implemented virtually and experimentally to scan simulated and three-dimensional (3D) printed features of known profiles, respectively. Scanned data is transformed from the camera frame to points in the world coordinate system and compared with the input profiles to validate the introduced calibration technique capability against the more complex approach and preliminarily assess the measurement technique for weld profiling applications. Moreover, the sensitivity to stand-off distances is analyzed to illustrate the practicality of the presented technique.
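For readers unfamiliar with the transformation step mentioned above, the following numpy sketch maps laser-profile points from the camera frame into world coordinates; the extrinsics R and t stand in for whatever the calibration procedure recovers, and the names are illustrative.

```python
import numpy as np

def camera_to_world(points_cam, R, t):
    """Map Nx3 points from the camera frame to the world frame using the
    rotation R (3x3) and translation t (3,) recovered by LVS calibration."""
    return points_cam @ R.T + t

# Validation then reduces to comparing camera_to_world(scan, R, t) against
# the known 3D-printed input profile, as the abstract describes.
```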
Wang, Yu-Jen; Chen, Po-Ju; Liang, Xiao; Lin, Yi-Hsin
2017-03-27
Augmented reality (AR), which uses computer-aided projected information to augment our senses, has an important impact on human life, especially for elderly people. However, there are three major challenges regarding the optical system in an AR system: registration, vision correction, and readability under strong ambient light. Here, we solve all three challenges simultaneously for the first time using two liquid crystal (LC) lenses and a polarizer-free attenuator integrated in an optical-see-through AR system. One of the LC lenses is used to electrically adjust the position of the projected virtual image, the so-called registration. The other LC lens, with a larger aperture and polarization-independent characteristics, is in charge of vision correction, such as myopia and presbyopia. The linearity of the lens powers of the two LC lenses is also discussed. The readability of virtual images under strong ambient light is addressed by the electrically switchable transmittance of the LC attenuator, originating from light scattering and light absorption. The concept demonstrated in this paper could be further extended to other electro-optical devices as long as the devices exhibit the capability of phase modulation and amplitude modulation.
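As a rough, hedged aid to intuition (this relation is a generic thin-lens estimate, not taken from the paper), the registration mechanism can be pictured as follows: treating the registration LC lens as a thin lens of electrically tunable power \(P_{\mathrm{LC}}\) placed in the viewing path, a virtual image originally at distance \(d_0\) appears at \(d_i\) with

```latex
\[
  \frac{1}{d_i} = \frac{1}{d_0} - P_{\mathrm{LC}},
\]
```

so sweeping \(P_{\mathrm{LC}}\) moves the projected image plane, while the second, larger-aperture, polarization-independent lens adds the wearer's spectacle correction in the same additive way (sign conventions differ between texts).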
2006-01-01
enabling technologies such as built-in-test, advanced health monitoring algorithms, reliability and component aging models, prognostics methods, and...deployment and acceptance. This framework and vision are consistent with onboard PHM (Prognostic and Health Management) as well as advanced... monitored. In addition to the prognostic forecasting capabilities provided by monitoring system power, multiple confounding errors by electronic
2002-01-08
new PAL with a total viewing angle of around 80° and suitable for foveal vision, it turned out that the optical design program ZEMAX-EE we intended to...use was not capable of the optimization. The reason was that ZEMAX-EE and all present optical design programs are based on see-through-window (STW
Emission Measurements of Ultracell XX25 Reformed Methanol Fuel Cell System
2008-06-01
combination of electrochemical devices such as fuel cell and battery. Polymer electrolyte membrane fuel cells (PEMFC) using hydrogen or liquid...communications and computers, sensors and night vision capabilities. High temperature PEMFC offers some advantages such as enhanced electrode kinetics and better...tolerance of carbon monoxide that will poison the conventional PEMFC. Ultracell Corporation, Livermore, California, has developed a first
The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.
1994-01-01
Currently available robotic systems provide limited support for CAD-based model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three-dimensional graphical models, and automated tracking functions which depend on automated machine vision. A set of tools for single and multiple focal plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between human and operator vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotics manufacturing, assembly, repair and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.
NASA Astrophysics Data System (ADS)
Lewis, Keith
2014-10-01
Biological systems exploiting light have benefitted from thousands of years of genetic evolution and can provide insight to support the development of new approaches for imaging, image processing and communication. For example, biological vision systems can provide significant diversity, yet are able to function with only a minimal degree of neural processing. Examples will be described underlying the processes used to support the development of new concepts for photonic systems, ranging from uncooled bolometers and tunable filters, to asymmetric free-space optical communication systems and new forms of camera capable of simultaneously providing spectral and polarimetric diversity.
3D vision system for intelligent milking robot automation
NASA Astrophysics Data System (ADS)
Akhloufi, M. A.
2013-12-01
In a milking robot, correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and relies primarily on laser profiles to estimate approximate teat positions. This technology has reached its limit: it does not allow optimal positioning of the milking cups, and in the presence of occlusions the robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g., development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot was built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-of-Flight (TOF) and RGBD cameras. The proposed algorithms permit online segmentation of the teats by combining 2D and 3D visual information, from which the 3D position of each teat is computed and sent to the milking robot for teat cup positioning. The vision system runs in real time and maintains optimal positioning of the cups even in the presence of motion. Results with both TOF and RGBD cameras show the good performance of the proposed system, with the best performance obtained with RGBD cameras; this latter technology will be used in future real-life experimental tests.
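Purely as an illustration of what "combining 2D and 3D visual information" can look like, the sketch below fuses a hue mask with a depth band from a registered RGBD frame and back-projects the surviving pixels; the bands, the pinhole intrinsics (fx, fy, cx, cy), and the function name are assumptions, not the paper's algorithm.

```python
import numpy as np

def teat_candidates(hue, depth, fx, fy, cx, cy,
                    hue_band=(0.02, 0.10), depth_band=(0.3, 0.8)):
    """Return the 3D centroid (meters, camera frame) of pixels that satisfy
    both a 2D color cue (hue band) and a 3D cue (depth band)."""
    mask = ((hue > hue_band[0]) & (hue < hue_band[1]) &
            (depth > depth_band[0]) & (depth < depth_band[1]))
    v, u = np.nonzero(mask)
    if v.size == 0:
        return None                       # nothing teat-like in this frame
    z = depth[v, u]
    x = (u - cx) * z / fx                 # back-project with the pinhole model
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1).mean(axis=0)
```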
NASA Astrophysics Data System (ADS)
Fink, Wolfgang; You, Cindy X.; Tarbell, Mark A.
2010-01-01
It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (μAVS2) for real-time image processing. Truly standalone, μAVS2 is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on μAVS2 operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. μAVS2 imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, μAVS2 affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, μAVS2 can easily be reconfigured for other prosthetic systems. Testing of μAVS2 with actual retinal implant carriers is envisioned in the near future.
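A toy sketch of the user-defined linear sequential-loop filter chain described above; the two example filters and the driver are illustrative stand-ins, not μAVS2 code.

```python
import numpy as np

def invert(frame):
    """Example filter: contrast reversal on an 8-bit frame."""
    return 255 - frame

def threshold(frame, t=128):
    """Example filter: binarization."""
    return np.where(frame > t, 255, 0).astype(frame.dtype)

def run_pipeline(frame, filters):
    """Apply the user-chosen filters once each, in order, per frame; a single
    linear pass keeps memory and CPU needs low, as the abstract notes."""
    for f in filters:
        frame = f(frame)
    return frame

# e.g. one user-defined order: processed = run_pipeline(raw, [invert, threshold])
```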
Humanoids in Support of Lunar and Planetary Surface Operations
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Keymeulen, Didier
2006-01-01
This paper presents a vision of humanoid robots as humans' key partners in future space exploration, in particular for construction, maintenance/repair, and operation of lunar/planetary habitats, bases, and settlements. It integrates this vision with the recent plans for human and robotic exploration, aligning a set of milestones for operational capability of humanoids with the schedule for the next decades and the development spirals in Project Constellation. These milestones relate to a set of incremental challenges whose solution requires new humanoid technologies. A system-of-systems integrative approach that would lead to readiness of cooperating humanoid crews is sketched. Robot fostering, training/education techniques, and improved cognitive/sensory/motor development techniques are considered essential elements for achieving intelligent humanoids. A pilot project using the small-scale Fujitsu HOAP-2 humanoid is outlined.
Novel Propulsion and Power Concepts for 21st Century Aviation
NASA Technical Reports Server (NTRS)
Sehra, Arun K.
2003-01-01
Air transportation for the new millennium will require revolutionary solutions to meet public demand for improving safety, reliability, environmental compatibility, and affordability. NASA's vision for 21st Century Aircraft is to develop propulsion systems that are intelligent, virtually inaudible (outside the airport boundaries), and have near zero harmful emissions (CO2 and NOx). This vision includes intelligent engines that will be capable of adapting to changing internal and external conditions to optimally accomplish the mission with minimal human intervention. Distributed vectored propulsion will replace two to four wing-mounted or fuselage-mounted engines with a large number of small, mini, or micro engines. And electric drive propulsion based on fuel cell power will generate electric power, which in turn will drive propulsors to produce the desired thrust. Such a system would completely eliminate harmful emissions.
Real-time model-based vision system for object acquisition and tracking
NASA Technical Reports Server (NTRS)
Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd
1987-01-01
A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.
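To make the predict-then-correlate idea concrete, here is a minimal constant-velocity sketch for a single tracked feature; the 10 Hz rate comes from the abstract, while the filter structure, gain, and names are illustrative assumptions rather than the system's actual estimator.

```python
import numpy as np

DT = 0.1                                   # ~10 frames per second, per the text
F = np.array([[1.0, DT],
              [0.0, 1.0]])                 # constant-velocity state transition

def predict(state):
    """Predict where an edge feature should appear in the next frame.
    state = [position, velocity] along one image axis."""
    return F @ state

def update(state, measured_pos, gain=0.5):
    """Blend the prediction with the matched edge location, refreshing the
    velocity estimate used for the next frame's prediction."""
    predicted = F @ state
    residual = measured_pos - predicted[0]
    return predicted + gain * np.array([residual, residual / DT])
```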
Airborne Use of Night Vision Systems
NASA Astrophysics Data System (ADS)
Mepham, S.
1990-04-01
The Mission Management Department of the Royal Aerospace Establishment has won a Queen's Award for Technology, jointly with GEC Sensors, in recognition of innovation and success in the development and application of night vision technology for fixed wing aircraft. This work has been carried out to satisfy the operational needs of the Royal Air Force. These are seen to be: operations in the NATO Central Region; a night as well as a day capability; low level, high speed penetration; attack of battlefield targets, especially groups of tanks; and meeting these objectives at minimum cost. The most effective way to penetrate enemy defences is at low level, and survivability would be greatly enhanced by a first-pass attack. It is therefore most important that the pilot not only be able to fly at low level to the target but also be able to detect it in sufficient time to complete a successful attack. An analysis of the average operating conditions in Central Europe during winter clearly shows that high speed low level attacks can only be made for about 20 per cent of the 24 hours. Extending this into good night conditions raises the figure to 60 per cent. Whilst it is true that this is for winter conditions and in summer the situation is better, the overall advantage to be gained is clear. If our aircraft do not have this capability, the potential for the enemy to advance his troops and armour without hindrance for considerable periods is all too obvious. There are several solutions to providing such a capability. The one chosen for Tornado GR1 is to use Terrain Following Radar (TFR). This system provides a complete 24 hour capability. However, it has two main disadvantages. First, it is an active system, which means it can be jammed or homed onto, and it is only useful in attacking pre-planned targets. Second, it is an expensive system, which precludes fitting it to more than a small number of aircraft.
Systems and Techniques for Identifying and Avoiding Ice
NASA Technical Reports Server (NTRS)
Hansman, R. John
1995-01-01
In-flight icing is one of the most difficult weather hazards facing general aviation. Because most aircraft in the general aviation category are not certified for flight into known icing conditions, techniques for identifying and avoiding in-flight ice are important for maintaining safety while increasing the utility and dispatch capability that are part of the AGATE vision. This report summarizes a brief study effort which: (1) reviewed current ice identification, forecasting, and avoidance techniques; (2) assessed the feasibility of improved forecasting and ice avoidance procedures; and (3) identified key issues for the development of improved capability with regard to in-flight icing.
Transition in Gas Turbine Control System Architecture: Modular, Distributed, and Embedded
NASA Technical Reports Server (NTRS)
Culley, Dennis
2010-01-01
Control systems are an increasingly important component of turbine-engine system technology. However, as engines become more capable, the control system itself becomes ever more constrained by the inherent environmental conditions of the engine, a relationship forced by the continued reliance on commercial electronics technology. A revolutionary change in the architecture of turbine-engine control systems will change this paradigm and result in fully distributed engine control systems. Initially, the revolution will begin with the physical decoupling of the control law processor from the hostile engine environment, using a digital communications network and engine-mounted high temperature electronics requiring little or no thermal control. The vision for the evolution of distributed control capability from this initial implementation to fully distributed and embedded control is described in a roadmap and implementation plan. The development of this plan is the result of discussions with government and industry stakeholders.
Smart unattended sensor networks with scene understanding capabilities
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2006-05-01
Unattended sensor systems are new technologies that are supposed to provide enhanced situation awareness to military and law enforcement agencies. A network of such sensors cannot be very effective in field conditions if it can only transmit visual information to human operators or alert them to motion. In real field conditions, events may happen in many nodes of a network simultaneously, but the number of control personnel is always limited, and the attention of human operators may be drawn to particular network nodes while a more dangerous threat goes unnoticed in other nodes. Sensor networks would be more effective if equipped with a system similar to human vision in its ability to understand visual information. For this, human vision uses a rough but wide peripheral system that tracks motion and regions of interest, a narrow but precise foveal vision that analyzes and recognizes objects in the center of the selected region of interest, and visual intelligence that provides scene and object context and resolves ambiguity and uncertainty in the visual information. Biologically-inspired Network-Symbolic models convert image information into an 'understandable' Network-Symbolic format, which is similar to relational knowledge models. The equivalent of the interaction between peripheral and foveal systems in the network-symbolic system is achieved via interaction between Visual and Object Buffers and the top-level knowledge system.
A traffic situation analysis system
NASA Astrophysics Data System (ADS)
Sidla, Oliver; Rosner, Marcin
2011-01-01
The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. For example, embedded vision systems built into vehicles can be used as early warning systems, or stationary camera systems can modify the switching frequency of signals at intersections. Today the automated analysis of traffic situations is still in its infancy - the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully understood by a vision system. We present steps towards such a traffic monitoring system, which is designed to detect potentially dangerous traffic situations, especially incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system is field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in an outdoor-capable housing. Two cameras run vehicle detection software including license plate detection and recognition; one camera runs a complex pedestrian detection and tracking module based on the HOG detection principle. As a supplement, all 3 cameras use additional optical flow computation on a low-resolution video stream in order to estimate the motion path and speed of objects. This work describes the foundation for all 3 object detection modalities (pedestrians, vehicles, license plates), and explains the system setup and its design.
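As a flavor of the HOG-based pedestrian detection mentioned above, the snippet below uses OpenCV's stock people detector; the stride, scale, and score threshold are illustrative, and the fielded system's trained models would certainly differ.

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_pedestrians(frame):
    """Return (x, y, w, h) boxes for pedestrian candidates in a BGR frame."""
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    return [box for box, score in zip(boxes, weights) if float(score) > 0.5]
```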
Material requirements for bio-inspired sensing systems
NASA Astrophysics Data System (ADS)
Biggins, Peter; Lloyd, Peter; Salmond, David; Kusterbeck, Anne
2008-10-01
The aim of developing bio-inspired sensing systems is to try to emulate the amazing sensitivity and specificity observed in the natural world. These capabilities have evolved, often for specific tasks, which provide the organism with an advantage in its fight to survive and prosper. They cover a wide range of sensing functions including vision, temperature, hearing, touch, taste and smell. For some functions, the capabilities of natural systems are still greater than those achieved by traditional engineering solutions; a good example being a dog's sense of smell. Furthermore, attempting to emulate aspects of biological optics, processing and guidance may lead to simpler and more effective devices. A bio-inspired sensing system is much more than the sensory mechanism. A system will need to collect samples, especially if pathogens or chemicals are of interest. Other functions could include the provision of power, surfaces and receptors, structure, locomotion and control. In fact, it is possible to conceive of a complete bio-inspired system concept that is likely to be radically different from more conventional approaches. This concept will be described and individual component technologies considered.
Results from the NASA Capability Roadmap Team for In-Situ Resource Utilization (ISRU)
NASA Technical Reports Server (NTRS)
Sanders, Gerald B.; Romig, Kris A.; Larson, William E.; Johnson, Robert; Rapp, Don; Johnson, Ken R.; Sacksteder, Kurt; Linne, Diane; Curreri, Peter; Duke, Michael;
2005-01-01
On January 14, 2004, the President of the United States unveiled a new vision for robotic and human exploration of space entitled "A Renewed Spirit of Discovery". As stated by the President in the Vision for Space Exploration (VSE), NASA must "... implement a sustained and affordable human and robotic program to explore the solar system and beyond" and "...develop new technologies and harness the moon's abundant resources to allow manned exploration of more challenging environments." A key to fulfilling the goal of sustained and affordable human and robotic exploration will be the ability to use resources that are available at the site of exploration to "live off the land" instead of bringing everything from Earth, known as In-Situ Resource Utilization (ISRU). ISRU can significantly reduce the mass, cost, and risk of exploration through capabilities such as: mission consumable production (propellants, fuel cell reagents, life support consumables, and feedstock for manufacturing and construction); surface construction (radiation shields, landing pads, walls, habitats, etc.); manufacturing and repair with in-situ resources (spare parts, wires, trusses, integrated systems, etc.); and space utilities and power from space resources. On January 27, 2004, the President's Commission on Implementation of U.S. Space Exploration Policy (Aldridge Committee) was created, and its final report was released in June 2004. One of the report's recommendations was to establish special project teams to evaluate enabling technologies, of which "Planetary in situ resource utilization" was one. Based on the VSE and the commission's final report, NASA established fifteen Capability Roadmap teams, of which ISRU was one. From October 2004 to May 2005 the ISRU Capability Roadmap team examined the capabilities, benefits, architecture and mission implementation strategy, critical decisions, current state-of-the-art (SOA), challenges, technology gaps, and risks of ISRU for future human Moon and Mars exploration. This presentation will provide an overview of the ISRU capability, architecture, and implementation strategy examined by the ISRU Capability Roadmap team, along with a top-level review of ISRU benefits, resources and products of interest, and the current SOA in ISRU processes and systems. The presentation will also highlight the challenges of incorporating ISRU into future missions and the gaps in technologies and capabilities that need to be filled to enable ISRU.
Vision System for Coarsely Estimating Motion Parameters for Unknown Fast Moving Objects in Space
Chen, Min; Hashimoto, Koichi
2017-01-01
Motivated by biological interest in analyzing the navigation behaviors of flying animals, we attempt to build a system measuring their motion states. To do this, in this paper, we build a vision system to detect unknown fast-moving objects within a given space, calculating their motion parameters represented by positions and poses. We propose a novel method to detect reliable interest points from images of moving objects, points that can hardly be detected by general-purpose interest point detectors. 3D points reconstructed using these interest points are then grouped and maintained for detected objects according to a careful schedule that accounts for appearance and perspective changes. In the estimation step, a method is introduced to adapt the robust estimation procedure used for dense point sets to the case of sparse sets, reducing the potential risk of greatly biased estimation. Experiments are conducted on real scenes, showing the capability of the system to detect multiple unknown moving objects and estimate their positions and poses. PMID:29206189
NASA Technical Reports Server (NTRS)
Shelton, Kevin J.; Kramer, Lynda J.; Ellis, Kyle K.; Rehfeld, Sherri A.
2012-01-01
The Synthetic and Enhanced Vision Systems for NextGen (SEVS) simulation and flight tests are jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA). The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SEVS operational and system-level performance capabilities. Nine test flights (38 flight hours) were conducted over the summer and fall of 2011. The evaluations were flown in Gulfstream's G450 flight test aircraft outfitted with the SEVS technology under very low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 ft to 2400 ft visibility) into various airports from Louisiana to Maine. In-situ flight performance and subjective workload and acceptability data were collected in collaboration with ground simulation studies at LaRC's Research Flight Deck simulator.
A Vision of Quantitative Imaging Technology for Validation of Advanced Flight Technologies
NASA Technical Reports Server (NTRS)
Horvath, Thomas J.; Kerns, Robert V.; Jones, Kenneth M.; Grinstead, Jay H.; Schwartz, Richard J.; Gibson, David M.; Taylor, Jeff C.; Tack, Steve; Dantowitz, Ronald F.
2011-01-01
Flight-testing is traditionally an expensive but critical element in the development and ultimate validation and certification of technologies destined for future operational capabilities. Measurements obtained in relevant flight environments also provide unique opportunities to observe flow phenomenon that are often beyond the capabilities of ground testing facilities and computational tools to simulate or duplicate. However, the challenges of minimizing vehicle weight and internal complexity as well as instrumentation bandwidth limitations often restrict the ability to make high-density, in-situ measurements with discrete sensors. Remote imaging offers a potential opportunity to noninvasively obtain such flight data in a complementary fashion. The NASA Hypersonic Thermodynamic Infrared Measurements Project has demonstrated such a capability to obtain calibrated thermal imagery on a hypersonic vehicle in flight. Through the application of existing and accessible technologies, the acreage surface temperature of the Shuttle lower surface was measured during reentry. Future hypersonic cruise vehicles, launcher configurations and reentry vehicles will, however, challenge current remote imaging capability. As NASA embarks on the design and deployment of a new Space Launch System architecture for access beyond earth orbit (and the commercial sector focused on low earth orbit), an opportunity exists to implement an imagery system and its supporting infrastructure that provides sufficient flexibility to incorporate changing technology to address the future needs of the flight test community. A long term vision is offered that supports the application of advanced multi-waveband sensing technology to aid in the development of future aerospace systems and critical technologies to enable highly responsive vehicle operations across the aerospace continuum, spanning launch, reusable space access and global reach. Motivations for development of an Agency level imagery-based measurement capability to support cross cutting applications that span the Agency mission directorates as well as meeting potential needs of the commercial sector and national interests of the Intelligence, Surveillance and Reconnaissance community are explored. A recommendation is made for an assessment study to baseline current imaging technology including the identification of future mission requirements. Development of requirements fostered by the applications suggested in this paper would be used to identify technology gaps and direct roadmapping for implementation of an affordable and sustainable next generation sensor/platform system.
University NanoSat Program: AggieSat3
2009-06-01
commercially available product for stereo machine vision developed by Point Grey Research. The current binocular BumbleBee2® system incorporates two...and Fellow of the American Society of Mechanical Engineers (ASME) in 1997. She was awarded the 2007 J. Leland "Lee" Atwood Award from the ASEE...AggieSat2 satellite programs. Additional experience gained in the area of drawing standards, machining capabilities, solid modeling, safety
Computer hardware and software for robotic control
NASA Technical Reports Server (NTRS)
Davis, Virgil Leon
1987-01-01
The KSC has implemented an integrated system that coordinates state-of-the-art robotic subsystems. It is a sensor-based real-time robotic control system performing operations beyond the capability of an off-the-shelf robot. The integrated system provides real-time closed-loop adaptive path control of position and orientation of all six axes of a large robot; enables the implementation of a highly configurable, expandable testbed for sensor system development; and makes several smart distributed control subsystems (robot arm controller, process controller, graphics display, and vision tracking) appear as intelligent peripherals to a supervisory computer coordinating the overall systems.
Control electronics for a multi-laser/multi-detector scanning system
NASA Technical Reports Server (NTRS)
Kennedy, W.
1980-01-01
The Mars Rover Laser Scanning system uses a precision laser pointing mechanism, a photodetector array, and the concept of triangulation to perform three-dimensional scene analysis. The system is used for real-time terrain sensing and vision. The Multi-Laser/Multi-Detector laser scanning system is controlled by a digital device called the ML/MD controller. A next-generation laser scanning system, based on the Level 2 controller, is microprocessor based. The new controller's capabilities far exceed those of the ML/MD device. First-draft circuit details and the general software structure are presented.
VIPER: Virtual Intelligent Planetary Exploration Rover
NASA Technical Reports Server (NTRS)
Edwards, Laurence; Flueckiger, Lorenzo; Nguyen, Laurent; Washington, Richard
2001-01-01
Simulation and visualization of rover behavior are critical capabilities for scientists and rover operators to construct, test, and validate plans for commanding a remote rover. The VIPER system links these capabilities, using a high-fidelity virtual-reality (VR) environment, a kinematically accurate simulator, and a flexible plan executive to allow users to simulate and visualize possible execution outcomes of a plan under development. This work is part of a larger vision of a science-centered rover control environment, where a scientist may inspect and explore the environment via VR tools, specify science goals, and visualize the expected and actual behavior of the remote rover. The VIPER system is constructed from three generic systems, linked together via a minimal amount of customization into the integrated system. The complete system points out the power of combining plan execution, simulation, and visualization for envisioning rover behavior; it also demonstrates the utility of developing generic technologies, which can be combined in novel and useful ways.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-22
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April 17-19...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-25
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-12
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held October 2-4...
Bio-inspired approach for intelligent unattended ground sensors
NASA Astrophysics Data System (ADS)
Hueber, Nicolas; Raymond, Pierre; Hennequin, Christophe; Pichler, Alexander; Perrot, Maxime; Voisin, Philippe; Moeglin, Jean-Pierre
2015-05-01
Improving surveillance capacity over wide zones requires a set of smart battery-powered Unattended Ground Sensors capable of issuing an alarm to a decision-making center. Only high-level information has to be sent when a relevant suspicious situation occurs. In this paper we propose an innovative bio-inspired approach that mimics the human bi-modal vision mechanism and the parallel processing ability of the human brain. The designed prototype exploits two levels of analysis: a low-level panoramic motion analysis, the peripheral vision, and a high-level event-focused analysis, the foveal vision. By tracking moving objects and fusing multiple criteria (size, speed, trajectory, etc.), the peripheral vision module acts as a fast relevant-event detector. The foveal vision module focuses on the detected events to extract more detailed features (texture, color, shape, etc.) in order to improve recognition efficiency. The implemented recognition core is able to acquire human knowledge and to classify a huge amount of heterogeneous data in real time thanks to its natively parallel hardware structure. This UGS prototype validates our system approach in laboratory tests. The peripheral analysis module demonstrates a low false alarm rate, and the foveal vision correctly focuses on the detected events. A parallel FPGA implementation of the recognition core succeeds in fulfilling the embedded application requirements. These results pave the way for future reconfigurable virtual field agents. By processing data locally and sending only high-level information, their energy requirements and electromagnetic signature are optimized. Moreover, the embedded Artificial Intelligence core enables these bio-inspired systems to recognize and learn new significant events. By duplicating human expertise in potentially hazardous places, our miniature visual event detector will allow early warning and contribute to better human decision making.
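The bi-modal split described above can be caricatured in a few lines: a cheap whole-frame motion stage picks a region of interest, and only that crop is passed to a heavier recognizer. Everything here, the threshold, the stub classifier, and the names, is an illustrative assumption.

```python
import numpy as np

def peripheral(prev_gray, gray, thresh=25):
    """Low-cost peripheral stage: bounding box of pixels that changed."""
    moving = np.abs(gray.astype(int) - prev_gray.astype(int)) > thresh
    if not moving.any():
        return None
    ys, xs = np.nonzero(moving)
    return xs.min(), ys.min(), xs.max(), ys.max()

def foveal(crop):
    """Stand-in for the event-focused recognizer (texture, color, shape)."""
    return "vehicle" if crop.mean() > 100 else "person"

def step(prev_gray, gray):
    roi = peripheral(prev_gray, gray)
    if roi is None:
        return None             # nothing wakes the foveal stage; saves power
    x0, y0, x1, y1 = roi
    return foveal(gray[y0:y1 + 1, x0:x1 + 1])
```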
Research and applications: Artificial intelligence
NASA Technical Reports Server (NTRS)
Raphael, B.; Duda, R. O.; Fikes, R. E.; Hart, P. E.; Nilsson, N. J.; Thorndyke, P. W.; Wilber, B. M.
1971-01-01
Research in the field of artificial intelligence is discussed. The focus of recent work has been the design, implementation, and integration of a completely new system for the control of a robot that plans, learns, and carries out tasks autonomously in a real laboratory environment. The computer implementation of low-level and intermediate-level actions; routines for automated vision; and the planning, generalization, and execution mechanisms are reported. A scenario that demonstrates the approximate capabilities of the current version of the entire robot system is presented.
NASA Astrophysics Data System (ADS)
Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Williams, Steven P.; Bailey, Randall E.; Shelton, Kevin J.; Norman, R. Mike
2011-06-01
NASA is researching innovative technologies for the Next Generation Air Transportation System (NextGen) to provide a "Better-Than-Visual" (BTV) capability as an adjunct to "Equivalent Visual Operations" (EVO); that is, airport throughput equivalent to the rates normally achieved during Visual Flight Rules (VFR) operations, with equivalent or better safety, in all weather and visibility conditions including Instrument Meteorological Conditions (IMC). These new technologies build on proven flight deck systems and leverage synthetic and enhanced vision systems. Two piloted simulation studies were conducted to assess the use of a Head-Worn Display (HWD) with head tracking for synthetic and enhanced vision system concepts. The first experiment evaluated the use of a HWD for equivalent visual operations in approaches to San Francisco International Airport (airport identifier: KSFO), compared to a visual concept and a head-down display concept. The second experiment evaluated symbology variations under different visibility conditions using a HWD during taxi operations at Chicago O'Hare airport (airport identifier: KORD). While flying a closely-spaced parallel approach to KSFO, pilots rated the HWD under low-visibility conditions equivalent, in terms of situational awareness (SA) and mental workload, to the out-the-window condition under unlimited visibility, compared to a head-down enhanced vision system. There were no differences among the three display concepts in terms of traffic spacing and distance or in the pilots' decision-making to land or go around. For the KORD experiment, the visibility condition was not a factor in pilots' ratings of clutter effects from symbology. Several concepts for enhanced implementations of an unlimited field-of-regard BTV concept for low-visibility surface operations were determined to be equivalent in pilot ratings of efficacy and usability.
Simulation Evaluation of Equivalent Vision Technologies for Aerospace Operations
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Williams, Steven P.; Wilz, Susan J.; Arthur, Jarvis J.
2009-01-01
A fixed-base simulation experiment was conducted in NASA Langley Research Center's Integration Flight Deck simulator to investigate enabling technologies for equivalent visual operations (EVO) in the emerging Next Generation Air Transportation System operating environment. EVO implies the capability to achieve or even improve on the safety of current-day Visual Flight Rules (VFR) operations, maintain the operational tempos of VFR, and perhaps even retain VFR procedures - all independent of the actual weather and visibility conditions. Twenty-four air transport-rated pilots evaluated the use of Synthetic/Enhanced Vision Systems (S/EVS) and eXternal Vision Systems (XVS) technologies as enabling technologies for future all-weather operations. The experimental objectives were to determine the feasibility of XVS/SVS/EVS to provide an all-weather (visibility) landing capability without the need (or ability) for a visual approach segment and to determine the interaction of XVS/EVS and peripheral vision cues for terminal area and surface operations. Another key element of the testing investigated the pilots' awareness of and reaction to non-normal events (i.e., failure conditions) that were unexpectedly introduced into the experiment. These non-normal runs served as critical determinants of the underlying safety of all-weather operations. Experimental data from this test are cast into performance-based approach and landing standards which might establish a basis for future all-weather landing operations. Glideslope tracking performance appears to have improved with the elimination of the approach visual segment. This improvement can most likely be attributed to the fact that the pilots did not have to simultaneously perform glideslope corrections and find the required visual landing references in order to continue a landing. Lateral tracking performance was excellent regardless of the display concept being evaluated or whether or not there were peripheral cues in the side window. Although workload ratings were significantly lower when peripheral cues were present compared to when there were none, these differences appear to be operationally inconsequential. Larger display concepts tested in this experiment showed significant situation awareness (SA) improvements and workload reductions compared to smaller display concepts. With a fixed display size, a color display was more influential on SA and workload ratings than a collimated display.
Transformation: growing role of sensor networks in defense applications
NASA Astrophysics Data System (ADS)
Gunzelman, Karl J.; Kwok, Kwan S.; Krotkov, Eric P.
2003-12-01
The Department of Defense (DoD) is undergoing a transformation. What began as theoretical thinking under the notion of a Revolution in Military Affairs (RMA) is now beginning to manifest itself in a "Transformation." The overall goal of the transformation described in Joint Vision 2020 is the creation of a force that is dominant across the full spectrum of military operations. The warfighting concept that will allow us to achieve Joint Vision 2020 operational capabilities is Network Centric Warfare (NCW). NCW is no less than the embodiment of an Information Age transformation of the DoD. It involves a new way of thinking about how we accomplish our missions, how we organize and interrelate, and how we acquire, field and use the systems that support us. It will involve ways of operating that have yet to be conceived, and it will employ technologies yet to be invented. NCW has the potential to increase warfighting capabilities by orders of magnitude, and it will do so by leveraging information superiority. A major condition for success is an infostructure that is robustly networked to support information collection, sharing and collaboration, which will require increased emphasis on sensor research, development and implementation. DARPA is taking steps today to research, develop and implement those sensor capabilities. The Multi-Body Control program is a step in that direction.
Development of Moire machine vision
NASA Technical Reports Server (NTRS)
Harding, Kevin G.
1987-01-01
Three-dimensional perception is essential to the development of versatile robotics systems, both to handle complex manufacturing tasks in future factories and to provide the high-accuracy measurements needed in flexible manufacturing and quality control. A program is described which will develop the potential of Moire techniques to provide this capability in vision systems and automated measurements, and demonstrate artificial intelligence (AI) techniques to take advantage of the strengths of Moire sensing. Moire techniques provide a means of optically manipulating the complex visual data in a three-dimensional scene into a form which can be easily and quickly analyzed by computers. This type of optical data manipulation provides high productivity through integrated automation, producing a high quality product while reducing computer and mechanical manipulation requirements and thereby the cost and time of production. This nondestructive evaluation capability is being developed to enable full-field range measurement and three-dimensional scene analysis.
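As a hedged aside (the relation below is the textbook shadow-moire formula, not necessarily the exact configuration of this program), the reason moire fringes encode range so conveniently is that each fringe is a contour of constant height:

```latex
\[
  z_N = \frac{N\,p}{\tan\alpha + \tan\beta}, \qquad N = 1, 2, 3, \dots
\]
```

where p is the grating pitch, alpha the illumination angle, and beta the viewing angle; a computer can therefore recover full-field depth by counting and interpolating fringes rather than processing the raw three-dimensional scene.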
High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations
NASA Technical Reports Server (NTRS)
Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.
2003-01-01
Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.
An Advanced Framework for Improving Situational Awareness in Electric Power Grid Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Huang, Zhenyu; Zhou, Ning
With the deployment of new smart grid technologies and the penetration of renewable energy in power systems, significant uncertainty and variability are being introduced into power grid operation. Traditionally, the Energy Management System (EMS) operates the power grid in a deterministic mode, and thus will not be sufficient for the future control center in a stochastic environment with faster dynamics. One of the main challenges is to improve situational awareness. This paper reviews the current status of power grid operation and presents a vision of improving wide-area situational awareness for a future control center. An advanced framework, consisting of parallel state estimation, state prediction, parallel contingency selection, parallel contingency analysis, and advanced visual analytics, is proposed to provide the capabilities needed for better decision support by utilizing high performance computing (HPC) techniques and advanced visual analytic techniques. Research results are presented to support the proposed vision and framework.
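As a toy illustration of the parallel contingency-analysis idea (not the paper's HPC implementation), independent outage cases can simply fan out across worker processes; the power-flow solver below is a stub, and every name here is an assumption.

```python
from multiprocessing import Pool

def solve_contingency(case):
    """Stand-in for one power-flow solve with a single element outaged."""
    branch, base_loading = case
    return branch, base_loading * 1.1     # pretend the outage adds 10% loading

def screen(cases, workers=4, limit=1.0):
    """Run all cases in parallel and flag post-contingency violations."""
    with Pool(workers) as pool:
        results = pool.map(solve_contingency, cases)
    return [branch for branch, loading in results if loading > limit]

if __name__ == "__main__":
    print(screen([("line-1", 0.95), ("line-2", 0.60)]))   # -> ['line-1']
```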
Ingestible wireless capsules for enhanced diagnostic inspection of gastrointestinal tract
NASA Astrophysics Data System (ADS)
Rasouli, Mahdi; Kencana, Andy Prima; Huynh, Van An; Ting, Eng Kiat; Lai, Joshua Chong Yue; Wong, Kai Juan; Tan, Su Lim; Phee, Soo Jay
2011-03-01
Wireless capsule endoscopy has become a common procedure for diagnostic inspection of the gastrointestinal tract. This method offers a less-invasive alternative to traditional endoscopy by eliminating its uncomfortable procedures. Moreover, it provides the opportunity to explore inaccessible areas of the small intestine. Current capsule endoscopes, however, move by peristalsis and are not capable of detailed, on-demand inspection of desired locations. Here, we propose and develop two wireless endoscopes with maneuverable vision systems to enhance diagnosis of gastrointestinal disorders. The vision systems in these capsules are equipped with mechanical actuators to adjust the position of the camera. This may help to cover larger areas of the digestive tract and investigate desired locations. Preliminary experimental results showed that the developed platform could successfully communicate with the external control unit through the human body and adjust the position of the camera to a limited degree.
Dissolvable tattoo sensors: from science fiction to a viable technology
NASA Astrophysics Data System (ADS)
Cheng, Huanyu; Yi, Ning
2017-01-01
Early surrealist paintings and science fiction movies envisioned dissolvable tattoo electronic devices. In this paper, we review the recent advances that transform that vision into a viable technology, with capabilities extending even beyond the early vision. Specifically, we focus on the discussion of stretchable designs for tattoo sensors and degradable materials for dissolvable sensors, in the form of inorganic devices with performance comparable to modern electronics. Integration of these two technologies as well as future developments of bio-integrated devices is also discussed. Many of the appealing ideas behind the development of these devices are drawn from nature and especially biological systems. Thus, bio-inspiration is believed to continue playing a key role in future devices for bio-integration and beyond.
NASA Strategic Roadmap Summary Report
NASA Technical Reports Server (NTRS)
Wilson, Scott; Bauer, Frank; Stetson, Doug; Robey, Judee; Smith, Eric P.; Capps, Rich; Gould, Dana; Tanner, Mike; Guerra, Lisa; Johnston, Gordon
2005-01-01
In response to the Vision, NASA commissioned strategic and capability roadmap teams to develop the pathways for turning the Vision into a reality. The strategic roadmaps were derived from the Vision for Space Exploration and the Aldrich Commission Report dated June 2004. NASA identified 12 strategic areas for roadmapping; the Agency added a thirteenth area on nuclear systems because the topic affects the entire program portfolio. To ensure long-term public visibility and engagement, NASA established a committee for each of the 13 areas. These committees, made up of prominent members of the scientific and aerospace industry communities and senior government personnel, worked under the Federal Advisory Committee Act. A committee was formed for each of the following program areas: 1) Robotic and Human Lunar Exploration; 2) Robotic and Human Exploration of Mars; 3) Solar System Exploration; 4) Search for Earth-Like Planets; 5) Exploration Transportation System; 6) International Space Station; 7) Space Shuttle; 8) Universe Exploration; 9) Earth Science and Applications from Space; 10) Sun-Solar System Connection; 11) Aeronautical Technologies; 12) Education; 13) Nuclear Systems. This document contains roadmap summaries for 10 of these 13 program areas; the International Space Station, Space Shuttle, and Education areas are excluded. The completed roadmaps from the following committees are collected in a separate Strategic Roadmaps volume: Robotic and Human Exploration of Mars; Solar System Exploration; Search for Earth-Like Planets; Universe Exploration; Earth Science and Applications from Space; and Sun-Solar System Connection. This document also contains membership rosters and charters for all 13 committees.
An integrated dexterous robotic testbed for space applications
NASA Technical Reports Server (NTRS)
Li, Larry C.; Nguyen, Hai; Sauer, Edward
1992-01-01
An integrated dexterous robotic system was developed as a testbed to evaluate various robotics technologies for advanced space applications. The system configuration consisted of a Utah/MIT Dexterous Hand, a PUMA 562 arm, a stereo vision system, and a multiprocessing computer control system. In addition to these major subsystems, a proximity sensing system was integrated with the Utah/MIT Hand to provide the capability for non-contact sensing of nearby objects. A high-speed fiber-optic link was used to transmit digitized proximity sensor signals back to the multiprocessing control system. The hardware system was designed to satisfy the requirements for both teleoperated and autonomous operations. The software system was designed to exploit parallel processing capability, pursue functional modularity, incorporate artificial intelligence for robot control, allow high-level symbolic robot commands, maximize reusable code, minimize compilation requirements, and provide an interactive application development and debugging environment for end users. An overview of the system hardware and software configurations is presented, and the implementation of subsystem functions is discussed.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-18
Meeting notice from the Federal Aviation Administration for RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). The meeting will be held April ...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-11
Meeting notice from the Federal Aviation Administration for RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). The meeting will be held October ...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-05
Meeting notice from the Federal Aviation Administration for RTCA Special Committee 213 / EUROCAE WG-79, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). The meeting will be held April ...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-28
Meeting notice from the Federal Aviation Administration for Joint RTCA Special Committee 213 / EUROCAE WG-79, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The ...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-22
Meeting notice from the Federal Aviation Administration for RTCA Special Committee 213 / EUROCAE WG-79, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will ...
Enhanced modeling and simulation of EO/IR sensor systems
NASA Astrophysics Data System (ADS)
Hixson, Jonathan G.; Miller, Brian; May, Christopher
2015-05-01
The testing and evaluation process developed by the Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) provides end-to-end systems evaluation, testing, and training of EO/IR sensors. By combining NV-LabCap, the Night Vision Integrated Performance Model (NV-IPM), One Semi-Automated Forces (OneSAF) input sensor file generation, and the Night Vision Image Generator (NVIG), NVESD provides confidence to the M&S community that EO/IR sensor developmental and operational testing and evaluation are accurately represented throughout the lifecycle of an EO/IR system. This new process allows for both theoretical and actual sensor testing. A sensor can be designed and modeled in NV-IPM and then seamlessly input into wargames for operational analysis. After theoretical design, prototype sensors can be measured using NV-LabCap, modeled in NV-IPM, and input into wargames for further evaluation. The measurement-to-high-fidelity-modeling process can then be repeated throughout the entire life cycle of an EO/IR sensor as needed, to include low-rate initial production (LRIP), full-rate production, and even after depot-level maintenance. This is a prototypical example of how an engineering-level model and higher-level simulations can share models to mutual benefit.
Adaptive multisensor fusion for planetary exploration rovers
NASA Technical Reports Server (NTRS)
Collin, Marie-France; Kumar, Krishen; Pampagnin, Luc-Henri
1992-01-01
The purpose of the adaptive multisensor fusion system currently being designed at NASA/Johnson Space Center is to provide a robotic rover with assured vision and safe navigation capabilities during robotic missions on planetary surfaces. Our approach consists of using multispectral sensing devices, ranging from visible to microwave wavelengths, to fulfill the perception needs of space robotics. Based on knowledge of the illumination conditions and the sensors' capabilities, the perception system should automatically select the best subset of sensors and the sensing modalities that will allow perception and interpretation of the environment. Then, based on theoretical reflectance and emittance models, the sensor data are fused to extract the physical and geometrical surface properties of the environment: surface slope, dielectric constant, temperature, and roughness. The theoretical concepts, the design, and first results of the multisensor perception system are presented.
Human-Robot Control Strategies for the NASA/DARPA Robonaut
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Culbert, Chris J.; Ambrose, Robert O.; Huber, E.; Bluethmann, W. J.
2003-01-01
The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.
ROVER: A prototype active vision system
NASA Astrophysics Data System (ADS)
Coombs, David J.; Marsh, Brian D.
1987-08-01
The Roving Eyes project is an experiment in active vision. We present the design and implementation of a prototype that tracks colored balls in images from an on-line charge coupled device (CCD) camera. Rover is designed to keep up with its rapidly changing environment by handling best and average case conditions and ignoring the worst case. This allows Rover's techniques to be less sophisticated and consequently faster. Each of Rover's major functional units is relatively isolated from the others, and an executive which knows all the functional units directs the computation by deciding which jobs would be most effective to run. This organization is realized with a priority queue of jobs and their arguments. Rover's structure not only allows it to adapt its strategy to the environment, but also makes the system extensible. A capability can be added to the system by adding a functional module with a well defined interface and by modifying the executive to make use of the new module. The current implementation is discussed in the appendices.
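A minimal sketch of the executive-plus-priority-queue organization described above, with invented module names and priorities (the report's actual implementation details are in its appendices):

```python
import heapq

class Executive:
    """Toy executive: picks the most effective job to run next from a queue."""
    def __init__(self):
        self._queue = []  # (priority, seq, job, args); lower priority runs first
        self._seq = 0     # tie-breaker so heapq never compares job functions

    def post(self, priority, job, *args):
        heapq.heappush(self._queue, (priority, self._seq, job, args))
        self._seq += 1

    def run(self):
        while self._queue:
            _, _, job, args = heapq.heappop(self._queue)
            job(self, *args)  # a job may post follow-up jobs

def grab_frame(ex):
    print("grab frame from camera")
    ex.post(1, find_ball, "frame0")  # schedule average-case processing

def find_ball(ex, frame):
    print("search", frame, "for the colored ball")

ex = Executive()
ex.post(0, grab_frame)
ex.run()
```

Extending such a system amounts to adding a new job function with a well-defined interface and teaching the executive when to post it, which mirrors the extensibility claim in the abstract.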
Evaluation of Candidate Millimeter Wave Sensors for Synthetic Vision
NASA Technical Reports Server (NTRS)
Alexander, Neal T.; Hudson, Brian H.; Echard, Jim D.
1994-01-01
The goal of the Synthetic Vision Technology Demonstration Program was to demonstrate and document the capabilities of current technologies to achieve safe aircraft landing, take off, and ground operation in very low visibility conditions. Two of the major thrusts of the program were (1) sensor evaluation in measured weather conditions on a tower overlooking an unused airfield and (2) flight testing of sensor and pilot performance via a prototype system. The presentation first briefly addresses the overall technology thrusts and goals of the program and provides a summary of MMW sensor tower-test and flight-test data collection efforts. Data analysis and calibration procedures for both the tower tests and flight tests are presented. The remainder of the presentation addresses the MMW sensor flight-test evaluation results, including the processing approach for determination of various performance metrics (e.g., contrast, sharpness, and variability). The variation of the very important contrast metric in adverse weather conditions is described. Design trade-off considerations for Synthetic Vision MMW sensors are presented.
NASA Astrophysics Data System (ADS)
Namazi, Hamidreza; Kulish, Vladimir V.; Akrami, Amin
2016-05-01
One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has yet been discovered between the structure of the visual stimulus and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to 'complex' visual stimuli. We demonstrated that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering the brain as the main part of the nervous system engaged in eye movements, we also analyzed the electroencephalogram (EEG) signal recorded during fixation. We found that there is a coupling between the fractality of the image, the EEG, and the fixational eye movements. The capability observed in this research can be further investigated and applied to the treatment of different vision disorders.
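One common way to quantify the "fractality" of a one-dimensional trace such as a fixational eye-movement or EEG signal is the Higuchi fractal dimension; the generic sketch below (not the authors' code) estimates it from the log-log slope of curve length versus scale:

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Estimate the Higuchi fractal dimension of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lengths = []
    for k in range(1, k_max + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)     # subsample starting at offset m
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()     # curve length at scale k
            norm = (n - 1) / ((len(idx) - 1) * k)    # correct for subsampling
            lk.append(dist * norm / k)
        lengths.append(np.mean(lk))
    ks = np.arange(1, k_max + 1)
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope

rng = np.random.default_rng(0)
print(higuchi_fd(np.cumsum(rng.standard_normal(2000))))  # ~1.5 for Brownian noise
```

A signal with dimension near 1 is smooth and persistent, while values approaching 2 indicate rougher, more space-filling dynamics, which is the sense in which image and eye-movement fractality can be compared.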
Military display performance parameters
NASA Astrophysics Data System (ADS)
Desjardins, Daniel D.; Meyer, Frederick
2012-06-01
The military display market is analyzed in terms of four of its segments: avionics, vetronics, dismounted soldier, and command and control. Requirements are summarized for a number of technology-driving parameters, to include luminance, night vision imaging system compatibility, gray levels, resolution, dimming range, viewing angle, video capability, altitude, temperature, shock and vibration, etc., for direct-view and virtual-view displays in cockpits and crew stations. Technical specifications are discussed for selected programs.
National Positioning, Navigation, and Timing Architecture Study
NASA Astrophysics Data System (ADS)
van Dyke, K.; Vicario, J.; Hothem, L.
2007-12-01
The purpose of the National Positioning, Navigation and Timing (PNT) Architecture effort is to help guide future PNT system-of-systems investment and implementation decisions. The Assistant Secretary of Defense for Networks and Information Integration and the Under Secretary of Transportation for Policy sponsored a National PNT Architecture study to provide more effective and efficient PNT capabilities focused on the 2025 timeframe and an evolutionary path for government-provided systems and services. U.S. Space-Based PNT Policy states that the U.S. must continue to improve and maintain GPS, augmentations to GPS, and back-up capabilities to meet growing national, homeland, and economic security needs. PNT touches almost every aspect of people's lives today. PNT is essential for defense and civilian applications ranging from the Department of Defense's joint network-centric and precision operations to the transportation and telecommunications sectors, improving efficiency, increasing safety, and raising productivity. Absence of an approved PNT architecture results in uncoordinated research efforts, lack of clear developmental paths, potentially wasteful procurements, and inefficient deployment of PNT resources. The national PNT architecture effort evaluated alternative future mixes of global (space- and non-space-based) and regional PNT solutions, PNT augmentations, and autonomous PNT capabilities to address priorities identified in the DoD PNT Joint Capabilities Document (JCD) and civil equivalents. The path to achieving the Should-Be architecture is described by the National PNT Architecture's Guiding Principles, representing an overarching Vision of the US' role in PNT, an architectural Strategy to fulfill that Vision, and four Vectors which support the Strategy. The National PNT Architecture effort has developed nineteen recommendations. Five foundational recommendations are tied directly to the Strategy, while the remaining fourteen individually support one of the Vectors, as will be described in this presentation. The results of this effort will support future decisions of bodies such as the DoD PNT and Civil Pos/Nav Executive Committees, as well as the National Space-Based PNT Executive Committee (EXCOM).
Experiences in teleoperation of land vehicles
NASA Technical Reports Server (NTRS)
Mcgovern, Douglas E.
1989-01-01
Teleoperation of land vehicles allows the removal of the operator from the vehicle to a remote location. This can greatly increase operator safety and comfort in applications such as security patrol or military combat. The cost includes system complexity and reduced system performance. All feedback on vehicle performance and on environmental conditions must pass through sensors, a communications channel, and displays. In particular, this requires vision to be transmitted by closed-circuit television with a consequent degradation of information content. Vehicular teleoperation, as a result, places severe demands on the operator. Teleoperated land vehicles have been built and tested by many organizations, including Sandia National Laboratories (SNL). The SNL fleet presently includes eight vehicles of varying capability. These vehicles have been operated using different types of controls, displays, and visual systems. Experimentation studying the effects of vision system characteristics on off-road, remote driving was performed for conditions of fixed camera versus steering-coupled camera and of color versus black-and-white video display. Additionally, much experience was gained through system demonstrations and hardware development trials. The preliminary experimental findings and the results of the accumulated operational experience are discussed.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-03
Meeting notice from the Federal Aviation Administration for Joint RTCA Special Committee 213 / EUROCAE WG-79, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) ...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-12
Meeting notice from the Federal Aviation Administration for Joint RTCA Special Committee 213 / EUROCAE WG-79, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) ...
Definition of display/control requirements for assault transport night/adverse weather capability
NASA Technical Reports Server (NTRS)
Milelli, R. J.; Mowery, G. W.; Pontelandolfo, C.
1982-01-01
A Helicopter Night Vision System was developed to improve low-altitude night and/or adverse weather assault transport capabilities. Man-in-the-loop simulation experiments were performed to define the minimum display and control requirements for the assault transport mission and to investigate forward-looking infrared sensor requirements, along with alternative displays such as panel-mounted displays (PMD), helmet-mounted displays (HMD), and integrated control display units. Also explored were navigation requirements, pilot/copilot interaction, and overall cockpit arrangement. Pilot use of an HMD and copilot use of a PMD appear to be both the preferred and the most effective night navigation combination.
In-situ Resource Utilization (ISRU) and Lunar Surface Systems
NASA Technical Reports Server (NTRS)
Sanders, Jerry; Larson, Bill; Sacksteder, Kurt
2007-01-01
This viewgraph presentation reviews the benefits of In-Situ Resource Utilization (ISRU) on the surface of the Moon, including the commercialization of lunar ISRU. ISRU will strongly influence architecture and critical technologies; it is a critical capability and a key element in implementing the Vision for Space Exploration (VSE). ISRU strongly affects lunar outpost logistics, design, and crew safety, as well as outpost critical technologies. The ISRU mass investment is minimal compared to the immediate and long-term architecture delivery mass and reuse capabilities it provides. Therefore, investment in ISRU constitutes a commitment to the mid- and long-term future of human exploration.
Autonomous Navigation Results from the Mars Exploration Rover (MER) Mission
NASA Technical Reports Server (NTRS)
Maimone, Mark; Johnson, Andrew; Cheng, Yang; Willson, Reg; Matthies, Larry H.
2004-01-01
In January 2004, the Mars Exploration Rover (MER) mission landed two rovers, Spirit and Opportunity, on the surface of Mars. Several autonomous navigation capabilities were employed in space for the first time in this mission. In the Entry, Descent, and Landing (EDL) phase, both landers used a vision system called the Descent Image Motion Estimation System (DIMES) to estimate horizontal velocity during the last 2000 meters (m) of descent, by tracking features on the ground with a down-looking camera, in order to control retro-rocket firing to reduce horizontal velocity before impact. During surface operations, the rovers navigate autonomously using stereo vision for local terrain mapping and a local, reactive planning algorithm called Grid-based Estimation of Surface Traversability Applied to Local Terrain (GESTALT) for obstacle avoidance. In areas of high slip, stereo vision-based visual odometry has been used to estimate rover motion. As of mid-June, Spirit had traversed 3405 m, of which 1253 m were done autonomously; Opportunity had traversed 1264 m, of which 224 m were autonomous. These results have contributed substantially to the success of the mission and paved the way for increased levels of autonomy in future missions.
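The sketch below is an illustrative toy in the spirit of GESTALT's grid-based traversability evaluation, not flight code: each cell of a local elevation map is scored by step height and roughness, with the window size and thresholds invented for the example.

```python
import numpy as np

def traversability(elev, max_step=0.15, max_rough=0.05):
    """Boolean grid: True where the local terrain looks safe to drive."""
    # step hazard: largest height change to a neighboring cell
    step = np.zeros_like(elev)
    step[1:, :] = np.abs(np.diff(elev, axis=0))
    step[:, 1:] = np.maximum(step[:, 1:], np.abs(np.diff(elev, axis=1)))
    # roughness: deviation from the 3x3 neighborhood mean
    pad = np.pad(elev, 1, mode="edge")
    mean3 = sum(pad[i:i + elev.shape[0], j:j + elev.shape[1]]
                for i in range(3) for j in range(3)) / 9.0
    rough = np.abs(elev - mean3)
    return (step < max_step) & (rough < max_rough)

elev = np.zeros((20, 20))
elev[8:12, 8:12] = 0.3  # a rock-sized bump the rover should steer around
print(traversability(elev).astype(int))
```

A reactive planner of this kind then scores candidate drive arcs by the cells they cross and picks the best-scoring arc toward the goal.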
Low vision system for rapid near- and far-field magnification switching.
Ambrogi, Nicholas; Dias-Carlson, Rachel; Gantner, Karl; Gururaj, Anisha; Hanumara, Nevan; Narain, Jaya; Winter, Amos; Zielske, Iris; Satgunam, PremNandhini; Bagga, Deepak Kumar; Gothwal, Vijaya
2015-01-01
People suffering from low vision, a condition caused by a variety of eye-related diseases and/or disorders, find their ability to read greatly improved when text is magnified between 2 and 6 times. Assistive devices currently on the market are geared towards reading text either far away (~20 ft.) or very near (~2 ft.). This is a problem especially for students suffering from low vision, as they struggle to flip their focus between the chalkboard (far-field) and their notes (near-field). A solution to this problem is of high interest to eye care facilities in the developing world: no devices currently exist that have the aforementioned capabilities at an accessible price point. Through consultation with specialists at the L.V. Prasad Eye Institute in India, the authors propose, design, and demonstrate a device that fills this need, directed primarily at the Indian market. The device uses available hardware technologies to electronically capture video ahead of the user and to zoom and display the image in real time on LCD screens mounted in front of the user's eyes. The design is integrated as a wearable system in a glasses form factor.
Institutional Vision and Academic Advising
ERIC Educational Resources Information Center
Abelman, Robert; Molina, Anthony D.
2006-01-01
Quality academic advising in higher education is the product of a multitude of elements not the least of which is institutional vision. By recognizing and embracing an institution's concept of its capabilities and the kinds of educated human beings it is attempting to cultivate, advisors gain an invaluable apparatus to guide the provision of…
On the performances of computer vision algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.
2012-01-01
Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we consider different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performances of the involved mobile platforms: Nokia N900, LG Optimus One, and Samsung Galaxy SII.
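A comparison like this ultimately reduces to timing the same kernels on each device; a minimal harness of that kind is sketched below (a toy box filter stands in for the real keypoint-extraction, face-detection, and segmentation workloads, and all names here are illustrative).

```python
import time
import numpy as np

def box_filter(img):
    """Toy 3x3 box filter standing in for a real vision kernel."""
    pad = np.pad(img, 1, mode="edge")
    return sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def benchmark(fn, img, runs=20):
    fn(img)  # warm-up pass (caches, allocator)
    t0 = time.perf_counter()
    for _ in range(runs):
        fn(img)
    return (time.perf_counter() - t0) / runs * 1e3  # ms per frame

img = np.random.rand(480, 640).astype(np.float32)
print(f"box_filter: {benchmark(box_filter, img):.2f} ms/frame")
```

Running the same harness on each handset gives directly comparable milliseconds-per-frame figures, which is the metric such platform comparisons typically report.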
CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences
NASA Technical Reports Server (NTRS)
Slotnick, Jeffrey; Khodadoust, Abdollah; Alonso, Juan; Darmofal, David; Gropp, William; Lurie, Elizabeth; Mavriplis, Dimitri
2014-01-01
This report documents the results of a study to address the long range, strategic planning required by NASA's Revolutionary Computational Aerosciences (RCA) program in the area of computational fluid dynamics (CFD), including future software and hardware requirements for High Performance Computing (HPC). Specifically, the "Vision 2030" CFD study is to provide a knowledge-based forecast of the future computational capabilities required for turbulent, transitional, and reacting flow simulations across a broad Mach number regime, and to lay the foundation for the development of a future framework and/or environment where physics-based, accurate predictions of complex turbulent flows, including flow separation, can be accomplished routinely and efficiently in cooperation with other physics-based simulations to enable multi-physics analysis and design. Specific technical requirements from the aerospace industrial and scientific communities were obtained to determine critical capability gaps, anticipated technical challenges, and impediments to achieving the target CFD capability in 2030. A preliminary development plan and roadmap were created to help focus investments in technology development to help achieve the CFD vision in 2030.
Flight Research and Validation Formerly Experimental Capabilities Supersonic Project
NASA Technical Reports Server (NTRS)
Banks, Daniel
2009-01-01
This slide presentation reviews the work of the Experimental Capabilities Supersonic project, which is being reorganized into Flight Research and Validation. The work of the Experimental Capabilities project in FY '09 is reviewed, and the specific centers assigned to do the work are given. The portfolio of the newly formed Flight Research and Validation (FRV) group is also reviewed, and the various FRV projects for FY '10 are detailed. These projects include: Eagle Probe, Channeled Centerbody Inlet Experiment (CCIE), Supersonic Boundary Layer Transition test (SBLT), Aero-elastic Test Wing-2 (ATW-2), G-V External Vision Systems (G5 XVS), Air-to-Air Schlieren (A2A), In-Flight Background Oriented Schlieren (BOS), Dynamic Inertia Measurement Technique (DIM), and Advanced In-Flight IR Thermography (AIR-T).
The flight telerobotic servicer and technology transfer
NASA Technical Reports Server (NTRS)
Andary, James F.; Bradford, Kayland Z.
1991-01-01
The Flight Telerobotic Servicer (FTS) project at the Goddard Space Flight Center is developing an advanced telerobotic system to assist in and reduce crew extravehicular activity (EVA) for Space Station Freedom (SSF). The FTS will provide a telerobotic capability in the early phases of the SSF program and will be employed for assembly, maintenance, and inspection applications. The current state of space technology and the general nature of the FTS tasks dictate that the FTS be designed with sophisticated teleoperational capabilities for its internal primary operating mode. However, technologies such as advanced computer vision and autonomous planning techniques would greatly enhance the FTS capabilities to perform autonomously in less structured work environments. Another objective of the FTS program is to accelerate technology transfer from research to U.S. industry.
SAMURAI: Polar AUV-Based Autonomous Dexterous Sampling
NASA Astrophysics Data System (ADS)
Akin, D. L.; Roberts, B. J.; Smith, W.; Roderick, S.; Reves-Sohn, R.; Singh, H.
2006-12-01
While autonomous undersea vehicles are increasingly being used for surveying and mapping missions, as of yet there has been little concerted effort to create a system capable of performing physical sampling or other manipulation of the local environment. This type of activity has typically been performed under teleoperated control from ROVs, which provides high-bandwidth real-time human direction of the manipulation activities. Manipulation from an AUV will require a completely autonomous sampling system, which implies not only advanced technologies such as machine vision and autonomous target designation, but also dexterous robot manipulators to perform the actual sampling without human intervention. As part of the NASA Astrobiology Science and Technology for Exploring the Planets (ASTEP) program, the University of Maryland Space Systems Laboratory has been adapting and extending robotics technologies developed for spacecraft assembly and maintenance to the problem of autonomous sampling of biologicals and soil samples around hydrothermal vents. The Sub-polar ice Advanced Manipulator for Universal Sampling and Autonomous Intervention (SAMURAI) system is comprised of a 6000-meter-capable six-degree-of-freedom dexterous manipulator, along with an autonomous vision system, a multi-level control system, and sampling end effectors and storage mechanisms to allow collection of samples from vent fields. SAMURAI will be integrated onto the Woods Hole Oceanographic Institution (WHOI) Jaguar AUV and used in the Arctic during the fall of 2007 for autonomous vent field sampling on the Gakkel Ridge. Under the current operations concept, the JAGUAR and PUMA AUVs will survey the water column and localize on hydrothermal vents. Early mapping missions will create photomosaics of the vents and local surroundings, allowing scientists on the mission to designate desirable sampling targets. Based on physical characteristics such as size, shape, and coloration, the targets will be loaded into the SAMURAI control system, and JAGUAR (with SAMURAI mounted to the lower forward hull) will return to the designated target areas. Once on site, vehicle control will be turned over to the SAMURAI controller, which will perform vision-based guidance to the sampling site and will then ground the AUV to the sea bottom for stability. The SAMURAI manipulator will collect samples, such as sessile biologicals, geological samples, and (potentially) vent fluids, and store the samples for the return trip. After several hours of sampling operations on one or several sites, JAGUAR control will be returned to the WHOI onboard controller for the return to the support ship. (Operational details of AUV operations on the Gakkel Ridge mission are presented in other papers at this conference.) Between sorties, SAMURAI end effectors can be changed out on the surface for specific targets, such as push cores or larger biologicals such as tube worms. In addition to the obvious challenges in autonomous vision-based manipulator control from a free-flying support vehicle, significant development challenges have been the design of a highly capable robotic arm within the mass limitations (both wet and dry) of the JAGUAR vehicle, the development of a highly robust manipulator with modular maintenance units for extended polar operations, and the creation of a robot-based sample collection and holding system for multiple heterogeneous samples on a single extended sortie.
Automated AFM for small-scale and large-scale surface profiling in CMP applications
NASA Astrophysics Data System (ADS)
Zandiatashbar, Ardavan; Kim, Byong; Yoo, Young-kook; Lee, Keibock; Jo, Ahjin; Lee, Ju Suk; Cho, Sang-Joon; Park, Sang-il
2018-03-01
As feature sizes shrink in the foundries, the need for inline high-resolution surface profiling with versatile capabilities is increasing. One important application area is the chemical mechanical planarization (CMP) process. We introduce a new generation of atomic force profiler (AFP) using a decoupled-scanner design. The system is capable of small-scale profiling using the XY scanner and large-scale profiling using the sliding stage. The decoupled-scanner design enables enhanced vision, which helps minimize positioning error for locations of interest on highly polished dies. Non-contact mode imaging is another feature of interest in this system; it is used for surface roughness measurement, automatic defect review, and deep trench measurement. Examples of measurements performed using the atomic force profiler are demonstrated.
Three-dimensional tracking and imaging laser scanner for space operations
NASA Astrophysics Data System (ADS)
Laurin, Denis G.; Beraldin, J. A.; Blais, Francois; Rioux, Marc; Cournoyer, Luc
1999-05-01
This paper presents the development of a laser range scanner (LARS) as a three-dimensional sensor for space applications. The scanner is a versatile system capable of surface imaging, target ranging, and tracking. It is capable of short-range (0.5 m to 20 m) and long-range (20 m to 10 km) sensing using triangulation and time-of-flight (TOF) methods, respectively. At short range (1 m), the resolution is sub-millimeter and drops gradually with distance (2 cm at 10 m). For long range, the TOF method provides a constant resolution of plus or minus 3 cm, independent of range. The LARS could complement the existing Canadian Space Vision System (CSVS) for robotic manipulation. As an active vision system, the LARS is immune to sunlight and adverse lighting; this is a major advantage over the CSVS, as outlined in this paper. The LARS could also replace existing radar systems used for rendezvous and docking. There are clear advantages of an optical system over a microwave radar in terms of size, mass, power, and precision. Equipped with two high-speed galvanometers, the laser can be steered to address any point in a 30 degree by 30 degree field of view. The scanning can be continuous (raster scan, Lissajous) or direct (random). This gives the scanner the ability to register high-resolution 3D images of range and intensity (up to 4000 x 4000 pixels) and to perform point target tracking as well as object recognition and geometrical tracking. The imaging capability of the scanner using an eye-safe laser is demonstrated. An efficient fiber laser delivers 60 mW of CW power or 3 microjoule pulses at 20 kHz for TOF operation. Implementation of search and track of multiple targets is also demonstrated. For a single target, refresh rates up to 137 Hz are possible. Considerations for space qualification of the scanner are discussed. Typical space operations, such as docking, object attitude tracking, and inspections are described.
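Illustrative range-resolution relations behind the two sensing modes (generic symbols, not taken from the paper): triangulation error grows roughly quadratically with range z for baseline b, focal length f, and spot-localization error \delta p, while pulsed time of flight is range-independent with timing uncertainty \delta t,

```latex
\delta z_{\mathrm{tri}} \approx \frac{z^{2}}{f\,b}\,\delta p,
\qquad
\delta z_{\mathrm{tof}} = \frac{c\,\delta t}{2}
```

which is consistent with the reported behavior: sub-millimeter triangulation accuracy at 1 m degrading to about 2 cm at 10 m, versus a constant plus or minus 3 cm for TOF.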
Night vision goggle stimulation using LCoS and DLP projection technology, which is better?
NASA Astrophysics Data System (ADS)
Ali, Masoud H.; Lyon, Paul; De Meerleer, Peter
2014-06-01
High-fidelity night-vision training has become important for many of the simulation systems being procured today. The end-users of these training systems prefer using their actual night-vision goggle (NVG) headsets, which requires that the visual display system stimulate the NVGs in a realistic way. Historically, NVG stimulation was done with cathode-ray tube (CRT) projectors; as that technology became obsolete, training simulators in recent years have stimulated NVGs with laser, LCoS, and DLP projectors, and LCoS and DLP have emerged as the preferred approaches. Both LCoS and DLP technologies have advantages and disadvantages for stimulating NVGs. LCoS projectors can have more than 5-10 times the contrast capability of DLP projectors. The larger the difference between the projected black level and the brightest object in a scene, the better the NVG stimulation effects can be; this is an advantage of LCoS technology, especially when the proper NVG wavelengths are used. Single-chip DLP projectors, even though they have much reduced contrast compared to LCoS projectors, can use LED illuminators in a sequential red-green-blue fashion to create a projected image, and it is straightforward to add an extra infrared (NVG-wavelength) LED into this sequential chain of illumination. The content of this NVG channel can be independent of the visible scene, which allows effects to be added that can compensate for the lack of contrast inherent in a DLP device. This paper expands on the differences between LCoS and DLP projectors for stimulating NVGs and summarizes the benefits of both in night-vision simulation training systems.
Integrated Evaluation of Closed Loop Air Revitalization System Components
NASA Technical Reports Server (NTRS)
Murdock, K.
2010-01-01
NASA's vision and mission statements include an emphasis on human exploration of space, which requires environmental control and life support technologies. This Contractor Report (CR) describes the development and evaluation of an Air Revitalization System, modeling and simulation of its components, and integrated hardware testing, with the goal of better understanding the inherent capabilities and limitations of this closed-loop system. Major components integrated and tested included a 4-Bed Molecular Sieve, a Mechanical Compressor Engineering Development Unit, a Temperature Swing Adsorption Compressor, and a Sabatier Engineering and Development Unit. The requisite methodology and technical results are contained in this CR.
Hyperspectral Systems Increase Imaging Capabilities
NASA Technical Reports Server (NTRS)
2010-01-01
In 1983, NASA started developing hyperspectral systems to image in the ultraviolet and infrared wavelengths. In 2001, the first on-orbit hyperspectral imager, Hyperion, was launched aboard the Earth Observing-1 spacecraft. Based on the hyperspectral imaging sensors used in Earth observation satellites, Stennis Space Center engineers and Institute for Technology Development researchers collaborated on a new design that was smaller and used an improved scanner. Featured in Spinoff 2007, the technology is now exclusively licensed by Themis Vision Systems LLC, of Richmond, Virginia, and is widely used in medical and life sciences, defense and security, forensics, and microscopy.
A Graphical Operator Interface for a Telerobotic Inspection System
NASA Technical Reports Server (NTRS)
Kim, W. S.; Tso, K. S.; Hayati, S.
1993-01-01
Operator interface has recently emerged as an important element for efficient and safe operator interactions with a telerobotic system. Recent advances in graphical user interface (GUI) and graphics/video merging technologies enable development of more efficient, flexible operator interfaces. This paper describes an advanced graphical operator interface newly developed for a remote surface inspection system at the Jet Propulsion Laboratory. The interface has been designed so that remote surface inspection can be performed by a single operator with integrated robot control and image inspection capability. It supports three inspection strategies: teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.
Wireless sensor systems for sense/decide/act/communicate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Nina M.; Cushner, Adam; Baker, James A.
2003-12-01
After 9/11, the United States (U.S.) was suddenly pushed into challenging situations it could no longer ignore as a simple spectator. The War on Terrorism (WoT) was suddenly ignited, and no one knows when this war will end. While the government is exploring many existing and potential technologies, the area of wireless sensor networks (WSN) has emerged as a foundation for establishing future national security. Unlike other technologies, WSN could provide the virtual presence capabilities needed for precision awareness and response in military, intelligence, and homeland security applications. The Advance Concept Group (ACG) vision of the Sense/Decide/Act/Communicate (SDAC) sensor system is an instantiation of the WSN concept that takes a 'system of systems' view. Each sensing node will exhibit the ability to: sense the environment around it, decide as a collective what the situation of the environment is, act in an intelligent and coordinated manner in response to this situational determination, and communicate its actions amongst the other nodes and to a human command. This LDRD report provides a review of the research and development done to bring the SDAC vision closer to reality.
Automated intelligent video surveillance system for ships
NASA Astrophysics Data System (ADS)
Wei, Hai; Nguyen, Hieu; Ramu, Prakash; Raju, Chaitanya; Liu, Xiaoqing; Yadegar, Jacob
2009-05-01
To protect naval and commercial ships from attack by terrorists and pirates, it is important to have automatic surveillance systems able to detect, identify, track, and alert the crew to small watercraft that might pursue malicious intentions, while ruling out non-threat entities. Radar systems have limitations on the minimum detectable range and lack high-level classification power. In this paper, we present an innovative Automated Intelligent Video Surveillance System for Ships (AIVS3) as a vision-based solution for ship security. Capitalizing on advanced computer vision algorithms and practical machine learning methodologies, AIVS3 is not only capable of efficiently and robustly detecting, classifying, and tracking various maritime targets, but is also able to fuse heterogeneous target information to interpret scene activities, associate targets with levels of threat, and issue the corresponding alerts/recommendations to the man-in-the-loop (MITL). AIVS3 has been tested in various maritime scenarios and has shown accurate and effective threat detection performance. By reducing the reliance on human eyes to monitor cluttered scenes, AIVS3 will save manpower while increasing the accuracy of detection and identification of asymmetric attacks for ship protection.
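A hedged sketch of the kind of rule-based fusion a threat-assessment stage might apply on top of the detector and tracker; the classes, weights, and thresholds here are invented for illustration, not AIVS3's.

```python
# Invented classes, weights, and thresholds; a real system would learn these.
CLASS_RISK = {"jet_ski": 0.6, "speedboat": 0.8, "fishing_boat": 0.2, "buoy": 0.0}

def threat_level(track):
    """track: dict with 'cls', 'range_m', 'closing_mps' from the tracker."""
    risk = CLASS_RISK.get(track["cls"], 0.5)  # unknown class -> medium prior
    if track["closing_mps"] > 5.0:            # fast approach toward own ship
        risk += 0.2
    if track["range_m"] < 500.0:              # inside the standoff perimeter
        risk += 0.2
    risk = min(risk, 1.0)
    return "ALERT" if risk >= 0.8 else "WATCH" if risk >= 0.4 else "IGNORE"

print(threat_level({"cls": "speedboat", "range_m": 400.0, "closing_mps": 9.0}))
```

The value of fusing class labels with kinematics is that neither alone suffices: a fast-closing fishing boat and a loitering speedboat both warrant different responses than their class or track would suggest in isolation.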
NASA Technical Reports Server (NTRS)
Schoeberl, Mark R.
2004-01-01
The Sensor Web concept emerged as the number of Earth science satellites began to increase in recent years. The idea, part of a vision for the future of Earth science, was that the sensor systems would be linked in an active way to provide improved forecast capability. This means that a nearly autonomous system would need to be developed to allow the satellites to re-target and deploy assets for particular phenomena or provide on-board processing for real-time data. This talk will describe several elements of the sensor web.
Battlespace Dominance: Winning the Information War
1996-06-01
NRaD is uniquely qualified to provide the expertise and tools to achieve information dominance. Almost every NRaD effort deals with acquiring data, transforming data into ... prototyping to fully produced systems. NRaD is applying these capabilities to the central element of future naval warfare: information dominance. NRaD's vision ... making information dominance for the warrior a reality is based on achieving five interrelated objectives, or Corporate Initiatives. Our first ...
Ackland, Peter
2012-01-01
In the first 12 years of VISION 2020, sound programmatic approaches have been developed that are capable of delivering equitable eye health services to even the most remote and impoverished communities. A body of evidence around the economic arguments for investment in eye health has been developed that has fuelled successful advocacy work, resulting in supportive high-level policy statements. More than 100 national plans to achieve the elimination of avoidable blindness have been developed, and some notable contributions have been made from the corporate and government sectors to resource eye health programs. Good progress has been made in controlling infectious blinding diseases, and at the very least there is anecdotal evidence to suggest that the global increase in the prevalence of blindness and visual impairment has been reversed in recent years, despite the ever-increasing and more elderly global population. However, if we are to achieve the goal of VISION 2020, we require a considerable scaling up of current efforts; this will depend on our future success in two key areas: i) successful advocacy and engagement at the individual country level to secure significantly enhanced national government commitment to financing their own VISION 2020 plans; and ii) a new approach to VISION 2020 thinking that integrates eye health into health system development and develops new partnerships with wider health development initiatives. PMID:22944746
Science Opportunities Enabled by NASA's Constellation System: Interim Report
NASA Technical Reports Server (NTRS)
2008-01-01
In 2004 NASA initiated studies of advanced science mission concepts known as the Vision Missions, inspired by a series of NASA roadmap activities conducted in 2003. Also in 2004, NASA began implementation of the first phases of a new space exploration policy, the Vision for Space Exploration. This implementation effort included development of a new human-carrying spacecraft, known as Orion, and two new launch vehicles, the Ares I and Ares V rockets, collectively called the Constellation System. NASA asked the National Research Council (NRC) to evaluate the science opportunities enabled by the Constellation System (see Preface) and to produce an interim report on a short time schedule and a final report by November 2008. The committee notes, however, that the Constellation System and its Orion and Ares vehicles have been justified by NASA and selected in order to enable human exploration beyond low Earth orbit, and not to enable science missions. This interim report of the Committee on Science Opportunities Enabled by NASA's Constellation System evaluates the 11 Vision Mission studies presented to it and groups them into two categories: those more deserving of future study, and those less deserving of future study. Although its statement of task also refers to Earth science missions, the committee points out that the Vision Missions effort was focused on future astronomy, heliophysics, and planetary exploration and did not include any Earth science studies because, at the time, the NRC was conducting the first Earth science decadal survey, and funding Earth science studies as part of the Vision Missions effort would have interfered with that process. Consequently, no Earth science missions are evaluated in this interim report. However, the committee will evaluate any Earth science mission proposal submitted in response to its request for information issued in March 2008 (see Appendix A). The committee based its evaluation of the preexisting Vision Missions studies on two criteria: whether the concepts offered the potential for a significant scientific advance, and whether or not the concepts would benefit from the Constellation System. The committee determined that all of the concepts offered the possibility of a significant scientific advance, but it cautions that such an evaluation ultimately must be made by the decadal survey process, and it emphasizes that this interim report's evaluation should not be considered an endorsement of the scientific merit of these proposals, which must of course be evaluated relative to other proposals. The committee determined that seven of these concepts would benefit from the Constellation System, whereas four would not, but it stresses that this conclusion does not reflect an evaluation of the scientific merit of the projects, but rather an assessment of whether or not new capabilities provided by the Constellation System could significantly affect them. Some of the mission concepts, such as the Advanced Compton Telescope, already offer a significant scientific advance and fit easily within the mass and volume constraints of existing launch vehicles. Other mission concepts, such as the Palmer Quest proposal to drill through the Mars polar cap, are not constrained by the launch vehicle, but rather by other technology limitations.
The committee evaluated the mission concepts as presented to it, aware nevertheless that proposing a far larger and more ambitious mission with the same science goals might be possible given the capabilities of the Ares V launch vehicle. (Such proposals can be submitted in response to the committee's request for information, to be evaluated in its final report.) See Table S.1 for a summary of the Vision Missions, including their cost estimates, technical maturity, and reasons that they might benefit from the Constellation System. The committee developed several findings and recommendations.
Połap, Dawid; Kęsik, Karolina; Książek, Kamil; Woźniak, Marcin
2017-12-04
Augmented reality (AR) is becoming increasingly popular due to its numerous applications. This is especially evident in games, medicine, education, and other areas that support our everyday activities. Moreover, this kind of computer system not only improves our vision and our perception of the world that surrounds us, but also adds additional elements, modifies existing ones, and gives additional guidance. In this article, we focus on interpreting a reality-based real-time environment evaluation for informing the user about impending obstacles. The proposed solution is based on a hybrid architecture that is capable of estimating as much incoming information as possible. The proposed solution has been tested and discussed with respect to the advantages and disadvantages of different possibilities using this type of vision.
Designing and validating the joint battlespace infosphere
NASA Astrophysics Data System (ADS)
Peterson, Gregory D.; Alexander, W. Perry; Birdwell, J. Douglas
2001-08-01
Fielding and managing the dynamic, complex information systems infrastructure necessary for defense operations presents significant opportunities for revolutionary improvements in capabilities. An example of this technology trend is the creation and validation of the Joint Battlespace Infosphere (JBI) being developed by the Air Force Research Laboratory. The JBI is a system of systems that integrates, aggregates, and distributes information to users at all echelons, from the command center to the battlefield. The JBI is a key enabler of the Air Force's Joint Vision 2010 core competencies, such as Information Superiority, by providing increased situational awareness, planning capabilities, and dynamic execution. At the same time, creating this new operational environment introduces significant risk due to an increased dependency on computational and communications infrastructure combined with more sophisticated and frequent threats. Hence, the challenge facing the nation is to find the most effective means of exploiting new computational and communications technologies while mitigating the impact of attacks, faults, and unanticipated usage patterns.
Machine vision process monitoring on a poultry processing kill line: results from an implementation
NASA Astrophysics Data System (ADS)
Usher, Colin; Britton, Dougl; Daley, Wayne; Stewart, John
2005-11-01
Researchers at the Georgia Tech Research Institute designed a vision inspection system for poultry kill-line sorting with the potential for process control at various points throughout a processing facility. This system has been successfully operating in a plant for over two and a half years and has been shown to provide multiple benefits. With the introduction of HACCP-Based Inspection Models (HIMP), the opportunity for automated inspection systems to emerge as viable alternatives to human screening is promising. As more plants move to HIMP, these systems have great potential for augmenting a processing facility's visual inspection process. This will help to maintain a more consistent and potentially higher throughput while helping the plant remain within the HIMP performance standards. In recent years, several vision systems have been designed to analyze the exterior of a chicken and are capable of identifying Food Safety 1 (FS1) type defects under HIMP regulatory specifications. This means that a reliable vision system can be used in a processing facility as a carcass sorter to automatically detect and divert product that is not suitable for further processing. This improves evisceration-line efficiency by creating a smaller set of features that human screeners are required to identify, which can reduce the required number of screeners or allow for faster processing-line speeds. In addition to identifying FS1-category defects, the Georgia Tech vision system can also identify multiple "Other Consumer Protection" (OCP) category defects such as skin tears, bruises, broken wings, and cadavers. Monitoring these data in a near-real-time system allows the processing facility to address anomalies as soon as they occur. The Georgia Tech vision system can record minute-by-minute averages of the following defects: septicemia/toxemia, cadaver, over-scald, bruises, skin tears, and broken wings. In addition to these defects, the system also records the length and width of the entire chicken and of parts such as the breast, the legs, the wings, and the neck. The system also records average color and mis-hung birds, which can cause problems in further processing. Other relevant production information is also recorded, including truck arrival and offloading times, catching crew and flock serviceman data, the grower, the breed of chicken, and the number of dead-on-arrival (DOA) birds per truck. Several interesting observations from the Georgia Tech vision system, which has been installed in a poultry processing plant for several years, are presented. Trend analysis has been performed on the performance of the catching crews and flock servicemen, and on the results for processed chickens as they relate to bird dimensions and equipment settings in the plant. The results have allowed researchers and plant personnel to identify potential areas for improvement in the processing operation, which should result in improved efficiency and yield.
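As an illustration of the minute-by-minute bookkeeping described above (the defect names mirror the abstract; everything else, including the function names, is an assumption), a minimal aggregator might bucket per-bird defect flags by wall-clock minute:

```python
from collections import defaultdict

DEFECTS = ["septicemia_toxemia", "cadaver", "over_scald",
           "bruise", "skin_tear", "broken_wing"]

# minute index -> per-minute counters
buckets = defaultdict(lambda: {"birds": 0, **{d: 0 for d in DEFECTS}})

def record(timestamp_s, flags):
    """flags: set of defect names the vision system raised for one bird."""
    b = buckets[int(timestamp_s // 60)]
    b["birds"] += 1
    for d in flags:
        b[d] += 1

record(12.0, {"bruise"})
record(30.5, set())
record(70.1, {"skin_tear"})
for minute, b in sorted(buckets.items()):
    rates = {d: b[d] / b["birds"] for d in DEFECTS if b[d]}
    print(f"minute {minute}: {b['birds']} birds, rates={rates}")
```

Keeping the data at minute granularity is what lets plant personnel tie a defect spike back to a specific truck, catching crew, or equipment setting.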
Auditory opportunity and visual constraint enabled the evolution of echolocation in bats.
Thiagavel, Jeneni; Cechetto, Clément; Santana, Sharlene E; Jakobsen, Lasse; Warrant, Eric J; Ratcliffe, John M
2018-01-08
Substantial evidence now supports the hypothesis that the common ancestor of bats was nocturnal and capable of both powered flight and laryngeal echolocation. This scenario entails a parallel sensory and biomechanical transition from a nonvolant, vision-reliant mammal to one capable of sonar and flight. Here we consider anatomical constraints and opportunities that led to a sonar rather than vision-based solution. We show that bats' common ancestor had eyes too small to allow for successful aerial hawking of flying insects at night, but an auditory brain design sufficient to afford echolocation. Further, we find that among extant predatory bats (all of which use laryngeal echolocation), those with putatively less sophisticated biosonar have relatively larger eyes than do more sophisticated echolocators. We contend that signs of ancient trade-offs between vision and echolocation persist today, and that non-echolocating, phytophagous pteropodid bats may retain some of the necessary foundations for biosonar.
A Concept for Robust, High Density Terminal Air Traffic Operations
NASA Technical Reports Server (NTRS)
Isaacson, Douglas R.; Robinson, John E.; Swenson, Harry N.; Denery, Dallas G.
2010-01-01
This paper describes a concept for future high-density, terminal air traffic operations that has been developed by interpreting the Joint Planning and Development Office's vision for the Next Generation (NextGen) Air Transportation System and coupling it with emergent NASA and other technologies and procedures during the NextGen timeframe. The concept described in this paper includes five core capabilities: 1) Extended Terminal Area Routing, 2) Precision Scheduling Along Routes, 3) Merging and Spacing, 4) Tactical Separation, and 5) Off-Nominal Recovery. Gradual changes are introduced to the National Airspace System (NAS) by phased enhancements to the core capabilities in the form of increased levels of automation and decision support as well as targeted task delegation. NASA will be evaluating these conceptual technological enhancements in a series of human-in-the-loop simulations and will accelerate development of the most promising capabilities in cooperation with the FAA through the Efficient Flows Into Congested Airspace Research Transition Team.
iPAS: AES Flight System Technology Maturation for Human Spaceflight
NASA Technical Reports Server (NTRS)
Othon, William L.
2014-01-01
In order to realize the vision of expanding human presence in space, NASA will develop new technologies that can enable future crewed spacecraft to go far beyond Earth orbit. These technologies must be matured to the point that future project managers can accept the risk of incorporating them safely and effectively within integrated spacecraft systems, to satisfy very challenging mission requirements. The technologies must also be applied and managed within an operational context that includes both on-board crew and mission support on Earth. The Advanced Exploration Systems (AES) Program is one part of the NASA strategy to identify and develop key capabilities for human spaceflight, and mature them for future use. To support this initiative, the Integrated Power Avionics and Software (iPAS) environment has been developed that allows engineers, crew, and flight operators to mature promising technologies into applicable capabilities, and to assess the value of these capabilities within a space mission context. This paper describes the development of the integration environment to support technology maturation and risk reduction, and offers examples of technology and mission demonstrations executed to date.
NASA Technical Reports Server (NTRS)
Schlagheck, Ronald A.; Sibille, Laurent; Sacksteder, Kurt; Owens, Chuck
2005-01-01
The NASA Microgravity Science program has transitioned research required in support of NASA's Vision for Space Exploration. Research disciplines including Materials Science, Fluid Physics, and Combustion Science are now being applied toward projects with application in the planetary utilization and transformation of space resources. The scientific and engineering competencies and infrastructure in these traditional fields, developed at multiple NASA Centers and by external research partners, provide essential capabilities to support the agency's new exploration thrusts, including In-Situ Resource Utilization (ISRU). Among the technologies essential to human space exploration, the production of life support consumables, especially oxygen; radiation shielding; and the harvesting of potentially available water are realistically achieved for long-duration crewed missions only through the use of ISRU. Ongoing research in the physical sciences has produced a body of knowledge relevant to the extraction of oxygen from lunar and planetary regolith and the associated reduction of metals and silicon for use in meeting manufacturing and repair requirements. Activities being conducted and facilities used in support of various ISRU projects at the Glenn Research Center and Marshall Space Flight Center will be described. The presentation will inform the community of these new research capabilities, opportunities, and challenges to utilize their materials, fluids, and combustion science expertise and capabilities to support the vision for space exploration.
The capability of lithography simulation based on MVM-SEM® system
NASA Astrophysics Data System (ADS)
Yoshikawa, Shingo; Fujii, Nobuaki; Kanno, Koichi; Imai, Hidemichi; Hayano, Katsuya; Miyashita, Hiroyuki; Shida, Soichi; Murakawa, Tsutomu; Kuribara, Masayuki; Matsumoto, Jun; Nakamura, Takayuki; Matsushita, Shohei; Hara, Daisuke; Pang, Linyong
2015-10-01
Lithography at the 1X nm technology node uses SMO-ILT, NTD, and other complex patterns. In mask defect inspection, defect verification therefore becomes more difficult, because many nuisance defects are detected on aggressive mask features. One key technology in mask manufacturing is defect verification using an aerial image simulator or other printability simulation. AIMS™ technology correlates well with the wafer and is the standard tool for defect verification; however, it is impractical for verifying a hundred defects or more. We previously reported a defect verification capability based on lithography simulation with an SEM system, whose architecture and software correlate well for simple line-and-space patterns [1]. In this paper, we use a next-generation SEM system combined with a lithography simulation tool for SMO-ILT, NTD, and other complex-pattern lithography. Furthermore, we use three-dimensional (3D) lithography simulation based on the Multi Vision Metrology SEM system. Finally, we confirm the performance of the 2D and 3D lithography simulation based on the SEM system for photomask verification.
NASA Astrophysics Data System (ADS)
van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario
2017-11-01
Although machine learning holds enormous promise for autonomous space robots, it is currently not employed because of the inherently uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo vision equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth using a monocular image, by using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment on the International Space Station (ISS) performed with the MIT/NASA SPHERES VERTIGO satellite. The presented experiments were performed on October 8th, 2015 on board the ISS. The main goals were (1) data gathering, and (2) navigation based on stereo vision. First the astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
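As a rough illustration of the learning setup described above, the sketch below trains a monocular depth predictor on stereo-derived depths and falls back to it when one camera fails. The feature extractor, the off-the-shelf ridge regressor, and the synthetic data are all illustrative placeholders, not the flight software.

```python
# Minimal sketch of the SSL idea: stereo depth estimates serve as
# "trusted ground truth" for a monocular depth regressor.
import numpy as np
from sklearn.linear_model import Ridge

def image_features(img):
    """Toy feature extractor: mean intensity per horizontal band.
    A real system would use richer appearance features."""
    bands = np.array_split(img, 8, axis=0)
    return np.array([b.mean() for b in bands])

# Phase 1: while both cameras work, collect (monocular features, stereo depth).
rng = np.random.default_rng(0)
images = rng.random((200, 64, 64))          # stand-in for left-camera frames
stereo_depth = rng.uniform(0.5, 5.0, 200)   # stand-in for average stereo depths

X = np.stack([image_features(im) for im in images])
model = Ridge(alpha=1.0).fit(X, stereo_depth)

# Phase 2: if one camera fails, fall back to the learned monocular estimate.
def estimate_depth_mono(img):
    return model.predict(image_features(img)[None, :])[0]

print(estimate_depth_mono(images[0]))
```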
Liberal Learning as Freedom: A Capabilities Approach to Undergraduate Education
ERIC Educational Resources Information Center
Garnett, Robert F., Jr.
2009-01-01
In this paper, I employ the pioneering works of Nussbaum, Sen, Saito, and Walker, in conjunction with the U.S. tradition of academic freedom, to outline a capability-centered vision of undergraduate education. Pace Nussbaum and Walker, I propose a short list of learning capabilities to which every undergraduate student should be entitled. This…
McNulty, Jason D; Klann, Tyler; Sha, Jin; Salick, Max; Knight, Gavin T; Turng, Lih-Sheng; Ashton, Randolph S
2014-06-07
Increased realization of the spatial heterogeneity found within in vivo tissue microenvironments has prompted the desire to engineer similar complexities into in vitro culture substrates. Microcontact printing (μCP) is a versatile technique for engineering such complexities onto cell culture substrates because it permits microscale control of the relative positioning of molecules and cells over large surface areas. However, challenges associated with precisely aligning and superimposing multiple μCP steps severely limit the extent of substrate modification that can be achieved using this method. Thus, we investigated the feasibility of using a vision-guided selectively compliant articulated robotic arm (SCARA) for μCP applications. SCARAs are routinely used to perform high-precision, repetitive tasks in manufacturing, and even low-end models are capable of achieving microscale precision. Here, we present customization of a SCARA to execute robotic μCP (R-μCP) onto gold-coated microscope coverslips. The system not only possesses the ability to align multiple polydimethylsiloxane (PDMS) stamps but also has the capability to do so even after the substrates have been removed, reacted to graft polymer brushes, and replaced back into the system. Moreover, unbiased computerized analysis shows that the system performs such sequential patterning with <10 μm precision and accuracy, which is equivalent to the repeatability specifications of the employed SCARA model. R-μCP should facilitate the engineering of complex in vivo-like features onto culture substrates and their integration with microfluidic devices.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-17
... Committee 213, Enhanced Flight Vision/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation..., Enhanced Flight Vision/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing this notice to advise the public of the seventeenth meeting of RTCA Special Committee 213, Enhanced Flight Vision...
The Flight Telerobotic Servicer (FTS) - A focus for automation and robotics on the Space Station
NASA Technical Reports Server (NTRS)
Hinkal, Sanford W.; Andary, James F.; Watzin, James G.; Provost, David E.
1987-01-01
The concept, fundamental design principles, and capabilities of the FTS, a multipurpose telerobotic system for use on the Space Station and Space Shuttle, are discussed. The FTS is intended to assist the crew in the performance of extravehicular tasks; the telerobot will also be used on the Orbital Maneuvering Vehicle to service free-flyer spacecraft. The FTS will be capable of both teleoperation and autonomous operation; eventually it may also utilize ground control. By careful selection of the functional architecture and a modular approach to the hardware and software design, the FTS can accept developments in artificial intelligence and newer, more advanced sensors, such as machine vision and collision avoidance.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-06
... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY...-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing...: Enhanced Flight...
Environmental and Personal Safety: No Vision Required. Practice Report
ERIC Educational Resources Information Center
Bozeman, Laura A.
2004-01-01
Personal safety is an important issue for all people, regardless of their physical capabilities. For people with visual impairments (that is, those who are blind or have low vision), real concerns exist regarding their vulnerability to crime and their greater risk of attack. With a nationwide increase in crime in the United States, "Three out of…
The Jordy Electronic Magnification Device: Opinions, Observations, and Commentary
ERIC Educational Resources Information Center
Francis, Barry
2005-01-01
The Jordy electronic magnification device is one of a small number of electronic headborne devices designed to provide people with low vision the capability to perform near-range, intermediate-range, and distance viewing tasks. This report seeks to define the benefits of using the Jordy as a low vision device by people who are legally blind. The…
ERIC Educational Resources Information Center
Liebmann, Jeffrey D.
Information technology is changing the workplace. Forecasts range from wondrous visions of future capabilities to dark scenarios of employment loss and dehumanization. Some predict revolutionary impacts, while others conclude that the way we do business will change only gradually if much at all. The less positive visions of the future workplace…
Spatial multibody modeling and vehicle dynamics analysis of advanced vehicle technologies
NASA Astrophysics Data System (ADS)
Letherwood, Michael D.; Gunter, David D.; Gorsich, David J.; Udvare, Thomas B.
2004-08-01
The US Army vision, announced in October of 1999, encompasses people, readiness, and transformation. The goal of the Army vision is to transition the entire Army into a force that is strategically responsive and dominant at every point of the spectrum of operations. The transformation component will be accomplished in three ways: the Objective Force, the Legacy (current) Force, and the Interim Force. The Objective Force is not platform driven; rather, the focus is on achieving capabilities that will operate as a "system of systems." As part of the Objective Force, the US Army plans to begin production of the Future Combat System (FCS) in FY08 and field the first unit by FY10, as currently defined in the FCS solicitation (1). As part of the FCS program, the Future Tactical Truck System (FTTS) encompasses all US Army tactical wheeled vehicles, and its initial efforts will focus only on the heavy class. The National Automotive Center (NAC) is using modeling and simulation to demonstrate the feasibility and operational potential of advanced commercial and military technologies with application to new and existing tactical vehicles and to describe potential future vehicle capabilities. This document will present the results of computer-based vehicle dynamics performance assessments of FTTS concepts with such features as hybrid power sources, active suspensions, skid steering, and in-hub electric drive motors. Fully three-dimensional FTTS models are being created using commercially available modeling and simulation methodologies such as ADAMS and DADS, and limited vehicle dynamics validation studies will be performed.
2010-12-01
...including thermal optics; much more precise target engagement and stabilization method. Drawbacks: mechanical malfunctions more common; gunner has... complete panorama view that extends from 0–180 degrees off-center, from our camera system. [Figure 20: 360° view dome projection] Figure 21 shows the... method can incorporate various types of synthetic vision aids, such as thermal or electro-optical sensors, to give the user the capability to see in
2015-12-04
from back-office big-data analytics to fieldable hot-spot systems providing storage-processing-communication services for off-grid sensors. Speed... and power efficiency are the key metrics. Current state-of-the-art approaches for big data aim toward scaling out to many computers to meet... pursued within Lincoln Laboratory as well as external sponsors. Our vision is to bring new capabilities in big-data and internet-of-things applications
Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing for a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.
NASA Technical Reports Server (NTRS)
Huebner, Lawrence D.; Saiyed, Naseem H.; Swith, Marion Shayne
2005-01-01
When United States President George W. Bush announced the Vision for Space Exploration in January 2004, twelve propulsion and launch system projects were being pursued in the Next Generation Launch Technology (NGLT) Program. These projects underwent a review for near-term relevance to the Vision. Subsequently, five projects were chosen as advanced development projects by NASA's Exploration Systems Mission Directorate (ESMD). These five projects were Auxiliary Propulsion, Integrated Powerhead Demonstrator, Propulsion Technology and Integration, Vehicle Subsystems, and Constellation University Institutes. Recently, an NGLT effort in Vehicle Structures was identified as a gap technology that was executed via the Advanced Development Projects Office within ESMD. For all of these advanced development projects, there is an emphasis on producing specific, near-term technical deliverables related to space transportation that constitute a subset of the promised NGLT capabilities. The purpose of this paper is to provide a brief description of the relevancy review process and provide a status of the aforementioned projects. For each project, the background, objectives, significant technical accomplishments, and future plans will be discussed. In contrast to many of the current ESMD activities, these areas are providing hardware and testing to further develop relevant technologies in support of the Vision for Space Exploration.
Flight Simulator Evaluation of Display Media Devices for Synthetic Vision Concepts
NASA Technical Reports Server (NTRS)
Arthur, J. J., III; Williams, Steven P.; Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.
2004-01-01
The Synthetic Vision Systems (SVS) Project of the National Aeronautics and Space Administration's (NASA) Aviation Safety Program (AvSP) is striving to eliminate poor visibility as a causal factor in aircraft accidents as well as enhance operational capabilities of all aircraft. To accomplish these safety and capacity improvements, the SVS concept is designed to provide a clear view of the world around the aircraft through the display of computer-generated imagery derived from an onboard database of terrain, obstacle, and airport information. Display media devices with which to implement SVS technology that have been evaluated so far within the Project include fixed field of view head up displays and head down Primary Flight Displays with pilot-selectable field of view. A simulation experiment was conducted comparing these display devices to a fixed field of view, unlimited field of regard, full color Helmet-Mounted Display system. Subject pilots flew a visual circling maneuver in IMC at a terrain-challenged airport. The data collected for this experiment is compared to past SVS research studies.
A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors.
Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres
2016-05-28
Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general-purpose computers, while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list tree data structure used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on low-power field programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general-purpose application. As such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements, and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms.
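The abstract does not spell out the linked-list scheme itself; as a generic illustration of single-scan blob labeling, the sketch below implements a one-pass, 4-connected component labeler with union-find equivalence resolution in Python. It conveys the same one-scan idea, not the paper's exact data structure.

```python
import numpy as np

def blobs_one_scan(binary):
    """One-pass 4-connected labeling with union-find equivalences;
    illustrative of single-scan blob detection in general."""
    parent = {}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra
    labels = np.zeros(binary.shape, dtype=int)
    nxt = 1
    H, W = binary.shape
    for y in range(H):
        for x in range(W):
            if not binary[y, x]:
                continue
            up = labels[y - 1, x] if y > 0 else 0
            left = labels[y, x - 1] if x > 0 else 0
            if up == 0 and left == 0:
                parent[nxt] = nxt          # start a new blob
                labels[y, x] = nxt
                nxt += 1
            else:
                labels[y, x] = up or left  # inherit a neighbor's label
                if up and left:
                    union(up, left)        # record label equivalence
    for y in range(H):                     # resolve equivalences
        for x in range(W):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels

img = np.array([[1, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 1]])
print(blobs_one_scan(img))   # two blobs with distinct labels
```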
Structure and function of a compound eye, more than half a billion years old.
Schoenemann, Brigitte; Pärnaste, Helje; Clarkson, Euan N K
2017-12-19
Until now, the fossil record has not been capable of revealing any details of the mechanisms of complex vision at the beginning of metazoan evolution. Here, we describe functional units, at a cellular level, of a compound eye from the base of the Cambrian, more than half a billion years old. Remains of early Cambrian arthropods showed the external lattices of enormous compound eyes, but not the internal structures or anything about how those compound eyes may have functioned. In a phosphatized trilobite eye from the lower Cambrian of the Baltic, we found lithified remnants of cellular systems, typical of a modern focal apposition eye, similar to those of a bee or dragonfly. This shows that sophisticated eyes already existed at the beginning of the fossil record of higher organisms, while the differences between the ancient system and the internal structures of a modern apposition compound eye open important insights into the evolution of vision.
Visual object recognition for mobile tourist information systems
NASA Astrophysics Data System (ADS)
Paletta, Lucas; Fritz, Gerald; Seifert, Christin; Luley, Patrick; Almer, Alexander
2005-03-01
We describe a mobile vision system that is capable of automated object identification using images captured from a PDA or a camera phone. We present a solution for the enabling technology of outdoor vision-based object recognition that will extend state-of-the-art location- and context-aware services towards object-based awareness in urban environments. In the proposed application scenario, tourist pedestrians are equipped with GPS, W-LAN, and a camera attached to a PDA or a camera phone. They are interested in whether their field of view contains tourist sights that would point to more detailed information. Multimedia-type data about the related history, the architecture, or other related cultural context of historic or artistic relevance might be explored by a mobile user who intends to learn within the urban environment. Learning from ambient cues is in this way achieved by pointing the device towards the urban sight, capturing an image, and consequently getting information about the object on site and within the focus of attention, i.e., the user's current field of view.
NASA Technical Reports Server (NTRS)
Glaab, Louis J.; Kramer, Lynda J.; Arthur, Trey; Parrish, Russell V.; Barry, John S.
2003-01-01
Limited visibility is the single most critical factor affecting the safety and capacity of worldwide aviation operations. Synthetic Vision Systems (SVS) technology can solve this visibility problem with a visibility solution. These displays employ computer-generated terrain imagery to present 3D, perspective out-the-window scenes with sufficient information and realism to enable operations equivalent to those of a bright, clear day, regardless of weather conditions. To introduce SVS display technology into as many existing aircraft as possible, a retrofit approach was defined that employs existing HDD display capabilities for glass cockpits and HUD capabilities for the other aircraft. This retrofit approach was evaluated for typical nighttime airline operations at a major international airport. Overall, 6 evaluation pilots performed 75 research approaches, accumulating 18 hours flight time evaluating SVS display concepts that used the NASA LaRC's Boeing B-757-200 aircraft at Dallas/Fort Worth International Airport. Results from this flight test establish the SVS retrofit concept, regardless of display size, as viable for tested conditions. Future assessments need to extend evaluation of the approach to operations in an appropriate, terrain-challenged environment with daytime test conditions.
Smartphones as image processing systems for prosthetic vision.
Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J
2013-01-01
The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing, and relaying image information as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware while avoiding bulky, straining solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered as a valid external electronics platform for visual prosthetic research.
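A minimal sketch of this kind of benchmark, using OpenCV's stock Haar cascade detector and non-local-means denoiser purely as stand-ins for whatever algorithms the authors actually profiled; the input frame here is synthetic noise and should be replaced with a real camera image.

```python
# Timing face detection and denoising on a single frame (illustrative).
import time
import cv2
import numpy as np

# Synthetic grayscale frame; substitute a real camera image in practice.
img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

t0 = time.perf_counter()
faces = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
t1 = time.perf_counter()

denoised = cv2.fastNlMeansDenoising(img, None, h=10)
t2 = time.perf_counter()

print(f"faces: {len(faces)}, detection: {(t1 - t0) * 1e3:.1f} ms, "
      f"denoising: {(t2 - t1) * 1e3:.1f} ms")
```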
Liu, Yu-Ting; Pal, Nikhil R; Marathe, Amar R; Wang, Yu-Kai; Lin, Chin-Teng
2017-01-01
A brain-computer interface (BCI) creates a direct communication pathway between the human brain and an external device or system. In contrast to patient-oriented BCIs, which are intended to restore inoperative or malfunctioning aspects of the nervous system, a growing number of BCI studies focus on designing auxiliary systems that are intended for everyday use. The goal of building these BCIs is to provide capabilities that augment existing intact physical and mental capabilities. However, a key challenge to BCI research is human variability; factors such as fatigue, inattention, and stress vary both across different individuals and for the same individual over time. If these issues are addressed, autonomous systems may provide additional benefits that enhance system performance and prevent problems introduced by individual human variability. This study proposes a human-machine autonomous (HMA) system that simultaneously aggregates human and machine knowledge to recognize targets in a rapid serial visual presentation (RSVP) task. The HMA focuses on integrating an RSVP BCI with computer vision techniques in an image-labeling domain. A fuzzy decision-making fuser (FDMF) is then applied in the HMA system to provide a natural adaptive framework for evidence-based inference by incorporating an integrated summary of the available evidence (i.e., human and machine decisions) and associated uncertainty. Consequently, the HMA system dynamically aggregates decisions involving uncertainties from both human and autonomous agents. The collaborative decisions made by an HMA system can achieve and maintain superior performance more efficiently than either the human or autonomous agents can achieve independently. The experimental results shown in this study suggest that the proposed HMA system with the FDMF can effectively fuse decisions from human brain activities and the computer vision techniques to improve overall performance on the RSVP recognition task. This conclusion demonstrates the potential benefits of integrating autonomous systems with BCI systems.
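The paper's fuzzy decision-making fuser is not specified in the abstract; as a toy stand-in, the sketch below fuses a human (BCI) target probability and a machine (vision) target probability by normalized confidence weights, which captures the evidence-plus-uncertainty aggregation idea in its simplest possible form.

```python
import numpy as np

def fuse_decisions(p_human, conf_human, p_machine, conf_machine):
    """Confidence-weighted fusion of human (BCI) and machine (vision)
    target probabilities; a simple stand-in for the paper's FDMF."""
    w_h = conf_human / (conf_human + conf_machine)
    w_m = 1.0 - w_h
    return w_h * p_human + w_m * p_machine

# Example: the machine is confident it sees a target, the human is unsure.
print(fuse_decisions(p_human=0.4, conf_human=0.3,
                     p_machine=0.9, conf_machine=0.7))  # -> 0.75
```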
Design and control of an embedded vision guided robotic fish with multiple control surfaces.
Yu, Junzhi; Wang, Kai; Tan, Min; Zhang, Jianwei
2014-01-01
This paper focuses on the development and control issues of a self-propelled robotic fish with multiple artificial control surfaces and an embedded vision system. By virtue of the hybrid propulsion capability in the body plus the caudal fin and the complementary maneuverability in accessory fins, a synthesized propulsion scheme including a caudal fin, a pair of pectoral fins, and a pelvic fin is proposed. To achieve flexible yet stable motions in aquatic environments, a central pattern generator- (CPG-) based control method is employed. Meanwhile, a monocular underwater vision serves as sensory feedback that modifies the control parameters. The integration of the CPG-based motion control and the visual processing in an embedded microcontroller allows the robotic fish to navigate online. Aquatic tests demonstrate the efficacy of the proposed mechatronic design and swimming control methods. Particularly, a pelvic fin actuated sideward swimming gait was first implemented. It is also found that the speeds and maneuverability of the robotic fish with coordinated control surfaces were largely superior to that of the swimming robot propelled by a single control surface.
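CPG controllers of this kind are commonly built from coupled oscillators; the sketch below steps a small chain of amplitude-controlled phase oscillators whose outputs would serve as fin joint set-points. All parameters (frequency, coupling, phase lags, target amplitudes) are illustrative, not the paper's values.

```python
import numpy as np

def cpg_step(phases, amps, dt, freq, coupling, phase_lags, target_amps, gain=2.0):
    """One Euler step of a chain of coupled phase oscillators, the common
    CPG formulation used for swimming robots."""
    n = len(phases)
    dphi = 2 * np.pi * freq * np.ones(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                dphi[i] += coupling * amps[j] * np.sin(
                    phases[j] - phases[i] - phase_lags[i, j])
    damp = gain * (target_amps - amps)       # amplitudes relax to targets
    return phases + dt * dphi, amps + dt * damp

n = 3                                        # e.g. caudal, pectoral, pelvic units
phases = np.zeros(n)
amps = np.full(n, 0.1)
lags = np.zeros((n, n))
targets = np.array([1.0, 0.5, 0.3])
for _ in range(1000):
    phases, amps = cpg_step(phases, amps, 0.01, 1.0, 0.5, lags, targets)
outputs = amps * np.sin(phases)              # joint set-points sent to the fins
print(outputs)
```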
Sensor Needs for Control and Health Management of Intelligent Aircraft Engines
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Gang, Sanjay; Hunter, Gary W.; Guo, Ten-Huei; Semega, Kenneth J.
2004-01-01
NASA and the U.S. Department of Defense are conducting programs which support the future vision of "intelligent" aircraft engines for enhancing the affordability, performance, operability, safety, and reliability of aircraft propulsion systems. Intelligent engines will have advanced control and health management capabilities enabling these engines to be self-diagnostic, self-prognostic, and adaptive to optimize performance based upon the current condition of the engine or the current mission of the vehicle. Sensors are a critical technology necessary to enable the intelligent engine vision as they are relied upon to accurately collect the data required for engine control and health management. This paper reviews the anticipated sensor requirements to support the future vision of intelligent engines from a control and health management perspective. Propulsion control and health management technologies are discussed in the broad areas of active component controls, propulsion health management and distributed controls. In each of these three areas individual technologies will be described, input parameters necessary for control feedback or health management will be discussed, and sensor performance specifications for measuring these parameters will be summarized.
Smart tissue anastomosis robot (STAR): a vision-guided robotics system for laparoscopic suturing.
Leonard, Simon; Wu, Kyle L; Kim, Yonjae; Krieger, Axel; Kim, Peter C W
2014-04-01
This paper introduces the smart tissue anastomosis robot (STAR). Currently, the STAR is a proof-of-concept for a vision-guided robotic system featuring an actuated laparoscopic suturing tool capable of executing running sutures from image-based commands. The STAR tool is designed around a commercially available laparoscopic suturing tool that is attached to a custom-made motor stage, and the STAR supervisory control architecture enables a surgeon to select and track incisions and the placement of stitches. The STAR supervisory-control interface provides two modes: a manual mode that enables a surgeon to specify the placement of each stitch, and an automatic mode that automatically computes equally spaced stitches based on an incision contour. Our experiments on planar phantoms demonstrate that the STAR in either mode is more accurate, up to four times more consistent, and five times faster than surgeons using a state-of-the-art robotic surgical system, four times faster than surgeons using manual Endo360(°)®, and nine times faster than surgeons using manual laparoscopic tools.
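As a plausible reading of the automatic mode (not the published STAR algorithm), equally spaced stitches can be computed by resampling the incision contour at uniform arc-length intervals:

```python
import numpy as np

def equally_spaced_stitches(contour, n_stitches):
    """Place n_stitches at equal arc-length intervals along a polyline
    incision contour (an Nx2 array of image points)."""
    seg = np.diff(contour, axis=0)
    seg_len = np.hypot(seg[:, 0], seg[:, 1])
    s = np.concatenate([[0.0], np.cumsum(seg_len)])   # cumulative arc length
    targets = np.linspace(0.0, s[-1], n_stitches)
    x = np.interp(targets, s, contour[:, 0])
    y = np.interp(targets, s, contour[:, 1])
    return np.stack([x, y], axis=1)

contour = np.array([[0.0, 0.0], [10.0, 2.0], [20.0, 0.0]])
print(equally_spaced_stitches(contour, 5))
```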
Stereo vision tracking of multiple objects in complex indoor environments.
Marrón-Romera, Marta; García, Juan C; Sotelo, Miguel A; Pizarro, Daniel; Mazo, Manuel; Cañas, José M; Losada, Cristina; Marcos, Alvaro
2010-01-01
This paper presents a novel system capable of solving the problem of tracking multiple targets in a crowded, complex, and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information related to each object in the robot's environment; it then classifies building elements (ceiling, walls, columns, and so on) separately from the rest of the items in the robot's surroundings. All objects in the robot's surroundings, both dynamic and static, are considered obstacles, except for the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used in order to obtain a multimodal representation of the speed and position of detected obstacles. Performance of the final system has been tested against state-of-the-art proposals; test results validate the authors' proposal. The designed algorithms and procedures provide a solution for those applications where similar multimodal data structures are found.
Heliospheric Physics and NASA's Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Minow, Joseph I.
2007-01-01
The Vision for Space Exploration outlines NASA's development of a new generation of human-rated launch vehicles to replace the Space Shuttle and an architecture for exploring the Moon and Mars. The system--developed by the Constellation Program--includes a near term (approx. 2014) capability to provide crew and cargo service to the International Space Station after the Shuttle is retired in 2010 and a human return to the Moon no later than 2020. Constellation vehicles and systems will necessarily be required to operate efficiently, safely, and reliably in the space plasma and radiation environments of low Earth orbit, the Earth's magnetosphere, interplanetary space, and on the lunar surface. This presentation will provide an overview of the characteristics of space radiation and plasma environments relevant to lunar programs including the trans-lunar injection and trans-Earth injection trajectories through the Earth's radiation belts, solar wind surface dose and plasma wake charging environments in near lunar space, energetic solar particle events, and galactic cosmic rays and discusses the design and operational environments being developed for lunar program requirements to assure that systems operate successfully in the space environment.
Method and apparatus for predicting the direction of movement in machine vision
NASA Technical Reports Server (NTRS)
Lawton, Teri B. (Inventor)
1992-01-01
A computer-simulated cortical network is presented. The network is capable of computing the visibility of shifts in the direction of movement. Additionally, the network can compute the following: (1) the magnitude of the position difference between the test and background patterns; (2) localized contrast differences at different spatial scales analyzed by computing temporal gradients of the difference and sum of the outputs of paired even- and odd-symmetric bandpass filters convolved with the input pattern; and (3) the direction of a test pattern moved relative to a textured background. The direction of movement of an object in the field of view of a robotic vision system is detected in accordance with nonlinear Gabor function algorithms. The movement of objects relative to their background is used to infer the 3-dimensional structure and motion of object surfaces.
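A toy 1D version of the paired even/odd filtering idea described above, assuming a quadrature Gabor pair and an opponent combination of temporal gradients. This is a generic phase-based motion detector whose output sign flips with the direction of pattern shift, not the patented algorithm itself.

```python
import numpy as np

def gabor_pair(size=31, wavelength=8.0, sigma=4.0):
    """Even- and odd-symmetric 1D Gabor filters (a quadrature pair)."""
    x = np.arange(size) - size // 2
    env = np.exp(-x**2 / (2 * sigma**2))
    return (env * np.cos(2 * np.pi * x / wavelength),
            env * np.sin(2 * np.pi * x / wavelength))

def direction_signal(frame_t0, frame_t1):
    """Opponent combination of temporal gradients of paired even/odd
    filter outputs; its sign indicates the direction of motion."""
    even, odd = gabor_pair()
    e0, o0 = np.convolve(frame_t0, even, 'same'), np.convolve(frame_t0, odd, 'same')
    e1, o1 = np.convolve(frame_t1, even, 'same'), np.convolve(frame_t1, odd, 'same')
    return np.mean(e0 * (o1 - o0) - o0 * (e1 - e0))

x = np.arange(128)
frame0 = np.sin(2 * np.pi * x / 8.0)
frame1 = np.sin(2 * np.pi * (x - 1) / 8.0)   # pattern shifted by one pixel
print(np.sign(direction_signal(frame0, frame1)))
```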
Computer vision challenges and technologies for agile manufacturing
NASA Astrophysics Data System (ADS)
Molley, Perry A.
1996-02-01
Sandia National Laboratories, a Department of Energy laboratory, is responsible for maintaining the safety, security, reliability, and availability of the nuclear weapons stockpile for the United States. Because of the changing national and global political climates and inevitable budget cuts, Sandia is changing the methods and processes it has traditionally used in the product realization cycle for weapon components. Because of the increasing age of the nuclear stockpile, it is certain that the reliability of these weapons will degrade with time unless eventual action is taken to repair, requalify, or renew them. Furthermore, due to the downsizing of the DOE weapons production sites and loss of technical personnel, the new product realization process is being focused on developing and deploying advanced automation technologies in order to maintain the capability for producing new components. The goal of Sandia's technology development program is to create a product realization environment that is cost effective, has improved quality and reduced cycle time for small lot sizes. The new environment will rely less on the expertise of humans and more on intelligent systems and automation to perform the production processes. The systems will be robust in order to provide maximum flexibility and responsiveness for rapidly changing component or product mixes. An integrated enterprise will allow ready access to and use of information for effective and efficient product and process design. Concurrent engineering methods will allow a speedup of the product realization cycle, reduce costs, and dramatically lessen the dependency on creating and testing physical prototypes. Virtual manufacturing will allow production processes to be designed, integrated, and programed off-line before a piece of hardware ever moves. The overriding goal is to be able to build a large variety of new weapons parts on short notice. Many of these technologies that are being developed are also applicable to commercial production processes and applications. Computer vision will play a critical role in the new agile production environment for automation of processes such as inspection, assembly, welding, material dispensing and other process control tasks. Although there are many academic and commercial solutions that have been developed, none have had widespread adoption considering the huge potential number of applications that could benefit from this technology. The reason for this slow adoption is that the advantages of computer vision for automation can be a double-edged sword. The benefits can be lost if the vision system requires an inordinate amount of time for reprogramming by a skilled operator to account for different parts, changes in lighting conditions, background clutter, changes in optics, etc. Commercially available solutions typically require an operator to manually program the vision system with features used for the recognition. In a recent survey, we asked a number of commercial manufacturers and machine vision companies the question, 'What prevents machine vision systems from being more useful in factories?' The number one (and unanimous) response was that vision systems require too much skill to set up and program to be cost effective.
Simultaneous Deep-Ocean Operations With Autonomous and Remotely Operated Vehicles
NASA Astrophysics Data System (ADS)
Yoerger, D. R.; Bowen, A. D.; Bradley, A. M.
2005-12-01
The complementary capabilities of autonomous and remotely operated vehicles can be exploited more efficiently if two or more vehicles can be deployed simultaneously from a single vessel. Simultaneous operations make better use of ship time and personnel. However, such operations require specific technical capabilities and careful scheduling. We recently demonstrated several key capabilities on the VISIONS05 cruise to the Juan de Fuca Ridge, where the Autonomous Benthic Explorer (ABE) and the ROV Jason 2 were operated simultaneously. The cruise featured complex ROV operations ranging from servicing seismic instruments, water sampling, and drilling to installation of in-situ experiments. The AUV provided detailed near-bottom bathymetry of the Endeavour segment while concurrently providing a cable route survey for a primary Canadian Neptune node. To meet these goals, we had to operate both vehicles at the same time. In previous efforts, we have operated ABE in a coordinated fashion with either the submersible Alvin or Jason 2, but the vehicles were either deployed sequentially or operated in separate acoustic transponder nets, with the restriction that the vessel recover the AUV within a reasonable period after it reached the surface to avoid loss of the AUV. During the VISIONS05 cruise, we operated both vehicles at the same time and demonstrated several key capabilities to make simultaneous operations more efficient. These include the ability of the AUV to anchor to the seafloor after its batteries were expended or if a fault occurred, allowing complex ROV operations to run to completion without the constraint of retrieving the AUV at a specific time. The anchoring system allowed the vehicle to rest near the seafloor on a short mooring in a low-power state. The AUV returned to the surface either through an acoustic command from the vessel or when a preassigned time was reached. We also tested an experimental acoustic beacon system that can allow multiple vehicles to determine their positions without interfering with each other.
Creating a vision for the twenty-first century healthcare organization.
Zuckerman, A M
2000-01-01
Management approaches used by healthcare organizations have often lagged behind other businesses in more competitive industries. Companies operating in such dynamic environments have found that to cope with the rapid pace of change they must have an articulated understanding of their organization's capabilities and consensus on where the organization is headed based on predictions about the future operating environment. This statement of identity and strategic direction takes the form of a vision statement that serves as the compass for the organization's decisions for a five- to ten-year period. This article discusses the importance of vision statements in tomorrow's healthcare organizations, presents an overview of future scenarios that may provide context for organizational visions, and suggests a process for developing a vision statement. A case study is presented to illustrate how a vision statement is created. Following the guidelines presented in this article and reviewing the case study should assist healthcare executives and their boards in crafting better visions of their organizations' futures, developing more effective strategies to realize these visions, and adapting to more frequent and more significant change.
Deficiency of adaptive control of the binocular coordination of saccades in strabismus.
Bucci, M P; Kapoula, Z; Eggert, T; Garraud, L
1997-10-01
Disconjugate (different in the two eyes) oculomotor adaptation is driven by the need to maintain binocular vision. Since binocular vision is deficient in strabismus, we wondered whether oculomotor disconjugate adaptive capabilities are deficient in such subjects. We studied eight adult subjects with constant, long-standing convergent strabismus of variable angles (4-30 prism D). No subject had severe amblyopia. Binocular vision was evaluated with stereoacuity tests. Two subjects had peripheral binocular vision and gross stereopsis; two other subjects had abnormal retinal correspondence and abnormal or pseudo gross stereopsis. In the other subjects binocular vision and stereopsis were absent. To stimulate disconjugate changes of saccades, subjects viewed for 20 min an image that was magnified in one eye (aniseikonia). Subjects with residual peripheral binocular vision and even subjects with pseudo or abnormal binocular vision showed disconjugate changes of the binocular coordination of their saccades; these changes reduced the disparity resulting from the aniseikonia. In contrast, for subjects without binocular vision the changes were not correlated with the disparity induced by the aniseikonia. Rather, these changes served to improve fixation of one or the other eye individually.
2009-03-01
...infrared, thermal, or night vision applications. Understanding the true capabilities and limitations of the ALAN camera and its applicability to a... an option to more expensive infrared, thermal, or night vision applications. Ultimately, it will be clear whether the configuration of the Kestrel...
Gulf States Strategic Vision to Face Iranian Nuclear Project
2015-09-01
...by Fawzan A. Alfawzan, September 2015; Thesis Advisor: James Russell; Second Reader: Anne... nuclear weapons at a high degree. Nuclear capabilities provided Iran with uranium enrichment abilities and nuclear weapons to enable the country to...
Demas, James A.; Payne, Hannah; Cline, Hollis T.
2011-01-01
Developing amphibians need vision to avoid predators and locate food before visual system circuits fully mature. Xenopus tadpoles can respond to visual stimuli as soon as retinal ganglion cells (RGCs) innervate the brain; however, in mammals, chicks, and turtles, RGCs reach their central targets many days, or even weeks, before their retinas are capable of vision. In the absence of vision, activity-dependent refinement in these amniote species is mediated by waves of spontaneous activity that periodically spread across the retina, correlating the firing of action potentials in neighboring RGCs. Theory suggests that retinorecipient neurons in the brain use patterned RGC activity to sharpen the retinotopy first established by genetic cues. We find that in both wild-type and albino Xenopus tadpoles, RGCs are spontaneously active at all stages of tadpole development studied, but their population activity never coalesces into waves. Even at the earliest stages recorded, visual stimulation dominates over spontaneous activity and can generate patterns of RGC activity similar to the locally correlated spontaneous activity observed in amniotes. In addition, we show that blocking AMPA- and NMDA-type glutamate receptors significantly decreases spontaneous activity in the young Xenopus retina, but that blocking GABAA receptors does not. Our findings indicate that vision drives the correlated activity required for topographic map formation. They further suggest that developing retinal circuits in the two major subdivisions of tetrapods, amphibians and amniotes, evolved different strategies to supply appropriately patterned RGC activity to drive visual circuit refinement. PMID:21312343
An artificial elementary eye with optic flow detection and compositional properties.
Pericet-Camara, Ramon; Dobrzynski, Michal K; Juston, Raphaël; Viollet, Stéphane; Leitel, Robert; Mallot, Hanspeter A; Floreano, Dario
2015-08-06
We describe a 2 mg artificial elementary eye whose structure and functionality are inspired by compound eye ommatidia. Its optical sensitivity and electronic architecture are sufficient to generate the signals required for the measurement of local optic flow vectors in multiple directions. Multiple elementary eyes can be assembled to create a compound vision system of desired shape and curvature spanning large fields of view. The system's configurability is validated with the fabrication of a flexible linear array of artificial elementary eyes capable of extracting optic flow over multiple visual directions.
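Local optic-flow measurement in ommatidia-like units is often modeled with an elementary motion detector. The sketch below implements a generic Hassenstein-Reichardt correlator on two adjacent photoreceptor signals, purely as an illustration of the principle, not the device's actual circuit; the time constant and signals are made up.

```python
import numpy as np

def hr_correlator(s1, s2, dt, tau=0.05):
    """Hassenstein-Reichardt elementary motion detector on two adjacent
    photoreceptor signals s1, s2 (1D arrays over time)."""
    alpha = dt / (tau + dt)              # first-order low-pass coefficient
    lp1 = np.zeros_like(s1)
    lp2 = np.zeros_like(s2)
    for t in range(1, len(s1)):
        lp1[t] = lp1[t - 1] + alpha * (s1[t] - lp1[t - 1])
        lp2[t] = lp2[t - 1] + alpha * (s2[t] - lp2[t - 1])
    return lp1 * s2 - lp2 * s1           # opponent output, signed by direction

t = np.arange(0, 1, 0.001)
s1 = np.sin(2 * np.pi * 5 * t)
s2 = np.sin(2 * np.pi * 5 * t - 0.5)     # delayed copy: motion from 1 toward 2
print(hr_correlator(s1, s2, dt=0.001).mean())
```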
ALHAT COBALT: CoOperative Blending of Autonomous Landing Technology
NASA Technical Reports Server (NTRS)
Carson, John M.
2015-01-01
The COBALT project is a flight demonstration of two NASA ALHAT (Autonomous precision Landing and Hazard Avoidance Technology) capabilities that are key for future robotic or human landing GN&C (Guidance, Navigation and Control) systems. The COBALT payload integrates the Navigation Doppler Lidar (NDL) for ultraprecise velocity and range measurements with the Lander Vision System (LVS) for Terrain Relative Navigation (TRN) position estimates. Terrestrial flight tests of the COBALT payload in an open-loop and closed-loop GN&C configuration will be conducted onboard a commercial, rocket-propulsive Vertical Test Bed (VTB) at a test range in Mojave, CA.
Upgrades to the NESS (Nuclear Engine System Simulation) Code
NASA Technical Reports Server (NTRS)
Fittje, James E.
2007-01-01
In support of the President's Vision for Space Exploration, the Nuclear Thermal Rocket (NTR) concept is being evaluated as a potential propulsion technology for human expeditions to the moon and Mars. The need for exceptional propulsion system performance in these missions has been documented in numerous studies, and was the primary focus of a considerable effort undertaken during the 1960's and 1970's. The NASA Glenn Research Center is leveraging this past NTR investment in their vehicle concepts and mission analysis studies with the aid of the Nuclear Engine System Simulation (NESS) code. This paper presents the additional capabilities and upgrades made to this code in order to perform higher fidelity NTR propulsion system analysis and design.
The 4-D approach to visual control of autonomous systems
NASA Technical Reports Server (NTRS)
Dickmanns, Ernst D.
1994-01-01
Development of a 4-D approach to dynamic machine vision is described. Core elements of this method are spatio-temporal models oriented towards objects and the laws of perspective projection applied in a forward mode. Integration of multi-sensory measurement data was achieved through spatio-temporal models serving as invariants for object recognition. Situation assessment and long-term predictions were enabled through maintenance of a symbolic 4-D image of processes involving objects. Behavioral capabilities were easily realized by state feedback and feed-forward control.
A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application.
Vivacqua, Rafael; Vassallo, Raquel; Martins, Felipe
2017-10-16
Autonomous driving on public roads requires precise localization within the range of a few centimeters. Even the best current precise localization systems based on the Global Navigation Satellite System (GNSS) cannot always reach this level of precision, especially in an urban environment, where the signal is disturbed by surrounding buildings and artifacts. Laser range finders and stereo vision have been successfully used for obstacle detection, mapping, and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDAR) sensors are very expensive, and stereo vision requires powerful dedicated hardware to process the camera information. In this context, this article presents a low-cost architecture of sensors and a data fusion algorithm capable of autonomous driving on narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings behind the vehicle. This information is used to localize the vehicle in a map that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system in a real autonomous driving situation.
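A minimal sketch of the dead-reckoning side of such a fusion, assuming a kinematic bicycle model: the vehicle pose is propagated from speed and steering, and each short-range lane-marking detection is transformed into the world frame and accumulated behind the vehicle. All parameters and inputs are illustrative, not the paper's.

```python
import numpy as np

def dead_reckon(pose, v, steer, dt, wheelbase=2.7):
    """Propagate (x, y, heading) with a kinematic bicycle model."""
    x, y, th = pose
    x += v * np.cos(th) * dt
    y += v * np.sin(th) * dt
    th += v / wheelbase * np.tan(steer) * dt
    return np.array([x, y, th])

def to_world(pose, pt_vehicle):
    """Transform a lane-marking point from the vehicle frame to the world frame."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    return np.array([x + c * pt_vehicle[0] - s * pt_vehicle[1],
                     y + s * pt_vehicle[0] + c * pt_vehicle[1]])

pose = np.zeros(3)
lane_map = []                                # accumulated markings behind the vehicle
for step in range(100):
    detection = np.array([5.0, 1.8])         # short-range marking, vehicle frame
    lane_map.append(to_world(pose, detection))
    pose = dead_reckon(pose, v=10.0, steer=0.01, dt=0.05)
print(len(lane_map), lane_map[-1])
```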
Remote hardware-reconfigurable robotic camera
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.
2001-10-01
In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA (Field Programmable Gate Array) device which contains an architecture for real-time, low-level computer vision processing. The architecture can be reprogrammed remotely for application-specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and supports downloading a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Targeted applications are in robotics, mobile robotics, and vision-based quality control.
A high resolution and high speed 3D imaging system and its application on ATR
NASA Astrophysics Data System (ADS)
Lu, Thomas T.; Chao, Tien-Hsin
2006-04-01
The paper presents an advanced 3D imaging system based on a combination of stereo vision and light projection methods. A single digital camera takes only one shot of the object, from which its 3D model is reconstructed. Stereo vision is achieved by employing a prism-and-mirror setup to split the views and combine them side by side in the camera. The advantages of this setup are its simple system architecture, easy synchronization, fast 3D imaging speed, and high accuracy. The 3D imaging algorithms and potential applications are discussed. For ATR applications, it is critically important to extract maximum information about potential targets and to separate the targets from background and clutter noise. The added dimension of a 3D model provides additional features such as the surface profile and range information of the target. It is capable of removing the false shadow from camouflage and revealing the 3D profile of the object. It also provides arbitrary viewing angles and distances for training the filter bank for invariant ATR. The system architecture can be scaled to handle large objects and to perform area 3D modeling onboard a UAV.
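The depth recovery behind any such stereo arrangement reduces to triangulation. A minimal sketch, with illustrative focal length and baseline values (in the paper, the prism/mirror geometry determines the effective baseline):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Classic stereo triangulation: Z = f * B / d.

    With a prism/mirror split, the two views share one sensor, so the
    focal length and effective baseline come from a single calibrated
    camera; the values below are illustrative only.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

print(depth_from_disparity(f_px=1200.0, baseline_m=0.10, disparity_px=24.0))  # 5.0 m
```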
NASA Technical Reports Server (NTRS)
Aquilina, Rudolph A.
2015-01-01
The SMART-NAS Testbed for Safe Trajectory Based Operations Project will deliver an evaluation capability, critical to the ATM community, allowing full NextGen and beyond-NextGen concepts to be assessed and developed. To meet this objective, a strong focus will be placed on concept integration and validation to enable a gate-to-gate trajectory-based system capability that satisfies a full vision for NextGen. The SMART-NAS for Safe TBO Project consists of six sub-projects. Three of the sub-projects are focused on exploring and developing technologies, concepts, and models for evolving and transforming air traffic management operations in the ATM+2 time horizon, while the remaining three sub-projects are focused on developing the tools and capabilities needed for testing these advanced concepts. Function Allocation, Networked Air Traffic Management, and Trajectory Based Operations are developing concepts and models; SMART-NAS Test-bed, System Assurance Technologies, and Real-time Safety Modeling are developing the tools and capabilities to test these concepts. Simulation and modeling capabilities will include the ability to assess multiple operational scenarios of the national airspace system and to accept data feeds, allowing shadowing of actual operations in real-time, fast-time, and/or hybrid modes of operation in distributed environments, and will enable integrated examinations of concepts, algorithms, technologies, and NAS architectures. An important focus within this project is to enable the development of a real-time, system-wide safety assurance system. The basis of such a system is a continuum of information acquisition, analysis, and assessment that enables awareness and corrective action to detect and mitigate potential threats to continuous system-wide safety at all levels. This process, which currently can only be done post-operations, will be driven towards "real-time" assessments in the 2035 time frame.
Adaptation to Variance of Stimuli in Drosophila Larva Navigation
NASA Astrophysics Data System (ADS)
Wolk, Jason; Gepner, Ruben; Gershow, Marc
In order to respond to stimuli that vary over orders of magnitude while also being capable of sensing very small changes, neural systems must be able to adapt rapidly to the variance of stimuli. We study this adaptation in Drosophila larvae responding to varying visual signals and optogenetically induced fictitious odors, using an infrared-illuminated arena and custom computer vision software. Larval navigational decisions (when to turn) are modeled as the output of a linear-nonlinear Poisson (LNP) process. The development of the nonlinear turn rate in response to changes in variance is tracked using an adaptive point-process filter, determining the rate of adaptation to different stimulus profiles. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.
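For readers unfamiliar with the model class, the sketch below simulates a generic linear-nonlinear Poisson (LNP) process: a stimulus is passed through a linear filter, a static nonlinearity converts the filtered signal to a turn rate, and turn events are drawn from a Poisson distribution. The filter and nonlinearity are illustrative stand-ins, not the fitted ones from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear-nonlinear Poisson (LNP) sketch of a turn-decision model:
# stimulus -> linear filter -> static nonlinearity -> Poisson turn events.
dt = 0.1                                      # seconds per bin
t = np.arange(0, 2, dt)
stimulus = rng.normal(0, 1, t.size)           # e.g. fictitious odor signal
kernel = np.exp(-np.arange(0, 1, dt) / 0.3)   # decaying linear filter
drive = np.convolve(stimulus, kernel, mode="full")[: t.size]
rate = 0.5 * np.exp(drive)                    # exponential nonlinearity (turns/s)
turns = rng.poisson(rate * dt)                # Poisson turn counts per bin
```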
Robotic follower experimentation results: ready for FCS increment I
NASA Astrophysics Data System (ADS)
Jaczkowski, Jeffrey J.
2003-09-01
Robotics is a fundamental enabling technology required to meet the U.S. Army's vision to be a strategically responsive force capable of domination across the entire spectrum of conflict. The U.S. Army Research, Development and Engineering Command (RDECOM) Tank Automotive Research, Development & Engineering Center (TARDEC), in partnership with the U.S. Army Research Laboratory, is developing a leader-follower capability for Future Combat Systems. The Robotic Follower Advanced Technology Demonstration (ATD) utilizes a manned leader to provide high-level proofing of the follower's path; the follower operates with minimal user intervention. This paper gives a programmatic overview and discusses both the technical approach and the operational experimentation results obtained during testing conducted at Ft. Bliss, New Mexico in February-March 2003.
Science Instruments and Sensors Capability Roadmap: NRC Dialogue
NASA Technical Reports Server (NTRS)
Barney, Rich; Zuber, Maria
2005-01-01
The Science Instruments and Sensors roadmaps include capabilities associated with the collection, detection, conversion, and processing of scientific data required to answer compelling science questions driven by the Vision for Space Exploration and The New Age of Exploration (NASA's Direction for 2005 & Beyond). Viewgraphs on these instruments and sensors are presented.
Vision guided landing of an autonomous helicopter in hazardous terrain
NASA Technical Reports Server (NTRS)
Johnson, Andrew E.; Montgomery, Jim
2005-01-01
Future robotic space missions will employ a precision soft-landing capability that will enable exploration of previously inaccessible sites that have strong scientific significance. To enable this capability, a fully autonomous onboard system that identifies and avoids hazardous features such as steep slopes and large rocks is required. Such a system will also provide greater functionality in unstructured terrain to unmanned aerial vehicles. This paper describes an algorithm for landing hazard avoidance based on images from a single moving camera. The core of the algorithm is an efficient application of structure from motion to generate a dense elevation map of the landing area. Hazards are then detected in this map and a safe landing site is selected. The algorithm has been implemented on an autonomous helicopter testbed and demonstrated four times resulting in the first autonomous landing of an unmanned helicopter in unknown and hazardous terrain.
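A hazard map of the kind described can be sketched as simple slope and roughness tests on the recovered elevation grid; the thresholds and window size below are illustrative, not the flight-tested parameters.

```python
import numpy as np

def safe_sites(elev, cell_m, max_slope_deg=10.0, max_rough_m=0.3):
    """Flag landing cells whose local slope and roughness are acceptable.

    A simplified stand-in for the paper's hazard map: slope from finite
    differences, roughness as the local elevation spread in a 3x3 window.
    """
    gy, gx = np.gradient(elev, cell_m)
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    rough = np.zeros_like(elev)
    for i in range(1, elev.shape[0] - 1):
        for j in range(1, elev.shape[1] - 1):
            win = elev[i - 1:i + 2, j - 1:j + 2]
            rough[i, j] = win.max() - win.min()
    return (slope < max_slope_deg) & (rough < max_rough_m)
```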
Command History OPNAV 5750-1 Fiscal Year 2004
2006-05-04
highly capable facilities including three hyperbaric chambers, anechoic chambers, auditory and vision laboratories, closed atmosphere test room...3 Hyperbaric Chambers (1 Saturation) • 1000m3 Anechoic Chamber • 140m3 Reverberant Chamber • 10 Audio Testing Booths • Vision Research...Using Hand-Held Personal Digital Assistants (PDAs) in a Hyperbaric Environment and the PDA-based Submarine Escape and Rescue Calculator and
Sensing Super-Position: Human Sensing Beyond the Visual Spectrum
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2007-01-01
The coming decade of fast, cheap, and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This paper addresses the technical feasibility of augmenting human vision through Sensing Super-position by mixing natural human sensing with additional sensory channels. The current implementation of the device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position system meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of the dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation in addition to the visual representation. Auditory mapping is performed to distribute an image in time: the three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, and this known capability provided the basic motivation for developing an image-to-sound mapping system. The human brain is superior to most existing computer systems in rapidly extracting relevant information from blurred, noisy, and redundant images. From a theoretical viewpoint, this means that the available bandwidth is not exploited in an optimal way. While image-processing techniques can manipulate, condense, and focus the information (e.g., Fourier transforms), keeping the mapping as direct and simple as possible might also reduce the risk of accidentally filtering out important clues. After all, a perfectly non-redundant sound representation is especially prone to loss of relevant information in the imperfect human hearing system. Also, a complicated non-redundant image-to-sound mapping may well be far more difficult to learn and comprehend than a straightforward mapping, while the mapping system would increase in complexity and cost. This work demonstrates some basic information processing for optimal information capture for head-mounted systems.
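The column-scan style of image-to-sound mapping can be sketched in a few lines; this generic version (rows to frequency, columns to time, brightness to amplitude) is an assumption in the spirit of the paper, not its exact transform.

```python
import numpy as np

def image_to_sound(img, duration_s=1.0, fs=8000, f_lo=200.0, f_hi=4000.0):
    """Scan image columns left-to-right in time; map rows to frequency and
    brightness to amplitude. img is a 2D grayscale array in [0, 1]."""
    rows, cols = img.shape
    n = int(duration_s * fs)
    t = np.arange(n) / fs
    freqs = np.linspace(f_hi, f_lo, rows)    # top of image = high pitch
    col_idx = np.arange(n) * cols // n       # which column sounds now
    audio = np.zeros(n)
    for r in range(rows):
        audio += img[r, col_idx] * np.sin(2 * np.pi * freqs[r] * t)
    return audio / max(np.abs(audio).max(), 1e-9)
```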
Utilization of the Space Vision System as an Augmented Reality System For Mission Operations
NASA Technical Reports Server (NTRS)
Maida, James C.; Bowen, Charles
2003-01-01
Augmented reality is a technique whereby computer generated images are superimposed on live images for visual enhancement. Augmented reality can also be characterized as dynamic overlays when computer generated images are registered with moving objects in a live image. This technique has been successfully implemented, with low to medium levels of registration precision, in an NRA funded project entitled, "Improving Human Task Performance with Luminance Images and Dynamic Overlays". Future research is already being planned to also utilize a laboratory-based system where more extensive subject testing can be performed. However successful this might be, the problem will still be whether such a technology can be used with flight hardware. To answer this question, the Canadian Space Vision System (SVS) will be tested as an augmented reality system capable of improving human performance where the operation requires indirect viewing. This system has already been certified for flight and is currently flown on each shuttle mission for station assembly. Successful development and utilization of this system in a ground-based experiment will expand its utilization for on-orbit mission operations. Current research and development regarding the use of augmented reality technology is being simulated using ground-based equipment. This is an appropriate approach for development of symbology (graphics and annotation) optimal for human performance and for development of optimal image registration techniques. It is anticipated that this technology will become more pervasive as it matures. Because we know what and where almost everything is on ISS, this reduces the registration problem and improves the computer model of that reality, making augmented reality an attractive tool, provided we know how to use it. This is the basis for current research in this area. However, there is a missing element to this process. It is the link from this research to the current ISS video system and to flight hardware capable of utilizing this technology. This is the basis for this proposed Space Human Factors Engineering project, the determination of the display symbology within the performance limits of the Space Vision System that will objectively improve human performance. This utilization of existing flight hardware will greatly reduce the costs of implementation for flight. Besides being used onboard shuttle and space station and as a ground-based system for mission operational support, it also has great potential for science and medical training and diagnostics, remote learning, team learning, video/media conferencing, and educational outreach.
Micro-optical artificial compound eyes.
Duparré, J W; Wippermann, F C
2006-03-01
Natural compound eyes combine small eye volumes with a large field of view, at the cost of comparatively low spatial resolution. For small invertebrates such as flies or moths, compound eyes are a perfectly adapted solution for obtaining sufficient visual information about their environment without overloading their brains with the necessary image processing. However, to date little effort has been made to adopt this principle in optics. Classical imaging has always had its archetype in natural single-aperture eyes, on which human vision, for example, is based. But a high-resolution image is not always required; often the focus is on very compact, robust, and cheap vision systems. The main question is consequently: what is the better approach for extremely miniaturized imaging systems, simple scaling of classical lens designs, or inspiration from the alternative imaging principles evolved by nature for small insects? In this paper, it is shown that such optical systems can be achieved using state-of-the-art micro-optics technology. This enables the generation of highly precise and uniform microlens arrays and their accurate alignment to the subsequent optics, spacing, and optoelectronics structures. The results are thin, simple, and monolithic imaging devices with the high accuracy of photolithography. Two different artificial compound eye concepts for compact vision systems have been investigated in detail: the artificial apposition compound eye and the cluster eye. Novel optical design methods and characterization tools were developed to allow the layout and experimental testing of the planar micro-optical imaging systems, which were fabricated for the first time by micro-optics technology. The artificial apposition compound eye can be considered a simple imaging optical sensor, while the cluster eye is capable of becoming a valid alternative to classical bulk objectives, though it is much more complex than the first system.
Hi-Vision telecine system using pickup tube
NASA Astrophysics Data System (ADS)
Iijima, Goro
1992-08-01
Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.
Nonhuman Primate Studies to Advance Vision Science and Prevent Blindness.
Mustari, Michael J
2017-12-01
Most primate behavior is dependent on high acuity vision. Optimal visual performance in primates depends heavily upon frontally placed eyes, retinal specializations, and binocular vision. To see an object clearly its image must be placed on or near the fovea of each eye. The oculomotor system is responsible for maintaining precise eye alignment during fixation and generating eye movements to track moving targets. The visual system of nonhuman primates has a similar anatomical organization and functional capability to that of humans. This allows results obtained in nonhuman primates to be applied to humans. The visual and oculomotor systems of primates are immature at birth and sensitive to the quality of binocular visual and eye movement experience during the first months of life. Disruption of postnatal experience can lead to problems in eye alignment (strabismus), amblyopia, unsteady gaze (nystagmus), and defective eye movements. Recent studies in nonhuman primates have begun to discover the neural mechanisms associated with these conditions. In addition, genetic defects that target the retina can lead to blindness. A variety of approaches including gene therapy, stem cell treatment, neuroprosthetics, and optogenetics are currently being used to restore function associated with retinal diseases. Nonhuman primates often provide the best animal model for advancing fundamental knowledge and developing new treatments and cures for blinding diseases. © The Author(s) 2017. Published by Oxford University Press on behalf of the National Academy of Sciences. All rights reserved. For permissions, please email: journals.permissions@oup.com.
A Practical Solution Using A New Approach To Robot Vision
NASA Astrophysics Data System (ADS)
Hudson, David L.
1984-01-01
Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly, and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower-cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS experience lower development cost when compared with conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens more industrial applications to robot vision that previously were not practical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost-effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another, and lenses and light sources from yet others. The user then had to assemble the pieces, and in most instances write all of the software to test, analyze, and process the vision application. The second and most common approach was to contract with the vision equipment vendor for the development and installation of a turnkey inspection or manufacturing system. The robot user and his company paid a premium for their vision system in an effort to assure the success of the system. Since 1981, emphasis on robotics has skyrocketed. New groups have been formed in many manufacturing companies with the charter to learn about, test, and initially apply new robot and automation technologies. Machine vision is one of the new technologies being tested and applied. This focused interest has created a need for a robot vision system that makes it easy for manufacturing engineers to learn about, test, and implement a robot vision application. A newly developed vision system addresses those needs. The Vision Development System (VDS) is a complete hardware and software product for the development and testing of robot vision applications. A complementary, low-cost Target Application System (TASK) runs the application program developed with the VDS. An actual robot vision application that demonstrates inspection and pre-assembly for keyboard manufacturing is used to illustrate the VDS/TASK approach.
NASA Stennis Space Center Integrated System Health Management Test Bed and Development Capabilities
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Holland, Randy; Coote, David
2006-01-01
Integrated System Health Management (ISHM) is a capability that focuses on determining the condition (health) of every element in a complex system (detecting anomalies, diagnosing causes, and prognosing future anomalies), and on providing data, information, and knowledge (DIaK), not just data, to control systems for safe and effective operation. This capability is currently provided by large teams of people, primarily on the ground, but needs to be embedded in on-board systems to a higher degree to enable NASA's new Exploration Mission (long-term travel and stay in space), while increasing safety and decreasing life cycle costs of spacecraft (vehicles; platforms; bases or outposts; and ground test, launch, and processing operations). The topics related to this capability include: 1) ISHM Related News Articles; 2) ISHM Vision For Exploration; 3) Layers Representing How ISHM is Currently Performed; 4) ISHM Testbeds & Prototypes at NASA SSC; 5) ISHM Functional Capability Level (FCL); 6) ISHM Functional Capability Level (FCL) and Technology Readiness Level (TRL); 7) Core Elements: Capabilities Needed; 8) Core Elements; 9) Open Systems Architecture for Condition-Based Maintenance (OSA-CBM); 10) Core Elements: Architecture, taxonomy, and ontology (ATO) for DIaK management; 11) Core Elements: ATO for DIaK Management; 12) ISHM Architecture Physical Implementation; 13) Core Elements: Standards; 14) Systematic Implementation; 15) Sketch of Work Phasing; 16) Interrelationship Between Traditional Avionics Systems, Time Critical ISHM and Advanced ISHM; 17) Testbeds and On-Board ISHM; 18) Testbed Requirements: RETS AND ISS; 19) Sustainable Development and Validation Process; 20) Development of on-board ISHM; 21) Taxonomy/Ontology of Object Oriented Implementation; 22) ISHM Capability on the E1 Test Stand Hydraulic System; 23) Define Relationships to Embed Intelligence; 24) Intelligent Elements Physical and Virtual; 25) ISHM Testbeds and Prototypes at SSC Current Implementations; 26) Trailer-Mounted RETS; 27) Modeling and Simulation; 28) Summary ISHM Testbed Environments; 29) Data Mining - ARC; 30) Transitioning ISHM to Support NASA Missions; 31) Feature Detection Routines; 32) Sample Features Detected in SSC Test Stand Data; and 33) Health Assessment Database (DIaK Repository).
Infrared sensors and systems for enhanced vision/autonomous landing applications
NASA Technical Reports Server (NTRS)
Kerr, J. Richard
1993-01-01
There exists a large body of data spanning more than two decades, regarding the ability of infrared imagers to 'see' through fog, i.e., in Category III weather conditions. Much of this data is anecdotal, highly specialized, and/or proprietary. In order to determine the efficacy and cost effectiveness of these sensors under a variety of climatic/weather conditions, there is a need for systematic data spanning a significant range of slant-path scenarios. These data should include simultaneous video recordings at visible, midwave (3-5 microns), and longwave (8-12 microns) wavelengths, with airborne weather pods that include the capability of determining the fog droplet size distributions. Existing data tend to show that infrared is more effective than would be expected from analysis and modeling. It is particularly more effective for inland (radiation) fog as compared to coastal (advection) fog, although both of these archetypes are oversimplifications. In addition, as would be expected from droplet size vs wavelength considerations, longwave outperforms midwave, in many cases by very substantial margins. Longwave also benefits from the higher level of available thermal energy at ambient temperatures. The principal attraction of midwave sensors is that staring focal plane technology is available at attractive cost-performance levels. However, longwave technology such as that developed at FLIR Systems, Inc. (FSI), has achieved high performance in small, economical, reliable imagers utilizing serial-parallel scanning techniques. In addition, FSI has developed dual-waveband systems particularly suited for enhanced vision flight testing. These systems include a substantial, embedded processing capability which can perform video-rate image enhancement and multisensor fusion. This is achieved with proprietary algorithms and includes such operations as real-time histograms, convolutions, and fast Fourier transforms.
Binary pressure-sensitive paint measurements using miniaturised, colour, machine vision cameras
NASA Astrophysics Data System (ADS)
Quinn, Mark Kenneth
2018-05-01
Recent advances in machine vision technology and capability have led to machine vision cameras becoming applicable to scientific imaging. This study aims to demonstrate the applicability of machine vision colour cameras to the measurement of dual-component pressure-sensitive paint (PSP). The presence of a second luminophore component in the PSP mixture significantly reduces its inherent temperature sensitivity, increasing its applicability at low speeds. All of the devices tested are smaller than the cooled CCD cameras traditionally used, and most are of significantly lower cost, thereby increasing the accessibility of such technology and techniques. Comparisons between three machine vision cameras, a three-CCD camera, and a commercially available specialist PSP camera are made on a range of parameters, and a detailed PSP calibration is conducted in a static calibration chamber. The findings demonstrate that colour machine vision cameras can be used for quantitative, dual-component pressure measurements. These results give rise to the possibility of performing on-board dual-component PSP measurements in wind tunnels or on real flight/road vehicles.
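Dual-component PSP readout ultimately rests on a ratioed Stern-Volmer-style calibration: the pressure-sensitive channel is divided by the reference-luminophore channel to cancel temperature effects. A minimal sketch, with illustrative coefficients rather than the calibrated values from the chamber tests:

```python
import numpy as np

def pressure_from_ratio(I_ref, I_sig, A=0.18, B=0.82, p_ref_kpa=101.3):
    """Invert the Stern-Volmer form I_ref / I = A + B * (P / P_ref).

    I_ref is the reference-luminophore channel, I_sig the
    pressure-sensitive channel; A and B are illustrative coefficients.
    """
    ratio = np.asarray(I_ref, float) / np.asarray(I_sig, float)
    return p_ref_kpa * (ratio - A) / B

print(pressure_from_ratio(1.0, 1.0))  # ~101.3 kPa at the reference condition
```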
Physical Realization of a Supervised Learning System Built with Organic Memristive Synapses
NASA Astrophysics Data System (ADS)
Lin, Yu-Pu; Bennett, Christopher H.; Cabaret, Théo; Vodenicarevic, Damir; Chabi, Djaafar; Querlioz, Damien; Jousselme, Bruno; Derycke, Vincent; Klein, Jacques-Olivier
2016-09-01
Multiple modern applications of electronics call for inexpensive chips that can perform complex operations on natural data with limited energy. A vision for accomplishing this is implementing hardware neural networks, which fuse computation and memory, with low-cost organic electronics. A challenge, however, is the implementation of synapses (analog memories) composed of such materials. In this work, we introduce robust, rapidly programmable, nonvolatile organic memristive nanodevices based on electrografted redox complexes that implement synapses thanks to a wide range of accessible intermediate conductivity states. We demonstrate experimentally an elementary neural network, capable of learning functions, which combines four pairs of organic memristors as synapses and conventional electronics as neurons. Our architecture is highly resilient to issues caused by imperfect devices. It tolerates inter-device variability, and an adaptable learning rule offers immunity against asymmetries in device switching. Highly compliant with conventional fabrication processes, the system can be extended to larger computing systems capable of complex cognitive tasks, as demonstrated in complementary simulations.
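The spirit of such a network, bounded analog weights nudged by programming pulses with asymmetric up/down steps, can be sketched with a toy perceptron; the device model, learning rule, and task below are illustrative assumptions, not the paper's circuit.

```python
import numpy as np

# Toy supervised learning with memristor-like synapses: weights are
# conductances confined to [0, 1], programmed by fixed pulses whose
# up/down step sizes differ (mimicking device switching asymmetry).
rng = np.random.default_rng(1)
w = rng.uniform(0.2, 0.8, 3)                 # two inputs + bias terminal
step_up, step_dn = 0.02, 0.03                # asymmetric programming steps
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1])                   # learn logical OR (separable)

for _ in range(300):
    for x, target in zip(X, y):
        xb = np.append(x, 1.0)
        out = 1.0 if w @ xb > 0.5 else 0.0
        err = target - out                   # perceptron-style error
        if err > 0:                          # potentiate active synapses
            w = np.clip(w + step_up * xb, 0.0, 1.0)
        elif err < 0:                        # depress active synapses
            w = np.clip(w - step_dn * xb, 0.0, 1.0)

print([1.0 if w @ np.append(x, 1.0) > 0.5 else 0.0 for x in X])
```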
Spectral ophthalmoscopy based on supercontinuum
NASA Astrophysics Data System (ADS)
Cheng, Yueh-Hung; Yu, Jiun-Yann; Wu, Han-Hsuan; Huang, Bo-Jyun; Chu, Shi-Wei
2010-02-01
The confocal scanning laser ophthalmoscope (CSLO) has become an important diagnostic tool for retinal pathologies such as age-related macular degeneration, glaucoma, and diabetic retinopathy. Like a confocal laser scanning microscope, a CSLO is capable of providing optical sectioning of the retina with the aid of a pinhole, but the microscope objective is replaced by the optics of the eye. Since the optical spectrum is a fingerprint of local chemical composition, it is attractive to incorporate spectral acquisition into CSLO. However, due to limitations of laser bandwidth and chromatic/geometric aberration, the scanning systems in current CSLOs are not compatible with spectral imaging. Here we demonstrate a spectral CSLO by combining a diffraction-limited broadband scanning system with a supercontinuum laser source. Both optical sectioning capability and sub-cellular resolution are demonstrated on the zebrafish retina. To our knowledge, this is also the first time that a CSLO has been applied to the study of fish vision. The versatile spectral CSLO system will be useful for retinopathy diagnosis and neuroscience research.
Detection of Kaposi's Sarcoma Associated Herpesvirus Nucleic Acids Using a Smartphone Accessory
Mancuso, Matthew; Cesarman, Ethel; Erickson, David
2014-01-01
Kaposi's sarcoma (KS) is an infectious cancer occurring in immune-compromised patients, caused by Kaposi's sarcoma associated herpesvirus (KSHV). Our vision is to simplify the process of KS diagnosis through the creation of a smartphone-based point-of-care system capable of yielding an actionable diagnostic readout starting from a raw biopsy sample. In this work we develop the sensing mechanism for the overall system, a smartphone accessory capable of detecting KSHV nucleic acids. The accessory reads out microfluidic chips filled with a colorimetric nanoparticle assay targeted at KSHV. We calculate that our final device can read out gold nanoparticle solutions with an accuracy of 0.05 OD, and we demonstrate that it can detect DNA sequences from KSHV down to 1 nM. We believe that, through integration with our previously developed components, a smartphone-based system like the one studied here can provide accurate detection information, as well as a simple platform for field-based clinical diagnosis and research. PMID:25117534
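The colorimetric readout itself is a Beer-Lambert computation: optical density from the ratio of transmitted intensity through the sample and a blank. A minimal sketch with illustrative camera intensities:

```python
import math

def optical_density(i_sample, i_blank):
    """Colorimetric readout: OD = -log10(I_sample / I_blank).

    i_blank is the transmitted intensity through a blank (no-target)
    microfluidic channel; the values below are illustrative.
    """
    return -math.log10(i_sample / i_blank)

print(optical_density(178.0, 200.0))  # ~0.05 OD, the reported resolution
```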
Human Exploration of the Solar System by 2100
NASA Technical Reports Server (NTRS)
Litchford, Ronald J.
2017-01-01
It has been suggested that the U.S., in concert with private entities and international partners, set itself on a course to accomplish human exploration of the solar system by the end of this century. This is a strikingly bold vision intended to revitalize the aspirations of HSF in service to the security, economic, and scientific interests of the nation. Solar system distance and time scales impose severe requirements on crewed space transportation systems, however, and fully realizing all objectives in support of this goal will require a multi-decade commitment employing radically advanced technologies - most prominently, space habitats capable of sustaining and protecting life in harsh radiation environments under zero gravity conditions and in-space propulsion technologies capable of rapid deep space transits with earth return, the subject of this paper. While near term mission destinations such as the moon and Mars can be accomplished with chemical propulsion and/or high power SEP, fundamental capability constraints render these traditional systems ineffective for solar system wide exploration. Nuclear based propulsion and alternative energetic methods, on the other hand, represent potential avenues, perhaps the only viable avenues, to high specific power space transport evincing reduced trip time, reduced IMLEO, and expanded deep space reach. Here, very long term HSF objectives for solar system wide exploration are examined in relation to the advanced propulsion technology solution landscape including foundational science, technical/engineering challenges, and developmental prospects.
Automated visual inspection system based on HAVNET architecture
NASA Astrophysics Data System (ADS)
Burkett, K.; Ozbayoglu, Murat A.; Dagli, Cihan H.
1994-10-01
In this study, the HAusdorff-Voronoi NETwork (HAVNET), developed at the UMR Smart Engineering Systems Lab, is tested in the recognition of mounted circuit components commonly used in printed circuit board assembly systems. The automated visual inspection system used consists of a CCD camera, neural-network-based image processing software, and a data acquisition card connected to a PC. The experiments were run in the Smart Engineering Systems Lab in the Engineering Management Dept. of the University of Missouri-Rolla. The performance analysis shows that the vision system is capable of recognizing different components under uncontrolled lighting conditions without being affected by rotation or scale differences. The results obtained are promising, and the system can be used in real manufacturing environments. Currently the system is being customized for a specific manufacturing application.
NASA Technical Reports Server (NTRS)
Abercromby, Andrew F. J.; Thaxton, Sherry S.; Onady, Elizabeth A.; Rajulu, Sudhakar L.
2006-01-01
The Science Crew Operations and Utility Testbed (SCOUT) project is focused on the development of a rover vehicle that can be utilized by two crewmembers during extra vehicular activities (EVAs) on the moon and Mars. The current SCOUT vehicle can transport two suited astronauts riding in open cockpit seats. Among the aspects currently being developed is the cockpit design and layout. This process includes the identification of possible locations for a socket to which a crewmember could connect a portable life support system (PLSS) for recharging power, air, and cooling while seated in the vehicle. The spaces in which controls and connectors may be situated within the vehicle are constrained by the reach and vision capabilities of the suited crewmembers. Accordingly, quantification of the volumes within which suited crewmembers can both see and reach relative to the vehicle represents important information during the design process.
ATHENA: system design and implementation for a next-generation x-ray telescope
NASA Astrophysics Data System (ADS)
Ayre, M.; Bavdaz, M.; Ferreira, I.; Wille, E.; Lumb, D.; Linder, M.; Stefanescu, A.
2017-08-01
ATHENA, Europe's next-generation x-ray telescope, is currently under Assessment Phase study with parallel candidate industrial Prime contractors, after selection for the 'L2' slot in ESA's Cosmic Vision Programme with a mandate to address the 'Hot and Energetic Universe' Cosmic Vision science theme. This paper will consider the main technical requirements of the mission and their mapping to resulting design choices at both mission and spacecraft level. The reference mission architecture and current reference spacecraft design will then be described, with particular emphasis given to the Science Instrument Module (SIM) design, currently under the responsibility of the ESA Study Team. The SIM is a very challenging item, due primarily to the need to provide to the instruments (i) a soft ride during launch, and (ii) a very large (~3 kW) heat dissipation capability at varying interface temperatures and locations.
Vision 20/20: Single photon counting x-ray detectors in medical imaging
Taguchi, Katsuyuki; Iwanczyk, Jan S.
2013-01-01
Photon counting detectors (PCDs) with energy discrimination capabilities have been developed for medical x-ray computed tomography (CT) and x-ray (XR) imaging. Using detection mechanisms that are completely different from those of current energy-integrating detectors, and measuring the material information of the object to be imaged, these PCDs have the potential not only to improve current CT and XR imaging, for example through dose reduction, but also to open up revolutionary novel applications such as molecular CT and XR imaging. The performance of PCDs is not flawless, however, and it seems extremely challenging to develop PCDs with close-to-ideal characteristics. In this paper, the authors offer their vision for the future of PCD-CT and PCD-XR with a review of the current status and predictions for (1) detector technologies, (2) imaging technologies, (3) system technologies, and (4) potential clinical benefits with PCDs. PMID:24089889
Computational approaches to vision
NASA Technical Reports Server (NTRS)
Barrow, H. G.; Tenenbaum, J. M.
1986-01-01
Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.
A pseudoisochromatic test of color vision for human infants.
Mercer, Michele E; Drodge, Suzanne C; Courage, Mary L; Adams, Russell J
2014-07-01
Despite the development of experimental methods capable of measuring early human color vision, we still lack a procedure comparable to those used to diagnose the well-identified congenital and acquired color vision anomalies in older children, adults, and clinical patients. In this study, we modified a pseudoisochromatic test to make it more suitable for young infants. Using a forced-choice preferential looking procedure, 216 3- to 23-mo-old babies were tested with pseudoisochromatic targets that fell on either a red/green or a blue/yellow dichromatic confusion axis. For comparison, 220 color-normal adults and 22 color-deficient adults were also tested. Results showed that all babies and adults passed the blue/yellow target, but many of the younger infants failed the red/green target, likely due to the interaction of lingering immaturities within the visual system and the small CIE vector distance within the red/green plate. However, older (17-23 mo) infants, color-normal adults, and color-defective adults all performed according to expectation. Interestingly, performance on the red/green plate was better among female infants, well exceeding the expected rate of genetic dimorphism between genders. Overall, with some further modification, the test is a promising tool for the detection of color vision anomalies in early human life. Copyright © 2014 Elsevier B.V. All rights reserved.
IITET and shadow TT: an innovative approach to training at the point of need
NASA Astrophysics Data System (ADS)
Gross, Andrew; Lopez, Favio; Dirkse, James; Anderson, Darran; Berglie, Stephen; May, Christopher; Harkrider, Susan
2014-06-01
The Image Intensification and Thermal Equipment Training (IITET) project is a joint effort between the Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) and the Army Research Institute (ARI) Fort Benning Research Unit. The IITET effort develops a reusable and extensible training architecture that supports the Army Learning Model and trains Manned-Unmanned Teaming (MUM-T) concepts to Shadow Unmanned Aerial Systems (UAS) payload operators. The training challenge of MUM-T during aviation operations is that UAS payload operators traditionally learn few of the scout-reconnaissance skills and coordination appropriate to MUM-T at the schoolhouse. The IITET effort leveraged the simulation experience and capabilities at NVESD and ARI's research to develop a novel payload operator training approach consistent with the Army Learning Model. Based on the training and system requirements, the team researched and identified candidate capabilities in several distinct technology areas. The training capability will support a variety of training missions as well as a full campaign. Data from these missions will be captured in a fully integrated after-action review (AAR) capability, which will provide objective feedback to the user in near-real-time. IITET will be delivered via a combination of browser and video streaming technologies, eliminating the requirement for a client download and reducing user computer system requirements. The result is a novel UAS payload operator training capability, nested within an architecture capable of supporting a wide variety of training needs for air and ground tactical platforms and sensors, and potentially several other areas requiring vignette-based serious-games training.
Conceptual Drivers for an Exploration Medical System
NASA Technical Reports Server (NTRS)
Antonsen, E.; Canga, M.
2016-01-01
Interplanetary spaceflight presents unique challenges that have not been encountered in prior spaceflight experience. Extended distances and timeframes introduce new challenges, such as an inability to resupply medications and consumables, an inability to evacuate injured or ill crew, and communication delays that introduce a requirement for some level of autonomous medical capability. Because of these challenges, the approaches used in prior programs have limited application to a proposed three-year Mars mission. This paper proposes a paradigm shift in the approach to medical risk mitigation for crew health and mission objectives threatened by inadequate medical capabilities in the setting of severely limited resources. A conceptual approach is outlined to derive medical system and vehicle needs from an integrated vision of how medical care will be provided within this new paradigm. Using NASA Design Reference Missions, this process assesses each mission phase to deconstruct medical needs at any point during a mission. Two operational categories are proposed: nominal operations (pre-planned activities) and contingency operations (medical conditions requiring evaluation), which meld clinical needs and research needs into a single system. These definitions are used to derive a task-level analysis that supports quantifiable studies for a medical capabilities trade. This trade allows system design to proceed from both a mission-centric and an ethics-based approach to medical limitations in an exploration-class mission.
Laptop Computer - Based Facial Recognition System Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. A. Cain; G. B. Singleton
2001-03-01
The objective of this project was to assess the performance of the leading commercial-off-the-shelf (COTS) facial recognition software package when used as a laptop application. We performed the assessment to determine the system's usefulness for enrolling facial images in a database from remote locations and conducting real-time searches against a database of previously enrolled images. The assessment involved creating a database of 40 images and conducting 2 series of tests to determine the product's ability to recognize and match subject faces under varying conditions. This report describes the test results and includes a description of the factors affecting the results. After an extensive market survey, we selected Visionics' FaceIt® software package for evaluation, informed by a review of the Facial Recognition Vendor Test 2000 (FRVT 2000). This test was co-sponsored by the US Department of Defense (DOD) Counterdrug Technology Development Program Office, the National Institute of Justice, and the Defense Advanced Research Projects Agency (DARPA). Administered in May-June 2000, the FRVT 2000 assessed the capabilities of facial recognition systems that were currently available for purchase on the US market. Our selection of this Visionics product does not indicate that it is the "best" facial recognition software package for all uses; it was the most appropriate package based on the requirements of this specific application. In this assessment, the system configuration was evaluated for effectiveness in identifying individuals by searching for facial images captured from video displays against those stored in a facial image database. An additional criterion was that the system be capable of operating discreetly. For this application, an operational facial recognition system would consist of one central computer hosting the master image database, with multiple standalone systems configured with duplicates of the master operating in remote locations. Remote users could perform real-time searches where network connectivity is not available. As images are enrolled at the remote locations, periodic database synchronization is necessary.
Color image processing and vision system for an automated laser paint-stripping system
NASA Astrophysics Data System (ADS)
Hickey, John M., III; Hise, Lawson
1994-10-01
Color image processing in machine vision systems has not gained general acceptance; most machine vision systems use images that are shades of gray. The Laser Automated Decoating System (LADS) required a vision system that could discriminate between substrates of various colors and textures and paints ranging from semi-gloss grays to high-gloss red, white, and blue (Air Force Thunderbirds). The changing lighting levels produced by the pulsed CO2 laser mandated a vision system that did not require constant color-temperature lighting for reliable image analysis.
NASA Astrophysics Data System (ADS)
McKinley, John B.; Pierson, Roger; Ertem, M. C.; Krone, Norris J., Jr.; Cramer, James A.
2008-04-01
Flight tests were conducted at Greenbrier Valley Airport (KLWB) and Easton Municipal Airport / Newnam Field (KESN) in a Cessna 402B aircraft using a head-up display (HUD) and a Norris Electro Optical Systems Corporation (NEOC) developmental ultraviolet (UV) sensor. These flights were sponsored by NEOC under a Federal Aviation Administration program, and the ultraviolet concepts, technology, system mechanization, and hardware for landing during low-visibility conditions have been patented by NEOC. Imagery from the UV sensor, HUD guidance cues, and out-the-window videos were separately recorded at the engineering workstation for each approach. Inertial flight path data were also recorded. Various configurations of portable UV emitters were positioned along the runway edge and threshold. The UV imagery of the runway outline was displayed on the HUD along with guidance generated from the mission computer. Enhanced Flight Vision System (EFVS) approaches with the UV sensor were conducted from the initial approach fix to the ILS decision height in both VMC and IMC. Although the availability of low-visibility conditions during the flight test period was limited, results from previous fog range testing concluded that the UV EFVS has the performance capability to penetrate CAT II runway visual range obscuration. Furthermore, independent analysis has shown that existing runway lights emit sufficient UV radiation without the need for augmentation other than lens replacement with UV-transmissive quartz lenses. Consequently, UV sensors should qualify as conforming to FAA requirements for EFVS approaches. Combined with a Synthetic Vision System (SVS), the UV EFVS would function both as a precision landing aid and as an integrity monitor for the GPS and SVS database.
NASA Technical Reports Server (NTRS)
Allen, Carlton; Jakes, Petr; Jaumann, Ralf; Marshall, John; Moses, Stewart; Ryder, Graham; Saunders, Stephen; Singer, Robert
1996-01-01
The field geology/process group examined the basic operations of a terrestrial field geologist and the manner in which these operations could be transferred to a planetary lander. Four basic requirements for robotic field geology were determined: geologic content, surface vision, mobility, and manipulation. Geologic content requires a combination of orbital and descent imaging. Surface vision requirements include range, resolution, stereo, and multispectral imaging. The minimum mobility for useful field geology depends on the scale of orbital imagery. Manipulation requirements include exposing unweathered surfaces, screening samples, and bringing samples into contact with analytical instruments. To support these requirements, several advanced capabilities are recommended for future development, including near-infrared reflectance spectroscopy, hyperspectral imaging, multispectral microscopy, artificial intelligence in support of imaging, x-ray diffraction, x-ray fluorescence, and rock chipping.
Viability of a Reusable In-Space Transportation System
NASA Technical Reports Server (NTRS)
Jefferies, Sharon A.; McCleskey, Carey M.; Nufer, Brian M.; Lepsch, Roger A.; Merrill, Raymond G.; North, David D.; Martin, John G.; Komar, David R.
2015-01-01
The National Aeronautics and Space Administration (NASA) is currently developing options for an Evolvable Mars Campaign (EMC) that expands human presence from Low Earth Orbit (LEO) into the solar system and to the surface of Mars. The Hybrid in-space transportation architecture is one option being investigated within the EMC. The architecture enables return of the entire in-space propulsion stage and habitat to cis-lunar space after a round trip to Mars. This concept of operations opens the door for a fully reusable Mars transportation system from cis-lunar space to a Mars parking orbit and back. This paper explores the reuse of in-space transportation systems, with a focus on the propulsion systems. It begins by examining why reusability should be pursued and defines reusability in space-flight context. A range of functions and enablers associated with preparing a system for reuse are identified and a vision for reusability is proposed that can be advanced and implemented as new capabilities are developed. Following this, past reusable spacecraft and servicing capabilities, as well as those currently in development are discussed. Using the Hybrid transportation architecture as an example, an assessment of the degree of reusability that can be incorporated into the architecture with current capabilities is provided and areas for development are identified that will enable greater levels of reuse in the future. Implications and implementation challenges specific to the architecture are also presented.
Always-on low-power optical system for skin-based touchless machine control.
Lecca, Michela; Gottardi, Massimo; Farella, Elisabetta; Milosevic, Bojan
2016-06-01
Embedded vision systems are smart, energy-efficient devices that capture and process a visual signal in order to extract high-level information about the surrounding world. Thanks to these capabilities, embedded vision systems are attracting growing interest from research and industry. In this work, we present a novel low-power optical embedded system tailored to detect human skin under various illumination conditions. We employ the presented sensor as a smart switch to activate one or more appliances connected to it. The system is composed of an always-on low-power RGB color sensor, a proximity sensor, and an energy-efficient microcontroller (MCU). The architecture of the color sensor allows hardware preprocessing of the RGB signal, which is converted into the rg space directly on chip, reducing power consumption. The rg signal is delivered to the MCU, where it is classified as skin or non-skin. Each time the signal is classified as skin, the proximity sensor is activated to check the distance of the detected object. If the object lies in the desired proximity range, the system registers the interaction and switches the connected appliances on or off. Experimental validation of the proposed system on a prototype shows that processing both distance and color markedly improves performance compared with either component alone. This makes the system a promising tool for energy-efficient, touchless control of machines.
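The on-chip rg conversion and MCU-side classification can be sketched as follows; the box bounds are illustrative placeholders, not the decision rule trained for the actual sensor.

```python
def rg_chromaticity(r, g, b):
    """Project an RGB triple onto the intensity-normalized rg plane,
    mirroring the on-chip conversion described in the abstract."""
    s = r + g + b
    if s == 0:
        return 0.0, 0.0
    return r / s, g / s

def looks_like_skin(r, g, b, r_lo=0.36, r_hi=0.55, g_lo=0.25, g_hi=0.37):
    """Toy box classifier in rg space; the bounds are illustrative, not
    the trained decision rule run on the MCU in the paper."""
    rn, gn = rg_chromaticity(r, g, b)
    return r_lo <= rn <= r_hi and g_lo <= gn <= g_hi

print(looks_like_skin(200, 130, 100))  # rn~0.47, gn~0.30 -> True
```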
Display Parameters and Requirements
NASA Astrophysics Data System (ADS)
Bahadur, Birendra
The following sections are included: * INTRODUCTION * HUMAN FACTORS * Anthropometry * Sensory * Cognitive * Discussions * THE HUMAN VISUAL SYSTEM - CAPABILITIES AND LIMITATIONS * Cornea * Pupil and Iris * Lens * Vitreous Humor * Retina * RODS - NIGHT VISION * CONES - DAY VISION * RODS AND CONES - TWILIGHT VISION * VISUAL PIGMENTS * MACULA * BLOOD * CHOROID COAT * Visual Signal Processing * Pathways to the Brain * Spatial Vision * Temporal Vision * Colour Vision * Colour Blindness * DICHROMATISM * Protanopia * Deuteranopia * Tritanopia * ANOMALOUS TRICHROMATISM * Protanomaly * Deuteranomaly * Tritanomaly * CONE MONOCHROMATISM * ROD MONOCHROMATISM * Using Colour Effectively * COLOUR MIXTURES AND THE CHROMATICITY DIAGRAM * Colour Matching Functions and Chromaticity Co-ordinates * CIE 1931 Colour Space * CIE PRIMARIES * CIE COLOUR MATCHING FUNCTIONS AND CHROMATICITY CO-ORDINATES * METHODS FOR DETERMINING TRISTIMULUS VALUES AND COLOUR CO-ORDINATES * Spectral Power Distribution Method * Filter Method * CIE 1931 CHROMATICITY DIAGRAM * ADDITIVE COLOUR MIXTURE * CIE 1976 Chromaticity Diagram * CIE Uniform Colour Spaces and Colour Difference Formulae * CIELUV OR L*u*v* * CIELAB OR L*a*b* * CIE COLOUR DIFFERENCE FORMULAE * Colour Temperature and CIE Standard Illuminants and Sources * RADIOMETRIC AND PHOTOMETRIC QUANTITIES * Photopic (Vλ) and Scotopic (Vλ') Luminous Efficiency Functions * Photometric and Radiometric Flux * Luminous and Radiant Intensities * Incidence: Illuminance and Irradiance * Exitance or Emittance (M) * Luminance and Radiance * ERGONOMIC REQUIREMENTS OF DISPLAYS * ELECTRO-OPTICAL PARAMETERS AND REQUIREMENTS * Contrast and Contrast Ratio * Luminance and Brightness * Colour Contrast and Chromaticity * Glare * Other Aspects of Legibility * SHAPE AND SIZE OF CHARACTERS * DEFECTS AND BLEMISHES * FLICKER AND DISTORTION * ANGLE OF VIEW * Switching Speed * Threshold and Threshold Characteristic * Measurement Techniques For Electro-optical Parameters * RADIOMETRIC MEASUREMENTS * Broadband Radiometry or Filtered Photodetector Radiometric Method * Spectroradiometric Method * PHOTOMETRIC MEASUREMENTS * COLOUR MEASUREMENTS * LUMINANCE, CONTRAST RATIO, THRESHOLD CHARACTERISTIC AND POLAR PLOT * SWITCHING SPEED * ELECTRICAL AND LIFE PARAMETERS AND REQUIREMENTS * Operating Voltage, Current Drainage and Power Consumption * Operating Frequency * Life Expectancy * LCD FAILURE MODES * Liquid Crystal Materials * Substrate Glass * Electrode Patterns * Alignment and Aligning Material * Peripheral and End Plug Seal * Spacers * Crossover Material * Polarizers and Reflectors * Connectors * Heater * Colour Filters * Backlighting System * Explanation For Some of the Observed Defects * BLOOMING PIXELS * POLARIZER RELATED DEFECTS * DIFFERENTIAL THERMAL EXPANSION RELATED DEFECTS * ELECTROCHEMICAL AND ELECTROHYDRODYNAMIC RELATED DEFECTS * REVERSE TWIST AND REVERSE TILT * MEMORY OR REMINISCENT CONTRAST * LCD RELIABILITY AND ACCELERATED LIFE TESTING * ACKNOWLEDGEMENTS * REFERENCES * APPENDIX
Steering of an automated vehicle in an unstructured environment
NASA Astrophysics Data System (ADS)
Kanakaraju, Sampath; Shanmugasundaram, Sathish K.; Thyagarajan, Ramesh; Hall, Ernest L.
1999-08-01
The purpose of this paper is to describe a high-level path-planning logic, which processes the data from a vision system and an ultrasonic obstacle-avoidance system and steers an autonomous mobile robot between obstacles. The test bed was an autonomous robot built at the University of Cincinnati, and this logic was tested and debugged on this machine. Attempts have already been made to incorporate fuzzy systems on a similar robot, and this paper extends them to take advantage of the robot's ZTR capability. Using the integrated vision system, the vehicle senses its location and orientation. A rotating ultrasonic sensor is used to map the location and size of possible obstacles. With these inputs, the fuzzy logic controls the speed and steering decisions of the robot. With the incorporation of this logic, Bearcat II has been very successful at avoiding obstacles. This was demonstrated at the Ground Robotics Competition conducted by the AUVS in June 1999, where the robot travelled a distance of 154 feet along a 10 ft wide path riddled with obstacles. This logic proved to be a significant contributing factor in this feat of Bearcat II.
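The abstract does not give the rule base, so the sketch below only shows the general shape of such a fuzzy steering law: fuzzify the path error and obstacle range, apply a couple of rules, and defuzzify to a steering command. All membership breakpoints and gains are made up for illustration.

```python
# Minimal fuzzy-steering sketch; not the Bearcat II controller.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steer(path_error_ft, obstacle_range_ft):
    # Fuzzify the lateral path error (negative = left of centerline).
    left = tri(path_error_ft, -5.0, -2.5, 0.0)
    right = tri(path_error_ft, 0.0, 2.5, 5.0)
    # "Obstacle near" membership from the rotating ultrasonic sensor.
    near = tri(obstacle_range_ft, 0.0, 3.0, 6.0)
    # Rules: steer back toward the path, more aggressively near obstacles.
    command = (30.0 * left - 30.0 * right) * (1.0 + near)
    return max(-45.0, min(45.0, command))  # steering angle, degrees
```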
NASA Technical Reports Server (NTRS)
Parrish, Russell V.; Busquets, Anthony M.; Williams, Steven P.; Nold, Dean E.
2003-01-01
A simulation study was conducted in 1994 at Langley Research Center that used 12 commercial airline pilots repeatedly flying complex Microwave Landing System (MLS)-type approaches to parallel runways under Category IIIc weather conditions. Two sensor insert concepts of 'Synthetic Vision Systems' (SVS) were used in the simulated flights, with a more conventional electro-optical display (similar to a Head-Up Display with raster capability for sensor imagery), flown under less restrictive visibility conditions, used as a control condition. The SVS concepts combined the sensor imagery with a computer-generated image (CGI) of an out-the-window scene based on an onboard airport database. Various scenarios involving runway traffic incursions (taxiing aircraft and parked fuel trucks) and navigational system position errors (both static and dynamic) were used to assess the pilots' ability to manage the approach task with the display concepts. The two SVS sensor insert concepts contrasted the simple overlay of sensor imagery on the CGI scene without additional image processing (the SV display) to the complex integration (the AV display) of the CGI scene with pilot-decision aiding using both object and edge detection techniques for detection of obstacle conflicts and runway alignment errors.
Vision-mediated interaction with the Nottingham caves
NASA Astrophysics Data System (ADS)
Ghali, Ahmed; Bayomi, Sahar; Green, Jonathan; Pridmore, Tony; Benford, Steve
2003-05-01
The English city of Nottingham is widely known for its rich history and compelling folklore. A key attraction is the extensive system of caves to be found beneath Nottingham Castle. Regular guided tours are made of the Nottingham caves, during which castle staff tell stories and explain historical events to small groups of visitors while pointing out relevant cave locations and features. The work reported here is part of a project aimed at enhancing the experience of cave visitors, and providing flexible storytelling tools to their guides, by developing machine vision systems capable of identifying specific actions of guides and/or visitors and triggering audio and/or video presentations as a result. Attention is currently focused on triggering audio material by directing the beam of a standard domestic flashlight towards features of interest on the cave wall. Cameras attached to the walls or roof provide image sequences within which torch light and cave features are detected and their relative positions estimated. When a target feature is illuminated the corresponding audio response is generated. We describe the architecture of the system, its implementation within the caves and the results of initial evaluations carried out with castle guides and members of the public.
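One simple way to realize the torch-triggered interaction described above is to threshold the camera image for the bright beam spot and compare its centroid against known feature positions. The sketch below does exactly that; the feature map, brightness threshold, and trigger radius are hypothetical stand-ins, not the project's actual implementation.

```python
import cv2
import numpy as np

# Hypothetical feature positions on the cave wall, in image co-ordinates.
FEATURES = {"carving": (410, 220), "well_shaft": (150, 330)}

def illuminated_feature(gray_frame, radius=25, thresh=230):
    """Return the feature currently lit by the torch beam, if any."""
    _, bright = cv2.threshold(gray_frame, thresh, 255, cv2.THRESH_BINARY)
    M = cv2.moments(bright)
    if M["m00"] == 0:
        return None  # no torch spot visible
    cx, cy = M["m10"] / M["m00"], M["m01"] / M["m00"]  # spot centroid
    for name, (fx, fy) in FEATURES.items():
        if np.hypot(cx - fx, cy - fy) < radius:
            return name  # trigger the audio clip for this feature
    return None
```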
Vision-based semi-autonomous outdoor robot system to reduce soldier workload
NASA Astrophysics Data System (ADS)
Richardson, Al; Rodgers, Michael H.
2001-09-01
Sensors and computational capability have not reached the point of enabling small robots to navigate autonomously in unconstrained outdoor environments at tactically useful speeds. This problem is greatly reduced, however, if a soldier can lead the robot through terrain that he knows it can traverse. An application of this concept is a small pack-mule robot that follows a foot soldier over outdoor terrain. The soldier would be responsible for avoiding situations beyond the robot's limitations. Having learned the route, the robot could autonomously retrace the path carrying supplies and munitions. This would greatly reduce the soldier's workload under normal conditions. This paper presents a description of a developmental robot sensor system using low-cost commercial 3D vision and inertial sensors to address this application. The robot moves at fast walking speed and requires only short-range perception to accomplish its task. 3D-feature information is recorded on a composite route map that the robot uses to negotiate its local environment and retrace the path taught by the soldier leader.
A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors
Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres
2016-01-01
Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general-purpose computers, while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure used to label blobs according to their shape and node information. An example application showing the results of a blob detection co-processor has been built on low-power field-programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general-purpose application; as such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements, and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms. PMID:27240382
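In the same single-scan, memory-lean spirit, the sketch below labels blobs from run-length-encoded rows, merging runs with union-find and keeping only the previous row in memory. This is a generic run-based labeller for illustration, not the authors' linked-list design.

```python
# Run-based connected-component sketch: one pass over the rows,
# 4-connectivity, with only the previous row's runs kept in memory.
def detect_blobs(binary_rows):
    """binary_rows: iterable of lists of 0/1. Returns blob pixel counts."""
    parent, size = {}, {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    prev_runs, label = [], 0  # runs are (start_col, end_col, label)
    for row in binary_rows:
        runs, c = [], 0
        while c < len(row):
            if row[c]:
                start = c
                while c < len(row) and row[c]:
                    c += 1
                label += 1
                parent[label], size[label] = label, c - start
                # Merge with overlapping runs from the previous row.
                for s, e, l in prev_runs:
                    if s < c and start < e:
                        parent[find(label)] = find(l)
                runs.append((start, c, label))
            else:
                c += 1
        prev_runs = runs

    blobs = {}
    for l, n in size.items():
        r = find(l)
        blobs[r] = blobs.get(r, 0) + n
    return list(blobs.values())

print(detect_blobs([[1, 1, 0, 1], [0, 1, 1, 1]]))  # one blob of 6 pixels
```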
Computer programming for generating visual stimuli.
Bukhari, Farhan; Kurylo, Daniel D
2008-02-01
Critical to vision research is the generation of visual displays with precise control over stimulus metrics. Generating stimuli often requires adapting commercial software or developing specialized software for specific research applications. In order to facilitate this process, we give here an overview that allows nonexpert users to generate and customize stimuli for vision research. We first give a review of relevant hardware and software considerations, to allow the selection of display hardware, operating system, programming language, and graphics packages most appropriate for specific research applications. We then describe the framework of a generic computer program that can be adapted for use with a broad range of experimental applications. Stimuli are generated in the context of trial events, allowing the display of text messages, the monitoring of subject responses and reaction times, and the inclusion of contingency algorithms. This approach allows direct control and management of computer-generated visual stimuli while utilizing the full capabilities of modern hardware and software systems. The flowchart and source code for the stimulus-generating program may be downloaded from www.psychonomic.org/archive.
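The trial-event structure the article describes can be captured in a few lines: present a message, draw a stimulus, and time the response. The skeleton below is a generic illustration under assumed callback names (`draw_stimulus`, `get_response`, `message` are placeholders), not the downloadable program itself.

```python
import random
import time

# Generic trial-event skeleton; the stimulus and response routines
# are placeholders supplied by the experiment's display backend.
def run_block(n_trials, draw_stimulus, get_response, message):
    results = []
    for trial in range(n_trials):
        message("Press any key when ready")
        contrast = random.choice([0.1, 0.2, 0.4, 0.8])  # example metric
        draw_stimulus(contrast)
        t0 = time.perf_counter()
        key = get_response()            # blocks until the subject responds
        rt = time.perf_counter() - t0   # reaction time in seconds
        results.append((trial, contrast, key, rt))
    return results
```

A contingency algorithm (e.g., a staircase that adapts `contrast` to recent responses) would slot in naturally where the stimulus metric is chosen.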
NASA Astrophysics Data System (ADS)
Chen, Shanjun; Duan, Haibin; Deng, Yimin; Li, Cong; Zhao, Guozhi; Xu, Yan
2017-12-01
Autonomous aerial refueling is a significant technology that can greatly extend the endurance of unmanned aerial vehicles. A reliable method that can accurately estimate the position and attitude of the probe relative to the drogue is the key to such a capability. A drogue pose estimation method based on an infrared vision sensor is introduced with the general goal of yielding an accurate and reliable drogue state estimate. First, by employing direct least-squares ellipse fitting and the convex hull in OpenCV, a feature point matching and interference point elimination method is proposed. In addition, considering conditions in which some infrared LEDs are damaged or occluded, a missing point estimation method based on perspective transformation and affine transformation is designed. Finally, an accurate and robust pose estimation algorithm improved by the runner-root algorithm is proposed. The feasibility of the designed visual measurement system is demonstrated by flight test, and the results indicate that the proposed method enables precise and reliable pose estimation of the probe relative to the drogue, even in poor conditions.
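The two OpenCV building blocks the abstract names, direct least-squares ellipse fitting and the convex hull, compose as in the sketch below; the brightness threshold and image source are assumptions for illustration, and the full matching and missing-point machinery is omitted.

```python
import cv2

# Sketch: fit an ellipse to the bright ring of infrared LED markers.
def drogue_ellipse(ir_gray):
    _, mask = cv2.threshold(ir_gray, 200, 255, cv2.THRESH_BINARY)
    pts = cv2.findNonZero(mask)          # pixel coordinates of lit LEDs
    if pts is None or len(pts) < 5:
        return None                      # fitEllipse needs >= 5 points
    hull = cv2.convexHull(pts)           # drop interior interference points
    if len(hull) < 5:
        return None
    (cx, cy), (major, minor), angle = cv2.fitEllipse(hull)
    return (cx, cy), (major, minor), angle
```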
Wearable Improved Vision System for Color Vision Deficiency Correction
Riccio, Daniel; Di Perna, Luigi; Sanniti Di Baja, Gabriella; De Nino, Maurizio; Rossi, Settimio; Testa, Francesco; Simonelli, Francesca; Frucci, Maria
2017-01-01
Color vision deficiency (CVD) is an extremely frequent vision impairment that compromises the ability to recognize colors. In order to improve color vision in a subject with CVD, we designed and developed a wearable improved vision system based on an augmented reality device. The system was validated in a clinical pilot study on 24 subjects with CVD (18 males and 6 females, aged 37.4 ± 14.2 years). The primary outcome was the improvement in the Ishihara Vision Test score with the correction proposed by our system. The Ishihara test score significantly improved (p = 0.03) from 5.8 ± 3.0 without correction to 14.8 ± 5.0 with correction. Almost all patients showed an improvement in color vision, as shown by the increased test scores. Moreover, with our system, 12 subjects (50%) passed the vision color test as normal vision subjects. The development and preliminary validation of the proposed platform confirm that a wearable augmented-reality device could be an effective aid to improve color vision in subjects with CVD. PMID:28507827
Guidance, Navigation, and Control Technology Assessment for Future Planetary Science Missions
NASA Technical Reports Server (NTRS)
Beauchamp, Pat; Cutts, James; Quadrelli, Marco B.; Wood, Lincoln J.; Riedel, Joseph E.; McHenry, Mike; Aung, MiMi; Cangahuala, Laureano A.; Volpe, Rich
2013-01-01
Future planetary explorations envisioned by the National Research Council's (NRC's) report titled Vision and Voyages for Planetary Science in the Decade 2013-2022, developed for NASA Science Mission Directorate (SMD) Planetary Science Division (PSD), seek to reach targets of broad scientific interest across the solar system. This goal requires new capabilities such as innovative interplanetary trajectories, precision landing, operation in close proximity to targets, precision pointing, multiple collaborating spacecraft, multiple target tours, and advanced robotic surface exploration. Advancements in Guidance, Navigation, and Control (GN&C) and Mission Design in the areas of software, algorithm development and sensors will be necessary to accomplish these future missions. This paper summarizes the key GN&C and mission design capabilities and technologies needed for future missions pursuing SMD PSD's scientific goals.
NASA Technical Reports Server (NTRS)
Udomkesmalee, Suraphol; Padgett, Curtis; Zhu, David; Lung, Gerald; Howard, Ayanna
2000-01-01
A three-dimensional microelectronic device (3DANN-R) capable of performing general image convolution at a speed of 10^12 operations/second (ops) in a volume of less than 1.5 cubic centimeters has been successfully built under the BMDO/JPL VIGILANTE program. 3DANN-R was developed in partnership with Irvine Sensors Corp., Costa Mesa, California. 3DANN-R is a sugar-cube-sized, low-power image convolution engine whose core computation circuitry is capable of performing 64 image convolutions with large (64x64) windows at video frame rates. This paper explores potential applications of 3DANN-R such as target recognition, SAR and hyperspectral data processing, and general machine vision using real data, and discusses technical challenges for providing deployable systems for BMDO surveillance and interceptor programs.
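A back-of-envelope check makes the quoted figure plausible: 64 parallel 64x64-window convolutions at video rates land within a factor of a few of 10^12 ops, depending on the frame size and rate assumed. The parameters below are illustrative assumptions, not specifications from the paper.

```python
# Throughput sanity check under assumed video parameters.
width, height, fps = 512, 512, 30      # assumed frame size and rate
convolutions, window = 64, 64 * 64     # 64 parallel 64x64 kernels
macs_per_second = width * height * fps * convolutions * window
print(f"{macs_per_second:.2e} multiply-accumulates/s")  # ~2e12
```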
An Integrated Global Atmospheric Composition Observing System: Progress and Impediments
NASA Astrophysics Data System (ADS)
Keating, T. J.
2016-12-01
In 2003-2005, a vision of an integrated global observing system for atmospheric composition and air quality emerged through several international forums (IGACO, 2004; GEO, 2005). In the decade since, the potential benefits of such a system for improving our understanding and mitigation of health and climate impacts of air pollution have become clearer and the needs more urgent. Some progress has been made towards the goal: technology has developed, capabilities have been demonstrated, and lessons have been learned. In Europe, the Copernicus Atmospheric Monitoring Service has blazed a trail for other regions to follow. Powerful new components of the emerging global system (e.g. a constellation of geostationary instruments) are expected to come on-line in the near term. But there are important gaps in the emerging system that are likely to keep us from achieving for some time the full benefits that were envisioned more than a decade ago. This presentation will explore the components and benefits of an integrated global observing system for atmospheric composition and air quality, some of the gaps and obstacles that exist in our current capabilities and institutions, and efforts that may be needed to achieve the envisioned system.
GPR application on construction foundation study
NASA Astrophysics Data System (ADS)
Amran, T. S. T.; Ismail, M. P.; Ismail, M. A.; Amin, M. S. M.; Ahmad, M. R.; Basri, N. S. M.
2017-11-01
Extensive research has been carried out on radar systems for the commercialisation of ground penetrating radar (GPR) technology, which was pioneered in construction and has thus claimed its rightful place in the vision of the future. The application of ground penetrating radar in construction studies is briefly reviewed. Based on previous experiments and studies, this paper focuses on reinforcement bar (rebar) investigation in construction. Data from previous references are used to discuss and analyse the capability of ground penetrating radar for further improvement of construction projects, especially rebar placement works.
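The basic quantity a GPR rebar survey recovers is depth from two-way travel time, via the wave speed in the medium. The sketch below applies the standard relation d = v·t/2 with v = c/√εr; the relative permittivity of 6 is a typical illustrative value for concrete, not a figure from this paper.

```python
# Standard GPR depth estimate from two-way travel time, assuming a
# homogeneous medium with relative permittivity eps_r ~ 6 (illustrative).
C = 0.2998  # speed of light in free space, m/ns

def rebar_depth_m(two_way_time_ns, eps_r=6.0):
    velocity = C / eps_r ** 0.5        # wave speed in the medium, m/ns
    return velocity * two_way_time_ns / 2.0

print(rebar_depth_m(1.5))              # ~0.09 m for a 1.5 ns echo
```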
Evolution of Embedded Processing for Wide Area Surveillance
2014-01-01
Subject terms: embedded processing; high-performance computing; general-purpose graphical processing units (GPGPUs). [Abstract fragment] ...intelligence, surveillance, and reconnaissance (ISR) mission capabilities. The capabilities these advancements are achieving include the ability to provide persistent... fighters to support and positively affect their mission. Significant improvements in high-performance computing (HPC) technology make it possible to...
Night vision imaging systems design, integration, and verification in military fighter aircraft
NASA Astrophysics Data System (ADS)
Sabatini, Roberto; Richardson, Mark A.; Cantiello, Maurizio; Toscano, Mario; Fiorini, Pietro; Jia, Huamin; Zammit-Mangion, David
2012-04-01
This paper describes the developmental and testing activities conducted by the Italian Air Force Official Test Centre (RSV) in collaboration with Alenia Aerospace, Litton Precision Products and Cranfield University, in order to confer the Night Vision Imaging Systems (NVIS) capability to the Italian TORNADO IDS (Interdiction and Strike) and ECR (Electronic Combat and Reconnaissance) aircraft. The activities consisted of various Design, Development, Test and Evaluation (DDT&E) activities, including Night Vision Goggles (NVG) integration, cockpit instruments and external lighting modifications, as well as various ground test sessions and a total of eighteen flight test sorties. RSV and Litton Precision Products were responsible for coordinating and conducting the installation activities of the internal and external lights. In particular, an iterative process was established, allowing rapid on-site correction of the major deficiencies encountered during the ground and flight test sessions. Both single-ship (day/night) and formation (night) flights were performed, shared between the Test Crews involved in the activities, allowing for a redundant examination of the various test items by all participants. An innovative test matrix was developed and implemented by RSV for assessing the operational suitability and effectiveness of the various modifications implemented. Also important was the definition of test criteria for Pilot and Weapon Systems Officer (WSO) workload assessment during the accomplishment of various operational tasks during NVG missions. Furthermore, the specific technical and operational elements required for evaluating the modified helmets were identified, allowing an exhaustive comparative evaluation of the two proposed solutions (i.e., the HGU-55P and HGU-55G modified helmets). The results of the activities were very satisfactory. The initial compatibility problems encountered were progressively mitigated by incorporating modifications in both the front and rear cockpits at the various stages of the test campaign. This process allowed a considerable enhancement of the TORNADO NVIS configuration, giving a good medium-high level NVG operational capability to the aircraft. Further developments include the design, integration and test of internal/external lighting for the Italian TORNADO "Mid Life Update" (MLU) and other programs, such as the AM-X aircraft internal/external lights modification/testing and the activities addressing low-altitude NVG operations with fast jets (e.g., TORNADO, AM-X, MB-339CD), a major issue being the safe ejection of aircrew with NVG and NVG-modified helmets. Two options have been identified for solving this problem: namely, the modification of the current Gentex HGU-55 helmets and the design of a new helmet incorporating a reliable NVG connection/disconnection device (i.e., a mechanical system fully integrated in the helmet frame), with embedded automatic disconnection capability in case of ejection.
Gamma-Ray imaging for nuclear security and safety: Towards 3-D gamma-ray vision
NASA Astrophysics Data System (ADS)
Vetter, Kai; Barnowksi, Ross; Haefner, Andrew; Joshi, Tenzing H. Y.; Pavlovsky, Ryan; Quiter, Brian J.
2018-01-01
The development of portable gamma-ray imaging instruments, in combination with recent advances in sensor and related computer vision technologies, enables unprecedented capabilities in the detection, localization, and mapping of radiological and nuclear materials in complex environments relevant for nuclear security and safety. Though multi-modal imaging has been established in medicine and biomedical imaging for some time, the potential of multi-modal data fusion for radiological localization and mapping problems in complex indoor and outdoor environments remains to be explored in detail. In contrast to the well-defined settings in medical or biological imaging, with their small fields of view and well-constrained extent of the radiation field, in many radiological search and mapping scenarios the radiation fields are not constrained and objects and sources are not necessarily known prior to the measurement. The ability to fuse radiological with contextual or scene data in three dimensions, analogous to the fusion of functional and anatomical imaging in medicine, provides new capabilities enhancing image clarity, context, quantitative estimates, and visualization of the data products. We have developed new means to register and fuse gamma-ray imaging with contextual data from portable or moving platforms. These developments enhance detection and mapping capabilities and provide unprecedented visualization of complex radiation fields, moving us one step closer to the realization of gamma-ray vision in three dimensions.
Effects of high-color-discrimination capability spectra on color-deficient vision.
Perales, Esther; Linhares, João Manuel Maciel; Masuda, Osamu; Martínez-Verdú, Francisco M; Nascimento, Sérgio Miguel Cardoso
2013-09-01
Light sources with three spectral bands in specific spectral positions are known to have high-color-discrimination capability. W. A. Thornton hypothesized that they may also enhance color discrimination for color-deficient observers. This hypothesis was tested here by comparing the Rösch-MacAdam color volume for color-deficient observers rendered by three of these singular spectra, two reported previously and one derived in this paper by maximization of the Rösch-MacAdam color solid. It was found that all illuminants tested enhance discriminability for deuteranomalous observers, but their impact on other congenital deficiencies was variable. The best illuminant was the one derived here, as it was clearly advantageous for the two red-green anomalies and for tritanopes and almost neutral for red-green dichromats. We conclude that three-band spectra with high-color-discrimination capability for normal observers do not necessarily produce comparable enhancements for color-deficient observers, but suitable spectral optimization clearly enhances the vision of the color deficient.
Automated site characterization for robotic sample acquisition systems
NASA Astrophysics Data System (ADS)
Scholl, Marija S.; Eberlein, Susan J.
1993-04-01
A mobile, semiautonomous vehicle with multiple sensors and on-board intelligence is proposed for performing preliminary scientific investigations on extraterrestrial bodies prior to human exploration. Two technologies, a hybrid optical-digital computer system based on optical correlator technology and an image and instrument data analysis system, provide complementary capabilities that might be part of an instrument package for an intelligent robotic vehicle. The hybrid digital-optical vision system could perform real-time image classification tasks using an optical correlator with programmable matched filters under control of a digital microcomputer. The data analysis system would analyze visible and multiband imagery to extract mineral composition and textural information for geologic characterization. Together these technologies would support the site characterization needs of a robotic vehicle for both navigational and scientific purposes.
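The optical correlator with programmable matched filters has a direct digital analogue: FFT-based cross-correlation of the scene with a template, with the correlation peak marking the match. The sketch below illustrates that principle on synthetic data; it is not the hybrid optical-digital system itself.

```python
import numpy as np

# Matched filtering via FFT correlation, the digital analogue of an
# optical correlator with a programmable matched filter.
def matched_filter_response(scene, template):
    """Cross-correlate a scene with a template; the peak marks the match."""
    F = np.fft.fft2(scene)
    H = np.conj(np.fft.fft2(template, s=scene.shape))  # matched filter
    corr = np.real(np.fft.ifft2(F * H))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak, corr[peak]

scene = np.random.rand(128, 128)
scene[40:48, 60:68] += 2.0                 # embed a bright 8x8 target
template = np.ones((8, 8))
print(matched_filter_response(scene, template)[0])   # near (40, 60)
```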
Dimensional measuring techniques in the automotive and aircraft industry
NASA Astrophysics Data System (ADS)
Muench, K. H.; Baertlein, Hugh
1994-03-01
Optical tooling methods used in industry are rapidly being replaced by new electronic sensor techniques. The impact of new measuring technologies on the production process has caused major changes on the industrial shop floor as well as within industrial measurement systems. The paper deals with one particular industrial measuring system, the manual theodolite measuring system (TMS), within the aircraft and automobile industry. With TMS, setup, data capture, and data analysis are flexible enough to suit industry's demands regarding speed, accuracy, and mobility. Examples show the efficiency and the wide range of TMS applications. In cooperation with industry, the Video Theodolite System (VTS) was developed; its origin, functions, capabilities, and future plans are briefly described. With the VTS, a major step has been taken toward vision systems for industrial applications.
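The geometric core of a two-theodolite measurement is ray intersection: each station contributes a sight ray from its measured azimuth and elevation, and the target point is taken at the closest approach of the two rays. The sketch below shows that computation; the station positions and angles are made-up example values, not data from the paper.

```python
import numpy as np

# Two-theodolite intersection sketch: the target is the midpoint of
# the shortest segment between the two sight rays.
def ray(station, azimuth, elevation):
    d = np.array([np.cos(elevation) * np.sin(azimuth),
                  np.cos(elevation) * np.cos(azimuth),
                  np.sin(elevation)])
    return np.asarray(station, float), d

def intersect(p1, d1, p2, d2):
    """Midpoint of the closest approach of two (possibly skew) rays."""
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    t, s = np.linalg.solve(A, [(p2 - p1) @ d1, (p2 - p1) @ d2])
    return ((p1 + t * d1) + (p2 + s * d2)) / 2.0

p1, d1 = ray([0.0, 0.0, 0.0], np.radians(45.0), np.radians(10.0))
p2, d2 = ray([5.0, 0.0, 0.0], np.radians(-30.0), np.radians(12.0))
print(intersect(p1, d1, p2, d2))   # estimated 3D target position
```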
NASA Astrophysics Data System (ADS)
Stack, J. R.; Guthrie, R. S.; Cramer, M. A.
2009-05-01
The purpose of this paper is to outline the requisite technologies and enabling capabilities for network-centric sensor data analysis within the mine warfare community. The focus includes both automated processing and the traditional human-centric post-mission analysis (PMA) of tactical and environmental sensor data. This is motivated by first examining the high-level network-centric guidance and noting the breakdown in the process of distilling actionable requirements from this guidance. Examples are provided that illustrate the intuitive and substantial capability improvement resulting from processing sensor data jointly in a network-centric fashion. Several candidate technologies are introduced, including the ability to fully process multi-sensor data given only partial overlap in sensor coverage and the ability to incorporate target identification information in stride. Finally, the critical enabling capabilities are outlined, including open architecture, open business, and a concept of operations. The ability to process multi-sensor data in a network-centric fashion is a core enabler of the Navy's vision and will become a necessity with the increasing number of manned and unmanned sensor systems and the requirement for their simultaneous use.
Biological Basis For Computer Vision: Some Perspectives
NASA Astrophysics Data System (ADS)
Gupta, Madan M.
1990-03-01
Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels and the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies are heavily dependent on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prosthesis for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.
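A classic example of the "mathematical functions" such retinal emulation targets is the centre-surround receptive field, commonly modelled as a difference of Gaussians. The sketch below builds such a kernel; the sizes and sigmas are illustrative, and this is a textbook model rather than the paper's specific formulation.

```python
import numpy as np

# Difference-of-Gaussians model of a retinal centre-surround
# receptive field: excitatory centre, inhibitory surround.
def dog_kernel(size=15, sigma_c=1.0, sigma_s=3.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    centre = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return centre - surround

kernel = dog_kernel()
print(kernel.sum())   # approximately zero: little response to flat fields
```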
Burnett, Anthea; Yashadhana, Aryati; Cabrera Aguas, Maria; Hanni, Yvonne; Yu, Mitasha
2016-01-01
A person's capability to access services and achieve good eye health is influenced by their behaviours, perceptions, beliefs and experiences. As evidence from Papua New Guinea (PNG) about people's lived experience with vision impairment is limited, the purpose of the present study was to better understand the beliefs, perceptions and emotional responses to vision impairment in PNG. A qualitative study, using both purposive and convenience sampling, was undertaken to explore common beliefs and perceptions about vision impairment, as well as the emotional responses to vision impairment. In-depth interviews were undertaken with 51 adults from five provinces representing culturally and geographically diverse regions of PNG. Grounded theory was used to elicit key themes from interview data. Participants described activities of everyday life impacted by vision impairment and the related worry, sadness and social exclusion. Common beliefs about the causes of vision impairment were environmental stressors (sun, dust, dirt and smoke), ageing and sorcery. Findings provide insight into the unique social context in PNG and identify a number of programmatic and policy implications, such as the need for preventative eye health information and services, addressing persisting beliefs in sorcery when developing health information packages, and the importance of coordinating with counselling and well-being services for people experiencing vision impairment.
NASA Astrophysics Data System (ADS)
Theisen, Bernard L.; Lane, Gerald R.
2003-10-01
The Intelligent Ground Vehicle Competition (IGVC) is one of three unmanned-systems student competitions founded by the Association for Unmanned Vehicle Systems International (AUVSI) in the 1990s. The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics, and mobile platform fundamentals to design and build an unmanned system. Both U.S. and international teams focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 11 years, the competition has challenged both undergraduates and graduates, including Ph.D. students, with real-world applications in intelligent transportation systems, the military, and manufacturing automation. To date, teams from over 40 universities and colleges have participated. In this paper, we describe some of the applications of the technologies required by this competition, and discuss the educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the three-day competition are highlighted. Finally, an assessment of the competition based on participant feedback is presented.
An Overview of the NASA Aeronautics Test Program Strategic Plan
NASA Technical Reports Server (NTRS)
Marshall, Timothy J.
2010-01-01
U.S. leadership in aeronautics depends on ready access to technologically advanced, efficient, and affordable aeronautics test capabilities. These systems include major wind tunnels and propulsion test facilities and flight test capabilities. The federal government owns the majority of the major aeronautics test capabilities in the United States, primarily through the National Aeronautics and Space Administration (NASA) and the Department of Defense (DoD); however, an overarching strategy for management of these national assets was needed. Therefore, in Fiscal Year (FY) 2006 NASA established the Aeronautics Test Program (ATP) as a two-pronged strategic initiative to: (1) retain and invest in NASA aeronautics test capabilities considered strategically important to the agency and the nation, and (2) establish a strong, high-level partnership with the DoD Test Resources Management Center (TRMC), stewards of the DoD test and evaluation infrastructure. Since then, approximately seventy percent of the ATP budget has been directed to underpin fixed and variable costs of facility operations within its portfolio and the balance towards strategic investments in its test facilities, including maintenance and capability upgrades. Also, a strong guiding coalition was established through the National Partnership for Aeronautics Testing (NPAT), with governance by the senior leadership of NASA's Aeronautics Research Mission Directorate (ARMD) and the DoD's TRMC. As part of its strategic planning, ATP has performed or participated in many studies and analyses, including assessments of major NASA and DoD aeronautics test capabilities, test facility condition evaluations and market research. The ATP strategy has also benefitted from unpublished RAND research and analysis by Antón et al. (2009). Together, these various studies, reports and assessments serve as a foundation for a new, five-year strategic plan that will guide ATP through FY 2014. Our vision for the future is a balanced portfolio of aeronautics ground and flight test capabilities that advance U.S. leadership in aeronautics in the short and long term. Key to the ATP vision is the concept of availability, not necessarily ownership; that is, NASA does not have to own and operate all facilities that are envisioned for future aeronautics testing. However, ATP will enable access to capabilities which are needed but not owned by NASA through strategic partnerships and reliance agreements. This paper will outline the major aspects of the ATP strategic plan for achieving its mission.
Colour calibration of a laboratory computer vision system for quality evaluation of pre-sliced hams.
Valous, Nektarios A; Mendoza, Fernando; Sun, Da-Wen; Allen, Paul
2009-01-01
Due to the high variability and complex colour distribution in meats and meat products, the colour signal calibration of any computer vision system used for colour quality evaluation represents an essential condition for objective and consistent analyses. This paper compares two methods for CIE colour characterization using a computer vision system (CVS) based on digital photography, namely the polynomial transform procedure and the transform proposed by the sRGB standard. It also presents a procedure for evaluating the colour appearance and the presence of pores and fat-connective tissue on pre-sliced hams made from pork, turkey and chicken. Our results showed high precision in colour matching for device characterization when the polynomial transform was used to match the CIE tristimulus values, in comparison with the sRGB standard approach, as indicated by their ΔE*ab values. The [3×20] polynomial transfer matrix yielded a modelling accuracy averaging below 2.2 ΔE*ab units. Using the sRGB transform, high variability was observed among the computed ΔE*ab values (8.8 ± 4.2). The calibrated laboratory CVS, implemented with a low-cost digital camera, exhibited reproducible colour signals over a wide range of colours, capable of pinpointing regions of interest, and allowed the extraction of quantitative information from the overall ham slice surface with high accuracy. The extracted colour and morphological features showed potential for characterizing the appearance of ham slice surfaces. CVS is a tool that can objectively specify the colour and appearance properties of non-uniformly coloured commercial ham slices.
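The polynomial transform procedure amounts to a least-squares regression from polynomial expansions of camera RGB onto measured CIE XYZ for a set of training patches. The sketch below shows the idea with a 10-term expansion for brevity (a full [3×20] transform, as in the paper, would add cubic terms); the term set is an illustrative assumption.

```python
import numpy as np

# Least-squares polynomial characterization: expand each RGB into
# polynomial terms and regress onto measured XYZ.
def poly_terms(rgb):
    r, g, b = rgb
    return [1, r, g, b, r*g, r*b, g*b, r*r, g*g, b*b]

def fit_transform(rgbs, xyzs):
    """rgbs, xyzs: (N, 3) arrays of camera RGB and measured CIE XYZ."""
    P = np.array([poly_terms(c) for c in rgbs])          # (N, 10)
    M, *_ = np.linalg.lstsq(P, np.asarray(xyzs), rcond=None)
    return M                                             # (10, 3)

def apply_transform(M, rgb):
    return np.array(poly_terms(rgb)) @ M                 # predicted XYZ
```

Characterization quality is then judged by converting predictions and measurements to CIELAB and averaging the ΔE*ab errors over a held-out patch set.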
Hubble Space Telescope: cost reduction by re-engineering telemetry processing and archiving
NASA Astrophysics Data System (ADS)
Miebach, Manfred P.
1998-05-01
The Hubble Space Telescope (HST), the first of NASA's Great Observatories, was launched on April 24, 1990. The HST was designed for a minimum fifteen-year mission with on-orbit servicing by the Space Shuttle System planned at approximately three-year intervals. Major changes to the HST ground system are planned to be in place for the third servicing mission in December 1999. The primary objectives of the ground system re-engineering effort, a project called 'Vision 2000 Control Center Systems (CCS)', are to reduce both development and operating costs significantly for the remaining years of HST's lifetime. Development costs will be reduced by providing a modern hardware and software architecture and utilizing commercial off-the-shelf (COTS) products wherever possible. Operating costs will be reduced by eliminating redundant legacy systems and processes and by providing an integrated ground system geared toward autonomous operation. Part of CCS is a Space Telescope Engineering Data Store, the design of which is based on current data warehouse technology. The purpose of this data store is to provide a common data source of telemetry data for all HST subsystems. This data store will become the engineering data archive and will include a queryable database for the user to analyze HST telemetry. Access to the engineering data in the data warehouse is platform-independent from an office environment using commercial standards. The latest internet technology is used to reach the HST engineering community: a Web-based user interface allows easy access to the data archives. This paper will provide a high-level overview of the CCS system and will illustrate some of the CCS telemetry capabilities. Samples of CCS user interface pages will be given. Vision 2000 is an ambitious project, but one that is well under way. It will allow the HST program to realize reduced operations costs for the Third Servicing Mission and beyond.
Scorpion Hybrid Optical-based Inertial Tracker (HObIT) test results
NASA Astrophysics Data System (ADS)
Atac, Robert; Spink, Scott; Calloway, Tom; Foxlin, Eric
2014-06-01
High-fidelity night-vision training has become important for many of the simulation systems being procured today. The end-users of these simulation-training systems prefer using their actual night-vision goggle (NVG) headsets. This requires that the visual display system stimulate the NVGs in a realistic way. Historically, NVG stimulation was done with cathode-ray tube (CRT) projectors. However, this technology became obsolete, and in recent years training simulators have performed NVG stimulation with laser, LCoS and DLP projectors. The LCoS and DLP projection technologies have emerged as the preferred approach for the stimulation of NVGs. Both LCoS and DLP technologies have advantages and disadvantages for stimulating NVGs. LCoS projectors can have more than 5-10 times the contrast capability of DLP projectors. The larger the difference between the projected black level and the brightest object in a scene, the better the NVG stimulation effects can be. This is an advantage of LCoS technology, especially when the proper NVG wavelengths are used. Single-chip DLP projectors, even though they have much reduced contrast compared to LCoS projectors, can use LED illuminators in a sequential red-green-blue fashion to create a projected image. It is straightforward to add an extra infrared (NVG-wavelength) LED into this sequential chain of LED illumination. The content of this NVG channel can be independent of the visible scene, which allows effects to be added that can compensate for the lack of contrast inherent in a DLP device. This paper will expand on the differences between LCoS and DLP projectors for stimulating NVGs and summarize the benefits of both in night-vision simulation training systems.
Downey, John E; Weiss, Jeffrey M; Muelling, Katharina; Venkatraman, Arun; Valois, Jean-Sebastien; Hebert, Martial; Bagnell, J Andrew; Schwartz, Andrew B; Collinger, Jennifer L
2016-03-18
Recent studies have shown that brain-machine interfaces (BMIs) offer great potential for restoring upper limb function. However, grasping objects is a complicated task and the signals extracted from the brain may not always be capable of driving these movements reliably. Vision-guided robotic assistance is one possible way to improve BMI performance. We describe a method of shared control where the user controls a prosthetic arm using a BMI and receives assistance with positioning the hand when it approaches an object. Two human subjects with tetraplegia used a robotic arm to complete object transport tasks with and without shared control. The shared control system was designed to provide a balance between BMI-derived intention and computer assistance. An autonomous robotic grasping system identified and tracked objects and defined stable grasp positions for these objects. The system identified when the user intended to interact with an object based on the BMI-controlled movements of the robotic arm. Using shared control, BMI-controlled movements and autonomous grasping commands were blended to ensure secure grasps. Both subjects were more successful on object transfer tasks when using shared control compared to BMI control alone. Movements made using shared control were more accurate, more efficient, and less difficult. One participant attempted a task with multiple objects and successfully lifted one of two closely spaced objects in 92% of trials, demonstrating the potential for users to accurately execute their intention while using shared control. Integration of BMI control with vision-guided robotic assistance led to improved performance on object transfer tasks. Providing assistance while maintaining generalizability will make BMI systems more attractive to potential users. Trial registrations: NCT01364480 and NCT01894802.
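One common way to realize such blending is a convex combination of the BMI velocity command and the autonomous grasping command, weighted by the hand's distance to the target grasp. The sketch below illustrates that idea; the distance-based weighting and the 30 cm radius are illustrative stand-ins, not the study's actual arbitration scheme.

```python
import numpy as np

# Hedged sketch of shared control: blend BMI and autonomous commands
# as the hand nears the object. The radius is a made-up parameter.
def blended_velocity(v_bmi, v_auto, hand_pos, grasp_pos, radius=0.30):
    d = np.linalg.norm(np.asarray(hand_pos) - np.asarray(grasp_pos))
    alpha = min(1.0, d / radius)      # 1.0 = pure BMI control far away
    return alpha * np.asarray(v_bmi) + (1.0 - alpha) * np.asarray(v_auto)
```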
NASA Astrophysics Data System (ADS)
Jain, A. K.; Dorai, C.
Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.
Vision for an Open, Global Greenhouse Gas Information System (GHGIS)
NASA Astrophysics Data System (ADS)
Duren, R. M.; Butler, J. H.; Rotman, D.; Ciais, P.; Greenhouse Gas Information System Team
2010-12-01
Over the next few years, an increasing number of entities ranging from international, national, and regional governments, to businesses and private land-owners, are likely to become more involved in efforts to limit atmospheric concentrations of greenhouse gases. In such a world, geospatially resolved information about the location, amount, and rate of greenhouse gas (GHG) emissions will be needed, as well as the stocks and flows of all forms of carbon through the earth system. The ability to implement policies that limit GHG concentrations would be enhanced by a global, open, and transparent greenhouse gas information system (GHGIS). An operational and scientifically robust GHGIS would combine ground-based and space-based observations, carbon-cycle modeling, GHG inventories, synthesis analysis, and an extensive data integration and distribution system, to provide information about anthropogenic and natural sources, sinks, and fluxes of greenhouse gases at temporal and spatial scales relevant to decision making. The GHGIS effort was initiated in 2008 as a grassroots inter-agency collaboration intended to identify the needs for such a system, assess the capabilities of current assets, and suggest priorities for future research and development. We will present a vision for an open, global GHGIS including latest analysis of system requirements, critical gaps, and relationship to related efforts at various agencies, the Group on Earth Observations, and the Intergovernmental Panel on Climate Change.
The Cyborg Astrobiologist: scouting red beds for uncommon features with geological significance
NASA Astrophysics Data System (ADS)
McGuire, Patrick Charles; Díaz-Martínez, Enrique; Ormö, Jens; Gómez-Elvira, Javier; Rodríguez-Manfredi, José Antonio; Sebastián-Martínez, Eduardo; Ritter, Helge; Haschke, Robert; Oesker, Markus; Ontrup, Jörg
2005-04-01
The `Cyborg Astrobiologist' has undergone a second geological field trial, at a site in northern Guadalajara, Spain, near Riba de Santiuste. The site at Riba de Santiuste is dominated by layered deposits of red sandstones. The Cyborg Astrobiologist is a wearable computer and video camera system that has demonstrated a capability to find uncommon interest points in geological imagery in real time in the field. In this second field trial, the computer vision system of the Cyborg Astrobiologist was tested at seven different tripod positions, on three different geological structures. The first geological structure was an outcrop of nearly homogeneous sandstone, which exhibits oxidized-iron impurities in red areas and an absence of these iron impurities in white areas. The white areas in these `red beds' have turned white because the iron has been removed. The iron removal from the sandstone can proceed once the iron has been chemically reduced, perhaps by a biological agent. In one instance the computer vision system found several (iron-free) white spots to be uncommon and therefore interesting, as well as several small and dark nodules. The second geological structure was another outcrop some 600 m to the east, with white, textured mineral deposits on the surface of the sandstone, at the bottom of the outcrop. The computer vision system found these white, textured mineral deposits to be interesting. We acquired samples of the mineral deposits for geochemical analysis in the laboratory. This laboratory analysis of the crust identifies a double layer, consisting of an internal millimetre-size layering of calcite and an external centimetre-size efflorescence of gypsum. The third geological structure was a 50 cm thick palaeosol layer, with fossilized root structures of some plants. The computer vision system also found certain areas of these root structures to be interesting. A quasi-blind comparison of the Cyborg Astrobiologist's interest points for these images with the interest points determined afterwards by a human geologist shows that the Cyborg Astrobiologist concurred with the human geologist 68% of the time (true-positive rate), with a 32% false-positive rate and a 32% false-negative rate. The performance of the Cyborg Astrobiologist's computer vision system was by no means perfect, so there is plenty of room for improvement. However, these tests validate the image-segmentation and uncommon-mapping technique that we first employed at a different geological site (Rivas Vaciamadrid) with somewhat different properties for the imagery.
Learning prosthetic vision: a virtual-reality study.
Chen, Spencer C; Hallum, Luke E; Lovell, Nigel H; Suaning, Gregg J
2005-09-01
Acceptance of prosthetic vision will be heavily dependent on the ability of recipients to form useful information from such vision. Training strategies to accelerate learning and maximize visual comprehension would need to be designed in the light of the factors affecting human learning under prosthetic vision. Some of these potential factors were examined in a visual acuity study using the Landolt C optotype under virtual-reality simulation of prosthetic vision. Fifteen normally sighted subjects were tested for 10-20 sessions. Potential learning factors were tested at p < 0.05 with regression models. Learning was most evident across-sessions, though 17% of sessions did express significant within-session trends. Learning was highly concentrated toward a critical range of optotype sizes, and subjects were less capable in identifying the closed optotype (a Landolt C with no gap, forming a closed annulus). Training for implant recipients should target these critical sizes and the closed optotype to extend the limit of visual comprehension. Although there was no evidence that image processing affected overall learning, subjects showed varying personal preferences.
Robonaut: A Robotic Astronaut Assistant
NASA Technical Reports Server (NTRS)
Ambrose, Robert O.; Diftler, Myron A.
2001-01-01
NASA's latest anthropomorphic robot, Robonaut, has reached a milestone in its capability. This highly dexterous robot, designed to assist astronauts in space, is now performing complex tasks at the Johnson Space Center that could previously only be carried out by humans. With 43 degrees of freedom, Robonaut is the first humanoid built for space and incorporates technology advances in dexterous hands, modular manipulators, lightweight materials, and telepresence control systems. Robonaut is human size, has a three degree of freedom (DOF) articulated waist, and two, seven DOF arms, giving it an impressive work space for interacting with its environment. Its two, five fingered hands allow manipulation of a wide range of tools. A pan/tilt head with multiple stereo camera systems provides data for both teleoperators and computer vision systems.
Techniques and potential capabilities of multi-resolutional information (knowledge) processing
NASA Technical Reports Server (NTRS)
Meystel, A.
1989-01-01
A concept of nested hierarchical (multi-resolutional, pyramidal) information (knowledge) processing is introduced for a variety of systems including data and/or knowledge bases, vision, control, and manufacturing systems, industrial automated robots, and (self-programmed) autonomous intelligent machines. A set of practical recommendations is presented using a case study of a multiresolutional object representation. It is demonstrated here that any intelligent module transforms (sometimes irreversibly) the knowledge it deals with, and this transformation affects the subsequent computation processes, e.g., those of decision and control. Several types of knowledge transformation are reviewed. Definite conditions are analyzed, satisfaction of which is required for organization and processing of redundant information (knowledge) in multi-resolutional systems. Providing a definite degree of redundancy is one of these conditions.
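The simplest concrete instance of such a pyramidal representation is an image pyramid, where each coarser level is an irreversible aggregation of the level below, exactly the kind of lossy knowledge transformation the abstract describes. The sketch below uses plain 2x2 block averaging as the coarsening step; it is an illustration of the concept, not the paper's formalism.

```python
import numpy as np

# Minimal multiresolution (pyramidal) representation: each level
# halves the resolution, trading detail for abstraction.
def pyramid(image, levels=4):
    out = [np.asarray(image, float)]
    for _ in range(levels - 1):
        a = out[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        # 2x2 block averaging: a crude, irreversible coarsening step.
        coarse = a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        out.append(coarse)
    return out

for level in pyramid(np.random.rand(64, 64)):
    print(level.shape)    # (64,64) -> (32,32) -> (16,16) -> (8,8)
```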
Explicit solution techniques for impact with contact constraints
NASA Technical Reports Server (NTRS)
Mccarty, Robert E.
1993-01-01
Modern military aircraft transparency systems, windshields and canopies, are complex systems which must meet a large and rapidly growing number of requirements. Many of these transparency system requirements are conflicting, presenting difficult balances which must be achieved. One example of a challenging requirements balance or trade is shaping for stealth versus aircrew vision. The large number of requirements involved may be grouped in a variety of areas including man-machine interface; structural integration with the airframe; combat hazards; environmental exposures; and supportability. Some individual requirements by themselves pose very difficult, severely nonlinear analysis problems. One such complex problem is that associated with the dynamic structural response resulting from high energy bird impact. An improved analytical capability for soft-body impact simulation was developed.
Micro-Inspector Spacecraft for Space Exploration Missions
NASA Technical Reports Server (NTRS)
Mueller, Juergen; Alkalai, Leon; Lewis, Carol
2005-01-01
NASA is seeking to embark on a new set of human and robotic exploration missions back to the Moon, to Mars, and destinations beyond. Key strategic technical challenges will need to be addressed to realize this new vision for space exploration, including improvements in safety and reliability to improve robustness of space operations. Under sponsorship by NASA's Exploration Systems Mission, the Jet Propulsion Laboratory (JPL), together with its partners in government (NASA Johnson Space Center) and industry (Boeing, Vacco Industries, Ashwin-Ushas Inc.) is developing an ultra-low mass (<3.0 kg) free-flying micro-inspector spacecraft in an effort to enhance safety and reduce risk in future human and exploration missions. The micro-inspector will provide remote vehicle inspections to ensure safety and reliability, or to provide monitoring of in-space assembly. The micro-inspector spacecraft represents an inherently modular system addition that can improve safety and support multiple host vehicles in multiple applications. On human missions, it may help extend the reach of human explorers, decreasing human EVA time to reduce mission cost and risk. The micro-inspector development is the continuation of an effort begun under NASA's Office of Aerospace Technology Enabling Concepts and Technology (ECT) program. The micro-inspector uses miniaturized celestial sensors; relies on a combination of solar power and batteries (allowing for unlimited operation in the sun and up to 4 hours in the shade); utilizes a low-pressure, low-leakage liquid butane propellant system for added safety; and includes multi-functional structure for high system-level integration and miniaturization. Versions of this system to be designed and developed under the H&RT program will include additional capabilities for on-board, vision-based navigation, spacecraft inspection, and collision avoidance, and will be demonstrated in a ground-based, space-related environment. These features make the micro-inspector design unique in its ability to serve crewed as well as robotic spacecraft, well beyond Earth-orbit and into arenas such as robotic missions, where human teleoperation capability is not locally available.
Air Force Handbook. 109th Congress
2009-01-01
[Handbook fragment] FY06 Combat Survivor Evader Locator (CSEL): acquisition status, capabilities/profile, functions/performance parameters; the Air Force's primary source for... Global Broadcast Service (GBS): capabilities/profile, acquisition status, functions/performance parameters; purchase requirements (Phase 2): 3 primary... Air Force Concepts of Operations (AF CONOPS) support the CSAF and joint vision of combat operations and describe key Air Force mission and/or functional areas.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.
2007-01-01
The use of enhanced vision systems in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting approach and landing operations. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved enhanced flight vision system that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of synthetic vision systems and enhanced vision system technologies, focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under these newly adopted rules. Experimental results specific to flight crew response to non-normal events using the fused synthetic/enhanced vision system are presented.
Flight instruments and helmet-mounted SWIR imaging systems
NASA Astrophysics Data System (ADS)
Robinson, Tim; Green, John; Jacobson, Mickey; Grabski, Greg
2011-06-01
Night vision technology has experienced significant advances in the last two decades. Night vision goggles (NVGs) based on gallium arsenide (GaAs) continue to raise the bar for alternative technologies. Resolution, gain, and sensitivity have all improved; the image quality through these devices is nothing less than incredible. Panoramic NVGs and enhanced NVGs are examples of recent advances that increase warfighter capabilities. Even with these advances, alternative night vision devices such as solid-state indium gallium arsenide (InGaAs) focal plane arrays are under development for helmet-mounted imaging systems. The InGaAs imaging system offers advantages over the existing NVGs. Two key advantages are: (1) the new system produces digital image data, and (2) the new system is sensitive to energy in the shortwave infrared (SWIR) spectrum. While it is tempting to contrast the performance of these digital systems with the existing NVGs, the advantage of different spectral detection bands leads to the conclusion that the technologies are less competitive and more synergistic. It is likely that, by the end of the decade, pilots within a cockpit will use multi-band devices. As such, flight decks will need to be compatible with both NVGs and SWIR imaging systems. Insertion of NVGs in aircraft during the late 1970s and early 1980s resulted in many "lessons learned" concerning instrument compatibility with NVGs. These "lessons learned" ultimately resulted in specifications such as MIL-L-85762A and MIL-STD-3009. These specifications are now used throughout industry to produce NVG-compatible illuminated instruments and displays for both military and civilian applications. Inserting a SWIR imaging device in a cockpit will require similar consideration. A project evaluating flight deck instrument compatibility with SWIR devices is currently ongoing; aspects of this evaluation are described in this paper. This project is sponsored by the Air Force Research Laboratory (AFRL).
COBALT: A GN&C Payload for Testing ALHAT Capabilities in Closed-Loop Terrestrial Rocket Flights
NASA Technical Reports Server (NTRS)
Carson, John M., III; Amzajerdian, Farzin; Hines, Glenn D.; O'Neal, Travis V.; Robertson, Edward A.; Seubert, Carl; Trawny, Nikolas
2016-01-01
The COBALT (CoOperative Blending of Autonomous Landing Technology) payload is being developed within NASA as a risk-reduction activity to mature, integrate, and test ALHAT (Autonomous precision Landing and Hazard Avoidance Technology) systems targeted for infusion into near-term robotic and future human space flight missions. The initial COBALT payload instantiation integrates the third-generation ALHAT Navigation Doppler Lidar (NDL) sensor, for ultra-high-precision velocity and range measurements, with the passive-optical Lander Vision System (LVS), which provides Terrain Relative Navigation (TRN) global-position estimates. The COBALT payload will be integrated onboard a rocket-propulsive terrestrial testbed and will provide precise navigation estimates and guidance planning during two flight test campaigns in 2017 (one open-loop and one closed-loop). The NDL is targeting performance capabilities desired for future Mars and Moon Entry, Descent and Landing (EDL). The LVS is already baselined for TRN on the Mars 2020 robotic lander mission. The COBALT platform will provide NASA with a new risk-reduction capability to test integrated EDL Guidance, Navigation and Control (GN&C) components in closed-loop flight demonstrations prior to the actual mission EDL.
A developmental roadmap for learning by imitation in robots.
Lopes, Manuel; Santos-Victor, José
2007-04-01
In this paper, we present a strategy whereby a robot acquires the capability to learn by imitation, following a developmental pathway consisting of three levels: 1) sensory-motor coordination; 2) world interaction; and 3) imitation. With these stages, the system is able to learn tasks by imitating human demonstrators. We describe results of the different developmental stages, involving perceptual and motor skills, implemented in our humanoid robot, Baltazar. At each stage, the system's attention is drawn toward different entities: its own body and, later on, objects and people. Our main contributions are the general architecture and the implementation of all the modules necessary for the robot to eventually acquire imitation capabilities. Several other contributions are made at each level: learning of sensory-motor maps for redundant robots, a novel method for learning how to grasp objects, and a framework for learning task descriptions from observation for program-level imitation. Finally, vision is used extensively as the sole sensing modality (sometimes in a simplified setting), avoiding the need for special data-acquisition hardware.
EarthCube's Assessment Framework: Ensuring Return on Investment
NASA Astrophysics Data System (ADS)
Lehnert, K.
2016-12-01
EarthCube is a community-governed, NSF-funded initiative to transform geoscience research by developing cyberinfrastructure that improves access, sharing, visualization, and analysis of all forms of geosciences data and related resources. EarthCube's goal is to enable geoscientists to tackle the challenges of understanding and predicting a complex and evolving solid Earth, hydrosphere, atmosphere, and space environment. EarthCube's infrastructure requires capabilities around data, software, and systems. To demonstrate its value to the science community and its Return on Investment for the NSF, EarthCube must determine both the value of new capabilities for the community and the progress of the overall effort. EarthCube is therefore developing an assessment framework for research proposals, projects funded by EarthCube, and the overall EarthCube program. As a first step, a software assessment framework has been developed that addresses the EarthCube Strategic Vision by promoting best practices in software development, complete and useful documentation, interoperability, standards adherence, open science, and education and training opportunities for research developers.
Towards Automated Nanomanipulation under Scanning Electron Microscopy
NASA Astrophysics Data System (ADS)
Ye, Xutao
Robotic nanomaterial manipulation inside scanning electron microscopes (SEMs) is useful for prototyping functional devices and characterizing the properties of one-dimensional nanomaterials. Conventionally, manipulation of nanowires has been performed via teleoperation, which is time-consuming and highly skill-dependent. Manual manipulation also suffers from low success rates and poor reproducibility. This research focuses on a robotic system capable of automated pick-and-place of single nanowires. Through SEM visual detection and vision-based motion control, the system transferred individual silicon nanowires from their growth substrate to a microelectromechanical systems (MEMS) device that characterized the nanowires' electromechanical properties. The performance of the nanorobotic pick-up and placement procedures was quantified by experiments. The system demonstrated automated nanowire pick-up and placement with high reliability. A software system for a load-lock-compatible nanomanipulation system was also designed and developed in this research.
A phase-based stereo vision system-on-a-chip.
Díaz, Javier; Ros, Eduardo; Sabatini, Silvio P; Solari, Fabio; Mota, Sonia
2007-02-01
A simple and fast technique for depth estimation based on phase measurement has been adopted for the implementation of a real-time stereo system with sub-pixel resolution on an FPGA device. The technique avoids the attendant problem of phase warping. The designed system takes full advantage of the inherent processing parallelism and segmentation capabilities of FPGA devices to achieve a computation speed of 65 megapixels/s, which can be arranged with a customized frame-grabber module to process 211 frames/s at a size of 640x480 pixels. The processing speed achieved is higher than conventional camera frame rates, thus allowing the system to extract multiple estimations and be used as a platform to evaluate integration schemes of a population of neurons without increasing hardware resource demands.
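The core of phase-based disparity estimation can be illustrated in a few lines. The sketch below, in Python rather than the paper's FPGA fabric, assumes a single Gabor filter whose peak frequency matches the dominant image frequency; the filter parameters and function names are illustrative, not the authors' design.

    import numpy as np

    def gabor_1d(width=33, k=0.5, sigma=4.0):
        # Complex 1-D Gabor kernel with peak spatial frequency k (rad/pixel).
        x = np.arange(width) - width // 2
        return np.exp(-x**2 / (2.0 * sigma**2)) * np.exp(1j * k * x)

    def phase_disparity(left_row, right_row, k=0.5):
        # Disparity from the local phase difference of the filter responses:
        # d ~ (phi_right - phi_left) / k, valid while |d| * k < pi.
        g = gabor_1d(k=k)
        rl = np.convolve(left_row, g, mode='same')
        rr = np.convolve(right_row, g, mode='same')
        dphi = np.angle(rr * np.conj(rl))   # wrapped phase difference
        return dphi / k

    # Toy check: a scanline shifted by 3 pixels yields disparity close to 3.
    i = np.arange(512)
    left = np.sin(0.5 * i + 0.7)
    right = np.roll(left, -3)               # right_row[n] = left_row[n + 3]
    d = phase_disparity(left, right)
    print(np.median(d[50:-50]))             # ~3, away from the borders

Because the phase difference wraps at pi, such filters only resolve disparities smaller than half the filter wavelength, which is why practical systems use coarse-to-fine scales.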
VLSI chips for vision-based vehicle guidance
NASA Astrophysics Data System (ADS)
Masaki, Ichiro
1994-02-01
Sensor-based vehicle guidance systems are gathering rapidly increasing interest because of their potential for increasing safety, convenience, environmental friendliness, and traffic efficiency. Examples of applications include intelligent cruise control, lane following, collision warning, and collision avoidance. This paper reviews the research trends in vision-based vehicle guidance with an emphasis on VLSI chip implementations of the vision systems. As an example of VLSI chips for vision-based vehicle guidance, a stereo vision system is described in detail.
Vision Systems with the Human in the Loop
NASA Astrophysics Data System (ADS)
Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard
2005-12-01
The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, can adapt to their environment, and thus exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. Experiences from psychologically evaluated human-machine interactions are reported, and the promising potential of psychologically based usability experiments is stressed.
NASA Astrophysics Data System (ADS)
Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad
2009-02-01
In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.
Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck
2008-04-10
One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps the active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method, in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
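As an illustration of the dynamic-programming step, the sketch below aligns two scanlines with the classic match/occlusion recurrence. The cost function and occlusion penalty are simplifying assumptions, not the paper's formulation, and the anchoring of the search by laser-pattern matches is omitted.

    import numpy as np

    def dp_scanline_match(left, right, occ=0.8):
        # Align two intensity rows; back[i,j]: 0 = match, 1 = skip left, 2 = skip right.
        n, m = len(left), len(right)
        cost = np.empty((n + 1, m + 1))
        cost[0, :] = occ * np.arange(m + 1)
        cost[:, 0] = occ * np.arange(n + 1)
        back = np.zeros((n + 1, m + 1), dtype=int)
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                candidates = ((cost[i-1, j-1] + abs(left[i-1] - right[j-1]), 0),
                              (cost[i-1, j] + occ, 1),
                              (cost[i, j-1] + occ, 2))
                cost[i, j], back[i, j] = min(candidates)
        disp = np.full(n, np.nan)       # NaN marks pixels left unmatched
        i, j = n, m
        while i > 0 and j > 0:
            if back[i, j] == 0:
                disp[i-1] = (i-1) - (j-1)
                i, j = i-1, j-1
            elif back[i, j] == 1:
                i -= 1
            else:
                j -= 1
        return disp

    row = np.array([0., 0., 5., 9., 5., 0., 0., 0.])
    print(dp_scanline_match(row, np.roll(row, -2)))  # the bump matches at disparity 2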
Enabling information management systems in tactical network environments
NASA Astrophysics Data System (ADS)
Carvalho, Marco; Uszok, Andrzej; Suri, Niranjan; Bradshaw, Jeffrey M.; Ceccio, Philip J.; Hanna, James P.; Sinclair, Asher
2009-05-01
Net-Centric Information Management (IM) and sharing in tactical environments promises to revolutionize forward command and control capabilities by providing ubiquitous shared situational awareness to the warfighter. This vision can be realized by leveraging tactical and Mobile Ad hoc Networks (MANET), which provide the underlying communications infrastructure, but significant technical challenges remain. Enabling information management in these highly dynamic environments will require multiple support services and protocols which are affected by, and highly dependent on, the underlying capabilities and dynamics of the tactical network infrastructure. In this paper we investigate, discuss, and evaluate the effects of realistic tactical and mobile communications network environments on mission-critical information management systems. We motivate our discussion by introducing the Advanced Information Management System (AIMS), which is targeted for deployment in tactical sensor systems. We present some operational requirements for AIMS and highlight how critical IM support services such as discovery, transport, federation, and Quality of Service (QoS) management are necessary to meet these requirements. Our goal is to provide a qualitative analysis of the impact of underlying assumptions of availability and performance of some of the critical services supporting tactical information management. We also propose and describe a number of technologies and capabilities that have been developed to address these challenges, providing alternative approaches for transport, service discovery, and federation services for tactical networks.
Markert, H; Kaufmann, U; Kara Kayikci, Z; Palm, G
2009-03-01
Language understanding is a long-standing problem in computer science. However, the human brain is capable of processing complex languages with seemingly no difficulty. This paper presents a model for language understanding using biologically plausible neural networks composed of associative memories. The model is able to deal with ambiguities at the single-word and grammatical levels. The language system is embedded into a robot in order to demonstrate correct semantic understanding of the input sentences by letting the robot perform corresponding actions. For that purpose, a simple neural action planning system has been combined with neural networks for visual object recognition and visual attention control mechanisms.
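A minimal sketch of the kind of binary associative memory such architectures are built from is shown below (a Willshaw-style clipped Hebbian matrix). The word patterns and the threshold rule are illustrative assumptions, not the authors' network.

    import numpy as np

    class AssociativeMemory:
        def __init__(self, size):
            self.W = np.zeros((size, size), dtype=bool)

        def store(self, pattern):
            p = pattern.astype(bool)
            self.W |= np.outer(p, p)        # clipped Hebbian learning

        def recall(self, cue):
            s = self.W @ cue.astype(int)    # summed evidence per unit
            k = cue.sum()                   # number of active units in the cue
            return (s >= k).astype(int)     # threshold at cue activity

    # Toy usage: store two sparse "word" patterns and recall one from a partial cue.
    mem = AssociativeMemory(32)
    word_a = np.zeros(32, int); word_a[[1, 5, 9, 13]] = 1
    word_b = np.zeros(32, int); word_b[[2, 6, 10, 14]] = 1
    mem.store(word_a); mem.store(word_b)
    cue = np.zeros(32, int); cue[[1, 5]] = 1      # half of word_a
    print(np.array_equal(mem.recall(cue), word_a))  # True: pattern completed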
Lightweight Nonmetallic Thermal Protection Materials Technology
NASA Technical Reports Server (NTRS)
Valentine, Peter G.; Lawrence, Timothy W.; Gubert, Michael K.; Milos, Frank S.; Levine, Stanley R.; Ohlhorst, Craig W.; Koenig, John R.
2005-01-01
To fulfill President George W. Bush's "Vision for Space Exploration" (2004) - successful human and robotic missions to and from other solar system bodies in order to explore their atmospheres and surfaces - the National Aeronautics and Space Administration (NASA) must reduce trip time, cost, and vehicle weight so that payload and scientific experiment capabilities can be maximized. The new project described in this paper will generate thermal protection system (TPS) products that will enable greater fidelity in mission/vehicle design trade studies, support risk reduction for material selections, assist in the optimization of vehicle weights, and provide materials and processes templates for use in the development of human-rated TPS qualification and certification plans.
SAVA 3: A testbed for integration and control of visual processes
NASA Technical Reports Server (NTRS)
Crowley, James L.; Christensen, Henrik
1994-01-01
The development of an experimental test-bed to investigate the integration and control of perception in a continuously operating vision system is described. The test-bed integrates a 12-axis robotic stereo camera head mounted on a mobile robot, dedicated computer boards for real-time image acquisition and processing, and a distributed system for image description. The architecture was designed to: (1) operate continuously; (2) integrate software contributions from geographically dispersed laboratories; (3) integrate description of the environment with 2D measurements, 3D models, and recognition of objects; (4) support diverse experiments in gaze control, visual servoing, navigation, and object surveillance; and (5) be dynamically reconfigurable.
X-37 Flight Demonstrator Project: Capabilities for Future Space Transportation System Development
NASA Technical Reports Server (NTRS)
Dumbacher, Daniel L.
2004-01-01
The X-37 Approach and Landing Test Vehicle (ALTV) is an automated (unmanned) spacecraft designed to reduce technical risk in the descent and landing phases of flight. ALTV mission requirements and Orbital Vehicle (OV) technology research and development (R&D) goals are formulated to validate and mature high-payoff ground and flight technologies such as Thermal Protection Systems (TPS). It has been more than three decades since the Space Shuttle was designed and built. Real-world hardware experience gained through the multitude of X-37 Project activities has expanded both Government and industry knowledge of the challenges involved in developing new generations of spacecraft that can fulfill the Vision for Space Exploration.
Cognitive vision system for control of dexterous prosthetic hands: Experimental evaluation
2010-01-01
Background: Dexterous prosthetic hands that were developed recently, such as SmartHand and i-LIMB, are highly sophisticated; they have individually controllable fingers and a thumb that is able to abduct/adduct. This flexibility allows implementation of many different grasping strategies, but also requires new control algorithms that can exploit the many degrees of freedom available. The current study presents and tests the operation of a new control method for dexterous prosthetic hands. Methods: The central component of the proposed method is an autonomous controller comprising a vision system with rule-based reasoning mounted on a dexterous hand (CyberHand). The controller, termed the cognitive vision system (CVS), mimics biological control and generates commands for prehension. The CVS was integrated into a hierarchical control structure: 1) the user triggers the system and controls the orientation of the hand; 2) a high-level controller automatically selects the grasp type and size; and 3) an embedded hand controller implements the selected grasp using closed-loop position/force control. The operation of the control system was tested in 13 healthy subjects who used the CyberHand, attached to the forearm, to grasp and transport 18 objects placed at two different distances. Results: The system correctly estimated grasp type and size (nine commands in total) in about 84% of the trials. In an additional 6% of the trials, the grasp type and/or size differed from the optimal ones but were still good enough for the grasp to be successful. If the control task was simplified by decreasing the number of possible commands, the classification accuracy increased (e.g., 93% for guessing the grasp type only). Conclusions: The original outcome of this research is a novel controller empowered by vision and reasoning and capable of high-level analysis (i.e., determining object properties) and autonomous decision making (i.e., selecting the grasp type and size). The automatic control eases the burden on the user, who can therefore concentrate on what he/she does rather than on how he/she should do it. The tests showed that the performance of the controller was satisfactory and that users were able to operate the system with minimal prior training. PMID:20731834
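To make the division of labor concrete, the fragment below sketches a high-level rule-based mapping from estimated object properties to a grasp command. The thresholds, property names, and three-way grasp vocabulary are illustrative assumptions, not the CVS rule base.

    def select_grasp(diameter_mm, elongation):
        # Map vision-based object estimates to a (grasp_type, grasp_size) command.
        if elongation > 2.5 and diameter_mm < 30:
            grasp = "lateral"       # long thin objects: key grip
        elif diameter_mm < 40:
            grasp = "pinch"         # small compact objects
        else:
            grasp = "palmar"        # large objects: whole-hand grasp
        size = "small" if diameter_mm < 40 else "medium" if diameter_mm < 70 else "large"
        return grasp, size

    print(select_grasp(25, 3.0))    # ('lateral', 'small')
    print(select_grasp(80, 1.1))    # ('palmar', 'large')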
Recognition of 3-D Scene with Partially Occluded Objects
NASA Astrophysics Data System (ADS)
Lu, Siwei; Wong, Andrew K. C.
1987-03-01
This paper presents a robot vision system which is capable of recognizing objects in a 3-D scene and interpreting their spatial relations even when some objects in the scene are partially occluded by others. An algorithm is developed to transform the geometric information from the range data into an attributed hypergraph representation (AHR). A hypergraph monomorphism algorithm is then used to compare the AHR of objects in the scene with a set of complete AHRs of prototypes. The capability of identifying connected components and interpreting various types of edges in the 3-D scene enables us to distinguish objects which are partially blocking each other in the scene. Using structural information stored in the primitive area graph, a heuristic hypergraph monomorphism algorithm provides an effective way of recognizing, locating, and interpreting partially occluded objects in the range image.
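The matching step can be sketched as attributed graph monomorphism by backtracking: find an injective map from prototype nodes to scene nodes that preserves labels and adjacency. The sketch below uses plain labels and pairwise edges, a simplification of attributed hypergraphs, and the names are illustrative assumptions.

    def monomorphism(scene_nodes, scene_edges, proto_nodes, proto_edges, mapping=None):
        # Backtracking search for an injective, label- and edge-preserving map.
        mapping = mapping or {}
        if len(mapping) == len(proto_nodes):
            return mapping
        p = next(n for n in proto_nodes if n not in mapping)
        for s, s_label in scene_nodes.items():
            if s in mapping.values() or s_label != proto_nodes[p]:
                continue
            mapping[p] = s
            ok = all((mapping[a], mapping[b]) in scene_edges
                     for (a, b) in proto_edges if a in mapping and b in mapping)
            if ok:
                result = monomorphism(scene_nodes, scene_edges,
                                      proto_nodes, proto_edges, mapping)
                if result:
                    return result
            del mapping[p]
        return None

    # Toy usage: find a 2-node prototype inside a 3-node scene.
    scene_n = {1: 'face', 2: 'face', 3: 'edge'}
    scene_e = {(1, 3), (3, 1), (2, 3), (3, 2)}
    proto_n = {'A': 'face', 'B': 'edge'}
    proto_e = {('A', 'B')}
    print(monomorphism(scene_n, scene_e, proto_n, proto_e))  # {'A': 1, 'B': 3}

The heuristic versions used in practice prune this exponential search with structural cues, which is what the primitive area graph provides in the paper.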
NASA Technical Reports Server (NTRS)
1972-01-01
A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.
Code of Federal Regulations, 2010 CFR
2010-07-01
... whose visual acuity, if better than 20/200, is accompanied by a limit to the field of vision in the... congenital defect) which so limits the person's functional capabilities (mobility, communication, self-care...
Receptoral and Neural Aliasing.
1993-01-30
standard psychophysical methods. Stereoscopic capability makes VisionWorks ideal for investigating and simulating strabismus and amblyopia, or developing... amblyopia. Electrophysiological and psychophysical response to spatio-temporal and novel stimuli for investigation of visual field deficits
Got Vision? Unity of Vision in Policy and Strategy: What it is, and Why we need it
2010-07-01
fail instead”—from Paul Carroll’s review of Alice Schroeder’s biography of Warren Buffett (Paul Carroll, “Why Panic Passes Him By,” The Wall Street...opponents”—this implies that the mindset, the calculations, and the capabilities of the enemy have to be taken into account, which is precisely where...27 In this regard, he bears an uncanny resemblance to Edward Lansdale.28 According to his own and others’ accounts, Lansdale applied considerable
Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight
NASA Technical Reports Server (NTRS)
Suorsa, Raymond; Sridhar, Banavar
1991-01-01
A validation facility in use at the NASA Ames Research Center is described which is aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-level helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera 6 degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms alongside truth data using powerful window-based visualization software.
Micro- and nano-NDE systems for aircraft: great things in small packages
NASA Astrophysics Data System (ADS)
Malas, James C.; Kropas-Hughes, Claudia V.; Blackshire, James L.; Moran, Thomas; Peeler, Deborah; Frazier, W. G.; Parker, Danny
2003-07-01
Recent advancements in small, microscopic NDE sensor technologies will revolutionize how aircraft maintenance is done, and will significantly improve the reliability and airworthiness of current and future aircraft systems. A variety of micro/nano systems and concepts are being developed that will enable whole new capabilities for detecting and tracking structural integrity damage. For aging aircraft systems, the impact of micro-NDE sensor technologies will be felt immediately, with dramatic reductions in maintenance labor and extended usable life of critical components being two of the primary benefits. For the fleet management of future aircraft systems, a comprehensive evaluation and tracking of vehicle health throughout its entire life cycle will be needed. Indeed, micro/nano NDE systems will be instrumental in realizing this futuristic vision. Several major challenges will need to be addressed, however, before micro- and nano-NDE systems can effectively be implemented, and this will require interdisciplinary research approaches and a systematic engineering integration of the new technologies into real systems. Future research will need to emphasize systems engineering approaches for designing materials and structures with in-situ inspection and prognostic capabilities. Recent advances in (1) embedded/add-on micro-sensors, (2) computer modeling of nondestructive evaluation responses, and (3) wireless communications are important steps toward this goal, and will ultimately provide previously unimagined opportunities for realizing whole new integrated vehicle health monitoring capabilities. The future use of micro/nano NDE technologies as vehicle health monitoring tools will have profound implications, and will provide a revolutionary way of doing NDE in the near and distant future.
Deep hierarchies in the primate visual cortex: what can we learn for computer vision?
Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz
2013-08-01
Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.
A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems
Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo
2017-01-01
Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187
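The temporal-coincidence principle at the heart of event-based stereo can be sketched procedurally (the spiking network implements it with coincidence-detecting neurons): events from the two cameras that occur at nearly the same time, on nearly the same row, likely come from the same scene point, so disparity is their pixel offset. The event fields, the 1 ms window, and the row tolerance below are illustrative assumptions.

    from collections import namedtuple

    Event = namedtuple('Event', 't x y polarity')   # t in seconds

    def coincidence_disparities(left_events, right_events, window=1e-3, max_dy=1):
        # Pair temporally coincident left/right events (both streams sorted by time)
        # and return the resulting disparity estimates.
        disparities = []
        j = 0
        for ev in left_events:
            while j < len(right_events) and right_events[j].t < ev.t - window:
                j += 1                               # discard events too old to match
            k = j
            while k < len(right_events) and right_events[k].t <= ev.t + window:
                r = right_events[k]
                if abs(r.y - ev.y) <= max_dy and r.polarity == ev.polarity:
                    disparities.append(ev.x - r.x)
                k += 1
        return disparities

    left = [Event(0.0010, 12, 7, 1), Event(0.0021, 30, 5, 1)]
    right = [Event(0.0011, 9, 7, 1), Event(0.0020, 27, 5, 1)]
    print(coincidence_disparities(left, right))      # [3, 3]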
Knowledge-based vision and simple visual machines.
Cliff, D; Noble, J
1997-01-01
The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684
Multifunctional millimeter-wave radar system for helicopter safety
NASA Astrophysics Data System (ADS)
Goshi, Darren S.; Case, Timothy J.; McKitterick, John B.; Bui, Long Q.
2012-06-01
A multi-featured sensor solution has been developed that enhances the operational safety and functionality of small airborne platforms, representing an invaluable stride toward enabling higher-risk tactical missions. This paper demonstrates results from a recently developed multi-functional sensor system that integrates a high-performance millimeter-wave radar front end, an evidence-grid-based integration processing scheme, and incorporation into a 3D Synthetic Vision System (SVS) display. The front-end architecture consists of a W-band real-beam scanning radar that generates a high-resolution real-time radar map and operates with an adaptable antenna architecture currently configured with an interferometric capability for target height estimation. The raw sensor data is further processed within an evidence-grid-based integration functionality that produces high-resolution maps of the region surrounding the platform. Lastly, the accumulated radar results are displayed in a fully rendered 3D SVS environment integrated with local database information to provide the best representation of the surrounding environment. The integrated system concept is discussed and initial results from an experimental flight test of this developmental system are presented. Specifically, the forward-looking operation demonstrates the system's ability to produce high-precision terrain mapping with obstacle detection and avoidance capability, showcasing the system's versatility in a true operational environment.
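The evidence-grid idea reduces to accumulating per-cell log-odds of occupancy across successive scans. A minimal sketch is below, with an inverse sensor model far cruder than the HALS processing chain; the update constants and class name are illustrative assumptions.

    import numpy as np

    class EvidenceGrid:
        def __init__(self, shape, l_hit=0.9, l_miss=-0.4):
            self.logodds = np.zeros(shape)          # 0 = unknown
            self.l_hit, self.l_miss = l_hit, l_miss

        def integrate(self, hits, misses):
            # hits/misses: boolean masks from one scan, already in grid coordinates.
            self.logodds[hits] += self.l_hit
            self.logodds[misses] += self.l_miss
            np.clip(self.logodds, -10, 10, out=self.logodds)  # avoid saturation

        def probability(self):
            # Convert accumulated log-odds back to occupancy probability.
            return 1.0 / (1.0 + np.exp(-self.logodds))

    # Toy usage: one scan that returns a single radar hit.
    grid = EvidenceGrid((64, 64))
    hits = np.zeros((64, 64), bool); hits[10, 20] = True
    grid.integrate(hits, ~hits)
    print(grid.probability()[10, 20])   # > 0.5 after one supporting scan

Accumulating over many scans is what suppresses single-scan radar noise while letting persistent returns (terrain, wires) grow toward certainty.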
NASA Astrophysics Data System (ADS)
Ataide, Italani; Ataide, Jade; McLeod, Roger
2006-03-01
We hope that we can soon demonstrate that an important part of a nation's scientific, technological, health, and other educational or economic indicators, such as productivity and agrarian progress, are linked to the visual capabilities of its population. We propose to engage Brazilians specifically, and other South or Central Americans generally, in deciding whether Naturoptic Vision Improvement patent innovations or services can be nurtured by the countries involved, for a franchisor who will be granting time-limited but protected and profit-free use permission for the purposes referred to above. Cost-benefit analyses are readily accomplished. Insurers can easily improve their profitability by establishing that their clients, whose vision has been Naturoptically improved, are safer drivers than individuals with static vision states caused or abetted by glasses, contacts, or surgically altered corneas.
Mediated-reality magnification for macular degeneration rehabilitation
NASA Astrophysics Data System (ADS)
Martin-Gonzalez, Anabel; Kotliar, Konstantin; Rios-Martinez, Jorge; Lanzl, Ines; Navab, Nassir
2014-10-01
Age-related macular degeneration (AMD) is a gradually progressive eye condition and one of the leading causes of blindness and low vision in the Western world. Prevailing optical visual aids compensate for part of the lost visual function but omit helpful complementary information. This paper proposes an efficient magnification technique, which can be implemented on a head-mounted display, for improving the vision of patients with AMD while preserving global information about the scene. Performance of the magnification approach is evaluated by simulating central vision loss in normally sighted subjects. Visual perception was measured as a function of text reading speed and map route following speed. Statistical analysis of the experimental results suggests that our magnification method improves reading speed 1.2 times and spatial orientation in finding routes on a map 1.5 times compared to a conventional magnification approach, and is capable of enhancing the peripheral vision of AMD subjects along with their quality of life.
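The key property, magnification without losing the global scene, can be sketched as a picture-in-place zoom: the central patch is enlarged and composited back over the frame while the periphery stays intact. The zoom factor, patch size, and nearest-neighbour sampling below are illustrative assumptions, not the paper's method.

    import numpy as np

    def context_preserving_zoom(img, zoom=2.0, radius=60):
        # Enlarge the central patch in place; the periphery is left untouched.
        h, w = img.shape[:2]
        cy, cx = h // 2, w // 2
        out = img.copy()
        yy, xx = np.mgrid[cy - radius:cy + radius, cx - radius:cx + radius]
        src_y = (cy + (yy - cy) / zoom).astype(int)   # nearest-neighbour sampling
        src_x = (cx + (xx - cx) / zoom).astype(int)
        out[yy, xx] = img[src_y, src_x]
        return out

    # Toy usage on a synthetic grayscale image.
    img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
    zoomed = context_preserving_zoom(img)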
3D morphology reconstruction using linear array CCD binocular stereo vision imaging system
NASA Astrophysics Data System (ADS)
Pan, Yu; Wang, Jinjiang
2018-01-01
A conventional binocular vision imaging system has a small field of view and cannot reconstruct the 3-D shape of a moving object. We developed a linear array CCD binocular vision imaging system, which uses different calibration and reconstruction methods. Building on the binocular vision imaging system, the linear array CCD binocular vision imaging system has a wider field of view and can reconstruct the 3-D morphology of objects in continuous motion, with accurate results. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching, and reconstruction stages. The system consists of two linear array cameras placed in a special arrangement and a horizontally moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, and the results are matched and 3-D reconstructed. The linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, which is of great significance for measuring the 3-D morphology of moving objects.
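Once corresponding pixels are matched between the two cameras, depth follows from standard triangulation; the sketch below shows the textbook relation Z = f * B / d for a rectified pair, with illustrative numbers rather than the paper's calibration.

    def triangulate_depth(f_pixels, baseline_m, disparity_pixels):
        # Depth of a point from focal length f (pixels), baseline B (m), disparity d (pixels).
        if disparity_pixels <= 0:
            raise ValueError("disparity must be positive for a point in front of the rig")
        return f_pixels * baseline_m / disparity_pixels

    # A point imaged 8 pixels apart by cameras 0.2 m apart with f = 1200 px:
    print(triangulate_depth(1200, 0.2, 8.0))  # 30.0 m

The same relation explains the design trade-off: a wider baseline or longer focal length raises depth resolution at the cost of a harder matching problem.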
NASA's Decadal Planning Team Mars Mission Analysis Summary
NASA Astrophysics Data System (ADS)
Drake, Bret G.
2007-02-01
In June 1999 the NASA Administrator chartered an internal NASA task force, termed the Decadal Planning Team, to create a new integrated vision and strategy for space exploration. The efforts of the Decadal Planning Team evolved into the Agency-wide team known as the NASA Exploration Team (NEXT). This team was also instructed to identify technology roadmaps to enable the science-driven exploration vision, and it established a cross-Enterprise, cross-Center systems engineering team with emphasis on revolutionary rather than evolutionary approaches. The strategy of the DPT and NEXT teams was to "Go Anywhere, Anytime" by conquering key exploration hurdles of space transportation, crew health and safety, human/robotic partnerships, affordable abundant power, and advanced space systems performance. Early emphasis was placed on revolutionary exploration concepts such as rail gun and electromagnetic launchers, propellant depots, retrograde trajectories, nano structures, and gas core nuclear rockets, to name a few. Many of these revolutionary concepts turned out to be either not feasible for human exploration missions or well beyond expected technology readiness for near-term implementation. During the DPT and NEXT study cycles, several architectures were analyzed, including missions to the Earth-Sun Libration Point (L2), the Earth-Moon Gateway and L1, the lunar surface, Mars (both short and long stays), one-year round-trip Mars, and near-Earth asteroids. Common emphases of these studies included utilization of the Earth-Moon Libration Point (L1) as a staging point for exploration activities, current (Shuttle) and near-term launch capabilities (EELV), advanced propulsion, and robust space power. Although much emphasis was placed on utilization of existing launch capabilities, the team concluded that missions in near-Earth space are only marginally feasible and that human missions to Mars are not feasible without a heavy-lift launch capability. In addition, the team concluded that missions in Earth's neighborhood, such as to the Moon, can serve as stepping-stones toward further deep-space missions in terms of proving systems, technologies, and operational concepts. The material contained in this presentation was compiled to capture the work performed by the Mars Sub-Team of the DPT/NEXT efforts in the late 1999-2001 timeframe.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J., III
2005-01-01
Research was conducted onboard a Gulfstream G-V aircraft to evaluate integrated Synthetic Vision System concepts during flight tests over a 6-week period at the Wallops Flight Facility and Reno/Tahoe International Airport. The NASA Synthetic Vision System incorporates database integrity monitoring, runway incursion prevention alerting, surface maps, enhanced vision sensors, and advanced pathway guidance and synthetic terrain presentation. The paper details the goals and objectives of the flight test with a focus on the situation awareness benefits of integrating synthetic vision system enabling technologies for commercial aircraft.
A simple, inexpensive, and effective implementation of a vision-guided autonomous robot
NASA Astrophysics Data System (ADS)
Tippetts, Beau; Lillywhite, Kirt; Fowers, Spencer; Dennis, Aaron; Lee, Dah-Jye; Archibald, James
2006-10-01
This paper discusses a simple, inexpensive, and effective implementation of a vision-guided autonomous robot. The implementation is a second-year entry by Brigham Young University students in the Intelligent Ground Vehicle Competition. The objective of the robot was to navigate a course constructed of white boundary lines and orange obstacles for the autonomous competition. A used electric wheelchair, purchased from a local thrift store for $28, served as the robot base. The base was modified to include Kegresse tracks using a friction drum system. This modification allowed the robot to perform better on a variety of terrains, resolving issues with the previous year's design. In order to control the wheelchair while retaining its robust motor controls, the wheelchair joystick was simply removed and replaced with a printed circuit board that emulated joystick operation and was capable of receiving commands through a serial port connection. Three different algorithms were implemented and compared: a purely reactive approach, a potential fields approach, and a machine learning approach. Each of the algorithms used color segmentation methods to interpret data from a digital camera in order to identify the features of the course. This paper will be useful to those interested in implementing an inexpensive vision-based autonomous robot.
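A minimal sketch of the colour-segmentation front end such a robot uses is shown below; the HSV thresholds for white lines and orange obstacles are illustrative assumptions, not the team's tuned values.

    import numpy as np

    def segment_course(hsv):
        # hsv: H in [0, 180), S and V in [0, 255], as in the OpenCV convention.
        h = hsv[..., 0].astype(int)
        s = hsv[..., 1].astype(int)
        v = hsv[..., 2].astype(int)
        white_lines = (s < 40) & (v > 200)                  # low saturation, bright
        orange_obstacles = (h > 5) & (h < 25) & (s > 120)   # orange hue band
        return white_lines, orange_obstacles

    # Toy usage on a synthetic HSV frame.
    frame = np.random.randint(0, 180, (480, 640, 3)).astype(np.uint8)
    lines, obstacles = segment_course(frame)
    print(lines.sum(), obstacles.sum())

All three navigation strategies the paper compares consume these binary masks; they differ only in how the masks are turned into steering commands.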
NASA Technical Reports Server (NTRS)
Sutliff, Thomas J.; Kohl, Fred J.
2004-01-01
A new Vision for Space Exploration was announced earlier this year by U.S. President George W. Bush. NASA has evaluated ongoing programs for strategic alignment with this vision. The evaluation proceeded at a rapid pace and is resulting in changes to the scope and focus of experimental research conducted in support of the new vision. The existing network of researchers in the physical sciences - a highly capable, independent, and loosely knitted community - has typically shared conclusions derived from its work within appropriate discipline-specific peer-reviewed journals and publications. The initial result of introducing the Vision for Space Exploration has been to shift research focus from broad coverage of numerous, widely varying topics to a research program focused on a nearly singular set of supporting research objectives to enable advances in space exploration. Two of these traditional physical science research disciplines, Combustion Science and Fluid Physics, are implementing a course adjustment from a portfolio dominated by "Fundamental Science Research" to one focused nearly exclusively on supporting the Exploration Vision. The underlying scientific and engineering competencies and infrastructure of the Microgravity Combustion Science and Fluid Physics disciplines provide essential research capabilities to support the contemporary thrusts of human life support, radiation countermeasures, human health, low-gravity research for propulsion and materials and, ultimately, research conducted on the Moon and Mars. A perspective on how these two research disciplines responded to the course change is presented. The relevance to the new NASA direction is described, demonstrating through two examples how the prior investment in fundamental research is being brought to bear on the issues confronting the successful implementation of the exploration goals.
20 CFR 220.120 - The claimant's residual functional capacity.
Code of Federal Regulations, 2011 CFR
2011-04-01
... medically determinable impairment(s), such as skin impairment(s), epilepsy, impairment(s) of vision, hearing... with a low back disorder may be fully capable of the physical demands consistent with those of...
20 CFR 220.120 - The claimant's residual functional capacity.
Code of Federal Regulations, 2012 CFR
2012-04-01
... medically determinable impairment(s), such as skin impairment(s), epilepsy, impairment(s) of vision, hearing... with a low back disorder may be fully capable of the physical demands consistent with those of...
20 CFR 220.120 - The claimant's residual functional capacity.
Code of Federal Regulations, 2013 CFR
2013-04-01
... medically determinable impairment(s), such as skin impairment(s), epilepsy, impairment(s) of vision, hearing... with a low back disorder may be fully capable of the physical demands consistent with those of...
20 CFR 220.120 - The claimant's residual functional capacity.
Code of Federal Regulations, 2014 CFR
2014-04-01
... medically determinable impairment(s), such as skin impairment(s), epilepsy, impairment(s) of vision, hearing... with a low back disorder may be fully capable of the physical demands consistent with those of...
NASA Astrophysics Data System (ADS)
Cross, Jack; Schneider, John; Cariani, Pete
2013-05-01
Sierra Nevada Corporation (SNC) has developed rotary- and fixed-wing millimeter-wave radar enhanced vision systems. The Helicopter Autonomous Landing System (HALS) is a rotary-wing enhanced vision system that enables multi-ship landing, takeoff, and enroute flight in Degraded Visual Environments (DVE). HALS has been successfully flight tested in a variety of scenarios, from brown-out DVE landings, to enroute flight over mountainous terrain, to wire/cable detection during low-level flight. The Radar Enhanced Vision System (REVS) is a fixed-wing Enhanced Flight Vision System (EFVS) undergoing prototype development testing. Both systems are based on a fast-scanning, three-dimensional 94 GHz radar that produces real-time terrain and obstacle imagery. The radar imagery is fused with synthetic imagery of the surrounding terrain to form a long-range, wide field-of-view display. A symbology overlay is added to provide aircraft state information and, for HALS, approach and landing command guidance cuing. The combination of see-through imagery and symbology provides the key information a pilot needs to perform safe flight operations in DVE conditions. This paper discusses the HALS and REVS systems and technology, presents imagery, and summarizes recent flight test results.
Evaluation of 5 different labeled polymer immunohistochemical detection systems.
Skaland, Ivar; Nordhus, Marit; Gudlaugsson, Einar; Klos, Jan; Kjellevold, Kjell H; Janssen, Emiel A M; Baak, Jan P A
2010-01-01
Immunohistochemical staining is important for diagnosis and therapeutic decision making, but the results may vary when different detection systems are used. To analyze this, 5 different labeled polymer immunohistochemical detection systems, REAL EnVision, EnVision Flex, EnVision Flex+ (Dako, Glostrup, Denmark), NovoLink (Novocastra Laboratories Ltd, Newcastle Upon Tyne, UK) and UltraVision ONE (Thermo Fisher Scientific, Fremont, CA), were tested using 12 different, widely used mouse and rabbit primary antibodies detecting nuclear, cytoplasmic, and membrane antigens. Serial sections of multitissue blocks containing 4% formaldehyde-fixed, paraffin-embedded material were selected for their weak, moderate, and strong staining for each antibody. Specificity and sensitivity were evaluated by subjective scoring and digital image analysis. At optimal primary antibody dilution, digital image analysis showed that EnVision Flex+ was the most sensitive system (P < 0.005), with means of 8.3, 13.4, 20.2, and 41.8 gray-scale values stronger staining than REAL EnVision, EnVision Flex, NovoLink, and UltraVision ONE, respectively. NovoLink was the second most sensitive system for mouse antibodies but showed low sensitivity for rabbit antibodies. Due to low sensitivity, 2 cases with UltraVision ONE and 1 case with NovoLink stained falsely negative. None of the detection systems showed any distinct false positivity, but UltraVision ONE and NovoLink consistently showed weak background staining both in negative controls and at optimal primary antibody dilution. We conclude that there are significant differences in sensitivity, specificity, costs, and total assay time among the immunohistochemical detection systems currently in use.
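The digital image analysis comparison reduces to mean gray value over the stained region, with darker (lower) values indicating stronger staining, so two detection systems are compared by the difference of those means. The sketch below uses synthetic arrays; the region mask and values are illustrative assumptions.

    import numpy as np

    def mean_gray(image, mask):
        # Mean 8-bit gray value over the region of interest.
        return float(image[mask].mean())

    rng = np.random.default_rng(0)
    roi = np.ones((100, 100), dtype=bool)        # stained-region mask
    system_a = rng.normal(120, 10, (100, 100))   # darker = stronger staining
    system_b = rng.normal(128, 10, (100, 100))   # a less sensitive system
    print(mean_gray(system_b, roi) - mean_gray(system_a, roi))  # ~8 gray values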
A Vision for Ice Giant Exploration
NASA Technical Reports Server (NTRS)
Hofstadter, M.; Simon, A.; Atreya, S.; Banfield, D.; Fortney, J.; Hayes, A.; Hedman, M.; Hospodarsky, G.; Mandt, K.; Masters, A.;
2017-01-01
From Voyager to a Vision for 2050: NASA and ESA have just completed a study of candidate missions to Uranus and Neptune, the so-called ice giant planets. It is a Pre-Decadal Survey Study, meant to inform the next Planetary Science Decadal Survey about opportunities for missions launching in the 2020's and early 2030's. There have been no space flight missions to the ice giants since the Voyager 2 flybys of Uranus in 1986 and Neptune in 1989. This paper presents some conclusions of that study (hereafter referred to as The Study), and how the results feed into a vision for where planetary science can be in 2050. Reaching that vision will require investments in technology and ground-based science in the 2020's, flight during the 2030's along with continued technological development of both ground- and space-based capabilities, and data analysis and additional flights in the 2040's. We first discuss why exploring the ice giants is important. We then summarize the science objectives identified by The Study, and our vision of the science goals for 2050. We then review some of the technologies needed to make this vision a reality.
Refractory Research Group - U.S. DOE, Albany Research Center [Institution Profile
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, James P.
2004-09-01
The refractory research group at the Albany Research Center (ARC) has a long history of conducting materials research within the U.S. Bureau of Mines and, more recently, within the U.S. Dept. of Energy. When under the U.S. Bureau of Mines, research was driven by national needs to develop substitute materials and to conserve raw materials. This mission was accomplished by improving refractory material properties and/or by recycling refractories using critical and strategic materials. Currently, as a U.S. Dept. of Energy Fossil Energy field site, research is driven primarily by the need to assist DOE in meeting its vision to develop economically and environmentally viable technologies for the production of electricity from fossil fuels. Research at ARC impacts this vision by: • Providing information on the performance characteristics of materials being specified for the current generation of power systems; • Developing cost-effective, high performance materials for inclusion in the next generation of fossil power systems; and • Solving environmental emission and waste problems related to fossil energy systems. A brief history of past refractory research within the U.S. Bureau of Mines, the current refractory research at ARC, and the equipment and capabilities used to conduct refractory research at ARC will be discussed.
Planet Formation Imager (PFI): science vision and key requirements
NASA Astrophysics Data System (ADS)
Kraus, Stefan; Monnier, John D.; Ireland, Michael J.; Duchêne, Gaspard; Espaillat, Catherine; Hönig, Sebastian; Juhasz, Attila; Mordasini, Chris; Olofsson, Johan; Paladini, Claudia; Stassun, Keivan; Turner, Neal; Vasisht, Gautam; Harries, Tim J.; Bate, Matthew R.; Gonzalez, Jean-François; Matter, Alexis; Zhu, Zhaohuan; Panic, Olja; Regaly, Zsolt; Morbidelli, Alessandro; Meru, Farzana; Wolf, Sebastian; Ilee, John; Berger, Jean-Philippe; Zhao, Ming; Kral, Quentin; Morlok, Andreas; Bonsor, Amy; Ciardi, David; Kane, Stephen R.; Kratter, Kaitlin; Laughlin, Greg; Pepper, Joshua; Raymond, Sean; Labadie, Lucas; Nelson, Richard P.; Weigelt, Gerd; ten Brummelaar, Theo; Pierens, Arnaud; Oudmaijer, Rene; Kley, Wilhelm; Pope, Benjamin; Jensen, Eric L. N.; Bayo, Amelia; Smith, Michael; Boyajian, Tabetha; Quiroga-Nuñez, Luis Henry; Millan-Gabet, Rafael; Chiavassa, Andrea; Gallenne, Alexandre; Reynolds, Mark; de Wit, Willem-Jan; Wittkowski, Markus; Millour, Florentin; Gandhi, Poshak; Ramos Almeida, Cristina; Alonso Herrero, Almudena; Packham, Chris; Kishimoto, Makoto; Tristram, Konrad R. W.; Pott, Jörg-Uwe; Surdej, Jean; Buscher, David; Haniff, Chris; Lacour, Sylvestre; Petrov, Romain; Ridgway, Steve; Tuthill, Peter; van Belle, Gerard; Armitage, Phil; Baruteau, Clement; Benisty, Myriam; Bitsch, Bertram; Paardekooper, Sijme-Jan; Pinte, Christophe; Masset, Frederic; Rosotti, Giovanni
2016-08-01
The Planet Formation Imager (PFI) project aims to provide a strong scientific vision for ground-based optical astronomy beyond the upcoming generation of Extremely Large Telescopes. We make the case that a breakthrough in angular resolution imaging capabilities is required in order to unravel the processes involved in planet formation. PFI will be optimised to provide a complete census of the protoplanet population at all stellocentric radii and over the age range from 0.1 to 100 Myr. Within this age period, planetary systems undergo dramatic changes and the final architecture of planetary systems is determined. Our goal is to study the planetary birth on the natural spatial scale where the material is assembled, which is the "Hill Sphere" of the forming planet, and to characterise the protoplanetary cores by measuring their masses and physical properties. Our science working group has investigated the observational characteristics of these young protoplanets as well as the migration mechanisms that might alter the system architecture. We simulated the imprints that the planets leave in the disk and study how PFI could revolutionise areas ranging from exoplanet to extragalactic science. In this contribution we outline the key science drivers of PFI and discuss the requirements that will guide the technology choices, the site selection, and potential science/technology tradeoffs.
NASA Technical Reports Server (NTRS)
Park, Michael A.; Krakos, Joshua A.; Michal, Todd; Loseille, Adrien; Alonso, Juan J.
2016-01-01
Unstructured grid adaptation is a powerful tool to control discretization error for Computational Fluid Dynamics (CFD). It has enabled key increases in the accuracy, automation, and capacity of some fluid simulation applications. Slotnick et al. provide a number of case studies in the CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences to illustrate the current state of CFD capability and capacity. The authors forecast the potential impact of the High Performance Computing (HPC) environments expected to emerge by the year 2030 and identify that mesh generation and adaptivity continue to be significant bottlenecks in the CFD workflow. These bottlenecks may persist because very little government investment has been targeted in these areas. To motivate investment, the impacts of improved grid adaptation technologies are identified. The CFD Vision 2030 Study roadmap and anticipated capabilities in complementary disciplines are quoted to provide context for the progress made in grid adaptation in the past fifteen years, its current status, and a forecast for the next fifteen years with recommended investments. These investments are specific to mesh adaptation and impact other aspects of the CFD process. Finally, a strategy is identified to diffuse grid adaptation technology into production CFD workflows.
Putting Automated Visual Inspection Systems To Work On The Factory Floor: What's Missing?
NASA Astrophysics Data System (ADS)
Waltz, Frederick M.; Snyder, Michael A.; Batchelor, Bruce G.
1990-02-01
Machine vision systems and other automated visual inspection (AVI) systems have been proving their usefulness in factories for more than a decade. In spite of this, the number of installed systems is far below the number that could profitably be employed. In the opinion of the authors, the primary reason for this is the high cost of customizing vision systems to meet application requirements. A three-part approach to this problem has proven to be useful: 1. a multi-phase paradigm for customer interaction, system specification, system development, and system installation; 2. a powerful and easy-to-use system development environment, including (a) a flexible laboratory lighting setup, plus software-based tools to assist in the design of image acquisition systems, (b) an image processing environment with a very large repertoire of image processing and feature extraction operations and an easy-to-use command interpreter having macro capabilities, and (c) an image analysis environment with high-level constructs, a flexible and powerful syntax, and a "seamless" interface to the image processing level; and 3. a moderately-priced high-speed "target" system fully compatible with the development environment, so that algorithms developed thereon can be transferred directly to the factory environment without further development costs or reprogramming. Items 1 and 2 are covered in other papers [1,2,3,4,5] and are touched on here only briefly. Item 3 is the main subject of this paper. Our major motivation in presenting this paper is to offer suggestions to vendors developing commercial boards and systems, in hopes that the special needs of industrial inspection can be met.
A new vision for fusion energy research: Fusion rocket engines for planetary defense
Wurden, G. A.; Weber, T. E.; Turchi, P. J.; ...
2015-11-16
Here, we argue that it is essential for the fusion energy program to identify an imagination-capturing critical mission by developing a unique product which could command the marketplace. We lay out the logic that this product is a fusion rocket engine, to enable a rapid response capable of deflecting an incoming comet, to prevent its impact on the planet Earth, in defense of our population, infrastructure, and civilization. As a side benefit, deep space solar system exploration, with greater speed and orders-of-magnitude greater payload mass, would also be possible.
Millimeter wave imaging: a historical review
NASA Astrophysics Data System (ADS)
Appleby, Roger; Robertson, Duncan A.; Wikner, David
2017-05-01
The SPIE Passive and Active Millimeter Wave Imaging conference has provided an annual focus and forum for practitioners in the field of millimeter wave imaging for the past two decades. To celebrate the conference's twentieth anniversary we present a historical review of the evolution of millimeter wave imaging over the past twenty years. Advances in device technology play a fundamental role in imaging capability whilst system architectures have also evolved. Imaging phenomenology continues to be a crucial topic underpinning the deployment of millimeter wave imaging in diverse applications such as security, remote sensing, non-destructive testing and synthetic vision.
An operator interface design for a telerobotic inspection system
NASA Technical Reports Server (NTRS)
Kim, Won S.; Tso, Kam S.; Hayati, Samad
1993-01-01
The operator interface has recently emerged as an important element for efficient and safe interactions between human operators and telerobots. Advances in graphical user interface and graphics technologies enable us to produce very efficient operator interface designs. This paper describes an efficient graphical operator interface design newly developed for remote surface inspection at NASA-JPL. The interface is designed so that remote surface inspection can be performed by a single operator with an integrated robot control and image inspection capability, and it supports three inspection strategies: teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.
Top-level modeling of an ALS system utilizing object-oriented techniques
NASA Astrophysics Data System (ADS)
Rodriguez, L. F.; Kang, S.; Ting, K. C.
The possible configuration of an Advanced Life Support (ALS) System capable of supporting human life for long-term space missions continues to evolve as researchers investigate potential technologies and configurations. To facilitate the decision process, the development of acceptable, flexible, and dynamic mathematical computer modeling tools capable of system-level analysis is desirable. Object-oriented techniques have been adopted to develop a dynamic top-level model of an ALS system. This approach has several advantages; among these, object-oriented abstractions of systems are inherently modular in architecture. Thus, models can initially be somewhat simplistic, while allowing for adjustments and improvements. In addition, by coding the model in Java, the model can be implemented via the World Wide Web, greatly encouraging the utilization of the model. Systems analysis is further enabled with the utilization of a readily available backend database containing information supporting the model. The subsystem models of the ALS system model include Crew, Biomass Production, Waste Processing and Resource Recovery, Food Processing and Nutrition, and the Interconnecting Space. Each subsystem model and an overall model have been developed. Presented here is the procedure utilized to develop the modeling tool, the vision of the modeling tool, and the current focus for each of the subsystem models.
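To make the modular, object-oriented structure concrete, here is a minimal sketch (in Python rather than the paper's Java) of how subsystem abstractions sharing a common resource pool might compose into a top-level model; all class names, methods, and coefficients are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of an object-oriented top-level ALS model.
# Subsystem names and coefficients are illustrative, not from the paper.

class Subsystem:
    """Base class: each subsystem advances its own state per time step."""
    def step(self, dt, resources):
        raise NotImplementedError

class Crew(Subsystem):
    def __init__(self, size):
        self.size = size
    def step(self, dt, resources):
        # Crew consumes food and oxygen, produces CO2 (toy coefficients).
        resources["food"] -= 0.62 * self.size * dt
        resources["o2"]   -= 0.84 * self.size * dt
        resources["co2"]  += 1.00 * self.size * dt

class BiomassProduction(Subsystem):
    def __init__(self, area):
        self.area = area
    def step(self, dt, resources):
        # Plants fix CO2 into food and regenerate O2 (toy coefficients).
        fixed = min(resources["co2"], 0.05 * self.area * dt)
        resources["co2"]  -= fixed
        resources["o2"]   += 0.9 * fixed
        resources["food"] += 0.4 * fixed

class ALSSystemModel:
    """Top-level model: composes subsystems over a shared resource pool."""
    def __init__(self, subsystems, resources):
        self.subsystems = subsystems
        self.resources = resources
    def run(self, days, dt=1.0):
        t = 0.0
        while t < days:
            for s in self.subsystems:
                s.step(dt, self.resources)
            t += dt
        return self.resources

model = ALSSystemModel([Crew(4), BiomassProduction(area=20.0)],
                       {"food": 500.0, "o2": 300.0, "co2": 10.0})
print(model.run(days=30))
```

Because each subsystem only touches the shared resource dictionary through its own step method, a simplistic subsystem can later be swapped for a higher-fidelity one without changing the top-level model, which is the modularity advantage the abstract describes.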
Adaptive and Context-Aware Reconciliation of Reactive and Pro-active Behavior in Evolving Systems
NASA Astrophysics Data System (ADS)
Trajcevski, Goce; Scheuermann, Peter
One distinct characteristic of context-aware systems is their ability to react and adapt to the evolution of the environment, which is often a result of changes in the values of various (possibly correlated) attributes. Based on these changes, reactive systems typically take corrective actions, e.g., adjusting parameters in order to maintain the desired specifications of the system's state. Pro-active systems, on the other hand, may change the mode of interaction with the environment as well as the desired goals of the system. In this paper we describe our (ECA)² paradigm for reactive behavior with proactive impact, and we present our ongoing work and vision for a system that is capable of context-aware adaptation while ensuring the maintenance of a set of desired behavioral policies. Our main focus is on developing a formalism that provides tools for expressing normal, as well as defeasible and/or exceptional, specifications. At the same time, however, we insist on a sound semantics and the capability of answering hypothetical "what-if" queries. Towards this end, we introduce the high-level language L_{EAR} that can be used to describe the dynamics of the problem domain, specify triggers under the (ECA)² paradigm, and reason about the consequences of the possible evolutions.
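As background for the trigger layer, here is a minimal Python sketch of a plain event-condition-action (ECA) engine; the paper's (ECA)² paradigm and the L_{EAR} language layer defeasible specifications and proactive impact on top of this basic shape, and everything named in the sketch is illustrative.

```python
# Minimal sketch of an event-condition-action (ECA) trigger engine.
# Rule structure and names are illustrative, not the paper's formalism.
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class Rule:
    event: str                          # event type this rule reacts to
    condition: Callable[[Any], bool]    # guard over the event payload
    action: Callable[[Any], None]       # corrective / adaptive action

@dataclass
class ECAEngine:
    rules: List[Rule] = field(default_factory=list)

    def register(self, rule: Rule) -> None:
        self.rules.append(rule)

    def dispatch(self, event_type: str, payload: Any) -> None:
        # Fire every rule whose event matches and whose condition holds.
        for r in self.rules:
            if r.event == event_type and r.condition(payload):
                r.action(payload)

engine = ECAEngine()
engine.register(Rule("temp_update",
                     lambda t: t > 80.0,
                     lambda t: print(f"cooling on (T={t})")))
engine.dispatch("temp_update", 85.0)
```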
Conceptual Drivers for an Exploration Medical System
NASA Technical Reports Server (NTRS)
Antonsen, Erik; Hanson, Andrea; Shah, Ronak; Reed, Rebekah; Canga, Michael
2016-01-01
Interplanetary spaceflight, such as NASA's proposed three-year mission to Mars, provides unique and novel challenges when compared with human spaceflight to date. Extended distance and multi-year missions introduce new elements of operational complexity and additional risk. These elements include: inability to resupply medications and consumables, inability to evacuate injured or ill crew, uncharted psychosocial conditions, and communication delays that create a requirement for some level of autonomous medical capability. Because of these unique challenges, the approaches used in prior programs have limited application to a Mars mission. On a Mars mission, resource limitations will significantly constrain available medical capabilities and require a paradigm shift in the approach to medical system design and risk mitigation for crew health. To respond to this need for a new paradigm, the Exploration Medical Capability (ExMC) Element is assessing each Mars mission phase (transit, surface stay, rendezvous, extravehicular activity, and return) to identify and prioritize medical needs for the journey beyond low Earth orbit (LEO). ExMC is addressing both planned medical operations and unplanned contingency medical operations that meld clinical needs and research needs into a single system. This assessment is being used to derive a gap analysis and studies to support meaningful medical capabilities trades. These trades, in turn, allow the exploration medical system design to proceed from both a mission-centric and ethics-based approach, and to manage the risks associated with the medical limitations inherent in an exploration-class mission. This paper outlines the conceptual drivers used to derive medical system and vehicle needs from an integrated vision of how medical care will be provided within this paradigm. Keywords: exploration, medicine, spaceflight, Mars, research, NASA
Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments.
Ramon Soria, Pablo; Arrue, Begoña C; Ollero, Anibal
2017-01-07
The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs in an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, then it is used online for detection of the targeted object and estimation of its position. This feature-based model was proved to be robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was derived using a rotary-wing UAV and a small manipulator for final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors.
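The offline/online split described above maps naturally onto standard feature tooling; the following is a rough Python sketch using OpenCV ORB features with RANSAC outlier rejection. The reference image filename, thresholds, and matching scheme are assumptions for illustration; the paper's actual feature model and stereo depth step may differ.

```python
# Sketch of the offline/online split: learn a feature model from a
# reference image, then detect it in live frames. Parameters illustrative.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Offline stage: build the feature-based model (placeholder image file).
model_img = cv2.imread("target_object.png", cv2.IMREAD_GRAYSCALE)
model_kp, model_desc = orb.detectAndCompute(model_img, None)

def detect(frame_gray):
    """Online stage: match live features against the stored model."""
    kp, desc = orb.detectAndCompute(frame_gray, None)
    if desc is None:
        return None
    matches = bf.match(model_desc, desc)
    good = [m for m in matches if m.distance < 40]   # crude outlier filter
    if len(good) < 10:
        return None
    # Homography with RANSAC rejects remaining outliers/occlusions,
    # giving the robustness to clutter the abstract mentions.
    src = np.float32([model_kp[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # image-plane pose; stereo disparity would add the depth term
```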
Can Effective Synthetic Vision System Displays be Implemented on Limited Size Display Spaces?
NASA Technical Reports Server (NTRS)
Comstock, J. Raymond, Jr.; Glaab, Lou J.; Prinzel, Lance J.; Elliott, Dawn M.
2004-01-01
The Synthetic Vision Systems (SVS) element of the NASA Aviation Safety Program is striving to eliminate poor visibility as a causal factor in aircraft accidents, and to enhance operational capabilities of all types of aircraft. To accomplish these safety and situation awareness improvements, the SVS concepts are designed to provide a clear view of the world ahead through the display of computer generated imagery derived from an onboard database of terrain, obstacle, and airport information. An important issue for the SVS concept is whether useful and effective SVS displays can be implemented on limited size display spaces, as would be required to implement this technology on older aircraft with physically smaller instrument spaces. In this study, prototype SVS displays were presented on the following display sizes: (a) size "A" (e.g. 757 EADI), (b) form factor "D" (e.g. 777 PFD), and (c) new size "X" (rectangular flat-panel, approximately 20 x 25 cm). Testing was conducted in a high-resolution graphics simulation facility at NASA Langley Research Center. Specific issues under test included the display size as noted above and the field-of-view (FOV) to be shown on the display; directly related to FOV is the degree of minification of the displayed image or picture. Using simulated approaches with display size and FOV conditions held constant, no significant differences due to these factors were found. Preferred FOV based on performance was determined using approaches during which pilots could select the FOV. Mean preference ratings for FOV were in the following order: (1) 30 deg., (2) unity, (3) 60 deg., and (4) 90 deg., and this held true for all display sizes tested. Limitations of the present study and future research directions are discussed.
Evidence against global attention filters selective for absolute bar-orientation in human vision.
Inverso, Matthew; Sun, Peng; Chubb, Charles; Wright, Charles E; Sperling, George
2016-01-01
The finding that an item of type A pops out from an array of distractors of type B typically is taken to support the inference that human vision contains a neural mechanism that is activated by items of type A but not by items of type B. Such a mechanism might be expected to yield a neural image in which items of type A produce high activation and items of type B low (or zero) activation. Access to such a neural image might further be expected to enable accurate estimation of the centroid of an ensemble of items of type A intermixed with to-be-ignored items of type B. Here, it is shown that as the number of items in stimulus displays is increased, performance in estimating the centroids of horizontal (vertical) items amid vertical (horizontal) distractors degrades much more quickly and dramatically than does performance in estimating the centroids of white (black) items among black (white) distractors. Together with previous findings, these results suggest that, although human vision does possess bottom-up neural mechanisms sensitive to abrupt local changes in bar-orientation, and although human vision does possess and utilize top-down global attention filters capable of selecting multiple items of one brightness or of one color from among others, it cannot use a top-down global attention filter capable of selecting multiple bars of a given absolute orientation and filtering bars of the opposite orientation in a centroid task.
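The centroid task itself is easy to state computationally; below is a toy numpy sketch of an ideal global attention filter, which weights target items fully and distractors at zero before taking the centroid. The paper's finding is that human vision can apply such a filter for brightness but not for absolute bar-orientation; all values in the sketch are illustrative.

```python
# Toy model of the centroid task: weight items by an attention filter,
# then report the weighted centroid. Positions and labels are synthetic.
import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(0, 1, size=(12, 2))           # item positions
is_target = rng.integers(0, 2, size=12) == 1   # targets vs. distractors

def centroid(xy, weights):
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * xy).sum(axis=0) / w.sum()

ideal = centroid(xy, is_target)          # perfect filter: distractors get 0
unfiltered = centroid(xy, np.ones(12))   # no selectivity: all items count
print(ideal, unfiltered)
```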
Vision-based obstacle recognition system for automated lawn mower robot development
NASA Astrophysics Data System (ADS)
Mohd Zin, Zalhan; Ibrahim, Ratnawati
2011-06-01
Digital image processing (DIP) techniques have recently been widely used in various types of applications. Classification and recognition of a specific object using a vision system involve challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images is very important for any intelligent system such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to the development of an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was given to the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system, and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement, and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
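A skeletal version of such a pipeline is easy to express with OpenCV; the sketch below chains the filtering, edge detection, and segmentation steps the abstract names. Thresholds, kernel sizes, and the area cutoff are illustrative assumptions, not the paper's values.

```python
# Sketch of a filtering -> edge detection -> segmentation pipeline of the
# kind described above. All parameters are illustrative.
import cv2

def detect_obstacles(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)        # noise filtering
    edges = cv2.Canny(blur, 50, 150)                # edge detection
    # Segmentation: close gaps, then extract contours as candidates.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep regions large enough to matter to the mower; a classifier
    # over shape/size features would then label the obstacle type.
    return [c for c in contours if cv2.contourArea(c) > 500]
```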
In praise of the incomplete leader.
Ancona, Deborah; Malone, Thomas W; Orlikowski, Wanda J; Senge, Peter M
2007-02-01
Today's top executives are expected to do everything right, from coming up with solutions to unfathomably complex problems to having the charisma and prescience to rally stakeholders around a perfect vision of the future. But no one leader can be all things to all people. It's time to end the myth of the complete leader, say the authors. Those at the top must come to understand their weaknesses as well as their strengths. Only by embracing the ways in which they are incomplete can leaders fill in the gaps in their knowledge with others' skills. The incomplete leader has the confidence and humility to recognize unique talents and perspectives throughout the organization--and to let those qualities shine. The authors' work studying leadership over the past six years has led them to develop a framework of distributed leadership. Within that model, leadership consists of four capabilities: sensemaking, relating, "visioning," and inventing. Sensemaking involves understanding and mapping the context in which a company and its people operate. A leader skilled in this area can quickly identify the complexities of a given situation and explain them to others. The second capability, relating, means being able to build trusting relationships with others through inquiring (listening with intention), advocating (explaining one's own point of view), and connecting (establishing a network of allies who can help a leader accomplish his or her goals). Visioning, the third capability, means coming up with a compelling image of the future. It is a collaborative process that articulates what the members of an organization want to create. Finally, inventing involves developing new ways to bring that vision to life. Rarely will a single person be skilled in all four areas. That's why it's critical that leaders find others who can offset their limitations and complement their strengths. Those who don't will not only bear the burden of leadership alone but will find themselves at the helm of an unbalanced ship.
Information Systems for NASA's Aeronautics and Space Enterprises
NASA Technical Reports Server (NTRS)
Kutler, Paul
1998-01-01
The aerospace industry is being challenged to reduce costs and development time as well as utilize new technologies to improve product performance. Information technology (IT) is the key to providing revolutionary solutions to the challenges posed by the increasing complexity of NASA's aeronautics and space missions and the sophisticated nature of the systems that enable them. The NASA Ames vision is to develop technologies enabling the information age, expanding the frontiers of knowledge for aeronautics and space, improving America's competitive position, and inspiring future generations. Ames' missions to accomplish that vision include: 1) performing research to support the American aviation community through the unique integration of computation, experimentation, simulation and flight testing; 2) studying the health of our planet, understanding living systems in space and the origins of the universe, and developing technologies for space flight; and 3) researching, developing and delivering information technologies and applications. Information technology may be defined as the use of advanced computing systems to generate data, analyze data, transform data into knowledge, and aid in the decision-making process. The knowledge from transformed data can be displayed in visual, virtual and multimedia environments. The decision-making process can be fully autonomous or aided by cognitive processes, i.e., computational aids designed to leverage human capacities. IT systems can learn as they go, developing the capability to make decisions or aid the decision-making process on the basis of experience gained using limited data inputs. In the future, information systems will be used to aid space mission synthesis, virtual aerospace system design, aid damaged aircraft during landing, perform robotic surgery, and monitor the health and status of spacecraft and planetary probes. NASA Ames, through the Center of Excellence for Information Technology Office, is leading the effort in pursuit of revolutionary, IT-based approaches to satisfying NASA's aeronautics and space requirements. The objective of the effort is to incorporate information technologies within each of the Agency's four Enterprises, i.e., Aeronautics and Space Transportation Technology, Earth Science, Human Exploration and Development of Space, and Space Science. The end results of these efforts for Enterprise programs and projects should be reduced cost, enhanced mission capability, and expedited mission completion.
NASA Astrophysics Data System (ADS)
Lee, El-Hang; Lee, S. G.; O, B. H.; Park, S. G.; Noh, H. S.; Kim, K. H.; Song, S. H.
2006-09-01
A collective overview and review is presented of the original work conducted on the theory, design, fabrication, and integration of micro/nano-scale optical wires and photonic devices for applications in newly-conceived photonic systems called "optical printed circuit boards" (O-PCBs) and "VLSI photonic integrated circuits" (VLSI-PIC). These are aimed at compact, high-speed, multi-functional, intelligent, light-weight, low-energy and environmentally friendly, low-cost, and high-volume applications to complement or surpass the capabilities of electrical PCBs (E-PCBs) and/or VLSI electronic integrated circuit (VLSI-IC) systems. These consist of 2-dimensional or 3-dimensional planar arrays of micro/nano-optical wires and circuits to perform the functions of all-optical sensing, storing, transporting, processing, switching, routing and distributing optical signals on flat modular boards or substrates. The integrated optical devices include micro/nano-scale waveguides, lasers, detectors, switches, sensors, directional couplers, multi-mode interference devices, ring-resonators, photonic crystal devices, plasmonic devices, and quantum devices, made of polymer, silicon and other semiconductor materials. For VLSI photonic integration, photonic crystals and plasmonic structures have been used. Scientific and technological issues concerning the processes of miniaturization, interconnection and integration of these systems, as applicable to board-to-board, chip-to-chip, and intra-chip integration, are discussed along with applications for future computers, telecommunications, and sensor systems. Visions and challenges toward these goals are also discussed.
Monovision techniques for telerobots
NASA Technical Reports Server (NTRS)
Goode, P. W.; Carnils, K.
1987-01-01
The primary task of the vision sensor in a telerobotic system is to provide information about the position of the system's effector relative to objects of interest in its environment. The subtasks required to perform the primary task include image segmentation, object recognition, and object location and orientation in some coordinate system. The accomplishment of the vision task requires the appropriate processing tools and the system methodology to effectively apply the tools to the subtasks. The functional structure of the telerobotic vision system used in the Langley Research Center's Intelligent Systems Research Laboratory is discussed as well as two monovision techniques for accomplishing the vision subtasks.
NASA Technical Reports Server (NTRS)
McCain, H. G.; Andary, J. F.; Hewitt, D. R.; Haley, D. C.
1991-01-01
The Flight Telerobotic Servicer (FTS) Project at the Goddard Space Flight Center is developing an advanced telerobotic system to assist in and reduce crew extravehicular activity (EVA) for Space Station Freedom (SSF). The FTS will provide a telerobotic capability to the Freedom Station in the early assembly phases of the program and will be employed for assembly, maintenance, and inspection applications throughout the lifetime of the space station. Appropriately configured elements of the FTS will also be employed for robotic manipulation in remote satellite servicing applications and possibly the Lunar/Mars Program. In mid-1989, the FTS entered the flight system design and implementation phase (Phase C/D) of development with the signing of the FTS prime contract with Martin Marietta Astronautics Group in Denver, Colorado. The basic FTS design is now established and can be reported on in some detail. This paper will describe the FTS flight system design and the rationale for the specific design approaches and component selections. The current state of space technology and the nature of the FTS task dictate that the FTS be designed with sophisticated teleoperation capabilities for its initial primary operating mode. However, there are technologies, such as advanced computer vision and autonomous planning techniques currently in research and advanced development phases, which would greatly enhance the FTS capabilities to perform autonomously in less structured work environments. Therefore, a specific requirement on the initial FTS design is that it have the capability to evolve as new technology becomes available. This paper will describe the FTS design approach for evolution to more autonomous capabilities. Some specific task applications of the FTS and partial automation approaches to these tasks will also be discussed in this paper.
Sensor fusion to enable next generation low cost Night Vision systems
NASA Astrophysics Data System (ADS)
Schweiger, R.; Franz, S.; Löhlein, O.; Ritter, W.; Källhammer, J.-E.; Franks, J.; Krekels, T.
2010-04-01
The next generation of automotive Night Vision Enhancement systems offers automatic pedestrian recognition with a performance beyond current Night Vision systems at a lower cost. This will allow high market penetration, covering the luxury as well as compact car segments. Improved performance can be achieved by fusing a Far Infrared (FIR) sensor with a Near Infrared (NIR) sensor. However, fusing with today's FIR systems would be too costly to achieve high market penetration. The main cost drivers of the FIR system are its resolution and its sensitivity. Sensor cost is largely determined by sensor die size. Fewer and smaller pixels will reduce die size but also resolution and sensitivity. Sensitivity limits are mainly determined by inclement weather performance. Sensitivity requirements should be matched to the possibilities of low cost FIR optics, especially the implications of molding highly complex optical surfaces. As a FIR sensor specified for fusion can have lower resolution as well as lower sensitivity, fusing FIR and NIR can solve both the performance and the cost problems. To compensate for the effect of FIR-sensor degradation on pedestrian detection capabilities, a fusion approach called MultiSensorBoosting is presented that produces a classifier holding highly discriminative sub-pixel features from both sensors at once. The algorithm is applied to data with different resolutions and to data obtained from cameras with varying optics to incorporate various sensor sensitivities. As it is not feasible to record representative data with all the different sensor configurations, transformation routines applied to existing high resolution data recorded with high sensitivity cameras are investigated in order to determine the effects of lower resolution and lower sensitivity on the overall detection performance. This paper also gives an overview of the first results, showing that a reduction of FIR sensor resolution can be compensated using fusion techniques, as can a reduction of sensitivity.
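The MultiSensorBoosting details are not given in the abstract; as a generic illustration of boosting over a pooled FIR+NIR feature set (so the learner is free to pick the most discriminative features from either sensor), here is a hedged scikit-learn sketch. The feature extraction, training data, and parameters are stand-ins, not the paper's method.

```python
# Generic sketch of boosting over pooled FIR + NIR features, in the spirit
# of letting the learner select features from either sensor. Not the
# paper's MultiSensorBoosting algorithm.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def pooled_features(fir_patch, nir_patch):
    # Placeholder features: coarse intensity histograms from each sensor.
    f = np.histogram(fir_patch, bins=16, range=(0, 255))[0]
    n = np.histogram(nir_patch, bins=16, range=(0, 255))[0]
    return np.concatenate([f, n]).astype(float)

# Stand-in training data: pooled feature vectors for labeled
# pedestrian / non-pedestrian patches.
rng = np.random.default_rng(0)
X = rng.uniform(0, 50, size=(200, 32))
y = rng.integers(0, 2, size=200)

# Decision stumps as weak learners (older scikit-learn versions name the
# first parameter base_estimator instead of estimator).
clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=100)
clf.fit(X, y)
# Summing importances per sensor half shows where the decisions come from.
print(clf.feature_importances_.reshape(2, 16).sum(axis=1))
```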
Conceptual Study on Hypersonic Turbojet Experimental Vehicle (HYTEX)
NASA Astrophysics Data System (ADS)
Taguchi, Hideyuki; Murakami, Akira; Sato, Tetsuya; Tsuchiya, Takeshi
Pre-cooled turbojet engines have been investigated with the aim of realizing reusable space transportation systems and hypersonic airplanes. Evaluation methods for these engine performances have been established based on ground tests. There are some plans for the demonstration of hypersonic propulsion systems. JAXA has focused on hypersonic propulsion systems as a key technology of a hypersonic transport airplane. Demonstrations of Mach 5 class hypersonic technologies are stated as a development target for 2025 in the long term vision. In this study, a systems analysis of the hypersonic turbojet experimental vehicle (HYTEX) with Mach 5 flight capability is performed. Aerodynamic coefficients are obtained by CFD analyses and wind tunnel tests. A small pre-cooled turbojet is fabricated and tested using liquid hydrogen as fuel. As a result, the characteristics of the baseline vehicle shape are clarified, and the effects of pre-cooling are confirmed in the firing test.
NASA Astrophysics Data System (ADS)
Fang, Yi-Chin; Wu, Bo-Wen; Lin, Wei-Tang; Jon, Jen-Liung
2007-11-01
Resolution and color are the two main axes for measuring optical digital image quality, but integrally improving the image quality of an optical system is difficult because of the many limits, such as size, materials, and environment, imposed on optical system design. It is therefore important to raise the capability of recognizing images that are blurred by aberrations and noise, or by characteristics of human vision such as distant and small targets, using artificial intelligence techniques such as genetic algorithms and neural networks, while decreasing the chromatic aberration of the optical system and without adding complex calculation to the image processing. This study pursues the goal of integrally, economically, and effectively improving recognition and classification of low-quality images produced by the optical system and its environment.
Intelligent Sensors: Strategies for an Integrated Systems Approach
NASA Technical Reports Server (NTRS)
Chitikeshi, Sanjeevi; Mahajan, Ajay; Bandhil, Pavan; Utterbach, Lucas; Figueroa, Fernando
2005-01-01
This paper proposes the development of intelligent sensors as an integrated systems approach, i.e., one treats the sensor as a complete system with its own sensing hardware (the traditional sensor), A/D converters, processing and storage capabilities, software drivers, self-assessment algorithms, communication protocols, and evolutionary methodologies that allow it to get better with time. Under a project being undertaken at the Stennis Space Center, an integrated framework is being developed for the intelligent monitoring of smart elements. These smart elements can be sensors, actuators or other devices. The immediate application is the monitoring of the rocket test stands, but the technology should be generally applicable to the Intelligent Systems Health Monitoring (ISHM) vision. This paper outlines progress made in the development of intelligent sensors by describing the work done to date on Physical Intelligent Sensors (PIS) and Virtual Intelligent Sensors (VIS).
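As a concrete reading of "sensor as a complete system", here is a minimal Python sketch pairing raw sensing with an on-board self-assessment step; the interfaces, window sizes, and health heuristics are illustrative assumptions, not the PIS/VIS design.

```python
# Sketch of an intelligent sensor: raw sensing plus on-board
# self-assessment producing a simple health report. Illustrative only.
import statistics

class IntelligentSensor:
    def __init__(self, read_raw, valid_range):
        self.read_raw = read_raw          # the traditional sensing hardware
        self.valid_range = valid_range
        self.history = []

    def read(self):
        value = self.read_raw()
        self.history.append(value)
        return value

    def self_assess(self):
        """Basic self-assessment: range check plus a crude drift estimate."""
        if not self.history:
            return {"status": "no data"}
        lo, hi = self.valid_range
        in_range = all(lo <= v <= hi for v in self.history[-10:])
        drift = 0.0
        if len(self.history) >= 20:
            drift = (statistics.mean(self.history[-10:]) -
                     statistics.mean(self.history[:10]))
        return {"status": "ok" if in_range else "suspect", "drift": drift}
```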
NASA Technical Reports Server (NTRS)
Ifju, Peter
2002-01-01
Micro Air Vehicles (MAVs) will be developed for tracking individuals, locating terrorist threats, and delivering remote sensors, for surveillance and chemical/biological agent detection. The tasks are: (1) Develop robust MAV platform capable of carrying sensor payload. (2) Develop fully autonomous capabilities for delivery of sensors to remote and distant locations. The current capabilities and accomplishments are: (1) Operational electric (inaudible) 6-inch MAVs with novel flexible wing, providing superior aerodynamic efficiency and control. (2) Vision-based flight stability and control (from on-board cameras).
Design of a dynamic test platform for autonomous robot vision systems
NASA Technical Reports Server (NTRS)
Rich, G. C.
1980-01-01
The concept and design of a dynamic test platform for the development and evaluation of a robot vision system are discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi laser/multi detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform. It can then be subjected to a wide variety of simulated Rover motions and thus be examined in a controlled, quantitative fashion. Defining and modeling Rover motions and designing the platform to emulate these motions are also discussed. Individual aspects of the design process are treated separately, such as the structure, driving linkages, and motors and transmissions.
Synthetic Immunology: Hacking Immune Cells to Expand Their Therapeutic Capabilities.
Roybal, Kole T; Lim, Wendell A
2017-04-26
The ability of immune cells to survey tissues and sense pathologic insults and deviations makes them a unique platform for interfacing with the body and disease. With the rapid advancement of synthetic biology, we can now engineer and equip immune cells with new sensors and controllable therapeutic response programs to sense and treat diseases that our natural immune system cannot normally handle. Here we review the current state of engineered immune cell therapeutics and their unique capabilities compared to small molecules and biologics. We then discuss how engineered immune cells are being designed to combat cancer, focusing on how new synthetic biology tools are providing potential ways to overcome the major roadblocks for treatment. Finally, we give a long-term vision for the use of synthetic biology to engineer immune cells as a general sensor-response platform to precisely detect disease, to remodel disease microenvironments, and to treat a potentially wide range of challenging diseases.
Engineering the vibrational coherence of vision into a synthetic molecular device.
Gueye, Moussa; Manathunga, Madushanka; Agathangelou, Damianos; Orozco, Yoelvis; Paolino, Marco; Fusi, Stefania; Haacke, Stefan; Olivucci, Massimo; Léonard, Jérémie
2018-01-22
The light-induced double-bond isomerization of the visual pigment rhodopsin operates a molecular-level optomechanical energy transduction, which triggers a crucial protein structure change. In fact, rhodopsin isomerization occurs according to a unique, ultrafast mechanism that preserves mode-specific vibrational coherence all the way from the reactant excited state to the primary photoproduct ground state. The engineering of such an energy-funnelling function in synthetic compounds would pave the way towards biomimetic molecular machines capable of achieving optimum light-to-mechanical energy conversion. Here we use resonance and off-resonance vibrational coherence spectroscopy to demonstrate that a rhodopsin-like isomerization operates in a biomimetic molecular switch in solution. Furthermore, by using quantum chemical simulations, we show why the observed coherent nuclear motion critically depends on minor chemical modifications capable of inducing specific geometric and electronic effects. This finding provides a strategy for engineering vibrationally coherent motions in other synthetic systems.
Refining the aggregate exposure pathway
Advancements in measurement technologies and modeling capabilities continue to result in an abundance of exposure information, adding to that currently in existence. However, fragmentation within the exposure science community acts as an obstacle for realizing the vision set forth...
Aerospace Concurrent Engineering Design Teams: Current State, Next Steps and a Vision for the Future
NASA Technical Reports Server (NTRS)
Hihn, Jairus; Chattopadhyay, Debarati; Karpati, Gabriel; McGuire, Melissa; Borden, Chester; Panek, John; Warfield, Keith
2011-01-01
Over the past sixteen years, government aerospace agencies and aerospace industry have developed and evolved operational concurrent design teams to create novel spaceflight mission concepts and designs. These capabilities and teams, however, have evolved largely independently. In today's environment of increasingly complex missions with limited budgets it is becoming readily apparent that both implementing organizations and today's concurrent engineering teams will need to interact more often than they have in the past. This will require significant changes in the current state of practice. This paper documents the findings from a concurrent engineering workshop held in August 2010 to identify the key near term improvement areas for concurrent engineering capabilities and challenges to the long-term advancement of concurrent engineering practice. The paper concludes with a discussion of a proposed vision for the evolution of these teams over the next decade.
Multi-Center Evaluation of the Automated Immunohematology Instrument, the ORTHO VISION Analyzer.
Aysola, Agnes; Wheeler, Leslie; Brown, Richard; Denham, Rebecca; Colavecchia, Connie; Pavenski, Katerina; Krok, Elizabeth; Hayes, Chelsea; Klapper, Ellen
2017-02-01
The ORTHO VISION Analyzer (Vision) is an immunohematology instrument using ID-MT gel card technology with digital image processing. It offers continuous, random sample access with STAT priority processing. The efficiency and ease of operation of Vision were evaluated at 5 medical centers. De-identified patient samples were tested on the ORTHO ProVue Analyzer (ProVue) and repeated on the Vision, mimicking the daily workload pattern. Turnaround times (TAT) were collected and compared. Operators rated key features of the analyzer on a scale of 1 to 5. A total of 507 samples were tested on both instruments at the 5 trial sites. The mean TATs (SD) were 31.6 minutes (5.5) with Vision and 35.7 minutes (8.4) with ProVue, a 12% reduction. Type and screens were performed on 381 samples; the mean TAT (SD) was 32.2 minutes (4.5) with Vision and 37.0 minutes (7.4) with ProVue. Antibody identification with eleven panel cells was performed on 134 samples on Vision; the TAT (SD) was 43.2 minutes (8.3). The installation, training, configuration, maintenance, and validation processes are all streamlined to provide a short implementation time. The average rating of main functions by the operators was 4.1 to 4.8. Opportunities for improvement, such as flexibility in editing QC results, the maintenance schedule, and printing options, were identified. The capabilities to perform serial dilutions, to accept pediatric tubes, and to review results by e-Connectivity are enhancements over the ProVue. Vision provides shorter TAT compared to ProVue, and every site described a positive experience using Vision.
NASA Astrophysics Data System (ADS)
Uijt de Haag, Maarten; Campbell, Jacob; van Graas, Frank
2005-05-01
Synthetic Vision Systems (SVS) provide pilots with a virtual visual depiction of the external environment. When SVS is used for aircraft precision approach guidance, accurate positioning relative to the runway with a high level of integrity is required. Precision approach guidance systems in use today require ground-based electronic navigation components with at least one installation at each airport, and in many cases multiple installations to service approaches to all qualifying runways. A terrain-referenced approach guidance system is envisioned to provide precision guidance to an aircraft without the use of ground-based electronic navigation components installed at the airport. This autonomy makes it a good candidate for integration with an SVS. At the Ohio University Avionics Engineering Center (AEC), work has been underway in the development of such a terrain-referenced navigation system. When used in conjunction with an Inertial Measurement Unit (IMU) and a high accuracy/resolution terrain database, this terrain-referenced navigation system can provide navigation and guidance information to the pilot on an SVS or on conventional instruments. The terrain-referenced navigation system under development at AEC operates on similar principles as other terrain navigation systems: a ground-sensing sensor (in this case an airborne laser scanner) gathers range measurements to the terrain; this data is then matched in some fashion with an onboard terrain database to find the most likely position solution, which is used to update an inertial sensor-based navigator. AEC's system design differs from today's common terrain navigators in its use of a high resolution terrain database (~1 meter post spacing) in conjunction with an airborne laser scanner capable of providing tens of thousands of independent terrain elevation measurements per second with centimeter-level accuracies. When combined with data from an inertial navigator, the high resolution terrain database and laser scanner system is capable of providing near meter-level horizontal and vertical position estimates. Furthermore, the system under development capitalizes on 1) the position and integrity benefits provided by the Wide Area Augmentation System (WAAS) to reduce the initial search space size, and 2) the availability of high accuracy/resolution databases. This paper presents results from flight tests where the terrain-referenced navigator is used to provide guidance cues for a precision approach.
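The core terrain-matching step can be sketched as a search over horizontal offsets that minimizes the residual between laser-derived elevations and the database; the numpy sketch below is an illustrative reading (a real system weights this match into the inertial filter, and the grid units and search window here are assumptions).

```python
# Sketch of terrain matching: slide laser-derived ground elevations over a
# gridded terrain database and keep the offset with the smallest residual.
import numpy as np

def match_offset(dem, samples, search=10):
    """dem: 2-D elevation grid; samples: (row, col, elevation) triples in
    the navigator's predicted frame, one per laser return."""
    samples = np.asarray(samples, dtype=float)
    rows, cols, z = samples[:, 0], samples[:, 1], samples[:, 2]
    best_sse, best_off = np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r = np.clip(rows + dr, 0, dem.shape[0] - 1).astype(int)
            c = np.clip(cols + dc, 0, dem.shape[1] - 1).astype(int)
            sse = float(np.sum((dem[r, c] - z) ** 2))
            if sse < best_sse:
                best_sse, best_off = sse, (dr, dc)
    return best_off  # most likely position correction, in grid cells
```

The WAAS position bound mentioned in the abstract plays the role of the `search` window here: a tighter initial estimate shrinks the space of offsets that must be tested.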
REDUCTIONS WITHOUT REGRET: DEFINING THE NEEDED CAPABILITIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swegle, J.; Tincher, D.
This is the second of three papers (in addition to an introductory summary) aimed at providing a framework for evaluating future reductions or modifications of the U.S. nuclear force, first by considering previous instances in which nuclear-force capabilities were eliminated; second by looking forward into at least the foreseeable future at the features of global and regional deterrence (recognizing that new weapon systems currently projected will have expected lifetimes stretching beyond our ability to predict the future); and third by providing examples of past or possible undesirable outcomes in the shaping of the future nuclear force, as well as some closing thoughts for the future. This paper begins with a discussion of the current nuclear force and the plans and procurement programs for the modernization of that force. Current weapon systems and warheads were conceived and built decades ago, and procurement programs have begun for the modernization or replacement of major elements of the nuclear force: the heavy bomber, the air-launched cruise missile, the ICBMs, and the ballistic-missile submarines. In addition, the Nuclear Weapons Council has approved a new framework for nuclear-warhead life extension, not yet fully fleshed out, that aims to reduce the current number of nuclear explosives from seven to five, the so-called 3+2 vision. This vision includes three interoperable warheads for both ICBMs and SLBMs (thus eliminating one backup weapon) and two warheads for aircraft delivery (one gravity bomb and one cruise missile, eliminating a second backup gravity bomb). This paper also includes a discussion of the current and near-term nuclear-deterrence mission, both global and regional, and offers some observations on the future of the strategic deterrence mission and the challenges of regional and extended nuclear deterrence.
Using Vision Metrology System for Quality Control in Automotive Industries
NASA Astrophysics Data System (ADS)
Mostofi, N.; Samadzadegan, F.; Roohy, Sh.; Nozari, M.
2012-07-01
The need for more accurate measurements in different stages of industrial applications, such as design, production, and installation, is the main reason industry is encouraged to use industrial photogrammetry (vision metrology systems). Given the main advantages of photogrammetric methods, such as greater economy, a high level of automation, non-contact measurement capability, more flexibility, and high accuracy, the method competes well with traditional industrial methods. For industries that make objects from a main reference model without any mathematical model of it, the producer's main problem is evaluating the production line. The problem is complicated when both reference and product exist only as physical objects, so that comparison is only possible by direct measurement. In such cases, producers make fixtures fitting the reference with limited accuracy; in practical reports, the available precision is sometimes no better than millimetres. We used a non-metric high resolution digital camera for this investigation, and the case study in this paper is an automobile chassis. In this research, a stable photogrammetric network was designed for measuring the industrial object (both reference and product), and then, using bundle adjustment and self-calibration methods, the differences between the reference and product objects were obtained. These differences help the producer improve the production workflow and deliver more accurate products. The results of this research demonstrate the high potential of the proposed method in industrial fields and prove its efficiency and reliability using the RMSE criterion. The RMSE achieved for this case study is smaller than 200 microns, demonstrating the high capability of the implemented approach.
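The reference-versus-product comparison boils down to rigidly aligning two corresponding point sets and reporting residuals; here is a hedged numpy sketch using the Kabsch/SVD best-fit rotation, assuming corresponding 3-D points have already come out of the bundle adjustment (which is not shown).

```python
# Sketch: rigidly align measured product points to reference points
# (Kabsch/SVD), then report per-point differences and RMSE.
import numpy as np

def align_and_compare(reference, product):
    """reference, product: (N, 3) arrays of corresponding 3-D points."""
    ref_c = reference - reference.mean(axis=0)
    prod_c = product - product.mean(axis=0)
    # Best-fit rotation via SVD of the cross-covariance (Kabsch method);
    # d guards against a reflection solution.
    U, _, Vt = np.linalg.svd(prod_c.T @ ref_c)
    d = np.sign(np.linalg.det(U @ Vt))
    aligned = prod_c @ U @ np.diag([1.0, 1.0, d]) @ Vt + reference.mean(axis=0)
    diffs = np.linalg.norm(aligned - reference, axis=1)
    rmse = float(np.sqrt(np.mean(diffs ** 2)))
    return diffs, rmse
```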
2011-11-01
This report (RX-TY-TR-2011-0096-01) summarizes the development of a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica.
Bornhoft, J M; Strabala, K W; Wortman, T D; Lehman, A C; Oleynikov, D; Farritor, S M
2011-01-01
The objective of this research is to study the effectiveness of using a stereoscopic visualization system for performing remote surgery. The use of stereoscopic vision has become common with the advent of the da Vinci® system (Intuitive, Sunnyvale, CA). This system creates a virtual environment that consists of a 3-D display for visual feedback and haptic tactile feedback, together providing an intuitive environment for remote surgical applications. This study uses simple in vivo robotic surgical devices and compares the performance of surgeons using the stereoscopic interfacing system to the performance of surgeons using conventional two-dimensional monitors. The stereoscopic viewing system consists of two cameras, two monitors, and four mirrors. The cameras are mounted to a multi-functional miniature in vivo robot and mimic the depth perception of the actual human eyes. This is done by placing the cameras at a calculated angle and distance apart. Live video streams from the left and right cameras are displayed on the left and right monitors, respectively. A system of angled mirrors allows the left and right eyes to see the video stream from the left and right monitor, respectively, creating the illusion of depth. The haptic interface consists of two PHANTOM Omni® (SensAble, Woburn, MA) controllers. These controllers measure the position and orientation of a pen-like end effector with three degrees of freedom. As the surgeon uses this interface, they see a 3-D image and feel force feedback for collisions and workspace limits. The stereoscopic viewing system has been used in several surgical training tests and shows a potential improvement in depth perception and 3-D vision. The haptic system accurately gives force feedback that aids in surgery. Both have been used in non-survival animal surgeries, and have successfully been used in suturing and gallbladder removal. Bench top experiments using the interfacing system have also been conducted. A group of participants completed two different surgical training tasks using both a two-dimensional visual system and the stereoscopic visual system. Results suggest that the stereoscopic visual system decreased the amount of time taken to complete the tasks. All participants also reported that the stereoscopic system was easier to utilize than the two-dimensional system. Haptic controllers combined with stereoscopic vision provide a more intuitive virtual environment. This system provides the surgeon with 3-D vision, depth perception, and the ability to receive feedback through forces applied in the haptic controller while performing surgery. These capabilities potentially enable the performance of more complex surgeries with a higher level of precision.
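The "calculated angle and distance apart" follows from simple convergence geometry; here is a back-of-envelope Python sketch with illustrative numbers (the paper's actual baseline and working distance are not given in the abstract).

```python
# Convergence geometry for a human-like stereo camera pair.
# Baseline and working distance are illustrative assumptions.
import math

baseline_m = 0.065       # roughly human interpupillary distance
working_dist_m = 0.30    # assumed distance to the surgical site

# Each camera toes in toward the fixation point at the working distance.
toe_in_rad = math.atan((baseline_m / 2) / working_dist_m)
print(f"toe-in per camera: {math.degrees(toe_in_rad):.1f} deg")
# Disparity of a point at depth Z scales with baseline / Z; the mirror
# and monitor arrangement presents each camera's view to one eye only.
```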
NASA Technical Reports Server (NTRS)
Downward, James G.
1992-01-01
This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.
Meso-scale controlled motion for a microfluidic drop ejector.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galambos, Paul C.; Givler, Richard C.; Pohl, Kenneth Roy
2004-12-01
The objective of this LDRD was to develop a uniquely capable, novel droplet solution based manufacturing system built around a new MEMS drop ejector. The development of all the working subsystems required was completed, leaving the integration of these subsystems into a working prototype still to be accomplished. This LDRD report will focus on the three main subsystems: (1) MEMS drop ejector--the MEMS "sideshooter" effectively ejected 0.25 pl drops at 10 m/s; (2) packaging--a compact ejector package based on a modified EMDIP (Electro-Microfluidic Dual In-line Package--SAND2002-1941) was fabricated; and (3) a vision/stage system allowing precise ejector package positioning in 3 dimensions above a target was developed.
Towards an Autonomic Cluster Management System (ACMS) with Reflex Autonomicity
NASA Technical Reports Server (NTRS)
Truszkowski, Walt; Hinchey, Mike; Sterritt, Roy
2005-01-01
Cluster computing, whereby a large number of simple processors or nodes are combined to function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of providing a fault-tolerant environment and achieving significant computational capability for high-performance computing applications. However, the task of manually managing and configuring a cluster quickly becomes daunting as the cluster grows in size. Autonomic computing, with its vision of self-management, can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management, and its evolution to include reflex reactions via pulse monitoring.
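The "reflex" layer referred to above is essentially a fast, low-level reaction to missed heartbeats, sitting below the slower self-management logic. A minimal sketch of that pulse-monitoring idea, with hypothetical names and timeouts:

```python
# Sketch of pulse monitoring with a reflex reaction: each node emits periodic
# heartbeats; prolonged silence triggers an immediate reflex response.
# Names, timeouts, and the failover action are assumptions for illustration.
import time

REFLEX_TIMEOUT = 3.0     # silence longer than this triggers the reflex

last_pulse = {}          # node id -> timestamp of its last heartbeat

def on_heartbeat(node):
    """Called whenever a node's pulse message arrives."""
    last_pulse[node] = time.monotonic()

def check_reflexes():
    """Fast reflex loop: react to silent nodes before higher-level planning runs."""
    now = time.monotonic()
    for node, t in last_pulse.items():
        if now - t > REFLEX_TIMEOUT:
            print(f"reflex: {node} silent for {now - t:.1f}s, failing over its work")

on_heartbeat("node-07")
time.sleep(0.1)
check_reflexes()   # nothing fires yet; node-07 pulsed 0.1 s ago
```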
The astronaut and the banana peel: An EVA retriever scenario
NASA Technical Reports Server (NTRS)
Shapiro, Daniel G.
1989-01-01
To prepare for the problem of accidents during Space Station activities, the Extravehicular Activity Retriever (EVAR) robot, whose purpose is to retrieve astronauts and tools that float free of the Space Station, is being constructed. Advanced Decision Systems is at the beginning of a project to develop research software capable of guiding EVAR through the retrieval process. This involves addressing problems in machine vision, dexterous manipulation, real-time construction of programs via speech input, and reactive execution of plans despite the mishaps and unexpected conditions that arise in uncontrolled domains. The problem analysis phase of this work is presented. An EVAR scenario is used to elucidate major domain and technical problems. An overview of the technical approach to prototyping an EVAR system is also presented.
Robotic goalie with 3 ms reaction time at 4% CPU load using event-based dynamic vision sensor
Delbruck, Tobi; Lang, Manuel
2013-01-01
Conventional vision-based robotic systems that must operate quickly require high video frame rates and consequently high computational costs. Visual response latencies are lower-bounded by the frame period, e.g., 20 ms for a 50 Hz frame rate. This paper shows how an asynchronous neuromorphic dynamic vision sensor (DVS) silicon retina is used to build a fast self-calibrating robotic goalie, which offers high update rates and low latency at low CPU load. Independent and asynchronous per-pixel illumination change events from the DVS signify moving objects and are used in software to track multiple balls. Motor actions to block the most “threatening” ball are based on measured ball positions and velocities. The goalie also sees its single-axis arm and calibrates the motor output map during idle periods so that it can plan open-loop arm movements to desired visual locations. Blocking capability is about 80% for balls shot from 1 m from the goal, even for the fastest shots, and approaches 100% when the ball does not beat the limits of the servo motor to move the arm to the necessary position in time. Running on standard USB buses under a standard preemptive multitasking operating system (Windows), the goalie robot achieves median update rates of 550 Hz, with latencies of 2.2 ± 2 ms from ball movement to motor command at a peak CPU load of less than 4%. Practical observations and measurements of USB device latency are provided. PMID:24311999
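The key algorithmic idea is that each asynchronous event updates the tracker directly, so latency is bounded by event arrival rather than a frame period. A minimal sketch of such per-event tracking (an exponential moving average with per-event velocity estimation; the structure and parameters are assumptions, not the paper's implementation):

```python
# Sketch of event-driven tracking: each DVS event (x, y, timestamp)
# incrementally updates position and velocity estimates, so the tracker
# state is fresh at every event rather than once per frame.
class EventTracker:
    def __init__(self, alpha: float = 0.05):
        self.alpha = alpha           # smoothing factor per event
        self.x = self.y = None       # current position estimate (pixels)
        self.vx = self.vy = 0.0      # current velocity estimate (pixels/s)
        self.t = None                # timestamp of last update (seconds)

    def update(self, ex: float, ey: float, ts: float) -> None:
        if self.x is None:           # first event initializes the state
            self.x, self.y, self.t = ex, ey, ts
            return
        dt = max(ts - self.t, 1e-6)
        nx = (1 - self.alpha) * self.x + self.alpha * ex
        ny = (1 - self.alpha) * self.y + self.alpha * ey
        self.vx, self.vy = (nx - self.x) / dt, (ny - self.y) / dt
        self.x, self.y, self.t = nx, ny, ts

tracker = EventTracker()
for ev in [(10, 50, 0.000), (11, 50, 0.001), (13, 51, 0.002)]:
    tracker.update(*ev)
print(tracker.x, tracker.vx)   # position and velocity, updated per event
```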
Hyperstereopsis in night vision devices: basic mechanisms and impact for training requirements
NASA Astrophysics Data System (ADS)
Priot, Anne-Emmanuelle; Hourlier, Sylvain; Giraudet, Guillaume; Leger, Alain; Roumes, Corinne
2006-05-01
Including night vision capabilities in Helmet Mounted Displays has been a serious challenge for many years. The use of "see-through" head-mounted image intensifier systems is particularly challenging, as it introduces some peculiar visual characteristics usually referred to as "hyperstereopsis". Flight testing of such systems started in the early nineties, both in the US and in Europe. While the trials conducted in the US yielded quite controversial results, convergent positive ones were obtained from European testing, mainly in the UK, Germany, and France. Subsequently, work on integrating optically coupled I2 tubes on HMDs was discontinued in the US, while European manufacturers developed such HMDs for various rotary-wing platforms like the TIGER. Coping with hyperstereopsis raises physiological and cognitive human factors issues. Starting in the sixties, the effects of increased interocular separation and adaptation to such unusual vision conditions have been quite extensively studied by a number of authors, such as Wallach, Schor, Judge and Miles, and Fisher and Ciuffreda. A synthetic review of the literature on this subject is presented. According to users' reports, habituation to such devices proceeds in three successive phases: initial exposure, a compensation-building phase, and a behavioral adjustment phase. A habituation model is suggested to account for HMSD users' reports and literature data bearing on hyperstereopsis, cue weighting for depth perception, adaptation and learning processes, and cognitive control of the task. Finally, some preliminary results on spatial and temporal adaptation to hyperstereopsis, coming from the survey of TIGER pilot training currently conducted at the French-German Army Aviation Training Center, are presented.
Aviator's night vision system (ANVIS) in Operation Enduring Freedom (OEF): user acceptability survey
NASA Astrophysics Data System (ADS)
Hiatt, Keith L.; Trollman, Christopher J.; Rash, Clarence E.
2010-04-01
In 1973, the U.S. Army adopted night vision devices for use in the aviation environment. These devices are based on the principle of image intensification (I2) and have become the mainstay of the aviator's capability to operate during periods of low illumination, i.e., at night. In the nearly four decades that have followed, a number of engineering advancements have significantly improved the performance of these devices. The current version, using 3rd-generation I2 technology, is known as the Aviator's Night Vision Imaging System (ANVIS). While considerable experience with performance has been gained during training and peacetime operations, no previous studies have looked at user acceptability and performance issues in a combat environment. This study was designed to compare Army aircrew experiences in a combat environment to currently available information in the published literature (all peacetime laboratory and field training studies) and to determine if the latter is valid. The purpose of this study was to identify and assess aircrew satisfaction with ANVIS and any visual performance issues or problems relating to its use in Operation Enduring Freedom (OEF). The study consisted of an anonymous survey (based on previously validated surveys used in the laboratory and training environments) of 86 aircrew members (64% rated and 36% non-rated) of an Aviation Task Force approximately 6 months into their OEF deployment. This group represents an aggregate of >94,000 flight hours, of which ~22,000 are ANVIS and ~16,000 occurred during this deployment. Overall user acceptability of ANVIS in a combat environment is discussed.
Ultraviolet-Blocking Lenses Protect, Enhance Vision
NASA Technical Reports Server (NTRS)
2010-01-01
To combat the harmful properties of light in space, as well as those of artificial radiation produced during laser and welding work, Jet Propulsion Laboratory (JPL) scientists developed a lens capable of absorbing, filtering, and scattering the dangerous light without obstructing vision. SunTiger Inc. (now Eagle Eyes Optics, of Calabasas, California) was formed to market a full line of sunglasses based on the JPL discovery, which promised 100-percent elimination of harmful wavelengths and enhanced visual clarity. The technology was recently inducted into the Space Technology Hall of Fame.
A methodology for comprehensive strategic planning and program prioritization
NASA Astrophysics Data System (ADS)
Raczynski, Christopher Michael
2008-10-01
The process developed in this work, Strategy Optimization for the Allocation of Resources (SOAR), is a strategic planning methodology based on Integrated Product and Process Development and systems engineering techniques. Using a top-down approach, the process starts with the creation of the organization's vision and its measures of effectiveness. These measures are prioritized based on their applicability to the external-world scenarios that will frame the future. The programs that will be used to accomplish this vision are identified by decomposing the problem. Information is gathered on each program's application, cost, schedule, risk, and other pertinent attributes. The relationships between the levels of the hierarchy are mapped with the help of subject matter experts, and these connections are then used to determine the overall benefit of the programs to the vision of the organization. Through a Multi-Objective Genetic Algorithm (MOGA), a tradespace of potential program portfolios is created, among which the decision maker can allocate resources. The information and portfolios are presented to the decision maker through a Decision Support System (DSS) that collects and visualizes all the data in a single location. This methodology was tested on a science and technology planning exercise conducted by the United States Navy. A thorough decomposition was defined, and technology programs were identified that had the potential to benefit the vision. The prioritization of the top-level capabilities was performed with a rank-ordering scheme, and a previous naval application was used to demonstrate a cumulative voting scheme. Voting was performed using the Nominal Group Technique to capture the relationships between the levels of the hierarchy. Interrelationships between the technologies were identified, a MOGA was used to optimize portfolios with respect to these constraints, and the resulting information was placed in the DSS. This formulation allowed the decision makers to assess which portfolio could provide the greatest benefit to the Navy while still fitting within the funding profile.
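As a rough illustration of the portfolio-optimization step, the sketch below runs a simple genetic algorithm over binary program-selection vectors under a budget cap. The benefit and cost figures are invented, and a single scalar fitness stands in for the multiple objectives the actual MOGA trades off:

```python
# Toy genetic algorithm over program portfolios: maximize total benefit
# subject to a budget. Benefit/cost values and GA parameters are assumptions.
import random

random.seed(1)
benefit = [9, 4, 7, 3, 8, 5]   # assumed benefit of each candidate program
cost    = [5, 2, 4, 1, 6, 3]   # assumed cost of each program
BUDGET  = 10

def fitness(bits):
    total_cost = sum(c for c, b in zip(cost, bits) if b)
    if total_cost > BUDGET:
        return -1                      # infeasible portfolio
    return sum(v for v, b in zip(benefit, bits) if b)

pop = [[random.randint(0, 1) for _ in benefit] for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents, children = pop[:20], []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(benefit))
        child = a[:cut] + b[cut:]      # one-point crossover
        if random.random() < 0.1:      # occasional bit-flip mutation
            i = random.randrange(len(child))
            child[i] = 1 - child[i]
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(best, fitness(best))
```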
Vehicle-based vision sensors for intelligent highway systems
NASA Astrophysics Data System (ADS)
Masaki, Ichiro
1989-09-01
This paper describes a vision system, based on an ASIC (Application-Specific Integrated Circuit) approach, for vehicle guidance on highways. After reviewing related work in the fields of intelligent vehicles, stereo vision, and ASIC-based approaches, the paper focuses on a stereo vision system for intelligent cruise control. The system measures the distance to the vehicle in front using trinocular triangulation. An application-specific processor architecture was developed to offer low mass-production cost, real-time operation, low power consumption, and small physical size. The system was installed in the trunk of a car and evaluated successfully on highways.
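The triangulation step can be illustrated with the usual pinhole relation Z = fB/d; a third camera yields a second, redundant estimate that can validate the stereo match. A minimal sketch under assumed focal length and baselines:

```python
# Sketch of range-by-triangulation with a three-camera rig: two camera pairs
# give independent estimates whose agreement validates the match.
# Focal length, baselines, and tolerance below are illustrative assumptions.
def trinocular_range(focal_px: float, b12_m: float, d12_px: float,
                     b13_m: float, d13_px: float, tol: float = 0.05):
    z12 = focal_px * b12_m / d12_px    # range from camera pair 1-2 (Z = f*B/d)
    z13 = focal_px * b13_m / d13_px    # range from camera pair 1-3
    if abs(z12 - z13) > tol * z12:
        return None                    # inconsistent estimates: reject the match
    return (z12 + z13) / 2

# e.g. 800 px focal length, baselines of 0.3 m and 0.6 m across the windshield
print(trinocular_range(800.0, 0.3, 6.0, 0.6, 12.0))  # 40.0 m to the lead vehicle
```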
Promoting meaningful use of health information technology in Israel: ministry of health vision.
Gerber, Ayala; Topaz, Maxim Max
2014-01-01
The Ministry of Health (MOH) of Israel has overall responsibility for the healthcare system. In recent years the MOH has developed strong capabilities in the areas of technology assessment and prioritization of new technologies. Israel completed the transition to computerized medical records a decade ago in most care settings; however, the process in Israel was spontaneous, without government control or standards setting, so large variations among systems and organizations were created. Currently, the main challenge is to convert the information scattered across different systems into organized, visible information and to make it available to the various levels of health management. The MOH's solution is to implement a selected information system from a specific vendor at all hospitals and all HMO clinics, in order to achieve interoperability. The system will enable access to the patient's medical record history from any location.
Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur
2012-01-01
This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. Accordingly, this study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956
Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena
2013-01-01
In this paper we present the potential usage of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and from these data builds a three-dimensional image of the surface. This method appeared to be accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data that can enhance the estimation of sex from osteological material.
Student-Built Underwater Video and Data Capturing Device
NASA Astrophysics Data System (ADS)
Whitt, F.
2016-12-01
The Stockbridge High School Robotics Team's invention is a low-cost underwater video and data capturing device. This system is capable of shooting time-lapse photography and/or video for up to 3 days at a time. It can be used in remote locations without having to change batteries or add external hard drives for data storage. The video capturing device has a unique base and mounting system which houses a Pi Drive and a programmable Raspberry Pi with a camera module. The system is powered by two 12-volt batteries, which makes it easy for users to recharge after use. The data capturing device has the same base and mounting system as the underwater camera. It consists of an Arduino with an SD-card shield that collects continuous temperature and pH readings underwater; these data are logged onto the SD card for easy access and recording. The low-cost underwater video and data capturing device can reach depths of up to 100 meters while recording 36 hours of video on 1 terabyte of storage. It also features night-vision infrared light capabilities. The cost to build the device is $500. The goal was to provide a device that can easily be used by marine biologists, teachers, researchers, and citizen scientists to capture photographic and water-quality data in marine environments over extended periods of time.
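A time-lapse loop of the kind such a Raspberry Pi rig might run can be sketched with the picamera library; the capture interval and storage path below are assumptions:

```python
# Sketch of a long-running time-lapse loop on a Raspberry Pi camera module.
# The interval and output path are assumptions, not the team's settings.
import time
from picamera import PiCamera

camera = PiCamera()
INTERVAL_S = 60                      # one frame per minute

try:
    frame = 0
    while True:
        camera.capture(f"/mnt/pidrive/frame_{frame:06d}.jpg")
        frame += 1
        time.sleep(INTERVAL_S)
finally:
    camera.close()                   # release the camera on exit
```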
Machine vision systems using machine learning for industrial product inspection
NASA Astrophysics Data System (ADS)
Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony
2002-02-01
Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages: Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF stage is designed to learn visual inspection features from design data and/or from inspected products. During the OLI stage, the inspection system uses the knowledge learned by the LIF component to inspect the visual features of products. We present two machine vision inspection systems developed under the SMV architecture for two different types of products: Printed Circuit Boards (PCBs) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies, and the PCB inspection system is in the process of being deployed in a manufacturing plant.
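The two-stage structure can be caricatured as: learn per-pixel reference statistics from known-good boards (LIF), then flag deviations on-line (OLI). The sketch below uses simple mean/standard-deviation statistics, far cruder than the learned features described, purely to show the division of labor:

```python
# Minimal two-stage inspection sketch: an off-line learning stage and an
# on-line inspection stage. The statistics used here are an assumption,
# not the SMV architecture's actual learned features.
import numpy as np

def learn_features(good_images):
    """LIF stage: per-pixel mean/std learned from known-good examples."""
    stack = np.stack(good_images).astype(float)
    return {"mean": stack.mean(axis=0), "std": stack.std(axis=0) + 1e-6}

def inspect(image, model, k=4.0):
    """OLI stage: flag pixels deviating more than k sigma from the reference."""
    return np.abs(image.astype(float) - model["mean"]) > k * model["std"]

model = learn_features([np.random.rand(64, 64) for _ in range(10)])
defect_mask = inspect(np.random.rand(64, 64), model)
print(defect_mask.sum(), "suspect pixels")
```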
Scarpina, Federica; Melzi, Lisa; Castelnuovo, Gianluca; Mauro, Alessandro; Marzoli, Stefania B.; Molinari, Enrico
2018-01-01
Non-organic vision loss (NOVL), a functional partial or global vision loss, might be considered a manifestation of conversion disorder. The few previous studies focused on investigating the relationship between cerebral activity and subjective symptoms in NOVL; however, emotional processing is still neglected. In the present case-control study, we investigated the capability of two individuals diagnosed with NOVL to implicitly recognize the emotions of fear and anger; this was assessed through a facial emotion recognition task based on the redundant target effect. In addition, the level of alexithymia was measured by asking them to judge explicitly their ability to identify and describe emotions. Both individuals showed selective difficulties in recognizing the emotion of fear when their performance was contrasted with a matched control sample; they also mislabeled other emotional stimuli, judging them as fearful when they were not. However, they did not report alexithymia when measured using a standard questionnaire. This preliminary investigation reports a mismatch between the implicit (i.e., the behavior in the experimental paradigm) and the explicit (i.e., the subjective evaluation of one's own emotional capability) components of emotional processing in NOVL. Moreover, fear seems to represent a critical emotion in this condition, as has been reported in other psychiatric disorders. However, possible difficulties in the emotional processing of fear would emerge only when they are inferred from implicit behavior, rather than from a subjective evaluation of one's own emotional processing capability. PMID:29692751
Design issues for stereo vision systems used on tele-operated robotic platforms
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, Jim; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad
2010-02-01
The use of tele-operated Unmanned Ground Vehicles (UGVs) for military missions has grown significantly in recent years, with operations in both Iraq and Afghanistan. In both cases the safety of the Soldier or technician performing the mission is improved by the large standoff distances afforded by the UGV, but the full performance capability of the robotic system is not utilized, because the standard two-dimensional video system provides insufficient depth perception, forcing the operator to slow the mission to ensure the safety of the UGV given the uncertainty of the perceived scene. To address this, Polaris Sensor Technologies has developed, in a series of efforts funded by the Leonard Wood Institute at Ft. Leonard Wood, MO, a prototype Stereo Vision Upgrade (SVU) Kit for the Foster-Miller TALON IV robot which provides the operator with improved depth perception and situational awareness, allowing for shorter mission times and higher success rates. Because multiple 2D cameras are being replaced by stereo camera systems in the SVU Kit, and because the needs of the camera systems vary with each phase of a mission, there are a number of tradeoffs and design choices that must be made in developing such a system for robotic tele-operation. Additionally, human factors design criteria drive optical parameters of the camera systems, which must be matched to the display system being used. The problem space for such an upgrade kit is defined, and the choices made in the development of this particular SVU Kit are discussed.
NASA Technical Reports Server (NTRS)
Dorsey, John T.; Jones, Thomas C.; Doggett, W. R.; Brady, Jeffrey S.; Berry, Felecia C.; Ganoe, George G.; Anderson, Eric; King, Bruce D.; Mercer, David C.
2011-01-01
The first generation of a versatile high-performance device for performing payload handling and assembly operations on planetary surfaces, the Lightweight Surface Manipulation System (LSMS), has been designed and built. Over the course of its development, conventional crane-type payload handling configurations and operations have been successfully demonstrated, and the range of motion, types of operations, and versatility have been greatly expanded. This enhanced set of 1st-generation LSMS hardware is now serving as a laboratory test-bed allowing the continuing development of end effectors, operational techniques, and remotely controlled and automated operations. This paper describes the most recent LSMS and test-bed development activities, which have focused on two major efforts. The first was to complete a preliminary design of the 2nd-generation LSMS, which has the capability for limited mobility and can reposition itself between lander decks, mobility chassis, and fixed base locations. A major portion of this effort involved conducting a study to establish the feasibility of, and define the specifications for, a lightweight cable-drive waist joint. The second effort was to continue expanding the versatility and autonomy of large planetary surface manipulators using the 1st-generation LSMS as a test-bed. This has been accomplished by increasing manipulator capabilities and efficiencies through both design changes and tool and end effector development. A software development effort has expanded the operational capabilities of the LSMS test-bed to include autonomous operations based on stored paths, use of a vision system for target acquisition and tracking, and remote command and control over a communications bridge.
NASA Technical Reports Server (NTRS)
Salomonson, Vincent V.
1999-01-01
In the near term NASA is entering the peak activity period of the Earth Observing System (EOS). The EOS AM-1/"Terra" spacecraft is nearing launch and operation, to be followed soon by the New Millennium Program (NMP) Earth Observing (EO-1) mission. Other missions related to land imaging and studies include the EOS PM-1 mission, the Earth System Sciences Program (ESSP) Vegetation Canopy Lidar (VCL) mission, and the EOS/IceSat mission. These missions involve clear advances in technologies and observational capability, including improvements in multispectral imaging and other observing strategies, for example "formation flying". Plans are underway to define the next era of EOS missions, commonly called "EOS Follow-on" or EOS II. The programmatic planning includes concepts that represent advances over the present Landsat-7 mission while recognizing the advances being made in land imaging within the private sector. The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP) is an effort that will help to transition EOS medium-resolution (herein meaning spatial resolutions near 500 meters), multispectral measurement capabilities, such as those represented by the EOS Moderate Resolution Imaging Spectroradiometer (MODIS), into the NPOESS operational series of satellites. Developments in Synthetic Aperture Radar (SAR) and passive microwave land-observing capabilities are also proceeding. Beyond these efforts, the Earth Science Enterprise Technology Strategy is embarking on efforts to advance technologies in several basic areas: instruments, flight systems and operational capability, and information systems. In the case of instruments, architectures will be examined that offer significant reductions in mass, volume, and power, and greater observational flexibility. For flight systems and operational capability, the emphasis includes formation flying (including calibration and data fusion), systems operation autonomy, and mechanical and electronic innovations that can reduce spacecraft and subsystem resource requirements. The efforts in information systems will include better approaches for linking multiple data sets, extracting and visualizing information, and improvements in collecting, compressing, transmitting, processing, distributing, and archiving data from multiple platforms. Overall, concepts such as sensor webs, constellations of observing systems, and rapid, tailored data availability and delivery to multiple users comprise a notional Earth Science Vision for the future.
Helmet-mounted pilot night vision systems: Human factors issues
NASA Technical Reports Server (NTRS)
Hart, Sandra G.; Brickner, Michael S.
1989-01-01
Helmet-mounted displays of infrared imagery (forward-looking infrared, FLIR) allow helicopter pilots to perform low-level missions at night and in low visibility. However, pilots experience high visual and cognitive workload during these missions, and their performance capabilities may be reduced. Human factors problems inherent in existing systems stem from three primary sources: the nature of thermal imagery; the characteristics of specific FLIR systems; and the difficulty of using FLIR systems for flying and/or visually acquiring and tracking objects in the environment. The pilot night vision system (PNVS) in the Apache AH-64 provides a monochrome, 30 by 40 deg helmet-mounted display of infrared imagery. Thermal imagery is inferior to television imagery in both resolution and contrast ratio. Gray shades represent temperature differences rather than brightness variability, and images undergo significant changes over time. The limited field of view, displacement of the sensor from the pilot's eye position, and monocular presentation of a bright FLIR image (while the other eye remains dark-adapted) are all potential sources of disorientation, limitations in depth and distance estimation, sensations of apparent motion, and difficulties in target and obstacle detection. Insufficient information about human perceptual and performance limitations constrains the ability of human factors specialists to provide significantly improved specifications, training programs, or alternative designs. Additional research is required to determine the most critical problem areas and to propose solutions that consider the human as well as the development of technology.
Digital tripwire: a small automated human detection system
NASA Astrophysics Data System (ADS)
Fischer, Amber D.; Redd, Emmett; Younger, A. Steven
2009-05-01
A low-cost, lightweight, easily deployable imaging sensor that can dependably discriminate threats from other activities within its field of view and, only then, alert the distant duty officer by transmitting a visual confirmation of the threat would provide a valuable asset to modern defense. At present, current solutions suffer from a multitude of deficiencies--size, cost, power endurance, but most notably an inability to assess an image and conclude that it contains a threat. The human attention span cannot maintain critical surveillance over banks of displays constantly conveying such images from the field. DigitalTripwire is a small, self-contained, automated human-detection system capable of running for 1-5 days on two AA batteries. To achieve such long endurance, the DigitalTripwire system utilizes an FPGA designed with sleep functionality. The system uses robust vision algorithms, such as an innovative, partially unsupervised background-modeling algorithm, which employ several data-reduction strategies to operate in real time and achieve high detection rates. When it detects human activity, either mounted or dismounted, it sends an alert including images to notify the command center. In this paper, we describe the hardware and software design of the DigitalTripwire system. In addition, we provide detection and false-alarm rates across several challenging data sets, demonstrating the performance of the vision algorithms in autonomously analyzing the video stream and classifying moving objects into four primary categories: dismounted human, vehicle, non-human, or unknown.
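The core of such a detector is a background model that adapts slowly while flagging fast changes. A minimal running-average sketch (not the paper's algorithm, whose background modeling is partially unsupervised and FPGA-resident):

```python
# Running-average background subtraction: pixels far from the slowly updated
# background become foreground candidates. Parameters are assumptions.
import numpy as np

class BackgroundModel:
    def __init__(self, first_frame, lr=0.02, thresh=25.0):
        self.bg = first_frame.astype(float)   # initial background estimate
        self.lr, self.thresh = lr, thresh

    def step(self, frame):
        diff = np.abs(frame.astype(float) - self.bg)
        mask = diff > self.thresh                            # foreground pixels
        self.bg = (1 - self.lr) * self.bg + self.lr * frame  # slow adaptation
        return mask

frames = [np.full((48, 64), 100, np.uint8) for _ in range(5)]
frames[3][20:30, 30:40] = 200                 # simulated intruder in frame 3
bm = BackgroundModel(frames[0])
for f in frames[1:]:
    print(bm.step(f).sum(), "changed pixels")
```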
Pongrácz, Péter; Ujvári, Vera; Faragó, Tamás; Miklósi, Ádám; Péter, András
2017-07-01
The visual sense of dogs differs in many respects from that of humans. Unfortunately, authors do not explicitly take dog-human differences in visual perception into consideration when designing their experiments. With an image manipulation program we altered stationary images according to present knowledge about dog vision. Besides the effect of dogs' dichromatic vision, the software also shows the effects of their lower visual acuity and brightness discrimination. Fifty adult humans were tested with pictures showing a female experimenter pointing, gazing, or glancing to the left or right side. Half of the pictures were shown after they were altered to a setting that approximated dog vision. Participants had difficulty determining the direction of glancing when the pictures were in dog-vision mode. Glances in the dog-vision setting were followed less correctly and with a slower response time than other cues. Our results are the first to show the visual performance of humans under circumstances that model how dogs' weaker vision would affect their responses in an ethological experiment. We urge researchers to take into consideration the differences between the perceptual abilities of dogs and humans by developing visual stimuli that fit dogs' visual capabilities more appropriately. Copyright © 2017 Elsevier B.V. All rights reserved.
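An approximation in the same spirit as the described image manipulation can be sketched in a few lines: merge the red and green channels (dichromacy), compress contrast (coarser brightness discrimination), and blur (lower acuity). The parameters below are rough assumptions, not the study's calibrated settings:

```python
# Rough dog-vision approximation of a stimulus image. All parameter values
# (contrast factor, blur radius) are illustrative assumptions.
import numpy as np
from PIL import Image, ImageFilter

def dog_vision(img):
    arr = np.asarray(img.convert("RGB")).astype(float)
    ry = (arr[..., 0] + arr[..., 1]) / 2      # merge red+green: dichromatic image
    arr[..., 0] = arr[..., 1] = ry
    arr = 127 + (arr - 127) * 0.7             # reduced brightness discrimination
    out = Image.fromarray(arr.clip(0, 255).astype(np.uint8))
    return out.filter(ImageFilter.GaussianBlur(radius=4))   # lower acuity

# dog_view = dog_vision(Image.open("stimulus.jpg"))  # hypothetical input file
```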
Behavioural evidence for colour vision in an elasmobranch.
Van-Eyk, Sarah M; Siebeck, Ulrike E; Champ, Connor M; Marshall, Justin; Hart, Nathan S
2011-12-15
Little is known about the sensory abilities of elasmobranchs (sharks, skates and rays) compared with other fishes. Despite their role as apex predators in most marine and some freshwater habitats, interspecific variations in visual function are especially poorly studied. Of particular interest is whether they possess colour vision and, if so, the role(s) that colour may play in elasmobranch visual ecology. The recent discovery of three spectrally distinct cone types in three different species of ray suggests that at least some elasmobranchs have the potential for functional trichromatic colour vision. However, in order to confirm that these species possess colour vision, behavioural experiments are required. Here, we present evidence for the presence of colour vision in the giant shovelnose ray (Glaucostegus typus) through the use of a series of behavioural experiments based on visual discrimination tasks. Our results show that these rays are capable of discriminating coloured reward stimuli from other coloured (unrewarded) distracter stimuli of variable brightness with a success rate significantly different from chance. This study represents the first behavioural evidence for colour vision in any elasmobranch, using a paradigm that incorporates extensive controls for relative stimulus brightness. The ability to discriminate colours may have a strong selective advantage for animals living in an aquatic ecosystem, such as rays, as a means of filtering out surface-wave-induced flicker.
Application of aircraft navigation sensors to enhanced vision systems
NASA Technical Reports Server (NTRS)
Sweet, Barbara T.
1993-01-01
In this presentation, the applicability of various aircraft navigation sensors to enhanced vision system design is discussed. First, the accuracy requirements of the FAA for precision landing systems are presented, followed by the current navigation systems and their characteristics. These systems include Instrument Landing System (ILS), Microwave Landing System (MLS), Inertial Navigation, Altimetry, and Global Positioning System (GPS). Finally, the use of navigation system data to improve enhanced vision systems is discussed. These applications include radar image rectification, motion compensation, and image registration.
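Of the applications listed, image registration is the easiest to sketch: align a sensor image to a reference via matched features and a homography, as one might before fusing it into an enhanced vision display. The sketch below uses OpenCV with placeholder file names; the systems discussed would register against navigation-derived predictions rather than arbitrary references:

```python
# Feature-based image registration sketch with OpenCV. File names are
# placeholders; this is a generic illustration, not the presented system.
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
img = cv2.imread("sensor.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(1000)
k1, d1 = orb.detectAndCompute(ref, None)
k2, d2 = orb.detectAndCompute(img, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
matches = sorted(matches, key=lambda m: m.distance)[:100]   # best matches

src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)        # robust fit
registered = cv2.warpPerspective(img, H, ref.shape[::-1])   # align to reference
```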
Máthé, Koppány; Buşoniu, Lucian
2015-01-01
Unmanned aerial vehicles (UAVs) have gained significant attention in recent years. Low-cost platforms using inexpensive sensor payloads have been shown to provide satisfactory flight and navigation capabilities. In this report, we survey vision and control methods that can be applied to low-cost UAVs, and we list some popular inexpensive platforms and application fields where they are useful. We also highlight the sensor suites used where this information is available. We overview, among others, feature detection and tracking, optical flow and visual servoing, low-level stabilization and high-level planning methods. We then list popular low-cost UAVs, selecting mainly quadrotors. We discuss applications, restricting our focus to the field of infrastructure inspection. Finally, as an example, we formulate two use-cases for railway inspection, a less explored application field, and illustrate the usage of the vision and control techniques reviewed by selecting appropriate ones to tackle these use-cases. To select vision methods, we run a thorough set of experimental evaluations. PMID:26121608
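As an example of one surveyed building block, dense optical flow between consecutive frames gives a crude apparent-motion estimate usable for stabilization; a minimal OpenCV sketch with placeholder frame files:

```python
# Dense optical flow between two consecutive frames (Farneback method).
# Frame file names are placeholders for a live UAV camera stream.
import cv2

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
# flow[y, x] holds the (dx, dy) motion of each pixel between the two frames;
# its mean gives a crude whole-image motion estimate for stabilization.
print(flow.mean(axis=(0, 1)))
```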
Tactical Cyber: Building A Strategy For Cyber Support To Corps And Below
Future U.S. Army cyber operations will need to be conducted jointly and at all echelons and must include both defensive and offensive components. The Army is now developing doctrine, concepts, and capabilities to conduct and support tactical cyber operations. We propose the following vision statement: The Army will be able to employ organic cyber capabilities at the tactical echelon, with dedicated personnel, in support of tactical units while
FLORA™: Phase I development of a functional vision assessment for prosthetic vision users
Geruschat, Duane R; Flax, Marshall; Tanna, Nilima; Bianchi, Michelle; Fisher, Andy; Goldschmidt, Mira; Fisher, Lynne; Dagnelie, Gislin; Deremeik, Jim; Smith, Audrey; Anaflous, Fatima; Dorn, Jessy
2014-01-01
Background: Research groups and funding agencies need a functional assessment suitable for an ultra-low-vision population in order to evaluate the impact of new vision restoration treatments. The purpose of this study was to develop a pilot assessment to capture the functional vision ability and well-being of subjects whose vision has been partially restored with the Argus II Retinal Prosthesis System. Methods: The Functional Low-Vision Observer Rated Assessment (FLORA) pilot assessment involved a self-report section, a list of functional vision tasks for observation of performance, and a case narrative summary. Results were analyzed to determine whether the interview questions and functional vision tasks were appropriate for this ultra-low-vision population and whether the ratings suffered from floor or ceiling effects. Thirty subjects with severe to profound retinitis pigmentosa (bare light perception or worse in both eyes) were enrolled in a clinical trial and implanted with the Argus II System. From this population, twenty-six subjects were assessed with the FLORA. Seven different evaluators administered the assessment. Results: All 14 interview questions were asked, and all 35 functional vision tasks were selected for evaluation at least once, with an average of 20 subjects evaluated per test item. All four rating options -- impossible (33%), difficult (23%), moderate (24%), and easy (19%) -- were used by the evaluators. Evaluators also judged the amount of vision subjects used to complete the various tasks: on average, vision was used 75% of the time with the System ON and 29% with the System OFF. Conclusion: The first version of the FLORA was found to contain useful elements for evaluation and to avoid floor and ceiling effects. The next phase of development will be to refine the assessment and to establish reliability and validity, to increase its value as a functional vision and well-being assessment tool. PMID:25675964
Paina, Ligia; Vadrevu, Lalitha; Hanifi, S M Manzoor Ahmed; Akuze, Joseph; Rieder, Rachel; Chan, Kitty S; Peters, David H
2016-11-15
While community capabilities are recognized as important factors in developing resilient health systems and communities, appropriate metrics for these have not yet been developed. Furthermore, the role of community capabilities in access to maternal health services has been underexplored. In this paper, we summarize the development of a community capability score based on the Future Health Systems (FHS) project's experience in Bangladesh, India, and Uganda, and examine the role of community capabilities as determinants of institutional delivery in these three contexts. We developed a community capability score using a pooled dataset containing cross-sectional household survey data from Bangladesh, India, and Uganda. Our main outcome of interest was whether the woman delivered in an institution. Our predictor variables included the community capability score as well as a series of previously identified determinants of maternal health. We calculated both population-averaged effects (using GEE logistic regression) and sub-national-level effects (using a mixed-effects model). Our final sample for analysis included 2775 women, of which 1238 were from Bangladesh, 1199 from India, and 338 from Uganda. We found that individual-level determinants of institutional delivery, such as maternal education, parity, and antenatal care access, were significant in our analysis and had a strong impact on a woman's odds of delivering in an institution. We also found that, in addition to individual-level determinants, greater community capability was significantly associated with higher odds of institutional delivery: for every additional capability, the odds of institutional delivery increased by up to almost 6%. Individual-level characteristics are strong determinants of whether a woman delivers in an institution, but community capability also plays an important role and should be taken into account when designing programs and interventions to support institutional deliveries. Consideration of individual factors and of the capabilities of the communities in which people live would contribute to the vision of supporting people-centered approaches to health.
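A population-averaged model of the kind described can be sketched with statsmodels' GEE implementation; the data below are simulated (with an odds ratio near the reported ~6% per capability) and all column names are hypothetical:

```python
# GEE logistic regression sketch: institutional delivery vs. capability score,
# clustered by community. Data are simulated; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "capability": rng.integers(0, 9, n),    # count of community capabilities
    "education":  rng.integers(0, 2, n),    # maternal education (binary)
    "community":  rng.integers(0, 30, n),   # cluster identifier
})
# Simulate ~6% higher odds of institutional delivery per extra capability
logit = -0.5 + np.log(1.06) * df["capability"] + 0.8 * df["education"]
df["delivered"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.gee("delivered ~ capability + education",
                groups="community", data=df,
                family=sm.families.Binomial())
res = model.fit()
print(np.exp(res.params["capability"]))  # odds ratio per additional capability
```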
Night vision: changing the way we drive
NASA Astrophysics Data System (ADS)
Klapper, Stuart H.; Kyle, Robert J. S.; Nicklin, Robert L.; Kormos, Alexander L.
2001-03-01
A revolutionary new Night Vision System has been designed to help drivers see well beyond their headlights. From luxury automobiles to heavy trucks, Night Vision is helping drivers see better, see further, and react sooner. This paper describes how Night Vision Systems are being used in transportation and their viability for the future. It describes recent improvements to the system currently in the second year of production. It also addresses consumer education and awareness, cost reduction, product reliability, market expansion and future improvements.
A Machine Vision System for Automatically Grading Hardwood Lumber - (Proceedings)
Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; Philip A. Araman; Robert L. Brisbon
1990-01-01
Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...