Sample records for enhanced vision system

  1. 78 FR 5557 - Twenty-First Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-25

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held...

  2. 78 FR 16756 - Twenty-Second Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-18

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April...

  3. 78 FR 55774 - Twenty-Fourth Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-11

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held October...

  4. 75 FR 17202 - Eighth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-05

    ... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY...-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing...: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April...

  5. 75 FR 44306 - Eleventh Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-28

    ... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The...

  6. 75 FR 71183 - Twelfth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-22

    ... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY...: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will...

  7. 76 FR 11847 - Thirteenth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-03

    ... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS...

  8. 76 FR 20437 - Fourteenth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-12

    ... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS...

  9. 77 FR 16890 - Eighteenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-22

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April 17-19...

  10. 77 FR 56254 - Twentieth Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-12

    ... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held October 2-4...

  11. 75 FR 38863 - Tenth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-06

    ... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY...-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing...: Enhanced Flight...

  12. 77 FR 2342 - Seventeenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision/Synthetic Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-17

    ... Committee 213, Enhanced Flight Vision/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation..., Enhanced Flight Vision/ Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing this notice to advise the public of the seventeenth meeting of RTCA Special Committee 213, Enhanced Flight Vision...

  13. Going Below Minimums: The Efficacy of Display Enhanced/Synthetic Vision Fusion for Go-Around Decisions during Non-Normal Operations

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.

    2007-01-01

    The use of enhanced vision systems in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting approach and landing operations. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved enhanced flight vision system that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of synthetic vision systems and enhanced vision system technologies, focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under these newly adopted rules. Experimental results specific to flight crew response to non-normal events using the fused synthetic/enhanced vision system are presented.

  14. MMW radar enhanced vision systems: the Helicopter Autonomous Landing System (HALS) and Radar-Enhanced Vision System (REVS) are rotary and fixed wing enhanced flight vision systems that enable safe flight operations in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Cross, Jack; Schneider, John; Cariani, Pete

    2013-05-01

    Sierra Nevada Corporation (SNC) has developed rotary- and fixed-wing millimeter-wave radar enhanced vision systems. The Helicopter Autonomous Landing System (HALS) is a rotary-wing enhanced vision system that enables multi-ship landing, takeoff, and enroute flight in Degraded Visual Environments (DVE). HALS has been successfully flight tested in a variety of scenarios, from brown-out DVE landings, to enroute flight over mountainous terrain, to wire/cable detection during low-level flight. The Radar-Enhanced Vision System (REVS) is a fixed-wing Enhanced Flight Vision System (EFVS) undergoing prototype development testing. Both systems are based on a fast-scanning, three-dimensional 94 GHz radar that produces real-time terrain and obstacle imagery. The radar imagery is fused with synthetic imagery of the surrounding terrain to form a long-range, wide field-of-view display. A symbology overlay is added to provide aircraft state information and, for HALS, approach and landing command guidance cuing. The combination of see-through imagery and symbology provides the key information a pilot needs to perform safe flight operations in DVE conditions. This paper discusses the HALS and REVS systems and technology, presents imagery, and summarizes the recent flight test results.

  15. Application of aircraft navigation sensors to enhanced vision systems

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.

    1993-01-01

    In this presentation, the applicability of various aircraft navigation sensors to enhanced vision system design is discussed. First, the accuracy requirements of the FAA for precision landing systems are presented, followed by the current navigation systems and their characteristics. These systems include Instrument Landing System (ILS), Microwave Landing System (MLS), Inertial Navigation, Altimetry, and Global Positioning System (GPS). Finally, the use of navigation system data to improve enhanced vision systems is discussed. These applications include radar image rectification, motion compensation, and image registration.
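
    As a rough illustration of the motion-compensation/rectification idea (a hypothetical sketch, not from the presentation: the attitude source, sign conventions, and pixels-per-degree scale are all assumptions), navigation-derived roll and pitch can be used to de-rotate and shift a sensor image before display:

      import cv2

      def rectify_sensor_image(img, roll_deg, pitch_deg, px_per_deg=12.0):
          # De-rotate about the image center to cancel aircraft roll; the
          # sign convention depends on sensor mounting and is assumed here.
          h, w = img.shape[:2]
          m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), -roll_deg, 1.0)
          # Shift vertically to cancel pitch, using an assumed sensor scale.
          m[1, 2] += pitch_deg * px_per_deg
          return cv2.warpAffine(img, m, (w, h))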

  16. Enhanced/Synthetic Vision Systems - Human factors research and implications for future systems

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Ahumada, Albert J.; Larimer, James; Sweet, Barbara T.

    1992-01-01

    This paper reviews recent human factors research studies conducted in the Aerospace Human Factors Research Division at NASA Ames Research Center related to the development and usage of Enhanced or Synthetic Vision Systems. Research discussed includes studies of field of view (FOV), representational differences of infrared (IR) imagery, head-up display (HUD) symbology, HUD advanced concept designs, sensor fusion, and sensor/database fusion and evaluation. Implications for the design and usage of Enhanced or Synthetic Vision Systems are discussed.

  17. 77 FR 36331 - Nineteenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision Systems/Synthetic Vision...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-18

    ... Document--Draft DO-XXX, Minimum Aviation Performance Standards (MASPS) for an Enhanced Flight Vision System... Discussion (9:00 a.m.-5:00 p.m.) Provide Comment Resolution of Document--Draft DO-XXX, Minimum Aviation.../Approve FRAC Draft for PMC Consideration--Draft DO-XXX, Minimum Aviation Performance Standards (MASPS...

  18. Driver's Enhanced Vision System (DEVS)

    DOT National Transportation Integrated Search

    1996-12-23

    This advisory circular (AC) contains performance standards, specifications, and recommendations for Driver's Enhanced Vision Systems (DEVS). The FAA recommends the use of the guidance in this publication for the design and installation of DEVS e...

  19. Commercial Flight Crew Decision-Making during Low-Visibility Approach Operations Using Fused Synthetic/Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Prinzel, Lawrence J., III

    2007-01-01

    NASA is investigating revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A fixed-base piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck on the crew's decision-making process during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, of itself, provide an improvement in runway incursion detection without being specifically tailored for this application. Existing enhanced vision system procedures were effectively used in the crew decision-making process during approach and missed approach operations, but having to forcibly transition from an excellent FLIR image to natural vision by 100 ft above field level was awkward for the pilot-flying.

  20. 77 FR 21861 - Special Conditions: Boeing, Model 777F; Enhanced Flight Vision System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-12

    ... System AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final special conditions; request for... with an advanced, enhanced flight vision system (EFVS). The EFVS consists of a head-up display (HUD) system modified to display forward-looking infrared (FLIR) imagery. The applicable airworthiness...

  21. Low Vision Enhancement System

    NASA Technical Reports Server (NTRS)

    1995-01-01

    NASA's Technology Transfer Office at Stennis Space Center worked with the Johns Hopkins Wilmer Eye Institute in Baltimore, Md., to incorporate software originally developed by NASA to process satellite images into the Low Vision Enhancement System (LVES). The LVES, referred to as 'ELVIS' by its users, is a portable image processing system that could make it possible to improve a person's vision by enhancing and altering images to compensate for impaired eyesight. The system consists of two orientation cameras, a zoom camera, and a video projection system. The headset and hand-held control weigh about two pounds each. Pictured is Jacob Webb, the first Mississippian to use the LVES.

  22. Enhanced Vision for All-Weather Operations Under NextGen

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Williams, Steven P.

    2010-01-01

    Recent research in Synthetic/Enhanced Vision technology is analyzed with respect to existing Category II/III performance and certification guidance. The goal is to start the development of performance-based vision systems technology requirements to support future all-weather operations and the NextGen goal of Equivalent Visual Operations. This work shows that existing criteria to operate in Category III weather and visibility are not directly applicable since, unlike today, the primary reference for maneuvering the airplane is based on what the pilot sees visually through the "vision system." New criteria are consequently needed. Several possible criteria are discussed, but more importantly, the factors associated with landing system performance using automatic and manual landings are delineated.

  23. Obstacles encountered in the development of the low vision enhancement system.

    PubMed

    Massof, R W; Rickman, D L

    1992-01-01

    The Johns Hopkins Wilmer Eye Institute and the NASA Stennis Space Center are collaborating on the development of a new high technology low vision aid called the Low Vision Enhancement System (LVES). The LVES consists of a binocular head-mounted video display system, video cameras mounted on the head-mounted display, and real-time video image processing in a system package that is battery powered and portable. Through a phased development approach, several generations of the LVES can be made available to the patient in a timely fashion. This paper describes the LVES project with major emphasis on technical problems encountered or anticipated during the development process.

  24. Evaluation of Fused Synthetic and Enhanced Vision Display Concepts for Low-Visibility Approach and Landing

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III; Wilz, Susan J.

    2009-01-01

    NASA is developing revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next generation air transportation system. A piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. Improvements in lateral path control performance were realized when the Head-Up Display concepts included a tunnel, independent of the imagery (enhanced vision or fusion of enhanced and synthetic vision) presented with it. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, of itself, provide an improvement in runway incursion detection without being specifically tailored for this application.

  25. Flight Test Evaluation of Situation Awareness Benefits of Integrated Synthetic Vision System Technology for Commercial Aircraft

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J., III

    2005-01-01

    Research was conducted onboard a Gulfstream G-V aircraft to evaluate integrated Synthetic Vision System concepts during flight tests over a 6-week period at the Wallops Flight Facility and Reno/Tahoe International Airport. The NASA Synthetic Vision System incorporates database integrity monitoring, runway incursion prevention alerting, surface maps, enhanced vision sensors, and advanced pathway guidance and synthetic terrain presentation. The paper details the goals and objectives of the flight test with a focus on the situation awareness benefits of integrating synthetic vision system enabling technologies for commercial aircraft.

  26. Synthetic Vision Workshop 2

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J. (Compiler)

    1999-01-01

    The second NASA-sponsored Workshop on Synthetic/Enhanced Vision (S/EV) Display Systems was conducted January 27-29, 1998, at the NASA Langley Research Center. The purpose of this workshop was to provide a forum for interested parties to discuss topics in the Synthetic Vision (SV) element of the NASA Aviation Safety Program and to encourage those interested parties to participate in the development, prototyping, and implementation of S/EV systems that enhance aviation safety. The SV element addresses the potential safety benefits of synthetic/enhanced vision display systems for low-end general aviation aircraft, high-end general aviation aircraft (business jets), and commercial transports. Attendance at this workshop consisted of about 112 persons including representatives from industry, the FAA, and other government organizations (NOAA, NIMA, etc.). The workshop provided opportunities for interested individuals to give presentations on the state of the art in potentially applicable systems, as well as to discuss areas of research that might be considered for inclusion within the Synthetic Vision Element program to contribute to the reduction of the fatal aircraft accident rate. Panel discussions on topical areas such as databases, displays, certification issues, and sensors were conducted, with time allowed for audience participation.

  27. Development and testing of the EVS 2000 enhanced vision system

    NASA Astrophysics Data System (ADS)

    Way, Scott P.; Kerr, Richard; Imamura, Joe J.; Arnoldy, Dan; Zeylmaker, Richard; Zuro, Greg

    2003-09-01

    An effective enhanced vision system must operate over a broad spectral range in order to offer a pilot an optimized scene that includes runway background as well as airport lighting and aircraft operations. The large dynamic range of intensities of these images is best handled with separate imaging sensors. The EVS 2000 is a patented dual-band Infrared Enhanced Vision System (EVS) utilizing image fusion concepts to provide a single image from uncooled infrared imagers in both the LWIR and SWIR. The system is designed to provide commercial and corporate airline pilots with improved situational awareness at night and in degraded weather conditions. A prototype of this system was recently fabricated and flown on the Boeing Advanced Technology Demonstrator 737-900 aircraft. This paper will discuss the current EVS 2000 concept, show results taken from the Boeing Advanced Technology Demonstrator program, and discuss future plans for EVS systems.

  28. Enhanced operator perception through 3D vision and haptic feedback

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren

    2012-06-01

    Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri, it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.

  29. Fusion of Synthetic and Enhanced Vision for All-Weather Commercial Aviation Operations

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence, III

    2007-01-01

    NASA is developing revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck during low visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was not adversely impacted by the display concepts, although the addition of Enhanced Vision did not, of itself, provide an improvement in runway incursion detection.

  30. Using Vision System Technologies for Offset Approaches in Low Visibility Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K.

    2015-01-01

    Flight deck-based vision systems, such as Synthetic Vision Systems (SVS) and Enhanced Flight Vision Systems (EFVS), have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in Next Generation Air Transportation System low visibility approach and landing operations at Chicago O'Hare airport. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and three instrument approach types (straight-in, 3-degree offset, 15-degree offset) were experimentally varied to test the efficacy of the SVS/EFVS HUD concepts for offset approach operations. The findings suggest that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD appears feasible. Regardless of offset approach angle or HUD concept being flown, all approaches had comparable ILS tracking during the instrument segment and were within the lateral confines of the runway with acceptable sink rates during the visual segment of the approach. Keywords: Enhanced Flight Vision Systems; Synthetic Vision Systems; Head-up Display; NextGen

  31. Flight Test Comparison Between Enhanced Vision (FLIR) and Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.

    2005-01-01

    Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA s Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA s Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision Systems (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.

  32. An augmented-reality edge enhancement application for Google Glass.

    PubMed

    Hwang, Alex D; Peli, Eli

    2014-08-01

    Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass, which overlays enhanced edge information over the wearer's real-world view, to provide contrast-improved central vision to the Glass wearers. The enhanced central vision can be naturally integrated with scanning. Google Glass' camera lens distortions were corrected by image warping. Because the camera and virtual display are horizontally separated by 16 mm, and the camera aiming and virtual display projection angle are off by 10°, the warped camera image had to go through a series of three-dimensional transformations to minimize parallax errors before the final projection to the Glass' see-through virtual display. All image processes were implemented to achieve near real-time performance. The impacts of the contrast enhancements were measured for three normal-vision subjects, with and without a diffuser film to simulate vision loss. For all three subjects, significantly improved contrast sensitivity was achieved when the subjects used the edge enhancements with a diffuser film. The performance boost is limited by the Glass camera's performance. The authors assume that this accounts for why performance improvements were observed only with the diffuser filter condition (simulating low vision). Improvements were measured with simulated visual impairments. With the benefit of see-through augmented reality edge enhancement, a natural visual scanning process is possible, suggesting that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration.
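
    As a rough sketch of the edge-overlay idea described above (not the authors' implementation; the Canny thresholds and blending gain are illustrative assumptions), detected edges can be brightened and blended back over the live camera view:

      import cv2

      def edge_enhanced(frame, low=50, high=150, gain=0.8):
          # Extract edges from a grayscale copy of the frame.
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          edges = cv2.cvtColor(cv2.Canny(gray, low, high), cv2.COLOR_GRAY2BGR)
          # Overlay bright edges on the original view to raise local contrast.
          return cv2.addWeighted(frame, 1.0, edges, gain, 0)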

  33. Help for the Visually Impaired

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Low Vision Enhancement System (LVES) is a video headset that offers people with low vision a view of their surroundings equivalent to the image on a five-foot television screen four feet from the viewer. It will not make the blind see, but for many people with low vision it eases everyday activities such as reading, watching TV and shopping. LVES was developed over almost a decade of cooperation between Stennis Space Center, the Wilmer Eye Institute of the Johns Hopkins Medical Institutions, the Department of Veterans Affairs, and Visionics Corporation. With the aid of Stennis scientists, Wilmer researchers used NASA technology for computer processing of satellite images and head-mounted vision enhancement systems originally intended for the space station. The unit consists of a head-mounted video display, three video cameras, and a control unit for the cameras. The cameras feed images to the video display in the headset.

  34. Image processing for flight crew enhanced situation awareness

    NASA Technical Reports Server (NTRS)

    Roberts, Barry

    1993-01-01

    This presentation describes the image processing work that is being performed for the Enhanced Situational Awareness System (ESAS) application. Specifically, the presented work supports the Enhanced Vision System (EVS) component of ESAS.

  35. Analysis of Risk Compensation Behavior on Night Vision Enhancement System

    NASA Astrophysics Data System (ADS)

    Hiraoka, Toshihiro; Masui, Junya; Nishikawa, Seimei

    Advanced driver assistance systems (ADAS), such as a forward obstacle collision warning system (FOCWS) and a night vision enhancement system (NVES), aim to decrease the driver's mental workload and enhance vehicle safety by providing useful information that supports the driver's perception and judgment processes. On the other hand, the risk homeostasis theory (RHT) cautions that enhanced safety and reduced risk can induce risk compensation behavior, such as increasing vehicle velocity. The present paper therefore reports driving simulator experiments that examine dependence on the NVES and the emergence of risk compensation behavior. Moreover, we verified the side effects of spontaneous behavioral adaptation, induced by presentation of a fuel-consumption meter, on risk compensation behavior.

  36. Enhanced/synthetic vision and head-worn display technologies for terminal maneuvering area NextGen operations

    NASA Astrophysics Data System (ADS)

    Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Williams, Steven P.; Bailey, Randall E.; Shelton, Kevin J.; Norman, R. Mike

    2011-06-01

    NASA is researching innovative technologies for the Next Generation Air Transportation System (NextGen) to provide a "Better-Than-Visual" (BTV) capability as adjunct to "Equivalent Visual Operations" (EVO); that is, airport throughputs equivalent to those normally achieved during Visual Flight Rules (VFR) operations, with equivalent or better safety in all weather and visibility conditions, including Instrument Meteorological Conditions (IMC). These new technologies build on proven flight deck systems and leverage synthetic and enhanced vision systems. Two piloted simulation studies were conducted to assess the use of a Head-Worn Display (HWD) with head tracking for synthetic and enhanced vision systems concepts. The first experiment evaluated the use of a HWD for equivalent visual operations to San Francisco International Airport (airport identifier: KSFO) compared to a visual concept and a head-down display concept. The second experiment evaluated symbology variations under different visibility conditions using a HWD during taxi operations at Chicago O'Hare airport (airport identifier: KORD). While flying a closely-spaced parallel approach to KSFO, pilots rated the HWD, under low-visibility conditions, equivalent to the out-the-window condition, under unlimited visibility, in terms of situational awareness (SA) and mental workload compared to a head-down enhanced vision system. There were no differences among the three display concepts in terms of traffic spacing and distance and the pilot decision-making to land or go around. For the KORD experiment, the visibility condition was not a factor in pilots' ratings of clutter effects from symbology. Several concepts for enhanced implementations of an unlimited field-of-regard BTV concept for low-visibility surface operations were determined to be equivalent in pilot ratings of efficacy and usability.

  37. Enhanced/Synthetic Vision and Head-Worn Display Technologies for Terminal Maneuvering Area NextGen Operations

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Prinzel, Lawrence J.; Williams, Steven P.; Bailey, Randall E.; Shelton, Kevin J.; Norman, R. Mike

    2011-01-01

    NASA is researching innovative technologies for the Next Generation Air Transportation System (NextGen) to provide a "Better-Than-Visual" (BTV) capability as adjunct to "Equivalent Visual Operations" (EVO); that is, airport throughputs equivalent to those normally achieved during Visual Flight Rules (VFR) operations, with equivalent or better safety in all weather and visibility conditions, including Instrument Meteorological Conditions (IMC). These new technologies build on proven flight deck systems and leverage synthetic and enhanced vision systems. Two piloted simulation studies were conducted to assess the use of a Head-Worn Display (HWD) with head tracking for synthetic and enhanced vision systems concepts. The first experiment evaluated the use of a HWD for equivalent visual operations to San Francisco International Airport (airport identifier: KSFO) compared to a visual concept and a head-down display concept. The second experiment evaluated symbology variations under different visibility conditions using a HWD during taxi operations at Chicago O'Hare airport (airport identifier: KORD). While flying a closely-spaced parallel approach to KSFO, pilots rated the HWD, under low-visibility conditions, equivalent to the out-the-window condition, under unlimited visibility, in terms of situational awareness (SA) and mental workload compared to a head-down enhanced vision system. There were no differences among the three display concepts in terms of traffic spacing and distance and the pilot decision-making to land or go around. For the KORD experiment, the visibility condition was not a factor in pilots' ratings of clutter effects from symbology. Several concepts for enhanced implementations of an unlimited field-of-regard BTV concept for low-visibility surface operations were determined to be equivalent in pilot ratings of efficacy and usability.

  38. Multi-spectrum-based enhanced synthetic vision system for aircraft DVE operations

    NASA Astrophysics Data System (ADS)

    Kashyap, Sudesh K.; Naidu, V. P. S.; Shanthakumar, N.

    2016-04-01

    This paper focuses on R&D being carried out at CSIR-NAL on an Enhanced Synthetic Vision System (ESVS) for Indian regional transport aircraft, intended to enhance all-weather operational capabilities with improvements in safety and pilot Situation Awareness (SA). A flight simulator has been developed to study ESVS-related technologies, to develop ESVS operational concepts for all-weather approach and landing, and to provide quantitative and qualitative information that could be used to develop criteria for all-weather approach and landing at regional airports in India. An Enhanced Vision System (EVS) hardware prototype with a long-wave infrared sensor and a low-light CMOS camera was used to carry out a few field trials on a ground vehicle on an airport runway under different visibility conditions. A data acquisition and playback system has been developed to capture EVS sensor data (images) in time sync with the test vehicle's inertial navigation data during EVS field experiments, and to play back the experimental data on the ESVS flight simulator for ESVS research and concept studies. Efforts are underway to conduct EVS flight experiments on CSIR-NAL's research aircraft HANSA in a Degraded Visual Environment (DVE).

  39. Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real-time and displayed on monitors on-board the aircraft. With proper processing, the camera system can provide better-than-human-observed imagery, particularly during poor visibility conditions. However, obtaining this goal requires several different stages of processing, including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion
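
    The processing chain named above (Retinex-style enhancement, affine registration, weighted-sum fusion) can be outlined in a few lines. This is a minimal single-scale sketch with assumed parameters, not the LaRC DSP implementation:

      import cv2
      import numpy as np

      def single_scale_retinex(img, sigma=40.0):
          # log(image) minus log(Gaussian-estimated illumination), rescaled to 8 bits.
          f = img.astype(np.float32) + 1.0
          illum = cv2.GaussianBlur(f, (0, 0), sigma)
          r = np.log(f) - np.log(illum)
          return cv2.normalize(r, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

      def fuse(ir, vis, m, w_ir=0.6):
          # Warp the visible image into the IR frame with a 2x3 affine matrix m,
          # then combine the two enhanced images with a weighted sum.
          h, w = ir.shape[:2]
          vis_reg = cv2.warpAffine(vis, m, (w, h))
          return cv2.addWeighted(single_scale_retinex(ir), w_ir,
                                 single_scale_retinex(vis_reg), 1.0 - w_ir, 0)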

  40. An Augmented-Reality Edge Enhancement Application for Google Glass

    PubMed Central

    Hwang, Alex D.; Peli, Eli

    2014-01-01

    Purpose Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass, which overlays enhanced edge information over the wearer's real-world view, to provide contrast-improved central vision to the Glass wearers. The enhanced central vision can be naturally integrated with scanning. Methods Google Glass's camera lens distortions were corrected by image warping. Since the camera and virtual display are horizontally separated by 16 mm, and the camera aiming and virtual display projection angle are off by 10°, the warped camera image had to go through a series of 3D transformations to minimize parallax errors before the final projection to the Glass' see-through virtual display. All image processes were implemented to achieve near real-time performance. The impacts of the contrast enhancements were measured for three normal-vision subjects, with and without a diffuser film to simulate vision loss. Results For all three subjects, significantly improved contrast sensitivity was achieved when the subjects used the edge enhancements with a diffuser film. The performance boost is limited by the Glass camera's performance. The authors assume this accounts for why performance improvements were observed only with the diffuser filter condition (simulating low vision). Conclusions Improvements were measured with simulated visual impairments. With the benefit of see-through augmented reality edge enhancement, a natural visual scanning process is possible, suggesting that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration. PMID:24978871

  41. Crew and Display Concepts Evaluation for Synthetic/Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III

    2006-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that strive to eliminate low-visibility conditions as a causal factor to civil aircraft accidents and replicate the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. Enhanced Vision System (EVS) technologies are analogous and complementary in many respects to SVS, with the principal difference being that EVS is an imaging sensor presentation, as opposed to a database-derived image. The use of EVS in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting operations to civil airports. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved EVS that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of SVS and EVS technologies, specifically focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under the newly adopted FAA rules which provide operating credit for EVS. Overall, the experimental data showed that significant improvements in SA without concomitant increases in workload and display clutter could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying.

  42. Synthetic and Enhanced Vision System for Altair Lunar Lander

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Norman, Robert M.; Arthur, Jarvis J., III; Williams, Steven P.; Shelton, Kevin J.; Bailey, Randall E.

    2009-01-01

    Past research has demonstrated the substantial potential of synthetic and enhanced vision (SV, EV) for aviation (e.g., Prinzel & Wickens, 2009). These augmented visual-based technologies have been shown to significantly enhance situation awareness, reduce workload, enhance aviation safety (e.g., reduced propensity for controlled-flight-into-terrain accidents/incidents), and promote flight path control precision. The issues that drove the design and development of synthetic and enhanced vision have commonalities with other application domains, most notably entry, descent, and landing on the moon and other planetary surfaces. NASA has extended SV/EV technology for use in planetary exploration vehicles, such as the Altair Lunar Lander. This paper describes an Altair Lunar Lander SV/EV concept and associated research demonstrating the safety benefits of these technologies.

  43. Design and testing of a dual-band enhanced vision system

    NASA Astrophysics Data System (ADS)

    Way, Scott P.; Kerr, Richard; Imamura, Joseph J.; Arnoldy, Dan; Zeylmaker, Dick; Zuro, Greg

    2003-09-01

    An effective enhanced vision system must operate over a broad spectral range in order to offer a pilot an optimized scene that includes runway background as well as airport lighting and aircraft operations. The large dynamic range of intensities of these images is best handled with separate imaging sensors. The EVS 2000 is a patented dual-band Infrared Enhanced Vision System (EVS) utilizing image fusion concepts. It has the ability to provide a single image from uncooled infrared imagers combined with SWIR, NIR or LLLTV sensors. The system is designed to provide commercial and corporate airline pilots with improved situational awareness at night and in degraded weather conditions but can also be used in a variety of applications where the fusion of dual band or multiband imagery is required. A prototype of this system was recently fabricated and flown on the Boeing Advanced Technology Demonstrator 737-900 aircraft. This paper will discuss the current EVS 2000 concept, show results taken from the Boeing Advanced Technology Demonstrator program, and discuss future plans for the fusion system.

  44. Benefit from NASA

    NASA Image and Video Library

    1985-01-01

    NASA's image processing technology, an advanced computer technique to enhance images sent to Earth in digital form by distant spacecraft, helped develop a new vision screening process. The Ocular Vision Screening system, an important step in preventing vision impairment, is a portable device designed especially to detect eye problems in children through the analysis of retinal reflexes.

  45. Morphological features of the macerated cranial bones registered by the 3D vision system for potential use in forensic anthropology.

    PubMed

    Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena

    2013-01-01

    In this paper we present the potential usage of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and from that data builds a three-dimensional image of the surface. This method appeared to be accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data which can enhance estimation of sex from osteological material.

  46. Real-time Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.

    2005-01-01

    Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.

  47. Real-time enhanced vision system

    NASA Astrophysics Data System (ADS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.

    2005-05-01

    Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.

  48. Portable electronic vision enhancement systems in comparison with optical magnifiers for near vision activities: an economic evaluation alongside a randomized crossover trial.

    PubMed

    Bray, Nathan; Brand, Andrew; Taylor, John; Hoare, Zoe; Dickinson, Christine; Edwards, Rhiannon T

    2017-08-01

    To determine the incremental cost-effectiveness of portable electronic vision enhancement system (p-EVES) devices compared with optical low vision aids (LVAs), for improving near vision visual function, quality of life and well-being of people with a visual impairment. An AB/BA randomized crossover trial design was used. Eighty-two participants completed the study. Participants were current users of optical LVAs who had not tried a p-EVES device before and had a stable visual impairment. The trial intervention was the addition of a p-EVES device to the participant's existing optical LVA(s) for 2 months, and the control intervention was optical LVA use only, for 2 months. Cost-effectiveness and cost-utility analyses were conducted from a societal perspective. The mean cost of the p-EVES intervention was £448. Carer costs were £30 (4.46 hr) less for the p-EVES intervention compared with the LVA-only control. The mean difference in total costs was £417. Bootstrapping gave an incremental cost-effectiveness ratio (ICER) of £736 (95% CI £481 to £1525) for a 7% improvement in near vision visual function. Cost per quality-adjusted life year (QALY) ranged from £56,991 (lower 95% CI = £19,801) to £66,490 (lower 95% CI = £23,055). Sensitivity analysis varying the commercial price of the p-EVES device reduced ICERs by up to 75%, with cost per QALYs falling below £30,000. Portable electronic vision enhancement system (p-EVES) devices are likely to be a cost-effective use of healthcare resources for improving near vision visual function, but this does not translate into cost-effective improvements in quality of life, capability or well-being.
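
    For readers unfamiliar with the metric, an ICER is simply the cost difference between arms divided by the effect difference. The sketch below uses the abstract's £417 mean incremental cost; the 0.007 QALY gain is an illustrative assumption (chosen so the result lands near the reported cost-per-QALY range), not a figure from the trial:

      def icer(delta_cost, delta_effect):
          # Incremental cost-effectiveness ratio: extra cost per unit of extra effect.
          return delta_cost / delta_effect

      # GBP 417 incremental cost (from the abstract) over an assumed 0.007 QALY gain:
      print(round(icer(417.0, 0.007)))  # ~59571 GBP per QALY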

  49. Flight Deck-Based Delegated Separation: Evaluation of an On-Board Interval Management System with Synthetic and Enhanced Vision Technology

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Shelton, Kevin J.; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.; Norman, Robert M.; Ellis, Kyle K. E.; Barmore, Bryan E.

    2011-01-01

    An emerging Next Generation Air Transportation System concept - Equivalent Visual Operations (EVO) - can be achieved using electronic means to provide sufficient visibility of the external world and other required flight references on flight deck displays, enabling the safety, operational tempos, and visual flight rules (VFR)-like procedures of visual conditions in all weather conditions. Synthetic and enhanced flight vision system technologies are critical enabling technologies for EVO. Current research evaluated concepts for flight deck-based interval management (FIM) operations, integrated with Synthetic Vision and Enhanced Vision flight-deck displays and technologies. One concept involves delegated flight deck-based separation, in which the flight crews were paired with another aircraft and responsible for spacing and maintaining separation from the paired aircraft, termed "equivalent visual separation." The operation required the flight crews to acquire and maintain an "equivalent visual contact" as well as to conduct manual landings in low-visibility conditions. The paper describes results that evaluated the concept of EVO delegated separation, including an off-nominal scenario in which the lead aircraft was not able to conform to the assigned spacing, resulting in a loss of separation.

  50. Vision Algorithms to Determine Shape and Distance for Manipulation of Unmodeled Objects

    NASA Technical Reports Server (NTRS)

    Montes, Leticia; Bowers, David; Lumia, Ron

    1998-01-01

    This paper discusses the development of a robotic system for general use in an unstructured environment. This is illustrated through pick and place of randomly positioned, unmodeled objects. There are many applications for this project, including rock collection for the Mars Surveyor Program. This system is demonstrated with a Puma560 robot, Barrett hand, Cognex vision system, and Cimetrix simulation and control, all running on a PC. The demonstration consists of two processes: vision and robotics. The vision system determines the size and location of the unknown objects. The robotics part consists of moving the robot to the object, configuring the hand based on the information from the vision system, and then performing the pick/place operation. This work enhances and is a part of the Low Cost Virtual Collaborative Environment, which provides remote simulation and control of equipment.

  51. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
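
    A toy version of the pixelation step described above (the 10x6 grid is an assumed electrode count, not the AVS(2) configuration; thresholds are illustrative) is an edge-boosted downsample of each camera frame to the electrode array's dimensions:

      import cv2
      import numpy as np

      def pixelate_to_array(frame, cols=10, rows=6, scale=40):
          # Emphasize contrast transitions before decimation, since edges
          # survive better than texture at tens of electrodes.
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          boosted = np.maximum(gray, cv2.Canny(gray, 50, 150))
          # Average each block down to one value per electrode ...
          grid = cv2.resize(boosted, (cols, rows), interpolation=cv2.INTER_AREA)
          # ... and blow it back up only so a sighted developer can preview it.
          return cv2.resize(grid, (cols * scale, rows * scale),
                            interpolation=cv2.INTER_NEAREST)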

  12. Vision-based obstacle recognition system for automated lawn mower robot development

    NASA Astrophysics Data System (ADS)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have recently been used widely in various types of applications. Classification and recognition of a specific object with a vision system involve challenging tasks in the fields of image processing and artificial intelligence. The ability of a vision system to capture and process images efficiently is very important for any intelligent system, such as an autonomous robot. This paper focuses on the development of a vision system that could contribute to the development of an automated, vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that typically exist on a football field. The focus was on the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system, and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement, and edge detection were applied in the system. The results show that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
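
    The paper does not publish its code, but a comparable filter, edge-detection, and contour pipeline can be sketched with OpenCV. The thresholds, the minimum-area gate, and the function name below are placeholders rather than the paper's tuned values:

    ```python
    import cv2

    def detect_obstacles(path, min_area=500.0):
        """Illustrative filtering -> edge detection -> contour pipeline."""
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        smooth = cv2.GaussianBlur(img, (5, 5), 0)      # filtering
        edges = cv2.Canny(smooth, 50, 150)             # edge detection
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Keep contours large enough to plausibly be obstacles.
        return [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) >= min_area]
    ```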

  13. 76 FR 8278 - Special Conditions: Gulfstream Model GVI Airplane; Enhanced Flight Vision System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-14

    ... detected by infrared sensors can be much different from that detected by natural pilot vision. On a dark... by many imaging infrared systems. On the other hand, contrasting colors in visual wavelengths may be... of the EFVS image and the level of EFVS infrared sensor performance could depend significantly on...

  14. Awareness and Detection of Traffic and Obstacles Using Synthetic and Enhanced Vision Systems

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.

    2012-01-01

    The research literature is reviewed and summarized to evaluate the awareness and detection of traffic and obstacles when using Synthetic Vision Systems (SVS) and Enhanced Vision Systems (EVS). The study identifies the critical issues influencing the time required, accuracy, and pilot workload associated with recognizing and reacting to potential collisions or conflicts with other aircraft, vehicles, and obstructions during approach, landing, and surface operations. This work considers the effects of head-down and head-up display implementations of SVS and EVS as well as the influence of single- and dual-pilot operations. The influences and strategies of adding traffic information and cockpit alerting to SVS and EVS are also included. Based on this review, a knowledge-gap assessment was made, with recommendations for ground and flight testing to fill these gaps and hence promote the safe and effective implementation of SVS/EVS technologies for the Next Generation Air Transportation System.

  15. Visual cues in low-level flight - Implications for pilotage, training, simulation, and enhanced/synthetic vision systems

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Kaiser, Mary K.; Johnson, Walter W.

    1992-01-01

    This paper reviews some of the sources of visual information that are available in the out-the-window scene and describes how these visual cues are important for routine pilotage and training, as well as for the development of simulator visual systems and enhanced or synthetic vision systems for aircraft cockpits. It is shown how these visual cues may change or disappear under certain environmental or sensor conditions, and how the scene can be augmented by advanced displays to capitalize on the pilot's excellent ability to extract visual information.

  16. Using Vision System Technologies to Enable Operational Improvements for Low Visibility Approach and Landing Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Ellis, Kyle K. E.; Bailey, Randall E.; Williams, Steven P.; Severance, Kurt; Le Vie, Lisa R.; Comstock, James R.

    2014-01-01

    Flight deck-based vision systems, such as Synthetic and Enhanced Vision System (SEVS) technologies, have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with efficiency equivalent to visual operations. To achieve this potential, research is required for effective technology development and implementation based upon human factors design and regulatory guidance. This research supports the introduction and use of Synthetic Vision Systems and Enhanced Flight Vision Systems (SVS/EFVS) as advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in NextGen low visibility approach and landing operations. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and two color head-down primary flight display (PFD) concepts (conventional PFD, SVS PFD) were evaluated in a simulated NextGen Chicago O'Hare terminal environment. Additionally, the instrument approach type (no offset, 3 degree offset, 15 degree offset) was experimentally varied to test the efficacy of the HUD concepts for offset approach operations. The data showed that touchdown landing performance was excellent regardless of SEVS concept or type of offset instrument approach being flown. Subjective assessments of mental workload and situation awareness indicated that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD may be feasible.

  17. Image segmentation for enhancing symbol recognition in prosthetic vision.

    PubMed

    Horne, Lachlan; Barnes, Nick; McCarthy, Chris; He, Xuming

    2012-01-01

    Current and near-term implantable prosthetic vision systems offer the potential to restore some visual function, but suffer from poor resolution and dynamic range of induced phosphenes. This can make it difficult for users of prosthetic vision systems to identify symbolic information (such as signs) except in controlled conditions. Using image segmentation techniques from computer vision, we show it is possible to improve the clarity of such symbolic information for users of prosthetic vision implants in uncontrolled conditions. We use image segmentation to automatically divide a natural image into regions and, using a fixation point controlled by the user, select a region to phosphenize. This technique improves the apparent contrast and clarity of symbolic information over traditional phosphenization approaches.
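
    As a rough illustration of the segment-then-select idea (not the authors' implementation), the sketch below runs the off-the-shelf Felzenszwalb segmenter from scikit-image and returns a mask of the region under the fixation point; the scale parameter is an arbitrary assumption:

    ```python
    from skimage.segmentation import felzenszwalb

    def region_at_fixation(image, fx, fy):
        """Segment the image into regions and return a binary mask of the
        region under the user's fixation point (fx, fy); that region, not
        the raw image, would then be phosphenized."""
        labels = felzenszwalb(image, scale=100)
        return labels == labels[fy, fx]
    ```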

  18. A pleiotropic interaction between vision loss and hypermelanism in Astyanax mexicanus cave x surface hybrids.

    PubMed

    Gross, Joshua B; Powers, Amanda K; Davis, Erin M; Kaplan, Shane A

    2016-06-30

    Cave-dwelling animals evolve various traits as a consequence of life in darkness. Constructive traits (e.g., enhanced non-visual sensory systems) presumably arise under strong selective pressures; the mechanisms driving the regression of features, however, are not well understood. Quantitative trait locus (QTL) analyses in Astyanax mexicanus Pachón cave x surface hybrids revealed phenotypic effects associated with vision and pigmentation loss. Vision QTL were uniformly associated with reductions in the homozygous cave condition; however, pigmentation QTL demonstrated mixed phenotypic effects. This implied that pigmentation might be lost through both selective and neutral forces. Alternatively, in this report, we examined whether a pleiotropic interaction exists between vision and pigmentation, since vision loss has been shown to result in darker skin in other fish and amphibian model systems. We discovered that certain members of Pachón x surface pedigrees are significantly darker than surface-dwelling fish. All of these "hypermelanic" individuals demonstrated severe visual system malformations, suggesting that they may be blind. A vision-mediated behavioral assay revealed that these fish, in stark contrast to surface fish, behaved the same as blind cavefish. Further, hypermelanic melanophores were larger and more dendritic in morphology than surface fish melanophores. However, hypermelanic melanophores responded normally to melanin-concentrating hormone, suggesting that the darkening stemmed from vision loss rather than from a defect in pigment cell function. Finally, a number of genomic regions were coordinately associated with both reduced vision and increased pigmentation. This work suggests that hypermelanism in hybrid Astyanax results from blindness. This finding offers an alternative explanation for the phenotypic effects of pigmentation QTL as stemming (at least in part) from environmental, rather than exclusively genetic, interactions between two regressive phenotypes. Further, this analysis reveals the persistence of background adaptation in Astyanax: as the eye was lost in cave-dwelling forms, enhanced pigmentation resulted. Given the extreme cave environment, which is often devoid of nutrition, enhanced pigmentation may impose an energetic cost that would be selected against as a means of energy conservation. Thus, the pleiotropic interaction between vision loss and pigmentation may reveal an additional selective pressure favoring the loss of pigmentation in cave-dwelling animals.

  19. Multispectral image-fused head-tracked vision system (HTVS) for driving applications

    NASA Astrophysics Data System (ADS)

    Reese, Colin E.; Bender, Edward J.

    2001-08-01

    Current military thermal driver vision systems consist of a single Long Wave Infrared (LWIR) sensor mounted on a manually operated gimbal, which is normally locked forward during driving. The sensor video imagery is presented on a large-area flat panel display for direct view. The Night Vision and Electronic Sensors Directorate and Kaiser Electronics are cooperatively developing a driver's Head Tracked Vision System (HTVS) which directs dual-waveband sensors in a more natural head-slewed imaging mode. The HTVS consists of LWIR and image-intensified sensors, a high-speed gimbal, a head-mounted display, and a head tracker. The first prototype systems have been delivered and have undergone preliminary field trials to characterize the operational benefits of a head-tracked sensor system for tactical military ground applications. This investigation addresses the advantages of head-tracked versus fixed sensor systems for peripheral sightings of threats, road hazards, and nearby vehicles. An additional thrust investigates the degree to which additive (A+B) fusion of the LWIR and image-intensified sensors enhances overall driving performance. Typically, LWIR sensors are better for detecting threats, while image-intensified sensors provide more natural scene cues, such as shadows and texture. The investigation examines the degree to which the fusion of these two sensors enhances the driver's overall situational awareness.

  20. Enhanced Flight Vision Systems Operational Feasibility Study Using Radar and Infrared Sensors

    NASA Technical Reports Server (NTRS)

    Etherington, Timothy J.; Kramer, Lynda J.; Severance, Kurt; Bailey, Randall E.; Williams, Steven P.; Harrison, Stephanie J.

    2015-01-01

    Approach and landing operations during periods of reduced visibility have plagued pilots since the beginning of aviation. Although techniques are available to mitigate some visibility conditions, these operations are still ultimately limited by the pilot's ability to "see" required visual landing references (e.g., markings and/or lights of the threshold and touchdown zone) and require significant, costly ground infrastructure. Certified Enhanced Flight Vision Systems (EFVS) have shown promise to lift this obscuration veil. They allow the pilot to operate with enhanced vision, in lieu of natural vision, in the visual segment to enable equivalent visual operations (EVO). An aviation standards document was developed with industry and government consensus for using an EFVS for approach, landing, and rollout to a safe taxi speed in visibilities as low as 300 feet runway visual range (RVR). These new standards establish performance, integrity, availability, and safety requirements for operating in this regime without reliance on a pilot's or flight crew's natural vision, by use of a fail-operational EFVS. A pilot-in-the-loop, high-fidelity motion simulation study was conducted at NASA Langley Research Center to evaluate the operational feasibility, pilot workload, and pilot acceptability of conducting straight-in instrument approaches with published vertical guidance to landing, touchdown, and rollout to a safe taxi speed, in visibility as low as 300 feet RVR, by use of vision system technologies on a head-up display (HUD) without need of or reliance on natural vision. Twelve crews flew various landing and departure scenarios in 1800, 1000, 700, and 300 feet RVR. This paper details the non-normal (failure-case) results of the study, including objective and subjective measures of performance and acceptability. The study validated the operational feasibility of the approach and departure operations, and success was independent of visibility conditions. Failures were handled within the lateral confines of the runway for all conditions tested. The fail-operational concept with the pilot in the loop needs further study.

  1. Advanced integrated enhanced vision systems

    NASA Astrophysics Data System (ADS)

    Kerr, J. R.; Luk, Chiu H.; Hammerstrom, Dan; Pavel, Misha

    2003-09-01

    In anticipation of the ultimate role of Enhanced Vision Systems (EVS) in transport, business, and rotary-wing aircraft, we clarify how EVS output data will be utilized, the appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that "synthetic vision" is simply a subset of the monitor/guidance interface issue. The core of an integrated EVS is its sensor processor. In order to approximate optimal Bayesian multi-sensor fusion and ground-correlation functionality in real time, we are developing a neural-net approach utilizing human visual-pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; inertial, GPS, and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight-operational aspects relating to both civil certification and military applications in IMC.
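
    The Bayesian multi-sensor fusion that this processor is meant to approximate has a simple closed form in the Gaussian case: inverse-variance weighting of independent estimates. A minimal sketch, with example numbers invented purely for illustration:

    ```python
    def fuse_gaussian(x1, var1, x2, var2):
        """Minimum-variance fusion of two independent Gaussian estimates
        of the same quantity (the optimal Bayesian combination)."""
        w1 = var2 / (var1 + var2)
        fused = w1 * x1 + (1.0 - w1) * x2
        fused_var = var1 * var2 / (var1 + var2)
        return fused, fused_var

    # Example: a radar height of 102 m (variance 25) fused with an EO
    # estimate of 98 m (variance 9) yields ~99.1 m with variance ~6.6.
    print(fuse_gaussian(102.0, 25.0, 98.0, 9.0))
    ```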

  2. Vertically integrated photonic multichip module architecture for vision applications

    NASA Astrophysics Data System (ADS)

    Tanguay, Armand R., Jr.; Jenkins, B. Keith; von der Malsburg, Christoph; Mel, Bartlett; Holt, Gary; O'Brien, John D.; Biederman, Irving; Madhukar, Anupam; Nasiatka, Patrick; Huang, Yunsong

    2000-05-01

    The development of a truly smart camera, with inherent capability for low latency semi-autonomous object recognition, tracking, and optimal image capture, has remained an elusive goal notwithstanding tremendous advances in the processing power afforded by VLSI technologies. These features are essential for a number of emerging multimedia-based applications, including enhanced augmented reality systems. Recent advances in understanding of the mechanisms of biological vision systems, together with similar advances in hybrid electronic/photonic packaging technology, offer the possibility of artificial biologically-inspired vision systems with significantly different, yet complementary, strengths and weaknesses. We describe herein several system implementation architectures based on spatial and temporal integration techniques within a multilayered structure, as well as the corresponding hardware implementation of these architectures based on the hybrid vertical integration of multiple silicon VLSI vision chips by means of dense 3D photonic interconnections.

  3. 2013-2363

    NASA Image and Video Library

    2013-05-15

    (left to right) NASA Langley aerospace engineer Bruce Jackson briefs astronauts Rex Walheim and Gregory Johnson about the Synthetic Vision (SV) and Enhanced Vision (EV) systems in a flight simulator at the center's Cockpit Motion Facility. The astronauts were training to land the Dream Chaser spacecraft on May 15, 2013. Credit: NASA/David C. Bowman

  4. Robotic space simulation integration of vision algorithms into an orbital operations simulation

    NASA Technical Reports Server (NTRS)

    Bochsler, Daniel C.

    1987-01-01

    In order to successfully plan and analyze future space activities, computer-based simulations of activities in low earth orbit will be required to model and integrate vision and robotic operations with vehicle dynamics and proximity operations procedures. The orbital operations simulation (OOS) is configured and enhanced as a testbed for robotic space operations. Vision integration algorithms are being developed in three areas: preprocessing, recognition, and attitude/attitude rates. The vision program (Rice University) was modified for use in the OOS. Systems integration testing is now in progress.

  5. Robust algebraic image enhancement for intelligent control systems

    NASA Technical Reports Server (NTRS)

    Lerner, Bao-Ting; Morrelli, Michael

    1993-01-01

    Robust vision capability for intelligent control systems has been an elusive goal in image processing. The computationally intensive techniques necessary for conventional image processing make real-time applications, such as object tracking and collision avoidance, difficult. In order to endow an intelligent control system with the needed vision robustness, an adequate image enhancement subsystem, capable of compensating for the wide variety of real-world degradations, must exist between the image-capturing and object-recognition subsystems. This enhancement stage must be adaptive and must operate consistently in the presence of both statistical and shape-based noise. To deal with this problem, we have developed an innovative algebraic approach which provides a sound mathematical framework for image representation and manipulation. Our image model provides a natural platform from which to pursue dynamic scene analysis, and its incorporation into a vision system would serve as the front end to an intelligent control system. We have developed a unique polynomial representation of gray-level imagery and applied this representation to develop polynomial operators on complex gray-level scenes. This approach is highly advantageous since polynomials can be manipulated easily and are readily understood, providing a convenient environment for image processing. Our model presents a highly structured and compact algebraic representation of gray-level images which can be viewed as fuzzy sets.
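
    The paper's polynomial operators are not reproduced here, but the core idea of replacing pixel data with an algebraic object can be illustrated by a least-squares fit of a bivariate polynomial to a gray-level patch; the degree, coordinate normalization, and function name are assumptions:

    ```python
    import numpy as np

    def fit_patch_poly(patch, deg=2):
        """Fit a bivariate polynomial (all monomials x**i * y**j with
        i + j <= deg) to a gray-level patch; the returned coefficient
        vector is a compact algebraic stand-in for the pixels."""
        h, w = patch.shape
        yy, xx = np.mgrid[0:h, 0:w]
        x, y = xx.ravel() / w, yy.ravel() / h   # normalized coordinates
        cols = [(x ** i) * (y ** j) for i in range(deg + 1)
                for j in range(deg + 1 - i)]
        A = np.stack(cols, axis=1)
        coeffs, *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
        return coeffs  # 6 coefficients for deg=2
    ```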

  6. Assessing Dual Sensor Enhanced Flight Vision Systems to Enable Equivalent Visual Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Etherington, Timothy J.; Severance, Kurt; Bailey, Randall E.; Williams, Steven P.; Harrison, Stephanie J.

    2016-01-01

    Flight deck-based vision system technologies, such as Synthetic Vision (SV) and Enhanced Flight Vision Systems (EFVS), may serve as revolutionary crew/vehicle interface technologies to meet the challenges of the Next Generation Air Transportation System Equivalent Visual Operations (EVO) concept - that is, the ability to achieve the safety of current-day Visual Flight Rules (VFR) operations and maintain the operational tempos of VFR irrespective of the weather and visibility conditions. One significant challenge lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. A motion-base simulator experiment was conducted to evaluate the operational feasibility, pilot workload and pilot acceptability of conducting straight-in instrument approaches with published vertical guidance to landing, touchdown, and rollout to a safe taxi speed in visibility as low as 300 ft runway visual range by use of onboard vision system technologies on a Head-Up Display (HUD) without need or reliance on natural vision. Twelve crews evaluated two methods of combining dual sensor (millimeter wave radar and forward looking infrared) EFVS imagery on pilot-flying and pilot-monitoring HUDs as they made approaches to runways with and without touchdown zone and centerline lights. In addition, the impact of adding SV to the dual sensor EFVS imagery on crew flight performance, workload, and situation awareness during extremely low visibility approach and landing operations was assessed. Results indicate that all EFVS concepts flown resulted in excellent approach path tracking and touchdown performance without any workload penalty. Adding SV imagery to EFVS concepts provided situation awareness improvements but no discernible improvements in flight path maintenance.

  7. Enhanced Flight Vision Systems and Synthetic Vision Systems for NextGen Approach and Landing Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K. E.; Williams, Steven P.; Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Shelton, Kevin J.

    2013-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment with efficiency equivalent to visual operations. To meet this potential, research is needed for effective technology development and for the implementation of regulatory standards and design guidance to support the introduction and use of SVS/EFVS advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. A fixed-base pilot-in-the-loop simulation test was conducted at NASA Langley Research Center that evaluated the use of SVS/EFVS in NextGen low visibility approach and landing operations. Twelve crews flew approach and landing operations in a simulated NextGen Chicago O'Hare environment. Various scenarios tested the potential for using EFVS to conduct approach, landing, and roll-out operations in visibility as low as 1000 feet runway visual range (RVR). Also, SVS was tested to evaluate the potential for lowering decision heights (DH) on certain instrument approach procedures below what can be flown today. Expanding the portion of the visual segment in which EFVS can be used in lieu of natural vision, from 100 feet above the touchdown zone elevation to touchdown and rollout, in visibilities as low as 1000 feet RVR appears viable, as touchdown performance was acceptable without any apparent workload penalties. A DH as low as 150 feet and/or possibly reduced visibility minima using SVS appears viable when implemented on a Head-Up Display, but the landing data suggest further study for head-down implementations.

  8. Latency in Visionic Systems: Test Methods and Requirements

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time of day as an operational constraint, and to provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that total system delays or latencies, including those of the imaging sensors and display systems, can critically degrade a device's utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated depending upon the piloting task, the role the visionics device plays in that task, and the characteristics of the visionics cockpit display device, including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot derives visual guidance, will be the most stringent, with a value as low as 20 msec.

  9. Visual Advantage of Enhanced Flight Vision System During NextGen Flight Test Evaluation

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Harrison, Stephanie J.; Bailey, Randall E.; Shelton, Kevin J.; Ellis, Kyle K.

    2014-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment. Simulation and flight tests were jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA) to evaluate potential safety and operational benefits of SVS/EFVS technologies in low visibility Next Generation Air Transportation System (NextGen) operations. The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation, and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SVS/EFVS operational and system-level performance capabilities. Nine test flights were flown in Gulfstream's G450 flight test aircraft outfitted with the SVS/EFVS technologies under low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 feet to 3600 feet reported visibility) under different obscurants (mist, fog, drizzle fog, frozen fog) and sky cover (broken, overcast). Flight test videos were evaluated at three different altitudes (decision altitude, 100 feet radar altitude, and touchdown) to determine the visual advantage afforded to the pilot using the EFVS/Forward-Looking InfraRed (FLIR) imagery compared to natural vision. Results indicate the EFVS provided a visual advantage of two to three times over that of the out-the-window (OTW) view. The EFVS allowed pilots to view the runway environment, specifically runway lights, before they could be seen OTW with natural vision.

  10. Recent progress in millimeter-wave sensor system capabilities for enhanced (synthetic) vision

    NASA Astrophysics Data System (ADS)

    Hellemann, Karlheinz; Zachai, Reinhard

    1999-07-01

    Weather- and daylight-independent operation of modern traffic systems is essential for optimized, economical availability. Helicopters, small aircraft, and military transport aircraft that frequently operate close to the ground particularly need effective and affordable Enhanced Vision sensors. Technical progress in sensor technology and processing speed now makes new concepts realizable. Against this background, the paper reports on the improvements under development within the HiVision program at DaimlerChrysler Aerospace. A sensor demonstrator based on FMCW radar technology, operating in the mm-wave band with a high information update rate, has been upgraded to improve performance and fitted for flight on an experimental basis. The results achieved so far demonstrate the capability to produce weather-independent enhanced vision. In addition, the demonstrator has been tested on board a high-speed ferry on the Baltic Sea, because fast vessels have a similar need for weather-independent operation and anti-collision measures. In the future, one sensor type may serve both 'worlds' and help make traffic easier and safer. The described demonstrator fills the technology gap between optical sensors (infrared) and standard pulse radars, with specific features such as high-speed scanning and weather penetration, and the additional benefit of cost-effectiveness.
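
    For orientation, the range measurement in such an FMCW radar comes from the beat frequency between the transmitted and received chirps, R = c * T * f_beat / (2 * B). A minimal sketch with invented parameters (not those of the HiVision demonstrator):

    ```python
    C = 3.0e8  # speed of light, m/s

    def fmcw_range(beat_hz, sweep_bw_hz, sweep_time_s):
        """Target range from the beat frequency of a linear FMCW chirp."""
        return C * sweep_time_s * beat_hz / (2.0 * sweep_bw_hz)

    # Example: with a 600 MHz sweep over 1 ms, a 2 MHz beat
    # corresponds to a target at 500 m.
    print(fmcw_range(2.0e6, 600.0e6, 1.0e-3))  # 500.0
    ```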

  11. Assessing Impact of Dual Sensor Enhanced Flight Vision Systems on Departure Performance

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Etherington, Timothy J.; Severance, Kurt; Bailey, Randall E.

    2016-01-01

    Synthetic Vision (SV) and Enhanced Flight Vision Systems (EFVS) may serve as game-changing technologies to meet the challenges of the Next Generation Air Transportation System and the envisioned Equivalent Visual Operations (EVO) concept - that is, the ability to achieve the safety and operational tempos of current-day Visual Flight Rules operations irrespective of the weather and visibility conditions. One significant obstacle lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. A motion-base simulator experiment was conducted to evaluate the operational feasibility and pilot workload of conducting departures and approaches on runways without centerline lighting in visibility as low as 300 feet runway visual range (RVR) by use of onboard vision system technologies on a Head-Up Display (HUD) without need or reliance on natural vision. Twelve crews evaluated two methods of combining dual sensor (millimeter wave radar and forward looking infrared) EFVS imagery on pilot-flying and pilot-monitoring HUDs. In addition, the impact of adding SV to the dual sensor EFVS imagery on crew flight performance and workload was assessed. Using EFVS concepts during 300 RVR terminal operations on runways without centerline lighting appears feasible: all EFVS concepts showed departure and landing rollout performance equivalent to (or better than) that of operations flown with a conventional HUD to runways with centerline lighting, without any workload penalty. Adding SV imagery to EFVS concepts provided situation awareness improvements but no discernible improvements in flight path maintenance.

  12. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
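
    The band-pass, edge-enhancing response described above can be approximated by a difference-of-Gaussians (center-surround) filter. A minimal sketch; the two sigma values are arbitrary choices for illustration:

    ```python
    from scipy.ndimage import gaussian_filter

    def dog_bandpass(image, sigma_center=1.0, sigma_surround=3.0):
        """Difference-of-Gaussians band-pass filter: a center-surround
        response of the kind the paper likens to human early vision."""
        return (gaussian_filter(image, sigma_center)
                - gaussian_filter(image, sigma_surround))
    ```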

  13. Optoelectronic instrumentation enhancement using data mining feedback for a 3D measurement system

    NASA Astrophysics Data System (ADS)

    Flores-Fuentes, Wendy; Sergiyenko, Oleg; Gonzalez-Navarro, Félix F.; Rivas-López, Moisés; Hernandez-Balbuena, Daniel; Rodríguez-Quiñonez, Julio C.; Tyrsa, Vera; Lindner, Lars

    2016-12-01

    3D measurement by a cyber-physical system based on optoelectronic scanning instrumentation has been enhanced by outlier-removal and regression data-mining feedback. The prototype has applications in (1) industrial manufacturing systems, including robotic machinery, embedded vision, and motion control; (2) health care systems, for measurement scanning; and (3) infrastructure, by providing structural health monitoring. This paper presents new research on the data processing of a 3D measurement vision-sensing database. Outliers in the multivariate data are detected and removed to improve the results of an artificial-intelligence regression algorithm, and regression of the physical measurement error is used to correct the 3D measurements. We conclude that joining physical measurement with computation is an effective way to close feedback loops in the control of industrial, medical, and civil tasks.
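
    As a generic stand-in for this data-mining feedback loop (not the authors' actual algorithms), the sketch below removes multivariate outliers and then regresses the residual measurement error so that later scans can be corrected; the model choices and names are assumptions:

    ```python
    from sklearn.ensemble import IsolationForest
    from sklearn.linear_model import LinearRegression

    def correct_measurements(X, measured, reference):
        """Drop multivariate outliers, fit the residual error against the
        scan features X, and return error-corrected measurements."""
        keep = IsolationForest(random_state=0).fit_predict(X) == 1  # inliers
        error_model = LinearRegression().fit(X[keep],
                                             (reference - measured)[keep])
        return measured + error_model.predict(X)
    ```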

  14. Three-dimensional vision enhances task performance independently of the surgical method.

    PubMed

    Wagner, O J; Hagen, M; Kurmann, A; Horgan, S; Candinas, D; Vorburger, S A

    2012-10-01

    Within the next few years, the medical industry will launch increasingly affordable three-dimensional (3D) vision systems for the operating room (OR). This study aimed to evaluate the effect of two-dimensional (2D) and 3D visualization on surgical skills and task performance. In this study, 34 individuals with varying laparoscopic experience (18 inexperienced individuals) performed three tasks to test spatial relationships, grasping and positioning, dexterity, precision, and hand-eye and hand-hand coordination. Each task was performed in 3D using binocular vision for open performance, the Viking 3Di Vision System for laparoscopic performance, and the DaVinci robotic system. The same tasks were repeated in 2D using an eye patch for monocular vision, conventional laparoscopy, and the DaVinci robotic system. Loss of 3D vision significantly increased the perceived difficulty of a task and the time required to perform it, independently of the approach (P < 0.0001-0.02). Simple tasks took 25% to 30% longer to complete, and more complex tasks took 75% longer, with 2D than with 3D vision. Only the difficult task was performed faster with the robot than with laparoscopy (P = 0.005). In every case, 3D robotic performance was superior to conventional laparoscopy (2D) (P < 0.001-0.015). The more complex the task, the more 3D vision accelerates task completion compared with 2D vision. The gain in task performance is independent of the surgical method.

  15. Navigation integrity monitoring and obstacle detection for enhanced-vision systems

    NASA Astrophysics Data System (ADS)

    Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter

    2001-08-01

    Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g., provided by differential GPS (DGPS). The reliability of the synthetic vision depends highly on both the accuracy of the database used and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation cannot be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible, and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and monitor the integrity of the databases and of the navigation. Furthermore, if a lack of integrity arises, navigation data, e.g., the relative position of runway and aircraft, have to be extracted directly from the sensor data. The main contribution of this paper concerns the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm, as the primary EV sensor. For integrity monitoring, objects extracted from radar images are registered against both database objects and objects (e.g., other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of the database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear, and the outcome of this runway check also contributes to the integrity analysis. Concurrently, radar-image-based navigation is performed, using neither precision navigation nor detailed database information, to determine the aircraft's position relative to the runway. The performance of our approach is demonstrated with real data acquired during extensive flight tests at several airports in northern Germany.
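
    The classification of radar image objects into known and unknown can be illustrated with nearest-neighbor gating against the combined database and data-link object positions; the gating distance and function name below are assumptions, not the system's actual logic:

    ```python
    import numpy as np

    def classify_radar_objects(radar_xy, known_xy, gate_m=50.0):
        """Label each radar-extracted object 'known' if a database or
        data-link object lies within the gating distance, else 'unknown'
        (a potential obstacle)."""
        labels = []
        for p in radar_xy:
            d = (np.linalg.norm(known_xy - p, axis=1).min()
                 if len(known_xy) else np.inf)
            labels.append("known" if d <= gate_m else "unknown")
        return labels
    ```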

  16. Enhanced Night Visibility Series, Volume XII : Overview of Phase II and Development of Phase III Experimental Plan

    DOT National Transportation Integrated Search

    2005-12-01

    This volume provides an overview of the six studies that compose Phase II of the Enhanced Night Visibility project and the experimental plan for its third and final portion, Phase III. The Phase II studies evaluated up to 12 vision enhancement system...

  17. COMPARISON OF RECENTLY USED PHACOEMULSIFICATION SYSTEMS USING A HEALTH TECHNOLOGY ASSESSMENT METHOD.

    PubMed

    Huang, Jiannan; Wang, Qi; Zhao, Caimin; Ying, Xiaohua; Zou, Haidong

    2017-01-01

    To compare recently used phacoemulsification systems using a health technology assessment (HTA) model, a self-administered questionnaire, which included questions gauging opinions of the recently used phacoemulsification systems, was distributed to the chief cataract surgeons in the ophthalmology departments of eighteen tertiary hospitals in Shanghai, China. A series of senile cataract patients undergoing phacoemulsification surgery were enrolled in the study, and the surgical results and average costs related to their surgeries were recorded and compared across the systems. The four phacoemulsification systems currently used in Shanghai are the Infiniti Vision, Centurion Vision, WhiteStar Signature, and Stellaris Vision Enhancement systems. All of the surgeons confirmed that the systems they used help cataract patients recover vision. A total of 150 cataract patients who underwent phacoemulsification surgery were enrolled in the present study. A significant difference was found among the four groups in cumulative dissipated energy, with the lowest value in the Centurion group. No serious complications were observed, and a positive trend in visual acuity was found in all four groups after cataract surgery. The highest total cost of surgery was associated with procedures conducted using the Centurion Vision system, and the significant differences between systems were mainly due to the cost of the consumables used in the different surgeries. This HTA comparison of four recently used phacoemulsification systems found that each system offers a satisfactory vision-recovery outcome but that they differ in surgical efficacy and costs.

  18. Distant touch hydrodynamic imaging with an artificial lateral line.

    PubMed

    Yang, Yingchen; Chen, Jack; Engel, Jonathan; Pandya, Saunvit; Chen, Nannan; Tucker, Craig; Coombs, Sheryl; Jones, Douglas L; Liu, Chang

    2006-12-12

    Nearly all underwater vehicles and surface ships today use sonar and vision for imaging and navigation. However, sonar and vision systems face various limitations, e.g., sonar blind zones, dark or murky environments, etc. Evolved over millions of years, fish use the lateral line, a distributed linear array of flow sensing organs, for underwater hydrodynamic imaging and information extraction. We demonstrate here a proof-of-concept artificial lateral line system. It enables a distant touch hydrodynamic imaging capability to critically augment sonar and vision systems. We show that the artificial lateral line can successfully perform dipole source localization and hydrodynamic wake detection. The development of the artificial lateral line is aimed at fundamentally enhancing human ability to detect, navigate, and survive in the underwater environment.

  19. Flight Testing an Integrated Synthetic Vision System

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Prinzel, Lawrence J., III

    2005-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain database integrity monitoring equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved Weather Radar for real-time object detection and database integrity monitoring. A flight test evaluation was jointly conducted (in July and August 2004) by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security, Synthetic Vision System project. A Gulfstream GV aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS and DIME) were integrated into the larger SVS concept design. This paper presents experimental methods and the high level results of this flight test.

  20. Synthetic Vision Displays for Planetary and Lunar Lander Vehicles

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Williams, Steven P.; Shelton, Kevin J.; Kramer, Lynda J.; Bailey, Randall E.; Norman, Robert M.

    2008-01-01

    Aviation research has demonstrated that Synthetic Vision (SV) technology can substantially enhance situation awareness, reduce pilot workload, improve aviation safety, and promote flight path control precision. SV, and related flight deck technologies are currently being extended for application in planetary exploration vehicles. SV, in particular, holds significant potential for many planetary missions since the SV presentation provides a computer-generated view for the flight crew of the terrain and other significant environmental characteristics independent of the outside visibility conditions, window locations, or vehicle attributes. SV allows unconstrained control of the computer-generated scene lighting, terrain coloring, and virtual camera angles which may provide invaluable visual cues to pilots/astronauts, not available from other vision technologies. In addition, important vehicle state information may be conformally displayed on the view such as forward and down velocities, altitude, and fuel remaining to enhance trajectory control and vehicle system status. The paper accompanies a conference demonstration that introduced a prototype NASA Synthetic Vision system for lunar lander spacecraft. The paper will describe technical challenges and potential solutions to SV applications for the lunar landing mission, including the requirements for high-resolution lunar terrain maps, accurate positioning and orientation, and lunar cockpit display concepts to support projected mission challenges.

  1. Face adaptation improves gender discrimination.

    PubMed

    Yang, Hua; Shen, Jianhong; Chen, Juan; Fang, Fang

    2011-01-01

    Adaptation to a visual pattern can alter the sensitivities of neuronal populations encoding the pattern. However, the functional roles of adaptation, especially in high-level vision, are still equivocal. In the present study, we performed three experiments to investigate whether face gender adaptation could affect gender discrimination. Experiments 1 and 2 revealed that adapting to a male/female face could selectively enhance discrimination for male/female faces. Experiment 3 showed that the discrimination enhancement induced by face adaptation could transfer across a substantial change in three-dimensional face viewpoint. These results provide further evidence that, similar to low-level vision, adaptation in high-level vision can calibrate the visual system to current inputs of complex shapes (i.e., faces) and improve discrimination at the adapted characteristic.

  2. FELIN: tailored optronics and systems solutions for dismounted combat

    NASA Astrophysics Data System (ADS)

    Milcent, A. M.

    2009-05-01

    The FELIN French modernization program for dismounted combat provides the armies with info-centric systems that dramatically enhance the performance of the soldier and the platoon. Sagem now offers a portfolio of equipment providing C4I, digital data and voice communication, and enhanced day and night vision through compact, high-performance electro-optics. The FELIN system provides the infantryman with a high-tech, integrated, modular system that significantly increases detection, recognition, and identification capabilities, as well as situation awareness and information sharing, in any dismounted close-combat situation. Among the key technologies used in the system, infrared and intensified vision provide a significant improvement in capability, observation performance, and protection for ground soldiers. This paper presents the developed equipment in detail, with an emphasis on lessons learned from technical and operational feedback from dismounted close-combat field tests.

  3. Evaluation of a gaze-controlled vision enhancement system for reading in visually impaired people

    PubMed Central

    Aguilar, Carlos; Castet, Eric

    2017-01-01

    People with low vision, especially those with Central Field Loss (CFL), need magnification to read. The flexibility of Electronic Vision Enhancement Systems (EVES) offers several ways of magnifying text. Due to the restricted field of view of EVES, the need for magnification is conflicting with the need to navigate through text (panning). We have developed and implemented a real-time gaze-controlled system whose goal is to optimize the possibility of magnifying a portion of text while maintaining global viewing of the other portions of the text (condition 1). Two other conditions were implemented that mimicked commercially available advanced systems known as CCTV (closed-circuit television systems)—conditions 2 and 3. In these two conditions, magnification was uniformly applied to the whole text without any possibility to specifically select a region of interest. The three conditions were implemented on the same computer to remove differences that might have been induced by dissimilar equipment. A gaze-contingent artificial 10° scotoma (a mask continuously displayed in real time on the screen at the gaze location) was used in the three conditions in order to simulate macular degeneration. Ten healthy subjects with a gaze-contingent scotoma read aloud sentences from a French newspaper in nine experimental one-hour sessions. Reading speed was measured and constituted the main dependent variable to compare the three conditions. All subjects were able to use condition 1 and they found it slightly more comfortable to use than condition 2 (and similar to condition 3). Importantly, reading speed results did not show any significant difference between the three systems. In addition, learning curves were similar in the three conditions. This proof of concept study suggests that the principles underlying the gaze-controlled enhanced system might be further developed and fruitfully incorporated in different kinds of EVES for low vision reading. PMID:28380004
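
    The core idea of condition 1, magnifying a window around the gaze point while the rest of the page stays visible, can be sketched compactly; the window size, zoom factor, and names below are illustrative assumptions, not the study's implementation:

    ```python
    import cv2

    def magnify_at_gaze(frame, gx, gy, zoom=3.0, half=60):
        """Enlarge a window around the gaze point (gx, gy) and paste it
        back over the frame, leaving the surrounding text in view."""
        h, w = frame.shape[:2]
        x0, x1 = max(gx - half, 0), min(gx + half, w)
        y0, y1 = max(gy - half, 0), min(gy + half, h)
        roi = cv2.resize(frame[y0:y1, x0:x1], None, fx=zoom, fy=zoom)
        out = frame.copy()
        rh, rw = roi.shape[:2]
        # Paste the magnified patch centered on the gaze point, clipped
        # to the frame boundaries.
        px0, py0 = max(gx - rw // 2, 0), max(gy - rh // 2, 0)
        px1, py1 = min(px0 + rw, w), min(py0 + rh, h)
        out[py0:py1, px0:px1] = roi[: py1 - py0, : px1 - px0]
        return out
    ```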

  4. IMAGE ENHANCEMENT FOR IMPAIRED VISION: THE CHALLENGE OF EVALUATION

    PubMed Central

    PELI, ELI; WOODS, RUSSELL L

    2009-01-01

    With the aging of the population, the prevalence of eye diseases, and thus of vision impairment, is increasing. The TV-watching habits of people with vision impairments are comparable to those of normally sighted people [1]; however, their vision loss prevents them from fully benefiting from this medium. For over 20 years we have been developing video image-enhancement techniques designed to assist people with visual impairments, particularly those due to central retinal vision loss. A major difficulty in this endeavor is the lack of evaluation techniques to assess and compare the effectiveness of various enhancement methods. This paper reviews our approaches to image enhancement and the results we have obtained, with special emphasis on the difficulties encountered in the evaluation of the benefits of enhancement and the solutions we have developed to date. PMID:20161188

  5. Computer-aided system for detecting runway incursions

    NASA Astrophysics Data System (ADS)

    Sridhar, Banavar; Chatterji, Gano B.

    1994-07-01

    A synthetic vision system for enhancing the pilot's ability to navigate and control the aircraft on the ground is described. The system uses the onboard airport database and images acquired by external sensors. Additional navigation information needed by the system is provided by the Inertial Navigation System and the Global Positioning System. The various functions of the system, such as image enhancement, map generation, obstacle detection, collision avoidance, guidance, etc., are identified. The available technologies, some of which were developed at NASA, that are applicable to the aircraft ground navigation problem are noted. Example images of a truck crossing the runway while the aircraft flies close to the runway centerline are described. These images are from a sequence of images acquired during one of the several flight experiments conducted by NASA to acquire data to be used for the development and verification of the synthetic vision concepts. These experiments provide a realistic database including video and infrared images, motion states from the Inertial Navigation System and the Global Positioning System, and camera parameters.

  6. The EnVision++ system: a new immunohistochemical method for diagnostics and research. Critical comparison with the APAAP, ChemMate, CSA, LABC, and SABC techniques.

    PubMed

    Sabattini, E; Bisgaard, K; Ascani, S; Poggi, S; Piccioli, M; Ceccarelli, C; Pieri, F; Fraternali-Orcioni, G; Pileri, S A

    1998-07-01

    To assess a newly developed immunohistochemical detection system, the EnVision++. A large series of differently processed normal and pathological samples and 53 relevant monoclonal antibodies were chosen. A chessboard titration assay was used to compare the results provided by the EnVision++ system with those of the APAAP, CSA, LSAB, SABC, and ChemMate methods, when applied either manually or in a TechMate 500 immunostainer. With the vast majority of the antibodies, EnVision++ allowed two- to fivefold higher dilutions than the APAAP, LSAB, SABC, and ChemMate techniques, the staining intensity and percentage of expected positive cells being the same. With some critical antibodies (such as the anti-CD5), it turned out to be superior in that it achieved consistently reproducible results with differently fixed or overfixed samples. Only the CSA method, which includes tyramide based enhancement, allowed the same dilutions as the EnVision++ system, and in one instance (with the anti-cyclin D1 antibody) represented the gold standard. The EnVision++ is an easy to use system, which avoids the possibility of disturbing endogenous biotin and lowers the cost per test by increasing the dilutions of the primary antibodies. Being a two step procedure, it reduces both the assay time and the workload.

  7. The EnVision++ system: a new immunohistochemical method for diagnostics and research. Critical comparison with the APAAP, ChemMate, CSA, LABC, and SABC techniques.

    PubMed Central

    Sabattini, E; Bisgaard, K; Ascani, S; Poggi, S; Piccioli, M; Ceccarelli, C; Pieri, F; Fraternali-Orcioni, G; Pileri, S A

    1998-01-01

    AIM: To assess a newly developed immunohistochemical detection system, the EnVision++. METHODS: A large series of differently processed normal and pathological samples and 53 relevant monoclonal antibodies were chosen. A chessboard titration assay was used to compare the results provided by the EnVision++ system with those of the APAAP, CSA, LSAB, SABC, and ChemMate methods, when applied either manually or in a TechMate 500 immunostainer. RESULTS: With the vast majority of the antibodies, EnVision++ allowed two- to fivefold higher dilutions than the APAAP, LSAB, SABC, and ChemMate techniques, the staining intensity and percentage of expected positive cells being the same. With some critical antibodies (such as the anti-CD5), it turned out to be superior in that it achieved consistently reproducible results with differently fixed or overfixed samples. Only the CSA method, which includes tyramide based enhancement, allowed the same dilutions as the EnVision++ system, and in one instance (with the anti-cyclin D1 antibody) represented the gold standard. CONCLUSIONS: The EnVision++ is an easy to use system, which avoids the possibility of disturbing endogenous biotin and lowers the cost per test by increasing the dilutions of the primary antibodies. Being a two step procedure, it reduces both the assay time and the workload. PMID:9797726

  8. Efficient image enhancement using sparse source separation in the Retinex theory

    NASA Astrophysics Data System (ADS)

    Yoon, Jongsu; Choi, Jangwon; Choe, Yoonsik

    2017-11-01

    Color constancy is the feature of the human vision system (HVS) that ensures the relative constancy of the perceived color of objects under varying illumination conditions. The Retinex theory of machine vision systems is based on the HVS. Among Retinex algorithms, the physics-based algorithms are efficient; however, they generally do not satisfy the local characteristics of the original Retinex theory because they eliminate global illumination from their optimization. We apply the sparse source separation technique to the Retinex theory to present a physics-based algorithm that satisfies the locality characteristic of the original Retinex theory. Previous Retinex algorithms have limited use in image enhancement because the total variation Retinex results in an overly enhanced image and the sparse source separation Retinex cannot completely restore the original image. In contrast, our proposed method preserves the image edge and can very nearly replicate the original image without any special operation.
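
    For context, the classic single-scale Retinex that such physics-based variants build on subtracts a log-domain Gaussian estimate of the illumination from the log image. The sketch below shows only that baseline, not the authors' sparse-source-separation method; the sigma is an arbitrary choice:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def single_scale_retinex(image, sigma=80.0):
        """log(image) minus log(Gaussian-blurred image): a reflectance
        estimate with the smooth illumination component removed."""
        img = image.astype(float) + 1.0       # avoid log(0)
        illumination = gaussian_filter(img, sigma)
        return np.log(img) - np.log(illumination)
    ```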

  9. Integration of a 3D perspective view in the navigation display: featuring pilot's mental model

    NASA Astrophysics Data System (ADS)

    Ebrecht, L.; Schmerwitz, S.

    2015-05-01

    Synthetic vision systems (SVS) are a spreading technology in the avionics domain. Several studies have demonstrated enhanced situational awareness when using synthetic vision. Since the introduction of synthetic vision, the primary flight display (PFD) and the navigation display (ND) have steadily evolved. The main improvements to the ND comprise the representation of colored enhanced ground proximity warning system (EGPWS), weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. In particular, given the current trend of placing a 3D perspective view in an SVS-PFD while leaving the navigational content and methods of interaction unchanged, the question arises whether, and how, the gap between the two displays might evolve into a serious problem. This issue becomes important in relation to the transition between, and combination of, strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views in general, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, are discussed. Further, a concept for the integration of a 3D perspective view, i.e., a bird's-eye view, into a synthetic vision ND is presented. The combination of 2D and 3D views in the ND enables better correlation of the ND and the PFD and supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness and might further raise the safety margin when operating in mountainous areas.

  10. Wearable design issues for electronic vision enhancement systems

    NASA Astrophysics Data System (ADS)

    Dvorak, Joe

    2006-09-01

    As the baby boomer generation ages, visual impairment will affect a significant portion of the US population. At the same time, more and more of our world is becoming digital. These two trends, coupled with continuing advances in digital electronics, argue for a rethinking of the design of aids for the visually impaired. This paper discusses design issues for electronic vision enhancement systems (EVES) [R.C. Peterson, J.S. Wolffsohn, M. Rubinstein, et al., Am. J. Ophthalmol. 136 1129 (2003)] that will facilitate their wearability and continuous use. We briefly discuss the factors affecting a person's acceptance of wearable devices. We define the concept of operational inertia, which plays an important role in our design of wearable devices and systems. We then discuss how design principles based upon operational inertia can be applied to the design of EVES.

  11. Real-time Enhancement, Registration, and Fusion for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real-time and displayed on monitors on-board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery particularly during poor visibility conditions. However, to obtain this goal requires several different stages of processing including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests.
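
    A hedged sketch of the processing chain named in the abstract (affine registration followed by weighted-sum fusion) is given below. The transform matrix, offsets, and weights are illustrative placeholders, not the flight system's calibration values, and the real-time DSP implementation is not reproduced.

    ```python
    # Sketch of registration-then-fusion: warp one sensor image toward the
    # reference frame with an affine transform, then fuse by weighted sum.
    import numpy as np
    from scipy.ndimage import affine_transform

    def register_affine(image, matrix, offset):
        """Resample `image` with a 2x2 matrix + offset. Note scipy maps each
        output coordinate o through `matrix @ o + offset` into the input
        image (an inverse warp)."""
        return affine_transform(image, matrix, offset=offset, order=1)

    def fuse_weighted(images, weights):
        """Pixelwise weighted-sum fusion of co-registered images."""
        weights = np.asarray(weights, dtype=float)
        weights /= weights.sum()
        return sum(w * img for w, img in zip(weights, images))

    # Example: align a SWIR frame to the LWIR frame, then fuse 60/40.
    lwir = np.random.rand(480, 640)
    swir = np.random.rand(480, 640)
    swir_aligned = register_affine(swir, np.eye(2) * 1.01, offset=(2.0, -3.0))
    fused = fuse_weighted([lwir, swir_aligned], [0.6, 0.4])
    ```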

  12. Decadal Vision Progress Report Implementation Plans and Status for the Next Generation ARM Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, James

    The reconfiguration of the ARM facility, formally initiated in early 2014, is geared toward implementing the Next Generation of the ARM Facility, which will more tightly link ARM measurements and atmospheric models. The strategy is outlined in the ARM Climate Research Facility Decadal Vision (DOE 2014a). The strategy includes the implementation of a high-resolution model, initially at the Southern Great Plains (SGP) site, and enhancements at the SGP and North Slope of Alaska (NSA) sites to provide additional observations to support modeling and process studies. Enhancements at the SGP site focus on ground-based instruments while enhancements at the NSA make use of Unmanned Aerial Systems (UAS) and Tethered Balloon Systems (TBS). It is also recognized that new data tools and data products will need to be developed to take full advantage of these improvements. This document provides an update on the status of these ARM facility enhancements, beginning with the measurement enhancements at the SGP and NSA, followed by a discussion of the modeling project including associated data-processing activities.

  13. Community health nursing vision for 2020: shaping the future.

    PubMed

    Schofield, Ruth; Ganann, Rebecca; Brooks, Sandy; McGugan, Jennifer; Dalla Bona, Kim; Betker, Claire; Dilworth, Katie; Parton, Laurie; Reid-Haughian, Cheryl; Slepkov, Marlene; Watson, Cori

    2011-12-01

    As health care is shifting from hospital to community, community health nurses (CHNs) are directly affected. This descriptive qualitative study sought to understand priority issues currently facing CHNs, explore development of a national vision for community health nursing, and develop recommendations to shape the future of the profession moving toward the year 2020. Focus groups and key informant interviews were conducted across Canada. Five key themes were identified: community health nursing in crisis now, a flawed health care system, responding to the public, vision for the future, and CHNs as solution makers. Key recommendations include developing a common definition and vision of community health nursing, collaborating on an aggressive plan to shift to a primary health care system, developing a comprehensive social marketing strategy, refocusing basic baccalaureate education, enhancing the capacity of community health researchers and knowledge in community health nursing, and establishing a community health nursing center of excellence.

  14. Broad Band Antireflection Coating on Zinc Sulphide Window for Shortwave infrared cum Night Vision System

    NASA Astrophysics Data System (ADS)

    Upadhyaya, A. S.; Bandyopadhyay, P. K.

    2012-11-01

    In state-of-the-art technology, integrated devices are widely used for their potential advantages. A common system reduces weight as well as the total space occupied by its various parts. In state-of-the-art surveillance systems, an integrated SWIR and night vision system is used for more accurate identification of objects. In this system a common optical window is used, which passes the radiation of both regions; the two spectral regions are then separated into two channels. ZnS is a good choice for a common window, as it transmits both regions of interest: night vision (650 - 850 nm) as well as SWIR (0.9 - 1.7 μm). In this work a broadband antireflection coating is developed on a ZnS window to enhance the transmission. This seven layer coating is designed using the flip-flop design method. After obtaining the final design, some minor refinement is done using the simplex method. A SiO2 and TiO2 coating material combination is used for this work. The coating is fabricated by a physical vapour deposition process, and the materials were evaporated by an electron beam gun. The average transmission of the both-side-coated substrate from 660 to 1700 nm is 95%. This coating also acts as a contrast enhancement filter for night vision devices, as it reflects the region of 590 - 660 nm. Several trials were conducted to check the coating repeatability, and the transmission variation between trials is small and within the tolerance limit. The coating also passed environmental tests for stability.
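
    As a rough illustration of how such a multilayer stack can be evaluated, the following Python sketch computes normal-incidence reflectance with the standard thin-film transfer-matrix method. The layer indices and thicknesses are hypothetical placeholders, not the seven-layer flip-flop design reported here.

    ```python
    # Transfer-matrix reflectance of a thin-film stack at normal incidence.
    import numpy as np

    def reflectance(layers, n_inc, n_sub, wavelength):
        """layers: list of (refractive_index, thickness_nm); returns R in [0, 1]."""
        M = np.eye(2, dtype=complex)
        for n, d in layers:
            delta = 2 * np.pi * n * d / wavelength  # phase thickness
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_sub])
        r = (n_inc * B - C) / (n_inc * B + C)
        return abs(r) ** 2

    # Hypothetical SiO2/TiO2 quarter-wave pair on ZnS (n ~ 2.2) at 1000 nm.
    stack = [(1.45, 172.0), (2.35, 106.0)]
    print(reflectance(stack, n_inc=1.0, n_sub=2.2, wavelength=1000.0))
    ```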

  15. Sensor fusion display evaluation using information integration models in enhanced/synthetic vision applications

    NASA Technical Reports Server (NTRS)

    Foyle, David C.

    1993-01-01

    Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed framework for evaluating the operator's ability to use such systems is a normative approach: the pilot's performance with the sensor fusion image is compared to model predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows a determination of when a sensor fusion system leads to: poorer performance than one of the original sensor displays (clearly an undesirable system, in which the fused sensor system causes some distortion or interference); better performance than with either single sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance compared to model predictions; or super-optimal performance, which may occur if the operator is able to use some highly diagnostic 'emergent features' in the sensor fusion display that were unavailable in the original sensor displays.

  16. Enhancement of vision by monocular deprivation in adult mice.

    PubMed

    Prusky, Glen T; Alam, Nazia M; Douglas, Robert M

    2006-11-08

    Plasticity of vision mediated through binocular interactions has been reported in mammals only during a "critical" period in juvenile life, wherein monocular deprivation (MD) causes an enduring loss of visual acuity (amblyopia) selectively through the deprived eye. Here, we report a different form of interocular plasticity of vision in adult mice in which MD leads to an enhancement of the optokinetic response (OKR) selectively through the nondeprived eye. Over 5 d of MD, the spatial frequency sensitivity of the OKR increased gradually, reaching a plateau of approximately 36% above pre-deprivation baseline. Eye opening initiated a gradual decline, but sensitivity was maintained above pre-deprivation baseline for 5-6 d. Enhanced function was restricted to the monocular visual field, notwithstanding the dependence of the plasticity on binocular interactions. Activity in visual cortex ipsilateral to the deprived eye was necessary for the characteristic induction of the enhancement, and activity in visual cortex contralateral to the deprived eye was necessary for its maintenance after MD. The plasticity also displayed distinct learning-like properties: Active testing experience was required to attain maximal enhancement and for enhancement to persist after MD, and the duration of enhanced sensitivity after MD was extended by increasing the length of MD, and by repeating MD. These data show that the adult mouse visual system maintains a form of experience-dependent plasticity in which the visual cortex can modulate the normal function of subcortical visual pathways.

  17. Vision function testing for a suprachoroidal retinal prosthesis: effects of image filtering

    NASA Astrophysics Data System (ADS)

    Barnes, Nick; Scott, Adele F.; Lieby, Paulette; Petoe, Matthew A.; McCarthy, Chris; Stacey, Ashley; Ayton, Lauren N.; Sinclair, Nicholas C.; Shivdasani, Mohit N.; Lovell, Nigel H.; McDermott, Hugh J.; Walker, Janine G.; BVA Consortium,the

    2016-06-01

    Objective. One strategy to improve the effectiveness of prosthetic vision devices is to process incoming images to ensure that key information can be perceived by the user. This paper presents the first comprehensive results of vision function testing for a suprachoroidal retinal prosthetic device utilizing 20 stimulating electrodes. Further, we investigate whether image filtering can improve results on a light localization task for implanted participants compared to minimal vision processing. No controlled implanted-participant studies have yet investigated whether vision processing methods that are not task-specific can lead to improved results. Approach. Three participants with profound vision loss from retinitis pigmentosa were implanted with a suprachoroidal retinal prosthesis. All three completed multiple trials of a light localization test, and one participant completed multiple trials of acuity tests. The visual representations used were: Lanczos2 (a high quality Nyquist bandlimited downsampling filter); minimal vision processing (MVP); wide view regional averaging filtering (WV); scrambled; and system off. Main results. Using Lanczos2, all three participants successfully completed a light localization task and obtained a significantly higher percentage of correct responses than using MVP (p ≤ 0.025) or with the system off (p < 0.0001). Further, in a preliminary result using Lanczos2, one participant successfully completed grating acuity and Landolt C tasks, and showed significantly better performance (p = 0.004) compared to WV, scrambled, and system off on the grating acuity task. Significance. Participants successfully completed vision tasks using a 20 electrode suprachoroidal retinal prosthesis. Vision processing with a Nyquist bandlimited image filter has shown an advantage for a light localization task. This result suggests that this and more advanced, targeted vision processing schemes may become important components of retinal prostheses to enhance performance. ClinicalTrials.gov Identifier: NCT01603576.
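
    The Lanczos2 filter named above is a windowed-sinc kernel. The sketch below shows the kernel and a simple 1-D bandlimited downsampler built from it; the resampling harness is an assumption for illustration, not the study's implementation.

    ```python
    # Lanczos2 windowed-sinc kernel and a 1-D downsampling sketch.
    import numpy as np

    def lanczos2(x: np.ndarray) -> np.ndarray:
        """Lanczos kernel with a = 2 (np.sinc is the normalized sinc)."""
        x = np.asarray(x, dtype=float)
        out = np.sinc(x) * np.sinc(x / 2.0)
        out[np.abs(x) >= 2.0] = 0.0
        return out

    def downsample_1d(signal: np.ndarray, factor: int) -> np.ndarray:
        """Bandlimited downsampling: convolve with a stretched Lanczos2, decimate."""
        taps = lanczos2(np.arange(-2 * factor, 2 * factor + 1) / factor) / factor
        filtered = np.convolve(signal, taps, mode="same")
        return filtered[::factor]
    ```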

  18. Moulded infrared optics making night vision for cars within reach

    NASA Astrophysics Data System (ADS)

    Bourget, Antoine; Guimond, Yann; Franks, John; Van Den Bergh, Marleen

    2005-02-01

    Sustainable mobility is a major public concern, making increased safety one of the major challenges for the car of the future. About half of all serious traffic accidents occur at night, while only a minority of journeys takes place at night. Reduced visibility is one of the main reasons for these striking statistics, which explains the interest of the automobile industry in Enhanced Night Vision Systems. As an answer to the need for high volume, low cost optics for these applications, Umicore has developed GASIR. This material is transparent in the near and far infrared, and is mouldable into high quality finished spherical, aspherical and diffractive lenses. Umicore's GASIR moulded lenses are an ideal solution for thermal imaging for cars (Night Vision) and for sensing systems such as pedestrian detection, collision avoidance, occupation detection, intelligent airbag systems, etc.

  19. Intelligent transportation systems and intermodal freight transportation. Final report, May-December 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aylward, A.D.

    1996-12-01

    This paper describes the various advanced technologies already in use in the intermodal freight transportation industry and addresses the opportunity for improved communication between the public and private sector regarding technology applications to the freight transportation system that could enhance the capacity of the system as a whole. The current public interest in freight transportation policy creates an opportunity to develop a shared vision of the future needs of international intermodal freight transportation in the United States. The Federal government can impact this vision by taking action in the following areas: Provide Infrastructure Funding to Support Efficiency and Global Competitiveness; Support Regional and Corridor Efforts; Understand the Freight Sector and Develop a Shared Vision of Technology Benefits; Lead Transportation Technology Efforts of Federal Agencies; and Maintain Commitment to Open ITS Architecture.

  20. Image enhancement filters significantly improve reading performance for low vision observers

    NASA Technical Reports Server (NTRS)

    Lawton, T. B.

    1992-01-01

    As people age, so do their photoreceptors; many photoreceptors in central vision stop functioning when a person reaches their late sixties or early seventies. We studied low vision observers with losses in central vision, those with age-related maculopathies. Such observers no longer see high spatial frequencies and are unable to resolve fine edge detail. We developed image enhancement filters to compensate for the low vision observer's losses in contrast sensitivity to intermediate and high spatial frequencies. The filters work by boosting the amplitude of the less visible intermediate spatial frequencies relative to the lower spatial frequencies. These image enhancement filters not only reduce the magnification needed for reading by up to 70 percent, but they also increase the observer's reading speed by 2-4 times. A summary of this research is presented.
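
    A minimal sketch of this kind of frequency-domain enhancement is shown below, assuming an FFT-based bandpass gain. The band edges, gain, and cycles-per-degree conversion are illustrative, not the fitted filters from the study.

    ```python
    # Amplify intermediate spatial frequencies while leaving low frequencies
    # untouched (illustrative band edges and gain).
    import numpy as np

    def boost_band(image, low_cpd, high_cpd, gain, pixels_per_degree):
        """Multiply amplitudes in [low_cpd, high_cpd] (cycles/degree) by `gain`."""
        F = np.fft.fft2(image)
        # fftfreq gives cycles/pixel; scale to cycles/degree of visual angle.
        fy = np.fft.fftfreq(image.shape[0]) * pixels_per_degree
        fx = np.fft.fftfreq(image.shape[1]) * pixels_per_degree
        radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
        mask = np.where((radius >= low_cpd) & (radius <= high_cpd), gain, 1.0)
        return np.real(np.fft.ifft2(F * mask))

    img = np.random.rand(256, 256)
    enhanced = boost_band(img, low_cpd=2.0, high_cpd=8.0, gain=2.5,
                          pixels_per_degree=30.0)
    ```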

  1. Development and evaluation of vision rehabilitation devices.

    PubMed

    Luo, Gang; Peli, Eli

    2011-01-01

    We have developed a range of vision rehabilitation devices and techniques for people with impaired vision due to either central vision loss or severely restricted peripheral visual field. We have conducted evaluation studies with patients to test the utilities of these techniques in an effort to document their advantages as well as their limitations. Here we describe our work on a visual field expander based on a head mounted display (HMD) for tunnel vision, a vision enhancement device for central vision loss, and a frequency domain JPEG/MPEG based image enhancement technique. All the evaluation studies included visual search paradigms that are suitable for conducting indoor controllable experiments.

  2. Integrated Communications, Navigation and Surveillance Technologies Keynote Address

    NASA Technical Reports Server (NTRS)

    Lebacqz, J. Victor

    2004-01-01

    Slides for the Keynote Address present graphics to enhance the discussion of NASA's vision, the National Space Exploration Initiative, current Mars exploration, and aeronautics exploration. The presentation also focuses on development of an Air Transportation System and transformation from present systems.

  3. The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.

    1994-01-01

    Currently available robotic systems provide limited support for CAD-based model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three-dimensional graphical models, and automated tracking functions which depend on automated machine vision. A set of tools for single and multiple focal plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between human and operator vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotics manufacturing, assembly, repair and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.

  4. The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety

    NASA Astrophysics Data System (ADS)

    Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.

    1994-02-01

    Currently available robotic systems provide limited support for CAD-based model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three-dimensional graphical models, and automated tracking functions which depend on automated machine vision. A set of tools for single and multiple focal plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between human and operator vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotics manufacturing, assembly, repair and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.

  5. Synthetic Vision CFIT Experiments for GA and Commercial Aircraft: "A Picture Is Worth A Thousand Lives"

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Hughes, Monica F.; Arthur, Jarvis J., III; Kramer, Lynda J.; Glaab, Louis J.; Bailey, Randy E.; Parrish, Russell V.; Uenking, Michael D.

    2003-01-01

    Because restricted visibility has been implicated in the majority of commercial and general aviation accidents, solutions will need to focus on how to enhance safety during instrument meteorological conditions (IMC). The NASA Synthetic Vision Systems (SVS) project is developing technologies to help achieve these goals through the synthetic presentation of how the outside world would look to the pilot if vision were not reduced. The potential safety outcome would be a significant reduction in several accident categories, such as controlled-flight-into-terrain (CFIT), that have restricted visibility as a causal factor. The paper describes two experiments that demonstrated the efficacy of synthetic vision technology to prevent CFIT accidents for both general aviation and commercial aircraft.

  6. 2006 NASA Strategic Plan

    NASA Technical Reports Server (NTRS)

    2006-01-01

    On January 14, 2004, President George W. Bush announced A Renewed Spirit of Discovery: The President's Vision for U.S. Space Exploration, a new directive for the Nation's space program. The fundamental goal of this directive is "to advance U.S. scientific, security, and economic interests through a robust space exploration program." In issuing it, the President committed the Nation to a journey of exploring the solar system and beyond: returning to the Moon in the next decade, then venturing further into the solar system, ultimately sending humans to Mars and beyond. He challenged NASA to establish new and innovative programs to enhance understanding of the planets, to ask new questions, and to answer questions that are as old as humankind. NASA enthusiastically embraced the challenge of extending a human presence throughout the solar system as the Agency's Vision, and in the NASA Authorization Act of 2005, Congress endorsed the Vision for Space Exploration and provided additional guidance for implementation. NASA is committed to achieving this Vision and to making all changes necessary to ensure success and a smooth transition. These changes will include increasing internal collaboration, leveraging personnel and facilities, developing strong, healthy NASA Centers, and fostering a safe environment of respect and open communication for employees at all levels. NASA also will ensure clear accountability and solid program management and reporting practices. Over the next 10 years, NASA will focus on six Strategic Goals to move forward in achieving the Vision for Space Exploration. Each of the six Strategic Goals is clearly defined and supported by multi-year outcomes that will enhance NASA's ability to measure and report Agency accomplishments in this quest.

  7. Federal regulation of vision enhancement devices for normal and abnormal vision

    NASA Astrophysics Data System (ADS)

    Drum, Bruce

    2006-09-01

    The Food and Drug Administration (FDA) evaluates the safety and effectiveness of medical devices and biological products as well as food and drugs. The FDA defines a device as a product that is intended, by physical means, to diagnose, treat, or prevent disease, or to affect the structure or function of the body. All vision enhancement devices fulfill this definition because they are intended to affect a function (vision) of the body. In practice, however, FDA historically has drawn a distinction between devices that are intended to enhance low vision as opposed to normal vision. Most low vision aids are therapeutic devices intended to compensate for visual impairment, and are actively regulated according to their level of risk to the patient. The risk level is usually low (e.g. Class I, exempt from 510(k) submission requirements for magnifiers that do not touch the eye), but can be as high as Class III (requiring a clinical trial and Premarket Approval (PMA) application) for certain implanted and prosthetic devices (e.g. intraocular telescopes and prosthetic retinal implants). In contrast, the FDA usually does not actively enforce its regulations for devices that are intended to enhance normal vision, are low risk, and do not have a medical intended use. However, if an implanted or prosthetic device were developed for enhancing normal vision, the FDA would likely decide to regulate it actively, because its intended use would entail a substantial medical risk to the user. Companies developing such devices should contact the FDA at an early stage to clarify their regulatory status.

  8. Enhanced and Synthetic Vision for Terminal Maneuvering Area NextGen Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K. E.; Norman, R. Michael; Williams, Steven P.; Arthur, Jarvis J., III; Shelton, Kevin J.; Prinzel, Lawrence J., III

    2011-01-01

    Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment with efficiency equivalent to visual operations. To meet this potential, research is needed for effective technology development and implementation of regulatory and design guidance to support the introduction and use of SVS/EFVS advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. A fixed-base pilot-in-the-loop simulation test was conducted at NASA Langley Research Center that evaluated the use of SVS/EFVS in NextGen low visibility ground (taxi) operations and approach/landing operations. Twelve crews flew approach and landing operations in a simulated NextGen Chicago O'Hare environment. Various scenarios tested the potential for EFVS operations in visibility as low as 1000 ft runway visual range (RVR) and for SVS to enable lower decision heights (DH) than are currently flown. Expanding the EFVS visual segment from DH to the runway in visibilities as low as 1000 ft RVR appears to be viable, as touchdown performance was excellent without any workload penalties noted for the EFVS concept tested. A DH lowered to 150 ft and/or possibly reduced visibility minima by virtue of SVS equipage appears to be viable when implemented on a Head-Up Display, but the landing data suggest further study for head-down implementations.

  9. Wearable optical-digital assistive device for low vision students.

    PubMed

    Afinogenov, Boris I; Coles, James B; Parthasarathy, Sailashri; Press-Williams, Jessica; Tsykunova, Ralina; Vasilenko, Anastasia; Narain, Jaya; Hanumara, Nevan C; Winter, Amos; Satgunam, PremNandhini

    2016-08-01

    People with low vision have limited residual vision that can be greatly enhanced through high levels of magnification. Current assistive technologies are tailored for far field or near field magnification but not both. In collaboration with L.V. Prasad Eye Institute (LVPEI), a wearable, optical-digital assistive device was developed to meet the near and far field magnification needs of students. The critical requirements, system architecture and design decisions for each module were analyzed and quantified. A proof-of-concept prototype was fabricated that can achieve magnification up to 8x and a battery life of up to 8 hours. Potential user evaluation with a Snellen chart showed identification of characters not previously discernible. Further feedback suggested that the system could be used as a general accessibility aid.

  10. Hand-writing motion tracking with vision-inertial sensor fusion: calibration and error correction.

    PubMed

    Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J

    2014-08-25

    The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to the low sampling rates supported by web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow update rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy over time. This paper starts with a discussion of the developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan variance analysis and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model.
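
    As a concrete illustration of the noise-identification step, the sketch below computes the standard (non-overlapping) Allan variance of an inertial record. The sample data and cluster sizes are placeholders, and the EKF fusion itself is not reproduced.

    ```python
    # Non-overlapping Allan variance for inertial-sensor noise characterization.
    import numpy as np

    def allan_variance(samples: np.ndarray, dt: float, m: int):
        """Return (tau, AVAR) for cluster length m (averaging time tau = m * dt)."""
        tau = m * dt
        n_clusters = len(samples) // m
        means = samples[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        return tau, 0.5 * np.mean(np.diff(means) ** 2)

    # Example: white-noise gyro record sampled at 100 Hz.
    rng = np.random.default_rng(0)
    gyro = rng.normal(0.0, 0.01, 100_000)
    results = [allan_variance(gyro, dt=0.01, m=m) for m in (10, 100, 1000)]
    ```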

  11. Synthetic and Enhanced Vision Systems for NextGen (SEVS) Simulation and Flight Test Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Shelton, Kevin J.; Kramer, Lynda J.; Ellis,Kyle K.; Rehfeld, Sherri A.

    2012-01-01

    The Synthetic and Enhanced Vision Systems for NextGen (SEVS) simulation and flight tests are jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA). The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SEVS operational and system-level performance capabilities. Nine test flights (38 flight hours) were conducted over the summer and fall of 2011. The evaluations were flown in Gulfstream's G450 flight test aircraft outfitted with the SEVS technology under very low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 ft to 2400 ft visibility) into various airports from Louisiana to Maine. In-situ flight performance and subjective workload and acceptability data were collected in collaboration with ground simulation studies at LaRC's Research Flight Deck simulator.

  12. Present and future of vision systems technologies in commercial flight operations

    NASA Astrophysics Data System (ADS)

    Ward, Jim

    2016-05-01

    The development of systems to enable pilots of all types of aircraft to see through fog, clouds, and sandstorms and land in low visibility has been widely discussed and researched across aviation. For military applications, the goal has been to operate in a Degraded Visual Environment (DVE), using sensors to enable flight crews to see and operate without concern for weather that limits human visibility. These military DVE goals are mainly oriented to the off-field landing environment. For commercial aviation, the Federal Aviation Administration (FAA) implemented operational regulations in 2004 that allow the flight crew to see the runway environment using an Enhanced Flight Vision System (EFVS) and continue the approach below the normal landing decision height. The FAA is expanding the current use and economic benefit of EFVS technology and will soon permit landing without any natural vision using real-time weather-penetrating sensors. The operational goals of both of these efforts, DVE and EFVS, have been the stimulus for development of new sensors and vision displays to create the modern flight deck.

  13. Integrated navigation, flight guidance, and synthetic vision system for low-level flight

    NASA Astrophysics Data System (ADS)

    Mehler, Felix E.

    2000-06-01

    Future military transport aircraft will require a new approach with respect to the avionics suite to fulfill an ever-changing variety of missions. The most demanding phases of these missions are typically the low level flight segments, including tactical terrain following/avoidance, payload drop, and/or on-board autonomous landing at forward operating strips without ground-based infrastructure. As a consequence, individual components and systems must become more integrated to offer a higher degree of reliability, integrity, flexibility and autonomy over existing systems while reducing crew workload. The integration of digital terrain data not only introduces synthetic vision into the cockpit, but also enhances navigation and guidance capabilities. At DaimlerChrysler Aerospace AG Military Aircraft Division (Dasa-M), an integrated navigation, flight guidance and synthetic vision system, based on digital terrain data, has been developed to fulfill the requirements of the Future Transport Aircraft (FTA). The fusion of three independent navigation sensors provides a more reliable and precise solution to both the 4D-flight guidance and the display component, which comprises a Head-up and a Head-down Display with synthetic vision. This paper will present the system, its integration into the DLR's VFW 614 Advanced Technology Testing Aircraft System (ATTAS) and the results of the flight-test campaign.

  14. 77 FR 55895 - Meeting: RTCA Program Management Committee (PMC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-11

    ... Enhanced Flight Vision System to Enable All-Weather Approach, Landing and Roll-Out to a Safe Taxi Speed... but limited to space availability. With the approval of the chairman, members of the public may...

  15. The Adoption of Technology-Enhanced Instruction to Support Education for All

    ERIC Educational Resources Information Center

    Khatib, Nahla M.

    2014-01-01

    This study aimed at investigating the efforts of academic tutors at Arab Open University in Jordan (AOU) to implement technology enhanced instruction through using the learning management system software Moodle (LMS). The AOU has adopted an open learning approach. The aim is to support its vision to reach students in different parts of Jordan and…

  16. Novel compact panomorph lens based vision system for monitoring around a vehicle

    NASA Astrophysics Data System (ADS)

    Thibault, Simon

    2008-04-01

    Automotive applications are one of the largest vision-sensor market segments and one of the fastest growing. The trend toward using more and more sensors in cars is driven both by legislation and by consumer demands for higher safety and better driving experiences. Awareness of what directly surrounds a vehicle affects safe driving and manoeuvring. Consequently, panoramic 360° field-of-view imaging can contribute more to the perception of the world around the driver than any other sensor. However, to obtain complete vision around the car, several sensor systems are necessary. To solve this issue, a customized imaging system based on a panomorph lens will provide the maximum information for the driver with a reduced number of sensors. A panomorph lens is a hemispheric wide angle anamorphic lens with enhanced resolution in predefined zones of interest. Because panomorph lenses are optimized to a custom angle-to-pixel relationship, vision systems provide ideal image coverage that reduces and optimizes the processing. We present various scenarios which may benefit from the use of a custom panoramic sensor. We also discuss the technical requirements of such a vision system. Finally, we demonstrate how the panomorph based visual sensor is probably one of the most promising ways to fuse many sensors into one. For example, a single panoramic sensor on the front of a vehicle could provide all the information necessary for assistance in crash avoidance, lane tracking, early warning, park aids, road sign detection, and various video monitoring views.

  17. Ingestible wireless capsules for enhanced diagnostic inspection of gastrointestinal tract

    NASA Astrophysics Data System (ADS)

    Rasouli, Mahdi; Kencana, Andy Prima; Huynh, Van An; Ting, Eng Kiat; Lai, Joshua Chong Yue; Wong, Kai Juan; Tan, Su Lim; Phee, Soo Jay

    2011-03-01

    Wireless capsule endoscopy has become a common procedure for diagnostic inspection of the gastrointestinal tract. This method offers a less-invasive alternative to traditional endoscopy by eliminating its uncomfortable procedures. Moreover, it provides the opportunity to explore inaccessible areas of the small intestine. Current capsule endoscopes, however, move by peristalsis and are not capable of detailed, on-demand inspection of desired locations. Here, we propose and develop two wireless endoscopes with maneuverable vision systems to enhance the diagnosis of gastrointestinal disorders. The vision systems in these capsules are equipped with mechanical actuators to adjust the position of the camera. This may help to cover larger areas of the digestive tract and investigate desired locations. Preliminary experimental results showed that the developed platform could successfully communicate with the external control unit via the human body and adjust the position of the camera to limited degrees.

  18. Low-latency situational awareness for UxV platforms

    NASA Astrophysics Data System (ADS)

    Berends, David C.

    2012-06-01

    Providing high quality, low latency video from unmanned vehicles through bandwidth-limited communications channels remains a formidable challenge for modern vision system designers. SRI has developed a number of enabling technologies to address this, including the use of SWaP-optimized Systems-on-a-Chip which provide multispectral fusion and contrast enhancement as well as H.264 video compression. Further, the use of salience-based image prefiltering prior to image compression greatly reduces output video bandwidth by selectively blurring non-important scene regions. Combining these technologies with our customization of the VLC open source video viewer for low latency video decoding, SRI developed a prototype high performance, high quality vision system for UxV applications in support of very demanding system latency requirements and user CONOPS.

  19. Synthetic vision in the cockpit: 3D systems for general aviation

    NASA Astrophysics Data System (ADS)

    Hansen, Andrew J.; Rybacki, Richard M.; Smith, W. Garth

    2001-08-01

    Synthetic vision has the potential to improve safety in aviation through better pilot situational awareness and enhanced navigational guidance. The technological advances enabling synthetic vision are GPS based navigation (position and attitude) systems and efficient graphical systems for rendering 3D displays in the cockpit. A benefit for military, commercial, and general aviation platforms alike is the relentless drive to miniaturize computer subsystems. Processors, data storage, graphical and digital signal processing chips, RF circuitry, and bus architectures are keeping pace with or out-pacing Moore's Law in the transition to mobile computing and embedded systems. The tandem of fundamental GPS navigation services such as the US FAA's Wide Area and Local Area Augmentation Systems (WAAS and LAAS) and commercially viable mobile rendering systems puts synthetic vision well within the technological reach of general aviation. Given the appropriate navigational inputs, low cost and power efficient graphics solutions are capable of rendering a pilot's out-the-window view into visual databases with photo-specific imagery and geo-specific elevation and feature content. Looking beyond the single airframe, proposed aviation technologies such as ADS-B would provide a communication channel for bringing traffic information on-board and into the cockpit visually via the 3D display for additional pilot awareness. This paper gives a view of current 3D graphics system capability suitable for general aviation and presents a potential road map following the current trends.

  20. Multi-Image Registration for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2002-01-01

    An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
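
    The second method described above lends itself to a short sketch: fitting a 2x3 affine transform to control-point pairs by least-squares regression. The control points below are synthetic placeholders, not flight data.

    ```python
    # Least-squares affine fit from user-selected control-point pairs.
    import numpy as np

    def fit_affine(src_pts: np.ndarray, dst_pts: np.ndarray) -> np.ndarray:
        """Return the 2x3 affine matrix A minimizing ||[x, y, 1] @ A.T - dst||^2."""
        ones = np.ones((len(src_pts), 1))
        X = np.hstack([src_pts, ones])        # N x 3 design matrix
        A, *_ = np.linalg.lstsq(X, dst_pts, rcond=None)
        return A.T                            # 2 x 3: [[a, b, tx], [c, d, ty]]

    # Example with four hypothetical control points.
    src = np.array([[10, 10], [100, 12], [15, 90], [95, 95]], dtype=float)
    dst = src * 1.05 + np.array([3.0, -2.0])  # synthetic "second sensor" points
    A = fit_affine(src, dst)
    ```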

  1. Effects of contour enhancement on low-vision preference and visual search.

    PubMed

    Satgunam, Premnandhini; Woods, Russell L; Luo, Gang; Bronstad, P Matthew; Reynolds, Zachary; Ramachandra, Chaithanya; Mel, Bartlett W; Peli, Eli

    2012-09-01

    To determine whether image enhancement improves visual search performance and whether enhanced images are also preferred by subjects with vision impairment, subjects (n = 24) with vision impairment (vision: 20/52 to 20/240) completed visual search and preference tasks for 150 static images that were enhanced to increase the visual saliency of object contours. Subjects were divided into two groups and were shown three enhancement levels. Original and medium enhancements were shown to both groups; high enhancement was shown to group 1, and low enhancement was shown to group 2. For search, subjects pointed to an object that matched a search target displayed at the top left of the screen. An "integrated search performance" measure (area under the curve of cumulative correct response rate over search time) quantified performance. For preference, subjects indicated the preferred side when viewing the same image with different enhancement levels on side-by-side high-definition televisions. Contour enhancement did not improve performance in the visual search task. Group 1 subjects significantly (p < 0.001) rejected the high enhancement and showed no preference for medium enhancement over the original images. Group 2 subjects significantly preferred (p < 0.001) both the medium and the low enhancement levels over the original. Contrast sensitivity was correlated with both preference and performance; subjects with worse contrast sensitivity performed worse in the search task (ρ = 0.77, p < 0.001) and preferred more enhancement (ρ = -0.47, p = 0.02). No correlation between visual search performance and enhancement preference was found. However, a small group of subjects (n = 6) in a narrow range of mid-contrast sensitivity performed better with the enhancement, and most (n = 5) also preferred the enhancement. Preferences for image enhancement can be dissociated from search performance in people with vision impairment. Further investigations are needed to study the relationships between preference and performance in the narrow range of mid-contrast sensitivity where a beneficial effect of enhancement may exist.

  2. Dynamic mobility applications, program evaluation : national-level impacts and costs estimation : final report.

    DOT National Transportation Integrated Search

    2016-07-01

    The vision of the Dynamic Mobility Applications (DMA) program is to expedite the development, testing, and deployment of innovative mobility applications that maximize system productivity and enhance mobility of individuals within the surface transpo...

  3. Vision-aided Monitoring and Control of Thermal Spray, Spray Forming, and Welding Processes

    NASA Technical Reports Server (NTRS)

    Agapakis, John E.; Bolstad, Jon

    1993-01-01

    Vision is one of the most powerful forms of non-contact sensing for monitoring and control of manufacturing processes. However, processes involving an arc plasma or flame such as welding or thermal spraying pose particularly challenging problems to conventional vision sensing and processing techniques. The arc or plasma is not typically limited to a single spectral region and thus cannot be easily filtered out optically. This paper presents an innovative vision sensing system that uses intense stroboscopic illumination to overpower the arc light and produce a video image that is free of arc light or glare and dedicated image processing and analysis schemes that can enhance the video images or extract features of interest and produce quantitative process measures which can be used for process monitoring and control. Results of two SBIR programs sponsored by NASA and DOE and focusing on the application of this innovative vision sensing and processing technology to thermal spraying and welding process monitoring and control are discussed.

  4. Hand-Writing Motion Tracking with Vision-Inertial Sensor Fusion: Calibration and Error Correction

    PubMed Central

    Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J.

    2014-01-01

    The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to the low sampling rates supported by web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow update rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy over time. This paper starts with a discussion of the developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan variance analysis and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model. PMID:25157546

  5. Machine Vision Within The Framework Of Collective Neural Assemblies

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1990-03-01

    The proposed mechanism for designing a robust machine vision system is based on the dynamic activity generated by the various neural populations embedded in nervous tissue. It is postulated that a hierarchy of anatomically distinct tissue regions is involved in visual sensory information processing. Each region may be represented as a planar sheet of densely interconnected neural circuits. Spatially localized aggregates of these circuits represent collective neural assemblies. Four dynamically coupled neural populations are assumed to exist within each assembly. In this paper we present a state-variable model for a tissue sheet derived from empirical studies of population dynamics. Each population is modelled as a nonlinear second-order system. It is possible to emulate certain observed physiological and psychophysiological phenomena of biological vision by properly programming the interconnective gains. Important early visual phenomena such as temporal and spatial noise insensitivity, contrast sensitivity and edge enhancement will be discussed for a one-dimensional tissue model.
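
    To make the modelling idea concrete, here is a hedged sketch of a single population simulated as a nonlinear second-order system. The natural frequency, damping, and sigmoid coupling are assumptions for illustration, not the authors' fitted parameters.

    ```python
    # One neural population as a driven, damped second-order system with a
    # static sigmoid nonlinearity on its input (illustrative parameters).
    import numpy as np

    def simulate_population(u, dt=1e-3, wn=2 * np.pi * 10.0, zeta=0.5, gain=1.0):
        """Integrate x'' + 2*zeta*wn*x' + wn^2*x = wn^2*sigmoid(u) with Euler steps."""
        x, v = 0.0, 0.0
        out = np.empty_like(u)
        for k, drive in enumerate(u):
            f = gain / (1.0 + np.exp(-drive))   # sigmoid input nonlinearity
            a = wn * wn * (f - x) - 2.0 * zeta * wn * v
            v += a * dt
            x += v * dt
            out[k] = x
        return out

    t = np.arange(0.0, 1.0, 1e-3)
    drive = np.where(t > 0.1, 1.0, 0.0)         # step input at 100 ms
    response = simulate_population(drive)
    ```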

  6. 75 FR 38391 - Special Conditions: Boeing 757-200 With Enhanced Flight Vision System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-02

    .... SUMMARY: These special conditions are issued for the Boeing Model 757-200 series airplanes. These... system (EFVS). The EFVS is a novel or unusual design feature which consists of a head-up display (HUD... regulations do not contain adequate or appropriate safety standards for this design feature. These special...

  7. EVA Communications Avionics and Informatics

    NASA Technical Reports Server (NTRS)

    Carek, David Andrew

    2005-01-01

    The Glenn Research Center is investigating and developing technologies for communications, avionics, and information systems that will significantly enhance extravehicular activity capabilities to support the Vision for Space Exploration. Several ongoing research and development efforts are described in this presentation, including system requirements formulation, technology development efforts, trade studies, and operational concept demonstrations.

  8. Low vision goggles: optical design studies

    NASA Astrophysics Data System (ADS)

    Levy, Ofer; Apter, Boris; Efron, Uzi

    2006-08-01

    Low Vision (LV) due to Age Related Macular Degeneration (AMD), Glaucoma or Retinitis Pigmentosa (RP) is a growing problem, which will affect more than 15 million people in the U.S. alone in 2010. Low Vision Aid Goggles (LVG) have been under development at Ben-Gurion University and the Holon Institute of Technology. The device is based on a unique Image Transceiver Device (ITD), combining the functions of imaging and display in a single chip. Using the ITD-based goggles, specifically designed for the visually impaired, our aim is to develop a head-mounted device that will allow the capture of the ambient scenery, perform the necessary image enhancement and processing, and re-direct it to the healthy part of the patient's retina. This design methodology will allow the Goggles to be mobile, multi-task and environment-adaptive. In this paper we present the optical design considerations of the Goggles, including a preliminary performance analysis. Common vision deficiencies of LV patients are usually divided into two main categories: peripheral vision loss (PVL) and central vision loss (CVL), each requiring a different Goggles design. A set of design principles was defined for each category. Four main optical designs are presented and compared according to the design principles. Each of the designs is presented in two main optical configurations: a see-through system and a video imaging system. The use of full-color ITD-based Goggles is also discussed.

  9. Advanced Materials, Technologies, and Complex Systems Analyses: Emerging Opportunities to Enhance Urban Water Security.

    PubMed

    Zodrow, Katherine R; Li, Qilin; Buono, Regina M; Chen, Wei; Daigger, Glen; Dueñas-Osorio, Leonardo; Elimelech, Menachem; Huang, Xia; Jiang, Guibin; Kim, Jae-Hong; Logan, Bruce E; Sedlak, David L; Westerhoff, Paul; Alvarez, Pedro J J

    2017-09-19

    Innovation in urban water systems is required to address the increasing demand for clean water due to population growth and aggravated water stress caused by water pollution, aging infrastructure, and climate change. Advances in materials science, modular water treatment technologies, and complex systems analyses, coupled with the drive to minimize the energy and environmental footprints of cities, provide new opportunities to ensure a resilient and safe water supply. We present a vision for enhancing efficiency and resiliency of urban water systems and discuss approaches and research needs for overcoming associated implementation challenges.

  10. Color vision testing with a computer graphics system: preliminary results.

    PubMed

    Arden, G; Gündüz, K; Perry, S

    1988-06-01

    We report a method for computer enhancement of color vision tests. In our graphics system 256 colors are selected from a much larger range and displayed on a screen divided into 768 x 288 pixels. Eight-bit digital-to-analogue converters drive a high quality monitor with separate inputs to the red, green, and blue amplifiers and calibrated gun chromaticities. The graphics are controlled by a PASCAL program written for a personal computer, which calculates the values of the red, green, and blue signals and specifies them in Commission Internationale de l'Eclairage (CIE) X, Y, and Z fundamentals, so changes in chrominance occur without changes in luminance. The system for measuring color contrast thresholds with gratings is more than adequate in normal observers. In patients with mild retinal damage in whom other tests of visual function are normal, this method of testing color vision shows specific increases in contrast thresholds along tritan color-confusion lines. By the time the Hardy-Rand-Rittler and Farnsworth-Munsell 100-hue tests disclose abnormalities, gross defects in color contrast threshold can be seen with our system.
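
    A small sketch of the underlying colorimetry is given below: with calibrated gun chromaticities, a target CIE XYZ stimulus maps to linear R, G, B drive values, so chromaticity can be modulated at constant luminance Y. The primaries and gun luminances here are illustrative placeholders, not the study's monitor calibration.

    ```python
    # Constant-luminance color computation from calibrated gun chromaticities.
    import numpy as np

    def gun_matrix(chromaticities, gun_luminances):
        """Columns are the XYZ tristimulus values of each gun at full drive.
        chromaticities: [(x, y)] for R, G, B; gun_luminances: Y at full drive."""
        cols = []
        for (x, y), Y in zip(chromaticities, gun_luminances):
            cols.append([x / y * Y, Y, (1 - x - y) / y * Y])
        return np.array(cols).T

    # Illustrative Rec.709-like primaries with assumed gun luminances (cd/m^2).
    M = gun_matrix([(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)],
                   [20.0, 70.0, 10.0])

    def rgb_for_xyz(target_xyz):
        """Linear drive fractions (0..1 in gamut) reproducing the target XYZ;
        varying chromaticity at fixed Y leaves luminance unchanged."""
        return np.linalg.solve(M, np.asarray(target_xyz, dtype=float))
    ```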

  11. NASA's strategic plan for education. A strategy for change, 1993-1998

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA's education vision is to promote excellence in America's education system through enhancing and expanding scientific and technological competence. In doing so, NASA strives to be recognized by the education community as the premier mission agency in support of the National Education Goals and in the development and implementation of education standards. To realize this vision, NASA has clearly defined and developed three specific goals to promote excellence in education. Specific objectives and milestones are defined for each goal in the body of this strategic plan.

  12. Comparison of vacuum rise time, vacuum limit accuracy, and occlusion break surge of 3 new phacoemulsification systems.

    PubMed

    Han, Young Keun; Miller, Kevin M

    2009-08-01

    To compare vacuum rise time, vacuum limit accuracy, and occlusion break surge of 3 new phacoemulsification machines. Jules Stein Eye Institute and Department of Ophthalmology, David Geffen School of Medicine at UCLA, Los Angeles, California, USA. The vacuum rise time under normal and enhanced aspiration modes, vacuum limit accuracy, and occlusion break surge of the Infiniti Vision System, Stellaris Vision Enhancement System, and WhiteStar Signature Phacoemulsification System were tested. Vacuum rise time and limit accuracy were measured at limit settings of 400 mm Hg and 600 mm Hg. Surge area was recorded at vacuum limit settings of 200 mm Hg, 300 mm Hg, 400 mm Hg, and 500 mm Hg. The Infiniti had the fastest vacuum rise times under normal and enhanced aspiration modes. At 4 seconds, the vacuum limit accuracy was greatest with the Infiniti at the 400 mm Hg limit and the Signature at the 600 mm Hg limit. The Stellaris did not reach either vacuum target. The Infiniti performed better than the other 2 machines during testing of occlusion break surge at all vacuum limit settings above 200 mm Hg. Under controlled laboratory test conditions, the Infiniti had the fastest vacuum rise time, greatest vacuum limit accuracy at 400 mm Hg, and least occlusion break surge. These results can be explained by the lower compliance of the Infiniti system.

  13. 360 degree vision system: opportunities in transportation

    NASA Astrophysics Data System (ADS)

    Thibault, Simon

    2007-09-01

    Panoramic technologies are experiencing new and exciting opportunities in the transportation industries. The advantages of panoramic imagers are numerous: increased area coverage with fewer cameras, imaging of multiple targets simultaneously, instantaneous full-horizon detection, easier integration of various applications on the same imager, and others. This paper reports our work on panomorph optics and their potential usage in transportation applications. The novel panomorph lens is a new type of high resolution panoramic imager perfectly suited to the transportation industries. The panomorph lens uses optimization techniques to improve the performance of a customized optical system for specific applications. By adding a custom angle-to-pixel relation at the optical design stage, the optical system provides ideal image coverage which is designed to reduce and optimize the processing. The optics can be customized for the visible, near infra-red (NIR) or infra-red (IR) wavebands. The panomorph lens is designed to optimize the cost per pixel, which is particularly important in the IR. We discuss the use of the 360 degree vision system, which can enhance on-board collision avoidance systems, intelligent cruise controls and parking assistance. 360 degree panoramic vision systems might enable safer highways and a significant reduction in casualties.

  14. 75 FR 28852 - Ninth Meeting: Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-24

    ... approach and landing. FOR FURTHER INFORMATION CONTACT: (1) RTCA Secretariat, 1828 L Street, NW., Suite 805, Washington, DC 20036; telephone (202) 833-9339; fax (202) 833-9434; Web site http://www.rtca.org...

  15. A landscape vision for integrating industrial crops into biofuel systems

    USDA-ARS?s Scientific Manuscript database

    Achieving energy independence and security through domestic production of renewable biofuels is feasible but will require a different landscape than we have with current agricultural practices. Integrating industrial crops such as Canola, Camelina, or Cuphea could offer many opportunities to enhance...

  16. Helicopter flights with night-vision goggles: Human factors aspects

    NASA Technical Reports Server (NTRS)

    Brickner, Michael S.

    1989-01-01

    Night-vision goggles (NVGs) and, in particular, the advanced, helmet-mounted Aviators Night-Vision-Imaging System (ANVIS) allow helicopter pilots to perform low-level flight at night. An NVG consists of light intensifier tubes, which amplify low-intensity ambient illumination (starlight and moonlight), and an optical system, which together produce a bright image of the scene. However, these NVGs do not turn night into day, and, while they may often provide significant advantages over unaided night flight, they may also result in visual fatigue, high workload, and safety hazards. These problems reflect both system limitations and human-factors issues. A brief description of the technical characteristics of NVGs and of human night-vision capabilities is followed by a description and analysis of specific perceptual problems which occur with the use of NVGs in flight. Some of the issues addressed include: limitations imposed by a restricted field of view; problems related to binocular rivalry; the consequences of inappropriate focusing of the eye; the effects of ambient illumination levels and of various types of terrain on image quality; difficulties in distance and slope estimation; effects of dazzling; and visual fatigue and superimposed symbology. These issues are described and analyzed in terms of their possible consequences on helicopter pilot performance. The additional influence of individual differences among pilots is emphasized. Thermal imaging systems (forward-looking infrared (FLIR)) are described briefly and compared to light intensifier systems (NVGs). Many of the phenomena described are not yet well understood; more research is required to better understand the human-factors problems created by the use of NVGs and other night-vision aids, to enhance system design, and to improve training methods and simulation techniques.

  17. A Hybrid Synthetic Vision System for the Tele-operation of Unmanned Vehicles

    NASA Technical Reports Server (NTRS)

    Delgado, Frank; Abernathy, Mike

    2004-01-01

    A system called SmartCam3D (SC3D) has been developed to provide enhanced situational awareness for operators of a remotely piloted vehicle. SC3D is a Hybrid Synthetic Vision System (HSVS) that combines live sensor data with information from a Synthetic Vision System (SVS). By combining the two information sources, operators are afforded the advantages of each approach: the live sensor provides real-time information for the region of interest, while the SVS provides information-rich visuals that function under all weather and visibility conditions. The combination also allows the system to circumvent some of the limitations of each approach. Video sensors are of little use when visibility is hampered by rain, snow, sand, fog, or smoke, while an SVS can suffer from data-freshness problems: the data used to create SVS visuals is typically collected by an aircraft or satellite flying overhead, possibly weeks, months, or even years earlier, so the information in an SVS visual can be outdated and inaccurate. SC3D was used in the remote cockpit during flight tests of the X-38 132 and 131R vehicles at the NASA Dryden Flight Research Center, and during the operation of military Unmanned Aerial Vehicles. This presentation will provide an overview of the system, its evolution, the results of flight tests, and future plans. Furthermore, the safety benefits of SC3D over traditional and pure synthetic vision systems will be discussed.
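
    The core of the hybrid approach is a per-frame arbitration between the live and the synthetic image. A minimal sketch of that idea follows, blending the two sources by a visibility estimate supplied by some external metric; the linear blend and the scalar visibility input are assumptions for illustration, not NASA's actual SC3D mechanization.

        import numpy as np

        def hybrid_frame(live, synthetic, visibility):
            """Blend a live sensor frame with a synthetic-vision rendering.

            visibility in [0, 1]: 1.0 = clear air (trust the live sensor),
            0.0 = obscured (fall back to the synthetic database view).
            Both frames are float arrays of identical shape.
            """
            v = float(np.clip(visibility, 0.0, 1.0))
            return v * live + (1.0 - v) * synthetic

        # Toy usage with 2x2 grayscale frames, heavy fog (visibility 0.25):
        live = np.array([[0.9, 0.8], [0.7, 0.6]])
        synth = np.array([[0.2, 0.2], [0.3, 0.3]])
        print(hybrid_frame(live, synth, visibility=0.25))  # mostly synthetic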

  18. Performance and cost improvements in the display control module for DVE

    NASA Astrophysics Data System (ADS)

    Thomas, J.; Lorimer, S.

    2009-05-01

    The DCM (Display Control Module) for the Driver's Vision Enhancer system is the display component of a relatively low-cost IR imaging system for land vehicles. Recent changes to operational needs associated with asymmetric warfare have added daytime operations to the uses of this mature system. This paper discusses cost/performance tradeoffs and offers thoughts on a "DVE of the future" in light of these new operational needs.

  19. Latency Requirements for Head-Worn Display S/EVS Applications

    NASA Technical Reports Server (NTRS)

    Bailey, Randall E.; Arthur, Jarvis J., III; Williams, Steven P.

    2004-01-01

    NASA's Aviation Safety Program, Synthetic Vision Systems Project is conducting research in advanced flight deck concepts, such as Synthetic/Enhanced Vision Systems (S/EVS), for commercial and business aircraft. An emerging thrust in this activity is the development of spatially-integrated, large field-of-regard information display systems. Head-worn or helmet-mounted display systems are being proposed as one method in which to meet this objective. System delays or latencies inherent to spatially-integrated, head-worn displays critically influence the display utility, usability, and acceptability. Research results from three different, yet similar technical areas - flight control, flight simulation, and virtual reality - are collectively assembled in this paper to create a global perspective of delay or latency effects in head-worn or helmet-mounted display systems. Consistent definitions and measurement techniques are proposed herein for universal application, and latency requirements for Head-Worn Display S/EVS applications are drafted. Future research areas are defined.

  20. Latency requirements for head-worn display S/EVS applications

    NASA Astrophysics Data System (ADS)

    Bailey, Randall E.; Arthur, Jarvis J., III; Williams, Steven P.

    2004-08-01

    NASA's Aviation Safety Program, Synthetic Vision Systems Project is conducting research in advanced flight deck concepts, such as Synthetic/Enhanced Vision Systems (S/EVS), for commercial and business aircraft. An emerging thrust in this activity is the development of spatially-integrated, large field-of-regard information display systems. Head-worn or helmet-mounted display systems are being proposed as one method in which to meet this objective. System delays or latencies inherent to spatially-integrated, head-worn displays critically influence the display utility, usability, and acceptability. Research results from three different, yet similar technical areas - flight control, flight simulation, and virtual reality - are collectively assembled in this paper to create a global perspective of delay or latency effects in head-worn or helmet-mounted display systems. Consistent definitions and measurement techniques are proposed herein for universal application and latency requirements for Head-Worn Display S/EVS applications are drafted. Future research areas are defined.
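
    A standard bench technique for the end-to-end latency these papers quantify is to drive the head tracker with a known motion and cross-correlate it against a photodiode watching the display. The sketch below simulates that measurement; the 45 ms delay, the 1.5 Hz oscillation, and the noise level are made-up test values, and the method is a generic one rather than the specific procedure proposed in the paper.

        import numpy as np

        fs = 1000.0                              # sample rate, Hz
        t = np.arange(0.0, 2.0, 1.0 / fs)
        true_delay_s = 0.045                     # simulated 45 ms system latency
        tracker = np.sin(2 * np.pi * 1.5 * t)    # commanded head motion
        display = (np.sin(2 * np.pi * 1.5 * (t - true_delay_s))
                   + 0.05 * np.random.default_rng(0).normal(size=t.size))

        # Latency = lag (in samples) that best aligns display to tracker.
        lags = np.arange(-200, 201)              # search window: +/- 200 ms
        corr = [np.dot(tracker[200:-200], np.roll(display, -k)[200:-200])
                for k in lags]
        est = lags[int(np.argmax(corr))] / fs * 1000.0
        print("estimated latency: %.0f ms" % est)    # ~45 ms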

  1. Performance, Cost, and Financial Parameters of Geothermal District Heating Systems for Market Penetration Modeling under Various Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckers, Koenraad J; Young, Katherine R

    Geothermal district heating (GDH) systems have limited penetration in the U.S., with an estimated installed capacity of only 100 MWth for a total of 21 sites. We see higher deployment in other regions, for example, in Europe with an installed capacity of more than 4,700 MWth for 257 GDH sites. The U.S. Department of Energy Geothermal Vision (GeoVision) Study is currently looking at the potential to increase the deployment in the U.S. and to understand the impact of this increased deployment. This paper reviews 31 performance, cost, and financial parameters as input for numerical simulations describing GDH system deployment in support of the GeoVision effort. The focus is on GDH systems using hydrothermal and Enhanced Geothermal System resources in the U.S.; ground-source heat pumps and heat-to-electricity conversion technology were excluded. Parameters investigated include 1) capital and operation and maintenance costs for both subsurface and surface equipment; 2) performance factors such as resource recovery factors, well flow rates, and system efficiencies; and 3) financial parameters such as inflation, interest, and tax rates. Current values as well as potential future improved values under various scenarios are presented. Sources of data considered include academic and popular literature, software tools such as GETEM and GEOPHIRES, industry interviews, and analysis conducted by other task forces for the GeoVision Study, e.g., on the drilling costs and reservoir performance.
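
    Parameters of the three kinds listed (capital/O&M costs, performance factors, financial rates) typically combine in a levelized-cost-of-heat calculation inside such deployment models. The sketch below shows that combination; all numeric values are illustrative placeholders, not figures from the GeoVision study or this paper.

        # Minimal levelized-cost-of-heat (LCOH) sketch. All inputs below
        # are illustrative, not values from the GeoVision study.
        capex = 20e6            # $, wells + surface equipment
        om_frac = 0.02          # annual O&M as a fraction of capex
        capacity_mwth = 10.0    # installed thermal capacity, MWth
        capacity_factor = 0.5   # average load relative to peak
        discount = 0.07         # real discount rate
        lifetime = 30           # years

        annual_heat_mwh = capacity_mwth * capacity_factor * 8760.0
        # Present-value factor for a uniform annual series:
        pv = sum(1.0 / (1.0 + discount) ** y for y in range(1, lifetime + 1))
        lcoh = (capex + om_frac * capex * pv) / (annual_heat_mwh * pv)
        print("LCOH: %.1f $/MWh-th" % lcoh)   # ~46 $/MWh-th for these inputs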

  2. Eyes Wide Shut: the impact of dim-light vision on neural investment in marine teleosts.

    PubMed

    Iglesias, Teresa L; Dornburg, Alex; Warren, Dan L; Wainwright, Peter C; Schmitz, Lars; Economo, Evan P

    2018-05-28

    Understanding how organismal design evolves in response to environmental challenges is a central goal of evolutionary biology. In particular, assessing the extent to which environmental requirements drive general design features among distantly related groups is a major research question. The visual system is a critical sensory apparatus that evolves in response to changing light regimes. In vertebrates, the optic tectum is the primary visual processing centre of the brain and yet it is unclear how or whether this structure evolves while lineages adapt to changes in photic environment. On one hand, dim-light adaptation is associated with larger eyes and enhanced light-gathering power that could require larger information processing capacity. On the other hand, dim-light vision may evolve to maximize light sensitivity at the cost of acuity and colour sensitivity, which could require less processing power. Here, we use X-ray microtomography and phylogenetic comparative methods to examine the relationships between diel activity pattern, optic morphology, trophic guild and investment in the optic tectum across the largest radiation of vertebrates, teleost fishes. We find that despite driving the evolution of larger eyes, enhancement of the capacity for dim-light vision generally is accompanied by a decrease in investment in the optic tectum. These findings underscore the importance of considering diel activity patterns in comparative studies and demonstrate how vision plays a role in brain evolution, illuminating common design principles of the vertebrate visual system. © 2018 European Society For Evolutionary Biology.

  3. TOXICITY TESTING IN THE 21ST CENTURY: A VISION AND A STRATEGY

    PubMed Central

    Krewski, Daniel; Acosta, Daniel; Andersen, Melvin; Anderson, Henry; Bailar, John C.; Boekelheide, Kim; Brent, Robert; Charnley, Gail; Cheung, Vivian G.; Green, Sidney; Kelsey, Karl T.; Kerkvliet, Nancy I.; Li, Abby A.; McCray, Lawrence; Meyer, Otto; Patterson, Reid D.; Pennie, William; Scala, Robert A.; Solomon, Gina M.; Stephens, Martin; Yager, James; Zeise, Lauren

    2015-01-01

    With the release of the landmark report Toxicity Testing in the 21st Century: A Vision and a Strategy, the U.S. National Academy of Sciences, in 2007, precipitated a major change in the way toxicity testing is conducted. The report envisions increased efficiency in toxicity testing and decreased animal usage by transitioning from current expensive and lengthy in vivo testing with qualitative endpoints to in vitro toxicity pathway assays on human cells or cell lines, using robotic high-throughput screening with mechanistic quantitative parameters. Risk assessment in the exposed human population would focus on avoiding significant perturbations in these toxicity pathways. Computational systems biology models would be implemented to determine the dose-response models of perturbations of pathway function. Extrapolation of in vitro results to in vivo human blood and tissue concentrations would be based on pharmacokinetic models for the given exposure condition. This practice would enhance the human relevance of test results and would cover a much broader range of test agents than traditional toxicological testing strategies. As all the tools necessary to implement the vision are currently available or in an advanced stage of development, the key prerequisites to achieving this paradigm shift are a commitment to change in the scientific community, which could be facilitated by a broad discussion of the vision, and obtaining the resources needed to enhance current knowledge of pathway perturbations and pathway assays in humans and to implement computational systems biology models. Implementation of these strategies would result in a new toxicity testing paradigm firmly based on human biology. PMID:20574894

  4. Concept of Operations for Commercial and Business Aircraft Synthetic Vision Systems. 1.0

    NASA Technical Reports Server (NTRS)

    Williams Daniel M.; Waller, Marvin C.; Koelling, John H.; Burdette, Daniel W.; Capron, William R.; Barry, John S.; Gifford, Richard B.; Doyle, Thomas M.

    2001-01-01

    A concept of operations (CONOPS) for Commercial and Business (CaB) aircraft synthetic vision systems (SVS) is described. The CaB SVS is expected to provide increased safety and operational benefits in normal and low visibility conditions. Providing operational benefits will promote SVS implementation in the fleet, improve aviation safety, and assist in meeting the national aviation safety goal. SVS will enhance safety and enable consistent gate-to-gate aircraft operations in normal and low visibility conditions. The goal for developing SVS is to support operational minima as low as Category 3b in a variety of environments. For departure and ground operations, the SVS goal is to enable operations with a runway visual range of 300 feet. The system is an integrated display concept that provides a virtual visual environment composed of three components: an enhanced intuitive view of the flight environment, hazard and obstacle detection and display, and precision navigation guidance. The virtual visual environment will support enhanced operations procedures during all phases of flight - ground operations, departure, en route, and arrival. The applications selected for emphasis in this document include low visibility departures and arrivals, including parallel runway operations, and low visibility airport surface operations. These particular applications were selected because of the significant potential benefits afforded by SVS.

  5. Vision-Based People Detection System for Heavy Machine Applications

    PubMed Central

    Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick

    2016-01-01

    This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance. PMID:26805838

  6. Vision-Based People Detection System for Heavy Machine Applications.

    PubMed

    Fremont, Vincent; Bui, Manh Tuan; Boukerroui, Djamal; Letort, Pierrick

    2016-01-20

    This paper presents a vision-based people detection system for improving safety in heavy machines. We propose a perception system composed of a monocular fisheye camera and a LiDAR. Fisheye cameras have the advantage of a wide field-of-view, but the strong distortions that they create must be handled at the detection stage. Since people detection in fisheye images has not been well studied, we focus on investigating and quantifying the impact that strong radial distortions have on the appearance of people, and we propose approaches for handling this specificity, adapted from state-of-the-art people detection approaches. These adaptive approaches nevertheless have the drawback of high computational cost and complexity. Consequently, we also present a framework for harnessing the LiDAR modality in order to enhance the detection algorithm for different camera positions. A sequential LiDAR-based fusion architecture is used, which addresses directly the problem of reducing false detections and computational cost in an exclusively vision-based system. A heavy machine dataset was built, and different experiments were carried out to evaluate the performance of the system. The results are promising, in terms of both processing speed and performance.
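
    The sequential fusion architecture described, in which LiDAR information prunes the output of the vision detector, can be illustrated with a simple gating stage: a vision detection survives only if enough projected LiDAR returns land inside its bounding box. The interfaces and the five-point threshold below are hypothetical; the paper's pipeline is considerably more elaborate.

        import numpy as np

        def gate_detections(boxes, scores, lidar_px, min_points=5):
            """Keep a vision detection only if at least min_points LiDAR
            returns (already projected into image coordinates) fall inside
            its bounding box. boxes: (N, 4) [x1, y1, x2, y2]; lidar_px:
            (M, 2) [x, y]. A sketch of one sequential fusion stage.
            """
            keep = []
            for i, (x1, y1, x2, y2) in enumerate(boxes):
                inside = ((lidar_px[:, 0] >= x1) & (lidar_px[:, 0] <= x2) &
                          (lidar_px[:, 1] >= y1) & (lidar_px[:, 1] <= y2))
                if inside.sum() >= min_points:
                    keep.append(i)
            return boxes[keep], scores[keep]

        boxes = np.array([[10, 10, 50, 120], [300, 40, 340, 150]], float)
        scores = np.array([0.9, 0.8])
        lidar_px = np.column_stack([np.full(8, 30.0), np.linspace(20, 110, 8)])
        print(gate_detections(boxes, scores, lidar_px))  # second box rejected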

  7. Overview of sports vision

    NASA Astrophysics Data System (ADS)

    Moore, Linda A.; Ferreira, Jannie T.

    2003-03-01

    Sports vision encompasses the visual assessment and provision of sports-specific visual performance enhancement and ocular protection for athletes of all ages, genders and levels of participation. In recent years, sports vision has been identified as one of the key performance indicators in sport. It is built on four main cornerstones: corrective eyewear, protective eyewear, visual skills enhancement and performance enhancement. Although clinically well established in the US, it is still a relatively new area of optometric specialisation elsewhere in the world and is gaining increasing popularity with eyecare practitioners and researchers. This research is often multi-disciplinary and involves input from a variety of subject disciplines, mainly those of optometry, medicine, physiology, psychology, physics, chemistry, computer science and engineering. Collaborative research projects are currently underway between staff of the Schools of Physics and Computing (DIT) and the Academy of Sports Vision (RAU).

  8. Towards a Decision Support System for Space Flight Operations

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Hogle, Charles; Ruszkowski, James

    2013-01-01

    The Mission Operations Directorate (MOD) at the Johnson Space Center (JSC) has put in place a Model Based Systems Engineering (MBSE) technological framework for the development and execution of the Flight Production Process (FPP). This framework has provided much added value and return on investment to date. This paper describes a vision for a model based Decision Support System (DSS) for the development and execution of the FPP, and for its design and development process. The envisioned system extends the existing MBSE methodology and technological framework currently in use. The MBSE technological framework in place enables the systematic collection and integration of data required for building an FPP model for a diverse set of missions. This framework includes the technology, people and processes required for rapid development of architectural artifacts. It is used to build a feasible FPP model for the first flight of a spacecraft and for recurrent flights throughout the life of the program. This model greatly enhances our ability to effectively engage with a new customer. It provides a preliminary work breakdown structure, data flow information and a master schedule based on its existing knowledge base. These artifacts are then refined and iterated upon with the customer to develop a robust, end-to-end, high-level integrated master schedule and its associated dependencies. The vision is to enhance this framework to enable its application for uncertainty management, decision support and optimization of the design and execution of the FPP by the program. Furthermore, the enhanced framework will enable the agile response and redesign of the FPP based on observed system behavior. A discrepancy between the anticipated and the observed system behavior may be due to the processing of tasks internally, or to external factors such as changes in program requirements or conditions associated with other organizations outside of MOD. The paper provides a roadmap for the three increments of this vision: (1) hardware and software system components and interfaces with the NASA ground system, (2) uncertainty management, and (3) re-planning and automated execution. Each of these increments provides value independently, but some also enable the building of a subsequent increment.

  9. CFIT Prevention Using Synthetic Vision

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.; Parrish, Russell V.

    2003-01-01

    In commercial aviation, over 30 percent of all fatal accidents worldwide are categorized as Controlled Flight Into Terrain (CFIT) accidents, where a fully functioning airplane is inadvertently flown into the ground, water, or an obstacle. An experiment was conducted at NASA Langley Research Center investigating the presentation of a synthetic terrain database scene to the pilot on a Primary Flight Display (PFD). The major hypothesis for the experiment was that a synthetic vision system (SVS) would improve the pilot's ability to detect and avoid a potential CFIT compared to conventional flight instrumentation. All display conditions, including the baseline, contained a Terrain Awareness and Warning System (TAWS) and Vertical Situation Display (VSD) enhanced Navigation Display (ND). Sixteen pilots each flew 22 approach-departure maneuvers in Instrument Meteorological Conditions (IMC) to the terrain-challenged Eagle County Regional Airport (EGE) in Colorado. For the final run, the flight guidance cues were altered such that the departure path went into the terrain. All pilots with an SVS-enhanced PFD (12 of 16 pilots) noticed and avoided the potential CFIT situation. All of the pilots who flew the anomaly with the baseline display configuration (which included a TAWS and VSD enhanced ND) had a CFIT event.

  10. Pavement Distress Evaluation Using 3D Depth Information from Stereo Vision

    DOT National Transportation Integrated Search

    2012-07-01

    The focus of the current project funded by MIOH-UTC for the period 9/1/2010-8/31/2011 is to : enhance our earlier effort in providing a more robust image processing based pavement distress : detection and classification system. During the last few de...

  11. Making Distance Education Borderless.

    ERIC Educational Resources Information Center

    Srisa-An, Wichit

    1997-01-01

    Begins with a tribute to Professor G. Ram Reddy (founder of Indira Gandhi National Open University), then focuses on enhancing the role of open universities in providing borderless distance education. Highlights include the need for open distance-education; philosophy and vision; the distance teaching system; the role of information technology;…

  12. Laser electro-optic system for rapid three-dimensional (3-D) topographic mapping of surfaces

    NASA Technical Reports Server (NTRS)

    Altschuler, M. D.; Altschuler, B. R.; Taboada, J.

    1981-01-01

    It is pointed out that the generic utility of a robot in a factory/assembly environment could be substantially enhanced by providing a vision capability to the robot. A standard videocamera for robot vision provides a two-dimensional image which contains insufficient information for a detailed three-dimensional reconstruction of an object. Approaches which supply the additional information needed for the three-dimensional mapping of objects with complex surface shapes are briefly considered and a description is presented of a laser-based system which can provide three-dimensional vision to a robot. The system consists of a laser beam array generator, an optical image recorder, and software for controlling the required operations. The projection of a laser beam array onto a surface produces a dot pattern image which is viewed from one or more suitable perspectives. Attention is given to the mathematical method employed, the space coding technique, the approaches used for obtaining the transformation parameters, the optics for laser beam array generation, the hardware for beam array coding, and aspects of image acquisition.
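
    The geometry behind such a projected-dot system reduces, per dot, to triangulation: intersect the known laser-beam ray with the camera ray through the observed image point. The sketch below computes the midpoint of the shortest segment between the two rays, a standard closest-point construction; the ray origins and directions in the toy check are illustrative.

        import numpy as np

        def triangulate(cam_origin, cam_dir, laser_origin, laser_dir):
            """Closest point between the camera ray through an observed dot
            and the known laser beam: the midpoint of the shortest segment
            between the two rays. Directions need not be normalized.
            """
            p, u = np.asarray(cam_origin, float), np.asarray(cam_dir, float)
            q, v = np.asarray(laser_origin, float), np.asarray(laser_dir, float)
            w = p - q
            a, b, c = u @ u, u @ v, v @ v
            d, e = u @ w, v @ w
            denom = a * c - b * b          # zero only for parallel rays
            s = (b * e - c * d) / denom    # parameter along camera ray
            t = (a * e - b * d) / denom    # parameter along laser ray
            return 0.5 * ((p + s * u) + (q + t * v))

        # Toy check: camera at origin, laser offset 0.5 m along x; the two
        # rays meet at (0.25, 0, 2.5).
        print(triangulate([0, 0, 0], [0.1, 0.0, 1.0],
                          [0.5, 0, 0], [-0.1, 0.0, 1.0]))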

  13. 78 FR 32078 - Special Conditions: Gulfstream Model G280 Airplane, Enhanced Flight Vision System (EFVS) With...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-29

    ... document refers to a system comprised of a head-up display, imaging sensor(s), and avionics interfaces that display the sensor imagery on the HUD, and which overlay that imagery with alpha-numeric and symbolic... the sensor imagery, with or without other flight information, on a head-down display. For clarity, the...

  14. Setting the vision: applied patient-reported outcomes and smart, connected digital healthcare systems to improve patient-centered outcomes prediction in critical illness.

    PubMed

    Wysham, Nicholas G; Abernethy, Amy P; Cox, Christopher E

    2014-10-01

    Prediction models in critical illness are generally limited to short-term mortality and uncommonly include patient-centered outcomes. Current outcome prediction tools are also insensitive to individual context or evolution in healthcare practice, potentially limiting their value over time. Improved prognostication of patient-centered outcomes in critical illness could enhance decision-making quality in the ICU. Patient-reported outcomes have emerged as precise methodological measures of patient-centered variables and have been successfully employed using diverse platforms and technologies, enhancing the value of research in critical illness survivorship and in direct patient care. The learning health system is an emerging ideal characterized by integration of multiple data sources into a smart and interconnected health information technology infrastructure with the goal of rapidly optimizing patient care. We propose a vision of a smart, interconnected learning health system with integrated electronic patient-reported outcomes to optimize patient-centered care, including critical care outcome prediction. A learning health system infrastructure integrating electronic patient-reported outcomes may aid in the management of critical illness-associated conditions and yield tools to improve prognostication of patient-centered outcomes in critical illness.

  15. 2D/3D Synthetic Vision Navigation Display

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, jason L.

    2008-01-01

    Flight-deck display software was designed and developed at NASA Langley Research Center to provide two-dimensional (2D) and three-dimensional (3D) terrain, obstacle, and flight-path perspectives on a single navigation display. The objective was to optimize the presentation of synthetic vision (SV) system technology that permits pilots to view multiple perspectives of flight-deck display symbology and 3D terrain information. Research was conducted to evaluate the efficacy of the concept. The concept has numerous unique implementation features that would permit enhanced operational concepts and efficiencies in both current and future aircraft.

  16. HALO: a reconfigurable image enhancement and multisensor fusion system

    NASA Astrophysics Data System (ADS)

    Wu, F.; Hickman, D. L.; Parker, Steve J.

    2014-06-01

    Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or by poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA-based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.
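
    Of the functions listed, image contrast enhancement (ICE) is the easiest to sketch. The snippet below implements plain global histogram equalization as a stand-in; the actual HALO™ ICE stage is FPGA-accelerated and presumably adaptive, so treat this only as an illustration of the operation's intent.

        import numpy as np

        def equalize(img_u8):
            """Global histogram equalization: a minimal stand-in for an
            ICE stage. Real systems typically use local/adaptive variants
            tuned for low-light imagery.
            """
            hist = np.bincount(img_u8.ravel(), minlength=256)
            cdf = np.cumsum(hist).astype(float)
            cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # map to [0, 1]
            return (cdf[img_u8] * 255.0).astype(np.uint8)

        # Toy usage: a low-contrast 8-bit frame occupying only levels 100-120
        # gets stretched across the full 0-255 range.
        frame = np.random.randint(100, 121, size=(120, 160)).astype(np.uint8)
        out = equalize(frame)
        print(frame.min(), frame.max(), "->", out.min(), out.max())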

  17. Compact, self-contained enhanced-vision system (EVS) sensor simulator

    NASA Astrophysics Data System (ADS)

    Tiana, Carlo

    2007-04-01

    We describe the SIM-100, a PC-based simulator for imaging sensors used, or planned for use, in Enhanced Vision System (EVS) applications. Typically housed in a small-form-factor PC, it can be easily integrated into existing out-the-window visual simulators for fixed-wing aircraft or rotorcraft, to add realistic sensor imagery to the simulator cockpit. Multiple bands of infrared (short-wave, midwave, extended-midwave and longwave) as well as active millimeter-wave RADAR systems can all be simulated in real time. Various aspects of physical and electronic image formation and processing in the sensor are accurately (and optionally) simulated, including sensor random and fixed pattern noise, dead pixels, blooming, and B-C scope transformation (MMWR). The effects of various obscurants (fog, rain, etc.) on the sensor imagery are faithfully represented and can be selected by an operator remotely and in real time. The images generated by the system are ideally suited for many applications, ranging from sensor development engineering tradeoffs (field of view, resolution, etc.) to pilot familiarization, operational training, and certification support. The realistic appearance of the simulated images goes well beyond that of currently deployed systems, and beyond that required by certification authorities; this level of realism will become necessary as operational experience with EVS systems grows.
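
    Several of the listed degradations are cheap to reproduce. The sketch below injects three of them (per-frame random noise, column-correlated fixed-pattern noise, and dead pixels) into a clean float image; the parameter values are illustrative guesses, not the SIM-100's calibration.

        import numpy as np

        def degrade(clean, read_noise=0.02, fpn_sigma=0.01, dead_frac=0.001,
                    rng=None):
            """Apply a few sensor effects an EVS simulator injects. 'clean'
            is a float image in [0, 1]; all parameters are illustrative.
            """
            rng = rng or np.random.default_rng(0)
            h, w = clean.shape
            fpn = rng.normal(0.0, fpn_sigma, size=w)           # per-column
            frame = clean + fpn[None, :]                       # offset, frozen
            frame += rng.normal(0.0, read_noise, size=(h, w))  # temporal noise
            dead = rng.random((h, w)) < dead_frac
            frame[dead] = 0.0                                  # stuck-low pixels
            return np.clip(frame, 0.0, 1.0)

        # Example: degrade a flat mid-gray 240x320 frame.
        noisy = degrade(np.full((240, 320), 0.5))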

  18. Emotion improves and impairs early vision.

    PubMed

    Bocanegra, Bruno R; Zeelenberg, René

    2009-06-01

    Recent studies indicate that emotion enhances early vision, but the generality of this finding remains unknown. Do the benefits of emotion extend to all basic aspects of vision, or are they limited in scope? Our results show that the brief presentation of a fearful face, compared with a neutral face, enhances sensitivity for the orientation of subsequently presented low-spatial-frequency stimuli, but diminishes orientation sensitivity for high-spatial-frequency stimuli. This is the first demonstration that emotion not only improves but also impairs low-level vision. The selective low-spatial-frequency benefits are consistent with the idea that emotion enhances magnocellular processing. Additionally, we suggest that the high-spatial-frequency deficits are due to inhibitory interactions between magnocellular and parvocellular pathways. Our results suggest an emotion-induced trade-off in visual processing, rather than a general improvement. This trade-off may benefit perceptual dimensions that are relevant for survival at the expense of those that are less relevant.

  19. Vision for a Global Registry of Anticipated Public Health Studies

    PubMed Central

    Choi, Bernard C.K.; Frank, John; Mindell, Jennifer S.; Orlova, Anna; Lin, Vivian; Vaillancourt, Alain D.M.G.; Puska, Pekka; Pang, Tikki; Skinner, Harvey A.; Marsh, Marsha; Mokdad, Ali H.; Yu, Shun-Zhang; Lindner, M. Cristina; Sherman, Gregory; Barreto, Sandhi M.; Green, Lawrence W.; Svenson, Lawrence W.; Sainsbury, Peter; Yan, Yongping; Zhang, Zuo-Feng; Zevallos, Juan C.; Ho, Suzanne C.; de Salazar, Ligia M.

    2007-01-01

    In public health, the generation, management, and transfer of knowledge all need major improvement. Problems in generating knowledge include an imbalance in research funding, publication bias, unnecessary studies, adherence to fashion, and undue interest in novel and immediate issues. Impaired generation of knowledge, combined with a dated and inadequate process for managing knowledge and an inefficient system for transferring knowledge, mean a distorted body of evidence available for decisionmaking in public health. This article hopes to stimulate discussion by proposing a Global Registry of Anticipated Public Health Studies. This prospective, comprehensive system for tracking research in public health could help enhance collaboration and improve efficiency. Practical problems must be discussed before such a vision can be further developed. PMID:17413073

  20. Shared control of a medical robot with haptic guidance.

    PubMed

    Xiong, Linfei; Chng, Chin Boon; Chui, Chee Kong; Yu, Peiwu; Li, Yao

    2017-01-01

    Tele-operation of robotic surgery reduces radiation exposure during interventional radiological operations. However, endoscope vision without force feedback on the surgical tool increases the difficulty of precise manipulation and the risk of tissue damage. The shared control of vision and force provides a novel approach to enhanced control with haptic guidance, which could enable greater dexterity and better maneuverability during minimally invasive surgery (MIS). The paper presents an innovative shared control method for a robotic minimally invasive surgery system, in which vision and haptic feedback are incorporated to provide guidance cues to the clinician during surgery. The incremental potential field (IPF) method is utilized to generate a guidance path based on the anatomy of tissue and surgical tool interaction. Haptic guidance is provided at the master end to assist the clinician during tele-operative surgical robotic tasks. The approach has been validated with path-following and virtual tumor-targeting experiments. The experimental results demonstrate that, compared with vision-only guidance, shared control with vision and haptics improved the accuracy and efficiency of surgical robotic manipulation, reducing the tool-position error distance and execution time. The validation experiments demonstrate that the shared control approach could help the surgical robot system provide stable assistance and precise performance in executing the designated surgical task. The methodology could also be implemented on other surgical robots with different surgical tools and applications.
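
    The abstract names the incremental potential field (IPF) method but gives no formulation. As a stand-in, the sketch below renders the classic attractive/repulsive potential-field force on which such schemes build: attraction toward the planned target, repulsion inside an influence radius around tissue to avoid. The gains, radius, and static field are hypothetical; the paper's IPF is built incrementally from tool-tissue interaction.

        import numpy as np

        def guidance_force(tool, goal, obstacles, k_att=1.0, k_rep=0.05,
                           rho0=0.02):
            """Attractive/repulsive potential-field force (a simplified
            stand-in for IPF). Positions are 3-vectors in meters; the
            returned force would be rendered on the haptic master.
            """
            tool, goal = np.asarray(tool, float), np.asarray(goal, float)
            f = -k_att * (tool - goal)                 # pull toward the goal
            for obs in obstacles:
                d = tool - np.asarray(obs, float)
                rho = np.linalg.norm(d)
                if 1e-9 < rho < rho0:                  # inside influence radius
                    f += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (d / rho)
            return f

        print(guidance_force([0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                             obstacles=[[0.01, 0.005, 0.0]]))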

  1. Ultraviolet sensor as integrity monitor for enhanced flight vision system (EFVS) approaches to Cat II RVR conditions

    NASA Astrophysics Data System (ADS)

    McKinley, John B.; Pierson, Roger; Ertem, M. C.; Krone, Norris J., Jr.; Cramer, James A.

    2008-04-01

    Flight tests were conducted at Greenbrier Valley Airport (KLWB) and Easton Municipal Airport / Newnam Field (KESN) in a Cessna 402B aircraft using a head-up display (HUD) and a Norris Electro Optical Systems Corporation (NEOC) developmental ultraviolet (UV) sensor. These flights were sponsored by NEOC under a Federal Aviation Administration program, and the ultraviolet concepts, technology, system mechanization, and hardware for landing during low visibility conditions have been patented by NEOC. Imagery from the UV sensor, HUD guidance cues, and out-the-window videos were separately recorded at the engineering workstation for each approach. Inertial flight path data were also recorded. Various configurations of portable UV emitters were positioned along the runway edge and threshold. The UV imagery of the runway outline was displayed on the HUD along with guidance generated from the mission computer. Enhanced Flight Vision System (EFVS) approaches with the UV sensor were conducted from the initial approach fix to the ILS decision height in both VMC and IMC. Although the availability of low visibility conditions during the flight test period was limited, results from previous fog range testing indicated that UV EFVS has the performance capability to penetrate CAT II runway visual range obscuration. Furthermore, independent analysis has shown that existing runway lights emit sufficient UV radiation without the need for augmentation other than replacement of their lenses with UV-transmissive quartz lenses. Consequently, UV sensors should qualify as conforming to FAA requirements for EFVS approaches. Combined with a Synthetic Vision System (SVS), UV EFVS would function both as a precision landing aid and as an integrity monitor for the GPS and SVS database.

  2. The use of contact lens telescopic systems in low vision rehabilitation.

    PubMed

    Vincent, Stephen J

    2017-06-01

    Refracting telescopes are afocal compound optical systems consisting of two lenses that produce an apparent magnification of the retinal image. They are routinely used in visual rehabilitation in the form of monocular or binocular hand-held low vision aids, and head- or spectacle-mounted devices to improve distance visual acuity and, with slight modifications, to enhance acuity for near and intermediate tasks. Since the advent of ground glass haptic lenses in the 1930s, contact lenses have been employed as a useful refracting element of telescopic systems; primarily as a mobile ocular lens (the eyepiece) that moves with the eye. Telescopes which incorporate a contact lens eyepiece significantly improve the weight, cosmesis, and field of view compared with traditional spectacle-mounted telescopes, in addition to potential related psycho-social benefits. This review summarises the underlying optics and use of contact lenses to provide telescopic magnification from the era of Descartes, to Dallos, and the present day. The limitations and clinical challenges associated with such devices are discussed, along with the potential future use of reflecting telescopes incorporated within scleral lenses and tactile contact lens systems in low vision rehabilitation. Copyright © 2017 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
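
    The underlying optics are the afocal Galilean relations: angular magnification M = -f_obj / f_eye, with lens separation t = f_obj + f_eye set by the vertex distance between the spectacle objective and the contact-lens eyepiece. The sketch below solves these for the powers a desired magnification implies; the 1.5x target and 15 mm vertex distance are illustrative values, not a clinical prescription.

        # Afocal Galilean telescope with a minus contact-lens eyepiece and
        # a plus spectacle objective, from M = -f_obj / f_eye and
        # t = f_obj + f_eye (thin-lens relations; illustrative only).
        def contact_lens_telescope(mag, t_m=0.015):
            f_eye = -t_m / (mag - 1.0)       # contact-lens eyepiece (m, minus)
            f_obj = -mag * f_eye             # spectacle objective (m, plus)
            return 1.0 / f_obj, 1.0 / f_eye  # powers in diopters

        F_obj, F_eye = contact_lens_telescope(1.5)
        print("objective %+.1f D, eyepiece (contact lens) %+.1f D"
              % (F_obj, F_eye))
        # -> objective +22.2 D, eyepiece (contact lens) -33.3 D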

  3. Transformative Professional Learning: A System to Enhance Teacher and Student Motivation

    ERIC Educational Resources Information Center

    Ginsberg, Margery B.

    2011-01-01

    This book's bold new vision for professional learning emphasizes the importance of intrinsic motivation and respect for cultural diversity to create a transformative approach to school improvement. Knowing that culture and motivation are inseparable influences on learning, Margery B. Ginsberg provides practical steps that focus on teaching and…

  4. Head-Mounted Display Technology for Low Vision Rehabilitation and Vision Enhancement

    PubMed Central

    Ehrlich, Joshua R.; Ojeda, Lauro V.; Wicker, Donna; Day, Sherry; Howson, Ashley; Lakshminarayanan, Vasudevan; Moroi, Sayoko E.

    2017-01-01

    Purpose To describe the various types of head-mounted display technology, their optical and human factors considerations, and their potential for use in low vision rehabilitation and vision enhancement. Design Expert perspective. Methods An overview of head-mounted display technology by an interdisciplinary team of experts drawing on key literature in the field. Results Head-mounted display technologies can be classified based on their display type and optical design. See-through displays such as retinal projection devices have the greatest potential for use as low vision aids. Devices vary by their relationship to the user’s eyes, field of view, illumination, resolution, color, stereopsis, effect on head motion and user interface. These optical and human factors considerations are important when selecting head-mounted displays for specific applications and patient groups. Conclusions Head-mounted display technologies may offer advantages over conventional low vision aids. Future research should compare head-mounted displays to commonly prescribed low vision aids in order to compare their effectiveness in addressing the impairments and rehabilitation goals of diverse patient populations. PMID:28048975

  5. Spaceborne GPS Current Status and Future Visions

    NASA Technical Reports Server (NTRS)

    Bauer, Frank H.; Hartman, Kate; Lightsey, E. Glenn

    1998-01-01

    The Global Positioning System (GPS), developed by the Department of Defense, is quickly revolutionizing the architecture of future spacecraft and spacecraft systems. Significant savings in spacecraft life cycle cost, in power, and in mass can be realized by exploiting GPS technology in spaceborne vehicles. These savings are realized because GPS is a systems sensor--it combines the ability to sense space vehicle trajectory, attitude, time, and relative ranging between vehicles into one package. As a result, a reduced spacecraft sensor complement can be employed on spacecraft and significant reductions in space vehicle operations cost can be realized through enhanced on-board autonomy. This paper provides an overview of the current status of spaceborne GPS, a description of spaceborne GPS receivers available now and in the near future, a description of the 1997-1999 GPS flight experiments, and the spaceborne GPS team's vision for the future.

  6. Spaceborne GPS: Current Status and Future Visions

    NASA Technical Reports Server (NTRS)

    Bauer, Frank H.; Hartman, Kate; Lightsey, E. Glenn

    1998-01-01

    The Global Positioning System (GPS), developed by the Department of Defense is quickly revolutionizing the architecture of future spacecraft and spacecraft systems. Significant savings in spacecraft life cycle cost, in power, and in mass can be realized by exploiting GPS technology in spaceborne vehicles. These savings are realized because GPS is a systems sensor--it combines the ability to sense space vehicle trajectory, attitude, time, and relative ranging between vehicles into one package. As a result, a reduced spacecraft sensor complement can be employed and significant reductions in space vehicle operations cost can be realized through enhanced on-board autonomy. This paper provides an overview of the current status of spaceborne GPS, a description of spaceborne GPS receivers available now and in the near future, a description of the 1997-2000 GPS flight experiments, and the spaceborne GPS team's vision for the future.

  7. Neural Summation in the Hawkmoth Visual System Extends the Limits of Vision in Dim Light.

    PubMed

    Stöckl, Anna Lisa; O'Carroll, David Charles; Warrant, Eric James

    2016-03-21

    Most of the world's animals are active in dim light and depend on good vision for the tasks of daily life. Many have evolved visual adaptations that permit a performance superior to that of manmade imaging devices [1]. In insects, a major model visual system, nocturnal species show impressive visual abilities ranging from flight control [2, 3], to color discrimination [4, 5], to navigation using visual landmarks [6-8] or dim celestial compass cues [9, 10]. In addition to optical adaptations that improve their sensitivity in dim light [11], neural summation of light in space and time - which enhances the coarser and slower features of the scene at the expense of noisier finer and faster features - has been suggested to improve sensitivity in theoretical [12-14], anatomical [15-17], and behavioral [18-20] studies. How these summation strategies function neurally is, however, presently unknown. Here, we quantified spatial and temporal summation in the motion vision pathway of a nocturnal hawkmoth. We show that spatial and temporal summation combine supralinearly to substantially increase contrast sensitivity and visual information rate over four decades of light intensity, enabling hawkmoths to see at light levels 100 times dimmer than without summation. Our results reveal how visual motion is calculated neurally in dim light and how spatial and temporal summation improve sensitivity while simultaneously maximizing spatial and temporal resolution, thus extending models of insect motion vision derived predominantly from diurnal flies. Moreover, the summation strategies we have revealed may benefit manmade vision systems optimized for variable light levels [21]. Copyright © 2016 Elsevier Ltd. All rights reserved.
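
    The quantitative intuition behind summation is that pooling n independent photon-noise samples improves signal-to-noise by roughly sqrt(n), at the price of spatial and temporal resolution. The toy below demonstrates that baseline on a Poisson-limited image stream (2x2 spatial by 4-frame temporal pooling, so a 4x SNR gain); the supralinear combination the authors measured in the hawkmoth goes beyond this simple picture.

        import numpy as np

        rng = np.random.default_rng(1)
        mean_photons = 2.0                              # very dim, flat scene
        frames = rng.poisson(mean_photons, size=(8, 64, 64)).astype(float)

        def snr(x):
            return x.mean() / x.std()

        # Pool 4 frames temporally and 2x2 pixels spatially: n = 16 samples.
        pooled = frames.reshape(2, 4, 32, 2, 32, 2).mean(axis=(1, 3, 5))
        print("raw SNR %.2f -> pooled SNR %.2f (sqrt(16) = 4x)"
              % (snr(frames), snr(pooled)))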

  8. A portable W-band radar system for enhancement of infrared vision in fire fighting operations

    NASA Astrophysics Data System (ADS)

    Klenner, Mathias; Zech, Christian; Hülsmann, Axel; Kühn, Jutta; Schlechtweg, Michael; Hahmann, Konstantin; Kleiner, Bernhard; Ulrich, Michael; Ambacher, Oliver

    2016-10-01

    In this paper, we present a millimeter wave radar system which will enhance the performance of infrared cameras used for fire-fighting applications. The radar module is compact and lightweight such that the system can be combined with inertial sensors and integrated in a hand-held infrared camera. This allows for precise distance measurements in harsh environmental conditions, such as tunnel or industrial fires, where optical sensors are unreliable or fail. We discuss the design of the RF front-end, the antenna and a quasi-optical lens for beam shaping as well as signal processing and demonstrate the performance of the system by in situ measurements in a smoke filled environment.
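
    The abstract does not state the modulation scheme; assuming an FMCW radar, the usual choice for compact W-band rangefinders, the distance measurement follows from the beat frequency of a sawtooth sweep. The bandwidth and sweep duration below are illustrative, not the module's specifications.

        # FMCW range from beat frequency: for a sawtooth sweep of bandwidth
        # B over duration T, a target at range R produces a beat frequency
        # f_b = 2 * R * B / (c * T), so R = f_b * c * T / (2 * B).
        C = 299_792_458.0          # speed of light, m/s

        def range_from_beat(f_beat_hz, bandwidth_hz=2e9, sweep_s=1e-3):
            return f_beat_hz * C * sweep_s / (2.0 * bandwidth_hz)

        print("%.2f m" % range_from_beat(200e3))   # 200 kHz beat -> ~15 m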

  9. Federal employees dental and vision insurance program. Final rule.

    PubMed

    2008-08-26

    The Office of Personnel Management (OPM) is issuing final regulations to administer the Federal Employee Dental and Vision Benefits Enhancement Act of 2004, signed into law December 23, 2004. This law establishes dental and vision benefits programs for Federal employees, annuitants, and their eligible family members.

  10. Advanced electro-mechanical micro-shutters for thermal infrared night vision imaging and targeting systems

    NASA Astrophysics Data System (ADS)

    Durfee, David; Johnson, Walter; McLeod, Scott

    2007-04-01

    Un-cooled microbolometer sensors used in modern infrared night vision systems such as the driver's vision enhancer (DVE) or thermal weapon sights (TWS) require a mechanical shutter. Although much consideration is given to the performance requirements of the sensor, supporting electronic components and imaging optics, the shutter technology required to survive in combat is typically the last consideration in the system design. Electro-mechanical shutters used in military IR applications must be reliable at temperature extremes from a low of -40°C to a high of +70°C. They must be extremely lightweight while withstanding the high vibration and shock forces associated with systems mounted in military combat vehicles, weapon telescopic sights, or downed unmanned aerial vehicles (UAVs). They must have minimal power consumption and contain circuitry integrated into the shutter to manage battery power while simultaneously adapting to changes in electrical component operating parameters caused by extreme temperature variations. Producing a miniature electro-mechanical shutter with these capabilities, small enough to fit into a rifle scope, requires innovations in mechanical design, materials science, and electronics. This paper describes a new, miniature electro-mechanical shutter technology with integrated power management electronics designed for extreme-service infrared night vision systems.

  11. Is More Better? - Night Vision Enhancement System's Pedestrian Warning Modes and Older Drivers.

    PubMed

    Brown, Timothy; He, Yefei; Roe, Cheryl; Schnell, Thomas

    2010-01-01

    Pedestrian fatalities as a result of vehicle collisions are much more likely to happen at night than during the daytime. Poor visibility due to darkness is believed to be one of the causes of the higher vehicle collision rate at night. Existing studies have shown that night vision enhancement systems (NVES) may improve recognition distance but may increase drivers' workload. The use of automatic warnings (AW) may help minimize workload, improve performance, and increase safety. In this study, we used a driving simulator to examine performance differences of a NVES with six different configurations of warning cues: visual, auditory, tactile, auditory and visual, tactile and visual, and no warning. Older drivers between the ages of 65 and 74 participated in the study. An analysis based on the distance to the pedestrian threat at the onset of the braking response revealed that tactile and auditory warnings performed the best, while visual warnings performed the worst. When tactile or auditory warnings were presented in combination with a visual warning, their effectiveness decreased. This result demonstrates that, contrary to common intuition about warning systems, multi-modal warnings involving visual cues degraded the effectiveness of NVES for older drivers.

  12. A Discussion of Commercially Available Intra-ocular Telescopic Implants for Patients with Age-Related Macular Degeneration.

    PubMed

    Dunbar, Hannah M P; Dhawahir-Scala, Felipe E

    2018-06-01

    Age-related macular degeneration (AMD) is the leading cause of visual impairment in the western world, causing significant reduction in quality of life. Despite treatment advances, the burden of visual impairment caused by AMD continues to rise. In addition to traditional low vision rehabilitation and support, optical and electronic aids, and strategies to enhance the use of peripheral vision, implantable telescopic devices have been indicated as a surgical means of enhancing vision. Here we examine the literature on commercially available telescopic devices, discussing their design, mode of action, surgical procedure, and published outcomes on visual acuity, quality of life, surgical complication rates, and cost-effectiveness where available. Funding: Article processing charges were funded by VisionCare Inc.

  13. Using an Augmented Reality Device as a Distance-based Vision Aid-Promise and Limitations.

    PubMed

    Kinateder, Max; Gualtieri, Justin; Dunn, Matt J; Jarosz, Wojciech; Yang, Xing-Dong; Cooper, Emily A

    2018-06-06

    For people with limited vision, wearable displays hold the potential to digitally enhance visual function. As these display technologies advance, it is important to understand their promise and limitations as vision aids. The aim of this study was to test the potential of a consumer augmented reality (AR) device for improving the functional vision of people with near-complete vision loss. An AR application that translates spatial information into high-contrast visual patterns was developed. Two experiments assessed the efficacy of the application to improve vision: an exploratory study with four visually impaired participants and a main controlled study with participants with simulated vision loss (n = 48). In both studies, performance was tested on a range of visual tasks (identifying the location, pose and gesture of a person, identifying objects, and moving around in an unfamiliar space). Participants' accuracy and confidence were compared on these tasks with and without augmented vision, as well as their subjective responses about ease of mobility. In the main study, the AR application was associated with substantially improved accuracy and confidence in object recognition (all P < .001) and, to a lesser degree, in gesture recognition (P < .05). There was no significant change in performance on identifying body poses or in subjective assessments of mobility, as compared with a control group. Consumer AR devices may soon be able to support applications that improve the functional vision of users for some tasks. In our study, both artificially impaired participants and participants with near-complete vision loss performed tasks that they could not do without the AR system. Current limitations in system performance and form factor, as well as the risk of overconfidence, will need to be overcome.
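
    The abstract does not specify how spatial information is translated into high-contrast patterns; one plausible reading, sketched below, is to quantize a depth image from the headset into a few bright-to-dark bands so that nearer surfaces render brighter. The mapping, band count, and depth limits are hypothetical illustrations only.

        import numpy as np

        def depth_to_bands(depth_m, near=0.5, far=4.0, levels=4):
            """Map a depth image (meters) to a few high-contrast gray bands,
            nearer surfaces rendered brighter. A hypothetical illustration
            of 'spatial information -> high-contrast patterns'.
            """
            d = np.clip(depth_m, near, far)
            t = (far - d) / (far - near)               # 1.0 near ... 0.0 far
            band = np.floor(t * levels).clip(0, levels - 1)
            return (band / (levels - 1) * 255).astype(np.uint8)

        toy_depth = np.array([[0.6, 1.2], [2.5, 5.0]])   # meters
        print(depth_to_bands(toy_depth))                 # bright near, dark far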

  14. Aeromedical implications of the X-Chrom lens for improving color vision deficiencies.

    DOT National Transportation Integrated Search

    1978-04-01

    The X-Chrom contact lens is a recent device recommended to improve defective color vision. The red lens is usually worn on the nondominant eye and may require extended wearing for optimum color vision enhancement. A battery of tests was given to 24 i...

  15. Chapter 11. Quality evaluation of apple by computer vision

    USDA-ARS?s Scientific Manuscript database

    Apple is one of the most consumed fruits in the world, and there is a critical need for enhanced computer vision technology for quality assessment of apples. This chapter gives a comprehensive review on recent advances in various computer vision techniques for detecting surface and internal defects ...

  16. NASA's Strategic Plan for Education. A Strategy for Change: 1993-1998. First Edition.

    ERIC Educational Resources Information Center

    National Aeronautics and Space Administration, Washington, DC.

    The National Aeronautics and Space Administration's (NASA's) education vision is to promote excellence in America's education system through enhancing and expanding scientific and technological competence. In doing so, NASA strives to be recognized by the education community as the premier mission agency in support of the National Education Goals…

  17. Seeing the Light: A Classroom-Sized Pinhole Camera Demonstration for Teaching Vision

    ERIC Educational Resources Information Center

    Prull, Matthew W.; Banks, William P.

    2005-01-01

    We describe a classroom-sized pinhole camera demonstration (camera obscura) designed to enhance students' learning of the visual system. The demonstration consists of a suspended rear-projection screen onto which the outside environment projects images through a small hole in a classroom window. Students can observe these images in a darkened…

  18. 78 FR 54790 - Revisions to Operational Requirements for the Use of Enhanced Flight Vision Systems (EFVS) and to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-06

    ....gov . SUPPLEMENTARY INFORMATION: See the ``Additional Information'' section for information on how to comment on this proposal and how the FAA will handle comments received. The ``Additional Information..., environmental, energy, or federalism impacts that might result from adopting the proposals in this document. The...

  19. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.

  20. Man-machine interactive imaging and data processing using high-speed digital mass storage

    NASA Technical Reports Server (NTRS)

    Alsberg, H.; Nathan, R.

    1975-01-01

    The role of vision in teleoperation has been recognized as an important element in the man-machine control loop. In most applications of remote manipulation, direct vision cannot be used. To overcome this handicap, the human operator's control capabilities are augmented by a television system. This medium provides a practical and useful link between the workspace and the control station from which the operator performs his tasks. Human performance deteriorates when the images are degraded as a result of instrumental and transmission limitations. Image enhancement is used to bring out selected qualities in a picture to increase the perception of the observer. A general-purpose digital computer with an extensive special-purpose software system is used to perform an almost unlimited repertoire of processing operations.
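
    As a concrete illustration of the kind of enhancement described (a sketch only; the paper's repertoire of operations is far broader), linear contrast stretching expands a degraded image's gray levels to the full display range:

        # Linear contrast stretching: map intensities between two percentiles
        # of the degraded image onto the full [0, 255] display range.
        import numpy as np

        def contrast_stretch(img, low_pct=2, high_pct=98):
            """Stretch intensities between two percentiles to [0, 255]."""
            lo, hi = np.percentile(img, [low_pct, high_pct])
            out = (img.astype(float) - lo) / max(hi - lo, 1e-6)
            return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)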

  1. Flight Simulator Evaluation of Synthetic Vision Display Concepts to Prevent Controlled Flight Into Terrain (CFIT)

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Kramer, Lynda J.; Parrish, Russell V.; Bailey, Randall E.

    2004-01-01

    In commercial aviation, over 30 percent of all fatal accidents worldwide are categorized as Controlled Flight Into Terrain (CFIT) accidents, in which a fully functioning airplane is inadvertently flown into the ground. The major hypothesis for a simulation experiment conducted at NASA Langley Research Center was that a Primary Flight Display (PFD) with synthetic terrain would improve pilots' ability to detect and avoid potential CFITs compared to conventional instrumentation. All display conditions, including the baseline, contained a Terrain Awareness and Warning System (TAWS) and a Vertical Situation Display (VSD) enhanced Navigation Display (ND). Each pilot flew twenty-two approach and departure maneuvers in Instrument Meteorological Conditions (IMC) to the terrain-challenged Eagle County Regional Airport (EGE) in Colorado. For the final run, flight guidance cues were altered such that the departure path went into terrain. All pilots with a synthetic vision system (SVS) PFD (twelve of sixteen pilots) noticed and avoided the potential CFIT situation. The four pilots who flew the anomaly with the conventional baseline PFD configuration (which included a TAWS and VSD enhanced ND) had a CFIT event. Additionally, all the SVS display concepts enhanced the pilots' situational awareness, decreased workload, and reduced flight technical error (FTE) compared to the baseline configuration.

  2. Near real-time, on-the-move software PED using VPEF

    NASA Astrophysics Data System (ADS)

    Green, Kevin; Geyer, Chris; Burnette, Chris; Agarwal, Sanjeev; Swett, Bruce; Phan, Chung; Deterline, Diane

    2015-05-01

    The scope of the Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System (MOVERS) development effort, managed by the Night Vision and Electronic Sensors Directorate (NVESD), is to develop, integrate, and demonstrate new sensor technologies and algorithms that improve improvised explosive device/mine detection through efficient and effective exploitation and fusion of sensor data and target cues from existing and future Route Clearance Package (RCP) sensor systems. Unfortunately, the majority of forward-looking Full Motion Video (FMV) and computer vision processing, exploitation, and dissemination (PED) algorithms are developed using proprietary, incompatible software, which makes the insertion of new algorithms difficult due to the lack of standardized processing chains. To overcome these limitations, EOIR developed the Government off-the-shelf (GOTS) Video Processing and Exploitation Framework (VPEF) to provide standardized interfaces (e.g., input/output video formats, sensor metadata, and detected objects) for exploitation software and to rapidly integrate and test computer vision algorithms. EOIR developed a vehicle-based computing framework within the MOVERS and integrated it with VPEF. VPEF was further enhanced for automated processing, detection, and publishing of detections in near real-time, thus improving the efficiency and effectiveness of RCP sensor systems.
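
    VPEF's actual interfaces are not published with this abstract, so the following is a hypothetical Python sketch of what a standardized detection interface of the kind described could look like; every name in it is invented for illustration and is not the real VPEF API:

        # Hypothetical illustration of a standardized PED plug-in interface of
        # the kind the abstract attributes to VPEF; all names are invented.
        from dataclasses import dataclass
        from typing import Protocol, List

        @dataclass
        class Detection:
            frame_index: int
            x: float          # bounding box, pixels
            y: float
            w: float
            h: float
            label: str        # e.g., "suspected_IED" (placeholder label)
            confidence: float

        class VideoExploitationAlgorithm(Protocol):
            """Any detector exposing this interface can drop into the chain."""
            def process_frame(self, frame, metadata: dict) -> List[Detection]: ...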

  3. Precision of computer-assisted core decompression drilling of the knee.

    PubMed

    Beckmann, J; Goetz, J; Bäthis, H; Kalteis, T; Grifka, J; Perlick, L

    2006-06-01

    Core decompression by exact drilling into the ischemic areas is the treatment of choice in early stages of osteonecrosis of the femoral condyle. Computer-aided surgery might enhance the precision of the drilling and lower the radiation exposure time for both staff and patients. The aim of this study was to evaluate the precision of the fluoroscopically based VectorVision navigation system in an in vitro model. Thirty sawbones were prepared with a defect filled with a radiopaque gypsum sphere mimicking the osteonecrosis. Twenty sawbones were drilled under guidance of the intraoperative navigation system VectorVision (BrainLAB, Munich, Germany); ten sawbones were drilled under fluoroscopic control only. A statistically significant difference was found between the navigated group (mean distance 0.58 mm) and the control group (0.98 mm) with respect to the distance from the desired mid-point of the lesion. Significant differences were also found in the number of drilling corrections and in the radiation time needed. The fluoroscopy-based VectorVision navigation system shows high feasibility and precision of computer-guided drilling with a simultaneous reduction in radiation time, and could therefore be integrated into clinical routine.

  4. A stakeholder visioning exercise to enhance chronic care and the integration of community pharmacy services.

    PubMed

    Franco-Trigo, L; Tudball, J; Fam, D; Benrimoj, S I; Sabater-Hernández, D

    2018-02-21

    Collaboration between relevant stakeholders in health service planning enables service contextualization and facilitates its success and integration into practice. Although community pharmacy services (CPSs) aim to improve patients' health and quality of life, their integration in primary care is far from ideal. Key stakeholders for the development of a CPS intended to prevent cardiovascular disease were identified in a previous stakeholder analysis. Engaging these stakeholders to create a shared vision is the subsequent step to focus planning directions and lay sound foundations for future work. This study aims to develop a stakeholder-shared vision of a cardiovascular care model which integrates community pharmacists and to identify initiatives to achieve this vision. A participatory visioning exercise involving 13 stakeholders across the healthcare system was performed. A facilitated workshop, structured in three parts (i.e., introduction; developing the vision; defining the initiatives towards the vision), was designed. The Chronic Care Model inspired the questions that guided the development of the vision. Workshop transcripts, researchers' notes and materials produced by participants were analyzed using qualitative content analysis. Stakeholders broadened the objective of the vision to focus on the management of chronic diseases. Their vision yielded seven principles for advanced chronic care: patient-centered care; multidisciplinary team approach; shared goals; long-term care relationships; evidence-based practice; ease of access to healthcare settings and services by patients; and good communication and coordination. Stakeholders also delineated six environmental factors that can influence their implementation. Twenty-four initiatives to achieve the developed vision were defined. The principles and factors identified as part of the stakeholder-shared vision were combined in a preliminary model for chronic care. This model and initiatives can guide policy makers as well as healthcare planners and researchers to develop and integrate chronic disease services, namely CPSs, in real-world settings. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Biologically based machine vision: signal analysis of monopolar cells in the visual system of Musca domestica.

    PubMed

    Newton, Jenny; Barrett, Steven F; Wilcox, Michael J; Popp, Stephanie

    2002-01-01

    Machine vision for navigational purposes is a rapidly growing field. Many abilities such as object recognition and target tracking rely on vision. Autonomous vehicles must be able to navigate in dynamic environments and simultaneously locate a target position. Traditional machine vision often fails to react in real time because of large computational requirements, whereas the fly achieves complex orientation and navigation with a relatively small and simple brain. Understanding how the fly extracts visual information and how its neurons encode and process information could lead us to a new approach for machine vision applications. Photoreceptors in the Musca domestica eye that share the same spatial information converge into a structure called the cartridge. The cartridge consists of the photoreceptor axon terminals and monopolar cells L1, L2, and L4. It is thought that L1 and L2 cells encode edge-related information relative to a single cartridge. These cells are thought to be equivalent to vertebrate bipolar cells, producing contrast enhancement and reduction of the information sent to L4. Monopolar cell L4 is thought to perform image segmentation on the information input from L1 and L2 and also to enhance edge detection. A mesh of interconnected L4s would correlate the output from the L1 and L2 cells of adjacent cartridges and provide a parallel network for segmenting an object's edges. The focus of this research is to excite photoreceptors of the common housefly, Musca domestica, with different visual patterns. The electrical responses of monopolar cells L1, L2, and L4 will be recorded using intracellular recording techniques. Signal analysis will determine the neurocircuitry needed to detect and segment images.
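
    A rough computational analogue of the contrast enhancement attributed to the L1 and L2 cells (an illustration, not the authors' model) is a center-surround, difference-of-Gaussians filter, which responds strongly at edges:

        # Center-surround (difference-of-Gaussians) filtering: a narrow
        # excitatory center minus a wide inhibitory surround, loosely analogous
        # to the contrast enhancement described for L1/L2 cells.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def center_surround(img, sigma_center=1.0, sigma_surround=3.0):
            """Return an edge-enhanced version of a grayscale image."""
            img = img.astype(float)
            return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)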

  6. The Relationship Between Fusion, Suppression, and Diplopia in Normal and Amblyopic Vision.

    PubMed

    Spiegel, Daniel P; Baldwin, Alex S; Hess, Robert F

    2016-10-01

    Single vision occurs through a combination of fusion and suppression. When neither mechanism takes place, we experience diplopia. Under normal viewing conditions, the perceptual state depends on the spatial scale and interocular disparity. The purpose of this study was to examine the three perceptual states in human participants with normal and amblyopic vision. Participants viewed two dichoptically separated horizontal blurred edges with an opposite tilt (2.35°) and indicated their binocular percept: "one flat edge," "one tilted edge," or "two edges." The edges varied with scale (fine 4 min arc and coarse 32 min arc), disparity, and interocular contrast. We investigated how the binocular interactions vary in amblyopic (visual acuity [VA] > 0.2 logMAR, n = 4) and normal vision (VA ≤ 0 logMAR, n = 4) under interocular variations in stimulus contrast and luminance. In amblyopia, despite the established sensory dominance of the fellow eye, fusion prevails at the coarse scale and small disparities (75%). We also show that increasing the relative contrast to the amblyopic eye enhances the probability of fusion at the fine scale (from 18% to 38%), and leads to a reversal of the sensory dominance at coarse scale. In normal vision we found that interocular luminance imbalances disturbed binocular combination only at the fine scale in a way similar to that seen in amblyopia. Our results build upon the growing evidence that the amblyopic visual system is binocular and further show that the suppressive mechanisms rendering the amblyopic system functionally monocular are scale dependent.

  7. Grounding Robot Autonomy in Emotion and Self-awareness

    NASA Astrophysics Data System (ADS)

    Sanz, Ricardo; Hernández, Carlos; Hernando, Adolfo; Gómez, Jaime; Bermejo, Julita

    Much is being done in an attempt to transfer emotional mechanisms from reverse-engineered biology into social robots. There are two basic approaches: the imitative display of emotion (e.g., to make robots appear more human-like) and the provision of architectures with intrinsic emotion (in the hope of enhancing behavioral aspects). This paper focuses on the second approach, describing a core vision regarding the integration of cognitive, emotional, and autonomic aspects in social robot systems. This vision has evolved as a result of efforts to consolidate the models extracted from rat emotion research and to implement them in technical use cases, based on a general systemic analysis in the framework of the ICEA and C3 projects. The generality of the approach is intended to yield universal theories of integrated (autonomic, emotional, cognitive) behavior. The proposed conceptualizations and architectural principles are then captured in a theoretical framework: ASys, the Autonomous Systems Framework.

  8. Digital enhancement of haematoxylin- and eosin-stained histological images for red-green colour-blind observers.

    PubMed

    Landini, G; Perryer, G

    2009-06-01

    Individuals with red-green colour-blindness (CB) commonly experience great difficulty differentiating between certain histological stain pairs, notably haematoxylin-eosin (H&E). The prevalence of red-green CB is high (6-10% of males), including among medical and laboratory personnel, and raises two major concerns: first, accessibility and equity issues during the education and training of individuals with this disability, and second, the likelihood of errors in critical tasks such as interpreting histological images. Here we show two methods to enhance images of H&E-stained samples so the differently stained tissues can be well discriminated by red-green CBs while remaining usable by people with normal vision. Method 1 involves rotating and stretching the range of H&E hues in the image to span the perceptual range of the CB observers. Method 2 digitally unmixes the original dyes using colour deconvolution into two separate images and repositions the information into hues that are more distinctly perceived. The benefits of these methods were tested in 36 volunteers with normal vision and 11 with red-green CB using a variety of H&E stained tissue sections paired with their enhanced versions. CB subjects reported they could better perceive the different stains using the enhanced images for 85% of preparations (method 1: 90%, method 2: 73%), compared to the H&E-stained original images. Many subjects with normal vision also preferred the enhanced images to the original H&E. The results suggest that these colour manipulations confer considerable advantage for those with red-green colour vision deficiency while not disadvantaging people with normal colour vision.
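
    Method 1 can be sketched numerically as a hue rotation followed by a stretch about a center hue. The rotation, stretch, and center values below are placeholders, not the paper's calibrated parameters:

        # Sketch in the spirit of method 1 (hue rotation and stretching);
        # parameter values are placeholders, not the paper's.
        import numpy as np
        from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

        def rotate_stretch_hues(rgb, rotate=0.15, stretch=1.5, center=0.6):
            """rgb: float array in [0, 1], shape (H, W, 3)."""
            hsv = rgb_to_hsv(rgb)
            h = (hsv[..., 0] + rotate) % 1.0             # rotate all hues
            h = ((h - center) * stretch + center) % 1.0  # stretch about a center hue
            hsv[..., 0] = h
            return hsv_to_rgb(hsv)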

  9. Comparison of occlusion break responses and vacuum rise times of phacoemulsification systems.

    PubMed

    Sharif-Kashani, Pooria; Fanney, Douglas; Injev, Val

    2014-07-30

    Occlusion break surge during phacoemulsification cataract surgery can lead to potential surgical complications. The purpose of this study was to quantify occlusion break surge and vacuum rise time of current phacoemulsification systems used in cataract surgery. Occlusion break surge at vacuum pressures between 200 and 600 mmHg was assessed with the Infiniti® Vision System, the WhiteStar Signature® Phacoemulsification System, and the Centurion® Vision System using gravity-fed fluidics. Centurion Active Fluidics™ were also tested at multiple intraoperative pressure target settings. Vacuum rise time was evaluated for Infiniti, WhiteStar Signature, Centurion, and Stellaris® Vision Enhancement systems. Rise time to vacuum limits of 400 and 600 mmHg was assessed at flow rates of 30 and 60 cc/minute. Occlusion break surge was analyzed by 2-way analysis of variance. The Centurion system exhibited substantially less occlusion break surge than the other systems tested. Surge area with Centurion Active Fluidics was similar to gravity fluidics at an equivalent bottle height. At all Centurion Active Fluidics intraoperative pressure target settings tested, surge was smaller than with Infiniti and WhiteStar Signature. Infiniti had the fastest vacuum rise time and Stellaris had the slowest. No system tested reached the 600-mmHg vacuum limit. In this laboratory study, Centurion had the least occlusion break surge and similar vacuum rise times compared with the other systems tested. Reducing occlusion break surge may increase safety of phacoemulsification cataract surgery.

  10. Comparison of occlusion break responses and vacuum rise times of phacoemulsification systems

    PubMed Central

    2014-01-01

    Background Occlusion break surge during phacoemulsification cataract surgery can lead to potential surgical complications. The purpose of this study was to quantify occlusion break surge and vacuum rise time of current phacoemulsification systems used in cataract surgery. Methods Occlusion break surge at vacuum pressures between 200 and 600 mmHg was assessed with the Infiniti® Vision System, the WhiteStar Signature® Phacoemulsification System, and the Centurion® Vision System using gravity-fed fluidics. Centurion Active Fluidics™ were also tested at multiple intraoperative pressure target settings. Vacuum rise time was evaluated for Infiniti, WhiteStar Signature, Centurion, and Stellaris® Vision Enhancement systems. Rise time to vacuum limits of 400 and 600 mmHg was assessed at flow rates of 30 and 60 cc/minute. Occlusion break surge was analyzed by 2-way analysis of variance. Results The Centurion system exhibited substantially less occlusion break surge than the other systems tested. Surge area with Centurion Active Fluidics was similar to gravity fluidics at an equivalent bottle height. At all Centurion Active Fluidics intraoperative pressure target settings tested, surge was smaller than with Infiniti and WhiteStar Signature. Infiniti had the fastest vacuum rise time and Stellaris had the slowest. No system tested reached the 600-mmHg vacuum limit. Conclusions In this laboratory study, Centurion had the least occlusion break surge and similar vacuum rise times compared with the other systems tested. Reducing occlusion break surge may increase safety of phacoemulsification cataract surgery. PMID:25074069

  11. Knowledge-based vision for space station object motion detection, recognition, and tracking

    NASA Technical Reports Server (NTRS)

    Symosek, P.; Panda, D.; Yalamanchili, S.; Wehner, W., III

    1987-01-01

    Computer vision, especially color image analysis and understanding, has much to offer in the area of the automation of Space Station tasks such as construction, satellite servicing, rendezvous and proximity operations, inspection, experiment monitoring, data management and training. Knowledge-based techniques improve the performance of vision algorithms for unstructured environments because of their ability to deal with imprecise a priori information or inaccurately estimated feature data and still produce useful results. Conventional techniques using statistical and purely model-based approaches lack flexibility in dealing with the variabilities anticipated in the unstructured viewing environment of space. Algorithms developed under NASA sponsorship for Space Station applications to demonstrate the value of a hypothesized architecture for a Video Image Processor (VIP) are presented. Approaches to the enhancement of the performance of these algorithms with knowledge-based techniques and the potential for deployment of highly-parallel multi-processor systems for these algorithms are discussed.

  12. A Structured Light Sensor System for Tree Inventory

    NASA Technical Reports Server (NTRS)

    Chien, Chiun-Hong; Zemek, Michael C.

    2000-01-01

    Tree inventory refers to the measurement and estimation of marketable wood volume in a parcel of land or forest, for purposes such as investment or loan applications. Existing techniques rely on trained surveyors conducting measurements manually using simple optical or mechanical devices, and hence are time-consuming, subjective, and error-prone. The advance of computer vision techniques makes it possible to conduct automatic measurements that are more efficient, objective, and reliable. This paper describes 3D measurement of tree diameters using a uniquely designed ensemble of two line-laser emitters rigidly mounted on a video camera. The proposed laser camera system relies on a fixed distance between two parallel laser planes and the projections of the laser lines to calculate tree diameters. Performance of the laser camera system is further enhanced by fusing information induced from the structured lighting with that contained in the video images. A comparison is made between the laser camera sensor system and a stereo vision system previously developed for measuring tree diameters.
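
    The underlying geometry can be reconstructed if one assumes a circular trunk cross-section (a simplification; not necessarily the authors' exact algorithm): two parallel laser planes a known distance apart each cut a chord across the trunk, and the two chord widths determine the diameter:

        # Circle-from-two-parallel-chords geometry, assuming a circular trunk
        # cross-section. Plane 1 lies at an unknown signed offset x from the
        # circle center, plane 2 at x + d, giving
        #   r^2 = (w1/2)^2 + x^2 = (w2/2)^2 + (x + d)^2,
        # which solves for x and then r.
        import math

        def trunk_diameter(w1, w2, plane_sep):
            """w1, w2: chord widths on the two planes; plane_sep: plane spacing."""
            d = plane_sep
            x = ((w1**2 - w2**2) / 4.0 - d**2) / (2.0 * d)
            r = math.sqrt((w1 / 2.0) ** 2 + x ** 2)
            return 2.0 * r

        # Example: chords of 0.28 m and 0.20 m on planes 0.10 m apart -> ~0.28 m.
        print(trunk_diameter(0.28, 0.20, 0.10))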

  13. Multi-arm multilateral haptics-based immersive tele-robotic system (HITS) for improvised explosive device disposal

    NASA Astrophysics Data System (ADS)

    Erickson, David; Lacheray, Hervé; Lai, Gilbert; Haddadi, Amir

    2014-06-01

    This paper presents the latest advancements of the Haptics-based Immersive Tele-robotic System (HITS) project, a next generation Improvised Explosive Device (IED) disposal (IEDD) robotic interface containing an immersive telepresence environment for a remotely-controlled three-articulated-robotic-arm system. While the haptic feedback enhances the operator's perception of the remote environment, a third teleoperated dexterous arm, equipped with multiple vision sensors and cameras, provides stereo vision with proper visual cues, and a 3D photo-realistic model of the potential IED. This decentralized system combines various capabilities including stable and scaled motion, singularity avoidance, cross-coupled hybrid control, active collision detection and avoidance, compliance control and constrained motion to provide a safe and intuitive control environment for the operators. Experimental results and validation of the current system are presented through various essential IEDD tasks. This project demonstrates that a two-armed anthropomorphic Explosive Ordnance Disposal (EOD) robot interface can achieve complex neutralization techniques against realistic IEDs without the operator approaching at any time.

  14. A Leadership Perspective on a Shared Vision for Healthcare.

    PubMed

    Kitch, Tracy

    2017-01-01

    Our country's recent negotiations for a new Health Accord have shone light on the importance of more accessible and better home care. The direction being taken on health funding investments has sent a strong message about healthcare system redesign. It is time to design a healthcare system that moves us away from a hospital-focused model to one that is more effective, integrated and sustainable, and one that places a greater emphasis on primary care, community care and home care. The authors of the lead paper (Sharkey and Lefebre 2017) provide their vision for people-powered care and explore the opportunity for nursing leaders to draw upon the unique expertise and insights of home care nursing as a strategic lever to bring about real health system transformation across all settings. Understanding what really matters at the beginning of the healthcare journey and honouring the tenets of partnership and empowerment as a universal starting point to optimize health outcomes along the continuum of care present a very important opportunity. However, as nursing leaders in health system change, it is important that we extend the conversation beyond one setting. It is essential that as leaders, we seek to design models of care delivery that achieve a shared vision, focused on seamless coordinated care across the continuum that is person-centred. Bringing about real system change requires us to think differently and consider the role of nursing across all settings, collaboratively co-designing so that our collective skills and knowledge can work within a complementary framework. Focusing our leadership efforts on enhancing integration across healthcare settings will ensure that nurses can be important leaders and active decision-makers in health system change. A shared vision for healthcare requires all of us to look beyond the usual practices and structures, hospitals and institutional walls.

  15. Smart unattended sensor networks with scene understanding capabilities

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2006-05-01

    Unattended sensor systems are new technologies that are intended to provide enhanced situational awareness to military and law enforcement agencies. A network of such sensors cannot be very effective in field conditions if it can only transmit visual information to human operators or alert them to motion. In real field conditions, events may happen in many nodes of a network simultaneously, but the number of control personnel is always limited, and the attention of human operators may be drawn to particular network nodes while a more dangerous threat goes unnoticed in other nodes. Sensor networks would be more effective if equipped with a system that is similar to human vision in its ability to understand visual information. Human vision achieves this with a coarse but wide peripheral system that tracks motion and regions of interest, a narrow but precise foveal system that analyzes and recognizes objects in the center of the selected region of interest, and visual intelligence that provides scene and object context and resolves ambiguity and uncertainty in the visual information. Biologically inspired Network-Symbolic models convert image information into an 'understandable' Network-Symbolic format, which is similar to relational knowledge models. The equivalent of the interaction between peripheral and foveal systems in the network-symbolic system is achieved via interaction between Visual and Object Buffers and the top-level knowledge system.

  16. Two-Phase Flow Technology Developed and Demonstrated for the Vision for Exploration

    NASA Technical Reports Server (NTRS)

    Sankovic, John M.; McQuillen, John B.; Lekan, Jack F.

    2005-01-01

    NASA's vision for exploration will once again expand the bounds of human presence in the universe with planned missions to the Moon and Mars. To attain the numerous goals of this vision, NASA will need to develop technologies in several areas, including advanced power-generation and thermal-control systems for spacecraft and life support. The development of these systems will have to be demonstrated prior to implementation to ensure safe and reliable operation in reduced-gravity environments. The Two-Phase Flow Facility (TΦFFy) Project will provide the path to these enabling technologies for critical multiphase fluid products. The safety and reliability of future systems will be enhanced by addressing focused microgravity fluid physics issues associated with flow boiling, condensation, phase separation, and system stability, all of which are essential to exploration technology. The project, a multiyear effort initiated in 2004, will include concept development, normal-gravity testing (laboratories), reduced-gravity aircraft flight campaigns (NASA's KC-135 and C-9 aircraft), space-flight experimentation (International Space Station), and model development. This project will be implemented by a team from the NASA Glenn Research Center, QSS Group, Inc., ZIN Technologies, Inc., and the Extramural Strategic Research Team composed of experts from academia.

  17. Availability of vision and tactile gating: vision enhances tactile sensitivity.

    PubMed

    Colino, Francisco L; Lee, Ji-Hang; Binsted, Gordon

    2017-01-01

    A multitude of events bombards our sensory systems at every moment of our lives. Thus, it is important for the sensory and motor cortices to gate unimportant events. Tactile suppression is a well-known phenomenon defined as a reduced ability to detect tactile events on the skin before and during movement. Previous experiments (Buckingham et al. in Exp Brain Res 201(3):411-419, 2010; Colino et al. in Physiol Rep 2(3):e00267, 2014) found that detection rates decrease just prior to and during finger abduction, and decrease according to the proximity of the moving effector. However, what effect does vision have on tactile gating? There is ample evidence (see Serino and Haggard in Neurosci Biobehav Rev 34:224-236, 2010) of increased tactile acuity when participants see their limbs. The present study examined how tactile detection changes in response to visual condition (vision/no vision). Ten human participants used their right hand to reach and grasp a cylinder. Tactors were attached to the index finger and the forearm of both the right and left arms and vibrated at various epochs relative to a "go" tone. The results replicate previous findings from our laboratory (Colino et al. in Physiol Rep 2(3):e00267, 2014). Also, tactile acuity decreased when participants did not have vision. These results indicate that vision affects somatosensation via inputs from parietal areas (Konen and Haggard in Cereb Cortex 24(2):501-507, 2014), but does so in a reach-to-grasp context.

  18. Effectiveness of portable electronic and optical magnifiers for near vision activities in low vision: a randomised crossover trial.

    PubMed

    Taylor, John J; Bambrick, Rachel; Brand, Andrew; Bray, Nathan; Dutton, Michelle; Harper, Robert A; Hoare, Zoe; Ryan, Barbara; Edwards, Rhiannon T; Waterman, Heather; Dickinson, Christine

    2017-07-01

    To compare the performance of near vision activities using additional portable electronic vision enhancement systems (p-EVES), to using optical magnifiers alone, by individuals with visual impairment. A total of 100 experienced optical aid users were recruited from low vision clinics at Manchester Royal Eye Hospital, Manchester, UK, to a prospective two-arm cross-over randomised controlled trial. Reading, performance of near vision activities, and device usage were evaluated at baseline; and at the end of each study arm (Intervention A: existing optical aids plus p-EVES; Intervention B: optical aids only) which was after 2 and 4 months. A total of 82 participants completed the study. Overall, maximum reading speed for high contrast sentences was not statistically significantly different for optical aids and p-EVES, although the critical print size and threshold print size which could be accessed with p-EVES were statistically significantly smaller (p < 0.001 in both cases). The optical aids were used for a larger number of tasks (p < 0.001), and used more frequently (p < 0.001). However p-EVES were preferred for leisure reading by 70% of participants, and allowed longer duration of reading (p < 0.001). During the study arm when they had a p-EVES device, participants were able to carry out more tasks independently (p < 0.001), and reported less difficulty with a range of near vision activities (p < 0.001). The study provides evidence that p-EVES devices can play a useful role in supplementing the range of low vision aids used to reduce activity limitation for near vision tasks. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.

  19. Adjustable typography: an approach to enhancing low vision text accessibility.

    PubMed

    Arditi, Aries

    2004-04-15

    Millions of people have low vision, a disability condition caused by uncorrectable or partially correctable disorders of the eye. The primary goal of low vision rehabilitation is increasing access to printed material. This paper describes how adjustable typography, a computer graphic approach to enhancing text accessibility, can play a role in this process, by allowing visually-impaired users to customize fonts to maximize legibility according to their own visual needs. Prototype software and initial testing of the concept is described. The results show that visually-impaired users tend to produce a variety of very distinct fonts, and that the adjustment process results in greatly enhanced legibility. But this initial testing has not yet demonstrated increases in legibility over and above the legibility of highly legible standard fonts such as Times New Roman.

  20. 78 FR 34935 - Revisions to Operational Requirements for the Use of Enhanced Flight Vision Systems (EFVS) and to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-11

    ... Transportation Safety Board OEM Original equipment manufacturer OpSpec Operation specification PAR Precision....1) B. Consolidate EFVS requirements in part 91 in a new section (Sec. 91.176) C. Establish equipment... single new section for organizational and regulatory clarity. Section 91.189 would be amended to permit...

  1. Towards Achieving a Critical Thinking Society in Malaysia: A Challenge to School Libraries and Educational Systems.

    ERIC Educational Resources Information Center

    Yaacob, Raja Abdullah; Seman, Norma Abur

    One of the challenges facing Malaysia amidst its economic development is the achievement of a critical thinking society. This would enhance and guarantee the success of research and development programs in addition to having other socioeconomic effects. This paper covers the following topics: Vision 2020, Malaysia's goal to reach developed nation…

  2. The Use of Spatialized Speech in Auditory Interfaces for Computer Users Who Are Visually Impaired

    ERIC Educational Resources Information Center

    Sodnik, Jaka; Jakus, Grega; Tomazic, Saso

    2012-01-01

    Introduction: This article reports on a study that explored the benefits and drawbacks of using spatially positioned synthesized speech in auditory interfaces for computer users who are visually impaired (that is, are blind or have low vision). The study was a practical application of such systems--an enhanced word processing application compared…

  3. U.S. Geological Survey core science systems strategy: characterizing, synthesizing, and understanding the critical zone through a modular science framework

    USGS Publications Warehouse

    Bristol, R. Sky; Euliss, Ned H.; Booth, Nathaniel L.; Burkardt, Nina; Diffendorfer, Jay E.; Gesch, Dean B.; McCallum, Brian E.; Miller, David M.; Morman, Suzette A.; Poore, Barbara S.; Signell, Richard P.; Viger, Roland J.

    2013-01-01

    Core Science Systems is a new mission of the U.S. Geological Survey (USGS) that resulted from the 2007 Science Strategy, "Facing Tomorrow's Challenges: U.S. Geological Survey Science in the Decade 2007-2017." This report describes the Core Science Systems vision and outlines a strategy to facilitate integrated characterization and understanding of the complex Earth system. The vision and suggested actions are bold and far-reaching, describing a conceptual model and framework to enhance the ability of the USGS to bring its core strengths to bear on pressing societal problems through data integration and scientific synthesis across the breadth of science. The context of this report is inspired by a direction set forth in the 2007 Science Strategy. Specifically, ecosystem-based approaches provide the underpinnings for essentially all science themes that define the USGS. Every point on Earth falls within a specific ecosystem where data, other information assets, and the expertise of USGS and its many partners can be employed to quantitatively understand how that ecosystem functions and how it responds to natural and anthropogenic disturbances. Every benefit society obtains from the planet (food, water, raw materials to build infrastructure, homes and automobiles, fuel to heat homes and cities, and many others) is derived from or affects ecosystems. The vision for Core Science Systems builds on core strengths of the USGS in characterizing and understanding complex Earth and biological systems through research, modeling, mapping, and the production of high quality data on the Nation's natural resource infrastructure. Together, these research activities provide a foundation for ecosystem-based approaches through geologic mapping, topographic mapping, and biodiversity mapping. The vision describes a framework founded on these core mapping strengths that makes it easier for USGS scientists to discover critical information, share and publish results, and identify potential collaborations that transcend all USGS missions. The framework is designed to improve the efficiency of scientific work within USGS by establishing a means to preserve and recall data for future applications, organizing existing scientific knowledge and data to facilitate new use of older information, and establishing a future workflow that naturally integrates new data, applications, and other science products to make interdisciplinary research easier and more efficient. Given the increasing need for integrated data and interdisciplinary approaches to solve modern problems, leadership by the Core Science Systems mission will facilitate problem solving by all USGS missions in ways not formerly possible. The report lays out a strategy to achieve this vision through three goals with accompanying objectives and actions. The first goal builds on and enhances the strengths of the Core Science Systems mission in characterizing and understanding the Earth system from the geologic framework to the topographic characteristics of the land surface and biodiversity across the Nation. The second goal enhances and develops new strengths in computer and information science to make it easier for USGS scientists to discover data and models, share and publish results, and discover connections between scientific information and knowledge. 
The third goal brings additional focus to research and development methods to address complex issues affecting society that require integration of knowledge and new methods for synthesizing scientific information. Collectively, the report lays out a strategy to create a seamless connection between all USGS activities to accelerate and make USGS science more efficient by fully integrating disciplinary expertise within a new and evolving science paradigm for a changing world in the 21st century.

  4. Science strategy for Core Science Systems in the U.S. Geological Survey, 2013-2023

    USGS Publications Warehouse

    Bristol, R. Sky; Euliss, Ned H.; Booth, Nathaniel L.; Burkardt, Nina; Diffendorfer, Jay E.; Gesch, Dean B.; McCallum, Brian E.; Miller, David M.; Morman, Suzette A.; Poore, Barbara S.; Signell, Richard P.; Viger, Roland J.

    2012-01-01

    Core Science Systems is a new mission of the U.S. Geological Survey (USGS) that grew out of the 2007 Science Strategy, “Facing Tomorrow’s Challenges: U.S. Geological Survey Science in the Decade 2007–2017.” This report describes the vision for this USGS mission and outlines a strategy for Core Science Systems to facilitate integrated characterization and understanding of the complex earth system. The vision and suggested actions are bold and far-reaching, describing a conceptual model and framework to enhance the ability of USGS to bring its core strengths to bear on pressing societal problems through data integration and scientific synthesis across the breadth of science. The context of this report is inspired by a direction set forth in the 2007 Science Strategy. Specifically, ecosystem-based approaches provide the underpinnings for essentially all science themes that define the USGS. Every point on earth falls within a specific ecosystem where data, other information assets, and the expertise of USGS and its many partners can be employed to quantitatively understand how that ecosystem functions and how it responds to natural and anthropogenic disturbances. Every benefit society obtains from the planet (food, water, raw materials to build infrastructure, homes and automobiles, fuel to heat homes and cities, and many others) is derived from or affects ecosystems. The vision for Core Science Systems builds on core strengths of the USGS in characterizing and understanding complex earth and biological systems through research, modeling, mapping, and the production of high quality data on the nation’s natural resource infrastructure. Together, these research activities provide a foundation for ecosystem-based approaches through geologic mapping, topographic mapping, and biodiversity mapping. The vision describes a framework founded on these core mapping strengths that makes it easier for USGS scientists to discover critical information, share and publish results, and identify potential collaborations that transcend all USGS missions. The framework is designed to improve the efficiency of scientific work within USGS by establishing a means to preserve and recall data for future applications, organizing existing scientific knowledge and data to facilitate new use of older information, and establishing a future workflow that naturally integrates new data, applications, and other science products to make it easier and more efficient to conduct interdisciplinary research over time. Given the increasing need for integrated data and interdisciplinary approaches to solve modern problems, leadership by the Core Science Systems mission will facilitate problem solving by all USGS missions in ways not formerly possible. The report lays out a strategy to achieve this vision through three goals with accompanying objectives and actions. The first goal builds on and enhances the strengths of the Core Science Systems mission in characterizing and understanding the earth system from the geologic framework to the topographic characteristics of the land surface and biodiversity across the nation. The second goal enhances and develops new strengths in computer and information science to make it easier for USGS scientists to discover data and models, share and publish results, and discover connections between scientific information and knowledge. 
The third goal brings additional focus to research and development methods to address complex issues affecting society that require integration of knowledge and new methods for synthesizing scientific information. Collectively, the report lays out a strategy to create a seamless connection between all USGS activities to accelerate and make USGS science more efficient by fully integrating disciplinary expertise within a new and evolving science paradigm for a changing world in the 21st century.

  5. Flight Deck Display Technologies for 4DT and Surface Equivalent Visual Operations

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Jones, Denis R.; Shelton, Kevin J.; Arthur, Jarvis J., III; Bailey, Randall E.; Allamandola, Angela S.; Foyle, David C.; Hooey, Becky L.

    2009-01-01

    NASA research is focused on flight deck display technologies that may significantly enhance situation awareness, enable new operating concepts, and reduce the potential for incidents/accidents for terminal area and surface operations. The display technologies include surface map, head-up, and head-worn displays; 4DT guidance algorithms; synthetic and enhanced vision technologies; and terminal maneuvering area traffic conflict detection and alerting systems. This work is critical to ensure that the flight deck interface technologies and the role of the human participants can support the full realization of the Next Generation Air Transportation System (NextGen) and its novel operating concepts.

  6. Enhanced computer vision with Microsoft Kinect sensor: a review.

    PubMed

    Han, Jungong; Shao, Ling; Xu, Dong; Shotton, Jamie

    2013-10-01

    With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers.
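
    A minimal sketch of the depth/RGB complementarity the review highlights (thresholds are illustrative) is depth-based foreground segmentation, a step many Kinect pipelines share:

        # Depth-based foreground segmentation: keep RGB pixels whose registered
        # depth falls inside a band of interest; thresholds are illustrative.
        import numpy as np

        def segment_by_depth(rgb, depth_mm, near_mm=500, far_mm=1500):
            """Keep RGB pixels whose registered depth lies in [near, far]."""
            mask = (depth_mm > near_mm) & (depth_mm < far_mm)
            out = np.zeros_like(rgb)
            out[mask] = rgb[mask]     # background pixels stay black
            return out, mask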

  7. A Common Vision: Teacher Quality Enhancement in the Middle Grades in Illinois. Collaborating for Success: Lessons Learned from Illinois' TQE Grant Implementation

    ERIC Educational Resources Information Center

    Klostermann, Brenda K.; Presley, Jennifer B.

    2005-01-01

    Findings are presented from our case study evaluating the implementation of the IBHE's federally funded grant, "A Common Vision: Teacher Quality Enhancement in the Middle Grades in Illinois." Four sites were examined in terms of how they organized to attain the grant goals, and what aspects of organizational culture and leadership…

  8. (En)visioning Success: The Anatomy and Functions of Vision in the Basic Course.

    ERIC Educational Resources Information Center

    Williams, Glen

    The success of the basic course in speech communication depends largely on a vision that values the course and its place in the undergraduate curriculum; emphasizes the necessity of ongoing training and development of teaching assistants and instructors; and values scholarship that will enhance those efforts as well as improve dedication. Vision…

  9. The University and the Hamlets: Revitalizing Low-Income Communities through University Outreach and Community Visioning Exercises.

    ERIC Educational Resources Information Center

    Weinberg, Adam S.

    1999-01-01

    Explores an effort by Colgate University (New York) to enhance economic development in two low-income hamlets in New York through community-visioning programs. Describes the process of community visioning and shows how Colgate has been instrumental in its promotion. Argues that universities are better situated than governments or nonprofits to…

  10. An Analysis of Helicopter Pilot Scan Techniques While Flying at Low Altitudes and High Speed

    DTIC Science & Technology

    2012-09-01

    ...(Crognale, 2008). Recently, the use of synthetic vision (SV) and a head-up display (HUD) has been a topic of discussion in the aviation community... Synthetic vision uses external cameras to provide the pilot with an enhanced view of the outside world, usually with the assistance of night vision...

  11. Civil use of night vision goggles within the National Airspace System

    NASA Astrophysics Data System (ADS)

    Winkel, James G.; Faber, Lorelei

    2001-08-01

    When properly employed, Night Vision Goggles (NVGs) improve a pilot's ability to see during periods of darkness. The resultant enhancement in situational awareness achieved when using NVGs increases flight safety during night VFR operations. The FAA has been constrained by a lack of the requisite regulatory and guidance infrastructure to adequately facilitate civil requests for the use of NVGs within the National Airspace System (NAS). RTCA Special Committee 196 (SC-196), NVG Appliances and Equipment, was formed and tasked to develop: an operational concept and operational requirements for NVG implementation into the NAS, minimum operational performance standards for NVGs, and training guidelines and considerations for NVG operations. This paper provides a historical perspective on the use of NVGs within the NAS, the status of SC-196 work in progress, FAA integration of SC-196 committee products, and the harmonization effort between EUROCAE's NVG committee and SC-196.

  12. Hi-Vision telecine system using pickup tube

    NASA Astrophysics Data System (ADS)

    Iijima, Goro

    1992-08-01

    Hi-Vision broadcasting, offering far more lifelike pictures than those produced by existing television broadcasting systems, has enormous potential in both industrial and commercial fields. The dissemination of the Hi-Vision system will enable vivid, movie theater quality pictures to be readily enjoyed in homes in the near future. To convert motion film pictures into Hi-Vision signals, a telecine system is needed. The Hi-Vision telecine systems currently under development are the "laser telecine," "flying-spot telecine," and "Saticon telecine" systems. This paper provides an overview of the pickup tube type Hi-Vision telecine system (referred to herein as the Saticon telecine system) developed and marketed by Ikegami Tsushinki Co., Ltd.

  13. Real-time millimeter-wave imaging radiometer for avionic synthetic vision

    NASA Astrophysics Data System (ADS)

    Lovberg, John A.; Chou, Ri-Chee; Martin, Christopher A.

    1994-07-01

    ThermoTrex Corporation (TTC) has developed an imaging radiometer, the passive microwave camera (PMC), that uses an array of frequency-scanned antennas coupled to a multi-channel acousto-optic (Bragg cell) spectrum analyzer to form visible images of a scene through acquisition of thermal blackbody radiation in the millimeter-wave spectrum. The output of the Bragg cell is imaged by a standard video camera and passed to a computer for normalization and display at real-time frame rates. One application of this system could be its incorporation into an enhanced vision system to provide pilots with a clear view of the runway during fog and other adverse weather conditions. The unique PMC system architecture will allow compact large-aperture implementations because of its flat antenna sensor. Other potential applications include air traffic control, all-weather area surveillance, fire detection, and security. This paper describes the architecture of the TTC PMC and shows examples of images acquired with the system.

  14. A Practical Solution Using A New Approach To Robot Vision

    NASA Astrophysics Data System (ADS)

    Hudson, David L.

    1984-01-01

    Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower-cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS experience lower development cost when compared with conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens to robot vision more industrial applications that previously were not practical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost-effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another, and lenses and light sources from yet others. The user then had to assemble the pieces, and in most instances had to write all of his own software to test, analyze and process the vision application. The second and most common approach was to contract with the vision equipment vendor for the development and installation of a turnkey inspection or manufacturing system. The robot user and his company paid a premium for their vision system in an effort to assure the success of the system. Since 1981, emphasis on robotics has skyrocketed. New groups have been formed in many manufacturing companies with the charter to learn about, test and initially apply new robot and automation technologies. Machine vision is one of the new technologies being tested and applied. This focused interest has created a need for a robot vision system that makes it easy for manufacturing engineers to learn about, test, and implement a robot vision application. A newly developed vision system addresses those needs. The Vision Development System (VDS) is a complete hardware and software product for the development and testing of robot vision applications. A complementary, low-cost Target Application System (TASK) runs the application program developed with the VDS. An actual robot vision application that demonstrates inspection and pre-assembly for keyboard manufacturing is used to illustrate the VDS/TASK approach.

  15. Proteus: a reconfigurable computational network for computer vision

    NASA Astrophysics Data System (ADS)

    Haralick, Robert M.; Somani, Arun K.; Wittenbrink, Craig M.; Johnson, Robert; Cooper, Kenneth; Shapiro, Linda G.; Phillips, Ihsin T.; Hwang, Jenq N.; Cheung, William; Yao, Yung H.; Chen, Chung-Ho; Yang, Larry; Daugherty, Brian; Lorbeski, Bob; Loving, Kent; Miller, Tom; Parkins, Larye; Soos, Steven L.

    1992-04-01

    The Proteus architecture is a highly parallel MIMD (multiple-instruction, multiple-data) machine, optimized for large-granularity tasks such as machine vision and image processing. The system can achieve 20 Gigaflops (80 Gigaflops peak). It accepts data via multiple serial links at a rate of up to 640 megabytes/second. The system employs a hierarchical reconfigurable interconnection network, with the highest level being a circuit-switched Enhanced Hypercube serial interconnection network for internal data transfers. The system is designed to use 256 to 1,024 RISC processors. The processors use one-megabyte external read/write allocating caches for reduced multiprocessor contention. The system detects, locates, and replaces faulty subsystems using redundant hardware to facilitate fault tolerance. The parallelism is directly controllable through an advanced software system for partitioning, scheduling, and development. System software includes a translator for the INSIGHT language, a parallel debugger, low- and high-level simulators, and a message-passing system for all control needs. Image processing application software includes a variety of point operators, neighborhood operators, convolution, and the mathematical morphology operations of binary and gray-scale dilation, erosion, opening, and closing.
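
    The binary morphology operations named above can be illustrated in a few lines with SciPy (this shows the operations themselves, not Proteus's parallel implementation):

        # Binary mathematical morphology: dilation, erosion, opening, closing.
        import numpy as np
        from scipy.ndimage import (binary_dilation, binary_erosion,
                                   binary_opening, binary_closing)

        img = np.zeros((9, 9), dtype=bool)
        img[3:6, 3:6] = True                   # a 3x3 square of foreground pixels
        se = np.ones((3, 3), dtype=bool)       # structuring element

        dilated = binary_dilation(img, structure=se)  # grows the square to 5x5
        eroded = binary_erosion(img, structure=se)    # shrinks it to one pixel
        opened = binary_opening(img, structure=se)    # erosion then dilation
        closed = binary_closing(img, structure=se)    # dilation then erosion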

  16. Expedient range enhanced 3-D robot colour vision

    NASA Astrophysics Data System (ADS)

    Jarvis, R. A.

    1983-01-01

    Computer vision has been chosen, in many cases, as offering the richest form of sensory information that can be utilized for guiding robotic manipulation. The present investigation is concerned with the problem of three-dimensional (3D) visual interpretation of colored objects in support of robotic manipulation of those objects with a minimum of semantic guidance. The scene 'interpretations' are aimed at providing basic parameters to guide robotic manipulation rather than at providing humans with a detailed description of what the scene 'means'. Attention is given to the overall system configuration, hue transforms, connectivity analysis, plan/elevation segmentation, range scanners, elevation/range segmentation, higher-level structure, eye-in-hand research, and aspects of array and video stream processing.
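
    A small sketch of the hue-transform-plus-connectivity idea (an illustration only; the paper's actual transforms and thresholds differ): threshold a hue band, then label connected components as object candidates:

        # Hue thresholding followed by connected-component labeling; the hue
        # band and saturation threshold here are illustrative values.
        import numpy as np
        from scipy.ndimage import label, center_of_mass
        from matplotlib.colors import rgb_to_hsv

        def red_blobs(rgb):
            """rgb: float array in [0, 1]. Returns component count and centroids."""
            hsv = rgb_to_hsv(rgb)
            mask = ((hsv[..., 0] < 0.05) | (hsv[..., 0] > 0.95)) & (hsv[..., 1] > 0.5)
            labels, n = label(mask)
            return n, center_of_mass(mask, labels, list(range(1, n + 1)))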

  17. Part-Task Simulation of Synthetic and Enhanced Vision Concepts for Lunar Landing

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Bailey, Randall E.; Jackson, E. Bruce; Williams, Steven P.; Kramer, Lynda J.; Barnes, James R.

    2010-01-01

    During Apollo, the constraints placed by the design of the Lunar Module (LM) window for crew visibility and landing trajectory were a major problem. Lunar landing trajectories were tailored to provide crew visibility using nearly 70 degrees look-down angle from the canted LM windows. Apollo landings were scheduled only at specific times and locations to provide optimal sunlight on the landing site. The complications of trajectory design and crew visibility are still a problem today. Practical vehicle designs for lunar lander missions using optimal or near-optimal fuel trajectories render the natural vision of the crew from windows inadequate for the approach and landing task. Further, the sun angles for the desirable landing areas in the lunar polar regions create visually powerful, season-long shadow effects. Fortunately, Synthetic and Enhanced Vision (S/EV) technologies, conceived and developed in the aviation domain, may provide solutions to this visibility problem and enable additional benefits for safer, more efficient lunar operations. Piloted simulation evaluations have been conducted to assess the handling qualities of the various lunar landing concepts, including the influence of cockpit displays and the informational data and formats. Evaluation pilots flew various landing scenarios with S/EV displays. For some of the evaluation trials, an eyeglasses-mounted, monochrome monocular display, coupled with head tracking, was worn. The head-worn display scene consisted of S/EV fusion concepts. The results of this experiment showed that a head-worn system did not increase the pilot's workload when compared to using just the head-down displays. As expected, the head-worn system did not provide an increase in performance measures. Some pilots commented that the head-worn system provided greater situational awareness compared to just head-down displays.

  18. Automatic vision system for analysis of microscopic behavior of flow and transport in porous media

    NASA Astrophysics Data System (ADS)

    Rashidi, Mehdi; Dehmeshki, Jamshid; Dickenson, Eric; Daemi, M. Farhang

    1997-10-01

    This paper describes the development of a novel automated and efficient vision system to obtain velocity and concentration measurements within a porous medium. An aqueous fluid, laced with a fluorescent dye or fluorescent microspheres, flows through a transparent, refractive-index-matched column packed with transparent crystals. For illumination, a planar laser sheet passes through the column while a CCD camera records the illuminated planes. Detailed microscopic velocity and concentration fields have been computed within a 3D volume of the column. For velocity measurements, while the aqueous fluid laced with fluorescent microspheres flows through the transparent medium, a CCD camera records the motions of the fluorescing particles to a video cassette recorder. The recorded images are then acquired automatically, frame by frame, and transferred to the computer for processing, using a frame grabber and purpose-written algorithms through an RS-232 interface. Because the grabbed images are initially of poor quality, several preprocessing steps are applied to enhance the particles within the images. Finally, these enhanced particles are tracked to calculate velocity vectors in the plane of the laser beam. For concentration measurements, while the aqueous fluid laced with a fluorescent organic dye flows through the transparent medium, a CCD camera sweeps back and forth across the column and records concentration slices on the planes illuminated by the laser beam traveling simultaneously with the camera. These recorded images are then transferred to the computer and processed in a similar fashion to the velocity measurements. To make the vision system fully automatic, several detailed image processing techniques were developed to match images that have different intensity values but the same topological characteristics. This yields normalized interstitial chemical concentrations as a function of time within the porous column.
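
    As a rough illustration of the particle enhancement and velocity-vector steps described above, the sketch below subtracts a background image, thresholds the bright fluorescing particles, and matches nearest neighbours between two consecutive frames. The function names, threshold value, and nearest-neighbour matching are illustrative assumptions, not the authors' algorithms.

    ```python
    # Sketch of particle enhancement + frame-to-frame tracking; assumes two
    # consecutive frames with at least one particle visible in each.
    import numpy as np
    from scipy import ndimage

    def particle_centroids(frame, background, thresh=30):
        """Enhance fluorescing particles and return their (row, col) centroids."""
        enhanced = frame.astype(float) - background       # remove static background
        mask = enhanced > thresh                          # keep bright particles only
        labels, n = ndimage.label(mask)                   # connected components
        return np.array(ndimage.center_of_mass(enhanced, labels, range(1, n + 1)))

    def velocities(frame0, frame1, background, dt):
        """Nearest-neighbour particle matching between frames -> velocity vectors."""
        p0 = particle_centroids(frame0, background)
        p1 = particle_centroids(frame1, background)
        vecs = []
        for p in p0:
            j = np.argmin(np.linalg.norm(p1 - p, axis=1))  # closest particle in next frame
            vecs.append((p1[j] - p) / dt)                  # pixels per unit time
        return np.array(vecs)
    ```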

  19. Part-task simulation of synthetic and enhanced vision concepts for lunar landing

    NASA Astrophysics Data System (ADS)

    Arthur, Jarvis J., III; Bailey, Randall E.; Jackson, E. Bruce; Barnes, James R.; Williams, Steven P.; Kramer, Lynda J.

    2010-04-01

    During Apollo, the constraints placed by the design of the Lunar Module (LM) window for crew visibility and landing trajectory were "a major problem." Lunar landing trajectories were tailored to provide crew visibility using nearly 70 degrees look-down angle from the canted LM windows. Apollo landings were scheduled only at specific times and locations to provide optimal sunlight on the landing site. The complications of trajectory design and crew visibility are still a problem today. Practical vehicle designs for lunar lander missions using optimal or near-optimal fuel trajectories render the natural vision of the crew from windows inadequate for the approach and landing task. Further, the sun angles for the desirable landing areas in the lunar polar regions create visually powerful, season-long shadow effects. Fortunately, Synthetic and Enhanced Vision (S/EV) technologies, conceived and developed in the aviation domain, may provide solutions to this visibility problem and enable additional benefits for safer, more efficient lunar operations. Piloted simulation evaluations have been conducted to assess the handling qualities of the various lunar landing concepts, including the influence of cockpit displays and the informational data and formats. Evaluation pilots flew various landing scenarios with S/EV displays. For some of the evaluation trials, an eyeglasses-mounted, monochrome monocular display, coupled with head tracking, was worn. The head-worn display scene consisted of S/EV fusion concepts. The results of this experiment showed that a head-worn system did not increase the pilot's workload when compared to using just the head-down displays. As expected, the head-worn system did not provide an increase in performance measures. Some pilots commented that the head-worn system provided greater situational awareness compared to just head-down displays.

  20. Optimizing the temporal dynamics of light to human perception.

    PubMed

    Rieiro, Hector; Martinez-Conde, Susana; Danielson, Andrew P; Pardo-Vazquez, Jose L; Srivastava, Nishit; Macknik, Stephen L

    2012-11-27

    No previous research has tuned the temporal characteristics of light-emitting devices to enhance brightness perception in human vision, despite the potential for significant power savings. The role of stimulus duration in perceived contrast is unclear, due to contradiction between the models proposed by Bloch and by Broca and Sulzer over 100 years ago. We propose that the discrepancy is accounted for by the observer's "inherent expertise bias," a type of experimental bias in which the observer's life-long experience with interpreting the sensory world overcomes perceptual ambiguities and biases experimental outcomes. By controlling for this and all other known biases, we show that perceived contrast peaks at durations of 50-100 ms, and we conclude that the Broca-Sulzer effect best describes human temporal vision. We also show that the plateau in perceived brightness with stimulus duration, described by Bloch's law, is a previously uncharacterized type of temporal brightness constancy that, like classical constancy effects, serves to enhance object recognition across varied lighting conditions in natural vision, although this is a constancy effect that normalizes perception across temporal modulation conditions. A practical outcome of this study is that tuning light-emitting devices to match the temporal dynamics of the human visual system's temporal response function will result in significant power savings.

  1. Basic design principles of colorimetric vision systems

    NASA Astrophysics Data System (ADS)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. The color measurement instruments, such as colorimeters and spectrophotometers, used for production quality control have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them in vision systems. The subject of this presentation far exceeds the limitations of a journal paper, so only the most important aspects will be discussed. An overview of the major areas of application for colorimetric vision systems will be given. Finally, the reasons why some customers are happy with their vision systems and some are not will be analyzed.
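
    One of the colorimetric principles the paper argues vision systems must respect is that camera RGB values become meaningful for color measurement only after mapping into a standard colorimetric space. The following is a minimal sketch, assuming sRGB primaries and a D65 white point; a real system would substitute a calibrated matrix for its own camera.

    ```python
    # Mapping (s)RGB into CIE XYZ with the standard sRGB matrix; a camera
    # in production use would be calibrated to obtain its own matrix.
    import numpy as np

    M_SRGB_TO_XYZ = np.array([
        [0.4124, 0.3576, 0.1805],
        [0.2126, 0.7152, 0.0722],
        [0.0193, 0.1192, 0.9505],
    ])

    def linearize(srgb):
        """Undo the sRGB gamma so the matrix applies to linear light."""
        s = np.asarray(srgb, dtype=float)
        return np.where(s <= 0.04045, s / 12.92, ((s + 0.055) / 1.055) ** 2.4)

    def rgb_to_xyz(srgb):
        return linearize(srgb) @ M_SRGB_TO_XYZ.T

    print(rgb_to_xyz([1.0, 1.0, 1.0]))  # approximately the D65 white point
    ```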

  2. Using advanced computer vision algorithms on small mobile robots

    NASA Astrophysics Data System (ADS)

    Kogut, G.; Birchmore, F.; Biagtan Pacis, E.; Everett, H. R.

    2006-05-01

    The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: 1) object classification using a boosted cascade of classifiers trained with the AdaBoost training algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an AdaBoost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real time. While working towards a solution to increase the robustness of this system to perform generic object recognition, this paper demonstrates an extension to this application by detecting soda cans in a cluttered indoor environment. The human presence detection from a moving platform system uses a data fusion algorithm which combines results from a scanning laser and a thermal imager. The system is able to detect the presence of humans while both the humans and the robot are moving simultaneously. In both systems, the two aforementioned algorithms were implemented on embedded hardware and optimized for use in real time. Test results are shown for a variety of environments.
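
    The UCSD soda-can cascade itself is not publicly available, so the sketch below shows the same boosted-cascade detection pipeline using OpenCV's stock frontal-face model; swapping in a custom-trained cascade XML file is a one-line change. The input filename is a hypothetical placeholder.

    ```python
    # Boosted-cascade (Viola-Jones style) detection with OpenCV, standing
    # in for the UCSD-trained classifier described in the abstract.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("scene.jpg")  # hypothetical input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Each detection is the full cascade firing at some position and scale.
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                 minNeighbors=5):
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("detections.jpg", img)
    ```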

  3. Remote sensing of vegetation structure using computer vision

    NASA Astrophysics Data System (ADS)

    Dandois, Jonathan P.

    High-spatial resolution measurements of vegetation structure are needed for improving understanding of ecosystem carbon, water and nutrient dynamics, the response of ecosystems to a changing climate, and for biodiversity mapping and conservation, among many research areas. Our ability to make such measurements has been greatly enhanced by continuing developments in remote sensing technology, which allow researchers to measure numerous forest traits at varying spatial and temporal scales and over large spatial extents with minimal to no field work, which is costly over large areas and logistically difficult in some locations. Despite these advances, there remain several research challenges related to the methods by which three-dimensional (3D) and spectral datasets are joined (remote sensing fusion) and the availability and portability of systems for frequent data collections at small-scale sampling locations. Recent advances in the areas of computer vision structure from motion (SFM) and consumer unmanned aerial systems (UAS) offer the potential to address these challenges by enabling repeatable measurements of vegetation structural and spectral traits at the scale of individual trees. However, the potential advances offered by computer vision remote sensing also present unique challenges and questions that need to be addressed before this approach can be used to improve understanding of forest ecosystems. For computer vision remote sensing to be a valuable tool for studying forests, bounding information about the characteristics of the data produced by the system will help researchers understand and interpret results in the context of the forest being studied and of other remote sensing techniques. This research advances understanding of how forest canopy and tree 3D structure and color are accurately measured by a relatively low-cost and portable computer vision personal remote sensing system: 'Ecosynth'. Recommendations are made for optimal conditions under which forest structure measurements should be obtained with UAS-SFM remote sensing. Ultimately, remote sensing of vegetation by computer vision offers the potential to provide an 'ecologist's eye view', capturing not only canopy 3D and spectral properties, but also seeing the trees in the forest and the leaves on the trees.

  4. Computational approaches to vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  5. Laboratory analysis of phacoemulsifier compliance and capacity.

    PubMed

    Nejad, Mitra; Injev, Valentine P; Miller, Kevin M

    2012-11-01

    To compare the compliance and capacity of 7 fluidics modules used by 6 phacoemulsifiers from 3 manufacturers. Jules Stein Eye Institute, Los Angeles, California, USA. Experimental study. Previous-model and current-model phacoemulsifiers from 3 manufacturers were subjected to laboratory analysis of compliance and capacity. Previous-generation models tested included the Legacy Advantec, Whitestar Sovereign Phacoemulsification System, and Millennium Microsurgical System. Current models tested were the Infiniti Vision System with standard and Intrepid cassettes, Whitestar Signature Phacoemulsification System, and Stellaris Vision Enhancement System. To measure compliance, the aspiration line was connected to an electronic pressure transducer and small volumes of fluid were injected or aspirated. To measure capacity, the space between the distal end of the aspiration line and the pump was filled with methylene blue-dyed fluid. The Legacy was the most compliant phacoemulsifier. The old and new Whitestar systems, Millennium system, and Stellaris system showed similar midrange compliances. The Infiniti Vision System with the Intrepid fluidic management system was the least compliant. The Infiniti cassettes had the greatest capacity, which is a detriment from a surge-control perspective, and Signature cassettes had the least capacity. The Infiniti Intrepid system had the lowest compliance of the 6 units tested, which is optimum from a surge-control perspective. All other things being equal, the Infiniti should have the safest occlusion-break surge response. Mr. Injev is an employee of Alcon Laboratories. Dr. Miller is a consultant to and investigator for Alcon Laboratories. Ms. Nejad has no financial or proprietary interest in any material or method mentioned. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
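
    Compliance as measured above is, in effect, the slope dV/dP of the aspiration line: inject or withdraw known small volumes, record the transducer pressure, and fit a line. The sketch below illustrates that reduction; the readings are made-up illustrative numbers, not data from the study.

    ```python
    # Compliance estimation as the slope of a volume-pressure line fit.
    # The readings below are invented for illustration only.
    import numpy as np

    volumes_ml = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])          # injected (+) / aspirated (-)
    pressures_mmHg = np.array([-48.0, -23.5, 0.0, 24.5, 49.0])  # transducer readings

    slope, _ = np.polyfit(pressures_mmHg, volumes_ml, 1)        # dV/dP
    print(f"compliance ~= {slope * 1000:.2f} microliters/mmHg")
    ```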

  6. Ergonomic Enhancement for Older Readers with Low Vision

    ERIC Educational Resources Information Center

    Watson, Gale R.; Ramsey, Vincent; De l'Aune, William; Elk, Arona

    2004-01-01

    This study found that the provision of ergonomic workstations for 12 older persons with age-related macular degeneration who used low vision devices significantly increased the participants' reading speed and decreased their discomfort when reading.

  7. Connectionist model-based stereo vision for telerobotics

    NASA Technical Reports Server (NTRS)

    Hoff, William; Mathis, Donald

    1989-01-01

    Autonomous stereo vision for range measurement could greatly enhance the performance of telerobotic systems. Stereo vision could be a key component for autonomous object recognition and localization, thus enabling the system to perform low-level tasks, and allowing a human operator to perform a supervisory role. The central difficulty in stereo vision is the ambiguity in matching corresponding points in the left and right images. However, if one has a priori knowledge of the characteristics of the objects in the scene, as is often the case in telerobotics, a model-based approach can be taken. Researchers describe how matching ambiguities can be resolved by ensuring that the resulting three-dimensional points are consistent with surface models of the expected objects. A four-layer neural network hierarchy is used in which surface models of increasing complexity are represented in successive layers. These models are represented using a connectionist scheme called parameter networks, in which a parametrized object (for example, a planar patch p = f(h, m_x, m_y)) is represented by a collection of processing units, each of which corresponds to a distinct combination of parameter values. The activity level of each unit in the parameter network can be thought of as representing the confidence with which the hypothesis represented by that unit is believed. Weights in the network are set so as to implement gradient descent in an energy function.
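
    A much-simplified, serial sketch of the parameter-network idea follows: each unit corresponds to one (m_x, m_y, h) combination for the plane z = m_x*x + m_y*y + h, and its "activity" is the number of 3-D points consistent with it. The grid resolutions and tolerance are arbitrary choices here, and the vote-counting loop stands in for the network's activation dynamics.

    ```python
    # Hough-style accumulation over plane-parameter units, as a serial
    # stand-in for a connectionist parameter network.
    import numpy as np

    def plane_parameter_network(points, m_vals, h_vals, tol=0.05):
        """points: (N, 3) array of 3-D points. Returns the unit-activity grid."""
        activity = np.zeros((len(m_vals), len(m_vals), len(h_vals)))
        for x, y, z in points:
            for i, mx in enumerate(m_vals):
                for j, my in enumerate(m_vals):
                    h = z - mx * x - my * y        # the offset this point implies
                    k = np.argmin(np.abs(h_vals - h))
                    if abs(h_vals[k] - h) < tol:
                        activity[i, j, k] += 1     # this unit gains confidence
        return activity

    # Synthetic points on the plane z = 0.5*x + 0.0*y + 1.0
    pts = np.array([[x, y, 0.5 * x + 1.0] for x in np.linspace(0, 1, 10)
                    for y in np.linspace(0, 1, 10)])
    m_vals = np.linspace(-1, 1, 21)
    h_vals = np.linspace(0, 2, 21)
    act = plane_parameter_network(pts, m_vals, h_vals)
    i, j, k = np.unravel_index(act.argmax(), act.shape)
    print(m_vals[i], m_vals[j], h_vals[k])  # peaks near m_x=0.5, m_y=0.0, h=1.0
    ```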

  8. Diabetes Self-Management Education Enhanced by the Low Vision Professional

    ERIC Educational Resources Information Center

    Sokol-McKay, Debra A.

    2007-01-01

    Diabetes currently affects 20.8 million people in the United States and is the leading cause of blindness in people between the ages of 20 and 74 years. The author uses a fictional but typical example to explain the ways in which low vision specialists can improve the diabetes self-management program of a person with low vision and demonstrates…

  9. Brain Structure-Function Couplings: Year 2 Accomplishments and Programmatic Plans

    DTIC Science & Technology

    2013-06-01

    performance through individual-specific neurotechnologies and enhance Soldier protection technologies to minimize neural injury. The long-term vision of this...envision pathways that enable our basic science accomplishments to foster development of revolutionary Soldier neurotechnologies and Soldier protection...improve Soldier-system performance with Soldier-specific neurotechnologies . We expect mid-term impact with models linking structure and function that can

  10. 75 FR 47176 - Special Conditions: Dassault Aviation Model Falcon 7X; Enhanced Flight Visibility System (EFVS)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-05

    ...), imaging sensor(s), and avionics interfaces that display the sensor imagery on the HUD and overlay it with... that display the sensor imagery, with or without other flight information, on a head-down display. To... infrared sensors can be much different from that detected by natural pilot vision. On a dark night, thermal...

  11. Emission Measurements of Ultracell XX25 Reformed Methanol Fuel Cell System

    DTIC Science & Technology

    2008-06-01

    combination of electrochemical devices such as fuel cell and battery. Polymer electrolyte membrane fuel cells ( PEMFC ) using hydrogen or liquid...communications and computers, sensors and night vision capabilities. High temperature PEMFC offers some advantages such as enhanced electrode kinetics and better...tolerance of carbon monoxide that will poison the conventional PEMFC . Ultracell Corporation, Livermore, California has developed a first

  12. Is More Better? — Night Vision Enhancement System’s Pedestrian Warning Modes and Older Drivers

    PubMed Central

    Brown, Timothy; He, Yefei; Roe, Cheryl; Schnell, Thomas

    2010-01-01

    Pedestrian fatalities as a result of vehicle collisions are much more likely to happen at night than during the daytime. Poor visibility due to darkness is believed to be one of the causes of the higher vehicle collision rate at night. Existing studies have shown that night vision enhancement systems (NVES) may improve recognition distance, but may increase drivers' workload. The use of automatic warnings (AW) may help minimize workload, improve performance, and increase safety. In this study, we used a driving simulator to examine performance differences of a NVES with six different configurations of warning cues, including: visual, auditory, tactile, auditory and visual, tactile and visual, and no warning. Older drivers between the ages of 65 and 74 participated in the study. An analysis based on the distance to the pedestrian threat at the onset of braking response revealed that tactile and auditory warnings performed the best, while visual warnings performed the worst. When tactile or auditory warnings were presented in combination with visual warning, their effectiveness decreased. This result demonstrated that, contrary to general assumptions about warning systems, multi-modal warnings involving visual cues degraded the effectiveness of NVES for older drivers. PMID:21050616

  13. Synthetic Vision Enhances Situation Awareness and RNP Capabilities for Terrain-Challenged Approaches

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Prinzel, Lawrence J., III; Bailey, Randall E.; Arthur, Jarvis J., III

    2003-01-01

    The Synthetic Vision Systems (SVS) Project of the Aviation Safety Program is striving to eliminate poor visibility as a causal factor in aircraft accidents, as well as enhance the operational capabilities of all aircraft, through the display of computer-generated imagery derived from an onboard database of terrain, obstacle, and airport information. To achieve these objectives, NASA 757 flight test research was conducted at the Eagle-Vail, Colorado airport to evaluate three SVS display types (Head-Up Display, Head-Down Size A, Head-Down Size X) and two terrain texture methods (photo-realistic, generic) in comparison to the simulated Baseline Boeing-757 Electronic Attitude Direction Indicator and Navigation / Terrain Awareness and Warning System displays. These independent variables were evaluated for situation awareness, path error, and workload while making approaches to Runways 25 and 07 and during simulated engine-out Cottonwood 2 and KREMM departures. The results of the experiment showed significantly improved situation awareness, performance, and workload for the SVS concepts compared to the Baseline displays and confirmed the retrofit capability of the Head-Up Display and Size A SVS concepts. The research also demonstrated that the pathway and pursuit guidance used within the SVS concepts achieved required navigation performance (RNP) criteria.

  14. A cost-effective intelligent robotic system with dual-arm dexterous coordination and real-time vision

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Chen, Alexander Y. K.

    1991-01-01

    Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in the development of advanced manufacturing technology. A cost-effective approach for achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve product quality, increase the reliability of the manufacturing process, and augment production procedures for optimizing the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demands and has the potential to address both complex jobs and highly labor-intensive tasks. The ARS prototype employs the decomposed optimization technique in spatial planning. This technique is implemented within the framework of the sensor-actuator network to establish a general-purpose geometric reasoning system. The development computer system is a multiple-microcomputer network system, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystem results in a real-time, vision-based image processing capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning. The ARS currently has 18 degrees of freedom, made up of two articulated arms, one movable robot head, two charge-coupled device (CCD) cameras for producing stereoscopic views, an articulated cylindrical-type lower body, and an optional mobile base. A functional prototype is demonstrated.

  15. Enhanced modeling and simulation of EO/IR sensor systems

    NASA Astrophysics Data System (ADS)

    Hixson, Jonathan G.; Miller, Brian; May, Christopher

    2015-05-01

    The testing and evaluation process developed by the Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) provides end-to-end systems evaluation, testing, and training of EO/IR sensors. By combining NV-LabCap, the Night Vision Integrated Performance Model (NV-IPM), One Semi-Automated Forces (OneSAF) input sensor file generation, and the Night Vision Image Generator (NVIG) capabilities, NVESD provides confidence to the M&S community that EO/IR sensor developmental and operational testing and evaluation are accurately represented throughout the lifecycle of an EO/IR system. This new process allows for both theoretical and actual sensor testing. A sensor can be theoretically designed in NV-IPM, modeled in NV-IPM, and then seamlessly input into wargames for operational analysis. After theoretical design, prototype sensors can be measured using NV-LabCap, then modeled in NV-IPM and input into wargames for further evaluation. This measurement-to-modeling-and-simulation process can be repeated throughout the entire life cycle of an EO/IR sensor as needed, to include LRIP, full-rate production, and even after depot-level maintenance. This is a prototypical example of how an engineering-level model and higher-level simulations can share models to mutual benefit.

  16. Biomimetic machine vision system.

    PubMed

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

    Real-time application of digital imaging for use in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors without compromising the scope of vision or the resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. Development of a single sensor is accomplished, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data, resulting in minimal data processing to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating the resolution issues found in digital vision systems. In this paper, we will discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We will also discuss the process of developing an analog-based sensor that mimics the characteristics of interest in the biological vision system. This paper will conclude with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues.
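
    A rough sketch of why overlapping sensor responses yield hyperacuity follows: with broadly overlapping Gaussian receptive fields, a target's position can be recovered far more precisely than the sensor spacing by taking the response-weighted centroid. The sensor spacing, field widths, and centroid estimator are illustrative assumptions, not details of the Wyoming hardware.

    ```python
    # Hyperacuity sketch: sub-sensor-spacing localization from the
    # response-weighted centroid of overlapping Gaussian receptive fields.
    import numpy as np

    centers = np.linspace(0.0, 10.0, 11)  # 11 sensors spaced 1.0 apart
    sigma = 1.0                           # broad, overlapping fields

    def responses(target_x):
        return np.exp(-0.5 * ((centers - target_x) / sigma) ** 2)

    def estimate(target_x):
        r = responses(target_x)
        return (r * centers).sum() / r.sum()  # response-weighted centroid

    print(estimate(4.37))  # ~4.37, despite a sensor spacing of 1.0
    ```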

  17. Color image processing and vision system for an automated laser paint-stripping system

    NASA Astrophysics Data System (ADS)

    Hickey, John M., III; Hise, Lawson

    1994-10-01

    Color image processing in machine vision systems has not gained general acceptance. Most machine vision systems use images that are shades of gray. The Laser Automated Decoating System (LADS) required a vision system which could discriminate between substrates of various colors and textures, and paints ranging from semi-gloss grays to high-gloss red, white, and blue (Air Force Thunderbirds). The changing lighting levels produced by the pulsed CO2 laser mandated a vision system that did not require constant color-temperature lighting for reliable image analysis.

  18. Robotic vision techniques for space operations

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar

    1994-01-01

    Automation and robotics for space applications are being pursued for increased productivity, enhanced reliability, increased flexibility, higher safety, and for the automation of time-consuming tasks and those activities which are beyond the capacity of the crew. One of the key functional elements of an automated robotic system is sensing and perception. As the robotics era dawns in space, vision systems will be required to provide the key sensory data needed for multifaceted intelligent operations. In general, the three-dimensional scene/object description, along with location, orientation, and motion parameters, will be needed. In space, the absence of diffused lighting due to a lack of atmosphere gives rise to: (a) high dynamic range (10^8) of scattered sunlight intensities, resulting in very high contrast between shadowed and specular portions of the scene; (b) intense specular reflections causing target/scene bloom; and (c) loss of portions of the image due to shadowing and the presence of stars, Earth, Moon, and other space objects in the scene. In this work, developments for combating the adverse effects described earlier and for enhancing scene definition are discussed. Both active and passive sensors are used. The algorithm for selecting the appropriate wavelength, polarization, and look angle of vision sensors is based on environmental factors as well as the properties of the target/scene which are to be perceived. The environment is characterized on the basis of sunlight and other illumination incident on the target/scene and the temperature profiles estimated on the basis of the incident illumination. The unknown geometrical and physical parameters are then derived from the fusion of the active and passive microwave, infrared, laser, and optical data.

  19. Synthetic Vision for Lunar and Planetary Landing Vehicles

    NASA Technical Reports Server (NTRS)

    Williams, Steven P.; Arthur, Jarvis (Trey) J., III; Shelton, Kevin J.; Prinzel, Lawrence J., III; Norman, R. Michael

    2008-01-01

    The Crew Vehicle Interface (CVI) group of the Integrated Intelligent Flight Deck Technologies (IIFDT) has done extensive research in the area of Synthetic Vision (SV), and has shown that SV technology can substantially enhance flight crew situation awareness, reduce pilot workload, promote flight path control precision, and improve aviation safety. SV technology is being extended to evaluate its utility for lunar and planetary exploration vehicles. SV may hold significant potential for many lunar and planetary missions, since the SV presentation provides a computer-generated view of the terrain and other significant environment characteristics independent of the outside visibility conditions, window locations, or vehicle attributes. SV allows unconstrained control of the computer-generated scene lighting, terrain coloring, and virtual camera angles, which may provide invaluable visual cues to pilots/astronauts. In addition, important vehicle state information, such as forward and down velocities, altitude, and fuel remaining, may be conformally displayed on the view to enhance trajectory control and awareness of vehicle system status. This paper discusses preliminary SV concepts for tactical and strategic displays for a lunar landing vehicle. The technical challenges and potential solutions to SV applications for the lunar landing mission are explored, including the requirements for high-resolution lunar terrain maps and an accurate position and orientation of the vehicle, both of which are essential in providing lunar Synthetic Vision System (SVS) cockpit displays. The paper also discusses the technical challenge of creating an accurate synthetic terrain portrayal using an ellipsoid lunar digital elevation model, which eliminates projection errors and can be efficiently rendered in real time.

  20. Synthetic Vision Systems - Operational Considerations Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-01-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  1. TU-FG-201-04: Computer Vision in Autonomous Quality Assurance of Linear Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, H; Jenkins, C; Yu, S

    Purpose: Routine quality assurance (QA) of linear accelerators represents a critical and costly element of a radiation oncology center. Recently, a system was developed to autonomously perform routine quality assurance on linear accelerators. The purpose of this work is to extend this system and contribute computer vision techniques for obtaining quantitative measurements for a monthly multi-leaf collimator (MLC) QA test specified by TG-142, namely leaf position accuracy, and demonstrate extensibility for additional routines. Methods: Grayscale images of a picket fence delivery on a radioluminescent phosphor-coated phantom are captured using a CMOS camera. Collected images are processed to correct for camera distortions, rotation, and alignment, reduce noise, and enhance contrast. The location of each MLC leaf is determined through logistic fitting and a priori modeling based on knowledge of the delivered beams. Using the data collected and the criteria from TG-142, a decision is made on whether the leaf position accuracy of the MLC passes or fails. Results: The locations of all MLC leaf edges are found for three different picket fence images in a picket fence routine to 0.1 mm (1 pixel) precision. The program to correct for image alignment and determine leaf positions requires a runtime of 21–25 seconds for a single picket, and 44–46 seconds for a group of three pickets, on a standard workstation CPU (2.2 GHz Intel Core i7). Conclusion: MLC leaf edges were successfully found using techniques in computer vision. With the addition of computer vision techniques to the previously described autonomous QA system, the system is able to quickly perform complete QA routines with minimal human contribution.
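
    A sketch of sub-pixel leaf-edge localization by logistic fitting, as named in the Methods above, follows: fit a sigmoid to the intensity profile across a leaf edge and take its midpoint as the edge position. The profile here is synthetic; the real system fits profiles extracted from the corrected camera images.

    ```python
    # Sub-pixel edge localization by fitting a logistic (sigmoid) step to
    # an intensity profile; the profile below is synthetic test data.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, x0, k, lo, hi):
        """Sigmoid step: lo on one side, hi on the other, edge at x0."""
        return lo + (hi - lo) / (1.0 + np.exp(-k * (x - x0)))

    x = np.arange(40, dtype=float)                 # pixel positions across the edge
    profile = logistic(x, 17.3, 1.5, 10.0, 200.0)  # synthetic edge at pixel 17.3
    profile += np.random.default_rng(0).normal(0, 2, x.size)

    p0 = [x.mean(), 1.0, profile.min(), profile.max()]  # rough initial guess
    (x0, k, lo, hi), _ = curve_fit(logistic, x, profile, p0=p0)
    print(f"leaf edge at pixel {x0:.2f}")               # recovers ~17.3, sub-pixel
    ```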

  2. Synthetic vision systems: operational considerations simulation experiment

    NASA Astrophysics Data System (ADS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-04-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents / accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  3. Wearable Improved Vision System for Color Vision Deficiency Correction

    PubMed Central

    Riccio, Daniel; Di Perna, Luigi; Sanniti Di Baja, Gabriella; De Nino, Maurizio; Rossi, Settimio; Testa, Francesco; Simonelli, Francesca; Frucci, Maria

    2017-01-01

    Color vision deficiency (CVD) is an extremely frequent vision impairment that compromises the ability to recognize colors. In order to improve color vision in a subject with CVD, we designed and developed a wearable improved vision system based on an augmented reality device. The system was validated in a clinical pilot study on 24 subjects with CVD (18 males and 6 females, aged 37.4 ± 14.2 years). The primary outcome was the improvement in the Ishihara Vision Test score with the correction proposed by our system. The Ishihara test score significantly improved (p = 0.03) from 5.8 ± 3.0 without correction to 14.8 ± 5.0 with correction. Almost all patients showed an improvement in color vision, as shown by the increased test scores. Moreover, with our system, 12 subjects (50%) passed the vision color test as normal vision subjects. The development and preliminary validation of the proposed platform confirm that a wearable augmented-reality device could be an effective aid to improve color vision in subjects with CVD. PMID:28507827
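
    The abstract does not give the authors' recoloring algorithm, so the sketch below shows a commonly cited alternative from the literature ("daltonization"): simulate a dichromat's view in LMS space, then redistribute the lost color contrast into channels the viewer can see. The matrices are widely reproduced values from the daltonization literature, not the authors' parameters.

    ```python
    # Daltonization sketch for protanopia: simulate the dichromat's
    # percept, then shift the lost red/green difference into visible
    # channels. Matrices are commonly cited literature values.
    import numpy as np

    RGB2LMS = np.array([[17.8824, 43.5161, 4.11935],
                        [3.45565, 27.1554, 3.86714],
                        [0.0299566, 0.184309, 1.46709]])
    LMS2RGB = np.linalg.inv(RGB2LMS)
    # Protanope simulation: the L response is reconstructed from M and S.
    SIM_PROTAN = np.array([[0.0, 2.02344, -2.52581],
                           [0.0, 1.0, 0.0],
                           [0.0, 0.0, 1.0]])
    ERR_SHIFT = np.array([[0.0, 0.0, 0.0],   # push the lost red/green
                          [0.7, 1.0, 0.0],   # difference into green...
                          [0.7, 0.0, 1.0]])  # ...and blue

    def daltonize_protan(rgb):
        """rgb: (..., 3) array in [0, 1]; returns a recolored image."""
        lms = rgb @ RGB2LMS.T
        seen = (lms @ SIM_PROTAN.T) @ LMS2RGB.T  # what a protanope perceives
        err = rgb - seen                         # the information that was lost
        return np.clip(rgb + err @ ERR_SHIFT.T, 0.0, 1.0)

    print(daltonize_protan(np.array([1.0, 0.0, 0.0])))  # pure red gets shifted
    ```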

  4. A comparative examination of neural circuit and brain patterning between the lamprey and amphioxus reveals the evolutionary origin of the vertebrate visual center.

    PubMed

    Suzuki, Daichi G; Murakami, Yasunori; Escriva, Hector; Wada, Hiroshi

    2015-02-01

    Vertebrates are equipped with so-called camera eyes, which provide them with image-forming vision. Vertebrate image-forming vision evolved independently from that of other animals and is regarded as a key innovation for enhancing predatory ability and ecological success. Evolutionary changes in the neural circuits, particularly the visual center, were central for the acquisition of image-forming vision. However, the evolutionary steps, from protochordates to jaw-less primitive vertebrates and then to jawed vertebrates, remain largely unknown. To bridge this gap, we present the detailed development of retinofugal projections in the lamprey, the neuroarchitecture in amphioxus, and the brain patterning in both animals. Both the lateral eye in larval lamprey and the frontal eye in amphioxus project to a light-detecting visual center in the caudal prosencephalic region marked by Pax6, which possibly represents the ancestral state of the chordate visual system. Our results indicate that the visual system of the larval lamprey represents an evolutionarily primitive state, forming a link from protochordates to vertebrates and providing a new perspective of brain evolution based on developmental mechanisms and neural functions. © 2014 Wiley Periodicals, Inc.

  5. Crossing the quality chasm in resource-limited settings.

    PubMed

    Maru, Duncan Smith-Rohrberg; Andrews, Jason; Schwarz, Dan; Schwarz, Ryan; Acharya, Bibhav; Ramaiya, Astha; Karelas, Gregory; Rajbhandari, Ruma; Mate, Kedar; Shilpakar, Sona

    2012-11-30

    Over the last decade, extensive scientific and policy innovations have begun to reduce the "quality chasm": the gulf between best practices and actual implementation that exists in resource-rich medical settings. While limited data exist, this chasm is likely to be equally acute and deadly in resource-limited areas. While health systems have begun to be scaled up in impoverished areas, scale-up is just the foundation necessary to deliver effective healthcare to the poor. This perspective piece describes a vision for a global quality improvement movement in resource-limited areas. The following action items are a first step toward achieving this vision: 1) revise global health investment mechanisms to value quality; 2) enhance human resources for improving health systems quality; 3) scale up data capacity; 4) deepen community accountability and engagement initiatives; 5) implement evidence-based quality improvement programs; 6) develop an implementation science research agenda.

  6. Operator vision aids for space teleoperation assembly and servicing

    NASA Technical Reports Server (NTRS)

    Brooks, Thurston L.; Ince, Ilhan; Lee, Greg

    1992-01-01

    This paper investigates concepts for visual operator aids required for effective telerobotic control. Operator visual aids, as defined here, mean any operational enhancement that improves man-machine control through the visual system. These concepts were derived as part of a study of vision issues for space teleoperation. Extensive literature on teleoperation, robotics, and human factors was surveyed to definitively specify appropriate requirements. This paper presents these visual aids in three general categories of camera/lighting functions, display enhancements, and operator cues. In the area of camera/lighting functions concepts are discussed for: (1) automatic end effector or task tracking; (2) novel camera designs; (3) computer-generated virtual camera views; (4) computer assisted camera/lighting placement; and (5) voice control. In the technology area of display aids, concepts are presented for: (1) zone displays, such as imminent collision or indexing limits; (2) predictive displays for temporal and spatial location; (3) stimulus-response reconciliation displays; (4) graphical display of depth cues such as 2-D symbolic depth, virtual views, and perspective depth; and (5) view enhancements through image processing and symbolic representations. Finally, operator visual cues (e.g., targets) that help identify size, distance, shape, orientation and location are discussed.

  7. Digital enhancement of multispectral MSS data for maximum image visibility

    NASA Technical Reports Server (NTRS)

    Algazi, V. R.

    1973-01-01

    A systematic approach to the enhancement of images has been developed. This approach exploits two principal features involved in the observation of images: the properties of human vision and the statistics of the images being observed. The rationale of the enhancement procedure is as follows: in the observation of some features of interest in an image, the range of objective luminance-chrominance values being displayed is generally limited and does not use the whole perceptual range of vision of the observer. The purpose of the enhancement technique is to expand and distort in a systematic way the grey scale values of each of the multispectral bands making up a color composite, to enhance the average visibility of the features being observed.
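
    A minimal version of the grey-scale expansion described above is sketched below: stretch each multispectral band so that its observed value range fills the full display range, band by band, before forming the color composite. The percentile clipping is an added assumption, used to resist outliers; the original work's exact mapping is not given in the abstract.

    ```python
    # Per-band contrast stretch for a multispectral composite; percentile
    # clipping is an assumption, not part of the original procedure.
    import numpy as np

    def stretch_band(band, lo_pct=2, hi_pct=98):
        """Map the [lo_pct, hi_pct] percentile range of one band onto [0, 255]."""
        lo, hi = np.percentile(band, [lo_pct, hi_pct])
        out = (band.astype(float) - lo) / max(hi - lo, 1e-12)
        return (np.clip(out, 0, 1) * 255).astype(np.uint8)

    def stretch_composite(bands):
        """bands: (H, W, 3) multispectral composite; returns an enhanced composite."""
        return np.dstack([stretch_band(bands[..., i]) for i in range(3)])
    ```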

  8. Visual wetness perception based on image color statistics.

    PubMed

    Sawayama, Masataka; Adelson, Edward H; Nishida, Shin'ya

    2017-05-01

    Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for the images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with the ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.
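
    An illustrative version of the paper's "wetness enhancing transformation" is sketched below: boost chromatic saturation and apply a darkening, gloss-like luminance tone curve. The specific gain and exponent are assumptions for illustration, not the authors' fitted values.

    ```python
    # Saturation boost plus darkening tone curve, illustrating the
    # described wetness enhancing transformation; parameters are guesses.
    import numpy as np
    from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

    def wetness_enhance(rgb, sat_gain=1.6, gamma=1.8):
        """rgb: (..., 3) floats in [0, 1]; returns a 'wetter'-looking image."""
        hsv = rgb_to_hsv(rgb)
        hsv[..., 1] = np.clip(hsv[..., 1] * sat_gain, 0, 1)  # more saturated
        hsv[..., 2] = hsv[..., 2] ** gamma                   # darker, glossier tone
        return hsv_to_rgb(hsv)
    ```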

  9. NASA Runway Incursion Prevention System (RIPS) Dallas-Fort Worth Demonstration Performance Analysis

    NASA Technical Reports Server (NTRS)

    Cassell, Rick; Evers, Carl; Esche, Jeff; Sleep, Benjamin; Jones, Denise R. (Technical Monitor)

    2002-01-01

    NASA's Aviation Safety Program Synthetic Vision System project conducted a Runway Incursion Prevention System (RIPS) flight test at the Dallas-Fort Worth International Airport in October 2000. The RIPS research system includes advanced displays, airport surveillance system, data links, positioning system, and alerting algorithms to provide pilots with enhanced situational awareness, supplemental guidance cues, a real-time display of traffic information, and warnings of runway incursions. This report describes the aircraft and ground based runway incursion alerting systems and traffic positioning systems (Automatic Dependent Surveillance - Broadcast (ADS-B) and Traffic Information Service - Broadcast (TIS-B)). A performance analysis of these systems is also presented.

  10. Visual Prostheses: The Enabling Technology to Give Sight to the Blind

    PubMed Central

    Maghami, Mohammad Hossein; Sodagar, Amir Masoud; Lashay, Alireza; Riazi-Esfahani, Hamid; Riazi-Esfahani, Mohammad

    2014-01-01

    Millions of patients are either slowly losing their vision or are already blind due to retinal degenerative diseases such as retinitis pigmentosa (RP) and age-related macular degeneration (AMD) or because of accidents or injuries. Employment of artificial means to treat extreme vision impairment has come closer to reality during the past few decades. Currently, many research groups work towards effective solutions to restore a rudimentary sense of vision to the blind. Aside from efforts to replace damaged parts of the retina with engineered living tissues or microfabricated photoreceptor arrays, implantable electronic microsystems, referred to as visual prostheses, are also being pursued as promising solutions to restore vision. From a functional point of view, visual prostheses receive image information from the outside world and deliver it to the natural visual system, enabling the subject to receive a meaningful perception of the image. This paper provides an overview of technical design aspects and clinical test results of visual prostheses, and highlights past and recent progress in realizing chronic high-resolution visual implants, as well as some technical challenges confronted when trying to enhance the functional quality of such devices. PMID:25709777

  11. EnVision+, a new dextran polymer-based signal enhancement technique for in situ hybridization (ISH).

    PubMed

    Wiedorn, K H; Goldmann, T; Henne, C; Kühl, H; Vollmer, E

    2001-09-01

    Seventy paraffin-embedded cervical biopsy specimens and condylomata were tested for the presence of human papillomavirus (HPV) by conventional in situ hybridization (ISH) and ISH with subsequent signal amplification. Signal amplification was performed either by a commercial biotinyl-tyramide-based detection system [GenPoint (GP)] or by the novel two-layer dextran polymer visualization system EnVision+ (EV), in which both EV-horseradish peroxidase (EV-HRP) and EV-alkaline phosphatase (EV-AP) were applied. We could demonstrate for the first time, that EV in combination with preceding ISH results in a considerable increase in signal intensity and sensitivity without loss of specificity compared to conventional ISH. Compared to GP, EV revealed a somewhat lower sensitivity, as measured by determination of the integrated optical density (IOD) of the positively stained cells. However, EV is easier to perform, requires a shorter assay time, and does not raise the background problems that may be encountered with biotinyl-tyramide-based amplification systems. (J Histochem Cytochem 49:1067-1071, 2001)

  12. Augmented reality and haptic interfaces for robot-assisted surgery.

    PubMed

    Yamamoto, Tomonori; Abolhassani, Niki; Jung, Sung; Okamura, Allison M; Judkins, Timothy N

    2012-03-01

    Current teleoperated robot-assisted minimally invasive surgical systems do not take full advantage of the potential performance enhancements offered by various forms of haptic feedback to the surgeon. Direct and graphical haptic feedback systems can be integrated with vision and robot control systems in order to provide haptic feedback to improve safety and tissue mechanical property identification. An interoperable interface for teleoperated robot-assisted minimally invasive surgery was developed to provide haptic feedback and augmented visual feedback using three-dimensional (3D) graphical overlays. The software framework consists of control and command software, robot plug-ins, image processing plug-ins and 3D surface reconstructions. The feasibility of the interface was demonstrated in two tasks performed with artificial tissue: palpation to detect hard lumps and surface tracing, using vision-based forbidden-region virtual fixtures to prevent the patient-side manipulator from entering unwanted regions of the workspace. The interoperable interface enables fast development and successful implementation of effective haptic feedback methods in teleoperation. Copyright © 2011 John Wiley & Sons, Ltd.
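
    A toy version of the forbidden-region virtual fixture mentioned above is sketched below: commanded tool positions that penetrate a vision-derived forbidden half-space are projected back onto its boundary, so the patient-side manipulator cannot enter it. The plane parameters are stand-ins for the reconstructed tissue surface, not values from the paper.

    ```python
    # Forbidden-region virtual fixture as a half-space constraint: clamp
    # commanded positions onto the allowed side of a boundary plane.
    import numpy as np

    n = np.array([0.0, 0.0, 1.0])  # plane normal (allowed side: n.p >= d)
    d = 0.02                       # plane offset (m), e.g. 2 cm above tissue

    def constrain(p_cmd):
        """Project a commanded position back onto the boundary if it penetrates."""
        depth = d - n @ p_cmd      # > 0 means the command enters the forbidden region
        if depth > 0:
            p_cmd = p_cmd + depth * n
        return p_cmd

    print(constrain(np.array([0.1, 0.0, -0.01])))  # -> [0.1, 0.0, 0.02]
    ```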

  13. Disparity channels in early vision

    PubMed Central

    Roe, AW; Parker, AJ; Born, RT; DeAngelis, GC

    2008-01-01

    The last decade has seen a dramatic increase in our knowledge of the neural basis of stereopsis. New cortical areas have been found to represent binocular disparities, new representations of disparity information (e.g., relative disparity signals) have been uncovered, the first topographic maps of disparity have been measured, and the first causal links between neural activity and depth perception have been established. Equally exciting is the finding that training and experience affects how signals are channeled through different brain areas, a flexibility that may be crucial for learning, plasticity, and recovery of function. The collective efforts of several laboratories have established stereo vision as one of the most productive model systems for elucidating the neural basis of perception. Much remains to be learned about how the disparity signals that are initially encoded in primary visual cortex are routed to and processed by extrastriate areas to mediate the diverse capacities of 3D vision that enhance our daily experience of the world. PMID:17978018

  14. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  15. Biological Basis For Computer Vision: Some Perspectives

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.

    1990-03-01

    Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels and in the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies are heavily dependent on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prostheses for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.
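    One of the retinal functions alluded to above, the center-surround receptive field, is commonly emulated with a difference-of-Gaussians filter. The sketch below is a minimal illustration; the sigma values are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(image, sigma_center=1.0, sigma_surround=3.0):
    """Excitatory center minus inhibitory surround (difference of Gaussians)."""
    img = image.astype(np.float64)
    return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)

# Example: the filter responds strongly to a bright spot on a dark background.
spot = np.zeros((32, 32))
spot[16, 16] = 1.0
response = center_surround(spot)
print(response[16, 16] > 0)  # True: strong positive response at the center
```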

  16. SOFIA Update and Science Vision

    NASA Technical Reports Server (NTRS)

    Smith, Kimberly

    2017-01-01

    I will present an overview of the SOFIA program, its science vision and upcoming plans for the observatory. The talk will feature several scientific highlights since full operations, along with summaries of planned science observations for this coming year, platform enhancements and new instrumentation.

  17. Perceptual Issues of Augmented and Virtual Environments

    DTIC Science & Technology

    2007-07-01

    distinct preference for one eye over the other. This is typically, quickly, and easily found through sighting tests (Peli, 1990). This eye dominance...been researched extensively and for different purposes. The entertainment industry has also experimented with synthetic smell production, in the...form of accompanying smells to enhance the experience of films (Lefcowitz, 2001, Somerson, 2001). In the Aroma-Rama and Smell-O-Vision systems

  18. Hypersonic airbreathing vehicle visions and enhancing technologies

    NASA Astrophysics Data System (ADS)

    Hunt, James L.; Lockwood, Mary Kae; Petley, Dennis H.; Pegg, Robert J.

    1997-01-01

    This paper addresses the visions for hypersonic airbreathing vehicles and the advanced technologies that forge and enhance the designs. The matrix includes space access vehicles (single-stage-to-orbit (SSTO), two-stage-to-orbit (2STO) and three-stage-to-orbit (3STO)) and endoatmospheric vehicles (airplanes—missiles are omitted). The characteristics, the performance potential, the technologies and the synergies will be discussed. A common design constraint is that all vehicles (space access and endoatmospheric) have enclosed payload bays.

  19. Infrared sensors and systems for enhanced vision/autonomous landing applications

    NASA Technical Reports Server (NTRS)

    Kerr, J. Richard

    1993-01-01

    There exists a large body of data spanning more than two decades, regarding the ability of infrared imagers to 'see' through fog, i.e., in Category III weather conditions. Much of this data is anecdotal, highly specialized, and/or proprietary. In order to determine the efficacy and cost effectiveness of these sensors under a variety of climatic/weather conditions, there is a need for systematic data spanning a significant range of slant-path scenarios. These data should include simultaneous video recordings at visible, midwave (3-5 microns), and longwave (8-12 microns) wavelengths, with airborne weather pods that include the capability of determining the fog droplet size distributions. Existing data tend to show that infrared is more effective than would be expected from analysis and modeling. It is particularly more effective for inland (radiation) fog as compared to coastal (advection) fog, although both of these archetypes are oversimplifications. In addition, as would be expected from droplet size vs wavelength considerations, longwave outperforms midwave, in many cases by very substantial margins. Longwave also benefits from the higher level of available thermal energy at ambient temperatures. The principal attraction of midwave sensors is that staring focal plane technology is available at attractive cost-performance levels. However, longwave technology such as that developed at FLIR Systems, Inc. (FSI), has achieved high performance in small, economical, reliable imagers utilizing serial-parallel scanning techniques. In addition, FSI has developed dual-waveband systems particularly suited for enhanced vision flight testing. These systems include a substantial, embedded processing capability which can perform video-rate image enhancement and multisensor fusion. This is achieved with proprietary algorithms and includes such operations as real-time histograms, convolutions, and fast Fourier transforms.

  20. Personal vision: enhancing work engagement and the retention of women in the engineering profession.

    PubMed

    Buse, Kathleen R; Bilimoria, Diana

    2014-01-01

    This study examines how personal vision enhances work engagement and the retention of women in the engineering profession. Using a mixed method approach to understand the factors related to the retention of women in the engineering profession, we first interviewed women who persisted and women who opted out of the profession (Buse and Bilimoria, 2014). In these rich stories, we found that women who persisted had a personal vision that included their profession, and that this personal vision enabled them to overcome the bias, barriers and discrimination in the engineering workplace. To validate this finding on a larger population, we developed a scale to measure one's personal vision conceptualized as the ideal self (Boyatzis and Akrivou, 2006). The measure was tested in a pilot study and then used in a study of 495 women with engineering degrees. The findings validate that the ideal self comprises self-efficacy, hope, optimism and core identity. For these women, the ideal self directly impacts work engagement and work engagement directly impacts career commitment to engineering. The findings add to extant theory related to the role of personal vision and intentional change theory. From a practical perspective, these findings will aid efforts to retain women in engineering and other STEM professions.

  1. Personal vision: enhancing work engagement and the retention of women in the engineering profession

    PubMed Central

    Buse, Kathleen R.; Bilimoria, Diana

    2014-01-01

    This study examines how personal vision enhances work engagement and the retention of women in the engineering profession. Using a mixed method approach to understand the factors related to the retention of women in the engineering profession, we first interviewed women who persisted and women who opted out of the profession (Buse and Bilimoria, 2014). In these rich stories, we found that women who persisted had a personal vision that included their profession, and that this personal vision enabled them to overcome the bias, barriers and discrimination in the engineering workplace. To validate this finding on a larger population, we developed a scale to measure one's personal vision conceptualized as the ideal self (Boyatzis and Akrivou, 2006). The measure was tested in a pilot study and then used in a study of 495 women with engineering degrees. The findings validate that the ideal self comprises self-efficacy, hope, optimism and core identity. For these women, the ideal self directly impacts work engagement and work engagement directly impacts career commitment to engineering. The findings add to extant theory related to the role of personal vision and intentional change theory. From a practical perspective, these findings will aid efforts to retain women in engineering and other STEM professions. PMID:25538652

  2. The accomplishments of the global initiative VISION 2020: The Right to Sight and the focus for the next 8 years of the campaign

    PubMed Central

    Ackland, Peter

    2012-01-01

    In the first 12 years of VISION 2020, sound programmatic approaches have been developed that are capable of delivering equitable eye health services to even the most remote and impoverished communities. A body of evidence around the economic arguments for investment in eye health has been developed that has fuelled successful advocacy work, resulting in supportive high-level policy statements. More than 100 national plans to achieve the elimination of avoidable blindness have been developed, and some notable contributions have been made from the corporate and government sectors to resource eye health programs. Good progress has been made to control infectious blinding diseases, and at the very least there is anecdotal evidence to suggest that the global increase in the prevalence of blindness and visual impairment has been reversed in recent years, despite the ever increasing and more elderly global population. However, if we are to achieve the goal of VISION 2020, we require a considerable scaling up of current efforts; this will depend on our future success in two key areas: (i) successful advocacy and engagement at the individual country level to secure significantly enhanced national government commitment to financing their own VISION 2020 plans; and (ii) a new approach to VISION 2020 thinking that integrates eye health into health system development and develops new partnerships with wider health development initiatives. PMID:22944746

  3. Progress in computer vision.

    NASA Astrophysics Data System (ADS)

    Jain, A. K.; Dorai, C.

    Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.

  4. 14 CFR 382.121 - What mobility aids and other assistive devices may passengers with a disability bring into the...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... with impaired vision), crutches, and walkers; and (3) Other assistive devices for stowage or use within... syringes or auto-injectors, vision-enhancing devices, and POCs, ventilators and respirators that use non...

  5. A Vision for the Next Ten Years for Integrated Ocean Observing Data

    NASA Astrophysics Data System (ADS)

    Willis, Z. S.

    2012-12-01

    Ocean observing has come a long way since the Ocean Sciences Decadal Committee met over a decade ago. Since then, our use of the ocean and coast and their vast resources has increased substantially - with increased shipping, fishing, offshore energy development and recreational boating. That increased use has also spearheaded advances in observing systems. Cutting-edge autonomous and remotely operated vehicles scour the surface and travel to depths collecting essential biogeochemical data for better managing our marine resources. Satellites enable the global mapping of practically every physical ocean variable imaginable. A nationally-integrated coastal network of high-frequency radars lines the borders of the U.S., feeding critical navigation, response, and environmental information continuously. Federal, academic, and industry communities have joined in unique partnerships at regional, national, and global levels to address common challenges to monitoring our ocean. The 2002 Workshop, Building Consensus: Toward an Integrated and Sustained Ocean Observing System, laid the framework for the current United States Integrated Ocean Observing System (U.S. IOOS). Ten years later, U.S. IOOS has moved from concept to reality, though much work remains to meet the nation's ocean observing needs. Today, new research and technologies, evolving users and user requirements, economic and funding challenges, and diverse institutional mandates all influence the future growth and implementation of U.S. IOOS. In light of this new environment, the Interagency Ocean Observation Committee (IOOC) will host the 2012 Integrated Ocean Observing System Summit in November 2012, providing a forum to develop a comprehensive ocean observing vision for the next decade, utilizing the knowledge and expertise gained by the IOOS-wide community over the past ten years. This effort will bring together ocean observing stakeholders at the regional, national, and global levels to address the following challenges going forward: enhancing information delivery and integration to save lives, enhance the economy and protect the environment; disseminating seamless information across regional and national boundaries; and harnessing technological innovations for new frontiers and opportunities. The anticipated outcomes of the IOOS Summit include a highlight of the past decade of progress towards an integrated system, revisiting and updating user requirements, an assessment of existing observing system capabilities and gaps, identification of integration challenges and opportunities, and establishment of a U.S. IOOS-community-wide vision for the next 10 years of ocean observing. Most important will be the execution of priorities identified before and during the Summit, carrying them forward into a new decade of an enhanced Integrated and Sustained Ocean Observing System.

  6. A summary of image segmentation techniques

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly

    1993-01-01

    Machine vision systems are often considered to be composed of two subsystems: low-level vision and high-level vision. Low-level vision consists primarily of image processing operations performed on the input image to produce another image with more favorable characteristics. These operations may yield images with reduced noise or cause certain features of the image to be emphasized (such as edges). High-level vision includes object recognition and, at the highest level, scene interpretation. The bridge between these two subsystems is the segmentation system. Through segmentation, the enhanced input image is mapped into a description involving regions with common features which can be used by the higher level vision tasks. There is no unified theory of image segmentation; instead, image segmentation techniques are basically ad hoc and differ mostly in the way they emphasize one or more of the desired properties of an ideal segmenter and in the way they balance and compromise one desired property against another. These techniques can be categorized into a number of different groups including local vs. global, parallel vs. sequential, contextual vs. noncontextual, interactive vs. automatic. In this paper, we categorize the schemes into three main groups: pixel-based, edge-based, and region-based. Pixel-based segmentation schemes classify pixels based solely on their gray levels. Edge-based schemes first detect local discontinuities (edges) and then use that information to separate the image into regions. Finally, region-based schemes start with a seed pixel (or group of pixels) and then grow or split the seed until the original image is composed of only homogeneous regions. Because there are a number of survey papers available, we will not discuss all segmentation schemes. Rather than a survey, we take the approach of a detailed overview. We focus only on the more common approaches in order to give the reader a flavor for the variety of techniques available yet present enough details to facilitate implementation and experimentation.
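    The pixel-based and region-based categories can be illustrated in a few lines of code. The sketch below contrasts simple gray-level thresholding with region growing from a seed pixel; the threshold, tolerance, and 4-connected neighborhood are illustrative assumptions.

```python
import numpy as np
from collections import deque

def threshold_segment(gray, t=128):
    """Pixel-based: classify each pixel by its gray level alone."""
    return (gray >= t).astype(np.uint8)

def region_grow(gray, seed, tol=10):
    """Region-based: grow from a seed while neighbors stay homogeneous."""
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.uint8)
    ref = float(gray[seed])
    queue, out[seed] = deque([seed]), 1
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not out[ny, nx] \
                    and abs(float(gray[ny, nx]) - ref) <= tol:
                out[ny, nx] = 1
                queue.append((ny, nx))
    return out

# Example: grow a region over a 4x4 bright square.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200
print(region_grow(img, (3, 3)).sum())  # 16 pixels in the grown region
```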

  7. Improving Vision

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Many people are familiar with the popular science fiction series Star Trek: The Next Generation, a show featuring a blind character named Geordi La Forge, whose visor-like glasses enable him to see. What many people do not know is that a product very similar to Geordi's glasses is available to assist people with vision conditions, and a NASA engineer's expertise contributed to its development. The JORDY(trademark) (Joint Optical Reflective Display) device, designed and manufactured by a privately-held medical device company known as Enhanced Vision, enables people with low vision to read, write, and watch television. Low vision, which includes macular degeneration, diabetic retinopathy, and glaucoma, describes eyesight that is 20/70 or worse, and cannot be fully corrected with conventional glasses.

  8. Information theory analysis of sensor-array imaging systems for computer vision

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.; Self, M. O.

    1983-01-01

    Information theory is used to assess the performance of sensor-array imaging systems, with emphasis on the performance obtained with image-plane signal processing. By electronically controlling the spatial response of the imaging system, as suggested by the mechanism of human vision, it is possible to trade off edge enhancement for sensitivity, increase dynamic range, and reduce data transmission. Computational results show that signal information density varies little with large variations in the statistical properties of random radiance fields; most information (generally about 85 to 95 percent) is contained in the signal intensity transitions rather than levels; and performance is optimized when the OTF of the imaging system is nearly limited to the sampling passband to minimize aliasing at the cost of blurring, and the SNR is very high to permit the retrieval of small spatial detail from the extensively blurred signal. Shading the lens aperture transmittance to increase depth of field and using a regular hexagonal sensor array instead of a square lattice to decrease sensitivity to edge orientation also improve the signal information density by up to about 30 percent at high SNRs.
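    The levels-versus-transitions comparison can be illustrated by estimating the entropy of pixel intensities against that of horizontal intensity differences. The sketch below is a toy estimate on a synthetic radiance field; the histogram binning is an assumption, and this is not the paper's information-density model.

```python
import numpy as np

def entropy(values, bins=256):
    """Histogram-based entropy estimate in bits per sample."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
image = rng.normal(128, 30, (64, 64))               # hypothetical radiance field
levels = entropy(image.ravel())                     # entropy of intensity levels
transitions = entropy(np.diff(image, axis=1).ravel())  # entropy of transitions
print(levels, transitions)
```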

  9. Hypersonic airbreathing vehicle visions and enhancing technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, J.L.; Lockwood, M.K.; Petley, D.H.

    1997-01-01

    This paper addresses the visions for hypersonic airbreathing vehicles and the advanced technologies that forge and enhance the designs. The matrix includes space access vehicles (single-stage-to-orbit (SSTO), two-stage-to-orbit (2STO) and three-stage-to-orbit (3STO)) and endoatmospheric vehicles (airplanes; missiles are omitted). The characteristics, the performance potential, the technologies and the synergies will be discussed. A common design constraint is that all vehicles (space access and endoatmospheric) have enclosed payload bays. © 1997 American Institute of Physics.

  10. A real-time monitoring system for night glare protection

    NASA Astrophysics Data System (ADS)

    Ma, Jun; Ni, Xuxiang

    2010-11-01

    When capturing a dark scene that contains a very bright object, a monitoring camera saturates in some regions, and details are lost in and near these saturated regions because of glare. This work aims at developing a real-time night monitoring system that decreases the influence of glare and recovers more detail from an ordinary camera when exposing a high-contrast scene, such as a car with its headlights on at night. The system is made up of a spatial light modulator (liquid crystal on silicon: LCoS), an image sensor (CCD), an imaging lens, and a DSP. The LCoS, a reflective liquid crystal device, can digitally modulate the intensity of the reflected light at every pixel. Through the modulation function of the LCoS, the CCD is exposed region by region. Under the control of the DSP, the light intensity is reduced to a minimum in the glare regions, and in the other regions it is modulated by negative feedback based on PID theory, so that more details of the object are imaged on the CCD and glare protection of the monitoring system is achieved. In the experiments, the feedback is controlled by an embedded system based on a TI DM642. Experiments show that this feedback modulation method not only reduces glare to improve image quality, but also enhances the dynamic range of the image. High-quality, high-dynamic-range images are captured in real time at 30 Hz. The modulation depth of the LCoS determines how much glare can be removed.
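    A minimal sketch of the per-region negative-feedback idea follows: each LCoS region runs a PID loop that drives the region's mean image level toward a target by adjusting its transmittance. The gains, target level, and clipping are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

class RegionPID:
    """One PID controller per LCoS region, regulating mean image brightness."""

    def __init__(self, kp=0.01, ki=0.002, kd=0.001, target=120.0):
        self.kp, self.ki, self.kd, self.target = kp, ki, kd, target
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, mean_level):
        """Return a new transmittance in [0, 1] for this region."""
        err = mean_level - self.target   # positive when the region is too bright
        self.integral += err
        deriv = err - self.prev_err
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        return float(np.clip(1.0 - u, 0.0, 1.0))  # brighter -> lower transmittance

pid = RegionPID()
print(pid.update(250.0))  # a glare region: transmittance is driven down
```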

  11. VLSI chips for vision-based vehicle guidance

    NASA Astrophysics Data System (ADS)

    Masaki, Ichiro

    1994-02-01

    Sensor-based vehicle guidance systems are gathering rapidly increasing interest because of their potential for increasing safety, convenience, environmental friendliness, and traffic efficiency. Examples of applications include intelligent cruise control, lane following, collision warning, and collision avoidance. This paper reviews the research trends in vision-based vehicle guidance with an emphasis on VLSI chip implementations of the vision systems. As an example of VLSI chips for vision-based vehicle guidance, a stereo vision system is described in detail.

  12. Employment Retention after Vision Loss: Intensive Case Studies.

    ERIC Educational Resources Information Center

    Crudden, Adele; Fireison, Cara K.

    This study examined the lives of 10 individuals with blindness or severe visual impairment who maintained competitive employment despite their vision loss. The study was designed to provide information regarding the personal characteristics and current practices related to work environment alterations which enhance competitive employment…

  13. High Tech Aids Low Vision: A Review of Image Processing for the Visually Impaired.

    PubMed

    Moshtael, Howard; Aslam, Tariq; Underwood, Ian; Dhillon, Baljean

    2015-08-01

    Recent advances in digital image processing provide promising methods for maximizing the residual vision of the visually impaired. This paper seeks to introduce this field to the readership and describe its current state as found in the literature. A systematic search revealed 37 studies that measure the value of image processing techniques for subjects with low vision. The techniques used are categorized according to their effect and the principal findings are summarized. The majority of participants preferred enhanced images over the original for a wide range of enhancement types. Adapting the contrast and spatial frequency content often improved performance at object recognition and reading speed, as did techniques that attenuate the image background and a technique that induced jitter. A lack of consistency in preference and performance measures was found, as well as a lack of independent studies. Nevertheless, the promising results should encourage further research in order to allow their widespread use in low-vision aids.
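    As an example of the contrast-adaptation family of enhancements reviewed above, the sketch below applies OpenCV's contrast-limited adaptive histogram equalization (CLAHE) to a grayscale page image. The clip limit, tile size, and file names are illustrative assumptions.

```python
import cv2

gray = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file
assert gray is not None, "replace page.png with a real image path"
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(gray)                           # locally equalized contrast
cv2.imwrite("page_enhanced.png", enhanced)
```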

  14. Vision Systems with the Human in the Loop

    NASA Astrophysics Data System (ADS)

    Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard

    2005-12-01

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval, the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After we will have discussed adaptive content-based image retrieval and object and action recognition in an office environment, the issue of assessing cognitive systems will be raised. Experiences from psychologically evaluated human-machine interactions will be reported and the promising potential of psychologically-based usability experiments will be stressed.

  15. 3D vision system assessment

    NASA Astrophysics Data System (ADS)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  16. The visually impaired patient.

    PubMed

    Rosenberg, Eric A; Sperazza, Laura C

    2008-05-15

    Blindness or low vision affects more than 3 million Americans 40 years and older, and this number is projected to reach 5.5 million by 2020. In addition to treating a patient's vision loss and comorbid medical issues, physicians must be aware of the physical limitations and social issues associated with vision loss to optimize health and independent living for the visually impaired patient. In the United States, the four most prevalent etiologies of vision loss in persons 40 years and older are age-related macular degeneration, cataracts, glaucoma, and diabetic retinopathy. Exudative macular degeneration is treated with laser therapy, and progression of nonexudative macular degeneration in its advanced stages may be slowed with high-dose antioxidant and zinc regimens. The value of screening for glaucoma is uncertain; management of this condition relies on topical ocular medications. Cataract symptoms include decreased visual acuity, decreased color perception, decreased contrast sensitivity, and glare disability. Lifestyle and environmental interventions can improve function in patients with cataracts, but surgery is commonly performed if the condition worsens. Diabetic retinopathy responds to tight glucose control, and severe cases marked by macular edema are treated with laser photocoagulation. Vision-enhancing devices can help magnify objects, and nonoptical interventions include special filters and enhanced lighting.

  17. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    PubMed

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
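    The pixel-by-pixel matching stage can be illustrated with a far simpler stand-in for the authors' dynamic-programming formulation: winner-take-all SAD block matching along one rectified scanline. The window size and disparity range below are assumptions.

```python
import numpy as np

def scanline_disparity(left, right, row, max_disp=32, half_win=3):
    """Winner-take-all SAD matching for one scanline of a rectified pair.

    The row must be at least half_win pixels from the image border.
    """
    h, w = left.shape
    disp = np.zeros(w, dtype=np.int32)
    for x in range(half_win + max_disp, w - half_win):
        patch = left[row - half_win:row + half_win + 1,
                     x - half_win:x + half_win + 1].astype(np.int32)
        costs = [np.abs(patch -
                        right[row - half_win:row + half_win + 1,
                              x - d - half_win:x - d + half_win + 1]
                        .astype(np.int32)).sum()
                 for d in range(max_disp)]
        disp[x] = int(np.argmin(costs))  # lowest-cost disparity wins
    return disp

# Example: a right image shifted by 5 pixels yields disparity 5.
rng = np.random.default_rng(1)
left = rng.integers(0, 256, (16, 80)).astype(np.uint8)
right = np.roll(left, -5, axis=1)
print(scanline_disparity(left, right, row=8)[40])  # 5
```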

  18. Visual Detection and Tracking System for a Spherical Amphibious Robot

    PubMed Central

    Guo, Shuxiang; Pan, Shaowu; Shi, Liwei; Guo, Ping; He, Yanlin; Tang, Kun

    2017-01-01

    With the goal of supporting close-range observation tasks of a spherical amphibious robot, such as ecological observations and intelligent surveillance, a moving target detection and tracking system was designed and implemented in this study. Given the restrictions presented by the amphibious environment and the small-sized spherical amphibious robot, an industrial camera and vision algorithms using adaptive appearance models were adopted to construct the proposed system. To handle the problem of light scattering and absorption in the underwater environment, the multi-scale retinex with color restoration algorithm was used for image enhancement. Given the environmental disturbances in practical amphibious scenarios, the Gaussian mixture model was used to detect moving targets entering the field of view of the robot. A fast compressive tracker with a Kalman prediction mechanism was used to track the specified target. Considering the limited load space and the unique mechanical structure of the robot, the proposed vision system was fabricated with a low power system-on-chip using an asymmetric and heterogeneous computing architecture. Experimental results confirmed the validity and high efficiency of the proposed system. The design presented in this paper is able to meet future demands of spherical amphibious robots in biological monitoring and multi-robot cooperation. PMID:28420134
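    A minimal sketch of the detection-plus-prediction stages described above follows, using OpenCV's MOG2 background subtractor and a constant-velocity Kalman filter; the fast compressive tracker itself is omitted, and the video path is hypothetical.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("underwater.avi")           # hypothetical input video
mog2 = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

kf = cv2.KalmanFilter(4, 2)                        # state: x, y, vx, vy
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = mog2.apply(frame)                         # moving-target foreground mask
    prediction = kf.predict()                      # predicted target position
    ys, xs = np.nonzero(fg)
    if len(xs):                                    # mask centroid as measurement
        kf.correct(np.array([[xs.mean()], [ys.mean()]], dtype=np.float32))
```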

  19. Visual Detection and Tracking System for a Spherical Amphibious Robot.

    PubMed

    Guo, Shuxiang; Pan, Shaowu; Shi, Liwei; Guo, Ping; He, Yanlin; Tang, Kun

    2017-04-15

    With the goal of supporting close-range observation tasks of a spherical amphibious robot, such as ecological observations and intelligent surveillance, a moving target detection and tracking system was designed and implemented in this study. Given the restrictions presented by the amphibious environment and the small-sized spherical amphibious robot, an industrial camera and vision algorithms using adaptive appearance models were adopted to construct the proposed system. To handle the problem of light scattering and absorption in the underwater environment, the multi-scale retinex with color restoration algorithm was used for image enhancement. Given the environmental disturbances in practical amphibious scenarios, the Gaussian mixture model was used to detect moving targets entering the field of view of the robot. A fast compressive tracker with a Kalman prediction mechanism was used to track the specified target. Considering the limited load space and the unique mechanical structure of the robot, the proposed vision system was fabricated with a low power system-on-chip using an asymmetric and heterogeneous computing architecture. Experimental results confirmed the validity and high efficiency of the proposed system. The design presented in this paper is able to meet future demands of spherical amphibious robots in biological monitoring and multi-robot cooperation.

  20. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  1. Biometrics: Facing Up to Terrorism

    DTIC Science & Technology

    2001-10-01

    ment committee appointed by Secretary of Transportation Norman Y. Mineta to review airport security measures will recommend that facial recognition...on the Role Facial Recognition Technology Can Play in Enhancing Airport Security." Joseph Atick, the CEO of Visionics, testified before the government...system at a U.S. airport. This deployment is believed to be the first-in-the-nation use of face-recognition technology for airport security. The sys

  2. An Application of the L2 Motivational Self System to Motivate Elementary School English Learners in Singapore

    ERIC Educational Resources Information Center

    Magid, Michael

    2014-01-01

    I will describe a program that I developed to motivate grade five elementary school learners of English in Singapore to put more time and effort into learning English by enhancing their vision of their Ideal second language (L2) self and making their goals for learning English more clear and specific. The Ideal L2 self is the representation of the…

  3. Open and Distance Education Systems: Do They Enhance Graduates' Soft Skills? The Results from 2009 Universitas Terbuka Tracer Study

    ERIC Educational Resources Information Center

    Ratnaningsih, Dewi Juliah

    2013-01-01

    The vision and mission of Universitas Terbuka (UT) is to become a highly qualified open and distance education institution and to provide higher education access to all communities. Graduates of UT are expected to acquire adequate knowledge, hard skills and soft skills. Soft skills play important roles in the world of work. The aim of this article…

  4. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    PubMed

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.

  5. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    PubMed Central

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187
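    The core intuition, matching events from the two sensors by temporal coincidence along epipolar lines, can be sketched in a few lines. The data layout, time window, and disparity range below are illustrative assumptions, not the spiking-network model itself.

```python
import numpy as np

def coincidence_disparities(left_events, right_events, dt=1e-3, max_disp=30):
    """Match (t, x, y) events across sensors; the column offset is the disparity."""
    disparities = []
    for tl, xl, yl in left_events:
        for tr, xr, yr in right_events:
            # same row (epipolar constraint), near-simultaneous, plausible offset
            if yr == yl and abs(tr - tl) <= dt and 0 <= xl - xr <= max_disp:
                disparities.append(xl - xr)
                break
    return disparities

left = [(0.0000, 40, 10)]
right = [(0.0004, 32, 10)]
print(coincidence_disparities(left, right))  # [8]
```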

  6. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems.

    PubMed

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-12

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  7. Knowledge-based vision and simple visual machines.

    PubMed Central

    Cliff, D; Noble, J

    1997-01-01

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684

  8. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    NASA Astrophysics Data System (ADS)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  9. Sensor-Aware Recognition and Tracking for Wide-Area Augmented Reality on Mobile Phones

    PubMed Central

    Chen, Jing; Cao, Ruochen; Wang, Yongtian

    2015-01-01

    Wide-area registration in outdoor environments on mobile phones is a challenging task in mobile augmented reality fields. We present a sensor-aware large-scale outdoor augmented reality system for recognition and tracking on mobile phones. GPS and gravity information is used to improve the VLAD performance for recognition. A sensor-aware VLAD algorithm, which adapts itself to scenes at different scales, is utilized to recognize complex scenes. Because vision-based registration algorithms alone are fragile and tend to drift, data coming from inertial sensors and vision are fused together by an extended Kalman filter (EKF) to achieve considerable improvements in tracking stability and robustness. Experimental results show that our method greatly enhances the recognition rate and eliminates the tracking jitters. PMID:26690439
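    For reference, plain VLAD encoding (without the GPS/gravity weighting described above) sums the residuals of local descriptors to their nearest codebook centers and normalizes the result. The codebook size below is an assumption.

```python
import numpy as np

def vlad(descriptors, centers):
    """descriptors: (n, d) local features; centers: (k, d) visual words."""
    k, d = centers.shape
    assign = np.argmin(
        ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    v = np.zeros((k, d))
    for i, c in enumerate(assign):
        v[c] += descriptors[i] - centers[c]  # accumulate residuals per word
    v = np.sign(v) * np.sqrt(np.abs(v))       # power (signed square-root) norm
    norm = np.linalg.norm(v)
    return (v / norm).ravel() if norm else v.ravel()

rng = np.random.default_rng(0)
code = vlad(rng.normal(size=(100, 16)), rng.normal(size=(8, 16)))
print(code.shape)  # (128,) = k * d
```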

  10. Sensor-Aware Recognition and Tracking for Wide-Area Augmented Reality on Mobile Phones.

    PubMed

    Chen, Jing; Cao, Ruochen; Wang, Yongtian

    2015-12-10

    Wide-area registration in outdoor environments on mobile phones is a challenging task in mobile augmented reality fields. We present a sensor-aware large-scale outdoor augmented reality system for recognition and tracking on mobile phones. GPS and gravity information is used to improve the VLAD performance for recognition. A sensor-aware VLAD algorithm, which adapts itself to scenes at different scales, is utilized to recognize complex scenes. Because vision-based registration algorithms alone are fragile and tend to drift, data coming from inertial sensors and vision are fused together by an extended Kalman filter (EKF) to achieve considerable improvements in tracking stability and robustness. Experimental results show that our method greatly enhances the recognition rate and eliminates the tracking jitters.

  11. 3D morphology reconstruction using linear array CCD binocular stereo vision imaging system

    NASA Astrophysics Data System (ADS)

    Pan, Yu; Wang, Jinjiang

    2018-01-01

    A binocular vision imaging system, which has a small field of view, cannot reconstruct the 3-D shape of a dynamic object. We developed a linear array CCD binocular vision imaging system, which uses different calibration and reconstruction methods. Building on the binocular vision imaging system, the linear array CCD binocular vision imaging system, which has a wider field of view, can reconstruct the 3-D morphology of objects in continuous motion, and the results are accurate. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching and reconstruction stages. The system consists of two linear array cameras placed in a special arrangement and a horizontally moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, and the results are matched and 3-D reconstructed. The linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, which is of great significance for measuring the 3-D morphology of moving objects.
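    The final depth-recovery step for any calibrated, rectified binocular pair reduces to triangulation. The sketch below shows the standard Z = fB/d relation with illustrative focal length and baseline values; it is not the authors' linear-array calibration procedure.

```python
def stereo_depth(disparity_px, focal_px=1200.0, baseline_m=0.25):
    """Depth Z = f * B / d for a rectified stereo pair; disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

print(stereo_depth(30.0))  # 10.0 m at 30 px disparity
```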

  12. Vision-Enhancing Interventions in Nursing Home Residents and Their Short-Term Impact on Physical and Cognitive Function

    PubMed Central

    Elliott, Amanda F.; McGwin, Gerald; Owsley, Cynthia

    2009-01-01

    OBJECTIVE To evaluate the effect of vision-enhancing interventions (i.e., cataract surgery or refractive error correction) on physical function and cognitive status in nursing home residents. DESIGN Longitudinal cohort study. SETTING Seventeen nursing homes in Birmingham, AL. PARTICIPANTS A total of 187 English-speaking older adults (>55 years of age). INTERVENTION Participants took part in one of two vision-enhancing interventions: cataract surgery or refractive error correction. Each group was compared against a control group (persons eligible for but who declined cataract surgery, or who received delayed correction of refractive error). MEASUREMENTS Physical function (i.e., ability to perform activities of daily living and mobility) was assessed with a series of self-report and certified nursing assistant ratings at baseline and at 2 months for the refractive error correction group, and at 4 months for the cataract surgery group. The Mini Mental State Exam was also administered. RESULTS No significant differences existed within or between groups from baseline to follow-up on any of the measures of physical function. Mental status scores significantly declined from baseline to follow-up for both the immediate (p = 0.05) and delayed (p < 0.02) refractive error correction groups and for the cataract surgery control group (p = 0.05). CONCLUSION Vision-enhancing interventions did not lead to short-term improvements in physical functioning or cognitive status in this sample of elderly nursing home residents. PMID:19170783

  13. A methodology for coupling a visual enhancement device to human visual attention

    NASA Astrophysics Data System (ADS)

    Todorovic, Aleksandar; Black, John A., Jr.; Panchanathan, Sethuraman

    2009-02-01

    The Human Variation Model views disability as simply "an extension of the natural physical, social, and cultural variability of mankind." Given this human variation, it can be difficult to distinguish between a prosthetic device such as a pair of glasses (which extends limited visual abilities into the "normal" range) and a visual enhancement device such as a pair of binoculars (which extends visual abilities beyond the "normal" range). Indeed, there is no inherent reason why the design of visual prosthetic devices should be limited to just providing "normal" vision. One obvious enhancement to human vision would be the ability to visually "zoom" in on objects that are of particular interest to the viewer. Indeed, it could be argued that humans already have a limited zoom capability, which is provided by their high-resolution foveal vision. However, humans still find additional zooming useful, as evidenced by their purchases of binoculars equipped with mechanized zoom features. The fact that these zoom features are manually controlled raises two questions: (1) Could a visual enhancement device be developed to monitor attention and control visual zoom automatically? (2) If such a device were developed, would its use be experienced by users as a simple extension of their natural vision? This paper details the results of work with two research platforms called the Remote Visual Explorer (ReVEx) and the Interactive Visual Explorer (InVEx) that were developed specifically to answer these two questions.

  14. The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals.

    PubMed

    Targher, Stefano; Micciolo, Rocco; Occelli, Valeria; Zampini, Massimiliano

    2017-12-01

    Recent findings have shown that sounds improve visual detection in low vision individuals when the auditory and visual stimuli are presented simultaneously and from the same spatial position. The present study investigates the temporal aspects of the audiovisual enhancement effect previously reported. Low vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously or before the visual stimulus (i.e., SOAs 0, 100, 250, 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all the conditions. In the second experiment, the sound was either synchronized with, or randomly preceded/lagged behind the visual stimulus (i.e., SOAs 0, ± 250, ± 400 ms). The visual detection enhancement was reduced in magnitude and limited only to the synchronous condition and to the condition in which the sound stimulus was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study seems to suggest that audiovisual interaction in low vision individuals is highly modulated by top-down mechanisms.

  15. Deep learning-based artificial vision for grasp classification in myoelectric hands.

    PubMed

    Ghazaei, Ghazal; Alameer, Ali; Degenaar, Patrick; Morgan, Graham; Nazarpour, Kianoush

    2017-06-01

    Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with regards to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) structure was trained with images of over 500 graspable objects. For each object, 72 images, at 5° intervals, were available. Objects were categorised into four grasp classes, namely: pinch, tripod, palmar wrist neutral and palmar wrist pronated. The CNN setting was first tuned and tested offline and then in realtime with objects or object views that were not included in the training set. The classification accuracy in the offline tests reached 85% for the seen and 75% for the novel objects; reflecting the generalisability of grasp classification. We then implemented the proposed framework in realtime on a standard laptop computer and achieved an overall score of 84% in classifying a set of novel as well as seen but randomly-rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb Ultra™ prosthetic hand and a motion control™ prosthetic wrist; augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success of up to 88%. In addition, we show that with training, subjects' performance improved in terms of time required to accomplish a block of 24 trials despite a decreasing level of visual feedback. The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep-learning based computer vision systems can enhance the grip functionality of myoelectric hands considerably.
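    A minimal sketch of a four-way grasp-class CNN follows in PyTorch, echoing the pinch / tripod / palmar-wrist-neutral / palmar-wrist-pronated categories above; the architecture and input size are illustrative and much smaller than the authors' network.

```python
import torch
import torch.nn as nn

class GraspNet(nn.Module):
    """Tiny CNN mapping an RGB image to four grasp-class logits."""

    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)))
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One 64x64 RGB image -> logits over the four grasp classes.
logits = GraspNet()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 4])
```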

  16. Deep learning-based artificial vision for grasp classification in myoelectric hands

    NASA Astrophysics Data System (ADS)

    Ghazaei, Ghazal; Alameer, Ali; Degenaar, Patrick; Morgan, Graham; Nazarpour, Kianoush

    2017-06-01

    Objective. Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand. Approach. We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with regards to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) structure was trained with images of over 500 graspable objects. For each object, 72 images, at 5° intervals, were available. Objects were categorised into four grasp classes, namely: pinch, tripod, palmar wrist neutral and palmar wrist pronated. The CNN setting was first tuned and tested offline and then in realtime with objects or object views that were not included in the training set. Main results. The classification accuracy in the offline tests reached 85% for the seen and 75% for the novel objects; reflecting the generalisability of grasp classification. We then implemented the proposed framework in realtime on a standard laptop computer and achieved an overall score of 84% in classifying a set of novel as well as seen but randomly-rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb Ultra™ prosthetic hand and a motion control™ prosthetic wrist; augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success of up to 88%. In addition, we show that with training, subjects' performance improved in terms of time required to accomplish a block of 24 trials despite a decreasing level of visual feedback. Significance. The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep-learning based computer vision systems can enhance the grip functionality of myoelectric hands considerably.

  17. The reliability of a VISION COACH task as a measure of psychomotor skills.

    PubMed

    Xi, Yubin; Rosopa, Patrick J; Mossey, Mary; Crisler, Matthew C; Drouin, Nathalie; Kopera, Kevin; Brooks, Johnell O

    2014-10-01

    The VISION COACH™ interactive light board is designed to test and enhance participants' psychomotor skills. The primary goal of this study was to examine the test-retest reliability of the Full Field 120 VISION COACH task. In all, 111 male and 131 female adult participants completed six trials in which they responded to 120 randomly distributed lights displayed on the VISION COACH interactive light board. The mean time required for a participant to complete a trial was 101 seconds. Intraclass correlation coefficients, ranging from 0.962 to 0.987, suggest the VISION COACH Full Field 120 task was a reliable task. Cohen's d values for adjacent pairs of trials suggest learning effects did not negatively affect reliability after the third trial.
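    Test-retest reliability of this kind is conventionally quantified with an intraclass correlation computed from a participants-by-trials score matrix. A minimal sketch of ICC(3,1) (two-way mixed, consistency) in plain NumPy; the abstract does not state which ICC form was used, and the data below are invented for illustration:

    ```python
    import numpy as np

    def icc_3_1(scores: np.ndarray) -> float:
        """ICC(3,1), two-way mixed, consistency: rows = participants, cols = trials."""
        n, k = scores.shape
        grand = scores.mean()
        ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # between subjects
        ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # between trials
        ss_total = ((scores - grand) ** 2).sum()
        ms_rows = ss_rows / (n - 1)
        ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

    # Example: completion times (s) for 4 participants over 6 trials.
    times = np.array([[102,  99,  97,  96,  95,  95],
                      [110, 108, 105, 104, 104, 103],
                      [ 98,  95,  94,  93,  92,  92],
                      [105, 103, 101, 100,  99,  99]], dtype=float)
    print(f"ICC(3,1) = {icc_3_1(times):.3f}")
    ```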

  18. New pediatric vision screener employing polarization-modulated, retinal-birefringence-scanning-based strabismus detection and bull's eye focus detection with an improved target system: opto-mechanical design and operation

    NASA Astrophysics Data System (ADS)

    Irsch, Kristina; Gramatikov, Boris I.; Wu, Yi-Kai; Guyton, David L.

    2014-06-01

    Amblyopia ("lazy eye") is a major public health problem, caused by misalignment of the eyes (strabismus) or defocus. If detected early in childhood, there is an excellent response to therapy, yet most children are detected too late to be treated effectively. Commercially available vision screening devices that test for amblyopia's primary causes can detect strabismus only indirectly and inaccurately via assessment of the positions of external light reflections from the cornea, but they cannot detect the anatomical feature of the eyes where fixation actually occurs (the fovea). Our laboratory has been developing technology to detect true foveal fixation, by exploiting the birefringence of the uniquely arranged Henle fibers delineating the fovea using retinal birefringence scanning (RBS), and we recently described a polarization-modulated approach to RBS that enables entirely direct and reliable detection of true foveal fixation, with greatly enhanced signal-to-noise ratio and essentially independent of corneal birefringence (a confounding variable with all polarization-sensitive ophthalmic technology). Here, we describe the design and operation of a new pediatric vision screener that employs polarization-modulated, RBS-based strabismus detection and bull's eye focus detection with an improved target system, and demonstrate the feasibility of this new approach.

  19. Human Detection from a Mobile Robot Using Fusion of Laser and Vision Information

    PubMed Central

    Fotiadis, Efstathios P.; Garzón, Mario; Barrientos, Antonio

    2013-01-01

    This paper presents a human detection system that can be employed on board a mobile platform for use in autonomous surveillance of large outdoor infrastructures. The prediction is based on the fusion of two detection modules, one for the laser and another for the vision data. In the laser module, a novel feature set that better encapsulates variations due to noise, distance and human pose is proposed. This enhances the generalization of the system, while at the same time, increasing the outdoor performance in comparison with current methods. The vision module uses the combination of the histogram of oriented gradients descriptor and the linear support vector machine classifier. Current approaches use a fixed-size projection to define regions of interest on the image data using the range information from the laser range finder. When applied to small size unmanned ground vehicles, these techniques suffer from misalignment, due to platform vibrations and terrain irregularities. This is effectively addressed in this work by using a novel adaptive projection technique, which is based on a probabilistic formulation of the classifier performance. Finally, a probability calibration step is introduced in order to optimally fuse the information from both modules. Experiments in real world environments demonstrate the robustness of the proposed method. PMID:24008280

  20. Human detection from a mobile robot using fusion of laser and vision information.

    PubMed

    Fotiadis, Efstathios P; Garzón, Mario; Barrientos, Antonio

    2013-09-04

    This paper presents a human detection system that can be employed on board a mobile platform for use in autonomous surveillance of large outdoor infrastructures. The prediction is based on the fusion of two detection modules, one for the laser and another for the vision data. In the laser module, a novel feature set that better encapsulates variations due to noise, distance and human pose is proposed. This enhances the generalization of the system, while at the same time, increasing the outdoor performance in comparison with current methods. The vision module uses the combination of the histogram of oriented gradients descriptor and the linear support vector machine classifier. Current approaches use a fixed-size projection to define regions of interest on the image data using the range information from the laser range finder. When applied to small size unmanned ground vehicles, these techniques suffer from misalignment, due to platform vibrations and terrain irregularities. This is effectively addressed in this work by using a novel adaptive projection technique, which is based on a probabilistic formulation of the classifier performance. Finally, a probability calibration step is introduced in order to optimally fuse the information from both modules. Experiments in real world environments demonstrate the robustness of the proposed method.
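    The vision module's detector, a histogram of oriented gradients fed to a linear support vector machine, is the classic Dalal-Triggs pedestrian pipeline, and OpenCV ships a pre-trained instance of it. A minimal sketch using that off-the-shelf detector rather than the authors' retrained model; the file name is hypothetical:

    ```python
    import cv2

    # OpenCV bundles a linear SVM trained on HOG features of pedestrians;
    # the paper trains its own, but the detection pipeline is the same.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    frame = cv2.imread("frame.png")   # hypothetical camera frame
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.png", frame)
    ```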

  1. New pediatric vision screener employing polarization-modulated, retinal-birefringence-scanning-based strabismus detection and bull's eye focus detection with an improved target system: opto-mechanical design and operation.

    PubMed

    Irsch, Kristina; Gramatikov, Boris I; Wu, Yi-Kai; Guyton, David L

    2014-06-01

    Amblyopia ("lazy eye") is a major public health problem, caused by misalignment of the eyes (strabismus) or defocus. If detected early in childhood, there is an excellent response to therapy, yet most children are detected too late to be treated effectively. Commercially available vision screening devices that test for amblyopia's primary causes can detect strabismus only indirectly and inaccurately via assessment of the positions of external light reflections from the cornea, but they cannot detect the anatomical feature of the eyes where fixation actually occurs (the fovea). Our laboratory has been developing technology to detect true foveal fixation, by exploiting the birefringence of the uniquely arranged Henle fibers delineating the fovea using retinal birefringence scanning (RBS), and we recently described a polarization-modulated approach to RBS that enables entirely direct and reliable detection of true foveal fixation, with greatly enhanced signal-to-noise ratio and essentially independent of corneal birefringence (a confounding variable with all polarization-sensitive ophthalmic technology). Here, we describe the design and operation of a new pediatric vision screener that employs polarization-modulated, RBS-based strabismus detection and bull's eye focus detection with an improved target system, and demonstrate the feasibility of this new approach.

  2. Evaluation of 5 different labeled polymer immunohistochemical detection systems.

    PubMed

    Skaland, Ivar; Nordhus, Marit; Gudlaugsson, Einar; Klos, Jan; Kjellevold, Kjell H; Janssen, Emiel A M; Baak, Jan P A

    2010-01-01

    Immunohistochemical staining is important for diagnosis and therapeutic decision making, but the results may vary when different detection systems are used. To analyze this, 5 different labeled polymer immunohistochemical detection systems, REAL EnVision, EnVision Flex, EnVision Flex+ (Dako, Glostrup, Denmark), NovoLink (Novocastra Laboratories Ltd, Newcastle Upon Tyne, UK) and UltraVision ONE (Thermo Fisher Scientific, Fremont, CA), were tested using 12 different, widely used mouse and rabbit primary antibodies detecting nuclear, cytoplasmic, and membrane antigens. Serial sections of multitissue blocks containing 4% formaldehyde-fixed, paraffin-embedded material were selected for their weak, moderate, and strong staining for each antibody. Specificity and sensitivity were evaluated by subjective scoring and digital image analysis. At optimal primary antibody dilution, digital image analysis showed that EnVision Flex+ was the most sensitive system (P < 0.005), with means of 8.3, 13.4, 20.2, and 41.8 gray-scale values stronger staining than REAL EnVision, EnVision Flex, NovoLink, and UltraVision ONE, respectively. NovoLink was the second most sensitive system for mouse antibodies, but showed low sensitivity for rabbit antibodies. Owing to low sensitivity, 2 cases with UltraVision ONE and 1 case with NovoLink stained falsely negative. None of the detection systems showed any distinct false positivity, but UltraVision ONE and NovoLink consistently showed weak background staining, both in negative controls and at optimal primary antibody dilution. We conclude that there are significant differences in sensitivity, specificity, costs, and total assay time among the immunohistochemical detection systems currently in use.

  3. Real Time Target Tracking Using Dedicated Vision Hardware

    NASA Astrophysics Data System (ADS)

    Kambies, Keith; Walsh, Peter

    1988-03-01

    This paper describes a real-time vision target tracking system developed by Adaptive Automation, Inc. and delivered to NASA's Launch Equipment Test Facility, Kennedy Space Center, Florida. The target tracking system is part of the Robotic Application Development Laboratory (RADL), which was designed to provide NASA with a general-purpose robotic research and development test bed for the integration of robot and sensor systems. One of the first RADL system applications is the closing of a position control loop around a six-axis articulated-arm industrial robot using a camera and dedicated vision processor as the input sensor, so that the robot can locate and track a moving target. The vision system is inside the loop closure of the robot tracking system; therefore, tight throughput and latency constraints are imposed on the vision system that can only be met with specialized hardware and a concurrent approach to the processing algorithms. State-of-the-art VME-based vision boards capable of processing the image at frame rate were used with a real-time, multitasking operating system to achieve the required performance. This paper describes the high-speed vision-based tracking task, the system throughput requirements, the use of a dedicated vision hardware architecture, and the implementation design details. Important to the overall philosophy of the complete system was the hierarchical and modular approach applied to all aspects of the system, hardware and software alike, so special emphasis is placed on this topic in the paper.

  4. Look Sharp While Seeing Sharp

    NASA Technical Reports Server (NTRS)

    2006-01-01

    The two scientists James B. Stephens and Dr. Charles G. Miller were tasked with studying the harmful properties of light in space, as well as the artificial radiation produced during laser and welding work, for the purpose of creating an enhanced means of eye protection in industrial welding applications. While working to apply their space research to these terrestrial applications, Stephens and Miller became engrossed with previously discovered research showing evidence that the eyes of hawks, eagles, and other birds of prey contain unique oil droplets that actually protect them from intensely radiated light rays (blue, violet, ultraviolet) while allowing vision-enhancing light rays (red, orange, green) to pass through. These oil droplets absorb short wavelength light rays which, in turn, reduce glare and provide heightened color contrast and definition for optimal visual acuity. Accordingly, birds of prey possess the ability to distinguish their targeted prey in natural surroundings and from great distances. Pairing the findings from their initial studies with what they learned from the bird studies, the scientists devised a methodology to incorporate the light-filtering/vision-enhancing dual-action benefits into a filtering system, using light-filtering dyes and tiny particles of zinc oxide. (Zinc oxide, which absorbs ultraviolet light, is also found in sunscreen lotions that protect the skin from sunburn.)

  5. Synthetic Vision Enhanced Surface Operations and Flight Procedures Rehearsal Tool

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Williams, Steven P.; Kramer, Lynda J.

    2006-01-01

    Limited visibility has been cited as a predominant causal factor for both Controlled-Flight-Into-Terrain (CFIT) and runway incursion accidents. NASA is conducting research and development of Synthetic Vision Systems (SVS) technologies which may potentially mitigate low visibility as a causal factor in these accidents while replicating the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. Two experimental evaluation studies were performed to determine the efficacy of two concepts: 1) head-worn display application of SVS technology to enhance transport aircraft surface operations, and 2) a three-dimensional SVS electronic flight bag display concept for flight plan preview, mission rehearsal, and a controller-pilot data link communications interface for flight procedures. In the surface operation study, pilots evaluated two display devices and four display modes during taxi under unlimited and CAT II visibility conditions. In the mission rehearsal study, pilots flew approaches and departures in an operationally challenged airport environment, including CFIT scenarios. Performance using the SVS concepts was compared to traditional baseline displays with paper charts only or EFB information. In general, the studies demonstrate the significant situation awareness and enhanced operational capabilities afforded by these advanced SVS display concepts. The experimental results and conclusions from these studies are discussed along with future directions.

  6. Pyramidal neurovision architecture for vision machines

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1993-08-01

    The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, whereupon each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.
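    The data structure underlying such pyramidal architectures is a multiresolution image stack in which each level halves the resolution of the one below, so coarse levels can direct where the finer, costlier levels look. A minimal sketch of that converging structure using OpenCV's Gaussian pyramid; this illustrates the data structure only, not the paper's neural processors, and the input file is hypothetical:

    ```python
    import cv2

    # Build a 4-level Gaussian pyramid: each pyrDown call low-pass filters
    # and halves the resolution, mimicking the converging multilayered
    # structure described above.
    image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    pyramid = [image]
    for _ in range(3):
        pyramid.append(cv2.pyrDown(pyramid[-1]))

    # Coarse-to-fine surveillance would flag activity at the coarsest level
    # first, then refine only the flagged regions at finer levels.
    for level, img in enumerate(pyramid):
        print(f"level {level}: {img.shape[1]}x{img.shape[0]} pixels")
    ```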

  7. A Vision of Success: How Nutrient Management Will Enhance and Sustain Ecosystem Services

    EPA Science Inventory

    Clean air and water, ample food, renewable fuels, productive fisheries, diverse ecosystems, resilient coasts and watersheds: these are some of the benefits that depend on sustainable nitrogen use and management. Thus, in our vision of the future, uses of reactive nitrogen are suf...

  8. Frame Rate and Human Vision

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    2012-01-01

    To enhance the quality of the theatre experience, the film industry is interested in achieving higher frame rates for capture and display. In this talk I will describe the basic spatio-temporal sensitivities of human vision, and how they respond to the time sequence of static images that is fundamental to cinematic presentation.

  9. Visions for Children: African American Early Childhood Education Program.

    ERIC Educational Resources Information Center

    Hale-Benson, Janice

    The features of an early childhood education demonstration program, Visions for Children, are delineated in this paper. The program was designed to facilitate the intellectual development, boost the academic achievement, and enhance the self-concepts of African-American preschool children. The program implements a curriculum that focuses on…

  10. Driving and Low Vision: An Evidence-Based Review of Rehabilitation

    ERIC Educational Resources Information Center

    Strong, J. Graham; Jutai, Jeffrey W.; Russell-Minda, Elizabeth; Evans, Mal

    2008-01-01

    This systematic review of the effectiveness of driver rehabilitation interventions found that driver training programs enhance driving skills and awareness, but further research is needed to determine their effectiveness in improving driving performance of drivers with low vision. More research is also needed to determine the effectiveness of low…

  11. Foveal Processing Under Concurrent Peripheral Load in Profoundly Deaf Adults

    PubMed Central

    2016-01-01

    Development of the visual system typically proceeds in concert with the development of audition. One result is that the visual system of profoundly deaf individuals differs from that of those with typical auditory systems. While past research has suggested deaf people have enhanced attention in the visual periphery, it is still unclear whether or not this enhancement entails deficits in central vision. Profoundly deaf and typically hearing adults were administered a variant of the useful field of view task that independently assessed performance on concurrent central and peripheral tasks. Identification of a foveated target was impaired by a concurrent selective peripheral attention task, more so in profoundly deaf adults than in the typically hearing. Previous findings of enhanced performance on the peripheral task were not replicated. These data are discussed in terms of flexible allocation of spatial attention targeted towards perceived task demands, and support a modified “division of labor” hypothesis whereby attentional resources co-opted to process peripheral space result in reduced resources in the central visual field. PMID:26657078

  12. Solar System Exploration Augmented by In-Situ Resource Utilization: Human Mercury and Saturn Exploration

    NASA Technical Reports Server (NTRS)

    Palaszewski, Bryan

    2015-01-01

    Human and robotic missions to Mercury and Saturn are presented and analyzed. Unique elements of the local planetary environments are discussed and included in the analyses and assessments. Historical studies of space exploration, in-situ resource utilization (ISRU), and industrialization all point to the vastness of natural resources in the solar system, and advanced propulsion benefitted from these resources in many ways. While advanced propulsion systems were proposed in these historical studies, further investigation of nuclear options using high-power nuclear thermal and nuclear pulse propulsion, as well as advanced chemical propulsion, can significantly enhance these scenarios. Updated analyses based on these historical visions are presented. Nuclear thermal propulsion and ISRU-enhanced chemical propulsion landers are assessed for Mercury missions. At Saturn, nuclear pulse propulsion with alternate propellant feed systems and Titan exploration with chemical propulsion options are discussed.

  13. 3D display considerations for rugged airborne environments

    NASA Astrophysics Data System (ADS)

    Barnidge, Tracy J.; Tchon, Joseph L.

    2015-05-01

    The KC-46 is the next generation, multi-role, aerial refueling tanker aircraft being developed by Boeing for the United States Air Force. Rockwell Collins has developed the Remote Vision System (RVS) that supports aerial refueling operations under a variety of conditions. The system utilizes large-area, high-resolution 3D displays linked with remote sensors to enhance the operator's visual acuity for precise aerial refueling control. This paper reviews the design considerations, trade-offs, and other factors related to the selection and ruggedization of the 3D display technology for this military application.

  14. Mars Rover imaging systems and directional filtering

    NASA Technical Reports Server (NTRS)

    Wang, Paul P.

    1989-01-01

    Computer literature searches were carried out at Duke University and NASA Langley Research Center. The purpose is to enhance personal knowledge of the technical problems of pattern recognition and image understanding which must be solved for the Mars Rover and Sample Return Mission. An intensive study of a large collection of relevant literature resulted in a compilation of all important documents in one place. Furthermore, the documents are being classified into: Mars Rover; computer vision (theory); imaging systems; pattern recognition methodologies; and other smart techniques (AI, neural networks, fuzzy logic, etc.).

  15. Advances in real-time millimeter-wave imaging radiometers for avionic synthetic vision

    NASA Astrophysics Data System (ADS)

    Lovberg, John A.; Chou, Ri-Chee; Martin, Christopher A.; Galliano, Joseph A., Jr.

    1995-06-01

    Millimeter-wave imaging has advantages over conventional visible or infrared imaging for many applications because millimeter-wave signals can travel through fog, snow, dust, and clouds with much less attenuation than infrared or visible light waves. Additionally, passive imaging systems avoid many problems associated with active radar imaging systems, such as radar clutter, glint, and multi-path return. ThermoTrex Corporation previously reported on its development of a passive imaging radiometer that uses an array of frequency-scanned antennas coupled to a multichannel acousto-optic spectrum analyzer (Bragg cell) to form visible images of a scene through the acquisition of thermal blackbody radiation in the millimeter-wave spectrum. The output from the Bragg cell is imaged by a standard video camera and passed to a computer for normalization and display at real-time frame rates. An application of this system is its incorporation as part of an enhanced vision system to provide pilots with a synthetic view of a runway in fog and during other adverse weather conditions. Ongoing improvements to a 94 GHz imaging system and examples of recent images taken with this system will be presented. Additionally, the development of dielectric antennas and an electro-optic-based processor for improved system performance, and the development of an 'ultra-compact' 220 GHz imaging system, will be discussed.

  16. Ultraviolet-Blocking Lenses Protect, Enhance Vision

    NASA Technical Reports Server (NTRS)

    2010-01-01

    To combat the harmful properties of light in space, as well as that of artificial radiation produced during laser and welding work, Jet Propulsion Laboratory (JPL) scientists developed a lens capable of absorbing, filtering, and scattering the dangerous light while not obstructing vision. SunTiger Inc. (now Eagle Eyes Optics, of Calabasas, California) was formed to market a full line of sunglasses based on the JPL discovery that promised 100-percent elimination of harmful wavelengths and enhanced visual clarity. The technology was recently inducted into the Space Technology Hall of Fame.

  17. Sharpening vision by adapting to flicker.

    PubMed

    Arnold, Derek H; Williams, Jeremy D; Phipps, Natasha E; Goodale, Melvyn A

    2016-11-01

    Human vision is surprisingly malleable. A static stimulus can seem to move after prolonged exposure to movement (the motion aftereffect), and exposure to tilted lines can make vertical lines seem oppositely tilted (the tilt aftereffect). The paradigm used to induce such distortions (adaptation) can provide powerful insights into the computations underlying human visual experience. Previously, spatial form and stimulus dynamics were thought to be encoded independently, but here we show that adaptation to stimulus dynamics can sharpen form perception. We find that fast flicker adaptation (FFAd) shifts the tuning of face perception to higher spatial frequencies and enhances the acuity of spatial vision, allowing people to localize inputs with greater precision and to read finer-scaled text, while selectively reducing sensitivity to coarse-scale form signals. These findings are consistent with two interrelated influences: FFAd reduces the responsiveness of magnocellular neurons (which are important for encoding dynamics, but can have poor spatial resolution), and magnocellular responses contribute coarse-spatial-scale information when the visual system synthesizes form signals. Consequently, when magnocellular responses are mitigated via FFAd, human form perception is transiently sharpened because "blur" signals are mitigated.

  18. Computer Vision-Based Structural Displacement Measurement Robust to Light-Induced Image Degradation for In-Service Bridges

    PubMed Central

    Lee, Junhwa; Lee, Kyoung-Chan; Cho, Soojin

    2017-01-01

    The displacement responses of a civil engineering structure can provide important information regarding structural behaviors that help in assessing safety and serviceability. A displacement measurement using conventional devices, such as the linear variable differential transformer (LVDT), is challenging owing to issues related to inconvenient sensor installation that often requires additional temporary structures. A promising alternative is offered by computer vision, which typically provides a low-cost and non-contact displacement measurement that converts the movement of an object, mostly an attached marker, in the captured images into structural displacement. However, there is limited research on addressing light-induced measurement error caused by the inevitable sunlight in field-testing conditions. This study presents a computer vision-based displacement measurement approach tailored to a field-testing environment with enhanced robustness to strong sunlight. An image-processing algorithm with an adaptive region-of-interest (ROI) is proposed to reliably determine a marker’s location even when the marker is indistinct due to unfavorable light. The performance of the proposed system is experimentally validated in both laboratory-scale and field experiments. PMID:29019950
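    The core loop of marker-based displacement measurement is template matching restricted to a region of interest (ROI) around the last known marker position. A minimal OpenCV sketch using normalized cross-correlation with a fixed-size ROI re-centred each frame, a simplified stand-in for the paper's adaptive-ROI scheme; file names and the initial position are hypothetical:

    ```python
    import cv2
    import numpy as np

    template = cv2.imread("marker.png", cv2.IMREAD_GRAYSCALE)  # marker template
    th, tw = template.shape
    HALF = 60   # search-window half-size in pixels

    def track(frame: np.ndarray, x: int, y: int) -> tuple[int, int]:
        """Find the marker's top-left corner near its previous position (x, y)."""
        x0, y0 = max(x - HALF, 0), max(y - HALF, 0)
        roi = frame[y0:y0 + 2 * HALF + th, x0:x0 + 2 * HALF + tw]
        # Normalized cross-correlation is comparatively tolerant of the
        # global brightness changes that sunlight causes.
        res = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, (dx, dy) = cv2.minMaxLoc(res)
        return x0 + dx, y0 + dy

    # Pixel motion converts to physical displacement through a scale factor
    # calibrated from the marker's known physical size.
    ```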

  19. Sensor Needs for Control and Health Management of Intelligent Aircraft Engines

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay; Hunter, Gary W.; Guo, Ten-Huei; Semega, Kenneth J.

    2004-01-01

    NASA and the U.S. Department of Defense are conducting programs which support the future vision of "intelligent" aircraft engines for enhancing the affordability, performance, operability, safety, and reliability of aircraft propulsion systems. Intelligent engines will have advanced control and health management capabilities enabling these engines to be self-diagnostic, self-prognostic, and adaptive to optimize performance based upon the current condition of the engine or the current mission of the vehicle. Sensors are a critical technology necessary to enable the intelligent engine vision as they are relied upon to accurately collect the data required for engine control and health management. This paper reviews the anticipated sensor requirements to support the future vision of intelligent engines from a control and health management perspective. Propulsion control and health management technologies are discussed in the broad areas of active component controls, propulsion health management and distributed controls. In each of these three areas individual technologies will be described, input parameters necessary for control feedback or health management will be discussed, and sensor performance specifications for measuring these parameters will be summarized.

  20. Evaluation of Equivalent Vision Technologies for Supersonic Aircraft Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Williams, Steven P.; Wilz, Susan P.; Arthur, Jarvis J., III; Bailey, Randall E.

    2009-01-01

    Twenty-four air transport-rated pilots participated as subjects in a fixed-based simulation experiment to evaluate the use of Synthetic/Enhanced Vision (S/EV) and eXternal Vision System (XVS) technologies as enabling technologies for future all-weather operations. Three head-up flight display concepts were evaluated: a monochromatic, collimated Head-up Display (HUD) and a color, non-collimated XVS display with a field-of-view (FOV) equal to, and also one significantly larger than, that of the collimated HUD. Approach, landing, departure, and surface operations were conducted. Additionally, the apparent angle-of-attack (AOA) was varied (high/low) to investigate the vertical field-of-view display requirements, and peripheral, side-window visibility was experimentally varied. The data showed that lateral approach tracking performance and lateral landing position were excellent regardless of the display type and AOA condition being evaluated or whether or not there were peripheral cues in the side windows. Longitudinal touchdown and glideslope tracking were affected by the display concepts. Larger-FOV display concepts showed improved longitudinal touchdown control, superior glideslope tracking, significant situation awareness improvements, and workload reductions compared to smaller-FOV display concepts.

  1. Crepuscular and Nocturnal Illumination and Its Effects on Color Perception by the Nocturnal Hawkmoth Deilephila elpenor

    DTIC Science & Technology

    2006-01-01

    vision may enhance recognition of conspecifics or be used in mating. While mating in moths is thought to be entirely mediated by olfaction, most tasks are... time, unambiguous evidence for true color vision under scotopic conditions has only recently been acquired (Kelber et al., 2002; Roth and Kelber, 2004... color under starlight and dim moonlight, respectively, raise at least two issues. First, what is the selective advantage of color vision in these

  2. Flight Simulator Evaluation of Display Media Devices for Synthetic Vision Concepts

    NASA Technical Reports Server (NTRS)

    Arthur, J. J., III; Williams, Steven P.; Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.

    2004-01-01

    The Synthetic Vision Systems (SVS) Project of the National Aeronautics and Space Administration's (NASA) Aviation Safety Program (AvSP) is striving to eliminate poor visibility as a causal factor in aircraft accidents as well as enhance operational capabilities of all aircraft. To accomplish these safety and capacity improvements, the SVS concept is designed to provide a clear view of the world around the aircraft through the display of computer-generated imagery derived from an onboard database of terrain, obstacle, and airport information. Display media devices with which to implement SVS technology that have been evaluated so far within the Project include fixed field of view head up displays and head down Primary Flight Displays with pilot-selectable field of view. A simulation experiment was conducted comparing these display devices to a fixed field of view, unlimited field of regard, full color Helmet-Mounted Display system. Subject pilots flew a visual circling maneuver in IMC at a terrain-challenged airport. The data collected for this experiment is compared to past SVS research studies.

  3. Toward Head-Up and Head-Worn Displays for Equivalent Visual Operations

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Arthur, Jarvis J.; Bailey, Randall E.; Shelton, Kevin J.; Kramer, Lynda J.; Jones, Denise R.; Williams, Steven P.; Harrison, Stephanie J.; Ellis, Kyle K.

    2015-01-01

    A key capability envisioned for the future air transportation system is the concept of equivalent visual operations (EVO). EVO is the capability to achieve the safety of current-day Visual Flight Rules (VFR) operations and maintain the operational tempos of VFR irrespective of the weather and visibility conditions. Enhanced Flight Vision Systems (EFVS) offer a path to achieve EVO. NASA has successfully tested EFVS for commercial flight operations that has helped establish the technical merits of EFVS, without reliance on natural vision, to runways without category II/III ground-based navigation and lighting requirements. The research has tested EFVS for operations with both Head-Up Displays (HUDs) and "HUD equivalent" Head-Worn Displays (HWDs). The paper describes the EVO concept and representative NASA EFVS research that demonstrate the potential of these technologies to safely conduct operations in visibilities as low as 1000 feet Runway Visual Range (RVR). Future directions are described including efforts to enable low-visibility approach, landing, and roll-outs using EFVS under conditions as low as 300 feet RVR.

  4. Flight Test Evaluation of Synthetic Vision Concepts at a Terrain Challenged Airport

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Prinzel, Lawrence J., III; Bailey, Randall E.; Arthur, Jarvis J., III; Parrish, Russell V.

    2004-01-01

    NASA's Synthetic Vision Systems (SVS) Project is striving to eliminate poor visibility as a causal factor in aircraft accidents as well as enhance operational capabilities of all aircraft through the display of computer generated imagery derived from an onboard database of terrain, obstacle, and airport information. To achieve these objectives, NASA 757 flight test research was conducted at the Eagle-Vail, Colorado airport to evaluate three SVS display types (Head-up Display, Head-Down Size A, Head-Down Size X) and two terrain texture methods (photo-realistic, generic) in comparison to the simulated Baseline Boeing-757 Electronic Attitude Direction Indicator and Navigation/Terrain Awareness and Warning System displays. The results of the experiment showed significantly improved situation awareness, performance, and workload for SVS concepts compared to the Baseline displays and confirmed the retrofit capability of the Head-Up Display and Size A SVS concepts. The research also demonstrated that the tunnel guidance display concept used within the SVS concepts achieved required navigation performance (RNP) criteria.

  5. Enhanced Monocular Visual Odometry Integrated with Laser Distance Meter for Astronaut Navigation

    PubMed Central

    Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin

    2014-01-01

    Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:24618780
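    The scale-ambiguity fix reduces to a ratio: monocular odometry recovers the trajectory only up to an unknown scale factor, and one absolute laser range to a tracked point pins that factor down. A minimal sketch under the simplifying assumption that a single global scale is applied to the whole trajectory (the paper fuses measurements more carefully); all values are invented:

    ```python
    import numpy as np

    def absolute_scale(depth_vo: float, depth_laser: float) -> float:
        """Ratio of a metric laser range to the same point's VO-estimated depth."""
        return depth_laser / depth_vo

    # Scale-free trajectory from monocular visual odometry (invented values):
    trajectory_vo = np.array([[0.0, 0.0, 0.0],
                              [0.8, 0.1, 0.0],
                              [1.7, 0.1, 0.0]])
    s = absolute_scale(depth_vo=2.4, depth_laser=3.6)  # one laser measurement
    trajectory_m = s * trajectory_vo                    # trajectory in metres
    print(trajectory_m)
    ```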

  6. Measuring perceived video quality of MPEG enhancement by people with impaired vision

    PubMed Central

    Fullerton, Matthew; Woods, Russell L.; Vera-Diaz, Fuensanta A.; Peli, Eli

    2007-01-01

    We used a new method to measure the perceived quality of contrast-enhanced motion video. Patients with impaired vision (n = 24) and normally-sighted subjects (n = 6) adjusted the level of MPEG-based enhancement of 8 videos (4 minutes each) drawn from 4 categories. They selected the level of enhancement that provided the preferred view of the videos, using a reducing-step-size staircase procedure. Most patients made consistent selections of the preferred level of enhancement, indicating an appreciation of and a perceived benefit from the MPEG-based enhancement. The selections varied between patients and were correlated with letter contrast sensitivity, but the selections were not affected by training, experience or video category. We measured just noticeable differences (JNDs) directly for videos, and mapped the image manipulation (enhancement in our case) onto an approximately linear perceptual space. These tools and approaches will be of value in other evaluations of the image quality of motion video manipulations. PMID:18059909
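    A reducing-step-size staircase of the kind described converges on the preferred enhancement level by halving its step at every reversal of the observer's response. A minimal sketch with a simulated observer; the response callback and all parameter values are hypothetical:

    ```python
    def staircase(prefers_higher, level=0.5, step=0.25, min_step=0.01):
        """Converge on a preferred level in [0, 1], halving the step at reversals.

        `prefers_higher(level)` is a hypothetical response callback: True if
        the observer wants more enhancement than `level`, False otherwise.
        """
        last_direction = 0
        while step >= min_step:
            direction = 1 if prefers_higher(level) else -1
            if last_direction and direction != last_direction:
                step /= 2                        # reversal: refine the search
            level = min(max(level + direction * step, 0.0), 1.0)
            last_direction = direction
        return level

    # Simulated observer whose true preference is 0.62:
    print(staircase(lambda lv: lv < 0.62))       # converges near 0.62
    ```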

  7. Proceedings of the Augmented VIsual Display (AVID) Research Workshop

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K. (Editor); Sweet, Barbara T. (Editor)

    1993-01-01

    The papers, abstracts, and presentations were presented at a three day workshop focused on sensor modeling and simulation, and image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.

  8. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields, including the neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system; using a neurally based computing substrate, it can complete all necessary visual tasks in real time.

  9. Integration of a vision-based tracking platform, visual instruction, and error analysis models for an efficient billiard training system

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong; Hsiung, Pao-Ann; Wan, Chieh-Hao; Koong, Chorng-Shiuh; Liu, Tang-Kun; Yang, Yuanfan; Lin, Chu-Hsing; Chu, William Cheng-Chung

    2009-02-01

    A billiard ball tracking system is designed to combine with a visual guide interface to instruct users for a reliable strike. The integrated system runs on a PC platform. The system makes use of a vision system for cue ball, object ball and cue stick tracking. A least-squares error calibration process correlates the real-world and the virtual-world pool ball coordinates for a precise guidance line calculation. Users are able to adjust the cue stick on the pool table according to a visual guidance line instruction displayed on a PC monitor. The ideal visual guidance line extended from the cue ball is calculated based on a collision motion analysis. In addition to calculating the ideal visual guide, the factors influencing selection of the best shot among different object balls and pockets are explored. It is found that a tolerance angle around the ideal line for the object ball to roll into a pocket determines the difficulty of a strike. This angle depends in turn on the distance from the pocket to the object, the distance from the object to the cue ball, and the angle between these two vectors. Simulation results for tolerance angles as a function of these quantities are given. A selected object ball was tested extensively with respect to various geometrical parameters with and without using our integrated system. Players with different proficiency levels were selected for the experiment. The results indicate that all players benefit from our proposed visual guidance system in enhancing their skills, while low-skill players show the maximum enhancement in skill with the help of our system. All exhibit enhanced maximum and average hit-in rates. Experimental results on hit-in rates have shown a pattern consistent with that of the analysis. The hit-in rate is thus tightly connected with the analyzed tolerance angles for sinking object balls into a target pocket. These results prove the efficiency of our system, and the analysis results can be used to attain an efficient game-playing strategy.
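    The ideal guidance line follows from textbook collision geometry: to send the object ball toward a pocket, the cue ball must contact it at the "ghost ball" position, one ball diameter behind the object ball on the object-to-pocket line. A minimal sketch of that standard construction, not necessarily the authors' exact model; coordinates and ball radius are illustrative:

    ```python
    import numpy as np

    BALL_R = 0.026   # ball radius in metres (illustrative)

    def ghost_ball(cue, obj, pocket):
        """Return the cue-ball aim point and the cut angle of the shot.

        The cue ball must contact the object ball at the ghost-ball position:
        one ball diameter behind the object ball on the object-to-pocket line.
        """
        to_pocket = (pocket - obj) / np.linalg.norm(pocket - obj)
        ghost = obj - 2 * BALL_R * to_pocket               # contact position
        aim = (ghost - cue) / np.linalg.norm(ghost - cue)
        cut = np.degrees(np.arccos(np.clip(aim @ to_pocket, -1.0, 1.0)))
        return ghost, cut                                  # larger cut = harder shot

    ghost, cut = ghost_ball(np.array([0.30, 0.20]),        # cue ball (m)
                            np.array([0.80, 0.60]),        # object ball
                            np.array([1.20, 0.60]))        # pocket
    print(f"aim through {ghost}, cut angle {cut:.1f} deg")
    ```

    Consistent with the tolerance-angle analysis above, shots with larger cut angles and longer object-to-pocket distances leave a smaller margin of aiming error.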

  10. Heuristics in primary care for recognition of unreported vision loss in older people: a technology development study.

    PubMed

    Wijeyekoon, Skanda; Kharicha, Kalpa; Iliffe, Steve

    2015-09-01

    To evaluate heuristics (rules of thumb) for recognition of undetected vision loss in older patients in primary care. Vision loss is associated with ageing, and its prevalence is increasing. Visual impairment has a broad impact on health, functioning and well-being. Unrecognised vision loss remains common, and screening interventions have yet to reduce its prevalence. An alternative approach is to enhance practitioners' skills in recognising undetected vision loss, by building a more detailed picture of those who are likely not to act on vision changes, report symptoms or have eye tests. This paper describes a qualitative technology development study to evaluate heuristics for recognition of undetected vision loss in older patients in primary care. Building on a previous modelling study, two heuristics in the form of mnemonics were developed to aid pattern recognition and allow general practitioners to identify potential cases of unreported vision loss. These heuristics were then analysed with experts. Findings: it was concluded that their implementation in modern general practice was unsuitable and that an alternative solution should be sought.

  11. Monovision techniques for telerobots

    NASA Technical Reports Server (NTRS)

    Goode, P. W.; Carnils, K.

    1987-01-01

    The primary task of the vision sensor in a telerobotic system is to provide information about the position of the system's effector relative to objects of interest in its environment. The subtasks required to perform the primary task include image segmentation, object recognition, and object location and orientation in some coordinate system. The accomplishment of the vision task requires the appropriate processing tools and the system methodology to effectively apply the tools to the subtasks. The functional structure of the telerobotic vision system used in the Langley Research Center's Intelligent Systems Research Laboratory is discussed as well as two monovision techniques for accomplishing the vision subtasks.

  12. Design of a dynamic test platform for autonomous robot vision systems

    NASA Technical Reports Server (NTRS)

    Rich, G. C.

    1980-01-01

    The concept and design of a dynamic test platform for development and evaluation of a robot vision system is discussed. The platform is to serve as a diagnostic and developmental tool for future work with the RPI Mars Rover's multi-laser/multi-detector vision system. The platform allows testing of the vision system while its attitude is varied, statically or periodically. The vision system is mounted on the test platform. It can then be subjected to a wide variety of simulated Rover motions and can thus be examined in a controlled, quantitative fashion. Defining and modeling Rover motions and designing the platform to emulate these motions are also discussed. Individual aspects of the design process are treated separately, such as the structure, driving linkages, and motors and transmissions.

  13. Image-Enhancement Aid For The Partially Sighted

    NASA Technical Reports Server (NTRS)

    Lawton, T. A.; Gennery, D. B.

    1989-01-01

    Digital filtering enhances the ability to read and to recognize objects. It is possible to construct a portable vision aid by combining miniature video equipment, to observe the scene and display images, with very-large-scale integrated circuits that implement real-time digital image-data processing. The afflicted observer views the scene through a magnifier, which shifts spatial frequencies downward and thereby improves the perceived image; however, the less magnification needed, the larger the scene that can be observed. Thus, one measure of the effectiveness of the new system is the amount of magnification required with and without it. In a series of tests, afflicted observers needed 27 to 70 percent more magnification to recognize unfiltered words than to recognize filtered words.

  14. Impaired Waters and TMDLs: New Vision for the CWA 303(d) Program – An Updated Framework for Implementing the CWA 303(d) Program Responsibilities

    EPA Pesticide Factsheets

    A vision detailing enhancements made to the Clean Water Act 303(d) Program informed by the experience gained over the past two decades in assessing and reporting on water quality and in developing approximately 65,000 TMDLs.

  15. 77 FR 42704 - 36(b)(1) Arms Sales Notification

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-20

    ... Vision Sensors, 12 AN/APG-78 Fire Control Radars (FCR) with Radar Electronics Unit (LONGBOW component... Target Acquisition and Designation Sight, 27 AN/AAR-11 Modernized Pilot Night Vision Sensors, 12 AN/APG... enhance the protection of key oil and gas infrastructure and platforms which are vital to U.S. and western...

  16. Principles of 'servant leadership' and how they can enhance practice.

    PubMed

    Waterman, Harold

    2011-02-01

    This article explores the concept of service in modern health and social care. It examines the principles of servant leadership in the contexts of service, community and vision, in that, if service is what leaders do, community is who they do it for and vision is how the two concepts are brought together.

  17. Using an auditory sensory substitution device to augment vision: evidence from eye movements.

    PubMed

    Wright, Thomas D; Margolis, Aaron; Ward, Jamie

    2015-03-01

    Sensory substitution devices convert information normally associated with one sense into another sense (e.g. converting vision into sound). This is often done to compensate for an impaired sense. The present research uses a multimodal approach in which both natural vision and sound-from-vision ('soundscapes') are simultaneously presented. Although there is a systematic correspondence between what is seen and what is heard, we introduce a local discrepancy between the signals (the presence of a target object that is heard but not seen) that the participant is required to locate. In addition to behavioural responses, the participants' gaze is monitored with eye-tracking. Although the target object is only presented in the auditory channel, behavioural performance is enhanced when visual information relating to the non-target background is presented. In this instance, vision may be used to generate predictions about the soundscape that enhances the ability to detect the hidden auditory object. The eye-tracking data reveal that participants look for longer in the quadrant containing the auditory target even when they subsequently judge it to be located elsewhere. As such, eye movements generated by soundscapes reveal the knowledge of the target location that does not necessarily correspond to the actual judgment made. The results provide a proof of principle that multimodal sensory substitution may be of benefit to visually impaired people with some residual vision and, in normally sighted participants, for guiding search within complex scenes.

  18. Detecting Motion from a Moving Platform; Phase 3: Unification of Control and Sensing for More Advanced Situational Awareness

    DTIC Science & Technology

    2011-11-01

    RX-TY-TR-2011-0096-01) develops a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica...01 summarizes the development of a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica

  19. An approach to integrate the human vision psychology and perception knowledge into image enhancement

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Huang, Xifeng; Ping, Jiang

    2009-07-01

    Image enhancement is a very important image-preprocessing technology, especially when an image is captured under poor imaging conditions or when dealing with high-bit-depth images. The beneficiary of image enhancement may be either a human observer or a computer vision process performing some kind of higher-level image analysis, such as target detection or scene understanding. One of the main aims of image enhancement is to obtain a high-dynamic-range, high-contrast image for human perception or interpretation, so it is natural to integrate either empirical or statistical knowledge of human vision psychology and perception into image enhancement. Human vision psychology holds that humans' perception of and response to an intensity fluctuation δu in a visual signal are weighted by the background stimulus u, instead of being uniform. Three main laws describe this phenomenon in psychology and psychophysics: Weber's law, the Weber-Fechner law, and Stevens's law. This paper integrates these three laws into a popular image enhancement algorithm named Adaptive Plateau Equalization (APE). Experiments were done on high-bit star images captured in night scenes and on infrared images, covering both static images and video streams. To reduce jitter in video streams, the algorithm corrects the current frame's plateau value using the difference between the current and previous frames' plateau values. To account for random noise, the pixel-value mapping depends not only on the current pixel but also on the pixels in a window (usually 3×3) surrounding it. The results of the improved algorithms are evaluated by entropy analysis and visual perception analysis. The experimental results showed that the improved APE algorithms improved image quality: the target and the surrounding assistant targets could be identified easily, and noise was not amplified much. For low-quality images, the improved algorithms increase the information entropy and improve the aesthetic quality of the image and the video stream, while for high-quality images they do not degrade image quality.
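    The core of plateau equalization is ordinary histogram equalization with the histogram clipped at a plateau value, which stops dominant bins (e.g. a uniform sky background) from consuming the whole output range. A minimal sketch for a 16-bit image; the adaptive plateau selection, Weber-law weighting, and inter-frame smoothing described above are omitted, and the synthetic image stands in for real data:

    ```python
    import numpy as np

    def plateau_equalize(img: np.ndarray, plateau: int) -> np.ndarray:
        """Equalize a 16-bit image with its histogram clipped at `plateau`."""
        hist, _ = np.histogram(img, bins=65536, range=(0, 65536))
        clipped = np.minimum(hist, plateau)              # cap dominant bins
        cdf = np.cumsum(clipped).astype(np.float64)
        lut = (cdf / cdf[-1] * 255.0).astype(np.uint8)   # 16-bit -> 8-bit display
        return lut[img]

    # Synthetic high-bit "star field": a dark background with a bright tail.
    img16 = np.clip(np.random.gamma(0.3, 400.0, (512, 512)), 0, 65535).astype(np.uint16)
    display = plateau_equalize(img16, plateau=200)       # plateau chosen by hand here
    ```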

  20. Automatic detection system of shaft part surface defect based on machine vision

    NASA Astrophysics Data System (ADS)

    Jiang, Lixing; Sun, Kuoyuan; Zhao, Fulai; Hao, Xiangyang

    2015-05-01

    Surface physical damage detection is an important part of shaft-part quality inspection, and traditional detection methods rely mostly on human-eye identification, which has many disadvantages such as low efficiency and poor reliability. In order to improve the automation level of shaft-part quality detection and establish a relevant industry quality standard, a machine vision inspection system connected to an MCU was designed to realize surface detection of shaft parts. The system adopts a monochrome line-scan digital camera and uses dark-field, forward illumination to acquire high-contrast images. After image filtering and enhancement, the images are segmented into binary images by the maximum between-cluster variance method; the main contours are then extracted using aspect ratio and area as evaluation criteria, and the centre-of-gravity coordinates of each defect area, i.e. the locating-point coordinates, are calculated. Finally, defect areas are marked by a coding pen communicating with the MCU. Experiments show that no defect was omitted and the false-alarm rate was lower than 5%, demonstrating that the designed system meets the demands of on-line, real-time detection of shaft parts.
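    The processing chain described (filtering, maximum between-cluster variance segmentation, contour screening by area and aspect ratio, then centre-of-gravity localisation) is straightforward to express in OpenCV, where Otsu's method implements the between-cluster variance criterion. A minimal sketch; the file name and screening limits are illustrative:

    ```python
    import cv2

    gray = cv2.imread("shaft_line_scan.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    gray = cv2.medianBlur(gray, 5)                   # suppress sensor noise
    # Maximum between-cluster variance segmentation = Otsu's threshold:
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        area = cv2.contourArea(c)
        x, y, w, h = cv2.boundingRect(c)
        if area < 50 or not (0.2 < w / h < 5.0):     # screen by area and aspect ratio
            continue
        m = cv2.moments(c)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # centre of gravity
        print(f"defect locating point: ({cx:.0f}, {cy:.0f}), area {area:.0f} px")
    ```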

  1. The influence of stimulants, sedatives, and fatigue on tunnel vision: risk factors for driving and piloting.

    PubMed

    Mills, K C; Spruill, S E; Kanne, R W; Parkman, K M; Zhang, Y

    2001-01-01

    A computerized task was used in two studies to examine the influence of stimulants, sedatives, and fatigue on single-target and divided-attention responses in different parts of the visual field. The drug effects were evaluated over time with repeated behavioral and subjective measures against ascending and descending drug levels. In the first study, 18 fully rested participants received placebo, alprazolam (0.5 mg), and dextroamphetamine (10 mg). Alprazolam impairs performance, whereas dextroamphetamine induces enhancement and tunnel vision. Study 2 exposed 32 participants to fatigue and no fatigue with a repeated-measures crossover design. Four independent groups subsequently received placebo, dextroamphetamine (10 mg), caffeine (250 mg), or alcohol (.07%). Under fatigue, stimulants have no performance-enhancing effects, whereas impairment from alcohol is severe. Under no fatigue, alcohol has a modest effect, caffeine has no effect, and dextroamphetamine significantly enhances divided-attention performance coincident with tunnel vision. Participants rate all drug effects more stimulating and less sedating while fatigued. Implications for transportation safety are discussed. Actual or potential applications of this research include driver and pilot training.

  2. Strengthening responses to the HIV/AIDS pandemic: an internal evaluation of the Caribbean Health Leadership Institute.

    PubMed

    Umble, K; Bain, B; Ruddock-Small, M; Mahanna, E; Baker, E L

    2012-07-01

    Leadership development is a strategy for improving national responses to HIV/AIDS. The University of the West Indies offers the Caribbean Health Leadership Institute (CHLI) to enhance leaders' effectiveness and responses to HIV/AIDS through a cooperative agreement with the Centers for Disease Control and Prevention. CHLI enrolls leaders in annual cohorts numbering 20-40. To examine how CHLI influenced graduates' self-understanding, skills, approaches, vision, commitments, courage, confidence, networks, and contributions to program, organizational, policy, and systems improvements. Web-based surveys and interviews of graduates. CHLI increased graduates' self-understanding and skills and strengthened many graduates' vision, confidence, and commitments to improving systems. It helped graduates improve programs, policies, and systems by: motivating them and giving them ideas for changes to pursue, encouraging them to share their vision, deepening skills in areas such as systems thinking, policy advocacy, and communication, strengthening their inclusion of partners and team members, and influencing how they interacted with others. Training both HIV-focused and general health leaders can help both kinds of leaders foster improvements in HIV services and policies. Learners greatly valued self-assessments, highly interactive sessions, and the opportunity to build a network of professional colleagues. Projects provided opportunities to address substantive issues and immediately apply learning to work. Leadership development evaluations in the United States have also emphasized the complementary benefits of assessment and feedback, skills development, and network development. Global leadership programs should find ways to combine these components in both traditional face-to-face and distance-learning contexts.

  3. Vehicle-based vision sensors for intelligent highway systems

    NASA Astrophysics Data System (ADS)

    Masaki, Ichiro

    1989-09-01

    This paper describes a vision system, based on an ASIC (Application-Specific Integrated Circuit) approach, for vehicle guidance on highways. After reviewing related work in the fields of intelligent vehicles, stereo vision, and ASIC-based approaches, the paper focuses on a stereo vision system for intelligent cruise control. The system measures the distance to the vehicle ahead using trinocular triangulation. An application-specific processor architecture was developed to offer low mass-production cost, real-time operation, low power consumption, and small physical size. The system was installed in the trunk of a car and evaluated successfully on highways.
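
    The distance measurement rests on the standard triangulation relation Z = f·B/d for a rectified camera pair (a trinocular rig adds a third camera to cross-check matches). A minimal sketch follows; the focal length, baseline and disparity values are illustrative, not the system's actual parameters.

        # Depth from disparity: the relation underlying (tri)ocular triangulation.
        def distance_from_disparity(focal_px: float, baseline_m: float,
                                    disparity_px: float) -> float:
            """Z = f * B / d for a rectified camera pair."""
            if disparity_px <= 0:
                raise ValueError("disparity must be positive")
            return focal_px * baseline_m / disparity_px

        # Illustrative values: 700 px focal length, 0.3 m baseline, 10 px disparity.
        print(distance_from_disparity(700.0, 0.3, 10.0))  # -> 21.0 m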

  4. A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogamous Dual-Core Embedded System Architecture

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956

  5. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogamous dual-core embedded system architecture.

    PubMed

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system.

  6. Tunnel vision: sharper gradient of spatial attention in autism.

    PubMed

    Robertson, Caroline E; Kravitz, Dwight J; Freyberg, Jan; Baron-Cohen, Simon; Baker, Chris I

    2013-04-17

    Enhanced perception of detail has long been regarded a hallmark of autism spectrum conditions (ASC), but its origins are unknown. Normal sensitivity on all fundamental perceptual measures-visual acuity, contrast discrimination, and flicker detection-is strongly established in the literature. If individuals with ASC do not have superior low-level vision, how is perception of detail enhanced? We argue that this apparent paradox can be resolved by considering visual attention, which is known to enhance basic visual sensitivity, resulting in greater acuity and lower contrast thresholds. Here, we demonstrate that the focus of attention and concomitant enhancement of perception are sharper in human individuals with ASC than in matched controls. Using a simple visual acuity task embedded in a standard cueing paradigm, we mapped the spatial and temporal gradients of attentional enhancement by varying the distance and onset time of visual targets relative to an exogenous cue, which obligatorily captures attention. Individuals with ASC demonstrated a greater fall-off in performance with distance from the cue than controls, indicating a sharper spatial gradient of attention. Further, this sharpness was highly correlated with the severity of autistic symptoms in ASC, as well as autistic traits across both ASC and control groups. These findings establish the presence of a form of "tunnel vision" in ASC, with far-reaching implications for our understanding of the social and neurobiological aspects of autism.

  7. Machine vision systems using machine learning for industrial product inspection

    NASA Astrophysics Data System (ADS)

    Lu, Yi; Chen, Tie Q.; Chen, Jie; Zhang, Jian; Tisler, Anthony

    2002-02-01

    Machine vision inspection requires efficient processing time and accurate results. In this paper, we present a machine vision inspection architecture, SMV (Smart Machine Vision). SMV decomposes a machine vision inspection problem into two stages: Learning Inspection Features (LIF) and On-Line Inspection (OLI). The LIF is designed to learn visual inspection features from design data and/or from inspection products. During the OLI stage, the inspection system uses the knowledge learnt by the LIF component to inspect the visual features of products. In this paper we present two machine vision inspection systems developed under the SMV architecture for two different types of products, Printed Circuit Board (PCB) and Vacuum Fluorescent Display (VFD) boards. In the VFD board inspection system, the LIF component learns inspection features from a VFD board and its display patterns. In the PCB board inspection system, the LIF learns the inspection features from the CAD file of a PCB board. In both systems, the LIF component also incorporates interactive learning to make the inspection system more powerful and efficient. The VFD system has been deployed successfully in three different manufacturing companies, and the PCB inspection system is in the process of being deployed in a manufacturing plant.
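
    The two-stage idea, learning inspection features offline (LIF) and applying them on-line (OLI), might be sketched as below. The LearnedInspector class and its three toy features are assumptions for illustration, not the SMV implementation.

        # Sketch of a learn-then-inspect loop in the spirit of LIF/OLI.
        import numpy as np

        class LearnedInspector:
            def fit(self, good_samples):                  # LIF stage: learn from samples
                feats = np.stack([self._features(s) for s in good_samples])
                self.mean = feats.mean(axis=0)
                self.std = feats.std(axis=0) + 1e-6
                return self

            def inspect(self, image, z_thresh=3.0):       # OLI stage: apply learnt model
                z = np.abs((self._features(image) - self.mean) / self.std)
                return bool((z < z_thresh).all())         # True -> part passes

            @staticmethod
            def _features(img):
                # Toy features: mean intensity, contrast, and edge energy.
                gy, gx = np.gradient(img.astype(float))
                return np.array([img.mean(), img.std(), np.hypot(gx, gy).mean()])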

  8. An integrated port camera and display system for laparoscopy.

    PubMed

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.

  9. General Mission Analysis Tool (GMAT): Mission, Vision, and Business Case

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.

    2007-01-01

    The goal of the GMAT project is to develop new space trajectory optimization and mission design technology by working inclusively with ordinary people, universities, businesses, and other government organizations, and to share that technology in an open and unhindered way. GMAT is a free and open-source software system: free for anyone to use in the development of new mission concepts or to improve current missions, and freely available in source code form for enhancement or future technology development.

  10. FLORA™: Phase I development of a functional vision assessment for prosthetic vision users

    PubMed Central

    Geruschat, Duane R; Flax, Marshall; Tanna, Nilima; Bianchi, Michelle; Fisher, Andy; Goldschmidt, Mira; Fisher, Lynne; Dagnelie, Gislin; Deremeik, Jim; Smith, Audrey; Anaflous, Fatima; Dorn, Jessy

    2014-01-01

    Background Research groups and funding agencies need a functional assessment suitable for an ultra-low vision population in order to evaluate the impact of new vision restoration treatments. The purpose of this study was to develop a pilot assessment to capture the functional vision ability and well-being of subjects whose vision has been partially restored with the Argus II Retinal Prosthesis System. Methods The Functional Low-Vision Observer Rated Assessment (FLORA) pilot assessment involved a self-report section, a list of functional vision tasks for observation of performance, and a case narrative summary. Results were analyzed to determine whether the interview questions and functional vision tasks were appropriate for this ultra-low vision population and whether the ratings suffered from floor or ceiling effects. Thirty subjects with severe to profound retinitis pigmentosa (bare light perception or worse in both eyes) were enrolled in a clinical trial and implanted with the Argus II System. From this population, twenty-six subjects were assessed with the FLORA. Seven different evaluators administered the assessment. Results All 14 interview questions were asked. All 35 functional vision tasks were selected for evaluation at least once, with an average of 20 subjects being evaluated for each test item. All four rating options -- impossible (33%), difficult (23%), moderate (24%) and easy (19%) -- were used by the evaluators. Evaluators also judged the amount of vision the subjects were observed to use in completing the various tasks; on average, subjects used vision alone for 75% of tasks with the System ON, and 29% with the System OFF. Conclusion The first version of the FLORA was found to contain useful elements for evaluation and to avoid floor and ceiling effects. The next phase of development will be to refine the assessment and to establish reliability and validity to increase its value as a functional vision and well-being assessment tool. PMID:25675964

  11. Educational Applications of Vision Therapy: A Pilot Study on Children with Autism.

    ERIC Educational Resources Information Center

    Lovelace, Kelly; Rhodes, Heidi; Chambliss, Catherine

    This report discusses the outcomes of a study that explored the feasibility of using vision therapy (VT) as part of an interdisciplinary approach to the education of children with autism. Traditional research on VT has explored its usefulness in helping patients to use both eyes together, improve depth perception, and enhance visual acuity.…

  12. B. F. Skinner's Utopian Vision: Behind and beyond "Walden Two"

    ERIC Educational Resources Information Center

    Altus, Deborah E.; Morris, Edward K.

    2009-01-01

    This paper addresses B. F. Skinner's utopian vision for enhancing social justice and human wellbeing in his 1948 novel, "Walden Two." In the first part, we situate the book in its historical, intellectual, and social context of the utopian genre, address critiques of the book's premises and practices, and discuss the fate of intentional…

  13. Vision in Children and Adolescents with Autistic Spectrum Disorder: Evidence for Reduced Convergence

    ERIC Educational Resources Information Center

    Milne, Elizabeth; Griffiths, Helen; Buckley, David; Scope, Alison

    2009-01-01

    Evidence of atypical perception in individuals with ASD is mainly based on self report, parental questionnaires or psychophysical/cognitive paradigms. There have been relatively few attempts to establish whether binocular vision is enhanced, intact or abnormal in those with ASD. To address this, we screened visual function in 51 individuals with…

  14. Night vision: changing the way we drive

    NASA Astrophysics Data System (ADS)

    Klapper, Stuart H.; Kyle, Robert J. S.; Nicklin, Robert L.; Kormos, Alexander L.

    2001-03-01

    A revolutionary new Night Vision System has been designed to help drivers see well beyond their headlights. From luxury automobiles to heavy trucks, Night Vision is helping drivers see better, see further, and react sooner. This paper describes how Night Vision Systems are being used in transportation and their viability for the future. It describes recent improvements to the system currently in the second year of production. It also addresses consumer education and awareness, cost reduction, product reliability, market expansion and future improvements.

  15. A Machine Vision System for Automatically Grading Hardwood Lumber - (Proceedings)

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas H. Drayer; Joe G. Tront; Philip A. Araman; Robert L. Brisbon

    1990-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  16. Machine Vision Systems for Processing Hardwood Lumber and Logs

    Treesearch

    Philip A. Araman; Daniel L. Schmoldt; Tai-Hoon Cho; Dongping Zhu; Richard W. Conners; D. Earl Kline

    1992-01-01

    Machine vision and automated processing systems are under development at Virginia Tech University with support and cooperation from the USDA Forest Service. Our goals are to help U.S. hardwood producers automate, reduce costs, increase product volume and value recovery, and market higher value, more accurately graded and described products. Any vision system is...

  17. Machine vision system for inspecting characteristics of hybrid rice seed

    NASA Astrophysics Data System (ADS)

    Cheng, Fang; Ying, Yibin

    2004-03-01

    Obtaining clear images, which helps improve classification accuracy, involves many factors; light source, lens extender, and background are discussed in this paper. Analysis of rice seed reflectance curves showed that the optimal wavelength of the light source for discriminating diseased seeds from normal rice seeds in the monochrome image-recognition mode was about 815 nm for jinyou402 and shanyou10. To determine optimal conditions for acquiring digital images of rice seed using a computer vision system, an adjustable color machine vision system was developed. With a 20 mm to 25 mm lens extender, the machine vision system produces close-up images that make it easier to recognize the characteristics of hybrid rice seeds. A white background proved better than a black background for inspecting rice seeds infected by disease with shape-based algorithms. Experimental results indicated good classification for most of the characteristics with the machine vision system, and the same algorithms yielded better results under the optimized conditions for rice seed quality inspection. In particular, the image processing can resolve details such as fine fissures.

  18. AN INVESTIGATION OF VISION PROBLEMS AND THE VISION CARE SYSTEM IN RURAL CHINA.

    PubMed

    Bai, Yunli; Yi, Hongmei; Zhang, Linxiu; Shi, Yaojiang; Ma, Xiaochen; Congdon, Nathan; Zhou, Zhongqiang; Boswell, Matthew; Rozelle, Scott

    2014-11-01

    This paper examines the prevalence of vision problems and the accessibility to and quality of vision care in rural China. We obtained data from 4 sources: 1) the National Rural Vision Care Survey; 2) the Private Optometrists Survey; 3) the County Hospital Eye Care Survey; and 4) the Rural School Vision Care Survey. The data from each of the surveys were collected by the authors during 2012. Thirty-three percent of the rural population surveyed self-reported vision problems. Twenty-two percent of subjects surveyed had ever had a vision exam. Among those who self-reported having vision problems, 34% did not wear eyeglasses. Fifty-four percent of those with vision problems who had eyeglasses did not have a vision exam prior to receiving glasses. However, having a vision exam did not always guarantee access to quality vision care. Four channels of vision care service were assessed. The school vision examination program did not increase the usage rate of eyeglasses. Each county hospital was staffed with three eye doctors having one year of education beyond high school, serving more than 400,000 residents. Private optometrists often had low levels of education and professional certification. In conclusion, our findings show that the vision care system in rural China is inadequate and ineffective in meeting the needs of the rural population sampled.

  19. MEMS-based system and image processing strategy for epiretinal prosthesis.

    PubMed

    Xia, Peng; Hu, Jie; Qi, Jin; Gu, Chaochen; Peng, Yinghong

    2015-01-01

    Retinal prostheses have the potential to restore some level of visual function to patients suffering from retinal degeneration. In this paper, an epiretinal approach with active stimulation devices is presented. The MEMS-based processing system consists of an external micro-camera, an information processor, an implanted electrical stimulator and a microelectrode array. An image processing strategy combining image clustering and enhancement techniques was proposed and evaluated by psychophysical experiments. The results indicated that the image processing strategy improved visual performance compared with directly merging pixels to a low resolution. The image processing methods assist the epiretinal prosthesis in vision restoration.
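
    The combination of clustering and resolution reduction described above might look like the following sketch. The 6 x 10 grid (matching the electrode count of typical epiretinal arrays) and the simple Lloyd-iteration clustering are assumptions for illustration, not the authors' exact strategy.

        # Simplify a scene by intensity clustering, then reduce it to an
        # (assumed) electrode-array resolution.
        import numpy as np

        def to_phosphene_image(gray, n_clusters=4, grid=(6, 10), seed=0):
            rng = np.random.default_rng(seed)
            pix = gray.reshape(-1, 1).astype(float)
            centers = rng.choice(pix.ravel(), n_clusters)     # crude k-means init
            for _ in range(10):                               # Lloyd iterations
                labels = np.argmin(np.abs(pix - centers), axis=1)
                for k in range(n_clusters):
                    if (labels == k).any():
                        centers[k] = pix[labels == k].mean()
            clustered = centers[labels].reshape(gray.shape)
            h, w = gray.shape
            gh, gw = grid
            out = clustered[:h - h % gh, :w - w % gw]         # crop to divisible size
            return out.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))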

  20. Effects of vision on head-putter coordination in golf.

    PubMed

    Gonzalez, David Antonio; Kegel, Stefan; Ishikura, Tadao; Lee, Tim

    2012-07-01

    Low-skill golfers coordinate the movements of their head and putter with an allocentric, isodirectional coupling, which is opposite to the allocentric, antidirectional coordination pattern used by experts (Lee, Ishikura, Kegel, Gonzalez, & Passmore, 2008). The present study investigated the effects of four vision conditions (full vision, no vision, target focus, and ball focus) on head-putter coupling in low-skill golfers. Performance in the absence of vision resulted in a level of high isodirectional coupling that was similar to the full vision condition. However, when instructed to focus on the target during the putt, or to focus on the ball through a restricted viewing angle, low-skill golfers significantly decoupled the head-putter coordination pattern. Outcome measures nonetheless demonstrated that target focus resulted in poorer performance compared with the other visual conditions, thereby providing overall support for a ball-focus strategy to enhance coordination and outcome performance. Focus of attention and reduced visual tracking were hypothesized as potential reasons for the decoupling.

  1. A Vision for the Future of Community Psychology Education and Training.

    PubMed

    Jimenez, Tiffeny R; Sánchez, Bernadette; McMahon, Susan D; Viola, Judah

    2016-12-01

    As we reflect on the founding vision of the field of community psychology in the United States, we assess our progress toward achieving, building upon, and refining this vision. We review early literature regarding the US vision of the field, provide a historical overview of education and training within the field, and provide recommendations to guide and strengthen our approach to education. Our recommendations include the following: 1) serve as a resource to communities, 2) promote a sense of community within our field, 3) diversify students, faculty, and leadership, 4) evaluate our efforts, 5) be current and relevant, 6) enhance the visibility and growth of our field, and 7) create globally minded and innovative CPists. We provide strategies for programs, faculty, linkages between researchers and practitioners, and the Society for Community Research and Action. We conclude that community psychology education and training continues to reflect the early vision; however, we believe we must make more intentional efforts to align with the mission and values of the field, and to engage in a critical analysis of our pedagogy. Enhancing and growing undergraduate and graduate education can facilitate the achievement of our goals to promote social justice, prevent and address social problems, and build community connections to become more effective, impactful, and global social change agents. © Society for Community Research and Action 2016.

  2. A Machine Vision System for Automatically Grading Hardwood Lumber - (Industrial Metrology)

    Treesearch

    Richard W. Conners; Tai-Hoon Cho; Chong T. Ng; Thomas T. Drayer; Philip A. Araman; Robert L. Brisbon

    1992-01-01

    Any automatic system for grading hardwood lumber can conceptually be divided into two components. One of these is a machine vision system for locating and identifying grading defects. The other is an automatic grading program that accepts as input the output of the machine vision system and, based on these data, determines the grade of a board. The progress that has...

  3. Machine Vision Giving Eyes to Robots. Resources in Technology.

    ERIC Educational Resources Information Center

    Technology Teacher, 1990

    1990-01-01

    This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)

  4. 3-D Signal Processing in a Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Philip A. Araman

    1991-01-01

    This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to three dimensions. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...

  5. Intensity measurement of automotive headlamps using a photometric vision system

    NASA Astrophysics Data System (ADS)

    Patel, Balvant; Cruz, Jose; Perry, David L.; Himebaugh, Frederic G.

    1996-01-01

    Requirements for automotive head lamp luminous intensity tests are introduced. The rationale for developing a non-goniometric photometric test system is discussed. The design of the Ford photometric vision system (FPVS) is presented, including hardware, software, calibration, and system use. Directional intensity plots and regulatory test results obtained from the system are compared to corresponding results obtained from a Ford goniometric test system. Sources of error for the vision system and goniometer are discussed. Directions for new work are identified.

  6. Foveal Processing Under Concurrent Peripheral Load in Profoundly Deaf Adults.

    PubMed

    Dye, Matthew W G

    2016-04-01

    Development of the visual system typically proceeds in concert with the development of audition. One result is that the visual system of profoundly deaf individuals differs from that of those with typical auditory systems. While past research has suggested deaf people have enhanced attention in the visual periphery, it is still unclear whether or not this enhancement entails deficits in central vision. Profoundly deaf and typically hearing adults were administered a variant of the useful field of view task that independently assessed performance on concurrent central and peripheral tasks. Identification of a foveated target was impaired by a concurrent selective peripheral attention task, more so in profoundly deaf adults than in the typically hearing. Previous findings of enhanced performance on the peripheral task were not replicated. These data are discussed in terms of flexible allocation of spatial attention targeted towards perceived task demands, and support a modified "division of labor" hypothesis whereby attentional resources co-opted to process peripheral space result in reduced resources in the central visual field. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. Unification and Enhancement of Planetary Robotic Vision Ground Processing: The EC FP7 Project PRoVisG

    NASA Astrophysics Data System (ADS)

    Paar, G.

    2009-04-01

    At present, mainly the US has realized planetary space missions with an essential robotics background. Joining institutions, companies and universities from different established groups in Europe and two relevant players from the US, the EC FP7 project PRoVisG started in autumn 2008 to demonstrate the European ability to realize high-level processing of robotic vision image products from the surface of planetary bodies. PRoVisG will build a unified European framework for Robotic Vision Ground Processing. State-of-the-art computer vision technology will be collected inside and outside Europe to better exploit the image data gathered during past, present and future robotic space missions to the Moon and the planets. This will lead to a significant enhancement of the scientific, technologic and educational outcome of such missions. We report on the main PRoVisG objectives and the development status:
    - Past, present and future planetary robotic mission profiles are analysed in terms of existing solutions and requirements for vision processing.
    - The generic processing chain is based on unified vision sensor descriptions and processing interfaces. Processing components available at the PRoVisG Consortium Partners will be completed by and combined with modules collected within the international computer vision community in the form of Announcements of Opportunity (AOs).
    - A Web GIS is developed to integrate the processing results obtained with data from planetary surfaces into the global planetary context.
    - Towards the end of the 39-month project period, PRoVisG will address the public by means of a final robotic field test in representative terrain. European taxpayers will be able to monitor the imaging and vision processing in a Mars-like environment, thus gaining an insight into the complexity and methods of processing, the potential and decision making of scientific exploitation of such data, and not least the elegance and beauty of the resulting image products and their visualization.
    - The educational aspect is addressed by two summer schools towards the end of the project, presenting robotic vision to students who are the future providers of European science and technology, inside and outside the space domain.

  8. Enhancing Perceptibility of Barely Perceptible Targets

    DTIC Science & Technology

    1982-12-28

    [Recovered fragment] Channels tuned to low spatial frequencies respond faster than those tuned to high spatial frequencies, appearing to have a shorter latency (Breitmeyer, 1975; Tolhurst, 1975; Vassilev & Mitov, 1976; Breitmeyer & Ganz, 1975; Watson & Nachmias, 1977; Lupp, Bauske, & Wolf).

  9. Testing and Evaluation of Low-Light Sensors to Enhance Intelligence, Surveillance, and Reconnaissance (ISR) and Real-Time Situational Awareness

    DTIC Science & Technology

    2009-03-01

    [Recovered fragment] The report tests and evaluates the ALAN (Kestrel) low-light camera to understand its true capabilities and limitations, and its applicability as an option to more expensive infrared, thermal, or night vision applications.

  10. Education for All: An Expanded Vision. World Conference on Education for All (Jomtien, Thailand, March 5-9, 1990). Monograph II. Roundtable Themes II.

    ERIC Educational Resources Information Center

    Fordham, Paul

    This seven-chapter monograph elaborates on five components of an expanded vision of basic education. It presents material that deals with aspects of these components: universalizing access and promoting equity, focusing on learning, broadening the means and scope of basic education, enhancing the environment for learning, and strengthening…

  11. Enhancing the Entertainment Experience of Blind and Low-Vision Theatregoers through Touch Tours

    ERIC Educational Resources Information Center

    Udo, J. P.; Fels, D. I.

    2010-01-01

    In this paper we demonstrate how universal design theory and the research available on museum-based touch tours can be used to develop a touch tour for blind and low-vision theatregoers. We discuss these theoretical and practical approaches with reference to data collected and experience gained from the creation and execution of a touch tour for…

  12. The Effects of Synthetic and Enhanced Vision Technologies for Lunar Landings

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Norman, Robert M.; Prinzel, Lawrence J., III; Bailey, Randall E.; Arthur, Jarvis J., III; Shelton, Kevin J.; Williams, Steven P.

    2009-01-01

    Eight pilots participated as test subjects in a fixed-based simulation experiment to evaluate advanced vision display technologies such as Enhanced Vision (EV) and Synthetic Vision (SV) for providing terrain imagery on flight displays in a Lunar Lander Vehicle. Subjects were asked to fly 20 approaches to the Apollo 15 lunar landing site with four different display concepts - Baseline (symbology only with no terrain imagery), EV only (terrain imagery from Forward Looking Infra Red, or FLIR, and LIght Detection and Ranging, or LIDAR, sensors), SV only (terrain imagery from onboard database), and Fused EV and SV concepts. As expected, manual landing performance was excellent (within a meter of landing site center) and not affected by the inclusion of EV or SV terrain imagery on the Lunar Lander flight displays. Subjective ratings revealed significant situation awareness improvements with the concepts employing EV and/or SV terrain imagery compared to the Baseline condition that had no terrain imagery. In addition, display concepts employing EV imagery (compared to the SV and Baseline concepts which had none) were significantly better for pilot detection of intentional but unannounced navigation failures since this imagery provided an intuitive and obvious visual methodology to monitor the validity of the navigation solution.

  13. The study of stereo vision technique for the autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Li, Pei; Wang, Xi; Wang, Jiang-feng

    2015-08-01

    Stereo vision technology using two or more cameras can recover 3D information within the field of view. This technology can effectively help the autonomous navigation system of an unmanned vehicle judge pavement conditions within the field of view and measure obstacles on the road. In this paper, stereo vision for obstacle measurement and avoidance on an autonomous vehicle is studied, and the key techniques are analyzed and discussed. The system hardware is built and the software is debugged, and the measurement performance is finally illustrated with measured data. Experiments show that the 3D structure within the field of view can be reconstructed effectively by stereo vision, providing the basis for pavement condition judgment. Compared with the navigation radar used in unmanned-vehicle measuring systems, the stereo vision system has advantages such as low cost and measurement range, and it has good application prospects.
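
    A minimal dense-disparity sketch of the reconstruction step is given below, assuming OpenCV's block-matching stereo; the image file names and matcher parameters are hypothetical.

        # Dense disparity for road-scene 3D reconstruction (assumed OpenCV API).
        import cv2

        left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical frames
        right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
        stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = stereo.compute(left, right).astype(float) / 16.0  # fixed-point -> px
        # With rectified cameras, per-pixel depth follows Z = f * B / disparity.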

  14. Self calibration of the stereo vision system of the Chang'e-3 lunar rover based on the bundle block adjustment

    NASA Astrophysics Data System (ADS)

    Zhang, Shuo; Liu, Shaochuang; Ma, Youqing; Qi, Chen; Ma, Hao; Yang, Huan

    2017-06-01

    The Chang'e-3 was the first lunar soft-landing probe of China. It was composed of the lander and the lunar rover. The Chang'e-3 landed successfully in the northwest of the Mare Imbrium on December 14, 2013. The lunar rover carried out movement, imaging and geological survey after landing. The lunar rover was equipped with a stereo vision system made up of the Navcam system, the mast mechanism and the inertial measurement unit (IMU). The Navcam system was composed of two cameras with fixed focal lengths. The mast mechanism was a robot with three revolute joints. The stereo vision system was used to determine the position of the lunar rover, generate digital elevation models (DEM) of the surrounding region and plan the moving paths of the lunar rover. The stereo vision system must be calibrated before use. A control field could be built to calibrate the stereo vision system in a laboratory on the earth. However, the parameters of the stereo vision system would change after the launch, the orbital changes, the braking and the landing; therefore, the stereo vision system should be self-calibrated on the moon. An integrated self-calibration method based on the bundle block adjustment is proposed in this paper. The bundle block adjustment uses each bundle of rays as the basic adjustment unit, and the adjustment is implemented over the whole photogrammetric region. The stereo vision system can be self-calibrated with the proposed method under the unknown lunar environment, and all parameters can be estimated simultaneously. The experiment was conducted in the ground lunar simulation field. The proposed method was compared with other methods such as the CAHVOR method, the vanishing point method, the Denavit-Hartenberg method, the factorization method and the weighted least-squares method. The analyzed results proved that the accuracy of the proposed method was superior to those of the other methods. Finally, the proposed method was used in practice to self-calibrate the stereo vision system of the Chang'e-3 lunar rover on the moon.
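
    Bundle block adjustment minimizes reprojection error jointly over camera poses and 3D points. The schematic below assumes SciPy's least_squares and a first-order pinhole model; all symbols and the parameter layout are illustrative, not the paper's formulation.

        # Schematic of the bundle-adjustment objective (assumed pinhole model).
        import numpy as np
        from scipy.optimize import least_squares

        def project(pose, point, f=1000.0):
            """Project a 3D point; pose = (rx, ry, rz, tx, ty, tz), with a
            first-order (small-angle) rotation for brevity."""
            rx, ry, rz, tx, ty, tz = pose
            R = np.array([[1.0, -rz, ry], [rz, 1.0, -rx], [-ry, rx, 1.0]])
            Xc = R @ point + np.array([tx, ty, tz])
            return f * Xc[:2] / Xc[2]

        def residuals(params, observations, n_cams):
            """observations: list of (camera_index, point_index, observed_xy)."""
            poses = params[:6 * n_cams].reshape(n_cams, 6)
            points = params[6 * n_cams:].reshape(-1, 3)
            return np.concatenate([project(poses[c], points[p]) - uv
                                   for c, p, uv in observations])

        # All poses and points are refined together, e.g.:
        # least_squares(residuals, x0, args=(observations, n_cams))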

  15. Self-powered vision electronic-skin basing on piezo-photodetecting Ppy/PVDF pixel-patterned matrix for mimicking vision.

    PubMed

    Han, Wuxiao; Zhang, Linlin; He, Haoxuan; Liu, Hongmin; Xing, Lili; Xue, Xinyu

    2018-06-22

    The development of multifunctional electronic-skin that establishes human-machine interfaces, enhances perception abilities or has other distinct biomedical applications is the key to the realization of artificial intelligence. In this paper, a new self-powered (battery-free) flexible vision electronic-skin has been realized from pixel-patterned matrix of piezo-photodetecting PVDF/Ppy film. The electronic-skin under applied deformation can actively output piezoelectric voltage, and the outputting signal can be significantly influenced by UV illumination. The piezoelectric output can act as both the photodetecting signal and electricity power. The reliability is demonstrated over 200 light on-off cycles. The sensing unit matrix of 6 × 6 pixels on the electronic-skin can realize image recognition through mapping multi-point UV stimuli. This self-powered vision electronic-skin that simply mimics human retina may have potential application in vision substitution.

  16. Self-powered vision electronic-skin basing on piezo-photodetecting Ppy/PVDF pixel-patterned matrix for mimicking vision

    NASA Astrophysics Data System (ADS)

    Han, Wuxiao; Zhang, Linlin; He, Haoxuan; Liu, Hongmin; Xing, Lili; Xue, Xinyu

    2018-06-01

    The development of multifunctional electronic-skin that establishes human-machine interfaces, enhances perception abilities or has other distinct biomedical applications is the key to the realization of artificial intelligence. In this paper, a new self-powered (battery-free) flexible vision electronic-skin has been realized from pixel-patterned matrix of piezo-photodetecting PVDF/Ppy film. The electronic-skin under applied deformation can actively output piezoelectric voltage, and the outputting signal can be significantly influenced by UV illumination. The piezoelectric output can act as both the photodetecting signal and electricity power. The reliability is demonstrated over 200 light on–off cycles. The sensing unit matrix of 6 × 6 pixels on the electronic-skin can realize image recognition through mapping multi-point UV stimuli. This self-powered vision electronic-skin that simply mimics human retina may have potential application in vision substitution.

  17. A computer vision for animal ecology.

    PubMed

    Weinstein, Ben G

    2018-05-01

    A central goal of animal ecology is to observe species in the natural world. The cost and challenge of data collection often limit the breadth and scope of ecological study. Ecologists often use image capture to bolster data collection in time and space. However, the ability to process these images remains a bottleneck. Computer vision can greatly increase the efficiency, repeatability and accuracy of image review. Computer vision uses image features, such as colour, shape and texture to infer image content. I provide a brief primer on ecological computer vision to outline its goals, tools and applications to animal ecology. I reviewed 187 existing applications of computer vision and divided articles into ecological description, counting and identity tasks. I discuss recommendations for enhancing the collaboration between ecologists and computer scientists and highlight areas for future growth of automated image analysis. © 2017 The Author. Journal of Animal Ecology © 2017 British Ecological Society.

  18. An embedded vision system for an unmanned four-rotor helicopter

    NASA Astrophysics Data System (ADS)

    Lillywhite, Kirt; Lee, Dah-Jye; Tippetts, Beau; Fowers, Spencer; Dennis, Aaron; Nelson, Brent; Archibald, James

    2006-10-01

    In this paper an embedded vision system and control module is introduced that is capable of controlling an unmanned four-rotor helicopter and processing live video for various law enforcement, security, military, and civilian applications. The vision system is implemented on a newly designed compact FPGA board (Helios). The Helios board contains a Xilinx Virtex-4 FPGA chip and memory, making it capable of implementing real-time vision algorithms. A Smooth Automated Intelligent Leveling daughter board (SAIL), attached to the Helios board, collects attitude and heading information to be processed in order to control the unmanned helicopter. The SAIL board uses an electrolytic tilt sensor, compass, voltage level converters, and analog-to-digital converters to perform its operations. While level flight can be maintained, problems stemming from the characteristics of the tilt sensor limit the maneuverability of the helicopter. The embedded vision system has proven to give very good results in its performance of a number of real-time robotic vision algorithms.

  19. Skilled Deaf Readers have an Enhanced Perceptual Span in Reading

    PubMed Central

    Bélanger, Nathalie N.; Slattery, Timothy J.; Mayberry, Rachel I.; Rayner, Keith

    2013-01-01

    Recent evidence suggests that deaf people have enhanced visual attention to simple stimuli in the parafovea in comparison to hearing people. Although a large part of reading involves processing the fixated words in foveal vision, readers also utilize information in parafoveal vision to pre-process upcoming words and decide where to look next. We investigated whether auditory deprivation affects low-level visual processing during reading, and compared the perceptual span of deaf signers who were skilled and less skilled readers to that of skilled hearing readers. Compared to hearing readers, deaf readers had a larger perceptual span than would be expected by their reading ability. These results provide the first evidence that deaf readers’ enhanced attentional allocation to the parafovea is used during a complex cognitive task such as reading. PMID:22683830

  20. Robots Would Couple And Uncouple Fluid And Electrical Lines

    NASA Technical Reports Server (NTRS)

    Del Castillo, Eduardo Lopez; Davis, Virgil; Ferguson, Bob; Reichle, Garland

    1992-01-01

    Robots make and break connections between umbilical plates and mating connectors on rockets about to be launched. Sensing and control systems include vision, force, and torque subsystems. Enhances safety by making it possible to couple and uncouple umbilical plates quickly, without exposing human technicians to hazards of leaking fuels and oxidizers. Significantly reduces time spent to manually connect umbilicals. Robots based on similar principles used in refueling of National AeroSpace Plane (NASP) and satellites and orbital transfer vehicles in space.

  1. Parametric Study of Diffusion-Enhancement Networks for Spatiotemporal Grouping in Real-Time Artificial Vision

    DTIC Science & Technology

    1993-04-01

    [Recovered fragment] The percept of gamma motion is suggested to occur in later visual motion processing (the long-range or second-order system). Figure 2 (gamma motion): (a) a light of fixed spatial extent is illuminated then extinguished; (b) the percept is of a light expanding and then contracting. Smaller type-B cells provide input to its parvocellular subdivision; from here the magnocellular pathway progresses up through visual cortex area V...

  2. Adaptive Optics for the Human Eye

    NASA Astrophysics Data System (ADS)

    Williams, D. R.

    2000-05-01

    Adaptive optics can extend not only the resolution of ground-based telescopes, but also the human eye. Both static and dynamic aberrations in the cornea and lens of the normal eye limit its optical quality. Though it is possible to correct defocus and astigmatism with spectacle lenses, higher order aberrations remain. These aberrations blur vision and prevent us from seeing at the fundamental limits set by the retina and brain. They also limit the resolution of cameras to image the living retina, cameras that are a critical for the diagnosis and treatment of retinal disease. I will describe an adaptive optics system that measures the wave aberration of the eye in real time and compensates for it with a deformable mirror, endowing the human eye with unprecedented optical quality. This instrument provides fresh insight into the ultimate limits on human visual acuity, reveals for the first time images of the retinal cone mosaic responsible for color vision, and points the way to contact lenses and laser surgical methods that could enhance vision beyond what is currently possible today. Supported by the NSF Science and Technology Center for Adaptive Optics, the National Eye Institute, and Bausch and Lomb, Inc.

  3. Enhanced situational technologies applied to ship channels

    NASA Astrophysics Data System (ADS)

    Helgeson, Michael A.; Wacker, Roger A.

    1997-06-01

    The Houston Ship Channel ranks as America's number one port in foreign tonnage, welcoming more than 50,000 cargo ships and barges annually. Locally, 196,000 jobs, 5.5 billion dollars in business revenue and 213 million dollars in taxes are generated. Unfortunately, for 32 days of each year vessel traffic stops for hours due to fog, causing an estimated 40-100 million dollar loss as ships idly wait in the channel for the weather to clear. In addition, poor visibility has contributed to past vessel collisions, which have resulted in channel closure and associated damage to property and the environment. Today's imaging technology for synthetic vision systems and enhanced situational awareness systems offers a new solution to this problem. Whereas these systems have typically been targeted at aircraft landing, the channel navigation application provides a peripheral ground-based market. This paper describes two imaging solutions to the problem: one using an active 35 GHz scanning radar and the other using a 94 GHz passive millimeter-wave camera.

  4. Feasibility Study of a Vision-Based Landing System for Unmanned Fixed-Wing Aircraft

    DTIC Science & Technology

    2017-06-01

    [Recovered fragment] This thesis examines the feasibility of applying computer vision techniques and visual feedback in the control loop for an autonomous system, and their integration into an autonomous aircraft control system. Subject terms: autonomous systems, auto-land, computer vision, image processing.

  5. Remote-controlled vision-guided mobile robot system

    NASA Astrophysics Data System (ADS)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle, while the vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high-speed tracking device, which communicates to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track, with positive results showing that at five mph the vehicle can follow a line and at the same time avoid obstacles.

  6. Reconfigurable vision system for real-time applications

    NASA Astrophysics Data System (ADS)

    Torres-Huitzil, Cesar; Arias-Estrada, Miguel

    2002-03-01

    Recently, a growing community of researchers has used reconfigurable systems to solve computationally intensive problems. Reconfigurability provides optimized processors for systems-on-chip designs, and makes it easy to import technology to a new system through reusable modules. The main objective of this work is the investigation of a reconfigurable computer system targeted at computer vision and real-time applications. The system is intended to circumvent the inherent computational load of most window-based computer vision algorithms. It aims to support such tasks by providing an FPGA-based hardware architecture for task-specific vision applications with enough processing power, using as few hardware resources as possible, and a mechanism for building systems with this architecture. Regarding the software part of the system, a library of pre-designed, general-purpose modules that implement common window-based computer vision operations is being investigated. A common generic interface is established for these modules in order to define hardware/software components. These components can be interconnected to develop more complex applications, providing an efficient mechanism for transferring image and result data among modules. Some preliminary results are presented and discussed.

  7. Robust and efficient vision system for group of cooperating mobile robots with application to soccer robots.

    PubMed

    Klancar, Gregor; Kristan, Matej; Kovacic, Stanislav; Orqueda, Omar

    2004-07-01

    In this paper a global vision scheme for estimating the positions and orientations of mobile robots is presented. It is applied to robot soccer, a fast, dynamic game that therefore needs an efficient and robust vision system. The vision system is generally applicable to other robot applications such as mobile transport robots in production and warehouses, attendant robots, fast vision tracking of targets of interest, and entertainment robotics. Basic operation of the vision system is divided into two steps. In the first, the incoming image is scanned and pixels are classified into a finite number of classes; at the same time, a segmentation algorithm is used to find the corresponding regions belonging to one of the classes. In the second step, all the regions are examined, and those that are part of the observed object are selected by means of simple logic procedures. The novelty is focused on optimizing the processing time needed to estimate possible object positions. Better results from the vision system are achieved by implementing camera calibration and a shading-correction algorithm: the former corrects camera lens distortion, while the latter increases robustness to irregular illumination conditions.
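
    The two steps, per-pixel color classification followed by region grouping, might be sketched as follows; the color ranges and the naive flood-fill grouping are assumptions for illustration, not the published implementation.

        # Classify pixels into color classes, then group them into regions.
        import numpy as np

        COLOR_CLASSES = {"ball": ((200, 80, 0), (255, 160, 60)),        # illustrative
                         "team_marker": ((0, 0, 150), (80, 80, 255))}   # RGB ranges

        def classify(rgb):
            """Return one boolean mask per color class."""
            return {name: np.all((rgb >= np.array(lo)) & (rgb <= np.array(hi)), axis=-1)
                    for name, (lo, hi) in COLOR_CLASSES.items()}

        def regions(mask):
            """Naive 4-connected region grouping (flood fill)."""
            seen = np.zeros_like(mask, dtype=bool)
            blobs = []
            for y, x in zip(*np.nonzero(mask)):
                if seen[y, x]:
                    continue
                stack, blob = [(y, x)], []
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    blob.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                blobs.append(blob)
            return blobs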

  8. Two sides of the same coin? The (techno)epistemic cultures of systems and synthetic biology.

    PubMed

    Kastenhofer, Karen

    2013-06-01

    Systems and synthetic biology both emerged around the turn of this century as labels for new research approaches. Although their disciplinary status as well as their relation to each other is rarely discussed in depth, now and again the idea is invoked that both approaches represent 'two sides of the same coin'. The following paper focuses on this general notion and compares it with empirical findings concerning the epistemic cultures prevalent in the two contexts. Drawing on interviews with researchers from both fields, on participatory observation in conferences and courses and on documentary analysis, this paper delineates differences and similarities, incompatibilities and blurred boundaries. By reconstructing systems and synthetic biology's epistemic cultures, this paper argues that they represent two 'communities of vision', encompassing heterogeneous practices. Understanding the relation of the respective visions of understanding nature and engineering life is seen as indispensable for the characterisation of (techno)science in more general terms. Depending on the conceptualisation of understanding and construction (or: science and engineering), related practices such as in silico modelling for enhancing understanding or enabling engineering can either be seen as incommensurable or 'two sides of one coin'. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Navigation of military and space unmanned ground vehicles in unstructured terrains

    NASA Technical Reports Server (NTRS)

    Lescoe, Paul; Lavery, David; Bedard, Roger

    1991-01-01

    Development of unmanned vehicles for local navigation in terrains unstructured by humans is reviewed. Modes of navigation include teleoperation or remote control, computer assisted remote driving (CARD), and semiautonomous navigation (SAN). A first implementation of a CARD system was successfully tested using the Robotic Technology Test Vehicle developed by Jet Propulsion Laboratory. Stereo pictures were transmitted to a remotely located human operator, who performed the sensing, perception, and planning functions of navigation. A computer provided range and angle measurements and the path plan was transmitted to the vehicle which autonomously executed the path. This implementation is to be enhanced by providing passive stereo vision and a reflex control system for autonomously stopping the vehicle if blocked by an obstacle. SAN achievements include implementation of a navigation testbed on a six wheel, three-body articulated rover vehicle, development of SAN algorithms and code, integration of SAN software onto the vehicle, and a successful feasibility demonstration that represents a step forward towards the technology required for long-range exploration of the lunar or Martian surface. The vehicle includes a passive stereo vision system with real-time area-based stereo image correlation, a terrain matcher, a path planner, and a path execution planner.

  10. Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery

    NASA Astrophysics Data System (ADS)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2012-10-01

    In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application is a combination of the technologies of MonoSLAM (Single Camera Simultaneous Localization and Mapping) and computer graphics. A stereo vision system is developed to construct geometric data of the human face for registration with CT images. Reliability and accuracy of the application are enhanced by the use of fiduciary markers fixed to the skull. The MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter. The fiduciary markers provide reference when the AR marker is invisible to the camera. The relationship between the markers on the face and the augmented reality marker is obtained by a registration procedure using the stereo vision system and is updated on-line. A commercially available Android-based tablet PC equipped with a 320×240 front-facing camera was used for implementation. The system is able to provide a live view of the patient overlaid by the solid models of tumors or anatomical structures, as well as the missing part of the tool inside the skull.

  11. Vision-mediated interaction with the Nottingham caves

    NASA Astrophysics Data System (ADS)

    Ghali, Ahmed; Bayomi, Sahar; Green, Jonathan; Pridmore, Tony; Benford, Steve

    2003-05-01

    The English city of Nottingham is widely known for its rich history and compelling folklore. A key attraction is the extensive system of caves to be found beneath Nottingham Castle. Regular guided tours are made of the Nottingham caves, during which castle staff tell stories and explain historical events to small groups of visitors while pointing out relevant cave locations and features. The work reported here is part of a project aimed at enhancing the experience of cave visitors, and providing flexible storytelling tools to their guides, by developing machine vision systems capable of identifying specific actions of guides and/or visitors and triggering audio and/or video presentations as a result. Attention is currently focused on triggering audio material by directing the beam of a standard domestic flashlight towards features of interest on the cave wall. Cameras attached to the walls or roof provide image sequences within which torch light and cave features are detected and their relative positions estimated. When a target feature is illuminated the corresponding audio response is generated. We describe the architecture of the system, its implementation within the caves and the results of initial evaluations carried out with castle guides and members of the public.

  12. In-process fault detection for textile fabric production: onloom imaging

    NASA Astrophysics Data System (ADS)

    Neumann, Florian; Holtermann, Timm; Schneider, Dorian; Kulczycki, Ashley; Gries, Thomas; Aach, Til

    2011-05-01

    Constant and traceable high fabric quality is of great importance both for technical and for high-quality conventional fabrics. Usually, quality inspection is carried out by trained personnel, whose detection rate and maximum period of concentration are limited. Automated low-resolution fabric inspection machines using texture analysis were developed, and systems for in-process inspection on weaving machines ("onloom") have been commercially available since 2003. With these, defects can be detected, but not measured quantitatively and precisely. Most systems are also prone to inevitable machine vibrations, and feedback loops for fault prevention are not established. Technology has evolved since 2003: camera and computer prices dropped, resolutions were enhanced, and recording speeds increased. These are the preconditions for real-time processing of high-resolution images. So far, these new technological achievements are not used in textile fabric production. For efficient use, a measurement system must be integrated into the weaving process, and new algorithms for defect detection and measurement must be developed. The goal of the joint project is the development of a modern machine vision system for nondestructive onloom fabric inspection. The system consists of a vibration-resistant machine integration, a high-resolution machine vision system, and new, reliable, and robust algorithms with a quality database for defect documentation. The system is meant to detect, measure, and classify at least 80% of economically relevant defects. Concepts for feedback loops into the weaving process will be pointed out.

  13. Function-based design process for an intelligent ground vehicle vision system

    NASA Astrophysics Data System (ADS)

    Nagel, Robert L.; Perry, Kenneth L.; Stone, Robert B.; McAdams, Daniel A.

    2010-10-01

    An engineering design framework for an autonomous ground vehicle vision system is discussed. We present both the conceptual and the physical design, following the design process, development, and testing of an intelligent ground vehicle vision system constructed for the 2008 Intelligent Ground Vehicle Competition. During conceptual design, the requirements for the vision system are explored via functional and process analysis, considering the flows into the vehicle and the transformations of those flows. The conceptual design phase concludes with a vision system design that is modular in both hardware and software and is based on a laser range finder and camera for visual perception. During physical design, prototypes are developed and tested independently, following the modular interfaces identified during conceptual design. Prototype models, once functional, are incorporated into the final design. The final vision system design uses a ray-casting algorithm to process camera and laser range finder data and identify potential paths. The ray-casting algorithm is a single thread of the robot's multithreaded application. Other threads control motion, provide feedback, and process sensory data. Once integrated, both hardware and software testing are performed on the robot. We discuss the robot's performance and the lessons learned.
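
    The team's ray-casting algorithm is not specified in the abstract; the sketch below shows the generic idea over a 2D occupancy grid, assuming unit-cell steps and a maximum range, returning the free distance along each candidate heading so a planner can pick the clearest path.

    ```python
    import numpy as np

    def clear_headings(grid, x0, y0, headings_deg, max_range=100):
        """Cast a ray per candidate heading through a 2D occupancy grid
        (True = obstacle) and return the free distance along each.
        (x0, y0) is the robot cell; step size of one cell is assumed.
        """
        distances = {}
        for h in headings_deg:
            dx, dy = np.cos(np.radians(h)), np.sin(np.radians(h))
            d = 0
            while d < max_range:
                x = int(round(x0 + d * dx))
                y = int(round(y0 + d * dy))
                if not (0 <= x < grid.shape[1] and 0 <= y < grid.shape[0]) \
                   or grid[y, x]:
                    break
                d += 1
            distances[h] = d
        return distances  # pick the heading with the largest free distance
    ```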

  14. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to the implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike, asynchronous, pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing, with acquisition and processing performed concurrently as in natural vision systems. The research is aimed at the implementation of algorithms, such as the intensity-dependent summation algorithm and pyramid processing structures, that are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for implementing natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems: conversion to serial format would occur only after the raw intensity data had been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so hardware implementation poses intriguing questions involving vision science.

  15. GSFC Information Systems Technology Developments Supporting the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Hughes, Peter; Dennehy, Cornelius; Mosier, Gary; Smith, Dan; Rykowski, Lisa

    2004-01-01

    The Vision for Space Exploration will guide NASA's future human and robotic space activities. The broad range of human and robotic missions now being planned will require the development of new system-level capabilities enabled by emerging new technologies. Goddard Space Flight Center is actively supporting the Vision for Space Exploration in a number of program management, engineering and technology areas. This paper provides a brief background on the Vision for Space Exploration and a general overview of potential key Goddard contributions. In particular, this paper focuses on describing relevant GSFC information systems capabilities in architecture development; interoperable command, control and communications; and other applied information systems technology/research activities that are applicable to support the Vision for Space Exploration goals. Current GSFC development efforts and task activities are presented together with future plans.

  16. Computer Vision System For Locating And Identifying Defects In Hardwood Lumber

    NASA Astrophysics Data System (ADS)

    Conners, Richard W.; Ng, Chong T.; Cho, Tai-Hoon; McMillin, Charles W.

    1989-03-01

    This paper describes research aimed at developing an automatic cutup system for use in the rough mills of the hardwood furniture and fixture industry. In particular, it describes attempts to create the vision system that will power this automatic cutup system. A number of factors make the development of such a vision system a challenge. First, there is the innate variability of the wood material itself. No two species look exactly the same; in fact, appearance can differ markedly among species. Yet a truly robust vision system must be able to handle a variety of species, preferably with no operator intervention required when changing from one species to another. Second, there is a good deal of variability in the definition of what constitutes a removable defect. The hardwood furniture and fixture industry is diverse in the nature of the products it makes, ranging from hardwood flooring to fancy hardwood furniture, from simple millwork to kitchen cabinets. Thus, depending on the manufacturer, the product, and the quality of the product, what constitutes a removable defect can and does vary. The vision system must be one that can be tailored to meet each of these unique needs, preferably without additional program modifications. This paper describes the vision system that has been developed, assesses the current system capabilities, and discusses directions for future research. It is argued that artificial intelligence methods provide a natural mechanism for attacking this computer vision application.

  17. Audible vision for the blind and visually impaired in indoor open spaces.

    PubMed

    Yu, Xunyi; Ganz, Aura

    2012-01-01

    In this paper we introduce Audible Vision, a system that helps blind and visually impaired users navigate large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his or her position relative to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in real-life, ever-changing environments crowded with people.
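
    The abstract does not detail the audio rendering; as a geometric illustration, the sketch below computes the landmark's bearing relative to the user's heading and maps it to simple constant-power stereo gains, a crude stand-in for true 3D (HRTF-based) audio. The sign convention and gain law are assumptions.

    ```python
    import numpy as np

    def landmark_cue(user_xy, user_heading_rad, landmark_xy):
        """Relative bearing to a landmark and simple left/right gains.

        Real 3D audio would use HRTFs; constant-power panning is only a
        crude stand-in to illustrate the geometry. Angles in radians;
        which side counts as 'left' depends on the coordinate frame.
        """
        dx = landmark_xy[0] - user_xy[0]
        dy = landmark_xy[1] - user_xy[1]
        bearing = np.arctan2(dy, dx) - user_heading_rad    # 0 = dead ahead
        bearing = (bearing + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi]
        pan = np.clip(bearing / (np.pi / 2), -1.0, 1.0)    # -1 .. +1
        left = np.cos((pan + 1) * np.pi / 4)               # constant-power law
        right = np.sin((pan + 1) * np.pi / 4)
        return bearing, left, right
    ```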

  18. Relating Standardized Visual Perception Measures to Simulator Visual System Performance

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Sweet, Barbara T.

    2013-01-01

    Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).
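
    One simple bridge between clinical acuity and display specifications is the angular size of a pixel. The back-of-envelope function below, with assumed example numbers, computes arcminutes per pixel from field of view and resolution; a 20/20 observer resolves detail of about 1 arcminute, so a coarser display, rather than the eye, limits the acuity a pilot can achieve in the simulator.

    ```python
    def arcmin_per_pixel(fov_deg, pixels):
        """Angular size of one display pixel, in arcminutes.

        A display channel needs roughly <= 1 arcmin/pixel before pixel
        structure stops limiting a 20/20 observer. Back-of-envelope only.
        """
        return fov_deg * 60.0 / pixels

    # Example: a 60 deg horizontal field of view rendered at 1920 pixels.
    print(arcmin_per_pixel(60, 1920))  # ~1.9 arcmin/pixel, a "20/38-ish" cap
    ```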

  19. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    Integrating 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural-network approaches aside, a common solution to this problem is to calibrate the vision system and the manipulator independently, and then tie them together via a common mapping into the task space. In other words, both the vision system and the robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same, with a high degree of precision, for both the robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, work currently in progress, is described along with preliminary results and the problems encountered.
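
    To make the "common absolute frame" approach concrete, the sketch below (illustrative transforms, not from the paper) maps a point measured in the camera frame into the robot base frame through the shared world frame; any discrepancy between the two calibrations' notion of that frame shows up directly as task-space error.

    ```python
    import numpy as np

    def to_hom(p):
        return np.append(p, 1.0)

    # Calibrated mappings into a common absolute frame (assumed values):
    # T_world_cam : camera frame -> world, T_world_base : robot base -> world.
    T_world_cam = np.eye(4)
    T_world_base = np.eye(4)
    T_world_base[:3, 3] = [0.5, 0.0, 0.0]  # base 0.5 m from world origin

    def camera_point_in_base(p_cam):
        """Map a 3D point from camera coordinates to robot-base
        coordinates through the shared world frame. Errors in either
        calibration, or in the assumed common frame, add up here."""
        p_world = T_world_cam @ to_hom(p_cam)
        p_base = np.linalg.inv(T_world_base) @ p_world
        return p_base[:3]
    ```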

  20. Understanding Physiological and Degenerative Natural Vision Mechanisms to Define Contrast and Contour Operators

    PubMed Central

    Demongeot, Jacques; Fouquet, Yannick; Tayyab, Muhammad; Vuillerme, Nicolas

    2009-01-01

    Background: Dynamical systems such as neural networks based on lateral inhibition have a large field of applications in image processing, robotics, and morphogenesis modeling. In this paper, we propose some examples of dynamical flows used in image contrasting and contouring. Methodology: First, we present the physiological basis of retinal function, showing the role of lateral inhibition in the generation of optical illusions and pathologic processes. Then, based on these biological considerations about real vision mechanisms, we study an enhancement method for contrasting medical images, using either a discrete neural network approach or its continuous version, i.e., a non-isotropic diffusion-reaction partial differential system. Following this, we introduce other continuous operators based on similar biomimetic approaches: a chemotactic contrasting method, a viability contouring algorithm, and an attentional focus operator. We then introduce the new notion of mixed potential Hamiltonian flows, compare it with the watershed method, and use it for contouring. Conclusions: We conclude by showing the utility of these biomimetic methods with examples of application in medical imaging and computer-assisted surgery. PMID:19547712
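
    A minimal discrete analogue of lateral inhibition is center-surround (difference-of-Gaussians) filtering: each pixel is excited by a narrow neighborhood and inhibited by a wider one. The sketch below is that generic operator, not the paper's diffusion-reaction system; the sigmas and gain are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def lateral_inhibition_contrast(image, sigma_c=1.0, sigma_s=4.0,
                                    gain=1.5):
        """Center-surround contrast enhancement.

        Each pixel is excited by a narrow Gaussian neighborhood and
        inhibited by a wider one, a discrete analogue of retinal
        lateral inhibition; sigma and gain values are illustrative.
        """
        img = image.astype(float)
        center = gaussian_filter(img, sigma_c)
        surround = gaussian_filter(img, sigma_s)
        enhanced = img + gain * (center - surround)
        return np.clip(enhanced, 0, 255)
    ```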

  1. The precision measurement and assembly for miniature parts based on double machine vision systems

    NASA Astrophysics Data System (ADS)

    Wang, X. D.; Zhang, L. F.; Xin, M. Z.; Qu, Y. Q.; Luo, Y.; Ma, T. M.; Chen, L.

    2015-02-01

    In the assembly of miniature parts, structural features on the bottom or side of the parts often need to be aligned and positioned. General assembly equipment with a single vertical, downward-looking machine vision system cannot satisfy this requirement. Precision automatic assembly equipment integrating two machine vision systems was therefore developed. In this equipment, a horizontal vision system measures the position of feature structures in the parts' side view, which cannot be seen by the vertical system. The position measured by the horizontal camera is converted into the vertical vision system's coordinates using calibration information. With careful calibration, part alignment and positioning during assembly can be guaranteed. The developed equipment is easy to implement, modular, and cost effective. The handling of the miniature parts and the assembly procedure are briefly introduced, the calibration procedure is given, and the assembly error is analyzed for compensation.
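
    The key step is converting positions measured by the horizontal camera into the vertical system's coordinates. As a simplified stand-in for the full metric calibration, the sketch below fits a 2D affine transform by least squares from calibration points observed in both coordinate systems (the point correspondences are assumed available).

    ```python
    import numpy as np

    def fit_affine(src_pts, dst_pts):
        """Least-squares 2D affine transform mapping horizontal-camera
        measurements onto the vertical-camera coordinate system, from
        calibration points observed by both systems. An illustrative
        stand-in for the full calibration described in the paper."""
        n = len(src_pts)
        A = np.zeros((2 * n, 6))
        b = np.asarray(dst_pts, float).ravel()
        for i, (x, y) in enumerate(src_pts):
            A[2 * i]     = [x, y, 1, 0, 0, 0]
            A[2 * i + 1] = [0, 0, 0, x, y, 1]
        params, *_ = np.linalg.lstsq(A, b, rcond=None)
        return params.reshape(2, 3)

    def convert(M, pt):
        """Apply the fitted transform to one measured position."""
        return M @ np.array([pt[0], pt[1], 1.0])
    ```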

  2. Spoof Detection for Finger-Vein Recognition System Using NIR Camera.

    PubMed

    Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung

    2017-10-01

    Finger-vein recognition, a new and advanced biometric recognition method, is attracting the attention of researchers because of its advantages, such as high recognition performance and a lower likelihood of theft or of inaccuracies caused by skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system using presentation attack (fake) finger-vein images. As a result, spoof detection, termed presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractors), based on the researchers' observations of the differences between real (live) and presentation attack finger-vein images; consequently, detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision, delivering superior results compared to traditional handcrafted methods on various applications such as image-based face recognition, gender recognition, and image classification. In this paper, we propose a PAD method for a near-infrared (NIR) camera-based finger-vein recognition system using a convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than handcrafted methods through a training procedure. We further process the extracted image features using principal component analysis (PCA) for dimensionality reduction of the feature space and a support vector machine (SVM) for classification. Through extensive experiments, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and delivers superior detection results compared to previous CNN-based and handcrafted methods.
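
    The post-CNN stage of the described pipeline (PCA for dimensionality reduction, SVM for the live-vs-attack decision) can be sketched with scikit-learn. Below, random arrays stand in for CNN features of an assumed dimensionality; the component count and kernel are illustrative, not the paper's settings.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # X: CNN features (n_samples, n_dims); y: 1 = live, 0 = attack.
    # Random placeholders stand in for features from a trained network.
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(200, 4096)), rng.integers(0, 2, 200)

    pad_classifier = make_pipeline(
        StandardScaler(),       # normalize feature scales
        PCA(n_components=50),   # dimensionality reduction, as in the paper
        SVC(kernel="rbf"),      # final live-vs-attack decision
    )
    pad_classifier.fit(X, y)
    print(pad_classifier.predict(X[:5]))
    ```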

  3. Spoof Detection for Finger-Vein Recognition System Using NIR Camera

    PubMed Central

    Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung

    2017-01-01

    Finger-vein recognition, a new and advanced biometric recognition method, is attracting the attention of researchers because of its advantages, such as high recognition performance and a lower likelihood of theft or of inaccuracies caused by skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system using presentation attack (fake) finger-vein images. As a result, spoof detection, termed presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractors), based on the researchers' observations of the differences between real (live) and presentation attack finger-vein images; consequently, detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision, delivering superior results compared to traditional handcrafted methods on various applications such as image-based face recognition, gender recognition, and image classification. In this paper, we propose a PAD method for a near-infrared (NIR) camera-based finger-vein recognition system using a convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than handcrafted methods through a training procedure. We further process the extracted image features using principal component analysis (PCA) for dimensionality reduction of the feature space and a support vector machine (SVM) for classification. Through extensive experiments, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and delivers superior detection results compared to previous CNN-based and handcrafted methods. PMID:28974031

  4. Computer vision for foreign body detection and removal in the food industry

    USDA-ARS?s Scientific Manuscript database

    Computer vision inspection systems are often used for quality control, product grading, defect detection and other product evaluation issues. This chapter focuses on the use of computer vision inspection systems that detect foreign bodies and remove them from the product stream. Specifically, we wi...

  5. A smart telerobotic system driven by monocular vision

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.

  6. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    NASA Astrophysics Data System (ADS)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via a micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed from a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are constructed from the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters from laser line imaging, and the approximation networks compute three-dimensional vision from the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves on the accuracy of traditional microscope calibration, which relies on references external to the microscope system. The capability of the self-calibration based on soft computing algorithms is assessed via the calibration accuracy and the micro-scale measurement error, and the contribution is corroborated by an evaluation against the accuracy of traditional microscope calibration.
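
    The abstract's Bezier approximation networks are built from fitted control points; the generic evaluation step is De Casteljau's algorithm, sketched below with invented control points. In the described system, something like the genetic algorithm would fit such control points so that laser-line position maps to a vision parameter.

    ```python
    import numpy as np

    def bezier_eval(control_pts, t):
        """Evaluate a Bezier curve at parameter t in [0, 1] using
        De Casteljau's algorithm. In a Bezier approximation network of
        the kind the abstract describes, the control points would be
        fitted (e.g., by a genetic algorithm) so that a normalized
        laser-line position maps to a vision parameter."""
        pts = np.asarray(control_pts, dtype=float)
        while len(pts) > 1:
            pts = (1 - t) * pts[:-1] + t * pts[1:]
        return pts[0]

    # Illustrative: map a normalized laser-line position to a parameter.
    print(bezier_eval([0.0, 2.5, 3.0, 4.0], 0.37))
    ```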

  7. An architecture for real-time vision processing

    NASA Technical Reports Server (NTRS)

    Chien, Chiun-Hong

    1994-01-01

    To study the feasibility of developing an architecture for real-time vision processing, a task queue server and parallel algorithms for two vision operations were designed and implemented on an i860-based Mercury Computing System 860VS array processor. The proposed architecture treats each vision function as a task or set of tasks, which may be recursively divided into subtasks and processed by multiple processors coordinated by a task queue server accessible to all processors. Each idle processor fetches a task and its associated data from the task queue server, processes it, and posts the result to shared memory for later use. Load balancing is thus achieved within the processing system without a centralized controller. The author concludes that real-time vision processing cannot be achieved without both sequential and parallel vision algorithms and a good parallel vision architecture.
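
    The coordination pattern the abstract describes, a shared task queue from which idle processors pull work plus a shared memory for results, can be sketched with Python threads (the original ran on an i860 array processor; the names and the toy task are invented). Load balancing falls out naturally because whichever worker is idle takes the next task.

    ```python
    import queue
    import threading

    tasks = queue.Queue()     # task queue server (shared by all workers)
    results = {}              # stand-in for shared result memory
    results_lock = threading.Lock()

    def worker():
        while True:
            task = tasks.get()
            if task is None:          # sentinel: no more work
                tasks.task_done()
                return
            name, func, arg = task
            out = func(arg)
            with results_lock:
                results[name] = out   # post result for later use
            tasks.task_done()

    # Each vision function is queued as one or more (sub)tasks.
    for i in range(8):
        tasks.put((f"op_{i}", lambda r: r * 2, i))  # toy "vision operation"

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for _ in threads:
        tasks.put(None)
    for t in threads:
        t.join()
    print(results)
    ```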

  8. Flight instruments and helmet-mounted SWIR imaging systems

    NASA Astrophysics Data System (ADS)

    Robinson, Tim; Green, John; Jacobson, Mickey; Grabski, Greg

    2011-06-01

    Night vision technology has experienced significant advances in the last two decades. Night vision goggles (NVGs) based on gallium arsenide (GaAs) continue to raise the bar for alternative technologies. Resolution, gain, and sensitivity have all improved; the image quality through these devices is nothing less than incredible. Panoramic NVGs and enhanced NVGs are examples of recent advances that increase the warfighter's capabilities. Even with these advances, alternative night vision devices such as solid-state indium gallium arsenide (InGaAs) focal plane arrays are under development for helmet-mounted imaging systems. The InGaAs imaging system offers advantages over existing NVGs. Two key advantages are that (1) the new system produces digital image data, and (2) the new system is sensitive to energy in the shortwave infrared (SWIR) spectrum. While it is tempting to contrast the performance of these digital systems with existing NVGs, the advantage of different spectral detection bands leads to the conclusion that the technologies are less competitive and more synergistic. It is likely that, by the end of the decade, pilots will use multi-band devices in the cockpit. As such, flight decks will need to be compatible with both NVGs and SWIR imaging systems. Insertion of NVGs in aircraft during the late 1970s and early 1980s resulted in many "lessons learned" concerning instrument compatibility with NVGs. These lessons ultimately resulted in specifications such as MIL-L-85762A and MIL-STD-3009, which are now used throughout industry to produce NVG-compatible illuminated instruments and displays for both military and civilian applications. Inserting a SWIR imaging device into a cockpit will require similar consideration. A project evaluating flight deck instrument compatibility with SWIR devices is currently ongoing; aspects of this evaluation are described in this paper. The project is sponsored by the Air Force Research Laboratory (AFRL).

  9. Chlorophyll derivatives enhance invertebrate red-light and ultraviolet phototaxis.

    PubMed

    Degl'Innocenti, Andrea; Rossi, Leonardo; Salvetti, Alessandra; Marino, Attilio; Meloni, Gabriella; Mazzolai, Barbara; Ciofani, Gianni

    2017-06-13

    Chlorophyll derivatives are known to enhance vision in vertebrates. They are thought to bind visual pigments (i.e., opsin apoproteins bound to retinal chromophores) directly within the retina. Consistent with previous findings in vertebrates, here we show that chlorin e6, a chlorophyll derivative, enhances photophobicity in a flatworm (Dugesia japonica), specifically when exposed to UV radiation (λ = 405 nm) or red light (λ = 660 nm). This is the first report of chlorophyll derivatives acting as modulators of invertebrate phototaxis and, more generally, the first account demonstrating that they can artificially alter an animal's response to light at the behavioral level. Our findings indicate that the interaction between chlorophyll derivatives and opsins potentially concerns the vast majority of bilaterian animals, and also occurs in visual systems based on rhabdomeric (rather than ciliary) opsins.

  10. A Machine Vision Quality Control System for Industrial Acrylic Fibre Production

    NASA Astrophysics Data System (ADS)

    Heleno, Paulo; Davies, Roger; Correia, Bento A. Brázio; Dinis, João

    2002-12-01

    This paper describes the implementation of INFIBRA, a machine vision system used in the quality control of acrylic fibre production. The system was developed by INETI under contract with a leading industrial manufacturer of acrylic fibres and monitors several parameters of the acrylic production process. After a brief overview of the system, the paper presents a detailed description of the machine vision algorithms developed to perform the inspection tasks unique to this system. Some results of online operation are also presented.

  11. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  12. Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Prinzel, L.J.; Kramer, L.J.

    2009-01-01

    A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.

  13. The adaptive value of primate color vision for predator detection.

    PubMed

    Pessoa, Daniel Marques Almeida; Maia, Rafael; de Albuquerque Ajuz, Rafael Cavalcanti; De Moraes, Pedro Zurvaino Palmeira Melo Rosa; Spyrides, Maria Helena Constantino; Pessoa, Valdir Filgueiras

    2014-08-01

    The complex evolution of primate color vision has puzzled biologists for decades. Primates are the only eutherian mammals that evolved an enhanced capacity for discriminating colors in the green-red part of the spectrum (trichromatism). However, while Old World primates present three types of cone pigments and are routinely trichromatic, most New World primates exhibit a color vision polymorphism, characterized by the occurrence of trichromatic and dichromatic females and obligatorily dichromatic males. Even though this has stimulated a prolific line of inquiry, the selective forces and relative benefits influencing color vision evolution in primates are still under debate, with current explanations focusing almost exclusively on the advantages in finding food and detecting socio-sexual signals. Here, we evaluate a previously untested possibility: the adaptive value of primate color vision for predator detection. By combining color vision modeling data on New World and Old World primates with behavioral information from human subjects, we demonstrate that primates exhibiting better color discrimination (trichromats) outperform those with poorer color vision (dichromats) at detecting carnivoran predators against a green foliage background. The distribution of color vision found in extant anthropoid primates agrees with our results and may be explained by the advantages of trichromats and dichromats in detecting predators and insects, respectively. © 2014 Wiley Periodicals, Inc.

  14. Advanced helmet vision system (AHVS) integrated night vision helmet mounted display (HMD)

    NASA Astrophysics Data System (ADS)

    Ashcraft, Todd W.; Atac, Robert

    2012-06-01

    Gentex Corporation, under contract to Naval Air Systems Command (AIR 4.0T), designed the Advanced Helmet Vision System to provide aircrew with 24-hour, visor-projected binocular night vision and HMD capability. AHVS integrates numerous key technologies, including high brightness Light Emitting Diode (LED)-based digital light engines, advanced lightweight optical materials and manufacturing processes, and innovations in graphics processing software. This paper reviews the current status of miniaturization and integration with the latest two-part Gentex modular helmet, highlights the lessons learned from previous AHVS phases, and discusses plans for qualification and flight testing.

  15. Vision for an Open, Global Greenhouse Gas Information System (GHGIS)

    NASA Astrophysics Data System (ADS)

    Duren, R. M.; Butler, J. H.; Rotman, D.; Ciais, P.; Greenhouse Gas Information System Team

    2010-12-01

    Over the next few years, an increasing number of entities, ranging from international, national, and regional governments to businesses and private land-owners, are likely to become more involved in efforts to limit atmospheric concentrations of greenhouse gases. In such a world, geospatially resolved information about the location, amount, and rate of greenhouse gas (GHG) emissions will be needed, as well as about the stocks and flows of all forms of carbon through the earth system. The ability to implement policies that limit GHG concentrations would be enhanced by a global, open, and transparent greenhouse gas information system (GHGIS). An operational and scientifically robust GHGIS would combine ground-based and space-based observations, carbon-cycle modeling, GHG inventories, synthesis analysis, and an extensive data integration and distribution system to provide information about anthropogenic and natural sources, sinks, and fluxes of greenhouse gases at temporal and spatial scales relevant to decision making. The GHGIS effort was initiated in 2008 as a grassroots inter-agency collaboration intended to identify the needs for such a system, assess the capabilities of current assets, and suggest priorities for future research and development. We present a vision for an open, global GHGIS, including the latest analysis of system requirements, critical gaps, and its relationship to related efforts at various agencies, the Group on Earth Observations, and the Intergovernmental Panel on Climate Change.

  16. Task-focused modeling in automated agriculture

    NASA Astrophysics Data System (ADS)

    Vriesenga, Mark R.; Peleg, K.; Sklansky, Jack

    1993-01-01

    Machine vision systems analyze image data to carry out automation tasks. Our interest is in machine vision systems that rely on models to achieve their designed task. When the model is interrogated from an a priori menu of questions, the model need not be complete. Instead, the machine vision system can use a partial model that contains a large amount of information in regions of interest and less information elsewhere. We propose an adaptive modeling scheme for machine vision, called task-focused modeling, which constructs a model having just sufficient detail to carry out the specified task. The model is detailed in regions of interest to the task and is less detailed elsewhere. This focusing effect saves time and reduces the computational effort expended by the machine vision system. We illustrate task-focused modeling by an example involving real-time micropropagation of plants in automated agriculture.
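
    The "detail only where the task needs it" idea can be made concrete with a recursive spatial subdivision that refines only inside the task's region of interest. The quadtree-style sketch below is a generic illustration, not the authors' modeling scheme; the scene size and ROI are invented.

    ```python
    def build_cells(x, y, size, roi, min_size=1):
        """Recursively subdivide a square region, refining only where it
        intersects the task's region of interest (ROI, as x, y, w, h).
        Returns leaf cells as (x, y, size); detail concentrates where
        the task needs it and stays coarse elsewhere."""
        rx, ry, rw, rh = roi
        overlaps = not (x + size <= rx or rx + rw <= x or
                        y + size <= ry or ry + rh <= y)
        if size <= min_size or not overlaps:
            return [(x, y, size)]  # coarse cell outside the ROI
        half = size / 2
        cells = []
        for ox in (0, half):
            for oy in (0, half):
                cells += build_cells(x + ox, y + oy, half, roi, min_size)
        return cells

    # 16x16 scene, ROI around (4..6, 4..6): fine cells only near the ROI.
    leaves = build_cells(0, 0, 16, roi=(4, 4, 2, 2))
    print(len(leaves), "cells")
    ```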

  17. Shared visions: Partnership of Rockwell International and NASA Cost Effectiveness Enhancements (CEE) for the space shuttle system integration program

    NASA Technical Reports Server (NTRS)

    Bejmuk, Bohdan I.; Williams, Larry

    1992-01-01

    As a result of limited resources and tight fiscal constraints over the past several years, the defense and aerospace industries have experienced a downturn in business activity. The impact of fewer contracts being awarded has placed a greater emphasis for effectiveness and efficiency on industry contractors. It is clear that a reallocation of resources is required for America to continue to lead the world in space and technology. The key to technological and economic survival is the transforming of existing programs, such as the Space Shuttle Program, into more cost efficient programs so as to divert the savings to other NASA programs. The partnership between Rockwell International and NASA and their joint improvement efforts that resulted in significant streamlining and cost reduction measures to Rockwell International Space System Division's work on the Space Shuttle System Integration Contract is described. This work was a result of an established Cost Effectiveness Enhancement (CEE) Team formed initially in Fiscal Year 1991, and more recently expanded to a larger scale CEE Initiative in 1992. By working closely with the customer in agreeing to contract content, obtaining management endorsement and commitment, and involving the employees in total quality management (TQM) and continuous improvement 'teams,' the initial annual cost reduction target was exceeded significantly. The CEE Initiative helped reduce the cost of the Shuttle Systems Integration contract while establishing a stronger program based upon customer needs, teamwork, quality enhancements, and cost effectiveness. This was accomplished by systematically analyzing, challenging, and changing the established processes, practices, and systems. This examination, in nature, was work intensive due to the depth and breadth of the activity. The CEE Initiative has provided opportunities to make a difference in the way Rockwell and NASA work together - to update the methods and processes of the organizations. The future success of NASA space programs and Rockwell hinges upon the ability to adopt new, more efficient and effective work processes. Efficiency, proficiency, cost effectiveness, and teamwork are a necessity for economic survival. Continuous improvement initiatives like the CEE are, and will continue to be, vehicles by which the road can be traveled with a vision to the future.

  18. Shared visions: Partnership of Rockwell International and NASA Cost Effectiveness Enhancements (CEE) for the space shuttle system integration program

    NASA Astrophysics Data System (ADS)

    Bejmuk, Bohdan I.; Williams, Larry

    As a result of limited resources and tight fiscal constraints over the past several years, the defense and aerospace industries have experienced a downturn in business activity. The impact of fewer contracts being awarded has placed a greater emphasis for effectiveness and efficiency on industry contractors. It is clear that a reallocation of resources is required for America to continue to lead the world in space and technology. The key to technological and economic survival is the transforming of existing programs, such as the Space Shuttle Program, into more cost efficient programs so as to divert the savings to other NASA programs. The partnership between Rockwell International and NASA and their joint improvement efforts that resulted in significant streamlining and cost reduction measures to Rockwell International Space System Division's work on the Space Shuttle System Integration Contract is described. This work was a result of an established Cost Effectiveness Enhancement (CEE) Team formed initially in Fiscal Year 1991, and more recently expanded to a larger scale CEE Initiative in 1992. By working closely with the customer in agreeing to contract content, obtaining management endorsement and commitment, and involving the employees in total quality management (TQM) and continuous improvement 'teams,' the initial annual cost reduction target was exceeded significantly. The CEE Initiative helped reduce the cost of the Shuttle Systems Integration contract while establishing a stronger program based upon customer needs, teamwork, quality enhancements, and cost effectiveness. This was accomplished by systematically analyzing, challenging, and changing the established processes, practices, and systems. This examination, in nature, was work intensive due to the depth and breadth of the activity. The CEE Initiative has provided opportunities to make a difference in the way Rockwell and NASA work together - to update the methods and processes of the organizations. The future success of NASA space programs and Rockwell hinges upon the ability to adopt new, more efficient and effective work processes. Efficiency, proficiency, cost effectiveness, and teamwork are a necessity for economic survival. Continuous improvement initiatives like the CEE are, and will continue to be, vehicles by which the road can be traveled with a vision to the future.

  19. Health system vision of Iran in 2025.

    PubMed

    Rostamigooran, N; Esmailzadeh, H; Rajabi, F; Majdzadeh, R; Larijani, B; Dastgerdi, M Vahid

    2013-01-01

    Vast changes in disease features and risk factors, and the influence of demographic, economic, and social trends on the health system, make formulating a long-term evolutionary plan unavoidable. In this regard, determining the health system's vision over a long-term horizon is a primary stage. After a narrative and purposeful review of documents, the major themes of the vision statement were determined and its content was organized in a work group consisting of selected managers and experts of the health system. The final content of the statement was prepared after several sessions of group discussion and after receiving the ideas of health system policy makers and experts. The vision statement in the evolutionary plan of the health system is: "a progressive community in the course of human prosperity which has attained a developed level of health standards in the light of the most efficient and equitable health system in the visionary region(1) and with regard to health in all policies, accountability and innovation". An explanatory text was also compiled to create a complete image of the vision. Social values, leaders' strategic goals, and main orientations are generally mentioned in a vision statement. In this statement, prosperity and justice are considered the major values and ideals of Iranian society; development and excellence in the region are the leaders' strategic goals; and efficiency and equality, health in all policies, and accountability and innovation are the main orientations of the health system.

  20. Accessible maps for the color vision deficient observers: past and present knowledge and future possibilities

    NASA Astrophysics Data System (ADS)

    Kvitle, Anne Kristin

    2018-05-01

    Color is one of the visual variables in maps, serving an aesthetic purpose and guiding attention. Impaired color vision affects the ability to distinguish colors, which makes the task of decoding map colors difficult. Map reading is reported to be challenging for these observers, especially when the stimuli are small. The aim of this study is to review existing methods of map design for color vision deficient (CVD) users. A systematic review of the research literature and of case studies of map design for CVD observers has been conducted in order to give an overview of current knowledge and future research challenges. In addition, relevant research on simulating CVD and on color image enhancement for these observers from other fields of industry is included. The study identified two main approaches: pre-processing, by using accessible colors, and post-processing, by using enhancement methods. Some of these methods may be applied to maps but require tailoring of test images according to map type.
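
    A common building block in this area is simulating how a palette appears to a dichromat so that map colors can be checked for distinguishability. The sketch below applies one widely circulated 3x3 RGB approximation of protanopia; the coefficients are illustrative only, and serious work should use LMS-space models (e.g., Brettel et al., Viénot et al., or Machado et al.).

    ```python
    import numpy as np

    # A widely circulated approximation of protanopia as a linear RGB
    # transform. Coefficients are illustrative, not authoritative.
    PROTANOPIA = np.array([[0.567, 0.433, 0.000],
                           [0.558, 0.442, 0.000],
                           [0.000, 0.242, 0.758]])

    def simulate_cvd(rgb_image, matrix=PROTANOPIA):
        """Approximate how an RGB image appears to a dichromat, for
        checking whether map colors remain distinguishable."""
        flat = rgb_image.reshape(-1, 3).astype(float)
        out = flat @ matrix.T
        return np.clip(out, 0, 255).reshape(rgb_image.shape).astype(np.uint8)
    ```

    A designer could run a map's palette through such a simulation and compare pairwise color distances before and after, flagging pairs that collapse together.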

  1. Airborne Windshear Detection and Warning Systems. Fifth and Final Combined Manufacturers' and Technologists' Conference, part 1

    NASA Technical Reports Server (NTRS)

    Delnore, Victor E. (Compiler)

    1994-01-01

    The Fifth (and Final) Combined Manufacturers' and Technologists' Airborne Windshear Review Meeting was hosted jointly by the NASA Langley Research Center (LaRC) and the Federal Aviation Administration (FAA) in Hampton, Virginia, on September 28-30, 1993. The purpose of the meeting was to report on the highly successful windshear experiments conducted by government, academic institutions, and industry; to transfer the results to regulators, manufacturers, and users; and to set initiatives for future aeronautics technology research. The formal sessions covered recent developments in windshear flight testing; windshear modeling, flight management, and ground-based systems; airborne windshear detection systems; certification and regulatory issues; development and applications of sensors for wake vortex detection; and synthetic and enhanced vision systems.

  2. Robust Spatial Autoregressive Modeling for Hardwood Log Inspection

    Treesearch

    Dongping Zhu; A.A. Beex

    1994-01-01

    We explore the application of a stochastic texture modeling method toward a machine vision system for log inspection in the forest products industry. This machine vision system uses computerized tomography (CT) imaging to locate and identify internal defects in hardwood logs. The application of CT to such industrial vision problems requires efficient and robust image...

  3. Augmentation of Cognition and Perception Through Advanced Synthetic Vision Technology

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.; Arthur, Jarvis J.; Williams, Steve P.; McNabb, Jennifer

    2005-01-01

    Synthetic Vision System technology augments reality and creates a virtual visual meteorological condition that extends a pilot's cognitive and perceptual capabilities during flight operations when outside visibility is restricted. The paper describes the NASA Synthetic Vision System for commercial aviation with an emphasis on how the technology achieves Augmented Cognition objectives.

  4. Industrial Inspection with Open Eyes: Advance with Machine Vision Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zheng; Ukida, H.; Niel, Kurt

    Machine vision systems have evolved significantly with technology advances to tackle the challenges posed by modern manufacturing industry. A wide range of industrial inspection applications for quality control benefit from visual information captured by different types of cameras variously configured in a machine vision system. This chapter surveys the state of the art in machine vision technologies in light of hardware, software tools, and major algorithm advances for industrial inspection. Inspection beyond the visual spectrum offers a significant complement to visual inspection, and combining multiple technologies makes it possible for inspection to achieve better performance and efficiency in varied applications. The diversity of the applications demonstrates the great potential of machine vision systems for industry.

  5. The effect of multispectral image fusion enhancement on human efficiency.

    PubMed

    Bittner, Jennifer L; Schill, M Trent; Mohd-Zaid, Fairul; Blaha, Leslie M

    2017-01-01

    The visual system can be highly influenced by changes to visual presentation. Thus, numerous techniques have been developed to augment imagery in an attempt to improve human perception. The current paper examines the potential impact of one such enhancement, multispectral image fusion, in which imagery captured in different spectral bands (e.g., visible, thermal, night vision) is algorithmically combined to produce an output that strengthens visual perception. We employ ideal observer analysis over a series of experimental conditions to (1) establish a framework for testing the impact of image fusion over the varying aspects surrounding its implementation (e.g., stimulus content, task) and (2) examine the effectiveness of fusion on human information-processing efficiency in a basic application. We used a set of rotated Landolt C images captured with a number of individual sensor cameras and combined across seven traditional fusion algorithms (e.g., Laplacian pyramid, principal component analysis, averaging) in a 1-of-8 orientation task. We found that, contrary to the idea that fused imagery always produces a greater impact on perception, single-band imagery can be just as influential. Additionally, efficiency was shown to fluctuate based on sensor combination rather than fusion algorithm, suggesting the need to examine multiple factors to determine the success of image fusion. Our use of ideal observer analysis, a popular technique from the vision sciences, not only provides a standard for testing fusion in direct relation to the visual system but also allows comparable examination of fusion across its associated problem space of applications.
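
    Of the algorithms the study compares, the simplest fusion rules are easy to sketch. The functions below show pixel-wise averaging and per-pixel maximum selection for co-registered single-band images (registration is assumed already done); pyramid- and PCA-based fusion are more involved.

    ```python
    import numpy as np

    def average_fusion(images):
        """Pixel-wise averaging of co-registered single-band images
        (e.g., visible + thermal), the simplest fusion rule in the
        family the study tested. Inputs must share shape and alignment."""
        stack = np.stack([img.astype(float) for img in images])
        return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

    def max_select_fusion(images):
        """Per-pixel maximum selection, another common baseline: keep,
        at each pixel, the brightest band."""
        stack = np.stack([img.astype(float) for img in images])
        return stack.max(axis=0).astype(np.uint8)
    ```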

  6. Simulation Evaluation of Synthetic Vision as an Enabling Technology for Equivalent Visual Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.

    2008-01-01

    Enhanced vision (EV) and synthetic vision (SV) systems may serve as enabling technologies to meet the challenges of the Next Generation Air Transportation System (NextGen) Equivalent Visual Operations (EVO) concept, that is, the ability to achieve or even improve on the safety of Visual Flight Rules (VFR) operations, maintain the operational tempos of VFR, and even, perhaps, retain VFR procedures independent of actual weather and visibility conditions. One significant challenge lies in defining the required equipage on the aircraft and at the airport to enable the EVO concept objective. A piloted simulation experiment was conducted to evaluate the effects of the presence or absence of synthetic vision, the location of this information during an instrument approach (i.e., on a head-up or head-down primary flight display), and the type of airport lighting information on landing minima. The quantitative data from this experiment were analyzed to begin defining performance-based criteria for all-weather approach and landing operations. Objective results showed that better approach performance was attainable with the head-up display (HUD) than with the head-down display (HDD). A slight improvement in HDD performance was seen when SV was added as the pilots descended below 200 ft to a 100 ft decision altitude, but this difference was not tested for statistical significance (nor was it expected to be statistically significant). The touchdown data showed that, regardless of the display concept flown (SV HUD, baseline HUD, SV HDD, baseline HDD), a majority of the runs were within the performance-based approach and landing criteria at all the visibility levels, approach lighting systems, and decision altitudes tested. For this visual flight maneuver, runway visual range (RVR) appeared to be the most significant influence on touchdown performance. The approach lighting system clearly affected the pilot's ability to descend to 100 ft height above touchdown under existing Federal Aviation Regulation (FAR) 91.175 using a 200 ft decision height, but did not appear to influence touchdown performance or approach path maintenance.

  7. Skilled deaf readers have an enhanced perceptual span in reading.

    PubMed

    Bélanger, Nathalie N; Slattery, Timothy J; Mayberry, Rachel I; Rayner, Keith

    2012-07-01

    Recent evidence suggests that, compared with hearing people, deaf people have enhanced visual attention to simple stimuli viewed in the parafovea and periphery. Although a large part of reading involves processing the fixated words in foveal vision, readers also utilize information in parafoveal vision to preprocess upcoming words and decide where to look next. In the study reported here, we investigated whether auditory deprivation affects low-level visual processing during reading by comparing the perceptual span of deaf signers who were skilled and less-skilled readers with the perceptual span of skilled hearing readers. Compared with hearing readers, the two groups of deaf readers had a larger perceptual span than would be expected given their reading ability. These results provide the first evidence that deaf readers' enhanced attentional allocation to the parafovea is used during complex cognitive tasks, such as reading.

  8. Design And Implementation Of Integrated Vision-Based Robotic Workcells

    NASA Astrophysics Data System (ADS)

    Chen, Michael J.

    1985-01-01

    Reports have been sparse on large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer vision technology to the manufacturing of miniaturized electronic components. The concepts of flexible manufacturing systems (FMS), work cells, and work stations, and their control hierarchy, are illustrated. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase manufacturing throughput as well as the quality of the final product. The automated work cells serve as examples to illustrate the underlying design philosophy and principles in the fabrication of vision-based robotic systems.

  9. Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels.

    PubMed

    Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong

    2015-04-14

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and the field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, global calibration of the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer vibrations in multiple degrees of freedom caused by the running vehicle; even small vibrations can result in substantial measurement errors. To overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments with the vision inspection system were conducted with satisfactory online measurement results.
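
    The core geometric step in structured-light ranging of this kind is intersecting the back-projected pixel ray with the calibrated light plane. The sketch below shows that computation under assumed intrinsics and plane parameters (not the paper's calibration values).

    ```python
    import numpy as np

    def triangulate_on_laser_plane(pixel, K, plane_n, plane_d):
        """Intersect the camera ray through `pixel` with the calibrated
        structured-light plane n.X + d = 0 (camera coordinates, with
        the ray origin at the camera center). K is the 3x3 intrinsic
        matrix; all values are assumed calibrated. Returns the 3D point
        on the tunnel surface."""
        ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
        t = -plane_d / (plane_n @ ray)
        return t * ray

    # Illustrative intrinsics and a plane passing in front of the camera.
    K = np.array([[1000., 0., 640.], [0., 1000., 480.], [0., 0., 1.]])
    n, d = np.array([0.0, -0.94, 0.34]), -0.5
    print(triangulate_on_laser_plane((700, 300), K, n, d))
    ```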

  10. The role of visual deprivation and experience on the performance of sensory substitution devices.

    PubMed

    Stronks, H Christiaan; Nau, Amy C; Ibbotson, Michael R; Barnes, Nick

    2015-10-22

    It is commonly accepted that the blind can partially compensate for their loss of vision by developing enhanced abilities with their remaining senses. This visual compensation may be related to the fact that blind people rely on their other senses in everyday life. Many studies have indeed shown that experience plays an important role in visual compensation. Numerous neuroimaging studies have shown that the visual cortices of the blind are recruited by other functional brain areas and can become responsive to tactile or auditory input instead. These cross-modal plastic changes are more pronounced in the early blind compared to late blind individuals. The functional consequences of cross-modal plasticity on visual compensation in the blind are debated, as are the influences of various etiologies of vision loss (i.e., blindness acquired early or late in life). Distinguishing between the influences of experience and visual deprivation on compensation is especially relevant for rehabilitation of the blind with sensory substitution devices. The BrainPort artificial vision device and The vOICe are assistive devices for the blind that redirect visual information to another intact sensory system. Establishing how experience and different etiologies of vision loss affect the performance of these devices may help to improve existing rehabilitation strategies, formulate effective selection criteria and develop prognostic measures. In this review we will discuss studies that investigated the influence of training and visual deprivation on the performance of various sensory substitution approaches. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Tracking by Identification Using Computer Vision and Radio

    PubMed Central

    Mandeljc, Rok; Kovačič, Stanislav; Kristan, Matej; Perš, Janez

    2013-01-01

    We present a novel system for detection, localization, and tracking of multiple people, which fuses a multi-view computer vision approach with a radio-based localization system. The proposed fusion combines the best of both worlds: excellent computer-vision-based localization and strong identity information provided by the radio system. It is therefore able to perform tracking by identification, which makes it impervious to propagated identity switches. We present a comprehensive methodology for evaluating systems that perform person localization in a world coordinate system, and use it to evaluate the proposed system as well as its components. Experimental results on a challenging indoor dataset, involving multiple people walking around a realistically cluttered room, confirm that the proposed fusion significantly outperforms its individual components. Compared to the radio-based system alone, it achieves better localization results, while successfully preventing the propagation of identity switches that occur in pure computer-vision-based tracking. PMID:23262485
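
    The tracking-by-identification step can be viewed as an assignment problem: attach the identities carried by the radio tags to the (accurate but anonymous) vision detections. The sketch below does this with the Hungarian algorithm on a distance cost matrix; it is a toy stand-in for the paper's fusion, which would also weigh the uncertainties of each sensor.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def assign_identities(vision_xy, radio_xy, radio_ids):
        """Attach radio-tag identities to anonymous vision detections
        by minimum total distance (Hungarian algorithm)."""
        cost = np.linalg.norm(vision_xy[:, None, :] - radio_xy[None, :, :],
                              axis=2)
        det_idx, tag_idx = linear_sum_assignment(cost)
        return {radio_ids[t]: tuple(vision_xy[d])
                for d, t in zip(det_idx, tag_idx)}

    detections = np.array([[1.1, 2.0], [3.9, 0.4]])  # accurate, anonymous
    radio = np.array([[1.5, 2.2], [3.5, 0.0]])       # coarse, identified
    print(assign_identities(detections, radio, ["alice", "bob"]))
    ```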

  12. Design of a surgical robot with dynamic vision field control for Single Port Endoscopic Surgery.

    PubMed

    Kobayashi, Yo; Sekiguchi, Yuta; Tomono, Yu; Watanabe, Hiroki; Toyoda, Kazutaka; Konishi, Kozo; Tomikawa, Morimasa; Ieiri, Satoshi; Tanoue, Kazuo; Hashizume, Makoto; Fujie, Masaktsu G

    2010-01-01

    Recently, a robotic system was developed to assist single port endoscopic surgery (SPS). However, the existing system required a manual change of vision field, hindering the surgical task and increasing the degrees of freedom (DOFs) of the manipulator. We propose a surgical robot for SPS with dynamic vision field control, in which the endoscope view is manipulated by a master controller. The prototype robot consists of a positioning and sheath manipulator (6 DOF) for vision field control and dual tool tissue manipulators (gripping: 5 DOF; cautery: 3 DOF). Feasibility of the robot was demonstrated in vitro. "Cut and vision field control" (using the tool manipulators) is suitable for precise cutting tasks in risky areas, while "cut by vision field control" (using the vision field control manipulator) is effective for rapid macro cutting of tissue. A resection task was accomplished using a combination of both methods.

  13. Perceptual organization in computer vision - A review and a proposal for a classificatory structure

    NASA Technical Reports Server (NTRS)

    Sarkar, Sudeep; Boyer, Kim L.

    1993-01-01

    The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for higher-order organisms and, analogously, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored from four vantage points. First, a brief history of perceptual organization research in both human and computer vision is offered. Next, a classificatory structure in which to cast perceptual organization research is proposed, to clarify both the nomenclature and the relationships among the many contributions. Third, the perceptual organization work in computer vision is reviewed in the context of this classificatory structure. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.

  14. Embedded System Implementation on FPGA System With μCLinux OS

    NASA Astrophysics Data System (ADS)

    Fairuz Muhd Amin, Ahmad; Aris, Ishak; Syamsul Azmir Raja Abdullah, Raja; Kalos Zakiah Sahbudin, Ratna

    2011-02-01

    Embedded systems are taking on more complicated tasks as the processors involved become more powerful. They are widely used in many areas, such as industry, automotive applications, medical imaging, communications, speech recognition, and computer vision. Today's complexity requirements in hardware and software call for a flexible system that allows further enhancement of any design without adding new hardware; in a fixed design, any change to the system may require the processor itself to be changed. To overcome this problem, a System On a Programmable Chip (SOPC) has been designed and developed using a Field Programmable Gate Array (FPGA). A soft-core processor, the NIOS II 32-bit RISC, serves as the microprocessor core in the FPGA system, together with the embedded operating system (OS) μClinux. In this paper, an example web server is explained and demonstrated.

  15. 3D display for enhanced tele-operation and other applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Pezzaniti, J. Larry; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Bodenhamer, Andrew; Pettijohn, Bradley; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott

    2010-04-01

    In this paper, we report on the use of a 3D vision field upgrade kit for the TALON robot, consisting of a replacement flat-panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The upgrade kit comprises a replacement display, a replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.

  16. The development of scientific identification theory to conduct operation research in education management

    NASA Astrophysics Data System (ADS)

    Hardhienata, S.

    2017-01-01

    Operations research is a general method used in the study and optimization of a system through modeling of that system. In the field of education, especially education management, operations research has not been widely used. This paper gives an exposition of how operations research can be used to conduct research and optimization in education management by developing SITOREM (Scientific Identification Theory for Operation Research in Education Management). To clarify the intent of the idea, an example of applying SITOREM to enhance the professional commitment of lecturers, in connection with achieving a university's vision, is described.

  17. Seeing and Feeling for Self and Other: Proprioceptive Spatial Location Determines Multisensory Enhancement of Touch

    ERIC Educational Resources Information Center

    Cardini, Flavia; Haggard, Patrick; Ladavas, Elisabetta

    2013-01-01

    We have investigated the relation between visuo-tactile interactions and the self-other distinction. In the Visual Enhancement of Touch (VET) effect, non-informative vision of one's own hand improves tactile spatial perception. Previous studies suggested that looking at "another" person's hand could also enhance tactile perception, but did not…

  18. FLORA™: Phase I development of a functional vision assessment for prosthetic vision users.

    PubMed

    Geruschat, Duane R; Flax, Marshall; Tanna, Nilima; Bianchi, Michelle; Fisher, Andy; Goldschmidt, Mira; Fisher, Lynne; Dagnelie, Gislin; Deremeik, Jim; Smith, Audrey; Anaflous, Fatima; Dorn, Jessy

    2015-07-01

    Research groups and funding agencies need a functional assessment suitable for an ultra-low vision population to evaluate the impact of new vision-restoration treatments. The purpose of this study was to develop a pilot assessment to capture the functional visual ability and well-being of subjects whose vision has been partially restored with the Argus II Retinal Prosthesis System. The Functional Low-Vision Observer Rated Assessment (FLORA) pilot assessment involved a self-report section, a list of functional visual tasks for observation of performance and a case narrative summary. Results were analysed to determine whether the interview questions and functional visual tasks were appropriate for this ultra-low vision population and whether the ratings suffered from floor or ceiling effects. Thirty subjects with severe to profound retinitis pigmentosa (bare light perception or worse in both eyes) were enrolled in a clinical trial and implanted with the Argus II System. From this population, 26 subjects were assessed with the FLORA. Seven different evaluators administered the assessment. All 14 interview questions were asked. All 35 tasks for functional vision were selected for evaluation at least once, with an average of 20 subjects being evaluated for each test item. All four rating options (impossible, 33 per cent; difficult, 23 per cent; moderate, 24 per cent; easy, 19 per cent) were used by the evaluators. Evaluators also judged the amount of vision they observed the subjects using to complete the various tasks, with 'vision only' occurring 75 per cent of the time on average with the System ON, and 29 per cent with the System OFF. The first version of the FLORA was found to contain useful elements for evaluation and to avoid floor and ceiling effects. The next phase of development will be to refine the assessment and to establish reliability and validity, to increase its value as an assessment tool for functional vision and well-being.

  19. The role of vision processing in prosthetic vision.

    PubMed

    Barnes, Nick; He, Xuming; McCarthy, Chris; Horne, Lachlan; Kim, Junae; Scott, Adele; Lieby, Paulette

    2012-01-01

    Prosthetic vision provides vision which is reduced in resolution and dynamic range compared to normal human vision. This comes about both from residual damage to the visual system caused by the condition that led to vision loss and from limitations of current technology. Even so, prosthetic vision may still support functional performance sufficient for tasks that are key to restoring independent living and quality of life. Here vision processing can play a key role, ensuring that the information critical to the performance of key tasks is available within the capability of the available prosthetic vision. In this paper, we frame vision processing for prosthetic vision, highlight some key areas which present problems in terms of quality of life, and present examples where vision processing can help achieve better outcomes.
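
    As an illustration of the kind of vision processing discussed above, the sketch below reduces a camera frame to a low-resolution, low-dynamic-range representation of the sort a retinal prosthesis can convey. This is a minimal sketch, not the authors' method: the 35 x 30 grid and 8 brightness levels are illustrative assumptions, not parameters from the paper.

      # Minimal sketch: reduce an image to a low-resolution,
      # low-dynamic-range "phosphene-like" representation.
      import cv2
      import numpy as np

      def simulate_prosthetic_view(frame, grid=(35, 30), levels=8):
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          # Downsample to the electrode-grid resolution (area averaging).
          low = cv2.resize(gray, grid, interpolation=cv2.INTER_AREA)
          # Quantize the dynamic range to a few brightness levels.
          quantized = np.floor(low / 256.0 * levels) * (255.0 / (levels - 1))
          return quantized.astype(np.uint8)

      # Synthetic test frame (a horizontal gradient) keeps the sketch
      # self-contained; any BGR camera frame would do.
      gradient = np.tile(np.arange(320, dtype=np.uint8), (240, 1))
      frame = cv2.cvtColor(gradient, cv2.COLOR_GRAY2BGR)
      view = simulate_prosthetic_view(frame)
      cv2.imwrite("prosthetic_view.png", view)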

  20. Binocular combination in abnormal binocular vision

    PubMed Central

    Ding, Jian; Klein, Stanley A.; Levi, Dennis M.

    2013-01-01

    We investigated suprathreshold binocular combination in humans with abnormal binocular visual experience early in life. In the first experiment we presented the two eyes with equal but opposite phase shifted sine waves and measured the perceived phase of the cyclopean sine wave. Normal observers have balanced vision between the two eyes when the two eyes' images have equal contrast (i.e., both eyes contribute equally to the perceived image and perceived phase = 0°). However, in observers with strabismus and/or amblyopia, balanced vision requires a higher contrast image in the nondominant eye (NDE) than the dominant eye (DE). This asymmetry between the two eyes is larger than predicted from the contrast sensitivities or monocular perceived contrast of the two eyes and is dependent on contrast and spatial frequency: more asymmetric with higher contrast and/or spatial frequency. Our results also revealed a surprising NDE-to-DE enhancement in some of our abnormal observers. This enhancement is not evident in normal vision because it is normally masked by interocular suppression. However, in these abnormal observers the NDE-to-DE suppression was weak or absent. In the second experiment, we used the identical stimuli to measure the perceived contrast of a cyclopean grating by matching the binocular combined contrast to a standard contrast presented to the DE. These measures provide strong constraints for model fitting. We found asymmetric interocular interactions in binocular contrast perception, which was dependent on both contrast and spatial frequency in the same way as in phase perception. By introducing asymmetric parameters to the modified Ding-Sperling model including interocular contrast gain enhancement, we succeeded in accounting for both binocular combined phase and contrast simultaneously. Adding binocular contrast gain control to the modified Ding-Sperling model enabled us to predict the results of dichoptic and binocular contrast discrimination experiments and provides new insights into the mechanisms of abnormal binocular vision. PMID:23397039
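
    For readers unfamiliar with this model class, a reduced weighted-summation prediction in the spirit of the Ding-Sperling gain-control framework relates perceived cyclopean phase to the interocular contrast ratio. This is a hedged sketch of the general form, not the authors' exact modified model: with the eyes presented phases ±θ/2 and each eye's contribution weighted in proportion to its contrast raised to the power 1+γ, summing the two sinusoids gives

      \[
      \varphi = \arctan\!\left(\frac{1-\delta^{\,1+\gamma}}{1+\delta^{\,1+\gamma}}\,
      \tan\frac{\theta}{2}\right),
      \qquad \delta = \frac{m_{\mathrm{NDE}}}{m_{\mathrm{DE}}},
      \]

    where θ is the interocular phase difference, δ the interocular contrast ratio, and γ an effective gain-control exponent (an assumption of this reduced form). Balanced vision corresponds to φ = 0 at δ = 1; the abnormal observers described above require δ > 1 (higher NDE contrast) to reach balance.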

  2. Hierarchical Modelling Of Mobile, Seeing Robots

    NASA Astrophysics Data System (ADS)

    Luh, Cheng-Jye; Zeigler, Bernard P.

    1990-03-01

    This paper describes the implementation of a hierarchical robot simulation which supports the design of robots with vision and mobility. A seeing robot applies a classification expert system for visual identification of laboratory objects. The visual data acquisition algorithm used by the robot vision system has been developed to exploit multiple viewing distances and perspectives. Several different simulations have been run testing the visual logic in a laboratory environment. Much work remains to integrate the vision system with the rest of the robot system.

  4. Marking parts to aid robot vision

    NASA Technical Reports Server (NTRS)

    Bales, J. W.; Barker, L. K.

    1981-01-01

    The premarking of parts for subsequent identification by a robot vision system appears to be beneficial as an aid in the automation of certain tasks such as construction in space. A simple, color coded marking system is presented which allows a computer vision system to locate an object, calculate its orientation, and determine its identity. Such a system has the potential to operate accurately, and because the computer shape analysis problem has been simplified, it has the ability to operate in real time.
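
    A minimal sketch of this kind of marking scheme, assuming an elongated color patch whose hue band identifies the part and whose principal axis gives orientation; the HSV thresholds and marker layout are hypothetical, not the paper's coding system.

      # Sketch: locate a color-coded marker, identify it by hue band,
      # and estimate its orientation from image moments.
      import cv2
      import numpy as np

      def find_marker(bgr, hsv_lo=(100, 120, 60), hsv_hi=(130, 255, 255)):
          hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
          mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
          m = cv2.moments(mask, binaryImage=True)
          if m["m00"] == 0:
              return None                      # marker color not present
          cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # centroid
          # Principal-axis orientation from central second moments.
          theta = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])
          return (cx, cy), theta

      image = cv2.imread("workpiece.png")      # hypothetical test image
      result = find_marker(image) if image is not None else None
      if result:
          (cx, cy), theta = result
          print(f"marker at ({cx:.1f}, {cy:.1f}), "
                f"axis {np.degrees(theta):.1f} deg")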

  5. ROBOSIGHT: Robotic Vision System For Inspection And Manipulation

    NASA Astrophysics Data System (ADS)

    Trivedi, Mohan M.; Chen, ChuXin; Marapane, Suresh

    1989-02-01

    Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are utilized for developing higher level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance and navigation. In this paper we describe efforts directed towards developing a vision system useful for performing various robotic inspection and manipulation tasks. The system utilizes gray scale images and can be viewed as a model-based system. It includes general purpose image analysis modules as well as special purpose, task dependent object status recognition modules. Experiments are described to verify the robust performance of the integrated system using a robotic testbed.

  6. The Effect of the Usage of Computer-Based Assistive Devices on the Functioning and Quality of Life of Individuals Who Are Blind or Have Low Vision

    ERIC Educational Resources Information Center

    Rosner, Yotam; Perlman, Amotz

    2018-01-01

    Introduction: The Israel Ministry of Social Affairs and Social Services subsidizes computer-based assistive devices for individuals with visual impairments (that is, those who are blind or have low vision) to assist these individuals in their interactions with computers and thus to enhance their independence and quality of life. The aim of this…

  7. The Effect of Expansion of Vision Span on Reading Speed: A Case Study of EFL Major Students at King Khalid University

    ERIC Educational Resources Information Center

    Kana'an, Basim Hamdan Ibrahim; Rab, Salahud Din Abdul; Siddiqui, Ahlullah

    2014-01-01

    The objective of the study is to demonstrate how and to what extent the expansion of vision span could be a decisive factor in enhancing the reading speed of EFL major students in the English Department at King Khalid University while maintaining their previous level of comprehension. The reading speed of students in the English Department at KKU…

  8. Improving Vision-Based Motor Rehabilitation Interactive Systems for Users with Disabilities Using Mirror Feedback

    PubMed Central

    Martínez-Bueso, Pau; Moyà-Alcover, Biel

    2014-01-01

    Observation is recommended in motor rehabilitation. For this reason, the aim of this study was to experimentally test the feasibility and benefit of including mirror feedback in vision-based rehabilitation systems: we projected the user onto the screen. We conducted a user study using a previously evaluated system that improved the balance and postural control of adults with cerebral palsy. We used a within-subjects design with the two defined feedback conditions (mirror and no-mirror) with two different groups of users (8 with disabilities and 32 without disabilities), using usability measures (time-to-start (Ts) and time-to-complete (Tc)). A two-tailed paired samples t-test confirmed that in the case of disabilities the mirror feedback facilitated the interaction in vision-based systems for rehabilitation. The measured times were significantly worse in the absence of the user's own visual feedback (Ts = 7.09 (P < 0.001) and Tc = 4.48 (P < 0.005)). In vision-based interaction systems, the input device is the user's own body; therefore, it makes sense that feedback should be related to the body of the user. These results suggest that developers and researchers should adopt this improvement in vision-based motor rehabilitation interactive systems. PMID:25295310

  9. Detection and Tracking of Moving Objects with Real-Time Onboard Vision System

    NASA Astrophysics Data System (ADS)

    Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.

    2017-05-01

    Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms which can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimation and compensation of geometric transformations of images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their position. The results can be applied to onboard vision systems for aircraft, including small and unmanned aircraft.
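
    The first two parts map onto standard building blocks; a compressed sketch (OpenCV-based, with illustrative parameter values, not the paper's algorithms) of ego-motion compensation followed by differencing might look like this:

      # Sketch: compensate sensor ego-motion between two grayscale frames,
      # then difference the aligned frames to expose moving objects.
      import cv2
      import numpy as np

      def detect_movers(prev_gray, curr_gray):
          # 1. Estimate the global geometric transform from tracked corners.
          pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                        qualityLevel=0.01, minDistance=8)
          if pts is None:
              return []
          nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                    pts, None)
          good = status.ravel() == 1
          A, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good],
                                             method=cv2.RANSAC)
          if A is None:
              return []
          # 2. Warp the previous frame into the current frame's coordinates.
          h, w = curr_gray.shape
          aligned = cv2.warpAffine(prev_gray, A, (w, h))
          # 3. Residual differences are candidate moving objects.
          diff = cv2.absdiff(curr_gray, aligned)
          _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          return [cv2.boundingRect(c) for c in contours
                  if cv2.contourArea(c) > 50]

    The third part (tracking and prediction) would typically feed these detections into a per-object filter such as a Kalman filter, though the abstract does not specify the tracker used.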

  10. Video image processing

    NASA Technical Reports Server (NTRS)

    Murray, N. D.

    1985-01-01

    Current technology projections indicate a lack of availability of special purpose computing for Space Station applications. Potential functions for video image special purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Also, architectural approaches are being identified and a conceptual design generated. Computationally simple algorithms will be researched and their image/vision effectiveness determined. Suitable algorithms will be implemented in an overall architectural approach that will provide image/vision processing at video rates and that is flexible, selectable, and programmable. Information is given in the form of charts, diagrams and outlines.

  11. Image jitter enhances visual performance when spatial resolution is impaired.

    PubMed

    Watson, Lynne M; Strang, Niall C; Scobie, Fraser; Love, Gordon D; Seidel, Dirk; Manahilov, Velitchko

    2012-09-06

    Visibility of low-spatial frequency stimuli improves when their contrast is modulated at 5 to 10 Hz compared with stationary stimuli. Therefore, temporal modulations of visual objects could enhance the performance of low vision patients, who primarily perceive images of low-spatial frequency content. We investigated the effect of retinal-image jitter on word recognition speed and facial emotion recognition in subjects with central visual impairment. Word recognition speed and accuracy of facial emotion discrimination were measured in volunteers with age-related macular degeneration (AMD) under stationary and jittering conditions. Computer-driven and optoelectronic approaches were used to induce retinal-image jitter with durations of 100 or 166 ms and amplitudes within the range of 0.5° to 2.6° of visual angle. Word recognition speed was also measured for participants with simulated (Bangerter filters) visual impairment. Text jittering markedly enhanced word recognition speed for people with severe visual loss (101 ± 25%), while for those with moderate visual impairment this effect was weaker (19 ± 9%). The ability of low vision patients to discriminate the facial emotions of jittering images improved by a factor of 2. A prototype of optoelectronic jitter goggles produced similar improvement in facial emotion discrimination. Word recognition speed in participants with simulated visual impairment was enhanced for inter-jitter intervals over 100 ms and reduced for shorter intervals. Results suggest that retinal-image jitter with optimal frequency and amplitude is an effective strategy for enhancing visual information processing in the absence of spatial detail. These findings will enable the development of novel tools to improve the quality of life of low vision patients.

  12. Verification Test of Automated Robotic Assembly of Space Truss Structures

    NASA Technical Reports Server (NTRS)

    Rhodes, Marvin D.; Will, Ralph W.; Quach, Cuong C.

    1995-01-01

    A multidisciplinary program has been conducted at the Langley Research Center to develop operational procedures for supervised autonomous assembly of truss structures suitable for large-aperture antennas. The hardware and operations required to assemble a 102-member tetrahedral truss and attach 12 hexagonal panels were developed and evaluated. A brute-force automation approach was used to develop baseline assembly hardware and software techniques. However, as the system matured and operations were proven, upgrades were incorporated and assessed against the baseline test results. These upgrades included the use of distributed microprocessors to control dedicated end-effector operations, machine vision guidance for strut installation, and the use of an expert system-based executive-control program. This paper summarizes the developmental phases of the program, the results of several assembly tests, and a series of proposed enhancements. No problems that would preclude automated in-space assembly of truss structures have been encountered. The test system was developed at a breadboard level, and continued development at an enhanced level is warranted.

  13. An overview of computer vision

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1982-01-01

    An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. The basic approach to computer vision systems, the techniques utilized, applications, the current existing systems and state-of-the-art issues and research requirements, who is doing it and who is funding it, and future trends and expectations are reviewed.

  14. The Efficacy of Optometric Vision Therapy.

    ERIC Educational Resources Information Center

    Journal of the American Optometric Association, 1988

    1988-01-01

    This review aims to document the efficacy and validity of vision therapy for modifying and improving vision functioning. The paper describes the essential components of the visual system and disorders which can be physiologically and clinically identified. Vision therapy is defined as a clinical approach for correcting and ameliorating the effects…

  15. Functional vision and cognition in infants with congenital disorders of the peripheral visual system.

    PubMed

    Dale, Naomi; Sakkalou, Elena; O'Reilly, Michelle; Springall, Clare; De Haan, Michelle; Salt, Alison

    2017-07-01

    To investigate how vision relates to early development by studying vision and cognition in a national cohort of 1-year-old infants with congenital disorders of the peripheral visual system and visual impairment. This was a cross-sectional observational investigation of a nationally recruited cohort of infants with 'simple' and 'complex' congenital disorders of the peripheral visual system. Entry age was 8 to 16 months. Vision level (Near Detection Scale) and non-verbal cognition (sensorimotor understanding, Reynell Zinkin Scales) were assessed. Parents completed demographic questionnaires. Of 90 infants (49 males, 41 females; mean age 13mo, standard deviation [SD] 2.5mo; range 7-17mo), 25 (28%) had profound visual impairment (light perception at best) and 65 (72%) had severe visual impairment (basic 'form' vision). The Near Detection Scale correlated significantly with sensorimotor understanding developmental quotients in the 'total', 'simple', and 'complex' groups (all p<0.001). Age and vision accounted for 48% of sensorimotor understanding variance. Infants with profound visual impairment, especially those in the 'complex' group with known brain involvement, showed the greatest cognitive delay. Lack of vision is associated with delayed early object-manipulative abilities and concepts; 'form' vision appeared to support early developmental advance. This paper provides baseline characteristics for cross-sectional and longitudinal follow-up investigations in progress. A methodological strength of the study was the representativeness of the cohort according to national epidemiological and population census data.

  16. Effects of high-color-discrimination capability spectra on color-deficient vision.

    PubMed

    Perales, Esther; Linhares, João Manuel Maciel; Masuda, Osamu; Martínez-Verdú, Francisco M; Nascimento, Sérgio Miguel Cardoso

    2013-09-01

    Light sources with three spectral bands in specific spectral positions are known to have high-color-discrimination capability. W. A. Thornton hypothesized that they may also enhance color discrimination for color-deficient observers. This hypothesis was tested here by comparing the Rösch-MacAdam color volume for color-deficient observers rendered by three of these singular spectra, two reported previously and one derived in this paper by maximization of the Rösch-MacAdam color solid. It was found that all illuminants tested enhance discriminability for deuteranomalous observers, but their impact on other congenital deficiencies was variable. The best illuminant was the one derived here, as it was clearly advantageous for the two red-green anomalies and for tritanopes and almost neutral for red-green dichromats. We conclude that three-band spectra with high-color-discrimination capability for normal observers do not necessarily produce comparable enhancements for color-deficient observers, but suitable spectral optimization clearly enhances the vision of the color deficient.

  17. Low computation vision-based navigation for a Martian rover

    NASA Technical Reports Server (NTRS)

    Gavin, Andrew S.; Brooks, Rodney A.

    1994-01-01

    Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.

  18. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    DTIC Science & Technology

    2010-03-01

    An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems performing 3D motion capture in real time. This integrated technique is a two-step process… Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D…
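
    As a sketch of what such a calibration enables downstream: once each camera's 3 x 4 projection matrix is known, a point matched between the two views can be reconstructed in 3D. The projection matrices below are placeholders for illustration, not Wiimote calibration results.

      # Sketch: triangulate a 3D point from matched 2D observations in a
      # calibrated stereo pair (projection matrices are placeholders).
      import cv2
      import numpy as np

      # 3x4 projection matrices from a prior calibration step (illustrative:
      # identity intrinsics, second camera translated along x).
      P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
      P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

      pt1 = np.array([[320.0], [240.0]])   # pixel seen by camera 1
      pt2 = np.array([[300.0], [240.0]])   # matching pixel in camera 2

      X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)   # homogeneous 4x1
      X = (X_h[:3] / X_h[3]).ravel()
      print("reconstructed 3D point:", X)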

  19. Software model of a machine vision system based on the common house fly.

    PubMed

    Madsen, Robert; Barrett, Steven; Wilcox, Michael

    2005-01-01

    The vision system of the common house fly has many properties, such as hyperacuity and parallel structure, which would be advantageous in a machine vision system. A software model has been developed which is ultimately intended to be a tool to guide the design of an analog real-time vision system. The model starts by laying out cartridges over an image. The cartridges are analogous to the ommatidia of the fly's eye and each contains seven photoreceptors with Gaussian profiles. The spacing between photoreceptors is variable, providing for more or less detail as needed. The cartridges report what type of features they see, and neighboring cartridges share information to construct a feature map.
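
    A minimal sketch of the sampling stage described above: one cartridge of seven Gaussian-profile photoreceptors (a center plus six hexagonal neighbors) applied to a grayscale image. The spacing and Gaussian width are illustrative assumptions, not the model's actual parameters.

      # Sketch: one "cartridge" of 7 Gaussian-profile photoreceptors
      # (1 center + 6 hexagonal neighbors) sampling a grayscale image.
      import numpy as np

      def gaussian_response(img, cx, cy, sigma=2.0, radius=6):
          """Gaussian-weighted intensity around (cx, cy)."""
          ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
          w = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
          patch = img[cy - radius:cy + radius + 1,
                      cx - radius:cx + radius + 1]
          return float((patch * w).sum() / w.sum())

      def cartridge_responses(img, cx, cy, spacing=4):
          angles = np.arange(6) * np.pi / 3          # hexagonal layout
          offsets = [(0, 0)] + [(int(round(spacing * np.cos(a))),
                                 int(round(spacing * np.sin(a))))
                                for a in angles]
          return [gaussian_response(img, cx + dx, cy + dy)
                  for dx, dy in offsets]

      img = np.random.rand(64, 64)                   # stand-in image
      print(cartridge_responses(img, 32, 32))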

  20. A dental vision system for accurate 3D tooth modeling.

    PubMed

    Zhang, Li; Alemzadeh, K

    2006-01-01

    This paper describes an active vision system based reverse engineering approach to extract three-dimensional (3D) geometric information from teeth and transfer this information into Computer-Aided Design/Computer-Aided Manufacture (CAD/CAM) systems, to improve the accuracy of 3D teeth models and, at the same time, the quality of the construction units, thereby helping patient care. The vision system involves the development of a dental vision rig, edge detection, boundary tracing, and fast and accurate 3D modeling from a sequence of sliced silhouettes of physical models. The rig is designed using engineering design methods such as a concept selection matrix and a weighted objectives evaluation chart. Reconstruction results and an accuracy evaluation are presented for the digitization of different teeth models.

  1. Experience-dependent plasticity from eye opening enables lasting, visual cortex-dependent enhancement of motion vision.

    PubMed

    Prusky, Glen T; Silver, Byron D; Tschetter, Wayne W; Alam, Nazia M; Douglas, Robert M

    2008-09-24

    Developmentally regulated plasticity of vision has generally been associated with "sensitive" or "critical" periods in juvenile life, wherein visual deprivation leads to loss of visual function. Here we report an enabling form of visual plasticity that commences in infant rats from eye opening, in which daily threshold testing of optokinetic tracking, amid otherwise normal visual experience, stimulates enduring, visual cortex-dependent enhancement (>60%) of the spatial frequency threshold for tracking. The perceptual ability to use spatial frequency in discriminating between moving visual stimuli is also improved by the testing experience. The capacity for inducing enhancement is transitory and effectively limited to infancy; however, enhanced responses are not consolidated and maintained unless in-kind testing experience continues uninterrupted into juvenile life. The data show that selective visual experience from infancy can alone enable visual function. They also indicate that plasticity associated with visual deprivation may not be the only cause of developmental visual dysfunction, because we found that experientially inducing enhancement in late infancy, without subsequent reinforcement of the experience in early juvenile life, can lead to enduring loss of function.

  2. Eleven Colors That Are Almost Never Confused

    NASA Astrophysics Data System (ADS)

    Boynton, Robert M.

    1989-08-01

    Setting aside the complex psychological effects of color, related to esthetics, fashion, and mood, three relatively basic functions of color vision, which can be examined scientifically, are discernible. (1) With the eye in a given state of adaptation, color vision allows the perception of signals that otherwise would be below threshold, and therefore lost to perception. Evidence for this comes from a variety of two-color threshold experiments. (2) Visible contours can be maintained by color differences alone, regardless of the relative radiances of the two parts of the field whose junction defines the border. For achromatic vision, contour disappears at the isoluminant point. (3) Color specifies what seems to be an absolute property of a surface, one that enhances its recognizability and allows a clearer separation and classification of non-contiguous elements in the visual field.

  3. Novel treatment for radiation optic neuropathy with intravenous bevacizumab.

    PubMed

    Farooq, Osman; Lincoff, Norah S; Saikali, Nicolas; Prasad, Dheerendra; Miletich, Robert S; Mechtler, Laszlo L

    2012-12-01

    Radiation optic neuropathy is a devastating form of vision loss that can occur months to years after radiation therapy for tumors and other lesions located in close proximity to the visual pathways. We present the case of a 24-year-old woman who underwent external beam radiation for treatment of a tectal pilocytic astrocytoma, and 5 years later she developed bilateral radiation optic neuropathy and radiation necrosis of the right temporal lobe. We opted to treat her with intravenous bevacizumab with 3 doses every 3 weeks, as well as dexamethasone and pentoxifylline. After the first infusion of bevacizumab, the patient noted improvement in vision and color vision, and a follow-up magnetic resonance imaging study showed that the previous enhancement of the optic nerves and chiasm was diminishing. Her vision improved dramatically and has remained stable over a 3-year period.

  4. Health System Vision of Iran in 2025

    PubMed Central

    Rostamigooran, N; Esmailzadeh, H; Rajabi, F; Majdzadeh, R; Larijani, B; Dastgerdi, M Vahid

    2013-01-01

    Background: Vast changes in disease features and risk factors, and the influence of demographic, economic, and social trends on the health system, make formulating a long-term evolutionary plan unavoidable. In this regard, determining the health system's vision over a long-term horizon is a primary stage. Method: After a narrative and purposeful review of documents, major themes of the vision statement were determined and its content was organized in a work group consisting of selected managers and experts of the health system. The final content of the statement was prepared after several sessions of group discussion and after receiving the ideas of policy makers and experts of the health system. Results: The vision statement in the evolutionary plan of the health system is considered to be: "a progressive community in the course of human prosperity which has attained a developed level of health standards in the light of the most efficient and equitable health system in the visionary region, with regard to health in all policies, accountability and innovation". An explanatory context was also compiled to create a complete image of the vision. Conclusion: Social values and leaders' strategic goals, as well as main orientations, are generally mentioned in a vision statement. In this statement, prosperity and justice are considered major values and ideals in Iranian society; development and excellence in the region are the leaders' strategic goals; and efficiency and equality, health in all policies, and accountability and innovation are the main orientations of the health system. PMID:23865011

  5. Visual Search in the Real World: Color Vision Deficiency Affects Peripheral Guidance, but Leaves Foveal Verification Largely Unaffected.

    PubMed

    Kugler, Günter; 't Hart, Bernard M; Kohlbecher, Stefan; Bartl, Klaus; Schumann, Frank; Einhäuser, Wolfgang; Schneider, Erich

    2015-01-01

    People with color vision deficiencies report numerous limitations in daily life, restricting, for example, their access to some professions. However, they use basic color terms systematically and in a similar manner as people with normal color vision. We hypothesize that a possible explanation for this discrepancy between color perception and behavioral consequences might be found in the gaze behavior of people with color vision deficiency. A group of participants with color vision deficiencies and a control group performed several search tasks in a naturalistic setting on a lawn. All participants wore a mobile eye-tracking-driven camera with a high foveal image resolution (EyeSeeCam). Search performance as well as fixations of objects of different colors were examined. Search performance was similar in both groups in a color-unrelated search task as well as in a search for yellow targets. While searching for red targets, participants with color vision deficiencies exhibited a strongly degraded performance. This was closely matched by the number of fixations on red objects shown by the two groups. Importantly, once they fixated a target, participants with color vision deficiencies exhibited only few identification errors. In contrast to controls, participants with color vision deficiencies are not able to enhance their search for red targets on a (green) lawn by an efficient guiding mechanism. The data indicate that the impaired guiding is the main influence on search performance, while foveal identification (verification) is largely unaffected by the color vision deficiency.

  6. Evaluation of a stereoscopic camera-based three-dimensional viewing workstation for ophthalmic surgery.

    PubMed

    Bhadri, Prashant R; Rowley, Adrian P; Khurana, Rahul N; Deboer, Charles M; Kerns, Ralph M; Chong, Lawrence P; Humayun, Mark S

    2007-05-01

    To evaluate the effectiveness of a prototype stereoscopic camera-based viewing system (Digital Microsurgical Workstation, 3D Vision Systems, Irvine, California, USA) for anterior and posterior segment ophthalmic surgery. Institutional prospective study. Anterior and posterior segment surgeons performed designated standardized tasks on porcine eyes after training on prosthetic plastic eyes. Both anterior and posterior segment surgeons were able to complete tasks requiring minimal or moderate stereoscopic viewing. The results indicate that the system provides improved ergonomics. Improvements in key viewing performance areas would further enhance its value over a conventional operating microscope. The performance of the prototype system is not on par with the planned commercial system. With continued development of this technology, the three-dimensional system may become a novel viewing system for ophthalmic surgery, with improved ergonomics relative to traditional microscopic viewing.

  7. Transformational Spaceport and Range Concept of Operations: A Vision to Transform Ground and Launch Operations

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The Transformational Concept of Operations (CONOPS) provides a long-term, sustainable vision for future U.S. space transportation infrastructure and operations. This vision presents an interagency concept, developed cooperatively by the Department of Defense (DoD), the Federal Aviation Administration (FAA), and the National Aeronautics and Space Administration (NASA), for the upgrade, integration, and improved operation of major infrastructure elements of the nation's space access systems. The interagency vision described in the Transformational CONOPS would transform today's space launch infrastructure into a shared system that supports worldwide operations for a variety of users. The system concept is sufficiently flexible and adaptable to support new types of missions for exploration, commercial enterprise, and national security, as well as to endure further into the future, when space transportation technology may be sufficiently advanced to enable routine public space travel as part of the global transportation system. The vision for future space transportation operations is based on a system-of-systems architecture that integrates the major elements of the future space transportation system (transportation nodes (spaceports), flight vehicles and payloads, tracking and communications assets, and flight traffic coordination centers) into a transportation network that concurrently accommodates multiple types of mission operators, payloads, and vehicle fleets. This system concept also establishes a common framework for defining a detailed CONOPS for the major elements of the future space transportation system. The resulting set of four CONOPS describes the common vision for a shared future space transportation system (FSTS) infrastructure from a variety of perspectives.

  8. The Impact of Providing Vision Screening and Free Eyeglasses on Academic Outcomes: Evidence from a Randomized Trial in Title I Elementary Schools in Florida

    PubMed Central

    West, Kristine L.; Lee, Jongwook

    2018-01-01

    More than 20 percent of all school-aged children in the United States have vision problems, and low-income and minority children are disproportionately likely to have unmet vision care needs. Vision screening is common in U.S. schools, but it remains an open question whether screening alone is sufficient to improve student outcomes. We implemented a multi-armed randomized controlled trial to evaluate the impact of vision screening, and of vision screening accompanied by eye exams and eyeglasses, provided by a non-profit organization to Title I elementary schools in three large central Florida school districts. We find that providing additional/enhanced screening alone is generally insufficient to improve student achievement in math and reading. In contrast, providing screening along with free eye exams and free eyeglasses to students with vision problems improved student achievement as measured by standardized test scores. Averaging over all students (including those without vision problems), this more comprehensive intervention increased the probability of passing the Florida Comprehensive Achievement Tests (FCAT) in reading and math by approximately 2.0 percentage points. We also present evidence that this impact fades out over time, suggesting that follow-up actions after the intervention may be necessary to sustain the estimated achievement gains. PMID:29693366

  9. Computer vision

    NASA Technical Reports Server (NTRS)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of describing two- and three-dimensional worlds are discussed. The representation of features such as texture, edges, curves, and corners is detailed. Recognition methods are described in which cross-correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.

  10. The genetics of normal and defective color vision

    PubMed Central

    Neitz, Jay; Neitz, Maureen

    2011-01-01

    The contributions of genetics research to the science of normal and defective color vision over the previous few decades are reviewed emphasizing the developments in the 25 years since the last anniversary issue of Vision Research. Understanding of the biology underlying color vision has been vaulted forward through the application of the tools of molecular genetics. For all their complexity, the biological processes responsible for color vision are more accessible than for many other neural systems. This is partly because of the wealth of genetic variations that affect color perception, both within and across species, and because components of the color vision system lend themselves to genetic manipulation. Mutations and rearrangements in the genes encoding the long, middle, and short wavelength sensitive cone pigments are responsible for color vision deficiencies and mutations have been identified that affect the number of cone types, the absorption spectrum of the pigments, the functionality and viability of the cones, and the topography of the cone mosaic. The addition of an opsin gene, as occurred in the evolution of primate color vision, and has been done in experimental animals can produce expanded color vision capacities and this has provided insight into the underlying neural circuitry. PMID:21167193

  11. Image understanding systems based on the unifying representation of perceptual and conceptual information and the solution of mid-level and high-level vision problems

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2001-10-01

    Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on these principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks, and the human brain appears able to emulate similar graph/network models. This implies an important paradigm shift in our knowledge about the brain, from neural networks to cortical software. Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes the region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes such as clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena such as shape from shading and occlusion are results of such analysis. This approach offers the opportunity not only to explain frequently unexplainable results of cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the 'what' and 'where' visual pathways. Such systems can open new horizons for the robotic and computer vision industries.

  12. Cyborg mini-trainer.

    PubMed

    Mosso, José L; Nieto, Jesus J; Carbajal, Manuel F; Marmolejo, Jorge; Ochoa, Enrique; De La Fuente, Mireya; Almazan, Andrew; Obrador, Tomas

    2009-01-01

    We present the smallest surgical trainer, with a total weight of 400 g, built in aluminum, 25 cm long, 24 cm wide, and 23 cm high. It is a system integrating a small open module, a lamp, and a microcamera connected to a head-mounted display (HMD), through which the trainee views the workspace. It holds two endoscopic instruments, and items to make knots or sutures and enhance visual-motor coordination. This surgical trainer is the smallest worldwide, easy to install, and easy to carry.

  13. Fabrication of Advanced Thermoelectric Materials by Hierarchical Nanovoid Generation

    NASA Technical Reports Server (NTRS)

    Park, Yeonjoon (Inventor); Elliott, James R. (Inventor); Stoakley, Diane M. (Inventor); Chu, Sang-Hyon (Inventor); King, Glen C. (Inventor); Kim, Jae-Woo (Inventor); Choi, Sang Hyouk (Inventor); Lillehei, Peter T. (Inventor)

    2011-01-01

    A novel method to prepare an advanced thermoelectric material yields hierarchical structures embedded with nanometer-sized voids, which are key to enhancement of the thermoelectric performance. A solution-based thin-film deposition technique enables preparation of a stable film of thermoelectric material and void generator (voigen). A subsequent thermal process creates the hierarchical nanovoid structure inside the thermoelectric material. Potential application areas of this advanced thermoelectric material with nanovoid structure are commercial applications (electronics cooling), medical and scientific applications (biological analysis devices, medical imaging systems), telecommunications, and defense and military applications (night vision equipment).

  14. Machine vision system for online inspection of freshly slaughtered chickens

    USDA-ARS?s Scientific Manuscript database

    A machine vision system was developed and evaluated for the automation of online inspection to differentiate freshly slaughtered wholesome chickens from systemically diseased chickens. The system consisted of an electron-multiplying charge-coupled-device camera used with an imaging spectrograph and ...

  15. Trauma-Informed Part C Early Intervention: A Vision, A Challenge, A New Reality

    ERIC Educational Resources Information Center

    Gilkerson, Linda; Graham, Mimi; Harris, Deborah; Oser, Cindy; Clarke, Jane; Hairston-Fuller, Tody C.; Lertora, Jessica

    2013-01-01

    Federal directives require that any child less than 3 years old with a substantiated case of abuse be referred to the early intervention (EI) system. This article details the need and presents a vision for a trauma-informed EI system. The authors describe two exemplary program models which implement this vision and recommend steps which the field…

  16. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment, to make decisions, and to learn from experience. The advanced inspection system is planned to control a robotic manipulator arm, an unmanned ground vehicle, and cameras remotely, automatically, and autonomously. Many computer vision, image processing, and machine learning techniques are available as open source for using vision as sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components, to identify open-source algorithms and techniques, and to integrate robot hardware.

  17. Enhanced vision flight deck technology for commercial aircraft low-visibility surface operations

    NASA Astrophysics Data System (ADS)

    Arthur, Jarvis J.; Norman, R. M.; Kramer, Lynda J.; Prinzel, Lawerence J.; Ellis, Kyle K.; Harrison, Stephanie J.; Comstock, J. R.

    2013-05-01

    NASA Langley Research Center and the FAA collaborated in an effort to evaluate the effect of Enhanced Vision (EV) technology display in a commercial flight deck during low visibility surface operations. Surface operations were simulated at the Memphis, TN (FAA identifier: KMEM) airfield during nighttime with 500 Runway Visual Range (RVR) in a high-fidelity, full-motion simulator. Ten commercial airline flight crews evaluated the efficacy of various EV display locations and parallax and minification effects. The research paper discusses qualitative and quantitative results of the simulation experiment, including the effect of EV display placement on visual attention, as measured by the use of non-obtrusive oculometry and pilot mental workload. The results demonstrated the potential of EV technology to enhance situation awareness which is dependent on the ease of access and location of the displays. Implications and future directions are discussed.

  19. The role of intrinsic motivation in a group of low vision patients participating in a self-management programme to enhance self-efficacy and quality of life.

    PubMed

    Tay, Kay Chai Peter; Drury, Vicki Blair; Mackey, Sandra

    2014-02-01

    Self-management programmes have previously been found to decrease health problems, enhance quality of life and increase independence. However, there is no literature that examines the influence of the participants' intrinsic motivation on the outcomes of such programmes. This study examined the role of intrinsic motivation in a pilot low vision self-management programme to enhance self-efficacy and quality of life of the programme participants. A positive association was observed between the female participants' perceived choice and perceived competence, two underlying dimensions of the Intrinsic Motivation Inventory. In addition, a positive correlation was observed between the younger participants' perceived competence and the change in their quality of life. The findings provide some support for consideration of participants' intrinsic motivation in the development of effective self-management programmes.

  20. Landmark navigation and autonomous landing approach with obstacle detection for aircraft

    NASA Astrophysics Data System (ADS)

    Fuerst, Simon; Werner, Stefan; Dickmanns, Dirk; Dickmanns, Ernst D.

    1997-06-01

    A machine perception system for aircraft and helicopters using multiple sensor data for state estimation is presented. By combining conventional aircraft sensors such as gyros, accelerometers, an artificial horizon, aerodynamic measuring devices, and GPS with vision data taken by conventional CCD cameras mounted on a pan-tilt platform, the position of the craft can be determined, as well as its position relative to runways and natural landmarks. The vision data from natural landmarks are used to improve position estimates during autonomous missions. A built-in landmark management module decides which landmark the vision system should focus on, depending on the distance to the landmark and the aspect conditions. More complex landmarks, such as runways, are modeled with different levels of detail that are activated depending on range. A supervisor process compares vision data and GPS data to detect mistracking of the vision system, e.g., due to poor visibility, and tries to reinitialize the vision system or to set focus on another available landmark. During the landing approach, obstacles such as trucks and airplanes can be detected on the runway. The system has been tested in real time within a hardware-in-the-loop simulation: simulated aircraft measurements corrupted by noise and other characteristic sensor errors were fed into the machine perception system, and the image processing module for relative state estimation was driven by computer-generated imagery. Results from real-time simulation runs are given.
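
    The supervisor logic described above reduces to a consistency gate between the two position estimates; below is a hedged sketch in which the gate threshold and the recovery actions are assumptions rather than the paper's actual design.

      # Sketch: flag vision mistracking when the vision-based position
      # estimate drifts too far from GPS (threshold is illustrative).
      import numpy as np

      def vision_consistent(vision_pos, gps_pos, gps_sigma=5.0, gate=3.0):
          """True if the vision estimate agrees with GPS within the gate."""
          residual = np.linalg.norm(np.asarray(vision_pos)
                                    - np.asarray(gps_pos))
          return residual <= gate * gps_sigma

      vision_pos = (1012.0, -204.0, 305.0)   # m, from landmark tracking
      gps_pos = (1009.5, -201.0, 306.5)      # m, from GPS

      if vision_consistent(vision_pos, gps_pos):
          print("vision and GPS consistent")
      else:
          # e.g. reinitialize the tracker or switch to another landmark
          print("mistracking suspected: reinitialize or change landmark")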

  1. Precision of computer-assisted core decompression drilling of the femoral head.

    PubMed

    Beckmann, J; Goetz, J; Baethis, H; Kalteis, T; Grifka, J; Perlick, L

    2006-08-01

    Osteonecrosis of the femoral head is a local destructive disease that progresses into devastating stages. Left untreated, it mostly leads to severe secondary osteoarthrosis and early endoprosthetic joint replacement. Core decompression by exact drilling into the ischemic areas can be performed in early stages according to Ficat or ARCO. Computer-aided surgery might enhance the precision of the drilling and lower the radiation exposure time for both staff and patients. The aim of this study was to evaluate the precision of the fluoroscopically based VectorVision navigation system in an in vitro model. Thirty sawbones were prepared with a defect filled with a radiopaque gypsum sphere mimicking the osteonecrosis. Twenty sawbones were drilled under guidance of the intraoperative navigation system VectorVision (BrainLAB, Munich, Germany) and 10 sawbones under fluoroscopic control only. No gypsum sphere was missed. There was a statistically significant difference regarding the three-dimensional deviation (Euclidean norm) as well as the maximum deviation in the x-, y- or z-direction (maximum norm) from the desired mid-point of the lesion, with means of 0.51 and 0.4 mm in the navigated group and 1.1 and 0.88 mm in the control group, respectively. Furthermore, a significant difference was found in the number of drilling corrections as well as the radiation time needed: no second drilling or correction of drilling direction was necessary in the navigated group, compared to 1.4 in the control group. The radiation time needed was less than 1 s, compared to 3.1 s, respectively. The fluoroscopy-based VectorVision navigation system thus shows high feasibility for computer-guided drilling, with a clear reduction of radiation exposure time, and can be integrated into clinical routine. The additional time needed is acceptable given the simultaneous reduction of radiation time.
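
    For clarity, the two deviation measures reported above are the standard vector norms of the drilling error d = (dx, dy, dz) relative to the lesion mid-point:

      \[
      \|d\|_2 = \sqrt{d_x^2 + d_y^2 + d_z^2} \quad \text{(Euclidean norm)},
      \qquad
      \|d\|_\infty = \max\bigl(|d_x|, |d_y|, |d_z|\bigr) \quad \text{(maximum norm)}.
      \]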

  2. Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems

    NASA Technical Reports Server (NTRS)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

    Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras, and we compare the performance of several high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. Precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but done in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; binning provided the greatest improvement for the un-intensified camera systems, which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and a longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
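
    The two precision-enhancement steps, row-wise digital binning and a longer inter-frame delay, can be illustrated in a few lines; the pixel pitch and displacement statistics below are invented for the sketch, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    dt = 65e-6            # inter-frame delay: 65 microseconds
    pixel_pitch = 20e-6   # metres per pixel in the object plane (assumed)

    def bin_rows(img, n=8):
        """Sum each group of n adjacent rows (digital, post-hoc binning)."""
        h = (img.shape[0] // n) * n
        return img[:h].reshape(-1, n, img.shape[1]).sum(axis=1)

    # Binning a noisy (Poisson) image raises the signal-to-noise ratio
    # of the summed rows by roughly sqrt(8) for the 8-row case above.
    img = rng.poisson(2.0, size=(64, 128)).astype(float)
    binned = bin_rows(img)

    # Precision = standard deviation of repeated single-shot velocities,
    # each computed from the tagged line's displacement over dt.
    displacements_px = 5.0 + rng.normal(0.0, 0.4, size=300)  # invented
    velocities = displacements_px * pixel_pitch / dt
    print(f"mean {velocities.mean():.1f} m/s, "
          f"precision (std) {velocities.std():.2f} m/s")
    ```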

  3. Can Effective Synthetic Vision System Displays be Implemented on Limited Size Display Spaces?

    NASA Technical Reports Server (NTRS)

    Comstock, J. Raymond, Jr.; Glaab, Lou J.; Prinzel, Lance J.; Elliott, Dawn M.

    2004-01-01

    The Synthetic Vision Systems (SVS) element of the NASA Aviation Safety Program is striving to eliminate poor visibility as a causal factor in aircraft accidents and to enhance the operational capabilities of all types of aircraft. To accomplish these safety and situation-awareness improvements, SVS concepts are designed to provide a clear view of the world ahead through the display of computer-generated imagery derived from an onboard database of terrain, obstacle and airport information. An important issue for the SVS concept is whether useful and effective SVS displays can be implemented on limited-size display spaces, as would be required to implement this technology on older aircraft with physically smaller instrument spaces. In this study, prototype SVS displays were implemented on the following display sizes: (a) size "A" (e.g. 757 EADI), (b) form factor "D" (e.g. 777 PFD), and (c) new size "X" (rectangular flat panel, approximately 20 x 25 cm). Testing was conducted in a high-resolution graphics simulation facility at NASA Langley Research Center. Specific issues under test included the display size as noted above and the field of view (FOV) shown on the display; directly related to the FOV is the degree of minification of the displayed image. In simulated approaches with display size and FOV conditions held constant, no significant differences by these factors were found. Preferred FOV was determined from approaches during which pilots could select the FOV. Mean preference ratings for FOV were in the following order: (1) 30 deg, (2) unity, (3) 60 deg and (4) 90 deg, and this ordering held for all display sizes tested. Limitations of the present study and future research directions are discussed.
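
    The relation between depicted FOV and minification follows from standard display geometry: the minification factor is the depicted scene FOV divided by the angle the display subtends at the pilot's eye, and unity means the two match. The display widths and design eye distance below are assumptions for illustration, not values from the study.

    ```python
    import math

    def geometric_fov_deg(width_cm, eye_distance_cm):
        """Angle the display's width subtends at the design eye point."""
        return math.degrees(2 * math.atan(width_cm / (2 * eye_distance_cm)))

    # Illustrative display widths (cm) -- placeholders, not the study's specs.
    for name, width in [("size A", 12.7), ("size D", 16.5), ("size X", 25.0)]:
        g = geometric_fov_deg(width, 71.0)    # assumed design eye distance
        for depicted_fov in (30.0, 60.0, 90.0):
            print(f"{name}: {depicted_fov:3.0f} deg depicted on {g:4.1f} deg "
                  f"of display -> minification {depicted_fov / g:.1f}x")
    ```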

  4. Polarized object detection in crabs: a two-channel system.

    PubMed

    Basnak, Melanie Ailín; Pérez-Schuster, Verónica; Hermitte, Gabriela; Berón de Astrada, Martín

    2018-05-25

    Many animal species take advantage of polarization vision for vital tasks such as orientation, communication and contrast enhancement. Previous studies have suggested that decapod crustaceans use a two-channel polarization system for contrast enhancement. Here, we characterize the polarization contrast sensitivity in a grapsid crab. We estimated the polarization contrast sensitivity of the animals by quantifying both their escape response and changes in heart rate when presented with polarized motion stimuli. The motion stimulus consisted of an expanding disk with an 82 deg polarization difference between the object and the background. More than 90% of the animals responded by freezing or trying to avoid the polarized stimulus. In addition, we co-rotated the electric vector (e-vector) orientations of the light from the object and background in increments of 30 deg and found that the animals' escape response varied periodically with a 90 deg period. Maximum escape responses were obtained for object and background e-vectors near the vertical and horizontal orientations. Changes in cardiac response showed parallel results, but also a minimum response when the e-vectors of object and background were shifted by 45 deg with respect to the maxima. These results are consistent with an orthogonal receptor arrangement for the detection of polarized light, in which two channels are aligned with the vertical and horizontal orientations. It has been hypothesized that animals with object-based polarization vision rely on a two-channel detection system analogous to that of color processing in dichromats. Our results, obtained by systematically varying the e-vectors of object and background, provide strong empirical support for this theoretical model of polarized object detection. © 2018. Published by The Company of Biologists Ltd.
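
    A minimal opponent-channel model makes the observed 90 deg periodicity plausible. Assuming two analysers aligned with the vertical and horizontal, each with a Malus's-law cos-squared response, the magnitude of the differential signal between object and background repeats every 90 deg of co-rotation. This is a sketch of the hypothesized mechanism, not the authors' analysis code.

    ```python
    import numpy as np

    # Two orthogonal channels (vertical/horizontal analysers); the
    # opponent signal is the difference of their cos^2 responses.
    def opponent(angle_rad):
        return np.cos(angle_rad) ** 2 - np.sin(angle_rad) ** 2  # cos(2a)

    theta_deg = np.arange(0, 181, 30)             # co-rotated object e-vector
    theta = np.radians(theta_deg)
    offset = np.radians(82)                       # object-background difference
    contrast = np.abs(opponent(theta) - opponent(theta + offset))

    for deg, c in zip(theta_deg, contrast):
        print(f"object e-vector {deg:3d} deg -> opponent contrast {c:.2f}")
    # The printed contrast repeats with a 90 deg period and peaks near
    # the vertical/horizontal orientations, matching the behaviour above.
    ```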

  5. An Examination of the Changes in Science Teaching Orientations and Technology-Enhanced Tools for Student Learning in the Context of Professional Development

    NASA Astrophysics Data System (ADS)

    Campbell, Todd; Zuwallack, Rebecca; Longhurst, Max; Shelton, Brett E.; Wolf, Paul G.

    2014-07-01

    This research examines how science teaching orientations and beliefs about technology-enhanced tools change over time in professional development (PD). The primary data sources for this study were the learning journals of eight eighth-grade science teachers at the beginning and conclusion of a year of PD. Based on the analysis, Information Transmission (IT) and Struggling with Standards-Based Reform (SSBR) profiles were found at the beginning of the PD, while SSBR and Standards-Based Reform (SBR) profiles were identified at its conclusion. All profiles exhibited Vision I beliefs about the goals and purposes of science education, while only the SBR profile exhibited Vision II goals and purposes for science teaching. The IT profile demonstrated naïve or unrevealed beliefs about the nature of science, while the SSBR and SBR profiles had more sophisticated beliefs in this area. The IT profile was grounded in more teacher-centered beliefs about science teaching and learning, whereas the other two profiles revealed more student-centered beliefs. While no beliefs about technology-enhanced tools were found for the IT profile, such beliefs were found for the other two profiles. Our findings suggest promising implications for (a) Roberts' Vision II as a central support for reform efforts, (b) situating technology-enhanced tools within the beliefs about science teaching and learning dimension of science teaching orientations, and (c) revealing how teacher orientations develop as a result of PD.

  6. Facial identification in very low-resolution images simulating prosthetic vision.

    PubMed

    Chang, M H; Kim, H S; Shin, J H; Park, K S

    2012-08-01

    Familiar face identification is important to blind or visually impaired patients and can be achieved using a retinal prosthesis. Nevertheless, there are limitations in delivering facial images with a resolution sufficient to distinguish facial features, such as the eyes and nose, through the multichannel electrode arrays used in current visual prostheses. This study verifies the feasibility of familiar face identification under low-resolution prosthetic vision and proposes an edge-enhancement method to deliver more, and higher-quality, visual information. We first generated a contrast-enhanced image and an edge image by applying the Sobel edge detector, and blocked each of them by averaging. Then, we subtracted the blocked edge image from the blocked contrast-enhanced image and produced a pixelized image imitating an array of phosphenes. Before subtraction, every gray value of the edge image was weighted at 50% (mode 2), 75% (mode 3) or 100% (mode 4); in mode 1, the facial image was blocked and pixelized with no further processing. The most successful identification was achieved with mode 3 at every resolution in terms of the identification index, which covers both accuracy and correct response time. We also found that subjects recognized a distinctive face more accurately and faster than the other given facial images, even under low-resolution prosthetic vision. Every subject could identify familiar faces even in very low-resolution images, and the proposed edge-enhancement method appears to contribute to intermediate-stage visual prostheses.
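
    The processing chain is concrete enough to sketch. The following is a hedged reconstruction of mode 3 (edges weighted at 75%) using OpenCV; the file name, histogram equalization as the contrast-enhancement step and the 16 x 16 phosphene grid are assumptions, not specifics from the paper.

    ```python
    import cv2
    import numpy as np

    # Mode-3 sketch: contrast enhancement, Sobel edge image, block
    # averaging, weighted subtraction, pixelization. "face.png" and the
    # 16x16 phosphene grid are placeholders.
    img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)

    contrast = cv2.equalizeHist(img).astype(np.float32)  # contrast-enhanced
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
    edges = cv2.magnitude(gx, gy)
    edges *= 255.0 / (edges.max() + 1e-6)                # edge image, 0..255

    def block(a, grid=(16, 16)):
        """Block-average to one value per phosphene, then blow back up."""
        small = cv2.resize(a, grid, interpolation=cv2.INTER_AREA)
        return cv2.resize(small, (a.shape[1], a.shape[0]),
                          interpolation=cv2.INTER_NEAREST)

    # Subtract the blocked edge image, weighted at 75% (mode 3).
    out = np.clip(block(contrast) - 0.75 * block(edges), 0, 255)
    cv2.imwrite("phosphene_preview.png", out.astype(np.uint8))
    ```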

  7. The remarkable visual capacities of nocturnal insects: vision at the limits with small eyes and tiny brains

    PubMed Central

    2017-01-01

    Nocturnal insects have evolved remarkable visual capacities despite small eyes and tiny brains. They can see colour, control flight and land, react to faint movements in their environment, navigate using dim celestial cues and find their way home after a long and tortuous foraging trip using learned visual landmarks. These impressive visual abilities occur at light levels at which only a trickle of photons is absorbed by each photoreceptor, raising the question of how the visual system nonetheless generates the reliable signals needed to steer behaviour. In this review, I attempt to provide an answer to this question. Part of the answer lies in their compound eyes, which maximize light capture. Part lies in the slow responses and high gains of their photoreceptors, which improve the reliability of visual signals. And a very large part lies in the spatial and temporal summation of these signals in the optic lobe, a strategy that substantially enhances contrast sensitivity in dim light and allows nocturnal insects to see a brighter world, albeit a slower and coarser one. What is abundantly clear, however, is that during their evolution insects have overcome several serious potential visual limitations, endowing them with truly extraordinary night vision. This article is part of the themed issue ‘Vision in dim light’. PMID:28193808
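
    The summation argument can be made quantitative: photon arrivals are Poisson, so a catch of N photons has a signal-to-noise ratio of about sqrt(N), and pooling k receptors or k time steps multiplies the SNR by sqrt(k), at the cost of spatial or temporal resolution. A small sketch with invented numbers:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    mean_catch = 4.0                      # photons/receptor/integration time
    single = rng.poisson(mean_catch, 100000)
    pooled = rng.poisson(mean_catch * 16, 100000)   # 16 receptors summed

    # SNR of a Poisson signal is mean/std = sqrt(mean), so pooling 16
    # receptors should improve the SNR by a factor of sqrt(16) = 4.
    print(f"single-receptor SNR ~ {single.mean() / single.std():.1f}")
    print(f"pooled SNR ~ {pooled.mean() / pooled.std():.1f}")
    ```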

  8. Content-aware dark image enhancement through channel division.

    PubMed

    Rivera, Adin Ramirez; Ryu, Byungyong; Chae, Oksam

    2012-09-01

    Current contrast enhancement algorithms occasionally produce artifacts, overenhancement and unnatural effects in the processed images, and these drawbacks worsen for images taken under poor illumination. In this paper, we propose a content-aware algorithm that enhances dark images, sharpens edges, reveals details in textured regions and preserves the smoothness of flat regions. The algorithm produces an ad hoc transformation for each image, adapting the mapping functions to each image's characteristics to produce the maximum enhancement. We analyze the contrast of the image in the boundary and textured regions and group the information with common characteristics. These groups model the relations within the image, from which we extract the transformation functions. The results are then adaptively mixed, taking the characteristics of the human visual system into account, to boost the details in the image. Results show that the algorithm can automatically process a wide range of images (e.g., mixed shadow and bright areas, outdoor and indoor lighting, and face images) without introducing artifacts, which is an improvement over many existing methods.
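
    The paper's channel-division algorithm is only summarized above, so the following is a loose illustration of the general idea (region-dependent enhancement: brighten everywhere, sharpen only where the image is textured, leave flat regions smooth), not the authors' method; the file name and parameters are placeholders.

    ```python
    import cv2
    import numpy as np

    # Loose sketch of content-aware dark-image enhancement: a global
    # gamma lift plus texture-gated unsharp masking. NOT the channel-
    # division algorithm of the paper above.
    img = cv2.imread("dark.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255

    gamma = img ** 0.5                              # brighten dark pixels
    grad = cv2.magnitude(cv2.Sobel(img, cv2.CV_32F, 1, 0),
                         cv2.Sobel(img, cv2.CV_32F, 0, 1))
    texture = cv2.GaussianBlur(grad, (0, 0), 3)     # local texture measure
    texture = texture / (texture.max() + 1e-6)      # 0 = flat, 1 = textured

    blurred = cv2.GaussianBlur(gamma, (0, 0), 2)
    sharpened = gamma + 1.5 * (gamma - blurred)     # unsharp masking
    out = (1 - texture) * gamma + texture * sharpened   # adaptive mix
    cv2.imwrite("enhanced.png", np.clip(out * 255, 0, 255).astype(np.uint8))
    ```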

  9. Neuromorphic vision sensors and preprocessors in system applications

    NASA Astrophysics Data System (ADS)

    Kramer, Joerg; Indiveri, Giacomo

    1998-09-01

    A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high-dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.

  10. Color line-scan technology in industrial applications

    NASA Astrophysics Data System (ADS)

    Lemstrom, Guy F.

    1995-10-01

    Color machine vision opens new possibilities for industrial on-line quality control. With color machine vision it is possible to detect different colors and shades, perform color separation and spectroscopic applications, and at the same time make the same measurements as with gray-scale technology, such as geometrical measurements of dimension, shape and texture. Combining these capabilities in a color line-scan camera takes machine vision into new applications and new areas of the machine vision business. Quality and process control requirements in industry grow more demanding every day. Color machine vision can be the solution for many simple tasks that have not been realized with gray-scale technology; the inability to detect or measure color has been one reason why machine vision has not been used in quality control as much as it could have been. Color machine vision has generated growing enthusiasm in industrial applications. Potential industries include food, wood, mining and minerals, printing, paper, glass, plastic and recycling, with tasks ranging from simple measurement to total process and quality control. Color machine vision is not only for measuring colors: it can also be used for contrast enhancement, object detection, background removal, and structure detection and measurement. Color or spectral separation opens more ways of solving machine vision applications than before; it is simply a question of how to exploit having two or more data values per measured pixel instead of only one, as with traditional gray-scale technology. Plenty of potential applications can already be realized with color vision today, and it will add performance to many traditional gray-scale applications in the near future. Most importantly, color machine vision offers a new way of working out applications where machine vision has not been applied before.

  11. A Vision for Systems Engineering Applied to Wind Energy (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felker, F.; Dykes, K.

    2015-01-01

    This presentation was given at the Third Wind Energy Systems Engineering Workshop on January 14, 2015. Topics covered include the importance of systems engineering, a vision for systems engineering as applied to wind energy, and application of systems engineering approaches to wind energy research and development.

  12. Biofeedback for Better Vision

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Biofeedtrac, Inc.'s Accommotrac Vision Trainer, invented by Dr. Joseph Trachtman, is based on vision research performed by Ames Research Center and a special optometer developed for the Ames program by Stanford Research Institute. In the United States, about 150 million people are myopes (nearsighted), who tend to overfocus when they look at distant objects, causing blurry distant vision, or hyperopes (farsighted), whose vision blurs when they look at close objects because they tend to underfocus. The Accommotrac system is an optical/electronic system used by a doctor as an aid in teaching a patient how to contract and relax the ciliary body, the focusing muscle. The key is biofeedback, wherein the patient learns to control a bodily process or function he is not normally aware of. Trachtman claims a 90 percent success rate for correcting, improving or stopping focusing problems. The Vision Trainer has also proved effective in treating other eye problems such as eye oscillation, crossed eyes and lazy eye, and in professional sports to improve athletes' peripheral vision and reaction time.

  13. Office of Biological and Physical Research: Overview Transitioning to the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    Crouch, Roger

    2004-01-01

    Viewgraphs on NASA's transition to its vision for space exploration are presented. The topics include: 1) Strategic Directives Guiding the Human Support Technology Program; 2) Progressive Capabilities; 3) A Journey to Inspire, Innovate, and Discover; 4) Risk Mitigation Status Technology Readiness Level (TRL) and Countermeasures Readiness Level (CRL); 5) Biological And Physical Research Enterprise Aligning With The Vision For U.S. Space Exploration; 6) Critical Path Roadmap Reference Missions; 7) Rating Risks; 8) Current Critical Path Roadmap (Draft) Rating Risks: Human Health; 9) Current Critical Path Roadmap (Draft) Rating Risks: System Performance/Efficiency; 10) Biological And Physical Research Enterprise Efforts to Align With Vision For U.S. Space Exploration; 11) Aligning with the Vision: Exploration Research Areas of Emphasis; 12) Code U Efforts To Align With The Vision For U.S. Space Exploration; 13) Types of Critical Path Roadmap Risks; and 14) ISS Human Support Systems Research, Development, and Demonstration. A summary discussing the vision for U.S. space exploration is also provided.

  14. Sensor fusion to enable next generation low cost Night Vision systems

    NASA Astrophysics Data System (ADS)

    Schweiger, R.; Franz, S.; Löhlein, O.; Ritter, W.; Källhammer, J.-E.; Franks, J.; Krekels, T.

    2010-04-01

    The next generation of automotive Night Vision Enhancement systems offers automatic pedestrian recognition with performance beyond current Night Vision systems at a lower cost, allowing high market penetration across the luxury as well as compact car segments. Improved performance can be achieved by fusing a Far Infrared (FIR) sensor with a Near Infrared (NIR) sensor; however, fusing with today's FIR systems would be too costly for high market penetration. The main cost drivers of the FIR system are its resolution and its sensitivity. Sensor cost is largely determined by sensor die size: fewer and smaller pixels reduce die size, but also resolution and sensitivity. Sensitivity limits are mainly set by inclement-weather performance, and sensitivity requirements should be matched to the possibilities of low-cost FIR optics, especially the implications of molding highly complex optical surfaces. Because a FIR sensor specified for fusion can have lower resolution as well as lower sensitivity, fusing FIR and NIR can solve both the performance and the cost problem. To compensate for the effect of FIR-sensor degradation on pedestrian detection capability, a fusion approach called MultiSensorBoosting is presented that produces a classifier holding highly discriminative sub-pixel features from both sensors at once. The algorithm is applied to data with different resolutions and to data obtained from cameras with varying optics to incorporate various sensor sensitivities. As it is not feasible to record representative data with all the different sensor configurations, transformation routines on existing high-resolution data recorded with high-sensitivity cameras are investigated to determine the effect of lower resolution and lower sensitivity on overall detection performance. This paper also gives an overview of first results, showing that a reduction in FIR sensor resolution can be compensated for using fusion techniques, as can a reduction in sensitivity.
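
    The core idea of MultiSensorBoosting, letting a boosting algorithm select the most discriminative features from both sensors at once, can be sketched with off-the-shelf AdaBoost over concatenated FIR and NIR descriptors. The data and labels below are synthetic, and this is an illustration of the principle, not the authors' classifier.

    ```python
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier

    # Train one boosted classifier on joint FIR+NIR feature vectors so
    # that boosting is free to pick features from either sensor.
    rng = np.random.default_rng(3)
    n = 1000
    fir = rng.normal(0, 1, (n, 32))        # FIR patch descriptors (toy)
    nir = rng.normal(0, 1, (n, 32))        # NIR patch descriptors (toy)
    y = (fir[:, 0] + 0.8 * nir[:, 5] > 0).astype(int)  # toy pedestrian label

    X = np.hstack([fir, nir])              # joint feature vector
    clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                             n_estimators=100)
    clf.fit(X, y)

    picked = np.argsort(clf.feature_importances_)[::-1][:5]
    print("most useful features:", picked)
    # Indices < 32 came from the FIR sensor, >= 32 from the NIR sensor.
    ```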

  15. Homework system development with the intention of supporting Saudi Arabia's vision 2030

    NASA Astrophysics Data System (ADS)

    Elgimari, Atifa; Alshahrani, Shafya; Al-shehri, Amal

    2017-10-01

    This paper proposes a web-based homework system for students aged 7-11 years. With the proposed system, hard copies of homework are replaced by soft copies, and parents are involved in the education process electronically. The system is expected to contribute to realizing Saudi Arabia's Vision 2030, especially in the education sector: the Vision regards primary education as its foundation stone, and its success depends in large measure on reforms in the education system that generate a better basis for the employment of young Saudis.

  16. Transitioning from learning healthcare systems to learning health care communities.

    PubMed

    Mullins, C Daniel; Wingate, La'Marcus T; Edwards, Hillary A; Tofade, Toyin; Wutoh, Anthony

    2018-02-26

    The learning healthcare system (LHS) model framework has three core, foundational components. These include an infrastructure for health-related data capture, care improvement targets and a supportive policy environment. Despite progress in advancing and implementing LHS approaches, low levels of participation from patients and the public have hampered the transformational potential of the LHS model. An enhanced vision of a community-engaged LHS redesign would focus on the provision of health care from the patient and community perspective to complement the healthcare system as the entity that provides the environment for care. Addressing the LHS framework implementation challenges and utilizing community levers are requisite components of a learning health care community model, version two of the LHS archetype.

  17. Object tracking with stereo vision

    NASA Technical Reports Server (NTRS)

    Huber, Eric

    1994-01-01

    A real-time active stereo vision system incorporating gaze control and task-directed vision is described. Emphasis is placed on object tracking and on object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.
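
    Motion-centroid tracking, the first of the listed techniques, reduces to tracking the centroid of thresholded frame differences. A minimal single-camera sketch is shown below (the video path is a placeholder); a stereo system would run this in both views and triangulate for depth tracking.

    ```python
    import cv2

    # Track the centroid of moving pixels via frame differencing.
    cap = cv2.VideoCapture("stereo_left.avi")   # placeholder path
    ok, prev = cap.read()
    if not ok:
        raise SystemExit("could not open video")
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        m = cv2.moments(mask)
        if m["m00"] > 0:                         # centroid of motion blob
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            print(f"object centroid: ({cx:.0f}, {cy:.0f})")
        prev = gray
    ```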

  18. OLED study for military applications

    NASA Astrophysics Data System (ADS)

    Barre, F.; Chiquard, A.; Faure, S.; Landais, L.; Patry, P.

    2005-07-01

    The presentation deals with some applications of OLED displays in military optronic systems, which are scheduled by SAGEM DS (Defence and Security). SAGEM DS, one of the largest group in the defence and security market, is currently investigating OLED Technologies for military programs. This technology is close from being chosen for optronic equipment such as future infantry night vision goggles, rifle-sight, or, more generally, vision enhancement systems. Most of those applications requires micro-display with an active matrix size below 1". Some others, such as, for instance, ruggedized flat displays do have a need for higher active matrix size (1,5" to 15"). SAGEM DS takes advantages of this flat, high luminance and emissive technology in highly integrated systems. In any case, many requirements have to be fulfilled: ultra-low power consumption, wide viewing angle, good pixel to pixel uniformity, and satisfactory behaviour in extreme environmental conditions.... Accurate measurements have been achieved at SAGEM DS on some micro display OLEDs and will be detailed: luminance (over 2000 cd/m2 achieved), area uniformity and pixel to pixel uniformity, robustness at low and high temperature (-40°C to +60°C), lifetime. These results, which refer to military requirements, provide a valuable feedback representative of the state of the art OLED performances.

  19. Image registration algorithm for high-voltage electric power live line working robot based on binocular vision

    NASA Astrophysics Data System (ADS)

    Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping

    2017-12-01

    In the process of dismounting and assembling the drop switch with the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the positioning precision of the manipulators, the gripper and the bolts used to fasten the drop switch. To solve it, we study the theory of the robot's binocular vision system and the characteristics of dismounting and assembling the drop switch, and we propose a coarse-to-fine image registration algorithm based on image correlation, which significantly improves the positioning precision for the manipulators and bolts. The algorithm performs three steps. First, the target points are marked in the right and left views, and the system judges whether the target point in the right view satisfies the minimum registration accuracy by using the similarity of the target points' backgrounds in the right and left views; this is a typical coarse-to-fine strategy. Second, the system calculates the epipolar line and generates, from the neighborhood of the epipolar line, the sequence of regions that may contain matching points; the optimal matching image is found by computing the correlation between the template image in the left view and each region in the sequence. Finally, the precise coordinates of the target points in the right and left views are calculated from the optimal matching image. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels and the positioning accuracy in the world coordinate system is within 3 mm, which satisfies the requirements for dismounting and assembling the drop switch.
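
    The epipolar-constrained correlation search in the second step can be sketched as follows, assuming a calibrated rig whose fundamental matrix is available; the image files, matrix file, marked point, template size and band width are all placeholders.

    ```python
    import cv2
    import numpy as np

    # Search for the right-view match of a point marked in the left view,
    # restricted to a narrow band around the epipolar line and scored by
    # normalized cross-correlation.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    F = np.load("fundamental.npy")            # 3x3 fundamental matrix
    pt = np.array([320.0, 240.0])             # target marked in left view

    tpl = left[int(pt[1]) - 15:int(pt[1]) + 16,
               int(pt[0]) - 15:int(pt[0]) + 16]   # 31x31 template

    # Epipolar line in the right image: (a, b, c) with a*x + b*y + c = 0.
    a, b, c = F @ np.array([pt[0], pt[1], 1.0])
    band = 4                                  # half-width of search band, px
    best, best_xy = -1.0, None
    for x in range(16, right.shape[1] - 16):
        y = int(round(-(a * x + c) / b))      # point on the epipolar line
        if 16 + band <= y < right.shape[0] - 16 - band:
            roi = right[y - 15 - band:y + 16 + band, x - 15:x + 16]
            res = cv2.matchTemplate(roi, tpl, cv2.TM_CCOEFF_NORMED)
            if res.max() > best:
                best = res.max()
                best_xy = (x, y - band + int(res.argmax()))
    print("best match in right view:", best_xy, "score:", round(best, 3))
    ```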

  20. Solar System Exploration Augmented by In-Situ Resource Utilization: Human Planetary Base Issues for Mercury and Saturn

    NASA Technical Reports Server (NTRS)

    Palaszewski, Bryan A.

    2017-01-01

    Human and robotic missions to Mercury and Saturn are presented and analyzed with a range of propulsion options. Historical studies of space exploration, planetary spacecraft and astronomy, in-situ resource utilization (ISRU), and industrialization all point to the vastness of natural resources in the solar system, and advanced propulsion benefits from these resources in many ways. While advanced propulsion systems were proposed in these historical studies, further investigation of nuclear options using high-power nuclear thermal and nuclear pulse propulsion, as well as advanced chemical propulsion, can significantly enhance these scenarios. Updated analyses based on these historical visions are presented. Nuclear thermal propulsion and ISRU-enhanced chemical propulsion landers are assessed for Mercury missions. At Saturn, nuclear pulse propulsion with alternative propellant feed systems, and Saturn moon exploration with chemical propulsion and nuclear electric propulsion options, are discussed. Issues with using in-situ resource utilization on Mercury missions are examined, and the best locations for exploration at Saturn and the use of the moons Titan and Enceladus as central bases for Saturn moon exploration are assessed.
