NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.
2007-01-01
The use of enhanced vision systems in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting approach and landing operations. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved enhanced flight vision system that shows the required visual references on the pilot's Head-Up Display. An experiment was conducted to evaluate the complementary use of synthetic vision systems and enhanced vision system technologies, focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under these newly adopted rules. Experimental results specific to flight crew response to non-normal events using the fused synthetic/enhanced vision system are presented.
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III; Wilz, Susan J.
2009-01-01
NASA is developing revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next generation air transportation system. A piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. Improvements in lateral path control performance were realized when the Head-Up Display concepts included a tunnel, independent of the imagery (enhanced vision or fusion of enhanced and synthetic vision) presented with it. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions were neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, of itself, provide an improvement in runway incursion detection without being specifically tailored for this application.
Fusion of Synthetic and Enhanced Vision for All-Weather Commercial Aviation Operations
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence, III
2007-01-01
NASA is developing revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck during low visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions were not adversely impacted by the display concepts although the addition of Enhanced Vision did not, unto itself, provide an improvement in runway incursion detection.
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Bailey, Randall E.; Prinzel, Lawrence J., III
2007-01-01
NASA is investigating revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A fixed-based piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck on the crew's decision-making process during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions were neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, unto itself, provide an improvement in runway incursion detection without being specifically tailored for this application. Existing enhanced vision system procedures were effectively used in the crew decision-making process during approach and missed approach operations but having to forcibly transition from an excellent FLIR image to natural vision by 100 ft above field level was awkward for the pilot-flying.
Head-Mounted Display Technology for Low Vision Rehabilitation and Vision Enhancement
Ehrlich, Joshua R.; Ojeda, Lauro V.; Wicker, Donna; Day, Sherry; Howson, Ashley; Lakshminarayanan, Vasudevan; Moroi, Sayoko E.
2017-01-01
Purpose To describe the various types of head-mounted display technology, their optical and human factors considerations, and their potential for use in low vision rehabilitation and vision enhancement. Design Expert perspective. Methods An overview of head-mounted display technology by an interdisciplinary team of experts drawing on key literature in the field. Results Head-mounted display technologies can be classified based on their display type and optical design. See-through displays such as retinal projection devices have the greatest potential for use as low vision aids. Devices vary by their relationship to the user’s eyes, field of view, illumination, resolution, color, stereopsis, effect on head motion and user interface. These optical and human factors considerations are important when selecting head-mounted displays for specific applications and patient groups. Conclusions Head-mounted display technologies may offer advantages over conventional low vision aids. Future research should compare head-mounted displays to commonly prescribed low vision aids in order to compare their effectiveness in addressing the impairments and rehabilitation goals of diverse patient populations. PMID:28048975
Synthetic and Enhanced Vision System for Altair Lunar Lander
NASA Technical Reports Server (NTRS)
Prinzell, Lawrence J., III; Kramer, Lynda J.; Norman, Robert M.; Arthur, Jarvis J., III; Williams, Steven P.; Shelton, Kevin J.; Bailey, Randall E.
2009-01-01
Past research has demonstrated the substantial potential of synthetic and enhanced vision (SV, EV) for aviation (e.g., Prinzel & Wickens, 2009). These augmented visual-based technologies have been shown to significantly enhance situation awareness, reduce workload, enhance aviation safety (e.g., reduced propensity for controlled flight -into-terrain accidents/incidents), and promote flight path control precision. The issues that drove the design and development of synthetic and enhanced vision have commonalities to other application domains; most notably, during entry, descent, and landing on the moon and other planetary surfaces. NASA has extended SV/EV technology for use in planetary exploration vehicles, such as the Altair Lunar Lander. This paper describes an Altair Lunar Lander SV/EV concept and associated research demonstrating the safety benefits of these technologies.
Crew and Display Concepts Evaluation for Synthetic / Enhanced Vision Systems
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Kramer, Lynda J.; Prinzel, Lawrence J., III
2006-01-01
NASA s Synthetic Vision Systems (SVS) project is developing technologies with practical applications that strive to eliminate low-visibility conditions as a causal factor to civil aircraft accidents and replicate the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. Enhanced Vision System (EVS) technologies are analogous and complementary in many respects to SVS, with the principle difference being that EVS is an imaging sensor presentation, as opposed to a database-derived image. The use of EVS in civil aircraft is projected to increase rapidly as the Federal Aviation Administration recently changed the aircraft operating rules under Part 91, revising the flight visibility requirements for conducting operations to civil airports. Operators conducting straight-in instrument approach procedures may now operate below the published approach minimums when using an approved EVS that shows the required visual references on the pilot s Head-Up Display. An experiment was conducted to evaluate the complementary use of SVS and EVS technologies, specifically focusing on new techniques for integration and/or fusion of synthetic and enhanced vision technologies and crew resource management while operating under the newly adopted FAA rules which provide operating credit for EVS. Overall, the experimental data showed that significant improvements in SA without concomitant increases in workload and display clutter could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying.
Hypersonic airbreathing vehicle visions and enhancing technologies
NASA Astrophysics Data System (ADS)
Hunt, James L.; Lockwood, Mary Kae; Petley, Dennis H.; Pegg, Robert J.
1997-01-01
This paper addresses the visions for hypersonic airbreathing vehicles and the advanced technologies that forge and enhance the designs. The matrix includes space access vehicles (single-stage-to-orbit (SSTO), two-stage-to-orbit (2STO) and three-stage-to-orbit (3STO)) and endoatmospheric vehicles (airplanes—missiles are omitted). The characteristics, the performance potential, the technologies and the synergies will be discussed. A common design constraint is that all vehicles (space access and endoatmospheric) have enclosed payload bays.
Enhanced Vision for All-Weather Operations Under NextGen
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Kramer, Lynda J.; Williams, Steven P.
2010-01-01
Recent research in Synthetic/Enhanced Vision technology is analyzed with respect to existing Category II/III performance and certification guidance. The goal is to start the development of performance-based vision systems technology requirements to support future all-weather operations and the NextGen goal of Equivalent Visual Operations. This work shows that existing criteria to operate in Category III weather and visibility are not directly applicable since, unlike today, the primary reference for maneuvering the airplane is based on what the pilot sees visually through the "vision system." New criteria are consequently needed. Several possible criteria are discussed, but more importantly, the factors associated with landing system performance using automatic and manual landings are delineated.
Hypersonic airbreathing vehicle visions and enhancing technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, J.L.; Lockwood, M.K.; Petley, D.H.
1997-01-01
This paper addresses the visions for hypersonic airbreathing vehicles and the advanced technologies that forge and enhance the designs. The matrix includes space access vehicles (single-stage-to-orbit (SSTO), two-stage-to-orbit (2STO) and three-stage-to-orbit (3STO)) and endoatmospheric vehicles (airplanes{emdash}missiles are omitted). The characteristics, the performance potential, the technologies and the synergies will be discussed. A common design constraint is that all vehicles (space access and endoatmospheric) have enclosed payload bays. {copyright} {ital 1997 American Institute of Physics.}
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Shelton, Kevin J.; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.; Norman, Rober M.; Ellis, Kyle K. E.; Barmore, Bryan E.
2011-01-01
An emerging Next Generation Air Transportation System concept - Equivalent Visual Operations (EVO) - can be achieved using an electronic means to provide sufficient visibility of the external world and other required flight references on flight deck displays that enable the safety, operational tempos, and visual flight rules (VFR)-like procedures for all weather conditions. Synthetic and enhanced flight vision system technologies are critical enabling technologies to EVO. Current research evaluated concepts for flight deck-based interval management (FIM) operations, integrated with Synthetic Vision and Enhanced Vision flight-deck displays and technologies. One concept involves delegated flight deck-based separation, in which the flight crews were paired with another aircraft and responsible for spacing and maintaining separation from the paired aircraft, termed, "equivalent visual separation." The operation required the flight crews to acquire and maintain an "equivalent visual contact" as well as to conduct manual landings in low-visibility conditions. The paper describes results that evaluated the concept of EVO delegated separation, including an off-nominal scenario in which the lead aircraft was not able to conform to the assigned spacing resulting in a loss of separation.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J., III
2005-01-01
Research was conducted onboard a Gulfstream G-V aircraft to evaluate integrated Synthetic Vision System concepts during flight tests over a 6-week period at the Wallops Flight Facility and Reno/Tahoe International Airport. The NASA Synthetic Vision System incorporates database integrity monitoring, runway incursion prevention alerting, surface maps, enhanced vision sensors, and advanced pathway guidance and synthetic terrain presentation. The paper details the goals and objectives of the flight test with a focus on the situation awareness benefits of integrating synthetic vision system enabling technologies for commercial aircraft.
NASA Astrophysics Data System (ADS)
Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Williams, Steven P.; Bailey, Randall E.; Shelton, Kevin J.; Norman, R. Mike
2011-06-01
NASA is researching innovative technologies for the Next Generation Air Transportation System (NextGen) to provide a "Better-Than-Visual" (BTV) capability as adjunct to "Equivalent Visual Operations" (EVO); that is, airport throughputs equivalent to that normally achieved during Visual Flight Rules (VFR) operations rates with equivalent and better safety in all weather and visibility conditions including Instrument Meteorological Conditions (IMC). These new technologies build on proven flight deck systems and leverage synthetic and enhanced vision systems. Two piloted simulation studies were conducted to access the use of a Head-Worn Display (HWD) with head tracking for synthetic and enhanced vision systems concepts. The first experiment evaluated the use a HWD for equivalent visual operations to San Francisco International Airport (airport identifier: KSFO) compared to a visual concept and a head-down display concept. A second experiment evaluated symbology variations under different visibility conditions using a HWD during taxi operations at Chicago O'Hare airport (airport identifier: KORD). Two experiments were conducted, one in a simulated San Francisco airport (KSFO) approach operation and the other, in simulated Chicago O'Hare surface operations, evaluating enhanced/synthetic vision and head-worn display technologies for NextGen operations. While flying a closely-spaced parallel approach to KSFO, pilots rated the HWD, under low-visibility conditions, equivalent to the out-the-window condition, under unlimited visibility, in terms of situational awareness (SA) and mental workload compared to a head-down enhanced vision system. There were no differences between the 3 display concepts in terms of traffic spacing and distance and the pilot decision-making to land or go-around. For the KORD experiment, the visibility condition was not a factor in pilot's rating of clutter effects from symbology. Several concepts for enhanced implementations of an unlimited field-of-regard BTV concept for low-visibility surface operations were determined to be equivalent in pilot ratings of efficacy and usability.
NASA Technical Reports Server (NTRS)
Arthur, Jarvis J., III; Prinzell, Lawrence J.; Williams, Steven P.; Bailey, Randall E.; Shelton, Kevin J.; Norman, R. Mike
2011-01-01
NASA is researching innovative technologies for the Next Generation Air Transportation System (NextGen) to provide a "Better-Than-Visual" (BTV) capability as adjunct to "Equivalent Visual Operations" (EVO); that is, airport throughputs equivalent to that normally achieved during Visual Flight Rules (VFR) operations rates with equivalent and better safety in all weather and visibility conditions including Instrument Meteorological Conditions (IMC). These new technologies build on proven flight deck systems and leverage synthetic and enhanced vision systems. Two piloted simulation studies were conducted to access the use of a Head-Worn Display (HWD) with head tracking for synthetic and enhanced vision systems concepts. The first experiment evaluated the use a HWD for equivalent visual operations to San Francisco International Airport (airport identifier: KSFO) compared to a visual concept and a head-down display concept. A second experiment evaluated symbology variations under different visibility conditions using a HWD during taxi operations at Chicago O'Hare airport (airport identifier: KORD). Two experiments were conducted, one in a simulated San Francisco airport (KSFO) approach operation and the other, in simulated Chicago O'Hare surface operations, evaluating enhanced/synthetic vision and head-worn display technologies for NextGen operations. While flying a closely-spaced parallel approach to KSFO, pilots rated the HWD, under low-visibility conditions, equivalent to the out-the-window condition, under unlimited visibility, in terms of situational awareness (SA) and mental workload compared to a head-down enhanced vision system. There were no differences between the 3 display concepts in terms of traffic spacing and distance and the pilot decision-making to land or go-around. For the KORD experiment, the visibility condition was not a factor in pilot's rating of clutter effects from symbology. Several concepts for enhanced implementations of an unlimited field-of-regard BTV concept for low-visibility surface operations were determined to be equivalent in pilot ratings of efficacy and usability.
Chapter 11. Quality evaluation of apple by computer vision
USDA-ARS?s Scientific Manuscript database
Apple is one of the most consumed fruits in the world, and there is a critical need for enhanced computer vision technology for quality assessment of apples. This chapter gives a comprehensive review on recent advances in various computer vision techniques for detecting surface and internal defects ...
NASA Astrophysics Data System (ADS)
Cross, Jack; Schneider, John; Cariani, Pete
2013-05-01
Sierra Nevada Corporation (SNC) has developed rotary and fixed wing millimeter wave radar enhanced vision systems. The Helicopter Autonomous Landing System (HALS) is a rotary-wing enhanced vision system that enables multi-ship landing, takeoff, and enroute flight in Degraded Visual Environments (DVE). HALS has been successfully flight tested in a variety of scenarios, from brown-out DVE landings, to enroute flight over mountainous terrain, to wire/cable detection during low-level flight. The Radar Enhanced Vision Systems (REVS) is a fixed-wing Enhanced Flight Vision System (EFVS) undergoing prototype development testing. Both systems are based on a fast-scanning, threedimensional 94 GHz radar that produces real-time terrain and obstacle imagery. The radar imagery is fused with synthetic imagery of the surrounding terrain to form a long-range, wide field-of-view display. A symbology overlay is added to provide aircraft state information and, for HALS, approach and landing command guidance cuing. The combination of see-through imagery and symbology provides the key information a pilot needs to perform safe flight operations in DVE conditions. This paper discusses the HALS and REVS systems and technology, presents imagery, and summarizes the recent flight test results.
1985-01-01
The NASA imaging processing technology, an advanced computer technique to enhance images sent to Earth in digital form by distant spacecraft, helped develop a new vision screening process. The Ocular Vision Screening system, an important step in preventing vision impairment, is a portable device designed especially to detect eye problems in children through the analysis of retinal reflexes.
Ultraviolet-Blocking Lenses Protect, Enhance Vision
NASA Technical Reports Server (NTRS)
2010-01-01
To combat the harmful properties of light in space, as well as that of artificial radiation produced during laser and welding work, Jet Propulsion Laboratory (JPL) scientists developed a lens capable of absorbing, filtering, and scattering the dangerous light while not obstructing vision. SunTiger Inc. now Eagle Eyes Optics, of Calabasas, California was formed to market a full line of sunglasses based on the JPL discovery that promised 100-percent elimination of harmful wavelengths and enhanced visual clarity. The technology was recently inducted into the Space Technology Hall of Fame.
Development and testing of the EVS 2000 enhanced vision system
NASA Astrophysics Data System (ADS)
Way, Scott P.; Kerr, Richard; Imamura, Joe J.; Arnoldy, Dan; Zeylmaker, Richard; Zuro, Greg
2003-09-01
An effective enhanced vision system must operate over a broad spectral range in order to offer a pilot an optimized scene that includes runway background as well as airport lighting and aircraft operations. The large dynamic range of intensities of these images is best handled with separate imaging sensors. The EVS 2000 is a patented dual-band Infrared Enhanced Vision System (EVS) utilizing image fusion concepts to provide a single image from uncooled infrared imagers in both the LWIR and SWIR. The system is designed to provide commercial and corporate airline pilots with improved situational awareness at night and in degraded weather conditions. A prototype of this system was recently fabricated and flown on the Boeing Advanced Technology Demonstrator 737-900 aircraft. This paper will discuss the current EVS 2000 concept, show results taken from the Boeing Advanced Technology Demonstrator program, and discuss future plans for EVS systems.
Recent progress in millimeter-wave sensor system capabilities for enhanced (synthetic) vision
NASA Astrophysics Data System (ADS)
Hellemann, Karlheinz; Zachai, Reinhard
1999-07-01
Weather- and daylight independent operation of modern traffic systems is strongly required for an optimized and economic availability. Mainly helicopters, small aircraft and military transport aircraft operating frequently close to the ground have a need for effective and cost-effective Enhanced Vision sensors. The technical progress in sensor technology and processing speed offer today the possibility for new concepts to be realized. Derived from this background the paper reports on the improvements which are under development within the HiVision program at DaimlerChrysler Aerospace. A sensor demonstrator based on FMCW radar technology with high information update-rate and operating in the mm-wave band, has been up-graded to improve performance and fitted to fly on an experimental base. The results achieved so far demonstrate the capability to produce a weather independent enhanced vision. In addition the demonstrator has been tested on board a high- speed ferry at the Baltic sea, because fast vessels have a similar need for weather-independent operation and anti- collision measures. In the future one sensor type may serve both 'worlds' and help ease and save traffic. The described demonstrator fills up the technology gap between optical sensors (Infrared) and standard pulse radars with its specific features such as high speed scanning and weather penetration with the additional benefit of cost-effectiveness.
NASA Technical Reports Server (NTRS)
1995-01-01
NASA's Technology Transfer Office at Stennis Space Center worked with the Johns Hopkins Wilmer Eye Institute in Baltimore, Md., to incorporate NASA software originally developed by NASA to process satellite images into the Low Vision Enhancement System (LVES). The LVES, referred to as 'ELVIS' by its users, is a portable image processing system that could make it possible to improve a person's vision by enhancing and altering images to compensate for impaired eyesight. The system consists of two orientation cameras, a zoom camera, and a video projection system. The headset and hand-held control weigh about two pounds each. Pictured is Jacob Webb, the first Mississippian to use the LVES.
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Ellis, Kyle K. E.; Bailey, Randall E.; Williams, Steven P.; Severance, Kurt; Le Vie, Lisa R.; Comstock, James R.
2014-01-01
Flight deck-based vision systems, such as Synthetic and Enhanced Vision System (SEVS) technologies, have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. To achieve this potential, research is required for effective technology development and implementation based upon human factors design and regulatory guidance. This research supports the introduction and use of Synthetic Vision Systems and Enhanced Flight Vision Systems (SVS/EFVS) as advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in NextGen low visibility approach and landing operations. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and two color head-down primary flight display (PFD) concepts (conventional PFD, SVS PFD) were evaluated in a simulated NextGen Chicago O'Hare terminal environment. Additionally, the instrument approach type (no offset, 3 degree offset, 15 degree offset) was experimentally varied to test the efficacy of the HUD concepts for offset approach operations. The data showed that touchdown landing performance were excellent regardless of SEVS concept or type of offset instrument approach being flown. Subjective assessments of mental workload and situation awareness indicated that making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD may be feasible.
Synthetic Vision Displays for Planetary and Lunar Lander Vehicles
NASA Technical Reports Server (NTRS)
Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Williams, Steven P.; Shelton, Kevin J.; Kramer, Lynda J.; Bailey, Randall E.; Norman, Robert M.
2008-01-01
Aviation research has demonstrated that Synthetic Vision (SV) technology can substantially enhance situation awareness, reduce pilot workload, improve aviation safety, and promote flight path control precision. SV, and related flight deck technologies are currently being extended for application in planetary exploration vehicles. SV, in particular, holds significant potential for many planetary missions since the SV presentation provides a computer-generated view for the flight crew of the terrain and other significant environmental characteristics independent of the outside visibility conditions, window locations, or vehicle attributes. SV allows unconstrained control of the computer-generated scene lighting, terrain coloring, and virtual camera angles which may provide invaluable visual cues to pilots/astronauts, not available from other vision technologies. In addition, important vehicle state information may be conformally displayed on the view such as forward and down velocities, altitude, and fuel remaining to enhance trajectory control and vehicle system status. The paper accompanies a conference demonstration that introduced a prototype NASA Synthetic Vision system for lunar lander spacecraft. The paper will describe technical challenges and potential solutions to SV applications for the lunar landing mission, including the requirements for high-resolution lunar terrain maps, accurate positioning and orientation, and lunar cockpit display concepts to support projected mission challenges.
Enhanced vision flight deck technology for commercial aircraft low-visibility surface operations
NASA Astrophysics Data System (ADS)
Arthur, Jarvis J.; Norman, R. M.; Kramer, Lynda J.; Prinzel, Lawerence J.; Ellis, Kyle K.; Harrison, Stephanie J.; Comstock, J. R.
2013-05-01
NASA Langley Research Center and the FAA collaborated in an effort to evaluate the effect of Enhanced Vision (EV) technology display in a commercial flight deck during low visibility surface operations. Surface operations were simulated at the Memphis, TN (FAA identifier: KMEM) airfield during nighttime with 500 Runway Visual Range (RVR) in a high-fidelity, full-motion simulator. Ten commercial airline flight crews evaluated the efficacy of various EV display locations and parallax and minification effects. The research paper discusses qualitative and quantitative results of the simulation experiment, including the effect of EV display placement on visual attention, as measured by the use of non-obtrusive oculometry and pilot mental workload. The results demonstrated the potential of EV technology to enhance situation awareness which is dependent on the ease of access and location of the displays. Implications and future directions are discussed.
Enhanced Vision Flight Deck Technology for Commercial Aircraft Low-Visibility Surface Operations
NASA Technical Reports Server (NTRS)
Arthur, Jarvis J., III; Norman, R. Michael; Kramer, Lynda J.; Prinzel, Lawrence J., III; Ellis, Kyle K. E.; Harrison, Stephanie J.; Comstock, J. Ray
2013-01-01
NASA Langley Research Center and the FAA collaborated in an effort to evaluate the effect of Enhanced Vision (EV) technology display in a commercial flight deck during low visibility surface operations. Surface operations were simulated at the Memphis, TN (FAA identifier: KMEM) air field during nighttime with 500 Runway Visual Range (RVR) in a high-fidelity, full-motion simulator. Ten commercial airline flight crews evaluated the efficacy of various EV display locations and parallax and mini cation effects. The research paper discusses qualitative and quantitative results of the simulation experiment, including the effect of EV display placement on visual attention, as measured by the use of non-obtrusive oculometry and pilot mental workload. The results demonstrated the potential of EV technology to enhance situation awareness which is dependent on the ease of access and location of the displays. Implications and future directions are discussed.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Hughes, Monica F.; Arthur, Jarvis J., III; Kramer, Lynda J.; Glaab, Louis J.; Bailey, Randy E.; Parrish, Russell V.; Uenking, Michael D.
2003-01-01
Because restricted visibility has been implicated in the majority of commercial and general aviation accidents, solutions will need to focus on how to enhance safety during instrument meteorological conditions (IMC). The NASA Synthetic Vision Systems (SVS) project is developing technologies to help achieve these goals through the synthetic presentation of how the outside world would look to the pilot if vision were not reduced. The potential safety outcome would be a significant reduction in several accident categories, such as controlled-flight-into-terrain (CFIT), that have restricted visibility as a causal factor. The paper describes two experiments that demonstrated the efficacy of synthetic vision technology to prevent CFIT accidents for both general aviation and commercial aircraft.
Color line-scan technology in industrial applications
NASA Astrophysics Data System (ADS)
Lemstrom, Guy F.
1995-10-01
Color machine vision opens new possibilities for industrial on-line quality control applications. With color machine vision it's possible to detect different colors and shades, make color separation, spectroscopic applications and at the same time do measurements in the same way as with gray scale technology. These can be geometrical measurements such as dimensions, shape, texture etc. By combining these technologies in a color line scan camera, it brings the machine vision to new dimensions of realizing new applications and new areas in the machine vision business. Quality and process control requirements in the industry get more demanding every day. Color machine vision can be the solution for many simple tasks that haven't been realized with gray scale technology. The lack of detecting or measuring colors has been one reason why machine vision has not been used in quality control as much as it could have been. Color machine vision has shown a growing enthusiasm in the industrial machine vision applications. Potential areas of the industry include food, wood, mining and minerals, printing, paper, glass, plastic, recycling etc. Tasks are from simple measuring to total process and quality control. The color machine vision is not only for measuring colors. It can also be for contrast enhancement, object detection, background removing, structure detection and measuring. Color or spectral separation can be used in many different ways for working out machine vision application than before. It's only a question of how to use the benefits of having two or more data per measured pixel, instead of having only one as in case with traditional gray scale technology. There are plenty of potential applications already today that can be realized with color vision and it's going to give more performance to many traditional gray scale applications in the near future. But the most important feature is that color machine vision offers a new way of working out applications, where machine vision hasn't been applied before.
NASA Astrophysics Data System (ADS)
Campbell, Todd; Zuwallack, Rebecca; Longhurst, Max; Shelton, Brett E.; Wolf, Paul G.
2014-07-01
This research examines how science teaching orientations and beliefs about technology-enhanced tools change over time in professional development (PD). The primary data sources for this study came from learning journals of 8 eighth grade science teachers at the beginning and conclusion of a year of PD. Based on the analysis completed, Information Transmission (IT) and Struggling with Standards-Based Reform (SSBR) profiles were found at the beginning of the PD, while SSBR and Standards-Based Reform (SBR) profiles were identified at the conclusion of PD. All profiles exhibited Vision I beliefs about the goals and purposes for science education, while only the SBR profile exhibited Vision II goals and purposes for science teaching. The IT profile demonstrated naïve or unrevealed beliefs about the nature of science, while the SSBR and SBR profiles had more sophisticated beliefs in this area. The IT profile was grounded in more teacher-centered beliefs about science teaching and learning as the other two profiles revealed more student-centered beliefs. While no beliefs about technology-enhanced tools were found for the IT profile, these were found for the other two profiles. Our findings suggest promising implications for (a) Roberts' Vision II as a central support for reform efforts, (b) situating technology-enhanced tools within the beliefs about science teaching and learning dimension of science teaching orientations, and (c) revealing how teacher orientations develop as a result of PD.
Alignment between Principal and Teacher Beliefs about Technology Use
ERIC Educational Resources Information Center
Alghamdi, Abdulmajeed; Prestridge, Sarah
2015-01-01
This paper explores the link between principals' and teachers' beliefs regarding technology use in teaching and learning. Principals who have a clear vision for carrying out the pedagogical requirements for technological change in teaching and learning approaches can direct the use of technology to enhance the school learning environment.…
2003-09-01
Refractive Surgery Origin and History, (RK, PRK , LASIK ) Refractive surgery was first considered as early as 1898 by a Dutch professor and was...34 This ejection demonstrated one extreme facet of the safety of PRK . Laser-Assisted In Situ Keratomileusis ( LASIK ) LASIK offers the greatest...refractive shift of clinical significance.35 Therefore LASIK and PRK , recipients had no significant vision changes at altitude, unlike recipients of RK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aylward, A.D.
1996-12-01
This paper describes the various advanced technologies already in use in the intermodal freight transportation industry and addresses the opportunity for improved communication between the public and private sector regarding technology applications to the freight transportation system that could enhance the capacity of the system as a whole. The current public interest in freight transportation policy creates an opportunity to develop a shared vision of the future needs of international intermodal freight transportation in the United States. The Federal government can impact this vision by taking action in the following areas: Provide Infrastructure Funding to Support Efficiency and Global Competitiveness;more » Support Regional and Corridor Efforts; Understand the Freight Sector and Develop a Shared Vision of Technology Benefits; Lead Transportation Technology Efforts of Federal Agencies; and Maintain Commitment to Open ITS Architecture.« less
The Adoption of Technology-Enhanced Instruction to Support Education for All
ERIC Educational Resources Information Center
Khatib, Nahla M.
2014-01-01
This study aimed at investigating the efforts of academic tutors at Arab Open University in Jordan (AOU) to implement technology enhanced instruction through using the learning management system software Moodle (LMS). The AOU has adopted an open learning approach. The aim is to support its vision to reach students in different parts of Jordan and…
Obstacles encountered in the development of the low vision enhancement system.
Massof, R W; Rickman, D L
1992-01-01
The Johns Hopkins Wilmer Eye Institute and the NASA Stennis Space Center are collaborating on the development of a new high technology low vision aid called the Low Vision Enhancement System (LVES). The LVES consists of a binocular head-mounted video display system, video cameras mounted on the head-mounted display, and real-time video image processing in a system package that is battery powered and portable. Through a phased development approach, several generations of the LVES can be made available to the patient in a timely fashion. This paper describes the LVES project with major emphasis on technical problems encountered or anticipated during the development process.
Visual Advantage of Enhanced Flight Vision System During NextGen Flight Test Evaluation
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Harrison, Stephanie J.; Bailey, Randall E.; Shelton, Kevin J.; Ellis, Kyle K.
2014-01-01
Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment. Simulation and flight tests were jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA) to evaluate potential safety and operational benefits of SVS/EFVS technologies in low visibility Next Generation Air Transportation System (NextGen) operations. The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SVS/EFVS operational and system-level performance capabilities. Nine test flights were flown in Gulfstream's G450 flight test aircraft outfitted with the SVS/EFVS technologies under low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 feet to 3600 feet reported visibility) under different obscurants (mist, fog, drizzle fog, frozen fog) and sky cover (broken, overcast). Flight test videos were evaluated at three different altitudes (decision altitude, 100 feet radar altitude, and touchdown) to determine the visual advantage afforded to the pilot using the EFVS/Forward-Looking InfraRed (FLIR) imagery compared to natural vision. Results indicate the EFVS provided a visual advantage of two to three times over that of the out-the-window (OTW) view. The EFVS allowed pilots to view the runway environment, specifically runway lights, before they would be able to OTW with natural vision.
Wijeyekoon, Skanda; Kharicha, Kalpa; Iliffe, Steve
2015-09-01
To evaluate heuristics (rules of thumb) for recognition of undetected vision loss in older patients in primary care. Vision loss is associated with ageing, and its prevalence is increasing. Visual impairment has a broad impact on health, functioning and well-being. Unrecognised vision loss remains common, and screening interventions have yet to reduce its prevalence. An alternative approach is to enhance practitioners' skills in recognising undetected vision loss, by having a more detailed picture of those who are likely not to act on vision changes, report symptoms or have eye tests. This paper describes a qualitative technology development study to evaluate heuristics for recognition of undetected vision loss in older patients in primary care. Using a previous modelling study, two heuristics in the form of mnemonics were developed to aid pattern recognition and allow general practitioners to identify potential cases of unreported vision loss. These heuristics were then analysed with experts. Findings It was concluded that their implementation in modern general practice was unsuitable and an alternative solution should be sort.
Help for the Visually Impaired
NASA Technical Reports Server (NTRS)
1995-01-01
The Low Vision Enhancement System (LVES) is a video headset that offers people with low vision a view of their surroundings equivalent to the image on a five-foot television screen four feet from the viewer. It will not make the blind see but for many people with low vision, it eases everyday activities such as reading, watching TV and shopping. LVES was developed over almost a decade of cooperation between Stennis Space Center, the Wilmer Eye Institute of the Johns Hopkins Medical Institutions, the Department of Veteran Affairs, and Visionics Corporation. With the aid of Stennis scientists, Wilmer researchers used NASA technology for computer processing of satellite images and head-mounted vision enhancement systems originally intended for the space station. The unit consists of a head-mounted video display, three video cameras, and a control unit for the cameras. The cameras feed images to the video display in the headset.
Design and testing of a dual-band enhanced vision system
NASA Astrophysics Data System (ADS)
Way, Scott P.; Kerr, Richard; Imamura, Joseph J.; Arnoldy, Dan; Zeylmaker, Dick; Zuro, Greg
2003-09-01
An effective enhanced vision system must operate over a broad spectral range in order to offer a pilot an optimized scene that includes runway background as well as airport lighting and aircraft operations. The large dynamic range of intensities of these images is best handled with separate imaging sensors. The EVS 2000 is a patented dual-band Infrared Enhanced Vision System (EVS) utilizing image fusion concepts. It has the ability to provide a single image from uncooled infrared imagers combined with SWIR, NIR or LLLTV sensors. The system is designed to provide commercial and corporate airline pilots with improved situational awareness at night and in degraded weather conditions but can also be used in a variety of applications where the fusion of dual band or multiband imagery is required. A prototype of this system was recently fabricated and flown on the Boeing Advanced Technology Demonstrator 737-900 aircraft. This paper will discuss the current EVS 2000 concept, show results taken from the Boeing Advanced Technology Demonstrator program, and discuss future plans for the fusion system.
Vertically integrated photonic multichip module architecture for vision applications
NASA Astrophysics Data System (ADS)
Tanguay, Armand R., Jr.; Jenkins, B. Keith; von der Malsburg, Christoph; Mel, Bartlett; Holt, Gary; O'Brien, John D.; Biederman, Irving; Madhukar, Anupam; Nasiatka, Patrick; Huang, Yunsong
2000-05-01
The development of a truly smart camera, with inherent capability for low latency semi-autonomous object recognition, tracking, and optimal image capture, has remained an elusive goal notwithstanding tremendous advances in the processing power afforded by VLSI technologies. These features are essential for a number of emerging multimedia- based applications, including enhanced augmented reality systems. Recent advances in understanding of the mechanisms of biological vision systems, together with similar advances in hybrid electronic/photonic packaging technology, offer the possibility of artificial biologically-inspired vision systems with significantly different, yet complementary, strengths and weaknesses. We describe herein several system implementation architectures based on spatial and temporal integration techniques within a multilayered structure, as well as the corresponding hardware implementation of these architectures based on the hybrid vertical integration of multiple silicon VLSI vision chips by means of dense 3D photonic interconnections.
Assessing Dual Sensor Enhanced Flight Vision Systems to Enable Equivalent Visual Operations
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Etherington, Timothy J.; Severance, Kurt; Bailey, Randall E.; Williams, Steven P.; Harrison, Stephanie J.
2016-01-01
Flight deck-based vision system technologies, such as Synthetic Vision (SV) and Enhanced Flight Vision Systems (EFVS), may serve as a revolutionary crew/vehicle interface enabling technologies to meet the challenges of the Next Generation Air Transportation System Equivalent Visual Operations (EVO) concept - that is, the ability to achieve the safety of current-day Visual Flight Rules (VFR) operations and maintain the operational tempos of VFR irrespective of the weather and visibility conditions. One significant challenge lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. A motion-base simulator experiment was conducted to evaluate the operational feasibility, pilot workload and pilot acceptability of conducting straight-in instrument approaches with published vertical guidance to landing, touchdown, and rollout to a safe taxi speed in visibility as low as 300 ft runway visual range by use of onboard vision system technologies on a Head-Up Display (HUD) without need or reliance on natural vision. Twelve crews evaluated two methods of combining dual sensor (millimeter wave radar and forward looking infrared) EFVS imagery on pilot-flying and pilot-monitoring HUDs as they made approaches to runways with and without touchdown zone and centerline lights. In addition, the impact of adding SV to the dual sensor EFVS imagery on crew flight performance, workload, and situation awareness during extremely low visibility approach and landing operations was assessed. Results indicate that all EFVS concepts flown resulted in excellent approach path tracking and touchdown performance without any workload penalty. Adding SV imagery to EFVS concepts provided situation awareness improvements but no discernible improvements in flight path maintenance.
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K. E.; Williams, Steven P.; Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Shelton, Kevin J.
2013-01-01
Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment with equivalent efficiency as visual operations. To meet this potential, research is needed for effective technology development and implementation of regulatory standards and design guidance to support introduction and use of SVS/EFVS advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. A fixed-base pilot-in-the-loop simulation test was conducted at NASA Langley Research Center that evaluated the use of SVS/EFVS in NextGen low visibility approach and landing operations. Twelve crews flew approach and landing operations in a simulated NextGen Chicago O'Hare environment. Various scenarios tested the potential for using EFVS to conduct approach, landing, and roll-out operations in visibility as low as 1000 feet runway visual range (RVR). Also, SVS was tested to evaluate the potential for lowering decision heights (DH) on certain instrument approach procedures below what can be flown today. Expanding the portion of the visual segment in which EFVS can be used in lieu of natural vision from 100 feet above the touchdown zone elevation to touchdown and rollout in visibilities as low as 1000 feet RVR appears to be viable as touchdown performance was acceptable without any apparent workload penalties. A lower DH of 150 feet and/or possibly reduced visibility minima using SVS appears to be viable when implemented on a Head-Up Display, but the landing data suggests further study for head-down implementations.
The Effects of Synthetic and Enhanced Vision Technologies for Lunar Landings
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Norman, Robert M.; Prinzel, Lawrence J., III; Bailey, Randall E.; Arthur, Jarvis J., III; Shelton, Kevin J.; Williams, Steven P.
2009-01-01
Eight pilots participated as test subjects in a fixed-based simulation experiment to evaluate advanced vision display technologies such as Enhanced Vision (EV) and Synthetic Vision (SV) for providing terrain imagery on flight displays in a Lunar Lander Vehicle. Subjects were asked to fly 20 approaches to the Apollo 15 lunar landing site with four different display concepts - Baseline (symbology only with no terrain imagery), EV only (terrain imagery from Forward Looking Infra Red, or FLIR, and LIght Detection and Ranging, or LIDAR, sensors), SV only (terrain imagery from onboard database), and Fused EV and SV concepts. As expected, manual landing performance was excellent (within a meter of landing site center) and not affected by the inclusion of EV or SV terrain imagery on the Lunar Lander flight displays. Subjective ratings revealed significant situation awareness improvements with the concepts employing EV and/or SV terrain imagery compared to the Baseline condition that had no terrain imagery. In addition, display concepts employing EV imagery (compared to the SV and Baseline concepts which had none) were significantly better for pilot detection of intentional but unannounced navigation failures since this imagery provided an intuitive and obvious visual methodology to monitor the validity of the navigation solution.
NASA Astrophysics Data System (ADS)
Paar, G.
2009-04-01
At present, mainly the US have realized planetary space missions with essential robotics background. Joining institutions, companies and universities from different established groups in Europe and two relevant players from the US, the EC FP7 Project PRoVisG started in autumn 2008 to demonstrate the European ability of realizing high-level processing of robotic vision image products from the surface of planetary bodies. PRoVisG will build a unified European framework for Robotic Vision Ground Processing. State-of-art computer vision technology will be collected inside and outside Europe to better exploit the image data gathered during past, present and future robotic space missions to the Moon and the Planets. This will lead to a significant enhancement of the scientific, technologic and educational outcome of such missions. We report on the main PRoVisG objectives and the development status: - Past, present and future planetary robotic mission profiles are analysed in terms of existing solutions and requirements for vision processing - The generic processing chain is based on unified vision sensor descriptions and processing interfaces. Processing components available at the PRoVisG Consortium Partners will be completed by and combined with modules collected within the international computer vision community in the form of Announcements of Opportunity (AOs). - A Web GIS is developed to integrate the processing results obtained with data from planetary surfaces into the global planetary context. - Towards the end of the 39 month project period, PRoVisG will address the public by means of a final robotic field test in representative terrain. The European tax payers will be able to monitor the imaging and vision processing in a Mars - similar environment, thus getting an insight into the complexity and methods of processing, the potential and decision making of scientific exploitation of such data and not least the elegancy and beauty of the resulting image products and their visualization. - The educational aspect is addressed by two summer schools towards the end of the project, presenting robotic vision to the students who are future providers of European science and technology, inside and outside the space domain.
Enhanced operator perception through 3D vision and haptic feedback
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren
2012-06-01
Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.
ERIC Educational Resources Information Center
Tanis, Emily Shea; Palmer, Susan; Wehmeyer, Michael; Davies, Daniel K.; Stock, Steven E.; Lobb, Kathy; Bishop, Barbara
2012-01-01
Advancements of technologies in the areas of mobility, hearing and vision, communication, and daily living for people with intellectual and developmental disabilities has the potential to greatly enhance independence and self-determination. Previous research, however, suggests that there is a technological divide with regard to the use of such…
Moving towards the Virtual University: A Vision of Technology in Higher Education.
ERIC Educational Resources Information Center
Baker, Warren J.; Gloster, Arthur S. II
1994-01-01
California Polytechnic State University at San Luis Obispo is exploring several cost-effective technological solutions to improve learning productivity, reduce labor intensity, and provide new ways to deliver education and better services to students while enhancing quality of instruction. Strategic planning and partnerships have been key…
Enhanced and Synthetic Vision for Terminal Maneuvering Area NextGen Operations
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K. E.; Norman, R. Michael; Williams, Steven P.; Arthur, Jarvis J., III; Shelton, Kevin J.; Prinzel, Lawrence J., III
2011-01-01
Synthetic Vision Systems and Enhanced Flight Vision System (SVS/EFVS) technologies have the potential to provide additional margins of safety for aircrew performance and enable operational improvements for low visibility operations in the terminal area environment with equivalent efficiency as visual operations. To meet this potential, research is needed for effective technology development and implementation of regulatory and design guidance to support introduction and use of SVS/EFVS advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. A fixed-base pilot-in-the-loop simulation test was conducted at NASA Langley Research Center that evaluated the use of SVS/EFVS in NextGen low visibility ground (taxi) operations and approach/landing operations. Twelve crews flew approach and landing operations in a simulated NextGen Chicago O Hare environment. Various scenarios tested the potential for EFVS for operations in visibility as low as 1000 ft runway visibility range (RVR) and SVS to enable lower decision heights (DH) than can currently be flown today. Expanding the EFVS visual segment from DH to the runway in visibilities as low as 1000 RVR appears to be viable as touchdown performance was excellent without any workload penalties noted for the EFVS concept tested. A lower DH to 150 ft and/or possibly reduced visibility minima by virtue of SVS equipage appears to be viable when implemented on a Head-Up Display, but the landing data suggests further study for head-down implementations.
Flight Deck Display Technologies for 4DT and Surface Equivalent Visual Operations
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Jones, Denis R.; Shelton, Kevin J.; Arthur, Jarvis J., III; Bailey, Randall E.; Allamandola, Angela S.; Foyle, David C.; Hooey, Becky L.
2009-01-01
NASA research is focused on flight deck display technologies that may significantly enhance situation awareness, enable new operating concepts, and reduce the potential for incidents/accidents for terminal area and surface operations. The display technologies include surface map, head-up, and head-worn displays; 4DT guidance algorithms; synthetic and enhanced vision technologies; and terminal maneuvering area traffic conflict detection and alerting systems. This work is critical to ensure that the flight deck interface technologies and the role of the human participants can support the full realization of the Next Generation Air Transportation System (NextGen) and its novel operating concepts.
Office of Space Science: Integrated technology strategy
NASA Technical Reports Server (NTRS)
Huntress, Wesley T., Jr.; Reck, Gregory M.
1994-01-01
This document outlines the strategy by which the Office of Space Science, in collaboration with the Office of Advanced Concepts and Technology and the Office of Space Communications, will meet the challenge of the national technology thrust. The document: highlights the legislative framework within which OSS must operate; evaluates the relationship between OSS and its principal stakeholders; outlines a vision of a successful OSS integrated technology strategy; establishes four goals in support of this vision; provides an assessment of how OSS is currently positioned to respond to the goals; formulates strategic objectives to meet the goals; introduces policies for implementing the strategy; and identifies metrics for measuring success. The OSS Integrated Technology Strategy establishes the framework through which OSS will satisfy stakeholder expectations by teaming with partners in NASA and industry to develop the critical technologies required to: enhance space exploration, expand our knowledge of the universe, and ensure continued national scientific, technical and economic leadership.
NASA Technical Reports Server (NTRS)
Gibbel, Mark; Bellamy, Marvin; DeSantis, Charlie; Hess, John; Pattok, Tracy; Quintero, Andrew; Silver, R.
1996-01-01
ESS 2000 has the vision of enhancing the knowledge necessary to implement cost-effective, leading-edge ESS technologies and procedures in order to increase U.S. electronics industry competitiveness. This paper defines EES and discusses the factors driving the project, the objectives of the project, its participants, the three phases of the project, the technologies involved, and project deliverables.
Combat Vehicle Technology Report
1992-05-01
to stay within the lines to meet optical scanning requirements. Block 1. Agency Use Only (Leave blank). Block 12a. Distribution/Availability Statement...Optronics ( Optical Energy Circuits)...-,..................... 465.3o Fiber Optics ....... ............ o..........o................... 46 5.4. Flat...Technology Objective Remarks Survivability o Protected Vision Enhanced Crew Function The application of filters and other optical (Directed Energy Through
A Review on Making Things See: Augmented Reality for Futuristic Virtual Educator
ERIC Educational Resources Information Center
Iqbal, Javid; Sidhu, Manjit Singh
2017-01-01
In the past few years many choreographers have focused upon implementation of computer technology to enhance their artistic skills. Computer vision technology presents new methods for learning, instructing, developing, and assessing physical movements as well as provides scope to expand dance resources and rediscover the learning process. This…
EVA Communications Avionics and Informatics
NASA Technical Reports Server (NTRS)
Carek, David Andrew
2005-01-01
The Glenn Research Center is investigating and developing technologies for communications, avionics, and information systems that will significantly enhance extra vehicular activity capabilities to support the Vision for Space Exploration. Several of the ongoing research and development efforts are described within this presentation including system requirements formulation, technology development efforts, trade studies, and operational concept demonstrations.
Assessing Impact of Dual Sensor Enhanced Flight Vision Systems on Departure Performance
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Etherington, Timothy J.; Severance, Kurt; Bailey, Randall E.
2016-01-01
Synthetic Vision (SV) and Enhanced Flight Vision Systems (EFVS) may serve as game-changing technologies to meet the challenges of the Next Generation Air Transportation System and the envisioned Equivalent Visual Operations (EVO) concept - that is, the ability to achieve the safety and operational tempos of current-day Visual Flight Rules operations irrespective of the weather and visibility conditions. One significant obstacle lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. A motion-base simulator experiment was conducted to evaluate the operational feasibility and pilot workload of conducting departures and approaches on runways without centerline lighting in visibility as low as 300 feet runway visual range (RVR) by use of onboard vision system technologies on a Head-Up Display (HUD) without need or reliance on natural vision. Twelve crews evaluated two methods of combining dual sensor (millimeter wave radar and forward looking infrared) EFVS imagery on pilot-flying and pilot-monitoring HUDs. In addition, the impact of adding SV to the dual sensor EFVS imagery on crew flight performance and workload was assessed. Using EFVS concepts during 300 RVR terminal operations on runways without centerline lighting appears feasible as all EFVS concepts had equivalent (or better) departure performance and landing rollout performance, without any workload penalty, than those flown with a conventional HUD to runways having centerline lighting. Adding SV imagery to EFVS concepts provided situation awareness improvements but no discernible improvements in flight path maintenance.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-25
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-17
... Committee 213, Enhanced Flight Vision/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation..., Enhanced Flight Vision/ Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing this notice to advise the public of the seventeenth meeting of RTCA Special Committee 213, Enhanced Flight Vision...
Multi-spectrum-based enhanced synthetic vision system for aircraft DVE operations
NASA Astrophysics Data System (ADS)
Kashyap, Sudesh K.; Naidu, V. P. S.; Shanthakumar, N.
2016-04-01
This paper focus on R&D being carried out at CSIR-NAL on Enhanced Synthetic Vision System (ESVS) for Indian regional transport aircraft to enhance all weather operational capabilities with safety and pilot Situation Awareness (SA) improvements. Flight simulator has been developed to study ESVS related technologies and to develop ESVS operational concepts for all weather approach and landing and to provide quantitative and qualitative information that could be used to develop criteria for all-weather approach and landing at regional airports in India. Enhanced Vision System (EVS) hardware prototype with long wave Infrared sensor and low light CMOS camera is used to carry out few field trials on ground vehicle at airport runway at different visibility conditions. Data acquisition and playback system has been developed to capture EVS sensor data (image) in time synch with test vehicle inertial navigation data during EVS field experiments and to playback the experimental data on ESVS flight simulator for ESVS research and concept studies. Efforts are on to conduct EVS flight experiments on CSIR-NAL research aircraft HANSA in Degraded Visual Environment (DVE).
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-18
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-11
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing..., Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held October...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-05
... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY...-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing...: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-03
... Special Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-12
... Special Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... Joint RTCA Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-28
... Committee 213: EUROCAE WG- 79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY... Special Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS... 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-22
... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY...: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will...
Journal Writing with Web 2.0 Tools: A Vision for Older Adults
ERIC Educational Resources Information Center
Shepherd, Craig E.; Aagard, Steven
2011-01-01
This article describes how Web 2.0 technologies may facilitate journaling and related inquiry methods among older adults. Benefits and limitations of journaling are summarized as well as computer skills of older adults. We then describe how Web 2.0 technologies can enhance journaling among older adults by diminishing feelings of isolation,…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-08
... enhancements would enable realistic, joint training and testing to support emerging technologies, respond to... visibility require using advanced night vision technology. Training with this equipment can only be conducted... justice and risks to children, subsistence, and cumulative impacts. Public and agency scoping may identify...
ERIC Educational Resources Information Center
Fichten, Catherine S.; Asuncion, Jennison V.; Barile, Maria; Ferraro, Vittoria; Wolforth, Joan
2009-01-01
This article presents the results of two studies on the accessibility of e-learning materials and other information and computer and communication technologies for 143 Canadian college and university students with low vision and 29 who were blind. It offers recommendations for enhancing access, creating new learning opportunities, and eliminating…
2002-04-01
Origin and History, (RK, PRK , LASIK ) Refractive surgery was first considered as early as 1898 by a Dutch professor and was unsuccessfully attempted in...Keratomileusis ( LASIK ) LASIK offers the greatest potential for improving aviator vision and is the latest PRK -similar procedure. First, a flap of...showed that after LASIK , subjects did not exhibit a refractive shift of clinical significance.39 Therefore LASIK and PRK , recipients had no significant
Using Vision System Technologies for Offset Approaches in Low Visibility Operations
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K.
2015-01-01
Flight deck-based vision systems, such as Synthetic Vision Systems (SVS) and Enhanced Flight Vision Systems (EFVS), have the potential to provide additional margins of safety for aircrew performance and enable the implementation of operational improvements for low visibility surface, arrival, and departure operations in the terminal environment with equivalent efficiency to visual operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in Next Generation Air Transportation System low visibility approach and landing operations at Chicago O'Hare airport. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and three instrument approach types (straight-in, 3-degree offset, 15-degree offset) were experimentally varied to test the efficacy of the SVS/EFVS HUD concepts for offset approach operations. The findings suggest making offset approaches in low visibility conditions with an EFVS HUD or SVS HUD appear feasible. Regardless of offset approach angle or HUD concept being flown, all approaches had comparable ILS tracking during the instrument segment and were within the lateral confines of the runway with acceptable sink rates during the visual segment of the approach. Keywords: Enhanced Flight Vision Systems; Synthetic Vision Systems; Head-up Display; NextGen
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-22
... Committee 213, Enhanced Flight Visions Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Visions Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Visions Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held April 17-19...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-12
... Committee 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY: Federal Aviation... 213, Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing... Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). DATES: The meeting will be held October 2-4...
NASA Technical Reports Server (NTRS)
Murray, N. D.
1985-01-01
Current technology projections indicate a lack of availability of special purpose computing for Space Station applications. Potential functions for video image special purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Also, architectural approaches are being identified and a conceptual design generated. Computationally simple algorithms will be research and their image/vision effectiveness determined. Suitable algorithms will be implimented into an overall architectural approach that will provide image/vision processing at video rates that are flexible, selectable, and programmable. Information is given in the form of charts, diagrams and outlines.
Awareness and Detection of Traffic and Obstacles Using Synthetic and Enhanced Vision Systems
NASA Technical Reports Server (NTRS)
Bailey, Randall E.
2012-01-01
Research literature are reviewed and summarized to evaluate the awareness and detection of traffic and obstacles when using Synthetic Vision Systems (SVS) and Enhanced Vision Systems (EVS). The study identifies the critical issues influencing the time required, accuracy, and pilot workload associated with recognizing and reacting to potential collisions or conflicts with other aircraft, vehicles and obstructions during approach, landing, and surface operations. This work considers the effect of head-down display and head-up display implementations of SVS and EVS as well as the influence of single and dual pilot operations. The influences and strategies of adding traffic information and cockpit alerting with SVS and EVS were also included. Based on this review, a knowledge gap assessment was made with recommendations for ground and flight testing to fill these gaps and hence, promote the safe and effective implementation of SVS/EVS technologies for the Next Generation Air Transportation System
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-06
... Committee 213: EUROCAE WG-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS) AGENCY...-79: Enhanced Flight Vision Systems/Synthetic Vision Systems (EFVS/SVS). SUMMARY: The FAA is issuing...: Enhanced Flight [[Page 38864
Flight Test Comparison Between Enhanced Vision (FLIR) and Synthetic Vision Systems
NASA Technical Reports Server (NTRS)
Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.
2005-01-01
Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA s Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA s Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision Systems (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.
Integrated Communications, Navigation and Surveillance Technologies Keynote Address
NASA Technical Reports Server (NTRS)
Lebacqz, J. Victor
2004-01-01
Slides for the Keynote Address present graphics to enhance the discussion of NASA's vision, the National Space Exploration Initiative, current Mars exploration, and aeronautics exploration. The presentation also focuses on development of an Air Transportation System and transformation from present systems.
Assistive technology applied to education of students with visual impairment.
Alves, Cássia Cristiane de Freitas; Monteiro, Gelse Beatriz Martins; Rabello, Suzana; Gasparetto, Maria Elisabete Rodrigues Freire; de Carvalho, Keila Monteiro
2009-08-01
Verify the application of assistive technology, especially information technology in the education of blind and low-vision students from the perceptions of their teachers. Descriptive survey study in public schools in three municipalities of the state of São Paulo, Brazil. The sample comprised 134 teachers. According to the teachers' opinions, there are differences in the specificities and applicability of assistive technology for blind and low-vision students, for whom specific computer programs are important. Information technology enhances reading and writing skills, as well as communication with the world on an equal basis, thereby improving quality of life and facilitating the learning process. The main reason for not using information technology is the lack of planning courses. The main requirements for the use of information technology in schools are enough computers for all students, advisers to help teachers, and pedagogical support. Assistive technology is applied to education of students with visual impairment; however, teachers indicate the need for infrastructure and pedagogical support. Information technology is an important tool in the inclusion process and can promote independence and autonomy of students with visual impairment.
Present and future of vision systems technologies in commercial flight operations
NASA Astrophysics Data System (ADS)
Ward, Jim
2016-05-01
The development of systems to enable pilots of all types of aircraft to see through fog, clouds, and sandstorms and land in low visibility has been widely discussed and researched across aviation. For military applications, the goal has been to operate in a Degraded Visual Environment (DVE), using sensors to enable flight crews to see and operate without concern to weather that limits human visibility. These military DVE goals are mainly oriented to the off-field landing environment. For commercial aviation, the Federal Aviation Agency (FAA) implemented operational regulations in 2004 that allow the flight crew to see the runway environment using an Enhanced Flight Vision Systems (EFVS) and continue the approach below the normal landing decision height. The FAA is expanding the current use and economic benefit of EFVS technology and will soon permit landing without any natural vision using real-time weather-penetrating sensors. The operational goals of both of these efforts, DVE and EFVS, have been the stimulus for development of new sensors and vision displays to create the modern flight deck.
Sensor Needs for Control and Health Management of Intelligent Aircraft Engines
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Gang, Sanjay; Hunter, Gary W.; Guo, Ten-Huei; Semega, Kenneth J.
2004-01-01
NASA and the U.S. Department of Defense are conducting programs which support the future vision of "intelligent" aircraft engines for enhancing the affordability, performance, operability, safety, and reliability of aircraft propulsion systems. Intelligent engines will have advanced control and health management capabilities enabling these engines to be self-diagnostic, self-prognostic, and adaptive to optimize performance based upon the current condition of the engine or the current mission of the vehicle. Sensors are a critical technology necessary to enable the intelligent engine vision as they are relied upon to accurately collect the data required for engine control and health management. This paper reviews the anticipated sensor requirements to support the future vision of intelligent engines from a control and health management perspective. Propulsion control and health management technologies are discussed in the broad areas of active component controls, propulsion health management and distributed controls. In each of these three areas individual technologies will be described, input parameters necessary for control feedback or health management will be discussed, and sensor performance specifications for measuring these parameters will be summarized.
Evaluation of Equivalent Vision Technologies for Supersonic Aircraft Operations
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Williams, Steven P.; Wilz, Susan P.; Arthur, Jarvis J., III; Bailey, Randall E.
2009-01-01
Twenty-four air transport-rated pilots participated as subjects in a fixed-based simulation experiment to evaluate the use of Synthetic/Enhanced Vision (S/EV) and eXternal Vision System (XVS) technologies as enabling technologies for future all-weather operations. Three head-up flight display concepts were evaluated a monochromatic, collimated Head-up Display (HUD) and a color, non-collimated XVS display with a field-of-view (FOV) equal to and also, one significantly larger than the collimated HUD. Approach, landing, departure, and surface operations were conducted. Additionally, the apparent angle-of-attack (AOA) was varied (high/low) to investigate the vertical field-of-view display requirements and peripheral, side window visibility was experimentally varied. The data showed that lateral approach tracking performance and lateral landing position were excellent regardless of the display type and AOA condition being evaluated or whether or not there were peripheral cues in the side windows. Longitudinal touchdown and glideslope tracking were affected by the display concepts. Larger FOV display concepts showed improved longitudinal touchdown control, superior glideslope tracking, significant situation awareness improvements and workload reductions compared to smaller FOV display concepts.
Two-Phase Flow Technology Developed and Demonstrated for the Vision for Exploration
NASA Technical Reports Server (NTRS)
Sankovic, John M.; McQuillen, John B.; Lekan, Jack F.
2005-01-01
NASA s vision for exploration will once again expand the bounds of human presence in the universe with planned missions to the Moon and Mars. To attain the numerous goals of this vision, NASA will need to develop technologies in several areas, including advanced power-generation and thermal-control systems for spacecraft and life support. The development of these systems will have to be demonstrated prior to implementation to ensure safe and reliable operation in reduced-gravity environments. The Two-Phase Flow Facility (T(PHI) FFy) Project will provide the path to these enabling technologies for critical multiphase fluid products. The safety and reliability of future systems will be enhanced by addressing focused microgravity fluid physics issues associated with flow boiling, condensation, phase separation, and system stability, all of which are essential to exploration technology. The project--a multiyear effort initiated in 2004--will include concept development, normal-gravity testing (laboratories), reduced gravity aircraft flight campaigns (NASA s KC-135 and C-9 aircraft), space-flight experimentation (International Space Station), and model development. This project will be implemented by a team from the NASA Glenn Research Center, QSS Group, Inc., ZIN Technologies, Inc., and the Extramural Strategic Research Team composed of experts from academia.
Knowledge Management: A Model to Enhance Combatant Command Effectiveness
2011-02-15
implementing the change that is required to achieve the knowledge management vision.43 The Chief Knowledge Management Officer ( KMO ) is overall responsible for...the processes, people/culture and technology in the organization. The Chief KMO develops policy and leads the organization’s knowledge management...integrates team. Reporting directly to the Chief KMO is the Chief Process Manager, Chief Learning Manager and Chief Technology Officer
General Mission Analysis Tool (GMAT): Mission, Vision, and Business Case
NASA Technical Reports Server (NTRS)
Hughes, Steven P.
2007-01-01
The Goal of the GMAT project is to develop new space trajectory optimization and mission design technology by working inclusively with ordinary people, universities businesses and other government organizations; and to share that technology in an open and unhindered way. GMAT's a free and open source software system; free for anyone to use in development of new mission concepts or to improve current missions, freely available in source code form for enhancement or future technology development.
Flight Testing an Integrated Synthetic Vision System
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Prinzel, Lawrence J., III
2005-01-01
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain database integrity monitoring equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved Weather Radar for real-time object detection and database integrity monitoring. A flight test evaluation was jointly conducted (in July and August 2004) by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security, Synthetic Vision System project. A Gulfstream GV aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS and DIME) were integrated into the larger SVS concept design. This paper presents experimental methods and the high level results of this flight test.
Big data computing: Building a vision for ARS information management
USDA-ARS?s Scientific Manuscript database
Improvements are needed within the ARS to increase scientific capacity and keep pace with new developments in computer technologies that support data acquisition and analysis. Enhancements in computing power and IT infrastructure are needed to provide scientists better access to high performance com...
NASA Technical Reports Server (NTRS)
Shelton, Kevin J.; Kramer, Lynda J.; Ellis,Kyle K.; Rehfeld, Sherri A.
2012-01-01
The Synthetic and Enhanced Vision Systems for NextGen (SEVS) simulation and flight tests are jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA). The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SEVS operational and system-level performance capabilities. Nine test flights (38 flight hours) were conducted over the summer and fall of 2011. The evaluations were flown in Gulfstream.s G450 flight test aircraft outfitted with the SEVS technology under very low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 ft to 2400 ft visibility) into various airports from Louisiana to Maine. In-situ flight performance and subjective workload and acceptability data were collected in collaboration with ground simulation studies at LaRC.s Research Flight Deck simulator.
NASA's strategic plan for education. A strategy for change, 1993-1998
NASA Technical Reports Server (NTRS)
1992-01-01
NASA's education vision is to promote excellence in America's education system through enhancing and expanding scientific and technological competence. In doing so, NASA strives to be recognized by the education community as the premier mission agency in support of the National Education Goals and in the development and implementation of education standards. To realize this vision, NASA has clearly defined and developed three specific goals to promote excellence in education. Specific objectives and milestones are defined for each goal in the body of this strategic plan.
Making Distance Education Borderless.
ERIC Educational Resources Information Center
Srisa-An, Wichit
1997-01-01
Begins with a tribute to Professor G. Ram Reddy (founder of Indira Gandhi National Open University), then focuses on enhancing the role of open universities in providing borderless distance education. Highlights include the need for open distance-education; philosophy and vision; the distance teaching system; the role of information technology;…
SEM image quality enhancement technology for bright field mask
NASA Astrophysics Data System (ADS)
Fukuda, Naoki; Chihara, Yuta; Shida, Soichi; Ito, Keisuke
2013-09-01
Bright-field photomasks are used to print small contact holes via ArF immersion multiple patterning lithography. There are some technical difficulties when small floating dots are to be measured by SEM tools because of a false imaging shadow. However, a new scan technology of Multi Vision Metrology SEMTM E3630 presents a solution for this issue. The combination of new scan technology and the other MVM-SEM® functions can provide further extended applications with more accurate measurement results.
Biometrics: Facing Up to Terrorism
2001-10-01
ment committee appointed by Secretary of Trans- portation Norman Y. Mineta to review airport security measures will recommend that facial recogni- tion...on the Role Facial Recognition Technology Can Play in Enhancing Airport Security .” Joseph Atick, the CEO of Visionics, testified before the government...system at a U.S. air- port. This deployment is believed to be the first-in-the-nation use of face-recognition technology for airport security . The sys
Bedi, Harprit S; Yucel, Edgar K
2013-10-01
This article describes how mobile technologies can improve the way we teach radiology and offers ideas to bridge the clinical gap with technology. Radiology programs across the country are purchasing iPads and other mobile devices for their residents. Many programs, however, do not have a concrete vision for how a mobile device can enhance the learning environment.
Zodrow, Katherine R; Li, Qilin; Buono, Regina M; Chen, Wei; Daigger, Glen; Dueñas-Osorio, Leonardo; Elimelech, Menachem; Huang, Xia; Jiang, Guibin; Kim, Jae-Hong; Logan, Bruce E; Sedlak, David L; Westerhoff, Paul; Alvarez, Pedro J J
2017-09-19
Innovation in urban water systems is required to address the increasing demand for clean water due to population growth and aggravated water stress caused by water pollution, aging infrastructure, and climate change. Advances in materials science, modular water treatment technologies, and complex systems analyses, coupled with the drive to minimize the energy and environmental footprints of cities, provide new opportunities to ensure a resilient and safe water supply. We present a vision for enhancing efficiency and resiliency of urban water systems and discuss approaches and research needs for overcoming associated implementation challenges.
Advanced integrated enhanced vision systems
NASA Astrophysics Data System (ADS)
Kerr, J. R.; Luk, Chiu H.; Hammerstrom, Dan; Pavel, Misha
2003-09-01
In anticipation of its ultimate role in transport, business and rotary wing aircraft, we clarify the role of Enhanced Vision Systems (EVS): how the output data will be utilized, appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that "synthetic vision" is simply a subset of the monitor/guidance interface issue. The core of integrated EVS is its sensor processor. In order to approximate optimal, Bayesian multi-sensor fusion and ground correlation functionality in real time, we are developing a neural net approach utilizing human visual pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; intertial, GPS and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight operational aspects relating to both civil certification and military applications in IMC.
Synthetic vision in the cockpit: 3D systems for general aviation
NASA Astrophysics Data System (ADS)
Hansen, Andrew J.; Rybacki, Richard M.; Smith, W. Garth
2001-08-01
Synthetic vision has the potential to improve safety in aviation through better pilot situational awareness and enhanced navigational guidance. The technological advances enabling synthetic vision are GPS based navigation (position and attitude) systems and efficient graphical systems for rendering 3D displays in the cockpit. A benefit for military, commercial, and general aviation platforms alike is the relentless drive to miniaturize computer subsystems. Processors, data storage, graphical and digital signal processing chips, RF circuitry, and bus architectures are at or out-pacing Moore's Law with the transition to mobile computing and embedded systems. The tandem of fundamental GPS navigation services such as the US FAA's Wide Area and Local Area Augmentation Systems (WAAS) and commercially viable mobile rendering systems puts synthetic vision well with the the technological reach of general aviation. Given the appropriate navigational inputs, low cost and power efficient graphics solutions are capable of rendering a pilot's out-the-window view into visual databases with photo-specific imagery and geo-specific elevation and feature content. Looking beyond the single airframe, proposed aviation technologies such as ADS-B would provide a communication channel for bringing traffic information on-board and into the cockpit visually via the 3D display for additional pilot awareness. This paper gives a view of current 3D graphics system capability suitable for general aviation and presents a potential road map following the current trends.
U.S. Rice: Enhancing human health.
USDA-ARS?s Scientific Manuscript database
A vision of the U.S. rice industry is to improve human health through the development of germplasm and technologies for products that capture the unique nutritional benefits of the rice grain. This paper gives an overview of U.S. rice production and markets. New product trends and introductions in...
Building a "cyber forest" in complex terrain at the Andrews Experimental Forest
Donald L. Henshaw; Fred Bierlmaier; Barbara J. Bond; Kari B. O' Connell
2008-01-01
Our vision for a future "cyber forest" at the Andrews Experimental Forest foresees high performance wireless communications enhancing connectivity among remote field research locations, station headquarters, and beyond to the university and outside world. New sensor technologies and collaboration tools foretell exponential increases in data and information...
Lethal RPAs: Ethical Implications of Future Airpower Technology
2013-04-01
enhancements include cochlear implants, artificial vision, and bionic body parts. Significantly, this is also one of the Air Force goals as stated in...need courage and the warrior ethos in order to lead others into battle. Holding this capability in high esteem and ensuring more cross flow between
Vision-aided Monitoring and Control of Thermal Spray, Spray Forming, and Welding Processes
NASA Technical Reports Server (NTRS)
Agapakis, John E.; Bolstad, Jon
1993-01-01
Vision is one of the most powerful forms of non-contact sensing for monitoring and control of manufacturing processes. However, processes involving an arc plasma or flame such as welding or thermal spraying pose particularly challenging problems to conventional vision sensing and processing techniques. The arc or plasma is not typically limited to a single spectral region and thus cannot be easily filtered out optically. This paper presents an innovative vision sensing system that uses intense stroboscopic illumination to overpower the arc light and produce a video image that is free of arc light or glare and dedicated image processing and analysis schemes that can enhance the video images or extract features of interest and produce quantitative process measures which can be used for process monitoring and control. Results of two SBIR programs sponsored by NASA and DOE and focusing on the application of this innovative vision sensing and processing technology to thermal spraying and welding process monitoring and control are discussed.
COMPARISON OF RECENTLY USED PHACOEMULSIFICATION SYSTEMS USING A HEALTH TECHNOLOGY ASSESSMENT METHOD.
Huang, Jiannan; Wang, Qi; Zhao, Caimin; Ying, Xiaohua; Zou, Haidong
2017-01-01
To compare the recently used phacoemulsification systems using a health technology assessment (HTA) model. A self-administered questionnaire, which included questions to gauge on the opinions of the recently used phacoemulsification systems, was distributed to the chief cataract surgeons in the departments of ophthalmology of eighteen tertiary hospitals in Shanghai, China. A series of senile cataract patients undergoing phacoemulsification surgery were enrolled in the study. The surgical results and the average costs related to their surgeries were all recorded and compared for the recently used phacoemulsification systems. The four phacoemulsification systems currently used in Shanghai are the Infiniti Vision, Centurion Vision, WhiteStar Signature, and Stellaris Vision Enhancement systems. All of the doctors confirmed that the systems they used would help cataract patients recover vision. A total of 150 cataract patients who underwent phacoemulsification surgery were enrolled in the present study. A significant difference was found among the four groups in cumulative dissipated energy, with the lowest value found in the Centurion group. No serious complications were observed and a positive trend in visual acuity was found in all four groups after cataract surgery. The highest total cost of surgery was associated with procedures conducted using the Centurion Vision system, and significant differences between systems were mainly because of the cost of the consumables used in the different surgeries. This HTA comparison of four recently used phacoemulsification systems found that each of system offers a satisfactory vision recovery outcome, but differs in surgical efficacy and costs.
Enhanced Flight Vision Systems Operational Feasibility Study Using Radar and Infrared Sensors
NASA Technical Reports Server (NTRS)
Etherington, Timothy J.; Kramer, Lynda J.; Severance, Kurt; Bailey, Randall E.; Williams, Steven P.; Harrison, Stephanie J.
2015-01-01
Approach and landing operations during periods of reduced visibility have plagued aircraft pilots since the beginning of aviation. Although techniques are currently available to mitigate some of the visibility conditions, these operations are still ultimately limited by the pilot's ability to "see" required visual landing references (e.g., markings and/or lights of threshold and touchdown zone) and require significant and costly ground infrastructure. Certified Enhanced Flight Vision Systems (EFVS) have shown promise to lift the obscuration veil. They allow the pilot to operate with enhanced vision, in lieu of natural vision, in the visual segment to enable equivalent visual operations (EVO). An aviation standards document was developed with industry and government consensus for using an EFVS for approach, landing, and rollout to a safe taxi speed in visibilities as low as 300 feet runway visual range (RVR). These new standards establish performance, integrity, availability, and safety requirements to operate in this regime without reliance on a pilot's or flight crew's natural vision by use of a fail-operational EFVS. A pilot-in-the-loop high-fidelity motion simulation study was conducted at NASA Langley Research Center to evaluate the operational feasibility, pilot workload, and pilot acceptability of conducting straight-in instrument approaches with published vertical guidance to landing, touchdown, and rollout to a safe taxi speed in visibility as low as 300 feet RVR by use of vision system technologies on a head-up display (HUD) without need or reliance on natural vision. Twelve crews flew various landing and departure scenarios in 1800, 1000, 700, and 300 RVR. This paper details the non-normal results of the study including objective and subjective measures of performance and acceptability. The study validated the operational feasibility of approach and departure operations and success was independent of visibility conditions. Failures were handled within the lateral confines of the runway for all conditions tested. The fail-operational concept with pilot in the loop needs further study.
Machine vision 1992-1996: technology program to promote research and its utilization in industry
NASA Astrophysics Data System (ADS)
Soini, Antti J.
1994-10-01
Machine vision technology has got a strong interest in Finnish research organizations, which is resulting in many innovative products to industry. Despite this end users were very skeptical towards machine vision and its robustness for harsh industrial environments. Therefore Technology Development Centre, TEKES, who funds technology related research and development projects in universities and individual companies, decided to start a national technology program, Machine Vision 1992 - 1996. Led by industry the program boosts research in machine vision technology and seeks to put the research results to work in practical industrial applications. The emphasis is in nationally important, demanding applications. The program will create new industry and business for machine vision producers and encourage the process and manufacturing industry to take advantage of this new technology. So far 60 companies and all major universities and research centers are working on our forty different projects. The key themes that we have are process control, robot vision and quality control.
Synthetic Vision for Lunar and Planetary Landing Vehicles
NASA Technical Reports Server (NTRS)
Williams, Steven P.; Arthur, Jarvis (Trey) J., III; Shelton, Kevin J.; Prinzel, Lawrence J., III; Norman, R. Michael
2008-01-01
The Crew Vehicle Interface (CVI) group of the Integrated Intelligent Flight Deck Technologies (IIFDT) has done extensive research in the area of Synthetic Vision (SV), and has shown that SV technology can substantially enhance flight crew situation awareness, reduce pilot workload, promote flight path control precision and improve aviation safety. SV technology is being extended to evaluate its utility for lunar and planetary exploration vehicles. SV may hold significant potential for many lunar and planetary missions since the SV presentation provides a computer-generated view of the terrain and other significant environment characteristics independent of the outside visibility conditions, window locations, or vehicle attributes. SV allows unconstrained control of the computer-generated scene lighting, terrain coloring, and virtual camera angles which may provide invaluable visual cues to pilots/astronauts and in addition, important vehicle state information may be conformally displayed on the view such as forward and down velocities, altitude, and fuel remaining to enhance trajectory control and vehicle system status. This paper discusses preliminary SV concepts for tactical and strategic displays for a lunar landing vehicle. The technical challenges and potential solutions to SV applications for the lunar landing mission are explored, including the requirements for high resolution terrain lunar maps and an accurate position and orientation of the vehicle that is essential in providing lunar Synthetic Vision System (SVS) cockpit displays. The paper also discusses the technical challenge of creating an accurate synthetic terrain portrayal using an ellipsoid lunar digital elevation model which eliminates projection errors and can be efficiently rendered in real-time.
2D/3D Synthetic Vision Navigation Display
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, jason L.
2008-01-01
Flight-deck display software was designed and developed at NASA Langley Research Center to provide two-dimensional (2D) and three-dimensional (3D) terrain, obstacle, and flight-path perspectives on a single navigation display. The objective was to optimize the presentation of synthetic vision (SV) system technology that permits pilots to view multiple perspectives of flight-deck display symbology and 3D terrain information. Research was conducted to evaluate the efficacy of the concept. The concept has numerous unique implementation features that would permit enhanced operational concepts and efficiencies in both current and future aircraft.
NASA's Strategic Plan for Education. A Strategy for Change: 1993-1998. First Edition.
ERIC Educational Resources Information Center
National Aeronautics and Space Administration, Washington, DC.
The National Aeronautics and Space Administration's (NASA's) education vision is to promote excellence in America's education system through enhancing and expanding scientific and technological competence. In doing so, NASA strives to be recognized by the education community as the premier mission agency in support of the National Education Goals…
ERIC Educational Resources Information Center
Pennsylvania Coll. of Technology, Williamsport.
Intended to enhance strategic planning and enable staff to work as a team toward a shared vision and common goals, this report presents the 1992-95 long-range plan of the Pennsylvania College of Technology (PCT). Part I defines long-range planning; describes the structure and use of the plan at PCT; presents PCT's philosophy, mission, and vision…
Effective use of business intelligence.
Glaser, John; Stone, John
2008-02-01
Business intelligence--technology to manage and leverage an organization's data--can enhance healthcare organizations' financial and operational performance and quality of patient care. Effective BI management requires five preliminary steps: Establish business needs and value. Obtain buy-in from managers. Create an end-to-end vision. Establish BI governance. Implement specific roles for managing data quality.
Wearable optical-digital assistive device for low vision students.
Afinogenov, Boris I; Coles, James B; Parthasarathy, Sailashri; Press-Williams, Jessica; Tsykunova, Ralina; Vasilenko, Anastasia; Narain, Jaya; Hanumara, Nevan C; Winter, Amos; Satgunam, PremNandhini
2016-08-01
People with low vision have limited residual vision that can be greatly enhanced through high levels of magnification. Current assistive technologies are tailored for far field or near field magnification but not both. In collaboration with L.V. Prasad Eye Institute (LVPEI), a wearable, optical-digital assistive device was developed to meet the near and far field magnification needs of students. The critical requirements, system architecture and design decisions for each module were analyzed and quantified. A proof-of-concept prototype was fabricated that can achieve magnification up to 8x and a battery life of up to 8 hours. Potential user evaluation with a Snellen chart showed identification of characters not previously discernible. Further feedback suggested that the system could be used as a general accessibility aid.
Theory, Design, and Algorithms for Optimal Control of wireless Networks
2010-06-09
The implementation of network-centric warfare technologies is an abiding, critical interest of Air Force Science and Technology efforts for the Warfighter. Wireless communications, strategic signaling are areas of critical Air Force Mission need. Autonomous networks of multiple, heterogeneous Throughput enhancement and robust connectivity in communications and sensor networks are critical factors in net-centric USAF operations. This research directly supports the Air Force vision of information dominance and the development of anywhere, anytime operational readiness.
Effectiveness of Assistive Technologies for Low Vision Rehabilitation: A Systematic Review
ERIC Educational Resources Information Center
Jutai, Jeffrey W.; Strong, J. Graham; Russell-Minda, Elizabeth
2009-01-01
"Low vision" describes any condition of diminished vision that is uncorrectable by standard eyeglasses, contact lenses, medication, or surgery that disrupts a person's ability to perform common age-appropriate visual tasks. Examples of assistive technologies for vision rehabilitation include handheld magnifiers; electronic vision-enhancement…
Machine Vision Giving Eyes to Robots. Resources in Technology.
ERIC Educational Resources Information Center
Technology Teacher, 1990
1990-01-01
This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)
Synthetic Vision Enhanced Surface Operations With Head-Worn Display for Commercial Aircraft
NASA Technical Reports Server (NTRS)
Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Shelton, Kevin J.; Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Norman, R. M.
2007-01-01
Experiments and flight tests have shown that airport surface operations can be enhanced by using synthetic vision and associated technologies, employed on a Head-Up Display (HUD) and head-down display electronic moving maps (EMM). Although HUD applications have shown the greatest potential operational improvements, the research noted that two major limitations during ground operations were its monochrome form and limited, fixed field-of-regard. A potential solution to these limitations may be the application of advanced Head Worn Displays (HWDs) particularly during low-visibility operations wherein surface movement is substantially limited because of the impaired vision of pilots and air traffic controllers. The paper describes the results of ground simulation experiments conducted at the NASA Langley Research Center. The results of the experiments showed that the fully integrated HWD concept provided significantly improved path performance compared to using paper charts alone. When comparing the HWD and HUD concepts, there were no statistically-significant differences in path performance or subjective ratings of situation awareness and workload. Implications and directions for future research are described.
Towards a Decision Support System for Space Flight Operations
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Hogle, Charles; Ruszkowski, James
2013-01-01
The Mission Operations Directorate (MOD) at the Johnson Space Center (JSC) has put in place a Model Based Systems Engineering (MBSE) technological framework for the development and execution of the Flight Production Process (FPP). This framework has provided much added value and return on investment to date. This paper describes a vision for a model based Decision Support System (DSS) for the development and execution of the FPP and its design and development process. The envisioned system extends the existing MBSE methodology and technological framework which is currently in use. The MBSE technological framework currently in place enables the systematic collection and integration of data required for building an FPP model for a diverse set of missions. This framework includes the technology, people and processes required for rapid development of architectural artifacts. It is used to build a feasible FPP model for the first flight of spacecraft and for recurrent flights throughout the life of the program. This model greatly enhances our ability to effectively engage with a new customer. It provides a preliminary work breakdown structure, data flow information and a master schedule based on its existing knowledge base. These artifacts are then refined and iterated upon with the customer for the development of a robust end-to-end, high-level integrated master schedule and its associated dependencies. The vision is to enhance this framework to enable its application for uncertainty management, decision support and optimization of the design and execution of the FPP by the program. Furthermore, this enhanced framework will enable the agile response and redesign of the FPP based on observed system behavior. The discrepancy of the anticipated system behavior and the observed behavior may be due to the processing of tasks internally, or due to external factors such as changes in program requirements or conditions associated with other organizations that are outside of MOD. The paper provides a roadmap for the three increments of this vision. These increments include (1) hardware and software system components and interfaces with the NASA ground system, (2) uncertainty management and (3) re-planning and automated execution. Each of these increments provide value independently; but some may also enable building of a subsequent increment.
Synthetic Vision Enhanced Surface Operations and Flight Procedures Rehearsal Tool
NASA Technical Reports Server (NTRS)
Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Williams, Steven P.; Kramer, Lynda J.
2006-01-01
Limited visibility has been cited as predominant causal factor for both Controlled-Flight-Into-Terrain (CFIT) and runway incursion accidents. NASA is conducting research and development of Synthetic Vision Systems (SVS) technologies which may potentially mitigate low visibility conditions as a causal factor to these accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. Two experimental evaluation studies were performed to determine the efficacy of two concepts: 1) head-worn display application of SVS technology to enhance transport aircraft surface operations, and 2) three-dimensional SVS electronic flight bag display concept for flight plan preview, mission rehearsal and controller-pilot data link communications interface of flight procedures. In the surface operation study, pilots evaluated two display devices and four display modes during taxi under unlimited and CAT II visibility conditions. In the mission rehearsal study, pilots flew approaches and departures in an operationally-challenged airport environment, including CFIT scenarios. Performance using the SVS concepts was compared to traditional baseline displays with paper charts only or EFB information. In general, the studies evince the significant situation awareness and enhanced operational capabilities afforded from these advanced SVS display concepts. The experimental results and conclusions from these studies are discussed along with future directions.
Harper, Simon; Yesilada, Yeliz
2012-01-01
This is a technological review paper focussed on identifying both the research challenges and opportunities for further investigation arising from emerging technologies, and it does not aim to propose any recommendation or standard. It is focussed on blind and partially sighted World Wide Web (Web) users along with others who use assistive technologies. The Web is a fast moving interdisciplinary domain in which new technologies, techniques and research is in perpetual development. It is often difficult to maintain a holistic view of new developments within the multiple domains which together make up the Web. This suggests that knowledge of the current developments and predictions of future developments are additionally important for the accessibility community. Web accessibility has previously been characterised by the correction of our past mistakes to make the current Web fulfil the original vision of access for all. New technologies were not designed with accessibility in mind and technologies that could be useful for addressing accessibility issues were not identified or adopted by the accessibility community. We wish to enable the research community to undertake preventative measures and proactively address challenges, while recognising opportunities, before they become unpreventable or require retrospective technological enhancement. This article then reviews emerging trends within the Web and Web Accessibility domains.
NASA Astrophysics Data System (ADS)
Durfee, David; Johnson, Walter; McLeod, Scott
2007-04-01
Un-cooled microbolometer sensors used in modern infrared night vision systems such as driver vehicle enhancement (DVE) or thermal weapons sights (TWS) require a mechanical shutter. Although much consideration is given to the performance requirements of the sensor, supporting electronic components and imaging optics, the shutter technology required to survive in combat is typically the last consideration in the system design. Electro-mechanical shutters used in military IR applications must be reliable in temperature extremes from a low temperature of -40°C to a high temperature of +70°C. They must be extremely light weight while having the ability to withstand the high vibration and shock forces associated with systems mounted in military combat vehicles, weapon telescopic sights, or downed unmanned aerial vehicles (UAV). Electro-mechanical shutters must have minimal power consumption and contain circuitry integrated into the shutter to manage battery power while simultaneously adapting to changes in electrical component operating parameters caused by extreme temperature variations. The technology required to produce a miniature electro-mechanical shutter capable of fitting into a rifle scope with these capabilities requires innovations in mechanical design, material science, and electronics. This paper describes a new, miniature electro-mechanical shutter technology with integrated power management electronics designed for extreme service infra-red night vision systems.
Vision-based aircraft guidance
NASA Technical Reports Server (NTRS)
Menon, P. K.
1993-01-01
Early research on the development of machine vision algorithms to serve as pilot aids in aircraft flight operations is discussed. The research is useful for synthesizing new cockpit instrumentation that can enhance flight safety and efficiency. With the present work as the basis, future research will produce low-cost instrument by integrating a conventional TV camera together with off-the=shelf digitizing hardware for flight test verification. Initial focus of the research will be on developing pilot aids for clear-night operations. Latter part of the research will examine synthetic vision issues for poor visibility flight operations. Both research efforts will contribute towards the high-speed civil transport aircraft program. It is anticipated that the research reported here will also produce pilot aids for conducting helicopter flight operations during emergency search and rescue. The primary emphasis of the present research effort is on near-term, flight demonstrable technologies. This report discusses pilot aids for night landing and takeoff and synthetic vision as an aid to low visibility landing.
Does having a "brand" help you lead others?
Davidhizar, Ruth
2007-01-01
Every manager has the opportunity to develop a personal brand of unique characteristics that are valuable in his or her own right. Recognizing a personal brand and developing it to its fullest can enhance leadership potential. These qualities enable others to notice and follow the leader and can enhance cooperation. Credibility is key to developing a personal brand. Then come style, consistency, and change. A brand can enable the manager to connect with others. Use of technology can enhance the use of a brand because cyberspace promotes communication. Other necessities are using organization, selling vision, sharing information, and staying personal.
FELIN: tailored optronics and systems solutions for dismounted combat
NASA Astrophysics Data System (ADS)
Milcent, A. M.
2009-05-01
The FELIN French modernization program for dismounted combat provides the Armies with info-centric systems which dramatically enhance the performances of the soldier and the platoon. Sagem now has available a portfolio of various equipments, providing C4I, data and voice digital communication, and enhanced vision for day and night operations, through compact high performance electro-optics. The FELIN system provides the infantryman with a high-tech integrated and modular system which increases significantly their detection, recognition, identification capabilities, their situation awareness and information sharing, and this in any dismounted close combat situation. Among the key technologies used in this system, infrared and intensified vision provide a significant improvement in capability, observation performance and protection of the ground soldiers. This paper presents in detail the developed equipments, with an emphasis on lessons learned from the technical and operational feedback from dismounted close combat field tests.
Assistive technology for children and young people with low vision.
Thomas, Rachel; Barker, Lucy; Rubin, Gary; Dahlmann-Noor, Annegret
2015-06-18
Recent technological developments, such as the near universal spread of mobile phones and portable computers and improvements in the accessibility features of these devices, give children and young people with low vision greater independent access to information. Some electronic technologies, such as closed circuit TV, are well established low vision aids and newer versions, such as electronic readers or off-the shelf tablet computers, may offer similar functionalities with easier portability and at lower cost. To assess the effect of electronic assistive technologies on reading, educational outcomes and quality of life in children and young people with low vision. We searched CENTRAL (which contains the Cochrane Eyes and Vision Group Trials Register) (2014, Issue 9), Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid MEDLINE Daily, Ovid OLDMEDLINE (January 1946 to October 2014), EMBASE (January 1980 to October 2014), the Health Technology Assessment Programme (HTA) (www.hta.ac.uk/), the metaRegister of Controlled Trials (mRCT) (www.controlled-trials.com), ClinicalTrials.gov (www.clinicaltrials.gov) and the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) (www.who.int/ictrp/search/en). We did not use any date or language restrictions in the electronic searches for trials. We last searched the electronic databases on 30 October 2014. We intended to include randomised controlled trials (RCTs) and quasi-RCTs in this review. We planned to include trials involving children between the ages of 5 and 16 years with low vision as defined by, or equivalent to, the WHO 1992 definition of low vision. We planned to include studies that explore the use of assistive technologies (ATs). These could include all types of closed circuit television/electronic vision enhancement systems (CCTV/EVES), computer technology including tablet computers and adaptive technologies such as screen readers, screen magnification and optical character recognition (OCR). We intended to compare the use of ATs with standard optical aids, which include distance refractive correction (with appropriate near addition for aphakic (no lens)/pseudophakic (with lens implant) patients) and monocular/binoculars for distance and brightfield magnifiers for near. We also planned to include studies that compare different types of ATs with each other, without or in addition to conventional optical aids, and those that compare ATs given with or without instructions for use. Independently, two review authors reviewed titles and abstracts for eligibility. They divided studies into categories to 'definitely include', 'definitely exclude' and 'possibly include', and the same two authors made final judgements about inclusion/exclusion by obtaining full-text copies of the studies in the 'possibly include' category. We did not identify any randomised controlled trials in this subject area. High-quality evidence about the usefulness of electronic AT for children and young people with visual impairment is needed to inform the choice healthcare and education providers and family have to make when selecting a technology. Randomised controlled trials are needed to assess the impact of AT. Research protocols should carefully select outcomes relevant not only to the scientific community, but more importantly to families and teachers. Functional outcomes such as reading accuracy, comprehension and speed should be recorded, as well as the impact of AT on independent learning and quality of life.
IMAGE ENHANCEMENT FOR IMPAIRED VISION: THE CHALLENGE OF EVALUATION
PELI, ELI; WOODS, RUSSELL L
2009-01-01
With the aging of the population, the prevalence of eye diseases and thus of vision impairment is increasing. The TV watching habits of people with vision impairments are comparable to normally sighted people1, however their vision loss prevents them from fully benefiting from this medium. For over 20 years we have been developing video image-enhancement techniques designed to assist people with visual impairments, particularly those due to central retinal vision loss. A major difficulty in this endeavor is the lack of evaluation techniques to assess and compare the effectiveness of various enhancement methods. This paper reviews our approaches to image enhancement and the results we have obtained, with special emphasis on the difficulties encountered in the evaluation of the benefits of enhancement and the solutions we have developed to date. PMID:20161188
Brain Structure-Function Couplings: Year 2 Accomplishments and Programmatic Plans
2013-06-01
performance through individual-specific neurotechnologies and enhance Soldier protection technologies to minimize neural injury. The long-term vision of this...envision pathways that enable our basic science accomplishments to foster development of revolutionary Soldier neurotechnologies and Soldier protection...improve Soldier-system performance with Soldier-specific neurotechnologies . We expect mid-term impact with models linking structure and function that can
Wysham, Nicholas G; Abernethy, Amy P; Cox, Christopher E
2014-10-01
Prediction models in critical illness are generally limited to short-term mortality and uncommonly include patient-centered outcomes. Current outcome prediction tools are also insensitive to individual context or evolution in healthcare practice, potentially limiting their value over time. Improved prognostication of patient-centered outcomes in critical illness could enhance decision-making quality in the ICU. Patient-reported outcomes have emerged as precise methodological measures of patient-centered variables and have been successfully employed using diverse platforms and technologies, enhancing the value of research in critical illness survivorship and in direct patient care. The learning health system is an emerging ideal characterized by integration of multiple data sources into a smart and interconnected health information technology infrastructure with the goal of rapidly optimizing patient care. We propose a vision of a smart, interconnected learning health system with integrated electronic patient-reported outcomes to optimize patient-centered care, including critical care outcome prediction. A learning health system infrastructure integrating electronic patient-reported outcomes may aid in the management of critical illness-associated conditions and yield tools to improve prognostication of patient-centered outcomes in critical illness.
An approach to integrate the human vision psychology and perception knowledge into image enhancement
NASA Astrophysics Data System (ADS)
Wang, Hui; Huang, Xifeng; Ping, Jiang
2009-07-01
Image enhancement is very important image preprocessing technology especially when the image is captured in the poor imaging condition or dealing with the high bits image. The benefactor of image enhancement either may be a human observer or a computer vision process performing some kind of higher-level image analysis, such as target detection or scene understanding. One of the main objects of the image enhancement is getting a high dynamic range image and a high contrast degree image for human perception or interpretation. So, it is very necessary to integrate either empirical or statistical human vision psychology and perception knowledge into image enhancement. The human vision psychology and perception claims that humans' perception and response to the intensity fluctuation δu of visual signals are weighted by the background stimulus u, instead of being plainly uniform. There are three main laws: Weber's law, Weber- Fechner's law and Stevens's Law that describe this phenomenon in the psychology and psychophysics. This paper will integrate these three laws of the human vision psychology and perception into a very popular image enhancement algorithm named Adaptive Plateau Equalization (APE). The experiments were done on the high bits star image captured in night scene and the infrared-red image both the static image and the video stream. For the jitter problem in the video stream, this algorithm reduces this problem using the difference between the current frame's plateau value and the previous frame's plateau value to correct the current frame's plateau value. Considering the random noise impacts, the pixel value mapping process is not only depending on the current pixel but the pixels in the window surround the current pixel. The window size is usually 3×3. The process results of this improved algorithms is evaluated by the entropy analysis and visual perception analysis. The experiments' result showed the improved APE algorithms improved the quality of the image, the target and the surrounding assistant targets could be identified easily, and the noise was not amplified much. For the low quality image, these improved algorithms augment the information entropy and improve the image and the video stream aesthetic quality, while for the high quality image they will not debase the quality of the image.
Augmentation of Cognition and Perception Through Advanced Synthetic Vision Technology
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.; Arthur, Jarvis J.; Williams, Steve P.; McNabb, Jennifer
2005-01-01
Synthetic Vision System technology augments reality and creates a virtual visual meteorological condition that extends a pilot's cognitive and perceptual capabilities during flight operations when outside visibility is restricted. The paper describes the NASA Synthetic Vision System for commercial aviation with an emphasis on how the technology achieves Augmented Cognition objectives.
Kaltoft, Mette Kjer
2013-01-01
All healthcare visions, including that of The TIGER (Technology-Informatics-Guiding-Educational-Reform) Initiative envisage a crucial role for nursing. However, its 7 descriptive pillars do not address the disconnect between Nursing Informatics and Nursing Ethics and their distinct communities in the clinical-disciplinary landscape. Each sees itself as providing decision support by way of information inputs and ethical insights, respectively. Both have reasons - ideological, professional, institutional - for their task construction, but this simultaneously disables each from engaging fully in the point-of-(care)-decision. Increased pressure for translating 'evidence-based' research findings into 'ethically-sound', 'value-based' and 'patient-centered' practice requires rethinking the model implicit in conventional knowledge translation and informatics practice in all disciplines, including nursing. The aim is to aid 'how nurses and other health care scientists more clearly identify clinical and other relevant data that can be captured to inform future comparative effectiveness research. 'A prescriptive, theory-based discipline of '(Nursing) Decisionics' expands the Grid for Volunteer Development of TIGER's newly launched virtual learning environment (VLE). This provides an enhanced TIGER-vision for educational reform to deliver ethically coherent, person-centered care transparently.
Simulating Colour Vision Deficiency from a Spectral Image.
Shrestha, Raju
2016-01-01
People with colour vision deficiency (CVD) have difficulty seeing full colour contrast and can miss some of the features in a scene. As a part of universal design, researcher have been working on how to modify and enhance the colour of images in order to make them see the scene with good contrast. For this, it is important to know how the original colour image is seen by different individuals with CVD. This paper proposes a methodology to simulate accurate colour deficient images from a spectral image using cone sensitivity of different cases of deficiency. As the method enables generation of accurate colour deficient image, the methodology is believed to help better understand the limitations of colour vision deficiency and that in turn leads to the design and development of more effective imaging technologies for better and wider accessibility in the context of universal design.
Integration of a 3D perspective view in the navigation display: featuring pilot's mental model
NASA Astrophysics Data System (ADS)
Ebrecht, L.; Schmerwitz, S.
2015-05-01
Synthetic vision systems (SVS) appear as spreading technology in the avionic domain. Several studies prove enhanced situational awareness when using synthetic vision. Since the introduction of synthetic vision a steady change and evolution started concerning the primary flight display (PFD) and the navigation display (ND). The main improvements of the ND comprise the representation of colored ground proximity warning systems (EGPWS), weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. Especially, concerning the current trend having a 3D perspective view in a SVS-PFD while leaving the navigational content as well as methods of interaction unchanged the question arouses if and how the gap between both displays might evolve to a serious problem. This issue becomes important in relation to the transition and combination of strategic and tactical flight guidance. Hence, pros and cons of 2D and 3D views generally as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side view of the ND will be discussed. Further a concept for the integration of a 3D perspective view, i.e., bird's eye view, in synthetic vision ND will be presented. The combination of 2D and 3D views in the ND enables a better correlation of the ND and the PFD. Additionally, this supports the building of pilot's mental model. The authors believe it will improve the situational and spatial awareness. It might prove to further raise the safety margin when operating in mountainous areas.
Vision servo of industrial robot: A review
NASA Astrophysics Data System (ADS)
Zhang, Yujin
2018-04-01
Robot technology has been implemented to various areas of production and life. With the continuous development of robot applications, requirements of the robot are also getting higher and higher. In order to get better perception of the robots, vision sensors have been widely used in industrial robots. In this paper, application directions of industrial robots are reviewed. The development, classification and application of robot vision servo technology are discussed, and the development prospect of industrial robot vision servo technology is proposed.
Using Web 2.0 Technology to Enhance the Science Curriculum in Your School
ERIC Educational Resources Information Center
Hainsworth, Mark
2017-01-01
The author shares his vision of what 21st century science education might look like in the future and discusses how to develop an e-learning capability to shape the science curriculum in your school. Good teaching and learning should always be a teacher's first priority but there is little doubt in the author's mind that the implementation of an…
NASA Technical Reports Server (NTRS)
1989-01-01
Glare from CRT screens has been blamed for blurred vision, eyestrain, headaches, etc. Optical Coating Laboratory, Inc. (OCLI) manufactures a coating to reduce glare which was used to coat the windows on the Gemini and Apollo spacecraft. In addition, OCLI offers anti-glare filters (Glare Guard) utilizing the same thin film coating technology. The coating minimizes brightness, provides enhanced contrast and improves readability. The filters are OCLI's first consumer product.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koppenaal, David W.; Barinaga, Charles J.; Denton, M Bonner B.
2005-11-01
Good eyesight is often taken for granted, a situation that everyone appreciates once vision begins to fade with age. New eyeglasses or contact lenses are traditional ways to improve vision, but recent new technology, i.e. LASIK laser eye surgery, provides a new and exciting means for marked vision restoration and improvement. In mass spectrometry, detectors are the 'eyes' of the MS instrument. These 'eyes' have also been taken for granted. New detectors and new technologies are likewise needed to correct, improve, and extend ion detection and hence, our 'chemical vision'. The purpose of this report is to review and assessmore » current MS detector technology and to provide a glimpse towards future detector technologies. It is hoped that the report will also serve to motivate interest, prompt ideas, and inspire new visions for ion detection research.« less
Industrial Inspection with Open Eyes: Advance with Machine Vision Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zheng; Ukida, H.; Niel, Kurt
Machine vision systems have evolved significantly with the technology advances to tackle the challenges from modern manufacturing industry. A wide range of industrial inspection applications for quality control are benefiting from visual information captured by different types of cameras variously configured in a machine vision system. This chapter screens the state of the art in machine vision technologies in the light of hardware, software tools, and major algorithm advances for industrial inspection. The inspection beyond visual spectrum offers a significant complementary to the visual inspection. The combination with multiple technologies makes it possible for the inspection to achieve a bettermore » performance and efficiency in varied applications. The diversity of the applications demonstrates the great potential of machine vision systems for industry.« less
Tanis, Emily Shea; Palmer, Susan B.; Wehmeyer, Michael L.; Davies, Danial; Stock, Steven; Lobb, Kathy; Bishop, Barbara
2014-01-01
Advancements of technologies in the areas of mobiliy, hearing and vision, communication, and daily living for people with intellectual and developmental disabilities (IDD) has the potential to greatly enhance indepencence and self-determination. Previous research, however, suggests that there is a “technological divide” with regard to the use of such technologies by people with IDD when compared with the general public. The present study sought to provide current information with regard to technology use by people with IDD by examining the technology needs, use, and barriers to such use experienced by 180 adults with IDD through QuestNet, a self-directed computer survey program. The study findings suggest that although there has been progress in technology acquisition and use by people IDD, yet there remains an underutilization of technologies across the population. PMID:22316226
Synthetic Vision Systems - Operational Considerations Simulation Experiment
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.
2007-01-01
Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.
Synthetic vision systems: operational considerations simulation experiment
NASA Astrophysics Data System (ADS)
Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.
2007-04-01
Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents / accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.
Present Vision--Future Vision.
ERIC Educational Resources Information Center
Fitterman, L. Jeffrey
This paper addresses issues of current and future technology use for and by individuals with visual impairments and blindness in Florida. Present technology applications used in vision programs in Florida are individually described, including video enlarging, speech output, large inkprint, braille print, paperless braille, and tactual output…
Image enhancement filters significantly improve reading performance for low vision observers
NASA Technical Reports Server (NTRS)
Lawton, T. B.
1992-01-01
As people age, so do their photoreceptors; many photoreceptors in central vision stop functioning when a person reaches their late sixties or early seventies. Low vision observers with losses in central vision, those with age-related maculopathies, were studied. Low vision observers no longer see high spatial frequencies, being unable to resolve fine edge detail. We developed image enhancement filters to compensate for the low vision observer's losses in contrast sensitivity to intermediate and high spatial frequencies. The filters work by boosting the amplitude of the less visible intermediate spatial frequencies. The lower spatial frequencies. These image enhancement filters not only reduce the magnification needed for reading by up to 70 percent, but they also increase the observer's reading speed by 2-4 times. A summary of this research is presented.
An augmented-reality edge enhancement application for Google Glass.
Hwang, Alex D; Peli, Eli
2014-08-01
Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass, which overlays enhanced edge information over the wearer's real-world view, to provide contrast-improved central vision to the Glass wearers. The enhanced central vision can be naturally integrated with scanning. Google Glass' camera lens distortions were corrected by using an image warping. Because the camera and virtual display are horizontally separated by 16 mm, and the camera aiming and virtual display projection angle are off by 10°, the warped camera image had to go through a series of three-dimensional transformations to minimize parallax errors before the final projection to the Glass' see-through virtual display. All image processes were implemented to achieve near real-time performance. The impacts of the contrast enhancements were measured for three normal-vision subjects, with and without a diffuser film to simulate vision loss. For all three subjects, significantly improved contrast sensitivity was achieved when the subjects used the edge enhancements with a diffuser film. The performance boost is limited by the Glass camera's performance. The authors assume that this accounts for why performance improvements were observed only with the diffuser filter condition (simulating low vision). Improvements were measured with simulated visual impairments. With the benefit of see-through augmented reality edge enhancement, natural visual scanning process is possible and suggests that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration.
Development and evaluation of vision rehabilitation devices.
Luo, Gang; Peli, Eli
2011-01-01
We have developed a range of vision rehabilitation devices and techniques for people with impaired vision due to either central vision loss or severely restricted peripheral visual field. We have conducted evaluation studies with patients to test the utilities of these techniques in an effort to document their advantages as well as their limitations. Here we describe our work on a visual field expander based on a head mounted display (HMD) for tunnel vision, a vision enhancement device for central vision loss, and a frequency domain JPEG/MPEG based image enhancement technique. All the evaluation studies included visual search paradigms that are suitable for conducting indoor controllable experiments.
Multi-Center Evaluation of the Automated Immunohematology Instrument, the ORTHO VISION Analyzer.
Aysola, Agnes; Wheeler, Leslie; Brown, Richard; Denham, Rebecca; Colavecchia, Connie; Pavenski, Katerina; Krok, Elizabeth; Hayes, Chelsea; Klapper, Ellen
2017-02-01
ORTHO VISION Analyzer (Vision), is an immunohematology instrument using ID-MT gel card technology with digital image processing. It has a continuous, random sample access with STAT priority processing. The efficiency and ease of operation of Vision was evaluated at 5 medical centers. De-identified patient samples were tested on the ORTHO ProVue Analyzer (ProVue) and repeated on the Vision mimicking the daily workload pattern. Turnaround times (TAT) were collected and compared. Operators rated key features of the analyzer on a scale of 1 to 5. A total of 507 samples were tested on both instruments at the 5 trial sites. The mean TAT (SD) were 31.6 minutes (5.5) with Vision and 35.7 minutes (8.4) with ProVue, which renders a 12% reduction. Type and screens were performed on 381 samples; the mean TAT (SD) was 32.2 minutes (4.5) with Vision and 37.0 minutes (7.4) with ProVue. Antibody identification with eleven panel cells was performed on 134 samples on Vision; TAT (SD) was 43.2 minutes (8.3). The installation, training, configuration, maintenance and validation processes are all streamlined to provide a short implementation time. The average rating of main functions by the operators was 4.1 to 4.8. Opportunities for improvement, such as flexibility with editing QC results, maintenance schedule, and printing options were identified. The capabilities to perform serial dilutions, to accept pediatric tubes, and review results by e-Connectivity are enhancements over the ProVue. Vision provides shorter TAT compared to ProVue. Every site described a positive experience using Vision. © American Society for Clinical Pathology, 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Night vision and electro-optics technology transfer, 1972 - 1981
NASA Astrophysics Data System (ADS)
Fulton, R. W.; Mason, G. F.
1981-09-01
The purpose of this special report, 'Night Vision and Electro-Optics Technology Transfer 1972-1981,' is threefold: To illustrate, through actual case histories, the potential for exploiting a highly developed and available military technology for solving non-military problems. To provide, in a layman's language, the principles behind night vision and electro-optical devices in order that an awareness may be developed relative to the potential for adopting this technology for non-military applications. To obtain maximum dollar return from research and development investments by applying this technology to secondary applications. This includes, but is not limited to, applications by other Government agencies, state and local governments, colleges and universities, and medical organizations. It is desired that this summary of Technology Transfer activities within Night Vision and Electro-Optics Laboratory (NV/EOL) will benefit those who desire to explore one of the vast technological resources available within the Defense Department and the Federal Government.
GSFC Information Systems Technology Developments Supporting the Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Hughes, Peter; Dennehy, Cornelius; Mosier, Gary; Smith, Dan; Rykowski, Lisa
2004-01-01
The Vision for Space Exploration will guide NASA's future human and robotic space activities. The broad range of human and robotic missions now being planned will require the development of new system-level capabilities enabled by emerging new technologies. Goddard Space Flight Center is actively supporting the Vision for Space Exploration in a number of program management, engineering and technology areas. This paper provides a brief background on the Vision for Space Exploration and a general overview of potential key Goddard contributions. In particular, this paper focuses on describing relevant GSFC information systems capabilities in architecture development; interoperable command, control and communications; and other applied information systems technology/research activities that are applicable to support the Vision for Space Exploration goals. Current GSFC development efforts and task activities are presented together with future plans.
ERIC Educational Resources Information Center
Chen, Kan; Stafford, Frank P.
A case study of machine vision was conducted to identify and analyze the employment effects of high technology in general. (Machine vision is the automatic acquisition and analysis of an image to obtain desired information for use in controlling an industrial activity, such as the visual sensor system that gives eyes to a robot.) Machine vision as…
Visions of Change: Information Technology, Education and Postmodernism.
ERIC Educational Resources Information Center
Conlon, Tom
2000-01-01
Encourages visionary questions relating to information technology and education. Describes the context of postmodernist change and discusses two contrasting visions of how education could change, paternalism and libertarianism. Concludes that teachers, learners, and communities need to articulate their own visions of education to ensure a…
(Computer) Vision without Sight
Manduchi, Roberto; Coughlan, James
2012-01-01
Computer vision holds great promise for helping persons with blindness or visual impairments (VI) to interpret and explore the visual world. To this end, it is worthwhile to assess the situation critically by understanding the actual needs of the VI population and which of these needs might be addressed by computer vision. This article reviews the types of assistive technology application areas that have already been developed for VI, and the possible roles that computer vision can play in facilitating these applications. We discuss how appropriate user interfaces are designed to translate the output of computer vision algorithms into information that the user can quickly and safely act upon, and how system-level characteristics affect the overall usability of an assistive technology. Finally, we conclude by highlighting a few novel and intriguing areas of application of computer vision to assistive technology. PMID:22815563
Federal regulation of vision enhancement devices for normal and abnormal vision
NASA Astrophysics Data System (ADS)
Drum, Bruce
2006-09-01
The Food and Drug Administration (FDA) evaluates the safety and effectiveness of medical devices and biological products as well as food and drugs. The FDA defines a device as a product that is intended, by physical means, to diagnose, treat, or prevent disease, or to affect the structure or function of the body. All vision enhancement devices fulfill this definition because they are intended to affect a function (vision) of the body. In practice, however, FDA historically has drawn a distinction between devices that are intended to enhance low vision as opposed to normal vision. Most low vision aids are therapeutic devices intended to compensate for visual impairment, and are actively regulated according to their level of risk to the patient. The risk level is usually low (e.g. Class I, exempt from 510(k) submission requirements for magnifiers that do not touch the eye), but can be as high as Class III (requiring a clinical trial and Premarket Approval (PMA) application) for certain implanted and prosthetic devices (e.g. intraocular telescopes and prosthetic retinal implants). In contrast, the FDA usually does not actively enforce its regulations for devices that are intended to enhance normal vision, are low risk, and do not have a medical intended use. However, if an implanted or prosthetic device were developed for enhancing normal vision, the FDA would likely decide to regulate it actively, because its intended use would entail a substantial medical risk to the user. Companies developing such devices should contact the FDA at an early stage to clarify their regulatory status.
Low vision goggles: optical design studies
NASA Astrophysics Data System (ADS)
Levy, Ofer; Apter, Boris; Efron, Uzi
2006-08-01
Low Vision (LV) due to Age Related Macular Degeneration (AMD), Glaucoma or Retinitis Pigmentosa (RP) is a growing problem, which will affect more than 15 million people in the U.S alone in 2010. Low Vision Aid Goggles (LVG) have been under development at Ben-Gurion University and the Holon Institute of Technology. The device is based on a unique Image Transceiver Device (ITD), combining both functions of imaging and Display in a single chip. Using the ITD-based goggles, specifically designed for the visually impaired, our aim is to develop a head-mounted device that will allow the capture of the ambient scenery, perform the necessary image enhancement and processing, and re-direct it to the healthy part of the patient's retina. This design methodology will allow the Goggles to be mobile, multi-task and environmental-adaptive. In this paper we present the optical design considerations of the Goggles, including a preliminary performance analysis. Common vision deficiencies of LV patients are usually divided into two main categories: peripheral vision loss (PVL) and central vision loss (CVL), each requiring different Goggles design. A set of design principles had been defined for each category. Four main optical designs are presented and compared according to the design principles. Each of the designs is presented in two main optical configurations: See-through system and Video imaging system. The use of a full-color ITD-Based Goggles is also discussed.
2012-09-01
The Cognition and Neuroergonomics (CaN) Collaborative Technology Alliance (CTA): Scientific Vision, Approach, and Translational Paths by...The Cognition and Neuroergonomics (CaN) Collaborative Technology Alliance (CTA): Scientific Vision, Approach, and Translational Paths Kelvin S. Oie...REPORT DATE (DD-MM-YYYY) September 2012 2. REPORT TYPE Final 3. DATES COVERED (From - To) 4. TITLE AND SUBTITLE The Cognition and Neuroergonomics
Computer-aided system for detecting runway incursions
NASA Astrophysics Data System (ADS)
Sridhar, Banavar; Chatterji, Gano B.
1994-07-01
A synthetic vision system for enhancing the pilot's ability to navigate and control the aircraft on the ground is described. The system uses the onboard airport database and images acquired by external sensors. Additional navigation information needed by the system is provided by the Inertial Navigation System and the Global Positioning System. The various functions of the system, such as image enhancement, map generation, obstacle detection, collision avoidance, guidance, etc., are identified. The available technologies, some of which were developed at NASA, that are applicable to the aircraft ground navigation problem are noted. Example images of a truck crossing the runway while the aircraft flies close to the runway centerline are described. These images are from a sequence of images acquired during one of the several flight experiments conducted by NASA to acquire data to be used for the development and verification of the synthetic vision concepts. These experiments provide a realistic database including video and infrared images, motion states from the Inertial Navigation System and the Global Positioning System, and camera parameters.
Low-latency situational awareness for UxV platforms
NASA Astrophysics Data System (ADS)
Berends, David C.
2012-06-01
Providing high quality, low latency video from unmanned vehicles through bandwidth-limited communications channels remains a formidable challenge for modern vision system designers. SRI has developed a number of enabling technologies to address this, including the use of SWaP-optimized Systems-on-a-Chip which provide Multispectral Fusion and Contrast Enhancement as well as H.264 video compression. Further, the use of salience-based image prefiltering prior to image compression greatly reduces output video bandwidth by selectively blurring non-important scene regions. Combined with our customization of the VLC open source video viewer for low latency video decoding, SRI developed a prototype high performance, high quality vision system for UxV application in support of very demanding system latency requirements and user CONOPS.
The study of stereo vision technique for the autonomous vehicle
NASA Astrophysics Data System (ADS)
Li, Pei; Wang, Xi; Wang, Jiang-feng
2015-08-01
The stereo vision technology by two or more cameras could recovery 3D information of the field of view. This technology can effectively help the autonomous navigation system of unmanned vehicle to judge the pavement conditions within the field of view, and to measure the obstacles on the road. In this paper, the stereo vision technology in measuring the avoidance of the autonomous vehicle is studied and the key techniques are analyzed and discussed. The system hardware of the system is built and the software is debugged, and finally the measurement effect is explained by the measured data. Experiments show that the 3D reconstruction, within the field of view, can be rebuilt by the stereo vision technology effectively, and provide the basis for pavement condition judgment. Compared with unmanned vehicle navigation radar used in measuring system, the stereo vision system has the advantages of low cost, distance and so on, it has a good application prospect.
An Augmented-Reality Edge Enhancement Application for Google Glass
Hwang, Alex D.; Peli, Eli
2014-01-01
Purpose Google Glass provides a platform that can be easily extended to include a vision enhancement tool. We have implemented an augmented vision system on Glass, which overlays enhanced edge information over the wearer’s real world view, to provide contrast-improved central vision to the Glass wearers. The enhanced central vision can be naturally integrated with scanning. Methods Goggle Glass’s camera lens distortions were corrected by using an image warping. Since the camera and virtual display are horizontally separated by 16mm, and the camera aiming and virtual display projection angle are off by 10°, the warped camera image had to go through a series of 3D transformations to minimize parallax errors before the final projection to the Glass’ see-through virtual display. All image processes were implemented to achieve near real-time performance. The impacts of the contrast enhancements were measured for three normal vision subjects, with and without a diffuser film to simulate vision loss. Results For all three subjects, significantly improved contrast sensitivity was achieved when the subjects used the edge enhancements with a diffuser film. The performance boost is limited by the Glass camera’s performance. The authors assume this accounts for why performance improvements were observed only with the diffuser filter condition (simulating low vision). Conclusions Improvements were measured with simulated visual impairments. With the benefit of see-through augmented reality edge enhancement, natural visual scanning process is possible, and suggests that the device may provide better visual function in a cosmetically and ergonomically attractive format for patients with macular degeneration. PMID:24978871
NASA Astrophysics Data System (ADS)
Irsch, Kristina; Gramatikov, Boris I.; Wu, Yi-Kai; Guyton, David L.
2014-06-01
Amblyopia ("lazy eye") is a major public health problem, caused by misalignment of the eyes (strabismus) or defocus. If detected early in childhood, there is an excellent response to therapy, yet most children are detected too late to be treated effectively. Commercially available vision screening devices that test for amblyopia's primary causes can detect strabismus only indirectly and inaccurately via assessment of the positions of external light reflections from the cornea, but they cannot detect the anatomical feature of the eyes where fixation actually occurs (the fovea). Our laboratory has been developing technology to detect true foveal fixation, by exploiting the birefringence of the uniquely arranged Henle fibers delineating the fovea using retinal birefringence scanning (RBS), and we recently described a polarization-modulated approach to RBS that enables entirely direct and reliable detection of true foveal fixation, with greatly enhanced signal-to-noise ratio and essentially independent of corneal birefringence (a confounding variable with all polarization-sensitive ophthalmic technology). Here, we describe the design and operation of a new pediatric vision screener that employs polarization-modulated, RBS-based strabismus detection and bull's eye focus detection with an improved target system, and demonstrate the feasibility of this new approach.
Leo, Fabrizio; Cocchi, Elena; Brayda, Luca
2017-07-01
Vision loss has severe impacts on physical, social and emotional well-being. The education of blind children poses issues as many scholar disciplines (e.g., geometry, mathematics) are normally taught by heavily relying on vision. Touch-based assistive technologies are potential tools to provide graphical contents to blind users, improving learning possibilities and social inclusion. Raised-lines drawings are still the golden standard, but stimuli cannot be reconfigured or adapted and the blind person constantly requires assistance. Although much research concerns technological development, little work concerned the assessment of programmable tactile graphics, in educative and rehabilitative contexts. Here we designed, on programmable tactile displays, tests aimed at assessing spatial memory skills and shapes recognition abilities. Tests involved a group of blind and a group of low vision children and adolescents in a four-week longitudinal schedule. After establishing subject-specific difficulty levels, we observed a significant enhancement of performance across sessions and for both groups. Learning effects were comparable to raised paper control tests: however, our setup required minimal external assistance. Overall, our results demonstrate that programmable maps are an effective way to display graphical contents in educative/rehabilitative contexts. They can be at least as effective as traditional paper tests yet providing superior flexibility and versatility.
Irsch, Kristina; Gramatikov, Boris I; Wu, Yi-Kai; Guyton, David L
2014-06-01
Amblyopia ("lazy eye") is a major public health problem, caused by misalignment of the eyes (strabismus) or defocus. If detected early in childhood, there is an excellent response to therapy, yet most children are detected too late to be treated effectively. Commercially available vision screening devices that test for amblyopia's primary causes can detect strabismus only indirectly and inaccurately via assessment of the positions of external light reflections from the cornea, but they cannot detect the anatomical feature of the eyes where fixation actually occurs (the fovea). Our laboratory has been developing technology to detect true foveal fixation, by exploiting the birefringence of the uniquely arranged Henle fibers delineating the fovea using retinal birefringence scanning (RBS), and we recently described a polarization-modulated approach to RBS that enables entirely direct and reliable detection of true foveal fixation, with greatly enhanced signal-to-noise ratio and essentially independent of corneal birefringence (a confounding variable with all polarization-sensitive ophthalmic technology). Here, we describe the design and operation of a new pediatric vision screener that employs polarization-modulated, RBS-based strabismus detection and bull's eye focus detection with an improved target system, and demonstrate the feasibility of this new approach.
Head-Worn Display Concepts for Surface Operations for Commerical Aircraft
NASA Technical Reports Server (NTRS)
Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Bailey, Randall E.; Shelton, Kevin J.; Williams, Steven P.; Kramer, Lynda J.; Norman, Robert M.
2008-01-01
Experiments and flight tests have shown that a Head-Up Display (HUD) and a head-down electronic moving map (EMM) can be enhanced with Synthetic Vision for airport surface operations. While great success in ground operations was demonstrated with a HUD, the research noted that two major HUD limitations during ground operations were its monochrome form and limited, fixed field-of-regard. A potential solution to these limitations found with HUDs may be emerging with Head Worn Displays (HWDs). HWDs are small display devices that may be worn without significant encumbrance to the user. By coupling the HWD with a head tracker, unlimited field-of-regard may be realized. The results of three ground simulation experiments conducted at NASA Langley Research Center are summarized. The experiments evaluated the efficacy of head-worn display applications of Synthetic Vision and Enhanced Vision technology to improve transport aircraft surface operations. The results of the experiments showed that the fully integrated HWD provided greater pilot performance with respect to staying on the path compared to using paper charts alone. Further, when comparing the HWD with the HUD concept, there were no differences in path performance. In addition, the HWD and HUD concepts were rated via paired-comparisons the same in terms of situation awareness and workload.
Technology for NASA's Planetary Science Vision 2050.
NASA Technical Reports Server (NTRS)
Lakew, B.; Amato, D.; Freeman, A.; Falker, J.; Turtle, Elizabeth; Green, J.; Mackwell, S.; Daou, D.
2017-01-01
NASAs Planetary Science Division (PSD) initiated and sponsored a very successful community Workshop held from Feb. 27 to Mar. 1, 2017 at NASA Headquarters. The purpose of the Workshop was to develop a vision of planetary science research and exploration for the next three decades until 2050. This abstract summarizes some of the salient technology needs discussed during the three-day workshop and at a technology panel on the final day. It is not meant to be a final report on technology to achieve the science vision for 2050.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-18
... Document--Draft DO-XXX, Minimum Aviation Performance Standards (MASPS) for an Enhanced Flight Vision System... Discussion (9:00 a.m.-5:00 p.m.) Provide Comment Resolution of Document--Draft DO-XXX, Minimum Aviation.../Approve FRAC Draft for PMC Consideration--Draft DO- XXX, Minimum Aviation Performance Standards (MASPS...
Proceedings of the Augmented VIsual Display (AVID) Research Workshop
NASA Technical Reports Server (NTRS)
Kaiser, Mary K. (Editor); Sweet, Barbara T. (Editor)
1993-01-01
The papers, abstracts, and presentations were presented at a three day workshop focused on sensor modeling and simulation, and image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.
Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery
NASA Astrophysics Data System (ADS)
Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng
2012-10-01
In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application is a combination of the technologies of MonoSLAM (Single Camera Simultaneous Localization and Mapping) and computer graphics. A stereo vision system is developed to construct geometric data of human face for registration with CT images. Reliability and accuracy of the application is enhanced by the use of fiduciary markers fixed to the skull. The MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter. The fiduciary markers provide reference when the AR marker is invisible to the camera. Relationship between the markers on the face and the augmented reality marker is obtained by a registration procedure by the stereo vision system and is updated on-line. A commercially available Android based tablet PC equipped with a 320×240 front-facing camera was used for implementation. The system is able to provide a live view of the patient overlaid by the solid models of tumors or anatomical structures, as well as the missing part of the tool inside the skull.
Operator vision aids for space teleoperation assembly and servicing
NASA Technical Reports Server (NTRS)
Brooks, Thurston L.; Ince, Ilhan; Lee, Greg
1992-01-01
This paper investigates concepts for visual operator aids required for effective telerobotic control. Operator visual aids, as defined here, mean any operational enhancement that improves man-machine control through the visual system. These concepts were derived as part of a study of vision issues for space teleoperation. Extensive literature on teleoperation, robotics, and human factors was surveyed to definitively specify appropriate requirements. This paper presents these visual aids in three general categories of camera/lighting functions, display enhancements, and operator cues. In the area of camera/lighting functions concepts are discussed for: (1) automatic end effector or task tracking; (2) novel camera designs; (3) computer-generated virtual camera views; (4) computer assisted camera/lighting placement; and (5) voice control. In the technology area of display aids, concepts are presented for: (1) zone displays, such as imminent collision or indexing limits; (2) predictive displays for temporal and spatial location; (3) stimulus-response reconciliation displays; (4) graphical display of depth cues such as 2-D symbolic depth, virtual views, and perspective depth; and (5) view enhancements through image processing and symbolic representations. Finally, operator visual cues (e.g., targets) that help identify size, distance, shape, orientation and location are discussed.
ERIC Educational Resources Information Center
Albion, Peter R.; Ertmer, Peggy A.
2002-01-01
Discussion of the successful adoption and use of information technology in education focuses on teacher's personal philosophical beliefs and how they influence the successful integration of technology. Highlights include beliefs and teacher behavior; changing teachers' beliefs; and using technology to affect change in teachers' visions and…
Solar sail science mission applications and advancement
NASA Astrophysics Data System (ADS)
Macdonald, Malcolm; McInnes, Colin
2011-12-01
Solar sailing has long been envisaged as an enabling or disruptive technology. The promise of open-ended missions allows consideration of radically new trajectories and the delivery of spacecraft to previously unreachable or unsustainable observation outposts. A mission catalogue is presented of an extensive range of potential solar sail applications, allowing identification of the key features of missions which are enabled, or significantly enhance, through solar sail propulsion. Through these considerations a solar sail application-pull technology development roadmap is established, using each mission as a technology stepping-stone to the next. Having identified and developed a solar sail application-pull technology development roadmap, this is incorporated into a new vision for solar sailing. The development of new technologies, especially for space applications, is high-risk. The advancement difficulty of low technology readiness level research is typically underestimated due to a lack of recognition of the advancement degree of difficulty scale. Recognising the currently low technology readiness level of traditional solar sailing concepts, along with their high advancement degree of difficulty and a lack of near-term applications a new vision for solar sailing is presented which increases the technology readiness level and reduces the advancement degree of difficulty of solar sailing. Just as the basic principles of solar sailing are not new, they have also been long proven and utilised in spacecraft as a low-risk, high-return limited-capability propulsion system. It is therefore proposed that this significant heritage be used to enable rapid, near-term solar sail future advancement through coupling currently mature solar sail, and other, technologies with current solar sail technology developments. As such the near-term technology readiness level of traditional solar sailing is increased, while simultaneously reducing the advancement degree of difficulty along the solar sail application-pull technology development roadmap.
Visual Prostheses: The Enabling Technology to Give Sight to the Blind
Maghami, Mohammad Hossein; Sodagar, Amir Masoud; Lashay, Alireza; Riazi-Esfahani, Hamid; Riazi-Esfahani, Mohammad
2014-01-01
Millions of patients are either slowly losing their vision or are already blind due to retinal degenerative diseases such as retinitis pigmentosa (RP) and age-related macular degeneration (AMD) or because of accidents or injuries. Employment of artificial means to treat extreme vision impairment has come closer to reality during the past few decades. Currently, many research groups work towards effective solutions to restore a rudimentary sense of vision to the blind. Aside from the efforts being put on replacing damaged parts of the retina by engineered living tissues or microfabricated photoreceptor arrays, implantable electronic microsystems, referred to as visual prostheses, are also sought as promising solutions to restore vision. From a functional point of view, visual prostheses receive image information from the outside world and deliver them to the natural visual system, enabling the subject to receive a meaningful perception of the image. This paper provides an overview of technical design aspects and clinical test results of visual prostheses, highlights past and recent progress in realizing chronic high-resolution visual implants as well as some technical challenges confronted when trying to enhance the functional quality of such devices. PMID:25709777
Reflections on the Development of a Machine Vision Technology for the Forest Products
Richard W. Conners; D.Earl Kline; Philip A. Araman; Robert L. Brisbon
1992-01-01
The authors have approximately 25 years experience in developing machine vision technology for the forest products industry. Based on this experience this paper will attempt to realistically predict what the future holds for this technology. In particular, this paper will attempt to describe some of the benefits this technology will offer, describe how the technology...
The flight telerobotic servicer and technology transfer
NASA Technical Reports Server (NTRS)
Andary, James F.; Bradford, Kayland Z.
1991-01-01
The Flight Telerobotic Servicer (FTS) project at the Goddard Space Flight Center is developing an advanced telerobotic system to assist in and reduce crew extravehicular activity (EVA) for Space Station Freedom (SSF). The FTS will provide a telerobotic capability in the early phases of the SSF program and will be employed for assembly, maintenance, and inspection applications. The current state of space technology and the general nature of the FTS tasks dictate that the FTS be designed with sophisticated teleoperational capabilities for its internal primary operating mode. However, technologies such as advanced computer vision and autonomous planning techniques would greatly enhance the FTS capabilities to perform autonomously in less structured work environments. Another objective of the FTS program is to accelerate technology transfer from research to U.S. industry.
Recent advances in the development and transfer of machine vision technologies for space
NASA Technical Reports Server (NTRS)
Defigueiredo, Rui J. P.; Pendleton, Thomas
1991-01-01
Recent work concerned with real-time machine vision is briefly reviewed. This work includes methodologies and techniques for optimal illumination, shape-from-shading of general (non-Lambertian) 3D surfaces, laser vision devices and technology, high level vision, sensor fusion, real-time computing, artificial neural network design and use, and motion estimation. Two new methods that are currently being developed for object recognition in clutter and for 3D attitude tracking based on line correspondence are discussed.
The Future of Learning Technology: Some Tentative Predictions
ERIC Educational Resources Information Center
Rushby, Nick
2013-01-01
This paper is a snapshot of an evolving vision of what the future may hold for learning technology. It offers three personal visions of the future and raises many questions that need to be explored if learning technology is to realise its full potential.
An Rx for 20/20 Vision: Vision Planning and Education.
ERIC Educational Resources Information Center
Chrisman, Gerald J.; Holliday, Clifford R.
1996-01-01
Discusses the Dallas Independent School District's decision to adopt an integrated technology infrastructure and the importance of vision planning for long term goals. Outlines the vision planning process: first draft; environmental projection; restatement of vision in terms of market projections, anticipated customer needs, suspected competitor…
NASA Astrophysics Data System (ADS)
Upadhyaya, A. S.; Bandyopadhyay, P. K.
2012-11-01
In state of art technology, integrated devices are widely used or their potential advantages. Common system reduces weight as well as total space covered by its various parts. In the state of art surveillance system integrated SWIR and night vision system used for more accurate identification of object. In this system a common optical window is used, which passes the radiation of both the regions, further both the spectral regions are separated in two channels. ZnS is a good choice for a common window, as it transmit both the region of interest, night vision (650 - 850 nm) as well as SWIR (0.9 - 1.7 μm). In this work a broad band anti reflection coating is developed on ZnS window to enhance the transmission. This seven layer coating is designed using flip flop design method. After getting the final design, some minor refinement is done, using simplex method. SiO2 and TiO2 coating material combination is used for this work. The coating is fabricated by physical vapour deposition process and the materials were evaporated by electron beam gun. Average transmission of both side coated substrate from 660 to 1700 nm is 95%. This coating also acts as contrast enhancement filter for night vision devices, as it reflect the region of 590 - 660 nm. Several trials have been conducted to check the coating repeatability, and it is observed that transmission variation in different trials is not very much and it is under the tolerance limit. The coating also passes environmental test for stability.
Enhanced/Synthetic Vision Systems - Human factors research and implications for future systems
NASA Technical Reports Server (NTRS)
Foyle, David C.; Ahumada, Albert J.; Larimer, James; Sweet, Barbara T.
1992-01-01
This paper reviews recent human factors research studies conducted in the Aerospace Human Factors Research Division at NASA Ames Research Center related to the development and usage of Enhanced or Synthetic Vision Systems. Research discussed includes studies of field of view (FOV), representational differences of infrared (IR) imagery, head-up display (HUD) symbology, HUD advanced concept designs, sensor fusion, and sensor/database fusion and evaluation. Implications for the design and usage of Enhanced or Synthetic Vision Systems are discussed.
Application of aircraft navigation sensors to enhanced vision systems
NASA Technical Reports Server (NTRS)
Sweet, Barbara T.
1993-01-01
In this presentation, the applicability of various aircraft navigation sensors to enhanced vision system design is discussed. First, the accuracy requirements of the FAA for precision landing systems are presented, followed by the current navigation systems and their characteristics. These systems include Instrument Landing System (ILS), Microwave Landing System (MLS), Inertial Navigation, Altimetry, and Global Positioning System (GPS). Finally, the use of navigation system data to improve enhanced vision systems is discussed. These applications include radar image rectification, motion compensation, and image registration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olson, Jarrod; Barr, Jonathan L.; Burtner, Edwin R.
A key challenge for research roadmapping in the crisis response and management domain is articulation of a shared vision that describes what the future can and should include. Visioning allows for far-reaching stakeholder engagement that can properly align research with stakeholders needs. Engagement includes feedback from researchers, policy makers, general public, and end-users on technical and non-technical factors. This work articulates a process and framework for the construction and maintenance of a stakeholder-centric research vision and roadmap in the emergency management domain. This novel roadmapping process integrates three pieces: analysis of the research and technology landscape, visioning, and stakeholder engagement.more » Our structured engagement process elicits research foci for the roadmap based on relevance to stakeholder mission, identifies collaborators, and builds consensus around the roadmap priorities. We find that the vision process and vision storyboard helps SMEs conceptualize and discuss a technology's strengths, weaknesses, and alignment with needs« less
Reinforcement learning in computer vision
NASA Astrophysics Data System (ADS)
Bernstein, A. V.; Burnaev, E. V.
2018-04-01
Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition and tracking. In many applications, various complex systems such as robots are equipped with visual sensors from which they learn state of surrounding environment by solving corresponding computer vision tasks. Solutions of these tasks are used for making decisions about possible future actions. It is not surprising that when solving computer vision tasks we should take into account special aspects of their subsequent application in model-based predictive control. Reinforcement learning is one of modern machine learning technologies in which learning is carried out through interaction with the environment. In recent years, Reinforcement learning has been used both for solving such applied tasks as processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper describes shortly the Reinforcement learning technology and its use for solving computer vision problems.
78 FR 55086 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-09
...: Emerging Technologies and Training Neurosciences Integrated Review Group; Bioengineering of Neuroscience, Vision and Low Vision Technologies Study Section. Date: October 3-4, 2013. Time: 8:00 a.m. to 11:00 a.m... . Name of Committee: Bioengineering Sciences & Technologies Integrated Review Group; Biomaterials and...
Transforming revenue management.
Silveria, Richard; Alliegro, Debra; Nudd, Steven
2008-11-01
Healthcare organizations that want to undertake a patient administrative/revenue management transformation should: Define the vision with underlying business objectives and key performance measures. Strategically partner with key vendors for business process development and technology design. Create a program organization and governance infrastructure. Develop a corporate design model that defines the standards for operationalizing the vision. Execute the vision through technology deployment and corporate design model implementation.
To See Anew: New Technologies Are Moving Rapidly Toward Restoring or Enabling Vision in the Blind.
Grifantini, Kristina
2017-01-01
Humans have been using technology to improve their vision for many decades. Eyeglasses, contact lenses, and, more recently, laser-based surgeries are commonly employed to remedy vision problems, both minor and major. But options are far fewer for those who have not seen since birth or who have reached stages of blindness in later life.
ERIC Educational Resources Information Center
Pinkwart, Niels
2016-01-01
This paper attempts an analysis of some current trends and future developments in computer science, education, and educational technology. Based on these trends, two possible future predictions of AIED are presented in the form of a utopian vision and a dystopian vision. A comparison of these two visions leads to seven challenges that AIED might…
Resources and Information for Parents about Braille
... Vision Loss Eye Conditions Losing Your Sight? Using Technology For Parents of Blind Children For Job Seekers For Seniors Braille Video Description Programs & Services Technology Evaluation Center on Vision Loss Learning Center Professional ...
Beyond speculative robot ethics: a vision assessment study on the future of the robotic caretaker.
van der Plas, Arjanna; Smits, Martijntje; Wehrmann, Caroline
2010-11-01
In this article we develop a dialogue model for robot technology experts and designated users to discuss visions on the future of robotics in long-term care. Our vision assessment study aims for more distinguished and more informed visions on future robots. Surprisingly, our experiment also led to some promising co-designed robot concepts in which jointly articulated moral guidelines are embedded. With our model, we think to have designed an interesting response on a recent call for a less speculative ethics of technology by encouraging discussions about the quality of positive and negative visions on the future of robotics.
2016 National Algal Biofuels Technology Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barry, Amanda; Wolfe, Alexis; English, Christine
The Bioenergy Technologies Office (BETO) of the U.S. Department of Energy (DOE), Office of Energy Efficiency and Renewable Energy, is committed to advancing the vision of a viable, sustainable domestic biomass industry that produces renewable biofuels, bioproducts, and biopower; enhances U.S. energy security; reduces our dependence on fossil fuels; provides environmental benefits; and creates economic opportunities across the nation. BETO’s goals are driven by various federal policies and laws, including the Energy Independence and Security Act of 2007 (EISA). To accomplish its goals, BETO has undertaken a diverse portfolio of research, development, and demonstration (RD&D) activities, in partnership with nationalmore » laboratories, academia, and industry.« less
High-Performance, Radiation-Hardened Electronics for Space and Lunar Environments
NASA Technical Reports Server (NTRS)
Keys, Andrew S.; Adams, James H.; Cressler, John D.; Darty, Ronald C.; Johnson, Michael A.; Patrick, Marshall C.
2008-01-01
The Radiation Hardened Electronics for Space Environments (RHESE) project develops advanced technologies needed for high performance electronic devices that will be capable of operating within the demanding radiation and thermal extremes of the space, lunar, and Martian environment. The technologies developed under this project enhance and enable avionics within multiple mission elements of NASA's Vision for Space Exploration. including the Constellation program's Orion Crew Exploration Vehicle. the Lunar Lander project, Lunar Outpost elements, and Extra Vehicular Activity (EVA) elements. This paper provides an overview of the RHESE project and its multiple task tasks, their technical approaches, and their targeted benefits as applied to NASA missions.
Driver's Enhanced Vision System (DEVS)
DOT National Transportation Integrated Search
1996-12-23
This advisory circular (AC) contains performance standards, specifications, and : recommendations for Drivers Enhanced Vision sSystem (DEVS). The FAA recommends : the use of the guidance in this publication for the design and installation of : DEVS e...
Danville Community College Information Technology General Plan, 1998-99.
ERIC Educational Resources Information Center
Danville Community Coll., VA.
This document describes technology usage, infrastructure and planning for Danville Community College. The Plan is divided into four sections: Introduction, Vision and Mission, Applications, and Infrastructure. The four major goals identified in Vision and Mission are: (1) to ensure the successful use of all technologies through continued training…
Effects of contour enhancement on low-vision preference and visual search.
Satgunam, Premnandhini; Woods, Russell L; Luo, Gang; Bronstad, P Matthew; Reynolds, Zachary; Ramachandra, Chaithanya; Mel, Bartlett W; Peli, Eli
2012-09-01
To determine whether image enhancement improves visual search performance and whether enhanced images were also preferred by subjects with vision impairment. Subjects (n = 24) with vision impairment (vision: 20/52 to 20/240) completed visual search and preference tasks for 150 static images that were enhanced to increase object contours' visual saliency. Subjects were divided into two groups and were shown three enhancement levels. Original and medium enhancements were shown to both groups. High enhancement was shown to group 1, and low enhancement was shown to group 2. For search, subjects pointed to an object that matched a search target displayed at the top left of the screen. An "integrated search performance" measure (area under the curve of cumulative correct response rate over search time) quantified performance. For preference, subjects indicated the preferred side when viewing the same image with different enhancement levels on side-by-side high-definition televisions. Contour enhancement did not improve performance in the visual search task. Group 1 subjects significantly (p < 0.001) rejected the High enhancement, and showed no preference for medium enhancement over the original images. Group 2 subjects significantly preferred (p < 0.001) both the medium and the low enhancement levels over original. Contrast sensitivity was correlated with both preference and performance; subjects with worse contrast sensitivity performed worse in the search task (ρ = 0.77, p < 0.001) and preferred more enhancement (ρ = -0.47, p = 0.02). No correlation between visual search performance and enhancement preference was found. However, a small group of subjects (n = 6) in a narrow range of mid-contrast sensitivity performed better with the enhancement, and most (n = 5) also preferred the enhancement. Preferences for image enhancement can be dissociated from search performance in people with vision impairment. Further investigations are needed to study the relationships between preference and performance for a narrow range of mid-contrast sensitivity where a beneficial effect of enhancement may exist.
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Chen, Alexander Y. K.
1991-01-01
Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in development of advanced manufacturing technology. A cost-effective approach for achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve the product quality, increase the reliability of the manufacturing process, and augment the production procedures for optimizing the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demand and has the potential to address both complex jobs as well as highly labor-intensive tasks. The ARS prototype employs the decomposed optimization technique in spatial planning. This technique is implemented to the framework of the sensor-actuator network to establish the general-purpose geometric reasoning system. The development computer system is a multiple microcomputer network system, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystems results in the real-time image processing vision-based capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of the local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning. The ARS currently has 18 degrees of freedom made up by two articulated arms, one movable robot head, and two charged coupled device (CCD) cameras for producing the stereoscopic views, and articulated cylindrical-type lower body, and an optional mobile base. A functional prototype is demonstrated.
Find Services for People Who Are Blind or Visually Impaired
... Vision Loss Eye Conditions Losing Your Sight? Using Technology For Parents of Blind Children For Job Seekers For Seniors Braille Video Description Programs & Services Technology Evaluation Center on Vision Loss Learning Center Professional ...
Gamma-Ray imaging for nuclear security and safety: Towards 3-D gamma-ray vision
NASA Astrophysics Data System (ADS)
Vetter, Kai; Barnowksi, Ross; Haefner, Andrew; Joshi, Tenzing H. Y.; Pavlovsky, Ryan; Quiter, Brian J.
2018-01-01
The development of portable gamma-ray imaging instruments in combination with the recent advances in sensor and related computer vision technologies enable unprecedented capabilities in the detection, localization, and mapping of radiological and nuclear materials in complex environments relevant for nuclear security and safety. Though multi-modal imaging has been established in medicine and biomedical imaging for some time, the potential of multi-modal data fusion for radiological localization and mapping problems in complex indoor and outdoor environments remains to be explored in detail. In contrast to the well-defined settings in medical or biological imaging associated with small field-of-view and well-constrained extension of the radiation field, in many radiological search and mapping scenarios, the radiation fields are not constrained and objects and sources are not necessarily known prior to the measurement. The ability to fuse radiological with contextual or scene data in three dimensions, in analog to radiological and functional imaging with anatomical fusion in medicine, provides new capabilities enhancing image clarity, context, quantitative estimates, and visualization of the data products. We have developed new means to register and fuse gamma-ray imaging with contextual data from portable or moving platforms. These developments enhance detection and mapping capabilities as well as provide unprecedented visualization of complex radiation fields, moving us one step closer to the realization of gamma-ray vision in three dimensions.
Color machine vision in industrial process control: case limestone mine
NASA Astrophysics Data System (ADS)
Paernaenen, Pekka H. T.; Lemstrom, Guy F.; Koskinen, Seppo
1994-11-01
An optical sorter technology has been developed to improve profitability of a mine by using color line scan machine vision technology. The new technology adapted longers the expected life time of the limestone mine and improves its efficiency. Also the project has proved that color line scan technology of today can successfully be applied to industrial use in harsh environments.
Simulation Evaluation of Equivalent Vision Technologies for Aerospace Operations
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Williams, Steven P.; Wilz, Susan J.; Arthur, Jarvis J.
2009-01-01
A fixed-based simulation experiment was conducted in NASA Langley Research Center s Integration Flight Deck simulator to investigate enabling technologies for equivalent visual operations (EVO) in the emerging Next Generation Air Transportation System operating environment. EVO implies the capability to achieve or even improve on the safety of current-day Visual Flight Rules (VFR) operations, maintain the operational tempos of VFR, and perhaps even retain VFR procedures - all independent of the actual weather and visibility conditions. Twenty-four air transport-rated pilots evaluated the use of Synthetic/Enhanced Vision Systems (S/EVS) and eXternal Vision Systems (XVS) technologies as enabling technologies for future all-weather operations. The experimental objectives were to determine the feasibility of XVS/SVS/EVS to provide for all weather (visibility) landing capability without the need (or ability) for a visual approach segment and to determine the interaction of XVS/EVS and peripheral vision cues for terminal area and surface operations. Another key element of the testing investigated the pilot's awareness and reaction to non-normal events (i.e., failure conditions) that were unexpectedly introduced into the experiment. These non-normal runs served as critical determinants in the underlying safety of all-weather operations. Experimental data from this test are cast into performance-based approach and landing standards which might establish a basis for future all-weather landing operations. Glideslope tracking performance appears to have improved with the elimination of the approach visual segment. This improvement can most likely be attributed to the fact that the pilots didn't have to simultaneously perform glideslope corrections and find required visual landing references in order to continue a landing. Lateral tracking performance was excellent regardless of the display concept being evaluated or whether or not there were peripheral cues in the side window. Although workload ratings were significantly less when peripheral cues were present compared to when there were none, these differences appear to be operationally inconsequential. Larger display concepts tested in this experiment showed significant situation awareness (SA) improvements and workload reductions compared to smaller display concepts. With a fixed display size, a color display was more influential in SA and workload ratings than a collimated display.
Implementation of a robotic flexible assembly system
NASA Technical Reports Server (NTRS)
Benton, Ronald C.
1987-01-01
As part of the Intelligent Task Automation program, a team developed enabling technologies for programmable, sensory controlled manipulation in unstructured environments. These technologies include 2-D/3-D vision sensing and understanding, force sensing and high speed force control, 2.5-D vision alignment and control, and multiple processor architectures. The subsequent design of a flexible, programmable, sensor controlled robotic assembly system for small electromechanical devices is described using these technologies and ongoing implementation and integration efforts. Using vision, the system picks parts dumped randomly in a tray. Using vision and force control, it performs high speed part mating, in-process monitoring/verification of expected results and autonomous recovery from some errors. It is programmed off line with semiautomatic action planning.
Partnering to Change the Way NASA and the Nation Communicate Through Space
NASA Technical Reports Server (NTRS)
Vrotsos, Pete A.; Budinger, James M.; Bhasin, Kul; Ponchak, Denise S.
2000-01-01
For at least 20 years, the Space Communications Program at NASA Glenn Research Center (GRC) has focused on enhancing the capability and competitiveness of the U.S. commercial communications satellite industry. GRC has partnered with the industry on the development of enabling technologies to help maintain U.S. preeminence in the worldwide communications satellite marketplace. The Advanced Communications Technology Satellite (ACTS) has been the most significant space communications technology endeavor ever performed at GRC, and the centerpiece of GRC's communication technology program for the last decade. Under new sponsorship from NASA's Human Exploration and Development of Space Enterprise, GRC has transitioned the focus and direction of its program, from commercial relevance to NASA mission relevance. Instead of one major experimental spacecraft and one headquarters sponsor, GRC is now exploring opportunities for all of NASA's Enterprises to benefit from advances in space communications technologies, and accomplish their missions through the use of existing and emerging commercially provided services. A growing vision within NASA is to leverage the best commercial standards, technologies, and services as a starting point to satisfy NASA's unique needs. GRC's heritage of industry partnerships is closely aligned with this vision. NASA intends to leverage the explosive growth of the telecommunications industry through its impressive technology advancements and potential new commercial satellite systems. GRC's partnerships with the industry, academia, and other government agencies will directly support all four NASA's future mission needs, while advancing the state of the art of commercial practice. GRC now conducts applied research and develops and demonstrates advanced communications and network technologies in support of all four NASA Enterprises (Human Exploration and Development of Space, Space Science, Earth Science, and Aero-Space Technologies).
Millimeter-wave technology advances since 1985 and future trends
NASA Astrophysics Data System (ADS)
Meinel, Holger H.
1991-05-01
The author focuses on finline or E-plane technology. Several examples, including AVES, a 61.5-GHz radar sensor for traffic data acquisition, are included. Monolithic integrated 60- and 94-GHz receiver circuits composed of a mixer and IF amplifier in compatible FET technology on GaAs are presented to show the state of the art in this area. A promising approach to the use of silicon technology for monolithic millimeter-wave integrated circuits, called SIMMWIC, is described as well. As millimeter-wave technology has matured, increased interest has been generated for very specific applications: (1) commercial automotive applications such as intelligent cruise control and enhanced vision have attracted great interest, calling for a low-cost design approach; and (2) an almost classical application of millimeter-wave techniques is the field of radar seekers, e.g., for intelligent ammunitions, calling for high performance under extreme environmental conditions. Two examples fulfilling these requirements are described.
Communication technology and social media: opportunities and implications for healthcare systems.
Weaver, Betsy; Lindsay, Bill; Gitelman, Betsy
2012-09-30
Electronic patient education and communications, such as email, text messaging, and social media, are on the rise in healthcare today. This article explores potential uses of technology to seek solutions in healthcare for such challenges as modifying behaviors related to chronic conditions, improving efficiency, and decreasing costs. A brief discussion highlights the role of technologies in healthcare informatics and considers two theoretical bases for technology implementation. Discussion focuses more extensively on the ability and advantages of electronic communication technology, such as e-mail, social media, text messaging, and electronic health records, to enhance patient-provider e-communications in nursing today. Effectiveness of e-communication in healthcare is explored, including recent and emerging applications designed to improve patient-provider connections and review of current evidence supporting positive outcomes. The conclusion addresses the vision of nurses' place in the vanguard of these developments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trinklei, Eddy; Parker, Gordon; Weaver, Wayne
This report presents a scoping study for networked microgrids which are defined as "Interoperable groups of multiple Advanced Microgrids that become an integral part of the electricity grid while providing enhanced resiliency through self-healing, aggregated ancillary services, and real-time communication." They result in optimal electrical system configurations and controls whether grid-connected or in islanded modes and enable high penetrations of distributed and renewable energy resources. The vision for the purpose of this document is: "Networked microgrids seamlessly integrate with the electricity grid or other Electric Power Sources (EPS) providing cost effective, high quality, reliable, resilient, self-healing power delivery systems." Scopingmore » Study: Networked Microgrids September 4, 2014 Eddy Trinklein, Michigan Technological University Gordon Parker, Michigan Technological University Wayne Weaver, Michigan Technological University Rush Robinett, Michigan Technological University Lucia Gauchia Babe, Michigan Technological University Chee-Wooi Ten, Michigan Technological University Ward Bower, Ward Bower Innovations LLC Steve Glover, Sandia National Laboratories Steve Bukowski, Sandia National Laboratories Prepared by Michigan Technological University Houghton, Michigan 49931 Michigan Technological University« less
Images Revealing More Than a Thousand Words
NASA Technical Reports Server (NTRS)
2003-01-01
A unique sensor developed by ProVision Technologies, a NASA Commercial Space Center housed by the Institute for Technology Development, produces hyperspectral images with cutting-edge applications in food safety, skin health, forensics, and anti-terrorism activities. While hyperspectral imaging technology continues to make advances with ProVision Technologies, it has also been transferred to the commercial sector through a spinoff company, Photon Industries, Inc.
Ethical, environmental and social issues for machine vision in manufacturing industry
NASA Astrophysics Data System (ADS)
Batchelor, Bruce G.; Whelan, Paul F.
1995-10-01
Some of the ethical, environmental and social issues relating to the design and use of machine vision systems in manufacturing industry are highlighted. The authors' aim is to emphasize some of the more important issues, and raise general awareness of the need to consider the potential advantages and hazards of machine vision technology. However, in a short article like this, it is impossible to cover the subject comprehensively. This paper should therefore be seen as a discussion document, which it is hoped will provoke more detailed consideration of these very important issues. It follows from an article presented at last year's workshop. Five major topics are discussed: (1) The impact of machine vision systems on the environment; (2) The implications of machine vision for product and factory safety, the health and well-being of employees; (3) The importance of intellectual integrity in a field requiring a careful balance of advanced ideas and technologies; (4) Commercial and managerial integrity; and (5) The impact of machine visions technology on employment prospects, particularly for people with low skill levels.
Spaceborne GPS: Current Status and Future Visions
NASA Technical Reports Server (NTRS)
Bauer, Frank H.; Hartman, Kate; Lightsey, E. Glenn
1998-01-01
The Global Positioning System (GPS), developed by the Department of Defense is quickly revolutionizing the architecture of future spacecraft and spacecraft systems. Significant savings in spacecraft life cycle cost, in power, and in mass can be realized by exploiting GPS technology in spaceborne vehicles. These savings are realized because GPS is a systems sensor--it combines the ability to sense space vehicle trajectory, attitude, time, and relative ranging between vehicles into one package. As a result, a reduced spacecraft sensor complement can be employed and significant reductions in space vehicle operations cost can be realized through enhanced on-board autonomy. This paper provides an overview of the current status of spaceborne GPS, a description of spaceborne GPS receivers available now and in the near future, a description of the 1997-2000 GPS flight experiments, and the spaceborne GPS team's vision for the future.
2010-04-01
failing to comply with a collection of information if it does not display a currently valid OMB control number. 1. REPORT DATE APR 2010 2. REPORT...The second is a ‘mechanical’ part that is controlled by circuit boards and is accessible by the technician via the serial console and running...was the use of conventional remote access solution designed for telecommuters or teleworkers in the Information Technology (IT) world, such as a
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slobotski, Stephanie,
2011-09-01
Under this project, the Ponca Tribe of Nebraska (PTN) will conduct An Energy Options Analysis (EOA) to empower Tribal Leadership with critical information to allow them to effectively screen energy options that will further develop the Tribe's long-term strategic plan and energy vision. The PTN will also provide community workshops to enhance Tribal Members' capabilities, skills and awareness of energy efficiency and conservation technology and practices. A 90- minute workshop will be conducted at each of the 5 sites and one-hundred tribal members will receive an erergy efficiency kit.
OLED study for military applications
NASA Astrophysics Data System (ADS)
Barre, F.; Chiquard, A.; Faure, S.; Landais, L.; Patry, P.
2005-07-01
The presentation deals with some applications of OLED displays in military optronic systems, which are scheduled by SAGEM DS (Defence and Security). SAGEM DS, one of the largest group in the defence and security market, is currently investigating OLED Technologies for military programs. This technology is close from being chosen for optronic equipment such as future infantry night vision goggles, rifle-sight, or, more generally, vision enhancement systems. Most of those applications requires micro-display with an active matrix size below 1". Some others, such as, for instance, ruggedized flat displays do have a need for higher active matrix size (1,5" to 15"). SAGEM DS takes advantages of this flat, high luminance and emissive technology in highly integrated systems. In any case, many requirements have to be fulfilled: ultra-low power consumption, wide viewing angle, good pixel to pixel uniformity, and satisfactory behaviour in extreme environmental conditions.... Accurate measurements have been achieved at SAGEM DS on some micro display OLEDs and will be detailed: luminance (over 2000 cd/m2 achieved), area uniformity and pixel to pixel uniformity, robustness at low and high temperature (-40°C to +60°C), lifetime. These results, which refer to military requirements, provide a valuable feedback representative of the state of the art OLED performances.
3D Medical Collaboration Technology to Enhance Emergency Healthcare
Welch, Greg; Sonnenwald, Diane H; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Söderholm, Hanna M.; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Ampalam, Manoj; Krishnan, Srinivas; Noel, Vincent; Noland, Michael; Manning, James E.
2009-01-01
Two-dimensional (2D) videoconferencing has been explored widely in the past 15–20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals’ viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare. PMID:19521951
3D medical collaboration technology to enhance emergency healthcare.
Welch, Gregory F; Sonnenwald, Diane H; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Söderholm, Hanna M; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Ampalam, Manoj K; Krishnan, Srinivas; Noel, Vincent; Noland, Michael; Manning, James E
2009-04-19
Two-dimensional (2D) videoconferencing has been explored widely in the past 15-20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals' viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare.
Integrated navigation, flight guidance, and synthetic vision system for low-level flight
NASA Astrophysics Data System (ADS)
Mehler, Felix E.
2000-06-01
Future military transport aircraft will require a new approach with respect to the avionics suite to fulfill an ever-changing variety of missions. The most demanding phases of these mission are typically the low level flight segments, including tactical terrain following/avoidance,payload drop and/or board autonomous landing at forward operating strips without ground-based infrastructure. As a consequence, individual components and systems must become more integrated to offer a higher degree of reliability, integrity, flexibility and autonomy over existing systems while reducing crew workload. The integration of digital terrain data not only introduces synthetic vision into the cockpit, but also enhances navigation and guidance capabilities. At DaimlerChrysler Aerospace AG Military Aircraft Division (Dasa-M), an integrated navigation, flight guidance and synthetic vision system, based on digital terrain data, has been developed to fulfill the requirements of the Future Transport Aircraft (FTA). The fusion of three independent navigation sensors provides a more reliable and precise solution to both the 4D-flight guidance and the display components, which is comprised of a Head-up and a Head-down Display with synthetic vision. This paper will present the system, its integration into the DLR's VFW 614 Advanced Technology Testing Aircraft System (ATTAS) and the results of the flight-test campaign.
NASA Astrophysics Data System (ADS)
Rahman, Hameedur; Arshad, Haslina; Mahmud, Rozi; Mahayuddin, Zainal Rasyid
2017-10-01
Breast Cancer patients who require breast biopsy has increased over the past years. Augmented Reality guided core biopsy of breast has become the method of choice for researchers. However, this cancer visualization has limitations to the extent of superimposing the 3D imaging data only. In this paper, we are introducing an Augmented Reality visualization framework that enables breast cancer biopsy image guidance by using X-Ray vision technique on a mobile display. This framework consists of 4 phases where it initially acquires the image from CT/MRI and process the medical images into 3D slices, secondly it will purify these 3D grayscale slices into 3D breast tumor model using 3D modeling reconstruction technique. Further, in visualization processing this virtual 3D breast tumor model has been enhanced using X-ray vision technique to see through the skin of the phantom and the final composition of it is displayed on handheld device to optimize the accuracy of the visualization in six degree of freedom. The framework is perceived as an improved visualization experience because the Augmented Reality x-ray vision allowed direct understanding of the breast tumor beyond the visible surface and direct guidance towards accurate biopsy targets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowry, Thomas Stephen; Finger, John T.; Carrigan, Charles R.
This report documents the key findings from the Reservoir Maintenance and Development (RM&D) Task of the U.S. Department of Energy's (DOE), Geothermal Technologies Office (GTO) Geothermal Vision Study (GeoVision Study). The GeoVision Study had the objective of conducting analyses of future geothermal growth based on sets of current and future geothermal technology developments. The RM&D Task is one of seven tasks within the GeoVision Study with the others being, Exploration and Confirmation, Potential to Penetration, Institutional Market Barriers, Environmental and Social Impacts, Thermal Applications, and Hybrid Systems. The full set of findings and the details of the GeoVision Study canmore » be found in the final GeoVision Study report on the DOE-GTO website. As applied here, RM&D refers to the activities associated with developing, exploiting, and maintaining a known geothermal resource. It assumes that the site has already been vetted and that the resource has been evaluated to be of sufficient quality to move towards full-scale development. It also assumes that the resource is to be developed for power generation, as opposed to low-temperature or direct use applications. This document presents the key factors influencing RM&D from both a technological and operational standpoint and provides a baseline of its current state. It also looks forward to describe areas of research and development that must be pursued if the development geothermal energy is to reach its full potential.« less
Visions 2020.2: Student Views on Transforming Education and Training through Advanced Technologies
ERIC Educational Resources Information Center
US Department of Education, 2004
2004-01-01
The U.S. Departments of Commerce and Education (who co-chair the NSTC Working Group) and NetDay formed a partnership aimed at analyzing K-12 student views about technology for learning. These views are analyzed in this second report, "Visions 2020.2: Student Views on Transforming Education and Training Through Advanced Technologies." In…
NASA Technical Reports Server (NTRS)
1999-01-01
Amherst Systems manufactures foveal machine vision technology and systems commercially available to end-users and system integrators. This technology was initially developed under NASA contracts NAS9-19335 (Johnson Space Center) and NAS1-20841 (Langley Research Center). This technology is currently being delivered to university research facilities and military sites. More information may be found in www.amherst.com.
ERIC Educational Resources Information Center
Flanagan, Gina E.
2014-01-01
There is limited research that outlines how a superintendent's instructional vision can help to gain acceptance of a large-scale technology initiative. This study explored how superintendents gain acceptance for a large-scale technology initiative (specifically a 1:1 device program) through various leadership actions. The role of the instructional…
Machine Vision Technology for the Forest Products Industry
Richard W. Conners; D.Earl Kline; Philip A. Araman; Thomas T. Drayer
1997-01-01
From forest to finished product, wood is moved from one processing stage to the next, subject to the decisions of individuals along the way. While this process has worked for hundreds of years, the technology exists today to provide more complete information to the decision makers. Virginia Tech has developed this technology, creating a machine vision prototype for...
A Practical Solution Using A New Approach To Robot Vision
NASA Astrophysics Data System (ADS)
Hudson, David L.
1984-01-01
Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complimentary but lower cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS experience lower development cost when compared with conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens more industrial applications to robot vision that previously were not practical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another and lens and light sources from yet others. The user then had to assemble the pieces, and in most instances he had to write all of his own software to test, analyze and process the vision application. The second and most common approach was to contract with the vision equipment vendor for the development and installation of a turnkey inspection or manufacturing system. The robot user and his company paid a premium for their vision system in an effort to assure the success of the system. Since 1981, emphasis on robotics has skyrocketed. New groups have been formed in many manufacturing companies with the charter to learn about, test and initially apply new robot and automation technologies. Machine vision is one of new technologies being tested and applied. This focused interest has created a need for a robot vision system that makes it easy for manufacturing engineers to learn about, test, and implement a robot vision application. A newly developed vision system addresses those needs. Vision Development System (VDS) is a complete hardware and software product for the development and testing of robot vision applications. A complimentary, low cost Target Application System (TASK) runs the application program developed with the VDS. An actual robot vision application that demonstrates inspection and pre-assembly for keyboard manufacturing is used to illustrate the VDS/TASK approach.
Rasmussen, Rune Skovgaard; Schaarup, Anne Marie Heltoft; Overgaard, Karsten
2018-02-27
Serious and often lasting vision impairments affect 30% to 35% of people following stroke. Vision may be considered the most important sense in humans, and even smaller permanent injuries can drastically reduce quality of life. Restoration of visual field impairments occur only to a small extent during the first month after brain damage, and therefore the time window for spontaneous improvements is limited. One month after brain injury causing visual impairment, patients usually will experience chronically impaired vision and the need for compensatory vision rehabilitation is substantial. The purpose of this study is to investigate whether rehabilitation with Neuro Vision Technology will result in a significant and lasting improvement in functional capacity in persons with chronic visual impairments after brain injury. Improving eyesight is expected to increase both physical and mental functioning, thus improving the quality of life. This is a prospective open label trial in which participants with chronic visual field impairments are examined before and after the intervention. Participants typically suffer from stroke or traumatic brain injury and will be recruited from hospitals and The Institute for the Blind and Partially Sighted. Treatment is based on Neuro Vision Technology, which is a supervised training course, where participants are trained in compensatory techniques using specially designed equipment. Through the Neuro Vision Technology procedure, the vision problems of each individual are carefully investigated, and personal data is used to organize individual training sessions. Cognitive face-to-face assessments and self-assessed questionnaires about both life and vision quality are also applied before and after the training. Funding was provided in June 2017. Results are expected to be available in 2020. Sample size is calculated to 23 participants. Due to age, difficulty in transport, and the time-consuming intervention, up to 25% dropouts are expected; thus, we aim to include at least 29 participants. This investigation will evaluate the effects of Neuro Vision Technology therapy on compensatory vision rehabilitation. Additionally, quality of life and cognitive improvements associated to increased quality of life will be explored. ClinicalTrials.gov NCT03160131; https://clinicaltrials.gov/ct2/show/NCT03160131 (Archived by WebCite at http://www.webcitation.org/6x3f5HnCv). ©Rune Skovgaard Rasmussen, Anne Marie Heltoft Schaarup, Karsten Overgaard. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 27.02.2018.
Seeing the Universe: On the Cusp of Technology
NASA Astrophysics Data System (ADS)
Impey, C.
2009-08-01
Since Galileo Galilei turned his modest spyglass to the night sky 400 years ago, his heirs have enhanced his legacy by the method of using successively larger lenses, and then concave mirrors. Astronomers have even succeeded in stilling the night air with flexible optics, letting their telescopes register the sharpest images that optics will allow. New technologies allow the creation of telescopes that would be unrecognizable to Galileo. They don't use glass mirrors or lenses and they often gather radiation that's invisible to the eye. Astronomers have even managed to use the Earth as a telescope. These innovations have given us a sharper vision of the universe, and a better sense of our place in space and time.
Transforming American Education
ERIC Educational Resources Information Center
Horn, Michael B.; Mackey, Katherine
2011-01-01
In this article the authors accept as a given the National Education Technology Plan's vision of a transformed education system powered by technology such that learners receive personalized and engaging learning experiences, and where assessment, teaching, infrastructure, and productivity are redefined. The article analyzes this vision of a…
Gait disorder rehabilitation using vision and non-vision based sensors: A systematic review
Ali, Asraf; Sundaraj, Kenneth; Ahmad, Badlishah; Ahamed, Nizam; Islam, Anamul
2012-01-01
Even though the amount of rehabilitation guidelines has never been greater, uncertainty continues to arise regarding the efficiency and effectiveness of the rehabilitation of gait disorders. This question has been hindered by the lack of information on accurate measurements of gait disorders. Thus, this article reviews the rehabilitation systems for gait disorder using vision and non-vision sensor technologies, as well as the combination of these. All papers published in the English language between 1990 and June, 2012 that had the phrases “gait disorder” “rehabilitation”, “vision sensor”, or “non vision sensor” in the title, abstract, or keywords were identified from the SpringerLink, ELSEVIER, PubMed, and IEEE databases. Some synonyms of these phrases and the logical words “and” “or” and “not” were also used in the article searching procedure. Out of the 91 published articles found, this review identified 84 articles that described the rehabilitation of gait disorders using different types of sensor technologies. This literature set presented strong evidence for the development of rehabilitation systems using a markerless vision-based sensor technology. We therefore believe that the information contained in this review paper will assist the progress of the development of rehabilitation systems for human gait disorders. PMID:22938548
NASA Astrophysics Data System (ADS)
Thakore, B.; Frierson, T.; Coderre, K.; Lukaszczyk, A.; Karl, A.
2009-04-01
This paper outlines the response of students and young space professionals on the occasion of the 50th Anniversary of the first artificial satellite and the 40th anniversary of the Outer Space Treaty. The contribution has been coordinated by the Space Generation Advisory Council (SGAC) in support of the United Nations Programme on Space Applications. It follows consultation of the SGAC community through a series of meetings, online discussions and online surveys. The first two online surveys collected over 750 different visions from the international community, totaling approximately 276 youth from over 28 countries and builds on previous SGAC policy contributions. A summary of these results was presented as the top 10 visions of today's youth as an invited input to world space leaders gathered at the Symposium on "The future of space exploration: Solutions to earthly problems" held in Boston, USA from April 12-14 2007 and at the United Nations Committee on the Peaceful Uses of Outer Space in May 2007. These key visions suggested the enhancement for humanity's reach beyond this planet - both physically and intellectual. These key visions were themed into three main categories: • Improvement of Human Survival Probability - sustained exploration to become a multi- planet species, humans to Mars, new treaty structures to ensure a secure space environment, etc • Improvement of Human Quality of Life and the Environment - new political systems or astrocracy, benefits of tele-medicine, tele-education, and commercialization of space, new energy and resources: space solar power, etc. • Improvement of Human Knowledge and Understanding - complete survey of extinct and extant life forms, use of space data for advanced environmental monitoring, etc. This paper will summarize the outcomes from a further online survey and represent key recommendations given by international youth advocates on further steps that could be taken by space agencies and organizations to make the top 10 visions a reality. In turn the online discussions that are used to engage the youth audience would be recorded and would help to reflect the confidence of the younger generation in these visions. The categories listed above would also be investigated further from the technology, policy and ethical aspects. Recent activities in development to further disseminate the necessary connections between using of space technology for solving global challenges is discussed.
NASA Astrophysics Data System (ADS)
Liu, Siqi; Wei, Wei; Bai, Zhiyi; Wang, Xichang; Li, Xiaohong; Wang, Chuanxian; Liu, Xia; Liu, Yuan; Xu, Changhua
2018-01-01
Pearl powder, an important raw material in cosmetics and Chinese patent medicines, is commonly uneven in quality and frequently adulterated with low-cost shell powder in the market. The aim of this study is to establish an adequate approach based on Tri-step infrared spectroscopy with enhancing resolution combined with chemometrics for qualitative identification of pearl powder originated from three different quality grades of pearls and quantitative prediction of the proportions of shell powder adulterated in pearl powder. Additionally, computer vision technology (E-eyes) can investigate the color difference among different pearl powders and make it traceable to the pearl quality trait-visual color categories. Though the different grades of pearl powder or adulterated pearl powder have almost identical IR spectra, SD-IR peak intensity at about 861 cm- 1 (v2 band) exhibited regular enhancement with the increasing quality grade of pearls, while the 1082 cm- 1 (v1 band), 712 cm- 1 and 699 cm- 1 (v4 band) were just the reverse. Contrastly, only the peak intensity at 862 cm- 1 was enhanced regularly with the increasing concentration of shell powder. Thus, the bands in the ranges of (1550-1350 cm- 1, 730-680 cm- 1) and (830-880 cm- 1, 690-725 cm- 1) could be exclusive ranges to discriminate three distinct pearl powders and identify adulteration, respectively. For massive sample analysis, a qualitative classification model and a quantitative prediction model based on IR spectra was established successfully by principal component analysis (PCA) and partial least squares (PLS), respectively. The developed method demonstrated great potential for pearl powder quality control and authenticity identification in a direct, holistic manner.
Infrared sensors and systems for enhanced vision/autonomous landing applications
NASA Technical Reports Server (NTRS)
Kerr, J. Richard
1993-01-01
There exists a large body of data spanning more than two decades, regarding the ability of infrared imagers to 'see' through fog, i.e., in Category III weather conditions. Much of this data is anecdotal, highly specialized, and/or proprietary. In order to determine the efficacy and cost effectiveness of these sensors under a variety of climatic/weather conditions, there is a need for systematic data spanning a significant range of slant-path scenarios. These data should include simultaneous video recordings at visible, midwave (3-5 microns), and longwave (8-12 microns) wavelengths, with airborne weather pods that include the capability of determining the fog droplet size distributions. Existing data tend to show that infrared is more effective than would be expected from analysis and modeling. It is particularly more effective for inland (radiation) fog as compared to coastal (advection) fog, although both of these archetypes are oversimplifications. In addition, as would be expected from droplet size vs wavelength considerations, longwave outperforms midwave, in many cases by very substantial margins. Longwave also benefits from the higher level of available thermal energy at ambient temperatures. The principal attraction of midwave sensors is that staring focal plane technology is available at attractive cost-performance levels. However, longwave technology such as that developed at FLIR Systems, Inc. (FSI), has achieved high performance in small, economical, reliable imagers utilizing serial-parallel scanning techniques. In addition, FSI has developed dual-waveband systems particularly suited for enhanced vision flight testing. These systems include a substantial, embedded processing capability which can perform video-rate image enhancement and multisensor fusion. This is achieved with proprietary algorithms and includes such operations as real-time histograms, convolutions, and fast Fourier transforms.
Liu, Siqi; Wei, Wei; Bai, Zhiyi; Wang, Xichang; Li, Xiaohong; Wang, Chuanxian; Liu, Xia; Liu, Yuan; Xu, Changhua
2018-01-15
Pearl powder, an important raw material in cosmetics and Chinese patent medicines, is commonly uneven in quality and frequently adulterated with low-cost shell powder in the market. The aim of this study is to establish an adequate approach based on Tri-step infrared spectroscopy with enhancing resolution combined with chemometrics for qualitative identification of pearl powder originated from three different quality grades of pearls and quantitative prediction of the proportions of shell powder adulterated in pearl powder. Additionally, computer vision technology (E-eyes) can investigate the color difference among different pearl powders and make it traceable to the pearl quality trait-visual color categories. Though the different grades of pearl powder or adulterated pearl powder have almost identical IR spectra, SD-IR peak intensity at about 861cm -1 (v 2 band) exhibited regular enhancement with the increasing quality grade of pearls, while the 1082cm -1 (v 1 band), 712cm -1 and 699cm -1 (v 4 band) were just the reverse. Contrastly, only the peak intensity at 862cm -1 was enhanced regularly with the increasing concentration of shell powder. Thus, the bands in the ranges of (1550-1350cm -1 , 730-680cm -1 ) and (830-880cm -1 , 690-725cm -1 ) could be exclusive ranges to discriminate three distinct pearl powders and identify adulteration, respectively. For massive sample analysis, a qualitative classification model and a quantitative prediction model based on IR spectra was established successfully by principal component analysis (PCA) and partial least squares (PLS), respectively. The developed method demonstrated great potential for pearl powder quality control and authenticity identification in a direct, holistic manner. Copyright © 2017. Published by Elsevier B.V.
NASA Technical Reports Server (NTRS)
Crouch, Roger
2004-01-01
Viewgraphs on NASA's transition to its vision for space exploration is presented. The topics include: 1) Strategic Directives Guiding the Human Support Technology Program; 2) Progressive Capabilities; 3) A Journey to Inspire, Innovate, and Discover; 4) Risk Mitigation Status Technology Readiness Level (TRL) and Countermeasures Readiness Level (CRL); 5) Biological And Physical Research Enterprise Aligning With The Vision For U.S. Space Exploration; 6) Critical Path Roadmap Reference Missions; 7) Rating Risks; 8) Current Critical Path Roadmap (Draft) Rating Risks: Human Health; 9) Current Critical Path Roadmap (Draft) Rating Risks: System Performance/Efficiency; 10) Biological And Physical Research Enterprise Efforts to Align With Vision For U.S. Space Exploration; 11) Aligning with the Vision: Exploration Research Areas of Emphasis; 12) Code U Efforts To Align With The Vision For U.S. Space Exploration; 13) Types of Critical Path Roadmap Risks; and 14) ISS Human Support Systems Research, Development, and Demonstration. A summary discussing the vision for U.S. space exploration is also provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zheng; Ukida, H.; Ramuhalli, Pradeep
2010-06-05
Imaging- and vision-based techniques play an important role in industrial inspection. The sophistication of the techniques assures high- quality performance of the manufacturing process through precise positioning, online monitoring, and real-time classification. Advanced systems incorporating multiple imaging and/or vision modalities provide robust solutions to complex situations and problems in industrial applications. A diverse range of industries, including aerospace, automotive, electronics, pharmaceutical, biomedical, semiconductor, and food/beverage, etc., have benefited from recent advances in multi-modal imaging, data fusion, and computer vision technologies. Many of the open problems in this context are in the general area of image analysis methodologies (preferably in anmore » automated fashion). This editorial article introduces a special issue of this journal highlighting recent advances and demonstrating the successful applications of integrated imaging and vision technologies in industrial inspection.« less
Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE
NASA Technical Reports Server (NTRS)
Cooper, Eric G.; Young, Steven D.
2005-01-01
In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.
NASA Astrophysics Data System (ADS)
Moore, Linda A.; Ferreira, Jannie T.
2003-03-01
Sports vision encompasses the visual assessment and provision of sports-specific visual performance enhancement and ocular protection for athletes of all ages, genders and levels of participation. In recent years, sports vision has been identified as one of the key performance indicators in sport. It is built on four main cornerstones: corrective eyewear, protective eyewear, visual skills enhancement and performance enhancement. Although clinically well established in the US, it is still a relatively new area of optometric specialisation elsewhere in the world and is gaining increasing popularity with eyecare practitioners and researchers. This research is often multi-disciplinary and involves input from a variety of subject disciplines, mainly those of optometry, medicine, physiology, psychology, physics, chemistry, computer science and engineering. Collaborative research projects are currently underway between staff of the Schools of Physics and Computing (DIT) and the Academy of Sports Vision (RAU).
A Vision for Ice Giant Exploration
NASA Technical Reports Server (NTRS)
Hofstadter, M.; Simon, A.; Atreya, S.; Banfield, D.; Fortney, J.; Hayes, A.; Hedman, M.; Hospodarsky, G.; Mandt, K.; Masters, A.;
2017-01-01
From Voyager to a Vision for 2050: NASA and ESA have just completed a study of candidate missionsto Uranus and Neptune, the so-called ice giant planets. It is a Pre-Decadal Survey Study, meant to inform the next Planetary Science Decadal Survey about opportunities for missions launching in the 2020's and early 2030's. There have been no space flight missions to the ice giants since the Voyager 2 flybys of Uranus in 1986 and Neptune in 1989. This paper presents some conclusions of that study (hereafter referred to as The Study), and how the results feed into a vision for where planetary science can be in 2050. Reaching that vision will require investments in technology andground-based science in the 2020's, flight during the 2030's along with continued technological development of both ground- and space-based capabilities, and data analysis and additional flights in the 2040's. We first discuss why exploring the ice giants is important. We then summarize the science objectives identified by The Study, and our vision of the science goals for 2050. We then review some of the technologies needed to make this vision a reality.
Rollo, Megan E; Aguiar, Elroy J; Williams, Rebecca L; Wynne, Katie; Kriss, Michelle; Callister, Robin; Collins, Clare E
2016-01-01
Diabetes is a chronic, complex condition requiring sound knowledge and self-management skills to optimize glycemic control and health outcomes. Dietary intake and physical activity are key diabetes self-management (DSM) behaviors that require tailored education and support. Electronic health (eHealth) technologies have a demonstrated potential for assisting individuals with DSM behaviors. This review provides examples of technologies used to support nutrition and physical activity behaviors in the context of DSM. Technologies covered include those widely used for DSM, such as web-based programs and mobile phone and smartphone applications. In addition, examples of novel tools such as virtual and augmented reality, video games, computer vision for dietary carbohydrate monitoring, and wearable devices are provided. The challenges to, and facilitators for, the use of eHealth technologies in DSM are discussed. Strategies to support the implementation of eHealth technologies within practice and suggestions for future research to enhance nutrition and physical activity behaviors as a part of broader DSM are provided.
Rollo, Megan E; Aguiar, Elroy J; Williams, Rebecca L; Wynne, Katie; Kriss, Michelle; Callister, Robin; Collins, Clare E
2016-01-01
Diabetes is a chronic, complex condition requiring sound knowledge and self-management skills to optimize glycemic control and health outcomes. Dietary intake and physical activity are key diabetes self-management (DSM) behaviors that require tailored education and support. Electronic health (eHealth) technologies have a demonstrated potential for assisting individuals with DSM behaviors. This review provides examples of technologies used to support nutrition and physical activity behaviors in the context of DSM. Technologies covered include those widely used for DSM, such as web-based programs and mobile phone and smartphone applications. In addition, examples of novel tools such as virtual and augmented reality, video games, computer vision for dietary carbohydrate monitoring, and wearable devices are provided. The challenges to, and facilitators for, the use of eHealth technologies in DSM are discussed. Strategies to support the implementation of eHealth technologies within practice and suggestions for future research to enhance nutrition and physical activity behaviors as a part of broader DSM are provided. PMID:27853384
Using advanced computer vision algorithms on small mobile robots
NASA Astrophysics Data System (ADS)
Kogut, G.; Birchmore, F.; Biagtan Pacis, E.; Everett, H. R.
2006-05-01
The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: 1) object classification using a boosted Cascade of classifiers trained with the Adaboost training algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an Adaboost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real-time. While working towards a solution to increase the robustness of this system to perform generic object recognition, this paper demonstrates an extension to this application by detecting soda cans in a cluttered indoor environment. The human presence detection from a moving platform system uses a data fusion algorithm which combines results from a scanning laser and a thermal imager. The system is able to detect the presence of humans while both the humans and the robot are moving simultaneously. In both systems, the two aforementioned algorithms were implemented on embedded hardware and optimized for use in real-time. Test results are shown for a variety of environments.
Buildings of the Future Scoping Study: A Framework for Vision Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Na; Goins, John D.
2015-02-01
The Buildings of the Future Scoping Study, funded by the U.S. Department of Energy (DOE) Building Technologies Office, seeks to develop a vision for what U.S. mainstream commercial and residential buildings could become in 100 years. This effort is not intended to predict the future or develop a specific building design solution. Rather, it will explore future building attributes and offer possible pathways of future development. Whether we achieve a more sustainable built environment depends not just on technologies themselves, but on how effectively we envision the future and integrate these technologies in a balanced way that generates economic, social,more » and environmental value. A clear, compelling vision of future buildings will attract the right strategies, inspire innovation, and motivate action. This project will create a cross-disciplinary forum of thought leaders to share their views. The collective views will be integrated into a future building vision and published in September 2015. This report presents a research framework for the vision development effort based on a literature survey and gap analysis. This document has four objectives. First, it defines the project scope. Next, it identifies gaps in the existing visions and goals for buildings and discusses the possible reasons why some visions did not work out as hoped. Third, it proposes a framework to address those gaps in the vision development. Finally, it presents a plan for a series of panel discussions and interviews to explore a vision that mitigates problems with past building paradigms while addressing key areas that will affect buildings going forward.« less
Using an Augmented Reality Device as a Distance-based Vision Aid-Promise and Limitations.
Kinateder, Max; Gualtieri, Justin; Dunn, Matt J; Jarosz, Wojciech; Yang, Xing-Dong; Cooper, Emily A
2018-06-06
For people with limited vision, wearable displays hold the potential to digitally enhance visual function. As these display technologies advance, it is important to understand their promise and limitations as vision aids. The aim of this study was to test the potential of a consumer augmented reality (AR) device for improving the functional vision of people with near-complete vision loss. An AR application that translates spatial information into high-contrast visual patterns was developed. Two experiments assessed the efficacy of the application to improve vision: an exploratory study with four visually impaired participants and a main controlled study with participants with simulated vision loss (n = 48). In both studies, performance was tested on a range of visual tasks (identifying the location, pose and gesture of a person, identifying objects, and moving around in an unfamiliar space). Participants' accuracy and confidence were compared on these tasks with and without augmented vision, as well as their subjective responses about ease of mobility. In the main study, the AR application was associated with substantially improved accuracy and confidence in object recognition (all P < .001) and to a lesser degree in gesture recognition (P < .05). There was no significant change in performance on identifying body poses or in subjective assessments of mobility, as compared with a control group. Consumer AR devices may soon be able to support applications that improve the functional vision of users for some tasks. In our study, both artificially impaired participants and participants with near-complete vision loss performed tasks that they could not do without the AR system. Current limitations in system performance and form factor, as well as the risk of overconfidence, will need to be overcome.This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.
NASA Technical Reports Server (NTRS)
Arthur, Jarvis J., III; Prinzel, Lawrence J., III; Shelton, Kevin J.; Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Norman, Robert M.
2007-01-01
Experiments and flight tests have shown that a Head-Up Display (HUD) and a head-down, electronic moving map (EMM) can be enhanced with Synthetic Vision for airport surface operations. While great success in ground operations was demonstrated with a HUD, the research noted that two major HUD limitations during ground operations were their monochrome form and limited, fixed field of regard. A potential solution to these limitations found with HUDs may be emerging Head Worn Displays (HWDs). HWDs are small, lightweight full color display devices that may be worn without significant encumbrance to the user. By coupling the HWD with a head tracker, unlimited field-of-regard may be realized for commercial aviation applications. In the proposed paper, the results of two ground simulation experiments conducted at NASA Langley are summarized. The experiments evaluated the efficacy of head-worn display applications of Synthetic Vision and Enhanced Vision technology to enhance transport aircraft surface operations. The two studies tested a combined six display concepts: (1) paper charts with existing cockpit displays, (2) baseline consisting of existing cockpit displays including a Class III electronic flight bag display of the airport surface; (3) an advanced baseline that also included displayed traffic and routing information, (4) a modified version of a HUD and EMM display demonstrated in previous research; (5) an unlimited field-of-regard, full color, head-tracked HWD with a conformal 3-D synthetic vision surface view; and (6) a fully integrated HWD concept. The fully integrated HWD concept is a head-tracked, color, unlimited field-of-regard concept that provides a 3-D conformal synthetic view of the airport surface integrated with advanced taxi route clearance, taxi precision guidance, and data-link capability. The results of the experiments showed that the fully integrated HWD provided greater path performance compared to using paper charts alone. Further, when comparing the HWD with the HUD concept, there were no differences in path performance. In addition, the HWD and HUD concepts were rated via paired-comparisons the same in terms of situational awareness and workload. However, there were over twice as many taxi incursion events with the HUD than the HWD.
Vision Science and Technology at NASA: Results of a Workshop
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Editor); Mulligan, Jeffrey B. (Editor)
1990-01-01
A broad review is given of vision science and technology within NASA. The subject is defined and its applications in both NASA and the nation at large are noted. A survey of current NASA efforts is given, noting strengths and weaknesses of the NASA program.
NASA Astrophysics Data System (ADS)
Zamora Ramos, Ernesto
Artificial Intelligence is a big part of automation and with today's technological advances, artificial intelligence has taken great strides towards positioning itself as the technology of the future to control, enhance and perfect automation. Computer vision includes pattern recognition and classification and machine learning. Computer vision is at the core of decision making and it is a vast and fruitful branch of artificial intelligence. In this work, we expose novel algorithms and techniques built upon existing technologies to improve pattern recognition and neural network training, initially motivated by a multidisciplinary effort to build a robot that helps maintain and optimize solar panel energy production. Our contributions detail an improved non-linear pre-processing technique to enhance poorly illuminated images based on modifications to the standard histogram equalization for an image. While the original motivation was to improve nocturnal navigation, the results have applications in surveillance, search and rescue, medical imaging enhancing, and many others. We created a vision system for precise camera distance positioning motivated to correctly locate the robot for capture of solar panel images for classification. The classification algorithm marks solar panels as clean or dirty for later processing. Our algorithm extends past image classification and, based on historical and experimental data, it identifies the optimal moment in which to perform maintenance on marked solar panels as to minimize the energy and profit loss. In order to improve upon the classification algorithm, we delved into feedforward neural networks because of their recent advancements, proven universal approximation and classification capabilities, and excellent recognition rates. We explore state-of-the-art neural network training techniques offering pointers and insights, culminating on the implementation of a complete library with support for modern deep learning architectures, multilayer percepterons and convolutional neural networks. Our research with neural networks has encountered a great deal of difficulties regarding hyperparameter estimation for good training convergence rate and accuracy. Most hyperparameters, including architecture, learning rate, regularization, trainable parameters (or weights) initialization, and so on, are chosen via a trial and error process with some educated guesses. However, we developed the first quantitative method to compare weight initialization strategies, a critical hyperparameter choice during training, to estimate among a group of candidate strategies which would make the network converge to the highest classification accuracy faster with high probability. Our method provides a quick, objective measure to compare initialization strategies to select the best possible among them beforehand without having to complete multiple training sessions for each candidate strategy to compare final results.
77 FR 21861 - Special Conditions: Boeing, Model 777F; Enhanced Flight Vision System
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-12
... System AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final special conditions; request for... with an advanced, enhanced flight vision system (EFVS). The EFVS consists of a head-up display (HUD) system modified to display forward-looking infrared (FLIR) imagery. The applicable airworthiness...
Clinical Verification of Image Warping as a Potential Aid for the Visually Handicapped
NASA Technical Reports Server (NTRS)
Loshin, David
1996-01-01
The bulk of this research was to designed determine potential of the Programmable Remapper (PR) as a device to enhance vision for the visually handicapped. This research indicated that remapping would have potential as a low vision device if the eye position could be monitored with feedback to specify the proper location of the remapped image. This must be accomplished at high rate so that there is no lag of the image behind the eye position. Since at this time, there is no portable eye monitor device (at a reasonable cost) that will operate under the required conditions, it would not be feasible to continue with remapping experiments for patients with central field defects. However, since patients with peripheral field defects do not have the same eye positioning requirements, they may indeed benefit from this technology. Further investigations must be performed to determine plausibility of this application of remapping.
Spaceborne GPS Current Status and Future Visions
NASA Technical Reports Server (NTRS)
Bauer, Frank H.; Hartman, Kate; Lightsey, E. Glenn
1998-01-01
The Global Positioning System (GPS), developed by the Department of Defense, is quickly revolutionizing the architecture of future spacecraft and spacecraft systems. Significant savings in spacecraft life cycle cost, in power, and in mass can be realized by exploiting Global Positioning System (GPS) technology in spaceborne vehicles. These savings are realized because GPS is a systems sensor-it combines the ability to sense space vehicle trajectory, attitude, time, and relative ranging between vehicles into one package. As a result, a reduced spacecraft sensor complement can be employed on spacecraft and significant reductions in space vehicle operations cost can be realized through enhanced on- board autonomy. This paper provides an overview of the current status of spaceborne GPS, a description of spaceborne GPS receivers available now and in the near future, a description of the 1997-1999 GPS flight experiments and the spaceborne GPS team's vision for the future.
Liquid lens: advances in adaptive optics
NASA Astrophysics Data System (ADS)
Casey, Shawn Patrick
2010-12-01
'Liquid lens' technologies promise significant advancements in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications are used to exemplify the versatile nature of this technology. Utilization of liquid lens elements allows the cost effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.
NASA Technical Reports Server (NTRS)
1996-01-01
PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.
Dynamical Systems and Motion Vision.
1988-04-01
TASK Artificial Inteligence Laboratory AREA I WORK UNIT NUMBERS 545 Technology Square . Cambridge, MA 02139 C\\ II. CONTROLLING OFFICE NAME ANO0 ADDRESS...INSTITUTE OF TECHNOLOGY ARTIFICIAL INTELLIGENCE LABORATORY A.I.Memo No. 1037 April, 1988 Dynamical Systems and Motion Vision Joachim Heel Abstract: In this... Artificial Intelligence L3 Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory’s [1 Artificial Intelligence Research is
Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A
2017-07-25
Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typically scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show relevant imaging characteristics and also show the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale model such as those used for wind tunnel testing or measurements in confined spaces with limited optical access.
Spinosa, Emanuele; Roberts, David A.
2017-01-01
Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typically scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show relevant imaging characteristics and also show the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale model such as those used for wind tunnel testing or measurements in confined spaces with limited optical access. PMID:28757553
In Situ Fabrication Technologies: Meeting the Challenge for Exploration
NASA Technical Reports Server (NTRS)
Howard, Richard W.
2005-01-01
A viewgraph presentation on Lunar and Martian in situ fabrication technologies meeting the challenges for exploration is shown. The topics include: 1) Exploration Vision; 2) Vision Requirements Early in the Program; 3) Vision Requirements Today; 4) Why is ISFR Technology Needed? 5) ISFR and In Situ Resource Utilization (ISRU); 6) Fabrication Feedstock Considerations; 7) Planetary Resource Primer; 8) Average Chemical Element Abundances in Lunar Soil; 9) Chemical Elements in Aerospace Engineering Materials; 10) Schematic of Raw Regolith Processing into Constituent Components; 11) Iron, Aluminum, and Basalt Processing from Separated Elements and Compounds; 12) Space Power Systems; 13) Power Source Applicability; 14) Fabrication Systems Technologies; 15) Repair and Nondestructive Evaluation (NDE); and 16) Habitat Structures. A development overview of Lunar and Martian repair and nondestructive evaluation is also presented.
Emotion improves and impairs early vision.
Bocanegra, Bruno R; Zeelenberg, René
2009-06-01
Recent studies indicate that emotion enhances early vision, but the generality of this finding remains unknown. Do the benefits of emotion extend to all basic aspects of vision, or are they limited in scope? Our results show that the brief presentation of a fearful face, compared with a neutral face, enhances sensitivity for the orientation of subsequently presented low-spatial-frequency stimuli, but diminishes orientation sensitivity for high-spatial-frequency stimuli. This is the first demonstration that emotion not only improves but also impairs low-level vision. The selective low-spatial-frequency benefits are consistent with the idea that emotion enhances magnocellular processing. Additionally, we suggest that the high-spatial-frequency deficits are due to inhibitory interactions between magnocellular and parvocellular pathways. Our results suggest an emotion-induced trade-off in visual processing, rather than a general improvement. This trade-off may benefit perceptual dimensions that are relevant for survival at the expense of those that are less relevant.
Lin, Sue C; Gold, Robert S
2018-01-01
Assistive technology (AT) enhances the ability of individuals with disabilities to be fully engaged in activities at home, at school, and within their communities-especially for children with developmental disabilities (DD) with physical, sensory, learning, and/or communication impairments. The prevalence of children with DD in the United States has risen from 12.84% in 1997 to 15.04% in 2008. Thus, it is important to monitor the status of their AT needs, functional difficulties, services utilization, and coordination. Using data from the 2009-2010 National Survey on Children with Special Health Care Needs (NS-CSHCN), we conducted bivariate and multivariate statistical analysis, which found that 90% or more of parents of both children with DD and other CSHCN reported that their child's AT needs were met for vision, hearing, mobility, communication, and durable medical equipment; furthermore, children with DD had lower odds of AT needs met for vision and hearing and increased odds for meeting AT needs in mobility and communication. Our findings outline the current AT needs of children with DD nationally. Fulfilling these needs has the potential to engender positive lifelong effects on the child's disabilities, sense of independence, self-confidence, and productivity.
Navigation of military and space unmanned ground vehicles in unstructured terrains
NASA Technical Reports Server (NTRS)
Lescoe, Paul; Lavery, David; Bedard, Roger
1991-01-01
Development of unmanned vehicles for local navigation in terrains unstructured by humans is reviewed. Modes of navigation include teleoperation or remote control, computer assisted remote driving (CARD), and semiautonomous navigation (SAN). A first implementation of a CARD system was successfully tested using the Robotic Technology Test Vehicle developed by Jet Propulsion Laboratory. Stereo pictures were transmitted to a remotely located human operator, who performed the sensing, perception, and planning functions of navigation. A computer provided range and angle measurements and the path plan was transmitted to the vehicle which autonomously executed the path. This implementation is to be enhanced by providing passive stereo vision and a reflex control system for autonomously stopping the vehicle if blocked by an obstacle. SAN achievements include implementation of a navigation testbed on a six wheel, three-body articulated rover vehicle, development of SAN algorithms and code, integration of SAN software onto the vehicle, and a successful feasibility demonstration that represents a step forward towards the technology required for long-range exploration of the lunar or Martian surface. The vehicle includes a passive stereo vision system with real-time area-based stereo image correlation, a terrain matcher, a path planner, and a path execution planner.
Sensor Characteristics Reference Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cree, Johnathan V.; Dansu, A.; Fuhr, P.
The Buildings Technologies Office (BTO), within the U.S. Department of Energy (DOE), Office of Energy Efficiency and Renewable Energy (EERE), is initiating a new program in Sensor and Controls. The vision of this program is: • Buildings operating automatically and continuously at peak energy efficiency over their lifetimes and interoperating effectively with the electric power grid. • Buildings that are self-configuring, self-commissioning, self-learning, self-diagnosing, self-healing, and self-transacting to enable continuous peak performance. • Lower overall building operating costs and higher asset valuation. The overarching goal is to capture 30% energy savings by enhanced management of energy consuming assets and systemsmore » through development of cost-effective sensors and controls. One step in achieving this vision is the publication of this Sensor Characteristics Reference Guide. The purpose of the guide is to inform building owners and operators of the current status, capabilities, and limitations of sensor technologies. It is hoped that this guide will aid in the design and procurement process and result in successful implementation of building sensor and control systems. DOE will also use this guide to identify research priorities, develop future specifications for potential market adoption, and provide market clarity through unbiased information« less
Air and Water System (AWS) Design and Technology Selection for the Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Jones, Harry; Kliss, Mark
2005-01-01
This paper considers technology selection for the crew air and water recycling systems to be used in long duration human space exploration. The specific objectives are to identify the most probable air and water technologies for the vision for space exploration and to identify the alternate technologies that might be developed. The approach is to conduct a preliminary first cut systems engineering analysis, beginning with the Air and Water System (AWS) requirements and the system mass balance, and then define the functional architecture, review the International Space Station (ISS) technologies, and discuss alternate technologies. The life support requirements for air and water are well known. The results of the mass flow and mass balance analysis help define the system architectural concept. The AWS includes five subsystems: Oxygen Supply, Condensate Purification, Urine Purification, Hygiene Water Purification, and Clothes Wash Purification. AWS technologies have been evaluated in the life support design for ISS node 3, and in earlier space station design studies, in proposals for the upgrade or evolution of the space station, and in studies of potential lunar or Mars missions. The leading candidate technologies for the vision for space exploration are those planned for Node 3 of the ISS. The ISS life support was designed to utilize Space Station Freedom (SSF) hardware to the maximum extent possible. The SSF final technology selection process, criteria, and results are discussed. Would it be cost-effective for the vision for space exploration to develop alternate technology? This paper will examine this and other questions associated with AWS design and technology selection.
3D display considerations for rugged airborne environments
NASA Astrophysics Data System (ADS)
Barnidge, Tracy J.; Tchon, Joseph L.
2015-05-01
The KC-46 is the next generation, multi-role, aerial refueling tanker aircraft being developed by Boeing for the United States Air Force. Rockwell Collins has developed the Remote Vision System (RVS) that supports aerial refueling operations under a variety of conditions. The system utilizes large-area, high-resolution 3D displays linked with remote sensors to enhance the operator's visual acuity for precise aerial refueling control. This paper reviews the design considerations, trade-offs, and other factors related to the selection and ruggedization of the 3D display technology for this military application.
Machine vision for digital microfluidics
NASA Astrophysics Data System (ADS)
Shin, Yong-Jun; Lee, Jeong-Bong
2010-01-01
Machine vision is widely used in an industrial environment today. It can perform various tasks, such as inspecting and controlling production processes, that may require humanlike intelligence. The importance of imaging technology for biological research or medical diagnosis is greater than ever. For example, fluorescent reporter imaging enables scientists to study the dynamics of gene networks with high spatial and temporal resolution. Such high-throughput imaging is increasingly demanding the use of machine vision for real-time analysis and control. Digital microfluidics is a relatively new technology with expectations of becoming a true lab-on-a-chip platform. Utilizing digital microfluidics, only small amounts of biological samples are required and the experimental procedures can be automatically controlled. There is a strong need for the development of a digital microfluidics system integrated with machine vision for innovative biological research today. In this paper, we show how machine vision can be applied to digital microfluidics by demonstrating two applications: machine vision-based measurement of the kinetics of biomolecular interactions and machine vision-based droplet motion control. It is expected that digital microfluidics-based machine vision system will add intelligence and automation to high-throughput biological imaging in the future.
Kim, Hyung Nam
2017-10-16
Twenty-five years after the Americans with Disabilities Act, there has still been a lack of advancement of accessibility in healthcare for people with visual impairments, particularly older adults with low vision. This study aims to advance understanding of how older adults with low vision obtain, process, and use health information and services, and to seek opportunities of information technology to support them. A convenience sample of 10 older adults with low vision participated in semi-structured phone interviews, which were audio-recorded and transcribed verbatim for analysis. Participants shared various concerns in accessing, understanding, and using health information, care services, and multimedia technologies. Two main themes and nine subthemes emerged from the analysis. Due to the concerns, older adults with low vision tended to fail to obtain the full range of all health information and services to meet their specific needs. Those with low vision still rely on residual vision such that multimedia-based information which can be useful, but it should still be designed to ensure its accessibility, usability, and understandability.
Cheng, Yi-Yu; Qu, Hai-Bin; Zhang, Bo-Li
2016-01-01
A perspective analysis on the technological innovation in pharmaceutical engineering of Chinese medicine unveils a vision on "Future Factory" of Chinese medicine industry in mind. The strategy as well as the technical roadmap of "Chinese medicine industry 4.0" is proposed, with the projection of related core technology system. It is clarified that the technical development path of Chinese medicine industry from digital manufacture to intelligent manufacture. On the basis of precisely defining technical terms such as process control, on-line detection and process quality monitoring for Chinese medicine manufacture, the technical concepts and characteristics of intelligent pharmaceutical manufacture as well as digital pharmaceutical manufacture are elaborated. Promoting wide applications of digital manufacturing technology of Chinese medicine is strongly recommended. Through completely informationized manufacturing processes and multi-discipline cluster innovation, intelligent manufacturing technology of Chinese medicine should be developed, which would provide a new driving force for Chinese medicine industry in technology upgrade, product quality enhancement and efficiency improvement. Copyright© by the Chinese Pharmaceutical Association.
Toxicity of materials used in the manufacture of lithium batteries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archuleta, M.M.
1994-05-01
The growing interest in battery systems has led to major advances in high-energy and/or high-power-density lithium batteries. Potential applications for lithium batteries include radio transceivers, portable electronic instrumentation, emergency locator transmitters, night vision devices, human implantable devices, as well as uses in the aerospace and defense programs. With this new technology comes the use of new solvent and electrolyte systems in the research, development, and production of lithium batteries. The goal is to enhance lithium battery technology with the use of non-hazardous materials. Therefore, the toxicity and health hazards associated with exposure to the solvents and electrolytes used in currentmore » lithium battery research and development is evaluated and described.« less
Westerling, Anna M; Haikala, Veikko; Airaksinen, Marja
2011-12-01
Community pharmacy's strategic vision has been to extend practice responsibilities beyond dispensing and provide patient care services. Few studies have evaluated the strategic and long-term development of information technology (IT) systems to support this vision. The objective of this study was to explore international experts' visions and strategic views on IT development needs in relation to service provision in community pharmacies. Semistructured interviews were conducted with a purposive sample of 14 experts from 8 countries in 2007-2008. These experts had expertise in the development of community pharmacy services and IT. The interviews were content analyzed using a constant comparison approach and a SWOT (strengths, weaknesses, opportunities, threats) analysis was undertaken. Most of the experts shared the vision for community pharmacy adopting a patient care orientation; supported by IT-based documentation, new technological solutions, access to information, and shared patient data. Opportunities to achieve this vision included IT solutions, professional skills, and interprofessional collaboration. Threats included costs, pharmacists' attitude, and the absence of IT solutions. Those responsible for IT development in community pharmacy sector should create long-term IT development strategies that are in line with community pharmacy service development strategies. Copyright © 2011 Elsevier Inc. All rights reserved.
Gearing up to the factory of the future
NASA Astrophysics Data System (ADS)
Godfrey, D. E.
1985-01-01
The features of factories and manufacturing techniques and tools of the near future are discussed. The spur to incorporate new technologies on the factory floor will originate in management, who must guide the interfacing of computer-enhanced equipment with traditional manpower, materials and machines. Electronic control with responsiveness and flexibility will be the key concept in an integrated approach to processing materials. Microprocessor controlled laser and fluid cutters add accuracy to cutting operations. Unattended operation will become feasible when automated inspection is added to a work station through developments in robot vision. Optimum shop management will be achieved through AI programming of parts manufacturing, optimized work flows, and cost accounting. The automation enhancements will allow designers to affect directly parts being produced on the factory floor.
Vision 2015: The West Virginia Science and Technology Strategic Plan. Progress Report
ERIC Educational Resources Information Center
West Virginia Higher Education Policy Commission, 2014
2014-01-01
In 2005, West Virginia science and education leaders developed a strategic plan entitled: "Vision 2015: The West Virginia Science and Technology Strategic Plan." The plan is comprised of five (5) target areas for infrastructure development, with 14 goals for action by designated leaders from higher education, state government, and…
Employment after Vision Loss: Results of a Collective Case Study.
ERIC Educational Resources Information Center
Crudden, Adele
2002-01-01
A collective case study approach was used to examine factors that influence the job retention of persons with vision loss. Computer technology was found to be a major positive influence and print access and technology were a source of stress for most participants (n=10). (Contains 7 references.) (Author/CR)
Plebani, Mario
2018-05-24
In the last few decades, laboratory medicine has undergone monumental changes, and laboratory technology, which has made enormous advances, now has new clinical applications thanks to the identification of a growing number of biomarkers and risk factors conducive to the promotion of predictive and preventive interventions that have enhanced the role of laboratory medicine in health care delivering. However, the paradigm shift in the past 50 years has led to a gap between laboratory and clinic, with an increased risk of inappropriateness in test request and interpretation, as well as the consolidation of analytical work in focused factories and megastructurers oriented only toward achieving greater volumes, decreasing cost per test and generating a vision of laboratory services as simple commodities. A careful historical revision of the changing models for delivering laboratory services in the United States leads to the prediction that there are several reasons for counteracting the vision of clinical laboratory as a commodity, and restoring the true nature of laboratory services as an integral part of the diagnosis and therapy process. The present study, which reports on internal and external drivers for change, proposes an integrated vision of quality in laboratory medicine.
Adaptive Optics for the Human Eye
NASA Astrophysics Data System (ADS)
Williams, D. R.
2000-05-01
Adaptive optics can extend not only the resolution of ground-based telescopes, but also the human eye. Both static and dynamic aberrations in the cornea and lens of the normal eye limit its optical quality. Though it is possible to correct defocus and astigmatism with spectacle lenses, higher order aberrations remain. These aberrations blur vision and prevent us from seeing at the fundamental limits set by the retina and brain. They also limit the resolution of cameras to image the living retina, cameras that are a critical for the diagnosis and treatment of retinal disease. I will describe an adaptive optics system that measures the wave aberration of the eye in real time and compensates for it with a deformable mirror, endowing the human eye with unprecedented optical quality. This instrument provides fresh insight into the ultimate limits on human visual acuity, reveals for the first time images of the retinal cone mosaic responsible for color vision, and points the way to contact lenses and laser surgical methods that could enhance vision beyond what is currently possible today. Supported by the NSF Science and Technology Center for Adaptive Optics, the National Eye Institute, and Bausch and Lomb, Inc.
Near real-time, on-the-move software PED using VPEF
NASA Astrophysics Data System (ADS)
Green, Kevin; Geyer, Chris; Burnette, Chris; Agarwal, Sanjeev; Swett, Bruce; Phan, Chung; Deterline, Diane
2015-05-01
The scope of the Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System (MOVERS) development effort, managed by the Night Vision and Electronic Sensors Directorate (NVESD), is to develop, integrate, and demonstrate new sensor technologies and algorithms that improve improvised device/mine detection using efficient and effective exploitation and fusion of sensor data and target cues from existing and future Route Clearance Package (RCP) sensor systems. Unfortunately, the majority of forward looking Full Motion Video (FMV) and computer vision processing, exploitation, and dissemination (PED) algorithms are often developed using proprietary, incompatible software. This makes the insertion of new algorithms difficult due to the lack of standardized processing chains. In order to overcome these limitations, EOIR developed the Government off-the-shelf (GOTS) Video Processing and Exploitation Framework (VPEF) to be able to provide standardized interfaces (e.g., input/output video formats, sensor metadata, and detected objects) for exploitation software and to rapidly integrate and test computer vision algorithms. EOIR developed a vehicle-based computing framework within the MOVERS and integrated it with VPEF. VPEF was further enhanced for automated processing, detection, and publishing of detections in near real-time, thus improving the efficiency and effectiveness of RCP sensor systems.
360 degree vision system: opportunities in transportation
NASA Astrophysics Data System (ADS)
Thibault, Simon
2007-09-01
Panoramic technologies are experiencing new and exciting opportunities in the transportation industries. The advantages of panoramic imagers are numerous: increased areas coverage with fewer cameras, imaging of multiple target simultaneously, instantaneous full horizon detection, easier integration of various applications on the same imager and others. This paper reports our work on panomorph optics and potential usage in transportation applications. The novel panomorph lens is a new type of high resolution panoramic imager perfectly suitable for the transportation industries. The panomorph lens uses optimization techniques to improve the performance of a customized optical system for specific applications. By adding a custom angle to pixel relation at the optical design stage, the optical system provides an ideal image coverage which is designed to reduce and optimize the processing. The optics can be customized for the visible, near infra-red (NIR) or infra-red (IR) wavebands. The panomorph lens is designed to optimize the cost per pixel which is particularly important in the IR. We discuss the use of the 360 vision system which can enhance on board collision avoidance systems, intelligent cruise controls and parking assistance. 360 panoramic vision systems might enable safer highways and significant reduction in casualties.
78 FR 26376 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-06
...; Bioengineering of Neuroscience, Vision and Low Vision Technologies Study Section. Date: May 30-31, 2013. Time: 8... of Committee: Integrative, Functional and Cognitive Neuroscience Integrated Review Group..., [email protected] . Name of Committee: Center for Scientific Review Special Emphasis Panel; Vision...
ERIC Educational Resources Information Center
Hinckley, June
2000-01-01
Discusses changes in technology, information, and people and the impact on music programs. The Vision 2020 project focuses on the future of music education. Addresses the events that created Vision 2020. Includes "The Housewright Declaration," a summarization of agreements from the Housewright Symposium on the Future of Music Education. (CMK)
Binary pressure-sensitive paint measurements using miniaturised, colour, machine vision cameras
NASA Astrophysics Data System (ADS)
Quinn, Mark Kenneth
2018-05-01
Recent advances in machine vision technology and capability have led to machine vision cameras becoming applicable for scientific imaging. This study aims to demonstrate the applicability of machine vision colour cameras for the measurement of dual-component pressure-sensitive paint (PSP). The presence of a second luminophore component in the PSP mixture significantly reduces its inherent temperature sensitivity, increasing its applicability at low speeds. All of the devices tested are smaller than the cooled CCD cameras traditionally used and most are of significantly lower cost, thereby increasing the accessibility of such technology and techniques. Comparisons between three machine vision cameras, a three CCD camera, and a commercially available specialist PSP camera are made on a range of parameters, and a detailed PSP calibration is conducted in a static calibration chamber. The findings demonstrate that colour machine vision cameras can be used for quantitative, dual-component, pressure measurements. These results give rise to the possibility of performing on-board dual-component PSP measurements in wind tunnels or on real flight/road vehicles.
NASA Technical Reports Server (NTRS)
Lewandowski, Leon; Struckman, Keith
1994-01-01
Microwave Vision (MV), a concept originally developed in 1985, could play a significant role in the solution to robotic vision problems. Originally our Microwave Vision concept was based on a pattern matching approach employing computer based stored replica correlation processing. Artificial Neural Network (ANN) processor technology offers an attractive alternative to the correlation processing approach, namely the ability to learn and to adapt to changing environments. This paper describes the Microwave Vision concept, some initial ANN-MV experiments, and the design of an ANN-MV system that has led to a second patent disclosure in the robotic vision field.
A Concept for Robust, High Density Terminal Air Traffic Operations
NASA Technical Reports Server (NTRS)
Isaacson, Douglas R.; Robinson, John E.; Swenson, Harry N.; Denery, Dallas G.
2010-01-01
This paper describes a concept for future high-density, terminal air traffic operations that has been developed by interpreting the Joint Planning and Development Office s vision for the Next Generation (NextGen) Air Transportation System and coupling it with emergent NASA and other technologies and procedures during the NextGen timeframe. The concept described in this paper includes five core capabilities: 1) Extended Terminal Area Routing, 2) Precision Scheduling Along Routes, 3) Merging and Spacing, 4) Tactical Separation, and 5) Off-Nominal Recovery. Gradual changes are introduced to the National Airspace System (NAS) by phased enhancements to the core capabilities in the form of increased levels of automation and decision support as well as targeted task delegation. NASA will be evaluating these conceptual technological enhancements in a series of human-in-the-loop simulations and will accelerate development of the most promising capabilities in cooperation with the FAA through the Efficient Flows Into Congested Airspace Research Transition Team.
Enhanced situational technologies applied to ship channels
NASA Astrophysics Data System (ADS)
Helgeson, Michael A.; Wacker, Roger A.
1997-06-01
The Houston Ship Channel ranks as America's number one port in foreign tonnage by welcoming more than 50,000 cargo ships and barges annually. Locally 196,000 jobs, 5.5 billion dollars in business revenue and 213 million dollars in taxes are generated. Unfortunately, 32 days of each year vessel traffic stops for hours due to fog causing an estimated 40- 100 million dollars loss as ships idly wait in the channel for weather to clear. In addition, poor visibility has contributed to past vessel collisions which have resulted in channel closure, and associated damage to property and the environment. Today's imaging technology for synthetic vision systems and enhanced situational awareness systems offers a new solution to this problem. Whereas, typically these systems have been targeted at aircraft landing systems the channel navigation application provides a peripheral ground based market. This paper describes two imaging solutions to the problem. One using an active 35 GHz scanning radar and the other using a 94 GHz passive millimeter wave camera.
The role of vision processing in prosthetic vision.
Barnes, Nick; He, Xuming; McCarthy, Chris; Horne, Lachlan; Kim, Junae; Scott, Adele; Lieby, Paulette
2012-01-01
Prosthetic vision provides vision which is reduced in resolution and dynamic range compared to normal human vision. This comes about both due to residual damage to the visual system from the condition that caused vision loss, and due to limitations of current technology. However, even with limitations, prosthetic vision may still be able to support functional performance which is sufficient for tasks which are key to restoring independent living and quality of life. Here vision processing can play a key role, ensuring that information which is critical to the performance of key tasks is available within the capability of the available prosthetic vision. In this paper, we frame vision processing for prosthetic vision, highlight some key areas which present problems in terms of quality of life, and present examples where vision processing can help achieve better outcomes.
NASA Technical Reports Server (NTRS)
Bauer, F. H.; Bristow, J. O.; Carpenter, J. R.; Garrison, J. L.; Hartman, K. R.; Lee, T.; Long, A. C.; Kelbel, D.; Lu, V.; How, J. P.;
2000-01-01
Formation flying is quickly revolutionizing the way the space community conducts autonomous science missions around the Earth and in space. This technological revolution will provide new, innovative ways for this community to gather scientific information, share this information between space vehicles and the ground, and expedite the human exploration of space. Once fully matured, this technology will result in swarms of space vehicles flying as a virtual platform and gathering significantly more and better science data than is possible today. Formation flying will be enabled through the development and deployment of spaceborne differential Global Positioning System (GPS) technology and through innovative spacecraft autonomy techniques, This paper provides an overview of the current status of NASA/DoD/Industry/University partnership to bring formation flying technology to the forefront as quickly as possible, the hurdles that need to be overcome to achieve the formation flying vision, and the team's approach to transfer this technology to space. It will also describe some of the formation flying testbeds, such as Orion, that are being developed to demonstrate and validate these innovative GPS sensing and formation control technologies.
SPACE: Vision and Reality: Face to Face. Proceedings Report
NASA Technical Reports Server (NTRS)
1995-01-01
The proceedings of the 11th National Space Symposium entitled 'Vision and Reality: Face to Face' is presented. Technological areas discussed include the following sections: Vision for the future; Positioning for the future; Remote sensing, the emerging era; space opportunities, Competitive vision with acquisition reality; National security requirements in space; The world is into space; and The outlook for space. An appendice is also attached.
Federal employees dental and vision insurance program. Final rule.
2008-08-26
The Office of Personnel Management (OPM) is issuing final regulations to administer the Federal Employee Dental and Vision Benefits Enhancement Act of 2004, signed into law December 23, 2004. This law establishes dental and vision benefits programs for Federal employees, annuitants, and their eligible family members.
First-in-Human Trial of a Novel Suprachoroidal Retinal Prosthesis
Ayton, Lauren N.; Blamey, Peter J.; Guymer, Robyn H.; Luu, Chi D.; Nayagam, David A. X.; Sinclair, Nicholas C.; Shivdasani, Mohit N.; Yeoh, Jonathan; McCombe, Mark F.; Briggs, Robert J.; Opie, Nicholas L.; Villalobos, Joel; Dimitrov, Peter N.; Varsamidis, Mary; Petoe, Matthew A.; McCarthy, Chris D.; Walker, Janine G.; Barnes, Nick; Burkitt, Anthony N.; Williams, Chris E.; Shepherd, Robert K.; Allen, Penelope J.
2014-01-01
Retinal visual prostheses (“bionic eyes”) have the potential to restore vision to blind or profoundly vision-impaired patients. The medical bionic technology used to design, manufacture and implant such prostheses is still in its relative infancy, with various technologies and surgical approaches being evaluated. We hypothesised that a suprachoroidal implant location (between the sclera and choroid of the eye) would provide significant surgical and safety benefits for patients, allowing them to maintain preoperative residual vision as well as gaining prosthetic vision input from the device. This report details the first-in-human Phase 1 trial to investigate the use of retinal implants in the suprachoroidal space in three human subjects with end-stage retinitis pigmentosa. The success of the suprachoroidal surgical approach and its associated safety benefits, coupled with twelve-month post-operative efficacy data, holds promise for the field of vision restoration. Trial Registration Clinicaltrials.gov NCT01603576 PMID:25521292
Dunbar, Hannah M P; Dhawahir-Scala, Felipe E
2018-06-01
Age-related macular degeneration (AMD) is the leading cause of visual impairment in the western world, causing significant reduction in quality of life. Despite treatment advances, the burden of visual impairment caused by AMD continues to rise. In addition to traditional low vision rehabilitation and support, optical and electronic aids, and strategies to enhance the use of peripheral vision, implantable telescopic devices have been indicated as a surgical means of enhancing vision. Here we examine the literature on commercially available telescopic devices discussing their design, mode of action, surgical procedure and published outcomes on visual acuity, quality of life, surgical complication rates and cost effectiveness data where available.Funding Article processing charges were funded by VisionCare Inc.
ERIC Educational Resources Information Center
Supalo, Cary A.
2012-01-01
Entry into science education for students with blindness or low vision can present economic and technological barriers to access. This manuscript discusses funding hands-on student experiences in middle school, high school, and post-secondary education. Further, the use of access technologies recently developed for science education is also…
78 FR 1216 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-08
... Group Bioengineering of Neuroscience, Vision and Low Vision Technologies Study Section. Date: February 7... Technology A Study Section. Date: February 7-8, 2013. Time: 8:00 a.m. to 5:00 p.m. Agenda: To review and... Integrated Review Group, Cellular Aspects of Diabetes and Obesity Study Section. Date: February 7, 2013. Time...
INSIGHT: Vision & Leadership, 2002.
ERIC Educational Resources Information Center
McGraw, Tammy, Ed.
2002-01-01
This publication focuses on promising new and emerging technologies and what they might mean to the future of K-12 schools. Half of the volume contains articles devoted in some way to "Vision," and articles in the other half are under the heading of "Leadership." Contents in the "Vision" section include: "The…
Aeromedical implications of the X-Chrom lens for improving color vision deficiencies.
DOT National Transportation Integrated Search
1978-04-01
The X-Chrom contact lens is a recent device recommended to improve defective color vision. The red lens is usually worn on the nondominant eye and may require extended wearing for optimum color vision enhancement. A battery of tests was given to 24 i...
Given time: biology, nature and photographic vision.
Garlick, Steve
2009-12-01
The invention of photography in the early 19th century changed the way that we see the world, and has played an important role in the development of western science. Notably, photographic vision is implicated in the definition of a new temporal relation to the natural world at the same time as modern biological science emerges as a disciplinary formation. It is this coincidence in birth that is central to this study. I suggest that by examining the relationship of early photography to nature, we can gain some insight into the technological and epistemological underpinnings of biological vision. To this end, this article is primarily concerned with the role of photographic technology in the genealogy of biological vision. I argue that photography has always been ambiguously located between art and science, between nature and culture, and between life and death. Hence, while it may be a technological expression of the scientific desire to know and to control nature, photographic vision has continually disrupted and frustrated the ambitions of biological technoscience. The technovision of early biological science illustrates that the elusive temporality of nature has always been central to the production of knowledge of life.
Milestones on the road to independence for the blind
NASA Astrophysics Data System (ADS)
Reed, Kenneth
1997-02-01
Ken will talk about his experiences as an end user of technology. Even moderate technological progress in the field of pattern recognition and artificial intelligence can be, often surprisingly, of great help to the blind. An example is the providing of portable bar code scanners so that a blind person knows what he is buying and what color it is. In this age of microprocessors controlling everything, how can a blind person find out what his VCR is doing? Is there some technique that will allow a blind musician to convert print music into midi files to drive a synthesizer? Can computer vision help the blind cross a road including predictions of where oncoming traffic will be located? Can computer vision technology provide spoken description of scenes so a blind person can figure out where doors and entrances are located, and what the signage on the building says? He asks 'can computer vision help me flip a pancake?' His challenge to those in the computer vision field is 'where can we go from here?'
NASA Technical Reports Server (NTRS)
Drake, Bret G.; Josten, B. Kent; Monell, Donald W.
2004-01-01
The Vision for Space Exploration provides direction for the National Aeronautics and Space Administration to embark on a robust space exploration program that will advance the Nation s scientific, security, and economic interests. This plan calls for a progressive expansion of human capabilities beyond low earth orbit seeking to answer profound scientific and philosophical questions while responding to discoveries along the way. In addition, the Vision articulates the strategy for developing the revolutionary new technologies and capabilities required for the future exploration of the solar system. The National Aeronautics and Space Administration faces new challenges in successfully implementing the Vision. In order to implement a sustained and affordable exploration endeavor it is vital for NASA to do business differently. This paper provides an overview of the strategy-to-task-to-technology process being used by NASA s Exploration Systems Mission Directorate to develop the requirements and system acquisition details necessary for implementing a sustainable exploration vision.
NASA Technical Reports Server (NTRS)
Kramer, Lynda J. (Compiler)
1999-01-01
The second NASA sponsored Workshop on Synthetic/Enhanced Vision (S/EV) Display Systems was conducted January 27-29, 1998 at the NASA Langley Research Center. The purpose of this workshop was to provide a forum for interested parties to discuss topics in the Synthetic Vision (SV) element of the NASA Aviation Safety Program and to encourage those interested parties to participate in the development, prototyping, and implementation of S/EV systems that enhance aviation safety. The SV element addresses the potential safety benefits of synthetic/enhanced vision display systems for low-end general aviation aircraft, high-end general aviation aircraft (business jets), and commercial transports. Attendance at this workshop consisted of about 112 persons including representatives from industry, the FAA, and other government organizations (NOAA, NIMA, etc.). The workshop provided opportunities for interested individuals to give presentations on the state of the art in potentially applicable systems, as well as to discuss areas of research that might be considered for inclusion within the Synthetic Vision Element program to contribute to the reduction of the fatal aircraft accident rate. Panel discussions on topical areas such as databases, displays, certification issues, and sensors were conducted, with time allowed for audience participation.
Space Tethers Programmatic Infusion Opportunities
NASA Technical Reports Server (NTRS)
Bonometti, J. A.; Frame, K. L.
2005-01-01
Programmatic opportunities abound for space Cables, Stringers and Tethers, justified by the tremendous performance advantages that these technologies offer and the rather wide gaps that must be filled by the NASA Exploration program, if the "sustainability goal" is to be met. A definition and characterization of the three categories are presented along with examples. A logical review of exploration requirements shows how each class can be infused throughout the program, from small experimental efforts to large system deployments. The economics of tethers in transportation is considered along with the impact of stringers for structural members. There is an array of synergistic methodologies that interlace their fabrication, implementation and operations. Cables, stringers and tethers can enhance a wide range of other space systems and technologies, including power storage, formation flying, instrumentation, docking mechanisms and long-life space components. The existing tether (i.e., MXER) program's accomplishments are considered consistent with NASA's new vision and can readily conform to requirements-driven technology development.
NASA Astrophysics Data System (ADS)
Baumann, Peter
2013-04-01
There is a traditional saying that metadata are understandable, semantic-rich, and searchable. Data, on the other hand, are big, with no accessible semantics, and just downloadable. Not only has this led to an imbalance of search support form a user perspective, but also underneath to a deep technology divide often using relational databases for metadata and bespoke archive solutions for data. Our vision is that this barrier will be overcome, and data and metadata become searchable likewise, leveraging the potential of semantic technologies in combination with scalability technologies. Ultimately, in this vision ad-hoc processing and filtering will not distinguish any longer, forming a uniformly accessible data universe. In the European EarthServer initiative, we work towards this vision by federating database-style raster query languages with metadata search and geo broker technology. We present our approach taken, how it can leverage OGC standards, the benefits envisaged, and first results.
Landini, G; Perryer, G
2009-06-01
Individuals with red-green colour-blindness (CB) commonly experience great difficulty differentiating between certain histological stain pairs, notably haematoxylin-eosin (H&E). The prevalence of red-green CB is high (6-10% of males), including among medical and laboratory personnel, and raises two major concerns: first, accessibility and equity issues during the education and training of individuals with this disability, and second, the likelihood of errors in critical tasks such as interpreting histological images. Here we show two methods to enhance images of H&E-stained samples so the differently stained tissues can be well discriminated by red-green CBs while remaining usable by people with normal vision. Method 1 involves rotating and stretching the range of H&E hues in the image to span the perceptual range of the CB observers. Method 2 digitally unmixes the original dyes using colour deconvolution into two separate images and repositions the information into hues that are more distinctly perceived. The benefits of these methods were tested in 36 volunteers with normal vision and 11 with red-green CB using a variety of H&E stained tissue sections paired with their enhanced versions. CB subjects reported they could better perceive the different stains using the enhanced images for 85% of preparations (method 1: 90%, method 2: 73%), compared to the H&E-stained original images. Many subjects with normal vision also preferred the enhanced images to the original H&E. The results suggest that these colour manipulations confer considerable advantage for those with red-green colour vision deficiency while not disadvantaging people with normal colour vision.
ERIC Educational Resources Information Center
Andrews, Gillian
2015-01-01
Possibilities for a different form of education have provided rich sources of inspiration for science fiction writers. Isaac Asimov, Orson Scott Card, Neal Stephenson, Octavia Butler, and Vernor Vinge, among others, have all projected their own visions of what education could be. These visions sometimes engage with technologies that are currently…
ERIC Educational Resources Information Center
American Society for Training and Development, Alexandria, VA.
In 2000, the American Society for Training and Development and the National Governors Association convened the Commission on Technology and Adult Learning. The 31-member commission included representatives of the business, government, and education sectors. They formulated a vision for the future of e-learning in the United States and identified…
ERIC Educational Resources Information Center
Maryland State Dept. of Education, Baltimore.
A team consisting of Maryland State Department of Education (MSDE) staff, local educators, and other representatives developed an action plan to assist in advancing the blending of academic, career, and technology education. The team prepared a vision statement, set strategic directions, analyzed barriers, and developed recommendations and actions…
Steps to Offering Low Vision Rehabilitation Services through Clinical Video Telehealth
ERIC Educational Resources Information Center
Ihirig, Carolyn
2016-01-01
Telehealth clinical applications, which allow medical professionals to use telecommunications technologies to provide services to individuals remotely, continue to expand in areas such as low vision rehabilitation, where evaluations are provided to patients who live in rural areas. As with face-to-face low vision rehabilitation, the goal of…
INSIGHT: Vision & Leadership Inaugural Issue, 2001.
ERIC Educational Resources Information Center
McGraw, Tammy, Ed.
2001-01-01
This publication focuses on promising new and emerging technologies and what they might mean to the future of K-12 schools. Half of the volume contains articles devoted in some way to "Vision," and articles in the other half are under the heading of "Leadership." Contents in the "Vision" section include: "Customizable Content" (Walter Koetke);…
NASA Technical Reports Server (NTRS)
Cockrell, Charles E., Jr.; Auslender, Aaron H.; Guy, R. Wayne; McClinton, Charles R.; Welch, Sharon S.
2002-01-01
Third-generation reusable launch vehicle (RLV) systems are envisioned that utilize airbreathing and combined-cycle propulsion to take advantage of potential performance benefits over conventional rocket propulsion and address goals of reducing the cost and enhancing the safety of systems to reach earth orbit. The dual-mode scramjet (DMSJ) forms the core of combined-cycle or combination-cycle propulsion systems for single-stage-to-orbit (SSTO) vehicles and provides most of the orbital ascent energy. These concepts are also relevant to two-stage-to-orbit (TSTO) systems with an airbreathing first or second stage. Foundation technology investments in scramjet propulsion are driven by the goal to develop efficient Mach 3-15 concepts with sufficient performance and operability to meet operational system goals. A brief historical review of NASA scramjet development is presented along with a summary of current technology efforts and a proposed roadmap. The technology addresses hydrogen-fueled combustor development, hypervelocity scramjets, multi-speed flowpath performance and operability, propulsion-airframe integration, and analysis and diagnostic tools.
ERIC Educational Resources Information Center
Washington Office of the State Superintendent of Public Instruction, Olympia.
This final report for the Washington State Technology Plan for K-12 Common Schools provides a vision, long-term framework, and recommendations for implementation. Following an executive summary and a list of committee members, the first section of the report discusses technology in K-12 schools of tomorrow, including legislative charge, vision,…
Color line scan camera technology and machine vision: requirements to consider
NASA Astrophysics Data System (ADS)
Paernaenen, Pekka H. T.
1997-08-01
Color machine vision has shown a dynamic uptrend in use within the past few years as the introduction of new cameras and scanner technologies itself underscores. In the future, the movement from monochrome imaging to color will hasten, as machine vision system users demand more knowledge about their product stream. As color has come to the machine vision, certain requirements for the equipment used to digitize color images are needed. Color machine vision needs not only a good color separation but also a high dynamic range and a good linear response from the camera used. Good dynamic range and linear response is necessary for color machine vision. The importance of these features becomes even more important when the image is converted to another color space. There is always lost some information when converting integer data to another form. Traditionally the color image processing has been much slower technique than the gray level image processing due to the three times greater data amount per image. The same has applied for the three times more memory needed. The advancements in computers, memory and processing units has made it possible to handle even large color images today cost efficiently. In some cases he image analysis in color images can in fact even be easier and faster than with a similar gray level image because of more information per pixel. Color machine vision sets new requirements for lighting, too. High intensity and white color light is required in order to acquire good images for further image processing or analysis. New development in lighting technology is bringing eventually solutions for color imaging.
NASA Technical Reports Server (NTRS)
Delnore, Victor E. (Compiler)
1994-01-01
The Fifth Combined Manufacturers' and Technologists' Airborne Windshear Review Meeting was hosted by the NASA Langley Research Center and the Federal Aviation Administration in Hampton, Virginia, on September 28-30, 1993. The purpose was to report on the highly successful windshear experiments conducted by government, academic institutions, and industry; to transfer the results to regulators, manufacturers, and users; and to set initiatives for future aeronautics technology research. The formal sessions covered recent developments in windshear flight testing, windshear modeling, flight management, and ground-based systems, airborne windshear detection systems, certification and regulatory issues, and development and applications of sensors for wake vortices and for synthetic and enhanced vision systems. This report was compiled to record and make available the technology updates and materials from the conference.
Adjustable typography: an approach to enhancing low vision text accessibility.
Arditi, Aries
2004-04-15
Millions of people have low vision, a disability condition caused by uncorrectable or partially correctable disorders of the eye. The primary goal of low vision rehabilitation is increasing access to printed material. This paper describes how adjustable typography, a computer graphic approach to enhancing text accessibility, can play a role in this process, by allowing visually-impaired users to customize fonts to maximize legibility according to their own visual needs. Prototype software and initial testing of the concept is described. The results show that visually-impaired users tend to produce a variety of very distinct fonts, and that the adjustment process results in greatly enhanced legibility. But this initial testing has not yet demonstrated increases in legibility over and above the legibility of highly legible standard fonts such as Times New Roman.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenneth Thomas
2012-02-01
Life extension beyond 60 years for the U.S operating nuclear fleet requires that instrumentation and control (I&C) systems be upgraded to address aging and reliability concerns. It is impractical for the legacy systems based on 1970's vintage technology operate over this extended time period. Indeed, utilities have successfully engaged in such replacements when dictated by these operational concerns. However, the replacements have been approached in a like-for-like manner, meaning that they do not take advantage of the inherent capabilities of digital technology to improve business functions. And so, the improvement in I&C system performance has not translated to bottom-line performancemore » improvement for the fleet. Therefore, wide-scale modernization of the legacy I&C systems could prove to be cost-prohibitive unless the technology is implemented in a manner to enable significant business innovation as a means of off-setting the cost of upgrades. A Future Vision of a transformed nuclear plant operating model based on an integrated digital environment has been developed as part of the Advanced Instrumentation, Information, and Control (II&C) research pathway, under the Light Water Reactor (LWR) Sustainability Program. This is a research and development program sponsored by the U.S. Department of Energy (DOE), performed in close collaboration with the nuclear utility industry, to provide the technical foundations for licensing and managing the long-term, safe and economical operation of current nuclear power plants. DOE's program focus is on longer-term and higher-risk/reward research that contributes to the national policy objectives of energy security and environmental security . The Advanced II&C research pathway is being conducted by the Idaho National Laboratory (INL). The Future Vision is based on a digital architecture that encompasses all aspects of plant operations and support, integrating plant systems, plant work processes, and plant workers in a seamless digital environment to enhance nuclear safety, increase productivity, and improve overall plant performance. The long-term goal is to transform the operating model of the nuclear power plants (NPP)s from one that is highly reliant on a large staff performing mostly manual activities to an operating model based on highly integrated technology with a smaller staff. This digital transformation is critical to addressing an array of issues facing the plants, including aging of legacy analog systems, potential shortage of technical workers, ever-increasing expectations for nuclear safety improvement, and relentless pressure to reduce cost. The Future Vision is based on research is being conducted in the following major areas of plant function: (1) Highly integrated control rooms; (2) Highly automated plant; (3) Integrated operations; (4) Human performance improvement for field workers; and (5) Outage safety and efficiency. Pilot projects will be conducted in each of these areas as the means for industry to collectively integrate these new technologies into nuclear plant work activities. The pilot projects introduce new digital technologies into the nuclear plant operating environment at host operating plants to demonstrate and validate them for production usage. In turn, the pilot project technologies serve as the stepping stones to the eventual seamless digital environment as described in the Future Vision.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenneth Thomas; Bruce Hallbert
2013-02-01
Life extension beyond 60 years for the U.S operating nuclear fleet requires that instrumentation and control (I&C) systems be upgraded to address aging and reliability concerns. It is impractical for the legacy systems based on 1970’s vintage technology operate over this extended time period. Indeed, utilities have successfully engaged in such replacements when dictated by these operational concerns. However, the replacements have been approached in a like-for-like manner, meaning that they do not take advantage of the inherent capabilities of digital technology to improve business functions. And so, the improvement in I&C system performance has not translated to bottom-line performancemore » improvement for the fleet. Therefore, wide-scale modernization of the legacy I&C systems could prove to be cost-prohibitive unless the technology is implemented in a manner to enable significant business innovation as a means of off-setting the cost of upgrades. A Future Vision of a transformed nuclear plant operating model based on an integrated digital environment has been developed as part of the Advanced Instrumentation, Information, and Control (II&C) research pathway, under the Light Water Reactor (LWR) Sustainability Program. This is a research and development program sponsored by the U.S. Department of Energy (DOE), performed in close collaboration with the nuclear utility industry, to provide the technical foundations for licensing and managing the long-term, safe and economical operation of current nuclear power plants. DOE’s program focus is on longer-term and higher-risk/reward research that contributes to the national policy objectives of energy security and environmental security . The Advanced II&C research pathway is being conducted by the Idaho National Laboratory (INL). The Future Vision is based on a digital architecture that encompasses all aspects of plant operations and support, integrating plant systems, plant work processes, and plant workers in a seamless digital environment to enhance nuclear safety, increase productivity, and improve overall plant performance. The long-term goal is to transform the operating model of the nuclear power plants (NPP)s from one that is highly reliant on a large staff performing mostly manual activities to an operating model based on highly integrated technology with a smaller staff. This digital transformation is critical to addressing an array of issues facing the plants, including aging of legacy analog systems, potential shortage of technical workers, ever-increasing expectations for nuclear safety improvement, and relentless pressure to reduce cost. The Future Vision is based on research is being conducted in the following major areas of plant function: 1. Highly integrated control rooms 2. Highly automated plant 3. Integrated operations 4. Human performance improvement for field workers 5. Outage safety and efficiency. Pilot projects will be conducted in each of these areas as the means for industry to collectively integrate these new technologies into nuclear plant work activities. The pilot projects introduce new digital technologies into the nuclear plant operating environment at host operating plants to demonstrate and validate them for production usage. In turn, the pilot project technologies serve as the stepping stones to the eventual seamless digital environment as described in the Future Vision.« less
Bray, Nathan; Brand, Andrew; Taylor, John; Hoare, Zoe; Dickinson, Christine; Edwards, Rhiannon T
2017-08-01
To determine the incremental cost-effectiveness of portable electronic vision enhancement system (p-EVES) devices compared with optical low vision aids (LVAs), for improving near vision visual function, quality of life and well-being of people with a visual impairment. An AB/BA randomized crossover trial design was used. Eighty-two participants completed the study. Participants were current users of optical LVAs who had not tried a p-EVES device before and had a stable visual impairment. The trial intervention was the addition of a p-EVES device to the participant's existing optical LVA(s) for 2 months, and the control intervention was optical LVA use only, for 2 months. Cost-effectiveness and cost-utility analyses were conducted from a societal perspective. The mean cost of the p-EVES intervention was £448. Carer costs were £30 (4.46 hr) less for the p-EVES intervention compared with the LVA only control. The mean difference in total costs was £417. Bootstrapping gave an incremental cost-effectiveness ratio (ICER) of £736 (95% CI £481 to £1525) for a 7% improvement in near vision visual function. Cost per quality-adjusted life year (QALY) ranged from £56 991 (lower 95% CI = £19 801) to £66 490 (lower 95% CI = £23 055). Sensitivity analysis varying the commercial price of the p-EVES device reduced ICERs by up to 75%, with cost per QALYs falling below £30 000. Portable electronic vision enhancement system (p-EVES) devices are likely to be a cost-effective use of healthcare resources for improving near vision visual function, but this does not translate into cost-effective improvements in quality of life, capability or well-being. © 2016 The Authors. Acta Ophthalmologica published by John Wiley & Sons Ltd on behalf of Acta Ophthalmologica Scandinavica Foundation and European Association for Vision & Eye Research.
A self-learning camera for the validation of highly variable and pseudorandom patterns
NASA Astrophysics Data System (ADS)
Kelley, Michael
2004-05-01
Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To be able to address non-traditional applications, the machine vision industry must include smart camera technology that meets its users" demands for lower costs, better performance and the ability to address applications of irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural network based systems, coupled with self-taught, n-space, non-linear modeling, promises to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator"s subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing, without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, Zero-Instruction-Set-Computer, enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckers, Koenraad J; Young, Katherine R
Geothermal district heating (GDH) systems have limited penetration in the U.S., with an estimated installed capacity of only 100 MWth for a total of 21 sites. We see higher deployment in other regions, for example, in Europe with an installed capacity of more than 4,700 MWth for 257 GDH sites. The U.S. Department of Energy Geothermal Vision (GeoVision) Study is currently looking at the potential to increase the deployment in the U.S. and to understand the impact of this increased deployment. This paper reviews 31 performance, cost, and financial parameters as input for numerical simulations describing GDH system deployment inmore » support of the GeoVision effort. The focus is on GDH systems using hydrothermal and Enhanced Geothermal System resources in the U.S.; ground-source heat pumps and heat-to-electricity conversion technology were excluded. Parameters investigated include 1) capital and operation and maintenance costs for both subsurface and surface equipment; 2) performance factors such as resource recovery factors, well flow rates, and system efficiencies; and 3) financial parameters such as inflation, interest, and tax rates. Current values as well as potential future improved values under various scenarios are presented. Sources of data considered include academic and popular literature, software tools such as GETEM and GEOPHIRES, industry interviews, and analysis conducted by other task forces for the GeoVision Study, e.g., on the drilling costs and reservoir performance.« less
Flight instruments and helmet-mounted SWIR imaging systems
NASA Astrophysics Data System (ADS)
Robinson, Tim; Green, John; Jacobson, Mickey; Grabski, Greg
2011-06-01
Night vision technology has experienced significant advances in the last two decades. Night vision goggles (NVGs) based on gallium arsenide (GaAs) continues to raise the bar for alternative technologies. Resolution, gain, sensitivity have all improved; the image quality through these devices is nothing less than incredible. Panoramic NVGs and enhanced NVGs are examples of recent advances that increase the warfighter capabilities. Even with these advances, alternative night vision devices such as solid-state indium gallium arsenide (InGaAs) focal plane arrays are under development for helmet-mounted imaging systems. The InGaAs imaging system offers advantages over the existing NVGs. Two key advantages are; (1) the new system produces digital image data, and (2) the new system is sensitive to energy in the shortwave infrared (SWIR) spectrum. While it is tempting to contrast the performance of these digital systems to the existing NVGs, the advantage of different spectral detection bands leads to the conclusion that the technologies are less competitive and more synergistic. It is likely, by the end of the decade, pilots within a cockpit will use multi-band devices. As such, flight decks will need to be compatible with both NVGs and SWIR imaging systems. Insertion of NVGs in aircraft during the late 70's and early 80's resulted in many "lessons learned" concerning instrument compatibility with NVGs. These "lessons learned" ultimately resulted in specifications such as MIL-L-85762A and MIL-STD-3009. These specifications are now used throughout industry to produce NVG-compatible illuminated instruments and displays for both military and civilian applications. Inserting a SWIR imaging device in a cockpit will require similar consideration. A project evaluating flight deck instrument compatibility with SWIR devices is currently ongoing; aspects of this evaluation are described in this paper. This project is sponsored by the Air Force Research Laboratory (AFRL).
Flight Simulator Evaluation of Display Media Devices for Synthetic Vision Concepts
NASA Technical Reports Server (NTRS)
Arthur, J. J., III; Williams, Steven P.; Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.
2004-01-01
The Synthetic Vision Systems (SVS) Project of the National Aeronautics and Space Administration's (NASA) Aviation Safety Program (AvSP) is striving to eliminate poor visibility as a causal factor in aircraft accidents as well as enhance operational capabilities of all aircraft. To accomplish these safety and capacity improvements, the SVS concept is designed to provide a clear view of the world around the aircraft through the display of computer-generated imagery derived from an onboard database of terrain, obstacle, and airport information. Display media devices with which to implement SVS technology that have been evaluated so far within the Project include fixed field of view head up displays and head down Primary Flight Displays with pilot-selectable field of view. A simulation experiment was conducted comparing these display devices to a fixed field of view, unlimited field of regard, full color Helmet-Mounted Display system. Subject pilots flew a visual circling maneuver in IMC at a terrain-challenged airport. The data collected for this experiment is compared to past SVS research studies.
NASA Technical Reports Server (NTRS)
2003-01-01
The vision document provides an overview of the Climate Change Science Program (CCSP) long-term strategic plan to enhance scientific understanding of global climate change.This document is a companion to the comprehensive Strategic Plan for the Climate Change Science Program. The report responds to the Presidents direction that climate change research activities be accelerated to provide the best possible scientific information to support public discussion and decisionmaking on climate-related issues.The plan also responds to Section 104 of the Global Change Research Act of 1990, which mandates the development and periodic updating of a long-term national global change research plan coordinated through the National Science and Technology Council.This is the first comprehensive update of a strategic plan for U.S. global change and climate change research since the origal plan for the U.S. Global Change Research Program was adopted at the inception of the program in 1989.
Toward Head-Up and Head-Worn Displays for Equivalent Visual Operations
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Arthur, Jarvis J.; Bailey, Randall E.; Shelton, Kevin J.; Kramer, Lynda J.; Jones, Denise R.; Williams, Steven P.; Harrison, Stephanie J.; Ellis, Kyle K.
2015-01-01
A key capability envisioned for the future air transportation system is the concept of equivalent visual operations (EVO). EVO is the capability to achieve the safety of current-day Visual Flight Rules (VFR) operations and maintain the operational tempos of VFR irrespective of the weather and visibility conditions. Enhanced Flight Vision Systems (EFVS) offer a path to achieve EVO. NASA has successfully tested EFVS for commercial flight operations that has helped establish the technical merits of EFVS, without reliance on natural vision, to runways without category II/III ground-based navigation and lighting requirements. The research has tested EFVS for operations with both Head-Up Displays (HUDs) and "HUD equivalent" Head-Worn Displays (HWDs). The paper describes the EVO concept and representative NASA EFVS research that demonstrate the potential of these technologies to safely conduct operations in visibilities as low as 1000 feet Runway Visual Range (RVR). Future directions are described including efforts to enable low-visibility approach, landing, and roll-outs using EFVS under conditions as low as 300 feet RVR.
Lehoux, Pascale; Denis, Jean-Louis; Tailliez, Stéphanie; Hivon, Myriam
2005-08-01
Health technology assessment (HTA) has received increasing support over the past twenty years in both North America and Europe. The justification for this field of policy-oriented research is that evidence about the efficacy, safety, and cost-effectiveness of technology should contribute to decision and policy making. However, concerns about the ability of HTA producers to increase the use of their findings by decision makers have been expressed. Although HTA practitioners have recognized that dissemination activities need to be intensified, why and how particular approaches should be adopted is still under debate. Using an institutional theory perspective, this article examines HTA as a means of implementing knowledge-based change within health care systems. It presents the results of a case study on the dissemination strategies of six Canadian HTA agencies. Chief executive officers and executives (n = 11), evaluators (n = 19), and communications staff (n = 10) from these agencies were interviewed. Our results indicate that the target audience of HTA is frequently limited to policy makers, that three conflicting visions of HTA dissemination coexist, that active dissemination strategies have only occasionally been applied, and that little attention has been paid to the management of diverging views about the value of health technology. Our discussion explores the strengths, limitations, and trade-offs associated with the three visions. Further efforts should be deployed within agencies to better articulate a shared vision and to devise dissemination strategies that are consistent with this vision.
[Human, transhuman, posthuman. Representations of the body between incompleteness and enhancement].
Maestrutti, Marina
2011-01-01
"Posthuman" is often used to indicate some position, practice, perspective and vision concerning the future of human beings closely related to the use of contemporary technologies. This contribution would like to analyze some conceptions of the notion of posthuman and to present it as a possible form of "non-anthropocentric" thought which considers technological changes as non-human realities strictly involved in the construction and the definition of what constitutes a human being (and his body) and its predicates. Contrary to anthropocentrism which has characterized Western thought from humanism up to the extreme outcomes of transhumanism, non-anthropocentric posthumanism shows how the human being, who has always been the product of hybridization with the non-human (environment, animals and techniques), is built not only by his own strength but always through his partnership and his environment. The idea of enhancement of the body by technology to reach another stage of human evolution is one of the constant elements characterizing transhumanism. Posthumanism suggests no longer considering the interface with technology as an ergonomic relationship with an external tool that just extends the human body, but as a hybrid, or interpenetration that questions the separation of the body and its centrality. In this perspective, the question is not of simply establishing which is a good use of a technology but, every time, of redefining ourselves in our perspectives and our predicates with regard to what a technology allows and opens up to us.
Creating a vision for your medical call center.
Barr, J L; Laufenberg, S; Sieckman, B L
1998-01-01
MCC technologies and applications that can have a positive impact on managed care delivery are almost limitless. As you determine your vision, be sure to have in mind the following questions: (1) Do you simply want an efficient front end for receiving calls? (2) Do you want to offer triage services? (3) Is your organization ready for a fully functional "electronic physician's office?" Understand your organization's strategy. Where are you going, not only today but five years from now? That information is essential to determine your vision. Once established, your vision will help determine what you need and whether you should build or outsource. Vendors will assist in cost/benefit analysis of their equipment, but do not lose sight of internal factors such as "prior inclination" costs in the case of a nurse triage program. The technology is available to take your vision to its outer reaches. With the projected increase in utilization of call center services, don't let your organization be left behind!
Cochlear Implantation, Enhancements, Transhumanism and Posthumanism: Some Human Questions.
Lee, Joseph
2016-02-01
Biomedical engineering technologies such as brain-machine interfaces and neuroprosthetics are advancements which assist human beings in varied ways. There are exciting yet speculative visions of how the neurosciences and bioengineering may influence human nature. However, these could be preparing a possible pathway towards an enhanced and even posthuman future. This article seeks to investigate several ethical themes and wider questions of enhancement, transhumanism and posthumanism. Four themes of interest are: autonomy, identity, futures, and community. Three larger questions can be asked: will everyone be enhanced? Will we be "human" if we are not, one day, transhuman? Should we be enhanced or not? The article proceeds by concentrating on a widespread and sometimes controversial application: the cochlear implant, an auditory prosthesis implanted into Deaf patients. Cochlear implantation and its reception in both the deaf and hearing communities have a distinctive moral discourse, which can offer surprising insights. The paper begins with several points about the enhancement of human beings, transhumanism's reach beyond the human, and posthuman aspirations. Next it focuses on cochlear implants on two sides. Firstly, a shorter consideration of what technologies may do to humans in a transhumanist world. Secondly, a deeper analysis of cochlear implantation's unique socio-political movement, its ethical explanations and cultural experiences linked with pediatric cochlear implantation-and how those wary of being thrust towards posthumanism could marshal such ideas by analogy. As transhumanism approaches, the issues and questions merit continuing intense analysis.
NASA Astrophysics Data System (ADS)
Roco, Mihail C.; Bainbridge, William S.
2013-09-01
Convergence of knowledge and technology for the benefit of society (CKTS) is the core opportunity for progress in the twenty-first century. CKTS is defined as the escalating and transformative interactions among seemingly different disciplines, technologies, communities, and domains of human activity to achieve mutual compatibility, synergism, and integration, and through this process to create added value and branch out to meet shared goals. Convergence has been progressing by stages over the past several decades, beginning with nanotechnology for the material world, followed by convergence of nanotechnology, biotechnology, information, and cognitive science (NBIC) for emerging technologies. CKTS is the third level of convergence. It suggests a general process to advance creativity, innovation, and societal progress based on five general purpose principles: (1) the interdependence of all components of nature and society, (2) decision analysis for research, development, and applications based on dynamic system-logic deduction, (3) enhancement of creativity and innovation through evolutionary processes of convergence that combines existing principles and divergence that generates new ones, (4) the utility of higher-level cross-domain languages to generate new solutions and support transfer of new knowledge, and (5) the value of vision-inspired basic research embodied in grand challenges. CKTS is a general purpose approach in knowledge society. It allows society to answer questions and resolve problems that isolated capabilities cannot, as well as to create new competencies, knowledge, and technologies on this basis. Possible solutions are outlined for key societal challenges in the next decade, including support for foundational emerging technologies NBIC to penetrate essential platforms of human activity and create new industries and jobs, improve lifelong wellness and human potential, achieve personalized and integrated healthcare and education, and secure a sustainable quality of life for all. This paper provides a 10-year "NBIC2" vision within a longer-term framework for converging technology and human progress outlined in a previous study of unifying principles across "NBIC" fields that began with nanotechnology, biotechnology, information technology, and technologies based on and enabling cognitive science (Roco and Bainbridge, Converging technologies for improving human performance: nanotechnology, biotechnology, information technology and cognitive sciences, 2003).
Behavioral Informatics and Computational Modeling in Support of Proactive Health Management and Care
Jimison, Holly B.; Korhonen, Ilkka; Gordon, Christine M.; Saranummi, Niilo
2016-01-01
Health-related behaviors are among the most significant determinants of health and quality of life. Improving health behavior is an effective way to enhance health outcomes and mitigate the escalating challenges arising from an increasingly aging population and the proliferation of chronic diseases. Although it has been difficult to obtain lasting improvements in health behaviors on a wide scale, advances at the intersection of technology and behavioral science may provide the tools to address this challenge. In this paper, we describe a vision and an approach to improve health behavior interventions using the tools of behavioral informatics, an emerging transdisciplinary research domain based on system-theoretic principles in combination with behavioral science and information technology. The field of behavioral informatics has the potential to optimize interventions through monitoring, assessing, and modeling behavior in support of providing tailored and timely interventions. We describe the components of a closed-loop system for health interventions. These components range from fine grain sensor characterizations to individual-based models of behavior change. We provide an example of a research health coaching platform that incorporates a closed-loop intervention based on these multiscale models. Using this early prototype, we illustrate how the optimized and personalized methodology and technology can support self-management and remote care. We note that despite the existing examples of research projects and our platform, significant future research is required to convert this vision to full-scale implementations. PMID:26441408
Enhanced computer vision with Microsoft Kinect sensor: a review.
Han, Jungong; Shao, Ling; Xu, Dong; Shotton, Jamie
2013-10-01
With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers.
Creating a Realistic IT Vision: The Roles and Responsibilities of a Chief Information Officer.
ERIC Educational Resources Information Center
Penrod, James I.
2003-01-01
Discusses the crucial position of the chief information officer (CIO) at higher education institutions and reviews the six major stages of information technology (IT) planning. Includes fundamental elements related to an IT vision; roles of the CIO; the six-stage planning model for a realistic IT vision; and factors for success. (AEF)
ERIC Educational Resources Information Center
Klostermann, Brenda K.; Presley, Jennifer B.
2005-01-01
Findings are presented from our case study evaluating the implementation of the IBHE's federally funded grant, "A Common Vision: Teacher Quality Enhancement in the Middle Grades in Illinois." Four sites were examined in terms of how they organized to attain the grant goals, and what aspects of organizational culture and leadership…
(En)visioning Success: The Anatomy and Functions of Vision in the Basic Course.
ERIC Educational Resources Information Center
Williams, Glen
The success of the basic course in speech communication depends largely on a vision that values the course and its place in the undergraduate curriculum; emphasizes the necessity of ongoing training and development of teaching assistants and instructors; and values scholarship that will enhance those efforts as well as improve dedication. Vision…
ERIC Educational Resources Information Center
Weinberg, Adam S.
1999-01-01
Explores an effort by Colgate University (New York) to enhance economic development in two low-income hamlets in New York through community-visioning programs. Describes the process of community visioning and shows how Colgate has been instrumental in its promotion. Argues that universities are better situated than governments or nonprofits to…
An Analysis of Helicopter Pilot Scan Techniques While Flying at Low Altitudes and High Speed
2012-09-01
Manager SV Synthetic Vision TFH Total Flight Hours TOFT Tactical Operational Flight Trainer VFR Visual Flight Rules VMC Visual Meteorological...Crognale, 2008). Recently, the use of synthetic vision (SV) and a heads-up- display (HUD) have been a topic of discussion in the aviation community... Synthetic vision uses external cameras to provide the pilot with an enhanced view of the outside world, usually with the assistance of night vision
GeoVision Exploration Task Force Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doughty, Christine; Dobson, Patrick F.; Wall, Anna
The GeoVision study effort included ground-breaking, detailed research on current and future market conditions and geothermal technologies in order to forecast and quantify the electric and non-electric deployment potentials under a range of scenarios, in addition to their impacts on the Nation’s jobs, economy and environment. Coordinated by the U.S. Department of Energy’s (DOE’s) Geothermal Technologies Office (GTO), the GeoVision study development relied on the collection, modeling, and analysis of robust datasets through seven national laboratory partners, which were organized into eight technical Task Force groups. The purpose of this report is to provide a central repository for the researchmore » conducted by the Exploration Task Force. The Exploration Task Force consists of four individuals representing three national laboratories: Patrick Dobson (task lead) and Christine Doughty of Lawrence Berkeley National Laboratory, Anna Wall of National Renewable Energy Laboratory, Travis McLing of Idaho National Laboratory, and Chester Weiss of Sandia National Laboratories. As part of the GeoVision analysis, our team conducted extensive scientific and financial analyses on a number of topics related to current and future geothermal exploration methods. The GeoVision Exploration Task Force complements the drilling and resource technology investigations conducted as part of the Reservoir Maintenance and Development Task Force. The Exploration Task Force however has focused primarily on early stage R&D technologies in exploration and confirmation drilling, along with an evaluation of geothermal financing challenges and assumptions, and innovative “blue-sky” technologies. This research was used to develop geothermal resource supply curves (through the use of GETEM) for use in the ReEDS capacity expansion modeling that determines geothermal technology deployment potential. It also catalogues and explores the large array of early-stage R&D technologies with the potential to dramatically reduce exploration and geothermal development costs, forming the basis of the GeoVision Technology Improvement (TI) scenario. These modeling topics are covered in detail in Potential to Penetration task force report. Most of the research contained herein has been published in peer-reviewed papers or conference proceedings and are cited and referenced accordingly. The sections that follow provide a central repository for all of the research findings of the Exploration and Confirmation Task Force. In summary, it provides a comprehensive discussion of Engineered Geothermal Systems (EGS) and associated technology challenges, the risks and costs of conducting geothermal exploration, a review of existing government efforts to date in advancing early-stage R&D in both exploration and EGS technologies, as well as a discussion of promising and innovative technologies and implementation of blue-sky concepts that could significantly reduce costs, lower risks, and shorten the time needed to explore and develop geothermal resources of all types.« less
NASA Astrophysics Data System (ADS)
Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien
2012-09-01
This study develops a body motion interactive system with computer vision technology. This application combines interactive games, art performing, and exercise training system. Multiple image processing and computer vision technologies are used in this study. The system can calculate the characteristics of an object color, and then perform color segmentation. When there is a wrong action judgment, the system will avoid the error with a weight voting mechanism, which can set the condition score and weight value for the action judgment, and choose the best action judgment from the weight voting mechanism. Finally, this study estimated the reliability of the system in order to make improvements. The results showed that, this method has good effect on accuracy and stability during operations of the human-machine interface of the sports training system.
NASA Astrophysics Data System (ADS)
McKinley, John B.; Pierson, Roger; Ertem, M. C.; Krone, Norris J., Jr.; Cramer, James A.
2008-04-01
Flight tests were conducted at Greenbrier Valley Airport (KLWB) and Easton Municipal Airport / Newnam Field (KESN) in a Cessna 402B aircraft using a head-up display (HUD) and a Norris Electro Optical Systems Corporation (NEOC) developmental ultraviolet (UV) sensor. These flights were sponsored by NEOC under a Federal Aviation Administration program, and the ultraviolet concepts, technology, system mechanization, and hardware for landing during low visibility landing conditions have been patented by NEOC. Imagery from the UV sensor, HUD guidance cues, and out-the-window videos were separately recorded at the engineering workstation for each approach. Inertial flight path data were also recorded. Various configurations of portable UV emitters were positioned along the runway edge and threshold. The UV imagery of the runway outline was displayed on the HUD along with guidance generated from the mission computer. Enhanced Flight Vision System (EFVS) approaches with the UV sensor were conducted from the initial approach fix to the ILS decision height in both VMC and IMC. Although the availability of low visibility conditions during the flight test period was limited, results from previous fog range testing concluded that UV EFVS has the performance capability to penetrate CAT II runway visual range obscuration. Furthermore, independent analysis has shown that existing runway light emit sufficient UV radiation without the need for augmentation other than lens replacement with UV transmissive quartz lenses. Consequently, UV sensors should qualify as conforming to FAA requirements for EFVS approaches. Combined with Synthetic Vision System (SVS), UV EFVS would function as both a precision landing aid, as well as an integrity monitor for the GPS and SVS database.
Part-Task Simulation of Synthetic and Enhanced Vision Concepts for Lunar Landing
NASA Technical Reports Server (NTRS)
Arthur, Jarvis J., III; Bailey, Randall E.; Jackson, E. Bruce; Williams, Steven P.; Kramer, Lynda J.; Barnes, James R.
2010-01-01
During Apollo, the constraints placed by the design of the Lunar Module (LM) window for crew visibility and landing trajectory were a major problem. Lunar landing trajectories were tailored to provide crew visibility using nearly 70 degrees look-down angle from the canted LM windows. Apollo landings were scheduled only at specific times and locations to provide optimal sunlight on the landing site. The complications of trajectory design and crew visibility are still a problem today. Practical vehicle designs for lunar lander missions using optimal or near-optimal fuel trajectories render the natural vision of the crew from windows inadequate for the approach and landing task. Further, the sun angles for the desirable landing areas in the lunar polar regions create visually powerful, season-long shadow effects. Fortunately, Synthetic and Enhanced Vision (S/EV) technologies, conceived and developed in the aviation domain, may provide solutions to this visibility problem and enable additional benefits for safer, more efficient lunar operations. Piloted simulation evaluations have been conducted to assess the handling qualities of the various lunar landing concepts, including the influence of cockpit displays and the informational data and formats. Evaluation pilots flew various landing scenarios with S/EV displays. For some of the evaluation trials, an eye glasses-mounted, monochrome monocular display, coupled with head tracking, was worn. The head-worn display scene consisted of S/EV fusion concepts. The results of this experiment showed that a head-worn system did not increase the pilot s workload when compared to using just the head-down displays. As expected, the head-worn system did not provide an increase in performance measures. Some pilots commented that the head-worn system provided greater situational awareness compared to just head-down displays.
Part-task simulation of synthetic and enhanced vision concepts for lunar landing
NASA Astrophysics Data System (ADS)
Arthur, Jarvis J., III; Bailey, Randall E.; Jackson, E. Bruce; Barnes, James R.; Williams, Steven P.; Kramer, Lynda J.
2010-04-01
During Apollo, the constraints placed by the design of the Lunar Module (LM) window for crew visibility and landing trajectory were "a major problem." Lunar landing trajectories were tailored to provide crew visibility using nearly 70 degrees look-down angle from the canted LM windows. Apollo landings were scheduled only at specific times and locations to provide optimal sunlight on the landing site. The complications of trajectory design and crew visibility are still a problem today. Practical vehicle designs for lunar lander missions using optimal or near-optimal fuel trajectories render the natural vision of the crew from windows inadequate for the approach and landing task. Further, the sun angles for the desirable landing areas in the lunar polar regions create visually powerful, season-long shadow effects. Fortunately, Synthetic and Enhanced Vision (S/EV) technologies, conceived and developed in the aviation domain, may provide solutions to this visibility problem and enable additional benefits for safer, more efficient lunar operations. Piloted simulation evaluations have been conducted to assess the handling qualities of the various lunar landing concepts, including the influence of cockpit displays and the informational data and formats. Evaluation pilots flew various landing scenarios with S/EV displays. For some of the evaluation trials, an eye glasses-mounted, monochrome monocular display, coupled with head tracking, was worn. The head-worn display scene consisted of S/EV fusion concepts. The results of this experiment showed that a head-worn system did not increase the pilot's workload when compared to using just the head-down displays. As expected, the head-worn system did not provide an increase in performance measures. Some pilots commented that the head-worn system provided greater situational awareness compared to just head-down displays.
SunShot Vision Study: February 2012 (Book)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2012-02-01
The objective of the SunShot Vision Study is to provide an in-depth assessment of the potential for solar technologies to meet a significant share of electricity demand in the United States during the next several decades. Specifically, it explores a future in which the price of solar technologies declines by about 75% between 2010 and 2020 - in line with the U.S. Department of Energy (DOE) SunShot Initiative's targets.
Location Technologies for Apparel Assembly
1991-09-01
ADDRESS (Stry, State, and ZIP Code) School of Textile & Fiber Engineering Georgia Institute of Technology Atlanta, Georgia 30332-0295 206 O’Keefe...at a cost of less than $500. A review is also given of state-of-the- art vision systems. These systems have the nccessry- accuracy and precision for...of state-of-the- art vision systems. These systems have the necessary accuracy and precision for apparel manufacturing applications and could
ERIC Educational Resources Information Center
Supalo, Cary A.; Humphrey, Jennifer R.; Mallouk, Thomas E.; Wohlers, H. David; Carlsen, William S.
2016-01-01
To determine whether a suite of audible adaptive technologies would increase the hands-on participation of high school students with blindness or low vision in chemistry and physics courses, data were examined from a multi-year field study conducted with students in mainstream classrooms at secondary schools across the United States. The students…
Stereo 3-D Vision in Teaching Physics
ERIC Educational Resources Information Center
Zabunov, Svetoslav
2012-01-01
Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…
ERIC Educational Resources Information Center
Williams, Michael D.; Ray, Christopher T.; Griffith, Jennifer; De l'Aune, William
2011-01-01
The promise of novel technological strategies and solutions to assist persons with visual impairments (that is, those who are blind or have low vision) is frequently discussed and held to be widely beneficial in countless applications and daily activities. One such approach involving a tactile-vision sensory substitution modality as a mechanism to…
Temporal Properties of Liquid Crystal Displays: Implications for Vision Science Experiments
Elze, Tobias; Tanner, Thomas G.
2012-01-01
Liquid crystal displays (LCD) are currently replacing the previously dominant cathode ray tubes (CRT) in most vision science applications. While the properties of the CRT technology are widely known among vision scientists, the photometric and temporal properties of LCDs are unfamiliar to many practitioners. We provide the essential theory, present measurements to assess the temporal properties of different LCD panel types, and identify the main determinants of the photometric output. Our measurements demonstrate that the specifications of the manufacturers are insufficient for proper display selection and control for most purposes. Furthermore, we show how several novel display technologies developed to improve fast transitions or the appearance of moving objects may be accompanied by side–effects in some areas of vision research. Finally, we unveil a number of surprising technical deficiencies. The use of LCDs may cause problems in several areas in vision science. Aside from the well–known issue of motion blur, the main problems are the lack of reliable and precise onsets and offsets of displayed stimuli, several undesirable and uncontrolled components of the photometric output, and input lags which make LCDs problematic for real–time applications. As a result, LCDs require extensive individual measurements prior to applications in vision science. PMID:22984458
Ergonomic Enhancement for Older Readers with Low Vision
ERIC Educational Resources Information Center
Watson, Gale R.; Ramsey, Vincent; De l'Aune, William; Elk, Arona
2004-01-01
This study found that the provision of ergonomic workstations for 12 older persons with age-related macular degeneration who used low vision devices significantly increased the participants' reading speed and decreased their discomfort when reading.
Neurobionics and the brain-computer interface: current applications and future horizons.
Rosenfeld, Jeffrey V; Wong, Yan Tat
2017-05-01
The brain-computer interface (BCI) is an exciting advance in neuroscience and engineering. In a motor BCI, electrical recordings from the motor cortex of paralysed humans are decoded by a computer and used to drive robotic arms or to restore movement in a paralysed hand by stimulating the muscles in the forearm. Simultaneously integrating a BCI with the sensory cortex will further enhance dexterity and fine control. BCIs are also being developed to: provide ambulation for paraplegic patients through controlling robotic exoskeletons; restore vision in people with acquired blindness; detect and control epileptic seizures; and improve control of movement disorders and memory enhancement. High-fidelity connectivity with small groups of neurons requires microelectrode placement in the cerebral cortex. Electrodes placed on the cortical surface are less invasive but produce inferior fidelity. Scalp surface recording using electroencephalography is much less precise. BCI technology is still in an early phase of development and awaits further technical improvements and larger multicentre clinical trials before wider clinical application and impact on the care of people with disabilities. There are also many ethical challenges to explore as this technology evolves.
Diabetes Self-Management Education Enhanced by the Low Vision Professional
ERIC Educational Resources Information Center
Sokol-McKay, Debra A.
2007-01-01
Diabetes currently affects 20.8 million people in the United States and is the leading cause of blindness in people between the ages of 20 and 74 years. The author uses a fictional but typical example to explain the ways in which low vision specialists can improve the diabetes self-management program of a person with low vision and demonstrates…
Understanding and preventing computer vision syndrome.
Loh, Ky; Redd, Sc
2008-01-01
The invention of computer and advancement in information technology has revolutionized and benefited the society but at the same time has caused symptoms related to its usage such as ocular sprain, irritation, redness, dryness, blurred vision and double vision. This cluster of symptoms is known as computer vision syndrome which is characterized by the visual symptoms which result from interaction with computer display or its environment. Three major mechanisms that lead to computer vision syndrome are extraocular mechanism, accommodative mechanism and ocular surface mechanism. The visual effects of the computer such as brightness, resolution, glare and quality all are known factors that contribute to computer vision syndrome. Prevention is the most important strategy in managing computer vision syndrome. Modification in the ergonomics of the working environment, patient education and proper eye care are crucial in managing computer vision syndrome.
Computer Vision Assisted Virtual Reality Calibration
NASA Technical Reports Server (NTRS)
Kim, W.
1999-01-01
A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.
Rapid matching of stereo vision based on fringe projection profilometry
NASA Astrophysics Data System (ADS)
Zhang, Ruihua; Xiao, Yi; Cao, Jian; Guo, Hongwei
2016-09-01
As the most important core part of stereo vision, there are still many problems to solve in stereo matching technology. For smooth surfaces on which feature points are not easy to extract, this paper adds a projector into stereo vision measurement system based on fringe projection techniques, according to the corresponding point phases which extracted from the left and right camera images are the same, to realize rapid matching of stereo vision. And the mathematical model of measurement system is established and the three-dimensional (3D) surface of the measured object is reconstructed. This measurement method can not only broaden application fields of optical 3D measurement technology, and enrich knowledge achievements in the field of optical 3D measurement, but also provide potential possibility for the commercialized measurement system in practical projects, which has very important scientific research significance and economic value.
2003-01-22
ProVision Technologies, a NASA research partnership center at Sternis Space Center in Mississippi, has developed a new hyperspectral imaging (HSI) system that is much smaller than the original large units used aboard remote sensing aircraft and satellites. The new apparatus is about the size of a breadbox. Health-related applications of HSI include scanning chickens during processing to help prevent contaminated food from getting to the table. ProVision is working with Sanderson Farms of Mississippi and the U.S. Department of Agriculture. ProVision has a record in its spectral library of the unique spectral signature of fecal contamination, so chickens can be scanned and those with a positive reading can be separated. HSI sensors can also determine the quantity of surface contamination. Research in this application is quite advanced, and ProVision is working on a licensing agreement for the technology. The potential for future use of this equipment in food processing and food safety is enormous.
Next-generation Digital Earth.
Goodchild, Michael F; Guo, Huadong; Annoni, Alessandro; Bian, Ling; de Bie, Kees; Campbell, Frederick; Craglia, Max; Ehlers, Manfred; van Genderen, John; Jackson, Davina; Lewis, Anthony J; Pesaresi, Martino; Remetey-Fülöpp, Gábor; Simpson, Richard; Skidmore, Andrew; Wang, Changlin; Woodgate, Peter
2012-07-10
A speech of then-Vice President Al Gore in 1998 created a vision for a Digital Earth, and played a role in stimulating the development of a first generation of virtual globes, typified by Google Earth, that achieved many but not all the elements of this vision. The technical achievements of Google Earth, and the functionality of this first generation of virtual globes, are reviewed against the Gore vision. Meanwhile, developments in technology continue, the era of "big data" has arrived, the general public is more and more engaged with technology through citizen science and crowd-sourcing, and advances have been made in our scientific understanding of the Earth system. However, although Google Earth stimulated progress in communicating the results of science, there continue to be substantial barriers in the public's access to science. All these factors prompt a reexamination of the initial vision of Digital Earth, and a discussion of the major elements that should be part of a next generation.
Hyperspectral Imaging of fecal contamination on chickens
NASA Technical Reports Server (NTRS)
2003-01-01
ProVision Technologies, a NASA research partnership center at Sternis Space Center in Mississippi, has developed a new hyperspectral imaging (HSI) system that is much smaller than the original large units used aboard remote sensing aircraft and satellites. The new apparatus is about the size of a breadbox. Health-related applications of HSI include scanning chickens during processing to help prevent contaminated food from getting to the table. ProVision is working with Sanderson Farms of Mississippi and the U.S. Department of Agriculture. ProVision has a record in its spectral library of the unique spectral signature of fecal contamination, so chickens can be scanned and those with a positive reading can be separated. HSI sensors can also determine the quantity of surface contamination. Research in this application is quite advanced, and ProVision is working on a licensing agreement for the technology. The potential for future use of this equipment in food processing and food safety is enormous.
Expanding Human Cognition and Communication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spohrer, Jim; Pierce, Brian M.; Murray, Cherry A.
To be able to chart the most profitable future directions for societal transformation and corresponding scientific research, five multidisciplinary themes focused on major goals have been identified to fulfill the overall motivating vision of convergence described in the previous pages. The first, “Expanding Human Cognition and Communication,” is devoted to technological breakthroughs that have the potential to enhance individuals’ mental and interaction abilities. Throughout the twentieth century, a number of purely psychological techniques were offered for strengthening human character and personality, but evaluation research has generally failed to confirm the alleged benefits of these methods (Druckman and Bjork 1992; 1994).more » Today, there is good reason to believe that a combination of methods, drawing upon varied branches of converging science and technology, would be more effective than attempts that rely upon mental training alone.« less
Implementation of a Campuswide Distributed Mass Storage Service: the Dream Versus Reality
NASA Technical Reports Server (NTRS)
Prahst, Stephen; Armstead, Betty Jo
1996-01-01
In 1990, a technical team at NASA Lewis Research Center, Cleveland, Ohio, began defining a Mass Storage Service to pro- wide long-term archival storage, short-term storage for very large files, distributed Network File System access, and backup services for critical data dw resides on workstations and personal computers. Because of software availability and budgets, the total service was phased in over dm years. During the process of building the service from the commercial technologies available, our Mass Storage Team refined the original vision and learned from the problems and mistakes that occurred. We also enhanced some technologies to better meet the needs of users and system administrators. This report describes our team's journey from dream to reality, outlines some of the problem areas that still exist, and suggests some solutions.
NASA Astrophysics Data System (ADS)
Hotta, Aira; Sasaki, Takashi; Okumura, Haruhiko
2007-02-01
In this paper, we propose a novel display method to realize a high-resolution image in a central visual field for a hyper-realistic head dome projector. The method uses image processing based on the characteristics of human vision, namely, high central visual acuity and low peripheral visual acuity, and pixel shift technology, which is one of the resolution-enhancing technologies for projectors. The projected image with our method is a fine wide-viewing-angle image with high definition in the central visual field. We evaluated the psychological effects of the projected images with our method in terms of sensation of reality. According to the result, we obtained 1.5 times higher resolution in the central visual field and a greater sensation of reality by using our method.
Information Systems for NASA's Aeronautics and Space Enterprises
NASA Technical Reports Server (NTRS)
Kutler, Paul
1998-01-01
The aerospace industry is being challenged to reduce costs and development time as well as utilize new technologies to improve product performance. Information technology (IT) is the key to providing revolutionary solutions to the challenges posed by the increasing complexity of NASA's aeronautics and space missions and the sophisticated nature of the systems that enable them. The NASA Ames vision is to develop technologies enabling the information age, expanding the frontiers of knowledge for aeronautics and space, improving America's competitive position, and inspiring future generations. Ames' missions to accomplish that vision include: 1) performing research to support the American aviation community through the unique integration of computation, experimentation, simulation and flight testing, 2) studying the health of our planet, understanding living systems in space and the origins of the universe, developing technologies for space flight, and 3) to research, develop and deliver information technologies and applications. Information technology may be defined as the use of advance computing systems to generate data, analyze data, transform data into knowledge and to use as an aid in the decision-making process. The knowledge from transformed data can be displayed in visual, virtual and multimedia environments. The decision-making process can be fully autonomous or aided by a cognitive processes, i.e., computational aids designed to leverage human capacities. IT Systems can learn as they go, developing the capability to make decisions or aid the decision making process on the basis of experiences gained using limited data inputs. In the future, information systems will be used to aid space mission synthesis, virtual aerospace system design, aid damaged aircraft during landing, perform robotic surgery, and monitor the health and status of spacecraft and planetary probes. NASA Ames through the Center of Excellence for Information Technology Office is leading the effort in pursuit of revolutionary, IT-based approaches to satisfying NASA's aeronautics and space requirements. The objective of the effort is to incorporate information technologies within each of the Agency's four Enterprises, i.e., Aeronautics and Space Transportation Technology, Earth, Science, Human Exploration and Development of Space and Space Sciences. The end results of these efforts for Enterprise programs and projects should be reduced cost, enhanced mission capability and expedited mission completion.
NASA Technical Reports Server (NTRS)
1995-01-01
Intelligent Vision Systems, Inc. (InVision) needed image acquisition technology that was reliable in bad weather for its TDS-200 Traffic Detection System. InVision researchers used information from NASA Tech Briefs and assistance from Johnson Space Center to finish the system. The NASA technology used was developed for Earth-observing imaging satellites: charge coupled devices, in which silicon chips convert light directly into electronic or digital images. The TDS-200 consists of sensors mounted above traffic on poles or span wires, enabling two sensors to view an intersection; a "swing and sway" feature to compensate for movement of the sensors; a combination of electronic shutter and gain control; and sensor output to an image digital signal processor, still frame video and optionally live video.
Does having a "brand" help you lead others?
Davidhizar, Ruth
Managing expertly requires many qualities one of which is managing a personal brand. The manager must manage the brand with clarity, consistency, the right technologies, skillful human resources, and clear vision in order for the message to be heard and for effectiveness to soar. Where as technology is valuable it is just as important to talk to people face to face to clearly as effectively sell a personal brand and a personal vision (Noonan, 1991).
Evaluation of novel technologies for the miniaturization of flash imaging lidar
NASA Astrophysics Data System (ADS)
Mitev, V.; Pollini, A.; Haesler, J.; Perenzoni, D.; Stoppa, D.; Kolleck, Christian; Chapuy, M.; Kervendal, E.; Pereira do Carmo, João.
2017-11-01
Planetary exploration constitutes one of the main components in the European Space activities. Missions to Mars, Moon and asteroids are foreseen where it is assumed that the human missions shall be preceded by robotic exploitation flights. The 3D vision is recognised as a key enabling technology in the relative proximity navigation of the space crafts, where imaging LiDAR is one of the best candidates for such 3D vision sensor.
Technology Assessment in Support of the Presidential Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Weisbin, Charles R.; Lincoln, William; Mrozinski, Joe; Hua, Hook; Merida, Sofia; Shelton, Kacie; Adumitroaie, Virgil; Derleth, Jason; Silberg, Robert
2006-01-01
This paper discusses the process and results of technology assessment in support of the United States Vision for Space Exploration of the Moon, Mars and Beyond. The paper begins by reviewing the Presidential Vision: a major endeavor in building systems of systems. It discusses why we wish to return to the Moon, and the exploration architecture for getting there safely, sustaining a presence, and safely returning. Next, a methodology for optimal technology investment is proposed with discussion of inputs including a capability hierarchy, mission importance weightings, available resource profiles as a function of time, likelihoods of development success, and an objective function. A temporal optimization formulation is offered, and the investment recommendations presented along with sensitivity analyses. Key questions addressed are sensitivity of budget allocations to cost uncertainties, reduction in available budget levels, and shifting funding within constraints imposed by mission timeline.
Cryogenic Fluid Management Technologies for Advanced Green Propulsion Systems
NASA Technical Reports Server (NTRS)
Motil, Susan M.; Meyer, Michael L.; Tucker, Stephen P.
2007-01-01
In support of the Exploration Vision for returning to the Moon and beyond, NASA and its partners are developing and testing critical cryogenic fluid propellant technologies that will meet the need for high performance propellants on long-term missions. Reliable knowledge of low-gravity cryogenic fluid management behavior is lacking and yet is critical in the areas of tank thermal and pressure control, fluid acquisition, mass gauging, and fluid transfer. Such knowledge can significantly reduce or even eliminate tank fluid boil-off losses for long term missions, reduce propellant launch mass and required on-orbit margins, and simplify vehicle operations. The Propulsion and Cryogenic Advanced Development (PCAD) Project is performing experimental and analytical evaluation of several areas within Cryogenic Fluid Management (CFM) to enable NASA's Exploration Vision. This paper discusses the status of the PCAD CFM technology focus areas relative to the anticipated CFM requirements to enable execution of the Vision for Space Exploration.
Wilson, Kumanan; Atkinson, Katherine M; Deeks, Shelley L; Crowcroft, Natasha S
2016-01-01
Immunization registries or information systems are critical to improving the quality and evaluating the ongoing success of immunization programs. However, the completeness of these systems is challenged by a myriad of factors including the fragmentation of vaccine administration, increasing mobility of individuals, new vaccine development, use of multiple products, and increasingly frequent changes in recommendations. Mobile technologies could offer a solution, which mitigates some of these challenges. Engaging individuals to have more control of their own immunization information using their mobile devices could improve the timeliness and accuracy of data in central immunization information systems. Other opportunities presented by mobile technologies that could be exploited to improve immunization information systems include mobile reporting of adverse events following immunization, the capacity to scan 2D barcodes, and enabling bidirectional communication between individuals and public health officials. Challenges to utilizing mobile solutions include ensuring privacy of data, access, and equity concerns, obtaining consent and ensuring adoption of technology at sufficiently high rates. By empowering individuals with their own health information, mobile technologies can also serve as a mechanism to transfer immunization information as individuals cross local, regional, and national borders. Ultimately, mobile enhanced immunization information systems can help realize the goal of the individual, the healthcare provider, and public health officials always having access to the same immunization information. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
New Directions for NASA's Advanced Life Support Program
NASA Technical Reports Server (NTRS)
Barta, Daniel J.
2006-01-01
Advanced Life Support (ALS), an element of Human Systems Research and Technology s (HSRT) Life Support and Habitation Program (LSH), has been NASA s primary sponsor of life support research and technology development for the agency. Over its history, ALS sponsored tasks across a diverse set of institutions, including field centers, colleges and universities, industry, and governmental laboratories, resulting in numerous publications and scientific articles, patents and new technologies, as well as education and training for primary, secondary and graduate students, including minority serving institutions. Prior to the Vision for Space Exploration (VSE) announced on January 14th, 2004 by the President, ALS had been focused on research and technology development for long duration exploration missions, emphasizing closed-loop regenerative systems, including both biological and physicochemical. Taking a robust and flexible approach, ALS focused on capabilities to enable visits to multiple potential destinations beyond low Earth orbit. ALS developed requirements, reference missions, and assumptions upon which to structure and focus its development program. The VSE gave NASA a plan for steady human and robotic space exploration based on specific, achievable goals. Recently, the Exploration Systems Architecture Study (ESAS) was chartered by NASA s Administrator to determine the best exploration architecture and strategy to implement the Vision. The study identified key technologies required to enable and significantly enhance the reference exploration missions and to prioritize near-term and far-term technology investments. This technology assessment resulted in a revised Exploration Systems Mission Directorate (ESMD) technology investment plan. A set of new technology development projects were initiated as part of the plan s implementation, replacing tasks previously initiated under HSRT and its sister program, Exploration Systems Research and Technology (ESRT). The Exploration Life Support (ELS) Project, under the Exploration Technology Development Program, has recently been initiated to perform directed life support technology development in support of Constellation and the Crew Exploration Vehicle (CEV). ELS) has replaced ALS, with several major differences. Thermal Control Systems have been separated into a new stand alone project (Thermal Systems for Exploration Missions). Tasks in Advanced Food Technology have been relocated to the Human Research Program. Tasks in a new discipline area, Habitation Engineering, have been added. Research and technology development for capabilities required for longer duration stays on the Moon and Mars, including bioregenerative system, have been deferred.
NASA Technical Reports Server (NTRS)
Prinzel, L.J.; Kramer, L.J.
2009-01-01
A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.
Office of the CIO: Setting the Vision
NASA Technical Reports Server (NTRS)
Rinaldi, James J.
2006-01-01
This slide presentation reviews the vision of the Office of JPL's Chief Information Officer for future of information technology (IT) at JPL. This includes a strong working relation with industry to provide cost efficient and effective IT services. This includes a vision of taking desktop to the next level and the process to achieve it and ensuring that JPL becomes a world class IT provider.
Gross, Joshua B; Powers, Amanda K; Davis, Erin M; Kaplan, Shane A
2016-06-30
Cave-dwelling animals evolve various traits as a consequence of life in darkness. Constructive traits (e.g., enhanced non-visual sensory systems) presumably arise under strong selective pressures. The mechanism(s) driving regression of features, however, are not well understood. Quantitative trait locus (QTL) analyses in Astyanax mexicanus Pachón cave x surface hybrids revealed phenotypic effects associated with vision and pigmentation loss. Vision QTL were uniformly associated with reductions in the homozygous cave condition, however pigmentation QTL demonstrated mixed phenotypic effects. This implied pigmentation might be lost through both selective and neutral forces. Alternatively, in this report, we examined if a pleiotropic interaction may exist between vision and pigmentation since vision loss has been shown to result in darker skin in other fish and amphibian model systems. We discovered that certain members of Pachón x surface pedigrees are significantly darker than surface-dwelling fish. All of these "hypermelanic" individuals demonstrated severe visual system malformations suggesting they may be blind. A vision-mediated behavioral assay revealed that these fish, in stark contrast to surface fish, behaved the same as blind cavefish. Further, hypermelanic melanophores were larger and more dendritic in morphology compared to surface fish melanophores. However, hypermelanic melanophores responded normally to melanin-concentrating hormone suggesting darkening stemmed from vision loss, rather than a defect in pigment cell function. Finally, a number of genomic regions were coordinately associated with both reduced vision and increased pigmentation. This work suggests hypermelanism in hybrid Astyanax results from blindness. This finding provides an alternative explanation for phenotypic effect studies of pigmentation QTL as stemming (at least in part) from environmental, rather than exclusively genetic, interactions between two regressive phenotypes. Further, this analysis reveals persistence of background adaptation in Astyanax. As the eye was lost in cave-dwelling forms, enhanced pigmentation resulted. Given the extreme cave environment, which is often devoid of nutrition, enhanced pigmentation may impose an energetic cost. Such an energetic cost would be selected against, as a means of energy conservation. Thus, the pleiotropic interaction between vision loss and pigmentation may reveal an additional selective pressure favoring the loss of pigmentation in cave-dwelling animals.
A Vision for the Next Ten Years for Integrated Ocean Observing Data
NASA Astrophysics Data System (ADS)
Willis, Z. S.
2012-12-01
Ocean observing has come a long way since the Ocean Sciences Decadal Committee met over a decade ago. Since then, our use of the ocean and coast and their vast resources has increased substantially - with increased shipping, fishing, offshore energy development and recreational boating. That increased use has also spearheaded advances in observing systems. Cutting-edge autonomous and remotely operated vehicles scour the surface and travel to depths collecting essential biogeochemical data for better managing our marine resources. Satellites enable the global mapping of practically every physical ocean variable imaginable. A nationally-integrated coastal network of high-frequency radars lines the borders of the U.S. feeding critical navigation, response, and environmental information continuously. Federal, academic, and industry communities have joined in unique partnerships at regional, national, and global levels to address common challenges to monitoring our ocean. The 2002 Workshop, Building Consensus: Toward an Integrated and Sustained Ocean Observing System laid the framework for the current United States Integrated Ocean Observing System (U.S. IOOS). Ten years later, U.S. IOOS has moved from concept to reality, though much work remains to meet the nation's ocean observing needs. Today, new research and technologies, evolving users and user requirements, economic and funding challenges, and diverse institutional mandates all influence the future growth and implementation of U.S. IOOS. In light of this new environment, the Interagency Ocean Observation Committee (IOOC) will host the 2012 Integrated Ocean Observing System Summit in November 2012, providing a forum to develop a comprehensive ocean observing vision for the next decade, utilizing the knowledge and expertise gained by the IOOS-wide community over the past ten years. This effort to bring together ocean observing stakeholders at the regional, national, and global levels to address these challenges going forward: - Enhancing information delivery and integration to save lives, enhance the economy and protect the environment - Disseminating seamless information across regional and national boundaries - Harnessing technological innovations for new frontiers and opportunities The anticipated outcomes of the IOOS Summit include a highlight of the past decade of progress towards an integrated system, revisiting and updating user requirements, an assessment of existing observing system capabilities and gaps, identifying integration challenges/opportunities, and, establishing an U.S. IOOS-community-wide vision for the next 10 years of ocean observing. Most important will be the execution of priorities identified before and during the Summit, carrying them forward into a new decade of an enhanced Integrated and Sustained Ocean Observing System.
3D vision upgrade kit for the TALON robot system
NASA Astrophysics Data System (ADS)
Bodenhamer, Andrew; Pettijohn, Bradley; Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott
2010-02-01
In September 2009 the Fort Leonard Wood Field Element of the US Army Research Laboratory - Human Research and Engineering Directorate, in conjunction with Polaris Sensor Technologies and Concurrent Technologies Corporation, evaluated the objective performance benefits of Polaris' 3D vision upgrade kit for the TALON small unmanned ground vehicle (SUGV). This upgrade kit is a field-upgradable set of two stereo-cameras and a flat panel display, using only standard hardware, data and electrical connections existing on the TALON robot. Using both the 3D vision system and a standard 2D camera and display, ten active-duty Army Soldiers completed seven scenarios designed to be representative of missions performed by military SUGV operators. Mission time savings (6.5% to 32%) were found for six of the seven scenarios when using the 3D vision system. Operators were not only able to complete tasks quicker but, for six of seven scenarios, made fewer mistakes in their task execution. Subjective Soldier feedback was overwhelmingly in support of pursuing 3D vision systems, such as the one evaluated, for fielding to combat units.
NASA Technical Reports Server (NTRS)
1999-01-01
Under an SBIR agreement with Langley Research Center, Vision Micro Design Inc. has developed a line of advanced engine monitoring systems using the latest technology in graphic analog and digital displays. Vision Micro Design is able to meet the needs of today's pilots.
Digital enhancement of multispectral MSS data for maximum image visibility
NASA Technical Reports Server (NTRS)
Algazi, V. R.
1973-01-01
A systematic approach to the enhancement of images has been developed. This approach exploits two principal features involved in the observation of images: the properties of human vision and the statistics of the images being observed. The rationale of the enhancement procedure is as follows: in the observation of some features of interest in an image, the range of objective luminance-chrominance values being displayed is generally limited and does not use the whole perceptual range of vision of the observer. The purpose of the enhancement technique is to expand and distort in a systematic way the grey scale values of each of the multispectral bands making up a color composite, to enhance the average visibility of the features being observed.
Piloting Augmented Reality Technology to Enhance Realism in Clinical Simulation.
Vaughn, Jacqueline; Lister, Michael; Shaw, Ryan J
2016-09-01
We describe a pilot study that incorporated an innovative hybrid simulation designed to increase the perception of realism in a high-fidelity simulation. Prelicensure students (N = 12) cared for a manikin in a simulation lab scenario wearing Google Glass, a wearable head device that projected video into the students' field of vision. Students reported that the simulation gave them confidence that they were developing skills and knowledge to perform necessary tasks in a clinical setting and that they met the learning objectives of the simulation. The video combined visual images and cues seen in a real patient and created a sense of realism the manikin alone could not provide.
A Hybrid Synthetic Vision System for the Tele-operation of Unmanned Vehicles
NASA Technical Reports Server (NTRS)
Delgado, Frank; Abernathy, Mike
2004-01-01
A system called SmartCam3D (SC3D) has been developed to provide enhanced situational awareness for operators of a remotely piloted vehicle. SC3D is a Hybrid Synthetic Vision System (HSVS) that combines live sensor data with information from a Synthetic Vision System (SVS). By combining the dual information sources, the operators are afforded the advantages of each approach. The live sensor system provides real-time information for the region of interest. The SVS provides information rich visuals that will function under all weather and visibility conditions. Additionally, the combination of technologies allows the system to circumvent some of the limitations from each approach. Video sensor systems are not very useful when visibility conditions are hampered by rain, snow, sand, fog, and smoke, while a SVS can suffer from data freshness problems. Typically, an aircraft or satellite flying overhead collects the data used to create the SVS visuals. The SVS data could have been collected weeks, months, or even years ago. To that extent, the information from an SVS visual could be outdated and possibly inaccurate. SC3D was used in the remote cockpit during flight tests of the X-38 132 and 131R vehicles at the NASA Dryden Flight Research Center. SC3D was also used during the operation of military Unmanned Aerial Vehicles. This presentation will provide an overview of the system, the evolution of the system, the results of flight tests, and future plans. Furthermore, the safety benefits of the SC3D over traditional and pure synthetic vision systems will be discussed.
ERIC Educational Resources Information Center
Clark, Gilbert, Ed.
1999-01-01
This theme issue of "InSEA News" focuses on contemporary technology and art education. The articles are: "International Travel and Contemporary Technology" (Gilbert Clark); "Recollections and Visions for Electronic Computing in Art Education" (Guy Hubbard); "Using Technologies in Art Education: A Review of…
NASA Technical Reports Server (NTRS)
Deo, Ravi; Wang, Donny; Bohlen, Jim; Fukuda, Cliff
2008-01-01
A trade study was conducted to determine the suitability of composite structures for weight and life cycle cost savings in primary and secondary structural systems for crew exploration vehicles, crew and cargo launch vehicles, landers, rovers, and habitats. The results of the trade study were used to identify and rank order composite material technologies that can have a near-term impact on a broad range of exploration mission applications. This report recommends technologies that should be developed to enable usage of composites on Vision for Space Exploration vehicles towards mass and life-cycle cost savings.
Lindberg, D A; Humphreys, B L
1995-01-01
The High-Performance Computing and Communications (HPCC) program is a multiagency federal effort to advance the state of computing and communications and to provide the technologic platform on which the National Information Infrastructure (NII) can be built. The HPCC program supports the development of high-speed computers, high-speed telecommunications, related software and algorithms, education and training, and information infrastructure technology and applications. The vision of the NII is to extend access to high-performance computing and communications to virtually every U.S. citizen so that the technology can be used to improve the civil infrastructure, lifelong learning, energy management, health care, etc. Development of the NII will require resolution of complex economic and social issues, including information privacy. Health-related applications supported under the HPCC program and NII initiatives include connection of health care institutions to the Internet; enhanced access to gene sequence data; the "Visible Human" Project; and test-bed projects in telemedicine, electronic patient records, shared informatics tool development, and image systems. PMID:7614116
Indoor Navigation by People with Visual Impairment Using a Digital Sign System
Legge, Gordon E.; Beckmann, Paul J.; Tjan, Bosco S.; Havey, Gary; Kramer, Kevin; Rolkosky, David; Gage, Rachel; Chen, Muzi; Puchakayala, Sravan; Rangarajan, Aravindhan
2013-01-01
There is a need for adaptive technology to enhance indoor wayfinding by visually-impaired people. To address this need, we have developed and tested a Digital Sign System. The hardware and software consist of digitally-encoded signs widely distributed throughout a building, a handheld sign-reader based on an infrared camera, image-processing software, and a talking digital map running on a mobile device. Four groups of subjects—blind, low vision, blindfolded sighted, and normally sighted controls—were evaluated on three navigation tasks. The results demonstrate that the technology can be used reliably in retrieving information from the signs during active mobility, in finding nearby points of interest, and following routes in a building from a starting location to a destination. The visually impaired subjects accurately and independently completed the navigation tasks, but took substantially longer than normally sighted controls. This fully functional prototype system demonstrates the feasibility of technology enabling independent indoor navigation by people with visual impairment. PMID:24116156
NASA Technical Reports Server (NTRS)
Baumeister, Joseph
2009-01-01
NASA has established 6 Themes for Exploration: 1) USE THE MOON: Reduce risks and cost and increase productivity of future missions by testing technologies, systems, and operations in a planetary environment other than the Earth. 2) PURSUE SCIENTIFIC: Engage in scientific investigations of the Moon (solar system processes), on the Moon (use the unique environment), and from the Moon (to study other celestial phenomena). 3) EXTEND PERMANENT HUMAN PRESENCE: Develop the capabilities and infrastructure required to expand the number of people, the duration, the self-sufficiency, and the degree of non-governmental activity. 4) EXPAND EARTH S ECONOMIC SPHERE: Create new markets based on lunar activity that will return economic, technological, and quality-of-life benefits. 5) ENHANCE GLOBAL SECURTIY: Provide a challenging, shared, and peaceful global vision that unites nations in pursuit of common objectives. 6) ENGAGE, INSPIRE: Excite the public about space, encourage students to pursue careers in high technology fields, ensure that individuals enter the workforce with the scientific and technical knowledge necessary to sustain exploration.
Robotic space simulation integration of vision algorithms into an orbital operations simulation
NASA Technical Reports Server (NTRS)
Bochsler, Daniel C.
1987-01-01
In order to successfully plan and analyze future space activities, computer-based simulations of activities in low earth orbit will be required to model and integrate vision and robotic operations with vehicle dynamics and proximity operations procedures. The orbital operations simulation (OOS) is configured and enhanced as a testbed for robotic space operations. Vision integration algorithms are being developed in three areas: preprocessing, recognition, and attitude/attitude rates. The vision program (Rice University) was modified for use in the OOS. Systems integration testing is now in progress.
George, Daniel R; Whitehouse, Peter J
2011-10-01
In the therapeutic void created by over 20 failed Alzheimer's disease drugs during the past decade, a new marketplace of "brain fitness" technology products has emerged. Ranging from video games and computer software to mobile phone apps and hand-held devices, these commercial products promise to maintain and enhance the memory, concentration, visual and spatial skills, verbal recall, and executive functions of individual users. It is instructive to view these products as sociocultural objects deeply imbued with the values and ideologies of our age; consequently, this article offers a critique of the brain fitness technology marketplace while identifying limitations in the capacity of commercial products to realistically improve cognitive health. A broader conception of brain health is presented, going beyond the reductionism of the commercial brain fitness marketplace and asking how our most proximate relationships and local communities can play a role in supporting cognitive and psychosocial well-being. This vision is grounded in recent experiences at The Intergenerational School in Cleveland, OH, a multigenerational community-oriented learning environment that is implementing brain fitness technology in novel ways.
SOFIA Update and Science Vision
NASA Technical Reports Server (NTRS)
Smith, Kimberly
2017-01-01
I will present an overview of the SOFIA program, its science vision and upcoming plans for the observatory. The talk will feature several scientific highlights since full operations, along with summaries of planned science observations for this coming year, platform enhancements and new instrumentation.
Technologies Render Views of Earth for Virtual Navigation
NASA Technical Reports Server (NTRS)
2012-01-01
On a December night in 1995, 159 passengers and crewmembers died when American Airlines Flight 965 flew into the side of a mountain while in route to Cali, Colombia. A key factor in the tragedy: The pilots had lost situational awareness in the dark, unfamiliar terrain. They had no idea the plane was approaching a mountain until the ground proximity warning system sounded an alarm only seconds before impact. The accident was of the kind most common at the time CFIT, or controlled flight into terrain says Trey Arthur, research aerospace engineer in the Crew Systems and Aviation Operations Branch at NASA s Langley Research Center. In situations such as bad weather, fog, or nighttime flights, pilots would rely on airspeed, altitude, and other readings to get an accurate sense of location. Miscalculations and rapidly changing conditions could contribute to a fully functioning, in-control airplane flying into the ground. To improve aviation safety by enhancing pilots situational awareness even in poor visibility, NASA began exploring the possibilities of synthetic vision creating a graphical display of the outside terrain on a screen inside the cockpit. How do you display a mountain in the cockpit? You have to have a graphics-powered computer, a terrain database you can render, and an accurate navigation solution, says Arthur. In the mid-1990s, developing GPS technology offered a means for determining an aircraft s position in space with high accuracy, Arthur explains. As the necessary technologies to enable synthetic vision emerged, NASA turned to an industry partner to develop the terrain graphical engine and database for creating the virtual rendering of the outside environment.
Personal vision: enhancing work engagement and the retention of women in the engineering profession.
Buse, Kathleen R; Bilimoria, Diana
2014-01-01
This study examines how personal vision enhances work engagement and the retention of women in the engineering profession. Using a mixed method approach to understand the factors related to the retention of women in the engineering profession, we first interviewed women who persisted and women who opted out of the profession (Buse and Bilimoria, 2014). In these rich stories, we found that women who persisted had a personal vision that included their profession, and that this personal vision enabled them to overcome the bias, barriers and discrimination in the engineering workplace. To validate this finding on a larger population, we developed a scale to measure one's personal vision conceptualized as the ideal self (Boyatzis and Akrivou, 2006). The measure was tested in a pilot study and then used in a study of 495 women with engineering degrees. The findings validate that the ideal self is comprised of self-efficacy, hope, optimism and core identity. For these women, the ideal self directly impacts work engagement and work engagement directly impacts career commitment to engineering. The findings add to extant theory related to the role of personal vision and intentional change theory. From a practical perspective, these findings will aid efforts to retain women in engineering and other STEM professions.
Personal vision: enhancing work engagement and the retention of women in the engineering profession
Buse, Kathleen R.; Bilimoria, Diana
2014-01-01
This study examines how personal vision enhances work engagement and the retention of women in the engineering profession. Using a mixed method approach to understand the factors related to the retention of women in the engineering profession, we first interviewed women who persisted and women who opted out of the profession (Buse and Bilimoria, 2014). In these rich stories, we found that women who persisted had a personal vision that included their profession, and that this personal vision enabled them to overcome the bias, barriers and discrimination in the engineering workplace. To validate this finding on a larger population, we developed a scale to measure one's personal vision conceptualized as the ideal self (Boyatzis and Akrivou, 2006). The measure was tested in a pilot study and then used in a study of 495 women with engineering degrees. The findings validate that the ideal self is comprised of self-efficacy, hope, optimism and core identity. For these women, the ideal self directly impacts work engagement and work engagement directly impacts career commitment to engineering. The findings add to extant theory related to the role of personal vision and intentional change theory. From a practical perspective, these findings will aid efforts to retain women in engineering and other STEM professions. PMID:25538652
In-process fault detection for textile fabric production: onloom imaging
NASA Astrophysics Data System (ADS)
Neumann, Florian; Holtermann, Timm; Schneider, Dorian; Kulczycki, Ashley; Gries, Thomas; Aach, Til
2011-05-01
Constant and traceable high fabric quality is of high importance both for technical and for high-quality conventional fabrics. Usually, quality inspection is carried out by trained personal, whose detection rate and maximum period of concentration are limited. Low resolution automated fabric inspection machines using texture analysis were developed. Since 2003, systems for the in-process inspection on weaving machines ("onloom") are commercially available. With these defects can be detected, but not measured quantitative precisely. Most systems are also prone to inevitable machine vibrations. Feedback loops for fault prevention are not established. Technology has evolved since 2003: Camera and computer prices dropped, resolutions were enhanced, recording speeds increased. These are the preconditions for real-time processing of high-resolution images. So far, these new technological achievements are not used in textile fabric production. For efficient use, a measurement system must be integrated into the weaving process; new algorithms for defect detection and measurement must be developed. The goal of the joint project is the development of a modern machine vision system for nondestructive onloom fabric inspection. The system consists of a vibration-resistant machine integration, a high-resolution machine vision system, and new, reliable, and robust algorithms with quality database for defect documentation. The system is meant to detect, measure, and classify at least 80 % of economically relevant defects. Concepts for feedback loops into the weaving process will be pointed out.
The ACT Vision Mission Study Simulation Effort
NASA Astrophysics Data System (ADS)
Wunderer, C. B.; Kippen, R. M.; Bloser, P. F.; Boggs, S. E.; McConnell, M. L.; Hoover, A.; Oberlack, U.; Sturner, S.; Tournear, D.; Weidenspointner, G.; Zoglauer, A.
2004-12-01
The Advanced Compton Telescope (ACT) has been selected by NASA for a one-year "Vision Mission" study. The main goal of this study is to determine feasible instrument configurations to achieve ACT's sensitivity requirements, and to give recommendations for technology development. Space-based instruments operating in the energy range of nuclear lines are subject to complex backgrounds generated by cosmic-ray interactions and diffuse gamma rays; typically measurements are significantly background-dominated. Therefore accurate, detailed simulations of the background induced in different ACT configurations, and exploration of event selection and reconstruction techniques for reducing these backgrounds, are crucial to determining both the capabilities of a given instrument configuration and the technology enhancements that would result in the most significant performance improvements. The ACT Simulation team has assembled a complete suite of tools that allows the generation of particle backgrounds for a given orbit (based on CREME96), their propagation through any instrument and spacecraft geometry (using MGGPOD) - including delayed photon emission from instrument activation - as well as the event selection and reconstruction of Compton-scatter events in the given detectors (MEGAlib). The package can deal with polarized photon beams as well as e.g. anticoincidence shields. We will report on the progress of the ACT simulation effort and the suite of tools used. We thank Elena Novikova at NRL for her contributions, and NASA for support of this research.
Simulation Evaluation of Synthetic Vision as an Enabling Technology for Equivalent Visual Operations
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.
2008-01-01
Enhanced Vision (EV) and synthetic vision (SV) systems may serve as enabling technologies to meet the challenges of the Next Generation Air Transportation System (NextGen) Equivalent Visual Operations (EVO) concept ? that is, the ability to achieve or even improve on the safety of Visual Flight Rules (VFR) operations, maintain the operational tempos of VFR, and even, perhaps, retain VFR procedures independent of actual weather and visibility conditions. One significant challenge lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. A piloted simulation experiment was conducted to evaluate the effects of the presence or absence of Synthetic Vision, the location of this information during an instrument approach (i.e., on a Head-Up or Head-Down Primary Flight Display), and the type of airport lighting information on landing minima. The quantitative data from this experiment were analyzed to begin the definition of performance-based criteria for all-weather approach and landing operations. Objective results from the present study showed that better approach performance was attainable with the head-up display (HUD) compared to the head-down display (HDD). A slight performance improvement in HDD performance was shown when SV was added, as the pilots descended below 200 ft to a 100 ft decision altitude, but this performance was not tested for statistical significance (nor was it expected to be statistically significant). The touchdown data showed that regardless of the display concept flown (SV HUD, Baseline HUD, SV HDD, Baseline HDD) a majority of the runs were within the performance-based defined approach and landing criteria in all the visibility levels, approach lighting systems, and decision altitudes tested. For this visual flight maneuver, RVR appeared to be the most significant influence in touchdown performance. The approach lighting system clearly impacted the pilot's ability to descend to 100 ft height above touchdown based on existing Federal Aviation Regulation (FAR) 91.175 using a 200 ft decision height, but did not appear to influence touchdown performance or approach path maintenance
Converging technologies in higher education: paradigm for the "new" liberal arts?
Balmer, Robert T
2006-12-01
This article discusses the historic relationship between the practical arts (technology) and the mental (liberal) arts, suggesting that Converging Technologies is a new higher education paradigm that integrates the arts, humanities, and sciences with modern technology. It explains that the paradigm really includes all fields in higher education from philosophy to art to music to modern languages and beyond. To implement a transformation of this magnitude, it is necessary to understand the psychology of change in academia. Union College in Schenectady, New York, implemented a Converging Technologies Educational Paradigm in five steps: (1) create a compelling vision, (2) communicate the vision, (3) empower the faculty, (4) create short-term successes, and (5) institutionalize the results. This case study of Union College demonstrates it is possible to build a pillar of educational excellence based on Converging Technologies.
The organising vision for telehealth and telecare: discourse analysis.
Greenhalgh, Trisha; Procter, Rob; Wherton, Joe; Sugarhood, Paul; Shaw, Sara
2012-01-01
To (1) map how different stakeholders understand telehealth and telecare technologies and (2) explore the implications for development and implementation of telehealth and telecare services. Discourse analysis. 68 publications representing diverse perspectives (academic, policy, service, commercial and lay) on telehealth and telecare plus field notes from 10 knowledge-sharing events. Following a familiarisation phase (browsing and informal interviews), we studied a systematic sample of texts in detail. Through repeated close reading, we identified assumptions, metaphors, storylines, scenarios, practices and rhetorical positions. We added successive findings to an emerging picture of the whole. Telehealth and telecare technologies featured prominently in texts on chronic illness and ageing. There was no coherent organising vision. Rather, four conflicting discourses were evident and engaged only minimally with one another's arguments. Modernist discourse presented a futuristic utopian vision in which assistive technologies, implemented at scale, would enable society to meet its moral obligations to older people by creating a safe 'smart' home environment where help was always at hand, while generating efficiency savings. Humanist discourse emphasised the uniqueness and moral worth of the individual and tailoring to personal and family context; it considered that technologies were only sometimes fit for purpose and could create as well as solve problems. Political economy discourse envisaged a techno-economic complex of powerful vested interests driving commodification of healthcare and diversion of public funds into private business. Change management discourse recognised the complicatedness of large-scale technology programmes and emphasised good project management and organisational processes. Introduction of telehealth and telecare is hampered because different stakeholders hold different assumptions, values and world views, 'talk past' each other and compete for recognition and resources. If investments in these technologies are to bear fruit, more effective inter-stakeholder dialogue must occur to establish an organising vision that better accommodates competing discourses.
Image processing for flight crew enhanced situation awareness
NASA Technical Reports Server (NTRS)
Roberts, Barry
1993-01-01
This presentation describes the image processing work that is being performed for the Enhanced Situational Awareness System (ESAS) application. Specifically, the presented work supports the Enhanced Vision System (EVS) component of ESAS.
A Two-Century-Old Vision for the Future.
ERIC Educational Resources Information Center
Fuchs, Ira H.
1988-01-01
Discusses the necessity of acquiring and developing technological advances for use in the classroom to provide a vision for the future. Topics discussed include microcomputers; workstations; software; networks; cooperative endeavors in industry and academia; artificial intelligence; and the necessity for financial support. (LRW)
Division Reports from the 2005 AECT Convention
ERIC Educational Resources Information Center
TechTrends: Linking Research & Practice to Improve Learning, 2006
2006-01-01
The Association for Educational Communication & Technology held its International Convention in Orlando, Florida, October 18-22, 2005. The convention theme was "Exploring the Vision". Division report highlights include: (1) Reflections on a Convention: A Vision Explored (Wes Miller); (2) Definition and Terminology Committee (Al…
NASA Technical Reports Server (NTRS)
Dorais, Christopher M.
2004-01-01
The Vision Research Lab at NASA John Glenn Research Center is headed by Dr. Rafat Ansari. Dr. Ansari and other researchers have developed technologies that primarily use laser and fiber optics to non-invasively detect different ailments and diseases of the eye. One of my goals as a LERCIP intern and ACCESS scholar for the 2004 summer is to inform other NASA employees, researchers and the general public about these technologies through the development of a website. The website incorporates the theme that the eye is a window to the body. Thus by investigating the processes of the eye, we can better understand and diagnosis different ailments and diseases. These ailments occur in not only earth bound humans, but astronauts as well as a result of exposure to elevated levels of radiation and microgravity conditions. Thus the technologies being developed at the Vision Research Lab are invaluable to humans on Earth in addition to those astronauts in space. One of my first goals was to research the technologies being developed at the lab. The first several days were spent immersing myself in the various articles, journals and reports about the theories behind Dynamic Light Scattering, Laser Doppler Flowmetry, Autofluoresence, Raman Spectroscopy, Polarimetry and Oximetry. Interviews with the other researchers proved invaluable to help understand these theories as well gain hands on experience with the devices being developed using these technologies. The rest of the Vision Research Team and I sat down and discussed how the overall website should be presented. Combining this information with the knowledge of the theories and applications of the hardware being developed, I worked out different ideas to present this information. I quickly learned Paint Shop Pro 8 and FrontPage 2002, as well as using online tutorials and other resources to help design an effective website. The Vision Research Lab website incorporates the anatomy and physiology of the eye, different diseases that affect the eye and the technologies being develop at the lab to help diagnosis these diseases. It also includes background information on Dr. Ansari as well as other researchers involved in the lab and it includes segments on patents, awards and achievements. There are links to help viewers navigate to internal and external websites to further investigate different ideas and hrther understand the implications of these technologies at being developed.
A Path Model of Factors Affecting Secondary School Students' Technological Literacy
ERIC Educational Resources Information Center
Avsec, Stanislav; Jamšek, Janez
2018-01-01
Technological literacy defines a competitive vision for technology education. Working together with competitive supremacy, technological literacy shapes the actions of technology educators. Rationalised by the dictates of industry, technological literacy was constructed as a product of the marketplace. There are many models that visualise…
Code of Federal Regulations, 2010 CFR
2010-01-01
... with impaired vision), crutches, and walkers; and (3) Other assistive devices for stowage or use within... syringes or auto-injectors, vision-enhancing devices, and POCs, ventilators and respirators that use non...
Progress toward the maintenance and repair of degenerating retinal circuitry.
Vugler, Anthony A
2010-01-01
Retinal diseases such as age-related macular degeneration and retinitis pigmentosa remain major causes of severe vision loss in humans. Clinical trials for treatment of retinal degenerations are underway and advancements in our understanding of retinal biology in health/disease have implications for novel therapies. A review of retinal biology is used to inform a discussion of current strategies to maintain/repair neural circuitry in age-related macular degeneration, retinitis pigmentosa, and Type 2 Leber congenital amaurosis. In age-related macular degeneration/retinitis pigmentosa, a progressive loss of rods/cones results in corruption of bipolar cell circuitry, although retinal output neurons/photoreceptive melanopsin cells survive. Visual function can be stabilized/enhanced after treatment in age-related macular degeneration, but in advanced degenerations, reorganization of retinal circuitry may preclude attempts to restore cone function. In Type 2 Leber congenital amaurosis, useful vision can be restored by gene therapy where central cones survive. Remarkable progress has been made in restoring vision to rodents using light-responsive ion channels inserted into bipolar cells/retinal ganglion cells. Advances in genetic, cellular, and prosthetic therapies show varying degrees of promise for treating retinal degenerations. While functional benefits can be obtained after early therapeutic interventions, efforts should be made to minimize circuitry changes as soon as possible after rod/cone loss. Advances in retinal anatomy/physiology and genetic technologies should allow refinement of future reparative strategies.
Smart unattended sensor networks with scene understanding capabilities
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2006-05-01
Unattended sensor systems are new technologies that are supposed to provide enhanced situation awareness to military and law enforcement agencies. A network of such sensors cannot be very effective in field conditions only if it can transmit visual information to human operators or alert them on motion. In the real field conditions, events may happen in many nodes of a network simultaneously. But the real number of control personnel is always limited, and attention of human operators can be simply attracted to particular network nodes, while more dangerous threat may be unnoticed at the same time in the other nodes. Sensor networks would be more effective if equipped with a system that is similar to human vision in its abilities to understand visual information. Human vision uses for that a rough but wide peripheral system that tracks motions and regions of interests, narrow but precise foveal vision that analyzes and recognizes objects in the center of selected region of interest, and visual intelligence that provides scene and object contexts and resolves ambiguity and uncertainty in the visual information. Biologically-inspired Network-Symbolic models convert image information into an 'understandable' Network-Symbolic format, which is similar to relational knowledge models. The equivalent of interaction between peripheral and foveal systems in the network-symbolic system is achieved via interaction between Visual and Object Buffers and the top-level knowledge system.
Vision in our three-dimensional world
2016-01-01
Many aspects of our perceptual experience are dominated by the fact that our two eyes point forward. Whilst the location of our eyes leaves the environment behind our head inaccessible to vision, co-ordinated use of our two eyes gives us direct access to the three-dimensional structure of the scene in front of us, through the mechanism of stereoscopic vision. Scientific understanding of the different brain regions involved in stereoscopic vision and three-dimensional spatial cognition is changing rapidly, with consequent influences on fields as diverse as clinical practice in ophthalmology and the technology of virtual reality devices. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269595
ERIC Educational Resources Information Center
Vail, Kathleen
2003-01-01
Practitioners and researchers in the education technology field asked to give their vision of the future list laptop computers, personal digital assistants, electronic testing, wireless networking, and multimedia technology among the technology advances headed soon for schools. A sidebar lists 12 online resources. (MLF)
NASA Technical Reports Server (NTRS)
2003-01-01
Many people are familiar with the popular science fiction series Star Trek: The Next Generation, a show featuring a blind character named Geordi La Forge, whose visor-like glasses enable him to see. What many people do not know is that a product very similar to Geordi's glasses is available to assist people with vision conditions, and a NASA engineer's expertise contributed to its development. The JORDY(trademark) (Joint Optical Reflective Display) device, designed and manufactured by a privately-held medical device company known as Enhanced Vision, enables people with low vision to read, write, and watch television. Low vision, which includes macular degeneration, diabetic retinopathy, and glaucoma, describes eyesight that is 20/70 or worse, and cannot be fully corrected with conventional glasses.
Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.
2006-01-01
Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real-time and displayed on monitors on-board the aircraft. With proper processing the camera system can provide better-than- human-observed imagery particularly during poor visibility conditions. However, to obtain this goal requires several different stages of processing including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion
Enhancement of vision by monocular deprivation in adult mice.
Prusky, Glen T; Alam, Nazia M; Douglas, Robert M
2006-11-08
Plasticity of vision mediated through binocular interactions has been reported in mammals only during a "critical" period in juvenile life, wherein monocular deprivation (MD) causes an enduring loss of visual acuity (amblyopia) selectively through the deprived eye. Here, we report a different form of interocular plasticity of vision in adult mice in which MD leads to an enhancement of the optokinetic response (OKR) selectively through the nondeprived eye. Over 5 d of MD, the spatial frequency sensitivity of the OKR increased gradually, reaching a plateau of approximately 36% above pre-deprivation baseline. Eye opening initiated a gradual decline, but sensitivity was maintained above pre-deprivation baseline for 5-6 d. Enhanced function was restricted to the monocular visual field, notwithstanding the dependence of the plasticity on binocular interactions. Activity in visual cortex ipsilateral to the deprived eye was necessary for the characteristic induction of the enhancement, and activity in visual cortex contralateral to the deprived eye was necessary for its maintenance after MD. The plasticity also displayed distinct learning-like properties: Active testing experience was required to attain maximal enhancement and for enhancement to persist after MD, and the duration of enhanced sensitivity after MD was extended by increasing the length of MD, and by repeating MD. These data show that the adult mouse visual system maintains a form of experience-dependent plasticity in which the visual cortex can modulate the normal function of subcortical visual pathways.
Energy Horizons: A Science and Technology Vision for Air Force Energy
2012-04-01
target energy as a center of gravity. To date , more than 3,000 American Sol- diers and contractors have been killed or wounded protecting supply...failing to comply with a collection of information if it does not display a currently valid OMB control number. 1. REPORT DATE APR 2012 2. REPORT TYPE...3. DATES COVERED 00-00-2012 to 00-00-2012 4. TITLE AND SUBTITLE Energy Horizons: A Science and Technology Vision for Air Force Energy 5a
NASA Astrophysics Data System (ADS)
Glaser, Steffen J.; Boscain, Ugo; Calarco, Tommaso; Koch, Christiane P.; Köckenberger, Walter; Kosloff, Ronnie; Kuprov, Ilya; Luy, Burkhard; Schirmer, Sophie; Schulte-Herbrüggen, Thomas; Sugny, Dominique; Wilhelm, Frank K.
2015-12-01
It is control that turns scientific knowledge into useful technology: in physics and engineering it provides a systematic way for driving a dynamical system from a given initial state into a desired target state with minimized expenditure of energy and resources. As one of the cornerstones for enabling quantum technologies, optimal quantum control keeps evolving and expanding into areas as diverse as quantum-enhanced sensing, manipulation of single spins, photons, or atoms, optical spectroscopy, photochemistry, magnetic resonance (spectroscopy as well as medical imaging), quantum information processing and quantum simulation. In this communication, state-of-the-art quantum control techniques are reviewed and put into perspective by a consortium of experts in optimal control theory and applications to spectroscopy, imaging, as well as quantum dynamics of closed and open systems. We address key challenges and sketch a roadmap for future developments.
Three-dimensional surface imaging system for assessing human obesity
NASA Astrophysics Data System (ADS)
Xu, Bugao; Yu, Wurong; Yao, Ming; Pepper, M. Reese; Freeland-Graves, Jeanne H.
2009-10-01
The increasing prevalence of obesity suggests a need to develop a convenient, reliable, and economical tool for assessment of this condition. Three-dimensional (3-D) body surface imaging has emerged as an exciting technology for the estimation of body composition. We present a new 3-D body imaging system, which is designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology is used to satisfy the requirement for a simple hardware setup and fast image acquisition. The portability of the system is created via a two-stand configuration, and the accuracy of body volume measurements is improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3-D body imaging. Body measurement functions dedicated to body composition assessment also are developed. The overall performance of the system is evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.
A 3D surface imaging system for assessing human obesity
NASA Astrophysics Data System (ADS)
Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.
2009-08-01
The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.
Multidimensional Processing and Visual Rendering of Complex 3D Biomedical Images
NASA Technical Reports Server (NTRS)
Sams, Clarence F.
2016-01-01
The proposed technology uses advanced image analysis techniques to maximize the resolution and utility of medical imaging methods being used during spaceflight. We utilize COTS technology for medical imaging, but our applications require higher resolution assessment of the medical images than is routinely applied with nominal system software. By leveraging advanced data reduction and multidimensional imaging techniques utilized in analysis of Planetary Sciences and Cell Biology imaging, it is possible to significantly increase the information extracted from the onboard biomedical imaging systems. Year 1 focused on application of these techniques to the ocular images collected on ground test subjects and ISS crewmembers. Focus was on the choroidal vasculature and the structure of the optic disc. Methods allowed for increased resolution and quantitation of structural changes enabling detailed assessment of progression over time. These techniques enhance the monitoring and evaluation of crew vision issues during space flight.
Synthetic Biology Expands the Industrial Potential of Yarrowia lipolytica.
Markham, Kelly A; Alper, Hal S
2018-06-04
The oleaginous yeast Yarrowia lipolytica is quickly emerging as the most popular non-conventional (i.e., non-model organism) yeast in the bioproduction field. With a high propensity for flux through tricarboxylic acid (TCA) cycle intermediates and biological precursors such as acetyl-CoA and malonyl-CoA, this host is especially well suited to meet our industrial chemical production needs. Recent progress in synthetic biology tool development has greatly enhanced our ability to rewire this organism, with advances in genetic component design, CRISPR technologies, and modular cloning strategies. In this review we investigate recent developments in metabolic engineering and describe how the new tools being developed help to realize the full industrial potential of this host. Finally, we conclude with our vision of the developments that will be necessary to enhance future engineering efforts. Copyright © 2018 Elsevier Ltd. All rights reserved.
Artificial vision support system (AVS(2)) for improved prosthetic vision.
Fink, Wolfgang; Tarbell, Mark A
2014-11-01
State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
DOT National Transportation Integrated Search
2015-02-15
There are approximately 2 million adults with reported vision loss in the United States. Independent travel and active interactions with the surrounding environment present significant daily challenges for these individuals, ultimately reducing quali...
Goodchild, Michael F.; Guo, Huadong; Annoni, Alessandro; Bian, Ling; de Bie, Kees; Campbell, Frederick; Craglia, Max; Ehlers, Manfred; van Genderen, John; Jackson, Davina; Lewis, Anthony J.; Pesaresi, Martino; Remetey-Fülöpp, Gábor; Simpson, Richard; Skidmore, Andrew; Wang, Changlin; Woodgate, Peter
2012-01-01
A speech of then-Vice President Al Gore in 1998 created a vision for a Digital Earth, and played a role in stimulating the development of a first generation of virtual globes, typified by Google Earth, that achieved many but not all the elements of this vision. The technical achievements of Google Earth, and the functionality of this first generation of virtual globes, are reviewed against the Gore vision. Meanwhile, developments in technology continue, the era of “big data” has arrived, the general public is more and more engaged with technology through citizen science and crowd-sourcing, and advances have been made in our scientific understanding of the Earth system. However, although Google Earth stimulated progress in communicating the results of science, there continue to be substantial barriers in the public’s access to science. All these factors prompt a reexamination of the initial vision of Digital Earth, and a discussion of the major elements that should be part of a next generation. PMID:22723346
NASA Astrophysics Data System (ADS)
Di, Si; Lin, Hui; Du, Ruxu
2011-05-01
Displacement measurement of moving objects is one of the most important issues in the field of computer vision. This paper introduces a new binocular vision system (BVS) based on micro-electro-mechanical system (MEMS) technology. The eyes of the system are two microlenses fabricated on a substrate by MEMS technology. The imaging results of two microlenses are collected by one complementary metal-oxide-semiconductor (CMOS) array. An algorithm is developed for computing the displacement. Experimental results show that as long as the object is moving in two-dimensional (2D) space, the system can effectively estimate the 2D displacement without camera calibration. It is also shown that the average error of the displacement measurement is about 3.5% at different object distances ranging from 10 cm to 35 cm. Because of its low cost, small size and simple setting, this new method is particularly suitable for 2D displacement measurement applications such as vision-based electronics assembly and biomedical cell culture.
Employment Retention after Vision Loss: Intensive Case Studies.
ERIC Educational Resources Information Center
Crudden, Adele; Fireison, Cara K.
This study examined the lives of 10 individuals with blindness or severe visual impairment who maintained competitive employment despite their vision loss. The study was designed to provide information regarding the personal characteristics and current practices related to work environment alterations which enhance competitive employment…
Stereo vision with distance and gradient recognition
NASA Astrophysics Data System (ADS)
Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu
2007-12-01
Robot vision technology is needed for the stable walking, object recognition and the movement to the target spot. By some sensors which use infrared rays and ultrasonic, robot can overcome the urgent state or dangerous time. But stereo vision of three dimensional space would make robot have powerful artificial intelligence. In this paper we consider about the stereo vision for stable and correct movement of a biped robot. When a robot confront with an inclination plane or steps, particular algorithms are needed to go on without failure. This study developed the recognition algorithm of distance and gradient of environment by stereo matching process.
Advanced helmet vision system (AHVS) integrated night vision helmet mounted display (HMD)
NASA Astrophysics Data System (ADS)
Ashcraft, Todd W.; Atac, Robert
2012-06-01
Gentex Corporation, under contract to Naval Air Systems Command (AIR 4.0T), designed the Advanced Helmet Vision System to provide aircrew with 24-hour, visor-projected binocular night vision and HMD capability. AHVS integrates numerous key technologies, including high brightness Light Emitting Diode (LED)-based digital light engines, advanced lightweight optical materials and manufacturing processes, and innovations in graphics processing software. This paper reviews the current status of miniaturization and integration with the latest two-part Gentex modular helmet, highlights the lessons learned from previous AHVS phases, and discusses plans for qualification and flight testing.
High Tech Aids Low Vision: A Review of Image Processing for the Visually Impaired.
Moshtael, Howard; Aslam, Tariq; Underwood, Ian; Dhillon, Baljean
2015-08-01
Recent advances in digital image processing provide promising methods for maximizing the residual vision of the visually impaired. This paper seeks to introduce this field to the readership and describe its current state as found in the literature. A systematic search revealed 37 studies that measure the value of image processing techniques for subjects with low vision. The techniques used are categorized according to their effect and the principal findings are summarized. The majority of participants preferred enhanced images over the original for a wide range of enhancement types. Adapting the contrast and spatial frequency content often improved performance at object recognition and reading speed, as did techniques that attenuate the image background and a technique that induced jitter. A lack of consistency in preference and performance measures was found, as well as a lack of independent studies. Nevertheless, the promising results should encourage further research in order to allow their widespread use in low-vision aids.
Schmidt, Jan Cornelius
2016-08-01
Synthetic biology is regarded as one of the key technosciences of the future. The goal of this paper is to present some fundamental considerations to enable procedures of a technology assessment (TA) of synthetic biology. To accomplish such an early "upstream" assessment of a not yet fully developed technology, a special type of TA will be considered: Prospective TA (ProTA). At the center of ProTA are the analysis and the framing of "synthetic biology," including a characterization and assessment of the technological core. The thesis is that if there is any differentia specifica giving substance to the umbrella term "synthetic biology," it is the idea of harnessing self-organization for engineering purposes. To underline that we are likely experiencing an epochal break in the ontology of technoscientific systems, this new type of technology is called "late-modern technology." -I start this paper by analyzing the three most common visions of synthetic biology. Then I argue that one particular vision deserves more attention because it underlies the others: the vision of self-organization. I discuss the inherent limits of this new type of late-modern technology in the attempt to control and monitor possible risk issues. I refer to Hans Jonas' ethics and his early anticipation of the risks of a novel type of technology. I end by drawing conclusions for the approach of ProTA towards an early societal shaping of synthetic biology.
Analysis of Risk Compensation Behavior on Night Vision Enhancement System
NASA Astrophysics Data System (ADS)
Hiraoka, Toshihiro; Masui, Junya; Nishikawa, Seimei
Advanced driver assistance systems (ADAS) such as a forward obstacle collision warning system (FOCWS) and a night vision enhancement system (NVES) aim to decrease driver's mental workload and enhance vehicle safety by provision of useful information to support driver's perception process and judgment process. On the other hand, the risk homeostasis theory (RHT) cautions that an enhanced safety and a reduced risk would cause a risk compensation behavior such as increasing the vehicle velocity. Therefore, the present paper performed the driving simulator experiments to discuss dependence on the NVES and emergence of the risk compensation behavior. Moreover, we verified the side-effects of spontaneous behavioral adaptation derived from the presentation of the fuel-consumption meter on the risk compensation behavior.
The visually impaired patient.
Rosenberg, Eric A; Sperazza, Laura C
2008-05-15
Blindness or low vision affects more than 3 million Americans 40 years and older, and this number is projected to reach 5.5 million by 2020. In addition to treating a patient's vision loss and comorbid medical issues, physicians must be aware of the physical limitations and social issues associated with vision loss to optimize health and independent living for the visually impaired patient. In the United States, the four most prevalent etiologies of vision loss in persons 40 years and older are age-related macular degeneration, cataracts, glaucoma, and diabetic retinopathy. Exudative macular degeneration is treated with laser therapy, and progression of nonexudative macular degeneration in its advanced stages may be slowed with high-dose antioxidant and zinc regimens. The value of screening for glaucoma is uncertain; management of this condition relies on topical ocular medications. Cataract symptoms include decreased visual acuity, decreased color perception, decreased contrast sensitivity, and glare disability. Lifestyle and environmental interventions can improve function in patients with cataracts, but surgery is commonly performed if the condition worsens. Diabetic retinopathy responds to tight glucose control, and severe cases marked by macular edema are treated with laser photocoagulation. Vision-enhancing devices can help magnify objects, and nonoptical interventions include special filters and enhanced lighting.
NASA Astrophysics Data System (ADS)
Guo, Xiao-Xi; Hu, Wei; Liu, Yuan; Gu, Dong-Chen; Sun, Su-Qin; Xu, Chang-Hua; Wang, Xi-Chang
2015-11-01
Fluorescent brightener, industrial whitening agent, has been illegally used to whitening wheat flour. In this article, computer vision technology (E-eyes) and colorimetry were employed to investigate color difference among different concentrations of fluorescent brightener in wheat flour using DMS as an example. Tri-step infrared spectroscopy (Fourier transform-infrared spectroscopy coupled with second derivative infrared spectroscopy (SD-IR) and two dimensional correlation infrared spectroscopy (2DCOS-IR)) was used to identify and quantitate DMS in wheat flour. According to color analysis, the whitening effect was significant when added with less than 30 mg/g DMS but when more than 100 mg/g, the flour began greenish. Thus it was speculated that the concentration of DMS should be below 100 mg/g in real flour adulterant with DMS. With the increase of the concentration, the spectral similarity of wheat flour with DMS to DMS standard was increasing. SD-IR peaks at 1153 cm-1, 1141 cm-1, 1112 cm-1, 1085 cm-1 and 1025 cm-1 attributed to DMS were regularly enhanced. Furthermore, it could be differentiated by 2DOS-IR between DMS standard and wheat flour added with DMS low to 0.05 mg/g and the bands in the range of 1000-1500 cm-1 could be an exclusive range to identify whether wheat flour contained DMS. Finally, a quantitative prediction model based on IR spectra was established successfully by Partial least squares (PLS) with a concentration range from 1 mg/g to 100 mg/g. The calibration set gave a determination coefficient of 0.9884 with a standard error (RMSEC) of 5.56 and the validation set presented a determination coefficient of 0.9881 with a standard error of 5.73. It was demonstrated that computer vision technology and colorimetry were effective to estimate the content of DMS in wheat flour and the Tri-step infrared macro-fingerprinting combined with PLS was applicable for rapid and nondestructive fluorescent brightener identification and quantitation.
Evaluation of Candidate Millimeter Wave Sensors for Synthetic Vision
NASA Technical Reports Server (NTRS)
Alexander, Neal T.; Hudson, Brian H.; Echard, Jim D.
1994-01-01
The goal of the Synthetic Vision Technology Demonstration Program was to demonstrate and document the capabilities of current technologies to achieve safe aircraft landing, take off, and ground operation in very low visibility conditions. Two of the major thrusts of the program were (1) sensor evaluation in measured weather conditions on a tower overlooking an unused airfield and (2) flight testing of sensor and pilot performance via a prototype system. The presentation first briefly addresses the overall technology thrusts and goals of the program and provides a summary of MMW sensor tower-test and flight-test data collection efforts. Data analysis and calibration procedures for both the tower tests and flight tests are presented. The remainder of the presentation addresses the MMW sensor flight-test evaluation results, including the processing approach for determination of various performance metrics (e.g., contrast, sharpness, and variability). The variation of the very important contrast metric in adverse weather conditions is described. Design trade-off considerations for Synthetic Vision MMW sensors are presented.
Vision-based body tracking: turning Kinect into a clinical tool.
Morrison, Cecily; Culmer, Peter; Mentis, Helena; Pincus, Tamar
2016-08-01
Vision-based body tracking technologies, originally developed for the consumer gaming market, are being repurposed to form the core of a range of innovative healthcare applications in the clinical assessment and rehabilitation of movement ability. Vision-based body tracking has substantial potential, but there are technical limitations. We use our "stories from the field" to articulate the challenges and offer examples of how these can be overcome. We illustrate that: (i) substantial effort is needed to determine the measures and feedback vision-based body tracking should provide, accounting for the practicalities of the technology (e.g. range) as well as new environments (e.g. home). (ii) Practical considerations are important when planning data capture so that data is analysable, whether finding ways to support a patient or ensuring everyone does the exercise in the same manner. (iii) Home is a place of opportunity for vision-based body tracking, but what we do now in the clinic (e.g. balance tests) or in the home (e.g. play games) will require modifications to achieve capturable, clinically relevant measures. This article articulates how vision-based body tracking works and when it does not to continue to inspire our clinical colleagues to imagine new applications. Implications for Rehabilitation Vision-based body tracking has quickly been repurposed to form the core of innovative healthcare applications in clinical assessment and rehabilitation, but there are clinical as well as practical challenges to make such systems a reality. Substantial effort needs to go into determining what types of measures and feedback vision-based body tracking should provide. This needs to account for the practicalities of the technology (e.g. range) as well as the opportunities of new environments (e.g. the home). Practical considerations need to be accounted for when planning capture in a particular environment so that data is analysable, whether it be finding a chair substitute, ways to support a patient or ensuring everyone does the exercise in the same manner. The home is a place of opportunity with vision-based body tracking, but it would be naïve to think that we can do what we do now in the clinic (e.g. balance tests) or in the home (e.g. play games), without appropriate modifications to what constitutes a practically capturable, clinically relevant measure.
Face adaptation improves gender discrimination.
Yang, Hua; Shen, Jianhong; Chen, Juan; Fang, Fang
2011-01-01
Adaptation to a visual pattern can alter the sensitivities of neuronal populations encoding the pattern. However, the functional roles of adaptation, especially in high-level vision, are still equivocal. In the present study, we performed three experiments to investigate if face gender adaptation could affect gender discrimination. Experiments 1 and 2 revealed that adapting to a male/female face could selectively enhance discrimination for male/female faces. Experiment 3 showed that the discrimination enhancement induced by face adaptation could transfer across a substantial change in three-dimensional face viewpoint. These results provide further evidence suggesting that, similar to low-level vision, adaptation in high-level vision could calibrate the visual system to current inputs of complex shapes (i.e. face) and improve discrimination at the adapted characteristic. Copyright © 2010 Elsevier Ltd. All rights reserved.
Remote sensing of vegetation structure using computer vision
NASA Astrophysics Data System (ADS)
Dandois, Jonathan P.
High-spatial resolution measurements of vegetation structure are needed for improving understanding of ecosystem carbon, water and nutrient dynamics, the response of ecosystems to a changing climate, and for biodiversity mapping and conservation, among many research areas. Our ability to make such measurements has been greatly enhanced by continuing developments in remote sensing technology---allowing researchers the ability to measure numerous forest traits at varying spatial and temporal scales and over large spatial extents with minimal to no field work, which is costly for large spatial areas or logistically difficult in some locations. Despite these advances, there remain several research challenges related to the methods by which three-dimensional (3D) and spectral datasets are joined (remote sensing fusion) and the availability and portability of systems for frequent data collections at small scale sampling locations. Recent advances in the areas of computer vision structure from motion (SFM) and consumer unmanned aerial systems (UAS) offer the potential to address these challenges by enabling repeatable measurements of vegetation structural and spectral traits at the scale of individual trees. However, the potential advances offered by computer vision remote sensing also present unique challenges and questions that need to be addressed before this approach can be used to improve understanding of forest ecosystems. For computer vision remote sensing to be a valuable tool for studying forests, bounding information about the characteristics of the data produced by the system will help researchers understand and interpret results in the context of the forest being studied and of other remote sensing techniques. This research advances understanding of how forest canopy and tree 3D structure and color are accurately measured by a relatively low-cost and portable computer vision personal remote sensing system: 'Ecosynth'. Recommendations are made for optimal conditions under which forest structure measurements should be obtained with UAS-SFM remote sensing. Ultimately remote sensing of vegetation by computer vision offers the potential to provide an 'ecologist's eye view', capturing not only canopy 3D and spectral properties, but also seeing the trees in the forest and the leaves on the trees.
Elliott, Amanda F.; McGwin, Gerald; Owsley, Cynthia
2009-01-01
OBJECTIVE To evaluate the effect of vision-enhancing interventions (i.e., cataract surgery or refractive error correction) on physical function and cognitive status in nursing home residents. DESIGN Longitudinal cohort study. SETTING Seventeen nursing homes in Birmingham, AL. PARTICIPANTS A total of 187 English-speaking older adults (>55 years of age). INTERVENTION Participants took part in one of two vision-enhancing interventions: cataract surgery or refractive error correction. Each group was compared against a control group (persons eligible for but who declined cataract surgery, or who received delayed correction of refractive error). MEASUREMENTS Physical function (i.e., ability to perform activities of daily living and mobility) was assessed with a series of self-report and certified nursing assistant ratings at baseline and at 2 months for the refractive error correction group, and at 4 months for the cataract surgery group. The Mini Mental State Exam was also administered. RESULTS No significant differences existed within or between groups from baseline to follow-up on any of the measures of physical function. Mental status scores significantly declined from baseline to follow-up for both the immediate (p= 0.05) and delayed (p< 0.02) refractive error correction groups and for the cataract surgery control group (p= 0.05). CONCLUSION Vision-enhancing interventions did not lead to short-term improvements in physical functioning or cognitive status in this sample of elderly nursing home residents. PMID:19170783
A methodology for coupling a visual enhancement device to human visual attention
NASA Astrophysics Data System (ADS)
Todorovic, Aleksandar; Black, John A., Jr.; Panchanathan, Sethuraman
2009-02-01
The Human Variation Model views disability as simply "an extension of the natural physical, social, and cultural variability of mankind." Given this human variation, it can be difficult to distinguish between a prosthetic device such as a pair of glasses (which extends limited visual abilities into the "normal" range) and a visual enhancement device such as a pair of binoculars (which extends visual abilities beyond the "normal" range). Indeed, there is no inherent reason why the design of visual prosthetic devices should be limited to just providing "normal" vision. One obvious enhancement to human vision would be the ability to visually "zoom" in on objects that are of particular interest to the viewer. Indeed, it could be argued that humans already have a limited zoom capability, which is provided by their highresolution foveal vision. However, humans still find additional zooming useful, as evidenced by their purchases of binoculars equipped with mechanized zoom features. The fact that these zoom features are manually controlled raises two questions: (1) Could a visual enhancement device be developed to monitor attention and control visual zoom automatically? (2) If such a device were developed, would its use be experienced by users as a simple extension of their natural vision? This paper details the results of work with two research platforms called the Remote Visual Explorer (ReVEx) and the Interactive Visual Explorer (InVEx) that were developed specifically to answer these two questions.
Developing theological tools for a strategic engagement with Human Enhancement.
Tomkins, Justin
2014-01-01
The literature on Human Enhancement may indeed have reached a critical mass yet theological engagement with the subject is still thin. Human Enhancement has already been established as a key topic within research and captivating visions of the future have been allied with a depth of philosophical analysis. Some Transhumanists have pointed to a theological dimension to their position and some who have warned against enhancement might be seen as having done so from a perspective shaped by a Judeo-Christian worldview. Nonetheless, in neither of these cases has theology been central to engagement with the enhancement quest.Christian theologians who have begun to open up such an engagement with Human Enhancement include Brent Waters, Robert Song and Celia Deane-Drummond. The work they have already carried out is insightful and important yet due to the scale of the possible engagement, the wealth of Christian theology which might be applied to Human Enhancement remains largely untapped. This paper explores how three key aspects of Christian theology, eschatology, love of God and love of neighbour, provide valuable tools for a theological engagement with Human Enhancement. It is proposed that such theological tools need to be applied to Human Enhancement if the debate is to be resourced with the Christian theological perspective of what it means to be human in our contemporary technological context and if society is to have the choice of maintaining its Christian foundations.
The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals.
Targher, Stefano; Micciolo, Rocco; Occelli, Valeria; Zampini, Massimiliano
2017-12-01
Recent findings have shown that sounds improve visual detection in low vision individuals when the audiovisual stimuli pairs of stimuli are presented simultaneously and from the same spatial position. The present study purports to investigate the temporal aspects of the audiovisual enhancement effect previously reported. Low vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously or before the visual stimulus (i.e., SOAs 0, 100, 250, 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all the conditions. In the second experiment, the sound was either synchronized with, or randomly preceded/lagged behind the visual stimulus (i.e., SOAs 0, ± 250, ± 400 ms). The visual detection enhancement was reduced in magnitude and limited only to the synchronous condition and to the condition in which the sound stimulus was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study seems to suggest that audiovisual interaction in low vision individuals is highly modulated by top-down mechanisms.
NASA Technical Reports Server (NTRS)
Bejmuk, Bohdan I.; Williams, Larry
1992-01-01
As a result of limited resources and tight fiscal constraints over the past several years, the defense and aerospace industries have experienced a downturn in business activity. The impact of fewer contracts being awarded has placed a greater emphasis for effectiveness and efficiency on industry contractors. It is clear that a reallocation of resources is required for America to continue to lead the world in space and technology. The key to technological and economic survival is the transforming of existing programs, such as the Space Shuttle Program, into more cost efficient programs so as to divert the savings to other NASA programs. The partnership between Rockwell International and NASA and their joint improvement efforts that resulted in significant streamlining and cost reduction measures to Rockwell International Space System Division's work on the Space Shuttle System Integration Contract is described. This work was a result of an established Cost Effectiveness Enhancement (CEE) Team formed initially in Fiscal Year 1991, and more recently expanded to a larger scale CEE Initiative in 1992. By working closely with the customer in agreeing to contract content, obtaining management endorsement and commitment, and involving the employees in total quality management (TQM) and continuous improvement 'teams,' the initial annual cost reduction target was exceeded significantly. The CEE Initiative helped reduce the cost of the Shuttle Systems Integration contract while establishing a stronger program based upon customer needs, teamwork, quality enhancements, and cost effectiveness. This was accomplished by systematically analyzing, challenging, and changing the established processes, practices, and systems. This examination, in nature, was work intensive due to the depth and breadth of the activity. The CEE Initiative has provided opportunities to make a difference in the way Rockwell and NASA work together - to update the methods and processes of the organizations. The future success of NASA space programs and Rockwell hinges upon the ability to adopt new, more efficient and effective work processes. Efficiency, proficiency, cost effectiveness, and teamwork are a necessity for economic survival. Continuous improvement initiatives like the CEE are, and will continue to be, vehicles by which the road can be traveled with a vision to the future.
NASA Astrophysics Data System (ADS)
Bejmuk, Bohdan I.; Williams, Larry
As a result of limited resources and tight fiscal constraints over the past several years, the defense and aerospace industries have experienced a downturn in business activity. The impact of fewer contracts being awarded has placed a greater emphasis for effectiveness and efficiency on industry contractors. It is clear that a reallocation of resources is required for America to continue to lead the world in space and technology. The key to technological and economic survival is the transforming of existing programs, such as the Space Shuttle Program, into more cost efficient programs so as to divert the savings to other NASA programs. The partnership between Rockwell International and NASA and their joint improvement efforts that resulted in significant streamlining and cost reduction measures to Rockwell International Space System Division's work on the Space Shuttle System Integration Contract is described. This work was a result of an established Cost Effectiveness Enhancement (CEE) Team formed initially in Fiscal Year 1991, and more recently expanded to a larger scale CEE Initiative in 1992. By working closely with the customer in agreeing to contract content, obtaining management endorsement and commitment, and involving the employees in total quality management (TQM) and continuous improvement 'teams,' the initial annual cost reduction target was exceeded significantly. The CEE Initiative helped reduce the cost of the Shuttle Systems Integration contract while establishing a stronger program based upon customer needs, teamwork, quality enhancements, and cost effectiveness. This was accomplished by systematically analyzing, challenging, and changing the established processes, practices, and systems. This examination, in nature, was work intensive due to the depth and breadth of the activity. The CEE Initiative has provided opportunities to make a difference in the way Rockwell and NASA work together - to update the methods and processes of the organizations. The future success of NASA space programs and Rockwell hinges upon the ability to adopt new, more efficient and effective work processes. Efficiency, proficiency, cost effectiveness, and teamwork are a necessity for economic survival. Continuous improvement initiatives like the CEE are, and will continue to be, vehicles by which the road can be traveled with a vision to the future.
Image segmentation for enhancing symbol recognition in prosthetic vision.
Horne, Lachlan; Barnes, Nick; McCarthy, Chris; He, Xuming
2012-01-01
Current and near-term implantable prosthetic vision systems offer the potential to restore some visual function, but suffer from poor resolution and dynamic range of induced phosphenes. This can make it difficult for users of prosthetic vision systems to identify symbolic information (such as signs) except in controlled conditions. Using image segmentation techniques from computer vision, we show it is possible to improve the clarity of such symbolic information for users of prosthetic vision implants in uncontrolled conditions. We use image segmentation to automatically divide a natural image into regions, and using a fixation point controlled by the user, select a region to phosphenize. This technique improves the apparent contrast and clarity of symbolic information over traditional phosphenization approaches.
The reliability of a VISION COACH task as a measure of psychomotor skills.
Xi, Yubin; Rosopa, Patrick J; Mossey, Mary; Crisler, Matthew C; Drouin, Nathalie; Kopera, Kevin; Brooks, Johnell O
2014-10-01
The VISION COACH™ interactive light board is designed to test and enhance participants' psychomotor skills. The primary goal of this study was to examine the test-retest reliability of the Full Field 120 VISION COACH task. One hundred eleven male and 131 female adult participants completed six trials where they responded to 120 randomly distributed lights displayed on the VISION COACH interactive light board. The mean time required for a participant to complete a trial was 101 seconds. Intraclass correlation coefficients, ranging from 0.962 to 0.987 suggest the VISION COACH Full Field 120 task was a reliable task. Cohen's d's of adjacent pairs of trials suggest learning effects did not negatively affect reliability after the third trial.
NASA Technical Reports Server (NTRS)
Huebner, Lawrence D.; Saiyed, Naseem H.; Swith, Marion Shayne
2005-01-01
When United States President George W. Bush announced the Vision for Space Exploration in January 2004, twelve propulsion and launch system projects were being pursued in the Next Generation Launch Technology (NGLT) Program. These projects underwent a review for near-term relevance to the Vision. Subsequently, five projects were chosen as advanced development projects by NASA s Exploration Systems Mission Directorate (ESMD). These five projects were Auxiliary Propulsion, Integrated Powerhead Demonstrator, Propulsion Technology and Integration, Vehicle Subsystems, and Constellation University Institutes. Recently, an NGLT effort in Vehicle Structures was identified as a gap technology that was executed via the Advanced Development Projects Office within ESMD. For all of these advanced development projects, there is an emphasis on producing specific, near-term technical deliverables related to space transportation that constitute a subset of the promised NGLT capabilities. The purpose of this paper is to provide a brief description of the relevancy review process and provide a status of the aforementioned projects. For each project, the background, objectives, significant technical accomplishments, and future plans will be discussed. In contrast to many of the current ESMD activities, these areas are providing hardware and testing to further develop relevant technologies in support of the Vision for Space Exploration.
Emerging Technologies Look Deeper into the Eyes to Catch Signs of Disease
... Eye Disease Vision Screening World Sight Day Emerging technologies look deeper into the eyes to catch signs ... to eye gazing Adaptive optics (AO) is one technology helping to overcome this problem. It deals with ...
GeoVision Study | Geothermal Technologies | NREL
and technical issues of advanced technologies and potential future impacts and calculating geothermal : Exploration Reservoir development and management Social and environmental impacts Hybrid systems Thermal
Times Past, Times to Come: The Influence of the Past on Visions of the Future.
ERIC Educational Resources Information Center
Masson, Sophie
1997-01-01
Discussion of visions of the future that are based on past history highlights imaginative literature that deals with the human spirit rather than strictly technological innovations. Medieval society, the Roman Empire, mythological atmospheres, and the role of writers are also discussed. (LRW)
Visions of Curriculum, Community, and Science
ERIC Educational Resources Information Center
Brickhouse, Nancy W.; Kittleson, Julie M.
2006-01-01
Although the natural sciences are dedicated to understanding the natural world, they are also dynamic and shaped by cultural values. The sciences and attendant technologies could be very responsive to a population that participates in and uses them responsibly. In this essay, Nancy Brickhouse and Julie Kittleson argue for re-visioning the sciences…
Real-time Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn D.; Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.
2005-01-01
Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.
Real-time enhanced vision system
NASA Astrophysics Data System (ADS)
Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.
2005-05-01
Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.
Quality Control by Artificial Vision
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lam, Edmond Y.; Gleason, Shaun Scott; Niel, Kurt S.
2010-01-01
Computational technology has fundamentally changed many aspects of our lives. One clear evidence is the development of artificial-vision systems, which have effectively automated many manual tasks ranging from quality inspection to quantitative assessment. In many cases, these machine-vision systems are even preferred over manual ones due to their repeatability and high precision. Such advantages come from significant research efforts in advancing sensor technology, illumination, computational hardware, and image-processing algorithms. Similar to the Special Section on Quality Control by Artificial Vision published two years ago in Volume 17, Issue 3 of the Journal of Electronic Imaging, the present one invited papersmore » relevant to fundamental technology improvements to foster quality control by artificial vision, and fine-tuned the technology for specific applications. We aim to balance both theoretical and applied work pertinent to this special section theme. Consequently, we have seven high-quality papers resulting from the stringent peer-reviewing process in place at the Journal of Electronic Imaging. Some of the papers contain extended treatment of the authors work presented at the SPIE Image Processing: Machine Vision Applications conference and the International Conference on Quality Control by Artificial Vision. On the broad application side, Liu et al. propose an unsupervised texture image segmentation scheme. Using a multilayer data condensation spectral clustering algorithm together with wavelet transform, they demonstrate the effectiveness of their approach on both texture and synthetic aperture radar images. A problem related to image segmentation is image extraction. For this, O'Leary et al. investigate the theory of polynomial moments and show how these moments can be compared to classical filters. They also show how to use the discrete polynomial-basis functions for the extraction of 3-D embossed digits, demonstrating superiority over Fourier-basis functions for this task. Image registration is another important task for machine vision. Bingham and Arrowood investigate the implementation and results in applying Fourier phase matching for projection registration, with a particular focus on nondestructive testing using computed tomography. Readers interested in enriching their arsenal of image-processing algorithms for machine-vision tasks should find these papers enriching. Meanwhile, we have four papers dealing with more specific machine-vision tasks. The first one, Yahiaoui et al., is quantitative in nature, using machine vision for real-time passenger counting. Occulsion is a common problem in counting objects and people, and they circumvent this issue with a dense stereovision system, achieving 97 to 99% accuracy in their tests. On the other hand, the second paper by Oswald-Tranta et al. focuses on thermographic crack detection. An infrared camera is used to detect inhomogeneities, which may indicate surface cracks. They describe the various steps in developing fully automated testing equipment aimed at a high throughput. Another paper describing an inspection system is Molleda et al., which handles flatness inspection of rolled products. They employ optical-laser triangulation and 3-D surface reconstruction for this task, showing how these can be achieved in real time. Last but not least, Presles et al. propose a way to monitor the particle-size distribution of batch crystallization processes. This is achieved through a new in situ imaging probe and image-analysis methods. While it is unlikely any reader may be working on these four specific problems at the same time, we are confident that readers will find these papers inspiring and potentially helpful to their own machine-vision system developments.« less
A Vision of Success: How Nutrient Management Will Enhance and Sustain Ecosystem Services
Clean air and water, ample food, renewable fuels, productive fisheries, diverse ecosystems, resilient coasts and watersheds: these are some of the benefits that depend on sustainable nitrogen use and management. Thus, in our vision of the future, uses of reactive nitrogen are suf...
2013-05-15
(left to right) NASA Langley aerospace engineer Bruce Jackson briefs astronauts Rex Walheim and Gregory Johnson about the Synthetic Vision (SV) and Enhanced Vision (EV) systems in a flight simulator at the center's Cockpit Motion Facility. The astronauts were training to land the Dream Chaser spacecraft May 15th 2013. credit NASA/David C. Bowman
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
2012-01-01
To enhance the quality of the theatre experience, the film industry is interested in achieving higher frame rates for capture and display. In this talk I will describe the basic spatio-temporal sensitivities of human vision, and how they respond to the time sequence of static images that is fundamental to cinematic presentation.
Visions for Children: African American Early Childhood Education Program.
ERIC Educational Resources Information Center
Hale-Benson, Janice
The features of an early childhood education demonstration program, Visions for Children, are delineated in this paper. The program was designed to facilitate the intellectual development, boost the academic achievement, and enhance the self-concepts of African-American preschool children. The program implements a curriculum that focuses on…
Driving and Low Vision: An Evidence-Based Review of Rehabilitation
ERIC Educational Resources Information Center
Strong, J. Graham; Jutai, Jeffrey W.; Russell-Minda, Elizabeth; Evans, Mal
2008-01-01
This systematic review of the effectiveness of driver rehabilitation interventions found that driver training programs enhance driving skills and awareness, but further research is needed to determine their effectiveness in improving driving performance of drivers with low vision. More research is also needed to determine the effectiveness of low…
Visions Management: Effective Teaching through Technology.
ERIC Educational Resources Information Center
Larson, Robert W.
In making effective use of technology, instructors must face several challenges, such as deciding which technology is really necessary for effective teaching and working with limited department budgets. In addressing these issues, faculty should be aware of three major trends in communications technology: miniaturization of the media of…
ERIC Educational Resources Information Center
Ch'ien, Evelyn
2011-01-01
This paper describes how a linguistic form, rap, can evolve in tandem with technological advances and manifest human-machine creativity. Rather than assuming that the interplay between machines and technology makes humans robotic or machine-like, the paper explores how the pressure of executing artistic visions using technology can drive…
Night vision goggle stimulation using LCoS and DLP projection technology, which is better?
NASA Astrophysics Data System (ADS)
Ali, Masoud H.; Lyon, Paul; De Meerleer, Peter
2014-06-01
High fidelity night-vision training has become important for many of the simulation systems being procured today. The end-users of these simulation-training systems prefer using their actual night-vision goggle (NVG) headsets. This requires that the visual display system stimulate the NVGs in a realistic way. Historically NVG stimulation was done with cathode-ray tube (CRT) projectors. However, this technology became obsolete and in recent years training simulators do NVG stimulation with laser, LCoS and DLP projectors. The LCoS and DLP projection technologies have emerged as the preferred approach for the stimulation of NVGs. Both LCoS and DLP technologies have advantages and disadvantages for stimulating NVGs. LCoS projectors can have more than 5-10 times the contrast capability of DLP projectors. The larger the difference between the projected black level and the brightest object in a scene, the better the NVG stimulation effects can be. This is an advantage of LCoS technology, especially when the proper NVG wavelengths are used. Single-chip DLP projectors, even though they have much reduced contrast compared to LCoS projectors, can use LED illuminators in a sequential red-green-blue fashion to create a projected image. It is straightforward to add an extra infrared (NVG wavelength) LED into this sequential chain of LED illumination. The content of this NVG channel can be independent of the visible scene, which allows effects to be added that can compensate for the lack of contrast inherent in a DLP device. This paper will expand on the differences between LCoS and DLP projectors for stimulating NVGs and summarize the benefits of both in night-vision simulation training systems.
NASA Technical Reports Server (NTRS)
Kearney, Lara
2004-01-01
In January 2004, the President announced a new Vision for Space Exploration. NASA's Office of Exploration Systems has identified Extravehicular Activity (EVA) as a critical capability for supporting the Vision for Space Exploration. EVA is required for all phases of the Vision, both in-space and planetary. Supporting the human outside the protective environment of the vehicle or habitat and allow ing him/her to perform efficient and effective work requires an integrated EVA "System of systems." The EVA System includes EVA suits, airlocks, tools and mobility aids, and human rovers. At the core of the EVA System is the highly technical EVA suit, which is comprised mainly of a life support system and a pressure/environmental protection garment. The EVA suit, in essence, is a miniature spacecraft, which combines together many different sub-systems such as life support, power, communications, avionics, robotics, pressure systems and thermal systems, into a single autonomous unit. Development of a new EVA suit requires technology advancements similar to those required in the development of a new space vehicle. A majority of the technologies necessary to develop advanced EVA systems are currently at a low Technology Readiness Level of 1-3. This is particularly true for the long-pole technologies of the life support system.
Technological innovation in video-assisted thoracic surgery.
Özyurtkan, Mehmet Oğuzhan; Kaba, Erkan; Toker, Alper
2017-01-01
The popularity of video-assisted thoracic surgery (VATS) which increased worldwide due to the recent innovations in thoracic surgical technics, equipment, electronic devices that carry light and vision and high definition monitors. Uniportal VATS (UVATS) is disseminated widely, creating a drive to develop new techniques and instruments, including new graspers and special staplers with more angulation capacities. During the history of VATS, the classical 10 mm 0° or 30° rigid rod lens system, has been replaced by new thoracoscopes providing a variable angle technology and allowing 0° and 120° range of vision. Besides, the tip of these novel thoracoscopes can be positioned away from the operating side minimize fencing with other thoracoscopic instruments. The curved-tip stapler technology, and better designed endostaplers helped better dissection, precision of control, more secure staple lines. UVATS also contributed to the development of embryonic natural orifice transluminal endoscopic surgery. Three-dimensional VATS systems facilitated faster and more accurate grasping, suturing, and dissection of the tissues by restoring natural 3D vision and the perception of depth. Another innovation in VATS is the energy-based coagulative and tissue fusion technology which may be an alternative to endostaplers.
ERIC Educational Resources Information Center
Thomas, Susan
2016-01-01
The National Education Technology Plan is the flagship educational technology policy document for the United States. The 2016 Plan, "Future Ready Learning: Reimagining the Role of Technology in Education," articulates a vision of equity, active use, and collaborative leadership to make everywhere, all-the-time learning possible. While…
NASA Technical Reports Server (NTRS)
Van Baalen, Mary; Mason, Sara; Foy, Millennia; Wear, Mary; Taiym, Wafa; Moynihan, Shannan; Alexander, David; Hart, Steve; Tarver, William
2015-01-01
Due to recently identified vision changes associated with space flight, JSC Space and Clinical Operations (SCO) implemented broad mission-related vision testing starting in 2009. Optical Coherence Tomography (OCT), 3 Tesla Brain and Orbit MRIs, Optical Biometry were implemented terrestrially for clinical monitoring. While no inflight vision testing was in place, already available onorbit technology was leveraged to facilitate in-flight clinical monitoring, including visual acuity, Amsler grid, tonometry, and ultrasonography. In 2013, on-orbit testing capabilities were expanded to include contrast sensitivity testing and OCT. As these additional testing capabilities have been added, resource prioritization, particularly crew time, is under evaluation.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, Jason L.
2005-01-01
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will help to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. The paper describes experimental evaluation of a multi-mode 3-D exocentric synthetic vision navigation display concept for commercial aircraft. Experimental results evinced the situation awareness benefits of 2-D and 3-D exocentric synthetic vision displays over traditional 2-D co-planar navigation and vertical situation displays. Conclusions and future research directions are discussed.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.
2006-01-01
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will help to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. The paper describes experimental evaluation of a multi-mode 3-D exocentric synthetic vision navigation display concept for commercial aircraft. Experimental results showed the situation awareness benefits of 2-D and 3-D exocentric synthetic vision displays over traditional 2-D co-planar navigation and vertical situation displays. Conclusions and future research directions are discussed.
Present status and trends of image fusion
NASA Astrophysics Data System (ADS)
Xiang, Dachao; Fu, Sheng; Cai, Yiheng
2009-10-01
Image fusion information extracted from multiple images which is more accurate and reliable than that from just a single image. Since various images contain different information aspects of the measured parts, and comprehensive information can be obtained by integrating them together. Image fusion is a main branch of the application of data fusion technology. At present, it was widely used in computer vision technology, remote sensing, robot vision, medical image processing and military field. This paper mainly presents image fusion's contents, research methods, and the status quo at home and abroad, and analyzes the development trend.
NASA Astrophysics Data System (ADS)
Hashimoto, Manabu; Fujino, Yozo
Image sensing technologies are expected as useful and effective way to suppress damages by criminals and disasters in highly safe and relieved society. In this paper, we describe current important subjects, required functions, technical trends, and a couple of real examples of developed system. As for the video surveillance, recognition of human trajectory and human behavior using image processing techniques are introduced with real examples about the violence detection for elevators. In the field of facility monitoring technologies as civil engineering, useful machine vision applications such as automatic detection of concrete cracks on walls of a building or recognition of crowded people on bridge for effective guidance in emergency are shown.
Advanced technology development for image gathering, coding, and processing
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.
1990-01-01
Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.
The Next Step: Managing Your District's Technology Operations.
ERIC Educational Resources Information Center
Pereus, Stephen C.
2001-01-01
Discusses benefits and especially risks involved with educational technology: unexpected costs; possible negative effects on student achievement; legal, ethical, and security issues; and resistance to change. Success ensues from providing leadership and vision, updating technology planning, evaluating alternatives, setting standards, involving…
2006-01-01
vision may enhance recognition of conspecifics or be used in mating. While mating in moths is thought to be entirely mediated by olfaction , most tasks are...time, unambiguous evidence for true color vision under scotopic conditions has only recently been acquired (Kelber et al., 2002; Roth and Kelber, 2004...color under starlight and dim moonlight, respectively, raise at least two issues. First, what is the selective advantage of color vision in these
Computer vision challenges and technologies for agile manufacturing
NASA Astrophysics Data System (ADS)
Molley, Perry A.
1996-02-01
Sandia National Laboratories, a Department of Energy laboratory, is responsible for maintaining the safety, security, reliability, and availability of the nuclear weapons stockpile for the United States. Because of the changing national and global political climates and inevitable budget cuts, Sandia is changing the methods and processes it has traditionally used in the product realization cycle for weapon components. Because of the increasing age of the nuclear stockpile, it is certain that the reliability of these weapons will degrade with time unless eventual action is taken to repair, requalify, or renew them. Furthermore, due to the downsizing of the DOE weapons production sites and loss of technical personnel, the new product realization process is being focused on developing and deploying advanced automation technologies in order to maintain the capability for producing new components. The goal of Sandia's technology development program is to create a product realization environment that is cost effective, has improved quality and reduced cycle time for small lot sizes. The new environment will rely less on the expertise of humans and more on intelligent systems and automation to perform the production processes. The systems will be robust in order to provide maximum flexibility and responsiveness for rapidly changing component or product mixes. An integrated enterprise will allow ready access to and use of information for effective and efficient product and process design. Concurrent engineering methods will allow a speedup of the product realization cycle, reduce costs, and dramatically lessen the dependency on creating and testing physical prototypes. Virtual manufacturing will allow production processes to be designed, integrated, and programed off-line before a piece of hardware ever moves. The overriding goal is to be able to build a large variety of new weapons parts on short notice. Many of these technologies that are being developed are also applicable to commercial production processes and applications. Computer vision will play a critical role in the new agile production environment for automation of processes such as inspection, assembly, welding, material dispensing and other process control tasks. Although there are many academic and commercial solutions that have been developed, none have had widespread adoption considering the huge potential number of applications that could benefit from this technology. The reason for this slow adoption is that the advantages of computer vision for automation can be a double-edged sword. The benefits can be lost if the vision system requires an inordinate amount of time for reprogramming by a skilled operator to account for different parts, changes in lighting conditions, background clutter, changes in optics, etc. Commercially available solutions typically require an operator to manually program the vision system with features used for the recognition. In a recent survey, we asked a number of commercial manufacturers and machine vision companies the question, 'What prevents machine vision systems from being more useful in factories?' The number one (and unanimous) response was that vision systems require too much skill to set up and program to be cost effective.
Measuring perceived video quality of MPEG enhancement by people with impaired vision
Fullerton, Matthew; Woods, Russell L.; Vera-Diaz, Fuensanta A.; Peli, Eli
2007-01-01
We used a new method to measure the perceived quality of contrast-enhanced motion video. Patients with impaired vision (n = 24) and normally-sighted subjects (n = 6) adjusted the level of MPEG-based enhancement of 8 videos (4 minutes each) drawn from 4 categories. They selected the level of enhancement that provided the preferred view of the videos, using a reducing-step-size staircase procedure. Most patients made consistent selections of the preferred level of enhancement, indicating an appreciation of and a perceived benefit from the MPEG-based enhancement. The selections varied between patients and were correlated with letter contrast sensitivity, but the selections were not affected by training, experience or video category. We measured just noticeable differences (JNDs) directly for videos, and mapped the image manipulation (enhancement in our case) onto an approximately linear perceptual space. These tools and approaches will be of value in other evaluations of the image quality of motion video manipulations. PMID:18059909
Technology for Work, Home, and Leisure. Tech Use Guide: Using Computer Technology.
ERIC Educational Resources Information Center
Williams, John M.
This guide provides a brief introduction to several types of technological devices useful to individuals with disabilities and illustrates how some individuals are applying technology in the workplace and at home. Devices described include communication aids, low-vision products, voice-activated systems, environmental controls, and aids for…
Reimagining the Role of Technology in Education: 2017 National Education Technology Plan Update
ERIC Educational Resources Information Center
Office of Educational Technology, US Department of Education, 2017
2017-01-01
The National Education Technology Plan (NETP) sets a national vision and plan for learning enabled by technology through building on the work of leading education researchers; district, school, and higher education leaders; classroom teachers; developers; entrepreneurs; and nonprofit organizations. The principles and examples provided in this…
NASA Astrophysics Data System (ADS)
As'ari, M. A.; Sheikh, U. U.
2012-04-01
The rapid development of intelligent assistive technology for replacing a human caregiver in assisting people with dementia performing activities of daily living (ADLs) promises in the reduction of care cost especially in training and hiring human caregiver. The main problem however, is the various kinds of sensing agents used in such system and is dependent on the intent (types of ADLs) and environment where the activity is performed. In this paper on overview of the potential of computer vision based sensing agent in assistive system and how it can be generalized and be invariant to various kind of ADLs and environment. We find that there exists a gap from the existing vision based human action recognition method in designing such system due to cognitive and physical impairment of people with dementia.
Chiao, Chuan-Chin; Wickiser, J Kenneth; Allen, Justine J; Genter, Brock; Hanlon, Roger T
2011-05-31
Camouflage is a widespread phenomenon throughout nature and an important antipredator tactic in natural selection. Many visual predators have keen color perception, and thus camouflage patterns should provide some degree of color matching in addition to other visual factors such as pattern, contrast, and texture. Quantifying camouflage effectiveness in the eyes of the predator is a challenge from the perspectives of both biology and optical imaging technology. Here we take advantage of hyperspectral imaging (HSI), which records full-spectrum light data, to simultaneously visualize color match and pattern match in the spectral and the spatial domains, respectively. Cuttlefish can dynamically camouflage themselves on any natural substrate and, despite their colorblindness, produce body patterns that appear to have high-fidelity color matches to the substrate when viewed directly by humans or with RGB images. Live camouflaged cuttlefish on natural backgrounds were imaged using HSI, and subsequent spectral analysis revealed that most reflectance spectra of individual cuttlefish and substrates were similar, rendering the color match possible. Modeling color vision of potential di- and trichromatic fish predators of cuttlefish corroborated the spectral match analysis and demonstrated that camouflaged cuttlefish show good color match as well as pattern match in the eyes of fish predators. These findings (i) indicate the strong potential of HSI technology to enhance studies of biological coloration and (ii) provide supporting evidence that cuttlefish can produce color-coordinated camouflage on natural substrates despite lacking color vision.
Lumber Scanning System for Surface Defect Detection
D. Earl Kline; Y. Jason Hou; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman
1992-01-01
This paper describes research aimed at developing a machine vision technology to drive automated processes in the hardwood forest products manufacturing industry. An industrial-scale machine vision system has been designed to scan variable-size hardwood lumber for detecting important features that influence the grade and value of lumber such as knots, holes, wane,...
Rationale, Design and Implementation of a Computer Vision-Based Interactive E-Learning System
ERIC Educational Resources Information Center
Xu, Richard Y. D.; Jin, Jesse S.
2007-01-01
This article presents a schematic application of computer vision technologies to e-learning that is synchronous, peer-to-peer-based, and supports an instructor's interaction with non-computer teaching equipments. The article first discusses the importance of these focused e-learning areas, where the properties include accurate bidirectional…
Promoting Socialization in Distance Education
ERIC Educational Resources Information Center
Tucker, Shelia Y.
2012-01-01
Learners enjoy the convenience of being able to take online courses, yet many reports missing the face-to-face contact with their peers. This researcher has sought to tap into the vision of Ferratt & Hall (2009) whereby educators and technology designers are encouraged to extend the vision of online learning to "virtually being there and…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-06
... named the following respondents: VisionTek Products LLC (``VisionTek'') of Inverness, Illinois; uPI Semiconductor Corp. (``uPI'') of Taiwan; Sapphire Technology Limited (``Sapphire'') of Hong Kong; Advanced Micro...'') initial determination (``ID'') granting uPI's and Sapphire's joint motion to terminate the investigation...
78 FR 66027 - Center for Scientific Review; Amended Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-04
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Center for Scientific Review; Amended Notice of Meeting Notice is hereby given of a change in the meeting of the Bioengineering of Neuroscience, Vision and Low Vision Technologies Study Section, October 03, 2013, 08:00 a.m. to October 04...
A clinical information systems strategy for a large integrated delivery network.
Kuperman, G. J.; Spurr, C.; Flammini, S.; Bates, D.; Glaser, J.
2000-01-01
Integrated delivery networks (IDNs) are an emerging class of health care institutions. IDNs are formed from the affiliation of individual health care institutions and are intended to be more efficient in the current fiscal health care environment. To realize efficiencies and support their strategic visions, IDNs rely critically on excellent information technology (IT). Because of its importance to the mission of the IDN, strategic decisions about IT are made by the top leadership of the IDN. At Partners HealthCare System, a large IDN in Boston, MA, a clinical information systems strategy has been created to support the Partners clinical vision. In this paper, we discuss the Partners' structure, clinical vision, and current IT initiatives in place to address the clinical vision. The initiatives are: a clinical data repository, inpatient process support, electronic medical records, a portal strategy, referral applications, knowledge resources, support for product lines, patient computing, confidentiality, and clinical decision support. We address several of the issues encountered in trying to bring excellent information technology to a large IDN. PMID:11079921
JPL Robotics Technology Applicable to Agriculture
NASA Technical Reports Server (NTRS)
Udomkesmalee, Suraphol Gabriel; Kyte, L.
2008-01-01
This slide presentation describes several technologies that are developed for robotics that are applicable for agriculture. The technologies discussed are detection of humans to allow safe operations of autonomous vehicles, and vision guided robotic techniques for shoot selection, separation and transfer to growth media,
ERIC Educational Resources Information Center
Tachino, Lance
1995-01-01
Describes the development of an educational technology plan for the Kamehameha Schools Bishop Estate. A planning committee surveyed parents, teachers, and students regarding educational technology, then created five visions for using technology in education. The paper discusses the importance of teacher training and examines what happens at the…
Telesurgery: Windows of Opportunity
Arora, Sulbha; Allahbadia, Gautam N
2007-01-01
Minimally Invasive Surgery is the most important revolution in surgical technique since the early 1900s. Its development was facilitated by the introduction of miniaturized video cameras with good image reproduction. The marvels of electronic and information technology have strengthened the biochemical and molecular power of diagnosis and the surgical and medical management of gynecology, transforming the very practice of medical science into a reality that could barely be envisaged two decades ago. We now enter the age of Robotics, Telesurgery, and Therapeutic Cloning. This dynamic process of reform continues to deliver practitioners with information, ideas and tools that spell answers to some of the most pressing dilemmas in clinical management. New technology will provide us with better opportunities of vision of the operative field, such as 3-D Endoscopy. Other promising technologies such as incorporation of ultrasonography, magnetic resonance imaging, laser-based technology or assisted optical coherence tomography will not only enhance better visualization of the surgical field, but also discriminate the pathologic tissue from the normal one, enabling the surgeon to excise the pathologic tissue accurately. Pain mapping and photodiagnosis offer a new direction in the diagnosis of microscopic endometriosis. Better detection of the disease results in higher chances of success following treatment. PMID:21475455
Telesurgery: windows of opportunity.
Arora, Sulbha; Allahbadia, Gautam N
2007-01-01
Minimally Invasive Surgery is the most important revolution in surgical technique since the early 1900s. Its development was facilitated by the introduction of miniaturized video cameras with good image reproduction. The marvels of electronic and information technology have strengthened the biochemical and molecular power of diagnosis and the surgical and medical management of gynecology, transforming the very practice of medical science into a reality that could barely be envisaged two decades ago. We now enter the age of Robotics, Telesurgery, and Therapeutic Cloning. This dynamic process of reform continues to deliver practitioners with information, ideas and tools that spell answers to some of the most pressing dilemmas in clinical management. New technology will provide us with better opportunities of vision of the operative field, such as 3-D Endoscopy. Other promising technologies such as incorporation of ultrasonography, magnetic resonance imaging, laser-based technology or assisted optical coherence tomography will not only enhance better visualization of the surgical field, but also discriminate the pathologic tissue from the normal one, enabling the surgeon to excise the pathologic tissue accurately. Pain mapping and photodiagnosis offer a new direction in the diagnosis of microscopic endometriosis. Better detection of the disease results in higher chances of success following treatment.
Flight Deck Technologies to Enable NextGen Low Visibility Surface Operations
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence (Lance) J., III; Arthur, Jarvis (Trey) J.; Kramer, Lynda J.; Norman, Robert M.; Bailey, Randall E.; Jones, Denise R.; Karwac, Jerry R., Jr.; Shelton, Kevin J.; Ellis, Kyle K. E.
2013-01-01
Many key capabilities are being identified to enable Next Generation Air Transportation System (NextGen), including the concept of Equivalent Visual Operations (EVO) . replicating the capacity and safety of today.s visual flight rules (VFR) in all-weather conditions. NASA is striving to develop the technologies and knowledge to enable EVO and to extend EVO towards a Better-Than-Visual operational concept. This operational concept envisions an .equivalent visual. paradigm where an electronic means provides sufficient visual references of the external world and other required flight references on flight deck displays that enable Visual Flight Rules (VFR)-like operational tempos while maintaining and improving safety of VFR while using VFR-like procedures in all-weather conditions. The Langley Research Center (LaRC) has recently completed preliminary research on flight deck technologies for low visibility surface operations. The work assessed the potential of enhanced vision and airport moving map displays to achieve equivalent levels of safety and performance to existing low visibility operational requirements. The work has the potential to better enable NextGen by perhaps providing an operational credit for conducting safe low visibility surface operations by use of the flight deck technologies.
A vision detailing enhancements made to the Clean Water Act 303(d) Program informed by the experience gained over the past two decades in assessing and reporting on water quality and in developing approximately 65,000 TMDLs.
77 FR 42704 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-20
... Vision Sensors, 12 AN/APG-78 Fire Control Radars (FCR) with Radar Electronics Unit (LONGBOW component... Target Acquisition and Designation Sight, 27 AN/AAR-11 Modernized Pilot Night Vision Sensors, 12 AN/APG... enhance the protection of key oil and gas infrastructure and platforms which are vital to U.S. and western...
76 FR 8278 - Special Conditions: Gulfstream Model GVI Airplane; Enhanced Flight Vision System
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-14
... detected by infrared sensors can be much different from that detected by natural pilot vision. On a dark... by many imaging infrared systems. On the other hand, contrasting colors in visual wavelengths may be... of the EFVS image and the level of EFVS infrared sensor performance could depend significantly on...
Principles of 'servant leadership' and how they can enhance practice.
Waterman, Harold
2011-02-01
This article explores the concept of service in modern health and social care. It examines the principles of servant leadership in the contexts of service, community and vision, in that, if service is what leaders do, community is who they do it for and vision is how the two concepts are brought together.
Thermal imaging as a smartphone application: exploring and implementing a new concept
NASA Astrophysics Data System (ADS)
Yanai, Omer
2014-06-01
Today's world is going mobile. Smartphone devices have become an important part of everyday life for billions of people around the globe. Thermal imaging cameras have been around for half a century and are now making their way into our daily lives. Originally built for military applications, thermal cameras are starting to be considered for personal use, enabling enhanced vision and temperature mapping for different groups of professional individuals. Through a revolutionary concept that turns smartphones into fully functional thermal cameras, we have explored how these two worlds can converge by utilizing the best of each technology. We will present the thought process, design considerations and outcome of our development process, resulting in a low-power, high resolution, lightweight USB thermal imaging device that turns Android smartphones into thermal cameras. We will discuss the technological challenges that we faced during the development of the product, and what are the system design decisions taken during the implementation. We will provide some insights we came across during this development process. Finally, we will discuss the opportunities that this innovative technology brings to the market.
Structural Concepts and Materials for Lunar Exploration Habitats
NASA Technical Reports Server (NTRS)
Belvin, W. Keith; Watson, Judith J.; Singhal, Surendra N.
2006-01-01
A new project within the Exploration Systems Mission Directorate s Technology Development Program at NASA involves development of lightweight structures and low temperature mechanisms for Lunar and Mars missions. The Structures and Mechanisms project is to develop advanced structure technology for the primary structure of various pressurized elements needed to implement the Vision for Space Exploration. The goals are to significantly enhance structural systems for man-rated pressurized structures by 1) lowering mass and/or improving efficient volume for reduced launch costs, 2) improving performance to reduce risk and extend life, and 3) improving manufacturing and processing to reduce costs. The targeted application of the technology is to provide for the primary structure of the pressurized elements of the lunar lander for both sortie and outpost missions, and surface habitats for the outpost missions. The paper presents concepts for habitats that support six month (and longer) lunar outpost missions. Both rigid and flexible habitat wall systems are discussed. The challenges of achieving a multi-functional habitat that provides micro-meteoroid, radiation, and thermal protection for explorers are identified.
The organising vision for telehealth and telecare: discourse analysis
Procter, Rob; Wherton, Joe; Sugarhood, Paul; Shaw, Sara
2012-01-01
Objective To (1) map how different stakeholders understand telehealth and telecare technologies and (2) explore the implications for development and implementation of telehealth and telecare services. Design Discourse analysis. Sample 68 publications representing diverse perspectives (academic, policy, service, commercial and lay) on telehealth and telecare plus field notes from 10 knowledge-sharing events. Method Following a familiarisation phase (browsing and informal interviews), we studied a systematic sample of texts in detail. Through repeated close reading, we identified assumptions, metaphors, storylines, scenarios, practices and rhetorical positions. We added successive findings to an emerging picture of the whole. Main findings Telehealth and telecare technologies featured prominently in texts on chronic illness and ageing. There was no coherent organising vision. Rather, four conflicting discourses were evident and engaged only minimally with one another's arguments. Modernist discourse presented a futuristic utopian vision in which assistive technologies, implemented at scale, would enable society to meet its moral obligations to older people by creating a safe ‘smart’ home environment where help was always at hand, while generating efficiency savings. Humanist discourse emphasised the uniqueness and moral worth of the individual and tailoring to personal and family context; it considered that technologies were only sometimes fit for purpose and could create as well as solve problems. Political economy discourse envisaged a techno-economic complex of powerful vested interests driving commodification of healthcare and diversion of public funds into private business. Change management discourse recognised the complicatedness of large-scale technology programmes and emphasised good project management and organisational processes. Conclusion Introduction of telehealth and telecare is hampered because different stakeholders hold different assumptions, values and world views, ‘talk past’ each other and compete for recognition and resources. If investments in these technologies are to bear fruit, more effective inter-stakeholder dialogue must occur to establish an organising vision that better accommodates competing discourses. PMID:22815469
Using an auditory sensory substitution device to augment vision: evidence from eye movements.
Wright, Thomas D; Margolis, Aaron; Ward, Jamie
2015-03-01
Sensory substitution devices convert information normally associated with one sense into another sense (e.g. converting vision into sound). This is often done to compensate for an impaired sense. The present research uses a multimodal approach in which both natural vision and sound-from-vision ('soundscapes') are simultaneously presented. Although there is a systematic correspondence between what is seen and what is heard, we introduce a local discrepancy between the signals (the presence of a target object that is heard but not seen) that the participant is required to locate. In addition to behavioural responses, the participants' gaze is monitored with eye-tracking. Although the target object is only presented in the auditory channel, behavioural performance is enhanced when visual information relating to the non-target background is presented. In this instance, vision may be used to generate predictions about the soundscape that enhances the ability to detect the hidden auditory object. The eye-tracking data reveal that participants look for longer in the quadrant containing the auditory target even when they subsequently judge it to be located elsewhere. As such, eye movements generated by soundscapes reveal the knowledge of the target location that does not necessarily correspond to the actual judgment made. The results provide a proof of principle that multimodal sensory substitution may be of benefit to visually impaired people with some residual vision and, in normally sighted participants, for guiding search within complex scenes.
NASA Astrophysics Data System (ADS)
Maguen, Ezra I.; Salz, James J.; McDonald, Marguerite B.; Pettit, George H.; Papaioannou, Thanassis; Grundfest, Warren S.
2002-06-01
A study was undertaken to assess whether results of laser vision correction with the LADARVISION 193-nm excimer laser (Alcon-Autonomous technologies) can be improved with the use of wavefront analysis generated by a proprietary system including a Hartman-Schack sensor and expressed using Zernicke polynomials. A total of 82 eyes underwent LASIK in several centers with an improved algorithm, using the CustomCornea system. A subgroup of 48 eyes of 24 patients was randomized so that one eye undergoes conventional treatment and one eye undergoes treatment based on wavefront analysis. Treatment parameters were equal for each type of refractive error. 83% of all eyes had uncorrected vision of 20/20 or better and 95% were 20/25 or better. In all groups, uncorrected visual acuities did not improve significantly in eyes treated with wavefront analysis compared to conventional treatments. Higher order aberrations were consistently better corrected in eyes undergoing treatment based on wavefront analysis for LASIK at 6 months postop. In addition, the number of eyes with reduced RMS was significantly higher in the subset of eyes treated with a wavefront algorithm (38% vs. 5%). Wavefront technology may improve the outcomes of laser vision correction with the LADARVISION excimer laser. Further refinements of the technology and clinical trials will contribute to this goal.
Engineering Social Justice into Traffic Control for Self-Driving Vehicles?
Mladenovic, Milos N; McPherson, Tristram
2016-08-01
The convergence of computing, sensing, and communication technology will soon permit large-scale deployment of self-driving vehicles. This will in turn permit a radical transformation of traffic control technology. This paper makes a case for the importance of addressing questions of social justice in this transformation, and sketches a preliminary framework for doing so. We explain how new forms of traffic control technology have potential implications for several dimensions of social justice, including safety, sustainability, privacy, efficiency, and equal access. Our central focus is on efficiency and equal access as desiderata for traffic control design. We explain the limitations of conventional traffic control in meeting these desiderata, and sketch a preliminary vision for a next-generation traffic control tailored to address better the demands of social justice. One component of this vision is cooperative, hierarchically distributed self-organization among vehicles. Another component of this vision is a priority system enabling selection of priority levels by the user for each vehicle trip in the network, based on the supporting structure of non-monetary credits.
Mills, K C; Spruill, S E; Kanne, R W; Parkman, K M; Zhang, Y
2001-01-01
A computerized task was used in two studies to examine the influence of stimulants, sedatives, and fatigue on single-target and divided-attention responses in different parts of the visual field. The drug effects were evaluated over time with repeated behavioral and subjective measures against ascending and descending drug levels. In the first study, 18 fully rested participants received placebo, alprazolam (0.5 mg), and dextroamphetamine (10 mg). Alprazolam impairs performance, whereas dextroamphetamine induces enhancement and tunnel vision. Study 2 exposed 32 participants to fatigue and no fatigue with a repeated-measures crossover design. Four independent groups subsequently received placebo, dextroamphetamine (10 mg), caffeine (250 mg), or alcohol (.07%). Under fatigue, stimulants have no performance-enhancing effects, whereas impairment from alcohol is severe. Under no fatigue, alcohol has a modest effect, caffeine has no effect, and dextroamphetamine significantly enhances divided-attention performance coincident with tunnel vision. Participants rate all drug effects more stimulating and less sedating while fatigued. Implications for transportation safety are discussed. Actual or potential applications of this research include driver and pilot training.
Campus Technology Innovators Awards 2009
ERIC Educational Resources Information Center
Grush, Mary; Villano, Matt
2009-01-01
The annual Campus Technology Innovators awards recognize higher education institutions that take true initiative--even out-and-out risk--to better serve the campus community via technology. These top-notch university administrators, faculty, and staff demonstrate something more than a "job well done"; their vision and leadership have…
Relationship between the Full Range Leadership Model and Information Technology Tools Usage
ERIC Educational Resources Information Center
Landell, Antonio White
2013-01-01
Due to major technological and social changes, world dynamics have undergone tremendous leadership style and technology transitions. The transformation of information technology tools usage (ITTU) created a new paradigm confronting leaders that can provide the right change of vision to effectively motivate, inspire, and transform others to work at…
The Right Track for Vision Correction
NASA Technical Reports Server (NTRS)
2003-01-01
More and more people are putting away their eyeglasses and contact lenses as a result of laser vision correction surgery. LASIK, the most widely performed version of this surgical procedure, improves vision by reshaping the cornea, the clear front surface of the eye, using an excimer laser. One excimer laser system, Alcon s LADARVision 4000, utilizes a laser radar (LADAR) eye tracking device that gives it unmatched precision. During LASIK surgery, laser During LASIK surgery, laser pulses must be accurately placed to reshape the cornea. A challenge to this procedure is the patient s constant eye movement. A person s eyes make small, involuntary movements known as saccadic movements about 100 times per second. Since the saccadic movements will not stop during LASIK surgery, most excimer laser systems use an eye tracking device that measures the movements and guides the placement of the laser beam. LADARVision s eye tracking device stems from the LADAR technology originally developed through several Small Business Innovation Research (SBIR) contracts with NASA s Johnson Space Center and the U.S. Department of Defense s Ballistic Missile Defense Office (BMDO). In the 1980s, Johnson awarded Autonomous Technologies Corporation a Phase I SBIR contract to develop technology for autonomous rendezvous and docking of space vehicles to service satellites. During Phase II of the Johnson SBIR contract, Autonomous Technologies developed a prototype range and velocity imaging LADAR to demonstrate technology that could be used for this purpose.
Analysis of the resistive network in a bio-inspired CMOS vision chip
NASA Astrophysics Data System (ADS)
Kong, Jae-Sung; Sung, Dong-Kyu; Hyun, Hyo-Young; Shin, Jang-Kyoo
2007-12-01
CMOS vision chips for edge detection based on a resistive circuit have recently been developed. These chips help develop neuromorphic systems with a compact size, high speed of operation, and low power dissipation. The output of the vision chip depends dominantly upon the electrical characteristics of the resistive network which consists of a resistive circuit. In this paper, the body effect of the MOSFET for current distribution in a resistive circuit is discussed with a simple model. In order to evaluate the model, two 160×120 CMOS vision chips have been fabricated by using a standard CMOS technology. The experimental results have been nicely matched with our prediction.
Nose, Atsushi; Yamazaki, Tomohiro; Katayama, Hironobu; Uehara, Shuji; Kobayashi, Masatsugu; Shida, Sayaka; Odahara, Masaki; Takamiya, Kenichi; Matsumoto, Shizunori; Miyashita, Leo; Watanabe, Yoshihiro; Izawa, Takashi; Muramatsu, Yoshinori; Nitta, Yoshikazu; Ishikawa, Masatoshi
2018-04-24
We have developed a high-speed vision chip using 3D stacking technology to address the increasing demand for high-speed vision chips in diverse applications. The chip comprises a 1/3.2-inch, 1.27 Mpixel, 500 fps (0.31 Mpixel, 1000 fps, 2 × 2 binning) vision chip with 3D-stacked column-parallel Analog-to-Digital Converters (ADCs) and 140 Giga Operation per Second (GOPS) programmable Single Instruction Multiple Data (SIMD) column-parallel PEs for new sensing applications. The 3D-stacked structure and column parallel processing architecture achieve high sensitivity, high resolution, and high-accuracy object positioning.
Bioelectronic retinal prosthesis
NASA Astrophysics Data System (ADS)
Weiland, James D.
2016-05-01
Retinal prosthesis have been translated to clinical use over the past two decades. Currently, two devices have regulatory approval for the treatment of retinitis pigmentosa and one device is in clinical trials for treatment of age-related macular degeneration. These devices provide partial sight restoration and patients use this improved vision in their everyday lives to navigate and to detect large objects. However, significant vision restoration will require both better technology and improved understanding of the interaction between electrical stimulation and the retina. In particular, current retinal prostheses do not provide peripheral visions due to technical and surgical limitations, thus limiting the effectiveness of the treatment. This paper reviews recent results from human implant patients and presents technical approaches for peripheral vision.
Tunnel vision: sharper gradient of spatial attention in autism.
Robertson, Caroline E; Kravitz, Dwight J; Freyberg, Jan; Baron-Cohen, Simon; Baker, Chris I
2013-04-17
Enhanced perception of detail has long been regarded a hallmark of autism spectrum conditions (ASC), but its origins are unknown. Normal sensitivity on all fundamental perceptual measures-visual acuity, contrast discrimination, and flicker detection-is strongly established in the literature. If individuals with ASC do not have superior low-level vision, how is perception of detail enhanced? We argue that this apparent paradox can be resolved by considering visual attention, which is known to enhance basic visual sensitivity, resulting in greater acuity and lower contrast thresholds. Here, we demonstrate that the focus of attention and concomitant enhancement of perception are sharper in human individuals with ASC than in matched controls. Using a simple visual acuity task embedded in a standard cueing paradigm, we mapped the spatial and temporal gradients of attentional enhancement by varying the distance and onset time of visual targets relative to an exogenous cue, which obligatorily captures attention. Individuals with ASC demonstrated a greater fall-off in performance with distance from the cue than controls, indicating a sharper spatial gradient of attention. Further, this sharpness was highly correlated with the severity of autistic symptoms in ASC, as well as autistic traits across both ASC and control groups. These findings establish the presence of a form of "tunnel vision" in ASC, with far-reaching implications for our understanding of the social and neurobiological aspects of autism.
Dissolvable tattoo sensors: from science fiction to a viable technology
NASA Astrophysics Data System (ADS)
Cheng, Huanyu; Yi, Ning
2017-01-01
Early surrealistic painting and science fiction movies have envisioned dissolvable tattoo electronic devices. In this paper, we will review the recent advances that transform that vision into a viable technology, with extended capabilities even beyond the early vision. Specifically, we focus on the discussion of a stretchable design for tattoo sensors and degradable materials for dissolvable sensors, in the form of inorganic devices with a performance comparable to modern electronics. Integration of these two technologies as well as the future developments of bio-integrated devices is also discussed. Many of the appealing ideas behind developments of these devices are drawn from nature and especially biological systems. Thus, bio-inspiration is believed to continue playing a key role in future devices for bio-integration and beyond.
Use, perceptions, and benefits of automotive technologies among aging drivers.
Eby, David W; Molnar, Lisa J; Zhang, Liang; St Louis, Renée M; Zanier, Nicole; Kostyniuk, Lidia P; Stanciu, Sergiu
2016-12-01
Advanced in-vehicle technologies have been proposed as a potential way to keep older adults driving for as long as they can safely do so, by taking into account the common declines in functional abilities experienced by older adults. The purpose of this report was to synthesize the knowledge about older drivers and advanced in-vehicle technologies, focusing on three areas: use (how older drivers use these technologies), perception (what they think about the technologies), and outcomes (the safety and/or comfort benefits of the technologies). Twelve technologies were selected for review and grouped into three categories: crash avoidance systems (lane departure warning, curve speed warning, forward collision warning, blind spot warning, parking assistance); in-vehicle information systems (navigation assistance, intelligent speed adaptation); and other systems (adaptive cruise control, automatic crash notification, night vision enhancement, adaptive headlight, voice activated control). A comprehensive and systematic search was conducted for each technology to collect related publications. 271 articles were included into the final review. Research findings for each of the 12 technologies are synthesized in relation to how older adults use and think about the technologies as well as potential benefits. These results are presented separately for each technology. Can advanced in-vehicle technologies help extend the period over which an older adult can drive safely? This report answers this question with an optimistic "yes." Some of the technologies reviewed in this report have been shown to help older drivers avoid crashes, improve the ease and comfort of driving, and travel to places and at times that they might normally avoid.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widergren, Steven E.; Knight, Mark R.; Melton, Ronald B.
The Interoperability Strategic Vision whitepaper aims to promote a common understanding of the meaning and characteristics of interoperability and to provide a strategy to advance the state of interoperability as applied to integration challenges facing grid modernization. This includes addressing the quality of integrating devices and systems and the discipline to improve the process of successfully integrating these components as business models and information technology improve over time. The strategic vision for interoperability described in this document applies throughout the electric energy generation, delivery, and end-use supply chain. Its scope includes interactive technologies and business processes from bulk energy levelsmore » to lower voltage level equipment and the millions of appliances that are becoming equipped with processing power and communication interfaces. A transformational aspect of a vision for interoperability in the future electric system is the coordinated operation of intelligent devices and systems at the edges of grid infrastructure. This challenge offers an example for addressing interoperability concerns throughout the electric system.« less
The hydrogen technology assessment, phase 1
NASA Technical Reports Server (NTRS)
Bain, Addison
1991-01-01
The purpose of this phase 1 report is to begin to form the information base of the economics and energy uses of hydrogen-related technologies on which the members of the National Hydrogen Association (NHA) can build a hydrogen vision of the future. The secondary goal of this report is the development of NHA positions on national research, development, and demonstration opportunities. The third goal, with the aid of the established hydrogen vision and NHA positions, is to evaluate ongoing federal research goals and activities. The evaluations will be performed in a manner that compares the costs associated with using systems that achieve those goals against the cost of performing those tasks today with fossil fuels. From this ongoing activity should emerge an NHA information base, one or more hydrogen visions of the future, and cost and performance targets for hydrogen applications to complete in the market place.
Machine vision methods for use in grain variety discrimination and quality analysis
NASA Astrophysics Data System (ADS)
Winter, Philip W.; Sokhansanj, Shahab; Wood, Hugh C.
1996-12-01
Decreasing cost of computer technology has made it feasible to incorporate machine vision technology into the agriculture industry. The biggest attraction to using a machine vision system is the computer's ability to be completely consistent and objective. One use is in the variety discrimination and quality inspection of grains. Algorithms have been developed using Fourier descriptors and neural networks for use in variety discrimination of barley seeds. RGB and morphology features have been used in the quality analysis of lentils, and probability distribution functions and L,a,b color values for borage dockage testing. These methods have been shown to be very accurate and have a high potential for agriculture. This paper presents the techniques used and results obtained from projects including: a lentil quality discriminator, a barley variety classifier, a borage dockage tester, a popcorn quality analyzer, and a pistachio nut grading system.
Draper Laboratory small autonomous aerial vehicle
NASA Astrophysics Data System (ADS)
DeBitetto, Paul A.; Johnson, Eric N.; Bosse, Michael C.; Trott, Christian A.
1997-06-01
The Charles Stark Draper Laboratory, Inc. and students from Massachusetts Institute of Technology and Boston University have cooperated to develop an autonomous aerial vehicle that won the 1996 International Aerial Robotics Competition. This paper describes the approach, system architecture and subsystem designs for the entry. This entry represents a combination of many technology areas: navigation, guidance, control, vision processing, human factors, packaging, power, real-time software, and others. The aerial vehicle, an autonomous helicopter, performs navigation and control functions using multiple sensors: differential GPS, inertial measurement unit, sonar altimeter, and a flux compass. The aerial transmits video imagery to the ground. A ground based vision processor converts the image data into target position and classification estimates. The system was designed, built, and flown in less than one year and has provided many lessons about autonomous vehicle systems, several of which are discussed. In an appendix, our current research in augmenting the navigation system with vision- based estimates is presented.
Systems Analysis of Remote Piloting/Robotics Technology Applicable to Assault Rafts.
1982-01-01
LTOG 3.936 m n Driver Position Driver is only member of crew seated under armor - seated in front leftI of hull - 3 M17 periscopes - single piece...hatch cover. Vision Data Summary I.D - 3P; H - 82.5* to 165* V - 110 to 220 SC - Is not under armor ; therefore has freedom of vision. Mobility Information... under armor - seated in front left * of hull - 3 Ml7 periscopes - single piece hatch cover. 4Vision Data Summary SD - 3P; H - 82.50 to 1650 V - 110 to
The way of sex: Joseph Needham and Jolan Chang.
Rocha, Leon Antonio
2012-09-01
This paper analyses the understandings of Daoist alchemy and Chinese sexuality of Joseph Needham and his friend and correspondent, the Chinese-Swedish writer Jolan Chang (Chang Chung-lan, 1917-2002). Using the extensive correspondence between the two men, as well as Needham's files on "inner alchemy" deposited at the Needham Research Institute, the paper begins with a partial reconstruction of a 1977 symposium, chaired by Needham, to promote Chang's new book, The Tao of Love and Sex: The Ancient Chinese Way to Ecstasy. Needham and Chang's visions of Chinese sex are then read against excerpts from Science and Civilisation in China, specifically Volume V: Chemistry and Chemical Technology, Part 5: Spagyrical Discovery and Invention: Physiological Alchemy (1983). Three inter-related aspects are explored. First, reading Science and Civilisation in China against materials in the Needham archives offers crucial hints to Needham's historiography and historical practice. Second, the way that Daoist regimens came to be actively reconstructed and repackaged as practices concerned with the enhancement of sexual pleasure and intensity. Third, the investigation of the networks and circulations of assumptions, visions, fantasies about "China". Copyright © 2012 Elsevier Ltd. All rights reserved.
Automatic detection system of shaft part surface defect based on machine vision
NASA Astrophysics Data System (ADS)
Jiang, Lixing; Sun, Kuoyuan; Zhao, Fulai; Hao, Xiangyang
2015-05-01
Surface physical damage detection is an important part of the shaft parts quality inspection and the traditional detecting methods are mostly human eye identification which has many disadvantages such as low efficiency, bad reliability. In order to improve the automation level of the quality detection of shaft parts and establish its relevant industry quality standard, a machine vision inspection system connected with MCU was designed to realize the surface detection of shaft parts. The system adopt the monochrome line-scan digital camera and use the dark-field and forward illumination technology to acquire images with high contrast; the images were segmented to Bi-value images through maximum between-cluster variance method after image filtering and image enhancing algorithms; then the mainly contours were extracted based on the evaluation criterion of the aspect ratio and the area; then calculate the coordinates of the centre of gravity of defects area, namely locating point coordinates; At last, location of the defects area were marked by the coding pen communicated with MCU. Experiment show that no defect was omitted and false alarm error rate was lower than 5%, which showed that the designed system met the demand of shaft part on-line real-time detection.
Ontological Representation of Light Wave Camera Data to Support Vision-Based AmI
Serrano, Miguel Ángel; Gómez-Romero, Juan; Patricio, Miguel Ángel; García, Jesús; Molina, José Manuel
2012-01-01
Recent advances in technologies for capturing video data have opened a vast amount of new application areas in visual sensor networks. Among them, the incorporation of light wave cameras on Ambient Intelligence (AmI) environments provides more accurate tracking capabilities for activity recognition. Although the performance of tracking algorithms has quickly improved, symbolic models used to represent the resulting knowledge have not yet been adapted to smart environments. This lack of representation does not allow to take advantage of the semantic quality of the information provided by new sensors. This paper advocates for the introduction of a part-based representational level in cognitive-based systems in order to accurately represent the novel sensors' knowledge. The paper also reviews the theoretical and practical issues in part-whole relationships proposing a specific taxonomy for computer vision approaches. General part-based patterns for human body and transitive part-based representation and inference are incorporated to an ontology-based previous framework to enhance scene interpretation in the area of video-based AmI. The advantages and new features of the model are demonstrated in a Social Signal Processing (SSP) application for the elaboration of live market researches.
Apprenticeship 2000: Ontario Community Colleges' Vision for the 21st Century.
ERIC Educational Resources Information Center
Association of Colleges of Applied Arts and Technology of Ontario, North York.
In response to the Ministry of Education and Training Discussion Paper on Apprenticeship Reform, the Council of Presidents of the Colleges of Applied Arts and Technology of Ontario presented a new vision for apprenticeship in Ontario. The 21st century apprenticeship system aims to remove barriers and enable workers to successfully adjust and cope…
ERIC Educational Resources Information Center
Supalo, Cary A.; Hill, April A.; Larrick, Carleigh G.
2014-01-01
Hands-on science enrichment experiences can be limited for students with blindness or low vision (BLV). This manuscript describes recent hands-on summer enrichment programs held for BLV students. Also presented are innovative technologies that were developed to provide spoken quantitative feedback for BLV students engaged in hands-on science…
ERIC Educational Resources Information Center
Wilkinson, John
2013-01-01
Humans have always had the vision to one day live on other planets. This vision existed even before the first person was put into orbit. Since the early space missions of putting humans into orbit around Earth, many advances have been made in space technology. We have now sent many space probes deep into the Solar system to explore the planets and…
Maximizing the Impact: The Pivotal Role of Technology in a 21st Century Education System
ERIC Educational Resources Information Center
Vockley, Martha
2007-01-01
All students need a more robust education--and a refreshingly different kind of education--than most are getting today. The vision of learning individuals embrace focuses on teaching students to become critical thinkers, problem solvers and innovators; effective communicators and collaborators; and self-directed learners. This vision responds to…
Both Sides Now: Visualizing and Drawing with the Right and Left Hemispheres of the Brain
ERIC Educational Resources Information Center
Schiferl, E. I.
2008-01-01
Neuroscience research provides new models for understanding vision that challenge Betty Edwards' (1979, 1989, 1999) assumptions about right brain vision and common conventions of "realistic" drawing. Enlisting PET and fMRI technology, neuroscience documents how the brains of normal adults respond to images of recognizable objects and scenes.…
Vision and Reality in Electronic Textbooks: What Publishers Need to Do to Survive
ERIC Educational Resources Information Center
Brown, Byron W.
2012-01-01
Today's electronic textbooks (e-texts) are technologically backward and over-priced. Yet publishers continue to press for contracts with colleges and universities that move toward the mandatory purchase by students of e-texts instead of printed books. The author lays out his vision for digital learning materials, explains the details of the…
Framing ICT, Teachers and Learners in Australian School Education ICT Policy
ERIC Educational Resources Information Center
Jordan, Kathy
2011-01-01
It is well over 20 years since information and communication technologies (ICT) was first included as part of a future vision for Australia's schools. Since this time numerous national policies have been developed, which collectively articulate an official discourse in support of a vision for ICT to be embedded in our schools, and routinely used…
ERIC Educational Resources Information Center
Liebmann, Jeffrey D.
Information technology is changing the workplace. Forecasts range from wondrous visions of future capabilities to dark scenarios of employment loss and dehumanization. Some predict revolutionary impacts, while others conclude that the way we do business will change only gradually if much at all. The less positive visions of the future workplace…
NASA Technical Reports Server (NTRS)
Foyle, David C.; Kaiser, Mary K.; Johnson, Walter W.
1992-01-01
This paper reviews some of the sources of visual information that are available in the out-the-window scene and describes how these visual cues are important for routine pilotage and training, as well as the development of simulator visual systems and enhanced or synthetic vision systems for aircraft cockpits. It is shown how these visual cues may change or disappear under environmental or sensor conditions, and how the visual scene can be augmented by advanced displays to capitalize on the pilot's excellent ability to extract visual information from the visual scene.
Image-plane processing of visual information
NASA Technical Reports Server (NTRS)
Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.
1984-01-01
Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in lighter levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
Educational Applications of Vision Therapy: A Pilot Study on Children with Autism.
ERIC Educational Resources Information Center
Lovelace, Kelly; Rhodes, Heidi; Chambliss, Catherine
This report discusses the outcomes of a study that explored the feasibility of using vision therapy (VT) as part of an interdisciplinary approach to the education of children with autism. Traditional research on VT has explored its usefulness in helping patients to use both eyes together, improve depth perception, and enhance visual acuity.…
B. F. Skinner's Utopian Vision: Behind and beyond "Walden Two"
ERIC Educational Resources Information Center
Altus, Deborah E.; Morris, Edward K.
2009-01-01
This paper addresses B. F. Skinner's utopian vision for enhancing social justice and human wellbeing in his 1948 novel, "Walden Two." In the first part, we situate the book in its historical, intellectual, and social context of the utopian genre, address critiques of the book's premises and practices, and discuss the fate of intentional…
Vision in Children and Adolescents with Autistic Spectrum Disorder: Evidence for Reduced Convergence
ERIC Educational Resources Information Center
Milne, Elizabeth; Griffiths, Helen; Buckley, David; Scope, Alison
2009-01-01
Evidence of atypical perception in individuals with ASD is mainly based on self report, parental questionnaires or psychophysical/cognitive paradigms. There have been relatively few attempts to establish whether binocular vision is enhanced, intact or abnormal in those with ASD. To address this, we screened visual function in 51 individuals with…
Technology-Rich Schools Up Close
ERIC Educational Resources Information Center
Levin, Barbara B.; Schrum, Lynne
2013-01-01
This article observes that schools that use technology well have key commonalities, including a project-based curriculum and supportive, distributed leadership. The authors' research into tech-rich schools revealed that schools used three strategies to integrate technology successfully. They did so by establishing the vision and culture,…
Bauer, Ben
2015-09-01
Scientific experimentation requires specification and control of independent variables with accurate measurement of dependent variables. In Vision Sciences (here broadly including experimental psychology, cognitive neuroscience, psychophysics, and clinical vision), proper specification and control of stimulus rendering (already a thorny issue) may become more problematic as several newer display technologies replace cathode ray tubes (CRTs) in the lab. The present paper alerts researchers to spatiotemporal differences in display technologies and how these might affect various types of experiments. Parallels are drawn to similar challenges and solutions that arose during the change from cabinet-style tachistoscopes to computer driven CRT tachistoscopes. Technical papers outlining various strengths and limitations of several classes of display devices are introduced as a resource for the reader wanting to select appropriate displays for different presentation requirements. These papers emphasise the need to measure rather than assume display characteristics because manufacturers' specifications and software reports/settings may not correspond with actual performance. This is consistent with the call by several Vision Science and Psychological Science bodies to increase replications and increase detail in Method sections. Finally, several recent tachistoscope-based experiments, which focused on the same question but were implemented with different technologies, are compared for illustrative purposes. (c) 2015 APA, all rights reserved).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cotrell, Jason; Veers, Paul
2015-09-29
Keynote presentation at the Iowa State Wind Energy Symposium. This presentation examines several cutting-edge technologies and research being performed by the National Renewable Energy Laboratory that is helping achieve the U.S. Department of Energy's Wind Vision.
NASA Technical Reports Server (NTRS)
Delnore, Victor E. (Compiler)
1994-01-01
The Fifth (and Final) Combined Manufacturers' and Technologists' Airborne Windshear Review Meeting was hosted jointly by the NASA Langley Research Center (LaRC) and the Federal Aviation Administration (FAA) in Hampton, Virginia, on September 28-30, 1993. The purpose of the meeting was to report on the highly successful windshear experiments conducted by government, academic institutions, and industry; to transfer the results to regulators, manufacturers, and users; and to set initiatives for future aeronautics technology research. The formal sessions covered recent developments in windshear flight testing; windshear modeling, flight management, and ground-based systems; airborne windshear detection systems; certification and regulatory issues; development and applications of sensors for wake vortex detection; and synthetic and enhanced vision systems.