Sample records for night vision imaging

  1. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  2. Salient contour extraction from complex natural scene in night vision image

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lian-fa

    2014-03-01

    The theory of center-surround interaction in the non-classical receptive field can be applied to night vision information processing. In this work, an optimized compound receptive field modulation method is proposed to extract salient contours from complex natural scenes in low-light-level (LLL) and infrared images. The core idea is that multi-feature analysis can recognize inhomogeneity in the modulatory coverage more accurately, and that center-surround pairs whose grouping structure satisfies the Gestalt rules deserve a high connection probability. Computationally, a multi-feature contrast-weighted inhibition model is presented to suppress background and reduce mutual inhibition among contour elements; a fuzzy connection facilitation model is proposed to enhance contour response, connect discontinuous contours, and further eliminate randomly distributed noise and texture; and a multi-scale iterative attention method is designed to accomplish the dynamic modulation process and extract target contours at multiple sizes. This work provides a series of high-performance, biologically motivated computational visual models for contour detection in cluttered night vision scenes.
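
    The inhibition step at the heart of such models is straightforward to prototype. Below is a minimal sketch of generic isotropic surround inhibition on a gradient-magnitude map; the sigma values, surround ratio, and inhibition weight are illustrative assumptions, and the sketch does not reproduce the authors' multi-feature weighted model or the fuzzy facilitation stage.

    ```python
    # Minimal sketch of isotropic surround inhibition for contour saliency,
    # assuming a 2-D grayscale array. A generic stand-in for the paper's
    # multi-feature weighted inhibition model; parameters are illustrative.
    import numpy as np
    from scipy import ndimage

    def surround_inhibited_contours(img, sigma=2.0, surround_ratio=4.0, alpha=1.0):
        img = img.astype(np.float64)
        # Center response: Gaussian-derivative gradient magnitude.
        gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))
        gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))
        center = np.hypot(gx, gy)
        # Surround response: contour energy in an annular neighborhood,
        # approximated by a difference of Gaussians of the response map.
        inner = ndimage.gaussian_filter(center, sigma)
        outer = ndimage.gaussian_filter(center, sigma * surround_ratio)
        surround = np.maximum(outer - inner, 0.0)
        # Texture regions (strong surround) are suppressed; isolated smooth
        # contours (weak surround) survive the subtraction.
        return np.maximum(center - alpha * surround, 0.0)
    ```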

  3. Helicopter flights with night-vision goggles: Human factors aspects

    NASA Technical Reports Server (NTRS)

    Brickner, Michael S.

    1989-01-01

    Night-vision goggles (NVGs) and, in particular, the advanced, helmet-mounted Aviator's Night Vision Imaging System (ANVIS) allow helicopter pilots to perform low-level flight at night. ANVIS consists of light intensifier tubes, which amplify low-intensity ambient illumination (starlight and moonlight), and an optical system, which together produce a bright image of the scene. However, these NVGs do not turn night into day, and, while they may often provide significant advantages over unaided night flight, they may also result in visual fatigue, high workload, and safety hazards. These problems reflect both system limitations and human-factors issues. A brief description of the technical characteristics of NVGs and of human night-vision capabilities is followed by a description and analysis of specific perceptual problems which occur with the use of NVGs in flight. Some of the issues addressed include: limitations imposed by a restricted field of view; problems related to binocular rivalry; the consequences of inappropriate focusing of the eye; the effects of ambient illumination levels and of various types of terrain on image quality; difficulties in distance and slope estimation; effects of dazzling; and visual fatigue and superimposed symbology. These issues are described and analyzed in terms of their possible consequences on helicopter pilot performance. The additional influence of individual differences among pilots is emphasized. Thermal imaging systems (forward looking infrared (FLIR)) are described briefly and compared to light intensifier systems (NVGs). Many of the phenomena which are described are not readily understood. More research is required to better understand the human-factors problems created by the use of NVGs and other night-vision aids, to enhance system design, and to improve training methods and simulation techniques.

  4. Evaluation of visual acuity with Gen 3 night vision goggles

    NASA Technical Reports Server (NTRS)

    Bradley, Arthur; Kaiser, Mary K.

    1994-01-01

    Using laboratory simulations, visual performance was measured at luminance and night vision imaging system (NVIS) radiance levels typically encountered in the natural nocturnal environment. Comparisons were made between visual performance with unaided vision and that observed with subjects using image intensification. An Aviator's Night Vision Imaging System (ANVIS-6) binocular image intensifier was used. Light levels available in the experiments (using video display technology and filters) were matched to those of reflecting objects illuminated by representative night-sky conditions (e.g., full moon, starlight). Results show that, as expected, the precipitous decline in foveal acuity experienced with decreasing mesopic luminance levels is effectively shifted to much lower light levels by use of an image intensification system. The benefits of intensification are most pronounced foveally, but still observable at 20 deg eccentricity. Binocularity provides a small improvement in visual acuity under both intensified and unintensified conditions.

  5. Night Vision Goggle Training; Development and Production of Six Video Programs

    DTIC Science & Technology

    1992-11-01

    Subject terms: multimedia; video production; aerial photography; night vision; videodisc; image intensification; night vision goggles. Number of pages: 18. The programs demonstrate NVG field of view, field of regard, and scan techniques, run approximately ten minutes each, and serve as a reference tool at the squadron or wing level. Training device modalities include didactic and video; the effort also covers the production of a videodisc that will serve as an NVG audio-visual database.

  6. Vision and night driving abilities of elderly drivers.

    PubMed

    Gruber, Nicole; Mosimann, Urs P; Müri, René M; Nef, Tobias

    2013-01-01

    In this article, we review the impact of vision on older people's night driving abilities. Driving is the preferred and primary mode of transport for older people. It is a complex activity for which intact vision is essential to road safety. Night driving requires mesopic rather than scotopic vision, because there is always some light available when driving at night. Scotopic refers to night vision, photopic refers to vision under well-lit conditions, and mesopic vision is a combination of photopic and scotopic vision in low but not quite dark lighting situations. With increasing age, mesopic vision decreases and glare sensitivity increases, even in the absence of ocular diseases. Because of the increasing number of elderly drivers, more drivers are affected by night vision difficulties. Vision tests which accurately predict night driving ability are therefore of great interest. We reviewed the existing literature on age-related influences on vision and on vision tests that correlate with or predict night driving ability. We identified several studies that investigated the relationship between vision tests and night driving. These studies found correlations between impaired mesopic vision or increased glare sensitivity and impaired night driving, but no correlation was found for other tests, for example, useful field of view or visual field. The correlation between photopic visual acuity, the most commonly used test when assessing elderly drivers, and night driving ability has not yet been fully clarified. Photopic visual acuity alone is not a good predictor of night driving ability. Mesopic visual acuity and glare sensitivity seem relevant for night driving. Due to the small number of studies evaluating predictors for night driving ability, further research is needed.

  7. Night vision: requirements and possible roadmap for FIR and NIR systems

    NASA Astrophysics Data System (ADS)

    Källhammer, Jan-Erik

    2006-04-01

    A night vision system must increase visibility in situations where only low beam headlights can be used today. As pedestrians and animals face the highest risk increase in night-time traffic due to darkness, the ability to detect those objects should be the main performance criterion, and the system must remain effective when facing the headlights of oncoming vehicles. Far infrared systems have been shown to be superior to near infrared systems in terms of pedestrian detection distance. Near infrared images were rated as having significantly higher visual clutter than far infrared images, and visual clutter has been shown to correlate with reduced pedestrian detection distance. Far infrared images are perceived as more unusual and therefore more difficult to interpret, although this image appearance is likely related to their lower visual clutter. However, the main issue in comparing the two technologies should be how well they solve the driver's problem of insufficient visibility under low beam conditions, especially regarding pedestrians and other vulnerable road users. With the addition of an automatic detection aid, a main issue will be whether the advantage of FIR systems will vanish given NIR systems with well-performing automatic pedestrian detection functionality. The first night vision introductions did not generate the sales volumes initially expected. A renewed interest in night vision systems is, however, to be expected after the release of night vision systems by BMW, Mercedes, and Honda, the latter with automatic pedestrian detection.

  8. Portable real-time color night vision

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Hogervorst, Maarten A.

    2008-03-01

    We developed a simple and fast lookup-table-based method to derive and apply natural daylight colors to multi-band night-time images. The method deploys an optimal color transformation derived from a set of samples taken from a daytime color reference image. The colors in the resulting colorized multi-band night-time images closely resemble the colors in the daytime color reference image. Also, object colors remain invariant under panning operations and are independent of the scene content. Here we describe the implementation of this method in two prototype portable dual-band real-time night vision systems. One system provides co-aligned visual and near-infrared bands from two image intensifiers; the other provides co-aligned images from a digital image intensifier and an uncooled long-wave infrared microbolometer. The co-aligned images from both systems are further processed by a notebook computer. The color mapping is implemented as a real-time lookup-table transform. The resulting colorized video streams can be displayed in real time on head-mounted displays and stored on the hard disk of the notebook computer. Preliminary field trials demonstrate the potential of these systems for applications like surveillance, navigation, and target detection.
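
    A sample-based color lookup table of this kind can be sketched as follows, assuming two co-registered 8-bit band images and a daytime RGB reference photo of the same scene. The 64-level quantization and the function names are illustrative assumptions, not the published implementation.

    ```python
    # Minimal sketch of a sample-based color LUT for dual-band night imagery.
    import numpy as np

    LEVELS = 64  # per-band quantization; trades LUT size against color fidelity

    def _quantize(band):
        """Map an 8-bit band image to integer bin indices 0..LEVELS-1."""
        return (band.astype(np.float64) / 256.0 * LEVELS).astype(int).clip(0, LEVELS - 1)

    def build_lut(band_a, band_b, ref_rgb):
        """Learn the mean daytime RGB color for every (band_a, band_b) bin
        from a co-registered training scene with a daytime reference photo."""
        ia, ib = _quantize(band_a), _quantize(band_b)
        lut = np.zeros((LEVELS, LEVELS, 3))
        count = np.zeros((LEVELS, LEVELS, 1))
        np.add.at(lut, (ia, ib), ref_rgb.astype(np.float64))
        np.add.at(count, (ia, ib), 1.0)
        return (lut / np.maximum(count, 1.0)).astype(np.uint8)

    def apply_lut(band_a, band_b, lut):
        """Colorize a new dual-band frame with one table lookup per pixel."""
        return lut[_quantize(band_a), _quantize(band_b)]
    ```

    Because the mapping is a fixed table lookup, object colors stay constant as the camera pans, which is consistent with the invariance the abstract describes.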

  9. Multi-channel automotive night vision system

    NASA Astrophysics Data System (ADS)

    Lu, Gang; Wang, Li-jun; Zhang, Yi

    2013-09-01

    A four-channel automotive night vision system is designed and developed. It consists of four active near-infrared cameras and a multi-channel image-processing display unit; the cameras are placed at the front, left, right, and rear of the automobile. The system uses a near-infrared laser light source whose beam is collimated. The source contains a thermoelectric cooler (TEC), can be synchronized with camera focusing, and provides automatic light-intensity adjustment, which together ensure image quality. The composition of the system is described in detail; on this basis, beam collimation, LD driving and LD temperature control of the near-infrared laser source, and the four-channel image-processing display are discussed. The system can be used for driver assistance, blind spot information (BLIS), parking assistance, and alarm systems by day and night.
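
    The LD temperature control mentioned above is typically a feedback loop around the TEC. The sketch below shows one plausible PI loop; the read_thermistor and set_tec_current interfaces, the setpoint, and the gains are hypothetical placeholders, not details from the paper.

    ```python
    # Minimal sketch of a PI temperature loop for a TEC-stabilized laser diode.
    import time

    def tec_control_loop(read_thermistor, set_tec_current,
                         setpoint_c=25.0, kp=0.8, ki=0.05, dt=0.1):
        """Drive TEC current from the temperature error. Positive current is
        assumed to cool the diode; sign conventions vary by driver hardware."""
        integral = 0.0
        while True:
            error = read_thermistor() - setpoint_c            # degrees above setpoint
            integral = max(min(integral + error * dt, 50.0), -50.0)  # anti-windup clamp
            set_tec_current(kp * error + ki * integral)
            time.sleep(dt)
    ```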

  10. Night vision: changing the way we drive

    NASA Astrophysics Data System (ADS)

    Klapper, Stuart H.; Kyle, Robert J. S.; Nicklin, Robert L.; Kormos, Alexander L.

    2001-03-01

    A revolutionary new Night Vision System has been designed to help drivers see well beyond their headlights. From luxury automobiles to heavy trucks, Night Vision is helping drivers see better, see further, and react sooner. This paper describes how Night Vision Systems are being used in transportation and their viability for the future. It describes recent improvements to the system currently in the second year of production. It also addresses consumer education and awareness, cost reduction, product reliability, market expansion and future improvements.

  11. Moulded infrared optics making night vision for cars within reach

    NASA Astrophysics Data System (ADS)

    Bourget, Antoine; Guimond, Yann; Franks, John; Van Den Bergh, Marleen

    2005-02-01

    Sustainable mobility is a major public concern, making increased safety one of the major challenges for the car of the future. About half of all serious traffic accidents occur at night, while only a minority of journeys take place at night. Reduced visibility is one of the main reasons for these striking statistics, and this explains the interest of the automobile industry in Enhanced Night Vision Systems. As an answer to the need for high-volume, low-cost optics for these applications, Umicore has developed GASIR. This material is transparent in the near and far infrared, and is mouldable into high-quality finished spherical, aspherical and diffractive lenses. Umicore's GASIR moulded lenses are an ideal solution for thermal imaging for cars (Night Vision) and for sensing systems like pedestrian detection, collision avoidance, occupant detection, intelligent airbag systems, etc.

  12. Night vision imaging system design, integration and verification in spacecraft vacuum thermal test

    NASA Astrophysics Data System (ADS)

    Shang, Yonghong; Wang, Jing; Gong, Zhe; Li, Xiyuan; Pei, Yifei; Bai, Tingzhu; Zhen, Haijing

    2015-08-01

    The purposes of a spacecraft vacuum thermal test are to characterize the thermal control systems of the spacecraft and its components in the cruise configuration and to allow early retirement of risks associated with mission-specific and novel thermal designs. The orbital heat flux is simulated by infrared lamps, an infrared cage, or electric heaters. Because infrared cages and electric heaters emit no visible light, and infrared lamps emit only limited visible light, an ordinary camera cannot operate at the low luminous density of the test. Moreover, some special instruments, such as satellite-borne infrared sensors, are sensitive to visible light, so compensating illumination cannot be added during the test. To improve fine monitoring of the spacecraft and exhibition of test progress under ultra-low luminous density, a night vision imaging system was designed and integrated by BISEE. The system consists of a high-gain image-intensified CCD (ICCD) camera, an assistant luminance system, a glare protection system, a thermal control system, and a computer control system. Multi-frame accumulation target detection technology is adopted for high-quality image recognition in the captive test. The optical, mechanical, and electrical systems are designed and integrated to be highly adaptable to the vacuum environment, and a molybdenum/polyimide thin-film electrical heater controls the temperature of the ICCD camera. Performance validation tests showed that the system can operate in a vacuum thermal environment of 1.33×10⁻³ Pa and 100 K shroud temperature in the space environment simulator, with its working temperature maintained at 5 °C during a two-day test, and that it achieves a video resolving power of 60 lp/mm.
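
    The multi-frame accumulation step can be sketched in a few lines: for a static scene, averaging N co-registered frames raises the signal-to-noise ratio by roughly sqrt(N) for uncorrelated noise. This is a generic sketch, assuming the frames are already aligned; the frame count is illustrative.

    ```python
    # Minimal sketch of multi-frame accumulation for low-light ICCD video.
    import numpy as np

    def accumulate_frames(frames, n=16):
        """Average up to n co-registered frames; for uncorrelated noise the
        SNR improves by roughly sqrt(n), which suits a static chamber scene."""
        stack = [f.astype(np.float64) for f in frames[:n]]
        mean = sum(stack) / len(stack)
        return mean.astype(frames[0].dtype)
    ```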

  13. Night Vision Laboratory Static Performance Model for Thermal Viewing Systems

    DTIC Science & Technology

    1975-04-01

    Research and Development Technical Report (ECOM): Night Vision Laboratory Static Performance Model for Thermal Viewing Systems. Subject terms: minimum resolvable temperature; infrared imaging; minimum detectable temperature; detection and recognition performance; night vision; noise equivalent temperature. The model uses the modulation transfer function (MTF); the noise characteristics are specified by the noise equivalent temperature difference (NEΔT).

  14. Detection of Special Operations Forces Using Night Vision Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, C.M.

    2001-10-22

    Night vision devices, such as image intensifiers and infrared imagers, are readily available to a host of nations, organizations, and individuals through international commerce. Once the trademark of special operations units, these devices are widely advertised to ''turn night into day''. In truth, they cannot accomplish this formidable task, but they do offer impressive enhancement of vision in limited-light scenarios through electronically generated images. Image intensifiers and infrared imagers are both electronic devices for enhancing vision in the dark. However, each is based upon a totally different physical phenomenon. Image intensifiers amplify the available light energy, whereas infrared imagers detect the thermal energy radiated from all objects. Because of this, each device operates from energy which is present in a different portion of the electromagnetic spectrum. This leads to differences in the ability of each device to detect and/or identify objects. This report is a compilation of the available information on both state-of-the-art image intensifiers and infrared imagers. Image intensifiers developed in the United States, as well as some foreign-made image intensifiers, are discussed. Image intensifiers are categorized according to their spectral response and sensitivity using the nomenclature of GEN I, GEN II, and GEN III. As the first generation of image intensifiers, GEN I, were large and of limited performance, this report will deal with only GEN II and GEN III equipment. Infrared imagers are generally categorized according to their spectral response, sensor materials, and related sensor operating temperature using the nomenclature Medium Wavelength Infrared (MWIR) Cooled and Long Wavelength Infrared (LWIR) Uncooled. MWIR Cooled refers to infrared imagers which operate in the 3 to 5 µm wavelength electromagnetic spectral region and require either mechanical or thermoelectric coolers to keep the sensors operating at 77 K. LWIR Uncooled

  15. Night myopia is reduced in binocular vision.

    PubMed

    Chirre, Emmanuel; Prieto, Pedro M; Schwarz, Christina; Artal, Pablo

    2016-06-01

    Night myopia, which is a shift in refraction with light level, has been widely studied but still lacks a complete understanding. We used a new infrared open-view binocular Hartmann-Shack wave front sensor to quantify night myopia under monocular and natural binocular viewing conditions. Both eyes' accommodative response, aberrations, pupil diameter, and convergence were simultaneously measured at light levels ranging from photopic to scotopic conditions to total darkness. For monocular vision, reducing the stimulus luminance resulted in a progression of the accommodative state that tends toward the subject's dark focus or tonic accommodation and a change in convergence following the induced accommodative error. Most subjects presented a myopic shift of accommodation that was mitigated in binocular vision. The impact of spherical aberration on the focus shift was relatively small. Our results in monocular conditions support the hypothesis that night myopia has an accommodative origin as the eye progressively changes its accommodation state with decreasing luminance toward its resting state in total darkness. On the other hand, binocularity restrains night myopia, possibly by using fusional convergence as an additional accommodative cue, thus reducing the potential impact of night myopia on vision at low light levels.

  16. Advanced electro-mechanical micro-shutters for thermal infrared night vision imaging and targeting systems

    NASA Astrophysics Data System (ADS)

    Durfee, David; Johnson, Walter; McLeod, Scott

    2007-04-01

    Un-cooled microbolometer sensors used in modern infrared night vision systems such as driver vehicle enhancement (DVE) or thermal weapons sights (TWS) require a mechanical shutter. Although much consideration is given to the performance requirements of the sensor, supporting electronic components and imaging optics, the shutter technology required to survive in combat is typically the last consideration in the system design. Electro-mechanical shutters used in military IR applications must be reliable in temperature extremes from a low of -40°C to a high of +70°C. They must be extremely lightweight while having the ability to withstand the high vibration and shock forces associated with systems mounted in military combat vehicles, weapon telescopic sights, or downed unmanned aerial vehicles (UAVs). Electro-mechanical shutters must have minimal power consumption and contain circuitry integrated into the shutter to manage battery power while simultaneously adapting to changes in electrical component operating parameters caused by extreme temperature variations. The technology required to produce a miniature electro-mechanical shutter capable of fitting into a rifle scope with these capabilities requires innovations in mechanical design, material science, and electronics. This paper describes a new, miniature electro-mechanical shutter technology with integrated power management electronics designed for extreme-service infrared night vision systems.

  17. Insect photoreceptor adaptations to night vision

    PubMed Central

    Honkanen, Anna; Salmela, Iikka; Weckström, Matti

    2017-01-01

    Night vision is ultimately about extracting information from a noisy visual input. Several species of nocturnal insects exhibit complex visually guided behaviour in conditions where most animals are practically blind. The compound eyes of nocturnal insects produce strong responses to single photons and process them into meaningful neural signals, which are amplified by specialized neuroanatomical structures. While a lot is known about the light responses and the anatomical structures that promote pooling of responses to increase sensitivity, there is still a dearth of knowledge on the physiology of night vision. Retinal photoreceptors form the first bottleneck for the transfer of visual information. In this review, we cover the basics of what is known about physiological adaptations of insect photoreceptors for low-light vision. We will also discuss major enigmas of some of the functional properties of nocturnal photoreceptors, and describe recent advances in methodologies that may help to solve them and broaden the field of insect vision research to new model animals. This article is part of the themed issue ‘Vision in dim light’. PMID:28193821

  18. Insect photoreceptor adaptations to night vision.

    PubMed

    Honkanen, Anna; Immonen, Esa-Ville; Salmela, Iikka; Heimonen, Kyösti; Weckström, Matti

    2017-04-05

    Night vision is ultimately about extracting information from a noisy visual input. Several species of nocturnal insects exhibit complex visually guided behaviour in conditions where most animals are practically blind. The compound eyes of nocturnal insects produce strong responses to single photons and process them into meaningful neural signals, which are amplified by specialized neuroanatomical structures. While a lot is known about the light responses and the anatomical structures that promote pooling of responses to increase sensitivity, there is still a dearth of knowledge on the physiology of night vision. Retinal photoreceptors form the first bottleneck for the transfer of visual information. In this review, we cover the basics of what is known about physiological adaptations of insect photoreceptors for low-light vision. We will also discuss major enigmas of some of the functional properties of nocturnal photoreceptors, and describe recent advances in methodologies that may help to solve them and broaden the field of insect vision research to new model animals. This article is part of the themed issue 'Vision in dim light'. © 2017 The Author(s).

  19. Development of an Automatic Testing Platform for Aviator's Night Vision Goggle Honeycomb Defect Inspection.

    PubMed

    Jian, Bo-Lin; Peng, Chao-Chung

    2017-06-15

    Due to the direct influence of night vision equipment availability on the safety of night-time aerial reconnaissance, maintenance needs to be carried out regularly. Unfortunately, some defects are not easy to observe, or are not even detectable, by human eyes. As a consequence, this study proposes a novel automatic defect detection system for the aviator's night vision imaging systems AN/AVS-6(V)1 and AN/AVS-6(V)2. An auto-focusing process consisting of a sharpness calculation and a gradient-based variable-step search method is applied to achieve an automatic detection system for honeycomb defects. This work also developed a test platform for sharpness measurement. It demonstrates that honeycomb defects can be precisely recognized and that the number of defects can be determined automatically during inspection. Most importantly, the proposed approach significantly reduces time consumption, as well as human assessment error, during night vision goggle inspection procedures.
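
    The auto-focusing idea (a sharpness score driving a variable-step search) can be sketched as follows. The Tenengrad measure, the hill-climb strategy, and the move_focus/grab_frame interfaces are illustrative assumptions, not the paper's exact method.

    ```python
    # Minimal sketch of gradient-based autofocus with a variable-step search.
    import numpy as np
    import cv2

    def sharpness(img):
        """Tenengrad focus measure: mean squared Sobel gradient magnitude."""
        gx = cv2.Sobel(img, cv2.CV_64F, 1, 0)
        gy = cv2.Sobel(img, cv2.CV_64F, 0, 1)
        return float(np.mean(gx * gx + gy * gy))

    def autofocus(move_focus, grab_frame, start_step=50, min_step=2):
        """Step the focus motor until sharpness drops, then reverse direction
        and halve the step, converging on the sharpness peak."""
        step, direction, position = start_step, +1, 0
        best = sharpness(grab_frame())
        while step >= min_step:
            candidate = position + direction * step
            move_focus(candidate)
            score = sharpness(grab_frame())
            if score > best:
                best, position = score, candidate   # keep climbing
            else:
                direction = -direction              # overshot: reverse
                step //= 2                          # and refine the step
        move_focus(position)
        return position
    ```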

  20. Aural-Nondetectability Model Predictions for Night-Vision Goggles across Ambient Lighting Conditions

    DTIC Science & Technology

    2015-12-01

    ARL-TR-7564, December 2015, US Army Research Laboratory: Aural-Nondetectability Model Predictions for Night-Vision Goggles across Ambient Lighting Conditions. Reporting period: May 2015 to 30 September 2015.

  1. Low Cost Night Vision System for Intruder Detection

    NASA Astrophysics Data System (ADS)

    Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.

    2016-02-01

    The growth in production of Android devices has resulted in greater functionality as well as lower costs. This has made previously more expensive systems such as night vision affordable for more businesses and end users. We designed and implemented robust, low-cost night vision systems based on red-green-blue (RGB) colour histograms for a static camera as well as a camera on an unmanned aerial vehicle (UAV), using the OpenCV library on Intel-compatible notebook computers running the Ubuntu Linux operating system with less than 8 GB of RAM. They were tested against human intruders under low-light conditions (indoor, outdoor, night time) and were shown to have successfully detected the intruders.
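
    A histogram-based detector of this kind is simple to sketch with OpenCV. The bin counts and the correlation threshold below are illustrative assumptions, not the published configuration; the idea is only to flag frames whose color distribution departs from a known background.

    ```python
    # Minimal sketch of RGB-histogram change detection for a static camera.
    import cv2

    BINS = [8, 8, 8]                    # coarse 3-D RGB histogram
    RANGES = [0, 256, 0, 256, 0, 256]
    THRESHOLD = 0.75                    # correlation below this flags a change

    def rgb_hist(frame):
        """Normalized 3-D color histogram of a BGR frame."""
        hist = cv2.calcHist([frame], [0, 1, 2], None, BINS, RANGES)
        return cv2.normalize(hist, hist).flatten()

    def detect_intruder(background_frame, current_frame):
        """Flag frames whose color distribution departs from the background."""
        similarity = cv2.compareHist(rgb_hist(background_frame),
                                     rgb_hist(current_frame),
                                     cv2.HISTCMP_CORREL)
        return similarity < THRESHOLD
    ```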

  2. Night vision imaging systems design, integration, and verification in military fighter aircraft

    NASA Astrophysics Data System (ADS)

    Sabatini, Roberto; Richardson, Mark A.; Cantiello, Maurizio; Toscano, Mario; Fiorini, Pietro; Jia, Huamin; Zammit-Mangion, David

    2012-04-01

    This paper describes the developmental and testing activities conducted by the Italian Air Force Official Test Centre (RSV) in collaboration with Alenia Aerospace, Litton Precision Products and Cranfield University, in order to confer the Night Vision Imaging Systems (NVIS) capability on the Italian TORNADO IDS (Interdiction and Strike) and ECR (Electronic Combat and Reconnaissance) aircraft. The activities consisted of various Design, Development, Test and Evaluation (DDT&E) activities, including Night Vision Goggles (NVG) integration, cockpit instrument and external lighting modifications, as well as various ground test sessions and a total of eighteen flight test sorties. RSV and Litton Precision Products were responsible for coordinating and conducting the installation activities of the internal and external lights. In particular, an iterative process was established, allowing rapid in-situ correction of the major deficiencies encountered during the ground and flight test sessions. Both single-ship (day/night) and formation (night) flights were performed, shared between the Test Crews involved in the activities, allowing for a redundant examination of the various test items by all participants. An innovative test matrix was developed and implemented by RSV for assessing the operational suitability and effectiveness of the various modifications implemented. Also important was the definition of test criteria for Pilot and Weapon Systems Officer (WSO) workload assessment during the accomplishment of various operational tasks during NVG missions. Furthermore, the specific technical and operational elements required for evaluating the modified helmets were identified, allowing an exhaustive comparative evaluation of the two proposed solutions (i.e., HGU-55P and HGU-55G modified helmets). The results of the activities were very satisfactory. The initial compatibility problems encountered were progressively mitigated by incorporating modifications both in the front and

  3. Night Vision Manual for the Flight Surgeon

    DTIC Science & Technology

    1992-08-01

    Conditions that may cause night blindness include glaucoma, progressive cone/rod dystrophies (e.g., retinitis pigmentosa, Stargardt's disease), and drug toxicity. Regeneration of the photopigments occurs during dark adaptation, restoring retinal sensitivity to dim light in the fully dark-adapted eye. Cited reference: Berson EL, Rabin AR, Mehaffey L. Advances in night vision technology: A pocketscope for patients with retinitis pigmentosa.

  4. A Comparison of the AVS-9 and the Panoramic Night Vision Goggles During Rotorcraft Hover and Landing

    NASA Technical Reports Server (NTRS)

    Szoboszlay, Zoltan; Haworth, Loran; Simpson, Carol

    2000-01-01

    A flight test was conducted to assess any differences in pilot-vehicle performance and pilot opinion between the use of a current generation night vision goggle (the AVS-9) and one variant of the prototype panoramic night vision goggle (the PNVGII). The panoramic goggle has more than double the horizontal field-of-view of the AVS-9, but reduced image quality. Overall the panoramic goggles compared well to the AVS-9 goggles. However, pilot comment and data are consistent with the assertion that some of the benefits of additional field-of-view with the panoramic goggles were negated by the reduced image quality of the particular variant of the panoramic goggles tested.

  5. Night vision and electro-optics technology transfer, 1972 - 1981

    NASA Astrophysics Data System (ADS)

    Fulton, R. W.; Mason, G. F.

    1981-09-01

    The purpose of this special report, 'Night Vision and Electro-Optics Technology Transfer 1972-1981,' is threefold: To illustrate, through actual case histories, the potential for exploiting a highly developed and available military technology for solving non-military problems. To provide, in a layman's language, the principles behind night vision and electro-optical devices in order that an awareness may be developed relative to the potential for adopting this technology for non-military applications. To obtain maximum dollar return from research and development investments by applying this technology to secondary applications. This includes, but is not limited to, applications by other Government agencies, state and local governments, colleges and universities, and medical organizations. It is desired that this summary of Technology Transfer activities within Night Vision and Electro-Optics Laboratory (NV/EOL) will benefit those who desire to explore one of the vast technological resources available within the Defense Department and the Federal Government.

  6. A Comparison of the AVS-9 and the Panoramic Night Vision Goggle During Rotorcraft Hover and Landing

    NASA Technical Reports Server (NTRS)

    Szoboszlay, Zoltan; Haworth, Loran; Simpson, Carol; Rutkowski, Michael (Technical Monitor)

    2001-01-01

    The purpose of this flight test was to measure any differences in pilot-vehicle performance and pilot opinion between the use of the current generation AVS-9 Night Vision Goggle and one variant of the prototype Panoramic Night Vision Goggle (the PNVGII). The PNVGII has more than double the horizontal field-of-view of the AVS-9, but reduced image quality. The flight path of the AH-1S helicopter was used as a measure of pilot-vehicle performance. Also recorded were subjective measures of flying qualities, physical reserves of the pilot, situational awareness, and display usability. Pilot comment and data indicate that the benefits of additional FOV with the PNVGIIs are to some extent negated by the reduced image quality of the PNVGIIs.

  7. Color Vision in Color Display Night Vision Goggles.

    PubMed

    Liggins, Eric P; Serle, William P

    2017-05-01

    Aircrew viewing eyepiece-injected symbology on color display night vision goggles (CDNVGs) are performing a visual task involving color under highly unnatural viewing conditions. Their performance in discriminating different colors and responding to color cues is unknown. Experimental laboratory measurements of 1) color discrimination and 2) visual search performance are reported under adaptation conditions representative of a CDNVG. Color discrimination was measured using a two-alternative forced choice (2AFC) paradigm that probes color space uniformly around a white point. Search times in the presence of different degrees of clutter (distractors in the scene) are measured for different potential symbology colors. The discrimination data support previous data suggesting that discrimination is best for colors close to the adapting point in color space (P43 phosphor in this case). There were highly significant effects of background adaptation (white or green) and test color. The search time data show that saturated colors with the greatest chromatic contrast with respect to the background lead to the shortest search times, associated with the greatest saliency. Search times for the green background were around 150 ms longer than for the white. Desaturated colors, along with those close to a typical CDNVG display phosphor in color space, should be avoided by CDNVG designers if the greatest conspicuity of symbology is desired. The results can be used by CDNVG symbology designers to optimize aircrew performance subject to wider constraints arising from the way color is used in the existing conventional cockpit instruments and displays.Liggins EP, Serle WP. Color vision in color display night vision goggles. Aerosp Med Hum Perform. 2017; 88(5):448-456.

  8. All-CMOS night vision viewer with integrated microdisplay

    NASA Astrophysics Data System (ADS)

    Goosen, Marius E.; Venter, Petrus J.; du Plessis, Monuko; Faure, Nicolaas M.; Janse van Rensburg, Christo; Rademeyer, Pieter

    2014-02-01

    The unrivalled integration potential of CMOS has made it the dominant technology for digital integrated circuits. With the advent of visible light emission from silicon through hot carrier electroluminescence, several applications arose, all of which rely upon the advantages of mature CMOS technologies for a competitive edge in a very active and attractive market. In this paper we present a low-cost night vision viewer which employs only standard CMOS technologies. A commercial CMOS imager is utilized for near infrared image capturing with a 128x96 pixel all-CMOS microdisplay implemented to convey the image to the user. The display is implemented in a standard 0.35 μm CMOS process, with no process alterations or post processing. The display features a 25 μm pixel pitch and a 3.2 mm x 2.4 mm active area, which through magnification presents the virtual image to the user equivalent of a 19-inch display viewed from a distance of 3 meters. This work represents the first application of a CMOS microdisplay in a low-cost consumer product.

  9. Airborne laser-diode-array illuminator assessment for the night vision's airborne mine-detection arid test

    NASA Astrophysics Data System (ADS)

    Stetson, Suzanne; Weber, Hadley; Crosby, Frank J.; Tinsley, Kenneth; Kloess, Edmund; Nevis, Andrew J.; Holloway, John H., Jr.; Witherspoon, Ned H.

    2004-09-01

    The Airborne Littoral Reconnaissance Technologies (ALRT) project has developed and tested a nighttime operational minefield detection capability using commercial off-the-shelf high-power Laser Diode Arrays (LDAs). The Coastal System Station's ALRT project, under funding from the Office of Naval Research (ONR), has been designing, developing, integrating, and testing commercial arrays using a Cessna airborne platform over the last several years. This has led to the development of the Airborne Laser Diode Array Illuminator wide field-of-view (ALDAI-W) imaging test bed system. The ALRT project tested ALDAI-W at the Army's Night Vision Lab's Airborne Mine Detection Arid Test. By participating in Night Vision's test, ALRT was able to collect initial prototype nighttime operational data using ALDAI-W, showing impressive results and pioneering the way for the final test bed demonstration conducted in September 2003. This paper describes the ALDAI-W Arid Test and results, along with the processing steps used to generate imagery.

  10. Visual evoked potentials through night vision goggles.

    PubMed

    Rabin, J

    1994-04-01

    Night vision goggles (NVG's) have widespread use in military and civilian environments. NVG's amplify ambient illumination making performance possible when there is insufficient illumination for normal vision. While visual performance through NVG's is commonly assessed by measuring threshold functions such as visual acuity, few attempts have been made to assess vision through NVG's at suprathreshold levels of stimulation. Such information would be useful to better understand vision through NVG's across a range of stimulus conditions. In this study visual evoked potentials (VEP's) were used to evaluate vision through NVG's across a range of stimulus contrasts. The amplitude and latency of the VEP varied linearly with log contrast. A comparison of VEP's recorded with and without NVG's was used to estimate contrast attenuation through the device. VEP's offer an objective, electrophysiological tool to assess visual performance through NVG's at both threshold and suprathreshold levels of visual stimulation.
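
    The reported linear dependence of VEP amplitude on log contrast suggests a simple way to estimate the device's contrast attenuation: fit amplitude against log10 contrast with and without the goggles and read the attenuation off the horizontal shift between the two fits. A minimal sketch of that idea follows, assuming paired measurements at the same stimulus contrasts; this is an illustration, not the study's actual analysis code.

    ```python
    # Minimal sketch: estimate NVG contrast attenuation from VEP amplitudes
    # that vary linearly with log contrast.
    import numpy as np

    def contrast_attenuation(contrasts, amps_unaided, amps_nvg):
        x = np.log10(np.asarray(contrasts, dtype=float))
        a1, b1 = np.polyfit(x, amps_unaided, 1)   # unaided: amp = a1*logC + b1
        a2, b2 = np.polyfit(x, amps_nvg, 1)       # through the NVG
        # If the NVG line is the unaided line shifted right by s log-units,
        # then b2 = b1 - a*s, so s = (b1 - b2) / a (using the mean slope).
        s = (b1 - b2) / ((a1 + a2) / 2.0)
        return 10.0 ** s   # multiplicative contrast attenuation factor
    ```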

  11. Visual function at altitude under night vision assisted conditions.

    PubMed

    Vecchi, Diego; Morgagni, Fabio; Guadagno, Anton G; Lucertini, Marco

    2014-01-01

    Hypoxia, even mild, is known to produce negative effects on visual function, including decreased visual acuity and contrast sensitivity, mostly in low light. This is of special concern when night vision devices (NVDs) are used during flight, because they also provide poor images in terms of resolution and contrast. While wearing NVDs in low-light conditions, 16 healthy male aviators were exposed to a simulated altitude of 12,500 ft in a hypobaric chamber. Snellen visual acuity decreased in normal light from 28.5 +/- 4.2/20 (normoxia) to 37.2 +/- 7.4/20 (hypoxia) and, in low light, from 33.8 +/- 6.1/20 (normoxia) to 42.2 +/- 8.4/20 (hypoxia), both changes being statistically significant. An association was found between blood oxygen saturation and visual acuity, but it did not reach significance. No changes occurred in contrast sensitivity. Our data demonstrate that mild hypoxia is capable of affecting visual acuity in the photopic/high mesopic range of NVD-aided vision. This may be due to several factors, including the sensitivity to hypoxia of photoreceptors and other retinal cells. Contrast sensitivity is possibly preserved under NVD-aided vision due to its dependency on the goggles' gain.

  12. Old Night Vision Meets New

    NASA Image and Video Library

    2017-12-08

    NASA image acquired November 11-12, 2012. On November 12, 2012, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi NPP satellite captured the top nighttime image of city, village, and highway lights near Delhi, India. For comparison, the lower image shows the same area one night earlier, as observed by the Operational Linescan System (OLS) on a Defense Meteorological Satellite Program (DMSP) spacecraft. Since the 1960s, the U.S. Air Force has operated DMSP in order to observe clouds and other weather variables in key wavelengths of infrared and visible light. Since 1972, the DMSP satellites have included the Operational Linescan System (OLS), which gives weather forecasters some ability to see in the dark. It has been a highly successful sensor, but it is dependent on older technology with lower resolution than most scientists would like. And for many years, DMSP data were classified. Through improved optics and “smart” sensing technology, the VIIRS “day-night band” is ten to fifteen times better than the OLS system at resolving the relatively dim lights of human settlements and reflected moonlight. Each VIIRS pixel spans roughly 740 meters (0.46 miles), compared to the 3-kilometer footprint (1.86 miles) of DMSP. Beyond the resolution, the new sensor can detect dimmer light sources. And since the VIIRS measurements are fully calibrated (unlike DMSP), scientists now have the precision required to make quantitative measurements of clouds and other features. “In contrast to the Operational Line Scan system, the imagery from the new day-night band is almost like a nearsighted person putting on glasses for the first time and looking at the Earth anew,” says Steve Miller, an atmospheric scientist at Colorado State University. “VIIRS has allowed us to bring this coarse, blurry view of night lights into clearer focus. Now we can see things in such great detail and at such high precision that we’re really talking about a new kind of

  13. Optical Characterization of Wide Field-of-View Night Vision Devices

    DTIC Science & Technology

    1999-01-01

    This paper has been cleared by ASC 99-2354. Optical Characterization of Wide Field-of-View Night Vision Devices, Peter L. Marasco and H. Lee Task, Air Force Research Laboratory; presented at the SAFE Society's 36th Annual Symposium. See also: Task, H.L., Hartman, R., Marasco, P.L., Zobel, A. (1993), Methods for measuring characteristics of night vision devices.

  14. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high quantum efficiency (QE) across the visible and near infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  15. Aviator's night vision system (ANVIS) in Operation Enduring Freedom (OEF): user acceptability survey

    NASA Astrophysics Data System (ADS)

    Hiatt, Keith L.; Trollman, Christopher J.; Rash, Clarence E.

    2010-04-01

    In 1973, the U.S. Army adopted night vision devices for use in the aviation environment. These devices are based on the principle of image intensification (I2) and have become the mainstay for the aviator's capability to operate during periods of low illumination, i.e., at night. In the nearly four decades that have followed, a number of engineering advancements have significantly improved the performance of these devices. The current version, using 3rd generation I2 technology is known as the Aviator's Night Vision Imaging System (ANVIS). While considerable experience with performance has been gained during training and peacetime operations, no previous studies have looked at user acceptability and performance issues in a combat environment. This study was designed to compare Army Aircrew experiences in a combat environment to currently available information in the published literature (all peacetime laboratory and field training studies) and to determine if the latter is valid. The purpose of this study was to identify and assess aircrew satisfaction with the ANVIS and any visual performance issues or problems relating to its use in Operation Enduring Freedom (OEF). The study consisted of an anonymous survey (based on previous validated surveys used in the laboratory and training environments) of 86 Aircrew members (64% Rated and 36% Non-rated) of an Aviation Task Force approximately 6 months into their OEF deployment. This group represents an aggregate of >94,000 flight hours of which ~22,000 are ANVIS and ~16,000 during this deployment. Overall user acceptability of ANVIS in a combat environment will be discussed.

  16. Perception-based synthetic cueing for night vision device rotorcraft hover operations

    NASA Astrophysics Data System (ADS)

    Bachelder, Edward N.; McRuer, Duane

    2002-08-01

    Helicopter flight using night-vision devices (NVDs) is difficult to perform, as evidenced by the high accident rate associated with NVD flight compared to day operation. The approach proposed in this paper is to augment the NVD image with synthetic cueing, whereby the cues would emulate position and motion and appear to be actually occurring in physical space on which they are overlaid. Synthetic cues allow for selective enhancement of perceptual state gains to match the task requirements. A hover cue set was developed based on an analogue of a physical target used in a flight handling qualities tracking task, a perceptual task analysis for hover, and fundamentals of human spatial perception. The display was implemented on a simulation environment, constructed using a virtual reality device, an ultrasound head-tracker, and a fixed-base helicopter simulator. Seven highly trained helicopter pilots were used as experimental subjects and tasked to maintain hover in the presence of aircraft positional disturbances while viewing a synthesized NVD environment and the experimental hover cues. Significant performance improvements were observed when using synthetic cue augmentation. This paper demonstrates that artificial magnification of perceptual states through synthetic cueing can be an effective method of improving night-vision helicopter hover operations.

  17. Advanced helmet vision system (AHVS) integrated night vision helmet mounted display (HMD)

    NASA Astrophysics Data System (ADS)

    Ashcraft, Todd W.; Atac, Robert

    2012-06-01

    Gentex Corporation, under contract to Naval Air Systems Command (AIR 4.0T), designed the Advanced Helmet Vision System to provide aircrew with 24-hour, visor-projected binocular night vision and HMD capability. AHVS integrates numerous key technologies, including high brightness Light Emitting Diode (LED)-based digital light engines, advanced lightweight optical materials and manufacturing processes, and innovations in graphics processing software. This paper reviews the current status of miniaturization and integration with the latest two-part Gentex modular helmet, highlights the lessons learned from previous AHVS phases, and discusses plans for qualification and flight testing.

  18. Helmet-mounted pilot night vision systems: Human factors issues

    NASA Technical Reports Server (NTRS)

    Hart, Sandra G.; Brickner, Michael S.

    1989-01-01

    Helmet-mounted displays of infrared imagery (forward-looking infrared (FLIR)) allow helicopter pilots to perform low-level missions at night and in low visibility. However, pilots experience high visual and cognitive workload during these missions, and their performance capabilities may be reduced. Human factors problems inherent in existing systems stem from three primary sources: the nature of thermal imagery; the characteristics of specific FLIR systems; and the difficulty of using FLIR systems for flying and/or visually acquiring and tracking objects in the environment. The pilot night vision system (PNVS) in the Apache AH-64 provides a monochrome, 30 by 40 deg helmet-mounted display of infrared imagery. Thermal imagery is inferior to television imagery in both resolution and contrast ratio. Gray shades represent temperature differences rather than brightness variability, and images undergo significant changes over time. The limited field of view, displacement of the sensor from the pilot's eye position, and monocular presentation of a bright FLIR image (while the other eye remains dark-adapted) are all potential sources of disorientation, limitations in depth and distance estimation, sensations of apparent motion, and difficulties in target and obstacle detection. Insufficient information about human perceptual and performance limitations constrains the ability of human factors specialists to provide significantly improved specifications, training programs, or alternative designs. Additional research is required to determine the most critical problem areas and to propose solutions that consider the human as well as the development of technology.

  19. Effects of Extended Hypoxia on Night Vision

    DTIC Science & Technology

    1983-06-01

    Subject terms: hypoxia; anoxia; night vision; dark adaptation; extended hypoxia. The report builds on earlier work that quantified significant aspects of the dark adaptation function under anoxia (hypoxia), and on related research on brightness discrimination which concluded that anoxia acts mainly on the...

  20. Night vision goggle stimulation using LCoS and DLP projection technology, which is better?

    NASA Astrophysics Data System (ADS)

    Ali, Masoud H.; Lyon, Paul; De Meerleer, Peter

    2014-06-01

    High fidelity night-vision training has become important for many of the simulation systems being procured today. The end-users of these simulation-training systems prefer using their actual night-vision goggle (NVG) headsets. This requires that the visual display system stimulate the NVGs in a realistic way. Historically NVG stimulation was done with cathode-ray tube (CRT) projectors. However, this technology became obsolete and in recent years training simulators do NVG stimulation with laser, LCoS and DLP projectors. The LCoS and DLP projection technologies have emerged as the preferred approach for the stimulation of NVGs. Both LCoS and DLP technologies have advantages and disadvantages for stimulating NVGs. LCoS projectors can have more than 5-10 times the contrast capability of DLP projectors. The larger the difference between the projected black level and the brightest object in a scene, the better the NVG stimulation effects can be. This is an advantage of LCoS technology, especially when the proper NVG wavelengths are used. Single-chip DLP projectors, even though they have much reduced contrast compared to LCoS projectors, can use LED illuminators in a sequential red-green-blue fashion to create a projected image. It is straightforward to add an extra infrared (NVG wavelength) LED into this sequential chain of LED illumination. The content of this NVG channel can be independent of the visible scene, which allows effects to be added that can compensate for the lack of contrast inherent in a DLP device. This paper will expand on the differences between LCoS and DLP projectors for stimulating NVGs and summarize the benefits of both in night-vision simulation training systems.

  1. Night Vision Manual for the Flight Surgeon.

    DTIC Science & Technology

    1985-08-01

    Visual signals are carried by the optic nerve and visual pathways to Brodmann's occipital areas 17 and 18, where perception occurs; vision requires light-sensitive material (retinal pigment). Light that degrades vision may be defined as glare; glare becomes a problem in patients with opacities of the ocular media or with retinal diseases. Some drugs cause a reduction of pupillary area. Retinal causes of abnormal dark adaptation include congenital stationary night blindness and retinitis pigmentosa.

  2. Improving Night Time Driving Safety Using Vision-Based Classification Techniques.

    PubMed

    Chien, Jong-Chih; Chen, Yong-Sheng; Lee, Jiann-Der

    2017-09-24

    The risks involved in nighttime driving include drowsy drivers and dangerous vehicles. Prominent among the more dangerous vehicles around at night are the larger vehicles, which are usually moving faster at night on a highway. In addition, the risk level of driving around larger vehicles rises significantly when the driver's attention becomes distracted, even for a short period of time. For the purpose of alerting the driver and elevating his or her safety, in this paper we propose two components for any modern vision-based Advanced Drivers Assistance System (ADAS). These two components work separately for the single purpose of alerting the driver in dangerous situations. The purpose of the first component is to ascertain that the driver would be in a sufficiently wakeful state to receive and process warnings; this is the driver drowsiness detection component. The driver drowsiness detection component uses infrared images of the driver to analyze his eyes' movements using MSR plus a simple heuristic. This component issues alerts to the driver when the driver's eyes show distraction and are closed for a longer than usual duration. Experimental results show that this component can detect closed eyes with an accuracy of 94.26% on average, which is comparable to previous results using more sophisticated methods. The purpose of the second component is to alert the driver when the driver's vehicle is moving around larger vehicles at dusk or night time. The large vehicle detection component accepts images from a regular video driving recorder as input. A bi-level system of classifiers, which includes a novel MSR-enhanced KAZE-based Bag-of-Features classifier, is proposed to avoid false negatives. In both components, we propose an improved version of the Multi-Scale Retinex (MSR) algorithm to augment the contrast of the input. Several experiments were performed to test the effects of the MSR and each classifier, and the results are presented in the experimental results section.
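
    The MSR pre-processing step can be sketched compactly. The version below is the standard Multi-Scale Retinex with common default scales from the MSR literature, not necessarily the authors' improved variant, and assumes a single-channel 8-bit input.

    ```python
    # Minimal sketch of Multi-Scale Retinex (MSR) contrast enhancement.
    import numpy as np
    import cv2

    def msr(img, sigmas=(15, 80, 250)):
        """Average single-scale retinex outputs log(I) - log(G_sigma * I)."""
        img = img.astype(np.float64) + 1.0          # avoid log(0)
        out = np.zeros_like(img)
        for sigma in sigmas:
            blur = cv2.GaussianBlur(img, (0, 0), sigma)
            out += np.log(img) - np.log(blur)
        out /= len(sigmas)
        # Stretch the result back into a displayable 8-bit range.
        out = (out - out.min()) / (out.max() - out.min() + 1e-12)
        return (out * 255).astype(np.uint8)
    ```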

  3. Improving Night Time Driving Safety Using Vision-Based Classification Techniques

    PubMed Central

    Chien, Jong-Chih; Chen, Yong-Sheng; Lee, Jiann-Der

    2017-01-01

    The risks involved in nighttime driving include drowsy drivers and dangerous vehicles. Prominent among the more dangerous vehicles around at night are the larger vehicles, which are usually moving faster at night on a highway. In addition, the risk level of driving around larger vehicles rises significantly when the driver’s attention becomes distracted, even for a short period of time. For the purpose of alerting the driver and elevating his or her safety, in this paper we propose two components for any modern vision-based Advanced Drivers Assistance System (ADAS). These two components work separately for the single purpose of alerting the driver in dangerous situations. The purpose of the first component is to ascertain that the driver would be in a sufficiently wakeful state to receive and process warnings; this is the driver drowsiness detection component. The driver drowsiness detection component uses infrared images of the driver to analyze his eyes’ movements using MSR plus a simple heuristic. This component issues alerts to the driver when the driver’s eyes show distraction and are closed for a longer than usual duration. Experimental results show that this component can detect closed eyes with an accuracy of 94.26% on average, which is comparable to previous results using more sophisticated methods. The purpose of the second component is to alert the driver when the driver’s vehicle is moving around larger vehicles at dusk or night time. The large vehicle detection component accepts images from a regular video driving recorder as input. A bi-level system of classifiers, which includes a novel MSR-enhanced KAZE-based Bag-of-Features classifier, is proposed to avoid false negatives. In both components, we propose an improved version of the Multi-Scale Retinex (MSR) algorithm to augment the contrast of the input. Several experiments were performed to test the effects of the MSR and each classifier, and the results are presented in the experimental results section.

  4. Peripheral vision horizon display on the single seat night attack A-10

    NASA Technical Reports Server (NTRS)

    Nims, D. F.

    1984-01-01

    The concept of the peripheral vision horizon display (PVHD) held promise for significant reduction in workload for the single seat night attack pilot. For this reason it was incorporated in the single seat night attack (SSNA) A-10. The implementation and results of the PVHD on the SSNA A-10 are discussed as well as the SSNA program. The part the PVHD played in the test and the results and conclusions of that effort are also considered.

  5. Automated spot defect characterization in a field portable night vision goggle test set

    NASA Astrophysics Data System (ADS)

    Scopatz, Stephen; Ozten, Metehan; Aubry, Gilles; Arquetoux, Guillaume

    2018-05-01

    This paper discusses a new capability developed for, and results from, a field portable test set for Gen 2 and Gen 3 Image Intensifier (I²) tube-based Night Vision Goggles (NVG). A previous paper described the test set and the automated and semi-automated tests supported for NVGs, including a Knife Edge MTF test to replace the operator's interpretation of the USAF 1951 resolution chart. The major improvement and innovation detailed in this paper is the use of image analysis algorithms to automate the characterization of spot defects of I² tubes with the same test set hardware previously presented. The original and still common Spot Defect Test requires the operator to look through the NVGs at a target of concentric rings, compare the size of the defects to a chart, and manually enter the results into a table based on the size and location of each defect; this is tedious and subjective. The prior semi-automated improvement captures and displays an image of the defects and the rings, allowing the operator to determine the defects with less eyestrain while electronically storing the image and the resulting table. The advanced Automated Spot Defect Test utilizes machine vision algorithms to determine the size and location of the defects, generates the result table automatically, and then records the image and the results in a computer-generated report easily usable for verification. This is inherently a more repeatable process that ensures consistent spot detection independent of the operator. Results across several NVGs are presented.
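
    A hedged sketch of how such an automated spot test could be assembled from standard machine-vision primitives (OpenCV connected-component analysis); the Otsu threshold and minimum-area cutoff are illustrative assumptions, not the test set's actual parameters.

      import cv2

      def detect_spots(gray, min_area=4):
          """Find dark spots on a bright I2 background; report size and location."""
          _, mask = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
          n, _, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
          spots = []
          for i in range(1, n):                        # label 0 is the background
              area = int(stats[i, cv2.CC_STAT_AREA])
              if area >= min_area:
                  cx, cy = centroids[i]
                  spots.append({"x": float(cx), "y": float(cy), "area_px": area})
          return spots                                 # rows of the result table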

  6. Color vision abnormality as an initial presentation of the complete type of congenital stationary night blindness.

    PubMed

    Tan, Xue; Aoki, Aya; Yanagi, Yasuo

    2013-01-01

    Patients with the complete form of congenital stationary night blindness (CSNB) often have reduced visual acuity, myopia, impaired night vision, and sometimes nystagmus and strabismus; however, they seldom complain of color vision abnormality. A 17-year-old male attending a technical school showed abnormalities in a color perception test for employment and was referred to our hospital for a detailed examination. He had no family history of color vision deficiency and no other symptoms. At the initial examination, his best-corrected visual acuity was 1.2 in both eyes. His fundus showed no abnormalities except for a somewhat yellowish reflex in the fovea of both eyes. Electroretinogram (ERG) showed a good response in cone ERG and 30 Hz flicker ERG; however, the bright-flash, mixed rod and cone ERG showed a negative type with a reduced b-wave (positive deflection). There was no response in the rod ERG, either. From these typical ERG findings, the patient was diagnosed with complete congenital stationary night blindness. This case underscores the importance of ERG in diagnosing the cause of a color vision anomaly.

  7. Collaboration between human and nonhuman players in Night Vision Tactical Trainer-Shadow

    NASA Astrophysics Data System (ADS)

    Berglie, Stephen T.; Gallogly, James J.

    2016-05-01

    The Night Vision Tactical Trainer - Shadow (NVTT-S) is a U.S. Army-developed training tool designed to improve critical Manned-Unmanned Teaming (MUMT) communication skills for payload operators in Unmanned Aerial System (UAS) crews. The trainer is composed of several Government Off-The-Shelf (GOTS) simulation components and takes the trainee through a series of escalating engagements using tactically relevant, realistically complex scenarios involving a variety of manned, unmanned, aerial, and ground-based assets. The trainee is the only human player in the game and must collaborate, from a web-based mock operating station, with various non-human players via spoken natural language over simulated radio in order to execute the training missions successfully. Non-human players are modeled in two complementary layers: OneSAF provides basic background behaviors for entities, while NVTT provides higher-level models that control entity actions based on intent extracted from the trainee's spoken natural dialog with game entities. Dialog structure is modeled on Army standards for communication and verbal protocols. This paper presents an architecture that integrates the U.S. Army's Night Vision Image Generator (NVIG), One Semi-Automated Forces (OneSAF), a flight dynamics model, and Commercial Off-The-Shelf (COTS) speech recognition and text-to-speech products to effect an environment with sufficient entity counts and fidelity to enable meaningful teaching and reinforcement of critical communication skills. It further demonstrates the model dynamics and synchronization mechanisms employed to execute purpose-built training scenarios and to achieve ad-hoc collaboration on-the-fly between human and non-human players in the simulated environment.

  8. Airborne Use of Night Vision Systems

    NASA Astrophysics Data System (ADS)

    Mepham, S.

    1990-04-01

    The Mission Management Department of the Royal Aerospace Establishment has won a Queen's Award for Technology, jointly with GEC Sensors, in recognition of innovation and success in the development and application of night vision technology for fixed-wing aircraft. This work has been carried out to satisfy the operational needs of the Royal Air Force, which are seen to be: operations in the NATO Central Region; a night as well as a day capability; low-level, high-speed penetration; attack of battlefield targets, especially groups of tanks; and meeting these objectives at minimum cost. The most effective way to penetrate enemy defences is at low level, and survivability would be greatly enhanced with a first-pass attack. It is therefore most important that the pilot not only be able to fly at low level to the target but also be able to detect it in sufficient time to complete a successful attack. An analysis of the average operating conditions in Central Europe during winter clearly shows that high-speed low-level attacks can only be made for about 20 per cent of the 24 hours. Extending this into good night conditions raises the figure to 60 per cent. Whilst it is true that this is for winter conditions and the situation is better in summer, the overall advantage to be gained is clear. If our aircraft do not have this capability, the potential for the enemy to advance his troops and armour without hindrance for considerable periods is all too obvious. There are several solutions to providing such a capability. The one chosen for Tornado GR1 is to use Terrain Following Radar (TFR). This system provides a complete 24-hour capability. However, it has two main disadvantages. First, it is an active system, which means it can be jammed or homed in on, and it is mainly useful for attacking pre-planned targets. Second, it is an expensive system, which precludes fitting it to other than a small number of aircraft.

  9. Qualitative evaluations and comparisons of six night-vision colorization methods

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Reese, Kristopher; Blasch, Erik; McManamon, Paul

    2013-05-01

    Current multispectral night vision (NV) colorization techniques can manipulate images to produce colorized images that closely resemble natural scenes. The colorized NV images can enhance human perception by improving observer object classification and reaction times, especially under low light conditions. This paper focuses on the qualitative (subjective) evaluation and comparison of six NV colorization methods. The multispectral images include visible (Red-Green-Blue), near infrared (NIR), and long wave infrared (LWIR) images. The six colorization methods are channel-based color fusion (CBCF), statistic matching (SM), histogram matching (HM), joint-histogram matching (JHM), statistic matching then joint-histogram matching (SM-JHM), and the lookup table (LUT). Four categories of quality measurements are used for the qualitative evaluations: contrast, detail, colorfulness, and overall quality. The score of each measurement is rated on a scale of 1 to 3, representing low, average, and high quality, respectively. Specifically, high contrast (rated score 3) means an adequate level of brightness and contrast. High detail represents high clarity of detailed contents while maintaining low artifacts. High colorfulness preserves more natural colors (i.e., closely resembles the daylight image). Overall quality is determined from the NV image compared to the reference image. Nine sets of multispectral NV images were used in our experiments. For each set, the six colorized NV images (produced from NIR and LWIR images) were concurrently presented to users along with the reference color (RGB) image (taken at daytime). A total of 67 subjects passed a screening test ("Ishihara Color Blindness Test") and were asked to evaluate the nine sets of colorized images. The experimental results showed the quality order of the colorization methods, from best to worst: CBCF, SM, SM-JHM, LUT, JHM, HM. It is anticipated that this work will provide a benchmark for NV colorization.
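
    Of the six methods, statistic matching (SM) is the simplest to state: each channel of the false-colored NV image is shifted and scaled to the reference image's channel mean and standard deviation. A minimal sketch, assuming both inputs are already 3-channel uint8 arrays of the same shape:

      import numpy as np

      def statistic_match(nv, ref):
          """Match per-channel mean/std of the NV image to the reference image."""
          out = np.empty(ref.shape, dtype=np.float64)
          for c in range(3):
              src = nv[..., c].astype(np.float64)
              dst = ref[..., c].astype(np.float64)
              out[..., c] = (src - src.mean()) / (src.std() + 1e-12) * dst.std() + dst.mean()
          return np.clip(out, 0, 255).astype(np.uint8)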

  10. Civil use of night vision goggles within the National Airspace System

    NASA Astrophysics Data System (ADS)

    Winkel, James G.; Faber, Lorelei

    2001-08-01

    When properly employed, Night Vision Goggles (NVGs) improve a pilot's ability to see during periods of darkness. The resultant enhancement in situational awareness achieved when using NVGs increases flight safety during night VFR operations. The FAA, however, is constrained by a lack of the requisite regulatory and guidance infrastructure to adequately facilitate civil requests for use of NVGs within the National Airspace System (NAS). A special committee (SC-196), covering NVG appliances and equipment, is formed and tasked to develop: an operational concept and operational requirements for NVG implementation into the NAS, minimum operational performance standards for NVGs, and training guidelines and considerations for NVG operations. This paper provides a historical perspective on the use of NVGs within the NAS, the status of SC-196 work in progress, FAA integration of SC-196 committee products, and the harmonization effort between EUROCAE's NVG committee and SC-196.

  11. Lens Systems Incorporating A Zero Power Corrector Objectives And Magnifiers For Night Vision Applications

    NASA Astrophysics Data System (ADS)

    McDowell, M. W.; Klee, H. W.

    1986-02-01

    The use of the zero power corrector concept has been extended to the design of objective lenses and magnifiers suitable for use in night vision goggles. A novel design which can be used as either an f/1.2 objective or an f/2 magnifier is also described.

  12. Multispectral image-fused head-tracked vision system (HTVS) for driving applications

    NASA Astrophysics Data System (ADS)

    Reese, Colin E.; Bender, Edward J.

    2001-08-01

    Current military thermal driver vision systems consist of a single Long Wave Infrared (LWIR) sensor mounted on a manually operated gimbal, which is normally locked forward during driving. The sensor video imagery is presented on a large-area flat panel display for direct view. The Night Vision and Electronic Sensors Directorate and Kaiser Electronics are cooperatively working to develop a driver's Head Tracked Vision System (HTVS) which directs dual-waveband sensors in a more natural head-slewed imaging mode. The HTVS consists of LWIR and image-intensified sensors, a high-speed gimbal, a head-mounted display, and a head tracker. The first prototype systems have been delivered and have undergone preliminary field trials to characterize the operational benefits of a head-tracked sensor system for tactical military ground applications. This investigation will address the advantages of head-tracked vs. fixed sensor systems regarding peripheral sightings of threats, road hazards, and nearby vehicles. An additional thrust will investigate the degree to which additive (A+B) fusion of LWIR and image-intensified sensors enhances overall driving performance. Typically, LWIR sensors are better for detecting threats, while image-intensified sensors provide more natural scene cues, such as shadows and texture. This investigation will examine the degree to which the fusion of these two sensors enhances the driver's overall situational awareness.
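
    In its simplest form, the additive (A+B) fusion mentioned here is a per-pixel weighted sum of the co-registered LWIR and image-intensified frames; a minimal sketch (the fixed 50/50 weighting is an assumption, and fielded systems typically adapt it to the scene):

      import numpy as np

      def additive_fusion(lwir, i2, alpha=0.5):
          """A+B fusion of two co-registered 8-bit frames."""
          a = lwir.astype(np.float64) / 255.0
          b = i2.astype(np.float64) / 255.0
          fused = alpha * a + (1.0 - alpha) * b        # per-pixel weighted sum
          return (np.clip(fused, 0.0, 1.0) * 255).astype(np.uint8)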

  13. Rotary-wing flight test methods used for the evaluation of night vision devices

    NASA Astrophysics Data System (ADS)

    Haworth, Loran A.; Blanken, Christopher J.; Szoboszlay, Zoltan P.

    2001-08-01

    The U.S. Army Aviation mission includes flying helicopters at low altitude, at night, and in adverse weather. Night Vision Devices (NVDs) are used to supplement the pilot's visual cues for night flying. As the military requirement to conduct night helicopter operations has increased, the impact of helicopter flight operations with NVD technology in the Degraded Visual Environment (DVE) became increasingly important to quantify. Aeronautical Design Standard-33 (ADS-33) was introduced to update rotorcraft handling qualities requirements and to quantify the impact of the NVDs in the DVE. As reported in this paper, flight test methodology in ADS-33 has been used by the handling qualities community to measure the impact of NVDs on task performance in the DVE. This paper provides the background and rationale behind the development of ADS-33 flight test methodology for handling qualities in the DVE, as well as the test methodology developed for human factor assessment of NVDs in the DVE. Lessons learned, shortcomings and recommendations for NVD flight test methodology are provided in this paper.

  14. Broad Band Antireflection Coating on Zinc Sulphide Window for Shortwave infrared cum Night Vision System

    NASA Astrophysics Data System (ADS)

    Upadhyaya, A. S.; Bandyopadhyay, P. K.

    2012-11-01

    In state-of-the-art technology, integrated devices are widely used for their potential advantages. A common system reduces weight as well as the total space occupied by its various parts. In state-of-the-art surveillance systems, an integrated SWIR and night vision system is used for more accurate identification of objects. In this system a common optical window is used, which passes the radiation of both regions; the two spectral regions are then separated into two channels. ZnS is a good choice for a common window, as it transmits both regions of interest: night vision (650-850 nm) as well as SWIR (0.9-1.7 μm). In this work a broadband antireflection coating is developed on a ZnS window to enhance the transmission. This seven-layer coating is designed using the flip-flop design method. After obtaining the final design, some minor refinement is done using the simplex method. The SiO2 and TiO2 coating material combination is used for this work. The coating is fabricated by a physical vapour deposition process, and the materials are evaporated by an electron beam gun. The average transmission of the both-side-coated substrate from 660 to 1700 nm is 95%. This coating also acts as a contrast enhancement filter for night vision devices, as it reflects the 590-660 nm region. Several trials have been conducted to check the coating repeatability, and it is observed that the transmission variation between trials is small and within the tolerance limit. The coating also passes environmental tests for stability.

  15. Acoustic Measurement and Model Predictions for the Aural Nondetectability of Two Night-Vision Goggles

    DTIC Science & Technology

    2013-11-01

    Acoustic Measurement and Model Predictions for the Aural Nondetectability of Two Night-Vision Goggles, by Jeremy Gaston, Tim Mermagen, and Kelly Dickerson, Human Research and Engineering Directorate, ARL.

  16. Hyperstereopsis in night vision devices: basic mechanisms and impact for training requirements

    NASA Astrophysics Data System (ADS)

    Priot, Anne-Emmanuelle; Hourlier, Sylvain; Giraudet, Guillaume; Leger, Alain; Roumes, Corinne

    2006-05-01

    Including night vision capabilities in Helmet Mounted Displays has been a serious challenge for many years. The use of "see-through" head-mounted image intensifier systems is particularly challenging, as it introduces some peculiar visual characteristics usually referred to as "hyperstereopsis". Flight testing of such systems started in the early nineties, both in the US and in Europe. While the trials conducted in the US yielded quite controversial results, convergent positive ones were obtained from European testing, mainly in the UK, Germany, and France. Subsequently, work on integrating optically coupled I² tubes on HMDs was discontinued in the US, while European manufacturers developed such HMDs for various rotary-wing platforms such as the TIGER. Coping with hyperstereopsis raises physiological and cognitive human factors issues. Starting in the sixties, the effects of increased interocular separation and adaptation to such unusual vision conditions have been quite extensively studied by a number of authors, such as Wallach, Schor, Judge and Miles, and Fisher and Ciuffreda. A synthetic review of the literature on this subject will be presented. According to users' reports, three successive phases will be described for habituation to such devices: initial exposure, a building compensation phase, and a behavioral adjustments phase. A habituation model will be suggested to account for HMSD users' reports and literature data bearing on hyperstereopsis, cue weighting for depth perception, adaptation and learning processes, and cognitive control of the task. Finally, some preliminary results on spatial and temporal adaptation to hyperstereopsis, coming from the survey of training of TIGER pilots currently conducted at the French-German Army Aviation Training Center, will be unveiled.

  17. Analysis of Risk Compensation Behavior on Night Vision Enhancement System

    NASA Astrophysics Data System (ADS)

    Hiraoka, Toshihiro; Masui, Junya; Nishikawa, Seimei

    Advanced driver assistance systems (ADAS) such as a forward obstacle collision warning system (FOCWS) and a night vision enhancement system (NVES) aim to decrease the driver's mental workload and enhance vehicle safety by providing useful information that supports the driver's perception and judgment processes. On the other hand, the risk homeostasis theory (RHT) cautions that enhanced safety and reduced risk can cause risk compensation behavior, such as increasing vehicle velocity. The present paper therefore reports driving simulator experiments that examine dependence on the NVES and the emergence of risk compensation behavior. Moreover, we verified the side effects of spontaneous behavioral adaptation, induced by presentation of a fuel-consumption meter, on the risk compensation behavior.

  18. Assessing contextual factors that influence acceptance of pedestrian alerts by a night vision system.

    PubMed

    Källhammer, Jan-Erik; Smith, Kip

    2012-08-01

    We investigated five contextual variables that we hypothesized would influence driver acceptance of alerts to pedestrians issued by a night vision active safety system to inform the specification of the system's alerting strategies. Driver acceptance of automotive active safety systems is a key factor to promote their use and implies a need to assess factors influencing driver acceptance. In a field operational test, 10 drivers drove instrumented vehicles equipped with a preproduction night vision system with pedestrian detection software. In a follow-up experiment, the 10 drivers and 25 additional volunteers without experience with the system watched 57 clips with pedestrian encounters gathered during the field operational test. They rated the acceptance of an alert to each pedestrian encounter. Levels of rating concordance were significant between drivers who experienced the encounters and participants who did not. Two contextual variables, pedestrian location and motion, were found to influence ratings. Alerts were more accepted when pedestrians were close to or moving toward the vehicle's path. The study demonstrates the utility of using subjective driver acceptance ratings to inform the design of active safety systems and to leverage expensive field operational test data within the confines of the laboratory. The design of alerting strategies for active safety systems needs to heed the driver's contextual sensitivity to issued alerts.

  19. A real-time monitoring system for night glare protection

    NASA Astrophysics Data System (ADS)

    Ma, Jun; Ni, Xuxiang

    2010-11-01

    When capturing a dark scene containing a very bright object, a monitoring camera saturates in some regions, and details are lost in and near these saturated regions because of glare. This work aims at developing a real-time night monitoring system that decreases the influence of glare and recovers more detail from an ordinary camera when exposing a high-contrast scene, such as a car with its headlights on at night. The system is made up of a spatial light modulator (liquid crystal on silicon: LCoS), an image sensor (CCD), an imaging lens, and a DSP. LCoS, a reflective liquid crystal device, can modulate the intensity of reflected light at every pixel as a digital device. Through the modulation function of the LCoS, the CCD is exposed region by region. Under the control of the DSP, the light intensity is decreased to a minimum in the glare regions, and in the other regions it is modulated by negative feedback based on PID theory. In this way, more details of the object are imaged on the CCD and the glare protection of the monitoring system is achieved. In the experiments, the feedback is controlled by an embedded system based on a TI DM642. Experiments show that this feedback modulation method not only reduces glare to improve image quality but also enhances the dynamic range of the image. The high-quality, high-dynamic-range image is captured in real time at 30 Hz. The modulation depth of the LCoS determines how strong a glare can be removed.
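
    A rough sketch of the per-region negative-feedback idea: regulate each region's mean brightness toward a target by adjusting that region's LCoS attenuation with a PID update. The gains and target are illustrative assumptions, not the authors' tuning.

      class RegionPID:
          """One PID loop per LCoS region; output adjusts the region's attenuation."""

          def __init__(self, kp=0.5, ki=0.05, kd=0.1, target=128.0):
              self.kp, self.ki, self.kd, self.target = kp, ki, kd, target
              self.integral = 0.0
              self.prev_err = 0.0

          def update(self, region_mean):
              err = self.target - region_mean          # distance from the set point
              self.integral += err
              deriv = err - self.prev_err
              self.prev_err = err
              # positive output -> pass more light, negative -> attenuate harder
              return self.kp * err + self.ki * self.integral + self.kd * deriv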

  20. Is More Better? - Night Vision Enhancement System's Pedestrian Warning Modes and Older Drivers.

    PubMed

    Brown, Timothy; He, Yefei; Roe, Cheryl; Schnell, Thomas

    2010-01-01

    Pedestrian fatalities as a result of vehicle collisions are much more likely to happen at night than during the daytime. Poor visibility due to darkness is believed to be one of the causes of the higher vehicle collision rate at night. Existing studies have shown that night vision enhancement systems (NVES) may improve recognition distance, but may increase drivers' workload. The use of automatic warnings (AW) may help minimize workload, improve performance, and increase safety. In this study, we used a driving simulator to examine performance differences of an NVES with six different configurations of warning cues: visual, auditory, tactile, auditory and visual, tactile and visual, and no warning. Older drivers between the ages of 65 and 74 participated in the study. An analysis based on the distance to the pedestrian threat at the onset of the braking response revealed that tactile and auditory warnings performed the best, while visual warnings performed the worst. When tactile or auditory warnings were presented in combination with a visual warning, their effectiveness decreased. This result demonstrates that, contrary to common intuition regarding warning systems, multi-modal warnings involving visual cues degraded the effectiveness of the NVES for older drivers.

  1. Sensor fusion to enable next generation low cost Night Vision systems

    NASA Astrophysics Data System (ADS)

    Schweiger, R.; Franz, S.; Löhlein, O.; Ritter, W.; Källhammer, J.-E.; Franks, J.; Krekels, T.

    2010-04-01

    The next generation of automotive Night Vision Enhancement systems offers automatic pedestrian recognition with a performance beyond current Night Vision systems at a lower cost. This will allow high market penetration, covering the luxury as well as compact car segments. Improved performance can be achieved by fusing a Far Infrared (FIR) sensor with a Near Infrared (NIR) sensor. However, fusing with today's FIR systems will be too costly to get a high market penetration. The main cost drivers of the FIR system are its resolution and its sensitivity. Sensor cost is largely determined by sensor die size. Fewer and smaller pixels will reduce die size but also resolution and sensitivity. Sensitivity limits are mainly determined by inclement weather performance. Sensitivity requirements should be matched to the possibilities of low-cost FIR optics, especially the implications of molding highly complex optical surfaces. As a FIR sensor specified for fusion can have lower resolution as well as lower sensitivity, fusing FIR and NIR can solve the performance and cost problems. To compensate for the effect of FIR-sensor degradation on pedestrian detection capabilities, a fusion approach called MultiSensorBoosting is presented that produces a classifier holding highly discriminative sub-pixel features from both sensors at once. The algorithm is applied to data with different resolutions and to data obtained from cameras with varying optics to incorporate various sensor sensitivities. As it is not feasible to record representative data with all different sensor configurations, transformation routines on existing high-resolution data recorded with high-sensitivity cameras are investigated in order to determine the effects of lower resolution and lower sensitivity on the overall detection performance. This paper also gives an overview of the first results, showing that a reduction of FIR sensor resolution can be compensated using fusion techniques and that a reduction of sensitivity can be compensated as well.
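
    MultiSensorBoosting is not spelled out in this abstract; as a generic stand-in, the sketch below boosts decision stumps over a joint feature vector that concatenates FIR and NIR descriptors for each detection window (scikit-learn; every name and parameter here is an assumption for illustration).

      import numpy as np
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.tree import DecisionTreeClassifier

      def train_fused_detector(fir_feats, nir_feats, labels):
          """Boost weak learners over concatenated FIR+NIR window features."""
          X = np.hstack([fir_feats, nir_feats])        # (n_windows, d_fir + d_nir)
          clf = AdaBoostClassifier(
              estimator=DecisionTreeClassifier(max_depth=1),   # decision stumps
              n_estimators=200,
          )
          return clf.fit(X, labels)                    # labels: 1 pedestrian, 0 not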

  2. An approach to integrate the human vision psychology and perception knowledge into image enhancement

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Huang, Xifeng; Ping, Jiang

    2009-07-01

    Image enhancement is a very important image preprocessing technology, especially when the image is captured under poor imaging conditions or when dealing with high-bit-depth images. The beneficiary of image enhancement may be either a human observer or a computer vision process performing some kind of higher-level image analysis, such as target detection or scene understanding. One of the main objectives of image enhancement is obtaining a high-dynamic-range, high-contrast image for human perception or interpretation. It is therefore necessary to integrate empirical or statistical knowledge of human vision psychology and perception into image enhancement. This knowledge holds that humans' perception of and response to an intensity fluctuation δu in a visual signal are weighted by the background stimulus u, instead of being plainly uniform. Three main laws describe this phenomenon in psychology and psychophysics: Weber's law, the Weber-Fechner law, and Stevens's law. This paper integrates these three laws of human vision psychology and perception into a popular image enhancement algorithm named Adaptive Plateau Equalization (APE). Experiments were done on high-bit-depth star images captured in night scenes and on infrared images, both static images and video streams. For the jitter problem in video streams, the algorithm uses the difference between the current frame's plateau value and the previous frame's plateau value to correct the current frame's plateau value. To account for random noise, the pixel-value mapping process depends not only on the current pixel but also on the pixels in a window surrounding it; the window size is usually 3×3. The results of the improved algorithm are evaluated by entropy analysis and visual perception analysis. The experiments' results showed that the improved APE algorithm improved the quality of the images.
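
    A minimal sketch of the underlying plateau equalization for a high-bit-depth image: ordinary histogram equalization whose histogram counts are first clipped at the plateau value. The bin count and 8-bit output are assumptions; the paper's improvement additionally shapes the mapping with the perception laws above and smooths the plateau value across video frames.

      import numpy as np

      def plateau_equalize(img16, plateau):
          """Plateau-clipped histogram equalization of a uint16 image to 8 bits."""
          hist, _ = np.histogram(img16, bins=65536, range=(0, 65536))
          hist = np.minimum(hist, plateau)             # clip dominant bins
          cdf = np.cumsum(hist).astype(np.float64)
          cdf /= cdf[-1]
          lut = (cdf * 255.0).astype(np.uint8)
          return lut[img16]                            # apply lookup table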

  3. 21 CFR 886.5910 - Image intensification vision aid.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Image intensification vision aid. 886.5910 Section... (CONTINUED) MEDICAL DEVICES OPHTHALMIC DEVICES Therapeutic Devices § 886.5910 Image intensification vision aid. (a) Identification. An image intensification vision aid is a battery-powered device intended for...

  4. 21 CFR 886.5910 - Image intensification vision aid.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Image intensification vision aid. 886.5910 Section... (CONTINUED) MEDICAL DEVICES OPHTHALMIC DEVICES Therapeutic Devices § 886.5910 Image intensification vision aid. (a) Identification. An image intensification vision aid is a battery-powered device intended for...

  5. 21 CFR 886.5910 - Image intensification vision aid.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Image intensification vision aid. 886.5910 Section... (CONTINUED) MEDICAL DEVICES OPHTHALMIC DEVICES Therapeutic Devices § 886.5910 Image intensification vision aid. (a) Identification. An image intensification vision aid is a battery-powered device intended for...

  6. 21 CFR 886.5910 - Image intensification vision aid.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Image intensification vision aid. 886.5910 Section... (CONTINUED) MEDICAL DEVICES OPHTHALMIC DEVICES Therapeutic Devices § 886.5910 Image intensification vision aid. (a) Identification. An image intensification vision aid is a battery-powered device intended for...

  7. 21 CFR 886.5910 - Image intensification vision aid.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Image intensification vision aid. 886.5910 Section... (CONTINUED) MEDICAL DEVICES OPHTHALMIC DEVICES Therapeutic Devices § 886.5910 Image intensification vision aid. (a) Identification. An image intensification vision aid is a battery-powered device intended for...

  8. A Unified Taxonomic Approach to the Laboratory Assessment of Visionic Devices

    DTIC Science & Technology

    2006-09-01

    ...the ratification stage with member nations. Marasco and Task [4] presented a large array of tests applicable to image intensification-based visionic ... aircraft." In print. [4] Marasco, P. L., and Task, H. L. 1999. "Optical characterization of wide field-of-view night vision devices," in ...

  9. Photographic Assessment of Dark Spots in Night Vision Device Images

    DTIC Science & Technology

    1998-01-01

    Ronchi, V. (1957), Optics, the Science of Vision, New York: New York University Press. Biography: Peter L. Marasco came to the U.S. Air Force in 1991 as a ... optical test methods. Mr. Marasco received a BS degree from the University of Rochester in 1991 and an MS degree from the University of Arizona in 1993.

  10. The hazard of spatial disorientation during helicopter flight using night vision devices.

    PubMed

    Braithwaite, M G; Douglass, P K; Durnford, S J; Lucas, G

    1998-11-01

    Night Vision Devices (NVDs) provide an enormous advantage to the operational effectiveness of military helicopter flying by permitting flight throughout the night. However, compared with daytime flight, many of the depth perception and orientational cues are severely degraded. These degraded cues predispose aviators to spatial disorientation (SD), which is a serious drawback of these devices. As part of an overall analysis of Army helicopter accidents to assess the impact of SD on military flying, we scrutinized the class A-C mishap reports involving night-aided flight from 1987 to 1995. The accidents were classified according to the role of SD by three independent assessors, with the SD group further analyzed to determine associated factors and possible countermeasures. Almost 43% of all SD-related accidents in this series occurred during flight using NVDs, whereas only 13% of non-SD accidents involved NVDs. An examination of the SD accident rates per 100,000 flying hours revealed a significant difference between the rate for day flying and the rate for flight using NVDs (mean rate for daytime flight = 1.66, mean rate for NVD flight = 9.00, p < 0.001). The most important factors associated with these accidents were related to equipment limitations, distraction from the task, and training or procedural inadequacies. SD remains an important source of attrition of Army aircraft. The more than fivefold increase in risk associated with NVD flight is of serious concern. The associated factors and suggested countermeasures should be urgently addressed.

  11. Registration of Heat Capacity Mapping Mission day and night images

    NASA Technical Reports Server (NTRS)

    Watson, K.; Hummer-Miller, S.; Sawatzky, D. L. (Principal Investigator)

    1982-01-01

    Neither iterative registration, using drainage intersection maps for control, nor cross correlation techniques were satisfactory in registering day and night HCMM imagery. A procedure was developed which registers the image pairs by selecting control points and mapping the night thermal image to the daytime thermal and reflectance images using an affine transformation on a 1300 by 1100 pixel image. The resulting image registration is accurate to better than two pixels (RMS) and does not exhibit the significant misregistration that was noted in the temperature-difference and thermal-inertia products supplied by NASA. The affine transformation was determined using simple matrix arithmetic, a step that can be performed rapidly on a minicomputer.
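
    The affine step amounts to a small least-squares problem on the control points, which is indeed simple matrix arithmetic; a minimal sketch (the point arrays are assumed to be N×2, night-image coordinates against day-image coordinates):

      import numpy as np

      def fit_affine(src_pts, dst_pts):
          """Least-squares 2-D affine transform mapping src points onto dst points."""
          src = np.asarray(src_pts, dtype=np.float64)
          dst = np.asarray(dst_pts, dtype=np.float64)
          A = np.hstack([src, np.ones((len(src), 1))])      # rows of [x, y, 1]
          coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)  # 3x2 coefficient matrix
          return coeffs

      def apply_affine(coeffs, pts):
          pts = np.asarray(pts, dtype=np.float64)
          return np.hstack([pts, np.ones((len(pts), 1))]) @ coeffs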

  12. Neck muscle activity in fighter pilots wearing night-vision equipment during simulated flight.

    PubMed

    Ang, Björn O; Kristoffersson, Mats

    2013-02-01

    Night-vision goggles (NVG) in jet fighter aircraft appear to increase the risk of neck strain due to increased neck loading. The present aim was, therefore, to evaluate the effect on neck-muscle activity and subjective ratings of head-worn night-vision (NV) equipment in controlled simulated flights. Five experienced fighter pilots twice flew a standardized 2.5-h program in a dynamic flight simulator; one session with NVG and one with standard helmet mockup (control session). Each session commenced with a 1-h simulation at 1 Gz followed by a 1.5-h dynamic flight with repeated Gz profiles varying between 3 and 7 Gz and including aerial combat maneuvers (ACM) at 3-5 Gz. Large head-and-neck movements under high G conditions were avoided. Surface electromyographic (EMG) data was simultaneously measured bilaterally from anterior neck, upper and lower posterior neck, and upper shoulder muscles. EMG activity was normalized as the percentage of pretest maximal voluntary contraction (%MVC). Head-worn equipment (helmet comfort, balance, neck mobility, and discomfort) was rated subjectively immediately after flight. A trend emerged toward greater overall neck muscle activity in NV flight during sustained ACM episodes (10% vs. 8% MVC for the control session), but with no such effects for temporary 3-7 Gz profiles. Postflight ratings for NV sessions emerged as "unsatisfactory" for helmet comfort/neck discomfort. However, this was not significant compared to the control session. Helmet mounted NV equipment caused greater neck muscle activity during sustained combat maneuvers, indicating increased muscle strain due to increased neck loading. In addition, postflight ratings indicated neck discomfort after NV sessions, although not clearly increased compared to flying with standard helmet mockup.

  13. USE OF NIGHT-VISION GOGGLES, LIGHT-TAGS, AND FLUORESCENT POWDER FOR MEASURING MICROHABITAT USE OF NOCTURNAL SMALL MAMMALS

    Treesearch

    WILLIAM F. LAUDENSLAYER; ROBERTA J. FARGO

    1997-01-01

    From 1993 to 1996, dusky-footed woodrats (Neotoma fuscipes) were tracked using night-vision goggles, light-tags (an LED with a battery), and fluorescent powder to better understand their microhabitat use. Tracking was conducted in 3 oak woodland study sites in the southern Sierra Nevada, 16 km northeast of Fresno, California. Night-vision goggles were not very useful for direct...

  14. Is More Better? — Night Vision Enhancement System’s Pedestrian Warning Modes and Older Drivers

    PubMed Central

    Brown, Timothy; He, Yefei; Roe, Cheryl; Schnell, Thomas

    2010-01-01

    Pedestrian fatalities as a result of vehicle collisions are much more likely to happen at night than during the daytime. Poor visibility due to darkness is believed to be one of the causes of the higher vehicle collision rate at night. Existing studies have shown that night vision enhancement systems (NVES) may improve recognition distance, but may increase drivers' workload. The use of automatic warnings (AW) may help minimize workload, improve performance, and increase safety. In this study, we used a driving simulator to examine performance differences of an NVES with six different configurations of warning cues: visual, auditory, tactile, auditory and visual, tactile and visual, and no warning. Older drivers between the ages of 65 and 74 participated in the study. An analysis based on the distance to the pedestrian threat at the onset of the braking response revealed that tactile and auditory warnings performed the best, while visual warnings performed the worst. When tactile or auditory warnings were presented in combination with a visual warning, their effectiveness decreased. This result demonstrates that, contrary to common intuition regarding warning systems, multi-modal warnings involving visual cues degraded the effectiveness of the NVES for older drivers. PMID:21050616

  15. Low dark current InGaAs detector arrays for night vision and astronomy

    NASA Astrophysics Data System (ADS)

    MacDougal, Michael; Geske, Jon; Wang, Chad; Liao, Shirong; Getty, Jonathan; Holmes, Alan

    2009-05-01

    Aerius Photonics has developed large InGaAs arrays (1K x 1K and greater) with low dark currents for use in night vision applications in the SWIR regime. Aerius will present results of experiments to reduce the dark current density of their InGaAs detector arrays. By varying device designs and passivations, Aerius has achieved a dark current density below 1.0 nA/cm2 at 280K on small-pixel, detector arrays. Data is shown for both test structures and focal plane arrays. In addition, data from cryogenically cooled InGaAs arrays will be shown for astronomy applications.

  16. Integrity Determination for Image Rendering Vision Navigation

    DTIC Science & Technology

    2016-03-01

    identifying an object within a scene, tracking a SIFT feature between frames or matching images and/or features for stereo vision applications. This... object level, either in 2-D or 3-D, versus individual features. There is a breadth of information, largely from the machine vision community...matching or image rendering image correspondence approach is based upon using either 2-D or 3-D object models or templates to perform object detection or

  17. A low-noise 15-μm pixel-pitch 640×512 hybrid InGaAs image sensor for night vision

    NASA Astrophysics Data System (ADS)

    Guellec, Fabrice; Dubois, Sébastien; de Borniol, Eric; Castelein, Pierre; Martin, Sébastien; Guiguet, Romain; Tchagaspanian, Michaël; Rouvié, Anne; Bois, Philippe

    2012-03-01

    Hybrid InGaAs focal plane arrays are very interesting for night vision because they can benefit from the nightglow emission in the Short Wave Infrared band. Through a collaboration between III-V Lab and CEA-Léti, a 640x512 InGaAs image sensor with 15μm pixel pitch has been developed. The good crystalline quality of the InGaAs detectors opens the door to low dark current (around 20nA/cm2 at room temperature and -0.1V bias) as required for low light level imaging. In addition, the InP substrate can be removed to extend the detection range towards the visible spectrum. A custom readout IC (ROIC) has been designed in a standard CMOS 0.18μm technology. The pixel circuit is based on a capacitive transimpedance amplifier (CTIA) with two selectable charge-to-voltage conversion gains. Relying on a thorough noise analysis, this input stage has been optimized to deliver low-noise performance in high-gain mode with a reasonable concession on dynamic range. The exposure time can be maximized up to the frame period thanks to a rolling shutter approach. The frame rate can be up to 120fps or 60fps if the Correlated Double Sampling (CDS) capability of the circuit is enabled. The first results show that the CDS is effective at removing the very low frequency noise present on the reference voltage in our test setup. In this way, the measured total dark noise is around 90 electrons in high-gain mode for 8.3ms exposure time. It is mainly dominated by the dark shot noise for a detector temperature settling around 30°C when not cooled. The readout noise measured with shorter exposure time is around 30 electrons for a dynamic range of 71dB in high-gain mode and 108 electrons for 79dB in low-gain mode.
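
    A back-of-the-envelope check of the dark-shot-noise figure, using only numbers quoted above; since the dark current at the ~30 °C operating point is not given, the room-temperature 20 nA/cm² value serves as a lower bound.

      import math

      q = 1.602e-19                      # electron charge [C]
      j_dark = 20e-9                     # dark current density [A/cm^2], lower bound
      pitch_cm = 15e-4                   # 15 um pixel pitch [cm]
      t_int = 8.3e-3                     # exposure time [s]

      i_dark = j_dark * pitch_cm ** 2    # per-pixel dark current [A]
      n_dark = i_dark * t_int / q        # accumulated dark electrons (~2300)
      shot = math.sqrt(n_dark)           # Poisson shot noise, ~48 e- rms
      print(f"dark electrons ~ {n_dark:.0f}, shot noise ~ {shot:.0f} e- rms")
      # The measured ~90 e- total is plausible if the dark current at ~30 C and
      # the actual operating bias runs a few times above the room-temperature spec.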

  18. CloudSat Image of a Polar Night Storm Near Antarctica

    NASA Technical Reports Server (NTRS)

    2006-01-01

    CloudSat image of a horizontal cross-section of a polar night storm near Antarctica. Until now, clouds have been hard to observe in polar regions using remote sensing, particularly during the polar winter or night season. The red colors are indicative of highly reflective particles such as water (rain) or ice crystals, while the blue indicates thinner clouds (such as cirrus). The flat green/blue lines across the bottom represent the ground signal. The vertical scale on the CloudSat Cloud Profiling Radar image is approximately 30 kilometers (19 miles). The blue line below the Cloud Profiling Radar image indicates that the data were taken over water; the brown line below the image indicates the relative elevation of the land surface. The inset image shows the CloudSat track relative to a Moderate Resolution Imaging Spectroradiometer (MODIS) infrared image taken at nearly the same time.

  19. Integrated Imaging and Vision Techniques for Industrial Inspection: A Special Issue on Machine Vision and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zheng; Ukida, H.; Ramuhalli, Pradeep

    2010-06-05

    Imaging- and vision-based techniques play an important role in industrial inspection. The sophistication of the techniques assures high-quality performance of the manufacturing process through precise positioning, online monitoring, and real-time classification. Advanced systems incorporating multiple imaging and/or vision modalities provide robust solutions to complex situations and problems in industrial applications. A diverse range of industries, including aerospace, automotive, electronics, pharmaceutical, biomedical, semiconductor, and food/beverage, have benefited from recent advances in multi-modal imaging, data fusion, and computer vision technologies. Many of the open problems in this context are in the general area of image analysis methodologies (preferably in an automated fashion). This editorial article introduces a special issue of this journal highlighting recent advances and demonstrating the successful applications of integrated imaging and vision technologies in industrial inspection.

  20. New weather depiction technology for night vision goggle (NVG) training: 3D virtual/augmented reality scene-weather-atmosphere-target simulation

    NASA Astrophysics Data System (ADS)

    Folaron, Michelle; Deacutis, Martin; Hegarty, Jennifer; Vollmerhausen, Richard; Schroeder, John; Colby, Frank P.

    2007-04-01

    US Navy and Marine Corps pilots receive Night Vision Goggle (NVG) training as part of their overall training to maintain the superiority of our forces. This training must incorporate realistic targets, backgrounds, and representative atmospheric and weather effects that they may encounter under operational conditions. One approach to pilot NVG training is to use the Night Imaging and Threat Evaluation Laboratory (NITE Lab) concept. The NITE Labs utilize a 10' by 10' static terrain model equipped with both natural and cultural lighting that is used to demonstrate various illumination conditions and visual phenomena which might be experienced when utilizing night vision goggles. With this technology, the military can safely, systematically, and reliably expose pilots to the large number of potentially dangerous environmental conditions that will be experienced in their NVG training flights. A previous SPIE presentation described our work for NAVAIR to add realistic atmospheric and weather effects to the NVG NITE Lab training facility using the NVG-WDT (Weather Depiction Technology) system (Colby, et al.). NVG-WDT consists of a high-end multiprocessor server with weather simulation software, and several fixed and goggle-mounted Heads-Up Displays (HUDs). Atmospheric and weather effects are simulated using state-of-the-art computer codes such as the WRF (Weather Research and Forecasting) model and the US Air Force Research Laboratory MODTRAN radiative transport model. Imagery for a variety of natural and man-made obscurations (e.g., rain, clouds, snow, dust, smoke, chemical releases) is calculated and injected into the scene observed through the NVG via the fixed and goggle-mounted HUDs. This paper expands on the work described in the previous presentation and describes the 3D Virtual/Augmented Reality Scene-Weather-Atmosphere-Target Simulation part of the NVG-WDT. The 3D virtual reality software is a complete simulation system to generate realistic scenes.

  1. Night Vision

    NASA Image and Video Library

    2016-03-10

    It's hard to see in the dark. Most HiRISE images are taken when the sun is at least 15 degrees above the horizon. (If you hold your hand at arm's length with fingers together, it's about five degrees wide on average.) However, to see what's going on in winter, we need to look at times and places where the Sun is just barely over the horizon. This image was taken to look at seasonal frost in gullies during southern winter on Mars, with the Sun only about two degrees over the horizon (just before sunset). To make things more difficult, the gullies are on a steep slope facing away from the sun, so they are in deep shadow. Under these conditions, HiRISE takes what are called "bin 4" images. This means that the image shows less detail, but by adding up the light from 16 pixels (a 4x4 square) we can see details in shadows. Even with the reduced resolution, we can see plenty of detail in the gullies, and learn about the seasonal frost. http://photojournal.jpl.nasa.gov/catalog/PIA20480
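
    The "bin 4" trick described here is plain block summation; a minimal sketch of 4×4 binning on a 2-D array:

      import numpy as np

      def bin4(img):
          """Sum 4x4 pixel blocks: 16x the collected signal, 1/4 the resolution."""
          h, w = img.shape
          h4, w4 = h - h % 4, w - w % 4                # crop to a multiple of 4
          blocks = img[:h4, :w4].reshape(h4 // 4, 4, w4 // 4, 4)
          return blocks.sum(axis=(1, 3))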

  2. Image segmentation for enhancing symbol recognition in prosthetic vision.

    PubMed

    Horne, Lachlan; Barnes, Nick; McCarthy, Chris; He, Xuming

    2012-01-01

    Current and near-term implantable prosthetic vision systems offer the potential to restore some visual function, but suffer from poor resolution and dynamic range of induced phosphenes. This can make it difficult for users of prosthetic vision systems to identify symbolic information (such as signs) except in controlled conditions. Using image segmentation techniques from computer vision, we show it is possible to improve the clarity of such symbolic information for users of prosthetic vision implants in uncontrolled conditions. We use image segmentation to automatically divide a natural image into regions, and using a fixation point controlled by the user, select a region to phosphenize. This technique improves the apparent contrast and clarity of symbolic information over traditional phosphenization approaches.

  3. The effects of moon illumination, moon angle, cloud cover, and sky glow on night vision goggle flight performance

    NASA Astrophysics Data System (ADS)

    Loro, Stephen Lee

    This study was designed to examine moon illumination, moon angle, cloud cover, sky glow, and Night Vision Goggle (NVG) flight performance to determine possible effects. The research was a causal-comparative design. The sample consisted of 194 Fort Rucker Initial Entry Rotary Wing NVG flight students being observed by 69 NVG Instructor Pilots. The students participated in NVG flight training from September 1992 through January 1993. Data were collected using a questionnaire. Observations were analyzed using a Kruskal-Wallis one-way analysis of variance and a Wilcoxon matched-pairs signed-ranks test for difference. Correlations were analyzed using Pearson's r. The analysis results indicated that performance at high moon illumination levels is superior to that at zero moon illumination and, for most task maneuvers, superior to that at >0%-50% moon illumination. No differences were found in performance at moon illumination levels above 50%. Moon angle had no effect on night vision goggle flight performance. Cloud cover and sky glow have selective effects on different maneuvers. For most task maneuvers, cloud cover does not affect performance. Overcast cloud cover had a significant effect on seven of the 14 task maneuvers. Sky glow did not affect eight out of 14 task maneuvers at any level of sky glow.

  4. A natural-color mapping for single-band night-time image based on FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Yilun; Qian, Yunsheng

    2018-01-01

    A natural-color mapping method for single-band night-time images based on an FPGA can transfer the colors of a reference image to a single-band night-time image; the result is consistent with human visual habits and can help observers identify targets. This paper introduces the processing flow of the natural-color mapping algorithm based on the FPGA. Firstly, the image is transformed based on histogram equalization, and the intensity features and standard deviation features of the reference image are stored in SRAM. Then, the intensity features and standard deviation features of the real-time digital images are calculated by the FPGA. At last, the FPGA completes the color mapping by matching pixels between images using the features in the luminance channel.

  5. An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database.

    PubMed

    Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang

    2016-01-28

    In determining position and attitude, vision navigation via real-time image processing of data collected from imaging sensors is advanced without a high-performance global positioning system (GPS) and an inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far space navigation, and multiple sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach aided by imaging sensors and that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image searches and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image with the GRID. Subsequently, the image matched with the real-time scene is considered to calculate the 3D navigation parameter of multiple sensor platforms. Experimental results show that the proposed approach retrieves images efficiently and has navigation accuracies of 1.2 m in a plane and 1.8 m in height under GPS loss in 5 min and within 1500 m.

  6. An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database

    PubMed Central

    Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang

    2016-01-01

    In determining position and attitude, vision navigation via real-time image processing of data collected from imaging sensors is advanced without a high-performance global positioning system (GPS) and an inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far space navigation, and multiple sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach aided by imaging sensors and that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image searches and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image with the GRID. Subsequently, the image matched with the real-time scene is considered to calculate the 3D navigation parameter of multiple sensor platforms. Experimental results show that the proposed approach retrieves images efficiently and has navigation accuracies of 1.2 m in a plane and 1.8 m in height under GPS loss in 5 min and within 1500 m. PMID:26828496
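
    The paper's robust matching algorithm is not reproduced in the abstract; as a generic stand-in, the sketch below retrieves the best database image for a live frame using ORB features with a ratio test (feature count, ratio, and acceptance threshold are all assumptions).

      import cv2

      def match_to_grid(query, candidates, min_good=30):
          """Return the index of the GRID image best matching the live frame."""
          orb = cv2.ORB_create(nfeatures=2000)
          bf = cv2.BFMatcher(cv2.NORM_HAMMING)
          _, dq = orb.detectAndCompute(query, None)
          best, best_score = None, 0
          for idx, img in enumerate(candidates):
              _, dc = orb.detectAndCompute(img, None)
              if dq is None or dc is None:
                  continue
              pairs = bf.knnMatch(dq, dc, k=2)
              good = [p[0] for p in pairs
                      if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
              if len(good) > best_score:
                  best, best_score = idx, len(good)
          return best if best_score >= min_good else None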

  7. Use of a night vision intensifier for direct visualization by eye of far-red and near-infrared fluorescence through an optical microscope.

    PubMed

    Siddiqi, M A; Kilduff, G M; Gearhart, J D

    2003-11-01

    We describe the design, construction and testing of a prototype device that allows the direct visualization by eye of far-red and near-infrared (NIR) fluorescence through an optical microscope. The device incorporates a gallium arsenide (GaAs) image intensifier, typically utilized in low-light or 'night vision' applications. The intensifier converts far-red and NIR light into electrons and then into green light, which is visible to the human eye. The prototype makes possible the direct, real-time viewing by eye of normally invisible far-red and NIR fluorescence from a wide variety of fluorophores, using the full field of view of the microscope to which it is applied. The high sensitivity of the image intensifier facilitates the viewing of a wide variety of photosensitive specimens, including live cells and embryos, at vastly reduced illumination levels in both fluorescence and bright-field microscopy. Modifications to the microscope are not required in order to use the prototype, which is fully compatible with all current fluorescence techniques. Refined versions of the prototype device will have broad research and clinical applications.

  8. Vision communications based on LED array and imaging sensor

    NASA Astrophysics Data System (ADS)

    Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    In this paper, we propose a brand new communication concept, called "vision communication", based on an LED array and an image sensor. This system consists of an LED array as the transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as the receiver. In order to transmit data, the proposed communication scheme simultaneously uses digital image processing and optical wireless communication techniques. Therefore, a cognitive communication scheme is possible with the help of recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED in the array can emit a multi-spectral optical signal, such as visible, infrared, and ultraviolet light, an increase in data rate is possible, similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of Sync. data and information data. Sync. data is used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot based on image processing and optical wireless communication techniques. Through experiments based on a practical test bed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.
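
    Once the Sync. data has located and rectified the transmitter region, decoding one information frame reduces to sampling each LED cell of the rectified snapshot; a single-band sketch (grid size and threshold are assumptions, and the multi-spectral case would repeat this per wavelength channel).

      import numpy as np

      def decode_led_grid(frame, rows, cols, thresh=127):
          """Average each grid cell of a rectified gray frame and threshold to bits."""
          h, w = frame.shape
          bits = np.zeros((rows, cols), dtype=np.uint8)
          for r in range(rows):
              for c in range(cols):
                  cell = frame[r * h // rows:(r + 1) * h // rows,
                               c * w // cols:(c + 1) * w // cols]
                  bits[r, c] = 1 if cell.mean() > thresh else 0
          return bits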

  9. Perceptual adaptation in the use of night vision goggles

    NASA Technical Reports Server (NTRS)

    Durgin, Frank H.; Proffitt, Dennis R.

    1992-01-01

The image intensification (I sup 2) systems studied for this report were the biocular AN/PVS-7 (NVG) and the binocular AN/AVS-6 (ANVIS). Both are quite impressive for revealing the structure of the environment in a fairly straightforward way in extremely low-light conditions. But these systems represent an unusual viewing medium. The perceptual information available through I sup 2 systems differs in a variety of ways from the typical input of everyday vision, and extensive training and practice are required for optimal use. Using this sort of system involves a kind of perceptual skill learning, but it may also involve visual adaptations that are not simply an extension of normal vision. For example, the visual noise evident in the goggles in very low-light conditions produces unusual statistical properties in the visual input. Because we had recently discovered a strong and enduring aftereffect of perceived texture density which seemed to be sensitive to precisely the sorts of statistical distortions introduced by I sup 2 systems, it occurred to us that visual noise of this sort might be a strongly adapting stimulus for texture density and produce an aftereffect that extended into normal vision once the goggles were removed. We have not found any experimental evidence that I sup 2 systems produce texture density aftereffects. The nature of the texture density aftereffect is briefly explained, followed by an account of our studies of I sup 2 systems and our most recent work on the texture density aftereffect. A test for spatial frequency adaptation after exposure to NVGs is also reported, as is a study of perceived depth from motion (motion parallax) while wearing the biocular goggles. We conclude with a summary of our findings.

  10. Development and testing of the EVS 2000 enhanced vision system

    NASA Astrophysics Data System (ADS)

    Way, Scott P.; Kerr, Richard; Imamura, Joe J.; Arnoldy, Dan; Zeylmaker, Richard; Zuro, Greg

    2003-09-01

    An effective enhanced vision system must operate over a broad spectral range in order to offer a pilot an optimized scene that includes runway background as well as airport lighting and aircraft operations. The large dynamic range of intensities of these images is best handled with separate imaging sensors. The EVS 2000 is a patented dual-band Infrared Enhanced Vision System (EVS) utilizing image fusion concepts to provide a single image from uncooled infrared imagers in both the LWIR and SWIR. The system is designed to provide commercial and corporate airline pilots with improved situational awareness at night and in degraded weather conditions. A prototype of this system was recently fabricated and flown on the Boeing Advanced Technology Demonstrator 737-900 aircraft. This paper will discuss the current EVS 2000 concept, show results taken from the Boeing Advanced Technology Demonstrator program, and discuss future plans for EVS systems.
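
    The abstract does not spell out the fusion algorithm; the Burt & Kolczynski-style Laplacian-pyramid selection cited elsewhere in this collection is the classic approach to this kind of dual-band fusion. A minimal sketch, assuming two registered, same-size, single-channel frames and OpenCV/numpy:

    ```python
    import cv2
    import numpy as np

    def laplacian_pyramid(img, levels):
        """Build a Laplacian pyramid with `levels` detail levels plus a base."""
        gp = [img]
        for _ in range(levels):
            gp.append(cv2.pyrDown(gp[-1]))
        lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
              for i in range(levels)]
        lp.append(gp[levels])
        return lp

    def fuse_dual_band(lwir, swir, levels=4):
        """Per-level max-magnitude coefficient selection, averaged base."""
        la = laplacian_pyramid(lwir.astype(np.float32), levels)
        lb = laplacian_pyramid(swir.astype(np.float32), levels)
        fused = [np.where(np.abs(a) >= np.abs(b), a, b)
                 for a, b in zip(la[:-1], lb[:-1])]
        fused.append(0.5 * (la[-1] + lb[-1]))          # average the coarse bases
        out = fused[-1]
        for detail in reversed(fused[:-1]):
            out = cv2.pyrUp(out, dstsize=(detail.shape[1], detail.shape[0])) + detail
        return np.clip(out, 0, 255).astype(np.uint8)
    ```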

  11. Some examples of image warping for low vision prosthesis

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.; Loshin, David S.

    1988-01-01

NASA has developed an image processor, the Programmable Remapper, for certain functions in machine vision. The Remapper performs a highly arbitrary geometric warping of an image at video rate. It might ultimately be shrunk to a size and cost that could allow its use in a low-vision prosthesis. Coordinate warpings have been developed for retinitis pigmentosa (tunnel vision) and for maculopathy (loss of central field) that are intended to make best use of the patient's remaining viable retina. The rationales and mathematics are presented for some warpings that we will try in clinical studies using the Remapper's prototype.
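
    As an illustration of the kind of coordinate warping described (not the clinical warpings themselves, which are derived from each patient's retinal map), here is a toy radial-compression remap for the tunnel-vision case, assuming OpenCV:

    ```python
    import cv2
    import numpy as np

    def tunnel_vision_remap(img, strength=0.5):
        """Radially compress the scene toward the image centre so that more of
        it falls on the residual central field (strength < 1 compresses)."""
        h, w = img.shape[:2]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        y, x = np.indices((h, w), dtype=np.float32)
        dx, dy = x - cx, y - cy
        r = np.sqrt(dx ** 2 + dy ** 2)
        rmax = r.max()
        src_r = rmax * (r / rmax) ** strength        # output radius -> source radius
        factor = src_r / np.maximum(r, 1e-6)
        map_x = (cx + dx * factor).astype(np.float32)
        map_y = (cy + dy * factor).astype(np.float32)
        return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
    ```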

  12. PixonVision real-time video processor

    NASA Astrophysics Data System (ADS)

    Puetter, R. C.; Hier, R. G.

    2007-09-01

PixonImaging LLC and DigiVision, Inc. have developed a real-time video processor, the PixonVision PV-200, based on the patented Pixon method for image deblurring and denoising and on DigiVision's spatially adaptive contrast enhancement processor, the DV1000. The PV-200 can process NTSC and PAL video in real time with a latency of one field (1/60th of a second), remove the effects of aerosol scattering from haze, mist, smoke, and dust, improve spatial resolution by up to 2x, decrease noise by up to 6x, and increase local contrast by up to 8x. A newer version of the processor, the PV-300, is now in prototype form and can handle high-definition video. Both the PV-200 and PV-300 are FPGA-based processors, which could be spun into ASICs if desired. Obvious applications of these processors include DOD platforms (tanks, aircraft, and ships), homeland security, intelligence, surveillance, and law enforcement. If developed into an ASIC, these processors will be suitable for a variety of portable applications, including gun sights, night vision goggles, binoculars, and guided munitions. This paper presents a variety of examples of PV-200 processing, including examples appropriate to border security, battlefield applications, port security, and surveillance from unmanned aerial vehicles.
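
    The Pixon method itself is patented and not detailed in the abstract; as a generic stand-in for the same class of processing, the following shows light denoising followed by spatially adaptive contrast enhancement (CLAHE), assuming OpenCV:

    ```python
    import cv2

    def enhance_frame(gray):
        """Generic stand-in for deblur/denoise/contrast processing: light
        non-local-means denoising followed by spatially adaptive contrast
        enhancement (CLAHE). Not the patented Pixon/DV1000 algorithms."""
        denoised = cv2.fastNlMeansDenoising(gray, None, 7)
        clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
        return clahe.apply(denoised)
    ```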

  13. IMAGE ENHANCEMENT FOR IMPAIRED VISION: THE CHALLENGE OF EVALUATION

    PubMed Central

    PELI, ELI; WOODS, RUSSELL L

    2009-01-01

With the aging of the population, the prevalence of eye diseases and thus of vision impairment is increasing. The TV watching habits of people with vision impairments are comparable to those of normally sighted people [1]; however, their vision loss prevents them from fully benefiting from this medium. For over 20 years we have been developing video image-enhancement techniques designed to assist people with visual impairments, particularly those due to central retinal vision loss. A major difficulty in this endeavor is the lack of evaluation techniques to assess and compare the effectiveness of various enhancement methods. This paper reviews our approaches to image enhancement and the results we have obtained, with special emphasis on the difficulties encountered in the evaluation of the benefits of enhancement and the solutions we have developed to date. PMID:20161188

  14. 3D morphology reconstruction using linear array CCD binocular stereo vision imaging system

    NASA Astrophysics Data System (ADS)

    Pan, Yu; Wang, Jinjiang

    2018-01-01

A binocular vision imaging system with a small field of view cannot reconstruct the 3-D shape of a dynamic object. We developed a linear array CCD binocular stereo vision imaging system, which uses different calibration and reconstruction methods. Building on the conventional binocular vision imaging system, the linear array CCD binocular system has a wider field of view and can reconstruct the 3-D morphology of objects in continuous motion with accurate results. This paper introduces the composition and principle of the linear array CCD binocular vision imaging system, including calibration, capture, matching, and reconstruction. The system consists of two linear array cameras placed in a special arrangement and a horizontally moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, and the results are matched and 3-D reconstructed. Because the linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, this work is of significance for measuring the 3-D morphology of moving objects.
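
    The abstract does not give the reconstruction equations; the standard linear (DLT) two-view triangulation step that any such calibrated stereo pipeline performs can be sketched as follows, assuming 3x4 projection matrices from the calibration step:

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one matched point from two views.
        P1, P2: 3x4 projection matrices from the calibration step;
        x1, x2: the matched pixel coordinates (u, v) in each image."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]        # inhomogeneous 3-D point
    ```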

  15. A robust color image fusion for low light level and infrared images

    NASA Astrophysics Data System (ADS)

    Liu, Chao; Zhang, Xiao-hui; Hu, Qing-ping; Chen, Yong-kang

    2016-09-01

Low-light-level and infrared color fusion technology has achieved great success in the field of night vision. The technology is designed to make hot targets pop out of the fused image in intense colors, to render background details with a color appearance close to nature, and to improve target discovery, detection, and identification. Low-light-level images carry heavy noise under low illumination, and existing color fusion methods are easily degraded by noise in the low-light-level channel. Specifically, when the low-light-level image noise is very large, the quality of the fused image decreases significantly, and targets in the infrared image can even be submerged by the noise. This paper proposes an adaptive color night vision technique in which noise evaluation parameters of the low-light-level image are introduced into the fusion process, improving the robustness of the color fusion. The color fusion results remain good in low-light situations, which shows that this method can effectively improve the quality of fused low-light-level and infrared images under low illumination conditions.
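
    The paper's exact noise metric and weighting are not given in the abstract; the sketch below conveys the general idea using Immerkaer's fast single-image noise estimate and a hypothetical mapping from noise level to the low-light channel's fusion weight:

    ```python
    import numpy as np
    from scipy.signal import convolve2d

    def noise_sigma(gray):
        """Immerkaer's fast noise estimate from a single image."""
        k = np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]], dtype=np.float64)
        resp = convolve2d(gray.astype(np.float64), k, mode="valid")
        return np.sqrt(np.pi / 2.0) * np.abs(resp).mean() / 6.0

    def fuse_lll_ir(lll, ir):
        """Blend low-light-level and infrared luminance with a weight that
        shrinks as the estimated LLL noise grows. The mapping from noise to
        weight is a hypothetical placeholder, not the paper's parameters."""
        w = 1.0 / (1.0 + noise_sigma(lll) / 10.0)
        fused = w * lll.astype(np.float64) + (1.0 - w) * ir.astype(np.float64)
        return np.clip(fused, 0, 255).astype(np.uint8)
    ```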

  16. Simulating Colour Vision Deficiency from a Spectral Image.

    PubMed

    Shrestha, Raju

    2016-01-01

People with colour vision deficiency (CVD) have difficulty seeing full colour contrast and can miss some of the features in a scene. As a part of universal design, researchers have been working on how to modify and enhance the colour of images in order to present the scene with good contrast. For this, it is important to know how the original colour image is seen by individuals with different forms of CVD. This paper proposes a methodology to simulate accurate colour-deficient images from a spectral image using the cone sensitivities of different deficiency types. As the method enables the generation of accurate colour-deficient images, it should help us better understand the limitations imposed by colour vision deficiency, which in turn can lead to the design and development of more effective imaging technologies for better and wider accessibility in the context of universal design.
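
    A minimal sketch of the core computation the abstract describes, projecting a spectral image onto cone responses. The cone fundamentals are an input, and the dichromat substitution coefficients shown are placeholders, not the paper's values:

    ```python
    import numpy as np

    def lms_from_spectral(cube, wavelengths, cone_fundamentals):
        """Project a spectral image onto cone responses.
        cube: H x W x B spectral radiance image sampled at `wavelengths` (nm);
        cone_fundamentals: B x 3 array of L, M, S sensitivities resampled to
        the same wavelengths (e.g. the Stockman-Sharpe fundamentals)."""
        dlam = np.gradient(wavelengths)            # band widths for the integral
        return np.tensordot(cube * dlam, cone_fundamentals, axes=([2], [0]))

    def simulate_protanopia(lms):
        """Crude dichromat simulation: replace the missing L signal with a
        combination of M and S. The coefficients below are placeholders; the
        standard construction is the Brettel et al. (1997) projection."""
        out = lms.copy()
        out[..., 0] = 1.05 * lms[..., 1] - 0.05 * lms[..., 2]
        return out
    ```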

  17. Evaluation of Night Vision Devices for Image Fusion Studies

    DTIC Science & Technology

    2004-12-01

July 2004. http://www.sensorsmag.com/articles/0400/34/main.shtml Task, Harry L., Hartman, Richard T., Marasco, Peter L., Methods for Measuring...Press, Bellingham, Washington, 1998. Burt, Peter J. & Kolczynski, Raymond J., David Sarnoff Research Center, Enhanced Image Capture through Fusion

  18. What's crucial in night vision goggle simulation?

    NASA Astrophysics Data System (ADS)

    Kooi, Frank L.; Toet, Alexander

    2005-05-01

Training is required to correctly interpret NVG imagery. Training night operations with simulated intensified imagery has great potential: compared to direct viewing with the naked eye, intensified imagery is relatively easy to simulate, and the cost of real NVG training is high (logistics, risk, civilian sleep deprivation, pollution). On the surface, NVG imagery appears to have a structure similar to daylight imagery. In actuality, however, its characteristics differ significantly from those of daylight imagery; as a result, NVG imagery frequently induces visual illusions. To achieve realistic training, simulated NVG imagery should at least reproduce the essential visual limitations of real NVG imagery caused by reduced resolution, reduced contrast, limited field-of-view, the absence of color, and the system's sensitivity to near-infrared radiation. It is particularly important that simulated NVG imagery represents essential NVG visual characteristics, such as the high reflectance of chlorophyll and halos. Current real-time simulation software falls short for training purposes because of an incorrect representation of shadow effects. We argue that the development of shading and shadowing merits priority to close the gap between real and simulated NVG flight conditions. Visual conspicuity can be deployed as an efficient metric to measure the 'perceptual distance' between the real and the simulated NVG image.

  19. Three-Dimensional Images For Robot Vision

    NASA Astrophysics Data System (ADS)

    McFarland, William D.

    1983-12-01

Robots are attracting increased attention in the industrial productivity crisis. As one significant approach for this nation to maintain technological leadership, the need for robot vision has become critical. The "blind" robot, while occupying an economical niche at present, is severely limited and job-specific, being only one step up from numerically controlled machines. To satisfy robot vision requirements, a three-dimensional representation of a real scene must be provided. Several image acquisition techniques are discussed, with emphasis on laser-radar-type instruments. The autonomous vehicle is also discussed as a robot form, and the requirements for these applications are considered. The total computer vision system requirement is reviewed, with some discussion of the major techniques in the literature for three-dimensional scene analysis.

  20. Visible spectral imager for occultation and nightglow (VISION) for the PICASSO Mission

    NASA Astrophysics Data System (ADS)

    Saari, Heikki; Näsilä, Antti; Holmlund, Christer; Mannila, Rami; Näkki, Ismo; Ojanen, Harri J.; Fussen, Didier; Pieroux, Didier; Demoulin, Philippe; Dekemper, Emmanuel; Vanhellemont, Filip

    2015-10-01

PICASSO - A PICo-satellite for Atmospheric and Space Science Observations is an ESA project led by the Belgian Institute for Space Aeronomy, in collaboration with VTT, Clyde Space Ltd. (UK), and the Centre Spatial de Liège (BE). VTT Technical Research Centre of Finland Ltd. will deliver the Visible Spectral Imager for Occultation and Nightglow (VISION) for the PICASSO mission. VISION primarily targets observation of the Earth's atmospheric limb during orbital Sun occultation. By assessing the radiation absorption in the Chappuis band for different tangent altitudes, the vertical profile of ozone is retrieved. A secondary objective is to measure the deformation of the solar disk so that stratospheric and mesospheric temperature profiles can be retrieved by inversion of the refractive ray-tracing problem. Finally, occasional full spectral observations of polar auroras are also foreseen. The VISION design, realized with commercial off-the-shelf (COTS) parts, is described. The VISION instrument is a small, lightweight (~500 g), piezo-actuated Fabry-Perot interferometer (PFPI) tunable spectral imager operating in the visible and near-infrared (430 - 800 nm). The spectral resolution over the whole wavelength range will be better than 10 nm @ FWHM. VISION has a 2.5° x 2.5° total field of view and delivers spectral images of up to 2048 x 2048 pixels. The Sun image size is around 0.5°, i.e., ~500 pixels. To enable fast spectral image acquisition, VISION can be operated with programmable image sizes. VTT has previously developed the PFPI-tunable-filter-based AaSI spectral imager for the Finnish CubeSat Aalto-1. In VISION the requirements on spectral resolution and stability are tighter than in AaSI; therefore, the optimization of the PFPI gap control loop for the operating temperature range and vacuum conditions has to be improved. The VISION optical, mechanical, and electrical design is described.

  1. Some Examples Of Image Warping For Low Vision Prosthesis

    NASA Astrophysics Data System (ADS)

    Juday, Richard D.; Loshin, David S.

    1988-08-01

NASA and Texas Instruments have developed an image processor, the Programmable Remapper, for certain functions in machine vision. The Remapper performs a highly arbitrary geometric warping of an image at video rate. It might ultimately be shrunk to a size and cost that could allow its use in a low-vision prosthesis. We have developed coordinate warpings for retinitis pigmentosa (tunnel vision) and for maculopathy (loss of central field) that are intended to make best use of the patient's remaining viable retina. The rationales and mathematics are presented for some warpings that we will try in clinical studies using the Remapper's prototype. (Recorded video imagery was shown at the conference for the maculopathy remapping.)

  2. Color image processing and vision system for an automated laser paint-stripping system

    NASA Astrophysics Data System (ADS)

    Hickey, John M., III; Hise, Lawson

    1994-10-01

Color image processing in machine vision systems has not gained general acceptance; most machine vision systems use images that are shades of gray. The Laser Automated Decoating System (LADS) required a vision system that could discriminate between substrates of various colors and textures and paints ranging from semi-gloss grays to high-gloss red, white, and blue (Air Force Thunderbirds). The changing lighting levels produced by the pulsed CO2 laser mandated a vision system that did not require constant color-temperature lighting for reliable image analysis.

  3. Image enhancement filters significantly improve reading performance for low vision observers

    NASA Technical Reports Server (NTRS)

    Lawton, T. B.

    1992-01-01

As people age, so do their photoreceptors; many photoreceptors in central vision stop functioning when a person reaches their late sixties or early seventies. Low vision observers with losses in central vision, those with age-related maculopathies, were studied. Low vision observers no longer see high spatial frequencies, being unable to resolve fine edge detail. We developed image enhancement filters to compensate for the low vision observer's losses in contrast sensitivity to intermediate and high spatial frequencies. The filters work by boosting the amplitude of the less visible intermediate spatial frequencies relative to the lower spatial frequencies. These image enhancement filters not only reduce the magnification needed for reading by up to 70 percent, but they also increase the observer's reading speed by 2-4 times. A summary of this research is presented.
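
    A minimal frequency-domain sketch of that boosting idea, with illustrative (not clinically tuned) band edges and gain:

    ```python
    import numpy as np

    def boost_midband(gray, low=0.05, high=0.35, gain=3.0):
        """Amplify intermediate spatial frequencies in the Fourier domain.
        `low` and `high` bound the boosted annulus in cycles/pixel and `gain`
        is its amplification; all three values are illustrative assumptions."""
        f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
        h, w = gray.shape
        fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
        fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
        radius = np.sqrt(fx ** 2 + fy ** 2)
        filt = np.where((radius >= low) & (radius <= high), gain, 1.0)
        out = np.fft.ifft2(np.fft.ifftshift(f * filt)).real
        return np.clip(out, 0, 255).astype(np.uint8)
    ```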

  4. Smartphones as image processing systems for prosthetic vision.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing, and relaying image information, as well as extracting useful features from the scene surrounding the patient. The capability and multitude of image processing algorithms that the device can perform in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware while avoiding bulky, straining solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old devices to recent ones. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.

  5. Design and testing of a dual-band enhanced vision system

    NASA Astrophysics Data System (ADS)

    Way, Scott P.; Kerr, Richard; Imamura, Joseph J.; Arnoldy, Dan; Zeylmaker, Dick; Zuro, Greg

    2003-09-01

    An effective enhanced vision system must operate over a broad spectral range in order to offer a pilot an optimized scene that includes runway background as well as airport lighting and aircraft operations. The large dynamic range of intensities of these images is best handled with separate imaging sensors. The EVS 2000 is a patented dual-band Infrared Enhanced Vision System (EVS) utilizing image fusion concepts. It has the ability to provide a single image from uncooled infrared imagers combined with SWIR, NIR or LLLTV sensors. The system is designed to provide commercial and corporate airline pilots with improved situational awareness at night and in degraded weather conditions but can also be used in a variety of applications where the fusion of dual band or multiband imagery is required. A prototype of this system was recently fabricated and flown on the Boeing Advanced Technology Demonstrator 737-900 aircraft. This paper will discuss the current EVS 2000 concept, show results taken from the Boeing Advanced Technology Demonstrator program, and discuss future plans for the fusion system.

  6. Stereo Image Ranging For An Autonomous Robot Vision System

    NASA Astrophysics Data System (ADS)

    Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven

    1985-12-01

The principles of stereo vision for three-dimensional data acquisition are well known and can be applied to the problem of an autonomous robot vehicle. Corresponding points in the two images are located, and the location of each point in three-dimensional space is then calculated from the offset (disparity) of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics that relieve the computational intensity of the low-level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.

  7. Development of Air Force aerial spray night operations: High altitude swath characterization

    USDA-ARS?s Scientific Manuscript database

    Multiple trials were conducted from 2006 to 2014 in an attempt to validate aerial spray efficacy at altitudes conducive to night spray operations using night vision goggles (NVG). Higher altitude application of pesticide (>400 feet above ground level [AGL]) suggested that effective vector control mi...

  8. Optimal design of photoreceptor mosaics: why we do not see color at night.

    PubMed

    Manning, Jeremy R; Brainard, David H

    2009-01-01

    While color vision mediated by rod photoreceptors in dim light is possible (Kelber & Roth, 2006), most animals, including humans, do not see in color at night. This is because their retinas contain only a single class of rod photoreceptors. Many of these same animals have daylight color vision, mediated by multiple classes of cone photoreceptors. We develop a general formulation, based on Bayesian decision theory, to evaluate the efficacy of various retinal photoreceptor mosaics. The formulation evaluates each mosaic under the assumption that its output is processed to optimally estimate the image. It also explicitly takes into account the statistics of the environmental image ensemble. Using the general formulation, we consider the trade-off between monochromatic and dichromatic retinal designs as a function of overall illuminant intensity. We are able to demonstrate a set of assumptions under which the prevalent biological pattern represents optimal processing. These assumptions include an image ensemble characterized by high correlations between image intensities at nearby locations, as well as high correlations between intensities in different wavelength bands. They also include a constraint on receptor photopigment biophysics and/or the information carried by different wavelengths that produces an asymmetry in the signal-to-noise ratio of the output of different receptor classes. Our results thus provide an optimality explanation for the evolution of color vision for daylight conditions and monochromatic vision for nighttime conditions. An additional result from our calculations is that regular spatial interleaving of two receptor classes in a dichromatic retina yields performance superior to that of a retina where receptors of the same class are clumped together.

  9. Analysis of Low-Light and Night-Time Stereo-Pair Images for Photogrammetric Reconstruction

    NASA Astrophysics Data System (ADS)

    Santise, M.; Thoeni, K.; Roncella, R.; Diotri, F.; Giacomini, A.

    2018-05-01

Rockfalls and rockslides represent a significant risk to human lives and infrastructure because of the high levels of energy involved. Generally, these events occur under specific environmental conditions, such as temperature variations between day and night, that can contribute to triggering structural instabilities in the rock wall and the detachment of blocks and debris. Monitoring and geostructural characterization of the wall are required to reduce the potential hazard and improve the management of risk at the bottom of the affected slopes. In this context, close-range photogrammetry is widely used for monitoring high-mountain terrains and rock walls in mine sites, allowing periodic surveys of rockfalls and wall movements. This work focuses on the analysis of low-light and night-time images from a fixed-base stereo-pair photogrammetry system. The aim is to study the reliability of images acquired overnight for producing digital surface models (DSMs) for change detection. The images are captured by a high-sensitivity DSLR camera using various settings accounting for different values of ISO, aperture, and exposure time. For each acquisition, the DSM is compared to a photogrammetric reference model produced from images captured under optimal illumination conditions. Results show that, at a high ISO level and with the same aperture, extending the exposure time improves the quality of the point clouds in terms of completeness and accuracy of the photogrammetric models.

  10. Vision-based aircraft guidance

    NASA Technical Reports Server (NTRS)

    Menon, P. K.

    1993-01-01

Early research on the development of machine vision algorithms to serve as pilot aids in aircraft flight operations is discussed. The research is useful for synthesizing new cockpit instrumentation that can enhance flight safety and efficiency. With the present work as the basis, future research will produce a low-cost instrument by integrating a conventional TV camera with off-the-shelf digitizing hardware for flight test verification. The initial focus of the research is on developing pilot aids for clear-night operations. The latter part of the research will examine synthetic vision issues for poor-visibility flight operations. Both research efforts will contribute toward the high-speed civil transport aircraft program. It is anticipated that the research reported here will also produce pilot aids for conducting helicopter flight operations during emergency search and rescue. The primary emphasis of the present research effort is on near-term, flight-demonstrable technologies. This report discusses pilot aids for night landing and takeoff and synthetic vision as an aid to low-visibility landing.

  11. Flight instruments and helmet-mounted SWIR imaging systems

    NASA Astrophysics Data System (ADS)

    Robinson, Tim; Green, John; Jacobson, Mickey; Grabski, Greg

    2011-06-01

Night vision technology has experienced significant advances in the last two decades. Night vision goggles (NVGs) based on gallium arsenide (GaAs) continue to raise the bar for alternative technologies. Resolution, gain, and sensitivity have all improved; the image quality through these devices is nothing less than incredible. Panoramic NVGs and enhanced NVGs are examples of recent advances that increase warfighter capabilities. Even with these advances, alternative night vision devices such as solid-state indium gallium arsenide (InGaAs) focal plane arrays are under development for helmet-mounted imaging systems. The InGaAs imaging system offers advantages over existing NVGs. Two key advantages are that (1) the new system produces digital image data, and (2) the new system is sensitive to energy in the shortwave infrared (SWIR) spectrum. While it is tempting to contrast the performance of these digital systems with existing NVGs, the advantage of different spectral detection bands leads to the conclusion that the technologies are less competitive and more synergistic. It is likely that, by the end of the decade, pilots within a cockpit will use multi-band devices. As such, flight decks will need to be compatible with both NVGs and SWIR imaging systems. Insertion of NVGs in aircraft during the late 1970s and early 1980s resulted in many "lessons learned" concerning instrument compatibility with NVGs. These "lessons learned" ultimately resulted in specifications such as MIL-L-85762A and MIL-STD-3009, which are now used throughout industry to produce NVG-compatible illuminated instruments and displays for both military and civilian applications. Inserting a SWIR imaging device in a cockpit will require similar consideration. A project evaluating flight deck instrument compatibility with SWIR devices is currently ongoing; aspects of this evaluation are described in this paper. This project is sponsored by the Air Force Research Laboratory (AFRL).

  12. From Image Analysis to Computer Vision: Motives, Methods, and Milestones.

    DTIC Science & Technology

    1998-07-01

    images. Initially, work on digital image analysis dealt with specific classes of images such as text, photomicrographs, nuclear particle tracks, and aerial...photographs; but by the 1960’s, general algorithms and paradigms for image analysis began to be formulated. When the artificial intelligence...scene, but eventually from image sequences obtained by a moving camera; at this stage, image analysis had become scene analysis or computer vision

  13. Ultraviolet vision may be widespread in bats

    USGS Publications Warehouse

    Gorresen, P. Marcos; Cryan, Paul; Dalton, David C.; Wolf, Sandy; Bonaccorso, Frank

    2015-01-01

    Insectivorous bats are well known for their abilities to find and pursue flying insect prey at close range using echolocation, but they also rely heavily on vision. For example, at night bats use vision to orient across landscapes, avoid large obstacles, and locate roosts. Although lacking sharp visual acuity, the eyes of bats evolved to function at very low levels of illumination. Recent evidence based on genetics, immunohistochemistry, and laboratory behavioral trials indicated that many bats can see ultraviolet light (UV), at least at illumination levels similar to or brighter than those before twilight. Despite this growing evidence for potentially widespread UV vision in bats, the prevalence of UV vision among bats remains unknown and has not been studied outside of the laboratory. We used a Y-maze to test whether wild-caught bats could see reflected UV light and whether such UV vision functions at the dim lighting conditions typically experienced by night-flying bats. Seven insectivorous species of bats, representing five genera and three families, showed a statistically significant ‘escape-toward-the-light’ behavior when placed in the Y-maze. Our results provide compelling evidence of widespread dim-light UV vision in bats.

  14. Vision function testing for a suprachoroidal retinal prosthesis: effects of image filtering

    NASA Astrophysics Data System (ADS)

Barnes, Nick; Scott, Adele F.; Lieby, Paulette; Petoe, Matthew A.; McCarthy, Chris; Stacey, Ashley; Ayton, Lauren N.; Sinclair, Nicholas C.; Shivdasani, Mohit N.; Lovell, Nigel H.; McDermott, Hugh J.; Walker, Janine G.; the BVA Consortium

    2016-06-01

Objective. One strategy to improve the effectiveness of prosthetic vision devices is to process incoming images to ensure that key information can be perceived by the user. This paper presents the first comprehensive results of vision function testing for a suprachoroidal retinal prosthetic device utilizing 20 stimulating electrodes. Further, we investigate whether image filtering can improve results on a light localization task for implanted participants compared to minimal vision processing. No controlled implanted-participant studies have yet investigated whether vision processing methods that are not task-specific can lead to improved results. Approach. Three participants with profound vision loss from retinitis pigmentosa were implanted with a suprachoroidal retinal prosthesis. All three completed multiple trials of a light localization test, and one participant completed multiple trials of acuity tests. The visual representations used were: Lanczos2 (a high-quality Nyquist bandlimited downsampling filter); minimal vision processing (MVP); wide-view regional averaging filtering (WV); scrambled; and system off. Main results. Using Lanczos2, all three participants successfully completed a light localization task and obtained a significantly higher percentage of correct responses than using MVP (p ≤ 0.025) or with the system off (p < 0.0001). Further, in a preliminary result using Lanczos2, one participant successfully completed grating acuity and Landolt C tasks, and showed significantly better performance (p = 0.004) compared to WV, scrambled, and system off on the grating acuity task. Significance. Participants successfully completed vision tasks using a 20-electrode suprachoroidal retinal prosthesis. Vision processing with a Nyquist bandlimited image filter has shown an advantage for a light localization task. This result suggests that this and more advanced, targeted vision processing schemes may become important components of retinal prostheses.
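
    Lanczos2 is named in the abstract as the best-performing representation; a minimal sketch of the kernel and a 1-D bandlimited downsampler built from it follows (the device's actual implementation is not described in the abstract):

    ```python
    import numpy as np

    def lanczos2(x):
        """Lanczos2 kernel: sinc(x) * sinc(x / 2) for |x| < 2, else 0."""
        x = np.asarray(x, dtype=np.float64)
        return np.where(np.abs(x) < 2, np.sinc(x) * np.sinc(x / 2), 0.0)

    def downsample_1d(signal, factor):
        """Bandlimited 1-D downsampling by an integer factor with a Lanczos2
        kernel; for an image, apply separably along rows then columns."""
        signal = np.asarray(signal, dtype=np.float64)
        out = np.empty(len(signal) // factor)
        for i in range(len(out)):
            center = (i + 0.5) * factor - 0.5          # source-space position
            taps = np.arange(int(np.floor(center - 2 * factor)) + 1,
                             int(np.floor(center + 2 * factor)) + 1)
            w = lanczos2((taps - center) / factor)     # kernel stretched by factor
            idx = np.clip(taps, 0, len(signal) - 1)    # replicate edges
            out[i] = np.dot(w, signal[idx]) / w.sum()
        return out
    ```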

  15. Cellular phone use while driving at night.

    PubMed

    Vivoda, Jonathon M; Eby, David W; St Louis, Renée M; Kostyniuk, Lidia P

    2008-03-01

Use of a cellular phone has been shown to negatively affect one's attention to the driving task, leading to an increase in crash risk. At any given daylight hour, about 6% of US drivers are actively talking on a hand-held cell phone. However, previous surveys have focused only on cell phone use during the day. Driving at night has been shown to be a riskier activity than driving during the day. The purpose of the current study was to assess the rate of hand-held cellular phone use while driving at night, using specialized night vision equipment. In 2006, two statewide direct observation survey waves of nighttime cellular phone use were conducted in Indiana utilizing specialized night vision equipment. Combined results of driver hand-held cellular phone use from both waves are presented in this manuscript. The rates of nighttime cell phone use were similar to results found in previous daytime studies. The overall rate of nighttime hand-held cellular phone use was 5.8 +/- 0.6%. Cellular phone use was highest for females and for younger drivers. In fact, the highest rate observed during the study (11.9%) was for 16- to 29-year-old females. The high level of cellular phone use found within the young age group, coupled with the increased crash risk associated with cellular phone use, nighttime driving, and young drivers in general, suggests that this issue may become an important transportation-related concern.

  16. Helicopter cockpit seat side and trapezius muscle metabolism with night vision goggles.

    PubMed

    Harrison, Michael F; Neary, J Patrick; Albert, Wayne J; Veillette, Dan W; McKenzie, Neil P; Croll, James C

    2007-10-01

Documented neck strain among military helicopter aircrew is becoming more frequent, and many militaries use helicopters that give pilots the option of sitting in the left or right cockpit seat during missions. The purpose of this study was to use near-infrared spectroscopy (NIRS) to investigate the physiological changes in trapezius muscle oxygenation and blood volume during night vision goggle (NVG) flights as a function of left and right cockpit seating. Twenty-five pilots were monitored during NVG flight simulator missions (97.7 +/- 16.1 min). Bilateral NIRS probes attached to the trapezius muscles at the C7 level recorded total oxygenation index (TOI, %), total hemoglobin (tHb), oxyhemoglobin (Hbo2), and deoxyhemoglobin (HHb). No significant differences existed between variables for pilots seated in the right cockpit seat as compared with pilots seated in the left cockpit seat in either trapezius muscle (pTOI = 0.72; ptHb = 0.72; pHbo2 = 0.57; pHHb = 0.21). Alternating cockpit seats on successive missions is not a means to decrease metabolic stress for helicopter pilots using NVGs. This suggests that cockpit layout, the location of essential instruments with respect to the horizontal, and the increased head-supported mass of the NVG may be important factors influencing metabolic stress of the trapezius muscle.

  17. Light, Imaging, Vision: An interdisciplinary undergraduate course

    NASA Astrophysics Data System (ADS)

    Nelson, Philip

Students in physical and life science, and in engineering, need to know about the physics and biology of light. In the 21st century, it has become increasingly clear that the quantum nature of light is essential both for the latest imaging modalities and for advancing our knowledge of fundamental processes such as photosynthesis and human vision. But many optics courses remain rooted in classical physics, with photons as an afterthought. I'll describe a new undergraduate course, for students in several science and engineering majors, that takes students from the rudiments of probability theory to modern methods like fluorescence imaging and Förster resonance energy transfer. After a digression into color vision, students then see how the Feynman principle explains the apparently wavelike phenomena associated with light, including applications like the diffraction limit, subdiffraction imaging, total internal reflection, and TIRF microscopy. Then we see how scientists documented the single-quantum sensitivity of the eye seven decades earlier than 'ought' to have been possible, and finally close with the remarkable signaling cascade that delivers such outstanding performance. A new textbook embodying this course will be published by Princeton University Press in Spring 2017. Partially supported by the United States National Science Foundation under Grant PHY-1601894.

  18. Diffractive-optical correlators: chances to make optical image preprocessing as intelligent as human vision

    NASA Astrophysics Data System (ADS)

    Lauinger, Norbert

    2004-10-01

The human eye is a good model for the engineering of optical correlators. Three prominent intelligent functionalities of human vision could, in the near future, be realized by a new diffractive-optical hardware design for optical imaging sensors: (1) illuminant-adaptive RGB-based color vision; (2) monocular 3D vision based on RGB data processing; and (3) patchwise Fourier-optical object classification and identification. The hardware design of the human eye includes specific diffractive-optical elements (DOEs) in aperture and image space and seems to execute these three jobs at, or not far behind, the loci of the images of objects.

  19. Early Detection of Breast Cancer by Using Handycam Camera Manipulation as Thermal Camera Imaging with Images Processing Method

    NASA Astrophysics Data System (ADS)

    Riantana, R.; Arie, B.; Adam, M.; Aditya, R.; Nuryani; Yahya, I.

    2017-02-01

One important indicator for detecting breast cancer is a change in breast temperature: symptoms of breast tissue abnormality are marked by a rise in the temperature of the breast. With a Handycam in night-vision mode, external infrared illumination can penetrate the skin, making the infrared image clearer. The program converts images from the camcorder into night-vision thermal-style images by decomposing the RGB data into a grayscale matrix. The matrix is then recast as a new matrix of double data type so that it can be processed into a contour color chart that differentiates the distribution of body temperature. The program also provides a contrast-scale setting for the processed image so that the colors can be adjusted as desired, along with an inverse contrast-scale feature that reverses the color scale so the colors can be flipped to their opposites. An improfile function retrieves the intensity values of pixels along a chosen line, showing the intensity distribution as a graph of intensity versus pixel coordinate.
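
    A minimal sketch of the processing chain the abstract describes, assuming OpenCV and an 8-bit frame; the colormap choice and line endpoints are illustrative:

    ```python
    import cv2
    import numpy as np

    def night_vision_to_contour(frame_bgr, line=((10, 50), (300, 50))):
        """Convert a camcorder frame to grayscale, render it with a false-color
        map (analogous to the paper's contour color chart of temperature), and
        sample intensities along a line (the role the improfile function plays
        in the paper)."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        colored = cv2.applyColorMap(gray, cv2.COLORMAP_JET)
        (x0, y0), (x1, y1) = line
        n = int(np.hypot(x1 - x0, y1 - y0)) + 1
        xs = np.linspace(x0, x1, n).round().astype(int)
        ys = np.linspace(y0, y1, n).round().astype(int)
        profile = gray[ys, xs]          # intensity vs. position along the line
        return colored, profile
    ```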

  20. Registration of heat capacity mapping mission day and night images

    NASA Technical Reports Server (NTRS)

    Watson, K.; Hummer-Miller, S.; Sawatzky, D. L.

    1982-01-01

    Registration of thermal images is complicated by distinctive differences in the appearance of day and night features needed as control in the registration process. These changes are unlike those that occur between Landsat scenes and pose unique constraints. Experimentation with several potentially promising techniques has led to selection of a fairly simple scheme for registration of data from the experimental thermal satellite HCMM using an affine transformation. Two registration examples are provided.
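
    The abstract names an affine transformation fitted to day/night control points; a minimal least-squares sketch, assuming N >= 3 manually picked control-point pairs:

    ```python
    import numpy as np

    def fit_affine(src_pts, dst_pts):
        """Least-squares affine transform taking day-image control points to
        the matching night-image control points. src_pts, dst_pts: N x 2."""
        n = len(src_pts)
        A = np.hstack([np.asarray(src_pts, float), np.ones((n, 1))])  # rows [x y 1]
        coeffs, *_ = np.linalg.lstsq(A, np.asarray(dst_pts, float), rcond=None)
        return coeffs                                                  # 3 x 2 matrix

    def apply_affine(coeffs, pts):
        """Map N x 2 points through the fitted transform."""
        pts = np.asarray(pts, float)
        return np.hstack([pts, np.ones((len(pts), 1))]) @ coeffs
    ```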

  1. Design of direct-vision cyclo-olefin-polymer double Amici prism for spectral imaging.

    PubMed

    Wang, Lei; Shao, Zhengzheng; Tang, Wusheng; Liu, Jiying; Nie, Qianwen; Jia, Hui; Dai, Suian; Zhu, Jubo; Li, Xiujian

    2017-10-20

A direct-vision Amici prism is a desirable dispersion element for spectrometers and spectral imaging systems. In this paper, we focus on designing a direct-vision cyclo-olefin-polymer double Amici prism for spectral imaging systems. We illustrate a designed structure, E48R/N-SF4/E48R, from which we obtain 13 deg of dispersion across the visible spectrum, equivalent to a 700 line-pairs/mm grating. We construct a simulated spectral imaging system with the designed direct-vision cyclo-olefin-polymer double Amici prism in optical design software and compare its imaging performance to that of a glass double Amici prism in the same system. The RMS spot-size results demonstrate that the plastic prism can serve as well as its glass competitors while offering better spectral resolution.

  2. Image model: new perspective for image processing and computer vision

    NASA Astrophysics Data System (ADS)

    Ziou, Djemel; Allili, Madjid

    2004-05-01

We propose a new image model in which the image support and image quantities are modeled using algebraic topology concepts. The image support is viewed as a collection of chains encoding combinations of pixels grouped by dimension, with boundary operators linking the different dimensions. Image quantities are encoded using the notion of a cochain, which associates with pixels of a given dimension values that can be scalar, vector, or tensor, depending on the problem considered. This allows algebraic equations to be obtained directly from the physical laws. The coboundary and codual operators, which are generic operations on cochains, allow the classical differential operators to be formulated as applied to field functions and differential forms, in both global and local forms. This image model makes the association between the image support and the image quantities explicit, which has several advantages: it allows the derivation of efficient algorithms that operate in any dimension and the unification of mathematics and physics to solve classical problems in image processing and computer vision. We show the effectiveness of this model by considering isotropic diffusion.

  3. The vision guidance and image processing of AGV

    NASA Astrophysics Data System (ADS)

    Feng, Tongqing; Jiao, Bin

    2017-08-01

Firstly, the principle of AGV vision guidance is introduced, and the deviation and deflection angle are measured in the image coordinate system. The visual guidance image processing platform is then introduced. Because the AGV guidance image contains considerable noise, it is first smoothed by a statistical sorting filter. Guidance images sampled by the AGV have different optimal segmentation thresholds, so two-dimensional maximum-entropy image segmentation is used to solve this problem. We extract the foreground in the target band by a contour-area method and obtain the centre line with a least-squares fitting algorithm. With the help of image and physical coordinates, we can obtain the guidance information.
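
    A minimal sketch of the last two steps, assuming a binary mask from some segmentation stage (the paper uses 2-D maximum entropy): per-row centroids of the guide path are fitted with a least-squares line whose slope and offset relate to the deflection angle and the deviation:

    ```python
    import numpy as np

    def guide_line(mask):
        """Least-squares centre line of the segmented guide path.
        mask: 2-D boolean/0-1 array (1 = guide path). For each row, the
        centroid of foreground pixels is taken; x = a*y + b is then fitted."""
        ys, xs = [], []
        for y in range(mask.shape[0]):
            cols = np.flatnonzero(mask[y])
            if cols.size:
                ys.append(y)
                xs.append(cols.mean())
        a, b = np.polyfit(ys, xs, 1)   # slope -> deflection, offset -> deviation
        return a, b
    ```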

  4. Day, night and all-weather security surveillance automation synergy from combining two powerful technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morellas, Vassilios; Johnson, Andrew; Johnston, Chris

    2006-07-01

Thermal imaging is rightfully a real-world technology proven to bring confidence to daytime, night-time, and all-weather security surveillance. Automatic image processing intrusion detection algorithms are likewise a real-world technology proven to bring confidence to system surveillance security solutions. Together, day/night, all-weather video imagery sensors and automated intrusion detection software create the power to protect early against crime, providing real-time global homeland protection, rather than simply being able to monitor and record activities for post-event analysis. These solutions, whether providing automatic security surveillance at airports (to automatically detect unauthorized aircraft takeoff and landing activities) or at high-risk private, public, or government facilities (to automatically detect unauthorized people or vehicle intrusion), are on the move to give end users the power to protect people, capital equipment, and intellectual property against acts of vandalism and terrorism. As with any technology, infrared sensors and automatic image intrusion detection systems for global homeland security protection have clear technological strengths and limitations compared to other, more common day and night vision technologies or more traditional manual man-in-the-loop intrusion detection security systems. This paper addresses these strengths and limitations. False Alarm Rate (FAR) and False Positive Rate (FPR) are examples of key customer system acceptability metrics, and Noise Equivalent Temperature Difference (NETD) and Minimum Resolvable Temperature are examples of sensor-level performance acceptability metrics.

  5. Identifying local structural states in atomic imaging by computer vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laanait, Nouamane; Ziatdinov, Maxim; He, Qian

The availability of atomically resolved imaging modalities enables an unprecedented view into the local structural states of materials, which manifest themselves by deviations from the fundamental assumptions of periodicity and symmetry. Consequently, approaches that aim to extract these local structural states from atomic imaging data with minimal assumptions regarding the average crystallographic configuration of a material are indispensable to advances in structural and chemical investigations of materials. Here, we present an approach to identify and classify local structural states that is rooted in computer vision. This approach introduces a definition of a structural state that is composed of both local and non-local information extracted from atomically resolved images, and is wholly untethered from the familiar concepts of symmetry and periodicity. Instead, this approach relies on computer vision techniques such as feature detection, and concepts such as scale-invariance. We present the fundamental aspects of local structural state extraction and classification by application to simulated scanning transmission electron microscopy images, and analyze the robustness of this approach in the presence of common instrumental factors such as noise, limited spatial resolution, and weak contrast. Finally, we apply this computer vision-based approach for the unsupervised detection and classification of local structural states in an experimental electron micrograph of a complex oxides interface, and a scanning tunneling micrograph of a defect engineered multilayer graphene surface.

  6. Identifying local structural states in atomic imaging by computer vision

    DOE PAGES

    Laanait, Nouamane; Ziatdinov, Maxim; He, Qian; ...

    2016-11-02

The availability of atomically resolved imaging modalities enables an unprecedented view into the local structural states of materials, which manifest themselves by deviations from the fundamental assumptions of periodicity and symmetry. Consequently, approaches that aim to extract these local structural states from atomic imaging data with minimal assumptions regarding the average crystallographic configuration of a material are indispensable to advances in structural and chemical investigations of materials. Here, we present an approach to identify and classify local structural states that is rooted in computer vision. This approach introduces a definition of a structural state that is composed of both local and non-local information extracted from atomically resolved images, and is wholly untethered from the familiar concepts of symmetry and periodicity. Instead, this approach relies on computer vision techniques such as feature detection, and concepts such as scale-invariance. We present the fundamental aspects of local structural state extraction and classification by application to simulated scanning transmission electron microscopy images, and analyze the robustness of this approach in the presence of common instrumental factors such as noise, limited spatial resolution, and weak contrast. Finally, we apply this computer vision-based approach for the unsupervised detection and classification of local structural states in an experimental electron micrograph of a complex oxides interface, and a scanning tunneling micrograph of a defect engineered multilayer graphene surface.
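
    A minimal sketch of the kind of computer-vision building block the paper starts from, scale-invariant feature detection on a micrograph; SIFT (OpenCV >= 4.4) is a generic choice here, not necessarily the authors' detector, and the file path is hypothetical:

    ```python
    import cv2

    def detect_keypoints(micrograph_path):
        """Scale-invariant keypoint detection on an atomically resolved image;
        the descriptors would feed a downstream classification stage."""
        img = cv2.imread(micrograph_path, cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        keypoints, descriptors = sift.detectAndCompute(img, None)
        return keypoints, descriptors
    ```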

  7. Effects of age and illumination on night driving: a road test.

    PubMed

    Owens, D Alfred; Wood, Joanne M; Owens, Justin M

    2007-12-01

    This study investigated the effects of drivers' age and low light on speed, lane keeping, and visual recognition of typical roadway stimuli. Poor visibility, which is exacerbated by age-related changes in vision, is a leading contributor to fatal nighttime crashes. There is little evidence, however, concerning the extent to which drivers recognize and compensate for their visual limitations at night. Young, middle-aged, and elder participants drove on a closed road course in day and night conditions at a "comfortable" speed without speedometer information. During night tests, headlight intensity was varied over a range of 1.5 log units using neutral density filters. Average speed and recognition of road signs decreased significantly as functions of increased age and reduced illumination. Recognition of pedestrians at night was significantly enhanced by retroreflective markings of limb joints as compared with markings of the torso, and this benefit was greater for middle-aged and elder drivers. Lane keeping showed nonlinear effects of lighting, which interacted with task conditions and drivers' lateral bias, indicating that older drivers drove more cautiously in low light. Consistent with the hypothesis that drivers misjudge their visual abilities at night, participants of all age groups failed to compensate fully for diminished visual recognition abilities in low light, although older drivers behaved more cautiously than the younger groups. These findings highlight the importance of educating all road users about the limitations of night vision and provide new evidence that retroreflective markings of the limbs can be of great benefit to pedestrians' safety at night.

  8. Overview of machine vision methods in x-ray imaging and microtomography

    NASA Astrophysics Data System (ADS)

    Buzmakov, Alexey; Zolotov, Denis; Chukalina, Marina; Nikolaev, Dmitry; Gladkov, Andrey; Ingacheva, Anastasia; Yakimchuk, Ivan; Asadchikov, Victor

    2018-04-01

Digital X-ray imaging has become widely used in science, medicine, and non-destructive testing, which allows modern digital image analysis to be used for automatic information extraction and interpretation. We give a short review of machine vision applications in scientific X-ray imaging and microtomography, including image processing, feature detection and extraction, image compression to increase camera throughput, microtomography reconstruction, visualization, and setup adjustment.

  9. A Summary of Proceedings for the Advanced Deployable Day/Night Simulation Symposium

    DTIC Science & Technology

    2009-07-01

initiated to design, develop, and deliver transportable visual simulations that jointly provide night-vision and high-resolution daylight capability. The...Deployable Day/Night Simulation (ADDNS) Technology Demonstration Project was initiated to design, develop, and deliver transportable visual...was Dr. Richard Wildes (York University); Mr. Vitaly Zholudev (Department of Computer Science, York University), Mr. X. Zhu (Neptec Design Group), and

  10. Blueberry effects on dark vision and recovery after photobleaching: placebo-controlled crossover studies.

    PubMed

    Kalt, Wilhelmina; McDonald, Jane E; Fillmore, Sherry A E; Tremblay, Francois

    2014-11-19

    Clinical evidence for anthocyanin benefits in night vision is controversial. This paper presents two human trials investigating blueberry anthocyanin effects on dark adaptation, functional night vision, and vision recovery after retinal photobleaching. One trial, S2 (n = 72), employed a 3 week intervention and a 3 week washout, two anthocyanin doses (271 and 7.11 mg cyanidin 3-glucoside equivalents (C3g eq)), and placebo. The other trial, L1 (n = 59), employed a 12 week intervention and an 8 week washout and tested one dose (346 mg C3g eq) and placebo. In both S2 and L1 neither dark adaptation nor night vision was improved by anthocyanin intake. However, in both trials anthocyanin consumption hastened the recovery of visual acuity after photobleaching. In S2 both anthocyanin doses were effective (P = 0.014), and in L1 recovery was improved at 8 weeks (P = 0.027) and 12 weeks (P = 0.030). Although photobleaching recovery was hastened by anthocyanins, it is not known whether this improvement would have an impact on everyday vision.

  11. A large-scale solar dynamics observatory image dataset for computer vision applications.

    PubMed

    Kucuk, Ahmet; Banda, Juan M; Angryk, Rafal A

    2017-01-01

The National Aeronautics and Space Administration (NASA) Solar Dynamics Observatory (SDO) mission has given us unprecedented insight into the Sun's activity. By capturing approximately 70,000 images a day, this mission has created one of the richest and biggest repositories of solar image data available to mankind. With such massive amounts of information, researchers have been able to produce great advances in detecting solar events. In this resource, we compile SDO solar data into a single repository in order to provide the computer vision community with a standardized and curated large-scale dataset of several hundred thousand solar events found in high-resolution solar images. This publicly available resource, along with the generation source code, will accelerate computer vision research on NASA's solar image data by reducing the amount of time spent performing data acquisition and curation from the multiple sources we have compiled. By improving the quality of the data through thorough curation, we anticipate wider adoption and interest from the computer vision and solar physics communities.

  12. Jupiter Night and Day

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Day- and night-side narrow-angle images taken on January 1, 2001, illustrating storms visible on the day side which are the sources of visible lightning when viewed on the night side. The images have been enhanced in contrast. Note that the two day-side occurrences of high clouds, in the upper and lower parts of the image, are coincident with lightning storms seen on the dark side. The storms occur at 34.5 degrees and 23.5 degrees North latitude, within one degree of the latitudes at which similar lightning features were detected by the Galileo spacecraft. The images were taken at different times. The storms' longitudinal separation changes from one image to the next because the winds carrying them blow at different speeds at the two latitudes.

  13. Jupiter Night and Day

    NASA Image and Video Library

    2001-01-23

    Day- and night-side narrow-angle images taken on January 1, 2001, illustrating storms visible on the day side which are the sources of visible lightning when viewed on the night side. The images have been enhanced in contrast. Note that the two day-side occurrences of high clouds, in the upper and lower parts of the image, are coincident with lightning storms seen on the dark side. The storms occur at 34.5 degrees and 23.5 degrees North latitude, within one degree of the latitudes at which similar lightning features were detected by the Galileo spacecraft. The images were taken at different times. The storms' longitudinal separation changes from one image to the next because the winds carrying them blow at different speeds at the two latitudes. http://photojournal.jpl.nasa.gov/catalog/PIA02878

  14. Day/night whole sky imagers for 24-h cloud and sky assessment: history and overview.

    PubMed

    Shields, Janet E; Karr, Monette E; Johnson, Richard W; Burden, Art R

    2013-03-10

    A family of fully automated digital whole sky imagers (WSIs) has been developed at the Marine Physical Laboratory over many years, for a variety of research and military applications. The most advanced of these, the day/night whole sky imagers (D/N WSIs), acquire digital imagery of the full sky down to the horizon under all conditions from full sunlight to starlight. Cloud algorithms process the imagery to automatically detect the locations of cloud for both day and night. The instruments can provide absolute radiance distribution over the full radiance range from starlight through daylight. The WSIs were fielded in 1984, followed by the D/N WSIs in 1992. These many years of experience and development have resulted in very capable instruments and algorithms that remain unique. This article discusses the history of the development of the D/N WSIs, system design, algorithms, and data products. The paper cites many reports with more detailed technical documentation. Further details of calibration, day and night algorithms, and cloud free line-of-sight results will be discussed in future articles.
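
    A classic day-sky cloud-detection idea used in this family of instruments is the red/blue radiance ratio, which is higher for cloud than for clear sky. Below is a minimal sketch of that idea; the threshold value and function name are illustrative only, and the actual D/N WSI algorithms are far more elaborate.

```python
# A hedged sketch of red/blue-ratio cloud detection for a whole-sky image.
# The 0.6 threshold and the simple per-pixel formulation are assumptions.
import numpy as np

def cloud_mask(rgb, ratio_threshold=0.6):
    """rgb: HxWx3 sky image. Returns a boolean mask, True where cloudy."""
    red = rgb[..., 0].astype(np.float32)
    blue = np.maximum(rgb[..., 2].astype(np.float32), 1e-6)  # avoid /0
    return (red / blue) > ratio_threshold
```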

  15. Infrared imagery acquisition process supporting simulation and real image training

    NASA Astrophysics Data System (ADS)

    O'Connor, John

    2012-05-01

    The increasing use of infrared sensors requires the development of advanced infrared training and simulation tools to meet current Warfighter needs. To prepare the force effectively and avoid negative training, training and simulation images must be both realistic and consistent with each other. The US Army Night Vision and Electronic Sensors Directorate has addressed this need by developing and implementing infrared image collection methods that serve both real-image trainers and real-time simulations. The author presents innovative methods for the collection of high-fidelity digital infrared images and the associated equipment and environmental standards. The collected images are the foundation for US Army and USMC Recognition of Combat Vehicles (ROC-V) real-image combat ID training and also support simulations including the Night Vision Image Generator and Synthetic Environment Core. The characteristics, consistency, and quality of these images have contributed to the success of these and other programs. To date, this method has been employed to generate signature sets for over 350 vehicles. The needs of future physics-based simulations will also be met by this data. NVESD's ROC-V image database will continue to support the development of training and simulation capabilities as Warfighter needs evolve.

  16. Image registration algorithm for high-voltage electric power live line working robot based on binocular vision

    NASA Astrophysics Data System (ADS)

    Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping

    2017-12-01

    In the process of dismounting and assembling the drop switch with the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the positioning precision of the manipulators, the gripper, and the bolts used to fix the drop switch. To solve it, we study the robot's binocular vision system and the characteristics of dismounting and assembling the drop switch, and we propose a coarse-to-fine image registration algorithm based on image correlation, which significantly improves the positioning precision of the manipulators and bolts. The algorithm performs three steps. First, the target points are marked in the left and right views, and the system judges whether the target point in the right view satisfies the minimum registration accuracy by using the similarity of the target points' backgrounds in the two views; this is a typical coarse-to-fine strategy. Second, the system calculates the epipolar line, and a sequence of candidate regions containing matching points is generated from the neighborhood of the epipolar line; the optimal matching region is confirmed by calculating the correlation between the template image in the left view and each region in the sequence. Finally, the precise coordinates of the target points in both views are calculated from the optimal matching region. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels and the positioning accuracy in the world coordinate system is within 3 mm, which satisfies the requirements for dismounting and assembling the drop switch.
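
    As a rough illustration of the correlation-matching step, here is a minimal sketch of epipolar-constrained template matching with normalized cross-correlation, assuming rectified stereo (so the epipolar line is an image row) and using OpenCV; the function name, patch size, and band width are illustrative, not the paper's.

```python
# Epipolar-constrained NCC template matching (illustrative sketch).
import cv2
import numpy as np

def match_along_epipolar(left_img, right_img, pt_left, patch=15, band=3):
    """Find the match for pt_left (x, y) in right_img, searching a narrow
    horizontal band around the corresponding epipolar line (rectified case)."""
    half = patch // 2
    x, y = pt_left
    template = left_img[y - half:y + half + 1, x - half:x + half + 1]
    # Restrict the search to a band of rows around the epipolar line.
    y0 = max(y - band - half, 0)
    y1 = min(y + band + half + 1, right_img.shape[0])
    strip = right_img[y0:y1, :]
    # Normalized cross-correlation over the strip.
    scores = cv2.matchTemplate(strip, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_loc = cv2.minMaxLoc(scores)
    bx, by = best_loc[0] + half, best_loc[1] + y0 + half  # image coordinates
    return (bx, by), best_score

# Usage idea: accept the match only if best_score clears a threshold,
# mirroring the paper's minimum-registration-accuracy check.
```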

  17. Restoration of vision after transplantation of photoreceptors.

    PubMed

    Pearson, R A; Barber, A C; Rizzi, M; Hippert, C; Xue, T; West, E L; Duran, Y; Smith, A J; Chuang, J Z; Azam, S A; Luhmann, U F O; Benucci, A; Sung, C H; Bainbridge, J W; Carandini, M; Yau, K-W; Sowden, J C; Ali, R R

    2012-05-03

    Cell transplantation is a potential strategy for treating blindness caused by the loss of photoreceptors. Although transplanted rod-precursor cells are able to migrate into the adult retina and differentiate to acquire the specialized morphological features of mature photoreceptor cells, the fundamental question remains whether transplantation of photoreceptor cells can actually improve vision. Here we provide evidence of functional rod-mediated vision after photoreceptor transplantation in adult Gnat1−/− mice, which lack rod function and are a model of congenital stationary night blindness. We show that transplanted rod precursors form classic triad synaptic connections with second-order bipolar and horizontal cells in the recipient retina. The newly integrated photoreceptor cells are light-responsive with dim-flash kinetics similar to adult wild-type photoreceptors. By using intrinsic imaging under scotopic conditions we demonstrate that visual signals generated by transplanted rods are projected to higher visual areas, including V1. Moreover, these cells are capable of driving optokinetic head tracking and visually guided behaviour in the Gnat1−/− mouse under scotopic conditions. Together, these results demonstrate the feasibility of photoreceptor transplantation as a therapeutic strategy for restoring vision after retinal degeneration.

  18. A note on image degradation, disability glare, and binocular vision

    NASA Astrophysics Data System (ADS)

    Rajaram, Vandana; Lakshminarayanan, Vasudevan

    2013-08-01

    Disability glare due to scattering of light reduces visual performance by casting a luminous veil over the scene, impairing tasks such as contrast detection. In this note, we report a study of the effect of this veiling luminance on human stereoscopic vision. We measured the effect of glare on the horopter using the apparent fronto-parallel plane (AFPP) criterion. The empirical longitudinal horopter measured with the AFPP criterion was analyzed using the so-called analytic plot, whose parameters were used for quantitative measurement of binocular vision. Image degradation has a major effect on binocular vision as measured by the horopter. Under the conditions tested, it appears that if vision is sufficiently degraded, the addition of disability glare does not significantly compromise depth perception any further, as measured by the horopter.

  19. A Most Rare Vision: Improvisations on "A Midsummer Night's Dream."

    ERIC Educational Resources Information Center

    Hakaim, Charles J., Jr.

    1993-01-01

    Describes one teacher's methods for introducing to secondary English students the concepts of improvisation, experimentation, and innovation. Discusses numerous techniques for fostering such skills when working with William Shakespeare's "A Midsummer Night's Dream." (HB)

  20. Measuring noise equivalent irradiance of a digital short-wave infrared imaging system using a broadband source to simulate the night spectrum

    NASA Astrophysics Data System (ADS)

    Green, John R.; Robinson, Timothy

    2015-05-01

    There is growing interest in developing helmet-mounted digital imaging systems (HMDIS) for integration into military aircraft cockpits. This interest stems from the multiple advantages of digital over analog imaging, such as image fusion from multiple sensors, data processing to enhance image contrast, superposition of non-imaging data over the image, and sending images to a remote location for analysis. An HMDIS must have several properties in order to aid the pilot during night operations. In addition to resolution, image refresh rate, dynamic range, and sensor uniformity over the entire Focal Plane Array (FPA), the imaging system must have the sensitivity to detect the limited night light available, filtered through cockpit transparencies. Digital sensor sensitivity is generally measured monochromatically using a laser with a wavelength near the peak detector quantum efficiency, and is generally reported as either the Noise Equivalent Power (NEP) or Noise Equivalent Irradiance (NEI). This paper proposes a test system that measures the NEI of Short-Wave Infrared (SWIR) digital imaging systems using a broadband source that simulates the night spectrum. This method has a few advantages over a monochromatic method: the test conditions provide a spectrum closer to what is experienced by the end user, and the resulting NEI may be compared directly to modeled night-glow irradiance calculations. This comparison may be used to assess the Technology Readiness Level of the imaging system for the application. The test system is being developed under a Cooperative Research and Development Agreement (CRADA) with the Air Force Research Laboratory.
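
    For context, a common way to reduce such measurements to an NEI figure is to divide the temporal noise floor by the responsivity obtained from a linear fit of signal versus irradiance. The sketch below assumes a linear sensor response and illustrative variable names; it does not reproduce the paper's broadband test procedure.

```python
# Minimal NEI estimate: the irradiance at which SNR = 1.
import numpy as np

def estimate_nei(irradiances, mean_signals, dark_frames):
    """irradiances: known input irradiance levels (e.g. photons/cm^2/s).
    mean_signals: mean sensor output (DN) at each irradiance level.
    dark_frames: stack of frames (N, H, W) with the source off."""
    # Responsivity from a linear fit of signal vs irradiance (DN per unit).
    responsivity, _offset = np.polyfit(irradiances, mean_signals, 1)
    # Temporal noise: rms of each pixel over time, averaged over the array.
    noise_dn = np.std(dark_frames, axis=0).mean()
    return noise_dn / responsivity  # irradiance producing SNR = 1
```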

  1. An object tracking method based on guided filter for night fusion image

    NASA Astrophysics Data System (ADS)

    Qian, Xiaoyan; Wang, Yuedong; Han, Lei

    2016-01-01

    Online object tracking is a challenging problem, as it entails learning an effective model to account for appearance changes caused by intrinsic and extrinsic factors. In this paper, we propose a novel online object tracker using a guided image filter for accurate and robust tracking in night fusion images. First, frame differencing is applied to produce a coarse target, which helps to generate the observation models. Under the restriction of these models and the local source image, the guided filter generates a sufficient and accurate foreground target, from which accurate target boundaries can then be extracted. Finally, timely updating of the observation models helps to avoid tracking drift. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods.
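
    For readers unfamiliar with the guided image filter (He et al.), a compact gray-scale sketch follows; the window radius r and regularization eps are its usual free parameters, and this formulation is an illustration rather than the tracker's exact implementation.

```python
# Gray-scale guided image filter (He, Sun, Tang), built from box filters.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, r=8, eps=1e-3):
    """guide, src: 2D float arrays in [0, 1]; returns the filtered src."""
    win = 2 * r + 1
    mean = lambda a: uniform_filter(a, size=win)  # box filter of radius r
    mean_I, mean_p = mean(guide), mean(src)
    corr_Ip = mean(guide * src)
    var_I = mean(guide * guide) - mean_I ** 2
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)        # per-window linear coefficients
    b = mean_p - a * mean_I
    return mean(a) * guide + mean(b)  # average coefficients, then apply
```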

  2. Development of VIPER: a simulator for assessing vision performance of warfighters

    NASA Astrophysics Data System (ADS)

    Familoni, Jide; Thompson, Roger; Moyer, Steve; Mueller, Gregory; Williams, Tim; Nguyen, Hung-Quang; Espinola, Richard L.; Sia, Rose K.; Ryan, Denise S.; Rivers, Bruce A.

    2016-05-01

    Background: When evaluating vision, it is important to assess not just the ability to read letters on a vision chart, but also how well one sees in real-life scenarios. As part of the Warfighter Refractive Eye Surgery Program (WRESP), visual outcomes are assessed before and after refractive surgery. A Warfighter's ability to read signs and to detect and identify objects is crucial, not only when deployed in a military setting, but also in civilian life. Objective: VIPER, a VIsion PERformance simulator, was envisioned as actual video-based simulated driving to test warfighters' functional vision under realistic conditions. Designed to use interactive video image controlled environments at daytime, dusk, night, and with thermal imaging, it simulates the experience of viewing and identifying road signs and other objects while driving. We hypothesize that VIPER will facilitate efficient and quantifiable assessment of changes in vision and measurement of functional military performance. Study Design: Video images were recorded on an isolated 1.1-mile stretch of road with separate target sets of six simulated road signs and six objects of military interest. The video footage was integrated with custom-designed C++ based software that presented the simulated drive to an observer on a computer monitor at 10, 20, or 30 miles/hour. VIPER permits the observer to indicate when a target is seen and when it is identified; the distances at which the observer recognizes and identifies targets are automatically logged, as are errors in recognition and identification. This first report describes VIPER's development and a preliminary study to establish a baseline for its performance. In the study, nine soldiers viewed simulations at 10 miles/hour and 30 miles/hour, run in randomized order for each participant seated 36 inches from the monitor. Relevance: Ultimately, patients are interested in how their vision will affect their ability to perform daily

  3. Autonomous vision networking: miniature wireless sensor networks with imaging technology

    NASA Astrophysics Data System (ADS)

    Messinger, Gioia; Goldberg, Giora

    2006-09-01

    The recent emergence of integrated PicoRadio technology and the rise of low-power, low-cost, System-On-Chip (SOC) CMOS imagers, coupled with the fast evolution of networking protocols and digital signal processing (DSP), created a unique opportunity to achieve the goal of deploying large-scale, low-cost, intelligent, ultra-low-power distributed wireless sensor networks for visualization of the environment. Of all sensors, vision is the most desired, but its applications in distributed sensor networks have been elusive so far. Not any more. The practicality and viability of ultra-low-power vision networking have been proven, and its applications are countless: from security and chemical analysis to industrial monitoring, asset tracking, and visual recognition, vision networking represents a truly disruptive technology applicable to many industries. The presentation discusses some of the critical components and technologies necessary to make these networks and products affordable and ubiquitous - specifically PicoRadios, CMOS imagers, imaging DSP, networking, and overall wireless sensor network (WSN) system concepts. The paradigm shift from large, centralized, and expensive sensor platforms to small, low-cost, distributed sensor networks is possible due to the emergence and convergence of a few innovative technologies. Avaak has developed a vision network that is aided by other sensors such as motion, acoustic, and magnetic sensors, and plans to deploy it for use in military and commercial applications. In comparison to other sensors, imagers produce large data files that require pre-processing and a certain level of compression before they are transmitted to a network server, in order to minimize the load on the network. Some of the most innovative chemical detectors currently in development are based on sensors that change color or pattern in the presence of the desired analytes; these changes are easily recorded and analyzed by a CMOS imager and an on-board DSP processor.

  4. Pleiades Visions

    NASA Astrophysics Data System (ADS)

    Whitehouse, M.

    2016-01-01

    Pleiades Visions (2012) is my new musical composition for organ that takes inspiration from traditional lore and music associated with the Pleiades (Seven Sisters) star cluster from Australian Aboriginal, Native American, and Native Hawaiian cultures. It is based on my doctoral dissertation research incorporating techniques from the fields of ethnomusicology and cultural astronomy; this research likely represents a new area of inquiry for both fields. This large-scale work employs the organ's vast sonic resources to evoke the majesty of the night sky and the expansive landscapes of the homelands of the above-mentioned peoples. Other important themes in Pleiades Visions are those of place, origins, cosmology, and the creation of the world.

  5. Night image of New York City as seen from STS-59 Endeavour

    NASA Image and Video Library

    1994-04-20

    STS059-50-003 (9-20 April 1994) --- This 35mm night image of the New York City metropolitan area was captured by the STS-59 crew during the Space Radar Laboratory (SRL) mission. Scientists studying film from the Space Shuttle Endeavour feel this is the best nocturnal view of this region from the manned space program.

  6. Computer vision research with new imaging technology

    NASA Astrophysics Data System (ADS)

    Hou, Guangqi; Liu, Fei; Sun, Zhenan

    2015-12-01

    Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and the directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods perform well only on strongly textured surfaces; their depth maps contain numerous holes and large ambiguities in textureless or low-textured regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. The depth map is then estimated through the epipolar plane image (EPI) method. Finally, the high-quality 3D face model is recovered via the fusing strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera with different poses.
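
    A minimal sketch of the EPI idea follows: in an epipolar plane image, a scene point traces a line whose slope is proportional to its disparity, and the local slope can be estimated with a structure tensor. The gray-scale formulation and smoothing scale below are assumptions, not the paper's exact method.

```python
# Illustrative depth-from-EPI via a structure tensor.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def epi_disparity(epi, sigma=2.0):
    """epi: 2D array (views x pixels). Returns per-pixel slope estimates,
    which are proportional to disparity between adjacent views."""
    Ix = sobel(epi, axis=1)  # derivative along the spatial axis
    Is = sobel(epi, axis=0)  # derivative along the view axis
    # Smoothed structure tensor entries.
    Jxx = gaussian_filter(Ix * Ix, sigma)
    Jss = gaussian_filter(Is * Is, sigma)
    Jxs = gaussian_filter(Ix * Is, sigma)
    # Orientation of the dominant local gradient gives the EPI line slope.
    angle = 0.5 * np.arctan2(2 * Jxs, Jxx - Jss)
    return np.tan(angle)
```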

  7. Facial identification in very low-resolution images simulating prosthetic vision.

    PubMed

    Chang, M H; Kim, H S; Shin, J H; Park, K S

    2012-08-01

    Familiar facial identification is important to blind or visually impaired patients and can be achieved using a retinal prosthesis. Nevertheless, there are limitations in delivering facial images with a resolution sufficient to distinguish facial features, such as the eyes and nose, through the multichannel electrode arrays used in current visual prostheses. This study verifies the feasibility of familiar facial identification under low-resolution prosthetic vision and proposes an edge-enhancement method to deliver more visual information of higher quality. We first generated a contrast-enhanced image and an edge image by applying the Sobel edge detector, and blocked each of them by averaging. Then, we subtracted the blocked edge image from the blocked contrast-enhanced image and produced a pixelized image imitating an array of phosphenes. Before subtraction, the gray values of the edge image were weighted by 50% (mode 2), 75% (mode 3), or 100% (mode 4); in mode 1, the facial image was blocked and pixelized with no further processing. The most successful identification was achieved with mode 3 at every resolution in terms of the identification index, which covers both accuracy and correct response time. We also found that subjects recognized a distinctive face more accurately and faster than the other given facial images, even under low-resolution prosthetic vision. Every subject could identify familiar faces even in very low-resolution images, and the proposed edge-enhancement method appears to contribute to intermediate-stage visual prostheses.
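
    The described pipeline is straightforward to sketch. Below is a minimal illustration of the mode-2/3/4 processing (contrast enhancement, Sobel edge image, block averaging, weighted subtraction, phosphene-style pixelization); the grid size and the use of histogram equalization for the contrast step are assumptions.

```python
# Phosphene-style pixelization with weighted edge subtraction (sketch).
import numpy as np
import cv2

def phosphene_image(gray, grid=(32, 32), edge_weight=0.75):
    """gray: uint8 face image; edge_weight 0.5 / 0.75 / 1.0 ~ modes 2/3/4."""
    contrast = cv2.equalizeHist(gray).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    edges = cv2.magnitude(gx, gy)
    # Block-average both images down to the electrode-array resolution.
    block_c = cv2.resize(contrast, grid, interpolation=cv2.INTER_AREA)
    block_e = cv2.resize(edges, grid, interpolation=cv2.INTER_AREA)
    out = np.clip(block_c - edge_weight * block_e, 0, 255)
    # Upsample with nearest-neighbor to mimic discrete phosphenes.
    return cv2.resize(out, gray.shape[::-1], interpolation=cv2.INTER_NEAREST)
```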

  8. Melas Chasma, Day and Night.

    NASA Image and Video Library

    2002-12-07

    This image is a mosaic of day and night infrared images of Melas Chasma taken by NASA Mars Odyssey spacecraft. The daytime temperature images are shown in black and white, superimposed on the Martian topography.

  9. Relationship between fatigue of generation II image intensifier and input illumination

    NASA Astrophysics Data System (ADS)

    Chen, Qingyou

    1995-09-01

    If an image intensifier exhibits fatigue, the fatigue affects the imaging properties of the night vision system. In this paper, using the principle of Joule heating, we derive a mathematical formula for the heat generated in a semiconductor photocathode and describe the relationships among the various parameters in the formula. We also discuss how higher input illumination causes fatigue in Generation II image intensifiers.

  10. High Tech Aids Low Vision: A Review of Image Processing for the Visually Impaired.

    PubMed

    Moshtael, Howard; Aslam, Tariq; Underwood, Ian; Dhillon, Baljean

    2015-08-01

    Recent advances in digital image processing provide promising methods for maximizing the residual vision of the visually impaired. This paper seeks to introduce this field to the readership and describe its current state as found in the literature. A systematic search revealed 37 studies that measure the value of image processing techniques for subjects with low vision. The techniques used are categorized according to their effect and the principal findings are summarized. The majority of participants preferred enhanced images over the original for a wide range of enhancement types. Adapting the contrast and spatial frequency content often improved performance at object recognition and reading speed, as did techniques that attenuate the image background and a technique that induced jitter. A lack of consistency in preference and performance measures was found, as well as a lack of independent studies. Nevertheless, the promising results should encourage further research in order to allow their widespread use in low-vision aids.

  11. Retinal Image Quality Assessment for Spaceflight-Induced Vision Impairment Study

    NASA Technical Reports Server (NTRS)

    Vu, Amanda Cadao; Raghunandan, Sneha; Vyas, Ruchi; Radhakrishnan, Krishnan; Taibbi, Giovanni; Vizzeri, Gianmarco; Grant, Maria; Chalam, Kakarla; Parsons-Wingerter, Patricia

    2015-01-01

    Long-term exposure to space microgravity poses significant risks for visual impairment. Evidence suggests such vision changes are linked to cephalad fluid shifts, prompting a need to directly quantify microgravity-induced retinal vascular changes. The quality of retinal images used for such vascular remodeling analysis, however, is dependent on imaging methodology. For our exploratory study, we hypothesized that retinal images captured using fluorescein imaging methodologies would be of higher quality in comparison to images captured without fluorescein. A semi-automated image quality assessment was developed using Vessel Generation Analysis (VESGEN) software and MATLAB® image analysis toolboxes. An analysis of ten images found that the fluorescein imaging modality provided a 36% increase in overall image quality (two-tailed p=0.089) in comparison to nonfluorescein imaging techniques.

  12. Image processing for a tactile/vision substitution system using digital CNN.

    PubMed

    Lin, Chien-Nan; Yu, Sung-Nien; Hu, Jin-Cheng

    2006-01-01

    In view of the parallel processing and easy implementation properties of cellular neural networks (CNNs), we propose to use a digital CNN as the image processor of a tactile/vision substitution system (TVSS). The digital CNN processor executes the wavelet down-sampling filtering and half-toning operations, aiming to extract important features from the images. A template combination method is used to embed the two image-processing functions into a single CNN processor. The processor is implemented as an intellectual property (IP) core on a XILINX VIRTEX II 2000 FPGA board. Experiments were designed to test the capability of the CNN processor in the recognition of characters and human subjects in different environments. The experiments demonstrate impressive results, showing the proposed digital CNN processor to be a powerful component in the design of efficient tactile/vision substitution systems for visually impaired people.
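
    The two embedded operations have conventional (non-CNN) equivalents, sketched below for clarity: a one-level Haar approximation for the wavelet down-sampling and Floyd-Steinberg error diffusion for the half-toning. This illustrates the functions only; the paper implements them with CNN templates.

```python
# Conventional equivalents of the two CNN operations (illustrative).
import numpy as np

def haar_downsample(img):
    """One-level Haar approximation: average non-overlapping 2x2 blocks."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    a = img[:h, :w].astype(np.float32)
    return (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4

def floyd_steinberg(img):
    """Classic error-diffusion half-toning to a binary image."""
    a = img.astype(np.float32).copy()
    h, w = a.shape
    for y in range(h):
        for x in range(w):
            old = a[y, x]
            new = 255.0 if old >= 128 else 0.0
            a[y, x] = new
            err = old - new  # diffuse the quantization error forward
            if x + 1 < w:
                a[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                a[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                a[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                a[y + 1, x + 1] += err * 1 / 16
    return (a >= 128).astype(np.uint8)
```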

  13. Definition of display/control requirements for assault transport night/adverse weather capability

    NASA Technical Reports Server (NTRS)

    Milelli, R. J.; Mowery, G. W.; Pontelandolfo, C.

    1982-01-01

    A Helicopter Night Vision System was developed to improve low-altitude night and/or adverse weather assault transport capabilities. Man-in-the-loop simulation experiments were performed to define the minimum display and control requirements for the assault transport mission and to investigate forward-looking infrared sensor requirements, along with alternative displays such as panel-mounted displays (PMDs), helmet-mounted displays (HMDs), and integrated control display units. Also explored were navigation requirements, pilot/copilot interaction, and overall cockpit arrangement. Pilot use of an HMD and copilot use of a PMD appear to be both the preferred and the most effective night navigation combination.

  14. Infrared Imaging Sharpens View in Critical Situations

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Innovative Engineering and Consulting (IEC) Infrared Systems, a leading developer of thermal imaging systems and night vision equipment, received a Glenn Alliance for Technology Exchange (GATE) award, half of which was in the form of additional NASA assistance for new product development. IEC Infrared Systems worked with electrical and optical engineers from Glenn's Diagnostics and Data Systems Branch to develop a commercial infrared imaging system that could differentiate the intensity of heat sources better than other commercial systems. The research resulted in two major thermal imaging solutions: NightStalkIR and IntrudIR Alert. These systems are being used in the United States and abroad to help locate personnel stranded in emergency situations, defend soldiers on the battlefield abroad, and protect high-value facilities and operations. The company is also applying its advanced thermal imaging techniques to medical and pharmaceutical product development with a Cleveland-based pharmaceutical company.

  15. A self-report critical incident assessment tool for army night vision goggle helicopter operations.

    PubMed

    Renshaw, Peter F; Wiggins, Mark W

    2007-04-01

    The present study sought to examine the utility of a self-report tool that was designed as a partial substitute for a face-to-face cognitive interview for critical incidents involving night vision goggles (NVGs). The use of NVGs remains problematic within the military environment, as these devices have been identified as a factor in a significant proportion of aircraft accidents and incidents. The self-report tool was structured to identify some of the cognitive features of human performance that were associated with critical incidents involving NVGs. The tool incorporated a number of different levels of analysis, ranging from specific behavioral responses to broader cognitive constructs. Reports were received from 30 active pilots within the Australian Army using the NVG Critical Incident Assessment Tool (NVGCIAT). The results revealed a correspondence between specific types of NVG-related errors and elements of the Human Factors Analysis and Classification System (HFACS). In addition, uncertainty emerged as a significant factor associated with the critical incidents that were recalled by operators. These results were broadly consistent with previous research and provide some support for the utility of subjective assessment tools as a means of extracting critical incident-related data when face-to-face cognitive interviews are not possible. In some circumstances, the NVGCIAT might be regarded as a substitute cognitive interview protocol with some level of diagnosticity.

  16. Active vision and image/video understanding with decision structures based on the network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-08-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. The human brain appears able to emulate knowledge structures in the form of network-symbolic models, which implies an important paradigm shift in our understanding of the brain, from neural networks to "cortical software". Symbols, predicates, and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes such as clustering, perceptual grouping, and separation of figure from ground are special kinds of graph/network transformations: they convert low-level image structure into sets of more abstract structures that represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification, and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on these principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotics and defense industries.

  17. 2001 Mars Odyssey Images Earth (Visible and Infrared)

    NASA Technical Reports Server (NTRS)

    2001-01-01

    2001 Mars Odyssey's Thermal Emission Imaging System (THEMIS) acquired these images of the Earth using its visible and infrared cameras as it left the Earth. The visible image shows the thin crescent viewed from Odyssey's perspective. The infrared image was acquired at exactly the same time, but shows the entire Earth using the infrared camera's 'night-vision' capability. In visible light the instrument sees only reflected sunlight and therefore sees nothing on the night side of the planet; in infrared light the camera observes the light emitted by all regions of the Earth. The coldest ground temperatures seen correspond to the nighttime regions of Antarctica; the warmest temperatures occur in Australia. The low temperature in Antarctica is minus 50 degrees Celsius (minus 58 degrees Fahrenheit); the high temperature at night in Australia is 9 degrees Celsius (48.2 degrees Fahrenheit). These temperatures agree remarkably well with observed temperatures of minus 63 degrees Celsius at Vostok Station in Antarctica and 10 degrees Celsius in Australia. The images were taken at a distance of 3,563,735 kilometers (more than 2 million miles) on April 19, 2001, as the Odyssey spacecraft left Earth.

  18. The Stellar Imager (SI) "Vision Mission"

    NASA Technical Reports Server (NTRS)

    Carpenter, K.; Danchi, W.; Leitner, J.; Liu, A.; Lyon, R.; Mazzuca, L.; Moe, R.; Chenette, D.; Schrijver, C.; Kilston, S.

    2004-01-01

    The Stellar Imager (SI) is a Vision Mission in the Sun-Earth Connection (SEC) NASA Roadmap, conceived for the purpose of understanding the effects of stellar magnetic fields, the dynamos that generate them, and the internal structure and dynamics of the stars in which they exist. The ultimate goal is to achieve the best possible forecasting of solar/stellar activity and its impact on life in the Universe. The science goals of SI require an ultra-high angular resolution, at ultraviolet wavelengths, on the order of 100 micro-arcsec and baselines on the order of 0.5 km. These requirements call for a large, multi-spacecraft (greater than 20) imaging interferometer, utilizing precision formation flying in a stable environment, such as in a Lissajous orbit around the Sun-Earth L2 point. In this paper, we present an update on the ongoing SI mission concept and technology development studies.

  19. The Stellar Imager (SI) "Vision Mission"

    NASA Technical Reports Server (NTRS)

    Carpenter, K.; Danchi, W.; Leitner, J.; Liu, A.; Lyon, R.; Mazzuca, L.; Moe, R.; Chenette, D.; Schrijver, C.; Kilston, S.

    2004-01-01

    The Stellar Imager (SI) is a Vision Mission in the Sun-Earth Connection (SEC) NASA Roadmap, conceived for the purpose of understanding the effects of stellar magnetic fields, the dynamos that generate them, and the internal structure and dynamics of the stars in which they exist. The ultimate goal is to achieve the best possible forecasting of solar/stellar activity and its impact on life in the Universe. The science goals of SI require an ultra-high angular resolution, at ultraviolet wavelengths, on the order of 100 micro-arcsec and baselines on the order of 0.5 km. These requirements call for a large, multi-spacecraft (>20) imaging interferometer, utilizing precision formation flying in a stable environment, such as in a Lissajous orbit around the Sun-Earth L2 point. In this paper, we present an update on the ongoing SI mission concept and technology development studies.

  20. A diurnal animation of thermal images from a day-night pair

    USGS Publications Warehouse

    Watson, K.

    2000-01-01

    Interpretation of thermal images is often complicated because the physical property information is contained in both the spatial and temporal variations of the data, and thermal models are necessary to extract and display this information. A linearized radiative transfer solution to the surface flux has been used to derive a function that is invariant with respect to thermal inertia. This relationship makes it possible to predict the temperature variation at any time in the diurnal cycle using only two distinct measurements (e.g., noon and midnight). An animation can then be constructed from a pair of day-night images to view both the spatial and temporal temperature changes throughout the diurnal cycle. A more complete solution for the invariant function, using the method of Laplace transforms and based on the linearized solution, was introduced. These results indicate that the linear model does not provide a sufficiently accurate estimate. Using standard conditions (latitude 30°, solar declination 0°, acquisition times at noon and midnight), this new relationship was used to predict temperature throughout the diurnal cycle to an rms error of 0.2°C, which is close to the system noise of most thermal scanners. The method was further extended to include the primary effects of topographic slope with similar accuracy. The temperature was computed at 48 equally spaced times in the diurnal cycle with this algorithm using a co-registered day and night TIMS (Thermal Infrared Multispectral Scanner) data pair (330 pixels, 450 lines) acquired over the Carlin, Nevada, area and a co-registered DEM (Digital Elevation Model). (Any reader can view the results by downloading the animation file from an identified ftp site.) The results illustrate the power of animation to display subtle temporal and spatial temperature changes, which can provide clues to structural controls and material property differences. This 'visual change' approach could significantly increase the use of
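
    A toy version of the two-sample idea can be written down with a first diurnal harmonic and an assumed peak time; the paper's Laplace-transform solution is more accurate, so the sketch below (including the 14:00 local solar peak assumption) is purely illustrative.

```python
# First-harmonic reconstruction of a diurnal temperature curve from a
# noon/midnight pair: T(t) = T0 + A*cos(w*(t - t_peak)), with t_peak assumed.
import numpy as np

def diurnal_curve(t_noon_temp, t_mid_temp, t_peak=14.0, n=48):
    """Returns (times, temperatures) at n equally spaced times over 24 h."""
    w = 2 * np.pi / 24.0                   # diurnal angular frequency (1/h)
    t0 = 0.5 * (t_noon_temp + t_mid_temp)  # daily mean temperature
    # Noon (12 h) and midnight (0 h) are half a period apart, so their
    # difference fixes the amplitude once the peak time is assumed.
    amp = 0.5 * (t_noon_temp - t_mid_temp) / np.cos(w * (12.0 - t_peak))
    times = np.linspace(0.0, 24.0, n, endpoint=False)
    return times, t0 + amp * np.cos(w * (times - t_peak))
```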

  1. Machine-Vision Aids for Improved Flight Operations

    NASA Technical Reports Server (NTRS)

    Menon, P. K.; Chatterji, Gano B.

    1996-01-01

    The development of machine-vision-based pilot aids to help reduce night approach and landing accidents is explored. The techniques developed are motivated by the desire to use the available navigation information sources, such as the airport lighting layout, attitude sensors, and the Global Positioning System, to derive more precise aircraft position and orientation information. The fact that the airport lighting geometry is known and that images of the airport lighting can be acquired by the camera has led to the synthesis of machine-vision-based algorithms for runway-relative aircraft position and orientation estimation. The main contribution of this research is the synthesis of seven navigation algorithms based on two broad families of solutions. The first family of solution methods consists of techniques that reconstruct the airport lighting layout from the camera image and then estimate the aircraft position components by comparing the reconstructed lighting layout geometry with the known model of the airport lighting layout geometry. The second family of methods comprises techniques that synthesize the image of the airport lighting layout using a camera model and estimate the aircraft position and orientation by comparing this image with the actual image of the airport lighting acquired by the camera. Algorithms 1 through 4 belong to the first family of solutions, while Algorithms 5 through 7 belong to the second family. Algorithms 1 and 2 are parameter optimization methods, Algorithms 3 and 4 are feature correspondence methods, and Algorithms 5 through 7 are Kalman-filter-centered algorithms. Results of computer simulation are presented to demonstrate the performance of all seven algorithms.
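
    In modern terms, estimating pose from a known lighting layout and its detected image positions is a perspective-n-point problem. The sketch below uses OpenCV's solvePnP as a stand-in illustration; the paper develops its own optimization, feature-correspondence, and Kalman-filter algorithms.

```python
# Runway-relative pose from known 3D light positions (illustrative).
import numpy as np
import cv2

def runway_relative_pose(lights_3d, lights_2d, K):
    """lights_3d: Nx3 runway-frame light coordinates (meters);
    lights_2d: Nx2 detected image positions (pixels);
    K: 3x3 camera intrinsic matrix."""
    ok, rvec, tvec = cv2.solvePnP(
        lights_3d.astype(np.float64),
        lights_2d.astype(np.float64),
        K, distCoeffs=None)
    R, _ = cv2.Rodrigues(rvec)          # rotation: runway frame -> camera
    camera_pos = (-R.T @ tvec).ravel()  # camera position in runway frame
    return camera_pos, R
```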

  2. Arsia Mons by Day and Night

    NASA Image and Video Library

    2004-06-22

    Released 22 June 2004 This pair of images shows part of Arsia Mons. Day/Night Infrared Pairs The image pairs presented focus on a single surface feature as seen in both the daytime and nighttime by the infrared THEMIS camera. The nighttime image (right) has been rotated 180 degrees to place north at the top. Infrared image interpretation Daytime: Infrared images taken during the daytime exhibit both the morphological and thermophysical properties of the surface of Mars. Morphologic details are visible due to the effect of sun-facing slopes receiving more energy than antisun-facing slopes. This creates a warm (bright) slope and cool (dark) slope appearance that mimics the light and shadows of a visible wavelength image. Thermophysical properties are seen in that dust heats up more quickly than rocks. Thus dusty areas are bright and rocky areas are dark. Nighttime: Infrared images taken during the nighttime exhibit only the thermophysical properties of the surface of Mars. The effect of sun-facing versus non-sun-facing energy dissipates quickly at night. Thermophysical effects dominate as different surfaces cool at different rates through the nighttime hours. Rocks cool slowly, and are therefore relatively bright at night (remember that rocks are dark during the day). Dust and other fine grained materials cool very quickly and are dark in nighttime infrared images. Image information: IR instrument. Latitude -19.6, Longitude 241.9 East (118.1 West). 100 meter/pixel resolution. http://photojournal.jpl.nasa.gov/catalog/PIA06399

  3. Crater Ejecta by Day and Night

    NASA Image and Video Library

    2004-06-24

    Released 24 June 2004 This pair of images shows a crater and its ejecta. Day/Night Infrared Pairs The image pairs presented focus on a single surface feature as seen in both the daytime and nighttime by the infrared THEMIS camera. The nighttime image (right) has been rotated 180 degrees to place north at the top. Infrared image interpretation Daytime: Infrared images taken during the daytime exhibit both the morphological and thermophysical properties of the surface of Mars. Morphologic details are visible due to the effect of sun-facing slopes receiving more energy than antisun-facing slopes. This creates a warm (bright) slope and cool (dark) slope appearance that mimics the light and shadows of a visible wavelength image. Thermophysical properties are seen in that dust heats up more quickly than rocks. Thus dusty areas are bright and rocky areas are dark. Nighttime: Infrared images taken during the nighttime exhibit only the thermophysical properties of the surface of Mars. The effect of sun-facing versus non-sun-facing energy dissipates quickly at night. Thermophysical effects dominate as different surfaces cool at different rates through the nighttime hours. Rocks cool slowly, and are therefore relatively bright at night (remember that rocks are dark during the day). Dust and other fine grained materials cool very quickly and are dark in nighttime infrared images. Image information: IR instrument. Latitude -9, Longitude 164.2 East (195.8 West). 100 meter/pixel resolution. http://photojournal.jpl.nasa.gov/catalog/PIA06445

  4. Multi-Image Registration for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2002-01-01

    An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
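
    The second method can be illustrated compactly: with user-selected control points, a spatial transformation is estimated by least-squares regression. An affine model is assumed below for brevity; the paper's geometric correction is more general.

```python
# Least-squares affine registration from matched control points.
import numpy as np

def fit_affine(src_pts, dst_pts):
    """src_pts, dst_pts: Nx2 matched control points (N >= 3).
    Returns the 2x3 affine matrix minimizing least-squares error."""
    n = src_pts.shape[0]
    A = np.hstack([src_pts, np.ones((n, 1))])  # rows of [x, y, 1]
    # Solve A @ M ~= dst_pts for the 3x2 parameter matrix, then transpose.
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return M.T

def warp_points(M, pts):
    """Apply the 2x3 affine matrix to Nx2 points."""
    return pts @ M[:, :2].T + M[:, 2]
```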

  5. Computer vision applications for coronagraphic optical alignment and image processing.

    PubMed

    Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A

    2013-05-10

    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.

  6. A programmable computational image sensor for high-speed vision

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian

    2013-08-01

    In this paper we present a programmable computational image sensor for high-speed vision. The sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array, and a RISC core. The pixel-parallel PE array is responsible for transferring, storing, and processing raw image data in SIMD fashion, with its own programming language. The RPs are a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a large amount of computation in few instruction cycles and therefore satisfy low- and mid-level high-speed image processing requirements. The RISC core controls overall system operation and performs some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor have also been developed.

  7. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    PubMed Central

    Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.

    2015-01-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211

  8. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    PubMed

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
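
    The recursive equations referred to above are s(x, y) = s(x, y - 1) + i(x, y) and ii(x, y) = ii(x - 1, y) + s(x, y), after which any rectangular sum costs four lookups. The sketch below computes the same result with vectorized cumulative sums; the paper's row-parallel hardware decomposition is not reproduced here.

```python
# Integral image and O(1) box sums (reference sketch).
import numpy as np

def integral_image(img):
    """Zero-padded integral image; ii[y+1, x+1] = sum of img[:y+1, :x+1]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] using four lookups."""
    return (ii[bottom + 1, right + 1] - ii[top, right + 1]
            - ii[bottom + 1, left] + ii[top, left])
```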

  9. Limits of colour vision in dim light.

    PubMed

    Kelber, Almut; Lind, Olle

    2010-09-01

    Humans and most vertebrates have duplex retinae with multiple cone types for colour vision in bright light, and one single rod type for achromatic vision in dim light. Instead of comparing signals from multiple spectral types of photoreceptors, such species use one highly sensitive receptor type thus improving the signal-to-noise ratio at night. However, the nocturnal hawkmoth Deilephila elpenor, the nocturnal bee Xylocopa tranquebarica and the nocturnal gecko Tarentola chazaliae can discriminate colours at extremely dim light intensities. To be able to do so, they sacrifice spatial and temporal resolution in favour of colour vision. We review what is known about colour vision in dim light, and compare colour vision thresholds with the optical sensitivity of the photoreceptors in selected animal species with lens and compound eyes. © 2010 The Authors, Ophthalmic and Physiological Optics © 2010 The College of Optometrists.

  10. Human low vision image warping - Channel matching considerations

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.; Smith, Alan T.; Loshin, David S.

    1992-01-01

    We are investigating the possibility that a video image may productively be warped prior to presentation to a low-vision patient. This could form part of a prosthesis for certain field defects. We have done preliminary quantitative studies of some notions that may be valid in calculating the image warpings, and we hope the results will help make the best use of the time to be spent with human subjects by guiding the selection of parameters and the ranges to be investigated. We liken warping optimization to opening the largest number of spatial channels between the pixels of an input imager and resolution cells in the visual system. Some important effects that will require human evaluation are not quantified, such as local 'squashing' of the image, taken as the ratio of the eigenvalues of the Jacobian of the transformation. The results indicate that the method shows quantitative promise and have identified some geometric transformations to evaluate further with human subjects.
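
    The 'squashing' measure mentioned above can be sketched numerically: sample the warp on a grid, form the local Jacobian by finite differences, and take the ratio of its singular values as the local anisotropy. The finite-difference formulation below is an assumption for illustration.

```python
# Per-pixel anisotropy ('squashing') of a sampled warp field.
import numpy as np

def squashing_map(warp_x, warp_y):
    """warp_x, warp_y: 2D arrays giving the warped coordinates of each
    input pixel. Returns the per-pixel max/min stretch ratio."""
    # Jacobian entries by finite differences along each grid axis.
    dxu, dxv = np.gradient(warp_x)
    dyu, dyv = np.gradient(warp_y)
    h, w = warp_x.shape
    J = np.stack([np.stack([dxu, dxv], -1),
                  np.stack([dyu, dyv], -1)], -2)  # shape (h, w, 2, 2)
    s = np.linalg.svd(J.reshape(-1, 2, 2), compute_uv=False)
    ratio = s[:, 0] / np.maximum(s[:, 1], 1e-12)  # singular values, desc.
    return ratio.reshape(h, w)
```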

  11. High dynamic range vision sensor for automotive applications

    NASA Astrophysics Data System (ADS)

    Grenet, Eric; Gyger, Steve; Heim, Pascal; Heitger, Friedrich; Kaess, Francois; Nussbaum, Pascal; Ruedi, Pierre-Francois

    2005-02-01

    A 128 x 128 pixel, 120 dB vision sensor extracting, at the pixel level, the contrast magnitude and direction of local image features is used to implement a lane tracking system. The contrast representation (relative change of illumination) delivered by the sensor is independent of the illumination level; together with the sensor's high dynamic range, this ensures a very stable image-feature representation even with high spatial and temporal inhomogeneities of the illumination. Image features are dispatched off-chip according to contrast magnitude, prioritizing features with high contrast. This drastically reduces the amount of data transmitted off the chip, and hence the processing power required for subsequent processing stages. To compensate for the low fill factor (9%) of the sensor, micro-lenses have been deposited, which increase the sensitivity by a factor of 5, corresponding to an equivalent of 2000 ASA. An algorithm exploiting the contrast representation output by the vision sensor has been developed to estimate the position of a vehicle relative to the road markings. The algorithm first detects the road markings based on the contrast direction map, then performs quadratic fits on selected 3 x 3 pixel kernels to achieve sub-pixel accuracy in the estimated lane-marking positions. The resulting precision of the vehicle lateral-position estimate is 1 cm. The algorithm performs efficiently under a wide variety of environmental conditions, including night and rainy conditions.
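
    Sub-pixel localization by quadratic fitting is a standard trick and easy to sketch: fit a parabola through a peak and its two neighbors and take the vertex. The separable (per-axis) formulation below is an assumption; the paper fits over the full 3 x 3 kernel.

```python
# Sub-pixel peak localization by 1D quadratic fits (illustrative).
import numpy as np

def subpixel_peak(patch):
    """patch: 3x3 array with the coarse maximum at the center.
    Returns (dy, dx) offsets in [-0.5, 0.5] relative to the center."""
    def vertex(m1, c, p1):
        # Vertex of the parabola through samples at -1, 0, +1.
        denom = m1 - 2 * c + p1
        return 0.0 if denom == 0 else 0.5 * (m1 - p1) / denom
    dy = vertex(patch[0, 1], patch[1, 1], patch[2, 1])
    dx = vertex(patch[1, 0], patch[1, 1], patch[1, 2])
    return dy, dx
```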

  12. Flight Testing of Night Vision Systems in Rotorcraft (Test en vol de systemes de vision nocturne a bord des aeronefs a voilure tournante)

    DTIC Science & Technology

    2007-07-01

    The report's test topics include daylight readability, night-time readability, NVIS radiance, human factors analysis, and flight tests of night vision systems in rotorcraft. One issue highlighted is shadowing: moonlight creates shadows during night-time just as sunlight does during the day, and understanding what cannot be seen in night-time conditions is essential.

  13. Color vision test

    MedlinePlus

    Congenital (present from birth) color vision problems include: Achromatopsia -- complete color blindness, seeing only shades of gray; Deuteranopia -- difficulty telling red from green. Alternative names: vision test - color; Ishihara color vision test.

  14. Compact survey and inspection day/night image sensor suite for small unmanned aircraft systems (EyePod)

    NASA Astrophysics Data System (ADS)

    Bird, Alan; Anderson, Scott A.; Linne von Berg, Dale; Davidson, Morgan; Holt, Niel; Kruer, Melvin; Wilson, Michael L.

    2010-04-01

    EyePod is a compact survey and inspection day/night imaging sensor suite for small unmanned aircraft systems (UAS). EyePod generates georeferenced image products in real-time from visible near infrared (VNIR) and long wave infrared (LWIR) imaging sensors and was developed under the ONR funded FEATHAR (Fusion, Exploitation, Algorithms, and Targeting for High-Altitude Reconnaissance) program. FEATHAR is being directed and executed by the Naval Research Laboratory (NRL) in conjunction with the Space Dynamics Laboratory (SDL) and FEATHAR's goal is to develop and test new tactical sensor systems specifically designed for small manned and unmanned platforms (payload weight < 50 lbs). The EyePod suite consists of two VNIR/LWIR (day/night) gimbaled sensors that, combined, provide broad area survey and focused inspection capabilities. Each EyePod sensor pairs an HD visible EO sensor with a LWIR bolometric imager providing precision geo-referenced and fully digital EO/IR NITFS output imagery. The LWIR sensor is mounted to a patent-pending jitter-reduction stage to correct for the high-frequency motion typically found on small aircraft and unmanned systems. Details will be presented on both the wide-area and inspection EyePod sensor systems, their modes of operation, and results from recent flight demonstrations.

  15. Hurricane Isaac by Night

    NASA Image and Video Library

    2017-12-08

    NASA image acquired August 29, 2012, 1:57 a.m. EDT. Annotated view: bit.ly/RsFT9Y. Hurricane Isaac, lit up by moonlight, spins over the city of New Orleans, La., at 1:57 a.m. Central Daylight Time on the morning of August 29, 2012. The Suomi National Polar-orbiting Partnership (NPP) satellite captured these images with its Visible Infrared Imaging Radiometer Suite (VIIRS). The "day-night band" of VIIRS detects light in a range of wavelengths from green to near-infrared and uses light intensification to enable the detection of dim signals. Suomi NPP is the result of a partnership between NASA, the National Oceanic and Atmospheric Administration, and the Department of Defense. Image credit: NASA/NOAA; NASA Earth Observatory image by Jesse Allen and Robert Simmon, using VIIRS Day Night Band data.

  16. Training for Night Operations - Research Challenges and Opportunities

    DTIC Science & Technology

    2012-08-08

    [Table-of-contents and figure residue removed; recoverable captions: "Training and Simulation Conference Exhibits and Visuals by Type"; "Figure 2: Radiant Sensitivity of Intensifiers Currently Used in Night Vision".] ... wavelengths across the entire radiant sensitivity band. In either case, ... the projector/display technology and NVGs used. NVGs are very sensitive to any light within their band of radiant sensitivity. The intensifier ...

  17. Night Side Jovian Aurora

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Jovian aurora on the night side of the planet. The upper bright arc is auroral emission seen 'edge on' above the planetary limb with the darkness of space as a background. The lower bright arc is seen against the dark clouds of Jupiter. The aurora is easier to see on the night side of Jupiter because it is fainter than the clouds when they are illuminated by sunlight. Jupiter's north pole is out of view to the upper right. The images were taken in the clear filter (visible light) and are displayed in shades of blue.

    As on Earth, the auroral emission is caused by electrically charged particles striking the upper atmosphere from above. The particles travel along the magnetic field lines of the planet, but their origin is not fully understood. The field lines where the aurora is most intense cross the Jovian equator at large distances (many Jovian radii) from the planet. The faint background throughout the image is scattered light in the camera. This stray light comes from the sunlit portion of Jupiter, which is out of the image to the right. In multispectral observations the aurora appears red, consistent with glow from atomic hydrogen in Jupiter's atmosphere. Galileo's unique perspective allows it to view the night side of the planet at short range, revealing details that cannot be seen from Earth. These detailed features are time dependent, and can be followed in sequences of Galileo images.

    North is at the top of the picture. A grid of planetocentric latitude and west longitude is overlain on the images. The images were taken on November 5, 1997 at a range of 1.3 million kilometers by the Solid State Imaging (SSI) system on NASA's Galileo spacecraft.

    The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech).

    This image and other images and data received from Galileo are posted on the World Wide Web.

  18. Dynamically re-configurable CMOS imagers for an active vision system

    NASA Technical Reports Server (NTRS)

    Yang, Guang (Inventor); Pain, Bedabrata (Inventor)

    2005-01-01

    A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.
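
    As a software illustration of what the pixel-averaging circuit does inside a multi-resolution window (not the patent's circuitry), the following minimal numpy sketch reads two hypothetical tracking-target windows from one frame at different resolutions; the window coordinates, averaging factors, and array sizes are invented for illustration.

      import numpy as np

      def average_pool(window: np.ndarray, factor: int) -> np.ndarray:
          """Average non-overlapping factor x factor pixel blocks of a window."""
          h = (window.shape[0] // factor) * factor
          w = (window.shape[1] // factor) * factor
          blocks = window[:h, :w].reshape(h // factor, factor, w // factor, factor)
          return blocks.mean(axis=(1, 3))

      frame = np.random.rand(480, 640)                    # stand-in for the pixel array
      target1 = average_pool(frame[100:164, 200:264], 1)  # full resolution on target 1
      target2 = average_pool(frame[300:428, 400:528], 4)  # 4x4-averaged on target 2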

  19. Night-Vision Goggle Visual Performance During 12 Hours at 10,000 ft Altitude at Night Conditions

    DTIC Science & Technology

    2008-03-01

    relative to the use of head-phones (OR = 1.74, CI = 0.64-4.71), attendance at concerts (OR = 2.20, CI = 0.62-7.82), or motor sports (OR = 1.02, CI = 0.21-4.77) ... performance of low-grade hypoxia exposure at 10,000 ft during 12 hours night conditions. Methods: Hypobaric exposures in a dark environment simulating a ... vision goggle at an altitude of 10,000 ft in a dark environment are described. [418] ROLL, PITCH, AND YAW OF THE HEAD AS IT TRACKS VISUAL AND ...

  20. Hurricane Isaac by Night [annotated]

    NASA Image and Video Library

    2017-12-08

    NASA image acquired August 29, 2012, 1:57 a.m. EDT. Hurricane Isaac lit up by moonlight as it spins over the city of New Orleans, La., at 1:57 a.m. Central Daylight Time the morning of August 29, 2012. The Suomi National Polar-orbiting Partnership (NPP) satellite captured these images with its Visible Infrared Imaging Radiometer Suite (VIIRS). The "day-night band" of VIIRS detects light in a range of wavelengths from green to near-infrared and uses light intensification to enable the detection of dim signals. Suomi NPP is the result of a partnership between NASA, the National Oceanic and Atmospheric Administration and the Department of Defense. NASA Earth Observatory image by Jesse Allen and Robert Simmon, using VIIRS Day-Night Band data. Credit: NASA Earth Observatory.

  1. Biomimetic machine vision system.

    PubMed

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

    Real-time application of digital imaging for use in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors without compromising the scope of vision or resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. Development of a single sensor is accomplished, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data, resulting in minimal data processing to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating resolution issues found in digital vision systems. In this paper, we will discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We will also discuss the process of developing an analog based sensor that mimics the characteristics of interest in the biological vision system. This paper will conclude with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues.

  2. Machine vision for digital microfluidics

    NASA Astrophysics Data System (ADS)

    Shin, Yong-Jun; Lee, Jeong-Bong

    2010-01-01

    Machine vision is widely used in an industrial environment today. It can perform various tasks, such as inspecting and controlling production processes, that may require humanlike intelligence. The importance of imaging technology for biological research or medical diagnosis is greater than ever. For example, fluorescent reporter imaging enables scientists to study the dynamics of gene networks with high spatial and temporal resolution. Such high-throughput imaging is increasingly demanding the use of machine vision for real-time analysis and control. Digital microfluidics is a relatively new technology with expectations of becoming a true lab-on-a-chip platform. Utilizing digital microfluidics, only small amounts of biological samples are required and the experimental procedures can be automatically controlled. There is a strong need for the development of a digital microfluidics system integrated with machine vision for innovative biological research today. In this paper, we show how machine vision can be applied to digital microfluidics by demonstrating two applications: machine vision-based measurement of the kinetics of biomolecular interactions and machine vision-based droplet motion control. It is expected that digital microfluidics-based machine vision system will add intelligence and automation to high-throughput biological imaging in the future.

  3. Fusion of Night Vision and Thermal Images

    DTIC Science & Technology

    2006-12-01

    ... with the walls of the MCP channels. Thus, a thin metal oxide coating commonly known as an ion barrier film is added to the input side of the MCP to ... with film ion barrier to filmless gated tubes. An important improvement for Gen 4 products is a greater target identification range and higher target ... [Table residue; recoverable entries: ceramic/metal seals with S-25 cathode; micro-channel plate; ceramic/metal seals with GaAs cathode; micro-channel plate with ion barrier film.]

  4. Night Side of Titan

    NASA Image and Video Library

    1999-02-23

    NASA's Voyager 2 obtained this wide-angle image of the night side of Titan on Aug. 25, 1979. This is a view of Titan's extended atmosphere, the bright orangish ring being caused by the atmosphere's scattering of the incident sunlight.

  5. Visual summation in night-flying sweat bees: a theoretical study.

    PubMed

    Theobald, Jamie Carroll; Greiner, Birgit; Wcislo, William T; Warrant, Eric J

    2006-07-01

    Bees are predominantly diurnal; only a few groups fly at night. An evolutionary limitation that bees must overcome to inhabit dim environments is their eye type: bees possess apposition compound eyes, which are poorly suited to vision in dim light. Here, we theoretically examine how the nocturnal bee Megalopta genalis flies at light levels usually reserved for insects bearing more sensitive superposition eyes. We find that neural summation should greatly increase M. genalis's visual reliability. Predicted spatial summation closely matches the morphology of laminal neurons believed to mediate such summation. Improved reliability costs acuity, but dark-adapted bees already suffer optical blurring, and summation further degrades vision only slightly.
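
    A back-of-the-envelope numerical check of why summation should increase reliability (generic photon-noise reasoning, not the authors' model): pooling N independent photoreceptor signals under Poisson noise improves the signal-to-noise ratio by roughly sqrt(N), at the cost of spatial acuity. All numbers below are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      mean_photons = 4.0                                  # assumed dim-light photon catch
      catches = rng.poisson(mean_photons, size=(100_000, 16))

      single = catches[:, 0]                              # one receptor
      pooled = catches.sum(axis=1)                        # sum over a 16-receptor pool

      print(single.mean() / single.std())                 # SNR ~ sqrt(4)  = 2
      print(pooled.mean() / pooled.std())                 # SNR ~ sqrt(64) = 8, i.e. 4x better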

  6. Müller cells separate between wavelengths to improve day vision with minimal effect upon night vision

    NASA Astrophysics Data System (ADS)

    Labin, Amichai M.; Safuri, Shadi K.; Ribak, Erez N.; Perlman, Ido

    2014-07-01

    Vision starts with the absorption of light by the retinal photoreceptors—cones and rods. However, due to the ‘inverted’ structure of the retina, the incident light must propagate through reflecting and scattering cellular layers before reaching the photoreceptors. It has been recently suggested that Müller cells function as optical fibres in the retina, transferring light illuminating the retinal surface onto the cone photoreceptors. Here we show that Müller cells are wavelength-dependent wave-guides, concentrating the green-red part of the visible spectrum onto cones and allowing the blue-purple part to leak onto nearby rods. This phenomenon is observed in the isolated retina and explained by a computational model, for the guinea pig and the human parafoveal retina. Therefore, light propagation by Müller cells through the retina can be considered as an integral part of the first step in the visual process, increasing photon absorption by cones while minimally affecting rod-mediated vision.

  7. Light, Imaging, Vision: An interdisciplinary undergraduate course

    NASA Astrophysics Data System (ADS)

    Nelson, Philip

    2015-03-01

    The vertebrate eye is a fantastically sensitive instrument, capable of registering the absorption of a single photon, and yet generating very low noise. Using eyes as a common thread helps motivate undergraduates to learn a lot of physics, both fundamental and applied to scientific imaging and neuroscience. I'll describe an undergraduate course, for students in several science and engineering majors, that takes students from the rudiments of probability theory to the quantum character of light, including modern experimental methods like fluorescence imaging and Förster resonance energy transfer. After a digression into color vision, we then see how the Feynman principle explains the apparently wavelike phenomena associated with light, including applications like diffraction, subdiffraction imaging, total internal reflection and TIRF microscopy. Then we see how scientists documented the single-quantum sensitivity of the eye seven decades earlier than ``ought'' to have been possible, and finally close with the remarkable signaling cascade that delivers such outstanding performance. Parts of this story are now embodied in a new textbook (WH Freeman and Co, 1/2015); additional course materials are available upon request. Work supported by NSF Grants EF-0928048 and DMR-0832802.

  8. Spatial vision processes: From the optical image to the symbolic structures of contour information

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.

    1988-01-01

    The significance of machine and natural vision is discussed together with the need for a general approach to image acquisition and processing aimed at recognition. An exploratory scheme is proposed which encompasses the definition of spatial primitives, intrinsic image properties and sampling, 2-D edge detection at the smallest scale, the construction of spatial primitives from edges, and the isolation of contour information from textural information. Concepts drawn from or suggested by natural vision at both perceptual and physiological levels are relied upon heavily to guide the development of the overall scheme. The scheme is intended to provide a larger context in which to place the emerging technology of detector array focal-plane processors. The approach differs from many recent efforts in edge detection and image coding by emphasizing smallest scale edge detection as a foundation for multi-scale symbolic processing while diminishing somewhat the importance of image convolutions with multi-scale edge operators. Cursory treatments of information theory illustrate that the direct application of this theory to structural information in images could not be realized.

  9. Melas Chasma, Day and Night.

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This image is a mosaic of day and night infrared images of Melas Chasma taken by the camera system on NASA's Mars Odyssey spacecraft. The daytime temperature images are shown in black and white, superimposed on the martian topography. A single nighttime temperature image is superimposed in color. The daytime temperatures range from approximately -35 degrees Celsius (-31 degrees Fahrenheit) in black to -5 degrees Celsius (23 degrees Fahrenheit) in white. Overlapping landslides and individual layers in the walls of Melas Chasma can be seen in this image. The landslides flowed over 100 kilometers (62 miles) across the floor of Melas Chasma, producing deposits with ridges and grooves of alternating warm and cold materials that can still be seen. The temperature differences in the daytime images are due primarily to lighting effects, where sunlit slopes are warm (bright) and shadowed slopes are cool (dark). The nighttime temperature differences are due to differences in the abundance of rocky materials that retain their heat at night and stay relatively warm (red). Fine grained dust and sand (blue) cools off more rapidly at night. These images were acquired using the thermal infrared imaging system infrared Band 9, centered at 12.6 micrometers.

    Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the 2001 Mars Odyssey mission for NASA's Office of Space Science in Washington, D.C. Investigators at Arizona State University in Tempe, the University of Arizona in Tucson and NASA's Johnson Space Center, Houston, operate the science instruments. Additional science partners are located at the Russian Aviation and Space Agency and at Los Alamos National Laboratories, New Mexico. Lockheed Martin Astronautics, Denver, is the prime contractor for the project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL.

  10. Using Computer Vision Techniques to Locate Objects in an Image

    DTIC Science & Technology

    1988-09-01

    Sujata Kakarla, J. Wakeley, A. S. Maida. Technical report, The Pennsylvania State University, Applied Research Laboratory, P.O. Box 30, State College, PA 16804: "Using Computer Vision Techniques to Locate Objects in an Image." [DTIC cover-page and report-documentation-form residue removed.]

  11. Pedestrian detection in infrared image using HOG and Autoencoder

    NASA Astrophysics Data System (ADS)

    Chen, Tianbiao; Zhang, Hao; Shi, Wenjie; Zhang, Yu

    2017-11-01

    In order to guarantee the safety of driving at night, a vehicle-mounted night vision system was used to detect pedestrians in front of the car and sound an alarm to prevent potential danger. To decrease the false positive rate (FPR) and increase the true positive rate (TPR), a pedestrian detection method based on HOG and an Autoencoder (HOG+Autoencoder) was presented. Firstly, the HOG features of the input images were computed and encoded by the Autoencoder. The encoded features were then classified by Softmax. In the training process, the Autoencoder was trained without supervision and Softmax was trained with supervision; the Autoencoder and Softmax were then stacked into a single model and fine-tuned with labeled images. An experiment was conducted to compare the detection performance of HOG alone and HOG+Autoencoder, using images collected by a vehicle-mounted infrared camera: 80,000 images for the training set and 20,000 for the testing set, with a 1:3 ratio between positive and negative images. The results show that at a TPR of 95%, the FPR of HOG+Autoencoder is 0.4%, while the FPR of HOG alone is 5% at the same TPR.
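
    A minimal sketch of the shape of such a pipeline, assuming scikit-image and scikit-learn as stand-ins: the unsupervised stage is approximated with an MLP trained to reconstruct its own HOG input, its hidden layer serves as the encoder, and a logistic (softmax) classifier sits on top. Crop sizes, layer widths, and the random stand-in data are invented, and the paper's joint fine-tuning step is omitted.

      import numpy as np
      from skimage.feature import hog
      from sklearn.linear_model import LogisticRegression
      from sklearn.neural_network import MLPRegressor

      def hog_features(images):
          # images: (n, 128, 64) grayscale pedestrian-window crops (sizes assumed)
          return np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                               cells_per_block=(2, 2)) for im in images])

      rng = np.random.default_rng(0)                      # stand-in IR crops and labels
      X = rng.random((200, 128, 64))
      y = rng.integers(0, 2, 200)
      F = hog_features(X)

      # Unsupervised stage: train the network to reconstruct its input (autoencoder).
      ae = MLPRegressor(hidden_layer_sizes=(256,), activation='relu',
                        max_iter=300).fit(F, F)

      # Encode: hidden-layer activations h = relu(F.W1 + b1).
      H = np.maximum(0.0, F @ ae.coefs_[0] + ae.intercepts_[0])

      # Supervised stage: softmax/logistic classifier on the encoded features.
      clf = LogisticRegression(max_iter=1000).fit(H, y)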

  12. A new colorimetrically-calibrated automated video-imaging protocol for day-night fish counting at the OBSEA coastal cabled observatory.

    PubMed

    del Río, Joaquín; Aguzzi, Jacopo; Costa, Corrado; Menesatti, Paolo; Sbragaglia, Valerio; Nogueras, Marc; Sarda, Francesc; Manuèl, Antoni

    2013-10-30

    Field measurements of the swimming activity rhythms of fishes are scant due to the difficulty of counting individuals at a high frequency over a long period of time. Cabled observatory video monitoring allows such sampling at a high frequency over unlimited periods of time. Unfortunately, automation of the extraction of biological information (i.e., animals' visual counts per unit of time) is still a major bottleneck. In this study, we describe a new automated video-imaging protocol for the 24-h continuous counting of fishes in colorimetrically calibrated time-lapse photographic outputs, taken by a shallow-water (20 m depth) cabled video platform, the OBSEA. The spectral reflectance value for each patch was measured between 400 and 700 nm and then converted into standard RGB, used as a reference for all subsequent calibrations. All the images were acquired within a standardized Region Of Interest (ROI), represented by a 2 × 2 m methacrylate panel endowed with a 9-colour calibration chart, and calibrated using the recently implemented "3D Thin-Plate Spline" warping approach in order to numerically define colour by its coordinates in n-dimensional space. That operation was repeated on a subset of 500 images used as a training set, manually selected because they were acquired under optimum visibility conditions. All images plus those in the training set were ordered together through Principal Component Analysis, allowing the selection of 614 images (67.6%) out of a total of 908, corresponding to 18 days (at 30-min frequency). The Roberts operator (used in image processing and computer vision for edge detection) was used to highlight regions of high spatial colour gradient corresponding to fishes' bodies. Time series of manual and visual counts were compared for efficiency evaluation. Periodogram and waveform analysis outputs provided very similar results, although quantified parameters in relation to the strength of the respective rhythms were different. Results
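
    For reference, the Roberts-operator step is a one-liner in common imaging libraries. A minimal sketch with scikit-image, where the file name and the threshold rule are illustrative assumptions rather than the paper's actual segmentation criteria:

      from skimage import filters, io

      frame = io.imread('obsea_frame.png', as_gray=True)   # hypothetical calibrated frame
      edges = filters.roberts(frame)                       # Roberts cross-gradient magnitude
      fish_mask = edges > edges.mean() + 3 * edges.std()   # assumed high-gradient threshold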

  13. A New Colorimetrically-Calibrated Automated Video-Imaging Protocol for Day-Night Fish Counting at the OBSEA Coastal Cabled Observatory

    PubMed Central

    del Río, Joaquín; Aguzzi, Jacopo; Costa, Corrado; Menesatti, Paolo; Sbragaglia, Valerio; Nogueras, Marc; Sarda, Francesc; Manuèl, Antoni

    2013-01-01

    Field measurements of the swimming activity rhythms of fishes are scant due to the difficulty of counting individuals at a high frequency over a long period of time. Cabled observatory video monitoring allows such sampling at a high frequency over unlimited periods of time. Unfortunately, automation of the extraction of biological information (i.e., animals' visual counts per unit of time) is still a major bottleneck. In this study, we describe a new automated video-imaging protocol for the 24-h continuous counting of fishes in colorimetrically calibrated time-lapse photographic outputs, taken by a shallow-water (20 m depth) cabled video platform, the OBSEA. The spectral reflectance value for each patch was measured between 400 and 700 nm and then converted into standard RGB, used as a reference for all subsequent calibrations. All the images were acquired within a standardized Region Of Interest (ROI), represented by a 2 × 2 m methacrylate panel endowed with a 9-colour calibration chart, and calibrated using the recently implemented “3D Thin-Plate Spline” warping approach in order to numerically define colour by its coordinates in n-dimensional space. That operation was repeated on a subset of 500 images used as a training set, manually selected because they were acquired under optimum visibility conditions. All images plus those in the training set were ordered together through Principal Component Analysis, allowing the selection of 614 images (67.6%) out of a total of 908, corresponding to 18 days (at 30-min frequency). The Roberts operator (used in image processing and computer vision for edge detection) was used to highlight regions of high spatial colour gradient corresponding to fishes' bodies. Time series of manual and visual counts were compared for efficiency evaluation. Periodogram and waveform analysis outputs provided very similar results, although quantified parameters in relation to the strength of the respective rhythms were different. Results

  14. Physiological effects of night vision goggle counterweights on neck musculature of military helicopter pilots.

    PubMed

    Harrison, Michael F; Neary, J Patrick; Albert, Wayne J; Veillette, Major Dan W (Canadian Forces); McKenzie, Neil P; Croll, James C

    2007-08-01

    Increased helmet-mounted mass and specific neck postures have been found to be a cause of increased muscular activity and stress. However, pilots who use night vision goggles (NVG) frequently use counterweight (CW) equipment such as a lead mass that is attached to the back of the flight helmet to provide balance to counter the weight of the NVG equipment mounted to the front of the flight helmet. It is proposed that this alleviates this stress. However, no study has yet investigated the physiological effects of CW during an extended period of time during which the pilots performed normal operational tasks. Thirty-one Canadian Forces pilots were monitored on consecutive days during a day and a NVG mission in a CH-146 flight simulator. Near infrared spectroscopy probes were attached bilaterally to the trapezius muscles and hemodynamics, i.e., total oxygenation index, total hemoglobin, oxyhemoglobin, and deoxyhemoglobin, were monitored for the duration of the mission. Pilots either wore CW (n = 25) or did not wear counterweights (nCW, n = 6) as per their usual operational practice. Levene's statistical tests were conducted to test for homogeneity and only total oxygenation index returned a significant result (p < or = 0.05). For the near infrared spectroscopy variables, significant differences were found to exist between CW and nCW pilots for total hemoglobin, deoxyhemoglobin, and oxyhemoglobin during NVG flights. The CW pilots displayed less metabolic and hemodynamic stress during simulated missions as compared to the nCW pilots. The results of this study would suggest that the use of CW equipment during NVG missions in military helicopter pilots does minimize the metabolic and hemodynamic responses of the trapezius muscles.

  15. A new technique for robot vision in autonomous underwater vehicles using the color shift in underwater imaging

    DTIC Science & Technology

    2017-06-01

    A NEW TECHNIQUE FOR ROBOT VISION IN AUTONOMOUS UNDERWATER VEHICLES USING THE COLOR SHIFT IN UNDERWATER IMAGING, by Jake A. Jones. Master's thesis, June 2017. [Report-documentation-form residue removed.] ... Developing a technique for underwater robot vision is a key factor in establishing autonomy in underwater vehicles. A new technique is developed and ...

  16. Multi-Sensor Fusion of Infrared and Electro-Optic Signals for High Resolution Night Images

    PubMed Central

    Huang, Xiaopeng; Netravali, Ravi; Man, Hong; Lawrence, Victor

    2012-01-01

    Electro-optic (EO) image sensors exhibit high resolution and low noise levels in daytime, but they do not work in dark environments. Infrared (IR) image sensors exhibit poor resolution and cannot separate objects with similar temperatures. Therefore, we propose a novel framework of IR image enhancement based on information (e.g., edges) from EO images, which improves the resolution of IR images and helps us distinguish objects at night. Our framework improves resolution by superimposing/blending the edges of the EO image onto the corresponding transformed IR image. In this framework, we adopt the theoretical point spread function (PSF) proposed by Hardie et al. for the IR image, which has the modulation transfer function (MTF) of a uniform detector array and the incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we design an inverse filter for the proposed PSF and use it for the IR image transformation. The framework requires four main steps: (1) inverse filter-based IR image transformation; (2) EO image edge detection; (3) registration; and (4) blending/superimposing of the obtained image pair. Simulation results show both blended and superimposed IR images, and demonstrate that blended IR images have better quality than the superimposed images. Additionally, based on the same steps, simulation results show a blended IR image of better quality when only the original IR image is available. PMID:23112602
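
    A compressed sketch of steps (1), (2), and (4), assuming pre-registered frames: a Gaussian kernel stands in for the Hardie et al. PSF (the paper derives its PSF from the detector MTF and diffraction-limited OTF), a Wiener filter stands in for the designed inverse filter, and the file names and blend weight are illustrative.

      import numpy as np
      from skimage import feature, io, restoration

      ir = io.imread('ir_frame.png', as_gray=True).astype(float)   # hypothetical inputs,
      eo = io.imread('eo_frame.png', as_gray=True).astype(float)   # assumed pre-registered
      ir /= ir.max()
      eo /= eo.max()

      # Step 1: inverse-filter the IR image (assumed Gaussian stand-in PSF).
      x = np.arange(-7, 8)
      g = np.exp(-x**2 / (2 * 2.0**2))
      psf = np.outer(g, g)
      psf /= psf.sum()
      ir_restored = restoration.wiener(ir, psf, balance=0.1)

      # Step 2: detect EO edges; Step 4: blend them into the restored IR image.
      edges = feature.canny(eo, sigma=2.0).astype(float)
      alpha = 0.3                                                  # assumed blend weight
      fused = np.clip((1 - alpha) * ir_restored + alpha * edges, 0.0, 1.0)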

  17. Multi-sensor fusion of infrared and electro-optic signals for high resolution night images.

    PubMed

    Huang, Xiaopeng; Netravali, Ravi; Man, Hong; Lawrence, Victor

    2012-01-01

    Electro-optic (EO) image sensors exhibit high resolution and low noise levels in daytime, but they do not work in dark environments. Infrared (IR) image sensors exhibit poor resolution and cannot separate objects with similar temperatures. Therefore, we propose a novel framework of IR image enhancement based on information (e.g., edges) from EO images, which improves the resolution of IR images and helps us distinguish objects at night. Our framework improves resolution by superimposing/blending the edges of the EO image onto the corresponding transformed IR image. In this framework, we adopt the theoretical point spread function (PSF) proposed by Hardie et al. for the IR image, which has the modulation transfer function (MTF) of a uniform detector array and the incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we design an inverse filter for the proposed PSF and use it for the IR image transformation. The framework requires four main steps: (1) inverse filter-based IR image transformation; (2) EO image edge detection; (3) registration; and (4) blending/superimposing of the obtained image pair. Simulation results show both blended and superimposed IR images, and demonstrate that blended IR images have better quality than the superimposed images. Additionally, based on the same steps, simulation results show a blended IR image of better quality when only the original IR image is available.

  18. Establishing Mobility Measures to Assess the Effectiveness of Night Vision Devices: Results of a Pilot Study

    ERIC Educational Resources Information Center

    Zebehazy, Kim T.; Zimmerman, George J.; Bowers, Alex R.; Luo, Gang; Peli, Eli

    2005-01-01

    In addition to their restricted peripheral fields, persons with retinitis pigmentosa (RP) report significant problems seeing in low levels of illumination, which causes difficulty with night travel. Several devices have been developed to support the visual needs of persons who have night blindness. These devices include wide-angle flashlights,…

  19. A Multiscale Vision Model applied to analyze EIT images of the solar corona

    NASA Astrophysics Data System (ADS)

    Portier-Fozzani, F.; Vandame, B.; Bijaoui, A.; Maucherat, A. J.; EIT Team

    2001-07-01

    The large dynamic range provided by the SOHO/EIT CCD (1 : 5000) is needed to observe the large EUV zoom of coronal structures from coronal holes up to flares. Histograms show that often a wide dynamic range is present in each image. Extracting hidden structures in the background level requires specific techniques such as the use of the Multiscale Vision Model (MVM, Bijaoui et al., 1998). This method, based on wavelet transformations, optimizes detection of objects of various sizes, however complex they may be. Bijaoui et al. built the Multiscale Vision Model to extract small dynamical structures from noise, mainly for studying galaxies. In this paper, we describe requirements for the use of this method with SOHO/EIT images (calibration, size of the image, dynamics of the subimage, etc.). Two different areas were studied, revealing hidden structures: (1) classical coronal mass ejection (CME) formation and (2) a complex group of active regions with its evolution. The aim of this paper is to define carefully the constraints for this new method of imaging the solar corona with SOHO/EIT. Physical analysis derived from multi-wavelength observations will later complete these first results.

  20. TWAN: The World at Night

    NASA Astrophysics Data System (ADS)

    Tafreshi, Babak A.

    2011-06-01

    The World at Night (TWAN) is a global program to produce, collect, and present stunning photographs and time-lapse videos of the world's most beautiful and historic sites against the night-time backdrop of stars, planets, and celestial events. TWAN is a bridge between art, science and humanity, bringing a message of peace concealed in the sky. Organised by ``Astronomers Without Borders'', the project consists of the world's best night-sky photographers in countries around the world, together with coordinators, regional event organisers, and consultants. TWAN was also designated as a Special Project of the International Year of Astronomy 2009. While the project's global exhibitions and educational events peaked during IYA2009, TWAN is planned for the long term in several phases and will continue to create and exhibit images in the coming years.

  1. Vision therapy in adults with convergence insufficiency: clinical and functional magnetic resonance imaging measures.

    PubMed

    Alvarez, Tara L; Vicci, Vincent R; Alkan, Yelda; Kim, Eun H; Gohel, Suril; Barrett, Anna M; Chiaravalloti, Nancy; Biswal, Bharat B

    2010-12-01

    This research quantified clinical measurements and functional neural changes associated with vision therapy in subjects with convergence insufficiency (CI). Convergence and divergence 4° step responses were compared between 13 control adult subjects with normal binocular vision and four CI adult subjects. All CI subjects participated in 18 h of vision therapy. Clinical parameters quantified throughout the therapy included: nearpoint of convergence, recovery point of convergence, positive fusional vergence at near, near dissociated phoria, and eye movements that were quantified using peak velocity. Neural correlates of the CI subjects were quantified with functional magnetic resonance imaging scans comparing random vs. predictable vergence movements using a block design before and after vision therapy. Images were quantified by measuring the spatial extent of activation and the average correlation within five regions of interests (ROI). The ROIs were the dorsolateral prefrontal cortex, a portion of the frontal lobe, part of the parietal lobe, the cerebellum, and the brain stem. All measurements were repeated 4 months to 1 year post-therapy in three of the CI subjects. Convergence average peak velocities to step stimuli were significantly slower (p = 0.016) in CI subjects compared with controls; however, significant differences in average peak velocities were not observed for divergence step responses (p = 0.30). The investigation of CI subjects participating in vision therapy showed that the nearpoint of convergence, recovery point of convergence, and near dissociated phoria significantly decreased. Furthermore, the positive fusional vergence, average peak velocity from 4° convergence steps, and the amount of functional activity within the frontal areas, cerebellum, and brain stem significantly increased. Several clinical and cortical parameters were significantly correlated. Convergence peak velocity was significantly slower in CI subjects compared with controls

  2. Sexual interaction or a solitary action: young Swedish men's ideal images of sexual situations in relationships and in one-night stands.

    PubMed

    Elmerstig, Eva; Wijma, Barbro; Sandell, Kerstin; Berterö, Carina

    2014-10-01

    It seems that traditional gender norms influence young women's and men's sexuality differently. However, little attention has been paid to ideal images of sexual situations. This study identifies young heterosexual men's ideal images of sexual situations and their expectations of themselves in sexual situations. The present study employs a qualitative design. Twelve Swedish men (aged 16-20) participated in individual in-depth qualitative interviews. The interviews were transcribed verbatim and analysed using the constant comparative method from grounded theory. Our study revealed that the young men's conceptions of normal sexual situations were divided into two parts: sexual situations in relationships, and sexual situations in one-night stands. Their ideal image, "a balanced state of emotional and physical pleasure", was influenced by the presence/absence of intimacy, the partner's response, and their own performance. The greatest opportunities to experience intimacy and the partner's response were found during sexual situations in relationships. In one-night stands, the men wanted to make a good impression by performing well, and behaved according to masculine stereotypes. Stereotyped masculinity norms regulate young heterosexual men's sexuality, particularly in one-night stands. Sexual health promotion should emphasize the presence of these masculinity norms, which probably involve costs in relation to young men's sexual wellbeing. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Progress in computer vision.

    NASA Astrophysics Data System (ADS)

    Jain, A. K.; Dorai, C.

    Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.

  4. Automatic Welding System of Aluminum Pipe by Monitoring Backside Image of Molten Pool Using Vision Sensor

    NASA Astrophysics Data System (ADS)

    Baskoro, Ario Sunar; Kabutomori, Masashi; Suga, Yasuo

    An automatic welding system using Tungsten Inert Gas (TIG) welding with a vision sensor for the welding of aluminum pipe was constructed. This research studies the intelligent welding process of aluminum alloy pipe 6063S-T5, with the pipe in a fixed position and a moving welding torch, using an AC welding machine. The monitoring system consists of a vision sensor using a charge-coupled device (CCD) camera to monitor the backside image of the molten pool. The captured image is processed to recognize the edge of the molten pool by an image processing algorithm. A neural network model for welding speed control was constructed to perform the process automatically. The experimental results show the effectiveness of the control system, confirmed by good detection of the molten pool and sound welds.
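
    A minimal OpenCV sketch of the monitoring idea, not the paper's algorithm: the molten pool appears as the dominant bright region in the backside image, so a stand-in Otsu threshold plus largest-contour extraction recovers its edge, and a simple width measurement could feed a speed controller in place of the paper's neural network. The file name and every processing choice here are assumptions.

      import cv2

      frame = cv2.imread('backside_pool.png', cv2.IMREAD_GRAYSCALE)  # hypothetical frame
      blur = cv2.GaussianBlur(frame, (5, 5), 0)
      _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      pool = max(contours, key=cv2.contourArea)       # largest bright blob = molten pool
      pool_width_px = cv2.boundingRect(pool)[2]       # crude feature for speed control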

  5. Auroras light up the Antarctic night

    NASA Image and Video Library

    2012-12-05

    NASA acquired July 15, 2012. On July 15, 2012, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi NPP satellite captured this nighttime view of the aurora australis, or “southern lights,” over Antarctica’s Queen Maud Land and the Princess Ragnhild Coast. The image was captured by the VIIRS “day-night band,” which detects light in a range of wavelengths from green to near-infrared and uses filtering techniques to observe signals such as city lights, auroras, wildfires, and reflected moonlight. In the case of the image above, the sensor detected the visible auroral light emissions as energetic particles rained down from Earth’s magnetosphere into the gases of the upper atmosphere. The slightly jagged appearance of the auroral lines is a function of the rapid dance of the energetic particles at the same time that the satellite is moving and the VIIRS sensor is scanning. The yellow box in the top image depicts the area shown in the lower close-up image. Light from the aurora was bright enough to illuminate the ice edge between the ice shelf and the Southern Ocean. At the time, Antarctica was locked in midwinter darkness and the Moon was a waning crescent that provided little light. NASA Earth Observatory image by Jesse Allen and Robert Simmon, using VIIRS Day-Night Band data from the Suomi National Polar-orbiting Partnership. Suomi NPP is the result of a partnership between NASA, the National Oceanic and Atmospheric Administration, and the Department of Defense. Caption by Mike Carlowicz. Instrument: Suomi NPP - VIIRS. Credit: NASA Earth Observatory.

  6. A computer vision for animal ecology.

    PubMed

    Weinstein, Ben G

    2018-05-01

    A central goal of animal ecology is to observe species in the natural world. The cost and challenge of data collection often limit the breadth and scope of ecological study. Ecologists often use image capture to bolster data collection in time and space. However, the ability to process these images remains a bottleneck. Computer vision can greatly increase the efficiency, repeatability and accuracy of image review. Computer vision uses image features, such as colour, shape and texture to infer image content. I provide a brief primer on ecological computer vision to outline its goals, tools and applications to animal ecology. I reviewed 187 existing applications of computer vision and divided articles into ecological description, counting and identity tasks. I discuss recommendations for enhancing the collaboration between ecologists and computer scientists and highlight areas for future growth of automated image analysis. © 2017 The Author. Journal of Animal Ecology © 2017 British Ecological Society.

  7. Zernike analysis of all-sky night brightness maps.

    PubMed

    Bará, Salvador; Nievas, Miguel; Sánchez de Miguel, Alejandro; Zamorano, Jaime

    2014-04-20

    All-sky night brightness maps (calibrated images of the night sky with hemispherical field-of-view (FOV) taken at standard photometric bands) provide useful data to assess the light pollution levels at any ground site. We show that these maps can be efficiently described and analyzed using Zernike circle polynomials. The relevant image information can be compressed into a low-dimensional coefficients vector, giving an analytical expression for the sky brightness and alleviating the effects of noise. Moreover, the Zernike expansions allow us to quantify in a straightforward way the average and zenithal sky brightness and its variation across the FOV, providing a convenient framework to study the time course of these magnitudes. We apply this framework to analyze the results of a one-year campaign of night sky brightness measurements made at the UCM observatory in Madrid.
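
    To make the idea concrete, here is a minimal least-squares Zernike fit on a synthetic all-sky map. Only four low-order terms are used (the paper's expansions go further), and the synthetic brightness model is invented for illustration.

      import numpy as np

      def zernike_basis(rho, theta):
          """First four Zernike circle polynomials on the unit disk."""
          return np.stack([
              np.ones_like(rho),                     # piston: average sky brightness
              2.0 * rho * np.sin(theta),             # tilt (horizon-to-horizon gradient)
              2.0 * rho * np.cos(theta),             # tilt, orthogonal direction
              np.sqrt(3.0) * (2.0 * rho**2 - 1.0),   # defocus: zenith-to-horizon term
          ], axis=-1)

      n = 128                                  # synthetic map grid, zenith at rho = 0
      y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
      rho, theta = np.hypot(x, y), np.arctan2(y, x)
      inside = rho <= 1.0
      sky = 21.0 - 1.5 * rho**2 + 0.2 * x      # invented brightness, mag/arcsec^2

      A = zernike_basis(rho[inside], theta[inside])
      coeffs, *_ = np.linalg.lstsq(A, sky[inside], rcond=None)
      zenith = zernike_basis(np.array([0.0]), np.array([0.0])) @ coeffs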

  8. Gamma-Ray imaging for nuclear security and safety: Towards 3-D gamma-ray vision

    NASA Astrophysics Data System (ADS)

    Vetter, Kai; Barnowksi, Ross; Haefner, Andrew; Joshi, Tenzing H. Y.; Pavlovsky, Ryan; Quiter, Brian J.

    2018-01-01

    The development of portable gamma-ray imaging instruments, in combination with the recent advances in sensor and related computer vision technologies, enables unprecedented capabilities in the detection, localization, and mapping of radiological and nuclear materials in complex environments relevant for nuclear security and safety. Though multi-modal imaging has been established in medicine and biomedical imaging for some time, the potential of multi-modal data fusion for radiological localization and mapping problems in complex indoor and outdoor environments remains to be explored in detail. In contrast to the well-defined settings in medical or biological imaging, associated with a small field-of-view and well-constrained extension of the radiation field, in many radiological search and mapping scenarios the radiation fields are not constrained and objects and sources are not necessarily known prior to the measurement. The ability to fuse radiological with contextual or scene data in three dimensions, in analogy to the fusion of radiological and functional imaging with anatomical imaging in medicine, provides new capabilities enhancing image clarity, context, quantitative estimates, and visualization of the data products. We have developed new means to register and fuse gamma-ray imaging with contextual data from portable or moving platforms. These developments enhance detection and mapping capabilities as well as provide unprecedented visualization of complex radiation fields, moving us one step closer to the realization of gamma-ray vision in three dimensions.

  9. Quality Control by Artificial Vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lam, Edmond Y.; Gleason, Shaun Scott; Niel, Kurt S.

    2010-01-01

    Computational technology has fundamentally changed many aspects of our lives. One clear piece of evidence is the development of artificial-vision systems, which have effectively automated many manual tasks ranging from quality inspection to quantitative assessment. In many cases, these machine-vision systems are even preferred over manual ones due to their repeatability and high precision. Such advantages come from significant research efforts in advancing sensor technology, illumination, computational hardware, and image-processing algorithms. Similar to the Special Section on Quality Control by Artificial Vision published two years ago in Volume 17, Issue 3 of the Journal of Electronic Imaging, the present one invited papers relevant to fundamental technology improvements to foster quality control by artificial vision, and fine-tuned the technology for specific applications. We aim to balance both theoretical and applied work pertinent to this special section theme. Consequently, we have seven high-quality papers resulting from the stringent peer-reviewing process in place at the Journal of Electronic Imaging. Some of the papers contain extended treatment of the authors' work presented at the SPIE Image Processing: Machine Vision Applications conference and the International Conference on Quality Control by Artificial Vision. On the broad application side, Liu et al. propose an unsupervised texture image segmentation scheme. Using a multilayer data condensation spectral clustering algorithm together with wavelet transform, they demonstrate the effectiveness of their approach on both texture and synthetic aperture radar images. A problem related to image segmentation is image extraction. For this, O'Leary et al. investigate the theory of polynomial moments and show how these moments can be compared to classical filters. They also show how to use the discrete polynomial-basis functions for the extraction of 3-D embossed digits, demonstrating superiority over Fourier

  10. Complete Vision-Based Traffic Sign Recognition Supported by an I2V Communication System

    PubMed Central

    García-Garrido, Miguel A.; Ocaña, Manuel; Llorca, David F.; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel

    2012-01-01

    This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle, which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method, applied to the information extracted from contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose, infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN Bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents plenty of tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained, with an average runtime of 35 ms that allows real-time performance. PMID:22438704
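
    A toy sketch of the circular-sign branch of such a pipeline, using OpenCV's standard Hough circle transform and a scikit-learn SVM; this is not the authors' restricted Hough transform, the triangular-sign and I2V stages are omitted, and the file name, parameters, and stand-in training data are all invented.

      import cv2
      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)                       # stand-in training data
      svm = SVC(kernel='rbf').fit(rng.random((40, 1024)), rng.integers(0, 5, 40))

      frame = cv2.imread('road_frame.png')                 # hypothetical camera frame
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

      circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=40,
                                 param1=160, param2=40, minRadius=10, maxRadius=60)
      if circles is not None:
          for cx, cy, r in np.round(circles[0]).astype(int):
              crop = gray[max(cy - r, 0):cy + r, max(cx - r, 0):cx + r]
              patch = cv2.resize(crop, (32, 32)).astype(np.float64).ravel()[None, :]
              label = svm.predict(patch / 255.0)           # sign class for this candidate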

  11. Complete vision-based traffic sign recognition supported by an I2V communication system.

    PubMed

    García-Garrido, Miguel A; Ocaña, Manuel; Llorca, David F; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel

    2012-01-01

    This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle, which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method, applied to the information extracted from contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose, infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN Bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents plenty of tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained, with an average runtime of 35 ms that allows real-time performance.

  12. On-Chip Imaging of Schistosoma haematobium Eggs in Urine for Diagnosis by Computer Vision

    PubMed Central

    Linder, Ewert; Grote, Anne; Varjo, Sami; Linder, Nina; Lebbad, Marianne; Lundin, Mikael; Diwan, Vinod; Hannuksela, Jari; Lundin, Johan

    2013-01-01

    Background Microscopy, being relatively easy to perform at low cost, is the universal diagnostic method for detection of most globally important parasitic infections. As quality control is hard to maintain, misdiagnosis is common, which affects both estimates of parasite burdens and patient care. Novel techniques for high-resolution imaging and image transfer over data networks may offer solutions to these problems through provision of education, quality assurance and diagnostics. Imaging can be done directly on image sensor chips, a technique possible to exploit commercially for the development of inexpensive “mini-microscopes”. Images can be transferred for analysis both visually and by computer vision both at point-of-care and at remote locations. Methods/Principal Findings Here we describe imaging of helminth eggs using mini-microscopes constructed from webcams and mobile phone cameras. The results show that an inexpensive webcam, stripped off its optics to allow direct application of the test sample on the exposed surface of the sensor, yields images of Schistosoma haematobium eggs, which can be identified visually. Using a highly specific image pattern recognition algorithm, 4 out of 5 eggs observed visually could be identified. Conclusions/Significance As proof of concept we show that an inexpensive imaging device, such as a webcam, may be easily modified into a microscope, for the detection of helminth eggs based on on-chip imaging. Furthermore, algorithms for helminth egg detection by machine vision can be generated for automated diagnostics. The results can be exploited for constructing simple imaging devices for low-cost diagnostics of urogenital schistosomiasis and other neglected tropical infectious diseases. PMID:24340107

  13. Real-time millimeter-wave imaging radiometer for avionic synthetic vision

    NASA Astrophysics Data System (ADS)

    Lovberg, John A.; Chou, Ri-Chee; Martin, Christopher A.

    1994-07-01

    ThermoTrex Corporation (TTC) has developed an imaging radiometer, the passive microwave camera (PMC), that uses an array of frequency-scanned antennas coupled to a multi-channel acousto-optic (Bragg cell) spectrum analyzer to form visible images of a scene through acquisition of thermal blackbody radiation in the millimeter-wave spectrum. The output of the Bragg cell is imaged by a standard video camera and passed to a computer for normalization and display at real-time frame rates. One application of this system could be its incorporation into an enhanced vision system to provide pilots with a clear view of the runway during fog and other adverse weather conditions. The unique PMC system architecture will allow compact large-aperture implementations because of its flat antenna sensor. Other potential applications include air traffic control, all-weather area surveillance, fire detection, and security. This paper describes the architecture of the TTC PMC and shows examples of images acquired with the system.

  14. LED light design method for high contrast and uniform illumination imaging in machine vision.

    PubMed

    Wu, Xiaojun; Gao, Guangming

    2018-03-01

    In machine vision, illumination is critical in determining the complexity of the inspection algorithms. Proper lighting yields clear and sharp images with the highest contrast and low noise between the object of interest and the background, which helps the target be located, measured, or inspected. Contrary to the empirical trial-and-error convention of selecting off-the-shelf LED lights in machine vision, an optimization algorithm for LED light design is proposed in this paper. It is composed of contrast optimization modeling and a uniform illumination technology for non-normal incidence (UINI). The contrast optimization model is built from the surface reflection characteristics, e.g., the roughness, the refractive index, and the light direction, to maximize the contrast between the features of interest and the background. The UINI preserves the uniformity of the lighting optimized by the contrast optimization model. The simulation and experimental results demonstrate that the optimization algorithm is effective and suitable for producing images with the highest contrast and uniformity, which is very useful for the design of LED illumination systems in machine vision.
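
    The contrast objective itself is easy to state. Below is a minimal sketch evaluating one common definition, the Michelson contrast, between a feature patch and a background patch of a captured test image, so that candidate lighting designs can be compared; the file name and patch coordinates are assumptions, and the paper optimizes a physical reflection model rather than measured patches.

      from skimage import io

      img = io.imread('inspection_view.png', as_gray=True).astype(float)  # hypothetical
      feature_patch = img[100:150, 200:260]      # object of interest (coords assumed)
      background_patch = img[100:150, 300:360]   # nearby background (coords assumed)

      i_f = feature_patch.mean()
      i_b = background_patch.mean()
      michelson = abs(i_f - i_b) / (i_f + i_b)   # higher = better lighting design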

  15. The So-called 'Face on Mars' at Night

    NASA Technical Reports Server (NTRS)

    2003-01-01

    [figure removed for brevity, see original site]

    This pair of THEMIS infrared images shows the so-called 'face on Mars' landform viewed during both the day and night. The nighttime THEMIS IR image was acquired on Oct. 24, 2002; the daytime image was originally released on July 24, 2002. Both images are of THEMIS's 9th IR band (12.57 microns), and they have been geometrically projected for image registration. The 'face on Mars' is located in the northern plains of Mars near 40°N, 10°W (350°E). This knob can be seen in the daytime image because of the temperature differences between the sunlit (warm and bright) and shadowed (cold and dark) slopes. The temperature in the daytime scene ranges from -50°C (darkest) to -15°C (brightest). At night many of the hills and knobs in this region are difficult to detect because the effects of heating and shadowing on the slopes are no longer present. The temperatures at night vary from approximately -90°C (darkest) to -75°C (warmest). The nighttime temperature differences are due primarily to differences in the abundance of rocky materials that retain their heat at night and stay warm. Fine-grained dust and sand cools off more rapidly at night. The circular rims and ejecta of many of the craters in this region are warm at night, showing that rocks are still present on the steep walls inside the craters and in the ejecta material that was blasted out when the craters formed. Some craters have cold (dark) material on their floors in the night IR image, indicating that fine-grained material is accumulating within the craters. Many knobs and hills, including the 'face', have rocky (warm at night) material on their slopes and ridges.

    The THEMIS infrared camera provides an excellent regional view of Mars - these images cover an area 32 kilometers (20 miles) by approximately 50 kilometers (30 miles) at a resolution of 100 meters per picture element ('pixel'). The scenes are tilted differently because the Odyssey orbit is

  16. Research on HDR image fusion algorithm based on Laplace pyramid weight transform with extreme low-light CMOS

    NASA Astrophysics Data System (ADS)

    Guan, Wen; Li, Li; Jin, Weiqi; Qiu, Su; Zou, Yan

    2015-10-01

    The Extreme-Low-Light CMOS sensor has been widely applied in the field of night vision as a new type of solid-state image sensor. But if the illumination in the scene changes drastically or is too strong, an Extreme-Low-Light CMOS cannot clearly present both the high-light and the low-light regions of the scene. To address this partial-saturation problem in night vision, an HDR image fusion algorithm based on the Laplacian pyramid was investigated. The overall gray value and contrast of a low-light image are very low. For the top layer of the long-exposure and short-exposure images, which carries rich brightness and textural features, we choose a fusion strategy based on the regional average gradient; the remaining layers, which represent the edge features of the target, are fused using a strategy based on regional energy. In the process of reconstructing the source image from the Laplacian pyramid, we compare the fusion results obtained with four kinds of base images. The algorithm is tested using Matlab and compared with different fusion strategies. We use three objective evaluation parameters, information entropy, average gradient, and standard deviation, for further analysis of the fusion results. Experiments in different low-illumination environments show that the algorithm in this paper can rapidly achieve a wide dynamic range while keeping high entropy. The verification of the algorithm's features suggests a further application prospect for the optimized algorithm. Keywords: high dynamic range imaging, image fusion, multi-exposure image, weight coefficient, information fusion, Laplacian pyramid transform.
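
    A compact sketch of Laplacian-pyramid exposure fusion for a grayscale long/short exposure pair. The per-pixel max-absolute-coefficient rule below is a simple stand-in for the paper's regional-average-gradient (top layer) and regional-energy (remaining layers) weights, and the level count is arbitrary.

      import cv2
      import numpy as np

      def laplacian_pyramid(img, levels=4):
          gp = [img.astype(np.float32)]
          for _ in range(levels):
              gp.append(cv2.pyrDown(gp[-1]))
          lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1])
                for i in range(levels)]
          return lp + [gp[-1]]                 # detail levels, then the top (base) level

      def fuse(long_exp, short_exp, levels=4):
          pair = zip(laplacian_pyramid(long_exp, levels),
                     laplacian_pyramid(short_exp, levels))
          # Stand-in weight: keep whichever exposure has the larger coefficient magnitude.
          fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in pair]
          out = fused[-1]
          for lap in reversed(fused[:-1]):
              out = cv2.pyrUp(out, dstsize=lap.shape[1::-1]) + lap
          return np.clip(out, 0, 255).astype(np.uint8)

      # fused = fuse(cv2.imread('long.png', 0), cv2.imread('short.png', 0))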

  17. Leading Vision

    ERIC Educational Resources Information Center

    Fawcett, Gay

    2004-01-01

    The current educational landscape makes it imperative that a vision statement become more than a fine-sounding statement that is laminated, hung on the wall, and quickly forgotten. If educators do not have a clear image of the future they wish to create, then someone will be ready to create it for them. But with a clear vision of the future, a…

  18. Vision Therapy in Adults with Convergence Insufficiency: Clinical and Functional Magnetic Resonance Imaging Measures

    PubMed Central

    Alvarez, Tara L.; Vicci, Vincent R.; Alkan, Yelda; Kim, Eun H.; Gohel, Suril; Barrett, Anna M.; Chiaravalloti, Nancy; Biswal, Bharat B.

    2011-01-01

    Purpose This research quantified clinical measurements and functional neural changes associated with vision therapy in subjects with convergence insufficiency (CI). Methods Convergence and divergence 4° step responses were compared between 13 control adult subjects with normal binocular vision and four CI adult subjects. All CI subjects participated in 18 h of vision therapy. Clinical parameters quantified throughout the therapy included: nearpoint of convergence, recovery point of convergence, positive fusional vergence at near, near dissociated phoria, and eye movements that were quantified using peak velocity. Neural correlates of the CI subjects were quantified with functional magnetic resonance imaging scans comparing random vs. predictable vergence movements using a block design before and after vision therapy. Images were quantified by measuring the spatial extent of activation and the average correlation within five regions of interest (ROIs). The ROIs were the dorsolateral prefrontal cortex, a portion of the frontal lobe, part of the parietal lobe, the cerebellum, and the brain stem. All measurements were repeated 4 months to 1 year post-therapy in three of the CI subjects. Results Convergence average peak velocities to step stimuli were significantly slower (p = 0.016) in CI subjects compared with controls; however, significant differences in average peak velocities were not observed for divergence step responses (p = 0.30). The investigation of CI subjects participating in vision therapy showed that the nearpoint of convergence, recovery point of convergence, and near dissociated phoria significantly decreased. Furthermore, the positive fusional vergence, average peak velocity from 4° convergence steps, and the amount of functional activity within the frontal areas, cerebellum, and brain stem significantly increased. Several clinical and cortical parameters were significantly correlated. Conclusions Convergence peak velocity was significantly slower in

  19. Inexpensive, Near-Infrared Imaging of Artwork Using a Night-Vision Webcam for Chemistry-of-Art Courses

    ERIC Educational Resources Information Center

    Smith, Gregory D.; Nunan, Elizabeth; Walker, Claire; Kushel, Dan

    2009-01-01

    Imaging of artwork is an important aspect of art conservation, technical art history, and art authentication. Many forms of near-infrared (NIR) imaging are used by conservators, archaeologists, forensic scientists, and technical art historians to examine the underdrawings of paintings, to detect damages and restorations, to enhance faded or…

  20. Panoramic stereo sphere vision

    NASA Astrophysics Data System (ADS)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is lost in the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a specially combined fish-eye lens module, capable of producing 3D coordinate information for the whole observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model, and parameter calibration method in this paper. Video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering-target tracking, automatic mapping of environments, and attitude estimation are some of the applications that will benefit from PSSV.

  1. Lomonosov Crater, Day and Night

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 16 June 2004 This pair of images shows part of Lomonosov Crater.

    Day/Night Infrared Pairs

    The image pairs presented focus on a single surface feature as seen in both the daytime and nighttime by the infrared THEMIS camera. The nighttime image (right) has been rotated 180 degrees to place north at the top.

    Infrared image interpretation

    Daytime: Infrared images taken during the daytime exhibit both the morphological and thermophysical properties of the surface of Mars. Morphologic details are visible due to the effect of sun-facing slopes receiving more energy than antisun-facing slopes. This creates a warm (bright) slope and cool (dark) slope appearance that mimics the light and shadows of a visible wavelength image. Thermophysical properties are seen in that dust heats up more quickly than rocks. Thus dusty areas are bright and rocky areas are dark.

    Nighttime: Infrared images taken during the nighttime exhibit only the thermophysical properties of the surface of Mars. The effect of sun-facing versus non-sun-facing energy dissipates quickly at night. Thermophysical effects dominate as different surfaces cool at different rates through the nighttime hours. Rocks cool slowly, and are therefore relatively bright at night (remember that rocks are dark during the day). Dust and other fine grained materials cool very quickly and are dark in nighttime infrared images.
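
    The rock-versus-dust contrast reversal described in these day/night pairs is a thermal inertia effect. As a rough illustration only (a toy exponential-cooling model, not the THEMIS data pipeline; the time-constant scaling and temperature floor are assumptions chosen to show the trend):

```python
import numpy as np

def nighttime_temp(t_sunset_c, thermal_inertia, hours_after_sunset,
                   t_floor_c=-95.0):
    """Toy exponential cooling: high-inertia (rocky) surfaces cool slowly,
    low-inertia (dusty) surfaces relax quickly toward a cold floor.
    thermal_inertia in J m^-2 K^-1 s^-1/2 (typical Mars range ~50-1200)."""
    tau_hours = thermal_inertia / 50.0   # assumed scaling, illustration only
    decay = np.exp(-hours_after_sunset / tau_hours)
    return t_floor_c + (t_sunset_c - t_floor_c) * decay

for label, inertia in [("dust", 60.0), ("sand", 200.0), ("rock", 1000.0)]:
    t = nighttime_temp(t_sunset_c=-20.0, thermal_inertia=inertia,
                       hours_after_sunset=8.0)
    print(f"{label:>4}: {t:6.1f} C after 8 h of night")
# Dust ends near the cold floor (dark at night); rock stays tens of
# degrees warmer (bright at night), reversing the daytime contrast.
```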

    Image information: IR instrument. Latitude 64.9, Longitude 350.7 East (9.3 West). 100 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through

  2. Ares Valles: Night and Day

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 15 June 2004 This pair of images shows part of the Ares Valles region.

    Day/Night Infrared Pairs

    The image pairs presented focus on a single surface feature as seen in both the daytime and nighttime by the infrared THEMIS camera. The nighttime image (right) has been rotated 180 degrees to place north at the top.

    Infrared image interpretation

    Daytime: Infrared images taken during the daytime exhibit both the morphological and thermophysical properties of the surface of Mars. Morphologic details are visible due to the effect of sun-facing slopes receiving more energy than antisun-facing slopes. This creates a warm (bright) slope and cool (dark) slope appearance that mimics the light and shadows of a visible wavelength image. Thermophysical properties are seen in that dust heats up more quickly than rocks. Thus dusty areas are bright and rocky areas are dark.

    Nighttime: Infrared images taken during the nighttime exhibit only the thermophysical properties of the surface of Mars. The effect of sun-facing versus non-sun-facing energy dissipates quickly at night. Thermophysical effects dominate as different surfaces cool at different rates through the nighttime hours. Rocks cool slowly, and are therefore relatively bright at night (remember that rocks are dark during the day). Dust and other fine grained materials cool very quickly and are dark in nighttime infrared images.

    Image information: IR instrument. Latitude 3.6, Longitude 339.9 East (20.1 West). 100 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released

  3. Channel by Day and Night

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 17 June 2004 This pair of images shows part of a small channel.

    Day/Night Infrared Pairs

    The image pairs presented focus on a single surface feature as seen in both the daytime and nighttime by the infrared THEMIS camera. The nighttime image (right) has been rotated 180 degrees to place north at the top.

    Infrared image interpretation

    Daytime: Infrared images taken during the daytime exhibit both the morphological and thermophysical properties of the surface of Mars. Morphologic details are visible due to the effect of sun-facing slopes receiving more energy than antisun-facing slopes. This creates a warm (bright) slope and cool (dark) slope appearance that mimics the light and shadows of a visible wavelength image. Thermophysical properties are seen in that dust heats up more quickly than rocks. Thus dusty areas are bright and rocky areas are dark.

    Nighttime: Infrared images taken during the nighttime exhibit only the thermophysical properties of the surface of Mars. The effect of sun-facing versus non-sun-facing energy dissipates quickly at night. Thermophysical effects dominate as different surfaces cool at different rates through the nighttime hours. Rocks cool slowly, and are therefore relatively bright at night (remember that rocks are dark during the day). Dust and other fine grained materials cool very quickly and are dark in nighttime infrared images.

    Image information: IR instrument. Latitude 19.8, Longitude 141.5 East (218.5 West). 100 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through

  4. The Stellar Imager (SI)"Vision Mission"

    NASA Technical Reports Server (NTRS)

    Carpenter, Ken; Danchi, W.; Leitner, J.; Liu, A.; Lyon, R.; Mazzuca, L.; Moe, R.; Chenette, D.; Karovska, M.; Allen, R.

    2004-01-01

    The Stellar Imager (SI) is a "Vision" mission in the Sun-Earth Connection (SEC) Roadmap, conceived for the purpose of understanding the effects of stellar magnetic fields, the dynamos that generate them, and the internal structure and dynamics of the stars in which they exist. The ultimate goal is to achieve the best possible forecasting of solar/stellar magnetic activity and its impact on life in the Universe. The science goals of SI require an ultra-high angular resolution, at ultraviolet wavelengths, on the order of 100 micro-arcsec and thus baselines on the order of 0.5 km. These requirements call for a large, multi-spacecraft (less than 20) imaging interferometer, utilizing precision formation flying in a stable environment, such as in a Lissajous orbit around the Sun-Earth L2 point. SI's resolution will make it an invaluable resource for many other areas of astrophysics, including studies of AGNs, supernovae, cataclysmic variables, young stellar objects, QSOs, and stellar black holes. We describe ongoing mission concept and technology development studies for SI. These studies are designed to refine the mission requirements for the science goals, define a Design Reference Mission, perform trade studies of selected major technical and architectural issues, improve the existing technology roadmap, and explore the details of deployment and operations, as well as the possible roles of astronauts and/or robots in construction and servicing of the facility.

  5. Design of a reading test for low-vision image warping

    NASA Astrophysics Data System (ADS)

    Loshin, David S.; Wensveen, Janice; Juday, Richard D.; Barton, R. Shane

    1993-08-01

    NASA and the University of Houston College of Optometry are examining the efficacy of image warping as a possible prosthesis for at least two forms of low vision -- maculopathy and retinitis pigmentosa. Before incurring the expense of reducing the concept to practice, one would wish to have confidence that a worthwhile improvement in visual function would result. NASA's Programmable Remapper (PR) can warp an input image onto arbitrary geometric coordinate systems at full video rate, and it has recently been upgraded to accept computer- generated video text. We have integrated the Remapper with an SRI eye tracker to simulate visual malfunction in normal observers. A reading performance test has been developed to determine if the proposed warpings yield an increase in visual function; i.e., reading speed. We describe the preliminary experimental results of this reading test with a simulated central field defect with and without remapped images.

  6. Design of a reading test for low vision image warping

    NASA Technical Reports Server (NTRS)

    Loshin, David S.; Wensveen, Janice; Juday, Richard D.; Barton, R. S.

    1993-01-01

    NASA and the University of Houston College of Optometry are examining the efficacy of image warping as a possible prosthesis for at least two forms of low vision - maculopathy and retinitis pigmentosa. Before incurring the expense of reducing the concept to practice, one would wish to have confidence that a worthwhile improvement in visual function would result. NASA's Programmable Remapper (PR) can warp an input image onto arbitrary geometric coordinate systems at full video rate, and it has recently been upgraded to accept computer-generated video text. We have integrated the Remapper with an SRI eye tracker to simulate visual malfunction in normal observers. A reading performance test has been developed to determine if the proposed warpings yield an increase in visual function; i.e., reading speed. We will describe the preliminary experimental results of this reading test with a simulated central field defect with and without remapped images.

  7. Event Detection Using Mobile Phone Mass GPS Data and Their Reliability Verification by DMSP/OLS Night Light Image

    NASA Astrophysics Data System (ADS)

    Yuki, Akiyama; Satoshi, Ueyama; Ryosuke, Shibasaki; Adachi, Ryuichiro

    2016-06-01

    In this study, we developed a method to detect sudden population concentrations on a certain day and in a certain area, that is, an "Event," all over Japan in 2012, using mass GPS data provided by mobile phone users. First, the stay locations of all phone users were detected using existing methods. Second, the areas and days where Events occurred were detected by aggregating the mass stay locations into 1-km-square grid polygons. Finally, the proposed method detected Events with an especially large number of visitors in the year by removing the influence of Events that occurred continuously throughout the year. In addition, we demonstrated reasonable reliability of the proposed Event detection method by comparing the results of Event detection with light intensities obtained from DMSP/OLS night light images. Our method can detect not only positive events such as festivals but also negative events such as natural disasters and road accidents. These results are expected to support policy development in urban planning, disaster prevention, and transportation management.
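
    The gridding and event-flagging steps are described only at a high level; a hedged pandas sketch, with a hypothetical input file and column names (the z-score threshold and 1-deg-latitude ~ 111 km approximation are assumptions, not the paper's method), might look like:

```python
import numpy as np
import pandas as pd

# Hypothetical columns: user_id, lat, lon, date for detected stay locations.
stays = pd.read_csv("stay_locations.csv")

# Aggregate stays into roughly 1-km grid cells (1 deg latitude ~ 111 km).
stays["gy"] = (stays["lat"] * 111.0).astype(int)
stays["gx"] = (stays["lon"] * 111.0 *
               np.cos(np.radians(stays["lat"]))).astype(int)
daily = (stays.groupby(["gx", "gy", "date"])["user_id"]
              .nunique().rename("visitors").reset_index())

# Flag an "Event" when a cell's daily count far exceeds its own typical
# level, which also suppresses venues that are busy all year round.
stats = daily.groupby(["gx", "gy"])["visitors"].agg(["mean", "std"]).reset_index()
daily = daily.merge(stats, on=["gx", "gy"])
daily["z"] = (daily["visitors"] - daily["mean"]) / daily["std"].replace(0, np.nan)
events = daily[daily["z"] > 3.0]   # threshold is an assumption
print(events.sort_values("z", ascending=False).head())
```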

  8. Dynamic image fusion and general observer preference

    NASA Astrophysics Data System (ADS)

    Burks, Stephen D.; Doe, Joshua M.

    2010-04-01

    Recent developments in image fusion give the user community many options for presenting imagery to an end-user. Individuals at the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate have developed an electronic system that allows users to quickly and efficiently determine optimal image fusion algorithms and color parameters based upon imagery and videos collected in environments typical for military observers. After performing multiple multi-band data collections in a variety of military-like scenarios, different waveband, fusion algorithm, image post-processing, and color choices are presented to observers as an output of the fusion system. The observer preferences can give guidelines as to how specific scenarios should affect the presentation of fused imagery.

  9. Night firing range performance following photorefractive keratectomy and laser in situ keratomileusis.

    PubMed

    Bower, Kraig S; Burka, Jenna M; Subramanian, Prem S; Stutzman, Richard D; Mines, Michael J; Rabin, Jeff C

    2006-06-01

    To investigate the effect of laser refractive surgery on night weapons firing. Firing range performance was measured at baseline and postoperatively following photorefractive keratectomy and laser in situ keratomileusis. Subjects fired the M-16A2 rifle with night vision goggles (NVG) under starlight and with the iron sight (simulated dusk). Scores before and after surgery were compared for both conditions. No subject was able to acquire the target using the iron sight without correction before surgery. After surgery, the scores without correction (95.9 +/- 4.7) matched the preoperative scores with correction (94.3 +/- 4.0; p = 0.324). Uncorrected NVG scores after surgery (96.4 +/- 3.1) exceeded the corrected scores before surgery (91.4 +/- 10.2), but this trend was not statistically significant (p = 0.063). Night weapon firing with both the iron sight and the NVG sight improved after surgery. This study supports the operational benefits of refractive surgery in the military.
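
    The before/after comparisons above are paired by subject; a small sketch of that kind of test, on made-up scores rather than the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject firing scores, not the study's data.
pre_corrected = np.array([94, 96, 90, 97, 93, 95])
post_uncorrected = np.array([95, 97, 92, 98, 96, 97])

# Paired t-test: each subject serves as their own control.
t, p = stats.ttest_rel(post_uncorrected, pre_corrected)
print(f"t = {t:.2f}, p = {p:.3f}")
```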

  10. Modeling the target acquisition performance of active imaging systems

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Jacobs, Eddie L.; Halford, Carl E.; Vollmerhausen, Richard; Tofsted, David H.

    2007-04-01

    Recent developments in active imaging system technology in the defense and security community have driven the need for a theoretical understanding of its operation and performance in military applications such as target acquisition. In this paper, the modeling of active imaging systems, developed at the U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate, is presented with particular emphasis on the impact of coherent effects such as speckle and atmospheric scintillation. Experimental results from human perception tests are in good agreement with the model results, validating the modeling of coherent effects as additional noise sources. Example trade studies on the design of a conceptual active imaging system to mitigate deleterious coherent effects are shown.
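
    The NVESD model itself is not reproduced in the abstract; its key idea of treating coherent speckle as an additional noise source can be illustrated with fully developed speckle, i.e. multiplicative gamma noise of unit mean (the single-look case is exponentially distributed):

```python
import numpy as np

def add_speckle(intensity_image, looks=1, rng=None):
    """Apply fully developed speckle: multiplicative gamma noise with
    unit mean; looks=1 gives the exponential (single-look) case."""
    rng = np.random.default_rng() if rng is None else rng
    gain = rng.gamma(shape=looks, scale=1.0 / looks,
                     size=intensity_image.shape)
    return intensity_image * gain

# Speckle contrast (std/mean) of a uniform scene drops as 1/sqrt(looks),
# which is why frame averaging mitigates the coherent noise.
scene = np.full((256, 256), 100.0)
for looks in (1, 4, 16):
    noisy = add_speckle(scene, looks)
    print(looks, noisy.std() / noisy.mean())
```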

  11. Modeling the target acquisition performance of active imaging systems.

    PubMed

    Espinola, Richard L; Jacobs, Eddie L; Halford, Carl E; Vollmerhausen, Richard; Tofsted, David H

    2007-04-02

    Recent developments in active imaging system technology in the defense and security community have driven the need for a theoretical understanding of its operation and performance in military applications such as target acquisition. In this paper, the modeling of active imaging systems, developed at the U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate, is presented with particular emphasis on the impact of coherent effects such as speckle and atmospheric scintillation. Experimental results from human perception tests are in good agreement with the model results, validating the modeling of coherent effects as additional noise sources. Example trade studies on the design of a conceptual active imaging system to mitigate deleterious coherent effects are shown.

  12. Enhanced Night Visibility Series, Volume XII : Overview of Phase II and Development of Phase III Experimental Plan

    DOT National Transportation Integrated Search

    2005-12-01

    This volume provides an overview of the six studies that compose Phase II of the Enhanced Night Visibility project and the experimental plan for its third and final portion, Phase III. The Phase II studies evaluated up to 12 vision enhancement system...

  13. Earth Observations taken with ESA NightPod hardware

    NASA Image and Video Library

    2012-12-08

    ISS034-E-005935 (8 Dec. 2012) --- A nighttime view of Liege, Belgium is featured in this image photographed by an Expedition 34 crew member on the International Space Station. To paraphrase the old expression, “all roads lead to Liege” – or at least one could get that impression from this nighttime photograph. The brightly lit core of the Liege urban area appears to lie at the center of a network of roadways—traceable by continuous orange lighting—extending outwards into the rural, and relatively dark, Belgian countryside. For a sense of scale, the distance from left to right is approximately 70 kilometers. The region at upper left to the southeast of Verviers includes agricultural fields and forest; hence it appears almost uniformly dark at night. The image was taken using the European Space Agency’s Nodding mechanism, also known as the NightPod. NightPod is an electro-mechanical mount system designed to compensate digital cameras for the motion of the space station relative to Earth. The primary mission goal was to take high-resolution, long exposure digital imagery of Earth from the station’s Cupola, particularly cities at night. While the official NightPod mission has been completed, the mechanism remains onboard for crew members to use. Liege is the third most populous metropolitan region in Belgium (after Brussels and Antwerp); it includes 52 municipalities, including the nearby city of Seraing.

  14. The theoretical simulation on electrostatic distribution of 1st proximity region in proximity focusing low-light-level image intensifier

    NASA Astrophysics Data System (ADS)

    Zhang, Liandong; Bai, Xiaofeng; Song, De; Fu, Shencheng; Li, Ye; Duanmu, Qingduo

    2015-03-01

    Low-light-level night vision technology amplifies low light level signals until they are large enough to be seen by the naked eye, using photons and photoelectrons as the information carriers. It was not until the micro-channel plate (MCP) was invented that high performance and miniaturization of low-light-level night vision devices became possible. The device considered here is a double-proximity-focusing low-light-level image intensifier, which places a micro-channel plate close to the photocathode and the phosphor screen. The advantages of proximity focusing in low-light-level night vision are small size, light weight, low power consumption, no distortion, fast response speed, wide dynamic range, and so on. The micro-channel plate (with metal electrodes on both sides), the photocathode, and the phosphor screen are placed parallel to one another. When the image intensifier works, a voltage is applied between the photocathode and the input of the micro-channel plate. Electrons emitted where photons strike the photocathode move toward the micro-channel plate under the electric field in the 1st proximity focusing region, and are then multiplied in the micro-channels. Once the distribution of electrostatic-field equipotential lines in the 1st proximity focusing region is determined, the trajectories of the emitted electrons can be calculated and simulated, and the resolution of the image tube can in turn be determined. However, the distributions of electrostatic fields and equipotential lines are complex because of the many micro-channels in the micro-channel plate. This paper simulates the electrostatic distribution of the 1st proximity region in a double-proximity-focusing low-light-level image intensifier with the finite element analysis software Ansoft Maxwell 3D. The electrostatic field distributions of the 1st proximity region are compared as the micro-channel plate's pore size, spacing, and inclination angle are varied. We believe that the electron beam movement
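
    The paper uses Ansoft Maxwell 3D; as a rough stand-in for the field calculation in the 1st proximity region, a 2-D finite-difference relaxation of Laplace's equation for a plain photocathode-to-MCP gap shows how the potential and equipotential lines are obtained (the 200 V bias, grid size, and boundary conditions are assumptions, and the micro-channel geometry the paper varies is ignored):

```python
import numpy as np

# Gap between photocathode (0 V) and MCP input face (200 V assumed).
ny, nx = 60, 120
v = np.zeros((ny, nx))
v[0, :] = 0.0      # photocathode
v[-1, :] = 200.0   # MCP input electrode

# Jacobi relaxation of Laplace's equation with insulating side walls.
for _ in range(5000):
    v[1:-1, 1:-1] = 0.25 * (v[:-2, 1:-1] + v[2:, 1:-1] +
                            v[1:-1, :-2] + v[1:-1, 2:])
    v[1:-1, 0] = v[1:-1, 1]      # Neumann (zero-flux) sides
    v[1:-1, -1] = v[1:-1, -2]

# Field E = -grad(V); electrons accelerate opposite E, toward the MCP.
ey, ex = np.gradient(-v)
print("mid-gap potential:", v[ny // 2, nx // 2])  # ~100 V for this geometry
```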

  15. Application of remote thermal imaging and night vision technology to improve endangered wildlife resource management with minimal animal distress and hazard to humans

    NASA Astrophysics Data System (ADS)

    Lavers, C.; Franks, K.; Floyd, M.; Plowman, A.

    2005-01-01

    Advanced electromagnetic sensor systems more commonly associated with the high-tech military battlefield may be applied to remote surveillance of wildlife. The first comprehensive study of a wide global variety of near-infrared (NIR) and thermal wildlife portraits obtained with this technology is presented: for mammals, birds and other animals. The paper illustrates the safety afforded to zoo staff, and to personnel in the wild, during the day and night from potentially lethal and aggressive animals, and from those that are normally difficult to approach. Such remote sensing systems are non-invasive and cause minimal disruption and distress to animals both in captivity and in the wild. We present some of the veterinary advantages of such all-weather day and night systems for identifying sickness and injuries at an early diagnostic stage, as well as age-related effects and mammalian cancer. Animals have very different textured surfaces and reflective and emissive properties in the NIR and thermal bands than in the visible spectrum. Some surface features may offer biomimetic materials design advantages.

  16. Microsaccadic sampling of moving image information provides Drosophila hyperacute vision

    PubMed Central

    Solanki, Narendra; Rien, Diana; Jaciuch, David; Dongre, Sidhartha Anil; Blanchard, Florence; de Polavieja, Gonzalo G; Hardie, Roger C; Takalo, Jouni

    2017-01-01

    Small fly eyes should not see fine image details. Because flies exhibit saccadic visual behaviors and their compound eyes have relatively few ommatidia (sampling points), their photoreceptors would be expected to generate blurry and coarse retinal images of the world. Here we demonstrate that Drosophila see the world far better than predicted from the classic theories. By using electrophysiological, optical and behavioral assays, we found that R1-R6 photoreceptors’ encoding capacity in time is maximized to fast high-contrast bursts, which resemble their light input during saccadic behaviors. Whilst over space, R1-R6s resolve moving objects at saccadic speeds beyond the predicted motion-blur-limit. Our results show how refractory phototransduction and rapid photomechanical photoreceptor contractions jointly sharpen retinal images of moving objects in space-time, enabling hyperacute vision, and explain how such microsaccadic information sampling exceeds the compound eyes’ optical limits. These discoveries elucidate how acuity depends upon photoreceptor function and eye movements. PMID:28870284

  17. Oxygenation state and twilight vision at 2438 m.

    PubMed

    Connolly, Desmond M

    2011-01-01

    Under twilight viewing conditions, hypoxia, equivalent to breathing air at 3048 m (10,000 ft), compromises low contrast acuity, dynamic contrast sensitivity, and chromatic sensitivity. Selected past experiments have been repeated under milder hypoxia, equivalent to altitude exposure below 2438 m (8000 ft), to further define the influence of oxygenation state on mesopic vision. To assess photopic and mesopic visual function, 12 subjects each undertook three experiments using the Contrast Acuity Assessment test, the Frequency Doubling Perimeter, and the Color Assessment and Diagnosis (CAD) test. Experiments were conducted near sea level breathing 15.2% oxygen (balance nitrogen) and 100% oxygen, representing mild hypobaric hypoxia at 2438 m (8000 ft) and the benefit of supplementary oxygen, respectively. Oxygenation state was a statistically significant determinant of visual performance on all three visual parameters at mesopic, but not photopic, luminance. Mesopic sensitivity was greater with supplementary oxygen, but the magnitude of each hypoxic decrement was slight. Hypoxia elevated mesopic contrast acuity thresholds by approximately 4%; decreased mesopic dynamic contrast sensitivity by approximately 2 dB; and extended mean color ellipse axis length by approximately one CAD unit at mesopic luminance (that is, hypoxia decreased chromatic sensitivity). The results indicate that twilight vision may be susceptible to conditions of altered oxygenation at upper-to-mid mesopic luminance with relevance to contemporary night flying, including using night vision devices. Supplementary oxygen should be considered when optimal visual performance is mission-critical during flight above 2438 m (8000 ft) in dim light.

  18. Vision 20/20: Single photon counting x-ray detectors in medical imaging

    PubMed Central

    Taguchi, Katsuyuki; Iwanczyk, Jan S.

    2013-01-01

    Photon counting detectors (PCDs) with energy discrimination capabilities have been developed for medical x-ray computed tomography (CT) and x-ray (XR) imaging. Using detection mechanisms that are completely different from the current energy integrating detectors, and measuring the material information of the object to be imaged, these PCDs have the potential not only to improve current CT and XR images, for example through dose reduction, but also to open revolutionary novel applications such as molecular CT and XR imaging. The performance of PCDs is not flawless, however, and it seems extremely challenging to develop PCDs with close to ideal characteristics. In this paper, the authors offer their vision for the future of PCD-CT and PCD-XR with a review of the current status and predictions for (1) detector technologies, (2) imaging technologies, (3) system technologies, and (4) potential clinical benefits with PCDs. PMID:24089889

  19. Photometric Assessment of Night Sky Quality over Chaco Culture National Historical Park

    NASA Astrophysics Data System (ADS)

    Hung, Li-Wei; Duriscoe, Dan M.; White, Jeremy M.; Meadows, Bob; Anderson, Sharolyn J.

    2018-06-01

    The US National Park Service (NPS) characterizes night sky conditions over Chaco Culture National Historical Park using measurements in the park and satellite data. The park is located near the geographic center of the San Juan Basin of northwestern New Mexico, adjacent to the Four Corners states. In the park, we capture a series of night sky images in V-band using our mobile camera system on nine nights from 2001 to 2016 at four sites. We perform absolute photometric calibration and determine the image placement to obtain multiple 45-million-pixel mosaic images of the entire night sky. We also model the regional night sky conditions in and around the park based on 2016 VIIRS satellite data. The average zenith brightness is 21.5 mag/arcsec², and the whole sky is only ~16% brighter than natural conditions. The faintest stars visible to the naked eye have a magnitude of approximately 7.0, reaching the sensitivity limit of the human eye. The main impacts on Chaco’s night sky quality are the light domes from Albuquerque, Rio Rancho, Farmington, Bloomfield, Gallup, Santa Fe, Grants, and Crown Point. A few of these light domes exceed the natural brightness of the Milky Way. Additionally, glare sources from oil and gas development sites are visible along the north and east horizons. Overall, the night sky quality at Chaco Culture National Historical Park is very good. The park preserves to a large extent the natural illumination cycles, providing a refuge for crepuscular and nocturnal species. During clear and dark nights, visitors have an opportunity to see the Milky Way from nearly horizon to horizon, complete constellations, and faint astronomical objects and natural sources of light such as the Andromeda Galaxy, zodiacal light, and airglow.
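
    The "~16% brighter than natural" figure follows directly from the logarithmic magnitude scale; a short sketch (the 21.66 mag/arcsec² natural background level here is assumed purely for illustration, not taken from the paper):

```python
def brightness_ratio(measured_mag, natural_mag):
    """Surface-brightness ratio from a magnitude difference:
    smaller magnitudes are brighter, by a factor 10 ** (0.4 * dm)."""
    return 10 ** (0.4 * (natural_mag - measured_mag))

# A sky 0.16 mag brighter than natural is ~16% above the natural level.
print(f"{(brightness_ratio(21.50, 21.66) - 1) * 100:.0f}% above natural")
```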

  20. Calculation of day and night emittance values

    NASA Technical Reports Server (NTRS)

    Kahle, Anne B.

    1986-01-01

    In July 1983, the Thermal Infrared Multispectral Scanner (TIMS) was flown over Death Valley, California on both a midday and a predawn flight within a two-day period. The availability of calibrated digital data permitted the calculation of day and night surface temperature and surface spectral emittance. Image processing of the data included panorama correction and calibration to radiance using the on-board black bodies and the measured spectral response of each channel. Scene-dependent isolated-point noise due to bit drops was located by its relatively discontinuous values and replaced by the average of the surrounding data values. A method was developed to separate the spectral and temperature information contained in the TIMS data, and night and day data sets were processed. The TIMS is unique in allowing collection of both spectral emittance and thermal information in digital format with the same airborne scanner. For the first time it was possible to produce coregistered day and night emittance images of the same area. These data add to an understanding of the physical basis for the discrimination of differences in surface materials afforded by TIMS.
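
    Converting calibrated radiance to temperature rests on inverting the Planck function; a hedged single-band sketch (a monochromatic approximation at an assumed effective wavelength, not the actual TIMS calibration or the paper's emittance-separation method):

```python
import numpy as np

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck_radiance(temp_k, wavelength):
    """Blackbody spectral radiance (W m^-2 sr^-1 m^-1) at one wavelength (m)."""
    c1 = 2.0 * H * C**2 / wavelength**5
    c2 = H * C / (wavelength * KB)
    return c1 / np.expm1(c2 / temp_k)

def brightness_temperature(radiance, wavelength):
    """Invert the Planck function for a single-band radiance."""
    c1 = 2.0 * H * C**2 / wavelength**5
    c2 = H * C / (wavelength * KB)
    return c2 / np.log1p(c1 / radiance)

# Round trip at an assumed 11.2 micron effective band (illustrative only).
wl = 11.2e-6
print(brightness_temperature(planck_radiance(300.0, wl), wl))  # ~300.0 K
```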

  1. Dynamic Vision for Control

    DTIC Science & Technology

    2006-07-27

    The goal of this project was to develop analytical and computational tools to make vision a viable sensor for ... We have proposed the framework of stereoscopic segmentation, where multiple images of the same objects are jointly processed to extract geometry

  2. Night Sweats

    MedlinePlus

    Symptoms Night sweats By Mayo Clinic Staff Night sweats are repeated episodes of extreme perspiration that may soak your nightclothes or ... these episodes are usually not labeled as night sweats and typically aren't a sign of a ...

  3. Arsia Mons by Day and Night

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 22 June 2004 This pair of images shows part of Arsia Mons.

    Day/Night Infrared Pairs

    The image pairs presented focus on a single surface feature as seen in both the daytime and nighttime by the infrared THEMIS camera. The nighttime image (right) has been rotated 180 degrees to place north at the top.

    Infrared image interpretation

    Daytime: Infrared images taken during the daytime exhibit both the morphological and thermophysical properties of the surface of Mars. Morphologic details are visible due to the effect of sun-facing slopes receiving more energy than antisun-facing slopes. This creates a warm (bright) slope and cool (dark) slope appearance that mimics the light and shadows of a visible wavelength image. Thermophysical properties are seen in that dust heats up more quickly than rocks. Thus dusty areas are bright and rocky areas are dark.

    Nighttime: Infrared images taken during the nighttime exhibit only the thermophysical properties of the surface of Mars. The effect of sun-facing versus non-sun-facing energy dissipates quickly at night. Thermophysical effects dominate as different surfaces cool at different rates through the nighttime hours. Rocks cool slowly, and are therefore relatively bright at night (remember that rocks are dark during the day). Dust and other fine grained materials cool very quickly and are dark in nighttime infrared images.

    Image information: IR instrument. Latitude -19.6, Longitude 241.9 East (118.1 West). 100 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the

  4. Albor Tholus by Day and Night

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 21 June 2004 This pair of images shows part of Albor Tholus.

    Day/Night Infrared Pairs

    The image pairs presented focus on a single surface feature as seen in both the daytime and nighttime by the infrared THEMIS camera. The nighttime image (right) has been rotated 180 degrees to place north at the top.

    Infrared image interpretation

    Daytime: Infrared images taken during the daytime exhibit both the morphological and thermophysical properties of the surface of Mars. Morphologic details are visible due to the effect of sun-facing slopes receiving more energy than antisun-facing slopes. This creates a warm (bright) slope and cool (dark) slope appearance that mimics the light and shadows of a visible wavelength image. Thermophysical properties are seen in that dust heats up more quickly than rocks. Thus dusty areas are bright and rocky areas are dark.

    Nighttime: Infrared images taken during the nighttime exhibit only the thermophysical properties of the surface of Mars. The effect of sun-facing versus non-sun-facing energy dissipates quickly at night. Thermophysical effects dominate as different surfaces cool at different rates through the nighttime hours. Rocks cool slowly, and are therefore relatively bright at night (remember that rocks are dark during the day). Dust and other fine grained materials cool very quickly and are dark in nighttime infrared images.

    Image information: IR instrument. Latitude 17.6, Longitude 150.3 East (209.7 West). 100 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through

  5. Noctus Labyrinthus by Day and Night

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 25 June 2004 This pair of images shows part of Noctus Labyrinthus.

    Day/Night Infrared Pairs

    The image pairs presented focus on a single surface feature as seen in both the daytime and nighttime by the infrared THEMIS camera. The nighttime image (right) has been rotated 180 degrees to place north at the top.

    Infrared image interpretation

    Daytime: Infrared images taken during the daytime exhibit both the morphological and thermophysical properties of the surface of Mars. Morphologic details are visible due to the effect of sun-facing slopes receiving more energy than antisun-facing slopes. This creates a warm (bright) slope and cool (dark) slope appearance that mimics the light and shadows of a visible wavelength image. Thermophysical properties are seen in that dust heats up more quickly than rocks. Thus dusty areas are bright and rocky areas are dark.

    Nighttime: Infrared images taken during the nighttime exhibit only the thermophysical properties of the surface of Mars. The effect of sun-facing versus non-sun-facing energy dissipates quickly at night. Thermophysical effects dominate as different surfaces cool at different rates through the nighttime hours. Rocks cool slowly, and are therefore relatively bright at night (remember that rocks are dark during the day). Dust and other fine grained materials cool very quickly and are dark in nighttime infrared images.

    Image information: IR instrument. Latitude -9.6, Longitude 264.5 East (95.5 West). 100 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released

  6. Ius Chasma by Day and Night

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 18 June 2004 This pair of images shows part of Ius Chasma.

    Day/Night Infrared Pairs

    The image pairs presented focus on a single surface feature as seen in both the daytime and nighttime by the infrared THEMIS camera. The nighttime image (right) has been rotated 180 degrees to place north at the top.

    Infrared image interpretation

    Daytime: Infrared images taken during the daytime exhibit both the morphological and thermophysical properties of the surface of Mars. Morphologic details are visible due to the effect of sun-facing slopes receiving more energy than antisun-facing slopes. This creates a warm (bright) slope and cool (dark) slope appearance that mimics the light and shadows of a visible wavelength image. Thermophysical properties are seen in that dust heats up more quickly than rocks. Thus dusty areas are bright and rocky areas are dark.

    Nighttime: Infrared images taken during the nighttime exhibit only the thermophysical properties of the surface of Mars. The effect of sun-facing versus non-sun-facing energy dissipates quickly at night. Thermophysical effects dominate as different surfaces cool at different rates through the nighttime hours. Rocks cool slowly, and are therefore relatively bright at night (remember that rocks are dark during the day). Dust and other fine grained materials cool very quickly and are dark in nighttime infrared images.

    Image information: IR instrument. Latitude -1, Longitude 276 East (84 West). 100 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the

  7. Crater Ejecta by Day and Night

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 24 June 2004 This pair of images shows a crater and its ejecta.

    Day/Night Infrared Pairs

    The image pairs presented focus on a single surface feature as seen in both the daytime and nighttime by the infrared THEMIS camera. The nighttime image (right) has been rotated 180 degrees to place north at the top.

    Infrared image interpretation

    Daytime: Infrared images taken during the daytime exhibit both the morphological and thermophysical properties of the surface of Mars. Morphologic details are visible due to the effect of sun-facing slopes receiving more energy than antisun-facing slopes. This creates a warm (bright) slope and cool (dark) slope appearance that mimics the light and shadows of a visible wavelength image. Thermophysical properties are seen in that dust heats up more quickly than rocks. Thus dusty areas are bright and rocky areas are dark.

    Nighttime: Infrared images taken during the nighttime exhibit only the thermophysical properties of the surface of Mars. The effect of sun-facing versus non-sun-facing energy dissipates quickly at night. Thermophysical effects dominate as different surfaces cool at different rates through the nighttime hours. Rocks cool slowly, and are therefore relatively bright at night (remember that rocks are dark during the day). Dust and other fine grained materials cool very quickly and are dark in nighttime infrared images.

    Image information: IR instrument. Latitude -9, Longitude 164.2 East (195.8 West). 100 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through

  8. Gusev Crater by Day and Night

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 23 June 2004 This pair of images shows part of Gusev Crater.

    Day/Night Infrared Pairs

    The image pairs presented focus on a single surface feature as seen in both the daytime and nighttime by the infrared THEMIS camera. The nighttime image (right) has been rotated 180 degrees to place north at the top.

    Infrared image interpretation

    Daytime: Infrared images taken during the daytime exhibit both the morphological and thermophysical properties of the surface of Mars. Morphologic details are visible due to the effect of sun-facing slopes receiving more energy than antisun-facing slopes. This creates a warm (bright) slope and cool (dark) slope appearance that mimics the light and shadows of a visible wavelength image. Thermophysical properties are seen in that dust heats up more quickly than rocks. Thus dusty areas are bright and rocky areas are dark.

    Nighttime: Infrared images taken during the nighttime exhibit only the thermophysical properties of the surface of Mars. The effect of sun-facing versus non-sun-facing energy dissipates quickly at night. Thermophysical effects dominate as different surfaces cool at different rates through the nighttime hours. Rocks cool slowly, and are therefore relatively bright at night (remember that rocks are dark during the day). Dust and other fine grained materials cool very quickly and are dark in nighttime infrared images.

    Image information: IR instrument. Latitude -14.5, Longitude 175.5 East (184.5 West). 100 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through

  9. Meridiani Crater in Day and Night

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 14 June 2004 This pair of images shows crater ejecta in the Terra Meridiani region.

    Day/Night Infrared Pairs

    The image pairs presented focus on a single surface feature as seen in both the daytime and nighttime by the infrared THEMIS camera. The nighttime image (right) has been rotated 180 degrees to place north at the top.

    Infrared image interpretation

    Daytime: Infrared images taken during the daytime exhibit both the morphological and thermophysical properties of the surface of Mars. Morphologic details are visible due to the effect of sun-facing slopes receiving more energy than antisun-facing slopes. This creates a warm (bright) slope and cool (dark) slope appearance that mimics the light and shadows of a visible wavelength image. Thermophysical properties are seen in that dust heats up more quickly than rocks. Thus dusty areas are bright and rocky areas are dark.

    Nighttime: Infrared images taken during the nighttime exhibit only the thermophysical properties of the surface of Mars. The effect of sun-facing versus non-sun-facing energy dissipates quickly at night. Thermophysical effects dominate as different surfaces cool at different rates through the nighttime hours. Rocks cool slowly, and are therefore relatively bright at night (remember that rocks are dark during the day). Dust and other fine grained materials cool very quickly and are dark in nighttime infrared images.

    Image information: IR instrument. Latitude -1.6, Longitude 4.1 East (355.9 West). 100 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will

  10. Day And Night In Terra Meridiani

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 11 June 2004 This pair of images shows part of the Terra Meridiani region.

    Day/Night Infrared Pairs

    The image pairs presented focus on a single surface feature as seen in both the daytime and nighttime by the infrared THEMIS camera. The nighttime image (right) has been rotated 180 degrees to place north at the top.

    Infrared image interpretation

    Daytime: Infrared images taken during the daytime exhibit both the morphological and thermophysical properties of the surface of Mars. Morphologic details are visible due to the effect of sun-facing slopes receiving more energy than antisun-facing slopes. This creates a warm (bright) slope and cool (dark) slope appearance that mimics the light and shadows of a visible wavelength image. Thermophysical properties are seen in that dust heats up more quickly than rocks. Thus dusty areas are bright and rocky areas are dark.

    Nighttime: Infrared images taken during the nighttime exhibit only the thermophysical properties of the surface of Mars. The effect of sun-facing versus non-sun-facing energy dissipates quickly at night. Thermophysical effects dominate as different surfaces cool at different rates through the nighttime hours. Rocks cool slowly, and are therefore relatively bright at night (remember that rocks are dark during the day). Dust and other fine grained materials cool very quickly and are dark in nighttime infrared images.

    Image information: IR instrument. Latitude 1.3, Longitude 0.5 East (359.5 West). 100 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released

  11. Visions of our Planet's Atmosphere, Land & Oceans

    NASA Technical Reports Server (NTRS)

    Hasler, Arthur F.

    2002-01-01

    at night observed by the "night-vision" DMSP military satellite. The presentation will be made using the latest HDTV and video projection technology, driven from a laptop computer through an entirely digital path.

  12. Low Vision Enhancement System

    NASA Technical Reports Server (NTRS)

    1995-01-01

    NASA's Technology Transfer Office at Stennis Space Center worked with the Johns Hopkins Wilmer Eye Institute in Baltimore, Md., to incorporate software originally developed by NASA to process satellite images into the Low Vision Enhancement System (LVES). The LVES, referred to as 'ELVIS' by its users, is a portable image processing system that could make it possible to improve a person's vision by enhancing and altering images to compensate for impaired eyesight. The system consists of two orientation cameras, a zoom camera, and a video projection system. The headset and hand-held control weigh about two pounds each. Pictured is Jacob Webb, the first Mississippian to use the LVES.

  13. Endotracheal Intubation With and Without Night Vision Goggles in a Helicopter and Emergency Room Setting: A Manikin Study.

    PubMed

    Gellerfors, Mikael; Svensén, Christer; Linde, Joacim; Lossius, Hans Morten; Gryth, Dan

    2015-09-01

    Securing the airway by endotracheal intubation (ETI) is a key issue in prehospital critical care. Night vision goggles (NVG) are used by personnel operating in low-light environments. We examined the feasibility of anesthesiologist-performed ETI using NVG in a helicopter setting. Twelve anesthesiologists performed ETI on a manikin in an emergency room (ER) setting and two helicopter settings, with randomization to either rotary wing daylight (RW-D) or rotary wing in total darkness using binocular NVG (RW-NVG). The primary endpoint was intubation time. Secondary endpoints included success rate, Cormack-Lehane (CL) score, and subjective difficulty according to the Visual Analogue Scale (VAS). The median intubation time was shorter for the RW-D than for the RW-NVG setting (16.5 seconds vs. 30.0 seconds; p = 0.03). We found no difference in median intubation time between the ER and RW-D settings (16.8 seconds vs. 16.5 seconds; p = 0.91). For all scenarios, the success rate was 100%. CL and VAS varied between the ER setting (CL 1.8, VAS 2.8), RW-D setting (CL 2.0, VAS 3.0), and RW-NVG setting (CL 3.0, VAS 6.5). This study suggests that anesthesiologists can successfully and quickly perform ETI in a helicopter setting both in daylight and in darkness using binocular NVG, but with shorter intubation times in daylight. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.

  14. Computer vision in cell biology.

    PubMed

    Danuser, Gaudenz

    2011-11-23

    Computer vision refers to the theory and implementation of artificial systems that extract information from images to understand their content. Although computers are widely used by cell biologists for visualization and measurement, interpretation of image content, i.e., the selection of events worth observing and the definition of what they mean in terms of cellular mechanisms, is mostly left to human intuition. This Essay attempts to outline roles computer vision may play and should play in image-based studies of cellular life. Copyright © 2011 Elsevier Inc. All rights reserved.

  15. 640×512 pixel InGaAs FPAs for short-wave infrared and visible light imaging

    NASA Astrophysics Data System (ADS)

    Shao, Xiumei; Yang, Bo; Huang, Songlei; Wei, Yang; Li, Xue; Zhu, Xianliang; Li, Tao; Chen, Yu; Gong, Haimei

    2017-08-01

    The spectral irradiance of moonlight and airglow falls mainly in the wavelength region from the visible to the short-wave infrared (SWIR) band. Imaging over the wavelength range from visible to SWIR is of great significance for applications such as civil safety, night vision, and agricultural sorting. In this paper, 640×512 visible-SWIR InGaAs focal plane arrays (FPAs) were studied for night vision and SWIR imaging. A special epitaxial wafer structure with an etch-stop layer was designed and developed. Planar-type 640×512 InGaAs detector arrays were fabricated. The photosensitive arrays were bonded to the readout circuit through indium bumps by a flip-chip process. The InP substrate was then removed by mechanical thinning and chemical wet etching, so that visible irradiance can reach the InGaAs absorption layer and be detected. As a result, the detection spectrum of the InGaAs FPAs has been extended toward the visible, covering 0.5 μm to 1.7 μm. The quantum efficiency is approximately 15% at 0.5 μm, 30% at 0.7 μm, 50% at 0.8 μm, and 90% at 1.55 μm. The average peak detectivity is higher than 2×10^12 cm·Hz^(1/2)/W at room temperature with an integration time of 10 ms. The visible-SWIR InGaAs FPAs were applied to an imaging system for SWIR and visible light imaging.

  16. An Emphasis on Perception: Teaching Image Formation Using a Mechanistic Model of Vision.

    ERIC Educational Resources Information Center

    Allen, Sue; And Others

    An effective way to teach the concept of image is to give students a model of human vision which incorporates a simple mechanism of depth perception. In this study two almost identical versions of a curriculum in geometrical optics were created. One used a mechanistic, interpretive eye model, and in the other the eye was modeled as a passive,…

  17. Agnosic vision is like peripheral vision, which is limited by crowding.

    PubMed

    Strappini, Francesca; Pelli, Denis G; Di Pace, Enrico; Martelli, Marialuisa

    2017-04-01

    Visual agnosia is a neuropsychological impairment of visual object recognition despite near-normal acuity and visual fields. A century of research has provided only a rudimentary account of the functional damage underlying this deficit. We find that the object-recognition ability of agnosic patients viewing an object directly is like that of normally-sighted observers viewing it indirectly, with peripheral vision. Thus, agnosic vision is like peripheral vision. We obtained 14 visual-object-recognition tests that are commonly used for diagnosis of visual agnosia. Our "standard" normal observer took these tests at various eccentricities in his periphery. Analyzing the published data of 32 apperceptive agnosia patients and a group of 14 posterior cortical atrophy (PCA) patients on these tests, we find that each patient's pattern of object recognition deficits is well characterized by one number, the equivalent eccentricity at which our standard observer's peripheral vision is like the central vision of the agnosic patient. In other words, each agnosic patient's equivalent eccentricity is conserved across tests. Across patients, equivalent eccentricity ranges from 4 to 40 deg, which rates severity of the visual deficit. In normal peripheral vision, the required size to perceive a simple image (e.g., an isolated letter) is limited by acuity, and that for a complex image (e.g., a face or a word) is limited by crowding. In crowding, adjacent simple objects appear unrecognizably jumbled unless their spacing exceeds the crowding distance, which grows linearly with eccentricity. Besides conservation of equivalent eccentricity across object-recognition tests, we also find conservation, from eccentricity to agnosia, of the relative susceptibility of recognition of ten visual tests. These findings show that agnosic vision is like eccentric vision. Whence crowding? Peripheral vision, strabismic amblyopia, and possibly apperceptive agnosia are all limited by crowding, making it

  18. Crown-of-thorns starfish have true image forming vision.

    PubMed

    Petie, Ronald; Garm, Anders; Hall, Michael R

    2016-01-01

    Photoreceptors have evolved numerous times giving organisms the ability to detect light and respond to specific visual stimuli. Studies into the visual abilities of the Asteroidea (Echinodermata) have recently shown that species within this class have a more developed visual sense than previously thought, and it has been demonstrated that starfish use visual information for orientation within their habitat. Whereas image-forming eyes have been suggested for starfish, direct experimental proof of true spatial vision has not yet been obtained. The behavioural response of the coral-reef-inhabiting crown-of-thorns starfish (Acanthaster planci) was tested in controlled aquarium experiments using an array of stimuli to examine their visual performance. We presented starfish with various black-and-white shapes against a mid-intensity grey background, designed such that the animals would need to possess true spatial vision to detect these shapes. Starfish responded to black-and-white rectangles, but no directional response was found to black-and-white circles, despite equal areas of black and white. Additionally, we confirmed that starfish were attracted to black circles on a white background when the visual angle is larger than 14°. When changing the grey tone of the largest circle from black to white, we found responses to contrasts of 0.5 and up. The starfish were attracted to the dark areas of the visual stimuli and were found to be both attracted and repelled by the visual targets. For crown-of-thorns starfish, visual cues are essential for close-range orientation towards objects, such as coral boulders, in the wild. These visually guided behaviours can be replicated in aquarium conditions. Our observation that crown-of-thorns starfish respond to black-and-white shapes on a mid-intensity grey background is the first direct proof of true spatial vision in starfish and in the phylum Echinodermata.

  19. High-Speed Camera and High-Vision Camera Observations of TLEs from Jet Aircraft in Winter Japan and in Summer US

    NASA Astrophysics Data System (ADS)

    Sato, M.; Takahashi, Y.; Kudo, T.; Yanagi, Y.; Kobayashi, N.; Yamada, T.; Project, N.; Stenbaek-Nielsen, H. C.; McHarg, M. G.; Haaland, R. K.; Kammae, T.; Cummer, S. A.; Yair, Y.; Lyons, W. A.; Ahrns, J.; Yukman, P.; Warner, T. A.; Sonnenfeld, R. G.; Li, J.; Lu, G.

    2011-12-01

    The time evolution and spatial distributions of transient luminous events (TLEs) are the key parameters for identifying the relationship between TLEs and their parent lightning discharges, the roles of electromagnetic pulses (EMPs) emitted by horizontal and vertical lightning currents in the formation of TLEs, and the occurrence conditions and mechanisms of TLEs. Since the time scales of TLEs are typically less than a few milliseconds, new imaging techniques that enable us to capture images with a high time resolution of < 1 ms are needed. By courtesy of the "Cosmic Shore" project conducted by the Japan Broadcasting Corporation (NHK), we carried out optical observations using a high-speed image-intensified (II) CMOS camera and a high-vision three-CCD camera from a jet aircraft on November 28 and December 3, 2010 in winter Japan. The high-speed II-CMOS camera can capture images at 8,300 frames per second (fps), corresponding to a time resolution of 120 μs. The high-vision three-CCD camera can capture high-quality, true-color images of TLEs with a 1920x1080 pixel size at a frame rate of 30 fps. During the two observation flights, we succeeded in detecting 28 sprite events and 3 elves events in total. In response to this success, we conducted a combined aircraft and ground-based campaign of TLE observations over the High Plains in the summer US. We installed the same NHK high-speed and high-vision cameras in a jet aircraft. In the period from June 27 to July 10, 2011, we operated aircraft observations on 8 nights, succeeding in capturing TLE images for over a hundred events with the high-vision camera and in acquiring over 40 high-speed images simultaneously. At the presentation, we will introduce the outlines of the two aircraft campaigns, introduce the characteristics of the time evolution and spatial distributions of TLEs observed in winter Japan, and show the initial results of high

  20. The Polar Night Nitric Oxide Experiment

    NASA Image and Video Library

    2017-12-08

    The Polar Night Nitric Oxide or PolarNOx experiment from Virginia Tech is launched aboard a NASA Black Brant IX sounding rocket at 8:45 a.m. EST, Jan. 27, from the Poker Flat Research Range in Alaska. PolarNOx is measuring nitric oxide in the polar night sky. Nitric oxide in the polar night sky is created by auroras. Under appropriate conditions it can be transported to the stratosphere where it may destroy ozone resulting in possible changes in stratospheric temperature and wind and may even impact the circulation at Earth's surface. Credit: NASA/Wallops/Jamie Adkins

  1. Near real-time stereo vision system

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system comprises two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids in as little as 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention, with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
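
    A minimal sketch of the matching core described above, assuming rectified grayscale NumPy arrays: a band-pass (Laplacian) pyramid level is computed for each image, and disparities are chosen by windowed sum-of-squared-differences along the scanline. The SSD window stands in for the least-squares correlation, the Bayes-based confidence estimation is omitted, and all parameter values are illustrative:

        import numpy as np
        from scipy.ndimage import gaussian_filter, uniform_filter

        def laplacian_level(img, sigma=1.0, levels=2):
            """Return one band-pass (Laplacian) pyramid level of a grayscale image."""
            g = img.astype(float)
            band = None
            for _ in range(levels):
                low = gaussian_filter(g, sigma)
                band = g - low            # band-pass residual at this scale
                g = low[::2, ::2]         # decimate to form the next pyramid level
            return band

        def ssd_disparity(left, right, max_disp=16, win=5):
            """Windowed SSD matching along the scanline (border wrap-around ignored)."""
            disp = np.zeros(left.shape, dtype=int)
            best = np.full(left.shape, np.inf)
            for d in range(max_disp):
                shifted = np.roll(right, d, axis=1)              # candidate disparity d
                err = uniform_filter((left - shifted) ** 2, win)
                better = err < best
                disp[better], best[better] = d, err[better]
            return disp

        # disparity = ssd_disparity(laplacian_level(left_img), laplacian_level(right_img))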

  2. Information theory analysis of sensor-array imaging systems for computer vision

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.; Self, M. O.

    1983-01-01

    Information theory is used to assess the performance of sensor-array imaging systems, with emphasis on the performance obtained with image-plane signal processing. By electronically controlling the spatial response of the imaging system, as suggested by the mechanism of human vision, it is possible to trade off edge enhancement for sensitivity, increase dynamic range, and reduce data transmission. Computational results show that: signal information density varies little with large variations in the statistical properties of random radiance fields; most information (generally about 85 to 95 percent) is contained in the signal intensity transitions rather than levels; and performance is optimized when the OTF of the imaging system is nearly limited to the sampling passband, to minimize aliasing at the cost of blurring, and the SNR is very high, to permit the retrieval of small spatial detail from the extensively blurred signal. Shading the lens aperture transmittance to increase depth of field and using a regular hexagonal sensor array instead of a square lattice to decrease sensitivity to edge orientation also improve the signal information density, by up to about 30 percent at high SNRs.
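
    The figure of merit at work here can be illustrated with a toy calculation. The sketch below is a deliberate simplification that ignores the OTF and aliasing terms of the full analysis: it treats each sensor sample as a Gaussian channel, so information density is sampling density times the per-sample capacity:

        import numpy as np

        def info_density(snr, samples_per_mm2):
            """Toy 'signal information density': Gaussian-channel capacity per sample
            (0.5*log2(1 + SNR^2) bits, SNR as an amplitude ratio) times sampling
            density. A stand-in for the paper's figure of merit, which also models
            the OTF and aliasing."""
            return samples_per_mm2 * 0.5 * np.log2(1.0 + snr ** 2)

        # Raising the SNR from 10 to 100 roughly doubles the bits per sample,
        # mirroring the point that a very high SNR lets fine detail be retrieved
        # later from the extensively blurred signal.
        print(info_density(10, 400), info_density(100, 400))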

  3. Terahertz standoff imaging testbed design and performance for concealed weapon and device identification model development

    NASA Astrophysics Data System (ADS)

    Franck, Charmaine C.; Lee, Dave; Espinola, Richard L.; Murrill, Steven R.; Jacobs, Eddie L.; Griffin, Steve T.; Petkie, Douglas T.; Reynolds, Joe

    2007-04-01

    This paper describes the design and performance of the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate's (NVESD), active 0.640-THz imaging testbed, developed in support of the Defense Advanced Research Project Agency's (DARPA) Terahertz Imaging Focal-Plane Technology (TIFT) program. The laboratory measurements and standoff images were acquired during the development of a NVESD and Army Research Laboratory terahertz imaging performance model. The imaging testbed is based on a 12-inch-diameter Off-Axis Elliptical (OAE) mirror designed with one focal length at 1 m and the other at 10 m. This paper will describe the design considerations of the OAE-mirror, dual-capability, active imaging testbed, as well as measurement/imaging results used to further develop the model.

  4. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
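
    The distinctive architectural point in this abstract is a chain of image-processing modules that can be engaged repeatedly in a user-defined order, ending in a pixelation matched to the electrode array. A minimal sketch of that pattern follows; the module names, the 10×6 electrode grid, and the 0.5 sharpening gain are illustrative assumptions, not the AVS(2) implementation:

        import numpy as np

        def contrast_stretch(img):
            lo, hi = img.min(), img.max()
            return (img - lo) / max(hi - lo, 1e-9)

        def edge_enhance(img):
            # Crude Laplacian sharpening, preserving transitions (edges).
            pad = np.pad(img, 1, mode='edge')
            lap = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                   pad[1:-1, :-2] + pad[1:-1, 2:] - 4 * img)
            return np.clip(img - 0.5 * lap, 0, 1)

        def pixelate(img, rows=10, cols=6):
            # Block-average down to the electrode-array dimensions.
            h, w = img.shape
            return img[:h // rows * rows, :w // cols * cols] \
                .reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3))

        # Modules chained in a user-defined order, possibly repeatedly:
        pipeline = [contrast_stretch, edge_enhance, contrast_stretch,
                    lambda f: pixelate(f, 10, 6)]
        frame = np.random.rand(240, 180)          # stand-in for one video frame
        for module in pipeline:
            frame = module(frame)
        print(frame.shape)                        # (10, 6): one value per electrode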

  5. Reinforcement learning in computer vision

    NASA Astrophysics Data System (ADS)

    Bernstein, A. V.; Burnaev, E. V.

    2018-04-01

    Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition, and tracking. In many applications, complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving corresponding computer vision tasks. Solutions of these tasks are used for making decisions about possible future actions. It is not surprising that, when solving computer vision tasks, we should take into account special aspects of their subsequent application in model-based predictive control. Reinforcement learning is a modern machine learning technology in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for solving applied tasks such as processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes the reinforcement learning technology and its use for solving computer vision problems.
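
    Since the abstract's central idea is learning through interaction with the environment, a minimal tabular Q-learning loop makes it concrete. The toy task (walk right along five states to a goal) is invented for illustration and has nothing to do with any particular vision system:

        import numpy as np

        # Tabular Q-learning on a toy task: five states in a row, actions left/right,
        # reward 1 on reaching the rightmost state. Purely illustrative.
        n_states, n_actions = 5, 2
        Q = np.zeros((n_states, n_actions))
        alpha, gamma, eps = 0.5, 0.9, 0.2
        rng = np.random.default_rng(0)

        for episode in range(200):
            s = 0
            while s != n_states - 1:
                explore = rng.random() < eps or Q[s, 0] == Q[s, 1]  # break ties randomly
                a = int(rng.integers(n_actions)) if explore else int(Q[s].argmax())
                s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
                r = 1.0 if s2 == n_states - 1 else 0.0
                # Core update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
                Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
                s = s2

        print(Q.argmax(axis=1))  # learned policy: action 1 (right) in every non-terminal state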

  6. Image Processing Strategies Based on a Visual Saliency Model for Object Recognition Under Simulated Prosthetic Vision.

    PubMed

    Wang, Jing; Li, Heng; Fu, Weizhen; Chen, Yao; Li, Liming; Lyu, Qing; Han, Tingting; Chai, Xinyu

    2016-01-01

    Retinal prostheses have the potential to restore partial vision. Object recognition in scenes of daily life is one of the essential tasks for implant wearers. Because wearers are still limited by the low-resolution visual percepts provided by retinal prostheses, it is important to investigate and apply image processing methods to convey more useful visual information to them. We proposed two image processing strategies based on Itti's visual saliency map, region of interest (ROI) extraction, and image segmentation. Itti's saliency model generated a saliency map from the original image, in which salient regions were grouped into an ROI by fuzzy c-means clustering. GrabCut then generated a proto-object from the ROI-labeled image, which was recombined with the background and enhanced in two ways: 8-4 separated pixelization (8-4 SP) and background edge extraction (BEE). Results showed that both 8-4 SP and BEE had significantly higher recognition accuracy in comparison with direct pixelization (DP). Each saliency-based image processing strategy was subject to the performance of image segmentation. Under good and perfect segmentation conditions, BEE and 8-4 SP obtained noticeably higher recognition accuracy than DP, and under bad segmentation conditions, only BEE boosted the performance. The application of saliency-based image processing strategies was verified to be beneficial to object recognition in daily scenes under simulated prosthetic vision. They are expected to help the development of the image processing module for future retinal prostheses, and thus provide more benefit for the patients. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
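
    A rough sketch of the ROI-then-segmentation-then-pixelization flow can be assembled from stock OpenCV pieces. Note the substitutions: spectral-residual saliency stands in for Itti's model, Otsu thresholding for fuzzy c-means clustering, and the 32×32 percept size is arbitrary; 'scene.jpg' is a hypothetical input, and cv2.saliency requires the opencv-contrib build:

        import cv2
        import numpy as np

        img = cv2.imread('scene.jpg')                                    # hypothetical input
        saliency = cv2.saliency.StaticSaliencySpectralResidual_create()  # opencv-contrib
        ok, smap = saliency.computeSaliency(img)
        smap = (smap * 255).astype(np.uint8)

        # Threshold the saliency map into a rough ROI (Otsu, standing in for fuzzy c-means).
        _, roi = cv2.threshold(smap, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # GrabCut refines the ROI into a proto-object; assume the image border is background.
        mask = np.where(roi > 0, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
        mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = cv2.GC_BGD
        bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
        cv2.grabCut(img, mask, None, bgd, fgd, 3, cv2.GC_INIT_WITH_MASK)
        obj = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)

        # Direct pixelization (DP) of the extracted object to a low-resolution percept.
        percept = cv2.resize(img * obj[..., None], (32, 32), interpolation=cv2.INTER_AREA)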

  7. A Multiscale Vision Model and Applications to Astronomical Image and Data Analyses

    NASA Astrophysics Data System (ADS)

    Bijaoui, A.; Slezak, E.; Vandame, B.

    Much research has been carried out on the automated identification of astrophysical sources and on their relevant measurements. Several vision models have been developed for this task, their use depending on the image content. We have developed a multiscale vision model (MVM) [BR95] well suited to analyzing complex structures such as interstellar clouds, galaxies, or clusters of galaxies. Our model is based on a redundant wavelet transform. For each scale we detect significant wavelet coefficients by applying a decision rule based on their probability density functions (PDF) under the hypothesis of a uniform distribution. In the case of Poisson noise, this PDF can be determined from the autoconvolution of the wavelet function histogram [SLB93]. We may also apply Anscombe's transform, scale by scale, in order to take into account the integrated number of events at each scale [FSB98]. Our aim is to compute an image of all detected structural features. MVM allows us to build oriented trees from the neighbourhoods of significant wavelet coefficients. Each tree is also divided into subtrees, taking into account the maxima along the scale axis. This leads to identifying objects in scale space, and then to restoring their images by classical inverse methods. This model works only if the sampling is correct at each scale. That is generally not the case for orthogonal wavelets, so we apply the so-called à trous algorithm [BSM94] or a specific pyramidal one [RBV98]. This makes it possible to extract superimposed objects of different sizes and gives for each of them a separate image, from which we can obtain position, flux, and pattern parameters. We have applied these methods to different kinds of images: photographic plates, CCD frames, and X-ray images. We have only to change the statistical rule for extracting significant coefficients to adapt the model from one image class to another. We have also applied this model to extract clusters
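
    The redundant transform at the heart of such a model can be sketched compactly. Below is the standard à trous decomposition with the B3-spline kernel, plus a k-sigma significance test as a simplified, Gaussian-noise stand-in for the PDF-based decision rule; the tree building and reconstruction stages are omitted:

        import numpy as np
        from scipy.ndimage import convolve1d

        B3 = np.array([1, 4, 6, 4, 1]) / 16.0  # B3-spline smoothing kernel

        def a_trous(img, n_scales=4):
            """'A trous' wavelet transform: each plane is the difference between
            successive smoothings with an increasingly dilated B3-spline kernel."""
            planes, smooth = [], img.astype(float)
            for j in range(n_scales):
                k = np.zeros(4 * 2 ** j + 1)
                k[::2 ** j] = B3                  # dilate the kernel by inserting holes
                s = convolve1d(convolve1d(smooth, k, axis=0, mode='reflect'),
                               k, axis=1, mode='reflect')
                planes.append(smooth - s)         # wavelet plane at scale j
                smooth = s
            return planes, smooth                 # sum(planes) + smooth == img

        def significant(plane, k=3.0):
            """k-sigma detection of significant coefficients (a Gaussian-noise
            stand-in for the model's PDF-based decision rule)."""
            return np.abs(plane) > k * plane.std()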

  8. A computer vision system for diagnosing scoliosis using moiré images.

    PubMed

    Batouche, M; Benlamri, R; Kholladi, M K

    1996-07-01

    For young people, scoliosis deformities are an evolving process which must be detected and treated as early as possible. The moiré technique is simple, inexpensive, non-invasive, and especially convenient for detecting spinal deformations. Doctors make their diagnosis by analysing the symmetry of the fringes obtained by such techniques. In this paper, we present a computer vision system to help diagnose spinal deformations using noisy moiré images of the human back. The approach adopted in this paper consists of extracting fringe contours from moiré images, then localizing anatomical features (the spinal column, lumbar hollow, and shoulder blades) which are crucial for the 3D surface generation carried out using Mota's relaxation operator. Finally, rules furnished by doctors are used to derive the kind of spinal deformation and to yield the diagnosis. The proposed system has been tested on a set of noisy moiré images, and the experimental results have shown its robustness and reliability for the recognition of most scoliosis deformities.

  9. An imaging system based on laser optical feedback for fog vision applications

    NASA Astrophysics Data System (ADS)

    Belin, E.; Boucher, V.

    2008-08-01

    The Laboratoire Régional des Ponts et Chaussées d'Angers (LRPC of Angers) is currently studying the feasibility of applying an optical technique based on the principle of laser optical feedback to long-distance fog vision. The optical feedback setup allows the creation of images of road signs. To create artificial fog conditions we used a vibrating cell that produces a micro-spray of water according to the principle of acoustic cavitation. To scale the sensitivity of the system under reproducible conditions we also used optical densities linked to first-sight visibility distances. The current system produces, in a few seconds, 200 × 200 pixel images of a road sign seen through dense artificial fog.

  10. Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants.

    PubMed

    Navarro, Pedro J; Pérez, Fernando; Weiss, Julia; Egea-Cortines, Marcos

    2016-05-05

    Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple. However, data handling and analysis are not as developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to solve automatic phenotype data analysis for plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions and trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR images. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully but may require different ML algorithms for segmentation.
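
    The pixel-classification step such a system needs can be sketched with scikit-learn in a few lines. Everything here is synthetic and illustrative: the colours, sample counts, and the choice of kNN with k=5 are assumptions, not the paper's configuration:

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        # Toy stand-in for the segmentation step: classify each pixel as
        # plant / background from its colour, after training on labelled pixels.
        rng = np.random.default_rng(1)
        plant = rng.normal([40, 120, 40], 15, (200, 3))    # greenish training pixels
        backgr = rng.normal([90, 80, 70], 15, (200, 3))    # soil-coloured training pixels
        X = np.vstack([plant, backgr])
        y = np.repeat([1, 0], 200)

        clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

        image = rng.normal([60, 100, 55], 30, (64, 64, 3))  # fake RGB image
        mask = clf.predict(image.reshape(-1, 3)).reshape(64, 64)
        print('plant fraction:', mask.mean())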

  11. Simplified Night Sky Display System

    NASA Technical Reports Server (NTRS)

    Castellano, Timothy P.

    2010-01-01

    A document describes a simple night sky display system that is portable, lightweight, and includes, at most, four components in its simplest configuration. The total volume of this system is no more than 10^6 cm^3 in a disassembled state, and weighs no more than 20 kilograms. The four basic components are a computer, a projector, a spherical light-reflecting first surface and mount, and a spherical second surface for display. The computer has temporary or permanent memory that contains at least one signal representing one or more images of a portion of the sky when viewed from an arbitrary position, and at a selected time. The first surface reflector is spherical and receives and reflects the image from the projector onto the second surface, which is shaped like a hemisphere. This system may be used to simulate selected portions of the night sky, preserving the appearance and kinesthetic sense of the celestial sphere surrounding the Earth or any other point in space. These points will then show motions of planets, stars, galaxies, nebulae, and comets that are visible from that position. The images may be motionless, or move with the passage of time. The array of images presented, and vantage points in space, are limited only by the computer software that is available, or can be developed. An optional approach is to have the screen (second surface) self-inflate by means of gas within the enclosed volume, and then self-regulate that gas in order to support itself without any other mechanical support.

  12. Martian Highlands at Night in Infrared

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This nighttime temperature image from the camera system on NASA's Mars Odyssey spacecraft shows the ancient, heavily cratered surface of the highlands between Isidis and Elysium Planitia. The image is centered near 9 degrees north latitude, 109 degrees east longitude, and covers an area approximately 32 kilometers (20 miles) wide by 120 kilometers (75 miles) long. The bright 'splashes' extending outward from the three large craters are the remnants of the rocky material thrown out when the impact occurred. The nighttime temperature differences are due primarily to differences in the abundance of rocky materials that retain their heat at night and stay relatively warm. Fine-grained dust and sand cool off more rapidly at night. The circular rims of the craters in this region are warm at night, showing that rocks are still present on the steep walls inside the craters. The 'splash' ejecta patterns are also warmer than their surroundings, and are covered by material that was blasted out when the craters formed. The temperatures in this scene vary from approximately -105 degrees Celsius (-157 degrees Fahrenheit) (darkest) to -75 degrees Celsius (-103 degrees Fahrenheit) (lightest). This image was acquired using the instrument's infrared Band 9, centered at 12.6 micrometers. North is toward the left in this image.

    The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the 2001 Mars Odyssey mission for NASA's Office of Space Science in Washington, D.C. Investigators at Arizona State University in Tempe, the University of Arizona in Tucson and NASA's Johnson Space Center, Houston, operate the science instruments. Additional science partners are located at the Russian Aviation and Space Agency and at Los Alamos National Laboratories, New Mexico. Lockheed Martin Astronautics, Denver, is the prime contractor for the project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin

  13. Intelligent imaging systems for automotive applications

    NASA Astrophysics Data System (ADS)

    Thompson, Chris; Huang, Yingping; Fu, Shan

    2004-03-01

    In common with many other application areas, visual signals are becoming an increasingly important information source for many automotive applications. For several years CCD cameras have been used as research tools for a range of automotive applications. Infrared cameras, RADAR, and LIDAR are other types of imaging sensors that have also been widely investigated for use in cars. This paper describes work in this field performed in C2VIP over the last decade, starting with Night Vision Systems and looking at various other Advanced Driver Assistance Systems. Emerging from this experience, we make the following observations, which are crucial for "intelligent" imaging systems: 1. Careful arrangement of the sensor array. 2. Dynamic self-calibration. 3. Networking and processing. 4. Fusion with other imaging sensors, both at the image level and the feature level, which provides much more flexibility and reliability in complex situations. We will discuss how these problems can be addressed and what the outstanding issues are.

  14. Night Sky Brightness at San Pedro Martir Observatory

    NASA Astrophysics Data System (ADS)

    Plauchu-Frayn, I.; Richer, M. G.; Colorado, E.; Herrera, J.; Córdova, A.; Ceseña, U.; Ávila, F.

    2017-03-01

    We present optical UBVRI zenith night sky brightness measurements collected on 18 nights during 2013 to 2016, and SQM measurements obtained daily over 20 months during 2014 to 2016, at the Observatorio Astronómico Nacional on the Sierra San Pedro Mártir (OAN-SPM) in México. The UBVRI data are based upon CCD images obtained with the 0.84 m and 2.12 m telescopes, while the SQM data are obtained with a high-sensitivity, low-cost photometer. The typical moonless night sky brightness at zenith, averaged over the whole period and corrected for zodiacal light, is U = 22.68, B = 23.10, V = 21.84, R = 21.04, I = 19.36, and SQM = 21.88 mag arcsec^-2. We find no seasonal variation of the night sky brightness measured with the SQM. The typical night sky brightness values found at OAN-SPM are similar to those reported for other astronomical dark sites at a similar phase of the solar cycle. We find a trend of decreasing night sky brightness with decreasing solar activity during the period of the observations. This trend implies that the sky has become darker by ΔU = 0.7, ΔB = 0.5, ΔV = 0.3, ΔR = 0.5 mag arcsec^-2 since early 2014 due to the present solar cycle.
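
    Surface-brightness numbers like these come from aperture photometry on the CCD frames; the standard conversion from mean sky counts to mag arcsec^-2 is a one-liner. The zero point, exposure time, and pixel scale below are hypothetical values for illustration:

        import numpy as np

        def sky_brightness(counts_per_pixel, exptime_s, pixel_scale_arcsec, zero_point):
            """Surface brightness in mag/arcsec^2 from mean sky counts.
            zero_point is the magnitude giving 1 count/s (instrument-specific)."""
            flux_per_arcsec2 = counts_per_pixel / exptime_s / pixel_scale_arcsec ** 2
            return zero_point - 2.5 * np.log10(flux_per_arcsec2)

        # e.g. 1200 counts/pixel of sky in 300 s at 0.4"/pixel with ZP = 25.0:
        print(round(sky_brightness(1200, 300, 0.4, 25.0), 2))  # ~21.5 mag/arcsec^2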

  15. Night Vision

    NASA Astrophysics Data System (ADS)

    Rowan-Robinson, Michael

    2013-05-01

    Preface; 1. Introduction; 2. William Herschel opens up the invisible universe; 3. 1800-1950: slow progress - the moon, planets, bright stars, and the discovery of interstellar dust; 4. Dying stars shrouded in dust and stars being born: the emergence of infrared astronomy in the 60s and 70s; 5. Birth of far infrared and submillimetre astronomy: clouds of dust and molecules in our Galaxy; 6. The cosmic microwave background, echo of the Big Bang; 7. The Infrared Astronomical Satellite and the opening up of extragalactic infrared astronomy: starbursts and active galactic nuclei; 8. The Cosmic Background Explorer and the ripples, the Wilkinson Microwave Anisotropy Explorer, and dark energy; 9. Giant ground-based infrared and submillimetre telescopes; 10. The Infrared Space Observatory and the Spitzer Space Telescope: the star-formation history of the universe and infrared galaxy populations; 11. Our dusty Solar System, debris disks and the search for exoplanets; 12. The future: pioneering space missions and giant ground-based telescopes; Notes; Credits for illustrations; Further reading; Bibliography; Glossary; Index of names; Index.

  16. Direct Imaging of Stellar Surfaces: Results from the Stellar Imager (SI) Vision Mission Study

    NASA Technical Reports Server (NTRS)

    Carpenter, Kenneth; Schrijver, Carolus; Karovska, Margarita

    2006-01-01

    The Stellar Imager (SI) is a UV-Optical, Space-Based Interferometer designed to enable 0.1 milli-arcsecond (mas) spectral imaging of stellar surfaces and stellar interiors (via asteroseismology) and of the Universe in general. SI is identified as a "Flagship and Landmark Discovery Mission'' in the 2005 Sun Solar System Connection (SSSC) Roadmap and as a candidate for a "Pathways to Life Observatory'' in the Exploration of the Universe Division (EUD) Roadmap (May, 2005). The ultra-sharp images of the Stellar Imager will revolutionize our view of many dynamic astrophysical processes: The 0.1 mas resolution of this deep-space telescope will transform point sources into extended sources, and snapshots into evolving views. SI's science focuses on the role of magnetism in the Universe, particularly on magnetic activity on the surfaces of stars like the Sun. SI's prime goal is to enable long-term forecasting of solar activity and the space weather that it drives in support of the Living With a Star program in the Exploration Era. SI will also revolutionize our understanding of the formation of planetary systems, of the habitability and climatology of distant planets, and of many magneto-hydrodynamically controlled processes in the Universe. In this paper we will discuss the results of the SI Vision Mission Study, elaborating on the science goals of the SI Mission and a mission architecture that could meet those goals.

  17. Functional brain imaging of a complex navigation task following one night of total sleep deprivation

    NASA Technical Reports Server (NTRS)

    Strangman, Gary; Thompson, John H.; Strauss, Monica M.; Marshburn, Thomas H.; Sutton, Jeffrey P.

    2006-01-01

    Study Objectives: To assess the cerebral effects associated with sleep deprivation in a simulation of a complex, real-world, high-risk task. Design and Interventions: A two-week, repeated measures, cross-over experimental protocol, with counterbalanced orders of normal sleep (NS) and total sleep deprivation (TSD). Setting: Each subject underwent functional magnetic resonance imaging (fMRI) while performing a dual-joystick, 3D sensorimotor navigation task (simulated orbital docking). Scanning was performed twice per subject, once following a night of normal sleep (NS), and once following a single night of total sleep deprivation (TSD). Five runs (eight 24s docking trials each) were performed during each scanning session. Participants: Six healthy, young, right-handed volunteers (2 women; mean age 20) participated. Measurements and Results: Behavioral performance on multiple measures was comparable in the two sleep conditions. Neuroimaging results within sleep conditions revealed similar locations of peak activity for NS and TSD, including left sensorimotor cortex, left precuneus (BA 7), and right visual areas (BA 18/19). However, cerebral activation following TSD was substantially larger and exhibited higher amplitude modulations from baseline. When directly comparing NS and TSD, most regions exhibited TSD>NS activity, including multiple prefrontal cortical areas (BA 8/9,44/45,47), lateral parieto-occipital areas (BA 19/39, 40), superior temporal cortex (BA 22), and bilateral thalamus and amygdala. Only left parietal cortex (BA 7) demonstrated NS>TSD activity. Conclusions: The large network of cerebral differences between the two conditions, even with comparable behavioral performance, suggests the possibility of detecting TSD-induced stress via functional brain imaging techniques on complex tasks before stress-induced failures.

  18. Low vision goggles: optical design studies

    NASA Astrophysics Data System (ADS)

    Levy, Ofer; Apter, Boris; Efron, Uzi

    2006-08-01

    Low vision (LV) due to age-related macular degeneration (AMD), glaucoma, or retinitis pigmentosa (RP) is a growing problem, which will affect more than 15 million people in the U.S. alone in 2010. Low Vision Aid Goggles (LVG) have been under development at Ben-Gurion University and the Holon Institute of Technology. The device is based on a unique Image Transceiver Device (ITD), combining the functions of imaging and display in a single chip. Using the ITD-based goggles, specifically designed for the visually impaired, our aim is to develop a head-mounted device that will capture the ambient scenery, perform the necessary image enhancement and processing, and re-direct the image to the healthy part of the patient's retina. This design methodology will allow the goggles to be mobile, multi-task, and environment-adaptive. In this paper we present the optical design considerations of the goggles, including a preliminary performance analysis. Common vision deficiencies of LV patients are usually divided into two main categories: peripheral vision loss (PVL) and central vision loss (CVL), each requiring a different goggles design. A set of design principles has been defined for each category. Four main optical designs are presented and compared according to these design principles. Each design is presented in two main optical configurations: a see-through system and a video imaging system. The use of full-color ITD-based goggles is also discussed.

  19. Image understanding systems based on the unifying representation of perceptual and conceptual information and the solution of mid-level and high-level vision problems

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2001-10-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on such principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks. The ability of the human brain to emulate similar graph/network models has been found. That means a very important paradigm shift in our knowledge about the brain, from neural networks to cortical software. Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes the region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes like clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze by higher-level knowledge structures. Higher-level vision phenomena like shape from shading and occlusion are results of such analysis. This approach gives the opportunity not only to explain frequently unexplainable results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the 'what' and 'where' visual pathways. Such systems can open new horizons for the robotic and computer vision industries.

  20. Polarization Imaging and Insect Vision

    ERIC Educational Resources Information Center

    Green, Adam S.; Ohmann, Paul R.; Leininger, Nick E.; Kavanaugh, James A.

    2010-01-01

    For several years we have included discussions about insect vision in the optics units of our introductory physics courses. This topic is a natural extension of demonstrations involving Brewster's reflection and Rayleigh scattering of polarized light because many insects heavily rely on optical polarization for navigation and communication.…

  1. Advances in real-time millimeter-wave imaging radiometers for avionic synthetic vision

    NASA Astrophysics Data System (ADS)

    Lovberg, John A.; Chou, Ri-Chee; Martin, Christopher A.; Galliano, Joseph A., Jr.

    1995-06-01

    Millimeter-wave imaging has advantages over conventional visible or infrared imaging for many applications because millimeter-wave signals can travel through fog, snow, dust, and clouds with much less attenuation than infrared or visible light waves. Additionally, passive imaging systems avoid many problems associated with active radar imaging systems, such as radar clutter, glint, and multi-path return. ThermoTrex Corporation previously reported on its development of a passive imaging radiometer that uses an array of frequency-scanned antennas coupled to a multichannel acousto-optic spectrum analyzer (Bragg cell) to form visible images of a scene through the acquisition of thermal blackbody radiation in the millimeter-wave spectrum. The output from the Bragg cell is imaged by a standard video camera and passed to a computer for normalization and display at real-time frame rates. An application of this system is its incorporation as part of an enhanced vision system to provide pilots with a synthetic view of a runway in fog and during other adverse weather conditions. Ongoing improvements to a 94 GHz imaging system and examples of recent images taken with this system will be presented. Additionally, the development of dielectric antennas and an electro-optic-based processor for improved system performance, and the development of an 'ultra-compact' 220 GHz imaging system will be discussed.

  2. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    PubMed

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts: one in which the passive stereo vision helps the active vision, and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method, in which image regions between laser patterns are matched pixel by pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
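
    The dynamic-programming matching step mentioned above can be illustrated with the classic scanline formulation: align two intensity rows while allowing occlusions, then backtrack for the matched pairs. This is a generic textbook version with a made-up occlusion cost, not the authors' exact fusion formulation:

        import numpy as np

        def dp_scanline_match(left, right, occlusion_cost=5.0):
            """Dynamic-programming scanline matching: align two 1-D intensity
            rows, allowing occlusions, and return matched index pairs."""
            n, m = len(left), len(right)
            C = np.zeros((n + 1, m + 1))
            C[:, 0] = np.arange(n + 1) * occlusion_cost
            C[0, :] = np.arange(m + 1) * occlusion_cost
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    match = C[i - 1, j - 1] + (left[i - 1] - right[j - 1]) ** 2
                    C[i, j] = min(match, C[i - 1, j] + occlusion_cost,
                                  C[i, j - 1] + occlusion_cost)
            # Backtrack to recover the matches.
            pairs, i, j = [], n, m
            while i > 0 and j > 0:
                if C[i, j] == C[i - 1, j - 1] + (left[i - 1] - right[j - 1]) ** 2:
                    pairs.append((i - 1, j - 1)); i, j = i - 1, j - 1
                elif C[i, j] == C[i - 1, j] + occlusion_cost:
                    i -= 1
                else:
                    j -= 1
            return pairs[::-1]

        row = np.array([1., 5., 9., 5., 1.])
        print(dp_scanline_match(row, np.roll(row, 1)))  # matches offset by one pixel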

  3. Nursing care at night: an evaluation using the Night Nursing Care Instrument.

    PubMed

    Oléni, Magnus; Johansson, Peter; Fridlund, Bengt

    2004-07-01

    Night nurses carry overall nursing responsibility for approximately half the time that patients spend in hospital. However, there is a paucity of literature that focuses on nursing care provided at night. The aim of this study was to evaluate nursing care provided at night from the perspective of both nurses and patients. The study, which had an evaluative and a comparative design, was carried out using the Night Nursing Care Instrument at a hospital in southern Sweden. Nurses (n = 178) on night duty were consecutively selected, while the patients (n = 356) were selected by convenience sampling. The results showed a statistically significant difference between nurses' assessments and patients' perceptions of the nursing care provided at night in nursing interventions (P < 0.0001). In the areas of medical interventions and evaluation, no statistically significant differences were found between nurses and patients. For eight of 11 items, patients reported that they were satisfied (> or =80%) with the nursing care provided at night. These findings suggest that night nurses need to improve their ability to assess patients' needs for nursing care at night. A first step in this direction is for them to become aware of how patients perceive night nursing. As a second step, nurses need to increase their knowledge of which nursing actions promote patients' rest at night.

  4. Reliable vision-guided grasping

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1992-01-01

    Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other hand-in-eye visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent' and increasingly fast. Speed is achieved primarily by information reduction. This reduction exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image. Therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides for robust error recovery in that when a lower-level routine fails, the next-higher routine is called, and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.

  5. Visual Problems in Night Operations (Problemes de Vision dans les Operations de Nuit)

    DTIC Science & Technology

    1992-05-01

    …devices appeared in the early 1980s. For the first time, true night war-fighting capability has at last become a reality (as demonstrated in the…). …generally a change in the level of aspiration in the task outcome; for example, Spérandio (1980) shows… 4. MULTI-SENSOR SYSTEMS AND MECHANISMS… It is therefore necessary to identify what actually remains feasible at night in the… Spérandio, J.C. (1980). La psychologie en ergonomie. PUF, Le Psychologue, Paris.

  6. A new perspective on life-saving procedures in a battlefield setting: Emergency cricothyroidotomy, needle thoracostomy, and chest tube thoracostomy with night vision goggles.

    PubMed

    Bilge, Sedat; Aydın, Attila; Bilge, Meltem; Aydın, Cemile; Çevik, Erdem; Eryılmaz, Mehmet

    2017-11-01

    In patients with multiple and serious traumas, early application of life-saving procedures is related to improved survival. We experimentally determined the feasibility of life-saving interventions performed with the aid of night vision goggles (NVG) in a nighttime combat scenario. Chest tube thoracostomy (CTT), emergency cricothyroidotomy (EC), and needle thoracostomy (NT) interventions were performed by 10 combatant medical staff. The success and duration of the interventions were explored in the study. Procedures were performed on previously prepared manikins/models in a bright room and in a dark room with the aid of NVG. Operators graded the ease of the interventions. All interventions were successful. Operators stated that both CTT and EC interventions were more difficult in the dark than in daytime (p<0.05). No significant difference was observed in the difficulty of the NT interventions. No significant difference was observed in the completion times of interventions between the daytime and dark scenarios. Operators who use NVGs should be aware that they can perform their tactical and medical activities without taking off the NVGs and without the requirement of an extra light source.

  7. Augmented reality with image registration, vision correction and sunlight readability via liquid crystal devices.

    PubMed

    Wang, Yu-Jen; Chen, Po-Ju; Liang, Xiao; Lin, Yi-Hsin

    2017-03-27

    Augmented reality (AR), which uses computer-projected information to augment our senses, has an important impact on human life, especially for elderly people. However, there are three major challenges regarding the optical system in an AR system: registration, vision correction, and readability under strong ambient light. Here, we solve the three challenges simultaneously for the first time using two liquid crystal (LC) lenses and a polarizer-free attenuator integrated into an optical-see-through AR system. One of the LC lenses is used to electrically adjust the position of the projected virtual image, which is the so-called registration. The other LC lens, with a larger aperture and polarization-independent characteristics, is in charge of vision correction, such as for myopia and presbyopia. The linearity of the lens powers of the two LC lenses is also discussed. The readability of virtual images under strong ambient light is achieved by the electrically switchable transmittance of the LC attenuator, originating from light scattering and light absorption. The concept demonstrated in this paper could be further extended to other electro-optical devices as long as the devices exhibit the capability of phase modulation and amplitude modulation.

  8. High resolution imaging of the Venus night side using a Rockwell 128x128 HgCdTe array

    NASA Technical Reports Server (NTRS)

    Hodapp, K.-W.; Sinton, W.; Ragent, B.; Allen, D.

    1989-01-01

    The University of Hawaii operates an infrared camera with a 128x128 HgCdTe detector array on loan from JPL's High Resolution Imaging Spectrometer (HIRIS) project. The characteristics of this camera system are discussed. The infrared camera was used to obtain images of the night side of Venus prior to and after inferior conjunction in 1988. The images confirm Allen and Crawford's (1984) discovery of bright features on the dark hemisphere of Venus visible in the H and K bands. Our images of these features are the best obtained to date. We derive a pseudo-rotation period of 6.5 days for these features and 1.74 micron brightness temperatures between 425 K and 480 K. The features are produced by nonuniform absorption, in the middle cloud layer (47 to 57 km altitude), of thermal radiation from the lower Venus atmosphere (20 to 30 km altitude). A more detailed analysis of the data is in progress.
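
    A brightness temperature at a single wavelength is obtained by inverting the Planck law for the observed spectral radiance. A short sketch, with a round-trip sanity check at 1.74 μm:

        import numpy as np

        h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

        def brightness_temperature(radiance, wavelength_m):
            """Invert the Planck law: temperature of a blackbody producing the
            observed spectral radiance (W m^-2 sr^-1 m^-1) at one wavelength."""
            return (h * c / (wavelength_m * k)) / \
                np.log(1 + 2 * h * c ** 2 / (wavelength_m ** 5 * radiance))

        # Sanity check: a 450 K blackbody at 1.74 um should invert back to 450 K.
        lam, T = 1.74e-6, 450.0
        B = 2 * h * c ** 2 / lam ** 5 / (np.exp(h * c / (lam * k * T)) - 1)
        print(round(brightness_temperature(B, lam), 1))  # 450.0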

  9. Estimation of Population Number via Light Activities on Night-Time Satellite Images

    NASA Astrophysics Data System (ADS)

    Turan, M. K.; Yücer, E.; Sehirli, E.; Karaş, İ. R.

    2017-11-01

    Estimation and accurate assessment of population are getting harder day by day due to the rapid growth of the world population. Estimating settlement tendencies in cities and countries, socio-cultural development, and population numbers is quite difficult, and the selection and analysis of parameters such as time, workforce, and cost pose further difficulties. In this study, the population number of İstanbul is estimated by evaluating light activities in night-time satellite images of Turkey. By evaluating light activities between 2000 and 2010, the average population per lit pixel is obtained and then used to estimate the population in 2011, 2012, and 2013. The mean errors are 4.14% for 2011, 3.74% for 2012, and 3.04% for 2013. With the developed thresholding method, a mean error of 3.64% is obtained for estimating the population of İstanbul for the next three years.
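
    The underlying arithmetic is simple: calibrate an average population per lit pixel on a year with a known population, then apply the same threshold to a later image and scale. A sketch on fake data follows; the threshold and the approximate 2010 İstanbul census figure are illustrative stand-ins for the paper's calibration:

        import numpy as np

        def calibrate(dn_image, known_population, threshold):
            """Average population per lit pixel, from a calibration year."""
            lit = (dn_image >= threshold).sum()
            return known_population / lit

        def estimate(dn_image, pop_per_pixel, threshold):
            return (dn_image >= threshold).sum() * pop_per_pixel

        rng = np.random.default_rng(2)
        img_2010 = rng.integers(0, 64, (500, 500))   # fake night-light digital numbers
        ppp = calibrate(img_2010, 13_255_685, 30)    # approx. 2010 census, illustrative
        img_2011 = rng.integers(0, 64, (500, 500))
        print(f'estimated 2011 population: {estimate(img_2011, ppp, 30):,.0f}')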

  10. Practical design and evaluation methods of omnidirectional vision sensors

    NASA Astrophysics Data System (ADS)

    Ohte, Akira; Tsuzuki, Osamu

    2012-01-01

    A practical omnidirectional vision sensor, consisting of a curved mirror, a mirror-supporting structure, and a megapixel digital imaging system, can view a field of 360 deg horizontally and 135 deg vertically. The authors theoretically analyzed and evaluated several curved mirrors, namely, a spherical mirror, an equidistant mirror, and a single viewpoint mirror (hyperboloidal mirror). The focus of their study was mainly on the image-forming characteristics, position of the virtual images, and size of blur spot images. The authors propose here a practical design method that satisfies the required characteristics. They developed image-processing software for converting circular images to images of the desired characteristics in real time. They also developed several prototype vision sensors using spherical mirrors. Reports dealing with virtual images and blur-spot size of curved mirrors are few; therefore, this paper will be very useful for the development of omnidirectional vision sensors.

  11. Color defective vision and day and night recognition of aviation color signal light flashes.

    DOT National Transportation Integrated Search

    1971-07-01

    A previous study reported on the efficiency with which various tests of color defective vision can predict performance during daylight conditions on a practical test of ability to discriminate aviation signal red, white, and green. In the current stu...

  12. The Role of External Features in Face Recognition with Central Vision Loss.

    PubMed

    Bernard, Jean-Baptiste; Chung, Susana T L

    2016-05-01

    We evaluated how the performance of recognizing familiar face images depends on the internal (eyebrows, eyes, nose, mouth) and external face features (chin, outline of face, hairline) in individuals with central vision loss. In experiment 1, we measured eye movements for four observers with central vision loss to determine whether they fixated more often on the internal or the external features of face images while attempting to recognize the images. We then measured the accuracy for recognizing face images that contained only the internal, only the external, or both internal and external features (experiment 2) and for hybrid images where the internal and external features came from two different source images (experiment 3) for five observers with central vision loss and four age-matched control observers. When recognizing familiar face images, approximately 40% of the fixations of observers with central vision loss were centered on the external features of faces. The recognition accuracy was higher for images containing only external features (66.8 ± 3.3% correct) than for images containing only internal features (35.8 ± 15.0%), a finding contradicting that of control observers. For hybrid face images, observers with central vision loss responded more accurately to the external features (50.4 ± 17.8%) than to the internal features (9.3 ± 4.9%), whereas control observers did not show the same bias toward responding to the external features. Contrary to people with normal vision, who rely more on the internal features of face images for recognizing familiar faces, individuals with central vision loss show a higher dependence on external features of face images.

  13. Color line scan camera technology and machine vision: requirements to consider

    NASA Astrophysics Data System (ADS)

    Paernaenen, Pekka H. T.

    1997-08-01

    Color machine vision has shown a strong uptrend in use within the past few years, as the introduction of new cameras and scanner technologies underscores. In the future, the movement from monochrome imaging to color will accelerate as machine vision users demand more knowledge about their product stream. As color has come to machine vision, certain requirements are placed on the equipment used to digitize color images. Color machine vision needs not only good color separation but also a high dynamic range and a good linear response from the camera used. The importance of these features becomes even greater when the image is converted to another color space: some information is always lost when converting integer data to another form. Traditionally, color image processing has been a much slower technique than gray-level image processing, due to the three times greater data amount per image; the same applies to the three times larger memory requirement. Advances in computers, memory, and processing units have made it possible to handle even large color images cost-efficiently today. In some cases the analysis of color images can in fact be easier and faster than that of a similar gray-level image, because of the greater information per pixel. Color machine vision sets new requirements for lighting, too: high-intensity, white light is required in order to acquire good images for further image processing or analysis. New developments in lighting technology are eventually bringing solutions for color imaging.
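
    The claim that conversion between color spaces always loses some information with integer data is easy to demonstrate: round-trip quantised RGB triplets through 8-bit HSV and count the ones that change. A small self-contained check using Python's standard colorsys module:

        import colorsys

        # Round-trip sampled 8-bit colours through HSV with 8-bit quantisation
        # and count how many fail to come back exactly.
        errors = 0
        for r in range(0, 256, 8):
            for g in range(0, 256, 8):
                for b in range(0, 256, 8):
                    hsv = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
                    q = tuple(round(x * 255) / 255 for x in hsv)   # store HSV as 8-bit
                    rgb = colorsys.hsv_to_rgb(*q)
                    if tuple(round(x * 255) for x in rgb) != (r, g, b):
                        errors += 1
        print(errors, 'of', 32 ** 3, 'sampled colours changed after the round trip')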

  14. Night-to-night arousal variability and interscorer reliability of arousal measurements.

    PubMed

    Loredo, J S; Clausen, J L; Ancoli-Israel, S; Dimsdale, J E

    1999-11-01

    Measurement of arousals from sleep is clinically important; however, their definition is not well standardized, and little data exist on reliability. The purpose of this study was to determine factors that affect arousal scoring reliability and night-to-night arousal variability. Night-to-night arousal variability and interscorer reliability were assessed in 20 subjects with and without obstructive sleep apnea undergoing attended polysomnography during two consecutive nights. Five definitions of arousal were studied, assessing the duration of electroencephalographic (EEG) frequency changes, increases in electromyographic (EMG) activity and leg movement, association with respiratory events, as well as the American Sleep Disorders Association (ASDA) definition of arousals. Interscorer reliability varied with the definition of arousal and ranged from an intraclass correlation (ICC) of 0.19 to 0.92. Arousals that included increases in EMG activity or leg movement had the greatest reliability, especially when associated with respiratory events (ICC 0.76 to 0.92). The ASDA arousal definition had high interscorer reliability (ICC 0.84). Reliability was lowest for arousals consisting of EEG changes lasting <3 seconds (ICC 0.19 to 0.37). The within-subject night-to-night arousal variability was low for all arousal definitions. In a heterogeneous population, interscorer arousal reliability is enhanced by increases in EMG activity, leg movements, and respiratory events, and decreased by short-duration EEG arousals. The arousal index night-to-night variability was low for all definitions.

  15. NASA Night at Houston Astros, pregame ceremonies

    NASA Image and Video Library

    2005-09-13

    Images from the pregame ceremonies during NASA Night at the Houston Astros game, taken at Minute Maid Park, Houston. View of Center Director Jefferson Howell, Astros owner Drayton McLane, and STS-114 crewmembers Eileen Collins, James Kelly and Charles Camarda, with Collins holding an Astros jersey reading Discovery 114.

  16. SU-C-209-06: Improving X-Ray Imaging with Computer Vision and Augmented Reality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDougall, R.D.; Scherrer, B; Don, S

    Purpose: To determine the feasibility of using a computer vision algorithm and augmented reality interface to reduce repeat rates and improve consistency of image quality and patient exposure in general radiography. Methods: A prototype device, designed for use with commercially available hardware (Microsoft Kinect 2.0) capable of depth sensing and high resolution/frame rate video, was mounted to the x-ray tube housing as part of a Philips DigitalDiagnost digital radiography room. Depth data and video were streamed to a Windows 10 PC. Proprietary software created an augmented reality interface where overlays displayed selectable information projected over real-time video of the patient. The information displayed prior to and during x-ray acquisition included: recognition and position of ordered body part, position of image receptor, thickness of anatomy, location of AEC cells, collimated x-ray field, degree of patient motion and suggested x-ray technique. Pre-clinical data were collected in a volunteer study to validate patient thickness measurements; x-ray images were not acquired. Results: Proprietary software correctly identified the ordered body part, measured patient motion, and calculated thickness of anatomy. Pre-clinical data demonstrated accuracy and precision of body part thickness measurement when compared with other methods (e.g. laser measurement tool). Thickness measurements provided the basis for developing a database of thickness-based technique charts that can be automatically displayed to the technologist. Conclusion: The utilization of computer vision and commercial hardware to create an augmented reality view of the patient and imaging equipment has the potential to drastically improve the quality and safety of x-ray imaging by reducing repeats and optimizing technique based on patient thickness. Society of Pediatric Radiology Pilot Grant; Washington University Bear Cub Fund.
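
    The thickness measurement described above amounts to subtracting the patient-surface depth from a known tube-to-table distance. A minimal sketch under that assumption; the frame, distances, and region of interest are hypothetical, and this is not the authors' proprietary software:

    ```python
    # Sketch: estimating body-part thickness from a depth camera.
    import numpy as np

    def thickness_cm(depth_mm: np.ndarray, table_distance_mm: float,
                     roi: tuple[slice, slice]) -> float:
        """Thickness = distance to the empty table minus distance to the
        patient surface, over a region of interest."""
        surface = np.median(depth_mm[roi])       # median is robust to depth noise
        return (table_distance_mm - surface) / 10.0

    depth = np.full((424, 512), 1500.0)          # fake Kinect depth frame (mm)
    depth[180:260, 200:320] = 1320.0             # "patient" 18 cm above the table
    roi = (slice(180, 260), slice(200, 320))
    print(f"{thickness_cm(depth, 1500.0, roi):.1f} cm")
    ```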

  17. Genotype and phenotype of 101 dutch patients with congenital stationary night blindness.

    PubMed

    Bijveld, Mieke M C; Florijn, Ralph J; Bergen, Arthur A B; van den Born, L Ingeborgh; Kamermans, Maarten; Prick, Liesbeth; Riemslag, Frans C C; van Schooneveld, Mary J; Kappers, Astrid M L; van Genderen, Maria M

    2013-10-01

    To investigate the relative frequency of the genetic causes of the Schubert-Bornschein type of congenital stationary night blindness (CSNB) and to determine the genotype-phenotype correlations in CSNB1 and CSNB2. Clinic-based, longitudinal, multicenter study. A total of 39 patients with CSNB1 from 29 families and 62 patients with CSNB2 from 43 families. Patients underwent full ophthalmologic and electrophysiologic examinations. On the basis of standard electroretinograms (ERGs), patients were diagnosed with CSNB1 or CSNB2. Molecular analysis was performed by direct Sanger sequencing of the entire coding regions in NYX, TRPM1, GRM6, and GPR179 in patients with CSNB1 and CACNA1F and CABP4 in patients with CSNB2. Data included genetic cause of CSNB, refractive error, visual acuity, nystagmus, strabismus, night blindness, photophobia, color vision, dark adaptation (DA) curve, and standard ERGs. A diagnosis of CSNB1 or CSNB2 was based on standard ERGs. The photopic ERG was the most specific criterion to distinguish between CSNB1 and CSNB2 because it showed a "square-wave" appearance in CSNB1 and a decreased b-wave in CSNB2. Mutations causing CSNB1 were found in NYX (20 patients, 13 families), TRPM1 (10 patients, 9 families), GRM6 (4 patients, 3 families), and GPR179 (2 patients, 1 family). Congenital stationary night blindness 2 was primarily caused by mutations in CACNA1F (55 patients, 37 families). Only 3 patients had causative mutations in CABP4 (2 families). Patients with CSNB1 mainly had rod-related problems, and patients with CSNB2 had rod- and cone-related problems. The visual acuity on average was better in CSNB1 (0.30 logarithm of the minimum angle of resolution [logMAR]) than in CSNB2 (0.52 logMAR). All patients with CSNB1 and only 54% of the patients with CSNB2 reported night blindness. The dark-adapted threshold was on average more elevated in CSNB1 (3.0 log) than in CSNB2 (1.8 log). The 3 patients with CABP4 had relatively low visual acuity, were hyperopic

  18. Diagnosing night sweats.

    PubMed

    Viera, Anthony J; Bond, Michael M; Yates, Scott W

    2003-03-01

    Night sweats are a common outpatient complaint, yet literature on the subject is scarce. Tuberculosis and lymphoma are diseases in which night sweats are a dominant symptom, but these are infrequently found to be the cause of night sweats in modern practice. While these diseases remain important diagnostic considerations in patients with night sweats, other diagnoses to consider include human immunodeficiency virus, gastroesophageal reflux disease, obstructive sleep apnea, hyperthyroidism, hypoglycemia, and several less common diseases. Antihypertensives, antipyretics, other medications, and drugs of abuse such as alcohol and heroin may cause night sweats. Serious causes of night sweats can be excluded with a thorough history, physical examination, and directed laboratory and radiographic studies. If a history and physical do not reveal a possible diagnosis, physicians should consider a purified protein derivative, complete blood count, human immunodeficiency virus test, thyroid-stimulating hormone test, erythrocyte sedimentation rate evaluation, chest radiograph, and possibly chest and abdominal computed tomographic scans and bone marrow biopsy.

  19. LWIR passive perception system for stealthy unmanned ground vehicle night operations

    NASA Astrophysics Data System (ADS)

    Lee, Daren; Rankin, Arturo; Huertas, Andres; Nash, Jeremy; Ahuja, Gaurav; Matthies, Larry

    2016-05-01

    Resupplying forward-deployed units in rugged terrain in the presence of hostile forces creates a high threat to manned air and ground vehicles. An autonomous unmanned ground vehicle (UGV) capable of navigating stealthily at night in off-road and on-road terrain could significantly increase the safety and success rate of such resupply missions for warfighters. Passive night-time perception of terrain and obstacle features is a vital requirement for such missions. As part of the ONR 30 Autonomy Team, the Jet Propulsion Laboratory developed a passive, low-cost night-time perception system under the ONR Expeditionary Maneuver Warfare and Combating Terrorism Applied Research program. Using a stereo pair of forward looking LWIR uncooled microbolometer cameras, the perception system generates disparity maps using a local window-based stereo correlator to achieve real-time performance while maintaining low power consumption. To overcome the lower signal-to-noise ratio and spatial resolution of LWIR thermal imaging technologies, a series of pre-filters were applied to the input images to increase the image contrast and stereo correlator enhancements were applied to increase the disparity density. To overcome false positives generated by mixed pixels, noisy disparities from repeated textures, and uncertainty in far range measurements, a series of consistency, multi-resolution, and temporal based post-filters were employed to improve the fidelity of the output range measurements. The stereo processing leverages multi-core processors and runs under the Robot Operating System (ROS). The night-time passive perception system was tested and evaluated on fully autonomous testbed ground vehicles at SPAWAR Systems Center Pacific (SSC Pacific) and Marine Corps Base Camp Pendleton, California. This paper describes the challenges, techniques, and experimental results of developing a passive, low-cost perception system for night-time autonomous navigation.
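
    A minimal sketch of the pipeline's general shape, i.e. contrast pre-filtering, local window-based stereo correlation, and speckle post-filtering. OpenCV routines stand in for the JPL system's correlator and filters, which are not published here, and the filenames are hypothetical:

    ```python
    # Sketch: LWIR-flavored stereo with pre- and post-filtering (OpenCV stand-ins).
    import cv2
    import numpy as np

    left = cv2.imread("lwir_left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical frames
    right = cv2.imread("lwir_right.png", cv2.IMREAD_GRAYSCALE)

    # Pre-filter: boost the low contrast typical of uncooled LWIR imagery.
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    left, right = clahe.apply(left), clahe.apply(right)

    # Local window-based stereo correlator (real-time friendly).
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = stereo.compute(left, right)                          # int16, disparity*16

    # Post-filter: suppress small speckles from noise and repeated textures.
    cv2.filterSpeckles(disp, newVal=-16, maxSpeckleSize=200, maxDiff=16)
    disparity = disp.astype(np.float32) / 16.0
    ```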

  20. Nocturnal light environments and species ecology: implications for nocturnal color vision in forests.

    PubMed

    Veilleux, Carrie C; Cummings, Molly E

    2012-12-01

    Although variation in the color of light in terrestrial diurnal and twilight environments has been well documented, relatively little work has examined the color of light in nocturnal habitats. Understanding the range and sources of variation in nocturnal light environments has important implications for nocturnal vision, particularly following recent discoveries of nocturnal color vision. In this study, we measured nocturnal irradiance in a dry forest/woodland and a rainforest in Madagascar over 34 nights. We found that a simple linear model including the additive effects of lunar altitude, lunar phase and canopy openness successfully predicted total irradiance flux measurements across 242 clear sky measurements (r=0.85, P<0.0001). However, the relationship between these variables and spectral irradiance was more complex, as interactions between lunar altitude, lunar phase and canopy openness were also important predictors of spectral variation. Further, in contrast to diurnal conditions, nocturnal forests and woodlands share a yellow-green-dominant light environment with peak flux at 560 nm. To explore how nocturnal light environments influence nocturnal vision, we compared photoreceptor spectral tuning, habitat preference and diet in 32 nocturnal mammals. In many species, long-wavelength-sensitive cone spectral sensitivity matched the peak flux present in nocturnal forests and woodlands, suggesting a possible adaptation to maximize photon absorption at night. Further, controlling for phylogeny, we found that fruit/flower consumption significantly predicted short-wavelength-sensitive cone spectral tuning in nocturnal mammals (P=0.002). These results suggest that variation in nocturnal light environments and species ecology together influence cone spectral tuning and color vision in nocturnal mammals.
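
    The additive model described above can be sketched as an ordinary least-squares fit of flux on lunar altitude, lunar phase, and canopy openness. The data below are synthetic, not the Madagascar measurements:

    ```python
    # Sketch: irradiance ~ lunar altitude + lunar phase + canopy openness,
    # fit by least squares on made-up data.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 242
    altitude = rng.uniform(0, 90, n)        # lunar altitude (deg)
    phase = rng.uniform(0, 1, n)            # illuminated fraction of the moon
    canopy = rng.uniform(0, 1, n)           # canopy openness
    log_flux = 0.02 * altitude + 1.5 * phase + 0.8 * canopy + rng.normal(0, 0.2, n)

    X = np.column_stack([np.ones(n), altitude, phase, canopy])
    beta, *_ = np.linalg.lstsq(X, log_flux, rcond=None)
    r = np.corrcoef(X @ beta, log_flux)[0, 1]
    print(f"coefficients: {beta.round(3)}, r = {r:.2f}")
    ```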

  1. Robotic vision. [process control applications

    NASA Technical Reports Server (NTRS)

    Williams, D. S.; Wilf, J. M.; Cunningham, R. T.; Eskenazi, R.

    1979-01-01

    Robotic vision, involving the use of a vision system to control a process, is discussed. Design and selection of active sensors, which employ radio waves, sound waves, or laser light to illuminate otherwise unobservable features in the scene, are considered, as are design and selection of passive sensors, which rely on external sources of illumination. The segmentation technique, by which an image is separated into different collections of contiguous picture elements having such common characteristics as color, brightness, or texture, is examined, with emphasis on the edge detection technique. The IMFEX (image feature extractor) system, which performs edge detection and thresholding at 30 frames/sec television frame rates, is described. The template matching and discrimination approaches to recognizing objects are noted. Applications of robotic vision in industry, for tasks too monotonous or too dangerous for human workers, are mentioned.
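
    A minimal sketch of the front-end operations this abstract names, i.e. edge detection, thresholding, and segmentation into contiguous regions, with OpenCV standing in for the IMFEX hardware (the filename is hypothetical):

    ```python
    # Sketch: edge detection, thresholding, and connected-region segmentation.
    import cv2

    frame = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input

    edges = cv2.Canny(frame, 50, 150)                       # edge detection
    _, mask = cv2.threshold(frame, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # thresholding

    # Segmentation: label contiguous pixel collections sharing brightness.
    n_regions, labels = cv2.connectedComponents(mask)
    print(f"found {n_regions - 1} bright regions")
    ```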

  2. Image processing strategies based on saliency segmentation for object recognition under simulated prosthetic vision.

    PubMed

    Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu

    2018-01-01

    Current retinal prostheses can only generate low-resolution visual percepts constituted of limited phosphenes which are elicited by an electrode array, with uncontrollable color and restricted grayscale. Under this visual perception, prosthetic recipients can complete only simple visual tasks; more complex tasks like face identification/object recognition are extremely difficult. Therefore, it is necessary to investigate and apply image processing strategies for optimizing the visual perception of the recipients. This study focuses on recognition of the object of interest employing simulated prosthetic vision. We used a saliency segmentation method based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive-iterative optimization framework to automatically extract foreground objects. Based on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further utilized to enhance the extracted foreground objects. i) Psychophysical experiments verified that, under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was affected positively by paired-interrelated objects in the scene. The use of the saliency segmentation method and image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance for recipients implanted with a high-density implant. Copyright © 2017 Elsevier B.V. All rights reserved.
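
    A minimal sketch of the foreground-extraction step, with OpenCV's GrabCut seeded by a stand-in saliency box and a crude low-resolution rendering in place of a proper phosphene simulation. The rectangle, grid size, and filename are hypothetical, and the paper's saliency model is not reproduced here:

    ```python
    # Sketch: GrabCut foreground extraction + coarse "pixelized" rendering.
    import cv2
    import numpy as np

    img = cv2.imread("scene.png")                    # hypothetical input
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    rect = (50, 50, 200, 200)                        # stand-in for the saliency ROI

    cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == 1) | (mask == 3), 255, 0).astype(np.uint8)

    # Crude phosphene-like rendering: downsample the foreground to a 32x32 grid.
    phosphenes = cv2.resize(cv2.bitwise_and(img, img, mask=fg), (32, 32),
                            interpolation=cv2.INTER_AREA)
    ```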

  3. Development and evaluation of vision rehabilitation devices.

    PubMed

    Luo, Gang; Peli, Eli

    2011-01-01

    We have developed a range of vision rehabilitation devices and techniques for people with impaired vision due to either central vision loss or severely restricted peripheral visual field. We have conducted evaluation studies with patients to test the utilities of these techniques in an effort to document their advantages as well as their limitations. Here we describe our work on a visual field expander based on a head mounted display (HMD) for tunnel vision, a vision enhancement device for central vision loss, and a frequency domain JPEG/MPEG based image enhancement technique. All the evaluation studies included visual search paradigms that are suitable for conducting indoor controllable experiments.

  4. Machine vision system for inspecting characteristics of hybrid rice seed

    NASA Astrophysics Data System (ADS)

    Cheng, Fang; Ying, Yibin

    2004-03-01

    Obtaining clear images, which helps improve classification accuracy, involves many factors; light source, lens extender, and background are discussed in this paper. Analysis of rice seed reflectance curves showed that the light source wavelength for discriminating diseased seeds from normal rice seeds in the monochromatic image recognition mode was about 815 nm for jinyou402 and shanyou10. To determine optimal conditions for acquiring digital images of rice seed with a computer vision system, an adjustable color machine vision system was developed. With a 20 mm to 25 mm lens extender, the machine vision system produces close-up images, which eases recognition of characteristics in hybrid rice seeds. A white background proved better than a black background for inspecting rice seeds infected by disease when using shape-based algorithms. Experimental results indicated good classification for most of the characteristics with the machine vision system. The same algorithm yielded better results under the optimized conditions for quality inspection of rice seed. Specifically, the image processing can resolve details such as fine fissures with the machine vision system.

  5. Machine vision for real time orbital operations

    NASA Technical Reports Server (NTRS)

    Vinz, Frank L.

    1988-01-01

    Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputting visual data to a computer so that image processing may be achieved for real-time control of these orbital operations. A technique has resulted from this research which reduces computer memory requirements and greatly increases typical computational speed such that it has the potential for development into a real-time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).

  6. Pinwheel Crater at Night

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Released 15 March 2004

    The Odyssey spacecraft has completed a full Mars year of observations of the red planet. For the next several weeks the Image of the Day will look back over this first Mars year. It will focus on four themes: 1) the poles - with the seasonal changes seen in the retreat and expansion of the caps; 2) craters - with a variety of morphologies relating to impact materials and later alteration, both infilling and exhumation; 3) channels - the clues to liquid surface flow; and 4) volcanic flow features. While some images have helped answer questions about the history of Mars, many have raised new questions that are still being investigated as Odyssey continues collecting data as it orbits Mars.

    Infrared images taken during the nighttime exhibit only the thermophysical properties of the surface of Mars. The effect of sun-facing versus non-sun-facing energy dissipates quickly at night. Thermophysical effects dominate as different surfaces cool at different rates through the nighttime hours. Rocks cool slowly, and are therefore relatively bright at night (remember that rocks are dark during the day). Dust and other fine grained materials cool very quickly and are dark in nighttime infrared images.

    This nighttime IR image was collected September 28, 2002 during the northern spring season. The 'pinwheel' pattern represents alternating warm and cool materials.

    Image information: IR instrument. Latitude 60.3, Longitude 271.9 East (88.1 West). 100 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data

  7. How do different definitions of night shift affect the exposure assessment of night work?

    PubMed

    Garde, Anne Helene; Hansen, Johnni; Kolstad, Henrik A; Larsen, Ann Dyreborg; Hansen, Åse Marie

    The aim is to show how different definitions affect the proportion of shifts classified as night shifts. The Danish Working Hour Database was used to calculate the number of night shifts according to eight definitions. More than 98% of the total night shifts were classified as night shifts both under the reference definition (at least 3 h of work between 24:00 and 05:00) and under definitions using a period during the night. The overlap with definitions based on starting and ending times was less pronounced (64-71%). The proportion of classified night shifts differs little when night shifts are based on definitions including a period during the night. Studies based on other definitions may be less comparable.
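
    The reference definition above is mechanical enough to sketch in code; the shift times below are hypothetical, and real payroll data would need more careful date handling:

    ```python
    # Sketch: classify a shift as a night shift under the reference definition
    # (at least 3 h of work between 24:00 and 05:00).
    from datetime import datetime, timedelta

    def night_hours(start: datetime, end: datetime) -> float:
        """Hours of the shift falling in a 00:00-05:00 window."""
        total = timedelta()
        day = start.replace(hour=0, minute=0, second=0, microsecond=0)
        while day <= end:
            w0, w1 = day, day + timedelta(hours=5)        # that day's night window
            overlap = min(end, w1) - max(start, w0)
            if overlap > timedelta():
                total += overlap
            day += timedelta(days=1)
        return total.total_seconds() / 3600

    def is_night_shift(start: datetime, end: datetime) -> bool:
        return night_hours(start, end) >= 3.0

    shift = (datetime(2024, 5, 1, 22, 0), datetime(2024, 5, 2, 6, 0))
    print(is_night_shift(*shift))   # True: 5 h fall between 24:00 and 05:00
    ```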

  8. Vision, eye disease, and art: 2015 Keeler Lecture

    PubMed Central

    Marmor, M F

    2016-01-01

    The purpose of this study was to examine normal vision and eye disease in relation to art. Ophthalmology cannot explain art, but vision is a tool for artists and its normal and abnormal characteristics may influence what an artist can do. The retina codes for contrast, and the impact of this is evident throughout art history from Asian brush painting, to Renaissance chiaroscuro, to Op Art. Art exists, and can portray day or night, only because of the way the retina adjusts to light. Color processing is complex, but artists have exploited it to create shimmer (Seurat, Op Art), or to disconnect color from form (fauvists, expressionists, Andy Warhol). It is hazardous to diagnose eye disease from an artist's work, because artists have license to create as they wish. El Greco was not astigmatic; Monet was not myopic; Turner did not have cataracts. But when eye disease is documented, the effects can be analyzed. Color-blind artists limit their palette to ambers and blues, and avoid greens. Dense brown cataracts destroy color distinctions, and Monet's late canvases (before surgery) showed strange and intense uses of color. Degas had failing vision for 40 years, and his pastels grew coarser and coarser. He may have continued working because his blurred vision smoothed over the rough work. This paper can barely touch upon the complexity of either vision or art. However, it demonstrates some ways in which understanding vision and eye disease gives insight into art, and thereby an appreciation of both art and ophthalmology. PMID:26563659

  9. Vision, eye disease, and art: 2015 Keeler Lecture.

    PubMed

    Marmor, M F

    2016-02-01

    The purpose of this study was to examine normal vision and eye disease in relation to art. Ophthalmology cannot explain art, but vision is a tool for artists and its normal and abnormal characteristics may influence what an artist can do. The retina codes for contrast, and the impact of this is evident throughout art history from Asian brush painting, to Renaissance chiaroscuro, to Op Art. Art exists, and can portray day or night, only because of the way the retina adjusts to light. Color processing is complex, but artists have exploited it to create shimmer (Seurat, Op Art), or to disconnect color from form (fauvists, expressionists, Andy Warhol). It is hazardous to diagnose eye disease from an artist's work, because artists have license to create as they wish. El Greco was not astigmatic; Monet was not myopic; Turner did not have cataracts. But when eye disease is documented, the effects can be analyzed. Color-blind artists limit their palette to ambers and blues, and avoid greens. Dense brown cataracts destroy color distinctions, and Monet's late canvases (before surgery) showed strange and intense uses of color. Degas had failing vision for 40 years, and his pastels grew coarser and coarser. He may have continued working because his blurred vision smoothed over the rough work. This paper can barely touch upon the complexity of either vision or art. However, it demonstrates some ways in which understanding vision and eye disease gives insight into art, and thereby an appreciation of both art and ophthalmology.

  10. Biological Basis For Computer Vision: Some Perspectives

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.

    1990-03-01

    Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels and in the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies are heavily dependent on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prosthesis for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.

  11. Visible-infrared achromatic imaging by wavefront coding with wide-angle automobile camera

    NASA Astrophysics Data System (ADS)

    Ohta, Mitsuhiko; Sakita, Koichi; Shimano, Takeshi; Sugiyama, Takashi; Shibasaki, Susumu

    2016-09-01

    We performed an achromatic imaging experiment with wavefront coding (WFC) using a wide-angle automobile lens. Our original annular phase mask for WFC was inserted into the lens, for which the difference between the focal positions at 400 nm and at 950 nm is 0.10 mm. We acquired images of objects using a WFC camera with this lens under visible and infrared light. As a result, the removal of chromatic aberration by the WFC system was successfully demonstrated. Moreover, we fabricated a demonstration set assuming the use of a night vision camera in an automobile and showed the effect of the WFC system.

  12. S4EI (Spectral Sampling with Slicer for Stellar and Extragalactical Instrumentation), a new-generation of 3D spectro-imager dedicated to night astronomy

    NASA Astrophysics Data System (ADS)

    Sayède, Frédéric; Puech, Mathieu; Mein, Pierre; Bonifacio, Piercarlo; Malherbe, Jean-Marie; Galicher, Raphaël; Amans, Jean-Philippe; Fasola, Gilles

    2014-07-01

    Multichannel Subtractive Double Pass (MSDP) spectrographs have been widely used in solar spectroscopy because of their ability to provide an excellent compromise between field of view and spatial and spectral resolutions. Compared with other types of spectrographs, MSDP can deliver simultaneous monochromatic images at higher spatial and spectral resolutions without any time-scanning requirement (as with Fabry-Perot spectrographs), and with limited loss of flux. These performances are obtained thanks to a double pass through the dispersive element. Recent advances with VPH (Volume phase holographic) Grisms as well as with image slicers now make MSDP potentially sensitive to much smaller fluxes. We present S4EI (Spectral Sampling with Slicer for Stellar and Extragalactical Instrumentation), which is a new concept for extending MSDP to night-time astronomy. It is based on new generation reflecting plane image slicers working with large apertures specific to night-time telescopes. The resulting design could be potentially very attractive and innovative for different domains of astronomy, e.g., the simultaneous spatial mapping of accurately flux-calibrated emission lines between OH sky lines in extragalactic astronomy or the simultaneous imaging of stars, exoplanets and interstellar medium. We present different possible MSDP/S4EI configurations for these science cases and expected performances on telescopes such as the VLT.

  13. Family Reading Night

    ERIC Educational Resources Information Center

    Hutchins, Darcy; Greenfeld, Marsha; Epstein, Joyce

    2007-01-01

    This book offers clear and practical guidelines to help engage families in student success. It shows families how to conduct a successful Family Reading Night at their school. Family Night themes include Scary Stories, Books We Love, Reading Olympics, Dr. Seuss, and other themes. Family reading nights invite parents to come to school with their…

  14. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  15. An overview of computer vision

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1982-01-01

    An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. The basic approach to computer vision systems, the techniques utilized, applications, the current existing systems and state-of-the-art issues and research requirements, who is doing it and who is funding it, and future trends and expectations are reviewed.

  16. Computational models of human vision with applications

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    Perceptual problems in aeronautics were studied. The mechanism by which color constancy is achieved in human vision was examined. A computable algorithm was developed to model the arrangement of retinal cones in spatial vision. The spatial frequency spectra are similar to the spectra of actual cone mosaics. The Hartley transform was evaluated as a tool of image processing, and it is suggested that it could be used in signal processing and image processing applications.

  17. Vision Systems with the Human in the Loop

    NASA Astrophysics Data System (ADS)

    Bauckhage, Christian; Hanheide, Marc; Wrede, Sebastian; Käster, Thomas; Pfeiffer, Michael; Sagerer, Gerhard

    2005-12-01

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge, they can adapt to their environment and thus will exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval, the others are cognitive vision systems that constitute prototypes of visual active memories which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After we will have discussed adaptive content-based image retrieval and object and action recognition in an office environment, the issue of assessing cognitive systems will be raised. Experiences from psychologically evaluated human-machine interactions will be reported and the promising potential of psychologically-based usability experiments will be stressed.

  18. Active imaging system performance model for target acquisition

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Teaney, Brian; Nguyen, Quang; Jacobs, Eddie L.; Halford, Carl E.; Tofsted, David H.

    2007-04-01

    The U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate has developed a laser-range-gated imaging system performance model for the detection, recognition, and identification of vehicle targets. The model is based on the established US Army RDECOM CERDEC NVESD sensor performance models of the human system response through an imaging system. The Java-based model, called NVLRG, accounts for the effect of active illumination, atmospheric attenuation, and turbulence effects relevant to LRG imagers, such as speckle and scintillation, and for the critical sensor and display components. This model can be used to assess the performance of recently proposed active SWIR systems through various trade studies. This paper will describe the NVLRG model in detail, discuss the validation of recent model components, present initial trade study results, and outline plans to validate and calibrate the end-to-end model with field data through human perception testing.

  19. Research on an autonomous vision-guided helicopter

    NASA Technical Reports Server (NTRS)

    Amidi, Omead; Mesaki, Yuji; Kanade, Takeo

    1994-01-01

    Integration of computer vision with on-board sensors to autonomously fly helicopters was researched. The key components developed were custom designed vision processing hardware and an indoor testbed. The custom designed hardware provided flexible integration of on-board sensors with real-time image processing resulting in a significant improvement in vision-based state estimation. The indoor testbed provided convenient calibrated experimentation in constructing real autonomous systems.

  20. Computer Vision-Based Structural Displacement Measurement Robust to Light-Induced Image Degradation for In-Service Bridges

    PubMed Central

    Lee, Junhwa; Lee, Kyoung-Chan; Cho, Soojin

    2017-01-01

    The displacement responses of a civil engineering structure can provide important information regarding structural behaviors that help in assessing safety and serviceability. A displacement measurement using conventional devices, such as the linear variable differential transformer (LVDT), is challenging owing to issues related to inconvenient sensor installation that often requires additional temporary structures. A promising alternative is offered by computer vision, which typically provides a low-cost and non-contact displacement measurement that converts the movement of an object, mostly an attached marker, in the captured images into structural displacement. However, there is limited research on addressing light-induced measurement error caused by the inevitable sunlight in field-testing conditions. This study presents a computer vision-based displacement measurement approach tailored to a field-testing environment with enhanced robustness to strong sunlight. An image-processing algorithm with an adaptive region-of-interest (ROI) is proposed to reliably determine a marker’s location even when the marker is indistinct due to unfavorable light. The performance of the proposed system is experimentally validated in both laboratory-scale and field experiments. PMID:29019950
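
    A minimal sketch of marker tracking with an adaptive region of interest, in the spirit of the paper; the template matching and the ROI rule below are stand-ins, not the authors' algorithm:

    ```python
    # Sketch: template matching restricted to an adaptive ROI around the marker.
    import cv2

    def track(frame, template, last_xy, pad=40):
        """Search for the marker only in a window around its last position."""
        h, w = template.shape[:2]
        x, y = last_xy
        x0, y0 = max(0, x - pad), max(0, y - pad)
        roi = frame[y0:y0 + h + 2 * pad, x0:x0 + w + 2 * pad]
        result = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(result)
        # When glare makes the marker indistinct, `score` drops; a caller can
        # widen `pad` for the next frame -- the adaptive-ROI idea.
        return (x0 + loc[0], y0 + loc[1]), score
    ```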

  1. Adnyamathanha Night Skies

    NASA Astrophysics Data System (ADS)

    Curnow, Paul

    2009-06-01

    Aboriginal Australians have been viewing the night skies of Australia for some 45,000 years and possibly much longer. During this time they have been able to develop a complex knowledge of the night sky, the terrestrial environment in addition to seasonal changes. However, few of us in contemporary society have an in-depth knowledge of the nightly waltz of stars above.

  2. Sorting out co-occurrence of rare monogenic retinopathies: Stargardt disease co-existing with congenital stationary night blindness.

    PubMed

    Huynh, Nancy; Jeffrey, Brett G; Turriff, Amy; Sieving, Paul A; Cukras, Catherine A

    2014-03-01

    Inherited retinal diseases are uncommon, and the likelihood of having more than one hereditary disorder is rare. Here, we report a case of Stargardt disease and congenital stationary night blindness (CSNB) in the same patient, and the identification of two novel in-frame deletions in the GRM6 gene. The patient underwent an ophthalmic exam and visual function testing including: visual acuity, color vision, Goldmann visual field, and electroretinography (ERG). Imaging of the retina included fundus photography, spectral-domain optical coherence tomography (OCT), and fundus autofluorescence. Genomic DNA was PCR-amplified for analysis of all coding exons and flanking splice sites of both the ABCA4 and GRM6 genes. A 46-year-old woman presented with recently reduced central vision and clinical findings of characteristic yellow flecks consistent with Stargardt disease. However, ERG testing revealed an ERG phenotype unusual for Stargardt disease but consistent with CSNB1. Genetic testing revealed two previously reported mutations in the ABCA4 gene and two novel deletions in the GRM6 gene. Diagnosis of concurrent Stargardt disease and CSNB was made on the ophthalmic history, clinical examination, ERG, and genetic testing. This case highlights that clinical tests need to be taken in context, and that co-existing retinal dystrophies and degenerations should be considered when clinical impressions and objective data do not correlate.

  3. Planet Formation Imager (PFI): science vision and key requirements

    NASA Astrophysics Data System (ADS)

    Kraus, Stefan; Monnier, John D.; Ireland, Michael J.; Duchêne, Gaspard; Espaillat, Catherine; Hönig, Sebastian; Juhasz, Attila; Mordasini, Chris; Olofsson, Johan; Paladini, Claudia; Stassun, Keivan; Turner, Neal; Vasisht, Gautam; Harries, Tim J.; Bate, Matthew R.; Gonzalez, Jean-François; Matter, Alexis; Zhu, Zhaohuan; Panic, Olja; Regaly, Zsolt; Morbidelli, Alessandro; Meru, Farzana; Wolf, Sebastian; Ilee, John; Berger, Jean-Philippe; Zhao, Ming; Kral, Quentin; Morlok, Andreas; Bonsor, Amy; Ciardi, David; Kane, Stephen R.; Kratter, Kaitlin; Laughlin, Greg; Pepper, Joshua; Raymond, Sean; Labadie, Lucas; Nelson, Richard P.; Weigelt, Gerd; ten Brummelaar, Theo; Pierens, Arnaud; Oudmaijer, Rene; Kley, Wilhelm; Pope, Benjamin; Jensen, Eric L. N.; Bayo, Amelia; Smith, Michael; Boyajian, Tabetha; Quiroga-Nuñez, Luis Henry; Millan-Gabet, Rafael; Chiavassa, Andrea; Gallenne, Alexandre; Reynolds, Mark; de Wit, Willem-Jan; Wittkowski, Markus; Millour, Florentin; Gandhi, Poshak; Ramos Almeida, Cristina; Alonso Herrero, Almudena; Packham, Chris; Kishimoto, Makoto; Tristram, Konrad R. W.; Pott, Jörg-Uwe; Surdej, Jean; Buscher, David; Haniff, Chris; Lacour, Sylvestre; Petrov, Romain; Ridgway, Steve; Tuthill, Peter; van Belle, Gerard; Armitage, Phil; Baruteau, Clement; Benisty, Myriam; Bitsch, Bertram; Paardekooper, Sijme-Jan; Pinte, Christophe; Masset, Frederic; Rosotti, Giovanni

    2016-08-01

    The Planet Formation Imager (PFI) project aims to provide a strong scientific vision for ground-based optical astronomy beyond the upcoming generation of Extremely Large Telescopes. We make the case that a breakthrough in angular resolution imaging capabilities is required in order to unravel the processes involved in planet formation. PFI will be optimised to provide a complete census of the protoplanet population at all stellocentric radii and over the age range from 0.1 to 100 Myr. Within this age period, planetary systems undergo dramatic changes and the final architecture of planetary systems is determined. Our goal is to study the planetary birth on the natural spatial scale where the material is assembled, which is the "Hill Sphere" of the forming planet, and to characterise the protoplanetary cores by measuring their masses and physical properties. Our science working group has investigated the observational characteristics of these young protoplanets as well as the migration mechanisms that might alter the system architecture. We simulated the imprints that the planets leave in the disk and study how PFI could revolutionise areas ranging from exoplanet to extragalactic science. In this contribution we outline the key science drivers of PFI and discuss the requirements that will guide the technology choices, the site selection, and potential science/technology tradeoffs.

  4. Comparison between DMSP-OLS and S-NPP Day-Night Band in Correlating with Regional Socio-economic Variables

    NASA Astrophysics Data System (ADS)

    Jing, X.; Shao, X.; Cao, C.; Fu, X.

    2013-12-01

    Night-time light imagery offers a unique view of the Earth's surface. In the past, night-time light data collected by the DMSP-OLS sensors have been used as an efficient means of correlating with global socio-economic activities. With the launch of the Suomi National Polar-orbiting Partnership (S-NPP) satellite in October 2011, the Day Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard S-NPP represents a major advancement in night-time imaging capabilities, surpassing its predecessor DMSP-OLS in radiometric accuracy, spatial resolution, and geometric quality. In this paper, we compared the performance of DNB and DMSP images in correlating with regional socio-economic activities and analyzed the leading causes of the differences. The correlation coefficients between socio-economic variables, such as population and regional GDP, and the characteristic variables derived from the night-time light images of DNB and DMSP at the provincial level in China were computed as performance metrics for comparison. In general, the correlation between DNB data and socio-economic data is better than that of DMSP data. To explain the difference in correlation, we further analyzed the effects of several factors, such as radiometric saturation and quantization of DMSP data, low spatial resolution, different data acquisition times between DNB and DMSP images, and differences in the transformation used to convert digital number (DN) values to radiance.
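
    The comparison metric above reduces to a correlation between a per-province night-light statistic and a socio-economic variable; a minimal sketch with made-up numbers:

    ```python
    # Sketch: correlating a night-light statistic with a socio-economic variable.
    import numpy as np

    sum_of_lights = np.array([4.1, 9.8, 2.5, 7.3, 12.0, 5.6])  # per-province totals
    gdp = np.array([1.2, 3.1, 0.7, 2.4, 4.0, 1.9])             # regional GDP

    r = np.corrcoef(sum_of_lights, gdp)[0, 1]
    print(f"r = {r:.2f}")   # repeat with DMSP- and DNB-derived totals to compare
    ```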

  5. Digital imaging and remote sensing image generator (DIRSIG) as applied to NVESD sensor performance modeling

    NASA Astrophysics Data System (ADS)

    Kolb, Kimberly E.; Choi, Hee-sue S.; Kaur, Balvinder; Olson, Jeffrey T.; Hill, Clayton F.; Hutchinson, James A.

    2016-05-01

    The US Army's Communications Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (referred to as NVESD) is developing a virtual detection, recognition, and identification (DRI) testing methodology using simulated imagery as a means of augmenting the field testing component of sensor performance evaluation, which is expensive, resource intensive, time consuming, and limited to the available target(s) and existing atmospheric visibility and environmental conditions at the time of testing. Existing simulation capabilities such as the Digital Imaging Remote Sensing Image Generator (DIRSIG) and NVESD's Integrated Performance Model Image Generator (NVIPM-IG) can be combined with existing detection algorithms to reduce cost/time, minimize testing risk, and allow virtual/simulated testing using full spectral and thermal object signatures, as well as those collected in the field. NVESD has developed an end-to-end capability to demonstrate the feasibility of this approach. Simple detection algorithms have been used on the degraded images generated by NVIPM-IG to determine the relative performance of the algorithms on both DIRSIG-simulated and collected images. Evaluating the degree to which the algorithm performance agrees between simulated versus field collected imagery is the first step in validating the simulated imagery procedure.

  6. NASA-NOAA's Suomi NPP Satellite Captures Night-time Look at Cyclone Felleng

    NASA Image and Video Library

    2017-12-08

    NASA-NOAA's Suomi NPP satellite captured this false-colored night-time image of Cyclone Felleng during the night on Jan. 28, 2013. Felleng is located in the Southern Indian Ocean, and is northwest of Madagascar. The image revealed some pretty cold overshooting tops, topping at ~170K. The image shows some interesting gravity waves propagating out from the storm in both the thermal and visible imagery. For full storm history on NASA's Hurricane Web Page, visit: www.nasa.gov/mission_pages/hurricanes/archives/2013/h2013... Credit: William Straka, UWM/NASA/NOAA

  7. NASA-NOAA's Suomi NPP Satellite Captures Night-time Look at Cyclone Felleng

    NASA Image and Video Library

    2013-01-31

    NASA-NOAA's Suomi NPP satellite captured this false-colored night-time image of Cyclone Felleng during the night on Jan. 28, 2013. Felleng is located in the Southern Indian Ocean, and is northwest of Madagascar. The image revealed some pretty cold overshooting tops, topping at ~170K. The image shows some interesting gravity waves propagating out from the storm in both the thermal and visible imagery. For full storm history on NASA's Hurricane Web Page, visit: www.nasa.gov/mission_pages/hurricanes/archives/2013/h2013... Credit: William Straka, UWM/NASA/NOAA

  8. Image processing analysis of traditional Gestalt vision experiments

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2002-06-01

    In the late 19th century, Gestalt psychology rebelled against the popular new science of psychophysics. The Gestalt revolution used many fascinating visual examples to illustrate that the whole is greater than the sum of all the parts. Color constancy was an important example. The physical interpretation of sensations and their quantification by JNDs and Weber fractions were met with innumerable examples in which two 'identical' physical stimuli did not look the same. The fact that large changes in the color of the illumination failed to change color appearance in real scenes demanded something more than quantifying the psychophysical response of a single pixel. The debate continues today with proponents of both physical, pixel-based colorimetry and perceptual, image-based cognitive interpretations. Modern instrumentation has made colorimetric pixel measurement universal. As well, new examples of unconscious inference continue to be reported in the literature. Image processing provides a new way of analyzing familiar Gestalt displays. Since the pioneering experiments by Fergus Campbell and Land, we know that human vision has independent spatial channels and independent color channels. Color matching data from color constancy experiments agree with spatial comparison analysis. In this analysis, simple spatial processes can explain the different appearances of 'identical' stimuli by analyzing the multiresolution spatial properties of their surrounds. Benary's Cross, White's Effect, the Checkerboard Illusion and the Dungeon Illusion can all be understood by the analysis of their low-spatial-frequency components. Just as with color constancy, these Gestalt images are most simply described by the analysis of spatial components. Simple spatial mechanisms account for the appearance of 'identical' stimuli in complex scenes. It does not require complex, cognitive processes to calculate appearances in familiar Gestalt experiments.
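
    A minimal sketch of the low-spatial-frequency analysis invoked above: two physically identical gray patches are given different surrounds, and a heavy Gaussian blur (the low-frequency component) already distinguishes their local context. The stimulus is a made-up simplification, not one of the named displays:

    ```python
    # Sketch: identical patches, different surrounds, different low-pass context.
    import cv2
    import numpy as np

    img = np.zeros((200, 400), np.uint8)
    img[:, 200:] = 255                       # dark vs light surround
    img[80:120, 80:120] = 128                # identical gray patch, dark side
    img[80:120, 280:320] = 128               # identical gray patch, light side

    low = cv2.GaussianBlur(img, (0, 0), sigmaX=25)   # low-frequency component
    print(low[100, 100], low[100, 300])              # unequal local context
    ```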

  9. Development of a Configurable Growth Chamber with a Computer Vision System to Study Circadian Rhythm in Plants

    PubMed Central

    Navarro, Pedro J.; Fernández, Carlos; Weiss, Julia; Egea-Cortines, Marcos

    2012-01-01

    Plant development is the result of an endogenous morphogenetic program that integrates environmental signals. The so-called circadian clock is a set of genes that integrates environmental inputs into an internal pacing system that gates growth and other outputs. Study of circadian growth responses requires high sampling rates to detect changes in growth and avoid aliasing. We have developed a flexible configurable growth chamber comprising a computer vision system that allows sampling rates ranging between one image per 30 s to hours/days. The vision system has a controlled illumination system, which allows the user to set up different configurations. The illumination system used emits a combination of wavelengths ensuring the optimal growth of species under analysis. In order to obtain high contrast of captured images, the capture system is composed of two CCD cameras, for day and night periods. Depending on the sample type, a flexible image processing software calculates different parameters based on geometric calculations. As a proof of concept we tested the system in three different plant tissues, growth of petunia- and snapdragon (Antirrhinum majus) flowers and of cladodes from the cactus Opuntia ficus-indica. We found that petunia flowers grow at a steady pace and display a strong growth increase in the early morning, whereas Opuntia cladode growth turned out not to follow a circadian growth pattern under the growth conditions imposed. Furthermore we were able to identify a decoupling of increase in area and length indicating that two independent growth processes are responsible for the final size and shape of the cladode. PMID:23202214

  10. Vision-based obstacle recognition system for automated lawn mower robot development

    NASA Astrophysics Data System (ADS)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have recently been widely used in various types of applications. Classification and recognition of a specific object using a vision system involve challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images is very important for any intelligent system such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to the development of an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was on the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system, and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.

  11. Dual Use of Image Based Tracking Techniques: Laser Eye Surgery and Low Vision Prosthesis

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.; Barton, R. Shane

    1994-01-01

    With a concentration on Fourier optics pattern recognition, we have developed several methods of tracking objects in dynamic imagery to automate certain space applications such as orbital rendezvous and spacecraft capture, or planetary landing. We are developing two of these techniques for Earth applications in real-time medical image processing. The first is warping of a video image, developed to evoke shift invariance to scale and rotation in correlation pattern recognition. The technology is being applied to compensation for certain field defects in low vision humans. The second is using the optical joint Fourier transform to track the translation of unmodeled scenes. Developed as an image fixation tool to assist in calculating shape from motion, it is being applied to tracking motions of the eyeball quickly enough to keep a laser photocoagulation spot fixed on the retina, thus avoiding collateral damage.
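
    A minimal sketch of frequency-domain translation tracking, a digital cousin of the optical joint Fourier transform tracker described above, using OpenCV's phase correlation as a stand-in for the optical correlator (the filenames are hypothetical):

    ```python
    # Sketch: whole-scene translation estimated by phase correlation.
    import cv2
    import numpy as np

    prev = cv2.imread("retina_t0.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    curr = cv2.imread("retina_t1.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

    (dx, dy), response = cv2.phaseCorrelate(prev, curr)
    print(f"scene moved ({dx:.2f}, {dy:.2f}) px, confidence {response:.2f}")
    ```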

  12. Dual use of image based tracking techniques: Laser eye surgery and low vision prosthesis

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1994-01-01

    With a concentration on Fourier optics pattern recognition, we have developed several methods of tracking objects in dynamic imagery to automate certain space applications such as orbital rendezvous and spacecraft capture, or planetary landing. We are developing two of these techniques for Earth applications in real-time medical image processing. The first is warping of a video image, developed to evoke shift invariance to scale and rotation in correlation pattern recognition. The technology is being applied to compensation for certain field defects in low vision humans. The second is using the optical joint Fourier transform to track the translation of unmodeled scenes. Developed as an image fixation tool to assist in calculating shape from motion, it is being applied to tracking motions of the eyeball quickly enough to keep a laser photocoagulation spot fixed on the retina, thus avoiding collateral damage.

  13. Visions of Our Planet's Atmosphere, Land and Oceans Electronic-Theater 2001

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    The NASA/NOAA/AMS Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to Fredericton, New Brunswick. Drop in on the Kennedy Space Center and Park City, Utah, site of the 2002 Olympics, using 1 m IKONOS "Spy Satellite" data. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest US and international global satellite weather movies, including hurricanes and tornadoes. See the latest spectacular images from NASA/NOAA and Canadian remote sensing missions like Terra, GOES, TRMM, SeaWiFS, Landsat 7, and Radarsat, visualized and explained. See how High Definition Television (HDTV) is revolutionizing the way we communicate science, in cooperation with the American Museum of Natural History in NYC. See dust storms in Africa and smoke plumes from fires in Mexico. See visualizations featured on Newsweek, TIME, National Geographic, and Popular Science covers and on national and international network TV. New visualization tools allow us to roam and zoom through massive global images, e.g., Landsat tours of the US, Africa, and New Zealand showing desert and mountain geology as well as seasonal changes in vegetation. See animations of the polar ice packs and the motion of gigantic Antarctic icebergs from SeaWinds data. Spectacular new visualizations of the global atmosphere and oceans are shown. See massive dust storms sweeping across Africa. See vortexes and currents in the global oceans that bring up the nutrients to feed tiny plankton and draw the fish, whales and fishermen. See how the ocean blooms in response to these currents and El Nino/La Nina climate changes. The demonstration is interactively driven by an SGI Onyx II graphics supercomputer with four CPUs, 8 gigabytes of RAM, and a terabyte of disk, with multiple projectors on a giant screen. See the city lights, fishing fleets, gas flares and bio-mass burning of the Earth at night observed by the "night-vision" DMSP

  14. Night Terrors (For Parents)

    MedlinePlus

    What Are Night Terrors? Most parents have comforted their child after the ...

  15. DDGIPS: a general image processing system in robot vision

    NASA Astrophysics Data System (ADS)

    Tian, Yuan; Ying, Jun; Ye, Xiuqing; Gu, Weikang

    2000-10-01

    Real-time image processing is the key work in robot vision. Owing to hardware limitations, many algorithm-oriented firmware systems were designed in the past, but their architectures were not flexible enough to serve as multi-algorithm development systems. The rapid development of microelectronics has produced many high-performance DSP chips and high-density FPGA chips, making it possible to construct a more flexible architecture for real-time image processing systems. In this paper, a Double DSP General Image Processing System (DDGIPS) is presented. We construct a two-DSP-based FPGA-computational system with two TMS320C6201s. The TMS320C6x devices are fixed-point processors based on an advanced VLIW CPU, which has eight functional units, including two multipliers and six arithmetic logic units. These features make the C6x a good candidate for a general-purpose system. In our system, the two TMS320C6201s each have a local memory space, and they also have a shared system memory space which enables them to intercommunicate and exchange data efficiently. At the same time, they can be directly interconnected in a star-shaped architecture. All of this is under the control of an FPGA group. As the core of the system, the FPGA plays a very important role: it takes charge of DSP control, DSP communication, memory space access arbitration, and communication between the system and the host machine. By reconfiguring the FPGA, all of the interconnections between the two DSPs or between DSP and FPGA can be changed. In this way, users can easily rebuild the real-time image processing system according to the data stream and the task of the application, gaining great flexibility.

  17. Visions of our Planet's Atmosphere, Land and Oceans: NASA/NOAA Electronic Theater 2002

    NASA Technical Reports Server (NTRS)

    Hasler, Fritz; Starr, David (Technical Monitor)

    2002-01-01

    The NASA/NOAA Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to the 2002 Winter Olympic stadium, site of the Olympic opening and closing ceremonies in Salt Lake City. Fly in and through Olympic alpine venues using 1 m IKONOS "Spy Satellite" data. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest US and international global satellite weather movies, including hurricanes and "tornadoes". See the latest visualizations of spectacular images from NASA/NOAA remote sensing missions like Terra, GOES, TRMM, SeaWiFS, and Landsat 7, including new 1-min GOES rapid-scan image sequences of the Nov 9, 2001 Midwest tornadic thunderstorms, and have them explained. See how High-Definition Television (HDTV) is revolutionizing the way we communicate science (in cooperation with the American Museum of Natural History in NYC). See dust storms in Africa and smoke plumes from fires in Mexico. See visualizations featured on the covers of Newsweek, TIME, National Geographic, and Popular Science and on national and international network TV. New computer software tools allow us to roam and zoom through massive global images, e.g., Landsat tours of the US and Africa, showing desert and mountain geology as well as seasonal changes in vegetation. See animations of the polar ice packs and the motion of gigantic Antarctic icebergs from SeaWinds data. Spectacular new visualizations of the global atmosphere and oceans are shown. See vortexes and currents in the global oceans that bring up the nutrients to feed tiny algae and draw the fish, whales, and fishermen. See how the ocean blooms in response to these currents and El Nino/La Nina climate changes. See the city lights, fishing fleets, gas flares, and bio-mass burning of the Earth at night observed by the "night-vision" DMSP military satellite.

  18. Computer vision camera with embedded FPGA processing

    NASA Astrophysics Data System (ADS)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a computer vision camera based on an open architecture implemented in an FPGA. The system is targeted at real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA is a medium-size device equivalent to 25,000 logic gates, connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a hardware description language (such as VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
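
    The abstract names the demonstration algorithm but not its implementation, which lives in the FPGA as synthesized hardware. As a rough software counterpart, the following Python sketch (the function name, scales, and threshold are illustrative assumptions, not values from the paper) shows one way to realize multi-scale Laplacian-of-Gaussian edge detection via zero crossings:

    ```python
    import numpy as np
    from scipy import ndimage

    def log_edges(image, sigmas=(1.0, 2.0, 4.0), threshold=0.01):
        """Multi-scale Laplacian-of-Gaussian edge detection.

        Edges are taken at zero crossings of the LoG response whose
        magnitude exceeds a fraction of the maximum; `sigmas` selects
        the scales of the multi-scale pyramid.
        """
        image = image.astype(np.float64)
        edge_maps = []
        for sigma in sigmas:
            log = ndimage.gaussian_laplace(image, sigma=sigma)
            # A zero crossing exists where the sign of the response
            # changes between a pixel and its x- or y-neighbor.
            zc = ((np.sign(log[:, :-1]) != np.sign(log[:, 1:]))[:-1, :] |
                  (np.sign(log[:-1, :]) != np.sign(log[1:, :]))[:, :-1])
            strength = np.abs(log)[:-1, :-1]
            edge_maps.append(zc & (strength > threshold * strength.max()))
        return edge_maps
    ```

    On the FPGA each scale would typically become a separable convolution pipeline fed directly by the imager; here the scales are simply looped over in software.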

  19. High-fidelity video and still-image communication based on spectral information: natural vision system and its applications

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Masahiro; Haneishi, Hideaki; Fukuda, Hiroyuki; Kishimoto, Junko; Kanazawa, Hiroshi; Tsuchida, Masaru; Iwama, Ryo; Ohyama, Nagaaki

    2006-01-01

    Alongside the great advances in high-resolution and large-screen imaging technology, color is now receiving considerable attention as an aspect distinct from image resolution. Conventional imaging systems have difficulty reproducing the original color of a subject, and this obstructs applications of visual communication systems in telemedicine, electronic commerce, and digital museums. To break through the limitations of conventional RGB three-primary systems, the "Natural Vision" project aims at an innovative video and still-image communication technology with high-fidelity color reproduction capability based on spectral information. This paper summarizes the results of the NV project, including the development of multispectral and multiprimary imaging technologies and experimental investigations of applications to medicine, digital archives, electronic commerce, and computer graphics.
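
    The abstract only summarizes the project, so the following is merely a minimal sketch of the core idea behind spectrum-based color reproduction: render per-pixel spectra to colorimetric XYZ under a chosen illuminant, rather than committing to camera RGB. The function name and sampling convention are assumptions for illustration, not the Natural Vision system's actual pipeline:

    ```python
    import numpy as np

    def spectra_to_xyz(reflectance, illuminant, cmf, wavelengths):
        """Render spectral pixel data to CIE XYZ.

        reflectance: (H, W, N) per-pixel spectral reflectance sampled
                     at the N `wavelengths` (nm).
        illuminant:  (N,) spectral power distribution of the viewing light.
        cmf:         (N, 3) CIE 1931 color matching functions (xbar, ybar, zbar).
        """
        dlam = np.gradient(wavelengths)          # integration weights
        radiance = reflectance * illuminant      # (H, W, N) reflected light
        xyz = np.einsum('hwn,nc,n->hwc', radiance, cmf, dlam)
        # Normalize so a perfect white reflector has Y = 100.
        k = 100.0 / np.sum(illuminant * cmf[:, 1] * dlam)
        return k * xyz
    ```

    From XYZ, the image can then be mapped to whatever display primaries (three or more) are available, which is what makes the reproduction display-independent.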

  20. The Accuracy, Night-to-Night Variability, and Stability of Frontopolar Sleep Electroencephalography Biomarkers

    PubMed Central

    Levendowski, Daniel J.; Ferini-Strambi, Luigi; Gamaldo, Charlene; Cetel, Mindy; Rosenberg, Robert; Westbrook, Philip R.

    2017-01-01

    Study Objectives: To assess the validity of sleep architecture and sleep continuity biomarkers obtained from a portable, multichannel forehead electroencephalography (EEG) recorder. Methods: Forty-seven subjects simultaneously underwent polysomnography (PSG) while wearing a multichannel frontopolar EEG recording device (Sleep Profiler). The PSG recordings, independently staged by five registered polysomnographic technologists, were compared for agreement with the autoscored sleep EEG before and after expert review. To assess the night-to-night variability and first-night bias, 2 nights of self-applied, in-home EEG recordings obtained from a clinical cohort of 63 patients were used (41% with a diagnosis of insomnia/depression, 35% with insomnia/obstructive sleep apnea, and 17.5% with all three). The between-night stability of abnormal sleep biomarkers was determined by comparing each night's data to normative reference values. Results: The mean overall interscorer agreement between the five technologists was 75.9%, and the mean kappa score was 0.70. After visual review, the mean kappa score between the autostaging and the five raters was 0.67, and staging agreed with a majority of scorers in at least 80% of the epochs for all stages except stage N1. Sleep spindles, autonomic activation, and stage N3 exhibited the least between-night variability (P < .0001) and the strongest between-night stability. Antihypertensive medications were found to have a significant effect on sleep quality biomarkers (P < .02). Conclusions: A strong agreement was observed between the automated sleep staging and human-scored PSG. One night's recording appeared sufficient to characterize abnormal slow wave sleep, sleep spindle activity, and heart rate variability in patients, but a 2-night average improved the assessment of all other sleep biomarkers. Commentary: Two commentaries on this article appear in this issue on pages 771 and 773. Citation: Levendowski DJ, Ferini-Strambi L, Gamaldo C, Cetel M
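
    The kappa scores reported above correct raw epoch-by-epoch agreement for agreement expected by chance. As a reminder of the computation (a generic sketch, not the study's scoring software; the default stage labels are the usual AASM set and are an assumption here):

    ```python
    import numpy as np

    def cohens_kappa(stages_a, stages_b, labels=('W', 'N1', 'N2', 'N3', 'R')):
        """Chance-corrected agreement between two scorers' hypnograms."""
        a = np.asarray(stages_a)
        b = np.asarray(stages_b)
        p_o = np.mean(a == b)                                # observed agreement
        # Chance agreement: product of the scorers' marginal stage rates.
        p_e = sum(np.mean(a == s) * np.mean(b == s) for s in labels)
        return (p_o - p_e) / (1.0 - p_e)
    ```

    With 75.9% raw agreement, a kappa of 0.70 indicates that most of that agreement is not attributable to chance.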

  1. Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments

    PubMed Central

    Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun

    2017-01-01

    In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem, and it is even more difficult for images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then utilized as the input images for stereo vision. The performance of the method is evaluated by the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out a real robot manipulation task. PMID:28629139
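
    The paper's full imaging model is not reproduced in the abstract; the sketch below only illustrates the generic form such model-based descattering takes once the backscatter field B and the transmission t have been predicted from an illumination/medium model (function and parameter names are hypothetical):

    ```python
    import numpy as np

    def descatter(image, backscatter, attenuation, floor=1e-3):
        """Invert a simple scattering model I = J * t + B.

        image:       observed intensity (H, W), float in [0, 1]
        backscatter: B, the non-uniform backscatter field predicted by
                     the illumination/medium model, same shape as image
        attenuation: t, per-pixel transmission of the object radiance
        Returns the estimated object radiance J, ready to feed into a
        standard stereo matcher.
        """
        t = np.maximum(attenuation, floor)   # avoid division blow-up
        restored = (image - backscatter) / t
        return np.clip(restored, 0.0, 1.0)
    ```

    The hard part, which the paper addresses, is predicting B and t for an active light source close to the cameras; the inversion itself is then per-pixel arithmetic.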

  2. Limb darkening in Venus night-side disk as viewed from Akatsuki IR2

    NASA Astrophysics Data System (ADS)

    Satoh, Takehiko; Nakakushi, Takashi; Sato, Takao M.; Hashimoto, George L.

    2017-10-01

    The night-side hemisphere of Venus exhibits dark and bright regions as a result of spatially inhomogeneous cloud opacity, backlit by infrared radiation from the deeper atmosphere. The 2-μm camera (IR2) onboard Akatsuki, Japan's Venus Climate Orbiter, is equipped with three narrow-band filters (1.735, 2.26, and 2.32 μm) to image the Venus night-side disk in well-known transparency windows of the CO2 atmosphere (Allen and Crawford 1984). In general, a cloud feature appears brightest when it is in the disk center and becomes darker as the zenith angle of the emergent light increases. Such limb darkening was observed with Galileo/NIMS and mathematically approximated (Carlson et al., 1993). Limb-darkening correction helps to identify branches, in a 1.74-μm vs. 2.3-μm radiance scatter plot, each of which corresponds to a group of aerosols with similar properties. We analyzed Akatsuki/IR2 images to characterize the limb darkening for the three night-side filters. There is, however, contamination from the intense day-side disk blurred by IR2's point spread function (PSF). It is found that infrared light can be multiply reflected within the Si substrate of the IR2 detector (a 1024 x 1024-pixel PtSi array), causing an elongated tail in the actual PSF. We treated this in two different ways. One is to mathematically approximate the PSF (with a combination of modified Lorentz functions); the other is to subtract the 2.26-μm image from the 2.32-μm image so that the blurred light pattern can be obtained directly. By comparing results from these two methods, we are able to reasonably clean up the night-side images, and the limb darkening is extracted. Physical interpretation of the limb darkening, as well as "true" time variations of cloud brightness, will be presented and discussed.
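
    The abstract does not give the functional form of the limb-darkening law; a common single-parameter choice in disk photometry is a Minnaert-style law, I(mu) = I0 * mu**k, with mu the cosine of the emergent zenith angle. The sketch below (an assumption for illustration, not necessarily the Carlson et al. parameterization) fits it by log-linear regression:

    ```python
    import numpy as np

    def fit_minnaert(radiance, mu):
        """Fit I(mu) = I0 * mu**k to disk radiances.

        radiance: observed radiances of comparison pixels
        mu:       cosine of the emergent zenith angle for each pixel
        Returns (I0, k); dividing an image by mu**k then flattens the
        limb darkening so cloud-opacity contrasts can be compared
        across the disk.
        """
        good = (radiance > 0) & (mu > 0)
        # Linear fit in log-log space: log I = k * log mu + log I0
        k, log_i0 = np.polyfit(np.log(mu[good]), np.log(radiance[good]), 1)
        return np.exp(log_i0), k
    ```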

  3. Mini-review: Far peripheral vision.

    PubMed

    Simpson, Michael J

    2017-11-01

    The region of far peripheral vision, beyond 60 degrees of visual angle, is important to the evaluation of peripheral dark shadows (negative dysphotopsia) seen by some intraocular lens (IOL) patients. Theoretical calculations show that the limited diameter of an IOL affects ray paths at large angles, leading to a dimming of the main image for small pupils, and to peripheral illumination by light bypassing the IOL for larger pupils. These effects are rarely bothersome, and cataract surgery is highly successful, but there is a need to improve the characterization of far peripheral vision, for both pseudophakic and phakic eyes. Perimetry is the main quantitative test, but the purpose is to evaluate pathologies rather than characterize vision (and object and image regions are no longer uniquely related in the pseudophakic eye). The maximum visual angle is approximately 105°, but there is limited information about variations with age, race, or refractive error (in case there is an unexpected link with the development of myopia), or about how clear cornea, iris location, and the limiting retina are related. Also, the detection of peripheral motion is widely recognized to be important, yet rarely evaluated. Overall, people rarely complain specifically about this visual region, but with "normal" vision including an IOL for >5% of people, and increasing interest in virtual reality and augmented reality, there are new reasons to characterize peripheral vision more completely. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. The Effect of a Monocular Helmet-Mounted Display on Aircrew Health: A Longitudinal Cohort Study of Apache AH Mk 1 Pilots -(Vision and Handedness)

    DTIC Science & Technology

    2015-05-19

    reported by U.S. Army aviators using NVG for night flights (Glick and Moser, 1974). It was initially, and incorrectly, called "brown eye syndrome" ... [the remainder of the excerpt is fragments of a symptom-frequency questionnaire: eye irritation, eye pain, blurred vision, dry eye, and light sensitivity, each rated Never/Rarely/Occasionally/Often, plus a question about symptoms since the last contact lens review]

  5. ROBOSIGHT: Robotic Vision System For Inspection And Manipulation

    NASA Astrophysics Data System (ADS)

    Trivedi, Mohan M.; Chen, ChuXin; Marapane, Suresh

    1989-02-01

    Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are utilized for developing a higher-level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance, and navigation. In this paper we describe efforts directed towards developing a vision system useful for performing various robotic inspection and manipulation tasks. The system utilizes gray-scale images and can be viewed as a model-based system. It includes general-purpose image analysis modules as well as special-purpose, task-dependent object status recognition modules. Experiments are described to verify the robust performance of the integrated system using a robotic testbed.

  6. A discrepancy within primate spatial vision and its bearing on the definition of edge detection processes in machine vision

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.

    1990-01-01

    The visual perception of form information is considered to be based on the functioning of simple and complex neurons in the primate striate cortex. However, a review of the physiological data on these brain cells cannot be harmonized with either the perceptual spatial-frequency performance of primates or the performance necessary for form perception in humans. This discrepancy, together with recent interest in cortical-like and perceptual-like processing in image coding and machine vision, prompted a series of image processing experiments intended to help define the selection of image operators. The experiments were aimed at determining operators which could be used to detect edges in a computational manner consistent with the visual perception of structure in images. Fundamental issues were the selection of size (peak spatial frequency) and of circular versus oriented operators (or some combination). In a previous study, circular difference-of-Gaussian (DOG) operators with peak spatial frequency responses at about 11 and 33 cyc/deg were found to capture the primary structural information in images. Here, larger-scale circular DOG operators were explored; they led to severe loss of image structure and introduced spatial dislocations (due to blur) in structure, which is not consistent with visual perception. Orientation-sensitive operators (akin to one class of simple cortical neurons) introduced ambiguities of edge extent regardless of the scale of the operator. For machine vision schemes which are functionally similar to natural form perception, two circularly symmetric, very high spatial frequency channels appear to be necessary and sufficient for a wide range of natural images. Such a machine vision scheme is most similar to the physiological performance of the primate lateral geniculate nucleus rather than the striate cortex.
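
    A circular DOG channel of the kind discussed is straightforward to prototype. The sketch below is illustrative only; in particular, the rule mapping a desired peak spatial frequency to a center Gaussian width is a rough assumption of mine, not the tuning used in the study:

    ```python
    import numpy as np
    from scipy import ndimage

    def dog_channel(image, peak_cpd, pixels_per_degree, ratio=1.6):
        """Circular difference-of-Gaussian bandpass channel.

        peak_cpd:          desired peak spatial frequency (cycles/degree)
        pixels_per_degree: sampling density of the image
        ratio:             surround/center sigma ratio; 1.6 approximates
                           a Laplacian of Gaussian
        """
        # Hypothetical tuning rule: center sigma inversely proportional
        # to the peak frequency, expressed in pixels.
        sigma_c = pixels_per_degree / (2.0 * np.pi * peak_cpd)
        sigma_s = ratio * sigma_c
        img = image.astype(np.float64)
        return (ndimage.gaussian_filter(img, sigma_c) -
                ndimage.gaussian_filter(img, sigma_s))
    ```

    Two such channels at different peak frequencies, combined by zero-crossing detection, would approximate the two-channel scheme the study argues is necessary and sufficient.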

  7. From the Night Side

    NASA Image and Video Library

    2015-09-14

    The night sides of Saturn and Tethys are dark places indeed. We know that shadows are darker areas than sunlit areas, and in space, with no air to scatter the light, shadows can appear almost totally black. Tethys (660 miles or 1,062 kilometers across) is just barely seen in the lower left quadrant of this image below the ring plane and has been brightened by a factor of three to increase its visibility. The wavy outline of Saturn's polar hexagon is visible at top center. This view looks toward the sunlit side of the rings from about 10 degrees above the ring plane. The image was taken with the Cassini spacecraft wide-angle camera on Jan. 15, 2015 using a spectral filter which preferentially admits wavelengths of near-infrared light centered at 752 nanometers. The view was obtained at a distance of approximately 1.5 million miles (2.4 million kilometers) from Saturn. Image scale is 88 miles (141 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA18333

  8. On the performances of computer vision algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.

    2012-01-01

    Computer vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is growing interest in computer vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing computer vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task since these devices have poor image sensors and optics as well as limited processing power. In this paper we consider different algorithms covering classic computer vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performance of the mobile platforms involved: Nokia N900, LG Optimus One, Samsung Galaxy SII.
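
    The paper benchmarks native implementations on the handsets themselves. Purely as an illustration of the kind of per-frame measurement involved, here is a Python sketch timing two of the named task classes with stock OpenCV components; the specific detectors (ORB, Haar cascade) are stand-ins, not necessarily the algorithms evaluated in the paper:

    ```python
    import time
    import cv2

    def benchmark(gray):
        """Time two classic vision tasks on one grayscale frame."""
        orb = cv2.ORB_create(nfeatures=500)          # keypoint extraction
        face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

        t0 = time.perf_counter()
        keypoints = orb.detect(gray, None)
        t1 = time.perf_counter()
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5)
        t2 = time.perf_counter()

        return {'keypoints': len(keypoints), 'kp_ms': 1e3 * (t1 - t0),
                'faces': len(faces), 'face_ms': 1e3 * (t2 - t1)}
    ```

    On a phone-class CPU, the same measurement loop makes the sensor and processing limitations discussed above directly visible as per-frame milliseconds.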

  9. A smart telerobotic system driven by monocular vision

    NASA Technical Reports Server (NTRS)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.
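
    The abstract predates today's standard toolkits, so the following is not the system's own algorithm; it is a minimal modern sketch of the same idea, recovering a target's pose from a single camera image given knowledge of the target's solid geometry, via OpenCV's perspective-n-point solver:

    ```python
    import numpy as np
    import cv2

    def pose_from_single_view(object_pts, image_pts, K):
        """Recover target pose from one image plus known geometry.

        object_pts: (N, 3) feature locations in the target's own frame,
                    taken from its solid model (N >= 4, well spread)
        image_pts:  (N, 2) corresponding pixel coordinates in the image
        K:          (3, 3) camera intrinsic matrix
        """
        ok, rvec, tvec = cv2.solvePnP(
            object_pts.astype(np.float64), image_pts.astype(np.float64),
            K, distCoeffs=None)
        if not ok:
            raise RuntimeError('pose estimation failed')
        R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 matrix
        return R, tvec               # target pose in the camera frame
    ```

    With the pose in hand, the end-effector-mounted camera can be servoed toward the object without any predetermined guidance targets, which is the point the abstract makes.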

  10. Feedforward object-vision models only tolerate small image variations compared to human

    PubMed Central

    Ghodrati, Masoud; Farzmahdi, Amirhossein; Rajaei, Karim; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi

    2014-01-01

    Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has constantly been under intense investigation. Computational modeling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well in image categorization under more complex image variations. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that only under low-level image variations do the models perform similarly to humans in categorization tasks. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modeling. We show that this approach is not of significant help in solving the computational crux of object recognition (i.e., invariant object recognition) when the identity-preserving image variations become more complex. PMID:25100986

  11. The role of external features in face recognition with central vision loss: A pilot study

    PubMed Central

    Bernard, Jean-Baptiste; Chung, Susana T.L.

    2016-01-01

    Purpose We evaluated how the performance for recognizing familiar face images depends on the internal (eyebrows, eyes, nose, mouth) and external face features (chin, outline of face, hairline) in individuals with central vision loss. Methods In Experiment 1, we measured eye movements for four observers with central vision loss to determine whether they fixated more often on the internal or the external features of face images while attempting to recognize the images. We then measured the accuracy for recognizing face images that contained only the internal, only the external, or both internal and external features (Experiment 2), and for hybrid images where the internal and external features came from two different source images (Experiment 3), for five observers with central vision loss and four age-matched control observers. Results When recognizing familiar face images, approximately 40% of the fixations of observers with central vision loss were centered on the external features of faces. The recognition accuracy was higher for images containing only external features (66.8±3.3% correct) than for images containing only internal features (35.8±15.0%), a finding contradicting that of control observers. For hybrid face images, observers with central vision loss responded more accurately to the external features (50.4±17.8%) than to the internal features (9.3±4.9%), while control observers did not show the same bias toward responding to the external features. Conclusions Contrary to people with normal vision who rely more on the internal features of face images for recognizing familiar faces, individuals with central vision loss show a higher dependence on using external features of face images. PMID:26829260

  12. Quality grading of Atlantic salmon (Salmo salar) by computer vision.

    PubMed

    Misimi, E; Erikson, U; Skavhaug, A

    2008-06-01

    In this study, we present a promising method for computer vision-based quality grading of whole Atlantic salmon (Salmo salar). Using computer vision, it was possible to differentiate among quality grades of Atlantic salmon based on the external geometrical information contained in the fish images. Initially, before image acquisition, the fish were subjectively graded and labeled into grading classes by a qualified human inspector at the processing plant. Prior to classification, the salmon images were segmented into binary images, and feature extraction was then performed on the geometrical parameters of the fish from the grading classes. The classification algorithm was a threshold-based classifier designed using linear discriminant analysis. The performance of the classifier was tested using the leave-one-out cross-validation method, and the results showed good agreement between classification by human inspectors and by computer vision: the computer vision-based method correctly classified 90% of the salmon in the data set, as compared with the classification by the human inspector. Overall, it was shown that computer vision can be used as a powerful tool to grade Atlantic salmon into quality grades in a fast and nondestructive manner with a relatively simple classifier algorithm. The low cost of implementing today's advanced computer vision solutions makes the method feasible for industrial purposes in fish plants, as it can replace the manual labor on which grading tasks still rely.
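
    The two ingredients named here, a linear-discriminant classifier and leave-one-out cross-validation, map directly onto standard library calls. A minimal sketch using scikit-learn (the feature definitions are assumptions, since the paper's exact geometric parameters are not listed in the abstract):

    ```python
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    def grade_salmon(features, grades):
        """Leave-one-out accuracy of an LDA grader.

        features: (n_fish, n_features) geometry measured from the binary
                  fish silhouettes (e.g., length/width ratios, area,
                  contour descriptors; illustrative choices)
        grades:   (n_fish,) quality labels assigned by the human inspector
        """
        clf = LinearDiscriminantAnalysis()
        scores = cross_val_score(clf, features, grades, cv=LeaveOneOut())
        return scores.mean()   # fraction agreeing with the inspector
    ```

    Leave-one-out is the natural choice at this data-set size: every fish is tested on a classifier trained on all the others, so the 90% figure is not inflated by testing on training data.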

  13. NVSIM: UNIX-based thermal imaging system simulator

    NASA Astrophysics Data System (ADS)

    Horger, John D.

    1993-08-01

    For several years the Night Vision and Electronic Sensors Directorate (NVESD) has been using an internally developed forward-looking infrared (FLIR) simulation program. In response to interest in the simulation part of these projects from other organizations, NVESD has been working on a new version of the simulation, NVSIM, that will be made generally available to the FLIR-using community. NVSIM uses basic FLIR specification data, high-resolution thermal input imagery, and spatial-domain image processing techniques to produce simulated image outputs for a broad variety of FLIRs. It is being built around modular programming techniques to allow simpler addition of more sensor effects. The modularity also allows selective inclusion and exclusion of individual sensor effects at run time. The simulation has been written in the industry-standard ANSI C programming language under the widely used UNIX operating system to make it easily portable to a wide variety of computer platforms.
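
    NVSIM itself is ANSI C and models real FLIR specifications; the toy sketch below only illustrates the stated design point, a chain of sensor-effect modules that can be included or excluded at run time (the effect choices and parameters are invented for illustration):

    ```python
    import numpy as np
    from scipy import ndimage

    def simulate_flir(scene, blur_sigma=1.5, noise_sigma=0.01,
                      sample_step=2, effects=('blur', 'noise', 'sample')):
        """Chain selectable sensor-effect modules over a thermal scene.

        Each named effect can be switched on or off per run, echoing the
        modular design described for NVSIM (the real simulator derives
        FLIR-specific blur, noise, and sampling from specification data).
        """
        out = scene.astype(np.float64)
        if 'blur' in effects:                    # optics/detector blur
            out = ndimage.gaussian_filter(out, blur_sigma)
        if 'noise' in effects:                   # temporal detector noise
            out = out + np.random.normal(0.0, noise_sigma, out.shape)
        if 'sample' in effects:                  # detector sampling grid
            out = out[::sample_step, ::sample_step]
        return out
    ```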

  14. The Night Sky on Mars

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Taking advantage of extra solar energy collected during the day, NASA's Mars Exploration Rover Spirit settled in for an evening of stargazing, photographing the two moons of Mars as they crossed the night sky. This time-lapse composite, acquired the evening of Spirit's martian sol 590 (Aug. 30, 2005) from a perch atop 'Husband Hill' in Gusev Crater, shows Phobos, the brighter moon, on the left, and Deimos, the dimmer moon, on the right. In this sequence of images obtained every 170 seconds, both moons move from top to bottom. The bright star Aldebaran forms a trail on the right, along with some other stars in the constellation Taurus. Most of the other streaks in the image mark the collision of cosmic rays with pixels in the camera.

    Scientists will use images of the two moons to better map their orbital positions, learn more about their composition, and monitor the presence of nighttime clouds or haze. Spirit took the six images that make up this composite with its panoramic camera, using the camera's broadband filter, which was designed specifically for acquiring images under low-light conditions.

  15. Fundus white spots and acquired night blindness due to vitamin A deficiency.

    PubMed

    Genead, Mohamed A; Fishman, Gerald A; Lindeman, Martin

    2009-12-01

    To report a successfully treated case of acquired night blindness associated with fundus white spots secondary to vitamin A deficiency. An ocular examination, electrophysiologic testing, and visual field and OCT examinations were performed on a 61-year-old man with vitamin A deficiency who had previously undergone gastric bypass surgery. The patient was re-evaluated after treatment with high doses of oral vitamin A. The patient was observed to have numerous white spots in the retina of each eye. Best-corrected visual acuity was initially 20/80 in each eye, which improved to 20/40-1 OU after oral vitamin A therapy for 2 months. Full-field electroretinogram (ERG) testing showed non-detectable rod function and reductions of 34% and 41% below the lower limits of normal for the 32-Hz flicker and single-flash cone responses, respectively. Both rod and cone function markedly improved after initiation of vitamin A therapy. Vitamin A deficiency needs to be considered in a patient with white spots of the retina in the presence of poor night vision.

  16. Illumination-based synchronization of high-speed vision sensors.

    PubMed

    Hou, Lei; Kagami, Shingo; Hashimoto, Koichi

    2010-01-01

    To acquire images of dynamic scenes from multiple points of view simultaneously, the acquisition times of vision sensors should be synchronized. This paper describes an illumination-based synchronization method derived from the phase-locked loop (PLL) algorithm. Incident light reaching a vision sensor from an intensity-modulated illumination source serves as the reference signal for synchronization. Analog and digital computation within the vision sensor forms a PLL that regulates the output signal, which corresponds to the vision frame timing, to be synchronized with the reference. Simulated and experimental results show that a 1,000 Hz frame rate vision sensor was successfully synchronized with 32 μs jitter.
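
    The loop itself is the familiar phase-detector / loop-filter / oscillator cycle. As a generic sketch of the digital part (a proportional-integral loop filter; the gains and the framing of the error in units of frames are assumptions, not the paper's values):

    ```python
    def pll_step(phase_error, integrator, kp=0.05, ki=0.005):
        """One update of a discrete proportional-integral PLL.

        phase_error: measured offset between the sensor's local frame
                     timing and the intensity-modulated reference light
        integrator:  accumulated error state carried between frames
        Returns (timing_correction, new_integrator); the correction is
        applied to the next frame interval so the sensor's frame clock
        converges onto the reference modulation.
        """
        integrator = integrator + phase_error
        correction = kp * phase_error + ki * integrator
        return correction, integrator
    ```

    Iterating this each frame is what drives the residual timing error down to the tens-of-microseconds jitter reported above.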

  17. Computer vision cracks the leaf code

    PubMed Central

    Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A.; Wing, Scott L.; Serre, Thomas

    2016-01-01

    Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies. PMID:26951664
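
    The "codebook of visual elements" named in the abstract is in the spirit of a bag-of-visual-words pipeline: cluster local descriptors into a codebook, encode each image as a histogram of codebook words, and train a linear classifier on the histograms. The sketch below shows that generic pipeline with scikit-learn; the descriptor source, codebook size, and classifier are assumptions, and the paper's learned features differ in detail:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    def train_codebook_classifier(descriptor_sets, labels, codebook_size=256):
        """Codebook-style family/order classifier for leaf images.

        descriptor_sets: list of (n_i, d) local descriptor arrays, one
                         per cleared-leaf image
        labels:          family or order label of each image
        """
        kmeans = KMeans(n_clusters=codebook_size, n_init=4).fit(
            np.vstack(descriptor_sets))

        def encode(desc):
            words = kmeans.predict(desc)           # nearest visual word
            hist = np.bincount(words, minlength=codebook_size)
            return hist / max(hist.sum(), 1)       # normalized histogram

        X = np.array([encode(d) for d in descriptor_sets])
        clf = LinearSVC().fit(X, labels)
        return encode, clf                          # encoder + classifier
    ```

    The heat-map visualization described above then amounts to asking, for each image region, which codebook words contributed most to the winning class score.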

  18. Alternatives to Pyrotechnic Distress Signals; Laboratory and Field Studies

    DTIC Science & Technology

    2015-03-01

    using night vision imaging systems (NVIS) with "minus-blue" filtering, the project recommends additional research and testing leading to the inclusion... [the remainder of the excerpt is table-of-contents and figure-list fragments: background images; example of image capture from a radiant imaging colorimeter; laboratory setup]

  19. Night-time neuronal activation of Cluster N in a day- and night-migrating songbird.

    PubMed

    Zapka, Manuela; Heyers, Dominik; Liedvogel, Miriam; Jarvis, Erich D; Mouritsen, Henrik

    2010-08-01

    Magnetic compass orientation in a night-migratory songbird requires that Cluster N, a cluster of forebrain regions, is functional. Cluster N, which receives input from the eyes via the thalamofugal pathway, shows high neuronal activity in night-migrants performing magnetic compass-guided behaviour at night, whereas no activation is observed during the day, and covering up the birds' eyes strongly reduces neuronal activation. These findings suggest that Cluster N processes light-dependent magnetic compass information in night-migrating songbirds. The aim of this study was to test if Cluster N is active during daytime migration. We used behavioural molecular mapping based on ZENK activation to investigate if Cluster N is active in the meadow pipit (Anthus pratensis), a day- and night-migratory species. We found that Cluster N of meadow pipits shows high neuronal activity under dim-light at night, but not under full room-light conditions during the day. These data suggest that, in day- and night-migratory meadow pipits, the light-dependent magnetic compass, which requires an active Cluster N, may only be used during night-time, whereas another magnetosensory mechanism and/or other reference system(s), like the sun or polarized light, may be used as primary orientation cues during the day.

  20. Flight test of a passive millimeter-wave imaging system

    NASA Astrophysics Data System (ADS)

    Martin, Christopher A.; Manning, Will; Kolinko, Vladimir G.; Hall, Max

    2005-05-01

    A real-time passive millimeter-wave imaging system with a wide field of view and 3 K temperature sensitivity is described. The system was flown on a UH-1H helicopter in a flight test conducted by the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD). We collected approximately eight hours of data over the course of the two-week flight test. Flight data were collected in horizontal and vertical polarizations at look-down angles from 0 to 40 degrees. Speeds varied from 0 to 90 knots, and altitudes from 0' to 1000'. Targets imaged include roads, freeways, railroads, houses, industrial buildings, power plants, people, streams, rivers, bridges, cars, trucks, trains, boats, planes, runways, treelines, shorelines, and the horizon. The imaging system withstood vibration and temperature variations but experienced some RF interference. The flight test demonstrated the system's capabilities as an airborne navigation and surveillance aid. It also performed in a personnel recovery scenario.

  1. Intelligent Vision On The SM90 Mini-Computer Basis And Applications

    NASA Astrophysics Data System (ADS)

    Hawryszkiw, J.

    1985-02-01

    A distinction has to be made between image processing and vision. Image processing finds its roots in the strong tradition of linear signal processing and promotes geometrical transform techniques, such as filtering, compression, and restoration. Its purpose is to transform an image so that a human observer can easily extract information significant to him, for example edges after a gradient operator, or a specific direction after a directional filtering operation. Image processing thus consists of a set of local or global space-time transforms; the interpretation of the final image is done by the human observer. The purpose of vision, by contrast, is to extract the semantic content of the image. The machine can then understand that content and run a process of decision, which turns into an action. Thus, intelligent vision depends on image processing, pattern recognition, and artificial intelligence.

  2. Advanced IT Education for the Vision Impaired via e-Learning

    ERIC Educational Resources Information Center

    Armstrong, Helen L.

    2009-01-01

    Lack of accessibility in the design of e-learning courses continues to hinder students with vision impairment. E-learning materials are predominantly vision-centric, incorporating images, animation, and interactive media, and as a result students with acute vision impairment do not have equal opportunity to gain tertiary qualifications or skills…

  3. [Night-to-night variability of the obstructive sleep apnoea-hypopnoea syndrome].

    PubMed

    Mjid, M; Ouahchi, Y; Toujani, S; Snen, H; Ben Salah, N; Ben Hmida, A; Louzir, B; Mhiri, N; Cherif, J; Beji, M

    2016-11-01

    The apnoea-hypopnoea index (AHI) is the primary measurement used to characterize the obstructive sleep apnoea-hypopnoea syndrome (OSAHS). Despite its popularity, there are limiting factors to its application, such as night-to-night variability. To evaluate the variability of the AHI in OSAHS, a prospective study was designed in our university hospital's sleep unit. Adults with clinical suspicion of OSAHS underwent 2 consecutive nights of polysomnographic recording. The population was divided into two groups according to whether the AHI was above or below 10. Patients with psychiatric disorders or professions that might result in sleep deprivation or an altered sleep/wake cycle were excluded. Twenty patients were enrolled. The mean age was 50.6±9.3 years. OSAHS was mild in 4 cases, moderate in 6 cases, and severe in 8 cases. The AHI was less than 5 in two cases. AHI values were not significantly altered between the two recording nights (33.2 vs. 31.8 events/h). A significant positive correlation was found between the AHI measured on the first and the second night; however, significant individual variability was noted. Comparison of the two patient groups showed a correlation between the AHI and body mass index. This study demonstrates that the AHI in OSAHS patients is well correlated between two consecutive nights. However, significant individual variability should be taken into consideration, especially when the AHI is used in the classification of OSAHS or as a criterion of therapeutic success. Copyright © 2016. Published by Elsevier Masson SAS.

  4. Vision - night blindness

    MedlinePlus

    ... walking through a dark room, such as a movie theater. These problems are often worse just after ...

  5. Reduce volume of head-up display by image stitching

    NASA Astrophysics Data System (ADS)

    Chiu, Yi-Feng; Su, Guo-Dung J.

    2016-09-01

    A head-up display (HUD) is a safety feature for automobile drivers. Although some HUD systems are already commercial products, their images are too small to show assistance information; a further problem is that the volume of a HUD is too large. We propose a HUD comprising micro-projectors, a rear-projection screen, and a microlens array (MLA), in which a 28 mm x 14 mm image source realizes a 200 mm x 100 mm image 3 meters from the driver. We use the MLA to reduce the volume by virtual image stitching. The HUD's package dimensions are 12 cm x 12 cm x 9 cm, and it is able to show speed, map-navigation, and night vision information. We used a liquid crystal display (LCD) as our image source because of the brighter image output required and its minimal volume. The MLA is a multi-aperture system: the proposed MLA consists of many optical channels, each transmitting a segment of the whole field of view. The design of the system provides the stitching of the partial images, so that the whole virtual image can be seen.
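
    The geometry quoted in the abstract fixes the system's lateral magnification and the angular size of the virtual image. As a quick check (the figures are taken from the abstract; the arithmetic and variable names are mine):

    ```python
    import math

    # From the abstract: a 28 mm x 14 mm source is mapped to a
    # 200 mm x 100 mm virtual image located 3 m from the driver.
    source_w, image_w, distance = 0.028, 0.200, 3.0   # meters

    magnification = image_w / source_w                # ~7.1x lateral
    angular_width = 2 * math.degrees(math.atan(image_w / (2 * distance)))

    print(f"magnification ~ {magnification:.1f}x, "
          f"virtual image spans ~ {angular_width:.1f} degrees")
    ```

    The roughly 3.8-degree image width is what lets a display far larger than the package itself appear at a comfortable viewing distance, which is the point of the virtual image stitching.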

  6. Georgia Academy for the Blind: Orientation and Mobility Curriculum. Crossroads to Independence.

    ERIC Educational Resources Information Center

    Berner, Catherine L., Comp.; Lindh, Peter D., Comp.

    The Georgia Academy for the Blind curriculum guide covers orientation, cane skills, and travel skills. Chapter two, on low vision utilization, includes indoor, outdoor, and night low vision lessons checklists. Chapter three covers postural development and motor coordination. Chapter four, on concept development, covers body image, spatial…

  7. Surpassing Humans and Computers with JellyBean: Crowd-Vision-Hybrid Counting Algorithms.

    PubMed

    Sarma, Akash Das; Jain, Ayush; Nandi, Arnab; Parameswaran, Aditya; Widom, Jennifer

    2015-11-01

    Counting objects is a fundamental image processing primitive with many scientific, health, surveillance, security, and military applications. Existing supervised computer vision techniques typically require large quantities of labeled training data, and even then fail to return accurate results in all but the most stylized settings. Using vanilla crowd-sourcing, on the other hand, can lead to significant errors, especially on images with many objects. In this paper, we present our JellyBean suite of algorithms, which combines the best of crowds and computer vision to count objects in images, and uses judicious decomposition of images to greatly improve accuracy at low cost. Our algorithms have several desirable properties: (i) they are theoretically optimal or near-optimal, in that they ask as few questions of humans as possible (under certain intuitively reasonable assumptions that we justify experimentally in the paper); (ii) they operate in stand-alone or hybrid modes, in that they can either work independently of computer vision algorithms or work in concert with them, depending on whether the computer vision techniques are available or useful for the given setting; (iii) they perform very well in practice, returning accurate counts on images that no individual worker or computer vision algorithm can count correctly, while not incurring a high cost.
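
    The JellyBean algorithms themselves are not specified in the abstract; the sketch below only illustrates the headline idea of judicious decomposition, recursively quartering an image until each tile holds few enough objects for a worker to count reliably. Here ask_crowd is a hypothetical stand-in for posting a tile as a crowd task, and the sketch ignores objects straddling tile boundaries, which the real algorithms must handle:

    ```python
    def crowd_count(image, ask_crowd, max_per_tile=15, min_size=64):
        """Count objects by splitting an image until each tile is easy.

        ask_crowd(tile) -> int stands in for a crowd question (or a
        vision counter); tiles whose reported count exceeds
        max_per_tile are quartered and re-asked, since workers are
        accurate mainly on sparsely populated images.
        """
        count = ask_crowd(image)
        h, w = image.shape[:2]
        if count <= max_per_tile or min(h, w) <= min_size:
            return count
        h2, w2 = h // 2, w // 2
        tiles = [image[:h2, :w2], image[:h2, w2:],
                 image[h2:, :w2], image[h2:, w2:]]
        return sum(crowd_count(t, ask_crowd, max_per_tile, min_size)
                   for t in tiles)
    ```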

  8. Visions of our Planet's Atmosphere, Land and Oceans: NASA/NOAA Electronic-Theater 2002. Spectacular Visualizations of our Blue Marble

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.; Starr, David (Technical Monitor)

    2002-01-01

    Spectacular visualizations of our Blue Marble. The NASA/NOAA Electronic Theater presents Earth science observations and visualizations in a historical perspective. Fly in from outer space to the 2002 Winter Olympic stadium, site of the Olympic opening and closing ceremonies in Salt Lake City. Fly in and through Olympic alpine venues using 1 m IKONOS "Spy Satellite" data. Go back to the early weather satellite images from the 1960s and see them contrasted with the latest US and international global satellite weather movies including hurricanes & "tornadoes". See the latest visualizations of spectacular images from NASA/NOAA remote sensing missions like Terra, GOES, TRMM, SeaWiFS, and Landsat 7, including new 1-min GOES rapid scan image sequences of the Nov 9, 2001 Midwest tornadic thunderstorms, and have them explained. See how High-Definition Television (HDTV) is revolutionizing the way we communicate science (in cooperation with the American Museum of Natural History in NYC). See dust storms in Africa and smoke plumes from fires in Mexico. See visualizations featured on the covers of Newsweek, TIME, National Geographic, Popular Science & on national & international network TV. New computer software tools allow us to roam & zoom through massive global images, e.g., Landsat tours of the US and Africa, showing desert and mountain geology as well as seasonal changes in vegetation. See animations of the polar ice packs and the motion of gigantic Antarctic icebergs from SeaWinds data. Spectacular new visualizations of the global atmosphere & oceans are shown. See vortexes and currents in the global oceans that bring up the nutrients to feed tiny algae and draw the fish, whales and fishermen. See how the ocean blooms in response to these currents and El Nino/La Nina climate changes. See the city lights, fishing fleets, gas flares and biomass burning of the Earth at night observed by the "night-vision" DMSP military satellite.

  9. SED16 autonomous star tracker night sky testing

    NASA Astrophysics Data System (ADS)

    Foisneau, Thierry; Piriou, Véronique; Perrimon, Nicolas; Jacob, Philippe; Blarre, Ludovic; Vilaire, Didier

    2017-11-01

    The SED16 is an autonomous multi-mission star tracker which delivers three-axis satellite attitude in an inertial reference frame, and the satellite angular velocity, with no prior information. The qualification process for this star sensor includes five validation steps using an optical star simulator, a digitized image simulator, and a night sky test setup. The night sky testing was the final step of the qualification process, during which all the functions of the star tracker were used in almost nominal conditions: autonomous acquisition of the attitude, and autonomous tracking of ten stars. These tests were performed at Calern on the premises of the OCA (Observatoire de la Cote d'Azur). The test setup and the test results are described after a brief review of the sensor's main characteristics and qualification process.

  10. Development of embedded real-time and high-speed vision platform

    NASA Astrophysics Data System (ADS)

    Ouyang, Zhenxing; Dong, Yimin; Yang, Hua

    2015-12-01

    Currently, high-speed vision platforms are widely used in many applications, such as robotics and the automation industry. However, a personal computer (PC), whose large size makes it unsuitable for compact systems, remains an indispensable component for human-computer interaction in traditional high-speed vision platforms. This paper therefore develops an embedded real-time, high-speed vision platform, ER-HVP Vision, which is able to work completely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a DSP-and-FPGA board is developed for implementing image-parallel algorithms in the FPGA and image-sequential algorithms in the DSP. Hence, ER-HVP Vision, with a size of 320 mm x 250 mm x 87 mm, delivers this capability in a more compact form. Experimental results are also given to indicate that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels are feasible on this newly developed vision platform.

  11. Prevalence and predictors of night sweats, day sweats, and hot flashes in older primary care patients: an OKPRN study.

    PubMed

    Mold, James W; Roberts, Michelle; Aboshady, Hesham M

    2004-01-01

    We wanted to estimate the prevalence of night sweats, day sweats, and hot flashes in older primary care patients and identify associated factors. We undertook a cross-sectional study of patients older than 64 years recruited from the practices of 23 family physicians. Variables included sociodemographic information, health habits, chronic medical problems, symptoms, quality of life, and the degree to which patients were bothered by night sweats, daytime sweating, and hot flashes. Among the 795 patients, 10% reported being bothered by night sweats, 9% by day sweats, and 8% by hot flashes. Eighteen percent reported at least 1 of these symptoms. The 3 symptoms were strongly correlated. Factors associated with night sweats in the multivariate models were age (odds ratio [OR] 0.94/y; 95% confidence interval [CI], 0.89-0.98), fever (OR 12.60; 95% CI, 6.58-24.14), muscle cramps (OR 2.84; 95% CI, 1.53-5.24), numbness of hands and feet (OR 3.34; 95% CI, 1.92-5.81), impaired vision (OR 2.45; 95% CI, 1.41-4.27), and hearing loss (OR 1.84; 95% CI, 1.03-3.27). Day sweats were associated with fever (OR 4.10; 95% CI, 2.14-7.87), restless legs (OR 3.22; 95% CI, 1.76-5.89), lightheadedness (OR 2.24; 95% CI, 1.30-3.88), and diabetes (OR 2.19; 95% CI, 1.22-3.92). Hot flashes were associated with nonwhite race (OR 3.10; 95% CI, 1.60-5.98), fever (OR 3.98; 95% CI, 1.97-8.04), bone pain (OR 2.31; CI 95%: 1.30-4.08), impaired vision (OR 2.12; 95% CI, 1.19-3.79), and nervous spells (OR 1.87; 95% CI, 1.01-3.46). All 3 symptoms were associated with reduced quality of life. Many older patients are bothered by night sweats, day sweats, and hot flashes. Though these symptoms are similar and related, they have somewhat different associations with other variables. Clinical evaluation should include questions about febrile illnesses, sensory deficits, anxiety, depression, pain, muscle cramps, and restless legs syndrome.

  12. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  13. Craters 'Twixt Day and Night

    NASA Image and Video Library

    2004-12-20

    Three sizeable impact craters, including one with a marked central peak, lie along the line that divides day and night on the Saturnian moon, Dione (dee-OH-nee), which is 1,118 kilometers, or 695 miles across. The low angle of the Sun along the terminator, as this dividing line is called, brings details like these craters into sharp relief. This view shows principally the leading hemisphere of Dione. Some of this moon's bright, wispy streaks can be seen curling around its eastern limb. Cassini imaged the wispy terrain at high resolution during its first Dione flyby on Dec. 14, 2004. This image was taken in visible light with the Cassini spacecraft narrow angle camera on Nov. 1, 2004, at a distance of 2.4 million kilometers (1.5 million miles) from Dione and at a Sun-Dione-spacecraft, or phase, angle of 106 degrees. North is up. The image scale is 14 kilometers (8.7 miles) per pixel. The image has been magnified by a factor of two and contrast-enhanced to aid visibility of surface features. http://photojournal.jpl.nasa.gov/catalog/PIA06542

  14. Research on three-dimensional reconstruction method based on binocular vision

    NASA Astrophysics Data System (ADS)

    Li, Jinlin; Wang, Zhihui; Wang, Minjun

    2018-03-01

    As a hot and difficult issue in computer vision, binocular stereo vision is an important form of computer vision which has broad application prospects in many fields, such as aerial mapping, vision navigation, motion analysis, and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction, and stereo matching. In the binocular stereo camera calibration module, the internal parameters of a single camera are obtained using the checkerboard method of Zhang Zhengyou. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are adopted respectively, and their performance is compared. After feature point matching is completed, the correspondence between matching points and 3D object points can be built using the calibrated camera parameters, which yields the 3D information.
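
    Both the calibration and matching steps named here have direct OpenCV equivalents. As one illustrative fragment of such a pipeline, the following sketch computes a dense disparity map with the SGBM matcher mentioned in the abstract; the parameter values are typical defaults, not the paper's:

    ```python
    import cv2

    def disparity_map(left_gray, right_gray, max_disp=128, block=5):
        """Dense disparity from a rectified stereo pair with SGBM.

        Inputs are 8-bit grayscale images already rectified with the
        calibrated camera parameters; depth then follows from
        Z = f * B / disparity, with f the focal length in pixels and
        B the stereo baseline.
        """
        sgbm = cv2.StereoSGBM_create(
            minDisparity=0,
            numDisparities=max_disp,          # must be divisible by 16
            blockSize=block,
            P1=8 * block * block,             # smoothness penalties
            P2=32 * block * block)
        disp = sgbm.compute(left_gray, right_gray)
        return disp.astype('float32') / 16.0  # SGBM returns fixed-point
    ```

    Checkerboard calibration in the same library (cv2.findChessboardCorners plus cv2.calibrateCamera) supplies the intrinsics the abstract attributes to Zhang Zhengyou's method.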

  15. "Data Day" and "Data Night" Definitions - Towards Producing Seamless Global Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Schmaltz, J. E.

    2017-12-01

    For centuries, the art and science of cartography has struggled with the challenge of mapping the round earth on to a flat page, or a flat computer monitor. Earth observing satellites with continuous monitoring of our planet have added the additional complexity of the time dimension to this procedure. The most common current practice is to segment this data by 24-hour Coordinated Universal Time (UTC) day and then split the day into sun side "Data Day" and shadow side "Data Night" global imagery that spans from dateline to dateline. Due to the nature of satellite orbits, simply binning the data by UTC date produces significant discontinuities at the dateline for day images and at Greenwich for night images. Instead, imagery could be generated in a fashion that follows the spatial and temporal progression of the satellite which would produce seamless imagery everywhere on the globe for all times. This presentation will explore approaches to produce such imagery but will also address some of the practical and logistical difficulties in implementing such changes. Topics will include composites versus granule/orbit based imagery, day/night versus ascending/descending definitions, and polar versus global projections.

  16. Optimized feature-detection for on-board vision-based surveillance

    NASA Astrophysics Data System (ADS)

    Gond, Laetitia; Monnin, David; Schneider, Armin

    2012-06-01

    The detection and matching of robust features in images is an important step in many computer vision applications. In this paper, the importance of keypoint detection algorithms and their inherent parameters is studied in the particular context of an image-based change detection system for IED detection. Through extensive application-oriented experiments, we evaluate and compare the most popular feature detectors proposed by the computer vision community. We analyze how to automatically adjust these algorithms to changing imaging conditions and suggest improvements in order to achieve more flexibility and robustness in their practical implementation.

  17. Night shift work exposure profile and obesity: Baseline results from a Chinese night shift worker cohort.

    PubMed

    Sun, Miaomiao; Feng, Wenting; Wang, Feng; Zhang, Liuzhuo; Wu, Zijun; Li, Zhimin; Zhang, Bo; He, Yonghua; Xie, Shaohua; Li, Mengjie; Fok, Joan P C; Tse, Gary; Wong, Martin C S; Tang, Jin-Ling; Wong, Samuel Y S; Vlaanderen, Jelle; Evans, Greg; Vermeulen, Roel; Tse, Lap Ah

    2018-01-01

    This study aimed to evaluate the associations between types of night shift work and different indices of obesity using the baseline information from a prospective cohort study of night shift workers in China. A total of 3,871 workers from five companies were recruited from the baseline survey. A structured self-administered questionnaire was employed to collect the participants' demographic information, lifetime working history, and lifestyle habits. Participants were grouped into rotating, permanent and irregular night shift work groups. Anthropometric parameters were assessed by healthcare professionals. Multiple logistic regression models were used to evaluate the associations between night shift work and different indices of obesity. Night shift workers had increased risk of overweight and obesity, and odds ratios (ORs) were 1.17 (95% CI, 0.97-1.41) and 1.27 (95% CI, 0.74-2.18), respectively. Abdominal obesity had a significant but marginal association with night shift work (OR = 1.20, 95% CI, 1.01-1.43). A positive gradient between the number of years of night shift work and overweight or abdominal obesity was observed. Permanent night shift work showed the highest odds of being overweight (OR = 3.94, 95% CI, 1.40-11.03) and having increased abdominal obesity (OR = 3.34, 95% CI, 1.19-9.37). Irregular night shift work was also significantly associated with overweight (OR = 1.56, 95% CI, 1.13-2.14), but its association with abdominal obesity was borderline (OR = 1.26, 95% CI, 0.94-1.69). By contrast, the association between rotating night shift work and these parameters was not significant. Permanent and irregular night shift work were more likely to be associated with overweight or abdominal obesity than rotating night shift work. These associations need to be verified in prospective cohort studies.

  18. Night shift work exposure profile and obesity: Baseline results from a Chinese night shift worker cohort

    PubMed Central

    Feng, Wenting; Wang, Feng; Zhang, Liuzhuo; Wu, Zijun; Li, Zhimin; Zhang, Bo; He, Yonghua; Xie, Shaohua; Li, Mengjie; Fok, Joan P. C.; Tse, Gary; Wong, Martin C. S.; Tang, Jin-ling; Wong, Samuel Y. S.; Vlaanderen, Jelle; Evans, Greg; Vermeulen, Roel; Tse, Lap Ah

    2018-01-01

    Aims This study aimed to evaluate the associations between types of night shift work and different indices of obesity using the baseline information from a prospective cohort study of night shift workers in China. Methods A total of 3,871 workers from five companies were recruited from the baseline survey. A structured self-administered questionnaire was employed to collect the participants’ demographic information, lifetime working history, and lifestyle habits. Participants were grouped into rotating, permanent and irregular night shift work groups. Anthropometric parameters were assessed by healthcare professionals. Multiple logistic regression models were used to evaluate the associations between night shift work and different indices of obesity. Results Night shift workers had increased risk of overweight and obesity, and odds ratios (ORs) were 1.17 (95% CI, 0.97–1.41) and 1.27 (95% CI, 0.74–2.18), respectively. Abdominal obesity had a significant but marginal association with night shift work (OR = 1.20, 95% CI, 1.01–1.43). A positive gradient between the number of years of night shift work and overweight or abdominal obesity was observed. Permanent night shift work showed the highest odds of being overweight (OR = 3.94, 95% CI, 1.40–11.03) and having increased abdominal obesity (OR = 3.34, 95% CI, 1.19–9.37). Irregular night shift work was also significantly associated with overweight (OR = 1.56, 95% CI, 1.13–2.14), but its association with abdominal obesity was borderline (OR = 1.26, 95% CI, 0.94–1.69). By contrast, the association between rotating night shift work and these parameters was not significant. Conclusion Permanent and irregular night shift work were more likely to be associated with overweight or abdominal obesity than rotating night shift work. These associations need to be verified in prospective cohort studies. PMID:29763461

  19. Short-Term Neural Adaptation to Simultaneous Bifocal Images

    PubMed Central

    Radhakrishnan, Aiswaryah; Dorronsoro, Carlos; Sawides, Lucie; Marcos, Susana

    2014-01-01

    Simultaneous vision is an increasingly used solution for the correction of presbyopia (the age-related loss of ability to focus near images). Simultaneous Vision corrections, normally delivered in the form of contact or intraocular lenses, project on the patient's retina a focused image for near vision superimposed with a degraded image for far vision, or a focused image for far vision superimposed with the defocused image of the near scene. It is expected that patients with these corrections are able to adapt to the complex Simultaneous Vision retinal images, although the mechanisms or the extent to which this happens is not known. We studied the neural adaptation to simultaneous vision by studying changes in the Natural Perceived Focus and in the Perceptual Score of image quality in subjects after exposure to Simultaneous Vision. We show that Natural Perceived Focus shifts after a brief period of adaptation to a Simultaneous Vision blur, similar to adaptation to Pure Defocus. This shift strongly correlates with the magnitude and proportion of defocus in the adapting image. The magnitude of defocus affects perceived quality of Simultaneous Vision images, with 0.5 D defocus scored lowest and beyond 1.5 D scored “sharp”. Adaptation to Simultaneous Vision shifts the Perceptual Score of these images towards higher rankings. Larger improvements occurred when testing simultaneous images with the same magnitude of defocus as the adapting images, indicating that wearing a particular bifocal correction improves the perception of images provided by that correction. PMID:24664087

  20. A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogamous Dual-Core Embedded System Architecture

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956

  1. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogamous dual-core embedded system architecture.

    PubMed

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system.
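    A classical first stage of nighttime vehicle detection is extracting bright blobs (headlights and taillights) from the road-scene frame. The following is a generic sketch of that stage only, with a synthetic frame, not the VIDASS implementation:

      import cv2
      import numpy as np

      frame = np.zeros((240, 320), np.uint8)        # synthetic night frame
      frame[100:112, 50:62] = 255                   # headlight pair
      frame[100:112, 96:108] = 255

      _, bright = cv2.threshold(frame, 200, 255, cv2.THRESH_BINARY)
      count, labels, stats, centroids = cv2.connectedComponentsWithStats(bright)

      for i in range(1, count):                     # label 0 is the background
          x, y, w, h, area = stats[i]
          if 20 < area < 2000:                      # size gate for lamp-sized blobs
              print(f"candidate light at ({centroids[i][0]:.0f}, {centroids[i][1]:.0f})")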

  2. "Chrono-functional milk": The difference between melatonin concentrations in night-milk versus day-milk under different night illumination conditions.

    PubMed

    Asher, A; Shabtay, A; Brosh, A; Eitam, H; Agmon, R; Cohen-Zinder, M; Zubidat, A E; Haim, A

    2015-01-01

    Pineal melatonin (MLT) is produced at its highest levels during the night, under dark conditions. We evaluated differences in MLT concentration by comparing daytime versus night-time milk samples from two dairy farms with different night illumination conditions: (1) natural dark (Dark-Night); (2) short-wavelength Artificial Light at Night (ALAN, Night-Illuminated). Samples were collected from 14 Israeli Holstein cows at each commercial dairy farm at 04:30 h ("Night-milk") and 12:30 h ("Day-milk") and analyzed for MLT concentration. To study the effects of night illumination conditions on the cows' circadian rhythms, daily heart rate (HR) rhythms were recorded. MLT concentrations of Night-milk samples from the Dark-Night group were significantly (p < 0.001) higher than those from Night-Illuminated conditions (30.70 ± 1.79 and 17.81 ± 0.33 pg/ml, respectively). Interestingly, night illumination conditions also affected daytime melatonin concentrations: under Dark-Night conditions, values were significantly (p < 0.001) higher than under Night-Illuminated conditions (5.36 ± 0.33 and 3.30 ± 0.18 pg/ml, respectively). There were no significant differences between the two treatments in milk yield or milk composition except somatic cell count (SCC), which was significantly lower (p = 0.02) in the Dark-Night group than in the Night-Illuminated group. Cows in both groups presented a significant (p < 0.01) daily HR rhythm; we therefore assume that for the Night-Illuminated cows, feeding and milking times are the "time keeper", while in the Dark-Night cows HR rhythms were entrained by the light/dark cycle. The higher MLT concentration in Dark-Night cows, together with their lower SCC values, calls upon farmers to avoid exposing cows to ALAN. Under Dark-Night conditions, milk quality improves through lower SCC values, and separating night-milk from day-milk can produce chrono-functional milk naturally rich in melatonin.

  3. Parallel computer vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uhr, L.

    1987-01-01

    This book is written by research scientists involved in the development of massively parallel, but hierarchically structured, algorithms, architectures, and programs for image processing, pattern recognition, and computer vision. The book gives an integrated picture of the programs and algorithms that are being developed, and also of the multi-computer hardware architectures for which these systems are designed.

  4. 5 CFR 532.505 - Night shift differentials.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... employee regularly assigned to a night shift who is temporarily assigned to a day shift or to a night shift... regularly assigned to a day shift who is temporarily assigned to a night shift shall be paid a night shift... schedule involving work on both day and night shifts shall be paid a night shift differential only for any...

  5. Research on the feature set construction method for spherical stereo vision

    NASA Astrophysics Data System (ADS)

    Zhu, Junchao; Wan, Li; Röning, Juha; Feng, Weijia

    2015-01-01

    Spherical stereo vision is a kind of stereo vision system built with fish-eye lenses, for which the stereo algorithms must conform to a spherical model. Epipolar geometry is the theory describing the relationship between the two imaging planes of the cameras in a stereo vision system based on the perspective projection model. In an uncorrected fish-eye image, however, an epipolar line is not a line but an arc intersecting at the poles: a polar curve. In this paper, the theory of nonlinear epipolar geometry is explored, and a method of nonlinear epipolar rectification is proposed to eliminate the vertical parallax between two fish-eye images. Maximally Stable Extremal Regions (MSER) take grayscale as the independent variable and use the local extremum of the area variation as the detection result. It has been demonstrated in the literature that MSER depends only on the gray-level variations of an image, not on local structural characteristics or image resolution. Here, MSER is combined with the nonlinear epipolar rectification method proposed in this paper, and the intersection of the rectified epipolar curve with the corresponding MSER region is taken as the feature set for spherical stereo vision. Experiments show that this approach achieves the expected results.
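    The MSER stage is available off the shelf; a minimal sketch follows (OpenCV defaults standing in for the paper's settings, with the nonlinear rectification step omitted and a synthetic image in place of fish-eye input):

      import cv2
      import numpy as np

      img = np.full((200, 200), 64, np.uint8)       # synthetic fish-eye stand-in
      cv2.circle(img, (100, 100), 30, 200, -1)      # a stable bright region

      mser = cv2.MSER_create(5, 60, 14400)          # delta, min area, max area
      regions, bboxes = mser.detectRegions(img)
      print(f"{len(regions)} maximally stable extremal regions")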

  6. Application of aircraft navigation sensors to enhanced vision systems

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.

    1993-01-01

    In this presentation, the applicability of various aircraft navigation sensors to enhanced vision system design is discussed. First, the accuracy requirements of the FAA for precision landing systems are presented, followed by the current navigation systems and their characteristics. These systems include Instrument Landing System (ILS), Microwave Landing System (MLS), Inertial Navigation, Altimetry, and Global Positioning System (GPS). Finally, the use of navigation system data to improve enhanced vision systems is discussed. These applications include radar image rectification, motion compensation, and image registration.

  7. Hurricane Sandy Viewed in the Dark of Night

    NASA Image and Video Library

    2017-12-08

    NASA image acquired October 28, 2012 For the latest info from NASA on Hurricane Sandy go to: 1.usa.gov/Ti5SgS This image of Hurricane Sandy was acquired by the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi NPP satellite around 2:42 a.m. Eastern Daylight Time (06:42 Universal Time) on October 28, 2012. The storm was captured by a special “day-night band,” which detects light in a range of wavelengths from green to near-infrared and uses filtering techniques to observe dim signals such as auroras, airglow, gas flares, city lights, and reflected moonlight. In this case, the cloud tops were lit by the nearly full Moon (full occurs on October 29). Some city lights in Florida and Georgia are also visible amidst the clouds. The Suomi NPP satellite was launched one year ago today (on October 28, 2011) to extend and enhance long-term records of key environmental data monitored by NASA, the National Oceanic and Atmospheric Administration (NOAA), and the U.S. Department of Defense. NASA Earth Observatory image by Jesse Allen and Robert Simmon, using VIIRS Day-Night Band data from the Suomi National Polar-orbiting Partnership (Suomi NPP). Suomi NPP is the result of a partnership between NASA, the National Oceanic and Atmospheric Administration, and the Department of Defense. Caption by Michael Carlowicz. Instrument: Suomi NPP - VIIRS

  8. Prototyping machine vision software on the World Wide Web

    NASA Astrophysics Data System (ADS)

    Karantalis, George; Batchelor, Bruce G.

    1998-10-01

    Interactive image processing is a proven technique for analyzing industrial vision applications and building prototype systems. Several previous implementations have used dedicated hardware to perform the image processing, with a top layer of software providing a convenient user interface. More recently, self-contained software packages have been devised that run on a standard computer. The advent of the Java programming language has made it possible to write platform-independent software operating over the Internet, or a company-wide Intranet. Thus, there arises the possibility of designing at least some shop-floor inspection/control systems without the vision engineer ever entering the factories where they will be used. If successful, this project will have a major impact on the productivity of vision systems designers.

  9. 75 FR 52790 - Small Business Size Standards: Waiver of the Nonmanufacturer Rule

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-27

    ... for GEN II and GEN III Image Intensifier Tubes. SUMMARY: The U.S. Small Business Administration (SBA... Image Intensifier Tubes, Product Service Code (PSC) 5855, Night Vision Equipment, Emitted and Reflected... business GEN II and GEN III image intensifier tube manufacturers. If granted, the waiver would allow...

  10. Night terrors in children

    MedlinePlus

    Pavor nocturnus; Sleep terror disorder ... The cause is unknown, but night terrors may be triggered by: Fever Lack of sleep Periods of emotional tension, stress, or conflict Night terrors are most common in children ...

  11. Comparison of Flexible Ureterorenoscope Quality of Vision: An In Vitro Study.

    PubMed

    Talso, Michele; Proietti, Silvia; Emiliani, Esteban; Gallioli, Andrea; Dragos, Laurian; Orosa, Andrea; Servian, Pol; Barreiro, Aaron; Giusti, Guido; Montanari, Emanuele; Somani, Bhaskar; Traxer, Olivier

    2018-06-01

    Flexible ureterorenoscopy (fURS) is one of the best solutions for the treatment of renal calculi <2 cm and for the conservative treatment of upper urinary tract urothelial carcinoma. An adequate quality of vision is mandatory to help the surgeon achieve better outcomes. To our knowledge, no studies have established which fURS on the market has the best quality of vision. Seven different fURS were used to compare image quality (Lithovue, Olympus V, Olympus V2, Storz Flex XC (in White Light and in Clara+Chroma mode), Wolf Cobra Vision, Olympus P6, and Storz Flex X2). Two standardized grids for evaluating contrast and image definition, and three stones of different composition, were filmed in four standardized scenarios. These videos were shown to 103 subjects (51 urologists and 52 nonurologists), who rated them on a scale from 1 (very bad) to 5 (very good). No difference in scores was observed by sex of the participants. Digital (D) ureterorenoscopes were rated better than fiber-optic (FO) ureterorenoscopes. Overall, Flex XC White Light and XC Clara+Chroma image quality was consistently better than that of the other fURS (p < 0.0001). Olympus V generally provided better vision than Lithovue. Cobra Vision and Olympus V2 had superimposable values that were significantly lower than Lithovue's. Olympus P6 and Storz X2 offered a low quality of vision compared with the others. In the medium simulating bleeding, Olympus V and V2 significantly improved their scores by 12% and 8.1%, unlike the rest of the ureterorenoscopes. D ureterorenoscopes have better image quality than FO ones. The only disposable ureterorenoscope tested was comparable to the majority of the other D ureterorenoscopes. The best image quality was provided by the Storz D ureterorenoscopes, with Clara+Chroma the favored SPIES mode, in accordance with the literature.

  12. Computer Vision Techniques for Transcatheter Intervention

    PubMed Central

    Zhao, Feng; Roach, Matthew

    2015-01-01

    Minimally invasive transcatheter technologies have demonstrated substantial promise for the diagnosis and the treatment of cardiovascular diseases. For example, transcatheter aortic valve implantation is an alternative to aortic valve replacement for the treatment of severe aortic stenosis, and transcatheter atrial fibrillation ablation is widely used for the treatment and the cure of atrial fibrillation. In addition, catheter-based intravascular ultrasound and optical coherence tomography imaging of coronary arteries provides important information about the coronary lumen, wall, and plaque characteristics. Qualitative and quantitative analysis of these cross-sectional image data will be beneficial to the evaluation and the treatment of coronary artery diseases such as atherosclerosis. In all the phases (preoperative, intraoperative, and postoperative) during the transcatheter intervention procedure, computer vision techniques (e.g., image segmentation and motion tracking) have been largely applied in the field to accomplish tasks like annulus measurement, valve selection, catheter placement control, and vessel centerline extraction. This provides beneficial guidance for the clinicians in surgical planning, disease diagnosis, and treatment assessment. In this paper, we present a systematical review on these state-of-the-art methods. We aim to give a comprehensive overview for researchers in the area of computer vision on the subject of transcatheter intervention. Research in medical computing is multi-disciplinary due to its nature, and hence, it is important to understand the application domain, clinical background, and imaging modality, so that methods and quantitative measurements derived from analyzing the imaging data are appropriate and meaningful. We thus provide an overview on the background information of the transcatheter intervention procedures, as well as a review of the computer vision techniques and methodologies applied in this area. PMID:27170893

  13. Color Vision in Aniridia.

    PubMed

    Pedersen, Hilde R; Hagen, Lene A; Landsend, Erlend C S; Gilson, Stuart J; Utheim, Øygunn A; Utheim, Tor P; Neitz, Maureen; Baraas, Rigmor C

    2018-04-01

    To assess color vision and its association with retinal structure in persons with congenital aniridia. We included 36 persons with congenital aniridia (10-66 years), and 52 healthy, normal trichromatic controls (10-74 years) in the study. Color vision was assessed with Hardy-Rand-Rittler (HRR) pseudo-isochromatic plates (4th ed., 2002); Cambridge Color Test and a low-vision version of the Color Assessment and Diagnosis test (CAD-LV). Cone-opsin genes were analyzed to confirm normal versus congenital color vision deficiencies. Visual acuity and ocular media opacities were assessed. The central 30° of both eyes were imaged with the Heidelberg Spectralis OCT2 to grade the severity of foveal hypoplasia (FH, normal to complete: 0-4). Five participants with aniridia had cone opsin genes conferring deutan color vision deficiency and were excluded from further analysis. Of the 31 with aniridia and normal opsin genes, 11 made two or more red-green (RG) errors on HRR, four of whom also made yellow-blue (YB) errors; one made YB errors only. A total of 19 participants had higher CAD-LV RG thresholds, of which eight also had higher CAD-LV YB thresholds, than normal controls. In aniridia, the thresholds were higher along the RG than the YB axis, and those with a complete FH had significantly higher RG thresholds than those with mild FH (P = 0.038). Additional increase in YB threshold was associated with secondary ocular pathology. Arrested foveal formation and associated alterations in retinal processing are likely to be the primary reason for impaired red-green color vision in aniridia.

  14. Optoelectronic vision

    NASA Astrophysics Data System (ADS)

    Ren, Chunye; Parel, Jean-Marie A.

    1993-06-01

    Scientists have searched every discipline for effective methods of treating blindness, such as aids based on converting the optical image to auditory or tactile stimuli. However, the limited performance of such equipment and the difficulty of training patients have seriously hampered practical applications. Great encouragement came from the discovery of Foerster (1929) and Krause & Schum (1931), who found that electrical stimulation of the visual cortex evokes the perception of a small spot of light, called a `phosphene', in both blind and sighted subjects. According to this principle, it is possible to evoke artificial vision by stimulating the visual neural system with electrodes, thereby developing a prosthesis for the blind that might be of value in reading and mobility. In fact, a number of investigators have already exploited this phenomenon to produce a functional visual prosthesis, bringing about great advances in this area.

  15. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    PubMed

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel, high-precision microscopic vision modeling method that can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. The method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model, and a residual compensation model. First, a method of image distortion correction is proposed. The image data it requires come from stereo images of a calibration sample. The geometric features of image distortions can be predicted through the shape deformation of lines constructed by grid points in the stereo images; linear and polynomial fitting methods are applied to correct the distortions. Second, the shape deformation features of the disparity distribution are discussed, and a method of disparity distortion correction based on polynomial fitting is proposed. Third, a microscopic vision model is derived, consisting of an initial vision model and a residual compensation model. We derive the initial vision model from the direct mapping relationship between object and image points, and the residual compensation model from a residual analysis of the initial model. The results show that, with maximum reconstruction distances of 4.1 mm in the X direction, 2.9 mm in the Y direction, and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison with the traditional pinhole camera model shows that the two models have similar reconstruction precision for X coordinates, but the pinhole model has lower precision for Y and Z coordinates. The method proposed in this paper is very helpful for micro-gripping systems based on stereo-light-microscope vision.
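    The distortion-correction idea (fitting a polynomial map from observed to ideal coordinates using calibration grid points) can be sketched in one dimension with invented numbers; the paper's actual model is multi-stage:

      import numpy as np

      observed = np.array([0.0, 1.02, 2.07, 3.15, 4.28, 5.45])  # distorted positions
      ideal = np.array([0.0, 1.00, 2.00, 3.00, 4.00, 5.00])     # true grid positions

      coeffs = np.polyfit(observed, ideal, deg=3)   # polynomial correction map
      correct = np.poly1d(coeffs)
      print(correct(2.07))                          # roughly 2.00 after correction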

  16. Features of the Vision of Elderly Pedestrians when Crossing a Road.

    PubMed

    Matsui, Yasuhiro; Oikawa, Shoko; Aoki, Yoshio; Sekine, Michiaki; Mitobe, Kazutaka

    2014-11-01

    The present study clarifies the mechanism by which an accident occurs when an elderly pedestrian crosses a road in front of a car, focusing on features of the central and peripheral vision of elderly pedestrians who are judging when it is safe to cross the road. For the pedestrian's central visual field, we investigated the effect of age on the timing judgment using an actual car. The results for daytime conditions indicate that the elderly pedestrians tended to make later judgments of when they crossed the road from the right side of the driver's view at high car velocities. At night, for a car with its headlights on high beam, the average car-pedestrian distances of elderly pedestrians on the left side of the driver's view were significantly longer than those of young pedestrians at velocities of 20 and 40 km/h. The eyesight of the elderly pedestrians during the day did not affect the timing judgment of crossing a road. At night, for a car with its headlights on either high or low beam, the average car-pedestrian distances of elderly pedestrians having good eyesight were longer than those of elderly pedestrians having poor eyesight, for all car velocities. The color of the car body in the central visual field did not affect the timing judgment of elderly pedestrians crossing the road. Meanwhile, the car-body color in the elderly pedestrian's peripheral vision strongly affected the pedestrian's awareness of the car.

  17. The use of handheld spectral domain optical coherence tomography in pediatric ophthalmology practice: Our experience of 975 infants and children.

    PubMed

    Mallipatna, Ashwin; Vinekar, Anand; Jayadev, Chaitra; Dabir, Supriya; Sivakumar, Munsusamy; Krishnan, Narasimha; Mehta, Pooja; Berendschot, Tos; Yadav, Naresh Kumar

    2015-07-01

    Optical coherence tomography (OCT) is an important imaging tool for assessing retinal architecture. In this article, we report a single center's experience of using handheld spectral domain (SD)-OCT in a pediatric population using the Envisu 2300 (Bioptigen Inc., Research Triangle Park, NC, USA). We studied SD-OCT images from 975 patients imaged from January 2011 to December 2014 and analyzed the variety of cases that underwent SD-OCT. Case examples from different clinical scenarios were selected to showcase unique findings in many diseases. Three hundred and sixty-eight infants (37.7%) were imaged for retinopathy of prematurity, 362 children (37.1%) underwent the test for evaluation of suboptimal vision or an unexplained vision loss, 126 children (12.9%) for evaluation of nystagmus or night blindness, 54 children (5.5%) for an intraocular tumor or a mass lesion such as retinoblastoma, and 65 children (6.7%) for other diseases of the pediatric retina. The unique findings in retinal morphology seen with some of these diseases are discussed. The handheld SD-OCT is useful in the evaluation of pediatric retinal diseases, including the assessment of visual development in premature children, the evaluation of unexplained vision loss and amblyopia, nystagmus and night blindness, and intraocular tumors (including retinoblastoma).

  18. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task if it requires programming a system based on the expectation of future, unconstrained events. Hence the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long-term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields, including the neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system, and, using a neurally based computing substrate, it can complete all necessary visual tasks in real time.

  19. X-linked retinitis pigmentosa: Report of a large kindred with loss of central vision and preserved peripheral function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shastry, B.S.; Trese, M.T.

    1995-11-20

    X-linked retinitis pigmentosa (XLRP) is the most severe of the inherited forms of retinitis pigmentosa and is clinically variable and genetically heterogeneous. It affects one in 20,000 live births. Affected individuals manifest degeneration of the peripheral retina during the first two decades of life in the form of night blindness. Central vision usually is preserved until age 50, when the disease advances, affecting central vision and ultimately leading to complete loss of sight. Linkage analysis has shown two loci, with the possibility of a third locus, on the human X chromosome. The genetic abnormality that causes XLRP is not known at present. Here we describe a large kindred which manifests central loss of field with preservation of peripheral vision. 5 refs., 1 fig.

  20. Hybrid Vision-Fusion system for whole-body scintigraphy.

    PubMed

    Barjaktarović, Marko; Janković, Milica M; Jeremić, Marija; Matović, Milovan

    2018-05-01

    Radioiodine therapy in the treatment of differentiated thyroid carcinoma (DTC) is used in clinical practice for the ablation of thyroid residues and/or destruction of tumour tissue. Whole-body scintigraphy for visualization of the spatial 131I distribution, performed with a gamma camera (GC), is a standard procedure in DTC patients after application of radioiodine therapy. A common problem is the precise topographic localization of regions where radioiodine is accumulated, even in SPECT imaging. SPECT/CT can provide precise topographic localization of such regions, but it is often unavailable, especially in developing countries, because of the high price of the equipment. In this paper, we present a Vision-Fusion system as an affordable solution for 1) acquiring an optical whole-body image during routine whole-body scintigraphy and 2) fusing the gamma and optical images (also available for the auto-contour mode of the GC). The estimated prediction error for image registration is 1.84 mm. The validity of the fusion was tested by performing simultaneous optical and scintigraphic image acquisition of a bar phantom; the results show that the fusion process introduces only a slight error, below the spatial resolution of the GC (mean ± standard deviation: 1.24 ± 0.22 mm). The Vision-Fusion system was used in radioiodine post-therapeutic treatment, and 17 patients were followed (11 women and 6 men, average age 48.18 ± 13.27 years). Visual inspection showed no misregistration. Based on our first clinical experience, the Vision-Fusion system could be very useful for improving the diagnostic possibilities of whole-body scintigraphy after radioiodine therapy. Additionally, the proposed Vision-Fusion software can be used as an upgrade for any GC to improve localization of thyroid/tumour tissue.
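    The fusion step itself reduces to overlaying a pseudo-colored gamma image on the registered optical image. A minimal sketch follows, with synthetic arrays and a placeholder affine transform standing in for the system's calibrated registration:

      import cv2
      import numpy as np

      # Synthetic stand-ins for the optical photo and the gamma image
      optical = np.full((480, 640, 3), 128, np.uint8)
      gamma = np.zeros((480, 640), np.uint8)
      gamma[200:260, 300:340] = 255                 # a "hot" uptake region

      M = np.float32([[1, 0, 5], [0, 1, -3]])       # assumed registration transform
      gamma_reg = cv2.warpAffine(gamma, M, (optical.shape[1], optical.shape[0]))
      gamma_color = cv2.applyColorMap(gamma_reg, cv2.COLORMAP_JET)
      fused = cv2.addWeighted(optical, 0.6, gamma_color, 0.4, 0)
      cv2.imwrite("fused.png", fused)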

  1. Vision Screening

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Visi Screen OSS-C, marketed by Vision Research Corporation, incorporates image processing technology originally developed by Marshall Space Flight Center. Its advantage in eye screening is speed. Because it requires no response from a subject, it can be used to detect eye problems in very young children. An electronic flash from a 35 millimeter camera sends light into a child's eyes, which is reflected back to the camera lens. The photorefractor then analyzes the retinal reflexes generated and produces an image of the child's eyes, which enables a trained observer to identify any defects. The device is used by pediatricians, day care centers and civic organizations that concentrate on children with special needs.

  2. Uncooled LWIR imaging: applications and market analysis

    NASA Astrophysics Data System (ADS)

    Takasawa, Satomi

    2015-05-01

    The evolution of infrared (IR) imaging sensor technology for the defense market has played an important role in developing the commercial market, as dual use of the technology has expanded. In particular, technologies for both pixel-pitch reduction and vacuum packaging have evolved drastically in the area of uncooled long-wave IR (LWIR; 8-14 μm wavelength region) imaging sensors, increasing the opportunity to create new applications. From a macroscopic point of view, the uncooled LWIR imaging market is divided into two areas. One is a high-end market requiring uncooled LWIR imaging sensors with sensitivity as close as possible to that of cooled sensors, while the other is a low-end market driven by miniaturization and price reduction. In the latter case especially, approaches toward the consumer market have recently appeared, such as applying uncooled LWIR imaging sensors to night vision for automobiles and smartphones. The appearance of such commodities is changing existing business models. Further technological innovation is necessary to create a consumer market, and there will be room for other companies supplying components and materials, such as lens and getter materials, to enter the consumer market.

  3. Effect of Using High Signal-to-Noise Image Intensifier Tubes on Night Vision Goggle (NVG) Aided Visual Acuity

    DTIC Science & Technology

    2006-05-01

    tubes utilizing thin-filmed technology allowing for a higher SNR, and the F4949G goggles were tested. Twelve participants tested each goggle under six... auto-gated power supply and thin-filmed technology. The Pinnacle's™ thin-filmed technology gave the image intensifier tube an increase in the signal-to

  4. Determinants of day-night difference in blood pressure, a comparison with determinants of daytime and night-time blood pressure.

    PubMed

    Musameh, M D; Nelson, C P; Gracey, J; Tobin, M; Tomaszewski, M; Samani, N J

    2017-01-01

    Blunted day-night difference in blood pressure (BP) is an independent cardiovascular risk factor, although there is limited information on determinants of diurnal variation in BP. We investigated determinants of day-night difference in systolic (SBP) and diastolic (DBP) BP and how these compared with determinants of daytime and night-time SBP and DBP. We analysed the association of mean daytime, mean night-time and mean day-night difference (defined as (mean daytime-mean night-time)/mean daytime) in SBP and DBP with clinical, lifestyle and biochemical parameters from 1562 adult individuals (mean age 38.6) from 509 nuclear families recruited in the GRAPHIC Study. We estimated the heritability of the various BP phenotypes. In multivariate analysis, there were significant associations of age, sex, markers of adiposity (body mass index and waist-hip ratio), plasma lipids (total and low-density lipoprotein cholesterol and triglycerides), serum uric acid, alcohol intake and current smoking status on daytime or night-time SBP and/or DBP. Of these, only age (P=4.7 × 10−5), total cholesterol (P=0.002), plasma triglycerides (P=0.006) and current smoking (P=3.8 × 10−9) associated with day-night difference in SBP, and age (P=0.001), plasma triglyceride (P=2.2 × 10−5) and current smoking (3.8 × 10−4) associated with day-night difference in DBP. 24-h, daytime and night-time SBP and DBP showed substantial heritability (ranging from 18-43%). In contrast, day-night difference in SBP showed a lower heritability (13%), while heritability of day-night difference in DBP was not significant. These data suggest that specific clinical, lifestyle and biochemical factors contribute to inter-individual variation in daytime, night-time and day-night differences in SBP and DBP. Variation in day-night differences in BP is largely non-genetic.
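    The day-night difference defined above is simple to compute; a worked example with invented readings (by the usual clinical convention, a dip below about 10% is considered blunted):

      day_sbp = [128, 132, 125, 130]     # hypothetical daytime SBP readings (mmHg)
      night_sbp = [112, 109, 115]        # hypothetical night-time readings

      mean_day = sum(day_sbp) / len(day_sbp)
      mean_night = sum(night_sbp) / len(night_sbp)
      dip = (mean_day - mean_night) / mean_day
      print(f"day-night difference: {dip:.1%}")   # about 13% for these numbers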

  5. Determinants of day–night difference in blood pressure, a comparison with determinants of daytime and night-time blood pressure

    PubMed Central

    Musameh, M D; Nelson, C P; Gracey, J; Tobin, M; Tomaszewski, M; Samani, N J

    2017-01-01

    Blunted day–night difference in blood pressure (BP) is an independent cardiovascular risk factor, although there is limited information on determinants of diurnal variation in BP. We investigated determinants of day–night difference in systolic (SBP) and diastolic (DBP) BP and how these compared with determinants of daytime and night-time SBP and DBP. We analysed the association of mean daytime, mean night-time and mean day–night difference (defined as (mean daytime−mean night-time)/mean daytime) in SBP and DBP with clinical, lifestyle and biochemical parameters from 1562 adult individuals (mean age 38.6) from 509 nuclear families recruited in the GRAPHIC Study. We estimated the heritability of the various BP phenotypes. In multivariate analysis, there were significant associations of age, sex, markers of adiposity (body mass index and waist–hip ratio), plasma lipids (total and low-density lipoprotein cholesterol and triglycerides), serum uric acid, alcohol intake and current smoking status on daytime or night-time SBP and/or DBP. Of these, only age (P=4.7 × 10−5), total cholesterol (P=0.002), plasma triglycerides (P=0.006) and current smoking (P=3.8 × 10−9) associated with day–night difference in SBP, and age (P=0.001), plasma triglyceride (P=2.2 × 10−5) and current smoking (3.8 × 10−4) associated with day–night difference in DBP. 24-h, daytime and night-time SBP and DBP showed substantial heritability (ranging from 18–43%). In contrast day–night difference in SBP showed a lower heritability (13%) while heritability of day–night difference in DBP was not significant. These data suggest that specific clinical, lifestyle and biochemical factors contribute to inter-individual variation in daytime, night-time and day–night differences in SBP and DBP. Variation in day–night differences in BP is largely non-genetic. PMID:26984683

  6. Basic design principles of colorimetric vision systems

    NASA Astrophysics Data System (ADS)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. The color measurement instruments used for production quality control, such as colorimeters and spectrophotometers, have many limitations; in many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain, and the few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system will fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them in vision systems. The subject far exceeds the limitations of a journal paper, so only the most important aspects are discussed, including an overview of the major areas of application for colorimetric vision systems and an analysis of why some customers are happy with their vision systems and some are not.
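    The basic colorimetric principle at stake is that raw camera RGB must be transformed into a colorimetric space before color differences are measured. A sketch using the standard sRGB/D65 matrices follows; a real system would calibrate the camera's own transform rather than assume sRGB:

      import numpy as np

      def srgb_to_lab(rgb):
          rgb = np.asarray(rgb, dtype=float) / 255.0
          lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
          M = np.array([[0.4124, 0.3576, 0.1805],    # sRGB -> XYZ (D65)
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
          xyz = M @ lin
          xyz /= np.array([0.9505, 1.0, 1.0890])     # normalize to D65 white
          f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                       xyz / (3 * (6 / 29) ** 2) + 4 / 29)
          return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

      print(srgb_to_lab([255, 0, 0]))   # CIELAB coordinates of sRGB red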

  7. Nocturnal vision and landmark orientation in a tropical halictid bee.

    PubMed

    Warrant, Eric J; Kelber, Almut; Gislén, Anna; Greiner, Birgit; Ribi, Willi; Wcislo, William T

    2004-08-10

    Some bees and wasps have evolved nocturnal behavior, presumably to exploit night-flowering plants or avoid predators. Like their day-active relatives, they have apposition compound eyes, a design usually found in diurnal insects. The insensitive optics of apposition eyes are not well suited for nocturnal vision. How well then do nocturnal bees and wasps see? What optical and neural adaptations have they evolved for nocturnal vision? We studied female tropical nocturnal sweat bees (Megalopta genalis) and discovered that they are able to learn landmarks around their nest entrance prior to nocturnal foraging trips and to use them to locate the nest upon return. The morphology and optics of the eye, and the physiological properties of the photoreceptors, have evolved to give Megalopta's eyes almost 30 times greater sensitivity to light than the eyes of diurnal worker honeybees, but this alone does not explain their nocturnal visual behavior. This implies that sensitivity is improved by a strategy of photon summation in time and in space, the latter of which requires the presence of specialized cells that laterally connect ommatidia into groups. First-order interneurons, with significantly wider lateral branching than those found in diurnal bees, have been identified in the first optic ganglion (the lamina ganglionaris) of Megalopta's optic lobe. We believe that these cells have the potential to mediate spatial summation. Despite the scarcity of photons, Megalopta is able to visually orient to landmarks at night in a dark forest understory, an ability permitted by unusually sensitive apposition eyes and neural photon summation.
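    The summation strategy can be illustrated numerically: pooling Poisson photon counts over neighboring ommatidia (space) and successive samples (time) raises the signal-to-noise ratio at the cost of spatial and temporal resolution. A toy sketch with invented rates, not measured photoreceptor data:

      import numpy as np

      rng = np.random.default_rng(1)
      photons = rng.poisson(lam=2.0, size=(100, 30, 30))   # time x ommatidial grid

      single = photons[0]                                    # one sample per facet
      pooled = photons[:10].sum(axis=0)                      # sum 10 samples in time
      pooled = pooled.reshape(10, 3, 10, 3).sum(axis=(1, 3)) # sum 3x3 facet groups

      # For Poisson signals, SNR = sqrt(mean): about 1.4 unpooled vs. 13.4 pooled
      print(single.mean() ** 0.5, pooled.mean() ** 0.5)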

  8. Machine learning and computer vision approaches for phenotypic profiling

    PubMed Central

    Morris, Quaid

    2017-01-01

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. PMID:27940887

  9. Machine learning and computer vision approaches for phenotypic profiling.

    PubMed

    Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J

    2017-01-02

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach.
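    A minimal sketch of the segmentation-plus-feature-extraction pipeline described here, using scikit-image as one common choice (the reviewed studies use many different tools) and random data standing in for a micrograph:

      import numpy as np
      from skimage import filters, measure

      image = np.random.rand(256, 256)              # stand-in for a micrograph
      threshold = filters.threshold_otsu(image)     # global cell/background split
      labels = measure.label(image > threshold)     # connected components as objects

      # Per-object features form the rows of a phenotypic profile
      for region in measure.regionprops(labels)[:5]:
          print(region.label, region.area, region.eccentricity)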

  10. Local spatial frequency analysis for computer vision

    NASA Technical Reports Server (NTRS)

    Krumm, John; Shafer, Steven A.

    1990-01-01

    A sense of vision is a prerequisite for a robot to function in an unstructured environment. However, real-world scenes contain many interacting phenomena that lead to complex images which are difficult to interpret automatically. Typical computer vision research proceeds by analyzing various effects in isolation (e.g., shading, texture, stereo, defocus), usually on images devoid of realistic complicating factors. This leads to specialized algorithms which fail on real-world images. Part of this failure is due to the dichotomy of useful representations for these phenomena. Some effects are best described in the spatial domain, while others are more naturally expressed in frequency. In order to resolve this dichotomy, we present the combined space/frequency representation which, for each point in an image, shows the spatial frequencies at that point. Within this common representation, we develop a set of simple, natural theories describing phenomena such as texture, shape, aliasing and lens parameters. We show these theories lead to algorithms for shape from texture and for dealiasing image data. The space/frequency representation should be a key aid in untangling the complex interaction of phenomena in images, allowing automatic understanding of real-world scenes.
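    A combined space/frequency representation can be sketched as the local windowed Fourier spectrum at each image point; this is a generic construction in the spirit of the paper, not its exact formulation:

      import numpy as np

      def local_spectrum(image, y, x, window=16):
          """Windowed 2-D FFT magnitude around (y, x)."""
          half = window // 2
          patch = image[y - half:y + half, x - half:x + half]
          win = np.hanning(window)[:, None] * np.hanning(window)[None, :]
          return np.abs(np.fft.fftshift(np.fft.fft2(patch * win)))

      # A vertical grating: energy concentrates at the grating's frequency
      img = np.tile(np.sin(2 * np.pi * np.arange(128) / 8.0), (128, 1))
      spec = local_spectrum(img, 64, 64)
      print(np.unravel_index(spec.argmax(), spec.shape))  # peak offset from DC bin (8, 8)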

  11. A 16 x 16-pixel retinal-prosthesis vision chip with in-pixel digital image processing in a frequency domain by use of a pulse-frequency-modulation photosensor

    NASA Astrophysics Data System (ADS)

    Kagawa, Keiichiro; Furumiya, Tetsuo; Ng, David C.; Uehara, Akihiro; Ohta, Jun; Nunoshita, Masahiro

    2004-06-01

    We are exploring the application of pulse-frequency-modulation (PFM) photosensors to retinal prostheses for the blind, because the behavior of PFM photosensors is similar to that of retinal ganglion cells, which transmit visual data from the retina toward the brain. We have developed retinal-prosthesis vision chips that reshape the output pulses of the PFM photosensor into biphasic current pulses suitable for electric stimulation of retinal cells. In this paper, we introduce image-processing functions into the pixel circuits. We have designed a 16x16-pixel retinal-prosthesis vision chip with several kinds of in-pixel digital image processing, such as edge enhancement, edge detection, and low-pass filtering. This chip is a prototype demonstrator of a retinal-prosthesis vision chip applicable to in-vitro experiments. Exploiting the features of the PFM photosensor, we propose a new scheme that implements the above image processing in a frequency domain with digital circuitry: the intensity of incident light is converted to a 1-bit data stream by a PFM photosensor, and image processing is then executed by a 1-bit image processor based on the joining and annihilation of pulses. The chip is composed of four blocks: a pixel array, a row-parallel stimulation current amplifier array, a decoder, and a base current generator block. All blocks except the PFM photosensors and stimulation current amplifiers are implemented as digital circuitry, which contributes to robustness against noise and power-line fluctuations. With our vision chip, we can control the photosensitivity and the intensity and duration of the biphasic stimulus currents, as required for a retinal-prosthesis vision chip. The designed dynamic range is more than 100 dB. The amplitude of the stimulus current is given by a base current, common to all pixels, multiplied by a value stored in an amplitude memory in each pixel. Base currents of the negative and positive pulses are common to all the pixels.
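    The PFM principle itself is easy to state in code: a photocurrent integrates on a node until a comparator threshold is crossed, a pulse is emitted, and the node resets, so the pulse rate tracks light intensity. A behavioral sketch with illustrative parameters, not the chip's circuit values:

      def pfm_pulses(intensity, threshold=1.0, dt=1e-4, duration=0.1):
          """Pulse times for a constant photocurrent `intensity` (arbitrary units)."""
          v, t, pulses = 0.0, 0.0, []
          while t < duration:
              v += intensity * dt        # photocurrent integrates on the node
              if v >= threshold:         # comparator fires and resets the node
                  pulses.append(t)
                  v = 0.0
              t += dt
          return pulses

      print(len(pfm_pulses(100.0)), len(pfm_pulses(200.0)))  # rate doubles with light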

  12. The effect of multispectral image fusion enhancement on human efficiency.

    PubMed

    Bittner, Jennifer L; Schill, M Trent; Mohd-Zaid, Fairul; Blaha, Leslie M

    2017-01-01

    The visual system can be highly influenced by changes to visual presentation. Thus, numerous techniques have been developed to augment imagery in an attempt to improve human perception. The current paper examines the potential impact of one such enhancement, multispectral image fusion, where imagery captured in varying spectral bands (e.g., visible, thermal, night vision) is algorithmically combined to produce an output to strengthen visual perception. We employ ideal observer analysis over a series of experimental conditions to (1) establish a framework for testing the impact of image fusion over the varying aspects surrounding its implementation (e.g., stimulus content, task) and (2) examine the effectiveness of fusion on human information processing efficiency in a basic application. We used a set of rotated Landolt C images captured with a number of individual sensor cameras and combined across seven traditional fusion algorithms (e.g., Laplacian pyramid, principal component analysis, averaging) in a 1-of-8 orientation task. We found that, contrary to the idea of fused imagery always producing a greater impact on perception, single-band imagery can be just as influential. Additionally, efficiency data were shown to fluctuate based on sensor combination instead of fusion algorithm, suggesting the need for examining multiple factors to determine the success of image fusion. Our use of ideal observer analysis, a popular technique from the vision sciences, provides not only a standard for testing fusion in direct relation to the visual system but also allows for comparable examination of fusion across its associated problem space of application.
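    Two of the simpler fusion rules from the algorithm set above can be sketched directly; placeholder arrays stand in for co-registered sensor frames, and real pipelines also handle registration and radiometric normalization:

      import numpy as np

      visible = np.random.rand(64, 64)    # stand-ins for co-registered frames
      thermal = np.random.rand(64, 64)

      fused_avg = (visible + thermal) / 2.0          # averaging rule

      # PCA rule: weight each band by its loading on the first principal axis
      cov = np.cov(np.stack([visible.ravel(), thermal.ravel()]))
      eigvals, eigvecs = np.linalg.eigh(cov)
      w = np.abs(eigvecs[:, -1])
      w = w / w.sum()
      fused_pca = w[0] * visible + w[1] * thermal
      print(fused_avg.shape, fused_pca.shape)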

  13. Vision based flight procedure stereo display system

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established from a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS, with area texture generated from remote sensing photos and aerial photographs at various levels of detail. The flight navigation information is linked to the database according to the flight approach procedure, so the approach area can be displayed dynamically following the designed procedure. The approach area images are rendered in two channels, one for left-eye images and the other for right-eye images; through a polarized stereoscopic projection system, pilots and aircrew get a vivid 3D view of the destination approach area. Using this system in preflight preparation, the aircrew obtain more vivid information about the destination approach area, improving the pilot's confidence before the flight mission and, accordingly, flight safety. The system is also useful for validating visual flight procedure designs and aids flight procedure design.

  14. Image Understanding Architecture

    DTIC Science & Technology

    1991-09-01

    architecture to support real-time, knowledge-based image understanding, and develop the software support environment that will be needed to utilize... Keywords: Image Understanding Architecture, Knowledge-Based Vision, AI, Real-Time Computer Vision, Software Simulator, Parallel Processor... In addition to sensory and knowledge-based processing it is useful to introduce a level of symbolic processing. Thus, vision researchers

  15. Oxidative DNA damage during night shift work.

    PubMed

    Bhatti, Parveen; Mirick, Dana K; Randolph, Timothy W; Gong, Jicheng; Buchanan, Diana Taibi; Zhang, Junfeng Jim; Davis, Scott

    2017-09-01

    We previously reported that compared with night sleep, day sleep among shift workers was associated with reduced urinary excretion of 8-hydroxydeoxyguanosine (8-OH-dG), potentially reflecting a reduced ability to repair 8-OH-dG lesions in DNA. We identified the absence of melatonin during day sleep as the likely causative factor. We now investigate whether night work is also associated with reduced urinary excretion of 8-OH-dG. For this cross-sectional study, 50 shift workers with the largest negative differences in night work versus night sleep circulating melatonin levels (measured as 6-sulfatoxymelatonin in urine) were selected from among the 223 shift workers included in our previous study. 8-OH-dG concentrations were measured in stored urine samples using high performance liquid chromatography with electrochemical detection. Mixed effects models were used to compare night work versus night sleep 8-OH-dG levels. Circulating melatonin levels during night work (mean=17.1 ng/mg creatinine) were much lower than during night sleep (mean=51.7 ng/mg creatinine). In adjusted analyses, average urinary 8-OH-dG levels during the night work period were only 20% of those observed during the night sleep period (95% CI 10% to 30%; p<0.001). This study suggests that night work, relative to night sleep, is associated with reduced repair of 8-OH-dG lesions in DNA and that the effect is likely driven by melatonin suppression occurring during night work relative to night sleep. If confirmed, future studies should evaluate melatonin supplementation as a means to restore oxidative DNA damage repair capacity among shift workers.

  16. The Application of Virtex-II Pro FPGA in High-Speed Image Processing Technology of Robot Vision Sensor

    NASA Astrophysics Data System (ADS)

    Ren, Y. J.; Zhu, J. G.; Yang, X. Y.; Ye, S. H.

    2006-10-01

    The Virtex-II Pro FPGA is applied to the vision sensor tracking system of an IRB2400 robot. The hardware platform, which undertakes the task of improving SNR and compressing data, is built around the FPGA's high-speed image processing. The low-level image-processing algorithm is realized by combining the FPGA fabric with the embedded CPU, which accelerates image processing; the embedded CPU also makes it easy to realize the interface logic design. Key techniques such as the read-write process, template matching, and convolution are presented, and several modules are simulated. Finally, a comparison is carried out among implementations of these modules using this design, a PC, and a DSP. Because the core of the high-speed image processing system is an FPGA whose function can be conveniently updated, the measurement system is, to a degree, intelligent.
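    A software analogue of the template-matching module mentioned above (the paper realizes it in FPGA logic; this is a functional sketch only, with a synthetic frame):

      import cv2
      import numpy as np

      rng = np.random.default_rng(0)
      frame = rng.integers(0, 50, size=(120, 160), dtype=np.uint8)  # synthetic frame
      frame[40:56, 60:76] = 255                      # bright square target
      template = frame[40:56, 60:76].copy()          # 16x16 template

      scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
      _, max_val, _, max_loc = cv2.minMaxLoc(scores)
      print(f"best match {max_val:.2f} at {max_loc}")  # expect location (60, 40)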

  17. Use of a vision model to quantify the significance of factors affecting target conspicuity

    NASA Astrophysics Data System (ADS)

    Gilmore, M. A.; Jones, C. K.; Haynes, A. W.; Tolhurst, D. J.; To, M.; Troscianko, T.; Lovell, P. G.; Parraga, C. A.; Pickavance, K.

    2006-05-01

    When designing camouflage it is important to understand how the human visual system processes the information to discriminate the target from the background scene. A vision model has been developed to compare two images and detect differences in local contrast in each spatial frequency channel. Observer experiments are being undertaken to validate this vision model so that the model can be used to quantify the relative significance of different factors affecting target conspicuity. Synthetic imagery can be used to design improved camouflage systems. The vision model is being used to compare different synthetic images to understand what features in the image are important to reproduce accurately and to identify the optimum way to render synthetic imagery for camouflage effectiveness assessment. This paper will describe the vision model and summarise the results obtained from the initial validation tests. The paper will also show how the model is being used to compare different synthetic images and discuss future work plans.
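
    A minimal sketch of the channel-wise comparison the abstract describes: band-pass both images with a small difference-of-Gaussians filter bank and difference the local contrast energy per channel. The filter scales are illustrative assumptions, not the model's calibrated parameters:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def channel_contrast_differences(img_a, img_b, sigmas=(1, 2, 4, 8)):
            """Local-contrast difference maps, one per spatial-frequency channel."""
            diffs = []
            for s in sigmas:
                # Difference-of-Gaussians approximates a band-pass channel.
                band_a = gaussian_filter(img_a, s) - gaussian_filter(img_a, 2 * s)
                band_b = gaussian_filter(img_b, s) - gaussian_filter(img_b, 2 * s)
                # Local contrast energy, pooled over a neighbourhood.
                c_a = gaussian_filter(band_a ** 2, 2 * s)
                c_b = gaussian_filter(band_b ** 2, 2 * s)
                diffs.append(np.abs(c_a - c_b))
            return diffs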

  18. A Night-time Look at Typhoon Soudelor from NASA-NOAA's Suomi NPP Satellite

    NASA Image and Video Library

    2015-08-10

    On August 6, 2015, NASA-NOAA's Suomi NPP satellite passed over powerful Typhoon Soudelor at night when it was headed toward Taiwan. The Visible Infrared Imaging Radiometer Suite (VIIRS) instrument aboard NASA-NOAA's Suomi satellite captured this night-time infrared image of the storm. At 1500 UTC (11 a.m. EDT) on August 6, 2015, Typhoon Soudelor had maximum sustained winds near 90 knots (103.6 mph/166.7 kph). It was centered near 21.3 North latitude and 127.5 East longitude, about 324 nautical miles (372.9 miles/600 km) south of Kadena Air Base, Okinawa, Japan. It was moving to the west at 10 knots (11.5 mph/18.5 kph). Taiwan is located west (left) of the powerful typhoon in this image. Credit: UWM/CIMSS/SSEC, William Straka III NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission. Follow us on Twitter Like us on Facebook Find us on Instagram

  19. How lateral inhibition and fast retinogeniculo-cortical oscillations create vision: A new hypothesis.

    PubMed

    Jerath, Ravinder; Cearley, Shannon M; Barnes, Vernon A; Nixon-Shapiro, Elizabeth

    2016-11-01

    The role of the physiological processes involved in human vision escapes clarification in current literature. Unanswered questions about vision include: 1) whether there is more to lateral inhibition than previously proposed, 2) the role of the discs in rods and cones, 3) how inverted images on the retina are converted to erect images for visual perception, 4) what portion of the image formed on the retina is actually processed in the brain, 5) the reason we have an after-image with antagonistic colors, and 6) how we remember space. This theoretical article attempts to clarify some of the physiological processes involved in human vision. The global integration of visual information is conceptual; therefore, we include illustrations to present our theory. Universally, the eyeball is about 2.4 cm in diameter and works together with membrane potential, correspondingly representing the retinal layers, photoreceptors, and cortex. Images formed within the photoreceptors must first be converted into chemical signals on the photoreceptors' individual discs, and the signals at each disc are transduced from light photons into electrical signals. We contend that the discs code the electrical signals into accurate distances, as shown in our figures. The pre-existing oscillations among the various cortices, including the striate and parietal cortex, and the retina work in unison to create an infrastructure of visual space that functionally "places" the objects within this "neural" space. The horizontal layers integrate all discs accurately to create a retina that is pre-coded for distance. Our theory suggests image inversion never takes place on the retina; rather, images fall onto the retina compressed and coiled, and are then amplified through lateral inhibition, with intensification and amplification on the OFF-center cones. The intensified and amplified images are decompressed and expanded in the brain, becoming the images we perceive as external vision. This is a theoretical

  20. Implementing a night-shift clinical nurse specialist.

    PubMed

    Becker, Dawn Marie

    2013-01-01

    Night-shift nurses receive fewer educational opportunities and less administrative support than do day-shift staff, tend to be newer, with less experience and fewer resources, and experience greater turnover rates, stress, and procedural errors. In an attempt to bridge the gap between day- and night-shift nursing, a night-shift clinical nurse specialist (CNS) position was created in a midsized, community teaching hospital. The goal was to provide an advanced practice presence to improve patient outcomes, communication, education, and cost-effectiveness. The night-shift CNS participated in nursing education and skill certifications, communicated new procedures and information, and created a communication committee specifically for night-shift nurses. Through regular rounding and on-call notification, the CNS was available to every area of the hospital for consultation and clinical assistance and assisted with rapid responses, codes, and traumas. Providing education during night shift reduced overtime costs and increased morale, positively affecting turnover rates. The night-shift CNS position has improved morale and equalized support for night-shift nurses. More research, most notably in specific night-shift metrics, is necessary, and with the implementation of the role in additional facilities, more can be understood about improving patient care and nursing staff satisfaction during night shift.

  1. Operator-coached machine vision for space telerobotics

    NASA Technical Reports Server (NTRS)

    Bon, Bruce; Wilcox, Brian; Litwin, Todd; Gennery, Donald B.

    1991-01-01

    A prototype system for interactive object modeling has been developed and tested. The goal of this effort has been to create a system that demonstrates the feasibility of highly interactive, operator-coached machine vision in a realistic task environment, and to provide a testbed for experimentation with various modes of operator interaction. The purpose of such a system is to use human perception where machine vision is difficult, i.e., to segment the scene into objects and to designate their features, and to use machine vision to overcome limitations of human perception, i.e., for accurate measurement of object geometry. The system captures and displays video images from a number of cameras, allows the operator to designate a polyhedral object one edge at a time by moving a 3-D cursor within these images, performs a least-squares fit of the designated edges to edge data detected with a modified Sobel operator, and combines the edges thus detected to form a wire-frame object model that matches the Sobel data.
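
    The edge data mentioned above come from a modified Sobel operator. The standard operator it modifies can be sketched as follows; the specific modification used in the testbed is not described in the record, so this is the textbook version only:

        import numpy as np
        from scipy.ndimage import convolve

        def sobel_edges(gray):
            """Gradient magnitude and direction from the standard 3x3 Sobel kernels."""
            kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
            ky = kx.T
            gx = convolve(gray.astype(float), kx)
            gy = convolve(gray.astype(float), ky)
            return np.hypot(gx, gy), np.arctan2(gy, gx)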

  2. Night shift work and modifiable lifestyle factors.

    PubMed

    Pepłońska, Beata; Burdelak, Weronika; Krysicka, Jolanta; Bukowska, Agnieszka; Marcinkiewicz, Andrzej; Sobala, Wojciech; Klimecka-Muszyńska, Dorota; Rybacki, Marcin

    2014-10-01

    Night shift work has been linked to some chronic diseases. Modification of lifestyle by night work may partially contribute to the development of these diseases; nevertheless, epidemiological evidence so far is limited. The aim of the study was to explore the association between night shift work and lifestyle factors using data from a cross-sectional study among blue-collar workers employed in industrial plants in Łódź, Poland. An anonymous questionnaire was self-administered among 605 employees (236 women and 369 men, aged 35 or more), of whom 434 were currently working night shifts. The distribution of selected lifestyle-related factors, such as smoking, alcohol drinking, physical activity, body mass index (BMI), number of main meals and the hour of the last meal, was compared between current, former, and never night shift workers. Adjusted ORs or predicted means were calculated as measures of the associations between night shift work and lifestyle factors, with age, marital status and education included in the models as covariates. Recreational inactivity (defined here as less than one hour per week of recreational physical activity) was associated with current night shift work when compared to never night shift workers (OR = 2.43, 95% CI: 1.13-5.22) among men. Alcohol abstinence and a later time of the last meal were associated with night shift work among women. A statistically significant positive relationship between night shift work duration and BMI was observed among men (p = 0.029). This study confirms previous reports of less recreational exercise among night shift workers and a tendency toward increased body weight. This finding has important public health implications for the prevention of chronic diseases among night shift workers. Initiatives promoting physical activity, addressed in particular to night shift workers, are recommended.

  3. 3-D Signal Processing in a Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Philip A. Araman

    1991-01-01

    This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to three dimensions. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...

  4. Time-to-impact sensors in robot vision applications based on the near-sensor image processing concept

    NASA Astrophysics Data System (ADS)

    Åström, Anders; Forchheimer, Robert

    2012-03-01

    Based on the Near-Sensor Image Processing (NSIP) concept and recent results concerning optical flow and Time-to-Impact (TTI) computation with this architecture, we show how these results can be used and extended for robot vision applications. The first case involves estimation of the tilt of an approaching planar surface. The second case concerns the use of two NSIP cameras to estimate absolute distance and speed, similar to a stereo-matching system but without the need to perform image correlations. Returning to a one-camera system, the third case deals with the problem of estimating the shape of the approaching surface. It is shown that the previously developed TTI method not only gives a very compact solution with respect to hardware complexity, but also achieves surprisingly high performance.
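
    As background to the TTI results above: for a camera closing on a surface at constant speed, time-to-impact equals the apparent size of an image feature divided by its rate of growth. A minimal sketch under that constant-velocity assumption (the NSIP hardware derives the same quantity from optical flow rather than explicit feature sizes):

        def time_to_impact(size_prev, size_curr, dt):
            """TTI from the expansion of an image feature between two frames.

            For constant closing speed, TTI = s / (ds/dt), where s is the
            feature's apparent size (e.g., its width in pixels).
            """
            growth = (size_curr - size_prev) / dt
            if growth <= 0:
                return float("inf")  # not approaching
            return size_curr / growth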

  5. A color-coded vision scheme for robotics

    NASA Technical Reports Server (NTRS)

    Johnson, Kelley Tina

    1991-01-01

    Most vision systems for robotic applications rely entirely on the extraction of information from gray-level images. Humans, however, regularly depend on color to discriminate between objects. Therefore, the inclusion of color in a robot vision system seems a natural extension of the existing gray-level capabilities. A method for robot object recognition using a color-coding classification scheme is discussed. The scheme is based on an algebraic system in which a two-dimensional color image is represented as a polynomial of two variables. The system is then used to find the color contour of objects. In a controlled environment, such as that of the in-orbit space station, a particular class of objects can thus be quickly recognized by its color.

  6. The role of vision processing in prosthetic vision.

    PubMed

    Barnes, Nick; He, Xuming; McCarthy, Chris; Horne, Lachlan; Kim, Junae; Scott, Adele; Lieby, Paulette

    2012-01-01

    Prosthetic vision provides vision which is reduced in resolution and dynamic range compared to normal human vision. This comes about both due to residual damage to the visual system from the condition that caused vision loss, and due to limitations of current technology. However, even with limitations, prosthetic vision may still be able to support functional performance which is sufficient for tasks which are key to restoring independent living and quality of life. Here vision processing can play a key role, ensuring that information which is critical to the performance of key tasks is available within the capability of the available prosthetic vision. In this paper, we frame vision processing for prosthetic vision, highlight some key areas which present problems in terms of quality of life, and present examples where vision processing can help achieve better outcomes.

  7. Computer Vision Assisted Virtual Reality Calibration

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  8. Night and day in the VA: associations between night shift staffing, nurse workforce characteristics, and length of stay.

    PubMed

    de Cordova, Pamela B; Phibbs, Ciaran S; Schmitt, Susan K; Stone, Patricia W

    2014-04-01

    In hospitals, nurses provide patient care around the clock, but the impact of night staff characteristics on patient outcomes is not well understood. The aim of this study was to examine the association between night nurse staffing and workforce characteristics and the length of stay (LOS) in 138 veterans affairs (VA) hospitals using panel data from 2002 through 2006. Staffing in hours per patient day was higher during the day than at night. The day nurse workforce had more educational preparation than the night workforce. Nurses' years of experience at the unit, facility, and VA level were greater at night. In multivariable analyses controlling for confounding variables, higher night staffing and a higher skill mix were associated with reduced LOS. © 2014 Wiley Periodicals, Inc.

  9. Application of the SP theory of intelligence to the understanding of natural vision and the development of computer vision.

    PubMed

    Wolff, J Gerard

    2014-01-01

    The SP theory of intelligence aims to simplify and integrate concepts in computing and cognition, with information compression as a unifying theme. This article is about how the SP theory may, with advantage, be applied to the understanding of natural vision and the development of computer vision. Potential benefits include an overall simplification of concepts in a universal framework for knowledge and seamless integration of vision with other sensory modalities and other aspects of intelligence. Low level perceptual features such as edges or corners may be identified by the extraction of redundancy in uniform areas in the manner of the run-length encoding technique for information compression. The concept of multiple alignment in the SP theory may be applied to the recognition of objects, and to scene analysis, with a hierarchy of parts and sub-parts, at multiple levels of abstraction, and with family-resemblance or polythetic categories. The theory has potential for the unsupervised learning of visual objects and classes of objects, and suggests how coherent concepts may be derived from fragments. As in natural vision, both recognition and learning in the SP system are robust in the face of errors of omission, commission and substitution. The theory suggests how, via vision, we may piece together a knowledge of the three-dimensional structure of objects and of our environment, it provides an account of how we may see things that are not objectively present in an image, how we may recognise something despite variations in the size of its retinal image, and how raster graphics and vector graphics may be unified. And it has things to say about the phenomena of lightness constancy and colour constancy, the role of context in recognition, ambiguities in visual perception, and the integration of vision with other senses and other aspects of intelligence.
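
    A toy illustration of the run-length-encoding idea mentioned above: uniform regions compress into long runs, and the boundaries between runs fall exactly where intensity changes, i.e., at candidate edges. A minimal one-scan-line sketch:

        def run_length_edges(scanline):
            """Run-length encode a 1-D scan line; run boundaries mark candidate edges."""
            runs, edges = [], []
            start = 0
            for i in range(1, len(scanline) + 1):
                if i == len(scanline) or scanline[i] != scanline[start]:
                    runs.append((scanline[start], i - start))  # (value, length)
                    if i < len(scanline):
                        edges.append(i)  # boundary between two runs
                    start = i
            return runs, edges

        # A dark-to-light step yields one run per region and one edge:
        # run_length_edges([0, 0, 0, 7, 7]) -> ([(0, 3), (7, 2)], [3])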

  10. 360 degree vision system: opportunities in transportation

    NASA Astrophysics Data System (ADS)

    Thibault, Simon

    2007-09-01

    Panoramic technologies are experiencing new and exciting opportunities in the transportation industries. The advantages of panoramic imagers are numerous: increased area coverage with fewer cameras, imaging of multiple targets simultaneously, instantaneous full-horizon detection, easier integration of various applications on the same imager, and others. This paper reports our work on panomorph optics and their potential usage in transportation applications. The novel panomorph lens is a new type of high-resolution panoramic imager well suited to the transportation industries. The panomorph lens uses optimization techniques to improve the performance of a customized optical system for specific applications. By adding a custom angle-to-pixel relation at the optical design stage, the optical system provides ideal image coverage designed to reduce and optimize processing. The optics can be customized for the visible, near-infrared (NIR) or infrared (IR) wavebands. The panomorph lens is designed to optimize the cost per pixel, which is particularly important in the IR. We discuss the use of a 360-degree vision system to enhance on-board collision avoidance systems, intelligent cruise control and parking assistance. 360-degree panoramic vision systems might enable safer highways and a significant reduction in casualties.

  11. Diagnostic Performance of Ultrasonography for Pediatric Appendicitis: A Night and Day Difference?

    PubMed

    Mangona, Kate Louise M; Guillerman, R Paul; Mangona, Victor S; Carpenter, Jennifer; Zhang, Wei; Lopez, Monica; Orth, Robert C

    2017-12-01

    For imaging pediatric appendicitis, ultrasonography (US) is preferred because of its lack of ionizing radiation, but it is limited by operator dependence. This study investigates US diagnostic performance during night shifts covered by radiology trainees compared to day shifts covered by attending radiologists. Appy-Scores (1 = completely visualized normal appendix; 2 = partially visualized normal appendix; 3 = nonvisualized appendix with no inflammatory changes in the expected region of the appendix; 4 = equivocal; 5a = nonperforated appendicitis; 5b = perforated appendicitis) from 2935 US examinations (2161:774, day-to-night) from July 2013 to 2014 were correlated with intraoperative diagnoses and clinical follow-up. The diagnostic performance of trainees and attendings was compared with the Fisher exact test. Interobserver agreement was measured by the Cohen kappa coefficient. Appendicitis prevalence was 25.3% (day) and 22.5% (night). Sensitivity, specificity, accuracy, negative predictive value, and positive predictive value were 94.0%, 93.7%, 93.8%, 97.9%, and 83.4% during the day and 92.0%, 91.2%, 91.3%, 97.5%, and 75.2% at night. Specificity (P = .048) and positive predictive value (P = .011) differed, with more false positives at night (7%) than during the day (4.7%). Trainee and attending agreement was high (k = 0.995), with Appy-Scores of 1, 4, and 5a most frequently discordant. US has high diagnostic performance and interobserver agreement for pediatric appendicitis when interpreted by radiology trainees during night shifts or attending radiologists during day shifts. However, the lower specificity and positive predictive value at night warrant thorough trainee education to avoid false-positive examinations. Published by Elsevier Inc.
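
    For readers who want to recompute the reported metrics from raw counts, a minimal sketch; the function is generic, and the example counts below are hypothetical, not taken from the study:

        def diagnostic_metrics(tp, fp, tn, fn):
            """Standard metrics from a 2x2 confusion table."""
            return {
                "sensitivity": tp / (tp + fn),  # true positives / all with disease
                "specificity": tn / (tn + fp),  # true negatives / all without disease
                "accuracy": (tp + tn) / (tp + fp + tn + fn),
                "npv": tn / (tn + fn),          # negative predictive value
                "ppv": tp / (tp + fp),          # positive predictive value
            }

        # Hypothetical example: diagnostic_metrics(tp=470, fp=55, tn=1105, fn=30)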

  12. Vision-aided Monitoring and Control of Thermal Spray, Spray Forming, and Welding Processes

    NASA Technical Reports Server (NTRS)

    Agapakis, John E.; Bolstad, Jon

    1993-01-01

    Vision is one of the most powerful forms of non-contact sensing for monitoring and control of manufacturing processes. However, processes involving an arc plasma or flame such as welding or thermal spraying pose particularly challenging problems to conventional vision sensing and processing techniques. The arc or plasma is not typically limited to a single spectral region and thus cannot be easily filtered out optically. This paper presents an innovative vision sensing system that uses intense stroboscopic illumination to overpower the arc light and produce a video image that is free of arc light or glare and dedicated image processing and analysis schemes that can enhance the video images or extract features of interest and produce quantitative process measures which can be used for process monitoring and control. Results of two SBIR programs sponsored by NASA and DOE and focusing on the application of this innovative vision sensing and processing technology to thermal spraying and welding process monitoring and control are discussed.

  13. Predicting pork loin intramuscular fat using computer vision system.

    PubMed

    Liu, J-H; Sun, X; Young, J M; Bachmeier, L A; Newman, D J

    2018-09-01

    The objective of this study was to investigate the ability of a computer vision system to predict pork intramuscular fat percentage (IMF%). Center-cut loin samples (n = 85) were trimmed of subcutaneous fat and connective tissue. Images were acquired, and pixels were segregated to estimate image IMF% and 18 image color features for each image. Subjective IMF% was determined by a trained grader. Ether-extract IMF% was calculated using the ether extract method. Image color features and image IMF% were used as predictors in stepwise regression and support vector machine models. Results showed that subjective IMF% had a correlation of 0.81 with ether-extract IMF%, while image IMF% had a 0.66 correlation with ether-extract IMF%. Accuracy rates for the regression models were 0.63 for stepwise regression and 0.75 for the support vector machine. Although subjective IMF% was shown to give better prediction, the results demonstrate the potential of a computer vision system as a tool for predicting pork IMF% in the future. Copyright © 2018 Elsevier Ltd. All rights reserved.
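
    A hedged sketch of the support-vector regression step described above, using scikit-learn; the feature layout, kernel choice and placeholder data are assumptions, not the authors' configuration:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        # Placeholder data standing in for 85 loin samples: 18 image colour
        # features plus image IMF% as predictors, ether-extract IMF% as target.
        X = np.random.rand(85, 19)
        y = np.random.rand(85) * 5.0

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
        model.fit(X_tr, y_tr)
        print("held-out R^2:", model.score(X_te, y_te))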

  14. Iris recognition and what is next? Iris diagnosis: a new challenging topic for machine vision from image acquisition to image interpretation

    NASA Astrophysics Data System (ADS)

    Perner, Petra

    2017-03-01

    Molecular image-based techniques are widely used in medicine to detect specific diseases. Diagnosis from visual appearance ("look diagnosis") is an important issue, and analysis of the eye also plays an important role in detecting specific diseases. These are important topics in medicine, and their standardization through an automatic system could be a new and challenging field for machine vision. Compared to iris recognition, iris diagnosis places much higher demands on the image acquisition and the interpretation of the iris. Iris diagnosis (iridology) is the investigation and analysis of the colored part of the eye, the iris, to discover factors that play an important role in the prevention and treatment of illness, as well as in the preservation of optimum health. An automatic system would pave the way for a much wider use of iris diagnosis for diagnosing illnesses and for individual health protection. In this paper, we describe our work towards an automatic iris diagnosis system. We describe the image acquisition and the problems associated with it, and explain different approaches to image acquisition and image preprocessing. We describe the image analysis method for detecting the iris and give the meta-model for image interpretation. Based on this model we show the many tasks for image analysis, which range from image-object feature analysis and spatial image analysis to color image analysis. Our first results for recognition of the iris are given. We describe how the pupil and unwanted lamp spots are detected, and we explain how to recognize orange and blue spots in the iris and match them against the topological map of the iris. Finally, we give an outlook on further work.

  15. Navigation integrity monitoring and obstacle detection for enhanced-vision systems

    NASA Astrophysics Data System (ADS)

    Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter

    2001-08-01

    Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g., provided by differential GPS (DGPS). The reliability of the synthetic vision depends highly on both the accuracy of the database used and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation cannot be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible, and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g., the relative position of runway and aircraft, have to be extracted directly from the sensor data. The main contribution of this paper concerns the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm as the primary EV sensor. For integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g., other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of the database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear. The outcome of this runway check contributes to the integrity analysis, too. Concurrently with this investigation, a radar-image-based navigation is performed, without using either precision navigation or detailed database information, to determine the aircraft's position relative to the runway. The performance of our

  16. Improvement in vision: a new goal for treatment of hereditary retinal degenerations

    PubMed Central

    Jacobson, Samuel G; Cideciyan, Artur V; Aguirre, Gustavo D; Roman, Alejandro J; Sumaroka, Alexander; Hauswirth, William W; Palczewski, Krzysztof

    2015-01-01

    Introduction: Inherited retinal degenerations (IRDs) have long been considered untreatable and incurable. Recently, one form of early-onset autosomal recessive IRD, Leber congenital amaurosis (LCA) caused by mutations in RPE65 (retinal pigment epithelium-specific protein 65 kDa) gene, has responded with some improvement of vision to gene augmentation therapy and oral retinoid administration. This early success now requires refinement of such therapeutics to fully realize the impact of these major scientific and clinical advances. Areas covered: Progress toward human therapy for RPE65-LCA is detailed from the understanding of molecular mechanisms to preclinical proof-of-concept research to clinical trials. Unexpected positive and complicating results in the patients receiving treatment are explained. Logical next steps to advance the clinical value of the therapeutics are suggested. Expert opinion: The first molecularly based early-phase therapies for an IRD are remarkably successful in that vision has improved and adverse events are mainly associated with surgical delivery to the subretinal space. Yet, there are features of the gene augmentation therapeutic response, such as slowed kinetics of night vision, lack of foveal cone function improvement and relentlessly progressive retinal degeneration despite therapy, that still require research attention. PMID:26246977

  17. Three-dimensional ocular kinematics underlying binocular single vision

    PubMed Central

    Misslisch, H.

    2016-01-01

    We have analyzed the binocular coordination of the eyes during far-to-near refixation saccades based on the evaluation of distance ratios and angular directions of the projected target images relative to the eyes' rotation centers. By defining the geometric point of binocular single vision, called Helmholtz point, we found that disparities during fixations of targets at near distances were limited in the subject's three-dimensional visual field to the vertical and forward directions. These disparities collapsed to simple vertical disparities in the projective binocular image plane. Subjects were able to perfectly fuse the vertically disparate target images with respect to the projected Helmholtz point of single binocular vision, independent of the particular location relative to the horizontal plane of regard. Target image fusion was achieved by binocular torsion combined with corrective modulations of the differential half-vergence angles of the eyes in the horizontal plane. Our findings support the notion that oculomotor control combines vergence in the horizontal plane of regard with active torsion in the frontal plane to achieve fusion of the dichoptic binocular target images. PMID:27655969

  18. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.; Wu, Chris K.; Lin, Y. H.

    1991-01-01

    A system was developed for displaying computer graphics images of space objects, and its use was demonstrated as a testbed for evaluating vision systems for space applications. In order to evaluate vision systems, it is desirable to be able to control all factors involved in creating the images used for processing by the vision system. Considerable time and expense are involved in building accurate physical models of space objects, and precise location of the model relative to the viewer and accurate location of the light source require additional effort. As part of this project, graphics models of space objects such as the Solarmax satellite were created in which the user can control the light direction and the relative position of the object and the viewer. The work is also aimed at providing control of hue, shading, noise and shadows for use in demonstrating and testing image processing techniques. The simulated camera data can provide XYZ coordinates, pitch, yaw, and roll for the models. A physical model is also being used to provide comparison of camera images with the graphics images.

  19. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.

  20. PixonVision real-time Deblurring Anisoplanaticism Corrector (DAC)

    NASA Astrophysics Data System (ADS)

    Hier, R. G.; Puetter, R. C.

    2007-09-01

    DigiVision, Inc. and PixonImaging LLC have teamed to develop a real-time Deblurring Anisoplanaticism Corrector (DAC) for the Army. The DAC measures the geometric image warp caused by anisoplanaticism and removes it to rectify and stabilize (dejitter) the incoming image. Each new geometrically corrected image field is combined into a running-average reference image. The image averager employs a higher-order filter that uses temporal bandpass information to help identify true motion of objects and thereby adaptively moderate the contribution of each new pixel to the reference image. This result is then passed to a real-time PixonVision video processor (see paper 6696-04; note that the DAC also first dehazes the incoming video), where additional blur from high-order seeing effects is removed, the image is spatially denoised, and contrast is adjusted in a spatially adaptive manner. We plan to implement the entire algorithm within a few large modern FPGAs on a circuit board for video use. Obvious applications are within the DOD, surveillance and intelligence, and security and law enforcement communities. Prototype hardware is scheduled to be available in late 2008. To demonstrate the capabilities of the DAC, we present a software simulation of the algorithm applied to real atmosphere-corrupted video data collected by Sandia Labs.

  1. Recognizing pedestrian's unsafe behaviors in far-infrared imagery at night

    NASA Astrophysics Data System (ADS)

    Lee, Eun Ju; Ko, Byoung Chul; Nam, Jae-Yeal

    2016-05-01

    Pedestrian behavior recognition is important for early accident prevention in advanced driver assistance systems (ADAS). In particular, because most pedestrian-vehicle crashes occur between late night and early dawn, our study focuses on recognizing unsafe behavior of pedestrians using thermal images captured from a moving vehicle at night. For recognizing unsafe behavior, this study uses a convolutional neural network (CNN), which shows high recognition performance. However, because a traditional CNN requires very expensive training time and memory, we design a light CNN consisting of two convolutional layers and two subsampling layers for real-time processing in vehicle applications. In addition, we combine the light CNN with a boosted random forest (Boosted RF) classifier, so that the output of the CNN is not fully connected with the classifier but randomly connected with the boosted random forest. We name this CNN the randomly connected CNN (RC-CNN). The proposed method was successfully applied to the pedestrian unsafe behavior (PUB) dataset captured from a far-infrared camera at night, and its behavior recognition accuracy is confirmed to be higher than that of some CNN-related algorithms, with a shorter processing time.
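
    A minimal sketch of the "light CNN" topology as described (two convolutional layers and two subsampling layers); the channel counts, kernel sizes and 64x64 input are assumptions, and the boosted-random-forest stage that consumes a random subset of the CNN outputs is omitted:

        import torch
        import torch.nn as nn

        class LightCNN(nn.Module):
            """Two conv + two subsampling (pooling) layers, per the abstract."""
            def __init__(self, n_features=64):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(),
                    nn.MaxPool2d(2),                   # subsampling layer 1
                    nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(),
                    nn.MaxPool2d(2),                   # subsampling layer 2
                )
                self.head = nn.LazyLinear(n_features)  # feature vector for the RF stage

            def forward(self, x):
                return self.head(self.features(x).flatten(1))

        # One single-channel 64x64 thermal pedestrian crop:
        feats = LightCNN()(torch.randn(1, 1, 64, 64))  # -> shape (1, 64)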

  2. Difference in initial dental biofilm accumulation between night and day.

    PubMed

    Dige, Irene; Schlafer, Sebastian; Nyvad, Bente

    2012-12-01

    The study of initial microbial colonization on dental surfaces is a field of intensive research because of the aetiological role of biofilms in oral diseases. Most previous studies of de novo accumulation and composition of dental biofilms in vivo do not differentiate between biofilms formed during day and night. This study hypothesized that there is a diurnal variation in the rate of accumulation of bacteria on solid surfaces in the oral cavity. In situ biofilm from healthy individuals was collected for 12 h during day and night, respectively, subjected to fluorescent in situ hybridization and visualized using confocal laser scanning microscopy. Analysis of the biofilms using stereological methods and digital image analysis revealed a consistent statistically significant difference between both the total number of bacteria and the biovolume in the two 12-h groups (p = 0.012), with the highest accumulation of bacteria during daytime (a factor of 8.8 and 6.1 higher, respectively). Hybridization with probes specific for streptococci and Actinomyces naeslundii indicated a higher proportion of streptococci in biofilms grown during daytime as compared to night-time. No differences could be observed for A. naeslundii. The degree of microbial coverage and the bacterial composition varied considerably between different individuals. The data provide firm evidence that initial biofilm formation decreases during the night, which may reflect differences in the availability of salivary nutrients. This finding is of significant importance when studying population dynamics during experimental dental biofilm formation.

  3. Effects of one night of induced night-wakings versus sleep restriction on sustained attention and mood: a pilot study.

    PubMed

    Kahn, Michal; Fridenson, Shimrit; Lerer, Reut; Bar-Haim, Yair; Sadeh, Avi

    2014-07-01

    Despite their high prevalence in daily life, repeated night-wakings and their cognitive and emotional consequences have received less research attention compared to other types of sleep disturbances. Our aim was to experimentally compare the effects of one night of induced infrequent night-wakings (of ∼15 min, each requiring a purposeful response) and sleep restriction on sustained attention and mood in young adults. In a within-between subjects counterbalanced design, 61 healthy adults (40 females; aged 20-29 years) underwent home assessments of sustained attention and self-reported mood at two times: after a normal (control) sleep night, and after a night of either sleep restriction (4h in bed) or induced night-wakings (four prolonged awakenings across 8h in bed). Sleep was monitored using actigraphy and sleep diaries. Sustained attention was assessed using an online continuous performance test (OCPT), and mood was reported online using the Profile of Mood States (POMS). Actigraphic data revealed good compliance with experimental sleep requirements. Induced night-wakings and sleep restriction both resulted in more OCPT omission and commission errors, and in increased depression, fatigue and confusion levels and reduced vigor compared to the normal sleep night. Moreover, there were no significant differences between the consequences of induced awakenings and sleep restriction. Our pilot study indicates that, similar to sleep restriction, one night of life-like repeated night-wakings negatively affects mood and sustained attention. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. A computer vision approach for solar radiation nowcasting using MSG images

    NASA Astrophysics Data System (ADS)

    Álvarez, L.; Castaño Moraga, C. A.; Martín, J.

    2010-09-01

    Cloud structures and haze are the two main atmospheric phenomena that reduce the performance of solar power plants, since they absorb solar energy before it reaches the terrestrial surface. Accurate forecasting of solar radiation is therefore a challenging research area that involves both precise localization of cloud structures and haze, and estimation of the attenuation these artifacts introduce. Our work presents a novel approach for nowcasting services based on image processing techniques applied to MSG satellite images provided by the EUMETSAT Rapid Scan Service (RSS). These data are an interesting source of information for our purposes, since every 5 minutes we obtain current information on the atmospheric state in nearly real time. However, a further step is needed in order to forecast solar radiation. To that end, we synthesize MSG image forecasts from past images, applying computer vision techniques adapted to fluid flows in order to evolve the atmospheric state. First, we classify cloud structures into two different layers, corresponding to top and bottom clouds, the latter including haze. This two-level classification responds to the dominant climate conditions found in our region of interest, the Canary Islands archipelago, regulated by the Gulf Stream and the Trade Winds. The vertical structure of the Trade Winds consists of two layers: the bottom one, which is fresh and humid, and the top one, which is warm and dry. Between these two layers a thermal inversion appears that prevents bottom clouds from rising and naturally divides the clouds into these two layers. Top clouds can be obtained directly from satellite images by means of a segmentation algorithm on histogram heights. However, bottom clouds are usually overlapped by the former, so an inpainting algorithm is used to recover the overlapped areas of bottom clouds. For each layer, cloud motion is estimated through a correlation-based optic flow algorithm that provides a vector field that describes the displacement field in
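
    A minimal sketch of correlation-based block matching of the kind used for such motion estimation; the block and search-window sizes are illustrative, and real systems typically add normalized correlation and sub-pixel refinement:

        import numpy as np

        def block_flow(prev, curr, block=16, search=8):
            """Per-block displacement field maximizing correlation between frames."""
            h, w = prev.shape
            flow = np.zeros((h // block, w // block, 2), dtype=int)
            for bi in range(h // block):
                for bj in range(w // block):
                    r, c = bi * block, bj * block
                    patch = prev[r:r + block, c:c + block].astype(float)
                    best, best_dv = -np.inf, (0, 0)
                    for dr in range(-search, search + 1):
                        for dc in range(-search, search + 1):
                            rr, cc = r + dr, c + dc
                            if 0 <= rr <= h - block and 0 <= cc <= w - block:
                                cand = curr[rr:rr + block, cc:cc + block].astype(float)
                                score = float((patch * cand).sum())  # raw correlation
                                if score > best:
                                    best, best_dv = score, (dr, dc)
                    flow[bi, bj] = best_dv
            return flow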

  5. Night ventilation control strategies in office buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhaojun; Yi, Lingli; Gao, Fusheng

    2009-10-15

    In moderate climates night ventilation is an effective and energy-efficient approach to improve the indoor thermal environment of office buildings during the summer months, especially for heavyweight construction. However, is night ventilation a suitable strategy for office buildings with lightweight construction located in cold climates? In order to answer this question, the whole-building energy analysis software EnergyPlus was used to simulate the indoor thermal environment and energy consumption in typical office buildings with night mechanical ventilation in three cities in northern China. The summer outdoor climate data were analyzed, and three typical design days were chosen. The most important factors influencing night ventilation performance, such as ventilation rates, ventilation duration, building mass and climatic conditions, were evaluated. When night ventilation operation time is closer to active cooling time, the efficiency of night ventilation is higher. With a night ventilation rate of 10 ach, the mean radiant temperature of the indoor surfaces decreased by up to 3.9 °C. The longer the duration of operation, the more efficient the night ventilation strategy becomes. The control strategies for the three locations are given in the paper. Based on the optimized strategies, the operation consumption and fees are calculated. The results show that more energy is saved in office buildings cooled by a night ventilation system in northern China than in ones that do not employ this strategy. (author)

  6. Computer vision-based analysis of foods: a non-destructive colour measurement tool to monitor quality and safety.

    PubMed

    Mogol, Burçe Ataç; Gökmen, Vural

    2014-05-01

    Computer vision-based image analysis has been widely used in the food industry to monitor food quality. It allows low-cost and non-contact measurements of colour to be performed. In this paper, two computer vision-based image analysis approaches are discussed to extract mean colour or featured colour information from digital images of foods. These types of information may be of particular importance, as colour indicates certain chemical changes or physical properties in foods. As exemplified here, the mean CIE a* value or browning ratio determined by means of computer vision-based image analysis algorithms can be correlated with the acrylamide content of potato chips or cookies, and the porosity index, an important physical property of breadcrumb, can be calculated easily. In this respect, computer vision-based image analysis provides a useful tool for automatic inspection of food products on a manufacturing line, and it can be actively involved in the decision-making process where rapid quality/safety evaluation is needed. © 2013 Society of Chemical Industry.
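
    A minimal sketch of the mean CIE a* computation mentioned above, using scikit-image; segmentation of the food from its background is omitted for brevity:

        from skimage import io
        from skimage.color import rgb2lab

        def mean_a_star(path):
            """Mean CIE a* (green-red axis) over an RGB food image."""
            rgb = io.imread(path)[..., :3] / 255.0  # drop alpha, scale to [0, 1]
            lab = rgb2lab(rgb)                      # channels: L*, a*, b*
            return float(lab[..., 1].mean())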

  7. Frame Rate and Human Vision

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    2012-01-01

    To enhance the quality of the theatre experience, the film industry is interested in achieving higher frame rates for capture and display. In this talk I will describe the basic spatio-temporal sensitivities of human vision, and how they respond to the time sequence of static images that is fundamental to cinematic presentation.

  8. Associations between number of consecutive night shifts and impairment of neurobehavioral performance during a subsequent simulated night shift.

    PubMed

    Magee, Michelle; Sletten, Tracey L; Ferguson, Sally A; Grunstein, Ronald R; Anderson, Clare; Kennaway, David J; Lockley, Steven W; Rajaratnam, Shantha Mw

    2016-05-01

    This study aimed to investigate the roles of sleep and circadian phase in the relationships between neurobehavioral performance and the number of consecutive shifts worked. Thirty-four shift workers [20 men, mean age 31.8 (SD 10.9) years] worked 2-7 consecutive night shifts immediately prior to a laboratory-based, simulated night shift. For 7 days prior, participants worked their usual shift sequence, and sleep was assessed with logs and actigraphy. Participants completed a 10-minute auditory psychomotor vigilance task (PVT) at the start (~21:00 hours) and end (~07:00 hours) of the simulated night shift. Mean reaction times (RT), number of lapses and the RT distribution were compared between those who worked 2-3 consecutive night shifts and those who worked 4-7 shifts. Following 4-7 shifts, night shift workers had significantly longer mean RT at the start and end of shift, compared to those who worked 2-3 shifts. The slowest and fastest 10% RT were significantly slower at the start, but not end, of shift among participants who worked 4-7 nights. Those working 4-7 nights also demonstrated a broader RT distribution at the start and end of shift and had significantly slower RT based on cumulative distribution analysis (5th, 25th, 50th and 75th percentiles at the start of shift; 75th percentile at the end of shift). No group differences in sleep parameters were found for the 7 days and 24 hours prior to the simulated night shift. A greater number of consecutive night shifts has a negative impact on neurobehavioral performance, likely due to cognitive slowing.

  9. Self-powered vision electronic-skin basing on piezo-photodetecting Ppy/PVDF pixel-patterned matrix for mimicking vision.

    PubMed

    Han, Wuxiao; Zhang, Linlin; He, Haoxuan; Liu, Hongmin; Xing, Lili; Xue, Xinyu

    2018-06-22

    The development of multifunctional electronic-skin that establishes human-machine interfaces, enhances perception abilities or has other distinct biomedical applications is key to the realization of artificial intelligence. In this paper, a new self-powered (battery-free) flexible vision electronic-skin has been realized from a pixel-patterned matrix of piezo-photodetecting PVDF/Ppy film. Under applied deformation the electronic-skin can actively output a piezoelectric voltage, and this output signal can be significantly influenced by UV illumination. The piezoelectric output can act as both the photodetecting signal and the power source. The reliability is demonstrated over 200 light on-off cycles. The sensing-unit matrix of 6 × 6 pixels on the electronic-skin can realize image recognition by mapping multi-point UV stimuli. This self-powered vision electronic-skin, which simply mimics the human retina, may have potential application in vision substitution.

  10. Self-powered vision electronic-skin basing on piezo-photodetecting Ppy/PVDF pixel-patterned matrix for mimicking vision

    NASA Astrophysics Data System (ADS)

    Han, Wuxiao; Zhang, Linlin; He, Haoxuan; Liu, Hongmin; Xing, Lili; Xue, Xinyu

    2018-06-01

    The development of multifunctional electronic-skin that establishes human-machine interfaces, enhances perception abilities or has other distinct biomedical applications is key to the realization of artificial intelligence. In this paper, a new self-powered (battery-free) flexible vision electronic-skin has been realized from a pixel-patterned matrix of piezo-photodetecting PVDF/Ppy film. Under applied deformation the electronic-skin can actively output a piezoelectric voltage, and this output signal can be significantly influenced by UV illumination. The piezoelectric output can act as both the photodetecting signal and the power source. The reliability is demonstrated over 200 light on–off cycles. The sensing-unit matrix of 6 × 6 pixels on the electronic-skin can realize image recognition by mapping multi-point UV stimuli. This self-powered vision electronic-skin, which simply mimics the human retina, may have potential application in vision substitution.

  11. Binocular combination in abnormal binocular vision

    PubMed Central

    Ding, Jian; Klein, Stanley A.; Levi, Dennis M.

    2013-01-01

    We investigated suprathreshold binocular combination in humans with abnormal binocular visual experience early in life. In the first experiment we presented the two eyes with equal but opposite phase shifted sine waves and measured the perceived phase of the cyclopean sine wave. Normal observers have balanced vision between the two eyes when the two eyes' images have equal contrast (i.e., both eyes contribute equally to the perceived image and perceived phase = 0°). However, in observers with strabismus and/or amblyopia, balanced vision requires a higher contrast image in the nondominant eye (NDE) than the dominant eye (DE). This asymmetry between the two eyes is larger than predicted from the contrast sensitivities or monocular perceived contrast of the two eyes and is dependent on contrast and spatial frequency: more asymmetric with higher contrast and/or spatial frequency. Our results also revealed a surprising NDE-to-DE enhancement in some of our abnormal observers. This enhancement is not evident in normal vision because it is normally masked by interocular suppression. However, in these abnormal observers the NDE-to-DE suppression was weak or absent. In the second experiment, we used the identical stimuli to measure the perceived contrast of a cyclopean grating by matching the binocular combined contrast to a standard contrast presented to the DE. These measures provide strong constraints for model fitting. We found asymmetric interocular interactions in binocular contrast perception, which was dependent on both contrast and spatial frequency in the same way as in phase perception. By introducing asymmetric parameters to the modified Ding-Sperling model including interocular contrast gain enhancement, we succeeded in accounting for both binocular combined phase and contrast simultaneously. Adding binocular contrast gain control to the modified Ding-Sperling model enabled us to predict the results of dichoptic and binocular contrast discrimination experiments

  12. Binocular combination in abnormal binocular vision.

    PubMed

    Ding, Jian; Klein, Stanley A; Levi, Dennis M

    2013-02-08

    We investigated suprathreshold binocular combination in humans with abnormal binocular visual experience early in life. In the first experiment we presented the two eyes with equal but opposite phase shifted sine waves and measured the perceived phase of the cyclopean sine wave. Normal observers have balanced vision between the two eyes when the two eyes' images have equal contrast (i.e., both eyes contribute equally to the perceived image and perceived phase = 0°). However, in observers with strabismus and/or amblyopia, balanced vision requires a higher contrast image in the nondominant eye (NDE) than the dominant eye (DE). This asymmetry between the two eyes is larger than predicted from the contrast sensitivities or monocular perceived contrast of the two eyes and is dependent on contrast and spatial frequency: more asymmetric with higher contrast and/or spatial frequency. Our results also revealed a surprising NDE-to-DE enhancement in some of our abnormal observers. This enhancement is not evident in normal vision because it is normally masked by interocular suppression. However, in these abnormal observers the NDE-to-DE suppression was weak or absent. In the second experiment, we used the identical stimuli to measure the perceived contrast of a cyclopean grating by matching the binocular combined contrast to a standard contrast presented to the DE. These measures provide strong constraints for model fitting. We found asymmetric interocular interactions in binocular contrast perception, which was dependent on both contrast and spatial frequency in the same way as in phase perception. By introducing asymmetric parameters to the modified Ding-Sperling model including interocular contrast gain enhancement, we succeeded in accounting for both binocular combined phase and contrast simultaneously. Adding binocular contrast gain control to the modified Ding-Sperling model enabled us to predict the results of dichoptic and binocular contrast discrimination experiments
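
    As context for the phase measurements in the two records above: if the two eyes' gratings sin(x + θ) and sin(x − θ) combine linearly with effective weights wL and wR, the cyclopean phase is atan(((wL − wR) / (wL + wR)) · tan θ), which is 0° when the weights are equal (balanced vision). A sketch of that linear-summation baseline; the modified Ding-Sperling model adds interocular gain-control stages that reshape the weights, but the combination rule itself is the same:

        import math

        def cyclopean_phase(w_left, w_right, theta_deg):
            """Perceived phase (deg) of w_L*sin(x + theta) + w_R*sin(x - theta)."""
            theta = math.radians(theta_deg)
            ratio = (w_left - w_right) / (w_left + w_right)
            return math.degrees(math.atan(ratio * math.tan(theta)))

        # Balanced vision: cyclopean_phase(1.0, 1.0, 22.5) -> 0.0
        # A dominant left eye pulls the phase toward its own grating:
        # cyclopean_phase(2.0, 1.0, 22.5) -> ~7.9 degrees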

  13. GMT007_09_33_Terry Virts_India Maldives night zoom chennai colum

    NASA Image and Video Library

    2015-01-06

    ISS042eE01551 (01/06/2015) --- NASA astronaut Terry Virts tweeted this night image showing the twinkling city lights of the coast of India and the Maldives. The Maldives is a tropical nation in the Indian Ocean composed of 26 coral atolls encompassing hundreds of islands. It’s known for its beaches, blue lagoons and extensive reefs. Terry tweeted this comment along with the image: "Moonlit clouds over southeast #India coastline, with Chennai, Bangalore, and Hyderabad."

  14. STS-56 ESC Earth observation of New York City at night

    NASA Technical Reports Server (NTRS)

    1993-01-01

    STS-56 electronic still camera (ESC) Earth observation image shows New York City at night as recorded on the 64th orbit of Discovery, Orbiter Vehicle (OV) 103. The image was recorded with an image intensifier on the Hand-held, Earth-oriented, Real-time, Cooperative, User-friendly, Location-targeting and Environmental System (HERCULES). HERCULES is a device that makes it simple for shuttle crewmembers to take pictures of Earth as they merely point a modified 35mm camera and shoot any interesting feature, whose latitude and longitude are automatically determined in real-time. Center coordinates on this image are 40.665 degrees north latitude and 74.048 degrees west longitude. (1/60 second exposure). Digital file name is ESC04034.IMG.

  15. Database Integrity Monitoring for Synthetic Vision Systems Using Machine Vision and SHADE

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Young, Steven D.

    2005-01-01

    In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside of the aircraft during periods of low-visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically-derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.
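
    One standard way to derive an occlusion/shadow mask from digital elevation data, of the general kind such a comparison requires, is to sweep each range line and flag cells whose elevation angle to the sensor falls below the running maximum. This is a generic illustration, not the SHADE algorithm itself:

        import numpy as np

        def shadow_mask(dem, sensor_height, cell_size):
            """Per-row shadow mask for a sensor located before column 0 of each row.

            A cell is shadowed when the elevation angle from the sensor to the
            cell is lower than the angle to some nearer cell, i.e., nearer
            terrain occludes it.
            """
            rows, cols = dem.shape
            mask = np.zeros((rows, cols), dtype=bool)
            for r in range(rows):
                max_tan = -np.inf
                for c in range(cols):
                    dist = (c + 1) * cell_size  # ground range from the sensor
                    tan_angle = (dem[r, c] - sensor_height) / dist
                    if tan_angle < max_tan:
                        mask[r, c] = True       # occluded by nearer terrain
                    else:
                        max_tan = tan_angle
            return mask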

  16. Using parallel evolutionary development for a biologically-inspired computer vision system for mobile robots.

    PubMed

    Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J

    2005-01-01

    We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.

  17. Channel at Night in Thermal Infrared

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This nighttime thermal infrared image, taken by the thermal emission imaging system on NASA's 2001 Mars Odyssey spacecraft, shows differences in temperature that are due to differences in the abundance of rocks, sand and dust on the surface. Rocks remain warm at night, as seen in the warm (bright) rim of the five kilometer (three mile) diameter crater located on the right of this image.

    The sinuous channel floor is cold, suggesting that it is covered by material that is more finely grained than the surrounding plains. The interior of the crater shows a great deal of thermal structure, indicating that the distribution of rocks, sand and dust varies across the floor.

    The presence of rocks on the rim and inner wall indicates that this crater maintains some of its original character, despite erosion and deposition by Martian winds. Nighttime infrared images such as this one will greatly aid in mapping the physical properties of Mars' surface.

    This image is centered at 2 degrees north, 0.4 degrees west, and was acquired at about 3:15 a.m. local Martian time. North is to the right of the image.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The thermal emission imaging system was provided by Arizona State University, Tempe. Lockheed Martin Astronautics, Denver, is the prime contractor for the project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  18. The Tactile Vision Substitution System: Applications in Education and Employment

    ERIC Educational Resources Information Center

    Scadden, Lawrence A.

    1974-01-01

    The Tactile Vision Substitution System converts the visual image from a narrow-angle television camera to a tactual image on a 5-inch square, 100-point display of vibrators placed against the abdomen of the blind person. (Author)
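
    A minimal sketch of the core mapping such a system performs, reducing a camera frame to a coarse on/off vibrator array. The 10 x 10 grid matches the 100-point display described, but the function and threshold are illustrative, not the original hardware's logic.

      import numpy as np

      def to_tactile(frame, rows=10, cols=10, threshold=0.5):
          """Downsample a grayscale frame to a coarse binary tactor map.

          frame -- 2-D array of intensities in [0, 1]
          Returns a rows x cols boolean array: True = vibrator on.
          """
          h, w = frame.shape
          # Average the intensity over each block of the image, then threshold.
          blocks = frame[:h - h % rows, :w - w % cols]
          blocks = blocks.reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3))
          return blocks > threshold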

  19. Small-scale anomaly detection in panoramic imaging using neural models of low-level vision

    NASA Astrophysics Data System (ADS)

    Casey, Matthew C.; Hickman, Duncan L.; Pavlou, Athanasios; Sadler, James R. E.

    2011-06-01

    Our understanding of sensory processing in animals has reached the stage where we can exploit neurobiological principles in commercial systems. In human vision, one brain structure that offers insight into how we might detect anomalies in real-time imaging is the superior colliculus (SC). The SC is a small structure that rapidly orients our eyes to a movement, sound or touch that it detects, even when the stimulus is small-scale; think of a camouflaged movement or the rustle of leaves. This automatic orientation allows us to prioritize the use of our eyes to raise awareness of a potential threat, such as a predator approaching stealthily. In this paper we describe the application of a neural network model of the SC to the detection of anomalies in panoramic imaging. The neural approach consists of a mosaic of topographic maps that are each trained using competitive Hebbian learning to rapidly detect image features of a pre-defined shape and scale. What makes this approach interesting is the ability of the competition between neurons to automatically filter noise while still generalizing over the desired shape and scale. We present the results of this technique applied to the real-time detection of obscured targets in visible-band panoramic CCTV images. Using background subtraction to highlight potential movement, the technique is able to correctly identify targets as little as 3 pixels wide while filtering small-scale noise.
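
    The paper's SC model is not reproduced in this record; the sketch below shows only generic competitive Hebbian learning of the kind the abstract names, in which winner-take-all units drift toward the inputs they win and so become templates for recurring feature shapes. All parameters are illustrative assumptions.

      import numpy as np

      def train_competitive(patches, n_units=16, lr=0.05, epochs=10, seed=0):
          """Competitive Hebbian learning: each unit becomes a template
          for the image patches it wins.

          patches -- array (n_samples, patch_dim), rows unit-normalised
          Returns the learned weight matrix (n_units, patch_dim).
          """
          rng = np.random.default_rng(seed)
          w = rng.normal(size=(n_units, patches.shape[1]))
          w /= np.linalg.norm(w, axis=1, keepdims=True)
          for _ in range(epochs):
              for x in patches:
                  winner = np.argmax(w @ x)          # strongest-responding unit
                  w[winner] += lr * (x - w[winner])  # Hebbian move toward input
                  w[winner] /= np.linalg.norm(w[winner])
          return w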

  20. Optoelectronic stereoscopic device for diagnostics, treatment, and developing of binocular vision

    NASA Astrophysics Data System (ADS)

    Pautova, Larisa; Elkhov, Victor A.; Ovechkis, Yuri N.

    2003-08-01

    Operation of the device is based on the alternating generation of pictures for the left and right eyes on the monitor screen. A controller sends pulses to the LC glasses so that the shutter for the left or right eye opens synchronously with the corresponding picture. The device provides a switching frequency of more than 100 Hz, so flickering is absent. Thus images are presented separately to the left eye and the right eye in turn without the patient being aware of it, creating conditions of binocular perception close to natural ones without any additional separation of the visual fields. Coordinating the LC-cell transfer characteristic with the time parameters of the monitor screen has made it possible to improve stereo image quality. A complicated problem of computer stereo images with LC glasses is the so-called 'ghosts': noise images that reach the blocked eye. We reduced their influence by adapting the stereo images to the phosphor and LC-cell characteristics. The device is intended for the diagnostics and treatment of strabismus, amblyopia and other binocular and stereoscopic vision impairments; for cultivating, training and developing stereoscopic vision; for measurements of horizontal and vertical phoria, fusion reserves, stereovision acuity and more; and for fixing the borders of central scotoma, as well as suppression scotoma in strabismus.
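
    The record does not specify how the images were adapted to the phosphor and LC-cell characteristics. One standard crosstalk-reduction technique consistent with the description, shown here as an illustrative sketch rather than the authors' method, is to pre-subtract the expected leakage of the opposite eye's image before display; the leak fraction would have to be measured for the actual hardware.

      import numpy as np

      def compensate_crosstalk(left, right, leak=0.08):
          """Pre-subtract the estimated leakage of the opposite eye's image.

          left, right -- float images in [0, 1]
          leak        -- measured crosstalk fraction of the shutter glasses
          Returns the adjusted pair to send to the display.
          """
          left_out = np.clip((left - leak * right) / (1.0 - leak), 0.0, 1.0)
          right_out = np.clip((right - leak * left) / (1.0 - leak), 0.0, 1.0)
          return left_out, right_out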

  1. Artificial light at night advances avian reproductive physiology

    PubMed Central

    Dominoni, Davide; Quetting, Michael; Partecke, Jesko

    2013-01-01

    Artificial light at night is a rapidly increasing phenomenon and it is presumed to have global implications. Light at night has been associated with health problems in humans as a consequence of altered biological rhythms. Effects on wild animals have been less investigated, but light at night has often been assumed to affect seasonal cycles of urban dwellers. Using light loggers attached to free-living European blackbirds (Turdus merula), we first measured light intensity at night which forest and city birds are subjected to in the wild. Then we used these measurements to test for the effect of light at night on timing of reproductive physiology. Captive city and forest blackbirds were exposed to either dark nights or very low light intensities at night (0.3 lux). Birds exposed to light at night developed their reproductive system up to one month earlier, and also moulted earlier, than birds kept under dark nights. Furthermore, city birds responded differently than forest individuals to the light at night treatment, suggesting that urbanization can alter the physiological phenotype of songbirds. Our results emphasize the impact of human-induced lighting on the ecology of millions of animals living in cities and call for an understanding of the fitness consequences of light pollution. PMID:23407836

  2. Night shift work and hormone levels in women.

    PubMed

    Davis, Scott; Mirick, Dana K; Chen, Chu; Stanczyk, Frank Z

    2012-04-01

    Night shift work may disrupt the normal nocturnal rise in melatonin, resulting in increased breast cancer risk, possibly through increased reproductive hormone levels. We investigated whether night shift work is associated with decreased levels of urinary 6-sulfatoxymelatonin, the primary metabolite of melatonin, and increased urinary reproductive hormone levels. Participants were 172 night shift and 151 day shift-working nurses, aged 20-49 years, with regular menstrual cycles. Urine samples were collected throughout work and sleep periods and assayed for 6-sulfatoxymelatonin, luteinizing hormone (LH), follicle-stimulating hormone (FSH), and estrone conjugate (E1C). 6-Sulfatoxymelatonin levels were 62% lower and FSH and LH were 62% and 58% higher, respectively, in night shift-working women during daytime sleep than in day shift-working women during nighttime sleep (P ≤ 0.0001). Nighttime sleep on off-nights was associated with 42% lower 6-sulfatoxymelatonin levels among the night shift workers, relative to the day shift workers (P < 0.0001); no significant differences in LH or FSH were observed. 6-Sulfatoxymelatonin levels during night work were approximately 69% lower and FSH and LH were 35% and 38% higher, compared with day shift workers during nighttime sleep. No differences in E1C levels between night and day shift workers were observed. Within night shift workers, 6-sulfatoxymelatonin levels were lower and reproductive hormone levels were higher during daytime sleep and nighttime work, relative to nighttime sleep (P < 0.05). These results indicate that night shift workers have substantially reduced 6-sulfatoxymelatonin levels during night work and daytime sleep and that levels remain low even when a night shift worker sleeps at night. Shift work could be an important risk factor for many other cancers in addition to breast cancer. ©2012 AACR.

  3. Video image processing

    NASA Technical Reports Server (NTRS)

    Murray, N. D.

    1985-01-01

    Current technology projections indicate a lack of availability of special purpose computing for Space Station applications. Potential functions for video image special purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Architectural approaches are also being identified and a conceptual design generated. Computationally simple algorithms will be researched and their image/vision effectiveness determined. Suitable algorithms will be implemented in an overall architecture that provides image/vision processing at video rates in a flexible, selectable, and programmable manner. Information is given in the form of charts, diagrams and outlines.
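
    As a concrete instance of one of the "computationally simple algorithms" the record lists, here is a minimal smoothing example; the kernel size and library choice are illustrative, not the Space Station design.

      import numpy as np
      from scipy.ndimage import convolve

      def box_smooth(frame):
          """3x3 box filter: the simplest of the smoothing operations listed."""
          kernel = np.full((3, 3), 1.0 / 9.0)
          return convolve(frame, kernel, mode="nearest")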

  4. Neuromorphic vision sensors and preprocessors in system applications

    NASA Astrophysics Data System (ADS)

    Kramer, Joerg; Indiveri, Giacomo

    1998-09-01

    A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high- dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.

  5. Security surveillance challenges and proven thermal imaging capabilities in real-world applications

    NASA Astrophysics Data System (ADS)

    Francisco, Glen L.; Roberts, Sharon

    2004-09-01

    Uncooled thermal imaging was first introduced to the public in the early 1980s by Raytheon (legacy Texas Instruments Defense Segment Electronics Group) as a solution for military applications. Since the introduction of this technology, Raytheon has remained the leader in this market and has introduced commercial versions of thermal imaging products specifically designed for security, law enforcement, fire fighting, automotive and industrial uses. Today, low cost thermal imaging for commercial use in security applications is a reality. Organizations of all types have begun to understand the advantages of using thermal imaging as a means to solve common surveillance problems where other popular technologies fall short. Thermal imaging has proven to be a successful solution for common security needs such as: vision at night where lighting is undesired and 24x7 surveillance is needed; surveillance over waterways, lakes and ports where water and lighting options are impractical; surveillance through challenging weather conditions where other technologies are defeated by atmospheric particulates; low maintenance requirements due to remote or difficult locations; and low cost over the life of the product. Thermal imaging is now a common addition to the integrated security package. Companies are relying on thermal imaging for specific applications where no other technology can perform.

  6. Hybrid vision activities at NASA Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1990-01-01

    NASA's Johnson Space Center in Houston, Texas, is active in several aspects of hybrid image processing. (The term hybrid image processing refers to a system that combines digital and photonic processing). The major thrusts are autonomous space operations such as planetary landing, servicing, and rendezvous and docking. By processing images in non-Cartesian geometries to achieve shift invariance to canonical distortions, researchers use certain aspects of the human visual system for machine vision. That technology flow is bidirectional; researchers are investigating the possible utility of video-rate coordinate transformations for human low-vision patients. Man-in-the-loop teleoperations are also supported by the use of video-rate image-coordinate transformations, as researchers plan to use bandwidth compression tailored to the varying spatial acuity of the human operator. Technological elements being developed in the program include upgraded spatial light modulators, real-time coordinate transformations in video imagery, synthetic filters that robustly allow estimation of object pose parameters, convolutionally blurred filters that have continuously selectable invariance to such image changes as magnification and rotation, and optimization of optical correlation done with spatial light modulators that have limited range and couple both phase and amplitude in their response.
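
    One well-known example of the "non-Cartesian geometries" the record refers to is the log-polar mapping, under which scaling and rotation about the fixation point become simple shifts. The sketch below is an illustrative implementation under that assumption (nearest-neighbour sampling, names invented here), not JSC's actual processing chain.

      import numpy as np

      def log_polar(image, n_rho=64, n_theta=64):
          """Resample an image onto a log-polar grid centred on the image centre.

          Scaling the input about the centre becomes a shift along the rho axis;
          rotation becomes a shift along the theta axis.
          """
          h, w = image.shape
          cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
          r_max = min(cy, cx)
          rho = np.exp(np.linspace(0.0, np.log(r_max), n_rho))
          theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
          ys = cy + rho[:, None] * np.sin(theta)[None, :]
          xs = cx + rho[:, None] * np.cos(theta)[None, :]
          # Nearest-neighbour sampling keeps the sketch dependency-free.
          return image[np.clip(ys.round().astype(int), 0, h - 1),
                       np.clip(xs.round().astype(int), 0, w - 1)]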

  7. A Target-Less Vision-Based Displacement Sensor Based on Image Convex Hull Optimization for Measuring the Dynamic Response of Building Structures.

    PubMed

    Choi, Insub; Kim, JunHee; Kim, Donghyun

    2016-12-08

    Existing vision-based displacement sensors (VDSs) extract displacement data from the movement of a target that is identified within the image using natural or artificial structure markers. A target-less vision-based displacement sensor (hereafter called "TVDS") is proposed. It can extract displacement data without targets, using feature points derived from the image of the structure itself. The TVDS extracts and tracks these feature points through image convex hull optimization, which adjusts the threshold values so that every image frame yields the same convex hull; the center of the convex hull serves as the feature point. In addition, the pixel coordinates of the feature point can be converted to physical coordinates through a scaling factor map calculated from the distance, angle, and focal length between the camera and target. The accuracy of the proposed scaling factor map was verified through an experiment in which the diameter of a circular marker was estimated. A white-noise excitation test was conducted, and the reliability of the displacement data obtained from the TVDS was analyzed by comparison with displacement data of the structure measured with a laser displacement sensor (LDS). The dynamic characteristics of the structure, such as the mode shape and natural frequency, were extracted from the obtained displacement data and compared with numerical analysis results. The TVDS yielded highly reliable displacement data and highly accurate dynamic characteristics, such as the natural frequency and mode shape of the structure. As the proposed TVDS can extract displacement data even without artificial or natural markers, it has the advantage of extracting displacement data from any portion of the structure in the image.
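
    A hedged sketch of the hull-centre idea as described: threshold the frame, take the convex hull of the bright pixels, use the hull's centroid as the feature point, and convert pixel motion to physical units with a scaling factor. A single constant factor is assumed here for simplicity; the paper builds a full map from distance, angle and focal length. All names are illustrative.

      import numpy as np
      from scipy.spatial import ConvexHull

      def feature_point(frame, threshold):
          """Centre of the convex hull of above-threshold pixels."""
          ys, xs = np.nonzero(frame > threshold)
          pts = np.column_stack([xs, ys]).astype(float)
          hull = ConvexHull(pts)
          v = pts[hull.vertices]          # hull vertices in order
          # Area-weighted centroid of the hull polygon (shoelace formula).
          x, y = v[:, 0], v[:, 1]
          cross = x * np.roll(y, -1) - np.roll(x, -1) * y
          area = cross.sum() / 2.0
          cx = ((x + np.roll(x, -1)) * cross).sum() / (6.0 * area)
          cy = ((y + np.roll(y, -1)) * cross).sum() / (6.0 * area)
          return cx, cy

      def to_physical(dx_pixels, scale_mm_per_px):
          """Convert a pixel displacement to physical units (single-factor case)."""
          return dx_pixels * scale_mm_per_px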

  8. An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)

    DTIC Science & Technology

    2010-03-01

    technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process...Wiimotes) used in Nintendo Wii games. Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D

  9. Swap intensified WDR CMOS module for I2/LWIR fusion

    NASA Astrophysics Data System (ADS)

    Ni, Yang; Noguier, Vincent

    2015-05-01

    The combination of a high-resolution visible/near-infrared low-light sensor and a moderate-resolution uncooled thermal sensor provides an efficient way to perform multi-task night vision. Tremendous progress has been made on uncooled thermal sensors (a-Si, VOx, etc.): it is now possible to make a miniature uncooled thermal camera module in a tiny 1 cm3 cube with <1 W power consumption. Silicon-based solid-state low-light CCD/CMOS sensors have also seen constant progress in readout noise, dark current, resolution and frame rate. In contrast to thermal sensing, which is intrinsically day-and-night operational, silicon-based solid-state sensors are not yet capable of the night vision performance required by defense and critical surveillance applications. Readout noise and dark current are the two major obstacles. The low dynamic range of silicon sensors in high-sensitivity mode is also an important limiting factor, leading to recognition failure due to local or global saturation and blooming. In this context, the image intensifier based solution is still attractive for the following reasons: 1) high gain and ultra-low dark current; 2) wide dynamic range; and 3) ultra-low power consumption. With the high electron gain and ultra-low dark current of an image intensifier, the only requirements on the silicon image pickup device are resolution, dynamic range and power consumption. In this paper, we present a SWAP intensified wide-dynamic-range CMOS module for night vision applications, especially for I2/LWIR fusion. This module is based on a dedicated CMOS image sensor using a solar-cell-mode photodiode logarithmic pixel design which covers a huge dynamic range (>140 dB) without saturation or blooming. The ultra-wide dynamic range image from this new generation of logarithmic sensor can be used directly without any image processing and provides instant light accommodation. The complete module is slightly bigger than a simple ANVIS-format I2 tube with <500 mW power consumption.
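
    As a quick sanity check on what the quoted >140 dB figure means, using the 20 log10 convention usual for image sensor dynamic range (the record does not state which convention the authors use, so this is an assumption):

      import math

      # Dynamic range in dB for an intensity ratio: DR = 20 * log10(Imax / Imin).
      ratio = 10 ** (140 / 20.0)
      print(f"140 dB corresponds to a {ratio:.0e}:1 illumination ratio")
      # -> 1e+07:1, i.e. the same pixel response law must span seven
      #    decades of illumination without saturating.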

  10. Frequency of College Students' Night-Sky Watching Behaviors

    ERIC Educational Resources Information Center

    Kelly, William E.; Kelly, Kathryn E.; Batey, Jason

    2006-01-01

    College students (N = 112) completed the Noctcaelador Inventory, a measure of psychological attachment to the night-sky, and estimated various night-sky watching related activities: frequency and duration of night-sky watching, astro-tourism, ownership of night-sky viewing equipment, and attendance of observatories or planetariums. The results…

  11. CAD-model-based vision for space applications

    NASA Technical Reports Server (NTRS)

    Shapiro, Linda G.

    1988-01-01

    A pose acquisition system operating in space must be able to perform well in a variety of different applications, including automated guidance and inspection tasks with many different, but known, objects. Since the space station is being designed with automation in mind, there will be CAD models of all the objects, including the station itself. The construction of vision models and procedures directly from the CAD models is the goal of this project. The system being designed and implemented must convert CAD models to vision models, predict visible features from a given viewpoint using the vision models, construct view classes representing views of the objects, and use the view class model thus derived to rapidly determine the pose of the object from single images and/or stereo pairs.

  12. Comparative Visual Performance with ANVIS (Aviator’s Night Vision Imaging System) and AN/PVS-5A Night Vision Goggles under Starlight Conditions

    DTIC Science & Technology

    1984-08-01

    Clinical data are shown in Table 2. The subjects ranged in age from 24 to 61; there were 7 males and 3 females; 6 of the subjects wore a visual

  13. [Comparison study between biological vision and computer vision].

    PubMed

    Liu, W; Yuan, X G; Yang, C X; Liu, Z Q; Wang, R

    2001-08-01

    The development and significance of biological vision, in terms of both structure and mechanism, are discussed, covering the anatomical structure of biological vision, a tentative classification of receptive fields, parallel processing of visual information, and feedback and integration effects in the visual cortex. New advances in the field arising from studies of the morphology of biological vision are introduced. In addition, a comparison between biological vision and computer vision is made, and their similarities and differences are pointed out.

  14. Contrasting trends in light pollution across Europe based on satellite observed night time lights.

    PubMed

    Bennie, Jonathan; Davies, Thomas W; Duffy, James P; Inger, Richard; Gaston, Kevin J

    2014-01-21

    Since the 1970s nighttime satellite images of the Earth from space have provided a striking illustration of the extent of artificial light. Meanwhile, growing awareness of adverse impacts of artificial light at night on scientific astronomy, human health, ecological processes and aesthetic enjoyment of the night sky has led to recognition of light pollution as a significant global environmental issue. Links between economic activity, population growth and artificial light are well documented in rapidly developing regions. Applying a novel method to analysis of satellite images of European nighttime lights over 15 years, we show that while the continental trend is towards increasing brightness, some economically developed regions show more complex patterns with large areas decreasing in observed brightness over this period. This highlights that opportunities exist to constrain and even reduce the environmental impact of artificial light pollution while delivering cost and energy-saving benefits.

  15. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras

    PubMed Central

    Spinosa, Emanuele; Roberts, David A.

    2017-01-01

    Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show the relevant imaging characteristics and the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access. PMID:28757553

  16. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras.

    PubMed

    Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A

    2017-07-25

    Measurements of pressure-sensitive paint (PSP) have been performed using new or non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show the relevant imaging characteristics and the applicability of such imaging technology for PSP. Details of camera performance are benchmarked and compared to standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.

  17. International Observe the Moon Night

    NASA Image and Video Library

    2010-09-19

    Double beams shoot into the night sky during the International Observe the Moon Night event. Goddard's Laser Ranging Facility directs a laser toward the Lunar Reconnaissance Orbiter on International Observe the Moon Night. (Sept 18, 2010) Background on laser ranging: www.nasa.gov/mission_pages/LRO/news/LRO_lr.html Credit: NASA/GSFC/Debbie Mccallum On September 18, 2010 the world joined the NASA Goddard Space Flight Center's Visitor Center in Greenbelt, Md., as well as other NASA Centers to celebrate the first annual International Observe the Moon Night (InOMN). To read more go to: www.nasa.gov/centers/goddard/news/features/2010/moon-nigh... NASA Goddard Space Flight Center contributes to NASA's mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA's endeavors by providing compelling scientific knowledge to advance the Agency's mission.

  18. Conquest of darkness by management of the stars

    NASA Astrophysics Data System (ADS)

    Wiseman, Robert S.

    This text was presented as the Thomas B. Dowd Memorial Lecture at the 1991 National Infrared Information Symposium (IRIS). The history of Army night vision from World War II to 1972 shows how the right organization with talented people and proper support can succeed. This presentation illustrates not only the growth of image intensifier technology and families of equipment, but also the key events and stars that made it all happen. Described are the management techniques used and how to organize for effective research, development, engineering, and production programs; the evolution of the Far Infrared Common Module program; and what made the Night Vision Laboratory unique.

  19. Nurses' assessments and patients' perceptions: development of the Night Nursing Care Instrument (NNCI), measuring nursing care at night.

    PubMed

    Johansson, Peter; Oléni, Magnus; Fridlund, Bengt

    2005-07-01

    Nursing care provided at night has a different purpose and objective to that provided during the day. A review of the literature does not reveal any scientifically tested research instruments for evaluating and comparing the nurse's assessment of nursing care with the patient's perception at night. The aim of this study was to develop and test an instrument for evaluating nursing care and to compare nurses' assessments with patients' perceptions of nursing care provided at night. The study was carried out in two phases; the first had an explorative design and the second an evaluative and comparative design. The Night Nursing Care Instrument (NNCI) included two questionnaires; one for nurses and one for patients. These questionnaires were developed from a nursing framework and covered the following three areas: 'nursing interventions', 'medical interventions' and 'evaluation'. Nurses (n = 40) on night duty on a medical ward at a central hospital in southern Sweden were consecutively selected, to participate in the study. The patients (n = 80) were selected by means of convenience sampling. In order to achieve construct validity, factor analysis of each individual area was carried out. Reliability in terms of internal consistency was tested by Cronbach's alpha. The overall NNCI had acceptable reliability and validity. There was no statistically significant difference between nurses' assessments and patients' perceptions in any of the three areas of 'nursing interventions', 'medical interventions' or 'evaluation'. The patients rated night nursing care as satisfactory for the majority of the items. These findings demonstrate that it is possible to create a short instrument with acceptable reliability and validity, which is easy to use in clinical practice. The results also show that night nurses need to improve their ability to assess patients' needs during the night to increase the quality of night nursing care.

  20. Ideas for Teaching Vision and Visioning

    ERIC Educational Resources Information Center

    Quijada, Maria Alejandra

    2017-01-01

    In teaching leadership, a key element to include should be a discussion about vision: what it is, how to communicate it, and how to ensure that it is effective and shared. This article describes a series of exercises that rely on videos to illustrate different aspects of vision and visioning, both in the positive and in the negative. The article…

  1. The loss and recovery of vertebrate vision examined in microplates.

    PubMed

    Thorn, Robert J; Clift, Danielle E; Ojo, Oladele; Colwill, Ruth M; Creton, Robbert

    2017-01-01

    Regenerative medicine offers potentially ground-breaking treatments of blindness and low vision. However, as new methodologies are developed, a critical question will need to be addressed: how do we monitor in vivo for functional success? In the present study, we developed novel behavioral assays to examine vision in a vertebrate model system. In the assays, zebrafish larvae are imaged in multiwell or multilane plates while various red, green, blue, yellow or cyan objects are presented to the larvae on a computer screen. The assays were used to examine a loss of vision at 4 or 5 days post-fertilization and a gradual recovery of vision in subsequent days. The developed assays are the first to measure the loss and recovery of vertebrate vision in microplates and provide an efficient platform to evaluate novel treatments of visual impairment.

  2. The "Night Owl" Learning Style of Art Students: Creativity and Daily Rhythm

    ERIC Educational Resources Information Center

    Wang, Sy-Chyi; Chern, Jin-Yuan

    2008-01-01

    This article explores the deep-rooted "night owl" image of art practitioners and calls for attention on a consideration of the time for learning in art. It has been recognised that the human body has its own internal timings and knowing the "time" pattern is important for better productivity in conducting creativity-related activities. This study…

  3. Willingness to Pay for a Clear Night Sky: Use of the Contingent Valuation Method

    NASA Astrophysics Data System (ADS)

    Simpson, Stephanie; Winebrake, J.; Noel-Storr, J.

    2006-12-01

    A clear night sky is a public good, and as a public good, government intervention to regulate it is feasible and necessary. Light pollution decreases the ability to view the unobstructed night sky, and can have biological, human health, energy related, and scientific consequences. In order for governments to intervene more effectively with light pollution controls (costs), the benefits of light pollution reduction also need to be determined. This project uses the contingent valuation method to place an economic value on one of the benefits of light pollution reduction: aesthetics. Using a willingness to pay approach, this study monetizes the value of a clear night sky for students at RIT. Images representing various levels of light pollution were presented to this population as part of a survey. The results of this study may aid local, state, and federal policy makers in making informed decisions regarding light pollution.

  4. Optical correction and quality of vision of the French soldiers stationed in the Republic of Djibouti in 2009.

    PubMed

    Vignal, Rodolphe; Ollivier, Lénaïck

    2011-03-01

    To ensure vision readiness on the battlefield, the French military has been providing its soldiers with eyewear since World War I. A military refractive surgery program was initiated in 2008. A prospective questionnaire-based investigation on optical correction and quality of vision among active duty members with visual deficiencies stationed in Djibouti, Africa, was conducted in 2009. It revealed that 59.3% of the soldiers were wearing spectacles, 21.2% were wearing contact lenses--despite official recommendations--and 8.5% had undergone refractive surgery. Satisfaction rates were high with refractive surgery and contact lenses; 33.6% of eyeglass wearers were planning to have surgery. Eye dryness and night vision disturbances were the most reported symptoms following surgery. Military optical devices were under-prescribed before deployment. This suggests that additional and more effective studies on the use of military optical devices should be performed and policy supporting refractive surgery in military populations should be strengthened.

  5. [Oguchi disease or stationary congenital night blindness: a case report].

    PubMed

    Boissonnot, M; Robert, M F; Gilbert-Dussardier, B; Dighiero, P

    2007-01-01

    Oguchi disease, originally described in Japanese people, is a rare form of stationary night blindness in patients with normal acuity. We report the case of an 8-year-old girl who presented with abnormally terrified behavior in the dark. Thorough questioning revealed night blindness. Her clinical examination (visual acuity, Goldmann visual field, and color vision) was normal. The fundus examination showed a golden-brown, grayish, almost greenish-yellow discoloration in the peripheral area, with no bone-spicule pigmentation. This abnormality disappeared after prolonged dark adaptation. The electroretinogram showed a reduced b-wave amplitude under scotopic conditions. Her parents were cousins. This diagnosis should be suggested when night blindness is associated with a typical fundus aspect resolving after dark adaptation (the so-called Mizuo-Nakamura phenomenon). The long-term prognosis in these patients is good in the absence of clinical progression. This is an autosomal recessive genetic disease caused by mutations in the gene coding for arrestin, located at 2q37.1.

  6. Machine vision based quality inspection of flat glass products

    NASA Astrophysics Data System (ADS)

    Zauner, G.; Schagerl, M.

    2014-03-01

    This application paper presents a machine vision solution for the quality inspection of flat glass products. A contact image sensor (CIS) is used to generate digital images of the glass surfaces. The presented machine vision based quality inspection at the end of the production line aims to classify five different glass defect types. The defect images are usually characterized by very little 'image structure', i.e. homogeneous regions without distinct image texture. Additionally, these defect images usually consist of only a few pixels. At the same time, the appearance of certain defect classes can be very diverse (e.g. water drops). We used simple state-of-the-art image features like histogram-based features (std. deviation, kurtosis, skewness), geometric features (form factor/elongation, eccentricity, Hu-moments) and texture features (grey level run length matrix, co-occurrence matrix) to extract defect information. The main contribution of this work lies in the systematic evaluation of various machine learning algorithms to identify appropriate classification approaches for this specific class of images. The following machine learning algorithms were compared: decision tree (J48), random forest, JRip rules, naive Bayes, Support Vector Machine (multi class), neural network (multilayer perceptron) and k-Nearest Neighbour. We used a representative image database of 2300 defect images and applied cross validation for evaluation purposes.
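
    A minimal sketch of the evaluation methodology described: comparing several of the named classifiers by cross-validation. The feature matrix here is random placeholder data standing in for the histogram, geometric and texture features; JRip has no direct scikit-learn equivalent, so it is omitted.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.naive_bayes import GaussianNB
      from sklearn.svm import SVC
      from sklearn.neural_network import MLPClassifier
      from sklearn.neighbors import KNeighborsClassifier

      # X: one row of features per defect image; y: one of five defect classes.
      rng = np.random.default_rng(0)
      X, y = rng.normal(size=(200, 12)), rng.integers(0, 5, size=200)

      models = {
          "decision tree": DecisionTreeClassifier(),
          "random forest": RandomForestClassifier(),
          "naive Bayes": GaussianNB(),
          "SVM": SVC(),
          "MLP": MLPClassifier(max_iter=2000),
          "k-NN": KNeighborsClassifier(),
      }
      for name, model in models.items():
          scores = cross_val_score(model, X, y, cv=5)
          print(f"{name:14s} accuracy = {scores.mean():.2f}")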

  7. Wide-angle vision for road views

    NASA Astrophysics Data System (ADS)

    Huang, F.; Fehrs, K.-K.; Hartmann, G.; Klette, R.

    2013-03-01

    The field-of-view of a wide-angle image is greater than (say) 90 degrees, and so contains more information than available in a standard image. A wide field-of-view is more advantageous than standard input for understanding the geometry of 3D scenes, and for estimating the poses of panoramic sensors within such scenes. Thus, wide-angle imaging sensors and methodologies are commonly used in various road-safety, street surveillance, street virtual touring, or street 3D modelling applications. The paper reviews related wide-angle vision technologies by focusing on mathematical issues rather than on hardware.

  8. Night airglow in RGB mode

    NASA Astrophysics Data System (ADS)

    Mikhalev, Aleksandr; Podlesny, Stepan; Stoeva, Penka

    2016-09-01

    To study the dynamics of the upper atmosphere, we consider results of night-sky photometry using a color CCD camera, taking into account the night airglow and the features of its spectral composition. We use night airglow observations for 2010-2015, obtained at the ISTP SB RAS Geophysical Observatory (52° N, 103° E) with a camera based on the KODAK KAI-11002 CCD sensor. We estimate the average brightness of the night sky in the R, G, B channels of the color camera for eastern Siberia, with typical values ranging from ~0.008 to 0.01 erg cm^-2 s^-1. In addition, we determine seasonal variations in the night-sky luminosity in the R, G, B channels of the color camera. In these channels, luminosities decrease in spring, increase in autumn, and have a pronounced summer maximum, which can be explained by scattered light and is associated with the location of the Geophysical Observatory. We consider geophysical phenomena and their optical effects in the R, G, B channels of the color camera. For some geophysical phenomena (geomagnetic storms, sudden stratospheric warmings), we demonstrate a quantitative relationship between enhanced signals in the R and G channels and increases in the intensities of the discrete 557.7 and 630 nm emissions, which are predominant in the airglow spectrum.

  9. A fuzzy structural matching scheme for space robotics vision

    NASA Technical Reports Server (NTRS)

    Naka, Masao; Yamamoto, Hiromichi; Homma, Khozo; Iwata, Yoshitaka

    1994-01-01

    In this paper, we propose a new fuzzy structural matching scheme for space stereo vision which is based on the fuzzy properties of image regions and effectively reduces the computational burden of the subsequent low-level matching process. Three-dimensional distance images of a space truss structural model are estimated using this scheme from stereo images sensed by Charge Coupled Device (CCD) TV cameras.
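
    The record gives no formulas, but the payoff of stereo matching is the standard depth-from-disparity relation for a rectified pair, Z = fB/d. A small worked example with assumed camera parameters (focal length in pixels and baseline invented for illustration):

      # Depth from disparity for a rectified stereo pair: Z = f * B / d,
      # with focal length f in pixels, baseline B in metres, disparity d in pixels.
      f_px, baseline_m = 800.0, 0.12
      for d in (4.0, 8.0, 16.0):
          print(f"disparity {d:5.1f} px -> depth {f_px * baseline_m / d:.2f} m")
      # Halving the disparity doubles the estimated distance, which is why a
      # coarse region-level (structural) match first helps bound the pixel search.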

  10. What's Up? Use the night sky to engage the public through amateur astronomy in IYA; What's Up monthly astronomy themed podcasts; Annual Saturn Observation Night worldwide celebration of Saturn Opposition

    NASA Astrophysics Data System (ADS)

    Houston Jones, Jane

    2008-09-01

    Abstract: What's Up video podcasts connect "astronomy for everyone" monthly astronomical views with related NASA missions, science, images and hands-on education. Background: What's Up podcasts are 2-minute video podcasts available through RSS feed, YouTube, and NASA websites every month. They feature an astronomy-related viewing target in the sky each month, targets visible to everyone, from city or country, just by looking up! No telescope is required to view these objects. Summary: Expand and broaden the scope of the existing "What's Up" public astronomy themed video podcasts. NASA builds partnerships and linkages between Science, Technology, Engineering and Mathematics formal and informal education providers. What's Up podcasts provide a link between astronomical views and events, or "what's up in the night sky this month," and current NASA missions, mission milestones and events, space telescope images and press releases. These podcasts, plus supporting star charts, hands-on activities, standards-based educational lessons and mission links, will be used by museums, planetariums, astronomy clubs, civic and youth groups, as well as by classrooms and the general public. They can be translated into other languages, too. Providing the podcasts in high definition through the NASA websites, YouTube, iTunes and other web video sharing sites reaches wide audiences of all ages. Third Saturn Observation Night - May 18, 2008: Centered on Saturn opposition, when the Sun and Saturn are on opposite sides of the Earth, all IYA participants - in all countries around the world - will be encouraged to take their telescopes out and share the planet Saturn with their communities. NASA's International Saturn Observation Campaign network of astronomy enthusiasts has conducted a Saturn Observation Night event for the past 2 years, and it succeeds by building an international community all sharing Saturn. This celebration has been successfully conducted in hundreds of locations.

  11. Learning prosthetic vision: a virtual-reality study.

    PubMed

    Chen, Spencer C; Hallum, Luke E; Lovell, Nigel H; Suaning, Gregg J

    2005-09-01

    Acceptance of prosthetic vision will be heavily dependent on the ability of recipients to form useful information from such vision. Training strategies to accelerate learning and maximize visual comprehension would need to be designed in the light of the factors affecting human learning under prosthetic vision. Some of these potential factors were examined in a visual acuity study using the Landolt C optotype under virtual-reality simulation of prosthetic vision. Fifteen normally sighted subjects were tested for 10-20 sessions. Potential learning factors were tested at p < 0.05 with regression models. Learning was most evident across-sessions, though 17% of sessions did express significant within-session trends. Learning was highly concentrated toward a critical range of optotype sizes, and subjects were less capable in identifying the closed optotype (a Landolt C with no gap, forming a closed annulus). Training for implant recipients should target these critical sizes and the closed optotype to extend the limit of visual comprehension. Although there was no evidence that image processing affected overall learning, subjects showed varying personal preferences.

  12. AstroCV: Astronomy computer vision library

    NASA Astrophysics Data System (ADS)

    González, Roberto E.; Muñoz, Roberto P.; Hernández, Cristian A.

    2018-04-01

    AstroCV processes and analyzes big astronomical datasets, and is intended to provide a community repository of high performance Python and C++ algorithms used for image processing and computer vision. The library offers methods for object recognition, segmentation and classification, with emphasis in the automatic detection and classification of galaxies.

  13. Evaluation of status of calcium, magnesium, potassium, and sodium levels in biological samples in children of different age groups with normal vision and night blindness.

    PubMed

    Afridi, Hassan Imran; Kazi, Tasneem Gul; Kazi, Naveed; Kandhro, Ghulam Abbas; Baig, Jameel Ahmed; Shah, Abdul Qadir; Khan, Sumaira; Kolachi, Nida Fatima; Wadhwa, Sham Kumar; Shah, Faheem

    2011-01-01

    The most common cause of blindness in developing countries is vitamin A deficiency. The World Health Organization (WHO) estimates that 13.8 million children have some degree of visual loss related to vitamin A deficiency. The causes of night blindness in children are multifactorial, and particular consideration has been given to childhood nutritional deficiency, the most common problem found in underdeveloped countries. Such deficiency can result in physiological and pathological processes that in turn influence the composition of biological samples. Vitamin and mineral deficiency prevents more than two billion people from achieving their full intellectual and physical potential. This study was designed to compare the levels of magnesium (Mg), calcium (Ca), potassium (K), and sodium (Na) in scalp hair, serum, blood, and urine of children with night blindness in two age groups (1-5 and 6-10 years) of both genders, comparing them to sex- and age-matched controls. A microwave-assisted wet acid digestion procedure was developed as a sample pretreatment for the determination of Mg, Ca, K, and Na in biological samples of children with night blindness. The proposed method was validated by using conventional wet digestion and certified reference samples of hair, serum, blood, and urine. The digests of all biological samples were analysed for Mg, Ca, K, and Na by flame atomic absorption spectrometry (FAAS) using an air/acetylene flame. The results indicated significantly lower levels of Mg, Ca, and K in the biological samples (blood, serum, and scalp hair) of male and female children with night blindness and higher values of Na compared with control subjects of both genders. These data provide guidance to clinicians and other professionals investigating deficiency of essential mineral elements in biological samples (scalp hair, serum, and blood) of children with night blindness.

  14. Pixel-wise deblurring imaging system based on active vision for structural health monitoring at a speed of 100 km/h

    NASA Astrophysics Data System (ADS)

    Hayakawa, Tomohiko; Moko, Yushi; Morishita, Kenta; Ishikawa, Masatoshi

    2018-04-01

    In this paper, we propose a pixel-wise deblurring imaging (PDI) system based on active vision for compensation of the blur caused by high-speed one-dimensional motion between a camera and a target. The optical axis is controlled by back-and-forth motion of a galvanometer mirror to compensate for the motion. The high-spatial-resolution images captured by our system during high-speed motion are useful for efficient and precise visual inspection, such as visually judging abnormal parts of a tunnel surface to prevent accidents; hence, we applied the PDI system to structural health monitoring. By mounting the system onto a vehicle in a tunnel, we confirmed significant improvement in image quality for submillimeter black-and-white stripes and real tunnel-surface cracks at a speed of 100 km/h.
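
    The record does not give the control law. To first order, though, freezing a surface at distance d while moving at speed v requires the line of sight to counter-rotate at about v/d radians per second during each exposure, with the mirror itself turning at half that rate since optical deflection is twice the mirror angle. A worked example with an assumed camera-to-wall distance:

      import math

      v = 100 * 1000 / 3600.0   # 100 km/h in m/s
      d = 5.0                   # assumed camera-to-surface distance, m
      omega = v / d             # required line-of-sight rate, rad/s
      print(f"line of sight: {omega:.2f} rad/s = {math.degrees(omega):.0f} deg/s")
      print(f"mirror rate:   {omega / 2:.2f} rad/s (optical angle = 2 x mirror angle)")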

  15. Benchmarking neuromorphic vision: lessons learnt from computer vision

    PubMed Central

    Tan, Cheston; Lallee, Stephane; Orchard, Garrick

    2015-01-01

    Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision. PMID:26528120

  16. Job-shop scheduling applied to computer vision

    NASA Astrophysics Data System (ADS)

    Sebastian y Zuniga, Jose M.; Torres-Medina, Fernando; Aracil, Rafael; Reinoso, Oscar; Jimenez, Luis M.; Garcia, David

    1997-09-01

    This paper presents a method for minimizing the total elapsed time spent by n tasks running on m different processors working in parallel. The developed algorithm not only minimizes the total elapsed time but also reduces the idle time and the waiting time of in-process tasks. This condition is very important in applications of computer vision in which the time to finish the total process is particularly critical: quality control in industrial inspection, real-time computer vision, and guided robots. The scheduling algorithm is based on the use of two matrices obtained from the precedence relationships between tasks, and on the data derived from these matrices. The developed scheduling algorithm has been tested in a quality control application using computer vision. The results obtained with different image processing algorithms have been satisfactory.
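
    The paper's two-matrix algorithm is not reproduced in this record; the sketch below shows only the baseline such methods improve on, greedy list scheduling of precedence-constrained tasks on m identical processors, with an invented image-processing pipeline as the example workload.

      import heapq

      def makespan(durations, preds, m):
          """Greedy list scheduling: an idle processor takes any task whose
          predecessors have all finished.

          durations -- {task: processing time}
          preds     -- {task: set of prerequisite tasks}
          m         -- number of processors
          """
          done, started = set(), set()
          running = []              # heap of (finish_time, task)
          free_procs = m
          now = 0.0
          while len(done) < len(durations):
              ready = [t for t in durations
                       if t not in started and preds.get(t, set()) <= done]
              while free_procs and ready:
                  t = ready.pop(0)
                  started.add(t)
                  heapq.heappush(running, (now + durations[t], t))
                  free_procs -= 1
              # Advance the clock to the next task completion.
              now, finished = heapq.heappop(running)
              done.add(finished)
              free_procs += 1
          return now

      tasks = {"grab": 2, "filter": 3, "segment": 4, "measure": 2, "report": 1}
      order = {"filter": {"grab"}, "segment": {"grab"},
               "measure": {"filter", "segment"}, "report": {"measure"}}
      print(makespan(tasks, order, m=2))   # -> 9.0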

  17. Night driving simulation in a randomized prospective comparison of Visian toric implantable collamer lens and conventional PRK for moderate to high myopic astigmatism.

    PubMed

    Schallhorn, Steven; Tanzer, David; Sanders, Donald R; Sanders, Monica; Brown, Mitch; Kaupp, Sandor E

    2010-05-01

    To compare changes in simulated night driving performance after Visian Toric Implantable Collamer Lens (TICL; STAAR Surgical) implantation and photorefractive keratectomy (PRK) for the correction of moderate to high myopic astigmatism. This prospective, randomized study consisted of 43 eyes implanted with the TICL (20 bilateral cases) and 45 eyes receiving conventional PRK (VISX Star S3 excimer laser) with mitomycin C (22 bilateral cases) for moderate to high myopia (-6.00 to -20.00 diopters [D] sphere, measured at the spectacle plane) and 1.00 to 4.00 D of astigmatism. As a substudy, 27 eyes of 14 TICL patients and 41 eyes of 21 PRK patients underwent a simulated night driving test. The detection and identification distances of road signs and hazards with the Night Driving Simulator (Vision Sciences Research Corp) were measured with and without a glare source before and 6 months after each procedure. No significant difference between the TICL and PRK groups was noted in the pre- to postoperative change in detection distances with and without the glare source. The differences in identification distances without glare were significantly better for business and traffic road signs and pedestrian hazards in the TICL group relative to the PRK group, whereas with glare, only the pedestrian hazards were significantly better. A clinically relevant change in Night Driving Simulator performance (>0.5 seconds change in ability to identify tasks postoperatively) was significantly better in the TICL group (with and without glare) for all identification tasks. The TICL performed better than conventional PRK in the pre- to postoperative Night Driving Simulator testing with and without a glare source present. Copyright 2010, SLACK Incorporated.

  18. Foundation doctors working at night: what training opportunities exist?

    PubMed

    Coomber, R; Smith, D; McGuinness, D; Shao, E; Soobrah, R; Frankel, A H

    2014-07-01

    Foundation Training is designed for doctors in their first two years of post-graduation. The number of foundation doctors (FD) in the UK working nights has reduced because of a perception that clinical supervision at night is unsatisfactory and that minimal training opportunities exist. We aimed to assess the value of night shifts to FDs and hypothesised that removing FDs from nights may be detrimental to training. Using a survey, we assessed the number of FDs working nights in London, FDs views on working nights and their supervision at night. We evaluated whether working at night, compared to daytime working provided opportunities to achieve foundation competencies. 83% (N = 2157/2593) of FDs completed the survey. Over 90% of FDs who worked nights felt that the experience they gained improved their ability to prioritise, make decisions and plan. FDs who worked nights reported higher scores for achieving competencies in history taking (2.67 vs. 2.51; p = 0.00), examination (2.72 vs. 2.59; p = 0.01) and resuscitation (2.27 vs. 1.96; p = 0.00). The majority (65%) felt adequately supervised. Our survey has demonstrated that FDs find working nights a valuable experience, providing important training opportunities, which are additional to those encountered during daytime working.

  19. Functional Defects in Color Vision in Patients With Choroideremia.

    PubMed

    Jolly, Jasleen K; Groppe, Markus; Birks, Jacqueline; Downes, Susan M; MacLaren, Robert E

    2015-10-01

    To characterize defects in color vision in patients with choroideremia. Prospective cohort study. Thirty patients with choroideremia (41 eyes) and 10 age-matched male controls (19 eyes) with visual acuity of ≥6/36 attending outpatient clinics in Oxford Eye Hospital underwent color vision testing with the Farnsworth-Munsell 100 hue test, visual acuity testing, and autofluorescence imaging. To exclude changes caused by degeneration of the fovea, a subgroup of 14 patients with a visual acuity ≥6/6 was analyzed. Calculated color vision total error scores were compared between the groups and related to a range of factors using a random-effects model. Mean color vision total error scores were 120 (95% confidence interval [CI] 92, 156) in the ≥6/6 choroideremia group, 206 (95% CI 161, 266) in the <6/6 visual acuity choroideremia group, and 47 (95% CI 32, 69) in the control group. Covariate analysis showed a significant difference in color vision total error score between the groups (P < .001 between each group). Patients with choroideremia have a functional defect in color vision compared with age-matched controls. The color vision defect deteriorates as the degeneration encroaches on the fovea. The presence of an early functional defect in color vision provides a useful biomarker against which to assess successful gene transfer in gene therapy trials. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate the vision system and manipulator independently, and then tie them together via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same, with a high degree of precision, for both the robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, work currently in progress, is described along with preliminary results and the problems encountered.
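
    The "common mapping into the task space" described here is a chain of rigid transforms. A minimal sketch of composing them with homogeneous matrices; the identity rotations and translations are illustrative placeholders, not calibration results.

      import numpy as np

      def transform(rotation, translation):
          """Build a 4x4 homogeneous transform from a 3x3 rotation matrix
          and a 3-vector translation."""
          T = np.eye(4)
          T[:3, :3] = rotation
          T[:3, 3] = translation
          return T

      # Pose of the object in the camera frame (from stereo vision) and pose
      # of the camera in the robot base frame (from calibration); illustrative.
      T_cam_obj = transform(np.eye(3), [0.0, 0.1, 0.8])
      T_base_cam = transform(np.eye(3), [0.5, 0.0, 1.2])

      # Chaining the mappings gives the object pose directly in the robot's
      # own frame, so any shared "absolute" frame cancels out of the result.
      T_base_obj = T_base_cam @ T_cam_obj
      print(T_base_obj[:3, 3])   # -> [0.5 0.1 2.0]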