Sample records for scene generation system

  1. PC Scene Generation

    NASA Astrophysics Data System (ADS)

    Buford, James A., Jr.; Cosby, David; Bunfield, Dennis H.; Mayhall, Anthony J.; Trimble, Darian E.

    2007-04-01

    AMRDEC has successfully tested hardware and software for Real-Time Scene Generation for IR and SAL Sensors on COTS PC-based hardware and video cards. AMRDEC personnel worked with nVidia and Concurrent Computer Corporation to develop a Scene Generation system capable of frame rates of at least 120 Hz while frame-locked to an external source (such as a missile seeker) with no dropped frames. Latency measurements and image validation were performed using COTS and in-house developed hardware and software. Software for the Scene Generation system was developed using OpenSceneGraph.
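
    As a sketch of what such an externally frame-locked render loop looks like, the fragment below paces an OpenSceneGraph viewer against a sync source. The sync function here is a software stand-in (the AMRDEC implementation is not public), and the scene file name is hypothetical.

```cpp
// Minimal sketch of a frame-locked OpenSceneGraph render loop.
// A real HWIL rig would block on the seeker's hardware sync pulse;
// here we simply pace at ~120 Hz as an illustrative stand-in.
#include <osgViewer/Viewer>
#include <osgDB/ReadFile>
#include <chrono>
#include <thread>

void wait_for_seeker_sync()
{
    static auto next = std::chrono::steady_clock::now();
    next += std::chrono::microseconds(8333);   // one 120 Hz frame period
    std::this_thread::sleep_until(next);
}

int main()
{
    osgViewer::Viewer viewer;
    // "target_scene.osg" is a hypothetical file; any OSG-loadable model
    // works (readNodeFile returns null if the file is missing).
    viewer.setSceneData(osgDB::readNodeFile("target_scene.osg"));
    viewer.realize();

    while (!viewer.done())
    {
        wait_for_seeker_sync();   // never start a frame early or late
        viewer.frame();           // advance, cull, and draw one frame
    }
    return 0;
}
```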

  2. Real-time synchronized multiple-sensor IR/EO scene generation utilizing the SGI Onyx2

    NASA Astrophysics Data System (ADS)

    Makar, Robert J.; O'Toole, Brian E.

    1998-07-01

    An approach to utilize the symmetric multiprocessing environment of the Silicon Graphics Inc.® (SGI) Onyx2™ has been developed to support the generation of IR/EO scenes in real-time. This development, supported by the Naval Air Warfare Center Aircraft Division (NAWC/AD), focuses on high frame rate hardware-in-the-loop testing of multiple sensor avionics systems. In the past, real-time IR/EO scene generators have been developed as custom architectures that were often expensive and difficult to maintain. Previous COTS scene generation systems, designed and optimized for visual simulation, could not be adapted for accurate IR/EO sensor stimulation. The new Onyx2 connection mesh architecture made it possible to develop a more economical system while maintaining the fidelity needed to stimulate actual sensors. An SGI based Real-time IR/EO Scene Simulator (RISS) system was developed to utilize the Onyx2's fast multiprocessing hardware to perform real-time IR/EO scene radiance calculations. During real-time scene simulation, the multiprocessors are used to update polygon vertex locations and compute radiometrically accurate floating point radiance values. The output of this process can be utilized to drive a variety of scene rendering engines. Recent advancements in COTS graphics systems, such as the Silicon Graphics InfiniteReality®, make a total COTS solution possible for some classes of sensors. This paper will discuss the critical technologies that apply to infrared scene generation and hardware-in-the-loop testing using SGI compatible hardware. Specifically, the application of RISS high-fidelity real-time radiance algorithms on the SGI Onyx2's multiprocessing hardware will be discussed. Also, issues relating to external real-time control of multiple synchronized scene generation channels will be addressed.
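
    The core idea of partitioning per-vertex radiance updates across processors can be sketched as follows. This is a minimal illustration assuming a generic graybody radiance model rather than the actual RISS algorithms, and it uses portable C++ threads in place of the Onyx2's multiprocessing API.

```cpp
// Sketch: split per-vertex radiance updates across worker threads, in the
// spirit of RISS's use of the Onyx2 multiprocessors. The Planck graybody
// radiance below is a generic stand-in, NOT the RISS radiance model.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <thread>
#include <vector>

struct Vertex { double temperatureK; double emissivity; double radiance; };

// Spectral radiance of a graybody at one wavelength (W sr^-1 m^-3).
double graybody(double T, double eps, double lambda)
{
    const double h = 6.626e-34, c = 2.998e8, k = 1.381e-23;
    return eps * (2.0 * h * c * c) /
           (std::pow(lambda, 5) * (std::exp(h * c / (lambda * k * T)) - 1.0));
}

void update_radiances(std::vector<Vertex>& verts, double lambda,
                      unsigned nThreads)
{
    std::vector<std::thread> pool;
    const std::size_t chunk = (verts.size() + nThreads - 1) / nThreads;
    for (unsigned t = 0; t < nThreads; ++t)
    {
        const std::size_t lo = t * chunk;
        const std::size_t hi = std::min(verts.size(), lo + chunk);
        pool.emplace_back([&verts, lo, hi, lambda] {
            for (std::size_t i = lo; i < hi; ++i)      // this thread's slice
                verts[i].radiance =
                    graybody(verts[i].temperatureK, verts[i].emissivity, lambda);
        });
    }
    for (auto& th : pool) th.join();
}
```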

  3. Automated synthetic scene generation

    NASA Astrophysics Data System (ADS)

    Givens, Ryan N.

    Physics-based simulations generate synthetic imagery to help organizations anticipate system performance of proposed remote sensing systems. However, manually constructing synthetic scenes sophisticated enough to capture the complexity of real-world sites can take days to months, depending on the size of the site and the desired fidelity of the scene. This research, sponsored by the Air Force Research Laboratory's Sensors Directorate, successfully developed an automated approach to fuse high-resolution RGB imagery, lidar data, and hyperspectral imagery and then extract the necessary scene components. The method greatly reduces the time and money required to generate realistic synthetic scenes and introduces new approaches to improving material identification using information from all three input datasets.

  4. Aligning Where to See and What to Tell: Image Captioning with Region-Based Attention and Scene-Specific Contexts.

    PubMed

    Fu, Kun; Jin, Junqi; Cui, Runpeng; Sha, Fei; Zhang, Changshui

    2017-12-01

    Recent progress on automatic generation of image captions has shown that it is possible to describe the most salient information conveyed by images with accurate and meaningful sentences. In this paper, we propose an image captioning system that exploits the parallel structures between images and sentences. In our model, the process of generating the next word, given the previously generated ones, is aligned with the visual perception experience, where attention shifts among the visual regions; such transitions impose a thread of ordering in visual perception. This alignment characterizes the flow of latent meaning, which encodes what is semantically shared by both the visual scene and the text description. Our system also makes another novel modeling contribution by introducing scene-specific contexts that capture higher-level semantic information encoded in an image. The contexts adapt language models for word generation to specific scene types. We benchmark our system against published results on several popular datasets, using both automatic evaluation metrics and human evaluation. We show that either region-based attention or scene-specific contexts improves over systems lacking those components. Furthermore, combining these two modeling ingredients attains state-of-the-art performance.
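
    The region-based attention the abstract describes is commonly formalized as a softmax-weighted sum over region features; a generic soft-attention form (not the paper's exact notation) is

    $$e_{t,i} = f_{\text{att}}(h_{t-1}, v_i), \qquad \alpha_{t,i} = \frac{\exp(e_{t,i})}{\sum_j \exp(e_{t,j})}, \qquad c_t = \sum_i \alpha_{t,i}\, v_i,$$

    where $v_i$ is the feature vector of image region $i$, $h_{t-1}$ is the language-model state, and the attended context $c_t$ conditions the prediction of word $t$.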

  5. Real-time maritime scene simulation for ladar sensors

    NASA Astrophysics Data System (ADS)

    Christie, Chad L.; Gouthas, Efthimios; Swierkowski, Leszek; Williams, Owen M.

    2011-06-01

    Continuing interest exists in the development of cost-effective synthetic environments for testing Laser Detection and Ranging (ladar) sensors. In this paper we describe a PC-based system for real-time ladar scene simulation of ships and small boats in a dynamic maritime environment. In particular, we describe the techniques employed to generate range imagery accompanied by passive radiance imagery. Our ladar scene generation system is an evolutionary extension of the VIRSuite infrared scene simulation program and includes all previous features such as ocean wave simulation, the physically-realistic representation of boat and ship dynamics, wake generation and simulation of whitecaps, spray, wake trails and foam. A terrain simulation extension is also under development. In this paper we outline the development, capabilities and limitations of the VIRSuite extensions.

  6. Research on the generation of the background with sea and sky in infrared scene

    NASA Astrophysics Data System (ADS)

    Dong, Yan-zhi; Han, Yan-li; Lou, Shu-li

    2008-03-01

    Preserving the texture of infrared images is important for scene generation in simulations of anti-ship infrared imaging guidance. We studied the fractal method and applied it to infrared scene generation, adopting horizontal-vertical (HV) partitioning to encode the original image. Based on the properties of infrared images with sea-sky backgrounds, we used a Local Iterated Function System (LIFS) to reduce the computational complexity and increase the processing rate. The results show that the fractal method preserves the texture of infrared images well and can be applied widely to infrared scene generation in the future.
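
    For readers unfamiliar with iterated function systems, the sketch below runs the classic "chaos game" with generic affine maps. It illustrates only the mechanism that LIFS-based fractal coding builds on; it is not the paper's HV-partition codec.

```cpp
// Minimal "chaos game" for an iterated function system (IFS): repeatedly
// apply a randomly chosen affine map to a point. The Barnsley-fern map set
// is used purely as a familiar example of fractal texture from a tiny code.
#include <cstdio>
#include <random>

struct Affine { double a, b, c, d, e, f; };  // (x,y) -> (ax+by+e, cx+dy+f)

int main()
{
    const Affine maps[4] = {
        { 0.00,  0.00,  0.00, 0.16, 0.0, 0.00},
        { 0.85,  0.04, -0.04, 0.85, 0.0, 1.60},
        { 0.20, -0.26,  0.23, 0.22, 0.0, 1.60},
        {-0.15,  0.28,  0.26, 0.24, 0.0, 0.44}};
    const double weights[4] = {0.01, 0.85, 0.07, 0.07};

    std::mt19937 rng(42);
    std::discrete_distribution<int> pick(weights, weights + 4);

    double x = 0.0, y = 0.0;
    for (int i = 0; i < 100000; ++i)
    {
        const Affine& m = maps[pick(rng)];
        const double nx = m.a * x + m.b * y + m.e;
        const double ny = m.c * x + m.d * y + m.f;
        x = nx; y = ny;
        if (i > 20) std::printf("%f %f\n", x, y);  // skip the transient
    }
    return 0;
}
```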

  7. Integration of an open interface PC scene generator using COTS DVI converter hardware

    NASA Astrophysics Data System (ADS)

    Nordland, Todd; Lyles, Patrick; Schultz, Bret

    2006-05-01

    Commercial-Off-The-Shelf (COTS) personal computer (PC) hardware is increasingly capable of computing high dynamic range (HDR) scenes for military sensor testing at high frame rates. New electro-optical and infrared (EO/IR) scene projectors feature electrical interfaces that can accept the DVI output of these PC systems. However, military Hardware-in-the-loop (HWIL) facilities such as those at the US Army Aviation and Missile Research Development and Engineering Center (AMRDEC) utilize a sizeable inventory of existing projection systems that were designed to use the Silicon Graphics Incorporated (SGI) digital video port (DVP, also known as DVP2 or DD02) interface. To mate the new DVI-based scene generation systems to these legacy projection systems, CG2 Inc., a Quantum3D Company (CG2), has developed a DVI-to-DVP converter called Delta DVP. This device takes progressive scan DVI input, converts it to digital parallel data, and combines and routes color components to derive a 16-bit wide luminance channel replicated on a DVP output interface. The HWIL Functional Area of AMRDEC has developed a suite of modular software to perform deterministic real-time, wave band-specific rendering of sensor scenes, leveraging the features of commodity graphics hardware and open source software. Together, these technologies enable sensor simulation and test facilities to integrate scene generation and projection components with diverse pedigrees.
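
    The luminance-derivation step can be illustrated with a hypothetical packing routine; the Delta DVP's actual bit routing is proprietary, so this shows only the general idea of repurposing two 8-bit DVI color components to carry a 16-bit luminance word.

```cpp
// Illustrative only: derive a 16-bit luminance word from two 8-bit DVI
// color components, e.g. green as the high byte and red as the low byte.
// The real Delta DVP routing is not public; this is the concept, not it.
#include <cstdint>

inline std::uint16_t pack_luminance(std::uint8_t high_byte,
                                    std::uint8_t low_byte)
{
    return static_cast<std::uint16_t>(
        (static_cast<unsigned>(high_byte) << 8) | low_byte);
}
```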

  8. Real-time scene and signature generation for ladar and imaging sensors

    NASA Astrophysics Data System (ADS)

    Swierkowski, Leszek; Christie, Chad L.; Antanovskii, Leonid; Gouthas, Efthimios

    2014-05-01

    This paper describes development of two key functionalities within the VIRSuite scene simulation program, broadening its scene generation capabilities and increasing accuracy of thermal signatures. Firstly, a new LADAR scene generation module has been designed. It is capable of simulating range imagery for Geiger mode LADAR, in addition to the already existing functionality for linear mode systems. Furthermore, a new 3D heat diffusion solver has been developed within the VIRSuite signature prediction module. It is capable of calculating the temperature distribution in complex three-dimensional objects for enhanced dynamic prediction of thermal signatures. With these enhancements, VIRSuite is now a robust tool for conducting dynamic simulation for missiles with multi-mode seekers.

  9. Research in interactive scene analysis

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J. M.; Garvey, T. D.; Weyl, S. A.; Wolf, H. C.

    1975-01-01

    An interactive scene interpretation system (ISIS) was developed as a tool for constructing and experimenting with man-machine and automatic scene analysis methods tailored to particular image domains. A recently developed region analysis subsystem based on the paradigm of Brice and Fennema is described. Using this subsystem, a series of experiments was conducted to determine good criteria for initially partitioning a scene into atomic regions and for merging these regions into a final partition of the scene along object boundaries. Semantic (problem-dependent) knowledge is essential for complete, correct partitions of complex real-world scenes. An interactive approach to semantic scene segmentation was developed and demonstrated on both landscape and indoor scenes. This approach provides a reasonable methodology for segmenting scenes that cannot be processed completely automatically, and is a promising basis for a future automatic system. A program is also described that can automatically generate strategies for finding specific objects in a scene based on manually designated pictorial examples.

  10. View generated database

    NASA Technical Reports Server (NTRS)

    Downward, James G.

    1992-01-01

    This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.

  11. Real-time visual simulation of APT system based on RTW and Vega

    NASA Astrophysics Data System (ADS)

    Xiong, Shuai; Fu, Chengyu; Tang, Tao

    2012-10-01

    The Matlab/Simulink simulation model of an APT (acquisition, pointing and tracking) system is analyzed and established. The model's C code, which can be used for real-time simulation, is then generated by RTW (Real-Time Workshop). Practical experiments show that running the C code gives the same simulation results as running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system; with it and OpenGL, the APT scene simulation platform is developed and used to render and display the virtual scenes of the APT system. To add necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders are used on the programmable GPU. By calling the C code, the scene simulation platform can adjust the system parameters on-line and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform offers high efficiency, low cost, and good simulation results.
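
    A hedged sketch of how RTW-generated code is typically driven from a scene-update loop follows. Simulink Coder conventionally emits <model>_initialize() and <model>_step() entry points; "apt" is a hypothetical model name, and the stub bodies below stand in for the generated code and the Vega/GLSL drawing path so the sketch compiles standalone.

```cpp
// Sketch of driving RTW-generated model code from a scene-update loop.
// In a real build the stubs are replaced by the generated apt.c/apt.h
// and the actual Vega/OpenGL rendering call.
void apt_initialize() { /* generated: set initial model states */ }
void apt_step()       { /* generated: advance the model one fixed step */ }
void render_scene()   { /* Vega/GLSL: draw the frame from model outputs */ }

int main()
{
    apt_initialize();
    for (int frame = 0; frame < 1000; ++frame)
    {
        apt_step();       // real-time simulation data for this frame
        render_scene();   // drive the virtual scene from those outputs
    }
    return 0;
}
```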

  12. Visible-Infrared Hyperspectral Image Projector

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew

    2013-01-01

    The VisIR HIP generates spatially and spectrally complex scenes that simulate real-world targets viewed by various remote sensing instruments. The VisIR HIP consists of two subsystems: a spectral engine and a spatial engine. The spectral engine generates spectrally complex uniform illumination spanning the wavelength range between 380 nm and 1,600 nm. The spatial engine generates two-dimensional gray-scale scenes. When combined, the two engines can produce two-dimensional scenes with a unique spectrum at each pixel. The VisIR HIP can be used to calibrate any spectrally sensitive remote-sensing instrument. Tests were conducted on the Wide-field Imaging Interferometer Testbed at NASA's Goddard Space Flight Center. The device is a variation of the calibrated hyperspectral image projector developed by the National Institute of Standards and Technology in Gaithersburg, MD. It uses Gooch & Housego Visible and Infrared OL490 Agile Light Sources to generate arbitrary spectra. The two light sources are coupled to a digital light processing (DLP™) digital mirror device (DMD) that serves as the spatial engine. Scenes are displayed on the DMD synchronously with the desired spectrum. Scene/spectrum combinations are displayed in rapid succession, over time intervals that are short compared to the integration time of the system under test.
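
    In generic notation (not NIST's), the two-engine combination can be written as a time-multiplexed sum: over one detector integration period the effective projected radiance at pixel $(x,y)$ is

    $$L(x,y,\lambda) \;\approx\; \sum_k g_k(x,y)\, S_k(\lambda),$$

    where $S_k(\lambda)$ is the $k$-th spectrum produced by the spectral engine and $g_k(x,y)$ is the gray-scale frame the DMD displays with it; the sum approximates an arbitrary per-pixel spectrum provided the frame sequence is fast relative to the sensor's integration time.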

  13. Computer image generation: Reconfigurability as a strategy in high fidelity space applications

    NASA Technical Reports Server (NTRS)

    Bartholomew, Michael J.

    1989-01-01

    The demand for realistic, high fidelity, computer image generation systems to support space simulation is well established. However, as the number and diversity of space applications increase, the complexity and cost of computer image generation systems also increase. One strategy used to harmonize cost with varied requirements is the establishment of a reconfigurable image generation system that can be adapted rapidly and easily to meet new and changing requirements. The reconfigurability strategy through the life cycle of system conception, specification, design, implementation, operation, and support for high fidelity computer image generation systems is discussed. The discussion is limited to those issues directly associated with reconfigurability and adaptability of a specialized scene generation system in a multi-faceted space applications environment. Examples and insights gained through the recent development and installation of the Improved Multi-function Scene Generation System at the Johnson Space Center Systems Engineering Simulator are reviewed and compared with current simulator industry practices. The results are clear: the strategy of reconfigurability applied to space simulation requirements provides a viable path to supporting diverse applications with an adaptable computer image generation system.

  14. Framework of passive millimeter-wave scene simulation based on material classification

    NASA Astrophysics Data System (ADS)

    Park, Hyuk; Kim, Sung-Hyun; Lee, Ho-Jin; Kim, Yong-Hoon; Ki, Jae-Sug; Yoon, In-Bok; Lee, Jung-Min; Park, Soon-Jun

    2006-05-01

    Over the past few decades, passive millimeter-wave (PMMW) sensors have emerged as useful implements in transportation and military applications such as autonomous flight-landing systems, smart weapons, and night- and all-weather vision systems. As an efficient way to predict the performance of a PMMW sensor and apply it to a system, testing in a software-in-the-loop (SWIL) configuration is required, and PMMW scene simulation is a key component in implementing such a simulator. However, no commercial off-the-shelf tool is available for constructing PMMW scene simulations, and only a few studies have addressed this technology. We have studied PMMW scene simulation methods in order to develop a PMMW sensor SWIL simulator. This paper describes the framework of the PMMW scene simulation and tentative results. The purpose of the PMMW scene simulation is to generate sensor outputs (or images) from a visible image and environmental conditions. We organize it into four parts: material classification mapping, PMMW environmental setting, PMMW scene forming, and millimeter-wave (MMW) sensorworks. The background and the objects in the scene are classified based on properties related to MMW radiation and reflectivity. The environmental setting part calculates the relevant PMMW phenomenology: atmospheric propagation and emission, including sky temperature, weather conditions, and physical temperature. Then, PMMW raw images are formed with surface geometry. Finally, PMMW sensor outputs are generated from the raw images by applying sensor characteristics such as aperture size and noise level. Through the simulation process, PMMW phenomenology and sensor characteristics are thus simulated in the output scene. We have finished the design of the simulator framework and are working on the detailed implementation. As a tentative result, a flight observation was simulated under specific conditions. After completing the implementation, we plan to increase the reliability of the simulation by collecting data with actual PMMW sensors. With a reliable PMMW scene simulator, it will be more efficient to apply PMMW sensors to various applications.
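
    The radiometry underlying the scene-forming step is commonly summarized (again in generic notation) by each surface's brightness temperature being an emissivity-weighted mix of its physical temperature and the reflected sky temperature,

    $$T_B \;=\; \epsilon\, T_{\text{phys}} + (1-\epsilon)\, T_{\text{sky}},$$

    with atmospheric attenuation and path emission then applied along the line of sight before the sensor model adds aperture blur and noise.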

  15. Guidance of visual attention by semantic information in real-world scenes

    PubMed Central

    Wu, Chia-Chien; Wick, Farahnaz Ahmed; Pomplun, Marc

    2014-01-01

    Recent research on attentional guidance in real-world scenes has focused on object recognition within the context of a scene. This approach has been valuable for determining some of the factors that drive the allocation of visual attention and determine visual selection. This article reviews experimental work on how different components of context, especially semantic information, affect attentional deployment. We review work from the areas of object recognition, scene perception, and visual search, highlighting recent studies examining semantic structure in real-world scenes. A better understanding of how humans parse scene representations will not only improve current models of visual attention but also advance next-generation computer vision systems and human-computer interfaces. PMID:24567724

  16. Complex scenes and situations visualization in hierarchical learning algorithm with dynamic 3D NeoAxis engine

    NASA Astrophysics Data System (ADS)

    Graham, James; Ternovskiy, Igor V.

    2013-06-01

    We applied a two-stage unsupervised hierarchical learning system to model complex dynamic surveillance and cyber-space monitoring systems, using a non-commercial version of the NeoAxis visualization software. The hierarchical scene learning and recognition approach is based on hierarchical expectation maximization and was linked to a 3D graphics engine for validating learning and classification results and for understanding the relationship between the human and the autonomous system. Scene recognition is performed by feeding synthetically generated data to a dynamic logic algorithm. The algorithm performs hierarchical recognition of the scene by first examining the features of the objects to determine which objects are present, and then determining the scene based on those objects. This paper presents a framework within which low-level data linked to higher-level visualization can support a human operator and be evaluated in a detailed and systematic way.

  17. EO/IR scene generation open source initiative for real-time hardware-in-the-loop and all-digital simulation

    NASA Astrophysics Data System (ADS)

    Morris, Joseph W.; Lowry, Mac; Boren, Brett; Towers, James B.; Trimble, Darian E.; Bunfield, Dennis H.

    2011-06-01

    The US Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) and the Redstone Test Center (RTC) have formed the Scene Generation Development Center (SGDC) to support the Department of Defense (DoD) open source EO/IR Scene Generation initiative for real-time hardware-in-the-loop and all-digital simulation. Various branches of the DoD have invested significant resources in the development of advanced scene and target signature generation codes. The SGDC goal is to maintain unlimited government rights and controlled access to government open source scene generation and signature codes. In addition, the SGDC provides development support to a multi-service community of test and evaluation (T&E) users, developers, and integrators in a collaborative environment. The SGDC has leveraged the DoD Defense Information Systems Agency (DISA) ProjectForge (https://Project.Forge.mil), which provides a collaborative development and distribution environment for the DoD community. The SGDC will develop and maintain several codes for tactical and strategic simulation, such as the Joint Signature Image Generator (JSIG), the Multi-spectral Advanced Volumetric Real-time Imaging Compositor (MAVRIC), and Office of the Secretary of Defense (OSD) Test and Evaluation Science and Technology (T&E/S&T) thermal modeling and atmospherics packages, such as EOView, CHARM, and STAR. Other utility packages include the ContinuumCore for real-time messaging and data management and IGStudio for run-time visualization and scenario generation.

  18. Integration of Irma tactical scene generator into directed-energy weapon system simulation

    NASA Astrophysics Data System (ADS)

    Owens, Monte A.; Cole, Madison B., III; Laine, Mark R.

    2003-08-01

    Integrated high-fidelity physics-based simulations that include engagement models, image generation, electro-optical hardware models and control system algorithms have previously been developed by Boeing-SVS for various tracking and pointing systems. These simulations, however, had always used images with featureless or random backgrounds and simple target geometries. With the requirement to engage tactical ground targets in the presence of cluttered backgrounds, a new type of scene generation tool was required to fully evaluate system performance in this challenging environment. To answer this need, Irma was integrated into the existing suite of Boeing-SVS simulation tools, allowing scene generation capabilities with unprecedented realism. Irma is a US Air Force research tool used for high-resolution rendering and prediction of target and background signatures. The MATLAB/Simulink-based simulation achieves closed-loop tracking by running track algorithms on the Irma-generated images, processing the track errors through optical control algorithms, and moving simulated electro-optical elements. The geometry of these elements determines the sensor orientation with respect to the Irma database containing the three-dimensional background and target models. This orientation is dynamically passed to Irma through a Simulink S-function to generate the next image. This integrated simulation provides a test-bed for development and evaluation of tracking and control algorithms against representative images including complex background environments and realistic targets calibrated using field measurements.

  19. Synthetic Scene Generation of the Stennis V and V Target Range for the Calibration of Remote Sensing Systems

    NASA Technical Reports Server (NTRS)

    Cao, Chang-Yong; Blonski, Slawomir; Ryan, Robert; Gasser, Jerry; Zanoni, Vicki

    1999-01-01

    The verification and validation (V&V) target range developed at Stennis Space Center is a useful test site for the calibration of remote sensing systems. In this paper, we present a simple algorithm for generating synthetic radiance scenes, or digital models, of this target range. The radiation propagation for the target in the solar-reflective and thermal-infrared spectral regions is modeled using the atmospheric radiative transfer code MODTRAN 4. The at-sensor in-band radiance and spectral radiance for a given sensor at a given altitude are predicted. Software was developed to generate scenes with different spatial and spectral resolutions using the simulated at-sensor radiance values. The radiometric accuracy of the simulation is evaluated by comparing simulated radiance values with AVIRIS-acquired values. The results show that, in general, there is a good match between AVIRIS-measured and MODTRAN-predicted radiance values for the target, despite some anomalies. Synthetic scenes provide a cost-effective way of validating the spatial and radiometric accuracy of the data in flight. Other applications include mission planning, sensor simulation, and trade-off analysis in sensor design.
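
    The at-sensor in-band radiance referred to here is the MODTRAN-predicted spectral radiance weighted by the sensor's relative spectral response and integrated over the band, i.e., the standard band integration (not necessarily the paper's exact notation):

    $$L_{\text{band}} \;=\; \int_{\lambda_1}^{\lambda_2} L_\lambda(\lambda)\, R(\lambda)\, d\lambda .$$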

  20. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television that avoids adverse psychological effects. To create compelling three-dimensional television programs, a virtual studio is required that performs the tasks of generating, editing, and integrating 3D content involving virtual and real scenes. The paper presents, for the first time, the procedures, factors, and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation are focused on depth extraction from captured integral 3D images. The depth calculation method based on disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD and a further improvement in its precision are proposed and verified.
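
    Depth recovery from elemental-image disparity follows the usual triangulation relation, and with multiple baselines the SSD scores are accumulated before the minimum is taken. A generic formulation (the paper's exact algorithm may differ) is

    $$d_b = \frac{f\,B_b}{z}, \qquad E(z) = \sum_b \sum_{(u,v)\in W} \big\|\, \mathbf{I}_0(u,v) - \mathbf{I}_b(u+d_b,\, v)\,\big\|^2,$$

    where $B_b$ is the $b$-th baseline, the norm runs over the colour channels (the "colour SSD"), and the depth estimate is the $z$ minimizing $E$.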

  1. Low-cost real-time infrared scene generation for image projection and signal injection

    NASA Astrophysics Data System (ADS)

    Buford, James A., Jr.; King, David E.; Bowden, Mark H.

    1998-07-01

    As cost becomes an increasingly important factor in the development and testing of infrared sensors and flight computers/processors, the need for accurate hardware-in-the-loop (HWIL) simulations is critical. In the past, expensive and complex dedicated scene generation hardware was needed to attain the fidelity necessary for accurate testing. Recent technological advances and innovative applications of established technologies are beginning to allow development of cost-effective replacements for dedicated scene generators. These new scene generators are mainly constructed from commercial-off-the-shelf (COTS) hardware and software components. At the U.S. Army Aviation and Missile Command (AMCOM) Missile Research, Development, and Engineering Center (MRDEC), researchers have developed such a dynamic IR scene generator (IRSG) built around COTS hardware and software. The IRSG is used to provide dynamic inputs to an IR scene projector for in-band seeker testing and for direct signal injection into the seeker or processor electronics. AMCOM MRDEC has developed a second-generation IRSG, namely IRSG2, using the latest Silicon Graphics Incorporated (SGI) Onyx2 with InfiniteReality graphics. As reported in previous papers, the SGI Onyx Reality Engine 2 is the platform of the original IRSG, now referred to as IRSG1. IRSG1 has been in operation and used daily for the past three years on several IR projection and signal injection HWIL programs. With the second-generation IRSG, frame rates have increased from 120 Hz to 400 Hz and intensity resolution from 12 bits to 16 bits. The key features of the IRSGs are real-time missile frame rates and frame sizes; a dynamic missile-to-target(s) viewpoint updated each frame in real time by a six-degree-of-freedom (6DOF) system under test (SUT) simulation; multiple dynamic objects (e.g. targets, terrain/background, countermeasures, and atmospheric effects); latency compensation; point-to-extended-source anti-aliased targets; and sensor modeling effects. This paper provides a comparison between the IRSG1 and IRSG2 systems and focuses on the IRSG software, real-time features, and database development tools.

  2. Electrostatic artificial eyelid actuator as an analog micromirror device

    NASA Astrophysics Data System (ADS)

    Goodwin, Scott H.; Dausch, David E.; Solomon, Steven L.; Lamvik, Michael K.

    2005-05-01

    An electrostatic MEMS actuator is described for use as an analog micromirror device (AMD) for high-performance, broadband, hardware-in-the-loop (HWIL) scene generation. Current state-of-the-art technology is based on resistively heated pixel arrays. As these arrays drive to the higher scene temperatures required by missile defense scenarios, the power required to drive the large-format resistive arrays will ultimately become prohibitive. Existing digital micromirrors (DMD) are, in principle, capable of generating the required scene irradiances, but suffer from limited dynamic range, resolution and flicker effects. An AMD would be free of these limitations, and so represents a viable alternative for high-performance UV/VIS/IR scene generation. An electrostatic flexible-film actuator technology, developed for use as "artificial eyelid" shutters protecting focal plane sensors against damaging radiation, is suitable as an AMD for analog control of projection irradiance. In shutter applications, the artificial eyelid actuator attained a radius of curvature as low as 25 µm and operated at high voltage (>200 V). Recent testing suggests that these devices are capable of analog operation as reflective microcantilever mirrors appropriate for scene projector systems. In this case, the device would possess a larger radius and operate at lower voltages (20-50 V). Additionally, frame rates have been measured at greater than 5 kHz for continuous operation. The paper will describe the artificial eyelid technology, preliminary measurements of analog test pixels, and design aspects related to application in scene projection systems. We believe this technology will enable AMD projectors with at least 512² spatial resolution, non-temporally-modulated output, and pixel response times of <1.25 ms.

  3. Improved disparity map analysis through the fusion of monocular image segmentations

    NASA Technical Reports Server (NTRS)

    Perlant, Frederic P.; Mckeown, David M.

    1991-01-01

    The focus is to examine how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by analysis of the original monocular imagery. Surface illumination information is provided by segmenting the monocular image into fine surface patches of nearly homogeneous intensity, which are used to remove mismatches generated during stereo matching. These patches guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely to physical surfaces in the scene. The technique is largely independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented for a complex urban scene containing various man-made and natural features. This scene poses a variety of problems, including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. Improvements due to monocular fusion with a set of different region-based image segmentations are demonstrated. The generality of this approach to stereo analysis and its utility in the development of general three-dimensional scene interpretation systems are also discussed.
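
    A small sketch of the patch-guided filtering idea appears below. The median-deviation test is an illustrative choice assumed here for concreteness; the paper's actual statistical analysis may differ.

```cpp
// Sketch: use monocular segmentation patches to reject stereo mismatches.
// Within each nearly homogeneous patch, disparity is assumed to vary
// smoothly, so values far from the patch median are treated as mismatches.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// One segmentation patch: indices of its pixels in the disparity image.
using Patch = std::vector<std::size_t>;

void filter_patch(std::vector<float>& disparity,  // negative marks invalid
                  const Patch& patch, float tol)
{
    std::vector<float> vals;
    for (std::size_t i : patch)
        if (disparity[i] >= 0.0f) vals.push_back(disparity[i]);
    if (vals.size() < 3) return;                   // too few to judge

    // Approximate median via nth_element.
    std::nth_element(vals.begin(), vals.begin() + vals.size() / 2, vals.end());
    const float median = vals[vals.size() / 2];

    for (std::size_t i : patch)                    // reject outliers
        if (disparity[i] >= 0.0f && std::fabs(disparity[i] - median) > tol)
            disparity[i] = -1.0f;
}
```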

  4. Graphics processing unit (GPU) real-time infrared scene generation

    NASA Astrophysics Data System (ADS)

    Christie, Chad L.; Gouthas, Efthimios (Themie); Williams, Owen M.

    2007-04-01

    VIRSuite, the GPU-based suite of software tools developed at DSTO for real-time infrared scene generation, is described. The tools include the painting of scene objects with radiometrically-associated colours, translucent object generation, polar plot validation and versatile scene generation. Special features include radiometric scaling within the GPU and the presence of zoom anti-aliasing at the core of VIRSuite. Extension of the zoom anti-aliasing construct to cover target embedding and the treatment of translucent objects is described.

  5. Computer-generated, calligraphic, full-spectrum color system for visual simulation landing approach maneuvers

    NASA Technical Reports Server (NTRS)

    Chase, W. D.

    1975-01-01

    The calligraphic chromatic projector described was developed to improve the perceived realism of visual scene simulation ('out-the-window visuals'). The optical arrangement of the projector is illustrated and discussed. The device permits drawing 2000 vectors in as many as 500 colors, all above critical flicker frequencies, and use of high scene resolution and brightness at an acceptable level to the pilot, with the maximum system capabilities of 1000 lines and 1000 fL. The device for generating the colors is discussed, along with an experiment conducted to demonstrate potential improvements in performance and pilot opinion. Current research work and future research plans are noted.

  6. Utilization of DIRSIG in support of real-time infrared scene generation

    NASA Astrophysics Data System (ADS)

    Sanders, Jeffrey S.; Brown, Scott D.

    2000-07-01

    Real-time infrared scene generation for hardware-in-the-loop testing has traditionally been a difficult challenge. Infrared scenes are usually generated using commercial hardware that was not designed to properly handle the thermal and environmental physics involved. Real-time infrared scenes typically lack details that are included in scenes rendered in non-real-time by ray-tracing programs such as the Digital Imaging and Remote Sensing Scene Generation (DIRSIG) program. However, executing DIRSIG in real-time while retaining all the physics is beyond current computational capabilities for many applications. DIRSIG is a first-principles-based synthetic image generation model that produces multi- or hyper-spectral images in the 0.3 to 20 micron region of the electromagnetic spectrum. The DIRSIG model is an integrated collection of independent first-principles-based sub-models, each of which works in conjunction with the others to produce radiance field images with high radiometric fidelity. DIRSIG uses the MODTRAN radiation propagation model for exo-atmospheric irradiance, emitted and scattered radiances (upwelled and downwelled), and path transmission predictions. This radiometry submodel utilizes bidirectional reflectance data, accounts for specular and diffuse background contributions, and features path-length-dependent extinction and emission for transmissive bodies (plumes, clouds, etc.) which may be present in any target, background, or solar path. This detailed environmental modeling greatly enhances the number of rendered features and hence the fidelity of a rendered scene. While DIRSIG itself cannot currently be executed in real-time, its outputs can be used to provide scene inputs for real-time scene generators. These inputs can incorporate significant features such as target-to-background thermal interactions, static background object thermal shadowing, and partially transmissive countermeasures. All of these features represent significant improvements over the current state of the art in real-time IR scene generation.
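
    For a single opaque surface, the first-principles radiometry described here reduces to the familiar governing equation (simplified; DIRSIG's full model adds BRDF effects, scattered paths, and transmissive bodies):

    $$L_{\text{sensor}}(\lambda) = \Big[\epsilon(\lambda)\, L_{BB}(\lambda, T) + \rho(\lambda)\, \frac{E_{\downarrow}(\lambda)}{\pi}\Big]\, \tau_{\text{path}}(\lambda) + L_{\uparrow}(\lambda),$$

    where $L_{BB}$ is the Planck blackbody radiance, $E_{\downarrow}$ is the downwelled irradiance, and $\tau_{\text{path}}$ and $L_{\uparrow}$ are the MODTRAN path transmission and upwelled radiance.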

  7. MIRAGE: system overview and status

    NASA Astrophysics Data System (ADS)

    Robinson, Richard M.; Oleson, Jim; Rubin, Lane; McHugh, Stephen W.

    2000-07-01

    Santa Barbara Infrared's (SBIR) MIRAGE (Multispectral InfraRed Animation Generation Equipment) is a state-of-the-art dynamic infrared scene projector system. Imagery from the first MIRAGE system was presented to the scene simulation community during last year's SPIE AeroSense 99 Symposium. Since that time, SBIR has delivered five MIRAGE systems. This paper provides an overview of the MIRAGE system and discusses its current status, including an update on the system hardware and the current configuration. Proposed upgrades to this configuration, and options, will be discussed. Updates on the latest installations, applications, and measured data will also be presented.

  8. Active modulation of laser coded systems using near infrared video projection system based on digital micromirror device (DMD)

    NASA Astrophysics Data System (ADS)

    Khalifa, Aly A.; Aly, Hussein A.; El-Sherif, Ashraf F.

    2016-02-01

    Near infrared (NIR) dynamic scene projection systems are used to perform hardware-in-the-loop (HWIL) testing of a unit under test operating in the NIR band. The common and complex requirement of a class of these units is a dynamic scene that is spatio-temporally variant. In this paper we apply and investigate active external modulation of NIR laser light over different ranges of temporal frequencies. We use digital micromirror devices (DMDs) integrated as the core of a NIR projection system to generate these dynamic scenes. We deploy the spatial pattern to the DMD controller to simultaneously yield the required amplitude, by pulse width modulation (PWM) of the mirror elements, as well as the spatio-temporal pattern. Desired modulation and coding of highly stable, high-power visible (red laser at 640 nm) and NIR (diode laser at 976 nm) sources were achieved using combinations of different DMD-based optical masks. These versatile spatial active-coding strategies, at both low frequencies and high frequencies in the kHz range, for irradiance of different targets, were generated by our system and recorded using fast VIS-NIR cameras. The temporally modulated laser pulse traces were measured using an array of fast-response photodetectors. Finally, using a high-resolution spectrometer, we evaluated the NIR dynamic scene projection system's response in terms of preserving the wavelength and band spread of the NIR source after projection.
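
    The amplitude-by-PWM step can be illustrated with the standard binary bit-plane weighting used for DMD gray levels; this is a generic scheme, not the authors' exact timing.

```cpp
// Standard binary PWM for a DMD: an 8-bit gray level is shown as 8 bit
// planes, bit k held for a time proportional to 2^k, so total mirror
// on-time encodes the amplitude. Generic illustration only.
#include <cstdint>
#include <cstdio>

int main()
{
    const double framePeriod_us = 1000.0;          // one gray-scale frame
    const std::uint8_t level = 180;                // desired amplitude

    const double slot = framePeriod_us / 255.0;    // time per unit weight
    double onTime = 0.0;
    for (int k = 0; k < 8; ++k)
    {
        const bool on = (level >> k) & 1;
        const double planeTime = slot * (1 << k);  // weight 2^k
        if (on) onTime += planeTime;
        std::printf("bit %d: %s for %6.1f us\n",
                    k, on ? "ON " : "off", planeTime);
    }
    std::printf("duty cycle = %.3f (expected %u/255)\n",
                onTime / framePeriod_us, level);
    return 0;
}
```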

  9. Adaptation of facial synthesis to parameter analysis in MPEG-4 visual communication

    NASA Astrophysics Data System (ADS)

    Yu, Lu; Zhang, Jingyu; Liu, Yunhai

    2000-12-01

    In MPEG-4, Facial Definition Parameters (FDPs) and Facial Animation Parameters (FAPs) are defined to animate a facial object. Most previous facial animation reconstruction systems focused on synthesizing animation from manually or automatically generated FAPs, not from FAPs extracted from natural video scenes. In this paper, an analysis-synthesis MPEG-4 visual communication system is established in which facial animation is reconstructed from FAPs extracted from natural video scenes.

  10. Key features for ATA / ATR database design in missile systems

    NASA Astrophysics Data System (ADS)

    Özertem, Kemal Arda

    2017-05-01

    Automatic target acquisition (ATA) and automatic target recognition (ATR) are two vital tasks for missile systems, and having robust detection and recognition algorithms is crucial for overall system performance. A robust target detection and recognition algorithm requires an extensive image database. Automatic target recognition algorithms use the image database in the training and testing steps, which directly affects recognition performance, since training accuracy is driven by the quality of the image database. In addition, the performance of an automatic target detection algorithm can be measured effectively by using an image database. There are two main ways to design an ATA / ATR database. The first and easier way is to use a scene generator. A scene generator can model objects by considering material information, atmospheric conditions, detector type, and territory. Designing an image database with a scene generator is inexpensive, and it allows many different scenarios to be created quickly and easily. However, the major drawback of using a scene generator is its low fidelity, since the images are created virtually. The second and more difficult way is to design the database using real-world images. Designing an image database with real-world images is far more costly and time-consuming; however, it offers the high fidelity that is critical for missile algorithms. In this paper, critical concepts in ATA / ATR database design with real-world images are discussed. Each concept is discussed from the perspectives of ATA and ATR separately. For the implementation stage, some possible solutions and trade-offs for creating the database are proposed, and all proposed approaches are compared with regard to their pros and cons.

  11. Improving semantic scene understanding using prior information

    NASA Astrophysics Data System (ADS)

    Laddha, Ankit; Hebert, Martial

    2016-05-01

    Perception for ground robot mobility requires automatic generation of descriptions of the robot's surroundings from sensor input (cameras, LADARs, etc.). Effective techniques for scene understanding have been developed, but they are generally purely bottom-up in that they rely entirely on classifying features from the input data based on learned models. In fact, perception systems for ground robots have a lot of information at their disposal from knowledge about the domain and the task. For example, a robot in urban environments might have access to approximate maps that can guide the scene interpretation process. In this paper, we explore practical ways to combine such prior information with state of the art scene understanding approaches.

  12. Mobility aid for the blind

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A project to develop an effective mobility aid for blind pedestrians which acquires consecutive images of the scenes before a moving pedestrian, which locates and identifies the pedestrian's path and potential obstacles in the path, which presents path and obstacle information to the pedestrian, and which operates in real-time is discussed. The mobility aid has three principal components: an image acquisition system, an image interpretation system, and an information presentation system. The image acquisition system consists of a miniature, solid-state TV camera which transforms the scene before the blind pedestrian into an image which can be received by the image interpretation system. The image interpretation system is implemented on a microprocessor which has been programmed to execute real-time feature extraction and scene analysis algorithms for locating and identifying the pedestrian's path and potential obstacles. Identity and location information is presented to the pedestrian by means of tactile coding and machine-generated speech.

  13. Can IR scene projectors reduce total system cost?

    NASA Astrophysics Data System (ADS)

    Ginn, Robert; Solomon, Steven

    2006-05-01

    There is an incredible amount of system engineering involved in turning the typical infrared system needs of probability of detection, probability of identification, and probability of false alarm into focal plane array (FPA) requirements of noise equivalent irradiance (NEI), modulation transfer function (MTF), fixed pattern noise (FPN), and defective pixels. Unfortunately, there are no analytic solutions to this problem, so many approximations and plenty of "seat of the pants" engineering are employed. This leads to conservative specifications, which needlessly drive up system costs by increasing system engineering costs, reducing FPA yields, increasing test costs, increasing rework, and triggering never-ending renegotiation of requirements in an effort to rein in costs. These issues do not include the added complexity, for the FPA factory manager, of trying to meet varied and changing requirements for similar products because different customers have made different approximations and flowed down different specifications. Scene generation technology may well be mature and cost-effective enough to generate considerable overall savings for FPA-based systems. We will compare the costs and capabilities of various existing scene generation systems and estimate the potential savings if implemented at several locations in the IR system fabrication cycle. The costs of implementing this new testing methodology will be compared to the probable savings in systems engineering, test, rework, yield improvement, and others. The diverse requirements and techniques required for testing missile warning systems, missile seekers, and FLIRs will be defined. Last, we will discuss both the hardware and software requirements necessary to meet the new test paradigm and discuss additional cost improvements related to the incorporation of these technologies.

  14. Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms.

    PubMed

    Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori; Yamamoto, Kenji; Kurita, Taiichiro

    2012-09-10

    We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images that were captured by integral photography to 8K images. One of the methods increases the number of pixels of each elemental image. The other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of IP images and the generation of holograms from the IP images using fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs. The Fast Fourier transform is performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system for performing all processing from the capture to the reconstruction of 3D images by using these components and successfully used this system to reconstruct a 3D live scene at 12 frames per second.
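
    The FFT stage can be sketched with the CUFFT host API. The fragment below is a minimal single-GPU plan/execute sequence; the actual system distributes elemental-image FFTs across multiple GPUs and feeds real IP image data.

```cpp
// Minimal single-GPU CUFFT sketch of the hologram-generation FFT stage.
// Shows only the plan/execute/cleanup pattern; the device buffer is left
// uninitialized here, whereas a real pipeline uploads the IP image first.
#include <cufft.h>
#include <cuda_runtime.h>

int main()
{
    const int nx = 512, ny = 512;                  // one elemental-image tile
    cufftComplex* data = nullptr;
    cudaMalloc(&data, sizeof(cufftComplex) * nx * ny);

    cufftHandle plan;
    cufftPlan2d(&plan, nx, ny, CUFFT_C2C);         // 2-D complex-to-complex
    cufftExecC2C(plan, data, data, CUFFT_FORWARD); // in-place forward FFT
    cudaDeviceSynchronize();                       // wait for the GPU

    cufftDestroy(plan);
    cudaFree(data);
    return 0;
}
```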

  15. A HWIL test facility of infrared imaging laser radar using direct signal injection

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Lu, Wei; Wang, Chunhui; Wang, Qi

    2005-01-01

    Laser radar has been widely used in recent years, and hardware-in-the-loop (HWIL) testing of laser radar has become important because of its low cost and high fidelity compared with on-the-fly testing and all-digital simulation. Scene generation and projection are two key technologies in HWIL testing of laser radar, and they are complicated problems because the 3D images result from time delay. The scene generation process begins with the definition of target geometry, reflectivity, and range. The real-time 3D scene generation computer is PC-based hardware, and the 3D target models were built in 3dsMAX. The scene generation software was written in C and OpenGL and extracts the Z-buffer from the bit planes to main memory as a range image; these pixels contain each target's x, y, z position and its respective intensity and range values. Development of optical injection technologies for scene projection, such as LDP arrays, VCSEL arrays, and DMDs, with the associated scene generation, is ongoing, but optical scene projection is complicated and often unaffordable. In this paper a cheaper test facility is described that uses direct electronic injection to provide range images for laser radar testing. Electronic delay and pulse-shaping circuits inject the scenes directly into the seeker's signal processing unit.
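
    The Z-buffer extraction step can be sketched with standard OpenGL calls: read back the depth buffer and linearize the nonlinear window-space depth into metric range. This is the textbook perspective-depth inversion, not the facility's actual code, and it assumes a current GL context with known near/far planes.

```cpp
// Sketch: read back the OpenGL depth buffer and convert window-space
// depth to eye-space range, as in Z-buffer-based ladar range imagery.
#include <GL/gl.h>
#include <vector>

std::vector<float> read_range_image(int w, int h, float zNear, float zFar)
{
    std::vector<float> depth(static_cast<std::size_t>(w) * h);
    glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());

    for (float& z : depth)
    {
        const float zNdc = 2.0f * z - 1.0f;        // [0,1] -> [-1,1]
        z = 2.0f * zNear * zFar /                  // metric range
            (zFar + zNear - zNdc * (zFar - zNear));
    }
    return depth;                                   // one range per pixel
}
```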

  16. Fusion of monocular cues to detect man-made structures in aerial imagery

    NASA Technical Reports Server (NTRS)

    Shufelt, Jefferey; Mckeown, David M.

    1991-01-01

    The extraction of buildings from aerial imagery is a complex problem for automated computer vision. It requires locating regions in a scene that possess properties distinguishing them as man-made objects, as opposed to naturally occurring terrain features. It is reasonable to assume that no single detection method can correctly delineate or verify buildings in every scene. A cooperative-methods paradigm is therefore useful in approaching the building extraction problem. Under this paradigm, each extraction technique provides information that can be assimilated into an overall interpretation of the scene. Thus, the main objective is to explore the development of a computer vision system that integrates the results of various scene analysis techniques into an accurate and robust interpretation of the underlying three-dimensional scene. The problem of building hypothesis fusion in aerial imagery is discussed. Building extraction techniques are briefly surveyed, including four building extraction, verification, and clustering systems. A method for fusing the symbolic data generated by these systems is described and applied to monocular-image and stereo-image data sets. Evaluation methods for the fusion results are described, and the fusion results are analyzed using these methods.

  17. Computer-generated scenes depicting the HST capture and EVA repair mission

    NASA Image and Video Library

    1993-11-12

    Computer generated scenes depicting the Hubble Space Telescope capture and a sequence of events planned for the extravehicular activity (EVA). Scenes include the Remote Manipulator System (RMS) arm assisting two astronauts changing out the Wide Field/Planetary Camera (WF/PC) (48699); RMS arm assisting in the temporary mating of the orbiting telescope to the flight support system in Endeavour's cargo bay (48700); Endeavour's RMS arm assisting in the "capture" of the orbiting telescope (48701); Two astronauts changing out the telescope's coprocessor (48702); RMS arm assisting two astronauts replacing one of the telescope's electronic control units (48703); RMS assisting two astronauts replacing the fuse plugs on the telescope's Power Distribution Unit (PDU) (48704); The telescope's High Resolution Spectrograph (HRS) kit is depicted in this scene (48705); Two astronauts during the removal of the high speed photometer and the installation of the COSTAR instrument (48706); Two astronauts, standing on the RMS, during installation of one of the Magnetic Sensing Systems (MSS) (48707); High angle view of the orbiting Space Shuttle Endeavour with its cargo bay doors open, revealing the bay's pre-capture configuration. Seen are, from the left, the Solar Array Carrier, the ORU Carrier and the flight support system (48708); Two astronauts performing the replacement of HST's Rate Sensor Units (RSU) (48709); The RMS arm assisting two astronauts with the replacement of the telescope's solar array panels (48710); Two astronauts replacing the telescope's Solar Array Drive Electronics (SADE) (48711).

  18. Hierarchical, Three-Dimensional Measurement System for Crime Scene Scanning.

    PubMed

    Marcin, Adamczyk; Maciej, Sieniło; Robert, Sitnik; Adam, Woźniak

    2017-07-01

    We present a new generation of three-dimensional (3D) measuring systems, developed for the process of crime scene documentation. This measuring system facilitates the preparation of more insightful, complete, and objective documentation for crime scenes. Our system reflects the actual requirements for hierarchical documentation, and it consists of three independent 3D scanners: a laser scanner for overall measurements, a situational structured light scanner for more minute measurements, and a detailed structured light scanner for the most detailed parts of the scene. Each scanner has its own spatial resolution, of 2.0, 0.3, and 0.05 mm, respectively. The results of interviews we have conducted with technicians indicate that our developed 3D measuring system has significant potential to become a useful tool for forensic technicians. To ensure the maximum compatibility of our measuring system with the standards that regulate the documentation process, we have also performed a metrological validation and designated the maximum permissible length measurement error E_MPE for each structured light scanner. In this study, we present additional results regarding documentation processes conducted during crime scene inspections and a training session. © 2017 American Academy of Forensic Sciences.
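
    Length-measurement error limits of this kind are conventionally stated in the ISO 10360 form; the paper's actual constants are not quoted here, so the expression below shows only the customary shape of such a specification:

    $$E_{\text{MPE}} = \pm\Big(A + \frac{L}{K}\Big),$$

    with $A$ a constant error term, $L$ the measured length, and $K$ a dimensionless constant.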

  19. Pilot Task Profiles, Human Factors, And Image Realism

    NASA Astrophysics Data System (ADS)

    McCormick, Dennis

    1982-06-01

    Computer Image Generation (CIG) visual systems provide real-time scenes for state-of-the-art flight training simulators. The visual system requires a greater understanding of training tasks, human factors, and the concept of image realism to produce an effective and efficient training scene than is required by other types of visual systems. Image realism must be defined in terms of pilot visual information requirements. Human factors analysis of training and perception is necessary to determine the pilot's information requirements. System analysis then determines how the CIG and display device can best provide essential information to the pilot. This analysis procedure ensures optimum training effectiveness and system performance.

  20. Hierarchy-associated semantic-rule inference framework for classifying indoor scenes

    NASA Astrophysics Data System (ADS)

    Yu, Dan; Liu, Peng; Ye, Zhipeng; Tang, Xianglong; Zhao, Wei

    2016-03-01

    Typically, the initial task of classifying indoor scenes is challenging, because the spatial layout and decoration of a scene can vary considerably. Recent efforts at classifying object relationships commonly depend on the results of scene annotation and predefined rules, making classification inflexible. Furthermore, annotation results are easily affected by external factors. Inspired by human cognition, a scene-classification framework was proposed using the empirically based annotation (EBA) and a match-over rule-based (MRB) inference system. The semantic hierarchy of images is exploited by EBA to construct rules empirically for MRB classification. The problem of scene classification is divided into low-level annotation and high-level inference from a macro perspective. Low-level annotation involves detecting the semantic hierarchy and annotating the scene with a deformable-parts model and a bag-of-visual-words model. In high-level inference, hierarchical rules are extracted to train the decision tree for classification. The categories of testing samples are generated from the parts to the whole. Compared with traditional classification strategies, the proposed semantic hierarchy and corresponding rules reduce the effect of a variable background and improve the classification performance. The proposed framework was evaluated on a popular indoor scene dataset, and the experimental results demonstrate its effectiveness.

  1. Change Blindness Phenomena for Virtual Reality Display Systems.

    PubMed

    Steinicke, Frank; Bruder, Gerd; Hinrichs, Klaus; Willemsen, Pete

    2011-09-01

    In visual perception, change blindness describes the phenomenon that persons viewing a visual scene may apparently fail to detect significant changes in that scene. These phenomena have been observed in both computer-generated imagery and real-world scenes. Several studies have demonstrated that change blindness effects occur primarily during visual disruptions such as blinks or saccadic eye movements. However, until now the influence of stereoscopic vision on change blindness has not been studied thoroughly in the context of visual perception research. In this paper, we introduce change blindness techniques for stereoscopic virtual reality (VR) systems, providing the ability to substantially modify a virtual scene in a manner that is difficult for observers to perceive. We evaluate techniques for semi-immersive VR systems, i.e., passive and active stereoscopic projection systems, as well as an immersive VR system, i.e., a head-mounted display, and compare the results to those of monoscopic viewing conditions. For stereoscopic viewing conditions, we found that change blindness phenomena occur with the same magnitude as in monoscopic viewing conditions. Furthermore, we have evaluated the potential of the presented techniques for allowing abrupt, and yet significant, changes of a stereoscopically displayed virtual reality environment.

  2. The new generation of OpenGL support in ROOT

    NASA Astrophysics Data System (ADS)

    Tadel, M.

    2008-07-01

    OpenGL has been promoted to become the main 3D rendering engine of the ROOT framework. This required a major re-modularization of OpenGL support on all levels, from basic window-system specific interface to medium-level object-representation and top-level scene management. This new architecture allows seamless integration of external scene-graph libraries into the ROOT OpenGL viewer as well as inclusion of ROOT 3D scenes into external GUI and OpenGL-based 3D-rendering frameworks. Scene representation was removed from inside of the viewer, allowing scene-data to be shared among several viewers and providing for a natural implementation of multi-view canvas layouts. The object-graph traversal infrastructure allows free mixing of 3D and 2D-pad graphics and makes implementation of ROOT canvas in pure OpenGL possible. Scene-elements representing ROOT objects trigger automatic instantiation of user-provided rendering-objects based on the dictionary information and class-naming convention. Additionally, a finer, per-object control over scene-updates is available to the user, allowing overhead-free maintenance of dynamic 3D scenes and creation of complex real-time animations. User-input handling was modularized as well, making it easy to support application-specific scene navigation, selection handling and tool management.

  3. Projection technologies for imaging sensor calibration, characterization, and HWIL testing at AEDC

    NASA Astrophysics Data System (ADS)

    Lowry, H. S.; Breeden, M. F.; Crider, D. H.; Steely, S. L.; Nicholson, R. A.; Labello, J. M.

    2010-04-01

    The characterization, calibration, and mission simulation testing of imaging sensors require continual involvement in the development and evaluation of radiometric projection technologies. Arnold Engineering Development Center (AEDC) uses these technologies to perform hardware-in-the-loop (HWIL) testing with high-fidelity complex scene projection technologies that involve sophisticated radiometric source calibration systems to validate sensor mission performance. Testing with the National Institute of Standards and Technology (NIST) Ballistic Missile Defense Organization (BMDO) transfer radiometer (BXR) and Missile Defense Agency (MDA) transfer radiometer (MDXR) offers improved radiometric and temporal fidelity in this cold-background environment. The development of hardware and test methodologies to accommodate wide field of view (WFOV), polarimetric, and multi/hyperspectral imaging systems is being pursued to support a variety of program needs such as space situational awareness (SSA). Test techniques for the acquisition of data needed for scene generation models (solar/lunar exclusion, radiation effects, etc.) are also needed and are being sought. The extension of HWIL testing to the 7V Chamber requires the upgrade of the current satellite emulation scene generation system. This paper provides an overview of pertinent technologies being investigated and implemented at AEDC.

  4. JView Visualization for Next Generation Air Transportation System

    DTIC Science & Technology

    2011-01-01

    hardware graphics acceleration. JView relies on concrete Object Oriented Design (OOD) and programming techniques to provide a robust and venue non...visibility priority of a texture set. A good example of this is when you have translucent images that should always be visible over the other textures...elements present in the scene. • Capture Alpha. Allows the alpha color channel (translucency) to be saved when capturing images or movies of a 3D scene

  5. High-fidelity real-time maritime scene rendering

    NASA Astrophysics Data System (ADS)

    Shyu, Hawjye; Taczak, Thomas M.; Cox, Kevin; Gover, Robert; Maraviglia, Carlos; Cahill, Colin

    2011-06-01

    The ability to simulate authentic engagements using real-world hardware is increasingly important. For rendering maritime environments, scene generators must be capable of rendering radiometrically accurate scenes with correct temporal and spatial characteristics. When the simulation is used as input to real-world hardware or human observers, the scene generator must operate in real-time. This paper introduces a novel, real-time scene generation capability for rendering radiometrically accurate scenes of backgrounds and targets in maritime environments. The new model is an optimized and parallelized version of the US Navy CRUISE_Missiles rendering engine. It was designed to accept environmental descriptions and engagement geometry data from external sources, render a scene, transform the radiometric scene using the electro-optical response functions of a sensor under test, and output the resulting signal to real-world hardware. This paper reviews components of the scene rendering algorithm, and details the modifications required to run this code in real-time. A description of the simulation architecture and interfaces to external hardware and models is presented. Performance assessments of the frame rate and radiometric accuracy of the new code are summarized. This work was completed in FY10 under Office of Secretary of Defense (OSD) Central Test and Evaluation Investment Program (CTEIP) funding and will undergo a validation process in FY11.
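
    A minimal sketch of the kind of sensor-response transform described, integrating a radiance spectrum against a relative spectral response to obtain the in-band signal (the MWIR spectrum, response curve, and units below are toy values, not the model's):

        import numpy as np

        wavelengths = np.linspace(3.0, 5.0, 201)                 # um, MWIR band
        radiance = 1.0e-3 * np.exp(-(wavelengths - 4.0) ** 2)    # toy spectral radiance
        response = np.clip(1.0 - np.abs(wavelengths - 4.0), 0.0, 1.0)  # toy EO response

        # In-band signal seen by the sensor under test.
        band_signal = np.trapz(radiance * response, wavelengths)
        print(f"in-band signal: {band_signal:.3e}")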

  6. Generation and physical characteristics of the ERTS MSS system corrected computer compatible tapes

    NASA Technical Reports Server (NTRS)

    Thomas, V. L.

    1973-01-01

    The generation and format of the ERTS system-corrected multispectral scanner computer-compatible tapes are discussed. The discussion includes spacecraft sensors, scene characteristics, data transmission, and conversion of data to computer compatible tapes at the NASA Data Processing Facility. Geometric and radiometric corrections, tape formats, and the physical characteristics of the tapes are also included.

  7. On validating remote sensing simulations using coincident real data

    NASA Astrophysics Data System (ADS)

    Wang, Mingming; Yao, Wei; Brown, Scott; Goodenough, Adam; van Aardt, Jan

    2016-05-01

    The remote sensing community often requires data simulation, either via spectral/spatial downsampling or through virtual, physics-based models, to assess systems and algorithms. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is one such first-principles, physics-based model for simulating imagery for a range of modalities. Complex simulation of vegetation environments has become possible as scene rendering technology and software have advanced. This in turn has created questions related to the validity of such complex models, with potential multiple scattering, bidirectional reflectance distribution function (BRDF), and similar phenomena that could impact results in the case of complex vegetation scenes. We selected three sites, located in the Pacific Southwest domain (Fresno, CA) of the National Ecological Observatory Network (NEON). These sites represent oak savanna, hardwood forests, and conifer-manzanita-mixed forests. We constructed corresponding virtual scenes, using airborne LiDAR and imaging spectroscopy data from NEON, ground-based LiDAR data, and field-collected spectra to characterize the scenes. Imaging spectroscopy data for these virtual sites then were generated using the DIRSIG simulation environment. This simulated imagery was compared to real AVIRIS imagery (15m spatial resolution; 12 pixels/scene) and NEON Airborne Observation Platform (AOP) data (1m spatial resolution; 180 pixels/scene). These tests were performed using a distribution-comparison approach for select spectral statistics (e.g., statistics establishing the spectra's shape) for each simulated-versus-real distribution pair. The initial comparison results of the spectral distributions indicated that the shapes of spectra between the virtual and real sites were closely matched.
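
    The paper does not name the exact statistical test, so purely as an illustration, a two-sample Kolmogorov-Smirnov test is one common way to compare a per-pixel spectral statistic between the simulated and real distributions:

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(0)
        # Stand-ins for a per-pixel spectral statistic (e.g., a band ratio)
        # over the simulated and the real image of the same site.
        stat_simulated = rng.normal(0.62, 0.05, 180)   # e.g., 180 AOP pixels/scene
        stat_real = rng.normal(0.60, 0.06, 180)

        d, p = ks_2samp(stat_simulated, stat_real)
        print(f"KS distance={d:.3f}, p={p:.3f}")  # small d => closely matched shapes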

  8. Comparison of algorithms for blood stain detection applied to forensic hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Messinger, David W.; Mathew, Jobin J.; Dube, Roger R.

    2016-05-01

    Blood stains are among the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Early detection of blood stains is particularly important since the blood reacts physically and chemically with air and materials over time. Accurate identification of blood remnants, including regions that might have been intentionally cleaned, is an important aspect of forensic investigation. Hyperspectral imaging might be a potential method to detect blood stains because it is non-contact and provides substantial spectral information that can be used to identify regions in a scene with trace amounts of blood. The potential complexity of such scenes is high when one considers the range of scene material types and conditions in which blood stains may occur at a crime scene. Some stains are hard to detect by the unaided eye, especially if a conscious effort to clean the scene has occurred (we refer to these as "latent" blood stains). In this paper we present the initial results of a study of the use of hyperspectral imaging algorithms for blood detection in complex scenes. We describe a hyperspectral imaging system which generates images covering the 400-700 nm visible range with a spectral resolution of 10 nm. Three image sets of 31 wavelength bands were generated using this camera for a simulated indoor crime scene in which blood stains were placed on a T-shirt and walls. To detect blood stains in the scene, Principal Component Analysis (PCA), Subspace Reed Xiaoli Detection (SRXD), and Topological Anomaly Detection (TAD) algorithms were used. Comparison of the three hyperspectral image analysis techniques shows that TAD is most suitable for detecting blood stains and discovering latent blood stains.
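
    For flavor, the sketch below implements the classic global RX anomaly detector, a close relative of the SRXD variant named above (the data cube is synthetic; this is not the study's code):

        import numpy as np

        def rx_scores(cube):
            """Global RX: Mahalanobis distance of each pixel spectrum from
            the scene background mean/covariance; high scores are anomalies."""
            h, w, b = cube.shape
            pixels = cube.reshape(-1, b).astype(float)
            centered = pixels - pixels.mean(axis=0)
            cov_inv = np.linalg.pinv(np.cov(pixels, rowvar=False))
            scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
            return scores.reshape(h, w)

        # Synthetic 31-band VNIR cube (400-700 nm at 10 nm, as in the system above).
        anomaly_map = rx_scores(np.random.rand(64, 64, 31))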

  9. Estimating pixel variances in the scenes of staring sensors

    DOEpatents

    Simonson, Katherine M [Cedar Crest, NM; Ma, Tian J [Albuquerque, NM

    2012-01-24

    A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
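
    A minimal sketch of the idea, assuming the simplest possible spatial error model (the function, threshold rule, and parameter names are ours, not the patent's claims):

        import numpy as np

        def change_mask(reference, current, jitter_px=1.0, k=3.0):
            """Flag pixels whose frame difference exceeds what sub-pixel
            jitter along the local intensity gradient could explain."""
            ref = reference.astype(float)
            diff = current.astype(float) - ref
            gy, gx = np.gradient(ref)
            # Jitter of ~jitter_px pixels can shift intensity by about
            # |gradient| * jitter_px, giving a per-pixel error estimate.
            spatial_err = jitter_px * np.hypot(gx, gy)
            return np.abs(diff) > k * (spatial_err + 1e-6)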

  10. Visual supports for shared reading with young children: the effect of static overlay design.

    PubMed

    Wood Jackson, Carla; Wahlquist, Jordan; Marquis, Cassandra

    2011-06-01

    This study examined the effects of two types of static overlay design (visual scene display and grid display) on 39 children's use of a speech-generating device during shared storybook reading with an adult. This pilot project included two groups: preschool children with typical communication skills (n = 26) and with complex communication needs (n = 13). All participants engaged in shared reading with two books using each visual layout on a speech-generating device (SGD). The children averaged a greater number of activations when presented with a grid display during introductory exploration and free play. There was a large effect of the static overlay design on the number of silent hits, evidencing more silent hits with visual scene displays. On average, the children demonstrated relatively few spontaneous activations of the speech-generating device while the adult was reading, regardless of overlay design. When responding to questions, children with communication needs appeared to perform better when using visual scene displays, but the effect of display condition on the accuracy of responses to wh-questions was not statistically significant. In response to an open-ended question, children with communication disorders demonstrated more frequent activations of the SGD using a grid display than a visual scene. Suggestions for future research as well as potential implications for designing AAC systems for shared reading with young children are discussed.

  11. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    NASA Astrophysics Data System (ADS)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.
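
    Where GNSS data are unavailable, relative poses can be recovered from image correspondences alone. The sketch below shows a generic essential-matrix approach in OpenCV, assuming perspective crops of the panoramas, pre-matched keypoints pts1/pts2, and a known intrinsic matrix K; it is not necessarily the authors' algorithm:

        import cv2
        import numpy as np

        def relative_pose(pts1, pts2, K):
            """Estimate relative camera rotation and (unit-norm) translation
            from matched 2D points in two views."""
            E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                           method=cv2.RANSAC, threshold=1.0)
            _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
            return R, t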

  12. New technologies for HWIL testing of WFOV, large-format FPA sensor systems

    NASA Astrophysics Data System (ADS)

    Fink, Christopher

    2016-05-01

    Advancements in FPA density and associated wide-field-of-view infrared sensors (>=4000x4000 detectors) have outpaced the current-art HWIL technology. Whether testing in optical projection or digital signal injection modes, current-art technologies for infrared scene projection, digital injection interfaces, and scene generation systems simply lack the required resolution and bandwidth. For example, the L3 Cincinnati Electronics ultra-high resolution MWIR Camera deployed in some UAV reconnaissance systems features 16MP resolution at 60Hz, while the current upper limit of IR emitter arrays is ~1MP, and single-channel dual-link DVI throughput of COTS graphics cards is limited to 2560x1600 pixels at 60Hz. Moreover, there are significant challenges in real-time, closed-loop, physics-based IR scene generation for large format FPAs, including the size and spatial detail required for very large area terrains, and multi-channel low-latency synchronization to achieve the required bandwidth. In this paper, the authors present some of their ongoing research and technical approaches toward HWIL testing of large-format FPAs with wide-FOV optics. One approach presented is a hybrid projection/injection design, where digital signal injection is used to augment the resolution of current-art IRSPs, utilizing a multi-channel, high-fidelity physics-based IR scene simulator in conjunction with a novel image composition hardware unit, to allow projection in the foveal region of the sensor, while non-foveal regions of the sensor array are simultaneously stimulated via direct injection into the post-detector electronics.

  13. Automatic Generation of High Quality DSM Based on IRS-P5 Cartosat-1 Stereo Data

    NASA Astrophysics Data System (ADS)

    d'Angelo, Pablo; Uttenthaler, Andreas; Carl, Sebastian; Barner, Frithjof; Reinartz, Peter

    2010-12-01

    IRS-P5 Cartosat-1 high resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on IRS-P5 Cartosat-1 imagery is presented, with an emphasis on automated processing and product quality. The proposed system processes IRS-P5 level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The described method uses an RPC correction based on DSM alignment instead of using reference images with a lower lateral accuracy; this results in improved geolocation of the DSMs and orthoimages. Following RPC correction, highly detailed DSMs with 5 m grid spacing are derived using Semiglobal Matching. The proposed method is part of an operational Cartosat-1 processor for the generation of a high resolution DSM. Evaluation of 18 scenes against independent ground truth measurements indicates a mean lateral error (CE90) of 6.7 meters and a mean vertical accuracy (LE90) of 5.1 meters.
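
    For reference, CE90 and LE90 are 90th-percentile horizontal and vertical errors; a minimal sketch of their computation from per-checkpoint residuals (function name ours):

        import numpy as np

        def ce90_le90(dx, dy, dz):
            """90th-percentile circular (horizontal) and linear (vertical)
            errors from checkpoint residuals, in the input units (meters)."""
            horizontal = np.hypot(np.asarray(dx), np.asarray(dy))
            return np.percentile(horizontal, 90), np.percentile(np.abs(dz), 90)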

  14. Fly-through viewpoint video system for multi-view soccer movie using viewpoint interpolation

    NASA Astrophysics Data System (ADS)

    Inamoto, Naho; Saito, Hideo

    2003-06-01

    This paper presents a novel method for virtual view generation that allows viewers to fly through a real soccer scene. A soccer match is captured by multiple cameras at a stadium and images of arbitrary viewpoints are synthesized by view-interpolation of two real camera images near the given viewpoint. In the proposed method, cameras do not need to be strongly calibrated; the epipolar geometry between the cameras is sufficient for the view-interpolation. Therefore, it can easily be applied to a dynamic event even in a large space, because the effort of camera calibration can be reduced. A soccer scene is classified into several regions and virtual view images are generated based on the epipolar geometry in each region. Superimposition of the images completes virtual views for the whole soccer scene. An application for fly-through observation of a soccer match is introduced, along with the view-synthesis algorithm and experimental results.

  15. Computer 3D site model generation based on aerial images

    NASA Astrophysics Data System (ADS)

    Zheltov, Sergey Y.; Blokhinov, Yuri B.; Stepanov, Alexander A.; Skryabin, Sergei V.; Sibiriakov, Alexandre V.

    1997-07-01

    The technology for 3D model design of real-world scenes and its photorealistic rendering are current topics of investigation. Such technology is attractive for a vast variety of applications: military mission planning, crew training, civil engineering, architecture, and virtual reality entertainment, to mention just a few. 3D photorealistic models of urban areas are often discussed now as an upgrade from existing 2D geographic information systems. The possibility of site model generation with small details depends on two main factors: the available source dataset and computer power resources. In this paper a PC-based technology is presented, so that scenes of middle resolution (scale of 1:1000) can be constructed. The source datasets are gray-level aerial stereo pairs of photographs (scale of 1:14000) and true-color on-ground photographs of buildings (scale ca. 1:1000). True-color terrestrial photographs are also necessary for photorealistic rendering, which greatly improves human perception of the scene.

  16. Maximizing Trust in the Wireless Emergency Alerts (WEA) Service

    DTIC Science & Technology

    2014-02-01

    Homeland Security under Contract No. FA8721-05-C-0003 with Carnegie Mellon University for the operation of the Software Engineering Institute, a...AOs will protect their alert-generating systems from misuse. A compromised alert-generating system could overload the IPAWS-OPEN message validation...greater accessibility, such as accessing the WEA service remotely from the scene of an incident. Although we are currently unaware of any alerting

  17. Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method.

    PubMed

    Matsushima, Kyoji; Nakahara, Sumio

    2009-12-01

    A large-scale full-parallax computer-generated hologram (CGH) with four billion (2^16 x 2^16) pixels is created to reconstruct a fine true 3D image of a scene, with occlusions. The polygon-based method numerically generates the object field of a surface object, whose shape is provided by a set of vertex data of polygonal facets, while the silhouette method makes it possible to reconstruct the occluded scene. A novel technique using the segmented frame buffer is presented for handling and propagating large wave fields even in the case where the whole wave field cannot be stored in memory. We demonstrate that the full-parallax CGH, calculated by the proposed method and fabricated by a laser lithography system, reconstructs a fine 3D image accompanied by a strong sensation of depth.

  18. Irdis: A Digital Scene Storage And Processing System For Hardware-In-The-Loop Missile Testing

    NASA Astrophysics Data System (ADS)

    Sedlar, Michael F.; Griffith, Jerry A.

    1988-07-01

    This paper describes the implementation of a Seeker Evaluation and Test Simulation (SETS) Facility at Eglin Air Force Base. This facility will be used to evaluate imaging infrared (IIR) guided weapon systems by performing various types of laboratory tests. One such test is termed Hardware-in-the-Loop (HIL) simulation (Figure 1), in which the actual flight of a weapon system is simulated as closely as possible in the laboratory. As shown in the figure, there are four major elements in the HIL test environment: the weapon/sensor combination, an aerodynamic simulator, an imagery controller, and an infrared imagery system. The paper concentrates on the approaches and methodologies used in the imagery controller and infrared imaging system elements for generating scene information. For procurement purposes, these two elements have been combined into an Infrared Digital Injection System (IRDIS) which provides scene storage, processing, and output interface to drive a radiometric display device or to directly inject digital video into the weapon system (bypassing the sensor). The paper describes in detail how standard and custom image processing functions have been combined with off-the-shelf mass storage and computing devices to produce a system which provides high sample rates (greater than 90 Hz), a large terrain database, high weapon rates of change, and multiple independent targets. A photo-based approach has been used to maximize terrain and target fidelity, thus providing a rich and complex scene for weapon/tracker evaluation.

  19. Enhancement of Stereo Imagery by Artificial Texture Projection Generated Using a LIDAR

    NASA Astrophysics Data System (ADS)

    Veitch-Michaelis, Joshua; Muller, Jan-Peter; Walton, David; Storey, Jonathan; Foster, Michael; Crutchley, Benjamin

    2016-06-01

    Passive stereo imaging is capable of producing dense 3D data, but image matching algorithms generally perform poorly on images with large regions of homogeneous texture due to ambiguous match costs. Stereo systems can be augmented with an additional light source that can project some form of unique texture onto surfaces in the scene. Methods include structured light, laser projection through diffractive optical elements, data projectors, and laser speckle. Pattern projection using lasers has the advantage of producing images with a high signal-to-noise ratio. We have investigated the use of a scanning visible-beam LIDAR to simultaneously provide enhanced texture within the scene and to provide additional opportunities for data fusion in unmatched regions. The use of a LIDAR rather than a laser alone allows us to generate highly accurate ground truth data sets by scanning the scene at high resolution. This is necessary for evaluating different pattern projection schemes. Results from LIDAR generated random dots are presented and compared to other texture projection techniques. Finally, we investigate the use of image texture analysis to intelligently project texture where it is required while exploiting the texture available in the ambient light image.
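
    A sketch of the passive matching side using OpenCV's semi-global matcher on a rectified pair; random texture stands in for the projected LIDAR dot pattern, and all parameters are illustrative:

        import cv2
        import numpy as np

        rng = np.random.default_rng(0)
        left = rng.integers(0, 255, (240, 320), dtype=np.uint8)  # textured left image
        right = np.roll(left, -8, axis=1)                        # ~8 px disparity

        matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                        blockSize=5, P1=8 * 25, P2=32 * 25)
        disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # to pixels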

  20. LivePhantom: Retrieving Virtual World Light Data to Real Environments.

    PubMed

    Kolivand, Hoshang; Billinghurst, Mark; Sunar, Mohd Shahrizal

    2016-01-01

    To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map for the physical scene mixing into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown and the findings are assessed drawing upon qualitative and quantitative methods making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.

  2. A Low-Signal-to-Noise-Ratio Sensor Framework Incorporating Improved Nighttime Capabilities in DIRSIG

    NASA Astrophysics Data System (ADS)

    Rizzuto, Anthony P.

    When designing new remote sensing systems, it is difficult to make apples-to-apples comparisons between designs because of the number of sensor parameters that can affect the final image. Using synthetic imagery and a computer sensor model allows for comparisons to be made between widely different sensor designs or between competing design parameters. Little work has been done in fully modeling low-SNR systems end-to-end for these types of comparisons. Currently DIRSIG has limited capability to accurately model nighttime scenes under new moon conditions or near large cities. An improved DIRSIG scene modeling capability is presented that incorporates all significant sources of nighttime radiance, including new models for urban glow and airglow, both taken from the astronomy community. A low-SNR sensor modeling tool is also presented that accounts for sensor components and noise sources to generate synthetic imagery from a DIRSIG scene. The various sensor parameters that affect SNR are discussed, and example imagery is shown with the new sensor modeling tool. New low-SNR detectors have recently been designed and marketed for remote sensing applications. A comparison of system parameters for a state-of-the-art low-SNR sensor is discussed, and a sample design trade study is presented for a hypothetical scene and sensor.
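
    The SNR-driving sensor parameters combine in the usual shot-noise-limited form, SNR = S / sqrt(S + B + D + Nr^2) in electrons; a minimal sketch with all counts hypothetical:

        import numpy as np

        def detector_snr(signal_e, background_e, dark_e, read_noise_e):
            """Poisson terms add in variance; read noise adds in quadrature."""
            return signal_e / np.sqrt(signal_e + background_e + dark_e
                                      + read_noise_e ** 2)

        # Hypothetical nighttime case: faint target over airglow-dominated sky.
        print(detector_snr(150.0, 900.0, 50.0, 5.0))  # ~4.5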

  3. Generative Learning during Visual Search for Scene Changes: Enhancing Free Recall of Individuals with and without Mental Retardation

    ERIC Educational Resources Information Center

    Carlin, Michael T.; Soraci, Sal A.; Strawbridge, Christina P.

    2005-01-01

    Memory for scene changes that were identified immediately (passive encoding) or following systematic and effortful search (generative encoding) was compared across groups differing in age and intelligence. In the context of flicker methodology, generative search for the changing object involved selection and rejection of multiple potential…

  4. Concurrent-scene/alternate-pattern analysis for robust video-based docking systems

    NASA Technical Reports Server (NTRS)

    Udomkesmalee, Suraphol

    1991-01-01

    A typical docking target employs a three-point design of retroreflective tape, one at each endpoint of the center-line, and one on the tip of the central post. Scenes, sensed via laser diode illumination, produce pictures with spots corresponding to desired reflections from the retroreflectors and other reflections. Control corrections for each axis of the vehicle can then be properly applied if the desired spots are accurately tracked. However, initial acquisition of these three spots (the detection and identification problem) is non-trivial in a severe noise environment. Signal-to-noise enhancement, accomplished by subtracting the non-illuminated scene from the target scene illuminated by laser diodes, cannot eliminate every false spot. Hence, minimizing docking failures due to target mistracking suggests including additional processing features pertaining to target locations. In this paper, we present a concurrent processing scheme for a modified docking target scene which could lead to a perfect docking system. Since the non-illuminated target scene is already available, adding another feature to the three-point design by marking two non-reflective lines, one between the two end-points and one from the tip of the central post to the center-line, would allow this line feature to be picked up only when capturing the background scene (sensor data without laser illumination). Therefore, instead of performing the image subtraction to generate a picture with a high signal-to-noise ratio, a processed line-image based on a robust line detection technique (the Hough transform) can be fused with the actively sensed three-point target image to deduce the true locations of the docking target. This dual-channel confirmation scheme is necessary if a fail-safe system is to be realized from both the sensing and processing points of view. Detailed algorithms and preliminary results are presented.
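
    A sketch of the line channel using OpenCV's probabilistic Hough transform on a synthetic background frame (the marker geometry and parameters are illustrative, not the paper's):

        import cv2
        import numpy as np

        background = np.zeros((200, 200), dtype=np.uint8)    # synthetic stand-in frame
        cv2.line(background, (20, 100), (180, 100), 255, 2)  # center-line marker
        cv2.line(background, (100, 100), (100, 30), 255, 2)  # post-tip marker

        edges = cv2.Canny(background, 50, 150)
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                                minLineLength=30, maxLineGap=5)
        # Each entry is (x1, y1, x2, y2); intersecting these segments with the
        # candidate bright spots confirms the true three-point docking target.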

  5. Programmable personality interface for the dynamic infrared scene generator (IRSG2)

    NASA Astrophysics Data System (ADS)

    Buford, James A., Jr.; Mobley, Scott B.; Mayhall, Anthony J.; Braselton, William J.

    1998-07-01

    As scene generator platforms begin to rely specifically on commercial off-the-shelf (COTS) hardware and software components, high-speed programmable personality interfaces (PPIs) are required for interfacing to infrared (IR) flight computers/processors and complex IR projectors in hardware-in-the-loop (HWIL) simulation facilities. Recent technological advances and innovative applications of established technologies are beginning to allow development of cost-effective PPIs to interface to COTS scene generators. At the U.S. Army Aviation and Missile Command (AMCOM) Missile Research, Development, and Engineering Center (MRDEC), researchers have developed such a PPI to reside between the AMCOM MRDEC IR Scene Generator (IRSG) and either a missile flight computer or the dynamic Laser Diode Array Projector (LDAP). AMCOM MRDEC has developed several PPIs for the first and second generation IRSGs (IRSG1 and IRSG2), which are based on Silicon Graphics Incorporated (SGI) Onyx and Onyx2 computers with Reality Engine 2 (RE2) and Infinite Reality (IR/IR2) graphics engines. This paper provides an overview of PPIs designed, integrated, tested, and verified at AMCOM MRDEC, specifically the IRSG2's PPI.

  6. Atmosphere-based image classification through luminance and hue

    NASA Astrophysics Data System (ADS)

    Xu, Feng; Zhang, Yujin

    2005-07-01

    In this paper a novel image classification system is proposed. Atmosphere plays an important role in generating a scene's topic or in conveying the message behind a scene's story; it belongs to the abstract attribute level among semantic levels. First, five atmosphere semantic categories are defined according to the rules of photo and film grammar, followed by global luminance and hue features. Then hierarchical SVM classifiers are applied. In each classification stage, the corresponding features are extracted and a trained linear SVM is implemented, resulting in two classes. After three stages of classification, five atmosphere categories are obtained. Finally, text annotation of the atmosphere semantics and the corresponding features is defined by Extensible Markup Language (XML) in MPEG-7, which can be integrated into further multimedia applications (such as searching, indexing, and accessing of multimedia content). The experiment is performed on Corel images and film frames. The classification results prove the effectiveness of the definition of atmosphere semantic classes and the corresponding features.
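
    A toy sketch of one feature-plus-linear-SVM stage; the exact luminance and hue features and the stage labels below are our assumptions, not the paper's definitions, and the training images are random stand-ins:

        import cv2
        import numpy as np
        from sklearn.svm import LinearSVC

        def luminance_hue_features(bgr):
            """Mean luminance plus a normalized 16-bin hue histogram."""
            hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
            hue_hist = cv2.calcHist([hsv], [0], None, [16], [0, 180]).ravel()
            hue_hist /= hue_hist.sum() + 1e-9
            return np.concatenate([[hsv[..., 2].mean() / 255.0], hue_hist])

        rng = np.random.default_rng(1)
        images = [rng.integers(0, 255, (32, 32, 3), dtype=np.uint8) for _ in range(8)]
        labels = [0, 0, 0, 0, 1, 1, 1, 1]  # first binary split of the hierarchy
        stage1 = LinearSVC().fit([luminance_hue_features(im) for im in images], labels)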

  7. Generation, recognition, and consistent fusion of partial boundary representations from range images

    NASA Astrophysics Data System (ADS)

    Kohlhepp, Peter; Hanczak, Andrzej M.; Li, Gang

    1994-10-01

    This paper presents SOMBRERO, a new system for recognizing and locating 3D, rigid, non-moving objects from range data. The objects may be polyhedral or curved, partially occluding, touching, or lying flush with each other. For data collection, we employ 2D time-of-flight laser scanners mounted on a moving gantry robot. By combining sensor and robot coordinates, we obtain 3D Cartesian coordinates. Boundary representations (B-reps) provide view-independent geometry models that are both efficiently recognizable and derivable automatically from sensor data. SOMBRERO's methods for generating, matching, and fusing B-reps are highly synergetic. A split-and-merge segmentation algorithm with dynamic triangulation builds a partial (2.5D) B-rep from scattered data. The recognition module matches this scene description with a model database and outputs recognized objects, their positions and orientations, and possibly surfaces corresponding to unknown objects. We present preliminary results in scene segmentation and recognition. Partial B-reps corresponding to different range sensors or viewpoints can be merged into a consistent, complete, and irredundant 3D object or scene model. This fusion algorithm itself uses the recognition and segmentation methods.

  8. Three-dimensional scene encryption and display based on computer-generated holograms.

    PubMed

    Kong, Dezhao; Cao, Liangcai; Jin, Guofan; Javidi, Bahram

    2016-10-10

    An optical encryption and display method for a three-dimensional (3D) scene is proposed based on computer-generated holograms (CGHs) using a single phase-only spatial light modulator. The 3D scene is encoded as one complex Fourier CGH. The Fourier CGH is then decomposed into two phase-only CGHs with random distributions by the vector stochastic decomposition algorithm. The two CGHs are interleaved as one final phase-only CGH for optical encryption and reconstruction. The proposed method can support high-level nonlinear optical 3D scene security and complex amplitude modulation of the optical field. The exclusive phase key offers strong resistance to decryption attacks. Experimental results demonstrate the validity of the novel method.
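
    The paper's vector stochastic decomposition is not reproduced here, but the classic double-phase identity A*exp(i*phi) = 0.5*exp(i*(phi+d)) + 0.5*exp(i*(phi-d)), with d = arccos(A) and A normalized to [0, 1], gives the flavor of splitting one complex CGH into two phase-only CGHs:

        import numpy as np

        def double_phase(field):
            """Split a complex field into two phase-only patterns whose
            sum reproduces the field (up to a global scale factor)."""
            amp = np.abs(field)
            amp = amp / (amp.max() + 1e-12)     # normalize amplitude to [0, 1]
            phi, d = np.angle(field), np.arccos(amp)
            return phi + d, phi - d

        theta1, theta2 = double_phase(np.fft.fft2(np.random.rand(256, 256)))
        recon = 0.5 * np.exp(1j * theta1) + 0.5 * np.exp(1j * theta2)  # ~ field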

  9. Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones.

    PubMed

    Sohn, Bong-Soo

    2017-03-11

    This paper describes a new method to automatically generate digital bas-reliefs with depth-of-field effects from general scenes. Most previous methods for bas-relief generation take input in the form of 3D models. However, obtaining 3D models of real scenes or objects is often difficult, inaccurate, and time-consuming. From this motivation, we developed a method that takes as input a set of photographs that can be quickly and ubiquitously captured by ordinary smartphone cameras. A depth map is computed from the input photographs. The value range of the depth map is compressed and used as a base map representing the overall shape of the bas-relief. However, the resulting base map contains little information on details of the scene. Thus, we construct a detail map using pixel values of the input image to express the details. The base and detail maps are blended to generate a new depth map that reflects both overall depth and scene detail information. This map is selectively blurred to simulate the depth-of-field effects. The final depth map is converted to a bas-relief surface mesh. Experimental results show that our method generates a realistic bas-relief surface of general scenes with no expensive manual processing.
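
    A compact sketch of the described pipeline, with one simplification: a single global blur stands in for the paper's selective, per-region depth-of-field blurring (function and parameter names are ours):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def bas_relief_depth(depth, gray, compression=0.2, detail_gain=0.05,
                             focus_depth=0.5, dof_strength=8.0):
            """Compress depth into a base map, add image-intensity detail,
            then blur according to distance from the focal depth."""
            base = compression * (depth - depth.min()) / (np.ptp(depth) + 1e-9)
            detail = detail_gain * (gray - gray.mean())
            sigma = dof_strength * abs(float(np.median(base)) - focus_depth)
            return gaussian_filter(base + detail, sigma=sigma)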

  11. 3D Traffic Scene Understanding From Movable Platforms.

    PubMed

    Geiger, Andreas; Lauer, Martin; Wojek, Christian; Stiller, Christoph; Urtasun, Raquel

    2014-05-01

    In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms, which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry, and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar, or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow, and occupancy grids. For each of these cues, we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, experiments using different feature combinations are conducted. Furthermore, we show how by employing context derived from the proposed method we are able to improve over the state-of-the-art in terms of object detection and object orientation estimation in challenging and cluttered urban environments.

  12. New scene change control scheme based on pseudoskipped picture

    NASA Astrophysics Data System (ADS)

    Lee, Youngsun; Lee, Jinwhan; Chang, Hyunsik; Nam, Jae Y.

    1997-01-01

    A new scene change control scheme which improves the video coding performance for sequences that have many scene-changed pictures is proposed in this paper. Scene-changed pictures, except intra-coded pictures, usually need more bits than normal pictures in order to maintain constant picture quality. The major idea of this paper is how to obtain the extra bits needed to encode scene-changed pictures. We encode the B picture located before a scene-changed picture like a skipped picture; we call such a B picture a pseudo-skipped picture. By generating the pseudo-skipped picture, we can save some bits, which are added to the originally allocated target bits to encode the scene-changed picture. The simulation results show that the proposed algorithm improves encoding performance by about 0.5 to 2.0 dB of PSNR compared to the MPEG-2 TM5 rate control scheme. In addition, the suggested algorithm is compatible with MPEG-2 video syntax and the picture repetition is not noticeable.
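
    A toy bit-budget illustration of the reallocation (all numbers hypothetical):

        # Bits freed by encoding the preceding B picture as pseudo-skipped
        # are added to the scene-change picture's target allocation.
        b_picture_normal = 30_000         # typical B-picture spend, bits
        b_picture_pseudo_skipped = 2_000  # pseudo-skipped B costs almost nothing
        scene_change_target = 80_000      # originally allocated target bits

        saved = b_picture_normal - b_picture_pseudo_skipped
        print(scene_change_target + saved)  # 108000 bits for the scene change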

  13. High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection

    NASA Astrophysics Data System (ADS)

    Zuo, Chao; Chen, Qian; Gu, Guohua; Feng, Shijie; Feng, Fangxiaoyu; Li, Rubin; Shen, Guochen

    2013-08-01

    This paper introduces a high-speed three-dimensional (3-D) shape measurement technique for dynamic scenes using bi-frequency tripolar pulse-width-modulation (TPWM) fringe projection. Two wrapped phase maps with different wavelengths can be obtained simultaneously by our bi-frequency phase-shifting algorithm. The two phase maps are then unwrapped using a simple look-up-table-based number-theoretical approach. To guarantee the robustness of phase unwrapping as well as the high sinusoidality of the projected patterns, the TPWM technique is employed to generate ideal fringe patterns with slight defocus. We detail our technique, including its principle, pattern design, and system setup. Several experiments on dynamic scenes were performed, verifying that our method can achieve a speed of 1250 frames per second for fast, dense, and accurate 3-D measurements.
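
    For reference, the standard three-step phase-shifting formula underlying such wrapped-phase recovery (this generic single-frequency version is ours; the paper's bi-frequency algorithm recovers two such maps at once):

        import numpy as np

        def wrapped_phase(i1, i2, i3):
            """Wrapped phase from three fringe images with -120/0/+120 degree
            shifts: phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3)."""
            return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)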

  14. "Disorganized in time": impact of bottom-up and top-down negative emotion generation on memory formation among healthy and traumatized adolescents.

    PubMed

    Guillery-Girard, Bérengère; Clochon, Patrice; Giffard, Bénédicte; Viard, Armelle; Egler, Pierre-Jean; Baleyte, Jean-Marc; Eustache, Francis; Dayan, Jacques

    2013-09-01

    "Travelling in time," a central feature of episodic memory is severely affected among individuals with Post Traumatic Stress Disorder (PTSD) with two opposite effects: vivid traumatic memories are unorganized in temporality (bottom-up processes), non-traumatic personal memories tend to lack spatio-temporal details and false recognitions occur more frequently that in the general population (top-down processes). To test the effect of these two types of processes (i.e. bottom-up and top-down) on emotional memory, we conducted two studies in healthy and traumatized adolescents, a period of life in which vulnerability to emotion is particularly high. Using negative and neutral images selected from the international affective picture system (IAPS), stimuli were divided into perceptual images (emotion generated by perceptual details) and conceptual images (emotion generated by the general meaning of the material). Both categories of stimuli were then used, along with neutral pictures, in a memory task with two phases (encoding and recognition). In both populations, we reported a differential effect of the emotional material on encoding and recognition. Negative perceptual scenes induced an attentional capture effect during encoding and enhanced the recollective distinctiveness. Conversely, the encoding of conceptual scenes was similar to neutral ones, but the conceptual relatedness induced false memories at retrieval. However, among individuals with PTSD, two subgroups of patients were identified. The first subgroup processed the scenes faster than controls, except for the perceptual scenes, and obtained similar performances to controls in the recognition task. The second subgroup group desmonstrated an attentional deficit in the encoding task with no benefit from the distinctiveness associated with negative perceptual scenes on memory performances. These findings provide a new perspective on how negative emotional information may have opposite influences on memory in normal and traumatized individuals. It also gives clues to understand how intrusive memories and overgeneralization takes place in PTSD. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. CAMEO-SIM: a physics-based broadband scene simulation tool for assessment of camouflage, concealment, and deception methodologies

    NASA Astrophysics Data System (ADS)

    Moorhead, Ian R.; Gilmore, Marilyn A.; Houlbrook, Alexander W.; Oxford, David E.; Filbee, David R.; Stroud, Colin A.; Hutchings, G.; Kirk, Albert

    2001-09-01

    Assessment of camouflage, concealment, and deception (CCD) methodologies is not a trivial problem; conventionally the only method has been to carry out field trials, which are both expensive and subject to the vagaries of the weather. In recent years computing power has increased, such that there are now many research programs using synthetic environments for CCD assessments. Such an approach is attractive; the user has complete control over the environmental parameters and many more scenarios can be investigated. The UK Ministry of Defence is currently developing a synthetic scene generation tool for assessing the effectiveness of air vehicle camouflage schemes. The software is sufficiently flexible to allow it to be used in a broader range of applications, including full CCD assessment. The synthetic scene simulation system (CAMEO-SIM) has been developed, as an extensible system, to provide imagery within the 0.4 to 14 micrometers spectral band with as high a physical fidelity as possible. It consists of a scene design tool, an image generator that incorporates both radiosity and ray-tracing processes, and an experimental trials tool. The scene design tool allows the user to develop a 3D representation of the scenario of interest from a fixed viewpoint. Target(s) of interest can be placed anywhere within this 3D representation and may be either static or moving. Different illumination conditions and effects of the atmosphere can be modeled, together with directional reflectance effects. The user has complete control over the level of fidelity of the final image. The output from the rendering tool is a sequence of radiance maps, which may be used by sensor models or for experimental trials in which observers carry out target acquisition tasks. The software also maintains an audit trail of all data selected to generate a particular image, both in terms of the material properties used and the rendering options chosen. A range of verification tests has shown that the software computes the correct values for analytically tractable scenarios. Validation tests using simple scenes have also been undertaken. More complex validation tests using observer trials are planned. The current version of CAMEO-SIM and how its images are used for camouflage assessment are described. The verification and validation tests undertaken are discussed. In addition, example images are used to demonstrate the significance of different effects, such as spectral rendering and shadows. Planned developments of CAMEO-SIM are also outlined.

  16. Bulk silicon as photonic dynamic infrared scene projector

    NASA Astrophysics Data System (ADS)

    Malyutenko, V. K.; Bogatyrenko, V. V.; Malyutenko, O. Yu.

    2013-04-01

    A Si-based fast (frame rate >1 kHz), large-scale (scene area 100 cm2), broadband (3-12 μm), dynamic contactless infrared (IR) scene projector is demonstrated. An IR movie appears on the scene through the conversion of a visible scenario projected onto a scene kept at elevated temperature. Light down-conversion results from free carrier generation in the bulk Si scene, followed by modulation of its thermal emission output in the spectral band of free carrier absorption. The experimental setup, an IR movie, figures of merit, and the process's advantages in comparison to other projector technologies are discussed.

  17. Correlated Topic Vector for Scene Classification.

    PubMed

    Wei, Pengxu; Qin, Fei; Wan, Fang; Zhu, Yi; Jiao, Jianbin; Ye, Qixiang

    2017-07-01

    Scene images usually involve semantic correlations, particularly when considering large-scale image data sets. This paper proposes a novel generative image representation, the correlated topic vector, to model such semantic correlations. Derived from the correlated topic model, the correlated topic vector naturally utilizes the correlations among topics, which are seldom considered in conventional feature encoding, e.g., the Fisher vector, but do exist in scene images. It is expected that the involvement of correlations can increase the discriminative capability of the learned generative model and consequently improve the recognition accuracy. Incorporated with the Fisher kernel method, the correlated topic vector inherits the advantages of the Fisher vector. The contributions of visual words to the topics have further been employed within the Fisher kernel framework to indicate the differences among scenes. Combined with deep convolutional neural network (CNN) features and a Gibbs sampling solution, the correlated topic vector shows great potential when processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that the correlated topic vector significantly improves on deep CNN features and outperforms existing Fisher kernel-based features.

  18. Bag of Lines (BoL) for Improved Aerial Scene Representation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sridharan, Harini; Cheriyadat, Anil M.

    2014-09-22

    Feature representation is a key step in automated visual content interpretation. In this letter, we present a robust feature representation technique, referred to as bag of lines (BoL), for high-resolution aerial scenes. The proposed technique involves extracting and compactly representing low-level line primitives from the scene. The compact scene representation is generated by counting the different types of lines representing various linear structures in the scene. Through extensive experiments, we show that the proposed scene representation is invariant to scale changes and scene conditions and can discriminate urban scene categories accurately. We compare the BoL representation with the popular scale-invariant feature transform (SIFT) and Gabor wavelets for their classification and clustering performance on an aerial scene database consisting of images acquired by sensors with different spatial resolutions. The proposed BoL representation outperforms the SIFT- and Gabor-based representations.
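
    A toy sketch of a BoL-style descriptor that histograms Hough line segments by orientation and length; the paper's exact line primitives and binning may differ:

        import cv2
        import numpy as np

        def bag_of_lines(gray, n_orient=8, n_len=4, max_len=200.0):
            """Normalized 2D histogram of detected line segments."""
            edges = cv2.Canny(gray, 50, 150)
            lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                                    minLineLength=20, maxLineGap=3)
            hist = np.zeros((n_orient, n_len))
            if lines is not None:
                for x1, y1, x2, y2 in lines[:, 0]:
                    theta = np.arctan2(y2 - y1, x2 - x1) % np.pi
                    length = np.hypot(x2 - x1, y2 - y1)
                    o = min(int(theta / np.pi * n_orient), n_orient - 1)
                    m = min(int(length / max_len * n_len), n_len - 1)
                    hist[o, m] += 1
            return hist.ravel() / max(hist.sum(), 1.0)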

  19. Utilising E-on Vue and Unity 3D scenes to generate synthetic images and videos for visible signature analysis

    NASA Astrophysics Data System (ADS)

    Madden, Christopher S.; Richards, Noel J.; Culpepper, Joanne B.

    2016-10-01

    This paper investigates the ability to develop synthetic scenes in an image generation tool, E-on Vue, and a gaming engine, Unity 3D, which can be used to generate synthetic imagery of target objects across a variety of conditions in land environments. Developments within these tools and gaming engines have allowed the computer gaming industry to dramatically enhance the realism of the games they develop; however they utilise short cuts to ensure that the games run smoothly in real-time to create an immersive effect. Whilst these short cuts may have an impact upon the realism of the synthetic imagery, they do promise a much more time efficient method of developing imagery of different environmental conditions and to investigate the dynamic aspect of military operations that is currently not evaluated in signature analysis. The results presented investigate how some of the common image metrics used in target acquisition modelling, namely the Δμ1, Δμ2, Δμ3, RSS, and Doyle metrics, perform on the synthetic scenes generated by E-on Vue and Unity 3D compared to real imagery of similar scenes. An exploration of the time required to develop the various aspects of the scene to enhance its realism are included, along with an overview of the difficulties associated with trying to recreate specific locations as a virtual scene. This work is an important start towards utilising virtual worlds for visible signature evaluation, and evaluating how equivalent synthetic imagery is to real photographs.

  20. Research on hyperspectral dynamic scene and image sequence simulation

    NASA Astrophysics Data System (ADS)

    Sun, Dandan; Liu, Fang; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a simulation method for hyperspectral dynamic scenes and image sequences, intended for hyperspectral equipment evaluation and target detection algorithms. Because of its high spectral resolution, strong band continuity, anti-interference properties, and other advantages, hyperspectral imaging technology has developed rapidly in recent years and is widely used in areas such as optoelectronic target detection, military defense, and remote sensing systems. Digital imaging simulation, a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyperspectral imaging equipment at lower development cost and with a shorter development period. Meanwhile, visual simulation can produce large amounts of original image data under various conditions for hyperspectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a digital scene generation method. By building multiple sensor models for different bands and bandwidths, hyperspectral scenes in the visible, MWIR, and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm, and 0.1 μm, have been simulated. The final dynamic scenes are realistic and render in real time, at frame rates up to 100 Hz. By saving all the scene gray data from the same viewpoint, an image sequence is obtained. The analysis results show that, in both the infrared and visible bands, the grayscale variations of the simulated hyperspectral images are consistent with theoretical analysis.

  1. Phase 1 Development Report for the SESSA Toolkit.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knowlton, Robert G.; Melton, Brad J; Anderson, Robert J.

    The Site Exploitation System for Situational Awareness (SESSA) toolkit, developed by Sandia National Laboratories (SNL), is a comprehensive decision support system for crime scene data acquisition and Sensitive Site Exploitation (SSE). SESSA is an outgrowth of another SNL-developed decision support system, the Building Restoration Operations Optimization Model (BROOM), a hardware/software solution for data acquisition, data management, and data analysis. SESSA was designed to meet forensic crime scene needs as defined by the DoD's Military Criminal Investigation Organization (MCIO). SESSA is a very comprehensive toolkit with a considerable amount of database information managed through a Microsoft SQL (Structured Query Language) database engine, a Geographical Information System (GIS) engine that provides comprehensive mapping capabilities, as well as an intuitive Graphical User Interface (GUI). An electronic sketch pad module is included. The system also has the ability to efficiently generate necessary forms for forensic crime scene investigations (e.g., evidence submittal, laboratory requests, and scene notes). SESSA allows the user to capture photos on site, and can read and generate barcode labels that limit transcription errors. SESSA runs on PC computers running Windows 7, but is optimized for touch-screen tablet computers running Windows for ease of use at crime scenes and on SSE deployments. A prototype system for 3-dimensional (3D) mapping and measurements was also developed to complement the SESSA software. The mapping system employs a visual/depth sensor that captures data to create 3D visualizations of an interior space and to make distance measurements with centimeter-level accuracy. Output of this 3D Model Builder module provides a virtual 3D "walk-through" of a crime scene. The 3D mapping system is much less expensive and easier to use than competitive systems. This document covers the basic installation and operation of the SESSA toolkit in order to give the user enough information to start using the toolkit. SESSA is currently a prototype system and this documentation covers the initial release of the toolkit. Funding for SESSA was provided by the Department of Defense (DoD), Assistant Secretary of Defense for Research and Engineering (ASD(R&E)) Rapid Fielding (RF) organization. The project was managed by the Defense Forensic Science Center (DFSC), formerly known as the U.S. Army Criminal Investigation Laboratory (USACIL). ACKNOWLEDGEMENTS The authors wish to acknowledge the funding support for the development of the Site Exploitation System for Situational Awareness (SESSA) toolkit from the Department of Defense (DoD), Assistant Secretary of Defense for Research and Engineering (ASD(R&E)) Rapid Fielding (RF) organization. The project was managed by the Defense Forensic Science Center (DFSC), formerly known as the U.S. Army Criminal Investigation Laboratory (USACIL). Special thanks to Mr. Garold Warner, of DFSC, who served as the Project Manager. Individuals that worked on the design, functional attributes, algorithm development, system architecture, and software programming include: Robert Knowlton, Brad Melton, Robert Anderson, and Wendy Amai.

  2. System for Continuous Delivery of MODIS Imagery to Internet Mapping Applications

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2008-01-01

    This software represents a complete, unsupervised processing chain that generates a continuously updating global image of the Earth from the most recent available MODIS Level 1B scenes. The software constantly updates a global image of the Earth at 250 m per pixel.

  3. Generation and physical characteristics of the LANDSAT-1, -2 and -3 MSS computer compatible tapes

    NASA Technical Reports Server (NTRS)

    Thomas, V. L.

    1977-01-01

    The generation and format of the LANDSAT 1, 2, and 3 system corrected multispectral scanner computer compatible tapes are discussed. Included in the discussion are the spacecraft sensors, scene characteristics, the transmission of data, and the conversion of the data to computer compatible tapes. Also included in the discussion are geometric and radiometric corrections, tape formats, and the physical characteristics of the tape.

  4. Generation and physical characteristics of the Landsat 1 and 2 MSS computer compatible tapes

    NASA Technical Reports Server (NTRS)

    Thomas, V. L.

    1975-01-01

    The generation and format of the Landsat 1 and 2 system-corrected multispectral scanner computer compatible tapes are discussed. Included in the discussion are the spacecraft sensors, scene characteristics, the transmission of data, and the conversion of the data to computer compatible tapes at the NASA Data Processing Facility. Geometric and radiometric corrections, tape formats, and the physical characteristics of the tape are also described.

  5. Generation of binary holograms for deep scenes captured with a camera and a depth sensor

    NASA Astrophysics Data System (ADS)

    Leportier, Thibault; Park, Min-Chul

    2017-01-01

    This work presents binary hologram generation from images of a real object acquired from a Kinect sensor. Since hologram calculation from a point-cloud or polygon model presents a heavy computational burden, we adopted a depth-layer approach to generate the holograms. This method enables us to obtain holographic data of large scenes quickly. Our investigations focus on the performance of different methods, iterative and noniterative, to convert complex holograms into binary format. Comparisons were performed to examine the reconstruction of the binary holograms at different depths. We also propose to modify the direct binary search algorithm to take into account several reference image planes. Then, deep scenes featuring multiple planes of interest can be reconstructed with better efficiency.
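
    A hedged sketch of the binarisation step mentioned above: plain direct binary search against a single reference plane, with angular-spectrum propagation used to score each trial flip. The paper's multi-plane extension is not reproduced, and a practical implementation would update the error incrementally instead of re-propagating the full field, so this brute-force version is only viable for small arrays.

        import numpy as np

        def propagate(field, dist, wl, dx):
            """Angular-spectrum propagation of a complex field over dist metres."""
            n = field.shape[0]
            fx = np.fft.fftfreq(n, d=dx)
            FX, FY = np.meshgrid(fx, fx)
            arg = 1.0 / wl**2 - FX**2 - FY**2
            kz = np.sqrt(np.maximum(arg, 0.0))
            Hf = np.exp(2j * np.pi * dist * kz) * (arg > 0)   # drop evanescent waves
            return np.fft.ifft2(np.fft.fft2(field) * Hf)

        def dbs_binarize(target, dist, wl=633e-9, dx=8e-6, sweeps=2):
            """Direct binary search: keep a pixel flip only if it lowers the error."""
            holo = np.random.rand(*target.shape) > 0.5        # random binary start
            def err(h):
                recon = np.abs(propagate(h.astype(float), dist, wl, dx))
                return np.sum((recon - target) ** 2)
            best = err(holo)
            for _ in range(sweeps):
                for i in range(holo.shape[0]):
                    for j in range(holo.shape[1]):
                        holo[i, j] ^= True        # trial flip
                        e = err(holo)
                        if e < best:
                            best = e              # keep the flip
                        else:
                            holo[i, j] ^= True    # revert
            return holo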

  6. An integrated software system for geometric correction of LANDSAT MSS imagery

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Esilva, A. J. F. M.; Camara-Neto, G.; Serra, P. R. M.; Desousa, R. C. M.; Mitsuo, Fernando Augusta, II

    1984-01-01

    A system for geometrically correcting LANDSAT MSS imagery includes all phases of processing, from receiving a raw computer compatible tape (CCT) to the generation of a corrected CCT (or UTM mosaic). The system comprises modules for: (1) control of the processing flow; (2) calculation of satellite ephemeris and attitude parameters; (3) generation of uncorrected files from raw CCT data; (4) creation, management and maintenance of a ground control point library; (5) determination of the image correction equations, using attitude and ephemeris parameters and existing ground control points; (6) generation of the corrected LANDSAT file, using the equations determined beforehand; (7) union of LANDSAT scenes to produce a UTM mosaic; and (8) generation of the output tape, in super-structure format.
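
    For step (5), a minimal sketch of estimating correction equations from ground control points: a least-squares affine fit from raw image coordinates to map (UTM) coordinates. The operational system also folds in ephemeris and attitude parameters, which are omitted here; the control points below are hypothetical.

        import numpy as np

        def fit_affine(raw_xy, map_xy):
            """Least-squares affine transform: [x, y, 1] @ T approximates map_xy."""
            raw = np.asarray(raw_xy, dtype=float)
            A = np.column_stack([raw, np.ones(len(raw))])    # (N, 3) design matrix
            T, *_ = np.linalg.lstsq(A, np.asarray(map_xy, dtype=float), rcond=None)
            return T                                         # (3, 2) coefficients

        # Four hypothetical ground control points: pixel -> UTM
        raw = [(100, 200), (900, 180), (880, 940), (120, 910)]
        utm = [(500100.0, 7300200.0), (500900.0, 7300185.0),
               (500885.0, 7299250.0), (500120.0, 7299280.0)]
        T = fit_affine(raw, utm)
        print(np.array([150.0, 300.0, 1.0]) @ T)             # map a raw pixel to UTM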

  7. Direct Y-STR amplification of body fluids deposited on commonly found crime scene substrates.

    PubMed

    Dargay, Amanda; Roy, Reena

    2016-04-01

    Body fluids detected on commonly found crime scene substrates require extraction, purification and quantitation of DNA prior to amplification and generation of short tandem repeat (STR) DNA profiles. In this research, Y-STR profiles were generated via direct amplification of blood and saliva deposited on 12 different substrates. These included cigarette butts, straws, grass, leaves, woodchips and seven different types of fabric. After depositing either 0.1 μL of blood or 0.5 μL of saliva, each substrate containing the dry body fluid stain was punched using a Harris 1.2 mm micro-punch. Each of these punched substrates, a total of 720 samples, containing a minute amount of blood or saliva was either amplified directly without any pre-treatment, or was treated with one of four washing reagents or a buffer. In each of these five experimental groups the substrates containing the body fluid remained in the amplification reagent during the thermal cycling process. Each sample was amplified with three direct Y-STR amplification kits: the AmpFℓSTR(®) Yfiler(®) Direct and Yfiler(®) Plus Amplification Kits and the PowerPlex(®) Y23 System. Complete and concordant Y-STR profiles were successfully obtained from most of these 12 challenging crime scene objects when the stains were analyzed by at least one of the five experimental groups. The reagents and buffer were interchangeable among the three amplification kits; however, pre-treatment with these solutions did not appear to enhance the quality or the number of the full profiles generated with direct amplification. This study demonstrates that blood and saliva deposited on these simulated crime scene objects can be amplified directly. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  8. Sparse Coding of Natural Human Motion Yields Eigenmotions Consistent Across People

    NASA Astrophysics Data System (ADS)

    Thomik, Andreas; Faisal, A. Aldo

    2015-03-01

    Providing a precise mathematical description of the structure of natural human movement is a challenging problem. We use a data-driven approach to seek a generative model of movement capturing the underlying simplicity of spatial and temporal structure of behaviour observed in daily life. In perception, the analysis of natural scenes has shown that sparse codes of such scenes are information theoretic efficient descriptors with direct neuronal correlates. Translating from perception to action, we identify a generative model of movement generation by the human motor system. Using wearable full-hand motion capture, we measure the digit movement of the human hand in daily life. We learn a dictionary of "eigenmotions" which we use for sparse encoding of the movement data. We show that the dictionaries are generally well preserved across subjects with small deviations accounting for individuality of the person and variability in tasks. Further, the dictionary elements represent motions which can naturally describe hand movements. Our findings suggest the motor system can compose complex movement behaviours out of the spatially and temporally sparse activation of "eigenmotion" neurons, and is consistent with data on grasp-type specificity of specialised neurons in the premotor cortex. Andreas is supported by the Luxemburg Research Fund (1229297).
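
    The dictionary-learning step can be sketched with an off-the-shelf sparse coder. The snippet below uses scikit-learn's DictionaryLearning as a stand-in for the authors' method; the data shape (random placeholder windows of joint-angle samples) and all parameters are illustrative assumptions.

        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        # Placeholder data: 2000 motion windows of 19 joint-angle values each
        X = np.random.default_rng(0).normal(size=(2000, 19))

        learner = DictionaryLearning(n_components=40, alpha=1.0, max_iter=20,
                                     transform_algorithm="lasso_lars",
                                     random_state=0)
        codes = learner.fit_transform(X)      # sparse activations per window
        eigenmotions = learner.components_    # (40, 19) learned dictionary
        print((codes != 0).mean())            # fraction of active coefficients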

  9. Clandestine laboratory scene investigation and processing using portable GC/MS

    NASA Astrophysics Data System (ADS)

    Matejczyk, Raymond J.

    1997-02-01

    This presentation describes the use of portable gas chromatography/mass spectrometry for on-scene investigation and processing of clandestine laboratories. Clandestine laboratory investigations present special problems to forensic investigators. These crime scenes contain many chemical hazards that must be detected, identified and collected as evidence. Gas chromatography/mass spectrometry performed on-scene with a rugged, portable unit is capable of analyzing a variety of matrices for drugs and chemicals used in the manufacture of illicit drugs, such as methamphetamine. Technologies used to detect various materials at a scene have particular applications but do not address the wide range of samples, chemicals, matrices and mixtures that exist in clan labs. Typical analyses performed by GC/MS are for the purpose of positively establishing the identity of starting materials, chemicals and end-product collected from clandestine laboratories. Concerns for public and investigator safety and for the environment are also important factors favoring rapid on-scene data generation. Described here is the implementation of a portable multiple-inlet GC/MS system designed for rapid deployment to a scene to perform forensic investigations of clandestine drug manufacturing laboratories. GC/MS has long been held as the 'gold standard' in performing forensic chemical analyses. With its capability to separate compounds and produce a 'chemical fingerprint', GC/MS is utilized as an essential technique for detecting and positively identifying chemical evidence. Rapid and conclusive on-scene analysis of evidence will assist forensic investigators in collecting only pertinent evidence, thereby reducing the amount of evidence to be transported, reducing chain-of-custody concerns, reducing costs and hazards, maintaining sample integrity and speeding the completion of the investigative process.

  10. Quick realization of a ship steering training simulation system by virtual reality

    NASA Astrophysics Data System (ADS)

    Sun, Jifeng; Zhi, Pinghua; Nie, Weiguo

    2003-09-01

    This paper addresses two problems of a ship handling simulator. Firstly, 360° scene generation, especially 3D dynamic sea wave modeling, is described. Secondly, a multi-computer implementation of the ship handling simulator is presented. The paper also gives experimental results for the proposed ship handling simulator.

  11. Tree growth visualization

    Treesearch

    L. Linsen; B.J. Karis; E.G. McPherson; B. Hamann

    2005-01-01

    In computer graphics, models describing the fractal branching structure of trees typically exploit the modularity of tree structures. The models are based on local production rules, which are applied iteratively and simultaneously to create a complex branching system. The objective is to generate three-dimensional scenes of often many realistic-looking and non-...

  12. mPano: cloud-based mobile panorama view from single picture

    NASA Astrophysics Data System (ADS)

    Li, Hongzhi; Zhu, Wenwu

    2013-09-01

    Panorama view provides people an informative and natural user experience to represent the whole scene. The advances in mobile augmented reality, mobile-cloud computing, and the mobile internet enable panorama view on mobile phones with new functionalities, such as anytime-anywhere query of where a landmark picture is and what the whole scene looks like. Generating and exploring panorama views on mobile devices faces significant challenges due to the limitations of computing capacity, battery life, and memory size of mobile phones, as well as the bandwidth of the mobile Internet connection. To address these challenges, this paper presents a novel cloud-based mobile panorama view system that can generate and view a panorama view on mobile devices from a single picture, namely "mPano". In our system, first, we propose a novel iterative multi-modal image retrieval (IMIR) approach to get spatially adjacent images using both tag and content information from the single picture. Second, we propose a cloud-based parallel server synthesis approach to generate the panorama view in the cloud, in contrast to today's local-client synthesis approach, which is almost impossible for mobile phones. Third, we propose a predictive-cache solution to reduce the latency of image delivery from the cloud server to the mobile client. We have built a real mobile panorama view system and performed experiments. The experimental results demonstrated the effectiveness of our system and the proposed key component technologies, especially for landmark images.

  13. Inverting a dispersive scene's side-scanned image

    NASA Technical Reports Server (NTRS)

    Harger, R. O.

    1983-01-01

    Consideration is given to the problem of using a remotely sensed, side-scanned image of a time-variant scene, which changes according to a dispersion relation, to estimate the structure at a given moment. Additive thermal noise is neglected in the models considered in the formal treatment. It is shown that the dispersion relation is normalized by the scanning velocity, as is the group scanning velocity component. An inversion operation is defined for noise-free images generated by SAR. The method is extended to the inversion of noisy imagery, and a formulation is defined for spectral density estimation. Finally, the methods for a radar system are used for the case of sonar.

  14. Evaluation of Alternate Concepts for Synthetic Vision Flight Displays With Weather-Penetrating Sensor Image Inserts During Simulated Landing Approaches

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Busquets, Anthony M.; Williams, Steven P.; Nold, Dean E.

    2003-01-01

    A simulation study was conducted in 1994 at Langley Research Center that used 12 commercial airline pilots repeatedly flying complex Microwave Landing System (MLS)-type approaches to parallel runways under Category IIIc weather conditions. Two sensor insert concepts of 'Synthetic Vision Systems' (SVS) were used in the simulated flights, with a more conventional electro-optical display (similar to a Head-Up Display with raster capability for sensor imagery), flown under less restrictive visibility conditions, used as a control condition. The SVS concepts combined the sensor imagery with a computer-generated image (CGI) of an out-the-window scene based on an onboard airport database. Various scenarios involving runway traffic incursions (taxiing aircraft and parked fuel trucks) and navigational system position errors (both static and dynamic) were used to assess the pilots' ability to manage the approach task with the display concepts. The two SVS sensor insert concepts contrasted the simple overlay of sensor imagery on the CGI scene without additional image processing (the SV display) to the complex integration (the AV display) of the CGI scene with pilot-decision aiding using both object and edge detection techniques for detection of obstacle conflicts and runway alignment errors.

  15. Atmospheric Correction Prototype Algorithm for High Spatial Resolution Multispectral Earth Observing Imaging Systems

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary

    2006-01-01

    This viewgraph presentation reviews the creation of a prototype algorithm for atmospheric correction using high spatial resolution earth observing imaging systems. The objective of the work was to evaluate the accuracy of a prototype algorithm that uses satellite-derived atmospheric products to generate scene reflectance maps for high spatial resolution (HSR) systems. The presentation focuses on preliminary results of the satellite-based atmospheric correction algorithm only.

  16. ERBE Geographic Scene and Monthly Snow Data

    NASA Technical Reports Server (NTRS)

    Coleman, Lisa H.; Flug, Beth T.; Gupta, Shalini; Kizer, Edward A.; Robbins, John L.

    1997-01-01

    The Earth Radiation Budget Experiment (ERBE) is a multisatellite system designed to measure the Earth's radiation budget. The ERBE data processing system consists of several software packages or sub-systems, each designed to perform a particular task. The primary task of the Inversion Subsystem is to reduce satellite altitude radiances to fluxes at the top of the Earth's atmosphere. To accomplish this, angular distribution models (ADM's) are required. These ADM's are a function of viewing and solar geometry and of the scene type as determined by the ERBE scene identification algorithm which is a part of the Inversion Subsystem. The Inversion Subsystem utilizes 12 scene types which are determined by the ERBE scene identification algorithm. The scene type is found by combining the most probable cloud cover, which is determined statistically by the scene identification algorithm, with the underlying geographic scene type. This Contractor Report describes how the geographic scene type is determined on a monthly basis.

  17. Plenoptic layer-based modeling for image based rendering.

    PubMed

    Pearson, James; Brookes, Mike; Dragotti, Pier Luigi

    2013-09-01

    Image based rendering is an attractive alternative to model based rendering for generating novel views because of its lower complexity and potential for photo-realistic results. To reduce the number of images necessary for alias-free rendering, some geometric information for the 3D scene is normally necessary. In this paper, we present a fast automatic layer-based method for synthesizing an arbitrary new view of a scene from a set of existing views. Our algorithm takes advantage of the knowledge of the typical structure of multiview data to perform occlusion-aware layer extraction. In addition, the number of depth layers used to approximate the geometry of the scene is chosen based on plenoptic sampling theory with the layers placed non-uniformly to account for the scene distribution. The rendering is achieved using a probabilistic interpolation approach and by extracting the depth layer information on a small number of key images. Numerical results demonstrate that the algorithm is fast and yet is only 0.25 dB away from the ideal performance achieved with the ground-truth knowledge of the 3D geometry of the scene of interest. This indicates that there are measurable benefits from following the predictions of plenoptic theory and that they remain true when translated into a practical system for real world data.
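
    A minimal sketch of the non-uniform layer placement: plenoptic-sampling arguments usually allocate depth layers uniformly in disparity (inverse depth), which concentrates layers near the camera. The paper additionally adapts placement to the scene distribution, which is not reproduced here.

        import numpy as np

        def layer_depths(z_min, z_max, n_layers):
            """Depth layers spaced uniformly in 1/z between z_min and z_max."""
            inv = np.linspace(1.0 / z_min, 1.0 / z_max, n_layers)
            return 1.0 / inv

        print(layer_depths(1.0, 10.0, 5))   # dense near the camera, sparse far away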

  18. Fixed Pattern Noise pixel-wise linear correction for crime scene imaging CMOS sensor

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Messinger, David W.; Dube, Roger R.; Ientilucci, Emmett J.

    2017-05-01

    Filtered multispectral imaging might be a potential method for crime scene documentation and evidence detection due to its abundant spectral information as well as its non-contact and non-destructive nature. A low-cost and portable multispectral crime scene imaging device would be highly useful and efficient. The second-generation crime scene imaging system uses a CMOS imaging sensor to capture the spatial scene and bandpass Interference Filters (IFs) to capture spectral information. Unfortunately, CMOS sensors suffer from severe spatial non-uniformity compared to CCD sensors, and the major cause is Fixed Pattern Noise (FPN). IFs suffer from a "blue shift" effect and introduce spatially and spectrally correlated errors. Therefore, FPN correction is critical to enhance crime scene image quality and is also helpful for spatial-spectral noise de-correlation. In this paper, a pixel-wise linear radiance to Digital Count (DC) conversion model is constructed for the crime scene imaging CMOS sensor. The pixel-wise conversion gain Gi,j and Dark Signal Non-Uniformity (DSNU) Zi,j are calculated. The conversion gain is divided into four components: an FPN row component, an FPN column component, a defects component and an effective photo response signal component. The conversion gain is then corrected by averaging out the FPN column and row components and the defects component, so that the sensor conversion gain is uniform. Based on the corrected conversion gain and the incident radiance estimated by inverting the pixel-wise linear radiance-to-DC model, the spatial uniformity of the corrected image can be enhanced to 7 times that of the raw image; the larger the image DC value within the dynamic range, the greater the enhancement.
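
    A hedged numpy sketch of the pixel-wise linear model DC = G·L + Z and of a row/column gain flattening in the spirit described above; the paper's exact four-component decomposition is approximated here by removing the row and column gain averages.

        import numpy as np

        def fit_gain_offset(radiances, frames):
            """Per-pixel least-squares fit of DC = G * L + Z from flat-field frames."""
            L = np.asarray(radiances, dtype=float)   # (K,) known radiance levels
            D = np.asarray(frames, dtype=float)      # (K, H, W) mean frame per level
            Lm, Dm = L.mean(), D.mean(axis=0)
            G = ((L[:, None, None] - Lm) * (D - Dm)).sum(axis=0) / ((L - Lm) ** 2).sum()
            Z = Dm - G * Lm
            return G, Z

        def flatten_gain(G):
            """Remove row/column FPN components of the gain, keeping its mean."""
            row = G.mean(axis=1, keepdims=True) - G.mean()
            col = G.mean(axis=0, keepdims=True) - G.mean()
            return G - row - col

        def correct(frame, G, Z):
            """Invert the per-pixel model, then re-render with the flattened gain."""
            L_est = (frame - Z) / G
            return flatten_gain(G) * L_est + Z.mean()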

  19. Multi-scale dynamical behavior of spatially distributed systems: a deterministic point of view

    NASA Astrophysics Data System (ADS)

    Mangiarotti, S.; Le Jean, F.; Drapeau, L.; Huc, M.

    2015-12-01

    Physical and biophysical systems are spatially distributed systems. Their behavior can be observed or modelled spatially at various resolutions. In this work, a deterministic point of view is adopted to analyze multi-scale behavior, taking a set of ordinary differential equations (ODEs) as the elementary part of the system. To perform the analyses, scenes of study are generated from ensembles of identical elementary ODE systems. Without any loss of generality, their dynamics is chosen chaotic in order to ensure sensitivity to initial conditions, that is, one fundamental property of the atmosphere under unstable conditions [1]. The Rössler system [2] is used for this purpose for both its topological and algebraic simplicity [3,4]. Two cases are considered: the chaotic oscillators composing the scene of study are taken either independent, or in phase synchronization. Scale behaviors are analyzed by considering the scene of study as aggregations (basically obtained by spatially averaging the signal) or as associations (obtained by concatenating the time series). The global modeling technique is used to perform the numerical analyses [5]. One important result of this work is that, under phase synchronization, a scene of aggregated dynamics can be approximated by the elementary system composing the scene, but with a modified parameterization [6]. This is shown through numerical analyses, then demonstrated analytically and generalized to a larger class of ODE systems. Preliminary applications to cereal crops observed from satellite are also presented. [1] Lorenz, Deterministic nonperiodic flow, J. Atmos. Sci., 20, 130-141 (1963). [2] Rössler, An equation for continuous chaos, Phys. Lett. A, 57, 397-398 (1976). [3] Gouesbet & Letellier, Global vector-field reconstruction by using a multivariate polynomial L2 approximation on nets, Phys. Rev. E, 49, 4955-4972 (1994). [4] Letellier, Roulin & Rössler, Inequivalent topologies of chaos in simple equations, Chaos, Solitons & Fractals, 28, 337-360 (2006). [5] Mangiarotti, Coudret, Drapeau & Jarlan, Polynomial search and global modeling, Phys. Rev. E, 86(4), 046205 (2012). [6] Mangiarotti, Modélisation globale et caractérisation topologique de dynamiques environnementales, Habilitation à Diriger des Recherches, Univ. Toulouse 3 (2014).
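
    The aggregation experiment can be sketched by integrating an ensemble of independent Rössler oscillators and spatially averaging their output. Parameter values below are the classic ones, not necessarily those used in the study.

        import numpy as np
        from scipy.integrate import solve_ivp

        def rossler(t, s, a=0.2, b=0.2, c=5.7):
            x, y, z = s
            return [-y - z, x + a * y, b + z * (x - c)]

        rng = np.random.default_rng(0)
        t_eval = np.linspace(0, 200, 4000)
        runs = [solve_ivp(rossler, (0, 200), rng.normal(scale=0.5, size=3),
                          t_eval=t_eval).y[0]
                for _ in range(16)]               # 16 independent oscillators
        aggregated = np.mean(runs, axis=0)        # spatially averaged "scene" signal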

  20. Robot Vision

    NASA Technical Reports Server (NTRS)

    Sutro, L. L.; Lerman, J. B.

    1973-01-01

    The operation of a system is described that was built both to model the vision of primate animals, including man, and to serve as a pre-prototype of a possible object recognition system. It was employed in a series of experiments to determine the practicability of matching left and right images of a scene to determine the range and form of objects. The experiments started with computer-generated random-dot stereograms as inputs and progressed through random square stereograms to a real scene. The major problems were the elimination of spurious matches between the left and right views, and the interpretation of ambiguous regions: on the left side of an object that can be viewed only by the left camera, and on the right side of an object that can be viewed only by the right camera.

  1. Automatic generation of pictorial transcripts of video programs

    NASA Astrophysics Data System (ADS)

    Shahraray, Behzad; Gibbon, David C.

    1995-03-01

    An automatic authoring system for the generation of pictorial transcripts of video programs which are accompanied by closed caption information is presented. A number of key frames, each of which represents the visual information in a segment of the video (i.e., a scene), are selected automatically by performing a content-based sampling of the video program. The textual information is recovered from the closed caption signal and is initially segmented based on its implied temporal relationship with the video segments. The text segmentation boundaries are then adjusted, based on lexical analysis and/or caption control information, to account for synchronization errors due to possible delays in the detection of scene boundaries or the transmission of the caption information. The closed caption text is further refined through linguistic processing for conversion to lower- case with correct capitalization. The key frames and the related text generate a compact multimedia presentation of the contents of the video program which lends itself to efficient storage and transmission. This compact representation can be viewed on a computer screen, or used to generate the input to a commercial text processing package to generate a printed version of the program.
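
    A minimal sketch of the content-based sampling step, assuming a simple grey-level histogram-difference shot detector; the system's actual detector, its synchronization with the caption stream, and the thresholds are not specified here.

        import numpy as np

        def key_frame_indices(frames, n_bins=64, threshold=0.25):
            """frames: iterable of 2-D grayscale arrays; returns key frame indices."""
            keys, prev = [], None
            for i, frame in enumerate(frames):
                hist, _ = np.histogram(frame, bins=n_bins, range=(0, 256))
                hist = hist / max(hist.sum(), 1)
                # total-variation distance between consecutive histograms, in [0, 1]
                if prev is None or 0.5 * np.abs(hist - prev).sum() > threshold:
                    keys.append(i)                # histogram jumped: new scene segment
                prev = hist
            return keys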

  2. The effects of scene content parameters, compression, and frame rate on the performance of analytics systems

    NASA Astrophysics Data System (ADS)

    Tsifouti, A.; Triantaphillidou, S.; Larabi, M. C.; Doré, G.; Bilissi, E.; Psarrou, A.

    2015-01-01

    In this investigation we study the effects of compression and frame rate reduction on the performance of four video analytics (VA) systems utilizing a low complexity scenario, such as the Sterile Zone (SZ). Additionally, we identify the most influential scene parameters affecting the performance of these systems. The SZ scenario is a scene consisting of a fence, not to be trespassed, and an area with grass. The VA system needs to alarm when there is an intruder (attack) entering the scene. The work includes testing of the systems with uncompressed and compressed (using H.264/MPEG-4 AVC at 25 and 5 frames per second) footage, consisting of quantified scene parameters. The scene parameters include descriptions of scene contrast, camera to subject distance, and attack portrayal. Additional footage, including only distractions (no attacks) is also investigated. Results have shown that every system has performed differently for each compression/frame rate level, whilst overall, compression has not adversely affected the performance of the systems. Frame rate reduction has decreased performance and scene parameters have influenced the behavior of the systems differently. Most false alarms were triggered with a distraction clip, including abrupt shadows through the fence. Findings could contribute to the improvement of VA systems.

  3. Guided exploration in virtual environments

    NASA Astrophysics Data System (ADS)

    Beckhaus, Steffi; Eckel, Gerhard; Strothotte, Thomas

    2001-06-01

    We describe an application supporting alternating interaction and animation for the purpose of exploration in a surround- screen projection-based virtual reality system. The exploration of an environment is a highly interactive and dynamic process in which the presentation of objects of interest can give the user guidance while exploring the scene. Previous systems for automatic presentation of models or scenes need either cinematographic rules, direct human interaction, framesets or precalculation (e.g. precalculation of paths to a predefined goal). We report on the development of a system that can deal with rapidly changing user interest in objects of a scene or model as well as with dynamic models and changes of the camera position introduced interactively by the user. It is implemented as a potential-field based camera data generating system. In this paper we describe the implementation of our approach in a virtual art museum on the CyberStage, our surround-screen projection-based stereoscopic display. The paradigm of guided exploration is introduced describing the freedom of the user to explore the museum autonomously. At the same time, if requested by the user, guided exploration provides just-in-time navigational support. The user controls this support by specifying the current field of interest in high-level search criteria. We also present an informal user study evaluating this approach.

  4. Three-dimensional information hierarchical encryption based on computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Kong, Dezhao; Shen, Xueju; Cao, Liangcai; Zhang, Hao; Zong, Song; Jin, Guofan

    2016-12-01

    A novel approach for encrypting three-dimensional (3-D) scene information hierarchically based on computer-generated holograms (CGHs) is proposed. The CGHs of the layer-oriented 3-D scene information are produced by angular-spectrum propagation algorithm at different depths. All the CGHs are then modulated by different chaotic random phase masks generated by the logistic map. Hierarchical encryption encoding is applied when all the CGHs are accumulated one by one, and the reconstructed volume of the 3-D scene information depends on permissions of different users. The chaotic random phase masks could be encoded into several parameters of the chaotic sequences to simplify the transmission and preservation of the keys. Optical experiments verify the proposed method and numerical simulations show the high key sensitivity, high security, and application flexibility of the method.
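
    The chaotic-mask step can be sketched as follows: a logistic-map sequence seeded by the key (x0, r) is reshaped into a unit-modulus phase mask that modulates one layer's hologram. The angular-spectrum CGH computation itself is omitted; the 256 x 256 placeholder hologram and the key values are illustrative.

        import numpy as np

        def logistic_phase_mask(shape, x0=0.3571, r=3.99, burn_in=1000):
            """Random phase mask driven by the logistic map x <- r*x*(1-x)."""
            n = shape[0] * shape[1]
            x, seq = x0, np.empty(n)
            for _ in range(burn_in):              # discard the transient
                x = r * x * (1.0 - x)
            for i in range(n):
                x = r * x * (1.0 - x)
                seq[i] = x
            return np.exp(2j * np.pi * seq.reshape(shape))

        hologram = np.ones((256, 256), dtype=complex)       # placeholder layer CGH
        encrypted = hologram * logistic_phase_mask((256, 256))
        decrypted = encrypted * np.conj(logistic_phase_mask((256, 256)))  # same key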

  5. Improving depth estimation from a plenoptic camera by patterned illumination

    NASA Astrophysics Data System (ADS)

    Marshall, Richard J.; Meah, Chris J.; Turola, Massimo; Claridge, Ela; Robinson, Alex; Bongs, Kai; Gruppetta, Steve; Styles, Iain B.

    2015-05-01

    Plenoptic (light-field) imaging is a technique that allows a simple CCD-based imaging device to acquire both spatially and angularly resolved information about the "light-field" from a scene. It requires a microlens array to be placed between the objective lens and the sensor of the imaging device [1], and the images under each microlens (which typically span many pixels) can be computationally post-processed to shift perspective, digitally refocus, extend the depth of field, manipulate the aperture synthetically and generate a depth map from a single image. Some of these capabilities are rigid functions that do not depend upon the scene and work by manipulating and combining a well-defined set of pixels in the raw image. However, depth mapping requires specific features in the scene to be identified and registered between consecutive microimages. This process requires that the image has sufficient features for the registration, and in the absence of such features the algorithms become less reliable and incorrect depths are generated. The aim of this study is to investigate the generation of depth-maps from light-field images of scenes with insufficient features for accurate registration, using projected patterns to impose a texture on the scene that provides sufficient landmarks for the registration methods.

  6. Generative technique for dynamic infrared image sequences

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Cao, Zhiguo; Zhang, Tianxu

    2001-09-01

    The generative technique for dynamic infrared image sequences is discussed in this paper. Because an infrared sensor differs from a CCD camera in its imaging mechanism, it generates the infrared image by receiving the infrared radiation of the scene (including target and background). The infrared imaging sensor is strongly affected by atmospheric radiation, environmental radiation, and the attenuation of radiation along the atmospheric transfer path. Therefore, this paper first analyzes the imaging influence of these radiation sources and provides the corresponding radiation calculation formulas, treating the passive scene and the active scene separately. The calculation methods for the passive scene are then provided, and the functions of the scene model, the atmospheric transmission model, and the material physical attribute databases are explained. Secondly, based on the infrared imaging model, the design idea, implementation approach, and software framework for the infrared image sequence simulation software on an SGI workstation are introduced. Following this approach, the third part of the paper presents an example of simulated infrared image sequences, using sea and sky as the background, a warship as the target, and an aircraft as the viewpoint. Finally, the simulation is evaluated comprehensively and an improvement scheme is presented.

  7. See It With Your Own Eyes: Markerless Mobile Augmented Reality for Radiation Awareness in the Hybrid Room.

    PubMed

    Loy Rodas, Nicolas; Barrera, Fernando; Padoy, Nicolas

    2017-02-01

    We present an approach to provide awareness to the harmful ionizing radiation generated during X-ray-guided minimally invasive procedures. A hand-held screen is used to display directly in the user's view information related to radiation safety in a mobile augmented reality (AR) manner. Instead of using markers, we propose a method to track the observer's viewpoint, which relies on the use of multiple RGB-D sensors and combines equipment detection for tracking initialization with a KinectFusion-like approach for frame-to-frame tracking. Two of the sensors are ceiling-mounted and a third one is attached to the hand-held screen. The ceiling cameras keep an updated model of the room's layout, which is used to exploit context information and improve the relocalization procedure. The system is evaluated on a multicamera dataset generated inside an operating room (OR) and containing ground-truth poses of the AR display. This dataset includes a wide variety of sequences with different scene configurations, occlusions, motion in the scene, and abrupt viewpoint changes. Qualitative results illustrating the different AR visualization modes for radiation awareness provided by the system are also presented. Our approach allows the user to benefit from a large AR visualization area and permits to recover from tracking failure caused by vast motion or changes in the scene just by looking at a piece of equipment. The system enables the user to see the 3-D propagation of radiation, the medical staff's exposure, and/or the doses deposited on the patient's surface as seen through his own eyes.

  8. Parallel phase-sensitive three-dimensional imaging camera

    DOEpatents

    Smithpeter, Colin L.; Hoover, Eddie R.; Pain, Bedabrata; Hancock, Bruce R.; Nellums, Robert O.

    2007-09-25

    An apparatus is disclosed for generating a three-dimensional (3-D) image of a scene illuminated by a pulsed light source (e.g. a laser or light-emitting diode). The apparatus, referred to as a phase-sensitive 3-D imaging camera utilizes a two-dimensional (2-D) array of photodetectors to receive light that is reflected or scattered from the scene and processes an electrical output signal from each photodetector in the 2-D array in parallel using multiple modulators, each having inputs of the photodetector output signal and a reference signal, with the reference signal provided to each modulator having a different phase delay. The output from each modulator is provided to a computational unit which can be used to generate intensity and range information for use in generating a 3-D image of the scene. The 3-D camera is capable of generating a 3-D image using a single pulse of light, or alternately can be used to generate subsequent 3-D images with each additional pulse of light.
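
    The per-pixel demodulation arithmetic behind such a camera can be sketched as follows: with N modulator outputs taken at evenly spaced reference phase delays, the echo phase, and hence the range, follows from the standard N-step estimate. The modulation frequency and sample values below are illustrative, not the patent's hardware parameters.

        import numpy as np

        def range_from_phases(samples, f_mod=10e6, c=2.998e8):
            """samples: (N, ...) modulator outputs at reference phases 2*pi*k/N."""
            N = samples.shape[0]
            theta = 2 * np.pi * np.arange(N) / N
            s = np.tensordot(np.sin(theta), samples, axes=1)
            q = np.tensordot(np.cos(theta), samples, axes=1)
            phase = np.arctan2(s, q) % (2 * np.pi)
            return c * phase / (4 * np.pi * f_mod)   # round-trip phase to distance

        # Synthetic check: a pixel 5 m away, sampled at 4 reference phases
        true_phase = 4 * np.pi * 10e6 * 5.0 / 2.998e8
        theta = 2 * np.pi * np.arange(4) / 4
        meas = np.cos(true_phase - theta) + 2.0      # offset models ambient light
        print(range_from_phases(meas))               # approximately 5.0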

  9. Research on hyperspectral dynamic scene and image sequence simulation

    NASA Astrophysics Data System (ADS)

    Sun, Dandan; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a simulation method for hyper-spectral dynamic scenes and image sequences, intended for hyper-spectral equipment evaluation and target detection algorithms. Because of its high spectral resolution, strong band continuity, anti-interference capability and other advantages, hyper-spectral imaging technology has developed rapidly in recent years and is widely used in many areas such as optoelectronic target detection, military defense and remote sensing systems. Digital imaging simulation, as a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyper-spectral imaging equipment with lower development cost and shorter development period. Meanwhile, visual simulation can produce large amounts of original image data under various conditions for hyper-spectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a generation method for digital scenes. By building multiple sensor models for different bands and different bandwidths, hyper-spectral scenes in the visible, MWIR and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm and 0.1 μm, have been simulated. The final dynamic scenes are realistic and run in real time, at frame rates up to 100 Hz. By saving all the scene gray data at the same viewpoint, an image sequence is obtained. The analysis results show that, in both the infrared and visible bands, the grayscale variations of the simulated hyper-spectral images are consistent with the theoretical analysis.

  10. Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory

    NASA Astrophysics Data System (ADS)

    Dichter, W.; Doris, K.; Conkling, C.

    1982-06-01

    A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.

  11. High-temperature MIRAGE XL (LFRA) IRSP system development

    NASA Astrophysics Data System (ADS)

    McHugh, Steve; Franks, Greg; LaVeigne, Joe

    2017-05-01

    The development of very-large format infrared detector arrays has challenged the IR scene projector community to develop larger-format infrared emitter arrays. Many scene projector applications also require much higher simulated temperatures than can be generated with current technology. This paper will present an overview of resistive emitter-based (broadband) IR scene projector system development, as well as describe recent progress in emitter materials and pixel designs applicable for legacy MIRAGE XL Systems to achieve apparent temperatures >1000K in the MWIR. These new high temperature MIRAGE XL (LFRA) Digital Emitter Engines (DEE) will be "plug and play" equivalent with legacy MIRAGE XL DEEs; the rest of the system is reusable. Under the High Temperature Dynamic Resistive Array (HDRA) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>2k x 2k) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During earlier phases of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1500 K. These new emitter materials can be utilized with legacy RIICs to produce pixels that can achieve 7X the radiance of the legacy systems with low cost and low risk. A 'scalable' Read-In Integrated Circuit (RIIC) is also being developed under the same HDRA program to drive the high temperature pixels. This RIIC will utilize through-silicon via (TSV) and Quilt Packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the yield limitations inherent in large-scale integrated circuits. These quilted arrays can be fabricated in any N x M size in steps of 512.

  12. Optic Flow Dominates Visual Scene Polarity in Causing Adaptive Modification of Locomotor Trajectory

    NASA Technical Reports Server (NTRS)

    Nomura, Y.; Mulavara, A. P.; Richards, J. T.; Brady, R.; Bloomberg, Jacob J.

    2005-01-01

    Locomotion and posture are influenced and controlled by vestibular, visual and somatosensory information. Optic flow and scene polarity are two characteristics of a visual scene that have been identified as being critical in how they affect perceived body orientation and self-motion. The goal of this study was to determine the role of optic flow and visual scene polarity in adaptive modification of locomotor trajectory. Two computer-generated virtual reality scenes were shown to subjects during 20 minutes of treadmill walking. One scene was highly polarized while the other was composed of objects displayed in a non-polarized fashion. Both virtual scenes depicted constant-rate self-motion equivalent to walking counterclockwise around the perimeter of a room. Subjects performed Stepping Tests blindfolded before and after scene exposure to assess adaptive changes in locomotor trajectory. Subjects showed a significant difference in heading direction between pre- and post-adaptation stepping tests when exposed to either scene during treadmill walking. However, there was no significant difference in the subjects' heading direction between the two visual scene polarity conditions. It was therefore inferred from these data that optic flow has a greater role than visual polarity in influencing adaptive locomotor function.

  13. Comparison of image deconvolution algorithms on simulated and laboratory infrared images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, D.

    1994-11-15

    We compare Maximum Likelihood, Maximum Entropy, Accelerated Lucy-Richardson, Weighted Goodness of Fit, and Pixon reconstructions of simple scenes as a function of signal-to-noise ratio for simulated images with randomly generated noise. Reconstruction results of infrared images taken with the TAISIR (Temperature and Imaging System InfraRed) are also discussed.

  14. Three-camera stereo vision for intelligent transportation systems

    NASA Astrophysics Data System (ADS)

    Bergendahl, Jason; Masaki, Ichiro; Horn, Berthold K. P.

    1997-02-01

    A major obstacle in the application of stereo vision to intelligent transportation systems is its high computational cost. In this paper, a PC-based three-camera stereo vision system constructed with off-the-shelf components is described. The system serves as a tool for developing and testing robust algorithms which approach real-time performance. We present an edge-based, subpixel stereo algorithm which is adapted to permit accurate distance measurements to objects in the field of view using a compact camera assembly. Once computed, the 3D scene information may be directly applied to a number of in-vehicle applications, such as adaptive cruise control, obstacle detection, and lane tracking. Moreover, since the largest computational cost is incurred in generating the 3D scene information, multiple applications that leverage this information can be implemented in a single system with minimal cost. On-road applications, such as vehicle counting and incident detection, are also possible. Preliminary in-vehicle road trial results are presented.

  15. A comparison of directed search target detection versus in-scene target detection in Worldview-2 datasets

    NASA Astrophysics Data System (ADS)

    Grossman, S.

    2015-05-01

    Since the events of September 11, 2001, the intelligence focus has moved from large order-of-battle targets to small targets of opportunity. Additionally, the business community has discovered the use of remotely sensed data to anticipate demand and derive data on their competition. This requires the finer spectral and spatial fidelity now available to recognize those targets. This work hypothesizes that directed searches using calibrated data perform at least as well as manually intensive in-scene target detection searches. It uses calibrated Worldview-2 multispectral images with NEF-generated signatures and standard detection algorithms to compare bespoke directed search capabilities against ENVI™ in-scene search capabilities. Multiple execution runs are performed at increasing thresholds to generate detection rates. These rates are plotted and statistically analyzed. While individual head-to-head comparison results vary, 88% of the directed searches performed at least as well as in-scene searches, with 50% clearly outperforming in-scene methods. The results strongly support the premise that directed searches perform at least as well as comparable in-scene searches.

  16. NASA Fundamental Remote Sensing Science Research Program

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The NASA Fundamental Remote Sensing Research Program is described. The program provides a dynamic scientific base which is continually broadened and from which future applied research and development can draw support. In particular, the overall objectives and current studies of the scene radiation and atmospheric effect characterization (SRAEC) project are reviewed. The SRAEC research can be generically structured into four types of activities including observation of phenomena, empirical characterization, analytical modeling, and scene radiation analysis and synthesis. The first three activities are the means by which the goal of scene radiation analysis and synthesis is achieved, and thus are considered priority activities during the early phases of the current project. Scene radiation analysis refers to the extraction of information describing the biogeophysical attributes of the scene from the spectral, spatial, and temporal radiance characteristics of the scene including the atmosphere. Scene radiation synthesis is the generation of realistic spectral, spatial, and temporal radiance values for a scene with a given set of biogeophysical attributes and atmospheric conditions.

  17. Systems and Methods for Automated Water Detection Using Visible Sensors

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L. (Inventor); Matthies, Larry H. (Inventor); Bellutta, Paolo (Inventor)

    2016-01-01

    Systems and methods are disclosed that include automated machine vision that can utilize images of scenes captured by a 3D imaging system configured to image light within the visible light spectrum to detect water. One embodiment includes autonomously detecting water bodies within a scene including capturing at least one 3D image of a scene using a sensor system configured to detect visible light and to measure distance from points within the scene to the sensor system, and detecting water within the scene using a processor configured to detect regions within each of the at least one 3D images that possess at least one characteristic indicative of the presence of water.

  18. Study on general design of dual-DMD based infrared two-band scene simulation system

    NASA Astrophysics Data System (ADS)

    Pan, Yue; Qiao, Yang; Xu, Xi-ping

    2017-02-01

    The mid-wave infrared (MWIR) and long-wave infrared (LWIR) two-band scene simulation system is a kind of test equipment used for infrared two-band imaging seekers. It must not only cover the required working wavebands but also satisfy the essential requirement that its infrared radiation characteristics correspond to the real scene. Past single digital micromirror device (DMD) based infrared scene simulation systems do not take the huge difference between target and background radiation into account, and they cannot modulate the two-band light beams separately. Consequently, a single-DMD based infrared scene simulation system cannot accurately express the thermal scene model built by the upper computer, which limits its practicality. To solve this problem, we design a dual-DMD based, dual-channel, co-aperture, compact-structure infrared two-band scene simulation system. The operating principle of the system is introduced in detail, and the energy transfer process of the hardware-in-the-loop simulation experiment is analyzed as well. An equation for the signal-to-noise ratio of the infrared detector in the seeker is also derived, guiding the overall system design. The general design scheme of the system is given, including the creation of the infrared scene model, overall control, optical-mechanical structure design and image registration. By analyzing and comparing past designs, we discuss the arrangement of the optical engine framework in the system. Following the working principle and overall design, we summarize each of the key techniques in the system.

  19. Towards surgeon-authored VR training: the scene-development cycle.

    PubMed

    Dindar, Saleh; Nguyen, Thien; Peters, Jörg

    2016-01-01

    Enabling surgeon-educators to themselves create virtual reality (VR) training units promises greater variety, specialization, and relevance of the units. This paper describes a software bridge that semi-automates the scene-generation cycle, a key bottleneck in authoring, modeling, and developing VR units. Augmenting an open source modeling environment with physical behavior attachment and collision specifications yields single-click testing of the full force-feedback enabled anatomical scene.

  20. Qualitative spatial logic descriptors from 3D indoor scenes to generate explanations in natural language.

    PubMed

    Falomir, Zoe; Kluth, Thomas

    2017-06-24

    The challenge of describing 3D real scenes is tackled in this paper using qualitative spatial descriptors. A key point to study is which qualitative descriptors to use and how these qualitative descriptors must be organized to produce a suitable cognitive explanation. In order to find answers, a survey test was carried out with human participants who openly described a scene containing some pieces of furniture. The data obtained in this survey are analysed and, taking them into account, the QSn3D computational approach was developed, which uses an Xbox 360 Kinect to obtain 3D data from a real indoor scene. Object features are computed on these 3D data to identify objects in indoor scenes. The object orientation is computed, and qualitative spatial relations between the objects are extracted. These qualitative spatial relations are the input to a grammar which applies saliency rules obtained from the survey study and generates cognitive natural language descriptions of scenes. Moreover, these qualitative descriptors can be expressed as first-order logical facts in Prolog for further reasoning. Finally, a validation study is carried out to test whether the descriptions provided by the QSn3D approach are human readable. The obtained results show that their acceptability is higher than 82%.

  1. A Low-cost System for Generating Near-realistic Virtual Actors

    NASA Astrophysics Data System (ADS)

    Afifi, Mahmoud; Hussain, Khaled F.; Ibrahim, Hosny M.; Omar, Nagwa M.

    2015-06-01

    Generating virtual actors is one of the most challenging fields in computer graphics. The reconstruction of realistic virtual actors has received attention from both academic research and the film industry, with the goal of generating human-like virtual actors. Many movies have featured human-like virtual actors, where the audience cannot distinguish between real and virtual actors. The synthesis of realistic virtual actors is considered a complex process. Many techniques are used to generate a realistic virtual actor; however, they usually require expensive hardware equipment. In this paper, a low-cost system that generates near-realistic virtual actors is presented. The facial features of the real actor are blended with a virtual head that is attached to the actor's body. Compared with other techniques that generate virtual actors, the proposed system is low-cost, requiring only a single camera that records the scene, without any expensive hardware equipment. The results show that the system generates good near-realistic virtual actors that can be used in many applications.

  2. Design of two-DMD based zoom MW and LW dual-band IRSP using pixel fusion

    NASA Astrophysics Data System (ADS)

    Pan, Yue; Xu, Xiping; Qiao, Yang

    2018-06-01

    In order to test the anti-jamming ability of mid-wave infrared (MWIR) and long-wave infrared (LWIR) dual-band imaging systems, a zoom mid-wave (MW) and long-wave (LW) dual-band infrared scene projector (IRSP) based on two digital micro-mirror devices (DMDs) was designed using a projection method of pixel fusion. Two illumination systems, which illuminate the two DMDs directly with Köhler telecentric beams, were combined with the projection system in a spatial layout. The distances of the projection entrance pupil and the illumination exit pupil were also analyzed separately. MWIR and LWIR virtual scenes were generated by the two DMDs and fused by a dichroic beam combiner (DBC), resulting in two radiation distributions in the projected image. The optical performance of each component was evaluated by ray-tracing simulations. Apparent temperature and image contrast were demonstrated by imaging experiments. On the basis of the test and simulation results, the aberrations of the optical system were well corrected, and the quality of the projected image meets the test requirements.

  3. Development of a high-definition IR LED scene projector

    NASA Astrophysics Data System (ADS)

    Norton, Dennis T.; LaVeigne, Joe; Franks, Greg; McHugh, Steve; Vengel, Tony; Oleson, Jim; MacDougal, Michael; Westerfeld, David

    2016-05-01

    Next-generation Infrared Focal Plane Arrays (IRFPAs) are demonstrating ever-increasing frame rates, dynamic range, and format size, while moving to smaller-pitch arrays. These improvements in IRFPA performance and array format have challenged the IRFPA test community to accurately and reliably test them in a Hardware-In-the-Loop environment utilizing Infrared Scene Projector (IRSP) systems. The rapidly evolving IR seeker and sensor technology has, in some cases, surpassed the capabilities of existing IRSP technology. To meet the demands of future IRFPA testing, Santa Barbara Infrared Inc. is developing an Infrared Light Emitting Diode IRSP system. Design goals of the system include a peak radiance >2.0 W/cm2/sr within the 3.0-5.0 μm waveband, maximum frame rates >240 Hz, and >4 million pixels within a form factor supported by pixel pitches <=32 μm. This paper provides an overview of the current phase of development, system design considerations, and future development work.

  4. Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes.

    PubMed

    Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S

    2015-02-09

    A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics processing units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with the software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size have been significantly reduced. This greatly increases the computational speed and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three NVIDIA GTX TITAN GPU boards, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.
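
    As context for the speed-up claims, the computation being accelerated is, at heart, a per-point Fresnel zone-plate accumulation over the hologram plane. A schematic (non-real-time) NumPy version of that summation is sketched below; the wavelength, pixel pitch, and object points are illustrative values, not the paper's test data:

      import numpy as np

      # Hologram plane coordinates (illustrative parameters)
      wavelength = 532e-9          # m
      pitch = 8e-6                 # hologram pixel pitch, m
      H, W = 1080, 1920
      k = 2 * np.pi / wavelength

      jj, ii = np.mgrid[0:H, 0:W]
      x = (ii - W / 2) * pitch
      y = (jj - H / 2) * pitch

      # A few object points (x, y, z, amplitude); real scenes use thousands.
      points = [(0.0, 0.0, 0.20, 1.0), (1e-3, -1e-3, 0.25, 0.8)]

      field = np.zeros((H, W), dtype=np.complex128)
      for px, py, pz, amp in points:
          # Fresnel approximation of the spherical wave from each point
          r = pz + ((x - px) ** 2 + (y - py) ** 2) / (2 * pz)
          field += amp * np.exp(1j * k * r)

      cgh = np.real(field)         # real-valued fringe pattern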

  5. Three-directional motion-compensation mask-based novel look-up table on graphics processing units for video-rate generation of digital holographic videos of three-dimensional scenes.

    PubMed

    Kwon, Min-Woo; Kim, Seung-Cheol; Kim, Eun-Soo

    2016-01-20

    A three-directional motion-compensation mask-based novel look-up table method is proposed and implemented on graphics processing units (GPUs) for video-rate generation of digital holographic videos of three-dimensional (3D) scenes. Since the proposed method is designed to be well matched with the software and memory structures of GPUs, the number of compute-unified-device-architecture kernel function calls can be significantly reduced. This greatly increases the computational speed, allowing video-rate generation of the computer-generated hologram (CGH) patterns of 3D scenes. Experimental results reveal that the proposed method can generate 39.8 frames of Fresnel CGH patterns with 1920×1080 pixels per second for the test 3D video scenario with 12,088 object points on dual NVIDIA GTX TITAN GPU boards, and they confirm the feasibility of the proposed method in the practical application fields of electroholographic 3D displays.

  6. Multispectral Terrain Background Simulation Techniques For Use In Airborne Sensor Evaluation

    NASA Astrophysics Data System (ADS)

    Weinberg, Michael; Wohlers, Ronald; Conant, John; Powers, Edward

    1988-08-01

    A background simulation code developed at Aerodyne Research, Inc., called AERIE, is designed to reflect the major sources of clutter that are of concern to staring and scanning sensors of the type being considered for various airborne threat-warning (both aircraft and missile) sensors. The code is a first-principles model that can be used to produce a consistent image of the terrain for various spectral bands, i.e., provide the proper scene correlation both spectrally and spatially. The code utilizes both topographic and cultural features to model terrain, typically from DMA data, with a statistical overlay of the critical underlying surface properties (reflectance, emittance, and thermal factors) to simulate the resulting texture in the scene. Strong solar scattering from water surfaces is included, with allowance for wind-driven surface roughness. Clouds can be superimposed on the scene using physical cloud models and an analytical representation of the reflectivity obtained from scattering off spherical particles. The scene generator is augmented by collateral codes that allow for the generation of images at finer resolution. These codes interpolate the basic DMA databases using fractal procedures that preserve the high-frequency power spectral density behavior of the original scene. Scenes are presented illustrating variations in altitude, radiance, resolution, material, thermal factors, and emissivities. The basic models used to simulate the various scene components are described, and various "engineering level" approximations are incorporated to reduce the computational complexity of the simulation.
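
    The fractal interpolation idea above, adding high-frequency detail while preserving the scene's power spectral density behavior, can be illustrated with spectral synthesis. The exponent beta and grid size below are illustrative assumptions, not AERIE's parameters:

      import numpy as np

      def power_law_texture(n, beta=2.5, seed=0):
          # Random field whose PSD falls off as f**(-beta), a common
          # fractal surrogate for fine-scale terrain texture.
          rng = np.random.default_rng(seed)
          fx, fy = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
          f = np.hypot(fx, fy)
          f[0, 0] = f[0, 1]                      # avoid divide-by-zero at DC
          amplitude = f ** (-beta / 2.0)         # |F| ~ f^(-beta/2)
          phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
          tex = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
          return (tex - tex.mean()) / tex.std()  # normalized texture field

      detail = power_law_texture(256)            # add to interpolated elevation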

  7. Saliency-Guided Detection of Unknown Objects in RGB-D Indoor Scenes.

    PubMed

    Bao, Jiatong; Jia, Yunyi; Cheng, Yu; Xi, Ning

    2015-08-27

    This paper studies the problem of detecting unknown objects within indoor environments in an active and natural manner. A visual saliency scheme utilizing both color and depth cues is proposed to direct the machine system's attention toward unknown objects at salient positions in a 3D scene. The 3D points at the salient positions are selected as seed points for generating object hypotheses using the 3D shape. We perform multi-class labeling on a Markov random field (MRF) over the voxels of the 3D scene, combining cues from object hypotheses and 3D shape. The results from the MRF are further refined by merging labeled objects that are spatially connected and have highly correlated color histograms. Quantitative and qualitative evaluations on two benchmark RGB-D datasets illustrate the advantages of the proposed method. Object detection and manipulation experiments performed on a mobile manipulator validate its effectiveness and practicability in robotic applications.
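
    The final merging step lends itself to a compact sketch: two labeled objects are fused when they are spatially connected and their color histograms correlate strongly. The helper names and threshold below are assumptions for illustration, not values from the paper:

      import numpy as np

      def hist_correlation(h1, h2):
          # Normalized correlation between two color histograms
          a = h1 - h1.mean()
          b = h2 - h2.mean()
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

      def should_merge(hist_a, hist_b, spatially_connected, threshold=0.8):
          # Merge only connected regions with highly correlated appearance
          return spatially_connected and hist_correlation(hist_a, hist_b) > threshold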

  8. A Methodology to Assess the Impact of Optical and Electronic Crosstalk in a New Generation of Sensors Using Heritage Sensors

    NASA Technical Reports Server (NTRS)

    Oudrari, Hassan; Schwarting, Thomas; Chiang, Kwo-Fu; McIntire, Jeff; Pan, Chunhui; Xiong, Xiaoxiong; Butler, James

    2010-01-01

    Electronic and optical crosstalk are radiometric challenges that often exist in the focal plane design of many sensors such as MODIS. A methodology is described to assess the impact of optical and electronic crosstalk on the measured radiance and, thereafter, on the retrieval of geophysical products using MODIS Level 1 data sets. Based on a postulated set of electronic and optical crosstalk coefficients and a set of MODIS scenes, we have simulated the signal contamination on any detector on a focal plane when another detector on that focal plane is stimulated with a geophysical signal. The original MODIS scenes and the crosstalk-impacted scenes can be used with validated geophysical algorithms to derive the final data products. Products contaminated with crosstalk are then compared to those without contamination to assess the impact magnitude and location, which will allow us to separate Out-Of-Band (OOB) leaks from band-to-band optical crosstalk and identify potential failures to meet climate research requirements.
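
    In spirit, the contamination simulation amounts to adding coefficient-weighted leakage from every other detector to each detector's own signal. A minimal sketch, with invented coefficient magnitudes rather than the postulated MODIS values:

      import numpy as np

      rng = np.random.default_rng(0)
      n_det, n_samp = 10, 2048
      scene = rng.random((n_det, n_samp))        # radiance per detector/sample

      C = 1e-3 * rng.random((n_det, n_det))      # crosstalk coefficients
      np.fill_diagonal(C, 0.0)                   # no self-crosstalk term

      contaminated = scene + C @ scene           # leaked signal added in
      impact = contaminated - scene              # basis for product comparison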

  9. Saliency-Guided Detection of Unknown Objects in RGB-D Indoor Scenes

    PubMed Central

    Bao, Jiatong; Jia, Yunyi; Cheng, Yu; Xi, Ning

    2015-01-01

    This paper studies the problem of detecting unknown objects within indoor environments in an active and natural manner. A visual saliency scheme utilizing both color and depth cues is proposed to direct the machine system's attention toward unknown objects at salient positions in a 3D scene. The 3D points at the salient positions are selected as seed points for generating object hypotheses using the 3D shape. We perform multi-class labeling on a Markov random field (MRF) over the voxels of the 3D scene, combining cues from object hypotheses and 3D shape. The results from the MRF are further refined by merging labeled objects that are spatially connected and have highly correlated color histograms. Quantitative and qualitative evaluations on two benchmark RGB-D datasets illustrate the advantages of the proposed method. Object detection and manipulation experiments performed on a mobile manipulator validate its effectiveness and practicability in robotic applications. PMID:26343656

  10. Rotation-invariant features for multi-oriented text detection in natural images.

    PubMed

    Yao, Cong; Zhang, Xin; Bai, Xiang; Liu, Wenyu; Ma, Yi; Tu, Zhuowen

    2013-01-01

    Texts in natural scenes carry rich semantic information, which can be used to assist a wide range of applications, such as object recognition, image/video retrieval, mapping/navigation, and human-computer interaction. However, most existing systems are designed to detect and recognize horizontal (or near-horizontal) texts. Due to the increasing popularity of mobile-computing devices and applications, detecting texts of varying orientations from natural images under less controlled conditions has become an important but challenging task. In this paper, we propose a new algorithm to detect texts of varying orientations. Our algorithm is based on a two-level classification scheme and two sets of features specially designed for capturing the intrinsic characteristics of texts. To better evaluate the proposed method and compare it with the competing algorithms, we generate a comprehensive dataset with various types of texts in diverse real-world scenes. We also propose a new evaluation protocol, which is more suitable for benchmarking algorithms for detecting texts in varying orientations. Experiments on benchmark datasets demonstrate that our system compares favorably with the state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on texts of varying orientations in complex natural scenes.

  11. The simulation of automatic ladar sensor control during flight operations using USU LadarSIM software

    NASA Astrophysics Data System (ADS)

    Pack, Robert T.; Saunders, David; Fullmer, Rees; Budge, Scott

    2006-05-01

    USU LadarSIM Release 2.0 is a ladar simulator that can feed high-level mission scripts into a processor that automatically generates scan commands during flight simulations. The scan generation depends on specified flight trajectories and scenes consisting of terrain and targets. The scenes and trajectories can consist of either simulated or actual data. The first modeling step produces an outline of scan footprints in xyz space. Once mission goals have been analyzed and it is determined that the scan footprints are appropriately distributed or placed, specific scans can then be chosen for the generation of complete radiometry-based range images and point clouds. The simulation is capable of quickly modeling the ray-trace geometry associated with (1) various focal plane arrays and scanner configurations and (2) various scenes and trajectories associated with particular maneuvers or missions.

  12. The forensic holodeck: an immersive display for forensic crime scene reconstructions.

    PubMed

    Ebert, Lars C; Nguyen, Tuan T; Breitbeck, Robert; Braun, Marcel; Thali, Michael J; Ross, Steffen

    2014-12-01

    In forensic investigations, crime scene reconstructions are created based on a variety of three-dimensional image modalities. Although the data gathered are three-dimensional, their presentation on computer screens and paper is two-dimensional, which incurs a loss of information. By applying immersive virtual reality (VR) techniques, we propose a system that allows a crime scene to be viewed as if the investigator were present at the scene. We used a low-cost VR headset originally developed for computer gaming in our system. The headset offers a large viewing volume and tracks the user's head orientation in real-time, and an optical tracker is used for positional information. In addition, we created a crime scene reconstruction to demonstrate the system. In this article, we present a low-cost system that allows immersive, three-dimensional and interactive visualization of forensic incident scene reconstructions.

  13. Mathematics of Sensing, Exploitation, and Execution (MSEE) Hierarchical Representations for the Evaluation of Sensed Data

    DTIC Science & Technology

    2016-06-01

    Abstract (extraction fragment): theories of the mammalian visual system, and exploiting descriptive text that may accompany a still image for improved inference. The focus of the Brown team was on single images. Subject terms: computer vision, semantic description, street scenes, belief propagation, generative models, nonlinear filtering, sufficient statistics.

  14. Dynamic Geometry Capture with a Multi-View Structured-Light System

    DTIC Science & Technology

    2014-12-19

    Abstract (extraction fragment): scientific and medical applications such as quantifying improvement in physical therapy and measuring unnatural poses in ergonomic studies ... cases with limited scene texture. This direct generation of surface geometry provides a distinct advantage over multi-camera based systems.

  15. MONET: multidimensional radiative cloud scene model

    NASA Astrophysics Data System (ADS)

    Chervet, Patrick

    1999-12-01

    All cloud fields exhibit variable structures (bulges) and heterogeneities in water distribution. With the development of multidimensional radiative models by the atmospheric community, it is now possible to describe horizontal heterogeneities of the cloud medium and to study their influence on radiative quantities. We have developed a complete radiative cloud scene generator, called MONET (French acronym for MOdelisation des Nuages En Tridim.), to compute radiative cloud scenes from visible to infrared wavelengths for various viewing and solar conditions, different spatial scales, and various locations on the Earth. MONET is composed of two parts: a cloud medium generator (CSSM -- Cloud Scene Simulation Model) developed by the Air Force Research Laboratory, and a multidimensional radiative code (SHDOM -- Spherical Harmonic Discrete Ordinate Method) developed at the University of Colorado by Evans. MONET computes images for scenarios defined by user inputs: date, location, viewing angles, wavelength, spatial resolution, meteorological conditions (atmospheric profiles, cloud types)... For the same cloud scene, we can output different viewing conditions and/or various wavelengths. Shadowing effects on clouds or the ground are taken into account. This code is useful for studying heterogeneity effects on satellite data for various cloud types and spatial resolutions, and for determining the specifications of new imaging sensors.

  16. Texture generation for use in synthetic infrared scenes

    NASA Astrophysics Data System (ADS)

    Ota, Clem Z.; Rollins, John M.; Bleiweiss, Max P.

    1996-06-01

    In the process of creating synthetic scenes for use in simulations/visualizations, texture is used as a surrogate for 'high' spatial definition. For example, if one were to measure the location of every blade of grass and all of the characteristics of each blade of grass in a lawn, then a scene composed of that lawn would be expected to appear 'real'; however, because this process is excruciatingly laborious, various techniques have been devised to place the required details in the scene through texturing. Experience gained during the recent Smart Weapons Operability Enhancement Joint Test and Evaluation (SWOE JT&E) has shown the need for higher-fidelity texturing algorithms and a better parameterization of those in use. In this study, four aspects of the problem have been analyzed: texture extraction, texture insertion, texture metrics, and texture creation algorithms. The results of extracting real texture from an image, measuring it with a variety of metrics, and generating similar texture with three different algorithms are presented. These same metrics can be used to define clutter and to make objective comparisons between 'real' and synthetic (or artificial) scenes.

  17. A Photo Album of Earth Scheduling Landsat 7 Mission Daily Activities

    NASA Technical Reports Server (NTRS)

    Potter, William; Gasch, John; Bauer, Cynthia

    1998-01-01

    Landsat7 is a member of a new generation of Earth observation satellites. Landsat7 will carry on the mission of the aging Landsat 5 spacecraft by acquiring high resolution, multi-spectral images of the Earth surface for strategic, environmental, commercial, agricultural and civil analysis and research. One of the primary mission goals of Landsat7 is to accumulate and seasonally refresh an archive of global images with full coverage of Earth's landmass, less the central portion of Antarctica. This archive will enable further research into seasonal, annual and long-range trending analysis in such diverse research areas as crop yields, deforestation, population growth, and pollution control, to name just a few. A secondary goal of Landsat7 is to fulfill imaging requests from our international partners in the mission. Landsat7 will transmit raw image data from the spacecraft to 25 ground stations in 20 subscribing countries. Whereas earlier Landsat missions were scheduled manually (as are the majority of current low-orbit satellite missions), the task of manually planning and scheduling Landsat7 mission activities would be overwhelmingly complex when considering the large volume of image requests, the limited resources available, spacecraft instrument limitations, and the limited ground image processing capacity, not to mention avoidance of foul weather systems. The Landsat7 Mission Operation Center (MOC) includes an image scheduler subsystem that is designed to automate the majority of mission planning and scheduling, including selection of the images to be acquired, managing the recording and playback of the images by the spacecraft, scheduling ground station contacts for downlink of images, and generating the spacecraft commands for controlling the imager, recorder, transmitters and antennas. The image scheduler subsystem autonomously generates 90% of the spacecraft commanding with minimal manual intervention. The image scheduler produces a conflict-free schedule for acquiring images of the "best" 250 scenes daily for refreshing the global archive. It then equitably distributes the remaining resources for acquiring up to 430 scenes to satisfy requests by international subscribers. The image scheduler selects candidate scenes based on priority and age of the requests, and predicted cloud cover and sun angle at each scene. It also selects these scenes to avoid instrument constraint violations and maximizes efficiency of resource usage by encouraging acquisition of scenes in clusters. Of particular interest to the mission planners, it produces the resulting schedule in a reasonable time, typically within 15 minutes.
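
    The scheduling logic described above (priority and request age, predicted cloud cover and sun angle, conflict-free resource use) can be caricatured as a greedy scored selection. The field names, weights, and one-scene-per-time-slot conflict rule below are simplifying assumptions, not the MOC implementation:

      def schedule_scenes(candidates, capacity=250):
          # Score each candidate scene request; higher is better.
          def score(s):
              return (s["priority"]
                      + 0.1 * s["request_age_days"]
                      - 2.0 * s["predicted_cloud_cover"]   # prefer clear skies
                      + 0.5 * s["sun_angle_quality"])

          chosen, used_slots = [], set()
          for s in sorted(candidates, key=score, reverse=True):
              # One acquisition per time slot keeps the schedule conflict-free
              if s["time_slot"] not in used_slots:
                  chosen.append(s)
                  used_slots.add(s["time_slot"])
                  if len(chosen) == capacity:
                      break
          return chosen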

  18. Implementation of jump-diffusion algorithms for understanding FLIR scenes

    NASA Astrophysics Data System (ADS)

    Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.

    1995-07-01

    Our pattern-theoretic approach to the automated understanding of forward-looking infrared (FLIR) images brings the traditionally separate endeavors of detection, tracking, and recognition together into a unified jump-diffusion process. New objects are detected and object types are recognized through discrete jump moves. Between jumps, the location and orientation of objects are estimated via continuous diffusions. A hypothesized scene, simulated from the emissive characteristics of the hypothesized scene elements, is compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. The jump-diffusion process empirically generates the posterior distribution. Both the diffusion and jump operations involve the simulation of a scene produced by a hypothesized configuration. Scene simulation is most effectively accomplished by pipelined rendering engines such as those from Silicon Graphics. We demonstrate the execution of our algorithm on a Silicon Graphics Onyx/RealityEngine.
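
    The alternation the abstract describes, discrete jumps between scene hypotheses and continuous diffusions within one, can be sketched in miniature. The toy log-posterior and proposal scheme below are placeholders, not the paper's sensor-statistics likelihood:

      import numpy as np

      rng = np.random.default_rng(0)

      def log_posterior(scene):
          # Placeholder: favors few objects located near the origin
          return -sum(float(p @ p) for _, p in scene) - 0.5 * len(scene)

      def jump(scene):
          # Discrete move: propose adding or removing an object,
          # accepted by a Metropolis test between model orders.
          if rng.random() < 0.5 or len(scene) == 1:
              proposal = scene + [("target", rng.normal(size=2))]
          else:
              proposal = scene[:-1]
          accept = np.log(rng.random()) < log_posterior(proposal) - log_posterior(scene)
          return proposal if accept else scene

      def diffuse(scene, step=0.05):
          # Continuous move: Langevin-style drift toward higher posterior
          return [(t, p - 2.0 * step * p + np.sqrt(2.0 * step) * rng.normal(size=2))
                  for t, p in scene]

      scene = [("target", rng.normal(size=2))]
      for it in range(200):
          scene = jump(scene) if it % 10 == 0 else diffuse(scene)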

  19. Creating Three-Dimensional Scenes

    ERIC Educational Resources Information Center

    Krumpe, Norm

    2005-01-01

    Persistence of Vision Raytracer (POV-Ray), a free computer program for creating photo-realistic, three-dimensional scenes, and a link for Mathematica users interested in generating POV-Ray files from within Mathematica, are discussed. POV-Ray has great potential in secondary mathematics classrooms and helps in strengthening students' visualization…

  20. An earth imaging camera simulation using wide-scale construction of reflectance surfaces

    NASA Astrophysics Data System (ADS)

    Murthy, Kiran; Chau, Alexandra H.; Amin, Minesh B.; Robinson, M. Dirk

    2013-10-01

    Developing and testing advanced ground-based image processing systems for earth-observing remote sensing applications presents a unique challenge that requires advanced imagery simulation capabilities. This paper presents an earth-imaging multispectral framing camera simulation system called PayloadSim (PaySim) capable of generating terabytes of photorealistic simulated imagery. PaySim leverages previous work in 3-D scene-based image simulation, adding a novel method for automatically and efficiently constructing 3-D reflectance scenes by draping tiled orthorectified imagery over a geo-registered Digital Elevation Map (DEM). PaySim's modeling chain is presented in detail, with emphasis given to the techniques used to achieve computational efficiency. These techniques as well as cluster deployment of the simulator have enabled tuning and robust testing of image processing algorithms, and production of realistic sample data for customer-driven image product development. Examples of simulated imagery of Skybox's first imaging satellite are shown.
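
    The draping step, pairing each DEM post with reflectance sampled from the georegistered ortho tile, reduces to an array operation when both products share a grid. A simplified sketch under that shared-grid assumption (real tiles need map-coordinate resampling):

      import numpy as np

      rng = np.random.default_rng(0)
      dem = rng.random((512, 512)) * 100.0       # elevation grid, m
      ortho = rng.random((512, 512, 4))          # co-registered 4-band reflectance
      cell = 2.0                                 # DEM post spacing, m

      rows, cols = np.mgrid[0:dem.shape[0], 0:dem.shape[1]]
      x = cols * cell
      y = rows * cell

      # Each vertex carries position plus per-band reflectance for rendering
      vertices = np.concatenate(
          [x[..., None], y[..., None], dem[..., None], ortho], axis=-1
      )                                          # shape (512, 512, 7)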

  1. Coding for parallel execution of hardware-in-the-loop millimeter-wave scene generation models on multicore SIMD processor architectures

    NASA Astrophysics Data System (ADS)

    Olson, Richard F.

    2013-05-01

    Rendering of point-scatterer-based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real-world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity, emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include the use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g., OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point-scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.
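
    For orientation, the inner loop being vectorized is a complex phasor accumulation over scatterers binned by range. A schematic (non-real-time) NumPy rendering of that arithmetic, with an invented carrier frequency and geometry, is:

      import numpy as np

      c = 3.0e8
      f0 = 94.0e9                                 # illustrative mmW carrier, Hz
      k = 2.0 * np.pi * f0 / c

      rng = np.random.default_rng(0)
      ranges = 1000.0 + 50.0 * rng.random(10_000) # scatterer ranges, m
      amps = rng.random(10_000)                   # scatterer amplitudes

      bins = np.linspace(1000.0, 1050.0, 512)     # receiver range bins
      idx = np.clip(np.digitize(ranges, bins) - 1, 0, len(bins) - 1)

      returns = amps * np.exp(-1j * 2.0 * k * ranges)  # two-way phase
      signal = np.zeros(len(bins), dtype=np.complex128)
      np.add.at(signal, idx, returns)             # accumulate per range bin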

  2. Basic level scene understanding: categories, attributes and structures

    PubMed Central

    Xiao, Jianxiong; Hays, James; Russell, Bryan C.; Patterson, Genevieve; Ehinger, Krista A.; Torralba, Antonio; Oliva, Aude

    2013-01-01

    A longstanding goal of computer vision is to build a system that can automatically understand a 3D scene from a single image. This requires extracting semantic concepts and 3D information from 2D images which can depict an enormous variety of environments that comprise our visual world. This paper summarizes our recent efforts toward these goals. First, we describe the richly annotated SUN database which is a collection of annotated images spanning 908 different scene categories with object, attribute, and geometric labels for many scenes. This database allows us to systematically study the space of scenes and to establish a benchmark for scene and object recognition. We augment the categorical SUN database with 102 scene attributes for every image and explore attribute recognition. Finally, we present an integrated system to extract the 3D structure of the scene and objects depicted in an image. PMID:24009590

  3. Semantic guidance of eye movements in real-world scenes

    PubMed Central

    Hwang, Alex D.; Wang, Hsueh-Cheng; Pomplun, Marc

    2011-01-01

    The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying Latent Semantic Analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects’ gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects’ eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. PMID:21426914
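
    The map construction itself is simple once each annotated object has an LSA vector: paint the cosine similarity to the currently fixated object (or search target) into that object's region. A sketch with stand-in vectors (the study derived them from LabelMe object labels):

      import numpy as np

      def cosine(u, v):
          return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

      def semantic_saliency_map(shape, objects, reference_vec):
          # objects: list of (mask, lsa_vector); mask is a boolean image array
          saliency = np.zeros(shape)
          for mask, vec in objects:
              saliency[mask] = cosine(vec, reference_vec)
          return saliency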

  4. Semantic guidance of eye movements in real-world scenes.

    PubMed

    Hwang, Alex D; Wang, Hsueh-Cheng; Pomplun, Marc

    2011-05-25

    The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying latent semantic analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects' gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects' eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Real-time 3D change detection of IEDs

    NASA Astrophysics Data System (ADS)

    Wathen, Mitch; Link, Norah; Iles, Peter; Jinkerson, John; Mrstik, Paul; Kusevic, Kresimir; Kovats, David

    2012-06-01

    Road-side bombs are a real and continuing threat to soldiers in theater. CAE USA recently developed a prototype Volume-based Intelligence Surveillance Reconnaissance (VISR) sensor platform for IED detection. This vehicle-mounted prototype sensor system uses a high-data-rate LiDAR (1.33 million range measurements per second) to generate a 3D mapping of roadways. The mapped data is used as a reference to generate real-time change detection on future trips along the same roadways. The prototype VISR system is briefly described. The focus of this paper is the methodology used to process the 3D LiDAR data, in real time, to detect small changes on and near the roadway ahead of a vehicle traveling at moderate speeds, with sufficient warning to stop the vehicle at a safe distance from the threat. The system relies on accurate navigation equipment to geo-reference the reference run and the change-detection run. Since it was recognized early in the project that detection of small changes could not be achieved with accurate navigation solutions alone, a scene alignment algorithm was developed to register the reference run with the change-detection run prior to applying the change detection algorithm. Good success was achieved in simultaneous real-time processing of scene alignment plus change detection.
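
    Once the two runs are registered, the change test itself can be as simple as differencing occupancy grids built from the georeferenced point clouds. A minimal sketch, with an assumed cell size and extent (the alignment step above is taken as given):

      import numpy as np

      def occupancy(points, origin, cell=0.1, shape=(400, 400, 40)):
          # Quantize an (N, 3) point cloud into a boolean voxel grid
          idx = np.floor((points - origin) / cell).astype(int)
          ok = np.all((idx >= 0) & (idx < np.array(shape)), axis=1)
          grid = np.zeros(shape, dtype=bool)
          grid[tuple(idx[ok].T)] = True
          return grid

      def detect_changes(reference_pts, current_pts, origin):
          ref = occupancy(reference_pts, origin)
          cur = occupancy(current_pts, origin)
          return cur & ~ref        # newly occupied cells: candidate threats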

  6. Video content parsing based on combined audio and visual information

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-08-01

    While previous research on audiovisual data segmentation and indexing primarily focuses on the pictorial part, significant clues contained in the accompanying audio flow are often ignored. A fully functional system for video content parsing can be achieved more successfully through a proper combination of audio and visual information. By investigating the data structure of different video types, we present tools for both audio and visual content analysis and a scheme for video segmentation and annotation in this research. In the proposed system, video data are segmented into audio scenes and visual shots by detecting abrupt changes in audio and visual features, respectively. Then, each audio scene is categorized and indexed as one of the basic audio types, while a visual shot is represented by keyframes and associated image features. An index table is then generated automatically for each video clip based on the integration of outputs from audio and visual analysis. It is shown that the proposed system provides satisfying video indexing results.
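
    On the visual side, the abrupt-change test can be sketched as a histogram distance between consecutive frames; the threshold is an illustrative stand-in, and the audio-break detection is analogous on audio features:

      import numpy as np

      def frame_hist(frame, bins=32):
          h, _ = np.histogram(frame, bins=bins, range=(0, 255))
          return h / max(h.sum(), 1)

      def shot_boundaries(frames, threshold=0.35):
          # Flag frame i as a cut when its histogram jumps from frame i-1
          cuts, prev = [], frame_hist(frames[0])
          for i, frame in enumerate(frames[1:], start=1):
              cur = frame_hist(frame)
              if np.abs(cur - prev).sum() > threshold:  # L1 histogram distance
                  cuts.append(i)
              prev = cur
          return cuts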

  7. Computer Generated Image: Relative Training Effectiveness of Day Versus Night Visual Scenes. Final Report.

    ERIC Educational Resources Information Center

    Martin, Elizabeth L.; Cataneo, Daniel F.

    A study was conducted by the Air Force to determine the extent to which takeoff/landing skills learned in a simulator equipped with a night visual system would transfer to daytime performance in the aircraft. A transfer-of-training design was used to assess the differential effectiveness of simulator training with a day versus a night…

  8. Materials learning from life: concepts for active, adaptive and autonomous molecular systems.

    PubMed

    Merindol, Rémi; Walther, Andreas

    2017-09-18

    Bioinspired out-of-equilibrium systems will set the scene for the next generation of molecular materials with active, adaptive, autonomous, emergent and intelligent behavior. Indeed life provides the best demonstrations of complex and functional out-of-equilibrium systems: cells keep track of time, communicate, move, adapt, evolve and replicate continuously. Stirred by the understanding of biological principles, artificial out-of-equilibrium systems are emerging in many fields of soft matter science. Here we put in perspective the molecular mechanisms driving biological functions with the ones driving synthetic molecular systems. Focusing on principles that enable new levels of functionalities (temporal control, autonomous structures, motion and work generation, information processing) rather than on specific material classes, we outline key cross-disciplinary concepts that emerge in this challenging field. Ultimately, the goal is to inspire and support new generations of autonomous and adaptive molecular devices fueled by self-regulating chemistry.

  9. Smart Camera System for Aircraft and Spacecraft

    NASA Technical Reports Server (NTRS)

    Delgado, Frank; White, Janis; Abernathy, Michael F.

    2003-01-01

    This paper describes a new approach to situation awareness that combines video sensor technology and synthetic vision technology in a unique fashion to create a hybrid vision system. Our implementation of the technology, called "SmartCam3D" (SC3D), has been flight tested by both NASA and the Department of Defense with excellent results. This paper details its development and flight test results. Windshields and windows add considerable weight and risk to vehicle design, and because of this, many future vehicles will employ a windowless cockpit design. This windowless cockpit design philosophy prompted us to look at what would be required to develop a system that provides crewmembers and operations personnel an appropriate level of situation awareness. The system created to date provides a real-time 3D perspective display that can be used during all weather and visibility conditions. While the advantages of a synthetic-vision-only system are considerable, its major disadvantage is that it displays a synthetic scene created using "static" data acquired by an aircraft or satellite at some point in the past. The SC3D system presented in this paper is a hybrid synthetic vision system that fuses a live video stream with a computer-generated synthetic scene. This hybrid system can display a dynamic, real-time scene of a region of interest, enriched by information from a synthetic environment system, see figure 1. The SC3D system has been flight tested on several X-38 flight tests performed over the last several years and on an Army Unmanned Aerial Vehicle (UAV) ground control station earlier this year. Additional testing using an assortment of UAV ground control stations and UAV simulators from the Army and Air Force will be conducted later this year.

  10. Video coding for next-generation surveillance systems

    NASA Astrophysics Data System (ADS)

    Klasen, Lena M.; Fahlander, Olov

    1997-02-01

    Video is used as a recording medium in surveillance systems and increasingly by the Swedish Police Force. Methods for analyzing video using an image processing system have recently been introduced at the Swedish National Laboratory of Forensic Science, and new methods are the focus of a research project at Linkoping University, Image Coding Group. The accuracy of these forensic investigations often depends on the quality of the video recordings, and one of the major problems when analyzing videos from crime scenes is the poor quality of the recordings. Enhancing poor image quality might add manipulative or subjective effects and does not seem to be the right way to obtain reliable analysis results. Surveillance systems in use today are mainly based on video techniques, VHS or S-VHS, and the weakest link is the video cassette recorder (VCR). Multiplexers that select one of many camera outputs for recording are another problem, as they often filter the video signal, and recording is limited to only one of the available cameras connected to the VCR. A way around the problem of poor recording is to record all camera outputs digitally and simultaneously. It is also very important to build such a system bearing in mind that image processing analysis methods are becoming more important as a complement to the human eye. Using one or more cameras yields a large amount of data, and the need for data compression is more than obvious. Crime scenes often involve persons or moving objects, and the available coding techniques are more or less useful. Our goal is to propose a possible system that is the best compromise with respect to what needs to be recorded, movements in the recorded scene, loss of information, resolution, etc., to secure efficient recording of the crime and enable forensic analysis. The preventive effect of a well-functioning surveillance system and well-established image analysis methods is not to be neglected. Aspects of this next generation of digital surveillance systems are discussed in this paper.

  11. Computer-generated hologram calculation for real scenes using a commercial portable plenoptic camera

    NASA Astrophysics Data System (ADS)

    Endo, Yutaka; Wakunami, Koki; Shimobaba, Tomoyoshi; Kakue, Takashi; Arai, Daisuke; Ichihashi, Yasuyuki; Yamamoto, Kenji; Ito, Tomoyoshi

    2015-12-01

    This paper shows the process used to calculate a computer-generated hologram (CGH) for real scenes under natural light using a commercial portable plenoptic camera. In the CGH calculation, a light field captured with the commercial plenoptic camera is converted into a complex amplitude distribution. Then the converted complex amplitude is propagated to a CGH plane. We tested both numerical and optical reconstructions of the CGH and showed that the CGH calculation from captured data with the commercial plenoptic camera was successful.

  12. Automated content and quality assessment of full-motion-video for the generation of meta data

    NASA Astrophysics Data System (ADS)

    Harguess, Josh

    2015-05-01

    Virtually all of the video data (and full-motion video (FMV)) currently collected and stored in support of missions has been corrupted to various extents by image acquisition and compression artifacts. Additionally, video collected by wide-area motion imagery (WAMI) surveillance systems, unmanned aerial vehicles (UAVs), and similar sources is often of low quality or otherwise corrupted, so that it is not worth storing or analyzing. To make progress on automatic video analysis, the first problem to solve is deciding whether the content of the video is even worth analyzing to begin with. We present a work in progress addressing three types of scenes typically found in real-world data stored in support of Department of Defense (DoD) missions: no or very little motion in the scene, large occlusions in the scene, and fast camera motion. Each of these produces video that is generally not usable by an analyst or an automated algorithm for mission support and therefore should be removed or flagged to the user as such. We utilize recent computer vision advances in motion detection and optical flow to automatically assess FMV for the identification and generation of meta-data (or tagging) of video segments which exhibit the unwanted scenarios described above. Results are shown on representative real-world video data.
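
    Two of the three cases, near-static scenes and fast camera motion, can be triaged with a plain motion-energy statistic; the thresholds below are illustrative stand-ins for the motion-detection and optical-flow measures the work uses (occlusion detection needs flow structure and is omitted here):

      import numpy as np

      def motion_energy(a, b):
          # Mean absolute inter-frame difference
          return float(np.mean(np.abs(b.astype(float) - a.astype(float))))

      def tag_segment(frames, low=0.5, high=20.0):
          energies = [motion_energy(a, b) for a, b in zip(frames, frames[1:])]
          mean_e = float(np.mean(energies))
          if mean_e < low:
              return "static"        # little or no motion in the scene
          if mean_e > high:
              return "fast-camera"   # global motion too large to exploit
          return "usable"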

  13. Satellite markers: a simple method for ground truth car pose on stereo video

    NASA Astrophysics Data System (ADS)

    Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco

    2018-04-01

    Predicting the future location of other cars is a must for advanced safety systems. Remote estimation of a car's pose, and particularly its heading angle, is key to predicting its future location. Stereo vision systems make it possible to obtain the 3D information of a scene. Ground truth in this specific context is associated with referential information about the depth, shape, and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data fusion task associated with the combination of different kinds of sensors. The novelty of this paper is a method to generate ground truth car pose from video data alone. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level, which are key to quantifying the performance of a stereo vision system when it is moving, because the system is subjected to undesired vibrations and/or leaning. We developed a video post-processing technique which employs a common camera calibration tool for the 3D ground truth generation. In our case study, we focus on accurate car heading angle estimation of a moving car under realistic imagery. Our satellite marker method provides accurate car pose at frame level, as well as the instantaneous spatial orientation of each camera at frame level.

  14. Design, optimisation and preliminary validation of a human specific loop-mediated amplification assay for the rapid detection of human DNA at forensic crime scenes.

    PubMed

    Hird, H J; Brown, M K

    2017-11-01

    The identification of samples at a crime scene that require forensic DNA typing has been the focus of recent research interest. We propose a simple but sensitive analysis system which can be deployed at a crime scene to identify crime scene stains as human or non-human. The proposed system uses the isothermal amplification of DNA in a rapid assay format, which returns results in as little as 30 min from sampling. The assay system runs on the Genie II device, a proven in-field detection system which could be deployed at a crime scene. The results presented here demonstrate that the system was sufficiently specific and sensitive and was able to detect the presence of human blood, semen and saliva on mock forensic samples. Copyright © 2017. Published by Elsevier B.V.

  15. Development of HWIL Testing Capabilities for Satellite Target Emulation at AEDC

    NASA Astrophysics Data System (ADS)

    Lowry, H.; Crider, D.; Burns, J.; Thompson, R.; Goldsmith, G., II; Sholes, W.

    Programs involved in Space Situational Awareness (SSA) need the capability to test satellite sensors in a Hardware-in-the-Loop (HWIL) environment. Testing in a ground system avoids the significant cost of on-orbit test targets and the resulting issues such as debris mitigation and other in-space testing implications. The space sensor test facilities at AEDC consist of cryo-vacuum chambers that have been developed to project simulated targets to air-borne, space-borne, and ballistic platforms. The 7V chamber performs calibration and characterization of surveillance and seeker systems, as well as some mission simulation. The 10V chamber is being upgraded to provide real-time target simulation during the detection, acquisition, discrimination, and terminal phases of a seeker mission. The objective of the Satellite Emulation project is to upgrade this existing capability to support the ability to discern and track other satellites and orbital debris in a HWIL capability. It would provide a baseline for realistic testing of satellite surveillance sensors, operated in a controlled environment. Many sensor functions could be tested, including scene recognition and maneuvering control software, using real interceptor hardware and software. Statistically significant and repeatable datasets produced by the satellite emulation system can be acquired during such tests and saved for further analysis. In addition, the robustness of the discrimination and tracking algorithms can be investigated by a parametric analysis using slightly different scenarios; this will be used to determine critical points where a sensor system might fail. The radiometric characteristics of satellites are expected to be similar to the targets and decoys that make up a typical interceptor mission scenario, since they are near ambient temperature. Their spectral reflectivity, emissivity, and shape must also be considered, but the projection systems employed in the 7V and 10V chambers should be capable of providing the simulation of satellites as well. There may also be a need for greater radiometric intensity or shorter time response. An appropriate satellite model is integral to the scene generation process to meet the requirements of SSA programs. The Kinetic Kill Vehicle Hardware-in-the-Loop Simulator (KHILS) facility and the Guided Weapons Evaluation Facility (GWEF), both at Eglin Air Force Base, FL, are assisting in developing the scene projection hardware, based on their significant test experience using resistive emitter arrays to test interceptors in a real-time environment. The Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) will develop the Scene Generation System for the real-time mission simulation.

  16. Where's Wally: the influence of visual salience on referring expression generation.

    PubMed

    Clarke, Alasdair D F; Elsner, Micha; Rohde, Hannah

    2013-01-01

    Referring expression generation (REG) presents the converse problem to visual search: given a scene and a specified target, how does one generate a description which would allow somebody else to quickly and accurately locate the target? Previous work in psycholinguistics and natural language processing has failed to find an important and integrated role for vision in this task. That previous work, which relies largely on simple scenes, tends to treat vision as a pre-process for extracting feature categories that are relevant to disambiguation. However, the visual search literature suggests that some descriptions are better than others at enabling listeners to search efficiently within complex stimuli. This paper presents a study testing whether participants are sensitive to visual features that allow them to compose such "good" descriptions. Our results show that visual properties (salience, clutter, area, and distance) influence REG for targets embedded in images from the Where's Wally? books. Referring expressions for large targets are shorter than those for smaller targets, and expressions about targets in highly cluttered scenes use more words. We also find that participants are more likely to mention non-target landmarks that are large, salient, and in close proximity to the target. These findings identify a key role for visual salience in language production decisions and highlight the importance of scene complexity for REG.

  17. Exploiting current-generation graphics hardware for synthetic-scene generation

    NASA Astrophysics Data System (ADS)

    Tanner, Michael A.; Keen, Wayne A.

    2010-04-01

    Increasing seeker frame rate and pixel count, as well as the demand for higher levels of scene fidelity, have driven scene generation software for hardware-in-the-loop (HWIL) and software-in-the-loop (SWIL) testing to higher levels of parallelization. Because modern PC graphics cards provide multiple computational cores (240 shader cores on current NVIDIA GeForce and Quadro cards), implementation of phenomenology codes on graphics processing units (GPUs) offers significant potential for simultaneous enhancement of simulation frame rate and fidelity. Taking advantage of this potential requires algorithm implementations that are structured to minimize data transfers between the central processing unit (CPU) and the GPU. In this paper, preliminary methodologies developed at the Kinetic Hardware In-The-Loop Simulator (KHILS) will be presented, including language tradeoffs between conventional shader programming, Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL), with performance trades and possible pathways for future tool development.

  18. A knowledge-based machine vision system for space station automation

    NASA Technical Reports Server (NTRS)

    Chipman, Laure J.; Ranganath, H. S.

    1989-01-01

    A simple knowledge-based approach to the recognition of objects in man-made scenes is being developed. Specifically, the system under development is a proposed enhancement to a robot arm for use in the space station laboratory module. The system will take a request from a user to find a specific object, and locate that object by using its camera input and information from a knowledge base describing the scene layout and attributes of the object types included in the scene. In order to use realistic test images in developing the system, researchers are using photographs of actual NASA simulator panels, which provide similar types of scenes to those expected in the space station environment. Figure 1 shows one of these photographs. In traditional approaches to image analysis, the image is transformed step by step into a symbolic representation of the scene. Often the first steps of the transformation are done without any reference to knowledge of the scene or objects. Segmentation of an image into regions generally produces a counterintuitive result in which regions do not correspond to objects in the image. After segmentation, a merging procedure attempts to group regions into meaningful units that will more nearly correspond to objects. Here, researchers avoid segmenting the image as a whole, and instead use a knowledge-directed approach to locate objects in the scene. The knowledge-based approach to scene analysis is described and the categories of knowledge used in the system are discussed.

  19. Research on three-dimensional real scene technology of Sichuan-Tibet highway

    NASA Astrophysics Data System (ADS)

    Yin, Peng; Bo, Xianglei; Liu, Fen

    2018-04-01

    This paper studies three-dimensional real scene technology as applied to highway simulation, and presents a system that realizes a three-dimensional real scene of the Sichuan-Tibet highway. The system remedies the performance and usability shortcomings of the traditional Sichuan-Tibet highway geographic information system. Forces stationed in Tibet can use this system to improve the effectiveness of adaptive driving training and command decision-making.

  20. Three-dimensional measurement system for crime scene documentation

    NASA Astrophysics Data System (ADS)

    Adamczyk, Marcin; Hołowko, Elwira; Lech, Krzysztof; Michoński, Jakub; Mączkowski, Grzegorz; Bolewicki, Paweł; Januszkiewicz, Kamil; Sitnik, Robert

    2017-10-01

    Three-dimensional measurements (such as photogrammetry, time-of-flight, structure-from-motion or structured-light techniques) are becoming a standard in the crime scene documentation process. The use of 3D measurement techniques provides an opportunity to prepare a more insightful investigation and helps to show every trace in the context of the entire crime scene. In this paper we present a hierarchical, three-dimensional measurement system designed for the crime scene documentation process. Our system reflects current standards in crime scene documentation: it performs measurement in two stages. The first stage, the most general, uses a scanner with relatively low spatial resolution but a large measuring volume and documents the scene as a whole. The second stage is much more detailed: high resolution but a smaller measuring volume, for areas that require a more detailed approach. The documentation process is supervised by a specialized application, CrimeView3D, a software platform for measurement management (connecting to scanners and carrying out measurements, with automatic or semi-automatic data registration in real time) and data visualization (3D visualization of documented scenes). It also provides a series of useful tools for forensic technicians: a virtual measuring tape, searching for sources of blood spatter, virtual walk-throughs of the crime scene, and many others. We also report research on the metrological validation of the scanners, performed according to the VDI/VDE standard, and results from measurement sessions conducted on real crime scenes in cooperation with technicians from the Central Forensic Laboratory of the Police.

  1. Interactive distributed hardware-accelerated LOD-sprite terrain rendering with stable frame rates

    NASA Astrophysics Data System (ADS)

    Swan, J. E., II; Arango, Jesus; Nakshatrala, Bala K.

    2002-03-01

    A stable frame rate is important for interactive rendering systems. Image-based modeling and rendering (IBMR) techniques, which model parts of the scene with image sprites, are promising for interactive systems because they allow the sprite to be manipulated instead of the underlying scene geometry. However, a frequent problem with IBMR techniques is an unstable frame rate, because generating an image sprite (with 3D rendering) is time-consuming relative to manipulating the sprite (with 2D image resampling). This paper describes one solution to this problem: distributing an IBMR technique into a collection of cooperating threads and executable programs across two computers. The particular IBMR technique distributed here is the LOD-Sprite algorithm. This technique uses a multiple level-of-detail (LOD) scene representation. It first renders a keyframe from a high-LOD representation, and then caches the frame as an image sprite. It renders subsequent spriteframes by texture-mapping the cached image sprite onto a lower-LOD representation. We describe a distributed architecture and implementation of LOD-Sprite, in the context of terrain rendering, which takes advantage of graphics hardware. We present timing results which indicate we have achieved a stable frame rate. Beyond LOD-Sprite, our distribution method holds promise for other IBMR techniques.
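
    The scheduling heart of such a system is the decision of when the cached sprite is still good enough to warp and when a new high-LOD keyframe must be rendered. A minimal sketch of that decision, with an invented divergence metric and threshold:

      import numpy as np

      def view_divergence(key_eye, key_dir, eye, direction):
          angular = 1.0 - float(key_dir @ direction)     # 0 when aligned
          translational = float(np.linalg.norm(eye - key_eye))
          return angular + 0.01 * translational

      def next_frame(state, eye, direction, threshold=0.05):
          # Re-render (slow, 3D) only when the view has drifted too far
          if view_divergence(state["key_eye"], state["key_dir"], eye, direction) > threshold:
              state["key_eye"], state["key_dir"] = eye, direction
              return "render_keyframe"   # high-LOD render, cached as sprite
          return "warp_sprite"           # cheap 2D resample of cached sprite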

  2. Unique digital imagery interface between a silicon graphics computer and the kinetic kill vehicle hardware-in-the-loop simulator (KHILS) wideband infrared scene projector (WISP)

    NASA Astrophysics Data System (ADS)

    Erickson, Ricky A.; Moren, Stephen E.; Skalka, Marion S.

    1998-07-01

    Providing a flexible and reliable source of IR target imagery is absolutely essential for operation of an IR Scene Projector in a hardware-in-the-loop simulation environment. The Kinetic Kill Vehicle Hardware-in-the-Loop Simulator (KHILS) at Eglin AFB provides the capability, and requisite interfaces, to supply target IR imagery to its Wideband IR Scene Projector (WISP) from three separate sources at frame rates ranging from 30 - 120 Hz. Video can be input from a VCR source at the conventional 30 Hz frame rate. Pre-canned digital imagery and test patterns can be downloaded into stored memory from the host processor and played back as individual still frames or movie sequences up to a 120 Hz frame rate. Dynamic real-time imagery to the KHILS WISP projector system, at a 120 Hz frame rate, can be provided from a Silicon Graphics Onyx computer system normally used for generation of digital IR imagery through a custom CSA-built interface which is available for either the SGI/DVP or SGI/DD02 interface port. The primary focus of this paper is to describe our technical approach and experience in the development of this unique SGI computer and WISP projector interface.

  3. Using virtual reality to test the regularity priors used by the human visual system

    NASA Astrophysics Data System (ADS)

    Palmer, Eric; Kwon, TaeKyu; Pizlo, Zygmunt

    2017-09-01

    Virtual reality applications provide an opportunity to test human vision in well-controlled scenarios that would be difficult to generate in real physical spaces. This paper presents a study intended to evaluate the importance of the regularity priors used by the human visual system. Using a CAVE simulation, subjects viewed virtual objects in a variety of experimental manipulations. In the first experiment, the subject was asked to count the objects in a scene that was viewed either right-side-up or upside-down for 4 seconds. The subject counted more accurately in the right-side-up condition regardless of the presence of binocular disparity or color. In the second experiment, the subject was asked to reconstruct the scene from a different viewpoint. Reconstructions were accurate, but the position and orientation error was twice as high when the scene was rotated by 45°, compared to 22.5°. Similarly to the first experiment, there was little difference between monocular and binocular viewing. In the third experiment, the subject was asked to adjust the position of one object to match the depth extent to the frontal extent among three objects. Performance was best with symmetrical objects and became poorer with asymmetrical objects and poorest with only small circular markers on the floor. Finally, in the fourth experiment, we demonstrated reliable performance in monocular and binocular recovery of 3D shapes of objects standing naturally on the simulated horizontal floor. Based on these results, we conclude that gravity, horizontal ground, and symmetry priors play an important role in veridical perception of scenes.

  4. Probability distributions of whisker-surface contact: quantifying elements of the rat vibrissotactile natural scene.

    PubMed

    Hobbs, Jennifer A; Towal, R Blythe; Hartmann, Mitra J Z

    2015-08-01

    Analysis of natural scene statistics has been a powerful approach for understanding neural coding in the auditory and visual systems. In the field of somatosensation, it has been more challenging to quantify the natural tactile scene, in part because somatosensory signals are so tightly linked to the animal's movements. The present work takes a step towards quantifying the natural tactile scene for the rat vibrissal system by simulating rat whisking motions to systematically investigate the probabilities of whisker-object contact in naturalistic environments. The simulations permit an exhaustive search through the complete space of possible contact patterns, thereby allowing for the characterization of the patterns that would most likely occur during long sequences of natural exploratory behavior. We specifically quantified the probabilities of 'concomitant contact', that is, given that a particular whisker makes contact with a surface during a whisk, what is the probability that each of the other whiskers will also make contact with the surface during that whisk? Probabilities of concomitant contact were quantified in simulations that assumed increasingly naturalistic conditions: first, the space of all possible head poses; second, the space of behaviorally preferred head poses as measured experimentally; and third, common head poses in environments such as cages and burrows. As environments became more naturalistic, the probability distributions shifted from exhibiting a 'row-wise' structure to a more diagonal structure. Results also reveal that the rat appears to use motor strategies (e.g. head pitches) that generate contact patterns that are particularly well suited to extract information in the presence of uncertainty. © 2015. Published by The Company of Biologists Ltd.
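
    The 'concomitant contact' statistic itself reduces to a conditional probability over binary contact records. Below is a minimal sketch, assuming a hypothetical (whisks × whiskers) boolean contact array in place of the paper's whisking simulation:

    ```python
    import numpy as np

    def concomitant_contact(contacts: np.ndarray) -> np.ndarray:
        """P(whisker j contacts | whisker i contacts), from (whisks, whiskers) bools."""
        c = contacts.astype(float)
        joint = c.T @ c                    # co-contact counts, shape (w, w)
        per_whisker = c.sum(axis=0)        # contact count per whisker
        with np.errstate(divide="ignore", invalid="ignore"):
            return joint / per_whisker[:, None]   # row i: condition on whisker i

    rng = np.random.default_rng(0)
    demo = rng.random((1000, 5)) < 0.3     # toy stand-in for simulated contacts
    print(concomitant_contact(demo).round(2))
    ```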

  5. Template construction grammar: from visual scene description to language comprehension and agrammatism.

    PubMed

    Barrès, Victor; Lee, Jinyong

    2014-01-01

    How does the language system coordinate with our visual system to yield flexible integration of linguistic, perceptual, and world-knowledge information when we communicate about the world we perceive? Schema theory is a computational framework that allows the simulation of perceptuo-motor coordination programs on the basis of known brain operating principles such as cooperative computation and distributed processing. We first present its application to a model of language production, SemRep/TCG, which combines a semantic representation of visual scenes (SemRep) with Template Construction Grammar (TCG) as a means to generate verbal descriptions of a scene from its associated SemRep graph. SemRep/TCG combines the neurocomputational framework of schema theory with the representational format of construction grammar in a model linking eye-tracking data to visual scene descriptions. We then offer a conceptual extension of TCG to include language comprehension and address data on the role of both world knowledge and grammatical semantics in the comprehension performance of agrammatic aphasic patients. This extension introduces a distinction between heavy and light semantics. The TCG model of language comprehension offers a computational framework to quantitatively analyze the distributed dynamics of language processes, focusing on the interactions between grammatical, world-knowledge, and visual information. In particular, it reveals interesting implications for the understanding of the various patterns of comprehension performance of agrammatic aphasics measured using sentence-picture matching tasks. This new step in the life cycle of the model serves as a basis for exploring the specific challenges that neurolinguistic computational modeling poses to the neuroinformatics community.

  6. An efficient framework for modeling clouds from Landsat8 images

    NASA Astrophysics Data System (ADS)

    Yuan, Chunqiang; Guo, Jing

    2015-03-01

    Clouds play an important role in creating realistic outdoor scenes for video game and flight simulation applications. Classic methods have been proposed for cumulus cloud modeling. However, these methods are not flexible for modeling large cloud scenes with hundreds of clouds, because the user must repeatedly model each cloud and adjust its various properties. This paper presents a meteorologically based method to reconstruct cumulus clouds from high-resolution Landsat8 satellite images. From these input satellite images, the clouds are first segmented from the background. Then, the cloud top surface is estimated from the temperature of the infrared image. After that, under a mild assumption of a flat base for cumulus clouds, the base height of each cloud is computed by averaging the top height for pixels on the cloud edge. Then, the extinction is generated from the visible image. Finally, we enrich the initial shapes of the clouds using a fractal method and represent the recovered clouds as a particle system. The experimental results demonstrate that our method can yield realistic cloud scenes resembling those in the satellite images.
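
    The flat-base step lends itself to a short sketch: under that assumption, a cloud's base height is simply the mean top height along its edge pixels. The array names and the 4-neighbor edge test below are illustrative, not the paper's exact formulation.

    ```python
    import numpy as np

    def cloud_base_height(top_height: np.ndarray, mask: np.ndarray) -> float:
        """top_height: per-pixel cloud-top height; mask: boolean cloud segment."""
        padded = np.pad(mask, 1, constant_values=False)
        # A pixel is interior if all four 4-neighbors are also cloud.
        interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                    padded[1:-1, :-2] & padded[1:-1, 2:])
        edge = mask & ~interior            # cloud pixels touching background
        return float(top_height[edge].mean())
    ```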

  7. Signature simulation of mixed materials

    NASA Astrophysics Data System (ADS)

    Carson, Tyler D.; Salvaggio, Carl

    2015-05-01

    Soil target signatures vary due to geometry, chemical composition, and scene radiometry. Although radiative transfer models and function-fit physical models may describe certain targets in limited depth, the ability to incorporate all three signature variables is difficult. This work describes a method to simulate the transient signatures of soil by first considering scene geometry synthetically created using 3D physics engines. Through the assignment of spectral data from the Nonconventional Exploitation Factors Data System (NEFDS), the synthetic scene is represented as a physical mixture of particles. Finally, first principles radiometry is modeled using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. With DIRSIG, radiometric and sensing conditions were systematically manipulated to produce and record goniometric signatures. The implementation of this virtual goniometer allows users to examine how a target bidirectional reflectance distribution function (BRDF) will change with geometry, composition, and illumination direction. By using 3D computer graphics models, this process does not require geometric assumptions that are native to many radiative transfer models. It delivers a discrete method to circumnavigate the significant cost of time and treasure associated with hardware-based goniometric data collections.

  8. Robotics On-Board Trainer (ROBoT)

    NASA Technical Reports Server (NTRS)

    Johnson, Genevieve; Alexander, Greg

    2013-01-01

    ROBoT is an on-orbit version of the ground-based Dynamics Skills Trainer (DST) that astronauts use for training on a frequent basis. This software consists of two primary software groups. The first series of components is responsible for displaying the graphical scenes. The remaining components are responsible for simulating the Mobile Servicing System (MSS), the Japanese Experiment Module Remote Manipulator System (JEMRMS), and the H-II Transfer Vehicle (HTV) Free Flyer Robotics Operations. The MSS simulation software includes: Robotic Workstation (RWS) simulation, a simulation of the Space Station Remote Manipulator System (SSRMS), a simulation of the ISS Command and Control System (CCS), and a portion of the Portable Computer System (PCS) software necessary for MSS operations. These components all run under the CentOS4.5 Linux operating system. The JEMRMS simulation software includes real-time HIL dynamics, manipulator multi-body dynamics, and a moving-object contact model with Trick's discrete time scheduling. The JEMRMS DST will be used as a functional proficiency and skills trainer for flight crews. The HTV Free Flyer Robotics Operations simulation software adds a functional simulation of HTV vehicle controllers, sensors, and data to the MSS simulation software. These components are intended to support HTV ISS visiting vehicle analysis and training. The scene generation software will use DOUG (Dynamic On-orbit Ubiquitous Graphics) to render the graphical scenes. DOUG runs on a laptop under the CentOS4.5 Linux operating system. DOUG is an OpenGL-based 3D computer graphics rendering package. It uses pre-built three-dimensional models of on-orbit ISS and space shuttle systems elements, and provides real-time views of various station and shuttle configurations.

  9. Using Film in Multicultural and Social Justice Faculty Development: Scenes from "Crash"

    ERIC Educational Resources Information Center

    Ross, Paula T.; Kumagai, Arno K.; Joiner, Terence A.; Lypson, Monica L.

    2011-01-01

    We designed a faculty development workshop integrating scene excerpts from the Academy Award-winning movie Crash and active learning methods to encourage faculty participation and generate participant dialogue. The aims of this workshop were to enhance awareness of issues related to teaching in a multicultural classroom; stimulate discussion on…

  10. From Seeing to Saying: Perceiving, Planning, Producing

    ERIC Educational Resources Information Center

    Kuchinsky, Stefanie Ellen

    2009-01-01

    Given the amount of visual information in a scene, how do speakers determine what to talk about first? One hypothesis is that speakers start talking about what has attentional priority, while another is that speakers first extract the scene gist, using the obtained relational information to generate a rudimentary sentence plan before retrieving…

  11. Control electronics for a multi-laser/multi-detector scanning system

    NASA Technical Reports Server (NTRS)

    Kennedy, W.

    1980-01-01

    The Mars Rover Laser Scanning system uses a precision laser pointing mechanism, a photodetector array, and the concept of triangulation to perform three dimensional scene analysis. The system is used for real time terrain sensing and vision. The Multi-Laser/Multi-Detector laser scanning system is controlled by a digital device called the ML/MD controller. A next generation laser scanning system, based on the Level 2 controller, is microprocessor based. The new controller capabilities far exceed those of the ML/MD device. The first draft circuit details and general software structure are presented.
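
    The triangulation underlying such scanners is planar geometry: a laser ray at a known angle and a detector sight line at another angle, separated by a known baseline, intersect at the illuminated point. A textbook sketch of that geometry (not the ML/MD controller's actual firmware):

    ```python
    import math

    def triangulate(baseline: float, alpha: float, beta: float):
        """Intersect z = x*tan(alpha) (laser at origin) with
        z = (baseline - x)*tan(beta) (detector at x = baseline);
        angles are measured from the baseline, in radians."""
        ta, tb = math.tan(alpha), math.tan(beta)
        x = baseline * tb / (ta + tb)
        return x, x * ta                   # (x, z) of the lit point

    print(triangulate(0.5, math.radians(60.0), math.radians(55.0)))
    ```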

  12. Color Helmet Mounted Display System with Real Time Computer Generated and Video Imagery for In-Flight Simulation

    NASA Technical Reports Server (NTRS)

    Sawyer, Kevin; Jacobsen, Robert; Aiken, Edwin W. (Technical Monitor)

    1995-01-01

    NASA Ames Research Center and the US Army are developing the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) using a Sikorsky UH-60 helicopter for the purpose of flight systems research. A primary use of the RASCAL is in-flight simulation, for which the visual scene will use computer-generated imagery and synthetic vision. This research is made possible in part by a full-color, wide field-of-view Helmet Mounted Display (HMD) system that provides high-performance color imagery suitable for daytime operations in a flight-rated package. This paper describes the design and performance characteristics of the HMD system. Emphasis is placed on the design specifications, testing, and integration into the aircraft of Kaiser Electronics' RASCAL HMD system, which was designed and built under contract for NASA. The optical performance and design of the helmet-mounted display unit are discussed, as well as the unique capabilities provided by the system's Programmable Display Generator (PDG).

  13. Ground data handling for Landsat-D. [for thematic mapper

    NASA Technical Reports Server (NTRS)

    Lynch, T. J.

    1977-01-01

    The present plans for the Landsat-D ground data handling are described in relationship to the mission objectives and the planned spacecraft system. The end-to-end data system is presented with particular emphasis on the data handling plans for the new instrument, the Thematic Mapper. This instrument generates ten times the amount of data per scene as the present Multispectral Scanner and this resulting data rate and volume are discussed as well as possible new data techniques to handle them - such as image compression.

  14. Ground data handling for LANDSAT-D

    NASA Technical Reports Server (NTRS)

    Lynch, T. J.

    1976-01-01

    The present plans for the LANDSAT D ground data handling are described in relationship to the mission objectives and the planned spacecraft system. The end to end data system is presented with particular emphasis on the data handling plans for the new instrument, the Thematic Mapper. This instrument generates ten times the amount of data per scene as the present Multispectral Scanner, and this resulting data rate and volume are discussed as well as possible new data techniques to handle them such as image compression.

  15. L5 TM radiometric recalibration procedure using the internal calibration trends from the NLAPS trending database

    USGS Publications Warehouse

    Chander, G.; Haque, Md. O.; Micijevic, E.; Barsi, J.A.

    2008-01-01

    From the Landsat program's inception in 1972 to the present, the earth science user community has benefited from a historical record of remotely sensed data. The multispectral data from the Landsat 5 (L5) Thematic Mapper (TM) sensor provide the backbone for this extensive archive. Historically, the radiometric calibration procedure for this imagery used the instrument's response to the Internal Calibrator (IC) on a scene-by-scene basis to determine the gain and offset for each detector. The IC system degraded with time causing radiometric calibration errors up to 20 percent. In May 2003 the National Landsat Archive Production System (NLAPS) was updated to use a gain model rather than the scene acquisition specific IC gains to calibrate TM data processed in the United States. Further modification of the gain model was performed in 2007. L5 TM data that were processed using IC prior to the calibration update do not benefit from the recent calibration revisions. A procedure has been developed to give users the ability to recalibrate their existing Level-1 products. The best recalibration results are obtained if the work order report that was originally included in the standard data product delivery is available. However, many users may not have the original work order report. In such cases, the IC gain look-up table that was generated using the radiometric gain trends recorded in the NLAPS database can be used for recalibration. This paper discusses the procedure to recalibrate L5 TM data when the work order report originally used in processing is not available. A companion paper discusses the generation of the NLAPS IC gain and bias look-up tables required to perform the recalibration.
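
    The core of the recalibration is linear: undo the original IC-derived calibration, then reapply the revised gain model. A minimal sketch, with an illustrative linear form and scalar values standing in for the per-band, per-detector look-up tables:

    ```python
    def recalibrate(dn_level1: float, old_gain: float, old_bias: float,
                    new_gain: float, new_bias: float) -> float:
        """Re-run the linear calibration of a Level-1 pixel value."""
        radiance = (dn_level1 - old_bias) / old_gain   # invert original calibration
        return radiance * new_gain + new_bias          # apply revised gain model

    # Toy numbers only; real values come from the work order report or
    # the NLAPS IC gain look-up table for the acquisition date.
    print(recalibrate(128.0, old_gain=1.20, old_bias=2.0,
                      new_gain=1.05, new_bias=1.5))
    ```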

  16. Land-use Scene Classification in High-Resolution Remote Sensing Images by Multiscale Deeply Described Correlatons

    NASA Astrophysics Data System (ADS)

    Qi, K.; Qingfeng, G.

    2017-12-01

    With the popular use of High-Resolution Satellite (HRS) images, more and more research effort has been devoted to land-use scene classification. However, the task is difficult with HRS images because of their complex backgrounds and multiple land-cover classes or objects. This article presents a multiscale deeply described correlaton model for land-use scene classification. Specifically, a convolutional neural network is introduced to learn and characterize local features at different scales. The learnt multiscale deep features are then used to generate visual words. The spatial arrangement of visual words is captured through the introduction of adaptive vector-quantized correlograms at different scales. Experiments on two publicly available land-use scene datasets demonstrate that the proposed model is compact yet discriminative for efficient representation of land-use scene images, and achieves classification results competitive with state-of-the-art methods.

  17. A Method of Sharing Tacit Knowledge by a Bulletin Board Link to Video Scene and an Evaluation in the Field of Nursing Skill

    NASA Astrophysics Data System (ADS)

    Shimada, Satoshi; Azuma, Shouzou; Teranaka, Sayaka; Kojima, Akira; Majima, Yukie; Maekawa, Yasuko

    We developed a system with which knowledge can be discovered and shared cooperatively within an organization, based on the SECI model of knowledge management. The system realizes three processes by the following methods. (1) A video that demonstrates a skill is segmented into a number of scenes according to its content, and tacit knowledge is shared per scene. (2) Tacit knowledge is extracted via a bulletin board linked to each scene. (3) Knowledge is acquired by repeatedly viewing the video scene together with comments that describe the technical content to be practiced. We conducted experiments in which the system was used by nurses working at general hospitals. The experimental results show that practical nursing know-how can be collected by utilizing a bulletin board linked to video scenes. The results of this study confirmed the possibility of expressing the tacit knowledge of nurses' empirical nursing skills sensitively, using video images as a cue.

  18. Scene perception in posterior cortical atrophy: categorization, description and fixation patterns.

    PubMed

    Shakespeare, Timothy J; Yong, Keir X X; Frost, Chris; Kim, Lois G; Warrington, Elizabeth K; Crutch, Sebastian J

    2013-01-01

    Partial or complete Balint's syndrome is a core feature of the clinico-radiological syndrome of posterior cortical atrophy (PCA), in which individuals experience a progressive deterioration of cortical vision. Although multi-object arrays are frequently used to detect simultanagnosia in the clinical assessment and diagnosis of PCA, to date there have been no group studies of scene perception in patients with the syndrome. The current study involved three linked experiments conducted in PCA patients and healthy controls. Experiment 1 evaluated the accuracy and latency of complex scene perception relative to individual faces and objects (color and grayscale) using a categorization paradigm. PCA patients were both less accurate (faces < scenes < objects) and slower (scenes < objects < faces) than controls on all categories, with performance strongly associated with their level of basic visual processing impairment; patients also showed a small advantage for color over grayscale stimuli. Experiment 2 involved free description of real world scenes. PCA patients generated fewer features and more misperceptions than controls, though perceptual errors were always consistent with the patient's global understanding of the scene (whether correct or not). Experiment 3 used eye tracking measures to compare patient and control eye movements over initial and subsequent fixations of scenes. Patients' fixation patterns were significantly different to those of young and age-matched controls, with comparable group differences for both initial and subsequent fixations. Overall, these findings describe the variability in everyday scene perception exhibited by individuals with PCA, and indicate the importance of exposure duration in the perception of complex scenes.

  19. Hierarchical video summarization based on context clustering

    NASA Astrophysics Data System (ADS)

    Tseng, Belle L.; Smith, John R.

    2003-11-01

    A personalized video summary is dynamically generated in our video personalization and summarization system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. Besides finding the desired contents, the objective is to present a coherent summary. There are diverse methods for creating summaries, and we focus on the challenges of generating a hierarchical video summary based on context information. In our summarization algorithm, three inputs are used to generate the hierarchical video summary output. These inputs are (1) MPEG-7 metadata descriptions of the contents in the server, (2) user preference and usage environment declarations from the user client, and (3) context information including MPEG-7 controlled term list and classification scheme. In a video sequence, descriptions and relevance scores are assigned to each shot. Based on these shot descriptions, context clustering is performed to collect consecutively similar shots to correspond to hierarchical scene representations. The context clustering is based on the available context information, and may be derived from domain knowledge or rules engines. Finally, the selection of structured video segments to generate the hierarchical summary efficiently balances between scene representation and shot selection.
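
    The context-clustering step, stripped to its essentials, merges consecutive shots whose annotations are sufficiently similar. A minimal sketch, reducing the MPEG-7 term comparison to Jaccard overlap (an assumed stand-in for the paper's scoring):

    ```python
    def jaccard(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0

    def cluster_shots(shot_terms, threshold=0.4):
        """shot_terms: one set of controlled terms per shot, in order."""
        scenes, current = [], [0]
        for i in range(1, len(shot_terms)):
            if jaccard(shot_terms[i - 1], shot_terms[i]) >= threshold:
                current.append(i)       # same context: extend the scene
            else:
                scenes.append(current)  # context break: start a new scene
                current = [i]
        scenes.append(current)
        return scenes

    print(cluster_shots([{"anchor", "desk"}, {"anchor", "desk", "map"},
                         {"field", "crowd"}]))   # -> [[0, 1], [2]]
    ```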

  20. A dual-waveband dynamic IR scene projector based on DMD

    NASA Astrophysics Data System (ADS)

    Hu, Yu; Zheng, Ya-wei; Gao, Jiao-bo; Sun, Ke-feng; Li, Jun-na; Zhang, Lei; Zhang, Fang

    2016-10-01

    An infrared scene simulation system can simulate manifold objects and backgrounds to perform dynamic tests and evaluate EO detection systems in hardware-in-the-loop testing. The basic structure of a dual-waveband dynamic IR scene projector is introduced in this paper. The system's core device is an IR Digital Micro-mirror Device (DMD), and the radiant source is a miniature high-temperature IR plane blackbody. An IR collimation optical system whose transmission range covers 3-5 μm and 8-12 μm is designed as the projection optical system. Scene simulation software was developed with Visual C++ and Vega software tools, and a software flow chart is presented. The parameters and testing results of the system are given; the system has been applied with satisfactory performance in IR imaging simulation testing.

  1. Scene text recognition in mobile applications by character descriptor and structure configuration.

    PubMed

    Yi, Chucai; Tian, Yingli

    2014-07-01

    Text characters and strings in natural scenes can provide valuable information for many applications. Extracting text directly from natural scene images or videos is a challenging task because of diverse text patterns and varied background interference. This paper proposes a method of scene text recognition from detected text regions. In text detection, our previously proposed algorithms are applied to obtain text regions from a scene image. First, we design a discriminative character descriptor by combining several state-of-the-art feature detectors and descriptors. Second, we model character structure for each character class by designing stroke configuration maps. Our algorithm design is compatible with the application of scene text extraction on smart mobile devices. An Android-based demo system was developed to show the effectiveness of our proposed method on scene text information extraction from nearby objects. The demo system also provides some insight into algorithm design and performance improvement for scene text extraction. Evaluation results on benchmark datasets demonstrate that our proposed scheme for text recognition is comparable with the best existing methods.

  2. Crime scene investigation, reporting, and reconstruction (CSIRR)

    NASA Astrophysics Data System (ADS)

    Booth, John F.; Young, Jeffrey M.; Corrigan, Paul

    1997-02-01

    Graphic Data Systems Corporation (GDS Corp.) and Intelligent Graphics Solutions, Inc. (IGS) combined talents in 1995 to design and develop a MicroGDS™ application to support field investigations of crime scenes, such as homicides, bombings, and arson. IGS and GDS Corp. prepared design documents under the guidance of federal, state, and local crime scene reconstruction experts and with information from the FBI's evidence response team field book. The application was then developed to encompass the key components of crime scene investigation: staff assigned to the incident, tasks occurring at the scene, visits to the scene location, photographs taken of the crime scene, related documents, involved persons, catalogued evidence, and two- or three-dimensional crime scene reconstruction. Crime Scene Investigation, Reporting, and Reconstruction (CSIRR©) provides investigators with a single application for both capturing all tabular data about the crime scene and quickly rendering a sketch of the scene. Tabular data is captured through intuitive database forms, while MicroGDS™ has been modified to readily allow non-CAD users to sketch the scene.

  3. We're Still Here: Community-Based Art, the Scene of Education, and the Formation of Scene

    ERIC Educational Resources Information Center

    Kim, Charles; Miyamoto, Nobuko

    2013-01-01

    In this cross-generational dialogue, authors Charles Kim and Nobuko Miyamoto engage in a creative exploration of community-based art, contemporary Asian American identity, and the possibilities of creativity within educational spaces. Using the ideas of John Dewey as a foundation, Kim and Miyamoto offer their dialogues, experiences, and analyses…

  4. Situational Awareness from a Low-Cost Camera System

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing coverage without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, because a single cable is shared for power and data, installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.

  5. A Heterogeneous Multiprocessor Graphics System Using Processor-Enhanced Memories

    DTIC Science & Technology

    1989-02-01

    … frames per second, font generation directly from conic spline descriptions, and rapid calculation of radiosity form factors. The hardware consists of … generality for rendering curved surfaces, volume data, and objects described with Constructive Solid Geometry, for rendering scenes using the radiosity method, and for computing a spherical radiosity lighting model (see Section 7.6). Custom memory chips: 208 bits × 128 pixels, renderer board.

  6. EL68D Wasteway Watershed Land-Cover Generation

    USGS Publications Warehouse

    Ruhl, Sheila; Usery, E. Lynn; Finn, Michael P.

    2007-01-01

    Classification of land cover from Landsat Enhanced Thematic Mapper Plus (ETM+) for the EL68D Wasteway Watershed in the State of Washington is documented. The procedures for classification include use of two ETM+ scenes in a simultaneous unsupervised classification process supported by extensive field data collection using Global Positioning System receivers and digital photos. The procedure resulted in a detailed classification at the individual crop species level.

  7. The Shuttle Mission Simulator computer generated imagery

    NASA Technical Reports Server (NTRS)

    Henderson, T. H.

    1984-01-01

    Equipment available in the primary training facility for the Space Transportation System (STS) flight crews includes the Fixed Base Simulator, the Motion Base Simulator, the Spacelab Simulator, and the Guidance and Navigation Simulator. The Shuttle Mission Simulator (SMS) consists of the Fixed Base Simulator and the Motion Base Simulator. The SMS utilizes four visual Computer Generated Image (CGI) systems. The Motion Base Simulator has a forward crew station with six-degrees of freedom motion simulation. Operation of the Spacelab Simulator is planned for the spring of 1983. The Guidance and Navigation Simulator went into operation in 1982. Aspects of orbital visual simulation are discussed, taking into account the earth scene, payload simulation, the generation and display of 1079 stars, the simulation of sun glare, and Reaction Control System jet firing plumes. Attention is also given to landing site visual simulation, and night launch and landing simulation.

  8. A bio-inspired method and system for visual object-based attention and segmentation

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak

    2010-04-01

    This paper describes a method and system of human-like attention and object segmentation in visual scenes that (1) attends to regions in a scene in their rank of saliency in the image, (2) extracts the boundary of an attended proto-object based on feature contours, and (3) can be biased to boost the attention paid to specific features in a scene, such as those of a desired target object in static and video imagery. The purpose of the system is to identify regions of a scene of potential importance and extract the region data for processing by an object recognition and classification algorithm. The attention process can be performed in a default, bottom-up manner or a directed, top-down manner which will assign a preference to certain features over others. One can apply this system to any static scene, whether that is a still photograph or imagery captured from video. We employ algorithms that are motivated by findings in neuroscience, psychology, and cognitive science to construct a system that is novel in its modular and stepwise approach to the problems of attention and region extraction, its application of a flooding algorithm to break apart an image into smaller proto-objects based on feature density, and its ability to join smaller regions of similar features into larger proto-objects. This approach allows many complicated operations to be carried out by the system in a very short time, approaching real-time. A researcher can use this system as a robust front-end to a larger system that includes object recognition and scene understanding modules; it is engineered to function over a broad range of situations and can be applied to any scene with minimal tuning from the user.
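
    The flooding step can be sketched as region growing from the most salient pixel, expanding while the local feature value stays close to the seed's. The single 2D feature map and fixed tolerance below are simplifications of the system's feature-density criterion:

    ```python
    from collections import deque
    import numpy as np

    def flood_proto_object(feature: np.ndarray, tol: float = 0.15) -> np.ndarray:
        """Grow a proto-object mask outward from the feature map's peak."""
        seed = np.unravel_index(np.argmax(feature), feature.shape)
        region = np.zeros(feature.shape, dtype=bool)
        region[seed] = True
        queue = deque([seed])
        while queue:
            i, j = queue.popleft()
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if (0 <= ni < feature.shape[0] and 0 <= nj < feature.shape[1]
                        and not region[ni, nj]
                        and abs(feature[ni, nj] - feature[seed]) < tol):
                    region[ni, nj] = True
                    queue.append((ni, nj))
        return region
    ```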

  9. Dsm Based Orientation of Large Stereo Satellite Image Blocks

    NASA Astrophysics Data System (ADS)

    d'Angelo, P.; Reinartz, P.

    2012-07-01

    High resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on CARTOSAT-1 imagery is presented, with emphasis on fully automated georeferencing. The proposed system processes level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The RPC are derived from orbit and attitude information and have a much lower accuracy than the ground resolution of approximately 2.5 m. In order to use the images for orthorectification or DSM generation, an affine RPC correction is required. In this paper, GCPs are automatically derived from lower-resolution reference datasets (Landsat ETM+ Geocover and the SRTM DSM). The traditional method of collecting the lateral position from a reference image and interpolating the corresponding height from the DEM ignores the higher lateral accuracy of the SRTM dataset. Our method avoids this drawback by using an RPC correction based on DSM alignment, resulting in improved geolocation of both the DSM and the ortho images. A scene-based method and a bundle block adjustment based correction are developed and evaluated for a test site covering the northern part of Italy, for which 405 CARTOSAT-1 stereo pairs are available. Both methods are tested against independent ground truth. Checks against this ground truth indicate a lateral error of 10 meters.

  10. LWIR pupil imaging and prospects for background compensation

    NASA Astrophysics Data System (ADS)

    LeVan, Paul; Sakoglu, Ünal; Stegall, Mark; Pierce, Greg

    2015-08-01

    A previous paper described LWIR Pupil Imaging with a sensitive, low-flux focal plane array, and behavior of this type of system for higher flux operations as understood at the time. We continue this investigation, and report on a more detailed characterization of the system over a broad range of pixel fluxes. This characterization is then shown to enable non-uniformity correction over the flux range, using a standard approach. Since many commercial tracking platforms include a "guider port" that accepts pulse width modulation (PWM) error signals, we have also investigated a variation on the use of this port to "dither" the tracking platform in synchronization with the continuous collection of infrared images. The resulting capability has a broad range of applications that extend from generating scene motion in the laboratory for quantifying performance of "realtime, scene-based non-uniformity correction" approaches, to effectuating subtraction of bright backgrounds by alternating viewing aspect between a point source and adjacent, source-free backgrounds.

  11. Wrap-Around Out-the-Window Sensor Fusion System

    NASA Technical Reports Server (NTRS)

    Fox, Jeffrey; Boe, Eric A.; Delgado, Francisco; Secor, James B.; Clark, Michael R.; Ehlinger, Kevin D.; Abernathy, Michael F.

    2009-01-01

    The Advanced Cockpit Evaluation System (ACES) includes communication, computing, and display subsystems, mounted in a van, that synthesize out-the-window views to approximate the views of the outside world as it would be seen from the cockpit of a crewed spacecraft, aircraft, or remote control of a ground vehicle or UAV (unmanned aerial vehicle). The system includes five flat-panel display units arranged approximately in a semicircle around an operator, like cockpit windows. The scene displayed on each panel represents the view through the corresponding cockpit window. Each display unit is driven by a personal computer equipped with a video-capture card that accepts live input from any of a variety of sensors (typically, visible and/or infrared video cameras). Software running in the computers blends the live video images with synthetic images that could be generated, for example, from heads-up-display outputs, waypoints, corridors, or from satellite photographs of the same geographic region. Data from a Global Positioning System receiver and an inertial navigation system aboard the remote vehicle are used by the ACES software to keep the synthetic and live views in registration. If the live image were to fail, the synthetic scenes could still be displayed to maintain situational awareness.

  12. Effect of Display Color on Pilot Performance and Describing Functions

    NASA Technical Reports Server (NTRS)

    Chase, Wendell D.

    1997-01-01

    A study has been conducted with the full-spectrum, calligraphic, computer-generated display system to determine the effect of the chromatic content of the visual display upon pilot performance during the landing approach maneuver. This study utilizes a new digital chromatic display system, which has previously been shown to improve the perceived fidelity of out-the-window display scenes, and presents the results of an experiment designed to determine the effects of display color content by the measurement of both vertical approach performance and pilot describing functions. This method was selected to more fully explore the effects of the visual color cues used by the pilot. Two types of landing approaches were made, dynamic and frozen range, with either a landing approach scene or a perspective array display. The landing approach scene was presented with either red runway lights and blue taxiway lights or with the colors reversed, and the perspective array with red lights, blue lights, or red and blue lights combined. The vertical performance measures obtained in this experiment indicated that the pilots performed best with the blue and red/blue displays and worst with the red displays. The describing-function system analysis showed more variation with the red displays. The crossover frequencies were lowest with the red displays and highest with the combined red/blue displays, which provided the best overall tracking performance. Describing-function performance measures, vertical performance measures, and pilot opinion support the hypothesis that specific colors in displays can influence the pilots' control characteristics during the final approach.

  13. Advanced interactive display formats for terminal area traffic control

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.

    1995-01-01

    The basic design considerations for perspective Air Traffic Control displays are described. A software framework has been developed for manual viewing parameter setting (MVPS) in preparation for continued, ongoing development of automated viewing parameter setting (AVPS) schemes. The MVPS system is based on indirect manipulation of the viewing parameters. Requests for changes in the viewing parameter setting are entered manually by the operator by moving viewing-parameter manipulation pointers on the screen. The motion of these pointers, which are an integral part of the 3-D scene, is limited to the boundaries of the screen. This arrangement was chosen in order to preserve the correspondence between the new and the old viewing parameter settings, a feature which helps prevent spatial disorientation of the operator. For all viewing operations, e.g. rotation, translation, and ranging, the actual change is executed automatically by the system through gradual transitions with an exponentially damped, sinusoidal velocity profile, referred to in this work as 'slewing' motions. The slewing functions, which eliminate discontinuities in the viewing parameter changes, are designed primarily to enhance the operator's impression that he or she is dealing with an actual physical system, rather than an abstract computer-generated scene. Current ongoing efforts deal with the development of automated viewing parameter setting schemes. These schemes employ an optimization strategy aimed at identifying the best possible vantage point from which the Air Traffic Control scene can be viewed for a given traffic situation.

  14. Research on simulation technology of full-path infrared tail flame tracking of photoelectric theodolite in complicated environment

    NASA Astrophysics Data System (ADS)

    Wu, Hai-ying; Zhang, San-xi; Liu, Biao; Yue, Peng; Weng, Ying-hui

    2018-02-01

    The photoelectric theodolite is an important means of realizing the tracking, detection, quantitative measurement, and performance evaluation of weapon systems on an ordnance test range. With increasing stability requirements for target tracking in complex environments, infrared scene simulation with a high sense of reality and complex interference has become an indispensable technical means of evaluating the tracking performance of a photoelectric theodolite. The tail flame is the most important infrared radiation source of the weapon system, and a highly realistic dynamic tail flame is a key element of photoelectric theodolite infrared scene simulation and imaging tracking tests. In this paper, an infrared simulation method for full-path tracking of the tail flame by a photoelectric theodolite is proposed, addressing the flame's faint boundary, irregular shape, and multiple modulation points. In this work, real tail-flame images are employed, and infrared texture conversion technology is used to generate DDS textures for a particle-system map. Thus, dynamic, real-time tail flame simulation results with high fidelity from the theodolite perspective can be obtained during the tracking process.

  15. Evaluation of ZY-3 for Dsm and Ortho Image Generation

    NASA Astrophysics Data System (ADS)

    d'Angelo, P.

    2013-04-01

    DSM generation using stereo satellites is an important topic for many applications. China launched the three-line ZY-3 stereo mapping satellite last year. This paper evaluates the ZY-3 performance for DSM and orthophoto generation on two scenes east of Munich. The direct georeferencing performance is tested using survey points, and the 3D RMSE is 4.5 m for the scene evaluated in this paper. After image orientation with GCPs and tie points, a DSM is generated using the Semi-Global Matching algorithm. For two 5 × 5 km² test areas, a LIDAR reference DTM was available. After masking out forest areas, the overall difference between the ZY-3 DSM and the LIDAR reference is 2.0 m (RMSE). Additionally, a qualitative comparison between ZY-3 and Cartosat-1 DSMs is performed.

  16. Scene-based nonuniformity correction with video sequences and registration.

    PubMed

    Hardie, R C; Hayat, M M; Armstrong, E; Yasuda, B

    2000-03-10

    We describe a new, to our knowledge, scene-based nonuniformity correction algorithm for array detectors. The algorithm relies on the ability to register a sequence of observed frames in the presence of the fixed-pattern noise caused by pixel-to-pixel nonuniformity. In low-to-moderate levels of nonuniformity, sufficiently accurate registration may be possible with standard scene-based registration techniques. If the registration is accurate, and motion exists between the frames, then groups of independent detectors can be identified that observe the same irradiance (or true scene value). These detector outputs are averaged to generate estimates of the true scene values. With these scene estimates, and the corresponding observed values through a given detector, a curve-fitting procedure is used to estimate the individual detector response parameters. These can then be used to correct for detector nonuniformity. The strength of the algorithm lies in its simplicity and low computational complexity. Experimental results, to illustrate the performance of the algorithm, include the use of visible-range imagery with simulated nonuniformity and infrared imagery with real nonuniformity.
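
    The per-detector fitting step reduces to linear regression once registration has paired each detector's outputs with estimates of the true scene values. A minimal sketch, assuming those paired arrays are already in hand (the registration itself is not shown):

    ```python
    import numpy as np

    def fit_detector(true_vals: np.ndarray, observed: np.ndarray):
        """Least-squares fit of observed = gain * true + offset for one detector."""
        gain, offset = np.polyfit(true_vals, observed, deg=1)
        return gain, offset

    def correct(observed: np.ndarray, gain: float, offset: float) -> np.ndarray:
        # Invert the fitted detector response to remove fixed-pattern noise.
        return (observed - offset) / gain
    ```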

  17. Optical-to-Tactile Translator

    NASA Technical Reports Server (NTRS)

    Langevin, Maurice L. (Inventor); Moynihan, Philip I. (Inventor)

    2000-01-01

    An optical-to-tactile translator provides an aid for the visually impaired by translating a near-field scene to a tactile signal corresponding to said near-field scene. An optical sensor using a plurality of active pixel sensors (APS) converts the optical image within the near-field scene to a digital signal. The digital signal is then processed by a microprocessor and a simple shape signal is generated based on the digital signal. The shape signal is then communicated to a tactile transmitter where the shape signal is converted into a tactile signal using a series of contacts. The shape signal may be an outline of the significant shapes determined in the near-field scene, or the shape signal may comprise a simple symbolic representation of common items encountered repeatedly. The user is thus made aware of the unseen near-field scene, including potential obstacles and dangers, through a series of tactile contacts. In a preferred embodiment, a range determining device such as those commonly found on auto-focusing cameras is included to limit the distance that the optical sensor interprets the near-field scene.

  18. SCEGRAM: An image database for semantic and syntactic inconsistencies in scenes.

    PubMed

    Öhlschläger, Sabine; Võ, Melissa Le-Hoa

    2017-10-01

    Our visual environment is not random, but follows compositional rules according to what objects are usually found where. Despite the growing interest in how such semantic and syntactic rules - a scene grammar - enable effective attentional guidance and object perception, no common image database containing highly controlled object-scene modifications has been publicly available. Such a database is essential in minimizing the risk that low-level features drive high-level effects of interest, which has been discussed as a possible source of controversial study results. To generate the first database of this kind - SCEGRAM - we took photographs of 62 real-world indoor scenes in six consistency conditions that contain semantic and syntactic (both mild and extreme) violations as well as their combinations. Importantly, scenes were always paired, so that an object was semantically consistent in one scene (e.g., ketchup in a kitchen) and inconsistent in the other (e.g., ketchup in a bathroom). Low-level salience did not differ between object-scene conditions and was generally moderate. Additionally, SCEGRAM contains consistency ratings for every object-scene condition, as well as object-absent scenes and object-only images. Finally, a cross-validation using eye movements replicated previous results of longer dwell times for both semantic and syntactic inconsistencies compared to consistent controls. In sum, the SCEGRAM image database is the first to contain well-controlled semantic and syntactic object-scene inconsistencies that can be used in a broad range of cognitive paradigms (e.g., verbal and pictorial priming, change detection, object identification, etc.), including paradigms addressing developmental aspects of scene grammar. SCEGRAM can be retrieved for research purposes from http://www.scenegrammarlab.com/research/scegram-database/ .

  19. Scene perception in posterior cortical atrophy: categorization, description and fixation patterns

    PubMed Central

    Shakespeare, Timothy J.; Yong, Keir X. X.; Frost, Chris; Kim, Lois G.; Warrington, Elizabeth K.; Crutch, Sebastian J.

    2013-01-01

    Partial or complete Balint's syndrome is a core feature of the clinico-radiological syndrome of posterior cortical atrophy (PCA), in which individuals experience a progressive deterioration of cortical vision. Although multi-object arrays are frequently used to detect simultanagnosia in the clinical assessment and diagnosis of PCA, to date there have been no group studies of scene perception in patients with the syndrome. The current study involved three linked experiments conducted in PCA patients and healthy controls. Experiment 1 evaluated the accuracy and latency of complex scene perception relative to individual faces and objects (color and grayscale) using a categorization paradigm. PCA patients were both less accurate (faces < scenes < objects) and slower (scenes < objects < faces) than controls on all categories, with performance strongly associated with their level of basic visual processing impairment; patients also showed a small advantage for color over grayscale stimuli. Experiment 2 involved free description of real world scenes. PCA patients generated fewer features and more misperceptions than controls, though perceptual errors were always consistent with the patient's global understanding of the scene (whether correct or not). Experiment 3 used eye tracking measures to compare patient and control eye movements over initial and subsequent fixations of scenes. Patients' fixation patterns were significantly different to those of young and age-matched controls, with comparable group differences for both initial and subsequent fixations. Overall, these findings describe the variability in everyday scene perception exhibited by individuals with PCA, and indicate the importance of exposure duration in the perception of complex scenes. PMID:24106469

  20. Development of an ultra-high temperature infrared scene projector at Santa Barbara Infrared Inc.

    NASA Astrophysics Data System (ADS)

    Franks, Greg; Laveigne, Joe; Danielson, Tom; McHugh, Steve; Lannon, John; Goodwin, Scott

    2015-05-01

    The rapid development of very-large format infrared detector arrays has challenged the IR scene projector community to develop correspondingly larger-format infrared emitter arrays to support the testing needs of systems incorporating these detectors. As with most integrated circuits, fabrication yields for the read-in integrated circuit (RIIC) that drives the emitter pixel array are expected to drop dramatically with increasing size, making monolithic RIICs larger than the current 1024x1024 format impractical and unaffordable. Additionally, many scene projector users require much higher simulated temperatures than current technology can generate to fully evaluate the performance of their systems and associated processing algorithms. Under the Ultra High Temperature (UHT) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>1024x1024) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During an earlier phase of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1000K. New emitter materials have subsequently been selected to produce pixels that achieve even higher apparent temperatures. Test results from pixels fabricated using the new material set will be presented and discussed. Also in development under the same UHT program is a 'scalable' RIIC that will be used to drive the high temperature pixels. This RIIC will utilize through-silicon vias (TSVs) and quilt packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the inherent yield limitations of very-large-scale integrated circuits. Current status of the RIIC development effort will also be presented.

  1. Burned areas for the conterminous U.S. from 1984 through 2015, an automated approach using dense time-series of Landsat data

    NASA Astrophysics Data System (ADS)

    Hawbaker, T. J.; Vanderhoof, M.; Beal, Y. J. G.; Takacs, J. D.; Schmidt, G.; Falgout, J.; Brunner, N. M.; Caldwell, M. K.; Picotte, J. J.; Howard, S. M.; Stitt, S.; Dwyer, J. L.

    2016-12-01

    Complete and accurate burned area data are needed to document patterns of fires, to quantify relationships between the patterns and drivers of fire occurrence, and to assess the impacts of fires on human and natural systems. Unfortunately, many existing fire datasets in the United States are known to be incomplete, which complicates efforts to understand burned area patterns and introduces a large amount of uncertainty into efforts to identify their driving processes and impacts. Because of this, the need to systematically collect burned area information has been recognized by the United Nations Framework Convention on Climate Change and the Intergovernmental Panel on Climate Change, which have both called for the production of essential climate variables. To help meet this need, we developed a novel algorithm that automatically identifies burned areas in temporally dense time series of Landsat image stacks to produce Landsat Burned Area Essential Climate Variable (BAECV) products. The algorithm makes use of predictors derived from individual Landsat scenes, lagged reference conditions, and change metrics between the scene and reference predictors. Outputs of the BAECV algorithm, generated for the conterminous United States for 1984 through 2015, consist of burn probabilities for each Landsat scene, in addition to annual composites including the maximum burn probability, a burn classification, and the Julian date of the first Landsat scene in which a burn was observed. The BAECV products document patterns of fire occurrence that are not well characterized by existing fire datasets in the United States. We anticipate that these data could help to better understand past patterns of fire occurrence, the drivers that created them, and the impacts fires had on natural and human systems.
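
    A change metric of the kind the algorithm pairs with each scene can be illustrated with a spectral index; the dNBR against a lagged composite below is an example only, not the BAECV's actual predictor set:

    ```python
    import numpy as np

    def nbr(nir: np.ndarray, swir2: np.ndarray) -> np.ndarray:
        """Normalized Burn Ratio from float reflectance bands."""
        return (nir - swir2) / (nir + swir2)

    def change_metric(scene_nbr: np.ndarray, reference_nbr: np.ndarray) -> np.ndarray:
        # dNBR: a drop in NBR relative to the lagged reference composite
        # (a positive difference) flags a possible burn.
        return reference_nbr - scene_nbr
    ```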

  2. Description of the dynamic infrared background/target simulator (DIBS)

    NASA Astrophysics Data System (ADS)

    Lujan, Ignacio

    1988-01-01

    The purpose of the Dynamic Infrared Background/Target Simulator (DIBS) is to project dynamic infrared scenes to a test sensor; e.g., a missile seeker that is sensitive to infrared energy. The projected scene will include target(s) and background. This system was designed to present flicker-free infrared scenes in the 8 micron to 12 micron wavelength region. The major subassemblies of the DIBS are the laser write system (LWS), vanadium dioxide modulator assembly, scene data buffer (SDB), and the optical image translator (OIT). This paper describes the overall concept and design of the infrared scene projector followed by some details of the LWS and VO2 modulator. Also presented are brief descriptions of the SDB and OIT.

  3. Forensic 3D Scene Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents in quickly documenting and accurately recording a crime scene.

  4. Multi- and hyperspectral scene modeling

    NASA Astrophysics Data System (ADS)

    Borel, Christoph C.; Tuttle, Ronald F.

    2011-06-01

    This paper shows how to use a public domain raytracer POV-Ray (Persistence Of Vision Raytracer) to render multi- and hyper-spectral scenes. The scripting environment allows automatic changing of the reflectance and transmittance parameters. The radiosity rendering mode allows accurate simulation of multiple reflections between surfaces and also allows semi-transparent surfaces such as plant leaves. We show that POV-Ray computes occlusion accurately using a test scene with two blocks under a uniform sky. A complex scene representing a plant canopy is generated using a few lines of script. With appropriate rendering settings, shadows cast by leaves are rendered in many bands. Comparing single and multiple reflection renderings, the effect of multiple reflections is clearly visible and accounts for 25% of the overall apparent canopy reflectance in the near infrared.
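
    In the spirit of the scripting workflow described above, a sketch of driving POV-Ray from Python to render one image per spectral band; the band reflectance values, file names, and the canopy_geometry.inc include are hypothetical placeholders:

      import subprocess
      from pathlib import Path

      # Per-band leaf reflectance (illustrative values, not measured data)
      BAND_REFLECTANCE = {"red_660nm": 0.05, "nir_850nm": 0.45}

      POV_TEMPLATE = """
      #declare LeafFinish = finish {{ diffuse {diffuse} ambient 0 }};
      global_settings {{ radiosity {{ }} }}  // enable multiple reflections
      #include "canopy_geometry.inc"         // hypothetical canopy geometry
      light_source {{ <0, 1000, 0> color rgb 1 }}
      camera {{ location <0, 10, -20> look_at <0, 0, 0> }}
      """

      for band, rho in BAND_REFLECTANCE.items():
          pov = Path(f"canopy_{band}.pov")
          pov.write_text(POV_TEMPLATE.format(diffuse=rho))
          # Render one grayscale radiance image per band
          subprocess.run(["povray", f"+I{pov}", f"+Ocanopy_{band}.png",
                          "+W512", "+H512"], check=True)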

  5. A Model of Manual Control with Perspective Scene Viewing

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara Townsend

    2013-01-01

    A model of manual control during perspective scene viewing is presented, which combines the Crossover Model with a simplified model of perspective-scene viewing and visual-cue selection. The model is developed for a particular example task: an idealized constant-altitude task in which the operator controls longitudinal position in the presence of both longitudinal and pitch disturbances. An experiment is performed to develop and validate the model. The model corresponds closely with the experimental measurements, and identified model parameters are highly consistent with the visual cues available in the perspective scene. The modeling results indicate that operators used one visual cue for position control, and another visual cue for velocity control (lead generation). Additionally, operators responded more quickly to rotation (pitch) than translation (longitudinal).
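
    For reference, the Crossover Model on which the paper builds holds that, near the crossover frequency, the combined operator/controlled-element open-loop response approximates an integrator with an effective time delay:

      % McRuer Crossover Model (standard form)
      \[
        Y_p(j\omega)\, Y_c(j\omega) \approx \frac{\omega_c \, e^{-j\omega\tau_e}}{j\omega}
      \]
      % Y_p: operator describing function; Y_c: controlled-element dynamics;
      % \omega_c: crossover frequency; \tau_e: effective time delay.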

  6. Scene construction in schizophrenia.

    PubMed

    Raffard, Stéphane; D'Argembeau, Arnaud; Bayard, Sophie; Boulenger, Jean-Philippe; Van der Linden, Martial

    2010-09-01

    Recent research has revealed that schizophrenia patients are impaired in remembering the past and imagining the future. In this study, we examined patients' ability to engage in scene construction (i.e., the process of mentally generating and maintaining a complex and coherent scene), which is a key part of retrieving past experiences and episodic future thinking. 24 participants with schizophrenia and 25 healthy controls were asked to imagine new fictitious experiences and described their mental representations of the scenes in as much detail as possible. Descriptions were scored according to various dimensions (e.g., sensory details, spatial reference), and participants also provided ratings of their subjective experience when imagining the scenes (e.g., their sense of presence, the perceived similarity of imagined events to past experiences). Imagined scenes contained fewer phenomenological details (d = 1.11) and were more fragmented (d = 2.81) in schizophrenia patients compared to controls. Furthermore, positive symptoms were positively correlated with the sense of presence (r = .43) and the perceived similarity of imagined events to past episodes (r = .47), whereas negative symptoms were negatively related to the overall richness of the imagined scenes (r = -.43). The results suggest that schizophrenia patients' impairments in remembering the past and imagining the future are, at least in part, due to deficits in the process of scene construction. The relationships between the characteristics of imagined scenes and positive and negative symptoms could be related to reality monitoring deficits and difficulties in strategic retrieval processes, respectively. Copyright 2010 APA, all rights reserved.

  7. a Low-Cost Panoramic Camera for the 3d Documentation of Contaminated Crime Scenes

    NASA Astrophysics Data System (ADS)

    Abate, D.; Toschi, I.; Sturdy-Colls, C.; Remondino, F.

    2017-11-01

    Crime scene documentation is a fundamental task which has to be undertaken in a fast, accurate and reliable way, highlighting evidence which can be further used for ensuring justice for victims and for guaranteeing the successful prosecution of perpetrators. The main focus of this paper is on the documentation of a typical crime scene and on the rapid recording of any possible contamination that could have influenced its original appearance. A 3D reconstruction of the environment is first generated by processing panoramas acquired with the low-cost Ricoh Theta 360 camera, and further analysed to highlight the potential and limits of this emerging consumer-grade technology. Then, a methodology is proposed for the rapid recording of changes occurring between the original and the contaminated crime scene. The approach is based on an automatic 3D feature-based data registration, followed by a cloud-to-cloud distance computation, taking as input the 3D point clouds generated before and after, for example, the misplacement of evidence. All the algorithms adopted for panorama pre-processing, photogrammetric 3D reconstruction, and 3D geometry registration and analysis are presented; all are currently available in open-source or low-cost software solutions.
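
    A minimal sketch of the cloud-to-cloud distance step, assuming two already-registered Nx3 numpy point clouds (the function names and tolerance are ours, not the paper's):

      import numpy as np
      from scipy.spatial import cKDTree

      def cloud_to_cloud_distance(reference, contaminated):
          # Nearest-neighbour distance from each contaminated-scene point
          # to the reference-scene cloud (both Nx3, already registered)
          tree = cKDTree(reference)
          dist, _ = tree.query(contaminated, k=1)
          return dist

      def flag_changes(dist, tol=0.02):
          # Points farther than tol (scene units, e.g. metres) from the
          # reference cloud suggest moved or misplaced evidence
          return dist > tol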

  8. Modern Methods for fast generation of digital holograms

    NASA Astrophysics Data System (ADS)

    Tsang, P. W. M.; Liu, J. P.; Cheung, K. W. K.; Poon, T.-C.

    2010-06-01

    With the advancement of computers, digital holography (DH) has become an area of interest that has gained much popularity. Research findings derived from this technology enable holograms representing three-dimensional (3-D) scenes to be acquired with optical means, or generated with numerical computation. In both cases, the holograms are in the form of numerical data that can be recorded, transmitted, and processed with digital techniques. On top of that, the availability of high-capacity digital storage and wide-band communication technologies has also enabled the emergence of real-time video holographic systems, enabling animated 3-D contents to be encoded as holographic data and distributed via existing media. At present, development in DH has reached a reasonable degree of maturity, but at the same time the heavy computation involved imposes difficulties in practical applications. In this paper, a summary of a number of successful accomplishments that have been made recently in overcoming this problem is presented. Subsequently, we propose an economical framework that is suitable for real-time generation and transmission of holographic video signals over existing distribution media. The proposed framework includes an aspect of extending the depth range of the object scene, which is important for the display of large-scale objects.
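
    To make the computation concrete, a toy sketch of a numerically generated point-source Fresnel hologram (paraxial approximation, on-axis plane reference wave); the wavelength, pixel pitch, and object points are illustrative:

      import numpy as np

      def fresnel_hologram(points, wavelength=633e-9, pitch=8e-6, size=512):
          # points: iterable of (x, y, z, amplitude) object points, in metres.
          # Each point contributes a Fresnel zone pattern; the sum approximates
          # the interference with an on-axis plane reference wave.
          coords = (np.arange(size) - size / 2) * pitch
          X, Y = np.meshgrid(coords, coords)
          H = np.zeros((size, size))
          for x0, y0, z0, a in points:
              r2 = (X - x0) ** 2 + (Y - y0) ** 2
              H += a * np.cos(np.pi * r2 / (wavelength * z0))
          return H

      hologram = fresnel_hologram([(0.0, 0.0, 0.30, 1.0),
                                   (5e-4, -2e-4, 0.35, 0.8)])

    Even this toy version costs one full-frame evaluation per object point, which hints at why scenes with millions of points motivate the fast-generation methods surveyed in the paper.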

  9. Irma 5.1 multisensor signature prediction model

    NASA Astrophysics Data System (ADS)

    Savage, James; Coker, Charles; Thai, Bea; Aboutalib, Omar; Yamaoka, Neil; Kim, Charles

    2005-05-01

    The Irma synthetic signature prediction code is being developed to facilitate the research and development of multisensor systems. Irma was one of the first high-resolution infrared (IR) target and background signature models to be developed for tactical weapon applications. Originally developed in 1980 by the Munitions Directorate of the Air Force Research Laboratory (AFRL/MN), the Irma model was used exclusively to generate IR scenes. In 1988, a number of significant upgrades to Irma were initiated, including the addition of a laser (or active) channel. This two-channel version was released to the user community in 1990. In 1992, an improved scene generator was incorporated into the Irma model, which supported correlated frame-to-frame imagery. A passive IR/millimeter wave (MMW) code was completed in 1994. This served as the cornerstone for the development of the co-registered active/passive IR/MMW model, Irma 4.0. In 2000, Irma version 5.0 was released, which encompassed several upgrades to both the physical models and software. Circular polarization was added to the passive channel and a Doppler capability was added to the active MMW channel. In 2002, the multibounce technique was added to the Irma passive channel. In the ladar channel, a user-friendly Ladar Sensor Assistant (LSA) was incorporated, which provides capability and flexibility for sensor modeling. Irma 5.0 runs on several platforms including Windows, Linux, Solaris, and SGI Irix. Since 2000, additional capabilities and enhancements have been added to the ladar channel, including polarization and speckle effects. Work is still ongoing to add a time-jittering model to the ladar channel. A new user interface has been introduced to aid users in the mechanics of scene generation and running the Irma code. The user interface provides a canvas where a user can add and remove objects using mouse clicks to construct a scene. The scene can then be visualized to find the desired sensor position. The synthetic ladar signatures have been validated twice and underwent a third validation test near the end of 04. These capabilities will be integrated into the next release, Irma 5.1, scheduled for completion in the summer of FY05. Irma is currently being used to support a number of civilian and military applications. The Irma user base includes over 130 agencies within the Air Force, Army, Navy, DARPA, NASA, Department of Transportation, academia, and industry. The purpose of this paper is to report the progress of the Irma 5.1 development effort.

  10. Achieving ultra-high temperatures with a resistive emitter array

    NASA Astrophysics Data System (ADS)

    Danielson, Tom; Franks, Greg; Holmes, Nicholas; LaVeigne, Joe; Matis, Greg; McHugh, Steve; Norton, Dennis; Vengel, Tony; Lannon, John; Goodwin, Scott

    2016-05-01

    The rapid development of very-large format infrared detector arrays has challenged the IR scene projector community to also develop larger-format infrared emitter arrays to support the testing of systems incorporating these detectors. In addition to larger formats, many scene projector users require much higher simulated temperatures than can be generated with current technology in order to fully evaluate the performance of their systems and associated processing algorithms. Under the Ultra High Temperature (UHT) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>1024 x 1024) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During earlier phases of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1400 K. New emitter materials have subsequently been selected to produce pixels that achieve even higher apparent temperatures. Test results from pixels fabricated using the new material set will be presented and discussed. A 'scalable' Read In Integrated Circuit (RIIC) is also being developed under the same UHT program to drive the high temperature pixels. This RIIC will utilize through-silicon via (TSV) and Quilt Packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the yield limitations inherent in large-scale integrated circuits. Results of design verification testing of the completed RIIC will be presented and discussed.

  11. The use of morphological characteristics and texture analysis in the identification of tissue composition in prostatic neoplasia.

    PubMed

    Diamond, James; Anderson, Neil H; Bartels, Peter H; Montironi, Rodolfo; Hamilton, Peter W

    2004-09-01

    Quantitative examination of prostate histology offers clues in the diagnostic classification of lesions and in the prediction of response to treatment and prognosis. To facilitate the collection of quantitative data, the development of machine vision systems is necessary. This study explored the use of imaging for identifying tissue abnormalities in prostate histology. Medium-power histological scenes were recorded from whole-mount radical prostatectomy sections at x40 objective magnification and assessed by a pathologist as exhibiting stroma, normal tissue (nonneoplastic epithelial component), or prostatic carcinoma (PCa). A machine vision system was developed that divided the scenes into subregions of 100 x 100 pixels and subjected each to image-processing techniques. Analysis of morphological characteristics allowed the identification of normal tissue. Analysis of image texture demonstrated that Haralick feature 4 was the most suitable for discriminating stroma from PCa. Using these morphological and texture measurements, it was possible to define a classification scheme for each subregion. The machine vision system is designed to integrate these classification rules and generate digital maps of tissue composition from the classification of subregions; 79.3% of subregions were correctly classified. Established classification rates have demonstrated the validity of the methodology on small scenes; a logical extension was to apply the methodology to whole slide images via scanning technology. The machine vision system is capable of classifying these images. The machine vision system developed in this project facilitates the exploration of morphological and texture characteristics in quantifying tissue composition. It also illustrates the potential of quantitative methods to provide highly discriminatory information in the automated identification of prostatic lesions using computer vision.
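
    A sketch of the texture step under simplifications of our own: each 100 x 100 tile is scored with Haralick feature 4 (sum of squares: variance) derived from a grey-level co-occurrence matrix; note that scikit-image spells the helper graycomatrix in recent releases (greycomatrix in older ones):

      import numpy as np
      from skimage.feature import graycomatrix  # greycomatrix in older scikit-image

      def haralick_variance(tile, levels=64):
          # Quantize the tile, build a normalized GLCM, and compute
          # Haralick feature 4: sum of (i - mean)^2 * P(i, j)
          q = np.floor(tile / (tile.max() + 1e-9) * (levels - 1)).astype(np.uint8)
          glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                              symmetric=True, normed=True)[:, :, 0, 0]
          i = np.arange(levels)
          mu = np.sum(i[:, None] * glcm)
          return np.sum(((i[:, None] - mu) ** 2) * glcm)

      def tile_features(scene, tile=100):
          # Divide a grayscale scene into tile x tile subregions, score each
          h, w = scene.shape
          return np.array([[haralick_variance(scene[r:r + tile, c:c + tile])
                            for c in range(0, w - tile + 1, tile)]
                           for r in range(0, h - tile + 1, tile)])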

  12. Automatic acquisition of motion trajectories: tracking hockey players

    NASA Astrophysics Data System (ADS)

    Okuma, Kenji; Little, James J.; Lowe, David

    2003-12-01

    Computer systems that have the capability of analyzing complex and dynamic scenes play an essential role in video annotation. Scenes can be complex in such a way that there are many cluttered objects with different colors, shapes and sizes, and can be dynamic with multiple interacting moving objects and a constantly changing background. In reality, there are many scenes that are complex, dynamic, and challenging enough for computers to describe. These scenes include games of sports, air traffic, car traffic, street intersections, and cloud transformations. Our research is about the challenge of inventing a descriptive computer system that analyzes scenes of hockey games where multiple moving players interact with each other on a constantly moving background due to camera motions. Ultimately, such a computer system should be able to acquire reliable data by extracting the players' motion as their trajectories, query the data by analyzing its descriptive information, and predict the motions of some hockey players based on the result of the query. Among these three major aspects of the system, we primarily focus on visual information of the scenes, that is, how to automatically acquire motion trajectories of hockey players from video. More precisely, we automatically analyze the hockey scenes by estimating parameters (i.e., pan, tilt, and zoom) of the broadcast cameras, tracking hockey players in those scenes, and constructing a visual description of the data by displaying trajectories of those players. Many technical problems in vision, such as fast and unpredictable player motions and rapid camera motions, make our challenge worth tackling. To the best of our knowledge, no automatic video annotation systems for hockey have been developed in the past. Although there are many obstacles to overcome, our efforts and accomplishments will hopefully establish the infrastructure of an automatic hockey annotation system and become a milestone for research in automatic video annotation in this domain.
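
    Not the authors' exact pan/tilt/zoom estimator, but a common simplification of the camera-motion step: track rink features with KLT and chain frame-to-frame homographies to map player image positions into a common frame (OpenCV assumed; frames are grayscale uint8 images):

      import cv2
      import numpy as np

      def frame_homography(prev_gray, next_gray):
          # Global camera motion between consecutive frames, estimated as a
          # homography from sparse KLT feature tracks on the (mostly rigid) rink
          pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                        qualityLevel=0.01, minDistance=8)
          nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
          good = status.ravel() == 1
          H, _ = cv2.findHomography(pts[good], nxt[good], cv2.RANSAC, 3.0)
          return H

      def rink_trajectory(player_positions, homographies):
          # Map per-frame player image positions back into the first frame's
          # coordinates by chaining inverse frame-to-frame homographies
          H_acc, out = np.eye(3), []
          for p, H in zip(player_positions, homographies):
              H_acc = H_acc @ np.linalg.inv(H)
              q = H_acc @ np.array([p[0], p[1], 1.0])
              out.append(q[:2] / q[2])
          return np.array(out)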

  13. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling

    PubMed Central

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-01-01

    RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and depth measurement errors that increase with distance from the sensor. In this paper, we present a novel approach to geometrically integrate the depth scene and RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from depth images. First, precise calibration for RGB-D sensors is introduced. In addition to the calibration of internal and external parameters for both the IR camera and the RGB camera, the relative pose between the RGB camera and the IR camera is also calibrated. Second, to ensure the pose accuracy of the RGB images, a refined method for rejecting false feature matches is introduced by combining the depth information and initial camera poses between frames of the RGB-D sensor. Then, a global optimization model is used to improve the accuracy of the camera poses, decreasing the inconsistencies between the depth frames in advance. To eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity problem encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with the publicly available benchmark datasets collected with Kinect. Then, the proposed method is examined by tests with two sets of datasets collected in both outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method. PMID:27690028

  14. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling.

    PubMed

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-09-27

    RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and depth measurement errors that increase with distance from the sensor. In this paper, we present a novel approach to geometrically integrate the depth scene and RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from depth images. First, precise calibration for RGB-D sensors is introduced. In addition to the calibration of internal and external parameters for both the IR camera and the RGB camera, the relative pose between the RGB camera and the IR camera is also calibrated. Second, to ensure the pose accuracy of the RGB images, a refined method for rejecting false feature matches is introduced by combining the depth information and initial camera poses between frames of the RGB-D sensor. Then, a global optimization model is used to improve the accuracy of the camera poses, decreasing the inconsistencies between the depth frames in advance. To eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity problem encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with the publicly available benchmark datasets collected with Kinect. Then, the proposed method is examined by tests with two sets of datasets collected in both outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method.
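
    A compact sketch of the rigid-transformation recovery step in the Kabsch/Umeyama style (scale omitted; the paper additionally resolves the scale ambiguity), for corresponding Nx3 point sets:

      import numpy as np

      def rigid_transform(src, dst):
          # Least-squares rotation R and translation t with dst ~ R @ src + t
          cs, cd = src.mean(axis=0), dst.mean(axis=0)
          H = (src - cs).T @ (dst - cd)          # cross-covariance
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          t = cd - R @ cs
          return R, t

    With R and t in hand, the RGB-derived point cloud can be mapped into the depth scene's coordinate frame before the joint optimization.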

  15. Enhanced Video-Oculography System

    NASA Technical Reports Server (NTRS)

    Moore, Steven T.; MacDougall, Hamish G.

    2009-01-01

    A previously developed video-oculography system has been enhanced for use in measuring vestibulo-ocular reflexes of a human subject in a centrifuge, motor vehicle, or other setting. The system as previously developed included a lightweight digital video camera mounted on goggles. The left eye was illuminated by an infrared light-emitting diode via a dichroic mirror, and the camera captured images of the left eye in infrared light. To extract eye-movement data, the digitized video images were processed by software running in a laptop computer. Eye movements were calibrated by having the subject view a target pattern, fixed with respect to the subject's head, generated by a goggle-mounted laser with a diffraction grating. The system as enhanced includes a second camera for imaging the scene from the subject's perspective, and two inertial measurement units (IMUs) for measuring linear accelerations and rates of rotation for computing head movements. One IMU is mounted on the goggles, the other on the centrifuge or vehicle frame. All eye-movement and head-motion data are time-stamped. In addition, the subject's point of regard is superimposed on each scene image to enable analysis of patterns of gaze in real time.

  16. Background characterization techniques for target detection using scene metrics and pattern recognition

    NASA Astrophysics Data System (ADS)

    Noah, Paul V.; Noah, Meg A.; Schroeder, John W.; Chernick, Julian A.

    1990-09-01

    The U.S. Army has a requirement to develop systems for the detection and identification of ground targets in a clutter environment. Autonomous Homing Munitions (AHM) using infrared, visible, millimeter wave and other sensors are being investigated for this application. Advanced signal processing and computational approaches using pattern recognition and artificial intelligence techniques combined with multisensor data fusion have the potential to meet the Army's requirements for next-generation AHM.

  17. You shall know an object by the company it keeps: An investigation of semantic representations derived from object co-occurrence in visual scenes.

    PubMed

    Sadeghi, Zahra; McClelland, James L; Hoffman, Paul

    2015-09-01

    An influential position in lexical semantics holds that semantic representations for words can be derived through analysis of patterns of lexical co-occurrence in large language corpora. Firth (1957) famously summarised this principle as "you shall know a word by the company it keeps". We explored whether the same principle could be applied to non-verbal patterns of object co-occurrence in natural scenes. We performed latent semantic analysis (LSA) on a set of photographed scenes in which all of the objects present had been manually labelled. This resulted in a representation of objects in a high-dimensional space in which similarity between two objects indicated the degree to which they appeared in similar scenes. These representations revealed similarities among objects belonging to the same taxonomic category (e.g., items of clothing) as well as cross-category associations (e.g., between fruits and kitchen utensils). We also compared representations generated from this scene dataset with two established methods for elucidating semantic representations: (a) a published database of semantic features generated verbally by participants and (b) LSA applied to a linguistic corpus in the usual fashion. Statistical comparisons of the three methods indicated significant association between the structures revealed by each method, with the scene dataset displaying greater convergence with feature-based representations than did LSA applied to linguistic data. The results indicate that information about the conceptual significance of objects can be extracted from their patterns of co-occurrence in natural environments, opening the possibility for such data to be incorporated into existing models of conceptual representation. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
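
    A toy sketch of LSA applied to an object-by-scene co-occurrence matrix (the objects, counts, and dimensionality below are invented for illustration; the study used a far larger labelled-scene corpus):

      import numpy as np
      from sklearn.decomposition import TruncatedSVD
      from sklearn.metrics.pairwise import cosine_similarity

      # Rows = labelled objects, columns = photographed scenes; X[i, j] counts
      # how often object i appears in scene j (toy data)
      objects = ["apple", "knife", "shirt", "trousers"]
      X = np.array([[3, 0, 2, 0],
                    [2, 0, 3, 0],
                    [0, 4, 0, 3],
                    [0, 3, 0, 4]], dtype=float)

      # LSA: project the object-by-scene matrix onto its leading singular vectors
      lsa = TruncatedSVD(n_components=2, random_state=0)
      vecs = lsa.fit_transform(X)

      # Objects that occur in similar scenes end up close in the reduced space
      sim = cosine_similarity(vecs)
      print(dict(zip(objects, np.round(sim[0], 2))))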

  18. Simulating optoelectronic systems for remote sensing with SENSOR

    NASA Astrophysics Data System (ADS)

    Boerner, Anko

    2003-04-01

    The consistent end-to-end simulation of airborne and spaceborne remote sensing systems is an important task and sometimes the only way for the adaptation and optimization of a sensor and its observation conditions, the choice and test of algorithms for data processing, error estimation and the evaluation of the capabilities of the whole sensor system. The presented software simulator SENSOR (Software ENvironment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. It allows the simulation of a wide range of optoelectronic systems for remote sensing. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray tracing algorithm. The second part of the simulation environment considers the radiometry. It calculates the at-sensor radiance using a pre-calculated multidimensional lookup-table taking the atmospheric influence on the radiation into account. Part three consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimization requires the additional application of task-specific data processing algorithms. The principle of the end-to-end-simulation approach is explained, all relevant concepts of SENSOR are discussed, and examples of its use are given. The verification of SENSOR is demonstrated.
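
    The pre-calculated lookup-table step in part two might look like the following sketch; the grid axes, shapes, and values are invented placeholders for actual radiative-transfer output:

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      # Hypothetical pre-computed lookup-table axes (coarse for illustration)
      vza = np.linspace(0, 60, 7)        # viewing zenith angle, degrees
      aot = np.linspace(0.05, 0.8, 6)    # aerosol optical thickness
      rho = np.linspace(0.0, 0.6, 13)    # surface reflectance
      lut = np.random.rand(7, 6, 13)     # stand-in for radiative-transfer output

      at_sensor = RegularGridInterpolator((vza, aot, rho), lut)

      # At-sensor radiance for one pixel's geometry/atmosphere/reflectance state
      L = at_sensor([[23.0, 0.2, 0.31]])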

  19. Setting the Scene--Introduction to Quality in Peer Production of eLearning

    ERIC Educational Resources Information Center

    Auvinen, Ari-Matti

    2008-01-01

    The "Setting the scene" deliverable of the QMPP project (www.qmpp.net) has been authored by Mr. Ari-Matti Auvinen (HCI Productions Oy) and many QMPP project partners. In addition to the definition of the scope of the project, it includes also a good list of references to the literature of user-generated content as well as good web links…

  20. Cross-sensor comparisons between Landsat 5 TM and IRS-P6 AWiFS and disturbance detection using integrated Landsat and AWiFS time-series images

    USGS Publications Warehouse

    Chen, Xuexia; Vogelmann, James E.; Chander, Gyanesh; Ji, Lei; Tolk, Brian; Huang, Chengquan; Rollins, Matthew

    2013-01-01

    Routine acquisition of Landsat 5 Thematic Mapper (TM) data was discontinued recently, and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) has an ongoing problem with the scan line corrector (SLC), which creates spatial gaps in the images it acquires. Since temporal and spatial discontinuities of Landsat data are now imminent, it is important to investigate other potential satellite data that can be used in place of Landsat data. We thus cross-compared two near-simultaneous images obtained from Landsat 5 TM and the Indian Remote Sensing (IRS)-P6 Advanced Wide Field Sensor (AWiFS), both captured on 29 May 2007 over Los Angeles, CA. TM and AWiFS reflectances were compared for the green, red, near-infrared (NIR), and shortwave infrared (SWIR) bands, as well as the normalized difference vegetation index (NDVI), based on manually selected polygons in homogeneous areas. All R2 values of the linear regressions were higher than 0.99. The temporally invariant cluster (TIC) method was used to calculate the NDVI correlation between the TM and AWiFS images. The NDVI regression line derived from selected polygons passed through several invariant cluster centres of the TIC density maps, demonstrating that both the scene-dependent polygon regression method and the TIC method can generate accurate radiometric normalization. A scene-independent normalization method was also used to normalize the AWiFS data. Image agreement assessment demonstrated that the scene-dependent normalization using homogeneous polygons provided slightly higher accuracy than the scene-independent method. Finally, the non-normalized and relatively normalized ‘Landsat-like’ AWiFS 2007 images were integrated into 1984 to 2010 Landsat time-series stacks (LTSS) for disturbance detection using the Vegetation Change Tracker (VCT) model. Both scene-dependent and scene-independent normalized AWiFS data sets generated disturbance maps similar to those generated using the LTSS data set, with kappa coefficients higher than 0.97. These results indicate that AWiFS can be used instead of Landsat data to detect multitemporal disturbance in the event of Landsat data discontinuity.
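
    A sketch of the scene-dependent normalization under assumptions of our own (paired 1-D arrays of reflectance samples drawn over homogeneous polygons; variable names are ours):

      import numpy as np

      def scene_dependent_gain_offset(awifs, tm):
          # Least-squares gain/offset mapping AWiFS band reflectance to
          # TM reflectance, fitted from paired polygon samples
          gain, offset = np.polyfit(awifs, tm, deg=1)
          return gain, offset

      def ndvi(nir, red):
          return (nir - red) / (nir + red + 1e-9)

      # Applying the coefficients band by band yields 'Landsat-like' AWiFS
      # reflectance that can be inserted into a Landsat time-series stack:
      #   tm_like = gain * awifs_scene + offset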

  1. A spectral-structural bag-of-features scene classifier for very high spatial resolution remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhao, Bei; Zhong, Yanfei; Zhang, Liangpei

    2016-06-01

    Land-use classification of very high spatial resolution remote sensing (VHSR) imagery is one of the most challenging tasks in the field of remote sensing image processing. However, land-use classification is difficult to address with land-cover classification techniques, due to the complexity of land-use scenes. Scene classification is considered a promising way to address the land-use classification issue. The commonly used scene classification methods for VHSR imagery are all derived from the computer vision community, where they mainly deal with terrestrial image recognition. Differing from terrestrial images, VHSR images are taken by looking down with airborne and spaceborne sensors, which leads to distinct light conditions and spatial configurations of land cover in VHSR imagery. Considering these distinct characteristics, two questions should be answered: (1) Which type or combination of information is suitable for VHSR imagery scene classification? (2) Which scene classification algorithm is best for VHSR imagery? In this paper, an efficient spectral-structural bag-of-features scene classifier (SSBFC) is proposed to combine the spectral and structural information of VHSR imagery. SSBFC utilizes the first- and second-order statistics (the mean and standard deviation values, MeanStd) as the statistical spectral descriptor for the spectral information of the VHSR imagery, and uses dense scale-invariant feature transform (SIFT) as the structural feature descriptor. From the experimental results, the spectral information works better than the structural information, while the combination of the spectral and structural information is better than either single type of information. Taking the characteristics of the spatial configuration into consideration, SSBFC uses the whole image scene as the scope of the pooling operator, instead of the scope generated by a spatial pyramid (SP) commonly used in terrestrial image classification. The experimental results show that the whole image as the scope of the pooling operator performs better than the scope generated by SP. In addition, SSBFC codes and pools the spectral and structural features separately to avoid mutual interference between the spectral and structural features. The coding vectors of spectral and structural features are then concatenated into a final coding vector. Finally, SSBFC classifies the final coding vector by support vector machine (SVM) with a histogram intersection kernel (HIK). Compared with the latest scene classification methods, the experimental results with three VHSR datasets demonstrate that the proposed SSBFC performs better than the other classification methods for VHSR image scenes.
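
    The final classification step can be sketched as follows, assuming the coding vectors are already computed (the toy data and dimensionality are ours; scikit-learn's SVC accepts a callable kernel):

      import numpy as np
      from sklearn.svm import SVC

      def histogram_intersection(A, B):
          # HIK between rows of A (n x d) and rows of B (m x d)
          return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

      # X holds final coding vectors (concatenated spectral MeanStd codes and
      # dense-SIFT codes, pooled over the whole scene); y holds scene labels
      X = np.random.rand(40, 128)          # toy non-negative coding vectors
      y = np.repeat([0, 1], 20)

      clf = SVC(kernel=histogram_intersection)
      clf.fit(X, y)
      pred = clf.predict(np.random.rand(5, 128))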

  2. Infectious Disease Information Collection System at the Scene of Disaster Relief Based on a Personal Digital Assistant.

    PubMed

    Li, Ya-Pin; Gao, Hong-Wei; Fan, Hao-Jun; Wei, Wei; Xu, Bo; Dong, Wen-Long; Li, Qing-Feng; Song, Wen-Jing; Hou, Shi-Ke

    2017-12-01

    The objective of this study was to build a database for collecting infectious disease information at the scene of a disaster, comprising 128 epidemiological questionnaires and 47 types of options, enabling rapid acquisition of infectious disease information and rapid questionnaire customization at the scene of disaster relief through a personal digital assistant (PDA). SQL Server 2005 (Microsoft Corp, Redmond, WA) was used to create the option database for the infectious disease investigation, to develop a client application for the PDA, and to deploy the application on the server side. The users accessed the server for data collection and questionnaire customization with the PDA. A database with a comprehensive set of options was created, and an application system was developed for the Android operating system (Google Inc, Mountain View, CA). On this basis, an infectious disease information collection system was built for use at the scene of disaster relief. This system integrated computer technology and mobile communication technology to achieve infectious disease information collection and rapid questionnaire customization at the scene of disaster relief. (Disaster Med Public Health Preparedness. 2017;11:668-673).

  3. Transient cardio-respiratory responses to visually induced tilt illusions

    NASA Technical Reports Server (NTRS)

    Wood, S. J.; Ramsdell, C. D.; Mullen, T. J.; Oman, C. M.; Harm, D. L.; Paloski, W. H.

    2000-01-01

    Although the orthostatic cardio-respiratory response is primarily mediated by the baroreflex, studies have shown that vestibular cues also contribute in both humans and animals. We have demonstrated a visually mediated response to illusory tilt in some human subjects. Blood pressure, heart and respiration rate, and lung volume were monitored in 16 supine human subjects during two types of visual stimulation, and compared with responses to real passive whole body tilt from supine to head 80 degrees upright. Visual tilt stimuli consisted of either a static scene from an overhead mirror or constant velocity scene motion along different body axes generated by an ultra-wide dome projection system. Visual vertical cues were initially aligned with the longitudinal body axis. Subjective tilt and self-motion were reported verbally. Although significant changes in cardio-respiratory parameters to illusory tilts could not be demonstrated for the entire group, several subjects showed significant transient decreases in mean blood pressure resembling their initial response to passive head-up tilt. Changes in pulse pressure and a slight elevation in heart rate were noted. These transient responses are consistent with the hypothesis that visual-vestibular input contributes to the initial cardiovascular adjustment to a change in posture in humans. On average the static scene elicited perceived tilt without rotation. Dome scene pitch and yaw elicited perceived tilt and rotation, and dome roll motion elicited perceived rotation without tilt. A significant correlation between the magnitude of physiological and subjective reports could not be demonstrated.

  4. An algebraic algorithm for nonuniformity correction in focal-plane arrays.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Hardie, Russell C

    2002-09-01

    A scene-based algorithm is developed to compensate for bias nonuniformity in focal-plane arrays. Nonuniformity can be extremely problematic, especially for mid- to far-infrared imaging systems. The technique is based on the use of estimates of interframe subpixel shifts in an image sequence, in conjunction with a linear-interpolation model for the motion, to extract information on the bias nonuniformity algebraically. The performance of the proposed algorithm is analyzed using real infrared imagery and simulated data. One advantage of this technique is its simplicity; it requires relatively few frames to generate an effective correction matrix, thereby permitting the execution of frequent on-the-fly nonuniformity correction as drift occurs. Additionally, the performance is shown to exhibit considerable robustness with respect to lack of the common types of temporal and spatial irradiance diversity that are typically required by statistical scene-based nonuniformity correction techniques.
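
    A toy 1-D illustration of the algebraic idea, under our simplification of an exactly one-pixel shift (the paper handles subpixel shifts via the linear-interpolation motion model): differences between shifted frames isolate bias differences, which cumulative summation recovers up to a constant:

      import numpy as np

      rng = np.random.default_rng(0)
      N = 256
      scene = np.convolve(rng.normal(size=N + 1), np.ones(8) / 8, mode="same")
      bias = rng.normal(scale=0.5, size=N)     # fixed-pattern bias per pixel

      # Two frames of the same scene, shifted by exactly one pixel
      y1 = scene[1:] + bias                    # frame 1 observes I[n]
      y2 = scene[:-1] + bias                   # frame 2 observes I[n-1]

      # y2[n] - y1[n-1] = b[n] - b[n-1]; cumulative sum recovers the bias
      db = y2[1:] - y1[:-1]
      b_hat = np.concatenate([[0.0], np.cumsum(db)])   # up to the unknown b[0]

      # Agreement up to a global constant (residual is numerically ~0)
      err = (b_hat - bias) - (b_hat - bias).mean()
      print(np.max(np.abs(err)))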

  5. Reconstruction and simplification of urban scene models based on oblique images

    NASA Astrophysics Data System (ADS)

    Liu, J.; Guo, B.

    2014-08-01

    We describe multi-view stereo reconstruction and simplification algorithms for urban scene models based on oblique images. The complexity, diversity, and density of urban scenes increase the difficulty of building city models from oblique images; however, urban scenes also contain many flat surfaces. One of our key contributions is a dense matching algorithm based on self-adaptive patches, designed for urban scenes. The basic idea of match propagation based on self-adaptive patches is to build patches centred on seed points that are already matched. The extent and shape of the patches adapt to the objects of the urban scene automatically: where the surface is flat, the patch extent becomes larger; where the surface is rough, the patch extent becomes smaller. The other contribution is that the mesh generated by graph cuts is a 2-manifold surface satisfying the half-edge data structure, achieved by clustering and re-marking tetrahedra in the s-t graph. The purpose of obtaining a 2-manifold surface is to simplify the mesh with an edge-collapse algorithm, which preserves and accentuates the features of buildings.

  6. Local statistics of retinal optic flow for self-motion through natural sceneries.

    PubMed

    Calow, Dirk; Lappe, Markus

    2007-12-01

    Image analysis in the visual system is well adapted to the statistics of natural scenes. Investigations of natural image statistics have so far mainly focused on static features. The present study is dedicated to the measurement and the analysis of the statistics of optic flow generated on the retina during locomotion through natural environments. Natural locomotion includes bouncing and swaying of the head and eye movement reflexes that stabilize gaze onto interesting objects in the scene while walking. We investigate the dependencies of the local statistics of optic flow on the depth structure of the natural environment and on the ego-motion parameters. To measure these dependencies we estimate the mutual information between correlated data sets. We analyze the results with respect to the variation of the dependencies over the visual field, since the visual motions in the optic flow vary depending on visual field position. We find that retinal flow direction and retinal speed show only minor statistical interdependencies. Retinal speed is statistically tightly connected to the depth structure of the scene. Retinal flow direction is statistically mostly driven by the relation between the direction of gaze and the direction of ego-motion. These dependencies differ at different visual field positions such that certain areas of the visual field provide more information about ego-motion and other areas provide more information about depth. The statistical properties of natural optic flow may be used to tune the performance of artificial vision systems based on human imitating behavior, and may be useful for analyzing properties of natural vision systems.
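
    The dependency measurements could be sketched as a simple histogram-based mutual information estimate (the binning choice is ours; the authors' estimator may differ):

      import numpy as np

      def mutual_information(x, y, bins=32):
          # MI (in bits) between two samples via a joint histogram estimate
          pxy, _, _ = np.histogram2d(x, y, bins=bins)
          pxy = pxy / pxy.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

      # e.g. MI between retinal speed and scene depth at one visual-field
      # location: mi = mutual_information(speeds, depths)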

  7. SeeCoast: persistent surveillance and automated scene understanding for ports and coastal areas

    NASA Astrophysics Data System (ADS)

    Rhodes, Bradley J.; Bomberger, Neil A.; Freyman, Todd M.; Kreamer, William; Kirschner, Linda; L'Italien, Adam C.; Mungovan, Wendy; Stauffer, Chris; Stolzar, Lauren; Waxman, Allen M.; Seibert, Michael

    2007-04-01

    SeeCoast is a prototype US Coast Guard port and coastal area surveillance system that aims to reduce operator workload while maintaining optimal domain awareness by shifting their focus from having to detect events to being able to analyze and act upon the knowledge derived from automatically detected anomalous activities. The automated scene understanding capability provided by the baseline SeeCoast system (as currently installed at the Joint Harbor Operations Center at Hampton Roads, VA) results from the integration of several components. Machine vision technology processes the real-time video streams provided by USCG cameras to generate vessel track and classification (based on vessel length) information. A multi-INT fusion component generates a single, coherent track picture by combining information available from the video processor with that from surface surveillance radars and AIS reports. Based on this track picture, vessel activity is analyzed by SeeCoast to detect user-defined unsafe, illegal, and threatening vessel activities using a rule-based pattern recognizer and to detect anomalous vessel activities on the basis of automatically learned behavior normalcy models. Operators can optionally guide the learning system in the form of examples and counter-examples of activities of interest, and refine the performance of the learning system by confirming alerts or indicating examples of false alarms. The fused track picture also provides a basis for automated control and tasking of cameras to detect vessels in motion. Real-time visualization combining the products of all SeeCoast components in a common operating picture is provided by a thin web-based client.

  8. Dynamic thermal signature prediction for real-time scene generation

    NASA Astrophysics Data System (ADS)

    Christie, Chad L.; Gouthas, Efthimios (Themie); Williams, Owen M.; Swierkowski, Leszek

    2013-05-01

    At DSTO, a real-time scene generation framework, VIRSuite, has been developed in recent years, within which trials data are predominantly used for modelling the radiometric properties of the simulated objects. Since in many cases the data are insufficient, a physics-based simulator capable of predicting the infrared signatures of objects and their backgrounds has been developed as a new VIRSuite module. It includes transient heat conduction within the materials, and boundary conditions that take into account the heat fluxes due to solar radiation, wind convection and radiative transfer. In this paper, an overview is presented, covering both the steady-state and transient performance.

  9. Robust colour constancy in red-green dichromats

    PubMed Central

    Linhares, João M. M.; Moreira, Humberto; Lillo, Julio; Nascimento, Sérgio M. C.

    2017-01-01

    Colour discrimination has been widely studied in red-green (R-G) dichromats but the extent to which their colour constancy is affected remains unclear. This work estimated the extent of colour constancy for four normal trichromatic observers and seven R-G dichromats when viewing natural scenes under simulated daylight illuminants. Hyperspectral imaging data from natural scenes were used to generate the stimuli on a calibrated CRT display. In experiment 1, observers viewed a reference scene illuminated by daylight with a correlated colour temperature (CCT) of 6700 K; observers then viewed sequentially two versions of the same scene, one illuminated by either a higher or lower CCT (condition 1, pure CCT change with constant luminance) or a higher or lower average luminance (condition 2, pure luminance change with a constant CCT). The observers' task was to identify the version of the scene that looked different from the reference scene. Thresholds for detecting a pure CCT change or a pure luminance change were estimated; thresholds for R-G dichromats were marginally higher than those for normal trichromats for CCT changes. In experiment 2, observers viewed sequentially a reference scene and a comparison scene with a CCT change or a luminance change above threshold for each observer. The observers' task was to identify whether or not the change was an intensity change. No significant differences were found between the responses of normal trichromats and dichromats. These data suggest robust colour constancy mechanisms along the daylight locus in R-G dichromacy. PMID:28662218

  10. Robust colour constancy in red-green dichromats.

    PubMed

    Álvaro, Leticia; Linhares, João M M; Moreira, Humberto; Lillo, Julio; Nascimento, Sérgio M C

    2017-01-01

    Colour discrimination has been widely studied in red-green (R-G) dichromats but the extent to which their colour constancy is affected remains unclear. This work estimated the extent of colour constancy for four normal trichromatic observers and seven R-G dichromats when viewing natural scenes under simulated daylight illuminants. Hyperspectral imaging data from natural scenes were used to generate the stimuli on a calibrated CRT display. In experiment 1, observers viewed a reference scene illuminated by daylight with a correlated colour temperature (CCT) of 6700 K; observers then viewed sequentially two versions of the same scene, one illuminated by either a higher or lower CCT (condition 1, pure CCT change with constant luminance) or a higher or lower average luminance (condition 2, pure luminance change with a constant CCT). The observers' task was to identify the version of the scene that looked different from the reference scene. Thresholds for detecting a pure CCT change or a pure luminance change were estimated; thresholds for R-G dichromats were marginally higher than those for normal trichromats for CCT changes. In experiment 2, observers viewed sequentially a reference scene and a comparison scene with a CCT change or a luminance change above threshold for each observer. The observers' task was to identify whether or not the change was an intensity change. No significant differences were found between the responses of normal trichromats and dichromats. These data suggest robust colour constancy mechanisms along the daylight locus in R-G dichromacy.

  11. Data-Driven Multiresolution Camera Using the Foveal Adaptive Pyramid

    PubMed Central

    González, Martin; Sánchez-Pedraza, Antonio; Marfil, Rebeca; Rodríguez, Juan A.; Bandera, Antonio

    2016-01-01

    There exist image processing applications, such as tracking or pattern recognition, that do not necessarily require the same resolution across the whole image sensor. In fact, they must only keep it as high as possible in a relatively small region, but covering a wide field of view. This is the aim of foveal vision systems. Briefly, they propose to sense a large field of view at a spatially-variant resolution: one relatively small region, the fovea, is mapped at a high resolution, while the rest of the image is captured at a lower resolution. In these systems, this fovea must be moved, from one region of interest to another one, to scan a visual scene. Ideally, the part of the scene covered by the fovea should not be defined merely spatially, but should correspond to perceptual objects. Segmentation and attention are then intimately tied together: while the segmentation process is responsible for extracting perceptively-coherent entities from the scene (proto-objects), attention can guide segmentation. From this loop, the concept of foveal attention arises. This work proposes a hardware system for mapping a uniformly-sampled sensor to a space-variant one. Furthermore, this mapping is tied with a software-based foveal attention mechanism that takes as input the stream of generated foveal images. The whole hardware/software architecture has been designed to be embedded within an all programmable system on chip (AP SoC). Our results show the flexibility of the data port for exchanging information between the mapping and attention parts of the architecture and the good performance rates of the mapping procedure. Experimental evaluation also demonstrates that the segmentation method and the attention model provide results comparable to other more computationally-expensive algorithms. PMID:27898029

  12. Data-Driven Multiresolution Camera Using the Foveal Adaptive Pyramid.

    PubMed

    González, Martin; Sánchez-Pedraza, Antonio; Marfil, Rebeca; Rodríguez, Juan A; Bandera, Antonio

    2016-11-26

    There exist image processing applications, such as tracking or pattern recognition, that do not necessarily require the same resolution across the whole image sensor. In fact, they must only keep it as high as possible in a relatively small region, but covering a wide field of view. This is the aim of foveal vision systems. Briefly, they propose to sense a large field of view at a spatially-variant resolution: one relatively small region, the fovea, is mapped at a high resolution, while the rest of the image is captured at a lower resolution. In these systems, this fovea must be moved, from one region of interest to another one, to scan a visual scene. Ideally, the part of the scene covered by the fovea should not be defined merely spatially, but should correspond to perceptual objects. Segmentation and attention are then intimately tied together: while the segmentation process is responsible for extracting perceptively-coherent entities from the scene (proto-objects), attention can guide segmentation. From this loop, the concept of foveal attention arises. This work proposes a hardware system for mapping a uniformly-sampled sensor to a space-variant one. Furthermore, this mapping is tied with a software-based foveal attention mechanism that takes as input the stream of generated foveal images. The whole hardware/software architecture has been designed to be embedded within an all programmable system on chip (AP SoC). Our results show the flexibility of the data port for exchanging information between the mapping and attention parts of the architecture and the good performance rates of the mapping procedure. Experimental evaluation also demonstrates that the segmentation method and the attention model provide results comparable to other more computationally-expensive algorithms.
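
    A crude software analogue of the uniform-to-space-variant mapping (the block-averaged periphery and square fovea are our simplifications, not the paper's foveal adaptive pyramid):

      import numpy as np

      def foveate(image, cx, cy, fovea=64, factor=4):
          # image: 2-D grayscale array; (cx, cy): fovea centre (col, row)
          h, w = image.shape
          h, w = h - h % factor, w - w % factor   # crop to a multiple of factor
          img = image[:h, :w].astype(float)
          # Periphery: block-average then nearest-neighbour upsample
          small = img.reshape(h // factor, factor,
                              w // factor, factor).mean(axis=(1, 3))
          out = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
          # Fovea: paste the original full-resolution region back in
          r0, c0 = max(cy - fovea, 0), max(cx - fovea, 0)
          r1, c1 = min(cy + fovea, h), min(cx + fovea, w)
          out[r0:r1, c0:c1] = img[r0:r1, c0:c1]
          return out

      # e.g. space-variant frame with the fovea at column 320, row 240:
      # foveal_frame = foveate(frame, 320, 240)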

  13. ASTER cloud coverage reassessment using MODIS cloud mask products

    NASA Astrophysics Data System (ADS)

    Tonooka, Hideyuki; Omagari, Kunjuro; Yamamoto, Hirokazu; Tachikawa, Tetsushi; Fujita, Masaru; Paitaer, Zaoreguli

    2010-10-01

    In the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Project, two kinds of algorithms are used for cloud assessment in Level-1 processing. The first algorithm, based on the LANDSAT-5 TM Automatic Cloud Cover Assessment (ACCA) algorithm, is used for the subset of daytime scenes observed with only VNIR bands and for all nighttime scenes; the second algorithm, based on the LANDSAT-7 ETM+ ACCA algorithm, is used for most daytime scenes observed with all spectral bands. However, the first algorithm does not work well because it lacks some spectral bands sensitive to cloud detection, and both algorithms have been less accurate over snow/ice-covered areas since April 2008, when the SWIR subsystem developed problems. In addition, they perform less well for some combinations of surface type and sun elevation angle. We have therefore developed an ASTER cloud coverage reassessment system using MODIS cloud mask (MOD35) products, and have reassessed cloud coverage for all ASTER archived scenes (>1.7 million scenes). All of the new cloud coverage data are included in the Image Management System (IMS) databases of the ASTER Ground Data System (GDS) and NASA's Land Processes Distributed Active Archive Center (LP DAAC) and are used for ASTER product searches by users, and cloud mask images are distributed to users through the Internet. Daily upcoming scenes (about 400 scenes per day) are reassessed and inserted into the IMS databases within 5 to 7 days of each scene's observation date. Some validation studies for the new cloud coverage data and some mission-related analyses using those data are also demonstrated in the present paper.
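
    A sketch of decoding the MOD35 product's per-pixel cloud flags; the bit layout below is our recollection of the first Cloud_Mask byte and should be verified against the product user guide:

      import numpy as np

      def mod35_confidence(cloud_mask_byte0):
          # Assumed layout of MOD35 Cloud_Mask byte 0:
          #   bit 0    : cloud-mask-determined flag (1 = determined)
          #   bits 1-2 : unobstructed-FOV confidence
          #              0 cloudy, 1 uncertain, 2 probably clear, 3 confident clear
          b = cloud_mask_byte0.astype(np.uint8)
          determined = (b & 0x01).astype(bool)
          confidence = (b >> 1) & 0x03
          return determined, confidence

      def scene_cloud_fraction(byte0):
          # Percent of determined pixels flagged cloudy or uncertain
          det, conf = mod35_confidence(byte0)
          return 100.0 * np.mean(conf[det] <= 1) if det.any() else np.nan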

  14. The Design of the Trading Mechanism to Adapt the Development of Mixed Cooling Heating and Power

    NASA Astrophysics Data System (ADS)

    Liu, D. N.; Li, Z. H.; Zhou, H. M.; Zhao, Q.; Xu, X. F.

    2017-08-01

    An enterprise that operates a combined cooling, heating and power (CCHP) system has both a customer base and power generation resources. It can therefore participate in direct electricity purchasing both as a power user and as a power generation enterprise. Based on the characteristics of combined cooling, heating and power, this paper designs a business model for its application and proposes a trading scheme for cooling, heating and power, helping enterprises adjust their positions according to the regional power supply and demand situation and participate in the electricity market.

  15. Planarity constrained multi-view depth map reconstruction for urban scenes

    NASA Astrophysics Data System (ADS)

    Hou, Yaolin; Peng, Jianwei; Hu, Zhihua; Tao, Pengjie; Shan, Jie

    2018-05-01

    Multi-view depth map reconstruction is regarded as a suitable approach for 3D generation of large-scale scenes due to its flexibility and scalability. However, there are challenges when this technique is applied to urban scenes, where man-made regular shapes are prominent. To address this need, this paper proposes a planarity constrained multi-view depth (PMVD) map reconstruction method. Starting with image segmentation and feature matching for each input image, the main procedure is iterative optimization under the constraints of planar geometry and smoothness. A set of candidate local planes are first generated by an extended PatchMatch method. The image matching costs are then computed and aggregated by an adaptive-manifold filter (AMF), whereby the smoothness constraint is applied to adjacent pixels through belief propagation. Finally, multiple criteria are used to eliminate image matching outliers. Vertical aerial images, oblique aerial images and ground images are used for qualitative and quantitative evaluations. The experiments demonstrated that PMVD outperforms the popular multi-view depth map reconstruction methods, with roughly twice the accuracy on the aerial datasets, and achieves an outcome comparable to the state-of-the-art for ground images. As expected, PMVD is able to preserve the planarity of piecewise flat structures in urban scenes and restore the edges in depth-discontinuous areas.

  16. Enhanced backgrounds in scene rendering with GTSIMS

    NASA Astrophysics Data System (ADS)

    Prussing, Keith F.; Pierson, Oliver; Cordell, Chris; Stewart, John; Nielson, Kevin

    2018-05-01

    A core component of modeling visible and infrared sensor responses is the ability to faithfully recreate background noise and clutter in a synthetic image. Most tracking and detection algorithms use a combination of signal-to-noise or clutter-to-noise ratios to determine if a signature is of interest. A primary source of clutter is the background that defines the environment in which a target is placed. Over the past few years, the Electro-Optical Systems Laboratory (EOSL) at the Georgia Tech Research Institute has made significant improvements to its in-house simulation framework GTSIMS. First, we have expanded our terrain models to include the effects of terrain orientation on emission and reflection. Second, we have included the ability to model dynamic reflections with full BRDF support. Third, we have added the ability to render physically accurate cirrus clouds. And finally, we have updated the overall rendering procedure to reduce the time necessary to generate a single frame by taking advantage of hardware acceleration. Here, we present the updates to GTSIMS to better predict clutter and noise due to non-uniform backgrounds. Specifically, we show how the addition of clouds, terrain, and improved non-uniform sky rendering improves our ability to represent clutter during scene generation.

  17. "Getting out of downtown": a longitudinal study of how street-entrenched youth attempt to exit an inner city drug scene.

    PubMed

    Knight, Rod; Fast, Danya; DeBeck, Kora; Shoveller, Jean; Small, Will

    2017-05-02

    Urban drug "scenes" have been identified as important risk environments that shape the health of street-entrenched youth. New knowledge is needed to inform policy and programing interventions to help reduce youths' drug scene involvement and related health risks. The aim of this study was to identify how young people envisioned exiting a local, inner-city drug scene in Vancouver, Canada, as well as the individual, social and structural factors that shaped their experiences. Between 2008 and 2016, we draw on 150 semi-structured interviews with 75 street-entrenched youth. We also draw on data generated through ethnographic fieldwork conducted with a subgroup of 25 of these youth between. Youth described that, in order to successfully exit Vancouver's inner city drug scene, they would need to: (a) secure legitimate employment and/or obtain education or occupational training; (b) distance themselves - both physically and socially - from the urban drug scene; and (c) reduce their drug consumption. As youth attempted to leave the scene, most experienced substantial social and structural barriers (e.g., cycling in and out of jail, the need to access services that are centralized within a place that they are trying to avoid), in addition to managing complex individual health issues (e.g., substance dependence). Factors that increased youth's capacity to successfully exit the drug scene included access to various forms of social and cultural capital operating outside of the scene, including supportive networks of friends and/or family, as well as engagement with addiction treatment services (e.g., low-threshold access to methadone) to support cessation or reduction of harmful forms of drug consumption. Policies and programming interventions that can facilitate young people's efforts to reduce engagement with Vancouver's inner-city drug scene are critically needed, including meaningful educational and/or occupational training opportunities, 'low threshold' addiction treatment services, as well as access to supportive housing outside of the scene.

  18. Evaluating the design of an earth radiation budget instrument with system simulations. Part 2: Minimization of instantaneous sampling errors for CERES-I

    NASA Technical Reports Server (NTRS)

    Stowe, Larry; Hucek, Richard; Ardanuy, Philip; Joyce, Robert

    1994-01-01

    Much of the new record of broadband earth radiation budget satellite measurements to be obtained during the late 1990s and early twenty-first century will come from the dual-radiometer Clouds and Earth's Radiant Energy System Instrument (CERES-I) flown aboard sun-synchronous polar orbiters. Simulation studies conducted in this work for an early afternoon satellite orbit indicate that spatial root-mean-square (rms) sampling errors of instantaneous CERES-I shortwave flux estimates will range from about 8.5 to 14.0 W/sq m on a 2.5 deg latitude and longitude grid resolution. Rms errors in longwave flux estimates are only about 20% as large and range from 1.5 to 3.5 W/sq m. These results are based on an optimal cross-track scanner design that includes 50% footprint overlap to eliminate gaps in the top-of-the-atmosphere coverage, and a 'smallest' footprint size to increase the ratio of the number of observations lying within grid areas to the number lying on grid area boundaries. Total instantaneous measurement error also depends on the variability of anisotropic reflectance and emission patterns and on the retrieval methods used to generate target area fluxes. Three retrieval procedures using both CERES-I scanners (cross-track and rotating azimuth plane) are examined: (1) the baseline Earth Radiation Budget Experiment (ERBE) procedure, which assumes that errors due to the use of mean angular dependence models (ADMs) in the radiance-to-flux inversion process nearly cancel when averaged over grid areas; (2) the collocation procedure, in which instantaneous ADMs are estimated from the multiangular, collocated observations of the two scanners and replace the mean models in the computation of satellite flux estimates; and (3) the scene flux approach, which conducts separate target-area retrievals for each ERBE scene category and combines their results using area weighting by scene type. The ERBE retrieval performs best when the simulated radiance field departs from the ERBE mean models by less than 10%. For larger perturbations, both the scene flux and collocation methods produce less error than the ERBE retrieval. The scene flux technique is preferable, however, because it involves fewer restrictive assumptions.
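
    For context, the ERBE-style radiance-to-flux inversion referred to above converts a single measured radiance into a TOA flux estimate by dividing out the scene-dependent anisotropic factor taken from an ADM; a minimal sketch with illustrative numbers:

        import math

        def radiance_to_flux(radiance, anisotropic_factor):
            """ERBE-style inversion: a measured radiance L (W/sq m/sr) is
            converted to a TOA flux estimate F = pi * L / R, where R is the
            anisotropic factor from the scene-type angular dependence
            model (ADM) for the observing geometry."""
            return math.pi * radiance / anisotropic_factor

        # Illustrative values only: R = 1 corresponds to a Lambertian scene.
        flux = radiance_to_flux(radiance=80.0, anisotropic_factor=1.05)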

  19. Characterization techniques for incorporating backgrounds into DIRSIG

    NASA Astrophysics Data System (ADS)

    Brown, Scott D.; Schott, John R.

    2000-07-01

    The appearance of operational hyperspectral imaging spectrometers in both the solar and thermal regions has led to the development of a variety of spectral detection algorithms. The development and testing of these algorithms require well-characterized field collection campaigns that can be time and cost prohibitive. Radiometrically robust synthetic image generation (SIG) environments that can generate appropriate images under a variety of atmospheric conditions and with a variety of sensors offer an excellent supplement that reduces the scope of expensive field collections. In addition, SIG image products provide the algorithm developer with per-pixel truth, allowing for improved characterization of algorithm performance. To meet the needs of the algorithm development community, the image modeling community needs to supply synthetic image products that contain all the spatial and spectral variability present in real-world scenes, and that provide the large-area coverage typically acquired with actual sensors. This places a heavy burden on synthetic scene builders to construct well-characterized scenes that span large areas. Several SIG models have demonstrated the ability to accurately model targets (vehicles, buildings, etc.) using well-constructed target geometry (from CAD packages) and robust thermal and radiometry models. However, background objects (vegetation, infrastructure, etc.) dominate the percentage of real-world scene pixels, and applying target-building techniques to them is time and resource prohibitive. This paper discusses new methods that have been integrated into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model to characterize backgrounds. The new suite of scene construct types allows the user to incorporate both terrain and surface properties to obtain wide-area coverage. The terrain can be incorporated using a triangular irregular network (TIN) derived from elevation data or digital elevation model (DEM) data from actual sensors, together with temperature maps, spectral reflectance cubes (possibly derived from actual sensors), and/or material and mixture maps. Descriptions and examples of each new technique are presented, as well as hybrid methods that demonstrate target embedding in real-world imagery.
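
    A terrain TIN of the kind described can be derived from gridded DEM samples with a standard Delaunay triangulation; a minimal sketch, not the DIRSIG implementation:

        import numpy as np
        from scipy.spatial import Delaunay

        def dem_to_tin(dem, cell_size=1.0):
            """Triangulate a gridded DEM into a TIN: returns (vertices,
            triangles), where vertices are (x, y, z) and triangles index
            into the vertex list."""
            rows, cols = dem.shape
            ys, xs = np.mgrid[0:rows, 0:cols]
            xy = np.column_stack([xs.ravel() * cell_size, ys.ravel() * cell_size])
            tri = Delaunay(xy)                 # 2D triangulation in the plane
            vertices = np.column_stack([xy, dem.ravel()])
            return vertices, tri.simplices

    A production TIN builder would decimate redundant, nearly coplanar samples; this sketch keeps every DEM post.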

  20. An optical systems analysis approach to image resampling

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G.

    1997-01-01

    All types of image registration require some type of resampling, either during the registration or as a final step in the registration process. Thus the image(s) must be regridded into a spatially uniform, or angularly uniform, coordinate system with some pre-defined resolution. Frequently the final resolution is not the resolution at which the data were observed. The registration algorithm designer and end-product user are presented with a multitude of possible resampling methods, each of which modifies the spatial frequency content of the data in some way. The purpose of this paper is threefold: (1) to show how an imaging system modifies the scene, from an end-to-end optical systems analysis approach; (2) to develop a generalized resampling model; and (3) to empirically apply the model to simulated radiometric scene data and tabulate the results. A Hanning-windowed sinc interpolator method will be developed based upon the optical characterization of the system. It will be discussed in terms of the effects and limitations of sampling, aliasing, spectral leakage, and computational complexity. Simulated radiometric scene data will be used to demonstrate each of the algorithms. A high-resolution scene will be "grown" using a fractal growth algorithm based on mid-point recursion techniques. The resulting scene data will be convolved with a point spread function representing the optical response. The resultant scene will be convolved with the detection system's response and subsampled to the desired resolution. The resultant data product will be subsequently resampled to the correct grid using the Hanning-windowed sinc interpolator, and the results and errors tabulated and discussed.
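
    A minimal 1D sketch of the Hanning-windowed sinc interpolator discussed above; the kernel half-width and the edge normalization are illustrative choices:

        import numpy as np

        def hanning_windowed_sinc(offsets, half_width=4):
            """Hanning-windowed sinc kernel evaluated at the given sample
            offsets (in units of input pixels); zero outside the window."""
            t = np.asarray(offsets, dtype=float)
            window = 0.5 * (1.0 + np.cos(np.pi * t / half_width))
            kernel = np.sinc(t) * window
            kernel[np.abs(t) > half_width] = 0.0
            return kernel

        def resample_1d(signal, new_x, half_width=4):
            """Resample a 1D signal at fractional positions new_x; the
            weight normalization compensates for truncation at the edges."""
            out = np.zeros(len(new_x))
            for i, x in enumerate(new_x):
                n0 = int(np.floor(x))
                taps = np.arange(n0 - half_width + 1, n0 + half_width + 1)
                valid = (taps >= 0) & (taps < len(signal))
                w = hanning_windowed_sinc(x - taps[valid], half_width)
                out[i] = np.dot(w, signal[taps[valid]]) / w.sum()
            return out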

  1. Investigation of several aspects of LANDSAT-4 data quality. [Sacramento, San Francisco, and NE Arkansas]

    NASA Technical Reports Server (NTRS)

    Wrigley, R. C. (Principal Investigator)

    1984-01-01

    The Thematic Mapper scene of Sacramento, CA acquired during the TDRSS test was received in TIPS format. Quadrants for both scenes were tested for band-to-band registration using reimplemented block correlation techniques. Summary statistics for band-to-band registrations of TM band combinations for Quadrant 4 of the NE Arkansas scene in TIPS format are tabulated as well as those for Quadrant 1 of the Sacramento scene. The system MTF analysis for the San Francisco scene is completed. The thermal band did not have sufficient contrast for the targets used and was not analyzed.

  2. Real-time range acquisition by adaptive structured light.

    PubMed

    Koninckx, Thomas P; Van Gool, Luc

    2006-03-01

    The goal of this paper is to provide a "self-adaptive" system for real-time range acquisition. Reconstructions are based on a single frame structured light illumination. Instead of using generic, static coding that is supposed to work under all circumstances, system adaptation is proposed. This occurs on-the-fly and renders the system more robust against instant scene variability and creates suitable patterns at startup. A continuous trade-off between speed and quality is made. A weighted combination of different coding cues--based upon pattern color, geometry, and tracking--yields a robust way to solve the correspondence problem. The individual coding cues are automatically adapted within a considered family of patterns. The weights to combine them are based on the average consistency with the result within a small time-window. The integration itself is done by reformulating the problem as a graph cut. Also, the camera-projector configuration is taken into account for generating the projection patterns. The correctness of the range maps is not guaranteed, but an estimation of the uncertainty is provided for each part of the reconstruction. Our prototype is implemented using unmodified consumer hardware only and, therefore, is cheap. Frame rates vary between 10 and 25 fps, dependent on scene complexity.

  3. Improved linearity using harmonic error rejection in a full-field range imaging system

    NASA Astrophysics Data System (ADS)

    Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.

    2008-02-01

    Full field range imaging cameras are used to simultaneously measure the distance for every pixel in a given scene using an intensity modulated illumination source and a gain modulated receiver array. The light is reflected from an object in the scene, and the modulation envelope experiences a phase shift proportional to the target distance. Ideally the waveforms are sinusoidal, allowing the phase, and hence object range, to be determined from four measurements using an arctangent function. In practice these waveforms are often not perfectly sinusoidal, and in some cases square waveforms are instead used to simplify the electronic drive requirements. The waveforms therefore commonly contain odd harmonics which contribute a nonlinear error to the phase determination, and therefore an error in the range measurement. We have developed a unique sampling method to cancel the effect of these harmonics, with the results showing an order of magnitude improvement in the measurement linearity without the need for calibration or lookup tables, while the acquisition time remains unchanged. The technique can be applied to existing range imaging systems without having to change or modify the complex illumination or sensor systems, instead only requiring a change to the signal generation and timing electronics.
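
    The four-measurement arctangent step mentioned above is the standard four-bucket estimator for amplitude-modulated continuous-wave ranging; a minimal sketch (sample-ordering conventions vary between systems):

        import math

        C = 299_792_458.0  # speed of light, m/s

        def phase_to_range(a0, a1, a2, a3, f_mod):
            """Four samples of the correlation waveform, 90 degrees apart,
            give the modulation phase via an arctangent; range follows from
            the modulation frequency (the ambiguity interval is ignored)."""
            phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
            return C * phase / (4 * math.pi * f_mod)

    With non-sinusoidal waveforms, the odd harmonics bias this estimator, which is precisely the nonlinearity the sampling method described above is designed to cancel.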

  4. Semi-autonomous wheelchair system using stereoscopic cameras.

    PubMed

    Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T

    2009-01-01

    This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. The stereoscopic cameras capture an image from both the left and right cameras, which are then processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects away from the camera in the image. A geometric projection algorithm is then used to generate a 3-Dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a 2-Dimensional (2D) depth map, allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. This assistive technology utilising stereoscopic cameras serves the purposes of automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment demonstrated the effectiveness of this assistive technology.
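
    A minimal sketch of the SAD correlation step described above for a rectified grayscale stereo pair; the window size and search range are illustrative:

        import numpy as np

        def sad_disparity(left, right, max_disp=64, win=4):
            """Brute-force SAD block matching: for each pixel, pick the
            horizontal shift whose window has the lowest sum of absolute
            differences. Returns a disparity map (larger = closer)."""
            h, w = left.shape
            disp = np.zeros((h, w), dtype=np.int32)
            for y in range(win, h - win):
                for x in range(win + max_disp, w - win):
                    patch = left[y - win:y + win + 1, x - win:x + win + 1].astype(float)
                    costs = [
                        np.abs(patch - right[y - win:y + win + 1,
                                             x - d - win:x - d + win + 1]).sum()
                        for d in range(max_disp)
                    ]
                    disp[y, x] = int(np.argmin(costs))
            return disp

    A real-time implementation would reuse running window sums or integral images instead of this brute-force loop.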

  5. Interactive Scene Analysis Module - A sensor-database fusion system for telerobotic environments

    NASA Technical Reports Server (NTRS)

    Cooper, Eric G.; Vazquez, Sixto L.; Goode, Plesent W.

    1992-01-01

    Accomplishing a task with telerobotics typically involves a combination of operator control/supervision and a 'script' of preprogrammed commands. These commands usually assume that the locations of various objects in the task space conform to some internal representation (database) of that task space. The ability to quickly and accurately verify the task environment against the internal database would improve the robustness of these preprogrammed commands. In addition, the on-line initialization and maintenance of a task-space database is difficult for operators using Cartesian coordinates alone. This paper describes the Interactive Scene Analysis Module (ISAM), developed to provide task-space database initialization and verification utilizing 3-D graphic overlay modelling, video imaging, and laser-radar-based range imaging. Through the fusion of task-space database information and image sensor data, a verifiable task-space model is generated, providing location and orientation data for objects in a task space. This paper also describes applications of the ISAM in the Intelligent Systems Research Laboratory (ISRL) at NASA Langley Research Center, and discusses its performance relative to representation accuracy and operator interface efficiency.

  6. Real-time generation of infrared ocean scene based on GPU

    NASA Astrophysics Data System (ADS)

    Jiang, Zhaoyi; Wang, Xun; Lin, Yun; Jin, Jianqiu

    2007-12-01

    Infrared (IR) image synthesis for ocean scenes has become increasingly important, especially for remote sensing and military applications. Although a number of works present ready-to-use simulations, those techniques cover only a few of the possible ways in which water interacts with the environment, and the detailed calculation of ocean temperature is rarely considered by previous investigators. With the advance of programmable features of graphics cards, many algorithms previously limited to offline processing have become feasible for real-time use. In this paper, we propose an efficient algorithm for real-time rendering of infrared ocean scenes using the newest features of programmable graphics processors (GPUs). It differs from previous works in three aspects: adaptive GPU-based ocean surface tessellation, a sophisticated thermal balance equation for the ocean surface, and GPU-based rendering of the infrared ocean scene. Finally, some infrared image results are shown, which are in good accordance with real images.
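
    As a flavor of the radiometry involved, the in-band radiance of a water surface element at temperature T can be approximated by integrating the Planck function over the 8-12 μm band; a simplified sketch that ignores emissivity, reflected sky radiance, and atmospheric transmission:

        import numpy as np

        H = 6.62607015e-34   # Planck constant, J*s
        C = 2.99792458e8     # speed of light, m/s
        KB = 1.380649e-23    # Boltzmann constant, J/K

        def band_radiance(temp_k, lam_lo=8e-6, lam_hi=12e-6, n=500):
            """Blackbody spectral radiance integrated over a wavelength band
            (W/sq m/sr), via the trapezoid rule."""
            lam = np.linspace(lam_lo, lam_hi, n)
            spectral = (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * temp_k))
            return float(np.sum(0.5 * (spectral[1:] + spectral[:-1]) * np.diff(lam)))

        # E.g., a 290 K sea-surface patch in the 8-12 micron band:
        L = band_radiance(290.0)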

  7. Spatial Modulation Improves Performance in CTIS

    NASA Technical Reports Server (NTRS)

    Bearman, Gregory H.; Wilson, Daniel W.; Johnson, William R.

    2009-01-01

    Suitably formulated spatial modulation of a scene imaged by a computed-tomography imaging spectrometer (CTIS) has been found to be useful as a means of improving the imaging performance of the CTIS. As used here, "spatial modulation" signifies the imposition of additional, artificial structure on a scene from within the CTIS optics. The basic principles of a CTIS were described in "Improvements in Computed-Tomography Imaging Spectrometry" (NPO-20561), NASA Tech Briefs, Vol. 24, No. 12 (December 2000), page 38, and "All-Reflective Computed-Tomography Imaging Spectrometers" (NPO-20836), NASA Tech Briefs, Vol. 26, No. 11 (November 2002), page 7a. To recapitulate: A CTIS offers capabilities for imaging a scene with spatial, spectral, and temporal resolution. The spectral disperser in a CTIS is a two-dimensional diffraction grating. It is positioned between two relay lenses (or on one of two relay mirrors) in a video imaging system. If the disperser were removed, the system would produce ordinary images of the scene in its field of view. In the presence of the grating, the image on the focal plane of the system contains both spectral and spatial information because the multiple diffraction orders of the grating give rise to multiple, spectrally dispersed images of the scene. By use of algorithms adapted from computed tomography, the image on the focal plane can be processed into an image cube: a three-dimensional collection of data on the image intensity as a function of the two spatial dimensions (x and y) in the scene and of wavelength (lambda). Thus, both spectrally and spatially resolved information on the scene at a given instant of time can be obtained, without scanning, from a single snapshot; this is what makes the CTIS such a potentially powerful tool for spatially, spectrally, and temporally resolved imaging. A CTIS performs poorly in imaging some types of scenes, in particular scenes that contain little spatial or spectral variation. The computed spectra of such scenes tend to approximate correct values to within acceptably small errors near the edges of the field of view but to be poor approximations away from the edges. The additional structure imposed on a scene according to the present method enables the CTIS algorithms to reconstruct acceptable approximations of the spectral data throughout the scene.

  8. Frogs Exploit Statistical Regularities in Noisy Acoustic Scenes to Solve Cocktail-Party-like Problems.

    PubMed

    Lee, Norman; Ward, Jessica L; Vélez, Alejandro; Micheyl, Christophe; Bee, Mark A

    2017-03-06

    Noise is a ubiquitous source of errors in all forms of communication [1]. Noise-induced errors in speech communication, for example, make it difficult for humans to converse in noisy social settings, a challenge aptly named the "cocktail party problem" [2]. Many nonhuman animals also communicate acoustically in noisy social groups and thus face biologically analogous problems [3]. However, we know little about how the perceptual systems of receivers are evolutionarily adapted to avoid the costs of noise-induced errors in communication. In this study of Cope's gray treefrog (Hyla chrysoscelis; Hylidae), we investigated whether receivers exploit a potential statistical regularity present in noisy acoustic scenes to reduce errors in signal recognition and discrimination. We developed an anatomical/physiological model of the peripheral auditory system to show that temporal correlation in amplitude fluctuations across the frequency spectrum ("comodulation") [4-6] is a feature of the noise generated by large breeding choruses of sexually advertising males. In four psychophysical experiments, we investigated whether females exploit comodulation in background noise to mitigate noise-induced errors in evolutionarily critical mate-choice decisions. Subjects experienced fewer errors in recognizing conspecific calls and in selecting the calls of high-quality mates in the presence of simulated chorus noise that was comodulated. These data show unequivocally, and for the first time, that exploiting statistical regularities present in noisy acoustic scenes is an important biological strategy for solving cocktail-party-like problems in nonhuman animal communication.

  9. A secure mobile multimedia system to assist emergency response teams.

    PubMed

    Belala, Yacine; Issa, Omneya; Gregoire, Jean-Charles; Wong, James

    2008-08-01

    Long wait times after injury and greater distances to travel between accident scenes and medical facilities contribute to an increased number of potentially preventable deaths. This paper describes a mobile emergency system aimed at reducing mortality by improving the readiness of hospital personnel, thereby allowing more efficient treatment procedures to be performed when the victim arrives. The system is designed to provide secure transmission of voice, medical data, and video in real time over third-generation cellular networks. Test results obtained on a commercial network under real-life conditions demonstrate the ability to effectively transmit medical data over 3G networks, making them a viable option available to healthcare professionals.

  10. Coding of navigational affordances in the human visual system

    PubMed Central

    Epstein, Russell A.

    2017-01-01

    A central component of spatial navigation is determining where one can and cannot go in the immediate environment. We used fMRI to test the hypothesis that the human visual system solves this problem by automatically identifying the navigational affordances of the local scene. Multivoxel pattern analyses showed that a scene-selective region of dorsal occipitoparietal cortex, known as the occipital place area, represents pathways for movement in scenes in a manner that is tolerant to variability in other visual features. These effects were found in two experiments: One using tightly controlled artificial environments as stimuli, the other using a diverse set of complex, natural scenes. A reconstruction analysis demonstrated that the population codes of the occipital place area could be used to predict the affordances of novel scenes. Taken together, these results reveal a previously unknown mechanism for perceiving the affordance structure of navigable space. PMID:28416669

  11. Optical system design of dynamic infrared scene projector based on DMD

    NASA Astrophysics Data System (ADS)

    Lu, Jing; Fu, Yuegang; Liu, Zhiying; Li, Yandong

    2014-09-01

    Infrared scene simulators are now widely used to simulate realistic infrared scenes in the laboratory, which can greatly reduce the research cost of electro-optical systems and offer an economical experimental environment. With the advantages of large dynamic range and high spatial resolution, dynamic infrared projection technology based on the digital micro-mirror device (DMD), the key part of the infrared scene simulator, has developed rapidly and been widely applied in recent years. In this paper, the principle of the digital micro-mirror device is briefly introduced and the characteristics of the DLP (Digital Light Processing) system based on the DMD are analyzed. A projection system working at 8~12 μm with a 1024×768-pixel DMD is designed in ZEMAX. The MTF curve is close to the diffraction-limited curve and the radius of the spot diagram is smaller than that of the Airy disk. The result indicates that the system meets the design requirements.

  12. Capturing the plenoptic function in a swipe

    NASA Astrophysics Data System (ADS)

    Lawson, Michael; Brookes, Mike; Dragotti, Pier Luigi

    2016-09-01

    Blur in images, caused by camera motion, is typically thought of as a problem. The approach described in this paper shows instead that it is possible to use the blur caused by the integration of light rays at different positions along a moving camera trajectory to extract information about the light rays present within the scene. Retrieving the light rays of a scene from different viewpoints is equivalent to retrieving the plenoptic function of the scene. In this paper, we focus on a specific case in which the blurred image of a scene, containing a flat plane with a texture signal that is a sum of sine waves, is analysed to recreate the plenoptic function. The image is captured by a single lens camera with shutter open, moving in a straight line between two points, resulting in a swiped image. It is shown that finite rate of innovation sampling theory can be used to recover the scene geometry and therefore the epipolar plane image from the single swiped image. This epipolar plane image can be used to generate unblurred images for a given camera location.

  13. A read-in IC for infrared scene projectors with voltage drop compensation for improved uniformity of emitter current

    NASA Astrophysics Data System (ADS)

    Cho, Min Ji; Shin, Uisub; Lee, Hee Chul

    2017-05-01

    This paper proposes a read-in integrated circuit (RIIC) for infrared scene projectors, which compensates for the voltage drops in ground lines in order to improve the uniformity of the emitter current. A current output digital-to-analog converter is utilized to convert digital scene data into scene data currents. The unit cells in the array receive the scene data current and convert it into data voltage, which simultaneously self-adjusts to account for the voltage drop in the ground line in order to generate the desired emitter current independently of variations in the ground voltage. A 32 × 32 RIIC unit cell array was designed and fabricated using a 0.18-μm CMOS process. The experimental results demonstrate that the proposed RIIC can output a maximum emitter current of 150 μA and compensate for a voltage drop in the ground line of up to 500 mV under a 3.3-V supply. The uniformity of the emitter current is significantly improved compared to that of a conventional RIIC.

  14. A scheme for racquet sports video analysis with the combination of audio-visual information

    NASA Astrophysics Data System (ADS)

    Xing, Liyuan; Ye, Qixiang; Zhang, Weigang; Huang, Qingming; Yu, Hua

    2005-07-01

    As a very important category in sports video, racquet sports video, e.g. table tennis, tennis and badminton, has received little attention in past years. Considering the characteristics of this kind of sports video, we propose a new scheme for structure indexing and highlight generation based on the combination of audio and visual information. Firstly, a supervised classification method is employed to detect important audio symbols including impact (ball hit), audience cheers, commentator speech, etc. Meanwhile, an unsupervised algorithm is proposed to group video shots into various clusters. Then, by taking advantage of the temporal relationship between audio and visual signals, we can specify the scene clusters with semantic labels including rally scenes and break scenes. Thirdly, a refinement procedure is developed to reduce false rally scenes by further audio analysis. Finally, an excitement model is proposed to rank the detected rally scenes, from which many exciting video clips such as game (match) points can be correctly retrieved. Experiments on two types of representative racquet sports video, table tennis video and tennis video, demonstrate encouraging results.

  15. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  17. Vehicle-network defensive aids suite

    NASA Astrophysics Data System (ADS)

    Rapanotti, John

    2005-05-01

    Defensive Aids Suites (DAS) developed for vehicles can be extended to the vehicle network level. The vehicle network, typically comprising four platoon vehicles, will benefit from improved communications and automation based on low-latency response to threats from a flexible, dynamic, self-healing network environment. Improved DAS performance and reliability rely on four complementary sensor technologies: acoustics, visible and infrared optics, laser detection and radar. Long-range passive threat detection and avoidance is based on dual-purpose optics, primarily designed for manoeuvring, targeting and surveillance, combined with dazzling, obscuration and countermanoeuvres. Short-range active armour is based on search and track radar and intercepting grenades to defeat the threat. Acoustic threat detection increases the overall robustness of the DAS and extends the detection range to include small calibers. Finally, detection of active targeting systems is carried out with laser and radar warning receivers. Synthetic scene generation will provide the integrated environment needed to investigate, develop and validate these new capabilities. Computer-generated imagery, based on validated models and an acceptable set of benchmark vignettes, can be used to investigate and develop fieldable sensors driven by real-time algorithms and countermeasure strategies. The synthetic scene environment will be suitable for sensor and countermeasure development in hardware-in-the-loop simulation. The research effort focuses on two key technical areas: a) computing aspects of synthetic scene generation, and b) development of adapted models and databases. OneSAF is being developed for research and development, in addition to the original requirement of Simulation and Modelling for Acquisition, Rehearsal, Requirements and Training (SMARRT), and is becoming useful as a means for transferring technology to other users, researchers and contractors. This procedure eliminates the need to construct ad hoc models and databases. The vehicle network can be modelled phenomenologically until more information is available. These concepts and approach will be discussed in the paper.

  18. Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System

    DTIC Science & Technology

    2015-03-26

    camera model. Light reflected or projected from objects in the scene of the outside world is taken in by the aperture (or opening) shaped as a double ... model's analog aspects with an analog-to-digital interface converting raw images of the outside world scene into digital information a computer can use ... [Figure 2.7: Digital Image Coordinate System] ... The angular field of view is the angle of the world scene

  19. Visual cues in low-level flight - Implications for pilotage, training, simulation, and enhanced/synthetic vision systems

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Kaiser, Mary K.; Johnson, Walter W.

    1992-01-01

    This paper reviews some of the sources of visual information that are available in the out-the-window scene and describes how these visual cues are important for routine pilotage and training, as well as the development of simulator visual systems and enhanced or synthetic vision systems for aircraft cockpits. It is shown how these visual cues may change or disappear under environmental or sensor conditions, and how the visual scene can be augmented by advanced displays to capitalize on the pilot's excellent ability to extract visual information from the visual scene.

  20. Speed Limits: Orientation and Semantic Context Interactions Constrain Natural Scene Discrimination Dynamics

    ERIC Educational Resources Information Center

    Rieger, Jochem W.; Kochy, Nick; Schalk, Franziska; Gruschow, Marcus; Heinze, Hans-Jochen

    2008-01-01

    The visual system rapidly extracts information about objects from the cluttered natural environment. In 5 experiments, the authors quantified the influence of orientation and semantics on the classification speed of objects in natural scenes, particularly with regard to object-context interactions. Natural scene photographs were presented in an…

  1. Guest Editor's introduction: Special issue on distributed virtual environments

    NASA Astrophysics Data System (ADS)

    Lea, Rodger

    1998-09-01

    Distributed virtual environments (DVEs) combine technology from 3D graphics, virtual reality and distributed systems to provide an interactive 3D scene that supports multiple participants. Each participant has a representation in the scene, often known as an avatar, and is free to navigate through the scene and interact with both the scene and other viewers of the scene. Changes to the scene, for example, position changes of one avatar as the associated viewer navigates through the scene, or changes to objects in the scene via manipulation, are propagated in real time to all viewers. This ensures that all viewers of a shared scene 'see' the same representation of it, allowing sensible reasoning about the scene. Early work on such environments was restricted to their use in simulation, in particular in military simulation. However, over recent years a number of interesting and potentially far-reaching attempts have been made to exploit the technology for a range of other uses, including: Social spaces. Such spaces can be seen as logical extensions of the familiar text chat space. In 3D social spaces avatars, representing participants, can meet in shared 3D scenes and in addition to text chat can use visual cues and even in some cases spatial audio. Collaborative working. A number of recent projects have attempted to explore the use of DVEs to facilitate computer-supported collaborative working (CSCW), where the 3D space provides a context and work space for collaboration. Gaming. The shared 3D space is already familiar, albeit in a constrained manner, to the gaming community. DVEs are a logical superset of existing 3D games and can provide a rich framework for advanced gaming applications. e-commerce. The ability to navigate through a virtual shopping mall and to look at, and even interact with, 3D representations of articles has appealed to the e-commerce community as it searches for the best method of presenting merchandise to electronic consumers. The technology needed to support these systems crosses a number of disciplines in computer science. These include, but are certainly not limited to, real-time graphics for the accurate and realistic representation of scenes, group communications for the efficient update of shared consistent scene data, user interface modelling to exploit the use of the 3D representation and multimedia systems technology for the delivery of streamed graphics and audio-visual data into the shared scene. It is this intersection of technologies and the overriding need to provide visual realism that places such high demands on the underlying distributed systems infrastructure and makes DVEs such fertile ground for distributed systems research. Two examples serve to show how DVE developers have exploited the unique aspects of their domain. Communications. The usual tension between latency and throughput is particularly noticeable within DVEs. To ensure the timely update of multiple viewers of a particular scene requires that such updates be propagated quickly. However, the sheer volume of changes to any one scene calls for techniques that minimize the number of distinct updates that are sent to the network. Several techniques have been used to address this tension; these include the use of multicast communications, and in particular multicast in wide-area networks to reduce actual message traffic. Multicast has been combined with general group communications to partition updates to related objects or users of a scene. 
A less traditional approach has been the use of dead reckoning, whereby a client application that visualizes the scene calculates position updates by extrapolating movement based on previous information, as sketched below. This allows the system to reduce the number of communications needed to update objects that move in a stable manner within the scene. Scaling. DVEs, especially those used for social spaces, are required to support large numbers of simultaneous users in potentially large shared scenes. The desire for scalability has driven different architectural designs, for example, the use of fully distributed architectures which scale well but often suffer performance costs versus centralized and hierarchical architectures in which the inverse is true. However, DVEs have also exploited the spatial nature of their domain to address scalability and have pioneered techniques that exploit the semantics of the shared space to reduce data updates and so allow greater scalability. Several of the systems reported in this special issue apply a notion of area of interest to partition the scene and so reduce the participants in any data updates. The specification of area of interest differs between systems. One approach has been to exploit a geographical notion, i.e. a regular portion of a scene, or a semantic unit, such as a room or building. Another approach has been to define the area of interest as a spatial area associated with an avatar in the scene. The five papers in this special issue have been chosen to highlight the distributed systems aspects of the DVE domain. The first paper, on the DIVE system, described by Emmanuel Frécon and Mårten Stenius, explores the use of multicast and group communication in a fully peer-to-peer architecture. The developers of DIVE have focused on its use as the basis for collaborative work environments and have explored the issues associated with maintaining and updating large complicated scenes. The second paper, by Hiroaki Harada et al., describes the AGORA system, a DVE concentrating on social spaces and employing a novel communication technique that incorporates position update and vector information to support dead reckoning. The paper by Simon Powers et al. explores the application of DVEs to the gaming domain. They propose a novel architecture that separates out higher-level game semantics - the conceptual model - from the lower-level scene attributes - the dynamic model, both running on servers, from the actual visual representation - the visual model - running on the client. They claim a number of benefits from this approach, including better predictability and consistency. Wolfgang Broll discusses the SmallView system which is an attempt to provide a toolkit for DVEs. One of the key features of SmallView is a sophisticated application-level protocol, DWTP, that provides support for a variety of communication models. The final paper, by Chris Greenhalgh, discusses the MASSIVE system which has been used to explore the notion of awareness in the 3D space via the concept of 'auras'. These auras define an area of interest for users and support a mapping between what a user is aware of, and what data update rate the communications infrastructure can support. 
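
    A minimal sketch of the dead-reckoning technique mentioned above, assuming a linear motion model; the class name and the error threshold are illustrative:

        import numpy as np

        class DeadReckoner:
            """Linear dead reckoning for one scene object."""
            def __init__(self, pos, vel, t):
                self.pos = np.asarray(pos, float)
                self.vel = np.asarray(vel, float)
                self.t = t

            def extrapolate(self, t_now):
                # Every client runs this between network updates.
                return self.pos + self.vel * (t_now - self.t)

            def needs_update(self, true_pos, t_now, threshold=0.1):
                # The owning client sends a new (pos, vel) only when the
                # shared extrapolation drifts too far from the true state.
                drift = self.extrapolate(t_now) - np.asarray(true_pos, float)
                return np.linalg.norm(drift) > threshold
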
Finally, we wish to thank Hubert Le Van Gong for his tireless efforts in pulling together all these papers and both the referees and the authors of the papers for the time and effort in ensuring that their contributions teased out the interesting distributed systems issues for this special issue.

  2. A Low-Visibility Force Multiplier: Assessing China’s Cruise Missile Ambitions

    DTIC Science & Technology

    2014-04-01

    terminal sensor to achieve 10–15 meter (m) accuracy. • The second-generation DH-10 has a GPS/inertial guidance system but may also use terrain...contour mapping for redundant midcourse guidance and a digital scene-matching sensor to permit an accuracy of 10 m. • Development of the Chinese Beidou...pictures of the target as seen from different perspectives. DSMAC permits LACMs to achieve accuracies of about 1 m. Other (for example, thermal) sensors

  3. Movable Cameras And Monitors For Viewing Telemanipulator

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Venema, Steven C.

    1993-01-01

    Three methods proposed to assist operator viewing telemanipulator on video monitor in control station when video image generated by movable video camera in remote workspace of telemanipulator. Monitors rotated or shifted and/or images in them transformed to adjust coordinate systems of scenes visible to operator according to motions of cameras and/or operator's preferences. Reduces operator's workload and probability of error by obviating need for mental transformations of coordinates during operation. Methods applied in outer space, undersea, in nuclear industry, in surgery, in entertainment, and in manufacturing.

  4. Deconstructing Visual Scenes in Cortex: Gradients of Object and Spatial Layout Information

    PubMed Central

    Kravitz, Dwight J.; Baker, Chris I.

    2013-01-01

    Real-world visual scenes are complex, cluttered, and heterogeneous stimuli engaging scene- and object-selective cortical regions including the parahippocampal place area (PPA), retrosplenial complex (RSC), and lateral occipital complex (LOC). To understand the unique contribution of each region to distributed scene representations, we generated predictions based on a neuroanatomical framework adapted from the monkey and tested them using minimal scenes in which we independently manipulated both spatial layout (open, closed, and gradient) and object content (furniture, e.g., bed, dresser). Commensurate with its strong connectivity with posterior parietal cortex, RSC evidenced strong spatial layout information but no object information, and its response was not even modulated by object presence. In contrast, LOC, which lies within the ventral visual pathway, contained strong object information but no background information. Finally, PPA, which is connected with both the dorsal and the ventral visual pathway, showed information about both objects and spatial backgrounds and was sensitive to the presence or absence of either. These results suggest that 1) LOC, PPA, and RSC have distinct representations, emphasizing different aspects of scenes, 2) the specific representations in each region are predictable from their patterns of connectivity, and 3) PPA combines both spatial layout and object information as predicted by connectivity. PMID:22473894

  5. Meta Data Mining in Earth Remote Sensing Data Archives

    NASA Astrophysics Data System (ADS)

    Davis, B.; Steinwand, D.

    2014-12-01

    Modern search and discovery tools for satellite-based remote sensing data are often catalog-based and rely on query systems which use scene- (or granule-) based meta data for those queries. While these traditional catalog systems are often robust, very little has been done in the way of meta data mining to aid in the search and discovery process. The recently coined term "Big Data" can be applied to the remote sensing world's efforts to derive information from the vast data holdings of satellite-based land remote sensing data. Large catalog-based search and discovery systems such as the United States Geological Survey's Earth Explorer system and the NASA Earth Observing System Data and Information System's Reverb-ECHO system provide comprehensive access to these data holdings, but do little to expose the underlying scene-based meta data. These catalog-based systems are extremely flexible, but are manually intensive and often require a high level of user expertise. Exposing scene-based meta data to external, web-based services can enable machine-driven queries to aid in the search and discovery process. Furthermore, services which expose additional scene-based content data (such as product quality information) are now available and can provide a "deeper look" into remote sensing data archives too large for efficient manual search methods. This presentation shows examples of the mining of Landsat and ASTER scene-based meta data, and an experimental service using OPeNDAP to extract information from the quality band of multiple granules in the MODIS archive.

  6. Statistics of Natural Communication Signals Observed in the Wild Identify Important Yet Neglected Stimulus Regimes in Weakly Electric Fish.

    PubMed

    Henninger, Jörg; Krahe, Rüdiger; Kirschbaum, Frank; Grewe, Jan; Benda, Jan

    2018-06-13

    Sensory systems evolve in the ecological niches that each species is occupying. Accordingly, encoding of natural stimuli by sensory neurons is expected to be adapted to the statistics of these stimuli. For a direct quantification of sensory scenes, we tracked natural communication behavior of male and female weakly electric fish, Apteronotus rostratus, in their Neotropical rainforest habitat with high spatiotemporal resolution over several days. In the context of courtship, we observed large quantities of electrocommunication signals. Echo responses, acknowledgment signals, and their synchronizing role in spawning demonstrated the behavioral relevance of these signals. In both courtship and aggressive contexts, we observed robust behavioral responses in stimulus regimes that have so far been neglected in electrophysiological studies of this well characterized sensory system and that are well beyond the range of known best frequency and amplitude tuning of the electroreceptor afferents' firing rate modulation. Our results emphasize the importance of quantifying sensory scenes derived from freely behaving animals in their natural habitats for understanding the function and evolution of neural systems. SIGNIFICANCE STATEMENT The processing mechanisms of sensory systems have evolved in the context of the natural lives of organisms. To understand the functioning of sensory systems therefore requires probing them in the stimulus regimes in which they evolved. We took advantage of the continuously generated electric fields of weakly electric fish to explore electrosensory stimulus statistics in their natural Neotropical habitat. Unexpectedly, many of the electrocommunication signals recorded during courtship, spawning, and aggression had much smaller amplitudes or higher frequencies than stimuli used so far in neurophysiological characterizations of the electrosensory system. Our results demonstrate that quantifying sensory scenes derived from freely behaving animals in their natural habitats is essential to avoid biases in the choice of stimuli used to probe brain function.

  7. Initial progress in the recording of crime scene simulations using 3D laser structured light imagery techniques for law enforcement and forensic applications

    NASA Astrophysics Data System (ADS)

    Altschuler, Bruce R.; Monson, Keith L.

    1998-03-01

    Representation of crime scenes as virtual reality 3D computer displays promises to become a useful and important tool for law enforcement evaluation and analysis, forensic identification and pathological study, and archival presentation during court proceedings. Use of these methods for assessment of evidentiary materials demands complete accuracy of reproduction of the original scene, both in data collection and in its eventual virtual reality representation. The recording of spatially accurate information as soon as possible after first arrival of law enforcement personnel is advantageous for unstable or hazardous crime scenes and reduces the possibility that either inadvertent measurement error or deliberate falsification may occur or be alleged concerning processing of a scene. Detailed measurements and multimedia archiving of critical surface topographical details in a calibrated, uniform, consistent and standardized quantitative 3D coordinate method are needed. These methods would afford professional personnel in initial contact with a crime scene the means for remote, non-contacting, immediate, thorough and unequivocal documentation of the contents of the scene. Measurements of the relative and absolute global positions of objects and victims, and their dispositions within the scene before their relocation and detailed examination, could be made. Resolution must be sufficient to map both small and large objects. Equipment must be able to map regions at varied resolution as collected from different perspectives. Progress is presented in devising methods for collecting and archiving 3D spatial numerical data from crime scenes, sufficient for law enforcement needs, by remote laser structured light and video imagery. Two types of simulation studies were done. One study evaluated the potential of 3D topographic mapping and 3D telepresence using a robotic platform for explosive ordnance disassembly. The second study involved using the laser mapping system on a fixed optical bench with simulated crime scene models of people and furniture to assess feasibility, requirements and utility of such a system for crime scene documentation and analysis.

  8. Motion picture history of the erection and operation of the Smith-Putnam wind generator

    NASA Technical Reports Server (NTRS)

    Wilcox, C.

    1973-01-01

    A color movie presentation is discussed that shows the various stages of assembling the major subsystems of a synchronous wind generator, such as installing the rotor blades and the rotating platform at the top of the tower. In addition, scenes are shown of the wind generator in operation.

  9. ROSE: the road simulation environment

    NASA Astrophysics Data System (ADS)

    Liatsis, Panos; Mitronikas, Panogiotis

    1997-05-01

    Evaluation of advanced sensing systems for autonomous vehicle navigation (AVN) is currently carried out off-line with prerecorded image sequences taken by physically attaching the sensors to the ego-vehicle. The data collection process is cumbersome and costly, as well as highly restricted to specific road environments and weather conditions. This work proposes the use of scientific animation in modeling and representation of real-world traffic scenes and aims to produce an efficient, reliable and cost-effective concept evaluation suite for AVN sensing algorithms. ROSE is organized in a modular fashion consisting of the route generator, the journey generator, the sequence description generator and the renderer. The application was developed in MATLAB, and POV-Ray was selected as the rendering module. User-friendly graphical user interfaces have been designed to allow easy selection of animation parameters and monitoring of the generation process. The system, in its current form, allows the generation of various traffic scenarios, providing for an adequate number of static/dynamic objects, road types and environmental conditions. Initial tests on the robustness of various image processing algorithms to varying lighting and weather conditions have already been carried out.

  10. Application of composite small calibration objects in traffic accident scene photogrammetry.

    PubMed

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.
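
    The two-dimensional direct linear transformation mentioned above estimates a plane-to-image homography from point correspondences; a minimal least-squares sketch using the SVD (the paper's joint reprojection-error refinement over all calibration objects is not reproduced here):

        import numpy as np

        def dlt_homography(world_pts, image_pts):
            """Estimate the 3x3 homography H mapping world-plane points to
            image points (both Nx2, N >= 4) via the standard DLT system."""
            rows = []
            for (X, Y), (u, v) in zip(world_pts, image_pts):
                rows.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
                rows.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
            _, _, vt = np.linalg.svd(np.asarray(rows, float))
            H = vt[-1].reshape(3, 3)   # null-space solution of A h = 0
            return H / H[2, 2]

        def project(H, X, Y):
            """Apply the homography (used to compute reprojection error)."""
            u, v, w = H @ np.array([X, Y, 1.0])
            return u / w, v / w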

  11. 3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach

    PubMed Central

    Vlaminck, Michiel; Luong, Hiep; Goeman, Werner; Philips, Wilfried

    2016-01-01

    In this paper, we propose a novel approach to obtain accurate 3D reconstructions of large-scale environments by means of a mobile acquisition platform. The system incorporates a Velodyne LiDAR scanner, as well as a Point Grey Ladybug panoramic camera system. It was designed with genericity in mind, and hence, it does not make any assumption about the scene or about the sensor set-up. The main novelty of this work is that the proposed LiDAR mapping approach deals explicitly with the inhomogeneous density of point clouds produced by LiDAR scanners. To this end, we keep track of a global 3D map of the environment, which is continuously improved and refined by means of a surface reconstruction technique. Moreover, we perform surface analysis on consecutively generated point clouds in order to assure a perfect alignment with the global 3D map. In order to cope with drift, the system incorporates loop closure by determining the pose error and propagating it back in the pose graph. Our algorithm was exhaustively tested on data captured at a conference building, a university campus and an industrial site of a chemical company. Experiments demonstrate that it is capable of generating highly accurate 3D maps in very challenging environments. We can state that the average distance of corresponding point pairs between the ground truth and estimated point cloud approximates one centimeter for an area covering approximately 4000 m². To prove the genericity of the system, it was tested on the well-known KITTI vision benchmark. The results show that our approach competes with state-of-the-art methods without making any additional assumptions. PMID:27854315

  12. Low Cost Multi-Sensor Robot Laser Scanning System and its Accuracy Investigations for Indoor Mapping Application

    NASA Astrophysics Data System (ADS)

    Chen, C.; Zou, X.; Tian, M.; Li, J.; Wu, W.; Song, Y.; Dai, W.; Yang, B.

    2017-11-01

    In order to automate the task of 3D indoor mapping, a low-cost multi-sensor robot laser scanning system is proposed in this paper. The system includes a panorama camera, a laser scanner, and an inertial measurement unit, among other sensors, which are calibrated and synchronized to achieve simultaneous collection of 3D indoor data. Experiments were undertaken in a typical indoor scene, and the data generated by the proposed system were compared with ground-truth data collected by a TLS scanner, showing an accuracy of 99.2% of points within 0.25 m, which demonstrates the applicability and precision of the system for indoor mapping applications.

  13. Experimental Simulation-Based Performance Evaluation of an SMS-Based Emergency Geolocation Notification System

    PubMed Central

    Osebor, Isibor

    2017-01-01

    In an emergency, a prompt response can save the lives of victims, which makes response time an imperative issue in emergency medical services (EMS). Designing a system that simplifies locating emergency scenes is a step towards improving response time. This paper therefore implemented and evaluated the performance of an SMS-based emergency geolocation notification system, with emphasis on its SMS delivery time and the system's geolocation and dispatch time. Using the RAS metrics recommended by IEEE for evaluation, the designed system was found to be efficient and effective: its reliability ranged from 62.7% to 70.0%, while its availability stood at 99% with a downtime of 3.65 days/year. PMID:29065643

  14. Single-shot thermal ghost imaging using wavelength-division multiplexing

    NASA Astrophysics Data System (ADS)

    Deng, Chao; Suo, Jinli; Wang, Yuwang; Zhang, Zhili; Dai, Qionghai

    2018-01-01

    Ghost imaging (GI) is an emerging technique that reconstructs the target scene from its correlated measurements with a sequence of patterns. Restricted by the multi-shot principle, GI usually requires a long acquisition time and is limited in the observation of dynamic scenes. To address this problem, this paper proposes a single-shot thermal ghost imaging scheme based on a wavelength-division multiplexing technique. Specifically, we generate thousands of correlated patterns simultaneously by modulating a broadband light source with a wavelength-dependent diffuser. These patterns carry the scene's spatial information, and the correlated photons are then coupled into a spectrometer for the final reconstruction. This technique increases the speed of ghost imaging and promotes its application to dynamic scenes, with high scalability and compatibility.
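
    Whatever the pattern source (sequential shots or, as here, wavelength-multiplexed patterns), correlation GI reconstructs the scene as G = <BP> - <B><P>, where B is the bucket (single-pixel) value and P the illumination pattern. A minimal sketch under the assumption of known random patterns:

    ```python
    import numpy as np

    def ghost_reconstruct(patterns, bucket):
        """Correlation GI estimate: G = <B*P> - <B><P> over the ensemble.
        patterns: (K, H, W) illumination patterns; bucket: (K,) detector values."""
        B = bucket - bucket.mean()
        return np.tensordot(B, patterns, axes=(0, 0)) / len(bucket)

    # Toy usage: simulate bucket values for a hidden scene, then reconstruct.
    rng = np.random.default_rng(0)
    scene = np.zeros((32, 32)); scene[10:20, 12:22] = 1.0
    P = rng.random((4000, 32, 32))         # stands in for the wavelength-
    B = (P * scene).sum(axis=(1, 2))       # indexed patterns and bucket values
    G = ghost_reconstruct(P, B)            # correlates with the hidden scene
    ```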

  15. Purification of crime scene DNA extracts using centrifugal filter devices

    PubMed Central

    2013-01-01

    Background The success of forensic DNA analysis is limited by the size, quality and purity of biological evidence found at crime scenes. Sample impurities can inhibit PCR, resulting in partial or negative DNA profiles. Various DNA purification methods are applied to remove impurities, for example, employing centrifugal filter devices. However, irrespective of method, DNA purification leads to DNA loss. Here we evaluate the filter devices Amicon Ultra 30 K and Microsep 30 K with respect to recovery rate and general performance for various types of PCR-inhibitory crime scene samples. Methods Recovery rates for DNA purification using Amicon Ultra 30 K and Microsep 30 K were gathered using quantitative PCR. Mock crime scene DNA extracts were analyzed using quantitative PCR and short tandem repeat (STR) profiling to test the general performance and inhibitor-removal properties of the two filter devices. Additionally, the outcome of long-term routine casework DNA analysis applying each of the devices was evaluated. Results Applying Microsep 30 K, 14 to 32% of the input DNA was recovered, whereas Amicon Ultra 30 K retained 62 to 70% of the DNA. The improved purity following filter purification counteracted some of this DNA loss, leading to slightly increased electropherogram peak heights for blood on denim (Amicon Ultra 30 K and Microsep 30 K) and saliva on envelope (Amicon Ultra 30 K). Comparing Amicon Ultra 30 K and Microsep 30 K for purification of DNA extracts from mock crime scene samples, the former generated significantly higher peak heights for rape case samples (P-values <0.01) and for hairs (P-values <0.036). In long-term routine use of the two filter devices, DNA extracts purified with Amicon Ultra 30 K were considerably less PCR-inhibitory in Quantifiler Human qPCR analysis compared to Microsep 30 K. Conclusions Amicon Ultra 30 K performed better than Microsep 30 K due to higher DNA recovery and more efficient removal of PCR-inhibitory substances. The different performances of the filter devices are likely caused by the quality of the filters and plasticware, for example, their DNA binding properties. DNA purification using centrifugal filter devices can be necessary for successful DNA profiling of impure crime scene samples and for consistency between different PCR-based analysis systems, such as quantification and STR analysis. To maximize the chance of obtaining complete STR DNA profiles and to create an efficient workflow, the level of DNA purification applied should be matched to the inhibitor tolerance of the STR analysis system used. PMID:23618387

  16. Lossless Compression of Classification-Map Data

    NASA Technical Reports Server (NTRS)

    Hua, Xie; Klimesh, Matthew

    2009-01-01

    A lossless image-data-compression algorithm intended specifically for application to classification-map data is based on prediction, context modeling, and entropy coding. The algorithm was formulated, in consideration of the differences between classification maps and ordinary images of natural scenes, so as to be capable of compressing classification-map data more effectively than do general-purpose image-data-compression algorithms. Classification maps are typically generated from remote-sensing images acquired by instruments aboard aircraft and spacecraft. A classification map is a synthetic image that summarizes information derived from one or more original remote-sensing image(s) of a scene. The value assigned to each pixel in such a map is the index of a class that represents some type of content deduced from the original image data: for example, a type of vegetation, a mineral, or a body of water at the corresponding location in the scene. When classification maps are generated onboard the aircraft or spacecraft, it is desirable to compress the classification-map data in order to reduce the volume of data that must be transmitted to a ground station.
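
    As a rough illustration of why context modeling pays off for class maps (the algorithm's actual predictor and entropy coder are more elaborate than this), the sketch below estimates the ideal arithmetic-coding cost of a map under a simple adaptive model conditioned on the left and above labels; the 256-class alphabet size is an assumption.

    ```python
    import numpy as np
    from collections import defaultdict

    def context_code_length(cmap, n_classes=256):
        """Estimate bits to encode a class map with an adaptive model:
        each pixel is coded conditioned on its (left, above) class labels."""
        H, W = cmap.shape
        counts = defaultdict(lambda: defaultdict(int))
        bits = 0.0
        for i in range(H):
            for j in range(W):
                ctx = (cmap[i, j - 1] if j else -1,
                       cmap[i - 1, j] if i else -1)
                c = counts[ctx]
                total = sum(c.values())
                # Laplace-smoothed probability of this symbol in this context.
                p = (c[cmap[i, j]] + 1) / (total + n_classes)
                bits += -np.log2(p)        # ideal arithmetic-coding cost
                c[cmap[i, j]] += 1         # adaptive model update
        return bits
    ```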

  17. Neural Correlates of Fixation Duration during Real-world Scene Viewing: Evidence from Fixation-related (FIRE) fMRI.

    PubMed

    Henderson, John M; Choi, Wonil

    2015-06-01

    During active scene perception, our eyes move from one location to another via saccadic eye movements, with the eyes fixating objects and scene elements for varying amounts of time. Much of the variability in fixation duration is accounted for by attentional, perceptual, and cognitive processes associated with scene analysis and comprehension. For this reason, current theories of active scene viewing attempt to account for the influence of attention and cognition on fixation duration. Yet almost nothing is known about the neurocognitive systems associated with variation in fixation duration during scene viewing. We addressed this topic using fixation-related fMRI, which involves coregistering high-resolution eye tracking and magnetic resonance scanning to conduct event-related fMRI analysis based on characteristics of eye movements. We observed that activation in visual and prefrontal executive control areas was positively correlated with fixation duration, whereas activation in ventral areas associated with scene encoding and medial superior frontal and paracentral regions associated with changing action plans was negatively correlated with fixation duration. The results suggest that fixation duration in scene viewing is controlled by cognitive processes associated with real-time scene analysis interacting with motor planning, consistent with current computational models of active vision for scene perception.

  18. SENSOR: a tool for the simulation of hyperspectral remote sensing systems

    NASA Astrophysics Data System (ADS)

    Börner, Anko; Wiest, Lorenz; Keller, Peter; Reulke, Ralf; Richter, Rolf; Schaepman, Michael; Schläpfer, Daniel

    The consistent end-to-end simulation of airborne and spaceborne earth remote sensing systems is an important task, and sometimes the only way for the adaptation and optimisation of a sensor and its observation conditions, the choice and test of algorithms for data processing, error estimation and the evaluation of the capabilities of the whole sensor system. The presented software simulator SENSOR (Software Environment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray-tracing algorithm. The second part of the simulation environment considers the radiometry. It calculates the at-sensor radiance using a pre-calculated multidimensional lookup-table taking the atmospheric influence on the radiation into account. The third part consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimisation requires the additional application of task-specific data processing algorithms. The principle of the end-to-end-simulation approach is explained, all relevant concepts of SENSOR are discussed, and first examples of its use are given. The verification of SENSOR is demonstrated. This work is closely related to the Airborne PRISM Experiment (APEX), an airborne imaging spectrometer funded by the European Space Agency.
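
    SENSOR's radiometric part interpolates a precomputed multidimensional lookup table; the sketch below shows only a simplified form of the underlying relation for a Lambertian surface under a flat-terrain assumption, not the simulator's actual interface.

    ```python
    import numpy as np

    def at_sensor_radiance(rho, E_sun, cos_sza, tau_up, L_path):
        """Simplified at-sensor radiance for a Lambertian surface:
        ground-reflected solar irradiance, attenuated on the upward path,
        plus atmospheric path radiance.  All quantities are per band."""
        L_ground = rho * E_sun * cos_sza / np.pi   # reflected radiance
        return tau_up * L_ground + L_path
    ```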

  19. Recognition of 3-D Scene with Partially Occluded Objects

    NASA Astrophysics Data System (ADS)

    Lu, Siwei; Wong, Andrew K. C.

    1987-03-01

    This paper presents a robot vision system which is capable of recognizing objects in a 3-D scene and interpreting their spatial relation even though some objects in the scene may be partially occluded by other objects. An algorithm is developed to transform the geometric information from the range data into an attributed hypergraph representation (AHR). A hypergraph monomorphism algorithm is then used to compare the AHR of objects in the scene with a set of complete AHRs of prototypes. The capability of identifying connected components and interpreting various types of edges in the 3-D scene enables us to distinguish objects which are partially blocking each other in the scene. Using structural information stored in the primitive area graph, a heuristic hypergraph monomorphism algorithm provides an effective way of recognizing, locating, and interpreting partially occluded objects in the range image.

  20. Generating Text from Functional Brain Images

    PubMed Central

    Pereira, Francisco; Detre, Greg; Botvinick, Matthew

    2011-01-01

    Recent work has shown that it is possible to take brain images acquired during viewing of a scene and reconstruct an approximation of the scene from those images. Here we show that it is also possible to generate text about the mental content reflected in brain images. We began with images collected as participants read names of concrete items (e.g., "Apartment") while also seeing line drawings of the item named. We built a model of the mental semantic representation of concrete concepts from text data and learned to map aspects of such representation to patterns of activation in the corresponding brain image. In order to validate this mapping, without accessing information about the items viewed for left-out individual brain images, we were able to generate from each one a collection of semantically pertinent words (e.g., "door," "window" for "Apartment"). Furthermore, we show that the ability to generate such words allows us to perform a classification task and thus validate our method quantitatively. PMID:21927602

  1. Camera pose estimation for augmented reality in a small indoor dynamic scene

    NASA Astrophysics Data System (ADS)

    Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad

    2017-09-01

    Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degree-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about scene structure, and they may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise-planar SLAM method based on the planar scene assumption. Using planar structures in the mapping process allows virtual objects to be rendered in a meaningful way on the one hand, and on the other improves the precision of the camera pose and the quality of the 3-D reconstruction of the environment by adding constraints on 3-D points and poses in the optimization process. We propose to exploit the rigid motion of 3-D planes in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.

  2. Large-scale building scenes reconstruction from close-range images based on line and plane feature

    NASA Astrophysics Data System (ADS)

    Ding, Yi; Zhang, Jianqing

    2007-11-01

    Automatically generating 3D models of buildings and other man-made structures from images has become a topic of increasing importance; such models may be used in applications such as virtual reality, the entertainment industry and urban planning. In this paper we address the main problems of, and available solutions for, the generation of 3D models from terrestrial images. We first generate a coarse planar model of the principal scene planes and then reconstruct windows to refine the building models. There are several points of novelty: first, we reconstruct the coarse wire-frame model using line-segment matching under the epipolar geometry constraint; secondly, we detect the position of all windows in the image and reconstruct the windows by establishing corner-point correspondences between images, then add the windows to the coarse model to refine the building models. The strategy is illustrated on an image triple of a college building.

  3. Multispectral system analysis through modeling and simulation

    NASA Technical Reports Server (NTRS)

    Malila, W. A.; Gleason, J. M.; Cicone, R. C.

    1977-01-01

    The design and development of multispectral remote sensor systems and associated information extraction techniques should be optimized under the physical and economic constraints encountered and yet be effective over a wide range of scene and environmental conditions. Direct measurement of the full range of conditions to be encountered can be difficult, time consuming, and costly. Simulation of multispectral data by modeling scene, atmosphere, sensor, and data classifier characteristics is set forth as a viable alternative, particularly when coupled with limited sets of empirical measurements. A multispectral system modeling capability is described. Use of the model is illustrated for several applications - interpretation of remotely sensed data from agricultural and forest scenes, evaluating atmospheric effects in Landsat data, examining system design and operational configuration, and development of information extraction techniques.

  5. Using Deep Learning Algorithm to Enhance Image-review Software for Surveillance Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Yonggang; Thomas, Maikael A.

    We propose the development of proven deep learning algorithms to flag objects and events of interest in Next Generation Surveillance System (NGSS) surveillance in order to make IAEA image review more efficient. Video surveillance is one of the core monitoring technologies used by the IAEA Department of Safeguards when implementing safeguards at nuclear facilities worldwide. The current image review software, GARS, has limited automated functions, such as scene-change detection, black-image detection and missing-scene analysis, but struggles with highly cluttered backgrounds. A cutting-edge algorithm to be developed in this project will enable efficient and effective searches in images and video streams by identifying and tracking safeguards-relevant objects and detecting anomalies in their vicinity. In this project, we will develop the algorithm, test it with IAEA surveillance cameras and data sets collected at simulated nuclear facilities at BNL and SNL, and implement it in a software program for potential integration into the IAEA's IRAP (Integrated Review and Analysis Program).

  6. High dynamic range image acquisition based on multiplex cameras

    NASA Astrophysics Data System (ADS)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

    High-dynamic-range imaging is an important photoelectric information acquisition technology: it provides a higher dynamic range and more image detail, and it can better reflect real-world environment, light and color information. Currently, methods that synthesize a high-dynamic-range image from a sequence of differently exposed images cannot adapt to dynamic scenes; they fail to handle moving targets, resulting in ghosting artifacts. Therefore, a new high-dynamic-range image acquisition method based on a multiplex camera system is proposed. Firstly, differently exposed image sequences are captured with the camera array, the deviation between images is computed using a derivative optical-flow method based on color gradients, and the images are aligned. Then, the high-dynamic-range image fusion weighting function is established by combining the inverse camera response function with the deviation between images, and is applied to generate a high-dynamic-range image. The experiments show that the proposed method can effectively obtain high-dynamic-range images of dynamic scenes and achieves good results.
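
    After alignment, the fusion step amounts to a weighted per-pixel radiance estimate across exposures. The sketch below assumes a linear camera response and pre-aligned frames; the hat-shaped weight is a common generic choice, not necessarily the weighting function derived in the paper.

    ```python
    import numpy as np

    def merge_hdr(images, exposures):
        """Weighted HDR merge of pre-aligned frames (linear response assumed).
        images: (K, H, W) float arrays in [0, 1]; exposures: (K,) times in s."""
        acc = np.zeros(images[0].shape)
        wsum = np.zeros(images[0].shape)
        for img, t in zip(images, exposures):
            w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: trust mid-tones
            acc += w * (img / t)                # per-pixel radiance estimate
            wsum += w
        return acc / np.maximum(wsum, 1e-6)
    ```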

  7. Pursuit Eye Movements

    NASA Technical Reports Server (NTRS)

    Krauzlis, Rich; Stone, Leland; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    When viewing objects, primates use a combination of saccadic and pursuit eye movements to stabilize the retinal image of the object of regard within the high-acuity region near the fovea. Although these movements involve widespread regions of the nervous system, they mix seamlessly in normal behavior. Saccades are discrete movements that quickly direct the eyes toward a visual target, thereby translating the image of the target from an eccentric retinal location to the fovea. In contrast, pursuit is a continuous movement that slowly rotates the eyes to compensate for the motion of the visual target, minimizing the blur that can compromise visual acuity. While other mammalian species can generate smooth optokinetic eye movements - which track the motion of the entire visual surround - only primates can smoothly pursue a single small element within a complex visual scene, regardless of the motion elsewhere on the retina. This ability likely reflects the greater ability of primates to segment the visual scene, to identify individual visual objects, and to select a target of interest.

  8. Space-time light field rendering.

    PubMed

    Wang, Huamin; Sun, Mingxuan; Yang, Ruigang

    2007-01-01

    In this paper, we propose a novel framework called space-time light field rendering, which allows continuous exploration of a dynamic scene in both space and time. Compared to existing light field capture/rendering systems, it offers the capability of using unsynchronized video inputs and the added freedom of controlling the visualization in the temporal domain, such as smooth slow motion and temporal integration. In order to synthesize novel views from any viewpoint at any time instant, we develop a two-stage rendering algorithm. We first interpolate in the temporal domain to generate globally synchronized images using a robust spatial-temporal image registration algorithm followed by edge-preserving image morphing. We then interpolate these software-synchronized images in the spatial domain to synthesize the final view. In addition, we introduce a very accurate and robust algorithm to estimate subframe temporal offsets among input video sequences. Experimental results from unsynchronized videos with or without time stamps show that our approach is capable of maintaining photorealistic quality from a variety of real scenes.

  9. Estimating the number of people in crowded scenes

    NASA Astrophysics Data System (ADS)

    Kim, Minjin; Kim, Wonjun; Kim, Changick

    2011-01-01

    This paper presents a method to estimate the number of people in crowded scenes without using explicit object segmentation or tracking. The proposed method consists of three steps: (1) extracting space-time interest points using eigenvalues of the local spatio-temporal gradient matrix, (2) generating crowd regions based on the space-time interest points, and (3) estimating the crowd density by multiple regression. Experimental results on the PETS 2009 dataset demonstrate the efficiency and robustness of the proposed method.
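
    Step (1) can be sketched as follows: form the local 3 x 3 spatio-temporal structure tensor from the video gradients and keep locations where its smallest eigenvalue is large (a Shi-Tomasi-style criterion extended to time). The smoothing scale and threshold below are illustrative assumptions, and the dense tensor array favors clarity over memory efficiency.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def st_interest_points(video, sigma=2.0, thresh=1e-4):
        """video: (T, H, W) float array.  Returns a boolean map marking
        space-time interest points, where the smallest eigenvalue of the
        local spatio-temporal gradient matrix is large (corner-like in
        x, y and t)."""
        gt, gy, gx = np.gradient(video)
        grads = (gx, gy, gt)
        # Second-moment matrix entries, averaged over a Gaussian window.
        J = np.empty(video.shape + (3, 3))
        for a in range(3):
            for b in range(3):
                J[..., a, b] = gaussian_filter(grads[a] * grads[b], sigma)
        lam_min = np.linalg.eigvalsh(J)[..., 0]   # smallest eigenvalue
        return lam_min > thresh
    ```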

  10. Learning a generative model of images by factoring appearance and shape.

    PubMed

    Le Roux, Nicolas; Heess, Nicolas; Shotton, Jamie; Winn, John

    2011-03-01

    Computer vision has grown tremendously in the past two decades. Despite all efforts, existing attempts at matching parts of the human visual system's extraordinary ability to understand visual scenes lack either scope or power. By combining the advantages of general low-level generative models and powerful layer-based and hierarchical models, this work aims at being a first step toward richer, more flexible models of images. After comparing various types of restricted Boltzmann machines (RBMs) able to model continuous-valued data, we introduce our basic model, the masked RBM, which explicitly models occlusion boundaries in image patches by factoring the appearance of any patch region from its shape. We then propose a generative model of larger images using a field of such RBMs. Finally, we discuss how masked RBMs could be stacked to form a deep model able to generate more complicated structures and suitable for various tasks such as segmentation or object recognition.

  11. [Preliminary construction of three-dimensional visual educational system for clinical dentistry based on world wide web webpage].

    PubMed

    Hu, Jian; Xu, Xiang-yang; Song, En-min; Tan, Hong-bao; Wang, Yi-ning

    2009-09-01

    To establish a new virtual-reality visual educational system for clinical dentistry based on World Wide Web (WWW) webpages, in order to provide more three-dimensional multimedia resources to dental students and an online three-dimensional consulting system for patients. Based on computer graphics and three-dimensional webpage technologies, the software packages 3ds Max and Webmax were adopted for system development. In the Windows environment, the architecture of the whole system was established step by step, including three-dimensional model construction, three-dimensional scene setup, transplanting the three-dimensional scene into a webpage, re-editing the virtual scene, realization of interactions within the webpage, initial testing, and necessary adjustment. Five cases of three-dimensional interactive webpages for clinical dentistry were completed. The three-dimensional interactive webpages could be accessed through a web browser on a personal computer, and users could interact with them by rotating, panning and zooming the virtual scene. It is technically feasible to implement a virtual-reality visual educational system for clinical dentistry based on WWW webpages. Information related to clinical dentistry can be transmitted properly, visually and interactively through three-dimensional webpages.

  12. Rangeland Brush Estimation Toolbox (RaBET): An Approach for Evaluating Brush Management Conservation Efforts in Western Grazing Lands

    NASA Astrophysics Data System (ADS)

    Holifield Collins, C.; Kautz, M. A.; Skirvin, S. M.; Metz, L. J.

    2016-12-01

    There are over 180 million hectares of rangelands and grazed forests in the central and western United States. Due to the loss of perennial grasses and subsequent increased runoff and erosion that can degrade the system, woody cover species cannot be allowed to proliferate unchecked. The USDA-Natural Resources Conservation Service (NRCS) has allocated extensive resources to employ brush management (removal) as a conservation practice to control woody species encroachment. The Rangeland-Conservation Effects Assessment Project (CEAP) has been tasked with determining how effective the practice has been, however their land managers lack a cost-effective means to conduct these assessments at the necessary scale. An ArcGIS toolbox for generating large-scale, Landsat-based, spatial maps of woody cover on grazing lands in the western United States was developed through a collaboration with NRCS Rangeland-CEAP. The toolbox contains two main components of operation, image generation and temporal analysis, and utilizes simple interfaces requiring minimum user inputs. The image generation tool utilizes geographically specific algorithms developed from combining moderate-resolution (30-m) Landsat imagery and high-resolution (1-m) National Agricultural Imagery Program (NAIP) aerial photography to produce the woody cover scenes at the Major Land Resource (MLRA) scale. The temporal analysis tool can be used on these scenes to assess treatment effectiveness and monitor woody cover reemergence. RaBET provides rangeland managers an operational, inexpensive decision support tool to aid in the application of brush removal treatments and assessing their effectiveness.

  13. Multi-modal cockpit interface for improved airport surface operations

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J. (Inventor); Bailey, Randall E. (Inventor); Prinzel, III, Lawrence J. (Inventor); Kramer, Lynda J. (Inventor); Williams, Steven P. (Inventor)

    2010-01-01

    A system for multi-modal cockpit interface during surface operation of an aircraft comprises a head tracking device, a processing element, and a full-color head worn display. The processing element is configured to receive head position information from the head tracking device, to receive current location information of the aircraft, and to render a virtual airport scene corresponding to the head position information and the current aircraft location. The full-color head worn display is configured to receive the virtual airport scene from the processing element and to display the virtual airport scene. The current location information may be received from one of a global positioning system or an inertial navigation system.

  14. Does object view influence the scene consistency effect?

    PubMed

    Sastyin, Gergo; Niimi, Ryosuke; Yokosawa, Kazuhiko

    2015-04-01

    Traditional research on the scene consistency effect only used clearly recognizable object stimuli to show mutually interactive context effects for both the object and background components on scene perception (Davenport & Potter in Psychological Science, 15, 559-564, 2004). However, in real environments, objects are viewed from multiple viewpoints, including an accidental, hard-to-recognize one. When the observers named target objects in scenes (Experiments 1a and 1b, object recognition task), we replicated the scene consistency effect (i.e., there was higher accuracy for the objects with consistent backgrounds). However, there was a significant interaction effect between consistency and object viewpoint, which indicated that the scene consistency effect was more important for identifying objects in the accidental view condition than in the canonical view condition. Therefore, the object recognition system may rely more on the scene context when the object is difficult to recognize. In Experiment 2, the observers identified the background (background recognition task) while the scene consistency and object views were manipulated. The results showed that object viewpoint had no effect, while the scene consistency effect was observed. More specifically, the canonical and accidental views both equally provided contextual information for scene perception. These findings suggested that the mechanism for conscious recognition of objects could be dissociated from the mechanism for visual analysis of object images that were part of a scene. The "context" that the object images provided may have been derived from its view-invariant, relatively low-level visual features (e.g., color), rather than its semantic information.

  15. Hybrid-mode read-in integrated circuit for infrared scene projectors

    NASA Astrophysics Data System (ADS)

    Cho, Min Ji; Shin, Uisub; Lee, Hee Chul

    2017-05-01

    The infrared scene projector (IRSP) is a tool for evaluating infrared sensors by producing infrared images. Because sensor testing with IRSPs is safer than field testing, the usefulness of IRSPs is widely recognized at present. The important performance characteristics of IRSPs are the thermal resolution and the thermal dynamic range. However, due to an existing trade-off between these requirements, it is often difficult to find a workable balance between them. The conventional read-in integrated circuit (RIIC) can be classified into two types: voltage-mode and current-mode types. An IR emitter driven by a voltage-mode RIIC offers a fine thermal resolution. On the other hand, an emitter driven by the current-mode RIIC has the advantage of a wide thermal dynamic range. In order to provide various scenes, i.e., from high-resolution scenes to high-temperature scenes, both of the aforementioned advantages are required. In this paper, a hybrid-mode RIIC which is selectively operated in two modes is proposed. The mode-selective characteristic of the proposed RIIC allows users to generate high-fidelity scenes regardless of the scene content. A prototype of the hybrid-mode RIIC was fabricated using a 0.18-μm 1-poly 6-metal CMOS process. The thermal range and the thermal resolution of the IR emitter driven by the proposed circuit were calculated based on measured data. The estimated thermal dynamic range of the current mode was from 261K to 790K, and the estimated thermal resolution of the voltage mode at 300K was 23 mK with a 12-bit gray-scale resolution.

  16. Digital forensics: an analytical crime scene procedure model (ACSPM).

    PubMed

    Bulbul, Halil Ibrahim; Yavuzcan, H Guclu; Ozel, Mesut

    2013-12-10

    In order to ensure that digital evidence is collected, preserved, examined, or transferred in a manner safeguarding the accuracy and reliability of the evidence, law enforcement and digital forensic units must establish and maintain an effective quality assurance system. The very first part of this system is standard operating procedures (SOPs) and/or models conforming to chain-of-custody requirements, which rely on the digital forensics "process-phase-procedure-task-subtask" sequence. An acceptable and thorough Digital Forensics (DF) process depends on sequential DF phases; each phase depends on sequential DF procedures, and each procedure in turn on tasks and subtasks. Numerous DF process models that define DF phases exist in the literature, but no DF model has been identified that defines the phase-based sequential procedures for the crime scene. The analytical crime scene procedure model (ACSPM) that we suggest in this paper is intended to fill this gap. The proposed analytical procedure model for digital investigations at a crime scene is developed and defined for crime scene practitioners, with the main focus on crime scene digital forensic procedures rather than on the whole digital investigation process and phases that end up in court. When reviewing the relevant literature and consulting law enforcement agencies, we found only device-based charts specific to a particular device and/or more general approaches to digital evidence management models spanning crime scene to court. After analyzing the needs of law enforcement organizations and recognizing the absence of a crime scene digital investigation procedure model, we decided to inspect the relevant literature in an analytical way. The outcome of this inspection is the model explained here, which is intended to provide guidance for the thorough and secure implementation of digital forensic procedures at a crime scene. In digital forensic investigations each case is unique and needs special examination; it is not possible to cover every aspect of crime scene digital forensics, but the proposed procedure model is intended to serve as a general guideline for practitioners.

  17. High spatial sampling light-guide snapshot spectrometer

    PubMed Central

    Wang, Ye; Pawlowski, Michal E.; Tkaczyk, Tomasz S.

    2017-01-01

    A prototype fiber-based imaging spectrometer was developed to provide snapshot hyperspectral imaging tuned for biomedical applications. The system is designed for imaging in the visible spectral range from 400 to 700 nm for compatibility with molecular imaging applications as well as satellite and remote sensing. An 81 × 96 pixel spatial sampling density is achieved by using a custom-made fiber-optic bundle. The design considerations and fabrication aspects of the fiber bundle and imaging spectrometer are described in detail. Through the custom fiber bundle, the image of a scene of interest is collected and divided into discrete spatial groups, with spaces generated in between groups for spectral dispersion. This reorganized image is scaled down by an image taper for compatibility with following optical elements, dispersed by a prism, and is finally acquired by a CCD camera. To obtain an (x, y, λ) datacube from the snapshot measurement, a spectral calibration algorithm is executed for reconstruction of the spatial–spectral signatures of the observed scene. System characterization of throughput, resolution, and crosstalk was performed. Preliminary results illustrating changes in oxygen-saturation in an occluded human finger are presented to demonstrate the system’s capabilities. PMID:29238115

  18. System and method for extracting dominant orientations from a scene

    DOEpatents

    Straub, Julian; Rosman, Guy; Freifeld, Oren; Leonard, John J.; Fisher, John W., III

    2017-05-30

    In one embodiment, a method of identifying the dominant orientations of a scene comprises representing a scene as a plurality of directional vectors. The scene may comprise a three-dimensional representation of a scene, and the plurality of directional vectors may comprise a plurality of surface normals. The method further comprises determining, based on the plurality of directional vectors, a plurality of orientations describing the scene. The determined plurality of orientations explains the directionality of the plurality of directional vectors. In certain embodiments, the plurality of orientations may have independent axes of rotation. The plurality of orientations may be determined by representing the plurality of directional vectors as lying on a mathematical representation of a sphere, and inferring the parameters of a statistical model to adapt the plurality of orientations to explain the positioning of the plurality of directional vectors lying on the mathematical representation of the sphere.

  19. Reach Out and Touch Someone: West Alabama Designs a New Emergency Link.

    ERIC Educational Resources Information Center

    Coogan, Mercy Hardie

    1980-01-01

    Quality on-the-scene emergency care for a rural area is provided by West Alabama's Emergency Medical Services. The success of this delivery system is attributed to a radio/telephone communications system that provides quick, direct contact between paramedics at the scene and medical doctors miles away. (DS)

  20. 15 CFR 743.1 - Wassenaar Arrangement.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...' are defined as “focal plane arrays” designed for use with a scanning optical system that images a scene in a sequential manner to produce an image. 'Staring Arrays' are defined as “focal plane arrays” designed for use with a non-scanning optical system that images a scene. h. Gallium Arsenide or...

  1. Real-time automatic inspection under adverse conditions

    NASA Astrophysics Data System (ADS)

    Carvalho, Fernando D.; Correia, Fernando C.; Freitas, Jose C. A.; Rodrigues, Fernando C.

    1991-03-01

    This paper presents the results of an R&D program, supported by a grant from the Ministry of Defense, devoted to the development of an intelligent camera for surveillance in the open air. The effects of shadows, clouds and wind were problems to be solved without generating false alarm events. The system is based on a video CCD camera which generates a CCIR video signal. The signal is then processed by modular hardware which detects changes in the scene and processes the image in order to enhance the intruder's image and path. Windows may be defined over the image in order to increase the information obtained about the intruder, and a first approach to classifying the type of intruder may be achieved. The paper describes the hardware used in the system, the software used for the installation of the camera, and the software developed for the microprocessor which is responsible for generating the alarm signals. The paper also presents some results of surveillance tasks in the open air executed by the system with real-time performance.
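
    The core difficulty named above, detecting intruders without alarming on shadows, clouds and wind, is classically handled with an adaptive background model: slow global changes are absorbed into the background while fast local changes raise an alarm. A minimal sketch (parameters hypothetical, not the paper's hardware implementation):

    ```python
    import numpy as np

    class ChangeDetector:
        """Running-average background model: slow illumination changes are
        absorbed into the background; fast local changes are flagged."""
        def __init__(self, first_frame, alpha=0.02, thresh=25.0):
            self.bg = first_frame.astype(float)
            self.alpha = alpha        # background adaptation rate
            self.thresh = thresh      # per-pixel difference threshold

        def update(self, frame):
            frame = frame.astype(float)
            mask = np.abs(frame - self.bg) > self.thresh
            # Adapt the background everywhere except where motion was
            # detected, so a slow intruder is not absorbed into the model.
            self.bg = np.where(mask, self.bg,
                               (1 - self.alpha) * self.bg + self.alpha * frame)
            return mask
    ```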

  2. A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model

    PubMed Central

    Lin, Qing; Han, Youngjoon

    2014-01-01

    A wearable guidance system is designed to provide context-dependent guidance messages to blind people while they traverse local pathways. The system is composed of three parts: moving scene analysis, walking context estimation and audio message delivery. The combination of a downward-pointing laser scanner and a camera is used to solve the challenging problem of moving scene analysis. By integrating laser data profiles and image edge profiles, a multimodal profile model is constructed to estimate jointly the ground plane, object locations and object types, by using a Bayesian network. The outputs of the moving scene analysis are further employed to estimate the walking context, which is defined as a fuzzy safety level that is inferred through a fuzzy logic model. Depending on the estimated walking context, the audio messages that best suit the current context are delivered to the user in a flexible manner. The proposed system is tested under various local pathway scenes, and the results confirm its efficiency in assisting blind people to attain autonomous mobility. PMID:25302812

  3. Application of Composite Small Calibration Objects in Traffic Accident Scene Photogrammetry

    PubMed Central

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies. PMID:26011052

  4. Illumination discrimination in real and simulated scenes

    PubMed Central

    Radonjić, Ana; Pearce, Bradley; Aston, Stacey; Krieger, Avery; Dubin, Hilary; Cottaris, Nicolas P.; Brainard, David H.; Hurlbert, Anya C.

    2016-01-01

    Characterizing humans' ability to discriminate changes in illumination provides information about the visual system's representation of the distal stimulus. We have previously shown that humans are able to discriminate illumination changes and that sensitivity to such changes depends on their chromatic direction. Probing illumination discrimination further would be facilitated by the use of computer-graphics simulations, which would, in practice, enable a wider range of stimulus manipulations. There is no a priori guarantee, however, that results obtained with simulated scenes generalize to real illuminated scenes. To investigate this question, we measured illumination discrimination in real and simulated scenes that were well-matched in mean chromaticity and scene geometry. Illumination discrimination thresholds were essentially identical for the two stimulus types. As in our previous work, these thresholds varied with illumination change direction. We exploited the flexibility offered by the use of graphics simulations to investigate whether the differences across direction are preserved when the surfaces in the scene are varied. We show that varying the scene's surface ensemble in a manner that also changes mean scene chromaticity modulates the relative sensitivity to illumination changes along different chromatic directions. Thus, any characterization of sensitivity to changes in illumination must be defined relative to the set of surfaces in the scene. PMID:28558392

  5. Modeling and performance assessment in QinetiQ of EO and IR airborne reconnaissance systems

    NASA Astrophysics Data System (ADS)

    Williams, John W.; Potter, Gary E.

    2002-11-01

    QinetiQ are the technical authority responsible for specifying the performance requirements for the procurement of airborne reconnaissance systems, on behalf of the UK MoD. They are also responsible for acceptance of delivered systems, overseeing and verifying the installed system performance as predicted and then assessed by the contractor. Measures of functional capability are central to these activities. The conduct of these activities utilises the broad technical insight and wide range of analysis tools and models available within QinetiQ. This paper focuses on the tools, methods and models that are applicable to systems based on EO and IR sensors. The tools, methods and models are described, and representative output for systems that QinetiQ has been responsible for is presented. The principal capability applicable to EO and IR airborne reconnaissance systems is the STAR (Simulation Tools for Airborne Reconnaissance) suite of models. STAR generates predictions of performance measures such as GRD (Ground Resolved Distance) and GIQE (General Image Quality Equation) NIIRS (National Imagery Interpretability Rating Scale). It also generates images representing sensor output, using the scene generation software CAMEO-SIM and the imaging sensor model EMERALD. The simulated image 'quality' is fully correlated with the predicted non-imaging performance measures. STAR also generates image and table data that is compliant with STANAG 7023, which may be used to test ground station functionality.
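
    For reference, the version-4 General Image Quality Equation as commonly quoted in the literature (Leachtenauer et al., 1997) is sketched below. The coefficient values are the published ones as best known, but they should be verified against the GIQE source before use; this is not QinetiQ's STAR implementation.

    ```python
    import math

    def giqe4_niirs(gsd_inches, rer, h_overshoot, g_noise_gain, snr):
        """GIQE version 4 as commonly quoted:
        NIIRS = 10.251 - a*log10(GSD) + b*log10(RER) - 0.656*H - 0.344*G/SNR,
        with GSD in inches; (a, b) switch on the relative edge response RER."""
        a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
        return (10.251 - a * math.log10(gsd_inches) + b * math.log10(rer)
                - 0.656 * h_overshoot - 0.344 * g_noise_gain / snr)
    ```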

  6. Scheduling time-critical graphics on multiple processors

    NASA Technical Reports Server (NTRS)

    Meyer, Tom W.; Hughes, John F.

    1995-01-01

    This paper describes an algorithm for the scheduling of time-critical rendering and computation tasks on single- and multiple-processor architectures, with minimal pipelining. It was developed to manage scientific visualization scenes consisting of hundreds of objects, each of which can be computed and displayed at thousands of possible resolution levels. The algorithm generates the time-critical schedule using progressive-refinement techniques; it always returns a feasible schedule and, when allowed to run to completion, produces a near-optimal schedule which takes advantage of almost the entire multiple-processor system.

  7. Methods for destriping Landsat Thematic Mapper images - A feasibility study for an online destriping process in the Thematic Mapper Image Processing System (TIPS)

    NASA Technical Reports Server (NTRS)

    Poros, D. J.; Peterson, C. J.

    1985-01-01

    Methods for destriping TM images and results of the application of these methods to selected TM scenes with sensor and scan striping, which was not removed by the radiometric correction during the TM Archive Generation Phase in TIPS, are presented. These methods correct only for gain and offset differences between detectors over many image lines and do not consider within-line effects. The feasibility of implementing a destriping process online in TIPS is also described.
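
    A minimal sketch of the gain/offset idea for TM's 16-detectors-per-band geometry is moment matching: rescale the lines seen by each detector so their mean and standard deviation match the whole-image statistics. This is a generic baseline consistent with, but not identical to, the methods evaluated in the paper; the line-to-detector assignment below is an assumption.

    ```python
    import numpy as np

    def destripe(band, n_detectors=16):
        """Moment matching: adjust each detector's gain and offset so the
        lines it produced match the whole-image mean and std.  Assumes
        image line i was scanned by detector (i mod n_detectors)."""
        out = band.astype(float).copy()
        mu_all, sd_all = out.mean(), out.std()
        for d in range(n_detectors):
            rows = out[d::n_detectors]         # all lines from one detector
            mu, sd = rows.mean(), rows.std()
            gain = sd_all / sd if sd > 0 else 1.0
            out[d::n_detectors] = (rows - mu) * gain + mu_all
        return out
    ```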

  8. Signature modelling and radiometric rendering equations in infrared scene simulation systems

    NASA Astrophysics Data System (ADS)

    Willers, Cornelius J.; Willers, Maria S.; Lapierre, Fabian

    2011-11-01

    The development and optimisation of modern infrared systems necessitates the use of simulation systems to create radiometrically realistic representations (e.g. images) of infrared scenes. Such simulation systems are used in signature prediction, the development of surveillance and missile sensors, signal/image processing algorithm development and aircraft self-protection countermeasure system development and evaluation. Even the most cursory investigation reveals a multitude of factors affecting the infrared signatures of realworld objects. Factors such as spectral emissivity, spatial/volumetric radiance distribution, specular reflection, reflected direct sunlight, reflected ambient light, atmospheric degradation and more, all affect the presentation of an object's instantaneous signature. The signature is furthermore dynamically varying as a result of internal and external influences on the object, resulting from the heat balance comprising insolation, internal heat sources, aerodynamic heating (airborne objects), conduction, convection and radiation. In order to accurately render the object's signature in a computer simulation, the rendering equations must therefore account for all the elements of the signature. In this overview paper, the signature models, rendering equations and application frameworks of three infrared simulation systems are reviewed and compared. The paper first considers the problem of infrared scene simulation in a framework for simulation validation. This approach provides concise definitions and a convenient context for considering signature models and subsequent computer implementation. The primary radiometric requirements for an infrared scene simulator are presented next. The signature models and rendering equations implemented in OSMOSIS (Belgian Royal Military Academy), DIRSIG (Rochester Institute of Technology) and OSSIM (CSIR & Denel Dynamics) are reviewed. In spite of these three simulation systems' different application focus areas, their underlying physics-based approach is similar. The commonalities and differences between the different systems are investigated, in the context of their somewhat different application areas. The application of an infrared scene simulation system towards the development of imaging missiles and missile countermeasures are briefly described. Flowing from the review of the available models and equations, recommendations are made to further enhance and improve the signature models and rendering equations in infrared scene simulators.
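
    The emitted term common to the rendering equations discussed above reduces, for a single opaque surface element, to emissivity-weighted Planck radiance attenuated along the path plus path radiance. A minimal sketch of just that term (the reflected sun, sky and ambient terms of the full signature models are omitted):

    ```python
    import numpy as np

    H = 6.62607015e-34   # Planck constant, J s
    C = 2.99792458e8     # speed of light, m/s
    KB = 1.380649e-23    # Boltzmann constant, J/K

    def planck_radiance(wavelength_m, T):
        """Blackbody spectral radiance L(lambda, T) in W / (m^2 sr m)."""
        x = H * C / (wavelength_m * KB * T)
        return 2.0 * H * C**2 / wavelength_m**5 / np.expm1(x)

    def apparent_radiance(eps, T, tau_path, L_path, wavelength_m):
        """Emitted component of a surface's IR signature at the sensor:
        emissivity-weighted Planck radiance, attenuated by the atmosphere,
        plus path radiance.  Reflected terms are deliberately omitted."""
        return tau_path * eps * planck_radiance(wavelength_m, T) + L_path
    ```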

  9. Error correction for IFSAR

    DOEpatents

    Doerry, Armin W.; Bickel, Douglas L.

    2002-01-01

    IFSAR images of a target scene are generated by compensating for variations in vertical separation between collection surfaces defined for each IFSAR antenna by adjusting the baseline projection during image generation. In addition, height information from all antennas is processed before processing range and azimuth information in a normal fashion to create the IFSAR image.

  10. The vectorization of a ray tracing program for image generation

    NASA Technical Reports Server (NTRS)

    Plunkett, D. J.; Cychosz, J. M.; Bailey, M. J.

    1984-01-01

    Ray tracing is a widely used method for producing realistic computer generated images. Ray tracing involves firing an imaginary ray from a view point, through a point on an image plane, into a three dimensional scene. The intersections of the ray with the objects in the scene determine what is visible at the point on the image plane. This process must be repeated many times, once for each point (commonly called a pixel) in the image plane. A typical image contains more than a million pixels, making this process computationally expensive. A traditional ray tracing program processes one ray at a time. In such a serial approach, as much as ninety percent of the execution time is spent computing the intersection of a ray with the surfaces in the scene. With the CYBER 205, many rays can be intersected with all the bodies in the scene with a single series of vector operations. Vectorization of this intersection process results in large decreases in computation time. The CADLAB's interest in ray tracing stems from the need to produce realistic images of mechanical parts. A high quality image of a part during the design process can increase the productivity of the designer by helping him visualize the results of his work. To be useful in the design process, these images must be produced in a reasonable amount of time. This discussion will explain how the ray tracing process was vectorized and gives examples of the images obtained.
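
    The same idea transfers directly to modern array languages. The sketch below intersects a whole batch of rays with one sphere in a single series of NumPy vector operations, in the spirit of the CYBER 205 approach described above (it is an illustration, not the original code).

    ```python
    import numpy as np

    def ray_sphere_hits(origins, dirs, center, radius):
        """Intersect N rays with one sphere using only vector operations.
        origins, dirs: (N, 3) arrays (dirs unit-length).  Returns the hit
        distance t per ray, or inf where the ray misses the sphere."""
        oc = origins - center
        b = np.einsum('ij,ij->i', dirs, oc)       # quadratic: t^2 + 2bt + c = 0
        c = np.einsum('ij,ij->i', oc, oc) - radius**2
        disc = b * b - c
        t = -b - np.sqrt(np.where(disc >= 0, disc, 0.0))  # nearer root
        return np.where((disc >= 0) & (t > 0), t, np.inf)
    ```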

  11. The roles of scene gist and spatial dependency among objects in the semantic guidance of attention in real-world scenes.

    PubMed

    Wu, Chia-Chien; Wang, Hsueh-Cheng; Pomplun, Marc

    2014-12-01

    A previous study (Vision Research 51 (2011) 1192-1205) found evidence for semantic guidance of visual attention during the inspection of real-world scenes, i.e., an influence of semantic relationships among scene objects on overt shifts of attention. In particular, the results revealed an observer bias toward gaze transitions between semantically similar objects. However, this effect is not necessarily indicative of semantic processing of individual objects but may be mediated by knowledge of the scene gist, which does not require object recognition, or by known spatial dependency among objects. To examine the mechanisms underlying semantic guidance, in the present study, participants were asked to view a series of displays with the scene gist excluded and spatial dependency varied. Our results show that spatial dependency among objects seems to be sufficient to induce semantic guidance. Scene gist, on the other hand, does not seem to affect how observers use semantic information to guide attention while viewing natural scenes. Extracting semantic information mainly based on spatial dependency may be an efficient strategy of the visual system that only adds little cognitive load to the viewing task.

  12. Unconscious analyses of visual scenes based on feature conjunctions.

    PubMed

    Tachibana, Ryosuke; Noguchi, Yasuki

    2015-06-01

    To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain.

  13. Modeling Of Object- And Scene-Prototypes With Hierarchically Structured Classes

    NASA Astrophysics Data System (ADS)

    Ren, Z.; Jensch, P.; Ameling, W.

    1989-03-01

    The success of knowledge-based image analysis methodology and implementation tools depends largely on an appropriately and efficiently built model wherein the domain-specific context information about, and the inherent structure of, the observed image scene have been encoded. For identifying an object in an application environment, a computer vision system needs to know firstly the description of the object to be found in an image or in an image sequence, and secondly the corresponding relationships between object descriptions within the image sequence. This paper presents models of image objects and scenes by means of hierarchically structured classes. Using the topovisual formalism of graphs and higraphs, we are currently studying principally the relational aspect and data abstraction of the modeling, in order to visualize the structural nature resident in image objects and scenes and to formalize their descriptions. The goal is to expose the structure of the image scene and the correspondence of image objects in the low-level image interpretation process. The object-based system design approach has been applied to build the model base. We utilize the object-oriented programming language C++ for designing, testing and implementing the abstracted entity classes and the operation structures which have been modeled topovisually. The reference images used for modeling prototypes of objects and scenes are from industrial environments as well as medical applications.

  14. A system for learning statistical motion patterns.

    PubMed

    Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve

    2006-09-01

    Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.
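
    A minimal sketch of the representation described above, assuming each learned motion pattern is stored as a chain of 2-D Gaussians and that anomaly detection thresholds a per-point log likelihood; the nearest-segment matching rule and the threshold value are illustrative assumptions, not the authors' implementation.

        import numpy as np

        class GaussianChainPattern:
            """A learned motion pattern represented as an ordered chain of
            2-D Gaussians (one per trajectory segment)."""

            def __init__(self, means, covs):
                self.means = np.asarray(means)   # (k, 2) segment centres
                self.covs = np.asarray(covs)     # (k, 2, 2) segment covariances

            def log_likelihood(self, trajectory):
                """Score a trajectory (n, 2) by matching each point to its
                nearest segment Gaussian and summing log densities."""
                total = 0.0
                for p in np.asarray(trajectory):
                    d = self.means - p
                    # Mahalanobis distance to every segment Gaussian
                    m = np.einsum('ki,kij,kj->k', d, np.linalg.inv(self.covs), d)
                    k = np.argmin(m)
                    logdet = np.log(np.linalg.det(2 * np.pi * self.covs[k]))
                    total += -0.5 * (m[k] + logdet)
                return total

        def is_anomalous(patterns, trajectory, threshold=-8.0):
            """Flag a trajectory whose best per-point log likelihood under
            all learned patterns falls below a threshold (illustrative)."""
            n = len(trajectory)
            best = max(p.log_likelihood(trajectory) / n for p in patterns)
            return best < threshold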

  15. Teledyne H1RG, H2RG, and H4RG Noise Generator

    NASA Technical Reports Server (NTRS)

    Rauscher, Bernard J.

    2015-01-01

This paper describes the near-infrared detector system noise generator (NG) that we wrote for the James Webb Space Telescope (JWST) Near Infrared Spectrograph (NIRSpec). NG simulates many important noise components including: (1) white "read noise", (2) residual bias drifts, (3) pink 1/f noise, (4) alternating column noise, and (5) picture frame noise. By adjusting the input parameters, NG can simulate noise for Teledyne's H1RG, H2RG, and H4RG detectors with and without Teledyne's SIDECAR ASIC IR array controller. NG can be used as a starting point for simulating astronomical scenes by adding dark current, scattered light, and astronomical sources into the results from NG. NG is written in Python-3.4.
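
    NG itself is not reproduced here, but two of the listed components are simple to sketch. The following Python fragment, with illustrative parameter names and values (not NG's actual interface), generates white read noise and pink 1/f noise by spectral shaping.

        import numpy as np

        def simulate_readout(ny=2048, nx=2048, read_noise=15.0, pink_amp=5.0,
                             bias=10000.0, seed=0):
            """Sketch of two NG-style components: (1) white read noise and
            (3) pink 1/f noise shaped along the readout direction."""
            rng = np.random.default_rng(seed)
            # (1) white Gaussian read noise, independent per pixel
            frame = bias + rng.normal(0.0, read_noise, size=(ny, nx))
            # (3) pink noise: shape white noise by 1/sqrt(f) in Fourier space
            n = ny * nx
            f = np.fft.rfftfreq(n)
            f[0] = f[1]                      # avoid division by zero at DC
            spectrum = np.fft.rfft(rng.normal(size=n)) / np.sqrt(f)
            pink = np.fft.irfft(spectrum, n).reshape(ny, nx)
            pink *= pink_amp / pink.std()    # scale to requested amplitude
            return frame + pink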

  16. High-dynamic-range scene compression in humans

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2006-02-01

Single pixel dynamic-range compression alters a particular input value to a unique output value - a look-up table. It is used in chemical and most digital photographic systems having S-shaped transforms to render high-range scenes onto low-range media. Post-receptor neural processing is spatial, as shown by the physiological experiments of Dowling, Barlow, Kuffler, and Hubel & Wiesel. Human vision does not render a particular receptor-quanta catch as a unique response. Instead, because of spatial processing, the response to a particular quanta catch can be any color. Visual response is scene dependent. Stockham proposed an approach to model human range compression using low-spatial-frequency filters. Campbell, Ginsberg, Wilson, Watson, Daly and many others have developed spatial-frequency channel models. This paper describes experiments measuring the properties of desirable spatial-frequency filters for a variety of scenes. Given the radiances of each pixel in the scene and the observed appearances of objects in the image, one can calculate the visual mask for that individual image. Here, visual mask is the spatial pattern of changes made by the visual system in processing the input image. It is the spatial signature of human vision. Low-dynamic-range images with many white areas need no spatial filtering. High-dynamic-range images with many blacks, or deep shadows, require strong spatial filtering. Sun on the right and shade on the left requires directional filters. These experiments show that variable, scene-dependent filters are necessary to mimic human vision. Although spatial-frequency filters can model scene-dependent appearances, the problem remains that an analysis of the scene is still needed to calculate the scene-dependent strength of each filter at each frequency.
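
    The measured principle, strong low-spatial-frequency filtering for high-range scenes and little for low-range ones, can be illustrated with a simple base/detail decomposition; 'strength' and 'sigma' below are free parameters, not values from the paper.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def spatial_compression(log_luminance, strength=0.5, sigma=50):
            """Illustrative low-spatial-frequency compression: estimate the
            slowly varying illumination with a Gaussian low-pass and
            attenuate it, leaving local detail intact."""
            low = gaussian_filter(log_luminance, sigma)   # low-frequency base
            detail = log_luminance - low                  # high-frequency detail
            return strength * low + detail                # compress only the base

        # A high-range scene (deep shadows) calls for strong filtering
        # (small 'strength'); a low-range, mostly white scene can use a
        # strength close to 1, i.e. almost no spatial filtering.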

  17. Preparation of pyrolysis reference samples: evaluation of a standard method using a tube furnace.

    PubMed

    Sandercock, P Mark L

    2012-05-01

A new, simple method for the reproducible creation of pyrolysis products from different materials that may be found at a fire scene is described. A temperature-programmable steady-state tube furnace was used to generate pyrolysis products from different substrates, including softwoods, paper, vinyl sheet flooring, and carpet. The temperature profile of the tube furnace was characterized, and the suitability of the method to reproducibly create pyrolysates similar to those found in real fire debris was assessed. The use of this method to create proficiency tests that realistically test an examiner's ability to interpret complex gas chromatograph-mass spectrometric fire debris data, and to create a library of pyrolysates generated from materials commonly found at a fire scene, is demonstrated. © 2011 American Academy of Forensic Sciences.

  18. Cybersickness in the presence of scene rotational movements along different axes.

    PubMed

    Lo, W T; So, R H

    2001-02-01

    Compelling scene movements in a virtual reality (VR) system can cause symptoms of motion sickness (i.e., cybersickness). A within-subject experiment has been conducted to investigate the effects of scene oscillations along different axes on the level of cybersickness. Sixteen male participants were exposed to four 20-min VR simulation sessions. The four sessions used the same virtual environment but with scene oscillations along different axes, i.e., pitch, yaw, roll, or no oscillation (speed: 30 degrees/s, range: +/- 60 degrees). Verbal ratings of the level of nausea were taken at 5-min intervals during the sessions and sickness symptoms were also measured before and after the sessions using the Simulator Sickness Questionnaire (SSQ). In the presence of scene oscillation, both nausea ratings and SSQ scores increased at significantly higher rates than with no oscillation. While individual participants exhibited different susceptibilities to nausea associated with VR simulation containing scene oscillations along different rotational axes, the overall effects of axis among our group of 16 randomly selected participants were not significant. The main effects of, and interactions among, scene oscillation, duration, and participants are discussed in the paper.

  19. Landscape preference assessment of Louisiana river landscapes: a methodological study

    Treesearch

    Michael S. Lee

    1979-01-01

    The study pertains to the development of an assessment system for the analysis of visual preference attributed to Louisiana river landscapes. The assessment system was utilized in the evaluation of 20 Louisiana river scenes. Individuals were tested for their free choice preference for the same scenes. A statistical analysis was conducted to examine the relationship...

  20. Space Launch System Booster Test- Behind the Scenes

    NASA Image and Video Library

    2016-06-24

Get a sneak peek behind the scenes of how engineers and technicians at Orbital ATK in Promontory, Utah, are coming together to test the most powerful booster for NASA's new rocket, the Space Launch System. SLS will make possible missions to an asteroid and the journey to Mars. For more information on SLS, visit www.nasa.gov/sls.

  1. Multi-Sensor Scene Synthesis and Analysis

    DTIC Science & Technology

    1981-09-01

Table-of-contents excerpt: Quad Trees for Image Representation and Processing; 2.6.2 Databases; 2.6.2.1 Definitions and Basic Concepts; 2.6.3 Use of Databases in Hierarchical Scene Analysis; 2.6.4 Use of Relational Tables; 2.7.1 Multisensor Image Database Systems (MIDAS); 2.7.2 Relational Database System for Pictures; 2.7.3 Relational Pictorial Database.

  2. Using GIS databases for simulated nightlight imagery

    NASA Astrophysics Data System (ADS)

    Zollweg, Joshua D.; Gartley, Michael; Roskovensky, John; Mercier, Jeffery

    2012-06-01

Proposed is a new technique for simulating nighttime scenes with realistically modelled urban radiance. While nightlight imagery is commonly used to measure urban sprawl [1], it is uncommon to use urbanization as a metric to develop synthetic nighttime scenes. In the developed methodology, the open-source Open Street Map (OSM) Geographic Information System (GIS) database is used. The database is comprised of many nodes, which are used to define the position of different types of streets, buildings, and other features. These nodes are the driver used to model urban nightlights, given several assumptions. The first assumption is that the spatial distribution of nodes is closely related to the spatial distribution of nightlights. Work by Roychowdhury et al. has demonstrated the relationship between urban lights and development [2]. So the real assumption being made is that the density of nodes corresponds to development, which is reasonable. Secondly, the local density of nodes must relate directly to the upwelled radiance within the given locality. Testing these assumptions using Albuquerque and Indianapolis as example cities revealed that different types of nodes produce more realistic results than others. Residential street nodes offered the best performance of any single node type among the types tested in this investigation. Other node types, however, still provide useful supplementary data. Using streets and buildings defined in the OSM database allowed automated generation of simulated nighttime scenes of Albuquerque and Indianapolis in the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. The simulation was compared to real data from the recently deployed National Polar-orbiting Operational Environmental Satellite System (NPOESS) Visible Infrared Imager Radiometer Suite (VIIRS) platform. As a result of the comparison, correction functions were used to correct for discrepancies between simulated and observed radiance. Future work will include investigating more advanced approaches for mapping the spatial extent of nightlights, based on the distribution of different node types in local neighbourhoods. This will allow the spectral profile of each region to be dynamically adjusted, in addition to simply modifying the magnitude of a single source type.
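
    The paper's first two assumptions reduce to a simple computation: bin node coordinates into a grid and map local density to radiance. The sketch below does exactly that, with an illustrative power law standing in for the paper's empirically fitted correction functions; all parameter names are assumptions.

        import numpy as np

        def nightlight_radiance(node_lon, node_lat, bounds, shape=(512, 512),
                                gain=1.0, gamma=0.5):
            """Map local OSM node density to simulated upwelled radiance."""
            lon_min, lon_max, lat_min, lat_max = bounds
            density, _, _ = np.histogram2d(
                node_lat, node_lon, bins=shape,
                range=[[lat_min, lat_max], [lon_min, lon_max]])
            # The power-law correction stands in for the fitted functions
            # that reconcile simulated radiance with observed VIIRS data.
            return gain * density ** gamma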

  3. Action perception as hypothesis testing.

    PubMed

    Donnarumma, Francesco; Costantini, Marcello; Ambrosini, Ettore; Friston, Karl; Pezzulo, Giovanni

    2017-04-01

We present a novel computational model that describes action perception as an active inferential process that combines motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions and their underlying hypotheses. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction; but uncertainty induces a more reactive gaze strategy, via tracking the observed movements. Our model offers a novel perspective on action observation that highlights its active nature based on prediction dynamics and hypothesis testing. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Using articulated scene models for dynamic 3d scene analysis in vista spaces

    NASA Astrophysics Data System (ADS)

    Beuter, Niklas; Swadzba, Agnes; Kummert, Franz; Wachsmuth, Sven

    2010-09-01

In this paper we describe an efficient but detailed new approach to analyze complex dynamic scenes directly in 3D. The arising information is important for mobile robots to solve tasks in the area of household robotics. In our work a mobile robot builds an articulated scene model by observing the environment in the visual field or rather in the so-called vista space. The articulated scene model consists of essential knowledge about the static background, about autonomously moving entities like humans or robots and finally, in contrast to existing approaches, information about articulated parts. These parts describe movable objects like chairs, doors or other tangible entities, which could be moved by an agent. The combination of the static scene, the self-moving entities and the movable objects in one articulated scene model enhances the calculation of each single part. The reconstruction process for parts of the static scene benefits from removal of the dynamic parts and in turn, the moving parts can be extracted more easily through the knowledge about the background. In our experiments we show, that the system delivers simultaneously an accurate static background model, moving persons and movable objects. This information of the articulated scene model enables a mobile robot to detect and keep track of interaction partners, to navigate safely through the environment and finally, to strengthen the interaction with the user through the knowledge about the 3D articulated objects and 3D scene analysis.

  5. Ghost detection and removal based on super-pixel grouping in exposure fusion

    NASA Astrophysics Data System (ADS)

    Jiang, Shenyu; Xu, Zhihai; Li, Qi; Chen, Yueting; Feng, Huajun

    2014-09-01

A novel multi-exposure image fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging are based on the combination of multiple differently exposed images of the same scene. The drawback of these methods is that ghosting artifacts will be introduced into the final HDR image if the scene is not static. In this paper, a super-pixel grouping based method is proposed to detect ghosting in the image sequences. We introduce the zero mean normalized cross correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. The calculation of ZNCC is implemented at the super-pixel level, and the super-pixels which have low correlation with the reference are excluded by adjusting the weight maps for fusion. Without any prior information on camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images which can be shown directly on conventional display devices with detail preserved and ghosting reduced. Experimental results show that the proposed method generates high quality images which have fewer ghost artifacts and provide a better visual quality than previous approaches.
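
    A minimal sketch of the detection step as the abstract describes it: ZNCC computed per super-pixel between an exposure and the reference, with poorly correlated super-pixels excluded from the fusion weight map. The threshold value and the function layout are illustrative assumptions, not the authors' code.

        import numpy as np

        def zncc(a, b, eps=1e-8):
            """Zero-mean normalized cross correlation of two equally sized
            pixel sets (e.g., one super-pixel in two exposures)."""
            a = a.ravel() - a.mean()
            b = b.ravel() - b.mean()
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

        def ghost_weight_mask(image, reference, labels, threshold=0.8):
            """Zero the fusion weight of super-pixels that correlate
            poorly with the reference exposure."""
            mask = np.ones(labels.shape)
            for sp in np.unique(labels):
                idx = labels == sp
                if zncc(image[idx], reference[idx]) < threshold:
                    mask[idx] = 0.0   # excluded from the fusion weight map
            return mask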

  6. Space Shuttle Columbia views the world with imaging radar: The SIR-A experiment

    NASA Technical Reports Server (NTRS)

    Ford, J. P.; Cimino, J. B.; Elachi, C.

    1983-01-01

    Images acquired by the Shuttle Imaging Radar (SIR-A) in November 1981, demonstrate the capability of this microwave remote sensor system to perceive and map a wide range of different surface features around the Earth. A selection of 60 scenes displays this capability with respect to Earth resources - geology, hydrology, agriculture, forest cover, ocean surface features, and prominent man-made structures. The combined area covered by the scenes presented amounts to about 3% of the total acquired. Most of the SIR-A images are accompanied by a LANDSAT multispectral scanner (MSS) or SEASAT synthetic-aperture radar (SAR) image of the same scene for comparison. Differences between the SIR-A image and its companion LANDSAT or SEASAT image at each scene are related to the characteristics of the respective imaging systems, and to seasonal or other changes that occurred in the time interval between acquisition of the images.

  7. Method and apparatus for coherent imaging of infrared energy

    DOEpatents

    Hutchinson, Donald P.

    1998-01-01

    A coherent camera system performs ranging, spectroscopy, and thermal imaging. Local oscillator radiation is combined with target scene radiation to enable heterodyne detection by the coherent camera's two-dimensional photodetector array. Versatility enables deployment of the system in either a passive mode (where no laser energy is actively transmitted toward the target scene) or an active mode (where a transmitting laser is used to actively illuminate the target scene). The two-dimensional photodetector array eliminates the need to mechanically scan the detector. Each element of the photodetector array produces an intermediate frequency signal that is amplified, filtered, and rectified by the coherent camera's integrated circuitry. By spectroscopic examination of the frequency components of each pixel of the detector array, a high-resolution, three-dimensional or holographic image of the target scene is produced for applications such as air pollution studies, atmospheric disturbance monitoring, and military weapons targeting.

  8. Method and apparatus for coherent imaging of infrared energy

    DOEpatents

    Hutchinson, D.P.

    1998-05-12

A coherent camera system performs ranging, spectroscopy, and thermal imaging. Local oscillator radiation is combined with target scene radiation to enable heterodyne detection by the coherent camera's two-dimensional photodetector array. Versatility enables deployment of the system in either a passive mode (where no laser energy is actively transmitted toward the target scene) or an active mode (where a transmitting laser is used to actively illuminate the target scene). The two-dimensional photodetector array eliminates the need to mechanically scan the detector. Each element of the photodetector array produces an intermediate frequency signal that is amplified, filtered, and rectified by the coherent camera's integrated circuitry. By spectroscopic examination of the frequency components of each pixel of the detector array, a high-resolution, three-dimensional or holographic image of the target scene is produced for applications such as air pollution studies, atmospheric disturbance monitoring, and military weapons targeting. 8 figs.

  9. Reduction of background clutter in structured lighting systems

    DOEpatents

    Carlson, Jeffrey J.; Giles, Michael K.; Padilla, Denise D.; Davidson, Jr., Patrick A.; Novick, David K.; Wilson, Christopher W.

    2010-06-22

Methods for segmenting the reflected light of an illumination source having a characteristic wavelength from background illumination (i.e., clutter) in structured lighting systems can comprise: pulsing the light source used to illuminate a scene; pulsing the light source synchronously with the opening of a shutter in an imaging device; estimating the contribution of background clutter by interpolation of images of the scene collected at multiple spectral bands not including the characteristic wavelength, and subtracting the estimated background contribution from an image of the scene comprising the wavelength of the light source; and placing a polarizing filter between the imaging device and the scene, where the illumination source can be polarized in the same orientation as the polarizing filter. Apparatus for segmenting the light of an illumination source from background illumination can comprise an illuminator, an image receiver for receiving images of multiple spectral bands, a processor for calculations and interpolations, and a polarizing filter.
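
    The interpolation-and-subtraction step of the claimed method can be sketched as a per-pixel linear fit over the clutter-only spectral bands, evaluated at the laser wavelength and subtracted; all function and parameter names here are illustrative, not from the patent.

        import numpy as np

        def segment_laser_return(bands, wavelengths, laser_band, laser_wl):
            """Estimate clutter at the laser's characteristic wavelength by
            interpolating bands that exclude it, then subtract."""
            bands = np.asarray(bands, dtype=float)      # (n_bands, h, w)
            w = np.asarray(wavelengths, dtype=float)
            # Per-pixel linear fit of radiance vs. wavelength
            A = np.stack([w, np.ones_like(w)], axis=1)  # (n_bands, 2)
            coef, *_ = np.linalg.lstsq(A, bands.reshape(len(w), -1), rcond=None)
            clutter = (coef[0] * laser_wl + coef[1]).reshape(bands.shape[1:])
            return laser_band - clutter                 # residual laser signal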

  10. Modeling repetitive motions using structured light.

    PubMed

    Xu, Yi; Aliaga, Daniel G

    2010-01-01

Obtaining models of dynamic 3D objects is an important part of content generation for computer graphics. Numerous methods have been extended from static scenarios to model dynamic scenes. If the states or poses of a dynamic object repeat often during a sequence (though not necessarily periodically), we call the motion repetitive. There are many objects, such as toys, machines, and humans, undergoing repetitive motions. Our key observation is that when a motion state repeats, we can sample the scene under the same motion state again but using a different set of parameters, thus providing more information about each motion state. This enables robust acquisition of dense 3D information, which is otherwise difficult for objects in motion, using only simple hardware. After the motion sequence, we group temporally disjoint observations of the same motion state together and produce a smooth space-time reconstruction of the scene. Effectively, the dynamic scene modeling problem is converted to a series of static scene reconstructions, which are easier to tackle. The varying sampling parameters can be, for example, structured-light patterns, illumination directions, and viewpoints, resulting in different modeling techniques. Based on this observation, we present an image-based motion-state framework and demonstrate our paradigm using either a synchronized or an unsynchronized structured-light acquisition method.

  11. Hubble Space Telescope Deploy, Cuba, Bahamas and Gulf of Mexico

    NASA Image and Video Library

    1990-04-29

    STS031-151-010 (25 April 1990) --- The Hubble Space Telescope (HST), still in the grasp of Discovery's Remote Manipulator System (RMS), is backdropped over Cuba and the Bahama Islands. In this scene, it has yet to have deployment of its solar array panels and its high gain antennae. This scene was captured with a large format Aero Linhof camera used by several previous flight crews to record Earth scenes.

  12. Canceled to Be Called Back: A Retrospective Cohort Study of Canceled Helicopter Emergency Medical Service Scene Calls That Are Later Transferred to a Trauma Center.

    PubMed

    Nolan, Brodie; Ackery, Alun; Nathens, Avery; Sawadsky, Bruce; Tien, Homer

    In our trauma system, helicopter emergency medical services (HEMS) can be requested to attend a scene call for an injured patient before arrival by land paramedics. Land paramedics can cancel this response if they deem it unnecessary. The purpose of this study is to describe the frequency of canceled HEMS scene calls that were subsequently transferred to 2 trauma centers and to assess for any impact on morbidity and mortality. Probabilistic matching was used to identify canceled HEMS scene call patients who were later transported to 2 trauma centers over a 48-month period. Registry data were used to compare canceled scene call patients with direct from scene patients. There were 290 requests for HEMS scene calls, of which 35.2% were canceled. Of those canceled, 24.5% were later transported to our trauma centers. Canceled scene call patients were more likely to be older and to be discharged home from the trauma center without being admitted. There is a significant amount of undertriage of patients for whom an HEMS response was canceled and later transported to a trauma center. These patients face similar morbidity and mortality as patients who are brought directly from scene to a trauma center. Copyright © 2018 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.

  13. Wide-area continuous offender monitoring

    NASA Astrophysics Data System (ADS)

    Hoshen, Joseph; Drake, George; Spencer, Debra D.

    1997-02-01

    The corrections system in the U.S. is supervising over five million offenders. This number is rising fast and so are the direct and indirect costs to society. To improve supervision and reduce the cost of parole and probation, first generation home arrest systems were introduced in 1987. While these systems proved to be helpful to the corrections system, their scope is rather limited because they only cover an offender at a single location and provide only a partial time coverage. To correct the limitations of first- generation systems, second-generation wide area continuous electronic offender monitoring systems, designed to monitor the offender at all times and locations, are now on the drawing board. These systems use radio frequency location technology to track the position of offenders. The challenge for this technology is the development of reliable personal locator devices that are small, lightweight, with long operational battery life, and indoors/outdoors accuracy of 100 meters or less. At the center of a second-generation system is a database that specifies the offender's home, workplace, commute, and time the offender should be found in each. The database could also define areas from which the offender is excluded. To test compliance, the system would compare the observed coordinates of the offender with the stored location for a given time interval. Database logfiles will also enable law enforcement to determine if a monitored offender was present at a crime scene and thus include or exclude the offender as a potential suspect.
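
    The compliance test described, comparing observed coordinates with the stored location for a given time interval, is straightforward to sketch. The schedule structure and the flat-earth distance approximation below are illustrative assumptions, not part of the proposed system.

        import math
        from dataclasses import dataclass

        @dataclass
        class ScheduleEntry:
            start_h: float      # interval start, hours since midnight
            end_h: float        # interval end
            lat: float          # expected location
            lon: float
            radius_m: float     # allowed radius (e.g., ~100 m locator accuracy)

        def is_compliant(schedule, hour, lat, lon):
            """Compare an observed fix against the stored location for the
            time interval that covers it (flat-earth metres, city scale)."""
            for e in schedule:
                if e.start_h <= hour < e.end_h:
                    dlat = (lat - e.lat) * 111_320.0
                    dlon = (lon - e.lon) * 111_320.0 * math.cos(math.radians(e.lat))
                    return math.hypot(dlat, dlon) <= e.radius_m
            return False    # no interval covers this time: treat as violation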

  14. Wide area continuous offender monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoshen, J.; Drake, G.; Spencer, D.

The corrections system in the U.S. is supervising over five million offenders. This number is rising fast and so are the direct and indirect costs to society. To improve supervision and reduce the cost of parole and probation, first generation home arrest systems were introduced in 1987. While these systems proved to be helpful to the corrections system, their scope is rather limited because they only cover an offender at a single location and provide only a partial time coverage. To correct the limitations of first-generation systems, second-generation wide area continuous electronic offender monitoring systems, designed to monitor the offender at all times and locations, are now on the drawing board. These systems use radio frequency location technology to track the position of offenders. The challenge for this technology is the development of reliable personal locator devices that are small, lightweight, with long operational battery life, and indoors/outdoors accuracy of 100 meters or less. At the center of a second-generation system is a database that specifies the offender's home, workplace, commute, and time the offender should be found in each. The database could also define areas from which the offender is excluded. To test compliance, the system would compare the observed coordinates of the offender with the stored location for a given time interval. Database logfiles will also enable law enforcement to determine if a monitored offender was present at a crime scene and thus include or exclude the offender as a potential suspect.

  15. AgRISTARS. Supporting research: Algorithms for scene modelling

    NASA Technical Reports Server (NTRS)

    Rassbach, M. E. (Principal Investigator)

    1982-01-01

    The requirements for a comprehensive analysis of LANDSAT or other visual data scenes are defined. The development of a general model of a scene and a computer algorithm for finding the particular model for a given scene is discussed. The modelling system includes a boundary analysis subsystem, which detects all the boundaries and lines in the image and builds a boundary graph; a continuous variation analysis subsystem, which finds gradual variations not well approximated by a boundary structure; and a miscellaneous features analysis, which includes texture, line parallelism, etc. The noise reduction capabilities of this method and its use in image rectification and registration are discussed.

  16. A bio-inspired system for spatio-temporal recognition in static and video imagery

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Moore, Christopher K.; Chelian, Suhas

    2007-04-01

This paper presents a bio-inspired method for spatio-temporal recognition in static and video imagery. It builds upon and extends our previous work on a bio-inspired Visual Attention and object Recognition System (VARS). The VARS approach locates and recognizes objects in a single frame. This work presents two extensions of VARS. The first extension is a Scene Recognition Engine (SCE) that learns to recognize spatial relationships between objects that compose a particular scene category in static imagery. This could be used for recognizing the category of a scene, e.g., office vs. kitchen scene. The second extension is the Event Recognition Engine (ERE) that recognizes spatio-temporal sequences or events in sequences. This extension uses a working memory model to recognize events and behaviors in video imagery by maintaining and recognizing ordered spatio-temporal sequences. The working memory model is based on an ARTSTORE neural network that combines an ART-based neural network with a cascade of sustained temporal order recurrent (STORE) neural networks. A series of Default ARTMAP classifiers ascribes event labels to these sequences. Our preliminary studies have shown that this extension is robust to variations in an object's motion profile. We evaluated the performance of the SCE and ERE on real datasets. The SCE module was tested on a visual scene classification task using the LabelMe dataset. The ERE was tested on real world video footage of vehicles and pedestrians in a street scene. Our system is able to recognize the events in this footage involving vehicles and pedestrians.

  17. Text Extraction from Scene Images by Character Appearance and Structure Modeling

    PubMed Central

    Yi, Chucai; Tian, Yingli

    2012-01-01

    In this paper, we propose a novel algorithm to detect text information from natural scene images. Scene text classification and detection are still open research topics. Our proposed algorithm is able to model both character appearance and structure to generate representative and discriminative text descriptors. The contributions of this paper include three aspects: 1) a new character appearance model by a structure correlation algorithm which extracts discriminative appearance features from detected interest points of character samples; 2) a new text descriptor based on structons and correlatons, which model character structure by structure differences among character samples and structure component co-occurrence; and 3) a new text region localization method by combining color decomposition, character contour refinement, and string line alignment to localize character candidates and refine detected text regions. We perform three groups of experiments to evaluate the effectiveness of our proposed algorithm, including text classification, text detection, and character identification. The evaluation results on benchmark datasets demonstrate that our algorithm achieves the state-of-the-art performance on scene text classification and detection, and significantly outperforms the existing algorithms for character identification. PMID:23316111

  18. Perspective Imagery in Synthetic Scenes used to Control and Guide Aircraft during Landing and Taxi: Some Issues and Concerns

    NASA Technical Reports Server (NTRS)

    Johnson, Walter W.; Kaiser, Mary K.

    2003-01-01

Perspective synthetic displays that supplement, or supplant, the optical windows traditionally used for guidance and control of aircraft are accompanied by potentially significant human factors problems related to the optical geometric conformality of the display. Such geometric conformality is broken when optical features are not in the location they would be if directly viewed through a window. This often occurs when the scene is relayed or generated from a location different from the pilot's eyepoint. However, assuming no large visual/vestibular effects, a pilot can often learn to use such a display very effectively. Important problems may arise, however, when display accuracy or consistency is compromised, and this can usually be related to geometrical discrepancies between how the synthetic visual scene behaves and how the visual scene through a window behaves. In addition to these issues, this paper examines the potentially critical problem of the disorientation that can arise when both a synthetic display and a real window are present in a flight deck and no consistent visual interpretation is available.

  19. Local search for optimal global map generation using mid-decadal landsat images

    USGS Publications Warehouse

    Khatib, L.; Gasch, J.; Morris, Robert; Covington, S.

    2007-01-01

NASA and the US Geological Survey (USGS) are seeking to generate a map of the entire globe using Landsat 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) sensor data from the "mid-decadal" period of 2004 through 2006. The global map is comprised of thousands of scene locations and, for each location, tens of different images of varying quality to choose from. Furthermore, it is desirable that images of adjacent scenes be close together in time of acquisition, to avoid obvious discontinuities due to seasonal changes. These characteristics make it desirable to formulate an automated solution to the problem of generating the complete map. This paper formulates the Global Map Generator problem as a Constraint Optimization Problem (GMG-COP) and describes an approach to solving it using local search. Preliminary results of running the algorithm on image data sets are summarized. The results suggest a significant improvement in map quality using constraint-based solutions. Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
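
    A hill-climbing sketch of a GMG-COP solver in the spirit of the abstract: one image is selected per scene location, and random single-scene swaps are accepted when they reduce a cost combining per-image quality with an acquisition-time discontinuity penalty between adjacent scenes. The callables and cost form are assumptions, not the paper's actual objective.

        import random

        def local_search(scenes, candidates, quality, discontinuity,
                         neighbors, iters=10000, seed=0):
            """Greedy local search over one-image-per-scene assignments.
            'quality(img)' and 'discontinuity(img_a, img_b)' are assumed
            cost callables; 'neighbors[s]' lists scenes adjacent to s."""
            rng = random.Random(seed)
            pick = {s: rng.choice(candidates[s]) for s in scenes}

            def cost(s, img):
                c = quality(img)
                for n in neighbors[s]:
                    c += discontinuity(img, pick[n])
                return c

            for _ in range(iters):
                s = rng.choice(scenes)
                alt = rng.choice(candidates[s])
                if cost(s, alt) < cost(s, pick[s]):
                    pick[s] = alt        # accept the improving move
            return pick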

  20. Laser radar system for obstacle avoidance

    NASA Astrophysics Data System (ADS)

    Bers, Karlheinz; Schulz, Karl R.; Armbruster, Walter

    2005-09-01

The threat of hostile surveillance and weapon systems requires military aircraft to fly under extreme conditions such as low altitude, high speed, poor visibility, and incomplete terrain information. The probability of collision with natural and man-made obstacles during such contour missions is high if detection capability is restricted to conventional vision aids. Forward-looking scanning laser radars, which are built by EADS and are presently being flight tested and evaluated at German proving grounds, provide a possible solution, having a large field of view, high angular and range resolution, a high pulse repetition rate, and sufficient pulse energy to register returns from objects at distances of military relevance with a high hit-and-detect probability. The development of advanced 3D-scene analysis algorithms has increased the recognition probability and reduced the false alarm rate by using more readily recognizable objects such as terrain, poles, pylons, and trees to generate a parametric description of the terrain surface as well as the class, position, orientation, size, and shape of all objects in the scene. The sensor system and the implemented algorithms can be used for other applications such as terrain following, autonomous obstacle avoidance, and automatic target recognition. This paper describes different 3D-imaging ladar sensors with a unique system architecture but different components matched to different military applications. Emphasis is laid on an obstacle warning system with a high probability of detection of thin wires, real-time processing of the measured range image data, and obstacle classification and visualization.

  1. International Space Station (ISS)

    NASA Image and Video Library

    1995-04-17

This computer-generated scene of the International Space Station (ISS) represents the first addition of hardware following the completion of Phase II. Phase 8-A shows the addition of the S-9 truss.

  2. Adaptive attunement of selective covert attention to evolutionary-relevant emotional visual scenes.

    PubMed

    Fernández-Martín, Andrés; Gutiérrez-García, Aída; Capafons, Juan; Calvo, Manuel G

    2017-05-01

We investigated selective attention to emotional scenes in peripheral vision, as a function of adaptive relevance of scene affective content for male and female observers. Pairs of emotional-neutral images appeared peripherally (with perceptual stimulus differences controlled) while viewers were fixating on a different stimulus in central vision. Early selective orienting was assessed by the probability of directing the first fixation towards either scene, and the time until first fixation. Emotional scenes selectively captured covert attention even when they were task-irrelevant, thus revealing involuntary, automatic processing. Sex of observers and specific emotional scene content (e.g., male-to-female-aggression, families and babies, etc.) interactively modulated covert attention, depending on adaptive priorities and goals for each sex, both for pleasant and unpleasant content. The attentional system exhibits domain-specific and sex-specific biases and attunements, probably rooted in evolutionary pressures to enhance reproductive and protective success. Emotional cues selectively capture covert attention based on their bio-social significance. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Processing the Viking lander camera data

    NASA Technical Reports Server (NTRS)

    Levinthal, E. C.; Tucker, R.; Green, W.; Jones, K. L.

    1977-01-01

    Over 1000 camera events were returned from the two Viking landers during the Primary Mission. A system was devised for processing camera data as they were received, in real time, from the Deep Space Network. This system provided a flexible choice of parameters for three computer-enhanced versions of the data for display or hard-copy generation. Software systems allowed all but 0.3% of the imagery scan lines received on earth to be placed correctly in the camera data record. A second-order processing system was developed which allowed extensive interactive image processing including computer-assisted photogrammetry, a variety of geometric and photometric transformations, mosaicking, and color balancing using six different filtered images of a common scene. These results have been completely cataloged and documented to produce an Experiment Data Record.

  4. Digital video timing analyzer for the evaluation of PC-based real-time simulation systems

    NASA Astrophysics Data System (ADS)

    Jones, Shawn R.; Crosby, Jay L.; Terry, John E., Jr.

    2009-05-01

    Due to the rapid acceleration in technology and the drop in costs, the use of commercial off-the-shelf (COTS) PC-based hardware and software components for digital and hardware-in-the-loop (HWIL) simulations has increased. However, the increase in PC-based components creates new challenges for HWIL test facilities such as cost-effective hardware and software selection, system configuration and integration, performance testing, and simulation verification/validation. This paper will discuss how the Digital Video Timing Analyzer (DiViTA) installed in the Aviation and Missile Research, Development and Engineering Center (AMRDEC) provides quantitative characterization data for PC-based real-time scene generation systems. An overview of the DiViTA is provided followed by details on measurement techniques, applications, and real-world examples of system benefits.

  5. Video System for Viewing From a Remote or Windowless Cockpit

    NASA Technical Reports Server (NTRS)

    Banerjee, Amamath

    2009-01-01

    A system of electronic hardware and software synthesizes, in nearly real time, an image of a portion of a scene surveyed by as many as eight video cameras aimed, in different directions, at portions of the scene. This is a prototype of systems that would enable a pilot to view the scene outside a remote or windowless cockpit. The outputs of the cameras are digitized. Direct memory addressing is used to store the data of a few captured images in sequence, and the sequence is repeated in cycles. Cylindrical warping is used in merging adjacent images at their borders to construct a mosaic image of the scene. The mosaic-image data are written to a memory block from which they can be rendered on a head-mounted display (HMD) device. A subsystem in the HMD device tracks the direction of gaze of the wearer, providing data that are used to select, for display, the portion of the mosaic image corresponding to the direction of gaze. The basic functionality of the system has been demonstrated by mounting the cameras on the roof of a van and steering the van by use of the images presented on the HMD device.
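
    The cylindrical warping used to merge adjacent camera images can be sketched as an inverse mapping from mosaic coordinates back to source-image coordinates; the focal length 'f' (in pixels) is an assumed input, and the OpenCV call in the comment is one conventional way to apply the maps, not necessarily what this system used.

        import numpy as np

        def cylindrical_warp_coords(h, w, f):
            """For each output (cylinder) pixel, compute the source pixel
            in the original pinhole-camera image."""
            y, x = np.indices((h, w), dtype=float)
            theta = (x - w / 2) / f            # angle around the cylinder
            hgt = (y - h / 2) / f              # height on the cylinder
            # project the cylinder point back through the pinhole model
            xs = f * np.tan(theta) + w / 2
            ys = f * hgt / np.cos(theta) + h / 2
            return xs, ys                      # sample the source image here

        # e.g. with OpenCV: cv2.remap(src, xs.astype(np.float32),
        #                             ys.astype(np.float32), cv2.INTER_LINEAR)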

  6. Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3

    NASA Astrophysics Data System (ADS)

    Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.

    2014-12-01

    The system "Immersive Virtual Moon Scene" is used to show the virtual environment of Moon surface in immersive environment. Utilizing stereo 360-degree imagery from panoramic camera of Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, stereo 360-degree panorama stitched by 112 images is projected onto inside surface of sphere according to panorama orientation coordinates and camera parameters to build the virtual scene. Stars can be seen from the Moon at any time. So we render the sun, planets and stars according to time and rover's location based on Hipparcos catalogue as the background on the sphere. Immersing in the stereo virtual environment created by this imaged-based rendering technique, the operator can zoom, pan to interact with the virtual Moon scene and mark interesting objects. Hardware of the immersive virtual Moon system is made up of four high lumen projectors and a huge curve screen which is 31 meters long and 5.5 meters high. This system which take all panoramic camera data available and use it to create an immersive environment, enable operator to interact with the environment and mark interesting objects contributed heavily to establishment of science mission goals in Chang'E-3 mission. After Chang'E-3 mission, the lab with this system will be open to public. Besides this application, Moon terrain stereo animations based on Chang'E-1 and Chang'E-2 data will be showed to public on the huge screen in the lab. Based on the data of lunar exploration,we will made more immersive virtual moon scenes and animations to help the public understand more about the Moon in the future.

  7. Adaptive foveated single-pixel imaging with dynamic supersampling

    PubMed Central

    Phillips, David B.; Sun, Ming-Jie; Taylor, Jonathan M.; Edgar, Matthew P.; Barnett, Stephen M.; Gibson, Graham M.; Padgett, Miles J.

    2017-01-01

    In contrast to conventional multipixel cameras, single-pixel cameras capture images using a single detector that measures the correlations between the scene and a set of patterns. However, these systems typically exhibit low frame rates, because to fully sample a scene in this way requires at least the same number of correlation measurements as the number of pixels in the reconstructed image. To mitigate this, a range of compressive sensing techniques have been developed which use a priori knowledge to reconstruct images from an undersampled measurement set. Here, we take a different approach and adopt a strategy inspired by the foveated vision found in the animal kingdom—a framework that exploits the spatiotemporal redundancy of many dynamic scenes. In our system, a high-resolution foveal region tracks motion within the scene, yet unlike a simple zoom, every frame delivers new spatial information from across the entire field of view. This strategy rapidly records the detail of quickly changing features in the scene while simultaneously accumulating detail of more slowly evolving regions over several consecutive frames. This architecture provides video streams in which both the resolution and exposure time spatially vary and adapt dynamically in response to the evolution of the scene. The degree of local frame rate enhancement is scene-dependent, but here, we demonstrate a factor of 4, thereby helping to mitigate one of the main drawbacks of single-pixel imaging techniques. The methods described here complement existing compressive sensing approaches and may be applied to enhance computational imagers that rely on sequential correlation measurements. PMID:28439538
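
    A rough illustration of spatially varying resolution (not the authors' single-pixel reconstruction pipeline): full detail is kept in a foveal region, and progressively coarser representations, approximated here by low-pass filtering, are used with distance from the fovea. Ring spacing and blur strengths are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def foveated_render(image, fovea, n_rings=4, max_sigma=8.0):
            """Blend progressively stronger low-pass copies of a frame with
            distance from the fovea centre (fovea = (row, col))."""
            h, w = image.shape
            y, x = np.indices((h, w))
            r = np.hypot(y - fovea[0], x - fovea[1]) / np.hypot(h / 2, w / 2)
            out = image.astype(float).copy()
            for i in range(1, n_rings + 1):
                sigma = max_sigma * i / n_rings
                blurred = gaussian_filter(image.astype(float), sigma)
                ring = r >= i / (n_rings + 1)
                out[ring] = blurred[ring]   # coarser representation further out
            return out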

  8. Robotic vision techniques for space operations

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar

    1994-01-01

Automation and robotics for space applications are being pursued for increased productivity, enhanced reliability, increased flexibility, higher safety, and for the automation of time-consuming tasks and those activities which are beyond the capacity of the crew. One of the key functional elements of an automated robotic system is sensing and perception. As the robotics era dawns in space, vision systems will be required to provide the key sensory data needed for multifaceted intelligent operations. In general, the three-dimensional scene/object description, along with location, orientation, and motion parameters will be needed. In space, the absence of diffused lighting due to a lack of atmosphere gives rise to: (a) high dynamic range (10^8) of scattered sunlight intensities, resulting in very high contrast between shadowed and specular portions of the scene; (b) intense specular reflections causing target/scene bloom; and (c) loss of portions of the image due to shadowing and presence of stars, Earth, Moon, and other space objects in the scene. In this work, developments for combating the adverse effects described earlier and for enhancing scene definition are discussed. Both active and passive sensors are used. The algorithm for selecting appropriate wavelength, polarization, look angle of vision sensors is based on environmental factors as well as the properties of the target/scene which are to be perceived. The environment is characterized on the basis of sunlight and other illumination incident on the target/scene and the temperature profiles estimated on the basis of the incident illumination. The unknown geometrical and physical parameters are then derived from the fusion of the active and passive microwave, infrared, laser, and optical data.

  9. A prototype molecular interactive collaborative environment (MICE).

    PubMed

    Bourne, P; Gribskov, M; Johnson, G; Moreland, J; Wavra, S; Weissig, H

    1998-01-01

Illustrations of macromolecular structure in the scientific literature contain a high level of semantic content through which the authors convey, among other features, the biological function of that macromolecule. We refer to these illustrations as molecular scenes. Such scenes, if available electronically, are not readily accessible for further interactive interrogation. The basic PDB format does not retain features of the scene; formats like PostScript retain the scene but are not interactive; and the many formats used by individual graphics programs, while capable of reproducing the scene, are neither interchangeable nor can they be stored in a database and queried for features of the scene. MICE defines a Molecular Scene Description Language (MSDL) which allows scenes to be stored in a relational database (a molecular scene gallery) and queried. Scenes retrieved from the gallery are rendered in Virtual Reality Modeling Language (VRML) and currently displayed in WebView, a VRML browser modified to support the Virtual Reality Behavior System (VRBS) protocol. VRBS provides communication between multiple client browsers, each capable of manipulating the scene. This level of collaboration works well over standard Internet connections and holds promise for collaborative research at a distance and distance learning. Further, via VRBS, the VRML world can be used as a visual cue to trigger an application such as a remote MEME search. MICE is very much work in progress. Current work seeks to replace WebView with Netscape, Cosmoplayer, a standard VRML plug-in, and a Java-based console. The console consists of a generic kernel suitable for multiple collaborative applications and additional application-specific controls. Further details of the MICE project are available at http://mice.sdsc.edu.

  10. Advanced Weapon System (AWS) Sensor Prediction Techniques Study. Volume II

    DTIC Science & Technology

    1981-09-01

models are suggested. Courant Computer Science Report #9, December 1975: Scene Analysis: A Survey, Carl Weiman, Courant Institute of... some crucial differences. In the psychological model of mechanical vision, the aim of scene analysis is to perceive and understand 2-D images of 3-D scenes. The meaning of this analogy can be clarified using a rudimentary informational model; this yields a natural hierarchy from physical

  11. Computer Vision Research and its Applications to Automated Cartography

    DTIC Science & Technology

    1985-09-01

3-D Scene Geometry, Thomas M. Strat and Martin A. Fischler; Appendix D: A New Sense for Depth of Field, Alex P. Pentland... 3-D modeling. A. Baseline Stereo System: As a framework for integration and evaluation of our research in modeling 3-D scene geometry, as well as a... B. New Methods for Stereo Compilation: As we previously indicated, the conventional approach to recovering scene geometry from a stereo pair of

  12. 47 CFR 80.1127 - On-scene communications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

Title 47 (Telecommunication), Volume 5, revised as of 2012-10-01. Section 80.1127: On-scene communications. Federal Communications Commission (continued), Safety and Special Radio Services, Stations in the Maritime Services, Global Maritime Distress and Safety System (GMDSS) Operating Procedures.

  13. Modeling Coniferous Canopy Structure over Extensive Areas for Ray Tracing Simulations: Scaling from the Leaf to the Stand Level

    NASA Astrophysics Data System (ADS)

    van Aardt, J. A.; van Leeuwen, M.; Kelbe, D.; Kampe, T.; Krause, K.

    2015-12-01

Remote sensing is widely accepted as a useful technology for characterizing the Earth surface in an objective, reproducible, and economically feasible manner. To date, the calibration and validation of remote sensing data sets and biophysical parameter estimates remain challenging due to the requirement to sample large areas for ground-truth data collection and the restriction of sampling to narrow temporal windows centered around flight campaigns or satellite overpasses. The computer graphics community has taken significant steps to ameliorate some of these challenges by providing an ability to generate synthetic images based on geometrically and optically realistic representations of complex targets and imaging instruments. These synthetic data can be used for conceptual and diagnostic tests of instrumentation prior to sensor deployment or to examine linkages between biophysical characteristics of the Earth surface and at-sensor radiance. In the last two decades, the use of image generation techniques for remote sensing of the vegetated environment has evolved from the simulation of simple, homogeneous, hypothetical vegetation canopies to advanced scenes and renderings with a high degree of photo-realism. Reported virtual scenes comprise up to 100M surface facets; however, due to the tighter coupling between hardware and software development, the full potential of image generation techniques for forestry applications remains to be fully explored. In this presentation, we examine the potential computer graphics techniques hold for the analysis of forest structure-function relationships and demonstrate techniques that provide for the modeling of extremely high-faceted virtual forest canopies comprising billions of scene elements. We demonstrate the use of ray tracing simulations for the analysis of gap size distributions and the characterization of foliage clumping within spatial footprints that allow a tight match between characteristics derived from these virtual scenes and typical pixel resolutions of remote sensing imagery.
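
    Once a virtual canopy is rendered or rasterized to a binary canopy map, the gap-size analysis mentioned above reduces to labelling connected no-canopy regions; a minimal sketch, assuming a boolean mask and square pixels of known area.

        import numpy as np
        from scipy import ndimage

        def gap_size_distribution(canopy_mask, pixel_area_m2=1.0):
            """Label connected gap (no-canopy) regions in a binary canopy
            map (True = canopy) and return their areas, ascending."""
            gaps, n = ndimage.label(~canopy_mask)      # connected gap regions
            sizes = ndimage.sum(~canopy_mask, gaps, index=np.arange(1, n + 1))
            return np.sort(sizes) * pixel_area_m2      # gap areas in m^2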

  14. Advanced interactive display formats for terminal area traffic control

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.

    1996-01-01

This report describes the basic design considerations for perspective air traffic control displays. A software framework has been developed for manual viewing parameter setting (MVPS) in preparation for continued, ongoing developments on automated viewing parameter setting (AVPS) schemes. Two distinct modes of MVPS operations are considered, both of which utilize manipulation pointers imbedded in the three-dimensional scene: (1) direct manipulation of the viewing parameters -- in this mode the manipulation pointers act like the control-input device, through which the viewing parameter changes are made. Part of the parameters are rate controlled, and part of them position controlled. This mode is intended for making fast, iterative small changes in the parameters. (2) Indirect manipulation of the viewing parameters -- this mode is intended primarily for introducing large, predetermined changes in the parameters. Requests for changes in viewing parameter setting are entered manually by the operator by moving viewing parameter manipulation pointers on the screen. The motion of these pointers, which are an integral part of the 3-D scene, is limited to the boundaries of the screen. This arrangement has been chosen in order to preserve the correspondence between the spatial layouts of the new and the old viewing parameter setting, a feature which contributes to preventing spatial disorientation of the operator. For all viewing operations, e.g. rotation, translation and ranging, the actual change is executed automatically by the system, through gradual transitions with an exponentially damped, sinusoidal velocity profile, in this work referred to as 'slewing' motions. The slewing functions, which eliminate discontinuities in the viewing parameter changes, are designed primarily for enhancing the operator's impression that he or she is dealing with an actual physical system, rather than an abstract computer-generated scene. The proposed, continued research efforts will deal with the development of automated viewing parameter setting schemes. These schemes employ an optimization strategy, aimed at identifying the best possible vantage point from which the air traffic control scene can be viewed for a given traffic situation. They determine whether a change in viewing parameter setting is required and determine the dynamic path along which the change to the new viewing parameter setting should take place.
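
    The 'slewing' transition is concrete enough to sketch: the velocity profile is an exponentially damped sinusoid over the transition, so the viewing parameter starts and ends at rest with no discontinuities. The duration and damping constants below are illustrative, not values from the report.

        import numpy as np

        def slew_profile(p0, p1, t, T=2.0, lam=2.0):
            """Transition a viewing parameter from p0 to p1 over duration T
            with an exponentially damped half-sine velocity profile."""
            s = np.clip(np.asarray(t, dtype=float) / T, 0.0, 1.0)
            # damped half-sine velocity, integrated numerically and
            # normalized so the move ends exactly at p1
            ds = np.linspace(0.0, 1.0, 1001)
            vi = np.exp(-lam * ds) * np.sin(np.pi * ds)
            cum = np.cumsum(vi) / vi.sum()      # normalized displacement
            return p0 + (p1 - p0) * np.interp(s, ds, cum)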

  15. New weather depiction technology for night vision goggle (NVG) training: 3D virtual/augmented reality scene-weather-atmosphere-target simulation

    NASA Astrophysics Data System (ADS)

    Folaron, Michelle; Deacutis, Martin; Hegarty, Jennifer; Vollmerhausen, Richard; Schroeder, John; Colby, Frank P.

    2007-04-01

US Navy and Marine Corps pilots receive Night Vision Goggle (NVG) training as part of their overall training to maintain the superiority of our forces. This training must incorporate realistic targets, backgrounds, and representative atmospheric and weather effects they may encounter under operational conditions. An approach for pilot NVG training is to use the Night Imaging and Threat Evaluation Laboratory (NITE Lab) concept. The NITE Labs utilize a 10' by 10' static terrain model equipped with both natural and cultural lighting that is used to demonstrate various illumination conditions and visual phenomena which might be experienced when utilizing night vision goggles. With this technology, the military can safely, systematically, and reliably expose pilots to the large number of potentially dangerous environmental conditions that will be experienced in their NVG training flights. A previous SPIE presentation described our work for NAVAIR to add realistic atmospheric and weather effects to the NVG NITE Lab training facility using the NVG-WDT (Weather Depiction Technology) system (Colby, et al.). NVG-WDT consists of a high-end multiprocessor server with weather simulation software, and several fixed and goggle-mounted Heads Up Displays (HUDs). Atmospheric and weather effects are simulated using state-of-the-art computer codes such as the WRF (Weather Research and Forecasting) model and the US Air Force Research Laboratory MODTRAN radiative transport model. Imagery for a variety of natural and man-made obscurations (e.g., rain, clouds, snow, dust, smoke, chemical releases) is calculated and injected into the scene observed through the NVG via the fixed and goggle-mounted HUDs. This paper expands on the work described in the previous presentation and will describe the 3D Virtual/Augmented Reality Scene - Weather - Atmosphere - Target Simulation part of the NVG-WDT. The 3D virtual reality software is a complete simulation system to generate realistic target-background scenes and display the results in a DirectX environment. This paper will describe our approach and show a brief demonstration of the software capabilities. The work is supported by the SBIR program under contract N61339-06-C-0113.

  16. Top-down control of visual perception: attention in natural vision.

    PubMed

    Rolls, Edmund T

    2008-01-01

    Top-down perceptual influences can bias (or pre-empt) perception. In natural scenes, the receptive fields of neurons in the inferior temporal visual cortex (IT) shrink to become close to the size of objects. This facilitates the read-out of information from the ventral visual system, because the information is primarily about the object at the fovea. Top-down attentional influences are much less evident in natural scenes than when objects are shown against blank backgrounds, though they are still present. It is suggested that the reduced receptive-field size in natural scenes and the effects of top-down attention contribute to change blindness. The receptive fields of IT neurons in complex scenes, though including the fovea, are frequently asymmetric around the fovea, and it is proposed that this is the solution the IT uses to represent multiple objects and their relative spatial positions in a scene. Networks that implement probabilistic decision-making are described, and it is suggested that, when in perceptual systems they take decisions (or 'test hypotheses'), they influence lower-level networks to bias visual perception. Finally, it is shown that similar processes extend to systems involved in the processing of emotion-provoking sensory stimuli, in that word-level cognitive states provide top-down biasing that reaches as far down as the orbitofrontal cortex, where, at the first stage of affective representations, olfactory, taste, flavour, and touch processing is biased (or pre-empted) in humans.

  17. Acceptable bit-rates for human face identification from CCTV imagery

    NASA Astrophysics Data System (ADS)

    Tsifouti, Anastasia; Triantaphillidou, Sophie; Bilissi, Efthimia; Larabi, Mohamed-Chaker

    2013-01-01

    The objective of this investigation is to produce recommendations for acceptable bit-rates of CCTV footage of people onboard London buses. The majority of CCTV recorders on buses use a proprietary format based on the H.264/AVC video coding standard, exploiting both spatial and temporal redundancy. Low bit-rates are favored in the CCTV industry but they compromise the image usefulness of the recorded imagery. In this context usefulness is defined by the presence of enough facial information remaining in the compressed image to allow a specialist to identify a person. The investigation includes four steps: 1) Collection of representative video footage. 2) The grouping of video scenes based on content attributes. 3) Psychophysical investigations to identify key scenes, which are most affected by compression. 4) Testing of recording systems using the key scenes and further psychophysical investigations. The results are highly dependent upon scene content. For example, very dark and very bright scenes were the most challenging to compress, requiring higher bit-rates to maintain useful information. The acceptable bit-rates are also found to be dependent upon the specific CCTV system used to compress the footage, presenting challenges in drawing conclusions about universal `average' bit-rates.

  18. Modelling Technology for Building Fire Scene with Virtual Geographic Environment

    NASA Astrophysics Data System (ADS)

    Song, Y.; Zhao, L.; Wei, M.; Zhang, H.; Liu, W.

    2017-09-01

    Building fires are hazardous events that can lead to disaster and massive destruction, and their management and mitigation have always attracted much interest from researchers. An integrated Virtual Geographic Environment (VGE) is a good choice for building fire safety management and emergency decision-making, in which a richer and more realistic fire process can be computed dynamically and the results of fire simulations and analyses can be much more accurate as well. To model a building fire scene with VGE, the application requirements and modelling objectives of a building fire scene are first analysed in this paper. Then, the four core elements of modelling a building fire scene (the building space environment, the fire event, the indoor Fire Extinguishing System (FES), and the indoor crowd) are implemented, and the relationships between these elements are discussed as well. Finally, with the theory and framework of VGE, the technology of a building fire scene system with VGE is designed within the data environment, the model environment, the expression environment, and the collaborative environment. The functions and key techniques in each environment are also analysed, which may provide a reference for further development and other research on VGE.

  19. Development of an Infrared Remote Sensing System for Continuous Monitoring of Stromboli Volcano

    NASA Astrophysics Data System (ADS)

    Harig, R.; Burton, M.; Rausch, P.; Jordan, M.; Gorgas, J.; Gerhard, J.

    2009-04-01

    In order to monitor gases emitted by Stromboli volcano in the Eolian archipelago, Italy, a remote sensing system based on Fourier-transform infrared spectroscopy has been developed and installed on the summit of Stromboli volcano. Hot rocks and lava are used as sources of infrared radiation. The system is based on an interferometer with a single detector element in combination with an azimuth-elevation scanning mirror system. The mirror system is used to align the field of view of the instrument. In addition, the system is equipped with an infrared camera. Two basic modes of operation have been implemented: The user may use the infrared image to align the system to a vent that is to be examined. In addition, the scanning system may be used for (hyperspectral) imaging of the scene. In this mode, the scanning mirror is set to move sequentially to all positions within a region of interest which is defined by the operator using the image generated from the infrared camera. The spectral range used for the measurements is 1600-4200 cm-1, allowing the quantification of many gases such as CO, CO2, SO2, and HCl. The spectral resolution is 0.5 cm-1. In order to protect the optical, mechanical and electrical parts of the system from the volcanic gases, all components are contained in a gas-tight aluminium housing. The system is controlled via TCP/IP (data transfer by WLAN), allowing the user to operate it from a remote PC. The infrared image of the scene and measured spectra are transferred to and displayed by a remote PC at INGV or TUHH in real-time. However, the system is capable of autonomous operation on the volcano, once a measurement has been started. Measurements are stored by an internal embedded PC.

  20. A 3D radiative transfer model based on lidar data and its application on hydrological and ecosystem modeling

    NASA Astrophysics Data System (ADS)

    Li, W.; Su, Y.; Harmon, T. C.; Guo, Q.

    2013-12-01

    Light Detection and Ranging (lidar) is an optical remote sensing technology that measures properties of scattered light to find range and/or other information of a distant object. Due to its ability to generate 3-dimensional data with high spatial resolution and accuracy, lidar technology is being increasingly used in ecology, geography, geology, geomorphology, seismology, remote sensing, and atmospheric physics. In this study we construct a 3-dimensional (3D) radiative transfer model (RTM) using lidar data to simulate the spatial distribution of solar radiation (direct and diffuse) on the surface of water and mountain forests. The model includes three sub-models: a light model simulating the light source, a sensor model simulating the camera, and a scene model simulating the landscape. We use ground-based and airborne lidar data to characterize the 3D structure of the study area, and generate a detailed 3D scene model. The interactions between light and object are simulated using the Monte Carlo Ray Tracing (MCRT) method. A large number of rays are generated from the light source. For each individual ray, the full traveling path is traced until it is absorbed or escapes from the scene boundary. By locating the sensor at different positions and directions, we can simulate the spatial distribution of solar energy at the ground, vegetation and water surfaces. These outputs can then be incorporated into meteorological drivers for hydrologic and energy balance models to improve our understanding of hydrologic processes and ecosystem functions.
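
    As a concrete picture of the MCRT loop sketched above, the following Python fragment traces photons through a homogeneous scattering medium until each one is absorbed or leaves the domain. The homogeneous medium, single-scattering albedo, and mean free path are stand-in assumptions; the study's model operates on a lidar-derived 3D scene instead.

        import numpy as np

        rng = np.random.default_rng(0)

        def trace_photons(n_photons=100_000, albedo=0.8, mfp=1.0, box=10.0):
            """Minimal MCRT loop: each ray undergoes free flights and scattering
            events until it is absorbed or escapes the scene boundary.
            Returns the absorbed and escaped fractions."""
            absorbed = escaped = 0
            for _ in range(n_photons):
                pos = np.array([box / 2, box / 2, box])  # enter at the top face
                direction = np.array([0.0, 0.0, -1.0])   # downward solar ray
                while True:
                    pos = pos + direction * rng.exponential(mfp)  # free flight
                    if np.any(pos < 0.0) or np.any(pos > box):
                        escaped += 1       # ray leaves the scene boundary
                        break
                    if rng.random() > albedo:
                        absorbed += 1      # absorption event ends the path
                        break
                    direction = rng.normal(size=3)        # isotropic scatter
                    direction /= np.linalg.norm(direction)
            return absorbed / n_photons, escaped / n_photons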

  1. Effect of predictive sign of acceleration on heart rate variability in passive translation situation: preliminary evidence using visual and vestibular stimuli in VR environment

    PubMed Central

    Watanabe, Hiroshi; Teramoto, Wataru; Umemura, Hiroyuki

    2007-01-01

    Objective We studied the effects of the presentation of a visual sign that warned subjects of acceleration around the yaw and pitch axes in virtual reality (VR) on their heart rate variability. Methods Synchronization of the immersive virtual reality equipment (CAVE) and motion base system generated a driving scene and provided subjects with dynamic and wide-ranging depth information and vestibular input. The heart rate variability of 21 subjects was measured while the subjects observed a simulated driving scene for 16 minutes under three different conditions. Results When the predictive sign of the acceleration appeared 3500 ms before the acceleration, the index of the activity of the autonomic nervous system (low/high frequency ratio; LF/HF ratio) of subjects did not change much, whereas when no sign appeared the LF/HF ratio increased over the observation time. When the predictive sign of the acceleration appeared 750 ms before the acceleration, no systematic change occurred. Conclusion The visual sign which informed subjects of the acceleration affected the activity of the autonomic nervous system when it appeared long enough before the acceleration. Our results also highlight the importance of the interval between the sign and the event, and of the relationship between the gradual presentation of events and their quantity. PMID:17903267

  2. Multiple Vehicle Detection and Segmentation in Malaysia Traffic Flow

    NASA Astrophysics Data System (ADS)

    Fariz Hasan, Ahmad; Fikri Che Husin, Mohd; Affendi Rosli, Khairul; Norhafiz Hashim, Mohd; Faiz Zainal Abidin, Amar

    2018-03-01

    Vision-based systems are widely used in the field of Intelligent Transportation Systems (ITS) to extract a large amount of information for analyzing traffic scenes. The rapid growth in the number of vehicles on the road, together with the significant increase in camera deployments, has dictated the need for traffic surveillance systems. Such systems can take over the burdensome tasks performed by human operators in traffic monitoring centres. This paper concentrates on developing multiple vehicle detection and segmentation for monitoring through Closed Circuit Television (CCTV) video. The system automatically segments vehicles extracted from heavy traffic scenes using optical flow estimation alongside blob analysis in order to detect moving vehicles. Prior to segmentation, blob analysis computes the region of interest corresponding to each moving vehicle, which is used to create a bounding box around that vehicle. Experimental validation of the proposed system was performed, and the algorithm is demonstrated on various sets of traffic scenes.
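
    A minimal sketch of the detection stage described above, using OpenCV's dense Farneback optical flow followed by connected-component blob analysis. The flow parameters, thresholds, and morphology kernel are illustrative choices, not the paper's settings.

        import cv2
        import numpy as np

        def detect_vehicles(prev_gray, gray, flow_thresh=2.0, min_area=400):
            """Dense optical flow highlights moving pixels; blob analysis
            turns them into per-vehicle bounding boxes."""
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            magnitude = np.linalg.norm(flow, axis=2)
            moving = (magnitude > flow_thresh).astype(np.uint8) * 255
            # Morphological closing merges fragments of the same vehicle.
            kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
            moving = cv2.morphologyEx(moving, cv2.MORPH_CLOSE, kernel)
            # Blob analysis: connected components above a minimum area.
            n, labels, stats, _ = cv2.connectedComponentsWithStats(moving)
            boxes = [stats[i, :4] for i in range(1, n)
                     if stats[i, cv2.CC_STAT_AREA] >= min_area]
            return boxes  # each box is (x, y, width, height)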

  3. Boat, wake, and wave real-time simulation

    NASA Astrophysics Data System (ADS)

    Świerkowski, Leszek; Gouthas, Efthimios; Christie, Chad L.; Williams, Owen M.

    2009-05-01

    We describe the extension of our real-time scene generation software VIRSuite to include the dynamic simulation of small boats and their wakes within an ocean environment. Extensive use has been made of the programmability available in the current generation of GPUs. We have demonstrated that real-time simulation is feasible, even including such complexities as dynamic calculation of the boat motion, wake generation, and calculation of an FFT-generated sea state.
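
    The abstract does not detail VIRSuite's sea-state formulation; the NumPy sketch below shows the common FFT approach of filtering complex white noise with a Phillips-type wave spectrum and inverse-transforming to a height field. Grid size, wind vector, and spectrum constants are assumptions for illustration.

        import numpy as np

        def fft_sea_surface(n=256, size=100.0, wind=(10.0, 0.0), g=9.81, seed=1):
            """Illustrative FFT-generated sea state: shape white noise with a
            Phillips-type wave spectrum, then inverse-FFT to surface heights."""
            rng = np.random.default_rng(seed)
            k = 2 * np.pi * np.fft.fftfreq(n, d=size / n)
            kx, ky = np.meshgrid(k, k)
            k2 = kx**2 + ky**2
            k2[0, 0] = 1.0                      # avoid division by zero at DC
            L = np.dot(wind, wind) / g          # largest wave from wind speed
            wind_dir = np.array(wind) / np.linalg.norm(wind)
            cos_f = (kx * wind_dir[0] + ky * wind_dir[1]) / np.sqrt(k2)
            phillips = np.exp(-1.0 / (k2 * L**2)) / k2**2 * cos_f**2
            phillips[0, 0] = 0.0                # no mean-offset wave
            noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
            spectrum = noise * np.sqrt(phillips / 2.0)
            return np.real(np.fft.ifft2(spectrum))  # n x n height grid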

  4. Network Security Visualization

    DTIC Science & Technology

    1999-09-27

    performing SQL generation and result-set binding, inserting acquired security events into the database and gathering the requested data for Console scene...objects is also auto-generated by a VBA script. Built into the auto-generated table access objects are the preferred join paths between tables. This...much of the Server itself) never have to deal with SQL directly. This is one aspect of laying the groundwork for supporting RDBMSs from multiple vendors

  5. System for real-time generation of georeferenced terrain models

    NASA Astrophysics Data System (ADS)

    Schultz, Howard J.; Hanson, Allen R.; Riseman, Edward M.; Stolle, Frank; Zhu, Zhigang; Hayward, Christopher D.; Slaymaker, Dana

    2001-02-01

    A growing number of law enforcement applications, especially in the areas of border security, drug enforcement, and anti-terrorism, require high-resolution wide area surveillance from unmanned air vehicles. At the University of Massachusetts we are developing an aerial reconnaissance system capable of generating high resolution, geographically registered terrain models (in the form of a seamless mosaic) in real-time from a single down-looking digital video camera. The efficiency of the processing algorithms, as well as the simplicity of the hardware, will provide the user with the ability to produce and roam through stereoscopic geo-referenced mosaic images in real-time, and to automatically generate highly accurate 3D terrain models offline in a fraction of the time currently required by conventional softcopy photogrammetry systems. The system is organized around a set of integrated sensor and software components. The instrumentation package comprises several inexpensive commercial-off-the-shelf components, including a digital video camera, a differential GPS, and a 3-axis heading and reference system. At the heart of the system is a set of software tools for image registration, mosaic generation, geo-location and aircraft state vector recovery. Each process is designed to efficiently handle the data collected by the instrument package. Particular attention is given to minimizing geospatial errors at each stage, as well as modeling propagation of errors through the system. Preliminary results for an urban and forested scene are discussed in detail.

  6. High resolution observations of low contrast phenomena from an Advanced Geosynchronous Platform (AGP)

    NASA Technical Reports Server (NTRS)

    Maxwell, M. S.

    1984-01-01

    Present technology allows radiometric monitoring of the Earth, ocean and atmosphere from a geosynchronous platform with good spatial, spectral and temporal resolution. The proposed system could provide a capability for multispectral remote sensing with a 50 m nadir spatial resolution in the visible bands, 250 m in the 4 micron band and 1 km in the 11 micron thermal infrared band. The diffraction limited telescope has a 1 m aperture, a 10 m focal length (with a shorter focal length in the infrared) and linear and area arrays of detectors. The diffraction limited resolution applies to scenes of any brightness, but for dark, low-contrast scenes the good signal-to-noise ratio of the system contributes to the observation capability. The capabilities of the AGP system are assessed for quantitative observations of ocean scenes. Instrument and ground system configurations are presented and projected sensor capabilities are analyzed.

  7. Scene analysis for a breadboard Mars robot functioning in an indoor environment

    NASA Technical Reports Server (NTRS)

    Levine, M. D.

    1973-01-01

    The problem of computer perception in an indoor laboratory environment containing rocks of various sizes is dealt with. The sensory data processing is required for the NASA/JPL breadboard mobile robot, a test system for an adaptive, variably-autonomous vehicle that will conduct scientific explorations on the surface of Mars. Scene analysis is discussed in terms of object segmentation followed by feature extraction, which results in a representation of the scene in the robot's world model.

  8. Analytical techniques for the study of some parameters of multispectral scanner systems for remote sensing

    NASA Technical Reports Server (NTRS)

    Wiswell, E. R.; Cooper, G. R. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. The concept of average mutual information in the received spectral random process about the spectral scene was developed. Techniques amenable to implementation on a digital computer were also developed to make the required average mutual information calculations. These techniques required identification of models for the spectral response process of scenes. Stochastic modeling techniques were adapted for use. These techniques were demonstrated on empirical data from wheat and vegetation scenes.
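
    For reference, the average mutual information between the spectral scene process S and the received process X has the standard form below; this is the textbook definition and its jointly Gaussian special case, not formulas quoted from the report.

        I(S;X) = \iint p(s,x)\,\log \frac{p(s,x)}{p(s)\,p(x)} \, ds \, dx

        % For a jointly Gaussian scene S observed as X = S + N, with
        % independent Gaussian noise N:
        I(S;X) = \tfrac{1}{2}\,\log \frac{\det(\Sigma_S + \Sigma_N)}{\det(\Sigma_N)}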

  9. Improving GEOS-5 seven day forecast skill by assimilation of quality controlled AIRS temperature profiles

    NASA Astrophysics Data System (ADS)

    Susskind, J.; Rosenberg, R. I.

    2016-12-01

    The GEOS-5 Data Assimilation System (DAS) generates a global analysis every six hours by combining the previous six hour forecast for that time period with contemporaneous observations. These observations include in-situ observations as well as those taken by satellite borne instruments, such as AIRS/AMSU on EOS Aqua and CrIS/ATMS on S-NPP. Operational data assimilation methodology assimilates observed channel radiances Ri for IR sounding instruments such as AIRS and CrIS, but only for those channels i in a given scene whose radiances are thought to be unaffected by clouds. A limitation of this approach is that radiances in most tropospheric sounding channels are affected by clouds under partial cloud cover conditions, which occurs most of the time. The AIRS Science Team Version-6 retrieval algorithm generates cloud cleared radiances (CCR's) for each channel in a given scene, which represent the radiances AIRS would have observed if the scene were cloud free, and then uses them to determine quality controlled (QC'd) temperature profiles T(p) under all cloud conditions. There are potential advantages to assimilate either AIRS QC'd CCR's or QC'd T(p) instead of Ri in that the spatial coverage of observations is greater under partial cloud cover. We tested these two alternate data assimilation approaches by running three parallel data assimilation experiments over different time periods using GEOS-5. Experiment 1 assimilated all observations as done operationally, Experiment 2 assimilated QC'd values of AIRS CCRs in place of AIRS radiances, and Experiment 3 assimilated QC'd values of T(p) in place of observed radiances. Assimilation of QC'd AIRS T(p) resulted in significant improvement in seven day forecast skill compared to assimilation of CCR's or assimilation of observed radiances, especially in the Southern Hemisphere Extra-tropics.

  10. Structured Light Based 3d Scanning for Specular Surface by the Combination of Gray Code and Phase Shifting

    NASA Astrophysics Data System (ADS)

    Zhang, Yujia; Yilmaz, Alper

    2016-06-01

    Surface reconstruction using coded structured light is considered one of the most reliable techniques for high-quality 3D scanning. With a calibrated projector-camera stereo system, a light pattern is projected onto the scene and imaged by the camera. Correspondences between projected and recovered patterns are computed in the decoding process, which is used to generate the 3D point cloud of the surface. However, indirect illumination effects on the surface, such as subsurface scattering and interreflections, raise difficulties in reconstruction. In this paper, we apply the maximum min-SW gray code to reduce the indirect illumination effects on specular surfaces. We also analyze the errors of the maximum min-SW gray code compared with the conventional gray code, showing that the maximum min-SW gray code is significantly superior in reducing indirect illumination effects. To achieve sub-pixel accuracy, we project high frequency sinusoidal patterns onto the scene simultaneously. But for specular surfaces, the high frequency patterns are susceptible to decoding errors, and incorrect decoding of high frequency patterns results in a loss of depth resolution. Our method resolves this problem by combining the low frequency maximum min-SW gray code and the high frequency phase shifting code, which achieves dense 3D reconstruction for specular surfaces. Our contributions include: (i) a complete setup of the structured light based 3D scanning system; (ii) a novel combination technique of the maximum min-SW gray code and phase shifting code -- first, phase shifting decoding provides sub-pixel accuracy; then, the maximum min-SW gray code is used to resolve the period ambiguity. According to the experimental results and data analysis, our structured light based 3D scanning system enables high quality dense reconstruction of scenes with a small number of images. Qualitative and quantitative comparisons are performed to demonstrate the advantages of the new combined coding method.
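
    A compact sketch of the combined decoding step, assuming N-step sinusoidal phase shifting and already-binarized gray-code images as inputs; the least-squares phase estimator and the gray-to-binary conversion are standard, but the data layout and step count are assumptions for illustration.

        import numpy as np

        def decode_structured_light(phase_imgs, gray_bits):
            """Phase shifting gives sub-pixel wrapped phase; the gray-code
            period index unwraps it into absolute phase per pixel."""
            I = np.stack(phase_imgs, axis=0).astype(float)   # shape (N, H, W)
            N = I.shape[0]
            deltas = 2 * np.pi * np.arange(N) / N
            # Least-squares wrapped phase for N-step sinusoidal patterns.
            num = np.tensordot(np.sin(deltas), I, axes=1)
            den = np.tensordot(np.cos(deltas), I, axes=1)
            wrapped = np.arctan2(-num, den)                  # in (-pi, pi]
            # Gray code -> binary -> integer period index per pixel.
            bits = np.stack(gray_bits, axis=0).astype(np.uint8)  # (B, H, W), 0/1
            binary = bits.copy()
            for b in range(1, bits.shape[0]):
                binary[b] = binary[b - 1] ^ bits[b]          # gray-to-binary
            period = np.zeros(bits.shape[1:], dtype=int)
            for b in range(binary.shape[0]):
                period = (period << 1) | binary[b]
            # Absolute phase: coarse period index unwraps the fine phase.
            return wrapped + 2 * np.pi * period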

  11. Evaluation methodology for query-based scene understanding systems

    NASA Astrophysics Data System (ADS)

    Huster, Todd P.; Ross, Timothy D.; Culbertson, Jared L.

    2015-05-01

    In this paper, we are proposing a method for the principled evaluation of scene understanding systems in a query-based framework. We can think of a query-based scene understanding system as a generalization of typical sensor exploitation systems where instead of performing a narrowly defined task (e.g., detect, track, classify, etc.), the system can perform general user-defined tasks specified in a query language. Examples of this type of system have been developed as part of DARPA's Mathematics of Sensing, Exploitation, and Execution (MSEE) program. There is a body of literature on the evaluation of typical sensor exploitation systems, but the open-ended nature of the query interface introduces new aspects to the evaluation problem that have not been widely considered before. In this paper, we state the evaluation problem and propose an approach to efficiently learn about the quality of the system under test. We consider the objective of the evaluation to be to build a performance model of the system under test, and we rely on the principles of Bayesian experiment design to help construct and select optimal queries for learning about the parameters of that model.
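
    As one textbook instantiation of that idea -- not the MSEE evaluation's actual model -- the sketch below scores candidate query types by the mutual information between the next binary pass/fail outcome and a Beta-distributed success rate (a BALD-style expected information gain). The query names and posterior counts are hypothetical.

        import numpy as np
        from scipy.special import digamma

        def bernoulli_entropy(p):
            p = np.clip(p, 1e-12, 1 - 1e-12)
            return -(p * np.log(p) + (1 - p) * np.log(1 - p))

        def expected_info_gain(a, b):
            """I(theta; y) for the next binary outcome y under a Beta(a, b)
            posterior on the success rate theta (entropies in nats)."""
            p_pred = a / (a + b)
            # E[theta log theta] and E[(1-theta) log(1-theta)] under Beta(a, b).
            e_t_log_t = p_pred * (digamma(a + 1) - digamma(a + b + 1))
            e_1t_log_1t = (b / (a + b)) * (digamma(b + 1) - digamma(a + b + 1))
            # I = H(predictive) - E[H(y | theta)].
            return bernoulli_entropy(p_pred) + e_t_log_t + e_1t_log_1t

        # Pick the query type whose outcome is most informative about the model.
        posteriors = {"detect": (9, 3), "count": (2, 2), "relations": (1, 4)}
        best_query = max(posteriors, key=lambda q: expected_info_gain(*posteriors[q]))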

  12. The Effect of Cumulus Cloud Field Anisotropy on Domain-Averaged Solar Fluxes and Atmospheric Heating Rates

    NASA Technical Reports Server (NTRS)

    Hinkelman, Laura M.; Evans, K. Franklin; Clothiaux, Eugene E.; Ackerman, Thomas P.; Stackhouse, Paul W., Jr.

    2006-01-01

    Cumulus clouds can become tilted or elongated in the presence of wind shear. Nevertheless, most studies of the interaction of cumulus clouds and radiation have assumed these clouds to be isotropic. This paper describes an investigation of the effect of fair-weather cumulus cloud field anisotropy on domain-averaged solar fluxes and atmospheric heating rate profiles. A stochastic field generation algorithm was used to produce 20 three-dimensional liquid water content fields based on the statistical properties of cloud scenes from a large eddy simulation. Progressively greater degrees of x-z plane tilting and horizontal stretching were imposed on each of these scenes, so that an ensemble of scenes was produced for each level of distortion. The resulting scenes were used as input to a three-dimensional Monte Carlo radiative transfer model. Domain-average transmission, reflection, and absorption of broadband solar radiation were computed for each scene along with the average heating rate profile. Both tilt and horizontal stretching were found to significantly affect calculated fluxes, with the amount and sign of flux differences depending strongly on sun position relative to cloud distortion geometry. The mechanisms by which anisotropy interacts with solar fluxes were investigated by comparisons to independent pixel approximation and tilted independent pixel approximation computations for the same scenes. Cumulus anisotropy was found to most strongly impact solar radiative transfer by changing the effective cloud fraction, i.e., the cloud fraction when the field is projected on a surface perpendicular to the direction of the incident solar beam.

  13. Generalized pipeline for preview and rendering of synthetic holograms

    NASA Astrophysics Data System (ADS)

    Pappu, Ravikanth; Sparrell, Carlton J.; Underkoffler, John S.; Kropp, Adam B.; Chen, Benjie; Plesniak, Wendy J.

    1997-04-01

    We describe a general pipeline for the computation and display of either fully-computed holograms or holographic stereograms using the same 3D database. A rendering previewer on a Silicon Graphics Onyx allows a user to specify viewing geometry, database transformations, and scene lighting. The previewer then generates one of two descriptions of the object--a series of perspective views or a polygonal model--which is then used by a fringe rendering engine to compute fringes specific to hologram type. The images are viewed on the second generation MIT Holographic Video System. This allows a viewer to compare holographic stereograms with fully-computed holograms originating from the same database and comes closer to the goal of a single pipeline being able to display the same data in different formats.

  14. Design and fabrication of an autonomous rendezvous and docking sensor using off-the-shelf hardware

    NASA Technical Reports Server (NTRS)

    Grimm, Gary E.; Bryan, Thomas C.; Howard, Richard T.; Book, Michael L.

    1991-01-01

    NASA Marshall Space Flight Center (MSFC) has developed and tested an engineering model of an automated rendezvous and docking sensor system composed of a video camera ringed with laser diodes at two wavelengths and a standard remote manipulator system target that has been modified with retro-reflective tape and 830 and 780 nm optical filters. TRW has provided additional engineering analysis, design, and manufacturing support, resulting in a robust, low cost, automated rendezvous and docking sensor design. We have addressed the issue of space qualification using off-the-shelf hardware components. We have also addressed the performance problems of increased signal to noise ratio, increased range, increased frame rate, graceful degradation through component redundancy, and improved range calibration. Next year, we will build a breadboard of this sensor. The phenomenology of the background scene of a target vehicle as viewed against earth and space backgrounds under various lighting conditions will be simulated using the TRW Dynamic Scene Generator Facility (DSGF). Solar illumination angles of the target vehicle and candidate docking target ranging from eclipse to full sun will be explored. The sensor will be transportable for testing at the MSFC Flight Robotics Laboratory (EB24) using the Dynamic Overhead Telerobotic Simulator (DOTS).

  15. A panoramic coded aperture gamma camera for radioactive hotspots localization

    NASA Astrophysics Data System (ADS)

    Paradiso, V.; Amgarou, K.; Blanc De Lanaute, N.; Schoepff, V.; Amoyal, G.; Mahe, C.; Beltramello, O.; Liénard, E.

    2017-11-01

    A known disadvantage of the coded aperture imaging approach is its limited field-of-view (FOV), which often proves insufficient when analysing complex dismantling scenes such as post-accidental scenarios, where multiple measurements are needed to fully characterize the scene. In order to overcome this limitation, a panoramic coded aperture γ-camera prototype has been developed. The system is based on a 1 mm thick CdTe detector directly bump-bonded to a Timepix readout chip, developed by the Medipix2 collaboration (256 × 256 pixels, 55 μm pitch, 14.08 × 14.08 mm2 sensitive area). A MURA pattern coded aperture is used, allowing for background subtraction without the use of heavy shielding. This system is then combined with a USB color camera. The output of each measurement is a semi-spherical image covering a FOV of 360 degrees horizontally and 80 degrees vertically, rendered in spherical coordinates (θ, φ). The geometrical shapes of the radiation-emitting objects are preserved by first registering and stitching the optical images captured by the prototype and then applying the same transformations to the corresponding radiation images. Panoramic gamma images generated using the technique proposed in this paper are described and discussed, along with the main experimental results obtained in laboratory campaigns.
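
    For orientation, coded-aperture reconstruction is conventionally a periodic cross-correlation of the detector image with a decoding array derived from the MURA mask; the sketch below shows that step only. The mask rank, geometry, and FFT-based correlation are assumptions, not details from the paper.

        import numpy as np

        def mura_decode(recorded, aperture):
            """Reconstruct the object scene from a coded-aperture detector
            image by periodic cross-correlation with the decoding array."""
            # Standard MURA decoding array: +1 where the mask is open,
            # -1 where closed (with the corner element set to +1).
            G = np.where(aperture > 0, 1.0, -1.0)
            G[0, 0] = 1.0
            # Periodic cross-correlation via FFT: O = R (*) G.
            F = np.fft.fft2(recorded)
            H = np.fft.fft2(G)
            return np.real(np.fft.ifft2(F * np.conj(H)))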

  16. Push-Pull Receptive Field Organization and Synaptic Depression: Mechanisms for Reliably Encoding Naturalistic Stimuli in V1

    PubMed Central

    Kremkow, Jens; Perrinet, Laurent U.; Monier, Cyril; Alonso, Jose-Manuel; Aertsen, Ad; Frégnac, Yves; Masson, Guillaume S.

    2016-01-01

    Neurons in the primary visual cortex are known for responding vigorously but with high variability to classical stimuli such as drifting bars or gratings. By contrast, natural scenes are encoded more efficiently by sparse and temporally precise spiking responses. We used a conductance-based model of the visual system in higher mammals to investigate how two specific features of the thalamo-cortical pathway, namely push-pull receptive field organization and fast synaptic depression, can contribute to this contextual reshaping of V1 responses. By comparing cortical dynamics evoked respectively by natural vs. artificial stimuli in a comprehensive parametric space analysis, we demonstrate that the reliability and sparseness of the spiking responses during natural vision is not a mere consequence of the increased bandwidth in the sensory input spectrum. Rather, it results from the combined impacts of fast synaptic depression and push-pull inhibition, the latter acting for natural scenes as a form of “effective” feed-forward inhibition as demonstrated in other sensory systems. Thus, the combination of feedforward-like inhibition with fast thalamo-cortical synaptic depression by simple cells receiving a direct structured input from thalamus constitutes a generic computational mechanism for generating a sparse and reliable encoding of natural sensory events. PMID:27242445

  17. Application of LC and LCoS in Multispectral Polarized Scene Projector (MPSP)

    NASA Astrophysics Data System (ADS)

    Yu, Haiping; Guo, Lei; Wang, Shenggang; Lippert, Jack; Li, Le

    2017-02-01

    A Multispectral Polarized Scene Projector (MPSP) has been developed in the short-wave infrared (SWIR) regime for the test & evaluation (T&E) of spectro-polarimetric imaging sensors. This MPSP generates multispectral and hyperspectral video images (up to 200 Hz) with 512×512 spatial resolution, with active spatial, spectral, and polarization modulation at controlled bandwidth. It projects input SWIR radiant intensity scenes from stored memory with user-selectable wavelength and bandwidth, as well as polarization states (six different states) controllable on a pixel level. The spectral contents are implemented by a tunable filter with variable bandpass built from liquid crystal (LC) material, together with one passive visible and one passive SWIR cholesteric liquid crystal (CLC) notch filter, and one switchable CLC notch filter. The core of the MPSP hardware is the liquid-crystal-on-silicon (LCoS) spatial light modulators (SLMs) for intensity control and polarization modulation.

  18. The New LOTIS Test Facility

    NASA Technical Reports Server (NTRS)

    Bell, R. M.; Cuzner, G.; Eugeni, C.; Hutchison, S. B.; Merrick, A. J.; Robins, G. C.; Bailey, S. H.; Ceurden, B.; Hagen, J.; Kenagy, K.; et al.

    2008-01-01

    The Large Optical Test and Integration Site (LOTIS) at the Lockheed Martin Space Systems Company in Sunnyvale, CA is designed for the verification and testing of optical systems. The facility consists of an 88-foot temperature-stabilized vacuum chamber that also functions as a class 10k vertical-flow cleanroom. Many problems were encountered in the design and construction phases; industry capability to build large chambers is very limited. Through many delays and extra engineering effort, the final product is very good. With 11 Thermal Conditioning Units and precision RTDs, temperature is uniform and stable within 1°F, providing an ideal environment for precision optical testing. Within this chamber and atop an advanced micro-g vibration-isolation bench is the 6.5 meter diameter LOTIS Collimator and Scene Generator, along with LOTIS alignment and support equipment. The optical payloads are also placed on the vibration bench in the chamber for testing. This optical system is designed to operate in both air and vacuum, providing test imagery in an adaptable suite of visible/near infrared (VNIR) and midwave infrared (MWIR) point sources, and combined-bandwidth visible-through-MWIR point sources, for testing of large aperture optical payloads. The heart of the system is the LOTIS Collimator, a 6.5 m f/15 telescope, which projects scenes with wavefront errors <85 nm rms out to a 0.75 mrad field of view (FOV). Using field lenses, performance can be extended to a maximum field of view of 3.2 mrad. The LOTIS Collimator incorporates an extensive integrated wavefront sensing and control system to verify the performance of the system.

  19. The Neural Dynamics of Attentional Selection in Natural Scenes.

    PubMed

    Kaiser, Daniel; Oosterhof, Nikolaas N; Peelen, Marius V

    2016-10-12

    The human visual system can only represent a small subset of the many objects present in cluttered scenes at any given time, such that objects compete for representation. Despite these processing limitations, the detection of object categories in cluttered natural scenes is remarkably rapid. How does the brain efficiently select goal-relevant objects from cluttered scenes? In the present study, we used multivariate decoding of magneto-encephalography (MEG) data to track the neural representation of within-scene objects as a function of top-down attentional set. Participants detected categorical targets (cars or people) in natural scenes. The presence of these categories within a scene was decoded from MEG sensor patterns by training linear classifiers on differentiating cars and people in isolation and testing these classifiers on scenes containing one of the two categories. The presence of a specific category in a scene could be reliably decoded from MEG response patterns as early as 160 ms, despite substantial scene clutter and variation in the visual appearance of each category. Strikingly, we find that these early categorical representations fully depend on the match between visual input and top-down attentional set: only objects that matched the current attentional set were processed to the category level within the first 200 ms after scene onset. A sensor-space searchlight analysis revealed that this early attention bias was localized to lateral occipitotemporal cortex, reflecting top-down modulation of visual processing. These results show that attention quickly resolves competition between objects in cluttered natural scenes, allowing for the rapid neural representation of goal-relevant objects. Efficient attentional selection is crucial in many everyday situations. For example, when driving a car, we need to quickly detect obstacles, such as pedestrians crossing the street, while ignoring irrelevant objects. How can humans efficiently perform such tasks, given the multitude of objects contained in real-world scenes? Here we used multivariate decoding of magnetoencephalography data to characterize the neural underpinnings of attentional selection in natural scenes with high temporal precision. We show that brain activity quickly tracks the presence of objects in scenes, but crucially only for those objects that were immediately relevant for the participant. These results provide evidence for fast and efficient attentional selection that mediates the rapid detection of goal-relevant objects in real-world environments. Copyright © 2016 the authors 0270-6474/16/3610522-07$15.00/0.

  20. Improving the Accuracy of Direct Geo-referencing of Smartphone-Based Mobile Mapping Systems Using Relative Orientation and Scene Geometric Constraints.

    PubMed

    Alsubaie, Naif M; Youssef, Ahmed A; El-Sheimy, Naser

    2017-09-30

    This paper introduces a new method which facilitates the use of smartphones as a handheld low-cost mobile mapping system (MMS). Smartphones are becoming increasingly sophisticated and are quickly closing the gap between computers and portable tablet devices. The current generation of smartphones is equipped with low-cost GPS receivers, high-resolution digital cameras, and micro-electro mechanical systems (MEMS)-based navigation sensors (e.g., accelerometers, gyroscopes, magnetic compasses, and barometers). These sensors are in fact the essential components of an MMS. However, smartphone navigation sensors suffer from the poor accuracy of the Global Navigation Satellite System (GNSS), accumulated drift, and high signal noise. These issues affect the accuracy of the initial Exterior Orientation Parameters (EOPs) that are input into the bundle adjustment algorithm, which then produces inaccurate 3D mapping solutions. This paper proposes new methodologies for increasing the accuracy of direct geo-referencing of smartphones using relative orientation and smartphone motion sensor measurements as well as integrating geometric scene constraints into free network bundle adjustment. The new methodologies incorporate fusing the relative orientations of the captured images and their corresponding motion sensor measurements to improve the initial EOPs. Then, the geometric features (e.g., horizontal and vertical linear lines) visible in each image are extracted and used as constraints in the bundle adjustment procedure, correcting the relative position and orientation of the 3D mapping solution.

  1. Improving the Accuracy of Direct Geo-referencing of Smartphone-Based Mobile Mapping Systems Using Relative Orientation and Scene Geometric Constraints

    PubMed Central

    Alsubaie, Naif M.; Youssef, Ahmed A.; El-Sheimy, Naser

    2017-01-01

    This paper introduces a new method which facilitates the use of smartphones as a handheld low-cost mobile mapping system (MMS). Smartphones are becoming increasingly sophisticated and are quickly closing the gap between computers and portable tablet devices. The current generation of smartphones is equipped with low-cost GPS receivers, high-resolution digital cameras, and micro-electro mechanical systems (MEMS)-based navigation sensors (e.g., accelerometers, gyroscopes, magnetic compasses, and barometers). These sensors are in fact the essential components of an MMS. However, smartphone navigation sensors suffer from the poor accuracy of the Global Navigation Satellite System (GNSS), accumulated drift, and high signal noise. These issues affect the accuracy of the initial Exterior Orientation Parameters (EOPs) that are input into the bundle adjustment algorithm, which then produces inaccurate 3D mapping solutions. This paper proposes new methodologies for increasing the accuracy of direct geo-referencing of smartphones using relative orientation and smartphone motion sensor measurements as well as integrating geometric scene constraints into free network bundle adjustment. The new methodologies incorporate fusing the relative orientations of the captured images and their corresponding motion sensor measurements to improve the initial EOPs. Then, the geometric features (e.g., horizontal and vertical linear lines) visible in each image are extracted and used as constraints in the bundle adjustment procedure, correcting the relative position and orientation of the 3D mapping solution. PMID:28973958

  2. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A videotape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the database.
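
    A rough illustration of the transmitted content described above -- an edge map of the background with full-resolution target windows embedded -- composed here with OpenCV. The Canny thresholds and box format are arbitrary choices, and the sketch omits the rule-based priority controller and the 1 Hz / 7.5 Hz update scheduling.

        import cv2
        import numpy as np

        def compress_frame(frame_gray, target_boxes):
            """Compose the reconstructed image: cheap edge-map background
            plus full-resolution priority target windows."""
            edges = cv2.Canny(frame_gray, 50, 150)      # background edge map
            out = edges.copy()
            for (x, y, w, h) in target_boxes:           # priority target windows
                out[y:y + h, x:x + w] = frame_gray[y:y + h, x:x + w]
            return out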

  3. Cortical Representations of Speech in a Multitalker Auditory Scene.

    PubMed

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects. Copyright © 2017 the authors 0270-6474/17/379189-08$15.00/0.

  4. Effects of chromatic image statistics on illumination induced color differences.

    PubMed

    Lucassen, Marcel P; Gevers, Theo; Gijsenij, Arjan; Dekker, Niels

    2013-09-01

    We measure the color fidelity of visual scenes that are rendered under different (simulated) illuminants and shown on a calibrated LCD display. Observers make triad illuminant comparisons involving the renderings from two chromatic test illuminants and one achromatic reference illuminant shown simultaneously. Four chromatic test illuminants are used: two along the daylight locus (yellow and blue), and two perpendicular to it (red and green). The observers select the rendering having the best color fidelity, thereby indirectly judging which of the two test illuminants induces the smallest color differences compared to the reference. Both multicolor test scenes and natural scenes are studied. The multicolor scenes are synthesized and represent ellipsoidal distributions in CIELAB chromaticity space having the same mean chromaticity but different chromatic orientations. We show that, for those distributions, color fidelity is best when the vector of the illuminant change (pointing from neutral to chromatic) is parallel to the major axis of the scene's chromatic distribution. For our selection of natural scenes, which generally have much broader chromatic distributions, we measure a higher color fidelity for the yellow and blue illuminants than for red and green. Scrambled versions of the natural images are also studied to exclude possible semantic effects. We quantitatively predict the average observer response (i.e., the illuminant probability) with four types of models, differing in the extent to which they incorporate information processing by the visual system. Results show different levels of performance for the models, and different levels for the multicolor scenes and the natural scenes. Overall, models based on the scene averaged color difference have the best performance. We discuss how color constancy algorithms may be improved by exploiting knowledge of the chromatic distribution of the visual scene.

  5. Large Area Scene Selection Interface (LASSI). Methodology of Selecting Landsat Imagery for the Global Land Survey 2005

    NASA Technical Reports Server (NTRS)

    Franks, Shannon; Masek, Jeffrey G.; Headley, Rachel M.; Gasch, John; Arvidson, Terry

    2009-01-01

    The Global Land Survey (GLS) 2005 is a cloud-free, orthorectified collection of Landsat imagery acquired during the 2004-2007 epoch intended to support global land-cover and ecological monitoring. Due to the numerous complexities in selecting imagery for the GLS2005, NASA and the U.S. Geological Survey (USGS) sponsored the development of an automated scene selection tool, the Large Area Scene Selection Interface (LASSI), to aid in the selection of imagery for this data set. This innovative approach to scene selection applied a user-defined weighting system to various scene parameters: image cloud cover, image vegetation greenness, choice of sensor, and the ability of the Landsat 7 Scan Line Corrector (SLC)-off pair to completely fill image gaps, among others. The parameters considered in scene selection were weighted according to their relative importance to the data set, along with the algorithm's sensitivity to that weight. This paper describes the methodology and analysis that established the parameter weighting strategy, as well as the post-screening processes used in selecting the optimal data set for GLS2005.
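
    A hypothetical weighted-sum scoring of candidate acquisitions in the spirit of LASSI; the parameter names, scalings, and weights below are invented for illustration, since the paper defines its own parameter set and weighting strategy.

        def scene_score(scene, weights):
            """Score one candidate acquisition from user-weighted parameters;
            higher is better. All fields below are illustrative stand-ins."""
            return (weights["cloud"]    * (1.0 - scene["cloud_cover"])   # less cloud is better
                  + weights["green"]    * scene["greenness"]             # NDVI-like, 0..1
                  + weights["sensor"]   * scene["sensor_preference"]     # sensor choice term
                  + weights["gap_fill"] * scene["slc_off_pair_fill"])    # fraction of gaps filled

        candidates = [
            {"cloud_cover": 0.05, "greenness": 0.8, "sensor_preference": 1.0, "slc_off_pair_fill": 1.0},
            {"cloud_cover": 0.30, "greenness": 0.9, "sensor_preference": 0.5, "slc_off_pair_fill": 0.7},
        ]
        w = {"cloud": 0.4, "green": 0.3, "sensor": 0.1, "gap_fill": 0.2}
        best = max(candidates, key=lambda s: scene_score(s, w))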

  6. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features.

    PubMed

    Li, Linyi; Xu, Tingbao; Chen, Yun

    2017-01-01

    In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracted visual attention features through a multiscale process. A fuzzy classification method using visual attention features (FC-VAF) was then developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated by using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the reference methods according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images.

  7. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features

    PubMed Central

    Xu, Tingbao; Chen, Yun

    2017-01-01

    In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracted visual attention features through a multiscale process. A fuzzy classification method using visual attention features (FC-VAF) was then developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated by using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the reference methods according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images. PMID:28761440

  8. Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach.

    PubMed

    Liu, Mengyun; Chen, Ruizhi; Li, Deren; Chen, Yujin; Guo, Guangyi; Cao, Zhipeng; Pan, Yuanjin

    2017-12-08

    After decades of research, there is still no solution for indoor localization like the GNSS (Global Navigation Satellite System) solution for outdoor environments. The major reasons for this phenomenon are the complex spatial topology and RF transmission environment. To deal with these problems, an indoor scene constrained method for localization is proposed in this paper, which is inspired by the visual cognition ability of the human brain and the progress in the computer vision field regarding high-level image understanding. Furthermore, a multi-sensor fusion method is implemented on a commercial smartphone including cameras, WiFi and inertial sensors. Compared to former research, the camera on a smartphone is used to "see" which scene the user is in. With this information, a particle filter algorithm constrained by scene information is adopted to determine the final location. For indoor scene recognition, we take advantage of deep learning that has been proven to be highly effective in the computer vision community. For the particle filter, both WiFi and magnetic field signals are used to update the weights of particles. Similar to other fingerprinting localization methods, there are two stages in the proposed system, offline training and online localization. In the offline stage, an indoor scene model is trained by Caffe (one of the most popular open source frameworks for deep learning) and a fingerprint database is constructed from user trajectories in different scenes. To reduce the volume of training data required for deep learning, a fine-tuning method is adopted for model training. In the online stage, a camera in a smartphone is used to recognize the initial scene. Then a particle filter algorithm is used to fuse the sensor data and determine the final location. To prove the effectiveness of the proposed method, an Android client and a web server are implemented. The Android client is used to collect data and locate a user. The web server is developed for indoor scene model training and communication with an Android client. To evaluate the performance, comparison experiments are conducted and the results demonstrate that a positioning accuracy of 1.32 m at 95% is achievable with the proposed solution. Both positioning accuracy and robustness are enhanced compared to approaches without scene constraint, including commercial products such as IndoorAtlas.
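
    A minimal sketch of the fingerprint-based particle weight update described above, assuming a `fingerprint(pos)` database lookup and Gaussian noise models for WiFi RSS and magnetic magnitude; the sigmas and likelihood form are illustrative assumptions, not the paper's.

        import numpy as np

        def update_weights(particles, weights, wifi_obs, mag_obs, fingerprint,
                           sigma_wifi=6.0, sigma_mag=2.0):
            """Multiply each particle's weight by the likelihood of the
            observed WiFi RSS vector (dBm) and magnetic magnitude (uT) given
            the fingerprint stored at the particle's position."""
            for i, pos in enumerate(particles):
                wifi_ref, mag_ref = fingerprint(pos)     # database lookup at pos
                lw = np.exp(-np.sum((wifi_obs - wifi_ref) ** 2)
                            / (2 * sigma_wifi ** 2))
                lm = np.exp(-(mag_obs - mag_ref) ** 2 / (2 * sigma_mag ** 2))
                weights[i] *= lw * lm
            weights /= np.sum(weights)                   # renormalize
            return weights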

  9. Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach

    PubMed Central

    Chen, Ruizhi; Li, Deren; Chen, Yujin; Guo, Guangyi; Cao, Zhipeng

    2017-01-01

    After decades of research, there is still no solution for indoor localization like the GNSS (Global Navigation Satellite System) solution for outdoor environments. The major reasons for this phenomenon are the complex spatial topology and RF transmission environment. To deal with these problems, an indoor scene constrained method for localization is proposed in this paper, which is inspired by the visual cognition ability of the human brain and the progress in the computer vision field regarding high-level image understanding. Furthermore, a multi-sensor fusion method is implemented on a commercial smartphone including cameras, WiFi and inertial sensors. Compared to former research, the camera on a smartphone is used to “see” which scene the user is in. With this information, a particle filter algorithm constrained by scene information is adopted to determine the final location. For indoor scene recognition, we take advantage of deep learning that has been proven to be highly effective in the computer vision community. For the particle filter, both WiFi and magnetic field signals are used to update the weights of particles. Similar to other fingerprinting localization methods, there are two stages in the proposed system, offline training and online localization. In the offline stage, an indoor scene model is trained by Caffe (one of the most popular open source frameworks for deep learning) and a fingerprint database is constructed from user trajectories in different scenes. To reduce the volume of training data required for deep learning, a fine-tuning method is adopted for model training. In the online stage, a camera in a smartphone is used to recognize the initial scene. Then a particle filter algorithm is used to fuse the sensor data and determine the final location. To prove the effectiveness of the proposed method, an Android client and a web server are implemented. The Android client is used to collect data and locate a user. The web server is developed for indoor scene model training and communication with an Android client. To evaluate the performance, comparison experiments are conducted and the results demonstrate that a positioning accuracy of 1.32 m at 95% is achievable with the proposed solution. Both positioning accuracy and robustness are enhanced compared to approaches without scene constraint, including commercial products such as IndoorAtlas. PMID:29292761

  10. Exocentric direction judgements in computer-generated displays and actual scenes

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Smith, Stephen; Mcgreevy, Michael W.; Grunwald, Arthur J.

    1989-01-01

    One of the most remarkable perceptual properties of common experience is that the perceived shapes of known objects are constant despite movements about them which transform their projections on the retina. This perceptual ability is one aspect of shape constancy (Thouless, 1931; Metzger, 1953; Borresen and Lichte, 1962). It requires that the viewer be able to sense and discount his or her relative position and orientation with respect to a viewed object. This discounting of relative position may be derived directly from the ranging information provided from stereopsis, from motion parallax, from vestibularly sensed rotation and translation, or from corollary information associated with voluntary movement. It is argued that: (1) errors in exocentric judgements of the azimuth of a target generated on an electronic perspective display are not viewpoint-independent, but are influenced by the specific geometry of their perspective projection; (2) elimination of binocular conflict by replacing electronic displays with actual scenes eliminates a previously reported equidistance tendency in azimuth error, but the viewpoint dependence remains; (3) the pattern of exocentrically judged azimuth error in real scenes viewed with a viewing direction depressed 22 deg and rotated ±22 deg with respect to a reference direction could not be explained by overestimation of the depression angle, i.e., a slant overestimation.

  11. Tachistoscopic illumination and masking of real scenes.

    PubMed

    Chichka, David; Philbeck, John W; Gajewski, Daniel A

    2015-03-01

    Tachistoscopic presentation of scenes has been valuable for studying the emerging properties of visual scene representations. The spatial aspects of this work have generally been focused on the conceptual locations (e.g., next to the refrigerator) and directional locations of objects in 2-D arrays and/or images. Less is known about how the perceived egocentric distance of objects develops. Here we describe a novel system for presenting brief glimpses of a real-world environment, followed by a mask. The system includes projectors with mechanical shutters for projecting the fixation and masking images, a set of LED floodlights for illuminating the environment, and computer-controlled electronics to set the timing and initiate the process. Because a real environment is used, most visual distance and depth cues can be manipulated using traditional methods. The system is inexpensive, robust, and its components are readily available in the marketplace. This article describes the system and the timing characteristics of each component. We verified the system's ability to control exposure to time scales as low as a few milliseconds.

  12. 4D light-field sensing system for people counting

    NASA Astrophysics Data System (ADS)

    Hou, Guangqi; Zhang, Chi; Wang, Yunlong; Sun, Zhenan

    2016-03-01

    Counting the number of people is still an important task in social security applications, and a few methods based on video surveillance have been proposed in recent years. In this paper, we design a novel optical sensing system to directly acquire the depth map of the scene from one light-field camera. The light-field sensing system can count the number of people crossing the passageway, and it records the direction and intensity of rays at a snapshot without any auxiliary light devices. Depth maps are extracted from the raw light-ray sensing data. Our smart sensing system is equipped with a passive imaging sensor, which is able to naturally discern the depth difference between the head and shoulders of each person. A human model is then built. By detecting the human model in light-field images, the number of people passing the scene can be counted rapidly. We verify the feasibility and accuracy of the sensing system by capturing real-world scenes with single and multiple people passing, under natural illumination.
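
    The head/shoulder depth cue lends itself to a simple illustration. The sketch below treats people as local minima in an overhead depth map (nearer objects have smaller depth values); the window size and depth margin are assumptions, not the authors' detection pipeline.

      # Toy sketch of the head/shoulder cue: heads sit nearer an overhead
      # light-field camera than shoulders, so people appear as local depth
      # minima. Parameters are illustrative assumptions.
      from scipy.ndimage import label, minimum_filter

      def count_people(depth, head_margin=0.25, window=15):
          """depth: 2-D map from the light-field camera (smaller = nearer).
          Returns an estimated head count."""
          local_min = (depth == minimum_filter(depth, size=window))
          heads = local_min & (depth < depth.mean() - head_margin)
          labeled, n = label(heads)   # merge adjacent head pixels into blobs
          return n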

  13. Steering and positioning targets for HWIL IR testing at cryogenic conditions

    NASA Astrophysics Data System (ADS)

    Perkes, D. W.; Jensen, G. L.; Higham, D. L.; Lowry, H. S.; Simpson, W. R.

    2006-05-01

    In order to increase the fidelity of hardware-in-the-loop ground-truth testing, it is desirable to create a dynamic scene of multiple, independently controlled IR point sources. ATK-Mission Research has developed and supplied the steering mirror systems for the 7V and 10V Space Simulation Test Chambers at the Arnold Engineering Development Center (AEDC), Air Force Materiel Command (AFMC). A portion of the 10V system incorporates multiple target sources beam-combined at the focal point of a 20K cryogenic collimator. Each IR source consists of a precision blackbody with cryogenic aperture and filter wheels mounted on a cryogenic two-axis translation stage. This point source target scene is steered by a high-speed steering mirror to produce further complex motion. The scene changes dynamically in order to simulate an actual operational scene as viewed by the System Under Test (SUT) as it executes various dynamic look-direction changes during its flight to a target. Synchronization and real-time hardware-in-the-loop control is accomplished using reflective memory for each subsystem control and feedback loop. This paper focuses on the steering mirror system and the required tradeoffs of optical performance, precision, repeatability and high-speed motion as well as the complications of encoder feedback calibration and operation at 20K.

  14. An approach for brain-controlled prostheses based on Scene Graph Steady-State Visual Evoked Potentials.

    PubMed

    Li, Rui; Zhang, Xiaodong; Li, Hanzhe; Zhang, Liming; Lu, Zhufeng; Chen, Jiangcheng

    2018-08-01

    Brain control technology can restore communication between the brain and a prosthesis, and choosing a Brain-Computer Interface (BCI) paradigm to evoke electroencephalogram (EEG) signals is an essential step in developing this technology. In this paper, a Scene Graph paradigm for controlling prostheses is proposed; the paradigm is based on Steady-State Visual Evoked Potentials (SSVEPs) elicited by a Scene Graph of the subject's intention. A mathematical model was built to predict SSVEPs evoked by the proposed paradigm, and a sinusoidal stimulation method was used to present the Scene Graph stimulus and elicit SSVEPs from subjects. A 2-degree-of-freedom (2-DOF) brain-controlled prosthesis system was then constructed to validate the performance of the Scene Graph-SSVEP (SG-SSVEP)-based BCI. The classification of SG-SSVEPs was detected via the Canonical Correlation Analysis (CCA) approach. To assess the efficiency of the proposed BCI system, its performance was compared with that of a traditional SSVEP-BCI system. Experimental results from six subjects suggested that the proposed system effectively enhanced the SSVEP responses, decreased the degradation of SSVEP strength, and reduced visual fatigue in comparison with the traditional SSVEP-BCI system. The average signal-to-noise ratio (SNR) of SG-SSVEP was 6.31 ± 2.64 dB, versus 3.38 ± 0.78 dB for the traditional SSVEP. In addition, the proposed system achieved good performance in prosthesis control: the average accuracy was 94.58% ± 7.05%, and the corresponding information transfer rate (ITR) was 19.55 ± 3.07 bit/min. The experimental results revealed that the SG-SSVEP-based BCI system achieves good performance and improved stability relative to the conventional approach. Copyright © 2018 Elsevier B.V. All rights reserved.
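
    The CCA detection step named in the abstract is a standard technique and can be sketched as follows: each EEG epoch is correlated against sine/cosine reference templates at every candidate stimulation frequency, and the best-correlated frequency wins. The frequencies, harmonic count and channel layout here are illustrative, not the paper's settings.

      # Standard CCA-based SSVEP frequency detection (sketch).
      import numpy as np
      from sklearn.cross_decomposition import CCA

      def ssvep_classify(eeg, fs, freqs, n_harmonics=2):
          """eeg: (n_samples, n_channels) epoch; fs: sampling rate in Hz;
          freqs: candidate stimulation frequencies. Returns winning index."""
          t = np.arange(eeg.shape[0]) / fs
          scores = []
          for f in freqs:
              # Sine/cosine references for f and its harmonics
              ref = np.column_stack(
                  [fn(2 * np.pi * f * h * t)
                   for h in range(1, n_harmonics + 1)
                   for fn in (np.sin, np.cos)])
              cca = CCA(n_components=1)
              u, v = cca.fit_transform(eeg, ref)
              scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
          return int(np.argmax(scores))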

  15. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  16. Computer-Generated Movies for Mission Planning

    NASA Technical Reports Server (NTRS)

    Roberts, P. H., Jr.; vanDillen, S. L.

    1973-01-01

    Computer-generated movies help the viewer to understand mission dynamics and get quantitative details. Sample movie frames demonstrate the uses and effectiveness of movies in mission planning. Tools needed for movie-making include computer programs to generate images on film and film processing to give the desired result. Planning scenes to make an effective product requires some thought and experience. Viewpoints and timing are particularly important. Lessons learned so far and problems still encountered are discussed.

  17. Human ocular responses to translation of the observer and of the scene: dependence on viewing distance.

    PubMed

    Busettini, C; Miles, F A; Schwarz, U; Carl, J R

    1994-01-01

    Recent experiments on monkeys have indicated that the eye movements induced by brief translation of either the observer or the visual scene are a linear function of the inverse of the viewing distance. For the movements of the observer, the room was dark and responses were attributed to a translational vestibulo-ocular reflex (TVOR) that senses the motion through the otolith organs; for the movements of the scene, which elicit ocular following, the scene was projected and adjusted in size and speed so that the retinal stimulation was the same at all distances. The shared dependence on viewing distance was consistent with the hypothesis that the TVOR and ocular following are synergistic and share central pathways. The present experiments looked for such dependencies on viewing distance in human subjects. When briefly accelerated along the interaural axis in the dark, human subjects generated compensatory eye movements that were also a linear function of the inverse of the viewing distance to a previously fixated target. These responses, which were attributed to the TVOR, were somewhat weaker than those previously recorded from monkeys using similar methods. When human subjects faced a tangent screen onto which patterned images were projected, brief motion of those images evoked ocular following responses that showed statistically significant dependence on viewing distance only with low-speed stimuli (10 degrees/s). This dependence was at best weak and in the reverse direction of that seen with the TVOR, i.e., responses increased as viewing distance increased. We suggest that in generating an internal estimate of viewing distance subjects may have used a confounding cue in the ocular-following paradigm--the size of the projected scene--which was varied directly with the viewing distance in these experiments (in order to preserve the size of the retinal image). When movements of the subject were randomly interleaved with the movements of the scene--to encourage the expectation of ego-motion--the dependence of ocular following on viewing distance altered significantly: with higher speed stimuli (40 degrees/s) many responses (63%) now increased significantly as viewing distance decreased, though less vigorously than the TVOR. We suggest that the expectation of motion results in the subject placing greater weight on cues such as vergence and accommodation that provide veridical distance information in our experimental situation: cue selection is context specific.

  18. The what, where and how of auditory-object perception.

    PubMed

    Bizley, Jennifer K; Cohen, Yale E

    2013-10-01

    The fundamental perceptual unit in hearing is the 'auditory object'. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood.

  19. The what, where and how of auditory-object perception

    PubMed Central

    Bizley, Jennifer K.; Cohen, Yale E.

    2014-01-01

    The fundamental perceptual unit in hearing is the ‘auditory object’. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood. PMID:24052177

  20. Situational awareness for unmanned ground vehicles in semi-structured environments

    NASA Astrophysics Data System (ADS)

    Goodsell, Thomas G.; Snorrason, Magnus; Stevens, Mark R.

    2002-07-01

    Situational Awareness (SA) is a critical component of effective autonomous vehicles, reducing operator workload and allowing an operator to command multiple vehicles or simultaneously perform other tasks. Our Scene Estimation & Situational Awareness Mapping Engine (SESAME) provides SA for mobile robots in semi-structured scenes, such as parking lots and city streets. SESAME autonomously builds volumetric models for scene analysis. For example, a SESAME-equipped robot can build a low-resolution 3-D model of a row of cars, then approach a specific car and build a high-resolution model from a few stereo snapshots. The model can be used onboard to determine the type of car and locate its license plate, or the model can be segmented out and sent back to an operator who can view it from different viewpoints. As new views of the scene are obtained, the model is updated and changes are tracked (such as cars arriving or departing). Since the robot's position must be accurately known, SESAME also has automated techniques for determining the position and orientation of the camera (and hence, robot) with respect to existing maps. This paper presents an overview of the SESAME architecture and algorithms, including our model generation algorithm.

  1. A Super Voxel-Based Riemannian Graph for Multi-Scale Segmentation of LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Li, Minglei

    2018-04-01

    Automatically segmenting LiDAR points into respective independent partitions has become a topic of great importance in photogrammetry, remote sensing and computer vision. In this paper, we cast the problem of point cloud segmentation as a graph optimization problem by constructing a Riemannian graph. The scale space of the observed scene is explored by an octree-based over-segmentation with different depths. The over-segmentation produces many super voxels which restrict the structure of the scene and will be used as nodes of the graph. The Kruskal coordinates are used to compute edge weights that are proportional to the geodesic distance between nodes. Then we compute the edge-weight matrix in which the elements reflect the sectional curvatures associated with the geodesic paths between super voxel nodes on the scene surface. The final segmentation results are generated by clustering similar super voxels and cutting off the weak edges in the graph. The performance of this method was evaluated on LiDAR point clouds for both indoor and outdoor scenes. Additionally, extensive comparisons to state-of-the-art techniques show that our algorithm outperforms them on many metrics.
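
    The final clustering step (cutting weak edges and grouping similar super voxels) can be illustrated with a generic cut-by-threshold sketch over a sparse graph; the weight threshold and graph representation are assumptions, not the paper's exact procedure.

      # Generic sketch: cut weak edges, then take connected components
      # of the remaining super-voxel graph as segments.
      import numpy as np
      from scipy.sparse import csr_matrix
      from scipy.sparse.csgraph import connected_components

      def segment(n_nodes, edges, weights, w_min):
          """edges: (m, 2) int array of super-voxel index pairs;
          weights: (m,) edge similarities. Edges weaker than w_min
          are cut; remaining components are the segments."""
          keep = weights >= w_min
          i, j = edges[keep, 0], edges[keep, 1]
          w = weights[keep]
          g = csr_matrix((w, (i, j)), shape=(n_nodes, n_nodes))
          n_seg, labels = connected_components(g, directed=False)
          return n_seg, labels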

  2. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    PubMed Central

    Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu

    2017-01-01

    Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important and common in many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges, and the pixel-wise ground truth of foreground (FG) for each frame is also provided. A series of experiments were conducted to evaluate BS algorithms on this proposed dataset. The overall performance of BS algorithms and the processor/memory requirements were compared. Proper evaluation metrics and criteria were employed to evaluate the capability of each BS algorithm to handle the different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide useful references for developing new BS algorithms for remote scene IR video sequences; some of them are not limited to remote scene or IR video but are generic to background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112
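
    As a point of reference for the class of algorithms evaluated, a minimal running-average background subtraction baseline looks like the following; the learning rate and threshold are arbitrary illustrative values, not settings from the paper.

      # Minimal running-average background subtraction baseline.
      import numpy as np

      def bs_step(frame, bg, lr=0.02, thresh=25.0):
          """frame, bg: float grayscale arrays of equal shape.
          Returns (foreground mask, updated background model)."""
          fg = np.abs(frame - bg) > thresh      # threshold the difference
          bg = (1 - lr) * bg + lr * frame       # slowly absorb the frame
          return fg, bg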

  3. The Incongruency Advantage for Environmental Sounds Presented in Natural Auditory Scenes

    PubMed Central

    Gygi, Brian; Shafiro, Valeriy

    2011-01-01

    The effect of context on the identification of common environmental sounds (e.g., dogs barking or cars honking) was tested by embedding them in familiar auditory background scenes (street ambience, restaurants). Initial results with subjects trained on both the scenes and the sounds to be identified showed a significant advantage of about 5 percentage points better accuracy for sounds that were contextually incongruous with the background scene (e.g., a rooster crowing in a hospital). Further studies with naïve (untrained) listeners showed that this Incongruency Advantage (IA) is level-dependent: there is no advantage for incongruent sounds lower than a Sound/Scene ratio (So/Sc) of −7.5 dB, but there is about 5 percentage points better accuracy for sounds with greater So/Sc. Testing a new group of trained listeners on a larger corpus of sounds and scenes showed that the effect is robust and not confined to a specific stimulus set. Modeling using spectral-temporal measures showed that neither analyses based on acoustic features nor semantic assessments of sound-scene congruency can account for this difference, indicating the Incongruency Advantage is a complex effect, possibly arising from the sensitivity of the auditory system to new and unexpected events, under particular listening conditions. PMID:21355664

  4. The Southampton-York Natural Scenes (SYNS) dataset: Statistics of surface attitude

    PubMed Central

    Adams, Wendy J.; Elder, James H.; Graf, Erich W.; Leyland, Julian; Lugtigheid, Arthur J.; Muryy, Alexander

    2016-01-01

    Recovering 3D scenes from 2D images is an under-constrained task; optimal estimation depends upon knowledge of the underlying scene statistics. Here we introduce the Southampton-York Natural Scenes dataset (SYNS: https://syns.soton.ac.uk), which provides comprehensive scene statistics useful for understanding biological vision and for improving machine vision systems. In order to capture the diversity of environments that humans encounter, scenes were surveyed at random locations within 25 indoor and outdoor categories. Each survey includes (i) spherical LiDAR range data, (ii) high-dynamic-range spherical imagery, and (iii) a panorama of stereo image pairs. We envisage many uses for the dataset and present one example: an analysis of surface attitude statistics, conditioned on scene category and viewing elevation. Surface normals were estimated using a novel adaptive scale selection algorithm. Across categories, surface attitude below the horizon is dominated by the ground plane (0° tilt). Near the horizon, probability density is elevated at 90°/270° tilt due to vertical surfaces (trees, walls). Above the horizon, probability density is elevated near 0° slant due to overhead structure such as ceilings and leaf canopies. These structural regularities represent potentially useful prior assumptions for human and machine observers, and may predict human biases in perceived surface attitude. PMID:27782103
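
    Surface attitude estimation of the kind described can be illustrated with plain PCA on a local patch of range samples; the paper uses a novel adaptive scale selection, so the fixed neighborhood here is only a generic stand-in.

      # Generic PCA surface-normal estimate for a local range-data patch.
      import numpy as np

      def patch_normal(points):
          """points: (n, 3) 3-D samples of a local surface patch.
          Returns the unit normal (direction of least variance)."""
          centered = points - points.mean(axis=0)
          _, _, vt = np.linalg.svd(centered, full_matrices=False)
          n = vt[-1]                     # singular vector of smallest value
          return n / np.linalg.norm(n)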

  5. System implications of the ambulance arrival-to-patient contact interval on response interval compliance.

    PubMed

    Campbell, J P; Gratton, M C; Salomone, J A; Lindholm, D J; Watson, W A

    1994-01-01

    In some emergency medical services (EMS) system designs, response time intervals are mandated with monetary penalties for noncompliance. These times are set with the goal of providing rapid, definitive patient care. The time interval of vehicle at scene-to-patient access (VSPA) has been measured, but its effect on response time interval compliance has not been determined. The objective was to determine the effect of the VSPA interval on the mandated code 1 (< 9 min) and code 2 (< 13 min) response time interval compliance in an urban, public-utility model system. A prospective, observational study used independent third-party riders to collect the VSPA interval for emergency life-threatening (code 1) and emergency nonlife-threatening (code 2) calls. The VSPA interval was added to the 9-1-1 call-to-dispatch and vehicle dispatch-to-scene intervals to determine the total time interval from call received until paramedic access to the patient (9-1-1 call-to-patient access). Compliance with the mandated response time intervals was determined using the traditional time intervals (9-1-1 call-to-scene) plus the VSPA time intervals (9-1-1 call-to-patient access). Chi-square was used to determine statistical significance. Of the 216 observed calls, 198 were matched to the traditional time intervals. Sixty-three were code 1, and 135 were code 2. Of the code 1 calls, 90.5% were compliant using 9-1-1 call-to-scene intervals, dropping to 63.5% using 9-1-1 call-to-patient access intervals (p < 0.0005). Of the code 2 calls, 94.1% were compliant using 9-1-1 call-to-scene intervals. Compliance decreased to 83.7% using 9-1-1 call-to-patient access intervals (p = 0.012). The addition of the VSPA interval to the traditional time intervals impacts system response time compliance. Using 9-1-1 call-to-scene compliance as a basis for measuring system performance underestimates the time for the delivery of definitive care. This must be considered when response time interval compliances are defined.

  6. Hebbian learning in a model with dynamic rate-coded neurons: an alternative to the generative model approach for learning receptive fields from natural scenes.

    PubMed

    Hamker, Fred H; Wiltschut, Jan

    2007-09-01

    Most computational models of coding are based on a generative model according to which the feedback signal aims to reconstruct the visual scene as closely as possible. We here explore an alternative model of feedback. It is derived from studies of attention and is thus probably more flexible with respect to attentive processing in higher brain areas. According to this model, feedback implements a gain increase of the feedforward signal. We use a dynamic model with presynaptic inhibition and Hebbian learning to simultaneously learn feedforward and feedback weights. The weights converge to localized, oriented, and bandpass filters similar to those found in V1. Due to presynaptic inhibition, the model predicts the organization of receptive fields within the feedforward pathway, whereas feedback primarily serves to tune early visual processing according to the needs of the task.
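
    A toy version of the learning-rule family described (a Hebbian co-activity update with presynaptic inhibition and weight normalization) might look like the following; the paper's exact dynamics, constants and normalization differ, so treat this purely as a sketch.

      # Toy Hebbian step with a crude divisive stand-in for
      # presynaptic inhibition; all constants are assumptions.
      import numpy as np

      def hebbian_step(W, x, lr=0.01):
          """W: (n_units, n_inputs) feedforward weights; x: (n_inputs,)."""
          x_n = x / (1.0 + x.sum())          # divisive presynaptic inhibition
          y = W @ x_n                        # feedforward responses
          W = W + lr * np.outer(y, x_n)      # Hebbian co-activity update
          W /= np.linalg.norm(W, axis=1, keepdims=True) + 1e-12  # bound rows
          return W, y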

  7. Modeling Images of Natural 3D Surfaces: Overview and Potential Applications

    NASA Technical Reports Server (NTRS)

    Jalobeanu, Andre; Kuehnel, Frank; Stutz, John

    2004-01-01

    Generative models of natural images have long been used in computer vision. However, since they only describe the statistics of 2D scenes, they fail to capture all the properties of the underlying 3D world. Even though such models are sufficient for many vision tasks, a 3D scene model is needed when it comes to inferring a 3D object or its characteristics. In this paper, we present such a generative model, incorporating both a multiscale surface prior model for surface geometry and reflectance, and an image formation process model based on realistic rendering. We focus on the computation of the posterior model parameter densities and on the critical aspects of the rendering. We also show how to efficiently invert the model within a Bayesian framework. We present a few potential applications, such as asteroid modeling and planetary topography recovery, illustrated by promising results on real images.

  8. Terrain detection and classification using single polarization SAR

    DOEpatents

    Chow, James G.; Koch, Mark W.

    2016-01-19

    The various technologies presented herein relate to identifying manmade and/or natural features in a radar image. Two radar images (e.g., single polarization SAR images) can be captured for a common scene. The first image is captured at a first instance and the second image at a second instance, whereby the interval between captures is long enough that temporal decorrelation occurs for natural surfaces in the scene and only manmade surfaces, e.g., a road, produce correlated pixels. An LCCD image comprising the correlated and decorrelated pixels can be generated from the two radar images. A median image can be generated from a plurality of radar images, whereby any features in the median image can be identified. A superpixel operation can be performed on the LCCD image and the median image, thereby enabling a feature(s) in the LCCD image to be classified.
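
    The correlated/decorrelated pixel distinction at the heart of the patent rests on interferometric coherence between the two acquisitions. A conventional windowed coherence estimate, with an assumed window size, is sketched below; it is a textbook formulation, not the patented processing chain.

      # Windowed complex coherence between two co-registered SAR images.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def coherence(s1, s2, win=7):
          """s1, s2: complex SAR images of a common scene.
          Returns coherence magnitude in [0, 1]; manmade surfaces
          stay near 1, temporally decorrelated natural ones drop."""
          cross = s1 * np.conj(s2)
          num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
          den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win)
                        * uniform_filter(np.abs(s2) ** 2, win))
          return np.abs(num) / np.maximum(den, 1e-12)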

  9. Generalized parallel-perspective stereo mosaics from airborne video.

    PubMed

    Zhu, Zhigang; Hanson, Allen R; Riseman, Edward M

    2004-02-01

    In this paper, we present a new method for automatically and efficiently generating stereoscopic mosaics by seamless registration of images collected by a video camera mounted on an airborne platform. Using a parallel-perspective representation, a pair of geometrically registered stereo mosaics can be precisely constructed under quite general motion. A novel parallel ray interpolation for stereo mosaicing (PRISM) approach is proposed to make stereo mosaics seamless in the presence of obvious motion parallax and for rather arbitrary scenes. Parallel-perspective stereo mosaics generated with the PRISM method have better depth resolution than perspective stereo due to the adaptive baseline geometry. Moreover, unlike previous results showing that parallel-perspective stereo has a constant depth error, we conclude that the depth estimation error of stereo mosaics is in fact a linear function of the absolute depths of a scene. Experimental results on long video sequences are given.
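
    The linear-error claim can be made concrete with the standard triangulation error model (focal length f, baseline B, disparity d, disparity quantization error \(\Delta d\)); substituting an adaptive baseline \(B = kZ\), as the parallel-perspective geometry suggests, is an idealization for illustration rather than the paper's derivation:

      \[
      Z = \frac{fB}{d}
      \quad\Rightarrow\quad
      \Delta Z \approx \frac{Z^{2}}{fB}\,\Delta d
      \;\xrightarrow{\;B \,=\, kZ\;}\;
      \Delta Z \approx \frac{Z}{fk}\,\Delta d ,
      \]

    i.e., the depth error grows linearly, rather than quadratically, with absolute depth, consistent with the abstract's conclusion.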

  10. Photogrammetry and remote sensing for visualization of spatial data in a virtual reality environment

    NASA Astrophysics Data System (ADS)

    Bhagawati, Dwipen

    2001-07-01

    Researchers in many disciplines have started using the tool of Virtual Reality (VR) to gain new insights into problems in their respective disciplines. Recent advances in computer graphics, software, and hardware technologies have created many opportunities for VR systems, advanced scientific and engineering applications among them. In Geometronics, photogrammetry and remote sensing are generally used for the management of spatial data inventories, and VR technology can suitably support this task. This research demonstrates the usefulness of VR technology for inventory management by taking roadside features as a case study. Management of a roadside feature inventory involves positioning and visualization of the features. This research developed a methodology to demonstrate how photogrammetric principles can be used to position the features using video-logging images and GPS camera positioning, and how image analysis can help produce appropriate texture for building the VR scene, which can then be visualized in a Cave Augmented Virtual Environment (CAVE). VR modeling was implemented in two stages to demonstrate different approaches to modeling the VR scene. A simulated highway scene was implemented with the brute-force approach, while modeling software was used to model the real-world scene using feature positions produced in this research. The first approach demonstrates an implementation of the scene by writing C++ code that includes a multi-level wand menu enabling the user to interact with the scene. The interactions include editing the features inside the CAVE display, navigating inside the scene, and performing limited geographic analysis. The second approach demonstrates creation of a VR scene for a real roadway environment using feature positions determined in this research. The scene looks realistic, with textures from the real site mapped onto the geometry of the scene. Remote sensing and digital image processing techniques were used for texturing the roadway features in this scene.

  11. A Neural Dynamic Architecture for Reaching and Grasping Integrates Perception and Movement Generation and Enables On-Line Updating.

    PubMed

    Knips, Guido; Zibner, Stephan K U; Reimann, Hendrik; Schöner, Gregor

    2017-01-01

    Reaching for objects and grasping them is a fundamental skill for any autonomous robot that interacts with its environment. Although this skill seems trivial to adults, who effortlessly pick up even objects they have never seen before, it is hard for other animals, for human infants, and for most autonomous robots. At any time during movement preparation and execution, human reaching movements are updated if the visual scene changes (with a delay of about 100 ms). The capability for online updating highlights how tightly perception, movement planning, and movement generation are integrated in humans. Here, we report on an effort to reproduce this tight integration in a neural dynamic process model of reaching and grasping that covers the complete path from visual perception to movement generation within a unified modeling framework, Dynamic Field Theory. All requisite processes are realized as time-continuous dynamical systems that model the evolution in time of neural population activation. Population level neural processes bring about the attentional selection of objects, the estimation of object shape and pose, and the mapping of pose parameters to suitable movement parameters. Once a target object has been selected, its pose parameters couple into the neural dynamics of movement generation so that changes of pose are propagated through the architecture to update the performed movement online. Implementing the neural architecture on an anthropomorphic robot arm equipped with a Kinect sensor, we evaluate the model by grasping wooden objects. Their size, shape, and pose are estimated from a neural model of scene perception that is based on feature fields. The sequential organization of a reach and grasp act emerges from a sequence of dynamic instabilities within a neural dynamics of behavioral organization that effectively switches the neural controllers from one phase of the action to the next. Trajectory formation itself is driven by a dynamical systems version of the potential field approach. We highlight the emergent capacity for online updating by showing that a shift or rotation of the object during the reaching phase leads to the online adaptation of the movement plan and successful completion of the grasp.

  12. A Neural Dynamic Architecture for Reaching and Grasping Integrates Perception and Movement Generation and Enables On-Line Updating

    PubMed Central

    Knips, Guido; Zibner, Stephan K. U.; Reimann, Hendrik; Schöner, Gregor

    2017-01-01

    Reaching for objects and grasping them is a fundamental skill for any autonomous robot that interacts with its environment. Although this skill seems trivial to adults, who effortlessly pick up even objects they have never seen before, it is hard for other animals, for human infants, and for most autonomous robots. At any time during movement preparation and execution, human reaching movements are updated if the visual scene changes (with a delay of about 100 ms). The capability for online updating highlights how tightly perception, movement planning, and movement generation are integrated in humans. Here, we report on an effort to reproduce this tight integration in a neural dynamic process model of reaching and grasping that covers the complete path from visual perception to movement generation within a unified modeling framework, Dynamic Field Theory. All requisite processes are realized as time-continuous dynamical systems that model the evolution in time of neural population activation. Population level neural processes bring about the attentional selection of objects, the estimation of object shape and pose, and the mapping of pose parameters to suitable movement parameters. Once a target object has been selected, its pose parameters couple into the neural dynamics of movement generation so that changes of pose are propagated through the architecture to update the performed movement online. Implementing the neural architecture on an anthropomorphic robot arm equipped with a Kinect sensor, we evaluate the model by grasping wooden objects. Their size, shape, and pose are estimated from a neural model of scene perception that is based on feature fields. The sequential organization of a reach and grasp act emerges from a sequence of dynamic instabilities within a neural dynamics of behavioral organization that effectively switches the neural controllers from one phase of the action to the next. Trajectory formation itself is driven by a dynamical systems version of the potential field approach. We highlight the emergent capacity for online updating by showing that a shift or rotation of the object during the reaching phase leads to the online adaptation of the movement plan and successful completion of the grasp. PMID:28303100
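
    The time-continuous population dynamics underlying Dynamic Field Theory can be sketched with a one-dimensional Amari-style field; the constants and interaction kernel below are illustrative assumptions, not the architecture's tuned values.

      # One Euler step of a 1-D Amari-style dynamic neural field.
      import numpy as np

      def field_step(u, s, dt=0.01, tau=0.1, h=-2.0, kernel=None):
          """u: (n,) field activation; s: (n,) external input;
          h: resting level. Returns the updated field."""
          if kernel is None:
              x = np.arange(u.size) - u.size // 2
              # Local excitation with broad inhibition (assumed shape)
              kernel = 2.0 * np.exp(-x ** 2 / 50.0) - 0.5
          f = 1.0 / (1.0 + np.exp(-u))                 # sigmoid output
          interaction = np.convolve(f, kernel, mode='same')
          return u + dt / tau * (-u + h + s + interaction)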

  13. Functional neuroanatomy of intuitive physical inference

    PubMed Central

    Fischer, Jason; Mikhael, John G.; Tenenbaum, Joshua B.; Kanwisher, Nancy

    2016-01-01

    To engage with the world—to understand the scene in front of us, plan actions, and predict what will happen next—we must have an intuitive grasp of the world’s physical structure and dynamics. How do the objects in front of us rest on and support each other, how much force would be required to move them, and how will they behave when they fall, roll, or collide? Despite the centrality of physical inferences in daily life, little is known about the brain mechanisms recruited to interpret the physical structure of a scene and predict how physical events will unfold. Here, in a series of fMRI experiments, we identified a set of cortical regions that are selectively engaged when people watch and predict the unfolding of physical events—a “physics engine” in the brain. These brain regions are selective to physical inferences relative to nonphysical but otherwise highly similar scenes and tasks. However, these regions are not exclusively engaged in physical inferences per se or, indeed, even in scene understanding; they overlap with the domain-general “multiple demand” system, especially the parts of that system involved in action planning and tool use, pointing to a close relationship between the cognitive and neural mechanisms involved in parsing the physical content of a scene and preparing an appropriate action. PMID:27503892

  14. Functional neuroanatomy of intuitive physical inference.

    PubMed

    Fischer, Jason; Mikhael, John G; Tenenbaum, Joshua B; Kanwisher, Nancy

    2016-08-23

    To engage with the world-to understand the scene in front of us, plan actions, and predict what will happen next-we must have an intuitive grasp of the world's physical structure and dynamics. How do the objects in front of us rest on and support each other, how much force would be required to move them, and how will they behave when they fall, roll, or collide? Despite the centrality of physical inferences in daily life, little is known about the brain mechanisms recruited to interpret the physical structure of a scene and predict how physical events will unfold. Here, in a series of fMRI experiments, we identified a set of cortical regions that are selectively engaged when people watch and predict the unfolding of physical events-a "physics engine" in the brain. These brain regions are selective to physical inferences relative to nonphysical but otherwise highly similar scenes and tasks. However, these regions are not exclusively engaged in physical inferences per se or, indeed, even in scene understanding; they overlap with the domain-general "multiple demand" system, especially the parts of that system involved in action planning and tool use, pointing to a close relationship between the cognitive and neural mechanisms involved in parsing the physical content of a scene and preparing an appropriate action.

  15. Terrain modeling for real-time simulation

    NASA Astrophysics Data System (ADS)

    Devarajan, Venkat; McArthur, Donald E.

    1993-10-01

    There are many applications, such as pilot training, mission rehearsal, and hardware-in-the-loop simulation, which require the generation of realistic images of terrain and man-made objects in real-time. One approach to meeting this requirement is to drape photo-texture over a planar polygon model of the terrain. The real-time system then computes, for each pixel of the output image, the address in a texture map based on the intersection of the line-of-sight vector with the terrain model. High-quality image generation requires that the terrain be modeled with a fine mesh of polygons, while hardware costs limit the number of polygons which may be displayed for each scene. The trade-off between these conflicting requirements must be made in real-time because it depends on the changing position and orientation of the pilot's eye point or simulated sensor. The traditional approach is to develop a database consisting of multiple levels of detail (LOD) and then to select LODs for display as a function of range. This approach can lead both to anomalies in the displayed scene and to inefficient use of resources. An approach has been developed in which the terrain is modeled with a set of nested polygons and organized as a tree with each node corresponding to a polygon. This tree is pruned to select the optimum set of nodes for each eye-point position. As the point of view moves, some nodes drop below the limit of perception and may be deleted, while new nodes must be added in regions near the eye point. An analytical model has been developed to determine the number of polygons required for display. This model leads to quantitative performance measures of the triangulation algorithm, which are useful for optimizing system performance with a limited display capability.
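
    The eye-point-driven pruning described can be sketched as a recursive descent that keeps a node once its projected geometric error falls below a pixel tolerance; the screen-space error test is a common heuristic assumed here, not necessarily the paper's exact criterion.

      # Sketch of eye-point-driven pruning of a nested terrain LOD tree.
      import math

      class Node:
          def __init__(self, center, geom_error, children=()):
              self.center = center          # (x, y, z) of the polygon patch
              self.geom_error = geom_error  # object-space error vs. finer LOD
              self.children = list(children)

      def select_lod(node, eye, px_per_rad, tau=1.0, out=None):
          """Collect nodes whose projected error is below tau pixels."""
          if out is None:
              out = []
          dist = math.dist(node.center, eye)
          screen_err = px_per_rad * node.geom_error / max(dist, 1e-6)
          if screen_err <= tau or not node.children:
              out.append(node)              # coarse enough: render this node
          else:
              for child in node.children:   # refine near the eye point
                  select_lod(child, eye, px_per_rad, tau, out)
          return out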

  16. Remote Sensing of Martian Terrain Hazards via Visually Salient Feature Detection

    NASA Astrophysics Data System (ADS)

    Al-Milli, S.; Shaukat, A.; Spiteri, C.; Gao, Y.

    2014-04-01

    The main objective of the FASTER remote sensing system is the detection of rocks on planetary surfaces by employing models that can efficiently characterise rocks in terms of semantic descriptions. The proposed technique abates some of the algorithmic limitations of existing methods, with no training requirements, lower computational complexity and greater robustness for visual tracking applications over long-distance planetary terrains. Visual saliency models inspired by biological systems help to identify important regions (such as rocks) in the visual scene. Surface rocks are therefore completely described in terms of their local or global conspicuity (pop-out) characteristics. These local and global pop-out cues include (but are not limited to) colour, depth, orientation, curvature, size, luminance intensity, shape and topology. The currently applied methods follow a purely bottom-up strategy of visual attention for selecting conspicuous regions in the visual scene, without any top-down control. Furthermore, the chosen models (tested and evaluated) are relatively fast among the state of the art and have very low computational load. Quantitative evaluation of these state-of-the-art models was carried out using benchmark datasets including the Surrey Space Centre Lab Testbed, Pangu-generated images, and the RAL Space SEEKER and CNES Mars Yard datasets. The analysis indicates that models based on visually salient information in the frequency domain (SRA, SDSR, PQFT) are the best performing ones for detecting rocks in an extra-terrestrial setting. In particular, the SRA model appears to be the best of the lot, especially as it requires the least computational time while keeping errors competitively low. The salient objects extracted using these models can then be merged with the Digital Elevation Models (DEMs) generated from the same navigation cameras and fused into the navigation map, giving a clear indication of rock locations.
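
    Of the frequency-domain models named, the spectral residual approach (SRA) is the simplest to state: saliency is recovered from the difference between the log-amplitude spectrum and its local average. The following is a compact rendering of that published algorithm (Hou & Zhang, 2007); the kernel sizes are conventional choices and the usual downsampling step is omitted.

      # Spectral residual saliency (Hou & Zhang, 2007), compact form.
      import numpy as np
      from scipy.ndimage import gaussian_filter, uniform_filter

      def spectral_residual_saliency(gray):
          """gray: 2-D float image. Returns a saliency map, same size."""
          F = np.fft.fft2(gray)
          log_amp = np.log1p(np.abs(F))          # log1p guards against log(0)
          phase = np.angle(F)
          residual = log_amp - uniform_filter(log_amp, size=3)
          sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
          return gaussian_filter(sal, sigma=2.5)  # smooth the squared map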

  17. CYCLOPS-3 System Research.

    ERIC Educational Resources Information Center

    Marill, Thomas; And Others

    The aim of the CYCLOPS Project research is the development of techniques for allowing computers to perform visual scene analysis, pre-processing of visual imagery, and perceptual learning. Work on scene analysis and learning has previously been described. The present report deals with research on pre-processing and with further work on scene…

  18. An Intelligent Recommendation System for Animation Scriptwriters' Education

    ERIC Educational Resources Information Center

    Tsai, Shang-Te; Chang, Ting-Cheng; Huang, Yu-Feng

    2016-01-01

    Producing an animation requires extensive labor, time, and money. Experienced directors and screenwriters are required to design scenes using standard props and actors in position. This study structurally analyzes the script and defines scenes, characters, positions, dialogue, etc., according to their dramatic attributes. These are entered into a…

  19. High-speed imaging using compressed sensing and wavelength-dependent scattering (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Shin, Jaewook; Bosworth, Bryan T.; Foster, Mark A.

    2017-02-01

    The process of multiple scattering has inherent characteristics that are attractive for high-speed imaging with high spatial resolution and a wide field-of-view. A coherent source passing through a multiple-scattering medium naturally generates speckle patterns with diffraction-limited features over an arbitrarily large field-of-view. In addition, the process of multiple scattering is deterministic, allowing a given speckle pattern to be reliably reproduced with identical illumination conditions. Here, by exploiting wavelength-dependent multiple scattering and compressed sensing, we develop a high-speed 2D time-stretch microscope. Highly chirped pulses from a 90-MHz mode-locked laser are sent through a 2D grating and a ground-glass diffuser to produce 2D speckle patterns that rapidly evolve with the instantaneous frequency of the chirped pulse. To image a scene, we first characterize the high-speed evolution of the generated speckle patterns. Subsequently we project the patterns onto the microscopic region of interest and collect the total light from the scene using a single high-speed photodetector. Thus the wavelength-dependent speckle patterns serve as high-speed pseudorandom structured illumination of the scene. An image sequence is then recovered using the time-dependent signal received by the photodetector, the known speckle pattern evolution, and compressed sensing algorithms. Notably, the use of compressed sensing allows for reconstruction of a time-dependent scene using a highly sub-Nyquist number of measurements, which both increases the speed of the imager and reduces the amount of data that must be collected and stored. We will discuss our experimental demonstration of this approach and the theoretical limits on imaging speed.
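
    The recovery step can be sketched with a generic l1-regularized solver: stacking the characterized speckle patterns as rows of a sensing matrix A, a frame x is estimated from far fewer photodetector samples y = A x than pixels. The sizes, sparsity and regularization weight below are illustrative, and the Lasso solver is a stand-in for whatever compressed sensing algorithm the authors used.

      # Generic l1 recovery for single-detector structured illumination.
      import numpy as np
      from sklearn.linear_model import Lasso

      def recover_frame(A, y, alpha=1e-3):
          """A: (n_meas, n_pixels) speckle patterns; y: (n_meas,) detector
          samples. Returns a sparse estimate of the vectorized frame."""
          solver = Lasso(alpha=alpha, max_iter=5000)
          solver.fit(A, y)
          return solver.coef_

      # Usage in the compressed sensing regime: n_meas << n_pixels.
      rng = np.random.default_rng(0)
      A = rng.standard_normal((200, 1024))        # 200 patterns, 32x32 frame
      x_true = np.zeros(1024)
      x_true[rng.choice(1024, 10, replace=False)] = 1.0
      x_hat = recover_frame(A, A @ x_true)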

  20. Generating descriptive visual words and visual phrases for large-scale image applications.

    PubMed

    Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen

    2011-09-01

    Bag-of-visual Words (BoWs) representation has been applied for various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to the text words. Notwithstanding its great success and wide adoption, the visual vocabulary created from single-image local descriptors is often shown to be not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to the frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed by the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive to certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, are more comparable with the text words than the classic visual words. We apply the identified DVWs and DVPs in several applications including large-scale near-duplicated image retrieval, image search re-ranking, and object recognition. The combination of DVW and DVP performs better than the state of the art in large-scale near-duplicated image retrieval in terms of accuracy, efficiency and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision and is about 11 times faster.
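
    Mining visual phrases as frequently co-occurring visual-word pairs can be illustrated with a simple support count over quantized images; the support threshold and the image-level co-occurrence rule are assumptions standing in for the paper's full framework.

      # Toy mining of co-occurring visual-word pairs ("visual phrases").
      from collections import Counter
      from itertools import combinations

      def mine_phrases(image_words, min_support=50):
          """image_words: list of sets of visual-word ids, one per image.
          Returns pairs co-occurring in at least min_support images."""
          pair_counts = Counter()
          for words in image_words:
              pair_counts.update(combinations(sorted(words), 2))
          return {pair for pair, c in pair_counts.items() if c >= min_support}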

  1. Three-dimensional obstacle classification in laser range data

    NASA Astrophysics Data System (ADS)

    Armbruster, Walter; Bers, Karl-Heinz

    1998-10-01

    The threat of hostile surveillance and weapon systems requires military aircraft to fly under extreme conditions such as low altitude, high speed, poor visibility and incomplete terrain information. The probability of collision with natural and man-made obstacles during such contour missions is high if detection capability is restricted to conventional vision aids. Forward-looking scanning laser rangefinders, which are presently being flight tested and evaluated at German proving grounds, provide a possible solution, having a large field of view, high angular and range resolution, a high pulse repetition rate, and sufficient pulse energy to register returns from wires at over 500 m range (depending on the system) with a high hit-and-detect probability. Despite the efficiency of the sensor, acceptance of current obstacle warning systems by test pilots is not very high, mainly due to the systems' inadequacies in obstacle recognition and visualization. This has motivated the development and testing of more advanced 3D scene analysis algorithms at FGAN-FIM to replace the obstacle recognition component of current warning systems. The basic ideas are to increase the recognition probability and reduce the false alarm rate for hard-to-extract obstacles such as wires by using more readily recognizable objects such as terrain, poles, pylons and trees, and to implement a hierarchical classification procedure that generates a parametric description of the terrain surface as well as the class, position, orientation, size and shape of all objects in the scene. The algorithms can be used for other applications such as terrain following, autonomous obstacle avoidance, and automatic target recognition.

  2. Scene-based nonuniformity corrections for optical and SWIR pushbroom sensors.

    PubMed

    Leathers, Robert; Downes, Trijntje; Priest, Richard

    2005-06-27

    We propose and evaluate several scene-based methods for computing nonuniformity corrections for visible or near-infrared pushbroom sensors. These methods can be used to compute new nonuniformity correction values or to repair or refine existing radiometric calibrations. For a given data set, the preferred method depends on the quality of the data, the type of scenes being imaged, and the existence and quality of a laboratory calibration. We demonstrate our methods with data from several different sensor systems and provide a generalized approach to be taken for any new data set.
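
    One generic scene-based correction of the kind proposed is moment matching along the track: each cross-track detector's temporal mean and standard deviation are equalized to the global statistics. The sketch below is a representative scheme under that assumption, not necessarily any of the paper's specific methods.

      # Moment-matching nonuniformity correction for a pushbroom cube.
      import numpy as np

      def column_nuc(img):
          """img: (n_lines, n_detectors) raw pushbroom image.
          Returns (corrected image, per-detector gain, per-detector offset)."""
          col_mean = img.mean(axis=0)
          col_std = img.std(axis=0)
          # Match each detector's statistics to the array-wide statistics
          gain = col_std.mean() / np.where(col_std > 0, col_std, 1.0)
          offset = col_mean.mean() - gain * col_mean
          return img * gain + offset, gain, offset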

  3. The cognitive structural approach for image restoration

    NASA Astrophysics Data System (ADS)

    Mardare, Igor; Perju, Veacheslav; Casasent, David

    2008-03-01

    The important and timely problem of restoring defective images of scenes is analyzed. The proposed approach provides restoration of scenes by a system that reproduces phenomena of human intelligence used in the restoration and recognition of images. Cognitive models of the restoration process are elaborated. The models are realized by intellectual processors constructed on the basis of neural networks and associative memory, using the Neural Network Toolbox from MATLAB 7.0. The models provide restoration and semantic reconstruction of scene images from defective images of the separate objects.

  4. Orbiting passive microwave sensor simulation applied to soil moisture estimation

    NASA Technical Reports Server (NTRS)

    Newton, R. W. (Principal Investigator); Clark, B. V.; Pitchford, W. M.; Paris, J. F.

    1979-01-01

    A sensor/scene simulation program was developed and used to determine the effects of scene heterogeneity, resolution, frequency, look angle, and surface and temperature relations on the performance of a spaceborne passive microwave system designed to estimate soil water information. The ground scene is based on classified LANDSAT images which provide realistic ground classes, as well as geometries. It was determined that the average sensitivity of antenna temperature to soil moisture improves as the antenna footprint size increases. Also, the precision (or variability) of the sensitivity changes as a function of resolution.

  5. Spectral feature characterization methods for blood stain detection in crime scene backgrounds

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Mathew, Jobin J.; Dube, Roger R.; Messinger, David W.

    2016-05-01

    Blood stains are one of the most important types of evidence in forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Blood spectral signatures containing unique reflectance or absorption features are important both for forensic on-site investigation and laboratory testing. They can be used for target detection and identification applied to crime scene hyperspectral imagery, and also utilized to analyze the spectral variation of blood on various backgrounds. Non-blood stains often mislead the detection and can generate false alarms at a real crime scene, especially on dark and red backgrounds. This paper measured the reflectance of liquid blood and 9 kinds of non-blood samples in the range of 350 nm - 2500 nm in various crime scene backgrounds, such as pure samples contained in petri dishes with various thicknesses, samples mixed with fabrics of different colors and materials, and samples mixed with wood, all of which are examined to provide sub-visual evidence for detecting and recognizing blood against non-blood samples in a realistic crime scene. The spectral differences between blood and non-blood samples are examined, and spectral features such as "peaks" and "depths" of reflectance are selected. Two blood stain detection methods are proposed in this paper. The first method uses an index denoting the ratio of "depth" minus "peak" over "depth" plus "peak" within a wavelength range of the reflectance spectrum. The second method uses the relative band depth of selected wavelength ranges of the reflectance spectrum. Results show that the index method is able to discriminate blood from non-blood samples in most tested crime scene backgrounds, but is not able to detect it on black felt, whereas the relative band depth method is able to discriminate blood from non-blood samples on all of the tested background material types and colors.
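
    The first proposed method reduces to a normalized two-feature index. A sketch under assumed definitions (depth taken as the minimum reflectance and peak as the maximum reflectance inside a chosen wavelength window) follows; the window bounds and any decision threshold are assumptions, not values from the paper.

      # Normalized (depth - peak) / (depth + peak) index over a window.
      import numpy as np

      def blood_index(wavelengths, reflectance, band=(500.0, 700.0)):
          """wavelengths, reflectance: 1-D arrays for one spectrum.
          band: assumed wavelength window in nm. Returns the index."""
          sel = (wavelengths >= band[0]) & (wavelengths <= band[1])
          r = reflectance[sel]
          d, p = r.min(), r.max()      # assumed stand-ins for depth/peak
          return (d - p) / (d + p)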

  6. Negotiating place and gendered violence in Canada's largest open drug scene.

    PubMed

    McNeil, Ryan; Shannon, Kate; Shaver, Laura; Kerr, Thomas; Small, Will

    2014-05-01

    Vancouver's Downtown Eastside is home to Canada's largest street-based drug scene and only supervised injection facility (Insite). High levels of violence among men and women have been documented in this neighbourhood. This study was undertaken to explore the role of violence in shaping the socio-spatial relations of women and 'marginal men' (i.e., those occupying subordinate positions within the drug scene) in the Downtown Eastside, including access to Insite. Semi-structured qualitative interviews were conducted with 23 people who inject drugs (PWID) recruited through the Vancouver Area Network of Drug Users, a local drug user organization. Interviews included a mapping exercise. Interview transcripts and maps were analyzed thematically, with an emphasis on how gendered violence shaped participants' spatial practices. Hegemonic forms of masculinity operating within the Downtown Eastside framed the everyday violence experienced by women and marginal men. This violence shaped the spatial practices of women and marginal men, in that they avoided drug scene milieus where they had experienced violence or that they perceived to be dangerous. Some men linked their spatial restrictions to the perceived 'dope quality' of neighbourhood drug dealers to maintain claims to dominant masculinities while enacting spatial strategies to promote safety. Environmental supports provided by health and social care agencies were critical in enabling women and marginal men to negotiate place and survival within the context of drug scene violence. Access to Insite did not motivate participants to enter into "dangerous" drug scene milieus but they did venture into these areas if necessary to obtain drugs or generate income. Gendered violence is critical in restricting the geographies of men and marginal men within the street-based drug scene. There is a need to scale up existing environmental interventions, including supervised injection services, to minimize violence and potential drug-related risks among these highly-vulnerable PWID. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. NEGOTIATING PLACE AND GENDERED VIOLENCE IN CANADA’S LARGEST OPEN DRUG SCENE

    PubMed Central

    McNeil, Ryan; Shannon, Kate; Shaver, Laura; Kerr, Thomas; Small, Will

    2014-01-01

    Background Vancouver’s Downtown Eastside is home to Canada’s largest street-based drug scene and only supervised injection facility (Insite). High levels of violence among men and women have been documented in this neighbourhood. This study was undertaken to explore the role of violence in shaping the socio-spatial relations of women and ‘marginal men’ (i.e., those occupying subordinate positions within the drug scene) in the Downtown Eastside, including access to Insite. Methods Semi-structured qualitative interviews were conducted with 23 people who inject drugs (PWID) recruited through the Vancouver Area Network of Drug Users, a local drug user organization. Interviews included a mapping exercise. Interview transcripts and maps were analyzed thematically, with an emphasis on how gendered violence shaped participants’ spatial practices. Results Hegemonic forms of masculinity operating within the Downtown Eastside framed the everyday violence experienced by women and marginal men. This violence shaped the spatial practices of women and marginal men, in that they avoided drug scene milieus where they had experienced violence or that they perceived to be dangerous. Some men linked their spatial restrictions to the perceived 'dope quality' of neighbourhood drug dealers to maintain claims to dominant masculinities while enacting spatial strategies to promote safety. Environmental supports provided by health and social care agencies were critical in enabling women and marginal men to negotiate place and survival within the context of drug scene violence. Access to Insite did not motivate participants to enter into “dangerous” drug scene milieus but they did venture into these areas if necessary to obtain drugs or generate income. Conclusion Gendered violence is critical in restricting the geographies of men and marginal men within the street-based drug scene. There is a need to scale up existing environmental interventions, including supervised injection services, to minimize violence and potential drug-related risks among these highly-vulnerable PWID. PMID:24332972

  8. Landsat 3 return beam vidicon response artifacts

    USGS Publications Warehouse

    ,; Clark, B.

    1981-01-01

    The return beam vidicon (RBV) sensing systems employed aboard Landsats 1, 2, and 3 have all been similar in that they have utilized vidicon tube cameras. These are not mirror-sweep scanning devices such as the multispectral scanner (MSS) sensors that have also been carried aboard the Landsat satellites. The vidicons operate more like common television cameras, using an electron gun to read images from a photoconductive faceplate. In the case of Landsats 1 and 2, the RBV system consisted of three such vidicons which collected remote sensing data in three distinct spectral bands. Landsat 3, however, utilizes just two vidicon cameras, both of which sense data in a single broad band. The Landsat 3 RBV system additionally has a unique configuration. As arranged, the two cameras can be shuttered alternately, twice each, in the same time it takes for one MSS scene to be acquired. This shuttering sequence results in four RBV "subscenes" for every MSS scene acquired, similar to the four quadrants of a square. Each subscene represents a ground area of approximately 98 by 98 km. The subscenes are designated A, B, C, and D, for the northwest, northeast, southwest, and southeast quarters of the full scene, respectively. RBV data products are normally ordered, reproduced, and sold on a subscene basis and are in general referred to in this way. Each exposure from the RBV camera system presents an image which is 98 km on a side. When these analog video data are subsequently converted to digital form, the picture element, or pixel, that results is 19 m on a side with an effective resolution element of 30 m. This pixel size is substantially smaller than that obtainable in MSS images (the MSS has an effective resolution element of 73.4 m), and, when RBV images are compared to equivalent MSS images, better resolution in the RBV data is clearly evident. It is for this reason that the RBV system can be a valuable tool for remote sensing of earth resources. Until recently, RBV imagery was processed directly from wideband video tape data onto 70-mm film. This changed in September 1980 when digital production of RBV data at the NASA Goddard Space Flight Center (GSFC) began. The wideband video tape data are now subjected to analog-to-digital preprocessing and corrected both radiometrically and geometrically to produce high-density digital tapes (HDT's). The HDT data are subsequently transmitted via satellite (Domsat) to the EROS Data Center (EDC) where they are used to generate 241-mm photographic images at a scale of 1:500,000. Computer-compatible tapes of the data are also generated as digital products. Of the RBV data acquired since September 1, 1980, approximately 2,800 subscenes per month have been processed at EDC.

  9. Simulation as an Engine of Physical Scene Understanding

    DTIC Science & Technology

    2013-11-05

    critical to the origins of intelligence: Researchers in developmental psychology, language, animal cognition, and artificial intelligence (2–6) consider... implemented computationally in classic artificial intelligence systems (18–20). However, these systems have not attempted to engage with physical scene un...

  10. Coordinate references for the indoor/outdoor seamless positioning

    NASA Astrophysics Data System (ADS)

    Ruan, Ling; Zhang, Ling; Long, Yi; Cheng, Fei

    2018-05-01

    Indoor positioning technologies are developing rapidly, and seamless positioning that connects indoor and outdoor space is a new trend. Indoor and outdoor positioning do not use the same coordinate system, and different indoor positioning scenes use different local coordinate reference systems. A specific, unified coordinate reference frame is needed as the spatial basis and premise of seamless positioning applications. Integrated analysis of indoor and outdoor trajectories likewise requires a uniform coordinate reference. However, a coordinate reference frame for seamless positioning that can be applied to various complex scenarios has long been lacking. In this paper, we propose a universal coordinate reference frame for indoor/outdoor seamless positioning. The research analyses and classifies indoor positioning scenes and puts forward methods for establishing the coordinate reference system and performing coordinate transformations in each scene. Experiments verified the feasibility of the calibration method.
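
    The transformation methods themselves are not given in this abstract. Purely as a hedged illustration of the kind of mapping involved, the sketch below applies a 2D similarity (Helmert) transform to carry points from an indoor local frame into a shared outdoor frame; the function name, scale, rotation, and offset are invented placeholders, not the paper's method.

        import numpy as np

        def local_to_global(points_xy, scale, theta_rad, translation):
            """Map Nx2 local indoor coordinates into a global frame via a
            2D similarity (Helmert) transform: g = s * R(theta) @ p + t."""
            c, s = np.cos(theta_rad), np.sin(theta_rad)
            R = np.array([[c, -s], [s, c]])
            return scale * points_xy @ R.T + np.asarray(translation)

        # Hypothetical calibration: unit scale, 30 deg rotation, (5 m, 12 m) offset.
        pts = np.array([[1.0, 2.0], [3.0, 4.0]])
        print(local_to_global(pts, 1.0, np.deg2rad(30.0), [5.0, 12.0]))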

  11. Comparing synthetic imagery with real imagery for visible signature analysis: human observer results

    NASA Astrophysics Data System (ADS)

    Culpepper, Joanne B.; Richards, Noel; Madden, Christopher S.; Winter, Neal; Wheaton, Vivienne C.

    2017-10-01

    Synthetic imagery could potentially enhance visible signature analysis by providing a wider range of target images in differing environmental conditions than would be feasible to collect in field trials. Achieving this requires a method for generating synthetic imagery that is both verified to be realistic and produces the same visible signature analysis results as real images. Is target detectability as measured by image metrics the same for real images and synthetic images of the same scene? Is target detectability as measured by human observer trials the same for real images and synthetic images of the same scene, and how realistic do the synthetic images need to be? In this paper we present the results of a small-scale exploratory study on the second question: a photosimulation experiment conducted using digital photographs and synthetic images generated of the same scene. Two sets of synthetic images were created: a high-fidelity set created using an image generation tool, E-on Vue, and a low-fidelity set created using a gaming engine, Unity 3D. The target detection results obtained using digital photographs were compared with those obtained using the two sets of synthetic images. There was a moderate correlation between the high-fidelity synthetic image set and the real images in both the probability of correct detection (Pd: PCC = 0.58, SCC = 0.57) and mean search time (MST: PCC = 0.63, SCC = 0.61). There was no correlation between the low-fidelity synthetic image set and the real images for Pd, but a moderate correlation for MST (PCC = 0.67, SCC = 0.55).
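
    For readers unfamiliar with the reported statistics, PCC and SCC denote the Pearson and Spearman correlation coefficients. A minimal sketch of how such a comparison could be computed is given below; the arrays are made-up placeholders, not the trial data.

        import numpy as np
        from scipy import stats

        # Hypothetical per-target detection probabilities (NOT the trial data).
        pd_real = np.array([0.9, 0.7, 0.4, 0.8, 0.3])   # real photographs
        pd_synth = np.array([0.8, 0.6, 0.5, 0.9, 0.2])  # synthetic renderings

        pcc, _ = stats.pearsonr(pd_real, pd_synth)      # linear correlation
        scc, _ = stats.spearmanr(pd_real, pd_synth)     # rank correlation
        print(f"PCC = {pcc:.2f}, SCC = {scc:.2f}")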

  12. Robust fusion-based processing for military polarimetric imaging systems

    NASA Astrophysics Data System (ADS)

    Hickman, Duncan L.; Smith, Moira I.; Kim, Kyung Su; Choi, Hyun-Jin

    2017-05-01

    Polarisation information within a scene can be exploited in military systems to give enhanced automatic target detection and recognition (ATD/R) performance. However, the performance gain achieved is highly dependent on factors such as the geometry, viewing conditions, and the surface finish of the target. Such performance sensitivities are highly undesirable in many tactical military systems where operational conditions can vary significantly and rapidly during a mission. Within this paper, a range of processing architectures and fusion methods is considered in terms of their practical viability and operational robustness for systems requiring ATD/R. It is shown that polarisation information can give useful performance gains but, to retain system robustness, the introduction of polarimetric processing should be done in such a way as not to compromise other discriminatory scene information in the spectral and spatial domains. The analysis concludes that polarimetric data can be effectively integrated with conventional intensity-based ATD/R either by adapting the ATD/R processing function based on the scene polarisation or by detection-level fusion. Both of these approaches avoid the introduction of processing bottlenecks and limit the impact of processing on system latency.

  13. Vertical gaze angle: absolute height-in-scene information for the programming of prehension.

    PubMed

    Gardner, P L; Mon-Williams, M

    2001-02-01

    One possible source of information regarding the distance of a fixated target is provided by the height of the object within the visual scene. It is accepted that this cue can provide ordinal information, but generally it has been assumed that the nervous system cannot extract "absolute" information from height-in-scene. In order to use height-in-scene, the nervous system would need to be sensitive to ocular position with respect to the head and to head orientation with respect to the shoulders (i.e. vertical gaze angle or VGA). We used a perturbation technique to establish whether the nervous system uses vertical gaze angle as a distance cue. Vertical gaze angle was perturbed using ophthalmic prisms with the base oriented either up or down. In experiment 1, participants were required to carry out an open-loop pointing task whilst wearing: (1) no prisms; (2) a base-up prism; or (3) a base-down prism. In experiment 2, the participants reached to grasp an object under closed-loop viewing conditions whilst wearing: (1) no prisms; (2) a base-up prism; or (3) a base-down prism. Experiments 1 and 2 provided clear evidence that the human nervous system uses vertical gaze angle as a distance cue. It was found that the weighting attached to VGA decreased with increasing target distance. The weighting attached to VGA was also affected by the discrepancy between the height of the target, as specified by all other distance cues, and the height indicated by the initial estimate of the position of the supporting surface. We conclude by considering the use of height-in-scene information in the perception of surface slant and highlight some of the complexities that must be involved in the computation of environmental layout.

  14. Introducing Computational Fluid Dynamics Simulation into Olfactory Display

    NASA Astrophysics Data System (ADS)

    Ishida, Hiroshi; Yoshida, Hitoshi; Nakamoto, Takamichi

    An olfactory display is a device that delivers various odors to the user's nose. It can be used to add special effects to movies and games by releasing odors relevant to the scenes shown on the screen. In order to provide high-presence olfactory stimuli to the users, the display must be able to generate realistic odors with appropriate concentrations in a timely manner together with visual and audio playbacks. In this paper, we propose to use computational fluid dynamics (CFD) simulations in conjunction with the olfactory display. Odor molecules released from their source are transported mainly by turbulent flow, and their behavior can be extremely complicated even in a simple indoor environment. In the proposed system, a CFD solver is employed to calculate the airflow field and the odor dispersal in the given environment. An odor blender is used to generate the odor with the concentration determined based on the calculated odor distribution. Experimental results on presenting odor stimuli synchronously with movie clips show the effectiveness of the proposed system.

  15. Acceleration of color computer-generated hologram from three-dimensional scenes with texture and depth information

    NASA Astrophysics Data System (ADS)

    Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi

    2014-06-01

    We propose acceleration of color computer-generated holograms (CGHs) from three-dimensional (3D) scenes that are expressed as texture (RGB) and depth (D) images. These images are obtained by 3D graphics libraries and RGB-D cameras: for example, OpenGL and Kinect, respectively. We can regard them as two-dimensional (2D) cross-sectional images along the depth direction. The generation of CGHs from the 2D cross-sectional images requires multiple diffraction calculations. If we use convolution-based diffraction such as the angular spectrum method, the diffraction calculation takes a long time and requires large memory usage, because convolution-based diffraction requires the expansion of the 2D cross-sectional images to avoid wraparound noise. In this paper, we first describe the acceleration of the diffraction calculation using "band-limited double-step Fresnel diffraction," which does not require the expansion. Next, we describe color CGH acceleration using color space conversion. In general, color CGHs are generated in RGB color space; however, we need to repeat the same calculation for each color component, so the computational burden of color CGH generation increases three-fold compared with monochrome CGH generation. We can reduce the computational burden by using YCbCr color space, because the 2D cross-sectional images in YCbCr color space can be down-sampled without impairing the image quality.
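
    As a hedged illustration of the color-space idea only (not the authors' code), the sketch below converts an RGB slice to YCbCr with the standard BT.601 weights and down-samples the chroma channels by two; the test image and the 2x factor are assumptions.

        import numpy as np

        def rgb_to_ycbcr(rgb):
            """Full-range BT.601 RGB -> YCbCr for float arrays in [0, 1]."""
            m = np.array([[ 0.299,     0.587,     0.114    ],
                          [-0.168736, -0.331264,  0.5      ],
                          [ 0.5,      -0.418688, -0.081312]])
            ycbcr = rgb @ m.T
            ycbcr[..., 1:] += 0.5   # center the chroma channels
            return ycbcr

        img = np.random.rand(512, 512, 3)   # placeholder cross-sectional slice
        ycc = rgb_to_ycbcr(img)
        y = ycc[..., 0]                     # luma kept at full resolution
        cb, cr = ycc[::2, ::2, 1], ycc[::2, ::2, 2]   # chroma down-sampled 2x
        # Diffraction would now run on y, cb, cr separately, with cb and cr
        # carrying one quarter of the samples they would in RGB space.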

  16. Tachistoscopic exposure and masking of real three-dimensional scenes

    PubMed Central

    Pothier, Stephen; Philbeck, John; Chichka, David; Gajewski, Daniel A.

    2010-01-01

    Although there are many well-known forms of visual cues specifying absolute and relative distance, little is known about how visual space perception develops at small temporal scales. How much time does the visual system require to extract the information in the various absolute and relative distance cues? In this article, we describe a system that may be used to address this issue by presenting brief exposures of real, three-dimensional scenes, followed by a masking stimulus. The system is composed of an electronic shutter (a liquid crystal smart window) for exposing the stimulus scene, and a liquid crystal projector coupled with an electromechanical shutter for presenting the masking stimulus. This system can be used in both full- and reduced-cue viewing conditions, under monocular and binocular viewing, and at distances limited only by the testing space. We describe a configuration that may be used for studying the microgenesis of visual space perception in the context of visually directed walking. PMID:19182129

  17. A lighting metric for quantitative evaluation of accent lighting systems

    NASA Astrophysics Data System (ADS)

    Acholo, Cyril O.; Connor, Kenneth A.; Radke, Richard J.

    2014-09-01

    Accent lighting is critical for artwork and sculpture lighting in museums, and for subject lighting for stage, film, and television. The research problem of designing effective lighting in such settings has been revived recently with the rise of light-emitting-diode-based solid state lighting. In this work, we propose an easy-to-apply quantitative measure of the scene's visual quality as perceived by human viewers. We consider a well-accent-lit scene as one which maximizes the information about the scene (in an information-theoretic sense) available to the user. We propose a metric based on the entropy of the distribution of colors, which are extracted from an image of the scene from the viewer's perspective. We demonstrate that optimizing the metric as a function of illumination configuration (i.e., position, orientation, and spectral composition) results in natural, pleasing accent lighting. We use a photorealistic simulation tool to validate the functionality of our proposed approach, showing its successful application to two- and three-dimensional scenes.
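
    A minimal sketch of an entropy-of-colors metric in the spirit described follows; the quantization level and the test image are assumptions, not the authors' choices.

        import numpy as np

        def color_entropy(img_rgb, bins_per_channel=8):
            """Shannon entropy (bits) of the quantized color distribution."""
            b = bins_per_channel
            q = np.clip((img_rgb * b).astype(int), 0, b - 1)
            codes = q[..., 0] * b * b + q[..., 1] * b + q[..., 2]
            counts = np.bincount(codes.ravel(), minlength=b**3)
            p = counts / counts.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        scene = np.random.rand(480, 640, 3)  # placeholder viewer-perspective image
        print(f"accent-lighting score: {color_entropy(scene):.2f} bits")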

  18. Modulation of Temporal Precision in Thalamic Population Responses to Natural Visual Stimuli

    PubMed Central

    Desbordes, Gaëlle; Jin, Jianzhong; Alonso, Jose-Manuel; Stanley, Garrett B.

    2010-01-01

    Natural visual stimuli have highly structured spatial and temporal properties which influence the way visual information is encoded in the visual pathway. In response to natural scene stimuli, neurons in the lateral geniculate nucleus (LGN) are temporally precise – on a time scale of 10–25 ms – both within single cells and across cells within a population. This time scale, established by non-stimulus-driven elements of neuronal firing, is significantly shorter than that of natural scenes, yet is critical for the neural representation of the spatial and temporal structure of the scene. Here, a generalized linear model (GLM) that combines stimulus-driven elements with spike-history dependence associated with intrinsic cellular dynamics is shown to predict the fine timing precision of LGN responses to natural scene stimuli, the corresponding correlation structure across nearby neurons in the population, and the continuous modulation of spike timing precision and latency across neurons. A single model captured the experimentally observed neural response, across different levels of contrasts and different classes of visual stimuli, through interactions between the stimulus correlation structure and the nonlinearity in spike generation and spike history dependence. Given the sensitivity of the thalamocortical synapse to closely timed spikes and the importance of fine timing precision for the faithful representation of natural scenes, the modulation of thalamic population timing over these time scales is likely important for cortical representations of the dynamic natural visual environment. PMID:21151356

  19. Image based performance analysis of thermal imagers

    NASA Astrophysics Data System (ADS)

    Wegner, D.; Repasi, E.

    2016-05-01

    Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image capturing capability of thermal cameras, in order to enhance the display presentation of the captured scene or specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task (or at least a very time-consuming and expensive one, e.g. requiring the execution of a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence mitigation capability represents such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around the IR scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be e.g. the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test scene selection and on how to assemble an imaging suite (a set of image sequences) for the analysis of thermal imaging systems containing such black boxes in the image-forming path are discussed.

  20. A computer vision system for the recognition of trees in aerial photographs

    NASA Technical Reports Server (NTRS)

    Pinz, Axel J.

    1991-01-01

    Increasing forest damage in Central Europe creates the demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented, which is capable of finding trees in color infrared aerial photographs. The concept and architecture of VES are discussed briefly. The system is applied to a multisource test data set. The processing of this multisource data set leads to multiple interpretation results for one scene. An integration of these results will provide a better scene description by the vision system. This is achieved by an implementation of Steven's correlation algorithm.

  1. Effects of Spatio-Temporal Aliasing on Out-the-Window Visual Systems

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.; Stone, Leland S.; Liston, Dorion B.; Hebert, Tim M.

    2014-01-01

    Designers of out-the-window visual systems face a challenge when attempting to simulate the outside world as viewed from a cockpit. Many methodologies have been developed and adopted to aid in the depiction of particular scene features, or levels of static image detail. However, because aircraft move, it is necessary to also consider the quality of the motion in the simulated visual scene. When motion is introduced in the simulated visual scene, perceptual artifacts can become apparent. A particular artifact related to image motion, spatio-temporal aliasing, will be addressed. The causes of spatio-temporal aliasing will be discussed, and current knowledge regarding the impact of these artifacts on both motion perception and simulator task performance will be reviewed. Methods of reducing the impact of this artifact are also addressed.

  2. Instruments and Methodologies for the Underwater Tridimensional Digitization and Data Musealization

    NASA Astrophysics Data System (ADS)

    Repola, L.; Memmolo, R.; Signoretti, D.

    2015-04-01

    In the research started within the SINAPSIS project of the Università degli Studi Suor Orsola Benincasa, an underwater stereoscopic scanning system has been developed, aimed at the survey of submerged archaeological sites and integrable with standard systems for geomorphological survey of the coast. The project involves the construction of hardware, consisting of an aluminum frame supporting a pair of GoPro Hero Black Edition cameras, and of software for the production of point clouds and the initial processing of data. The software has features for calibrating the stereoscopic vision system, reducing the noise and distortion of underwater captured images, searching for corresponding points of stereoscopic images using stereo-matching algorithms (dense and sparse), and generating and filtering point clouds. Only after various calibration and survey tests, carried out during the excavations envisaged in the project, was mastery of the methods for efficient data acquisition achieved. The current development of the system has allowed the generation of portions of digital models of real submerged scenes. A semi-automatic procedure for global registration of the partial models is under development as a useful aid for the study and musealization of sites.

  3. Transport delay compensation for computer-generated imagery systems

    NASA Technical Reports Server (NTRS)

    Mcfarland, Richard E.

    1988-01-01

    In the problem of pure transport delay in a low-pass system, a trade-off exists with respect to performance within and beyond a frequency bandwidth. When activity beyond the band is attenuated because of other considerations, this trade-off may be used to improve the performance within the band. Specifically, transport delay in computer-generated imagery systems is reduced to a manageable problem by recognizing frequency limits in vehicle activity and manual-control capacity. Based on these limits, a compensation algorithm has been developed for use in aircraft simulation at NASA Ames Research Center. For direct measurement of transport delays, a beam-splitter experiment is presented that accounts for the complete flight simulation environment. Values determined by this experiment are appropriate for use in the compensation algorithm. The algorithm extends the bandwidth of high-frequency flight simulation to well beyond that of normal pilot inputs. Within this bandwidth, the visual scene presentation manifests negligible gain distortion and phase lag. After a year of utilization, two minor exceptions to universal simulation applicability have been identified and subsequently resolved.
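
    The compensation algorithm itself is not reproduced in this abstract. Purely as a hedged illustration of the underlying idea (trading out-of-band fidelity for in-band phase lead), the sketch below advances a sampled signal by the measured transport delay using a finite-difference slope; the 50 ms delay, 60 Hz rate, and test signal are invented.

        import numpy as np

        def lead_compensate(x, delay_s, dt):
            """Approximate x(t + delay) via x + delay * dx/dt. The derivative
            amplifies out-of-band noise, which is tolerable when activity
            beyond the band is already attenuated."""
            dxdt = np.empty_like(x)
            dxdt[1:] = (x[1:] - x[:-1]) / dt
            dxdt[0] = dxdt[1]
            return x + delay_s * dxdt

        dt = 1.0 / 60.0                      # assumed 60 Hz image update
        t = np.arange(0.0, 2.0, dt)
        cmd = np.sin(2 * np.pi * 0.5 * t)    # 0.5 Hz pilot-like input (placeholder)
        compensated = lead_compensate(cmd, delay_s=0.050, dt=dt)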

  4. Display of travelling 3D scenes from single integral-imaging capture

    NASA Astrophysics Data System (ADS)

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

    Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. The application of this method makes it possible to improve the quality of 3D display images and videos.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Jiangye

    Up-to-date maps of installed solar photovoltaic panels are a critical input for policy and financial assessment of solar distributed generation. However, such maps for large areas are not available. With high coverage and low cost, aerial images enable large-scale mapping, but it is highly difficult to automatically identify solar panels from images, which are small objects with varying appearances dispersed in complex scenes. We introduce a new approach based on deep convolutional networks, which effectively learns to delineate solar panels in aerial scenes. The approach has successfully mapped solar panels in imagery covering 200 square kilometers in two cities, using only 12 square kilometers of training data that were manually labeled.

  6. Characterization, adaptive traffic shaping, and multiplexing of real-time MPEG II video

    NASA Astrophysics Data System (ADS)

    Agrawal, Sanjay; Barry, Charles F.; Binnai, Vinay; Kazovsky, Leonid G.

    1997-01-01

    We obtain a network traffic model for real-time MPEG-II encoded digital video by analyzing video stream samples from real-time encoders from NUKO Information Systems. The MPEG-II sample streams include a resolution-intensive movie, City of Joy; an action-intensive movie, Aliens; a luminance-intensive (black and white) movie, Road To Utopia; and a chrominance-intensive (color) movie, Dick Tracy. From our analysis we obtain a heuristic model for the encoded video traffic which uses a 15-stage Markov process to model the I, B, P frame sequences within a group of pictures (GOP). A jointly-correlated Gaussian process is used to model the individual frame sizes. Scene change arrivals are modeled according to a gamma process. Simulations show that our MPEG-II traffic model generates I, B, P frame sequences and frame sizes that closely match the sample MPEG-II stream traffic characteristics as they relate to latency and buffer occupancy in network queues. To achieve high multiplexing efficiency we propose a traffic shaping scheme which sets preferred I-frame generation times among a group of encoders so as to minimize the overall variation in total offered traffic while still allowing the individual encoders to react to scene changes. Simulations show that our scheme results in multiplexing gains of up to 10%, enabling us to multiplex twenty 6 Mbps MPEG-II video streams instead of 18 streams over an ATM/SONET OC3 link without latency or cell loss penalty. A patent for this scheme is pending.
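
    As a hedged toy of the traffic-shaping idea only (the real scheme also adapts to scene changes; every number below is invented), staggering each encoder's I-frame phase within the GOP flattens the aggregate rate, because the large I-frames of different encoders no longer coincide.

        import numpy as np

        GOP = 15        # frames per group of pictures (placeholder)
        N_ENC = 20      # encoders multiplexed onto one link
        I_SIZE, P_SIZE, B_SIZE = 60_000, 25_000, 12_000   # bytes, invented

        def frame_size(k, phase):
            """Size of frame k for an encoder whose I-frame sits at `phase`."""
            pos = (k - phase) % GOP
            if pos == 0:
                return I_SIZE
            return P_SIZE if pos % 3 == 0 else B_SIZE

        frames = range(300)
        aligned = np.array([N_ENC * frame_size(k, 0) for k in frames])
        staggered = np.array([sum(frame_size(k, e * GOP // N_ENC)
                                  for e in range(N_ENC)) for k in frames])
        print("peak-to-mean, aligned:  ", aligned.max() / aligned.mean())
        print("peak-to-mean, staggered:", staggered.max() / staggered.mean())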

  7. Advances in iterative non-uniformity correction techniques for infrared scene projection

    NASA Astrophysics Data System (ADS)

    Danielson, Tom; Franks, Greg; LaVeigne, Joe; Prewarski, Marcus; Nehring, Brian

    2015-05-01

    Santa Barbara Infrared (SBIR) is continually developing improved methods for non-uniformity correction (NUC) of its Infrared Scene Projectors (IRSPs) as part of its comprehensive efforts to achieve the best possible projector performance. The most recent step forward, Advanced Iterative NUC (AI-NUC), improves upon previous NUC approaches in several ways. The key to NUC performance is achieving the most accurate possible input drive-to-radiance output mapping for each emitter pixel. This requires many highly-accurate radiance measurements of emitter output, as well as sophisticated manipulation of the resulting data set. AI-NUC expands the available radiance data set to include all measurements made of emitter output at any point. In addition, it allows the user to efficiently manage that data for use in the construction of a new NUC table that is generated from an improved fit of the emitter response curve. Not only does this improve the overall NUC by offering more statistics for interpolation than previous approaches, it also simplifies the removal of erroneous data from the set so that it does not propagate into the correction tables. AI-NUC is implemented by SBIR's IRWindows4 automated test software as part of its advanced turnkey IRSP product (the Calibration Radiometry System or CRS), which incorporates all necessary measurement, calibration and NUC table generation capabilities. By employing AI-NUC on the CRS, SBIR has demonstrated the best uniformity results on resistive emitter arrays to date.
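
    A hedged toy of the central step (fitting each emitter's drive-to-radiance response over all available measurements and inverting it into a correction table) is sketched below; the cubic fit, table size, and data are assumptions, not SBIR's AI-NUC algorithm.

        import numpy as np

        rng = np.random.default_rng(0)
        drives = np.linspace(0.1, 1.0, 12)     # commanded drive levels
        # Hypothetical measured radiance for one emitter (nonlinear, noisy).
        radiance = 2.0 * drives**1.3 + 0.02 * rng.standard_normal(drives.size)

        # Fit the emitter response curve over ALL measurements of this pixel.
        coeffs = np.polyfit(drives, radiance, 3)

        # Invert the fit into a NUC lookup: desired radiance -> required drive.
        fine_drive = np.linspace(0.1, 1.0, 4096)
        fine_rad = np.polyval(coeffs, fine_drive)

        def drive_for(target_radiance):
            return fine_drive[np.argmin(np.abs(fine_rad - target_radiance))]

        print(drive_for(1.0))   # drive expected to yield 1.0 radiance units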

  8. Intelligent keyframe extraction for video printing

    NASA Astrophysics Data System (ADS)

    Zhang, Tong

    2004-10-01

    Nowadays most digital cameras have the functionality of taking short video clips, with the length of video ranging from several seconds to a couple of minutes. The purpose of this research is to develop an algorithm which extracts an optimal set of keyframes from each short video clip so that the user could obtain proper video frames to print out. In current video printing systems, keyframes are normally obtained by evenly sampling the video clip over time. Such an approach, however, may not reflect highlights or regions of interest in the video. Keyframes derived in this way may also be improper for video printing in terms of either content or image quality. In this paper, we present an intelligent keyframe extraction approach to derive an improved keyframe set by performing semantic analysis of the video content. For a video clip, a number of video and audio features are analyzed to first generate a candidate keyframe set. These features include accumulative color histogram and color layout differences, camera motion estimation, moving object tracking, face detection and audio event detection. Then, the candidate keyframes are clustered and evaluated to obtain a final keyframe set. The objective is to automatically generate a limited number of keyframes to show different views of the scene; to show different people and their actions in the scene; and to tell the story in the video shot. Moreover, frame extraction for video printing, which is a rather subjective problem, is considered in this work for the first time, and a semi-automatic approach is proposed.
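
    A hedged miniature of the final clustering stage is sketched below; the random 48-bin descriptors and the cluster count are placeholders (the paper's actual candidate features include motion, faces, and audio events).

        import numpy as np
        from scipy.cluster.vq import kmeans2

        rng = np.random.default_rng(1)
        feats = rng.random((30, 48))    # hypothetical color descriptors for
                                        # 30 candidate keyframes
        k = 5                           # number of frames to print
        centroids, labels = kmeans2(feats, k, minit="++", seed=1)

        # Keep, per cluster, the candidate closest to its centroid.
        keyframes = []
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if members.size:
                d = ((feats[members] - centroids[c]) ** 2).sum(axis=1)
                keyframes.append(int(members[np.argmin(d)]))
        print("selected candidate indices:", sorted(keyframes))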

  9. Use of an Infrared Thermometer with Laser Targeting in Morphological Scene Change Detection for Fire Detection

    NASA Astrophysics Data System (ADS)

    Tickle, Andrew J.; Singh, Harjap; Grindley, Josef E.

    2013-06-01

    Morphological Scene Change Detection (MSCD) is a process typically tasked with detecting relevant changes in a guarded environment for security applications. This can be implemented on a Field Programmable Gate Array (FPGA) by a combination of binary differences based around exclusive-OR (XOR) gates, mathematical morphology and a crucial threshold setting. This is a robust technique that can be applied in many areas, from leak detection to movement tracking, and further augmented to perform additional functions such as watermarking and facial detection. Fire is a severe problem, and in areas where traditional fire alarm systems are not installed or feasible, it may not be detected until it is too late. Shown here is a way of adapting the traditional Morphological Scene Change Detector (MSCD) with a temperature sensor, so that if both the temperature sensor and the scene change detector are triggered, there is a high likelihood of fire being present. Such a system would allow integration into autonomous mobile robots, so that not only security patrols but also fire detection could be undertaken.
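
    A hedged software analogue of the pipeline described is sketched below; the thresholds, structuring element, and temperature gate are invented, and the absolute-difference test stands in for the FPGA's binary XOR stage.

        import numpy as np
        from scipy import ndimage

        def fire_alarm(frame_a, frame_b, temp_c, diff_thresh=0.15,
                       pixel_frac=0.02, temp_thresh=60.0):
            """Alarm when a morphological scene change AND a high spot
            temperature are both present (all criteria are placeholders)."""
            changed = np.abs(frame_a - frame_b) > diff_thresh      # ~ binary XOR
            cleaned = ndimage.binary_opening(changed, np.ones((3, 3)))
            scene_change = cleaned.mean() > pixel_frac             # crucial threshold
            return scene_change and temp_c > temp_thresh

        a = np.random.rand(120, 160)             # placeholder reference frame
        b = a.copy()
        b[40:80, 60:100] += 0.5                  # synthetic "flame" region
        print(fire_alarm(a, b, temp_c=72.0))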

  10. Combined optimization of image-gathering and image-processing systems for scene feature detection

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Arduini, Robert F.; Samms, Richard W.

    1987-01-01

    The relationship between the image gathering and image processing systems for minimum mean squared error estimation of scene characteristics is investigated. A stochastic optimization problem is formulated where the objective is to determine a spatial characteristic of the scene rather than a feature of the already blurred, sampled and noisy image data. An analytical solution for the optimal characteristic image processor is developed. The Wiener filter for the sampled image case is obtained as a special case, where the desired characteristic is scene restoration. Optimal edge detection is investigated using the Laplacian-of-Gaussian operator ∇²G as the desired characteristic, where G is a two-dimensional Gaussian distribution function. It is shown that the optimal edge detector compensates for the blurring introduced by the image gathering optics, and notably, that it is not circularly symmetric. The lack of circular symmetry is largely due to the geometric effects of the sampling lattice used in image acquisition. The optimal image gathering optical transfer function is also investigated and the results of a sensitivity analysis are shown.
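
    For concreteness, a hedged sketch of a plain Laplacian-of-Gaussian edge characteristic follows; sigma, the threshold, and the step-edge image are placeholders, and the paper's optimal, non-circularly-symmetric filter is not reproduced.

        import numpy as np
        from scipy import ndimage

        img = np.zeros((128, 128))
        img[:, 64:] = 1.0                                    # placeholder step edge
        log_resp = ndimage.gaussian_laplace(img, sigma=2.0)  # (del^2 G) * image
        edges = np.abs(log_resp) > 0.05                      # crude zero-crossing band
        print(edges.sum(), "edge pixels")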

  11. Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  12. Core geometry in perspective

    PubMed Central

    Dillon, Moira R.; Spelke, Elizabeth S.

    2015-01-01

    Research on animals, infants, children, and adults provides evidence that distinct cognitive systems underlie navigation and object recognition. Here we examine whether and how these systems interact when children interpret 2D edge-based perspectival line drawings of scenes and objects. Such drawings serve as symbols early in development, and they preserve scene and object geometry from canonical points of view. Young children show limits when using geometry both in non-symbolic tasks and in symbolic map tasks that present 3D contexts from unusual, unfamiliar points of view. When presented with the familiar viewpoints in perspectival line drawings, however, do children engage more integrated geometric representations? In three experiments, children successfully interpreted line drawings with respect to their depicted scene or object. Nevertheless, children recruited distinct processes when navigating based on the information in these drawings, and these processes depended on the context in which the drawings were presented. These results suggest that children are flexible but limited in using geometric information to form integrated representations of scenes and objects, even when interpreting spatial symbols that are highly familiar and faithful renditions of the visual world. PMID:25441089

  13. Graded zooming

    DOEpatents

    Coffland, Douglas R.

    2006-04-25

    A system for increasing the far-field resolution of video or still-frame images while maintaining full coverage in the near field. The system includes a camera connected to a computer. The computer applies a specific zooming scale factor to each line of pixels, continuously increasing the scale factor from the bottom line to the top, to capture the scene in the near field yet maintain resolution in the scene in the far field.
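
    A hedged sketch of the per-line idea follows; the linear zoom profile and its endpoints are guesses, since the abstract does not specify the patent's actual scale factors.

        import numpy as np

        def graded_zoom(img, min_zoom=1.0, max_zoom=3.0):
            """Resample each row with its own horizontal zoom factor, growing
            from the bottom row (near field, full coverage) to the top row
            (far field, magnified)."""
            h, w = img.shape
            out = np.empty_like(img)
            cols = np.arange(w)
            for r in range(h):                    # row 0 is the top of the frame
                z = max_zoom + (min_zoom - max_zoom) * r / (h - 1)
                src = (cols - w / 2) / z + w / 2  # source column per output column
                out[r] = np.interp(src, cols, img[r])
            return out

        frame = np.random.rand(240, 320)          # placeholder video frame
        zoomed = graded_zoom(frame)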

  14. Automated Synthetic Scene Generation

    DTIC Science & Technology

    2014-07-01

    Using the Beard-Maxwell BRDF model, the BRDF from Equations (3.3) and (3.4) is composed of specular, diffuse, and volumetric terms... models help organizations developing new remote sensing instruments anticipate sensor performance by enabling the ability to create synthetic imagery for a proposed sensor before the sensor is built. One of the largest challenges in modeling realistic synthetic imagery, however, is generating the

  15. Solid state temperature-dependent NUC (non-uniformity correction) in uncooled LWIR (long-wave infrared) imaging system

    NASA Astrophysics Data System (ADS)

    Cao, Yanpeng; Tisse, Christel-Loic

    2013-06-01

    In uncooled LWIR microbolometer imaging systems, temperature fluctuations of the FPA (Focal Plane Array), as well as of the lens and mechanical components placed along the optical path, result in thermal drift and spatial non-uniformity. These non-idealities generate undesirable FPN (Fixed-Pattern Noise) that is difficult to remove using traditional, individual shutterless and TEC-less (Thermo-Electric Cooling) techniques. In this paper we introduce a novel single-image-based processing approach that marries the benefits of both statistical scene-based and calibration-based NUC algorithms, relying neither on an extra temperature reference nor on accurate motion estimation, to compensate the resulting temperature-dependent non-uniformities. Our method includes two subsequent image processing steps. Firstly, an empirical behavioral model is derived by calibration to characterize the spatio-temporal response of the microbolometric FPA to environmental and scene temperature fluctuations. Secondly, we experimentally establish that the FPN component caused by the optics creates a spatio-temporally continuous, low-frequency, low-magnitude variation of the image intensity. We propose to make use of this property and learn a prior on the spatial distribution of natural image gradients to infer the correction function for the entire image. The performance and robustness of the proposed temperature-adaptive NUC method are demonstrated by showing results obtained from a 640×512-pixel uncooled LWIR microbolometer imaging system operating over a broad range of temperatures and with rapid environmental temperature changes (i.e., from -5°C to 65°C within 10 minutes).

  16. IKONOS geometric characterization

    USGS Publications Warehouse

    Helder, Dennis; Coan, Michael; Patrick, Kevin; Gaska, Peter

    2003-01-01

    The IKONOS spacecraft acquired images of Brookings, SD, a small city in east central South Dakota, on July 3, 17, and 25 and August 13, 2001, and of the rural area around the EROS Data Center on May 22, June 30, and July 30, 2000. South Dakota State University (SDSU) evaluated the Brookings scenes and the USGS EROS Data Center (EDC) evaluated the other scenes. The images evaluated by SDSU utilized various natural objects and man-made features as identifiable targets randomly distributed throughout the scenes, while the images evaluated by EDC utilized pre-marked artificial points (panel points) to provide the best possible targets distributed in a grid pattern. Space Imaging provided products at different processing levels to each institution. For each scene, the pixel (line, sample) locations of the various targets were compared to field-observed, survey-grade Global Positioning System locations. Patterns of error distribution for each product were plotted, and a variety of statistical statements of accuracy are made. The IKONOS sensor also acquired 12 pairs of stereo images of globally distributed scenes between April 2000 and April 2001. For each scene, analysts at the National Imagery and Mapping Agency (NIMA) compared derived photogrammetric coordinates to their corresponding NIMA field-surveyed ground control points (GCPs). NIMA analysts determined horizontal and vertical accuracies by averaging the differences between the derived photogrammetric points and the field-surveyed GCPs for all 12 stereo pairs. Patterns of error distribution for each scene are presented.

  17. Real time moving scene holographic camera system

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1973-01-01

    A holographic motion picture camera system capable of resolving front-surface detail is described. The system utilizes a beam of coherent light and means for dividing the beam into a reference beam for direct transmission to a conventional movie camera and two reflection signal beams for transmission to the movie camera by reflection from the front side of a moving scene. The system is arranged so that critical parts of the system are positioned on the foci of a pair of interrelated, mathematically derived ellipses. The camera has the theoretical capability of producing motion picture holograms of projectiles moving at speeds as high as 900,000 cm/sec (about 21,450 mph).

  18. The Orbital Maneuvering Vehicle Training Facility visual system concept

    NASA Technical Reports Server (NTRS)

    Williams, Keith

    1989-01-01

    The purpose of the Orbital Maneuvering Vehicle (OMV) Training Facility (OTF) is to provide effective training for OMV pilots. A critical part of the training environment is the Visual System, which will simulate the video scenes produced by the OMV Closed-Circuit Television (CCTV) system. The simulation will include camera models, dynamic target models, moving appendages, and scene degradation due to the compression/decompression of the video signal. Video system malfunctions will also be provided to ensure that the pilot is ready to meet all challenges the real world might provide. One possible visual system configuration for the training facility that will meet existing requirements is described.

  1. Field-based detection of biological samples for forensic analysis: Established techniques, novel tools, and future innovations.

    PubMed

    Morrison, Jack; Watts, Giles; Hobbs, Glyn; Dawnay, Nick

    2018-04-01

    Field-based forensic tests commonly provide information on the presence and identity of biological stains and can also support the identification of species. Such information can support downstream processing of forensic samples and generate rapid intelligence. These approaches have traditionally used chemical and immunological techniques to elicit the result, but some are known to suffer from a lack of specificity and sensitivity. The last 10 years have seen the development of field-based genetic profiling systems, with specific focus on moving the mainstay of forensic genetic analysis, namely STR profiling, out of the laboratory and into the hands of the non-laboratory user. In doing so it is now possible for enforcement officers to generate a crime scene DNA profile which can then be matched to a reference or database profile. The introduction of these novel genetic platforms also allows for further development of new molecular assays aimed at answering the more traditional questions relating to body fluid identity and species detection. The current drive for field-based molecular tools is in response to the needs of the criminal justice system and enforcement agencies, and promises a step-change in how forensic evidence is processed. However, the adoption of such systems by the law enforcement community does not represent a new strategy in the way forensic science has integrated previous novel approaches. Nor do they automatically represent a threat to the quality control and assurance practices that are central to the field. This review examines the historical need and subsequent research and developmental breakthroughs in field-based forensic analysis over the past two decades, with particular focus on genetic methods. Emerging technologies from a range of scientific fields that have potential applications in forensic analysis at the crime scene are identified, and associated issues that arise from the shift from laboratory into operational field use are discussed. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Development and Validation of a Polarimetric-MCScene 3D Atmospheric Radiation Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berk, Alexander; Hawes, Frederick; Fox, Marsha

    2016-03-15

    Polarimetric measurements can substantially enhance the ability of both spectrally resolved and single band imagery to detect the proliferation of weapons of mass destruction, providing data for locating and identifying facilities, materials, and processes of undeclared and proliferant nuclear weapons programs worldwide. Unfortunately, models do not exist that efficiently and accurately predict spectral polarized signatures for the materials of interest embedded in complex 3D environments. Having such a model would enable one to test hypotheses and optimize both the enhancement of scene contrast and the signal processing for spectral signature extraction. The Phase I set the groundwork for development of fully validated polarimetric spectral signature and scene simulation models. This has been accomplished 1. by (a) identifying and downloading state-of-the-art surface and atmospheric polarimetric data sources, (b) implementing tools for generating custom polarimetric data, and (c) identifying and requesting US Government funded field measurement data for use in validation; 2. by formulating an approach for upgrading the radiometric spectral signature model MODTRAN to generate polarimetric intensities through (a) ingestion of the polarimetric data, (b) polarimetric vectorization of existing MODTRAN modules, and (c) integration of a newly developed algorithm for computing polarimetric multiple scattering contributions; 3. by generating an initial polarimetric model that demonstrates calculation of polarimetric solar and lunar single scatter intensities arising from the interaction of incoming irradiances with molecules and aerosols; 4. by developing a design and implementation plan to (a) automate polarimetric scene construction and (b) efficiently sample polarimetric scattering and reflection events, for use in a to-be-developed polarimetric version of the existing first-principles synthetic scene simulation model, MCScene; and 5. by planning a validation field measurement program in collaboration with the Remote Sensing and Exploitation group at Sandia National Laboratories (SNL) in which data from their ongoing polarimetric field and laboratory measurement program will be shared and, to the extent allowed, tailored for model validation in exchange for model predictions under conditions and for geometries outside of their measurement domain.

  3. Classification of wheat: Badhwar profile similarity technique

    NASA Technical Reports Server (NTRS)

    Austin, W. W.

    1980-01-01

    The Badhwar profile similarity classification technique, used successfully for classification of corn, was applied to spring wheat classifications. The software programs and the procedures used to generate full-scene classifications are presented, and numerical results of the acreage estimations are given.

  4. Why people see things that are not there: a novel Perception and Attention Deficit model for recurrent complex visual hallucinations.

    PubMed

    Collerton, Daniel; Perry, Elaine; McKeith, Ian

    2005-12-01

    As many as two million people in the United Kingdom repeatedly see people, animals, and objects that have no objective reality. Hallucinations on the border of sleep, dementing illnesses, delirium, eye disease, and schizophrenia account for 90% of these. The remainder have rarer disorders. We review existing models of recurrent complex visual hallucinations (RCVH) in the awake person, including cortical irritation, cortical hyperexcitability and cortical release, top-down activation, misperception, dream intrusion, and interactive models. We provide evidence that these can neither fully account for the phenomenology of RCVH, nor for variations in the frequency of RCVH in different disorders. We propose a novel Perception and Attention Deficit (PAD) model for RCVH. A combination of impaired attentional binding and poor sensory activation of a correct proto-object, in conjunction with a relatively intact scene representation, bias perception to allow the intrusion of a hallucinatory proto-object into a scene perception. Incorporation of this image into a context-specific hallucinatory scene representation accounts for repetitive hallucinations. We suggest that these impairments are underpinned by disturbances in a lateral frontal cortex-ventral visual stream system. We show how the frequency of RCVH in different diseases is related to the coexistence of attentional and visual perceptual impairments; how attentional and perceptual processes can account for their phenomenology; and that diseases and other states with high rates of RCVH have cholinergic dysfunction in both frontal cortex and the ventral visual stream. Several tests of the model are indicated, together with a number of treatment options that it generates.

  5. Generation of large scale urban environments to support advanced sensor and seeker simulation

    NASA Astrophysics Data System (ADS)

    Giuliani, Joseph; Hershey, Daniel; McKeown, David, Jr.; Willis, Carla; Van, Tan

    2009-05-01

    One of the key aspects of the design of a next-generation weapon system is the need to operate in cluttered and complex urban environments. Simulation systems rely on accurate representation of these environments and require automated software tools to construct the underlying 3D geometry and associated spectral and material properties, which are then formatted for various objective seeker simulation systems. Under an Air Force Small Business Innovative Research (SBIR) contract, we have developed an automated process to generate 3D urban environments with user-defined properties. These environments can be composed from a wide variety of source materials, including vector source data, pre-existing 3D models, and digital elevation models, and rapidly organized into a geo-specific visual simulation database. This intermediate representation can be easily inspected in the visible spectrum for content and organization and interactively queried for accuracy. Once the database contains the required contents, it can then be exported into specific synthetic scene generation runtime formats, preserving the relationship between geometry and material properties. To date, an exporter for the Irma simulation system, developed and maintained by AFRL/Eglin, has been created, and a second exporter to the Real Time Composite Hardbody and Missile Plume (CHAMP) simulation system for real-time use is currently being developed. This process supports significantly more complex target environments than previous approaches to database generation. In this paper we describe the capabilities for content creation for advanced seeker processing algorithm simulation and sensor stimulation, including the overall database compilation process and sample databases produced and exported for the Irma runtime system. We also discuss the addition of object dynamics and viewer dynamics within the visual simulation into the Irma runtime environment.

  6. A fuzzy measure approach to motion frame analysis for scene detection. M.S. Thesis - Houston Univ.

    NASA Technical Reports Server (NTRS)

    Leigh, Albert B.; Pal, Sankar K.

    1992-01-01

    This paper addresses the problem of scene estimation for motion video data in a fuzzy set-theoretic framework. Using fuzzy image feature extractors, a new algorithm is developed to compute the change of information between two successive frames to classify scenes. This classification process of raw input visual data can be used to establish structure for correlation. The algorithm attempts to fulfill the need for nonlinear, frame-accurate access to video data for applications such as video editing and visual document archival/retrieval systems in multimedia environments.

  7. An interactive display system for large-scale 3D models

    NASA Astrophysics Data System (ADS)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult to achieve real-time display of and interaction with large-scale 3D models in common 3D display software such as MeshLab. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4GB RAM.

  8. A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors

    PubMed Central

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage, or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged alongside a high level of detail for selected objects. Thus, the measuring systems typically used are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process becomes possible. However, common low-cost sensors suffer from a trade-off between range and accuracy, providing either low resolution for single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve coarse scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolution David objects are automatically assigned to their corresponding Kinect objects using surface feature histograms and SVM classification. The corresponding objects are registered with an ICP implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255
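
    The sketch below illustrates the matching-and-registration idea in miniature: objects are described by a simple feature histogram (a crude stand-in for the paper's surface feature histograms), assigned to a class by an SVM, and aligned with a basic point-to-point ICP. The feature choice and all parameters are assumptions.

    ```python
    # Hedged sketch: histogram features + SVM assignment + point-to-point ICP.
    import numpy as np
    from scipy.spatial import cKDTree
    from sklearn.svm import SVC

    def height_histogram(points, bins=16):
        """Crude stand-in for a surface feature histogram."""
        h, _ = np.histogram(points[:, 2], bins=bins, range=(0, 1), density=True)
        return h

    def icp(source, target, iterations=20):
        """Point-to-point ICP: returns the source cloud aligned onto the target."""
        src = source.copy()
        tree = cKDTree(target)
        for _ in range(iterations):
            nn = target[tree.query(src)[1]]            # closest target points
            mu_s, mu_t = src.mean(0), nn.mean(0)
            H = (src - mu_s).T @ (nn - mu_t)           # cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                   # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            src = (src - mu_s) @ R.T + mu_t            # apply rigid transform
        return src

    # Toy usage: train an SVM on histograms of labeled coarse objects, then
    # assign a high-resolution scan to its object class before ICP alignment.
    rng = np.random.default_rng(1)
    objs = [rng.random((200, 3)) for _ in range(4)]
    labels = [0, 0, 1, 1]
    clf = SVC().fit([height_histogram(o) for o in objs], labels)
    high_res = rng.random((500, 3))
    print("assigned class:", clf.predict([height_histogram(high_res)])[0])
    aligned = icp(high_res, objs[0])
    ```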

  9. ViCoMo: visual context modeling for scene understanding in video surveillance

    NASA Astrophysics Data System (ADS)

    Creusen, Ivo M.; Javanbakhti, Solmaz; Loomans, Marijn J. H.; Hazelhoff, Lykele B.; Roubtsova, Nadejda; Zinger, Svitlana; de With, Peter H. N.

    2013-10-01

    The use of contextual information can significantly aid scene understanding of surveillance video. Just detecting people and tracking them does not provide sufficient information to detect situations that require operator attention. We propose a proof-of-concept system that uses several sources of contextual information to improve scene understanding in surveillance video. The focus is on two scenarios that represent common video surveillance situations, parking lot surveillance and crowd monitoring. In the first scenario, a pan-tilt-zoom (PTZ) camera tracking system is developed for parking lot surveillance. Context is provided by the traffic sign recognition system to localize regular and handicapped parking spot signs as well as license plates. The PTZ algorithm has the ability to selectively detect and track persons based on scene context. In the second scenario, a group analysis algorithm is introduced to detect groups of people. Contextual information is provided by traffic sign recognition and region labeling algorithms and exploited for behavior understanding. In both scenarios, decision engines are used to interpret and classify the output of the subsystems and if necessary raise operator alerts. We show that using context information enables the automated analysis of complicated scenarios that were previously not possible using conventional moving object classification techniques.

  10. Finding and recognizing objects in natural scenes: complementary computations in the dorsal and ventral visual systems

    PubMed Central

    Rolls, Edmund T.; Webb, Tristan J.

    2014-01-01

    Searching for and recognizing objects in complex natural scenes is implemented by multiple saccades until the eyes reach within the reduced receptive field sizes of inferior temporal cortex (IT) neurons. We analyze and model how the dorsal and ventral visual streams both contribute to this. Saliency detection in the dorsal visual system including area LIP is modeled by graph-based visual saliency, and allows the eyes to fixate potential objects within several degrees. Visual information at the fixated location subtending approximately 9° corresponding to the receptive fields of IT neurons is then passed through a four layer hierarchical model of the ventral cortical visual system, VisNet. We show that VisNet can be trained using a synaptic modification rule with a short-term memory trace of recent neuronal activity to capture both the required view and translation invariances to allow in the model approximately 90% correct object recognition for 4 objects shown in any view across a range of 135° anywhere in a scene. The model was able to generalize correctly within the four trained views and the 25 trained translations. This approach analyses the principles by which complementary computations in the dorsal and ventral visual cortical streams enable objects to be located and recognized in complex natural scenes. PMID:25161619

  11. A multi-resolution approach for an automated fusion of different low-cost 3D sensors.

    PubMed

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-04-24

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage, or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged alongside a high level of detail for selected objects. Thus, the measuring systems typically used are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process becomes possible. However, common low-cost sensors suffer from a trade-off between range and accuracy, providing either low resolution for single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve coarse scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolution David objects are automatically assigned to their corresponding Kinect objects using surface feature histograms and SVM classification. The corresponding objects are registered with an ICP implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory.

  12. The representation of visual depth perception based on the plenoptic function in the retina and its neural computation in visual cortex V1.

    PubMed

    Songnian, Zhao; Qi, Zou; Chang, Liu; Xuemin, Liu; Shousi, Sun; Jun, Qiu

    2014-04-23

    How it is possible to "faithfully" represent a three-dimensional stereoscopic scene using Cartesian coordinates on a plane, and how three-dimensional perceptions differ between an actual scene and an image of the same scene, are questions that have not yet been explored in depth. They seem like commonplace phenomena, but in fact, they are important and difficult issues for visual information processing, neural computation, physics, psychology, cognitive psychology, and neuroscience. The results of this study show that the use of plenoptic (or all-optical) functions and their dual-plane parameterizations can not only explain the nature of information processing from the retina to the primary visual cortex and, in particular, the characteristics of the visual pathway's optical system and its affine transformation, but they can also clarify the reason why the vanishing point and line exist in a visual image. In addition, they can better explain the reasons why a three-dimensional Cartesian coordinate system can be introduced into the two-dimensional plane to express a real three-dimensional scene. 1. We introduce two different mathematical expressions of the plenoptic functions, P_w and P_v, that can describe the objective world. We also analyze the differences between these two functions when describing visual depth perception, that is, the difference between how these two functions obtain the depth information of an external scene. 2. The main results include a basic method for introducing a three-dimensional Cartesian coordinate system into a two-dimensional plane to express the depth of a scene, its constraints, and algorithmic implementation. In particular, we include a method to separate the plenoptic function and proceed with the corresponding transformation in the retina and visual cortex. 3. We propose that size constancy, the vanishing point, and the vanishing line form the basis of visual perception of the outside world, and that the introduction of a three-dimensional Cartesian coordinate system into a two-dimensional plane reveals a corresponding mapping between a retinal image and the vanishing point and line.
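
    For reference, the plenoptic function and its dual-plane parameterization mentioned above are conventionally written as follows; this is the standard notation, which may differ from the authors' own symbols P_w and P_v.

    ```latex
    % Full plenoptic function: radiance along every ray through every point,
    % in every direction, at every wavelength and time:
    \[
      P = P(x, y, z, \theta, \phi, \lambda, t)
    \]
    % Dual-plane (light-field) parameterization: a ray is indexed by its
    % intersections (u, v) and (s, t) with two parallel reference planes:
    \[
      L = L(u, v, s, t)
    \]
    ```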

  13. The representation of visual depth perception based on the plenoptic function in the retina and its neural computation in visual cortex V1

    PubMed Central

    2014-01-01

    Background How it is possible to “faithfully” represent a three-dimensional stereoscopic scene using Cartesian coordinates on a plane, and how three-dimensional perceptions differ between an actual scene and an image of the same scene, are questions that have not yet been explored in depth. They seem like commonplace phenomena, but in fact, they are important and difficult issues for visual information processing, neural computation, physics, psychology, cognitive psychology, and neuroscience. Results The results of this study show that the use of plenoptic (or all-optical) functions and their dual-plane parameterizations can not only explain the nature of information processing from the retina to the primary visual cortex and, in particular, the characteristics of the visual pathway’s optical system and its affine transformation, but they can also clarify the reason why the vanishing point and line exist in a visual image. In addition, they can better explain the reasons why a three-dimensional Cartesian coordinate system can be introduced into the two-dimensional plane to express a real three-dimensional scene. Conclusions 1. We introduce two different mathematical expressions of the plenoptic functions, P_w and P_v, that can describe the objective world. We also analyze the differences between these two functions when describing visual depth perception, that is, the difference between how these two functions obtain the depth information of an external scene. 2. The main results include a basic method for introducing a three-dimensional Cartesian coordinate system into a two-dimensional plane to express the depth of a scene, its constraints, and algorithmic implementation. In particular, we include a method to separate the plenoptic function and proceed with the corresponding transformation in the retina and visual cortex. 3. We propose that size constancy, the vanishing point, and the vanishing line form the basis of visual perception of the outside world, and that the introduction of a three-dimensional Cartesian coordinate system into a two-dimensional plane reveals a corresponding mapping between a retinal image and the vanishing point and line. PMID:24755246

  14. Chromatic information and feature detection in fast visual analysis

    DOE PAGES

    Del Viva, Maria M.; Punzi, Giovanni; Shevell, Steven K.; ...

    2016-08-01

    The visual system is able to recognize a scene based on a sketch made of very simple features. This ability is likely crucial for survival, when fast image recognition is necessary, and it is believed that a primal sketch is extracted very early in visual processing. Such highly simplified representations can be sufficient for accurate object discrimination, but an open question is the role played by color in this process. Rich color information is available in natural scenes, yet artists' sketches are usually monochromatic, and black-and-white movies provide compelling representations of real-world scenes. Also, the contrast sensitivity of color is low at fine spatial scales. We approach the question from the perspective of optimal information processing by a system endowed with limited computational resources. We show that when such limitations are taken into account, the intrinsic statistical properties of natural scenes imply that the most effective strategy is to ignore fine-scale color features and devote most of the bandwidth to gray-scale information. We find confirmation of these information-based predictions from psychophysics measurements of fast-viewing discrimination of natural scenes. As a result, we conclude that the lack of colored features in our visual representation, and our overall low sensitivity to high-frequency color components, are a consequence of an adaptation process, optimizing the size and power consumption of our brain for the visual world we live in.

  15. Chromatic information and feature detection in fast visual analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Viva, Maria M.; Punzi, Giovanni; Shevell, Steven K.

    The visual system is able to recognize a scene based on a sketch made of very simple features. This ability is likely crucial for survival, when fast image recognition is necessary, and it is believed that a primal sketch is extracted very early in visual processing. Such highly simplified representations can be sufficient for accurate object discrimination, but an open question is the role played by color in this process. Rich color information is available in natural scenes, yet artists' sketches are usually monochromatic, and black-and-white movies provide compelling representations of real-world scenes. Also, the contrast sensitivity of color is low at fine spatial scales. We approach the question from the perspective of optimal information processing by a system endowed with limited computational resources. We show that when such limitations are taken into account, the intrinsic statistical properties of natural scenes imply that the most effective strategy is to ignore fine-scale color features and devote most of the bandwidth to gray-scale information. We find confirmation of these information-based predictions from psychophysics measurements of fast-viewing discrimination of natural scenes. As a result, we conclude that the lack of colored features in our visual representation, and our overall low sensitivity to high-frequency color components, are a consequence of an adaptation process, optimizing the size and power consumption of our brain for the visual world we live in.

  16. N400 brain responses to spoken phrases paired with photographs of scenes: implications for visual scene displays in AAC systems.

    PubMed

    Wilkinson, Krista M; Stutzman, Allyson; Seisler, Andrea

    2015-03-01

    Augmentative and alternative communication (AAC) systems are often implemented for individuals whose speech cannot meet their full communication needs. One type of aided display is called a Visual Scene Display (VSD). VSDs consist of integrated scenes (such as photographs) in which language concepts are embedded. Often, the representations of concepts on VSDs are perceptually similar to their referents. Given this physical resemblance, one may ask how well VSDs support development of symbolic functioning. We used brain imaging techniques to examine whether matches and mismatches between the content of spoken messages and photographic images of scenes evoke neural activity similar to activity that occurs to spoken or written words. Electroencephalography (EEG) was recorded from 15 college students who were shown photographs paired with spoken phrases that were either matched or mismatched to the concepts embedded within each photograph. Of interest was the N400 component, a negative deflecting wave 400 ms post-stimulus that is considered to be an index of semantic functioning. An N400 response in the mismatched condition (but not the matched) would replicate brain responses to traditional linguistic symbols. An N400 was found, exclusively in the mismatched condition, suggesting that mismatches between spoken messages and VSD-type representations set the stage for the N400 in ways similar to traditional linguistic symbols.

  17. Method for mapping a natural gas leak

    DOEpatents

    Reichardt, Thomas A [Livermore, CA; Luong, Amy Khai [Dublin, CA; Kulp, Thomas J [Livermore, CA; Devdas, Sanjay [Albany, CA

    2009-02-03

    A system is described that is suitable for use in determining the location of leaks of gases having a background concentration. The system is a point-wise backscatter absorption gas measurement system that measures absorption and distance to each point of an image. The absorption measurement provides an indication of the total amount of a gas of interest, and the distance provides an estimate of the background concentration of gas. The distance is measured from the time-of-flight of a laser pulse that is generated along with the absorption measurement light. The measurements are formatted into an image of the presence of gas in excess of the background. Alternatively, an image of the scene is superimposed on the image of the gas to aid in locating leaks. By further modeling excess gas as a plume having a known concentration profile, the present system provides an estimate of the maximum concentration of the gas of interest.
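
    A hedged numerical sketch of the background-removal idea: the absorption measurement yields a path-integrated concentration, the laser time-of-flight yields range, and range multiplied by the ambient background concentration predicts the background share of that integral. The values below are illustrative, not from the patent.

    ```python
    # Illustrative excess-gas estimate for one pixel of the absorption image.
    C_SPEED = 3.0e8                    # speed of light, m/s

    def excess_gas(path_integral_ppm_m, round_trip_time_s, background_ppm):
        """Excess (leak) contribution in ppm·m for one pixel."""
        range_m = C_SPEED * round_trip_time_s / 2.0     # one-way distance from time of flight
        expected_background = background_ppm * range_m  # ppm·m contributed by ambient gas
        return path_integral_ppm_m - expected_background

    # Pixel with 300 ppm·m measured, 667 ns round trip (~100 m), 1.9 ppm ambient:
    print(excess_gas(300.0, 667e-9, 1.9))   # ~110 ppm·m of gas in excess of background
    ```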

  18. Natural gas leak mapper

    DOEpatents

    Reichardt, Thomas A [Livermore, CA; Luong, Amy Khai [Dublin, CA; Kulp, Thomas J [Livermore, CA; Devdas, Sanjay [Albany, CA

    2008-05-20

    A system is described that is suitable for use in determining the location of leaks of gases having a background concentration. The system is a point-wise backscatter absorption gas measurement system that measures absorption and distance to each point of an image. The absorption measurement provides an indication of the total amount of a gas of interest, and the distance provides an estimate of the background concentration of gas. The distance is measured from the time-of-flight of a laser pulse that is generated along with the absorption measurement light. The measurements are formatted into an image of the presence of gas in excess of the background. Alternatively, an image of the scene is superimposed on the image of the gas to aid in locating leaks. By further modeling excess gas as a plume having a known concentration profile, the present system provides an estimate of the maximum concentration of the gas of interest.

  19. Computer vision enhances mobile eye-tracking to expose expert cognition in natural-scene visual-search tasks

    NASA Astrophysics Data System (ADS)

    Keane, Tommy P.; Cahill, Nathan D.; Tarduno, John A.; Jacobs, Robert A.; Pelz, Jeff B.

    2014-02-01

    Mobile eye-tracking provides a rare opportunity to record and elucidate cognition in action. In our research, we search for patterns in, and distinctions between, the visual-search performance of experts and novices in the geosciences. Traveling to regions formed by various geological processes as part of an introductory field-studies course in geology, we record the prima facie gaze patterns of experts and novices when they are asked to determine the modes of geological activity that formed the scene presented to them. Recording eye video and scene video in natural settings generates complex imagery that requires advanced computer vision techniques to generate registrations and mappings between the views of separate observers. By developing such mappings, we can place many observers into a single mathematical space in which we can spatio-temporally analyze inter- and intra-subject fixations, saccades, and head motions. While working toward perfecting these mappings, we developed an updated experimental setup that allows us to statistically analyze intra-subject eye-movement events without the need for a common domain. Through such analyses we are finding statistical differences between novices and experts in these visual-search tasks. In the course of this research we have developed a unified, open-source software framework for processing, visualization, and interaction with mobile eye-tracking and high-resolution panoramic imagery.

  20. Evaluation of Landscape Structure Using AVIRIS Quicklooks and Ancillary Data

    NASA Technical Reports Server (NTRS)

    Sanderson, Eric W.; Ustin, Susan L.

    1998-01-01

    Currently the best tool for examining landscape structure is remote sensing, because remotely sensed data provide complete and repeatable coverage over landscapes in many climatic regimes. Many sensors, with a variety of spatial scales and temporal repeat cycles, are available. The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has imaged over 4000 scenes from over 100 different sites throughout North America. For each of these scenes, one-band "quicklook" images have been produced for review by AVIRIS investigators. These quicklooks are free, publicly available over the Internet, and provide the most complete set of landscape structure data yet produced. This paper describes the methodologies used to evaluate the landscape structure of quicklooks and generate corresponding datasets for climate, topography and land use. A brief discussion of preliminary results is included at the end. Since quicklooks correspond exactly to their parent AVIRIS scenes, the methods used to derive climate, topography and land use data should be applicable to any AVIRIS analysis.

  1. Effect of Clouds on Apertures of Space-based Air Fluorescence Detectors

    NASA Technical Reports Server (NTRS)

    Sokolsky, P.; Krizmanic, J.

    2003-01-01

    Space-based ultra-high-energy cosmic ray detectors observe fluorescence light from extensive air showers produced by these particles in the troposphere. Clouds can scatter and absorb this light and produce systematic errors in energy determination and spectrum normalization. We study the possibility of using IR remote sensing data from MODIS and GOES satellites to delimit clear areas of the atmosphere. The efficiency for detecting ultra-high-energy cosmic rays whose showers do not intersect clouds is determined for real, night-time cloud scenes. We use the MODIS SST cloud mask product to define clear pixels for cloud scenes along the equator and use the OWL Monte Carlo to generate showers in the cloud scenes. We find the efficiency for cloud-free showers with closest approach of three pixels to a cloudy pixel is 6.5% exclusive of other factors. We conclude that defining a totally cloud-free aperture reduces the sensitivity of space-based fluorescence detectors to unacceptably small levels.

  2. Modeling and analysis of LWIR signature variability associated with 3D and BRDF effects

    NASA Astrophysics Data System (ADS)

    Adler-Golden, Steven; Less, David; Jin, Xuemin; Rynes, Peter

    2016-05-01

    Algorithms for retrieval of surface reflectance, emissivity or temperature from a spectral image almost always assume uniform illumination across the scene and horizontal surfaces with Lambertian reflectance. When these algorithms are used to process real 3-D scenes, the retrieved "apparent" values contain the strong, spatially dependent variations in illumination as well as surface bidirectional reflectance distribution function (BRDF) effects. This is especially problematic with horizontal or near-horizontal viewing, where many observed surfaces are vertical, and where horizontal surfaces can show strong specularity. The goals of this study are to characterize long-wavelength infrared (LWIR) signature variability in a HSI 3-D scene and develop practical methods for estimating the true surface values. We take advantage of synthetic near-horizontal imagery generated with the high-fidelity MultiService Electro-optic Signature (MuSES) model, and compare retrievals of temperature and directional-hemispherical reflectance using standard sky downwelling illumination and MuSES-based non-uniform environmental illumination.

  3. Small-size pedestrian detection in large scene based on fast R-CNN

    NASA Astrophysics Data System (ADS)

    Wang, Shengke; Yang, Na; Duan, Lianghua; Liu, Lu; Dong, Junyu

    2018-04-01

    Pedestrian detection is a canonical sub-problem of object detection that has been in high demand in recent years. Although recent deep-learning object detectors such as Fast/Faster R-CNN show excellent performance for general object detection, they have limited success on small-size pedestrians in large-view scenes. We find that the insufficient resolution of feature maps leads to unsatisfactory accuracy when handling small instances. In this paper, we investigate issues involving Fast R-CNN for pedestrian detection. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection based on Fast R-CNN: we employ a DPM detector to generate accurate proposals and train a Fast R-CNN style network that uses skip connections to concatenate features from different layers, compensating for the coarseness of the deep feature maps. Our approach improves accuracy for small-size pedestrian detection in real large-view scenes.
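
    The sketch below (PyTorch is assumed) shows the skip-connection mechanism in isolation: a deep, coarse feature map is upsampled and concatenated with a shallower, higher-resolution map before the detection head, so small pedestrians keep enough spatial detail. The tensor shapes are illustrative, not the paper's exact network.

    ```python
    # Skip-connection feature fusion: upsample deep features and concatenate
    # them with shallow, higher-resolution features.
    import torch
    import torch.nn.functional as F

    conv3_feats = torch.randn(1, 256, 80, 80)   # shallow layer: fine resolution
    conv5_feats = torch.randn(1, 512, 20, 20)   # deep layer: coarse but semantic

    up = F.interpolate(conv5_feats, size=conv3_feats.shape[2:], mode="bilinear",
                       align_corners=False)      # bring the deep map to fine resolution
    fused = torch.cat([conv3_feats, up], dim=1)  # 256 + 512 = 768 channels
    print(fused.shape)                           # torch.Size([1, 768, 80, 80])
    ```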

  4. Recreation of three-dimensional objects in a real-time simulated environment by means of a panoramic single lens stereoscopic image-capturing device

    NASA Astrophysics Data System (ADS)

    Wong, Erwin

    2000-03-01

    Traditional linear imaging methods limit the viewer to a single fixed-point perspective. By means of a single-lens, multiple-perspective mirror system, a 360-degree representation of the area around the camera is reconstructed. This reconstruction overcomes the limitations of a traditional camera by providing the viewer with many different perspectives. By constructing the mirror as a hemispherical surface with multiple focal lengths at various diameters, and by placing a parabolic mirror overhead, a stereoscopic image can be extracted from the image captured by a high-resolution camera placed beneath the mirror. Image extraction and correction are performed by computer processing of the captured image, which presents up to five distinguishable viewpoints from which a computer can extrapolate pseudo-perspective data. Geometric and depth-of-field information can be extrapolated by comparing and isolating objects within the virtual scene post-processed by the computer. Combining these data with scene-rendering software gives the viewer the ability to choose a desired viewing position, multiple dynamic perspectives, and virtually constructed perspectives based on minimal existing data. An examination of the workings of the mirror relay system is provided, including possible image extrapolation and correction methods. Generation of interpolated and virtually constructed data is also discussed.

  5. Cloud Classification in Polar and Desert Regions and Smoke Classification from Biomass Burning Using a Hierarchical Neural Network

    NASA Technical Reports Server (NTRS)

    Alexander, June; Corwin, Edward; Lloyd, David; Logar, Antonette; Welch, Ronald

    1996-01-01

    This research focuses on a new neural network scene classification technique. The task is to identify scene elements in Advanced Very High Resolution Radiometer (AVHRR) data from three scene types: polar, desert, and smoke from biomass burning in South America (smoke). The ultimate goal of this research is to design and implement a computer system which will identify the clouds present in a whole-Earth satellite view as a means of tracking global climate changes. Previous research has reported results for rule-based systems (Tovinkere et al., 1992, 1993), for standard back propagation (Watters et al., 1993), and for a hierarchical approach (Corwin et al., 1994) for polar data. This research uses a hierarchical neural network with don't-care conditions and applies this technique to complex scenes. A hierarchical neural network consists of a switching network and a collection of leaf networks. The idea of the hierarchical neural network is that it is a simpler task to classify a certain pattern from a subset of patterns than it is to classify a pattern from the entire set. Therefore, the first task is to cluster the classes into groups. The switching, or decision, network performs an initial classification by selecting a leaf network. The leaf networks contain a reduced set of similar classes, and it is in the various leaf networks that the actual classification takes place. The grouping of classes in the various leaf networks is determined by applying an iterative clustering algorithm. Several clustering algorithms were investigated, but due to the size of the data sets, the exhaustive search algorithms were eliminated. A heuristic approach using a confusion matrix from a lightly trained neural network provided the basis for the clustering algorithm. Once the clusters have been identified, the hierarchical network can be trained. The approach of using don't-care nodes results from the difficulty of generating extremely complex surfaces to separate one class from all of the others. This approach finds pairwise separating surfaces and forms the more complex separating surface from combinations of simpler surfaces. This technique both reduces training time and improves accuracy over the previously reported results. Accuracies of 97.47%, 95.70%, and 99.05% were achieved for the polar, desert, and smoke data sets, respectively.
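
    A toy version of the hierarchical scheme appears below: a switching network predicts a class group, and a leaf network trained only on that group makes the final decision. The grouping here is hard-coded, whereas the paper derives it from a confusion matrix; the data are synthetic.

    ```python
    # Toy hierarchical classifier: a switching network routes each sample to a
    # leaf network trained on a reduced group of similar classes.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.random((400, 8))
    y = rng.integers(0, 4, 400)                  # 4 scene classes
    groups = {0: [0, 1], 1: [2, 3]}              # leaf 0 handles {0,1}; leaf 1 handles {2,3}
    to_group = {c: g for g, cs in groups.items() for c in cs}

    # Switching network: predicts the group, not the final class.
    switch = MLPClassifier(max_iter=500).fit(X, [to_group[c] for c in y])

    # Leaf networks: each trained only on its group's samples.
    leaves = {}
    for g, cs in groups.items():
        mask = np.isin(y, cs)
        leaves[g] = MLPClassifier(max_iter=500).fit(X[mask], y[mask])

    def classify(x):
        g = switch.predict(x.reshape(1, -1))[0]          # initial routing decision
        return leaves[g].predict(x.reshape(1, -1))[0]    # final classification

    print(classify(X[0]), "true:", y[0])
    ```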

  6. A High-Fidelity Virtual Environment for the Study of Paranoia

    PubMed Central

    Broome, Matthew R.; Zányi, Eva; Selmanovic, Elmedin; Czanner, Silvester; Birchwood, Max; Chalmers, Alan; Singh, Swaran P.

    2013-01-01

    Psychotic disorders carry social and economic costs for sufferers and society. Recent evidence highlights the risk posed by urban upbringing and social deprivation in the genesis of paranoia and psychosis. Evidence based psychological interventions are often not offered because of a lack of therapists. Virtual reality (VR) environments have been used to treat mental health problems. VR may be a way of understanding the aetiological processes in psychosis and increasing psychotherapeutic resources for its treatment. We developed a high-fidelity virtual reality scenario of an urban street scene to test the hypothesis that virtual urban exposure is able to generate paranoia to a comparable or greater extent than scenarios using indoor scenes. Participants (n = 32) entered the VR scenario for four minutes, after which time their degree of paranoid ideation was assessed. We demonstrated that the virtual reality scenario was able to elicit paranoia in a nonclinical, healthy group and that an urban scene was more likely to lead to higher levels of paranoia than a virtual indoor environment. We suggest that this study offers evidence to support the role of exposure to factors in the urban environment in the genesis and maintenance of psychotic experiences and symptoms. The realistic high-fidelity street scene scenario may offer a useful tool for therapists. PMID:24455255

  7. A high-fidelity virtual environment for the study of paranoia.

    PubMed

    Broome, Matthew R; Zányi, Eva; Hamborg, Thomas; Selmanovic, Elmedin; Czanner, Silvester; Birchwood, Max; Chalmers, Alan; Singh, Swaran P

    2013-01-01

    Psychotic disorders carry social and economic costs for sufferers and society. Recent evidence highlights the risk posed by urban upbringing and social deprivation in the genesis of paranoia and psychosis. Evidence based psychological interventions are often not offered because of a lack of therapists. Virtual reality (VR) environments have been used to treat mental health problems. VR may be a way of understanding the aetiological processes in psychosis and increasing psychotherapeutic resources for its treatment. We developed a high-fidelity virtual reality scenario of an urban street scene to test the hypothesis that virtual urban exposure is able to generate paranoia to a comparable or greater extent than scenarios using indoor scenes. Participants (n = 32) entered the VR scenario for four minutes, after which time their degree of paranoid ideation was assessed. We demonstrated that the virtual reality scenario was able to elicit paranoia in a nonclinical, healthy group and that an urban scene was more likely to lead to higher levels of paranoia than a virtual indoor environment. We suggest that this study offers evidence to support the role of exposure to factors in the urban environment in the genesis and maintenance of psychotic experiences and symptoms. The realistic high-fidelity street scene scenario may offer a useful tool for therapists.

  8. Virtual environments for scene of crime reconstruction and analysis

    NASA Astrophysics Data System (ADS)

    Howard, Toby L. J.; Murta, Alan D.; Gibson, Simon

    2000-02-01

    This paper describes research conducted in collaboration with Greater Manchester Police (UK) to evaluate the utility of Virtual Environments for scene of crime analysis, forensic investigation, and law enforcement briefing and training. We present an illustrated case study of the construction of a high-fidelity virtual environment, intended to match a particular real-life crime scene as closely as possible. We describe and evaluate the combination of several approaches including: the use of the Manchester Scene Description Language for constructing complex geometrical models; the application of a radiosity rendering algorithm with several novel features based on human perceptual considerations; texture extraction from forensic photography; and experiments with interactive walkthroughs and large-screen stereoscopic display of the virtual environment implemented using the MAVERIK system. We also discuss the potential applications of Virtual Environment techniques in the Law Enforcement and Forensic communities.

  9. Research on three-dimensional visualization based on virtual reality and Internet

    NASA Astrophysics Data System (ADS)

    Wang, Zongmin; Yang, Haibo; Zhao, Hongling; Li, Jiren; Zhu, Qiang; Zhang, Xiaohong; Sun, Kai

    2007-06-01

    To disclose and display water information, we investigate a three-dimensional visualization system based on Virtual Reality (VR) and the Internet, both to demonstrate "digital water conservancy" applications and to support routine reservoir management. To explore and mine in-depth information, after building a high-resolution DEM of reliable quality, we study topographic analysis, visibility analysis, and reservoir volume computation. In addition, parameters including slope, water level, and NDVI are selected to classify landslide-prone areas within the water-level fluctuation zone of the reservoir area. To establish the virtual reservoir scene, two methods are used to convey immersion, interaction, and imagination (3I). The first virtual scene contains more detailed textures to increase realism and runs on a graphical workstation with the virtual reality engine OpenSceneGraph (OSG). The second virtual scene is intended for Internet users and has fewer details to ensure fluent rendering speed.

  10. Visual cognition

    PubMed Central

    Cavanagh, Patrick

    2011-01-01

    Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label “visual cognition” is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated part, of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. PMID:21329719

  11. Generation, Validation, and Application of Abundance Map Reference Data for Spectral Unmixing

    NASA Astrophysics Data System (ADS)

    Williams, McKay D.

    Reference data ("ground truth") maps traditionally have been used to assess the accuracy of imaging spectrometer classification algorithms. However, these reference data can be prohibitively expensive to produce, often do not include sub-pixel abundance estimates necessary to assess spectral unmixing algorithms, and lack published validation reports. Our research proposes methodologies to efficiently generate, validate, and apply abundance map reference data (AMRD) to airborne remote sensing scenes. We generated scene-wide AMRD for three different remote sensing scenes using our remotely sensed reference data (RSRD) technique, which spatially aggregates unmixing results from fine scale imagery (e.g., 1-m Ground Sample Distance (GSD)) to co-located coarse scale imagery (e.g., 10-m GSD or larger). We validated the accuracy of this methodology by estimating AMRD in 51 randomly-selected 10 m x 10 m plots, using seven independent methods and observers, including field surveys by two observers, imagery analysis by two observers, and RSRD using three algorithms. Results indicated statistically-significant differences between all versions of AMRD, suggesting that all forms of reference data need to be validated. Given these significant differences between the independent versions of AMRD, we proposed that the mean of all (MOA) versions of reference data for each plot and class were most likely to represent true abundances. We then compared each version of AMRD to MOA. Best case accuracy was achieved by a version of imagery analysis, which had a mean coverage area error of 2.0%, with a standard deviation of 5.6%. One of the RSRD algorithms was nearly as accurate, achieving a mean error of 3.0%, with a standard deviation of 6.3%, showing the potential of RSRD-based AMRD generation. Application of validated AMRD to specific coarse scale imagery involved three main parts: 1) spatial alignment of coarse and fine scale imagery, 2) aggregation of fine scale abundances to produce coarse scale imagery-specific AMRD, and 3) demonstration of comparisons between coarse scale unmixing abundances and AMRD. Spatial alignment was performed using our scene-wide spectral comparison (SWSC) algorithm, which aligned imagery with accuracy approaching the distance of a single fine scale pixel. We compared simple rectangular aggregation to coarse sensor point spread function (PSF) aggregation, and found that the PSF approach returned lower error, but that rectangular aggregation more accurately estimated true abundances at ground level. We demonstrated various metrics for comparing unmixing results to AMRD, including mean absolute error (MAE) and linear regression (LR). We additionally introduced reference data mean adjusted MAE (MA-MAE), and reference data confidence interval adjusted MAE (CIA-MAE), which account for known error in the reference data itself. MA-MAE analysis indicated that fully constrained linear unmixing of coarse scale imagery across all three scenes returned an error of 10.83% per class and pixel, with regression analysis yielding a slope = 0.85, intercept = 0.04, and R2 = 0.81. Our reference data research has demonstrated a viable methodology to efficiently generate, validate, and apply AMRD to specific examples of airborne remote sensing imagery, thereby enabling direct quantitative assessment of spectral unmixing performance.

  12. Scan Patterns Predict Sentence Production in the Cross-Modal Processing of Visual Scenes

    ERIC Educational Resources Information Center

    Coco, Moreno I.; Keller, Frank

    2012-01-01

    Most everyday tasks involve multiple modalities, which raises the question of how the processing of these modalities is coordinated by the cognitive system. In this paper, we focus on the coordination of visual attention and linguistic processing during speaking. Previous research has shown that objects in a visual scene are fixated before they…

  13. A procedure for radiometric recalibration of Landsat 5 TM reflective-band data

    USGS Publications Warehouse

    Chander, G.; Haque, M.O.; Micijevic, E.; Barsi, J.A.

    2010-01-01

    From the Landsat program's inception in 1972 to the present, the Earth science user community has been benefiting from a historical record of remotely sensed data. The multispectral data from the Landsat 5 (L5) Thematic Mapper (TM) sensor provide the backbone for this extensive archive. Historically, the radiometric calibration procedure for the L5 TM imagery used the detectors' response to the internal calibrator (IC) on a scene-by-scene basis to determine the gain and offset for each detector. The IC system degraded with time, causing radiometric calibration errors up to 20%. In May 2003, the L5 TM data processed and distributed by the U.S. Geological Survey (USGS) Earth Resources Observation and Science Center through the National Landsat Archive Production System (NLAPS) were updated to use a lifetime lookup-table (LUT) gain model to radiometrically calibrate TM data instead of using scene-specific IC gains. Further modification of the gain model was performed in 2007. The L5 TM data processed using IC prior to the calibration update do not benefit from the recent calibration revisions. A procedure has been developed to give users the ability to recalibrate their existing level-1 products. The best recalibration results are obtained if the work-order report that was included in the original standard data product delivery is available. However, if users do not have the original work-order report, the IC trends can be used for recalibration. The IC trends were generated using the radiometric gain trends recorded in the NLAPS database. This paper provides the details of the recalibration procedure for the following: 1) data processed using IC where users have the work-order file; 2) data processed using IC where users do not have the work-order file; 3) data processed using prelaunch calibration parameters; and 4) data processed using the previous version of the LUT (e.g., LUT03) that was released before April 2, 2007.
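
    As a hedged sketch of the recalibration idea, the snippet below inverts the original scene-specific IC calibration to recover raw counts and then applies revised LUT coefficients. The linear DN-to-radiance model is the standard Landsat form, but the direction of the gain/bias convention and all coefficient values here are assumptions, not actual calibration parameters.

    ```python
    # Illustrative recalibration: undo the IC calibration recorded in the
    # work-order report, then reapply the lifetime LUT gain model.
    import numpy as np

    def recalibrate(radiance_old, gain_ic, bias_ic, gain_lut, bias_lut):
        """Map a level-1 radiance product from IC calibration to LUT calibration."""
        dn = radiance_old * gain_ic + bias_ic        # invert the original calibration
        return (dn - bias_lut) / gain_lut            # apply the revised LUT calibration

    band = np.array([[50.2, 61.7], [48.9, 55.0]])    # old radiances, W/(m^2 sr um)
    print(recalibrate(band, gain_ic=1.04, bias_ic=2.5, gain_lut=1.00, bias_lut=2.3))
    ```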

  14. Full-color large-scaled computer-generated holograms for physical and non-physical objects

    NASA Astrophysics Data System (ADS)

    Matsushima, Kyoji; Tsuchiyama, Yasuhiro; Sonobe, Noriaki; Masuji, Shoya; Yamaguchi, Masahiro; Sakamoto, Yuji

    2017-05-01

    Several full-color high-definition CGHs are created to reconstruct 3D scenes that include real, physical objects. The fields of the physical objects are generated or captured using three techniques: a 3D scanner, synthetic-aperture digital holography, and multi-viewpoint images. Full-color reconstruction of the high-definition CGHs is realized with RGB color filters. Optical reconstructions are presented to verify these techniques.

  15. Procedural 3d Modelling for Traditional Settlements. The Case Study of Central Zagori

    NASA Astrophysics Data System (ADS)

    Kitsakis, D.; Tsiliakou, E.; Labropoulos, T.; Dimopoulou, E.

    2017-02-01

    Over the last decades, 3D modelling has been a fast-growing field in Geographic Information Science, extensively applied in various domains including the reconstruction and visualization of cultural heritage, especially monuments and traditional settlements. Technological advances in computer graphics allow for the modelling of complex 3D objects with high precision and accuracy. Procedural modelling is an effective tool and a relatively novel method based on the concept of algorithmic modelling. It is used to generate accurate 3D models and composite facade textures from sets of rules called Computer Generated Architecture (CGA) grammars, which define the objects' detailed geometry, rather than altering or editing the model manually. In this paper, procedural modelling tools are exploited to generate the 3D model of a traditional settlement in the region of Central Zagori in Greece. The detailed geometries of the 3D models were derived from the application of shape grammars to selected footprints, and the process resulted in a final 3D model that optimally describes the built environment of Central Zagori at three Levels of Detail (LoD). The final 3D scene was exported and published as a 3D web scene that can be viewed with the CityEngine 3D viewer, offering a walkthrough of the whole model, as in virtual reality or game environments. This research addresses issues of texture precision, LoD for 3D objects, and interactive visualization within one 3D scene, as well as the effectiveness of large-scale modelling, along with the benefits and drawbacks of procedural modelling techniques in the field of cultural heritage, and more specifically the 3D modelling of traditional settlements.

  16. Mapping burned areas using dense time-series of Landsat data

    USGS Publications Warehouse

    Hawbaker, Todd J.; Vanderhoof, Melanie; Beal, Yen-Ju G.; Takacs, Joshua; Schmidt, Gail L.; Falgout, Jeff T.; Williams, Brad; Brunner, Nicole M.; Caldwell, Megan K.; Picotte, Joshua J.; Howard, Stephen M.; Stitt, Susan; Dwyer, John L.

    2017-01-01

    Complete and accurate burned area data are needed to document patterns of fires, to quantify relationships between the patterns and drivers of fire occurrence, and to assess the impacts of fires on human and natural systems. Unfortunately, in many areas existing fire occurrence datasets are known to be incomplete. Consequently, the need to systematically collect burned area information has been recognized by the United Nations Framework Convention on Climate Change and the Intergovernmental Panel on Climate Change, which have both called for the production of essential climate variables (ECVs), including information about burned area. In this paper, we present an algorithm that identifies burned areas in dense time-series of Landsat data to produce the Landsat Burned Area Essential Climate Variable (BAECV) products. The algorithm uses gradient boosted regression models to generate burn probability surfaces using band values and spectral indices from individual Landsat scenes, lagged reference conditions, and change metrics between the scene and reference predictors. Burn classifications are generated from the burn probability surfaces using pixel-level thresholding in combination with a region growing process. The algorithm can be applied anywhere Landsat and training data are available. For this study, BAECV products were generated for the conterminous United States from 1984 through 2015. These products consist of pixel-level burn probabilities for each Landsat scene, in addition to annual composites including the maximum burn probability and a burn classification. We compared the BAECV burn classification products to the existing Global Fire Emissions Database (GFED; 1997–2015) and Monitoring Trends in Burn Severity (MTBS; 1984–2013) data. We found that the BAECV products mapped 36% more burned area than the GFED and 116% more burned area than MTBS. Differences between the BAECV products and the GFED were especially high in the West and East where the BAECV products mapped 32% and 88% more burned area, respectively. However, the BAECV products found less burned area than the GFED in regions with frequent agricultural fires. Compared to the MTBS data, the BAECV products identified 31% more burned area in the West, 312% more in the Great Plains, and 233% more in the East. Most pixels in the MTBS data were detected by the BAECV, regardless of burn severity. The BAECV products document patterns of fire similar to those in the GFED but also showed patterns of fire that are not well characterized by the existing MTBS data. We anticipate the BAECV products will be useful to studies that seek to understand past patterns of fire occurrence, the drivers that created them, and the impacts fires have on natural and human systems.
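
    The classification stage described above can be sketched as follows: a gradient boosted model produces per-pixel burn probabilities, high-confidence seeds are thresholded, and burns grow into connected regions that pass a looser threshold. The thresholds, features, and data are illustrative, not BAECV production values.

    ```python
    # Sketch: burn probabilities from gradient boosting, then seed thresholding
    # with region growing into a looser candidate mask.
    import numpy as np
    from scipy import ndimage
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(3)
    X_train = rng.random((1000, 6))                  # band values, indices, change metrics
    y_train = (X_train[:, 0] > 0.7).astype(int)      # toy "burned" labels
    model = GradientBoostingClassifier().fit(X_train, y_train)

    scene = rng.random((100, 100, 6))                # one scene's predictor stack
    prob = model.predict_proba(scene.reshape(-1, 6))[:, 1].reshape(100, 100)

    seeds = prob > 0.97                              # high-confidence burned pixels
    candidates = prob > 0.80                         # looser "grow into" mask
    labels, _ = ndimage.label(candidates)            # connected candidate regions
    keep = np.unique(labels[seeds])                  # regions containing a seed
    burned = np.isin(labels, keep[keep > 0])         # final burn classification
    print("burned pixels:", int(burned.sum()))
    ```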

  17. A new method for text detection and recognition in indoor scene for assisting blind people

    NASA Astrophysics Data System (ADS)

    Jabnoun, Hanen; Benzarti, Faouzi; Amiri, Hamid

    2017-03-01

    Developing assistive systems for handicapped persons has become a challenging task in research projects. Recently, a variety of tools have been designed to help visually impaired or blind people as visual substitution systems. The majority of these tools are based on converting input information into auditory or tactile sensory information. Furthermore, object recognition and text retrieval are exploited in visual substitution systems: text detection and recognition provide a description of the surrounding environment, so that a blind person can readily understand the scene. In this work, we introduce a method for detecting and recognizing text in indoor scenes. The process consists of detecting regions of interest that may contain text using connected components; text recognition is then performed by image correlation. Such an assistive component should be simple, so that users can obtain the most informative feedback within the shortest time.
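
    A rough sketch of the described pipeline (OpenCV is assumed): candidate text regions come from connected components on a binarized image, and each candidate is scored against a stored template by normalized cross-correlation. The synthetic scene, size filters, and thresholds are assumptions.

    ```python
    # Connected-component text candidates + template correlation, on a
    # synthetic scene so the sketch runs without external image files.
    import cv2
    import numpy as np

    scene = np.full((120, 320), 255, np.uint8)       # white "wall"
    cv2.putText(scene, "EXIT", (40, 70), cv2.FONT_HERSHEY_SIMPLEX, 1.5, 0, 3)

    def find_text_candidates(gray):
        # Candidate regions of interest via connected components on a binarized image.
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
        boxes = []
        for i in range(1, n):                        # label 0 is the background
            x, y, w, h, area = stats[i]
            if 8 < h < 100 and area > 30:            # crude character-size filter
                boxes.append((x, y, w, h))
        return boxes

    def correlate(region, template):
        # Normalized cross-correlation between a candidate region and a template.
        t = cv2.resize(template, (region.shape[1], region.shape[0]))
        return float(cv2.matchTemplate(region, t, cv2.TM_CCOEFF_NORMED).max())

    template = scene[40:80, 30:150].copy()           # stand-in stored word template
    for (x, y, w, h) in find_text_candidates(scene):
        score = correlate(scene[y:y+h, x:x+w], template)
        print((x, y, w, h), f"correlation={score:.2f}")
    ```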

  18. SIR-B ocean-wave enhancement with fast Fourier transform techniques

    NASA Technical Reports Server (NTRS)

    Tilley, David G.

    1987-01-01

    Shuttle Imaging Radar (SIR-B) imagery is Fourier filtered to remove the estimated system-transfer function, reduce speckle noise, and produce ocean scenes with a gray scale that is proportional to wave height. The SIR-B system response to speckled scenes of uniform surfaces yields an estimate of the stationary wavenumber response of the imaging radar, modeled by the 15 even terms of an eighth-order two-dimensional polynomial. Speckle can also be used to estimate the dynamic wavenumber response of the system due to surface motion during the aperture synthesis period, modeled with a single adaptive parameter describing an exponential correlation along track. A Fourier filter can then be devised to correct for the wavenumber response of the remote sensor and scene correlation, with subsequent subtraction of an estimate of the speckle noise component. A linearized velocity bunching model, combined with a surface tilt and hydrodynamic model, is incorporated in the Fourier filter to derive estimates of wave height from the radar intensities corresponding to individual picture elements.
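
    The Fourier-domain correction can be sketched as below: divide the image spectrum by an estimated system transfer function, then subtract an estimated speckle-noise floor from the spectral magnitude. The radial transfer-function model is a simple placeholder for the paper's fitted eighth-order polynomial, and the noise-floor estimate is an assumption.

    ```python
    # Sketch of Fourier restoration: inverse-filter the system response, then
    # subtract a white speckle-noise floor in the magnitude spectrum.
    import numpy as np

    def fourier_restore(image, noise_floor=0.05):
        F = np.fft.fftshift(np.fft.fft2(image))
        ny, nx = image.shape
        ky, kx = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx),
                             indexing="ij")
        # Placeholder transfer function: smooth radial falloff in wavenumber.
        H = 1.0 / (1.0 + kx**2 + ky**2)
        F_corr = F / H                                    # undo the system response
        mag = np.abs(F_corr)
        mag_clean = np.maximum(mag - noise_floor * mag.mean(), 0)  # noise-floor subtraction
        F_clean = mag_clean * np.exp(1j * np.angle(F_corr))
        return np.fft.ifft2(np.fft.ifftshift(F_clean)).real

    img = np.random.rand(128, 128)                        # stand-in speckled scene
    restored = fourier_restore(img)
    ```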

  19. Mise en Scene: Conversion of Scenarios to CSP Traces for the Requirements-to-Design-to-Code Project

    NASA Technical Reports Server (NTRS)

    Carter, John D.; Gardner, William B.; Rash, James L.; Hinchey, Michael G.

    2007-01-01

    The "Requirements-to-Design-to-Code" (R2D2C) project at NASA's Goddard Space Flight Center is based on deriving a formal specification expressed in Communicating Sequential Processes (CSP) notation from system requirements supplied in the form of CSP traces. The traces, in turn, are to be extracted from scenarios, a user-friendly medium often used to describe the required behavior of computer systems under development. This work, called Mise en Scene, defines a new scenario medium (Scenario Notation Language, SNL) suitable for control-dominated systems, coupled with a two-stage process for automatic translation of scenarios to a new trace medium (Trace Notation Language, TNL) that encompasses CSP traces. Mise en Scene is offered as an initial solution to the problem of the scenarios-to-traces "D2" phase of R2D2C. A survey of the "scenario" concept and some case studies are also provided.

  20. A Method of Sky Ripple Residual Nonuniformity Reduction for a Cooled Infrared Imager and Hardware Implementation.

    PubMed

    Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin

    2017-05-08

    Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky-scene observations. Ripple RNU seriously degrades imaging quality, especially for small-target detection, and is difficult to eliminate using calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm that uses fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading scene details. We test the algorithm on a real infrared sequence, comparing it with several well-established methods. The results show that the algorithm has clear advantages over the tested methods in terms of detail preservation and convergence speed for ripple RNU correction. Furthermore, we implement the architecture in a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which has two advantages: (1) low resource consumption; and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system.
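
    A minimal sketch of a temporal high-pass correction with scene gating appears below: each pixel's slow temporal mean is treated as fixed-pattern offset and subtracted, and the mean is updated only where inter-frame change is small so genuine scene detail is not washed out. The paper's fuzzy scene classification is reduced here to a hard threshold for brevity, and all parameters are assumptions.

    ```python
    # Temporal high-pass nonuniformity correction with a hard scene-change gate.
    import numpy as np

    def temporal_highpass_nuc(frames, alpha=0.02, gate=8.0):
        mean = frames[0].astype(float)               # running per-pixel low-pass estimate
        out = []
        for frame in frames:
            f = frame.astype(float)
            static = np.abs(f - mean) < gate         # pixels judged to be background
            mean[static] += alpha * (f - mean)[static]   # update offsets only there
            out.append(f - mean + mean.mean())       # remove pattern, keep global level
        return out

    rng = np.random.default_rng(4)
    fixed_pattern = rng.normal(0, 3, (64, 64))       # ripple-like residual nonuniformity
    frames = [100 + fixed_pattern + rng.normal(0, 1, (64, 64)) for _ in range(50)]
    corrected = temporal_highpass_nuc(frames)
    print(np.std(frames[-1]), "->", np.std(corrected[-1]))   # pattern largely removed
    ```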
