Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3
NASA Astrophysics Data System (ADS)
Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.
2014-12-01
The system "Immersive Virtual Moon Scene" is used to show the virtual environment of Moon surface in immersive environment. Utilizing stereo 360-degree imagery from panoramic camera of Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, stereo 360-degree panorama stitched by 112 images is projected onto inside surface of sphere according to panorama orientation coordinates and camera parameters to build the virtual scene. Stars can be seen from the Moon at any time. So we render the sun, planets and stars according to time and rover's location based on Hipparcos catalogue as the background on the sphere. Immersing in the stereo virtual environment created by this imaged-based rendering technique, the operator can zoom, pan to interact with the virtual Moon scene and mark interesting objects. Hardware of the immersive virtual Moon system is made up of four high lumen projectors and a huge curve screen which is 31 meters long and 5.5 meters high. This system which take all panoramic camera data available and use it to create an immersive environment, enable operator to interact with the environment and mark interesting objects contributed heavily to establishment of science mission goals in Chang'E-3 mission. After Chang'E-3 mission, the lab with this system will be open to public. Besides this application, Moon terrain stereo animations based on Chang'E-1 and Chang'E-2 data will be showed to public on the huge screen in the lab. Based on the data of lunar exploration,we will made more immersive virtual moon scenes and animations to help the public understand more about the Moon in the future.
Real-time synchronized multiple-sensor IR/EO scene generation utilizing the SGI Onyx2
NASA Astrophysics Data System (ADS)
Makar, Robert J.; O'Toole, Brian E.
1998-07-01
An approach to utilize the symmetric multiprocessing environment of the Silicon Graphics Inc. (SGI) Onyx2 has been developed to support the generation of IR/EO scenes in real-time. This development, supported by the Naval Air Warfare Center Aircraft Division (NAWC/AD), focuses on high frame rate hardware-in-the-loop testing of multiple sensor avionics systems. In the past, real-time IR/EO scene generators have been developed as custom architectures that were often expensive and difficult to maintain. Previous COTS scene generation systems, designed and optimized for visual simulation, could not be adapted for accurate IR/EO sensor stimulation. The new Onyx2 connection mesh architecture made it possible to develop a more economical system while maintaining the fidelity needed to stimulate actual sensors. An SGI-based Real-time IR/EO Scene Simulator (RISS) system was developed to utilize the Onyx2's fast multiprocessing hardware to perform real-time IR/EO scene radiance calculations. During real-time scene simulation, the multiprocessors are used to update polygon vertex locations and compute radiometrically accurate floating point radiance values. The output of this process can be utilized to drive a variety of scene rendering engines. Recent advancements in COTS graphics systems, such as the Silicon Graphics InfiniteReality, make a total COTS solution possible for some classes of sensors. This paper will discuss the critical technologies that apply to infrared scene generation and hardware-in-the-loop testing using SGI compatible hardware. Specifically, the application of RISS high-fidelity real-time radiance algorithms on the SGI Onyx2's multiprocessing hardware will be discussed. Also, issues relating to external real-time control of multiple synchronized scene generation channels will be addressed.
Modeling human pilot cue utilization with applications to simulator fidelity assessment.
Zeyada, Y; Hess, R A
2000-01-01
An analytical investigation to model the manner in which pilots perceive and utilize visual, proprioceptive, and vestibular cues in a ground-based flight simulator was undertaken. Data from a NASA Ames Research Center vertical motion simulator study of a simple, single-degree-of-freedom rotorcraft bob-up/down maneuver were employed in the investigation. The study was part of a larger research effort that has the creation of a methodology for determining flight simulator fidelity requirements as its ultimate goal. The study utilized a closed-loop feedback structure of the pilot/simulator system that included the pilot, the cockpit inceptor, the dynamics of the simulated vehicle, and the motion system. With the exception of time delays that accrued in visual scene production in the simulator, visual scene effects were not included in this study. Pilot/vehicle analysis and fuzzy-inference identification were employed to study the changes in fidelity that occurred as the characteristics of the motion system were varied over five configurations. The data from three of the five pilots who participated in the experimental study were analyzed in the fuzzy-inference identification. Results indicate that both the analytical pilot/vehicle analysis and the fuzzy-inference identification can be used to identify changes in simulator fidelity for the task examined.
Systems and Methods for Automated Water Detection Using Visible Sensors
NASA Technical Reports Server (NTRS)
Rankin, Arturo L. (Inventor); Matthies, Larry H. (Inventor); Bellutta, Paolo (Inventor)
2016-01-01
Systems and methods are disclosed that include automated machine vision that can utilize images of scenes captured by a 3D imaging system configured to image light within the visible light spectrum to detect water. One embodiment includes autonomously detecting water bodies within a scene including capturing at least one 3D image of a scene using a sensor system configured to detect visible light and to measure distance from points within the scene to the sensor system, and detecting water within the scene using a processor configured to detect regions within each of the at least one 3D images that possess at least one characteristic indicative of the presence of water.
ERBE Geographic Scene and Monthly Snow Data
NASA Technical Reports Server (NTRS)
Coleman, Lisa H.; Flug, Beth T.; Gupta, Shalini; Kizer, Edward A.; Robbins, John L.
1997-01-01
The Earth Radiation Budget Experiment (ERBE) is a multisatellite system designed to measure the Earth's radiation budget. The ERBE data processing system consists of several software packages or subsystems, each designed to perform a particular task. The primary task of the Inversion Subsystem is to reduce satellite altitude radiances to fluxes at the top of the Earth's atmosphere. To accomplish this, angular distribution models (ADMs) are required. These ADMs are a function of viewing and solar geometry and of the scene type as determined by the ERBE scene identification algorithm, which is part of the Inversion Subsystem. The Inversion Subsystem utilizes 12 scene types; the scene type is found by combining the most probable cloud cover, which is determined statistically by the scene identification algorithm, with the underlying geographic scene type. This Contractor Report describes how the geographic scene type is determined on a monthly basis.
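The combination step lends itself to a simple lookup. The sketch below is illustrative only: the cloud-cover class boundaries follow the usual ERBE convention (clear, partly cloudy, mostly cloudy, overcast), but the table entries are placeholders rather than the actual twelve ERBE scene types.

```python
def cloud_class(cloud_fraction: float) -> str:
    """Bin the most probable cloud cover into the four ERBE cover classes."""
    if cloud_fraction < 0.05:
        return "clear"
    if cloud_fraction < 0.50:
        return "partly-cloudy"
    if cloud_fraction < 0.95:
        return "mostly-cloudy"
    return "overcast"

# Placeholder (geographic type, cover class) -> scene type table.
SCENE_TYPE = {
    ("ocean", "clear"): "clear-ocean",
    ("desert", "clear"): "clear-desert",
    ("land", "partly-cloudy"): "partly-cloudy-over-land",
    ("ocean", "overcast"): "overcast",
}

def scene_type(geo_type: str, cloud_fraction: float) -> str:
    """Combine cloud cover with the underlying geographic scene type."""
    return SCENE_TYPE.get((geo_type, cloud_class(cloud_fraction)), "unknown")
```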
A Methodology for Evaluating the Fidelity of Ground-Based Flight Simulators
NASA Technical Reports Server (NTRS)
Zeyada, Y.; Hess, R. A.
1999-01-01
An analytical and experimental investigation was undertaken to model the manner in which pilots perceive and utilize visual, proprioceptive, and vestibular cues in a ground-based flight simulator. The study was part of a larger research effort which has the creation of a methodology for determining flight simulator fidelity requirements as its ultimate goal. The study utilized a closed-loop feedback structure of the pilot/simulator system which included the pilot, the cockpit inceptor, the dynamics of the simulated vehicle and the motion system. With the exception of time delays which accrued in visual scene production in the simulator, visual scene effects were not included in this study. The NASA Ames Vertical Motion Simulator was used in a simple, single-degree of freedom rotorcraft bob-up/down maneuver. Pilot/vehicle analysis and fuzzy-inference identification were employed to study the changes in fidelity which occurred as the characteristics of the motion system were varied over five configurations. The data from three of the five pilots that participated in the experimental study were analyzed in the fuzzy-inference identification. Results indicate that both the analytical pilot/vehicle analysis and the fuzzy-inference identification can be used to reflect changes in simulator fidelity for the task examined.
NASA Astrophysics Data System (ADS)
Tsifouti, A.; Triantaphillidou, S.; Larabi, M. C.; Doré, G.; Bilissi, E.; Psarrou, A.
2015-01-01
In this investigation we study the effects of compression and frame rate reduction on the performance of four video analytics (VA) systems in a low-complexity scenario, the Sterile Zone (SZ). Additionally, we identify the scene parameters that most influence the performance of these systems. The SZ scenario is a scene consisting of a fence, not to be trespassed, and an area with grass. The VA system must raise an alarm when an intruder (attack) enters the scene. The work includes testing of the systems with uncompressed and compressed (using H.264/MPEG-4 AVC at 25 and 5 frames per second) footage, consisting of quantified scene parameters. The scene parameters include descriptions of scene contrast, camera-to-subject distance, and attack portrayal. Additional footage, containing only distractions (no attacks), is also investigated. Results have shown that every system performed differently at each compression/frame rate level, while overall, compression did not adversely affect the performance of the systems. Frame rate reduction decreased performance, and scene parameters influenced the behavior of the systems differently. Most false alarms were triggered by a distraction clip containing abrupt shadows through the fence. These findings could contribute to the improvement of VA systems.
Landscape preference assessment of Louisiana river landscapes: a methodological study
Michael S. Lee
1979-01-01
The study pertains to the development of an assessment system for the analysis of visual preference attributed to Louisiana river landscapes. The assessment system was utilized in the evaluation of 20 Louisiana river scenes. Individuals were tested for their free choice preference for the same scenes. A statistical analysis was conducted to examine the relationship...
NASA Astrophysics Data System (ADS)
Shimada, Satoshi; Azuma, Shouzou; Teranaka, Sayaka; Kojima, Akira; Majima, Yukie; Maekawa, Yasuko
We developed a system with which knowledge can be discovered and shared cooperatively within an organization, based on the SECI model of knowledge management. The system realizes three processes by the following methods. (1) A video demonstrating a skill is segmented into a number of scenes according to its content, and tacit knowledge is shared per scene. (2) Tacit knowledge is extracted through a bulletin board linked to each scene. (3) Knowledge is acquired by repeatedly viewing a video scene together with the comments that describe the technical content to be practiced. We conducted experiments in which the system was used by nurses working in general hospitals. The experimental results show that practical nursing know-how can be collected by utilizing a bulletin board linked to video scenes. The results of this study confirmed that the tacit knowledge embodied in nurses' empirical nursing skills can be expressed, with video images serving as cues.
Change Blindness Phenomena for Virtual Reality Display Systems.
Steinicke, Frank; Bruder, Gerd; Hinrichs, Klaus; Willemsen, Pete
2011-09-01
In visual perception, change blindness describes the phenomenon that persons viewing a visual scene may apparently fail to detect significant changes in that scene. These phenomena have been observed in both computer-generated imagery and real-world scenes. Several studies have demonstrated that change blindness effects occur primarily during visual disruptions such as blinks or saccadic eye movements. However, until now the influence of stereoscopic vision on change blindness has not been studied thoroughly in the context of visual perception research. In this paper, we introduce change blindness techniques for stereoscopic virtual reality (VR) systems, providing the ability to substantially modify a virtual scene in a manner that is difficult for observers to perceive. We evaluate the techniques for semi-immersive VR systems, i.e., passive and active stereoscopic projection systems, as well as an immersive VR system, i.e., a head-mounted display, and compare the results to those of monoscopic viewing conditions. For stereoscopic viewing conditions, we found that change blindness phenomena occur with the same magnitude as in monoscopic viewing conditions. Furthermore, we have evaluated the potential of the presented techniques for allowing abrupt, and yet significant, changes of a stereoscopically displayed virtual reality environment.
Interactive MPEG-4 low-bit-rate speech/audio transmission over the Internet
NASA Astrophysics Data System (ADS)
Liu, Fang; Kim, JongWon; Kuo, C.-C. Jay
1999-11-01
The recently developed MPEG-4 technology enables the coding and transmission of natural and synthetic audio-visual data in the form of objects. In an effort to extend the object-based functionality of MPEG-4 to real-time Internet applications, architectural prototypes of the multiplex layer and transport layer tailored for transmission of MPEG-4 data over IP are under debate in the Internet Engineering Task Force (IETF) and the MPEG-4 Systems Ad Hoc group. In this paper, we present an architecture for an interactive MPEG-4 speech/audio transmission system over the Internet. It utilizes a framework of the Real Time Streaming Protocol (RTSP) together with the Real-time Transport Protocol (RTP) to provide controlled, on-demand delivery of real-time speech/audio data. Based on a client-server model, two low-bit-rate bit streams (real-time speech/audio and pre-encoded speech/audio) are multiplexed and transmitted via a single RTP channel to the receiver. The MPEG-4 Scene Description (SD) and Object Descriptor (OD) bit streams are securely sent through the RTSP control channel. Upon reception, an initial MPEG-4 audio-visual scene is constructed after de-multiplexing, decoding of bit streams, and scene composition. A receiver is allowed to manipulate the initial audio-visual scene presentation locally, or to interactively arrange scene changes by sending requests to the server. A server may also choose to update the client with new streams and a list of contents for user selection.
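Since RTSP is a plain-text control protocol, the client side of the control channel described above can be sketched in a few lines. The host, port, and URL are placeholders, and the sketch stops after DESCRIBE/SETUP; it illustrates the protocol exchange, not the paper's implementation.

```python
import socket

HOST, PORT = "server.example.com", 554                    # placeholder server
URL = "rtsp://server.example.com/speech"                  # placeholder stream

def rtsp_request(sock, lines):
    """Send one RTSP request (CRLF-delimited) and return the reply text."""
    sock.sendall(("\r\n".join(lines) + "\r\n\r\n").encode("ascii"))
    return sock.recv(4096).decode("ascii", errors="replace")

with socket.create_connection((HOST, PORT)) as s:
    # DESCRIBE retrieves the session description over the control channel;
    # in the paper's design, the SD/OD bit streams also travel via RTSP.
    print(rtsp_request(s, [f"DESCRIBE {URL} RTSP/1.0",
                           "CSeq: 1", "Accept: application/sdp"]))
    # SETUP negotiates the single RTP channel that carries the multiplexed
    # real-time and pre-encoded speech/audio bit streams.
    print(rtsp_request(s, [f"SETUP {URL} RTSP/1.0", "CSeq: 2",
                           "Transport: RTP/AVP;unicast;client_port=5004-5005"]))
```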
Illumination discrimination in real and simulated scenes
Radonjić, Ana; Pearce, Bradley; Aston, Stacey; Krieger, Avery; Dubin, Hilary; Cottaris, Nicolas P.; Brainard, David H.; Hurlbert, Anya C.
2016-01-01
Characterizing humans' ability to discriminate changes in illumination provides information about the visual system's representation of the distal stimulus. We have previously shown that humans are able to discriminate illumination changes and that sensitivity to such changes depends on their chromatic direction. Probing illumination discrimination further would be facilitated by the use of computer-graphics simulations, which would, in practice, enable a wider range of stimulus manipulations. There is no a priori guarantee, however, that results obtained with simulated scenes generalize to real illuminated scenes. To investigate this question, we measured illumination discrimination in real and simulated scenes that were well-matched in mean chromaticity and scene geometry. Illumination discrimination thresholds were essentially identical for the two stimulus types. As in our previous work, these thresholds varied with illumination change direction. We exploited the flexibility offered by the use of graphics simulations to investigate whether the differences across direction are preserved when the surfaces in the scene are varied. We show that varying the scene's surface ensemble in a manner that also changes mean scene chromaticity modulates the relative sensitivity to illumination changes along different chromatic directions. Thus, any characterization of sensitivity to changes in illumination must be defined relative to the set of surfaces in the scene.
Advanced interactive display formats for terminal area traffic control
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.
1996-01-01
This report describes the basic design considerations for perspective air traffic control displays. A software framework has been developed for manual viewing parameter setting (MVPS) in preparation for continued, ongoing development of automated viewing parameter setting (AVPS) schemes. Two distinct modes of MVPS operation are considered, both of which utilize manipulation pointers embedded in the three-dimensional scene: (1) direct manipulation of the viewing parameters -- in this mode the manipulation pointers act like the control-input device through which the viewing parameter changes are made. Some of the parameters are rate-controlled and some position-controlled. This mode is intended for making fast, iterative small changes in the parameters. (2) Indirect manipulation of the viewing parameters -- this mode is intended primarily for introducing large, predetermined changes in the parameters. Requests for changes in viewing parameter setting are entered manually by the operator by moving viewing parameter manipulation pointers on the screen. The motion of these pointers, which are an integral part of the 3-D scene, is limited to the boundaries of the screen. This arrangement has been chosen in order to preserve the correspondence between the spatial layouts of the new and the old viewing parameter settings, a feature which helps prevent spatial disorientation of the operator. For all viewing operations, e.g., rotation, translation, and ranging, the actual change is executed automatically by the system through gradual transitions with an exponentially damped, sinusoidal velocity profile, referred to in this work as 'slewing' motions. The slewing functions, which eliminate discontinuities in the viewing parameter changes, are designed primarily to enhance the operator's impression that he or she is dealing with an actually existing physical system, rather than an abstract computer-generated scene. The proposed, continued research efforts will deal with the development of automated viewing parameter setting schemes. These schemes employ an optimization strategy aimed at identifying the best possible vantage point from which the air traffic control scene can be viewed for a given traffic situation. They determine whether a change in viewing parameter setting is required and determine the dynamic path along which the change to the new viewing parameter setting should take place.
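The 'slewing' transition can be sketched directly from its description: the velocity profile is an exponentially damped sinusoid, and the parameter follows its normalized integral. The damping constant and frequency below are illustrative choices, not values from the report.

```python
import numpy as np

def slew(p_old: float, p_new: float, n_steps: int = 200,
         damping: float = 4.0, omega: float = np.pi) -> np.ndarray:
    """Transition a viewing parameter from p_old to p_new along a profile
    whose velocity v(t) = exp(-damping*t) * sin(omega*t) starts and ends
    at zero, eliminating discontinuities (t normalized to [0, 1])."""
    t = np.linspace(0.0, 1.0, n_steps)
    v = np.exp(-damping * t) * np.sin(omega * t)
    s = np.cumsum(v)
    s /= s[-1]                      # normalize so the transition ends at p_new
    return p_old + (p_new - p_old) * s

azimuth_path = slew(30.0, 120.0)    # e.g., rotate the vantage point 30 -> 120 deg
```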
Improved disparity map analysis through the fusion of monocular image segmentations
NASA Technical Reports Server (NTRS)
Perlant, Frederic P.; Mckeown, David M.
1991-01-01
The focus is to examine how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by analysis of the original monocular imagery. Surface illumination information is provided by segmenting the monocular image into fine surface patches of nearly homogeneous intensity, which are used to remove mismatches generated during stereo matching. These patches guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely with physical surfaces in the scene. The technique is quite independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented for a complex urban scene containing various man-made and natural features. This scene poses a variety of problems, including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. Improvements due to monocular fusion are demonstrated with a set of different region-based image segmentations. The generality of this approach to stereo analysis and its utility in the development of general three-dimensional scene interpretation systems are also discussed.
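A minimal sketch of the patch-guided disparity cleanup follows. The median-and-threshold rule stands in for the paper's statistical analysis; the only assumption carried over is the one stated above, that a patch of nearly homogeneous intensity corresponds to a single physical surface.

```python
import numpy as np

def refine_disparity(disparity, labels, max_dev=2.0):
    """Within each monocular segment (one assumed physical surface),
    treat disparities far from the segment median as stereo mismatches."""
    out = disparity.astype(float).copy()
    for seg in np.unique(labels):
        mask = labels == seg
        med = np.median(out[mask])
        outliers = mask & (np.abs(out - med) > max_dev)
        out[outliers] = med        # or flag them as invalid for re-matching
    return out
```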
Common and Innovative Visuals: A sparsity modeling framework for video.
Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder
2014-05-02
Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depicts the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework by CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.
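A rough sketch of the decomposition is below. CIV estimates the common and innovative components jointly as a sparse recovery problem; the pixelwise median used here is only a cheap stand-in that conveys the structure of the model.

```python
import numpy as np

def civ_decompose(frames):
    """Split frames of one scene into a common frame plus per-frame
    innovations (expected to be sparse for frames of the same scene)."""
    stack = np.stack([np.asarray(f, dtype=float) for f in frames])
    common = np.median(stack, axis=0)    # stand-in for joint sparse estimation
    innovations = stack - common
    return common, innovations

def innovation_density(innovation, thresh=10.0):
    """Fraction of significant pixels; a jump suggests a scene-change boundary."""
    return float(np.mean(np.abs(innovation) > thresh))
```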
IKONOS geometric characterization
Helder, Dennis; Coan, Michael; Patrick, Kevin; Gaska, Peter
2003-01-01
The IKONOS spacecraft acquired images on July 3, 17, and 25, and August 13, 2001, of Brookings, SD, a small city in east central South Dakota, and on May 22, June 30, and July 30, 2000, of the rural area around the EROS Data Center. South Dakota State University (SDSU) evaluated the Brookings scenes and the USGS EROS Data Center (EDC) evaluated the other scenes. The images evaluated by SDSU utilized various natural objects and man-made features as identifiable targets randomly distributed throughout the scenes, while the images evaluated by EDC utilized pre-marked artificial points (panel points) to provide the best possible targets distributed in a grid pattern. Space Imaging provided products at different processing levels to each institution. For each scene, the pixel (line, sample) locations of the various targets were compared to field-observed, survey-grade Global Positioning System locations. Patterns of error distribution for each product were plotted, and a variety of statistical statements of accuracy are made. The IKONOS sensor also acquired 12 pairs of stereo images of globally distributed scenes between April 2000 and April 2001. For each scene, analysts at the National Imagery and Mapping Agency (NIMA) compared derived photogrammetric coordinates to the corresponding NIMA field-surveyed ground control points (GCPs). NIMA analysts determined horizontal and vertical accuracies by averaging the differences between the derived photogrammetric points and the field-surveyed GCPs for all 12 stereo pairs. Patterns of error distribution for each scene are presented.
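The error statistics can be reproduced schematically as follows. Per-axis RMSE and 90th-percentile circular error are the conventional accuracy statements for this kind of evaluation; the report's exact statistics are not restated here, so treat this as an assumed form.

```python
import numpy as np

def horizontal_accuracy(measured_xy, surveyed_xy):
    """Compare measured target locations against survey-grade GPS ground
    truth (both arrays of shape [n, 2], in meters)."""
    d = np.asarray(measured_xy, float) - np.asarray(surveyed_xy, float)
    radial = np.hypot(d[:, 0], d[:, 1])
    return {
        "rmse_x": float(np.sqrt(np.mean(d[:, 0] ** 2))),
        "rmse_y": float(np.sqrt(np.mean(d[:, 1] ** 2))),
        "ce90": float(np.percentile(radial, 90)),  # 90% circular error
    }
```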
Integration of an open interface PC scene generator using COTS DVI converter hardware
NASA Astrophysics Data System (ADS)
Nordland, Todd; Lyles, Patrick; Schultz, Bret
2006-05-01
Commercial-Off-The-Shelf (COTS) personal computer (PC) hardware is increasingly capable of computing high dynamic range (HDR) scenes for military sensor testing at high frame rates. New electro-optical and infrared (EO/IR) scene projectors feature electrical interfaces that can accept the DVI output of these PC systems. However, military Hardware-in-the-loop (HWIL) facilities such as those at the US Army Aviation and Missile Research Development and Engineering Center (AMRDEC) utilize a sizeable inventory of existing projection systems that were designed to use the Silicon Graphics Incorporated (SGI) digital video port (DVP, also known as DVP2 or DD02) interface. To mate the new DVI-based scene generation systems to these legacy projection systems, CG2 Inc., a Quantum3D Company (CG2), has developed a DVI-to-DVP converter called Delta DVP. This device takes progressive scan DVI input, converts it to digital parallel data, and combines and routes color components to derive a 16-bit wide luminance channel replicated on a DVP output interface. The HWIL Functional Area of AMRDEC has developed a suite of modular software to perform deterministic real-time, wave band-specific rendering of sensor scenes, leveraging the features of commodity graphics hardware and open source software. Together, these technologies enable sensor simulation and test facilities to integrate scene generation and projection components with diverse pedigrees.
Recognising the forest, but not the trees: an effect of colour on scene perception and recognition.
Nijboer, Tanja C W; Kanai, Ryota; de Haan, Edward H F; van der Smagt, Maarten J
2008-09-01
Colour has been shown to facilitate the recognition of scene images, but only when these images contain natural scenes, for which colour is 'diagnostic'. Here we investigate whether colour can also facilitate memory for scene images, and whether this would hold for natural scenes in particular. In the first experiment, participants first studied a set of colour and greyscale natural and man-made scene images. Next, the same images were presented, randomly mixed with a different set. Participants were asked to indicate whether they had seen the images during the study phase. Surprisingly, performance was better for greyscale than for coloured images, a difference due to the higher false alarm rate for both natural and man-made coloured scenes. We hypothesized that this increase in false alarm rate was due to a shift from scrutinizing details of the image to recognition of the gist of the (coloured) image. A second experiment, utilizing images without a nameable gist, confirmed this hypothesis, as participants now performed equally on greyscale and coloured images. In the final experiment we specifically targeted the more detail-based perception and recognition for greyscale images versus the more gist-based perception and recognition for coloured images with a change detection paradigm. The results show that changes to images were detected faster when image pairs were presented in greyscale than in colour. This counterintuitive result held for both natural and man-made scenes (but not for scenes without a nameable gist) and thus corroborates the shift from more detailed processing of greyscale images to more gist-based processing of coloured images.
Real time moving scene holographic camera system
NASA Technical Reports Server (NTRS)
Kurtz, R. L. (Inventor)
1973-01-01
A holographic motion picture camera system capable of resolving front-surface detail is described. The system utilizes a beam of coherent light and means for dividing the beam into a reference beam for direct transmission to a conventional movie camera and two reflection signal beams for transmission to the movie camera by reflection from the front side of a moving scene. The system is arranged so that critical parts are positioned at the foci of a pair of interrelated, mathematically derived ellipses. The camera has the theoretical capability of producing motion picture holograms of projectiles moving at speeds as high as 900,000 cm/sec (about 21,450 mph).
NASA Astrophysics Data System (ADS)
Tickle, Andrew J.; Singh, Harjap; Grindley, Josef E.
2013-06-01
Morphological Scene Change Detection (MSCD) is a process typically tasked with detecting relevant changes in a guarded environment for security applications. It can be implemented on a Field Programmable Gate Array (FPGA) by a combination of binary differences based around exclusive-OR (XOR) gates, mathematical morphology, and a crucial threshold setting. This is a robust technique that can be applied to many areas, from leak detection to movement tracking, and it can be further augmented to perform additional functions such as watermarking and facial detection. Fire is a severe problem, and in areas where traditional fire alarm systems are not installed or feasible, it may not be detected until it is too late. Shown here is a way of adapting the traditional Morphological Scene Change Detector (MSCD) with a temperature sensor, so that if both the temperature sensor and the scene change detector are triggered, there is a high likelihood that fire is present. Such a system would allow integration into autonomous mobile robots, which could then undertake not only security patrols but also fire detection.
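The fused detector can be sketched in a few lines. The thresholds are illustrative, and the frame-difference-plus-opening pipeline below is a software approximation of the XOR/morphology logic that the FPGA implements.

```python
import numpy as np
from scipy import ndimage

def fire_alarm(prev_frame, curr_frame, temperature_c,
               diff_thresh=40, area_thresh=50, temp_thresh=60.0):
    """Alarm only when a morphological scene change AND a high temperature
    reading coincide (all thresholds are illustrative placeholders)."""
    changed = np.abs(curr_frame.astype(int) - prev_frame.astype(int)) > diff_thresh
    cleaned = ndimage.binary_opening(changed, structure=np.ones((3, 3), bool))
    scene_change = int(cleaned.sum()) > area_thresh   # enough coherent change?
    return scene_change and temperature_c > temp_thresh
```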
Walter, Brittany S; Schultz, John J
2013-05-10
Scene mapping is an integral aspect of processing a scene with scattered human remains. By utilizing the appropriate mapping technique, investigators can accurately document the location of human remains and maintain a precise geospatial record of evidence. One option that has not received much attention for mapping forensic evidence is the differential global positioning system (DGPS) unit, as this technology now provides decreased positional error suitable for mapping scenes. Because of the lack of knowledge concerning this utility in mapping a scene, controlled research is necessary to determine the practicality of using newer and enhanced DGPS units in mapping scattered human remains. The purpose of this research was to quantify the accuracy of a DGPS unit for mapping skeletal dispersals and to determine the applicability of this utility in mapping a scene with dispersed remains. First, the accuracy of the DGPS unit in open environments was determined using known survey markers in open areas. Secondly, three simulated scenes exhibiting different types of dispersals were constructed and mapped in an open environment using the DGPS. Variables considered during data collection included the extent of the dispersal, data collection time, data collected on different days, and different postprocessing techniques. Data were differentially postprocessed and compared in a geographic information system (GIS) to evaluate the most efficient recordation methods. Results of this study demonstrate that the DGPS is a viable option for mapping dispersed human remains in open areas. The accuracy of collected point data was 11.52 and 9.55 cm for 50- and 100-s collection times, respectively, and the orientation and maximum length of long bones was maintained. Also, the use of error buffers for point data of bones in maps demonstrated the error of the DGPS unit, while showing that the context of the dispersed skeleton was accurately maintained. Furthermore, the application of a DGPS for accurate scene mapping is discussed, and guidelines concerning the implementation of this technology for mapping scattered human skeletal remains in open environments are provided.
Eye guidance during real-world scene search: The role color plays in central and peripheral vision.
Nuthmann, Antje; Malcolm, George L
2016-01-01
The visual system utilizes environmental features to direct gaze efficiently when locating objects. While previous research has isolated various features' contributions to gaze guidance, these studies generally used sparse displays and did not investigate how features facilitated search as a function of their location on the visual field. The current study investigated how features across the visual field--particularly color--facilitate gaze guidance during real-world search. A gaze-contingent window followed participants' eye movements, restricting color information to specified regions. Scene images were presented in full color, with color in the periphery and gray in central vision or gray in the periphery and color in central vision, or in grayscale. Color conditions were crossed with a search cue manipulation, with the target cued either with a word label or an exact picture. Search times increased as color information in the scene decreased. A gaze-data based decomposition of search time revealed color-mediated effects on specific subprocesses of search. Color in peripheral vision facilitated target localization, whereas color in central vision facilitated target verification. Picture cues facilitated search, with the effects of cue specificity and scene color combining additively. When available, the visual system utilizes the environment's color information to facilitate different real-world visual search behaviors based on the location within the visual field.
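A gaze-contingent color window of the kind used here can be composited per frame as sketched below. The hard-edged circular window and its radius are illustrative; in the experiment the window tracked the eye tracker's gaze sample in real time.

```python
import numpy as np

def gaze_window(color_img, gray_img, gaze_xy, radius=100, color_center=True):
    """Show color in central vision and gray in the periphery (or the
    reverse), centered on the current gaze position (pixels)."""
    h, w = color_img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    central = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= radius ** 2
    gray3 = np.repeat(gray_img[..., None], 3, axis=2)   # gray as 3-channel
    if color_center:
        return np.where(central[..., None], color_img, gray3)
    return np.where(central[..., None], gray3, color_img)
```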
High-temperature MIRAGE XL (LFRA) IRSP system development
NASA Astrophysics Data System (ADS)
McHugh, Steve; Franks, Greg; LaVeigne, Joe
2017-05-01
The development of very-large-format infrared detector arrays has challenged the IR scene projector community to develop larger-format infrared emitter arrays. Many scene projector applications also require much higher simulated temperatures than can be generated with current technology. This paper will present an overview of resistive emitter-based (broadband) IR scene projector system development, as well as describe recent progress in emitter materials and pixel designs applicable to legacy MIRAGE XL systems to achieve apparent temperatures >1000 K in the MWIR. These new high temperature MIRAGE XL (LFRA) Digital Emitter Engines (DEEs) will be "plug and play" equivalents of legacy MIRAGE XL DEEs; the rest of the system is reusable. Under the High Temperature Dynamic Resistive Array (HDRA) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>2k x 2k) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During earlier phases of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1500 K. These new emitter materials can be utilized with legacy RIICs to produce pixels that can achieve 7X the radiance of the legacy systems with low cost and low risk. A 'scalable' Read-In Integrated Circuit (RIIC) is also being developed under the same HDRA program to drive the high temperature pixels. This RIIC will utilize through-silicon via (TSV) and Quilt Packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the yield limitations inherent in large-scale integrated circuits. These quilted arrays can be fabricated in any N x M size in 512 steps.
Modeling Of Object- And Scene-Prototypes With Hierarchically Structured Classes
NASA Astrophysics Data System (ADS)
Ren, Z.; Jensch, P.; Ameling, W.
1989-03-01
The success of knowledge-based image analysis methodology and implementation tools depends largely on an appropriately and efficiently built model in which the domain-specific context information and the inherent structure of the observed image scene have been encoded. For identifying an object in an application environment, a computer vision system needs to know, first, the description of the object to be found in an image or in an image sequence and, second, the corresponding relationships between object descriptions within the image sequence. This paper presents models of image objects and scenes by means of hierarchically structured classes. Using the topovisual formalism of graphs and higraphs, we are studying principally the relational aspect and data abstraction of the modeling, in order to visualize the structural nature resident in image objects and scenes and to formalize their descriptions. The goal is to expose the structure of the image scene and the correspondence of image objects in the low-level image interpretation process. The object-based system design approach has been applied to build the model base. We utilize the object-oriented programming language C++ for designing, testing, and implementing the abstracted entity classes and the operation structures which have been modeled topovisually. The reference images used for modeling prototypes of objects and scenes are from industrial environments as well as medical applications.
Virtual environments for scene of crime reconstruction and analysis
NASA Astrophysics Data System (ADS)
Howard, Toby L. J.; Murta, Alan D.; Gibson, Simon
2000-02-01
This paper describes research conducted in collaboration with Greater Manchester Police (UK) to evaluate the utility of Virtual Environments for scene of crime analysis, forensic investigation, and law enforcement briefing and training. We present an illustrated case study of the construction of a high-fidelity virtual environment, intended to match a particular real-life crime scene as closely as possible. We describe and evaluate the combination of several approaches, including: the use of the Manchester Scene Description Language for constructing complex geometrical models; the application of a radiosity rendering algorithm with several novel features based on human perceptual considerations; texture extraction from forensic photography; and experiments with interactive walkthroughs and large-screen stereoscopic display of the virtual environment implemented using the MAVERIK system. We also discuss the potential applications of Virtual Environment techniques in the law enforcement and forensic communities.
Visual search for changes in scenes creates long-term, incidental memory traces.
Utochkin, Igor S; Wolfe, Jeremy M
2018-05-01
Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1-3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a "depth-of-processing" account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.
NASA Astrophysics Data System (ADS)
Barsai, Gabor
Creating accurate, current digital maps and 3-D scenes is a high priority in today's fast-changing environment. The nation's maps are in a constant state of revision, with many alterations or new additions each day. Digital maps have become quite common; Google Maps, MapQuest, and others are examples, and these also have 3-D viewing capability. Many details are now included, such as the height of low bridges, in the attribute data for the objects displayed on digital maps and scenes. To expedite the updating of these datasets, they should be created autonomously, without human intervention, from data streams. Though systems exist that attain fast, or even real-time, mapping and reconstruction performance, they are typically restricted to creating sketches from the data stream, and not accurate maps or scenes. The ever-increasing amount of image data available from private companies, governments, and the internet suggests that the development of an automated system is of utmost importance. The proposed framework can create 3-D views autonomously, which extends the functionality of digital mapping. The first step in creating 3-D views is to reconstruct the scene of the area to be mapped. To reconstruct a scene from heterogeneous sources, the data have to be registered: either to each other or, preferably, to a general, absolute coordinate system. Registering an image is based on the reconstruction of the geometric relationship of the image to the coordinate system at the time of imaging. Registration is the process of determining the geometric transformation parameters of a dataset in one coordinate system, the source, with respect to the other coordinate system, the target. The advantage of fusing these datasets by registration lies in the complementary information that different-modality datasets contain. The complementary characteristics of these systems can be fully utilized only after successful registration of the photogrammetric and alternative data relative to a common reference frame. This research provides a novel approach to finding registration parameters without the explicit use of conjugate points, using conjugate features instead. These features are open or closed free-form linear features; there is no need for a parametric or any other type of representation of them. The proposed method uses different-modality datasets of the same area: lidar data, image data, and GIS data. There are two datasets: one from the Ohio State University and the other from San Bernardino, California. The reconstruction of scenes from imagery and range data, using laser and radar data, has been an active research area in the fields of photogrammetry and computer vision. Automation, or even just reduced human intervention, would have a great impact on alleviating the "bottleneck" that describes the current state of creating knowledge from data. Pixels or laser points, the output of the sensor, represent a discretization of the real world. By themselves, these data points do not contain representative information. The values that are associated with them, intensity values and coordinates, do not define an object, and thus accurate maps are not possible from data alone. Data is not an end product, nor does it directly provide answers to applications, although implicitly the information about the object in question is contained in the data.
In some form, the data from the initial acquisition by the sensor have to be further processed to create usable information, and this information has to be combined with facts, procedures, and heuristics that can be used to make inferences for reconstruction. To reconstruct a scene perfectly, whether urban or rural, requires prior knowledge and heuristics: buildings are usually smooth surfaces, and many are blocky with orthogonal, straight edges and sides; streets are smooth; vegetation is rough, with trees and bushes of different shapes and sizes. This research provides a path to fusing data from lidar, GIS, and digital multispectral images and reconstructing a precise 3-D scene model, without human intervention, regardless of the type of data or the features in the data. The data are initially registered to each other using GPS/INS initial positional values; then conjugate features are found in the datasets to refine the registration. The novelty of the research is that no conjugate points are necessary in the various datasets, and registration is performed without human intervention. The proposed system uses the original lidar and GIS data and finds edges of buildings with the help of the digital images, utilizing the exterior orientation parameters to project the lidar points onto the edge-extracted image/map. These edge points are then utilized to orient and locate the datasets in a correct position with respect to each other.
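The projection step, overlaying lidar points on the edge-extracted image via the exterior orientation, is the standard collinearity mapping. A minimal sketch follows, with illustrative argument conventions (camera center X0, rotation R from ground to camera frame); it is not the dissertation's code.

```python
import numpy as np

def project_to_image(X, X0, R, focal_mm, pixel_mm, principal_px):
    """Project a lidar ground point X into the image with the collinearity
    equations, returning (column, row) in pixels."""
    Xc = R @ (np.asarray(X, float) - np.asarray(X0, float))  # camera frame
    x_mm = -focal_mm * Xc[0] / Xc[2]                          # image plane (mm)
    y_mm = -focal_mm * Xc[1] / Xc[2]
    col = principal_px[0] + x_mm / pixel_mm
    row = principal_px[1] - y_mm / pixel_mm                   # rows grow downward
    return col, row
```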
MTF analysis of LANDSAT-4 Thematic Mapper
NASA Technical Reports Server (NTRS)
Schowengerdt, R.
1983-01-01
The spatial radiance distribution of a ground target must be known to a resolution at least four to five times greater than that of the system under test when measuring a satellite sensor's modulation transfer function. Calibration of the target requires either the use of man-made special-purpose targets with known properties, e.g., a small reflective mirror or a dark-light linear pattern such as a line or edge, or the use of relatively high resolution underflight imagery to calibrate an arbitrary ground scene. Both approaches are to be used. In addition, a technique that utilizes an analytical model of the scene spatial frequency power spectrum is being investigated as an alternative to calibration of the scene.
MTF Analysis of LANDSAT-4 Thematic Mapper
NASA Technical Reports Server (NTRS)
Schowengerdt, R.
1985-01-01
The spatial radiance distribution of a ground target must be known to a resolution at least four to five times greater than that of the system under test when measuring a satellite sensor's modulation transfer function. Calibration of the target requires either the use of man-made special-purpose targets with known properties, e.g., a small reflective mirror or a dark-light linear pattern such as a line or edge, or the use of relatively high resolution underflight imagery to calibrate an arbitrary ground scene. Both approaches are to be used. In addition, a technique that utilizes an analytical model of the scene spatial frequency power spectrum is being investigated as an alternative to calibration of the scene.
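For the edge-target approach mentioned in both reports, the textbook knife-edge reduction runs: edge-spread function, derivative to line-spread function, Fourier transform to MTF. A minimal sketch of that standard method (not the authors' exact procedure):

```python
import numpy as np

def edge_mtf(esf, sample_spacing=1.0):
    """Estimate the MTF from an oversampled edge-spread function:
    ESF -> LSF (derivative) -> |FFT| normalized to 1 at zero frequency."""
    lsf = np.gradient(np.asarray(esf, float), sample_spacing)
    lsf *= np.hanning(lsf.size)              # taper to limit spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=sample_spacing)
    return freqs, mtf / mtf[0]
```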
NASA Technical Reports Server (NTRS)
Foyle, David C.; Kaiser, Mary K.; Johnson, Walter W.
1992-01-01
This paper reviews some of the sources of visual information that are available in the out-the-window scene and describes how these visual cues are important for routine pilotage and training, as well as the development of simulator visual systems and enhanced or synthetic vision systems for aircraft cockpits. It is shown how these visual cues may change or disappear under environmental or sensor conditions, and how the visual scene can be augmented by advanced displays to capitalize on the pilot's excellent ability to extract visual information from the visual scene.
NASA Astrophysics Data System (ADS)
Altschuler, Bruce R.; Monson, Keith L.
1998-03-01
Representation of crime scenes as virtual reality 3D computer displays promises to become a useful and important tool for law enforcement evaluation and analysis, forensic identification and pathological study, and archival presentation during court proceedings. Use of these methods for assessment of evidentiary materials demands complete accuracy of reproduction of the original scene, both in data collection and in its eventual virtual reality representation. The recording of spatially accurate information as soon as possible after first arrival of law enforcement personnel is advantageous for unstable or hazardous crime scenes and reduces the possibility that either inadvertent measurement error or deliberate falsification may occur or be alleged concerning processing of a scene. Detailed measurements and multimedia archiving of critical surface topographical details in a calibrated, uniform, consistent, and standardized quantitative 3D coordinate method are needed. These methods would afford professional personnel in initial contact with a crime scene the means for remote, non-contacting, immediate, thorough, and unequivocal documentation of the contents of the scene. Measurements of the relative and absolute global positions of objects and victims, and their dispositions within the scene before their relocation and detailed examination, could be made. Resolution must be sufficient to map both small and large objects. Equipment must be able to map regions at varied resolution as collected from different perspectives. Progress is presented in devising methods for collecting and archiving 3D spatial numerical data from crime scenes, sufficient for law enforcement needs, by remote laser structured light and video imagery. Two types of simulation studies were done. One study evaluated the potential of 3D topographic mapping and 3D telepresence using a robotic platform for explosive ordnance disassembly. The second study involved using the laser mapping system on a fixed optical bench with simulated crime scene models of people and furniture to assess the feasibility, requirements, and utility of such a system for crime scene documentation and analysis.
Ball, Felix; Elzemann, Anne; Busch, Niko A
2014-09-01
The change blindness paradigm, in which participants often fail to notice substantial changes in a scene, is a popular tool for studying scene perception, visual memory, and the link between awareness and attention. Some of the most striking and popular examples of change blindness have been demonstrated with digital photographs of natural scenes; in most studies, however, much simpler displays, such as abstract stimuli or "free-floating" objects, are typically used. Although simple displays have undeniable advantages, natural scenes remain a very useful and attractive stimulus for change blindness research. To assist researchers interested in using natural-scene stimuli in change blindness experiments, we provide here a step-by-step tutorial on how to produce changes in natural-scene images with a freely available image-processing tool (GIMP). We explain how changes in a scene can be made by deleting objects or relocating them within the scene or by changing the color of an object, in just a few simple steps. We also explain how the physical properties of such changes can be analyzed using GIMP and MATLAB (a high-level scientific programming tool). Finally, we present an experiment confirming that scenes manipulated according to our guidelines are effective in inducing change blindness and demonstrating the relationship between change blindness and the physical properties of the change and inter-individual differences in performance measures. We expect that this tutorial will be useful for researchers interested in studying the mechanisms of change blindness, attention, or visual memory using natural scenes.
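The physical-property analysis the tutorial performs in MATLAB can be transcribed in a few lines of Python; the metrics below (change area and mean luminance difference) are representative choices, not the tutorial's exact script.

```python
import numpy as np
from PIL import Image

def change_properties(original_path, modified_path, thresh=10):
    """Measure a scene change between an original image and its
    GIMP-modified version: changed area and luminance difference."""
    a = np.asarray(Image.open(original_path).convert("L"), float)
    b = np.asarray(Image.open(modified_path).convert("L"), float)
    diff = np.abs(a - b)
    changed = diff > thresh
    return {
        "area_px": int(changed.sum()),
        "area_fraction": float(changed.mean()),
        "mean_luminance_diff": float(diff[changed].mean()) if changed.any() else 0.0,
    }
```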
Landsat-7 long-term acquisition plan radiometry - evolution over time
Markham, Brian L; Goward, Samuel; Arvidson, Terry; Barsi, Julia A.; Scaramuzza, Pat
2006-01-01
The Landsat-7 Enhanced Thematic Mapper Plus instrument has two selectable gains for each spectral band. In the acquisition plan, the gains were initially set to maximize the entropy in each scene. One unintended consequence of this strategy was that, at times, dense vegetation saturated band 4 and deserts saturated all bands. A revised strategy, based on a land-cover classification and sun angle thresholds, reduced saturation, but resulted in gain changes occurring within the same scene on multiple overpasses. As the gain changes cause some loss of data and difficulties for some ground processing systems, a procedure was devised to shift the gain changes to the nearest predicted cloudy scenes. The results are still not totally satisfactory as gain changes still impact some scenes and saturation still occurs, particularly in ephemerally snow-covered regions. A primary conclusion of our experience with variable gain on Landsat-7 is that such an approach should not be employed on future global monitoring missions.
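The revised strategy amounts to a per-scene decision rule. The sketch below is purely illustrative: the cover classes and sun-elevation thresholds are placeholders, not the values used in the Landsat-7 long-term acquisition plan.

```python
BRIGHT_COVERS = {"desert", "snow", "ice"}        # saturate first: use low gain
DARK_COVERS = {"water", "dense-vegetation"}      # benefit most from high gain

def select_gain(land_cover: str, sun_elevation_deg: float) -> str:
    """Choose the band gain from land cover and sun angle, trading
    saturation risk against radiometric resolution."""
    if land_cover in BRIGHT_COVERS:
        return "low"
    if land_cover in DARK_COVERS and sun_elevation_deg < 30.0:
        return "high"
    return "low" if sun_elevation_deg > 45.0 else "high"
```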
Beyond the cockpit: The visual world as a flight instrument
NASA Technical Reports Server (NTRS)
Johnson, W. W.; Kaiser, M. K.; Foyle, D. C.
1992-01-01
The use of cockpit instruments to guide flight control is not always an option (e.g., low level rotorcraft flight). Under such circumstances the pilot must use out-the-window information for control and navigation. Thus it is important to determine the basis of visually guided flight for several reasons: (1) to guide the design and construction of the visual displays used in training simulators; (2) to allow modeling of visibility restrictions brought about by weather, cockpit constraints, or distortions introduced by sensor systems; and (3) to aid in the development of displays that augment the cockpit window scene and are compatible with the pilot's visual extraction of information from the visual scene. The authors are actively pursuing these questions. We have on-going studies using both low-cost, lower fidelity flight simulators, and state-of-the-art helicopter simulation research facilities. Research results will be presented on: (1) the important visual scene information used in altitude and speed control; (2) the utility of monocular, stereo, and hyperstereo cues for the control of flight; (3) perceptual effects due to the differences between normal unaided daylight vision, and that made available by various night vision devices (e.g., light intensifying goggles and infra-red sensor displays); and (4) the utility of advanced contact displays in which instrument information is made part of the visual scene, as on a 'scene linked' head-up display (e.g., displaying altimeter information on a virtual billboard located on the ground).
Human Relations Procedures Relevant to University Environmental Change.
ERIC Educational Resources Information Center
American Personnel and Guidance Association, Washington, DC.
The present college scene is in a state of flux and confusion. Several problems are receiving major priority: (1) student stress, (2) alienation of students, and (3) activism among students. Reasons for the above problems could include: (1) individual and inter-group stress, and (2) tension between groups. Procedures which have been utilized on…
Color constancy in natural scenes explained by global image statistics
Foster, David H.; Amano, Kinjiro; Nascimento, Sérgio M. C.
2007-01-01
To what extent do observers' judgments of surface color with natural scenes depend on global image statistics? To address this question, a psychophysical experiment was performed in which images of natural scenes under two successive daylights were presented on a computer-controlled high-resolution color monitor. Observers reported whether there was a change in reflectance of a test surface in the scene. The scenes were obtained with a hyperspectral imaging system and included variously trees, shrubs, grasses, ferns, flowers, rocks, and buildings. Discrimination performance, quantified on a scale of 0 to 1 with a color-constancy index, varied from 0.69 to 0.97 over 21 scenes and two illuminant changes, from a correlated color temperature of 25,000 K to 6700 K and from 4000 K to 6700 K. The best account of these effects was provided by receptor-based rather than colorimetric properties of the images. Thus, in a linear regression, 43% of the variance in constancy index was explained by the log of the mean relative deviation in spatial cone-excitation ratios evaluated globally across the two images of a scene. A further 20% was explained by including the mean chroma of the first image and its difference from that of the second image and a further 7% by the mean difference in hue. Together, all four global color properties accounted for 70% of the variance and provided a good fit to the effects of scene and of illuminant change on color constancy, and, additionally, of changing test-surface position. By contrast, a spatial-frequency analysis of the images showed that the gradient of the luminance amplitude spectrum accounted for only 5% of the variance.
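The image statistic that carried most of the explanatory weight, the mean relative deviation in spatial cone-excitation ratios, can be sketched as below (the regression then uses its log). The pair-sampling scheme and the deviation normalization are illustrative assumptions, and cone excitations are assumed positive.

```python
import numpy as np

def mean_ratio_deviation(cones_a, cones_b, n_pairs=5000, seed=0):
    """Given per-point LMS cone excitations under two illuminants
    (arrays of shape [n, 3]), compute the mean relative deviation of
    spatial cone-excitation ratios across random point pairs."""
    rng = np.random.default_rng(seed)
    n = cones_a.shape[0]
    i = rng.integers(0, n, n_pairs)
    j = rng.integers(0, n, n_pairs)
    r_a = cones_a[i] / cones_a[j]      # ratios under the first illuminant
    r_b = cones_b[i] / cones_b[j]      # ratios under the second illuminant
    return float(np.mean(np.abs(r_a - r_b) / (0.5 * (r_a + r_b))))
```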
Steering and positioning targets for HWIL IR testing at cryogenic conditions
NASA Astrophysics Data System (ADS)
Perkes, D. W.; Jensen, G. L.; Higham, D. L.; Lowry, H. S.; Simpson, W. R.
2006-05-01
In order to increase the fidelity of hardware-in-the-loop ground-truth testing, it is desirable to create a dynamic scene of multiple, independently controlled IR point sources. ATK-Mission Research has developed and supplied the steering mirror systems for the 7V and 10V Space Simulation Test Chambers at the Arnold Engineering Development Center (AEDC), Air Force Materiel Command (AFMC). A portion of the 10V system incorporates multiple target sources beam-combined at the focal point of a 20K cryogenic collimator. Each IR source consists of a precision blackbody with cryogenic aperture and filter wheels mounted on a cryogenic two-axis translation stage. This point source target scene is steered by a high-speed steering mirror to produce further complex motion. The scene changes dynamically in order to simulate an actual operational scene as viewed by the System Under Test (SUT) as it executes various dynamic look-direction changes during its flight to a target. Synchronization and real-time hardware-in-the-loop control is accomplished using reflective memory for each subsystem control and feedback loop. This paper focuses on the steering mirror system and the required tradeoffs of optical performance, precision, repeatability and high-speed motion as well as the complications of encoder feedback calibration and operation at 20K.
Advanced radiometric and interferometric millimeter-wave scene simulations
NASA Technical Reports Server (NTRS)
Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.
1993-01-01
Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.
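As a toy illustration of the synthetic-aperture idea above, the sketch below (names and the sampling pattern are assumptions, not TRW's algorithm) treats the measured visibilities as a sparse sampling of the scene's 2-D spatial-frequency spectrum and recovers a crude "dirty" image by zero-filling the unmeasured components; scene-reconstruction algorithms of the kind mentioned then recover the scene from these few components.

import numpy as np

def dirty_image(scene, sampled_mask):
    """Recover a zero-filled image from sparsely sampled spatial frequencies."""
    spectrum = np.fft.fft2(scene)            # what a filled aperture would measure
    measured = spectrum * sampled_mask       # keep only the sampled components
    return np.real(np.fft.ifft2(measured))   # crude (dirty) reconstruction

rng = np.random.default_rng(0)
scene = np.zeros((64, 64))
scene[20, 30] = 1.0                          # a single point target
mask = rng.random((64, 64)) < 0.15           # sample 15% of spatial frequencies
img = dirty_image(scene, mask)               # input to a recovery/deconvolution step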
2012-05-18
by the AWAC. It is a surface-penetrating device that measures continuous changes in the water elevations over time at much higher sampling rates of... background subtraction, a technique based on detecting change from a background scene. Their study highlights the difficulty in object detection and tracking... movements (Zhang et al. 2009). Alternatively, another common object detection method, known as Optical Flow Analysis, may be utilized for vessel...
Working group organizational meeting
NASA Technical Reports Server (NTRS)
1982-01-01
Scene radiation and atmospheric effects, mathematical pattern recognition and image analysis, information evaluation and utilization, and electromagnetic measurements and signal handling are considered. Research issues in sensors and signals were discussed, including radar (SAR) reflectometry, SAR processing speed, registration (including overlay of SAR and optical imagery), entire-system radiance calibration, and the lack of requirements for both sensors and systems.
An overview of computer vision
NASA Technical Reports Server (NTRS)
Gevarter, W. B.
1982-01-01
An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. The basic approach to computer vision systems, the techniques utilized, applications, the current existing systems and state-of-the-art issues and research requirements, who is doing it and who is funding it, and future trends and expectations are reviewed.
The Effect of Consistency on Short-Term Memory for Scenes
Gong, Mingliang; Xuan, Yuming; Xu, Xinwen; Fu, Xiaolan
2017-01-01
Which is more detectable, the change of a consistent or an inconsistent object in a scene? This question has been debated for decades. We noted that the change of objects in scenes might simultaneously be accompanied with gist changes. In the present study we aimed to examine how the alteration of gist, as well as the consistency of the changed objects, modulated change detection. In Experiment 1, we manipulated the semantic content by either keeping or changing the consistency of the scene. Results showed that the changes of consistent and inconsistent scenes were equally detected. More importantly, the changes were more accurately detected when scene consistency changed than when the consistency remained unchanged, regardless of the consistency of the memory scenes. A phase-scrambled version of stimuli was adopted in Experiment 2 to decouple the possible confounding effect of low-level factors. The results of Experiment 2 demonstrated that the effect found in Experiment 1 was indeed due to the change of high-level semantic consistency rather than the change of low-level physical features. Together, the study suggests that the change of consistency plays an important role in scene short-term memory, which might be attributed to the sensitivity to the change of semantic content. PMID:29046654
Warren, Wayne; Brinkley, James F.
2005-01-01
Few biomedical subjects of study are as resource-intensive to teach as gross anatomy. Medical education stands to benefit greatly from applications which deliver virtual representations of human anatomical structures. While many applications have been created to achieve this goal, their utility to the student is limited because of a lack of interactivity or customizability by expert authors. Here we describe the first version of the Biolucida system, which allows an expert anatomist author to create knowledge-based, customized, and fully interactive scenes and lessons for students of human macroscopic anatomy. Implemented in Java and VRML, Biolucida allows the sharing of these instructional 3D environments over the internet. The system simplifies the process of authoring immersive content while preserving its flexibility and expressivity. PMID:16779148
Updating Landsat-derived land-cover maps using change detection and masking techniques
NASA Technical Reports Server (NTRS)
Likens, W.; Maw, K.
1982-01-01
The California Integrated Remote Sensing System's San Bernardino County Project was devised to study the utilization of a data base at a number of jurisdictional levels. The present paper discusses the implementation of change-detection and masking techniques in the updating of Landsat-derived land-cover maps. A baseline land-cover classification was first created from a 1976 image, then the adjusted 1976 image was compared with a 1979 scene by the techniques of (1) multidate image classification, (2) difference-image distribution-tails thresholding, (3) difference image classification, and (4) multi-dimensional chi-square analysis of a difference image. The union of the results of methods 1, 3 and 4 was used to create a mask of possible change areas between 1976 and 1979, which served to limit analysis of the update image and reduce comparison errors in unchanged areas. The techniques of spatial smoothing of change-detection products, and of combining results of different change-detection algorithms, are also shown to improve Landsat change-detection accuracies.
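Method (2) above, difference-image distribution-tails thresholding, is simple to sketch. The fragment below is a minimal illustration (band choice, tail fraction, and array names are assumptions) that flags pixels whose inter-date difference falls in either tail of the difference histogram.

import numpy as np

def change_mask_from_tails(img_1976, img_1979, tail_fraction=0.025):
    """Boolean mask of suspected change between two co-registered bands."""
    diff = img_1979.astype(float) - img_1976.astype(float)
    lo, hi = np.quantile(diff, [tail_fraction, 1.0 - tail_fraction])
    return (diff < lo) | (diff > hi)   # central mass is treated as unchanged

# Masks from several methods can then be OR-ed together, as in the study,
# to restrict classification of the update image to likely-changed areas.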
Efficient structure from motion on large scenes using UAV with position and pose information
NASA Astrophysics Data System (ADS)
Teng, Xichao; Yu, Qifeng; Shang, Yang; Luo, Jing; Wang, Gang
2018-04-01
In this paper, we exploit prior information from global positioning systems and inertial measurement units to speed up the process of large scene reconstruction from images acquired by Unmanned Aerial Vehicles. We utilize weak pose information and intrinsic parameters to obtain the projection matrix for each view. Because topographic relief can usually be ignored compared with the unmanned aerial vehicle's flight altitude, we assume that the scene is flat and use a weak perspective camera model to get projective transformations between two views. Furthermore, we propose an overlap criterion and select potentially matching view pairs between projectively transformed views. A robust global structure from motion method is used for image-based reconstruction. Our real-world experiments show that the approach is accurate, scalable and computationally efficient. Moreover, projective transformations between views can also be used to eliminate false matches.
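A minimal sketch of such an overlap criterion under the flat-scene assumption follows; the helper that projects a view's image corners onto the plane z = 0 from its GPS/IMU pose and intrinsics is assumed to exist, and footprints are reduced to bounding boxes so pair selection stays cheap.

import numpy as np

def bbox(points):
    pts = np.asarray(points, float)
    return pts[:, 0].min(), pts[:, 1].min(), pts[:, 0].max(), pts[:, 1].max()

def overlap_ratio(footprint_a, footprint_b):
    """Intersection-over-union of two ground footprints' bounding boxes."""
    ax0, ay0, ax1, ay1 = bbox(footprint_a)
    bx0, by0, bx1, by1 = bbox(footprint_b)
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0 else 0.0

# Only view pairs whose overlap exceeds a threshold (say 0.2) are passed to
# feature matching, avoiding the O(n^2) cost of matching every image pair.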
Automatic acquisition of motion trajectories: tracking hockey players
NASA Astrophysics Data System (ADS)
Okuma, Kenji; Little, James J.; Lowe, David
2003-12-01
Computer systems that have the capability of analyzing complex and dynamic scenes play an essential role in video annotation. Scenes can be complex in such a way that there are many cluttered objects with different colors, shapes and sizes, and can be dynamic with multiple interacting moving objects and a constantly changing background. In reality, there are many scenes that are complex, dynamic, and challenging enough for computers to describe. These scenes include games of sports, air traffic, car traffic, street intersections, and cloud transformations. Our research is about the challenge of inventing a descriptive computer system that analyzes scenes of hockey games where multiple moving players interact with each other on a constantly moving background due to camera motions. Ultimately, such a computer system should be able to acquire reliable data by extracting the players' motion as their trajectories, querying them by analyzing the descriptive information of the data, and predicting the motions of some hockey players based on the result of the query. Among these three major aspects of the system, we primarily focus on visual information of the scenes, that is, how to automatically acquire motion trajectories of hockey players from video. More accurately, we automatically analyze the hockey scenes by estimating parameters (i.e., pan, tilt, and zoom) of the broadcast cameras, tracking hockey players in those scenes, and constructing a visual description of the data by displaying trajectories of those players. Many technical problems in vision such as fast and unpredictable players' motions and rapid camera motions make our challenge worth tackling. To the best of our knowledge, there have not been any automatic video annotation systems for hockey developed in the past. Although there are many obstacles to overcome, our efforts and accomplishments would hopefully establish the infrastructure of the automatic hockey annotation system and become a milestone for research in automatic video annotation in this domain.
Effects of capacity limits, memory loss, and sound type in change deafness.
Gregg, Melissa K; Irsik, Vanessa C; Snyder, Joel S
2017-11-01
Change deafness, the inability to notice changes to auditory scenes, has the potential to provide insights about sound perception in busy situations typical of everyday life. We determined the extent to which change deafness is due to limits on the capacity to process multiple sounds and to the loss of memory for sounds over time. We also determined whether these processing limitations work differently for varying types of sounds within a scene. Auditory scenes composed of naturalistic sounds, spectrally dynamic unrecognizable sounds, tones, and noise rhythms were presented in a change-detection task. On each trial, two scenes were presented that were the same or different. We manipulated the number of sounds within each scene to measure memory capacity and the silent interval between scenes to measure memory loss. For all sounds, change detection was worse as scene size increased, demonstrating the importance of capacity limits. Change detection to the natural sounds did not deteriorate much as the interval between scenes increased up to 2,000 ms, but it did deteriorate substantially with longer intervals. For artificial sounds, in contrast, change-detection performance suffered even for very short intervals. The results suggest that change detection is generally limited by capacity, regardless of sound type, but that auditory memory is more enduring for sounds with naturalistic acoustic structures.
Orbiting passive microwave sensor simulation applied to soil moisture estimation
NASA Technical Reports Server (NTRS)
Newton, R. W. (Principal Investigator); Clark, B. V.; Pitchford, W. M.; Paris, J. F.
1979-01-01
A sensor/scene simulation program was developed and used to determine the effects of scene heterogeneity, resolution, frequency, look angle, and surface and temperature relations on the performance of a spaceborne passive microwave system designed to estimate soil water information. The ground scene is based on classified LANDSAT images which provide realistic ground classes, as well as geometries. It was determined that the average sensitivity of antenna temperature to soil moisture improves as the antenna footprint size increases. Also, the precision (or variability) of the sensitivity changes as a function of resolution.
Space Shuttle Columbia views the world with imaging radar: The SIR-A experiment
NASA Technical Reports Server (NTRS)
Ford, J. P.; Cimino, J. B.; Elachi, C.
1983-01-01
Images acquired by the Shuttle Imaging Radar (SIR-A) in November 1981 demonstrate the capability of this microwave remote sensor system to perceive and map a wide range of different surface features around the Earth. A selection of 60 scenes displays this capability with respect to Earth resources - geology, hydrology, agriculture, forest cover, ocean surface features, and prominent man-made structures. The combined area covered by the scenes presented amounts to about 3% of the total acquired. Most of the SIR-A images are accompanied by a LANDSAT multispectral scanner (MSS) or SEASAT synthetic-aperture radar (SAR) image of the same scene for comparison. Differences between the SIR-A image and its companion LANDSAT or SEASAT image at each scene are related to the characteristics of the respective imaging systems, and to seasonal or other changes that occurred in the time interval between acquisition of the images.
Scene incongruity and attention.
Mack, Arien; Clarke, Jason; Erol, Muge; Bert, John
2017-02-01
Does scene incongruity (a mismatch between scene gist and a semantically incongruent object) capture attention and lead to conscious perception? We explored this question using 4 different procedures: Inattention (Experiment 1), Scene description (Experiment 2), Change detection (Experiment 3), and Iconic Memory (Experiment 4). We found no differences between scene incongruity and scene congruity in Experiments 1, 2, and 4, although in Experiment 3 change detection was faster for scenes containing an incongruent object. We offer an explanation for why the change detection results differ from the results of the other three experiments. In all four experiments, participants invariably failed to report the incongruity and routinely mis-described it by normalizing the incongruent object. None of the results supports the claim that semantic incongruity within a scene invariably captures attention; rather, they provide strong evidence of the dominant role of scene gist in determining what is perceived.
Changing scenes: memory for naturalistic events following change blindness.
Mäntylä, Timo; Sundström, Anna
2004-11-01
Research on scene perception indicates that viewers often fail to detect large changes to scene regions when these changes occur during a visual disruption such as a saccade or a movie cut. In two experiments, we examined whether this relative inability to detect changes would produce systematic biases in event memory. In Experiment 1, participants decided whether two successively presented images were the same or different, followed by a memory task, in which they recalled the content of the viewed scene. In Experiment 2, participants viewed a short video, in which an actor carried out a series of daily activities, and central scenes' attributes were changed during a movie cut. A high degree of change blindness was observed in both experiments, and these effects were related to scene complexity (Experiment 1) and level of retrieval support (Experiment 2). Most important, participants reported the changed, rather than the initial, event attributes following a failure in change detection. These findings suggest that attentional limitations during encoding contribute to biases in episodic memory.
Guest Editor's introduction: Special issue on distributed virtual environments
NASA Astrophysics Data System (ADS)
Lea, Rodger
1998-09-01
Distributed virtual environments (DVEs) combine technology from 3D graphics, virtual reality and distributed systems to provide an interactive 3D scene that supports multiple participants. Each participant has a representation in the scene, often known as an avatar, and is free to navigate through the scene and interact with both the scene and other viewers of the scene. Changes to the scene, for example, position changes of one avatar as the associated viewer navigates through the scene, or changes to objects in the scene via manipulation, are propagated in real time to all viewers. This ensures that all viewers of a shared scene 'see' the same representation of it, allowing sensible reasoning about the scene. Early work on such environments was restricted to their use in simulation, in particular in military simulation. However, over recent years a number of interesting and potentially far-reaching attempts have been made to exploit the technology for a range of other uses, including:
Social spaces. Such spaces can be seen as logical extensions of the familiar text chat space. In 3D social spaces avatars, representing participants, can meet in shared 3D scenes and in addition to text chat can use visual cues and even in some cases spatial audio.
Collaborative working. A number of recent projects have attempted to explore the use of DVEs to facilitate computer-supported collaborative working (CSCW), where the 3D space provides a context and work space for collaboration.
Gaming. The shared 3D space is already familiar, albeit in a constrained manner, to the gaming community. DVEs are a logical superset of existing 3D games and can provide a rich framework for advanced gaming applications.
e-commerce. The ability to navigate through a virtual shopping mall and to look at, and even interact with, 3D representations of articles has appealed to the e-commerce community as it searches for the best method of presenting merchandise to electronic consumers.
The technology needed to support these systems crosses a number of disciplines in computer science. These include, but are certainly not limited to, real-time graphics for the accurate and realistic representation of scenes, group communications for the efficient update of shared consistent scene data, user interface modelling to exploit the use of the 3D representation and multimedia systems technology for the delivery of streamed graphics and audio-visual data into the shared scene. It is this intersection of technologies and the overriding need to provide visual realism that places such high demands on the underlying distributed systems infrastructure and makes DVEs such fertile ground for distributed systems research. Two examples serve to show how DVE developers have exploited the unique aspects of their domain. Communications. The usual tension between latency and throughput is particularly noticeable within DVEs. To ensure the timely update of multiple viewers of a particular scene requires that such updates be propagated quickly. However, the sheer volume of changes to any one scene calls for techniques that minimize the number of distinct updates that are sent to the network. Several techniques have been used to address this tension; these include the use of multicast communications, and in particular multicast in wide-area networks to reduce actual message traffic. Multicast has been combined with general group communications to partition updates to related objects or users of a scene.
A less traditional approach has been the use of dead reckoning, whereby a client application that visualizes the scene calculates position updates by extrapolating movement based on previous information. This allows the system to reduce the number of communications needed to update objects that move in a stable manner within the scene. Scaling. DVEs, especially those used for social spaces, are required to support large numbers of simultaneous users in potentially large shared scenes. The desire for scalability has driven different architectural designs, for example, the use of fully distributed architectures, which scale well but often suffer performance costs, versus centralized and hierarchical architectures, in which the inverse is true. However, DVEs have also exploited the spatial nature of their domain to address scalability and have pioneered techniques that exploit the semantics of the shared space to reduce data updates and so allow greater scalability. Several of the systems reported in this special issue apply a notion of area of interest to partition the scene and so reduce the participants in any data updates. The specification of area of interest differs between systems. One approach has been to exploit a geographical notion, i.e. a regular portion of a scene, or a semantic unit, such as a room or building. Another approach has been to define the area of interest as a spatial area associated with an avatar in the scene. The five papers in this special issue have been chosen to highlight the distributed systems aspects of the DVE domain. The first paper, on the DIVE system, described by Emmanuel Frécon and Mårten Stenius, explores the use of multicast and group communication in a fully peer-to-peer architecture. The developers of DIVE have focused on its use as the basis for collaborative work environments and have explored the issues associated with maintaining and updating large complicated scenes. The second paper, by Hiroaki Harada et al., describes the AGORA system, a DVE concentrating on social spaces and employing a novel communication technique that incorporates position update and vector information to support dead reckoning. The paper by Simon Powers et al. explores the application of DVEs to the gaming domain. They propose a novel architecture that separates out higher-level game semantics - the conceptual model - and lower-level scene attributes - the dynamic model - both running on servers, from the actual visual representation - the visual model - running on the client. They claim a number of benefits from this approach, including better predictability and consistency. Wolfgang Broll discusses the SmallView system, which is an attempt to provide a toolkit for DVEs. One of the key features of SmallView is a sophisticated application-level protocol, DWTP, that provides support for a variety of communication models. The final paper, by Chris Greenhalgh, discusses the MASSIVE system, which has been used to explore the notion of awareness in the 3D space via the concept of 'auras'. These auras define an area of interest for users and support a mapping between what a user is aware of, and what data update rate the communications infrastructure can support. We hope that this selection of papers will serve to provide a clear introduction to the distributed system issues faced by the DVE community and the approaches they have taken in solving them.
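The dead-reckoning technique described above reduces update traffic because remote clients extrapolate motion locally. A minimal sketch follows; the class, field names, and error threshold are illustrative assumptions, not any particular system's protocol.

import numpy as np

class DeadReckoned:
    def __init__(self, pos, vel, threshold=0.5):
        self.pos = np.asarray(pos, float)   # last transmitted position
        self.vel = np.asarray(vel, float)   # last transmitted velocity
        self.t0 = 0.0                       # time of last update
        self.threshold = threshold

    def predict(self, t):
        """Remote clients render this extrapolated position."""
        return self.pos + self.vel * (t - self.t0)

    def needs_update(self, true_pos, t):
        """Owner-side check: send a new packet only on large divergence."""
        return np.linalg.norm(self.predict(t) - np.asarray(true_pos, float)) > self.threshold

    def update(self, pos, vel, t):
        self.pos, self.vel, self.t0 = np.asarray(pos, float), np.asarray(vel, float), t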
Finally, we wish to thank Hubert Le Van Gong for his tireless efforts in pulling together all these papers and both the referees and the authors of the papers for the time and effort in ensuring that their contributions teased out the interesting distributed systems issues for this special issue.
On improving IED object detection by exploiting scene geometry using stereo processing
NASA Astrophysics Data System (ADS)
van de Wouw, Dennis W. J. M.; Dubbelman, Gijs; de With, Peter H. N.
2015-03-01
Detecting changes in the environment with respect to an earlier data acquisition is important for several applications, such as finding Improvised Explosive Devices (IEDs). We explore and evaluate the benefit of depth sensing in the context of automatic change detection, where an existing monocular system is extended with a second camera in a fixed stereo setup. We then propose an alternative frame registration that exploits scene geometry, in particular the ground plane. Furthermore, change characterization is applied to localized depth maps to distinguish between 3D physical changes and shadows, which solves one of the main challenges of a monocular system. The proposed system is evaluated on real-world acquisitions, containing geo-tagged test objects of 18 × 18 × 9 cm up to a distance of 60 meters. The proposed extensions lead to a significant reduction of the false-alarm rate by a factor of 3, while simultaneously improving the detection score by 5%.
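The ground-plane registration idea can be sketched with OpenCV: because a plane induces an exact homography between two views, matches restricted to the estimated ground plane suffice to warp the historic frame onto the live one. Array names and the RANSAC threshold below are assumptions, not the paper's implementation.

import cv2
import numpy as np

def register_on_ground_plane(ref_img, pts_ref, pts_live):
    """Warp the reference frame into the live view via a plane homography.

    pts_ref, pts_live: N x 2 arrays of matched pixel coordinates of points
    lying on (or near) the ground plane.
    """
    H, inliers = cv2.findHomography(np.float32(pts_ref), np.float32(pts_live),
                                    cv2.RANSAC, 3.0)
    h, w = ref_img.shape[:2]
    return cv2.warpPerspective(ref_img, H, (w, h)), H

# Differencing the warped reference against the live frame highlights changes;
# the paper's localized depth maps then help reject shadow-only differences.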
Henderson, John M; Choi, Wonil
2015-06-01
During active scene perception, our eyes move from one location to another via saccadic eye movements, with the eyes fixating objects and scene elements for varying amounts of time. Much of the variability in fixation duration is accounted for by attentional, perceptual, and cognitive processes associated with scene analysis and comprehension. For this reason, current theories of active scene viewing attempt to account for the influence of attention and cognition on fixation duration. Yet almost nothing is known about the neurocognitive systems associated with variation in fixation duration during scene viewing. We addressed this topic using fixation-related fMRI, which involves coregistering high-resolution eye tracking and magnetic resonance scanning to conduct event-related fMRI analysis based on characteristics of eye movements. We observed that activation in visual and prefrontal executive control areas was positively correlated with fixation duration, whereas activation in ventral areas associated with scene encoding and medial superior frontal and paracentral regions associated with changing action plans was negatively correlated with fixation duration. The results suggest that fixation duration in scene viewing is controlled by cognitive processes associated with real-time scene analysis interacting with motor planning, consistent with current computational models of active vision for scene perception.
The Relationship Between Online Visual Representation of a Scene and Long-Term Scene Memory
ERIC Educational Resources Information Center
Hollingworth, Andrew
2005-01-01
In 3 experiments the author investigated the relationship between the online visual representation of natural scenes and long-term visual memory. In a change detection task, a target object either changed or remained the same from an initial image of a natural scene to a test image. Two types of changes were possible: rotation in depth, or…
Utilization of DIRSIG in support of real-time infrared scene generation
NASA Astrophysics Data System (ADS)
Sanders, Jeffrey S.; Brown, Scott D.
2000-07-01
Real-time infrared scene generation for hardware-in-the-loop has been a traditionally difficult challenge. Infrared scenes are usually generated using commercial hardware that was not designed to properly handle the thermal and environmental physics involved. Real-time infrared scenes typically lack details that are included in scenes rendered in non-real time by ray-tracing programs such as the Digital Imaging and Remote Sensing Scene Generation (DIRSIG) program. However, executing DIRSIG in real-time while retaining all the physics is beyond current computational capabilities for many applications. DIRSIG is a first-principles-based synthetic image generation model that produces multi- or hyper-spectral images in the 0.3 to 20 micron region of the electromagnetic spectrum. The DIRSIG model is an integrated collection of independent first-principles-based sub-models, each of which works in conjunction to produce radiance field images with high radiometric fidelity. DIRSIG uses the MODTRAN radiation propagation model for exo-atmospheric irradiance, emitted and scattered radiances (upwelled and downwelled) and path transmission predictions. This radiometry submodel utilizes bidirectional reflectance data, accounts for specular and diffuse background contributions, and features path-length-dependent extinction and emission for transmissive bodies (plumes, clouds, etc.) which may be present in any target, background or solar path. This detailed environmental modeling greatly enhances the number of rendered features and hence the fidelity of a rendered scene. While DIRSIG itself cannot currently be executed in real-time, its outputs can be used to provide scene inputs for real-time scene generators. These inputs can incorporate significant features such as target-to-background thermal interactions, static background object thermal shadowing, and partially transmissive countermeasures. All of these features represent significant improvements over the current state of the art in real-time IR scene generation.
Compressed Sensing in On-Grid MIMO Radar.
Minner, Michael F
2015-01-01
The accurate detection of targets is a significant problem in multiple-input multiple-output (MIMO) radar. Recent advances of Compressive Sensing offer a means of efficiently accomplishing this task. The sparsity constraints needed to apply the techniques of Compressive Sensing to problems in radar systems have led to discretizations of the target scene in various domains, such as azimuth, time delay, and Doppler. Building upon recent work, we investigate the feasibility of on-grid Compressive Sensing-based MIMO radar via a threefold azimuth-delay-Doppler discretization for target detection and parameter estimation. We utilize a colocated random sensor array and transmit distinct linear chirps to a small scene with few, slowly moving targets. Relying upon standard far-field and narrowband assumptions, we analyze the efficacy of various recovery algorithms in determining the parameters of the scene through numerical simulations, with particular focus on the ℓ1-squared Nonnegative Regularization method.
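Grid-based recovery of this kind reduces to sparse regression. As a hedged stand-in for the paper's ℓ1-squared nonnegative method, the sketch below runs plain ISTA for the LASSO on a flattened azimuth-delay-Doppler dictionary A and measurement vector y (both assumed precomputed).

import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

# Nonzero entries of x indicate occupied azimuth-delay-Doppler cells, i.e.
# detected targets and their parameter estimates up to the grid resolution.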
Intercomparison of Satellite-Derived Snow-Cover Maps
NASA Technical Reports Server (NTRS)
Hall, Dorothy K.; Tait, Andrew B.; Foster, James L.; Chang, Alfred T. C.; Allen, Milan
1999-01-01
In anticipation of the launch of the Earth Observing System (EOS) Terra, and the PM-1 spacecraft in 1999 and 2000, respectively, efforts are ongoing to determine errors of satellite-derived snow-cover maps. EOS Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Microwave Scanning Radiometer-E (AMSR-E) snow-cover products will be produced. For this study we compare snow maps covering the same study area acquired from different sensors using different snow-mapping algorithms. Four locations are studied: 1) southern Saskatchewan; 2) a part of New England (New Hampshire, Vermont and Massachusetts) and eastern New York; 3) central Idaho and western Montana; and 4) parts of North and South Dakota. Snow maps were produced using a prototype MODIS snow-mapping algorithm used on Landsat Thematic Mapper (TM) scenes of each study area at 30-m and when the TM data were degraded to 1-km resolution. National Operational Hydrologic Remote Sensing Center (NOHRSC) 1-km resolution snow maps were also used, as were snow maps derived from 1/2 deg. x 1/2 deg. resolution Special Sensor Microwave Imager (SSM/I) data. A land-cover map derived from the International Geosphere-Biosphere Program (IGBP) land-cover map of North America was also registered to the scenes. The TM, NOHRSC and SSM/I snow maps, and land-cover maps were compared digitally. In most cases, TM-derived maps show less snow cover than the NOHRSC and SSM/I maps because areas of incomplete snow cover in forests (e.g., tree canopies, branches and trunks) are seen in the TM data, but not in the coarser-resolution maps. The snow maps generally agree with respect to the spatial variability of the snow cover. The 30-m resolution TM data provide the most accurate snow maps, and are thus used as the baseline for comparison with the other maps. Comparisons show that the percent change in amount of snow cover relative to the 30-m resolution TM maps is lowest using the TM 1-km resolution maps, ranging from 0 to 40%. The highest percent change (less than 100%) is found in the New England study area, probably due to the presence of patchy snow cover. A scene with patchy snow cover is more difficult to map accurately than is a scene with a well-defined snowline such as is found on the North and South Dakota scene where the percent change ranged from 0 to 40%. There are also some important differences in the amount of snow mapped using the two different SSM/I algorithms because they utilize different channels.
Zelinsky, G J
2001-02-01
Search, memory, and strategy constraints on change detection were analyzed in terms of oculomotor variables. Observers viewed a repeating sequence of three displays (Scene 1-->Mask-->Scene 2-->Mask...) and indicated the presence-absence of a changing object between Scenes 1 and 2. Scenes depicted real-world objects arranged on a surface. Manipulations included set size (one, three, or nine items) and the orientation of the changing objects (similar or different). Eye movements increased with the number of potentially changing objects in the scene, with this set size effect suggesting a relationship between change detection and search. A preferential fixation analysis determined that memory constraints are better described by the operation comparing the pre- and postchange objects than as a capacity limitation, and a scanpath analysis revealed a change detection strategy relying on the peripheral encoding and comparison of display items. These findings support a signal-in-noise interpretation of change detection in which the signal varies with the similarity of the changing objects and the noise is determined by the distractor objects and scene background.
New scene change control scheme based on pseudoskipped picture
NASA Astrophysics Data System (ADS)
Lee, Youngsun; Lee, Jinwhan; Chang, Hyunsik; Nam, Jae Y.
1997-01-01
A new scene change control scheme which improves the video coding performance for sequences that have many scene-changed pictures is proposed in this paper. The scene-changed pictures, except intra-coded pictures, usually need more bits than normal pictures in order to maintain constant picture quality. The major idea of this paper is how to obtain the extra bits needed to encode scene-changed pictures. We encode a B picture which is located before a scene-changed picture like a skipped picture; we call such a B picture a pseudo-skipped picture. By generating the pseudo-skipped picture, we can save some bits, and they are added to the originally allocated target bits to encode the scene-changed picture. The simulation results show that the proposed algorithm improves encoding performance by about 0.5 to 2.0 dB of PSNR compared to the MPEG-2 TM5 rate control scheme. In addition, the suggested algorithm is compatible with MPEG-2 video syntax and the picture repetition is not noticeable.
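The bit-reallocation idea can be illustrated in a few lines; the function and the 5% skip cost are assumptions for illustration, not the paper's exact rate-control equations.

def reallocate_bits(targets, scene_change_idx, skip_cost=0.05):
    """targets: per-picture target bit budgets in display order.

    The B picture at scene_change_idx - 1 keeps only `skip_cost` of its
    budget (headers plus skipped-macroblock signalling); the remainder is
    granted to the scene-changed picture.
    """
    b_idx = scene_change_idx - 1
    saved = targets[b_idx] * (1.0 - skip_cost)
    targets[b_idx] -= saved
    targets[scene_change_idx] += saved
    return targets

# Example: with budgets [40000, 40000, 120000] and a scene change at index 2,
# the middle B picture drops to 2000 bits and the scene-changed picture gains
# the saved 38000 bits.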
A fuzzy measure approach to motion frame analysis for scene detection. M.S. Thesis - Houston Univ.
NASA Technical Reports Server (NTRS)
Leigh, Albert B.; Pal, Sankar K.
1992-01-01
This paper addresses a solution to the problem of scene estimation of motion video data in the fuzzy-set-theoretic framework. Using fuzzy image feature extractors, a new algorithm is developed to compute the change of information between two successive frames to classify scenes. This classification process of raw input visual data can be used to establish structure for correlation. The algorithm attempts to fulfill the need for nonlinear, frame-accurate access to video data for applications such as video editing and visual document archival/retrieval systems in multimedia environments.
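In the spirit of that approach, a minimal sketch follows (the membership function and threshold are assumptions): each grey-level frame is mapped to a fuzzy membership plane, and a scene change is declared when the fuzziness measure jumps between successive frames.

import numpy as np

def fuzziness(frame):
    """Linear index of fuzziness of a grey-level frame with values in [0, 255]."""
    mu = frame.astype(float) / 255.0               # membership plane
    return 2.0 * np.minimum(mu, 1.0 - mu).mean()   # distance from crispness

def is_scene_cut(prev_frame, next_frame, threshold=0.1):
    return abs(fuzziness(next_frame) - fuzziness(prev_frame)) > threshold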
Guidance of attention to objects and locations by long-term memory of natural scenes.
Becker, Mark W; Rasmussen, Ian P
2008-11-01
Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants performed an unanticipated 2nd block of trials. When the same scene occurred in the 2nd block, the change within the scene was (a) identical to the original change, (b) a new object appearing in the original change location, (c) the same object appearing in a new location, or (d) a new object appearing in a new location. Results suggest that attention is rapidly allocated to previously relevant locations and then to previously relevant objects. This pattern of locations dominating objects remained when object identity information was made more salient. Eye tracking verified that scene memory results in more direct scan paths to previously relevant locations and objects. This contextual guidance suggests that a high-capacity long-term memory for scenes is used to insure that limited attentional capacity is allocated efficiently rather than being squandered.
Campbell, J P; Gratton, M C; Salomone, J A; Lindholm, D J; Watson, W A
1994-01-01
In some emergency medical services (EMS) system designs, response time intervals are mandated with monetary penalties for noncompliance. These times are set with the goal of providing rapid, definitive patient care. The time interval of vehicle at scene-to-patient access (VSPA) has been measured, but its effect on response time interval compliance has not been determined. To determine the effect of the VSPA interval on the mandated code 1 (< 9 min) and code 2 (< 13 min) response time interval compliance in an urban, public-utility model system. A prospective, observational study used independent third-party riders to collect the VSPA interval for emergency life-threatening (code 1) and emergency nonlife-threatening (code 2) calls. The VSPA interval was added to the 9-1-1 call-to-dispatch and vehicle dispatch-to-scene intervals to determine the total time interval from call received until paramedic access to the patient (9-1-1 call-to-patient access). Compliance with the mandated response time intervals was determined using the traditional time intervals (9-1-1 call-to-scene) plus the VSPA time intervals (9-1-1 call-to-patient access). Chi-square was used to determine statistical significance. Of the 216 observed calls, 198 were matched to the traditional time intervals. Sixty-three were code 1, and 135 were code 2. Of the code 1 calls, 90.5% were compliant using 9-1-1 call-to-scene intervals dropping to 63.5% using 9-1-1 call-to-patient access intervals (p < 0.0005). Of the code 2 calls, 94.1% were compliant using 9-1-1 call-to-scene intervals. Compliance decreased to 83.7% using 9-1-1 call-to-patient access intervals (p = 0.012). The addition of the VSPA interval to the traditional time intervals impacts system response time compliance. Using 9-1-1 call-to-scene compliance as a basis for measuring system performance underestimates the time for the delivery of definitive care. This must be considered when response time interval compliances are defined.
NASA Astrophysics Data System (ADS)
Hildreth, E. C.
1985-09-01
For both biological systems and machines, vision begins with a large and unwieldy array of measurements of the amount of light reflected from surfaces in the environment. The goal of vision is to recover physical properties of objects in the scene, such as the location of object boundaries and the structure, color and texture of object surfaces, from the two-dimensional image that is projected onto the eye or camera. This goal is not achieved in a single step: vision proceeds in stages, with each stage producing increasingly more useful descriptions of the image and then the scene. The first clues about the physical properties of the scene are provided by the changes of intensity in the image. The importance of intensity changes and edges in early visual processing has led to extensive research on their detection, description and use, both in computer and biological vision systems. This article reviews some of the theory that underlies the detection of edges, and the methods used to carry out this analysis.
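A classic instance of the edge-detection theory reviewed here is the Marr-Hildreth operator: smooth with a Gaussian, apply the Laplacian, and mark zero-crossings as edges. A minimal SciPy sketch (the sigma value is an assumption):

import numpy as np
from scipy import ndimage

def marr_hildreth_edges(image, sigma=2.0):
    """Edges as zero-crossings of the Laplacian-of-Gaussian response."""
    log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    # A zero-crossing exists where the sign of the LoG response changes
    # between horizontally or vertically adjacent pixels.
    sign = log > 0
    edges = np.zeros_like(sign)
    edges[:-1, :] |= sign[:-1, :] != sign[1:, :]
    edges[:, :-1] |= sign[:, :-1] != sign[:, 1:]
    return edges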
Computer-generated scenes depicting the HST capture and EVA repair mission
1993-11-12
Computer generated scenes depicting the Hubble Space Telescope capture and a sequence of planned events on the extravehicular activity (EVA). Scenes include the Remote Manipulator System (RMS) arm assisting two astronauts changing out the Wide Field/Planetary Camera (WF/PC) (48699); RMS arm assisting in the temporary mating of the orbiting telescope to the flight support system in Endeavour's cargo bay (48700); Endeavour's RMS arm assisting in the "capture" of the orbiting telescope (48701); Two astronauts changing out the telescope's coprocessor (48702); RMS arm assisting two astronauts replacing one of the telescope's electronic control units (48703); RMS assisting two astronauts replacing the fuse plugs on the telescope's Power Distribution Unit (PDU) (48704); The telescope's High Resolution Spectrograph (HRS) kit is depicted in this scene (48705); Two astronauts during the removal of the high speed photometer and the installation of the COSTAR instrument (48706); Two astronauts, standing on the RMS, during installation of one of the Magnetic Sensing Systems (MSS) (48707); High angle view of the orbiting Space Shuttle Endeavour with its cargo bay doors open, revealing the bay's pre-capture configuration. Seen are, from the left, the Solar Array Carrier, the ORU Carrier and the flight support system (48708); Two astronauts performing the replacement of HST's Rate Sensor Units (RSU) (48709); The RMS arm assisting two astronauts with the replacement of the telescope's solar array panels (48710); Two astronauts replacing the telescope's Solar Array Drive Electronics (SADE) (48711).
Clandestine laboratory scene investigation and processing using portable GC/MS
NASA Astrophysics Data System (ADS)
Matejczyk, Raymond J.
1997-02-01
This presentation describes the use of portable gas chromatography/mass spectrometry for on-scene investigation and processing of clandestine laboratories. Clandestine laboratory investigations present special problems to forensic investigators. These crime scenes contain many chemical hazards that must be detected, identified and collected as evidence. Gas chromatography/mass spectrometry performed on-scene with a rugged, portable unit is capable of analyzing a variety of matrices for drugs and chemicals used in the manufacture of illicit drugs, such as methamphetamine. Technologies used to detect various materials at a scene have particular applications but do not address the wide range of samples, chemicals, matrices and mixtures that exist in clan labs. Typical analyses performed by GC/MS are for the purpose of positively establishing the identity of starting materials, chemicals and end-product collected from clandestine laboratories. Concerns for the public and investigator safety and the environment are also important factors for rapid on-scene data generation. Here is described the implementation of a portable multiple-inlet GC/MS system designed for rapid deployment to a scene to perform forensic investigations of clandestine drug manufacturing laboratories. GC/MS has long been held as the 'gold standard' in performing forensic chemical analyses. With the capability of GC/MS to separate and produce a 'chemical fingerprint' of compounds, it is utilized as an essential technique for detecting and positively identifying chemical evidence. Rapid and conclusive on-scene analysis of evidence will assist the forensic investigators in collecting only pertinent evidence thereby reducing the amount of evidence to be transported, reducing chain of custody concerns, reducing costs and hazards, maintaining sample integrity and speeding the completion of the investigative process.
NASA Technical Reports Server (NTRS)
Meyer, Peter; Green, Robert O.; Staenz, Karl; Itten, Klaus I.
1994-01-01
A geocoding procedure for remotely sensed data of airborne systems in rugged terrain is affected by several factors: buffeting of the aircraft by turbulence, variations in ground speed, changes in altitude, attitude variations, and surface topography. The current investigation was carried out with an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) scene of central Switzerland (Rigi) from NASA's Multi Aircraft Campaign (MAC) in Europe (1991). The parametric approach reconstructs for every pixel the observation geometry based on the flight line, aircraft attitude, and surface topography. To utilize the data for analysis of materials on the surface, the AVIRIS data are corrected to apparent reflectance using algorithms based on MODTRAN (a moderate-resolution atmospheric radiative transfer code).
Robust colour constancy in red-green dichromats.
Álvaro, Leticia; Linhares, João M M; Moreira, Humberto; Lillo, Julio; Nascimento, Sérgio M C
2017-01-01
Colour discrimination has been widely studied in red-green (R-G) dichromats but the extent to which their colour constancy is affected remains unclear. This work estimated the extent of colour constancy for four normal trichromatic observers and seven R-G dichromats when viewing natural scenes under simulated daylight illuminants. Hyperspectral imaging data from natural scenes were used to generate the stimuli on a calibrated CRT display. In experiment 1, observers viewed a reference scene illuminated by daylight with a correlated colour temperature (CCT) of 6700K; observers then viewed sequentially two versions of the same scene, one illuminated by either a higher or lower CCT (condition 1, pure CCT change with constant luminance) or a higher or lower average luminance (condition 2, pure luminance change with a constant CCT). The observers' task was to identify the version of the scene that looked different from the reference scene. Thresholds for detecting a pure CCT change or a pure luminance change were estimated, and it was found that those for R-G dichromats were marginally higher than for normal trichromats regarding CCT. In experiment 2, observers viewed sequentially a reference scene and a comparison scene with a CCT change or a luminance change above threshold for each observer. The observers' task was to identify whether or not the change was an intensity change. No significant differences were found between the responses of normal trichromats and dichromats. These data suggest robust colour constancy mechanisms along daylight locus in R-G dichromacy.
A Martin-Puplett cartridge FIR interferometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Roger J.; Penniman, Edwin E.; Jarboe, Thomas R.
2004-10-01
A compact prealigned Martin-Puplett interferometer (MPI) cartridge for plasma interferometry is described. The MPI cartridge groups all components of a MP interferometer, with the exception of the end mirror for the scene beam, on a stand-alone rigid platform. The interferometer system is completed by positioning a cartridge anywhere along and coaxial with the scene beam, considerably reducing the amount of effort in alignment over a discrete component layout. This allows the interferometer to be expanded to any number of interferometry chords consistent with optical access, limited only by the laser power. The cartridge interferometer has been successfully incorporated as a second chord on the Helicity Injected Torus II (HIT-II) far infrared interferometer system and a comparison with the discrete component system is presented. Given the utility and compactness of the cartridge, a possible design for a five-chord interferometer arrangement on the HIT-II device is described.
Efficient summary statistical representation when change localization fails.
Haberman, Jason; Whitney, David
2011-10-01
People are sensitive to the summary statistics of the visual world (e.g., average orientation/speed/facial expression). We readily derive this information from complex scenes, often without explicit awareness. Given the fundamental and ubiquitous nature of summary statistical representation, we tested whether this kind of information is subject to the attentional constraints imposed by change blindness. We show that information regarding the summary statistics of a scene is available despite limited conscious access. In a novel experiment, we found that while observers can suffer from change blindness (i.e., not localize where change occurred between two views of the same scene), observers could nevertheless accurately report changes in the summary statistics (or "gist") about the very same scene. In the experiment, observers saw two successively presented sets of 16 faces that varied in expression. Four of the faces in the first set changed from one emotional extreme (e.g., happy) to another (e.g., sad) in the second set. Observers performed poorly when asked to locate any of the faces that changed (change blindness). However, when asked about the ensemble (which set was happier, on average), observer performance remained high. Observers were sensitive to the average expression even when they failed to localize any specific object change. That is, even when observers could not locate the very faces driving the change in average expression between the two sets, they nonetheless derived a precise ensemble representation. Thus, the visual system may be optimized to process summary statistics in an efficient manner, allowing it to operate despite minimal conscious access to the information presented.
Guidance of Attention to Objects and Locations by Long-Term Memory of Natural Scenes
ERIC Educational Resources Information Center
Becker, Mark W.; Rasmussen, Ian P.
2008-01-01
Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants…
Snow Coverage Analysis Using ASTER over the Sierra Nevada Mountain Range
NASA Astrophysics Data System (ADS)
Ross, B.
2017-12-01
Snow has strong impacts on human behavior, state and local activities, and the economy. The Sierra Nevada snowpack is California's most important natural reservoir of water, and this snow is melting sooner and faster. A recent California drought study showed a deficit of 1.5 million acre-feet of water in 2014 due to the fast melting rates. Scientists have been using the Moderate Resolution Imaging Spectroradiometer (MODIS), available at a spatial resolution of 500 meters, to analyze changes in snow coverage. While such analysis provides valuable information, imagery at a higher spatial resolution would be more beneficial for snow studies. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), which acquires high-resolution imagery ranging from 15 meters to 90 meters, has recently become freely available to the public. Our study utilized two ASTER scenes to investigate the changes in snow extent over the Sierra Nevada mountain area for an 8-year period. These two scenes were collected on April 11, 2007 and April 16, 2015, covering the same geographic region. The Normalized Difference Snow Index (NDSI) was adopted to delineate the snow coverage in each scene. Our study shows a substantial decrease of snow coverage in the studied geographic region by pixel count.
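As a rough illustration of the NDSI step described above, the sketch below computes a per-pixel snow mask from co-registered green and shortwave-infrared reflectance arrays; the function name, the 0.4 threshold, and the band pairing are illustrative assumptions rather than details taken from the abstract.

    import numpy as np

    def ndsi_snow_mask(green, swir, threshold=0.4):
        """Label snow pixels via NDSI = (green - swir) / (green + swir).

        A threshold near 0.4 is a common choice in the snow-mapping
        literature; the study's exact value is not stated above.
        """
        green = green.astype(np.float64)
        swir = swir.astype(np.float64)
        ndsi = (green - swir) / np.maximum(green + swir, 1e-9)  # avoid divide-by-zero
        return ndsi > threshold

    # Snow extent by pixel count, one value per scene:
    # snow_2007 = ndsi_snow_mask(green_2007, swir_2007).sum()
    # snow_2015 = ndsi_snow_mask(green_2015, swir_2015).sum()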
Expedient range enhanced 3-D robot colour vision
NASA Astrophysics Data System (ADS)
Jarvis, R. A.
1983-01-01
Computer vision has been chosen, in many cases, as offering the richest form of sensory information which can be utilized for guiding robotic manipulation. The present investigation is concerned with the problem of three-dimensional (3D) visual interpretation of colored objects in support of robotic manipulation of those objects with a minimum of semantic guidance. The scene 'interpretations' are aimed at providing basic parameters to guide robotic manipulation rather than to provide humans with a detailed description of what the scene 'means'. Attention is given to overall system configuration, hue transforms, a connectivity analysis, plan/elevation segmentations, range scanners, elevation/range segmentation, higher level structure, eye in hand research, and aspects of array and video stream processing.
Classification of road sign type using mobile stereo vision
NASA Astrophysics Data System (ADS)
McLoughlin, Simon D.; Deegan, Catherine; Fitzgerald, Conor; Markham, Charles
2005-06-01
This paper presents a portable mobile stereo vision system designed for the assessment of road signage and delineation (lines and reflective pavement markers or "cat's eyes"). This novel system allows both geometric and photometric measurements to be made on objects in a scene. Global Positioning System technology provides important location data for any measurements made. Using the system it has been shown that road signs can be classified by the nature of their reflectivity. This is achieved by examining the changes in reflected light intensity with changes in range (facilitated by stereo vision). Signs assessed include those made from retro-reflective materials, those made from diffuse reflective materials and those made from diffuse reflective materials with local illumination. Field-testing results demonstrate the system's ability to classify objects in the scene based on their reflective properties. The paper includes a discussion of a physical model that supports the experimental data.
Kotabe, Hiroki P; Kardan, Omid; Berman, Marc G
2017-08-01
Natural environments have powerful aesthetic appeal linked to their capacity for psychological restoration. In contrast, disorderly environments are aesthetically aversive, and have various detrimental psychological effects. But in our research, we have repeatedly found that natural environments are perceptually disorderly. What could explain this paradox? We present 3 competing hypotheses: the aesthetic preference for naturalness is more powerful than the aesthetic aversion to disorder (the nature-trumps-disorder hypothesis); disorder is trivial to aesthetic preference in natural contexts (the harmless-disorder hypothesis); and disorder is aesthetically preferred in natural contexts (the beneficial-disorder hypothesis). Utilizing novel methods of perceptual study and diverse stimuli, we rule in the nature-trumps-disorder hypothesis and rule out the harmless-disorder and beneficial-disorder hypotheses. In examining perceptual mechanisms, we find evidence that high-level scene semantics are both necessary and sufficient for the nature-trumps-disorder effect. Necessity is evidenced by the effect disappearing in experiments utilizing only low-level visual stimuli (i.e., where scene semantics have been removed) and experiments utilizing a rapid-scene-presentation procedure that obscures scene semantics. Sufficiency is evidenced by the effect reappearing in experiments utilizing noun stimuli which remove low-level visual features. Furthermore, we present evidence that the interaction of scene semantics with low-level visual features amplifies the nature-trumps-disorder effect: the effect is weaker both when statistically adjusting for quantified low-level visual features and when using noun stimuli which remove low-level visual features. These results have implications for psychological theories bearing on the joint influence of low- and high-level perceptual inputs on affect and cognition, as well as for aesthetic design. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Multispectral Terrain Background Simulation Techniques For Use In Airborne Sensor Evaluation
NASA Astrophysics Data System (ADS)
Weinberg, Michael; Wohlers, Ronald; Conant, John; Powers, Edward
1988-08-01
A background simulation code developed at Aerodyne Research, Inc., called AERIE is designed to reflect the major sources of clutter that are of concern to staring and scanning sensors of the type being considered for various airborne threat-warning applications (against both aircraft and missiles). The code is a first principles model that could be used to produce a consistent image of the terrain for various spectral bands, i.e., provide the proper scene correlation both spectrally and spatially. The code utilizes both topographic and cultural features to model terrain, typically from DMA data, with a statistical overlay of the critical underlying surface properties (reflectance, emittance, and thermal factors) to simulate the resulting texture in the scene. Strong solar scattering from water surfaces is included with allowance for wind driven surface roughness. Clouds can be superimposed on the scene using physical cloud models and an analytical representation of the reflectivity obtained from scattering off spherical particles. The scene generator is augmented by collateral codes that allow for the generation of images at finer resolution. These codes provide interpolation of the basic DMA databases using fractal procedures that preserve the high frequency power spectral density behavior of the original scene. Scenes are presented illustrating variations in altitude, radiance, resolution, material, thermal factors, and emissivities. The basic models utilized for simulation of the various scene components and various "engineering level" approximations are incorporated to reduce the computational complexity of the simulation.
Monitoring gypsy moth defoliation by applying change detection techniques to Landsat imagery
NASA Technical Reports Server (NTRS)
Williams, D. L.; Stauffer, M. L.
1978-01-01
The overall objective of a research effort at NASA's Goddard Space Flight Center is to develop and evaluate digital image processing techniques that will facilitate the assessment of the intensity and spatial distribution of forest insect damage in Northeastern U.S. forests using remotely sensed data from Landsats 1, 2 and C. Automated change detection techniques are presently being investigated as a method of isolating the areas of change in the forest canopy resulting from pest outbreaks. In order to follow the change detection approach, Landsat scene correction and overlay capabilities are utilized to provide multispectral/multitemporal image files of 'defoliation' and 'nondefoliation' forest stand conditions.
Development of a high-definition IR LED scene projector
NASA Astrophysics Data System (ADS)
Norton, Dennis T.; LaVeigne, Joe; Franks, Greg; McHugh, Steve; Vengel, Tony; Oleson, Jim; MacDougal, Michael; Westerfeld, David
2016-05-01
Next-generation Infrared Focal Plane Arrays (IRFPAs) are demonstrating ever increasing frame rates, dynamic range, and format size, while moving to smaller pitch arrays. These improvements in IRFPA performance and array format have challenged the IRFPA test community to accurately and reliably test them in a Hardware-In-the-Loop environment utilizing Infrared Scene Projector (IRSP) systems. The rapidly-evolving IR seeker and sensor technology has, in some cases, surpassed the capabilities of existing IRSP technology. To meet the demands of future IRFPA testing, Santa Barbara Infrared Inc. is developing an Infrared Light Emitting Diode IRSP system. Design goals of the system include a peak radiance >2.0 W/cm2/sr within the 3.0-5.0 μm waveband, maximum frame rates >240 Hz, and >4 million pixels within a form factor supported by pixel pitches <=32 μm. This paper provides an overview of our current phase of development, system design considerations, and future development work.
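The >2.0 W/cm2/sr goal can be sanity-checked by integrating Planck's law over the 3.0-5.0 μm band. The sketch below does this numerically; the temperatures in the loop are arbitrary illustrative values, not figures from the paper.

    import numpy as np

    h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

    def inband_radiance(T, lam_lo=3.0e-6, lam_hi=5.0e-6, n=2000):
        """Blackbody spectral radiance integrated over [lam_lo, lam_hi], W/cm^2/sr."""
        lam = np.linspace(lam_lo, lam_hi, n)
        spectral = (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))
        integral = (0.5 * (spectral[1:] + spectral[:-1]) * np.diff(lam)).sum()
        return integral * 1e-4               # W/m^2/sr -> W/cm^2/sr

    for T in (800.0, 1000.0, 1200.0, 1400.0):   # illustrative temperatures, K
        print(f"{T:.0f} K -> {inband_radiance(T):.2f} W/cm^2/sr")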
Saliency-Guided Detection of Unknown Objects in RGB-D Indoor Scenes.
Bao, Jiatong; Jia, Yunyi; Cheng, Yu; Xi, Ning
2015-08-27
This paper studies the problem of detecting unknown objects within indoor environments in an active and natural manner. The visual saliency scheme utilizing both color and depth cues is proposed to arouse the interests of the machine system for detecting unknown objects at salient positions in a 3D scene. The 3D points at the salient positions are selected as seed points for generating object hypotheses using the 3D shape. We perform multi-class labeling on a Markov random field (MRF) over the voxels of the 3D scene, combining cues from object hypotheses and 3D shape. The results from MRF are further refined by merging the labeled objects, which are spatially connected and have high correlation between color histograms. Quantitative and qualitative evaluations on two benchmark RGB-D datasets illustrate the advantages of the proposed method. The experiments of object detection and manipulation performed on a mobile manipulator validate its effectiveness and practicability in robotic applications.
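The final refinement step, merging spatially connected segments whose colour histograms correlate strongly, can be sketched as a union-find pass. The adjacency list, histogram format, and the 0.8 threshold below are assumptions for illustration, not values from the paper.

    import numpy as np

    def hist_correlation(h1, h2):
        """Pearson correlation between two flattened colour histograms."""
        h1, h2 = h1.ravel() - h1.mean(), h2.ravel() - h2.mean()
        denom = np.sqrt((h1 ** 2).sum() * (h2 ** 2).sum())
        return float((h1 * h2).sum() / denom) if denom > 0 else 0.0

    def merge_segments(histograms, adjacency, thresh=0.8):
        """Union-find merge of spatially connected segments whose colour
        histograms correlate above thresh; returns a canonical label per segment."""
        parent = list(range(len(histograms)))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]   # path halving
                i = parent[i]
            return i
        for i, j in adjacency:                  # pairs of spatially connected ids
            if hist_correlation(histograms[i], histograms[j]) > thresh:
                parent[find(i)] = find(j)
        return [find(i) for i in range(len(histograms))]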
Scene text detection by leveraging multi-channel information and local context
NASA Astrophysics Data System (ADS)
Wang, Runmin; Qian, Shengyou; Yang, Jianfeng; Gao, Changxin
2018-03-01
As an important information carrier, text plays a significant role in many applications. However, text detection in unconstrained scenes is a challenging problem due to cluttered backgrounds, various appearances, uneven illumination, etc. In this paper, an approach based on multi-channel information and local context is proposed to detect texts in natural scenes. Because character candidate detection plays a vital role in a text detection system, Maximally Stable Extremal Regions (MSERs) and a graph-cut based method are integrated to obtain character candidates by leveraging multi-channel image information. A cascaded false-positive elimination mechanism is constructed from the perspectives of the character and the text line respectively. Since local context information is very valuable, this information is utilized to retrieve missing characters and boost text detection performance. Experimental results on two benchmark datasets, i.e., the ICDAR 2011 dataset and the ICDAR 2013 dataset, demonstrate that the proposed method achieves state-of-the-art performance.
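A minimal sketch of the character-candidate stage, assuming OpenCV's MSER implementation and using the grayscale plus B, G, R planes as stand-ins for the paper's multi-channel strategy (the authors' exact channels and graph-cut integration are not reproduced here):

    import cv2

    def character_candidates(image_bgr):
        """MSER candidate bounding boxes pooled over several channels
        (grayscale plus the B, G, R planes as a rough multi-channel stand-in)."""
        mser = cv2.MSER_create()
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        candidates = []
        for channel in [gray, *cv2.split(image_bgr)]:
            regions, bboxes = mser.detectRegions(channel)
            candidates.extend([tuple(b) for b in bboxes])  # (x, y, w, h) boxes
        return candidates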
Feature diagnosticity and task context shape activity in human scene-selective cortex.
Lowe, Matthew X; Gallivan, Jason P; Ferber, Susanne; Cant, Jonathan S
2016-01-15
Scenes are constructed from multiple visual features, yet previous research investigating scene processing has often focused on the contributions of single features in isolation. In the real world, features rarely exist independently of one another and likely converge to inform scene identity in unique ways. Here, we utilize fMRI and pattern classification techniques to examine the interactions between task context (i.e., attend to diagnostic global scene features; texture or layout) and high-level scene attributes (content and spatial boundary) to test the novel hypothesis that scene-selective cortex represents multiple visual features, the importance of which varies according to their diagnostic relevance across scene categories and task demands. Our results show for the first time that scene representations are driven by interactions between multiple visual features and high-level scene attributes. Specifically, univariate analysis of scene-selective cortex revealed that task context and feature diagnosticity shape activity differentially across scene categories. Examination using multivariate decoding methods revealed results consistent with univariate findings, but also evidence for an interaction between high-level scene attributes and diagnostic visual features within scene categories. Critically, these findings suggest visual feature representations are not distributed uniformly across scene categories but are shaped by task context and feature diagnosticity. Thus, we propose that scene-selective cortex constructs a flexible representation of the environment by integrating multiple diagnostically relevant visual features, the nature of which varies according to the particular scene being perceived and the goals of the observer. Copyright © 2015 Elsevier Inc. All rights reserved.
Groen, Iris I A; Silson, Edward H; Baker, Chris I
2017-02-19
Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).
Top-down control of visual perception: attention in natural vision.
Rolls, Edmund T
2008-01-01
Top-down perceptual influences can bias (or pre-empt) perception. In natural scenes, the receptive fields of neurons in the inferior temporal visual cortex (IT) shrink to become close to the size of objects. This facilitates the read-out of information from the ventral visual system, because the information is primarily about the object at the fovea. Top-down attentional influences are much less evident in natural scenes than when objects are shown against blank backgrounds, though are still present. It is suggested that the reduced receptive-field size in natural scenes, and the effects of top-down attention contribute to change blindness. The receptive fields of IT neurons in complex scenes, though including the fovea, are frequently asymmetric around the fovea, and it is proposed that this is the solution the IT uses to represent multiple objects and their relative spatial positions in a scene. Networks that implement probabilistic decision-making are described, and it is suggested that, when in perceptual systems they take decisions (or 'test hypotheses'), they influence lower-level networks to bias visual perception. Finally, it is shown that similar processes extend to systems involved in the processing of emotion-provoking sensory stimuli, in that word-level cognitive states provide top-down biasing that reaches as far down as the orbitofrontal cortex, where, at the first stage of affective representations, olfactory, taste, flavour, and touch processing is biased (or pre-empted) in humans.
High-dynamic-range scene compression in humans
NASA Astrophysics Data System (ADS)
McCann, John J.
2006-02-01
Single pixel dynamic-range compression alters a particular input value to a unique output value - a look-up table. It is used in chemical and most digital photographic systems having S-shaped transforms to render high-range scenes onto low-range media. Post-receptor neural processing is spatial, as shown by the physiological experiments of Dowling, Barlow, Kuffler, and Hubel & Wiesel. Human vision does not render a particular receptor-quanta catch as a unique response. Instead, because of spatial processing, the response to a particular quanta catch can be any color. Visual response is scene dependent. Stockham proposed an approach to model human range compression using low-spatial frequency filters. Campbell, Ginsberg, Wilson, Watson, Daly and many others have developed spatial-frequency channel models. This paper describes experiments measuring the properties of desirable spatial-frequency filters for a variety of scenes. Given the radiances of each pixel in the scene and the observed appearances of objects in the image, one can calculate the visual mask for that individual image. Here, visual mask is the spatial pattern of changes made by the visual system in processing the input image. It is the spatial signature of human vision. Low-dynamic range images with many white areas need no spatial filtering. High-dynamic-range images with many blacks, or deep shadows, require strong spatial filtering. Sun on the right and shade on the left requires directional filters. These experiments show that variable scene-dependent filters are necessary to mimic human vision. Although spatial-frequency filters can model scene-dependent appearances, the problem remains that an analysis of the scene is needed to calculate the scene-dependent strengths of each of the filters for each frequency.
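The contrast between a single-pixel look-up and scene-dependent spatial filtering can be made concrete with a small sketch. Both functions below are illustrative stand-ins with assumed parameter values, not the paper's measured filters.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def global_s_curve(L, a=8.0):
        """Single-pixel compression: one sigmoidal look-up applied everywhere."""
        x = np.log10(np.clip(L, 1e-6, None))
        x = (x - x.min()) / max(x.max() - x.min(), 1e-12)   # normalized log-luminance
        return 1.0 / (1.0 + np.exp(-a * (x - 0.5)))

    def spatial_compression(L, sigma=30.0, strength=0.7):
        """Scene-dependent compression: subtract a low-spatial-frequency mask
        from log-luminance (in the spirit of Stockham-style models)."""
        logL = np.log10(np.clip(L, 1e-6, None))
        mask = gaussian_filter(logL, sigma)      # low-frequency estimate of range
        return 10.0 ** (logL - strength * mask)  # local detail kept, range compressed

For a low-range image the mask is nearly flat and the two approaches agree; for a scene with deep shadows the spatial version compresses the shadow-to-sun range while preserving local contrast, which a single look-up table cannot do.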
Guided exploration in virtual environments
NASA Astrophysics Data System (ADS)
Beckhaus, Steffi; Eckel, Gerhard; Strothotte, Thomas
2001-06-01
We describe an application supporting alternating interaction and animation for the purpose of exploration in a surround-screen projection-based virtual reality system. The exploration of an environment is a highly interactive and dynamic process in which the presentation of objects of interest can give the user guidance while exploring the scene. Previous systems for automatic presentation of models or scenes need either cinematographic rules, direct human interaction, framesets or precalculation (e.g. precalculation of paths to a predefined goal). We report on the development of a system that can deal with rapidly changing user interest in objects of a scene or model as well as with dynamic models and changes of the camera position introduced interactively by the user. It is implemented as a potential-field based camera data generating system. In this paper we describe the implementation of our approach in a virtual art museum on the CyberStage, our surround-screen projection-based stereoscopic display. The paradigm of guided exploration is introduced describing the freedom of the user to explore the museum autonomously. At the same time, if requested by the user, guided exploration provides just-in-time navigational support. The user controls this support by specifying the current field of interest in high-level search criteria. We also present an informal user study evaluating this approach.
New technologies for HWIL testing of WFOV, large-format FPA sensor systems
NASA Astrophysics Data System (ADS)
Fink, Christopher
2016-05-01
Advancements in FPA density and associated wide-field-of-view infrared sensors (>=4000x4000 detectors) have outpaced the current-art HWIL technology. Whether testing in optical projection or digital signal injection modes, current-art technologies for infrared scene projection, digital injection interfaces, and scene generation systems simply lack the required resolution and bandwidth. For example, the L3 Cincinnati Electronics ultra-high resolution MWIR Camera deployed in some UAV reconnaissance systems features 16MP resolution at 60Hz, while the current upper limit of IR emitter arrays is ~1MP, and single-channel dual-link DVI throughput of COTS graphics cards is limited to 2560x1600 pixels at 60Hz. Moreover, there are significant challenges in real-time, closed-loop, physics-based IR scene generation for large format FPAs, including the size and spatial detail required for very large area terrains, and multi-channel low-latency synchronization to achieve the required bandwidth. In this paper, the author's team presents some of their ongoing research and technical approaches toward HWIL testing of large-format FPAs with wide-FOV optics. One approach presented is a hybrid projection/injection design, where digital signal injection is used to augment the resolution of current-art IRSPs, utilizing a multi-channel, high-fidelity physics-based IR scene simulator in conjunction with a novel image composition hardware unit, to allow projection in the foveal region of the sensor, while non-foveal regions of the sensor array are simultaneously stimulated via direct injection into the post-detector electronics.
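A back-of-envelope check of the quoted figures, assuming a 4096x4096 array for the "16 MP" sensor, shows why multi-channel injection is unavoidable:

    # Pixel-rate comparison from the figures quoted above.
    sensor_rate = 4096 * 4096 * 60          # ~1.0e9 pixels/s for a 16 MP, 60 Hz FPA
    dvi_rate = 2560 * 1600 * 60             # ~2.5e8 pixels/s, dual-link DVI at 60 Hz
    channels = -(-sensor_rate // dvi_rate)  # ceiling division -> 5 parallel links
    print(sensor_rate, dvi_rate, channels)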
Pearce, Bradley; Crichton, Stuart; Mackiewicz, Michal; Finlayson, Graham D; Hurlbert, Anya
2014-01-01
The phenomenon of colour constancy in human visual perception keeps surface colours constant, despite changes in their reflected light due to changing illumination. Although colour constancy has evolved under a constrained subset of illuminations, it is unknown whether its underlying mechanisms, thought to involve multiple components from retina to cortex, are optimised for particular environmental variations. Here we demonstrate a new method for investigating colour constancy using illumination matching in real scenes which, unlike previous methods using surface matching and simulated scenes, allows testing of multiple, real illuminations. We use real scenes consisting of solid familiar or unfamiliar objects against uniform or variegated backgrounds and compare discrimination performance for typical illuminations from the daylight chromaticity locus (approximately blue-yellow) and atypical spectra from an orthogonal locus (approximately red-green, at correlated colour temperature 6700 K), all produced in real time by a 10-channel LED illuminator. We find that discrimination of illumination changes is poorer along the daylight locus than the atypical locus, and is poorest particularly for bluer illumination changes, demonstrating conversely that surface colour constancy is best for blue daylight illuminations. Illumination discrimination is also enhanced, and therefore colour constancy diminished, for uniform backgrounds, irrespective of the object type. These results are not explained by statistical properties of the scene signal changes at the retinal level. We conclude that high-level mechanisms of colour constancy are biased for the blue daylight illuminations and variegated backgrounds to which the human visual system has typically been exposed.
Super-resolution in a defocused plenoptic camera: a wave-optics-based approach.
Sahin, Erdem; Katkovnik, Vladimir; Gotchev, Atanas
2016-03-01
Plenoptic cameras enable the capture of a light field with a single device. However, with traditional light field rendering procedures, they can provide only low-resolution two-dimensional images. Super-resolution is considered to overcome this drawback. In this study, we present a super-resolution method for the defocused plenoptic camera (Plenoptic 1.0), where the imaging system is modeled using wave optics principles and utilizing low-resolution depth information of the scene. We are particularly interested in super-resolution of in-focus and near in-focus scene regions, which constitute the most challenging cases. The simulation results show that the employed wave-optics model makes super-resolution possible for such regions as long as sufficiently accurate depth information is available.
Terrain - Umbra Package v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oppel, Fred; Hart, Brian; Rigdon, James Brian
This library contains modules that read terrain files (e.g., OpenFlight, Open Scene Graph IVE, GeoTIFF Image) and read and manage ESRI terrain datasets. All data is stored and managed in Open Scene Graph (OSG). The terrain system accesses OSG and provides elevation data and access to meta-data such as soil types, and enables linears, areals and buildings to be placed in a terrain. These geometry objects include boxes, point, path, and polygon (region), and sector modules. Utilities have been made available for clamping objects to the terrain and accessing LOS information. This package includes a managed C++ wrapper (TerrainWrapper) to enable C# applications, such as OpShed and UTU, to incorporate this library.
MTF Analysis of LANDSAT-4 Thematic Mapper
NASA Technical Reports Server (NTRS)
Schowengerdt, R.
1984-01-01
A research program to measure the LANDSAT 4 Thematic Mapper (TM) modulation transfer function (MTF) is described. Measurement of a satellite sensor's MTF requires the use of a calibrated ground target, i.e., the spatial radiance distribution of the target must be known to a resolution at least four to five times greater than that of the system under test. A small reflective mirror or a dark-light linear pattern such as a line or edge, together with relatively high resolution underflight imagery, is used to calibrate the target. A technique that utilizes an analytical model for the scene spatial frequency power spectrum will be investigated as an alternative to calibration of the scene. The test sites and analysis techniques are also described.
Adaptive foveated single-pixel imaging with dynamic supersampling
Phillips, David B.; Sun, Ming-Jie; Taylor, Jonathan M.; Edgar, Matthew P.; Barnett, Stephen M.; Gibson, Graham M.; Padgett, Miles J.
2017-01-01
In contrast to conventional multipixel cameras, single-pixel cameras capture images using a single detector that measures the correlations between the scene and a set of patterns. However, these systems typically exhibit low frame rates, because to fully sample a scene in this way requires at least the same number of correlation measurements as the number of pixels in the reconstructed image. To mitigate this, a range of compressive sensing techniques have been developed which use a priori knowledge to reconstruct images from an undersampled measurement set. Here, we take a different approach and adopt a strategy inspired by the foveated vision found in the animal kingdom—a framework that exploits the spatiotemporal redundancy of many dynamic scenes. In our system, a high-resolution foveal region tracks motion within the scene, yet unlike a simple zoom, every frame delivers new spatial information from across the entire field of view. This strategy rapidly records the detail of quickly changing features in the scene while simultaneously accumulating detail of more slowly evolving regions over several consecutive frames. This architecture provides video streams in which both the resolution and exposure time spatially vary and adapt dynamically in response to the evolution of the scene. The degree of local frame rate enhancement is scene-dependent, but here, we demonstrate a factor of 4, thereby helping to mitigate one of the main drawbacks of single-pixel imaging techniques. The methods described here complement existing compressive sensing approaches and may be applied to enhance computational imagers that rely on sequential correlation measurements. PMID:28439538
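For readers unfamiliar with single-pixel imaging, the fully sampled baseline the paper improves upon can be sketched in a few lines using an orthogonal Hadamard pattern basis. The pattern choice and image size here are illustrative; the authors' foveated, spatially varying pattern sets are more elaborate.

    import numpy as np
    from scipy.linalg import hadamard

    n = 32                                  # image side; n*n patterns = full sampling
    H = hadamard(n * n)                     # orthogonal +/-1 basis (symmetric)
    patterns = H.reshape(-1, n, n).astype(float)

    scene = np.zeros((n, n))
    scene[8:24, 8:24] = 1.0                 # toy scene

    # One detector reading per displayed pattern:
    measurements = (patterns * scene).sum(axis=(1, 2))

    # Weight each pattern by its measurement and average; orthogonality of H
    # makes this recover the scene exactly at full sampling.
    image = (measurements[:, None, None] * patterns).mean(axis=0)
    assert np.allclose(image, scene)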
Developing a confidence metric for the Landsat land surface temperature product
NASA Astrophysics Data System (ADS)
Laraby, Kelly G.; Schott, John R.; Raqueno, Nina
2016-05-01
Land Surface Temperature (LST) is an important Earth system data record that is useful to fields such as change detection, climate research, environmental monitoring, and smaller scale applications such as agriculture. Certain Earth-observing satellites can be used to derive this metric, and it would be extremely useful if such imagery could be used to develop a global product. Through the support of the National Aeronautics and Space Administration (NASA) and the United States Geological Survey (USGS), a LST product for the Landsat series of satellites has been developed. Currently, it has been validated for scenes in North America, with plans to expand to a trusted global product. For ideal atmospheric conditions (e.g. stable atmosphere with no clouds nearby), the LST product underestimates the surface temperature by an average of 0.26 K. When clouds are directly above or near the pixel of interest, however, errors can extend to several Kelvin. As the product approaches public release, our major goal is to develop a quality metric that will provide the user with a per-pixel map of estimated LST errors. There are several sources of error that are involved in the LST calculation process, but performing standard error propagation is a difficult task due to the complexity of the atmospheric propagation component. To circumvent this difficulty, we propose to utilize the relationship between cloud proximity and the error seen in the LST process to help develop a quality metric. This method involves calculating the distance to the nearest cloud from a pixel of interest in a scene, and recording the LST error at that location. Performing this calculation for hundreds of scenes allows us to observe the average LST error for different ranges of distances to the nearest cloud. This paper describes this process in full, and presents results for a large set of Landsat scenes.
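The distance-to-nearest-cloud statistic described above maps naturally onto a Euclidean distance transform. The sketch below, with assumed bin edges and a 30 m pixel size, aggregates per-pixel LST errors by cloud distance; the paper's exact binning is not stated.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def lst_error_by_cloud_distance(lst_error, cloud_mask, pixel_km=0.03,
                                    bin_edges_km=None):
        """Mean LST error in bins of distance to the nearest cloud pixel.

        cloud_mask is True where cloud was detected; the transform measures
        distance to the nearest zero, hence the inversion.
        """
        if bin_edges_km is None:
            bin_edges_km = np.arange(0.0, 55.0, 5.0)   # illustrative 5 km bins
        dist_km = distance_transform_edt(~cloud_mask) * pixel_km
        idx = np.digitize(dist_km.ravel(), bin_edges_km)
        err = lst_error.ravel()
        return [float(np.nanmean(err[idx == i])) if np.any(idx == i) else np.nan
                for i in range(1, len(bin_edges_km))]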
Inertial navigation sensor integrated obstacle detection system
NASA Technical Reports Server (NTRS)
Bhanu, Bir (Inventor); Roberts, Barry A. (Inventor)
1992-01-01
A system that incorporates inertial sensor information into optical flow computations to detect obstacles and to provide alternative navigational paths free from obstacles. The system is a maximally passive obstacle detection system that makes selective use of an active sensor. The active detection typically utilizes a laser. Passive sensor suite includes binocular stereo, motion stereo and variable fields-of-view. Optical flow computations involve extraction, derotation and matching of interest points from sequential frames of imagery, for range interpolation of the sensed scene, which in turn provides obstacle information for purposes of safe navigation.
NASA Astrophysics Data System (ADS)
Scheinert, M.; Rosenau, R.; Ebermann, B.; Horwath, M.
2016-12-01
Utilizing the freely available Landsat archive we have set up a monitoring system to process and provide flow-velocity fields for more than 300 outlet glaciers along the margin of the Greenland ice sheet. We will present the major processing steps. These include, among others, an improved orthorectification based on the Global Digital Elevation Map V2 (GDEM-V2) of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). For those Landsat 7 products which feature the scan line corrector (SLC) failure, a destriping correction was applied. An adaptive, recursive filter approach was applied to remove outliers. Altogether, the enhanced processing leads to a higher accuracy of the flow-velocity fields. By mid-2016 we had incorporated more than 37,000 optical multi-sensor scenes from Landsat 1 to 8. These scenes cover the period from 1972 to 2015. So far, for almost 300 glaciers we have processed more than 100,000 flow-velocity fields for the time span until 2012; for the time until 2015, velocity fields were inferred only for the fastest flowing glaciers. However, new recordings of Landsat 7 and Landsat 8 as well as the availability of further scenes through the Landsat Global Archive Consolidation (LGAC) effort will help to enlarge the database. With a further quality check, we can provide more than 40,000 flow-velocity fields for public access. More products will be added continuously while the almost automated processing is ongoing. The long time span makes it possible to determine trends of the flow velocity over different (long) periods. A major achievement is that the high temporal resolution facilitates the analysis of seasonal flow-velocity variations. We will discuss prominent examples of the non-uniform pattern of ice flow velocity changes. For this, a powerful tool is provided by the monitoring system and its web-based data portal. It allows users to study the flow-velocity changes in time and space, and to identify distinctive patterns. Rapid changes like surge events can be detected and analyzed in detail. The presentation will demonstrate how the data portal lets the user interactively calculate profiles or time series for locations selected on the map, and choose from different options to download the examined data.
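A simplified sketch of the offset-tracking idea behind such velocity fields, using chip-wise phase correlation between two co-registered scenes; chip size, step, and upsampling factor are illustrative assumptions, since the abstract does not specify the authors' matching scheme.

    import numpy as np
    from skimage.registration import phase_cross_correlation

    def velocity_field(img_t0, img_t1, dt_days, chip=64, step=32, pixel_m=15.0):
        """Chip-wise offset tracking between two co-registered scenes;
        returns (row, col, vx, vy) tuples with velocities in metres per day."""
        rows, cols = img_t0.shape
        results = []
        for r in range(0, rows - chip, step):
            for c in range(0, cols - chip, step):
                shift, _, _ = phase_cross_correlation(
                    img_t0[r:r + chip, c:c + chip],
                    img_t1[r:r + chip, c:c + chip],
                    upsample_factor=10)          # sub-pixel shift (rows, cols)
                vy, vx = shift * pixel_m / dt_days
                results.append((r + chip // 2, c + chip // 2, vx, vy))
        return results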
Irdis: A Digital Scene Storage And Processing System For Hardware-In-The-Loop Missile Testing
NASA Astrophysics Data System (ADS)
Sedlar, Michael F.; Griffith, Jerry A.
1988-07-01
This paper describes the implementation of a Seeker Evaluation and Test Simulation (SETS) Facility at Eglin Air Force Base. This facility will be used to evaluate imaging infrared (IIR) guided weapon systems by performing various types of laboratory tests. One such test is termed Hardware-in-the-Loop (HIL) simulation (Figure 1), in which the actual flight of a weapon system is simulated as closely as possible in the laboratory. As shown in the figure, there are four major elements in the HIL test environment: the weapon/sensor combination, an aerodynamic simulator, an imagery controller, and an infrared imagery system. The paper concentrates on the approaches and methodologies used in the imagery controller and infrared imaging system elements for generating scene information. For procurement purposes, these two elements have been combined into an Infrared Digital Injection System (IRDIS) which provides scene storage, processing, and output interface to drive a radiometric display device or to directly inject digital video into the weapon system (bypassing the sensor). The paper describes in detail how standard and custom image processing functions have been combined with off-the-shelf mass storage and computing devices to produce a system which provides high sample rates (greater than 90 Hz), a large terrain database, high weapon rates of change, and multiple independent targets. A photo-based approach has been used to maximize terrain and target fidelity, thus providing a rich and complex scene for weapon/tracker evaluation.
Space flight visual simulation.
Xu, L
1985-01-01
In this paper, based on the scenes of stars seen by astronauts in their orbital flights, we have studied the mathematical model which must be constructed for a CGI system to realize space flight visual simulation. Considering such factors as the revolution and rotation of the Earth, the exact date, time and site of orbital injection of the spacecraft, as well as its orbital flight and attitude motion, we first defined all the instantaneous lines of sight and visual fields of astronauts in space. Then, through a series of coordinate transforms, the pictures of the scenes of stars changing with time and space were photographed one by one mathematically. In the procedure, we designed a method of three successive "mathematical cuttings." Finally, we obtained each instantaneous picture of the scenes of stars observed by astronauts through the window of the cockpit. The dynamic shadowing of stars by the Earth could also be displayed in the varying pictures.
The development of automated behavior analysis software
NASA Astrophysics Data System (ADS)
Jaana, Yuki; Prima, Oky Dicky A.; Imabuchi, Takashi; Ito, Hisayoshi; Hosogoe, Kumiko
2015-03-01
The measurement of behavior for participants in a conversation scene involves verbal and nonverbal communications. Measurement validity may vary across observers owing to factors such as human error, poorly designed measurement systems, and inadequate observer training. Although some systems have been introduced in previous studies to automatically measure behaviors, these systems prevent participants from talking in a natural way. In this study, we propose a software application program to automatically analyze behaviors of the participants, including utterances, facial expressions (happy or neutral), head nods, and poses, using only a single omnidirectional camera. The camera is small enough to be embedded into a table to allow participants to have spontaneous conversation. The proposed software utilizes facial feature tracking based on a constrained local model to observe the changes of the facial features captured by the camera, and the Japanese female facial expression database to recognize expressions. Our experiment results show that there are significant correlations between measurements observed by the observers and by the software.
Scene recognition following locomotion around a scene.
Motes, Michael A; Finlay, Cory A; Kozhevnikov, Maria
2006-01-01
Effects of locomotion on scene-recognition reaction time (RT) and accuracy were studied. In experiment 1, observers memorized an 11-object scene and made scene-recognition judgments on subsequently presented scenes from the encoded view or different views (i.e., scenes were rotated or observers moved around the scene, both from 40 degrees to 360 degrees). In experiment 2, observers viewed different 5-object scenes on each trial and made scene-recognition judgments from the encoded view or after moving around the scene, from 36 degrees to 180 degrees. Across experiments, scene-recognition RT increased (in experiment 2 accuracy decreased) with angular distance between encoded and judged views, regardless of how the viewpoint changes occurred. The findings raise questions about conditions in which locomotion produces spatially updated representations of scenes.
Optimal directional view angles for remote-sensing missions
NASA Technical Reports Server (NTRS)
Kimes, D. S.; Holben, B. N.; Tucker, C. J.; Newcomb, W. W.
1984-01-01
The present investigation is concerned with the directional, off-nadir viewing of terrestrial scenes using remote-sensing systems from aircraft and satellite platforms, taking into account advantages of such an approach over strictly nadir viewing systems. Directional reflectance data collected for bare soil and several different vegetation canopies in NOAA-7 AVHRR bands 1 and 2 were analyzed. Optimum view angles were recommended for two strategies. The first strategy views the utility of off-nadir measurements as extending spatial and temporal coverage of the target area. The second strategy views the utility of off-nadir measurements as providing additional information about the physical characteristics of the target. Conclusions regarding the two strategies are discussed.
Object-oriented structures supporting remote sensing databases
NASA Technical Reports Server (NTRS)
Wichmann, Keith; Cromp, Robert F.
1995-01-01
Object-oriented databases show promise for modeling the complex interrelationships pervasive in scientific domains. To examine the utility of this approach, we have developed an Intelligent Information Fusion System based on this technology, and applied it to the problem of managing an active repository of remotely-sensed satellite scenes. The design and implementation of the system is compared and contrasted with conventional relational database techniques, followed by a presentation of the underlying object-oriented data structures used to enable fast indexing into the data holdings.
Real-time 3D change detection of IEDs
NASA Astrophysics Data System (ADS)
Wathen, Mitch; Link, Norah; Iles, Peter; Jinkerson, John; Mrstik, Paul; Kusevic, Kresimir; Kovats, David
2012-06-01
Road-side bombs are a real and continuing threat to soldiers in theater. CAE USA recently developed a prototype Volume based Intelligence Surveillance Reconnaissance (VISR) sensor platform for IED detection. This vehicle-mounted, prototype sensor system uses a high data rate LiDAR (1.33 million range measurements per second) to generate a 3D mapping of roadways. The mapped data is used as a reference to generate real-time change detection on future trips on the same roadways. The prototype VISR system is briefly described. The focus of this paper is the methodology used to process the 3D LiDAR data, in real-time, to detect small changes on and near the roadway ahead of a vehicle traveling at moderate speeds with sufficient warning to stop the vehicle at a safe distance from the threat. The system relies on accurate navigation equipment to geo-reference the reference run and the change-detection run. Since it was recognized early in the project that detection of small changes could not be achieved with accurate navigation solutions alone, a scene alignment algorithm was developed to register the reference run with the change detection run prior to applying the change detection algorithm. Good success was achieved in simultaneous real time processing of scene alignment plus change detection.
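Once the reference and change-detection runs are aligned, the core change test reduces to nearest-neighbour distances between point clouds. A minimal sketch, assuming both clouds are already geo-referenced and scene-aligned, with an arbitrary 10 cm threshold (the paper's detection criteria are not specified in the abstract):

    import numpy as np
    from scipy.spatial import cKDTree

    def detect_changes(reference_pts, current_pts, threshold_m=0.10):
        """Return current-run points with no reference point within threshold_m.

        Both inputs are (N, 3) arrays; the threshold trades sensitivity
        against sensor noise and residual misalignment.
        """
        tree = cKDTree(reference_pts)            # built once per road segment
        dist, _ = tree.query(current_pts, k=1)
        return current_pts[dist > threshold_m]   # candidate new objects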
Eye movements and attention in reading, scene perception, and visual search.
Rayner, Keith
2009-08-01
Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.
NASA Astrophysics Data System (ADS)
Morris, Joseph W.; Lowry, Mac; Boren, Brett; Towers, James B.; Trimble, Darian E.; Bunfield, Dennis H.
2011-06-01
The US Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) and the Redstone Test Center (RTC) have formed the Scene Generation Development Center (SGDC) to support the Department of Defense (DoD) open source EO/IR Scene Generation initiative for real-time hardware-in-the-loop and all-digital simulation. Various branches of the DoD have invested significant resources in the development of advanced scene and target signature generation codes. The SGDC goal is to maintain unlimited government rights and controlled access to government open source scene generation and signature codes. In addition, the SGDC provides development support to a multi-service community of test and evaluation (T&E) users, developers, and integrators in a collaborative environment. The SGDC has leveraged the DoD Defense Information Systems Agency (DISA) ProjectForge (https://Project.Forge.mil) which provides a collaborative development and distribution environment for the DoD community. The SGDC will develop and maintain several codes for tactical and strategic simulation, such as the Joint Signature Image Generator (JSIG), the Multi-spectral Advanced Volumetric Real-time Imaging Compositor (MAVRIC), and Office of the Secretary of Defense (OSD) Test and Evaluation Science and Technology (T&E/S&T) thermal modeling and atmospherics packages, such as EOView, CHARM, and STAR. Other utility packages included are the ContinuumCore for real-time messaging and data management and IGStudio for run-time visualization and scenario generation.
Xia, Xinxing; Zheng, Zhenrong; Liu, Xu; Li, Haifeng; Yan, Caijie
2010-09-10
We utilized a high-frame-rate projector, a rotating mirror, and a cylindrical selective-diffusing screen to present a novel three-dimensional (3D) omnidirectional-view display system without the need for any special viewing aids. The display principle and image size are analyzed, and the common display zone is proposed. The viewing zone for one observation place is also studied. The experimental results verify this method, and a vivid color 3D scene with occlusion and smooth parallax is also demonstrated with the system.
Visualization of spatial-temporal data based on 3D virtual scene
NASA Astrophysics Data System (ADS)
Wang, Xianghong; Liu, Jiping; Wang, Yong; Bi, Junfang
2009-10-01
The main purpose of this paper is to realize three-dimensional dynamic visualization of spatial-temporal data in a three-dimensional virtual scene, using three-dimensional visualization technology combined with GIS, so that people's abilities to cognize time and space are enhanced and improved through the design of dynamic symbols and interactive expression. Using particle systems, three-dimensional simulation, virtual reality and other visual means, we can simulate the situations produced by changes in the spatial location and property information of geographical entities over time, explore and analyze their movement and transformation rules through interaction, and also replay history and forecast the future. In this paper, the main research objects are vehicle tracks and typhoon paths and their spatial-temporal data; through three-dimensional dynamic simulation of these tracks, we realize timely monitoring of their trends and replaying of historical tracks. Visualization techniques for spatial-temporal data in a three-dimensional virtual scene provide an excellent cognitive instrument for spatial-temporal information: they not only show the changes and developments of a situation with added clarity, but can also be used for prediction and deduction of future developments and changes.
A method of 3D object recognition and localization in a cloud of points
NASA Astrophysics Data System (ADS)
Bielicki, Jerzy; Sitnik, Robert
2013-12-01
The method proposed in this article is designed for the analysis of data in the form of a cloud of points directly from 3D measurements. It is designed for use in end-user applications that can be directly integrated with 3D scanning software. The method utilizes locally calculated feature vectors (FVs) in point cloud data. Recognition is based on comparison of the analyzed scene with a reference object library. A global descriptor in the form of a set of spatially distributed FVs is created for each reference model. During the detection process, the correlation of subsets of reference FVs with FVs calculated in the scene is computed. Features utilized in the algorithm are based on parameters which qualitatively estimate mean and Gaussian curvatures. Replacing differentiation with averaging in the curvature estimation makes the algorithm more resistant to discontinuities and poor quality of the input data. Utilization of the FV subsets allows detection of partially occluded and cluttered objects in the scene, while additional spatial information maintains the false positive rate at a reasonably low level.
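The averaging-based curvature idea can be approximated with PCA over local neighbourhoods. The surface-variation measure below is a common stand-in for qualitative curvature estimates (neighbourhood size assumed), not the paper's exact FV definition.

    import numpy as np
    from scipy.spatial import cKDTree

    def surface_variation(points, k=20):
        """Averaging-based curvature proxy per point: the smallest PCA
        eigenvalue of each k-neighbourhood relative to the eigenvalue sum
        (~0 on flat patches, larger on curved regions and edges)."""
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)
        feats = np.empty(len(points))
        for i, nb in enumerate(idx):
            P = points[nb] - points[nb].mean(axis=0)
            w = np.linalg.eigvalsh(P.T @ P)        # ascending eigenvalues
            feats[i] = w[0] / max(w.sum(), 1e-12)
        return feats

Because each feature is an average over a neighbourhood rather than a derivative, it degrades gracefully on noisy or incomplete scan data, which is the property the abstract highlights.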
Effects of chromatic image statistics on illumination induced color differences.
Lucassen, Marcel P; Gevers, Theo; Gijsenij, Arjan; Dekker, Niels
2013-09-01
We measure the color fidelity of visual scenes that are rendered under different (simulated) illuminants and shown on a calibrated LCD display. Observers make triad illuminant comparisons involving the renderings from two chromatic test illuminants and one achromatic reference illuminant shown simultaneously. Four chromatic test illuminants are used: two along the daylight locus (yellow and blue), and two perpendicular to it (red and green). The observers select the rendering having the best color fidelity, thereby indirectly judging which of the two test illuminants induces the smallest color differences compared to the reference. Both multicolor test scenes and natural scenes are studied. The multicolor scenes are synthesized and represent ellipsoidal distributions in CIELAB chromaticity space having the same mean chromaticity but different chromatic orientations. We show that, for those distributions, color fidelity is best when the vector of the illuminant change (pointing from neutral to chromatic) is parallel to the major axis of the scene's chromatic distribution. For our selection of natural scenes, which generally have much broader chromatic distributions, we measure a higher color fidelity for the yellow and blue illuminants than for red and green. Scrambled versions of the natural images are also studied to exclude possible semantic effects. We quantitatively predict the average observer response (i.e., the illuminant probability) with four types of models, differing in the extent to which they incorporate information processing by the visual system. Results show different levels of performance for the models, and different levels for the multicolor scenes and the natural scenes. Overall, models based on the scene averaged color difference have the best performance. We discuss how color constancy algorithms may be improved by exploiting knowledge of the chromatic distribution of the visual scene.
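The parallel-axis finding lends itself to a compact check: compute the major axis of a scene's (a*, b*) distribution and its alignment with an illuminant-change vector. The sketch below is illustrative only; the paper's CIELAB pipeline and ellipsoid synthesis are not reproduced.

    import numpy as np

    def chromatic_alignment(ab_pixels, illum_shift_ab):
        """Cosine alignment between a scene's principal chromatic axis in
        CIELAB (a*, b*) and an illuminant-change vector; values near 1
        correspond to the condition reported above to give best fidelity."""
        X = ab_pixels - ab_pixels.mean(axis=0)
        cov = X.T @ X / len(X)
        w, V = np.linalg.eigh(cov)
        major = V[:, np.argmax(w)]                 # major axis of the distribution
        v = illum_shift_ab / np.linalg.norm(illum_shift_ab)
        return abs(float(major @ v))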
NASA Technical Reports Server (NTRS)
Kogut, J.; Larduinat, E.; Fitzgerald, M.
1983-01-01
The utility of methods for generating TM RLUTS which can improve the quality of the resultant images was investigated. The TM-CCT-ADDS tape was changed to account for a different collection window for the calibration data. Several scenes of Terrebonne Bay, Louisiana and the Grand Bahamas were analyzed to evaluate the radiometric corrections operationally applied to the image data and to investigate several techniques for reducing striping in the images. Printer plots for the TM shutter data were produced and detector statistics were compiled and plotted. These statistics included various combinations of the average shutter counts for each scan before and after DC restore for forward and reverse scans. Results show that striping is caused by the detectors becoming saturated when they view a bright cloud and depress the DC restore level.
Sensor-Aware Recognition and Tracking for Wide-Area Augmented Reality on Mobile Phones
Chen, Jing; Cao, Ruochen; Wang, Yongtian
2015-01-01
Wide-area registration in outdoor environments on mobile phones is a challenging task in mobile augmented reality fields. We present a sensor-aware large-scale outdoor augmented reality system for recognition and tracking on mobile phones. GPS and gravity information is used to improve the VLAD performance for recognition. A kind of sensor-aware VLAD algorithm, which is self-adaptive to different scale scenes, is utilized to recognize complex scenes. Considering vision-based registration algorithms are too fragile and tend to drift, data coming from inertial sensors and vision are fused together by an extended Kalman filter (EKF) to achieve considerable improvements in tracking stability and robustness. Experimental results show that our method greatly enhances the recognition rate and eliminates the tracking jitters. PMID:26690439
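The sensor-fusion step is a standard EKF cycle. A generic sketch follows, with the state, models, and Jacobians left abstract, since the abstract does not detail the authors' filter design.

    import numpy as np

    def ekf_step(x, P, F, Q, z, h, H, R):
        """One generic EKF predict/update cycle of the kind used to fuse
        inertial propagation (F, Q) with a vision measurement (z, h, H, R)."""
        # Predict with the (linearized) motion model driven by inertial data
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the vision measurement
        y = z - h(x)                               # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + K @ y
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P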
Performance evaluation and geologic utility of LANDSAT 4 TM and MSS scanners
NASA Technical Reports Server (NTRS)
Paley, H. N.
1983-01-01
Experiments using artificial targets (polyethylene sheets) to help calibrate and evaluate atmospheric effects as well as the radiometric precision and spatial characteristics of the NS-001 and TM sensor systems were attempted and show the technical feasibility of using plastic targets for such studies, although weather precluded successful TM data acquisition. Tapes for six LANDSAT 4 TM scenes were acquired and data processing began. Computer-enhanced TM simulator and LANDSAT 4 TM data were compared for a porphyry copper deposit in southern Arizona. Preliminary analyses performed on two TM scenes acquired in the CCT-PT format show that the TM data appear to contain a marked increase in geologically useful information; however, a number of instrumental processing artifacts may well limit the ability of the geologist to fully extract this information.
Landsat 3 return beam vidicon response artifacts
,; Clark, B.
1981-01-01
The return beam vidicon (RBV) sensing systems employed aboard Landsats 1, 2, and 3 have all been similar in that they have utilized vidicon tube cameras. These are not mirror-sweep scanning devices such as the multispectral scanner (MSS) sensors that have also been carried aboard the Landsat satellites. The vidicons operate more like common television cameras, using an electron gun to read images from a photoconductive faceplate. In the case of Landsats 1 and 2, the RBV system consisted of three such vidicons which collected remote sensing data in three distinct spectral bands. Landsat 3, however, utilizes just two vidicon cameras, both of which sense data in a single broad band. The Landsat 3 RBV system additionally has a unique configuration. As arranged, the two cameras can be shuttered alternately, twice each, in the same time it takes for one MSS scene to be acquired. This shuttering sequence results in four RBV "subscenes" for every MSS scene acquired, similar to the four quadrants of a square. Each subscene represents a ground area of approximately 98 by 98 km. The subscenes are designated A, B, C, and D for the northwest, northeast, southwest, and southeast quarters of the full scene, respectively. RBV data products are normally ordered, reproduced, and sold on a subscene basis and are in general referred to in this way. Each exposure from the RBV camera system presents an image which is 98 km on a side. When these analog video data are subsequently converted to digital form, the resulting picture element, or pixel, is 19 m on a side with an effective resolution element of 30 m. This pixel size is substantially smaller than that obtainable in MSS images (the MSS has an effective resolution element of 73.4 m), and, when RBV images are compared to equivalent MSS images, better resolution in the RBV data is clearly evident. It is for this reason that the RBV system can be a valuable tool for remote sensing of earth resources. Until recently, RBV imagery was processed directly from wideband video tape data onto 70-mm film. This changed in September 1980 when digital production of RBV data at the NASA Goddard Space Flight Center (GSFC) began. The wideband video tape data are now subjected to analog-to-digital preprocessing and corrected both radiometrically and geometrically to produce high-density digital tapes (HDTs). The HDT data are subsequently transmitted via satellite (Domsat) to the EROS Data Center (EDC), where they are used to generate 241-mm photographic images at a scale of 1:500,000. Computer-compatible tapes of the data are also generated as digital products. Of the RBV data acquired since September 1, 1980, approximately 2,800 subscenes per month have been processed at EDC.
NASA Astrophysics Data System (ADS)
den Hollander, Richard J. M.; Bouma, Henri; van Rest, Jeroen H. C.; ten Hove, Johan-Martijn; ter Haar, Frank B.; Burghouts, Gertjan J.
2017-10-01
Video analytics is essential for managing the large quantities of raw data produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and in the optical chain, so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' developing needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing metadata describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene, such as lighting conditions or measures of scene complexity (e.g. number of people). A second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. A third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. In order to support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large-scale ad-hoc networks.
Advanced interactive display formats for terminal area traffic control
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.
1995-01-01
The basic design considerations for perspective Air Traffic Control displays are described. A software framework has been developed for manual viewing parameter setting (MVPS) in preparation for continued, ongoing development of automated viewing parameter setting (AVPS) schemes. The MVPS system is based on indirect manipulation of the viewing parameters. Requests for changes in viewing parameter setting are entered manually by the operator by moving viewing parameter manipulation pointers on the screen. The motion of these pointers, which are an integral part of the 3-D scene, is limited to the boundaries of the screen. This arrangement has been chosen in order to preserve the correspondence between the new and the old viewing parameter setting, a feature which helps prevent spatial disorientation of the operator. For all viewing operations, e.g. rotation, translation and ranging, the actual change is executed automatically by the system through gradual transitions with an exponentially damped, sinusoidal velocity profile, referred to in this work as 'slewing' motions. The slewing functions, which eliminate discontinuities in the viewing parameter changes, are designed primarily to enhance the operator's impression that he or she is dealing with an actual physical system rather than an abstract computer-generated scene. Current, ongoing efforts deal with the development of automated viewing parameter setting schemes. These schemes employ an optimization strategy aimed at identifying the best possible vantage point from which the Air Traffic Control scene can be viewed for a given traffic situation.
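The exact slewing profile is not given in the abstract; the sketch below is one plausible realization of a gradual transition with an exponentially damped, sinusoidal velocity profile, with the decay constant and duration as made-up parameters.

```python
import numpy as np

def slew(p0, p1, duration=1.0, decay=3.0, steps=100):
    """Transition a viewing parameter from p0 to p1 without velocity
    discontinuities: the velocity is a damped half-sine that is zero at
    both endpoints, and displacement is normalized to land exactly on p1."""
    t = np.linspace(0.0, duration, steps)
    velocity = np.exp(-decay * t / duration) * np.sin(np.pi * t / duration)
    displacement = np.cumsum(velocity)
    displacement /= displacement[-1]            # normalize to [0, 1]
    return p0 + (p1 - p0) * displacement
```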
Driving with indirect viewing sensors: understanding the visual perception issues
NASA Astrophysics Data System (ADS)
O'Kane, Barbara L.
1996-05-01
Visual perception is one of the most important elements of driving in that it enables the driver to understand and react appropriately to the situation along the path of the vehicle. The driver's visual perception is enabled to the greatest extent while driving during the day. Noticeable decrements in visual acuity, range of vision, depth of field and color perception occur at night and under certain weather conditions. Indirect viewing sensors, utilizing various technologies and spectral bands, may assist the driver's normal mode of driving. Critical applications in the military as well as other official activities may require driving at night without headlights. In these latter cases, it is critical that the device, being the only source of scene information, provide the scene cues needed for driving on, and often off, road. One can speculate about the scene information that a driver needs, such as road edges, terrain orientation, and detection of people and objects in or near the path of the vehicle. But the perceptual qualities of the scene that give rise to these perceptions are little known and thus not quantified for evaluation of indirect viewing devices. This paper discusses driving with headlights and compares the scene content with that provided by a thermal system in the 8-12 micrometer spectral band, which may eventually be used for driving. The benefits and advantages of each are discussed, as well as their limitations in providing information useful to the driver, who must make rapid and critical decisions based upon the available scene content. General recommendations are made for potential avenues of development to overcome some of these limitations.
NASA Astrophysics Data System (ADS)
Hawbaker, T. J.; Vanderhoof, M.; Beal, Y. J. G.; Takacs, J. D.; Schmidt, G.; Falgout, J.; Brunner, N. M.; Caldwell, M. K.; Picotte, J. J.; Howard, S. M.; Stitt, S.; Dwyer, J. L.
2016-12-01
Complete and accurate burned area data are needed to document patterns of fires, to quantify relationships between the patterns and drivers of fire occurrence, and to assess the impacts of fires on human and natural systems. Unfortunately, many existing fire datasets in the United States are known to be incomplete, which complicates efforts to understand burned area patterns and introduces a large amount of uncertainty into efforts to identify their driving processes and impacts. Because of this, the need to systematically collect burned area information has been recognized by the United Nations Framework Convention on Climate Change and the Intergovernmental Panel on Climate Change, which have both called for the production of essential climate variables. To help meet this need, we developed a novel algorithm that automatically identifies burned areas in temporally-dense time series of Landsat image stacks to produce Landsat Burned Area Essential Climate Variable (BAECV) products. The algorithm makes use of predictors derived from individual Landsat scenes, lagged reference conditions, and change metrics between the scene and reference predictors. Outputs of the BAECV algorithm, generated for the conterminous United States for 1984 through 2015, consist of burn probabilities for each Landsat scene, in addition to annual composites including the maximum burn probability, a burn classification, and the Julian date of the first Landsat scene in which a burn was observed. The BAECV products document patterns of fire occurrence that are not well characterized by existing fire datasets in the United States. We anticipate that these data could help to better understand past patterns of fire occurrence, the drivers that created them, and the impacts fires had on natural and human systems.
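The abstract names the predictor families (single-scene predictors, lagged reference conditions, and scene-to-reference change metrics) without giving formulas. The sketch below illustrates the change-metric idea using the Normalized Burn Ratio (NBR), a common burned-area index; the choice of NBR and the simple compositing are assumptions, not the BAECV specification.

```python
import numpy as np

def nbr(nir, swir2):
    """Normalized Burn Ratio from NIR and SWIR2 reflectance."""
    return (nir - swir2) / (nir + swir2 + 1e-12)

def change_metrics(scene_nbr, reference_stack):
    """Change metrics between a scene and its lagged reference condition.

    scene_nbr: (H, W) NBR of the current scene.
    reference_stack: (T, H, W) NBR from preceding scenes of the same path/row.
    """
    reference = np.nanmean(reference_stack, axis=0)   # lagged reference
    dnbr = reference - scene_nbr                      # NBR drop suggests burning
    z = dnbr / (np.nanstd(reference_stack, axis=0) + 1e-6)
    return dnbr, z
```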
NASA Technical Reports Server (NTRS)
Walatka, Pamela P.; Clucas, Jean; McCabe, R. Kevin; Plessel, Todd; Potter, R.; Cooper, D. M. (Technical Monitor)
1994-01-01
The Flow Analysis Software Toolkit, FAST, is a software environment for visualizing data. FAST is a collection of separate programs (modules) that run simultaneously and allow the user to examine the results of numerical and experimental simulations. The user can load data files, perform calculations on the data, visualize the results of these calculations, construct scenes of 3D graphical objects, and plot, animate and record the scenes. Computational Fluid Dynamics (CFD) visualization is the primary intended use of FAST, but FAST can also assist in the analysis of other types of data. FAST combines the capabilities of such programs as PLOT3D, RIP, SURF, and GAS into one environment with modules that share data. Sharing data between modules eliminates the drudgery of transferring data between programs. All the modules in the FAST environment have a consistent, highly interactive graphical user interface. Most commands are entered by pointing and clicking. The modular construction of FAST makes it flexible and extensible. The environment can be custom configured and new modules can be developed and added as needed. The following modules have been developed for FAST: VIEWER, FILE IO, CALCULATOR, SURFER, TOPOLOGY, PLOTTER, TITLER, TRACER, ARCGRAPH, GQ, SURFERU, SHOTET, and ISOLEVU. A utility is also included to make the inclusion of user-defined modules in the FAST environment easy. The VIEWER module is the central control for the FAST environment. From VIEWER, the user can change object attributes, interactively position objects in three-dimensional space, define and save scenes, create animations, spawn new FAST modules, add additional view windows, and save and execute command scripts. The FAST User Guide uses text and FAST MAPS (graphical representations of the entire user interface) to guide the user through the use of FAST. Chapters include: Maps, Overview, Tips, Getting Started Tutorial, a separate chapter for each module, file formats, and system administration.
Mavratzakis, Aimee; Herbert, Cornelia; Walla, Peter
2016-01-01
In the current study, electroencephalography (EEG) was recorded simultaneously with facial electromyography (fEMG) to determine whether emotional faces and emotional scenes are processed differently at the neural level. In addition, it was investigated whether these differences can be observed at the behavioural level via spontaneous facial muscle activity. Emotional content of the stimuli did not affect early P1 activity. Emotional faces elicited enhanced amplitudes of the face-sensitive N170 component, while its counterpart, the scene-related N100, was not sensitive to the emotional content of scenes. At 220-280 ms, the early posterior negativity (EPN) was enhanced only slightly for fearful as compared to neutral or happy faces. However, its amplitudes were significantly enhanced during processing of scenes with positive content, particularly over the right hemisphere. Scenes of positive content also elicited enhanced spontaneous zygomatic activity from 500-750 ms onwards, while happy faces elicited no such changes. Contrastingly, both fearful faces and negative scenes elicited enhanced spontaneous corrugator activity at 500-750 ms after stimulus onset. However, relative to baseline, EMG changes occurred earlier for faces (250 ms) than for scenes (500 ms), whereas for scenes the activity changes were more pronounced over the whole viewing period. Taking all effects into account, the data suggest that emotional facial expressions evoke faster attentional orienting, but weaker affective neural activity and emotional behavioural responses, compared to emotional scenes. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
The Changing Educational Scene in China.
ERIC Educational Resources Information Center
Hindes, Sally
1978-01-01
Education in the People's Republic of China has always been an integral part of the political system, emphasizing political commitment, physical health and labor, academic studies, self-reliance, concern and respect for all Chinese people, and group achievement. Recent educational changes call for a unified curriculum and schools for the talented.…
LANDSAT D local user terminal study
NASA Technical Reports Server (NTRS)
Alexander, L.; Louie, M.; Spencer, R.; Stow, W. K.
1976-01-01
The effect of the changes incorporated in the LANDSAT D system on the ability of a local user terminal to receive, record and process data in real time was studied. Alternate solutions to the problems raised by these changes were evaluated. A loading analysis was performed in order to determine the quantities of data that a local user terminal (LUT) would be interested in receiving and processing. The number of bits in an MSS and a TM scene was calculated, along with the number of scenes per day that an LUT might require for processing. These were then combined into a total number of processed bits per day for an LUT as a function of sensor and coverage circle radius.
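A back-of-envelope version of this loading analysis is sketched below; the scene sizes are rough approximations from typical MSS/TM scene dimensions, not the report's actual bit counts or scene demands.

```python
# Approximate scene sizes: MSS ~4 bands of ~3240x2340 6-bit pixels;
# TM ~7 bands of ~6000x6600 8-bit pixels. Illustrative values only.
MSS_BITS_PER_SCENE = 4 * 3240 * 2340 * 6
TM_BITS_PER_SCENE = 7 * 6000 * 6600 * 8

def bits_per_day(mss_scenes, tm_scenes):
    """Total processed bits/day for a given daily scene demand at an LUT."""
    return mss_scenes * MSS_BITS_PER_SCENE + tm_scenes * TM_BITS_PER_SCENE

# e.g. an LUT processing 4 MSS and 4 TM scenes per day:
print(f"{bits_per_day(4, 4) / 1e9:.1f} Gbit/day")   # ~9.6 Gbit/day
```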
Hollingworth, Andrew; Henderson, John M
2004-07-01
In a change detection paradigm, the global orientation of a natural scene was incrementally changed in 1 degree intervals. In Experiments 1 and 2, participants demonstrated sustained change blindness to incremental rotation, often coming to consider a significantly different scene viewpoint as an unchanged continuation of the original view. Experiment 3 showed that participants who failed to detect the incremental rotation nevertheless reliably detected a single-step rotation back to the initial view. Together, these results demonstrate an important dissociation between explicit change detection and visual memory. Following a change, visual memory is updated to reflect the changed state of the environment, even if the change was not detected.
[Visual representation of natural scenes in flicker changes].
Nakashima, Ryoichi; Yokosawa, Kazuhiko
2010-08-01
Coherence theory in scene perception (Rensink, 2002) assumes the retention of volatile object representations on which attention is not focused. On the other hand, visual memory theory in scene perception (Hollingworth & Henderson, 2002) assumes that robust object representations are retained. In this study, we hypothesized that the difference between these two theories derives from the different experimental tasks on which they are based. To verify this hypothesis, we examined the properties of visual representation using a change detection and memory task in a flicker paradigm. We measured the representations when participants were instructed to search for a change in a scene, and compared them with intentional memory representations. The visual representations were retained in visual long-term memory even in the flicker paradigm, and were as robust as the intentional memory representations. However, the results indicate that the representations are unavailable for explicitly localizing a scene change, but are available for answering the recognition test. This suggests that coherence theory and visual memory theory are compatible.
Atmospheric corrections for satellite water quality studies
NASA Technical Reports Server (NTRS)
Piech, K. R.; Schott, J. R.
1975-01-01
Variations in the relative value of the blue and green reflectances of a lake can be correlated with important optical and biological parameters measured from surface vessels. Measurement of the relative reflectance values from color film imagery requires removal of atmospheric effects. Data processing is particularly crucial because: (1) lakes are the darkest objects in a scene; (2) minor reflectance changes can correspond to important physical changes; (3) lake systems extend over broad areas in which atmospheric conditions may fluctuate; (4) seasonal changes are of importance; and, (5) effects of weather are important, precluding flights under only ideal weather conditions. Data processing can be accomplished through microdensitometry of scene shadow areas. Measurements of reflectance ratios can be made to an accuracy of plus or minus 12%, sufficient to permit monitoring of important eutrophication indices.
Research in interactive scene analysis
NASA Technical Reports Server (NTRS)
Tenenbaum, J. M.; Garvey, T. D.; Weyl, S. A.; Wolf, H. C.
1975-01-01
An interactive scene interpretation system (ISIS) was developed as a tool for constructing and experimenting with man-machine and automatic scene analysis methods tailored for particular image domains. A recently developed region analysis subsystem based on the paradigm of Brice and Fennema is described. Using this subsystem a series of experiments was conducted to determine good criteria for initially partitioning a scene into atomic regions and for merging these regions into a final partition of the scene along object boundaries. Semantic (problem-dependent) knowledge is essential for complete, correct partitions of complex real-world scenes. An interactive approach to semantic scene segmentation was developed and demonstrated on both landscape and indoor scenes. This approach provides a reasonable methodology for segmenting scenes that cannot be processed completely automatically, and is a promising basis for a future automatic system. A program is described that can automatically generate strategies for finding specific objects in a scene based on manually designated pictorial examples.
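As a toy illustration of the merge step (not the actual Brice and Fennema criteria, which weigh boundary strength), the sketch below greedily merges adjacent atomic regions whose mean intensities are similar, using a union-find structure; region statistics are not updated between merges.

```python
def merge_regions(region_means, adjacent_pairs, threshold=10.0):
    """Single greedy pass merging adjacent atomic regions with similar
    mean intensity. region_means: list of floats; adjacent_pairs: list of
    (i, j) index pairs for regions sharing a boundary."""
    parent = list(range(len(region_means)))

    def find(i):                     # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in adjacent_pairs:
        ri, rj = find(i), find(j)
        if ri != rj and abs(region_means[ri] - region_means[rj]) < threshold:
            parent[rj] = ri          # erase the weak boundary
    return [find(i) for i in range(len(region_means))]
```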
CB Database: A change blindness database for objects in natural indoor scenes.
Sareen, Preeti; Ehinger, Krista A; Wolfe, Jeremy M
2016-12-01
Change blindness has been a topic of interest in the cognitive sciences for decades. Change detection experiments are frequently used for studying research topics such as attention and perception. However, creating change detection stimuli is tedious, and there is no open repository of such stimuli using natural scenes. We introduce the Change Blindness (CB) Database with object changes in 130 colored images of natural indoor scenes. Size and eccentricity are provided for all changes, along with reaction time data from a baseline experiment. In addition, we have two specialized satellite databases that are subsets of the 130 images. In one set, changes are seen in rooms or in mirrors in those rooms (Mirror Change Database). In the other, changes occur in a room or out a window (Window Change Database). Both sets control for background, change size, and eccentricity. The CB Database is intended to provide researchers with a stimulus set of natural scenes with defined stimulus parameters that can be used for a wide range of experiments. The CB Database can be found at http://search.bwh.harvard.edu/new/CBDatabase.html.
NASA Astrophysics Data System (ADS)
Folaron, Michelle; Deacutis, Martin; Hegarty, Jennifer; Vollmerhausen, Richard; Schroeder, John; Colby, Frank P.
2007-04-01
US Navy and Marine Corps pilots receive Night Vision Goggle (NVG) training as part of their overall training to maintain the superiority of our forces. This training must incorporate realistic targets, backgrounds, and representative atmospheric and weather effects they may encounter under operational conditions. One approach to pilot NVG training is to use the Night Imaging and Threat Evaluation Laboratory (NITE Lab) concept. The NITE Labs utilize a 10' by 10' static terrain model equipped with both natural and cultural lighting that is used to demonstrate various illumination conditions and visual phenomena which might be experienced when utilizing night vision goggles. With this technology, the military can safely, systematically, and reliably expose pilots to the large number of potentially dangerous environmental conditions that will be experienced in their NVG training flights. A previous SPIE presentation described our work for NAVAIR to add realistic atmospheric and weather effects to the NVG NITE Lab training facility using the NVG-WDT (Weather Depiction Technology) system (Colby et al.). NVG-WDT consists of a high-end multiprocessor server with weather simulation software, and several fixed and goggle-mounted Heads Up Displays (HUDs). Atmospheric and weather effects are simulated using state-of-the-art computer codes such as the WRF (Weather Research and Forecasting) model and the US Air Force Research Laboratory MODTRAN radiative transfer model. Imagery for a variety of natural and man-made obscurations (e.g. rain, clouds, snow, dust, smoke, chemical releases) is calculated and injected into the scene observed through the NVG via the fixed and goggle-mounted HUDs. This paper expands on the work described in the previous presentation and describes the 3D Virtual/Augmented Reality Scene - Weather - Atmosphere - Target Simulation part of the NVG-WDT. The 3D virtual reality software is a complete simulation system that generates realistic target-background scenes and displays the results in a DirectX environment. This paper describes our approach and shows a brief demonstration of the software capabilities. The work is supported by the SBIR program under contract N61339-06-C-0113.
Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method
Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu
2016-01-01
A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, a registration step that has the disadvantage of introducing additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is demonstrated on two scans sampling a masonry laboratory building before and after seismic testing that resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis. PMID:28029121
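The core of the baseline idea can be sketched in a few lines: distances between corresponding feature points are invariant to each scan's coordinate frame, so they can be compared across epochs without registration. Extraction of the corresponding points (brick centres, targets) is assumed already done.

```python
import numpy as np
from itertools import combinations

def baseline_changes(points_epoch0, points_epoch1):
    """Compare all baselines (inter-point distances) across two scans.

    points_epoch0, points_epoch1: (N, 3) corresponding feature points from
    the scans before and after the event, each in its own scanner frame.
    Returns {(i, j): length change}; nonzero values indicate deformation.
    """
    changes = {}
    for i, j in combinations(range(len(points_epoch0)), 2):
        d0 = np.linalg.norm(points_epoch0[i] - points_epoch0[j])
        d1 = np.linalg.norm(points_epoch1[i] - points_epoch1[j])
        changes[(i, j)] = d1 - d0
    return changes
```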
Study on general design of dual-DMD based infrared two-band scene simulation system
NASA Astrophysics Data System (ADS)
Pan, Yue; Qiao, Yang; Xu, Xi-ping
2017-02-01
Mid-wave infrared (MWIR) and long-wave infrared (LWIR) two-band scene simulation systems are test equipment for infrared two-band imaging seekers. Such a system must not only cover the working wavebands but also meet the essential requirement that the simulated infrared radiation characteristics correspond to the real scene. Previous single digital micromirror device (DMD) based infrared scene simulation systems did not take the large difference between target and background radiation into account and could not modulate the two-band light beams separately. Consequently, a single-DMD system cannot accurately reproduce the thermal scene model built by the upper computer, which limits its practical use. To solve this problem, we design a dual-DMD, dual-channel, common-aperture, compact infrared two-band scene simulation system. The operating principle of the system is introduced in detail, and the energy transfer process of the hardware-in-the-loop simulation experiment is analyzed. We also derive an equation for the signal-to-noise ratio of the infrared detector in the seeker, which guides the overall system design. The general design scheme of the system is given, including creation of the infrared scene model, overall control, optical-mechanical structure design, and image registration. By analyzing and comparing past designs, we discuss the arrangement of the optical engine framework in the system. Finally, we summarize the key techniques of the system according to the working principle and overall design.
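The abstract mentions a signal-to-noise equation for the seeker's infrared detector but does not state it; as an assumption, a standard detectivity-based form that such derivations typically build on is

```latex
\mathrm{SNR} = \frac{\Phi_s}{\mathrm{NEP}}
             = \frac{\Phi_s \, D^{*}}{\sqrt{A_d \,\Delta f}}
```

where $\Phi_s$ is the signal radiant power reaching the detector (set by the simulated scene radiance and the optics), $D^{*}$ is the detector's specific detectivity, $A_d$ its area, and $\Delta f$ the noise bandwidth.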
Foo, Cheryl P Z; Ahghari, Mahvareh; MacDonald, Russell D
2010-01-01
Traumatic injury is a leading cause of morbidity and mortality, but these can be minimized by timely transport to definitive care. Helicopter emergency medical services (HEMS) provide timely transport and can influence survival. However, accident analyses indicate that landing at an unsecured landing zone (LZ), particularly at night, increases the risk of aviation accidents. To ensure safety, some HEMS operations land only at designated, secured LZs. This study utilized geographic information systems (GIS) to compare the locations of scene call requests and secured LZs. The goal was to determine the optimal placement of new helipads as a strategy to improve access while mitigating the risk of aviation accidents. Call request data from a large air medical transport service were used to determine the geographic locations of all requests for scene responses in 2006. Request locations were compared with the locations of existing helipads, and straight-line distances between scene and helipad were determined using the GIS application. The application was then used to determine potential locations for new helipads. During the study period, there were 748 scene call requests and 269 available helipads. There were 476 (52.4%) requests at least 10 kilometers from a helipad and 356 (36.6%) requests at least 15 kilometers from a helipad. One particular region, Southwestern Ontario, was identified as having the highest number of requests >15 kilometers from the closest helipad. GIS can be used to determine potential locations for new helipad construction using historical call request data. This evidence-based approach can improve HEMS access while mitigating operational risk.
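A minimal sketch of the distance computation underlying such a GIS analysis; the haversine great-circle formula stands in for whatever straight-line metric the GIS application used, and coordinates are assumed to be (latitude, longitude) pairs in degrees.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi, dlam = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def distance_to_nearest_helipad(request, helipads):
    """Distance from one scene call request to its closest helipad."""
    return min(haversine_km(*request, *h) for h in helipads)
```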
Figure-ground segmentation can occur without attention.
Kimchi, Ruth; Peterson, Mary A
2008-07-01
The question of whether or not figure-ground segmentation can occur without attention is unresolved. Early theorists assumed it can, but the evidence is scant and open to alternative interpretations. Recent research indicating that attention can influence figure-ground segmentation raises the question anew. We examined this issue by asking participants to perform a demanding change-detection task on a small matrix presented on a task-irrelevant scene of alternating regions organized into figures and grounds by convexity. Independently of any change in the matrix, the figure-ground organization of the scene changed or remained the same. Changes in scene organization produced congruency effects on target-change judgments, even though, when probed with surprise questions, participants could report neither the figure-ground status of the region on which the matrix appeared nor any change in that status. When attending to the scene, participants reported figure-ground status and changes to it highly accurately. These results clearly demonstrate that figure-ground segmentation can occur without focal attention.
A Dual-Process Account of Auditory Change Detection
ERIC Educational Resources Information Center
McAnally, Ken I.; Martin, Russell L.; Eramudugolla, Ranmalee; Stuart, Geoffrey W.; Irvine, Dexter R. F.; Mattingley, Jason B.
2010-01-01
Listeners can be "deaf" to a substantial change in a scene comprising multiple auditory objects unless their attention has been directed to the changed object. It is unclear whether auditory change detection relies on identification of the objects in pre- and post-change scenes. We compared the rates at which listeners correctly identify changed…
The Development of Change Detection
ERIC Educational Resources Information Center
Shore, David I.; Burack, Jacob A.; Miller, Danny; Joseph, Shari; Enns, James T.
2006-01-01
Changes to a scene often go unnoticed if the objects of the change are unattended, making change detection an index of where attention is focused during scene perception. We measured change detection in school-age children and young adults by repeatedly alternating two versions of an image. To provide an age-fair assessment we used a bimanual…
The utility of polarimetry within passive military imaging systems
NASA Astrophysics Data System (ADS)
Hickman, Duncan L.; Smith, Moira I.; Kim, Kyung Su; Choi, Hyun-Jin
2017-10-01
An ongoing challenge for many military imaging systems is the detection and classification of weak target signatures in a cluttered environment. In such cases, the use of image contrast and relative target motion alone does not always provide a sufficient level of target discrimination to give operational confidence and it is therefore necessary to consider the use of other discriminatory scene information. Polarisation is one such source of information and this paper reports on an extensive series of polarimetric trials undertaken across the visible, NIR, SWIR, MWIR and LWIR spectral bands. Using this data, the benefits and limitations of polarisation discrimination are reviewed in the context of practical military scenarios. It is shown that polarisation signatures vary with viewing geometry and atmospheric conditions. This would lead to an unpredictable performance level if the sensor discrimination was based solely on polarisation. However, by carefully combining polarisation with other scene information, useful operational benefits can be obtained and this is illustrated through a consideration of different data fusion approaches.
NASA Astrophysics Data System (ADS)
Menze, Moritz; Heipke, Christian; Geiger, Andreas
2018-06-01
This work investigates the estimation of dense three-dimensional motion fields, commonly referred to as scene flow. While great progress has been made in recent years, large displacements and adverse imaging conditions as observed in natural outdoor environments are still very challenging for current approaches to reconstruction and motion estimation. In this paper, we propose a unified random field model which reasons jointly about 3D scene flow as well as the location, shape and motion of vehicles in the observed scene. We formulate the problem as the task of decomposing the scene into a small number of rigidly moving objects sharing the same motion parameters. Thus, our formulation effectively introduces long-range spatial dependencies which commonly employed local rigidity priors are lacking. Our inference algorithm then estimates the association of image segments and object hypotheses together with their three-dimensional shape and motion. We demonstrate the potential of the proposed approach by introducing a novel challenging scene flow benchmark which allows for a thorough comparison of the proposed scene flow approach with respect to various baseline models. In contrast to previous benchmarks, our evaluation is the first to provide stereo and optical flow ground truth for dynamic real-world urban scenes at large scale. Our experiments reveal that rigid motion segmentation can be utilized as an effective regularizer for the scene flow problem, improving upon existing two-frame scene flow methods. At the same time, our method yields plausible object segmentations without requiring an explicitly trained recognition model for a specific object class.
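The paper's central modeling assumption, that all points on an object share one set of rigid motion parameters, can be stated in a few lines; R and t here are hypothetical per-object parameters, and the inference of the segmentation itself is not shown.

```python
import numpy as np

def rigid_scene_flow(points, R, t):
    """3D scene flow induced by a rigidly moving object.

    points: (N, 3) points on the object at the first frame.
    R: (3, 3) rotation; t: (3,) translation shared by the whole object.
    Returns (N, 3) per-point flow vectors.
    """
    return points @ R.T + t - points
```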
Real-time visual simulation of APT system based on RTW and Vega
NASA Astrophysics Data System (ADS)
Xiong, Shuai; Fu, Chengyu; Tang, Tao
2012-10-01
The Matlab/Simulink simulation model of an APT (acquisition, pointing and tracking) system is analyzed and established. The model's C code, which can be used for real-time simulation, is then generated by RTW (Real-Time Workshop). Practical experiments show that the simulation result of running the C code is the same as that of running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system. With it and OpenGL, the APT scene simulation platform is developed and used to render and display the virtual scenes of the APT system. To add necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders running on a programmable GPU are used. By calling the C code, the scene simulation platform can adjust the system parameters on-line and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform has high efficiency, low cost and good simulation results.
Regional information guidance system based on hypermedia concept
NASA Astrophysics Data System (ADS)
Matoba, Hiroshi; Hara, Yoshinori; Kasahara, Yutako
1990-08-01
A regional information guidance system has been developed on an image workstation. Two main features of this system are its hypermedia data structure and a friendly visual interface realized by a full-color frame memory system. Because the hypermedia data structure manages regional information such as maps, pictures and explanations of points of interest, users can retrieve this information item by item, following links as their interests change. For example, users can retrieve the explanation of a picture through the link between pictures and text explanations. Users can also traverse from one document to another by using keywords as cross-reference indices. The second feature is the use of a full-color, high-resolution, wide-space frame memory for visual interface design. This frame memory system enables real-time operation on image data and natural scene representation. The system also provides a halftone rendering function that enables fade-in/out presentations. This fade-in/out function, used when displaying and erasing menus and image data, makes the visual interface easy on the eyes. The system we have developed is a typical example of a multimedia application. We expect the image workstation to play an important role as a platform for multimedia applications.
Anticipation in Real-World Scenes: The Role of Visual Context and Visual Memory.
Coco, Moreno I; Keller, Frank; Malcolm, George L
2016-11-01
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object-based visual indices. Copyright © 2015 Cognitive Science Society, Inc.
NASA Astrophysics Data System (ADS)
Watanabe, Eriko; Ikeda, Kanami; Kodate, Kashiko
2012-10-01
Using a holographic disc memory on which a huge amount of data can be stored, we constructed an ultra-high-speed, all-optical correlation system. In this method, however, multiplex recording is restricted to "one page" per "one spot." In addition, signal information must be normalized to data of the same size, even if the object data size is smaller. Therefore, this system is difficult to apply to part of an object data scene (i.e., partial scene searching and template matching) while maintaining high accessibility and programmability. In this paper, we develop a holographic correlation system using a time-division recording method that increases the number of multiplex recordings on the same spot. Assuming that a four-channel detector is utilized, 15 parallel correlations are achieved by the time-division recording method. Preliminary correlation experiments with the holographic optical disc setup yield high correlation peaks at a rotational speed of 300 rpm. We also describe the combination of an optical correlation system for copyright content management that searches the Internet and detects illegal contents on video sharing websites.
Werner, Annette
2014-11-01
Illumination in natural scenes changes at multiple temporal and spatial scales: slow changes in global illumination occur in the course of a day, and we encounter fast and localised illumination changes when visually exploring the non-uniform light field of three-dimensional scenes; in addition, very long-term chromatic variations may come from the environment, such as seasonal changes. In this context, I consider the temporal and spatial properties of chromatic adaptation and discuss their functional significance for colour constancy in three-dimensional scenes. A process of fast spatial tuning in chromatic adaptation is proposed as a possible sensory mechanism for linking colour constancy to the spatial structure of a scene. The observed middle-wavelength selectivity of this process is particularly suitable for adaptation to the mean chromaticity and the compensation of interreflections in natural scenes. Two types of sensory colour constancy are distinguished, based on the functional differences of their temporal and spatial scales: a slow type, operating at a global scale for the compensation of the ambient illumination; and a fast colour constancy, which is locally restricted and well suited to compensate region-specific variations in the light field of three-dimensional scenes. Copyright © 2014 Elsevier B.V. All rights reserved.
User-friendly InSAR Data Products: Fast and Simple Timeseries (FAST) Processing
NASA Astrophysics Data System (ADS)
Zebker, H. A.
2017-12-01
Interferometric Synthetic Aperture Radar (InSAR) methods provide high-resolution maps of surface deformation applicable to many scientific, engineering and management studies. Despite its utility, the specialized skills and computer resources required for InSAR analysis remain barriers to truly widespread use of the technique. Reduction of radar scenes to maps of temporal deformation evolution requires not only detailed metadata describing the exact radar and surface acquisition geometries, but also a software package that can combine these for the specific scenes of interest. Furthermore, the range-Doppler radar coordinate system itself is confusing, so that many users find it hard to incorporate even useful products into their customary analyses. And finally, the sheer data volume needed to represent interferogram time series makes InSAR analysis challenging for many analysis systems. We show here that it is possible to deliver radar data products to users that address all of these difficulties, so that the data acquired by large, modern satellite systems are ready to use in more natural coordinates, without requiring further processing, and in as small a volume as possible.
Transient cardio-respiratory responses to visually induced tilt illusions
NASA Technical Reports Server (NTRS)
Wood, S. J.; Ramsdell, C. D.; Mullen, T. J.; Oman, C. M.; Harm, D. L.; Paloski, W. H.
2000-01-01
Although the orthostatic cardio-respiratory response is primarily mediated by the baroreflex, studies have shown that vestibular cues also contribute in both humans and animals. We have demonstrated a visually mediated response to illusory tilt in some human subjects. Blood pressure, heart and respiration rate, and lung volume were monitored in 16 supine human subjects during two types of visual stimulation, and compared with responses to real passive whole body tilt from supine to head 80 degrees upright. Visual tilt stimuli consisted of either a static scene from an overhead mirror or constant velocity scene motion along different body axes generated by an ultra-wide dome projection system. Visual vertical cues were initially aligned with the longitudinal body axis. Subjective tilt and self-motion were reported verbally. Although significant changes in cardio-respiratory parameters to illusory tilts could not be demonstrated for the entire group, several subjects showed significant transient decreases in mean blood pressure resembling their initial response to passive head-up tilt. Changes in pulse pressure and a slight elevation in heart rate were noted. These transient responses are consistent with the hypothesis that visual-vestibular input contributes to the initial cardiovascular adjustment to a change in posture in humans. On average the static scene elicited perceived tilt without rotation. Dome scene pitch and yaw elicited perceived tilt and rotation, and dome roll motion elicited perceived rotation without tilt. A significant correlation between the magnitude of physiological and subjective reports could not be demonstrated.
An unusual pedestrian road trauma: from forensic pathology to forensic veterinary medicine.
Aquila, Isabella; Di Nunzio, Ciro; Paciello, Orlando; Britti, Domenico; Pepe, Francesca; De Luca, Ester; Ricci, Pietrantonio
2014-01-01
Traffic accidents have increased in the last decade, pedestrians being the most affected group. At autopsy, it is evident that the most common cause of pedestrian death is central nervous system injury, followed by skull base fractures, internal bleeding, lower limb haemorrhage, skull vault fractures, cervical spinal cord injury and airway compromise. The attribution of accident responsibility can be realised through reconstruction of road accident dynamics, investigation of the scene, survey of the vehicle involved and examination of the victim(s). A case study concerning a car accident where both humans and pets were involved is reported here. Investigation and reconstruction of the crime scene were conducted by a team consisting of forensic pathologists and forensic veterinarians. At the scene investigation, the pedestrian and his dog were recovered on the side of the road. An autopsy and a necropsy were conducted on the man and the dog, respectively. In addition, a complete inspection of the sports utility vehicle (SUV) implicated in the road accident was conducted. The results of the autopsy and necropsy were compared and the information was used to reconstruct the collision. This unusual case was solved through the collaboration between forensic pathology and veterinary forensic medicine, emphasising the importance of this kind of co-operation to solve a crime scene concerning both humans and animals. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Texture classification using autoregressive filtering
NASA Technical Reports Server (NTRS)
Lawton, W. M.; Lee, M.
1984-01-01
A general theory of image texture models is proposed and its applicability to the problem of scene segmentation using texture classification is discussed. An algorithm, based on half-plane autoregressive filtering, which optimally utilizes second-order statistics to discriminate between texture classes represented by arbitrary wide-sense stationary random fields, is described. Empirical results of applying this algorithm to natural and synthesized scenes are presented and future research is outlined.
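A minimal sketch of half-plane autoregressive texture features: fit a causal (non-symmetric half-plane) AR model by least squares and use its coefficients plus residual variance as the class signature. The four-pixel support and the plain least-squares fit are illustrative simplifications, not the paper's optimal second-order-statistics formulation.

```python
import numpy as np

def nshp_ar_features(patch):
    """Half-plane AR texture features from a 2D grayscale patch."""
    offsets = ((0, -1), (-1, -1), (-1, 0), (-1, 1))   # causal neighbors
    h, w = patch.shape
    rows, targets = [], []
    for y in range(1, h):
        for x in range(1, w - 1):
            rows.append([patch[y + dy, x + dx] for dy, dx in offsets])
            targets.append(patch[y, x])
    A, b = np.asarray(rows, float), np.asarray(targets, float)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)      # AR coefficients
    residual_var = np.mean((b - A @ coef) ** 2)       # prediction error power
    return np.append(coef, residual_var)
```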
Correlated Topic Vector for Scene Classification.
Wei, Pengxu; Qin, Fei; Wan, Fang; Zhu, Yi; Jiao, Jianbin; Ye, Qixiang
2017-07-01
Scene images usually involve semantic correlations, particularly when considering large-scale image data sets. This paper proposes a novel generative image representation, the correlated topic vector, to model such semantic correlations. Derived from the correlated topic model, the correlated topic vector naturally exploits the correlations among topics, which are seldom considered in conventional feature encodings, e.g., the Fisher vector, but do exist in scene images. It is expected that the involvement of correlations can increase the discriminative capability of the learned generative model and consequently improve the recognition accuracy. Incorporated with the Fisher kernel method, the correlated topic vector inherits the advantages of the Fisher vector. The contributions of visual words to the topics have been further employed within the Fisher kernel framework to indicate the differences among scenes. Combined with deep convolutional neural network (CNN) features and a Gibbs sampling solution, the correlated topic vector shows great potential when processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that the correlated topic vector significantly improves on deep CNN features and outperforms existing Fisher kernel-based features.
Integration of heterogeneous features for remote sensing scene classification
NASA Astrophysics Data System (ADS)
Wang, Xin; Xiong, Xingnan; Ning, Chen; Shi, Aiye; Lv, Guofang
2018-01-01
Scene classification is one of the most important issues in remote sensing (RS) image processing. We find that features from different channels (shape, spectral, texture, etc.), levels (low-level and middle-level), or perspectives (local and global) can provide complementary properties for RS images, and we therefore propose a heterogeneous feature framework to extract and integrate heterogeneous features of different types for RS scene classification. The proposed method is composed of three modules: (1) heterogeneous feature extraction, where three heterogeneous feature types, called DS-SURF-LLC, mean-Std-LLC, and MS-CLBP, are calculated; (2) heterogeneous feature fusion, where multiple kernel learning (MKL) is utilized to integrate the heterogeneous features; and (3) an MKL support vector machine classifier for RS scene classification. The proposed method is extensively evaluated on three challenging benchmark datasets (a 6-class dataset, a 12-class dataset, and a 21-class dataset), and the experimental results show that the proposed method leads to good classification performance. It produces informative features that describe the RS image scenes well. Moreover, the integration of heterogeneous features outperforms some state-of-the-art features on RS scene classification tasks.
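As a simplified stand-in for the MKL fusion step (fixed kernel weights instead of learned ones, and an RBF kernel with a hypothetical mean-distance bandwidth), one can combine per-feature-type kernels and feed the result to a precomputed-kernel SVM:

```python
import numpy as np
from sklearn.svm import SVC

def combined_kernel(feature_sets, weights):
    """Weighted sum of one RBF kernel per heterogeneous feature type.

    feature_sets: list of (N, D_i) arrays (e.g. shape, spectral, texture);
    weights: one nonnegative weight per feature type (fixed, not learned).
    """
    K = 0.0
    for X, w in zip(feature_sets, weights):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = K + w * np.exp(-d2 / (2.0 * d2.mean() + 1e-12))
    return K

# Training-time usage (prediction would need the matching cross-kernel):
# K = combined_kernel([X_shape, X_spectral, X_texture], [0.4, 0.3, 0.3])
# clf = SVC(kernel="precomputed").fit(K, labels)
```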
The forensic holodeck: an immersive display for forensic crime scene reconstructions.
Ebert, Lars C; Nguyen, Tuan T; Breitbeck, Robert; Braun, Marcel; Thali, Michael J; Ross, Steffen
2014-12-01
In forensic investigations, crime scene reconstructions are created based on a variety of three-dimensional image modalities. Although the data gathered are three-dimensional, their presentation on computer screens and paper is two-dimensional, which incurs a loss of information. By applying immersive virtual reality (VR) techniques, we propose a system that allows a crime scene to be viewed as if the investigator were present at the scene. We used a low-cost VR headset originally developed for computer gaming in our system. The headset offers a large viewing volume and tracks the user's head orientation in real-time, and an optical tracker is used for positional information. In addition, we created a crime scene reconstruction to demonstrate the system. In this article, we present a low-cost system that allows immersive, three-dimensional and interactive visualization of forensic incident scene reconstructions.
The Nature of Change Detection and Online Representations of Scenes
ERIC Educational Resources Information Center
Ryan, Jennifer D.; Cohen, Neal J.
2004-01-01
This article provides evidence for implicit change detection and for the contribution of multiple memory sources to online representations. Multiple eye-movement measures distinguished original from changed scenes, even when college students had no conscious awareness for the change. Patients with amnesia showed a systematic deficit on 1 class of…
Does scene context always facilitate retrieval of visual object representations?
Nakashima, Ryoichi; Yokosawa, Kazuhiko
2011-04-01
An object-to-scene binding hypothesis maintains that visual object representations are stored as part of a larger scene representation or scene context, and that scene context facilitates retrieval of object representations (see, e.g., Hollingworth, Journal of Experimental Psychology: Learning, Memory and Cognition, 32, 58-69, 2006). Support for this hypothesis comes from data using an intentional memory task. In the present study, we examined whether scene context always facilitates retrieval of visual object representations. In two experiments, we investigated whether the scene context facilitates retrieval of object representations, using a new paradigm in which a memory task is appended to a repeated-flicker change detection task. Results indicated that in normal scene viewing, in which many simultaneous objects appear, scene context facilitation of the retrieval of object representations-henceforth termed object-to-scene binding-occurred only when the observer was required to retain much information for a task (i.e., an intentional memory task).
An intelligent crowdsourcing system for forensic analysis of surveillance video
NASA Astrophysics Data System (ADS)
Tahboub, Khalid; Gadgil, Neeraj; Ribera, Javier; Delgado, Blanca; Delp, Edward J.
2015-03-01
Video surveillance systems are of great value for public safety. With an exponential increase in the number of cameras, videos obtained from surveillance systems are often archived for forensic purposes. Many automatic methods have been proposed for video analytics such as anomaly detection and human activity recognition. However, such methods face significant challenges due to object occlusions, shadows and scene illumination changes. In recent years, crowdsourcing has become an effective tool that utilizes human intelligence to perform tasks that are challenging for machines. In this paper, we present an intelligent crowdsourcing system for forensic analysis of surveillance video, including video recorded as part of search and rescue missions and large-scale investigation tasks. We describe a method to enhance crowdsourcing by incorporating human detection, re-identification and tracking. At the core of our system, we use a hierarchical pyramid model to distinguish crowd members based on their ability, experience and performance record. Our proposed system operates in an autonomous fashion and produces a final output of the crowdsourcing analysis consisting of a set of video segments detailing the events of interest as one storyline.
Effect of Display Color on Pilot Performance and Describing Functions
NASA Technical Reports Server (NTRS)
Chase, Wendell D.
1997-01-01
A study has been conducted with the full-spectrum, calligraphic, computer-generated display system to determine the effect of the chromatic content of the visual display upon pilot performance during the landing approach maneuver. This study utilizes a new digital chromatic display system, which has previously been shown to improve the perceived fidelity of out-the-window display scenes, and presents the results of an experiment designed to determine the effects of display color content by the measurement of both vertical approach performance and pilot-describing functions. This method was selected to more fully explore the effects of visual color cues used by the pilot. Two types of landing approaches were made, dynamic and frozen range, with either a landing approach scene or a perspective array display. The landing approach scene was presented with either red runway lights and blue taxiway lights or with the colors reversed, and the perspective array with red lights, blue lights, or red and blue lights combined. The vertical performance measures obtained in this experiment indicated that the pilots performed best with the blue and red/blue displays and worst with the red displays. The describing-function system analysis showed more variation with the red displays. The crossover frequencies were lowest with the red displays and highest with the combined red/blue displays, which provided the best overall tracking performance. Describing-function performance measures, vertical performance measures, and pilot opinion support the hypothesis that specific colors in displays can influence the pilots' control characteristics during the final approach.
Somali Perspectives on Physical Activity: Photovoice to Address Barriers and Resources in San Diego
Murray, Kate; Mohamed, Amina Sheik; Dawson, Darius B.; Syme, Maggie; Abdi, Sahra; Barnack-Tavlaris, Jessica
2015-01-01
Background Though many immigrants enter the U.S. with a healthy body weight, this health advantage disappears the longer they reside in the U.S. To better understand the complexities of obesity change within a cultural framework, a community-based participatory research (CBPR) approach, Photovoice, was utilized focusing on physical activity among Muslim Somali women. Objectives The CBPR partnership was formed to identify barriers and resources to engaging in physical activity with goals of advocacy and program development. Methods Muslim Somali women (n = 8) were recruited to participate, trained and provided cameras, and engaged in group discussions about the scenes they photographed. Results Participants identified several barriers, including safety concerns, minimal culturally appropriate resources, and financial constraints. Strengths included public resources and a community support system. The CBPR process identified opportunities and challenges to collaboration and dissemination processes. Conclusions The findings laid the framework for subsequent program development and community engagement. PMID:25981428
Beanland, Vanessa; Filtness, Ashleigh J; Jeans, Rhiannon
2017-03-01
The ability to detect changes is crucial for safe driving. Previous research has demonstrated that drivers often experience change blindness, which refers to failed or delayed change detection. The current study explored how susceptibility to change blindness varies as a function of the driving environment, type of object changed, and safety relevance of the change. Twenty-six fully licenced drivers completed a driving-related change detection task. Changes occurred to seven target objects (road signs, cars, motorcycles, traffic lights, pedestrians, animals, or roadside trees) across two environments (urban or rural). The contextual safety relevance of the change was systematically manipulated within each object category, ranging from high safety relevance (i.e., requiring a response by the driver) to low safety relevance (i.e., requiring no response). When viewing rural scenes, compared with urban scenes, participants were significantly faster and more accurate at detecting changes, and were less susceptible to "looked-but-failed-to-see" errors. Interestingly, safety relevance of the change differentially affected performance in urban and rural environments. In urban scenes, participants were more efficient at detecting changes with higher safety relevance, whereas in rural scenes safety relevance had marginal to no effect on change detection. Finally, even after accounting for safety relevance, change blindness varied significantly between target types. Overall, the results suggest that drivers are less susceptible to change blindness for objects that are likely to change or move (e.g., traffic lights vs. road signs), and for moving objects that pose greater danger (e.g., wild animals vs. pedestrians).
Kiat, John E; Dodd, Michael D; Belli, Robert F; Cheadle, Jacob E
2018-05-01
Neuroimaging-based investigations of change blindness, a phenomenon in which seemingly obvious changes in visual scenes fail to be detected, have significantly advanced our understanding of visual awareness. The vast majority of prior investigations, however, utilize paradigms involving visual disruptions (e.g., intervening blank screens, saccadic movements, "mudsplashes"), making it difficult to cleanly isolate neural responses toward visual changes. To address this issue, in the present study high-density EEG data (256 channels) were collected from 25 participants using a paradigm in which visual changes were progressively introduced into detailed real-world scenes without the use of visual disruption. Oscillatory activity associated with undetected changes was contrasted with activity linked to their absence using standardized low-resolution brain electromagnetic tomography (sLORETA). Although an insufficient number of detections were present to allow for analysis of actual change detection, increased beta-2 activity in the right inferior parietal lobule (rIPL), a region repeatedly associated with change blindness in disruption paradigms, followed by increased theta activity in the right superior temporal gyrus (rSTG), was noted in undetected visual change responses relative to the absence of change. We propose the rIPL beta-2 activity to be associated with orienting attention toward visual changes, with the subsequent rise in rSTG theta activity being potentially linked with updating preconscious perceptual memory representations. NEW & NOTEWORTHY This study represents the first neuroimaging-based investigation of gradual change blindness, a visual phenomenon that has significant potential to shed light on the processes underlying visual detection and conscious perception. The use of gradual change materials is reflective of real-world visual phenomena and allows for cleaner isolation of signals associated with the neural registration of change relative to the use of abrupt change transients.
Mapping and monitoring renewable resources with space SAR
NASA Technical Reports Server (NTRS)
Ulaby, F. T.; Brisco, B.; Dobson, M. C.; Moezzi, S.
1983-01-01
The SEASAT-A SAR and SIR-A imagery was examined to evaluate the quality and type of information that can be extracted and used to monitor renewable resources on Earth. Two tasks were carried out: (1) a land cover classification study which utilized two sets of imagery acquired by the SEASAT-A SAR, one set by SIR-A, and one LANDSAT set (4 bands); and (2) a change detection study to examine differences between pairs of SEASAT-A SAR images and relate them to hydrologic and/or agronomic variations in the scene.
NASA Astrophysics Data System (ADS)
Cudennec, Christophe
2016-04-01
The Anthropocene concept encapsulates the planetary-scale changes resulting from accelerating socio-ecological transformations, beyond the stratigraphic definition currently under debate. The emergence of multi-scale and proteiform complexity requires interdisciplinary and systems approaches. Yet, to reduce the cognitive challenge of tackling this complexity, the global Anthropocene syndrome must now be studied from various topical points of view, and grounded at regional and local levels. A systems approach should make it possible to identify AnthropoScenes, i.e. settings where a socio-ecological transformation subsystem is clearly coherent within boundaries and displays explicit relationships with neighbouring/remote scenes and within a nesting architecture. Hydrology is a key topical point of view to be explored, as it is important in many aspects of the Anthropocene, whether with water itself being a resource, hazard or transport force, or through the network, connectivity, interface, teleconnection, emergence and scaling issues it determines. We will schematically exemplify these aspects with three contrasting hydrological AnthropoScenes in Tunisia, France and Iceland, and reframe therein concepts of the hydrological change debate. Bai X., van der Leeuw S., O'Brien K., Berkhout F., Biermann F., Brondizio E., Cudennec C., Dearing J., Duraiappah A., Glaser M., Revkin A., Steffen W., Syvitski J., 2016. Plausible and desirable futures in the Anthropocene: A new research agenda. Global Environmental Change, in press, http://dx.doi.org/10.1016/j.gloenvcha.2015.09.017 Brondizio E., O'Brien K., Bai X., Biermann F., Steffen W., Berkhout F., Cudennec C., Lemos M.C., Wolfe A., Palma-Oliveira J., Chen A. C-T. Re-conceptualizing the Anthropocene: A call for collaboration. Global Environmental Change, in review. Montanari A., Young G., Savenije H., Hughes D., Wagener T., Ren L., Koutsoyiannis D., Cudennec C., Grimaldi S., Blöschl G., Sivapalan M., Beven K., Gupta H., Arheimer B., Huang Y., Schumann A., Post D., Taniguchi M., Boegh E., Hubert P., Harman C., Thompson S., Rogger M., Hipsey M., Toth E., Viglione A., Di Baldassarre G., Schaefli B., McMillan H., Schymanski S., Characklis G., Yu B., Pang Z., Belyaev V., 2013. "Panta Rhei - Everything Flows": Change in hydrology and society - The IAHS Scientific Decade 2013-2022. Hydrological Sciences Journal, 58, 6, 1256-1275, DOI: 10.1080/02626667.2013.809088
Eye Movements and Visual Memory for Scenes
2005-01-01
Scene memory research has demonstrated that the memory representation of a semantically inconsistent object in a scene is more detailed and/or complete... memory during scene viewing, then changes to semantically inconsistent objects (which should be represented more completely) should be detected more... semantic description. Due to the surprise nature of the visual memory test, any learning that occurred during the search portion of the experiment was
NASA Astrophysics Data System (ADS)
Buford, James A., Jr.; Cosby, David; Bunfield, Dennis H.; Mayhall, Anthony J.; Trimble, Darian E.
2007-04-01
AMRDEC has successfully tested hardware and software for Real-Time Scene Generation for IR and SAL Sensors on COTS PC-based hardware and video cards. AMRDEC personnel worked with nVidia and Concurrent Computer Corporation to develop a Scene Generation system capable of frame rates of at least 120 Hz while frame-locked to an external source (such as a missile seeker) with no dropped frames. Latency measurements and image validation were performed using COTS and in-house developed hardware and software. Software for the Scene Generation system was developed using OpenSceneGraph.
Basic level scene understanding: categories, attributes and structures
Xiao, Jianxiong; Hays, James; Russell, Bryan C.; Patterson, Genevieve; Ehinger, Krista A.; Torralba, Antonio; Oliva, Aude
2013-01-01
A longstanding goal of computer vision is to build a system that can automatically understand a 3D scene from a single image. This requires extracting semantic concepts and 3D information from 2D images which can depict an enormous variety of environments that comprise our visual world. This paper summarizes our recent efforts toward these goals. First, we describe the richly annotated SUN database which is a collection of annotated images spanning 908 different scene categories with object, attribute, and geometric labels for many scenes. This database allows us to systematically study the space of scenes and to establish a benchmark for scene and object recognition. We augment the categorical SUN database with 102 scene attributes for every image and explore attribute recognition. Finally, we present an integrated system to extract the 3D structure of the scene and objects depicted in an image. PMID:24009590
A Novel Framework for Remote Sensing Image Scene Classification
NASA Astrophysics Data System (ADS)
Jiang, S.; Zhao, H.; Wu, W.; Tan, Q.
2018-04-01
High resolution remote sensing (HRRS) image scene classification aims to label an image with a specific semantic category. HRRS images contain more details of the ground objects and their spatial distribution patterns than low spatial resolution images. Scene classification can bridge the gap between low-level features and high-level semantics. It can be applied in urban planning, target detection and other fields. This paper proposes a novel framework for HRRS image scene classification. This framework combines a convolutional neural network (CNN) and XGBoost, utilizing the CNN as a feature extractor and XGBoost as a classifier. The framework is then evaluated on two different HRRS image datasets: the UC-Merced dataset and the NWPU-RESISC45 dataset. Our framework achieved satisfying accuracies on the two datasets, 95.57% and 83.35% respectively. From the experimental results, our framework has been proven effective for remote sensing image classification. Furthermore, we believe this framework will be more practical for further HRRS scene classification, since it costs less time in the training stage.
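As a rough illustration of the pipeline this abstract describes, a minimal Python sketch follows. The abstract does not name the CNN backbone or any hyperparameters, so the VGG16 extractor, the pooling choice, the XGBoost settings, and the train_paths/train_labels dataset handles are all assumptions, not the authors' implementation.

import numpy as np
import xgboost as xgb
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image

# Pretrained CNN used as a fixed feature extractor (an assumed backbone).
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def cnn_features(paths):
    feats = []
    for p in paths:
        img = image.load_img(p, target_size=(224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), 0))
        feats.append(extractor.predict(x, verbose=0).ravel())
    return np.vstack(feats)

# train_paths and train_labels are hypothetical handles to a labeled scene
# dataset such as UC-Merced; they are not defined in the original text.
X_train = cnn_features(train_paths)
clf = xgb.XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
clf.fit(X_train, train_labels)

Freezing the CNN and training only the boosted-tree classifier is what keeps the training stage cheap, which is the efficiency advantage the abstract claims.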
Beyond scene gist: Objects guide search more than scene background.
Koehler, Kathryn; Eckstein, Miguel P
2017-06-01
Although the facilitation of visual search by contextual information is well established, there is little understanding of the independent contributions of different types of contextual cues in scenes. Here we manipulated 3 types of contextual information: object co-occurrence, multiple object configurations, and background category. We isolated the benefits of each contextual cue to target detectability, its impact on decision bias, confidence, and the guidance of eye movements. We find that object-based information guides eye movements and facilitates perceptual judgments more than scene background. The degree of guidance and facilitation of each contextual cue can be related to its inherent informativeness about the target spatial location as measured by human explicit judgments about likely target locations. Our results improve the understanding of the contributions of distinct contextual scene components to search and suggest that the brain's utilization of cues to guide eye movements is linked to the cue's informativeness about the target's location.
NASA Technical Reports Server (NTRS)
Fischer, E.
1979-01-01
The pilot's ability to accurately extract information from either one or both of two superimposed sources of information was determined. Static, aerial, color 35 mm slides of external runway environments and slides of corresponding static head-up display (HUD) symbology were used as the sources. A three-channel tachistoscope was utilized to show either the HUD alone, the scene alone, or the two slides superimposed. Cognitive performance of the pilots was assessed by determining the percentage of correct answers given to two HUD-related questions, two scene-related questions, or one HUD and one scene-related question.
ERIC Educational Resources Information Center
Fletcher-Watson, S.; Collis, J. M.; Findlay, J. M.; Leekam, S. R.
2009-01-01
Change blindness describes the surprising difficulty of detecting large changes in visual scenes when changes occur during a visual disruption. In order to study the developmental course of this phenomenon, a modified version of the flicker paradigm, based on Rensink, O'Regan & Clark (1997), was given to three groups of children aged 6-12 years…
Achieving ultra-high temperatures with a resistive emitter array
NASA Astrophysics Data System (ADS)
Danielson, Tom; Franks, Greg; Holmes, Nicholas; LaVeigne, Joe; Matis, Greg; McHugh, Steve; Norton, Dennis; Vengel, Tony; Lannon, John; Goodwin, Scott
2016-05-01
The rapid development of very-large format infrared detector arrays has challenged the IR scene projector community to also develop larger-format infrared emitter arrays to support the testing of systems incorporating these detectors. In addition to larger formats, many scene projector users require much higher simulated temperatures than can be generated with current technology in order to fully evaluate the performance of their systems and associated processing algorithms. Under the Ultra High Temperature (UHT) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>1024 x 1024) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During earlier phases of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1400 K. New emitter materials have subsequently been selected to produce pixels that achieve even higher apparent temperatures. Test results from pixels fabricated using the new material set will be presented and discussed. A 'scalable' Read In Integrated Circuit (RIIC) is also being developed under the same UHT program to drive the high temperature pixels. This RIIC will utilize through-silicon via (TSV) and Quilt Packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the yield limitations inherent in large-scale integrated circuits. Results of design verification testing of the completed RIIC will be presented and discussed.
Resolution-enhanced Mapping Spectrometer
NASA Technical Reports Server (NTRS)
Kumer, J. B.; Aubrun, J. N.; Rosenberg, W. J.; Roche, A. E.
1993-01-01
A familiar mapping spectrometer implementation utilizes two-dimensional detector arrays with spectral dispersion along one direction and spatial sampling along the other. Spectral images are formed by spatially scanning across the scene (i.e., push-broom scanning). For imaging grating and prism spectrometers, the slit is perpendicular to the spatial scan direction. For spectrometers utilizing linearly variable focal-plane-mounted filters, the spatial scan direction is perpendicular to the direction of spectral variation. These spectrometers share the common limitation that the number of spectral resolution elements is given by the number of pixels along the spectral (or dispersive) direction. Resolution enhancement by first passing the light input to the spectrometer through a scanned etalon or Michelson interferometer is discussed. Thus, while a detector element is scanned through a spatial resolution element of the scene, it is also temporally sampled. The analysis for all the pixels in the dispersive direction is addressed. Several specific examples are discussed. The alternate use of a Michelson interferometer for the same enhancement purpose is also discussed. Suitable for weight-constrained deep space missions, hardware systems were developed, including actuators, sensors, and electronics, such that low-resolution etalons with the performance required for implementation would weigh less than one pound.
Development of an ultra-high temperature infrared scene projector at Santa Barbara Infrared Inc.
NASA Astrophysics Data System (ADS)
Franks, Greg; Laveigne, Joe; Danielson, Tom; McHugh, Steve; Lannon, John; Goodwin, Scott
2015-05-01
The rapid development of very-large format infrared detector arrays has challenged the IR scene projector community to develop correspondingly larger-format infrared emitter arrays to support the testing needs of systems incorporating these detectors. As with most integrated circuits, fabrication yields for the read-in integrated circuit (RIIC) that drives the emitter pixel array are expected to drop dramatically with increasing size, making monolithic RIICs larger than the current 1024x1024 format impractical and unaffordable. Additionally, many scene projector users require much higher simulated temperatures than current technology can generate to fully evaluate the performance of their systems and associated processing algorithms. Under the Ultra High Temperature (UHT) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>1024x1024) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During an earlier phase of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1000K. New emitter materials have subsequently been selected to produce pixels that achieve even higher apparent temperatures. Test results from pixels fabricated using the new material set will be presented and discussed. Also in development under the same UHT program is a 'scalable' RIIC that will be used to drive the high temperature pixels. This RIIC will utilize through-silicon vias (TSVs) and quilt packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the inherent yield limitations of very-large-scale integrated circuits. Current status of the RIIC development effort will also be presented.
Automatic Pedestrian Crossing Detection and Impairment Analysis Based on Mobile Mapping System
NASA Astrophysics Data System (ADS)
Liu, X.; Zhang, Y.; Li, Q.
2017-09-01
Pedestrian crossing, as an important part of transportation infrastructure, serves to secure pedestrians' lives and possessions and keep traffic flow in order. As a prominent feature in the street scene, detection of pedestrian crossings contributes to 3D road marking reconstruction and diminishes the adverse impact of outliers in 3D street scene reconstruction. Since pedestrian crossings are subject to wear and tear from heavy traffic flow, it is imperative to monitor their condition. On this account, an approach to automatic pedestrian crossing detection using images from a vehicle-based Mobile Mapping System is put forward, and crossing defilement and impairment are analyzed in this paper. First, a pedestrian crossing classifier is trained with a low recall rate. Initial detections are then refined by utilizing projection filtering, contour information analysis, and monocular vision. Finally, a pedestrian crossing detection and analysis system with high recall rate, precision, and robustness is achieved. This system works for pedestrian crossing detection under different situations and light conditions. It can also recognize defiled and impaired crossings automatically, which facilitates monitoring and maintenance of traffic facilities, so as to reduce potential traffic safety problems and secure lives and property.
Shape-based human detection for threat assessment
NASA Astrophysics Data System (ADS)
Lee, Dah-Jye; Zhan, Pengcheng; Thomas, Aaron; Schoenberger, Robert B.
2004-07-01
Detection of intrusions for early threat assessment requires the capability of distinguishing whether the intrusion is a human, an animal, or another object. Most low-cost security systems use simple electronic motion detection sensors to monitor motion or the location of objects within the perimeter. Although cost effective, these systems suffer from high rates of false alarm, especially when monitoring open environments. Any moving object, including animals, can falsely trigger the security system. Other security systems that utilize video equipment require human interpretation of the scene in order to make real-time threat assessments. A shape-based human detection technique has been developed for accurate early threat assessment in open and remote environments. Potential threats are isolated from the static background scene using differential motion analysis, and contours of the intruding objects are extracted for shape analysis. Contour points are simplified by removing redundant points connecting short and straight line segments and preserving only those with shape significance. Contours are represented in tangent space for comparison with shapes stored in a database. A power cepstrum technique has been developed to search for the best-matched contour in the database and to distinguish a human from other objects at different viewing angles and distances.
Change deafness for real spatialized environmental scenes.
Gaston, Jeremy; Dickerson, Kelly; Hipp, Daniel; Gerhardstein, Peter
2017-01-01
The everyday auditory environment is complex and dynamic; often, multiple sounds co-occur and compete for a listener's cognitive resources. 'Change deafness', framed as the auditory analog to the well-documented phenomenon of 'change blindness', describes the finding that changes presented within complex environments are often missed. The present study examines a number of stimulus factors that may influence change deafness under real-world listening conditions. Specifically, an AX (same-different) discrimination task was used to examine the effects of both spatial separation over a loudspeaker array and the type of change (sound source additions and removals) on discrimination of changes embedded in complex backgrounds. Results using signal detection theory and accuracy analyses indicated that, under most conditions, errors were significantly reduced for spatially distributed relative to non-spatial scenes. A second goal of the present study was to evaluate a possible link between memory for scene contents and change discrimination. Memory was evaluated by presenting a cued recall test following each trial of the discrimination task. Results using signal detection theory and accuracy analyses indicated that recall ability was similar in terms of accuracy, but there were reductions in sensitivity compared to previous reports. Finally, the present study used a large and representative sample of outdoor, urban, and environmental sounds, presented in unique combinations of nearly 1000 trials per participant. This enabled the exploration of the relationship between change perception and the perceptual similarity between change targets and background scene sounds. These (post hoc) analyses suggest both a categorical and a stimulus-level relationship between scene similarity and the magnitude of change errors.
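The signal detection theory analyses mentioned above reduce, at their core, to computing sensitivity from hit and false-alarm rates in the AX task. A minimal sketch follows, with an assumed loglinear correction for extreme rates (the study's exact correction is not stated here), and hypothetical trial counts.

from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Loglinear correction guards against hit/false-alarm rates of 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one listener in one condition:
print(d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38))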
Trained Eyes: Experience Promotes Adaptive Gaze Control in Dynamic and Uncertain Visual Environments
Taya, Shuichiro; Windridge, David; Osman, Magda
2013-01-01
Current eye-tracking research suggests that our eyes make anticipatory movements to a location that is relevant for a forthcoming task. Moreover, there is evidence to suggest that with more practice anticipatory gaze control can improve. However, these findings are largely limited to situations where participants are actively engaged in a task. We ask: does experience modulate anticipative gaze control while passively observing a visual scene? To tackle this we tested people with varying degrees of experience of tennis, in order to uncover potential associations between experience and eye movement behaviour while they watched tennis videos. The number, size, and accuracy of saccades (rapid eye-movements) made around ‘events,’ which is critical for the scene context (i.e. hit and bounce) were analysed. Overall, we found that experience improved anticipatory eye-movements while watching tennis clips. In general, those with extensive experience showed greater accuracy of saccades to upcoming event locations; this was particularly prevalent for events in the scene that carried high uncertainty (i.e. ball bounces). The results indicate that, even when passively observing, our gaze control system utilizes prior relevant knowledge in order to anticipate upcoming uncertain event locations. PMID:23951147
Radiometrically accurate scene-based nonuniformity correction for array sensors.
Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott
2003-10-01
A novel radiometrically accurate scene-based nonuniformity correction (NUC) algorithm is described. The technique combines absolute calibration with a recently reported algebraic scene-based NUC algorithm. The technique is based on the following principle: First, detectors that are along the perimeter of the focal-plane array are absolutely calibrated; then the calibration is transported to the remaining uncalibrated interior detectors through the application of the algebraic scene-based algorithm, which utilizes pairs of image frames exhibiting arbitrary global motion. The key advantage of this technique is that it can obtain radiometric accuracy during NUC without disrupting camera operation. Accurate estimates of the bias nonuniformity can be achieved with relatively few frames, which can be fewer than ten frame pairs. Advantages of this technique are discussed, and a thorough performance analysis is presented with use of simulated and real infrared imagery.
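To make the transport principle concrete, here is a deliberately simplified sketch: assume a pure one-pixel horizontal shift between a frame pair, so detector (i, j+1) in the second frame views the same scene sample as detector (i, j) in the first, and bias estimates can be propagated inward from an absolutely calibrated perimeter column. The real algorithm handles arbitrary global motion and averages over many frame pairs; none of the code below is the authors' implementation.

import numpy as np

def propagate_bias(frame1, frame2, bias_col0):
    """Propagate detector bias column by column, assuming frame2 is
    frame1 shifted one pixel right and column 0 is calibrated."""
    rows, cols = frame1.shape
    bias = np.zeros((rows, cols))
    bias[:, 0] = bias_col0                    # absolutely calibrated perimeter
    for j in range(cols - 1):
        # Neighbouring detectors observed the same scene radiance, so
        # their reading difference estimates their bias difference.
        delta = frame2[:, j + 1] - frame1[:, j]
        bias[:, j + 1] = bias[:, j] + delta
    return bias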
Good initialization model with constrained body structure for scene text recognition
NASA Astrophysics Data System (ADS)
Zhu, Anna; Wang, Guoyou; Dong, Yangbo
2016-09-01
Scene text recognition has gained significant attention in the computer vision community. Character detection and recognition are the premise of text recognition and affect the overall performance to a large extent. We propose a good initialization model for scene character recognition from cropped text regions. We use constrained character body structures with deformable part-based models to detect and recognize characters against various backgrounds. The character body structures are obtained by an unsupervised discriminative clustering approach followed by a statistical model and a self-built minimum spanning tree model. Our method utilizes part appearance and location information, and combines character detection and recognition in the cropped text region. The evaluation results on benchmark datasets demonstrate that our proposed scheme outperforms state-of-the-art methods on both scene character recognition and word recognition.
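One ingredient of the body-structure model above, the minimum spanning tree over character parts, can be sketched with standard tools; the part locations and the Euclidean distance measure below are hypothetical stand-ins for whatever the clustering stage actually produces.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

part_centers = np.array([[10, 4], [12, 20], [30, 8], [33, 22]])  # hypothetical parts
dist = squareform(pdist(part_centers))       # pairwise Euclidean distances
mst = minimum_spanning_tree(dist).toarray()  # tree structure linking the parts
edges = np.argwhere(mst > 0)                 # (parent, child) index pairs
print(edges)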
Bag of Lines (BoL) for Improved Aerial Scene Representation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sridharan, Harini; Cheriyadat, Anil M.
2014-09-22
Feature representation is a key step in automated visual content interpretation. In this letter, we present a robust feature representation technique, referred to as bag of lines (BoL), for high-resolution aerial scenes. The proposed technique involves extracting and compactly representing low-level line primitives from the scene. The compact scene representation is generated by counting the different types of lines representing various linear structures in the scene. Through extensive experiments, we show that the proposed scene representation is invariant to scale changes and scene conditions and can discriminate urban scene categories accurately. We compare the BoL representation with the popular scale-invariant feature transform (SIFT) and Gabor wavelets for their classification and clustering performance on an aerial scene database consisting of images acquired by sensors with different spatial resolutions. The proposed BoL representation outperforms the SIFT- and Gabor-based representations.
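A rough sketch of a bag-of-lines style descriptor follows. The letter's exact line-primitive extractor and line taxonomy are not given in this summary, so the Hough-based segment detector and the coarse orientation/length binning below are assumptions that only illustrate the counting idea.

import cv2
import numpy as np

def bag_of_lines(gray, n_orient=8, n_len=4, max_len=200.0):
    edges = cv2.Canny(gray, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                           minLineLength=15, maxLineGap=3)
    hist = np.zeros((n_orient, n_len))
    if segs is None:
        return hist.ravel()
    for x1, y1, x2, y2 in segs[:, 0]:
        theta = np.arctan2(y2 - y1, x2 - x1) % np.pi   # segment orientation
        length = np.hypot(x2 - x1, y2 - y1)
        o = min(int(theta / np.pi * n_orient), n_orient - 1)
        l = min(int(length / max_len * n_len), n_len - 1)
        hist[o, l] += 1
    return hist.ravel() / max(hist.sum(), 1)  # normalized line-type counts

Normalizing the histogram of counts is one plausible way to approximate the scale invariance the letter reports.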
Updating representations of learned scenes.
Finlay, Cory A; Motes, Michael A; Kozhevnikov, Maria
2007-05-01
Two experiments were designed to compare scene recognition reaction time (RT) and accuracy patterns following observer versus scene movement. In Experiment 1, participants memorized a scene from a single perspective. Then, either the scene was rotated or the participants moved (0 degrees to 360 degrees in 36-degree increments) around the scene, and participants judged whether the objects' positions had changed. Regardless of whether the scene was rotated or the observer moved, RT increased with greater angular distance between judged and encoded views. In Experiment 2, we varied the delay (0, 6, or 12 s) between scene encoding and locomotion. Regardless of the delay, however, accuracy decreased and RT increased with angular distance. Thus, our data show that observer movement does not necessarily update representations of spatial layouts and raise questions about the effects of duration limitations and encoding points of view on the automatic spatial updating of representations of scenes.
Heasly, Benjamin S; Cottaris, Nicolas P; Lichtman, Daniel P; Xiao, Bei; Brainard, David H
2014-02-07
RenderToolbox3 provides MATLAB utilities and prescribes a workflow that should be useful to researchers who want to employ graphics in the study of vision and perhaps in other endeavors as well. In particular, RenderToolbox3 facilitates rendering scene families in which various scene attributes and renderer behaviors are manipulated parametrically, enables spectral specification of object reflectance and illuminant spectra, enables the use of physically based material specifications, helps validate renderer output, and converts renderer output to physical units of radiance. This paper describes the design and functionality of the toolbox and discusses several examples that demonstrate its use. We have designed RenderToolbox3 to be portable across computer hardware and operating systems and to be free and open source (except for MATLAB itself). RenderToolbox3 is available at https://github.com/DavidBrainard/RenderToolbox3.
Optical to optical interface device
NASA Technical Reports Server (NTRS)
Oliver, D. S.; Vohl, P.; Nisenson, P.
1972-01-01
The development, fabrication, and testing of a preliminary model of an optical-to-optical (noncoherent-to-coherent) interface device for use in coherent optical parallel processing systems are described. The developed device demonstrates a capability for accepting as an input a scene illuminated by a noncoherent radiation source and providing as an output a coherent light beam spatially modulated to represent the original noncoherent scene. The converter device developed under this contract employs a Pockels readout optical modulator (PROM). This is a photosensitive electro-optic element which can sense and electrostatically store optical images. The stored images can be simultaneously or subsequently read out optically by utilizing the electrostatic storage pattern to control an electro-optic light-modulating property of the PROM. The readout process is parallel, as no scanning mechanism is required. The PROM provides the functions of optical image sensing, modulation, and storage in a single active material.
Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude
2017-01-01
Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250 ms indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model and resolved the emerging neural scene size representation. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain. PMID:27039703
Development of infrared scene projectors for testing fire-fighter cameras
NASA Astrophysics Data System (ADS)
Neira, Jorge E.; Rice, Joseph P.; Amon, Francine K.
2008-04-01
We have developed two types of infrared scene projectors for hardware-in-the-loop testing of thermal imaging cameras such as those used by fire-fighters. In one, direct projection, images are projected directly into the camera. In the other, indirect projection, images are projected onto a diffuse screen, which is then viewed by the camera. Both projectors use a digital micromirror array as the spatial light modulator, in the form of a Micromirror Array Projection System (MAPS) engine with a resolution of 800 x 600, aluminum-coated mirrors on a 17 micrometer pitch, and a ZnSe protective window. Fire-fighter cameras are often based upon uncooled microbolometer arrays and typically have resolutions of 320 x 240 or lower. For direct projection, we use an argon-arc source, which provides spectral radiance equivalent to a 10,000 Kelvin blackbody over the 7 micrometer to 14 micrometer wavelength range, to illuminate the micromirror array. For indirect projection, an expanded 4-watt CO2 laser beam at a wavelength of 10.6 micrometers illuminates the micromirror array, and the scene formed by the first-order diffracted light from the array is projected onto a diffuse aluminum screen. In both projectors, a well-calibrated reference camera is used to provide non-uniformity correction and brightness calibration of the projected scenes, and the fire-fighter cameras alternately view the same scenes. In this paper, we compare the two methods for this application and report on our quantitative results. Indirect projection has the advantage of being able to more easily fill the wide field of view of the fire-fighter cameras, which is typically about 50 degrees. Direct projection more efficiently utilizes the available light, which will become important in emerging multispectral and hyperspectral applications.
Sohl, Terry L.; Dwyer, John L.
1998-01-01
The North American Landscape Characterization (NALC) project is a component of the National Aeronautics and Space Administration (NASA) Landsat Pathfinder program. Pathfinder projects are focused on the investigation of global change utilizing current remote sensing technologies. The NALC project is a cooperative effort between the U.S. Environmental Protection Agency (EPA), the U.S. Geological Survey (USGS), and NASA to make Landsat data available to the widest possible user community for scientific research and general public interest. The NALC project is principally funded by the EPA Office of Research and Development and the USGS's Earth Resources Observation Systems (EROS) Data Center (EDC). The objectives of the NALC project are to produce standardized remote sensing data sets, develop standardized analysis methods, and derive standardized land cover change products for a large portion of the North American continent (the conterminous United States and Mexico) (Lunetta and Sturdevant, 1993). The standard product is the NALC "triplicate", consisting of co-registered Landsat multispectral scanner data for the years 1973, 1986, and 1991 (plus or minus one year), plus co-registered 3 arc-second digital terrain elevation data. Processing began with the 1986 scene, which was precision corrected (with full terrain correction) to a 60 meter Universal Transverse Mercator base. Automated cross-correlation procedures were used to co-register the 1970s and 1990s data to the 1980s base, and independent verifications of registration quality were performed on all triplicate components. The pertinent metadata were compiled in a relational database, which includes WRS2 path/rows, scene IDs, image dates, solar azimuth and elevation, verification RMSEs, and the number of verification control points. NALC triplicate data sets are being used for a number of applications, including the analysis of urbanization patterns, dynamics of climatic fluctuations, deforestation studies, and vegetation classification and mapping. These data are being distributed through the Earth Observing System Data and Information System (EOSDIS) Information Management System (IMS) at a cost of $15 (U.S.) for each triplicate.
Hird, H J; Brown, M K
2017-11-01
The identification of samples at a crime scene which require forensic DNA typing has been the focus of recent research interest. We propose a simple but sensitive analysis system which can be deployed at a crime scene to identify crime scene stains as human or non-human. The proposed system uses the isothermal amplification of DNA in a rapid assay format, which returns results in as little as 30 min from sampling. The assay system runs on the Genie II device, a proven in-field detection system which could be deployed at a crime scene. The results presented here demonstrate that the system was sufficiently specific and sensitive and was able to detect the presence of human blood, semen and saliva on mock forensic samples.
Rank preserving sparse learning for Kinect based scene classification.
Tao, Dapeng; Jin, Lianwen; Yang, Zhao; Li, Xuelong
2013-10-01
With the rapid development of RGB-D sensors and the promptly growing population of the low-cost Microsoft Kinect sensor, scene classification, which is a hard yet important problem in computer vision, has gained a resurgence of interest recently. That is because the depth information provided by the Kinect sensor opens an effective and innovative way for scene classification. In this paper, we propose a new scheme for scene classification, which applies locality-constrained linear coding (LLC) to local SIFT features to represent the RGB-D samples and classifies scenes through the cooperation between a new rank preserving sparse learning (RPSL) based dimension reduction and a simple classification method. RPSL considers four aspects: 1) it preserves the rank order information of the within-class samples in a local patch; 2) it maximizes the margin between the between-class samples on the local patch; 3) the L1-norm penalty is introduced to obtain the parsimony property; and 4) it models classification error minimization by utilizing least-squares error minimization. Experiments are conducted on the NYU Depth V1 dataset and demonstrate the robustness and effectiveness of RPSL for scene classification.
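For context, the LLC step the scheme builds on can be sketched compactly. This follows the widely used approximated LLC of Wang et al. (2010) rather than anything specific to this paper; the codebook and the regularization constant are assumptions.

import numpy as np

def llc_code(x, codebook, k=5):
    """Code one local descriptor x against its k nearest codebook atoms."""
    d2 = np.sum((codebook - x) ** 2, axis=1)
    idx = np.argsort(d2)[:k]                 # locality: k nearest codewords
    B = codebook[idx] - x                    # shift atoms to the descriptor
    C = B @ B.T                              # local covariance
    C += 1e-4 * np.trace(C) * np.eye(k)      # regularization (assumed value)
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                             # sum-to-one constraint
    code = np.zeros(len(codebook))
    code[idx] = w
    return code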
A knowledge-based machine vision system for space station automation
NASA Technical Reports Server (NTRS)
Chipman, Laure J.; Ranganath, H. S.
1989-01-01
A simple knowledge-based approach to the recognition of objects in man-made scenes is being developed. Specifically, the system under development is a proposed enhancement to a robot arm for use in the space station laboratory module. The system will take a request from a user to find a specific object, and locate that object by using its camera input and information from a knowledge base describing the scene layout and attributes of the object types included in the scene. In order to use realistic test images in developing the system, researchers are using photographs of actual NASA simulator panels, which provide similar types of scenes to those expected in the space station environment. Figure 1 shows one of these photographs. In traditional approaches to image analysis, the image is transformed step by step into a symbolic representation of the scene. Often the first steps of the transformation are done without any reference to knowledge of the scene or objects. Segmentation of an image into regions generally produces a counterintuitive result in which regions do not correspond to objects in the image. After segmentation, a merging procedure attempts to group regions into meaningful units that will more nearly correspond to objects. Here, researchers avoid segmenting the image as a whole, and instead use a knowledge-directed approach to locate objects in the scene. The knowledge-based approach to scene analysis is described and the categories of knowledge used in the system are discussed.
Research on three-dimensional real scene technology of Sichuan-Tibet highway
NASA Astrophysics Data System (ADS)
Yin, Peng; Bo, Xianglei; Liu, Fen
2018-04-01
This paper studies three-dimensional real scene technology as applied to highway simulation, and presents a system that realizes a three-dimensional real scene of the Sichuan-Tibet highway. This system addresses the performance and user-experience shortcomings of the traditional Sichuan-Tibet highway geographic information system. Forces stationed in Tibet can use this system to improve the effectiveness of motor adaptive training and their command decision-making ability.
Electrophysiological revelations of trial history effects in a color oddball search task.
Shin, Eunsam; Chong, Sang Chul
2016-12-01
In visual oddball search tasks, viewing a no-target scene (i.e., a no-target selection trial) leads to the facilitation or delay of the search time for a target in a subsequent trial. Presumably, this selection failure leads to biasing the attentional set and prioritizing stimulus features unseen in the no-target scene. We observed attention-related ERP components and tracked the course of attentional biasing as a function of trial history. Participants were instructed to identify color oddballs (i.e., targets) shown in varied trial sequences. The number of no-target scenes preceding a target scene was increased from zero to two to reinforce attentional biasing, and colors presented in two successive no-target scenes were repeated or changed to systematically bias attention to specific colors. For the no-target scenes, the presentation of a second no-target scene resulted in an early selection of, and sustained attention to, the changed colors (mirrored in the frontal selection positivity, the anterior N2, and the P3b). For the target scenes, the N2pc indicated an earlier allocation of attention to targets with unseen or remotely seen colors. Inhibitory control of attention, shown in the anterior N2, was greatest when the target scene was followed by repeated no-target scenes with repeated colors. Finally, search times and the P3b were influenced by both color previewing and its history. The current results demonstrate that attentional biasing can occur on a trial-by-trial basis and be influenced by both feature previewing and its history.
The Faces in Infant-Perspective Scenes Change over the First Year of Life
Jayaraman, Swapnaa; Fausey, Caitlin M.; Smith, Linda B.
2015-01-01
Mature face perception has its origins in the face experiences of infants. However, little is known about the basic statistics of faces in early visual environments. We used head cameras to capture and analyze over 72,000 infant-perspective scenes from 22 infants aged 1-11 months as they engaged in daily activities. The frequency of faces in these scenes declined markedly with age: for the youngest infants, faces were present for 15 minutes of every waking hour, but only for 5 minutes for the oldest infants. In general, the available faces were well characterized by three properties: (1) they belonged to relatively few individuals; (2) they were close and visually large; and (3) they presented views showing both eyes. These three properties most strongly characterized the face corpora of our youngest infants and constitute environmental constraints on the early development of the visual system. PMID:26016988
NASA Astrophysics Data System (ADS)
Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Jia, Jie; Kim, Hae-Kwang
2007-12-01
In this paper, we propose a method of 3D-graphics-to-video encoding and streaming that is embedded into a remote interactive 3D visualization system for rapidly representing a 3D scene on mobile devices without having to download it from the server. In particular, a 3D-graphics-to-video framework is presented that increases the visual quality of regions of interest (ROI) in the video by allocating more bits to the ROI during H.264 video encoding. The ROI are identified by projecting 3D objects onto a 2D plane during rasterization. The system allows users to navigate the 3D scene and interact with objects of interest to query their descriptions. We developed an adaptive media streaming server that can provide an adaptive video stream, in terms of object-based quality, to the client according to the user's preferences and the variation of network bandwidth. Results show that with ROI mode selection, the PSNR of the test samples changes only slightly while the visual quality of the objects of interest increases markedly.
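The ROI bit-allocation idea can be illustrated with a small sketch that maps a projected object-of-interest mask to a per-macroblock quantization parameter (QP) map: blocks overlapping the ROI get a lower QP, hence more bits. How the map is fed to the H.264 encoder is implementation-specific and omitted here; the QP values are illustrative assumptions, not the paper's settings.

import numpy as np

def roi_qp_map(roi_mask, mb=16, qp_roi=24, qp_bg=36):
    """roi_mask: boolean HxW mask of rasterized objects of interest."""
    h, w = roi_mask.shape
    qp = np.full((h // mb, w // mb), qp_bg, dtype=int)
    for i in range(h // mb):
        for j in range(w // mb):
            block = roi_mask[i * mb:(i + 1) * mb, j * mb:(j + 1) * mb]
            if block.any():                  # macroblock touches an ROI
                qp[i, j] = qp_roi            # spend more bits here
    return qp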
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-12-31
The Twenty-Third Annual Illinois Energy Conference, entitled "Energy and Environmental Policy in a Period of Transition," was held in Chicago, Illinois on November 20-21, 1995. The conference program explored how federal policy on energy and the environment is changing and how these shifts will impact the economy of the Midwest. The conference was divided into four plenary sessions. Session 1 focused on the national policy scene, where speakers discussed proposed legislation to change federal energy and environmental policy. Session 2 looked at the future structure of the energy industry, projecting the roles of natural gas, the electric utility industry, and independent power producers in the overall energy system of the 21st century. Session 3 examined current federal policy in research and development as a baseline for discussing the future role of government and industry in supporting research and development. In particular, it looked at the relationship between energy research and development and global competitiveness. Finally, Session 4 attempted to tie these issues together and consider the impact of national policy change on Illinois and the Midwest.
Three-dimensional measurement system for crime scene documentation
NASA Astrophysics Data System (ADS)
Adamczyk, Marcin; Hołowko, Elwira; Lech, Krzysztof; Michoński, Jakub; Mączkowski, Grzegorz; Bolewicki, Paweł; Januszkiewicz, Kamil; Sitnik, Robert
2017-10-01
Three-dimensional measurement techniques (such as photogrammetry, time of flight, structure from motion, or structured light) are becoming a standard in the crime scene documentation process. The usage of 3D measurement techniques provides an opportunity to prepare a more insightful investigation and helps to show every trace in the context of the entire crime scene. In this paper we present a hierarchical, three-dimensional measurement system designed for the crime scene documentation process. Our system reflects the current standards in crime scene documentation: it performs measurements in two stages. The first stage of documentation, the most general, uses a scanner with relatively low spatial resolution but a large measuring volume, and covers the whole scene. The second stage is much more detailed: high resolution but a smaller measuring volume, for areas that require a more detailed approach. The documentation process is supervised by a specialised application, CrimeView3D, a software platform for measurement management (connecting to scanners and carrying out measurements, with automatic or semi-automatic data registration in real time) and data visualisation (3D visualisation of documented scenes). It also provides a series of useful tools for forensic technicians: a virtual measuring tape, a search for sources of blood spatter, a virtual walk through the crime scene, and many others. In this paper we present our measuring system and the developed software. We also provide outcomes from research on the metrological validation of the scanners, performed according to the VDI/VDE standard, and from measurement sessions conducted at real crime scenes in cooperation with technicians from the Central Forensic Laboratory of the Police.
NASA Technical Reports Server (NTRS)
Barker, John L.; Harnden, Joann M. K.; Montgomery, Harry; Anuta, Paul; Kvaran, Geir; Knight, ED; Bryant, Tom; Mckay, AL; Smid, Jon; Knowles, Dan, Jr.
1994-01-01
The EOS Moderate Resolution Imaging Spectrometer (MODIS) is being developed by NASA for flight on the Earth Observing System (EOS) series of satellites, the first of which (EOS-AM-1) is scheduled for launch in 1998. This document describes the algorithms, and their theoretical basis, for the MODIS Level 1B characterization, calibration, and geolocation algorithms, which must produce radiometrically, spectrally, and spatially calibrated data with sufficient accuracy that global change research programs can detect minute changes in biogeophysical parameters. The document first describes the geolocation algorithm, which determines the geodetic latitude, longitude, and elevation of each MODIS pixel, and the determination of geometric parameters for each observation (satellite zenith angle, satellite azimuth, range to the satellite, solar zenith angle, and solar azimuth). Next, the utilization of the MODIS onboard calibration sources, which consist of the Spectroradiometric Calibration Assembly (SRCA), Solar Diffuser (SD), Solar Diffuser Stability Monitor (SDSM), and the Blackbody (BB), is treated. Characterization of these sources and integration of measurements into the calibration process are described. The use of external sources is then treated, including the Moon, instrumented sites on the Earth (called vicarious calibration), and unsupervised normalization sites having invariant reflectance and emissive properties. Finally, algorithms for generating the utility masks needed for scene-based calibration are discussed. Eight appendices are provided, covering instrument design and additional algorithm details.
Saliency predicts change detection in pictures of natural scenes.
Wright, Michael J
2005-01-01
It has been proposed that the visual system encodes the salience of objects in the visual field in an explicit two-dimensional map that guides visual selective attention. Experiments were conducted to determine whether salience measurements applied to regions of pictures of outdoor scenes could predict the detection of changes in those regions. To obtain a quantitative measure of change detection, observers located changes in pairs of colour pictures presented across an interstimulus interval (ISI). Salience measurements were then obtained from different observers for image change regions using three independent methods, and all were positively correlated with change detection. Factor analysis extracted a single saliency factor that accounted for 62% of the variance contained in the four measures. Finally, estimates of the magnitude of the image change in each picture pair were obtained, using nine separate visual filters representing low-level vision features (luminance, colour, spatial frequency, orientation, edge density). None of the feature outputs was significantly associated with change detection or saliency. On the other hand it was shown that high-level (structural) properties of the changed region were related to saliency and to change detection: objects were more salient than shadows and more detectable when changed.
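As a hedged sketch of the kind of low-level change measure described above, the following computes a luminance and an edge-density difference restricted to the changed region of an image pair; the specific filters and parameters are assumptions for illustration, not the study's nine visual filters.

import cv2
import numpy as np

def change_magnitudes(img_a, img_b, region):
    x, y, w, h = region                      # bounding box of the change
    a = cv2.cvtColor(img_a[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    b = cv2.cvtColor(img_b[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    lum = abs(float(a.mean()) - float(b.mean()))   # luminance change
    edge_a = cv2.Canny(a, 50, 150).mean()
    edge_b = cv2.Canny(b, 50, 150).mean()
    return lum, abs(edge_a - edge_b)         # edge-density change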
ERIC Educational Resources Information Center
Carlin, Michael T.; Soraci, Sal A.; Strawbridge, Christina P.
2005-01-01
Memory for scene changes that were identified immediately (passive encoding) or following systematic and effortful search (generative encoding) was compared across groups differing in age and intelligence. In the context of flicker methodology, generative search for the changing object involved selection and rejection of multiple potential…
Age-related macular degeneration changes the processing of visual scenes in the brain.
Ramanoël, Stephen; Chokron, Sylvie; Hera, Ruxandra; Kauffmann, Louise; Chiquet, Christophe; Krainik, Alexandre; Peyrin, Carole
2018-01-01
In age-related macular degeneration (AMD), the processing of fine details in a visual scene, based on a high spatial frequency processing, is impaired, while the processing of global shapes, based on a low spatial frequency processing, is relatively well preserved. The present fMRI study aimed to investigate the residual abilities and functional brain changes of spatial frequency processing in visual scenes in AMD patients. AMD patients and normally sighted elderly participants performed a categorization task using large black and white photographs of scenes (indoors vs. outdoors) filtered in low and high spatial frequencies, and nonfiltered. The study also explored the effect of luminance contrast on the processing of high spatial frequencies. The contrast across scenes was either unmodified or equalized using a root-mean-square contrast normalization in order to increase contrast in high-pass filtered scenes. Performance was lower for high-pass filtered scenes than for low-pass and nonfiltered scenes, for both AMD patients and controls. The deficit for processing high spatial frequencies was more pronounced in AMD patients than in controls and was associated with lower activity for patients than controls not only in the occipital areas dedicated to central and peripheral visual fields but also in a distant cerebral region specialized for scene perception, the parahippocampal place area. Increasing the contrast improved the processing of high spatial frequency content and spurred activation of the occipital cortex for AMD patients. These findings may lead to new perspectives for rehabilitation procedures for AMD patients.
NASA Technical Reports Server (NTRS)
Humphreys, Brad; Bellisario, Brian; Gallo, Christopher; Thompson, William K.; Lewandowski, Beth
2016-01-01
Long duration space travel to Mars or to an asteroid will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited. Therefore, compact resistance exercise device prototypes are being developed. The NASA Digital Astronaut Project (DAP) is supporting the Advanced Exercise Concepts (AEC) Project, the Exercise Physiology and Countermeasures (ExPC) project and researchers funded by the National Space Biomedical Research Institute (NSBRI) by developing computational models of exercising with these new advanced exercise device concepts. To validate these models and to support the Advanced Exercise Concepts Project, several candidate devices have been flown onboard NASA's Reduced Gravity Aircraft. In terrestrial laboratories, researchers typically have motion capture systems available for the measurement of subject kinematics. Onboard the parabolic flight aircraft it is not practical to utilize traditional motion capture systems due to the large working volume they require and their relatively high replacement cost if damaged. To support measuring kinematics on board parabolic aircraft, a motion capture system is being developed utilizing open source computer vision code with commercial off the shelf (COTS) video camera hardware. While the system's accuracy is lower than that of laboratory setups, it provides a means to produce quantitative comparison motion capture kinematic data. Additionally, data such as the exercise volume required for small spaces such as the Orion capsule can be determined. METHODS: OpenCV is an open source computer vision library that provides the ability to perform multi-camera 3-dimensional reconstruction. Utilizing OpenCV, via the Python programming language, a set of tools has been developed to perform motion capture in confined spaces using commercial cameras. Four Sony video cameras were intrinsically calibrated prior to flight. Intrinsic calibration provides a set of camera-specific parameters to remove geometric distortion of the lens and sensor (specific to each individual camera). A set of high contrast markers was placed on the exercising subject (safety also necessitated that they be soft in case they became detached during parabolic flight); small yarn balls were used. Extrinsic calibration, the determination of camera location and orientation parameters, is performed using fixed landmark markers shared by the camera scenes. Additionally, a wand calibration (sweeping a wand through all camera scenes simultaneously) was also performed. Techniques have been developed to perform intrinsic calibration, extrinsic calibration, isolation of the markers in the scene, calculation of marker 2D centroids, and 3D reconstruction from multiple cameras. These methods have been tested in a laboratory side-by-side comparison with a traditional motion capture system and also on a parabolic flight.
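For readers unfamiliar with the OpenCV calls involved, the following is a minimal Python sketch (not the flight software) of the two-camera triangulation step described above. The calibration inputs (camera matrices, distortion coefficients, and the relative pose between cameras) are assumed to come from the intrinsic and extrinsic calibration steps; all names are illustrative.

```python
# Minimal sketch of marker triangulation from two calibrated cameras.
# K1, K2: 3x3 camera matrices; d1, d2: distortion vectors (from intrinsic
# calibration); R, t: pose of camera 2 relative to camera 1 (from extrinsic
# calibration). All of these are assumed inputs.
import numpy as np
import cv2

def reconstruct_marker(K1, d1, K2, d2, R, t, uv1, uv2):
    """Triangulate one marker centroid seen by two calibrated cameras.

    uv1, uv2: (2,) pixel centroids of the marker in each camera image.
    Returns the 3-D point in camera-1 coordinates.
    """
    # Remove lens distortion and normalize to ideal image coordinates.
    n1 = cv2.undistortPoints(uv1.reshape(1, 1, 2).astype(np.float64), K1, d1)
    n2 = cv2.undistortPoints(uv2.reshape(1, 1, 2).astype(np.float64), K2, d2)

    # Projection matrices in normalized coordinates: camera 1 at the origin.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t.reshape(3, 1)])

    # Linear triangulation; the result is homogeneous (4 x N).
    X_h = cv2.triangulatePoints(P1, P2, n1.reshape(2, 1), n2.reshape(2, 1))
    return (X_h[:3] / X_h[3]).ravel()
```

In a multi-camera setup like the one described, the same step generalizes by triangulating from the pair of cameras that currently sees the marker, or by a least-squares intersection over all views.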
Scheme for Terminal Guidance Utilizing Acousto-Optic Correlator.
longitudinally extending acousto-optic device as index of refraction variation pattern signals. Real time signals corresponding to the scene actually being viewed...by the vehicle are propagated across the stored signals, and the results of an acousto-optic correlation are utilized to determine X and Y error
The role of iconic memory in change-detection tasks.
Becker, M W; Pashler, H; Anstis, S M
2000-01-01
In three experiments, subjects attempted to detect the change of a single item in a visually presented array of items. Subjects' ability to detect a change was greatly reduced if a blank interstimulus interval (ISI) was inserted between the original array and an array in which one item had changed ('change blindness'). However, change detection improved when the location of the change was cued during the blank ISI. This suggests that people represent more information about a scene than change blindness might suggest. We tested two possible hypotheses for why, in the absence of a cue, this representation fails to produce good change detection. The first claims that the intervening events employed to create change blindness result in multiple neural transients which co-occur with the to-be-detected change. Poor detection rates occur because a serial search of all the transient locations is required to detect the change, during which time the representation of the original scene fades. The second claims that the occurrence of the second frame overwrites the representation of the first frame, unless that information is insulated against overwriting by attention. The results support the second hypothesis. We conclude that people may have a fairly rich visual representation of a scene while the scene is present, but fail to detect changes because they lack the ability to simultaneously maintain two complete visual representations.
Zhao, Nan; Chen, Wenfeng; Xuan, Yuming; Mehler, Bruce; Reimer, Bryan; Fu, Xiaolan
2014-01-01
The 'looked-but-failed-to-see' phenomenon is crucial to driving safety. Previous research utilising change detection tasks related to driving has reported inconsistent effects of driver experience on the ability to detect changes in static driving scenes. Reviewing these conflicting results, we suggest that drivers' increased ability to detect changes will only appear when the task requires a pattern of visual attention distribution typical of actual driving. By adding a distant fixation point to the road image, we developed a modified change blindness paradigm and measured the detection performance of drivers and non-drivers. Drivers performed better than non-drivers only in scenes with a fixation point. Furthermore, the experience effect interacted with the location of the change and the relevance of the change to driving. These results suggest that learning associated with driving experience reflects increased skill in the efficient distribution of visual attention across both the central focus area and peripheral objects. This article provides an explanation for the previously conflicting reports of driving experience effects in change detection tasks. We observed a measurable benefit of experience in static driving scenes, using a modified change blindness paradigm. These results have translational opportunities for picture-based training and testing tools to improve driver skill.
Picotte, Joshua J.; Coan, Michael; Howard, Stephen M.
2014-01-01
The effort to utilize satellite-based MODIS, AVHRR, and GOES fire detections from the Hazard Monitoring System (HMS) to identify undocumented fires in Florida and improve the Monitoring Trends in Burn Severity (MTBS) mapping process has yielded promising results. This method was augmented using regression tree models to identify burned/not-burned pixels (BnB) in every Landsat scene (1984–2012) in Worldwide Referencing System 2 Path/Rows 16/40, 17/39, and 18/39. The burned area delineations were combined with the HMS detections to create burned area polygons attributed with their date of fire detection. Within our study area, we processed 88,000 HMS points (2003–2012) and 1,800 Landsat scenes to identify approximately 300,000 burned area polygons. Six percent of these burned area polygons were larger than the 500-acre MTBS minimum size threshold. From this study, we conclude that the process can significantly improve understanding of fire occurrence and improve the efficiency and timeliness of assessing its impacts upon the landscape.
Interactive Scene Analysis Module - A sensor-database fusion system for telerobotic environments
NASA Technical Reports Server (NTRS)
Cooper, Eric G.; Vazquez, Sixto L.; Goode, Plesent W.
1992-01-01
Accomplishing a task with telerobotics typically involves a combination of operator control/supervision and a 'script' of preprogrammed commands. These commands usually assume that the locations of various objects in the task space conform to some internal representation (database) of that task space. The ability to quickly and accurately verify the task environment against the internal database would improve the robustness of these preprogrammed commands. In addition, the on-line initialization and maintenance of a task space database is difficult for operators using Cartesian coordinates alone. This paper describes the Interactive Scene Analysis Module (ISAM), developed to provide task space database initialization and verification utilizing 3-D graphic overlay modelling, video imaging, and laser-radar-based range imaging. Through the fusion of task space database information and image sensor data, a verifiable task space model is generated, providing location and orientation data for objects in a task space. This paper also describes applications of ISAM in the Intelligent Systems Research Laboratory (ISRL) at NASA Langley Research Center, and discusses its performance relative to representation accuracy and operator interface efficiency.
The Influence of Scene Context on Parafoveal Processing of Objects.
Castelhano, Monica S; Pereira, Effie J
2017-04-21
Many studies in reading have shown the enhancing effect of context on the processing of a word before it is directly fixated (parafoveal processing of words; Balota et al., 1985; Balota & Rayner, 1983; Ehrlich & Rayner, 1981). Here, we examined whether scene context influences the parafoveal processing of objects and enhances the extraction of object information. Using a modified boundary paradigm (Rayner, 1975), the Dot-Boundary paradigm, participants fixated on a suddenly-onsetting cue before the preview object would onset 4° away. The preview object could be identical to the target, visually similar, visually dissimilar, or a control (black rectangle). The preview changed to the target object once a saccade toward the object was made. Critically, the objects were presented on either a consistent or an inconsistent scene background. Results revealed that there was a greater processing benefit for consistent than inconsistent scene backgrounds and that identical and visually similar previews produced greater processing benefits than other previews. In the second experiment, we added an additional context condition in which the target location was inconsistent, but the scene semantics remained consistent. We found that changing the location of the target object disrupted the processing benefit derived from the consistent context. Most importantly, across both experiments, the effect of preview was not enhanced by scene context. Thus, preview information and scene context appear to independently boost the parafoveal processing of objects without any interaction from object-scene congruency.
Hillstrom, Anne P; Segabinazi, Joice D; Godwin, Hayward J; Liversedge, Simon P; Benson, Valerie
2017-02-19
We explored the influence of early scene analysis and visible object characteristics on eye movements when searching for objects in photographs of scenes. On each trial, participants were shown sequentially either a scene preview or a uniform grey screen (250 ms), a visual mask, the name of the target and the scene, now including the target at a likely location. During the participant's first saccade of the search, the target location was changed to: (i) a different likely location, (ii) an unlikely but possible location or (iii) a very implausible location. The results showed that the first saccade landed more often on the likely location in which the target re-appeared than on unlikely or implausible locations, and overall the first saccade landed nearer the first target location with a preview than without. Hence, rapid scene analysis influenced initial eye movement planning, but availability of the target rapidly modified that plan. After the target moved, it was found more quickly when it appeared in a likely location than when it appeared in an unlikely or implausible location. The findings show that both scene gist and object properties are extracted rapidly, and are used in conjunction to guide saccadic eye movements during visual search. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).
Multiple pedestrian detection using IR LED stereo camera
NASA Astrophysics Data System (ADS)
Ling, Bo; Zeifman, Michael I.; Gibson, David R. P.
2007-09-01
As part of the U.S. Department of Transportation's Intelligent Vehicle Initiative (IVI) program, the Federal Highway Administration (FHWA) is conducting R&D in vehicle safety and driver information systems. There is an increasing number of applications where pedestrian monitoring is of high importance. Vision-based pedestrian detection in outdoor scenes is still an open challenge. People dress in very different colors that sometimes blend with the background, wear hats or carry bags, and stand, walk and change direction unpredictably. Backgrounds vary, containing buildings, moving or parked cars, bicycles, street signs, signals, etc. Furthermore, existing pedestrian detection systems perform only during daytime, making it impossible to detect pedestrians at night. Under FHWA funding, we are developing a multi-pedestrian detection system using an IR LED stereo camera. This system, without using any templates, detects pedestrians through statistical pattern recognition utilizing 3D features extracted from the disparity map. A new IR LED stereo camera is being developed, which can help detect pedestrians during both daytime and nighttime. Using image differencing and denoising, we have also developed new methods to estimate the disparity map of pedestrians in near real time. Our system will have a hardware interface to the traffic controller through wireless communication. Once pedestrians are detected, traffic signals at the street intersection will change phases to alert the drivers of approaching vehicles. Initial test results using images collected at a street intersection show that our system can detect pedestrians in near real time.
Acquaintance Rape: Applying Crime Scene Analysis to the Prediction of Sexual Recidivism.
Lehmann, Robert J B; Goodwill, Alasdair M; Hanson, R Karl; Dahle, Klaus-Peter
2016-10-01
The aim of the current study was to enhance the assessment and predictive accuracy of risk assessments for sexual offenders by utilizing detailed crime scene analysis (CSA). CSA was conducted on a sample of 247 male acquaintance rapists from Berlin (Germany) using a nonmetric, multidimensional scaling (MDS) Behavioral Thematic Analysis (BTA) approach. The age of the offenders at the time of the index offense ranged from 14 to 64 years (M = 32.3; SD = 11.4). The BTA procedure revealed three behavioral themes of hostility, criminality, and pseudo-intimacy, consistent with previous CSA research on stranger rape. The construct validity of the three themes was demonstrated through correlational analyses with known sexual offending measures and criminal histories. The themes of hostility and pseudo-intimacy were significant predictors of sexual recidivism. In addition, the pseudo-intimacy theme led to a significant increase in the incremental validity of the Static-99 actuarial risk assessment instrument for the prediction of sexual recidivism. The results indicate the potential utility and validity of crime scene behaviors in the applied risk assessment of sexual offenders. © The Author(s) 2015.
A compressed sensing method with analytical results for lidar feature classification
NASA Astrophysics Data System (ADS)
Allen, Josef D.; Yuan, Jiangbo; Liu, Xiuwen; Rahmes, Mark
2011-04-01
We present an innovative way to autonomously classify LiDAR points into bare earth, building, vegetation, and other categories. One desirable product of LiDAR data is the automatic classification of the points in the scene. Our algorithm automatically classifies scene points using compressed sensing methods via Orthogonal Matching Pursuit algorithms, utilizing a generalized K-Means clustering algorithm to extract buildings and foliage from a Digital Surface Model (DSM). This technology reduces manual editing while being cost effective for large-scale automated global scene modeling. Quantitative analyses are provided using Receiver Operating Characteristic (ROC) curves to show probability of detection and false alarm for building vs. vegetation classification. Histograms are shown with sample size metrics. Our inpainting algorithms then fill the voids where buildings and vegetation were removed, utilizing Computational Fluid Dynamics (CFD) techniques and Partial Differential Equations (PDE) to create an accurate Digital Terrain Model (DTM) [6]. Inpainting preserves building height contour consistency and edge sharpness of identified inpainted regions. Qualitative results illustrate other benefits such as terrain inpainting's unique ability to minimize or eliminate undesirable terrain data artifacts.
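As a rough illustration of the classification idea named above (sparse coding via Orthogonal Matching Pursuit followed by K-Means clustering), the following Python sketch uses scikit-learn. The random dictionary and feature matrix are placeholders: the paper's actual LiDAR features and dictionary-learning step are not specified here.

```python
# Sparse-code per-point feature vectors against a dictionary with OMP,
# then cluster the resulting codes with K-Means. Dictionary D and the
# feature matrix below are illustrative stand-ins.
import numpy as np
from sklearn.decomposition import SparseCoder
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 16))   # e.g. per-point DSM neighborhood features
D = rng.normal(size=(32, 16))           # dictionary atoms (rows)
D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit-normalize atoms

# OMP sparse coding: each feature vector is approximated by a few atoms.
coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                    transform_n_nonzero_coefs=4)
codes = coder.transform(features)

# K-Means over the sparse codes, e.g. into bare earth / building /
# vegetation / other (label-to-class assignment would need ground truth).
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(codes)
```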
Evaluation of experimental UAV video change detection
NASA Astrophysics Data System (ADS)
Bartelsen, J.; Saur, G.; Teutsch, C.
2016-10-01
During the last ten years, the availability of images acquired from unmanned aerial vehicles (UAVs) has been continuously increasing due to the improvements and economic success of flight and sensor systems. From our point of view, reliable and automatic image-based change detection may contribute to overcoming several challenging problems in military reconnaissance, civil security, and disaster management. Changes within a scene can be caused by functional activities, e.g., footprints or skid marks, excavations, or humidity penetration; these might be recognizable in aerial images, but are easily overlooked when change detection is executed manually. Depending on the circumstances, these kinds of changes may be an indication of sabotage, terroristic activity, or threatening natural disasters. Although image-based change detection is possible from both ground and aerial perspectives, in this paper we primarily address the latter. We have applied an extended approach to change detection as described by Saur and Kruger [1] and Saur et al. [2], and have built upon the ideas of Saur and Bartelsen [3]. The commercial simulation environment Virtual Battle Space 3 (VBS3) is used to simulate aerial "before" and "after" image acquisition with respect to flight path, weather conditions and objects within the scene, and to obtain synthetic videos. Video frames, which depict the same part of the scene, including "before" and "after" changes and not necessarily from the same perspective, are registered pixel-wise against each other by a photogrammetric concept based on a homography. The pixel-wise registration is used to apply an automatic difference analysis, which, to a limited extent, is able to suppress typical errors caused by imprecise frame registration, sensor noise, vegetation and especially parallax effects. The primary concern of this paper is to rigorously evaluate the possibilities and limitations of our current approach for image-based change detection with respect to the flight path, viewpoint change and parametrization. Hence, based on synthetic "before" and "after" videos of a simulated scene, we estimated the precision and recall of automatically detected changes. In addition, based on our approach, we illustrate the results of change detection in short, but real, video sequences. Future work will improve the photogrammetric approach for frame registration, and extensive real video material suitable for change detection will be acquired.
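The core registration-and-differencing step lends itself to a short sketch. The following Python snippet uses standard OpenCV calls; ORB feature matching is an assumption made for the sketch, not necessarily the authors' photogrammetric registration, which is more involved.

```python
# Homography-based registration of a "before" frame to an "after" frame,
# followed by absolute differencing: the simplest form of the change
# analysis described above.
import cv2
import numpy as np

def register_and_diff(before, after):
    # Detect and match features (ORB is an illustrative choice).
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(before, None)
    k2, d2 = orb.detectAndCompute(after, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robust homography estimate; RANSAC suppresses mismatched features.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    # Warp "before" into the "after" frame and difference pixel-wise;
    # thresholding this map yields candidate changes.
    h, w = after.shape[:2]
    warped = cv2.warpPerspective(before, H, (w, h))
    return cv2.absdiff(warped, after)
```

A single homography is exact only for planar scenes or pure rotation, which is one source of the parallax errors the abstract mentions.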
Martin Cichy, Radoslaw; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude
2017-06-01
Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250 ms, indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared the MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model and resolved the emerging neural representations of scene size. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Inverting a dispersive scene's side-scanned image
NASA Technical Reports Server (NTRS)
Harger, R. O.
1983-01-01
Consideration is given to the problem of using a remotely sensed, side-scanned image of a time-variant scene, which changes according to a dispersion relation, to estimate the structure at a given moment. Additive thermal noise is neglected in the models considered in the formal treatment. It is shown that the dispersion relation is normalized by the scanning velocity, as is the group scanning velocity component. An inversion operation is defined for noise-free images generated by SAR. The method is extended to the inversion of noisy imagery, and a formulation is defined for spectral density estimation. Finally, the methods for a radar system are used for the case of sonar.
Hart, Alexander; Chai, Peter R; Griswold, Matthew K; Lai, Jeffrey T; Boyer, Edward W; Broach, John
2017-01-01
This study seeks to understand the acceptability and perceived utility of unmanned aerial vehicle (UAV) technology for Mass Casualty Incident (MCI) scene management. Qualitative questionnaires regarding the ease of operation, perceived usefulness, and training time to operate UAVs were administered to Emergency Medical Technicians (n = 15) at a single urban New England academic tertiary care medical center; participants were front-line emergency medical service (EMS) providers and senior EMS personnel in Incident Commander roles. Data from this pilot study indicate that EMS responders are accepting of deploying and operating UAV technology in a disaster scenario. Additionally, they perceived UAV technology as easy to adopt yet impactful in improving MCI scene management.
NASA Astrophysics Data System (ADS)
Cho, Min Ji; Shin, Uisub; Lee, Hee Chul
2017-05-01
This paper proposes a read-in integrated circuit (RIIC) for infrared scene projectors, which compensates for the voltage drops in ground lines in order to improve the uniformity of the emitter current. A current output digital-to-analog converter is utilized to convert digital scene data into scene data currents. The unit cells in the array receive the scene data current and convert it into data voltage, which simultaneously self-adjusts to account for the voltage drop in the ground line in order to generate the desired emitter current independently of variations in the ground voltage. A 32 × 32 RIIC unit cell array was designed and fabricated using a 0.18-μm CMOS process. The experimental results demonstrate that the proposed RIIC can output a maximum emitter current of 150 μA and compensate for a voltage drop in the ground line of up to 500 mV under a 3.3-V supply. The uniformity of the emitter current is significantly improved compared to that of a conventional RIIC.
Surface-illuminant ambiguity and color constancy: effects of scene complexity and depth cues.
Kraft, James M; Maloney, Shannon I; Brainard, David H
2002-01-01
Two experiments were conducted to study how scene complexity and cues to depth affect human color constancy. Specifically, two levels of scene complexity were compared. The low-complexity scene contained two walls with the same surface reflectance and a test patch which provided no information about the illuminant. In addition to the surfaces visible in the low-complexity scene, the high-complexity scene contained two rectangular solid objects and 24 paper samples with diverse surface reflectances. Observers viewed illuminated objects in an experimental chamber and adjusted the test patch until it appeared achromatic. Achromatic settings made under two different illuminants were used to compute an index that quantified the degree of constancy. Two experiments were conducted: one in which observers viewed the stimuli directly, and one in which they viewed the scenes through an optical system that reduced cues to depth. In each experiment, constancy was assessed for two conditions. In the valid-cue condition, many cues provided valid information about the illuminant change. In the invalid-cue condition, some image cues provided invalid information. Four broad conclusions are drawn from the data: (a) constancy is generally better in the valid-cue condition than in the invalid-cue condition; (b) for the stimulus configuration used, increasing image complexity has little effect in the valid-cue condition but leads to increased constancy in the invalid-cue condition; (c) for the stimulus configuration used, reducing cues to depth has little effect for either constancy condition; and (d) there is moderate individual variation in the degree of constancy exhibited, particularly in the degree to which the complexity manipulation affects performance.
Urban area change detection procedures with remote sensing data
NASA Technical Reports Server (NTRS)
Maxwell, E. L. (Principal Investigator); Riordan, C. J.
1980-01-01
The underlying factors affecting the detection and identification of nonurban to urban land cover change using satellite data were studied. Computer programs were developed to create a digital scene and to simulate the effect of the sensor point spread function (PSF) on the transfer of modulation from the scene to an image of the scene. The theory behind the development of a digital filter representing the PSF is given as well as an example of its application. Atmospheric effects on modulation transfer are also discussed. A user's guide and program listings are given.
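The central simulation idea, convolving a digital scene with a discrete point spread function to model the sensor's modulation transfer, can be sketched in a few lines of Python. The Gaussian PSF below is an assumed stand-in for the report's filter, and the scene is a toy example.

```python
# Blur a synthetic scene by a discrete PSF to see how sharp land-cover
# edges lose contrast in the sensed image. The Gaussian PSF is an
# illustrative assumption, not the report's actual filter.
import numpy as np
from scipy.ndimage import gaussian_filter

scene = np.zeros((128, 128))
scene[40:70, 50:90] = 1.0     # a "new urban" block in a rural background

sigma_pixels = 1.5            # assumed width of the sensor PSF, in pixels
image = gaussian_filter(scene, sigma=sigma_pixels)

# The loss of edge modulation between `scene` and `image` is what limits
# detection of small nonurban-to-urban changes in satellite data.
```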
ERIC Educational Resources Information Center
Chang, Min-min
1998-01-01
Discusses the Online Computer Library Center (OCLC) and the changing Asia Pacific library scene under the broad headings of the three phases of technology innovation. Highlights include WorldCat and the OCLC shared cataloging system; resource sharing and interlibrary loan; enriching OCLC online catalog with Asian collections; and future outlooks.…
Optic Flow Dominates Visual Scene Polarity in Causing Adaptive Modification of Locomotor Trajectory
NASA Technical Reports Server (NTRS)
Nomura, Y.; Mulavara, A. P.; Richards, J. T.; Brady, R.; Bloomberg, Jacob J.
2005-01-01
Locomotion and posture are influenced and controlled by vestibular, visual and somatosensory information. Optic flow and scene polarity are two characteristics of a visual scene that have been identified as critical in how they affect perceived body orientation and self-motion. The goal of this study was to determine the roles of optic flow and visual scene polarity in adaptive modification of locomotor trajectory. Two computer-generated virtual reality scenes were shown to subjects during 20 minutes of treadmill walking. One scene was highly polarized while the other was composed of objects displayed in a non-polarized fashion. Both virtual scenes depicted constant-rate self-motion equivalent to walking counterclockwise around the perimeter of a room. Subjects performed stepping tests blindfolded before and after scene exposure to assess adaptive changes in locomotor trajectory. Subjects showed a significant difference in heading direction between pre- and post-adaptation stepping tests when exposed to either scene during treadmill walking. However, there was no significant difference in the subjects' heading direction between the two visual scene polarity conditions. Therefore, it was inferred from these data that optic flow has a greater role than visual polarity in influencing adaptive locomotor function.
A dual-waveband dynamic IR scene projector based on DMD
NASA Astrophysics Data System (ADS)
Hu, Yu; Zheng, Ya-wei; Gao, Jiao-bo; Sun, Ke-feng; Li, Jun-na; Zhang, Lei; Zhang, Fang
2016-10-01
An infrared scene simulation system can simulate a variety of objects and backgrounds to perform dynamic tests and evaluate EO detecting systems in hardware-in-the-loop testing. The basic structure of a dual-waveband dynamic IR scene projector is introduced in this paper. The system's core device is an IR Digital Micro-mirror Device (DMD) and the radiant source is a mini-type high temperature IR plane black-body. An IR collimation optical system whose transmission range includes 3-5 μm and 8-12 μm is designed as the projection optical system. Scene simulation software was developed with Visual C++ and Vega software tools, and a software flow chart is presented. The parameters and testing results of the system are given, and the system has been applied with satisfactory performance in IR imaging simulation testing.
Scene text recognition in mobile applications by character descriptor and structure configuration.
Yi, Chucai; Tian, Yingli
2014-07-01
Text characters and strings in natural scenes can provide valuable information for many applications. Extracting text directly from natural scene images or videos is a challenging task because of diverse text patterns and variant background interference. This paper proposes a method of scene text recognition from detected text regions. In text detection, our previously proposed algorithms are applied to obtain text regions from a scene image. First, we design a discriminative character descriptor by combining several state-of-the-art feature detectors and descriptors. Second, we model character structure at each character class by designing stroke configuration maps. Our algorithm design is compatible with the application of scene text extraction in smart mobile devices. An Android-based demo system is developed to show the effectiveness of our proposed method on scene text information extraction from nearby objects. The demo system also provides some insight into algorithm design and performance improvement of scene text extraction. The evaluation results on benchmark data sets demonstrate that our proposed scheme of text recognition is comparable with the best existing methods.
Crime scene investigation, reporting, and reconstruction (CSIRR)
NASA Astrophysics Data System (ADS)
Booth, John F.; Young, Jeffrey M.; Corrigan, Paul
1997-02-01
Graphic Data Systems Corporation (GDS Corp.) and Intelligent Graphics Solutions, Inc. (IGS) combined talents in 1995 to design and develop a MicroGDS application to support field investigations of crime scenes, such as homicides, bombings, and arsons. IGS and GDS Corp. prepared design documents under the guidance of federal, state, and local crime scene reconstruction experts and with information from the FBI's evidence response team field book. The application was then developed to encompass the key components of crime scene investigation: staff assigned to the incident, tasks occurring at the scene, visits to the scene location, photographs taken of the crime scene, related documents, involved persons, catalogued evidence, and two- or three-dimensional crime scene reconstruction. Crime scene investigation, reporting, and reconstruction (CSIRR) provides investigators with a single application for both capturing all tabular data about the crime scene and quickly rendering a sketch of the scene. Tabular data is captured through intuitive database forms, while MicroGDS has been modified to readily allow non-CAD users to sketch the scene.
No Measured Effect of a Familiar Contextual Object on Color Constancy.
Kanematsu, Erika; Brainard, David H
2014-08-01
Some familiar objects have a typical color, such as the yellow of a banana. The presence of such objects in a scene is a potential cue to the scene illumination, since the light reflected from them should on average be consistent with their typical surface reflectance. Although there are many studies on how the identity of an object affects how its color is perceived, little is known about whether the presence of a familiar object in a scene helps the visual system stabilize the color appearance of other objects with respect to changes in illumination. We used a successive color matching procedure in three experiments designed to address this question. Across the experiments we studied a total of 6 subjects (2 in Experiment 1, 3 in Experiment 2, and 4 in Experiment 3) with partial overlap of subjects between experiments. We compared measured color constancy across conditions in which a familiar object cue to the illuminant was available with conditions in which such a cue was not present. Overall, our results do not reveal a reliable improvement in color constancy with the addition of a familiar object to a scene. An analysis of the experimental power of our data suggests that if there is such an effect, it is small: less than approximately a change of 0.09 in a constancy index where an absence of constancy corresponds to an index value of 0 and perfect constancy corresponds to an index value of 1.
Model-based video segmentation for vision-augmented interactive games
NASA Astrophysics Data System (ADS)
Liu, Lurng-Kuo
2000-04-01
This paper presents an architecture and algorithms for model-based video object segmentation and its application to vision-augmented interactive games. We are especially interested in real-time, low-cost, vision-based applications that can be implemented in software on a PC. We use different models for the background and a player object. The object segmentation algorithm is performed at two different levels: pixel level and object level. At the pixel level, the segmentation algorithm is formulated as a maximum a posteriori probability (MAP) problem. The statistical likelihood of each pixel is calculated and used in the MAP problem. Object-level segmentation is used to improve segmentation quality by utilizing information about the spatial and temporal extent of the object. The concept of an active region, defined based on a motion histogram and trajectory prediction, is introduced to indicate the possibility of a video object region for both background and foreground modeling; it also reduces the overall computational complexity. In contrast with other applications, the proposed video object segmentation system is able to create background and foreground models on the fly, even without introductory background frames. Furthermore, we apply different rates of self-tuning on the scene model so that the system can adapt to the environment when there is a scene change. We applied the proposed video object segmentation algorithms to several prototype virtual interactive games. In our prototype vision-augmented interactive games, a player can immerse himself/herself inside a game and can virtually interact with other animated characters in real time without being constrained by helmets, gloves, special sensing devices, or background environment. Potential applications of the proposed algorithms include human-computer gesture interfaces and object-based video coding, such as MPEG-4 video coding.
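A toy Python sketch of the pixel-level MAP decision described above: per-pixel Gaussian background statistics and a foreground prior yield a posterior foreground mask. The uniform foreground likelihood and the prior value are illustrative assumptions; the paper's actual models are richer.

```python
# Per-pixel MAP foreground/background classification.
# bg_mean, bg_var: learned per-pixel background statistics (assumed inputs).
import numpy as np

def map_foreground(frame, bg_mean, bg_var, p_fg=0.3, fg_lik=1.0 / 256.0):
    # Background likelihood: Gaussian around the learned per-pixel mean.
    lik_bg = (np.exp(-0.5 * (frame - bg_mean) ** 2 / bg_var)
              / np.sqrt(2.0 * np.pi * bg_var))
    # Foreground likelihood: uniform over intensity values (an assumption).
    post_fg = p_fg * fg_lik              # prior x likelihood, foreground
    post_bg = (1.0 - p_fg) * lik_bg      # prior x likelihood, background
    return post_fg > post_bg             # boolean foreground mask (MAP rule)
```

In the paper's system, the active-region concept would restrict where this test is evaluated, which is one way the computational cost is kept low.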
Automated synthetic scene generation
NASA Astrophysics Data System (ADS)
Givens, Ryan N.
Physics-based simulations generate synthetic imagery to help organizations anticipate system performance of proposed remote sensing systems. However, manually constructing synthetic scenes which are sophisticated enough to capture the complexity of real-world sites can take days to months depending on the size of the site and desired fidelity of the scene. This research, sponsored by the Air Force Research Laboratory's Sensors Directorate, successfully developed an automated approach to fuse high-resolution RGB imagery, lidar data, and hyperspectral imagery and then extract the necessary scene components. The method greatly reduces the time and money required to generate realistic synthetic scenes and developed new approaches to improve material identification using information from all three of the input datasets.
Statistics of high-level scene context.
Greene, Michelle R
2013-01-01
Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed "things" in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics rather than intuition.
NASA Astrophysics Data System (ADS)
Toadere, Florin
2017-12-01
A spectral image processing algorithm is presented that allows illumination of the scene with different illuminants together with reconstruction of the scene's reflectance. A color checker spectral image and the CIE A (warm light, 2700 K), D65 (cold light, 6500 K) and Cree TW Series LED T8 (4000 K) illuminants are employed for scene illumination. The illuminants used in the simulations have different spectra and, as a result of their illumination, the colors of the scene change. The influence of the illuminants on the reconstruction of the scene's reflectance is estimated. Demonstrative images and reflectances showing the operation of the algorithm are illustrated.
Estimating pixel variances in the scenes of staring sensors
Simonson, Katherine M. [Cedar Crest, NM]; Ma, Tian J. [Albuquerque, NM]
2012-01-24
A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
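A rough Python sketch of the idea as summarized above: a raw difference frame is tested against per-pixel error estimates tied to local spatial intensity gradients, so that jitter near strong edges is not mistaken for scene change. The jitter magnitude, noise level, and threshold below are illustrative assumptions, not the patent's values.

```python
# Change detection for a staring sensor with gradient-aware pixel errors.
import numpy as np

def detect_changes(reference, current, jitter_px=0.5, noise_sigma=2.0, k=3.0):
    # Raw difference frame between the current and reference images.
    diff = current.astype(np.float64) - reference.astype(np.float64)

    # Spatial intensity gradients of the reference scene.
    gy, gx = np.gradient(reference.astype(np.float64))
    grad_mag = np.hypot(gx, gy)

    # Pixel error estimate: sensor noise combined with the apparent signal
    # a camera jitter of ~jitter_px pixels would produce across the local
    # gradient. Edges tolerate larger differences before flagging change.
    pixel_err = np.hypot(noise_sigma, jitter_px * grad_mag)

    return np.abs(diff) > k * pixel_err   # significant-change mask
```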
A test of size-scaling and relative-size hypotheses for the moon illusion.
Redding, Gordon M
2002-11-01
In two experiments participants reproduced the size of the moon in pictorial scenes under two conditions: when the scene element was normally oriented, producing a depth gradient like a floor, or when the scene element was inverted, producing a depth gradient like a ceiling. Target moons were located near to or far from the scene element. Consistent with size constancy scaling, the illusion reversed when the "floor" of a pictorial scene was inverted to represent a "ceiling." Relative size contrast predicted a reduction or increase in the illusion with no change in direction. The relation between pictorial and natural moon illusions is discussed.
A bio-inspired method and system for visual object-based attention and segmentation
NASA Astrophysics Data System (ADS)
Huber, David J.; Khosla, Deepak
2010-04-01
This paper describes a method and system of human-like attention and object segmentation in visual scenes that (1) attends to regions in a scene in their rank of saliency in the image, (2) extracts the boundary of an attended proto-object based on feature contours, and (3) can be biased to boost the attention paid to specific features in a scene, such as those of a desired target object in static and video imagery. The purpose of the system is to identify regions of a scene of potential importance and extract the region data for processing by an object recognition and classification algorithm. The attention process can be performed in a default, bottom-up manner or a directed, top-down manner which will assign a preference to certain features over others. One can apply this system to any static scene, whether that is a still photograph or imagery captured from video. We employ algorithms that are motivated by findings in neuroscience, psychology, and cognitive science to construct a system that is novel in its modular and stepwise approach to the problems of attention and region extraction, its application of a flooding algorithm to break apart an image into smaller proto-objects based on feature density, and its ability to join smaller regions of similar features into larger proto-objects. This approach allows many complicated operations to be carried out by the system in a very short time, approaching real-time. A researcher can use this system as a robust front-end to a larger system that includes object recognition and scene understanding modules; it is engineered to function over a broad range of situations and can be applied to any scene with minimal tuning from the user.
Experimental study of digital image processing techniques for LANDSAT data
NASA Technical Reports Server (NTRS)
Rifman, S. S. (Principal Investigator); Allendoerfer, W. B.; Caron, R. H.; Pemberton, L. J.; Mckinnon, D. M.; Polanski, G.; Simon, K. W.
1976-01-01
The author has identified the following significant results. Results are reported for: (1) subscene registration, (2) full scene rectification and registration, (3) resampling techniques, and (4) ground control point (GCP) extraction. Subscenes (354 pixels x 234 lines) were registered to approximately 1/4 pixel accuracy and evaluated by change detection imagery for three cases: (1) bulk data registration, (2) precision correction of a reference subscene using GCP data, and (3) independently precision processed subscenes. Full scene rectification and registration results were evaluated by using a correlation technique to measure registration errors of 0.3 pixel rms throughout the full scene. Resampling evaluations of nearest neighbor and TRW cubic convolution processed data included change detection imagery and feature classification. Resampled data were also evaluated for an MSS scene containing specular solar reflections.
Loy Rodas, Nicolas; Barrera, Fernando; Padoy, Nicolas
2017-02-01
We present an approach to provide awareness of the harmful ionizing radiation generated during X-ray-guided minimally invasive procedures. A hand-held screen is used to display, directly in the user's view, information related to radiation safety in a mobile augmented reality (AR) manner. Instead of using markers, we propose a method to track the observer's viewpoint which relies on the use of multiple RGB-D sensors and combines equipment detection for tracking initialization with a KinectFusion-like approach for frame-to-frame tracking. Two of the sensors are ceiling-mounted and a third is attached to the hand-held screen. The ceiling cameras keep an updated model of the room's layout, which is used to exploit context information and improve the relocalization procedure. The system is evaluated on a multicamera dataset generated inside an operating room (OR) and containing ground-truth poses of the AR display. This dataset includes a wide variety of sequences with different scene configurations, occlusions, motion in the scene, and abrupt viewpoint changes. Qualitative results illustrating the different AR visualization modes for radiation awareness provided by the system are also presented. Our approach allows the user to benefit from a large AR visualization area and permits recovery from tracking failure caused by large motion or changes in the scene simply by looking at a piece of equipment. The system enables the user to see the 3-D propagation of radiation, the medical staff's exposure, and/or the doses deposited on the patient's surface as seen through his own eyes.
NASA Astrophysics Data System (ADS)
Appel, Marius; Lahn, Florian; Buytaert, Wouter; Pebesma, Edzer
2018-04-01
Earth observation (EO) datasets are commonly provided as collection of scenes, where individual scenes represent a temporal snapshot and cover a particular region on the Earth's surface. Using these data in complex spatiotemporal modeling becomes difficult as soon as data volumes exceed a certain capacity or analyses include many scenes, which may spatially overlap and may have been recorded at different dates. In order to facilitate analytics on large EO datasets, we combine and extend the geospatial data abstraction library (GDAL) and the array-based data management and analytics system SciDB. We present an approach to automatically convert collections of scenes to multidimensional arrays and use SciDB to scale computationally intensive analytics. We evaluate the approach in three study cases on national scale land use change monitoring with Landsat imagery, global empirical orthogonal function analysis of daily precipitation, and combining historical climate model projections with satellite-based observations. Results indicate that the approach can be used to represent various EO datasets and that analyses in SciDB scale well with available computational resources. To simplify analyses of higher-dimensional datasets as from climate model output, however, a generalization of the GDAL data model might be needed. All parts of this work have been implemented as open-source software and we discuss how this may facilitate open and reproducible EO analyses.
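As a minimal sketch of the scene-collection-to-array step described above, the following Python snippet uses GDAL's bindings to stack co-registered scenes along a time axis. The file names and the assumption of a shared grid are illustrative; the paper's pipeline additionally handles spatial overlap, differing acquisition dates, and ingestion into SciDB.

```python
# Convert a small collection of co-registered single-band scenes into a
# (time, y, x) array. Assumes all scenes share the same grid and extent.
from osgeo import gdal
import numpy as np

scene_files = ["scene_2001.tif", "scene_2002.tif", "scene_2003.tif"]  # placeholders

bands = []
for path in scene_files:
    ds = gdal.Open(path)
    bands.append(ds.GetRasterBand(1).ReadAsArray())

# Multidimensional array ready for per-pixel time-series analysis; in the
# paper's system, chunks of such arrays are loaded into SciDB for scaling.
cube = np.stack(bands, axis=0)
```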
Perspectives: Unconventional Wisdom
ERIC Educational Resources Information Center
Smith, Burck
2013-01-01
Since online learning burst on the scene in the late 1990s, predictions of traditional higher education's obsolescence and disruption have been steady fare in the trade and popular media. This time the change is shaping up to be more profound than most had envisioned. As alternatives to the degree system (and the accreditation/financial…
The Changing School Finance Scene: Local, State, and Federal Issues.
ERIC Educational Resources Information Center
Cambron-McCabe, Nelda H.
This chapter provides an overview of recent school finance litigation at the local, state, and federal levels. The first section addresses legal challenges to state school finance systems and reviews decisions from Arkansas, California, Colorado, Georgia, Michigan, New York, and West Virginia. Litigation attacking states' methods of funding public…
Changes are in Store for Pulping Technology
ERIC Educational Resources Information Center
Environmental Science and Technology, 1975
1975-01-01
The pulp and paper industry is being forced by economic considerations and air pollution regulations to consider alternatives to the use of sulfur systems, be they kraft, acid, or neutral sulfite. To meet environmental requirements and combat erosion of profits, modernized non-sulfur pulping methods will increasingly appear on the scene. (BT)
Computer image generation: Reconfigurability as a strategy in high fidelity space applications
NASA Technical Reports Server (NTRS)
Bartholomew, Michael J.
1989-01-01
The demand for realistic, high fidelity computer image generation systems to support space simulation is well established. However, as the number and diversity of space applications increase, the complexity and cost of computer image generation systems also increase. One strategy used to harmonize cost with varied requirements is the establishment of a reconfigurable image generation system that can be adapted rapidly and easily to meet new and changing requirements. The reconfigurability strategy, through the life cycle of system conception, specification, design, implementation, operation, and support for high fidelity computer image generation systems, is discussed. The discussion is limited to those issues directly associated with the reconfigurability and adaptability of a specialized scene generation system in a multi-faceted space applications environment. Examples and insights gained through the recent development and installation of the Improved Multi-function Scene Generation System at the Johnson Space Center Systems Engineering Simulator are reviewed and compared with current simulator industry practices. The results are clear: the strategy of reconfigurability applied to space simulation requirements provides a viable path to supporting diverse applications with an adaptable computer image generation system.
The fate of object memory traces under change detection and change blindness.
Busch, Niko A
2013-07-03
Observers often fail to detect substantial changes in a visual scene. This so-called change blindness is often taken as evidence that visual representations are sparse and volatile. This notion rests on the assumption that the failure to detect a change implies that representations of the changing objects are lost altogether. However, recent evidence suggests that under change blindness, object memory representations may be formed and stored, but not retrieved. This study investigated the fate of object memory representations when changes go unnoticed. Participants were presented with scenes consisting of real-world objects, one of which changed on each trial, while event-related potentials (ERPs) were recorded. Participants were first asked to localize where the change had occurred. In an additional recognition task, participants then discriminated old objects, either from the pre-change or the post-change scene, from entirely new objects. Neural traces of object memories were studied by comparing ERPs for old and novel objects. Participants performed poorly in the detection task and often failed to recognize objects from the scene, especially pre-change objects. However, a robust old/novel effect was observed in the ERP, even when participants were change blind and did not recognize the old object. This implicit memory trace was found both for pre-change and post-change objects. These findings suggest that object memories are stored even under change blindness. Thus, visual representations may not be as sparse and volatile as previously thought. Rather, change blindness may point to a failure to retrieve and use these representations for change detection. Copyright © 2013 Elsevier B.V. All rights reserved.
Can IR scene projectors reduce total system cost?
NASA Astrophysics Data System (ADS)
Ginn, Robert; Solomon, Steven
2006-05-01
There is an incredible amount of system engineering involved in turning the typical infrared system needs of probability of detection, probability of identification, and probability of false alarm into focal plane array (FPA) requirements of noise equivalent irradiance (NEI), modulation transfer function (MTF), fixed pattern noise (FPN), and defective pixels. Unfortunately, there are no analytic solutions to this problem, so many approximations and plenty of "seat of the pants" engineering are employed. This leads to conservative specifications, which needlessly drive up system costs by increasing system engineering costs, reducing FPA yields, increasing test costs, increasing rework, and prompting never-ending renegotiation of requirements in an effort to rein in costs. These issues do not include the added complexity, for the FPA factory manager, of trying to meet varied and changing requirements for similar products because different customers have made different approximations and flowed down different specifications. Scene generation technology may well be mature and cost effective enough to generate considerable overall savings for FPA-based systems. We will compare the costs and capabilities of various existing scene generation systems and estimate the potential savings if implemented at several locations in the IR system fabrication cycle. The costs of implementing this new testing methodology will be compared to the probable savings in systems engineering, test, rework, yield improvement and others. The diverse requirements and techniques required for testing missile warning systems, missile seekers, and FLIRs will be defined. Last, we will discuss both the hardware and software requirements necessary to meet the new test paradigm and discuss additional cost improvements related to the incorporation of these technologies.
Crime scene units: a look to the future
NASA Astrophysics Data System (ADS)
Baldwin, Hayden B.
1999-02-01
The scientific examination of physical evidence is well recognized as a critical element in conducting successful criminal investigations and prosecutions. The forensic science field is an ever-changing discipline. With the arrival of DNA analysis, new processing techniques for latent prints, portable lasers, and electrostatic dust print lifters, the training of evidence technicians has become more important than ever. These scientific and technological breakthroughs have increased the possibility of collecting and analyzing physical evidence in ways that were never possible before. The problem arises with the collection of physical evidence from the crime scene, not with its analysis. The need for specialized units to process all crime scenes is therefore imperative. These specialized units, called crime scene units, should be trained and equipped to handle all forms of crime scenes. Crime scene units would have the capability to professionally evaluate and collect pertinent physical evidence from crime scenes.
Visual memory for moving scenes.
DeLucia, Patricia R; Maldia, Maria M
2006-02-01
In the present study, memory for picture boundaries was measured with scenes that simulated self-motion along the depth axis. The results indicated that boundary extension (a distortion in memory for picture boundaries) occurred with moving scenes in the same manner as that reported previously for static scenes. Furthermore, motion affected memory for the boundaries but this effect of motion was not consistent with representational momentum of the self (memory being further forward in a motion trajectory than actually shown). We also found that memory for the final position of the depicted self in a moving scene was influenced by properties of the optical expansion pattern. The results are consistent with a conceptual framework in which the mechanisms that underlie boundary extension and representational momentum (a) process different information and (b) both contribute to the integration of successive views of a scene while the scene is changing.
Direct versus indirect processing changes the influence of color in natural scene categorization.
Otsuka, Sachio; Kawaguchi, Jun
2009-10-01
Using a negative priming (NP) paradigm, we examined whether participants would categorize color and grayscale images of natural scenes that were presented peripherally and were to be ignored. We focused on (1) attentional resources allocated to natural scenes and (2) direct versus indirect processing of them. We set up low and high attention-load conditions based on the set size of the searched stimuli in the prime display (one or five). Participants were required to detect and categorize the target objects in natural scenes in a central visual search task, ignoring peripheral natural images in both the prime and probe displays. The results showed that, irrespective of attention load, NP was observed for color scenes but not for grayscale scenes. We did not observe any effect of color information in central visual search, where participants responded directly to natural scenes. These results indicate that, in a situation in which participants indirectly process natural scenes, color information is critical to object categorization, but when the scenes are processed directly, color information does not contribute to categorization.
Fluoroscopic image-guided intervention system for transbronchial localization
NASA Astrophysics Data System (ADS)
Rai, Lav; Keast, Thomas M.; Wibowo, Henky; Yu, Kun-Chang; Draper, Jeffrey W.; Gibbs, Jason D.
2012-02-01
Reliable transbronchial access of peripheral lung lesions is desirable for the diagnosis and potential treatment of lung cancer. This procedure can be difficult, however, because accessory devices (e.g., needle or forceps) cannot be reliably localized while deployed. We present a fluoroscopic image-guided intervention (IGI) system for tracking such bronchoscopic accessories. Fluoroscopy, an imaging technology currently utilized by many bronchoscopists, has a fundamental shortcoming - many lung lesions are invisible in its images. Our IGI system aligns a digitally reconstructed radiograph (DRR) defined from a pre-operative computed tomography (CT) scan with live fluoroscopic images. Radiopaque accessory devices are readily apparent in fluoroscopic video, while lesions lacking a fluoroscopic signature but identifiable in the CT scan are superimposed in the scene. The IGI system processing steps consist of: (1) calibrating the fluoroscopic imaging system; (2) registering the CT anatomy with its depiction in the fluoroscopic scene; (3) optical tracking to continually update the DRR and target positions as the fluoroscope is moved about the patient. The end result is a continuous correlation of the DRR and projected targets with the anatomy depicted in the live fluoroscopic video feed. Because both targets and bronchoscopic devices are readily apparent in arbitrary fluoroscopic orientations, multiplane guidance is straightforward. The system tracks in real-time with no computational lag. We have measured a mean projected tracking accuracy of 1.0 mm in a phantom and present results from an in vivo animal study.
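As an illustration of the projection step at the heart of such a system, the sketch below overlays a CT-defined target on a calibrated fluoroscopic view using a standard pinhole camera model. This is a minimal sketch, not the authors' implementation; the intrinsics, pose, and lesion coordinates are invented placeholders.

```python
import numpy as np

# Minimal sketch of the overlay step in a fluoroscopic IGI pipeline: once the
# fluoroscope is calibrated (intrinsics K) and the CT volume is registered to
# the fluoroscopic scene (rotation R, translation t), a CT-defined lesion can
# be projected into the live image. All values are illustrative placeholders.

K = np.array([[1100.0, 0.0, 512.0],    # focal length and principal point (pixels)
              [0.0, 1100.0, 512.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # CT-to-fluoroscope rotation
t = np.array([0.0, 0.0, 600.0])        # CT-to-fluoroscope translation (mm)

def project(points_ct):
    """Project Nx3 CT-space points (mm) to pixel coordinates."""
    cam = points_ct @ R.T + t          # into the camera frame
    uvw = cam @ K.T                    # apply pinhole intrinsics
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide

lesion = np.array([[12.5, -40.0, 88.0]])  # hypothetical target centroid (mm)
print(project(lesion))                    # pixel location at which to draw the target
```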
Texture-adaptive hyperspectral video acquisition system with a spatial light modulator
NASA Astrophysics Data System (ADS)
Fang, Xiaojing; Feng, Jiao; Wang, Yongjin
2014-10-01
We present a new hybrid camera system based on a spatial light modulator (SLM) to capture texture-adaptive high-resolution hyperspectral video. The hybrid camera system records a hyperspectral video with low spatial resolution using a grayscale camera and a high-spatial-resolution video using an RGB camera. The hyperspectral video is subsampled by the SLM. The subsampled points can be adaptively selected according to the texture characteristics of the scene by combining digital image analysis and computational processing. In this paper, we propose an adaptive sampling method utilizing texture segmentation and the wavelet transform (WT). We also demonstrate the effectiveness of the sampling pattern produced on the SLM with the proposed method.
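The sketch below illustrates the general idea of texture-adaptive sample selection, assuming a single-level Haar wavelet as the texture measure; the paper's segmentation step is not reproduced, and the 20% keep fraction is an assumption.

```python
import numpy as np
import pywt

# High local wavelet detail energy marks textured regions that merit denser
# hyperspectral sampling through the SLM; smooth regions are sampled sparsely.

def sampling_mask(gray, keep_fraction=0.2):
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), 'haar')
    energy = cH**2 + cV**2 + cD**2                 # local detail energy
    energy = np.kron(energy, np.ones((2, 2)))      # back to full image grid
    thresh = np.quantile(energy, 1.0 - keep_fraction)
    return energy >= thresh                        # True where the SLM samples

mask = sampling_mask(np.random.rand(256, 256))
print(mask.mean())   # roughly keep_fraction of pixels selected
```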
NASA Astrophysics Data System (ADS)
Graham, James; Ternovskiy, Igor V.
2013-06-01
We applied a two stage unsupervised hierarchical learning system to model complex dynamic surveillance and cyber space monitoring systems using a non-commercial version of the NeoAxis visualization software. The hierarchical scene learning and recognition approach is based on hierarchical expectation maximization, and was linked to a 3D graphics engine for validation of learning and classification results and understanding the human - autonomous system relationship. Scene recognition is performed by taking synthetically generated data and feeding it to a dynamic logic algorithm. The algorithm performs hierarchical recognition of the scene by first examining the features of the objects to determine which objects are present, and then determines the scene based on the objects present. This paper presents a framework within which low level data linked to higher-level visualization can provide support to a human operator and be evaluated in a detailed and systematic way.
Description of the dynamic infrared background/target simulator (DIBS)
NASA Astrophysics Data System (ADS)
Lujan, Ignacio
1988-01-01
The purpose of the Dynamic Infrared Background/Target Simulator (DIBS) is to project dynamic infrared scenes to a test sensor; e.g., a missile seeker that is sensitive to infrared energy. The projected scene will include target(s) and background. This system was designed to present flicker-free infrared scenes in the 8 micron to 12 micron wavelength region. The major subassemblies of the DIBS are the laser write system (LWS), vanadium dioxide modulator assembly, scene data buffer (SDB), and the optical image translator (OIT). This paper describes the overall concept and design of the infrared scene projector followed by some details of the LWS and VO2 modulator. Also presented are brief descriptions of the SDB and OIT.
Forensic 3D Scene Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN
Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.
1995-01-01
This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.
Beck, Christoph; Garreau, Guillaume; Georgiou, Julius
2016-01-01
Sand scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to detect wave amplitudes on the scale of an atom and to locate acoustic stimuli with an accuracy of within 13°, based on their neuronal anatomy. We present here a prototype sound source localization system inspired by this impressive performance. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows smaller localization errors than those observed in nature.
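For intuition about the geometry such a system exploits, here is a classical cross-correlation time-difference-of-arrival (TDOA) sketch for one microphone pair. This is not the authors' spiking neural model; the sample rate, microphone spacing, and synthetic signal are assumptions.

```python
import numpy as np

fs = 48000.0    # sample rate (Hz), assumed
d = 0.10        # microphone spacing (m), assumed
c = 343.0       # speed of sound in air (m/s)

rng = np.random.default_rng(0)
sig = rng.standard_normal(4096)
true_delay = 5                              # delay in samples
mic1, mic2 = sig, np.roll(sig, true_delay)  # mic2 hears the source later

# Peak of the cross-correlation gives the inter-microphone delay.
xcorr = np.correlate(mic2, mic1, mode='full')
lag = int(np.argmax(xcorr)) - (len(sig) - 1)   # estimated delay in samples
angle = np.degrees(np.arcsin(np.clip((lag / fs) * c / d, -1.0, 1.0)))
print(lag, angle)   # bearing of the source relative to the microphone pair
```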
Modeling the effects of contrast enhancement on target acquisition performance
NASA Astrophysics Data System (ADS)
Du Bosq, Todd W.; Fanning, Jonathan D.
2008-04-01
Contrast enhancement and dynamic range compression are currently being used to improve the performance of infrared imagers by increasing the contrast between the target and the scene content, by better utilizing the available gray levels either globally or locally. This paper assesses the range-performance effects of various contrast enhancement algorithms for target identification with well-contrasted vehicles. Human perception experiments were performed to determine field performance using contrast enhancement on the U.S. Army RDECOM CERDEC NVESD standard military eight-target set using an uncooled LWIR camera. The experiments compare the identification performance of observers viewing linearly scaled images and various contrast-enhancement-processed images. Contrast enhancement is modeled in the US Army thermal target acquisition model (NVThermIP) by changing the scene contrast temperature. The model predicts improved performance based on any improved target contrast, regardless of feature saturation or enhancement. To account for the equivalent blur associated with each contrast enhancement algorithm, an additional effective MTF was calculated and added to the model. The measured results are compared with the predicted performance based on the target task difficulty metric used in NVThermIP.
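The contrast between the two processing families being compared can be sketched as follows, assuming a synthetic frame in place of real LWIR data and CLAHE as one representative local enhancement algorithm; the NVThermIP range model itself is not reproduced.

```python
import numpy as np
import cv2

raw = np.random.gamma(2.0, 400.0, (480, 640)).astype(np.uint16)  # fake IR frame

# Baseline: global linear scaling maps [min, max] onto the 8-bit display range.
lo, hi = int(raw.min()), int(raw.max())
linear = ((raw.astype(np.float32) - lo) * (255.0 / max(hi - lo, 1))).astype(np.uint8)

# Local enhancement: CLAHE redistributes gray levels within tiles, raising
# target-to-background contrast at the cost of some equivalent blur, which is
# why the paper folds an extra effective MTF into the model.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(linear)

print(linear.std(), enhanced.std())   # crude proxy for displayed contrast
```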
Landsat Time-Series Analysis Opens New Approaches for Regional Glacier Mapping
NASA Astrophysics Data System (ADS)
Winsvold, S. H.; Kääb, A.; Nuth, C.; Altena, B.
2016-12-01
The archive of Landsat satellite scenes is important for mapping of glaciers, especially as it represents the longest running and continuous satellite record of sufficient resolution to track glacier changes over time. Newly launched optical sensors (Landsat 8 and Sentinel-2A) and those upcoming in the near future (Sentinel-2B) will provide very high temporal resolution of optical satellite images, especially in high-latitude regions. Because of the potential that lies within such near-future dense time series, methods for mapping glaciers from space should be revisited. We present application scenarios that utilize and explore dense time series of optical data for automatic mapping of glacier outlines and glacier facies. Throughout the season, glaciers display a temporal sequence of properties in optical reflection as the seasonal snow melts away, and glacier ice appears in the ablation area and firn in the accumulation area. In one application scenario we simulated potential future seasonal resolution using several years of Landsat 5 TM/7 ETM+ data, and found a sinusoidal evolution of the spectral reflectance for on-glacier pixels throughout a year. We believe this is because of the shortwave infrared band and its sensitivity to snow grain size. The parameters retrieved from the fitted sinusoidal curve can be used for glacier mapping purposes; we found similar results using, e.g., the mean of summer band-ratio images. In individual optical mapping scenes, conditions (e.g., snow, ice, and clouds) will vary and will not be equally optimal over the entire scene. Using robust statistics on stacked pixels reveals a potential for synthesizing optimal mapping scenes from a temporal stack, as we present in a further application scenario. The dense time series available from satellite imagery will also promote multi-temporal and multi-sensor analyses. The seasonal pattern of snow and ice on a glacier seen in the optical time series can, in the summer season, also be observed using radar backscatter series. Optical sensors reveal the reflective properties at the surface, while radar sensors may penetrate the surface, revealing properties from a certain volume. In an outlook to this contribution, we explore how information from SAR and optical sensor systems can be combined for different purposes.
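A minimal sketch of the per-pixel seasonal fit described above: a sinusoid with a one-year period fitted to a band-ratio time series. The acquisition dates and values below are synthetic; in practice the fitted amplitude, phase, and offset per pixel would feed the glacier-mapping step.

```python
import numpy as np
from scipy.optimize import curve_fit

def seasonal(doy, amp, phase, offset):
    return amp * np.sin(2 * np.pi * doy / 365.25 + phase) + offset

doy = np.array([30.0, 75.0, 140.0, 190.0, 230.0, 280.0, 330.0])  # days of year
rng = np.random.default_rng(0)
ratio = seasonal(doy, 0.4, 1.2, 1.0) + 0.05 * rng.standard_normal(doy.size)

params, _ = curve_fit(seasonal, doy, ratio, p0=(0.3, 0.0, 1.0))
print(params)   # fitted amplitude, phase, offset for this pixel
```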
Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.
Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed
2009-06-01
Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method utilizes, as its initial condition, a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests are given to provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings based on computer simulations and demonstrate the merits of this novel 3-D reconstruction paradigm.
Liquid crystal uncooled thermal imager development
NASA Astrophysics Data System (ADS)
Clark, H. R.; Bozler, C. O.; Berry, S. R.; Reich, R. K.; Bos, P. J.; Finnemeyer, V. A.; Bryant, D. R.; McGinty, C.
2016-09-01
An uncooled thermal imager is being developed based on a liquid crystal (LC) transducer. Without any electrical connections, the LC transducer pixels change the long-wavelength infrared (LWIR) scene directly into a visible image, as opposed to the electric signal produced in microbolometers. The objectives are to develop an imager technology scalable to large formats (tens of megapixels) while maintaining or improving the noise equivalent temperature difference (NETD) compared to microbolometers. The present work demonstrates that the LCs have the required performance (sensitivity, dynamic range, speed, etc.) to enable a more flexible uncooled imager. Utilizing 200-mm wafers, a process has been developed and arrays have been fabricated using aligned LCs confined in 20×20-μm cavities elevated on thermal legs. Detectors have been successfully fabricated on both silicon and fused silica wafers using fewer than 10 photolithographic mask steps. A breadboard camera system has been assembled to test the imagers. Various sensor configurations are described along with advantages and disadvantages of component arrangements.
Rover imaging system for the Mars rover/sample return mission
NASA Technical Reports Server (NTRS)
1993-01-01
In the past year, the conceptual design of a panoramic imager for the Mars Environmental Survey (MESUR) Pathfinder was finished. A prototype camera was built and its performance in the laboratory was tested. The performance of this camera was excellent. Based on this work, we have recently proposed a small, lightweight, rugged, and highly capable Mars Surface Imager (MSI) instrument for the MESUR Pathfinder mission. A key aspect of our approach to optimization of the MSI design is that we treat image gathering, coding, and restoration as a whole, rather than as separate and independent tasks. Our approach leads to higher image quality, especially in the representation of fine detail with good contrast and clarity, without increasing either the complexity of the camera or the amount of data transmission. We have made significant progress over the past year in both the overall MSI system design and in the detailed design of the MSI optics. We have taken a simple panoramic camera and have upgraded it substantially to become a prototype of the MSI flight instrument. The most recent version of the camera utilizes miniature wide-angle optics that image directly onto a 3-color, 2096-element CCD line array. There are several data-taking modes, providing resolution as high as 0.3 mrad/pixel. Analysis tasks that were performed or that are underway with the test data from the prototype camera include the following: construction of 3-D models of imaged scenes from stereo data, first for controlled scenes and later for field scenes; and checks on geometric fidelity, including alignment errors, mast vibration, and oscillation in the drive system. We have outlined a number of tasks planned for Fiscal Year '93 in order to prepare us for submission of a flight instrument proposal for MESUR Pathfinder.
Reduced Change Blindness Suggests Enhanced Attention to Detail in Individuals with Autism
ERIC Educational Resources Information Center
Smith, Hayley; Milne, Elizabeth
2009-01-01
Background: The phenomenon of change blindness illustrates that a limited number of items within the visual scene are attended to at any one time. It has been suggested that individuals with autism focus attention on less contextually relevant aspects of the visual scene, show superior perceptual discrimination and notice details which are often…
Possibility of Engineering Education That Makes Use of Algebraic Calculators by Various Scenes
NASA Astrophysics Data System (ADS)
Umeno, Yoshio
Algebraic calculators are graphing calculators with a built-in computer algebra system. In technical colleges and universities, many mathematics problems can be solved simply by pressing a few keys on these calculators. They also possess other features, so they can be used extensively in engineering education: for example, for basic education, programming education, English education, and as creative thinking tools for excellent students. In this paper, we introduce an overview of algebraic calculators and then consider how to utilize them in engineering education.
Advanced telemedicine development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forslund, D.W.; George, J.E.; Gavrilov, E.M.
1998-12-31
This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of this project was to develop a Java-based, electronic, medical-record system that can handle multimedia data and work over a wide-area network based on open standards, and that can utilize an existing database back end. The physician is to be totally unaware that there is a database behind the scenes and is only aware that he/she can access and manage the relevant information to treat the patient.
NASA Technical Reports Server (NTRS)
Jacobberger, P. A.
1986-01-01
Two Thematic Mapper (TM) scenes were acquired. A scene was acquired for the Bahariya, Egypt field area, and one was acquired covering the Okavango Delta site. Investigations at the northwest Botswana study sites have concentrated upon a system of large linear (alab) dunes possessing an average wavelength of 2 kilometers and an east-west orientation. These dunes exist to the north and west of the Okavango Swamp, the pseudodeltaic end-sink of the internal Okavango-Cubango-Cuito drainage network. One archival scene and two TM acquisitions are on order, but at present no TM data were acquired for the Tombouctou/Azaouad Dunes, Mali. The three areas taken together comprise an environmental series ranging from hyperarid to semi-arid, with desertization processes operational or incipient in each. The long range goal is to predict normal seasonal variations, so that aperiodic spectral changes resulting from soil erosion, vegetation damage, and associated surface processes would be distinguishable as departures from the norm.
Improved linearity using harmonic error rejection in a full-field range imaging system
NASA Astrophysics Data System (ADS)
Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.
2008-02-01
Full field range imaging cameras are used to simultaneously measure the distance for every pixel in a given scene using an intensity modulated illumination source and a gain modulated receiver array. The light is reflected from an object in the scene, and the modulation envelope experiences a phase shift proportional to the target distance. Ideally the waveforms are sinusoidal, allowing the phase, and hence object range, to be determined from four measurements using an arctangent function. In practice these waveforms are often not perfectly sinusoidal, and in some cases square waveforms are instead used to simplify the electronic drive requirements. The waveforms therefore commonly contain odd harmonics which contribute a nonlinear error to the phase determination, and therefore an error in the range measurement. We have developed a unique sampling method to cancel the effect of these harmonics, with the results showing an order of magnitude improvement in the measurement linearity without the need for calibration or lookup tables, while the acquisition time remains unchanged. The technique can be applied to existing range imaging systems without having to change or modify the complex illumination or sensor systems, instead only requiring a change to the signal generation and timing electronics.
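The standard four-sample phase computation this paper builds on can be sketched as follows: with ideal sinusoidal modulation, four samples taken 90 degrees apart yield the phase, and hence range, through an arctangent. The modulation frequency is an assumed example value; the paper's harmonic-cancelling sampling scheme, which suppresses the odd-harmonic error of this naive version, is not reproduced here.

```python
import numpy as np

c = 299792458.0     # speed of light (m/s)
f_mod = 30e6        # modulation frequency (Hz), illustrative

def range_from_samples(i0, i1, i2, i3):
    # Phase of the modulation envelope from four 90-degree-spaced samples.
    phase = np.mod(np.arctan2(i1 - i3, i0 - i2), 2 * np.pi)
    return c * phase / (4 * np.pi * f_mod)   # target distance (m)

# Four samples of a sinusoid whose phase shift corresponds to a 1.5 m target:
true_phase = 4 * np.pi * f_mod * 1.5 / c
i0, i1, i2, i3 = (np.cos(true_phase - k * np.pi / 2) for k in range(4))
print(range_from_samples(i0, i1, i2, i3))    # approximately 1.5
```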
NASA Astrophysics Data System (ADS)
Liu, Chengwei; Sui, Xiubao; Gu, Guohua; Chen, Qian
2018-02-01
For the uncooled long-wave infrared (LWIR) camera, the infrared (IR) irradiation the focal plane array (FPA) receives is a crucial factor that affects the image quality. Ambient temperature fluctuation as well as system power consumption can result in changes of FPA temperature and radiation characteristics inside the IR camera; these will further degrade the imaging performance. In this paper, we present a novel shutterless non-uniformity correction method to compensate for non-uniformity derived from the variation of ambient temperature. Our method combines a calibration-based method and the properties of a scene-based method to obtain correction parameters at different ambient temperature conditions, so that the IR camera performance can be less influenced by ambient temperature fluctuation or system power consumption. The calibration process is carried out in a temperature chamber with slowly changing ambient temperature and a black body as a uniform radiation source. Enough uniform images are captured and the gain coefficients are calculated during this period. Then, in practical application, the offset parameters are calculated via the least squares method based on the gain coefficients, the captured uniform images, and the actual scene. Thus we can get a corrected output through the gain coefficients and offset parameters. The performance of our proposed method is evaluated on realistic IR images and compared with two existing methods. The images used in the experiments were obtained by a 384×288-pixel uncooled LWIR camera. Results show that our proposed method can adaptively update correction parameters as the actual target scene changes and is more stable under temperature fluctuation than the other two methods.
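A rough sketch of the correction model described above (corrected = gain × raw + offset): gains come from the black-body calibration sweep, and offsets are re-estimated so the corrected frames match a scene estimate. Using a low-pass filtered frame as the scene estimate is our assumption for illustration, not the paper's exact estimator; all data are synthetic.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
H, W, T = 288, 384, 8
gain = 1.0 + 0.05 * rng.standard_normal((H, W))    # from calibration sweep
fpn = 20.0 * rng.standard_normal((H, W))           # offset drift to remove

frames = []
for k in range(T):
    scene = np.tile(np.linspace(1000.0, 1200.0 + 5 * k, W), (H, 1))
    frames.append((scene - fpn) / gain + rng.standard_normal((H, W)))

# Least-squares offset per pixel: the average residual between a low-pass
# scene estimate and the gain-corrected raw frames.
resid = [gaussian_filter(gain * f, sigma=15) - gain * f for f in frames]
offset = np.mean(resid, axis=0)

corrected = gain * frames[0] + offset
print(np.std(corrected - gaussian_filter(corrected, sigma=15)))  # residual FPN
```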
Cloud Top Scanning radiometer (CTS): User's guide
NASA Technical Reports Server (NTRS)
Brown, K. S.
1981-01-01
The CTS maps the Earth's surface with a resolution of 0.1 km from an altitude of 18 km, with 60 km side-to-side coverage of the field. It has three spectral channels. The 0.625-micrometer-centered visual channel detects reflectance to within 1 percent. The 6.75-micrometer-centered water vapor channel detects changes in temperature of less than one degree Kelvin at 175 K. The 11.5-micrometer-centered infrared window channel detects changes of less than one-half degree Kelvin at 175 K. The data can be converted graphically into three display images of the scene. Values for scene temperature and albedo are calculated from calibration equations. The equations were derived from in-situ and laboratory measurements. Intercomparisons of the flight data temperatures with ground-based and other remote sensor results established the certainty of the derived temperature values to within 3 K over a wide temperature range (180 to 320 K). The system's performance, calibration, and operation were successful, and the engineering information describing this system should prove useful to scientists and potential users of the data.
Steiger, Tineke K; Bunzeck, Nico
2017-01-01
Motivation can have invigorating effects on behavior via dopaminergic neuromodulation. While this relationship has mainly been established in theoretical models and studies in younger subjects, the impact of structural decline of the dopaminergic system during healthy aging remains unclear. To investigate this issue, we used electroencephalography (EEG) in healthy young and elderly humans in a reward-learning paradigm. Specifically, scene images were initially encoded by combining them with cues predicting monetary reward (high vs. low reward). Subsequently, recognition memory for the scenes was tested. As a main finding, we show that response times (RTs) during encoding were faster for high-reward-predicting images in the young but not the elderly participants. This pattern was mirrored in power changes in the theta band (4-7 Hz). Importantly, analyses of structural MRI data revealed that individual reward-related differences in the elderly participants' response times could be predicted by the structural integrity of the dopaminergic substantia nigra (SN), as measured by magnetization transfer (MT). These findings suggest a close relationship between reward-based invigoration, theta oscillations, and age-dependent changes of the dopaminergic system.
2012-05-21
Even in a peaceful looking scene such as this one of Saturn and its moon Tethys, NASA's Cassini spacecraft reveals clues about how Saturn is ever-changing; scars are seen here of the huge storm that raged through much of 2011.
Brown, Daniel K; Barton, Jo L; Gladwell, Valerie F
2013-06-04
A randomized crossover study explored whether viewing different scenes prior to a stressor altered autonomic function during the recovery from the stressor. The two scenes were (a) nature (composed of trees, grass, fields) or (b) built (composed of man-made, urban scenes lacking natural characteristics) environments. Autonomic function was assessed using noninvasive techniques of heart rate variability; in particular, time domain analyses evaluated parasympathetic activity, using root-mean-square of successive differences (RMSSD). During stress, secondary cardiovascular markers (heart rate, systolic and diastolic blood pressure) showed significant increases from baseline which did not differ between the two viewing conditions. Parasympathetic activity, however, was significantly higher in recovery following the stressor in the viewing scenes of nature condition compared to viewing scenes depicting built environments (RMSSD; 50.0 ± 31.3 vs 34.8 ± 14.8 ms). Thus, viewing nature scenes prior to a stressor alters autonomic activity in the recovery period. The secondary aim was to examine autonomic function during viewing of the two scenes. Standard deviation of R-R intervals (SDRR), as change from baseline, during the first 5 min of viewing nature scenes was greater than during built scenes. Overall, this suggests that nature can elicit improvements in the recovery process following a stressor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dreifuerst, G R; Chew, D B; Mangonon, H L
The degradation and failure of cast-coil epoxy windings within 13.8-kV control power transformers and metering potential transformers has been shown to be dangerous to both equipment and personnel, even though best industrial design practices were followed. Accident scenes will be examined for two events at a U.S. Department of Energy laboratory. Failure modes will be explained and current design practices discussed, with changes suggested to prevent a recurrence and to minimize future risk. New maintenance philosophies utilizing partial discharge testing of the transformers as a predictor of end-of-life will be examined.
Visual wetness perception based on image color statistics.
Sawayama, Masataka; Adelson, Edward H; Nishida, Shin'ya
2017-05-01
Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for the images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with the ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.
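A sketch in the spirit of the "wetness enhancing transformation" described above: boost chromatic saturation and darken the luminance tone curve. The exact operator from the paper is not reproduced; the saturation gain and gamma below are assumptions, and the filename is hypothetical.

```python
import numpy as np
import cv2

def wetness_enhance(bgr, sat_gain=1.6, gamma=1.8):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * sat_gain, 0, 255)   # more saturated
    hsv[..., 2] = 255.0 * (hsv[..., 2] / 255.0) ** gamma    # darker midtones
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

dry = cv2.imread('dry_scene.png')   # hypothetical input image
if dry is not None:
    cv2.imwrite('wet_scene.png', wetness_enhance(dry))
```

Per the paper's finding, such a transformation should be more convincing on images with large hue entropy (many distinct colors) than on nearly monochromatic scenes.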
The elephant in the room: Inconsistency in scene viewing and representation.
Spotorno, Sara; Tatler, Benjamin W
2017-10-01
We examined the extent to which semantic informativeness, consistency with expectations and perceptual salience contribute to object prioritization in scene viewing and representation. In scene viewing (Experiments 1-2), semantic guidance overshadowed perceptual guidance in determining fixation order, with the greatest prioritization for objects that were diagnostic of the scene's depicted event. Perceptual properties affected selection of consistent objects (regardless of their informativeness) but not of inconsistent objects. Semantic and perceptual properties also interacted in influencing foveal inspection, as inconsistent objects were fixated longer than low but not high salience diagnostic objects. While not studied in direct competition with each other (each studied in competition with diagnostic objects), we found that inconsistent objects were fixated earlier and for longer than consistent but marginally informative objects. In change detection (Experiment 3), perceptual guidance overshadowed semantic guidance, promoting detection of highly salient changes. A residual advantage for diagnosticity over inconsistency emerged only when selection prioritization could not be based on low-level features. Overall these findings show that semantic inconsistency is not prioritized within a scene when competing with other relevant information that is essential to scene understanding and respects observers' expectations. Moreover, they reveal that the relative dominance of semantic or perceptual properties during selection depends on ongoing task requirements. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Li, Ya-Pin; Gao, Hong-Wei; Fan, Hao-Jun; Wei, Wei; Xu, Bo; Dong, Wen-Long; Li, Qing-Feng; Song, Wen-Jing; Hou, Shi-Ke
2017-12-01
The objective of this study was to build a database to collect infectious disease information at the scene of a disaster through the use of 128 epidemiological questionnaires and 47 types of options, with rapid acquisition of information regarding infectious disease and rapid questionnaire customization at the scene of disaster relief by use of a personal digital assistant (PDA). SQL Server 2005 (Microsoft Corp, Redmond, WA) was used to create the option database for the infectious disease investigation, to develop a client application for the PDA, and to deploy the application on the server side. The users accessed the server for data collection and questionnaire customization with the PDA. A database with a set of comprehensive options was created and an application system was developed for the Android operating system (Google Inc, Mountain View, CA). On this basis, an infectious disease information collection system was built for use at the scene of disaster relief. The creation of an infectious disease information collection system and rapid questionnaire customization through the use of a PDA was achieved. This system integrated computer technology and mobile communication technology to develop an infectious disease information collection system and to allow for rapid questionnaire customization at the scene of disaster relief. (Disaster Med Public Health Preparedness. 2017;11:668-673).
The use of an image registration technique in the urban growth monitoring
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Foresti, C.; Deoliveira, M. D. L. N.; Niero, M.; Parreira, E. M. D. M. F.
1984-01-01
The use of an image registration program in studies of urban growth is described. This program permits quick identification of growing areas through the overlay of the same scene from different periods, together with the use of adequate filters. The city of Brasilia, Brazil, was selected as the test area. The dynamics of Brasilia's urban growth were analyzed by overlaying scenes dated June 1973, 1978, and 1983. The results demonstrated the utility of the image registration technique for monitoring dynamic urban growth.
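A modern software analogue of this registration-and-overlay workflow might look like the sketch below, assuming two grayscale scenes of the same city from different dates (filenames are hypothetical). OpenCV's ECC alignment stands in for the original registration program; simple differencing then highlights candidate growth areas.

```python
import numpy as np
import cv2

t1 = cv2.imread('scene_1973.png', cv2.IMREAD_GRAYSCALE)
t2 = cv2.imread('scene_1983.png', cv2.IMREAD_GRAYSCALE)
assert t1 is not None and t2 is not None, 'sample scenes not found'

# Estimate an affine warp aligning the later scene to the earlier one.
warp = np.eye(2, 3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
_, warp = cv2.findTransformECC(t1, t2, warp, cv2.MOTION_AFFINE,
                               criteria, None, 5)   # OpenCV >= 4.1 signature
t2_reg = cv2.warpAffine(t2, warp, (t1.shape[1], t1.shape[0]),
                        flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

# Growth shows up where the registered later scene differs from the earlier one.
diff = cv2.absdiff(t1, t2_reg)
_, change_mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
cv2.imwrite('urban_change.png', change_mask)
```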
Increasing Student Engagement and Enthusiasm: A Projectile Motion Crime Scene
NASA Astrophysics Data System (ADS)
Bonner, David
2010-05-01
Connecting physics concepts with real-world events allows students to establish a strong conceptual foundation. When such events are particularly interesting to students, it can greatly impact their engagement and enthusiasm in an activity. Activities that involve studying real-world events of high interest can provide students a long-lasting understanding and positive memorable experiences, both of which heighten the learning experiences of those students. One such activity, described in depth in this paper, utilizes a murder mystery and crime scene investigation as an application of basic projectile motion.
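A worked example of the kind of crime-scene kinematics such an activity can use, with invented numbers: a "victim" is found a horizontal distance x from the base of a balcony of height h. Did the victim simply fall, or leave the edge with significant horizontal speed?

```python
import math

g = 9.81    # m/s^2
h = 12.0    # balcony height (m), assumed
x = 7.5     # horizontal distance from the wall (m), assumed

t = math.sqrt(2 * h / g)    # fall time with zero initial vertical velocity
v0 = x / t                  # horizontal launch speed required to land at x
print(f"fall time {t:.2f} s, required launch speed {v0:.2f} m/s")
# Typical walking speed is ~1.5 m/s; a v0 several times larger points to a
# running start or a push -- exactly the inference students are asked to defend.
```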
Framework of passive millimeter-wave scene simulation based on material classification
NASA Astrophysics Data System (ADS)
Park, Hyuk; Kim, Sung-Hyun; Lee, Ho-Jin; Kim, Yong-Hoon; Ki, Jae-Sug; Yoon, In-Bok; Lee, Jung-Min; Park, Soon-Jun
2006-05-01
Over the past few decades, passive millimeter-wave (PMMW) sensors have emerged as useful implements in transportation and military applications such as autonomous flight-landing systems, smart weapons, and night- and all-weather vision systems. As an efficient way to predict the performance of a PMMW sensor and apply it to a system, testing in a SoftWare-In-the-Loop (SWIL) simulator is required. PMMW scene simulation is a key component in implementing this simulator. However, no commercial off-the-shelf solution is available for constructing the PMMW scene simulation, and only a few studies have addressed this technology. We have studied the PMMW scene simulation method to develop the PMMW sensor SWIL simulator. This paper describes the framework of the PMMW scene simulation and tentative results. The purpose of the PMMW scene simulation is to generate sensor outputs (or images) from a visible image and environmental conditions. We organize it into four parts: material classification mapping, PMMW environmental setting, PMMW scene forming, and millimeter-wave (MMW) sensor modeling. The background and the objects in the scene are classified based on properties related to MMW radiation and reflectivity. The environmental setting part calculates the following PMMW phenomenology: atmospheric propagation and emission, including sky temperature, weather conditions, and physical temperature. Then, PMMW raw images are formed using the surface geometry. Finally, PMMW sensor outputs are generated from the PMMW raw images by applying sensor characteristics such as aperture size and noise level. Through the simulation process, PMMW phenomenology and sensor characteristics are simulated on the output scene. We have finished the design of the framework of the simulator and are working on the detailed implementation. As a tentative result, a flight observation was simulated under specific conditions. After completing the implementation, we plan to increase the reliability of the simulation by collecting data using actual PMMW sensors. With a reliable PMMW scene simulator, it will be more efficient to apply PMMW sensors to various applications.
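A minimal radiometric sketch of the "PMMW scene forming" step, assuming the usual brightness-temperature model for an opaque surface: emission plus reflected sky radiation. The emissivities and temperatures below are illustrative assumptions, not values from the paper.

```python
def brightness_temp(emissivity, t_phys_k, t_sky_k):
    """Apparent PMMW brightness temperature of an opaque surface (kelvin)."""
    return emissivity * t_phys_k + (1.0 - emissivity) * t_sky_k

t_sky = 60.0   # cold clear-sky brightness temperature (assumed)
for name, e, t_phys in [('asphalt', 0.90, 295.0),
                        ('grass',   0.95, 293.0),
                        ('metal',   0.10, 295.0)]:
    print(name, brightness_temp(e, t_phys, t_sky))
# Metal reflects the cold sky and appears far "colder" than its surroundings,
# which is the contrast mechanism PMMW imagers exploit.
```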
Hayes, Scott M.; Baena, Elsa; Truong, Trong-Kha; Cabeza, Roberto
2011-01-01
Although people do not normally try to remember associations between faces and physical contexts, these associations are established automatically, as indicated by the difficulty of recognizing familiar faces in different contexts (“butcher-on-the-bus” phenomenon). The present functional MRI (fMRI) study investigated the automatic binding of faces and scenes. In the Face-Face (F-F) condition, faces were presented alone during both encoding and retrieval, whereas in the Face/Scene-Face (FS-F) condition, they were presented overlaid on scenes during encoding but alone during retrieval (context change). Although participants were instructed to focus only on the faces during both encoding and retrieval, recognition performance was worse in the FS-F than the F-F condition (“context shift decrement”—CSD), confirming automatic face-scene binding during encoding. This binding was mediated by the hippocampus as indicated by greater subsequent memory effects (remembered > forgotten) in this region for the FS-F than the F-F condition. Scene memory was mediated by the right parahippocampal cortex, which was reactivated during successful retrieval when the faces were associated with a scene during encoding (FS-F condition). Analyses using the CSD as a regressor yielded a clear hemispheric asymmetry in medial temporal lobe activity during encoding: left hippocampal and parahippocampal activity was associated with a smaller CSD, indicating more flexible memory representations immune to context changes, whereas right hippocampal/rhinal activity was associated with a larger CSD, indicating less flexible representations sensitive to context change. Taken together, the results clarify the neural mechanisms of context effects on face recognition. PMID:19925208
The effect of background and illumination on color identification of real, 3D objects.
Allred, Sarah R; Olkkonen, Maria
2013-01-01
For the surface reflectance of an object to be a useful cue to object identity, judgments of its color should remain stable across changes in the object's environment. In 2D scenes, there is general consensus that color judgments are much more stable across illumination changes than background changes. Here we investigate whether these findings generalize to real 3D objects. Observers made color matches to cubes as we independently varied both the illumination impinging on the cube and the 3D background of the cube. As in 2D scenes, we found relatively high but imperfect stability of color judgments under an illuminant shift. In contrast to 2D scenes, we found that background had little effect on average color judgments. In addition, variability of color judgments was increased by an illuminant shift and decreased by embedding the cube within a background. Taken together, these results suggest that in real 3D scenes with ample cues to object segregation, the addition of a background may improve stability of color identification.
Bradley, Margaret M.; Lang, Peter J.
2013-01-01
During rapid serial visual presentation (RSVP), the perceptual system is confronted with a rapidly changing array of sensory information demanding resolution. At rapid rates of presentation, previous studies have found an early (e.g., 150–280 ms) negativity over occipital sensors that is enhanced when emotional, as compared with neutral, pictures are viewed, suggesting facilitated perception. In the present study, we explored how picture composition and the presence of people in the image affect perceptual processing of pictures of natural scenes. Using RSVP, pictures that differed in perceptual composition (figure–ground or scenes), content (presence of people or not), and emotional content (emotionally arousing or neutral) were presented in a continuous stream for 330 ms each with no intertrial interval. In both subject and picture analyses, all three variables affected the amplitude of occipital negativity, with the greatest enhancement for figure–ground compositions (as compared with scenes), irrespective of content and emotional arousal, supporting an interpretation that ease of perceptual processing is associated with enhanced occipital negativity. Viewing emotional pictures prompted enhanced negativity only for pictures that depicted people, suggesting that specific features of emotionally arousing images are associated with facilitated perceptual processing, rather than all emotional content. PMID:23780520
Statistics of high-level scene context
Greene, Michelle R.
2013-01-01
Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed "things" in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics rather than intuition. PMID:24194723
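A toy version of the "bag of words" level of description: each scene is represented by the list of object labels it contains, and a linear classifier predicts the scene category. The four labeled scenes below are invented stand-ins for the LabelMe annotations analyzed in the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

scenes = ['stove sink cabinet window', 'bed lamp window curtain',
          'stove refrigerator cabinet sink', 'bed pillow curtain lamp']
labels = ['kitchen', 'bedroom', 'kitchen', 'bedroom']

vec = CountVectorizer()
X = vec.fit_transform(scenes)             # object-count feature vectors
clf = LogisticRegression().fit(X, labels)

print(clf.predict(vec.transform(['sink stove window'])))   # -> ['kitchen']
```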
A FPGA-based architecture for real-time image matching
NASA Astrophysics Data System (ADS)
Wang, Jianhui; Zhong, Sheng; Xu, Wenhui; Zhang, Weijun; Cao, Zhiguo
2013-10-01
Image matching is a fundamental task in computer vision. It is used to establish correspondence between two images taken at different viewpoints or different times of the same scene. However, its large computational complexity has been a challenge for most embedded systems. This paper proposes a single-FPGA image matching system, which consists of SIFT feature detection, BRIEF descriptor extraction, and BRIEF matching. It optimizes the FPGA architecture for SIFT feature detection to reduce FPGA resource utilization. Moreover, we also implement BRIEF description and matching on the FPGA. The proposed system can perform image matching at 30 fps (frames per second) for 1280×720 images. Its processing speed can meet the demand of most real-life computer vision applications.
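A software analogue of the same pipeline (SIFT detection, BRIEF description, Hamming-distance matching) can be sketched with OpenCV, assuming a build that includes the contrib modules; the FPGA-specific optimizations are naturally not represented, and the filenames are hypothetical.

```python
import cv2

img1 = cv2.imread('view1.png', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('view2.png', cv2.IMREAD_GRAYSCALE)
assert img1 is not None and img2 is not None, 'sample images not found'

sift = cv2.SIFT_create()                                    # feature detection
brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()   # binary descriptors

kp1, des1 = brief.compute(img1, sift.detect(img1, None))
kp2, des2 = brief.compute(img2, sift.detect(img2, None))

# BRIEF descriptors are binary strings, so matching uses Hamming distance.
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
print(len(matches), 'putative correspondences')
```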
Vos, Leia; Whitman, Douglas
2014-01-01
A considerable literature suggests that the right hemisphere is dominant in vigilance for novel and survival-related stimuli, such as predators, across a wide range of species. In contrast to vigilance for change, change blindness is a failure to detect obvious changes in a visual scene when they are obscured by a disruption in scene presentation. We studied lateralised change detection using a series of scenes with salient changes in either the left or right visual fields. In Study 1 left visual field changes were detected more rapidly than right visual field changes, confirming a right hemisphere advantage for change detection. Increasing stimulus difficulty resulted in greater right visual field detections and left hemisphere detection was more likely when change occurred in the right visual field on a prior trial. In Study 2 an intervening distractor task disrupted the influence of prior trials. Again, faster detection speeds were observed for the left visual field changes with a shift to a right visual field advantage with increasing time-to-detection. This suggests that a right hemisphere role for vigilance, or catching attention, and a left hemisphere role for target evaluation, or maintaining attention, is present at the earliest stage of change detection.
Detection of multiple airborne targets from multisensor data
NASA Astrophysics Data System (ADS)
Foltz, Mark A.; Srivastava, Anuj; Miller, Michael I.; Grenander, Ulf
1995-08-01
Previously we presented a jump-diffusion based random sampling algorithm for generating conditional mean estimates of scene representations for the tracking and recognition of maneuvering airborne targets. These representations include target positions and orientations along their trajectories and the target type associated with each trajectory. Taking a Bayesian approach, a posterior measure is defined on the parameter space by combining sensor models with a sophisticated prior based on nonlinear airplane dynamics. The jump-diffusion algorithm constructs a Markov process which visits the elements of the parameter space with frequencies proportional to the posterior probability. It comprises both the infinitesimal, local search via a sample-path-continuous diffusion transform and larger, global steps through discrete jump moves. The jump moves involve the addition and deletion of elements from the scene configuration or changes in the target type associated with each target trajectory. One such move results in target detection by the addition of a track seed to the inference set. This provides initial track data for the tracking/recognition algorithm to estimate linear graph structures representing tracks using the other jump moves and the diffusion process, as described in our earlier work. Target detection ideally involves a continuous search over a continuum of the observation space. In this work we conclude that for practical implementations the search space must be discretized with lattice granularity comparable to the sensor resolution, and discuss how fast Fourier transforms are utilized for efficient calculation of sufficient statistics given our array models. Some results are also presented from our implementation on a networked system including a massively parallel machine architecture and a Silicon Graphics Onyx workstation.
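A sketch of the FFT trick mentioned above: correlating a target template against sensor data at every lattice offset at once, which is how sufficient statistics can be computed efficiently on a discretized search space. The 1-D synthetic signal below stands in for real sensor-array data.

```python
import numpy as np

rng = np.random.default_rng(2)
template = np.hanning(32)                   # assumed target signature
data = rng.standard_normal(4096)
data[1000:1032] += 3.0 * template           # hidden target at offset 1000

# Cross-correlation via the FFT: corr[k] = sum_m data[m + k] * template[m],
# evaluated for all offsets k simultaneously.
n = len(data)
corr = np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(template, n)), n)
print(np.argmax(corr))                      # detection statistic peaks near 1000
```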
ASTER cloud coverage reassessment using MODIS cloud mask products
NASA Astrophysics Data System (ADS)
Tonooka, Hideyuki; Omagari, Kunjuro; Yamamoto, Hirokazu; Tachikawa, Tetsushi; Fujita, Masaru; Paitaer, Zaoreguli
2010-10-01
In the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Project, two kinds of algorithms are used for cloud assessment in Level-1 processing. The first algorithm, based on the LANDSAT-5 TM Automatic Cloud Cover Assessment (ACCA) algorithm, is used for the subset of daytime scenes observed with only the VNIR bands and for all nighttime scenes, and the second algorithm, based on the LANDSAT-7 ETM+ ACCA algorithm, is used for most daytime scenes observed with all spectral bands. However, the first algorithm does not work well because it lacks some spectral bands sensitive to cloud detection, and the two algorithms have been less accurate over snow/ice-covered areas since April 2008, when the SWIR subsystem developed problems. In addition, they perform less well for some combinations of surface type and sun elevation angle. We have therefore developed the ASTER cloud coverage reassessment system using MODIS cloud mask (MOD35) products, and have reassessed cloud coverage for all ASTER archived scenes (>1.7 million scenes). All of the new cloud coverage data are included in the Image Management System (IMS) databases of the ASTER Ground Data System (GDS) and NASA's Land Processes Distributed Active Archive Center (LP DAAC) and are used for ASTER product searches by users, and cloud mask images are distributed to users through the Internet. Daily upcoming scenes (about 400 scenes per day) are reassessed and inserted into the IMS databases 5 to 7 days after each scene observation date. Some validation studies for the new cloud coverage data and some mission-related analyses using those data are also presented in this paper.
Video content parsing based on combined audio and visual information
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1999-08-01
While previous research on audiovisual data segmentation and indexing primarily focuses on the pictorial part, significant clues contained in the accompanying audio flow are often ignored. A fully functional system for video content parsing can be achieved more successfully through a proper combination of audio and visual information. By investigating the data structure of different video types, we present tools for both audio and visual content analysis and a scheme for video segmentation and annotation in this research. In the proposed system, video data are segmented into audio scenes and visual shots by detecting abrupt changes in audio and visual features, respectively. Then, the audio scene is categorized and indexed as one of the basic audio types while a visual shot is presented by keyframes and associate image features. An index table is then generated automatically for each video clip based on the integration of outputs from audio and visual analysis. It is shown that the proposed system provides satisfying video indexing results.
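A minimal sketch of the visual half of such a scheme: shot boundaries are declared where consecutive frames show an abrupt change in color histogram; the audio-scene segmentation side follows the same change-detection pattern on audio features. The filename and the 0.4 threshold are assumptions.

```python
import cv2

cap = cv2.VideoCapture('clip.mp4')
prev_hist, frame_idx, cuts = None, 0, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()
    if prev_hist is not None and \
       cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > 0.4:
        cuts.append(frame_idx)           # abrupt visual change => shot boundary
    prev_hist, frame_idx = hist, frame_idx + 1
cap.release()
print(cuts)   # candidate shot boundaries for keyframe selection and indexing
```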
Automated content and quality assessment of full-motion-video for the generation of meta data
NASA Astrophysics Data System (ADS)
Harguess, Josh
2015-05-01
Virtually all of the video data (and full-motion-video (FMV)) that is currently collected and stored in support of missions has been corrupted to various extents by image acquisition and compression artifacts. Additionally, video collected by wide-area motion imagery (WAMI) surveillance systems and unmanned aerial vehicles (UAVs) and similar sources is often of low quality or in other ways corrupted so that it is not worth storing or analyzing. In order to make progress in the problem of automatic video analysis, the first problem that should be solved is deciding whether the content of the video is even worth analyzing to begin with. We present a work in progress to address three types of scenes which are typically found in real-world data stored in support of Department of Defense (DoD) missions: no or very little motion in the scene, large occlusions in the scene, and fast camera motion. Each of these produce video that is generally not usable to an analyst or automated algorithm for mission support and therefore should be removed or flagged to the user as such. We utilize recent computer vision advances in motion detection and optical flow to automatically assess FMV for the identification and generation of meta-data (or tagging) of video segments which exhibit unwanted scenarios as described above. Results are shown on representative real-world video data.
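A sketch of the flow-based tagging idea, assuming OpenCV's Farneback optical flow as the motion estimator: mean flow magnitude per frame pair drives simple thresholds that flag "static" and "fast camera motion" segments as low-value. The filename and thresholds are assumptions; occlusion handling is omitted.

```python
import numpy as np
import cv2

cap = cv2.VideoCapture('fmv_clip.mp4')
ok, prev = cap.read()
assert ok, 'no video found'
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

tags = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2).mean()   # mean motion magnitude (pixels)
    tags.append('static' if mag < 0.1 else
                'fast_motion' if mag > 8.0 else 'usable')
    prev_gray = gray
cap.release()
print(tags[:20])   # per-frame meta-data for flagging low-value segments
```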
Guidance of visual attention by semantic information in real-world scenes
Wu, Chia-Chien; Wick, Farahnaz Ahmed; Pomplun, Marc
2014-01-01
Recent research on attentional guidance in real-world scenes has focused on object recognition within the context of a scene. This approach has been valuable for determining some factors that drive the allocation of visual attention and determine visual selection. This article provides a review of experimental work on how different components of context, especially semantic information, affect attentional deployment. We review work from the areas of object recognition, scene perception, and visual search, highlighting recent studies examining semantic structure in real-world scenes. A better understanding on how humans parse scene representations will not only improve current models of visual attention but also advance next-generation computer vision systems and human-computer interfaces. PMID:24567724
NASA Astrophysics Data System (ADS)
Wilson, Meredith
Geologic field trips are among the most beneficial learning experiences for students as they engage the topic of geology, but they are also difficult environments in which to maximize learning. This action research study explored one facet of the problems associated with teaching geology in the field by attempting to improve the transition of undergraduate students from a traditional laboratory setting to an authentic field environment. The transition was studied utilizing an artificial outcrop, called the GeoScene, during an introductory college-level non-majors geology course. The GeoScene was utilized in this study as an intermediary between laboratory and authentic field based experiences, allowing students to apply traditional laboratory learning in an outdoor environment. The GeoScene represented a faux field environment: outside, more complex and tangible than a laboratory, but also simplified geologically and located safely within the confines of an educational setting. This exploratory study employed a mixed-methods action research design. The action research design allowed for systematic inquiry by the teacher/researcher into how the students learned. The mixed-methods approach garnered several types of qualitative and quantitative data to explore phenomena and support conclusions. Several types of data were collected and analyzed, including visual recordings of the intervention, interviews, analytic memos, student reflections, field practical exams, and a pre/post knowledge and skills survey, to determine whether the intervention affected student comprehension and interpretation of geologic phenomena in an authentic field environment, and if so, how. Students enrolled in two different sections of the same laboratory course, sharing a common lecture, participated in laboratory exercises implementing experiential learning and constructivist pedagogies that focused on learning the basic geological skills necessary for work in a field environment. These laboratory activities were followed by an approximately 15-minute intervention at the GeoScene for a treatment group of students (n=13) to attempt to mitigate potential barriers, such as self-efficacy, novelty space, and spatial skills, which hinder student performance in an authentic field environment. Comparisons were made to a control group (n=12), who did not participate in GeoScene activities but completed additional exercises and applications in the laboratory setting. Qualitative data sources suggested that the GeoScene treatment was a positive addition to the laboratory studies and improved the student transition to the field environment by: (1) reducing anxiety and decreasing the heightened stimulus associated with the novelty of the authentic field environment, (2) allowing a physical transition between the laboratory and field that shifted concepts learned in the lab to the field environment, and (3) improving critical analysis of geologic phenomena. This was corroborated by the quantitative data, which suggested the treatment group may have outperformed the control group in geology content related skills taught in the laboratory, and supported by the GeoScene, while in an authentic field environment (p≤0.01, delta=0.507).
Robust drone detection for day/night counter-UAV with static VIS and SWIR cameras
NASA Astrophysics Data System (ADS)
Müller, Thomas
2017-05-01
Recent progress in the development of unmanned aerial vehicles (UAVs) has led to more and more situations in which drones like quadrocopters or octocopters pose a potentially serious threat or could be used as a powerful tool for illegal activities. Therefore, counter-UAV systems are required in many applications to detect approaching drones as early as possible. In this paper, an efficient and robust algorithm is presented for UAV detection using static VIS and SWIR cameras. Whereas high-resolution VIS cameras enable UAVs to be detected at greater distances in the daytime, surveillance at night can be performed with a SWIR camera. First, a background estimation and structural adaptive change detection process detects movements and other changes in the observed scene. Afterwards, the local density of changes is computed and used both for background density learning and to build up the foreground model; the two models are compared to produce the final UAV alarm result. The density model serves to filter out noise effects on the one hand. On the other hand, moving scene parts like leaves in the wind or cars driving on a street can easily be learned in order to mask such areas out and suppress false alarms there. This scene learning is done automatically, simply by processing footage without UAVs in order to capture the normal situation. The given results document the performance of the presented approach in VIS and SWIR in different situations.
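The change-density stage of such a pipeline is easy to prototype in software. Below is a minimal numpy sketch; the function names (box_density, uav_candidates) and the thresholds are hypothetical, chosen for illustration only, since the abstract does not specify the exact estimator or learning rule.

```python
import numpy as np

def box_density(mask, win=15):
    """Local fraction of changed pixels in a win x win window,
    computed with an integral image for speed."""
    pad = win // 2
    m = np.pad(mask.astype(np.float64), pad, mode="edge")
    s = np.zeros((m.shape[0] + 1, m.shape[1] + 1))
    s[1:, 1:] = m.cumsum(axis=0).cumsum(axis=1)
    return (s[win:, win:] - s[:-win, win:]
            - s[win:, :-win] + s[:-win, :-win]) / win ** 2

def uav_candidates(frame, background, learned_density, thresh=25.0, margin=0.2):
    """Flag pixels whose current change density clearly exceeds the
    density learned from UAV-free footage; sensor noise and habitually
    moving areas (leaves, traffic) drop out of the comparison."""
    changed = np.abs(frame.astype(float) - background) > thresh
    return box_density(changed) > learned_density + margin
```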
Interleaved Observation Execution and Rescheduling on Earth Observing Systems
NASA Technical Reports Server (NTRS)
Khatib, Lina; Frank, Jeremy; Smith, David; Morris, Robert; Dungan, Jennifer
2003-01-01
Observation scheduling for Earth orbiting satellites solves the following problem: given a set of requests for images of the Earth, a set of instruments for acquiring those images distributed on a collection of orbiting satellites, and a set of temporal and resource constraints, generate a set of assignments of instruments and viewing times to those requests that satisfy those constraints. Observation scheduling is often construed as a constrained optimization problem with the objective of maximizing the overall utility of the science data acquired. The utility of an image is typically based on the intrinsic importance of acquiring it (for example, its importance in meeting a mission or science campaign objective) as well as the expected value of the data given current viewing conditions (for example, if the image is occluded by clouds, its value is usually diminished). Currently, science observation scheduling for Earth Observing Systems is done on the ground, for periods covering a day or more. Schedules are uplinked to the satellites and are executed rigorously. An alternative to this scenario is to do some of the decision-making about what images are to be acquired on-board. The principal argument for this capability is that the desirability of making an observation can change dynamically, because of changes in meteorological conditions (e.g. cloud cover), unforeseen events such as fires, floods, or volcanic eruptions, or unexpected changes in satellite or ground station capability. Furthermore, since satellites can only communicate with the ground between 5% and 10% of the time, it may be infeasible to make the desired changes to the schedule on the ground, and uplink the revisions in time for the on-board system to execute them. Examples of scenarios that motivate an on-board capability for revising schedules include the following. First, if a desired visual scene is completely obscured by clouds, then there is little point in taking it. In this case, satellite resources, such as power and storage space, can be better utilized by taking another image that is of higher quality. Second, if an unexpected but important event occurs (such as a fire, flood, or volcanic eruption), there may be good reason to take images of it, instead of expending satellite resources on some of the lower priority scheduled observations. Finally, if there is unexpected loss of capability, it may be impossible to carry out the schedule of planned observations. For example, if a ground station goes down temporarily, a satellite may not be able to free up enough storage space to continue with the remaining schedule of observations. This paper describes an approach for interleaving execution of observation schedules with dynamic schedule revision based on changes to the expected utility of the acquired images. We describe the problem in detail, formulate an algorithm for interleaving schedule revision and execution, and discuss refinements to the algorithm based on the need for search efficiency. We summarize with a brief discussion of the tests performed on the system.
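The interleaving idea can be made concrete with a small sketch. The Python fragment below is illustrative only, with hypothetical types and a toy utility function (intrinsic priority scaled by the expected cloud-free fraction); it is not the authors' algorithm, which also handles temporal and resource constraints and search efficiency.

```python
from dataclasses import dataclass

@dataclass
class Obs:
    name: str
    start: float      # start of the viewing window
    slot: int         # time/resource slot the observation occupies
    priority: float   # intrinsic science value
    clear: float      # current estimate of cloud-free fraction (0..1)

def utility(o: Obs) -> float:
    # expected value = intrinsic importance scaled by viewing conditions
    return o.priority * o.clear

def revise(schedule, candidates, now):
    """One greedy revision pass interleaved with execution: past
    observations are frozen; a future one is swapped out whenever a
    feasible candidate for the same slot now scores higher."""
    out = []
    for obs in schedule:
        if obs.start <= now:          # already executing or executed
            out.append(obs)
            continue
        rivals = [c for c in candidates if c.slot == obs.slot]
        best = max(rivals, key=utility, default=obs)
        out.append(best if utility(best) > utility(obs) else obs)
    return out
```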
Mickley Steinmetz, Katherine R; Sturkie, Charlee M; Rochester, Nina M; Liu, Xiaodong; Gutchess, Angela H
2018-07-01
After viewing a scene, individuals differ in what they prioritise and remember. Culture may be one factor that influences scene memory, as Westerners have been shown to be more item-focused than Easterners (see Masuda, T., & Nisbett, R. E. (2001). Attending holistically versus analytically: Comparing the context sensitivity of Japanese and Americans. Journal of Personality and Social Psychology, 81, 922-934). However, cultures may differ in their sensitivity to scene incongruences and emotion processing, which may account for cross-cultural differences in scene memory. The current study uses hierarchical linear modeling (HLM) to examine scene memory while controlling for scene congruency and the perceived emotional intensity of the images. American and East Asian participants encoded pictures that included a positive, negative, or neutral item placed on a neutral background. After a 20-min delay, participants were shown the item and background separately along with similar and new items and backgrounds to assess memory specificity. Results indicated that even when congruency and emotional intensity were controlled, there was evidence that Americans had better item memory than East Asians. Incongruent scenes were better remembered than congruent scenes. However, this effect did not differ by culture. This suggests that Americans' item focus may result in memory changes that are robust despite variations in scene congruency and perceived emotion.
The Identification and Modeling of Visual Cue Usage in Manual Control Task Experiments
NASA Technical Reports Server (NTRS)
Sweet, Barbara Townsend; Trejo, Leonard J. (Technical Monitor)
1999-01-01
Many fields of endeavor require humans to conduct manual control tasks while viewing a perspective scene. Manual control refers to tasks in which continuous, or nearly continuous, control adjustments are required. Examples include flying an aircraft, driving a car, and riding a bicycle. Perspective scenes can arise through natural viewing of the world, simulation of a scene (as in flight simulators), or through imaging devices (such as the cameras on an unmanned aerospace vehicle). Designers frequently have some degree of control over the content and characteristics of a perspective scene; airport designers can choose runway markings, vehicle designers can influence the size and shape of windows, as well as the location of the pilot, and simulator database designers can choose scene complexity and content. Little theoretical framework exists to help designers determine the answers to questions related to perspective scene content. An empirical approach is most commonly used to determine optimum perspective scene configurations. The goal of the research effort described in this dissertation has been to provide a tool for modeling the characteristics of human operators conducting manual control tasks with perspective-scene viewing. This is done for the purpose of providing an algorithmic, as opposed to empirical, method for analyzing the effects of changing perspective scene content for closed-loop manual control tasks.
A distributed automatic target recognition system using multiple low resolution sensors
NASA Astrophysics Data System (ADS)
Yue, Zhanfeng; Lakshmi Narasimha, Pramod; Topiwala, Pankaj
2008-04-01
In this paper, we propose a multi-agent system which uses swarming techniques to perform high accuracy Automatic Target Recognition (ATR) in a distributed manner. The proposed system can cooperatively share the information from low-resolution images of different looks and use this information to perform high accuracy ATR. An advanced, multiple-agent Unmanned Aerial Vehicle (UAV) systems-based approach is proposed which integrates processing capabilities, detection reporting combined with live video exchange, and swarm behavior modalities that dramatically surpass individual sensor system performance levels. We employ a real-time block-based motion analysis and compensation scheme for efficient estimation and correction of camera jitter, global motion of the camera/scene, and the effects of atmospheric turbulence. Our optimized Partition Weighted Sum (PWS) approach requires only bitshifts and additions, yet achieves a 16X pixel resolution enhancement, and is moreover parallelizable. We develop advanced, adaptive particle-filtering based algorithms to robustly track multiple mobile targets by adaptively changing the appearance model of the selected targets. The collaborative ATR system utilizes the homographies between the sensors induced by the ground plane to overlap the local observation with the received images from other UAVs. The motion of the UAVs distorts the estimated homography from frame to frame. A robust dynamic homography estimation algorithm is proposed to address this, using homography decomposition and ground plane surface estimation.
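The ground-plane overlap step can be sketched with standard tools. Below is a minimal OpenCV version, assuming matched point pairs on the ground plane are already available; the robust dynamic estimation via homography decomposition described above is beyond this fragment.

```python
import cv2
import numpy as np

def ground_plane_overlap(img_local, img_remote, pts_remote, pts_local):
    """Warp a remote UAV's view onto the local view via the homography
    induced by the (assumed planar) ground, then blend for inspection.
    Points are Nx2 float32 arrays of matched ground-plane features."""
    H, inliers = cv2.findHomography(pts_remote, pts_local, cv2.RANSAC, 3.0)
    h, w = img_local.shape[:2]
    warped = cv2.warpPerspective(img_remote, H, (w, h))
    return cv2.addWeighted(img_local, 0.5, warped, 0.5, 0.0)
```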
Reduced change blindness suggests enhanced attention to detail in individuals with autism.
Smith, Hayley; Milne, Elizabeth
2009-03-01
The phenomenon of change blindness illustrates that a limited number of items within the visual scene are attended to at any one time. It has been suggested that individuals with autism focus attention on less contextually relevant aspects of the visual scene, show superior perceptual discrimination, and notice details which are often ignored by typical observers. In this study we investigated change blindness in autism by asking participants to detect continuity errors deliberately introduced into a short film. Whether the continuity errors involved central/marginal or social/non-social aspects of the visual scene was varied. Thirty adolescents participated: 15 with autistic spectrum disorder (ASD) and 15 typically developing (TD) controls. The participants with ASD detected significantly more errors than the TD participants. Both groups identified more errors involving central rather than marginal aspects of the scene, although this effect was larger in the TD participants. There was no difference in the number of social or non-social errors detected by either group of participants. In line with previous data suggesting an abnormally broad attentional spotlight and enhanced perceptual function in individuals with ASD, the results of this study suggest enhanced awareness of the visual scene in ASD. The results of this study could reflect superior top-down control of visual search in autism, enhanced perceptual function, or inefficient filtering of visual information in ASD.
An optical systems analysis approach to image resampling
NASA Technical Reports Server (NTRS)
Lyon, Richard G.
1997-01-01
All types of image registration require some type of resampling, either during the registration or as a final step in the registration process. Thus the image(s) must be regridded into a spatially uniform, or angularly uniform, coordinate system with some pre-defined resolution. Frequently the final resolution is not the resolution at which the data were observed. The registration algorithm designer and end product user are presented with a multitude of possible resampling methods, each of which modifies the spatial frequency content of the data in some way. The purpose of this paper is threefold: (1) to show, from an end-to-end optical systems analysis approach, how an imaging system modifies the scene, (2) to develop a generalized resampling model, and (3) to empirically apply the model to simulated radiometric scene data and tabulate the results. A Hanning windowed sinc interpolator method will be developed based upon the optical characterization of the system. It will be discussed in terms of the effects and limitations of sampling, aliasing, spectral leakage, and computational complexity. Simulated radiometric scene data will be used to demonstrate each of the algorithms. A high resolution scene will be "grown" using a fractal growth algorithm based on mid-point recursion techniques. The resulting scene data will be convolved with a point spread function representing the optical response. The resultant scene will be convolved with the detection system's response and subsampled to the desired resolution. The resultant data product will be subsequently resampled to the correct grid using the Hanning windowed sinc interpolator, and the results and errors tabulated and discussed.
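For reference, a Hann-windowed sinc interpolator of the kind developed in the paper can be written compactly. This 1-D numpy sketch (the 2-D case applies it separably) assumes a half-width of 4 samples and normalises the kernel to preserve the local mean; both choices are illustrative, not taken from the paper.

```python
import numpy as np

def windowed_sinc(x, half_width=4.0):
    """Hann-windowed sinc: the ideal low-pass kernel tapered to limit
    spectral leakage, zero outside +/- half_width samples."""
    w = 0.5 * (1.0 + np.cos(np.pi * x / half_width))
    return np.where(np.abs(x) <= half_width, np.sinc(x) * w, 0.0)

def resample_1d(signal, positions, half_width=4.0):
    """Resample a uniformly sampled signal at arbitrary positions
    (in units of the original sample spacing)."""
    n = np.arange(len(signal))
    out = np.empty(len(positions))
    for i, p in enumerate(positions):
        k = windowed_sinc(n - p, half_width)
        out[i] = np.dot(k, signal) / k.sum()   # normalise to preserve DC
    return out
```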
Adaptive convergence nonuniformity correction algorithm.
Qian, Weixian; Chen, Qian; Bai, Junqi; Gu, Guohua
2011-01-01
Nowadays, convergence and ghosting artifacts are common problems in scene-based nonuniformity correction (NUC) algorithms. In this study, we introduce the idea of space frequency to scene-based NUC. Then the convergence speed factor is presented, which can adaptively change the convergence speed in response to changes in the scene dynamic range. In effect, the role of the convergence speed factor is to decrease the standard deviation of the statistical data. The spatial relativity characteristic of the nonuniformity was summarized from a large body of experimental statistical data and used to correct the convergence speed factor, making it more stable. Finally, real and simulated infrared image sequences were applied to demonstrate the positive effect of our algorithm.
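A loose software sketch of the idea follows, offset-only and with every name and constant hypothetical; the paper's statistics and spatial-relativity correction are richer than this.

```python
import numpy as np

def nuc_step(frame, offset, lr_base=1e-3):
    """One scene-based NUC iteration. The adaptive rate, scaled by the
    frame's dynamic range, plays the role of the convergence speed
    factor: information-rich frames adapt quickly, flat ones barely
    move, which suppresses ghosting on static scenes."""
    corrected = frame.astype(float) + offset
    lr = lr_base * (frame.max() - frame.min()) / 255.0   # assumes 8-bit data
    offset = offset - lr * (corrected - corrected.mean())
    return corrected, offset
```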
NASA Technical Reports Server (NTRS)
Wrigley, R. C. (Principal Investigator)
1984-01-01
The Thematic Mapper scene of Sacramento, CA acquired during the TDRSS test was received in TIPS format. Quadrants for both scenes were tested for band-to-band registration using reimplemented block correlation techniques. Summary statistics for band-to-band registrations of TM band combinations for Quadrant 4 of the NE Arkansas scene in TIPS format are tabulated as well as those for Quadrant 1 of the Sacramento scene. The system MTF analysis for the San Francisco scene is completed. The thermal band did not have sufficient contrast for the targets used and was not analyzed.
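Band-to-band registration by block correlation, of the kind reimplemented here, is straightforward to express. The numpy sketch below estimates a single block's integer offset by exhaustive normalised cross-correlation; the function name and search radius are illustrative.

```python
import numpy as np

def block_shift(ref, tgt, max_shift=3):
    """Estimate the (dy, dx) offset between two image blocks from
    different bands by normalised cross-correlation over small
    integer shifts."""
    r = (ref - ref.mean()) / ref.std()
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            t = np.roll(tgt, (dy, dx), axis=(0, 1))
            t = (t - t.mean()) / t.std()
            score = (r * t).mean()
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best, best_score
```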
Oculomotor capture during real-world scene viewing depends on cognitive load.
Matsukura, Michi; Brockmole, James R; Boot, Walter R; Henderson, John M
2011-03-25
It has been claimed that gaze control during scene viewing is largely governed by stimulus-driven, bottom-up selection mechanisms. Recent research, however, has strongly suggested that observers' top-down control plays a dominant role in attentional prioritization in scenes. A notable exception to this strong top-down control is oculomotor capture, where visual transients in a scene draw the eyes. One way to test whether oculomotor capture during scene viewing is independent of an observer's top-down goal setting is to reduce observers' cognitive resource availability. In the present study, we examined whether increasing observers' cognitive load influences the frequency and speed of oculomotor capture during scene viewing. In Experiment 1, we tested whether increasing observers' cognitive load modulates the degree of oculomotor capture by a new object that suddenly appeared in a scene. Similarly, in Experiment 2, we tested whether increasing observers' cognitive load modulates the degree of oculomotor capture by an object's color change. In both experiments, the degree of oculomotor capture decreased as observers' cognitive resources were reduced. These results suggest that oculomotor capture during scene viewing is dependent on observers' top-down selection mechanisms. Copyright © 2011 Elsevier Ltd. All rights reserved.
Spatial Modulation Improves Performance in CTIS
NASA Technical Reports Server (NTRS)
Bearman, Gregory H.; Wilson, Daniel W.; Johnson, William R.
2009-01-01
Suitably formulated spatial modulation of a scene imaged by a computed-tomography imaging spectrometer (CTIS) has been found to be useful as a means of improving the imaging performance of the CTIS. As used here, "spatial modulation" signifies the imposition of additional, artificial structure on a scene from within the CTIS optics. The basic principles of a CTIS were described in "Improvements in Computed-Tomography Imaging Spectrometry" (NPO-20561), NASA Tech Briefs, Vol. 24, No. 12 (December 2000), page 38, and "All-Reflective Computed-Tomography Imaging Spectrometers" (NPO-20836), NASA Tech Briefs, Vol. 26, No. 11 (November 2002), page 7a. To recapitulate: a CTIS offers capabilities for imaging a scene with spatial, spectral, and temporal resolution. The spectral disperser in a CTIS is a two-dimensional diffraction grating. It is positioned between two relay lenses (or on one of two relay mirrors) in a video imaging system. If the disperser were removed, the system would produce ordinary images of the scene in its field of view. In the presence of the grating, the image on the focal plane of the system contains both spectral and spatial information because the multiple diffraction orders of the grating give rise to multiple, spectrally dispersed images of the scene. By use of algorithms adapted from computed tomography, the image on the focal plane can be processed into an image cube: a three-dimensional collection of data on the image intensity as a function of the two spatial dimensions (x and y) in the scene and of wavelength (lambda). Thus, both spectrally and spatially resolved information on the scene at a given instant of time can be obtained, without scanning, from a single snapshot; this is what makes the CTIS such a potentially powerful tool for spatially, spectrally, and temporally resolved imaging. A CTIS performs poorly in imaging some types of scenes, in particular scenes that contain little spatial or spectral variation. The computed spectra of such scenes tend to approximate correct values to within acceptably small errors near the edges of the field of view but to be poor approximations away from the edges. The additional structure imposed on a scene according to the present method enables the CTIS algorithms to reconstruct acceptable approximations of the spectral data throughout the scene.
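The tomographic step can be illustrated with the multiplicative expectation-maximization update commonly used for CTIS-style reconstruction; the briefs above do not fix the exact algorithm, so treat this as a generic sketch. Given a system matrix Hmat mapping the flattened (x, y, lambda) cube to the focal plane, it iterates x <- x * (H^T (y / Hx)) / (H^T 1).

```python
import numpy as np

def ctis_em(Hmat, y, n_iter=20):
    """Multiplicative EM reconstruction of the image cube (flattened
    into a vector x) from the multiplexed focal-plane image y."""
    y = np.asarray(y, dtype=float)
    x = np.ones(Hmat.shape[1])
    norm = Hmat.sum(axis=0)                      # H^T 1
    for _ in range(n_iter):
        proj = Hmat @ x                          # forward projection Hx
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x *= (Hmat.T @ ratio) / np.maximum(norm, 1e-12)
    return x
```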
Infrared Thermal Imaging System on a Mobile Phone
Lee, Fu-Feng; Chen, Feng; Liu, Jing
2015-01-01
A novel concept for a pervasively available, low-cost infrared thermal imaging system launched on a mobile phone (MTIS) was proposed and demonstrated in this article. A review of the evolution of milestone technologies in the area shows that portable, low-cost designs are becoming the mainstream of thermal imagers for civilian purposes. As a representative trial towards this important goal, an MTIS consisting of a thermal infrared module (TIM) and a mobile phone with embedded exclusive software (IRAPP) was presented. The basic strategy for the TIM construction is illustrated, including sensor adoption and optical specification. The user-oriented software was developed in the Android environment in consideration of its popularity and expandability. Computational algorithms with non-uniformity correction and scene-change detection are established to optimize the imaging quality and efficiency of the TIM. The performance experiments and analysis indicated that the currently available detection distance for the MTIS is about 29 m. Furthermore, some family-targeted uses enabled by the MTIS are also outlined, such as sudden infant death syndrome (SIDS) prevention. This work suggests a ubiquitous way of significantly extending thermal infrared imaging into rather wide areas, especially health care, in the coming time. PMID:25942639
Mining Very High Resolution INSAR Data Based On Complex-GMRF Cues And Relevance Feedback
NASA Astrophysics Data System (ADS)
Singh, Jagmal; Popescu, Anca; Soccorsi, Matteo; Datcu, Mihai
2012-01-01
With the increase in the number of remote sensing satellites, the number of image-data scenes in our repositories is also increasing, and a large quantity of these scenes are never retrieved and used. Thus automatic retrieval of desired image-data using query by image content, to fully utilize the huge repository volume, is becoming of great interest. Generally, different users are interested in scenes containing different kinds of objects and structures, so it is important to analyze all the image information mining (IIM) methods so that it is easier for a user to select a method depending upon his/her requirement. We concentrate our study only on high-resolution SAR images, and we propose to use InSAR observations instead of single look complex (SLC) images alone for mining scenes containing coherent objects such as high-rise buildings. However, in the case of objects with less coherence, like areas with vegetation cover, SLC images exhibit better performance. We demonstrate the IIM performance comparison using complex Gauss-Markov Random Fields as a texture descriptor for image patches and SVM relevance feedback.
Coding of navigational affordances in the human visual system
Epstein, Russell A.
2017-01-01
A central component of spatial navigation is determining where one can and cannot go in the immediate environment. We used fMRI to test the hypothesis that the human visual system solves this problem by automatically identifying the navigational affordances of the local scene. Multivoxel pattern analyses showed that a scene-selective region of dorsal occipitoparietal cortex, known as the occipital place area, represents pathways for movement in scenes in a manner that is tolerant to variability in other visual features. These effects were found in two experiments: One using tightly controlled artificial environments as stimuli, the other using a diverse set of complex, natural scenes. A reconstruction analysis demonstrated that the population codes of the occipital place area could be used to predict the affordances of novel scenes. Taken together, these results reveal a previously unknown mechanism for perceiving the affordance structure of navigable space. PMID:28416669
Optical system design of dynamic infrared scene projector based on DMD
NASA Astrophysics Data System (ADS)
Lu, Jing; Fu, Yuegang; Liu, Zhiying; Li, Yandong
2014-09-01
Infrared scene simulators are now widely used to reproduce infrared scenes realistically in the laboratory, which can greatly reduce the development cost of electro-optical systems and offer an economical experimental environment. With the advantages of a large dynamic range and high spatial resolution, dynamic infrared projection technology based on the digital micro-mirror device (DMD), the key part of the infrared scene simulator, has been rapidly developed and widely applied in recent years. In this paper, the principle of the digital micro-mirror device is briefly introduced and the characteristics of the DLP (Digital Light Processing) system based on the DMD are analyzed. A projection system working at 8~12 μm with a 1024×768 pixel DMD was designed in ZEMAX. The MTF curve is close to the diffraction-limited curve and the radius of the spot diagram is smaller than that of the Airy disk. The result indicates that the system meets the design requirements.
The effect of distraction on change detection in crowded acoustic scenes.
Petsas, Theofilos; Harrison, Jemma; Kashino, Makio; Furukawa, Shigeto; Chait, Maria
2016-11-01
In this series of behavioural experiments we investigated the effect of distraction on the maintenance of acoustic scene information in short-term memory. Stimuli were artificial acoustic 'scenes' composed of several (up to twelve) concurrent tone-pip streams ('sources'). A gap (1000 ms) was inserted partway through the 'scene'; changes, in the form of the appearance of a new source or the disappearance of an existing source, occurred after the gap in 50% of the trials. Listeners were instructed to monitor the unfolding 'soundscapes' for these events. Distraction was measured by presenting distractor stimuli during the gap. Experiment 1 used a dual task design where listeners were required to perform a task with varying attentional demands ('High Demand' vs. 'Low Demand') on brief auditory (Experiment 1a) or visual (Experiment 1b) signals presented during the gap. Experiments 2 and 3 required participants to ignore distractor sounds and focus on the change detection task. Our results demonstrate that the maintenance of scene information in short-term memory is influenced by the availability of attentional and/or processing resources during the gap, and that this dependence appears to be modality specific. We also show that these processes are susceptible to bottom-up driven distraction even in situations where the distractors are not novel but occur on each trial. Change detection performance is systematically linked with the independently determined perceptual salience of the distractor sound. The findings also demonstrate that the present task may be a useful objective means of determining relative perceptual salience. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Fu, Kun; Jin, Junqi; Cui, Runpeng; Sha, Fei; Zhang, Changshui
2017-12-01
Recent progress on the automatic generation of image captions has shown that it is possible to describe the most salient information conveyed by images with accurate and meaningful sentences. In this paper, we propose an image captioning system that exploits the parallel structures between images and sentences. In our model, the process of generating the next word, given the previously generated ones, is aligned with the visual perception experience where the attention shifts among the visual regions; such transitions impose a thread of ordering in visual perception. This alignment characterizes the flow of latent meaning, which encodes what is semantically shared by both the visual scene and the text description. Our system also makes another novel modeling contribution by introducing scene-specific contexts that capture higher-level semantic information encoded in an image. The contexts adapt language models for word generation to specific scene types. We benchmark our system and contrast it to published results on several popular datasets, using both automatic evaluation metrics and human evaluation. We show that either region-based attention or scene-specific contexts improves over systems without those components. Furthermore, combining these two modeling ingredients attains the state-of-the-art performance.
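The region-attention ingredient can be sketched independently of the full captioning model. The numpy fragment below computes one decoding step of generic soft attention; all weight names and shapes are assumptions, not the paper's architecture.

```python
import numpy as np

def soft_attention(region_feats, hidden, W_r, W_h, v):
    """Score each image region (rows of region_feats, R x d) against the
    language model's hidden state, softmax the scores, and return the
    attended context vector that conditions the next word."""
    scores = np.tanh(region_feats @ W_r + hidden @ W_h) @ v   # (R,)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                                      # attention weights
    return alpha @ region_feats, alpha
```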
Illumination discrimination in the absence of a fixed surface-reflectance layout
Radonjić, Ana; Ding, Xiaomao; Krieger, Avery; Aston, Stacey; Hurlbert, Anya C.; Brainard, David H.
2018-01-01
Previous studies have shown that humans can discriminate spectral changes in illumination and that this sensitivity depends both on the chromatic direction of the illumination change and on the ensemble of surfaces in the scene. These studies, however, always used stimulus scenes with a fixed surface-reflectance layout. Here we compared illumination discrimination for scenes in which the surface reflectance layout remains fixed (fixed-surfaces condition) to those in which surface reflectances were shuffled randomly across scenes, but with the mean scene reflectance held approximately constant (shuffled-surfaces condition). Illumination discrimination thresholds in the fixed-surfaces condition were commensurate with previous reports. Thresholds in the shuffled-surfaces condition, however, were considerably elevated. Nonetheless, performance in the shuffled-surfaces condition exceeded that attainable through random guessing. Analysis of eye fixations revealed that in the fixed-surfaces condition, low illumination discrimination thresholds (across observers) were predicted by low overall fixation spread and high consistency of fixation location and fixated surface reflectances across trial intervals. Performance in the shuffled-surfaces condition was not systematically related to any of the eye-fixation characteristics we examined for that condition, but was correlated with performance in the fixed-surfaces condition. PMID:29904786
Color appearance and color rendering of HDR scenes: an experiment
NASA Astrophysics Data System (ADS)
Parraman, Carinna; Rizzi, Alessandro; McCann, John J.
2009-01-01
In order to gain a deeper understanding of the appearance of coloured objects in a three-dimensional scene, the research introduces a multidisciplinary experimental approach. The experiment employed two identical 3-D Mondrians, which were viewed and compared side by side. Each scene was subjected to different lighting conditions. First, we used an illumination cube to diffuse the light and illuminate all the objects from each direction. This produced a low-dynamic-range (LDR) image of the 3-D Mondrian scene. Second, in order to make a high-dynamic-range (HDR) image of the same objects, we used a directional 150 W spotlight and an array of WLEDs assembled in a flashlight. The scenes were significant in that each contained exactly the same three-dimensional painted colour blocks, arranged in the same position in the still life. The blocks comprised 6 hue colours and 5 tones from white to black. Participants from the CREATE project were asked to consider the change in the appearance of a selection of colours according to lightness, hue, and chroma, and to rate how the change in illumination affected appearance. We measured the light coming to the eye from still-life surfaces with a colorimeter (Yxy). We captured the scene radiance using multiple exposures with a number of different cameras. We have begun a programme of digital image processing of these scene capture methods. This multi-disciplinary programme continues until 2010, so this paper is an interim report on the initial phases and a description of the ongoing project.
NASA Technical Reports Server (NTRS)
Browder, Joan A.; May, L. Nelson, Jr.; Rosenthal, Alan; Baumann, Robert H.; Gosselink, James G.
1987-01-01
A stochastic spatial computer model addressing coastal resource problems in Louisiana is being refined and validated using Thematic Mapper (TM) imagery. The TM images of brackish marsh sites were processed and data were tabulated on spatial parameters from TM images of the salt marsh sites. The Fisheries Image Processing Systems (FIPS) was used to analyze the TM scene. Activities were concentrated on improving the structure of the model and developing a structure and methodology for calibrating the model with spatial-pattern data from the TM imagery.
Atilola, Olayinka; Olayiwola, Funmilayo
2013-06-01
This study examines the modes of framing mental illness in the Yoruba genre of Nigerian movies. All Yoruba films on display in a convenience sample of movie rental shops in Ibadan (Nigeria) were sampled for content. Of the 103 films studied, 27 (26.2%) contained scenes depicting mental illness. Psychotic symptoms were the most commonly depicted, while effective treatments were mostly depicted as taking place in unorthodox settings. The most commonly depicted aetiology of mental illness was sorcery and enchantment by witches, wizards, and other supernatural forces. Scenes of mental illness are common in Nigerian movies, and these depictions, though reflecting the popular explanatory models of Yoruba-speaking Nigerians about mental illness, may impede utilization of mental health care services and ongoing efforts to reduce psychiatric stigma in this region. Efforts to reduce stigma and improve service utilization should engage the film industry.
Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System
2015-03-26
...camera model: light reflected or projected from objects in the scene of the outside world is taken in by the aperture (or opening)... the model's analog aspects, with an analog-to-digital interface converting raw images of the outside-world scene into digital information a computer can use... Figure 2.7, Digital Image Coordinate System (used with permission [30])... Angular Field of View: the angular field of view is the angle of the world scene...
Voxel- and Graph-Based Point Cloud Segmentation of 3D Scenes Using Perceptual Grouping Laws
NASA Astrophysics Data System (ADS)
Xu, Y.; Hoegner, L.; Tuttas, S.; Stilla, U.
2017-05-01
Segmentation is the fundamental step for recognizing and extracting objects from the point cloud of a 3D scene. In this paper, we present a strategy for point cloud segmentation using a voxel structure and graph-based clustering with perceptual grouping laws, which allows a learning-free and completely automatic but parametric solution for segmenting 3D point clouds. More precisely, two segmentation methods utilizing voxel and supervoxel structures are reported and tested. The voxel-based data structure increases the efficiency and robustness of the segmentation process, suppressing the negative effects of noise, outliers, and uneven point densities. The clustering of voxels and supervoxels is carried out using graph theory on the basis of local contextual information, in contrast to conventional clustering algorithms, which commonly use merely pairwise information. Through the use of perceptual laws, our method conducts the segmentation in a purely geometric way, avoiding the use of RGB color and intensity information, so that it can be applied in more general settings. Experiments using different datasets have demonstrated that our proposed methods achieve good results, especially for complex scenes and nonplanar object surfaces. Quantitative comparisons between our methods and other representative segmentation methods also confirm the effectiveness and efficiency of our proposals.
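A stripped-down version of the voxel stage is easy to prototype; the sketch below clusters occupied voxels by 26-connectivity only, leaving out the perceptual grouping laws that distinguish the full method.

```python
import numpy as np
from scipy import ndimage

def voxel_segments(points, voxel=0.1):
    """Quantise an N x 3 point cloud into an occupancy grid, label
    26-connected components, and return a segment id per input point."""
    ijk = np.floor((points - points.min(axis=0)) / voxel).astype(int)
    grid = np.zeros(tuple(ijk.max(axis=0) + 1), dtype=bool)
    grid[tuple(ijk.T)] = True
    labels, n_segments = ndimage.label(grid, structure=np.ones((3, 3, 3)))
    return labels[tuple(ijk.T)], n_segments
```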
ERIC Educational Resources Information Center
Rieger, Jochem W.; Kochy, Nick; Schalk, Franziska; Gruschow, Marcus; Heinze, Hans-Jochen
2008-01-01
The visual system rapidly extracts information about objects from the cluttered natural environment. In 5 experiments, the authors quantified the influence of orientation and semantics on the classification speed of objects in natural scenes, particularly with regard to object-context interactions. Natural scene photographs were presented in an…
A Demonstration of ‘Broken’ Visual Space
Gilson, Stuart
2012-01-01
It has long been assumed that there is a distorted mapping between real and ‘perceived’ space, based on demonstrations of systematic errors in judgements of slant, curvature, direction and separation. Here, we have applied a direct test to the notion of a coherent visual space. In an immersive virtual environment, participants judged the relative distance of two squares displayed in separate intervals. On some trials, the virtual scene expanded by a factor of four between intervals although, in line with recent results, participants did not report any noticeable change in the scene. We found that there was no consistent depth ordering of objects that can explain the distance matches participants made in this environment (e.g. A>B>D yet also A
Colour analysis and verification of CCTV images under different lighting conditions
NASA Astrophysics Data System (ADS)
Smith, R. A.; MacLennan-Brown, K.; Tighe, J. F.; Cohen, N.; Triantaphillidou, S.; MacDonald, L. W.
2008-01-01
Colour information is not faithfully maintained by a CCTV imaging chain. Since colour can play an important role in identifying objects it is beneficial to be able to account accurately for changes to colour introduced by components in the chain. With this information it will be possible for law enforcement agencies and others to work back along the imaging chain to extract accurate colour information from CCTV recordings. A typical CCTV system has an imaging chain that may consist of scene, camera, compression, recording media and display. The response of each of these stages to colour scene information was characterised by measuring its response to a known input. The main variables that affect colour within a scene are illumination and the colour, orientation and texture of objects. The effects of illumination on the appearance of colour of a variety of test targets were tested using laboratory-based lighting, street lighting, car headlights and artificial daylight. A range of typical cameras used in CCTV applications, common compression schemes and representative displays were also characterised.
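One common way to express such a stage characterisation is a least-squares colour correction matrix fitted from a known test chart; this numpy sketch is a generic illustration, not the project's actual procedure.

```python
import numpy as np

def colour_correction_matrix(measured_rgb, reference_rgb):
    """Fit a 3x3 matrix M (least squares, charts as N x 3 arrays) so
    that measured_rgb @ M approximates the reference chart values;
    chaining the inverses of per-stage fits works back along the
    imaging chain toward scene colour."""
    M, residuals, rank, sv = np.linalg.lstsq(measured_rgb, reference_rgb,
                                             rcond=None)
    return M   # corrected = measured_rgb @ M
```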
Signature simulation of mixed materials
NASA Astrophysics Data System (ADS)
Carson, Tyler D.; Salvaggio, Carl
2015-05-01
Soil target signatures vary due to geometry, chemical composition, and scene radiometry. Although radiative transfer models and function-fit physical models may describe certain targets in limited depth, incorporating all three signature variables is difficult. This work describes a method to simulate the transient signatures of soil by first considering scene geometry synthetically created using 3D physics engines. Through the assignment of spectral data from the Nonconventional Exploitation Factors Data System (NEFDS), the synthetic scene is represented as a physical mixture of particles. Finally, first-principles radiometry is modeled using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. With DIRSIG, radiometric and sensing conditions were systematically manipulated to produce and record goniometric signatures. The implementation of this virtual goniometer allows users to examine how a target's bidirectional reflectance distribution function (BRDF) will change with geometry, composition, and illumination direction. Because it uses 3D computer graphics models, this process does not require the geometric assumptions that are native to many radiative transfer models. It delivers a discrete method to circumvent the significant cost in time and treasure associated with hardware-based goniometric data collections.
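The virtual goniometer amounts to sweeping illumination and view geometry over a reflectance model and recording the result. The toy sketch below does this for a made-up Lambertian-plus-hotspot BRDF; DIRSIG's radiometry is far more complete, and every name and constant here is an assumption.

```python
import numpy as np

def toy_brdf(sun_zen, view_zen, rel_az, rho=0.3, k=0.1):
    """Lambertian term plus a crude hotspot peaking when the view
    direction approaches the sun direction."""
    hotspot = k * np.exp(-abs(sun_zen - view_zen)) * np.cos(rel_az / 2.0) ** 2
    return rho / np.pi + max(hotspot, 0.0)

def virtual_goniometer(brdf, sun_zen_deg=30.0, n_zen=18, n_az=36):
    """Tabulate the BRDF over a grid of view zenith/azimuth angles for a
    fixed sun position, mimicking a goniometric collection."""
    sun = np.deg2rad(sun_zen_deg)
    zeniths = np.linspace(0.0, np.pi / 2, n_zen, endpoint=False)
    azimuths = np.linspace(0.0, 2 * np.pi, n_az, endpoint=False)
    return np.array([[brdf(sun, th, ph) for ph in azimuths] for th in zeniths])
```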
A Theoretical and Experimental Analysis of the Outside World Perception Process
NASA Technical Reports Server (NTRS)
Wewerinke, P. H.
1978-01-01
The outside scene is often an important source of information for manual control tasks. Important examples of these are car driving and aircraft control. This paper deals with modelling this visual scene perception process on the basis of linear perspective geometry and relative motion cues. Model predictions for a variety of visual approach tasks, utilizing psychophysical threshold data from baseline experiments and the literature, are compared with experimental data. Both the performance and workload results illustrate that the model provides a meaningful description of the outside world perception process, with a useful predictive capability.
Meta Data Mining in Earth Remote Sensing Data Archives
NASA Astrophysics Data System (ADS)
Davis, B.; Steinwand, D.
2014-12-01
Modern search and discovery tools for satellite based remote sensing data are often catalog based and rely on query systems which use scene- (or granule-) based meta data for those queries. While these traditional catalog systems are often robust, very little has been done in the way of meta data mining to aid in the search and discovery process. The recently coined term "Big Data" can be applied to the remote sensing world's efforts to derive information from the vast data holdings of satellite based land remote sensing data. Large catalog-based search and discovery systems such as the United States Geological Survey's Earth Explorer system and the NASA Earth Observing System Data and Information System's Reverb-ECHO system provide comprehensive access to these data holdings, but do little to expose the underlying scene-based meta data. These catalog-based systems are extremely flexible, but are manually intensive and often require a high level of user expertise. Exposing scene-based meta data to external, web-based services can enable machine-driven queries to aid in the search and discovery process. Furthermore, services which expose additional scene-based content data (such as product quality information) are now available and can provide a "deeper look" into remote sensing data archives too large for efficient manual search methods. This presentation shows examples of the mining of Landsat and ASTER scene-based meta data, and an experimental service using OPeNDAP to extract information from the quality band of multiple granules in the MODIS archive.
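A machine-driven pre-filter over exposed scene metadata might look like the fragment below; the record layout, field names, and scene identifiers are invented for illustration, since each service defines its own.

```python
def usable_scenes(records, max_cloud=20.0):
    """Keep only scenes below a cloud-cover threshold before any
    manual search; 'records' is an iterable of metadata dicts."""
    return [r for r in records if r.get("cloud_cover", 100.0) <= max_cloud]

# hypothetical records, not real scene identifiers
scenes = [
    {"id": "scene_0001", "cloud_cover": 12.5},
    {"id": "scene_0002", "cloud_cover": 71.0},
]
print(usable_scenes(scenes))   # [{'id': 'scene_0001', 'cloud_cover': 12.5}]
```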
Navigable points estimation for mobile robots using binary image skeletonization
NASA Astrophysics Data System (ADS)
Martinez S., Fernando; Jacinto G., Edwar; Montiel A., Holman
2017-02-01
This paper describes the use of image skeletonization for the estimation of all the navigable points inside a mobile robot navigation scene. Those points are used for computing a valid navigation path with standard methods. The main idea is to find the middle and extreme points of the obstacles in the scene, taking into account the robot size, and to create a map of navigable points, in order to reduce the amount of information passed to the planning algorithm. Those points are located by means of the skeletonization of a binary image of the obstacles and the scene background, along with some other digital image processing algorithms. The proposed algorithm automatically gives a variable number of navigable points per obstacle, depending on the complexity of its shape. The way the algorithm can vary some of its parameters to change the final number of resultant key points is also shown. The results shown here were obtained by applying different kinds of digital image processing algorithms to static scenes.
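The core of the method maps to a few library calls. This sketch (SciPy and scikit-image, with hypothetical names) erodes the free space by the robot radius before skeletonising, so every returned point keeps clearance; the paper's handling of obstacle middle and extreme points is richer.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from skimage.morphology import skeletonize

def navigable_points(free_space, robot_radius_px):
    """Candidate waypoints from a binary free-space map: erode by the
    robot radius, skeletonise what remains, and return the skeleton
    pixel coordinates (row, col) as the reduced navigation map."""
    safe = binary_erosion(free_space, iterations=robot_radius_px)
    return np.argwhere(skeletonize(safe))
```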
Research and Construction Lunar Stereoscopic Visualization System Based on Chang'E Data
NASA Astrophysics Data System (ADS)
Gao, Xingye; Zeng, Xingguo; Zhang, Guihua; Zuo, Wei; Li, ChunLai
2017-04-01
With the lunar exploration activities carried out by the Chang'E-1, Chang'E-2 and Chang'E-3 lunar probes, a large amount of lunar data has been obtained, including topographical and image data covering the whole Moon, as well as panoramic image data of the area close to the Chang'E-3 landing point. In this paper, we constructed an immersive virtual Moon system based on the acquired lunar exploration data using advanced stereoscopic visualization technology, which will help scholars carry out research on lunar topography, assist the further exploration of lunar science, and facilitate lunar science outreach to the public. We focus on building the lunar stereoscopic visualization system as a combination of software and hardware, using binocular stereoscopic display technology, a real-time rendering algorithm for massive terrain data, and panorama-based virtual scene construction, to achieve an immersive virtual tour of the whole Moon and of the local moonscape at the Chang'E-3 landing point.
Subliminal encoding and flexible retrieval of objects in scenes.
Wuethrich, Sergej; Hannula, Deborah E; Mast, Fred W; Henke, Katharina
2018-04-27
Our episodic memory stores what happened when and where in life. Episodic memory requires the rapid formation and flexible retrieval of where things are located in space. Consciousness of the encoding scene is considered crucial for episodic memory formation. Here, we question the necessity of consciousness and hypothesize that humans can form unconscious episodic memories. Participants were presented with subliminal scenes, i.e., scenes invisible to the conscious mind. The scenes displayed objects at certain locations for participants to form unconscious object-in-space memories. Later, the same scenes were presented supraliminally, i.e., visibly, for retrieval testing. Scenes were presented absent the objects and rotated by 90°-270° in perspective to assess the representational flexibility of unconsciously formed memories. During the test phase, participants performed a forced-choice task that required them to place an object in one of two highlighted scene locations and their eye movements were recorded. Evaluation of the eye tracking data revealed that participants remembered object locations unconsciously, irrespective of changes in viewing perspective. This effect of gaze was related to correct placements of objects in scenes, and an intuitive decision style was necessary for unconscious memories to influence intentional behavior to a significant degree. We conclude that conscious perception is not mandatory for spatial episodic memory formation. This article is protected by copyright. All rights reserved. © 2018 Wiley Periodicals, Inc.
Multisensor Fusion for Change Detection
NASA Astrophysics Data System (ADS)
Schenk, T.; Csatho, B.
2005-12-01
Combining sensors that record different properties of a 3-D scene leads to complementary and redundant information. If fused properly, a more robust and complete scene description becomes available. Moreover, fusion facilitates automatic procedures for object reconstruction and modeling. For example, aerial imaging sensors, hyperspectral scanning systems, and airborne laser scanning systems generate complementary data. We describe how data from these sensors can be fused for such diverse applications as mapping surface erosion and landslides, reconstructing urban scenes, monitoring urban land use and urban sprawl, and deriving velocities and surface changes of glaciers and ice sheets. An absolute prerequisite for successful fusion is a rigorous co-registration of the sensors involved. We establish a common 3-D reference frame by using sensor invariant features. Such features are caused by the same object space phenomena and are extracted in multiple steps from the individual sensors. After extracting, segmenting and grouping the features into more abstract entities, we discuss ways to automatically establish correspondences. This is followed by a brief description of rigorous mathematical models suitable to deal with linear and area features. In contrast to traditional, point-based registration methods, linear and areal features lend themselves to a more robust and more accurate registration. More importantly, the chances of automating the registration process increase significantly. The result of the co-registration of the sensors is a unique transformation between the individual sensors and the object space. This makes spatial reasoning with extracted information more versatile; reasoning can be performed in sensor space or in 3-D space, where domain knowledge about features and objects constrains reasoning processes, reduces the search space, and helps to make the problem well-posed. We demonstrate the feasibility of the proposed multisensor fusion approach by detecting surface elevation changes on the Byrd Glacier, Antarctica, with aerial imagery from the 1980s and ICESat laser altimetry data from 2003-05. Change detection from such disparate data sets is an intricate fusion problem, beginning with sensor alignment, and on to reasoning with spatial information as to where changes occurred and to what extent.
Corn and soybean Landsat MSS classification performance as a function of scene characteristics
NASA Technical Reports Server (NTRS)
Batista, G. T.; Hixson, M. M.; Bauer, M. E.
1982-01-01
In order to fully utilize remote sensing to inventory crop production, it is important to identify the factors that affect the accuracy of Landsat classifications. The objective of this study was to investigate the effect of scene characteristics involving crop, soil, and weather variables on the accuracy of Landsat classifications of corn and soybeans. Segments sampling the U.S. Corn Belt were classified using a Gaussian maximum likelihood classifier on multitemporally registered data from two key acquisition periods. Field size had a strong effect on classification accuracy with small fields tending to have low accuracies even when the effect of mixed pixels was eliminated. Other scene characteristics accounting for variability in classification accuracy included proportions of corn and soybeans, crop diversity index, proportion of all field crops, soil drainage, slope, soil order, long-term average soybean yield, maximum yield, relative position of the segment in the Corn Belt, weather, and crop development stage.
Perkins, David Nikolaus; Gonzales, Antonio I
2014-04-08
A set of co-registered coherent change detection (CCD) products is produced from a set of temporally separated synthetic aperture radar (SAR) images of a target scene. A plurality of transformations are determined, which transformations are respectively for transforming a plurality of the SAR images to a predetermined image coordinate system. The transformations are used to create, from a set of CCD products produced from the set of SAR images, a corresponding set of co-registered CCD products.
Impact of LANDSAT MSS sensor differences on change detection analysis
NASA Technical Reports Server (NTRS)
Likens, W. C.; Wrigley, R. C.
1983-01-01
Some 512 by 512 pixel subwindows of simultaneously acquired scene pairs obtained by the LANDSAT 2, 3, and 4 multispectral band scanners were coregistered, using the LANDSAT 4 scenes as the base to which the other images were registered. Scattergrams between the coregistered scenes (a form of contingency analysis) were used to radiometrically compare data from the various sensors. Mode values were derived and used to visually fit a linear regression. Root mean square errors of the registration varied between .1 and 1.5 pixels. There appear to be no major problems preventing the use of LANDSAT 4 MSS with previous MSS sensors for change detection, provided the noise interference can be removed or minimized. Data normalizations for change detection should be based on the data rather than solely on calibration information. This allows simultaneous normalization of the atmosphere as well as the radiometry.
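The mode-based fit described above can be reproduced numerically rather than visually. This numpy sketch assumes coregistered 8-bit integer scenes and skips sparsely populated digital counts; the minimum-count cutoff is an assumption.

```python
import numpy as np

def mode_regression(ref, tgt, min_count=10):
    """For each reference digital count, take the mode of the target
    counts (the scattergram ridge), then fit a least-squares line
    through the modes to normalise one sensor to the other."""
    x, modes = [], []
    for dc in np.unique(ref):
        vals = tgt[ref == dc]
        if vals.size < min_count:
            continue
        x.append(dc)
        modes.append(np.bincount(vals).argmax())
    slope, intercept = np.polyfit(x, modes, 1)
    return slope, intercept
```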
NASA Astrophysics Data System (ADS)
Wajs, Jaroslaw
2018-01-01
The paper presents satellite imagery from the active SENTINEL-1A and passive SENTINEL-2A/2B sensors and their application to the monitoring of mining areas, focused on detecting land changes. Multispectral scenes from SENTINEL-2A/2B have allowed for detecting changes in land cover near the region of interest (ROI), i.e. the Szczercow dumping site in the Belchatow open cast lignite mine, central Poland, Europe. Scenes from the SENTINEL-1A/1B satellites have also been used in the research. Processing of the SLC signal enabled the creation of a return intensity map in VV polarization. The obtained SAR scene was reclassified and shows a strong return signal from the dumping site and the open pit. This fact may be used in detecting and monitoring changes occurring within the analysed engineering objects.
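The reclassification step reduces to converting backscatter to decibels and binning it; the thresholds below are placeholders, not values from the study.

```python
import numpy as np

def reclassify_sar(intensity, thresholds_db=(-15.0, -5.0)):
    """Convert VV backscatter intensity to dB and bin into classes;
    the brightest class picks out strong returns such as those from
    the dumping site and the open pit."""
    db = 10.0 * np.log10(np.maximum(intensity, 1e-10))
    return np.digitize(db, thresholds_db)   # 0, 1, 2 = low/mid/high return
```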
Application of composite small calibration objects in traffic accident scene photogrammetry.
Chen, Qiang; Xu, Hongguo; Tan, Lidong
2015-01-01
In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.
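The underlying two-dimensional DLT solves one homogeneous system per image from point correspondences; pooling the calibration points of all the small objects into a single system, as the paper's refinement does, just adds rows. A standard SVD sketch:

```python
import numpy as np

def dlt_homography(world_xy, image_xy):
    """Estimate the 3x3 homography mapping world plane points (X, Y) to
    image points (u, v): two equations per correspondence, null space
    of the stacked system via SVD (needs at least 4 points)."""
    A = []
    for (X, Y), (u, v) in zip(world_xy, image_xy):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```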
Interrupted Visual Searches Reveal Volatile Search Memory
ERIC Educational Resources Information Center
Shen, Y. Jeremy; Jiang, Yuhong V.
2006-01-01
This study investigated memory from interrupted visual searches. Participants conducted a change detection search task on polygons overlaid on scenes. Search was interrupted by various disruptions, including unfilled delay, passive viewing of other scenes, and additional search on new displays. Results showed that performance was unaffected by…
Hardware accelerator design for change detection in smart camera
NASA Astrophysics Data System (ADS)
Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Chaudhury, Santanu; Vohra, Anil
2011-10-01
Smart cameras are important components in Human Computer Interaction. In any remote surveillance scenario, smart cameras have to take intelligent decisions to select frames of significant change in order to minimize communication and processing overhead. Among the many algorithms for change detection, one based on a clustering scheme was proposed for smart camera systems. However, such an algorithm could achieve only a low frame rate, far from real-time requirements, on the general purpose processors (like the PowerPC) available on FPGAs. This paper proposes a hardware accelerator capable of detecting changes in a scene in real time using the clustering-based change detection scheme. The system was designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-IIPro FPGA board. The resulting frame rate is 30 frames per second for QVGA resolution in gray scale.
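A software reference for a clustering-based scheme helps clarify what such an accelerator computes per pixel; the cluster count, tolerance, update rate, and significance threshold here are placeholders, not the implemented design's parameters.

```python
import numpy as np

def detect_change(clusters, frame, tol=12.0, rate=0.1, min_frac=0.02):
    """Each pixel keeps k intensity cluster centres (clusters: H x W x k,
    float). A pixel 'changes' when its new value matches none of them;
    a frame is significant when enough pixels change. The nearest
    cluster is nudged toward the new value as a running update."""
    diffs = np.abs(clusters - frame[..., None])          # H x W x k
    changed = diffs.min(axis=-1) > tol
    idx = diffs.argmin(axis=-1)[..., None]
    nearest = np.take_along_axis(clusters, idx, axis=-1)
    np.put_along_axis(clusters, idx,
                      (1 - rate) * nearest + rate * frame[..., None], axis=-1)
    return changed.mean() > min_frac, changed
```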
A Drastic Change in Background Luminance or Motion Degrades the Preview Benefit.
Osugi, Takayuki; Murakami, Ikuya
2017-01-01
When some distractors (old items) precede some others (new items) in an inefficient visual search task, the search is restricted to new items, yielding a phenomenon termed the preview benefit. It has recently been demonstrated that, in this preview search task, the onset of repetitive changes in the background disrupts the preview benefit, whereas a single transient change in the background does not. In the present study, we explored this effect with dynamic background changes occurring in the context of realistic scenes, to examine the robustness and usefulness of visual marking. We examined whether the preview benefit survived task-irrelevant changes in the scene, namely a luminance change and the initiation of coherent motion, both occurring in the background. A luminance change of the background disrupted the preview benefit if it was synchronized with the onset of the search display. Furthermore, although the presence of coherent background motion per se did not affect the preview benefit, its synchronized initiation with the onset of the search display did disrupt the preview benefit if the motion speed was sufficiently high. These results suggest that visual marking can be destroyed by a transient event in the scene if that event is sufficiently drastic.
Dual Channel S-Band Frequency Modulated Continuous Wave Through-Wall Radar Imaging
Oh, Daegun; Kim, Sunwoo; Chong, Jong-Wha
2018-01-01
This article deals with the development of a dual channel S-band frequency-modulated continuous wave (FMCW) system for through-the-wall radar imaging (TWRI). Most existing TWRI systems using FMCW were developed for synthetic aperture radar (SAR), which has drawbacks such as the need for several antenna elements and movement of the system. Our implemented TWRI system comprises one transmitting antenna and two receiving antennas, resulting in a significant reduction in the number of antenna elements. Moreover, a proposed algorithm for range-angle-Doppler 3D estimation based on a 3D shift invariant structure is utilized in our implemented dual channel S-band FMCW TWRI system. Indoor and outdoor experiments were conducted to image the scene beyond a wall for water targets and person targets, respectively. The experimental results demonstrate that high-quality imaging can be achieved under both experimental scenarios. PMID:29361777
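The range dimension of such a system follows directly from FMCW de-chirp processing. This numpy sketch converts one sweep's real-valued beat signal to a range profile using the standard relation R = c * f_b * T / (2 * B); all parameters are supplied by the caller, and none are values from the implemented radar.

```python
import numpy as np

def beat_to_range(beat, fs, bandwidth, sweep_time, c=3e8):
    """Range profile from one FMCW sweep: window and FFT the de-chirped
    beat signal, then map each beat frequency f_b to range
    R = c * f_b * T / (2 * B)."""
    n = len(beat)
    spectrum = np.abs(np.fft.rfft(beat * np.hanning(n)))
    f_beat = np.fft.rfftfreq(n, d=1.0 / fs)
    ranges = c * f_beat * sweep_time / (2.0 * bandwidth)
    return ranges, spectrum
```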
Development and testing of the EVS 2000 enhanced vision system
NASA Astrophysics Data System (ADS)
Way, Scott P.; Kerr, Richard; Imamura, Joe J.; Arnoldy, Dan; Zeylmaker, Richard; Zuro, Greg
2003-09-01
An effective enhanced vision system must operate over a broad spectral range in order to offer a pilot an optimized scene that includes runway background as well as airport lighting and aircraft operations. The large dynamic range of intensities of these images is best handled with separate imaging sensors. The EVS 2000 is a patented dual-band Infrared Enhanced Vision System (EVS) utilizing image fusion concepts to provide a single image from uncooled infrared imagers in both the LWIR and SWIR. The system is designed to provide commercial and corporate airline pilots with improved situational awareness at night and in degraded weather conditions. A prototype of this system was recently fabricated and flown on the Boeing Advanced Technology Demonstrator 737-900 aircraft. This paper will discuss the current EVS 2000 concept, show results taken from the Boeing Advanced Technology Demonstrator program, and discuss future plans for EVS systems.
Intelligent image capture of cartridge cases for firearms examiners
NASA Astrophysics Data System (ADS)
Jones, Brett C.; Guerci, Joseph R.
1997-02-01
The FBI's DRUGFIRETM system is a nationwide computerized networked image database of ballistic forensic evidence. This evidence includes images of cartridge cases and bullets obtained from both crime scenes and controlled test firings of seized weapons. Currently, the system is installed in over 80 forensic labs across the country and has enjoyed a high degree of success. In this paper, we discuss some of the issues and methods associated with providing a front-end semi-automated image capture system that simultaneously satisfies the often conflicting criteria of the many human examiners' visual perception and the criteria associated with optimizing autonomous digital image correlation. Specifically, we detail the proposed processing chain of an intelligent image capture system (IICS), involving a real-time capture 'assistant' that assesses the quality of the image under test utilizing a custom-designed neural network.
Nighttime Foreground Pedestrian Detection Based on Three-Dimensional Voxel Surface Model.
Li, Jing; Zhang, Fangbing; Wei, Lisong; Yang, Tao; Lu, Zhaoyang
2017-10-16
Pedestrian detection is among the most frequently-used preprocessing tasks in many surveillance application fields, from low-level people counting to high-level scene understanding. Even though many approaches perform well in the daytime with sufficient illumination, pedestrian detection at night is still a critical and challenging problem for video surveillance systems. To respond to this need, in this paper, we provide an affordable solution with a near-infrared stereo network camera, as well as a novel three-dimensional foreground pedestrian detection model. Specifically, instead of using an expensive thermal camera, we build a near-infrared stereo vision system with two calibrated network cameras and near-infrared lamps. The core of the system is a novel voxel surface model, which is able to estimate the dynamic changes of three-dimensional geometric information of the surveillance scene and to segment and locate foreground pedestrians in real time. A free update policy for unknown points is designed for model updating, and the extracted shadow of the pedestrian is adopted to remove foreground false alarms. To evaluate the performance of the proposed model, the system is deployed in several nighttime surveillance scenes. Experimental results demonstrate that our method is capable of nighttime pedestrian segmentation and detection in real time under heavy occlusion. In addition, the qualitative and quantitative comparison results show that our work outperforms classical background subtraction approaches and a recent RGB-D method, as well as achieving comparable performance with the state-of-the-art deep learning pedestrian detection method even with a much lower hardware cost. PMID:29035295
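The voxel surface model itself is specific to the paper, but its basic bookkeeping - voxelize the stereo-derived point cloud, compare it against a slowly updated background occupancy model, and flag newly occupied space - can be sketched briefly. This is a simplified stand-in, assuming Python/NumPy, with grid extent, voxel pitch, and update rates chosen arbitrarily:

```python
import numpy as np

def voxelize(points, origin, voxel=0.1, shape=(64, 64, 32)):
    """Quantize 3-D points (N, 3) into a boolean occupancy grid."""
    idx = np.floor((points - np.asarray(origin)) / voxel).astype(int)
    keep = np.all((idx >= 0) & (idx < np.array(shape)), axis=1)
    grid = np.zeros(shape, dtype=bool)
    grid[tuple(idx[keep].T)] = True
    return grid

def foreground_voxels(grid, bg_model, alpha=0.02, thresh=0.2):
    """Flag voxels occupied now but rarely occupied in the background
    model, then blend the new observation into the model (in place)."""
    fg = grid & (bg_model < thresh)
    bg_model *= (1 - alpha)
    bg_model += alpha * grid
    return fg
```

The paper's model additionally reasons about surfaces, unknown points, and cast shadows; this sketch keeps only the occupancy-differencing core.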
Wide swath imaging spectrometer utilizing a multi-modular design
Chrisp, Michael P.
2010-10-05
A wide swath imaging spectrometer utilizing an array of individual spectrometer modules in the telescope focal plane to provide an extended field of view. The spectrometer modules with their individual detectors are arranged so that their slits overlap, with motion over the scene providing contiguous spatial coverage. The number of modules can be varied to take full advantage of the field of view available from the telescope.
The Nature and Timing of Tele-Pseudoscopic Experiences
Hill, Harold; Allison, Robert S
2016-01-01
Interchanging the left and right eye views of a scene (pseudoscopic viewing) has been reported to produce vivid stereoscopic effects under certain conditions. In two separate field studies, we examined the experiences of 124 observers (76 in Study 1 and 48 in Study 2) while pseudoscopically viewing a distant natural outdoor scene. We found large individual differences in both the nature and the timing of their pseudoscopic experiences. While some observers failed to notice anything unusual about the pseudoscopic scene, most experienced multiple pseudoscopic phenomena, including apparent scene depth reversals, apparent object shape reversals, apparent size and flatness changes, apparent reversals of border ownership, and even complex illusory foreground surfaces. When multiple effects were experienced, patterns of co-occurrence suggested possible causal relationships between apparent scene depth reversals and several other pseudoscopic phenomena. The latency for experiencing pseudoscopic phenomena was found to correlate significantly with observer visual acuity, but not stereoacuity, in both studies. PMID:27482368
SCEGRAM: An image database for semantic and syntactic inconsistencies in scenes.
Öhlschläger, Sabine; Võ, Melissa Le-Hoa
2017-10-01
Our visual environment is not random, but follows compositional rules according to what objects are usually found where. Despite the growing interest in how such semantic and syntactic rules - a scene grammar - enable effective attentional guidance and object perception, no common image database containing highly controlled object-scene modifications has been publicly available. Such a database is essential in minimizing the risk that low-level features drive high-level effects of interest, which has been discussed as a possible source of controversial study results. To generate the first database of this kind - SCEGRAM - we took photographs of 62 real-world indoor scenes in six consistency conditions that contain semantic and syntactic (both mild and extreme) violations as well as their combinations. Importantly, scenes were always paired, so that an object was semantically consistent in one scene (e.g., ketchup in a kitchen) and inconsistent in the other (e.g., ketchup in a bathroom). Low-level salience did not differ between object-scene conditions and was generally moderate. Additionally, SCEGRAM contains consistency ratings for every object-scene condition, as well as object-absent scenes and object-only images. Finally, a cross-validation using eye movements replicated previous results of longer dwell times for both semantic and syntactic inconsistencies compared to consistent controls. In sum, the SCEGRAM image database is the first to contain well-controlled semantic and syntactic object-scene inconsistencies that can be used in a broad range of cognitive paradigms (e.g., verbal and pictorial priming, change detection, object identification, etc.), including paradigms addressing developmental aspects of scene grammar. SCEGRAM can be retrieved for research purposes from http://www.scenegrammarlab.com/research/scegram-database/.
NASA Astrophysics Data System (ADS)
Chiodini, G.; Vilardo, G.; Augusti, V.; Granieri, D.; Caliro, S.; Minopoli, C.; Terranova, C.
2007-12-01
A permanent automatic infrared (IR) station was installed at Solfatara crater, the most active zone of the Campi Flegrei caldera. After a positive in situ calibration of the IR camera, we analyzed 2175 thermal IR images of the same scene from 2004 to 2007. The scene includes a portion of the steam-heated hot soils of Solfatara. The experiment was initiated to detect and quantify temperature changes of the shallow thermal structure of a quiescent volcano such as Solfatara over long periods. Ambient temperature is the main parameter affecting IR temperatures, while air humidity and rain control image quality. A geometric correction of the images was necessary to remove the effects of slow movement of the camera. After a suitable correction, the images give a reliable and detailed picture of the temperature changes over the period October 2004 to January 2007, which suggests that the origins of the changes were linked to anthropogenic activity, vegetation growth, and an increase in the flux of hydrothermal fluids in the area of the hottest fumaroles. Two positive temperature anomalies were registered after the occurrence of two seismic swarms which affected the hydrothermal system of Solfatara in October 2005 and October 2006. It is worth noting that these signs were detected in a system characterized by a low level of activity; more pronounced signals would be expected in systems undergoing a real volcanic crisis. Results of the experiment show that this kind of monitoring system can be a suitable tool for volcanic surveillance.
Differences in change blindness to real-life scenes in adults with autism spectrum conditions.
Ashwin, Chris; Wheelwright, Sally; Baron-Cohen, Simon
2017-01-01
People often fail to detect large changes to visual scenes following a brief interruption, an effect known as 'change blindness'. People with autism spectrum conditions (ASC) have superior attention to detail and better discrimination of targets, and often notice small details that are missed by others. Together these predict that people with autism should show enhanced perception of changes in simple change detection paradigms, including reduced change blindness. However, change blindness studies to date have reported mixed results in ASC, sometimes finding no differences from controls or even enhanced change blindness. Attenuated change blindness has to date been reported only in children and adolescents with ASC, with no study reporting reduced change blindness in adults with ASC. The present study used a change blindness flicker task to investigate the detection of changes in images of everyday life in adults with ASC (n = 22) and controls (n = 22), using a simple change detection task design and a full range of original scenes as stimuli. Results showed that the adults with ASC had reduced change blindness compared to adult controls for changes to items of marginal interest in scenes, with no group difference for changes to items of central interest. There were no group differences in overall response latencies to correctly detect changes, nor in the overall number of missed detections in the experiment. However, the ASC group missed more changes of location for marginal interest items, showing some evidence of greater change blindness as well. These findings show both reduced change blindness to marginal interest changes in ASC, based on response latencies, and greater change blindness to changes of location of marginal interest items, based on detection rates. The findings of reduced change blindness are consistent with clinical reports that people with ASC often notice small changes to less salient items within their environment, and are in line with theories of enhanced local processing and greater attention to detail in ASC. The findings of lower detection rates for one of the marginal interest conditions may be related to problems in shifting attention or an overly focused attention spotlight.
The design and application of a multi-band IR imager
NASA Astrophysics Data System (ADS)
Li, Lijuan
2018-02-01
Multi-band IR imaging systems have many applications in security, national defense, and the petroleum and gas industry, so the relevant technologies have received growing attention in recent years. When used in missile warning and missile seeker systems, multi-band IR imaging technology offers high target recognition capability and a low false alarm rate if suitable spectral bands are selected. Compared with a traditional single-band IR imager, a multi-band IR imager can exploit spectral features in addition to spatial and temporal features to discriminate targets from background clutter and decoys. One of the key tasks is therefore to select spectral bands in which the feature difference between targets and false targets is evident and can be well utilized. A multi-band IR imager is a useful instrument for collecting multi-band IR images of targets, backgrounds, and decoys for spectral band selection studies, at low cost and with adjustable parameters and properties compared with a commercial imaging spectrometer. In this paper, a multi-band IR imaging system is developed that collects images in four spectral bands of various scenes in each acquisition cycle and can be extended to other short-wave and mid-wave IR band combinations by changing filter groups. The system consists of a broad-band optical system, a cryogenic InSb large-array detector, a spinning filter wheel, and an electronic processing system. Its performance was tested in real data collection experiments.
Real-time maritime scene simulation for ladar sensors
NASA Astrophysics Data System (ADS)
Christie, Chad L.; Gouthas, Efthimios; Swierkowski, Leszek; Williams, Owen M.
2011-06-01
Continuing interest exists in the development of cost-effective synthetic environments for testing Laser Detection and Ranging (ladar) sensors. In this paper we describe a PC-based system for real-time ladar scene simulation of ships and small boats in a dynamic maritime environment. In particular, we describe the techniques employed to generate range imagery accompanied by passive radiance imagery. Our ladar scene generation system is an evolutionary extension of the VIRSuite infrared scene simulation program and includes all previous features such as ocean wave simulation, the physically-realistic representation of boat and ship dynamics, wake generation and simulation of whitecaps, spray, wake trails and foam. A terrain simulation extension is also under development. In this paper we outline the development, capabilities and limitations of the VIRSuite extensions.
Recognition of 3-D Scene with Partially Occluded Objects
NASA Astrophysics Data System (ADS)
Lu, Siwei; Wong, Andrew K. C.
1987-03-01
This paper presents a robot vision system which is capable of recognizing objects in a 3-D scene and interpreting their spatial relation even though some objects in the scene may be partially occluded by other objects. An algorithm is developed to transform the geometric information from the range data into an attributed hypergraph representation (AHR). A hypergraph monomorphism algorithm is then used to compare the AHR of objects in the scene with a set of complete AHR's of prototypes. The capability of identifying connected components and interpreting various types of edges in the 3-D scene enables us to distinguish objects which are partially blocking each other in the scene. Using structural information stored in the primitive area graph, a heuristic hypergraph monomorphism algorithm provides an effective way for recognizing, locating, and interpreting partially occluded objects in the range image.
Bio-inspired display of polarization information using selected visual cues
NASA Astrophysics Data System (ADS)
Yemelyanov, Konstantin M.; Lin, Shih-Schon; Luis, William Q.; Pugh, Edward N., Jr.; Engheta, Nader
2003-12-01
For imaging systems the polarization of electromagnetic waves carries much potentially useful information about such features of the world as surface shape, material content, and local curvature of objects, as well as the relative locations of the source, object, and imaging system. The imaging system of the human eye, however, is "polarization-blind", and cannot utilize the polarization of light without the aid of an artificial, polarization-sensitive instrument. Therefore, polarization information captured by a man-made polarimetric imaging system must be displayed to a human observer in the form of visual cues that are naturally processed by the human visual system, while essentially preserving the other important non-polarization information (such as spectral and intensity information) in an image. In other words, some form of sensory substitution is needed to represent polarization "signals" without affecting other visual information such as color and brightness. We are investigating several bio-inspired representational methodologies for mapping polarization information into visual cues readily perceived by the human visual system, and determining which mappings are most suitable for specific applications such as object detection, navigation, sensing, scene classification, and surface deformation. The visual cues and strategies we are exploring are the use of coherently moving dots superimposed on the image to represent various ranges of polarization signals, overlaying textures with spatial and/or temporal signatures to segregate regions of the image with differing polarization, modulating luminance and/or color contrast of scenes in terms of certain aspects of polarization values, and fusing polarization images into intensity-only images. In this talk, we will present samples of our findings in this area.
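A concrete starting point for such mappings is the standard linear-Stokes decomposition of four polarizer-angle images. The sketch below computes the degree and angle of linear polarization, the two quantities typically modulated onto brightness, texture motion, or hue; the HSV-style mapping named in the comment is one common choice, not necessarily the authors':

```python
import numpy as np

def stokes_from_polarizer(i0, i45, i90, i135):
    """Linear Stokes parameters from intensity images taken through a
    polarizer at 0, 45, 90, and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def dolp_aop(s0, s1, s2, eps=1e-9):
    """Degree (0..1) and angle (radians) of linear polarization."""
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)
    aop = 0.5 * np.arctan2(s2, s1)
    # One display mapping: hue <- aop, saturation <- dolp, value <- s0.
    return dolp, aop
```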
Camera pose estimation for augmented reality in a small indoor dynamic scene
NASA Astrophysics Data System (ADS)
Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad
2017-09-01
Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degrees-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about scene structure, and they may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise-planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows rendering virtual objects in a meaningful way on the one hand, and improving the precision of the camera pose and the quality of the 3-D reconstruction of the environment by adding constraints on 3-D points and poses in the optimization process on the other. We propose to exploit the rigid motion of 3-D planes in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.
Fusion of monocular cues to detect man-made structures in aerial imagery
NASA Technical Reports Server (NTRS)
Shufelt, Jefferey; Mckeown, David M.
1991-01-01
The extraction of buildings from aerial imagery is a complex problem for automated computer vision. It requires locating regions in a scene that possess properties distinguishing them as man-made objects as opposed to naturally occurring terrain features. It is reasonable to assume that no single detection method can correctly delineate or verify buildings in every scene. A cooperative-methods paradigm is useful in approaching the building extraction problem. Using this paradigm, each extraction technique provides information which can be added or assimilated into an overall interpretation of the scene. Thus, the main objective is to explore the development of a computer vision system that integrates the results of various scene analysis techniques into an accurate and robust interpretation of the underlying three-dimensional scene. The problem of building hypothesis fusion in aerial imagery is discussed. Building extraction techniques are briefly surveyed, including four building extraction, verification, and clustering systems. A method for fusing the symbolic data generated by these systems is described and applied to monocular image and stereo image data sets. Evaluation methods for the fusion results are described, and the fusion results are analyzed using these methods.
Developmental changes in attention to faces and bodies in static and dynamic scenes.
Stoesz, Brenda M; Jakobson, Lorna S
2014-01-01
Typically developing individuals show a strong visual preference for faces and face-like stimuli; however, this may come at the expense of attending to bodies or to other aspects of a scene. The primary goal of the present study was to provide additional insight into the development of attentional mechanisms that underlie perception of real people in naturalistic scenes. We examined the looking behaviors of typical children, adolescents, and young adults as they viewed static and dynamic scenes depicting one or more people. Overall, participants showed a bias to attend to faces more than to other parts of the scenes. Adding motion cues led to a reduction in the number, but an increase in the average duration, of face fixations in single-character scenes. When multiple characters appeared in a scene, motion-related effects were attenuated and participants shifted their gaze from faces to bodies, or made off-screen glances. Children showed the largest effects related to the introduction of motion cues or additional characters, suggesting that they find dynamic faces difficult to process and are especially prone to look away from faces when viewing complex social scenes, a strategy that could reduce the cognitive and affective load imposed by having to divide one's attention between multiple faces. Our findings provide new insights into the typical development of social attention during natural scene viewing, and lay the foundation for future work examining gaze behaviors in typical and atypical development.
NASA Astrophysics Data System (ADS)
Holifield Collins, C.; Kautz, M. A.; Skirvin, S. M.; Metz, L. J.
2016-12-01
There are over 180 million hectares of rangelands and grazed forests in the central and western United States. Because the loss of perennial grasses and the subsequent increase in runoff and erosion can degrade these systems, woody cover species cannot be allowed to proliferate unchecked. The USDA Natural Resources Conservation Service (NRCS) has allocated extensive resources to employ brush management (removal) as a conservation practice to control woody species encroachment. The Rangeland Conservation Effects Assessment Project (CEAP) has been tasked with determining how effective the practice has been; however, its land managers lack a cost-effective means to conduct these assessments at the necessary scale. An ArcGIS toolbox for generating large-scale, Landsat-based, spatial maps of woody cover on grazing lands in the western United States was developed through a collaboration with NRCS Rangeland-CEAP. The toolbox contains two main components, image generation and temporal analysis, and utilizes simple interfaces requiring minimal user input. The image generation tool utilizes geographically specific algorithms, developed by combining moderate-resolution (30-m) Landsat imagery and high-resolution (1-m) National Agricultural Imagery Program (NAIP) aerial photography, to produce woody cover scenes at the Major Land Resource Area (MLRA) scale. The temporal analysis tool can be applied to these scenes to assess treatment effectiveness and monitor woody cover reemergence. RaBET provides rangeland managers an operational, inexpensive decision support tool to aid in the application of brush removal treatments and the assessment of their effectiveness.
Multispectral system analysis through modeling and simulation
NASA Technical Reports Server (NTRS)
Malila, W. A.; Gleason, J. M.; Cicone, R. C.
1977-01-01
The design and development of multispectral remote sensor systems and associated information extraction techniques should be optimized under the physical and economic constraints encountered and yet be effective over a wide range of scene and environmental conditions. Direct measurement of the full range of conditions to be encountered can be difficult, time consuming, and costly. Simulation of multispectral data by modeling scene, atmosphere, sensor, and data classifier characteristics is set forth as a viable alternative, particularly when coupled with limited sets of empirical measurements. A multispectral system modeling capability is described. Use of the model is illustrated for several applications - interpretation of remotely sensed data from agricultural and forest scenes, evaluating atmospheric effects in Landsat data, examining system design and operational configuration, and development of information extraction techniques.
A study of payload specialist station monitor size constraints. [space shuttle orbiters]
NASA Technical Reports Server (NTRS)
Kirkpatrick, M., III; Shields, N. L., Jr.; Malone, T. B.
1975-01-01
Constraints on the CRT display size for the shuttle orbiter cabin are studied. The viewing requirements placed on these monitors were assumed to involve display of imaged scenes providing visual feedback during payload operations and display of alphanumeric characters. Data on target resolution, target recognition, and range rate detection by human observers were utilized to determine viewing requirements for imaged scenes. Field-of-view and acuity requirements for a variety of payload operations were obtained, along with the necessary detection capability in terms of range-to-target size ratios. The monitor size necessary to meet the acuity requirements was established. An empirical test was conducted to determine required recognition sizes for displayed alphanumeric characters. The results of the test were used to determine the number of characters which could be simultaneously displayed, based on the recognition size requirements, using the proposed monitor size. A CRT display of 20 x 20 cm is recommended. A portion of the display area is used for displaying imaged scenes and the remaining display area is used for alphanumeric characters pertaining to the displayed scene. In the character-only mode, the entire display is used for characters.
Integrated framework for developing search and discrimination metrics
NASA Astrophysics Data System (ADS)
Copeland, Anthony C.; Trivedi, Mohan M.
1997-06-01
This paper presents an experimental framework for evaluating target signature metrics as models of human visual search and discrimination. This framework is based on a prototype eye tracking testbed, the Integrated Testbed for Eye Movement Studies (ITEMS). ITEMS determines an observer's visual fixation point while he studies a displayed image scene, by processing video of the observer's eye. The utility of this framework is illustrated with an experiment using gray-scale images of outdoor scenes that contain randomly placed targets. Each target is a square region of a specific size containing pixel values from another image of an outdoor scene. The real-world analogy of this experiment is that of a military observer looking upon the sensed image of a static scene to find camouflaged enemy targets that are reported to be in the area. ITEMS provides the data necessary to compute various statistics for each target to describe how easily the observers located it, including the likelihood the target was fixated or identified and the time required to do so. The computed values of several target signature metrics are compared to these statistics, and a second-order metric based on a model of image texture was found to be the most highly correlated.
Blind prediction of natural video quality.
Saad, Michele A; Bovik, Alan C; Charrier, Christophe
2014-03-01
We propose a blind (no reference or NR) video quality evaluation model that is nondistortion specific. The approach relies on a spatio-temporal model of video scenes in the discrete cosine transform domain, and on a model that characterizes the type of motion occurring in the scenes, to predict video quality. We use the models to define video statistics and perceptual features that are the basis of a video quality assessment (VQA) algorithm that does not require the presence of a pristine video to compare against in order to predict a perceptual quality score. The contributions of this paper are threefold. 1) We propose a spatio-temporal natural scene statistics (NSS) model for videos. 2) We propose a motion model that quantifies motion coherency in video scenes. 3) We show that the proposed NSS and motion coherency models are appropriate for quality assessment of videos, and we utilize them to design a blind VQA algorithm that correlates highly with human judgments of quality. The proposed algorithm, called video BLIINDS, is tested on the LIVE VQA database and on the EPFL-PoliMi video database and shown to perform close to the level of top performing reduced and full reference VQA algorithms.
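As a toy version of the spatio-temporal NSS idea, the sketch below gathers block-DCT coefficients of a frame difference and summarizes their distribution. The 8x8 blocks and the kurtosis summary are simplifications chosen for the sketch; video BLIINDS fits a richer parametric model to these coefficient statistics:

```python
import numpy as np
from scipy.fft import dctn

def dct_diff_features(prev, curr, b=8):
    """Spread and tail weight of block-DCT AC coefficients of a frame
    difference -- a crude stand-in for DCT-domain NSS features."""
    diff = curr.astype(float) - prev.astype(float)
    h, w = diff.shape
    coeffs = []
    for y in range(0, h - b + 1, b):
        for x in range(0, w - b + 1, b):
            blk = dctn(diff[y:y + b, x:x + b], norm='ortho')
            coeffs.append(blk.ravel()[1:])   # drop the DC term
    c = np.concatenate(coeffs)
    kurt = np.mean((c - c.mean()) ** 4) / (c.var() ** 2 + 1e-12)
    return c.std(), kurt
```

Distortions tend to disturb the heavy-tailed shape such statistics capture, which is what lets a no-reference model score quality without a pristine video to compare against.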
A Fast MEANSHIFT Algorithm-Based Target Tracking System
Sun, Jian
2012-01-01
Tracking moving targets in complex scenes using an active video camera is a challenging task. Tracking accuracy and efficiency are two key yet generally incompatible aspects of a Target Tracking System (TTS). A compromise scheme is studied in this paper. A fast mean-shift-based target tracking scheme is designed and realized, which is robust to partial occlusion and changes in object appearance. The physical simulation shows that the image signal processing speed is >50 frames/s. PMID:22969397
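Since the abstract does not detail the tracker, here is a generic mean-shift window update over a per-pixel weight map (for example, a color-histogram back-projection of the target model); the window format, iteration cap, and convergence tolerance are assumptions of the sketch:

```python
import numpy as np

def mean_shift(weights, window, max_iter=20, eps=0.5):
    """Shift an (x, y, w, h) window to the local centroid of a 2-D
    weight map until the shift falls below eps pixels."""
    H, W = weights.shape
    x, y, w, h = window
    for _ in range(max_iter):
        roi = weights[y:y + h, x:x + w]
        m = roi.sum()
        if m <= 0:
            break
        ys, xs = np.mgrid[0:h, 0:w]
        dx = (xs * roi).sum() / m - (w - 1) / 2.0
        dy = (ys * roi).sum() / m - (h - 1) / 2.0
        x = int(np.clip(round(x + dx), 0, W - w))
        y = int(np.clip(round(y + dy), 0, H - h))
        if dx * dx + dy * dy < eps * eps:
            break
    return x, y, w, h
```

Because each frame needs only a handful of centroid computations over a small window, this style of tracker is cheap enough to sustain the >50 frames/s the paper reports.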
Contextual cueing in naturalistic scenes: Global and local contexts.
Brockmole, James R; Castelhano, Monica S; Henderson, John M
2006-07-01
In contextual cueing, the position of a target within a group of distractors is learned over repeated exposure to a display with reference to a few nearby items rather than to the global pattern created by the elements. The authors contrasted the role of global and local contexts for contextual cueing in naturalistic scenes. Experiment 1 showed that learned target positions transfer when local information is altered but not when global information is changed. Experiment 2 showed that scene-target covariation is learned more slowly when local, but not global, information is repeated across trials than when global but not local information is repeated. Thus, in naturalistic scenes, observers are biased to associate target locations with global contexts. Copyright 2006 APA, all rights reserved.
Horizons in Learning Innovation through Technology: Prospects for Air Force Education Benefits
2010-06-10
prototyping, and implementation. Successfully implementing disruptive innovations requires change management to help steward the identification ...systems and environments for Air Force education benefits goes beyond the identification and analysis of emerging horizons. Processes and methods...scene, a patrol area, or a suspect lineup ("Augmented-reality," 2010). Connection to Innovation Triangle. The concepts of LVC and AR are quickly
House, Darlene R; Huffman, Gretchen; Walthall, Jennifer D H
2012-11-01
Motor vehicle collisions (MVCs) are the leading cause of death and disability among children older than 1 year. Many states currently mandate that all children between the ages of 4 and 8 years be restrained in booster seats. The implementation of a booster-seat law is generally thought to decrease the occurrence of injury to children. We hypothesized that appropriate restraint with booster seats would also cause a decrease in emergency department (ED) visits compared with children who were unrestrained. This is an important measure, as ED visits are a surrogate marker for injury. The main purpose of this study was to compare the rate of ED visits between children in booster seats and those in other or no restraint systems involved in MVCs. Injury severity was compared across restraint types as a secondary outcome of booster-seat use after the implementation of a state law. A prospective observational study was performed including all children 4 to 8 years old involved in MVCs to which emergency medical services was dispatched. Ambulance services used a novel on-scene computer charting system for all MVC-related encounters to collect age, sex, child-restraint system, Glasgow Coma Scale score, injuries, and final disposition. One hundred fifty-nine children were studied, with 58 children (35.6%) in booster seats, 73 children (45.2%) in seatbelts alone, and 28 children (19.1%) in no restraint system. Seventy-six children (47.7%), 74 by emergency medical services and 2 by private vehicle, were transported to the ED, with no significant difference between restraint use (P = 0.534). Utilization of a restraint system did not significantly impact MVC injury severity. However, of those children who either died (n = 2) or had an on-scene decreased Glasgow Coma Scale score (n = 6), 75% (6/8) were not restrained in a booster seat. The use of booster-seat restraints does not appear to be associated with whether a child will be transported to the ED for trauma evaluation.
Use of anomalous thermal imaging effects for multi-mode systems control during crystal growth
NASA Technical Reports Server (NTRS)
Wargo, Michael J.
1989-01-01
Real-time image processing techniques, combined with multitasking computational capabilities, are used to establish thermal imaging as a multimode sensor for systems control during crystal growth. Whereas certain regions of the high-temperature scene are presently unusable for quantitative determination of temperature, the anomalous information thus obtained is found to serve as a potentially low-noise source of other important systems control output. Using this approach, the light emission/reflection characteristics of the crystal, meniscus, and melt system are used to infer the crystal diameter, and a linear regression algorithm is employed to determine the local diameter trend. This data is utilized as input for closed-loop control of crystal shape. No performance penalty in thermal imaging speed is paid for this added functionality. The approach to secondary (diameter) sensor design and the systems control structure are discussed. Preliminary experimental results are presented.
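The diameter-trend step is an ordinary least-squares fit over recent diameter estimates. A minimal sketch, assuming the controller receives timestamped diameter readings (the units, sampling, and noise level below are invented for illustration):

```python
import numpy as np

def diameter_trend(times, diameters):
    """Least-squares slope and intercept of diameter vs. time; the
    slope's sign and magnitude can feed closed-loop shape control."""
    return np.polyfit(np.asarray(times, float),
                      np.asarray(diameters, float), 1)

# e.g., 30 s of noisy readings drifting upward by ~0.02 mm/s
t = np.arange(30.0)
d = 50.0 + 0.02 * t + 0.05 * np.random.randn(30)
slope, intercept = diameter_trend(t, d)
print(f"local trend: {slope:+.3f} mm/s")
```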
Change blindness, aging, and cognition
Rizzo, Matthew; Sparks, JonDavid; McEvoy, Sean; Viamonte, Sarah; Kellison, Ida; Vecera, Shaun P.
2011-01-01
Change blindness (CB), the inability to detect changes in visual scenes, may increase with age and early Alzheimer’s disease (AD). To test this hypothesis, participants were asked to localize changes in natural scenes. Dependent measures were response time (RT), hit rate, false positives (FP), and true sensitivity (d′). Increased age correlated with increased sensitivity and RT; AD predicted even slower RT. Accuracy and RT were negatively correlated. Differences in FP were nonsignificant. CB correlated with impaired attention, working memory, and executive function. Advanced age and AD were associated with increased CB, perhaps due to declining memory and attention. CB could affect real-world tasks, like automobile driving. PMID:19051127
Hierarchical, Three-Dimensional Measurement System for Crime Scene Scanning.
Marcin, Adamczyk; Maciej, Sieniło; Robert, Sitnik; Adam, Woźniak
2017-07-01
We present a new generation of three-dimensional (3D) measuring systems, developed for the process of crime scene documentation. This measuring system facilitates the preparation of more insightful, complete, and objective documentation for crime scenes. Our system reflects the actual requirements for hierarchical documentation, and it consists of three independent 3D scanners: a laser scanner for overall measurements, a situational structured light scanner for more minute measurements, and a detailed structured light scanner for the most detailed parts of the scene. The scanners have spatial resolutions of 2.0, 0.3, and 0.05 mm, respectively. The results of interviews we have conducted with technicians indicate that our developed 3D measuring system has significant potential to become a useful tool for forensic technicians. To ensure the maximum compatibility of our measuring system with the standards that regulate the documentation process, we have also performed a metrological validation and designated the maximum permissible length measurement error (E_MPE) for each structured light scanner. In this study, we present additional results regarding documentation processes conducted during crime scene inspections and a training session. © 2017 American Academy of Forensic Sciences.
Hu, Jian; Xu, Xiang-yang; Song, En-min; Tan, Hong-bao; Wang, Yi-ning
2009-09-01
To establish a new visual educational system of virtual reality for clinical dentistry, based on World Wide Web (WWW) webpages, in order to provide more three-dimensional multimedia resources to dental students and an online three-dimensional consulting system for patients. Based on computer graphics and three-dimensional webpage technologies, the software packages 3ds Max and Webmax were adopted in the system development. In the Windows environment, the architecture of the whole system was established step by step, including three-dimensional model construction, three-dimensional scene setup, embedding the three-dimensional scene into the webpage, re-editing the virtual scene, realization of interactions within the webpage, initial testing, and necessary adjustment. Five cases of three-dimensional interactive webpages for clinical dentistry were completed. The three-dimensional interactive webpages are accessible through a web browser on a personal computer, and users can interact with them by rotating, panning, and zooming the virtual scene. It is technically feasible to implement a visual educational system of virtual reality for clinical dentistry based on WWW webpages. Information related to clinical dentistry can be transmitted properly, visually, and interactively through three-dimensional webpages.
Exploring Shakespeare through the Cinematic Image: Seeing "Hamlet."
ERIC Educational Resources Information Center
Felter, Douglas P.
1993-01-01
Describes an innovative approach to teaching William Shakespeare's "Hamlet" utilizing various film versions of the play. Outlines a method of showing several versions of the same scene from different film adaptations. Describes student reaction to the variations among the different films. (HB)
Tracking of Environment Changes by Exploitation of Suomi-NPP VIIRS Data
NASA Astrophysics Data System (ADS)
Ibrahim, W.; Greene, E.; van Poollen, C.; Cumpton, D.
2017-12-01
NOAA's next-generation environmental satellite system, the Joint Polar Satellite System (JPSS), replaces the current Polar-orbiting Operational Environmental Satellites. JPSS satellites carry sensors which collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The first JPSS satellite, Suomi National Polar-orbiting Partnership (S-NPP), was launched in 2011. The JPSS ground system, the Common Ground System (CGS), provides command, control, and communications (C3) and data processing (DP). The S-NPP satellite includes the Visible Infrared Imaging Radiometer Suite (VIIRS), a 22-band scanning radiometer that provides top-of-atmosphere radiances and reflectances at a range of visible and infrared frequencies. Data collected from VIIRS are output by CGS DP into Raw Data Records (RDRs; Level-0), Sensor Data Records (SDRs; Level-1B), and Environmental Data Records (EDRs; Level-1C). This paper presents a methodology for monitoring and tracking the impact of weather conditions on environmental change by exploiting S-NPP VIIRS products. Three products created from VIIRS data - the SDR M-band True-Color (TC) composite visible imagery RGB (M5, M4, and M3), the SDR M-band Natural-Color (NC) composite imagery RGB (M10, M7, and M5), and the Vegetation Index (VI) EDR - are used to analyze the change in springtime vegetation and snowpack in California, USA, over four years, from the height of the drought in 2014 to its end in 2017. While the TC composite images are more appealing to the human observer, the NC composite images allow tracking and monitoring of changes in the Sierra Nevada snowpack, the reappearance of bodies of water, and changes in the vegetation composite. The VI product uses NDVI to characterize the vegetation temporally. By combining multiple VIIRS products, complex scenes can be visualized and analyzed temporally and spatially more accurately than with any single product. Assimilation of both imagery and EDR products allows for a better characterization of the impact of weather conditions on environmental change. This method can be expanded to characterize the impact of weather conditions on environmental changes in sea ice, snow, forests, agricultural land, population centers, etc.
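For reference, the NDVI behind the VI product is a simple band ratio; a minimal sketch, assuming the red and near-infrared reflectance bands (VIIRS I1/I2 or M5/M7) are already calibrated and co-registered:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index from NIR and red
    reflectances; values near +1 indicate dense, healthy vegetation."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)
```

Tracking the per-pixel NDVI of the same scene across successive springtimes is the temporal characterization the paper describes.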
Multi-modal cockpit interface for improved airport surface operations
NASA Technical Reports Server (NTRS)
Arthur, Jarvis J. (Inventor); Bailey, Randall E. (Inventor); Prinzel, III, Lawrence J. (Inventor); Kramer, Lynda J. (Inventor); Williams, Steven P. (Inventor)
2010-01-01
A system for multi-modal cockpit interface during surface operation of an aircraft comprises a head tracking device, a processing element, and a full-color head worn display. The processing element is configured to receive head position information from the head tracking device, to receive current location information of the aircraft, and to render a virtual airport scene corresponding to the head position information and the current aircraft location. The full-color head worn display is configured to receive the virtual airport scene from the processing element and to display the virtual airport scene. The current location information may be received from one of a global positioning system or an inertial navigation system.
Temporal and peripheral extraction of contextual cues from scenes during visual search.
Koehler, Kathryn; Eckstein, Miguel P
2017-02-01
Scene context is known to facilitate object recognition and guide visual search, but little work has focused on isolating image-based cues and evaluating their contributions to eye movement guidance and search performance. Here, we explore three types of contextual cues (a co-occurring object, the configuration of other objects, and the superordinate category of background elements) and assess their joint contributions to search performance in the framework of cue-combination and the temporal unfolding of their extraction. We also assess whether observers' ability to extract each contextual cue in the visual periphery is a bottleneck that determines the utilization and contribution of each cue to search guidance and decision accuracy. We find that during the first four fixations of a visual search task observers first utilize the configuration of objects for coarse eye movement guidance and later use co-occurring object information for finer guidance. In the absence of contextual cues, observers were suboptimally biased to report the target object as being absent. The presence of the co-occurring object was the only contextual cue that had a significant effect in reducing decision bias. The early influence of object-based cues on eye movements is corroborated by a clear demonstration of observers' ability to extract object cues up to 16° into the visual periphery. The joint contributions of the cues to decision search accuracy approximates that expected from the combination of statistically independent cues and optimal cue combination. Finally, the lack of utilization and contribution of the background-based contextual cue to search guidance cannot be explained by the availability of the contextual cue in the visual periphery; instead it is related to background cues providing the least inherent information about the precise location of the target in the scene.
Does object view influence the scene consistency effect?
Sastyin, Gergo; Niimi, Ryosuke; Yokosawa, Kazuhiko
2015-04-01
Traditional research on the scene consistency effect only used clearly recognizable object stimuli to show mutually interactive context effects for both the object and background components on scene perception (Davenport & Potter in Psychological Science, 15, 559-564, 2004). However, in real environments, objects are viewed from multiple viewpoints, including an accidental, hard-to-recognize one. When the observers named target objects in scenes (Experiments 1a and 1b, object recognition task), we replicated the scene consistency effect (i.e., there was higher accuracy for the objects with consistent backgrounds). However, there was a significant interaction effect between consistency and object viewpoint, which indicated that the scene consistency effect was more important for identifying objects in the accidental view condition than in the canonical view condition. Therefore, the object recognition system may rely more on the scene context when the object is difficult to recognize. In Experiment 2, the observers identified the background (background recognition task) while the scene consistency and object views were manipulated. The results showed that object viewpoint had no effect, while the scene consistency effect was observed. More specifically, the canonical and accidental views both equally provided contextual information for scene perception. These findings suggested that the mechanism for conscious recognition of objects could be dissociated from the mechanism for visual analysis of object images that were part of a scene. The "context" that the object images provided may have been derived from its view-invariant, relatively low-level visual features (e.g., color), rather than its semantic information.
Digital forensics: an analytical crime scene procedure model (ACSPM).
Bulbul, Halil Ibrahim; Yavuzcan, H Guclu; Ozel, Mesut
2013-12-10
In order to ensure that digital evidence is collected, preserved, examined, or transferred in a manner safeguarding the accuracy and reliability of the evidence, law enforcement and digital forensic units must establish and maintain an effective quality assurance system. The very first part of this system is standard operating procedures (SOPs) and/or models conforming to chain-of-custody requirements, which rely on the digital forensics "process-phase-procedure-task-subtask" sequence. An acceptable and thorough digital forensics (DF) process depends on sequential DF phases, each phase depends on sequential DF procedures, and each procedure in turn depends on tasks and subtasks. There are numerous DF process models in the literature that define DF phases, but no DF model has been identified that defines the phase-based sequential procedures for the crime scene. The analytical crime scene procedure model (ACSPM) that we suggest in this paper is intended to fill this gap. The proposed analytical procedure model for digital investigations at a crime scene is developed and defined for crime scene practitioners, with the main focus on crime scene digital forensic procedures rather than the whole digital investigation process and phases that end up in a court. When reviewing the relevant literature and consulting with law enforcement agencies, we found only device-based charts specific to a particular device and/or more general approaches to digital evidence management models from crime scene to court. After analyzing the needs of law enforcement organizations and realizing the absence of a crime scene digital investigation procedure model for crime scene activities, we decided to inspect the relevant literature in an analytical way. The outcome of this inspection is the model explained here, which is intended to provide guidance for the thorough and secure implementation of digital forensic procedures at a crime scene. In digital forensic investigations each case is unique and needs special examination; it is not possible to cover every aspect of crime scene digital forensics, but the proposed procedure model is meant as a general guideline for practitioners. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
System and method for extracting dominant orientations from a scene
Straub, Julian; Rosman, Guy; Freifeld, Oren; Leonard, John J.; Fisher, John W., III
2017-05-30
In one embodiment, a method of identifying the dominant orientations of a scene comprises representing a scene as a plurality of directional vectors. The scene may comprise a three-dimensional representation of a scene, and the plurality of directional vectors may comprise a plurality of surface normals. The method further comprises determining, based on the plurality of directional vectors, a plurality of orientations describing the scene. The determined plurality of orientations explains the directionality of the plurality of directional vectors. In certain embodiments, the plurality of orientations may have independent axes of rotation. The plurality of orientations may be determined by representing the plurality of directional vectors as lying on a mathematical representation of a sphere, and inferring the parameters of a statistical model to adapt the plurality of orientations to explain the positioning of the plurality of directional vectors lying on the mathematical representation of the sphere.
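The patent infers a statistical model over directional vectors on the sphere; as a simplified stand-in, the sketch below clusters unit surface normals with spherical k-means (cosine-similarity assignments, renormalized mean centroids). It ignores the antipodal sign ambiguity of normals and any coupling between orthogonal axes, both of which a full method would need to handle:

```python
import numpy as np

def spherical_kmeans(normals, k=3, iters=50, seed=0):
    """Cluster unit vectors on the sphere; the centroids approximate a
    scene's dominant orientations."""
    rng = np.random.default_rng(seed)
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    centers = n[rng.choice(len(n), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmax(n @ centers.T, axis=1)   # nearest by cosine
        for j in range(k):
            members = n[labels == j]
            if len(members):
                c = members.sum(axis=0)
                centers[j] = c / np.linalg.norm(c)  # back onto the sphere
    return centers, labels
```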
Reach Out and Touch Someone: West Alabama Designs a New Emergency Link.
ERIC Educational Resources Information Center
Coogan, Mercy Hardie
1980-01-01
Quality on-the-scene emergency care for a rural area is provided by West Alabama's Emergency Medical Services. The success of this delivery system is attributed to a radio/telephone communications system that provides quick, direct contact between paramedics at the scene and medical doctors miles away. (DS)
15 CFR 743.1 - Wassenaar Arrangement.
Code of Federal Regulations, 2011 CFR
2011-01-01
...' are defined as “focal plane arrays” designed for use with a scanning optical system that images a scene in a sequential manner to produce an image. 'Staring Arrays' are defined as “focal plane arrays” designed for use with a non-scanning optical system that images a scene. h. Gallium Arsenide or...
A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model
Lin, Qing; Han, Youngjoon
2014-01-01
A wearable guidance system is designed to provide context-dependent guidance messages to blind people while they traverse local pathways. The system is composed of three parts: moving scene analysis, walking context estimation and audio message delivery. The combination of a downward-pointing laser scanner and a camera is used to solve the challenging problem of moving scene analysis. By integrating laser data profiles and image edge profiles, a multimodal profile model is constructed to estimate jointly the ground plane, object locations and object types, by using a Bayesian network. The outputs of the moving scene analysis are further employed to estimate the walking context, which is defined as a fuzzy safety level that is inferred through a fuzzy logic model. Depending on the estimated walking context, the audio messages that best suit the current context are delivered to the user in a flexible manner. The proposed system is tested under various local pathway scenes, and the results confirm its efficiency in assisting blind people to attain autonomous mobility. PMID:25302812
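The walking-context step is a fuzzy inference. The toy Mamdani-style sketch below maps an obstacle distance and a ground-plane confidence to a safety score in [0, 1]; the membership breakpoints and rule set are invented for illustration and are not the paper's model:

```python
def safety_level(obstacle_dist_m, ground_conf):
    """Fuzzy safety score: distance to the nearest obstacle (m) and a
    ground-plane detection confidence in [0, 1] -> safety in [0, 1]."""
    d = float(obstacle_dist_m)
    near = max(0.0, min(1.0, (1.5 - d) / 1.5))   # full at 0 m, gone by 1.5 m
    far = max(0.0, min(1.0, (d - 1.5) / 1.5))    # starts 1.5 m, full at 3 m
    mid = max(0.0, 1.0 - near - far)             # in-between band
    unsafe = max(near, 1.0 - ground_conf)        # either cue can trigger
    caution = min(mid, ground_conf)              # fuzzy AND as min
    safe = min(far, ground_conf)
    s = unsafe + caution + safe
    return (0.5 * caution + 1.0 * safe) / s if s else 0.5

print(safety_level(0.8, 0.9))   # close obstacle -> low safety
print(safety_level(4.0, 0.9))   # clear path     -> high safety
```

The delivered audio message would then be selected from the discretized safety level rather than from raw sensor values.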
Simulating Optical Correlation on a Digital Image Processing
NASA Astrophysics Data System (ADS)
Denning, Bryan
1998-04-01
Optical correlation is a useful tool for recognizing objects in video scenes. In this paper, we explore the characteristics of a composite filter known as the equal correlation peak synthetic discriminant function (ECP SDF). Although the ECP SDF is commonly used in coherent optical correlation systems, we simulated the operation of a correlator using an EPIX frame grabber/image processor board to complete this work. Issues pertaining to simulating correlation using an EPIX board will be discussed. Additionally, the ability of the ECP SDF to detect objects that have been subjected to in-plane rotation and small scale changes will be addressed by correlating filters against true-class objects placed randomly within a scene. To test the robustness of the filters, the results of correlating the filter against false-class objects that closely resemble the true class will also be presented.
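The ECP SDF has a compact closed form: stack the vectorized training images as columns of X and choose combination coefficients so that the filter's inner product with every training image equals the same preset peak. A sketch with the correlation step done digitally via FFT, in the spirit of the paper's frame-grabber simulation (the training set and scene are left to the caller):

```python
import numpy as np

def ecp_sdf(train_imgs, peaks=None):
    """Equal-correlation-peak SDF: solve (X^T X) a = c and return the
    filter h = X a, so that h . x_i = c_i for each training image x_i."""
    X = np.stack([im.ravel().astype(float) for im in train_imgs], axis=1)
    c = np.ones(X.shape[1]) if peaks is None else np.asarray(peaks, float)
    a = np.linalg.solve(X.T @ X, c)
    return (X @ a).reshape(train_imgs[0].shape)

def correlate(scene, filt):
    """Cross-correlation via FFT -- the digital stand-in for the
    coherent optical correlator."""
    F = np.fft.fft2(scene)
    H = np.fft.fft2(filt, s=scene.shape)
    return np.real(np.fft.ifft2(F * np.conj(H)))
```

Training the filter on rotated and scaled views of the true-class object is what gives the composite its (limited) tolerance to in-plane rotation and scale changes.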
Application of Composite Small Calibration Objects in Traffic Accident Scene Photogrammetry
Chen, Qiang; Xu, Hongguo; Tan, Lidong
2015-01-01
In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies. PMID:26011052
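The two-dimensional direct linear transformation at the core of the method can be sketched as follows; this is only the standard linear initialization, while the paper's contribution is the joint reprojection-error refinement over all points of the composite calibration object.

```python
import numpy as np

def dlt_homography(world_xy, image_xy):
    """Basic 2-D direct linear transformation: estimate the 3x3
    homography mapping planar world points to image points via the
    SVD nullspace of the stacked constraint matrix."""
    A = []
    for (X, Y), (u, v) in zip(world_xy, image_xy):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Toy usage: four coplanar calibration points and their image positions.
world = [(0, 0), (1, 0), (1, 1), (0, 1)]
img = [(10.0, 12.0), (110.0, 15.0), (105.0, 118.0), (8.0, 112.0)]
H = dlt_homography(world, img)
p = H @ np.array([1.0, 1.0, 1.0])
print(p[:2] / p[2])  # ~ (105, 118)
```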
The new generation of OpenGL support in ROOT
NASA Astrophysics Data System (ADS)
Tadel, M.
2008-07-01
OpenGL has been promoted to become the main 3D rendering engine of the ROOT framework. This required a major re-modularization of OpenGL support on all levels, from basic window-system specific interface to medium-level object-representation and top-level scene management. This new architecture allows seamless integration of external scene-graph libraries into the ROOT OpenGL viewer as well as inclusion of ROOT 3D scenes into external GUI and OpenGL-based 3D-rendering frameworks. Scene representation was removed from inside of the viewer, allowing scene-data to be shared among several viewers and providing for a natural implementation of multi-view canvas layouts. The object-graph traversal infrastructure allows free mixing of 3D and 2D-pad graphics and makes implementation of ROOT canvas in pure OpenGL possible. Scene-elements representing ROOT objects trigger automatic instantiation of user-provided rendering-objects based on the dictionary information and class-naming convention. Additionally, a finer, per-object control over scene-updates is available to the user, allowing overhead-free maintenance of dynamic 3D scenes and creation of complex real-time animations. User-input handling was modularized as well, making it easy to support application-specific scene navigation, selection handling and tool management.
Did limits on payments for tobacco placements in US movies affect how movies are made?
Morgenstern, Matthis; Stoolmiller, Mike; Bergamini, Elaina; Sargent, James D
2017-01-01
Objective To compare how smoking was depicted in Hollywood movies before and after an intervention limiting paid product placement for cigarette brands. Design Correlational analysis. Setting/Participants Top box office hits released in the USA primarily between 1988 and 2011 (n=2134). Intervention The Master Settlement Agreement (MSA), implemented in 1998. Main outcome measures This study analyses trends for whether or not movies depicted smoking, and among movies with smoking, counts for character smoking scenes and average smoking scene duration. Results There was no detectable trend for any measure prior to the MSA. In 1999, 79% of movies contained smoking, and movies with smoking contained 8 scenes of character smoking, with the average duration of a character smoking scene being 81 s. After the MSA, there were significant negative post-MSA changes (p<0.05) for linear trends in proportion of movies with any smoking (which declined to 41% by 2011) and, in movies with smoking, counts of character smoking scenes (which declined to 4 by 2011). Between 1999 and 2000, there was an immediate and dramatic drop in average length of a character smoking scene, which decreased to 19 s, and remained there for the duration of the study. The probability that the drop of −62.5 (95% CI −55.1 to −70.0) seconds was due to chance was p < 10^−16. Conclusions This study's correlational data suggest that restricting payments for tobacco product placement coincided with profound changes in the duration of smoking depictions in movies. PMID:26822189
Watanabe, Hiroshi; Teramoto, Wataru; Umemura, Hiroyuki
2007-01-01
Objective We studied the effects of the presentation of a visual sign that warned subjects of acceleration around the yaw and pitch axes in virtual reality (VR) on their heart rate variability. Methods Synchronization of the immersive virtual reality equipment (CAVE) and motion base system generated a driving scene and provided subjects with dynamic and wide-ranging depth information and vestibular input. The heart rate variability of 21 subjects was measured while the subjects observed a simulated driving scene for 16 minutes under three different conditions. Results When the predictive sign of the acceleration appeared 3500 ms before the acceleration, the index of the activity of the autonomic nervous system (low/high frequency ratio; LF/HF ratio) of subjects did not change much, whereas when no sign appeared the LF/HF ratio increased over the observation time. When the predictive sign of the acceleration appeared 750 ms before the acceleration, no systematic change occurred. Conclusion The visual sign which informed subjects of the acceleration affected the activity of the autonomic nervous system when it appeared long enough before the acceleration. Also, our results showed the importance of the interval between the sign and the event and the relationship between the gradual representation of events and their quantity. PMID:17903267
Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.
Stone, Scott A; Tata, Matthew S
2017-01-01
Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting visually salient events and augmenting them into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. Future successes are probable, as neuromorphic devices are likely to become faster and smaller, making this system much more feasible.
Signature modelling and radiometric rendering equations in infrared scene simulation systems
NASA Astrophysics Data System (ADS)
Willers, Cornelius J.; Willers, Maria S.; Lapierre, Fabian
2011-11-01
The development and optimisation of modern infrared systems necessitate the use of simulation systems to create radiometrically realistic representations (e.g. images) of infrared scenes. Such simulation systems are used in signature prediction, the development of surveillance and missile sensors, signal/image processing algorithm development and aircraft self-protection countermeasure system development and evaluation. Even the most cursory investigation reveals a multitude of factors affecting the infrared signatures of real-world objects. Factors such as spectral emissivity, spatial/volumetric radiance distribution, specular reflection, reflected direct sunlight, reflected ambient light, atmospheric degradation and more, all affect the presentation of an object's instantaneous signature. The signature is furthermore dynamically varying as a result of internal and external influences on the object, resulting from the heat balance comprising insolation, internal heat sources, aerodynamic heating (airborne objects), conduction, convection and radiation. In order to accurately render the object's signature in a computer simulation, the rendering equations must therefore account for all the elements of the signature. In this overview paper, the signature models, rendering equations and application frameworks of three infrared simulation systems are reviewed and compared. The paper first considers the problem of infrared scene simulation in a framework for simulation validation. This approach provides concise definitions and a convenient context for considering signature models and subsequent computer implementation. The primary radiometric requirements for an infrared scene simulator are presented next. The signature models and rendering equations implemented in OSMOSIS (Belgian Royal Military Academy), DIRSIG (Rochester Institute of Technology) and OSSIM (CSIR & Denel Dynamics) are reviewed. In spite of these three simulation systems' different application focus areas, their underlying physics-based approach is similar. The commonalities and differences between the different systems are investigated, in the context of their somewhat different application areas. The application of an infrared scene simulation system towards the development of imaging missiles and missile countermeasures is briefly described. Flowing from the review of the available models and equations, recommendations are made to further enhance and improve the signature models and rendering equations in infrared scene simulators.
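Though each simulator has its own full formulation, the per-pixel apparent-radiance bookkeeping they share can be summarized in a simplified, hedged form (the symbols below are generic, not the exact notation of OSMOSIS, DIRSIG or OSSIM):

```latex
L_{\mathrm{app}}(\lambda) =
  \tau_{\mathrm{atm}}(\lambda)\left[
    \varepsilon(\lambda)\, L_{\mathrm{bb}}(\lambda, T)
    + \rho_d(\lambda)\,\frac{E_{\mathrm{sun}}(\lambda) + E_{\mathrm{sky}}(\lambda)}{\pi}
    + \rho_s(\lambda)\, L_{\mathrm{refl}}(\lambda)
  \right] + L_{\mathrm{path}}(\lambda)
```

where ε is the spectral emissivity, L_bb the Planck blackbody radiance at surface temperature T, ρ_d and ρ_s the diffuse and specular reflectances (picking up solar/ambient irradiance and specularly reflected scene radiance, respectively), and τ_atm and L_path the atmospheric transmittance and path radiance; T itself follows from the heat balance listed in the abstract.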
A habituation based approach for detection of visual changes in surveillance camera
NASA Astrophysics Data System (ADS)
Sha'abani, M. N. A. H.; Adan, N. F.; Sabani, M. S. M.; Abdullah, F.; Nadira, J. H. S.; Yasin, M. S. M.
2017-09-01
This paper investigates a habituation-based approach to detecting visual changes using video surveillance systems in a passive environment. Various techniques have been introduced for dynamic environments, such as motion detection, object classification and behaviour analysis. However, in a passive environment, most of the scenes recorded by the surveillance system are normal, so running a complex analysis at all times is computationally expensive, especially at high video resolution. Thus, a mechanism of attention is required, where the system only responds to an abnormal event. This paper proposes a novelty detection mechanism for detecting visual changes and a habituation-based approach to measure the level of novelty. The objective of the paper is to investigate the feasibility of the habituation-based approach in detecting visual changes. Experimental results show that the approach is able to accurately detect the presence of novelty as deviations from the learned knowledge.
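A common way to realize such an attention mechanism is a habituation model in the spirit of Stanley's equation, tau·dy/dt = alpha·(y0 − y) − S(t): the response to a repeatedly matched scene decays, while an unmatched scene stays highly novel. The discrete sketch below is illustrative and not necessarily the paper's exact formulation; all names and constants are assumptions.

```python
import numpy as np

class HabituatingNovelty:
    """Novelty detection with habituation (sketch after Stanley's
    model, tau*dy/dt = alpha*(y0 - y) - S; details are illustrative,
    not the paper's exact formulation)."""
    def __init__(self, match_dist=10.0, y0=1.0, alpha=1.05, tau=5.0):
        self.protos, self.effic = [], []
        self.match_dist, self.y0, self.alpha, self.tau = match_dist, y0, alpha, tau

    def update(self, feature):
        # Find the closest stored prototype of previously seen scenes.
        if self.protos:
            d = [np.linalg.norm(feature - p) for p in self.protos]
            j = int(np.argmin(d))
        if not self.protos or d[j] > self.match_dist:
            self.protos.append(np.asarray(feature, float))
            self.effic.append(self.y0)
            j = len(self.protos) - 1
        # Habituate the matched prototype: efficacy decays with stimulation.
        y = self.effic[j]
        y += (self.alpha * (self.y0 - y) - 1.0) / self.tau
        self.effic[j] = float(np.clip(y, 0.0, self.y0))
        return self.effic[j]  # high value = novel, low = habituated

det = HabituatingNovelty()
normal = np.zeros(8)
for _ in range(30):
    score = det.update(normal)        # repeated scene -> score decays
print(round(score, 3))
print(round(det.update(np.full(8, 99.0)), 3))  # new scene -> high score
```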
An Interactive Logistics Centre Information Integration System Using Virtual Reality
NASA Astrophysics Data System (ADS)
Hong, S.; Mao, B.
2018-04-01
The logistics industry plays a very important role in the operation of modern cities. Meanwhile, the development of the logistics industry has given rise to various problems that urgently need to be solved, such as the safety of logistics products. This paper combines the study of logistics industry traceability and logistics centre environment safety supervision with virtual reality technology to create an interactive logistics centre information integration system. The proposed system utilizes the immersive character of virtual reality to simulate the real logistics centre scene, so that operations staff can conduct safety supervision training at any time without regional restrictions. On the one hand, large volumes of sensor data can be used to simulate a variety of disaster emergency situations; on the other hand, personnel operation data can be collected to analyse improper operations, which greatly improves training efficiency.
Slow changing postural cues cancel visual field dependence on self-tilt detection.
Scotto Di Cesare, C; Macaluso, T; Mestre, D R; Bringoux, L
2015-01-01
Interindividual differences influence the multisensory integration process involved in spatial perception. Here, we assessed the effect of visual field dependence on self-tilt detection relative to upright, as a function of static vs. slow-changing visual or postural cues. To that aim, we manipulated slow rotations (i.e., 0.05°/s) of the body and/or the visual scene in pitch. Participants had to indicate whether they felt being tilted forward at successive angles. Results show that thresholds for self-tilt detection substantially differed between visual field dependent/independent subjects, when only the visual scene was rotated. This difference was no longer present when the body was actually rotated, whatever the visual scene condition (i.e., absent, static or rotated relative to the observer). These results suggest that the cancellation of visual field dependence by dynamic postural cues may rely on a multisensory reweighting process, where slow-changing vestibular/somatosensory inputs may prevail over visual inputs. Copyright © 2014 Elsevier B.V. All rights reserved.
Hierarchy-associated semantic-rule inference framework for classifying indoor scenes
NASA Astrophysics Data System (ADS)
Yu, Dan; Liu, Peng; Ye, Zhipeng; Tang, Xianglong; Zhao, Wei
2016-03-01
Typically, the initial task of classifying indoor scenes is challenging, because the spatial layout and decoration of a scene can vary considerably. Recent efforts at classifying object relationships commonly depend on the results of scene annotation and predefined rules, making classification inflexible. Furthermore, annotation results are easily affected by external factors. Inspired by human cognition, a scene-classification framework was proposed using the empirically based annotation (EBA) and a match-over rule-based (MRB) inference system. The semantic hierarchy of images is exploited by EBA to construct rules empirically for MRB classification. The problem of scene classification is divided into low-level annotation and high-level inference from a macro perspective. Low-level annotation involves detecting the semantic hierarchy and annotating the scene with a deformable-parts model and a bag-of-visual-words model. In high-level inference, hierarchical rules are extracted to train the decision tree for classification. The categories of testing samples are generated from the parts to the whole. Compared with traditional classification strategies, the proposed semantic hierarchy and corresponding rules reduce the effect of a variable background and improve the classification performance. The proposed framework was evaluated on a popular indoor scene dataset, and the experimental results demonstrate its effectiveness.
Wu, Chia-Chien; Wang, Hsueh-Cheng; Pomplun, Marc
2014-12-01
A previous study (Vision Research 51 (2011) 1192-1205) found evidence for semantic guidance of visual attention during the inspection of real-world scenes, i.e., an influence of semantic relationships among scene objects on overt shifts of attention. In particular, the results revealed an observer bias toward gaze transitions between semantically similar objects. However, this effect is not necessarily indicative of semantic processing of individual objects but may be mediated by knowledge of the scene gist, which does not require object recognition, or by known spatial dependency among objects. To examine the mechanisms underlying semantic guidance, in the present study, participants were asked to view a series of displays with the scene gist excluded and spatial dependency varied. Our results show that spatial dependency among objects seems to be sufficient to induce semantic guidance. Scene gist, on the other hand, does not seem to affect how observers use semantic information to guide attention while viewing natural scenes. Extracting semantic information mainly based on spatial dependency may be an efficient strategy of the visual system that only adds little cognitive load to the viewing task. Copyright © 2014 Elsevier Ltd. All rights reserved.
Unconscious analyses of visual scenes based on feature conjunctions.
Tachibana, Ryosuke; Noguchi, Yasuki
2015-06-01
To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain. (c) 2015 APA, all rights reserved.
System for Thermal Imaging of Hot Moving Objects
NASA Technical Reports Server (NTRS)
Weinstein, Leonard; Hundley, Jason
2007-01-01
The High Altitude/Re-Entry Vehicle Infrared Imaging (HARVII) system is a portable instrumentation system for tracking and thermal imaging of a possibly distant and moving object. The HARVII is designed specifically for measuring the changing temperature distribution on a space shuttle as it reenters the atmosphere. The HARVII system or other systems based on the design of the HARVII system could also be used for such purposes as determining temperature distributions in fires, on volcanoes, and on surfaces of hot models in wind tunnels. In yet another potential application, the HARVII or a similar system would be used to infer atmospheric pollution levels from images of the Sun acquired at multiple wavelengths over regions of interest. The HARVII system includes the Ratio Intensity Thermography System (RITS) and a tracking subsystem that keeps the RITS aimed at the moving object of interest. The subsystem of primary interest here is the RITS (see figure), which acquires and digitizes images of the same scene at different wavelengths in rapid succession. Assuming that the time interval between successive measurements is short enough that temperatures do not change appreciably, the digitized image data at the different wavelengths are processed to extract temperatures according to the principle of ratio-intensity thermography: The temperature at a given location in a scene is inferred from the ratios between or among intensities of infrared radiation from that location at two or more wavelengths. This principle, based on the Planck equation for the intensity of electromagnetic radiation as a function of wavelength and temperature, is valid as long as the observed body is a gray or black body and there is minimal atmospheric absorption of radiation.
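Under the graybody assumption stated above, the emissivity cancels in a two-wavelength ratio, and with Wien's approximation to the Planck equation the temperature inverts in closed form: T = C2(1/λ1 − 1/λ2) / (5 ln(λ2/λ1) − ln R), with R = I1/I2. A sketch follows; the HARVII band choices are not reproduced here, so the wavelengths below are arbitrary.

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def ratio_temperature(I1, I2, lam1, lam2):
    """Two-color (ratio-intensity) thermography via Wien's
    approximation, I(lam, T) ~ eps * C1 * lam^-5 * exp(-C2/(lam*T)).
    For a graybody, eps cancels in the ratio R = I1/I2, giving
    T = C2*(1/lam1 - 1/lam2) / (5*ln(lam2/lam1) - ln R)."""
    R = I1 / I2
    return C2 * (1.0 / lam1 - 1.0 / lam2) / (5.0 * np.log(lam2 / lam1) - np.log(R))

# Round trip: synthesize Wien radiances at 1200 K and recover T.
def wien(lam, T, eps=0.8, C1=3.7418e-16):
    return eps * C1 * lam**-5 * np.exp(-C2 / (lam * T))

lam1, lam2, T = 3.8e-6, 4.8e-6, 1200.0
print(ratio_temperature(wien(lam1, T), wien(lam2, T), lam1, lam2))  # ~1200.0
```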
NASA Technical Reports Server (NTRS)
1982-01-01
A project to develop an effective mobility aid for blind pedestrians which acquires consecutive images of the scenes before a moving pedestrian, which locates and identifies the pedestrian's path and potential obstacles in the path, which presents path and obstacle information to the pedestrian, and which operates in real-time is discussed. The mobility aid has three principal components: an image acquisition system, an image interpretation system, and an information presentation system. The image acquisition system consists of a miniature, solid-state TV camera which transforms the scene before the blind pedestrian into an image which can be received by the image interpretation system. The image interpretation system is implemented on a microprocessor which has been programmed to execute real-time feature extraction and scene analysis algorithms for locating and identifying the pedestrian's path and potential obstacles. Identity and location information is presented to the pedestrian by means of tactile coding and machine-generated speech.
Richard, Christian M; Wright, Richard D; Ee, Cheryl; Prime, Steven L; Shimizu, Yujiro; Vavrik, John
2002-01-01
The effect of a concurrent auditory task on visual search was investigated using an image-flicker technique. Participants were undergraduate university students with normal or corrected-to-normal vision who searched for changes in images of driving scenes that involved either driving-related (e.g., traffic light) or driving-unrelated (e.g., mailbox) scene elements. The results indicated that response times were significantly slower if the search was accompanied by a concurrent auditory task. In addition, slower overall responses to scenes involving driving-unrelated changes suggest that the underlying process affected by the concurrent auditory task is strategic in nature. These results were interpreted in terms of their implications for using a cellular telephone while driving. Actual or potential applications of this research include the development of safer in-vehicle communication devices.
Multi- and hyperspectral scene modeling
NASA Astrophysics Data System (ADS)
Borel, Christoph C.; Tuttle, Ronald F.
2011-06-01
This paper shows how to use the public domain raytracer POV-Ray (Persistence Of Vision Raytracer) to render multi- and hyper-spectral scenes. The scripting environment allows automatic changing of the reflectance and transmittance parameters. The radiosity rendering mode allows accurate simulation of multiple reflections between surfaces and also allows semi-transparent surfaces such as plant leaves. We show that POV-Ray computes occlusion accurately using a test scene with two blocks under a uniform sky. A complex scene representing a plant canopy is generated using a few lines of script. With appropriate rendering settings, shadows cast by leaves are rendered in many bands. Comparing single and multiple reflection renderings, the effect of multiple reflections is clearly visible and accounts for 25% of the overall apparent canopy reflectance in the near infrared.
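The band-by-band scripting workflow can be driven from outside POV-Ray by rewriting the reflectance values in the scene file and rendering once per spectral band. A hedged sketch, assuming the povray binary is on the path; the geometry, file names and per-band reflectances are placeholders, not the paper's canopy scene.

```python
import subprocess

SCENE = """
global_settings {{ radiosity {{ brightness 1 }} }}   // multiple reflections
camera {{ location <0, 2, -6> look_at <0, 1, 0> }}
light_source {{ <10, 20, -10> color rgb 1 }}
plane {{ y, 0 pigment {{ color rgb {ground:.3f} }} finish {{ diffuse 1 }} }}
sphere {{ <0, 1, 0>, 1 pigment {{ color rgb {leaf:.3f} }} finish {{ diffuse 1 }} }}
"""

# Placeholder per-band reflectances (leaf, ground); note the leaf is
# much brighter in the near infrared, as in the abstract.
bands = {"red_660nm": (0.08, 0.20), "nir_850nm": (0.45, 0.25)}

for name, (leaf, ground) in bands.items():
    pov = f"{name}.pov"
    with open(pov, "w") as f:
        f.write(SCENE.format(leaf=leaf, ground=ground))
    # Render one grayscale radiance image per spectral band.
    subprocess.run(["povray", f"+I{pov}", f"+O{name}.png",
                    "+W512", "+H512"], check=True)
```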
Urakawa, Tomokazu; Ogata, Katsuya; Kimura, Takahiro; Kume, Yuko; Tobimatsu, Shozo
2015-01-01
Disambiguation of a noisy visual scene with prior knowledge is an indispensable task of the visual system. To adequately adapt to a dynamically changing visual environment full of noisy visual scenes, the implementation of knowledge-mediated disambiguation in the brain is imperative and essential for proceeding as fast as possible under the limited capacity of visual image processing. However, the temporal profile of the disambiguation process has not yet been fully elucidated in the brain. The present study attempted to determine how quickly knowledge-mediated disambiguation began to proceed along visual areas after the onset of a two-tone ambiguous image using magnetoencephalography with high temporal resolution. Using the predictive coding framework, we focused on activity reduction for the two-tone ambiguous image as an index of the implementation of disambiguation. Source analysis revealed that a significant activity reduction was observed in the lateral occipital area at approximately 120 ms after the onset of the ambiguous image, but not in preceding activity (about 115 ms) in the cuneus when participants perceptually disambiguated the ambiguous image with prior knowledge. These results suggested that knowledge-mediated disambiguation may be implemented as early as approximately 120 ms following an ambiguous visual scene, at least in the lateral occipital area, and provided an insight into the temporal profile of the disambiguation process of a noisy visual scene with prior knowledge. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory
NASA Astrophysics Data System (ADS)
Dichter, W.; Doris, K.; Conkling, C.
1982-06-01
A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.
Spectral feature characterization methods for blood stain detection in crime scene backgrounds
NASA Astrophysics Data System (ADS)
Yang, Jie; Mathew, Jobin J.; Dube, Roger R.; Messinger, David W.
2016-05-01
Blood stains are one of the most important types of evidence in forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Blood spectral signatures containing unique reflectance or absorption features are important both for forensic on-site investigation and laboratory testing. They can be used for target detection and identification applied to crime scene hyperspectral imagery, and also be utilized to analyze the spectral variation of blood on various backgrounds. Non-blood stains often mislead the detection and can generate false alarms at a real crime scene, especially on dark and red backgrounds. This paper measured the reflectance of liquid blood and 9 kinds of non-blood samples in the range of 350 nm - 2500 nm on various crime scene backgrounds, such as pure samples contained in petri dishes with various thicknesses, mixed samples with fabrics of different colors and materials, and mixed samples with wood, all of which are examined to provide sub-visual evidence for detecting and recognizing blood from non-blood samples in a realistic crime scene. The spectral differences between blood and non-blood samples are examined and spectral features such as "peaks" and "depths" of reflectance are selected. Two blood stain detection methods are proposed in this paper. The first method uses an index defined as the ratio of "depth" minus "peak" over "depth" plus "peak" within a wavelength range of the reflectance spectrum. The second method uses the relative band depth of selected wavelength ranges of the reflectance spectrum. Results show that the index method is able to discriminate blood from non-blood samples on most tested crime scene backgrounds, but is not able to detect it on black felt, whereas the relative band depth method is able to discriminate blood from non-blood samples on all of the tested background material types and colors.
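A sketch of the first (index) method as described, computing (depth − peak)/(depth + peak) from the reflectance inside a chosen window; the 500-600 nm window below, around the hemoglobin absorption dip, is an assumption and not necessarily the paper's band choice.

```python
import numpy as np

def blood_index(wavelengths_nm, reflectance, lo=500.0, hi=600.0):
    """Sketch of the abstract's first detection method: within a
    chosen wavelength window, take the reflectance 'peak' (local
    maximum) and 'depth' (local minimum, e.g. the hemoglobin
    absorption dip) and form (depth - peak) / (depth + peak).
    The 500-600 nm window is an assumption for this sketch."""
    w = np.asarray(wavelengths_nm, float)
    r = np.asarray(reflectance, float)
    sel = (w >= lo) & (w <= hi)
    peak, depth = r[sel].max(), r[sel].min()
    return (depth - peak) / (depth + peak)

# Toy spectra: a dip near 550 nm mimics blood; a flat spectrum does not.
w = np.linspace(350, 2500, 500)
blood = 0.4 - 0.3 * np.exp(-((w - 550) / 30.0) ** 2)
fabric = np.full_like(w, 0.4)
print(blood_index(w, blood))   # strongly negative -> blood-like
print(blood_index(w, fabric))  # ~0 -> non-blood
```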
A system for learning statistical motion patterns.
Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve
2006-09-01
Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.
Calibration of UAS imagery inside and outside of shadows for improved vegetation index computation
NASA Astrophysics Data System (ADS)
Bondi, Elizabeth; Salvaggio, Carl; Montanaro, Matthew; Gerace, Aaron D.
2016-05-01
Vegetation health and vigor can be assessed with data from multi- and hyperspectral airborne and satellite-borne sensors using index products such as the normalized difference vegetation index (NDVI). Recent advances in unmanned aerial systems (UAS) technology have created the opportunity to access these same image data sets in a more cost-effective manner with higher temporal and spatial resolution. Another advantage of these systems includes the ability to gather data in almost any weather condition, including complete cloud cover, when data has not been available before from traditional platforms. The ability to collect in these varied conditions, meteorological and temporal, will present researchers and producers with many new challenges. Particularly, cloud shadows and self-shadowing by vegetation must be taken into consideration in imagery collected from UAS platforms to avoid variation in NDVI due to changes in illumination within a single scene, and between collection flights. A workflow is presented to compensate for variations in vegetation indices due to shadows and variation in illumination levels in high resolution imagery collected from UAS platforms. Other calibration methods that producers may currently be utilizing produce NDVI products that still contain shadow boundaries and variations due to illumination, whereas the final NDVI mosaic from this workflow does not.
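For reference, the NDVI itself is a per-pixel band ratio, and the illumination problem shows up directly in it. The sketch below computes NDVI plus a deliberately naive brightness-based shadow mask; the paper's workflow compensates illumination rather than simply masking, so the threshold here is purely illustrative.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, per pixel."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-9)

def shadow_mask(nir, red, thresh=0.08):
    """Naive stand-in for the paper's illumination compensation:
    flag pixels whose overall brightness is low as shadowed so they
    can be corrected or excluded before mosaicking."""
    return 0.5 * (np.asarray(nir, float) + np.asarray(red, float)) < thresh

# Toy 2x2 scene: sunlit vegetation, shadowed vegetation, sunlit soil.
nir = np.array([[0.50, 0.05], [0.30, 0.28]])
red = np.array([[0.08, 0.01], [0.25, 0.26]])
print(ndvi(nir, red))         # vegetation high, soil near zero
print(shadow_mask(nir, red))  # only the dim pixel is flagged
```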
Chen, Juan; Sperandio, Irene; Goodale, Melvyn Alan
2015-01-01
Objects rarely appear in isolation in natural scenes. Although many studies have investigated how nearby objects influence perception in cluttered scenes (i.e., crowding), none has studied how nearby objects influence visually guided action. In Experiment 1, we found that participants could scale their grasp to the size of a crowded target even when they could not perceive its size, demonstrating for the first time that neurologically intact participants can use visual information that is not available to conscious report to scale their grasp to real objects in real scenes. In Experiments 2 and 3, we found that changing the eccentricity of the display and the orientation of the flankers had no effect on grasping but strongly affected perception. The differential effects of eccentricity and flanker orientation on perception and grasping show that the known differences in retinotopy between the ventral and dorsal streams are reflected in the way in which people deal with targets in cluttered scenes. © The Author(s) 2014.
A mobile unit for memory retrieval in daily life based on image and sensor processing
NASA Astrophysics Data System (ADS)
Takesumi, Ryuji; Ueda, Yasuhiro; Nakanishi, Hidenobu; Nakamura, Atsuyoshi; Kakimori, Nobuaki
2003-10-01
We developed a Mobile Unit whose purpose is to support memory retrieval in daily life. In this paper, we describe the two characteristic factors of this unit: (1) behavior classification with an acceleration sensor, and (2) extraction of environmental differences with image processing technology. In (1), by analyzing the power and frequency of an acceleration sensor aligned with the gravity direction, the user's activities can be classified into walking, staying, and so on. In (2), by extracting the difference between the beginning scene and the ending scene of a stay with image processing, the result of the user's actions is recognized as a change in the environment. Using these two techniques, specific scenes of daily life can be extracted, and important information at scene changes can be recorded. In particular, we describe the effectiveness of the unit in supporting retrieval of important items, such as things left behind or work left half-finished.
NASA Astrophysics Data System (ADS)
Knuth, F.; Crone, T. J.; Marburg, A.
2017-12-01
The Ocean Observatories Initiative's (OOI) Cabled Array is delivering real-time high-definition video data from an HD video camera (CAMHD), installed at the Mushroom hydrothermal vent in the ASHES hydrothermal vent field within the caldera of Axial Seamount, an active submarine volcano located approximately 450 kilometers off the coast of Washington at a depth of 1,542 m. Every three hours the camera pans, zooms and focuses in on nine distinct scenes of scientific interest across the vent, producing 14-minute-long videos during each run. This standardized video sampling routine enables scientists to programmatically analyze the content of the video using automated image analysis techniques. Each scene-specific time series dataset can service a wide range of scientific investigations, including the estimation of bacterial flux into the system by quantifying chemosynthetic bacterial clusters (floc) present in the water column, relating periodicity in hydrothermal vent fluid flow to earth tides, measuring vent chimney growth in response to changing hydrothermal fluid flow rates, or mapping the patterns of fauna colonization, distribution and composition across the vent over time. We are currently investigating the seventh scene in the sampling routine, focused on the bacterial mat covering the seafloor at the base of the vent. We quantify the change in bacterial mat coverage over time using image analysis techniques, and examine the relationship between mat coverage, fluid flow processes, episodic chimney collapse events, and other processes observed by Cabled Array instrumentation. This analysis is being conducted using cloud-enabled computer vision processing techniques, programmatic image analysis, and time-lapse video data collected over the course of the first CAMHD deployment, from November 2015 to July 2016.
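A minimal version of the per-frame coverage statistic might look like the following; the real OOI processing is considerably more sophisticated, and the "bright mat on dark basalt" threshold is an assumption for the sketch.

```python
import numpy as np

def mat_coverage(frame_gray, thresh=200):
    """Estimate fractional coverage of white bacterial mat in one
    grayscale video frame by simple thresholding (the threshold and
    the bright-mat-on-dark-seafloor assumption are illustrative)."""
    frame = np.asarray(frame_gray)
    return float((frame >= thresh).mean())

# Toy time series over three synthetic frames of growing coverage.
frames = [np.where(np.arange(100 * 100).reshape(100, 100) % 100 < k,
                   255, 30) for k in (10, 25, 40)]
print([mat_coverage(f) for f in frames])  # [0.1, 0.25, 0.4]
```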
ERIC Educational Resources Information Center
Brady, Timothy F.; Tenenbaum, Joshua B.
2013-01-01
When remembering a real-world scene, people encode both detailed information about specific objects and higher order information like the overall gist of the scene. However, formal models of change detection, like those used to estimate visual working memory capacity, assume observers encode only a simple memory representation that includes no…
Prehospital traumatic cardiac arrest: the cost of futility.
Rosemurgy, A S; Norris, P A; Olson, S M; Hurst, J M; Albrink, M H
1993-09-01
Of 12,462 trauma patients cared for by prehospital services from October 1, 1989 to March 31, 1991, 138 patients underwent CPR at the scene or during transport because of the absence of blood pressure, pulse, and respiration. Ninety-six (70%) suffered blunt trauma, 42 (30%) suffered penetrating trauma. Sixty (43%) were transported by air utilizing county-wide transport protocols. None of the patients survived. Aggregate care cost $871,186.00. In 11 cases (8%), tissue for transplantation was procured (only corneas). Trauma patients who require CPR at the scene or in transport die. Infrequent organ procurement does not seem to justify the cost (primarily borne by hospitals), consumption of resources, and exposure of health care providers to occupational health hazards. The wisdom of transporting trauma victims suffering cardiopulmonary arrest at the scene or during transport must be questioned. Allocation of resources to these patients is not an insular medical issue, but a broad concern for our society, and society should decide if the "cost of futility" is excessive.
Use of the TM tasseled cap transform for interpretation of spectral contrasts in an urban scene
NASA Technical Reports Server (NTRS)
Goward, S. N.; Wharton, S. W.
1984-01-01
Investigations are being conducted with the objective of developing automated numerical image analysis procedures. In this context, physically-based multispectral data transforms are examined as a means to incorporate a priori knowledge of land radiance properties in the analysis process. A physically-based transform of TM observations was developed, extending the Landsat MSS Tasseled Cap transform reported by Kauth and Thomas (1976) to TM data observations. The present study aims to examine the utility of the TM Tasseled Cap transform as applied to TM data from an urban landscape. The analysis is based on a 512 x 512 subset of the Washington, DC November 2, 1982 TM scene, centered on Springfield, VA. It appears that the TM Tasseled Cap transformation provides a good means to explain the physical attributes of the land in the Washington scene. This result suggests a direction by which a priori knowledge of landscape spectral patterns may be incorporated into numerical image analysis.
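Mechanically, the Tasseled Cap is a fixed linear transform, one matrix multiply per pixel. The sketch below shows the operation only; the coefficients are placeholders, not the published TM Tasseled Cap values.

```python
import numpy as np

# Placeholder 3x6 coefficient matrix (brightness, greenness, wetness)
# for six reflective TM bands -- illustrative values only, NOT the
# published TM Tasseled Cap coefficients.
COEFS = np.array([
    [0.3, 0.3, 0.4, 0.5, 0.5, 0.4],      # "brightness"
    [-0.2, -0.2, -0.5, 0.7, 0.1, -0.2],  # "greenness"
    [0.1, 0.2, 0.3, 0.3, -0.6, -0.6],    # "wetness"
])

def tasseled_cap(tm_pixels):
    """Apply the Tasseled Cap as a linear transform: rows of
    tm_pixels are 6-band TM observations; output rows are the
    transformed feature values."""
    return np.asarray(tm_pixels, float) @ COEFS.T

pixel = [[0.1, 0.12, 0.1, 0.5, 0.2, 0.1]]  # vegetated-looking spectrum
print(tasseled_cap(pixel))
```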
The lucky image-motion prediction for simple scene observation based soft-sensor technology
NASA Astrophysics Data System (ADS)
Li, Yan; Su, Yun; Hu, Bin
2015-08-01
High resolution is important for Earth remote sensors, but platform vibration is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes to utilize soft-sensor technology for image-motion prediction and focuses on algorithm optimization for imaging image-motion prediction. Simulation results indicate that the improved lucky image-motion stabilization algorithm, combining a back-propagation neural network (BP NN) and a support vector machine (SVM), is the most suitable for simple-scene image stabilization. The relative error of the soft-sensor-based image-motion prediction is below 5%, and the training speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.
Bruni, Aline Thaís; Velho, Jesus Antonio; Ferreira, Arthur Serra Lopes; Tasso, Maria Júlia; Ferrari, Raíssa Santos; Yoshida, Ricardo Luís; Dias, Marcos Salvador; Leite, Vitor Barbanti Pereira
2014-08-01
This study uses statistical techniques to evaluate reports on suicide scenes; it utilizes 80 reports from different locations in Brazil, randomly collected from both federal and state jurisdictions. We aimed to assess a heterogeneous group of cases in order to obtain an overall perspective of the problem. We evaluated variables regarding the characteristics of the crime scene, such as the traces that were detected (blood, instruments and clothes), and we addressed the methodology employed by the experts. A qualitative approach using basic statistics revealed a wide distribution as to how the issue was addressed in the documents. We examined a quantitative approach involving an empirical equation and we used multivariate procedures to validate the quantitative methodology proposed for this empirical equation. The methodology successfully identified the main differences in the information presented in the reports, showing that there is no standardized method of analyzing evidence. Copyright © 2014 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Hulse-Smith, Lee; Illes, Mike
2007-01-01
In a previous study, mechanical engineering models were utilized to deduce impact velocity and droplet volume of circular bloodstains by measuring stain diameter and counting spines radiating from their outer edge. A blind trial study was subsequently undertaken to evaluate the accuracy of this technique, using an applied, crime scene methodology. Calculations from bloodstains produced on paper, drywall, and wood were used to derive surface-specific equations to predict 39 unknown mock crime scene bloodstains created over a range of impact velocities (2.2-5.7 m/sec) and droplet volumes (12-45 microL). Strong correlations were found between expected and observed results, with correlation coefficients ranging between 0.83 and 0.99. The 95% confidence limit associated with predictions of impact velocity and droplet volume was calculated for paper (0.28 m/sec, 1.7 microL), drywall (0.37 m/sec, 1.7 microL), and wood (0.65 m/sec, 5.2 microL).
The saccadic flow baseline: Accounting for image-independent biases in fixation behavior.
Clarke, Alasdair D F; Stainer, Matthew J; Tatler, Benjamin W; Hunt, Amelia R
2017-09-01
Much effort has been made to explain eye guidance during natural scene viewing. However, a substantial component of fixation placement appears to be a set of consistent biases in eye movement behavior. We introduce the concept of saccadic flow, a generalization of the central bias that describes the image-independent conditional probability of making a saccade to (x_{i+1}, y_{i+1}), given a fixation at (x_i, y_i). We suggest that saccadic flow can be a useful prior when carrying out analyses of fixation locations, and can be used as a submodule in models of eye movements during scene viewing. We demonstrate the utility of this idea by presenting bias-weighted gaze landscapes, and show that there is a link between the likelihood of a saccade under the flow model, and the salience of the following fixation. We also present a minor improvement to our central bias model (based on using a multivariate truncated Gaussian), and investigate the leftwards and coarse-to-fine biases in scene viewing.
Parallel phase-sensitive three-dimensional imaging camera
Smithpeter, Colin L.; Hoover, Eddie R.; Pain, Bedabrata; Hancock, Bruce R.; Nellums, Robert O.
2007-09-25
An apparatus is disclosed for generating a three-dimensional (3-D) image of a scene illuminated by a pulsed light source (e.g. a laser or light-emitting diode). The apparatus, referred to as a phase-sensitive 3-D imaging camera utilizes a two-dimensional (2-D) array of photodetectors to receive light that is reflected or scattered from the scene and processes an electrical output signal from each photodetector in the 2-D array in parallel using multiple modulators, each having inputs of the photodetector output signal and a reference signal, with the reference signal provided to each modulator having a different phase delay. The output from each modulator is provided to a computational unit which can be used to generate intensity and range information for use in generating a 3-D image of the scene. The 3-D camera is capable of generating a 3-D image using a single pulse of light, or alternately can be used to generate subsequent 3-D images with each additional pulse of light.
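With reference phases stepped by 90 degrees across the modulators, each pixel admits the classic four-bucket phase recovery; the sketch below assumes that textbook arrangement (the patent's actual modulator count and phase delays may differ).

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def range_from_four_phases(I0, I90, I180, I270, f_mod):
    """Classic four-bucket recovery for phase-sensitive ranging: with
    per-pixel correlations taken at reference phase delays of
    0/90/180/270 degrees, the round-trip phase is
    phi = atan2(I90 - I270, I0 - I180) and range = c*phi/(4*pi*f_mod)."""
    phi = np.arctan2(I90 - I270, I0 - I180)
    phi = np.mod(phi, 2 * np.pi)  # fold into [0, 2*pi)
    return C * phi / (4 * np.pi * f_mod)

# Toy pixel: synthesize buckets for a 7.5 m target at 10 MHz modulation.
f_mod, true_range = 10e6, 7.5
phi = 4 * np.pi * f_mod * true_range / C
buckets = [np.cos(phi - d) for d in (0, np.pi/2, np.pi, 3*np.pi/2)]
print(range_from_four_phases(*buckets, f_mod))  # ~7.5
```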
NASA Astrophysics Data System (ADS)
Soltanian-Zadeh, Hamid; Windham, Joe P.
1992-04-01
Maximizing the minimum absolute contrast-to-noise ratios (CNRs) between a desired feature and multiple interfering processes, by linear combination of images in a magnetic resonance imaging (MRI) scene sequence, is attractive for MRI analysis and interpretation. A general formulation of the problem is presented, along with a novel solution utilizing the simple and numerically stable method of Gram-Schmidt orthogonalization. We derive explicit solutions for the case of two interfering features first, then for three interfering features, and, finally, using a typical example, for an arbitrary number of interfering features. For the case of two interfering features, we also provide simplified analytical expressions for the signal-to-noise ratios (SNRs) and CNRs of the filtered images. The technique is demonstrated through its applications to simulated and acquired MRI scene sequences of a human brain with a cerebral infarction. For these applications, a 50 to 100% improvement for the smallest absolute CNR is obtained.
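The core interference-rejection step can be sketched as follows: orthonormalize the interfering features' signature vectors and remove their span from the desired feature's signature, leaving combination weights that null the interferers. The paper's additional worst-case-CNR optimization and noise normalization are omitted in this sketch.

```python
import numpy as np

def interference_free_weights(desired, interferers):
    """Gram-Schmidt sketch: given the signature vector of the desired
    feature and those of interfering features (one value per image in
    the MRI scene sequence), return combination weights that null the
    interferers while responding to the desired feature."""
    basis = []
    for s in interferers:
        v = np.asarray(s, float)
        for b in basis:
            v = v - (v @ b) * b
        n = np.linalg.norm(v)
        if n > 1e-12:
            basis.append(v / n)
    w = np.asarray(desired, float)
    for b in basis:
        w = w - (w @ b) * b
    return w / np.linalg.norm(w)

# Four images; desired lesion signature vs. two interfering tissues.
desired = [1.0, 0.2, 0.8, 0.1]
interferers = [[0.9, 0.9, 0.1, 0.1], [0.2, 0.8, 0.7, 0.6]]
w = interference_free_weights(desired, interferers)
print([round(float(w @ np.asarray(s)), 6) for s in interferers])  # ~[0, 0]
print(round(float(w @ np.asarray(desired)), 3))                   # > 0
```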
Heterogeneity Measurement Based on Distance Measure for Polarimetric SAR Data
NASA Astrophysics Data System (ADS)
Xing, Xiaoli; Chen, Qihao; Liu, Xiuguo
2018-04-01
To effectively test scene heterogeneity in polarimetric synthetic aperture radar (PolSAR) data, this paper introduces a distance measure that utilizes the similarity between a sample and the surrounding pixels. Moreover, given the influence of the data distribution and of texture modeling, the K distance measure is deduced from the Wishart distance measure. Specifically, the average of the pixels in the local window replaces the class-center coherency or covariance matrix, and the Wishart and K distance measures are calculated between this average matrix and the pixels. Then, the ratio of the standard deviation to the mean is established for the Wishart and K distance measures, and the two features are defined and applied to reflect the complexity of the scene. The proposed heterogeneity measure is obtained by integrating the two features using the Pauli basis. Experiments conducted on single-look and multilook PolSAR data demonstrate the effectiveness of the proposed method for the detection of scene heterogeneity.
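A sketch of the described statistic for the Wishart case, with the K-distribution variant omitted: compute the distance of each pixel covariance from the local window mean, then take the standard deviation-to-mean ratio. Window handling and matrix sizes are simplified, and the toy data is synthetic.

```python
import numpy as np

def wishart_distance(C, Sigma):
    """Wishart distance of a pixel covariance C from a reference
    covariance Sigma: d = ln|Sigma| + tr(Sigma^-1 C)."""
    _, logdet = np.linalg.slogdet(Sigma)
    return logdet + np.trace(np.linalg.solve(Sigma, C)).real

def heterogeneity(window_covs):
    """Heterogeneity of a local window of PolSAR pixel covariance
    matrices: distances from the window mean, then the std/mean
    ratio, as in the abstract (Wishart case only)."""
    mean_cov = sum(window_covs) / len(window_covs)
    d = np.array([wishart_distance(C, mean_cov) for C in window_covs])
    return d.std() / d.mean()

# Toy windows of 3x3 Hermitian covariances: homogeneous vs. mixed.
rng = np.random.default_rng(0)
def rand_cov(scale):
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    return scale * (A @ A.conj().T) / 3 + 1e-3 * np.eye(3)

homogeneous = [rand_cov(1.0) for _ in range(25)]
mixed = [rand_cov(1.0) for _ in range(13)] + [rand_cov(8.0) for _ in range(12)]
print(round(heterogeneity(homogeneous), 3),
      round(heterogeneity(mixed), 3))  # mixed window typically scores higher
```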
Accuracy Assessment of Underwater Photogrammetric Three Dimensional Modelling for Coral Reefs
NASA Astrophysics Data System (ADS)
Guo, T.; Capra, A.; Troyer, M.; Gruen, A.; Brooks, A. J.; Hench, J. L.; Schmitt, R. J.; Holbrook, S. J.; Dubbini, M.
2016-06-01
Recent advances in automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single media cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated by using images acquired from a system camera mounted in an underwater housing and the popular GoPro cameras respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in the air and underwater and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.
Cybersickness in the presence of scene rotational movements along different axes.
Lo, W T; So, R H
2001-02-01
Compelling scene movements in a virtual reality (VR) system can cause symptoms of motion sickness (i.e., cybersickness). A within-subject experiment has been conducted to investigate the effects of scene oscillations along different axes on the level of cybersickness. Sixteen male participants were exposed to four 20-min VR simulation sessions. The four sessions used the same virtual environment but with scene oscillations along different axes, i.e., pitch, yaw, roll, or no oscillation (speed: 30 degrees/s, range: +/- 60 degrees). Verbal ratings of the level of nausea were taken at 5-min intervals during the sessions and sickness symptoms were also measured before and after the sessions using the Simulator Sickness Questionnaire (SSQ). In the presence of scene oscillation, both nausea ratings and SSQ scores increased at significantly higher rates than with no oscillation. While individual participants exhibited different susceptibilities to nausea associated with VR simulation containing scene oscillations along different rotational axes, the overall effects of axis among our group of 16 randomly selected participants were not significant. The main effects of, and interactions among, scene oscillation, duration, and participants are discussed in the paper.
Space Launch System Booster Test- Behind the Scenes
2016-06-24
Get a sneak peek behind the scenes of how engineers and technicians at Orbital ATK in Promontory, Utah, are coming together to test the most powerful booster for NASA’s new rocket, the Space Launch System. SLS will make missions possible to an asteroid and the journey to Mars. For more information on SLS, visit www.nasa.gov/sls.
Multi-Sensor Scene Synthesis and Analysis
1981-09-01
[Extraction residue: table-of-contents fragment. Recoverable section titles: Quad Trees for Image Representation and Processing; Databases: Definitions and Basic Concepts; Use of Databases in Hierarchical Scene Analysis; Use of Relational Tables; Multisensor Image Database Systems (MIDAS); Relational Database System for Pictures; Relational Pictorial Database.]
Using articulated scene models for dynamic 3d scene analysis in vista spaces
NASA Astrophysics Data System (ADS)
Beuter, Niklas; Swadzba, Agnes; Kummert, Franz; Wachsmuth, Sven
2010-09-01
In this paper we describe an efficient but detailed new approach to analyze complex dynamic scenes directly in 3D. The arising information is important for mobile robots to solve tasks in the area of household robotics. In our work a mobile robot builds an articulated scene model by observing the environment in the visual field or rather in the so-called vista space. The articulated scene model consists of essential knowledge about the static background, about autonomously moving entities like humans or robots and finally, in contrast to existing approaches, information about articulated parts. These parts describe movable objects like chairs, doors or other tangible entities, which could be moved by an agent. The combination of the static scene, the self-moving entities and the movable objects in one articulated scene model enhances the calculation of each single part. The reconstruction process for parts of the static scene benefits from removal of the dynamic parts and in turn, the moving parts can be extracted more easily through the knowledge about the background. In our experiments we show that the system delivers simultaneously an accurate static background model, moving persons and movable objects. This information of the articulated scene model enables a mobile robot to detect and keep track of interaction partners, to navigate safely through the environment and finally, to strengthen the interaction with the user through the knowledge about the 3D articulated objects and 3D scene analysis.
Method and apparatus for coherent imaging of infrared energy
Hutchinson, Donald P.
1998-01-01
A coherent camera system performs ranging, spectroscopy, and thermal imaging. Local oscillator radiation is combined with target scene radiation to enable heterodyne detection by the coherent camera's two-dimensional photodetector array. Versatility enables deployment of the system in either a passive mode (where no laser energy is actively transmitted toward the target scene) or an active mode (where a transmitting laser is used to actively illuminate the target scene). The two-dimensional photodetector array eliminates the need to mechanically scan the detector. Each element of the photodetector array produces an intermediate frequency signal that is amplified, filtered, and rectified by the coherent camera's integrated circuitry. By spectroscopic examination of the frequency components of each pixel of the detector array, a high-resolution, three-dimensional or holographic image of the target scene is produced for applications such as air pollution studies, atmospheric disturbance monitoring, and military weapons targeting.
Reduction of background clutter in structured lighting systems
Carlson, Jeffrey J.; Giles, Michael K.; Padilla, Denise D.; Davidson, Jr., Patrick A.; Novick, David K.; Wilson, Christopher W.
2010-06-22
Methods for segmenting the reflected light of an illumination source having a characteristic wavelength from background illumination (i.e. clutter) in structured lighting systems can comprise pulsing the light source used to illuminate a scene, pulsing the light source synchronously with the opening of a shutter in an imaging device, estimating the contribution of background clutter by interpolation of images of the scene collected at multiple spectral bands not including the characteristic wavelength and subtracting the estimated background contribution from an image of the scene comprising the wavelength of the light source, and placing a polarizing filter between the imaging device and the scene, where the illumination source can be polarized in the same orientation as the polarizing filter. Apparatus for segmenting the light of an illumination source from background illumination can comprise an illuminator, an image receiver for receiving images of multiple spectral bands, a processor for calculations and interpolations, and a polarizing filter.
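The interpolate-and-subtract step can be sketched directly: estimate the clutter at the laser's characteristic wavelength by linear interpolation between two flanking bands that exclude it, then subtract. Band positions and image values below are made up for the example.

```python
import numpy as np

def declutter(img_laser_band, img_band_lo, img_band_hi,
              lam_laser, lam_lo, lam_hi):
    """Estimate background clutter at the laser wavelength by linear
    interpolation between two spectral bands that exclude it, then
    subtract, leaving (ideally) only the structured-light return."""
    t = (lam_laser - lam_lo) / (lam_hi - lam_lo)
    clutter_est = (1 - t) * img_band_lo + t * img_band_hi
    return np.clip(img_laser_band - clutter_est, 0, None)

# Toy scene: smooth background ramp plus a laser stripe at column 5.
bg = np.tile(np.linspace(10, 50, 10), (10, 1))
laser = np.zeros((10, 10))
laser[:, 5] = 100.0
img_650 = 0.95 * bg + laser       # laser band (e.g. 650 nm)
img_600, img_700 = 0.9 * bg, bg   # flanking clutter-only bands
print(declutter(img_650, img_600, img_700, 650, 600, 700)[:, 5].mean())
```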
Marshall Space Flight Center 1960-1985: 25th anniversary report
NASA Technical Reports Server (NTRS)
1985-01-01
The Marshall Space Flight Center marks its 25th anniversary with a record of notable achievements. These accomplishments are the essence of the Marshall Center's history. Behind the scenes of the space launches and missions, however, lies the story of challenges faced and problems solved. The highlights of that story are presented. The story is organized not as a straight chronology but as three parallel reviews of the major assignments: propulsion systems and launch vehicles, space science research and technology, and manned space systems. The general goals were to reach space, to know and understand the space environment, and to inhabit and utilize space for the benefit of mankind. Also included is a chronology of major events, presented as a fold-out chart for ready reference.
Loose fusion based on SLAM and IMU for indoor environment
NASA Astrophysics Data System (ADS)
Zhu, Haijiang; Wang, Zhicheng; Zhou, Jinglin; Wang, Xuejing
2018-04-01
The simultaneous localization and mapping (SLAM) method based on RGB-D sensors has been widely researched in recent years. However, the accuracy of RGB-D SLAM relies heavily on corresponding feature points, and the position can be lost in scenes with sparse textures. Therefore, many fusion methods using RGB-D information and inertial measurement unit (IMU) data have been investigated to improve the accuracy of SLAM systems. However, these fusion methods usually do not take into account the number of matched feature points, and the pose estimated from RGB-D information may be inaccurate when the number of correct matches is too small. Thus, considering the impact of matches on the SLAM system and the problem of losing position in scenes with few textures, a loose fusion method combining RGB-D with IMU is proposed in this paper. In the proposed method, we design a loose fusion strategy based on the RGB-D camera information and IMU data, which uses the IMU data for position estimation when the corresponding point matches are too few; when there are many matches, the RGB-D information is still used to estimate position. The final pose is optimized within the General Graph Optimization (g2o) framework to reduce error. The experimental results show that the proposed method outperforms the RGB-D-only method and can continue working stably in indoor environments with sparse textures.
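A minimal sketch of the switching logic described above, assuming hypothetical helpers (match_features, pose_from_matches, pose_from_imu) and an arbitrary match-count threshold; the paper's actual threshold and g2o graph construction are not reproduced here.

```python
MIN_MATCHES = 30  # hypothetical threshold for trusting the visual estimate

def estimate_pose(prev_frame, rgbd_frame, imu_measurements, prev_pose):
    # Loose fusion: prefer the RGB-D pose when enough feature matches exist,
    # otherwise fall back to IMU propagation; the result would later be
    # refined as a node in a g2o pose graph.
    matches = match_features(prev_frame, rgbd_frame)   # hypothetical matcher
    if len(matches) >= MIN_MATCHES:
        return pose_from_matches(matches, prev_pose)   # visual estimate
    return pose_from_imu(imu_measurements, prev_pose)  # IMU dead reckoning
```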
Hubble Space Telescope Deploy, Cuba, Bahamas and Gulf of Mexico
1990-04-29
STS031-151-010 (25 April 1990) --- The Hubble Space Telescope (HST), still in the grasp of Discovery's Remote Manipulator System (RMS), is backdropped over Cuba and the Bahama Islands. In this scene, its solar array panels and high-gain antennas have yet to be deployed. The scene was captured with a large-format Aero Linhof camera used by several previous flight crews to record Earth scenes.
Synchronization of spontaneous eyeblinks while viewing video stories
Nakano, Tamami; Yamamoto, Yoshiharu; Kitajo, Keiichi; Takahashi, Toshimitsu; Kitazawa, Shigeru
2009-01-01
Blinks are generally suppressed during a task that requires visual attention and tend to occur immediately before or after the task when the timing of its onset and offset are explicitly given. During the viewing of video stories, blinks are expected to occur at explicit breaks such as scene changes. However, given that the scene length is unpredictable, there should also be appropriate timing for blinking within a scene to prevent temporal loss of critical visual information. Here, we show that spontaneous blinks were highly synchronized between and within subjects when they viewed the same short video stories, but were not explicitly tied to the scene breaks. Synchronized blinks occurred during scenes that required less attention such as at the conclusion of an action, during the absence of the main character, during a long shot and during repeated presentations of a similar scene. In contrast, blink synchronization was not observed when subjects viewed a background video or when they listened to a story read aloud. The results suggest that humans share a mechanism for controlling the timing of blinks that searches for an implicit timing that is appropriate to minimize the chance of losing critical information while viewing a stream of visual events. PMID:19640888
Mass balance investigation of alpine glaciers through LANDSAT TM data
NASA Technical Reports Server (NTRS)
Bayr, Klaus J.
1989-01-01
An analysis of LANDSAT Thematic Mapper (TM) data of the Pasterze Glacier and the Kleines Fleisskees in the Austrian Alps was undertaken and compared with meteorological data from nearby weather stations. Alpine or valley glaciers can be used to study regional and worldwide climate changes, since they respond relatively quickly to a warming or cooling trend in temperature through an advance or retreat of the terminus; in addition, the mass balance of the glacier is affected. Last year, two TM scenes of the Pasterze Glacier, from Aug. 1984 and Aug. 1986, were used to study the difference in reflectance. This year, in addition to the scenes from last year, one MSS scene from Aug. 1976 and a TM scene from 1988 were examined for both the Pasterze Glacier and the Kleines Fleisskees. During the LANDSAT overpass on 6 Aug. 1988, ground truthing on the Pasterze Glacier was undertaken. The results indicate that there was considerably more reflectance in 1976 and 1984 than in 1986 and 1988. The climatological data from the weather stations Sonnblick and Rudolfshuette were examined and compared with the results found through the LANDSAT data. There were clear relationships between the meteorological and LANDSAT data: the average temperature over the last 100 years showed an increase of 0.4 C, snowfall declined during the same time period, but overall precipitation did not reveal any significant change over the same period. The LANDSAT scenes were studied with an interactive image analysis computer. The terminus of the Pasterze Glacier has retreated 348 m and the terminus of the Kleines Fleisskees 121 m since 1965. This approach, using LANDSAT MSS and TM digital data in conjunction with meteorological data, can be used effectively to monitor regional and worldwide climate changes.
Nolan, Brodie; Ackery, Alun; Nathens, Avery; Sawadsky, Bruce; Tien, Homer
In our trauma system, helicopter emergency medical services (HEMS) can be requested to attend a scene call for an injured patient before arrival by land paramedics. Land paramedics can cancel this response if they deem it unnecessary. The purpose of this study is to describe the frequency of canceled HEMS scene calls that were subsequently transferred to 2 trauma centers and to assess any impact on morbidity and mortality. Probabilistic matching was used to identify canceled HEMS scene call patients who were later transported to 2 trauma centers over a 48-month period. Registry data were used to compare canceled scene call patients with direct-from-scene patients. There were 290 requests for HEMS scene calls, of which 35.2% were canceled. Of those canceled, 24.5% were later transported to our trauma centers. Canceled scene call patients were more likely to be older and to be discharged home from the trauma center without being admitted. There is a significant amount of undertriage of patients for whom an HEMS response was canceled and who were later transported to a trauma center. These patients face morbidity and mortality similar to those of patients who are brought directly from the scene to a trauma center. Copyright © 2018 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.
AgRISTARS. Supporting research: Algorithms for scene modelling
NASA Technical Reports Server (NTRS)
Rassbach, M. E. (Principal Investigator)
1982-01-01
The requirements for a comprehensive analysis of LANDSAT or other visual data scenes are defined. The development of a general model of a scene and a computer algorithm for finding the particular model for a given scene is discussed. The modelling system includes a boundary analysis subsystem, which detects all the boundaries and lines in the image and builds a boundary graph; a continuous variation analysis subsystem, which finds gradual variations not well approximated by a boundary structure; and a miscellaneous features analysis, which includes texture, line parallelism, etc. The noise reduction capabilities of this method and its use in image rectification and registration are discussed.
A bio-inspired system for spatio-temporal recognition in static and video imagery
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Moore, Christopher K.; Chelian, Suhas
2007-04-01
This paper presents a bio-inspired method for spatio-temporal recognition in static and video imagery. It builds upon and extends our previous work on a bio-inspired Visual Attention and object Recognition System (VARS). The VARS approach locates and recognizes objects in a single frame. This work presents two extensions of VARS. The first extension is a Scene Recognition Engine (SCE) that learns to recognize spatial relationships between objects that compose a particular scene category in static imagery. This could be used for recognizing the category of a scene, e.g., office vs. kitchen scene. The second extension is the Event Recognition Engine (ERE) that recognizes spatio-temporal sequences or events in sequences. This extension uses a working memory model to recognize events and behaviors in video imagery by maintaining and recognizing ordered spatio-temporal sequences. The working memory model is based on an ARTSTORE neural network that combines an ART-based neural network with a cascade of sustained temporal order recurrent (STORE) neural networks. A series of Default ARTMAP classifiers ascribes event labels to these sequences. Our preliminary studies have shown that this extension is robust to variations in an object's motion profile. We evaluated the performance of the SCE and ERE on real datasets. The SCE module was tested on a visual scene classification task using the LabelMe dataset. The ERE was tested on real-world video footage of vehicles and pedestrians in a street scene. Our system is able to recognize the events in this footage involving vehicles and pedestrians.
NASA Astrophysics Data System (ADS)
Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David
2012-06-01
Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications that had previously been cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto tone-mapping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits to being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by making it possible to monitor processes. Example applications of thermography with a thermal camera include monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc. This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures are shown.
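The counts-to-temperature step can be illustrated with a radiometric calibration sketch: map raw sensor counts to in-band radiance with a per-camera gain and offset, then invert a band-integrated Planck-style response curve to obtain scene temperature. All coefficients below are hypothetical placeholders from an assumed blackbody characterization, not values from the paper.

```python
import numpy as np

# Hypothetical calibration coefficients from blackbody characterization
GAIN, OFFSET = 2.5e-4, -0.12             # raw counts -> in-band radiance
R_FIT, B_FIT, F_FIT = 3.0e3, 1.4e3, 1.0  # Planck-style response fit constants

def counts_to_temperature(counts, ambient_correction=0.0):
    # Convert raw microbolometer counts to an estimated scene temperature (K).
    radiance = GAIN * np.asarray(counts, dtype=float) + OFFSET - ambient_correction
    radiance = np.maximum(radiance, 1e-9)            # guard against log of <= 0
    return B_FIT / np.log(R_FIT / radiance + F_FIT)  # inverted response curve
```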
Focus information is used to interpret binocular images
Hoffman, David M.; Banks, Martin S.
2011-01-01
Focus information—blur and accommodation—is highly correlated with depth in natural viewing. We examined the use of focus information in solving the binocular correspondence problem and in interpreting monocular occlusions. We presented transparent scenes consisting of two planes. Observers judged the slant of the farther plane, which was seen through the nearer plane. To do this, they had to solve the correspondence problem. In one condition, the two planes were presented with sharp rendering on one image plane, as is done in conventional stereo displays. In another condition, the planes were presented on two image planes at different focal distances, simulating focus information in natural viewing. Depth discrimination performance improved significantly when focus information was correct, which shows that the visual system utilizes the information contained in depth-of-field blur in solving binocular correspondence. In a second experiment, we presented images in which one eye could see texture behind an occluder that the other eye could not see. When the occluder's texture was sharp along with the occluded texture, binocular rivalry was prominent. When the occluded and occluding textures were presented with different blurs, rivalry was significantly reduced. This shows that blur aids the interpretation of scene layout near monocular occlusions. PMID:20616139
NASA Astrophysics Data System (ADS)
Zhao, Bei; Zhong, Yanfei; Zhang, Liangpei
2016-06-01
Land-use classification of very high spatial resolution remote sensing (VHSR) imagery is one of the most challenging tasks in the field of remote sensing image processing. Land-use classification is hard to address with land-cover classification techniques, due to the complexity of land-use scenes. Scene classification is considered one promising way to address the land-use classification issue. The commonly used scene classification methods for VHSR imagery are all derived from the computer vision community and mainly deal with terrestrial image recognition. Differing from terrestrial images, VHSR images are taken looking down with airborne and spaceborne sensors, which leads to distinct lighting conditions and spatial configurations of land cover in VHSR imagery. Considering these distinct characteristics, two questions should be answered: (1) Which type or combination of information is suitable for VHSR imagery scene classification? (2) Which scene classification algorithm is best for VHSR imagery? In this paper, an efficient spectral-structural bag-of-features scene classifier (SSBFC) is proposed to combine the spectral and structural information of VHSR imagery. SSBFC utilizes the first- and second-order statistics (the mean and standard deviation values, MeanStd) as the statistical spectral descriptor for the spectral information of the VHSR imagery, and uses dense scale-invariant feature transform (SIFT) as the structural feature descriptor. In the experimental results, the spectral information works better than the structural information, while the combination of the spectral and structural information is better than either single type of information. Taking the characteristics of the spatial configuration into consideration, SSBFC uses the whole image scene as the scope of the pooling operator, instead of the scope generated by a spatial pyramid (SP) commonly used in terrestrial image classification. The experimental results show that the whole image as the scope of the pooling operator performs better than the scope generated by an SP. In addition, SSBFC codes and pools the spectral and structural features separately to avoid mutual interference between them. The coding vectors of spectral and structural features are then concatenated into a final coding vector. Finally, SSBFC classifies the final coding vector with a support vector machine (SVM) using a histogram intersection kernel (HIK). Compared with the latest scene classification methods, the experimental results on three VHSR datasets demonstrate that the proposed SSBFC performs better than the other classification methods for VHSR image scenes.
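As a sketch of the final classification stage, the following uses an SVM with a histogram intersection kernel via scikit-learn's precomputed-kernel interface; the MeanStd and dense-SIFT feature extraction is omitted, and all array shapes and labels are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def hik(A, B):
    # Histogram intersection kernel: K[i, j] = sum_k min(A[i, k], B[j, k]).
    return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

# Rows are the concatenated spectral + structural coding vectors (placeholders)
X_train = np.random.rand(20, 64)
y_train = np.random.randint(0, 3, 20)       # placeholder scene-class labels
X_test = np.random.rand(5, 64)

clf = SVC(kernel="precomputed")
clf.fit(hik(X_train, X_train), y_train)
pred = clf.predict(hik(X_test, X_train))    # Gram matrix vs. training set
```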
Changes in nursing ethics education in Lithuania.
Toliusiene, Jolanta; Peicius, Eimantas
2007-11-01
The post-Soviet scene in Lithuania is one of rapid change in medical and nursing ethics. A short introduction to the current background sets the scene for a wider discussion of ethics in health care professionals' education. Lithuania had to adapt rapidly from a politicized nursing and ethics curriculum to European regulations, and from a paternalistic style of care to one of engagement with choices and dilemmas. The relationships between professionals, and between professionals and patients, are affected by this in particular. This short article highlights these issues and how they impact on all involved.
Did limits on payments for tobacco placements in US movies affect how movies are made?
Morgenstern, Matthis; Stoolmiller, Mike; Bergamini, Elaina; Sargent, James D
2017-01-01
To compare how smoking was depicted in Hollywood movies before and after an intervention limiting paid product placement for cigarette brands. Correlational analysis. Top box office hits released in the USA primarily between 1988 and 2011 (n=2134). The Master Settlement Agreement (MSA), implemented in 1998. This study analyses trends for whether or not movies depicted smoking, and among movies with smoking, counts of character smoking scenes and average smoking scene duration. There was no detectable trend for any measure prior to the MSA. In 1999, 79% of movies contained smoking, and movies with smoking contained 8 scenes of character smoking, with the average duration of a character smoking scene being 81 s. After the MSA, there were significant negative post-MSA changes (p<0.05) for linear trends in the proportion of movies with any smoking (which declined to 41% by 2011) and, in movies with smoking, counts of character smoking scenes (which declined to 4 by 2011). Between 1999 and 2000, there was an immediate and dramatic drop in the average length of a character smoking scene, which decreased to 19 s and remained there for the duration of the study. The probability that the drop of -62.5 (95% CI -55.1 to -70.0) seconds was due to chance was p < 10^-16. This study's correlational data suggest that restricting payments for tobacco product placement coincided with profound changes in the duration of smoking depictions in movies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Visual encoding and fixation target selection in free viewing: presaccadic brain potentials
Nikolaev, Andrey R.; Jurica, Peter; Nakatani, Chie; Plomp, Gijs; van Leeuwen, Cees
2013-01-01
In scrutinizing a scene, the eyes alternate between fixations and saccades. During a fixation, two component processes can be distinguished: visual encoding and selection of the next fixation target. We aimed to distinguish the neural correlates of these processes in the electrical brain activity prior to a saccade onset. Participants viewed color photographs of natural scenes, in preparation for a change detection task. Then, for each participant and each scene we computed an image heat map, with temperature representing the duration and density of fixations. The temperature difference between the start and end points of saccades was taken as a measure of the expected task-relevance of the information concentrated in specific regions of a scene. Visual encoding was evaluated according to whether subsequent change was correctly detected. Saccades with larger temperature difference were more likely to be followed by correct detection than ones with smaller temperature differences. The amplitude of presaccadic activity over anterior brain areas was larger for correct detection than for detection failure. This difference was observed for short “scrutinizing” but not for long “explorative” saccades, suggesting that presaccadic activity reflects top-down saccade guidance. Thus, successful encoding requires local scanning of scene regions which are expected to be task-relevant. Next, we evaluated fixation target selection. Saccades “moving up” in temperature were preceded by presaccadic activity of higher amplitude than those “moving down”. This finding suggests that presaccadic activity reflects attention deployed to the following fixation location. Our findings illustrate how presaccadic activity can elucidate concurrent brain processes related to the immediate goal of planning the next saccade and the larger-scale goal of constructing a robust representation of the visual scene. PMID:23818877
Adaptive attunement of selective covert attention to evolutionary-relevant emotional visual scenes.
Fernández-Martín, Andrés; Gutiérrez-García, Aída; Capafons, Juan; Calvo, Manuel G
2017-05-01
We investigated selective attention to emotional scenes in peripheral vision, as a function of adaptive relevance of scene affective content for male and female observers. Pairs of emotional-neutral images appeared peripherally-with perceptual stimulus differences controlled-while viewers were fixating on a different stimulus in central vision. Early selective orienting was assessed by the probability of directing the first fixation towards either scene, and the time until first fixation. Emotional scenes selectively captured covert attention even when they were task-irrelevant, thus revealing involuntary, automatic processing. Sex of observers and specific emotional scene content (e.g., male-to-female-aggression, families and babies, etc.) interactively modulated covert attention, depending on adaptive priorities and goals for each sex, both for pleasant and unpleasant content. The attentional system exhibits domain-specific and sex-specific biases and attunements, probably rooted in evolutionary pressures to enhance reproductive and protective success. Emotional cues selectively capture covert attention based on their bio-social significance. Copyright © 2017 Elsevier Inc. All rights reserved.
FPGA implementation for real-time background subtraction based on Horprasert model.
Rodriguez-Gomez, Rafael; Fernandez-Sanchez, Enrique J; Diaz, Javier; Ros, Eduardo
2012-01-01
Background subtraction is considered the first processing stage in video surveillance systems, and consists of determining the objects in movement in a scene captured by a static camera. It is an intensive task with a high computational cost. This work proposes a novel embedded architecture on FPGA which is able to extract the background in resource-limited environments and offers low degradation (produced by the hardware-friendly model modification). In addition, the original model is extended in order to detect shadows and improve the quality of the segmentation of the moving objects. We have analyzed the resource consumption and performance on Spartan-3 Xilinx FPGAs and compared them to other works available in the literature, showing that the current architecture is a good trade-off in terms of accuracy, performance and resource utilization. With less than 65% of the resources of a XC3SD3400 Spartan-3A low-cost family FPGA, the system achieves a frequency of 66.5 MHz, reaching 32.8 fps at a resolution of 1,024 × 1,024 pixels, with an estimated power consumption of 5.76 W.
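For reference, a minimal software sketch of the Horprasert-style per-pixel test that such an architecture implements: each pixel is compared against a per-pixel background mean and standard deviation through brightness and chromaticity distortion. The thresholds are illustrative placeholders, and the hardware-friendly fixed-point modifications of the paper are not reproduced.

```python
import numpy as np

def horprasert_labels(frame, mean, std, cd_thr=8.0, a_lo=0.6, a_hi=1.3):
    # Classify pixels as background (0), shadow (1), highlight (2) or
    # foreground (3) from brightness/chromaticity distortion.
    eps = 1e-6
    w = 1.0 / (std + eps)                     # per-channel inverse std weights
    # Brightness distortion: least-squares scale of the pixel along mean color
    alpha = ((frame * mean) * w**2).sum(-1) / (((mean * w)**2).sum(-1) + eps)
    # Chromaticity distortion: weighted residual from the scaled mean color
    cd = np.sqrt((((frame - alpha[..., None] * mean) * w)**2).sum(-1))
    labels = np.full(frame.shape[:2], 3, np.uint8)     # default: foreground
    labels[(cd < cd_thr) & (alpha < a_lo)] = 1         # darker: shadow
    labels[(cd < cd_thr) & (alpha > a_hi)] = 2         # brighter: highlight
    labels[(cd < cd_thr) & (alpha >= a_lo) & (alpha <= a_hi)] = 0  # background
    return labels
```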
Video System for Viewing From a Remote or Windowless Cockpit
NASA Technical Reports Server (NTRS)
Banerjee, Amamath
2009-01-01
A system of electronic hardware and software synthesizes, in nearly real time, an image of a portion of a scene surveyed by as many as eight video cameras aimed, in different directions, at portions of the scene. This is a prototype of systems that would enable a pilot to view the scene outside a remote or windowless cockpit. The outputs of the cameras are digitized. Direct memory addressing is used to store the data of a few captured images in sequence, and the sequence is repeated in cycles. Cylindrical warping is used in merging adjacent images at their borders to construct a mosaic image of the scene. The mosaic-image data are written to a memory block from which they can be rendered on a head-mounted display (HMD) device. A subsystem in the HMD device tracks the direction of gaze of the wearer, providing data that are used to select, for display, the portion of the mosaic image corresponding to the direction of gaze. The basic functionality of the system has been demonstrated by mounting the cameras on the roof of a van and steering the van by use of the images presented on the HMD device.
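A sketch of the cylindrical warping used to merge adjacent camera images into a mosaic: each image is reprojected onto a cylinder whose radius equals the focal length in pixels, so neighbouring views can be blended along their borders. The focal-length parameter and the use of OpenCV's remap are assumptions for illustration.

```python
import numpy as np
import cv2

def warp_to_cylinder(img, f):
    # Reproject an image onto a cylindrical surface (f = focal length, pixels).
    h, w = img.shape[:2]
    ys, xs = np.indices((h, w), dtype=np.float32)
    theta = (xs - w / 2) / f          # angle around the cylinder axis
    hgt = (ys - h / 2) / f            # normalized height on the cylinder
    # Back-project cylinder coordinates to the original image plane
    x_src = f * np.tan(theta) + w / 2
    y_src = hgt * f / np.cos(theta) + h / 2
    return cv2.remap(img, x_src, y_src, cv2.INTER_LINEAR)
```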
Global Transsaccadic Change Blindness During Scene Perception
2003-09-01
Children Use Object-Level Category Knowledge to Detect Changes in Complex Auditory Scenes
ERIC Educational Resources Information Center
Vanden Bosch der Nederlanden, Christina M.; Snyder, Joel S.; Hannon, Erin E.
2016-01-01
Children interact with and learn about all types of sound sources, including dogs, bells, trains, and human beings. Although it is clear that knowledge of semantic categories for everyday sights and sounds develops during childhood, there are very few studies examining how children use this knowledge to make sense of auditory scenes. We used a…
Stereoscopic augmented reality with pseudo-realistic global illumination effects
NASA Astrophysics Data System (ADS)
de Sorbier, Francois; Saito, Hideo
2014-03-01
Recently, augmented reality has become very popular and has appeared in our daily life through gaming, guidance systems and mobile phone applications. However, inserting objects in such a way that their appearance seems natural is still an issue, especially in an unknown environment. This paper presents a framework that demonstrates the capabilities of the Kinect for convincing augmented reality in an unknown environment. Rather than pre-computing a reconstruction of the scene, as most previous methods propose, we propose a dynamic capture of the scene that allows adapting to live changes in the environment. Our approach, based on the update of an environment map, can also detect the positions of the light sources. Combining information from the environment map, the light sources and the camera tracking, we can display virtual objects on stereoscopic devices with global illumination effects such as diffuse and mirror reflections, refractions and shadows in real time.
Topography-Dependent Motion Compensation: Application to UAVSAR Data
NASA Technical Reports Server (NTRS)
Jones, Cathleen E.; Hensley, Scott; Michel, Thierry
2009-01-01
The UAVSAR L-band synthetic aperture radar system has been designed for repeat track interferometry in support of Earth science applications that require high-precision measurements of small surface deformations over timescales from hours to years. Conventional motion compensation algorithms, which are based upon assumptions of a narrow beam and flat terrain, yield unacceptably large errors in areas with even moderate topographic relief, i.e., in most areas of interest. This often limits the ability to achieve sub-centimeter surface change detection over significant portions of an acquired scene. To reduce this source of error in the interferometric phase, we have implemented an advanced motion compensation algorithm that corrects for the scene topography and radar beam width. Here we discuss the algorithm used, its implementation in the UAVSAR data processor, and the improvement in interferometric phase and correlation achieved in areas with significant topographic relief.
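As a hedged illustration of why terrain matters here: to first order, an uncompensated cross-track deviation of the platform from the reference track produces an interferometric phase error that depends on the look angle, which in turn varies with terrain height. A common textbook form (an assumption, not quoted from the paper) is:

```latex
% First-order motion-compensation phase error: a cross-track baseline error
% (\Delta y, \Delta z) projects onto the line of sight at look angle
% \theta(h), which depends on terrain height h.
\Delta\phi \approx \frac{4\pi}{\lambda}
  \left( \Delta y \,\sin\theta(h) - \Delta z \,\cos\theta(h) \right)
```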
Robotic vision techniques for space operations
NASA Technical Reports Server (NTRS)
Krishen, Kumar
1994-01-01
Automation and robotics for space applications are being pursued for increased productivity, enhanced reliability, increased flexibility, higher safety, and for the automation of time-consuming tasks and those activities which are beyond the capacity of the crew. One of the key functional elements of an automated robotic system is sensing and perception. As the robotics era dawns in space, vision systems will be required to provide the key sensory data needed for multifaceted intelligent operations. In general, the three-dimensional scene/object description, along with location, orientation, and motion parameters, will be needed. In space, the absence of diffused lighting due to a lack of atmosphere gives rise to: (a) a high dynamic range (10^8) of scattered sunlight intensities, resulting in very high contrast between shadowed and specular portions of the scene; (b) intense specular reflections causing target/scene bloom; and (c) loss of portions of the image due to shadowing and the presence of stars, Earth, Moon, and other space objects in the scene. In this work, developments for combating the adverse effects described earlier and for enhancing scene definition are discussed. Both active and passive sensors are used. The algorithm for selecting the appropriate wavelength, polarization, and look angle of vision sensors is based on environmental factors as well as the properties of the target/scene which are to be perceived. The environment is characterized on the basis of sunlight and other illumination incident on the target/scene and the temperature profiles estimated on the basis of the incident illumination. The unknown geometrical and physical parameters are then derived from the fusion of the active and passive microwave, infrared, laser, and optical data.
A prototype molecular interactive collaborative environment (MICE).
Bourne, P; Gribskov, M; Johnson, G; Moreland, J; Wavra, S; Weissig, H
1998-01-01
Illustrations of macromolecular structure in the scientific literature contain a high level of semantic content through which the authors convey, among other features, the biological function of that macromolecule. We refer to these illustrations as molecular scenes. Such scenes, if available electronically, are not readily accessible for further interactive interrogation. The basic PDB format does not retain features of the scene; formats like PostScript retain the scene but are not interactive; and the many formats used by individual graphics programs, while capable of reproducing the scene, are neither interchangeable nor can they be stored in a database and queried for features of the scene. MICE defines a Molecular Scene Description Language (MSDL) which allows scenes to be stored in a relational database (a molecular scene gallery) and queried. Scenes retrieved from the gallery are rendered in Virtual Reality Modeling Language (VRML) and currently displayed in WebView, a VRML browser modified to support the Virtual Reality Behavior System (VRBS) protocol. VRBS provides communication between multiple client browsers, each capable of manipulating the scene. This level of collaboration works well over standard Internet connections and holds promise for collaborative research at a distance and distance learning. Further, via VRBS, the VRML world can be used as a visual cue to trigger an application such as a remote MEME search. MICE is very much work in progress. Current work seeks to replace WebView with Netscape, Cosmoplayer (a standard VRML plug-in), and a Java-based console. The console consists of a generic kernel suitable for multiple collaborative applications and additional application-specific controls. Further details of the MICE project are available at http://mice.sdsc.edu.
Very low cost real time histogram-based contrast enhancer utilizing fixed-point DSP processing
NASA Astrophysics Data System (ADS)
McCaffrey, Nathaniel J.; Pantuso, Francis P.
1998-03-01
A real-time contrast enhancement system utilizing histogram-based algorithms has been developed to operate on standard composite video signals. This low-cost DSP-based system is designed with fixed-point algorithms and an off-chip look-up table (LUT) to reduce the cost considerably over other contemporary approaches. This paper describes several real-time contrast-enhancing systems developed at the Sarnoff Corporation for high-speed visible and infrared cameras. The fixed-point enhancer was derived from these high-performance cameras. The enhancer digitizes analog video and spatially subsamples the stream to quantify the scene's luminance. Simultaneously, the video is streamed through a LUT that has been programmed with the previous calculation. Reducing division operations by subsampling reduces calculation cycles and also allows the processor to be used with cameras of nominal resolutions. All values are written to the LUT during blanking so no frames are lost. The enhancer measures 13 cm x 6.4 cm x 3.2 cm, operates off 9 VAC and consumes 12 W. This processor is small and inexpensive enough to be mounted with field-deployed security cameras and can be used for surveillance, video forensics and real-time medical imaging.
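A minimal sketch of the data path just described: compute a histogram on a spatially subsampled copy of the frame, build an equalization look-up table from its cumulative distribution, and stream the full-resolution video through the previous frame's table; swapping in the new table between frames plays the role of the write-during-blanking step. The subsampling factor and 8-bit depth are assumptions.

```python
import numpy as np

def build_equalization_lut(frame, subsample=4, levels=256):
    # Build a contrast-enhancement LUT from a subsampled histogram.
    sub = frame[::subsample, ::subsample]          # cheap luminance sample
    hist = np.bincount(sub.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)
    return (cdf * (levels - 1)).astype(np.uint8)

def enhance_stream(frames):
    # Apply the previous frame's LUT to each incoming frame (one-frame latency).
    lut = np.arange(256, dtype=np.uint8)           # identity until first update
    for frame in frames:                           # frames: 8-bit gray images
        yield lut[frame]                           # LUT applied to live video
        lut = build_equalization_lut(frame)        # updated between frames
```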
Helicopter EMS: Research Endpoints and Potential Benefits
Thomas, Stephen H.; Arthur, Annette O.
2012-01-01
Patients, EMS systems, and healthcare regions benefit from helicopter EMS (HEMS) utilization. This article discusses these benefits in terms of the specific endpoints utilized in research projects. The endpoint of interest, be it primary, secondary, or surrogate, is important to understand in the deployment of HEMS resources and in planning further HEMS outcomes research. The most important outcomes are those which show potential benefits to patients, such as functional survival, pain relief, and earlier ALS care. Case reports are also important "outcomes" publications. The benefit of HEMS in the rural setting is the ability to provide timely access to Level I or Level II trauma centers and, in nontrauma, interfacility transport of cardiac, stroke, and even sepsis patients. Many HEMS crews have pharmacologic and procedural capabilities that bring a different level of care to a trauma scene or small referring hospital, especially in the rural setting. Regional healthcare and EMS systems benefit from HEMS through its capability to extend an advanced level of care throughout a region, provide a backup for areas with limited ALS coverage, minimize transport times, make direct transport to specialized centers available, and offer flexibility of transport in overloaded hospital systems. PMID:22203905
Advanced Weapon System (AWS) Sensor Prediction Techniques Study. Volume II
1981-09-01
models are suggested. (Courant Computer Science Report #9, December 1975: "Scene Analysis: A Survey," Carl Weiman, Courant Institute.) ...some crucial differences. In the psychological model of mechanical vision, the aim of scene analysis is to perceive and understand 2-D images of 3-D scenes. The meaning of this analogy can be clarified using a rudimentary informational model; this yields a natural hierarchy from physical
Computer Vision Research and its Applications to Automated Cartography
1985-09-01
3-D Scene Geometry, Thomas M. Strat and Martin A. Fischler; Appendix D: A New Sense for Depth of Field, Alex P. Pentland. ...3-D modeling. A. Baseline Stereo System: As a framework for integration and evaluation of our research in modeling 3-D scene geometry, as well as a... B. New Methods for Stereo Compilation: As we previously indicated, the conventional approach to recovering scene geometry from a stereo pair of
Use of cameras for monitoring visibility impairment
NASA Astrophysics Data System (ADS)
Malm, William; Cismoski, Scott; Prenni, Anthony; Peters, Melanie
2018-02-01
Webcams and automated, color photography cameras have been routinely operated in many U.S. national parks and other federal lands as far back as 1988, with a general goal of meeting interpretive needs within the public lands system and communicating effects of haze on scenic vistas to the general public, policy makers, and scientists. Additionally, it would be desirable to extract quantifiable information from these images to document how visibility conditions change over time and space and to further reflect the effects of haze on a scene, in the form of atmospheric extinction, independent of changing lighting conditions due to time of day, year, or cloud cover. Many studies have demonstrated a link between image indexes and visual range or extinction in urban settings where visibility is significantly degraded and where scenes tend to be gray and devoid of color. In relatively clean, clear atmospheric conditions, clouds and lighting conditions can sometimes affect the image radiance field as much or more than the effects of haze. In addition, over the course of many years, cameras have been replaced many times as technology improved or older systems wore out, and therefore camera image pixel density has changed dramatically. It is shown that gradient operators are very sensitive to image resolution while contrast indexes are not. Furthermore, temporal averaging and time of day restrictions allow for developing quantitative relationships between atmospheric extinction and contrast-type indexes even when image resolution has varied over time. Temporal averaging effectively removes the variability of visibility indexes associated with changing cloud cover and weather conditions, and changes in lighting conditions resulting from sun angle effects are best compensated for by restricting averaging to only certain times of the day.
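As an illustration of a contrast-type index of the kind discussed above, the sketch below computes an RMS contrast over a fixed target region of a grayscale camera frame and averages it over a restricted morning window; the region, time window and any mapping to extinction are placeholders, since the paper's specific index is not detailed here.

```python
import numpy as np

def rms_contrast(img, region):
    # RMS contrast of a fixed target region (tuple of row/column slices).
    patch = img[region].astype(np.float64)
    return patch.std() / max(patch.mean(), 1e-9)

def daily_index(images_with_times, region, hour_lo=9, hour_hi=12):
    # Average the index over a time-of-day window; temporal averaging damps
    # cloud and illumination variability, as described in the text.
    vals = [rms_contrast(img, region)
            for img, t in images_with_times if hour_lo <= t.hour < hour_hi]
    return float(np.mean(vals)) if vals else float("nan")
```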
47 CFR 80.1127 - On-scene communications.
Code of Federal Regulations, 2012 CFR
2012-10-01
Title 47 (Telecommunication), Section 80.1127: On-scene communications. Federal Communications Commission, Safety and Special Radio Services, Stations in the Maritime Services, Global Maritime Distress and Safety System (GMDSS) Operating Procedures.
Concurrent-scene/alternate-pattern analysis for robust video-based docking systems
NASA Technical Reports Server (NTRS)
Udomkesmalee, Suraphol
1991-01-01
A typical docking target employs a three-point design of retroreflective tape, one at each endpoint of the center-line and one on the tip of the central post. Scenes sensed via laser diode illumination produce pictures with spots corresponding to the desired reflections from the retroreflectors, plus other reflections. Control corrections for each axis of the vehicle can then be properly applied if the desired spots are accurately tracked. However, initial acquisition of these three spots (the detection and identification problem) is non-trivial in a severe noise environment. Signal-to-noise enhancement, accomplished by subtracting the non-illuminated scene from the target scene illuminated by laser diodes, cannot eliminate every false spot. Hence, minimizing docking failures due to target mistracking suggests including additional processing features pertaining to target locations. In this paper, we present a concurrent processing scheme for a modified docking target scene which could lead to a perfect docking system. Since the non-illuminated target scene is already available, adding another feature to the three-point design by marking two non-reflective lines, one between the two end-points and one from the tip of the central post to the center-line, would allow this line feature to be picked up only when capturing the background scene (sensor data without laser illumination). Therefore, instead of performing image subtraction to generate a picture with a high signal-to-noise ratio, a processed line image based on a robust line detection technique (the Hough transform) can be fused with the actively sensed three-point target image to deduce the true locations of the docking target. This dual-channel confirmation scheme is necessary if a fail-safe system is to be realized from both the sensing and processing points of view. Detailed algorithms and preliminary results are presented.
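To make the dual-channel idea concrete, here is a hedged OpenCV sketch: candidate spots are extracted from the laser-illuminated difference image, the non-reflective marker lines are detected in the background frame with a Hough transform, and a spot would be confirmed only if it lies near a detected line. Thresholds and the consistency test are illustrative only.

```python
import cv2
import numpy as np

def find_spots(diff_img, thresh=60):
    # Candidate retroreflector spots in the illuminated-minus-background image.
    _, bw = cv2.threshold(diff_img, thresh, 255, cv2.THRESH_BINARY)
    n, _, _, centroids = cv2.connectedComponentsWithStats(bw)
    return [tuple(centroids[i]) for i in range(1, n)]  # skip background label 0

def find_marker_lines(bg_img):
    # Detect the non-reflective marker lines in the background frame.
    edges = cv2.Canny(bg_img, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)
    return [] if lines is None else [l[0] for l in lines]

def point_near_line(p, line, tol=5.0):
    # True if point p lies within tol pixels of the detected (infinite) line.
    x0, y0 = p
    x1, y1, x2, y2 = line
    num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    return num / max(np.hypot(x2 - x1, y2 - y1), 1e-9) < tol
```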
Acceptable bit-rates for human face identification from CCTV imagery
NASA Astrophysics Data System (ADS)
Tsifouti, Anastasia; Triantaphillidou, Sophie; Bilissi, Efthimia; Larabi, Mohamed-Chaker
2013-01-01
The objective of this investigation is to produce recommendations for acceptable bit-rates of CCTV footage of people onboard London buses. The majority of CCTV recorders on buses use a proprietary format based on the H.264/AVC video coding standard, exploiting both spatial and temporal redundancy. Low bit-rates are favored in the CCTV industry but they compromise the image usefulness of the recorded imagery. In this context usefulness is defined by the presence of enough facial information remaining in the compressed image to allow a specialist to identify a person. The investigation includes four steps: 1) Collection of representative video footage. 2) The grouping of video scenes based on content attributes. 3) Psychophysical investigations to identify key scenes, which are most affected by compression. 4) Testing of recording systems using the key scenes and further psychophysical investigations. The results are highly dependent upon scene content. For example, very dark and very bright scenes were the most challenging to compress, requiring higher bit-rates to maintain useful information. The acceptable bit-rates are also found to be dependent upon the specific CCTV system used to compress the footage, presenting challenges in drawing conclusions about universal `average' bit-rates.
Modelling Technology for Building Fire Scene with Virtual Geographic Environment
NASA Astrophysics Data System (ADS)
Song, Y.; Zhao, L.; Wei, M.; Zhang, H.; Liu, W.
2017-09-01
Building fires are hazardous events that can lead to disaster and massive destruction, and their management and mitigation have always attracted much interest from researchers. An integrated Virtual Geographic Environment (VGE) is a good choice for building fire safety management and emergency decision making, in which a more realistic and richer fire process can be computed and obtained dynamically, and the results of fire simulations and analyses can be much more accurate as well. To model a building fire scene with VGE, the application requirements and modelling objectives of the building fire scene were analysed in this paper. Then, the four core elements of modelling a building fire scene (the building space environment, the fire event, the indoor Fire Extinguishing System (FES) and the indoor crowd) were implemented, and the relationships between the elements were discussed. Finally, with the theory and framework of VGE, the building fire scene system was designed across the data environment, the model environment, the expression environment and the collaborative environment. The functions and key techniques in each environment are also analysed, which may provide a reference for further development and other research on VGE.
You think you know where you looked? You better look again.
Võ, Melissa L-H; Aizenman, Avigael M; Wolfe, Jeremy M
2016-10-01
People are surprisingly bad at knowing where they have looked in a scene. We tested participants' ability to recall their own eye movements in 2 experiments using natural or artificial scenes. In each experiment, participants performed a change-detection (Exp.1) or search (Exp.2) task. On 25% of trials, after 3 seconds of viewing the scene, participants were asked to indicate where they thought they had just fixated. They responded by making mouse clicks on 12 locations in the unchanged scene. After 135 trials, observers saw 10 new scenes and were asked to put 12 clicks where they thought someone else would have looked. Although observers located their own fixations more successfully than a random model, their performance was no better than when they were guessing someone else's fixations. Performance with artificial scenes was worse, though judging one's own fixations was slightly superior. Even after repeating the fixation-location task on 30 scenes immediately after scene viewing, performance was far from the prediction of an ideal observer. Memory for our own fixation locations appears to add next to nothing beyond what common sense tells us about the likely fixations of others. These results have important implications for socially important visual search tasks. For example, a radiologist might think he has looked at "everything" in an image, but eye tracking data suggest that this is not so. Such shortcomings might be avoided by providing observers with better insights of where they have looked. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Stainer, Matthew J.; Scott-Brown, Kenneth C.; Tatler, Benjamin W.
2013-01-01
Where people look when viewing a scene has been a much explored avenue of vision research (e.g., see Tatler, 2009). Current understanding of eye guidance suggests that a combination of high and low-level factors influence fixation selection (e.g., Torralba et al., 2006), but that there are also strong biases toward the center of an image (Tatler, 2007). However, situations where we view multiplexed scenes are becoming increasingly common, and it is unclear how visual inspection might be arranged when content lacks normal semantic or spatial structure. Here we use the central bias to examine how gaze behavior is organized in scenes that are presented in their normal format, or disrupted by scrambling the quadrants and separating them by space. In Experiment 1, scrambling scenes had the strongest influence on gaze allocation. Observers were highly biased by the quadrant center, although physical space did not enhance this bias. However, the center of the display still contributed to fixation selection above chance, and was most influential early in scene viewing. When the top left quadrant was held constant across all conditions in Experiment 2, fixation behavior was significantly influenced by the overall arrangement of the display, with fixations being biased toward the quadrant center when the other three quadrants were scrambled (despite the visual information in this quadrant being identical in all conditions). When scenes are scrambled into four quadrants and semantic contiguity is disrupted, observers no longer appear to view the content as a single scene (despite it consisting of the same visual information overall), but rather anchor visual inspection around the four separate “sub-scenes.” Moreover, the frame of reference that observers use when viewing the multiplex seems to change across viewing time: from an early bias toward the display center to a later bias toward quadrant centers. PMID:24069008
The Shuttle Mission Simulator computer generated imagery
NASA Technical Reports Server (NTRS)
Henderson, T. H.
1984-01-01
Equipment available in the primary training facility for the Space Transportation System (STS) flight crews includes the Fixed Base Simulator, the Motion Base Simulator, the Spacelab Simulator, and the Guidance and Navigation Simulator. The Shuttle Mission Simulator (SMS) consists of the Fixed Base Simulator and the Motion Base Simulator. The SMS utilizes four visual Computer Generated Image (CGI) systems. The Motion Base Simulator has a forward crew station with six-degree-of-freedom motion simulation. Operation of the Spacelab Simulator is planned for the spring of 1983. The Guidance and Navigation Simulator went into operation in 1982. Aspects of orbital visual simulation are discussed, taking into account the Earth scene, payload simulation, the generation and display of 1079 stars, the simulation of sun glare, and Reaction Control System jet firing plumes. Attention is also given to landing site visual simulation, and night launch and landing simulation.
Fusing Cubesat and Landsat 8 data for near-daily mapping of leaf area index at 3 m resolution
NASA Astrophysics Data System (ADS)
McCabe, M.; Houborg, R.
2017-12-01
Constellations of small cubesats are emerging as a relatively inexpensive observational resource with the potential to overcome the spatio-temporal constraints of traditional single-sensor satellite missions. With more than 130 compact 3U (i.e., 10 x 10 x 30 cm) cubesats currently in orbit, the company "Planet" has realized near-daily image capture in RGB and the near-infrared (NIR) at 3 m resolution for every location on the Earth. However, cross-sensor inconsistencies can be a limiting factor; these result from relatively low signal-to-noise ratios, varying overpass times, and sensor-specific spectral response functions. In addition, the sensor radiometric information content is more limited compared to conventional satellite systems such as Landsat. In this study, a synergistic machine-learning framework utilizing Planet, Landsat 8, and MODIS data is developed to produce Landsat 8 consistent LAI with a factor of 10 increase in spatial resolution and a daily observing potential, globally. The Cubist machine-learning technique is used to establish scene-specific links between scale-consistent cubesat RGB+NIR imagery and Landsat 8 LAI. The scheme implements a novel LAI target sampling technique for model training purposes, which accounts for changes in cover conditions over the cubesat and Landsat acquisition timespans. Results over an agricultural region in Saudi Arabia highlight the utility of the approach for detecting high-frequency (i.e., near-daily) and fine-scale (i.e., 3 m) intra-field dynamics in LAI, with demonstrated potential for timely identification of developing crop risks. The framework maximizes the utility of ultra-high-resolution cubesat data for agricultural management and resource-efficiency optimization at the precision scale.
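The Cubist model itself is a proprietary rule-based regression technique; as a hedged stand-in, scikit-learn's gradient-boosted trees below illustrate the scene-specific link from cubesat RGB+NIR reflectance to Landsat-derived LAI. The band arrays, sample counts and LAI range are placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Placeholder training samples: co-located, scale-consistent observations
X_train = np.random.rand(500, 4)        # cubesat R, G, B, NIR reflectance
y_train = 6.0 * np.random.rand(500)     # Landsat 8 LAI at the same locations

# Stand-in for the Cubist rule-based model described in the study
model = GradientBoostingRegressor(n_estimators=200, max_depth=4)
model.fit(X_train, y_train)

# Apply to every 3 m cubesat pixel to get a Landsat-consistent LAI map
scene_pixels = np.random.rand(1_000_000, 4)
lai_3m = model.predict(scene_pixels)
```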
Multiple Vehicle Detection and Segmentation in Malaysia Traffic Flow
NASA Astrophysics Data System (ADS)
Fariz Hasan, Ahmad; Fikri Che Husin, Mohd; Affendi Rosli, Khairul; Norhafiz Hashim, Mohd; Faiz Zainal Abidin, Amar
2018-03-01
Vision-based systems are widely used in the field of Intelligent Transportation Systems (ITS) to extract a large amount of information for analyzing traffic scenes. The rapid growth in the number of vehicles on the road, as well as the significant increase in cameras, has dictated the need for traffic surveillance systems. Such a system can take over the burdensome task performed by human operators in traffic monitoring centres. The main technique proposed by this paper is a multiple-vehicle detection and segmentation method for monitoring through Closed Circuit Television (CCTV) video. The system is able to automatically segment vehicles extracted from a heavy traffic scene by optical flow estimation alongside a blob analysis technique in order to detect the moving vehicles. Prior to segmentation, the blob analysis technique computes the area of the interest region corresponding to a moving vehicle, which is used to create a bounding box on that particular vehicle. Experimental validation of the proposed system was performed, and the algorithm is demonstrated on various sets of traffic scenes.
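A hedged OpenCV sketch of the optical-flow-plus-blob pipeline described above: dense Farneback flow is thresholded on magnitude, the resulting mask is cleaned morphologically, and connected components above a minimum area yield per-vehicle bounding boxes. All thresholds are placeholders.

```python
import cv2
import numpy as np

def detect_vehicles(prev_gray, gray, mag_thresh=2.0, min_area=400):
    # Segment moving vehicles: threshold the dense optical-flow magnitude,
    # then extract blobs and return their bounding boxes (x, y, w, h).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    mask = (mag > mag_thresh).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [tuple(stats[i, :4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```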
NASA Technical Reports Server (NTRS)
Maxwell, M. S.
1984-01-01
Present technology allows radiometric monitoring of the Earth, ocean and atmosphere from a geosynchronous platform with good spatial, spectral and temporal resolution. The proposed system could provide a capability for multispectral remote sensing with a 50 m nadir spatial resolution in the visible bands, 250 m in the 4 micron band and 1 km in the 11 micron thermal infrared band. The diffraction-limited telescope has a 1 m aperture, a 10 m focal length (with a shorter focal length in the infrared) and linear and area arrays of detectors. The diffraction-limited resolution applies to scenes of any brightness, but for dark, low-contrast scenes the good signal-to-noise ratio of the system contributes to the observation capability. The capabilities of the AGP system are assessed for quantitative observations of ocean scenes. Instrument and ground system configurations are presented and projected sensor capabilities are analyzed.
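As a back-of-the-envelope check (illustrative numbers, not from the report), the Rayleigh diffraction limit of a 1 m aperture at visible wavelengths, projected from geosynchronous altitude, is consistent with the tens-of-metres class resolution quoted above:

```latex
% Rayleigh criterion and projected ground sample at GEO (illustrative)
\theta \approx 1.22\,\frac{\lambda}{D}
  = 1.22 \times \frac{0.5 \times 10^{-6}\,\mathrm{m}}{1\,\mathrm{m}}
  \approx 0.61\,\mu\mathrm{rad},
\qquad
x \approx \theta R \approx 0.61 \times 10^{-6} \times 3.58 \times 10^{7}\,\mathrm{m}
  \approx 22\,\mathrm{m}
```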
Real-time scene and signature generation for ladar and imaging sensors
NASA Astrophysics Data System (ADS)
Swierkowski, Leszek; Christie, Chad L.; Antanovskii, Leonid; Gouthas, Efthimios
2014-05-01
This paper describes development of two key functionalities within the VIRSuite scene simulation program, broadening its scene generation capabilities and increasing accuracy of thermal signatures. Firstly, a new LADAR scene generation module has been designed. It is capable of simulating range imagery for Geiger mode LADAR, in addition to the already existing functionality for linear mode systems. Furthermore, a new 3D heat diffusion solver has been developed within the VIRSuite signature prediction module. It is capable of calculating the temperature distribution in complex three-dimensional objects for enhanced dynamic prediction of thermal signatures. With these enhancements, VIRSuite is now a robust tool for conducting dynamic simulation for missiles with multi-mode seekers.
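The 3D heat-diffusion step can be illustrated with a minimal explicit finite-difference solver for dT/dt = alpha * laplacian(T) on a voxel grid; the solver in VIRSuite is certainly more elaborate, so the grid size, diffusivity and stability-limited time step below are placeholders (np.roll gives periodic boundaries, adequate for illustration only).

```python
import numpy as np

def heat_step(T, alpha, dx, dt):
    # One explicit finite-difference step of the 3D heat equation.
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) +
           np.roll(T, 1, 2) + np.roll(T, -1, 2) - 6 * T) / dx**2
    return T + alpha * dt * lap

alpha, dx = 1e-5, 0.01               # placeholder diffusivity and grid pitch
dt = 0.9 * dx**2 / (6 * alpha)       # explicit stability limit, with margin
T = np.full((32, 32, 32), 300.0)     # body at a uniform 300 K
T[12:20, 12:20, 12:20] = 350.0       # hot internal component
for _ in range(100):
    T = heat_step(T, alpha, dx, dt)
```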
Research on the generation of the background with sea and sky in infrared scene
NASA Astrophysics Data System (ADS)
Dong, Yan-zhi; Han, Yan-li; Lou, Shu-li
2008-03-01
It is important for scene generation to preserve the texture of infrared images in the simulation of anti-ship infrared imaging guidance. We studied the fractal method and applied it to infrared scene generation. We adopted the method of horizontal-vertical (HV) partition to encode the original image. Based on the properties of infrared images with a sea-sky background, we took advantage of a Local Iterated Function System (LIFS) to decrease the computational complexity and increase the processing rate. Some results are presented. The results show that the fractal method preserves the texture of the infrared image well and can be widely used in infrared scene generation in the future.
Improving semantic scene understanding using prior information
NASA Astrophysics Data System (ADS)
Laddha, Ankit; Hebert, Martial
2016-05-01
Perception for ground robot mobility requires automatic generation of descriptions of the robot's surroundings from sensor input (cameras, LADARs, etc.). Effective techniques for scene understanding have been developed, but they are generally purely bottom-up in that they rely entirely on classifying features from the input data based on learned models. In fact, perception systems for ground robots have a lot of information at their disposal from knowledge about the domain and the task. For example, a robot in urban environments might have access to approximate maps that can guide the scene interpretation process. In this paper, we explore practical ways to combine such prior information with state of the art scene understanding approaches.
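One practical way to combine a bottom-up classifier with an approximate map prior, as discussed above, is a per-pixel Bayesian product of classifier likelihoods and map-derived prior probabilities; this is a generic sketch under that assumption, not the paper's specific method.

```python
import numpy as np

def fuse_with_prior(class_scores, map_prior, eps=1e-9):
    # Per-pixel fusion: posterior proportional to likelihood times prior.
    # class_scores, map_prior: arrays of shape (H, W, n_classes).
    post = class_scores * (map_prior + eps)
    return post / post.sum(axis=2, keepdims=True)

# Placeholder inputs: a 4 x 4 image with 3 semantic classes
scores = np.random.rand(4, 4, 3)                     # bottom-up class scores
prior = np.random.dirichlet(np.ones(3), size=(4, 4)) # map-derived priors
labels = fuse_with_prior(scores, prior).argmax(axis=2)
```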
Scene analysis for a breadboard Mars robot functioning in an indoor environment
NASA Technical Reports Server (NTRS)
Levine, M. D.
1973-01-01
The problem dealt with is computer perception in an indoor laboratory environment containing rocks of various sizes. The sensory data processing is required for the NASA/JPL breadboard mobile robot, a test system for an adaptive, variably autonomous vehicle that will conduct scientific explorations on the surface of Mars. Scene analysis is discussed in terms of object segmentation followed by feature extraction, which results in a representation of the scene in the robot's world model.
NASA Technical Reports Server (NTRS)
Wiswell, E. R.; Cooper, G. R. (Principal Investigator)
1978-01-01
The author has identified the following significant results. The concept of the average mutual information that the received spectral random process carries about the spectral scene was developed. Techniques amenable to implementation on a digital computer were also developed to make the required average mutual information calculations. These techniques required the identification of models for the spectral response process of scenes, and stochastic modeling techniques were adapted for this use. The techniques were demonstrated on empirical data from wheat and vegetation scenes.
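A generic way to compute such a quantity from empirical samples is a joint-histogram estimate of mutual information; the sketch below uses synthetic data and a plain histogram estimator, which are assumptions standing in for the report's stochastic-model-based calculation.

```python
import numpy as np

def mutual_information(x, y, bins: int = 16) -> float:
    """Joint-histogram estimate of I(X;Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
scene = rng.normal(size=10_000)                    # stand-in spectral response
received = scene + 0.5 * rng.normal(size=10_000)   # noisy received process
print(f"~{mutual_information(scene, received):.2f} bits per sample")
```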
Detecting temporal changes in acoustic scenes: The variable benefit of selective attention.
Demany, Laurent; Bayle, Yann; Puginier, Emilie; Semal, Catherine
2017-09-01
Four experiments investigated change detection in acoustic scenes consisting of a sum of five amplitude-modulated pure tones. As the tones were about 0.7 octave apart and were amplitude-modulated with different frequencies (in the range 2-32 Hz), they were perceived as separate streams. Listeners had to detect a change in the frequency (experiments 1 and 2) or the shape (experiments 3 and 4) of the modulation of one of the five tones, in the presence of an informative cue orienting selective attention either before the scene (pre-cue) or after it (post-cue). The changes left intensity unchanged and were not detectable in the spectral (tonotopic) domain. Performance was much better with pre-cues than with post-cues. Thus, change deafness was manifest in the absence of an appropriate focusing of attention when the change occurred, even though the streams and the changes to be detected were acoustically very simple (in contrast to the conditions used in previous demonstrations of change deafness). In one case, the results were consistent with a model based on the assumption that change detection was possible if and only if attention was endogenously focused on a single tone. However, it was also found that changes resulting in a steepening of amplitude rises were to some extent able to draw attention exogenously. Change detection was not markedly facilitated when the change produced a discontinuity in the modulation domain, contrary to what could be expected from the perspective of predictive coding. Copyright © 2017 Elsevier B.V. All rights reserved.
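The stimulus construction is concrete enough to sketch: five amplitude-modulated pure tones about 0.7 octave apart, each with its own modulation rate. The base carrier frequency and sampling rate below are assumptions, not the authors' exact parameters.

```python
import numpy as np

fs, dur = 44_100, 2.0
t = np.arange(int(fs * dur)) / fs
carriers = 300.0 * 2.0 ** (0.7 * np.arange(5))     # ~0.7-octave spacing [Hz]
mod_rates = [2.0, 4.0, 8.0, 16.0, 32.0]            # one AM rate per stream [Hz]

# Each stream: a pure tone with sinusoidal amplitude modulation in [0, 1].
scene = sum((1.0 + np.sin(2 * np.pi * fm * t)) / 2.0 * np.sin(2 * np.pi * fc * t)
            for fc, fm in zip(carriers, mod_rates))
scene /= np.abs(scene).max()                       # normalize for playback
```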
Evans, Kris; Rotello, Caren M; Li, Xingshan; Rayner, Keith
2009-02-01
Cultural differences have been observed in scene perception and memory: Chinese participants purportedly attend to background information more than American participants do. We investigated the influence of culture by recording eye movements during scene perception and while participants made recognition memory judgements. Real-world pictures with a focal object on a background were shown to both American and Chinese participants while their eye movements were recorded. Later, memory for the focal object in each scene was tested, and the relationship between the focal object (studied, new) and the background context (studied, new) was manipulated. Receiver-operating characteristic (ROC) curves show that both sensitivity and response bias changed when objects were tested in new contexts. However, neither the decrease in accuracy nor the response bias shift differed with culture. The eye movement patterns were also similar across cultural groups. Both groups made longer and more fixations on the focal objects than on the contexts. The similarity of eye movement patterns and recognition memory behaviour suggests that both Americans and Chinese use the same strategies in scene perception and memory.
Richmond, Jenny L; Power, Jessica
2014-09-01
Relational memory, or the ability to bind components of an event into a network of linked representations, is a primary function of the hippocampus. Here we extend eye-tracking research showing that infants are capable of forming memories for the relation between arbitrarily paired scenes and faces, by looking at age-related changes in relational memory over the first year of life. Six- and 12-month-old infants were familiarized with pairs of faces and scenes before being tested with arrays of three familiar faces that were presented on a familiar scene. Preferential looking at the face that matches the scene is typically taken as evidence of relational memory. The results showed that while 6-month-olds showed preferential looking very early when face/scene pairs were tested immediately, 12-month-olds did not exhibit evidence of relational memory either immediately or after a short delay. Theoretical implications for the functional development of the hippocampus and practical implications for the use of eye tracking to measure memory during early life are discussed. © 2014 Wiley Periodicals, Inc.
Evaluation methodology for query-based scene understanding systems
NASA Astrophysics Data System (ADS)
Huster, Todd P.; Ross, Timothy D.; Culbertson, Jared L.
2015-05-01
In this paper, we are proposing a method for the principled evaluation of scene understanding systems in a query-based framework. We can think of a query-based scene understanding system as a generalization of typical sensor exploitation systems where instead of performing a narrowly defined task (e.g., detect, track, classify, etc.), the system can perform general user-defined tasks specified in a query language. Examples of this type of system have been developed as part of DARPA's Mathematics of Sensing, Exploitation, and Execution (MSEE) program. There is a body of literature on the evaluation of typical sensor exploitation systems, but the open-ended nature of the query interface introduces new aspects to the evaluation problem that have not been widely considered before. In this paper, we state the evaluation problem and propose an approach to efficiently learn about the quality of the system under test. We consider the objective of the evaluation to be to build a performance model of the system under test, and we rely on the principles of Bayesian experiment design to help construct and select optimal queries for learning about the parameters of that model.
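A minimal sketch of the experiment-design idea follows, under a deliberately simple assumption: the system's success rate on each query type is modeled with a Beta posterior, and the next query is the one whose outcome is expected to reduce posterior entropy the most. The actual MSEE performance models are richer than this.

```python
from scipy.stats import beta as beta_dist

def expected_info_gain(a: float, b: float) -> float:
    """Expected drop in Beta-posterior entropy from one more trial."""
    p = a / (a + b)                       # predictive probability of success
    return beta_dist(a, b).entropy() - (p * beta_dist(a + 1, b).entropy()
                                        + (1 - p) * beta_dist(a, b + 1).entropy())

# Posterior (successes+1, failures+1) for three hypothetical query types.
posteriors = {"detect": (9, 3), "count": (2, 2), "relate": (1, 4)}
gains = {q: expected_info_gain(a, b) for q, (a, b) in posteriors.items()}
print(max(gains, key=gains.get), gains)   # ask the most informative query next
```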
Time Series Analysis of Vegetation Change using Hyperspectral and Multispectral Data
2012-09-01
[Only fragmentary excerpts of this abstract survive:] "…rivers clogged with sediment" (Hartman, 2008). In addition, backpackers, campers, and skiers are in danger of being hit by falling trees. "…information from hyperspectral data without a priori knowledge or requiring ground observations" (Kruse & Perry, 2009). "…known endmembers and the scene spectra" (Boardman & Kruse, 2011). Known endmembers come from analysts' knowledge of an area in a scene, or from …
The Pop out of Scene-Relative Object Movement against Retinal Motion Due to Self-Movement
ERIC Educational Resources Information Center
Rushton, Simon K.; Bradshaw, Mark F.; Warren, Paul A.
2007-01-01
An object that moves is spotted almost effortlessly; it "pops out." When the observer is stationary, a moving object is uniquely identified by retinal motion. This is not so when the observer is also moving; as the eye travels through space all scene objects change position relative to the eye producing a complicated field of retinal motion.…
ERIC Educational Resources Information Center
Sanocki, Thomas; Sulman, Noah
2013-01-01
Three experiments measured the efficiency of monitoring complex scenes composed of changing objects, or events. All events lasted about 4 s, but in a given block of trials, could be of a single type (single task) or of multiple types (multitask, with a total of four event types). Overall accuracy of detecting target events amid distractors was…
2008-01-01
The USGS Landsat archive holds an unequaled 36-year record of the Earth's surface that is invaluable to climate change studies, forest and resource management activities, and emergency response operations. An aggressive effort is taking place to provide all Landsat imagery [scenes currently held in the USGS Earth Resources Observation and Science (EROS) Center archive, as well as newly acquired scenes daily] free of charge to users with electronic access via the Web by the end of December 2008. The entire Landsat 7 Enhanced Thematic Mapper Plus (ETM+) archive acquired since 1999 and any newly acquired Landsat 7 ETM+ images that have less than 40 percent cloud cover are currently available for download. When this endeavor is complete all Landsat 1-5 data will also be available for download. This includes Landsat 1-5 Multispectral Scanner (MSS) scenes, as well as Landsat 4 and 5 Thematic Mapper (TM) scenes.
Politics of innovation in multi-level water governance systems
NASA Astrophysics Data System (ADS)
Daniell, Katherine A.; Coombes, Peter J.; White, Ian
2014-11-01
Innovations are being proposed in many countries in order to support change towards more sustainable and water-secure futures. However, the extent to which they can be implemented is subject to complex politics and powerful coalitions across multi-level governance systems and scales of interest. Exactly how innovation uptake can best be facilitated or blocked in these complex systems is thus a matter of important practical and research interest in water cycle management. Drawing on intervention research studies in Australia, China and Bulgaria, this paper describes and analyses the behind-the-scenes struggles and coalition-building that occur between water utility providers, private companies, experts, communities and all levels of government in efforts to support or block specific innovations. The research findings suggest that to ensure the successful passage of a proposed innovation, champions for it are required from at least two administrative levels, including one with innovation implementation capacity, as part of a larger supportive coalition. Higher governance levels can play an important enabling role in facilitating the passage of certain types of innovations that may be in competition with currently entrenched systems of water management. Due to a range of natural biases, experts on certain innovations and disciplines may form part of supporting or blocking coalitions, but their evaluations of worth for water system sustainability and security are likely to be subject to competing claims based on different values and expertise, so they may not necessarily resolve questions of "best courses of action". This remains a political, values-based decision to be negotiated through the receiving multi-level water governance system.
Transportation-Related Safety Behaviors in Top-Grossing Children's Movies from 2008 to 2013.
Boppana, Shilpa; Shen, Jiabin; Schwebel, David C
2016-05-01
Children regularly imitate behavior from movies. The authors assessed injury risk behaviors in top-grossing children's films. The 5 top-grossing G- or PG-rated movies from each year between 2008 and 2013 were included, among them animated movies and movies set in the past or future. Researchers coded transportation scenes for risk taking in 3 domains: protection/equipment, unsafe behaviors, and distraction/attention. Safe and risky behaviors were recorded across the 3 domains. With regard to protection and equipment, 20% of motor vehicle scenes showed characters riding without seat belts and 27% of scenes with motorcycles showed characters riding without helmets. Eighty-nine percent of scenes with horses showed riders without helmets, and 67% of boat operators failed to wear personal flotation devices. The most common unsafe behaviors were speeding and unsafe street-crossing. Twenty-one percent of scenes with motor vehicles showed drivers speeding, and 90% of pedestrians in films failed to wait for signal changes. Distracted and inattentive behaviors were rare, with distracted driving of motor vehicles occurring in only approximately 2% of total driving scenes. Although many safe transportation behaviors were portrayed, the film industry continues to depict unsafe behaviors in movies designed for pediatric audiences. There is a need for the film industry to continue to balance entertainment and art with modeling of safe behavior for children.
Manipulating the content of dynamic natural scenes to characterize response in human MT/MST.
Durant, Szonya; Wall, Matthew B; Zanker, Johannes M
2011-09-09
Optic flow is one of the most important sources of information for enabling human navigation through the world. A striking finding from single-cell studies in monkeys is the rapid saturation of response of MT/MST areas with the density of optic flow type motion information. These results are reflected psychophysically in human perception in the saturation of motion aftereffects. We began by comparing responses to natural optic flow scenes in human visual brain areas to responses to the same scenes with inverted contrast (photo negative). This changes scene familiarity while preserving local motion signals. This manipulation had no effect; however, the response was only correlated with the density of local motion (calculated by a motion correlation model) in V1, not in MT/MST. To further investigate this, we manipulated the visible proportion of natural dynamic scenes and found that areas MT and MST did not increase in response over a 16-fold increase in the amount of information presented, i.e., response had saturated. This makes sense in light of the sparseness of motion information in natural scenes, suggesting that the human brain is well adapted to exploit a small amount of dynamic signal and extract information important for survival.
Brown, Joshua B; Stassen, Nicole A; Bankey, Paul E; Sangosanya, Ayodele T; Cheng, Julius D; Gestring, Mark L
2010-11-01
The role of helicopter transport (HT) in civilian trauma care remains controversial. The objective of this study was to compare patient outcomes after transport from the scene of injury by HT and ground transport using a national patient sample. Patients transported from the scene of injury by HT or ground transport in 2007 were identified using the National Trauma Databank version 8. Injury severity, utilization of hospital resources, and outcomes were compared. Stepwise logistic regression was used to determine whether transport modality was a predictor of survival or discharge to home after adjusting for covariates. There were 258,387 patients transported by helicopter (16%) or ground (84%). Mean Injury Severity Score was higher in HT patients (15.9 ± 12.3 vs. 10.2 ± 9.5, p < 0.01), as was the percentage of patients with Injury Severity Score >15 (42.6% vs. 20.8%; odds ratio [OR], 2.83; 95% confidence interval [CI], 2.76-2.89). HT patients had higher rates of intensive care unit admission (43.5% vs. 22.9%; OR, 2.58; 95% CI, 2.53-2.64) and mechanical ventilation (20.8% vs. 7.4%; OR, 3.30; 95% CI, 3.21-3.40). HT was a predictor of survival (OR, 1.22; 95% CI, 1.17-1.27) and discharge to home (OR, 1.05; 95% CI, 1.02-1.07) after adjustment for covariates. Trauma patients transported by helicopter were more severely injured, had longer transport times, and required more hospital resources than those transported by ground. Despite this, HT patients were more likely to survive and were more likely to be discharged home after treatment when compared with those transported by ground. Despite concerns regarding helicopter utilization in the civilian setting, this study shows that HT has merit and impacts outcome.
Hirshon, Jon Mark; Galvagno, Samuel M; Comer, Angela; Millin, Michael G; Floccare, Douglas J; Alcorta, Richard L; Lawner, Benjamin J; Margolis, Asa M; Nable, Jose V; Bass, Robert R
2016-03-01
Helicopter emergency medical services (EMS) has become a well-established component of modern trauma systems. It is an expensive, limited resource with potential safety concerns. Helicopter EMS activation criteria intended to increase efficiency and reduce inappropriate use remain elusive and difficult to measure. This study evaluates the effect of statewide field trauma triage changes on helicopter EMS use and patient outcomes. Data were extracted from the helicopter EMS computer-aided dispatch database for in-state scene flights and from the state Trauma Registry for all trauma patients directly admitted from the scene or transferred to trauma centers from July 1, 2000, to June 30, 2011. Computer-aided dispatch flights were analyzed for periods corresponding to field triage protocol modifications intended to improve system efficiency. Outcomes were separately analyzed for trauma registry patients by mode of transport. The helicopter EMS computer-aided dispatch data set included 44,073 transports. There was a statewide decrease in helicopter EMS usage for trauma patients of 55.9%, differentially affecting counties closer to trauma centers. The Trauma Registry data set included 182,809 patients (37,407 helicopter transports, 128,129 ambulance transports, and 17,273 transfers). There was an increase of 21% in overall annual EMS scene trauma patients transported; ground transports increased by 33%, whereas helicopter EMS transports decreased by 49%. Helicopter EMS patient acuity increased, with an attendant increase in patient mortality. However, when standardized with W statistics, both helicopter EMS- and ground-transported trauma patients showed sustained improvement in mortality. Modifications to state protocols were associated with decreased helicopter EMS use and overall improved trauma patient outcomes. Copyright © 2015 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
The Neural Dynamics of Attentional Selection in Natural Scenes.
Kaiser, Daniel; Oosterhof, Nikolaas N; Peelen, Marius V
2016-10-12
The human visual system can only represent a small subset of the many objects present in cluttered scenes at any given time, such that objects compete for representation. Despite these processing limitations, the detection of object categories in cluttered natural scenes is remarkably rapid. How does the brain efficiently select goal-relevant objects from cluttered scenes? In the present study, we used multivariate decoding of magneto-encephalography (MEG) data to track the neural representation of within-scene objects as a function of top-down attentional set. Participants detected categorical targets (cars or people) in natural scenes. The presence of these categories within a scene was decoded from MEG sensor patterns by training linear classifiers on differentiating cars and people in isolation and testing these classifiers on scenes containing one of the two categories. The presence of a specific category in a scene could be reliably decoded from MEG response patterns as early as 160 ms, despite substantial scene clutter and variation in the visual appearance of each category. Strikingly, we find that these early categorical representations fully depend on the match between visual input and top-down attentional set: only objects that matched the current attentional set were processed to the category level within the first 200 ms after scene onset. A sensor-space searchlight analysis revealed that this early attention bias was localized to lateral occipitotemporal cortex, reflecting top-down modulation of visual processing. These results show that attention quickly resolves competition between objects in cluttered natural scenes, allowing for the rapid neural representation of goal-relevant objects. Efficient attentional selection is crucial in many everyday situations. For example, when driving a car, we need to quickly detect obstacles, such as pedestrians crossing the street, while ignoring irrelevant objects. How can humans efficiently perform such tasks, given the multitude of objects contained in real-world scenes? Here we used multivariate decoding of magnetoencephalogaphy data to characterize the neural underpinnings of attentional selection in natural scenes with high temporal precision. We show that brain activity quickly tracks the presence of objects in scenes, but crucially only for those objects that were immediately relevant for the participant. These results provide evidence for fast and efficient attentional selection that mediates the rapid detection of goal-relevant objects in real-world environments. Copyright © 2016 the authors 0270-6474/16/3610522-07$15.00/0.
Irsik, Vanessa C; Vanden Bosch der Nederlanden, Christina M; Snyder, Joel S
2016-11-01
Attention and other processing constraints limit the perception of objects in complex scenes, which has been studied extensively in the visual sense. We used a change deafness paradigm to examine how attention to particular objects helps and hurts the ability to notice changes within complex auditory scenes. In a counterbalanced design, we examined how cueing attention to particular objects affected performance in an auditory change-detection task through the use of valid or invalid cues and trials without cues (Experiment 1). We further examined how successful encoding predicted change-detection performance using an object-encoding task and we addressed whether performing the object-encoding task along with the change-detection task affected performance overall (Experiment 2). Participants had more error for invalid compared to valid and uncued trials, but this effect was reduced in Experiment 2 compared to Experiment 1. When the object-encoding task was present, listeners who completed the uncued condition first had less overall error than those who completed the cued condition first. All participants showed less change deafness when they successfully encoded change-relevant compared to irrelevant objects during valid and uncued trials. However, only participants who completed the uncued condition first also showed this effect during invalid cue trials, suggesting a broader scope of attention. These findings provide converging evidence that attention to change-relevant objects is crucial for successful detection of acoustic changes and that encouraging broad attention to multiple objects is the best way to reduce change deafness. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Intelligent bandwidth compression
NASA Astrophysics Data System (ADS)
Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.
1980-02-01
The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the database.
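A minimal sketch of the kind of content transmitted, assuming gradient-threshold edge detection and a given target window; the real system used auto-cueing and a rule-based priority controller rather than this simple scheme.

```python
import numpy as np

def compress(scene: np.ndarray, window: tuple, thresh: float = 0.1):
    """Return a 1-bit edge map of the background plus a full-resolution
    crop of the priority target window."""
    gy, gx = np.gradient(scene.astype(float))
    edge_map = np.hypot(gy, gx) > thresh          # cheap background summary
    y0, y1, x0, x1 = window
    target = scene[y0:y1, x0:x1].copy()           # full gray levels, high rate
    return edge_map, target

scene = np.random.rand(480, 640)                  # stand-in sensor frame
edges, target_window = compress(scene, (200, 264, 300, 364))
```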
Pyramidal neurovision architecture for vision machines
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1993-08-01
The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, whereupon each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.
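A minimal sketch of the converging multilayered idea: a simple image pyramid in which each level blurs and downsamples the one below. The 3x3 box kernel is an assumption made for brevity; it is not the paper's architecture.

```python
import numpy as np

def blur_downsample(img: np.ndarray) -> np.ndarray:
    """3x3 box blur followed by 2x decimation."""
    padded = np.pad(img, 1, mode="edge")
    blurred = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    return blurred[::2, ::2]

def pyramid(img: np.ndarray, levels: int = 4):
    out = [img]
    for _ in range(levels - 1):
        out.append(blur_downsample(out[-1]))      # each level converges upward
    return out

levels = pyramid(np.random.rand(256, 256))
print([l.shape for l in levels])   # (256,256) -> (128,128) -> (64,64) -> (32,32)
```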
NASA Astrophysics Data System (ADS)
Huang, Xin; Chen, Huijun; Gong, Jianya
2018-01-01
Spaceborne multi-angle images with a high-resolution are capable of simultaneously providing spatial details and three-dimensional (3D) information to support detailed and accurate classification of complex urban scenes. In recent years, satellite-derived digital surface models (DSMs) have been increasingly utilized to provide height information to complement spectral properties for urban classification. However, in such a way, the multi-angle information is not effectively exploited, which is mainly due to the errors and difficulties of the multi-view image matching and the inaccuracy of the generated DSM over complex and dense urban scenes. Therefore, it is still a challenging task to effectively exploit the available angular information from high-resolution multi-angle images. In this paper, we investigate the potential for classifying urban scenes based on local angular properties characterized from high-resolution ZY-3 multi-view images. Specifically, three categories of angular difference features (ADFs) are proposed to describe the angular information at three levels (i.e., pixel, feature, and label levels): (1) ADF-pixel: the angular information is directly extrapolated by pixel comparison between the multi-angle images; (2) ADF-feature: the angular differences are described in the feature domains by comparing the differences between the multi-angle spatial features (e.g., morphological attribute profiles (APs)). (3) ADF-label: label-level angular features are proposed based on a group of urban primitives (e.g., buildings and shadows), in order to describe the specific angular information related to the types of primitive classes. In addition, we utilize spatial-contextual information to refine the multi-level ADF features using superpixel segmentation, for the purpose of alleviating the effects of salt-and-pepper noise and representing the main angular characteristics within a local area. The experiments on ZY-3 multi-angle images confirm that the proposed ADF features can effectively improve the accuracy of urban scene classification, with a significant increase in overall accuracy (3.8-11.7%) compared to using the spectral bands alone. Furthermore, the results indicated the superiority of the proposed ADFs in distinguishing between the spectrally similar and complex man-made classes, including roads and various types of buildings (e.g., high buildings, urban villages, and residential apartments).
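A minimal sketch of the pixel-level variant (ADF-pixel) follows, assuming co-registered views and plain absolute differences; the array names and the difference operator are illustrative assumptions, and the paper also defines feature- and label-level variants.

```python
import numpy as np

def adf_pixel(nadir, forward, backward):
    """Stack per-pixel absolute differences between the three views."""
    return np.stack([np.abs(nadir - forward),
                     np.abs(nadir - backward),
                     np.abs(forward - backward)], axis=-1)

h, w = 512, 512
nadir, forward, backward = (np.random.rand(h, w) for _ in range(3))  # stand-ins
features = adf_pixel(nadir, forward, backward)    # (512, 512, 3) ADF stack
```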
USDA-ARS's Scientific Manuscript database
Chrysomya rufifacies is a blow fly commonly found in corpses at crime scene investigations. This study was designed to develop laboratory colonization methods for Ch. rufifacies and utilize Chrysomya megacephala as its larval food source. Both fly species were collected in the wild and easily colon...
Ratings for emotion film clips.
Gabert-Quillen, Crystal A; Bartolini, Ellen E; Abravanel, Benjamin T; Sanislow, Charles A
2015-09-01
Film clips are widely utilized to elicit emotion in a variety of research studies. Normative ratings for scenes selected for these purposes support the idea that selected clips correspond to the intended target emotion, but studies reporting normative ratings are limited. Using an ethnically diverse sample of college undergraduates, selected clips were rated for intensity, discreteness, valence, and arousal. Variables hypothesized to affect the perception of stimuli (i.e., gender, race-ethnicity, and familiarity) were also examined. Our analyses generally indicated that males reacted strongly to positively valenced film clips, whereas females reacted more strongly to negatively valenced film clips. Caucasian participants tended to react more strongly to the film clips, and we found some variation by race-ethnicity across target emotions. Finally, familiarity with the films tended to produce higher ratings for positively valenced film clips, and lower ratings for negatively valenced film clips. These findings provide normative ratings for a useful set of film clips for the study of emotion, and they underscore factors to be considered in research that utilizes scenes from film for emotion elicitation.
Winstock, A R; Griffiths, P; Stewart, D
2001-09-01
This study explores the utility of a self-completion survey method to quickly and cheaply generate information on patterns and trends among regular "recreational" drug consumers. Data are reported here from 1151 subjects accessed through a dance music publication. In keeping with previous studies of drug use within the dance scene, polysubstance use was the norm. Many of those reporting use of "ecstasy" were regularly using multiple tablets, often consumed in combination with other substances, thus exposing themselves to serious health risks, in particular the risk of dose-related neurotoxic effects. Seventy percent were drinking alcohol at hazardous levels. Subjects' patterns of drug purchasing also put them at risk of severe criminal sanction. The data supported evidence that cocaine use had become increasingly popular in the UK, but contrasted with some commentators' views that ecstasy use was in decline. The utility of this method and how the results should be interpreted are discussed, as are the data's implications for harm and risk reduction activities.
MIRAGE: system overview and status
NASA Astrophysics Data System (ADS)
Robinson, Richard M.; Oleson, Jim; Rubin, Lane; McHugh, Stephen W.
2000-07-01
Santa Barbara Infrared's (SBIR) MIRAGE (Multispectral InfraRed Animation Generation Equipment) is a state-of-the-art dynamic infrared scene projector system. Imagery from the first MIRAGE system was presented to the scene simulation community during last year's SPIE AeroSense 99 Symposium. Since that time, SBIR has delivered five MIRAGE systems. This paper will provide an overview of the MIRAGE system and discuss its current status. Included are an update of the system hardware and the current configuration. Proposed upgrades to this configuration and options will be discussed. Updates on the latest installations, applications and measured data will also be presented.
Cortical Representations of Speech in a Multitalker Auditory Scene.
Puvvada, Krishna C; Simon, Jonathan Z
2017-09-20
The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects. Copyright © 2017 the authors 0270-6474/17/379189-08$15.00/0.
Extracting heading and temporal range from optic flow: Human performance issues
NASA Technical Reports Server (NTRS)
Kaiser, Mary K.; Perrone, John A.; Stone, Leland; Banks, Martin S.; Crowell, James A.
1993-01-01
Pilots are able to extract information about their vehicle motion and environmental structure from dynamic transformations of the out-the-window scene. In this presentation, we focus on the information in the optic flow that specifies vehicle heading and the distance to objects in the environment, scaled to a temporal metric. In particular, we are concerned with modeling how human operators extract the necessary information, and with what factors impact their ability to utilize the critical information. In general, the psychophysical data suggest that the human visual system is fairly robust to degradations in the visual display, e.g., reduced contrast and resolution or a restricted field of view. However, extraneous motion flow, i.e., that introduced by sensor rotation, greatly compromises human performance. The implications of these models and data for enhanced/synthetic vision systems are discussed.
Ishida, T; Ohta, M; Sugimoto, T
1985-01-01
Osaka, a modern urban metropolis in Japan, experienced a tragic gas explosion in 1970 when the dispatch room of the City Fire Department was in the process of being moved to a new building. Many unforeseen problems arose during this disaster: e.g., there was an overall lack of leadership, confusion of communication, a need for triage, and a lack of control of the mass media. The Osaka Medical Association organized a committee to resolve these problems. Its conclusions and recommendations were that a control headquarters be established at the scene of a disaster, the number of ambulances and EMTs be increased, disaster tags be utilized, a special radio frequency be created, and a computer-aided command and control system for fire fighting and ambulance services be introduced. These recommendations have all been followed.
NASA Technical Reports Server (NTRS)
Franks, Shannon; Masek, Jeffrey G.; Headley, Rachel M.; Gasch, John; Arvidson, Terry
2009-01-01
The Global Land Survey (GLS) 2005 is a cloud-free, orthorectified collection of Landsat imagery acquired during the 2004-2007 epoch intended to support global land-cover and ecological monitoring. Due to the numerous complexities in selecting imagery for the GLS2005, NASA and the U.S. Geological Survey (USGS) sponsored the development of an automated scene selection tool, the Large Area Scene Selection Interface (LASSI), to aid in the selection of imagery for this data set. This innovative approach to scene selection applied a user-defined weighting system to various scene parameters: image cloud cover, image vegetation greenness, choice of sensor, and the ability of the Landsat 7 Scan Line Corrector (SLC)-off pair to completely fill image gaps, among others. The parameters considered in scene selection were weighted according to their relative importance to the data set, along with the algorithm's sensitivity to that weight. This paper describes the methodology and analysis that established the parameter weighting strategy, as well as the post-screening processes used in selecting the optimal data set for GLS2005.
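A minimal sketch of such user-weighted scene scoring follows; the weights and parameter names are hypothetical, since the actual LASSI weighting strategy is the subject of the paper.

```python
WEIGHTS = {"clear_fraction": 0.4, "greenness": 0.3,
           "sensor_pref": 0.1, "gap_fill": 0.2}   # hypothetical weights

def scene_score(scene: dict) -> float:
    """Weighted sum over parameters pre-normalized to [0, 1]."""
    return sum(w * scene[k] for k, w in WEIGHTS.items())

candidates = [
    {"id": "L7-A", "clear_fraction": 0.95, "greenness": 0.7,
     "sensor_pref": 0.5, "gap_fill": 0.9},
    {"id": "L5-B", "clear_fraction": 0.80, "greenness": 0.9,
     "sensor_pref": 1.0, "gap_fill": 1.0},
]
best = max(candidates, key=scene_score)
print(best["id"], round(scene_score(best), 3))    # -> L5-B 0.89
```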
Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features.
Li, Linyi; Xu, Tingbao; Chen, Yun
2017-01-01
In recent years the spatial resolutions of remote sensing images have been greatly improved. However, a higher spatial resolution does not always lead to a better result in automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracts visual attention features through a multiscale process. A fuzzy classification method using visual attention features (FC-VAF) was then developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated on remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images, and achieved more accurate classification results than the reference methods according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images.
Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach.
Liu, Mengyun; Chen, Ruizhi; Li, Deren; Chen, Yujin; Guo, Guangyi; Cao, Zhipeng; Pan, Yuanjin
2017-12-08
After decades of research, there is still no solution for indoor localization like the GNSS (Global Navigation Satellite System) solution for outdoor environments. The major reasons for this phenomenon are the complex spatial topology and RF transmission environment. To deal with these problems, an indoor scene constrained method for localization is proposed in this paper, which is inspired by the visual cognition ability of the human brain and the progress in the computer vision field regarding high-level image understanding. Furthermore, a multi-sensor fusion method is implemented on a commercial smartphone including cameras, WiFi and inertial sensors. Compared to former research, the camera on a smartphone is used to "see" which scene the user is in. With this information, a particle filter algorithm constrained by scene information is adopted to determine the final location. For indoor scene recognition, we take advantage of deep learning that has been proven to be highly effective in the computer vision community. For particle filter, both WiFi and magnetic field signals are used to update the weights of particles. Similar to other fingerprinting localization methods, there are two stages in the proposed system, offline training and online localization. In the offline stage, an indoor scene model is trained by Caffe (one of the most popular open source frameworks for deep learning) and a fingerprint database is constructed by user trajectories in different scenes. To reduce the volume requirement of training data for deep learning, a fine-tuned method is adopted for model training. In the online stage, a camera in a smartphone is used to recognize the initial scene. Then a particle filter algorithm is used to fuse the sensor data and determine the final location. To prove the effectiveness of the proposed method, an Android client and a web server are implemented. The Android client is used to collect data and locate a user. The web server is developed for indoor scene model training and communication with an Android client. To evaluate the performance, comparison experiments are conducted and the results demonstrate that a positioning accuracy of 1.32 m at 95% is achievable with the proposed solution. Both positioning accuracy and robustness are enhanced compared to approaches without scene constraint including commercial products such as IndoorAtlas.
Tachistoscopic illumination and masking of real scenes.
Chichka, David; Philbeck, John W; Gajewski, Daniel A
2015-03-01
Tachistoscopic presentation of scenes has been valuable for studying the emerging properties of visual scene representations. The spatial aspects of this work have generally been focused on the conceptual locations (e.g., next to the refrigerator) and directional locations of objects in 2-D arrays and/or images. Less is known about how the perceived egocentric distance of objects develops. Here we describe a novel system for presenting brief glimpses of a real-world environment, followed by a mask. The system includes projectors with mechanical shutters for projecting the fixation and masking images, a set of LED floodlights for illuminating the environment, and computer-controlled electronics to set the timing and initiate the process. Because a real environment is used, most visual distance and depth cues can be manipulated using traditional methods. The system is inexpensive, robust, and its components are readily available in the marketplace. This article describes the system and the timing characteristics of each component. We verified the system's ability to control exposure to time scales as low as a few milliseconds.
4D light-field sensing system for people counting
NASA Astrophysics Data System (ADS)
Hou, Guangqi; Zhang, Chi; Wang, Yunlong; Sun, Zhenan
2016-03-01
Counting the number of people is still an important task in public security applications, and a few methods based on video surveillance have been proposed in recent years. In this paper, we design a novel optical sensing system to directly acquire the depth map of the scene from one light-field camera. The light-field sensing system can count the number of people crossing a passageway, recording the direction and intensity of rays in a snapshot without any auxiliary lighting devices. Depth maps are extracted from the raw light-ray sensing data. Our smart sensing system is equipped with a passive imaging sensor, which is able to naturally discern the depth difference between the head and shoulders of each person. From this, a human model is built. By detecting the human model in light-field images, the number of people passing through the scene can be counted rapidly. We verify the feasibility of the sensing system as well as its accuracy by capturing real-world scenes with single and multiple people passing under natural illumination.
Li, Rui; Zhang, Xiaodong; Li, Hanzhe; Zhang, Liming; Lu, Zhufeng; Chen, Jiangcheng
2018-08-01
Brain control technology can restore communication between the brain and a prosthesis, and choosing a Brain-Computer Interface (BCI) paradigm to evoke electroencephalogram (EEG) signals is an essential step in developing this technology. In this paper, the Scene Graph paradigm for controlling prostheses is proposed; this paradigm is based on Steady-State Visual Evoked Potentials (SSVEPs) associated with the Scene Graph of a subject's intention. A mathematical model was built to predict SSVEPs evoked by the proposed paradigm, and a sinusoidal stimulation method was used to present the Scene Graph stimulus to elicit SSVEPs from subjects. Then, a 2-degree-of-freedom (2-DOF) brain-controlled prosthesis system was constructed to validate the performance of the Scene Graph-SSVEP (SG-SSVEP)-based BCI. SG-SSVEPs were classified via the Canonical Correlation Analysis (CCA) approach. To assess the efficiency of the proposed BCI system, its performance was compared with that of a traditional SSVEP-BCI system. Experimental results from six subjects suggested that the proposed system effectively enhanced the SSVEP responses, decreased the degradation of SSVEP strength and reduced visual fatigue in comparison with the traditional SSVEP-BCI system. The average signal-to-noise ratio (SNR) of SG-SSVEP was 6.31 ± 2.64 dB, versus 3.38 ± 0.78 dB for the traditional SSVEP. In addition, the proposed system achieved good performance in prosthesis control: the average accuracy was 94.58% ± 7.05%, and the corresponding information transfer rate (ITR) was high, at 19.55 ± 3.07 bit/min. The experimental results reveal that the SG-SSVEP-based BCI system achieves good performance and improved stability relative to the conventional approach. Copyright © 2018 Elsevier B.V. All rights reserved.
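CCA-based SSVEP detection is standard enough to sketch: score each candidate stimulation frequency by the canonical correlation between the EEG and sine/cosine references, then pick the best. The channel count, candidate frequencies and sampling rate below are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_cca(eeg: np.ndarray, fs: float, freqs) -> float:
    """Return the candidate frequency with the highest canonical correlation
    between multi-channel EEG and sin/cos references (two harmonics)."""
    t = np.arange(eeg.shape[0]) / fs
    best_f, best_r = None, -1.0
    for f in freqs:
        ref = np.column_stack(
            [np.sin(2 * np.pi * h * f * t) for h in (1, 2)] +
            [np.cos(2 * np.pi * h * f * t) for h in (1, 2)])
        xs, ys = CCA(n_components=1).fit_transform(eeg, ref)
        r = np.corrcoef(xs[:, 0], ys[:, 0])[0, 1]
        if r > best_r:
            best_f, best_r = f, r
    return best_f

fs, dur = 250.0, 2.0
t = np.arange(int(fs * dur)) / fs
eeg = 0.5 * np.sin(2 * np.pi * 10.0 * t)[:, None] + np.random.randn(len(t), 8)
print(ssvep_cca(eeg, fs, [8.0, 10.0, 12.0, 15.0]))   # -> 10.0
```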
NASA Astrophysics Data System (ADS)
Keane, Tommy P.; Saber, Eli; Rhody, Harvey; Savakis, Andreas; Raj, Jeffrey
2012-04-01
Contemporary research in automated panorama creation utilizes camera calibration or extensive knowledge of camera locations and relations to each other to achieve successful results. Research in image registration attempts to restrict these same camera parameters or apply complex point-matching schemes to overcome the complications found in real-world scenarios. This paper presents a novel automated panorama creation algorithm by developing an affine transformation search based on maximized mutual information (MMI) for region-based registration. Standard MMI techniques have been limited to applications with airborne/satellite imagery or medical images. We show that a novel MMI algorithm can approximate an accurate registration between views of realistic scenes of varying depth distortion. The proposed algorithm has been developed using stationary, color, surveillance video data for a scenario with no a priori camera-to-camera parameters. This algorithm is robust for strict- and nearly-affine-related scenes, while providing a useful approximation for the overlap regions in scenes related by a projective homography or a more complex transformation, allowing for a set of efficient and accurate initial conditions for pixel-based registration.
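A minimal sketch of the MMI idea follows, with a translation-only grid search standing in for the paper's affine search; the bin count and shift range are assumptions.

```python
import numpy as np

def mi(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """Mutual information between two images via a joint histogram."""
    pab, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pab /= pab.sum()
    pa = pab.sum(1, keepdims=True)
    pb = pab.sum(0, keepdims=True)
    nz = pab > 0
    return float((pab[nz] * np.log(pab[nz] / (pa @ pb)[nz])).sum())

def register(ref: np.ndarray, mov: np.ndarray, max_shift: int = 8):
    """Exhaustive search for the translation maximizing MI on a central crop."""
    h, w = ref.shape
    c = max_shift
    best = max(((mi(ref[c:h - c, c:w - c],
                    mov[c + dy:h - c + dy, c + dx:w - c + dx]), dy, dx)
                for dy in range(-c, c + 1) for dx in range(-c, c + 1)))
    return best[1:], best[0]                  # (dy, dx), MI score

ref = np.random.rand(128, 128)
mov = np.roll(ref, (3, -5), axis=(0, 1))      # known offset for the demo
print(register(ref, mov))                     # recovers (dy, dx) = (3, -5)
```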
Curran, Allison M; Prada, Paola A; Furton, Kenneth G
2010-06-15
In this study it is demonstrated that human odor collected from items recovered at a post-blast scene can be evaluated using human scent specific canine teams to locate and identify individuals who have been in contact with the improvised explosive device (IED) components and/or the delivery vehicle. The purpose of the experiments presented here was to document human scent survivability in both peroxide-based explosions as well as simulated roadside IEDs utilizing double-blind field trials. Human odor was collected from post-blast device and vehicle components. Human scent specific canine teams were then deployed at the blast scene and in locations removed from the blast scene to validate that human odor remains in sufficient quantities for reliable canine detection and identification. Human scent specific canines have shown the ability to identify individuals who have been in contact with IEDs using post-blast debris, with an average success rate across site responses of 82.2%, verifying that this technology has great potential in criminal, investigative, and military applications. (c) 2010 Elsevier Ireland Ltd. All rights reserved.
A fusion network for semantic segmentation using RGB-D data
NASA Astrophysics Data System (ADS)
Yuan, Jiahui; Zhang, Kun; Xia, Yifan; Qi, Lin; Dong, Junyu
2018-04-01
Semantic scene parsing is important in many intelligent fields, including perceptual robotics. For the past few years, pixel-wise prediction tasks like semantic segmentation with RGB images have been extensively studied and have reached remarkable parsing levels, thanks to convolutional neural networks (CNNs) and large scene datasets. With the development of stereo cameras and RGB-D sensors, it is expected that additional depth information will help improve accuracy. In this paper, we propose a semantic segmentation framework incorporating RGB and complementary depth information. Motivated by the success of fully convolutional networks (FCN) in the semantic segmentation field, we design a fully convolutional network consisting of two branches which extract features from both RGB and depth data simultaneously and fuse them as the network goes deeper. Instead of aggregating multiple models, our goal is to utilize RGB data and depth data more effectively in a single model. We evaluate our approach on the NYU-Depth V2 dataset, which consists of 1449 cluttered indoor scenes, and achieve competitive results with the state-of-the-art methods.
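A minimal sketch of the two-branch, sum-fusion design follows; the layer sizes, fusion point and class count are assumptions, and the paper's network is considerably deeper.

```python
import torch
import torch.nn as nn

class FusionFCN(nn.Module):
    """Toy two-branch FCN: RGB and depth features extracted separately,
    fused by summation, then decoded to per-pixel class scores."""
    def __init__(self, num_classes: int = 40):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.rgb_branch = branch(3)
        self.depth_branch = branch(1)
        self.head = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, num_classes, 1),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False))

    def forward(self, rgb, depth):
        fused = self.rgb_branch(rgb) + self.depth_branch(depth)  # sum fusion
        return self.head(fused)

net = FusionFCN()
scores = net(torch.randn(1, 3, 240, 320), torch.randn(1, 1, 240, 320))
print(scores.shape)                    # torch.Size([1, 40, 240, 320])
```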
Knowledge-based machine vision systems for space station automation
NASA Technical Reports Server (NTRS)
Ranganath, Heggere S.; Chipman, Laure J.
1989-01-01
Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.
NASA Technical Reports Server (NTRS)
Downward, James G.
1992-01-01
This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.
The what, where and how of auditory-object perception.
Bizley, Jennifer K; Cohen, Yale E
2013-10-01
The fundamental perceptual unit in hearing is the 'auditory object'. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood.
Memory for sound, with an ear toward hearing in complex auditory scenes.
Snyder, Joel S; Gregg, Melissa K
2011-10-01
An area of research that has experienced recent growth is the study of memory during perception of simple and complex auditory scenes. These studies have provided important information about how well auditory objects are encoded in memory and how well listeners can notice changes in auditory scenes. These are significant developments because they present an opportunity to better understand how we hear in realistic situations, how higher-level aspects of hearing such as semantics and prior exposure affect perception, and the similarities and differences between auditory perception and perception in other modalities, such as vision and touch. The research also poses exciting challenges for behavioral and neural models of how auditory perception and memory work.
Criterion-free measurement of motion transparency perception at different speeds
Rocchi, Francesca; Ledgeway, Timothy; Webb, Ben S.
2018-01-01
Transparency perception often occurs when objects within the visual scene partially occlude each other or move at the same time, at different velocities across the same spatial region. Although transparent motion perception has been extensively studied, we still do not understand how the distribution of velocities within a visual scene contributes to transparent perception. Here we use a novel psychophysical procedure to characterize the distribution of velocities in a scene that give rise to transparent motion perception. To prevent participants from adopting a subjective decision criterion when discriminating transparent motion, we used an “odd-one-out,” three-alternative forced-choice procedure. Two intervals contained the standard—a random-dot-kinematogram with dot speeds or directions sampled from a uniform distribution. The other interval contained the comparison—speeds or directions sampled from a distribution with the same range as the standard, but with a notch of different widths removed. Our results suggest that transparent motion perception is driven primarily by relatively slow speeds, and does not emerge when only very fast speeds are present within a visual scene. Transparent perception of moving surfaces is modulated by stimulus-based characteristics, such as the separation between the means of the overlapping distributions or the range of speeds presented within an image. Our work illustrates the utility of using objective, forced-choice methods to reveal the mechanisms underlying motion transparency perception. PMID:29614154
Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu
2017-01-01
Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important and common to many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges, and the pixel-wise ground truth of the foreground (FG) is provided for each frame. A series of experiments were conducted to evaluate BS algorithms on this proposed dataset. The overall performance of the BS algorithms and their processor/memory requirements were compared. Proper evaluation metrics and criteria were employed to evaluate the capability of each BS algorithm to handle the different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide valid references for developing new BS algorithms for remote scene IR video sequences, and some of them are not limited to remote scenes or IR video but are generic to background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112
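As a point of reference for what such evaluations measure, here is a minimal sketch of the simplest baseline in this family, running-average background subtraction; the learning rate and threshold are assumptions, and the algorithms evaluated in the paper (e.g., mixture-of-Gaussians models) are richer.

```python
import numpy as np

def subtract(frames, alpha=0.02, thresh=0.1):
    """Running-average background model: flag pixels that deviate from the
    slowly updated background estimate."""
    background = frames[0].astype(float)
    masks = []
    for frame in frames[1:]:
        frame = frame.astype(float)
        masks.append(np.abs(frame - background) > thresh)      # FG mask
        background = (1 - alpha) * background + alpha * frame  # slow update
    return masks

frames = [np.random.rand(64, 64) * 0.05 for _ in range(50)]  # static-ish scene
frames[30][20:30, 20:30] += 0.8                              # moving object
masks = subtract(frames)
print(masks[29].sum())   # ~100 pixels flagged in the frame with the object
```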