Sample records for area scene selection

  1. Large Area Scene Selection Interface (LASSI). Methodology of Selecting Landsat Imagery for the Global Land Survey 2005

    NASA Technical Reports Server (NTRS)

    Franks, Shannon; Masek, Jeffrey G.; Headley, Rachel M.; Gasch, John; Arvidson, Terry

    2009-01-01

    The Global Land Survey (GLS) 2005 is a cloud-free, orthorectified collection of Landsat imagery acquired during the 2004-2007 epoch intended to support global land-cover and ecological monitoring. Due to the numerous complexities in selecting imagery for the GLS2005, NASA and the U.S. Geological Survey (USGS) sponsored the development of an automated scene selection tool, the Large Area Scene Selection Interface (LASSI), to aid in the selection of imagery for this data set. This innovative approach to scene selection applied a user-defined weighting system to various scene parameters: image cloud cover, image vegetation greenness, choice of sensor, and the ability of the Landsat 7 Scan Line Corrector (SLC)-off pair to completely fill image gaps, among others. The parameters considered in scene selection were weighted according to their relative importance to the data set, along with the algorithm's sensitivity to that weight. This paper describes the methodology and analysis that established the parameter weighting strategy, as well as the post-screening processes used in selecting the optimal data set for GLS2005.
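
    The abstract describes a weighted, multi-parameter ranking of candidate acquisitions. As a rough illustration of that idea only, the sketch below combines a few normalized scene parameters into a single score; the parameter names, weights, and scoring formula are assumptions for illustration and are not taken from the actual LASSI implementation.

    ```python
    # Hypothetical sketch of a weighted scene-scoring scheme in the spirit of LASSI.
    # Parameter names, weights, and the scoring formula are illustrative assumptions,
    # not the actual USGS/NASA implementation.

    def score_scene(scene, weights):
        """Combine normalized scene parameters (each scaled to 0..1, higher = better)
        into a single weighted score used to rank candidate acquisitions."""
        return sum(weights[name] * scene[name] for name in weights)

    weights = {
        "cloud_free":  0.40,   # 1 - fractional cloud cover
        "greenness":   0.30,   # vegetation greenness (e.g., a peak-season proxy)
        "sensor_pref": 0.15,   # preference for a given Landsat sensor
        "gap_fill":    0.15,   # ability of an SLC-off pair to fill image gaps
    }

    candidates = [
        {"id": "scene_a", "cloud_free": 0.95, "greenness": 0.70, "sensor_pref": 1.0, "gap_fill": 0.8},
        {"id": "scene_b", "cloud_free": 0.80, "greenness": 0.90, "sensor_pref": 0.5, "gap_fill": 1.0},
    ]

    best = max(candidates, key=lambda s: score_scene(s, weights))
    print(best["id"], round(score_scene(best, weights), 3))
    ```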

  2. Fourier power, subjective distance, and object categories all provide plausible models of BOLD responses in scene-selective visual areas

    PubMed Central

    Lescroart, Mark D.; Stansbury, Dustin E.; Gallant, Jack L.

    2015-01-01

    Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue. PMID:26594164
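
    The voxel-wise modeling procedure described here (fit a linear encoding model to each voxel, then score predicted variance on withheld data) can be illustrated with a minimal sketch. The synthetic data, feature dimensions, train/test split, and ridge regularization below are assumptions; the study's actual feature spaces and fitting details may differ.

    ```python
    # Minimal sketch of voxel-wise encoding-model evaluation: fit a linear model
    # per voxel on training stimuli and score predicted variance on withheld data.
    # Synthetic data and ridge regularization are assumptions for illustration only.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    n_train, n_test, n_features, n_voxels = 1000, 386, 50, 200  # e.g., a split of the 1386 images

    X_train = rng.standard_normal((n_train, n_features))   # e.g., Fourier-power features per image
    X_test = rng.standard_normal((n_test, n_features))
    true_w = rng.standard_normal((n_features, n_voxels))
    Y_train = X_train @ true_w + rng.standard_normal((n_train, n_voxels))
    Y_test = X_test @ true_w + rng.standard_normal((n_test, n_voxels))

    model = Ridge(alpha=10.0).fit(X_train, Y_train)          # one linear fit per voxel (columns of Y)
    pred = model.predict(X_test)
    r2_per_voxel = r2_score(Y_test, pred, multioutput="raw_values")
    print("median held-out R^2:", np.median(r2_per_voxel))
    ```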

  3. Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior

    PubMed Central

    Greene, Michelle R; Baldassano, Christopher; Fei-Fei, Li; Beck, Diane M; Baker, Chris I

    2018-01-01

    Inherent correlations between visual and semantic features in real-world scenes make it difficult to determine how different scene properties contribute to neural representations. Here, we assessed the contributions of multiple properties to scene representation by partitioning the variance explained in human behavioral and brain measurements by three feature models whose inter-correlations were minimized a priori through stimulus preselection. Behavioral assessments of scene similarity reflected unique contributions from a functional feature model indicating potential actions in scenes as well as high-level visual features from a deep neural network (DNN). In contrast, similarity of cortical responses in scene-selective areas was uniquely explained by mid- and high-level DNN features only, while an object label model did not contribute uniquely to either domain. The striking dissociation between functional and DNN features in their contribution to behavioral and brain representations of scenes indicates that scene-selective cortex represents only a subset of behaviorally relevant scene information. PMID:29513219

  4. Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior.

    PubMed

    Groen, Iris Ia; Greene, Michelle R; Baldassano, Christopher; Fei-Fei, Li; Beck, Diane M; Baker, Chris I

    2018-03-07

    Inherent correlations between visual and semantic features in real-world scenes make it difficult to determine how different scene properties contribute to neural representations. Here, we assessed the contributions of multiple properties to scene representation by partitioning the variance explained in human behavioral and brain measurements by three feature models whose inter-correlations were minimized a priori through stimulus preselection. Behavioral assessments of scene similarity reflected unique contributions from a functional feature model indicating potential actions in scenes as well as high-level visual features from a deep neural network (DNN). In contrast, similarity of cortical responses in scene-selective areas was uniquely explained by mid- and high-level DNN features only, while an object label model did not contribute uniquely to either domain. The striking dissociation between functional and DNN features in their contribution to behavioral and brain representations of scenes indicates that scene-selective cortex represents only a subset of behaviorally relevant scene information.
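
    The variance-partitioning logic used in this study (unique versus shared variance across feature models) can be sketched with ordinary regression R-squared comparisons. The example below is a simplified, two-model illustration on synthetic data, not the study's analysis of behavioral and fMRI similarity measures.

    ```python
    # Illustrative variance-partitioning sketch for two predictor sets (the study
    # used three feature models; two are shown to keep the arithmetic short).
    # Data are synthetic stand-ins for the behavioral/fMRI similarity measures.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    def r2(X, y):
        return LinearRegression().fit(X, y).score(X, y)

    rng = np.random.default_rng(1)
    n = 500
    A = rng.standard_normal((n, 5))     # e.g., functional-feature model
    B = rng.standard_normal((n, 5))     # e.g., DNN-feature model
    y = A @ rng.standard_normal(5) + 0.5 * B @ rng.standard_normal(5) + rng.standard_normal(n)

    r2_full = r2(np.hstack([A, B]), y)
    r2_A, r2_B = r2(A, y), r2(B, y)
    unique_A = r2_full - r2_B           # variance explained only by model A
    unique_B = r2_full - r2_A           # variance explained only by model B
    shared = r2_full - unique_A - unique_B
    print(f"unique A={unique_A:.3f}, unique B={unique_B:.3f}, shared={shared:.3f}")
    ```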

  5. A Comparison of the Visual Attention Patterns of People With Aphasia and Adults Without Neurological Conditions for Camera-Engaged and Task-Engaged Visual Scenes.

    PubMed

    Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria

    2016-04-01

    The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological conditions. Participants viewed camera-engaged (i.e., human figure facing camera) and task-engaged (i.e., human figure looking at and touching an object) visual scenes. Participants with aphasia responded to engagement cues by focusing on objects of interest more for task-engaged scenes than camera-engaged scenes; however, the difference in their responses to these scenes was not as pronounced as that observed in adults without neurological conditions. In addition, people with aphasia spent more time looking at background areas of interest and less time looking at person areas of interest for camera-engaged scenes than did control participants. Results indicate people with aphasia visually attend to scenes differently than adults without neurological conditions. As a consequence, augmentative and alternative communication (AAC) facilitators may have different visual attention behaviors than the people with aphasia for whom they are constructing or selecting visual scenes. Further examination of the visual attention of people with aphasia may help optimize visual scene selection.

  6. Figure-Ground Organization in Visual Cortex for Natural Scenes

    PubMed Central

    2016-01-01

    Abstract Figure-ground organization and border-ownership assignment are essential for understanding natural scenes. It has been shown that many neurons in the macaque visual cortex signal border-ownership in displays of simple geometric shapes such as squares, but how well these neurons resolve border-ownership in natural scenes is not known. We studied area V2 neurons in behaving macaques with static images of complex natural scenes. We found that about half of the neurons were border-ownership selective for contours in natural scenes, and this selectivity originated from the image context. The border-ownership signals emerged within 70 ms after stimulus onset, only ∼30 ms after response onset. A substantial fraction of neurons were highly consistent across scenes. Thus, the cortical mechanisms of figure-ground organization are fast and efficient even in images of complex natural scenes. Understanding how the brain performs this task so fast remains a challenge. PMID:28058269

  7. The occipital place area represents the local elements of scenes

    PubMed Central

    Kamps, Frederik S.; Julian, Joshua B.; Kubilius, Jonas; Kanwisher, Nancy; Dilks, Daniel D.

    2016-01-01

    Neuroimaging studies have identified three scene-selective regions in human cortex: parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA). However, precisely what scene information each region represents is not clear, especially for the least studied, more posterior OPA. Here we hypothesized that OPA represents local elements of scenes within two independent, yet complementary scene descriptors: spatial boundary (i.e., the layout of external surfaces) and scene content (e.g., internal objects). If OPA processes the local elements of spatial boundary information, then it should respond to these local elements (e.g., walls) themselves, regardless of their spatial arrangement. Indeed, we found OPA, but not PPA or RSC, responded similarly to images of intact rooms and these same rooms in which the surfaces were fractured and rearranged, disrupting the spatial boundary. Next, if OPA represents the local elements of scene content information, then it should respond more when more such local elements (e.g., furniture) are present. Indeed, we found that OPA, but not PPA or RSC, responded more to multiple than single pieces of furniture. Taken together, these findings reveal that OPA analyzes local scene elements – both in spatial boundary and scene content representation – while PPA and RSC represent global scene properties. PMID:26931815

  8. The occipital place area represents the local elements of scenes.

    PubMed

    Kamps, Frederik S; Julian, Joshua B; Kubilius, Jonas; Kanwisher, Nancy; Dilks, Daniel D

    2016-05-15

    Neuroimaging studies have identified three scene-selective regions in human cortex: parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA). However, precisely what scene information each region represents is not clear, especially for the least studied, more posterior OPA. Here we hypothesized that OPA represents local elements of scenes within two independent, yet complementary scene descriptors: spatial boundary (i.e., the layout of external surfaces) and scene content (e.g., internal objects). If OPA processes the local elements of spatial boundary information, then it should respond to these local elements (e.g., walls) themselves, regardless of their spatial arrangement. Indeed, we found that OPA, but not PPA or RSC, responded similarly to images of intact rooms and these same rooms in which the surfaces were fractured and rearranged, disrupting the spatial boundary. Next, if OPA represents the local elements of scene content information, then it should respond more when more such local elements (e.g., furniture) are present. Indeed, we found that OPA, but not PPA or RSC, responded more to multiple than single pieces of furniture. Taken together, these findings reveal that OPA analyzes local scene elements - both in spatial boundary and scene content representation - while PPA and RSC represent global scene properties. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. The neural bases of spatial frequency processing during scene perception

    PubMed Central

    Kauffmann, Louise; Ramanoël, Stephen; Peyrin, Carole

    2014-01-01

    Theories on visual perception agree that scenes are processed in terms of spatial frequencies. Low spatial frequencies (LSF) carry coarse information whereas high spatial frequencies (HSF) carry fine details of the scene. However, how and where spatial frequencies are processed within the brain remain unresolved questions. The present review addresses these issues and aims to identify the cerebral regions differentially involved in low and high spatial frequency processing, and to clarify their attributes during scene perception. Results from a number of behavioral and neuroimaging studies suggest that spatial frequency processing is lateralized in both hemispheres, with the right and left hemispheres predominantly involved in the categorization of LSF and HSF scenes, respectively. There is also evidence that spatial frequency processing is retinotopically mapped in the visual cortex. HSF scenes (as opposed to LSF) activate occipital areas in relation to foveal representations, while categorization of LSF scenes (as opposed to HSF) activates occipital areas in relation to more peripheral representations. Concomitantly, a number of studies have demonstrated that LSF information may reach high-order areas rapidly, allowing an initial coarse parsing of the visual scene, which could then be sent back through feedback into the occipito-temporal cortex to guide finer HSF-based analysis. Finally, the review addresses spatial frequency processing within scene-selective regions of the occipito-temporal cortex. PMID:24847226

  10. Guidance of visual attention by semantic information in real-world scenes

    PubMed Central

    Wu, Chia-Chien; Wick, Farahnaz Ahmed; Pomplun, Marc

    2014-01-01

    Recent research on attentional guidance in real-world scenes has focused on object recognition within the context of a scene. This approach has been valuable for determining some factors that drive the allocation of visual attention and determine visual selection. This article provides a review of experimental work on how different components of context, especially semantic information, affect attentional deployment. We review work from the areas of object recognition, scene perception, and visual search, highlighting recent studies examining semantic structure in real-world scenes. A better understanding on how humans parse scene representations will not only improve current models of visual attention but also advance next-generation computer vision systems and human-computer interfaces. PMID:24567724

  11. Psychophysiological responses and restorative values of wilderness environments

    Treesearch

    Chun-Yen Chang; Ping-Kun Chen; William E. Hammitt; Lisa Machnik

    2007-01-01

    Scenes of natural areas were used as stimuli to analyze the psychological and physiological responses of subjects while viewing wildland scenes. Attention Restoration Theory (Kaplan 1995) and theorized components of restorative environments were used as an orientation for selection of the visual stimuli. Conducted in Taiwan, the studies recorded the psychophysiological...

  12. Coding of navigational affordances in the human visual system

    PubMed Central

    Epstein, Russell A.

    2017-01-01

    A central component of spatial navigation is determining where one can and cannot go in the immediate environment. We used fMRI to test the hypothesis that the human visual system solves this problem by automatically identifying the navigational affordances of the local scene. Multivoxel pattern analyses showed that a scene-selective region of dorsal occipitoparietal cortex, known as the occipital place area, represents pathways for movement in scenes in a manner that is tolerant to variability in other visual features. These effects were found in two experiments: One using tightly controlled artificial environments as stimuli, the other using a diverse set of complex, natural scenes. A reconstruction analysis demonstrated that the population codes of the occipital place area could be used to predict the affordances of novel scenes. Taken together, these results reveal a previously unknown mechanism for perceiving the affordance structure of navigable space. PMID:28416669

  13. Mapping Eroded Areas on Mountain Grassland with Terrestrial Photogrammetry and Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Mayr, Andreas; Rutzinger, Martin; Bremer, Magnus; Geitner, Clemens

    2016-06-01

    In the Alps, as well as in other mountain regions, steep grassland is frequently affected by shallow erosion. Often small landslides or snow movements displace the vegetation together with soil and/or unconsolidated material. This results in bare earth surface patches within the grass-covered slope. Close-range and remote sensing techniques are promising for both mapping and monitoring these eroded areas. This is essential for a better geomorphological process understanding, to assess past and recent developments, and to plan mitigation measures. Recent developments in image matching techniques make it feasible to produce high resolution orthophotos and digital elevation models from terrestrial oblique images. In this paper we propose to delineate the boundary of eroded areas for selected scenes of a study area, using close-range photogrammetric data. Striving for an efficient, objective and reproducible workflow for this task, we developed an approach for automated classification of the scenes into the classes grass and eroded. We propose an object-based image analysis (OBIA) workflow which consists of image segmentation and automated threshold selection for classification using the Excess Green Vegetation Index (ExG). The automated workflow is tested with ten different scenes. Compared to a manual classification, grass and eroded areas are classified with an overall accuracy between 90.7% and 95.5%, depending on the scene. The methods proved to be insensitive to differences in illumination of the scenes and greenness of the grass. The proposed workflow reduces user interaction and is transferable to other study areas. We conclude that close-range photogrammetry is a valuable low-cost tool for mapping this type of eroded area in the field with a high level of detail and quality. In the future, the output will be used as ground truth for an area-wide mapping of eroded areas in coarser resolution aerial orthophotos acquired at the same time.
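
    As a simplified illustration of the greenness-index thresholding step, the sketch below computes the Excess Green index and selects a threshold automatically with Otsu's method on a per-pixel basis; the published workflow applies this logic to image segments (OBIA) rather than single pixels, so this is an assumption-laden approximation, not the authors' implementation.

    ```python
    # Simplified pixel-wise sketch of the greenness thresholding step: compute the
    # Excess Green index (ExG = 2g - r - b on chromatic coordinates) and split
    # "grass" from "eroded" with an automatically selected (Otsu) threshold.
    # The actual workflow operates on image segments (OBIA), not single pixels.
    import numpy as np
    from skimage.filters import threshold_otsu

    def classify_grass_eroded(rgb):
        """rgb: float array (H, W, 3) scaled to 0..1. Returns boolean mask (True = grass)."""
        total = rgb.sum(axis=2) + 1e-6
        r, g, b = (rgb[..., i] / total for i in range(3))   # chromatic coordinates
        exg = 2 * g - r - b
        return exg > threshold_otsu(exg)

    rgb = np.random.rand(64, 64, 3)           # placeholder for an orthophoto tile
    mask = classify_grass_eroded(rgb)
    print("grass fraction:", mask.mean())
    ```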

  14. Amygdala activation as a marker for selective attention toward neutral faces in a chronic traumatic brain injury population.

    PubMed

    Young, Leanne R; Yu, Weikei; Holloway, Michael; Rodgers, Barry N; Chapman, Sandra B; Krawczyk, Daniel C

    2017-09-01

    There has been great interest in characterizing the response of the amygdala to emotional faces, especially in the context of social cognition. Although amygdala activation is most often associated with fearful or angry stimuli, there is considerable evidence that the response of the amygdala to neutral faces is both robust and reliable. This characteristic of amygdala function is of particular interest in the context of assessing populations with executive function deficits, such as traumatic brain injuries, which can be evaluated using fMRI attention modulation tasks that evaluate prefrontal control over representations, notably faces. The current study tested the hypothesis that the amygdala may serve as a marker of selective attention to neutral faces. Using fMRI, we gathered data within a chronic traumatic brain injury population. Blood Oxygenation Level Dependent (BOLD) signal change within the left and right amygdalae and fusiform face areas was measured while participants viewed neutral faces and scenes, under conditions requiring participants to (1) categorize pictures of faces and scenes, (2) selectively attend to either faces or scenes, or (3) attend to both faces and scenes. Findings revealed that the amygdala is an effective marker for selective attention to neutral faces and, furthermore, it was more face-specific than the fusiform face area. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Processed Thematic Mapper Satellite Imagery for Selected Areas within the U.S.-Mexico Borderlands

    USGS Publications Warehouse

    Dohrenwend, John C.; Gray, Floyd; Miller, Robert J.

    2000-01-01

    The study is summarized in the Adobe Acrobat Portable Document Format (PDF) file OF00-309.PDF. This publication also contains satellite full-scene images of selected areas along the U.S.-Mexico border. These images are presented as high-resolution images in jpeg format (IMAGES). The folder LOCATIONS contains TIFF images showing exact positions of easily-identified reference locations for each of the Landsat TM scenes located at least partly within the U.S. A reference location table (BDRLOCS.DOC in MS Word format) lists the latitude and longitude of each reference location with a nominal precision of 0.001 minute of arc.

  16. Category search speeds up face-selective fMRI responses in a non-hierarchical cortical face network.

    PubMed

    Jiang, Fang; Badler, Jeremy B; Righi, Giulia; Rossion, Bruno

    2015-05-01

    The human brain is extremely efficient at detecting faces in complex visual scenes, but the spatio-temporal dynamics of this remarkable ability, and how it is influenced by category-search, remain largely unknown. In the present study, human subjects were shown gradually-emerging images of faces or cars in visual scenes, while neural activity was recorded using functional magnetic resonance imaging (fMRI). Category search was manipulated by the instruction to indicate the presence of either a face or a car, in different blocks, as soon as an exemplar of the target category was detected in the visual scene. The category selectivity of most face-selective areas was enhanced when participants were instructed to report the presence of faces in gradually decreasing noise stimuli. Conversely, the same regions showed much less selectivity when participants were instructed instead to detect cars. When "face" was the target category, the fusiform face area (FFA) showed consistently earlier differentiation of face versus car stimuli than did the "occipital face area" (OFA). When "car" was the target category, only the FFA showed differentiation of face versus car stimuli. These observations provide further challenges for hierarchical models of cortical face processing and show that during gradual revealing of information, selective category-search may decrease the required amount of information, enhancing and speeding up category-selective responses in the human brain. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. How affective information from faces and scenes interacts in the brain

    PubMed Central

    Vandenbulcke, Mathieu; Sinke, Charlotte B. A.; Goebel, Rainer; de Gelder, Beatrice

    2014-01-01

    Facial expression perception can be influenced by the natural visual context in which the face is perceived. We performed an fMRI experiment presenting participants with fearful or neutral faces against threatening or neutral background scenes. Triangles and scrambled scenes served as control stimuli. The results showed that the valence of the background influences face selective activity in the right anterior parahippocampal place area (PPA) and subgenual anterior cingulate cortex (sgACC) with higher activation for neutral backgrounds compared to threatening backgrounds (controlled for isolated background effects) and that this effect correlated with trait empathy in the sgACC. In addition, the left fusiform gyrus (FG) responds to the affective congruence between face and background scene. The results show that valence of the background modulates face processing and support the hypothesis that empathic processing in sgACC is inhibited when affective information is present in the background. In addition, the findings reveal a pattern of complex scene perception showing a gradient of functional specialization along the posterior–anterior axis: from sensitivity to the affective content of scenes (extrastriate body area: EBA and posterior PPA), over scene emotion–face emotion interaction (left FG) via category–scene interaction (anterior PPA) to scene–category–personality interaction (sgACC). PMID:23956081

  18. Neural Codes for One's Own Position and Direction in a Real-World "Vista" Environment.

    PubMed

    Sulpizio, Valentina; Boccia, Maddalena; Guariglia, Cecilia; Galati, Gaspare

    2018-01-01

    Humans, like animals, rely on accurate knowledge of their spatial position and facing direction to keep orientated in the surrounding space. Although previous neuroimaging studies demonstrated that scene-selective regions (the parahippocampal place area or PPA, the occipital place area or OPA and the retrosplenial complex or RSC), and the hippocampus (HC) are implicated in coding position and facing direction within small- (room-sized) and large-scale navigational environments, little is known about how these regions represent these spatial quantities in a large open-field environment. Here, we used functional magnetic resonance imaging (fMRI) in humans to explore the neural codes of this navigationally relevant information while participants viewed images that varied in position and facing direction within a familiar, real-world circular square. We observed neural adaptation for repeated directions in the HC, even if no navigational task was required. Further, we found that the amount of knowledge of the environment interacts with the PPA selectivity in encoding positions: individuals who needed more time to memorize positions in the square during a preliminary training task showed less neural attenuation in this scene-selective region. We also observed adaptation effects, which reflect the real distances between consecutive positions, in scene-selective regions but not in the HC. When examining the multi-voxel patterns of activity, we observed that scene-responsive regions and the HC encoded both types of spatial information and that the RSC classification accuracy for positions was higher in individuals scoring higher on a self-report questionnaire of spatial abilities. Our findings provide new insight into how the human brain represents a real, large-scale "vista" space, demonstrating the presence of neural codes for position and direction in both scene-selective and hippocampal regions, and revealing the existence, in the former regions, of a map-like spatial representation reflecting real-world distance between consecutive positions.

  19. Atmospheric correction analysis on LANDSAT data over the Amazon region. [Manaus, Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Dias, L. A. V.; Dossantos, J. R.; Formaggio, A. R.

    1983-01-01

    The natural resources of the Amazon Region were studied in two ways and the results compared. A LANDSAT scene and its attributes were selected, and a maximum likelihood classification was made. The scene was then atmospherically corrected, taking into account Amazonian peculiarities revealed by ground truth data for the same area, and classified again. Comparison shows that the classification improves with the atmospherically corrected images.

  20. The use of an image registration technique in the urban growth monitoring

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Foresti, C.; Deoliveira, M. D. L. N.; Niero, M.; Parreira, E. M. D. M. F.

    1984-01-01

    The use of an image registration program in studies of urban growth is described. This program permits quick identification of growing areas by overlapping the same scene acquired in different periods and applying adequate filters. The city of Brasilia, Brazil, is selected as the test area. The dynamics of Brasilia's urban growth are analyzed by overlapping scenes dated June 1973, 1978 and 1983. The results demonstrate the usefulness of the image registration technique for monitoring dynamic urban growth.

  1. Global ensemble texture representations are critical to rapid scene perception.

    PubMed

    Brady, Timothy F; Shafer-Skelton, Anna; Alvarez, George A

    2017-06-01

    Traditionally, recognizing the objects within a scene has been treated as a prerequisite to recognizing the scene itself. However, research now suggests that the ability to rapidly recognize visual scenes could be supported by global properties of the scene itself rather than the objects within the scene. Here, we argue for a particular instantiation of this view: that scenes are recognized by treating them as a global texture and processing the pattern of orientations and spatial frequencies across different areas of the scene without recognizing any objects. To test this model, we asked whether there is a link between how proficient individuals are at rapid scene perception and how proficiently they represent simple spatial patterns of orientation information (global ensemble texture). We find a significant and selective correlation between these tasks, suggesting a link between scene perception and spatial ensemble tasks but not nonspatial summary statistics. In a second and third experiment, we additionally show that global ensemble texture information is not only associated with scene recognition, but that preserving only global ensemble texture information from scenes is sufficient to support rapid scene perception; however, preserving the same information is not sufficient for object recognition. Thus, global ensemble texture alone is sufficient to allow activation of scene representations but not object representations. Together, these results provide evidence for a view of scene recognition based on global ensemble texture rather than a view based purely on objects or on nonspatially localized global properties. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Spatial-area selective retrieval of multiple object-place associations in a hierarchical cognitive map formed by theta phase coding.

    PubMed

    Sato, Naoyuki; Yamaguchi, Yoko

    2009-06-01

    The human cognitive map is known to be hierarchically organized consisting of a set of perceptually clustered landmarks. Patient studies have demonstrated that these cognitive maps are maintained by the hippocampus, while the neural dynamics are still poorly understood. The authors have shown that the neural dynamic "theta phase precession" observed in the rodent hippocampus may be capable of forming hierarchical cognitive maps in humans. In the model, a visual input sequence consisting of object and scene features in the central and peripheral visual fields, respectively, results in the formation of a hierarchical cognitive map for object-place associations. Surprisingly, it is possible for such a complex memory structure to be formed in a few seconds. In this paper, we evaluate the memory retrieval of object-place associations in the hierarchical network formed by theta phase precession. The results show that multiple object-place associations can be retrieved with the initial cue of a scene input. Importantly, according to the wide-to-narrow unidirectional connections among scene units, the spatial area for object-place retrieval can be controlled by the spatial area of the initial cue input. These results indicate that the hierarchical cognitive maps have computational advantages on a spatial-area selective retrieval of multiple object-place associations. Theta phase precession dynamics is suggested as a fundamental neural mechanism of the human cognitive map.

  3. Conjoint representation of texture ensemble and location in the parahippocampal place area.

    PubMed

    Park, Jeongho; Park, Soojin

    2017-04-01

    Texture provides crucial information about the category or identity of a scene. Nonetheless, not much is known about how the texture information in a scene is represented in the brain. Previous studies have shown that the parahippocampal place area (PPA), a scene-selective part of visual cortex, responds to simple patches of texture ensemble. However, in natural scenes textures exist in spatial context within a scene. Here we tested two hypotheses that make different predictions on how textures within a scene context are represented in the PPA. The Texture-Only hypothesis suggests that the PPA represents texture ensemble (i.e., the kind of texture) as is, irrespective of its location in the scene. On the other hand, the Texture and Location hypothesis suggests that the PPA represents texture and its location within a scene (e.g., ceiling or wall) conjointly. We tested these two hypotheses across two experiments, using different but complementary methods. In experiment 1, by using multivoxel pattern analysis (MVPA) and representational similarity analysis, we found that the representational similarity of the PPA activation patterns was significantly explained by the Texture-Only hypothesis but not by the Texture and Location hypothesis. In experiment 2, using a repetition suppression paradigm, we found no repetition suppression for scenes that had the same texture ensemble but differed in location (supporting the Texture and Location hypothesis). On the basis of these results, we propose a framework that reconciles contrasting results from MVPA and repetition suppression and draw conclusions about how texture is represented in the PPA. NEW & NOTEWORTHY This study investigates how the parahippocampal place area (PPA) represents texture information within a scene context. We claim that texture is represented in the PPA at multiple levels: the texture ensemble information at the across-voxel level and the conjoint information of texture and its location at the within-voxel level. The study proposes a working hypothesis that reconciles contrasting results from multivoxel pattern analysis and repetition suppression, suggesting that the methods are complementary to each other but not necessarily interchangeable. Copyright © 2017 the American Physiological Society.
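
    The MVPA with representational similarity analysis used in experiment 1 can be sketched as a comparison between a neural representational dissimilarity matrix (RDM) computed from voxel patterns and model RDMs built from the two hypotheses. Everything in the example below is synthetic and illustrative; it is not the study's stimulus set or model construction.

    ```python
    # Hedged sketch of the representational-similarity logic: correlate a neural
    # RDM (from voxel patterns) with model RDMs built from the two hypotheses.
    # All matrices are synthetic stand-ins for the conditions in the study.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(2)
    patterns = rng.standard_normal((16, 300))          # 16 conditions x voxels
    neural_rdm = pdist(patterns, metric="correlation") # condition-by-condition dissimilarity

    texture_only_rdm = rng.random(neural_rdm.shape)      # model RDM: texture ensemble, ignore location
    texture_location_rdm = rng.random(neural_rdm.shape)  # model RDM: texture + location conjunction

    for name, model in [("Texture-Only", texture_only_rdm),
                        ("Texture+Location", texture_location_rdm)]:
        rho, _ = spearmanr(neural_rdm, model)
        print(f"{name}: Spearman rho = {rho:.3f}")
    ```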

  4. Feature diagnosticity and task context shape activity in human scene-selective cortex.

    PubMed

    Lowe, Matthew X; Gallivan, Jason P; Ferber, Susanne; Cant, Jonathan S

    2016-01-15

    Scenes are constructed from multiple visual features, yet previous research investigating scene processing has often focused on the contributions of single features in isolation. In the real world, features rarely exist independently of one another and likely converge to inform scene identity in unique ways. Here, we utilize fMRI and pattern classification techniques to examine the interactions between task context (i.e., attend to diagnostic global scene features; texture or layout) and high-level scene attributes (content and spatial boundary) to test the novel hypothesis that scene-selective cortex represents multiple visual features, the importance of which varies according to their diagnostic relevance across scene categories and task demands. Our results show for the first time that scene representations are driven by interactions between multiple visual features and high-level scene attributes. Specifically, univariate analysis of scene-selective cortex revealed that task context and feature diagnosticity shape activity differentially across scene categories. Examination using multivariate decoding methods revealed results consistent with univariate findings, but also evidence for an interaction between high-level scene attributes and diagnostic visual features within scene categories. Critically, these findings suggest visual feature representations are not distributed uniformly across scene categories but are shaped by task context and feature diagnosticity. Thus, we propose that scene-selective cortex constructs a flexible representation of the environment by integrating multiple diagnostically relevant visual features, the nature of which varies according to the particular scene being perceived and the goals of the observer. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. First experience with Remote Sensing methods and selected sensors in the monitoring of mining areas - a case study of the Belchatow open cast mine

    NASA Astrophysics Data System (ADS)

    Wajs, Jaroslaw

    2018-01-01

    The paper presents satellite imagery from active SENTINEL-1A and passive SENTINEL-2A/2B sensors for their application in the monitoring of mining areas focused on detecting land changes. Multispectral scenes of SENTINEL-2A/2B have allowed for detecting changes in land-cover near the region of interest (ROI), i.e. the Szczercow dumping site in the Belchatow open cast lignite mine, central Poland, Europe. Scenes from SENTINEL-1A/1B satellite have also been used in the research. Processing of the SLC signal enabled creating a return intensity map in VV polarization. The obtained SAR scene was reclassified and shows a strong return signal from the dumping site and the open pit. This fact may be used in detection and monitoring of changes occurring within the analysed engineering objects.

  6. Selective scene perception deficits in a case of topographical disorientation.

    PubMed

    Robin, Jessica; Lowe, Matthew X; Pishdadian, Sara; Rivest, Josée; Cant, Jonathan S; Moscovitch, Morris

    2017-07-01

    Topographical disorientation (TD) is a neuropsychological condition characterized by an inability to find one's way, even in familiar environments. One common contributing cause of TD is landmark agnosia, a visual recognition impairment specific to scenes and landmarks. Although many cases of TD with landmark agnosia have been documented, little is known about the perceptual mechanisms which lead to selective deficits in recognizing scenes. In the present study, we test LH, a man who exhibits TD and landmark agnosia, on measures of scene perception that require selectively attending to either the configural or surface properties of a scene. Compared to healthy controls, LH demonstrates perceptual impairments when attending to the configuration of a scene, but not when attending to its surface properties, such as the pattern of the walls or whether the ground is sand or grass. In contrast, when focusing on objects instead of scenes, LH demonstrates intact perception of both geometric and surface properties. This study demonstrates that in a case of TD and landmark agnosia, the perceptual impairments are selective to the layout of scenes, providing insight into the mechanism of landmark agnosia and scene-selective perceptual processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Research on three-dimensional visualization based on virtual reality and Internet

    NASA Astrophysics Data System (ADS)

    Wang, Zongmin; Yang, Haibo; Zhao, Hongling; Li, Jiren; Zhu, Qiang; Zhang, Xiaohong; Sun, Kai

    2007-06-01

    To disclose and display water information, a three-dimensional visualization system based on Virtual Reality (VR) and the Internet is investigated, both to demonstrate a "digital water conservancy" application and to support routine reservoir management. To explore and mine in-depth information, after building a high-resolution DEM of reliable quality, topographical analysis, visibility analysis and reservoir volume computation are studied. In addition, parameters including slope, water level and NDVI are selected to classify landslide-prone zones within the water-level-fluctuation zone of the reservoir area. To establish the virtual reservoir scene, two methods are used to provide immersion, interaction and imagination (3I). The first virtual scene contains more detailed textures to increase realism and runs on a graphical workstation with the virtual reality engine Open Scene Graph (OSG). The second virtual scene is intended for Internet users and uses fewer details to ensure fluent rendering speed.

  8. Visual encoding and fixation target selection in free viewing: presaccadic brain potentials

    PubMed Central

    Nikolaev, Andrey R.; Jurica, Peter; Nakatani, Chie; Plomp, Gijs; van Leeuwen, Cees

    2013-01-01

    In scrutinizing a scene, the eyes alternate between fixations and saccades. During a fixation, two component processes can be distinguished: visual encoding and selection of the next fixation target. We aimed to distinguish the neural correlates of these processes in the electrical brain activity prior to a saccade onset. Participants viewed color photographs of natural scenes, in preparation for a change detection task. Then, for each participant and each scene we computed an image heat map, with temperature representing the duration and density of fixations. The temperature difference between the start and end points of saccades was taken as a measure of the expected task-relevance of the information concentrated in specific regions of a scene. Visual encoding was evaluated according to whether subsequent change was correctly detected. Saccades with larger temperature difference were more likely to be followed by correct detection than ones with smaller temperature differences. The amplitude of presaccadic activity over anterior brain areas was larger for correct detection than for detection failure. This difference was observed for short “scrutinizing” but not for long “explorative” saccades, suggesting that presaccadic activity reflects top-down saccade guidance. Thus, successful encoding requires local scanning of scene regions which are expected to be task-relevant. Next, we evaluated fixation target selection. Saccades “moving up” in temperature were preceded by presaccadic activity of higher amplitude than those “moving down”. This finding suggests that presaccadic activity reflects attention deployed to the following fixation location. Our findings illustrate how presaccadic activity can elucidate concurrent brain processes related to the immediate goal of planning the next saccade and the larger-scale goal of constructing a robust representation of the visual scene. PMID:23818877

  9. Alterations in visual cortical activation and connectivity with prefrontal cortex during working memory updating in major depressive disorder.

    PubMed

    Le, Thang M; Borghi, John A; Kujawa, Autumn J; Klein, Daniel N; Leung, Hoi-Chung

    2017-01-01

    The present study examined the impacts of major depressive disorder (MDD) on visual and prefrontal cortical activity as well as their connectivity during visual working memory updating and related them to the core clinical features of the disorder. Impairment in working memory updating is typically associated with the retention of irrelevant negative information which can lead to persistent depressive mood and abnormal affect. However, performance deficits have been observed in MDD on tasks involving little or no demand on emotion processing, suggesting dysfunctions may also occur at the more basic level of information processing. Yet, it is unclear how various regions in the visual working memory circuit contribute to behavioral changes in MDD. We acquired functional magnetic resonance imaging data from 18 unmedicated participants with MDD and 21 age-matched healthy controls (CTL) while they performed a visual delayed recognition task with neutral faces and scenes as task stimuli. Selective working memory updating was manipulated by inserting a cue in the delay period to indicate which one or both of the two memorized stimuli (a face and a scene) would remain relevant for the recognition test. Our results revealed several key findings. Relative to the CTL group, the MDD group showed weaker postcue activations in visual association areas during selective maintenance of face and scene working memory. Across the MDD subjects, greater rumination and depressive symptoms were associated with more persistent activation and connectivity related to no-longer-relevant task information. Classification of postcue spatial activation patterns of the scene-related areas was also less consistent in the MDD subjects compared to the healthy controls. Such abnormalities appeared to result from a lack of updating effects in postcue functional connectivity between prefrontal and scene-related areas in the MDD group. In sum, disrupted working memory updating in MDD was revealed by alterations in activity patterns of the visual association areas, their connectivity with the prefrontal cortex, and their relationship with core clinical characteristics. These results highlight the role of information updating deficits in the cognitive control and symptomatology of depression.

  10. Space Shuttle Columbia views the world with imaging radar: The SIR-A experiment

    NASA Technical Reports Server (NTRS)

    Ford, J. P.; Cimino, J. B.; Elachi, C.

    1983-01-01

    Images acquired by the Shuttle Imaging Radar (SIR-A) in November 1981, demonstrate the capability of this microwave remote sensor system to perceive and map a wide range of different surface features around the Earth. A selection of 60 scenes displays this capability with respect to Earth resources - geology, hydrology, agriculture, forest cover, ocean surface features, and prominent man-made structures. The combined area covered by the scenes presented amounts to about 3% of the total acquired. Most of the SIR-A images are accompanied by a LANDSAT multispectral scanner (MSS) or SEASAT synthetic-aperture radar (SAR) image of the same scene for comparison. Differences between the SIR-A image and its companion LANDSAT or SEASAT image at each scene are related to the characteristics of the respective imaging systems, and to seasonal or other changes that occurred in the time interval between acquisition of the images.

  11. Top-down enhancement and suppression of activity in category-selective extrastriate cortex from an act of reflective attention.

    PubMed

    Johnson, Matthew R; Johnson, Marcia K

    2009-12-01

    Recent research has demonstrated top-down attentional modulation of activity in extrastriate category-selective visual areas while stimuli are in view (perceptual attention) and after they are removed from view (reflective attention). Perceptual attention is capable of both enhancing and suppressing activity in category-selective areas relative to a passive viewing baseline. In this study, we demonstrate that a brief, simple act of reflective attention ("refreshing") is also capable of both enhancing and suppressing activity in some scene-selective areas (the parahippocampal place area [PPA]) but not others (refreshing resulted in enhancement but not in suppression in the middle occipital gyrus [MOG]). This suggests that different category-selective extrastriate areas preferring the same class of stimuli may contribute differentially to reflective processing of one's internal representations of such stimuli.

  12. Adaptive attunement of selective covert attention to evolutionary-relevant emotional visual scenes.

    PubMed

    Fernández-Martín, Andrés; Gutiérrez-García, Aída; Capafons, Juan; Calvo, Manuel G

    2017-05-01

    We investigated selective attention to emotional scenes in peripheral vision, as a function of adaptive relevance of scene affective content for male and female observers. Pairs of emotional-neutral images appeared peripherally, with perceptual stimulus differences controlled, while viewers were fixating on a different stimulus in central vision. Early selective orienting was assessed by the probability of directing the first fixation towards either scene, and the time until first fixation. Emotional scenes selectively captured covert attention even when they were task-irrelevant, thus revealing involuntary, automatic processing. Sex of observers and specific emotional scene content (e.g., male-to-female-aggression, families and babies, etc.) interactively modulated covert attention, depending on adaptive priorities and goals for each sex, both for pleasant and unpleasant content. The attentional system exhibits domain-specific and sex-specific biases and attunements, probably rooted in evolutionary pressures to enhance reproductive and protective success. Emotional cues selectively capture covert attention based on their bio-social significance. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Scene segmentation by spike synchronization in reciprocally connected visual areas. I. Local effects of cortical feedback.

    PubMed

    Knoblauch, Andreas; Palm, Günther

    2002-09-01

    To investigate scene segmentation in the visual system we present a model of two reciprocally connected visual areas using spiking neurons. Area P corresponds to the orientation-selective subsystem of the primary visual cortex, while the central visual area C is modeled as associative memory representing stimulus objects according to Hebbian learning. Without feedback from area C, a single stimulus results in relatively slow and irregular activity, synchronized only for neighboring patches (slow state), while in the complete model activity is faster with an enlarged synchronization range (fast state). When presenting a superposition of several stimulus objects, scene segmentation happens on a time scale of hundreds of milliseconds by alternating epochs of the slow and fast states, where neurons representing the same object are simultaneously in the fast state. Correlation analysis reveals synchronization on different time scales as found in experiments (designated as tower, castle, and hill peaks). On the fast time scale (tower peaks, gamma frequency range), recordings from two sites coding either different or the same object lead to correlograms that are either flat or exhibit oscillatory modulations with a central peak. This is in agreement with experimental findings, whereas standard phase-coding models would predict shifted peaks in the case of different objects.

  14. Visual search in scenes involves selective and non-selective pathways

    PubMed Central

    Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R

    2010-01-01

    How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: A “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global / statistical information. PMID:21227734

  15. Feature Selection for Classification of Polar Regions Using a Fuzzy Expert System

    NASA Technical Reports Server (NTRS)

    Penaloza, Mauel A.; Welch, Ronald M.

    1996-01-01

    Labeling, feature selection, and the choice of classifier are critical elements for classification of scenes and for image understanding. This study examines several methods for feature selection in polar regions, including the use of a fuzzy logic-based expert system for further refinement of a set of selected features. Six Advanced Very High Resolution Radiometer (AVHRR) Local Area Coverage (LAC) arctic scenes are classified into nine classes: water, snow/ice, ice cloud, land, thin stratus, stratus over water, cumulus over water, textured snow over water, and snow-covered mountains. Sixty-seven spectral and textural features are computed and analyzed by the feature selection algorithms. The divergence, histogram analysis, and discriminant analysis approaches are intercompared for their effectiveness in feature selection. The fuzzy expert system method is used not only to determine the effectiveness of each approach in classifying polar scenes, but also to further reduce the features into a more optimal set. For each selection method, features are ranked from best to worst, and the best half of the features are selected. Then, rules using these selected features are defined. The results of running the fuzzy expert system with these rules show that the divergence method produces the best set of features: not only does it produce the highest classification accuracy, but it also has the lowest computation requirements. A reduction of the set of features produced by the divergence method using the fuzzy expert system results in an overall classification accuracy of over 95%. However, this increase in accuracy comes at a high computational cost.
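
    A rough sketch of the divergence-based feature ranking described above follows: each feature is scored by a symmetric Gaussian divergence between class distributions, summed over class pairs, and the best half is retained for the fuzzy rules. The divergence formula and the synthetic data are assumptions chosen for illustration, not the study's exact implementation.

    ```python
    # Rough sketch of divergence-based feature ranking: score each feature by the
    # symmetric KL divergence between per-class Gaussian fits, summed over class
    # pairs, then keep the best half. Synthetic data stand in for the 67 AVHRR
    # spectral/textural features and nine polar classes.
    import numpy as np
    from itertools import combinations

    def gaussian_divergence(x1, x2):
        m1, m2 = x1.mean(), x2.mean()
        v1, v2 = x1.var() + 1e-9, x2.var() + 1e-9
        return 0.5 * ((v1 / v2 + v2 / v1) + (m1 - m2) ** 2 * (1 / v1 + 1 / v2)) - 1.0

    rng = np.random.default_rng(3)
    n_classes, n_features = 9, 67
    data = {c: rng.standard_normal((100, n_features)) + c * rng.random(n_features)
            for c in range(n_classes)}

    scores = np.zeros(n_features)
    for a, b in combinations(range(n_classes), 2):
        for f in range(n_features):
            scores[f] += gaussian_divergence(data[a][:, f], data[b][:, f])

    best_half = np.argsort(scores)[::-1][: n_features // 2]   # features passed on to the fuzzy rules
    print("top features:", best_half[:5])
    ```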

  16. The Neural Dynamics of Attentional Selection in Natural Scenes.

    PubMed

    Kaiser, Daniel; Oosterhof, Nikolaas N; Peelen, Marius V

    2016-10-12

    The human visual system can only represent a small subset of the many objects present in cluttered scenes at any given time, such that objects compete for representation. Despite these processing limitations, the detection of object categories in cluttered natural scenes is remarkably rapid. How does the brain efficiently select goal-relevant objects from cluttered scenes? In the present study, we used multivariate decoding of magneto-encephalography (MEG) data to track the neural representation of within-scene objects as a function of top-down attentional set. Participants detected categorical targets (cars or people) in natural scenes. The presence of these categories within a scene was decoded from MEG sensor patterns by training linear classifiers on differentiating cars and people in isolation and testing these classifiers on scenes containing one of the two categories. The presence of a specific category in a scene could be reliably decoded from MEG response patterns as early as 160 ms, despite substantial scene clutter and variation in the visual appearance of each category. Strikingly, we find that these early categorical representations fully depend on the match between visual input and top-down attentional set: only objects that matched the current attentional set were processed to the category level within the first 200 ms after scene onset. A sensor-space searchlight analysis revealed that this early attention bias was localized to lateral occipitotemporal cortex, reflecting top-down modulation of visual processing. These results show that attention quickly resolves competition between objects in cluttered natural scenes, allowing for the rapid neural representation of goal-relevant objects. Efficient attentional selection is crucial in many everyday situations. For example, when driving a car, we need to quickly detect obstacles, such as pedestrians crossing the street, while ignoring irrelevant objects. How can humans efficiently perform such tasks, given the multitude of objects contained in real-world scenes? Here we used multivariate decoding of magnetoencephalogaphy data to characterize the neural underpinnings of attentional selection in natural scenes with high temporal precision. We show that brain activity quickly tracks the presence of objects in scenes, but crucially only for those objects that were immediately relevant for the participant. These results provide evidence for fast and efficient attentional selection that mediates the rapid detection of goal-relevant objects in real-world environments. Copyright © 2016 the authors 0270-6474/16/3610522-07$15.00/0.
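
    The cross-decoding scheme described here (train linear classifiers on isolated cars versus people, then test them on scenes containing one of the two categories, time point by time point) can be sketched as follows. Array shapes and the classifier choice are illustrative assumptions rather than details taken from the study.

    ```python
    # Conceptual sketch of the cross-decoding approach: train a linear classifier
    # on sensor patterns evoked by isolated cars vs. people, then test it on
    # patterns evoked by cluttered scenes, separately at each time point.
    # Data shapes and the classifier choice are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    n_trials, n_sensors, n_times = 200, 306, 120
    iso_X = rng.standard_normal((n_trials, n_sensors, n_times))    # isolated-object trials
    iso_y = rng.integers(0, 2, n_trials)                           # 0 = car, 1 = person
    scene_X = rng.standard_normal((n_trials, n_sensors, n_times))  # scene trials
    scene_y = rng.integers(0, 2, n_trials)                         # category present in the scene

    accuracy = np.empty(n_times)
    for t in range(n_times):
        clf = LogisticRegression(max_iter=1000).fit(iso_X[:, :, t], iso_y)
        accuracy[t] = clf.score(scene_X[:, :, t], scene_y)          # generalization to scenes
    print("peak cross-decoding accuracy:", accuracy.max())
    ```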

  17. Polarimetric Interferometry and Differential Interferometry

    DTIC Science & Technology

    2005-02-01

    example of the entropy or phase stability of a mixed scene, being the Oberpfaffenhofen area as collected by the DLR L-Band ESAR system. We note that...robust ratios of scattering elements as shown for example in table I. [10,11,12,13,14,15] The urban areas (upper right corner) in figure 2 show...height and biomass estimation, but there are many other application areas where this technology is being considered. Table I provides a selective

  18. The tongue of the ocean as a remote sensing ocean color calibration range

    NASA Technical Reports Server (NTRS)

    Strees, L. V.

    1972-01-01

    In general, terrestrial scenes remain stable in content from both temporal and spatial considerations. Ocean scenes, on the other hand, are constantly changing in content and position. The solar energy that enters the ocean waters undergoes a process of scattering and selective spectral absorption. Ocean scenes are thus characterized as low level radiance with the major portion of the energy in the blue region of the spectrum. Terrestrial scenes are typically of high level radiance with their spectral energies concentrated in the green-red regions of the visible spectrum. It appears that for the evaluation and calibration of ocean color remote sensing instrumentation, an ocean area whose optical ocean and atmospheric properties are known and remain seasonally stable over extended time periods is needed. The Tongue of the Ocean, a major submarine channel in the Bahama Banks, is one ocean area for which a large data base of oceanographic information and a limited amount of ocean optical data are available.

  19. Programmable hyperspectral image mapper with on-array processing

    NASA Technical Reports Server (NTRS)

    Cutts, James A. (Inventor)

    1995-01-01

    A hyperspectral imager includes a focal plane having an array of spaced image recording pixels receiving light from a scene moving relative to the focal plane in a longitudinal direction, the recording pixels being transportable at a controllable rate in the focal plane in the longitudinal direction, an electronic shutter for adjusting an exposure time of the focal plane, whereby recording pixels in an active area of the focal plane are removed therefrom and stored upon expiration of the exposure time, an electronic spectral filter for selecting a spectral band of light received by the focal plane from the scene during each exposure time and an electronic controller connected to the focal plane, to the electronic shutter and to the electronic spectral filter for controlling (1) the controllable rate at which the recording pixels are transported in the longitudinal direction, (2) the exposure time, and (3) the spectral band so as to record a selected portion of the scene through M spectral bands with a respective exposure time t(sub q) for each respective spectral band q.
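
    The acquisition sequence described in this patent abstract (select a spectral band q, expose for t_q while the recording pixels track the moving scene, then read out the active area) can be illustrated with a toy simulation. Everything here is an assumption for illustration: the array shapes, the row-offset stand-in for pixel transport, and the function name are hypothetical and do not model a real instrument.

```python
import numpy as np

def simulate_hyperspectral_mapping(scene_cube, exposure_times, rows_per_exposure=4):
    """Toy simulation of the per-band acquisition loop: for each spectral band q
    the controller selects the band, exposes for t_q, and reads out the active
    area while the recording pixels advance along the scene (here, a row offset)."""
    n_rows, n_cols, n_bands = scene_cube.shape
    frames, row = [], 0
    for q, t_q in enumerate(exposure_times):            # one exposure per band q
        stop = min(row + rows_per_exposure, n_rows)
        # exposure ~ scene radiance in band q integrated over the exposure time t_q
        frames.append(scene_cube[row:stop, :, q] * t_q)
        row = stop                                       # pixels transported onward
    return frames                                        # one band image per exposure

scene = np.random.rand(64, 32, 8)                        # placeholder scene radiance cube
band_images = simulate_hyperspectral_mapping(
    scene, exposure_times=[0.01 * (q + 1) for q in range(8)])
```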

  20. Big Sky and Greenhorn Drilling Area on Mount Sharp

    NASA Image and Video Library

    2015-12-17

    This view from the Mast Camera (Mastcam) on NASA's Curiosity Mars rover covers an area in "Bridger Basin" that includes the locations where the rover drilled a target called "Big Sky" on the mission's Sol 1119 (Sept. 29, 2015) and a target called "Greenhorn" on Sol 1137 (Oct. 18, 2015). The scene combines portions of several observations taken from sols 1112 to 1126 (Sept. 22 to Oct. 6, 2015) while Curiosity was stationed at Big Sky drilling site. The Big Sky drill hole is visible in the lower part of the scene. The Greenhorn target, in a pale fracture zone near the center of the image, had not yet been drilled when the component images were taken. Researchers selected this pair of drilling sites to investigate the nature of silica enrichment in the fracture zones of the area. http://photojournal.jpl.nasa.gov/catalog/PIA20270

  1. Electrophysiological revelations of trial history effects in a color oddball search task.

    PubMed

    Shin, Eunsam; Chong, Sang Chul

    2016-12-01

    In visual oddball search tasks, viewing a no-target scene (i.e., no-target selection trial) leads to the facilitation or delay of the search time for a target in a subsequent trial. Presumably, this selection failure leads to biasing attentional set and prioritizing stimulus features unseen in the no-target scene. We observed attention-related ERP components and tracked the course of attentional biasing as a function of trial history. Participants were instructed to identify color oddballs (i.e., targets) shown in varied trial sequences. The number of no-target scenes preceding a target scene was increased from zero to two to reinforce attentional biasing, and colors presented in two successive no-target scenes were repeated or changed to systematically bias attention to specific colors. For the no-target scenes, the presentation of a second no-target scene resulted in an early selection of, and sustained attention to, the changed colors (mirrored in the frontal selection positivity, the anterior N2, and the P3b). For the target scenes, the N2pc indicated an earlier allocation of attention to the targets with unseen or remotely seen colors. Inhibitory control of attention, shown in the anterior N2, was greatest when the target scene was followed by repeated no-target scenes with repeated colors. Finally, search times and the P3b were influenced by both color previewing and its history. The current results demonstrate that attentional biasing can occur on a trial-by-trial basis and be influenced by both feature previewing and its history. © 2016 Society for Psychophysiological Research.

  2. Deconstructing Visual Scenes in Cortex: Gradients of Object and Spatial Layout Information

    PubMed Central

    Kravitz, Dwight J.; Baker, Chris I.

    2013-01-01

    Real-world visual scenes are complex, cluttered, and heterogeneous stimuli engaging scene- and object-selective cortical regions including parahippocampal place area (PPA), retrosplenial complex (RSC), and lateral occipital complex (LOC). To understand the unique contribution of each region to distributed scene representations, we generated predictions based on a neuroanatomical framework adapted from monkey and tested them using minimal scenes in which we independently manipulated both spatial layout (open, closed, and gradient) and object content (furniture, e.g., bed, dresser). Commensurate with its strong connectivity with posterior parietal cortex, RSC evidenced strong spatial layout information but no object information, and its response was not even modulated by object presence. In contrast, LOC, which lies within the ventral visual pathway, contained strong object information but no background information. Finally, PPA, which is connected with both the dorsal and the ventral visual pathway, showed information about both objects and spatial backgrounds and was sensitive to the presence or absence of either. These results suggest that 1) LOC, PPA, and RSC have distinct representations, emphasizing different aspects of scenes, 2) the specific representations in each region are predictable from their patterns of connectivity, and 3) PPA combines both spatial layout and object information as predicted by connectivity. PMID:22473894

  3. Identification, definition and mapping of terrestrial ecosystems in interior Alaska

    NASA Technical Reports Server (NTRS)

    Anderson, J. H. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Two new, as yet unfinished vegetation maps are presented. These tend further to substantiate the belief that ERTS-1 imagery is a valuable mapping tool. Newly selected scenes show that vegetation interpretations can be refined through use of non-growing season imagery, particularly through the different spectral characteristics of vegetation lacking foliage and through the effect of vegetation structure on apparent snow cover. Scenes now are available for all test areas north of the Alaska Range except Mt. McKinley National Park. No support was obtained for the hypothesis that similar interband ratios, from two areas apparently different spectrally because of different sun angles, would indicate similar surface features. However, attempts to test this hypothesis have so far been casual.

  4. Domain Adaptation for Pedestrian Detection Based on Prediction Consistency

    PubMed Central

    Huan-ling, Tang; Zhi-yong, An

    2014-01-01

    Pedestrian detection is an active area of research in computer vision. It remains quite a challenging problem in many applications, where many factors cause a mismatch between the source dataset used to train the pedestrian detector and the samples in the target scene. In this paper, we propose a novel domain adaptation model for merging plentiful source domain samples with scarce target domain samples to create a scene-specific pedestrian detector that performs as well as if rich target domain samples were present. Our approach combines a boosting-based learning algorithm with an entropy-based transferability measure, derived from the prediction consistency with the source classifications, to selectively choose source domain samples showing positive transferability to the target domain. Experimental results show that our approach can improve the detection rate, especially when labeled data in the target scene are insufficient. PMID:25013850
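
    The sample-selection idea can be sketched roughly as follows. This is not the authors' algorithm (which weaves the entropy-based transferability into a boosting-based learner); it is one plausible reading of the selection step, with placeholder data, an assumed entropy threshold, and scikit-learn gradient boosting standing in for the boosted detector.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Placeholder data: plentiful source-domain samples, scarce labelled target-domain samples.
rng = np.random.default_rng(2)
X_src, y_src = rng.normal(size=(2000, 20)), rng.integers(0, 2, 2000)   # generic pedestrian data
X_tgt, y_tgt = rng.normal(size=(150, 20)), rng.integers(0, 2, 150)     # scene-specific data

# Step 1: a weak scene-specific model fit on the scarce target samples.
target_clf = GradientBoostingClassifier(n_estimators=50).fit(X_tgt, y_tgt)

# Step 2: entropy-based transferability proxy -- keep source samples whose target-model
# prediction agrees confidently (low normalized entropy) with their source label.
proba = target_clf.predict_proba(X_src)
entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1) / np.log(proba.shape[1])
transferable = (target_clf.predict(X_src) == y_src) & (entropy < 0.5)

# Step 3: retrain a boosted detector on the merged, filtered sample set.
scene_detector = GradientBoostingClassifier(n_estimators=200).fit(
    np.vstack([X_src[transferable], X_tgt]),
    np.concatenate([y_src[transferable], y_tgt]))
```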

  5. There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task

    PubMed Central

    Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel

    2016-01-01

    When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
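
    The core normalization step can be written compactly: a template-weighted feature response divided by locally pooled activity. The sketch below is a simplified stand-in for the full model, with placeholder feature maps, an assumed target template, and arbitrary pooling and semisaturation parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def priority_map(feature_maps, target_template, sigma=1.0, pool_size=7):
    """Simplified normalization-model priority map.
    feature_maps    : (n_features, H, W) responses of a shape-selective retinotopic layer
    target_template : (n_features,) top-down, target-specific gains
    The map is the template-weighted response divided by locally pooled activity
    (divisive inhibition), so salient clutter cannot monopolize attention."""
    weighted = np.tensordot(target_template, feature_maps, axes=(0, 0))   # (H, W)
    pooled = uniform_filter(feature_maps.sum(axis=0), size=pool_size)     # local normalization pool
    prio = weighted / (sigma + pooled)
    return prio, np.unravel_index(np.argmax(prio), prio.shape)            # map + next fixation locus

fmaps = np.random.rand(32, 64, 64)   # placeholder feature responses
template = np.random.rand(32)        # placeholder target-defining weights
prio, locus = priority_map(fmaps, template)
```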

  6. A Psychoevolutionary Approach to Identifying Preferred Nature Scenes With Potential to Provide Restoration From Stress.

    PubMed

    Thake, Carol L; Bambling, Matthew; Edirippulige, Sisira; Marx, Eric

    2017-10-01

    Research supports therapeutic use of nature scenes in healthcare settings, particularly to reduce stress. However, limited literature is available to provide a cohesive guide for selecting scenes that may provide optimal therapeutic effect. This study produced and tested a replicable process for selecting nature scenes with therapeutic potential. Psychoevolutionary theory informed the construction of the Importance for Survival Scale (IFSS), and its usefulness for identifying scenes that people generally prefer to view and that hold potential to reduce stress was tested. Relationships between Importance for Survival (IFS), preference, and restoration were tested. General community participants (N = 20 males, 20 females; M age = 48 years) Q-sorted sets of landscape photographs (preranked by the researcher in terms of IFS using the IFSS) from most to least preferred, and then completed the Short-Version Revised Restoration Scale in response to viewing a selection of the scenes. Results showed significant positive relationships between IFS and both scene preference (large effect) and restoration potential (medium effect), as well as between scene preference and restoration potential across the levels of IFS (medium effect), and for individual participants and scenes (large effect). IFS was supported as a framework for identifying nature scenes that people will generally prefer to view and that hold potential for restoration from emotional distress; however, greater therapeutic potential may be expected when people can choose which of the scenes they would prefer to view. Evidence for the effectiveness of the IFSS was produced.

  7. Constructing, Perceiving, and Maintaining Scenes: Hippocampal Activity and Connectivity

    PubMed Central

    Zeidman, Peter; Mullally, Sinéad L.; Maguire, Eleanor A.

    2015-01-01

    In recent years, evidence has accumulated to suggest the hippocampus plays a role beyond memory. A strong hippocampal response to scenes has been noted, and patients with bilateral hippocampal damage cannot vividly recall scenes from their past or construct scenes in their imagination. There is debate about whether the hippocampus is involved in the online processing of scenes independent of memory. Here, we investigated the hippocampal response to visually perceiving scenes, constructing scenes in the imagination, and maintaining scenes in working memory. We found extensive hippocampal activation for perceiving scenes, and a circumscribed area of anterior medial hippocampus common to perception and construction. There was significantly less hippocampal activity for maintaining scenes in working memory. We also explored the functional connectivity of the anterior medial hippocampus and found significantly stronger connectivity with a distributed set of brain areas during scene construction compared with scene perception. These results increase our knowledge of the hippocampus by identifying a subregion commonly engaged by scenes, whether perceived or constructed, by separating scene construction from working memory, and by revealing the functional network underlying scene construction, offering new insights into why patients with hippocampal lesions cannot construct scenes. PMID:25405941

  8. Analysis of Urban Terrain Data for Use in the Development of an Urban Camouflage Pattern

    DTIC Science & Technology

    1990-02-01

    the entire lightness gamut, but concentrated in the red, orange, yellow and neutral regions of color space. 20. DISTRIBUTION / AVAILABILITY OF...elements grouped by color.) Summary of Scenes Filmed for Urban Camouflage Study. Optimum Number of Domains Separated by Type; Selected CIELAB ...Values for All Urban Scenes. Selected CIELAB Values for Type I Urban Scenes. Selected CIELAB Values for Type II Urban Scenes.

  9. Bag of Visual Words Model with Deep Spatial Features for Geographical Scene Classification

    PubMed Central

    Wu, Lin

    2017-01-01

    With the popular use of geotagged images, more and more research effort has been devoted to geographical scene classification. In geographical scene classification, valid spatial feature selection can significantly boost the final performance. Bag of visual words (BoVW) can do well at selecting features for geographical scene classification; nevertheless, it works effectively only if the provided feature extractor is well matched. In this paper, we use convolutional neural networks (CNNs) to optimize the proposed feature extractor, so that it can learn more suitable visual vocabularies from the geotagged images. Our approach achieves better performance than BoVW as a tool for geographical scene classification on three datasets that contain a variety of scene categories. PMID:28706534
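
    A minimal BoVW pipeline of the kind described above might look as follows. The patch descriptors here are random placeholders standing in for CNN activations (loading an actual network is out of scope), and the vocabulary size and array shapes are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def bovw_encode(patch_descriptors, kmeans):
    """Histogram of visual-word assignments for one image's patch descriptors."""
    words = kmeans.predict(patch_descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Placeholder "deep spatial features": n_images x n_patches x descriptor_dim.
# In the paper these would come from CNN activations on image patches.
rng = np.random.default_rng(3)
descriptors = rng.normal(size=(40, 100, 128))

# Build the visual vocabulary from all patch descriptors, then encode each image.
kmeans = KMeans(n_clusters=64, n_init=4, random_state=0).fit(descriptors.reshape(-1, 128))
image_histograms = np.stack([bovw_encode(d, kmeans) for d in descriptors])
# 'image_histograms' can now be fed to any standard classifier over scene categories.
```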

  10. Locus Coeruleus Activity Strengthens Prioritized Memories Under Arousal.

    PubMed

    Clewett, David V; Huang, Ringo; Velasco, Rico; Lee, Tae-Ho; Mather, Mara

    2018-02-07

    Recent models posit that bursts of locus ceruleus (LC) activity amplify neural gain such that limited attention and encoding resources focus even more on prioritized mental representations under arousal. Here, we tested this hypothesis in human males and females using fMRI, neuromelanin MRI, and pupil dilation, a biomarker of arousal and LC activity. During scanning, participants performed a monetary incentive encoding task in which threat of punishment motivated them to prioritize encoding of scene images over superimposed objects. Threat of punishment elicited arousal and selectively enhanced memory for goal-relevant scenes. Furthermore, trial-level pupil dilations predicted better scene memory under threat, but were not related to object memory outcomes. fMRI analyses revealed that greater threat-evoked pupil dilations were positively associated with greater scene encoding activity in LC and parahippocampal cortex, a region specialized to process scene information. Across participants, this pattern of LC engagement for goal-relevant encoding was correlated with neuromelanin signal intensity, providing the first evidence that LC structure relates to its activation pattern during cognitive processing. Threat also reduced dynamic functional connectivity between high-priority (parahippocampal place area) and lower-priority (lateral occipital cortex) category-selective visual cortex in ways that predicted increased memory selectivity. Together, these findings support the idea that, under arousal, LC activity selectively strengthens prioritized memory representations by modulating local and functional network-level patterns of information processing. SIGNIFICANCE STATEMENT Adaptive behavior relies on the ability to select and store important information amid distraction. Prioritizing encoding of task-relevant inputs is especially critical in threatening or arousing situations, when forming these memories is essential for avoiding danger in the future. However, little is known about the arousal mechanisms that support such memory selectivity. Using fMRI, neuromelanin MRI, and pupil measures, we demonstrate that locus ceruleus (LC) activity amplifies neural gain such that limited encoding resources focus even more on prioritized mental representations under arousal. For the first time, we also show that LC structure relates to its involvement in threat-related encoding processes. These results shed new light on the brain mechanisms by which we process important information when it is most needed. Copyright © 2018 the authors 0270-6474/18/381558-17$15.00/0.

  11. Using selected scenes from Brazilian films to teach about substance use disorders, within medical education.

    PubMed

    Castaldelli-Maia, João Mauricio; Oliveira, Hercílio Pereira; Andrade, Arthur Guerra; Lotufo-Neto, Francisco; Bhugra, Dinesh

    2012-01-01

    Themes like alcohol and drug abuse, relationship difficulties, psychoses, autism and personality dissociation disorders have been widely used in films. Psychiatry and psychiatric conditions in various cultural settings are increasingly taught using films. Many articles on cinema and psychiatry have been published but none have presented any methodology on how to select material. Here, the authors look at the portrayal of abusive use of alcohol and drugs during the Brazilian cinema revival period (1994 to 2008). Qualitative study at two universities in the state of São Paulo. Scenes were selected from films available at rental stores and were analyzed using a specifically designed protocol. We assessed how realistic these scenes were and their applicability for teaching. One author selected 70 scenes from 50 films (graded for realism and teaching applicability > 8). These were then rated by another two judges. Rating differences among the three judges were assessed using nonparametric tests (P < 0.001). Scenes with high scores (> 8) were defined as "quality scenes". Thirty-nine scenes from 27 films were identified as "quality scenes". Alcohol, cannabis, cocaine, hallucinogens and inhalants were included in these. Signs and symptoms of intoxication, abusive/harmful use and dependence were shown. We have produced rich teaching material for discussing psychopathology relating to alcohol and drug use that can be used both at undergraduate and at postgraduate level. Moreover, it could be seen that certain drug use behavioral patterns are deeply rooted in some Brazilian films and groups.

  12. A COMPARISON OF INTER-ANALYST DIFFERENCES IN THE CLASSIFICATION OF A LANDSAT ETM+ SCENE IN SOUTH-CENTRAL VIRGINIA

    EPA Science Inventory

    This study examined inter-analyst classification variability based on training site signature selection only for six classifications from a 10 km2 Landsat ETM+ image centered over a highly heterogeneous area in south-central Virginia. Six analysts classified the image...

  13. Neural representations of contextual guidance in visual search of real-world scenes.

    PubMed

    Preston, Tim J; Guo, Fei; Das, Koel; Giesbrecht, Barry; Eckstein, Miguel P

    2013-05-01

    Exploiting scene context and object-object co-occurrence is critical in guiding eye movements and facilitating visual search, yet the mediating neural mechanisms are unknown. We used functional magnetic resonance imaging while observers searched for target objects in scenes and used multivariate pattern analyses (MVPA) to show that the lateral occipital complex (LOC) can predict the coarse spatial location of observers' expectations about the likely location of 213 different targets absent from the scenes. In addition, we found weaker but significant representations of context location in an area related to the orienting of attention (intraparietal sulcus, IPS) as well as a region related to scene processing (retrosplenial cortex, RSC). Importantly, the degree of agreement among 100 independent raters about the likely location to contain a target object in a scene correlated with LOC's ability to predict the contextual location while weaker but significant effects were found in IPS, RSC, the human motion area, and early visual areas (V1, V3v). When contextual information was made irrelevant to observers' behavioral task, the MVPA analysis of LOC and the other areas' activity ceased to predict the location of context. Thus, our findings suggest that the likely locations of targets in scenes are represented in various visual areas with LOC playing a key role in contextual guidance during visual search of objects in real scenes.

  14. Computational mechanisms underlying cortical responses to the affordance properties of visual scenes

    PubMed Central

    Epstein, Russell A.

    2018-01-01

    Biologically inspired deep convolutional neural networks (CNNs), trained for computer vision tasks, have been found to predict cortical responses with remarkable accuracy. However, the internal operations of these models remain poorly understood, and the factors that account for their success are unknown. Here we develop a set of techniques for using CNNs to gain insights into the computational mechanisms underlying cortical responses. We focused on responses in the occipital place area (OPA), a scene-selective region of dorsal occipitoparietal cortex. In a previous study, we showed that fMRI activation patterns in the OPA contain information about the navigational affordances of scenes; that is, information about where one can and cannot move within the immediate environment. We hypothesized that this affordance information could be extracted using a set of purely feedforward computations. To test this idea, we examined a deep CNN with a feedforward architecture that had been previously trained for scene classification. We found that responses in the CNN to scene images were highly predictive of fMRI responses in the OPA. Moreover, the CNN accounted for the portion of OPA variance relating to the navigational affordances of scenes. The CNN could thus serve as an image-computable candidate model of affordance-related responses in the OPA. We then ran a series of in silico experiments on this model to gain insights into its internal operations. These analyses showed that the computation of affordance-related features relied heavily on visual information at high spatial frequencies and cardinal orientations, both of which have previously been identified as low-level stimulus preferences of scene-selective visual cortex. These computations also exhibited a strong preference for information in the lower visual field, which is consistent with known retinotopic biases in the OPA. Visualizations of feature selectivity within the CNN suggested that affordance-based responses encoded features that define the layout of the spatial environment, such as boundary-defining junctions and large extended surfaces. Together, these results map the sensory functions of the OPA onto a fully quantitative model that provides insights into its visual computations. More broadly, they advance integrative techniques for understanding visual cortex across multiple levels of analysis: from the identification of cortical sensory functions to the modeling of their underlying algorithms. PMID:29684011
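
    The encoding-model logic (CNN features per image, one regularized linear fit per voxel, evaluation on held-out images) can be sketched as below. The CNN activations and voxel responses are random placeholders, and ridge regression is used as a generic stand-in for the paper's fitting procedure.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Placeholder design: CNN-layer activations per scene image and per-voxel fMRI responses.
rng = np.random.default_rng(4)
n_images, n_cnn_features, n_voxels = 400, 512, 50
cnn_features = rng.normal(size=(n_images, n_cnn_features))
voxel_responses = rng.normal(size=(n_images, n_voxels))

X_train, X_test, Y_train, Y_test = train_test_split(
    cnn_features, voxel_responses, test_size=0.25, random_state=0)

# Fit regularized linear encoding models (one weight vector per voxel),
# then score prediction accuracy on held-out images.
encoder = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_train, Y_train)
pred = encoder.predict(X_test)
voxel_r = np.array([np.corrcoef(pred[:, v], Y_test[:, v])[0, 1] for v in range(n_voxels)])
# Voxels with high held-out correlation are those whose responses the CNN features explain.
```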

  15. Mining Very High Resolution INSAR Data Based On Complex-GMRF Cues And Relevance Feedback

    NASA Astrophysics Data System (ADS)

    Singh, Jagmal; Popescu, Anca; Soccorsi, Matteo; Datcu, Mihai

    2012-01-01

    With the increase in the number of remote sensing satellites, the number of image-data scenes in our repositories is also increasing, and a large quantity of these scenes is never retrieved and used. Thus automatic retrieval of desired image-data using query by image content, to fully utilize the huge repository volume, is becoming of great interest. Generally, different users are interested in scenes containing different kinds of objects and structures, so it is important to analyze all the image information mining (IIM) methods so that it is easier for a user to select a method depending upon his or her requirements. We concentrate our study only on high-resolution SAR images, and we propose to use InSAR observations instead of single look complex (SLC) images alone for mining scenes containing coherent objects such as high-rise buildings. However, in the case of objects with less coherence, such as areas with vegetation cover, SLC images exhibit better performance. We demonstrate an IIM performance comparison using complex Gauss-Markov Random Fields as a texture descriptor for image patches and SVM relevance feedback.
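
    The relevance-feedback loop can be sketched generically as follows. The complex-GMRF texture parameters are replaced by placeholder patch descriptors, user feedback is simulated, and the SVM settings are assumptions; this is not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
patch_features = rng.normal(size=(1000, 16))   # stand-in for complex-GMRF texture parameters
labels = np.full(1000, -1)                     # -1 = unlabelled patch

# Seed the search with a few user-labelled patches (1 = relevant, 0 = irrelevant).
labels[:10], labels[10:20] = 1, 0

for iteration in range(3):                     # a few relevance-feedback rounds
    known = labels >= 0
    clf = SVC(probability=True).fit(patch_features[known], labels[known])
    scores = clf.predict_proba(patch_features)[:, 1]
    # Ask the "user" about the most ambiguous unlabelled patches (scores near 0.5).
    candidates = np.argsort(np.abs(scores - 0.5))
    to_label = [i for i in candidates if labels[i] < 0][:5]
    labels[to_label] = rng.integers(0, 2, len(to_label))   # simulated user feedback

ranked_patches = np.argsort(scores)[::-1]      # final retrieval ranking for the query
```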

  16. Technological Areas to Improve Soldier Decisiveness: Insights From the Soldier-System Design Perspective

    DTIC Science & Technology

    2012-03-01

    learning state of the Soldier (e.g., frustrated, confused, engaged), to select the best learning strategies (e.g., feedback, reflection, hints), and...targeted to areas of weakness. This training can be enhanced by the use of “intelligent” agents to perceive learner attributes (e.g., competence...auditory scene would be made, and outlying objects and sounds, or missing activity, could be automatically identified and displayed aurally or visually

  17. Aeronautical Knowledge (Selected Articles),

    DTIC Science & Technology

    1983-04-11

    distribution unlimited. THIS TRANSLATION IS A RENDITION OF THE ORIGINAL FOREIGN TEXT WITHOUT ANY ANALYTICAL OR EDITORIAL COMMENT. STATEMENTS OR THEORIES...An operator busily touched a row of milky white switches on a computer. Groups of vermilion number codes incessantly flickered on a light blue display...On a Surface Observation Ship in the Launch Sea Area Blue sky and azure sea with light breeze and small waves were scenes of the launch sea area. In

  18. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain.

    PubMed

    Groen, Iris I A; Silson, Edward H; Baker, Chris I

    2017-02-19

    Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).

  19. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain

    PubMed Central

    2017-01-01

    Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044013

  20. Specific and nonspecific neural activity during selective processing of visual representations in working memory.

    PubMed

    Oh, Hwamee; Leung, Hoi-Chung

    2010-02-01

    In this fMRI study, we investigated prefrontal cortex (PFC) and visual association regions during selective information processing. We recorded behavioral responses and neural activity during a delayed recognition task with a cue presented during the delay period. A specific cue ("Face" or "Scene") was used to indicate which one of the two initially viewed pictures of a face and a scene would be tested at the end of a trial, whereas a nonspecific cue ("Both") was used as control. As expected, the specific cues facilitated behavioral performance (faster response times) compared to the nonspecific cue. A postexperiment memory test showed that the items cued to remember were better recognized than those not cued. The fMRI results showed largely overlapped activations across the three cue conditions in dorsolateral and ventrolateral PFC, dorsomedial PFC, posterior parietal cortex, ventral occipito-temporal cortex, dorsal striatum, and pulvinar nucleus. Among those regions, dorsomedial PFC and inferior occipital gyrus remained active during the entire postcue delay period. Differential activity was mainly found in the association cortices. In particular, the parahippocampal area and posterior superior parietal lobe showed significantly enhanced activity during the postcue period of the scene condition relative to the Face and Both conditions. No regions showed differentially greater responses to the face cue. Our findings suggest that a better representation of visual information in working memory may depend on enhancing the more specialized visual association areas or their interaction with PFC.

  1. Study of LANDSAT-D thematic mapper performance as applied to hydrocarbon exploration. [Southern Ontario, Lawton, Oklahoma; Owl Creek, Wyoming; Washington, D.C.; and Death Valley California

    NASA Technical Reports Server (NTRS)

    Everett, J. R. (Principal Investigator)

    1983-01-01

    Improved delineation of known oil and gas fields in southern Ontario and a spectacularly high amount of structural information on the Owl Creek, Wyoming scene were obtained from analysis of TM data. The use of hue, saturation, and value image processing techniques on a Death Valley, California scene permitted direct comparison of TM processed imagery with existing 1:250,000 scale geological maps of the area and revealed small outcrops of Tertiary volcanic material overlying Paleozoic sections. Analysis of TM data over Lawton, Oklahoma suggests that the reducing chemical environment associated with hydrocarbon seepage changes ferric iron to soluble ferrous iron, allowing it to be leached. Results of the band selection algorithm show a surprising consistency, with the 1,4,5 combination selected as optimal in most cases.

  2. Guilty by his fibers: suspect confession versus textile fibers reconstructed simulation.

    PubMed

    Suzuki, Shinichi; Higashikawa, Yoshiyasu; Sugita, Ritsuko; Suzuki, Yasuhiro

    2009-08-10

    In one particular criminal case involving murder and theft, the arrested suspect admitted to the theft, but denied responsibility for the murder of the inhabitant of the crime scene. In his confession, the suspect stated that he found the victim's body when he broke into the crime scene to commit theft. For this report, the actual crime scene was reconstructed in accordance with the confession obtained during the interrogation of the suspect, and the suspect's behavior was simulated according to that confession. The number of characteristic fibers retrieved from the simulated crime scene was compared with the number retrieved from the actual crime scene. By comparing the distribution and number of characteristic fibers collected in the simulation experiments and the actual investigation, the reliability of the suspect's confession was evaluated. The characteristic dark yellowish-green woolen fibers of the garment that the suspect wore when he entered the crime scene were selected as the target fiber in the reconstruction. The experimental simulations were conducted four times. The distributed target fibers were retrieved using the same type of adhesive tape and the same protocol by the same police officers who conducted the retrieval of the fibers at the actual crime scene. The fibers were identified both through morphological observation and by color comparisons of their ultraviolet-visible transmittance spectra measured with a microspectrophotometer. The fibers collected with the adhesive tape were counted for each area to compare with those collected in the actual crime scene investigation. The numbers of fibers found at each area of the body, mattress and blankets were compared between the simulated experiments and the actual investigation, and a significant difference was found. In particular, the numbers of fibers found near the victim's head were significantly different. As a result, the suspect's confession was not considered to be reliable, as a stronger contact with the victim was demonstrated by our simulations. During the control trial, traditional forensic traces like DNA or fingerprints were mute regarding the suspect's statements. In contrast, the fiber intelligence was highly significant in explaining the suspect's behavior at the crime scene. The fiber results and simulations were presented in court, and the man was subsequently found guilty not only of theft and trespassing but also of murder.

  3. Anisotropic scene geometry resampling with occlusion filling for 3DTV applications

    NASA Astrophysics Data System (ADS)

    Kim, Jangheon; Sikora, Thomas

    2006-02-01

    Image and video-based rendering technologies are receiving growing attention due to their photo-realistic rendering capability at free viewpoints. However, two major limitations are ghosting and blurring due to their sampling-based mechanism. The scene geometry that supports selecting accurate sampling positions can be estimated using a global method (i.e., an approximate depth plane) or a local method (i.e., disparity estimation). This paper focuses on the local method, since it can yield more accurate rendering quality without a large number of cameras. Local scene geometry has two difficulties: limited geometrical density and uncovered areas containing hidden information. These are serious drawbacks for reconstructing an arbitrary viewpoint without aliasing artifacts. To solve these problems, we propose an anisotropic diffusive resampling method based on tensor theory. Isotropic low-pass filtering accomplishes anti-aliasing of the scene geometry, while anisotropic diffusion prevents the filtering from blurring visual structures. Apertures in coarse samples are estimated following diffusion on the pre-filtered space, and nonlinear weighting of gradient directions suppresses the amount of diffusion. Aliasing artifacts from low sampling density are efficiently removed by isotropic filtering, and edge blurring is addressed by the anisotropic method in the same process. Because sampling gaps differ in size, the resampling condition is defined considering the causality between filter scale and edges. Using a partial differential equation (PDE) employing Gaussian scale-space, we iteratively achieve coarse-to-fine resampling. At a large scale, apertures and uncovered holes can be overcome because only strong and meaningful boundaries are selected at that resolution. The coarse-level resampling at a large scale is then iteratively refined to recover detailed scene structure. Simulation results show marked improvements in rendering quality.
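
    The gradient-weighted diffusion idea can be illustrated with a standard Perona-Malik-style iteration: smoothing is near-isotropic in flat regions and suppressed across strong gradients, so structure edges survive. This is a generic stand-in for the paper's tensor-based, scale-space scheme; the conductance function, parameters, and wrap-around boundary handling are assumptions.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, step=0.2):
    """Perona-Malik-style diffusion: isotropic smoothing in flat regions,
    suppressed diffusion across strong gradients so edges stay sharp.
    Boundaries wrap around (np.roll), which is acceptable for a sketch."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Differences to the four neighbours.
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        # Edge-stopping weights: small where the gradient is large.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += step * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u

depth = np.random.rand(128, 128)              # placeholder noisy scene-geometry map
smoothed = anisotropic_diffusion(depth)
```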

  4. Effects of aging on neural connectivity underlying selective memory for emotional scenes

    PubMed Central

    Waring, Jill D.; Addis, Donna Rose; Kensinger, Elizabeth A.

    2012-01-01

    Older adults show age-related reductions in memory for neutral items within complex visual scenes, but just like young adults, older adults exhibit a memory advantage for emotional items within scenes compared with the background scene information. The present study examined young and older adults’ encoding-stage effective connectivity for selective memory of emotional items versus memory for both the emotional item and its background. In a functional magnetic resonance imaging (fMRI) study, participants viewed scenes containing either positive or negative items within neutral backgrounds. Outside the scanner, participants completed a memory test for items and backgrounds. Irrespective of scene content being emotionally positive or negative, older adults had stronger positive connections among frontal regions and from frontal regions to medial temporal lobe structures than did young adults, especially when items and backgrounds were subsequently remembered. These results suggest there are differences between young and older adults’ connectivity accompanying the encoding of emotional scenes. Older adults may require more frontal connectivity to encode all elements of a scene rather than just encoding the emotional item. PMID:22542836

  5. Effects of aging on neural connectivity underlying selective memory for emotional scenes.

    PubMed

    Waring, Jill D; Addis, Donna Rose; Kensinger, Elizabeth A

    2013-02-01

    Older adults show age-related reductions in memory for neutral items within complex visual scenes, but just like young adults, older adults exhibit a memory advantage for emotional items within scenes compared with the background scene information. The present study examined young and older adults' encoding-stage effective connectivity for selective memory of emotional items versus memory for both the emotional item and its background. In a functional magnetic resonance imaging (fMRI) study, participants viewed scenes containing either positive or negative items within neutral backgrounds. Outside the scanner, participants completed a memory test for items and backgrounds. Irrespective of scene content being emotionally positive or negative, older adults had stronger positive connections among frontal regions and from frontal regions to medial temporal lobe structures than did young adults, especially when items and backgrounds were subsequently remembered. These results suggest there are differences between young and older adults' connectivity accompanying the encoding of emotional scenes. Older adults may require more frontal connectivity to encode all elements of a scene rather than just encoding the emotional item. Published by Elsevier Inc.

  6. Attentional synchrony and the influence of viewing task on gaze behavior in static and dynamic scenes.

    PubMed

    Smith, Tim J; Mital, Parag K

    2013-07-17

    Does viewing task influence gaze during dynamic scene viewing? Research into the factors influencing gaze allocation during free viewing of dynamic scenes has reported that the gaze of multiple viewers clusters around points of high motion (attentional synchrony), suggesting that gaze may be primarily under exogenous control. However, the influence of viewing task on gaze behavior in static scenes and during real-world interaction has been widely demonstrated. To dissociate exogenous from endogenous factors during dynamic scene viewing we tracked participants' eye movements while they (a) freely watched unedited videos of real-world scenes (free viewing) or (b) quickly identified where the video was filmed (spot-the-location). Static scenes were also presented as controls for scene dynamics. Free viewing of dynamic scenes showed greater attentional synchrony, longer fixations, and more gaze to people and areas of high flicker compared with static scenes. These differences were minimized by the viewing task. In comparison with the free viewing of dynamic scenes, during the spot-the-location task fixation durations were shorter, saccade amplitudes were longer, and gaze exhibited less attentional synchrony and was biased away from areas of flicker and people. These results suggest that the viewing task can have a significant influence on gaze during a dynamic scene but that endogenous control is slow to kick in as initial saccades default toward the screen center, areas of high motion and people before shifting to task-relevant features. This default-like viewing behavior returns after the viewing task is completed, confirming that gaze behavior is more predictable during free viewing of dynamic than static scenes but that this may be due to natural correlation between regions of interest (e.g., people) and motion.

  7. The Contribution of Object Shape and Surface Properties to Object Ensemble Representation in Anterior-medial Ventral Visual Cortex.

    PubMed

    Cant, Jonathan S; Xu, Yaoda

    2017-02-01

    Our visual system can extract summary statistics from large collections of objects without forming detailed representations of the individual objects in the ensemble. In a region in ventral visual cortex encompassing the collateral sulcus and the parahippocampal gyrus and overlapping extensively with the scene-selective parahippocampal place area (PPA), we have previously reported fMRI adaptation to object ensembles when ensemble statistics repeated, even when local image features differed across images (e.g., two different images of the same strawberry pile). We additionally showed that this ensemble representation is similar to (but still distinct from) how visual texture patterns are processed in this region and is not explained by appealing to differences in the color of the elements that make up the ensemble. To further explore the nature of ensemble representation in this brain region, here we used PPA as our ROI and investigated in detail how the shape and surface properties (i.e., both texture and color) of the individual objects constituting an ensemble affect the ensemble representation in anterior-medial ventral visual cortex. We photographed object ensembles of stone beads that varied in shape and surface properties. A given ensemble always contained beads of the same shape and surface properties (e.g., an ensemble of star-shaped rose quartz beads). A change to the shape and/or surface properties of all the beads in an ensemble resulted in a significant release from adaptation in PPA compared with conditions in which no ensemble feature changed. In contrast, in the object-sensitive lateral occipital area (LO), we only observed a significant release from adaptation when the shape of the ensemble elements varied, and found no significant results in additional scene-sensitive regions, namely, the retrosplenial complex and occipital place area. Together, these results demonstrate that the shape and surface properties of the individual objects comprising an ensemble both contribute significantly to object ensemble representation in anterior-medial ventral visual cortex and further demonstrate a functional dissociation between object- (LO) and scene-selective (PPA) visual cortical regions and within the broader scene-processing network itself.

  8. Looking Toward Curiosity Study Areas, Spring 2015

    NASA Image and Video Library

    2015-05-08

    This detailed panorama from the Mast Camera (Mastcam) on NASA's Curiosity Mars rover shows a view toward two areas on lower Mount Sharp chosen for close-up inspection: "Mount Shields" and "Logan Pass." The scene is a mosaic of images taken with Mastcam's right-eye camera, which has a telephoto lens, on April 16, 2015, during the 957th Martian day, or sol, of Curiosity's work on Mars, before that sol's drive. The view spans from southwest, at left, to west-northwest. The color has been approximately white-balanced to resemble how the scene would appear under daytime lighting conditions on Earth. By 10 sols later, Curiosity had driven about 328 meters (1,076 feet) from the location where it made this observation to an outcrop at the base of "Mount Shields." A 5-meter scale bar has been superimposed near the center of this scene beside the outcrop that the rover then examined in detail. (Five meters is 16.4 feet.) This study location was chosen on the basis of Mount Shields displaying a feature that geologists recognized from images like this as likely to be a site where an ancient valley was incised into bedrock, then refilled with other sediment. After a few sols examining the outcrop at the base of Mount Shields, Curiosity resumed driving toward a study area at Logan Pass, near the 5-meter scale bar in the left half of this scene. That location was selected earlier, on the basis of images from orbit indicating contact there between two different geological units. The rover's route from Mount Shields to Logan Pass runs behind "Jocko Butte" from the viewpoint where this panorama was taken. http://photojournal.jpl.nasa.gov/catalog/PIA19398

  9. Integrating UAV Flight outputs in Esri's CityEngine for semi-urban areas

    NASA Astrophysics Data System (ADS)

    Anca, Paula; Vasile, Alexandru; Sandric, Ionut

    2016-04-01

    One of the most pervasive technologies of recent years, which has crossed over into consumer products due to its falling price, is the UAV, commonly known as the drone. Besides its ever-more accessible prices and growing functionality, what is truly impressive is the drastic reduction in processing time, from days to hours, from the initial flight preparation to the final output. This paper presents such a workflow and goes further by integrating the outputs into another growing technology: 3D. The software used for this purpose is Esri's CityEngine, which was developed for modeling 3D urban environments using existing 2D GIS data and computer generated architecture (CGA) rules, instead of modeling each feature individually. A semi-urban area was selected for this study and captured using the E-Bee from Parrot. The output point cloud elevation from the E-Bee flight was transformed into a raster in order to be used as an elevation surface in CityEngine, and the mosaic raster dataset was draped over this surface. In order to model the buildings in this area, CGA rules were written using the building footprints, as inputs, in the form of Feature Classes. The extrusion heights for the buildings were also extracted from the point cloud, and realistic textures were draped over the 3D building models. Finally, the scene was shared as a 3D web-scene which can be accessed by anyone through a link, without any software besides an internet browser. This can serve as input for Smart City development through further analysis for urban ecology. Keywords: 3D, drone, CityEngine, E-Bee, Esri, scene, web-scene

  10. A new approach to modeling the influence of image features on fixation selection in scenes

    PubMed Central

    Nuthmann, Antje; Einhäuser, Wolfgang

    2015-01-01

    Which image characteristics predict where people fixate when memorizing natural images? To answer this question, we introduce a new analysis approach that combines a novel scene-patch analysis with generalized linear mixed models (GLMMs). Our method allows for (1) directly describing the relationship between continuous feature value and fixation probability, and (2) assessing each feature's unique contribution to fixation selection. To demonstrate this method, we estimated the relative contribution of various image features to fixation selection: luminance and luminance contrast (low-level features); edge density (a mid-level feature); visual clutter and image segmentation to approximate local object density in the scene (higher-level features). An additional predictor captured the central bias of fixation. The GLMM results revealed that edge density, clutter, and the number of homogenous segments in a patch can independently predict whether image patches are fixated or not. Importantly, neither luminance nor contrast had an independent effect above and beyond what could be accounted for by the other predictors. Since the parcellation of the scene and the selection of features can be tailored to the specific research question, our approach allows for assessing the interplay of various factors relevant for fixation selection in scenes in a powerful and flexible manner. PMID:25752239
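
    A stripped-down version of this analysis can be sketched with a fixed-effects logistic GLM (the paper's GLMMs additionally include random effects for participants and scenes, omitted here for brevity). The patch-level data frame is simulated, and the predictor names simply mirror the features listed above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Placeholder patch-level data: one row per scene patch, with image features and
# whether any fixation landed in the patch.
rng = np.random.default_rng(6)
n = 4000
df = pd.DataFrame({
    "fixated": rng.integers(0, 2, n),
    "luminance": rng.normal(size=n),
    "contrast": rng.normal(size=n),
    "edge_density": rng.normal(size=n),
    "clutter": rng.normal(size=n),
    "n_segments": rng.normal(size=n),
    "dist_to_center": rng.normal(size=n),     # central-bias predictor
})

# Logistic model of fixation probability; each coefficient reflects a feature's
# unique contribution while holding the other predictors fixed.
model = smf.glm(
    "fixated ~ luminance + contrast + edge_density + clutter + n_segments + dist_to_center",
    data=df, family=sm.families.Binomial()).fit()
print(model.summary())
```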

  11. Reflectance of vegetation, soil, and water

    NASA Technical Reports Server (NTRS)

    Wiegand, C. L. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. The ability to read the 24-channel MSS CCT tapes, select specified agricultural land use areas from the CCT, and perform multivariate statistical and pattern recognition analyses has been demonstrated. The 5 optimum channels chosen for classifying an agricultural scene were, in the order of their selection the far red visible, short reflective IR, visible blue, thermal infrared, and ultraviolet portions of the electromagnetic spectrum, respectively. Although chosen by a training set containing only vegetal categories, the optimum 4 channels discriminated pavement, water, bare soil, and building roofs, as well as the vegetal categories. Among the vegetal categories, sugar cane and cotton had distinctive signatures that distinguished them from grass and citrus. Acreages estimated spectrally by the computer for the test scene were acceptably close to acreages estimated from aerial photographs for cotton, sugar cane, and water. Many nonfarmable land resolution elements representing drainage ditch, field road, and highway right-of-way as well as farm headquarters area fell into the grass, bare soil plus weeds, and citrus categories and lessened the accuracy of the farmable acreage estimates in these categories. The expertise developed using the 24-channel data will be applied to the ERTS-1 data.

  12. Oculomotor capture during real-world scene viewing depends on cognitive load.

    PubMed

    Matsukura, Michi; Brockmole, James R; Boot, Walter R; Henderson, John M

    2011-03-25

    It has been claimed that gaze control during scene viewing is largely governed by stimulus-driven, bottom-up selection mechanisms. Recent research, however, has strongly suggested that observers' top-down control plays a dominant role in attentional prioritization in scenes. A notable exception to this strong top-down control is oculomotor capture, where visual transients in a scene draw the eyes. One way to test whether oculomotor capture during scene viewing is independent of an observer's top-down goal setting is to reduce observers' cognitive resource availability. In the present study, we examined whether increasing observers' cognitive load influences the frequency and speed of oculomotor capture during scene viewing. In Experiment 1, we tested whether increasing observers' cognitive load modulates the degree of oculomotor capture by a new object suddenly appeared in a scene. Similarly, in Experiment 2, we tested whether increasing observers' cognitive load modulates the degree of oculomotor capture by an object's color change. In both experiments, the degree of oculomotor capture decreased as observers' cognitive resources were reduced. These results suggest that oculomotor capture during scene viewing is dependent on observers' top-down selection mechanisms. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Deciding what is possible and impossible following hippocampal damage in humans.

    PubMed

    McCormick, Cornelia; Rosenthal, Clive R; Miller, Thomas D; Maguire, Eleanor A

    2017-03-01

    There is currently much debate about whether the precise role of the hippocampus in scene processing is predominantly constructive, perceptual, or mnemonic. Here, we developed a novel experimental paradigm designed to control for general perceptual and mnemonic demands, thus enabling us to specifically vary the requirement for constructive processing. We tested the ability of patients with selective bilateral hippocampal damage and matched control participants to detect either semantic (e.g., an elephant with butterflies for ears) or constructive (e.g., an endless staircase) violations in realistic images of scenes. Thus, scenes could be semantically or constructively 'possible' or 'impossible'. Importantly, general perceptual and memory requirements were similar for both types of scene. We found that the patients performed comparably to control participants when deciding whether scenes were semantically possible or impossible, but were selectively impaired at judging if scenes were constructively possible or impossible. Post-task debriefing indicated that control participants constructed flexible mental representations of the scenes in order to make constructive judgements, whereas the patients were more constrained and typically focused on specific fragments of the scenes, with little indication of having constructed internal scene models. These results suggest that one contribution the hippocampus makes to scene processing is to construct internal representations of spatially coherent scenes, which may be vital for modelling the world during both perception and memory recall. © 2016 The Authors. Hippocampus Published by Wiley Periodicals, Inc.

  14. Influence of Exposure to Sexually Explicit Films on the Sexual Behavior of Secondary School Students in Ibadan, Nigeria.

    PubMed

    Odeleye, Olubunmi; Ajuwon, Ademola J

    2015-01-01

    Young people in secondary schools who are prone to engage in risky sexual behaviors spend considerable time watching television (TV), which often presents sex scenes. The influence of exposure to sex scenes on TV (SSTV) has been little researched in Nigeria. This study was therefore designed to determine the perceived influence of exposure to SSTV on the sexual behavior of secondary school students in Ibadan North Local Government Area. A total of 489 randomly selected students were surveyed. Mean age of respondents was 14.1 ± 1.9 years and 53.8% were females. About 91% had ever been exposed to sex scenes. The type of TV program from which most respondents reported exposure to sexual scenes was movies (86.9%). The majority reported exposure to all forms of SSTV from secondary storage devices. Students whose TV watching behavior was not monitored had heavier exposures to SSTV compared with those who were monitored. About 56.3% of females and 26.5% of males affirmed that watching SSTV had affected their sexual behavior. The predictor of sex-related activities was exposure to heavy sex scenes. Peer education and school-based programs should include topics to teach young people how to evaluate presentations of TV programs. © The Author(s) 2015 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  15. The Brain Functional Networks Associated to Human and Animal Suffering Differ among Omnivores, Vegetarians and Vegans

    PubMed Central

    Filippi, Massimo; Riccitelli, Gianna; Falini, Andrea; Di Salle, Francesco; Vuilleumier, Patrik; Comi, Giancarlo; Rocca, Maria A.

    2010-01-01

    Empathy and affective appraisals for conspecifics are among the hallmarks of social interaction. Using functional MRI, we hypothesized that vegetarians and vegans, who made their feeding choice for ethical reasons, might show brain responses to conditions of suffering involving humans or animals different from omnivores. We recruited 20 omnivore subjects, 19 vegetarians, and 21 vegans. The groups were matched for sex and age. Brain activation was investigated using fMRI and an event-related design during observation of negative affective pictures of human beings and animals (showing mutilations, murdered people, human/animal threat, tortures, wounds, etc.). Participants saw negative-valence scenes related to humans and animals, alternating with natural landscapes. During human negative valence scenes, compared with omnivores, vegetarians and vegans had an increased recruitment of the anterior cingulate cortex (ACC) and inferior frontal gyrus (IFG). More critically, during animal negative valence scenes, they had decreased amygdala activation and increased activation of the lingual gyri, the left cuneus, the posterior cingulate cortex and several areas mainly located in the frontal lobes, including the ACC, the IFG and the middle frontal gyrus. Nonetheless, substantial differences between vegetarians and vegans were also found in response to negative scenes. Vegetarians showed a selective recruitment of the right inferior parietal lobule during human negative scenes, and a prevailing activation of the ACC during animal negative scenes. Conversely, during animal negative scenes an increased activation of the inferior prefrontal cortex was observed in vegans. These results suggest that empathy toward non-conspecifics has a different neural representation among individuals with different feeding habits, perhaps reflecting different motivational factors and beliefs. PMID:20520767

  16. Emotional and neutral scenes in competition: orienting, efficiency, and identification.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri; Hyönä, Jukka

    2007-12-01

    To investigate preferential processing of emotional scenes competing for limited attentional resources with neutral scenes, prime pictures were presented briefly (450 ms), peripherally (5.2 degrees away from fixation), and simultaneously (one emotional and one neutral scene) versus singly. Primes were followed by a mask and a probe for recognition. Hit rate was higher for emotional than for neutral scenes in the dual- but not in the single-prime condition, and A' sensitivity decreased for neutral but not for emotional scenes in the dual-prime condition. This preferential processing involved both selective orienting and efficient encoding, as revealed, respectively, by a higher probability of first fixation on--and shorter saccade latencies to--emotional scenes and by shorter fixation time needed to accurately identify emotional scenes, in comparison with neutral scenes.
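
    The A' statistic referred to above is a standard nonparametric sensitivity index from signal detection theory. As a point of reference only, a minimal Python sketch of the commonly used Pollack and Norman (1964) formula is given below; the hit and false-alarm rates in the example are hypothetical and are not values reported in this study.

```python
# Nonparametric sensitivity index A' (Pollack and Norman, 1964).
# Illustrative only: the hit/false-alarm rates below are hypothetical.

def a_prime(hit_rate: float, fa_rate: float) -> float:
    """Return A' for a given hit rate and false-alarm rate."""
    h, f = hit_rate, fa_rate
    if h == f:
        return 0.5
    if h > f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # Symmetric form when false alarms exceed hits.
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

# Hypothetical recognition rates for emotional vs. neutral primes.
print(a_prime(0.80, 0.20))  # ~0.88
print(a_prime(0.65, 0.30))  # ~0.76
```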

  17. Full Scenes Produce More Activation than Close-Up Scenes and Scene-Diagnostic Objects in Parahippocampal and Retrosplenial Cortex: An fMRI Study

    ERIC Educational Resources Information Center

    Henderson, John M.; Larson, Christine L.; Zhu, David C.

    2008-01-01

    We used fMRI to directly compare activation in two cortical regions previously identified as relevant to real-world scene processing: retrosplenial cortex and a region of posterior parahippocampal cortex functionally defined as the parahippocampal place area (PPA). We compared activation in these regions to full views of scenes from a global…

  18. Shedding light on emotional perception: Interaction of brightness and semantic content in extrastriate visual cortex.

    PubMed

    Schettino, Antonio; Keil, Andreas; Porcu, Emanuele; Müller, Matthias M

    2016-06-01

    The rapid extraction of affective cues from the visual environment is crucial for flexible behavior. Previous studies have reported emotion-dependent amplitude modulations of two event-related potential (ERP) components - the N1 and EPN - reflecting sensory gain control mechanisms in extrastriate visual areas. However, it is unclear whether both components are selective electrophysiological markers of attentional orienting toward emotional material or are also influenced by physical features of the visual stimuli. To address this question, electrical brain activity was recorded from seventeen male participants while viewing original and bright versions of neutral and erotic pictures. Bright neutral scenes were rated as more pleasant compared to their original counterpart, whereas erotic scenes were judged more positively when presented in their original version. Classical and mass univariate ERP analysis showed larger N1 amplitude for original relative to bright erotic pictures, with no differences for original and bright neutral scenes. Conversely, the EPN was only modulated by picture content and not by brightness, substantiating the idea that this component is a unique electrophysiological marker of attention allocation toward emotional material. Complementary topographic analysis revealed the early selective expression of a centro-parietal positivity following the presentation of original erotic scenes only, reflecting the recruitment of neural networks associated with sustained attention and facilitated memory encoding for motivationally relevant material. Overall, these results indicate that neural networks subtending the extraction of emotional information are differentially recruited depending on low-level perceptual features, which ultimately influence affective evaluations. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Selective looking at natural scenes: Hedonic content and gender.

    PubMed

    Bradley, Margaret M; Costa, Vincent D; Lang, Peter J

    2015-10-01

    Choice viewing behavior when looking at affective scenes was assessed to examine differences due to hedonic content and gender by monitoring eye movements in a selective looking paradigm. On each trial, participants viewed a pair of pictures that included a neutral picture together with an affective scene depicting either contamination, mutilation, threat, food, nude males, or nude females. The duration of time that gaze was directed to each picture in the pair was determined from eye fixations. Results indicated that viewing choices varied with both hedonic content and gender. Initially, gaze duration for both men and women was heightened when viewing all affective contents, but was subsequently followed by significant avoidance of scenes depicting contamination or nude males. Gender differences were most pronounced when viewing pictures of nude females, with men continuing to devote longer gaze time to pictures of nude females throughout viewing, whereas women avoided scenes of nude people, whether male or female, later in the viewing interval. For women, reported disgust of sexual activity was also inversely related to gaze duration for nude scenes. Taken together, selective looking as indexed by eye movements reveals differential perceptual intake as a function of specific content, gender, and individual differences. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. The elephant in the room: Inconsistency in scene viewing and representation.

    PubMed

    Spotorno, Sara; Tatler, Benjamin W

    2017-10-01

    We examined the extent to which semantic informativeness, consistency with expectations and perceptual salience contribute to object prioritization in scene viewing and representation. In scene viewing (Experiments 1-2), semantic guidance overshadowed perceptual guidance in determining fixation order, with the greatest prioritization for objects that were diagnostic of the scene's depicted event. Perceptual properties affected selection of consistent objects (regardless of their informativeness) but not of inconsistent objects. Semantic and perceptual properties also interacted in influencing foveal inspection, as inconsistent objects were fixated longer than low- but not high-salience diagnostic objects. Although inconsistent objects and consistent but marginally informative objects were not studied in direct competition with each other (each was studied in competition with diagnostic objects), inconsistent objects were fixated earlier and for longer than consistent but marginally informative objects. In change detection (Experiment 3), perceptual guidance overshadowed semantic guidance, promoting detection of highly salient changes. A residual advantage for diagnosticity over inconsistency emerged only when selection prioritization could not be based on low-level features. Overall, these findings show that semantic inconsistency is not prioritized within a scene when competing with other relevant information that is essential to scene understanding and respects observers' expectations. Moreover, they reveal that the relative dominance of semantic or perceptual properties during selection depends on ongoing task requirements. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Scene segmentation by spike synchronization in reciprocally connected visual areas. II. Global assemblies and synchronization on larger space and time scales.

    PubMed

    Knoblauch, Andreas; Palm, Günther

    2002-09-01

    We present further simulation results of the model of two reciprocally connected visual areas proposed in the first paper [Knoblauch and Palm (2002) Biol Cybern 87:151-167]. One area corresponds to the orientation-selective subsystem of the primary visual cortex, the other is modeled as an associative memory representing stimulus objects according to Hebbian learning. We examine the scene-segmentation capability of our model on larger time and space scales, and relate it to experimental findings. Scene segmentation is achieved by attention switching on a time-scale longer than the gamma range. We find that the time-scale can vary depending on habituation parameters in the range of tens to hundreds of milliseconds. The switching process can be related to findings concerning attention and biased competition, and we reproduce experimental poststimulus time histograms (PSTHs) of single neurons under different stimulus and attentional conditions. In a larger variant the model exhibits traveling waves of activity on both slow and fast time-scales, with properties similar to those found in experiments. An apparent weakness of our standard model is the tendency to produce anti-phase correlations for fast activity from the two areas. Increasing the inter-areal delays in our model produces alternations of in-phase and anti-phase oscillations. The experimentally observed in-phase correlations can most naturally be obtained by the involvement of both fast and slow inter-areal connections; e.g., by two axon populations corresponding to fast-conducting myelinated and slow-conducting unmyelinated axons.

  2. Neural Correlates of Fixation Duration during Real-world Scene Viewing: Evidence from Fixation-related (FIRE) fMRI.

    PubMed

    Henderson, John M; Choi, Wonil

    2015-06-01

    During active scene perception, our eyes move from one location to another via saccadic eye movements, with the eyes fixating objects and scene elements for varying amounts of time. Much of the variability in fixation duration is accounted for by attentional, perceptual, and cognitive processes associated with scene analysis and comprehension. For this reason, current theories of active scene viewing attempt to account for the influence of attention and cognition on fixation duration. Yet almost nothing is known about the neurocognitive systems associated with variation in fixation duration during scene viewing. We addressed this topic using fixation-related fMRI, which involves coregistering high-resolution eye tracking and magnetic resonance scanning to conduct event-related fMRI analysis based on characteristics of eye movements. We observed that activation in visual and prefrontal executive control areas was positively correlated with fixation duration, whereas activation in ventral areas associated with scene encoding and medial superior frontal and paracentral regions associated with changing action plans was negatively correlated with fixation duration. The results suggest that fixation duration in scene viewing is controlled by cognitive processes associated with real-time scene analysis interacting with motor planning, consistent with current computational models of active vision for scene perception.

  3. Cross-sensor comparisons between Landsat 5 TM and IRS-P6 AWiFS and disturbance detection using integrated Landsat and AWiFS time-series images

    USGS Publications Warehouse

    Chen, Xuexia; Vogelmann, James E.; Chander, Gyanesh; Ji, Lei; Tolk, Brian; Huang, Chengquan; Rollins, Matthew

    2013-01-01

    Routine acquisition of Landsat 5 Thematic Mapper (TM) data was recently discontinued, and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) has an ongoing problem with its scan line corrector (SLC), which creates spatial gaps in the images it acquires. Because temporal and spatial discontinuities of Landsat data are now imminent, it is important to investigate other satellite data that could be used in place of Landsat data. We thus cross-compared two near-simultaneous images obtained from Landsat 5 TM and the Indian Remote Sensing (IRS)-P6 Advanced Wide Field Sensor (AWiFS), both captured on 29 May 2007 over Los Angeles, CA. TM and AWiFS reflectances were compared for the green, red, near-infrared (NIR), and shortwave infrared (SWIR) bands, as well as the normalized difference vegetation index (NDVI), based on manually selected polygons in homogeneous areas. All R2 values of the linear regressions were higher than 0.99. The temporally invariant cluster (TIC) method was used to calculate the NDVI correlation between the TM and AWiFS images. The NDVI regression line derived from the selected polygons passed through several invariant cluster centres of the TIC density maps, demonstrating that both the scene-dependent polygon regression method and the TIC method can generate accurate radiometric normalization. A scene-independent normalization method was also used to normalize the AWiFS data. Image agreement assessment demonstrated that the scene-dependent normalization using homogeneous polygons provided slightly higher accuracy values than those obtained by the scene-independent method. Finally, the non-normalized and relatively normalized ‘Landsat-like’ AWiFS 2007 images were integrated into 1984 to 2010 Landsat time-series stacks (LTSS) for disturbance detection using the Vegetation Change Tracker (VCT) model. Both scene-dependent and scene-independent normalized AWiFS data sets could generate disturbance maps similar to those generated using the LTSS data set, and their kappa coefficients were higher than 0.97. These results indicate that AWiFS can be used instead of Landsat data to detect multitemporal disturbance in the event of Landsat data discontinuity.
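
    The core computations in the cross-sensor comparison above are the NDVI and a per-band linear regression between near-simultaneous image pairs used for scene-dependent relative normalization. The numpy sketch below illustrates both steps under simplified assumptions; the band arrays, noise model, and mask are hypothetical stand-ins for the TM and AWiFS bands and the homogeneous polygons used in the study.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index, (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red + 1e-12)

def relative_normalization(reference, target, mask):
    """Fit target = gain * reference + offset over masked (homogeneous) pixels,
    as in scene-dependent relative radiometric normalization."""
    gain, offset = np.polyfit(reference[mask].ravel(), target[mask].ravel(), deg=1)
    return gain, offset

# Hypothetical reflectance arrays standing in for near-simultaneous TM
# (reference) and AWiFS (target) bands; the mask stands in for the manually
# selected homogeneous polygons.
rng = np.random.default_rng(0)
tm_red = rng.uniform(0.05, 0.30, (100, 100))
tm_nir = rng.uniform(0.20, 0.60, (100, 100))
awifs_red = 0.95 * tm_red + 0.01 + rng.normal(0, 0.005, tm_red.shape)
mask = np.ones_like(tm_red, dtype=bool)

gain, offset = relative_normalization(tm_red, awifs_red, mask)
awifs_red_norm = (awifs_red - offset) / gain     # AWiFS rescaled to TM radiometry

# Compare NDVI computed with the original and the normalized red band.
r2 = np.corrcoef(ndvi(tm_nir, tm_red).ravel(),
                 ndvi(tm_nir, awifs_red_norm).ravel())[0, 1] ** 2
print(gain, offset, r2)
```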

  4. Development of a computer program data base of a navigation aid environment for simulated IFR flight and landing studies

    NASA Technical Reports Server (NTRS)

    Bergeron, H. P.; Haynie, A. T.; Mcdede, J. B.

    1980-01-01

    A general aviation single pilot instrument flight rule simulation capability was developed. Problems experienced by single pilots flying in IFR conditions were investigated. The simulation required a three dimensional spatial navaid environment of a flight navigational area. A computer simulation of all the navigational aids plus 12 selected airports located in the Washington/Norfolk area was developed. All programmed locations in the list were referenced to a Cartesian coordinate system with the origin located at a specified airport's reference point. All navigational aids with their associated frequencies, call letters, locations, and orientations plus runways and true headings are included in the data base. The simulation included a TV displayed out-the-window visual scene of country and suburban terrain and a scaled model runway complex. Any of the programmed runways, with all its associated navaids, can be referenced to a runway on the airport in this visual scene. This allows a simulation of a full mission scenario including breakout and landing.

  5. Adaptive fusion of infrared and visible images in dynamic scene

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi

    2011-11-01

    Multi-modality sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet and HSV, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges for visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast and adaptive feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. First, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are considered potential target locations. The region surrounding the target area is then segmented as the background region. Image fusion is then applied locally to the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed on these local fusion images as the fusion feature set. The variance ratio, a measure based on Linear Discriminant Analysis (LDA), is employed to rank the feature set, and the most discriminative combination is selected for the whole-image fusion. Because the feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments were conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that the proposed method achieves competitive performance compared with other fusion algorithms at a relatively low computational cost.
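
    The selection step at the heart of this method (scoring candidate local fusions by a variance ratio computed over target and background regions) can be illustrated compactly. The sketch below uses a simplified variance-ratio score (between-region separation divided by within-region spread) and hypothetical registered inputs; in the original method the target mask would come from fuzzy c-means hotspot clustering on the infrared image, and the exact score follows the LDA-based definition in the paper.

```python
import numpy as np

def candidate_fusions(vis_rgb: np.ndarray, ir: np.ndarray) -> dict:
    """A few hypothetical linear combinations of visible channels and IR."""
    r, g, b = vis_rgb[..., 0], vis_rgb[..., 1], vis_rgb[..., 2]
    return {
        "ir_only": ir,
        "0.5*ir + 0.5*r": 0.5 * ir + 0.5 * r,
        "0.5*ir + 0.25*g + 0.25*b": 0.5 * ir + 0.25 * g + 0.25 * b,
        "0.5*ir + 0.5*mean(rgb)": 0.5 * ir + 0.5 * (r + g + b) / 3,
    }

def variance_ratio(fused: np.ndarray, target_mask: np.ndarray,
                   background_mask: np.ndarray) -> float:
    """Simplified LDA-style score: between-region separation over within-region spread."""
    t, b = fused[target_mask], fused[background_mask]
    return float((t.mean() - b.mean()) ** 2 / (t.var() + b.var() + 1e-12))

# Hypothetical registered inputs; in the original method the target mask would
# come from fuzzy c-means hotspot clustering on the infrared image.
rng = np.random.default_rng(1)
vis = rng.uniform(0, 1, (64, 64, 3))
ir = rng.uniform(0, 1, (64, 64))
target_mask = np.zeros((64, 64), dtype=bool)
target_mask[28:36, 28:36] = True
ir[target_mask] += 0.8                     # make the "hotspot" stand out
background_mask = ~target_mask

scores = {name: variance_ratio(img, target_mask, background_mask)
          for name, img in candidate_fusions(vis, ir).items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))        # combination used to fuse the whole frame
```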

  6. Re-engaging with the past: recapitulation of encoding operations during episodic retrieval

    PubMed Central

    Morcom, Alexa M.

    2014-01-01

    Recollection of events is accompanied by selective reactivation of cortical regions which responded to specific sensory and cognitive dimensions of the original events. This reactivation is thought to reflect the reinstatement of stored memory representations and therefore to reflect memory content, but it may also reveal processes which support both encoding and retrieval. The present study used event-related functional magnetic resonance imaging to investigate whether regions selectively engaged in encoding face and scene context with studied words are also re-engaged when the context is later retrieved. As predicted, encoding face and scene context with visually presented words elicited activity in distinct, context-selective regions. Retrieval of face and scene context also re-engaged some of the regions which had shown successful encoding effects. However, this recapitulation of encoding activity did not show the same context selectivity observed at encoding. Successful retrieval of both face and scene context re-engaged regions which had been associated with encoding of the other type of context, as well as those associated with encoding the same type of context. This recapitulation may reflect retrieval attempts which are not context-selective, but use shared retrieval cues to re-engage encoding operations in service of recollection. PMID:24904386

  7. Satellite Image Mosaic Engine

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2006-01-01

    A computer program automatically builds large, full-resolution mosaics of multispectral images of Earth landmasses from images acquired by Landsat 7, complete with matching of colors and blending between adjacent scenes. While the code has been used extensively for Landsat, it could also be used for other data sources. A single mosaic of as many as 8,000 scenes (the largest set produced in this work, representing more than 5 terabytes of data) demonstrated the code's ability to provide global coverage. The program first statistically analyzes input images to determine areas of coverage and data-value distributions. It then transforms the input images from their original universal transverse Mercator coordinates to other geographical coordinates, with scaling. It applies a first-order polynomial brightness correction to each band in each scene. It uses a data-mask image for selecting data and blending of input scenes. Under user control, the program can be made to operate on small parts of the output image space, with check-point and restart capabilities. The program runs on SGI IRIX computers. It is capable of parallel processing using shared-memory code, large memories, and tens of central processing units. It can retrieve input data and store output data at locations remote from the processors on which it is executed.
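
    Two of the processing steps described above (the first-order, i.e. gain/offset, per-band brightness correction and the blending across adjacent scenes) lend themselves to a brief illustration. The sketch below is not the program's own code; the scene sizes, overlap geometry, and statistics-matching rule are simplified assumptions.

```python
import numpy as np

def first_order_match(scene_band: np.ndarray, ref_band: np.ndarray,
                      overlap: np.ndarray) -> np.ndarray:
    """Gain/offset (first-order polynomial) brightness correction that matches
    the scene's overlap-region statistics to the reference scene, per band."""
    s, r = scene_band[overlap], ref_band[overlap]
    gain = r.std() / (s.std() + 1e-12)
    offset = r.mean() - gain * s.mean()
    return gain * scene_band + offset

# Hypothetical single-band scenes already reprojected to a common grid, sharing
# a 20-column overlap; sizes and values are illustrative only.
rng = np.random.default_rng(2)
ref = rng.uniform(40, 200, (256, 256))
scene = 1.1 * ref + 8 + rng.normal(0, 2, ref.shape)   # brighter neighbouring scene
overlap = np.zeros_like(ref, dtype=bool)
overlap[:, -20:] = True                                # shared columns

corrected = first_order_match(scene, ref, overlap)

# Feather the seam: per-column weights ramp from the reference to the corrected
# scene across the overlap (a simple stand-in for data-mask blending).
w = np.linspace(1.0, 0.0, 20)
blended_overlap = w * ref[:, -20:] + (1 - w) * corrected[:, -20:]
print(round(float(np.abs(corrected[:, -20:] - ref[:, -20:]).mean()), 2))
```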

  8. Overt attention in natural scenes: objects dominate features.

    PubMed

    Stoll, Josef; Thrun, Michael; Nuthmann, Antje; Einhäuser, Wolfgang

    2015-02-01

    Whether overt attention in natural scenes is guided by object content or by low-level stimulus features has become a matter of intense debate. Experimental evidence seemed to indicate that once object locations in a scene are known, salience models provide little extra explanatory power. This approach has recently been criticized for using inadequate models of early salience; and indeed, state-of-the-art salience models outperform trivial object-based models that assume a uniform distribution of fixations on objects. Here we propose to use object-based models that take a preferred viewing location (PVL) close to the centre of objects into account. In experiment 1, we demonstrate that, when including this comparably subtle modification, object-based models again are at par with state-of-the-art salience models in predicting fixations in natural scenes. One possible interpretation of these results is that objects rather than early salience dominate attentional guidance. In this view, early-salience models predict fixations through the correlation of their features with object locations. To test this hypothesis directly, in two additional experiments we reduced low-level salience in image areas of high object content. For these modified stimuli, the object-based model predicted fixations significantly better than early salience. This finding held in an object-naming task (experiment 2) and a free-viewing task (experiment 3). These results provide further evidence for object-based fixation selection--and by inference object-based attentional guidance--in natural scenes. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.

  9. Age Differences in Selective Memory of Goal-Relevant Stimuli Under Threat.

    PubMed

    Durbin, Kelly A; Clewett, David; Huang, Ringo; Mather, Mara

    2018-02-01

    When faced with threat, people often selectively focus on and remember the most pertinent information while simultaneously ignoring any irrelevant information. Filtering distractors under arousal requires inhibitory mechanisms, which take time to recruit and often decline in older age. Despite the adaptive nature of this ability, relatively little research has examined how both threat and time spent preparing these inhibitory mechanisms affect selective memory for goal-relevant information across the life span. In this study, 32 younger and 31 older adults were asked to encode task-relevant scenes, while ignoring transparent task-irrelevant objects superimposed onto them. Threat levels were increased on some trials by threatening participants with monetary deductions if they later forgot scenes that followed threat cues. We also varied the time between threat induction and a to-be-encoded scene (i.e., 2 s, 4 s, 6 s) to determine whether both threat and timing effects on memory selectivity differ by age. We found that age differences in memory selectivity only emerged after participants spent a long time (i.e., 6 s) preparing for selective encoding. Critically, this time-dependent age difference occurred under threatening, but not neutral, conditions. Under threat, longer preparation time led to enhanced memory for task-relevant scenes and greater memory suppression of task-irrelevant objects in younger adults. In contrast, increased preparation time after threat induction had no effect on older adults' scene memory and actually worsened memory suppression of task-irrelevant objects. These findings suggest that increased time to prepare top-down encoding processes benefits younger, but not older, adults' selective memory for goal-relevant information under threat. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  10. AIS-2 radiometry and a comparison of methods for the recovery of ground reflectance

    NASA Technical Reports Server (NTRS)

    Conel, James E.; Green, Robert O.; Vane, Gregg; Bruegge, Carol J.; Alley, Ronald E.; Curtiss, Brian J.

    1987-01-01

    A field experiment and its results involving Airborne Imaging Spectrometer-2 data are described. The radiometry and spectral calibration of the instrument are critically examined in light of laboratory and field measurements. Three methods of compensating for the atmosphere in the search for ground reflectance are compared. It was found that laboratory-determined responsivities are 30 to 50 percent less than expected for the conditions of the flight, for both short- and long-wavelength observations. The combined system-atmosphere-surface signal-to-noise ratio, as indexed by the mean response divided by the standard deviation for selected areas, lies between 40 and 110, depending upon how scene averages are taken, and is 30 percent less for flight conditions than for laboratory conditions. Atmospheric and surface variations may contribute to this difference. It is not possible to isolate instrument performance from the present data. As for methods of data reduction, the so-called scene-average or log-residual method fails to recover any feature present in the surface reflectance, probably because of the extreme homogeneity of the scene.

  11. Database improvements for motor vehicle/bicycle crash analysis

    PubMed Central

    Lusk, Anne C; Asgarzadeh, Morteza; Farvid, Maryam S

    2015-01-01

    Background Bicycling is healthy but needs to be safer for more people to bike. Police crash templates are designed for reporting crashes between motor vehicles, but not between vehicles/bicycles. If written/drawn bicycle-crash-scene details exist, these are not entered into spreadsheets. Objective To assess which bicycle-crash-scene data might be added to spreadsheets for analysis. Methods Police crash templates from 50 states were analysed. Reports for 3350 motor vehicle/bicycle crashes (2011) were obtained for the New York City area and 300 cases were selected (with drawings and on roads with sharrows, bike lanes, cycle tracks and no bike provisions). Crashes were redrawn and new bicycle-crash-scene details were coded and entered into the existing spreadsheet. The association between severity of injuries and bicycle-crash-scene codes was evaluated using multiple logistic regression. Results Police templates only consistently include pedal-cyclist and helmet. Bicycle-crash-scene coded variables for templates could include: 4 bicycle environments, 18 vehicle impact-points (opened doors and mirrors), 4 bicycle impact-points, motor vehicle/bicycle crash patterns, in/out of the bicycle environment and bike/relevant motor vehicle categories. A test of including these variables suggested that, with bicyclists who had minor injuries as the control group, bicyclists on roads with bike lanes riding outside the lane had a lower likelihood of severe injuries (OR 0.40, 95% CI 0.16 to 0.98) compared with bicyclists riding on roads without bicycle facilities. Conclusions Police templates should include additional bicycle-crash-scene codes for entry into spreadsheets. Crash analysis, including with big data, could then be conducted on bicycle environments, motor vehicle potential impact points/doors/mirrors, bicycle potential impact points, motor vehicle characteristics, location and injury. PMID:25835304
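
    For readers who want to reproduce the kind of analysis described above, a minimal sketch of a multiple logistic regression yielding odds ratios and confidence intervals is shown below. It uses statsmodels with simulated data; the variable names only mirror the kinds of bicycle-crash-scene codes proposed in the paper and are not the authors' actual coding scheme or data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated, hypothetical crash records; the column names only mirror the kinds
# of bicycle-crash-scene codes proposed in the paper.
rng = np.random.default_rng(3)
n = 500
predictors = pd.DataFrame({
    "bike_lane_outside": rng.integers(0, 2, n),      # riding outside a bike lane
    "cycle_track": rng.integers(0, 2, n),
    "door_or_mirror_impact": rng.integers(0, 2, n),
})
# Generate a binary severity outcome from an assumed logistic model.
logit = (-0.4 - 0.9 * predictors["bike_lane_outside"]
         + 0.7 * predictors["door_or_mirror_impact"])
severe = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(predictors)
fit = sm.Logit(severe.astype(int), X).fit(disp=False)

odds_ratios = np.exp(fit.params)      # e.g. OR for riding outside a bike lane
conf_int = np.exp(fit.conf_int())     # 95% confidence intervals
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```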

  12. Selecting and perceiving multiple visual objects

    PubMed Central

    Xu, Yaoda; Chun, Marvin M.

    2010-01-01

    To explain how multiple visual objects are attended and perceived, we propose that our visual system first selects a fixed number of about four objects from a crowded scene based on their spatial information (object individuation) and then encodes their details (object identification). We describe the involvement of the inferior intra-parietal sulcus (IPS) in object individuation and the superior IPS and higher visual areas in object identification. Our neural object-file theory synthesizes and extends existing ideas in visual cognition and is supported by behavioral and neuroimaging results. It provides a better understanding of the role of the different parietal areas in encoding visual objects and can explain various forms of capacity-limited processing in visual cognition such as working memory. PMID:19269882

  13. A Photo Album of Earth Scheduling Landsat 7 Mission Daily Activities

    NASA Technical Reports Server (NTRS)

    Potter, William; Gasch, John; Bauer, Cynthia

    1998-01-01

    Landsat7 is a member of a new generation of Earth observation satellites. Landsat7 will carry on the mission of the aging Landsat 5 spacecraft by acquiring high resolution, multi-spectral images of the Earth surface for strategic, environmental, commercial, agricultural and civil analysis and research. One of the primary mission goals of Landsat7 is to accumulate and seasonally refresh an archive of global images with full coverage of Earth's landmass, less the central portion of Antarctica. This archive will enable further research into seasonal, annual and long-range trending analysis in such diverse research areas as crop yields, deforestation, population growth, and pollution control, to name just a few. A secondary goal of Landsat7 is to fulfill imaging requests from our international partners in the mission. Landsat7 will transmit raw image data from the spacecraft to 25 ground stations in 20 subscribing countries. Whereas earlier Landsat missions were scheduled manually (as are the majority of current low-orbit satellite missions), the task of manually planning and scheduling Landsat7 mission activities would be overwhelmingly complex when considering the large volume of image requests, the limited resources available, spacecraft instrument limitations, and the limited ground image processing capacity, not to mention avoidance of foul weather systems. The Landsat7 Mission Operation Center (MOC) includes an image scheduler subsystem that is designed to automate the majority of mission planning and scheduling, including selection of the images to be acquired, managing the recording and playback of the images by the spacecraft, scheduling ground station contacts for downlink of images, and generating the spacecraft commands for controlling the imager, recorder, transmitters and antennas. The image scheduler subsystem autonomously generates 90% of the spacecraft commanding with minimal manual intervention. The image scheduler produces a conflict-free schedule for acquiring images of the "best" 250 scenes daily for refreshing the global archive. It then equitably distributes the remaining resources for acquiring up to 430 scenes to satisfy requests by international subscribers. The image scheduler selects candidate scenes based on priority and age of the requests, and predicted cloud cover and sun angle at each scene. It also selects these scenes to avoid instrument constraint violations and maximizes efficiency of resource usage by encouraging acquisition of scenes in clusters. Of particular interest to the mission planners, it produces the resulting schedule in a reasonable time, typically within 15 minutes.
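
    The scheduling logic described above amounts to scoring candidate scenes against weighted criteria (request priority and age, predicted cloud cover, sun angle, and resource constraints) and selecting the best until the daily budget is exhausted. The sketch below illustrates that idea only; the attributes, weights, and budget handling are illustrative assumptions, not the operational Landsat 7 MOC parameters.

```python
from dataclasses import dataclass

@dataclass
class CandidateScene:
    path_row: str
    priority: int            # request priority (higher = more important)
    request_age_days: int    # how long the request has been waiting
    cloud_cover_pct: float   # predicted cloud cover at acquisition time
    sun_elevation_deg: float # predicted sun angle at the scene

# Illustrative weights only, not the operational Landsat 7 MOC values.
WEIGHTS = {"priority": 3.0, "age": 0.05, "cloud": -0.08, "sun": 0.02}

def score(scene: CandidateScene) -> float:
    """Weighted figure of merit for one candidate acquisition; larger is better."""
    return (WEIGHTS["priority"] * scene.priority
            + WEIGHTS["age"] * scene.request_age_days
            + WEIGHTS["cloud"] * scene.cloud_cover_pct
            + WEIGHTS["sun"] * scene.sun_elevation_deg)

candidates = [
    CandidateScene("026/039", priority=2, request_age_days=40,
                   cloud_cover_pct=15.0, sun_elevation_deg=52.0),
    CandidateScene("172/060", priority=3, request_age_days=10,
                   cloud_cover_pct=70.0, sun_elevation_deg=61.0),
    CandidateScene("231/068", priority=1, request_age_days=90,
                   cloud_cover_pct=30.0, sun_elevation_deg=48.0),
]

# Take the best-scoring scenes first, up to the daily archive-refresh budget
# (250 scenes in the abstract); recorder, downlink and instrument constraints
# would then be checked before committing each scene to the schedule.
daily_budget = 250
schedule = sorted(candidates, key=score, reverse=True)[:daily_budget]
print([s.path_row for s in schedule])
```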

  14. Multi-stage approach to estimate forest biomass in degraded area by fire and selective logging

    NASA Astrophysics Data System (ADS)

    Santos, E. G.; Shimabukuro, Y. E.; Arai, E.; Duarte, V.; Jorge, A.; Gasparini, K.

    2017-12-01

    The Amazon forest has been the target of several threats throughout the years. Anthropogenic disturbances in the region can significantly alter this environment, directly affecting the dynamics and structure of tropical forests. Monitoring these threats of forest degradation across the Amazon is paramount to understanding the impacts of disturbances in the tropics. With the advance of new technologies such as Light Detection and Ranging (LiDAR), quantifying forest degradation in the Amazon and developing methodologies to monitor it are now possible and may contribute considerably to this topic. The objective of this study was to use remote sensing data to assess and estimate aboveground biomass (AGB) across different levels of degradation (fire and selective logging) using a multi-stage approach linking airborne LiDAR and orbital imagery. The study area is in the northern part of the state of Mato Grosso, Brazil. It is predominantly characterized by agricultural land and by remnants of the Amazon forest that are either intact or degraded by anthropogenic or natural causes (selective logging and/or fire). More specifically, the study area corresponds to path/row 226/69 of the OLI/Landsat 8 image. Using a forest mask generated from multi-resolution segmentation of agricultural and forest areas, forest biomass was calculated from the LiDAR data and correlated with texture images, vegetation indices, and fraction images derived by Linear Spectral Unmixing of the OLI/Landsat 8 image; it was then extrapolated to the entire scene 226/69 and validated with field inventories. The results showed a moderate to strong correlation between forest biomass and the texture data, vegetation indices, and fraction images. It is therefore possible to extract biomass information and create maps from optical data, specifically by combining vegetation indices, which carry forest greenness information, with texture data, which carry forest structure information. The biomass could then be extrapolated to the entire scene (226/69) from the optical data, providing an overview of the biomass distribution throughout the area.
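
    The multi-stage estimation described above reduces, at its core, to regressing LiDAR-derived biomass on optical predictors and then applying the fitted model to every pixel in the scene. A minimal numpy sketch of that step is given below; the predictors, coefficients, and simulated plot data are hypothetical stand-ins for the NDVI, texture, and unmixing-fraction layers used in the study.

```python
import numpy as np

# Hypothetical per-plot predictors extracted from the OLI/Landsat 8 scene
# (NDVI, a texture metric, a shade fraction from linear spectral unmixing)
# and LiDAR-derived aboveground biomass (Mg/ha) for the same plots.
# All values are simulated for illustration.
rng = np.random.default_rng(4)
n_plots = 200
ndvi = rng.uniform(0.3, 0.9, n_plots)
texture = rng.uniform(0.0, 1.0, n_plots)          # e.g. a rescaled GLCM contrast
shade_fraction = rng.uniform(0.0, 0.5, n_plots)
agb_lidar = (250 * ndvi + 60 * texture - 120 * shade_fraction
             + rng.normal(0, 25, n_plots))

# Multiple linear regression: AGB ~ NDVI + texture + shade fraction.
X = np.column_stack([np.ones(n_plots), ndvi, texture, shade_fraction])
coef, *_ = np.linalg.lstsq(X, agb_lidar, rcond=None)

def predict_agb(ndvi_img, texture_img, shade_img):
    """Apply the fitted model pixel-wise to extrapolate AGB across the scene."""
    return (coef[0] + coef[1] * ndvi_img
            + coef[2] * texture_img + coef[3] * shade_img)

print(np.round(coef, 1))
```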

  15. Social Class and Racial Differences in Children's Perceptions of Television Violence.

    ERIC Educational Resources Information Center

    Greenberg, Bradley S.; Gordon, Thomas F.

    Perceptions of media violence and comparisons of those perceptions for different viewer subgroups were examined in a study of fifth-grade boys' perceptions of selected television scenes which differed in kind and degree of violence. Two parallel videotapes were edited to contain scenes of different kinds of physical violence, a practice scene, and…

  16. Large Area Crop Inventory Experiment (LACIE). Detection of episodic phenomena on LANDSAT imagery. [Kansas

    NASA Technical Reports Server (NTRS)

    Chesnutwood, C. M. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. Episodic phenomena such as rainfall shortly before the data pass, thin translucent clouds, cloud shadows, and aircraft condensation trails and their shadows are responsible for changes in the spectral reflectivities of some surfaces. These changes are readily detected on LANDSAT full-frame imagery. Histograms of selected areas in Kansas show a distinct decrease in mean radiance values, but also an increase in scene contrast, in areas where recent rains had occurred. Histograms from a few individual fields indicate that the mean radiance values for winter wheat followed a different trend after a rainfall than those for alfalfa or grasses.

  17. Scrambled eyes? Disrupting scene structure impedes focal processing and increases bottom-up guidance.

    PubMed

    Foulsham, Tom; Alan, Rana; Kingstone, Alan

    2011-10-01

    Previous research has demonstrated that search and memory for items within natural scenes can be disrupted by "scrambling" the images. In the present study, we asked how disrupting the structure of a scene through scrambling might affect the control of eye fixations in either a search task (Experiment 1) or a memory task (Experiment 2). We found that the search decrement in scrambled scenes was associated with poorer guidance of the eyes to the target. Across both tasks, scrambling led to shorter fixations and longer saccades, and more distributed, less selective overt attention, perhaps corresponding to an ambient mode of processing. These results confirm that scene structure has widespread effects on the guidance of eye movements in scenes. Furthermore, the results demonstrate the trade-off between scene structure and visual saliency, with saliency having more of an effect on eye guidance in scrambled scenes.

  18. Effects of chromatic image statistics on illumination induced color differences.

    PubMed

    Lucassen, Marcel P; Gevers, Theo; Gijsenij, Arjan; Dekker, Niels

    2013-09-01

    We measure the color fidelity of visual scenes that are rendered under different (simulated) illuminants and shown on a calibrated LCD display. Observers make triad illuminant comparisons involving the renderings from two chromatic test illuminants and one achromatic reference illuminant shown simultaneously. Four chromatic test illuminants are used: two along the daylight locus (yellow and blue), and two perpendicular to it (red and green). The observers select the rendering having the best color fidelity, thereby indirectly judging which of the two test illuminants induces the smallest color differences compared to the reference. Both multicolor test scenes and natural scenes are studied. The multicolor scenes are synthesized and represent ellipsoidal distributions in CIELAB chromaticity space having the same mean chromaticity but different chromatic orientations. We show that, for those distributions, color fidelity is best when the vector of the illuminant change (pointing from neutral to chromatic) is parallel to the major axis of the scene's chromatic distribution. For our selection of natural scenes, which generally have much broader chromatic distributions, we measure a higher color fidelity for the yellow and blue illuminants than for red and green. Scrambled versions of the natural images are also studied to exclude possible semantic effects. We quantitatively predict the average observer response (i.e., the illuminant probability) with four types of models, differing in the extent to which they incorporate information processing by the visual system. Results show different levels of performance for the models, and different levels for the multicolor scenes and the natural scenes. Overall, models based on the scene averaged color difference have the best performance. We discuss how color constancy algorithms may be improved by exploiting knowledge of the chromatic distribution of the visual scene.
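
    Several of the models compared above are built on the scene-averaged color difference between renderings under the test and reference illuminants. A short sketch of that quantity, using the CIE1976 ΔE*ab formula on hypothetical Lab images, is shown below; the study's actual stimuli and model details are of course richer than this.

```python
import numpy as np

def delta_e_ab(lab1: np.ndarray, lab2: np.ndarray) -> np.ndarray:
    """Pixelwise CIE1976 color difference between two Lab images of shape (H, W, 3)."""
    return np.sqrt(((lab1 - lab2) ** 2).sum(axis=-1))

def mean_scene_delta_e(lab_reference: np.ndarray, lab_test: np.ndarray) -> float:
    """Scene-averaged color difference between two renderings of the same scene."""
    return float(delta_e_ab(lab_reference, lab_test).mean())

# Hypothetical Lab renderings of one scene under the achromatic reference
# illuminant and under a yellowish test illuminant (a shift toward +b*).
rng = np.random.default_rng(5)
lab_reference = np.dstack([rng.uniform(20, 90, (32, 32)),   # L*
                           rng.normal(0, 15, (32, 32)),     # a*
                           rng.normal(0, 15, (32, 32))])    # b*
lab_yellow = lab_reference + np.array([0.0, -1.0, 8.0])
print(mean_scene_delta_e(lab_reference, lab_yellow))
```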

  19. Delineation of soil temperature regimes from HCMM data

    NASA Technical Reports Server (NTRS)

    Day, R. L.; Petersen, G. W. (Principal Investigator)

    1981-01-01

    Supplementary data, including photographs as well as topographic, geologic, and soil maps, were obtained and evaluated for ground truth purposes and control point selection. A study area (approximately 450 by 450 pixels) was subset from LANDSAT scene No. 2477-17142. Geometric corrections and scaling were performed. Initial enhancement techniques were applied to aid control point selection and soils interpretation. The SUBSET program was modified to read HCMM tapes, and HCMM data were reformatted so that they are compatible with the ORSER system. Initial NMAP products of the geometrically corrected and scaled raw data tapes (unregistered) of the study area were produced.

  20. Neotectonics of the San Andreas Fault system: Basin and range province juncture

    NASA Technical Reports Server (NTRS)

    Estes, J. E.; Crowell, J. C. (Principal Investigator)

    1981-01-01

    A thorough evaluation of all LANDSAT coverage of the study area (considering atmospheric clarity, seasonal aspects, specific swath location, and digital quality) resulted in the selection of two consecutive (continuously recorded) scenes for detailed analyses. The acquisition of HCMM and SEASAT imagery as well as high altitude U-2 uniform coverage is being considered. A bibliography of previous geological studies and methodological examples is estimated to be 70% complete.

  1. Neural representations of faces and body parts in macaque and human cortex: a comparative FMRI study.

    PubMed

    Pinsk, Mark A; Arcaro, Michael; Weiner, Kevin S; Kalkus, Jan F; Inati, Souheil J; Gross, Charles G; Kastner, Sabine

    2009-05-01

    Single-cell studies in the macaque have reported selective neural responses evoked by visual presentations of faces and bodies. Consistent with these findings, functional magnetic resonance imaging studies in humans and monkeys indicate that regions in temporal cortex respond preferentially to faces and bodies. However, it is not clear how these areas correspond across the two species. Here, we directly compared category-selective areas in macaques and humans using virtually identical techniques. In the macaque, several face- and body part-selective areas were found located along the superior temporal sulcus (STS) and middle temporal gyrus (MTG). In the human, similar to previous studies, face-selective areas were found in ventral occipital and temporal cortex and an additional face-selective area was found in the anterior temporal cortex. Face-selective areas were also found in lateral temporal cortex, including the previously reported posterior STS area. Body part-selective areas were identified in the human fusiform gyrus and lateral occipitotemporal cortex. In a first experiment, both monkey and human subjects were presented with pictures of faces, body parts, foods, scenes, and man-made objects, to examine the response profiles of each category-selective area to the five stimulus types. In a second experiment, face processing was examined by presenting upright and inverted faces. By comparing the responses and spatial relationships of the areas, we propose potential correspondences across species. Adjacent and overlapping areas in the macaque anterior STS/MTG responded strongly to both faces and body parts, similar to areas in the human fusiform gyrus and posterior STS. Furthermore, face-selective areas on the ventral bank of the STS/MTG discriminated both upright and inverted faces from objects, similar to areas in the human ventral temporal cortex. Overall, our findings demonstrate commonalities and differences in the wide-scale brain organization between the two species and provide an initial step toward establishing functionally homologous category-selective areas.

  2. Neural Representations of Faces and Body Parts in Macaque and Human Cortex: A Comparative fMRI Study

    PubMed Central

    Pinsk, Mark A.; Arcaro, Michael; Weiner, Kevin S.; Kalkus, Jan F.; Inati, Souheil J.; Gross, Charles G.; Kastner, Sabine

    2009-01-01

    Single-cell studies in the macaque have reported selective neural responses evoked by visual presentations of faces and bodies. Consistent with these findings, functional magnetic resonance imaging studies in humans and monkeys indicate that regions in temporal cortex respond preferentially to faces and bodies. However, it is not clear how these areas correspond across the two species. Here, we directly compared category-selective areas in macaques and humans using virtually identical techniques. In the macaque, several face- and body part–selective areas were found located along the superior temporal sulcus (STS) and middle temporal gyrus (MTG). In the human, similar to previous studies, face-selective areas were found in ventral occipital and temporal cortex and an additional face-selective area was found in the anterior temporal cortex. Face-selective areas were also found in lateral temporal cortex, including the previously reported posterior STS area. Body part–selective areas were identified in the human fusiform gyrus and lateral occipitotemporal cortex. In a first experiment, both monkey and human subjects were presented with pictures of faces, body parts, foods, scenes, and man-made objects, to examine the response profiles of each category-selective area to the five stimulus types. In a second experiment, face processing was examined by presenting upright and inverted faces. By comparing the responses and spatial relationships of the areas, we propose potential correspondences across species. Adjacent and overlapping areas in the macaque anterior STS/MTG responded strongly to both faces and body parts, similar to areas in the human fusiform gyrus and posterior STS. Furthermore, face-selective areas on the ventral bank of the STS/MTG discriminated both upright and inverted faces from objects, similar to areas in the human ventral temporal cortex. Overall, our findings demonstrate commonalities and differences in the wide-scale brain organization between the two species and provide an initial step toward establishing functionally homologous category-selective areas. PMID:19225169

  3. Do Object-Category Selective Regions in the Ventral Visual Stream Represent Perceived Distance Information?

    ERIC Educational Resources Information Center

    Amit, Elinor; Mehoudar, Eyal; Trope, Yaacov; Yovel, Galit

    2012-01-01

    It is well established that scenes and objects elicit a highly selective response in specific brain regions in the ventral visual cortex. An inherent difference between these categories that has not been explored yet is their perceived distance from the observer (i.e. scenes are distal whereas objects are proximal). The current study aimed to test…

  4. Manipulating the content of dynamic natural scenes to characterize response in human MT/MST.

    PubMed

    Durant, Szonya; Wall, Matthew B; Zanker, Johannes M

    2011-09-09

    Optic flow is one of the most important sources of information for enabling human navigation through the world. A striking finding from single-cell studies in monkeys is the rapid saturation of response of MT/MST areas with the density of optic flow type motion information. These results are reflected psychophysically in human perception in the saturation of motion aftereffects. We began by comparing responses to natural optic flow scenes in human visual brain areas to responses to the same scenes with inverted contrast (photo negative). This changes scene familiarity while preserving local motion signals. This manipulation had no effect; however, the response was only correlated with the density of local motion (calculated by a motion correlation model) in V1, not in MT/MST. To further investigate this, we manipulated the visible proportion of natural dynamic scenes and found that areas MT and MST did not increase in response over a 16-fold increase in the amount of information presented, i.e., response had saturated. This makes sense in light of the sparseness of motion information in natural scenes, suggesting that the human brain is well adapted to exploit a small amount of dynamic signal and extract information important for survival.

  5. Bulk silicon as photonic dynamic infrared scene projector

    NASA Astrophysics Data System (ADS)

    Malyutenko, V. K.; Bogatyrenko, V. V.; Malyutenko, O. Yu.

    2013-04-01

    A Si-based fast (frame rate >1 kHz), large-scale (scene area 100 cm2), broadband (3-12 μm), dynamic contactless infrared (IR) scene projector is demonstrated. An IR movie appears on the scene through the conversion of a visible scenario projected onto a scene kept at an elevated temperature. Light down-conversion results from free-carrier generation in the bulk Si scene, followed by modulation of its thermal emission output in the spectral band of free-carrier absorption. The experimental setup, an IR movie, figures of merit, and the process's advantages in comparison to other projector technologies are discussed.

  6. LARGE AREA LAND COVER MAPPING THROUGH SCENE-BASED CLASSIFICATION COMPOSITING

    EPA Science Inventory

    Over the past decade, a number of initiatives have been undertaken to create definitive national and global data sets consisting of precision corrected Landsat MSS and TM scenes. One important application of these data is the derivation of large area land cover products spanning ...

  7. Sea-Based Infrared Scene Interpretation by Background Type Classification and Coastal Region Detection for Small Target Detection

    PubMed Central

    Kim, Sungho

    2015-01-01

    Sea-based infrared search and track (IRST) is important for homeland security because it detects missiles and asymmetric boat threats. This paper proposes a novel scheme to interpret various infrared scenes by classifying the infrared background types and detecting the coastal regions in omni-directional images. A small infrared target detector selective for the background type or region should be deployed to maximize the detection rate and to minimize the number of false alarms. A spatial filter-based small target detector is suitable for identifying stationary incoming targets in remote sea areas with sky only. Many false detections can occur if an image sector contains a coastal region, due to ground clutter and the difficulty of finding true targets using the same spatial filter-based detector. A temporal filter-based detector was used to handle these problems. Therefore, scene type and coastal region information are critical to the success of IRST in real-world applications. In this paper, the infrared scene type was determined using the relationship between the sensor line-of-sight (LOS) and a horizontal line in an image. The proposed coastal region detector is activated if the background type of the probing sector is determined to be a coastal region. Coastal regions can be detected by fusing the region map and curve map. The experimental results on real infrared images highlight the feasibility of the proposed sea-based scene interpretation. In addition, the effects of the proposed scheme were analyzed further by applying region-adaptive small target detection. PMID:26404308
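
    Two ingredients of the scheme above can be sketched briefly: a spatial-filter detector for point-like targets in sky-only sectors, and a horizon-based decision about a sector's background type. The code below is a simplified illustration using a white top-hat filter and a crude horizon test; the kernel size, threshold, and sector logic are assumptions, not the detector or scene-typing rules specified in the paper.

```python
import numpy as np
from scipy import ndimage

def small_target_candidates(ir_frame: np.ndarray, kernel: int = 5,
                            k_sigma: float = 4.0) -> np.ndarray:
    """Spatial-filter detector for point-like targets against a smooth sea/sky
    background: a white top-hat suppresses the low-frequency background, then
    an adaptive threshold keeps bright residuals. Parameters are illustrative."""
    residual = ndimage.white_tophat(ir_frame, size=(kernel, kernel))
    threshold = residual.mean() + k_sigma * residual.std()
    return residual > threshold            # boolean candidate mask

def sector_is_sky_only(horizon_row: int, sector_bottom_row: int) -> bool:
    """Crude sector typing from the sensor line of sight: a sector lying
    entirely above the horizon line is treated as sky-only; otherwise it may
    contain sea or coast and a different detector should be used."""
    return sector_bottom_row < horizon_row

# Hypothetical sector of an omni-directional frame with one weak point target.
rng = np.random.default_rng(6)
frame = rng.normal(100.0, 2.0, (128, 128))
frame[40, 64] += 12.0                      # unresolved target a few counts above noise
mask = small_target_candidates(frame)
print(np.argwhere(mask)[:5])
print(sector_is_sky_only(horizon_row=80, sector_bottom_row=60))
```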

  8. A causal relationship between face-patch activity and face-detection behavior.

    PubMed

    Sadagopan, Srivatsun; Zarco, Wilbert; Freiwald, Winrich A

    2017-04-04

    The primate brain contains distinct areas densely populated by face-selective neurons. One of these, face-patch ML, contains neurons selective for contrast relationships between face parts. Such contrast-relationships can serve as powerful heuristics for face detection. However, it is unknown whether neurons with such selectivity actually support face-detection behavior. Here, we devised a naturalistic face-detection task and combined it with fMRI-guided pharmacological inactivation of ML to test whether ML is of critical importance for real-world face detection. We found that inactivation of ML impairs face detection. The effect was anatomically specific, as inactivation of areas outside ML did not affect face detection, and it was categorically specific, as inactivation of ML impaired face detection while sparing body and object detection. These results establish that ML function is crucial for detection of faces in natural scenes, performing a critical first step on which other face processing operations can build.

  9. Cortical Representations of Speech in a Multitalker Auditory Scene.

    PubMed

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects. Copyright © 2017 the authors 0270-6474/17/379189-08$15.00/0.

  10. Nature and origin of mineral coatings on volcanic rocks of the Black Mountain, Stonewall Mountain and Kane Springs Wash volcanic centers, southern Nevada

    NASA Technical Reports Server (NTRS)

    Taranik, J. V.; Noble, D. C.; Hsu, L. C.; Hutsinpiller, A.; Spatz, D.

    1986-01-01

    Surface coatings on volcanic rock assemblages that occur at selected Tertiary volcanic centers in southern Nevada were investigated using LANDSAT 5 Thematic Mapper imagery. Three project sites comprise the subject of this study: the Kane Springs Wash, Black Mountain, and Stonewall Mountain volcanic centers. The LANDSAT 5 TM work scenes selected for each area are outlined along with the local area geology. The nature and composition of surface coatings on the rock types within the subproject areas are determined, along with the origin of the coatings and their genetic link to host rocks; geologic interpretations are related to remote sensing units discriminated on the TM imagery. Image processing was done using an ESL VAX/IDIMS image processing system, supplemented by field sampling and observation. Aerial photographs were acquired to facilitate location on the ground and to aid stratigraphic differentiation.

  11. Selective attention during scene perception: evidence from negative priming.

    PubMed

    Gordon, Robert D

    2006-10-01

    In two experiments, we examined the role of semantic scene content in guiding attention during scene viewing. In each experiment, performance on a lexical decision task was measured following the brief presentation of a scene. The lexical decision stimulus named an object that was either present or not present in the scene. The results of Experiment 1 revealed no priming from inconsistent objects (whose identities conflicted with the scene in which they appeared), but negative priming from consistent objects. The results of Experiment 2 indicated that negative priming from consistent objects occurs only when inconsistent objects are present in the scenes. Together, the results suggest that observers are likely to attend to inconsistent objects, and that representations of consistent objects are suppressed in the presence of an inconsistent object. Furthermore, the data suggest that inconsistent objects draw attention because they are relatively difficult to identify in an inappropriate context.

  12. Age-related macular degeneration changes the processing of visual scenes in the brain.

    PubMed

    Ramanoël, Stephen; Chokron, Sylvie; Hera, Ruxandra; Kauffmann, Louise; Chiquet, Christophe; Krainik, Alexandre; Peyrin, Carole

    2018-01-01

    In age-related macular degeneration (AMD), the processing of fine details in a visual scene, based on high spatial frequency processing, is impaired, while the processing of global shapes, based on low spatial frequency processing, is relatively well preserved. The present fMRI study aimed to investigate the residual abilities and functional brain changes of spatial frequency processing in visual scenes in AMD patients. AMD patients and normally sighted elderly participants performed a categorization task using large black-and-white photographs of scenes (indoors vs. outdoors) filtered in low and high spatial frequencies, and nonfiltered. The study also explored the effect of luminance contrast on the processing of high spatial frequencies. The contrast across scenes was either unmodified or equalized using a root-mean-square contrast normalization in order to increase contrast in high-pass filtered scenes. Performance was lower for high-pass filtered scenes than for low-pass and nonfiltered scenes, for both AMD patients and controls. The deficit for processing high spatial frequencies was more pronounced in AMD patients than in controls and was associated with lower activity for patients than controls not only in the occipital areas dedicated to central and peripheral visual fields but also in a distant cerebral region specialized for scene perception, the parahippocampal place area. Increasing the contrast improved the processing of high spatial frequency content and spurred activation of the occipital cortex for AMD patients. These findings may lead to new perspectives for rehabilitation procedures for AMD patients.
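
    The two image manipulations described above, spatial-frequency filtering and root-mean-square (RMS) contrast equalization, can be sketched as follows. The Gaussian cutoffs and the target contrast are assumed values for illustration, not the parameters used in the study.

      # Sketch of low-/high-pass spatial frequency filtering and RMS contrast
      # normalization of a grayscale scene. Cutoffs and target contrast are
      # illustrative assumptions.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def low_pass(img, sigma=8.0):
          """Keep coarse structure (low spatial frequencies)."""
          return gaussian_filter(img, sigma)

      def high_pass(img, sigma=2.0):
          """Keep fine detail (high spatial frequencies) as the residual of a blur."""
          return img - gaussian_filter(img, sigma)

      def rms_normalize(img, target_rms=0.2):
          """Equalize RMS contrast by rescaling deviations from mean luminance."""
          centered = img - img.mean()
          rms = np.sqrt(np.mean(centered ** 2))
          return img.mean() + centered * (target_rms / rms)

      scene = np.random.rand(256, 256)              # stand-in grayscale scene in [0, 1]
      lsf_scene = low_pass(scene)
      hsf_scene = rms_normalize(high_pass(scene))   # boost contrast of the HSF scene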

  13. How emotion leads to selective memory: neuroimaging evidence.

    PubMed

    Waring, Jill D; Kensinger, Elizabeth A

    2011-06-01

    Often memory for emotionally arousing items is enhanced relative to neutral items within complex visual scenes, but this enhancement can come at the expense of memory for peripheral background information. This 'trade-off' effect has been elicited by a range of stimulus valence and arousal levels, yet the magnitude of the effect has been shown to vary with these factors. Using fMRI, this study investigated the neural mechanisms underlying this selective memory for emotional scenes. Further, we examined how these processes are affected by stimulus dimensions of arousal and valence. The trade-off effect in memory occurred for low to high arousal positive and negative scenes. There was a core emotional memory network associated with the trade-off among all the emotional scene types; however, there were additional regions that were uniquely associated with the trade-off for each individual scene type. These results suggest that there is a common network of regions associated with the emotional memory trade-off effect, but that valence and arousal also independently affect the neural activity underlying the effect. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. How emotion leads to selective memory: Neuroimaging evidence

    PubMed Central

    Waring, Jill D.; Kensinger, Elizabeth A.

    2011-01-01

    Often memory for emotionally arousing items is enhanced relative to neutral items within complex visual scenes, but this enhancement can come at the expense of memory for peripheral background information. This ‘trade-off’ effect has been elicited by a range of stimulus valence and arousal levels, yet the magnitude of the effect has been shown to vary with these factors. Using fMRI, this study investigated the neural mechanisms underlying this selective memory for emotional scenes. Further, we examined how these processes are affected by stimulus dimensions of arousal and valence. The trade-off effect in memory occurred for low to high arousal positive and negative scenes. There was a core emotional memory network associated with the trade-off among all the emotional scene types; however, there were additional regions that were uniquely associated with the trade-off for each individual scene type. These results suggest that there is a common network of regions associated with the emotional memory trade-off effect, but that valence and arousal also independently affect the neural activity underlying the effect. PMID:21414333

  15. 11. "NIGHT SCENE OF TEST AREA WITH TEST STAND 1A ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. "NIGHT SCENE OF TEST AREA WITH TEST STAND 1-A IN FOREGROUND. LIGHTS OF MAIN BASE, EDWARDS AFB, IN THE BACKGROUND. EDWARDS AFB." Test Area 1-120. Looking west past Test Stand 1-A to Test Area 1-115 and Test Area 1-110. Photo no. "12,401 57; G-AFFTC 12 DEC 57; TS 1-A Aux #1". - Edwards Air Force Base, Air Force Rocket Propulsion Laboratory, Leuhman Ridge near Highways 58 & 395, Boron, Kern County, CA

  16. A distributed code for color in natural scenes derived from center-surround filtered cone signals

    PubMed Central

    Kellner, Christian J.; Wachtler, Thomas

    2013-01-01

    In the retina of trichromatic primates, chromatic information is encoded in an opponent fashion and transmitted to the lateral geniculate nucleus (LGN) and visual cortex via parallel pathways. Chromatic selectivities of neurons in the LGN form two separate clusters, corresponding to two classes of cone opponency. In the visual cortex, however, the chromatic selectivities are more distributed, which is in accordance with a population code for color. Previous studies of cone signals in natural scenes typically found opponent codes with chromatic selectivities corresponding to two directions in color space. Here we investigated how the non-linear spatio-chromatic filtering in the retina influences the encoding of color signals. Cone signals were derived from hyper-spectral images of natural scenes and preprocessed by center-surround filtering and rectification, resulting in parallel ON and OFF channels. Independent Component Analysis (ICA) on these signals yielded a highly sparse code with basis functions that showed spatio-chromatic selectivities. In contrast to previous analyses of linear transformations of cone signals, chromatic selectivities were not restricted to two main chromatic axes, but were more continuously distributed in color space, similar to the population code of color in the early visual cortex. Our results indicate that spatio-chromatic processing in the retina leads to a more distributed and more efficient code for natural scenes. PMID:24098289
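
    A minimal sketch of the pipeline described above, center-surround filtering of cone planes, rectification into ON/OFF channels, and ICA on image patches, is given below. The random image, filter scales, patch size, and component count are illustrative stand-ins for the hyperspectral-derived cone signals used in the study.

      # Sketch: center-surround filtering, ON/OFF rectification, and ICA on patches.
      import numpy as np
      from scipy.ndimage import gaussian_filter
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(1)
      image = rng.random((128, 128, 3))          # stand-in L, M, S cone planes

      # Center-surround (difference-of-Gaussians) filtering of each cone plane.
      dog = gaussian_filter(image, (1, 1, 0)) - gaussian_filter(image, (4, 4, 0))

      # Half-wave rectification into parallel ON and OFF channels.
      on, off = np.maximum(dog, 0), np.maximum(-dog, 0)
      channels = np.concatenate([on, off], axis=2)        # H x W x 6

      # Extract patches and learn a sparse spatio-chromatic basis with ICA.
      patch = 8
      patches = np.array([
          channels[y:y + patch, x:x + patch].ravel()
          for y in range(0, 128 - patch, patch)
          for x in range(0, 128 - patch, patch)
      ])
      ica = FastICA(n_components=20, random_state=0, max_iter=500)
      ica.fit(patches)
      basis_functions = ica.mixing_    # columns: spatio-chromatic basis functions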

  17. Spatial and Temporal Patterns of Unburned Areas within Fire Perimeters in the Northwestern United States from 1984 to 2014

    NASA Astrophysics Data System (ADS)

    Meddens, A. J.; Kolden, C.; Lutz, J. A.; Abatzoglou, J. T.; Hudak, A. T.

    2016-12-01

    Recently, there has been concern about the increasing extent and severity of wildfires across the globe given rapid climate change. Areas that do not burn within fire perimeters can act as fire refugia, providing (1) protection from the detrimental effects of the fire, (2) seed sources, and (3) post-fire habitat on the landscape. However, recent studies have mainly focused on the higher end of the burn severity spectrum whereas the lower end of the burn severity spectrum has been largely ignored. We developed a spatially explicit database for 2,200 fires across the inland northwestern USA, delineating unburned areas within fire perimeters from 1984 to 2014. We used 1,600 Landsat scenes with one or two scenes before and one or two scenes after the fires to capture the unburned proportion of the fire. Subsequently, we characterized the spatial and temporal patterns of unburned areas and related the unburned proportion to interannual climate variability. The overall classification accuracy for detecting unburned locations was 89.2% using a 10-fold cross-validation classification tree approach in combination with 719 randomly located field plots. The unburned proportion ranged from 2% to 58% with an average of 19% for a selected subset of fires. We find that using both an immediate post-fire image and a one-year post-fire image improves classification accuracy of unburned islands over using just a single post-fire image. The spatial characteristics of the unburned islands differ between forested and non-forested regions with a larger amount of unburned area within non-forest. In addition, we show trends of unburned proportion related primarily to concurrent climatic drought conditions across the entire region. This database is important for subsequent analyses of fire refugia prioritization, vegetation recovery studies, ecosystem resilience, and forest management to facilitate unburned islands through fuel breaks, prescribed burning, and fire suppression strategies.
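
    The accuracy assessment reported above (a classification tree evaluated with 10-fold cross-validation against field plots) can be sketched as follows. The predictor variables, labels, and plot data are random stand-ins, not the study's dataset.

      # Sketch: classification tree with 10-fold cross-validation for labeling
      # plots as burned vs. unburned. Features and labels are random stand-ins.
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(2)
      n_plots = 719
      # Hypothetical spectral predictors, e.g., pre-/post-fire NBR and dNBR.
      X = rng.standard_normal((n_plots, 3))
      y = rng.integers(0, 2, n_plots)            # 1 = unburned island, 0 = burned

      tree = DecisionTreeClassifier(max_depth=6, random_state=0)
      scores = cross_val_score(tree, X, y, cv=10)
      print(f"10-fold cross-validated accuracy: {scores.mean():.1%}")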

  18. Parietal cortex integrates contextual and saliency signals during the encoding of natural scenes in working memory.

    PubMed

    Santangelo, Valerio; Di Francesco, Simona Arianna; Mastroberardino, Serena; Macaluso, Emiliano

    2015-12-01

    The brief presentation of a complex scene entails that only a few objects can be selected, processed in depth, and stored in memory. Both low-level sensory salience and high-level context-related factors (e.g., the conceptual match/mismatch between objects and scene context) contribute to this selection process, but how the interplay between these factors affects memory encoding is largely unexplored. Here, during fMRI we presented participants with pictures of everyday scenes. After a short retention interval, participants judged the position of a target object extracted from the initial scene. The target object could be either congruent or incongruent with the context of the scene, and could be located in a region of the image with maximal or minimal salience. Behaviourally, we found a reduced impact of saliency on visuospatial working memory performance when the target was out-of-context. Encoding-related fMRI results showed that context-congruent targets activated dorsoparietal regions, while context-incongruent targets de-activated the ventroparietal cortex. Saliency modulated activity both in dorsal and ventral regions, with larger context-related effects for salient targets. These findings demonstrate the joint contribution of knowledge-based and saliency-driven attention for memory encoding, highlighting a dissociation between dorsal and ventral parietal regions. © 2015 Wiley Periodicals, Inc.

  19. Neural codes of seeing architectural styles

    PubMed Central

    Choo, Heeyoung; Nasar, Jack L.; Nikrahei, Bardia; Walther, Dirk B.

    2017-01-01

    Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people’s visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding suggests that the neural correlates of the visual perception of architectural styles stem from style-specific complex visual structure beyond the simple features computed in V1. Surprisingly, the network of brain regions representing architectural styles included the fusiform face area (FFA) in addition to several scene-selective regions. Hierarchical clustering of error patterns further revealed that the FFA participated to a much larger extent in the neural encoding of architectural styles than entry-level scene categories. We conclude that the FFA is involved in fine-grained neural encoding of scenes at a subordinate-level, in our case, architectural styles of buildings. This study for the first time shows how the human visual system encodes visual aspects of architecture, one of the predominant and longest-lasting artefacts of human culture. PMID:28071765

  20. Neural codes of seeing architectural styles.

    PubMed

    Choo, Heeyoung; Nasar, Jack L; Nikrahei, Bardia; Walther, Dirk B

    2017-01-10

    Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people's visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding suggests that the neural correlates of the visual perception of architectural styles stem from style-specific complex visual structure beyond the simple features computed in V1. Surprisingly, the network of brain regions representing architectural styles included the fusiform face area (FFA) in addition to several scene-selective regions. Hierarchical clustering of error patterns further revealed that the FFA participated to a much larger extent in the neural encoding of architectural styles than entry-level scene categories. We conclude that the FFA is involved in fine-grained neural encoding of scenes at a subordinate-level, in our case, architectural styles of buildings. This study for the first time shows how the human visual system encodes visual aspects of architecture, one of the predominant and longest-lasting artefacts of human culture.

  1. Emergent selectivity for task-relevant stimuli in higher-order auditory cortex

    PubMed Central

    Atiani, Serin; David, Stephen V.; Elgueda, Diego; Locastro, Michael; Radtke-Schuller, Susanne; Shamma, Shihab A.; Fritz, Jonathan B.

    2014-01-01

    A variety of attention-related effects have been demonstrated in primary auditory cortex (A1). However, an understanding of the functional role of higher auditory cortical areas in guiding attention to acoustic stimuli has been elusive. We recorded from neurons in two tonotopic cortical belt areas in the dorsal posterior ectosylvian gyrus (dPEG) of ferrets trained on a simple auditory discrimination task. Neurons in dPEG showed similar basic auditory tuning properties to A1, but during behavior we observed marked differences between these areas. In the belt areas, changes in neuronal firing rate and response dynamics greatly enhanced responses to target stimuli relative to distractors, allowing for greater attentional selection during active listening. Consistent with existing anatomical evidence, the pattern of sensory tuning and behavioral modulation in auditory belt cortex links the spectro-temporal representation of the whole acoustic scene in A1 to a more abstracted representation of task-relevant stimuli observed in frontal cortex. PMID:24742467

  2. Behavioral biases when viewing multiplexed scenes: scene structure and frames of reference for inspection

    PubMed Central

    Stainer, Matthew J.; Scott-Brown, Kenneth C.; Tatler, Benjamin W.

    2013-01-01

    Where people look when viewing a scene has been a much explored avenue of vision research (e.g., see Tatler, 2009). Current understanding of eye guidance suggests that a combination of high and low-level factors influence fixation selection (e.g., Torralba et al., 2006), but that there are also strong biases toward the center of an image (Tatler, 2007). However, situations where we view multiplexed scenes are becoming increasingly common, and it is unclear how visual inspection might be arranged when content lacks normal semantic or spatial structure. Here we use the central bias to examine how gaze behavior is organized in scenes that are presented in their normal format, or disrupted by scrambling the quadrants and separating them by space. In Experiment 1, scrambling scenes had the strongest influence on gaze allocation. Observers were highly biased by the quadrant center, although physical space did not enhance this bias. However, the center of the display still contributed to fixation selection above chance, and was most influential early in scene viewing. When the top left quadrant was held constant across all conditions in Experiment 2, fixation behavior was significantly influenced by the overall arrangement of the display, with fixations being biased toward the quadrant center when the other three quadrants were scrambled (despite the visual information in this quadrant being identical in all conditions). When scenes are scrambled into four quadrants and semantic contiguity is disrupted, observers no longer appear to view the content as a single scene (despite it consisting of the same visual information overall), but rather anchor visual inspection around the four separate “sub-scenes.” Moreover, the frame of reference that observers use when viewing the multiplex seems to change across viewing time: from an early bias toward the display center to a later bias toward quadrant centers. PMID:24069008

  3. Multispectral Photography

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The Model II Multispectral Camera is an advanced aerial camera that provides optimum enhancement of a scene by recording spectral signatures of ground objects only in narrow, preselected bands of the electromagnetic spectrum. Its photos have applications in such areas as agriculture, forestry, water pollution investigations, soil analysis, geologic exploration, water depth studies and camouflage detection. The target scene is simultaneously photographed in four separate spectral bands. Using a multispectral viewer, such as their Model 75, Spectral Data creates a color image from the black-and-white positives taken by the camera. With this optical image analysis unit, all four bands are superimposed in accurate registration and illuminated with combinations of blue, green, red, and white light. The best color combination for displaying the target object is selected and printed. Spectral Data Corporation produces several types of remote sensing equipment and also provides aerial survey, image processing and analysis, and a number of other remote sensing services.

  4. Two Distinct Scene-Processing Networks Connecting Vision and Memory.

    PubMed

    Baldassano, Christopher; Esteva, Andre; Fei-Fei, Li; Beck, Diane M

    2016-01-01

    A number of regions in the human brain are known to be involved in processing natural scenes, but the field has lacked a unifying framework for understanding how these different regions are organized and interact. We provide evidence from functional connectivity and meta-analyses for a new organizational principle, in which scene processing relies upon two distinct networks that split the classically defined parahippocampal place area (PPA). The first network of strongly connected regions consists of the occipital place area/transverse occipital sulcus and posterior PPA, which contain retinotopic maps and are not strongly coupled to the hippocampus at rest. The second network consists of the caudal inferior parietal lobule, retrosplenial complex, and anterior PPA, which connect to the hippocampus (especially anterior hippocampus), and are implicated in both visual and nonvisual tasks, including episodic memory and navigation. We propose that these two distinct networks capture the primary functional division among scene-processing regions, between those that process visual features from the current view of a scene and those that connect information from a current scene view with a much broader temporal and spatial context. This new framework for understanding the neural substrates of scene-processing bridges results from many lines of research, and makes specific functional predictions.

  5. POLYSITE - An interactive package for the selection and refinement of Landsat image training sites

    NASA Technical Reports Server (NTRS)

    Mack, Marilyn J. P.

    1986-01-01

    A versatile multifunction package, POLYSITE, developed for Goddard's Land Analysis System, is described which simplifies the process of interactively selecting and correcting the sites used to study Landsat TM and MSS images. Image switching between the zoomed and nonzoomed image, color and shape cursor change and location display, and bit plane erase or color change, are global functions which are active at all times. Local functions possibly include manipulation of intensive study areas, new site definition, mensuration, and new image copying. The program is illustrated with the example of a full TM master scene of metropolitan Washington, DC.

  6. On Clear-Cut Mapping with Time-Series of Sentinel-1 Data in Boreal Forest

    NASA Astrophysics Data System (ADS)

    Rauste, Yrjo; Antropov, Oleg; Mutanen, Teemu; Hame, Tuomas

    2016-08-01

    Clear-cutting is the most drastic and widespread change that affects the hydrological and carbon-balance properties of forested land in the Boreal forest zone. A time-series of 36 Sentinel-1 images was used to study the potential for mapping clear-cut areas. The time series covered one and a half years (2014-10-09 ... 2016-03-20) in a 200-km-by-200-km study site in Finland. The Sentinel-1 images were acquired in Interferometric Wide-swath (IW), dual-polarized mode (VV+VH). All scenes were acquired in the same orbit configuration. Amplitude images (GRDH product) were used. The Sentinel-1 scenes were ortho-rectified with in-house software using a digital elevation model (DEM) produced by the Land Survey of Finland. The Sentinel-1 amplitude data were radiometrically corrected for topographic effects. The temporal behaviour of C-band backscatter was studied for areas representing 1) areas clear-cut during the acquisition of the Sentinel-1 time-series, 2) areas remaining forest during the acquisition of the Sentinel-1 time-series, and 3) areas that had been clear-cut before the acquisition of the Sentinel-1 time-series. The following observations were made: 1) the separation between clear-cut areas and forest was generally low; 2) under certain acquisition conditions, clear-cut areas were well separable from forest; 3) the good scenes were acquired either in winter during thick snow cover or in late summer towards the end of a warm and dry period; 4) the separation between clear-cut and forest was higher in VH-polarized data than in VV-polarized data; and 5) the separation between clear-cut and forest was higher in the winter/snow scenes than in the dry summer scenes.
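
    The per-scene separability check described above can be sketched as a simple comparison of mean VH backscatter (in dB) between clear-cut and forest reference areas for each acquisition. All backscatter values and the threshold below are synthetic stand-ins, not results from the study.

      # Sketch: per-scene forest vs. clear-cut separability from VH backscatter.
      import numpy as np

      rng = np.random.default_rng(3)
      n_scenes = 36
      forest_vh = -12 + rng.normal(0, 1.0, (n_scenes, 500))    # dB, forest pixels
      clearcut_vh = -14 + rng.normal(0, 1.5, (n_scenes, 500))  # dB, clear-cut pixels

      for i in range(n_scenes):
          # Simple separability score: difference of means over pooled std. dev.
          diff = forest_vh[i].mean() - clearcut_vh[i].mean()
          pooled = np.sqrt(0.5 * (forest_vh[i].var() + clearcut_vh[i].var()))
          if diff / pooled > 1.0:                               # ad hoc threshold
              print(f"scene {i:2d}: good forest/clear-cut separation "
                    f"({diff:.1f} dB mean difference)")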

  7. Guest Editor's introduction: Special issue on distributed virtual environments

    NASA Astrophysics Data System (ADS)

    Lea, Rodger

    1998-09-01

    Distributed virtual environments (DVEs) combine technology from 3D graphics, virtual reality and distributed systems to provide an interactive 3D scene that supports multiple participants. Each participant has a representation in the scene, often known as an avatar, and is free to navigate through the scene and interact with both the scene and other viewers of the scene. Changes to the scene, for example, position changes of one avatar as the associated viewer navigates through the scene, or changes to objects in the scene via manipulation, are propagated in real time to all viewers. This ensures that all viewers of a shared scene `see' the same representation of it, allowing sensible reasoning about the scene. Early work on such environments was restricted to their use in simulation, in particular in military simulation. However, over recent years a number of interesting and potentially far-reaching attempts have been made to exploit the technology for a range of other uses, including: Social spaces. Such spaces can be seen as logical extensions of the familiar text chat space. In 3D social spaces avatars, representing participants, can meet in shared 3D scenes and in addition to text chat can use visual cues and even in some cases spatial audio. Collaborative working. A number of recent projects have attempted to explore the use of DVEs to facilitate computer-supported collaborative working (CSCW), where the 3D space provides a context and work space for collaboration. Gaming. The shared 3D space is already familiar, albeit in a constrained manner, to the gaming community. DVEs are a logical superset of existing 3D games and can provide a rich framework for advanced gaming applications. e-commerce. The ability to navigate through a virtual shopping mall and to look at, and even interact with, 3D representations of articles has appealed to the e-commerce community as it searches for the best method of presenting merchandise to electronic consumers. The technology needed to support these systems crosses a number of disciplines in computer science. These include, but are certainly not limited to, real-time graphics for the accurate and realistic representation of scenes, group communications for the efficient update of shared consistent scene data, user interface modelling to exploit the use of the 3D representation and multimedia systems technology for the delivery of streamed graphics and audio-visual data into the shared scene. It is this intersection of technologies and the overriding need to provide visual realism that places such high demands on the underlying distributed systems infrastructure and makes DVEs such fertile ground for distributed systems research. Two examples serve to show how DVE developers have exploited the unique aspects of their domain. Communications. The usual tension between latency and throughput is particularly noticeable within DVEs. To ensure the timely update of multiple viewers of a particular scene requires that such updates be propagated quickly. However, the sheer volume of changes to any one scene calls for techniques that minimize the number of distinct updates that are sent to the network. Several techniques have been used to address this tension; these include the use of multicast communications, and in particular multicast in wide-area networks to reduce actual message traffic. Multicast has been combined with general group communications to partition updates to related objects or users of a scene. 
A less traditional approach has been the use of dead reckoning whereby a client application that visualizes the scene calculates position updates by extrapolating movement based on previous information. This allows the system to reduce the number of communications needed to update objects that move in a stable manner within the scene. Scaling. DVEs, especially those used for social spaces, are required to support large numbers of simultaneous users in potentially large shared scenes. The desire for scalability has driven different architectural designs, for example, the use of fully distributed architectures which scale well but often suffer performance costs versus centralized and hierarchical architectures in which the inverse is true. However, DVEs have also exploited the spatial nature of their domain to address scalability and have pioneered techniques that exploit the semantics of the shared space to reduce data updates and so allow greater scalability. Several of the systems reported in this special issue apply a notion of area of interest to partition the scene and so reduce the participants in any data updates. The specification of area of interest differs between systems. One approach has been to exploit a geographical notion, i.e. a regular portion of a scene, or a semantic unit, such as a room or building. Another approach has been to define the area of interest as a spatial area associated with an avatar in the scene. The five papers in this special issue have been chosen to highlight the distributed systems aspects of the DVE domain. The first paper, on the DIVE system, described by Emmanuel Frécon and Mårten Stenius explores the use of multicast and group communication in a fully peer-to-peer architecture. The developers of DIVE have focused on its use as the basis for collaborative work environments and have explored the issues associated with maintaining and updating large complicated scenes. The second paper, by Hiroaki Harada et al, describes the AGORA system, a DVE concentrating on social spaces and employing a novel communication technique that incorporates position update and vector information to support dead reckoning. The paper by Simon Powers et al explores the application of DVEs to the gaming domain. They propose a novel architecture that separates out higher-level game semantics - the conceptual model - from the lower-level scene attributes - the dynamic model, both running on servers, from the actual visual representation - the visual model - running on the client. They claim a number of benefits from this approach, including better predictability and consistency. Wolfgang Broll discusses the SmallView system which is an attempt to provide a toolkit for DVEs. One of the key features of SmallView is a sophisticated application level protocol, DWTP, that provides support for a variety of communication models. The final paper, by Chris Greenhalgh, discusses the MASSIVE system which has been used to explore the notion of awareness in the 3D space via the concept of `auras'. These auras define an area of interest for users and support a mapping between what a user is aware of, and what data update rate the communications infrastructure can support. We hope that this selection of papers will serve to provide a clear introduction to the distributed system issues faced by the DVE community and the approaches they have taken in solving them. 
Finally, we wish to thank Hubert Le Van Gong for his tireless efforts in pulling together all these papers and both the referees and the authors of the papers for the time and effort in ensuring that their contributions teased out the interesting distributed systems issues for this special issue.
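
    The dead-reckoning technique mentioned in this record can be illustrated with a minimal sketch: a client extrapolates an avatar's position from its last reported state, and an update is transmitted only when the extrapolation error exceeds a threshold. The state layout and threshold are illustrative assumptions, not taken from any of the systems discussed above.

      # Sketch of sender-side dead reckoning for avatar position updates.
      from dataclasses import dataclass

      @dataclass
      class AvatarState:
          x: float
          y: float
          vx: float        # last reported velocity
          vy: float
          t: float         # timestamp of last authoritative update

      def predict(state: AvatarState, now: float):
          """Extrapolate position linearly from the last authoritative update."""
          dt = now - state.t
          return state.x + state.vx * dt, state.y + state.vy * dt

      def needs_update(state, true_x, true_y, now, threshold=0.5):
          """Transmit a new update only when the prediction drifts too far."""
          px, py = predict(state, now)
          return (px - true_x) ** 2 + (py - true_y) ** 2 > threshold ** 2

      state = AvatarState(x=0.0, y=0.0, vx=1.0, vy=0.0, t=0.0)
      print(predict(state, now=2.0))                               # (2.0, 0.0)
      print(needs_update(state, true_x=2.0, true_y=0.7, now=2.0))  # True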

  8. Methods of extending signatures and training without ground information. [data processing, pattern recognition

    NASA Technical Reports Server (NTRS)

    Henderson, R. G.; Thomas, G. S.; Nalepka, R. F.

    1975-01-01

    Methods of performing signature extension using LANDSAT-1 data are explored. The emphasis is on improving the performance and cost-effectiveness of large-area wheat surveys. Two methods, ASC and MASC, were developed. Two further methods, Ratio and RADIFF, previously used with aircraft data, were adapted to and tested on LANDSAT-1 data. An investigation into the sources and nature of between-scene data variations was included. Initial investigations into the selection of training fields without in situ ground truth were undertaken.

  9. Evaluation of three techniques for classifying urban land cover patterns using LANDSAT MSS data. [New Orleans, Louisiana

    NASA Technical Reports Server (NTRS)

    Baumann, P. R. (Principal Investigator)

    1979-01-01

    Three computer quantitative techniques for determining urban land cover patterns are evaluated. The techniques examined deal with the selection of training samples by an automated process, the overlaying of two scenes from different seasons of the year, and the use of individual pixels as training points. Evaluation is based on the number and type of land cover classes generated and the marks obtained from an accuracy test. New Orleans, Louisiana and its environs form the study area.

  10. USING CLASSIFICATION CONSISTENCY IN INTER-SCENE OVERLAP AREAS TO MODEL SPATIAL VARIATIONS IN LAND-COVER ACCURACY OVER LARGE GEOGRAPHIC REGIONS

    EPA Science Inventory

    During the last decade, a number of initiatives have been undertaken to create systematic national and global data sets of processed satellite imagery. An important application of these data is the derivation of large area (i.e. multi-scene) land cover products. Such products, ho...

  11. Surface reflectance factor retrieval from Thematic Mapper data

    NASA Technical Reports Server (NTRS)

    Holm, Ronald G.; Jackson, Ray D.; Yuan, Benfan; Moran, M. Susan; Slater, Philip N.

    1989-01-01

    Based on the absolute radiometric calibration of the TM and the use of a radiative transfer program for atmospheric correction, ground reflectances were retrieved for several fields of crops and bare soil in TM bands 1-4 for six TM scenes acquired over a 12-month period. These reflectances were compared to those measured using ground-based and low-altitude, aircraft-mounted radiometers. When, for four TM acquisitions, the comparison was made between areas that had been carefully selected for their high uniformity, the reflectance factors agreed to within ±0.01 over the reflectance range 0.02-0.55. When the comparison was made for two of the above acquisitions and two others on different dates, for larger areas not carefully selected to be of uniform reflectance, the reflectance factors agreed to within ±0.02 (1σ RMS) over the same reflectance range.

  12. Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers

    NASA Astrophysics Data System (ADS)

    Weinmann, Martin; Jutzi, Boris; Hinz, Stefan; Mallet, Clément

    2015-07-01

    3D scene analysis in terms of automatically assigning 3D points a respective semantic label has become a topic of great importance in photogrammetry, remote sensing, computer vision and robotics. In this paper, we address the issue of how to increase the distinctiveness of geometric features and select the most relevant ones among these for 3D scene analysis. We present a new, fully automated and versatile framework composed of four components: (i) neighborhood selection, (ii) feature extraction, (iii) feature selection and (iv) classification. For each component, we consider a variety of approaches which allow applicability in terms of simplicity, efficiency and reproducibility, so that end-users can easily apply the different components and do not require expert knowledge in the respective domains. In a detailed evaluation involving 7 neighborhood definitions, 21 geometric features, 7 approaches for feature selection, 10 classifiers and 2 benchmark datasets, we demonstrate that the selection of optimal neighborhoods for individual 3D points significantly improves the results of 3D scene analysis. Additionally, we show that the selection of adequate feature subsets may even further increase the quality of the derived results while significantly reducing both processing time and memory consumption.
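
    The eigenvalue-based geometric features commonly used in such frameworks can be sketched as follows: for each point, the eigenvalues of the covariance of its k nearest neighbors yield linearity, planarity, and sphericity descriptors. The fixed neighborhood size and the random point cloud are assumptions for illustration; the paper itself selects optimal neighborhoods per point.

      # Sketch: per-point geometric features from local covariance eigenvalues.
      import numpy as np
      from scipy.spatial import cKDTree

      def geometric_features(points, k=20):
          """Return (linearity, planarity, sphericity) per point from k-NN."""
          tree = cKDTree(points)
          _, idx = tree.query(points, k=k)
          feats = np.empty((len(points), 3))
          for i, neighbors in enumerate(idx):
              cov = np.cov(points[neighbors].T)
              lam = np.sort(np.linalg.eigvalsh(cov))[::-1]      # l1 >= l2 >= l3
              lam = np.maximum(lam, 1e-12)
              feats[i] = ((lam[0] - lam[1]) / lam[0],           # linearity
                          (lam[1] - lam[2]) / lam[0],           # planarity
                          lam[2] / lam[0])                      # sphericity
          return feats

      cloud = np.random.rand(1000, 3)          # stand-in 3D point cloud
      features = geometric_features(cloud)     # input to feature selection / classifier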

  13. Cerebral Correlates of Emotional and Action Appraisals During Visual Processing of Emotional Scenes Depending on Spatial Frequency: A Pilot Study.

    PubMed

    Campagne, Aurélie; Fradcourt, Benoit; Pichat, Cédric; Baciu, Monica; Kauffmann, Louise; Peyrin, Carole

    2016-01-01

    Visual processing of emotional stimuli critically depends on the type of cognitive appraisal involved. The present fMRI pilot study aimed to investigate the cerebral correlates involved in the visual processing of emotional scenes in two tasks, one emotional, based on the appraisal of personal emotional experience, and the other motivational, based on the appraisal of the tendency to action. Given that the use of spatial frequency information is relatively flexible during the visual processing of emotional stimuli depending on the task's demands, we also explored the effect of the type of spatial frequency in visual stimuli in each task by using emotional scenes filtered in low spatial frequency (LSF) and high spatial frequencies (HSF). Activation was observed in the visual areas of the fusiform gyrus for all emotional scenes in both tasks, and in the amygdala for unpleasant scenes only. The motivational task induced additional activation in frontal motor-related areas (e.g. premotor cortex, SMA) and parietal regions (e.g. superior and inferior parietal lobules). Parietal regions were recruited particularly during the motivational appraisal of approach in response to pleasant scenes. These frontal and parietal activations, respectively, suggest that motor and navigation processes play a specific role in the identification of the tendency to action in the motivational task. Furthermore, activity observed in the motivational task, in response to both pleasant and unpleasant scenes, was significantly greater for HSF than for LSF scenes, suggesting that the tendency to action is driven mainly by the detailed information contained in scenes. Results for the emotional task suggest that spatial frequencies play only a small role in the evaluation of unpleasant and pleasant emotions. Our preliminary study revealed a partial distinction between visual processing of emotional scenes during identification of the tendency to action, and during identification of personal emotional experiences. It also illustrates flexible use of the spatial frequencies contained in scenes depending on their emotional valence and on task demands.

  14. Low-Altitude AVIRIS Data for Mapping Land Cover in Yellowstone National Park: Use of Isodata Clustering Techniques

    NASA Technical Reports Server (NTRS)

    Spruce, Joseph P.

    2001-01-01

    Northeast Yellowstone National Park (YNP) has a diversity of forest, range, and wetland cover types. Several remote sensing studies have recently been done in this area, including the NASA Earth Observations Commercial Applications Program (EOCAP) hyperspectral project conducted by Yellowstone Ecosystems Studies (YES) on the use of hyperspectral imaging for assessing riparian and in-stream habitats. In 1999, YES and NASA's Commercial Remote Sensing Program Office began collaborative study of this area, assessing the potential of synergistic use of hyperspectral, synthetic aperture radar (SAR), and multiband thermal data for mapping forest, range, and wetland land cover. Since the beginning, a quality 'reference' land cover map has been desired as a tool for developing and validating other land cover maps produced during the project. This paper recounts an effort to produce such a reference land cover map using low-altitude Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data and unsupervised classification techniques. The main objective of this study is to assess ISODATA classification for mapping land cover in Northeast YNP using select bands of low-altitude AVIRIS data. A secondary, more long-term objective is to assess the potential for improving ISODATA-based classification of land cover through use of principal components analysis and minimum noise fraction (MNF) techniques. This paper will primarily report on work regarding the primary research objective. This study focuses on an AVIRIS cube acquired on July 23, 1999, by the confluence of Soda Butte Creek with the Lamar River. Range and wetland habitats dominate the image with forested habitats being a comparatively minor component of the scene. The scene generally tracks from southwest to northeast. Most of the scene is valley bottom with some lower side slopes occurring on the western portion. Elevations within the AVIRIS scene range from approximately 1998 to 2165 m above sea level, based on US Geological Survey (USGS) 30-m digital elevation model (DEM) data. Despain and the National Park Service (NPS) provide additional description of the study area.
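
    The unsupervised classification step described above can be sketched with plain k-means standing in for ISODATA (true ISODATA adds cluster split/merge heuristics on top of this). The synthetic image cube, band count, and class count are assumptions for illustration.

      # Sketch: unsupervised land-cover clustering of selected image bands.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(4)
      rows, cols, bands = 200, 200, 10         # stand-in for selected AVIRIS bands
      cube = rng.random((rows, cols, bands))

      pixels = cube.reshape(-1, bands)         # one spectrum per pixel
      labels = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(pixels)
      class_map = labels.reshape(rows, cols)   # cluster label per pixel

      # An analyst would then assign each spectral cluster to a land-cover class.
      print(np.bincount(labels))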

  15. Investigation of scene identification algorithms for radiation budget measurements

    NASA Technical Reports Server (NTRS)

    Diekmann, F. J.

    1986-01-01

    The computation of Earth radiation budget from satellite measurements requires the identification of the scene in order to select spectral factors and bidirectional models. A scene identification procedure is developed for AVHRR SW and LW data by using two radiative transfer models. The AVHRR GAC pixels are then attached to corresponding ERBE pixels and the results are sorted into scene identification probability matrices. These scene intercomparisons show that, at high cloud amounts over ocean, the ERBE results generally tend to underestimate cloudiness relative to the AVHRR results, e.g., mostly cloudy instead of overcast, or partly cloudy instead of mostly cloudy. Reasons for this are explained. Preliminary estimates of the errors in exitances due to scene misidentification demonstrate a high dependency on the probability matrices. While the longwave error can generally be neglected, the shortwave deviations have reached maximum values of more than 12% of the respective exitances.

  16. Semantic guidance of eye movements in real-world scenes

    PubMed Central

    Hwang, Alex D.; Wang, Hsueh-Cheng; Pomplun, Marc

    2011-01-01

    The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying Latent Semantic Analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects’ gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects’ eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. PMID:21426914
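
    The semantic-saliency idea described above can be sketched by embedding object labels with an LSA-style decomposition of a label-by-scene count matrix and scoring each object by its cosine similarity to a target label. The toy labels and counts below are illustrative, not drawn from the LabelMe database.

      # Sketch: LSA-style label embedding and semantic similarity to a target.
      import numpy as np
      from sklearn.decomposition import TruncatedSVD
      from sklearn.metrics.pairwise import cosine_similarity

      labels = ["car", "road", "tree", "sidewalk", "bicycle", "building"]
      # Label-by-scene co-occurrence counts (rows: labels, columns: scenes).
      counts = np.array([
          [3, 0, 2, 1, 0],   # car
          [3, 1, 2, 1, 0],   # road
          [1, 4, 1, 0, 3],   # tree
          [2, 0, 1, 2, 0],   # sidewalk
          [1, 0, 1, 1, 0],   # bicycle
          [0, 1, 0, 3, 1],   # building
      ], dtype=float)

      embedding = TruncatedSVD(n_components=3, random_state=0).fit_transform(counts)
      target = labels.index("car")
      similarity = cosine_similarity(embedding[target:target + 1], embedding)[0]

      # Higher values mark objects more semantically related to the target,
      # which would be mapped onto object regions to form a saliency map.
      for name, score in zip(labels, similarity):
          print(f"{name:10s} {score:+.2f}")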

  17. Semantic guidance of eye movements in real-world scenes.

    PubMed

    Hwang, Alex D; Wang, Hsueh-Cheng; Pomplun, Marc

    2011-05-25

    The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying latent semantic analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects' gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects' eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Remotely Sensed Thermal Anomalies in Western Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This layer contains the areas identified as having anomalous surface temperature from Landsat satellite imagery in Western Colorado. Data were obtained for two different dates. The digital numbers of each Landsat scene were converted to radiance, and the temperature was calculated in kelvin and then converted to degrees Celsius for each land cover type using the emissivity of that cover type. This process was repeated for each of the land cover types (open water, barren, deciduous forest, evergreen forest, mixed forest, shrub/scrub, grassland/herbaceous, pasture/hay, and cultivated crops). The temperature of each pixel within each scene was calculated using the thermal band. In order to calculate the temperature, an average emissivity value was used for each land cover type within each scene. The NLCD 2001 land cover classification raster data for the zones that cover Colorado were downloaded from the USGS site and used to identify the land cover types within each scene. Areas with a temperature residual greater than 2σ, and areas with a residual of 1σ to 2σ, were considered Landsat-modeled very warm and warm surface exposures (thermal anomalies), respectively.
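
    The processing chain described above can be sketched as follows: thermal-band digital numbers are converted to radiance, radiance to brightness temperature, an emissivity adjustment is applied, and pixels are flagged where the temperature residual exceeds 1-2 standard deviations. The gain/offset and K1/K2 values are nominal Landsat 5 TM band-6 calibration constants, and the simple gray-body emissivity adjustment is an illustrative placeholder rather than the exact method used for this layer.

      # Sketch: Landsat TM band-6 DN -> radiance -> temperature -> anomaly flags.
      import numpy as np

      GAIN, OFFSET = 0.055376, 1.18        # nominal W/(m^2 sr um) per DN
      K1, K2 = 607.76, 1260.56             # nominal TM band-6 calibration constants

      def dn_to_celsius(dn, emissivity=0.97):
          radiance = GAIN * dn + OFFSET
          t_bright = K2 / np.log(K1 / radiance + 1.0)     # at-sensor temperature, K
          t_surface = t_bright / emissivity ** 0.25       # crude gray-body adjustment
          return t_surface - 273.15                       # degrees Celsius

      dn = np.random.randint(100, 200, size=(500, 500))   # stand-in band-6 DNs
      temp_c = dn_to_celsius(dn)

      residual = temp_c - temp_c.mean()
      sigma = residual.std()
      very_warm = residual > 2 * sigma                     # "very warm" anomaly
      warm = (residual > sigma) & (residual <= 2 * sigma)  # "warm" anomaly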

  19. Hybrid-mode read-in integrated circuit for infrared scene projectors

    NASA Astrophysics Data System (ADS)

    Cho, Min Ji; Shin, Uisub; Lee, Hee Chul

    2017-05-01

    The infrared scene projector (IRSP) is a tool for evaluating infrared sensors by producing infrared images. Because sensor testing with IRSPs is safer than field testing, the usefulness of IRSPs is widely recognized at present. The important performance characteristics of IRSPs are the thermal resolution and the thermal dynamic range. However, due to an existing trade-off between these requirements, it is often difficult to find a workable balance between them. The conventional read-in integrated circuit (RIIC) can be classified into two types: voltage-mode and current-mode types. An IR emitter driven by a voltage-mode RIIC offers a fine thermal resolution. On the other hand, an emitter driven by the current-mode RIIC has the advantage of a wide thermal dynamic range. In order to provide various scenes, i.e., from high-resolution scenes to high-temperature scenes, both of the aforementioned advantages are required. In this paper, a hybrid-mode RIIC which is selectively operated in two modes is proposed. The mode-selective characteristic of the proposed RIIC allows users to generate high-fidelity scenes regardless of the scene content. A prototype of the hybrid-mode RIIC was fabricated using a 0.18-μm 1-poly 6-metal CMOS process. The thermal range and the thermal resolution of the IR emitter driven by the proposed circuit were calculated based on measured data. The estimated thermal dynamic range of the current mode was from 261 K to 790 K, and the estimated thermal resolution of the voltage mode at 300 K was 23 mK with a 12-bit gray-scale resolution.

  20. A statistical model for radar images of agricultural scenes

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Shanmugan, K. S.; Holtzman, J. C.; Stiles, J. A.

    1982-01-01

    The presently derived and validated statistical model for radar images containing many different homogeneous fields predicts the probability density functions of radar images of entire agricultural scenes, thereby allowing histograms of large scenes composed of a variety of crops to be described. Seasat-A SAR images of agricultural scenes are accurately predicted by the model on the basis of three assumptions: each field has the same SNR, all target classes cover approximately the same area, and the true reflectivity characterizing each individual target class is a uniformly distributed random variable. The model is expected to be useful in the design of data processing algorithms and for scene analysis using radar images.
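
    A small simulation consistent with the model assumptions listed above (equal number of looks per field, target classes of roughly equal area, and uniformly distributed class reflectivities) is sketched below; the scene histogram then emerges as a mixture of gamma-distributed multi-look speckle over fields. The class count, look number, and reflectivity range are assumptions for illustration, not parameters from the paper.

      # Sketch: simulate a multi-field radar scene and its intensity histogram.
      import numpy as np

      rng = np.random.default_rng(5)
      n_classes, pixels_per_class, looks = 6, 10_000, 4

      # Uniformly distributed true reflectivity for each target class.
      reflectivity = rng.uniform(0.2, 1.0, n_classes)

      # Multi-look speckle: intensity ~ Gamma(looks, reflectivity / looks).
      scene = np.concatenate([
          rng.gamma(shape=looks, scale=r / looks, size=pixels_per_class)
          for r in reflectivity
      ])

      hist, edges = np.histogram(scene, bins=100, density=True)
      print(f"scene mean intensity = {scene.mean():.3f}, "
            f"expected = {reflectivity.mean():.3f}")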

  1. Serial grouping of 2D-image regions with object-based attention in humans.

    PubMed

    Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R

    2016-06-13

    After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas.

  2. Effects of two educational method of lecturing and role playing on knowledge and performance of high school students in first aid at emergency scene

    PubMed Central

    Hassanzadeh, Akbar; Vasili, Arezu; Zare, Zahra

    2010-01-01

    BACKGROUND: This study aimed to investigate the effects of two educational methods on students' knowledge and performance regarding first aid at emergency scenes. METHODS: In this semi-experimental study, the sample was selected randomly among male and female public high school students of Isfahan. Each group included 60 students. First, the students' knowledge and performance in first aid at the emergency scene was assessed using a researcher-made questionnaire. Then the necessary education was provided to the students in 10 two-hour sessions by lecturing and role playing. The students' knowledge and performance was assessed again and the results were compared. RESULTS: There was no significant relationship between the two groups in the frequency distribution of students' age or major, or in knowledge and performance before the educational course. Knowledge scores for performing CPR, bandaging properly, immobilizing the injured area, and carrying an injured person properly increased significantly in both groups after the education. Moreover, performance in bandaging properly, immobilizing the injured area, and carrying an injured person properly after the educational course was significantly higher in the role-playing group than in the lecturing group. CONCLUSIONS: Iran is a developing country with a young generation and a high risk of natural disasters; therefore, providing the necessary education with more effective methods can reduce mortality and morbidity due to a lack of first aid care in crucial moments. Training with role playing is suggested for this purpose. PMID:21589743

  3. Effects of two educational method of lecturing and role playing on knowledge and performance of high school students in first aid at emergency scene.

    PubMed

    Hassanzadeh, Akbar; Vasili, Arezu; Zare, Zahra

    2010-01-01

    This study aimed to investigate the effects of two educational methods on students' knowledge and performance regarding first aid at emergency scenes. In this semi-experimental study, the sample was selected randomly among male and female public high school students of Isfahan. Each group included 60 students. First, the students' knowledge and performance in first aid at the emergency scene was assessed using a researcher-made questionnaire. Then the necessary education was provided to the students in 10 two-hour sessions by lecturing and role playing. The students' knowledge and performance was assessed again and the results were compared. There was no significant relationship between the two groups in the frequency distribution of students' age or major, or in knowledge and performance before the educational course. Knowledge scores for performing CPR, bandaging properly, immobilizing the injured area, and carrying an injured person properly increased significantly in both groups after the education. Moreover, performance in bandaging properly, immobilizing the injured area, and carrying an injured person properly after the educational course was significantly higher in the role-playing group than in the lecturing group. Iran is a developing country with a young generation and a high risk of natural disasters; therefore, providing the necessary education with more effective methods can reduce mortality and morbidity due to a lack of first aid care in crucial moments. Training with role playing is suggested for this purpose.

  4. Recognition of 3-D Scene with Partially Occluded Objects

    NASA Astrophysics Data System (ADS)

    Lu, Siwei; Wong, Andrew K. C.

    1987-03-01

    This paper presents a robot vision system which is capable of recognizing objects in a 3-D scene and interpreting their spatial relation even though some objects in the scene may be partially occluded by other objects. An algorithm is developed to transform the geometric information from the range data into an attributed hypergraph representation (AHR). A hypergraph monomorphism algorithm is then used to compare the AHR of objects in the scene with a set of complete AHR's of prototypes. The capability of identifying connected components and interpreting various types of edges in the 3-D scene enables us to distinguish objects which are partially blocking each other in the scene. Using structural information stored in the primitive area graph, a heuristic hypergraph monomorphism algorithm provides an effective way for recognizing, locating, and interpreting partially occluded objects in the range image.

  5. The new generation of OpenGL support in ROOT

    NASA Astrophysics Data System (ADS)

    Tadel, M.

    2008-07-01

    OpenGL has been promoted to become the main 3D rendering engine of the ROOT framework. This required a major re-modularization of OpenGL support on all levels, from basic window-system specific interface to medium-level object-representation and top-level scene management. This new architecture allows seamless integration of external scene-graph libraries into the ROOT OpenGL viewer as well as inclusion of ROOT 3D scenes into external GUI and OpenGL-based 3D-rendering frameworks. Scene representation was removed from inside of the viewer, allowing scene-data to be shared among several viewers and providing for a natural implementation of multi-view canvas layouts. The object-graph traversal infrastructure allows free mixing of 3D and 2D-pad graphics and makes implementation of ROOT canvas in pure OpenGL possible. Scene-elements representing ROOT objects trigger automatic instantiation of user-provided rendering-objects based on the dictionary information and class-naming convention. Additionally, a finer, per-object control over scene-updates is available to the user, allowing overhead-free maintenance of dynamic 3D scenes and creation of complex real-time animations. User-input handling was modularized as well, making it easy to support application-specific scene navigation, selection handling and tool management.

  6. Insects and associated arthropods analyzed during medicolegal death investigations in Harris County, Texas, USA: January 2013- April 2016

    PubMed Central

    2017-01-01

    The application of insect and arthropod information to medicolegal death investigations is one of the more exacting applications of entomology. Historically limited to homicide investigations, the integration of full-time forensic entomology services into the medical examiner’s office in Harris County has opened up the opportunity to apply entomology to a wide variety of manner of death classifications and types of scenes to make observations on a number of different geographical and species-level trends in Harris County, Texas, USA. In this study, a retrospective analysis was made of 203 forensic entomology cases analyzed during the course of medicolegal death investigations performed by the Harris County Institute of Forensic Sciences in Houston, TX, USA from January 2013 through April 2016. These cases included all manner of death classifications, stages of decomposition and a variety of different scene types that were classified into decedents transported from the hospital (typically associated with myiasis or sting allergy; 3.0%), outdoor scenes (32.0%) or indoor scenes (65.0%). Ambient scene air temperature at the time of scene investigation was the only significantly different factor observed between indoor and outdoor scenes, with the average indoor scene temperature being slightly cooler (25.2°C) than that observed outdoors (28.0°C). Relative humidity was not found to be significantly different between scene types. Most of the indoor scenes were classified as natural (43.3%) whereas most of the outdoor scenes were classified as homicides (12.3%). All other manner of death classifications came from both indoor and outdoor scenes. Several species were found to be significantly associated with indoor scenes as indicated by a binomial test, including Blaesoxipha plinthopyga (Wiedemann) (Diptera: Sarcophagidae), all Sarcophagidae (including B. plinthopyga), Megaselia scalaris Loew (Diptera: Phoridae), Synthesiomyia nudiseta Wulp (Diptera: Muscidae) and Lucilia cuprina (Wiedemann) (Diptera: Calliphoridae). The only species that was a significant indicator of an outdoor scene was Lucilia eximia (Wiedemann) (Diptera: Calliphoridae). All other insect species that were collected in five or more cases were collected from both indoor and outdoor scenes. A species list with month of collection and basic scene characteristics with the length of the estimated time of colonization is also presented. The data presented here provide valuable casework-related species data for Harris County, TX and nearby areas on the Gulf Coast that can be used to compare to other climate regions with other species assemblages and to assist in identifying new species introductions to the area. This study also highlights the importance of potential sources of uncertainty in preparation and interpretation of forensic entomology reports from different scene types. PMID:28604832

  7. Insects and associated arthropods analyzed during medicolegal death investigations in Harris County, Texas, USA: January 2013- April 2016.

    PubMed

    Sanford, Michelle R

    2017-01-01

    The application of insect and arthropod information to medicolegal death investigations is one of the more exacting applications of entomology. Historically limited to homicide investigations, the integration of full-time forensic entomology services into the medical examiner's office in Harris County has opened up the opportunity to apply entomology to a wide variety of manner of death classifications and types of scenes, and to make observations on a number of different geographical and species-level trends in Harris County, Texas, USA. In this study, a retrospective analysis was made of 203 forensic entomology cases analyzed during the course of medicolegal death investigations performed by the Harris County Institute of Forensic Sciences in Houston, TX, USA from January 2013 through April 2016. These cases included all manner of death classifications, stages of decomposition and a variety of different scene types that were classified into decedents transported from the hospital (typically associated with myiasis or sting allergy; 3.0%), outdoor scenes (32.0%) or indoor scenes (65.0%). Ambient scene air temperature at the time of scene investigation was the only significantly different factor observed between indoor and outdoor scenes, with the average indoor scene temperature being slightly cooler (25.2°C) than that observed outdoors (28.0°C). Relative humidity was not found to be significantly different between scene types. Most of the indoor scenes were classified as natural (43.3%) whereas most of the outdoor scenes were classified as homicides (12.3%). All other manner of death classifications came from both indoor and outdoor scenes. Several species were found to be significantly associated with indoor scenes as indicated by a binomial test, including Blaesoxipha plinthopyga (Wiedemann) (Diptera: Sarcophagidae), all Sarcophagidae (including B. plinthopyga), Megaselia scalaris Loew (Diptera: Phoridae), Synthesiomyia nudiseta Wulp (Diptera: Muscidae) and Lucilia cuprina (Wiedemann) (Diptera: Calliphoridae). The only species that was a significant indicator of an outdoor scene was Lucilia eximia (Wiedemann) (Diptera: Calliphoridae). All other insect species that were collected in five or more cases were collected from both indoor and outdoor scenes. A species list with month of collection and basic scene characteristics with the length of the estimated time of colonization is also presented. The data presented here provide valuable casework-related species data for Harris County, TX and nearby areas on the Gulf Coast that can be used for comparison with other climate regions with other species assemblages and to assist in identifying new species introductions to the area. This study also highlights the importance of potential sources of uncertainty in the preparation and interpretation of forensic entomology reports from different scene types.
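    As a rough illustration of the binomial-test reasoning mentioned above (not the authors' analysis), one might ask whether a species is collected indoors more often than the overall indoor base rate would predict; the counts below are invented, and scipy is assumed to be available.

    ```python
    # Hedged sketch: is a species over-represented at indoor scenes relative to the
    # indoor base rate among indoor+outdoor cases? Counts are hypothetical.
    from scipy.stats import binomtest

    indoor_base_rate = 0.65 / (0.65 + 0.32)   # indoor share among indoor+outdoor scenes

    species_indoor = 18    # hypothetical: cases with the species found at indoor scenes
    species_total = 20     # hypothetical: all cases in which the species was collected

    result = binomtest(species_indoor, species_total,
                       p=indoor_base_rate, alternative="greater")
    print(f"P(indoor) under null = {indoor_base_rate:.2f}, p-value = {result.pvalue:.4f}")
    ```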

  8. Firearms in major motion pictures, 1995-2004.

    PubMed

    Binswanger, Ingrid A; Cowan, John A

    2009-03-01

    Firearms are a major cause of injury and death. We sought to determine (1) the prevalence of movie scenes that depicted firearms and verbal firearm safety messages; (2) the context and health outcomes in firearm scenes; and (3) the association between the Motion Picture Association of America ratings and firearm scene characteristics. Ten top revenue-grossing motion pictures were selected for each year from 1995 to 2004 in descending order of gross revenues. Data on firearm scenes were collected by movie coders using dual-monitor computer workstations and real-time collection tools. Seventy of the 100 movies had scenes with firearms and the majority of movies with firearms were rated PG-13. Firearm scenes (N = 624) accounted for 17% of screen time in movies with firearms. Among firearm scenes, crime or illegal activity was involved in 45%, deaths occurred in 19%, and injuries occurred in 12%. A verbal reference to safety was made in 0.8%. Depictions of firearms in top revenue-grossing movies were common, but safety messages were exceedingly rare. Major motion pictures present an under-used opportunity for education about firearm safety.

  9. Cybersickness in the presence of scene rotational movements along different axes.

    PubMed

    Lo, W T; So, R H

    2001-02-01

    Compelling scene movements in a virtual reality (VR) system can cause symptoms of motion sickness (i.e., cybersickness). A within-subject experiment has been conducted to investigate the effects of scene oscillations along different axes on the level of cybersickness. Sixteen male participants were exposed to four 20-min VR simulation sessions. The four sessions used the same virtual environment but with scene oscillations along different axes, i.e., pitch, yaw, roll, or no oscillation (speed: 30 degrees/s, range: +/- 60 degrees). Verbal ratings of the level of nausea were taken at 5-min intervals during the sessions and sickness symptoms were also measured before and after the sessions using the Simulator Sickness Questionnaire (SSQ). In the presence of scene oscillation, both nausea ratings and SSQ scores increased at significantly higher rates than with no oscillation. While individual participants exhibited different susceptibilities to nausea associated with VR simulation containing scene oscillations along different rotational axes, the overall effects of axis among our group of 16 randomly selected participants were not significant. The main effects of, and interactions among, scene oscillation, duration, and participants are discussed in the paper.

  10. Acoustic and higher-level representations of naturalistic auditory scenes in human auditory and frontal cortex.

    PubMed

    Hausfeld, Lars; Riecke, Lars; Formisano, Elia

    2018-06-01

    Often, in everyday life, we encounter auditory scenes comprising multiple simultaneous sounds and succeed in selectively attending to only one sound, typically the one most relevant for ongoing behavior. Studies using basic sounds and two-talker stimuli have shown that auditory selective attention aids this by enhancing the neural representations of the attended sound in auditory cortex. It remains unknown, however, whether and how this selective attention mechanism operates on representations of auditory scenes containing natural sounds of different categories. In this high-field fMRI study we presented participants with simultaneous voices and musical instruments while manipulating their focus of attention. We found an attentional enhancement of neural sound representations in temporal cortex - as defined by spatial activation patterns - at locations that depended on the attended category (i.e., voices or instruments). In contrast, we found that in frontal cortex the site of enhancement was independent of the attended category and the same regions could flexibly represent any attended sound regardless of its category. These results are relevant for elucidating the interacting mechanisms of bottom-up and top-down processing when listening to real-life scenes composed of multiple sound categories. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  11. On validating remote sensing simulations using coincident real data

    NASA Astrophysics Data System (ADS)

    Wang, Mingming; Yao, Wei; Brown, Scott; Goodenough, Adam; van Aardt, Jan

    2016-05-01

    The remote sensing community often requires data simulation, either via spectral/spatial downsampling or through virtual, physics-based models, to assess systems and algorithms. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is one such first-principles, physics-based model for simulating imagery for a range of modalities. Complex simulation of vegetation environments has subsequently become possible as scene rendering technology and software have advanced. This in turn has raised questions about the validity of such complex models, with phenomena such as multiple scattering and the bidirectional reflectance distribution function (BRDF) potentially impacting results in the case of complex vegetation scenes. We selected three sites, located in the Pacific Southwest domain (Fresno, CA) of the National Ecological Observatory Network (NEON). These sites represent oak savanna, hardwood forests, and conifer-manzanita-mixed forests. We constructed corresponding virtual scenes, using airborne LiDAR and imaging spectroscopy data from NEON, ground-based LiDAR data, and field-collected spectra to characterize the scenes. Imaging spectroscopy data for these virtual sites then were generated using the DIRSIG simulation environment. This simulated imagery was compared to real AVIRIS imagery (15m spatial resolution; 12 pixels/scene) and NEON Airborne Observation Platform (AOP) data (1m spatial resolution; 180 pixels/scene). These tests were performed using a distribution-comparison approach on selected spectral statistics, e.g., statistics that establish the spectra's shape, for each simulated-versus-real distribution pair. The initial comparison of the spectral distributions indicated that the shapes of the spectra of the virtual and real sites were closely matched.
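    A distribution comparison of a spectral statistic between simulated and real pixels could, for instance, be sketched with a two-sample Kolmogorov-Smirnov test; this is only an illustrative stand-in for the paper's actual methodology, and the arrays below are random placeholders.

    ```python
    # Hedged sketch of a distribution-comparison check between simulated and real
    # spectra (not the DIRSIG validation code). Arrays are random stand-ins for a
    # per-pixel spectral statistic such as a band ratio or spectral mean.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    stat_simulated = rng.normal(loc=0.30, scale=0.05, size=180)   # e.g., 180 pixels/scene
    stat_real = rng.normal(loc=0.31, scale=0.06, size=180)

    ks_stat, p_value = ks_2samp(stat_simulated, stat_real)
    print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
    ```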

  12. Functional neuroanatomy of intuitive physical inference

    PubMed Central

    Mikhael, John G.; Tenenbaum, Joshua B.; Kanwisher, Nancy

    2016-01-01

    To engage with the world—to understand the scene in front of us, plan actions, and predict what will happen next—we must have an intuitive grasp of the world’s physical structure and dynamics. How do the objects in front of us rest on and support each other, how much force would be required to move them, and how will they behave when they fall, roll, or collide? Despite the centrality of physical inferences in daily life, little is known about the brain mechanisms recruited to interpret the physical structure of a scene and predict how physical events will unfold. Here, in a series of fMRI experiments, we identified a set of cortical regions that are selectively engaged when people watch and predict the unfolding of physical events—a “physics engine” in the brain. These brain regions are selective to physical inferences relative to nonphysical but otherwise highly similar scenes and tasks. However, these regions are not exclusively engaged in physical inferences per se or, indeed, even in scene understanding; they overlap with the domain-general “multiple demand” system, especially the parts of that system involved in action planning and tool use, pointing to a close relationship between the cognitive and neural mechanisms involved in parsing the physical content of a scene and preparing an appropriate action. PMID:27503892

  13. Iranian Audience Poll on Smoking Scenes in Persian Movies in 2011

    PubMed Central

    Heydari, Gholamreza

    2014-01-01

    Background: Scenes depicting smoking are among the causes of smoking initiation in youth. The present study was the first in Iran to collect primary information regarding the presence of smoking scenes in movies and the propagation of tobacco use. Methods: This cross-sectional study was conducted by polling audiences about smoking scenes in Persian movies shown in theaters in 2011. Data were collected using a questionnaire. A total of 2000 subjects were selected for questioning. The questioning for all movies was carried out 2 weeks after the movie premiered, at 4 different times including twice during the week and twice at weekends. Results: A total of 39 movies were selected for further assessment. In general, 2,129 viewers participated in the study. The general opinion of 676 subjects (31.8%) was that these movies can lead to initiation or continuation of smoking in viewers. Women were significantly more likely to think that these movies can lead to initiation of smoking (37.4% vs. 29%), and this belief was also stronger among non-smokers (33.7% vs. 26%). Conclusions: Despite the prohibition of cigarette advertisements in the mass media and movies, we still witness scenes depicting smoking by the good or bad characters of movies, so more monitoring in this field is needed. PMID:24627742

  14. Functional neuroanatomy of intuitive physical inference.

    PubMed

    Fischer, Jason; Mikhael, John G; Tenenbaum, Joshua B; Kanwisher, Nancy

    2016-08-23

    To engage with the world-to understand the scene in front of us, plan actions, and predict what will happen next-we must have an intuitive grasp of the world's physical structure and dynamics. How do the objects in front of us rest on and support each other, how much force would be required to move them, and how will they behave when they fall, roll, or collide? Despite the centrality of physical inferences in daily life, little is known about the brain mechanisms recruited to interpret the physical structure of a scene and predict how physical events will unfold. Here, in a series of fMRI experiments, we identified a set of cortical regions that are selectively engaged when people watch and predict the unfolding of physical events-a "physics engine" in the brain. These brain regions are selective to physical inferences relative to nonphysical but otherwise highly similar scenes and tasks. However, these regions are not exclusively engaged in physical inferences per se or, indeed, even in scene understanding; they overlap with the domain-general "multiple demand" system, especially the parts of that system involved in action planning and tool use, pointing to a close relationship between the cognitive and neural mechanisms involved in parsing the physical content of a scene and preparing an appropriate action.

  15. Neural correlates of contextual cueing are modulated by explicit learning.

    PubMed

    Westerberg, Carmen E; Miller, Brennan B; Reber, Paul J; Cohen, Neal J; Paller, Ken A

    2011-10-01

    Contextual cueing refers to the facilitated ability to locate a particular visual element in a scene due to prior exposure to the same scene. This facilitation is thought to reflect implicit learning, as it typically occurs without the observer's knowledge that scenes repeat. Unlike most other implicit learning effects, contextual cueing can be impaired following damage to the medial temporal lobe. Here we investigated neural correlates of contextual cueing and explicit scene memory in two participant groups. Only one group was explicitly instructed about scene repetition. Participants viewed a sequence of complex scenes that depicted a landscape with five abstract geometric objects. Superimposed on each object was a letter T or L rotated left or right by 90°. Participants responded according to the target letter (T) orientation. Responses were highly accurate for all scenes. Response speeds were faster for repeated versus novel scenes. The magnitude of this contextual cueing did not differ between the two groups. Also, in both groups repeated scenes yielded reduced hemodynamic activation compared with novel scenes in several regions involved in visual perception and attention, and reductions in some of these areas were correlated with response-time facilitation. In the group given instructions about scene repetition, recognition memory for scenes was superior and was accompanied by medial temporal and more anterior activation. Thus, strategic factors can promote explicit memorization of visual scene information, which appears to engage additional neural processing beyond what is required for implicit learning of object configurations and target locations in a scene. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Neural correlates of contextual cueing are modulated by explicit learning

    PubMed Central

    Westerberg, Carmen E.; Miller, Brennan B.; Reber, Paul J.; Cohen, Neal J.; Paller, Ken A.

    2011-01-01

    Contextual cueing refers to the facilitated ability to locate a particular visual element in a scene due to prior exposure to the same scene. This facilitation is thought to reflect implicit learning, as it typically occurs without the observer’s knowledge that scenes repeat. Unlike most other implicit learning effects, contextual cueing can be impaired following damage to the medial temporal lobe. Here we investigated neural correlates of contextual cueing and explicit scene memory in two participant groups. Only one group was explicitly instructed about scene repetition. Participants viewed a sequence of complex scenes that depicted a landscape with five abstract geometric objects. Superimposed on each object was a letter T or L rotated left or right by 90°. Participants responded according to the target letter (T) orientation. Responses were highly accurate for all scenes. Response speeds were faster for repeated versus novel scenes. The magnitude of this contextual cueing did not differ between the two groups. Also, in both groups repeated scenes yielded reduced hemodynamic activation compared with novel scenes in several regions involved in visual perception and attention, and reductions in some of these areas were correlated with response-time facilitation. In the group given instructions about scene repetition, recognition memory for scenes was superior and was accompanied by medial temporal and more anterior activation. Thus, strategic factors can promote explicit memorization of visual scene information, which appears to engage additional neural processing beyond what is required for implicit learning of object configurations and target locations in a scene. PMID:21889947

  17. Cerebral Correlates of Emotional and Action Appraisals During Visual Processing of Emotional Scenes Depending on Spatial Frequency: A Pilot Study

    PubMed Central

    Campagne, Aurélie; Fradcourt, Benoit; Pichat, Cédric; Baciu, Monica; Kauffmann, Louise; Peyrin, Carole

    2016-01-01

    Visual processing of emotional stimuli critically depends on the type of cognitive appraisal involved. The present fMRI pilot study aimed to investigate the cerebral correlates involved in the visual processing of emotional scenes in two tasks, one emotional, based on the appraisal of personal emotional experience, and the other motivational, based on the appraisal of the tendency to action. Given that the use of spatial frequency information is relatively flexible during the visual processing of emotional stimuli depending on the task’s demands, we also explored the effect of the type of spatial frequency in visual stimuli in each task by using emotional scenes filtered in low spatial frequency (LSF) and high spatial frequencies (HSF). Activation was observed in the visual areas of the fusiform gyrus for all emotional scenes in both tasks, and in the amygdala for unpleasant scenes only. The motivational task induced additional activation in frontal motor-related areas (e.g. premotor cortex, SMA) and parietal regions (e.g. superior and inferior parietal lobules). Parietal regions were recruited particularly during the motivational appraisal of approach in response to pleasant scenes. These frontal and parietal activations, respectively, suggest that motor and navigation processes play a specific role in the identification of the tendency to action in the motivational task. Furthermore, activity observed in the motivational task, in response to both pleasant and unpleasant scenes, was significantly greater for HSF than for LSF scenes, suggesting that the tendency to action is driven mainly by the detailed information contained in scenes. Results for the emotional task suggest that spatial frequencies play only a small role in the evaluation of unpleasant and pleasant emotions. Our preliminary study revealed a partial distinction between visual processing of emotional scenes during identification of the tendency to action, and during identification of personal emotional experiences. It also illustrates flexible use of the spatial frequencies contained in scenes depending on their emotional valence and on task demands. PMID:26757433
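    The LSF/HSF manipulation mentioned above can be approximated with a simple Gaussian low-pass/high-pass split; the cutoff below is arbitrary and is not the filtering actually used in the study.

    ```python
    # Hedged sketch of producing LSF and HSF versions of a grayscale scene with a
    # Gaussian low-pass filter; the sigma value is arbitrary, not the study's cutoff.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(1)
    scene = rng.random((256, 256))            # stand-in for a grayscale photograph

    lsf = gaussian_filter(scene, sigma=8)     # low spatial frequencies only
    hsf = scene - lsf                         # residual: high spatial frequencies

    print(lsf.shape, hsf.shape)
    ```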

  18. Learning Scene Categories from High Resolution Satellite Image for Aerial Video Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheriyadat, Anil M

    2011-01-01

    Automatic scene categorization can benefit various aerial video processing applications. This paper addresses the problem of predicting the scene category from aerial video frames using a prior model learned from satellite imagery. We show that local and global features, in the form of line statistics and 2-D power spectrum parameters respectively, can characterize the aerial scene well. The line feature statistics and spatial frequency parameters are useful cues to distinguish between different urban scene categories. We learn the scene prediction model from high-resolution satellite imagery and test the model on the Columbus Surrogate Unmanned Aerial Vehicle (CSUAV) dataset collected by a high-altitude, wide-area UAV sensor platform. We compare the proposed features with the popular Scale Invariant Feature Transform (SIFT) features. Our experimental results show that the proposed approach outperforms the SIFT model when the training and testing are conducted on disparate data sources.
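    As a rough sketch of a global 2-D power-spectrum feature (not the paper's exact parameterization), one can compute a radially averaged power spectrum of an image patch.

    ```python
    # Hedged sketch: radially averaged 2-D power spectrum as a simple global
    # spectral feature; the binning and patch are illustrative only.
    import numpy as np

    def radial_power_spectrum(img, n_bins=32):
        f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
        power = np.abs(f) ** 2
        h, w = img.shape
        y, x = np.indices((h, w))
        r = np.hypot(y - h / 2, x - w / 2)
        bins = np.linspace(0, r.max(), n_bins + 1)
        idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
        total = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
        count = np.maximum(np.bincount(idx, minlength=n_bins), 1)
        return total / count

    patch = np.random.default_rng(2).random((128, 128))
    print(radial_power_spectrum(patch)[:5])   # first few radial frequency bins
    ```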

  19. A model of proto-object based saliency

    PubMed Central

    Russell, Alexander F.; Mihalaş, Stefan; von der Heydt, Rudiger; Niebur, Ernst; Etienne-Cummings, Ralph

    2013-01-01

    Organisms use the process of selective attention to optimally allocate their computational resources to the instantaneously most relevant subsets of a visual scene, ensuring that they can parse the scene in real time. Many models of bottom-up attentional selection assume that elementary image features, like intensity, color and orientation, attract attention. Gestalt psychologists, however, argue that humans perceive whole objects before they analyze individual features. This is supported by recent psychophysical studies that show that objects predict eye-fixations better than features. In this report we present a neurally inspired algorithm of object-based, bottom-up attention. The model rivals the performance of state-of-the-art, non-biologically-plausible feature-based algorithms (and outperforms biologically plausible feature-based algorithms) in its ability to predict perceptual saliency (eye fixations and subjective interest points) in natural scenes. The model achieves this by computing saliency as a function of proto-objects that establish the perceptual organization of the scene. All computational mechanisms of the algorithm have direct neural correlates, and our results provide evidence for the interface theory of attention. PMID:24184601

  20. Determining the rate of forest conversion in Mato Grosso, Brazil, using Landsat MSS and AVHRR data

    NASA Technical Reports Server (NTRS)

    Nelson, Ross; Horning, Ned; Stone, Thomas A.

    1987-01-01

    AVHRR-LAC thermal data and Landsat MSS and TM spectral data were used to estimate the rate of forest clearing in Mato Grosso, Brazil, between 1981 and 1984. The Brazilian state was stratified into forest and nonforest. A list sampling procedure was used in the forest stratum to select Landsat MSS scenes for processing based on estimates of fire activity in the scenes. Fire activity in 1984 was estimated using AVHRR-LAC thermal data. State-wide estimates of forest conversion indicate that between 1981 and 1984, 353,966 ± 77,000 ha (0.4 percent of the state area) were converted per year. No evidence of reforestation was found in this digital sample. The relationship between forest clearing rate (based on MSS-TM analysis) and fire activity (estimated using AVHRR data) was noisy (R-squared = 0.41). The results suggest that AVHRR data may be put to better use as a stratification tool than as a subsidiary variable in list sampling.
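    List sampling of scenes with selection probability tied to estimated fire activity can be sketched generically as a probability-proportional-to-size draw; the scene identifiers and fire counts below are invented, and this is not the study's exact sampling design.

    ```python
    # Hedged sketch of a probability-proportional-to-size (PPS) scene selection;
    # path/row identifiers and AVHRR-derived fire counts are hypothetical.
    import numpy as np

    rng = np.random.default_rng(3)
    scene_ids = np.array(["231/67", "230/68", "229/69", "228/70", "227/71"])
    fire_counts = np.array([120, 45, 300, 10, 80], dtype=float)

    probs = fire_counts / fire_counts.sum()            # selection probability per scene
    sample = rng.choice(scene_ids, size=2, replace=False, p=probs)
    print(sample)
    ```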

  1. A hierarchical, retinotopic proto-organization of the primate visual system at birth

    PubMed Central

    Arcaro, Michael J; Livingstone, Margaret S

    2017-01-01

    The adult primate visual system comprises a series of hierarchically organized areas. Each cortical area contains a topographic map of visual space, with different areas extracting different kinds of information from the retinal input. Here we asked to what extent the newborn visual system resembles the adult organization. We find that hierarchical, topographic organization is present at birth and therefore constitutes a proto-organization for the entire primate visual system. Even within inferior temporal cortex, this proto-organization was already present, prior to the emergence of category selectivity (e.g., faces or scenes). We propose that this topographic organization provides the scaffolding for the subsequent development of visual cortex that commences at the onset of visual experience. DOI: http://dx.doi.org/10.7554/eLife.26196.001 PMID:28671063

  2. The Forensic Confirmation Bias: A Comparison Between Experts and Novices.

    PubMed

    van den Eeden, Claire A J; de Poot, Christianne J; van Koppen, Peter J

    2018-05-17

    A large body of research has described the influence of context information on forensic decision-making. In this study, we examined the effect of context information on the search for and selection of traces by students (N = 36) and crime scene investigators (N = 58). Participants investigated an ambiguous mock crime scene and received prior information indicating suicide, a violent death or no information. Participants described their impression of the scene and wrote down which traces they wanted to secure. Results showed that context information impacted first impression of the scene and crime scene behavior, namely number of traces secured. Participants in the murder condition secured most traces. Furthermore, the students secured more crime-related traces. Students were more confident in their first impression. This study does not indicate that experts outperform novices. We therefore argue for proper training on cognitive processes as an integral part of all forensic education. © 2018 American Academy of Forensic Sciences.

  3. TMS to object cortex affects both object and scene remote networks while TMS to scene cortex only affects scene networks.

    PubMed

    Rafique, Sara A; Solomon-Harris, Lily M; Steeves, Jennifer K E

    2015-12-01

    Viewing the world involves many computations across a great number of regions of the brain, all the while appearing seamless and effortless. We sought to determine the connectivity of object and scene processing regions of cortex through the influence of transient focal neural noise in discrete nodes within these networks. We consecutively paired repetitive transcranial magnetic stimulation (rTMS) with functional magnetic resonance-adaptation (fMR-A) to measure the effect of rTMS on functional response properties at the stimulation site and in remote regions. In separate sessions, rTMS was applied to the object preferential lateral occipital region (LO) and scene preferential transverse occipital sulcus (TOS). Pre- and post-stimulation responses were compared using fMR-A. In addition to modulating BOLD signal at the stimulation site, TMS affected remote regions revealing inter and intrahemispheric connections between LO, TOS, and the posterior parahippocampal place area (PPA). Moreover, we show remote effects from object preferential LO to outside the ventral perception network, in parietal and frontal areas, indicating an interaction of dorsal and ventral streams and possibly a shared common framework of perception and action. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Polar Cap Retreat

    NASA Technical Reports Server (NTRS)

    2004-01-01

    13 August 2004 This red wide-angle Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a view of the retreating seasonal south polar cap in the most recent spring in late 2003. Bright areas are covered with frost; dark areas are those from which the solid carbon dioxide has sublimed away. The center of this image is located near 76.5°S, 28.2°W. The scene is large; it covers an area about 250 km (155 mi) across. The scene is illuminated by sunlight from the upper left.

  5. Investigation of several aspects of LANDSAT 4/5 data quality. [California, Texas, Arkansas, Alabama, and Pacific Ocean

    NASA Technical Reports Server (NTRS)

    Wrigley, R. C. (Principal Investigator)

    1984-01-01

    A second quadrant from the Sacramento, CA scene 44/33 acquired by LANDSAT-4 was tested for band-to-band registration. Results show that all measured misregistrations are within 0.03 pixels for similar band pairs. Two LANDSAT-5 scenes (one from Corpus Christi, TX and the other from Huntsville, AL) were also tested for band-to-band registration. All measured misregistrations in the Texas scene are less than 0.03 pixels. The across-scan misregistration of the Alabama scene is -0.66 pixels and thus needs correction. A 512 x 512 pixel area of the Pacific Ocean was corrected for the pixel offsets. Modulation transfer function analysis of the San Mateo Bridge using data from the San Francisco scene was accomplished.
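    Sub-pixel band-to-band misregistration of the kind quoted above (hundredths of a pixel) can be estimated, for example, by phase correlation; the sketch below uses scikit-image on synthetic data and is not the analysis used in the original investigation.

    ```python
    # Hedged sketch: estimating a sub-pixel shift between two bands by phase
    # correlation on synthetic, band-limited noise images.
    import numpy as np
    from scipy.ndimage import gaussian_filter, shift as nd_shift
    from skimage.registration import phase_cross_correlation

    rng = np.random.default_rng(4)
    band_a = gaussian_filter(rng.random((256, 256)), sigma=1.0)      # band-limited "scene"
    band_b = nd_shift(band_a, shift=(0.0, -0.66), order=3)           # simulate an across-scan offset

    estimated_shift, error, _ = phase_cross_correlation(band_a, band_b, upsample_factor=100)
    print(estimated_shift)   # approximately [0.0, 0.66]
    ```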

  6. Neural representation of anxiety and personality during exposure to anxiety-provoking and neutral scenes from scary movies.

    PubMed

    Straube, Thomas; Preissler, Sandra; Lipka, Judith; Hewig, Johannes; Mentzel, Hans-Joachim; Miltner, Wolfgang H R

    2010-01-01

    Some people search for intense sensations such as being scared by frightening movies while others do not. The brain mechanisms underlying such inter-individual differences are not clear. Testing theoretical models, we investigated neural correlates of anxiety and the personality trait sensation seeking in 40 subjects who watched threatening and neutral scenes from scary movies during functional magnetic resonance imaging. Threat versus neutral scenes induced increased activation in anterior cingulate cortex, insula, thalamus, and visual areas. Movie-induced anxiety correlated positively with activation in dorsomedial prefrontal cortex, indicating a role for this area in the subjective experience of being scared. Sensation-seeking scores correlated positively with brain activation to threat versus neutral scenes in visual areas and in thalamus and anterior insula, i.e., regions involved in the induction and representation of arousal states. For the insula and thalamus, these outcomes were partly due to an inverse relation between sensation-seeking scores and brain activation during neutral film clips. These results support models predicting cerebral hypoactivation in high sensation seekers during neutral stimulation, which may be compensated by more intense sensations such as watching scary movies. 2009 Wiley-Liss, Inc.

  7. Adherent Raindrop Modeling, Detection and Removal in Video.

    PubMed

    You, Shaodi; Tan, Robby T; Kawakami, Rei; Mukaigawa, Yasuhiro; Ikeuchi, Katsushi

    2016-09-01

    Raindrops adhering to a windscreen or window glass can significantly degrade the visibility of a scene. Modeling, detecting and removing raindrops will, therefore, benefit many computer vision applications, particularly outdoor surveillance systems and intelligent vehicle systems. In this paper, a method that automatically detects and removes adherent raindrops is introduced. The core idea is to exploit the local spatio-temporal derivatives of raindrops. To accomplish this, we first model adherent raindrops using the laws of physics, and detect raindrops based on these models in combination with motion and intensity temporal derivatives of the input video. Having detected the raindrops, we remove them and restore the images based on the observation that some raindrop areas completely occlude the scene, while other areas occlude it only partially. For partially occluding areas, we restore them by retrieving as much scene information as possible, namely, by solving a blending function on the detected partially occluding areas using the temporal intensity derivative. For completely occluding areas, we recover them by using a video completion technique. Experimental results using various real videos show the effectiveness of our method.

  8. Generative Learning during Visual Search for Scene Changes: Enhancing Free Recall of Individuals with and without Mental Retardation

    ERIC Educational Resources Information Center

    Carlin, Michael T.; Soraci, Sal A.; Strawbridge, Christina P.

    2005-01-01

    Memory for scene changes that were identified immediately (passive encoding) or following systematic and effortful search (generative encoding) was compared across groups differing in age and intelligence. In the context of flicker methodology, generative search for the changing object involved selection and rejection of multiple potential…

  9. Finding the Cause: Verbal Framing Helps Children Extract Causal Evidence Embedded in a Complex Scene

    ERIC Educational Resources Information Center

    Butler, Lucas P.; Markman, Ellen M.

    2012-01-01

    In making causal inferences, children must both identify a causal problem and selectively attend to meaningful evidence. Four experiments demonstrate that verbally framing an event ("Which animals make Lion laugh?") helps 4-year-olds extract evidence from a complex scene to make accurate causal inferences. Whereas framing was unnecessary when…

  10. Scene-Aware Adaptive Updating for Visual Tracking via Correlation Filters

    PubMed Central

    Zhang, Sirou; Qiao, Xiaoya

    2017-01-01

    In recent years, visual object tracking has been widely used in military guidance, human-computer interaction, road traffic, scene monitoring and many other fields. Tracking algorithms based on correlation filters have shown good performance in terms of accuracy and tracking speed. However, their performance is not satisfactory in scenes with scale variation, deformation, and occlusion. In this paper, we propose a scene-aware adaptive updating mechanism for visual tracking via a kernel correlation filter (KCF). First, a low-complexity scale estimation method is presented, in which weights over five candidate scales are employed to determine the final target scale. Then, the adaptive updating mechanism is presented based on scene classification. We classify the video scenes into four categories by video content analysis. According to the target scene, we exploit the adaptive updating mechanism to update the kernel correlation filter to improve the robustness of the tracker, especially in scenes with scale variation, deformation, and occlusion. We evaluate our tracker on the CVPR2013 benchmark. The experimental results obtained with the proposed algorithm are improved by 33.3%, 15%, 6%, 21.9% and 19.8% compared to those of the KCF tracker on scenes with scale variation, partial or long-time large-area occlusion, deformation, fast motion and out-of-view targets, respectively. PMID:29140311
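    The weighted five-scale selection idea can be sketched as follows; the scale factors, weights and responses are invented, and this is not the authors' implementation.

    ```python
    # Hedged sketch of weighted scale selection: each candidate scale gets a
    # correlation-response score, a prior weight damps large scale changes, and
    # the best weighted score determines the final target scale.
    import numpy as np

    scales = np.array([0.90, 0.95, 1.00, 1.05, 1.10])
    scale_weights = np.array([0.95, 0.98, 1.00, 0.98, 0.95])   # assumed penalty on big jumps

    def select_scale(peak_responses):
        """peak_responses[i]: maximum correlation-filter response at scales[i]."""
        weighted = peak_responses * scale_weights
        return scales[int(np.argmax(weighted))]

    responses = np.array([0.41, 0.47, 0.52, 0.55, 0.50])   # invented per-scale peaks
    print(select_scale(responses))   # -> 1.05
    ```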

  11. Satellite image maps of Pakistan

    USGS Publications Warehouse

    ,

    1997-01-01

    Georeferenced Landsat satellite image maps of Pakistan are now being made available for purchase from the U.S. Geological Survey (USGS). The first maps to be released are a series of Multi-Spectral Scanner (MSS) color image maps compiled from Landsat scenes taken before 1979. The Pakistan image maps were originally developed by USGS as an aid for geologic and general terrain mapping in support of the Coal Resource Exploration and Development Program in Pakistan (COALREAP). COALREAP, a cooperative program between the USGS, the United States Agency for International Development, and the Geological Survey of Pakistan, was in effect from 1985 through 1994. The Pakistan MSS image maps (bands 1, 2, and 4) are available as a full-country mosaic of 72 Landsat scenes at a scale of 1:2,000,000, and in 7 regional sheets covering various portions of the entire country at a scale of 1:500,000. The scenes used to compile the maps were selected from imagery available at the EROS Data Center (EDC), Sioux Falls, S. Dak. Where possible, preference was given to cloud-free and snow-free scenes that displayed similar stages of seasonal vegetation development. The data for the MSS scenes were resampled from the original 80-meter resolution to 50-meter picture elements (pixels) and digitally transformed to a geometrically corrected Lambert conformal conic projection. The cubic convolution algorithm was used during rotation and resampling. The 50-meter pixel size allows for such data to be imaged at a scale of 1:250,000 without degradation; for cost and convenience considerations, however, the maps were printed at 1:500,000 scale. The seven regional sheets have been named according to the main province or area covered. The 50-meter data were averaged to 150-meter pixels to generate the country image on a single sheet at 1:2,000,000 scale.
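    The cubic convolution algorithm named above interpolates with a piecewise-cubic kernel; a standard textbook formulation (the Keys kernel with a = -0.5) is sketched below. This is only an illustration of the kernel, not the USGS production processing.

    ```python
    # Hedged sketch of the Keys cubic-convolution kernel (a = -0.5), the
    # interpolation family referred to in the record.
    import numpy as np

    def cubic_convolution_kernel(x, a=-0.5):
        x = np.abs(np.asarray(x, dtype=float))
        w = np.zeros_like(x)
        near = x <= 1
        far = (x > 1) & (x < 2)
        w[near] = (a + 2) * x[near]**3 - (a + 3) * x[near]**2 + 1
        w[far] = a * x[far]**3 - 5 * a * x[far]**2 + 8 * a * x[far] - 4 * a
        return w

    offsets = np.array([-1.6, -0.6, 0.4, 1.4])     # distances to the 4 nearest samples
    weights = cubic_convolution_kernel(offsets)
    print(weights, weights.sum())                   # the 1-D resampling weights sum to 1
    ```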

  12. Effect of fixation positions on perception of lightness

    NASA Astrophysics Data System (ADS)

    Toscani, Matteo; Valsecchi, Matteo; Gegenfurtner, Karl R.

    2015-03-01

    Visual acuity, luminance sensitivity, contrast sensitivity, and color sensitivity are maximal in the fovea and decrease with retinal eccentricity. Therefore every scene is perceived by integrating the small, high resolution samples collected by moving the eyes around. Moreover, when viewing ambiguous figures the fixated position influences the dominance of the possible percepts. Therefore fixations could serve as a selection mechanism whose function is not confined to finely resolve the selected detail of the scene. Here this hypothesis is tested in the lightness perception domain. In a first series of experiments we demonstrated that when observers matched the color of natural objects they based their lightness judgments on objects' brightest parts. During this task the observers tended to fixate points with above average luminance, suggesting a relationship between perception and fixations that we causally proved using a gaze contingent display in a subsequent experiment. Simulations with rendered physical lighting show that higher values in an object's luminance distribution are particularly informative about reflectance. In a second series of experiments we considered a high level strategy that the visual system uses to segment the visual scene in a layered representation. We demonstrated that eye movement sampling mediates between the layer segregation and its effects on lightness perception. Together these studies show that eye fixations are partially responsible for the selection of information from a scene that allows the visual system to estimate the reflectance of a surface.

  13. Sci-Vis Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arthur Bleeker, PNNL

    2015-03-11

    SVF is a full-featured OpenGL 3D framework that allows for rapid creation of complex visualizations. The SVF framework handles much of the lifecycle and complex tasks required for a 3D visualization. Unlike a game framework, SVF was designed to use fewer resources, work well in a windowed environment, and only render when necessary. The scene also takes advantage of multiple threads to free up the UI thread as much as possible. Shapes (actors) in the scene are created by adding or removing functionality (through support objects) during runtime. This allows a highly flexible and dynamic means of creating highly complex actors without the code complexity (it also helps overcome the lack of multiple inheritance in Java). All classes are highly customizable, and there are abstract classes intended to be subclassed to allow a developer to create more complex and highly performant actors. Multiple demos are included in the framework to help the developer get started and to show off nearly all of the functionality. Some simple shapes (actors) are already created for you, such as text, bordered text, radial text, text area, complex paths, NURBS paths, cube, disk, grid, plane, geometric shapes, and volumetric area. It also comes with various camera types for viewing that can be dragged, zoomed, and rotated. Picking or selecting items in the scene can be accomplished in various ways depending on your needs (raycasting or color picking). The framework currently has functionality for tooltips, animation, actor pools, color gradients, 2D physics, text, 1D/2D/3D textures, children, blending, clipping planes, view frustum culling, custom shaders, and custom actor states.

  14. Enhancing the performance of regional land cover mapping

    NASA Astrophysics Data System (ADS)

    Wu, Weicheng; Zucca, Claudio; Karam, Fadi; Liu, Guangping

    2016-10-01

    Different pixel-based, object-based and subpixel-based methods, such as time-series analysis, decision trees, and various supervised approaches, have been proposed to conduct land use/cover classification. However, despite their proven advantages in small dataset tests, their performance is variable and less satisfactory when dealing with large datasets, particularly for regional-scale mapping with high-resolution data, due to the complexity and diversity of landscapes and land cover patterns and the unacceptably long processing time. The objective of this paper is to demonstrate the comparatively high performance of an operational approach based on the integration of multisource information, ensuring high mapping accuracy in large areas with acceptable processing time. The information used includes phenologically contrasted multiseasonal and multispectral bands, a vegetation index, land surface temperature, and topographic features. The performance of different conventional and machine learning classifiers, namely Mahalanobis Distance (MD), Maximum Likelihood (ML), Artificial Neural Networks (ANNs), Support Vector Machines (SVMs) and Random Forests (RFs), was compared using the same datasets in the same IDL (Interactive Data Language) environment. An Eastern Mediterranean area with complex landscape and steep climate gradients was selected to test and develop the operational approach. The results showed that the SVM and RF classifiers produced the most accurate mapping at local scale (up to 96.85% overall accuracy), but were very time-consuming in whole-scene classification (more than five days per scene), whereas ML fulfilled the task rapidly (about 10 min per scene) with satisfactory accuracy (94.2-96.4%). Thus, the approach composed of the integration of seasonally contrasted multisource data and sampling at subclass level, followed by an ML classification, is a suitable candidate to become an operational and effective regional land cover mapping method.
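    The classifier comparison described above can be illustrated structurally with scikit-learn on synthetic data; the paper's experiments were run in IDL on real multisource imagery, so the sketch below only mirrors the comparison pattern, not the actual pipeline or accuracies.

    ```python
    # Hedged sketch: comparing an SVM and a Random Forest on synthetic "pixels".
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=3000, n_features=10, n_informative=6,
                               n_classes=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for name, clf in [("SVM (RBF)", SVC(kernel="rbf", C=10.0)),
                      ("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
        clf.fit(X_train, y_train)
        print(f"{name}: overall accuracy = {clf.score(X_test, y_test):.3f}")
    ```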

  15. Disentangling the effects of spatial inconsistency of targets and distractors when searching in realistic scenes.

    PubMed

    Spotorno, Sara; Malcolm, George L; Tatler, Benjamin W

    2015-02-10

    Previous research has suggested that correctly placed objects facilitate eye guidance, but also that objects violating spatial associations within scenes may be prioritized for selection and subsequent inspection. We analyzed the respective eye guidance of spatial expectations and target template (precise picture or verbal label) in visual search, while taking into account any impact of object spatial inconsistency on extrafoveal or foveal processing. Moreover, we isolated search disruption due to misleading spatial expectations about the target from the influence of spatial inconsistency within the scene upon search behavior. Reliable spatial expectations and precise target template improved oculomotor efficiency across all search phases. Spatial inconsistency resulted in preferential saccadic selection when guidance by template was insufficient to ensure effective search from the outset and the misplaced object was bigger than the objects consistently placed in the same scene region. This prioritization emerged principally during early inspection of the region, but the inconsistent object also tended to be preferentially fixated overall across region viewing. These results suggest that objects are first selected covertly on the basis of their relative size and that subsequent overt selection is made considering object-context associations processed in extrafoveal vision. Once the object was fixated, inconsistency resulted in longer first fixation duration and longer total dwell time. As a whole, our findings indicate that observed impairment of oculomotor behavior when searching for an implausibly placed target is the combined product of disruption due to unreliable spatial expectations and prioritization of inconsistent objects before and during object fixation. © 2015 ARVO.

  16. Fixations on objects in natural scenes: dissociating importance from salience

    PubMed Central

    't Hart, Bernard M.; Schmidt, Hannah C. E. F.; Roth, Christine; Einhäuser, Wolfgang

    2013-01-01

    The relation of selective attention to understanding of natural scenes has been subject to intense behavioral research and computational modeling, and gaze is often used as a proxy for such attention. The probability of an image region to be fixated typically correlates with its contrast. However, this relation does not imply a causal role of contrast. Rather, contrast may relate to an object's “importance” for a scene, which in turn drives attention. Here we operationalize importance by the probability that an observer names the object as characteristic for a scene. We modify luminance contrast of either a frequently named (“common”/“important”) or a rarely named (“rare”/“unimportant”) object, track the observers' eye movements during scene viewing and ask them to provide keywords describing the scene immediately after. When no object is modified relative to the background, important objects draw more fixations than unimportant ones. Increases of contrast make an object more likely to be fixated, irrespective of whether it was important for the original scene, while decreases in contrast have little effect on fixations. Any contrast modification makes originally unimportant objects more important for the scene. Finally, important objects are fixated more centrally than unimportant objects, irrespective of contrast. Our data suggest a dissociation between object importance (relevance for the scene) and salience (relevance for attention). If an object obeys natural scene statistics, important objects are also salient. However, when natural scene statistics are violated, importance and salience are differentially affected. Object salience is modulated by the expectation about object properties (e.g., formed by context or gist), and importance by the violation of such expectations. In addition, the dependence of fixated locations within an object on the object's importance suggests an analogy to the effects of word frequency on landing positions in reading. PMID:23882251

  17. Modulation of visually evoked movement responses in moving virtual environments.

    PubMed

    Reed-Jones, Rebecca J; Vallis, Lori Ann

    2009-01-01

    Virtual-reality technology is being increasingly used to understand how humans perceive and act in the moving world around them. What is currently not clear is how virtual reality technology is perceived by human participants and what virtual scenes are effective in evoking movement responses to visual stimuli. We investigated the effect of virtual-scene context on human responses to a virtual visual perturbation. We hypothesised that exposure to a natural scene that matched the visual expectancies of the natural world would create a perceptual set towards presence, and thus visual guidance of body movement in a subsequently presented virtual scene. Results supported this hypothesis; responses to a virtual visual perturbation presented in an ambiguous virtual scene were increased when participants first viewed a scene that consisted of natural landmarks which provided 'real-world' visual motion cues. Further research in this area will provide a basis of knowledge for the effective use of this technology in the study of human movement responses.

  18. Serial grouping of 2D-image regions with object-based attention in humans

    PubMed Central

    Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R

    2016-01-01

    After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas. DOI: http://dx.doi.org/10.7554/eLife.14320.001 PMID:27291188

  19. How is visual salience computed in the brain? Insights from behaviour, neurobiology and modelling

    PubMed Central

    Veale, Richard; Hafed, Ziad M.

    2017-01-01

    Inherent in visual scene analysis is a bottleneck associated with the need to sequentially sample locations with foveating eye movements. The concept of a ‘saliency map’ topographically encoding stimulus conspicuity over the visual scene has proven to be an efficient predictor of eye movements. Our work reviews insights into the neurobiological implementation of visual salience computation. We start by summarizing the role that different visual brain areas play in salience computation, whether at the level of feature analysis for bottom-up salience or at the level of goal-directed priority maps for output behaviour. We then delve into how a subcortical structure, the superior colliculus (SC), participates in salience computation. The SC represents a visual saliency map via a centre-surround inhibition mechanism in the superficial layers, which feeds into priority selection mechanisms in the deeper layers, thereby affecting saccadic and microsaccadic eye movements. Lateral interactions in the local SC circuit are particularly important for controlling active populations of neurons. This, in turn, might help explain long-range effects, such as those of peripheral cues on tiny microsaccades. Finally, we show how a combination of in vitro neurophysiology and large-scale computational modelling is able to clarify how salience computation is implemented in the local circuit of the SC. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044023
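    A centre-surround operation of the kind referred to above is often approximated by a difference of Gaussians; the sketch below is a generic illustration with arbitrary parameters, not a model of the superior colliculus circuit itself.

    ```python
    # Hedged sketch of a centre-surround (difference-of-Gaussians) saliency-like map.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(5)
    image = rng.random((128, 128))
    image[60:68, 60:68] += 2.0                       # a conspicuous bright patch

    center = gaussian_filter(image, sigma=1.5)
    surround = gaussian_filter(image, sigma=6.0)
    saliency = np.clip(center - surround, 0, None)   # half-wave rectified DoG response

    peak = np.unravel_index(np.argmax(saliency), saliency.shape)
    print(peak)                                       # falls inside the bright patch
    ```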

  20. A Model of Manual Control with Perspective Scene Viewing

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara Townsend

    2013-01-01

    A model of manual control during perspective scene viewing is presented, which combines the Crossover Model with a simplified model of perspective-scene viewing and visual-cue selection. The model is developed for a particular example task: an idealized constant-altitude task in which the operator controls longitudinal position in the presence of both longitudinal and pitch disturbances. An experiment is performed to develop and validate the model. The model corresponds closely with the experimental measurements, and identified model parameters are highly consistent with the visual cues available in the perspective scene. The modeling results indicate that operators used one visual cue for position control, and another visual cue for velocity control (lead generation). Additionally, operators responded more quickly to rotation (pitch) than translation (longitudinal).

  1. Assessing Multiple Object Tracking in Young Children Using a Game

    ERIC Educational Resources Information Center

    Ryokai, Kimiko; Farzin, Faraz; Kaltman, Eric; Niemeyer, Greg

    2013-01-01

    Visual tracking of multiple objects in a complex scene is a critical survival skill. When we attempt to safely cross a busy street, follow a ball's position during a sporting event, or monitor children in a busy playground, we rely on our brain's capacity to selectively attend to and track the position of specific objects in a dynamic scene. This…

  2. Long-Term Memories Bias Sensitivity and Target Selection in Complex Scenes

    PubMed Central

    Patai, Eva Zita; Doallo, Sonia; Nobre, Anna Christina

    2014-01-01

    In everyday situations we often rely on our memories to find what we are looking for in our cluttered environment. Recently, we developed a new experimental paradigm to investigate how long-term memory (LTM) can guide attention, and showed how the pre-exposure to a complex scene in which a target location had been learned facilitated the detection of the transient appearance of the target at the remembered location (Summerfield, Lepsien, Gitelman, Mesulam, & Nobre, 2006; Summerfield, Rao, Garside, & Nobre, 2011). The present study extends these findings by investigating whether and how LTM can enhance perceptual sensitivity to identify targets occurring within their complex scene context. Behavioral measures showed superior perceptual sensitivity (d′) for targets located in remembered spatial contexts. We used the N2pc event-related potential to test whether LTM modulated the process of selecting the target from its scene context. Surprisingly, in contrast to effects of visual spatial cues or implicit contextual cueing, LTM for target locations significantly attenuated the N2pc potential. We propose that the mechanism by which these explicitly available LTMs facilitate perceptual identification of targets may differ from mechanisms triggered by other types of top-down sources of information. PMID:23016670

  3. Using articulated scene models for dynamic 3d scene analysis in vista spaces

    NASA Astrophysics Data System (ADS)

    Beuter, Niklas; Swadzba, Agnes; Kummert, Franz; Wachsmuth, Sven

    2010-09-01

    In this paper we describe an efficient but detailed new approach to analyzing complex dynamic scenes directly in 3D. The resulting information is important for mobile robots that solve tasks in the area of household robotics. In our work a mobile robot builds an articulated scene model by observing the environment in the visual field, or rather in the so-called vista space. The articulated scene model consists of essential knowledge about the static background, about autonomously moving entities like humans or robots and finally, in contrast to existing approaches, information about articulated parts. These parts describe movable objects like chairs, doors or other tangible entities, which could be moved by an agent. The combination of the static scene, the self-moving entities and the movable objects in one articulated scene model enhances the calculation of each single part. The reconstruction process for parts of the static scene benefits from removal of the dynamic parts and, in turn, the moving parts can be extracted more easily through the knowledge about the background. In our experiments we show that the system simultaneously delivers an accurate static background model, moving persons and movable objects. This information in the articulated scene model enables a mobile robot to detect and keep track of interaction partners, to navigate safely through the environment and, finally, to strengthen the interaction with the user through knowledge about the 3D articulated objects and 3D scene analysis.

  4. Spectral feature characterization methods for blood stain detection in crime scene backgrounds

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Mathew, Jobin J.; Dube, Roger R.; Messinger, David W.

    2016-05-01

    Blood stains are one of the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Blood spectral signatures containing unique reflectance or absorption features are important both for forensic on-site investigation and laboratory testing. They can be used for target detection and identification applied to crime scene hyperspectral imagery, and can also be utilized to analyze the spectral variation of blood on various backgrounds. Non-blood stains often mislead the detection and can generate false alarms at a real crime scene, especially for dark and red backgrounds. This paper measured the reflectance of liquid blood and 9 kinds of non-blood samples in the range of 350 nm - 2500 nm against various crime scene backgrounds, such as pure samples contained in petri dishes with various thicknesses, mixed samples with fabrics of different colors and materials, and mixed samples with wood, all of which are examined to provide sub-visual evidence for detecting and distinguishing blood from non-blood samples in a realistic crime scene. The spectral differences between blood and non-blood samples are examined and spectral features such as "peaks" and "depths" of reflectance are selected. Two blood stain detection methods are proposed in this paper. The first method uses an index defined as the ratio of "depth" minus "peak" over "depth" plus "peak" within a wavelength range of the reflectance spectrum. The second method uses the relative band depth of selected wavelength ranges of the reflectance spectrum. Results show that the index method is able to discriminate blood from non-blood samples in most tested crime scene backgrounds, but is not able to detect it against black felt, whereas the relative band depth method is able to discriminate blood from non-blood samples on all of the tested background material types and colors.
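    The normalized index described in the abstract can be sketched as follows; the wavelengths used for the "peak" and "depth" features here are placeholders rather than the bands actually selected in the paper, and the spectrum is a synthetic stand-in.

    ```python
    # Hedged sketch of a (depth - peak) / (depth + peak) style index computed
    # from a reflectance spectrum; wavelengths and spectrum are hypothetical.
    import numpy as np

    wavelengths = np.arange(350, 2501, 10)                     # nm
    reflectance = 0.3 + 0.1 * np.sin(wavelengths / 300.0)      # stand-in spectrum

    def feature_value(wl_nm):
        return reflectance[np.argmin(np.abs(wavelengths - wl_nm))]

    peak = feature_value(600)     # hypothetical "peak" feature wavelength
    depth = feature_value(550)    # hypothetical "depth" feature wavelength

    index = (depth - peak) / (depth + peak)
    print(f"index = {index:.3f}")   # thresholding such an index would flag candidate blood pixels
    ```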

  5. ERTS evaluation for land use inventory

    NASA Technical Reports Server (NTRS)

    Hardy, E. E. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. The feasibility of accomplishing a general inventory of any given region based on spectral categories from satellite data has been demonstrated in a pilot study for an area of 6300 square kilometers in central New York State. This was accomplished by developing special processing techniques to improve and balance contrast and density for each spectral band of an image scene to compare with a standard range of density and contrast found to be acceptable for interpretation of the scene. Diazo film transparencies were made from enlarged black and white transparencies of each spectral band. Color composites were constructed from these diazo films in combinations of hue and spectral bands to enhance different spectral features in the scene. Interpretation and data takeoff was accomplished manually by translating interpreted areas onto an overlay to construct a spectral map. The minimum area interpreted was 25 hectares. The minimum area geographically referenced was one square kilometer. The interpretation and referencing of data from ERTS-1 was found to be about 88% accurate for eight primary spectral categories.

  6. The Southampton-York Natural Scenes (SYNS) dataset: Statistics of surface attitude

    PubMed Central

    Adams, Wendy J.; Elder, James H.; Graf, Erich W.; Leyland, Julian; Lugtigheid, Arthur J.; Muryy, Alexander

    2016-01-01

    Recovering 3D scenes from 2D images is an under-constrained task; optimal estimation depends upon knowledge of the underlying scene statistics. Here we introduce the Southampton-York Natural Scenes dataset (SYNS: https://syns.soton.ac.uk), which provides comprehensive scene statistics useful for understanding biological vision and for improving machine vision systems. In order to capture the diversity of environments that humans encounter, scenes were surveyed at random locations within 25 indoor and outdoor categories. Each survey includes (i) spherical LiDAR range data (ii) high-dynamic range spherical imagery and (iii) a panorama of stereo image pairs. We envisage many uses for the dataset and present one example: an analysis of surface attitude statistics, conditioned on scene category and viewing elevation. Surface normals were estimated using a novel adaptive scale selection algorithm. Across categories, surface attitude below the horizon is dominated by the ground plane (0° tilt). Near the horizon, probability density is elevated at 90°/270° tilt due to vertical surfaces (trees, walls). Above the horizon, probability density is elevated near 0° slant due to overhead structure such as ceilings and leaf canopies. These structural regularities represent potentially useful prior assumptions for human and machine observers, and may predict human biases in perceived surface attitude. PMID:27782103
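
    The surface attitude statistics mentioned above rest on estimating a surface normal for each local patch of range data. The sketch below is a minimal, fixed-scale version of that step, assuming a PCA plane fit and a simple slant/tilt convention; the SYNS analysis itself uses an adaptive scale selection algorithm.

      import numpy as np

      def surface_attitude(neighborhood_points, view_direction=np.array([0.0, 0.0, 1.0])):
          """Fit a plane to a local patch of 3D points and return (slant, tilt) in degrees.
          Fixed-scale PCA plane fit; the SYNS analysis uses adaptive scale selection."""
          centered = neighborhood_points - neighborhood_points.mean(axis=0)
          # Normal = direction of least variance (last right singular vector).
          _, _, vt = np.linalg.svd(centered, full_matrices=False)
          normal = vt[-1]
          if normal @ view_direction > 0:            # orient the normal toward the viewer
              normal = -normal
          slant = np.degrees(np.arccos(np.clip(-normal @ view_direction, -1.0, 1.0)))
          tilt = np.degrees(np.arctan2(normal[1], normal[0])) % 360.0
          return slant, tilt

      # Example: a roughly horizontal ground patch should give a slant near 0 degrees.
      rng = np.random.default_rng(0)
      patch = np.column_stack([rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200),
                               0.01 * rng.standard_normal(200)])
      print(surface_attitude(patch))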

  7. The effects of scene content parameters, compression, and frame rate on the performance of analytics systems

    NASA Astrophysics Data System (ADS)

    Tsifouti, A.; Triantaphillidou, S.; Larabi, M. C.; Doré, G.; Bilissi, E.; Psarrou, A.

    2015-01-01

    In this investigation we study the effects of compression and frame rate reduction on the performance of four video analytics (VA) systems utilizing a low complexity scenario, such as the Sterile Zone (SZ). Additionally, we identify the most influential scene parameters affecting the performance of these systems. The SZ scenario is a scene consisting of a fence, not to be trespassed, and an area with grass. The VA system needs to alarm when there is an intruder (attack) entering the scene. The work includes testing of the systems with uncompressed and compressed (using H.264/MPEG-4 AVC at 25 and 5 frames per second) footage, consisting of quantified scene parameters. The scene parameters include descriptions of scene contrast, camera to subject distance, and attack portrayal. Additional footage, including only distractions (no attacks) is also investigated. Results have shown that every system has performed differently for each compression/frame rate level, whilst overall, compression has not adversely affected the performance of the systems. Frame rate reduction has decreased performance and scene parameters have influenced the behavior of the systems differently. Most false alarms were triggered with a distraction clip, including abrupt shadows through the fence. Findings could contribute to the improvement of VA systems.

  8. A three-layer model of natural image statistics.

    PubMed

    Gutmann, Michael U; Hyvärinen, Aapo

    2013-11-01

    An important property of visual systems is to be simultaneously both selective to specific patterns found in the sensory input and invariant to possible variations. Selectivity and invariance (tolerance) are opposing requirements. It has been suggested that they could be joined by iterating a sequence of elementary selectivity and tolerance computations. It is, however, unknown what should be selected or tolerated at each level of the hierarchy. We approach this issue by learning the computations from natural images. We propose and estimate a probabilistic model of natural images that consists of three processing layers. Two natural image data sets are considered: image patches, and complete visual scenes downsampled to the size of small patches. For both data sets, we find that in the first two layers, simple and complex cell-like computations are performed. In the third layer, we mainly find selectivity to longer contours; for patch data, we further find some selectivity to texture, while for the downsampled complete scenes, some selectivity to curvature is observed. Copyright © 2013 Elsevier Ltd. All rights reserved.
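
    As a schematic of the kind of three-layer hierarchy described above (not the authors' estimated model), the sketch below applies linear first-layer filters, pools squared outputs in a complex-cell-like second layer, and takes linear combinations of log-compressed responses in a third layer. All weights here are random placeholders; in the paper they are learned from natural images.

      import numpy as np

      rng = np.random.default_rng(1)
      patches = rng.standard_normal((500, 16 * 16))         # stand-in for whitened image patches

      # Layer 1: linear filters (random here; learned from natural images in the paper).
      W1 = rng.standard_normal((64, 16 * 16))
      s1 = patches @ W1.T                                   # simple-cell-like outputs

      # Layer 2: pooling of squared layer-1 outputs (complex-cell-like energy).
      pool = np.abs(rng.standard_normal((32, 64)))          # nonnegative pooling weights
      s2 = (s1 ** 2) @ pool.T

      # Layer 3: linear combinations of log-compressed layer-2 responses, where
      # selectivity to longer contours, texture or curvature could emerge.
      W3 = rng.standard_normal((16, 32))
      s3 = np.log(s2 + 1e-6) @ W3.T
      print(s3.shape)                                       # (500, 16)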

  9. Frequency-Selective Attention in Auditory Scenes Recruits Frequency Representations Throughout Human Superior Temporal Cortex.

    PubMed

    Riecke, Lars; Peters, Judith C; Valente, Giancarlo; Kemper, Valentin G; Formisano, Elia; Sorger, Bettina

    2017-05-01

    A sound of interest may be tracked amid other salient sounds by focusing attention on its characteristic features including its frequency. Functional magnetic resonance imaging findings have indicated that frequency representations in human primary auditory cortex (AC) contribute to this feat. However, attentional modulations were examined at relatively low spatial and spectral resolutions, and frequency-selective contributions outside the primary AC could not be established. To address these issues, we compared blood oxygenation level-dependent (BOLD) responses in the superior temporal cortex of human listeners while they identified single frequencies versus listened selectively for various frequencies within a multifrequency scene. Using best-frequency mapping, we observed that the detailed spatial layout of attention-induced BOLD response enhancements in primary AC follows the tonotopy of stimulus-driven frequency representations-analogous to the "spotlight" of attention enhancing visuospatial representations in retinotopic visual cortex. Moreover, using an algorithm trained to discriminate stimulus-driven frequency representations, we could successfully decode the focus of frequency-selective attention from listeners' BOLD response patterns in nonprimary AC. Our results indicate that the human brain facilitates selective listening to a frequency of interest in a scene by reinforcing the fine-grained activity pattern throughout the entire superior temporal cortex that would be evoked if that frequency was present alone. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  10. Burned areas for the conterminous U.S. from 1984 through 2015, an automated approach using dense time-series of Landsat data

    NASA Astrophysics Data System (ADS)

    Hawbaker, T. J.; Vanderhoof, M.; Beal, Y. J. G.; Takacs, J. D.; Schmidt, G.; Falgout, J.; Brunner, N. M.; Caldwell, M. K.; Picotte, J. J.; Howard, S. M.; Stitt, S.; Dwyer, J. L.

    2016-12-01

    Complete and accurate burned area data are needed to document patterns of fires, to quantify relationships between the patterns and drivers of fire occurrence, and to assess the impacts of fires on human and natural systems. Unfortunately, many existing fire datasets in the United States are known to be incomplete, which complicates efforts to understand burned area patterns and introduces a large amount of uncertainty in efforts to identify their driving processes and impacts. Because of this, the need to systematically collect burned area information has been recognized by the United Nations Framework Convention on Climate Change and the Intergovernmental Panel on Climate Change, which have both called for the production of essential climate variables. To help meet this need, we developed a novel algorithm that automatically identifies burned areas in temporally dense time series of Landsat image stacks to produce Landsat Burned Area Essential Climate Variable (BAECV) products. The algorithm makes use of predictors derived from individual Landsat scenes, lagged reference conditions, and change metrics between the scene and reference predictors. Outputs of the BAECV algorithm, generated for the conterminous United States for 1984 through 2015, consist of burn probabilities for each Landsat scene, as well as annual composites including the maximum burn probability, burn classification, and the Julian date of the first Landsat scene in which a burn was observed. The BAECV products document patterns of fire occurrence that are not well characterized by existing fire datasets in the United States. We anticipate that these data could help to better understand past patterns of fire occurrence, the drivers that created them, and the impacts fires had on natural and human systems.
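
    A minimal sketch of how the annual composites named above (maximum burn probability, burn classification, and date of the first scene in which a burn was observed) could be derived from a stack of per-scene burn probabilities. The threshold value and array layout are illustrative assumptions, not the BAECV algorithm's actual settings.

      import numpy as np

      def annual_composites(burn_prob_stack, scene_doys, threshold=0.5):
          """burn_prob_stack: (n_scenes, rows, cols) per-scene burn probabilities.
          scene_doys: Julian day-of-year for each scene. Threshold is illustrative."""
          max_prob = burn_prob_stack.max(axis=0)
          burned = max_prob >= threshold                        # annual burn classification
          exceed = burn_prob_stack >= threshold
          first_idx = np.argmax(exceed, axis=0)                 # first scene meeting the threshold
          first_doy = np.asarray(scene_doys)[first_idx]
          first_doy = np.where(burned, first_doy, 0)            # 0 = no burn observed that year
          return max_prob, burned, first_doy

      stack = np.random.default_rng(2).random((12, 100, 100))   # 12 synthetic scenes
      doys = np.arange(15, 360, 30)
      max_prob, burned, first_doy = annual_composites(stack, doys)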

  11. Urban area change detection procedures with remote sensing data

    NASA Technical Reports Server (NTRS)

    Maxwell, E. L. (Principal Investigator); Riordan, C. J.

    1980-01-01

    The underlying factors affecting the detection and identification of nonurban to urban land cover change using satellite data were studied. Computer programs were developed to create a digital scene and to simulate the effect of the sensor point spread function (PSF) on the transfer of modulation from the scene to an image of the scene. The theory behind the development of a digital filter representing the PSF is given as well as an example of its application. Atmospheric effects on modulation transfer are also discussed. A user's guide and program listings are given.
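
    A small illustration of the simulation idea described above: applying a sensor point spread function, modeled here as a Gaussian for simplicity, to a simulated digital scene and observing the loss of modulation across a land cover boundary. The kernel shape and width are assumptions, not the filter developed in the report.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      # Simulated digital scene: a sharp urban/non-urban boundary (step edge).
      scene = np.zeros((200, 200))
      scene[:, 100:] = 1.0

      # Sensor PSF modeled as a Gaussian; sigma (in pixels) is an illustrative choice.
      image = gaussian_filter(scene, sigma=2.5)

      # The PSF widens the transition zone, i.e. modulation transfer is reduced.
      row = 100
      print("scene transition width (pixels):", int(np.sum((scene[row] > 0.1) & (scene[row] < 0.9))))
      print("image transition width (pixels):", int(np.sum((image[row] > 0.1) & (image[row] < 0.9))))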

  12. Investigation of Large Capacity Optical Memories for Correlator Applications.

    DTIC Science & Technology

    1981-10-01

    refringence varies to a certain extent over the area of each sample film due to nonuniform stretching during manufacture. Such variations make the...made with the tank placed in a large number of positions in the scene -- clearly an excessive burden. Now when the targets are positioned in the scene

  13. Differential Engagement of Brain Regions within a "Core" Network during Scene Construction

    ERIC Educational Resources Information Center

    Summerfield, Jennifer J.; Hassabis, Demis; Maguire, Eleanor A.

    2010-01-01

    Reliving past events and imagining potential future events engages a well-established "core" network of brain areas. How the brain constructs, or reconstructs, these experiences or scenes has been debated extensively in the literature, but remains poorly understood. Here we designed a novel task to investigate this (re)constructive process by…

  14. Reach Out and Touch Someone: West Alabama Designs a New Emergency Link.

    ERIC Educational Resources Information Center

    Coogan, Mercy Hardie

    1980-01-01

    Quality on-the-scene emergency care for a rural area is provided by West Alabama's Emergency Medical Services. The success of this delivery system is attributed to a radio/telephone communications system that provides quick, direct contact between paramedics at the scene and medical doctors miles away. (DS)

  15. Use of an Infrared Thermometer with Laser Targeting in Morphological Scene Change Detection for Fire Detection

    NASA Astrophysics Data System (ADS)

    Tickle, Andrew J.; Singh, Harjap; Grindley, Josef E.

    2013-06-01

    Morphological Scene Change Detection (MSCD) is a process typically tasked with detecting relevant changes in a guarded environment for security applications. This can be implemented on a Field Programmable Gate Array (FPGA) by a combination of binary differences based around exclusive-OR (XOR) gates, mathematical morphology and a crucial threshold setting. This is a robust technique and can be applied to many areas, from leak detection to movement tracking, and further augmented to perform additional functions such as watermarking and facial detection. Fire is a severe problem, and in areas where traditional fire alarm systems are not installed or feasible, it may not be detected until it is too late. Shown here is a way of adapting the traditional Morphological Scene Change Detector (MSCD) with a temperature sensor, so that if both the temperature sensor and the scene change detector are triggered, there is a high likelihood that fire is present. Such a system could be integrated into autonomous mobile robots so that they could undertake not only security patrols but also fire detection.
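
    A minimal software sketch of the combined decision described above: a morphological scene change detector (XOR of binarized frames followed by a morphological opening) gated by an infrared temperature reading. The thresholds and structuring element are illustrative, and this Python version only approximates what the paper maps onto an FPGA.

      import numpy as np
      from scipy.ndimage import binary_opening

      def scene_change(frame_a, frame_b, bin_threshold=128, min_changed_pixels=50):
          """XOR of binarized frames, cleaned by a morphological opening."""
          a = frame_a >= bin_threshold
          b = frame_b >= bin_threshold
          diff = np.logical_xor(a, b)
          cleaned = binary_opening(diff, structure=np.ones((3, 3)))   # remove speckle
          return cleaned.sum() >= min_changed_pixels

      def fire_alarm(frame_a, frame_b, spot_temperature_c, temp_threshold_c=60.0):
          """Alarm only when both the scene change detector and the IR thermometer trigger."""
          return scene_change(frame_a, frame_b) and spot_temperature_c >= temp_threshold_c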

  16. Monitoring gradual ecosystem change using Landsat time series analyses: case studies in selected forest and rangeland ecosystems

    USGS Publications Warehouse

    Vogelmann, James E.; Xian, George; Homer, Collin G.; Tolk, Brian

    2012-01-01

    The focus of the study was to assess gradual changes occurring throughout a range of natural ecosystems using decadal Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM +) time series data. Time series data stacks were generated for four study areas: (1) a four scene area dominated by forest and rangeland ecosystems in the southwestern United States, (2) a sagebrush-dominated rangeland in Wyoming, (3) woodland adjacent to prairie in northwestern Nebraska, and (4) a forested area in the White Mountains of New Hampshire. Through analyses of time series data, we found evidence of gradual systematic change in many of the natural vegetation communities in all four areas. Many of the conifer forests in the southwestern US are showing declines related to insects and drought, but very few are showing evidence of improving conditions or increased greenness. Sagebrush communities are showing decreases in greenness related to fire, mining, and probably drought, but very few of these communities are showing evidence of increased greenness or improving conditions. In Nebraska, forest communities are showing local expansion and increased canopy densification in the prairie–woodland interface, and in the White Mountains high elevation understory conifers are showing range increases towards lower elevations. The trends detected are not obvious through casual inspection of the Landsat images. Analyses of time series data using many scenes and covering multiple years are required in order to develop better impressions and representations of the changing ecosystem patterns and trends that are occurring. The approach described in this paper demonstrates that Landsat time series data can be used operationally for assessing gradual ecosystem change across large areas. Local knowledge and available ancillary data are required in order to fully understand the nature of these trends.
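
    The gradual changes discussed above are typically exposed by fitting a per-pixel trend to the multi-year time series. The sketch below is a generic illustration of that idea, assuming an NDVI-like greenness stack and an ordinary linear fit; it is not the authors' specific procedure.

      import numpy as np

      def greenness_trend(ndvi_stack, years):
          """ndvi_stack: (n_years, rows, cols). Returns per-pixel slope (greenness change per year)."""
          n, rows, cols = ndvi_stack.shape
          y = ndvi_stack.reshape(n, -1)
          slope, _ = np.polyfit(np.asarray(years, dtype=float), y, deg=1)
          return slope.reshape(rows, cols)

      years = np.arange(1985, 2011)
      stack = np.random.default_rng(3).random((years.size, 50, 50))
      declining = greenness_trend(stack, years) < -0.005      # flag gradually declining pixels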

  17. Masking interrupts figure-ground signals in V1.

    PubMed

    Lamme, Victor A F; Zipser, Karl; Spekreijse, Henk

    2002-10-01

    In a backward masking paradigm, a target stimulus is rapidly (<100 msec) followed by a second stimulus. This typically results in a dramatic decrease in the visibility of the target stimulus. It has been shown that masking reduces responses in V1. It is not known, however, which process in V1 is affected by the mask. In the past, we have shown that in V1, modulations of neural activity that are specifically related to figure-ground segregation can be recorded. Here, we recorded from awake macaque monkeys, engaged in a task where they had to detect figures from background in a pattern backward masking paradigm. We show that the V1 figure-ground signals are selectively and fully suppressed at target-mask intervals that psychophysically result in the target being invisible. Initial response transients, signalling the features that make up the scene, are not affected. As figure-ground modulations depend on feedback from extrastriate areas, these results suggest that masking selectively interrupts the recurrent interactions between V1 and higher visual areas.

  18. Do High-Functioning People with Autism Spectrum Disorder Spontaneously Use Event Knowledge to Selectively Attend to and Remember Context-Relevant Aspects in Scenes?

    ERIC Educational Resources Information Center

    Loth, Eva; Gomez, Juan Carlos; Happe, Francesca

    2011-01-01

    This study combined an event schema approach with top-down processing perspectives to investigate whether high-functioning children and adults with autism spectrum disorder (ASD) spontaneously attend to and remember context-relevant aspects of scenes. Participants read one story of story-pairs (e.g., burglary or tea party). They then inspected a…

  19. Ventral-stream-like shape representation: from pixel intensity values to trainable object-selective COSFIRE models

    PubMed Central

    Azzopardi, George; Petkov, Nicolai

    2014-01-01

    The remarkable abilities of the primate visual system have inspired the construction of computational models of some visual neurons. We propose a trainable hierarchical object recognition model, which we call S-COSFIRE (S stands for Shape and COSFIRE stands for Combination Of Shifted FIlter REsponses) and use it to localize and recognize objects of interest embedded in complex scenes. It is inspired by the visual processing in the ventral stream (V1/V2 → V4 → TEO). Recognition and localization of objects embedded in complex scenes is important for many computer vision applications. Most existing methods require prior segmentation of the objects from the background, which in turn requires recognition. An S-COSFIRE filter is automatically configured to be selective for an arrangement of contour-based features that belong to a prototype shape specified by an example. The configuration comprises selecting relevant vertex detectors and determining certain blur and shift parameters. The response is computed as the weighted geometric mean of the blurred and shifted responses of the selected vertex detectors. S-COSFIRE filters share similar properties with some neurons in inferotemporal cortex, which provided inspiration for this work. We demonstrate the effectiveness of S-COSFIRE filters in two applications: letter and keyword spotting in handwritten manuscripts and object spotting in complex scenes for the computer vision system of a domestic robot. S-COSFIRE filters are effective in recognizing and localizing (deformable) objects in images of complex scenes without requiring prior segmentation. They are versatile trainable shape detectors, conceptually simple and easy to implement. The presented hierarchical shape representation contributes to a better understanding of the brain and to more robust computer vision algorithms. PMID:25126068
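
    A simplified sketch of the response computation described above: the weighted geometric mean of blurred and shifted responses of selected feature detectors. The feature maps, blur widths, shift vectors, and weights are placeholders rather than a filter configured from a prototype shape.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def cosfire_like_response(feature_maps, shifts, sigmas, weights):
          """feature_maps: list of 2D response maps of selected vertex/feature detectors.
          shifts: (dy, dx) per map; sigmas: blur per map; weights: contribution per map."""
          responses = []
          for fmap, (dy, dx), sigma in zip(feature_maps, shifts, sigmas):
              blurred = gaussian_filter(fmap, sigma=sigma)
              shifted = np.roll(blurred, shift=(dy, dx), axis=(0, 1))
              responses.append(np.maximum(shifted, 1e-9))      # avoid zeros in the product
          responses = np.stack(responses)
          weights = np.asarray(weights, dtype=float)
          # Weighted geometric mean across the selected detectors.
          return np.exp((weights[:, None, None] * np.log(responses)).sum(0) / weights.sum())

      maps = [np.random.default_rng(i).random((64, 64)) for i in range(3)]
      out = cosfire_like_response(maps, shifts=[(0, 0), (3, -2), (-4, 1)],
                                  sigmas=[1.0, 2.0, 2.0], weights=[1.0, 0.8, 0.8])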

  20. Effective connectivity in the neural network underlying coarse-to-fine categorization of visual scenes. A dynamic causal modeling study.

    PubMed

    Kauffmann, Louise; Chauvin, Alan; Pichat, Cédric; Peyrin, Carole

    2015-10-01

    According to current models of visual perception scenes are processed in terms of spatial frequencies following a predominantly coarse-to-fine processing sequence. Low spatial frequencies (LSF) reach high-order areas rapidly in order to activate plausible interpretations of the visual input. This triggers top-down facilitation that guides subsequent processing of high spatial frequencies (HSF) in lower-level areas such as the inferotemporal and occipital cortices. However, dynamic interactions underlying top-down influences on the occipital cortex have never been systematically investigated. The present fMRI study aimed to further explore the neural bases and effective connectivity underlying coarse-to-fine processing of scenes, particularly the role of the occipital cortex. We used sequences of six filtered scenes as stimuli depicting coarse-to-fine or fine-to-coarse processing of scenes. Participants performed a categorization task on these stimuli (indoor vs. outdoor). Firstly, we showed that coarse-to-fine (compared to fine-to-coarse) sequences elicited stronger activation in the inferior frontal gyrus (in the orbitofrontal cortex), the inferotemporal cortex (in the fusiform and parahippocampal gyri), and the occipital cortex (in the cuneus). Dynamic causal modeling (DCM) was then used to infer effective connectivity between these regions. DCM results revealed that coarse-to-fine processing resulted in increased connectivity from the occipital cortex to the inferior frontal gyrus and from the inferior frontal gyrus to the inferotemporal cortex. Critically, we also observed an increase in connectivity strength from the inferior frontal gyrus to the occipital cortex, suggesting that top-down influences from frontal areas may guide processing of incoming signals. The present results support current models of visual perception and refine them by emphasizing the role of the occipital cortex as a cortical site for feedback projections in the neural network underlying coarse-to-fine processing of scenes. Copyright © 2015 Elsevier Inc. All rights reserved.
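
    A small sketch of how low and high spatial frequency versions of a scene can be produced with Gaussian filtering to build coarse-to-fine sequences like those described above. The cutoff values (sigmas) are illustrative and are not the filter settings used in the study.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def coarse_to_fine_sequence(scene, sigmas=(16, 8, 4, 2, 1, 0)):
          """Return a list of filtered versions of `scene`, from very coarse (strong
          low-pass) to fine (unfiltered). Sigma values in pixels are illustrative."""
          return [gaussian_filter(scene, s) if s > 0 else scene for s in sigmas]

      def high_pass(scene, sigma=4):
          """HSF version: original minus a low-pass copy."""
          return scene - gaussian_filter(scene, sigma)

      scene = np.random.default_rng(4).random((256, 256))
      sequence = coarse_to_fine_sequence(scene)          # six frames, coarse to fine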

  1. Add a picture for suspense: neural correlates of the interaction between language and visual information in the perception of fear.

    PubMed

    Willems, Roel M; Clevis, Krien; Hagoort, Peter

    2011-09-01

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of an as such neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants' brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information.

  2. Urbanization: Riyadh, Saudi Arabia

    NASA Image and Video Library

    2001-10-22

    Riyadh, the national capital of Saudi Arabia, is shown in 1972, 1990 and 2000. Its population grew in these years from about a half million to more than two million. Saudi Arabia experienced urbanization later than many other countries; in the early 1970s its urban-rural ratio was still about 1:3. By 1990 that had reversed to about 3:1. The city grew through in-migration from rural areas, and from decreases in the death rate while birthrates remained high. The 1972 image is a Landsat MSS scene; the 1990 image is a Landsat Thematic Mapper scene; and the 2000 image is an ASTER scene. All three images cover an area of about 27 x 34 km. The image is centered at 24.6 degrees north latitude, 46.6 degrees east longitude. http://photojournal.jpl.nasa.gov/catalog/PIA11087

  3. Study of LANDSAT-D thematic mapper performance as applied to hydrocarbon exploration

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Many scenes of particular interest have a light dusting of snow cover. The possible use of hue-saturation-intensity transformations to reduce the effect of snow cover is being investigated. A tape of the Greeley, Colorado scene was reviewed on the interactive system, and image types to be produced (decorrelated 2,3,4; natural color 1,2,3; hue separation value 5/2,5/7 eigen 1; 4,5,7 in two color combinations) were selected. In several instances, a 1,3,4 combination produces a more useful false color infrared version of TM data than the more common 2,3,4 arrangement, probably because band 1 is less highly correlated with bands 3 and 4 than is band 2. A review of spacecraft performance suggests that the standard corrections applied at GSFC are more complicated than necessary in some areas and insufficient in other cases. The image motion compensation device on the TM works so well that bow-tie effects are very small; there are differences in the radiometry of forward and backward scans that make additional calibration necessary.

  4. Can cigarette warnings counterbalance effects of smoking scenes in movies?

    PubMed

    Golmier, Isabelle; Chebat, Jean-Charles; Gélinas-Chebat, Claire

    2007-02-01

    Scenes in movies where smoking occurs have been empirically shown to influence teenagers to smoke cigarettes. The capacity of a Canadian warning label on cigarette packages to decrease the effects of smoking scenes in popular movies has been investigated. A 2 x 3 factorial design was used to test the effects of the same movie scene with or without electronic manipulation of all elements related to smoking, and of cigarette pack warnings, i.e., no warning, text-only warning, and text+picture warning. Teenagers' smoking-related stereotypes and intent to smoke were measured. It was found that, in the absence of a warning and in the presence of smoking scenes, teenagers showed positive smoking-related stereotypes. However, these effects were not observed if the teenagers were first exposed to a picture and text warning. Also, smoking-related stereotypes mediated the effect of the combined presentation of a text and picture warning and a smoking scene on teenagers' intent to smoke. The effectiveness of Canadian warning labels in preventing or decreasing cigarette smoking among teenagers is discussed, and areas of research are proposed.

  5. Trade-off between curvature tuning and position invariance in visual area V4

    PubMed Central

    Sharpee, Tatyana O.; Kouh, Minjoon; Reynolds, John H.

    2013-01-01

    Humans can rapidly recognize a multitude of objects despite differences in their appearance. The neural mechanisms that endow high-level sensory neurons with both selectivity to complex stimulus features and “tolerance” or invariance to identity-preserving transformations, such as spatial translation, remain poorly understood. Previous studies have demonstrated that both tolerance and selectivity to conjunctions of features are increased at successive stages of the ventral visual stream that mediates visual recognition. Within a given area, such as visual area V4 or the inferotemporal cortex, tolerance has been found to be inversely related to the sparseness of neural responses, which in turn was positively correlated with conjunction selectivity. However, the direct relationship between tolerance and conjunction selectivity has been difficult to establish, with different studies reporting either an inverse or no significant relationship. To resolve this, we measured V4 responses to natural scenes, and using recently developed statistical techniques, we estimated both the relevant stimulus features and the range of translation invariance for each neuron. Focusing the analysis on tuning to curvature, a tractable example of conjunction selectivity, we found that neurons that were tuned to more curved contours had smaller ranges of position invariance and produced sparser responses to natural stimuli. These trade-offs provide empirical support for recent theories of how the visual system estimates 3D shapes from shading and texture flows, as well as the tiling hypothesis of the visual space for different curvature values. PMID:23798444

  6. AREA RESTRICTIONS, RISK, HARM, AND HEALTH CARE ACCESS AMONG PEOPLE WHO USE DRUGS IN VANCOUVER, CANADA: A SPATIALLY ORIENTED QUALITATIVE STUDY

    PubMed Central

    McNeil, Ryan; Cooper, Hannah; Small, Will; Kerr, Thomas

    2015-01-01

    Area restrictions prohibiting people from entering drug scenes or areas where they were arrested are a common socio-legal mechanism employed to regulate the spatial practices of people who use drugs (PWUD). To explore how socio-spatial patterns stemming from area restrictions shape risk, harm, and health care access, qualitative interviews and mapping exercises were conducted with 24 PWUD with area restrictions in Vancouver, Canada. Area restrictions disrupted access to health and social resources (e.g., HIV care) concentrated in drug scenes, while territorial stigma prevented PWUD from accessing supports in other neighbourhoods. Rather than preventing involvement in drug-related activities, area restrictions displaced these activities to other locations and increased vulnerability to diverse risks and harms (e.g., unsafe drug use practices, violence). Given the harms stemming from area restrictions there is an urgent need to reconsider this socio-legal strategy. PMID:26241893

  7. Robust selectivity to two-object images in human visual cortex

    PubMed Central

    Agam, Yigal; Liu, Hesheng; Papanastassiou, Alexander; Buia, Calin; Golby, Alexandra J.; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

    SUMMARY We can recognize objects in a fraction of a second in spite of the presence of other objects [1–3]. The responses in macaque areas V4 and inferior temporal cortex [4–15] to a neuron’s preferred stimuli are typically suppressed by the addition of a second object within the receptive field (see however [16, 17]). How can this suppression be reconciled with rapid visual recognition in complex scenes? One option is that certain “special categories” are unaffected by other objects [18] but this leaves the problem unsolved for other categories. Another possibility is that serial attentional shifts help ameliorate the problem of distractor objects [19–21]. Yet, psychophysical studies [1–3], scalp recordings [1] and neurophysiological recordings [14, 16, 22–24], suggest that the initial sweep of visual processing contains a significant amount of information. We recorded intracranial field potentials in human visual cortex during presentation of flashes of two-object images. Visual selectivity from temporal cortex during the initial ~200 ms was largely robust to the presence of other objects. We could train linear decoders on the responses to isolated objects and decode information in two-object images. These observations are compatible with parallel, hierarchical and feed-forward theories of rapid visual recognition [25] and may provide a neural substrate to begin to unravel rapid recognition in natural scenes. PMID:20417105

  8. Mineral target areas in Nevada from geological analysis of LANDSAT-1 imagery

    NASA Technical Reports Server (NTRS)

    Abdel-Gawad, M.; Tubbesing, L.

    1975-01-01

    Geological analysis of LANDSAT-1 Scene MSS 1053-17540 suggests that certain known mineral districts in east-central Nevada frequently occur near faults or at faults or lineament intersections and areas of complex deformation and flexures. Seventeen (17) areas of analogous characteristics were identified as favorable targets for mineral exploration. During reconnaissance field trips eleven areas were visited. In three areas evidence was found of mining and/or prospecting not known before the field trips. In four areas favorable structural and alteration features were observed which call for more detailed field studies. In one of the four areas limonitic iron oxide samples were found in the regolith of a brecciated dolomite ridge. This area contains quartz veins, granitic and volcanic rocks and lies near the intersection of two linear fault structures identified in the LANDSAT-1 imagery. Semiquantitative spectroscopic analysis of selected portions of the samples showed abnormal contents of arsenic, molybdenum, copper, lead, zinc, and silver. These limonitic samples found were not in situ and further field studies are required to assess their source and significance.

  9. Active sensing in the categorization of visual patterns

    PubMed Central

    Yang, Scott Cheng-Hsin; Lengyel, Máté; Wolpert, Daniel M

    2016-01-01

    Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations. DOI: http://dx.doi.org/10.7554/eLife.12215.001 PMID:26880546

  10. Attention to Multiple Objects Facilitates Their Integration in Prefrontal and Parietal Cortex.

    PubMed

    Kim, Yee-Joon; Tsai, Jeffrey J; Ojemann, Jeffrey; Verghese, Preeti

    2017-05-10

    Selective attention is known to interact with perceptual organization. In visual scenes, individual objects that are distinct and discriminable may occur on their own, or in groups such as a stack of books. The main objective of this study is to probe the neural interaction that occurs between individual objects when attention is directed toward one or more objects. Here we record steady-state visual evoked potentials via electrocorticography to directly assess the responses to individual stimuli and to their interaction. When human participants attend to two adjacent stimuli, prefrontal and parietal cortex shows a selective enhancement of only the neural interaction between stimuli, but not the responses to individual stimuli. When only one stimulus is attended, the neural response to that stimulus is selectively enhanced in prefrontal and parietal cortex. In contrast, early visual areas generally manifest responses to individual stimuli and to their interaction regardless of attentional task, although a subset of the responses is modulated similarly to prefrontal and parietal cortex. Thus, the neural representation of the visual scene as one progresses up the cortical hierarchy becomes more highly task-specific and represents either individual stimuli or their interaction, depending on the behavioral goal. Attention to multiple objects facilitates an integration of objects akin to perceptual grouping. SIGNIFICANCE STATEMENT Individual objects in a visual scene are seen as distinct entities or as parts of a whole. Here we examine how attention to multiple objects affects their neural representation. Previous studies measured single-cell or fMRI responses and obtained only aggregate measures that combined the activity to individual stimuli as well as their potential interaction. Here, we directly measure electrocorticographic steady-state responses corresponding to individual objects and to their interaction using a frequency-tagging technique. Attention to two stimuli increases the interaction component that is a hallmark for perceptual integration of stimuli. Furthermore, this stimulus-specific interaction is represented in prefrontal and parietal cortex in a task-dependent manner. Copyright © 2017 the authors 0270-6474/17/374942-12$15.00/0.
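
    The frequency-tagging logic referenced above can be illustrated with a toy signal: power at the two tag frequencies indexes the responses to the individual stimuli, while power at an intermodulation frequency (for example f1 + f2) indexes their interaction. The tag frequencies, sampling rate, and plain periodogram below are assumptions for illustration only.

      import numpy as np

      def tagged_power(signal, fs, freqs):
          """Power of `signal` at each frequency in `freqs` from a simple periodogram."""
          spectrum = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
          fft_freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
          return [spectrum[np.argmin(np.abs(fft_freqs - f))] for f in freqs]

      fs = 1000.0
      t = np.arange(0, 10, 1.0 / fs)
      f1, f2 = 7.0, 11.0                                   # illustrative tag frequencies
      # Synthetic response: each stimulus plus a small nonlinear interaction term.
      signal = (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
                + 0.2 * np.sin(2 * np.pi * (f1 + f2) * t))
      p1, p2, p_interaction = tagged_power(signal, fs, [f1, f2, f1 + f2])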

  11. A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors

    PubMed Central

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255

  12. A multi-resolution approach for an automated fusion of different low-cost 3D sensors.

    PubMed

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-04-24

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory.

  13. Remote sensing of ephemeral water bodies in western Niger

    USGS Publications Warehouse

    Verdin, J.P.

    1996-01-01

    Research was undertaken to evaluate the feasibility of monitoring the small ephemeral water bodies of the Sahel with the 1.1 km resolution data of the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR). Twenty-one lakes of western Niger with good ground observation records were selected for examination. Thematic Mapper images from 1988 were first analysed to determine surface areas and temperature differences between water and adjacent land. Six AVHRR scenes from the 1988-89 dry season were then studied. It was found that a lake can be monitored until its surface area drops below 10 ha, in most cases. Furthermore, with prior knowledge of the location and shape of a water body, its surface area can be estimated from AVHRR band 5 data to within about 10 ha. These results are explained by the sharp temperature contrast between water and land, on the order of 13 °C.

  14. Attention in the real world: toward understanding its neural basis

    PubMed Central

    Peelen, Marius V.; Kastner, Sabine

    2016-01-01

    The efficient selection of behaviorally relevant objects from cluttered environments supports our everyday goals. Attentional selection has typically been studied in search tasks involving artificial and simplified displays. Although these studies have revealed important basic principles of attention, they do not explain how the brain efficiently selects familiar objects in complex and meaningful real-world scenes. Findings from recent neuroimaging studies indicate that real-world search is mediated by ‘what’ and ‘where’ attentional templates that are implemented in high-level visual cortex. These templates represent target-diagnostic properties and likely target locations, respectively, and are shaped by object familiarity, scene context, and memory. We propose a framework for real-world search that incorporates these recent findings and specifies directions for future study. PMID:24630872

  15. Scene recognition based on integrating active learning with dictionary learning

    NASA Astrophysics Data System (ADS)

    Wang, Chengxi; Yin, Xueyan; Yang, Lin; Gong, Chengrong; Zheng, Caixia; Yi, Yugen

    2018-04-01

    Scene recognition is a significant topic in the field of computer vision. Most existing scene recognition models require a large number of labeled training samples to achieve good performance. However, labeling images manually is a time-consuming task and often unrealistic in practice. To obtain satisfactory recognition results when labeled samples are insufficient, this paper proposes a scene recognition algorithm named Integrating Active Learning and Dictionary Learning (IALDL). IALDL adopts projective dictionary pair learning (DPL) as the classifier and introduces an active learning mechanism into DPL to improve its performance. When constructing the sampling criterion for active learning, IALDL considers both uncertainty and representativeness in order to select useful unlabeled samples from a given sample set and expand the training dataset. Experimental results on three standard databases demonstrate the feasibility and validity of the proposed IALDL.
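
    A hedged sketch of a sampling criterion in the spirit of the description above, combining uncertainty (entropy of predicted class probabilities) with representativeness (mean similarity to the remaining unlabeled samples). The scoring functions, their weighting, and the use of a generic probabilistic classifier in place of DPL are assumptions.

      import numpy as np

      def select_queries(proba_unlabeled, features_unlabeled, n_queries=10, alpha=0.5):
          """proba_unlabeled: (n, n_classes) class probabilities from the current classifier.
          features_unlabeled: (n, d) feature vectors. Returns indices of samples to label."""
          eps = 1e-12
          # Uncertainty: entropy of the predicted class distribution.
          uncertainty = -(proba_unlabeled * np.log(proba_unlabeled + eps)).sum(axis=1)
          # Representativeness: mean cosine similarity to the other unlabeled samples.
          normed = features_unlabeled / (np.linalg.norm(features_unlabeled, axis=1, keepdims=True) + eps)
          representativeness = (normed @ normed.T).mean(axis=1)
          score = (alpha * uncertainty / uncertainty.max()
                   + (1 - alpha) * representativeness / representativeness.max())
          return np.argsort(score)[-n_queries:]

      rng = np.random.default_rng(5)
      proba = rng.dirichlet(np.ones(8), size=200)          # 200 unlabeled samples, 8 classes
      feats = rng.standard_normal((200, 64))
      query_idx = select_queries(proba, feats)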

  16. Synchronous contextual irregularities affect early scene processing: replication and extension.

    PubMed

    Mudrik, Liad; Shalgi, Shani; Lamy, Dominique; Deouell, Leon Y

    2014-04-01

    Whether contextual regularities facilitate perceptual stages of scene processing is widely debated, and empirical evidence is still inconclusive. Specifically, it was recently suggested that contextual violations affect early processing of a scene only when the incongruent object and the scene are presented asynchronously, creating expectations. We compared event-related potentials (ERPs) evoked by scenes that depicted a person performing an action using either a congruent or an incongruent object (e.g., a man shaving with a razor or with a fork) when scene and object were presented simultaneously. We also explored the role of attention in contextual processing by using a pre-cue to direct subjects' attention towards or away from the congruent/incongruent object. Subjects' task was to determine how many hands the person in the picture used in order to perform the action. We replicated our previous findings of frontocentral negativity for incongruent scenes that started ~210 ms post stimulus presentation, even earlier than previously found. Surprisingly, this incongruency ERP effect was negatively correlated with the reaction time cost on incongruent scenes. The results did not allow us to draw conclusions about the role of attention in detecting the regularity, due to a weak attention manipulation. By replicating the 200-300 ms incongruity effect with a new group of subjects at even earlier latencies than previously reported, the results strengthen the evidence for contextual processing during this time window even when simultaneous presentation of the scene and object prevents the formation of prior expectations. We discuss possible methodological limitations that may account for previous failures to find this effect, and conclude that contextual information affects object model selection processes prior to full object identification, with semantic knowledge activation stages unfolding only later on. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Infrared hyperspectral imaging for chemical vapour detection

    NASA Astrophysics Data System (ADS)

    Ruxton, K.; Robertson, G.; Miller, W.; Malcolm, G. P. A.; Maker, G. T.; Howle, C. R.

    2012-10-01

    Active hyperspectral imaging is a valuable tool in a wide range of applications. One such area is the detection and identification of chemicals, especially toxic chemical warfare agents, through analysis of the resulting absorption spectrum. This work presents a selection of results from a prototype midwave infrared (MWIR) hyperspectral imaging instrument that has successfully been used for compound detection at a range of standoff distances. Active hyperspectral imaging utilises a broadly tunable laser source to illuminate the scene with light at a range of wavelengths. While there are a number of illumination methods, the chosen configuration illuminates the scene by raster scanning the laser beam using a pair of galvanometric mirrors. The resulting backscattered light from the scene is collected by the same mirrors and focussed onto a suitable single-point detector, where the image is constructed pixel by pixel. The imaging instrument that was developed in this work is based around an IR optical parametric oscillator (OPO) source with broad tunability, operating in the 2.6 to 3.7 μm (MWIR) and 1.5 to 1.8 μm (shortwave IR, SWIR) spectral regions. The MWIR beam was primarily used as it addressed the fundamental absorption features of the target compounds compared to the overtone and combination bands in the SWIR region, which can be less intense by more than an order of magnitude. We show that a prototype NCI instrument was able to locate hydrocarbon materials at distances up to 15 metres.

  18. Methods of editing cloud and atmospheric layer affected pixels from satellite data

    NASA Technical Reports Server (NTRS)

    Nixon, P. R.; Wiegand, C. L.; Richardson, A. J.; Johnson, M. P. (Principal Investigator)

    1982-01-01

    Subvisible cirrus clouds (SCi) were easily distinguished in mid-infrared (MIR) TIROS-N daytime data from south Texas and northeast Mexico. The MIR (3.55-3.93 micrometer) pixel digital count means of the SCi affected areas were more than 3.5 standard deviations on the cold side of the scene means. (These standard deviations were made free of the effects of unusual instrument error by factoring out the Ch 3 MIR noise on the basis of detailed examination of noisy and noise-free pixels). SCi affected areas in the IR Ch 4 (10.5-11.5 micrometer) appeared cooler than the general scene, but were not as prominent as in Ch 3, being less than 2 standard deviations from the scene mean. Ch 3 and 4 standard deviations and coefficients of variation are not reliable indicators, by themselves, of the presence of SCi because land features can have similar statistical properties.
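
    A minimal sketch of the screening rule implied above: flag pixels whose mid-infrared values fall more than 3.5 (noise-corrected) standard deviations on the cold side of the scene mean. The quadrature noise correction and the convention that lower values mean colder are simplifying assumptions.

      import numpy as np

      def flag_sci_pixels(mir_values, instrument_noise_std=0.0, n_sigma=3.5):
          """Boolean mask of pixels more than n_sigma scene standard deviations on the
          cold side of the scene mean. Assumes lower values mean colder; instrument
          noise is removed from the scene variance in quadrature."""
          scene_mean = mir_values.mean()
          scene_var = max(mir_values.var() - instrument_noise_std ** 2, 0.0)
          return mir_values < scene_mean - n_sigma * np.sqrt(scene_var)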

  19. Urbanization: Riyadh, Saudi Arabia

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Riyadh, the national capital of Saudi Arabia, is shown in 1972, 1990 and 2000. Its population grew in these years from about a half million to more than two million. Saudi Arabia experienced urbanization later than many other countries; in the early 1970s its urban-rural ratio was still about 1:3. By 1990 that had reversed to about 3:1. The city grew through in-migration from rural areas, and from decreases in the death rate while birthrates remained high. The 1972 image is a Landsat MSS scene; the 1990 image is a Landsat Thematic Mapper scene; and the 2000 image is an ASTER scene. All three images cover an area of about 27 x 34 km. The image is centered at 24.6 degrees north latitude, 46.6 degrees east longitude.

    The U.S. science team is located at NASA's Jet Propulsion Laboratory, Pasadena, Calif. The Terra mission is part of NASA's Science Mission Directorate.

  20. Evaluating the design of an earth radiation budget instrument with system simulations. Part 2: Minimization of instantaneous sampling errors for CERES-I

    NASA Technical Reports Server (NTRS)

    Stowe, Larry; Hucek, Richard; Ardanuy, Philip; Joyce, Robert

    1994-01-01

    Much of the new record of broadband earth radiation budget satellite measurements to be obtained during the late 1990s and early twenty-first century will come from the dual-radiometer Clouds and Earth's Radiant Energy System Instrument (CERES-I) flown aboard sun-synchronous polar orbiters. Simulation studies conducted in this work for an early afternoon satellite orbit indicate that spatial root-mean-square (rms) sampling errors of instantaneous CERES-I shortwave flux estimates will range from about 8.5 to 14.0 W/sq m on a 2.5 deg latitude and longitude grid resolution. Rms errors in longwave flux estimates are only about 20% as large and range from 1.5 to 3.5 W/sq m. These results are based on an optimal cross-track scanner design that includes 50% footprint overlap to eliminate gaps in the top-of-the-atmosphere coverage, and a 'smallest' footprint size to increase the ratio of the number of observations lying within grid areas to the number lying on grid area boundaries. Total instantaneous measurement error also depends on the variability of anisotropic reflectance and emission patterns and on retrieval methods used to generate target area fluxes. Three retrieval procedures using data from both CERES-I scanners (cross-track and rotating azimuth plane) are used. (1) The baseline Earth Radiation Budget Experiment (ERBE) procedure, which assumes that errors due to the use of mean angular dependence models (ADMs) in the radiance-to-flux inversion process nearly cancel when averaged over grid areas. (2) To estimate N, instantaneous ADMs are estimated from the multiangular, collocated observations of the two scanners. These observed models replace the mean models in computation of satellite flux estimates. (3) The scene flux approach conducts separate target-area retrievals for each ERBE scene category and combines their results using area weighting by scene type. The ERBE retrieval performs best when the simulated radiance field departs from the ERBE mean models by less than 10%. For larger perturbations, both the scene flux and collocation methods produce less error than the ERBE retrieval. The scene flux technique is preferable, however, because it involves fewer restrictive assumptions.
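
    A small illustration of the scene flux idea in item (3) above: retrieve a flux for each ERBE scene category separately and combine the results with area weighting. The scene categories and numbers are purely illustrative.

      import numpy as np

      def scene_flux(flux_by_scene_type, area_fraction_by_scene_type):
          """Combine per-scene-type flux retrievals into a single target-area flux
          using area weighting; fractions are assumed to sum to 1."""
          flux = np.asarray(flux_by_scene_type, dtype=float)
          frac = np.asarray(area_fraction_by_scene_type, dtype=float)
          return float((flux * frac).sum() / frac.sum())

      # Illustrative numbers only (W per square meter and area fractions):
      print(scene_flux([95.0, 320.0, 180.0], [0.5, 0.3, 0.2]))   # e.g. clear, overcast, partly cloudy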

  1. Three-dimensional measurement system for crime scene documentation

    NASA Astrophysics Data System (ADS)

    Adamczyk, Marcin; Hołowko, Elwira; Lech, Krzysztof; Michoński, Jakub; Mączkowski, Grzegorz; Bolewicki, Paweł; Januszkiewicz, Kamil; Sitnik, Robert

    2017-10-01

    Three-dimensional measurements (such as photogrammetry, Time of Flight, Structure from Motion or Structured Light techniques) are becoming a standard in the crime scene documentation process. The use of 3D measurement techniques enables more insightful investigation and helps to show every trace in the context of the entire crime scene. In this paper we present a hierarchical three-dimensional measurement system designed for the crime scene documentation process. Our system reflects current standards in crime scene documentation: it performs measurements in two stages. The first, most general stage is carried out with a scanner that has relatively low spatial resolution but a large measuring volume, and is used to document the whole scene. The second stage is much more detailed: high resolution with a smaller measuring volume for areas that require a closer look. The documentation process is supervised by a specialised application, CrimeView3D, a software platform for measurement management (connecting to the scanners, carrying out measurements, and automatic or semi-automatic data registration in real time) and data visualisation (3D visualisation of the documented scenes). It also provides a series of useful tools for forensic technicians: a virtual measuring tape, searching for sources of blood spatter, a virtual walk through the crime scene, and many others. In this paper we present the measuring system and the developed software, along with the outcome of a metrological validation of the scanners performed according to the VDI/VDE standard and results from measurement sessions conducted at real crime scenes in cooperation with technicians from the Central Forensic Laboratory of the Police.

  2. Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing

    PubMed Central

    Zhang, Qianghui; Wu, Junjie; Li, Wenchao; Huang, Yulin; Yang, Jianyu; Yang, Haiguang

    2016-01-01

    Free of the constraints of orbit mechanisms, weather conditions and minimum antenna area, synthetic aperture radar (SAR) carried on a near-space platform is more suitable for sustained large-scene imaging than its spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), which is a novel wide-swath imaging mode and allows the beam of SAR to scan along the azimuth, can reduce the time of echo acquisition for a large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, firstly, two-step processing (TSP) is adopted to eliminate the Doppler aliasing of the echo. Then, the data is focused in the two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging applications. PMID:27472341

  3. Guaranteeing Failsafe Operation of Extended-Scene Shack-Hartmann Wavefront Sensor Algorithm

    NASA Technical Reports Server (NTRS)

    Sidick, Erikin

    2009-01-01

    A Shack-Hartmann sensor (SHS) is an optical instrument consisting of a lenslet array and a camera. It is widely used for wavefront sensing in optical testing and astronomical adaptive optics. The camera is placed at the focal point of the lenslet array and points at a star or any other point source. The image captured is an array of spot images. When the wavefront error at the lenslet array changes, the position of each spot measurably shifts from its original position. Determining the shifts of the spot images from their reference points shows the extent of the wavefront error. An adaptive cross-correlation (ACC) algorithm has been developed to use scenes as well as point sources for wavefront error detection. Qualifying an extended scene image is often not an easy task due to changing conditions in scene content, illumination level, background, Poisson noise, read-out noise, dark current, sampling format, and field of view. The proposed new technique, based on the ACC algorithm, analyzes the effects of these conditions on the performance of the ACC algorithm and determines the viability of an extended scene image. If it is viable, then it can be used for error correction; if it is not, the image fails and will not be further processed. By potentially testing for a wide variety of conditions, the algorithm's accuracy can be virtually guaranteed. In a typical application, the ACC algorithm finds image shifts of more than 500 Shack-Hartmann camera sub-images relative to a reference sub-image or cell when performing one wavefront sensing iteration. In the proposed new technique, a pair of test and reference cells is selected from the same frame, preferably from two well-separated locations. The test cell is shifted by an integer number of pixels, say, for example, from m = -5 to 5 along the x-direction by choosing a different area on the same sub-image, and the shifts are estimated using the ACC algorithm. The same is done in the y-direction. If the resulting shift estimate errors are less than a pre-determined threshold (e.g., 0.03 pixel), the image is accepted. Otherwise, it is rejected.
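
    A simplified version of the self-test described above: shift a cell by known integer offsets, estimate the shift, and accept the frame only if the estimates match. Plain FFT cross-correlation stands in for the adaptive cross-correlation algorithm, and an integer-level tolerance replaces the 0.03-pixel threshold, which only makes sense for a sub-pixel estimator.

      import numpy as np

      def estimate_shift(reference, test):
          """Integer-pixel shift of `test` relative to `reference` via FFT cross-correlation
          (a stand-in for the adaptive cross-correlation algorithm)."""
          corr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(test))).real
          peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
          shape = np.array(corr.shape, dtype=float)
          return np.where(peak > shape / 2, peak - shape, peak)    # wrap to signed shifts

      def cell_is_viable(cell, tolerance=0.5):
          """Apply known x-shifts from -5 to 5 pixels and require that they are recovered."""
          for m in range(-5, 6):
              shifted = np.roll(cell, shift=m, axis=1)             # known shift along x
              est = estimate_shift(cell, shifted)
              if abs(est[1] + m) > tolerance:                      # correlation peak sits at -m
                  return False
          return True

      cell = np.random.default_rng(8).random((32, 32))
      print(cell_is_viable(cell))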

  4. Spectral Variability among Rocks in Visible and Near Infrared Multispectral Pancam Data Collected at Gusev Crater: Examinations using Spectral Mixture Analysis and Related Techniques

    NASA Technical Reports Server (NTRS)

    Farrand, W. H.; Bell, J. F., III; Johnson, J. R.; Squyres, S. W.; Soderblom, J.; Ming, D. W.

    2006-01-01

    Visible and Near Infrared (VNIR) multispectral observations of rocks made by the Mars Exploration Rover Spirit's Panoramic camera (Pancam) have been analyzed using a spectral mixture analysis (SMA) methodology. Scenes have been examined from the Gusev crater plains into the Columbia Hills. Most scenes on the plains and in the Columbia Hills could be modeled as three-endmember mixtures of a bright material, rock, and shade. Scenes of rocks disturbed by the rover's Rock Abrasion Tool (RAT) required additional endmembers. In the Columbia Hills there were a number of scenes in which additional rock endmembers were required. The SMA methodology identified relatively dust-free areas on undisturbed rock surfaces, as well as spectrally unique areas on RAT-abraded rocks. Spectral parameters from these areas were examined and six spectral classes were identified. These classes are named after a type rock or area and are: Adirondack, Lower West Spur, Clovis, Wishstone, Peace, and Watchtower. These classes are discriminable based primarily on near-infrared (NIR) spectral parameters. Clovis and Watchtower class rocks appear more oxidized than Wishstone class rocks and Adirondack basalts, based on their higher 535 nm band depths. Comparison of the spectral parameters of these Gusev crater rocks to parameters of glass-dominated basaltic tuffs indicates correspondence with the Clovis and Watchtower classes, but divergence for the Wishstone class rocks, which appear to have a higher fraction of crystalline ferrous iron-bearing phases. Despite a high sulfur content, the rock Peace has NIR properties resembling plains basalts.
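
    A minimal sketch of the linear mixture model underlying SMA with the three endmembers named above (bright material, rock, shade): each pixel spectrum is modeled as a fraction-weighted sum of endmember spectra, and the fractions are recovered by least squares with a sum-to-one constraint. The spectra below are synthetic placeholders, not Pancam values.

      import numpy as np

      rng = np.random.default_rng(1)
      n_bands = 11                          # e.g., number of multispectral bands (assumed)
      E = np.column_stack([                 # endmember matrix, one column per endmember
          rng.uniform(0.4, 0.6, n_bands),   # "bright material" (dust)
          rng.uniform(0.1, 0.3, n_bands),   # "rock"
          np.full(n_bands, 0.02),           # "shade"
      ])

      true_fractions = np.array([0.2, 0.7, 0.1])                    # sum to one
      pixel = E @ true_fractions + rng.normal(0, 0.002, n_bands)    # observed mixed spectrum

      # Least-squares unmixing with a sum-to-one constraint (appended as an extra row).
      A = np.vstack([E, np.ones((1, 3))])
      b = np.concatenate([pixel, [1.0]])
      fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
      rmse = np.sqrt(np.mean((E @ fractions - pixel) ** 2))
      print("estimated fractions:", np.round(fractions, 3), " fit RMSE:", round(rmse, 4))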

  5. Pattern recognition of native plant communities: Manitou Colorado test site

    NASA Technical Reports Server (NTRS)

    Driscoll, R. S.

    1972-01-01

    Optimum channel selection among 12 channels of multispectral scanner imagery identified six as providing the best information about 11 vegetation classes and two nonvegetation classes at the Manitou Experimental Forest. Intensive preprocessing of the scanner signals was required to eliminate a serious scan angle effect. Final processing of the normalized data provided acceptable recognition results of generalized plant community types. Serious errors occurred with attempts to classify specific community types within upland grassland areas. The consideration of the convex mixtures concept (effects of amounts of live plant cover, exposed soil, and plant litter cover on apparent scene radiances) significantly improved the classification of some of the grassland classes.

  6. Polar cloud and surface classification using AVHRR imagery - An intercomparison of methods

    NASA Technical Reports Server (NTRS)

    Welch, R. M.; Sengupta, S. K.; Goroch, A. K.; Rabindra, P.; Rangaraj, N.; Navar, M. S.

    1992-01-01

    Six Advanced Very High-Resolution Radiometer local area coverage (AVHRR LAC) arctic scenes are classified into ten classes. Three different classifiers are examined: (1) the traditional stepwise discriminant analysis (SDA) method; (2) the feed-forward back-propagation (FFBP) neural network; and (3) the probabilistic neural network (PNN). More than 200 spectral and textural measures are computed. These are reduced to 20 features using sequential forward selection. Theoretical accuracy of the classifiers is determined using the bootstrap approach. Overall accuracy is 85.6 percent, 87.6 percent, and 87.0 percent for the SDA, FFBP, and PNN classifiers, respectively, with standard deviations of approximately 1 percent.
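
    A minimal sketch of the feature-reduction step described above, using scikit-learn's sequential forward selector on synthetic data. The toy feature set and the simple discriminant classifier stand in for the study's more than 200 spectral and textural measures and its SDA/FFBP/PNN classifiers.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.feature_selection import SequentialFeatureSelector

      # Toy stand-in: 1000 samples, 200 candidate features, 10 scene classes.
      X, y = make_classification(n_samples=1000, n_features=200, n_informative=30,
                                 n_classes=10, random_state=0)

      # Greedy forward selection of 20 features, scored by cross-validated accuracy.
      selector = SequentialFeatureSelector(LinearDiscriminantAnalysis(),
                                           n_features_to_select=20,
                                           direction="forward", cv=3, n_jobs=-1)
      selector.fit(X, y)
      print("selected feature indices:", np.flatnonzero(selector.get_support()))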

  7. Psychophysical Criteria for Visual Simulation Systems.

    DTIC Science & Technology

    1980-05-01

    definitive data were found to establish detection thresholds; therefore, this is one area where a psychophysical study was recommended. Differential size...The specific functional relationships needing quantification were the following: 1. The effect of Horizontal Aniseikonia on Target Detection and...Transition Technique 6. The Effects of Scene Complexity and Separation on the Detection of Scene Misalignment 7. Absolute Brightness Levels in

  8. Negotiating place and gendered violence in Canada's largest open drug scene.

    PubMed

    McNeil, Ryan; Shannon, Kate; Shaver, Laura; Kerr, Thomas; Small, Will

    2014-05-01

    Vancouver's Downtown Eastside is home to Canada's largest street-based drug scene and only supervised injection facility (Insite). High levels of violence among men and women have been documented in this neighbourhood. This study was undertaken to explore the role of violence in shaping the socio-spatial relations of women and 'marginal men' (i.e., those occupying subordinate positions within the drug scene) in the Downtown Eastside, including access to Insite. Semi-structured qualitative interviews were conducted with 23 people who inject drugs (PWID) recruited through the Vancouver Area Network of Drug Users, a local drug user organization. Interviews included a mapping exercise. Interview transcripts and maps were analyzed thematically, with an emphasis on how gendered violence shaped participants' spatial practices. Hegemonic forms of masculinity operating within the Downtown Eastside framed the everyday violence experienced by women and marginal men. This violence shaped the spatial practices of women and marginal men, in that they avoided drug scene milieus where they had experienced violence or that they perceived to be dangerous. Some men linked their spatial restrictions to the perceived 'dope quality' of neighbourhood drug dealers to maintain claims to dominant masculinities while enacting spatial strategies to promote safety. Environmental supports provided by health and social care agencies were critical in enabling women and marginal men to negotiate place and survival within the context of drug scene violence. Access to Insite did not motivate participants to enter into "dangerous" drug scene milieus but they did venture into these areas if necessary to obtain drugs or generate income. Gendered violence is critical in restricting the geographies of men and marginal men within the street-based drug scene. There is a need to scale up existing environmental interventions, including supervised injection services, to minimize violence and potential drug-related risks among these highly-vulnerable PWID. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. NEGOTIATING PLACE AND GENDERED VIOLENCE IN CANADA’S LARGEST OPEN DRUG SCENE

    PubMed Central

    McNeil, Ryan; Shannon, Kate; Shaver, Laura; Kerr, Thomas; Small, Will

    2014-01-01

    Background Vancouver’s Downtown Eastside is home to Canada’s largest street-based drug scene and only supervised injection facility (Insite). High levels of violence among men and women have been documented in this neighbourhood. This study was undertaken to explore the role of violence in shaping the socio-spatial relations of women and ‘marginal men’ (i.e., those occupying subordinate positions within the drug scene) in the Downtown Eastside, including access to Insite. Methods Semi-structured qualitative interviews were conducted with 23 people who inject drugs (PWID) recruited through the Vancouver Area Network of Drug Users, a local drug user organization. Interviews included a mapping exercise. Interview transcripts and maps were analyzed thematically, with an emphasis on how gendered violence shaped participants’ spatial practices. Results Hegemonic forms of masculinity operating within the Downtown Eastside framed the everyday violence experienced by women and marginal men. This violence shaped the spatial practices of women and marginal men, in that they avoided drug scene milieus where they had experienced violence or that they perceived to be dangerous. Some men linked their spatial restrictions to the perceived 'dope quality' of neighbourhood drug dealers to maintain claims to dominant masculinities while enacting spatial strategies to promote safety. Environmental supports provided by health and social care agencies were critical in enabling women and marginal men to negotiate place and survival within the context of drug scene violence. Access to Insite did not motivate participants to enter into “dangerous” drug scene milieus but they did venture into these areas if necessary to obtain drugs or generate income. Conclusion Gendered violence is critical in restricting the geographies of men and marginal men within the street-based drug scene. There is a need to scale up existing environmental interventions, including supervised injection services, to minimize violence and potential drug-related risks among these highly-vulnerable PWID. PMID:24332972

  10. Mapping simulated scenes with skeletal remains using differential GPS in open environments: an assessment of accuracy and practicality.

    PubMed

    Walter, Brittany S; Schultz, John J

    2013-05-10

    Scene mapping is an integral aspect of processing a scene with scattered human remains. By utilizing the appropriate mapping technique, investigators can accurately document the location of human remains and maintain a precise geospatial record of evidence. One option that has not received much attention for mapping forensic evidence is the differential global positioning system (DGPS) unit, as this technology now provides decreased positional error suitable for mapping scenes. Because of the lack of knowledge concerning its utility in mapping a scene, controlled research is necessary to determine the practicality of using newer and enhanced DGPS units in mapping scattered human remains. The purpose of this research was to quantify the accuracy of a DGPS unit for mapping skeletal dispersals and to determine the applicability of this technology in mapping a scene with dispersed remains. First, the accuracy of the DGPS unit in open environments was determined using known survey markers in open areas. Secondly, three simulated scenes exhibiting different types of dispersals were constructed and mapped in an open environment using the DGPS. Variables considered during data collection included the extent of the dispersal, data collection time, data collected on different days, and different postprocessing techniques. Data were differentially postprocessed and compared in a geographic information system (GIS) to evaluate the most efficient recordation methods. Results of this study demonstrate that the DGPS is a viable option for mapping dispersed human remains in open areas. The accuracy of collected point data was 11.52 and 9.55 cm for 50- and 100-s collection times, respectively, and the orientation and maximum length of long bones were maintained. Also, the use of error buffers for point data of bones in maps demonstrated the error of the DGPS unit, while showing that the context of the dispersed skeleton was accurately maintained. Furthermore, the application of a DGPS for accurate scene mapping is discussed, and guidelines concerning the implementation of this technology for mapping scattered human skeletal remains in open environments are provided. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
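
    A minimal sketch of the kind of accuracy assessment described above: horizontal offsets between postprocessed DGPS fixes and known survey-marker coordinates, summarized as mean error and RMSE. The easting/northing values are hypothetical.

      import numpy as np

      # Known survey-marker positions and postprocessed DGPS fixes (easting, northing) [m].
      surveyed = np.array([[512340.102, 3150220.455],
                           [512355.870, 3150198.312],
                           [512361.448, 3150241.027]])
      dgps = np.array([[512340.180, 3150220.390],
                       [512355.760, 3150198.375],
                       [512361.510, 3150240.940]])

      errors = np.linalg.norm(dgps - surveyed, axis=1)     # horizontal error per point [m]
      print("per-point error (cm):", np.round(errors * 100, 1))
      print("mean error: %.1f cm, RMSE: %.1f cm"
            % (errors.mean() * 100, np.sqrt((errors ** 2).mean()) * 100))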

  11. A study of payload specialist station monitor size constraints. [space shuttle orbiters

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, M., III; Shields, N. L., Jr.; Malone, T. B.

    1975-01-01

    Constraints on the CRT display size for the shuttle orbiter cabin are studied. The viewing requirements placed on these monitors were assumed to involve display of imaged scenes providing visual feedback during payload operations and display of alphanumeric characters. Data on target recognition/resolution, target recognition, and range rate detection by human observers were utilized to determine viewing requirements for imaged scenes. Field-of-view and acuity requirements for a variety of payload operations were obtained along with the necessary detection capability in terms of range-to-target size ratios. The monitor size necessary to meet the acuity requirements was established. An empirical test was conducted to determine required recognition sizes for displayed alphanumeric characters. The results of the test were used to determine the number of characters which could be simultaneously displayed based on the recognition size requirements using the proposed monitor size. A CRT display of 20 x 20 cm is recommended. A portion of the display area is used for displaying imaged scenes and the remaining display area is used for alphanumeric characters pertaining to the displayed scene. The entire display is used for the character alone mode.

  12. Effects of Aging on the Neural Correlates of Successful Item and Source Memory Encoding

    PubMed Central

    Dennis, Nancy A.; Hayes, Scott M.; Prince, Steven E.; Madden, David J.; Huettel, Scott A.; Cabeza, Roberto

    2009-01-01

    To investigate the neural basis of age-related source memory (SM) deficits, young and older adults were scanned with fMRI while encoding faces, scenes, and face-scene pairs. Successful encoding activity was identified by comparing encoding activity for subsequently remembered versus forgotten items or pairs. Age deficits in successful encoding activity in hippocampal and prefrontal regions were more pronounced for SM (pairs) compared to item memory (faces and scenes). Age-related reductions were also found in regions specialized in processing faces (fusiform face area) and scenes (parahippocampal place area), but these reductions were similar for item and SM. Functional connectivity between the hippocampus and the rest of the brain was also affected by aging; whereas connections with posterior cortices were weaker in older adults, connections with anterior cortices including prefrontal regions were stronger in older adults. Taken together, the results provide a link between SM deficits in older adults and reduced recruitment of hippocampal and prefrontal regions during encoding. The functional connectivity findings are consistent with a posterior-anterior shift with aging (PASA), previously reported in several cognitive domains and linked to functional compensation. PMID:18605869

  13. Goal-Side Selection in Soccer Penalty Kicking When Viewing Natural Scenes

    PubMed Central

    Weigelt, Matthias; Memmert, Daniel

    2012-01-01

    The present study investigates the influence of goalkeeper displacement on goal-side selection in soccer penalty kicking. Facing a penalty situation, participants viewed photo-realistic images of a goalkeeper and a soccer goal. In the action selection task, they were asked to kick to the larger goal-side, and in the perception task, they indicated the position of the goalkeeper on the goal line. To this end, the goalkeeper was depicted in a regular goalkeeping posture, standing either in the exact middle of the goal or displaced at different distances to the left or right of the goal's center. Results showed that the goalkeeper's position on the goal line systematically affected goal-side selection, even when participants were not aware of the displacement. These findings provide further support for the notion that the implicit processing of the stimulus layout in natural scenes can affect action selection in complex environments, such as soccer penalty shooting. PMID:22973246

  14. Early top-down control of visual processing predicts working memory performance

    PubMed Central

    Rutman, Aaron M.; Clapp, Wesley C.; Chadick, James Z.; Gazzaley, Adam

    2009-01-01

    Selective attention confers a behavioral benefit for both perceptual and working memory (WM) performance, often attributed to top-down modulation of sensory neural processing. However, the direct relationship between early activity modulation in sensory cortices during selective encoding and subsequent WM performance has not been established. To explore the influence of selective attention on WM recognition, we used electroencephalography (EEG) to study the temporal dynamics of top-down modulation in a selective, delayed-recognition paradigm. Participants were presented with overlapped, “double-exposed” images of faces and natural scenes, and were instructed to either remember the face or the scene while simultaneously ignoring the other stimulus. Here, we present evidence that the degree to which participants modulate the early P100 (97–129 ms) event-related potential (ERP) during selective stimulus encoding significantly correlates with their subsequent WM recognition. These results contribute to our evolving understanding of the mechanistic overlap between attention and memory. PMID:19413473

  15. A Multi-Wavelength Thermal Infrared and Reflectance Scene Simulation Model

    NASA Technical Reports Server (NTRS)

    Ballard, J. R., Jr.; Smith, J. A.; Smith, David E. (Technical Monitor)

    2002-01-01

    Several theoretical calculations are presented, and our approach is discussed, for simulating the overall composite-scene thermal infrared exitance and canopy bidirectional reflectance of a forest canopy. Calculations are performed for selected wavelength bands of the DOE Multispectral Thermal Imager (MTI), and comparisons with atmospherically corrected MTI imagery are underway. NASA EO-1 Hyperion observations are also available, and the favorable comparison of our reflective model results with these data is reported elsewhere.
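
    A minimal sketch of one building block of such thermal-infrared scene simulation: band-integrated blackbody exitance from the Planck function for a canopy element at a given temperature. The band limits, temperature and emissivity below are placeholders, not the MTI band definitions used in the study.

      import numpy as np

      H = 6.62607015e-34    # Planck constant [J s]
      C = 2.99792458e8      # speed of light [m/s]
      KB = 1.380649e-23     # Boltzmann constant [J/K]

      def planck_exitance(wavelength_m, temp_k):
          """Spectral exitance of a blackbody [W m^-2 per metre of wavelength]."""
          return (2 * np.pi * H * C ** 2 / wavelength_m ** 5 /
                  (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0))

      def band_exitance(lambda1_um, lambda2_um, temp_k, emissivity=0.98, n=2000):
          """Emissivity-scaled exitance integrated over a wavelength band [W m^-2]."""
          lam = np.linspace(lambda1_um, lambda2_um, n) * 1e-6
          return emissivity * np.sum(planck_exitance(lam, temp_k)) * (lam[1] - lam[0])

      # Hypothetical 10.2-10.7 micron thermal band, canopy element at 295 K.
      print("band exitance: %.2f W m^-2" % band_exitance(10.2, 10.7, 295.0))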

  16. Location-specific effects of attention during visual short-term memory maintenance.

    PubMed

    Matsukura, Michi; Cosman, Joshua D; Roper, Zachary J J; Vatterott, Daniel B; Vecera, Shaun P

    2014-06-01

    Recent neuroimaging studies suggest that early sensory areas such as area V1 are recruited to actively maintain a selected feature of an item held in visual short-term memory (VSTM). These findings raise the possibility that visual attention operates in a similar manner across perceptual and memory representations, at least to a certain extent, even though memory-level and perception-level selection are functionally dissociable. If VSTM operates by retaining "reasonable copies" of scenes constructed during sensory processing (Serences et al., 2009, p. 207, the sensory recruitment hypothesis), then it is possible that selective attention can be guided by both exogenous (peripheral) and endogenous (central) cues during VSTM maintenance. Yet the results from previous studies that examined this issue are inconsistent. In the present study, we investigated whether attention can be directed to a specific item's location represented in VSTM with an exogenous cue in a well-controlled setting. The results from four experiments suggest that, as observed with endogenous cues, exogenous cues can efficiently guide selective attention during VSTM maintenance. This finding is not only consistent with the sensory recruitment hypothesis but also validates the use of exogenous cues in past and future studies. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  17. Spectral unmixing of urban land cover using a generic library approach

    NASA Astrophysics Data System (ADS)

    Degerickx, Jeroen; Lordache, Marian-Daniel; Okujeni, Akpona; Hermy, Martin; van der Linden, Sebastian; Somers, Ben

    2016-10-01

    Remote sensing based land cover classification in urban areas generally requires the use of subpixel classification algorithms to account for the high spatial heterogeneity. These spectral unmixing techniques often rely on spectral libraries, i.e. collections of pure material spectra (endmembers, EM), which ideally cover the large EM variability typically present in urban scenes. Despite the advent of several (semi-)automated EM detection algorithms, the collection of such image-specific libraries remains a tedious and time-consuming task. As an alternative, we suggest the use of a generic urban EM library containing material spectra acquired under varying conditions, from different locations and sensors. This approach requires an efficient EM selection technique capable of selecting only those spectra relevant for a specific image. In this paper, we evaluate and compare the potential of different existing library pruning algorithms (Iterative Endmember Selection and MUSIC) using simulated hyperspectral (APEX) data of the Brussels metropolitan area. In addition, we develop a new hybrid EM selection method which is shown to be highly efficient in dealing with both image-specific and generic libraries, subsequently yielding more robust land cover classification results compared to existing methods. Future research will include further optimization of the proposed algorithm and additional tests on both simulated and real hyperspectral data.
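
    A minimal sketch, in the spirit of iterative endmember selection, of pruning a generic library: spectra are added greedily when they most reduce the least-squares reconstruction error of the image pixels. This simplification is not the IES, MUSIC or hybrid algorithms evaluated in the paper, and the library and image below are synthetic.

      import numpy as np

      rng = np.random.default_rng(2)
      n_bands, n_lib, n_pix = 100, 60, 500
      library = rng.random((n_bands, n_lib))          # generic EM library (one spectrum per column)
      scene_ems = library[:, [3, 17, 42]]             # EMs actually present in the toy scene
      abundances = rng.dirichlet(np.ones(3), size=n_pix).T
      image = scene_ems @ abundances + rng.normal(0, 0.005, (n_bands, n_pix))

      def reconstruction_error(E, X):
          """Mean squared error of unconstrained least-squares unmixing of pixels X with EMs E."""
          coeffs, *_ = np.linalg.lstsq(E, X, rcond=None)
          return np.mean((E @ coeffs - X) ** 2)

      selected = []
      for _ in range(3):                               # greedily pick three library spectra
          best = min((j for j in range(n_lib) if j not in selected),
                     key=lambda j: reconstruction_error(library[:, selected + [j]], image))
          selected.append(best)
      print("selected library spectra:", sorted(selected))   # ideally [3, 17, 42]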

  18. Spatial and temporal aspects of chromatic adaptation and their functional significance for colour constancy.

    PubMed

    Werner, Annette

    2014-11-01

    Illumination in natural scenes changes at multiple temporal and spatial scales: slow changes in global illumination occur in the course of a day, and we encounter fast and localised illumination changes when visually exploring the non-uniform light field of three-dimensional scenes; in addition, very long-term chromatic variations may come from the environment, such as seasonal changes. In this context, I consider the temporal and spatial properties of chromatic adaptation and discuss their functional significance for colour constancy in three-dimensional scenes. A process of fast spatial tuning in chromatic adaptation is proposed as a possible sensory mechanism for linking colour constancy to the spatial structure of a scene. The observed middle-wavelength selectivity of this process is particularly suitable for adaptation to the mean chromaticity and the compensation of interreflections in natural scenes. Two types of sensory colour constancy are distinguished, based on the functional differences of their temporal and spatial scales: a slow type, operating at a global scale for the compensation of the ambient illumination; and a fast colour constancy, which is locally restricted and well suited to compensating region-specific variations in the light field of three-dimensional scenes. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Add a picture for suspense: neural correlates of the interaction between language and visual information in the perception of fear

    PubMed Central

    Clevis, Krien; Hagoort, Peter

    2011-01-01

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which holds that the presentation of a visual scene that is neutral in itself intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants’ brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function for emotional information across domains such as visual and linguistic information. PMID:20530540

  20. Functional size of human visual area V1: a neural correlate of top-down attention.

    PubMed

    Verghese, Ashika; Kolbe, Scott C; Anderson, Andrew J; Egan, Gary F; Vidyasagar, Trichur R

    2014-06-01

    Heavy demands are placed on the brain's attentional capacity when selecting a target item in a cluttered visual scene, or when reading. It is widely accepted that such attentional selection is mediated by top-down signals from higher cortical areas to early visual areas such as the primary visual cortex (V1). Further, it has been reported that there is considerable variation in the surface area of V1. This variation may impact either the number or the specificity of attentional feedback signals and, thereby, the efficiency of attentional mechanisms. In this study, we investigated whether individual differences between humans performing attention-demanding tasks can be related to the functional area of V1. We found that those with a larger representation in V1 of the central 12° of the visual field, as measured using BOLD signals from fMRI, were able to perform a serial search task at a faster rate. In line with recent suggestions of the vital role of visuo-spatial attention in reading, the speed of reading showed a strong positive correlation with the speed of visual search, although it showed little correlation with the size of V1. The results support the idea that the functional size of the primary visual cortex is an important determinant of the efficiency of selective spatial attention for simple tasks, and that the attentional processing required for complex tasks like reading is to a large extent determined by other brain areas and inter-areal connections. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Peatland classification of West Siberia based on Landsat imagery

    NASA Astrophysics Data System (ADS)

    Terentieva, I.; Glagolev, M.; Lapshina, E.; Maksyutov, S. S.

    2014-12-01

    Increasing interest in peatlands for the prediction of environmental change requires an understanding of their geographical distribution. The West Siberian Plain is the biggest peatland area in Eurasia and lies in the high latitudes, which are experiencing an enhanced rate of climate change. West Siberian taiga mires are important globally, accounting for about 12.5% of the global wetland area. A number of peatland maps of West Siberia were developed in the 1970s, but their accuracy is limited. Here we report an effort to map West Siberian peatlands using 30 m resolution Landsat imagery. As a first step, a peatland classification scheme oriented toward environmental parameter upscaling was developed. The overall workflow involves data pre-processing, training data collection, image classification on a scene-by-scene basis, regrouping of the derived classes into final peatland types, and accuracy assessment. To avoid misclassification, peatlands were distinguished from other landscapes using a threshold method: for each scene, the Green-Red Vegetation Index was used for peatland masking and the 5th channel was used for masking water bodies. Peatland image masks were made in Quantum GIS, filtered in MATLAB and then classified in MultiSpec (Purdue Research Foundation) using the maximum likelihood supervised classification algorithm. Training sample selection was based mostly on spectral signatures because ancillary and high-resolution image data were limited. As an additional source of information, we applied field knowledge from more than 10 years of fieldwork in West Siberia, summarized in an extensive dataset of botanical relevés, field photos, and pH and electrical conductivity data from 40 test sites. After the classification procedure, the discriminated spectral classes were generalized into 12 peatland types. Accuracy assessment based on 439 randomly assigned test sites showed a final map accuracy of 80%. Total peatland area was estimated at 73.0 Mha. Various ridge-hollow and ridge-hollow-pool bog complexes prevail, occupying 34.5 Mha. They are followed by lakes (11.1 Mha), fens (10.7 Mha), pine-dwarf-shrub sphagnum bogs (9.3 Mha) and palsa complexes (7.4 Mha).
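
    A minimal sketch of the masking step described above: a Green-Red Vegetation Index (GRVI) threshold to isolate peatland candidates and a band-5 threshold to mask water, after which a maximum likelihood classifier would be applied. The threshold values and toy band arrays are placeholders, not those used for the West Siberia map.

      import numpy as np

      rng = np.random.default_rng(3)
      green, red, swir = (rng.random((100, 100)) for _ in range(3))   # toy reflectance bands

      grvi = (green - red) / (green + red + 1e-12)    # Green-Red Vegetation Index

      GRVI_MIN, GRVI_MAX = 0.02, 0.25   # hypothetical GRVI range for peatland candidates
      SWIR_WATER = 0.05                 # hypothetical band-5 (SWIR) cut-off for water

      water_mask = swir < SWIR_WATER
      peatland_mask = (grvi > GRVI_MIN) & (grvi < GRVI_MAX) & ~water_mask
      print("peatland candidate pixels:", int(peatland_mask.sum()))

      # A maximum likelihood (Gaussian) supervised classifier would then be trained on
      # field-based training samples and applied only to the pixels inside `peatland_mask`.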

  2. Extracting scene feature vectors through modeling, volume 3

    NASA Technical Reports Server (NTRS)

    Berry, J. K.; Smith, J. A.

    1976-01-01

    The remote estimation of the leaf area index of winter wheat at Finney County, Kansas was studied. The procedure developed consists of three activities: (1) field measurements; (2) model simulations; and (3) response classifications. The first activity is designed to identify model input parameters and develop a model evaluation data set. A stochastic plant canopy reflectance model is employed to simulate reflectance in the LANDSAT bands as a function of leaf area index for two phenological stages. An atmospheric model is used to translate these surface reflectances into simulated satellite radiance. A divergence classifier determines the relative similarity between model derived spectral responses and those of areas with unknown leaf area index. The unknown areas are assigned the index associated with the closest model response. This research demonstrated that the SRVC canopy reflectance model is appropriate for wheat scenes and that broad categories of leaf area index can be inferred from the procedure developed.
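
    A minimal sketch of the final classification step: each unknown area is assigned the leaf-area-index category whose model-simulated spectral response is closest. A Euclidean distance stands in for the divergence measure used in the study, and all radiance values are synthetic placeholders.

      import numpy as np

      # Model-derived mean radiances in four LANDSAT bands for three broad LAI categories.
      model_responses = {
          "LAI 0-1": np.array([28.0, 24.0, 55.0, 30.0]),
          "LAI 1-3": np.array([22.0, 18.0, 70.0, 35.0]),
          "LAI 3+":  np.array([18.0, 14.0, 85.0, 40.0]),
      }

      def classify(observed):
          """Assign the LAI category whose simulated response is closest to the observation."""
          return min(model_responses,
                     key=lambda name: np.linalg.norm(observed - model_responses[name]))

      unknown_area = np.array([21.0, 17.5, 72.0, 34.0])     # observed mean radiance
      print("assigned category:", classify(unknown_area))   # -> LAI 1-3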

  3. Improved disparity map analysis through the fusion of monocular image segmentations

    NASA Technical Reports Server (NTRS)

    Perlant, Frederic P.; Mckeown, David M.

    1991-01-01

    The focus is on examining how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by analysis of the original monocular imagery. Surface illumination information is exploited by segmenting the monocular image into fine surface patches of nearly homogeneous intensity, which are used to remove mismatches generated during stereo matching. These patches guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely to physical surfaces in the scene. The technique is largely independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented for a complex urban scene containing various man-made and natural features. This scene poses a variety of problems, including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. Improvements due to monocular fusion with a set of different region-based image segmentations are demonstrated. The generality of this approach to stereo analysis and its utility in the development of general three-dimensional scene interpretation systems are also discussed.

  4. Experiments in MPEG-4 content authoring, browsing, and streaming

    NASA Astrophysics Data System (ADS)

    Puri, Atul; Schmidt, Robert L.; Basso, Andrea; Civanlar, Mehmet R.

    2000-12-01

    In this paper, within the context of the MPEG-4 standard, we report on preliminary experiments in three areas -- authoring of MPEG-4 content, a player/browser for MPEG-4 content, and streaming of MPEG-4 content. MPEG-4 is a new standard for the coding of audiovisual objects; the core of the MPEG-4 standard is complete, while amendments are in various stages of completion. MPEG-4 addresses compression of audio and visual objects, their integration through scene description, and user interactivity with such objects. The MPEG-4 scene description is based on a VRML-like language for 3D scenes, extended to 2D scenes, and supports the integration of 2D and 3D scenes. This scene description language is called BIFS. First, we introduce the basic concepts behind BIFS and then show, with an example, the textual authoring of the different components needed to describe an audiovisual scene in BIFS; the textual BIFS is then saved as one or more compressed binary files for storage or transmission. Then, we discuss a high-level design of an MPEG-4 player/browser that uses the main components produced by authoring, such as the encoded BIFS stream, the media files it refers to, and the multiplexed object descriptor stream, to play an MPEG-4 scene. We also discuss our extensions to such a player/browser. Finally, we present our work on streaming of MPEG-4 -- the payload format, modifications to the client MPEG-4 player/browser, the server-side infrastructure, and the example content used in our MPEG-4 streaming experiments.

  5. Fundamental remote sensing science research program: The Scene Radiation and Atmospheric Effects Characterization Project

    NASA Technical Reports Server (NTRS)

    Deering, D. W.

    1985-01-01

    The Scene Radiation and Atmospheric Effects Characterization (SRAEC) Project was established within the NASA Fundamental Remote Sensing Science Research Program to improve our understanding of the fundamental relationships of energy interactions between the sensor and the surface target, including the effect of the atmosphere. The current studies are generalized into the following five subject areas: optical scene modeling, Earth-space radiative transfer, electromagnetic properties of surface materials, microwave scene modeling, and scatterometry studies. This report has been prepared to provide a brief overview of the SRAEC Project history and objectives and to report on the scientific findings and project accomplishments made by the nineteen principal investigators since the project's initiation just over three years ago. This annual summary report derives from the most recent annual principal investigators meeting held January 29 to 31, 1985.

  6. Earth Observation

    NASA Image and Video Library

    2014-06-24

    ISS040-E-018729 (24 June 2014) --- One of the Expedition 40 crew members aboard the Earth-orbiting International Space Station photographed this image featuring the peninsular portion of the state of Florida. Lake Okeechobee stands out in the south central part of the state. The heavily-populated area of Miami can be traced along the Atlantic Coast near the bottom of the scene. Cape Canaveral and the Kennedy Space Center are just below center frame on the Atlantic Coast. The Florida Keys are at the south (left) portion of the scene and the Gulf Coast, including the Tampa-St. Petersburg area, is near frame center.

  7. Application of multi-resolution 3D techniques in crime scene documentation with bloodstain pattern analysis.

    PubMed

    Hołowko, Elwira; Januszkiewicz, Kamil; Bolewicki, Paweł; Sitnik, Robert; Michoński, Jakub

    2016-10-01

    In forensic documentation with bloodstain pattern analysis (BPA), it is highly desirable to obtain overall documentation of a crime scene non-invasively, but also to register single evidence objects, such as bloodstains, in high resolution. In this study, we propose a hierarchical 3D scanning platform designed according to the top-down approach known from traditional forensic photography. The overall 3D model of a scene is obtained via integration of laser scans registered from different positions. Parts of a scene that are of particular interest are documented using a mid-range scanner, and the smallest details are added in the highest resolution as close-up scans. The scanning devices are controlled using custom-developed software equipped with advanced algorithms for point cloud processing. To verify the feasibility and effectiveness of multi-resolution 3D scanning in crime scene documentation, our platform was applied to document a murder scene simulated by BPA experts from the Central Forensic Laboratory of the Police R&D, Warsaw, Poland. Applying the 3D scanning platform proved beneficial in the documentation of a crime scene combined with BPA. The multi-resolution 3D model enables virtual exploration of a scene in a three-dimensional environment and distance measurement, and gives a more realistic preservation of the evidence together with its surroundings. Moreover, high-resolution close-up scans aligned in a 3D model can be used to analyze bloodstains revealed at the crime scene. The results of BPA, such as trajectories and the area of origin, are visualized and analyzed in an accurate model of the scene. At this stage, a simplified approach treating the trajectory of a blood drop as a straight line is applied. Although the 3D scanning platform offers a new quality of crime scene documentation with BPA, some limitations of the technique are also mentioned. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
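
    A minimal sketch of the simplified straight-line BPA computation mentioned above: the impact angle of an elliptical stain follows from its width-to-length ratio, and the height of the area of origin is estimated from the distance between each stain and the 2D point of convergence. All measurements are hypothetical.

      import math

      # (stain width [mm], stain length [mm], distance to 2D convergence point [cm])
      stains = [(4.1, 8.3, 62.0),
                (3.6, 9.0, 75.0),
                (5.2, 7.9, 48.0)]

      heights = []
      for width, length, dist in stains:
          alpha = math.asin(width / length)          # impact angle (straight-line model)
          heights.append(dist * math.tan(alpha))     # implied height above convergence point [cm]
          print("impact angle %.1f deg -> height %.1f cm"
                % (math.degrees(alpha), heights[-1]))

      print("estimated area-of-origin height: %.1f cm (mean)" % (sum(heights) / len(heights)))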

  8. The Low Backscattering Objects Classification in Polsar Image Based on Bag of Words Model Using Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Yang, L.; Shi, L.; Li, P.; Yang, J.; Zhao, L.; Zhao, B.

    2018-04-01

    Due to forward scattering and blocking of the radar signal, water, bare soil and shadow, collectively termed low backscattering objects (LBOs), often present low backscattering intensity in polarimetric synthetic aperture radar (PolSAR) images. Because LBOs give rise to similar backscattering intensities and polarimetric responses, classifiers based on these responses, such as the Wishart method, are inefficient for LBO classification. Although some polarimetric features have been exploited to relieve this confusion, such backscattering features remain unstable when the system noise floor varies in the range direction. This paper introduces a simple but effective scene classification method based on the Bag of Words (BoW) model using a Support Vector Machine (SVM) to discriminate LBOs, without relying on any polarimetric features. In the proposed approach, square windows are first opened adaptively around the LBOs to define scene images, and Scale-Invariant Feature Transform (SIFT) points are then detected in the training and test scenes. The detected SIFT features are clustered using K-means to obtain cluster centers that serve as the visual word list, and scene images are represented by word frequencies. Finally, an SVM is trained and used to predict the LBO class of new scenes. The proposed method is evaluated on two AIRSAR data sets at C band and L band, including water, bare soil and shadow scenes. The experimental results illustrate the effectiveness of the scene-based method in distinguishing LBOs.
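
    A minimal sketch of the bag-of-words pipeline described above, using OpenCV SIFT, K-means and an SVM. The random 8-bit chips stand in for scene windows opened around candidate LBOs, and the vocabulary size and SVM settings are assumptions rather than the paper's values.

      import cv2
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.svm import SVC

      def sift_descriptors(img):
          """SIFT descriptors of one scene chip (empty array if none are found)."""
          _, desc = cv2.SIFT_create().detectAndCompute(img, None)
          return desc if desc is not None else np.empty((0, 128), np.float32)

      def bow_histogram(desc, vocab):
          """Normalized visual-word frequency histogram for one scene."""
          hist = np.zeros(vocab.n_clusters)
          if len(desc):
              words, counts = np.unique(vocab.predict(desc), return_counts=True)
              hist[words] = counts / counts.sum()
          return hist

      rng = np.random.default_rng(4)
      scenes = [rng.integers(0, 255, (128, 128), dtype=np.uint8) for _ in range(40)]
      labels = [i % 3 for i in range(40)]              # 0 = water, 1 = bare soil, 2 = shadow

      descs = [sift_descriptors(s) for s in scenes]
      vocab = KMeans(n_clusters=50, n_init=10, random_state=0).fit(
          np.vstack([d for d in descs if len(d)]))
      X = np.array([bow_histogram(d, vocab) for d in descs])

      clf = SVC(kernel="rbf").fit(X, labels)
      print("predicted class of first scene:", clf.predict(X[:1])[0])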

  9. Region-to-area screening methodology for the Crystalline Repository Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1985-04-01

    The purpose of this document is to describe the Crystalline Repository Project's (CRP) process for region-to-area screening of exposed and near-surface crystalline rock bodies in the three regions of the conterminous United States where crystalline rock is being evaluated as a potential host for the second nuclear waste repository (i.e., in the North Central, Northeastern, and Southeastern Regions). This document indicates how the US Department of Energy's (DOE) General Guidelines for the Recommendation of Sites for Nuclear Waste Repositories (10 CFR 960) were used to select and apply factors and variables for the region-to-area screening, explains how these factors and variables are to be applied in the region-to-area screening, and indicates how this methodology relates to the decision process leading to the selection of candidate areas. A brief general discussion of the screening process from the national survey through area screening and site recommendation is presented. This discussion sets the scene for the detailed discussions which follow concerning the region-to-area screening process, the guidance provided by the DOE Siting Guidelines for establishing disqualifying factors and variables for screening, and the application of the disqualifying factors and variables in the screening process. This document is complementary to the regional geologic and environmental characterization reports to be issued in the summer of 1985 as final documents. These reports will contain the geologic and environmental data base that will be used in conjunction with the methodology to conduct region-to-area screening.

  10. A detail-preserved and luminance-consistent multi-exposure image fusion algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Guanquan; Zhou, Yue

    2018-04-01

    When irradiance across a scene varies greatly, it is difficult to capture an image of the scene without over- or underexposed areas because of the limited dynamic range of cameras. Multi-exposure image fusion (MEF) is an effective way to deal with this problem by fusing multiple exposures of a static scene. A novel MEF method is described in this paper. In the proposed algorithm, coarse-scale luminance consistency is preserved by adjusting block contributions using inter-block luminance information, and a detail-preserving smoothing filter stitches blocks together smoothly without losing detail. Experimental results show that the proposed method performs well in preserving both luminance consistency and detail.
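
    For orientation, the sketch below runs a standard exposure-fusion baseline (OpenCV's Mertens fusion) on a bracketed image set; it shows the input/output shape of the MEF task but is not the block-based, detail-preserving algorithm proposed in the paper. File names are placeholders.

      import cv2
      import numpy as np

      # Bracketed exposures of a static scene (placeholder file names).
      paths = ["scene_under.jpg", "scene_mid.jpg", "scene_over.jpg"]
      exposures = [cv2.imread(p) for p in paths]
      assert all(img is not None for img in exposures), "missing input image(s)"

      fused = cv2.createMergeMertens().process(exposures)   # float32, roughly in [0, 1]
      cv2.imwrite("scene_fused.png", np.clip(fused * 255, 0, 255).astype(np.uint8))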

  11. Time Series Analysis of Vegetation Change using Hyperspectral and Multispectral Data

    DTIC Science & Technology

    2012-09-01

    rivers clogged with sediment” (Hartman, 2008). In addition, backpackers, campers, and skiers are in danger of being hit by falling trees. Mountain...information from hyperspectral data without a priori knowledge or requiring ground observations” (Kruse & Perry, 2009). Figure 16. Spectral...known endmembers and the scene spectra (Boardman & Kruse, 2011). Known endmembers come from analysts’ knowledge of an area in a scene, or from

  12. Design of a multispectral, wedge filter, remote-sensing instrument incorporating a multiport, thinned, CCD area array

    NASA Astrophysics Data System (ADS)

    Demro, James C.; Hartshorne, Richard; Woody, Loren M.; Levine, Peter A.; Tower, John R.

    1995-06-01

    The next generation Wedge Imaging Spectrometer (WIS) instruments currently in integration at Hughes SBRD incorporate advanced features to increase operation flexibility for remotely sensed hyperspectral imagery collection and use. These features include: a) multiple linear wedge filters to tailor the spectral bands to the scene phenomenology; b) simple, replaceable fore-optics to allow different spatial resolutions and coverages; c) data acquisition system (DAS) that collects the full data stream simultaneously from both WIS instruments (VNIR and SWIR/MWIR), stores the data in a RAID storage, and provides for down-loading of the data to MO disks; the WIS DAS also allows selection of the spectral band sets to be stored; d) high-performance VNIR camera subsystem based upon a 512 X 512 CCD area array and associated electronics.

  13. Temporal dynamics of the knowledge-mediated visual disambiguation process in humans: a magnetoencephalography study.

    PubMed

    Urakawa, Tomokazu; Ogata, Katsuya; Kimura, Takahiro; Kume, Yuko; Tobimatsu, Shozo

    2015-01-01

    Disambiguation of a noisy visual scene with prior knowledge is an indispensable task of the visual system. To adequately adapt to a dynamically changing visual environment full of noisy visual scenes, the implementation of knowledge-mediated disambiguation in the brain is imperative and essential for proceeding as fast as possible under the limited capacity of visual image processing. However, the temporal profile of the disambiguation process has not yet been fully elucidated in the brain. The present study attempted to determine how quickly knowledge-mediated disambiguation began to proceed along visual areas after the onset of a two-tone ambiguous image using magnetoencephalography with high temporal resolution. Using the predictive coding framework, we focused on activity reduction for the two-tone ambiguous image as an index of the implementation of disambiguation. Source analysis revealed that a significant activity reduction was observed in the lateral occipital area at approximately 120 ms after the onset of the ambiguous image, but not in preceding activity (about 115 ms) in the cuneus when participants perceptually disambiguated the ambiguous image with prior knowledge. These results suggested that knowledge-mediated disambiguation may be implemented as early as approximately 120 ms following an ambiguous visual scene, at least in the lateral occipital area, and provided an insight into the temporal profile of the disambiguation process of a noisy visual scene with prior knowledge. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  14. IKONOS geometric characterization

    USGS Publications Warehouse

    Helder, Dennis; Coan, Michael; Patrick, Kevin; Gaska, Peter

    2003-01-01

    The IKONOS spacecraft acquired images on July 3, 17, and 25, and August 13, 2001 of Brookings, SD, a small city in east central South Dakota, and on May 22, June 30, and July 30, 2000, of the rural area around the EROS Data Center. South Dakota State University (SDSU) evaluated the Brookings scenes and the USGS EROS Data Center (EDC) evaluated the other scenes. The images evaluated by SDSU utilized various natural objects and man-made features as identifiable targets randomly distributed throughout the scenes, while the images evaluated by EDC utilized pre-marked artificial points (panel points) to provide the best possible targets distributed in a grid pattern. Space Imaging provided products at different processing levels to each institution. For each scene, the pixel (line, sample) locations of the various targets were compared to field-observed, survey-grade Global Positioning System locations. Patterns of error distribution for each product were plotted, and a variety of statistical statements of accuracy were made. The IKONOS sensor also acquired 12 pairs of stereo images of globally distributed scenes between April 2000 and April 2001. For each scene, analysts at the National Imagery and Mapping Agency (NIMA) compared derived photogrammetric coordinates to the corresponding NIMA field-surveyed ground control points (GCPs). NIMA analysts determined horizontal and vertical accuracies by averaging the differences between the derived photogrammetric points and the field-surveyed GCPs for all 12 stereo pairs. Patterns of error distribution for each scene are presented.
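
    A minimal sketch of the horizontal accuracy statistics typically reported in such geometric characterizations: per-point offsets between image-derived and surveyed coordinates, summarized as mean offset, RMSE and 90th-percentile circular error (CE90). The coordinates are hypothetical.

      import numpy as np

      # Image-derived vs. field-surveyed control-point coordinates (easting, northing) [m].
      derived = np.array([[681204.3, 4904310.1],
                          [681530.9, 4904122.7],
                          [682011.4, 4904688.2],
                          [681845.0, 4904415.6]])
      surveyed = np.array([[681202.8, 4904311.0],
                           [681532.1, 4904121.5],
                           [682009.9, 4904689.8],
                           [681846.2, 4904414.1]])

      diffs = derived - surveyed
      radial = np.linalg.norm(diffs, axis=1)
      print("mean offset (E, N) [m]:", np.round(diffs.mean(axis=0), 2))
      print("horizontal RMSE: %.2f m" % np.sqrt((radial ** 2).mean()))
      print("CE90: %.2f m" % np.percentile(radial, 90))   # 90th-percentile radial error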

  15. The effect of non-visual working memory load on top-down modulation of visual processing

    PubMed Central

    Rissman, Jesse; Gazzaley, Adam; D'Esposito, Mark

    2009-01-01

    While a core function of the working memory (WM) system is the active maintenance of behaviorally relevant sensory representations, it is also critical that distracting stimuli are appropriately ignored. We used functional magnetic resonance imaging to examine the role of domain-general WM resources in the top-down attentional modulation of task-relevant and irrelevant visual representations. In our dual-task paradigm, each trial began with the auditory presentation of six random (high load) or sequentially-ordered (low load) digits. Next, two relevant visual stimuli (e.g., faces), presented amongst two temporally interspersed visual distractors (e.g., scenes), were to be encoded and maintained across a 7-sec delay interval, after which memory for the relevant images and digits was probed. When taxed by high load digit maintenance, participants exhibited impaired performance on the visual WM task and a selective failure to attenuate the neural processing of task-irrelevant scene stimuli. The over-processing of distractor scenes under high load was indexed by elevated encoding activity in a scene-selective region-of-interest relative to low load and passive viewing control conditions, as well as by improved long-term recognition memory for these items. In contrast, the load manipulation did not affect participants' ability to upregulate activity in this region when scenes were task-relevant. These results highlight the critical role of domain-general WM resources in the goal-directed regulation of distractor processing. Moreover, the consequences of increased WM load in young adults closely resemble the effects of cognitive aging on distractor filtering [Gazzaley et al., (2005) Nature Neuroscience 8, 1298-1300], suggesting the possibility of a common underlying mechanism. PMID:19397858

  16. Condom use and high-risk sexual acts in adult films: a comparison of heterosexual and homosexual films.

    PubMed

    Grudzen, Corita R; Elliott, Marc N; Kerndt, Peter R; Schuster, Mark A; Brook, Robert H; Gelberg, Lillian

    2009-04-01

    We compared the prevalence of condom use during a variety of sexual acts portrayed in adult films produced for heterosexual and homosexual audiences to assess compliance with state Occupational Health and Safety Administration regulations. We analyzed 50 heterosexual and 50 male homosexual films released between August 1, 2005, and July 31, 2006, randomly selected from the distributor of 85% of the heterosexual adult films released each year in the United States. Penile-vaginal intercourse was protected with condoms in 3% of heterosexual scenes. Penile-anal intercourse, common in both heterosexual (42%) and homosexual (80%) scenes, was much less likely to be protected with condoms in heterosexual than in homosexual scenes (10% vs 78%; P < .001). No penile-oral acts were protected with condoms in any of the selected films. Heterosexual films were much less likely than were homosexual films to portray condom use, raising concerns about transmission of HIV and other sexually transmitted diseases, especially among performers in heterosexual adult films. In addition, the adult film industry, especially the heterosexual industry, is not adhering to state occupational safety regulations.

  17. Effects of emotion regulation strategy on brain responses to the valence and social content of visual scenes.

    PubMed

    Vrtička, Pascal; Sander, David; Vuilleumier, Patrik

    2011-04-01

    Emotion Regulation (ER) includes different mechanisms aiming at volitionally modulating emotional responses, including cognitive re-evaluation (re-appraisal; REAP) or inhibition of emotion expression and behavior (expressive suppression; ESUP). However, despite the importance of these ER strategies, previous functional magnetic resonance imaging (fMRI) studies have not sufficiently disentangled the specific neural impact of REAP versus ESUP on brain responses to different kinds of emotion-eliciting events. Moreover, although different effects have been reported for stimulus valence (positive vs. negative), no study has systematically investigated how ER may change emotional processing as a function of particular stimulus content variables (i.e., social vs. nonsocial). Our fMRI study directly compared brain activation to visual scenes during the use of different ER strategies, relative to a "natural" viewing condition, but also examined the effects of ER as a function of the social versus nonsocial content of scenes, in addition to their negative versus positive valence (by manipulating these factors orthogonally in a 2×2 factorial design). Our data revealed that several prefrontal cortical areas were differentially recruited during either REAP or ESUP, independent of the valence and content of images. In addition, selective modulations by either REAP or ESUP were found depending on the negative valence of scenes (medial fusiform gyrus, anterior insula, dmPFC), and on their nonsocial (middle insula) or social (bilateral amygdala, mPFC, posterior cingulate) significance. Furthermore, we observed a significant lateralization in the amygdala for the effect of the two different ER strategies, with a predominant modulation by REAP on the left side but by ESUP on the right side. Taken together, these results do not only highlight the distributed nature of neural changes induced by ER, but also reveal the specific impact of different strategies (REAP or ESUP), and the specific sites implicated by different dimensions of emotional information (social or negative). Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. STS-35 Earth observation of the Persian Gulf area

    NASA Technical Reports Server (NTRS)

    1990-01-01

    STS-35 Earth observation taken aboard Columbia, Orbiter Vehicle (OV) 102, is of the Persian Gulf area. Major cities and oilfields of the countries of Saudi Arabia (foreground), Iraq (top left), Iran (top center and top right), Kuwait, Bahrain, Qatar, and a portion of the United Arab Emirates are visible in this scene. The cities are the large whitish areas of city lights. Flares characteristic of the Mid-East oil field practices are visible both onshore and offshore throughout the scene. Major cities identifiable are in Iraq - Baghdad, Basra, and Faw; in Qatar - Ab Dawhah; in Kuwait - Kuwait City; in Saudi Arabia - Riyadh, Al Jubayl, Dharan, Al Huf, Ad Dilam and Al Hariq; and Bahrain and its associated causeway to the mainland.

  19. STS-35 Earth observation of the Persian Gulf area

    NASA Image and Video Library

    1990-12-10

    STS-35 Earth observation taken aboard Columbia, Orbiter Vehicle (OV) 102, is of the Persian Gulf area. Major cities and oilfields of the countries of Saudi Arabia (foreground), Iraq (top left), Iran (top center and top right), Kuwait, Bahrain, Qatar, and a portion of the United Arab Emirates are visible in this scene. The cities are the large whitish areas of city lights. Flares characteristic of the Mid-East oil field practices are visible both onshore and offshore throughout the scene. Major cities identifiable are in Iraq - Baghdad, Basra, and Faw; in Qatar - Ab Dawhah; in Kuwait - Kuwait City; in Saudi Arabia - Riyadh, Al Jubayl, Dharan, Al Huf, Ad Dilam and Al Hariq; and Bahrain and its associated causeway to the mainland.

  20. Spectral variability among rocks in visible and near-infrared mustispectral Pancam data collected at Gusev crater: Examinations using spectral mixture analysis and related techniques

    USGS Publications Warehouse

    Farrand, W. H.; Bell, J.F.; Johnson, J. R.; Squyres, S. W.; Soderblom, J.; Ming, D. W.

    2006-01-01

    Visible and near-infrared (VNIR) multispectral observations of rocks made by the Mars Exploration Rover Spirit's Panoramic camera (Pancam) have been analyzed using a spectral mixture analysis (SMA) methodology. Scenes have been examined from the Gusev crater plains into the Columbia Hills. Most scenes on the plains and in the Columbia Hills could be modeled as three end-member mixtures of a bright material, rock, and shade. Scenes of rocks disturbed by the rover's Rock Abrasion Tool (RAT) required additional end-members. In the Columbia Hills, there were a number of scenes in which additional rock end-members were required. The SMA methodology identified relatively dust-free areas on undisturbed rock surfaces as well as spectrally unique areas on RAT abraded rocks. Spectral parameters from these areas were examined, and six spectral classes were identified. These classes are named after a type rock or area and are Adirondack, Lower West Spur, Clovis, Wishstone, Peace, and Watchtower. These classes are discriminable based, primarily, on near-infrared (NIR) spectral parameters. Clovis and Watchtower class rocks appear more oxidized than Wishstone class rocks and Adirondack basalts based on their having higher 535 nm band depths. Comparison of the spectral parameters of these Gusev crater rocks to parameters of glass-dominated basaltic tuffs indicates correspondence between measurements of Clovis and Watchtower classes but divergence for the Wishstone class rocks, which appear to have a higher fraction of crystalline ferrous iron-bearing phases. Despite a high sulfur content, the rock Peace has NIR properties resembling plains basalts. Copyright 2006 by the American Geophysical Union.

  1. Modular Representation of Luminance Polarity In the Superficial Layers Of Primary Visual Cortex

    PubMed Central

    Smith, Gordon B.; Whitney, David E.; Fitzpatrick, David

    2016-01-01

    Summary The spatial arrangement of luminance increments (ON) and decrements (OFF) falling on the retina provides a wealth of information used by central visual pathways to construct coherent representations of visual scenes. But how the polarity of luminance change is represented in the activity of cortical circuits remains unclear. Using wide-field epifluorescence and two-photon imaging we demonstrate a robust modular representation of luminance polarity (ON or OFF) in the superficial layers of ferret primary visual cortex. Polarity-specific domains are found with both uniform changes in luminance and single light/dark edges, and include neurons selective for orientation and direction of motion. The integration of orientation and polarity preference is evident in the selectivity and discrimination capabilities of most layer 2/3 neurons. We conclude that polarity selectivity is an integral feature of layer 2/3 neurons, ensuring that the distinction between light and dark stimuli is available for further processing in downstream extrastriate areas. PMID:26590348

  2. Landsat-7 long-term acquisition plan radiometry - evolution over time

    USGS Publications Warehouse

    Markham, Brian L; Goward, Samuel; Arvidson, Terry; Barsi, Julia A.; Scaramuzza, Pat

    2006-01-01

    The Landsat-7 Enhanced Thematic Mapper Plus instrument has two selectable gains for each spectral band. In the acquisition plan, the gains were initially set to maximize the entropy in each scene. One unintended consequence of this strategy was that, at times, dense vegetation saturated band 4 and deserts saturated all bands. A revised strategy, based on a land-cover classification and sun angle thresholds, reduced saturation, but resulted in gain changes occurring within the same scene on multiple overpasses. As the gain changes cause some loss of data and difficulties for some ground processing systems, a procedure was devised to shift the gain changes to the nearest predicted cloudy scenes. The results are still not totally satisfactory as gain changes still impact some scenes and saturation still occurs, particularly in ephemerally snow-covered regions. A primary conclusion of our experience with variable gain on Landsat-7 is that such an approach should not be employed on future global monitoring missions.
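
    As a rough, hypothetical illustration of the kind of rule-based gain selection the revised strategy implies (a land-cover class plus sun-elevation thresholds), the sketch below picks a high or low gain per band; the class names, band numbers, and threshold values are illustrative assumptions, not the actual Landsat-7 long-term acquisition plan parameters.

        # Hypothetical per-band gain selection from land cover and sun elevation;
        # the class names, band numbers, and thresholds are illustrative only.
        HIGH, LOW = "high", "low"

        def select_gains(land_cover, sun_elev_deg):
            """Return a gain setting for each reflective band (1-5 and 7)."""
            gains = {band: HIGH for band in (1, 2, 3, 4, 5, 7)}
            if land_cover == "desert" and sun_elev_deg > 45:
                # Bright targets at high sun: low gain everywhere to avoid saturation.
                gains = {band: LOW for band in gains}
            elif land_cover == "dense_vegetation" and sun_elev_deg > 40:
                # Dense vegetation saturates the near-infrared band (band 4) first.
                gains[4] = LOW
            elif land_cover == "snow_ice":
                gains = {band: LOW for band in gains}
            return gains

        print(select_gains("dense_vegetation", 55))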

  3. LivePhantom: Retrieving Virtual World Light Data to Real Environments.

    PubMed

    Kolivand, Hoshang; Billinghurst, Mark; Sunar, Mohd Shahrizal

    2016-01-01

    To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real-time depth detection to cast virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map of the physical scene, mixed into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene, enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown, and the findings are assessed using qualitative and quantitative methods, with comparisons to previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.

  4. LivePhantom: Retrieving Virtual World Light Data to Real Environments

    PubMed Central

    2016-01-01

    To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real-time depth detection to cast virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map of the physical scene, mixed into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene, enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown, and the findings are assessed using qualitative and quantitative methods, with comparisons to previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems. PMID:27930663

  5. The polymorphism of crime scene investigation: An exploratory analysis of the influence of crime and forensic intelligence on decisions made by crime scene examiners.

    PubMed

    Resnikoff, Tatiana; Ribaux, Olivier; Baylon, Amélie; Jendly, Manon; Rossy, Quentin

    2015-12-01

    A growing body of scientific literature recurrently indicates that crime and forensic intelligence influence how crime scene investigators make decisions in their practices. This study further scrutinises this intelligence-led view of crime scene examination. It analyses results obtained from two questionnaires. Data were collected from nine chiefs of Intelligence Units (IUs) and 73 Crime Scene Examiners (CSEs) working in forensic science units (FSUs) in the French-speaking part of Switzerland (six cantonal police agencies). Four salient elements emerged: (1) communication channels do exist between IUs and FSUs across the police agencies under consideration; (2) most CSEs take into account the crime intelligence disseminated to them; (3) CSEs make a differentiated but significant use of this kind of intelligence in their daily practice; (4) this kind of intelligence probably exerts a deep influence on the most concerned CSEs, especially in the selection of the type of material/trace to detect, collect, analyse and exploit. These results contribute to deciphering the subtle dialectic between crime intelligence and crime scene investigation, and to further expressing the polymorphic role of CSEs beyond their most recognised input to the justice system. Indeed, they appear to be central, but implicit, stakeholders in an intelligence-led style of policing. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. ASTER cloud coverage reassessment using MODIS cloud mask products

    NASA Astrophysics Data System (ADS)

    Tonooka, Hideyuki; Omagari, Kunjuro; Yamamoto, Hirokazu; Tachikawa, Tetsushi; Fujita, Masaru; Paitaer, Zaoreguli

    2010-10-01

    In the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Project, two kinds of algorithms are used for cloud assessment in Level-1 processing. The first algorithm, based on the LANDSAT-5 TM Automatic Cloud Cover Assessment (ACCA) algorithm, is used for the portion of daytime scenes observed with only the VNIR bands and for all nighttime scenes, while the second algorithm, based on the LANDSAT-7 ETM+ ACCA algorithm, is used for most daytime scenes observed with all spectral bands. However, the first algorithm does not work well because it lacks some spectral bands sensitive to cloud detection, and both algorithms have been less accurate over snow/ice-covered areas since April 2008, when the SWIR subsystem developed problems. In addition, they perform less well for some combinations of surface type and sun elevation angle. We have therefore developed an ASTER cloud coverage reassessment system using MODIS cloud mask (MOD35) products and have reassessed cloud coverage for all archived ASTER scenes (>1.7 million scenes). All of the new cloud coverage data are included in the Image Management System (IMS) databases of the ASTER Ground Data System (GDS) and NASA's Land Processes Distributed Active Archive Center (LP DAAC) and are used for ASTER product searches by users, and cloud mask images are distributed to users through the Internet. Newly acquired scenes (about 400 scenes per day) are reassessed and inserted into the IMS databases within 5 to 7 days of each scene's observation date. Some validation studies of the new cloud coverage data and some mission-related analyses using those data are also presented in this paper.
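
    A minimal sketch of the reassessment idea follows: a MODIS cloud mask, assumed here to be already resampled onto a common grid, is intersected with an ASTER scene footprint and the percent cloud cover is recomputed. The flag handling and gridding are simplifications, not the actual MOD35/GDS processing chain.

        import numpy as np

        def reassess_cloud_cover(mod35_cloudy, footprint):
            """Percent cloud cover of an ASTER footprint from a MODIS cloud mask.

            mod35_cloudy : 2-D bool array, True where the MODIS mask labels a cell
                           cloudy (assumed already resampled to a common grid)
            footprint    : 2-D bool array, True inside the ASTER scene footprint
            """
            if not footprint.any():
                return float("nan")
            cloudy_inside = mod35_cloudy & footprint
            return 100.0 * cloudy_inside.sum() / footprint.sum()

        # Toy example: a 10x10 grid, 6x6 footprint, upper half of the grid cloudy.
        cloudy = np.zeros((10, 10), dtype=bool)
        cloudy[:5, :] = True
        fp = np.zeros((10, 10), dtype=bool)
        fp[2:8, 2:8] = True
        print(reassess_cloud_cover(cloudy, fp))   # 50.0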

  7. Visual Attention Model Based on Statistical Properties of Neuron Responses

    PubMed Central

    Duan, Haibin; Wang, Xiaohua

    2015-01-01

    Visual attention is a mechanism of the visual system that can select relevant objects from a specific scene. Interactions among neurons in multiple cortical areas are considered to be involved in attentional allocation. However, the characteristics of the encoded features and neuron responses in these attention-related cortices remain unclear. The investigations carried out in this study therefore aim to demonstrate that unusual regions, which attract more attention, generally evoke particular neuron responses. We suppose that visual saliency is obtained on the basis of neuron responses to contexts in natural scenes. A bottom-up visual attention model based on the self-information of neuron responses is proposed to test and verify this hypothesis. Four different color spaces are adopted, and a novel entropy-based combination scheme is designed to make full use of color information. In the saliency maps obtained by the proposed model, valuable regions are highlighted while redundant backgrounds are suppressed. Comparative results reveal that the proposed model outperforms several state-of-the-art models. This study provides insights into neuron-response-based saliency detection and may help elucidate the neural mechanism of early visual cortices for bottom-up visual attention. PMID:25747859
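
    A minimal sketch of the self-information idea behind such a model: feature values that are rare in the scene receive high saliency (-log p), with p estimated here from a global histogram per channel and the channels combined with entropy-derived weights. The histogram estimator and the weighting rule are simplified assumptions, not the authors' exact formulation.

        import numpy as np

        def self_information_map(channel, bins=64):
            """Per-pixel saliency as -log p(value), with p from a global histogram."""
            hist, edges = np.histogram(channel, bins=bins)
            p = hist / hist.sum()
            idx = np.clip(np.digitize(channel, edges[1:-1]), 0, bins - 1)
            return -np.log(p[idx] + 1e-12)

        def combine_channels(channels):
            """Entropy-based combination: lower-entropy maps receive more weight."""
            maps, weights = [], []
            for ch in channels:
                s = self_information_map(ch)
                hist, _ = np.histogram(s, bins=64)
                p = hist / hist.sum() + 1e-12
                entropy = -(p * np.log(p)).sum()
                maps.append(s)
                weights.append(1.0 / entropy)
            weights = np.array(weights) / np.sum(weights)
            return sum(w * m for w, m in zip(weights, maps))

        rng = np.random.default_rng(0)
        img = 0.5 + 0.05 * rng.standard_normal((64, 64, 3))   # mostly mid-gray background
        img[20:30, 20:30, :] = 0.95                           # a rare bright patch
        saliency = combine_channels([img[..., c] for c in range(3)])
        print(saliency[25, 25] > saliency.mean())   # the rare patch should be above average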

  8. Temporal variations of natural soil salinity in an arid environment using satellite images

    NASA Astrophysics Data System (ADS)

    Gutierrez, M.; Johnson, E.

    2010-11-01

    In many remote arid areas, the scarcity of conventional soil salinity data precludes detailed analyses of salinity variations for the purpose of predicting their impact on agricultural production. Landsat satellite data are an appropriate surrogate for on-ground testing in determining temporal variations of soil salinity. In this study, six Landsat scenes over El Cuervo, a closed basin adjacent to the middle Rio Conchos basin in northern Mexico, were used to show the temporal variation of natural salts from 1986 to 2005. Natural salts were inferred from ground reference data and spectral responses. The transformations used were Tasseled Cap, Principal Components, and several band ratios. Classification of each scene was performed from Regions Of Interest developed from geochemical data collected by SGM, spectral responses derived with ENVI software, and a small amount of field data collected by the authors. The resultant land cover classes showed a relationship between climatic drought and the areal coverage of natural salts. When little precipitation occurred in the three months prior to the capture of a Landsat scene, approximately 15%-20% of the area was classified as salt, compared with practically no classified salt in the scenes from the wetter years of 1992 and 2005.

  9. Hemispheric Asymmetry of Visual Scene Processing in the Human Brain: Evidence from Repetition Priming and Intrinsic Activity

    PubMed Central

    Kahn, Itamar; Wig, Gagan S.; Schacter, Daniel L.

    2012-01-01

    Asymmetrical specialization of cognitive processes across the cerebral hemispheres is a hallmark of healthy brain development and an important evolutionary trait underlying higher cognition in humans. While previous research, including studies of priming, divided visual field presentation, and split-brain patients, demonstrates a general pattern of right/left asymmetry of form-specific versus form-abstract visual processing, little is known about brain organization underlying this dissociation. Here, using repetition priming of complex visual scenes and high-resolution functional magnetic resonance imaging (MRI), we demonstrate asymmetrical form specificity of visual processing between the right and left hemispheres within a region known to be critical for processing of visual spatial scenes (parahippocampal place area [PPA]). Next, we use resting-state functional connectivity MRI analyses to demonstrate that this functional asymmetry is associated with differential intrinsic activity correlations of the right versus left PPA with regions critically involved in perceptual versus conceptual processing, respectively. Our results demonstrate that the PPA comprises lateralized subregions across the cerebral hemispheres that are engaged in functionally dissociable yet complementary components of visual scene analysis. Furthermore, this functional asymmetry is associated with differential intrinsic functional connectivity of the PPA with distinct brain areas known to mediate dissociable cognitive processes. PMID:21968568

  10. Hemispheric asymmetry of visual scene processing in the human brain: evidence from repetition priming and intrinsic activity.

    PubMed

    Stevens, W Dale; Kahn, Itamar; Wig, Gagan S; Schacter, Daniel L

    2012-08-01

    Asymmetrical specialization of cognitive processes across the cerebral hemispheres is a hallmark of healthy brain development and an important evolutionary trait underlying higher cognition in humans. While previous research, including studies of priming, divided visual field presentation, and split-brain patients, demonstrates a general pattern of right/left asymmetry of form-specific versus form-abstract visual processing, little is known about brain organization underlying this dissociation. Here, using repetition priming of complex visual scenes and high-resolution functional magnetic resonance imaging (MRI), we demonstrate asymmetrical form specificity of visual processing between the right and left hemispheres within a region known to be critical for processing of visual spatial scenes (parahippocampal place area [PPA]). Next, we use resting-state functional connectivity MRI analyses to demonstrate that this functional asymmetry is associated with differential intrinsic activity correlations of the right versus left PPA with regions critically involved in perceptual versus conceptual processing, respectively. Our results demonstrate that the PPA comprises lateralized subregions across the cerebral hemispheres that are engaged in functionally dissociable yet complementary components of visual scene analysis. Furthermore, this functional asymmetry is associated with differential intrinsic functional connectivity of the PPA with distinct brain areas known to mediate dissociable cognitive processes.

  11. Initial vegetation species and senescience/stress indicator mapping in the San Luis Valley, Colorado using imaging spectrometer data

    NASA Technical Reports Server (NTRS)

    Clark, Roger N.; King, Trude V. V.; Ager, Cathy; Swayze, Gregg A.

    1995-01-01

    We analyzed AVIRIS data obtained over agricultural areas in the San Luis Valley of Colorado. The data were acquired on September 3, 1993. A combined method of radiative transfer modeling and ground calibration site reflectance was used to correct the flight data to surface reflectance. This method, called Radiative Transfer Ground Calibration, or RTGC, corrects for variable water vapor in the atmosphere and produces spectra free of artifacts, with channel-to-channel noise approaching the signal-to-noise ratio of the raw data. The calibration site soil samples were obtained on the day of the overflight and measured on our laboratory spectrometer. The site was near the center of the AVIRIS scene, and the spectrum of the soil is spectrally bland, especially in the region of the chlorophyll absorption in the visible portion of the spectrum. The center of the scene is located at approximately 106 deg 03' longitude, 37 deg 23' latitude, and the scene covers about 92 square kilometers. This scene is one of 28 in the area for a general project to study the Summitville abandoned mine site, located in the mountains west of the San Luis Valley, and its effects on the surrounding environment.
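
    The ground-calibration half of the RTGC approach can be sketched as an empirical-line fit: laboratory reflectance of the calibration-site soil is regressed against at-sensor radiance over the same site, and the per-band gain and offset are applied to the rest of the scene. This sketch omits the radiative-transfer water-vapor correction that the method combines with it, and all numbers are placeholders.

        import numpy as np

        def empirical_line(cal_radiance, cal_reflectance, scene_radiance):
            """Fit per-band gain/offset at a calibration site, then apply to a scene.

            cal_radiance    : (n_samples, n_bands) at-sensor radiance over the site
            cal_reflectance : (n_samples, n_bands) laboratory reflectance of site samples
            scene_radiance  : (n_pixels, n_bands) radiance to convert to reflectance
            """
            n_bands = cal_radiance.shape[1]
            reflectance = np.empty_like(scene_radiance, dtype=float)
            for b in range(n_bands):
                gain, offset = np.polyfit(cal_radiance[:, b], cal_reflectance[:, b], 1)
                reflectance[:, b] = gain * scene_radiance[:, b] + offset
            return reflectance

        # Placeholder numbers: 3 calibration samples, 2 bands, 4 scene pixels.
        cal_rad = np.array([[10.0, 20.0], [12.0, 24.0], [14.0, 28.0]])
        cal_ref = np.array([[0.10, 0.20], [0.12, 0.24], [0.14, 0.28]])
        scene = np.array([[11.0, 22.0], [13.0, 26.0], [10.5, 21.0], [13.5, 27.0]])
        print(empirical_line(cal_rad, cal_ref, scene))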

  12. Feature-based attentional modulations in the absence of direct visual stimulation.

    PubMed

    Serences, John T; Boynton, Geoffrey M

    2007-07-19

    When faced with a crowded visual scene, observers must selectively attend to behaviorally relevant objects to avoid sensory overload. Often this selection process is guided by prior knowledge of a target-defining feature (e.g., the color red when looking for an apple), which enhances the firing rate of visual neurons that are selective for the attended feature. Here, we used functional magnetic resonance imaging and a pattern classification algorithm to predict the attentional state of human observers as they monitored a visual feature (one of two directions of motion). We find that feature-specific attention effects spread across the visual field, even to regions of the scene that do not contain a stimulus. This spread of feature-based attention to empty regions of space may facilitate the perception of behaviorally relevant stimuli by increasing sensitivity to attended features at all locations in the visual field.
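
    A minimal sketch of the decoding step: voxel patterns are classified by attended motion direction with a simple nearest-class-mean (correlation) rule on a train/test split. The synthetic data and the classifier choice are illustrative assumptions, not the authors' pipeline.

        import numpy as np

        def nearest_mean_decode(train_X, train_y, test_X):
            """Predict the attended direction of each test pattern by correlating it
            with the mean training pattern of each class (a simple MVPA baseline)."""
            classes = np.unique(train_y)
            templates = np.array([train_X[train_y == c].mean(axis=0) for c in classes])
            tz = (templates - templates.mean(1, keepdims=True)) / templates.std(1, keepdims=True)
            xz = (test_X - test_X.mean(1, keepdims=True)) / test_X.std(1, keepdims=True)
            corr = xz @ tz.T / test_X.shape[1]
            return classes[np.argmax(corr, axis=1)]

        # Synthetic voxel patterns: two attended directions with distinct signatures.
        rng = np.random.default_rng(1)
        n_voxels = 50
        signature = {0: rng.standard_normal(n_voxels), 1: rng.standard_normal(n_voxels)}
        X = np.vstack([signature[c] + 0.8 * rng.standard_normal(n_voxels)
                       for c in (0, 1) * 40])
        y = np.array([0, 1] * 40)
        pred = nearest_mean_decode(X[:60], y[:60], X[60:])
        print("decoding accuracy:", (pred == y[60:]).mean())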

  13. Everyone knows what is interesting: Salient locations which should be fixated

    PubMed Central

    Masciocchi, Christopher Michael; Mihalas, Stefan; Parkhurst, Derrick; Niebur, Ernst

    2010-01-01

    Most natural scenes are too complex to be perceived instantaneously in their entirety. Observers therefore have to select parts of them and process these parts sequentially. We study how this selection and prioritization process is performed by humans at two different levels. One is the overt attention mechanism of saccadic eye movements in a free-viewing paradigm. The second is a conscious decision process in which we asked observers which points in a scene they considered the most interesting. We find in a very large participant population (more than one thousand) that observers largely agree on which points they consider interesting. Their selections are also correlated with the eye movement pattern of different subjects. Both are correlated with predictions of a purely bottom–up saliency map model. Thus, bottom–up saliency influences cognitive processes as far removed from the sensory periphery as in the conscious choice of what an observer considers interesting. PMID:20053088

  14. A neural model of the temporal dynamics of figure-ground segregation in motion perception.

    PubMed

    Raudies, Florian; Neumann, Heiko

    2010-03-01

    How does the visual system manage to segment a visual scene into surfaces and objects and manage to attend to a target object? Based on psychological and physiological investigations, it has been proposed that the perceptual organization and segmentation of a scene is achieved by the processing at different levels of the visual cortical hierarchy. According to this, motion onset detection, motion-defined shape segregation, and target selection are accomplished by processes which bind together simple features into fragments of increasingly complex configurations at different levels in the processing hierarchy. As an alternative to this hierarchical processing hypothesis, it has been proposed that the processing stages for feature detection and segregation are reflected in different temporal episodes in the response patterns of individual neurons. Such temporal epochs have been observed in the activation pattern of neurons as low as in area V1. Here, we present a neural network model of motion detection, figure-ground segregation and attentive selection which explains these response patterns in a unifying framework. Based on known principles of functional architecture of the visual cortex, we propose that initial motion and motion boundaries are detected at different and hierarchically organized stages in the dorsal pathway. Visual shapes that are defined by boundaries, which were generated from juxtaposed opponent motions, are represented at different stages in the ventral pathway. Model areas in the different pathways interact through feedforward and modulating feedback, while mutual interactions enable the communication between motion and form representations. Selective attention is devoted to shape representations by sending modulating feedback signals from higher levels (working memory) to intermediate levels to enhance their responses. Areas in the motion and form pathway are coupled through top-down feedback with V1 cells at the bottom end of the hierarchy. We propose that the different temporal episodes in the response pattern of V1 cells, as recorded in recent experiments, reflect the strength of modulating feedback signals. This feedback results from the consolidated shape representations from coherent motion patterns and the attentive modulation of responses along the cortical hierarchy. The model makes testable predictions concerning the duration and delay of the temporal episodes of V1 cell responses as well as their response variations that were caused by modulating feedback signals. Copyright 2009 Elsevier Ltd. All rights reserved.

  15. Vegetation in transition: the Southwest's dynamic past century

    Treesearch

    Raymond M. Turner

    2005-01-01

    Monitoring that follows long-term vegetation changes often requires selection of a temporal baseline. Any such starting point is to some degree artificial, but in some instances there are aids that can be used as guides to baseline selection. Matched photographs duplicating scenes first recorded on film a century or more ago reveal changes that help select the starting...

  16. ERTS-1 anomalous dark patches

    NASA Technical Reports Server (NTRS)

    Strong, A. E. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Through combined use of imagery from the ERTS-1 and NOAA-2 satellites, it was found that when the sun elevation exceeds 55 degrees, the ERTS-1 imagery is subject to considerable contamination by sunlight even though the actual specular point is nearly 300 nautical miles from nadir. Based on sea surface wave slope information, a wind speed of 10 knots will theoretically provide approximately 0.5 percent incident solar reflectance at the ERTS multispectral scanner detectors. This reflectance nearly doubles under the influence of a 20-knot wind. The most pronounced effect occurs in areas of calm water, where anomalous dark patches are observed. Calm water at the distances from the specular point found in ERTS scenes will reflect no solar energy to the multispectral scanner, making these regions stand out as dark areas in all bands in an ocean scene otherwise composed of general diffuse sunlight from rougher ocean surfaces. Anomalous dark patches in the outer parts of the glitter zones may explain the unusual appearance of some scenes.

  17. Environmental processes and spectral reflectance characteristics associated with soil erosion in desert fringe regions

    NASA Technical Reports Server (NTRS)

    Jacobberger, P. A.

    1986-01-01

    Two Thematic Mapper (TM) scenes were acquired: one for the Bahariya, Egypt field area, and one covering the Okavango Delta site. Investigations at the northwest Botswana study sites have concentrated upon a system of large linear (alab) dunes possessing an average wavelength of 2 kilometers and an east-west orientation. These dunes exist to the north and west of the Okavango Swamp, the pseudodeltaic end-sink of the internal Okavango-Cubango-Cuito drainage network. One archival scene and two TM acquisitions are on order, but at present no TM data have been acquired for the Tombouctou/Azaouad Dunes, Mali. The three areas taken together comprise an environmental series ranging from hyperarid to semi-arid, with desertization processes operational or incipient in each. The long-range goal is to predict normal seasonal variations, so that aperiodic spectral changes resulting from soil erosion, vegetation damage, and associated surface processes would be distinguishable as departures from the norm.

  18. Evaluation of multiband, multitemporal, and transformed LANDSAT MSS data for land cover area estimation. [North Central Missouri

    NASA Technical Reports Server (NTRS)

    Stoner, E. R.; May, G. A.; Kalcic, M. T. (Principal Investigator)

    1981-01-01

    Sample segments of ground-verified land cover data collected in conjunction with the USDA/ESS June Enumerative Survey were merged with LANDSAT data and served as a focus for unsupervised spectral class development and accuracy assessment. Multitemporal data sets were created from single-date LANDSAT MSS acquisitions from a nominal scene covering an eleven-county area in north central Missouri. Classification accuracies for the four land cover types predominant in the test site showed significant improvement in going from unitemporal to multitemporal data sets. Transformed LANDSAT data sets did not significantly improve classification accuracies. Regression estimators yielded mixed results for different land covers. Misregistration of two LANDSAT data sets by as much as one and one-half pixels did not significantly alter overall classification accuracies. Existing algorithms for scene-to-scene overlay proved adequate for multitemporal data analysis as long as statistical class development and accuracy assessment were restricted to field interior pixels.

  19. A local segmentation parameter optimization approach for mapping heterogeneous urban environments using VHR imagery

    NASA Astrophysics Data System (ADS)

    Grippa, Tais; Georganos, Stefanos; Lennert, Moritz; Vanhuysse, Sabine; Wolff, Eléonore

    2017-10-01

    Mapping large heterogeneous urban areas using object-based image analysis (OBIA) remains challenging, especially with respect to the segmentation process. This could be explained both by the complex arrangement of heterogeneous land-cover classes and by the high diversity of urban patterns that can be encountered throughout the scene. In this context, using a single segmentation parameter to obtain satisfying segmentation results for the whole scene can be impossible. Nonetheless, it is possible to subdivide the whole city into smaller local zones that are rather homogeneous in their urban pattern. These zones can then be used to optimize the segmentation parameter locally, instead of using the whole image or a single representative spatial subset. This paper assesses the contribution of a local approach to segmentation parameter optimization compared to a global approach. Ouagadougou, located in sub-Saharan Africa, is used as a case study. First, the whole scene is segmented using a single globally optimized segmentation parameter. Second, the city is subdivided into 283 local zones, homogeneous in terms of building size and building density. Each local zone is then segmented using a locally optimized segmentation parameter. Unsupervised segmentation parameter optimization (USPO), relying on an optimization function that tends to maximize both intra-object homogeneity and inter-object heterogeneity, is used to select the segmentation parameter automatically for both approaches. Finally, a land-use/land-cover classification is performed using the Random Forest (RF) classifier. The results reveal that the local approach outperforms the global one, especially by limiting confusion between buildings and their bare-soil neighbors.
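
    The optimization criterion described above can be sketched as follows: for each candidate parameter, a segmentation is scored by area-weighted within-segment variance (intra-object homogeneity) and by a Moran's I computed on the segment-mean image (inter-object heterogeneity), and the parameter with the best combined normalized score is kept. The segment() callable is a placeholder for whatever segmentation algorithm is used; this is a simplified rendering of the USPO idea, not the GRASS GIS implementation used in the paper.

        import numpy as np

        def weighted_variance(image, labels):
            """Area-weighted within-segment variance (intra-object homogeneity)."""
            wv, total = 0.0, 0
            for lab in np.unique(labels):
                vals = image[labels == lab]
                wv += vals.size * vals.var()
                total += vals.size
            return wv / total

        def morans_i(mean_image):
            """Global Moran's I with 4-neighbour weights on the segment-mean image
            (a pixel-level proxy for inter-segment spatial autocorrelation)."""
            z = mean_image - mean_image.mean()
            num = (z[:, :-1] * z[:, 1:]).sum() + (z[:-1, :] * z[1:, :]).sum()
            n_pairs = z[:, :-1].size + z[:-1, :].size
            return (z.size / n_pairs) * (num / ((z ** 2).sum() + 1e-12))

        def uspo(image, segment, params):
            """Pick the parameter whose segmentation jointly minimizes weighted
            variance and Moran's I (both normalized, then combined)."""
            wvs, mis, segs = [], [], []
            for p in params:
                labels = segment(image, p)
                mean_img = np.zeros_like(image, dtype=float)
                for lab in np.unique(labels):
                    mean_img[labels == lab] = image[labels == lab].mean()
                wvs.append(weighted_variance(image, labels))
                mis.append(morans_i(mean_img))
                segs.append(labels)
            norm = lambda v: (np.array(v) - min(v)) / (max(v) - min(v) + 1e-12)
            score = (1 - norm(wvs)) + (1 - norm(mis))   # higher combined score is better
            best = int(np.argmax(score))
            return params[best], segs[best]

    For instance, the segment() argument could wrap a standard algorithm such as skimage.segmentation.felzenszwalb(image, scale=p), applied either to the whole image (global approach) or zone by zone (local approach).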

  20. Manhole Cover Detection Using Vehicle-Based Multi-Sensor Data

    NASA Astrophysics Data System (ADS)

    Ji, S.; Shi, Y.; Shi, Z.

    2012-07-01

    A new method combining multi-view matching and feature extraction techniques is developed to detect manhole covers on streets, using close-range images together with GPS/IMU and LIDAR data. Manhole covers are an important road-traffic target, like traffic signs, traffic lights, and zebra crossings, but they have more uniform shapes. However, differing shooting angles and distances, ground materials, complex street scenes (especially their shadows), and cars on the road greatly reduce the cover detection rate. This paper introduces a new approach to edge detection and feature extraction that overcomes these difficulties and greatly improves the detection rate. The LIDAR data are used for scene segmentation, so that the surrounding street scene and cars are excluded from the roads. An edge detection method based on Canny, which is sensitive to arcs and ellipses, is then applied to the segmented road scene; areas of interest containing arcs are extracted and fitted to ellipses. The ellipses are resampled for invariance to shooting angle and distance and then matched to adjacent images to further verify whether they are covers. More than 1000 images with different scenes are used in our tests, and the detection rate is analyzed. The results verify that our method has advantages for correct cover detection in complex street scenes.
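
    A compact sketch of the edge-and-ellipse stage described above, using OpenCV's Canny detector and ellipse fitting on contours of an already road-segmented grayscale image; the thresholds and the size/eccentricity filters are illustrative assumptions, and the LIDAR segmentation and multi-view matching steps are not shown.

        import cv2
        import numpy as np

        def detect_cover_candidates(road_gray, low=50, high=150):
            """Fit ellipses to arc-like contours in a road-only grayscale image."""
            edges = cv2.Canny(road_gray, low, high)
            # [-2] keeps this working on both OpenCV 3 and 4 return conventions.
            contours = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)[-2]
            candidates = []
            for cnt in contours:
                if len(cnt) < 30:                # too short to be a cover rim
                    continue
                center, axes, angle = cv2.fitEllipse(cnt)
                major, minor = max(axes), min(axes)
                if minor < 10:                   # reject tiny fits
                    continue
                if minor / major < 0.3:          # reject extremely elongated ellipses
                    continue
                candidates.append((center, (major, minor), angle))
            return candidates

        # Synthetic test: a dark disc (cover-like) on a brighter road surface.
        img = np.full((200, 200), 180, dtype=np.uint8)
        cv2.circle(img, (100, 100), 40, 60, thickness=-1)
        print(len(detect_cover_candidates(img)))   # expect at least one candidate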

  1. Robotic vision techniques for space operations

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar

    1994-01-01

    Automation and robotics for space applications are being pursued for increased productivity, enhanced reliability, increased flexibility, higher safety, and for the automation of time-consuming tasks and those activities which are beyond the capacity of the crew. One of the key functional elements of an automated robotic system is sensing and perception. As the robotics era dawns in space, vision systems will be required to provide the key sensory data needed for multifaceted intelligent operations. In general, the three-dimensional scene/object description, along with location, orientation, and motion parameters, will be needed. In space, the absence of diffused lighting due to a lack of atmosphere gives rise to: (a) high dynamic range (10(exp 8)) of scattered sunlight intensities, resulting in very high contrast between shadowed and specular portions of the scene; (b) intense specular reflections causing target/scene bloom; and (c) loss of portions of the image due to shadowing and presence of stars, Earth, Moon, and other space objects in the scene. In this work, developments for combating the adverse effects described earlier and for enhancing scene definition are discussed. Both active and passive sensors are used. The algorithm for selecting the appropriate wavelength, polarization, and look angle of the vision sensors is based on environmental factors as well as the properties of the target/scene that are to be perceived. The environment is characterized on the basis of sunlight and other illumination incident on the target/scene and the temperature profiles estimated on the basis of the incident illumination. The unknown geometrical and physical parameters are then derived from the fusion of the active and passive microwave, infrared, laser, and optical data.

  2. Color appearance and color rendering of HDR scenes: an experiment

    NASA Astrophysics Data System (ADS)

    Parraman, Carinna; Rizzi, Alessandro; McCann, John J.

    2009-01-01

    In order to gain a deeper understanding of the appearance of coloured objects in a three-dimensional scene, the research introduces a multidisciplinary experimental approach. The experiment employed two identical 3-D Mondrians, which were viewed and compared side by side. Each scene was subjected to different lighting conditions. First, we used an illumination cube to diffuse the light and illuminate all the objects from each direction. This produced a low-dynamic-range (LDR) image of the 3-D Mondrian scene. Second, in order to make a high-dynamic-range (HDR) image of the same objects, we used a directional 150W spotlight and an array of WLEDs assembled in a flashlight. The scenes were significant as each contained exactly the same three-dimensional painted colour blocks that were arranged in the same position in the still life. The blocks comprised 6 hue colours and 5 tones from white to black. Participants from the CREATE project were asked to consider the change in the appearance of a selection of colours according to lightness, hue, and chroma, and to rate how the change in illumination affected appearance. We measured the light coming to the eye from still-life surfaces with a colorimeter (Yxy). We captured the scene radiance using multiple exposures with a number of different cameras. We have begun a programme of digital image processing of these scene capture methods. This multi-disciplinary programme continues until 2010, so this paper is an interim report on the initial phases and a description of the ongoing project.

  3. Earth Observation

    NASA Image and Video Library

    2014-06-24

    ISS040-E-018725 (24 June 2014) --- One of the Expedition 40 crew members aboard the Earth-orbiting International Space Station photographed this image featuring most of the peninsular portion of the state of Florida. Lake Okeechobee stands out in the south central part of the state. The heavily-populated area of Miami can be traced along the Atlantic Coast near the bottom of the scene. Cape Canaveral and the Kennedy Space Center are in the lower right portion of the image on the Atlantic Coast. The Florida Keys are at the south (left) portion of the scene, and the Gulf Coast, including the Tampa-St. Petersburg area, is near frame center.

  4. Characterization techniques for incorporating backgrounds into DIRSIG

    NASA Astrophysics Data System (ADS)

    Brown, Scott D.; Schott, John R.

    2000-07-01

    The appearance of operational hyperspectral imaging spectrometers in both the solar and thermal regions has led to the development of a variety of spectral detection algorithms. The development and testing of these algorithms require well-characterized field collection campaigns that can be time- and cost-prohibitive. Radiometrically robust synthetic image generation (SIG) environments that can generate appropriate images under a variety of atmospheric conditions and with a variety of sensors offer an excellent supplement for reducing the scope of expensive field collections. In addition, SIG image products provide the algorithm developer with per-pixel truth, allowing for improved characterization of algorithm performance. To meet the needs of the algorithm development community, the image modeling community needs to supply synthetic image products that contain all the spatial and spectral variability present in real-world scenes and that provide the large-area coverage typically acquired with actual sensors. This places a heavy burden on synthetic scene builders to construct well-characterized scenes that span large areas. Several SIG models have demonstrated the ability to accurately model targets (vehicles, buildings, etc.) using well-constructed target geometry (from CAD packages) and robust thermal and radiometry models. However, background objects (vegetation, infrastructure, etc.) dominate the percentage of real-world scene pixels, and applying target-building techniques to them is time- and resource-prohibitive. This paper discusses new methods that have been integrated into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model to characterize backgrounds. The new suite of scene construct types allows the user to incorporate both terrain and surface properties to obtain wide-area coverage. The terrain can be incorporated using a triangular irregular network (TIN) derived from elevation data or digital elevation model (DEM) data from actual sensors, temperature maps, spectral reflectance cubes (possibly derived from actual sensors), and/or material and mixture maps. Descriptions and examples of each new technique are presented, as well as hybrid methods that demonstrate target embedding in real-world imagery.

  5. Conference scene: Select Biosciences Epigenetics Europe 2010.

    PubMed

    Razvi, Enal S

    2011-02-01

    The field of epigenetics is now on a geometric rise, driven in a large part by the realization that modifiers of chromatin are key regulators of biological processes in vivo. The three major classes of epigenetic effectors are DNA methylation, histone post-translational modifications (such as acetylation, methylation or phosphorylation) and small noncoding RNAs (most notably microRNAs). In this article, I report from Select Biosciences Epigenetics Europe 2010 industry conference held on 14-15 September 2010 at The Burlington Hotel, Dublin, Ireland. This industry conference was extremely well attended with a global pool of delegates representing the academic research community, biotechnology companies and pharmaceutical companies, as well as the technology/tool developers. This conference represented the current state of the epigenetics community with cancer/oncology as a key driver. In fact, it has been estimated that approximately 45% of epigenetic researchers today identify cancer/oncology as their main area of focus vis-à-vis their epigenetic research efforts.

  6. Parahippocampal and retrosplenial contributions to human spatial navigation

    PubMed Central

    Epstein, Russell A.

    2010-01-01

    Spatial navigation is a core cognitive ability in humans and animals. Neuroimaging studies have identified two functionally-defined brain regions that activate during navigational tasks and also during passive viewing of navigationally-relevant stimuli such as environmental scenes: the parahippocampal place area (PPA) and the retrosplenial complex (RSC). Recent findings indicate that the PPA and RSC play distinct and complementary roles in spatial navigation, with the PPA more concerned with representation of the local visual scene and RSC more concerned with situating the scene within the broader spatial environment. These findings are a first step towards understanding the separate components of the cortical network that mediates spatial navigation in humans. PMID:18760955

  7. Memory for sound, with an ear toward hearing in complex auditory scenes.

    PubMed

    Snyder, Joel S; Gregg, Melissa K

    2011-10-01

    An area of research that has experienced recent growth is the study of memory during perception of simple and complex auditory scenes. These studies have provided important information about how well auditory objects are encoded in memory and how well listeners can notice changes in auditory scenes. These are significant developments because they present an opportunity to better understand how we hear in realistic situations, how higher-level aspects of hearing such as semantics and prior exposure affect perception, and the similarities and differences between auditory perception and perception in other modalities, such as vision and touch. The research also poses exciting challenges for behavioral and neural models of how auditory perception and memory work.

  8. Detection and mapping of hydrothermally altered rocks in the vicinity of the comstock lode, Virginia Range, Nevada, using enhanced LANDSAT images

    NASA Technical Reports Server (NTRS)

    Ashley, R. P. (Principal Investigator); Goetz, A. F. H.; Rowan, L. C.; Abrams, M. J.

    1979-01-01

    The author has identified the following significant results. LANDSAT images enhanced by the band-ratioing method can be used for reconnaissance alteration mapping in moderately heavily vegetated semiarid terrain as well as in the sparsely vegetated to semiarid terrain where the technique was originally developed. Significant vegetation cover in a scene, however, requires the use of MSS ratios 4/5, 4/6, and 6/7 rather than 4/5, 5/6, and 6/7, and requires careful interpretation of the results. Supplemental information suitable for vegetation identification and cover estimates, such as standard LANDSAT false-color composites and low-altitude aerial photographs of selected areas, is desirable.

  9. Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS)

    NASA Technical Reports Server (NTRS)

    Masek, Jeffrey G.

    2006-01-01

    The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) project is creating a record of forest disturbance and regrowth for North America from the Landsat satellite record, in support of the carbon modeling activities. LEDAPS relies on the decadal Landsat GeoCover data set supplemented by dense image time series for selected locations. Imagery is first atmospherically corrected to surface reflectance, and then change detection algorithms are used to extract disturbance area, type, and frequency. Reuse of the MODIS Land processing system (MODAPS) architecture allows rapid throughput of over 2200 MSS, TM, and ETM+ scenes. Initial ("Beta") surface reflectance products are currently available for testing, and initial continental disturbance products will be available by the middle of 2006.

  10. Neurotoxic lesions of ventrolateral prefrontal cortex impair object-in-place scene memory

    PubMed Central

    Wilson, Charles R E; Gaffan, David; Mitchell, Anna S; Baxter, Mark G

    2007-01-01

    Disconnection of the frontal lobe from the inferotemporal cortex produces deficits in a number of cognitive tasks that require the application of memory-dependent rules to visual stimuli. The specific regions of frontal cortex that interact with the temporal lobe in performance of these tasks remain undefined. One capacity that is impaired by frontal–temporal disconnection is rapid learning of new object-in-place scene problems, in which visual discriminations between two small typographic characters are learned in the context of different visually complex scenes. In the present study, we examined whether neurotoxic lesions of ventrolateral prefrontal cortex in one hemisphere, combined with ablation of inferior temporal cortex in the contralateral hemisphere, would impair learning of new object-in-place scene problems. Male macaque monkeys learned 10 or 20 new object-in-place problems in each daily test session. Unilateral neurotoxic lesions of ventrolateral prefrontal cortex produced by multiple injections of a mixture of ibotenate and N-methyl-d-aspartate did not affect performance. However, when disconnection from inferotemporal cortex was completed by ablating this region contralateral to the neurotoxic prefrontal lesion, new learning was substantially impaired. Sham disconnection (injecting saline instead of neurotoxin contralateral to the inferotemporal lesion) did not affect performance. These findings support two conclusions: first, that the ventrolateral prefrontal cortex is a critical area within the frontal lobe for scene memory; and second, the effects of ablations of prefrontal cortex can be confidently attributed to the loss of cell bodies within the prefrontal cortex rather than to interruption of fibres of passage through the lesioned area. PMID:17445247

  11. Spectral decomposition of AVIRIS data

    NASA Technical Reports Server (NTRS)

    Gaddis, Lisa; Soderblom, Laurence; Kieffer, Hugh; Becker, Kris; Torson, Jim; Mullins, Kevin

    1993-01-01

    A set of techniques is presented that uses only information contained within a raw Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) scene to estimate and to remove additive components such as multiple scattering and instrument dark current. Multiplicative components (instrument gain, topographic modulation of brightness, atmospheric transmission) can then be normalized, permitting enhancement, extraction, and identification of relative reflectance information related to surface composition and mineralogy. The technique for derivation of additive-component spectra from a raw AVIRIS scene is an adaptation of the 'regression intersection method' of Crippen. This method uses two surface units that are spatially extensive and located in rugged terrain. For a given wavelength pair, subtraction of the derived additive component from individual band values will remove topography in both regions in a band/band ratio image. Normalization of all spectra in the scene to the average scene spectrum then results in cancellation of multiplicative components and production of a relative-reflectance scene. The resulting AVIRIS product contains relative-reflectance features due to mineral absorption that depart from the average spectrum. These features commonly are extremely weak and difficult to recognize, but they can be enhanced by using two simple 3-D image-processing tools. The validity of these techniques will be demonstrated by comparisons between relative-reflectance AVIRIS spectra and those derived by using JPL standard calibrations. The AVIRIS data used in this analysis were acquired over the Kelso Dunes area (34 deg 55' N, 115 deg 43' W) of the eastern Mojave Desert, CA (in 1987) and the Upheaval Dome area (38 deg 27' N, 109 deg 55' W) of the Canyonlands National Park, UT (in 1991).

  12. Characterizing Woody Vegetation Spectral and Structural Parameters with a 3-D Scene Model

    NASA Astrophysics Data System (ADS)

    Qin, W.; Yang, L.

    2004-05-01

    Quantification of structural and biophysical parameters of woody vegetation is of great significance in understanding vegetation condition, dynamics and functionality. Such information at a landscape scale is crucial for global and regional land cover characterization, global carbon-cycle research, forest resource inventories, and fire fuel estimation. While great efforts and progress have been made in mapping general land cover types over large areas, at present the ability to quantify regional woody vegetation structural and biophysical parameters is limited. One approach to address this research issue is through an integration of a physically based 3-D scene model with multiangle and multispectral remote sensing data and in-situ measurements. The first step of this work is to model woody vegetation structure and its radiation regime using a physically based 3-D scene model and field data, before a robust operational algorithm can be developed for retrieval of important woody vegetation structural/biophysical parameters. In this study, we use an advanced 3-D scene model recently developed by Qin and Gerstl (2000), based on L-systems and radiosity theories. This 3-D scene model has been successfully applied to semi-arid shrubland to study structure and radiation regime at a regional scale. We apply this 3-D scene model to a more complicated and heterogeneous forest environment dominated by deciduous and coniferous trees. The data used in this study are from a field campaign conducted by NASA in a portion of the Superior National Forest (SNF) near Ely, Minnesota during the summers of 1983 and 1984, and supplemental data collected during our revisit to the same area of the SNF in the summer of 2003. The model is first validated with reflectance measurements at different scales (ground observations, helicopter, aircraft, and satellite). Then its ability to characterize the structural and spectral parameters of the forest scene is evaluated. Based on the results from this study and the current multi-spectral and multi-angular satellite data (MODIS, MISR), a robust retrieval system to estimate woody vegetation structural/biophysical parameters is proposed.

  13. Tobacco imagery on New Zealand television 2002-2004.

    PubMed

    McGee, Rob; Ketchel, Juanita

    2006-10-01

    Considerable emphasis has been placed on the importance of tobacco imagery in the movies as one of the "drivers" of smoking among young people. Findings are presented from a content analysis of 98 hours of prime-time programming on New Zealand television in 2004, identifying 152 scenes with tobacco imagery and selected characteristics of those scenes. About one in four programmes contained tobacco imagery, most of which might be regarded as "neutral or positive". This amounted to about two scenes containing such imagery for every hour of programming. A comparison with our earlier content analysis of programming in 2002 indicated little change in the level of tobacco imagery. The effect of this imagery in contributing to young viewers taking up smoking, and sustaining the addiction among those already smoking, deserves more research attention.

  14. Editing ERTS-1 data to exclude land aids cluster analysis of water targets

    NASA Technical Reports Server (NTRS)

    Erb, R. B. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. It has been determined that an increase in the number of spectrally distinct coastal water types is achieved when data values over the adjacent land areas are excluded from the processing routine. This finding resulted from an automatic clustering analysis of ERTS-1 system corrected MSS scene 1002-18134 of 25 July 1972 over Monterey Bay, California. When the entire study area data set was submitted to the clustering only two distinct water classes were extracted. However, when the land area data points were removed from the data set and resubmitted to the clustering routine, four distinct groupings of water features were identified. Additionally, unlike the previous separation, the four types could be correlated to features observable in the associated ERTS-1 imagery. This exercise demonstrates that by proper selection of data submitted to the processing routine, based upon the specific application of study, additional information may be extracted from the ERTS-1 MSS data.

  15. Rapid discrimination of visual scene content in the human brain.

    PubMed

    Anokhin, Andrey P; Golosheykin, Simon; Sirevaag, Erik; Kristjansson, Sean; Rohrbaugh, John W; Heath, Andrew C

    2006-06-06

    The rapid evaluation of complex visual environments is critical for an organism's adaptation and survival. Previous studies have shown that emotionally significant visual scenes, both pleasant and unpleasant, elicit a larger late positive wave in the event-related brain potential (ERP) than emotionally neutral pictures. The purpose of the present study was to examine whether neuroelectric responses elicited by complex pictures discriminate between specific, biologically relevant contents of the visual scene and to determine how early in the picture processing this discrimination occurs. Subjects (n = 264) viewed 55 color slides differing in both scene content and emotional significance. No categorical judgments or responses were required. Consistent with previous studies, we found that emotionally arousing pictures, regardless of their content, produce a larger late positive wave than neutral pictures. However, when pictures were further categorized by content, anterior ERP components in a time window between 200 and 600 ms following stimulus onset showed a high selectivity for pictures with erotic content compared to other pictures regardless of their emotional valence (pleasant, neutral, and unpleasant) or emotional arousal. The divergence of ERPs elicited by erotic and non-erotic contents started at 185 ms post-stimulus in the fronto-central midline region, with a later onset in parietal regions. This rapid, selective, and content-specific processing of erotic materials and its dissociation from other pictures (including emotionally positive pictures) suggests the existence of a specialized neural network for prioritized processing of a distinct category of biologically relevant stimuli with high adaptive and evolutionary significance.

  16. Rapid discrimination of visual scene content in the human brain

    PubMed Central

    Anokhin, Andrey P.; Golosheykin, Simon; Sirevaag, Erik; Kristjansson, Sean; Rohrbaugh, John W.; Heath, Andrew C.

    2007-01-01

    The rapid evaluation of complex visual environments is critical for an organism's adaptation and survival. Previous studies have shown that emotionally significant visual scenes, both pleasant and unpleasant, elicit a larger late positive wave in the event-related brain potential (ERP) than emotionally neutral pictures. The purpose of the present study was to examine whether neuroelectric responses elicited by complex pictures discriminate between specific, biologically relevant contents of the visual scene and to determine how early in the picture processing this discrimination occurs. Subjects (n=264) viewed 55 color slides differing in both scene content and emotional significance. No categorical judgments or responses were required. Consistent with previous studies, we found that emotionally arousing pictures, regardless of their content, produce a larger late positive wave than neutral pictures. However, when pictures were further categorized by content, anterior ERP components in a time window between 200 and 600 ms following stimulus onset showed a high selectivity for pictures with erotic content compared to other pictures regardless of their emotional valence (pleasant, neutral, and unpleasant) or emotional arousal. The divergence of ERPs elicited by erotic and non-erotic contents started at 185 ms post-stimulus in the fronto-central midline regions, with a later onset in parietal regions. This rapid, selective, and content-specific processing of erotic materials and its dissociation from other pictures (including emotionally positive pictures) suggests the existence of a specialized neural network for prioritized processing of a distinct category of biologically relevant stimuli with high adaptive and evolutionary significance. PMID:16712815

  17. An estimate of field size distributions for selected sites in the major grain producing countries

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.

    1977-01-01

    The field size distributions for the major grain producing countries of the world were estimated. LANDSAT-1 and 2 images were evaluated for two areas each in the United States, the People's Republic of China, and the USSR. One scene each was evaluated for France, Canada, and India. Grid sampling was done for representative sub-samples of each image, measuring the long and short axes of each field; area was then calculated. Each of the resulting data sets was analyzed by computer for its frequency distribution. Nearly all frequency distributions were highly peaked and skewed (shifted) towards small values, approaching either a Poisson or a log-normal distribution. The data were normalized by a log transformation, creating a Gaussian distribution whose moments are readily interpretable and useful for estimating the total population of fields. The resultant predictors of the field size estimates are discussed.
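
    A short sketch of the normalization step described above: field areas are log-transformed, the Gaussian moments are estimated, and those moments summarize the field-size population. The sample values are invented placeholders, not the LANDSAT-derived measurements.

        import numpy as np

        # Hypothetical field areas (hectares) with the peaked, skewed shape described above.
        rng = np.random.default_rng(42)
        areas = rng.lognormal(mean=2.0, sigma=0.8, size=500)

        log_areas = np.log(areas)                  # log transform -> approximately Gaussian
        mu, sigma = log_areas.mean(), log_areas.std(ddof=1)

        # Moments of the fitted log-normal, useful for summarizing the field population.
        median_area = np.exp(mu)
        mean_area = np.exp(mu + sigma ** 2 / 2)
        print(f"log-mean={mu:.2f}, log-sd={sigma:.2f}, "
              f"median={median_area:.1f} ha, mean={mean_area:.1f} ha")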

  18. Intersubject synchronization of cortical activity during natural vision.

    PubMed

    Hasson, Uri; Nir, Yuval; Levy, Ifat; Fuhrmann, Galit; Malach, Rafael

    2004-03-12

    To what extent do all brains work alike during natural conditions? We explored this question by letting five subjects freely view half an hour of a popular movie while undergoing functional brain imaging. Applying an unbiased analysis in which spatiotemporal activity patterns in one brain were used to "model" activity in another brain, we found a striking level of voxel-by-voxel synchronization between individuals, not only in primary and secondary visual and auditory areas but also in association cortices. The results reveal a surprising tendency of individual brains to "tick collectively" during natural vision. The intersubject synchronization consisted of a widespread cortical activation pattern correlated with emotionally arousing scenes and regionally selective components. The characteristics of these activations were revealed with the use of an open-ended "reverse-correlation" approach, which inverts the conventional analysis by letting the brain signals themselves "pick up" the optimal stimuli for each specialized cortical area.
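
    The voxel-by-voxel synchronization analysis can be sketched, under simplifying assumptions, as a leave-one-out intersubject correlation: each subject's time course at a voxel is correlated with the average time course of the remaining subjects. The array shapes and data below are placeholders, not the movie-viewing dataset.

        import numpy as np

        def intersubject_correlation(data):
            """Leave-one-out intersubject correlation per voxel.

            data : (n_subjects, n_voxels, n_timepoints) response time series
            Returns an (n_subjects, n_voxels) array of correlations between each
            subject and the average of the remaining subjects.
            """
            n_subj = data.shape[0]
            isc = np.zeros(data.shape[:2])
            for s in range(n_subj):
                others = data[np.arange(n_subj) != s].mean(axis=0)
                a = data[s] - data[s].mean(axis=1, keepdims=True)
                b = others - others.mean(axis=1, keepdims=True)
                isc[s] = (a * b).sum(axis=1) / (
                    np.sqrt((a ** 2).sum(axis=1)) * np.sqrt((b ** 2).sum(axis=1)))
            return isc

        # Toy data: 5 subjects, 100 voxels, 200 timepoints; a shared signal is added
        # to the first 50 voxels, which should therefore show high synchronization.
        rng = np.random.default_rng(0)
        shared = rng.standard_normal(200)
        data = rng.standard_normal((5, 100, 200))
        data[:, :50, :] += shared
        isc = intersubject_correlation(data)
        print(isc[:, :50].mean(), isc[:, 50:].mean())   # high vs. near zero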

  19. High-dynamic-range scene compression in humans

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2006-02-01

    Single-pixel dynamic-range compression alters a particular input value to a unique output value: a look-up table. It is used in chemical and most digital photographic systems, which have S-shaped transforms to render high-range scenes onto low-range media. Post-receptor neural processing is spatial, as shown by the physiological experiments of Dowling, Barlow, Kuffler, and Hubel & Wiesel. Human vision does not render a particular receptor-quanta catch as a unique response. Instead, because of spatial processing, the response to a particular quanta catch can be any color. Visual response is scene dependent. Stockham proposed an approach to model human range compression using low-spatial-frequency filters. Campbell, Ginsberg, Wilson, Watson, Daly and many others have developed spatial-frequency channel models. This paper describes experiments measuring the properties of desirable spatial-frequency filters for a variety of scenes. Given the radiances of each pixel in the scene and the observed appearances of objects in the image, one can calculate the visual mask for that individual image. Here, the visual mask is the spatial pattern of changes made by the visual system in processing the input image. It is the spatial signature of human vision. Low-dynamic-range images with many white areas need no spatial filtering. High-dynamic-range images with many blacks, or deep shadows, require strong spatial filtering. Sun on the right and shade on the left requires directional filters. These experiments show that variable, scene-dependent filters are necessary to mimic human vision. Although spatial-frequency filters can model these scene-dependent appearances, the problem still remains that an analysis of the scene is needed to calculate the scene-dependent strengths of each of the filters for each frequency.

  20. Discriminating between camouflaged targets by their time of detection by a human-based observer assessment method

    NASA Astrophysics Data System (ADS)

    Selj, G. K.; Søderblom, M.

    2015-10-01

    Detection of a camouflaged object in natural scenery requires the target to be distinguishable from its local background. The development of any new camouflage pattern therefore has to rely on a well-founded test methodology - correlated with the final purpose of the pattern - as well as an evaluation procedure containing the optimal criteria for (i) discriminating between the targets and eventually (ii) producing a final ranking of the targets. In this study we present results from a recent camouflage assessment trial in which human observers were used in a search-by-photo methodology to assess generic test camouflage patterns. We conducted a study to investigate possible improvements in camouflage patterns for battle dress uniforms. The aim was a comparative study of potential, generic patterns intended for use in arid areas (sparsely vegetated, semi-desert). We developed a test methodology intended to be simple, reliable, and realistic with respect to the operational benefit of camouflage. We therefore chose to conduct a human-based observer trial founded on imagery of realistic targets in natural backgrounds. Inspired by a recent and similar trial in the UK, we developed new, purpose-built software to conduct the observer trial. Our preferred assessment methodology - the observer trial - was based on target recordings in 12 different but operationally relevant scenes, collected in a dry and sparsely vegetated area (Rhodes). The scenes were chosen with the intention of spanning as broadly as possible. The targets were human-shaped mannequins situated identically in each of the scenes to allow a relative comparison of camouflage effectiveness in each scene. Tests of significance among the targets' performances were carried out with non-parametric tests, as the corresponding detection-time distributions were overall found to be difficult to parameterize. From the trial, containing 12 different scenes from sparsely vegetated areas, we collected detection-time distributions for 6 generic targets through visual search by 148 observers. We found that the different targets performed differently, as shown by their corresponding detection-time distributions, within a single scene. Furthermore, we obtained an overall ranking over all 12 scenes by computing a weighted sum over scenes, intended to retain as much of the vital information on the targets' signature effectiveness as possible. Our results show that it was possible to measure the targets' performance relative to one another even when summing over all scenes. We also compared the ranking based on our preferred criterion (detection time) with a secondary one (probability of detection) to assess the sensitivity of the final ranking to the test set-up and evaluation criterion. We found our observer-based approach to be well suited with regard to its ability to discriminate between similar targets and to assign numeric values to the observed differences in performance. We believe our approach will be a useful tool whenever different aspects of camouflage are to be evaluated and understood further.
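
    The non-parametric significance testing mentioned above can be sketched in Python as follows. Detection times are synthetic, and the choice of Kruskal-Wallis plus pairwise Mann-Whitney tests is an assumption about which non-parametric tests were used; the sketch only illustrates how skewed detection-time distributions can be compared without parameterizing them.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        # Hypothetical detection times (s) for three targets in one scene, 148 observers.
        times = {
            "pattern_A": rng.gamma(shape=2.0, scale=8.0, size=148),
            "pattern_B": rng.gamma(shape=2.0, scale=10.0, size=148),
            "pattern_C": rng.gamma(shape=2.0, scale=14.0, size=148),
        }

        # Omnibus test for any difference among targets within this scene.
        H, p = stats.kruskal(*times.values())
        print(f"Kruskal-Wallis H={H:.2f}, p={p:.4f}")

        # Pairwise comparisons as one way of discriminating between similar targets.
        names = list(times)
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                u, pp = stats.mannwhitneyu(times[names[i]], times[names[j]])
                print(f"{names[i]} vs {names[j]}: p={pp:.4f}")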

  1. Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones.

    PubMed

    Sohn, Bong-Soo

    2017-03-11

    This paper describes a new method to automatically generate digital bas-reliefs with depth-of-field effects from general scenes. Most previous methods for bas-relief generation take input in the form of 3D models. However, obtaining 3D models of real scenes or objects is often difficult, inaccurate, and time-consuming. From this motivation, we developed a method that takes as input a set of photographs that can be quickly and ubiquitously captured by ordinary smartphone cameras. A depth map is computed from the input photographs. The value range of the depth map is compressed and used as a base map representing the overall shape of the bas-relief. However, the resulting base map contains little information on details of the scene. Thus, we construct a detail map using pixel values of the input image to express the details. The base and detail maps are blended to generate a new depth map that reflects both overall depth and scene detail information. This map is selectively blurred to simulate the depth-of-field effects. The final depth map is converted to a bas-relief surface mesh. Experimental results show that our method generates a realistic bas-relief surface of general scenes with no expensive manual processing.

  2. Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones

    PubMed Central

    Sohn, Bong-Soo

    2017-01-01

    This paper describes a new method to automatically generate digital bas-reliefs with depth-of-field effects from general scenes. Most previous methods for bas-relief generation take input in the form of 3D models. However, obtaining 3D models of real scenes or objects is often difficult, inaccurate, and time-consuming. From this motivation, we developed a method that takes as input a set of photographs that can be quickly and ubiquitously captured by ordinary smartphone cameras. A depth map is computed from the input photographs. The value range of the depth map is compressed and used as a base map representing the overall shape of the bas-relief. However, the resulting base map contains little information on details of the scene. Thus, we construct a detail map using pixel values of the input image to express the details. The base and detail maps are blended to generate a new depth map that reflects both overall depth and scene detail information. This map is selectively blurred to simulate the depth-of-field effects. The final depth map is converted to a bas-relief surface mesh. Experimental results show that our method generates a realistic bas-relief surface of general scenes with no expensive manual processing. PMID:28287487
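
    The processing pipeline described in this abstract (compress the depth range into a base map, add an image-derived detail map, then blur selectively for depth of field) can be sketched as follows. The depth map is assumed to be given, and all weights, exponents, and blur widths are illustrative assumptions rather than the authors' settings.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def bas_relief_height(depth, gray, focus_depth, alpha=0.8, dof_sigma=3.0):
            """Toy version of the described pipeline; parameter values are assumptions.

            depth: depth map from the multi-view reconstruction (float array).
            gray:  grayscale image used to build the detail map.
            """
            # 1. Compress the depth range to form the base map (overall relief shape).
            d = (depth - depth.min()) / (np.ptp(depth) + 1e-9)
            base = 1.0 - d**0.5                       # nonlinear range compression (assumed)

            # 2. Detail map from image intensities (fine structure missing from the base).
            detail = gray - gaussian_filter(gray, 2.0)
            detail = (detail - detail.min()) / (np.ptp(detail) + 1e-9)

            # 3. Blend base and detail into a new height map.
            height = alpha * base + (1.0 - alpha) * detail

            # 4. Depth of field: blur more where depth is far from the chosen focus plane.
            defocus = np.abs(d - focus_depth)
            blurred = gaussian_filter(height, dof_sigma)
            return (1.0 - defocus) * height + defocus * blurred

        rng = np.random.default_rng(4)
        depth = rng.random((64, 64)).cumsum(axis=0) / 64.0
        gray = rng.random((64, 64))
        relief = bas_relief_height(depth, gray, focus_depth=0.3)
        print(relief.shape, float(relief.min()), float(relief.max()))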

  3. Condom Use and High-Risk Sexual Acts in Adult Films: A Comparison of Heterosexual and Homosexual Films

    PubMed Central

    Elliott, Marc N.; Kerndt, Peter R.; Schuster, Mark A.; Brook, Robert H.; Gelberg, Lillian

    2009-01-01

    Objectives. We compared the prevalence of condom use during a variety of sexual acts portrayed in adult films produced for heterosexual and homosexual audiences to assess compliance with state Occupational Health and Safety Administration regulations. Methods. We analyzed 50 heterosexual and 50 male homosexual films released between August 1, 2005, and July 31, 2006, randomly selected from the distributor of 85% of the heterosexual adult films released each year in the United States. Results. Penile–vaginal intercourse was protected with condoms in 3% of heterosexual scenes. Penile–anal intercourse, common in both heterosexual (42%) and homosexual (80%) scenes, was much less likely to be protected with condoms in heterosexual than in homosexual scenes (10% vs 78%; P < .001). No penile–oral acts were protected with condoms in any of the selected films. Conclusions. Heterosexual films were much less likely than were homosexual films to portray condom use, raising concerns about transmission of HIV and other sexually transmitted diseases, especially among performers in heterosexual adult films. In addition, the adult film industry, especially the heterosexual industry, is not adhering to state occupational safety regulations. PMID:19218178

  4. Perceptual salience affects the contents of working memory during free-recollection of objects from natural scenes

    PubMed Central

    Pedale, Tiziana; Santangelo, Valerio

    2015-01-01

    One of the most important issues in the study of cognition is to understand which factors determine the internal representation of the external world. Previous literature has begun to highlight the impact of low-level sensory features (indexed by saliency maps) in driving attention selection, hence increasing the probability that objects presented in complex and natural scenes are successfully encoded into working memory (WM) and then correctly remembered. Here we asked whether the probability of retrieving high-saliency objects modulates the overall contents of WM by decreasing the probability of retrieving other, lower-saliency objects. We presented pictures of natural scenes for 4 s. After a retention period of 8 s, we asked participants to verbally report as many objects/details as possible of the previous scenes. We then computed how many times the objects located at either the peak of maximal or minimal saliency in the scene (as indexed by a saliency map; Itti et al., 1998) were recollected by participants. Results showed that maximal-saliency objects were recollected more often and earlier in the stream of successfully reported items than minimal-saliency objects, indicating, respectively, that bottom-up sensory salience increases recollection probability and facilitates access to the memory representation at retrieval. Moreover, recollection of the maximal- (but not the minimal-) saliency objects predicted the overall amount of successfully recollected objects: the higher the probability of having successfully reported the most salient object in the scene, the lower the number of recollected objects. These findings highlight that bottom-up sensory saliency modulates the current contents of WM during recollection of objects from natural scenes, most likely by reducing the resources available to encode and then retrieve other (lower-saliency) objects. PMID:25741266
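
    The saliency peaks used to pick the probed objects can be approximated with a simple center-surround contrast map, as in the Python sketch below. This is a crude stand-in for the full Itti et al. (1998) model (which combines color, intensity, and orientation channels); the filter scales and the synthetic image are assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def toy_saliency(gray, center_sigma=2.0, surround_sigma=16.0):
            """Center-surround (difference-of-Gaussians) contrast as a saliency proxy."""
            center = gaussian_filter(gray, center_sigma)
            surround = gaussian_filter(gray, surround_sigma)
            return np.abs(center - surround)

        rng = np.random.default_rng(5)
        scene = rng.random((240, 320))
        sal = toy_saliency(scene)

        # Locations of maximal and minimal salience, analogous to the two probed peaks.
        max_loc = np.unravel_index(np.argmax(sal), sal.shape)
        min_loc = np.unravel_index(np.argmin(sal), sal.shape)
        print("max-saliency peak at", max_loc, "min-saliency peak at", min_loc)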

  5. Development and application of operational techniques for the inventory and monitoring of resources and uses for the Texas coastal zone

    NASA Technical Reports Server (NTRS)

    Harwood, P. (Principal Investigator); Malin, P.; Finley, R.; Mcculloch, S.; Murphy, D.; Hupp, B.; Schell, J. A.

    1977-01-01

    The author has identified the following significant results. Four LANDSAT scenes were analyzed for the Harbor Island area test sites to produce land cover and land use maps using both image interpretation and computer-assisted techniques. When evaluated against aerial photography, the mean accuracy for three scenes was 84% for the image interpretation product and 62% for the computer-assisted classification maps. Analysis of the fourth scene was not completed using the image interpretation technique because of a poor-quality false color composite, but was completed using the computer-assisted technique. Preliminary results indicate that these LANDSAT products can be applied to a variety of planning and management activities in the Texas coastal zone.

  6. Research on hyperspectral dynamic scene and image sequence simulation

    NASA Astrophysics Data System (ADS)

    Sun, Dandan; Liu, Fang; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a simulation method for hyperspectral dynamic scenes and image sequences, intended for hyperspectral equipment evaluation and target detection algorithm development. Because of its high spectral resolution, strong band continuity, anti-interference properties, and other advantages, hyperspectral imaging technology has developed rapidly in recent years and is widely used in areas such as optoelectronic target detection, military defense, and remote sensing systems. Digital imaging simulation, as a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyperspectral imaging equipment at lower development cost and with a shorter development period. Meanwhile, visual simulation can produce large amounts of raw image data under various conditions for hyperspectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a digital scene generation method. By building multiple sensor models for different bands and bandwidths, hyperspectral scenes in the visible, MWIR, and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm, and 0.1 μm, were simulated. The final dynamic scenes are both real-time and realistic, with frame rates up to 100 Hz. By saving all of the scene grayscale data from the same viewpoint, an image sequence is obtained. The analysis results show that, in both the infrared and visible bands, the grayscale variations of the simulated hyperspectral images are consistent with the theoretical analysis.
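
    One ingredient of such a simulator, building sensor channels of different centers and bandwidths from a high-resolution radiance spectrum, is sketched below in Python. The Gaussian channel response, the synthetic spectrum, and the channel spacing are all assumptions; the paper does not specify its sensor response model.

        import numpy as np

        def band_response(wavelength_um, center_um, fwhm_um):
            """Assumed Gaussian spectral response for one sensor channel."""
            sigma = fwhm_um / 2.3548
            return np.exp(-0.5 * ((wavelength_um - center_um) / sigma) ** 2)

        def band_average(wavelength_um, radiance, center_um, fwhm_um):
            """Band-averaged radiance for a channel (uniform sampling, so a
            weighted mean stands in for the integral)."""
            r = band_response(wavelength_um, center_um, fwhm_um)
            return np.sum(radiance * r) / np.sum(r)

        # Hypothetical scene radiance spectrum sampled every 1 nm from 0.4 to 2.5 um.
        wl = np.arange(0.4, 2.5, 0.001)
        radiance = 1.0 + 0.3 * np.sin(8.0 * wl)

        # Channel sets at the three spectral resolutions mentioned in the abstract.
        for fwhm in (0.01, 0.05, 0.1):
            centers = np.arange(0.45, 2.45, fwhm)
            samples = [band_average(wl, radiance, c, fwhm) for c in centers]
            print(f"resolution {fwhm} um -> {len(samples)} simulated channels")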

  7. Functional anatomy of temporal organisation and domain-specificity of episodic memory retrieval.

    PubMed

    Kwok, Sze Chai; Shallice, Tim; Macaluso, Emiliano

    2012-10-01

    Episodic memory provides information about the "when" of events as well as "what" and "where" they happened. Using functional imaging, we investigated the domain specificity of retrieval-related processes following encoding of complex, naturalistic events. Subjects watched a 42-min TV episode, and 24h later, made discriminative choices of scenes from the clip during fMRI. Subjects were presented with two scenes and required to either choose the scene that happened earlier in the film (Temporal), or the scene with a correct spatial arrangement (Spatial), or the scene that had been shown (Object). We identified a retrieval network comprising the precuneus, lateral and dorsal parietal cortex, middle frontal and medial temporal areas. The precuneus and angular gyrus are associated with temporal retrieval, with precuneal activity correlating negatively with temporal distance between two happenings at encoding. A dorsal fronto-parietal network engages during spatial retrieval, while antero-medial temporal regions activate during object-related retrieval. We propose that access to episodic memory traces involves different processes depending on task requirements. These include memory-searching within an organised knowledge structure in the precuneus (Temporal task), online maintenance of spatial information in dorsal fronto-parietal cortices (Spatial task) and combining scene-related spatial and non-spatial information in the hippocampus (Object task). Our findings support the proposal of process-specific dissociations of retrieval. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Integration of an open interface PC scene generator using COTS DVI converter hardware

    NASA Astrophysics Data System (ADS)

    Nordland, Todd; Lyles, Patrick; Schultz, Bret

    2006-05-01

    Commercial-Off-The-Shelf (COTS) personal computer (PC) hardware is increasingly capable of computing high dynamic range (HDR) scenes for military sensor testing at high frame rates. New electro-optical and infrared (EO/IR) scene projectors feature electrical interfaces that can accept the DVI output of these PC systems. However, military Hardware-in-the-loop (HWIL) facilities such as those at the US Army Aviation and Missile Research Development and Engineering Center (AMRDEC) utilize a sizeable inventory of existing projection systems that were designed to use the Silicon Graphics Incorporated (SGI) digital video port (DVP, also known as DVP2 or DD02) interface. To mate the new DVI-based scene generation systems to these legacy projection systems, CG2 Inc., a Quantum3D Company (CG2), has developed a DVI-to-DVP converter called Delta DVP. This device takes progressive scan DVI input, converts it to digital parallel data, and combines and routes color components to derive a 16-bit wide luminance channel replicated on a DVP output interface. The HWIL Functional Area of AMRDEC has developed a suite of modular software to perform deterministic real-time, wave band-specific rendering of sensor scenes, leveraging the features of commodity graphics hardware and open source software. Together, these technologies enable sensor simulation and test facilities to integrate scene generation and projection components with diverse pedigrees.

  9. Functional anatomy of temporal organisation and domain-specificity of episodic memory retrieval

    PubMed Central

    Kwok, Sze Chai; Shallice, Tim; Macaluso, Emiliano

    2013-01-01

    Episodic memory provides information about the “when” of events as well as “what” and “where” they happened. Using functional imaging, we investigated the domain specificity of retrieval-related processes following encoding of complex, naturalistic events. Subjects watched a 42-min TV episode, and 24 h later, made discriminative choices of scenes from the clip during fMRI. Subjects were presented with two scenes and required to either choose the scene that happened earlier in the film (Temporal), or the scene with a correct spatial arrangement (Spatial), or the scene that had been shown (Object). We identified a retrieval network comprising the precuneus, lateral and dorsal parietal cortex, middle frontal and medial temporal areas. The precuneus and angular gyrus are associated with temporal retrieval, with precuneal activity correlating negatively with temporal distance between two happenings at encoding. A dorsal fronto-parietal network engages during spatial retrieval, while antero-medial temporal regions activate during object-related retrieval. We propose that access to episodic memory traces involves different processes depending on task requirements. These include memory-searching within an organised knowledge structure in the precuneus (Temporal task), online maintenance of spatial information in dorsal fronto-parietal cortices (Spatial task) and combining scene-related spatial and non-spatial information in the hippocampus (Object task). Our findings support the proposal of process-specific dissociations of retrieval. PMID:22877840

  10. Suppression of vegetation in LANDSAT ETM+ remote sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Porwal, Alok; Holden, Eun-Jung; Dentith, Michael

    2010-05-01

    Vegetation cover is an impediment to the interpretation of multispectral remote sensing images for geological applications, especially in densely vegetated terrains. In order to enhance the underlying geological information in such terrains, it is desirable to suppress the reflectance component of vegetation. One form of spectral unmixing that has been successfully used for vegetation reflectance suppression in multispectral images is called "forced invariance". It is based on segregating components of the reflectance spectrum that are invariant with respect to a specific spectral index such as the NDVI. The forced invariance method uses algorithms such as software defoliation. However, the outputs of software defoliation are single-channel data, which are not amenable to geological interpretation. Crippen and Blom (2001) proposed a new forced invariance algorithm that utilizes band statistics rather than band ratios. The authors demonstrated the effectiveness of their algorithm on a LANDSAT TM scene from Nevada, USA, especially in open canopy areas in mixed and semi-arid terrains. In this presentation, we report the results of our experimentation with this algorithm on a densely to sparsely vegetated Landsat ETM+ scene. We selected a scene (Path 119, Row 39) acquired on 18th July, 2004. Two study areas around the city of Hangzhou, eastern China, were tested. One covers uninhabited hilly terrain characterized by low rugged topography, parts of which are densely vegetated; the other covers both inhabited urban areas and uninhabited, densely vegetated hilly terrain. Crippen and Blom's algorithm is implemented in the following sequential steps: (1) dark pixel correction; (2) vegetation index calculation; (3) estimation of the statistical relationship between vegetation index and digital number (DN) values for each band; (4) calculation of a smooth best-fit curve for the above relationships; and finally, (5) selection of a target average DN value and scaling of all pixels at each vegetation index level by an amount that shifts the curve to the target DN. The main drawback of their algorithm is severe distortion of the DN values of non-vegetated areas; a suggested solution is masking outliers such as cloud, water, etc. We therefore extend this algorithm by masking non-vegetated areas. Our algorithm comprises the following three steps: (1) masking of barren or sparsely vegetated areas using a threshold based on a vegetation index calculated after atmospheric correction (dark pixel correction and ATCOR were compared), in order to preserve their original spectral information through the subsequent processing; (2) applying Crippen and Blom's forced invariance algorithm to suppress the spectral response of vegetation only in vegetated areas; and (3) combining the processed vegetated areas with the masked barren or sparsely vegetated areas, followed by histogram equalization to eliminate the differences in color scale between these two types of areas and enhance the integrated image. The output images of both study areas showed significant improvement over the original images in terms of suppression of vegetation reflectance and enhancement of the underlying geological information. The processed images show clear banding, probably associated with lithological variations in the underlying rock formations. The colors of non-vegetated pixels are distorted in the unmasked results, whereas the same pixels in the masked results show regions of higher contrast. We conclude that the algorithm offers an effective way to enhance geological information in LANDSAT TM/ETM+ images of terrains with significant vegetation cover. It is also applicable to other multispectral satellite data that have bands in similar wavelength regions. In addition, an application of this method to hyperspectral data may be possible as long as the data can provide the vegetation band ratios.
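
    The forced-invariance procedure summarized above (dark-pixel-corrected DN values, a vegetation index, a per-band fit of DN versus index, and a rescaling that flattens that fit at a target DN), together with the vegetation-mask extension, can be sketched in Python as below. The binning-plus-interpolation fit, the NDVI threshold, and the synthetic data are assumptions, not the published implementation.

        import numpy as np

        def forced_invariance(band_dn, ndvi, target_dn=None, n_bins=50):
            """Scale pixel DNs so the band's average DN becomes flat versus NDVI.

            band_dn : dark-pixel-corrected digital numbers for one band (1-D array).
            ndvi    : vegetation index for the same pixels.
            Binning plus linear interpolation stands in for the smooth best-fit curve.
            """
            edges = np.linspace(ndvi.min(), ndvi.max(), n_bins + 1)
            idx = np.clip(np.digitize(ndvi, edges) - 1, 0, n_bins - 1)
            curve = np.array([band_dn[idx == b].mean() if np.any(idx == b) else np.nan
                              for b in range(n_bins)])
            centers = 0.5 * (edges[:-1] + edges[1:])
            good = ~np.isnan(curve)
            fit = np.interp(ndvi, centers[good], curve[good])   # expected DN at this NDVI
            if target_dn is None:
                target_dn = band_dn.mean()
            return band_dn * (target_dn / fit)                  # shift the curve to the target DN

        rng = np.random.default_rng(6)
        ndvi = rng.uniform(-0.1, 0.9, 20000)
        band = 80.0 - 50.0 * ndvi + rng.normal(0.0, 5.0, ndvi.size)   # vegetation darkens this band

        # Mask barren or sparsely vegetated pixels first (the extension described above),
        # process only the vegetated pixels, then recombine.
        veg = ndvi > 0.2                                              # threshold is an assumption
        out = band.copy()
        out[veg] = forced_invariance(band[veg], ndvi[veg])
        print("correlation with NDVI before:", round(float(np.corrcoef(band[veg], ndvi[veg])[0, 1]), 3),
              "after:", round(float(np.corrcoef(out[veg], ndvi[veg])[0, 1]), 3))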

  11. A multiple-feature and multiple-kernel scene segmentation algorithm for humanoid robot.

    PubMed

    Liu, Zhi; Xu, Shuqiong; Zhang, Yun; Chen, Chun Lung Philip

    2014-11-01

    This technical correspondence presents a multiple-feature and multiple-kernel support vector machine (MFMK-SVM) methodology to achieve more reliable and robust segmentation performance for humanoid robots. The pixel-wise intensity, gradient, and C1 SMF features are extracted via the local homogeneity model and Gabor filters and used as inputs to the MFMK-SVM model. Providing multiple features per sample allows easier implementation and more efficient computation of the MFMK-SVM model. A new clustering method, called the feature validity-interval type-2 fuzzy C-means (FV-IT2FCM) clustering algorithm, is proposed by integrating a type-2 fuzzy criterion into the iterative clustering optimization process to improve the robustness and reliability of the clustering results. Furthermore, the clustering validity is employed to select the training samples for learning the MFMK-SVM model. The MFMK-SVM scene segmentation method is able to take full advantage of the multiple features of the scene image and the power of multiple kernels. Experiments on the BSDS dataset and real natural scene images demonstrate the superior performance of our proposed method.
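
    The multiple-feature, multiple-kernel idea can be illustrated with a small Python sketch: one RBF kernel is built per feature set and the kernels are summed into a composite kernel for an SVM, using scikit-learn's precomputed-kernel interface. This is not the paper's MFMK-SVM training procedure or its FV-IT2FCM sample selection; the feature sets, kernel widths, and weights are assumptions on synthetic data.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.metrics.pairwise import rbf_kernel

        rng = np.random.default_rng(7)

        # Hypothetical per-pixel feature sets: intensity-, gradient-, and texture-like features.
        n = 400
        intensity = rng.standard_normal((n, 1))
        gradient = rng.standard_normal((n, 2))
        texture = rng.standard_normal((n, 8))
        labels = (intensity[:, 0] + gradient[:, 0] + 0.5 * texture[:, 0] > 0).astype(int)

        def composite_kernel(feats_a, feats_b, gammas, weights):
            """Weighted sum of one RBF kernel per feature set (weights are assumptions)."""
            return sum(w * rbf_kernel(a, b, gamma=g)
                       for (a, b), g, w in zip(zip(feats_a, feats_b), gammas, weights))

        train = [intensity[:300], gradient[:300], texture[:300]]
        test = [intensity[300:], gradient[300:], texture[300:]]
        gammas, weights = [1.0, 0.5, 0.1], [1.0, 1.0, 1.0]

        K_train = composite_kernel(train, train, gammas, weights)
        K_test = composite_kernel(test, train, gammas, weights)

        clf = SVC(kernel="precomputed").fit(K_train, labels[:300])
        print("held-out accuracy:", clf.score(K_test, labels[300:]))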

  12. Landsat-8: Status and on-orbit performance

    USGS Publications Warehouse

    Markham, Brian L; Barsi, Julia A.; Morfitt, Ron; Choate, Michael J.; Montanaro, Matthew; Arvidson, Terry; Irons, James R.

    2015-01-01

    Landsat 8 and its two Earth imaging sensors, the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS), have been operating on-orbit for 2 ½ years. Landsat 8 has been acquiring substantially more images than initially planned, typically around 700 scenes per day versus a 400-scenes-per-day requirement, acquiring nearly all land scenes. Both the TIRS and OLI instruments exceed their SNR requirements by at least a factor of 2 and are very stable, degrading by at most 1% in responsivity over the mission to date. Both instruments have 100% operable detectors covering their cross-track field of view, using the redundant detectors as necessary. The geometric performance is excellent, meeting or exceeding all performance requirements. One anomaly occurred with the TIRS Scene Select Mirror (SSM) encoder that affected its operation, though operation was fully recovered by switching to the side B electronics. The one remaining challenge is the TIRS stray light, which affects the flat fielding and absolute calibration of the TIRS data. The error introduced is smaller in TIRS band 10; band 11 should not currently be used in science applications.

  13. A benefit of context reinstatement to recognition memory in aging: the role of familiarity processes.

    PubMed

    Ward, Emma V; Maylor, Elizabeth A; Poirier, Marie; Korko, Malgorzata; Ruud, Jens C M

    2017-11-01

    Reinstatement of encoding context facilitates memory for targets in young and older individuals (e.g., a word studied on a particular background scene is more likely to be remembered later if it is presented on the same rather than a different scene or no scene), yet older adults are typically inferior at recalling and recognizing target-context pairings. This study examined the mechanisms of the context effect in normal aging. Age differences in word recognition by context condition (original, switched, none, new), and the ability to explicitly remember target-context pairings were investigated using word-scene pairs (Experiment 1) and word-word pairs (Experiment 2). Both age groups benefited from context reinstatement in item recognition, although older adults were significantly worse than young adults at identifying original pairings and at discriminating between original and switched pairings. In Experiment 3, participants were given a three-alternative forced-choice recognition task that allowed older individuals to draw upon intact familiarity processes in selecting original pairings. Performance was age equivalent. Findings suggest that heightened familiarity associated with context reinstatement is useful for boosting recognition memory in aging.

  14. The use of dental putty in the assessment of hard surfaces within paved urban areas that may leave defined or patterned marks on bodies.

    PubMed

    Johnson, Oliver Ross; Lyall, Matt; Johnson, Christopher Paul

    2015-04-01

    The identification of a patterned skin or scalp mark at autopsy can provide key forensic evidence in identifying an injury that may have been left by an assailant's footwear. It is also important to consider whether such a mark could alternatively have been left by the deceased coming into forceful contact with a hard surface at the scene of an incident, for example by falling. This study was designed to demonstrate how variable surfaces are within paved urban areas, including those which might leave marks resembling footwear patterns, and to evaluate whether dental putty impression lifting is a practical and effective adjunct to photography in assessing patterned surfaces. Eighteen 'scenes' of approximately 50 m² were assessed for different hard surfaces by photography and by the production of dental putty impression lifts. The number of hard surfaces varied between 4 and 12 per scene, with 90% (122/135) of all hard surfaces deemed likely to leave distinct marking on skin with forceful contact and 46% (62/135) a defined/regular mark potentially similar to a footwear injury (mean = 3.4 per scene). Dental putty proved to be an excellent tool in characterising hard surfaces, producing firm but slightly flexible lifts that can be used in combination with a commercially available inkless footwear impression kit to generate transparencies that help facilitate detailed comparison work. Whenever a potential footwear mark is identified at autopsy, a systematic examination of all hard surfaces at the scene is mandatory, and this process will be significantly strengthened by the use of dental putty. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  15. Spatial detection of tv channel logos as outliers from the content

    NASA Astrophysics Data System (ADS)

    Ekin, Ahmet; Braspenning, Ralph

    2006-01-01

    This paper proposes a purely image-based TV channel logo detection algorithm that can detect logos independently of their motion and transparency features. The proposed algorithm can robustly detect any type of logo, such as transparent and animated logos, without requiring any temporal constraints, whereas known methods have to wait for the occurrence of large motion in the scene and assume stationary logos. The algorithm models logo pixels as outliers from the actual scene content, which is represented by multiple 3-D histograms in the YCbCr space. We use four scene histograms, one for each of the four corners, because the content characteristics change from one image corner to another. A further novelty of the proposed algorithm is that we define the image corners and the areas where we compute the scene histograms by a cinematic technique called the Golden Section Rule, which is used by professionals. The robustness of the proposed algorithm is demonstrated on a dataset of representative TV content.
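
    The core outlier test can be sketched in Python: build a 3-D YCbCr histogram over a corner region and flag pixels whose color bin has very low probability as logo candidates. The bin count, probability threshold, corner geometry, and synthetic data are assumptions, and the Golden Section placement of the histogram regions is not reproduced here.

        import numpy as np

        def logo_outliers(ycbcr_corner, bins=8, prob_thresh=0.02):
            """Flag pixels whose YCbCr color is rare within the corner's 3-D histogram."""
            h, w, _ = ycbcr_corner.shape
            q = (ycbcr_corner // (256 // bins)).reshape(-1, 3).astype(int)   # quantize channels
            hist = np.zeros((bins, bins, bins))
            np.add.at(hist, (q[:, 0], q[:, 1], q[:, 2]), 1)
            prob = hist / hist.sum()
            pixel_prob = prob[q[:, 0], q[:, 1], q[:, 2]].reshape(h, w)
            return pixel_prob < prob_thresh    # True = outlier from the scene content

        # Synthetic corner region: smooth scene content plus a small overlaid "logo".
        rng = np.random.default_rng(8)
        corner = rng.normal(100, 8, (120, 160, 3)).clip(0, 255).astype(np.uint8)
        corner[10:18, 10:40] = (235, 30, 220)            # hypothetical logo colors
        mask = logo_outliers(corner)
        print("flagged pixels:", int(mask.sum()))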

  16. Image variance and spatial structure in remotely sensed scenes. [South Dakota, California, Missouri, Kentucky, Louisiana, Tennessee, District of Columbia, and Oregon

    NASA Technical Reports Server (NTRS)

    Woodcock, C. E.; Strahler, A. H.

    1984-01-01

    Digital images derived by scanning air photos and by acquiring aircraft and spacecraft scanner data were studied. Results show that spatial structure in scenes can be measured and logically related to texture and image variance. The imagery data used were of a South Dakota forest; a housing development in Canoga Park, California; an agricultural area in Mississippi, Louisiana, Kentucky, and Tennessee; the city of Washington, D.C.; and the Klamath National Forest. Local variance, measured as the average standard deviation of brightness values within a three-by-three moving window, reaches a peak at a resolution cell size about two-thirds to three-fourths the size of the objects within the scene. If objects are smaller than the resolution cell size of the image, this peak does not occur and local variance simply decreases with increasing resolution cell size as spatial averaging occurs. Variograms can also reveal the size, shape, and density of objects in the scene.
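
    The local-variance measure described above is easy to reproduce: compute the standard deviation of brightness in a three-by-three moving window, average it over the image, and repeat after aggregating the image to coarser resolution cells. In the Python sketch below, block averaging stands in for resampling and the synthetic scene is an assumption; the sketch should show the qualitative rise and fall of local variance around the object size.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def mean_local_std(image, size=3):
            """Average standard deviation of brightness in a size x size moving window."""
            mean = uniform_filter(image, size)
            mean_sq = uniform_filter(image**2, size)
            return float(np.sqrt(np.maximum(mean_sq - mean**2, 0.0)).mean())

        def block_average(image, factor):
            """Aggregate to a coarser resolution cell by block averaging (assumed resampling)."""
            h = (image.shape[0] // factor) * factor
            w = (image.shape[1] // factor) * factor
            return image[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

        # Synthetic scene of bright "objects" about 16 pixels across on a darker background.
        rng = np.random.default_rng(9)
        scene = rng.normal(50.0, 2.0, (512, 512))
        for _ in range(200):
            r, c = rng.integers(0, 496, 2)
            scene[r:r + 16, c:c + 16] += 40.0

        for factor in (1, 2, 4, 8, 12, 16, 32):
            print(f"cell size {factor:2d} px: mean local std = {mean_local_std(block_average(scene, factor)):.2f}")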

  17. Efficient structure from motion on large scenes using UAV with position and pose information

    NASA Astrophysics Data System (ADS)

    Teng, Xichao; Yu, Qifeng; Shang, Yang; Luo, Jing; Wang, Gang

    2018-04-01

    In this paper, we exploit prior information from global positioning systems and inertial measurement units to speed up the process of large scene reconstruction from images acquired by unmanned aerial vehicles. We utilize weak pose information and intrinsic parameters to obtain the projection matrix for each view. Because topographic relief can usually be ignored relative to the flight altitude of unmanned aerial vehicles, we assume that the scene is flat and use a weak perspective camera model to obtain projective transformations between two views. Furthermore, we propose an overlap criterion and select potentially matching view pairs among the projectively transformed views. A robust global structure from motion method is then used for image-based reconstruction. Our real-world experiments show that the approach is accurate, scalable, and computationally efficient. Moreover, the projective transformations between views can also be used to eliminate false matches.
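
    The overlap criterion can be sketched as follows: project each image's corners to the assumed-flat ground plane with a pose-derived homography, then compute the overlap of the resulting footprints to decide whether a view pair is worth matching. The homographies below are given directly as placeholders (in practice they would come from the GPS/IMU pose and intrinsics), and bounding-box IoU is used as a simplification of a polygon overlap test.

        import numpy as np

        def ground_footprint(H, width, height):
            """Project image corners to the (assumed flat) ground plane via homography H."""
            corners = np.array([[0, 0, 1], [width, 0, 1],
                                [width, height, 1], [0, height, 1]], dtype=float)
            g = (H @ corners.T).T
            return g[:, :2] / g[:, 2:3]

        def overlap_ratio(fp_a, fp_b):
            """Intersection-over-union of the axis-aligned bounding boxes of two footprints."""
            ax0, ay0 = fp_a.min(axis=0); ax1, ay1 = fp_a.max(axis=0)
            bx0, by0 = fp_b.min(axis=0); bx1, by1 = fp_b.max(axis=0)
            iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
            ih = max(0.0, min(ay1, by1) - max(ay0, by0))
            inter = iw * ih
            union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
            return inter / union if union > 0 else 0.0

        # Hypothetical ground homographies for two views (stand-ins for pose-derived ones).
        H1 = np.array([[0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 1.0]])
        H2 = np.array([[0.1, 0.0, 30.0], [0.0, 0.1, 5.0], [0.0, 0.0, 1.0]])

        fp1 = ground_footprint(H1, 4000, 3000)
        fp2 = ground_footprint(H2, 4000, 3000)
        print(f"footprint IoU = {overlap_ratio(fp1, fp2):.2f}; keep the pair if above a chosen threshold")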

  18. Attention to and Memory for Audio and Video Information in Television Scenes.

    ERIC Educational Resources Information Center

    Basil, Michael D.

    A study investigated whether selective attention to a particular television modality resulted in different levels of attention to and memory for each modality. Two independent variables manipulated selective attention. These were the semantic channel (audio or video) and viewers' instructed focus (audio or video). These variables were fully…

  19. Top-down influences on visual attention during listening are modulated by observer sex.

    PubMed

    Shen, John; Itti, Laurent

    2012-07-15

    In conversation, women have a small advantage in decoding non-verbal communication compared to men. In light of these findings, we sought to determine whether sex differences also existed in visual attention during a related listening task, and if so, if the differences existed among attention to high-level aspects of the scene or to conspicuous visual features. Using eye-tracking and computational techniques, we present direct evidence that men and women orient attention differently during conversational listening. We tracked the eyes of 15 men and 19 women who watched and listened to 84 clips featuring 12 different speakers in various outdoor settings. At the fixation following each saccadic eye movement, we analyzed the type of object that was fixated. Men gazed more often at the mouth and women at the eyes of the speaker. Women more often exhibited "distracted" saccades directed away from the speaker and towards a background scene element. Examining the multi-scale center-surround variation in low-level visual features (static: color, intensity, orientation, and dynamic: motion energy), we found that men consistently selected regions which expressed more variation in dynamic features, which can be attributed to a male preference for motion and a female preference for areas that may contain nonverbal information about the speaker. In sum, significant differences were observed, which we speculate arise from different integration strategies of visual cues in selecting the final target of attention. Our findings have implications for studies of sex in nonverbal communication, as well as for more predictive models of visual attention. Published by Elsevier Ltd.

  20. Linearized motion estimation for articulated planes.

    PubMed

    Datta, Ankur; Sheikh, Yaser; Kanade, Takeo

    2011-04-01

    In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.
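
    The equality-constrained linear least-squares step described above can be sketched directly from its KKT system. The matrices in the example are generic placeholders rather than the paper's motion parameterization; the constraint simply ties two parameters together, in the spirit of an articulation constraint.

        import numpy as np

        def constrained_least_squares(A, b, C, d):
            """Minimize ||A x - b||^2 subject to C x = d via the KKT system:

                [ 2 A^T A   C^T ] [ x      ]   [ 2 A^T b ]
                [   C        0  ] [ lambda ] = [    d    ]
            """
            n, m = A.shape[1], C.shape[0]
            K = np.block([[2.0 * A.T @ A, C.T], [C, np.zeros((m, m))]])
            rhs = np.concatenate([2.0 * A.T @ b, d])
            sol = np.linalg.solve(K, rhs)
            return sol[:n], sol[n:]        # primal solution and Lagrange multipliers

        # Toy example: fit four parameters while forcing x0 == x2 (a shared, "articulated" value).
        rng = np.random.default_rng(10)
        A = rng.standard_normal((40, 4))
        x_true = np.array([1.0, -2.0, 1.0, 0.5])
        b = A @ x_true + 0.01 * rng.standard_normal(40)
        C = np.array([[1.0, 0.0, -1.0, 0.0]])
        d = np.array([0.0])

        x, lam = constrained_least_squares(A, b, C, d)
        print("x =", np.round(x, 3), " constraint residual =", float(C @ x - d))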

  1. New insights into ambient and focal visual fixations using an automatic classification algorithm

    PubMed Central

    Follet, Brice; Le Meur, Olivier; Baccino, Thierry

    2011-01-01

    Overt visual attention is the act of directing the eyes toward a given area. These eye movements are characterised by saccades and fixations. A debate currently surrounds the role of visual fixations. Do they all have the same role in the free viewing of natural scenes? Recent studies suggest that at least two types of visual fixations exist: focal and ambient. The former is believed to be used to inspect local areas accurately, whereas the latter is used to obtain the context of the scene. We investigated the use of an automated system to cluster visual fixations in two groups using four types of natural scene images. We found new evidence to support a focal–ambient dichotomy. Our data indicate that the determining factor is the saccade amplitude. The dependence on the low-level visual features and the time course of these two kinds of visual fixations were examined. Our results demonstrate that there is an interplay between both fixation populations and that focal fixations are more dependent on low-level visual features than are ambient fixations. PMID:23145248
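
    An automatic two-way split of fixations, with the amplitude of the following saccade as the dominant feature, can be sketched as below. K-means on standardized (duration, next-saccade amplitude) pairs is an assumed stand-in for the classification algorithm used in the study, and the fixation data are synthetic.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(11)

        # Hypothetical fixations: duration (ms) and amplitude of the following saccade (deg).
        focal = np.column_stack([rng.normal(350, 60, 200), rng.gamma(2.0, 1.0, 200)])
        ambient = np.column_stack([rng.normal(180, 40, 200), rng.gamma(2.0, 4.0, 200)])
        fixations = np.vstack([focal, ambient])

        # Standardize the two features, then split into two clusters.
        z = (fixations - fixations.mean(axis=0)) / fixations.std(axis=0)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(z)

        for k in range(2):
            dur, amp = fixations[labels == k].mean(axis=0)
            print(f"cluster {k}: mean duration {dur:.0f} ms, mean next-saccade amplitude {amp:.1f} deg")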

  2. Neural markers of a greater female responsiveness to social stimuli

    PubMed Central

    Proverbio, Alice M; Zani, Alberto; Adorni, Roberta

    2008-01-01

    Background There is fMRI evidence that women are neurally predisposed to process infant laughter and crying. Other findings show that women might be more empathic and sensitive than men to emotional facial expressions. However, no gender difference in the brain responses to persons and inanimate scenes has hitherto been demonstrated. Results Twenty-four men and women viewed 220 images portraying persons or landscapes while ERPs were recorded from 128 sites. In women, but not in men, the N2 component (210–270 ms) was much larger to persons than to scenes. swLORETA showed significant bilateral activation of the FG (BA19/37) in both genders when viewing persons as opposed to scenes. Only women showed a source of activity in the STG and in the right MOG (extra-striate body area, EBA), and only men in the left parahippocampal area (PPA). Conclusion A significant gender difference was found in the activation of the left and right STG (BA22) and the cingulate cortex for the subtractive condition women minus men, indicating that women might have a greater preference or interest for social stimuli (faces and persons). PMID:18590546

  3. Blood Oxygen Level-Dependent Activation of the Primary Visual Cortex Predicts Size Adaptation Illusion

    PubMed Central

    Pooresmaeili, Arezoo; Arrighi, Roberto; Biagi, Laura; Morrone, Maria Concetta

    2016-01-01

    In natural scenes, objects rarely occur in isolation but appear within a spatiotemporal context. Here, we show that the perceived size of a stimulus is significantly affected by the context of the scene: brief previous presentation of larger or smaller adapting stimuli at the same region of space changes the perceived size of a test stimulus, with larger adapting stimuli causing the test to appear smaller than veridical and vice versa. In a human fMRI study, we measured the blood oxygen level-dependent activation (BOLD) responses of the primary visual cortex (V1) to the contours of large-diameter stimuli and found that activation closely matched the perceptual rather than the retinal stimulus size: the activated area of V1 increased or decreased, depending on the size of the preceding stimulus. A model based on local inhibitory V1 mechanisms simulated the inward or outward shifts of the stimulus contours and hence the perceptual effects. Our findings suggest that area V1 is actively involved in reshaping our perception to match the short-term statistics of the visual scene. PMID:24089504

  4. Summary of along-track data from the Earth radiation budget satellite for several major desert regions

    NASA Technical Reports Server (NTRS)

    Brooks, David R.; Fenn, Marta A.

    1988-01-01

    For several days in January and August 1985, the Earth Radiation Budget Satellite, a component of the Earth Radiation Budget Experiment (ERBE), was operated in an along-track scanning mode. A survey of radiance measurements is given for four desert areas in Africa, the Arabian Peninsula, Australia, and the Sahel region of Africa. Each overflight provides radiance information for four scene categories: clear, partly cloudy, mostly cloudy, and overcast. The data presented include the variation of radiance in each scene classification as a function of viewing zenith angle during each overflight of the five target areas. Several features of interest in the development of anisotropic models are evident, including day-night differences in longwave limb darkening and the azimuthal dependence of short wave radiance. There is some evidence that surface features may introduce thermal or visible shadowing that is not incorporated in the usual descriptions of the anisotropic behavior of radiance as viewed from space. The data also demonstrate that the ERBE scene classification algorithms give results that, at least for desert surfaces, are a function of viewing geometry.

  5. Is attention based on spatial contextual memory preferentially guided by low spatial frequency signals?

    PubMed

    Patai, Eva Zita; Buckley, Alice; Nobre, Anna Christina

    2013-01-01

    A popular model of visual perception states that coarse information (carried by low spatial frequencies) along the dorsal stream is rapidly transmitted to prefrontal and medial temporal areas, activating contextual information from memory, which can in turn constrain detailed input carried by high spatial frequencies arriving at a slower rate along the ventral visual stream, thus facilitating the processing of ambiguous visual stimuli. We were interested in testing whether this model contributes to memory-guided orienting of attention. In particular, we asked whether global, low-spatial frequency (LSF) inputs play a dominant role in triggering contextual memories in order to facilitate the processing of the upcoming target stimulus. We explored this question over four experiments. The first experiment replicated the LSF advantage reported in perceptual discrimination tasks by showing that participants were faster and more accurate at matching a low spatial frequency version of a scene, compared to a high spatial frequency version, to its original counterpart in a forced-choice task. The subsequent three experiments tested the relative contributions of low versus high spatial frequencies during memory-guided covert spatial attention orienting tasks. Replicating the effects of memory-guided attention, pre-exposure to scenes associated with specific spatial memories for target locations (memory cues) led to higher perceptual discrimination and faster response times to identify targets embedded in the scenes. However, either high or low spatial frequency cues were equally effective; LSF signals did not selectively or preferentially contribute to the memory-driven attention benefits to performance. Our results challenge a generalized model that LSFs activate contextual memories, which in turn bias attention and facilitate perception.

  6. Is Attention Based on Spatial Contextual Memory Preferentially Guided by Low Spatial Frequency Signals?

    PubMed Central

    Patai, Eva Zita; Buckley, Alice; Nobre, Anna Christina

    2013-01-01

    A popular model of visual perception states that coarse information (carried by low spatial frequencies) along the dorsal stream is rapidly transmitted to prefrontal and medial temporal areas, activating contextual information from memory, which can in turn constrain detailed input carried by high spatial frequencies arriving at a slower rate along the ventral visual stream, thus facilitating the processing of ambiguous visual stimuli. We were interested in testing whether this model contributes to memory-guided orienting of attention. In particular, we asked whether global, low-spatial frequency (LSF) inputs play a dominant role in triggering contextual memories in order to facilitate the processing of the upcoming target stimulus. We explored this question over four experiments. The first experiment replicated the LSF advantage reported in perceptual discrimination tasks by showing that participants were faster and more accurate at matching a low spatial frequency version of a scene, compared to a high spatial frequency version, to its original counterpart in a forced-choice task. The subsequent three experiments tested the relative contributions of low versus high spatial frequencies during memory-guided covert spatial attention orienting tasks. Replicating the effects of memory-guided attention, pre-exposure to scenes associated with specific spatial memories for target locations (memory cues) led to higher perceptual discrimination and faster response times to identify targets embedded in the scenes. However, either high or low spatial frequency cues were equally effective; LSF signals did not selectively or preferentially contribute to the memory-driven attention benefits to performance. Our results challenge a generalized model that LSFs activate contextual memories, which in turn bias attention and facilitate perception. PMID:23776509

  7. Napping and the Selective Consolidation of Negative Aspects of Scenes

    PubMed Central

    Payne, Jessica D.; Kensinger, Elizabeth A.; Wamsley, Erin; Spreng, R. Nathan; Alger, Sara; Gibler, Kyle; Schacter, Daniel L.; Stickgold, Robert

    2018-01-01

    After information is encoded into memory, it undergoes an offline period of consolidation that occurs optimally during sleep. The consolidation process not only solidifies memories, but also selectively preserves aspects of experience that are emotionally salient and relevant for future use. Here, we provide evidence that an afternoon nap is sufficient to trigger preferential memory for emotional information contained in complex scenes. Selective memory for negative emotional information was enhanced after a nap compared to wakefulness in two control conditions designed to carefully address interference and time-of-day confounds. Although prior evidence has connected negative emotional memory formation to rapid eye movement (REM) sleep physiology, we found that non-REM delta activity and the amount of slow wave sleep (SWS) in the nap were robustly related to the selective consolidation of negative information. These findings suggest that the mechanisms underlying memory consolidation benefits associated with napping and nighttime sleep are not always the same. Finally, we provide preliminary evidence that the magnitude of the emotional memory benefit conferred by sleep is equivalent following a nap and a full night of sleep, suggesting that selective emotional remembering can be economically achieved by taking a nap. PMID:25706830

  8. Study of recreational land and open space using Skylab imagery

    NASA Technical Reports Server (NTRS)

    Sattinger, I. J. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. An analysis of the statistical uniqueness of each of the signatures of the Gratiot-Saginaw State Game Area was made by computing a matrix of probabilities of misclassification for all possible signature pairs. Within each data set, the 35 signatures were then aggregated into a smaller set of composite signatures by combining groups of signatures having high probabilities of misclassification. Computer separation of forest density classes was poor with multispectral scanner data collected on 5 August 1973. Signatures from the scanner data were further analyzed to determine the ranking of spectral channels for computer separation of the scene classes. Probabilities of misclassification were computed for composite signatures using four separate combinations of data source and channel selection.

  9. Features in Aureum Chaos

    NASA Technical Reports Server (NTRS)

    2004-01-01

    12 November 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows light-toned, sedimentary rock outcrops in the Aureum Chaos region of Mars. On the brightest and steepest slope in this scene, dry talus shed from the outcrop has formed a series of dark fans along its base. These outcrops are located near 3.4°S, 27.5°W. The image covers an area approximately 3 km (1.9 mi) across and sunlight illuminates the scene from the upper left.

  10. An Assessment of the Shipboard Training Effectiveness of the Integrated Damage Control Training Technology (IDCTT) Version 3.0

    DTIC Science & Technology

    1998-03-01

    damage control actions in an assigned area of the ship. Reports are received from the On Scene Leader (OSL) and Investigators. Simultaneously, the RPL...control location. A phone talker and plotter will perform in unison with their counterparts in DCC. Key members of the repair party, the OSL and...the obligation of the On Scene Leader (OSL). This experienced petty officer is tasked with directing the ATL's actions and informing the RPL of repair

  11. Natural scene logo recognition by joint boosting feature selection in salient regions

    NASA Astrophysics Data System (ADS)

    Fan, Wei; Sun, Jun; Naoi, Satoshi; Minagawa, Akihiro; Hotta, Yoshinobu

    2011-01-01

    Logos are considered valuable intellectual property and a key component of the goodwill of a business. In this paper, we propose a natural scene logo recognition method that is segmentation-free and capable of processing images extremely rapidly while achieving high recognition rates. The classifiers for each logo are trained jointly, rather than independently; in this way, common features can be shared across multiple classes for better generalization. To deal with the large range of aspect ratios across different logos, a set of salient regions of interest (ROIs) is extracted to describe each class. We ensure that the selected ROIs are both individually informative and pairwise weakly dependent using a Class Conditional Entropy Maximization criterion. Experimental results on a large logo database demonstrate the effectiveness and efficiency of our proposed method.

  12. [Evaluation standards and application for photography of schistosomiasis control theme].

    PubMed

    Chun-Li, Cao; Qing-Biao, Hong; Jing-Ping, Guo; Fang, Liu; Tian-Ping, Wang; Jian-Bin, Liu; Lin, Chen; Hao, Wang; You-Sheng, Liang; Jia-Gang, Guo

    2018-02-26

    To establish and apply evaluation standards for photography on the theme of schistosomiasis control, so as to offer scientific advice for enriching the health information carriers of schistosomiasis control. Through literature review and expert consultation, the evaluation standard for schistosomiasis control theme photography was formulated. The themes were divided into 4 categories: new construction, natural scenery, working scenes, and control achievements. The evaluation criteria for the theme photography were divided into theme (60%), photographic composition (15%), focus and exposure (15%), and color saturation (10%). A total of 495 pictures (sets) from 59 units with 77 authors were collected from schistosomiasis-endemic areas nationwide. After first-step screening and second-step evaluation, prizes were awarded in 3 theme groups (control achievements and new construction, working scenes, and natural scenery): 6 first prizes, 12 second prizes, 18 third prizes, and 20 honorable mentions. The evaluation standards for theme photography should take into consideration both the technical elements of photography and the work specifications of schistosomiasis prevention and control. In order to improve the ability to record schistosomiasis control for publicity purposes and to better guide accurate publicity, training and guidance in photography should be provided to professionals.

  13. The Nigerian home video boom: should Nigerian psychiatrists be worried? Lessons from content review and views of community dwellers.

    PubMed

    Atilola, Olayinka; Olayiwola, Funmilayo

    2012-09-01

    Media depictions of sufferers of mental illness are widely viewed as a source of stigmatization, and studies have found stigmatizing depictions of mental illness in Nigerian films. With the recent boom in the Nigerian home video industry, there is a need to know how often Nigerians are exposed to films that contain scenes depicting mental illness and how much premium they place on such portrayals as reflecting reality. To assess the popularity of Nigerian home videos among Nigerian community dwellers and the frequency of their exposure to scenes depicting mental illness. A semi-structured questionnaire was designed to obtain socio-demographic data and to find out how often respondents see scenes depicting 'madness' in home videos, as well as their views about the accuracy of such depictions from the orthodox psychiatry point of view. Current home videos available in video rental shops were selected for viewing and content review. All 676 respondents had seen a Nigerian home video in the preceding 30 days: 528 (78%) reported scenes depicting 'mad persons'; 472 (70%) reported that the scenes they saw agreed with their own initial understanding of the cause and treatment of 'madness'. About 20% of the films depicted mental illness. The most commonly depicted cause was sorcery and enchantment by witches and wizards, while the most commonly depicted treatment was magical and spiritual healing by diviners and religious priests. Nigerian home video is a popular electronic medium in Nigeria, and scenes depicting mental illness are not uncommon. The industry could be harnessed for promoting mental health literacy.

  14. The depiction of protective eyewear use in popular television programs.

    PubMed

    Glazier, Robert; Slade, Martin; Mayer, Hylton

    2011-04-01

    Media portrayal of health related activities may influence health related behaviors in adult and pediatric populations. This study characterizes the depiction of protective eyewear use in the scripted television programs most viewed by the age group that sustains the largest proportion of eye injuries. Viewership ratings data were acquired to assemble a list of the 24 most-watched scripted network broadcast programs for the 13-year-old to 45-year-old age group. The six highest average viewership programs that met the exclusion criteria were selected for analysis. Review of 30 episodes revealed a total of 258 exposure scenes in which an individual was engaged in an activity requiring eye protection (mean, 8.3 exposure scenes per episode; median, 5 exposure scenes per episode). Overall, 66 (26%) of exposure scenes depicted the use of any eye protection, while only 32 (12%) of exposure scenes depicted the use of adequate eye protection. No incidences of eye injuries or infectious exposures were depicted within the exposure scenes in the study set. The depiction of adequate protective eyewear use during eye-risk activities is rare in network scripted broadcast programs. Healthcare professionals and health advocacy groups should continue to work to improve public education about eye injury risks and prevention; these efforts could include working with the television industry to improve the accuracy of the depiction of eye injuries and the proper protective eyewear used for prevention of injuries in scripted programming. Future studies are needed to examine the relationship between media depiction of eye protection use and viewer compliance rates.

  15. Mastcam Special Filters Help Locate Variations Ahead

    NASA Image and Video Library

    2017-11-01

    This pair of images from the Mast Camera (Mastcam) on NASA's Curiosity rover illustrates how special filters are used to scout terrain ahead for variations in the local bedrock. The upper panorama is in the Mastcam's usual full color, for comparison. The lower panorama of the same scene, in false color, combines three exposures taken through different "science filters," each selecting for a narrow band of wavelengths. Filters and image processing steps were selected to make stronger signatures of hematite, an iron-oxide mineral, evident as purple. Hematite is of interest in this area of Mars -- partway up "Vera Rubin Ridge" on lower Mount Sharp -- as holding clues about ancient environmental conditions under which that mineral originated. In this pair of panoramas, the strongest indications of hematite appear related to areas where the bedrock is broken up. With information from this Mastcam reconnaissance, the rover team selected destinations in the scene for close-up investigations to gain understanding about the apparent patchiness in hematite spectral features. The Mastcam's left-eye camera took the component images of both panoramas on Sept. 12, 2017, during the 1,814th Martian day, or sol, of Curiosity's work on Mars. The view spans from south-southeast on the left to south-southwest on the right. The foreground across the bottom of the scene is about 50 feet (about 15 meters) wide. Figure 1 includes scale bars of 1 meter (3.3 feet) in the middle distance and 5 meters (16 feet) at upper right. Curiosity's Mastcam combines two cameras: the right eye with a telephoto lens and the left eye with a wider-angle lens. Each camera has a filter wheel that can be rotated in front of the lens for a choice of eight different filters. One filter for each camera is clear to all visible light, for regular full-color photos, and another is specifically for viewing the Sun. Some of the other filters were selected to admit wavelengths of light that are useful for identifying iron minerals. Each of the filters used for the lower panorama shown here admits light from a narrow band of wavelengths, extending to only about 5 to 10 nanometers longer or shorter than the filter's central wavelength. The three observations combined into this product used filters centered at three near-infrared wavelengths: 751 nanometers, 867 nanometers and 1,012 nanometers. Hematite distinctively absorbs some frequencies of infrared light more than others. Usual color photographs from digital cameras -- such as the upper panorama here from Mastcam -- combine information from red, green and blue filtering. The filters are in a microscopic grid in a "Bayer" filter array situated directly over the detector behind the lens, with wider bands of wavelengths. The colors of the upper panorama, as with most featured images from Mastcam, have been tuned with a color adjustment similar to white balancing for approximating how the rocks and sand would appear under daytime lighting conditions on Earth. https://photojournal.jpl.nasa.gov/catalog/PIA22065
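
    A false-color product of the kind described (three narrow-band images mapped to the red, green, and blue channels) can be assembled with a few lines of Python. The percentile contrast stretch and the band-to-channel assignment below are generic assumptions for illustration, not the Mastcam team's processing; the band images are synthetic stand-ins for the 751, 867, and 1,012 nanometer filter frames.

        import numpy as np

        def stretch(band, low_pct=2, high_pct=98):
            """Percentile contrast stretch to the 0-1 range (assumed, generic enhancement)."""
            lo, hi = np.percentile(band, [low_pct, high_pct])
            return np.clip((band - lo) / (hi - lo + 1e-9), 0.0, 1.0)

        def false_color(band_long, band_mid, band_short):
            """Map three narrow-band images to the R, G, B channels of a composite."""
            return np.dstack([stretch(band_long), stretch(band_mid), stretch(band_short)])

        # Synthetic stand-ins for the three near-infrared filter images.
        rng = np.random.default_rng(14)
        shape = (200, 300)
        b751, b867, b1012 = (rng.normal(100.0, 20.0, shape) for _ in range(3))
        composite = false_color(b1012, b867, b751)
        print(composite.shape, float(composite.min()), float(composite.max()))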

  16. Rapid natural scene categorization in the near absence of attention

    PubMed Central

    Li, Fei Fei; VanRullen, Rufin; Koch, Christof; Perona, Pietro

    2002-01-01

    What can we see when we do not pay attention? It is well known that we can be “blind” even to major aspects of natural scenes when we attend elsewhere. The only tasks that do not need attention appear to be carried out in the early stages of the visual system. Contrary to this common belief, we report that subjects can rapidly detect animals or vehicles in briefly presented novel natural scenes while simultaneously performing another attentionally demanding task. By comparison, they are unable to discriminate large T's from L's, or bisected two-color disks from their mirror images under the same conditions. We conclude that some visual tasks associated with “high-level” cortical areas may proceed in the near absence of attention. PMID:12077298

  17. Single-unit activity during natural vision: diversity, consistency, and spatial sensitivity among AF face patch neurons.

    PubMed

    McMahon, David B T; Russ, Brian E; Elnaiem, Heba D; Kurnikova, Anastasia I; Leopold, David A

    2015-04-08

    Several visual areas within the STS of the macaque brain respond strongly to faces and other biological stimuli. Determining the principles that govern neural responses in this region has proven challenging, due in part to the inherently complex stimulus domain of dynamic biological stimuli that are not captured by an easily parameterized stimulus set. Here we investigated neural responses in one fMRI-defined face patch in the anterior fundus (AF) of the STS while macaques freely view complex videos rich with natural social content. Longitudinal single-unit recordings allowed for the accumulation of each neuron's responses to repeated video presentations across sessions. We found that individual neurons, while diverse in their response patterns, were consistently and deterministically driven by the video content. We used principal component analysis to compute a family of eigenneurons, which summarized 24% of the shared population activity in the first two components. We found that the most prominent component of AF activity reflected an interaction between visible body region and scene layout. Close-up shots of faces elicited the strongest neural responses, whereas far away shots of faces or close-up shots of hindquarters elicited weak or inhibitory responses. Sensitivity to the apparent proximity of faces was also observed in gamma band local field potential. This category-selective sensitivity to spatial scale, together with the known exchange of anatomical projections of this area with regions involved in visuospatial analysis, suggests that the AF face patch may be specialized in aspects of face perception that pertain to the layout of a social scene.
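    As a rough illustration of the eigenneuron idea, principal component analysis can be applied to a matrix of trial-averaged responses (neurons × time points): the leading components give shared population time courses, and the explained-variance ratios quantify how much shared activity they summarize. The sketch below is a generic PCA example on placeholder data, not the authors' analysis code; the array shapes and names are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical input: each row is one neuron's trial-averaged response to the
# repeated video (n_neurons x n_timepoints); filled here with random data.
responses = np.random.default_rng(0).standard_normal((80, 5000))

pca = PCA(n_components=2)
weights = pca.fit_transform(responses)      # each neuron's loading on each component
eigen_timecourses = pca.components_         # (2, n_timepoints) shared "eigenneuron" time courses
print(f"First two components explain "
      f"{pca.explained_variance_ratio_.sum():.0%} of the population variance")
```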

  18. A validation of ground ambulance pre-hospital times modeled using geographic information systems.

    PubMed

    Patel, Alka B; Waters, Nigel M; Blanchard, Ian E; Doig, Christopher J; Ghali, William A

    2012-10-03

    Evaluating geographic access to health services often requires determining the patient travel time to a specified service. For urgent care, many research studies have modeled patient pre-hospital time by ground emergency medical services (EMS) using geographic information systems (GIS). The purpose of this study was to determine if the modeling assumptions proposed through prior United States (US) studies are valid in a non-US context, and to use the resulting information to provide revised recommendations for modeling travel time using GIS in the absence of actual EMS trip data. The study sample contained all emergency adult patient trips within the Calgary area for 2006. Each record included four components of pre-hospital time (activation, response, on-scene and transport interval). The actual activation and on-scene intervals were compared with those used in published models. The transport interval was calculated within GIS using the Network Analyst extension of Esri ArcGIS 10.0 and the response interval was derived using previously established methods. These GIS derived transport and response intervals were compared with the actual times using descriptive methods. We used the information acquired through the analysis of the EMS trip data to create an updated model that could be used to estimate travel time in the absence of actual EMS trip records. There were 29,765 complete EMS records for scene locations inside the city and 529 outside. The actual median on-scene intervals were longer than the average previously reported by 7-8 minutes. Actual EMS pre-hospital times across our study area were significantly higher than the estimated times modeled using GIS and the original travel time assumptions. Our updated model, although still underestimating the total pre-hospital time, more accurately represents the true pre-hospital time in our study area. The widespread use of generalized EMS pre-hospital time assumptions based on US data may not be appropriate in a non-US context. The preference for researchers should be to use actual EMS trip records from the proposed research study area. In the absence of EMS trip data researchers should determine which modeling assumptions more accurately reflect the EMS protocols across their study area.

  19. Application of LC and LCoS in Multispectral Polarized Scene Projector (MPSP)

    NASA Astrophysics Data System (ADS)

    Yu, Haiping; Guo, Lei; Wang, Shenggang; Lippert, Jack; Li, Le

    2017-02-01

    A Multispectral Polarized Scene Projector (MPSP) has been developed in the short-wave infrared (SWIR) regime for the test and evaluation (T&E) of spectro-polarimetric imaging sensors. The MPSP generates multispectral and hyperspectral video images (up to 200 Hz) at 512×512 spatial resolution, with active spatial, spectral, and polarization modulation and controlled bandwidth. It projects input SWIR radiant intensity scenes from stored memory with user-selectable wavelength and bandwidth, as well as six polarization states controllable at the pixel level. The spectral content is implemented by a tunable filter with variable bandpass built from liquid crystal (LC) material, together with one passive visible and one passive SWIR cholesteric liquid crystal (CLC) notch filter, and one switchable CLC notch filter. The core of the MPSP hardware is the set of liquid-crystal-on-silicon (LCoS) spatial light modulators (SLMs) used for intensity control and polarization modulation.

  20. Asian dust aerosol: Optical effect on satellite ocean color signal and a scheme of its correction

    NASA Astrophysics Data System (ADS)

    Fukushima, H.; Toratani, M.

    1997-07-01

    The paper first demonstrates the influence of the Asian dust aerosol (KOSA) on a coastal zone color scanner (CZCS) image which records erroneously low or negative satellite-derived water-leaving radiance, especially at shorter wavelengths. This suggests the presence of spectrally dependent absorption which was disregarded in past atmospheric correction algorithms. On the basis of the analysis of the scene, a semiempirical optical model of the Asian dust aerosol that relates aerosol single scattering albedo (ωA) to the spectral ratio of aerosol optical thickness between 550 nm and 670 nm is developed. Then, as a modification to a standard CZCS atmospheric correction algorithm (the NASA standard algorithm), a scheme which estimates pixel-wise aerosol optical thickness, and in turn ωA, is proposed. The assumption of constant normalized water-leaving radiance at 550 nm is adopted together with a model of the aerosol scattering phase function. The scheme is combined with the standard algorithm, performing atmospheric correction in the same way as the standard version (with a fixed Angstrom coefficient) except where the presence of Asian dust aerosol is detected by a lowered satellite-derived Angstrom exponent. Some of the model parameter values are determined so that the scheme does not produce any spatial discontinuity with the standard scheme. The algorithm was tested against the Japanese Asian dust CZCS scene, with parameter values for the spectral dependency of ωA first determined statistically and then optimized for selected pixels. Analysis suggests that the parameter values depend on the Angstrom coefficient assumed for the standard algorithm, which at the same time defines the spatial extent of the area over which the Asian dust scheme is applied. The algorithm was also tested on a Saharan dust scene, showing the relevance of the scheme but with a different parameter setting. Finally, the algorithm was applied to a data set of 25 CZCS scenes to produce a monthly composite of pigment concentration for April 1981. Through these analyses, the modified algorithm is considered robust in the sense that it operates compatibly with the standard algorithm yet performs adaptively in response to the magnitude of the dust effect.
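    For illustration, the dust-detection step hinges on the Angstrom exponent derived from aerosol optical thickness at two wavelengths; coarse dust gives a near-zero exponent. The sketch below shows only that relationship with an illustrative threshold; the function names, threshold value, and inputs are assumptions, not the paper's operational scheme.

```python
import numpy as np

def angstrom_exponent(tau_550, tau_670):
    """Angstrom exponent alpha from aerosol optical thickness at 550 and
    670 nm, assuming tau is proportional to wavelength**(-alpha)."""
    return -np.log(tau_550 / tau_670) / np.log(550.0 / 670.0)

def asian_dust_flag(tau_550, tau_670, alpha_threshold=0.5):
    # Coarse-mode dust scatters almost neutrally with wavelength, so a
    # lowered Angstrom exponent is used here as the dust indicator.
    # The 0.5 threshold is illustrative, not the paper's value.
    return angstrom_exponent(tau_550, tau_670) < alpha_threshold
```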

  1. Research on hyperspectral dynamic scene and image sequence simulation

    NASA Astrophysics Data System (ADS)

    Sun, Dandan; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a simulation method for hyper-spectral dynamic scenes and image sequences, intended for hyper-spectral equipment evaluation and target detection algorithm development. Because of its high spectral resolution, strong band continuity, anti-interference capability, and other advantages, hyper-spectral imaging technology has developed rapidly in recent years and is widely used in areas such as optoelectronic target detection, military defense, and remote sensing systems. Digital imaging simulation, as a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyper-spectral imaging equipment at lower development cost and with a shorter development period. Meanwhile, visual simulation can produce large volumes of original image data under various conditions for hyper-spectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a generation method for digital scenes. By building multiple sensor models for different bands and bandwidths, hyper-spectral scenes in the visible, MWIR, and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm, and 0.1 μm, have been simulated. The resulting dynamic scenes are realistic and render in real time at frame rates up to 100 Hz. By saving all scene grayscale data from the same viewpoint, an image sequence is obtained. The analysis results show that, in both the infrared and visible bands, the grayscale variations of the simulated hyper-spectral images are consistent with the theoretical analysis.
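    One building block of such a simulator is the sensor spectral-response model: a band image value is obtained by integrating the high-resolution scene radiance spectrum against each band's response function. The sketch below shows that step for a Gaussian response of a given center wavelength and spectral resolution; the flat placeholder spectrum and the function name are assumptions, not components of the system described above.

```python
import numpy as np

def band_radiance(wavelength_um, radiance, center_um, resolution_um):
    """Integrate a high-resolution radiance spectrum against a Gaussian
    spectral response of the given center wavelength and resolution (FWHM)."""
    sigma = resolution_um / 2.355                  # FWHM -> standard deviation
    response = np.exp(-0.5 * ((wavelength_um - center_um) / sigma) ** 2)
    return np.trapz(radiance * response, wavelength_um) / np.trapz(response, wavelength_um)

# Example: a visible channel at 0.01 um resolution and an LWIR channel at
# 0.1 um, applied to a flat placeholder spectrum (both prints should be ~1.0).
wl = np.linspace(0.4, 14.0, 20000)
spectrum = np.ones_like(wl)
print(band_radiance(wl, spectrum, center_um=0.55, resolution_um=0.01))
print(band_radiance(wl, spectrum, center_um=10.0, resolution_um=0.1))
```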

  2. Specific and Nonspecific Neural Activity during Selective Processing of Visual Representations in Working Memory

    ERIC Educational Resources Information Center

    Oh, Hwamee; Leung, Hoi-Chung

    2010-01-01

    In this fMRI study, we investigated prefrontal cortex (PFC) and visual association regions during selective information processing. We recorded behavioral responses and neural activity during a delayed recognition task with a cue presented during the delay period. A specific cue ("Face" or "Scene") was used to indicate which one of the two…

  3. Good Exemplars of Natural Scene Categories Elicit Clearer Patterns than Bad Exemplars but Not Greater BOLD Activity

    PubMed Central

    Torralbo, Ana; Walther, Dirk B.; Chai, Barry; Caddigan, Eamon; Fei-Fei, Li; Beck, Diane M.

    2013-01-01

    Within the range of images that we might categorize as a “beach”, for example, some will be more representative of that category than others. Here we first confirmed that humans could categorize “good” exemplars better than “bad” exemplars of six scene categories and then explored whether brain regions previously implicated in natural scene categorization showed a similar sensitivity to how well an image exemplifies a category. In a behavioral experiment participants were more accurate and faster at categorizing good than bad exemplars of natural scenes. In an fMRI experiment participants passively viewed blocks of good or bad exemplars from the same six categories. A multi-voxel pattern classifier trained to discriminate among category blocks showed higher decoding accuracy for good than bad exemplars in the PPA, RSC and V1. This difference in decoding accuracy cannot be explained by differences in overall BOLD signal, as average BOLD activity was either equivalent or higher for bad than good scenes in these areas. These results provide further evidence that V1, RSC and the PPA not only contain information relevant for natural scene categorization, but their activity patterns mirror the fundamentally graded nature of human categories. Analysis of the image statistics of our good and bad exemplars shows that variability in low-level features and image structure is higher among bad than good exemplars. A simulation of our neuroimaging experiment suggests that such a difference in variance could account for the observed differences in decoding accuracy. These results are consistent with both low-level models of scene categorization and models that build categories around a prototype. PMID:23555588
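    The core comparison is a cross-validated multi-voxel pattern classification run separately on good-exemplar and bad-exemplar blocks within each region of interest. The sketch below is a generic version of that analysis using a linear classifier; the array names and fold count are hypothetical, and the original study's classifier and cross-validation scheme may differ.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def decoding_accuracy(patterns, category_labels, n_folds=6):
    """Cross-validated accuracy of a linear classifier discriminating scene
    categories from block-averaged ROI voxel patterns.

    patterns: (n_blocks, n_voxels) array; category_labels: (n_blocks,) array.
    """
    return cross_val_score(LinearSVC(), patterns, category_labels, cv=n_folds).mean()

# Hypothetical usage: compare decoding for good vs. bad exemplar blocks in one ROI.
# acc_good = decoding_accuracy(good_patterns, labels)
# acc_bad = decoding_accuracy(bad_patterns, labels)
```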

  4. Characteristics of nontrauma scene flights for air medical transport.

    PubMed

    Krebs, Margaret G; Fletcher, Erica N; Werman, Howard; McKenzie, Lara B

    2014-01-01

    Little is known about the use of air medical transport for patients with medical, rather than traumatic, emergencies. This study describes the practices of air transport programs, with respect to nontrauma scene responses, in several areas throughout the United States and Canada. A descriptive, retrospective study was conducted of all nontrauma scene flights from 2008 and 2009. Flight information and patient demographic data were collected from 5 air transport programs. Descriptive statistics were used to examine indications for transport, Glasgow Coma Scale scores, and loaded miles traveled. A total of 1,785 nontrauma scene flights were evaluated. The percentage of scene flights contributed by nontraumatic emergencies varied between programs, ranging from 0% to 44.3%. The most common indication for transport was cardiac: non-ST-segment elevation myocardial infarction (22.9%). Cardiac arrest was the indication for transport in 2.5% of flights. One air transport program reported a high percentage (49.4%) of neurologic (stroke) flights. The use of air transport for nontraumatic emergencies varied considerably between air transport programs and regions. More research is needed to evaluate which nontraumatic emergencies benefit from air transport. National guidelines regarding the use of air transport for nontraumatic emergencies are needed. Copyright © 2014 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.

  5. Advancing the retrievals of surface emissivity by modelling the spatial distribution of temperature in the thermal hyperspectral scene

    NASA Astrophysics Data System (ADS)

    Shimoni, M.; Haelterman, R.; Lodewyckx, P.

    2016-05-01

    Land Surface Temperature (LST) and Land Surface Emissivity (LSE) are commonly retrieved from thermal hyperspectral imaging. However, their retrieval is not a straightforward procedure because the mathematical problem is ill-posed. The procedure becomes more challenging in an urban area, where the spatial distribution of temperature varies substantially in space and time. To assess the influence of several sources of spatial variance on the temperature deviations within the scene, a statistical model was created. The model was tested using several images acquired at various times of day and was validated using in-situ measurements. The results highlight the importance of the geometry of the scene and its setting relative to the position of the sun during the day. They also show that when the sun is at zenith, the main contribution to the thermal distribution in the scene is the thermal capacity of the landcover materials. In this paper we propose a new Temperature and Emissivity Separation (TES) method that integrates 3D surface and landcover information from LIDAR and VNIR hyperspectral imaging data in an attempt to improve the TES procedure for a thermal hyperspectral scene. The experimental results demonstrate the high accuracy of the proposed method in comparison with a conventional TES model.

  6. A study to explore the use of orbital remote sensing to determine native arid plant distribution. [Arizona

    NASA Technical Reports Server (NTRS)

    Mcginnies, W. G. (Principal Investigator); Conn, J. S.; Haase, E. F.; Lepley, L. K.; Musick, H. B.; Foster, K. E.

    1975-01-01

    The author has identified the following significant results. Research results include a method for determining the reflectivities of natural areas from ERTS data, taking into account sun angle and atmospheric effects on the radiance seen by the satellite sensor. Ground truth spectral signature data for various types of scenes, including ground with and without annuals, and various shrubs, were collected. Large areas of varnished desert pavement are visible and mappable on ERTS and high altitude aircraft imagery. A large scale and a small scale vegetation pattern were found to be correlated with the presence of desert pavement. A comparison of radiometric data with video recordings shows quantitatively that for most areas of desert vegetation, soils are the most influential factor in determining the signature of a scene. Additive and subtractive image processing techniques were applied in the darkroom to enhance vegetational aspects of the ERTS imagery.

  7. Topography-Dependent Motion Compensation: Application to UAVSAR Data

    NASA Technical Reports Server (NTRS)

    Jones, Cathleen E.; Hensley, Scott; Michel, Thierry

    2009-01-01

    The UAVSAR L-band synthetic aperture radar system has been designed for repeat track interferometry in support of Earth science applications that require high-precision measurements of small surface deformations over timescales from hours to years. Conventional motion compensation algorithms, which are based upon assumptions of a narrow beam and flat terrain, yield unacceptably large errors in areas with even moderate topographic relief, i.e., in most areas of interest. This often limits the ability to achieve sub-centimeter surface change detection over significant portions of an acquired scene. To reduce this source of error in the interferometric phase, we have implemented an advanced motion compensation algorithm that corrects for the scene topography and radar beam width. Here we discuss the algorithm used, its implementation in the UAVSAR data processor, and the improvement in interferometric phase and correlation achieved in areas with significant topographic relief.

  8. Aircraft MSS data registration and vegetation classification of wetland change detection

    USGS Publications Warehouse

    Christensen, E.J.; Jensen, J.R.; Ramsey, Elijah W.; Mackey, H.E.

    1988-01-01

    Portions of the Savannah River floodplain swamp were evaluated for vegetation change using high resolution (5-6 m) aircraft multispectral scanner (MSS) data. Image distortion from aircraft movement prevented precise image-to-image registration in some areas. However, when small scenes were used (200-250 ha), a first-order linear transformation provided registration accuracies of less than or equal to one pixel. A larger area was registered using a piecewise linear method. Five major wetland classes were identified and evaluated for change. Phenological differences and the variable distribution of vegetation limited wetland type discrimination. Using unsupervised methods and ground-collected vegetation data, overall classification accuracies ranged from 84 per cent to 87 per cent for each scene. Results suggest that high-resolution aircraft MSS data can be precisely registered, if small areas are used, and that wetland vegetation change can be accurately detected and monitored.

  9. A dose-dependent relationship between exposure to a street-based drug scene and health-related harms among people who use injection drugs.

    PubMed

    Debeck, Kora; Wood, Evan; Zhang, Ruth; Buxton, Jane; Montaner, Julio; Kerr, Thomas

    2011-08-01

    While the community impacts of drug-related street disorder have been well described, lesser attention has been given to the potential health and social implications of drug scene exposure on street-involved people who use illicit drugs. Therefore, we sought to assess the impacts of exposure to a street-based drug scene among injection drug users (IDU) in a Canadian setting. Data were derived from a prospective cohort study known as the Vancouver Injection Drug Users Study. Four categories of drug scene exposure were defined based on the numbers of hours spent on the street each day. Three generalized estimating equation (GEE) logistic regression models were constructed to identify factors associated with varying levels of drug scene exposure (2-6, 6-15, over 15 hours) during the period of December 2005 to March 2009. Among our sample of 1,486 IDU, at baseline, a total of 314 (21%) fit the criteria for high drug scene exposure (>15 hours per day). In multivariate GEE analysis, factors significantly and independently associated with high exposure included: unstable housing (adjusted odds ratio [AOR] = 9.50; 95% confidence interval [CI], 6.36-14.20); daily crack use (AOR = 2.70; 95% CI, 2.07-3.52); encounters with police (AOR = 2.11; 95% CI, 1.62-2.75); and being a victim of violence (AOR = 1.49; 95 % CI, 1.14-1.95). Regular employment (AOR = 0.50; 95% CI, 0.38-0.65), and engagement with addiction treatment (AOR = 0.58; 95% CI, 0.45-0.75) were negatively associated with high exposure. Our findings indicate that drug scene exposure is associated with markers of vulnerability and higher intensity addiction. Intensity of drug scene exposure was associated with indicators of vulnerability to harm in a dose-dependent fashion. These findings highlight opportunities for policy interventions to address exposure to street disorder in the areas of employment, housing, and addiction treatment.
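    The analysis described above fits repeated-measures logistic models and reports exponentiated coefficients as adjusted odds ratios. The sketch below shows how such a GEE logistic model can be fit with statsmodels on synthetic long-format data; the variable names and data are placeholders, not the study cohort or the authors' exact model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per follow-up visit,
# with a binary outcome (high drug-scene exposure) and example covariates.
rng = np.random.default_rng(0)
n_people, n_visits = 200, 4
df = pd.DataFrame({
    "participant_id": np.repeat(np.arange(n_people), n_visits),
    "high_exposure": rng.integers(0, 2, n_people * n_visits),
    "unstable_housing": rng.integers(0, 2, n_people * n_visits),
    "employed": rng.integers(0, 2, n_people * n_visits),
})

# A GEE logistic model with an exchangeable working correlation accounts for
# repeated measures on the same participant; exponentiated coefficients give
# adjusted odds ratios and 95% confidence intervals.
model = smf.gee(
    "high_exposure ~ unstable_housing + employed",
    groups="participant_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(np.exp(result.params))      # adjusted odds ratios
print(np.exp(result.conf_int()))  # 95% confidence intervals
```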

  10. Signature modelling and radiometric rendering equations in infrared scene simulation systems

    NASA Astrophysics Data System (ADS)

    Willers, Cornelius J.; Willers, Maria S.; Lapierre, Fabian

    2011-11-01

    The development and optimisation of modern infrared systems necessitates the use of simulation systems to create radiometrically realistic representations (e.g. images) of infrared scenes. Such simulation systems are used in signature prediction, the development of surveillance and missile sensors, signal/image processing algorithm development and aircraft self-protection countermeasure system development and evaluation. Even the most cursory investigation reveals a multitude of factors affecting the infrared signatures of real-world objects. Factors such as spectral emissivity, spatial/volumetric radiance distribution, specular reflection, reflected direct sunlight, reflected ambient light, atmospheric degradation and more, all affect the presentation of an object's instantaneous signature. The signature is furthermore dynamically varying as a result of internal and external influences on the object, resulting from the heat balance comprising insolation, internal heat sources, aerodynamic heating (airborne objects), conduction, convection and radiation. In order to accurately render the object's signature in a computer simulation, the rendering equations must therefore account for all the elements of the signature. In this overview paper, the signature models, rendering equations and application frameworks of three infrared simulation systems are reviewed and compared. The paper first considers the problem of infrared scene simulation in a framework for simulation validation. This approach provides concise definitions and a convenient context for considering signature models and subsequent computer implementation. The primary radiometric requirements for an infrared scene simulator are presented next. The signature models and rendering equations implemented in OSMOSIS (Belgian Royal Military Academy), DIRSIG (Rochester Institute of Technology) and OSSIM (CSIR & Denel Dynamics) are reviewed. In spite of these three simulation systems' different application focus areas, their underlying physics-based approach is similar. The commonalities and differences between the different systems are investigated, in the context of their somewhat different application areas. The application of an infrared scene simulation system towards the development of imaging missiles and missile countermeasures is briefly described. Flowing from the review of the available models and equations, recommendations are made to further enhance and improve the signature models and rendering equations in infrared scene simulators.

  11. Contrast performance modeling of broadband reflective imaging systems with hypothetical tunable filter fore-optics

    NASA Astrophysics Data System (ADS)

    Hodgkin, Van A.

    2015-05-01

    Most mass-produced, commercially available and fielded military reflective imaging systems operate across broad swaths of the visible, near infrared (NIR), and shortwave infrared (SWIR) wavebands without any spectral selectivity within those wavebands. In applications that employ these systems, it is not uncommon to be imaging a scene in which the image contrasts between the objects of interest, i.e., the targets, and the objects of little or no interest, i.e., the backgrounds, are sufficiently low to make target discrimination difficult or uncertain. This can occur even when the spectral distributions of the target and background reflectivity across the given waveband differ significantly from each other, because the fundamental components of broadband image contrast are the spectral integrals of the target and background signatures. Spectral integration by the detectors tends to smooth out any differences. Hyperspectral imaging is one approach to preserving, and thus highlighting, spectral differences across the scene, even when the waveband-integrated signatures would be about the same, but it is an expensive, complex, non-compact, and slow solution. This paper documents a study of how the capability to selectively customize the spectral width and center wavelength with a hypothetical tunable fore-optic filter would allow a broadband reflective imaging sensor to optimize image contrast as a function of scene content and ambient illumination.
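    The underlying quantity being optimized is the apparent contrast after the target and background spectral signals are integrated over the filter passband. A minimal sketch of that calculation, assuming a rectangular passband and a brute-force search over candidate filter settings, is given below; the function names, contrast definition, and search grid are illustrative, not the model documented in the paper.

```python
import numpy as np

def band_contrast(wl_nm, target, background, center_nm, width_nm):
    """Michelson-style contrast after integrating target and background
    spectral signals over a rectangular passband (center, width)."""
    in_band = np.abs(wl_nm - center_nm) <= width_nm / 2.0
    t = np.trapz(target[in_band], wl_nm[in_band])
    b = np.trapz(background[in_band], wl_nm[in_band])
    return 0.0 if (t + b) == 0 else (t - b) / (t + b)

def best_passband(wl_nm, target, background, centers_nm, widths_nm):
    # Brute-force search for the tunable-filter setting that maximizes the
    # absolute band-integrated contrast over the candidate grid.
    settings = [(c, w) for c in centers_nm for w in widths_nm]
    scores = [abs(band_contrast(wl_nm, target, background, c, w))
              for c, w in settings]
    return settings[int(np.argmax(scores))]
```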

  12. Selection of optimal spectral sensitivity functions for color filter arrays.

    PubMed

    Parmar, Manu; Reeves, Stanley J

    2010-12-01

    A color image meant for human consumption can be appropriately displayed only if at least three distinct color channels are present. Typical digital cameras acquire three-color images with only one sensor. A color filter array (CFA) is placed on the sensor such that only one color is sampled at a particular spatial location. This sparsely sampled signal is then reconstructed to form a color image with information about all three colors at each location. In this paper, we show that the wavelength sensitivity functions of the CFA color filters affect both the color reproduction ability and the spatial reconstruction quality of recovered images. We present a method to select perceptually optimal color filter sensitivity functions based upon a unified spatial-chromatic sampling framework. A cost function independent of particular scenes is defined that expresses the error between a scene viewed by the human visual system and the reconstructed image that represents the scene. A constrained minimization of the cost function is used to obtain optimal values of color-filter sensitivity functions for several periodic CFAs. The sensitivity functions are shown to perform better than typical RGB and CMY color filters in terms of both the s-CIELAB ∆E error metric and a qualitative assessment.

  13. Developmental Change in the Acuity of Approximate Number and Area Representations

    ERIC Educational Resources Information Center

    Odic, Darko; Libertus, Melissa E.; Feigenson, Lisa; Halberda, Justin

    2013-01-01

    From very early in life, humans can approximate the number and surface area of objects in a scene. The ability to discriminate between 2 approximate quantities, whether number or area, critically depends on the ratio between the quantities, with the most difficult ratio that a participant can reliably discriminate known as the Weber fraction.…

  14. Video System for Viewing From a Remote or Windowless Cockpit

    NASA Technical Reports Server (NTRS)

    Banerjee, Amamath

    2009-01-01

    A system of electronic hardware and software synthesizes, in nearly real time, an image of a portion of a scene surveyed by as many as eight video cameras aimed, in different directions, at portions of the scene. This is a prototype of systems that would enable a pilot to view the scene outside a remote or windowless cockpit. The outputs of the cameras are digitized. Direct memory addressing is used to store the data of a few captured images in sequence, and the sequence is repeated in cycles. Cylindrical warping is used in merging adjacent images at their borders to construct a mosaic image of the scene. The mosaic-image data are written to a memory block from which they can be rendered on a head-mounted display (HMD) device. A subsystem in the HMD device tracks the direction of gaze of the wearer, providing data that are used to select, for display, the portion of the mosaic image corresponding to the direction of gaze. The basic functionality of the system has been demonstrated by mounting the cameras on the roof of a van and steering the van by use of the images presented on the HMD device.
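    Cylindrical warping maps each camera image onto a common cylinder (radius equal to the focal length in pixels) so that adjacent views can be blended at their borders. The sketch below computes the backward-mapping coordinates for such a warp; the function name and the choice of interpolator are assumptions rather than details of the prototype system.

```python
import numpy as np

def cylindrical_warp_coords(width, height, focal_px):
    """Backward-mapping coordinates for a cylindrical warp: for each pixel of
    the warped image, return the (x, y) location to sample in the source image."""
    xc, yc = width / 2.0, height / 2.0
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    theta = (xs - xc) / focal_px            # cylinder angle for each output column
    h = (ys - yc) / focal_px                # normalized height on the cylinder
    x_src = focal_px * np.tan(theta) + xc   # project back onto the image plane
    y_src = focal_px * h / np.cos(theta) + yc
    return x_src, y_src

# The returned coordinates can be fed to an interpolator such as
# scipy.ndimage.map_coordinates (or cv2.remap) to produce the warped image.
```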

  15. Electrocortical amplification for emotionally arousing natural scenes: the contribution of luminance and chromatic visual channels.

    PubMed

    Miskovic, Vladimir; Martinovic, Jasna; Wieser, Matthias J; Petro, Nathan M; Bradley, Margaret M; Keil, Andreas

    2015-03-01

    Emotionally arousing scenes readily capture visual attention, prompting amplified neural activity in sensory regions of the brain. The physical stimulus features and related information channels in the human visual system that contribute to this modulation, however, are not known. Here, we manipulated low-level physical parameters of complex scenes varying in hedonic valence and emotional arousal in order to target the relative contributions of luminance based versus chromatic visual channels to emotional perception. Stimulus-evoked brain electrical activity was measured during picture viewing and used to quantify neural responses sensitive to lower-tier visual cortical involvement (steady-state visual evoked potentials) as well as the late positive potential, reflecting a more distributed cortical event. Results showed that the enhancement for emotional content was stimulus-selective when examining the steady-state segments of the evoked visual potentials. Response amplification was present only for low spatial frequency, grayscale stimuli, and not for high spatial frequency, red/green stimuli. In contrast, the late positive potential was modulated by emotion regardless of the scene's physical properties. Our findings are discussed in relation to neurophysiologically plausible constraints operating at distinct stages of the cortical processing stream. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Selective entrainment of brain oscillations drives auditory perceptual organization.

    PubMed

    Costa-Faidella, Jordi; Sussman, Elyse S; Escera, Carles

    2017-10-01

    Perceptual sound organization supports our ability to make sense of the complex acoustic environment, to understand speech and to enjoy music. However, the neuronal mechanisms underlying the subjective experience of perceiving univocal auditory patterns that can be listened to, despite hearing all sounds in a scene, are poorly understood. We hereby investigated the manner in which competing sound organizations are simultaneously represented by specific brain activity patterns and the way attention and task demands prime the internal model generating the current percept. Using a selective attention task on ambiguous auditory stimulation coupled with EEG recordings, we found that the phase of low-frequency oscillatory activity dynamically tracks multiple sound organizations concurrently. However, whereas the representation of ignored sound patterns is circumscribed to auditory regions, large-scale oscillatory entrainment in auditory, sensory-motor and executive-control network areas reflects the active perceptual organization, thereby giving rise to the subjective experience of a unitary percept. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Use of multi-sensor active fire detections to map fires in the United States: the future of monitoring trends in burn severity

    USGS Publications Warehouse

    Picotte, Joshua J.; Coan, Michael; Howard, Stephen M.

    2014-01-01

    The effort to utilize satellite-based MODIS, AVHRR, and GOES fire detections from the Hazard Monitoring System (HMS) to identify undocumented fires in Florida and improve the Monitoring Trends in Burn Severity (MTBS) mapping process has yielded promising results. This method was augmented using regression tree models to identify burned/not-burned (BnB) pixels in every Landsat scene (1984–2012) in Worldwide Referencing System 2 Path/Rows 16/40, 17/39, and 18/39. The burned area delineations were combined with the HMS detections to create burned area polygons attributed with their date of fire detection. Within our study area, we processed 88,000 HMS points (2003–2012) and 1,800 Landsat scenes to identify approximately 300,000 burned area polygons. Six percent of these burned area polygons were larger than the 500-acre MTBS minimum size threshold. From this study, we conclude that the process can significantly improve understanding of fire occurrence and improve the efficiency and timeliness of assessing its impacts upon the landscape.

  18. Monitoring the extent and occurrence of fire in the different veld types of South Africa with particular reference to its ecological role

    NASA Technical Reports Server (NTRS)

    Edwards, D. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. Imagery showed the highest amount of burned area to be in the western, southern, and eastern Transvaal and in one scene of the Transkei coast region. The percentage of burned area per image in all instances exceeded 1.4% reaching a maximum of 8.24% equivalent to 121,758 ha out of 1,476,540 ha of one image in the eastern Transvaal lowveld. There was a consistent increase in the amount of burnt area on images from July through to the end of October. From October onwards, there was a decrease in burned area so that during December there was none or very little burning evident. Four scenes comprising nine images showed burning according to the twelve veld types. Considerable variation was evident in the burning between different veld types: between 10 and 19% of the mixed, sourish-mixed, and sour bushveld types was burnt, but in other veld types the percentage of burnt area was less than 1%.

  19. Coordination and establishment of centralized facilities and services for the University of Alaska ERTS survey of the Alaskan environment

    NASA Technical Reports Server (NTRS)

    Belon, A. E. (Principal Investigator); Miller, J. M.

    1973-01-01

    The author has identified the following significant results. Scene 1072-21173 of the Anaktuvuk Pass region of the Brooks Range, Alaska, was studied from the point of view of a resource survey for purposes of land use planning as part of the effort to develop ERTS data processing and interpretation techniques. Other data sources and surface observations were utilized to produce a resource survey of a remote and undeveloped region of Alaska. Three vegetative types are apparent: moist tundra, low brush, and high brush. Watersheds are easily defined on the multispectral imagery. Features related indirectly to economic minerals are discernible from ERTS-1 imagery supported by ground truth data. These include mountains, outwash plains and alluvial deposits, drainage patterns, lineaments and probable bedding planes. This region falls within present land class categories which are not inconsistent with the imperatives of the resources. These land class categories include native village withdrawals, regional deficiency area, national interest study area for possible inclusion in a national system, public interest areas, utility corridor, and state land selection.

  20. Better Pictures in a Snap

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Retinex Imaging Processing, winner of NASA's 1999 Space Act Award, is commercially available through TruView Imaging Company. With this technology, amateur photographers use their personal computers to improve the brightness, scene contrast, detail, and overall sharpness of images with increased ease. The process was originally developed for remote sensing of the Earth by researchers at Langley Research Center and Science and Technology Corporation (STC). It automatically enhances a digital image in terms of dynamic range compression, color independence from the spectral distribution of the scene illuminant, and color/lightness rendition. As a result, the enhanced digital image is much closer to the scene perceived by the human visual system, under all kinds and levels of lighting variations. TruView believes there are other applications for the software in medical imaging, forensics, security, reconnaissance, mining, assembly, and other industrial areas.

  1. Locally excitatory, globally inhibitory oscillator networks: theory and application to scene segmentation

    NASA Astrophysics Data System (ADS)

    Wang, DeLiang; Terman, David

    1995-01-01

    A novel class of locally excitatory, globally inhibitory oscillator networks (LEGION) is proposed and investigated analytically and by computer simulation. The model of each oscillator corresponds to a standard relaxation oscillator with two time scales. The network exhibits a mechanism of selective gating, whereby an oscillator jumping up to its active phase rapidly recruits the oscillators stimulated by the same pattern, while preventing other oscillators from jumping up. We show analytically that with the selective gating mechanism the network rapidly achieves both synchronization within blocks of oscillators that are stimulated by connected regions and desynchronization between different blocks. Computer simulations demonstrate LEGION's promising ability for segmenting multiple input patterns in real time. This model lays a physical foundation for the oscillatory correlation theory of feature binding, and may provide an effective computational framework for scene segmentation and figure/ground segregation.
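    The unit of the network is a relaxation oscillator with a fast excitatory variable and a slow recovery variable. The sketch below integrates a single oscillator of the Terman-Wang form with simple Euler steps so the two time scales can be seen; the parameter values and step size are illustrative rather than the paper's, and the coupling and global inhibitor that define LEGION are omitted.

```python
import numpy as np

def relaxation_oscillator(I=0.8, eps=0.02, gamma=6.0, beta=0.1,
                          dt=0.01, steps=20000):
    """Integrate one two-time-scale relaxation oscillator of the kind used
    as the LEGION unit.  x is the fast excitatory variable, y the slow
    recovery variable; a positive input I makes the unit oscillate."""
    x, y = -1.0, 1.0
    xs = np.empty(steps)
    for t in range(steps):
        dx = 3.0 * x - x ** 3 + 2.0 - y + I                  # fast cubic dynamics
        dy = eps * (gamma * (1.0 + np.tanh(x / beta)) - y)   # slow recovery
        x += dt * dx
        y += dt * dy
        xs[t] = x
    return xs   # x alternates between an active (high) and silent (low) phase
```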

  2. Interactive imagery and colour in paired-associate learning.

    PubMed

    Wilton, Richard N

    2006-01-01

    In four experiments participants were instructed to imagine scenes that described either an animal interacting with a coloured object or scenes in which the animal and coloured object were independent of each other. Participants were then given the name of the animal and required to select the name of the object and its colour. The results showed that the classic interactive imagery effect was greater for the selection of the name of the object than it was for colour. In Experiments 2, 3, and 4, additional measures were taken which suggest that the effect for colour is dependent upon the retrieval of other features of the object (e.g., its form). Thus it is argued that there is no primary interactive imagery effect for colour. The results were predicted by a version of the shared information hypothesis. The implications of the results for alternative theories are also considered.

  3. Anticipatory scene representation in preschool children's recall and recognition memory.

    PubMed

    Kreindel, Erica; Intraub, Helene

    2017-09-01

    Behavioral and neuroscience research on boundary extension (false memory beyond the edges of a view of a scene) has provided new insights into the constructive nature of scene representation, and motivates questions about development. Early research with children (as young as 6-7 years) was consistent with boundary extension, but relied on an analysis of spatial errors in drawings which are open to alternative explanations (e.g. drawing ability). Experiment 1 replicated and extended prior drawing results with 4-5-year-olds and adults. In Experiment 2, a new, forced-choice immediate recognition memory test was implemented with the same children. On each trial, a card (photograph of a simple scene) was immediately replaced by a test card (identical view and either a closer or more wide-angle view) and participants indicated which one matched the original view. Error patterns supported boundary extension; identical photographs were more frequently rejected when the closer view was the original view, than vice versa. This asymmetry was not attributable to a selection bias (guessing tasks; Experiments 3-5). In Experiment 4, working memory load was increased by presenting more expansive views of more complex scenes. Again, children exhibited boundary extension, but now adults did not, unless stimulus duration was reduced to 5 s (limiting time to implement strategies; Experiment 5). We propose that like adults, children interpret photographs as views of places in the world; they extrapolate the anticipated continuation of the scene beyond the view and misattribute it to having been seen. Developmental differences in source attribution decision processes provide an explanation for the age-related differences observed. © 2016 John Wiley & Sons Ltd.

  4. A study to explore the use of orbital remote sensing to determine native arid plant distribution. [Arizona

    NASA Technical Reports Server (NTRS)

    Mcginnies, W. G.; Haase, E. F. (Principal Investigator); Musick, H. B. (Compiler)

    1973-01-01

    The author has identified the following significant results. Ground truth spectral signature data for various types of scenes, including ground with and without annuals, and various shrubs, were collected. When these signature data are plotted with infrared (MSS band 6 or 7) reflectivity on one axis and red (MSS band 5) reflectivity on the other axis, clusters of data from the various types of scenes are distinct. This method of expressing spectral signature data appears to be more useful for distinguishing types of scenes than a simple infrared-to-red reflectivity ratio. Large areas of varnished desert pavement are visible and mappable on ERTS-1 and high altitude aircraft imagery. A large scale vegetation pattern was found to be correlated with the presence of the desert pavement. The large scale correlation was used in mapping the vegetation of the area. It was found that a distinctive soil type was associated with the presence of the varnished desert pavement. The high salinity and exchangeable sodium percentage of this soil type provide a basis for the explanation of both the large scale and small scale vegetation patterns.

  5. High-resolution land cover classification using low resolution global data

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark J.

    2013-05-01

    A fusion approach is described that combines texture features from high-resolution panchromatic imagery with land cover statistics derived from co-registered low-resolution global databases to obtain high-resolution land cover maps. The method does not require training data or any human intervention. We use an MxN Gabor filter bank consisting of M=16 oriented bandpass filters (0-180°) at N resolutions (3-24 meters/pixel). The size range of these spatial filters is consistent with the typical scale of manmade objects and patterns of cultural activity in imagery. Clustering reduces the complexity of the data by combining pixels that have similar texture into clusters (regions). Texture classification assigns a vector of class likelihoods to each cluster based on its textural properties. Classification is unsupervised and accomplished using a bank of texture anomaly detectors. Class likelihoods are modulated by land cover statistics derived from lower resolution global data over the scene. Preliminary results from a number of Quickbird scenes show our approach is able to classify general land cover features such as roads, built up area, forests, open areas, and bodies of water over a wide range of scenes.
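    A per-pixel texture feature stack from an oriented, multi-frequency Gabor filter bank can be built as sketched below; the orientation count matches the M=16 filters described above, while the frequency values, magnitude pooling, and function name are illustrative assumptions rather than the paper's exact bank.

```python
import numpy as np
from skimage.filters import gabor

def gabor_texture_features(image, n_orientations=16,
                           frequencies=(1/3, 1/6, 1/12, 1/24)):
    """Per-pixel texture features from an oriented Gabor filter bank.
    Orientations span 0-180 degrees; frequencies (cycles/pixel) stand in for
    the multi-resolution bank described in the abstract."""
    features = []
    for freq in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(image, frequency=freq, theta=theta)
            features.append(np.hypot(real, imag))   # filter response magnitude
    return np.stack(features, axis=-1)              # (rows, cols, M*N) feature cube

# The feature cube can then be clustered (e.g. with k-means) into texture
# regions before the unsupervised class-likelihood step described above.
```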

  6. Virtual reality and 3D animation in forensic visualization.

    PubMed

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

    Computer-generated three-dimensional (3D) animation is an ideal media to accurately visualize crime or accident scenes to the viewers and in the courtrooms. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace the traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing, to investigate the state-of-the-art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and level-of-detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation. © 2010 American Academy of Forensic Sciences.

  7. Human memory manipulated: dissociating factors contributing to MTL activity, an fMRI study.

    PubMed

    Pustina, Dorian; Gizewski, Elke; Forsting, Michael; Daum, Irene; Suchan, Boris

    2012-04-01

    Memory processes are mainly studied with subjective rating procedures. We used a morphing procedure to objectively manipulate the similarity of target stimuli. While undergoing functional magnetic resonance imaging, nineteen subjects performed an encoding and recognition task on face and scene stimuli, varying the degree of manipulation of previously studied targets at 0%, 20%, 40% or 60%. Analyses were performed with parametric modulations for objective stimulus status (morphing level), subjective memory (confidence rating), and reaction times (RTs). Results showed that medial temporal lobe (MTL) activity can be best explained by a combination of subjective and objective factors. Memory success is associated with activity modulation in the hippocampus both for faces and for scenes. Memory failures correlated with lower hippocampal activity for scenes, but not for faces. Activity during retrieval changed in areas similar to those activated during encoding. There was a considerable impact of RTs on memory-related areas. Objective perceptual identity correlated with activity in the left MTL, while subjective memory experience correlated with activity in the right MTL for both types of material. Overall, the results indicate that MTL activity is heterogeneous, showing both linear and non-linear activity, depending on the factor analyzed. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Improved canopy reflectance modeling and scene inference through improved understanding of scene pattern

    NASA Technical Reports Server (NTRS)

    Franklin, Janet; Simonett, David

    1988-01-01

    The Li-Strahler reflectance model, driven by LANDSAT Thematic Mapper (TM) data, provided regional estimates of tree size and density within 20 percent of sampled values in two bioclimatic zones in West Africa. This model exploits tree geometry in an inversion technique to predict average tree size and density from reflectance data using a few simple parameters measured in the field (spatial pattern, shape, and size distribution of trees) and in the imagery (spectral signatures of scene components). Trees are treated as simply shaped objects, and multispectral reflectance of a pixel is assumed to be related only to the proportions of tree crown, shadow, and understory in the pixel. These, in turn, are a direct function of the number and size of trees, the solar illumination angle, and the spectral signatures of crown, shadow and understory. Given the variance in reflectance from pixel to pixel within a homogeneous area of woodland, caused by the variation in the number and size of trees, the model can be inverted to give estimates of average tree size and density. Because the inversion is sensitive to correct determination of component signatures, predictions are not accurate for small areas.
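    The linear-mixing step of such a model treats each pixel's multispectral reflectance as a weighted sum of crown, shadow, and understory signatures, with the weights equal to the component proportions. The sketch below inverts that mixture by least squares for a single pixel; the signature values are made up for illustration, and the geometric-optical inversion from proportions to tree size and density is not shown.

```python
import numpy as np

# Illustrative spectral signatures (rows: bands, columns: scene components
# crown / shadow / understory); values are made-up reflectances.
signatures = np.array([
    [0.05, 0.02, 0.12],   # red band
    [0.35, 0.05, 0.25],   # near-infrared band
    [0.20, 0.03, 0.30],   # shortwave-infrared band
])

def component_fractions(pixel_reflectance):
    """Least-squares estimate of crown/shadow/understory proportions for one
    pixel, assuming its reflectance is a linear mixture of the signatures."""
    fractions, *_ = np.linalg.lstsq(signatures, pixel_reflectance, rcond=None)
    fractions = np.clip(fractions, 0.0, None)
    return fractions / fractions.sum()     # renormalize to sum to one

print(component_fractions(np.array([0.07, 0.22, 0.18])))
```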

  9. Who Killed Myra Mains?

    ERIC Educational Resources Information Center

    Sandage, Barbara J.

    2002-01-01

    Reports on the development and implementation of an integrated forensic science unit. Students examine and test evidence from a mock crime scene. Addresses many areas of the National Science Education Standards. (DDR)

  10. Atmospheric corrections for satellite water quality studies

    NASA Technical Reports Server (NTRS)

    Piech, K. R.; Schott, J. R.

    1975-01-01

    Variations in the relative value of the blue and green reflectances of a lake can be correlated with important optical and biological parameters measured from surface vessels. Measurement of the relative reflectance values from color film imagery requires removal of atmospheric effects. Data processing is particularly crucial because: (1) lakes are the darkest objects in a scene; (2) minor reflectance changes can correspond to important physical changes; (3) lake systems extend over broad areas in which atmospheric conditions may fluctuate; (4) seasonal changes are of importance; and, (5) effects of weather are important, precluding flights under only ideal weather conditions. Data processing can be accomplished through microdensitometry of scene shadow areas. Measurements of reflectance ratios can be made to an accuracy of plus or minus 12%, sufficient to permit monitoring of important eutrophication indices.

  11. Sensor-Aware Recognition and Tracking for Wide-Area Augmented Reality on Mobile Phones

    PubMed Central

    Chen, Jing; Cao, Ruochen; Wang, Yongtian

    2015-01-01

    Wide-area registration in outdoor environments on mobile phones is a challenging task in mobile augmented reality fields. We present a sensor-aware large-scale outdoor augmented reality system for recognition and tracking on mobile phones. GPS and gravity information is used to improve the VLAD performance for recognition. A sensor-aware VLAD algorithm, which adapts to scenes at different scales, is utilized to recognize complex scenes. Because vision-based registration algorithms are fragile and tend to drift, data coming from inertial sensors and vision are fused together by an extended Kalman filter (EKF) to achieve considerable improvements in tracking stability and robustness. Experimental results show that our method greatly enhances the recognition rate and eliminates the tracking jitters. PMID:26690439

  12. Evaluation of ZY-3 for Dsm and Ortho Image Generation

    NASA Astrophysics Data System (ADS)

    d'Angelo, P.

    2013-04-01

    DSM generation using stereo satellites is an important topic for many applications. China launched the three-line ZY-3 stereo mapping satellite last year. This paper evaluates the ZY-3 performance for DSM and orthophoto generation on two scenes east of Munich. The direct georeferencing performance is tested using survey points, and the 3D RMSE is 4.5 m for the scene evaluated in this paper. After image orientation with GCPs and tie points, a DSM is generated using the Semi-Global Matching algorithm. For two 5 × 5 km2 test areas, a LIDAR reference DTM was available. After masking out forest areas, the overall RMSE between the ZY-3 DSM and the LIDAR reference is 2.0 m. Additionally, a qualitative comparison between ZY-3 and Cartosat-1 DSMs is performed.
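    The accuracy figure quoted above is a masked vertical RMSE between the photogrammetric DSM and the LIDAR reference, computed after excluding forest pixels. A minimal sketch of that comparison, assuming co-registered 2D arrays and a boolean forest mask, is shown below.

```python
import numpy as np

def masked_rmse(dsm, reference_dtm, forest_mask):
    """RMSE between a photogrammetric DSM and a LIDAR reference terrain model,
    computed only outside forest (where surface and terrain should agree).
    All inputs are co-registered 2D arrays; forest_mask is True over forest."""
    valid = ~forest_mask & np.isfinite(dsm) & np.isfinite(reference_dtm)
    diff = dsm[valid] - reference_dtm[valid]
    return float(np.sqrt(np.mean(diff ** 2)))
```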

  13. Sensor-Aware Recognition and Tracking for Wide-Area Augmented Reality on Mobile Phones.

    PubMed

    Chen, Jing; Cao, Ruochen; Wang, Yongtian

    2015-12-10

    Wide-area registration in outdoor environments on mobile phones is a challenging task in mobile augmented reality fields. We present a sensor-aware large-scale outdoor augmented reality system for recognition and tracking on mobile phones. GPS and gravity information is used to improve the VLAD performance for recognition. A sensor-aware VLAD algorithm, which adapts to scenes at different scales, is utilized to recognize complex scenes. Because vision-based registration algorithms are fragile and tend to drift, data coming from inertial sensors and vision are fused together by an extended Kalman filter (EKF) to achieve considerable improvements in tracking stability and robustness. Experimental results show that our method greatly enhances the recognition rate and eliminates the tracking jitters.

  14. Numerous Seasonal Lineae on Coprates Montes, Mars

    NASA Image and Video Library

    2016-07-07

    The white arrows indicate locations in this scene where numerous seasonal dark streaks have been identified in the Coprates Montes area of Mars' Valles Marineris by repeated observations from orbit. The streaks, called recurring slope lineae or RSL, extend downslope during a warm season, fade in the colder part of the year, and repeat the process the next Martian year. They are regarded as the strongest evidence for the possibility of liquid water on the surface of modern Mars. This oblique perspective for this view uses a three-dimensional terrain model derived from a stereo pair of observations by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. The scene covers an area approximately 1.6 miles (2.5 kilometers) wide. http://photojournal.jpl.nasa.gov/catalog/PIA20757

  15. Seasat views North America, the Caribbean, and Western Europe with imaging radar

    NASA Technical Reports Server (NTRS)

    Ford, J. P.; Blom, R. G.; Bryan, M. L.; Daily, M.; Dixon, T. H.; Elachi, C.; Xenos, E. C.

    1980-01-01

    Forty-one digitally correlated Seasat synthetic-aperture radar images of land areas in North America, the Caribbean, and Western Europe are presented to demonstrate this microwave orbital imagery. The characteristics of the radar images, the types of information that can be extracted from them, and certain of their inherent distortions are briefly described. Each atlas scene covers an area of 90 X 90 kilometers, with the exception of the scene covering the Nation's Capital. The scenes are grouped according to salient features of geology, hydrology and water resources, urban landcover, or agriculture. Each radar image is accompanied by a corresponding image in the optical or near-infrared range, or by a simple sketch map to illustrate features of interest. Characteristics of the Seasat radar imaging system are outlined.

  16. A study of human recognition rates for foveola-sized image patches selected from initial and final fixations on calibrated natural images

    NASA Astrophysics Data System (ADS)

    van der Linde, Ian; Rajashekar, Umesh; Cormack, Lawrence K.; Bovik, Alan C.

    2005-03-01

    Recent years have seen a resurgent interest in eye movements during natural scene viewing. Aspects of eye movements that are driven by low-level image properties are of particular interest due to their applicability to biologically motivated artificial vision and surveillance systems. In this paper, we report an experiment in which we recorded observers' eye movements while they viewed calibrated greyscale images of natural scenes. Immediately after viewing each image, observers were shown a test patch and asked to indicate if they thought it was part of the image they had just seen. The test patch was either randomly selected from a different image from the same database or, unbeknownst to the observer, selected from either the first or last location fixated on the image just viewed. We find that several low-level image properties differed significantly relative to the observers' ability to successfully designate each patch. We also find that the differences between patch statistics for first and last fixations are small compared to the differences between hit and miss responses. The goal of the paper was to, in a non-cognitive natural setting, measure the image properties that facilitate visual memory, additionally observing the role that temporal location (first or last fixation) of the test patch played. We propose that a memorability map of a complex natural scene may be constructed to represent the low-level memorability of local regions in a similar fashion to the familiar saliency map, which records bottom-up fixation attractors.

  17. Poly-Pattern Compressive Segmentation of ASTER Data for GIS

    NASA Technical Reports Server (NTRS)

    Myers, Wayne; Warner, Eric; Tutwiler, Richard

    2007-01-01

    Pattern-based segmentation of multi-band image data, such as ASTER, produces one-byte and two-byte approximate compressions. This is a dual segmentation consisting of nested coarser and finer level pattern mappings called poly-patterns. The coarser A-level version is structured for direct incorporation into geographic information systems in the manner of a raster map. GIS renderings of this A-level approximation are called pattern pictures, which have the appearance of color-enhanced images. The two-byte version consisting of thousands of B-level segments provides a capability for approximate restoration of the multi-band data in selected areas or entire scenes. Poly-patterns are especially useful for purposes of change detection and landscape analysis at multiple scales. The primary author has implemented the segmentation methodology in a public domain software suite.

  18. Perceptual learning during action video game playing.

    PubMed

    Green, C Shawn; Li, Renjie; Bavelier, Daphne

    2010-04-01

    Action video games have been shown to enhance behavioral performance on a wide variety of perceptual tasks, from those that require effective allocation of attentional resources across the visual scene, to those that demand the successful identification of fleetingly presented stimuli. Importantly, these effects have not only been shown in expert action video game players, but a causative link has been established between action video game play and enhanced processing through training studies. Although an account based solely on attention fails to capture the variety of enhancements observed after action game playing, a number of models of perceptual learning are consistent with the observed results, with behavioral modeling favoring the hypothesis that avid video game players are better able to form templates for, or extract the relevant statistics of, the task at hand. This may suggest that the neural site of learning is in areas where information is integrated and actions are selected; yet changes in low-level sensory areas cannot be ruled out. Copyright © 2009 Cognitive Science Society, Inc.

  19. Bilateral Theta-Burst TMS to Influence Global Gestalt Perception

    PubMed Central

    Ritzinger, Bernd; Huberle, Elisabeth; Karnath, Hans-Otto

    2012-01-01

    While early and higher visual areas along the ventral visual pathway in the inferotemporal cortex are critical for the recognition of individual objects, the neural representation of human perception of complex global visual scenes remains under debate. Stroke patients with a selective deficit in the perception of a complex global Gestalt with intact recognition of individual objects – a deficit termed simultanagnosia – greatly helped to study this question. Interestingly, simultanagnosia typically results from bilateral lesions of the temporo-parietal junction (TPJ). The present study aimed to verify the relevance of this area for human global Gestalt perception. We applied continuous theta-burst TMS either unilaterally (left or right) or bilaterally and simultaneously over the TPJ. Healthy subjects were presented with hierarchically organized visual stimuli that allowed parametric degradation of the object at the global level. Identification of the global Gestalt was significantly modulated only for the bilateral TPJ stimulation condition. Our results strengthen the view that global Gestalt perception in the human brain involves TPJ and is co-dependent on both hemispheres. PMID:23110106

  20. Bilateral theta-burst TMS to influence global gestalt perception.

    PubMed

    Ritzinger, Bernd; Huberle, Elisabeth; Karnath, Hans-Otto

    2012-01-01

    While early and higher visual areas along the ventral visual pathway in the inferotemporal cortex are critical for the recognition of individual objects, the neural representation of human perception of complex global visual scenes remains under debate. Stroke patients with a selective deficit in the perception of a complex global Gestalt with intact recognition of individual objects - a deficit termed simultanagnosia - greatly helped to study this question. Interestingly, simultanagnosia typically results from bilateral lesions of the temporo-parietal junction (TPJ). The present study aimed to verify the relevance of this area for human global Gestalt perception. We applied continuous theta-burst TMS either unilaterally (left or right) or bilaterally and simultaneously over the TPJ. Healthy subjects were presented with hierarchically organized visual stimuli that allowed parametric degradation of the object at the global level. Identification of the global Gestalt was significantly modulated only for the bilateral TPJ stimulation condition. Our results strengthen the view that global Gestalt perception in the human brain involves TPJ and is co-dependent on both hemispheres.

  1. Earth mapping - aerial or satellite imagery comparative analysis

    NASA Astrophysics Data System (ADS)

    Fotev, Svetlin; Jordanov, Dimitar; Lukarski, Hristo

    Nowadays, revising existing map products and creating new maps requires choosing a land-cover image source. The trade-off between the effectiveness and cost of aerial mapping systems and the efficiency and cost of very-high-resolution satellite imagery is a topical issue [1, 2, 3, 4]. The price of any remotely sensed image depends on the product (panchromatic or multispectral), resolution, processing level, scale, urgency of the task, and on whether the needed image is available in the archive or has to be requested. The purpose of the present work is twofold: to make a comparative analysis of the two approaches to mapping the Earth with respect to two parameters, quality and cost, and to suggest an approach for selecting the map information source - airplane-based or spacecraft-based imaging systems with very high spatial resolution. Two cases are considered: an area approximately equal to one satellite scene, and an area approximately equal to the territory of Bulgaria.

  2. Mass-casualty Response to the Kiss Nightclub in Santa Maria, Brazil.

    PubMed

    Dal Ponte, Silvana T; Dornelles, Carlos F D; Arquilla, Bonnie; Bloem, Christina; Roblin, Patricia

    2015-02-01

    On January 27, 2013, a fire at the Kiss Nightclub in Santa Maria, Brazil led to a mass-casualty incident affecting hundreds of college students. A total of 234 people died on scene, 145 were hospitalized, and another 623 people received treatment throughout the first week following the incident.1 Eight of the hospitalized people later died.1 The Military Police were the first on scene, followed by the state fire department, and then the municipal Mobile Prehospital Assistance (SAMU) ambulances. The number of victims was not communicated clearly to the various units arriving on scene, leading to insufficient rescue personnel and equipment. Incident command was established on scene, but the rescuers and police were still unable to control the chaos of multiple bystanders attempting to assist in the rescue efforts. The Municipal Sports Center (CDM) was designated as the location for dead bodies, where victim identification and communication with families occurred, as well as forensic evaluation, which determined the primary cause of death to be asphyxia. A command center was established at the Hospital de Caridade Astrogildo de Azevedo (HCAA) in Santa Maria to direct where patients should be admitted, recruit staff, and procure additional supplies, as needed. The victims suffered primarily from smoke inhalation and many required endotracheal intubation and mechanical ventilation. There was a shortage of ventilators; therefore, some had to be borrowed from local hospitals, neighboring cities, and distant areas in the state. A total of 54 patients1 were transferred to hospitals in the capital city of Porto Alegre (Brazil). The main issues with the response to the fire were scene control and communication. Areas for improvement were identified, namely the establishment of a disaster-response plan, as well as regularly scheduled training in disaster preparedness/response. These activities are the first steps to improving mass-casualty responses.

  3. Interactive MPEG-4 low-bit-rate speech/audio transmission over the Internet

    NASA Astrophysics Data System (ADS)

    Liu, Fang; Kim, JongWon; Kuo, C.-C. Jay

    1999-11-01

    The recently developed MPEG-4 technology enables the coding and transmission of natural and synthetic audio-visual data in the form of objects. In an effort to extend the object-based functionality of MPEG-4 to real-time Internet applications, architectural prototypes of the multiplex layer and transport layer tailored for transmission of MPEG-4 data over IP are under debate within the Internet Engineering Task Force (IETF) and the MPEG-4 Systems Ad Hoc group. In this paper, we present an architecture for an interactive MPEG-4 speech/audio transmission system over the Internet. It utilizes a framework of Real Time Streaming Protocol (RTSP) over Real-time Transport Protocol (RTP) to provide controlled, on-demand delivery of real-time speech/audio data. Based on a client-server model, a couple of low bit-rate bit streams (real-time speech/audio, pre-encoded speech/audio) are multiplexed and transmitted via a single RTP channel to the receiver. The MPEG-4 Scene Description (SD) and Object Descriptor (OD) bit streams are securely sent through the RTSP control channel. Upon reception, an initial MPEG-4 audio-visual scene is constructed after de-multiplexing, decoding of bit streams, and scene composition. A receiver is allowed to manipulate the initial audio-visual scene presentation locally, or interactively arrange scene changes by sending requests to the server. A server may also choose to update the client with new streams and a list of contents for user selection.
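
    As a minimal sketch of the delivery idea described above (a single RTP channel carrying low-bit-rate frames), the snippet below packs a payload into an RFC 3550 RTP header and sends it over UDP; the payload type, SSRC, frame bytes, and destination port are illustrative assumptions, and the RTSP control channel is omitted.

    ```python
    # A minimal sketch, not the paper's system: build one RTP packet (RFC 3550
    # fixed header) around a stand-in encoded audio frame and send it over UDP.
    import socket
    import struct

    def build_rtp_packet(payload: bytes, seq: int, timestamp: int,
                         payload_type: int = 96, ssrc: int = 0x1234ABCD) -> bytes:
        version, padding, extension, csrc_count, marker = 2, 0, 0, 0, 0
        byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
        byte1 = (marker << 7) | payload_type
        header = struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                             timestamp & 0xFFFFFFFF, ssrc)
        return header + payload

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    frame = b"\x00" * 40                      # stand-in for one encoded audio frame
    packet = build_rtp_packet(frame, seq=1, timestamp=160)
    sock.sendto(packet, ("127.0.0.1", 5004))  # example RTP destination port
    ```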

  4. A Voxel-Based Approach for Imaging Voids in Three-Dimensional Point Clouds

    NASA Astrophysics Data System (ADS)

    Salvaggio, Katie N.

    Geographically accurate scene models have enormous potential beyond that of just simple visualizations in regard to automated scene generation. In recent years, thanks to ever increasing computational efficiencies, there has been significant growth in both the computer vision and photogrammetry communities pertaining to automatic scene reconstruction from multiple-view imagery. The result of these algorithms is a three-dimensional (3D) point cloud which can be used to derive a final model using surface reconstruction techniques. However, the fidelity of these point clouds has not been well studied, and voids often exist within the point cloud. Voids exist in texturally difficult areas, as well as areas where multiple views were not obtained during collection, constant occlusion existed due to collection angles or overlapping scene geometry, or in regions that failed to triangulate accurately. It may be possible to fill in small voids in the scene using surface reconstruction or hole-filling techniques, but this is not the case with larger more complex voids, and attempting to reconstruct them using only the knowledge of the incomplete point cloud is neither accurate nor aesthetically pleasing. A method is presented for identifying voids in point clouds by using a voxel-based approach to partition the 3D space. By using collection geometry and information derived from the point cloud, it is possible to detect unsampled voxels such that voids can be identified. This analysis takes into account the location of the camera and the 3D points themselves to capitalize on the idea of free space, such that voxels that lie on the ray between the camera and point are devoid of obstruction, as a clear line of sight is a necessary requirement for reconstruction. Using this approach, voxels are classified into three categories: occupied (contains points from the point cloud), free (rays from the camera to the point passed through the voxel), and unsampled (does not contain points and no rays passed through the area). Voids in the voxel space are manifested as unsampled voxels. A similar line-of-sight analysis can then be used to pinpoint locations at aircraft altitude at which the voids in the point clouds could theoretically be imaged. This work is based on the assumption that inclusion of more images of the void areas in the 3D reconstruction process will reduce the number of voids in the point cloud that were a result of lack of coverage. Voids resulting from texturally difficult areas will not benefit from more imagery in the reconstruction process, and thus are identified and removed prior to the determination of future potential imaging locations.
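
    The following sketch illustrates the occupied/free/unsampled voxel labelling idea summarised above, under simplifying assumptions (a single camera position, an axis-aligned grid, and a stepped rather than exact ray traversal); it is not the author's implementation.

    ```python
    # A minimal sketch of voxel labelling for void detection: voxels containing
    # points are occupied, voxels crossed by camera-to-point rays are free, and
    # everything else remains unsampled (a candidate void).
    import numpy as np

    def label_voxels(points, camera, origin, voxel_size, grid_shape):
        OCC, FREE, UNSAMPLED = 2, 1, 0
        labels = np.zeros(grid_shape, dtype=np.uint8)   # everything starts unsampled

        def to_index(p):
            idx = np.floor((p - origin) / voxel_size).astype(int)
            return tuple(idx) if np.all((idx >= 0) & (idx < grid_shape)) else None

        for p in points:
            # Mark free space along the camera-to-point line of sight.
            n_steps = int(np.linalg.norm(p - camera) / (0.5 * voxel_size)) + 1
            for t in np.linspace(0.0, 1.0, n_steps, endpoint=False):
                idx = to_index(camera + t * (p - camera))
                if idx is not None and labels[idx] != OCC:
                    labels[idx] = FREE
            idx = to_index(p)                 # the point itself occupies its voxel
            if idx is not None:
                labels[idx] = OCC
        return labels                         # voids show up as UNSAMPLED voxels

    pts = np.random.rand(1000, 3) * 10.0      # stand-in reconstructed point cloud
    grid = label_voxels(pts, camera=np.array([5.0, 5.0, 30.0]),
                        origin=np.zeros(3), voxel_size=1.0, grid_shape=(10, 10, 10))
    ```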

  5. Improvements in Patient Acceptance by Hospitals Following the Introduction of a Smartphone App for the Emergency Medical Service System: A Population-Based Before-and-After Observational Study in Osaka City, Japan.

    PubMed

    Katayama, Yusuke; Kitamura, Tetsuhisa; Kiyohara, Kosuke; Iwami, Taku; Kawamura, Takashi; Izawa, Junichi; Gibo, Koichiro; Komukai, Sho; Hayashida, Sumito; Kiguchi, Takeyuki; Ohnishi, Mitsuo; Ogura, Hiroshi; Shimazu, Takeshi

    2017-09-11

    Recently, the number of ambulance dispatches has been increasing in Japan, and it is therefore difficult for hospitals to accept emergency patients smoothly and appropriately because of the limited hospital capacity. To facilitate the process of requesting patient transport and hospital acceptance, an emergency information system using information technology (IT) has been built and introduced in various communities. However, its effectiveness has not been thoroughly evaluated. We introduced a smartphone app system in 2013 that enables emergency medical service (EMS) personnel to share information among themselves regarding on-scene ambulances and the hospital situation. The aim of this study was to assess the effects of introducing this smartphone app on the EMS system in Osaka City, Japan. This retrospective study analyzed the population-based ambulance records of Osaka Municipal Fire Department. The study period was 6 years, from January 1, 2010 to December 31, 2015. We enrolled emergency patients for whom on-scene EMS personnel conducted hospital selection. The main endpoint was the difficulty experienced in gaining hospital acceptance at the scene. Difficulty was defined as EMS personnel making ≥5 phone calls to hospitals from the scene before a transport destination was determined. The smartphone app was introduced in January 2013, and we compared the patients treated from 2010 to 2012 (control group) with those treated from 2013 to 2015 (smartphone app group) using an interrupted time-series analysis to assess the effects of introducing this smartphone app. A total of 600,526 emergency patients for whom EMS personnel selected hospitals were eligible for our analysis. There were 300,131 emergency patients in the control group (50.00%, 300,131/600,526) from 2010 to 2012 and 300,395 emergency patients in the smartphone app group (50.00%, 300,395/600,526) from 2013 to 2015. The rate of difficulty in hospital acceptance was 14.19% (42,585/300,131) in the control group and 10.93% (32,819/300,395) in the smartphone app group. No change over time in the number of difficulties in hospital acceptance was found before the introduction of the smartphone app (regression coefficient: -2.43, 95% CI -5.49 to 0.64), but after its introduction, the number of difficulties in hospital acceptance gradually decreased by month (regression coefficient: -11.61, 95% CI -14.57 to -8.65). Sharing information between an ambulance and a hospital by using the smartphone app at the scene was associated with decreased difficulty in obtaining hospital acceptance. Our app and findings may be worth considering in other areas of the world where emergency medical information systems with IT are needed. ©Yusuke Katayama, Tetsuhisa Kitamura, Kosuke Kiyohara, Taku Iwami, Takashi Kawamura, Junichi Izawa, Koichiro Gibo, Sho Komukai, Sumito Hayashida, Takeyuki Kiguchi, Mitsuo Ohnishi, Hiroshi Ogura, Takeshi Shimazu. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 11.09.2017.
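
    A minimal sketch of the interrupted time-series (segmented regression) analysis reported above is shown below, assuming simulated monthly counts of difficult hospital acceptances and an intervention at month 36 (January 2013 when January 2010 is month 0); the printed level-change and trend-change coefficients are analogues of those cited in the abstract, not reproductions of them.

    ```python
    # A minimal sketch of an interrupted time-series regression on simulated data.
    import numpy as np
    import statsmodels.api as sm

    months = np.arange(72)                     # 2010-2015, one row per month
    post = (months >= 36).astype(float)        # app introduced in January 2013
    time_since = np.where(post == 1, months - 36, 0)
    counts = 1200 - 2.4 * months - 11.6 * time_since + np.random.normal(0, 30, 72)

    X = sm.add_constant(np.column_stack([months, post, time_since]))
    fit = sm.OLS(counts, X).fit()
    print(fit.params)      # [intercept, pre-trend, level change, post-trend change]
    print(fit.conf_int())  # 95% CIs, comparable in form to those reported above
    ```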

  6. Intercomparison of Satellite-Derived Snow-Cover Maps

    NASA Technical Reports Server (NTRS)

    Hall, Dorothy K.; Tait, Andrew B.; Foster, James L.; Chang, Alfred T. C.; Allen, Milan

    1999-01-01

    In anticipation of the launch of the Earth Observing System (EOS) Terra, and the PM-1 spacecraft in 1999 and 2000, respectively, efforts are ongoing to determine errors of satellite-derived snow-cover maps. EOS Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Microwave Scanning Radiometer-E (AMSR-E) snow-cover products will be produced. For this study we compare snow maps covering the same study area acquired from different sensors using different snow-mapping algorithms. Four locations are studied: 1) southern Saskatchewan; 2) a part of New England (New Hampshire, Vermont and Massachusetts) and eastern New York; 3) central Idaho and western Montana; and 4) parts of North and South Dakota. Snow maps were produced using a prototype MODIS snow-mapping algorithm used on Landsat Thematic Mapper (TM) scenes of each study area at 30-m and when the TM data were degraded to 1-km resolution. National Operational Hydrologic Remote Sensing Center (NOHRSC) 1-km resolution snow maps were also used, as were snow maps derived from 1/2 deg. x 1/2 deg. resolution Special Sensor Microwave Imager (SSM/I) data. A land-cover map derived from the International Geosphere-Biosphere Program (IGBP) land-cover map of North America was also registered to the scenes. The TM, NOHRSC and SSM/I snow maps, and land-cover maps were compared digitally. In most cases, TM-derived maps show less snow cover than the NOHRSC and SSM/I maps because areas of incomplete snow cover in forests (e.g., tree canopies, branches and trunks) are seen in the TM data, but not in the coarser-resolution maps. The snow maps generally agree with respect to the spatial variability of the snow cover. The 30-m resolution TM data provide the most accurate snow maps, and are thus used as the baseline for comparison with the other maps. Comparisons show that the percent change in amount of snow cover relative to the 30-m resolution TM maps is lowest using the TM 1-km resolution maps, ranging from 0 to 40%. The highest percent change (less than 100%) is found in the New England study area, probably due to the presence of patchy snow cover. A scene with patchy snow cover is more difficult to map accurately than is a scene with a well-defined snowline such as is found on the North and South Dakota scene where the percent change ranged from 0 to 40%. There are also some important differences in the amount of snow mapped using the two different SSM/I algorithms because they utilize different channels.

  7. Ratings for emotion film clips.

    PubMed

    Gabert-Quillen, Crystal A; Bartolini, Ellen E; Abravanel, Benjamin T; Sanislow, Charles A

    2015-09-01

    Film clips are widely utilized to elicit emotion in a variety of research studies. Normative ratings for scenes selected for these purposes support the idea that selected clips correspond to the intended target emotion, but studies reporting normative ratings are limited. Using an ethnically diverse sample of college undergraduates, selected clips were rated for intensity, discreteness, valence, and arousal. Variables hypothesized to affect the perception of stimuli (i.e., gender, race-ethnicity, and familiarity) were also examined. Our analyses generally indicated that males reacted strongly to positively valenced film clips, whereas females reacted more strongly to negatively valenced film clips. Caucasian participants tended to react more strongly to the film clips, and we found some variation by race-ethnicity across target emotions. Finally, familiarity with the films tended to produce higher ratings for positively valenced film clips, and lower ratings for negatively valenced film clips. These findings provide normative ratings for a useful set of film clips for the study of emotion, and they underscore factors to be considered in research that utilizes scenes from film for emotion elicitation.

  8. Cortical feedback signals generalise across different spatial frequencies of feedforward inputs.

    PubMed

    Revina, Yulia; Petro, Lucy S; Muckli, Lars

    2017-09-22

    Visual processing in cortex relies on feedback projections contextualising feedforward information flow. Primary visual cortex (V1) has small receptive fields and processes feedforward information at a fine-grained spatial scale, whereas higher visual areas have larger, spatially invariant receptive fields. Therefore, feedback could provide coarse information about the global scene structure or alternatively recover fine-grained structure by targeting small receptive fields in V1. We tested if feedback signals generalise across different spatial frequencies of feedforward inputs, or if they are tuned to the spatial scale of the visual scene. Using a partial occlusion paradigm, functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) we investigated whether feedback to V1 contains coarse or fine-grained information by manipulating the spatial frequency of the scene surround outside an occluded image portion. We show that feedback transmits both coarse and fine-grained information as it carries information about both low (LSF) and high spatial frequencies (HSF). Further, feedback signals containing LSF information are similar to feedback signals containing HSF information, even without a large overlap in spatial frequency bands of the HSF and LSF scenes. Lastly, we found that feedback carries similar information about the spatial frequency band across different scenes. We conclude that cortical feedback signals contain information which generalises across different spatial frequencies of feedforward inputs. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
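
    As a simplified stand-in for the stimulus manipulation described above (not the study's actual code), the sketch below splits a greyscale scene into low and high spatial frequency versions with Gaussian filtering and blanks one quadrant to mimic the occluded image portion; the image, filter sigmas, and occluded region are assumptions.

    ```python
    # A minimal sketch: LSF and HSF versions of a scene, each with one quadrant
    # occluded, roughly mirroring the partial-occlusion manipulation.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    scene = np.random.rand(512, 512)                # stand-in for a greyscale scene
    lsf = gaussian_filter(scene, sigma=8)           # keep low spatial frequencies
    hsf = scene - gaussian_filter(scene, sigma=2)   # keep high spatial frequencies

    occluded_lsf = lsf.copy()
    occluded_lsf[256:, 256:] = lsf.mean()           # blank the lower-right quadrant
    occluded_hsf = hsf.copy()
    occluded_hsf[256:, 256:] = hsf.mean()
    ```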

  9. High resolution satellite observations of mesoscale oceanography in the Tasman Sea, 1978 - 1979

    NASA Technical Reports Server (NTRS)

    Nilsson, C. S.; Andrews, J. C.; Hornibrook, M.; Latham, A. R.; Speechley, G. C.; Scully-Power, P. (Principal Investigator)

    1982-01-01

    Of the nearly 1000 standard infrared photographic images received, 273 images were on computer compatible tape. It proved necessary to digitally enhance the scene contrast to cover only a select few degrees K over the photographic grey scale appropriate to the scene-specific range of sea surface temperature (SST). Some 178 images were so enhanced. Comparisons with sea truth show that SST, as seen by satellite, provides a good guide to the ocean currents and eddies off East Australia, both in summer and winter. This is in contrast, particularly in summer, to SST mapped by surface survey, which usually lacks the necessary spatial resolution.
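
    A minimal sketch of the scene-specific contrast enhancement described above: a narrow band of sea surface temperatures is stretched linearly across the full grey scale, with values outside the band clipped. The temperature field and the chosen band are simulated assumptions.

    ```python
    # A minimal sketch: map ~2 K of scene-specific SST variation onto 0-255 grey levels.
    import numpy as np

    def stretch_sst(sst_kelvin: np.ndarray, t_min: float, t_max: float) -> np.ndarray:
        clipped = np.clip(sst_kelvin, t_min, t_max)        # values outside band saturate
        return np.uint8(255 * (clipped - t_min) / (t_max - t_min))

    sst = 288.0 + 4.0 * np.random.rand(512, 512)           # simulated SST scene (K)
    grey = stretch_sst(sst, t_min=289.0, t_max=291.0)      # enhance a few degrees K
    ```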

  10. Recall versus familiarity when recall fails for words and scenes: The differential roles of the hippocampus, perirhinal cortex, and category-specific cortical regions

    PubMed Central

    Ryals, Anthony J.; Cleary, Anne M.; Seger, Carol A.

    2013-01-01

    This fMRI study examined recall and familiarity for words and scenes using the novel recognition without cued recall (RWCR) paradigm. Subjects performed a cued recall task in which half of the test cues resembled studied items (and thus were familiar) and half did not. Subjects also judged the familiarity of the cue itself. RWCR is the finding that, among cues for which recall fails, subjects generally rate cues that resemble studied items as more familiar than cues that do not. For words, left and right hippocampal activity increased when recall succeeded relative to when it failed. When recall failed, right hippocampal activity was decreased for familiar relative to unfamiliar cues. In contrast, right Prc activity increased for familiar cues for which recall failed relative to both familiar cues for which recall succeeded and to unfamiliar cues. For scenes, left hippocampal activity increased when recall succeeded relative to when it failed but did not differentiate familiar from unfamiliar cues when recall failed. In contrast, right Prc activity increased for familiar relative to unfamiliar cues when recall failed. Category-specific cortical regions showed effects unique to their respective stimulus types: The visual word form area (VWFA) showed effects for recall vs. familiarity specific to words, and the parahippocampal place area (PPA) showed effects for recall vs. familiarity specific to scenes. In both cases, these effects were such that there was increased activity occurring during recall relative to when recall failed, and decreased activity occurring for familiar relative to unfamiliar cues when recall failed. PMID:23142268

  11. Defining the most probable location of the parahippocampal place area using cortex-based alignment and cross-validation.

    PubMed

    Weiner, Kevin S; Barnett, Michael A; Witthoft, Nathan; Golarai, Golijeh; Stigliani, Anthony; Kay, Kendrick N; Gomez, Jesse; Natu, Vaidehi S; Amunts, Katrin; Zilles, Karl; Grill-Spector, Kalanit

    2018-04-15

    The parahippocampal place area (PPA) is a widely studied high-level visual region in the human brain involved in place and scene processing. The goal of the present study was to identify the most probable location of place-selective voxels in medial ventral temporal cortex. To achieve this goal, we first used cortex-based alignment (CBA) to create a probabilistic place-selective region of interest (ROI) from one group of 12 participants. We then tested how well this ROI could predict place selectivity in each hemisphere within a new group of 12 participants. Our results reveal that a probabilistic ROI (pROI) generated from one group of 12 participants accurately predicts the location and functional selectivity in individual brains from a new group of 12 participants, despite between subject variability in the exact location of place-selective voxels relative to the folding of parahippocampal cortex. Additionally, the prediction accuracy of our pROI is significantly higher than that achieved by volume-based Talairach alignment. Comparing the location of the pROI of the PPA relative to published data from over 500 participants, including data from the Human Connectome Project, shows a striking convergence of the predicted location of the PPA and the cortical location of voxels exhibiting the highest place selectivity across studies using various methods and stimuli. Specifically, the most predictive anatomical location of voxels exhibiting the highest place selectivity in medial ventral temporal cortex is the junction of the collateral and anterior lingual sulci. Methodologically, we make this pROI freely available (vpnl.stanford.edu/PlaceSelectivity), which provides a means to accurately identify a functional region from anatomical MRI data when fMRI data are not available (for example, in patient populations). Theoretically, we consider different anatomical and functional factors that may contribute to the consistent anatomical location of place selectivity relative to the folding of high-level visual cortex. Copyright © 2017 Elsevier Inc. All rights reserved.
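
    The sketch below illustrates, in simplified form, how a probabilistic ROI can be built from aligned binary selectivity maps and evaluated on a held-out subject; the vertex counts, threshold, and random masks are stand-ins, and the cortex-based alignment step itself is not modeled, so this is not the published pROI pipeline.

    ```python
    # A minimal sketch: probability map from 12 subjects' binary masks, thresholded
    # into a probabilistic ROI and compared with a held-out subject via Dice overlap.
    import numpy as np

    masks = np.random.rand(12, 10000) > 0.9      # 12 subjects x surface vertices
    prob_map = masks.mean(axis=0)                # fraction of subjects selective per vertex
    p_roi = prob_map >= 0.33                     # example probability threshold

    held_out = np.random.rand(10000) > 0.9       # a new subject's binary selectivity map
    dice = 2 * np.logical_and(p_roi, held_out).sum() / (p_roi.sum() + held_out.sum())
    print(f"Dice overlap with held-out subject: {dice:.2f}")
    ```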

  12. Space Radar Image of West Texas - SAR Scan

    NASA Image and Video Library

    1999-04-15

    This radar image of the Midland/Odessa region of West Texas demonstrates an experimental technique, called ScanSAR, that allows scientists to rapidly image large areas of the Earth's surface. The large image covers an area 245 kilometers by 225 kilometers (152 miles by 139 miles). It was obtained by the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying aboard the space shuttle Endeavour on October 5, 1994. The smaller inset image is a standard SIR-C image showing a portion of the same area, 100 kilometers by 57 kilometers (62 miles by 35 miles) and was taken during the first flight of SIR-C on April 14, 1994. The bright spots on the right side of the image are the cities of Odessa (left) and Midland (right), Texas. The Pecos River runs from the top center to the bottom center of the image. Along the left side of the image are, from top to bottom, parts of the Guadalupe, Davis and Santiago Mountains. North is toward the upper right. Unlike conventional radar imaging, in which a radar continuously illuminates a single ground swath as the space shuttle passes over the terrain, a ScanSAR radar illuminates several adjacent ground swaths almost simultaneously, by "scanning" the radar beam across a large area in a rapid sequence. The adjacent swaths, typically about 50 km (31 miles) wide, are then merged during ground processing to produce a single large scene. Illumination for this L-band scene is from the top of the image. The beams were scanned from the top of the scene to the bottom, as the shuttle flew from left to right. This scene was acquired in about 30 seconds. A normal SIR-C image is acquired in about 13 seconds. The ScanSAR mode will likely be used on future radar sensors to construct regional and possibly global radar images and topographic maps. The ScanSAR processor is being designed for 1996 implementation at NASA's Alaska SAR Facility, located at the University of Alaska Fairbanks, and will produce digital images from the forthcoming Canadian RADARSAT satellite. http://photojournal.jpl.nasa.gov/catalog/PIA01787

  13. Unattended real-time re-establishment of visibility in high dynamic range video and stills

    NASA Astrophysics Data System (ADS)

    Abidi, B.

    2014-05-01

    We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions, where bright sunlight creates mixed contrast effects, i.e., heavy shadows and washouts. These effects result in high dynamic range scenes, where illuminance can vary from a few lux to six-figure values. When using regular monitors and cameras, such a wide span of illumination can only be visualized if the actual range of values is compressed, leading to the creation of saturated and/or dark noisy areas and a loss of information in these areas. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because all information is not present in the original data. Active intervention in the acquisition process is required. A software package, capable of integrating multiple types of COTS and custom cameras, ranging from Unmanned Aerial Systems (UAS) data links to digital single-lens reflex cameras (DSLR), is described. Hardware and software are integrated via a novel smart data acquisition algorithm, which communicates to the camera the parameters that would maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene where information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce automatic, real-time, full-motion corrected video. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills, enhances visible, night vision, and infrared data, and successfully applies to nighttime and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive and easy-to-use graphical interfaces. The ensuing increase in visibility in surveillance video and intelligence imagery will expand the performance and timely decision making of the human analyst, as well as that of unmanned systems performing automatic data exploitation, such as target detection and identification.
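
    As a rough illustration of the multi-exposure fusion step described above (not the reported software), the sketch below merges three bracketed frames of a mixed-contrast scene with OpenCV's Mertens exposure fusion; the synthetic frames and gain values are assumptions.

    ```python
    # A minimal sketch: fuse under-, mid-, and over-exposed frames of one scene so
    # that both shadow and highlight detail survive in a single displayable image.
    import cv2
    import numpy as np

    base = np.random.rand(240, 320, 3)                    # stand-in scene radiance
    exposures = [np.clip(base * g * 255, 0, 255).astype("uint8") for g in (0.4, 1.0, 2.5)]

    fusion = cv2.createMergeMertens().process(exposures)  # float result, roughly [0, 1]
    result = np.clip(fusion * 255, 0, 255).astype("uint8")
    cv2.imwrite("fused.png", result)
    ```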

  14. High resolution observations of low contrast phenomena from an Advanced Geosynchronous Platform (AGP)

    NASA Technical Reports Server (NTRS)

    Maxwell, M. S.

    1984-01-01

    Present technology allows radiometric monitoring of the Earth, ocean and atmosphere from a geosynchronous platform with good spatial, spectral and temporal resolution. The proposed system could provide a capability for multispectral remote sensing with a 50 m nadir spatial resolution in the visible bands, 250 m in the 4 micron band and 1 km in the 11 micron thermal infrared band. The diffraction-limited telescope has a 1 m aperture, a 10 m focal length (with a shorter focal length in the infrared) and linear and area arrays of detectors. The diffraction-limited resolution applies to scenes of any brightness, but for dark, low-contrast scenes, the good signal-to-noise ratio of the system contributes to the observation capability. The capabilities of the AGP system are assessed for quantitative observations of ocean scenes. Instrument and ground system configurations are presented and projected sensor capabilities are analyzed.

  15. Forest disturbance interactions and successional pathways in the Southern Rocky Mountains

    USGS Publications Warehouse

    Liang, Lu; Hawbaker, Todd J.; Zhu, Zhiliang; Li, Xuecao; Gong, Peng

    2016-01-01

    The pine forests in the southern portion of the Rocky Mountains are a heterogeneous mosaic of disturbance and recovery. The most extensive and intense stress and mortality result from human activity, fire, and mountain pine beetles (MPB; Dendroctonus ponderosae). Understanding disturbance interactions and disturbance-succession pathways is crucial for adapting management strategies to mitigate their impacts and anticipate future ecosystem change. Driven by this goal, we assessed the forest disturbance and recovery history in the Southern Rocky Mountains Ecoregion using a 13-year time series of Landsat image stacks. An automated classification workflow that integrates temporal segmentation techniques and a random forest classifier was used to examine disturbance patterns. To enhance efficiency in selecting representative samples at the ecoregion scale, a new sampling strategy that takes advantage of the scene-overlap among adjacent Landsat images was designed. The segment-based assessment revealed that the overall accuracy for all 14 scenes varied from 73.6% to 92.5%, with a mean of 83.1%. A design-based inference indicated the average producer's and user's accuracies for MPB mortality were 85.4% and 82.5%, respectively. We found that burn severity was largely unrelated to the severity of pre-fire beetle outbreaks in this region, where the severity of post-fire beetle outbreaks generally decreased in relation to burn severity. Approximately half the clear-cut and burned areas were in various stages of recovery, but the regeneration rate was much slower for MPB-disturbed sites. Pre-fire beetle outbreaks and subsequent fire produced positive compound effects on seedling reestablishment in this ecoregion. Taken together, these results emphasize that although multiple disturbances do play a role in the resilience mechanism of the serotinous lodgepole pine, the overall recovery could be slow due to the vast area of beetle mortality.
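
    A minimal sketch of the classification step summarised above: a random forest labelling per-pixel spectral time series as stable, burned, harvested, or beetle-killed. The feature layout, class labels, and simulated data are assumptions, not the authors' workflow.

    ```python
    # A minimal sketch: random forest classification of simulated Landsat time-series
    # features into disturbance classes, with a simple hold-out accuracy check.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X = np.random.rand(5000, 13 * 6)   # 13 years x 6 spectral bands per pixel
    y = np.random.randint(0, 4, 5000)  # 0=stable, 1=fire, 2=clear-cut, 3=MPB mortality

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
    clf = RandomForestClassifier(n_estimators=500, n_jobs=-1).fit(X_train, y_train)
    print(f"overall accuracy: {clf.score(X_test, y_test):.3f}")
    ```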

  16. Absolute Depth Sensitivity in Cat Primary Visual Cortex under Natural Viewing Conditions.

    PubMed

    Pigarev, Ivan N; Levichkina, Ekaterina V

    2016-01-01

    Mechanisms of 3D perception, investigated in many laboratories, have defined depth either relative to the fixation plane or to other objects in the visual scene. It is obvious that for efficient perception of the 3D world, additional mechanisms of depth constancy could operate in the visual system to provide information about absolute distance. Neurons with properties reflecting some features of depth constancy have been described in the parietal and extrastriate occipital cortical areas. It has also been shown that, for some neurons in visual area V1, responses to stimuli of constant angular size differ at close and remote distances. The present study was designed to investigate whether, in natural free-gaze viewing conditions, neurons tuned to absolute depths can be found in the primary visual cortex (area V1). Single-unit extracellular activity was recorded from the visual cortex of waking cats sitting on a trolley in front of a large screen. The trolley was slowly approaching the visual scene, which consisted of stationary sinusoidal gratings of optimal orientation rear-projected over the whole surface of the screen. Each neuron was tested with two gratings, with the spatial frequency of one grating being twice as high as that of the other. Assuming that a cell is tuned to a spatial frequency, its maximum response to the grating with a spatial frequency twice as high should be shifted to half the distance from the screen in order to produce the same retinal projection. For hypothetical neurons selective to absolute depth, location of the maximum response should remain at the same distance irrespective of the type of stimulus. It was found that about 20% of neurons in our experimental paradigm demonstrated sensitivity to particular distances independently of the spatial frequencies of the gratings. We interpret these findings as an indication of the use of absolute depth information in the primary visual cortex.

  17. Saliency-Guided Detection of Unknown Objects in RGB-D Indoor Scenes.

    PubMed

    Bao, Jiatong; Jia, Yunyi; Cheng, Yu; Xi, Ning

    2015-08-27

    This paper studies the problem of detecting unknown objects within indoor environments in an active and natural manner. The visual saliency scheme utilizing both color and depth cues is proposed to arouse the interests of the machine system for detecting unknown objects at salient positions in a 3D scene. The 3D points at the salient positions are selected as seed points for generating object hypotheses using the 3D shape. We perform multi-class labeling on a Markov random field (MRF) over the voxels of the 3D scene, combining cues from object hypotheses and 3D shape. The results from MRF are further refined by merging the labeled objects, which are spatially connected and have high correlation between color histograms. Quantitative and qualitative evaluations on two benchmark RGB-D datasets illustrate the advantages of the proposed method. The experiments of object detection and manipulation performed on a mobile manipulator validate its effectiveness and practicability in robotic applications.

  18. Some of the thousand words a picture is worth.

    PubMed

    Mandler, J M; Johnson, N S

    1976-09-01

    The effects of real-world schemata on recognition of complex pictures were studied. Two kinds of pictures were used: pictures of objects forming real-world scenes and unorganized collections of the same objects. The recognition test employed distractors that varied four types of information: inventory, spatial location, descriptive, and spatial composition. Results emphasized the selective nature of schemata since superior recognition of one kind of information was offset by loss of another. Spatial location information was better recognized in real-world scenes and spatial composition information was better recognized in unorganized scenes. Organized and unorganized pictures did not differ with respect to inventory and descriptive information. The longer the pictures were studied, the longer subjects took to recognize them. Reaction time for hits, misses, and false alarms increased dramatically as presentation time increased from 5 to 60 sec. It was suggested that detection of a difference in a distractor terminated search, but that when no difference was detected, an exhaustive search of the available information took place.

  19. Photorealistic scene presentation: virtual video camera

    NASA Astrophysics Data System (ADS)

    Johnson, Michael J.; Rogers, Joel Clark W.

    1994-07-01

    This paper presents a low-cost alternative for presenting photo-realistic imagery during the final approach, which often is a peak workload phase of flight. The method capitalizes on 'a priori' information. It accesses out-the-window 'snapshots' from a mass storage device, selecting the snapshots that deliver the best match for a given aircraft position and runway scene. It then warps the snapshots to align them more closely with the current viewpoint. The individual snapshots, stored as highly compressed images, are decompressed and interpolated to produce a 'clear-day' video stream. The paper shows how this warping, when combined with other compression methods, saves considerable amounts of storage; compression factors from 1000 to 3000 were achieved. Thus, a CD-ROM today can store reference snapshots for thousands of different runways. Dynamic scene elements not present in the snapshot database can be inserted as separate symbolic or pictorial images. When underpinned by an appropriate suite of sensor technologies, the methods discussed indicate an all-weather virtual video camera is possible.
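
    As a simplified illustration of the snapshot warping step described above (not the paper's implementation), the sketch below applies a planar homography to align a stored reference view with the current viewpoint; the synthetic snapshot and corner correspondences are assumptions, and in practice the correspondences would come from aircraft position and runway geometry.

    ```python
    # A minimal sketch: warp a stored snapshot toward the current viewpoint with a
    # four-point planar homography.
    import cv2
    import numpy as np

    snapshot = (np.random.rand(480, 640, 3) * 255).astype("uint8")  # stand-in reference view
    h, w = snapshot.shape[:2]

    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])              # snapshot corners
    dst = np.float32([[30, 10], [w - 20, 0], [w, h], [0, h - 5]])   # assumed current-view corners
    H = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(snapshot, H, (w, h))               # aligned 'clear-day' frame
    ```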

  20. fMRI responses to pictures of mutilation and contamination.

    PubMed

    Schienle, Anne; Schäfer, Axel; Hermann, Andrea; Walter, Bertram; Stark, Rudolf; Vaitl, Dieter

    2006-01-30

    Findings from several functional magnetic resonance imaging (fMRI) studies implicate the existence of a distinct neural disgust substrate, whereas others support the idea of distributed and integrative brain systems involved in emotional processing. In the present fMRI experiment 12 healthy females viewed pictures from four emotion categories. Two categories were disgust-relevant and depicted contamination or mutilation. The other scenes showed attacks (fear) or were affectively neutral. The two types of disgust elicitors received comparable ratings for disgust, fear and arousal. Both were associated with activation of the occipitotemporal cortex, the amygdala, and the orbitofrontal cortex; insula activity was nonsignificant in the two disgust conditions. Mutilation scenes induced greater inferior parietal activity than contamination scenes, which might mirror their greater capacity to capture attention. Our results are in disagreement with the idea of selective disgust processing at the insula. They point to a network of brain regions involved in the decoding of stimulus salience and the regulation of attention.

  1. Exploring direct 3D interaction for full horizontal parallax light field displays using leap motion controller.

    PubMed

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-04-14

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work.

  2. Saliency-Guided Detection of Unknown Objects in RGB-D Indoor Scenes

    PubMed Central

    Bao, Jiatong; Jia, Yunyi; Cheng, Yu; Xi, Ning

    2015-01-01

    This paper studies the problem of detecting unknown objects within indoor environments in an active and natural manner. The visual saliency scheme utilizing both color and depth cues is proposed to arouse the interests of the machine system for detecting unknown objects at salient positions in a 3D scene. The 3D points at the salient positions are selected as seed points for generating object hypotheses using the 3D shape. We perform multi-class labeling on a Markov random field (MRF) over the voxels of the 3D scene, combining cues from object hypotheses and 3D shape. The results from MRF are further refined by merging the labeled objects, which are spatially connected and have high correlation between color histograms. Quantitative and qualitative evaluations on two benchmark RGB-D datasets illustrate the advantages of the proposed method. The experiments of object detection and manipulation performed on a mobile manipulator validate its effectiveness and practicability in robotic applications. PMID:26343656

  3. Modelling vehicle colour and pattern for multiple deployment environments

    NASA Astrophysics Data System (ADS)

    Liggins, Eric; Moorhead, Ian R.; Pearce, Daniel A.; Baker, Christopher J.; Serle, William P.

    2016-10-01

    Military land platforms are often deployed around the world in very different climate zones. Procuring vehicles in a large range of camouflage patterns and colour schemes is expensive and may limit the environments in which they can be effectively used. As such this paper reports a modelling approach for use in the optimisation and selection of a colour palette, to support operations in diverse environments and terrains. Three different techniques were considered based upon the differences between vehicle and background in L*a*b* colour space, to predict the optimum (initially single) colour to reduce the vehicle signature in the visible band. Calibrated digital imagery was used as backgrounds and a number of scenes were sampled. The three approaches used, and reported here are a) background averaging behind the vehicle b) background averaging in the area surrounding the vehicle and c) use of the spatial extension to CIE L*a*b*; S-CIELAB (Zhang and Wandell, Society for Information Display Symposium Technical Digest, vol. 27, pp. 731-734, 1996). Results are compared with natural scene colour statistics. The models used showed good agreement in the colour predictions for individual and multiple terrains or climate zones. A further development of the technique examines the effect of different patterns and colour combinations on the S-CIELAB spatial colour difference metric, when scaled for appropriate viewing ranges.
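
    The sketch below is a rough stand-in for approach (b) above: the mean background colour in a region surrounding the vehicle is computed in CIELAB and compared with a candidate paint colour using the CIE76 colour difference; the image, surround mask, and candidate colour are assumptions.

    ```python
    # A minimal sketch: mean surround colour in L*a*b* versus a candidate vehicle colour.
    import numpy as np
    from skimage.color import rgb2lab, deltaE_cie76

    image = np.random.rand(480, 640, 3)            # stand-in calibrated RGB background
    surround = np.zeros((480, 640), dtype=bool)
    surround[180:300, 220:420] = True              # assumed region surrounding the vehicle

    lab = rgb2lab(image)
    background_lab = lab[surround].mean(axis=0)    # mean L*a*b* of the surround
    candidate_lab = np.array([45.0, 2.0, 12.0])    # a candidate single-colour scheme
    print("Delta E (CIE76):", deltaE_cie76(background_lab, candidate_lab))
    ```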

  4. 33 CFR 155.4040 - Response times for each salvage and marine firefighting service.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... within the inland waters or the nearshore or offshore area, you must submit in writing, in your plan, the... identified in your response plan for areas OCONUS. (c) Table 155.4040(c) provides additional amplifying... on scene. vii) Salvage plan Plan completed and submitted to Incident Commander/Unified Command. (viii...

  5. 33 CFR 155.4040 - Response times for each salvage and marine firefighting service.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... within the inland waters or the nearshore or offshore area, you must submit in writing, in your plan, the... identified in your response plan for areas OCONUS. (c) Table 155.4040(c) provides additional amplifying... on scene. vii) Salvage plan Plan completed and submitted to Incident Commander/Unified Command. (viii...

  6. 33 CFR 155.4040 - Response times for each salvage and marine firefighting service.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... within the inland waters or the nearshore or offshore area, you must submit in writing, in your plan, the... identified in your response plan for areas OCONUS. (c) Table 155.4040(c) provides additional amplifying... on scene. vii) Salvage plan Plan completed and submitted to Incident Commander/Unified Command. (viii...

  7. Void-Filled SRTM Digital Elevation Model of Afghanistan

    USGS Publications Warehouse

    Chirico, Peter G.; Barrios, Boris

    2005-01-01

    The purpose of this data set is to provide a single consistent elevation model to be used for national scale mapping, GIS, remote sensing applications, and natural resource assessments for Afghanistan's reconstruction. For 11 days in February of 2000, the National Aeronautics and Space Administration (NASA), the National Geospatial-Intelligence Agency (NGA), and the Italian Space Agency (ASI) flew X-band and C-band radar interferometers onboard the Space Shuttle Endeavour. The mission covered the Earth between 60°N and 57°S and will provide interferometric digital elevation models (DEMs) of approximately 80% of the Earth's land mass when processing is complete. The radar-pointing angle was approximately 55° at scene center. Ascending and descending orbital passes generated multiple interferometric data scenes for nearly all areas. Up to eight passes of data were merged to form the final processed Shuttle Radar Topography Mission (SRTM) DEMs. The effect of merging scenes averages elevation values recorded in coincident scenes and reduces, but does not completely eliminate, the amount of area with layover and terrain shadow effects. The most significant form of data processing for the Afghanistan DEM was gap-filling areas where the SRTM data contained a data void. These void areas are a result of radar shadow, layover, standing water, and other effects of terrain as well as technical radar interferometry phase unwrapping issues. To fill these gaps, topographic contours were digitized from 1:200,000-scale Soviet General Staff topographic maps, which date from the middle to late 1980s. Digital contours were gridded to form elevation models for void areas and subsequently were merged with the SRTM data through GIS and image processing techniques. The data contained in this publication include SRTM DEM quadrangles projected and clipped in geographic coordinates for the entire country. An index of all available SRTM DEM quadrangles is displayed here: Index_Geo_DD.pdf. Also included are quadrangles projected into their appropriate Universal Transverse Mercator (UTM) projection. The country of Afghanistan spans three UTM Zones: Zone 41, Zone 42, and Zone 43. Maps are stored in their respective UTM Zone projection. Indexes of all available SRTM DEM quadrangles in their respective UTM zone are displayed here: Index_UTM_Z41.pdf, Index_UTM_Z42.pdf, Index_UTM_Z43.pdf.
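
    As a simplified stand-in for the gap-filling step described above (the actual product merged contour-derived grids into the voids rather than interpolating), the sketch below fills NaN-marked voids in a simulated DEM by interpolating surrounding valid cells.

    ```python
    # A minimal sketch: fill an interior DEM void by linear interpolation of the
    # surrounding valid elevations; the DEM and void location are simulated.
    import numpy as np
    from scipy.interpolate import griddata

    dem = np.random.rand(200, 200) * 1000.0
    dem[80:100, 90:120] = np.nan                    # a radar-shadow / layover void

    rows, cols = np.indices(dem.shape)
    valid = ~np.isnan(dem)
    filled = dem.copy()
    filled[~valid] = griddata(
        (rows[valid], cols[valid]), dem[valid],     # known elevations
        (rows[~valid], cols[~valid]), method="linear")
    ```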

  8. A validation of ground ambulance pre-hospital times modeled using geographic information systems

    PubMed Central

    2012-01-01

    Background Evaluating geographic access to health services often requires determining the patient travel time to a specified service. For urgent care, many research studies have modeled patient pre-hospital time by ground emergency medical services (EMS) using geographic information systems (GIS). The purpose of this study was to determine if the modeling assumptions proposed through prior United States (US) studies are valid in a non-US context, and to use the resulting information to provide revised recommendations for modeling travel time using GIS in the absence of actual EMS trip data. Methods The study sample contained all emergency adult patient trips within the Calgary area for 2006. Each record included four components of pre-hospital time (activation, response, on-scene and transport interval). The actual activation and on-scene intervals were compared with those used in published models. The transport interval was calculated within GIS using the Network Analyst extension of Esri ArcGIS 10.0 and the response interval was derived using previously established methods. These GIS derived transport and response intervals were compared with the actual times using descriptive methods. We used the information acquired through the analysis of the EMS trip data to create an updated model that could be used to estimate travel time in the absence of actual EMS trip records. Results There were 29,765 complete EMS records for scene locations inside the city and 529 outside. The actual median on-scene intervals were longer than the average previously reported by 7–8 minutes. Actual EMS pre-hospital times across our study area were significantly higher than the estimated times modeled using GIS and the original travel time assumptions. Our updated model, although still underestimating the total pre-hospital time, more accurately represents the true pre-hospital time in our study area. Conclusions The widespread use of generalized EMS pre-hospital time assumptions based on US data may not be appropriate in a non-US context. The preference for researchers should be to use actual EMS trip records from the proposed research study area. In the absence of EMS trip data researchers should determine which modeling assumptions more accurately reflect the EMS protocols across their study area. PMID:23033894

  9. Use of Persistent Scatterer Interferometry to Assess Land Deformation in the Nile Delta and its Controlling Factors

    NASA Astrophysics Data System (ADS)

    Gebremichael, E.; Sultan, M.; Becker, R.; Emil, M.; Ahmed, M.; Chouinard, K.

    2015-12-01

    We applied persistent scatterer interferometry (PSInSAR) to assess land deformation (subsidence and uplift) across the entire Nile delta and its surroundings and to identify possible causes of the observed deformation. For the purpose of the present study, 100 Envisat Advanced Synthetic Aperture Radar (ASAR; level 0) scenes acquired along four tracks and covering a time span of seven years (2004 to 2010) were used. The scenes extend from the Mediterranean coast in the north to Cairo city in the south. These scenes were focused using Repeat Orbit Interferometry PACkage (ROI_PAC) software and the subsequent PSI processing was done using the Stanford Method for Persistent Scatterers (StaMPS). A low coherence threshold (0.2) was used to decrease the impact of vegetation-related poor coherence and decorrelation of the scenes over the investigated time span. Subsidence was observed over: (1) the Damietta Nile River branch (3 to 14 mm/yr) where it intersects the Mediterranean coastline, (2) thick (~ 40 m) Holocene sediments in Lake Manzala (up to 9 mm/yr), (3) reclaimed desert areas (west of Nile Delta; up to 12 mm/yr) of high groundwater extraction, (4) along parts of a previously proposed flexure line (up to 10 mm/yr), and (5) along the eastern sections of the Mediterranean coastline (up to 15.7 mm/yr). The city of Alexandria (underlain by carbonate platform) and the terminus of the Rosetta branch of the Nile River seem to experience almost no ground movement (mean subsidence of 0.28 mm/yr and 0.74 mm/yr, respectively) while the cities of Ras Elbar and Port Said (underlain by thick Holocene sediment) exhibit the highest subsidence values (up to 14 mm/yr and 8.5 mm/yr, respectively). The city of Cairo has also experienced subsidence in limited areas of up to 7.8 mm/yr. High spatial correlation was also observed between the subsiding areas and the Abu Madi incised valley, the largest gas field in the Nile Delta. Most of the area undergoing subsidence in the Nile Delta is related to sediment compaction and/or groundwater extraction, with other factors such as gas extraction and tectonic drivers correlating with smaller areas.

  10. A recurrent neural model for proto-object based contour integration and figure-ground segregation.

    PubMed

    Hu, Brian; Niebur, Ernst

    2017-12-01

    Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects ("proto-objects") based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. Additionally, inhibition at both the local feature level and the object representation level biases the interpretation of the visual scene in agreement with principles from Gestalt psychology. Our model explains several sets of neurophysiological results (Zhou et al. Journal of Neuroscience, 20(17), 6594-6611 2000; Qiu et al. Nature Neuroscience, 10(11), 1492-1499 2007; Chen et al. Neuron, 82(3), 682-694 2014), and makes testable predictions about the influence of neuronal feedback and attentional selection on neural responses across different visual areas. Our model also provides a framework for understanding how object-based attention is able to select both objects and the features associated with them.

  11. Image Stability Requirements For a Geostationary Imaging Fourier Transform Spectrometer (GIFTS)

    NASA Technical Reports Server (NTRS)

    Bingham, G. E.; Cantwell, G.; Robinson, R. C.; Revercomb, H. E.; Smith, W. L.

    2001-01-01

    A Geostationary Imaging Fourier Transform Spectrometer (GIFTS) has been selected for the NASA New Millennium Program (NMP) Earth Observing-3 (EO-3) mission. Our paper will discuss one of the key GIFTS measurement requirements, Field of View (FOV) stability, and its impact on required system performance. The GIFTS NMP mission is designed to demonstrate new and emerging sensor and data processing technologies with the goal of making revolutionary improvements in meteorological observational capability and forecasting accuracy. The GIFTS payload is a versatile imaging FTS with programmable spectral resolution and spatial scene selection that allows radiometric accuracy and atmospheric sounding precision to be traded in near real time for area coverage. The GIFTS sensor combines high sensitivity with a massively parallel spatial data collection scheme to allow high spatial resolution measurement of the Earth's atmosphere and rapid broad area coverage. An objective of the GIFTS mission is to demonstrate the advantages of high spatial resolution (4 km ground sample distance - gsd) on temperature and water vapor retrieval by allowing sampling in broken cloud regions. This small gsd, combined with the relatively long scan time required (approximately 10 s) to collect high resolution spectra from geostationary (GEO) orbit, may require extremely good pointing control. This paper discusses the analysis of this requirement.

  12. Illicit drug use in the flemish nightlife scene between 2003 and 2009.

    PubMed

    Van Havere, Tina; Lammertyn, Jan; Vanderplasschen, Wouter; Bellis, Mark; Rosiers, Johan; Broekaert, Eric

    2012-01-01

    Given the importance of party people as innovators and early adopters in the diffusion of substance use, and given the lack of longitudinal scope in studies of the nightlife scene, we explored changes in illicit drug use among young people participating in the nightlife scene in Flanders. A survey among party people selected at dance events, rock festivals and clubs was held in the summer of 2003 and repeated in 2005, 2007 and 2009. In total, 2,812 respondents filled in a questionnaire on the use of cannabis, ecstasy, cocaine, amphetamines, GHB and ketamine. The results of the multiple logistic regression analyses show that in the group of frequent pub visitors, the predicted probability of cannabis use increased over time, while the gap in drug use between dance music lovers and non-lovers of dance music narrowed. For cocaine use during the last year, an increase was found related to the housing situation (living alone or with parents) of respondents. While the odds of using ecstasy decreased over the years, the odds of using GHB increased. We can conclude that monitoring the nightlife scene, where emerging trends can be observed quickly, provides meaningful information for anticipating possible trends. Copyright © 2012 S. Karger AG, Basel.

  13. ViCoMo: visual context modeling for scene understanding in video surveillance

    NASA Astrophysics Data System (ADS)

    Creusen, Ivo M.; Javanbakhti, Solmaz; Loomans, Marijn J. H.; Hazelhoff, Lykele B.; Roubtsova, Nadejda; Zinger, Svitlana; de With, Peter H. N.

    2013-10-01

    The use of contextual information can significantly aid scene understanding of surveillance video. Just detecting people and tracking them does not provide sufficient information to detect situations that require operator attention. We propose a proof-of-concept system that uses several sources of contextual information to improve scene understanding in surveillance video. The focus is on two scenarios that represent common video surveillance situations, parking lot surveillance and crowd monitoring. In the first scenario, a pan-tilt-zoom (PTZ) camera tracking system is developed for parking lot surveillance. Context is provided by the traffic sign recognition system to localize regular and handicapped parking spot signs as well as license plates. The PTZ algorithm has the ability to selectively detect and track persons based on scene context. In the second scenario, a group analysis algorithm is introduced to detect groups of people. Contextual information is provided by traffic sign recognition and region labeling algorithms and exploited for behavior understanding. In both scenarios, decision engines are used to interpret and classify the output of the subsystems and if necessary raise operator alerts. We show that using context information enables the automated analysis of complicated scenarios that were previously not possible using conventional moving object classification techniques.

  14. Use of Seasat synthetic aperture radar and Landsat multispectral scanner subsystem data for Alaskan glaciology studies

    NASA Technical Reports Server (NTRS)

    Hall, D. K.; Ormsby, J. P.

    1983-01-01

    Three Seasat synthetic aperture radar (SAR) and three Landsat multispectral scanner subsystem (MSS) scenes of three areas of Alaska were analyzed for hydrological information. The areas were: the Dease Inlet in northern Alaska and its oriented or thaw lakes, the Ruth and Tokositna valley glaciers in south central Alaska, and the Malaspina piedmont glacier on Alaska's southern coast. Results for the first area showed that the location and identification of some older remnant lake basins were more easily determined in the registered data using an MSS/SAR overlay than in either SAR or MSS data alone. Separately, both SAR and MSS data were useful for determination of surging glaciers based on their distinctive medial moraines, and Landsat data were useful for locating the glacier firn zone. For the Malaspina Glacier scenes, the SAR data were useful for locating heavily crevassed ice beneath glacial debris, and Landsat provided data concerning the extent of the debris overlying the glacier.

  15. South Polar Designs

    NASA Image and Video Library

    2006-04-29

    This MOC image shows an undulating scene in the south polar region of Mars. Small, elevated mesas of smooth, relatively homogeneous-appearing material are separated by low-lying regions that are speckled and darkened in some local areas.

  16. The Frozen Canyons of Pluto North Pole

    NASA Image and Video Library

    2016-02-27

    This ethereal scene captured by NASA's New Horizons spacecraft tells yet another story of Pluto's diversity of geological and compositional features, this time in an enhanced color image of the north polar area.

  17. Large Area Crop Inventory Experiment (LACIE). Effect of sun angle and haze on generation of LANDSAT imagery. [Houston, Texas

    NASA Technical Reports Server (NTRS)

    Chesnutwood, C. M.; Kraus, G. L. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. When heavy haze was present over a nonvegetated scene, the mean radiance values for all MSS channels were lowered, with the greatest decrease occurring in channels 1 and 2. Over a vegetated scene, any apparent decrease in mean radiance values due to haze may be masked by an increase in mean radiance values which occurred as vegetation increased in vigor during its growth cycle. Mean radiance values for nonvegetated targets (except water) were closely correlated to the changes in sun elevation angle throughout the year. A sun angle correction to a fixed reference date appeared to offer the possibility of compensating for haze-covered areas by predicting the mean radiance values which would be closely correlated to the normal sun declination curve.
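
    The "sun angle correction to a fixed reference date" mentioned above is commonly implemented as a cosine-of-solar-zenith normalization. The sketch below shows that standard form only as an illustration; the reference elevation and the radiance value are assumptions, and the LACIE report's exact procedure may differ.

```python
import math

def sun_angle_normalize(radiance, sun_elevation_deg, ref_elevation_deg=45.0):
    """Scale a measured radiance to a fixed reference sun elevation.

    Standard cosine-of-solar-zenith correction; parameters here are illustrative.
    """
    zen = math.radians(90.0 - sun_elevation_deg)       # solar zenith at acquisition
    ref_zen = math.radians(90.0 - ref_elevation_deg)   # zenith at the reference date
    return radiance * math.cos(ref_zen) / math.cos(zen)

# Example: a channel radiance of 40 (arbitrary units) acquired at a 25 degree
# sun elevation, normalized to a 45 degree reference elevation.
print(round(sun_angle_normalize(40.0, 25.0), 2))
```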

  18. Integrated framework for developing search and discrimination metrics

    NASA Astrophysics Data System (ADS)

    Copeland, Anthony C.; Trivedi, Mohan M.

    1997-06-01

    This paper presents an experimental framework for evaluating target signature metrics as models of human visual search and discrimination. This framework is based on a prototype eye tracking testbed, the Integrated Testbed for Eye Movement Studies (ITEMS). ITEMS determines an observer's visual fixation point while he studies a displayed image scene, by processing video of the observer's eye. The utility of this framework is illustrated with an experiment using gray-scale images of outdoor scenes that contain randomly placed targets. Each target is a square region of a specific size containing pixel values from another image of an outdoor scene. The real-world analogy of this experiment is that of a military observer looking upon the sensed image of a static scene to find camouflaged enemy targets that are reported to be in the area. ITEMS provides the data necessary to compute various statistics for each target to describe how easily the observers located it, including the likelihood the target was fixated or identified and the time required to do so. The computed values of several target signature metrics are compared to these statistics, and a second-order metric based on a model of image texture was found to be the most highly correlated.

  19. Context matters: Anterior and posterior cortical midline responses to sad movie scenes.

    PubMed

    Schlochtermeier, L H; Pehrs, C; Bakels, J-H; Jacobs, A M; Kappelhoff, H; Kuchinke, L

    2017-04-15

    Narrative movies can create powerful emotional responses. While recent research has advanced the understanding of neural networks involved in immersive movie viewing, their modulation within a movie's dynamic context remains inconclusive. In this study, 24 healthy participants passively watched sad scene climaxes taken from 24 romantic comedies, while brain activity was measured using functional magnetic resonance imaging (fMRI). To study effects of context, the sad scene climaxes were presented with either a coherent scene context, a replaced non-coherent context, or without context. In a second viewing, the same clips were rated continuously for sadness. The ratings varied over time with peaks of experienced sadness within the assumed climax intervals. Activations in anterior and posterior cortical midline regions increased if presented with both coherent and replaced context, while activation in the temporal gyri decreased. This difference was more pronounced for the coherent context condition. Psychophysiological interaction (PPI) analyses showed a context-dependent coupling of midline regions with occipital visual and sub-cortical reward regions. Our results demonstrate the pivotal role of midline structures and their interaction with perceptual and reward areas in processing contextually embedded socio-emotional information in movies. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    NASA Astrophysics Data System (ADS)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
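
    Because the abstract states that model selection is performed with AIC/BIC under a Gaussian (Gauss-Markov) error model, the short sketch below illustrates that selection step on two hypothetical floorplan hypotheses. The observation values, parameter counts, and noise level are invented; this is not the authors' implementation.

```python
import numpy as np

def aic_bic(residuals, k, sigma=0.05):
    """AIC and BIC for a Gaussian error model with known sigma (illustrative only)."""
    n = residuals.size
    log_lik = (-0.5 * n * np.log(2 * np.pi * sigma**2)
               - 0.5 * np.sum(residuals**2) / sigma**2)
    return 2 * k - 2 * log_lik, k * np.log(n) - 2 * log_lik

# Hypothetical sparse observations: room areas (m^2) predicted by two candidate
# floorplan models with different numbers of free parameters.
observed = np.array([12.1, 11.9, 18.2, 18.0])
model_a = np.array([12.0, 12.0, 18.0, 18.0])   # symmetric model, 2 parameters
model_b = np.array([12.1, 11.9, 18.2, 18.0])   # unconstrained model, 4 parameters

for name, pred, k in [("A", model_a, 2), ("B", model_b, 4)]:
    aic, bic = aic_bic(observed - pred, k)
    print(name, "AIC:", round(aic, 1), "BIC:", round(bic, 1))
```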

  1. The duality of temporal encoding – the intrinsic and extrinsic representation of time

    PubMed Central

    Golan, Ronen; Zakay, Dan

    2015-01-01

    While time is well acknowledged for having a fundamental part in our perception, questions on how it is represented are still matters of great debate. One of the main issues in question is whether time is represented intrinsically at the neural level, or whether it is represented within dedicated brain regions. We used an fMRI block design to test whether we can impose covert encoding of temporal features of face and natural-scene stimuli within category-selective neural populations by exposing subjects to four types of temporal variance, ranging from 0% up to 50% variance. We found a gradual increase in neural activation associated with the gradual increase in temporal variance within category-selective areas. A second-level analysis showed the same pattern of activations within known brain regions associated with time representation, such as the Cerebellum, the Caudate, and the Thalamus. We concluded that temporal features are integral to perception and are simultaneously represented within category-selective regions and globally within dedicated regions. Our second conclusion, drawn from our covert procedure, is that time encoding, at its basic level, is an automated process that does not require attention allocated toward the temporal features, nor does it require dedicated resources. PMID:26379604

  2. Bringing color to emotion: The influence of color on attentional bias to briefly presented emotional images.

    PubMed

    Bekhtereva, Valeria; Müller, Matthias M

    2017-10-01

    Is color a critical feature in emotional content extraction and involuntary attentional orienting toward affective stimuli? Here we used briefly presented emotional distractors to investigate the extent to which color information can influence the time course of attentional bias in early visual cortex. While participants performed a demanding visual foreground task, complex unpleasant and neutral background images were displayed in color or grayscale format for a short period of 133 ms and were immediately masked. Such a short presentation poses a challenge for visual processing. In the visual detection task, participants attended to flickering squares that elicited the steady-state visual evoked potential (SSVEP), allowing us to analyze the temporal dynamics of the competition for processing resources in early visual cortex. Concurrently we measured the visual event-related potentials (ERPs) evoked by the unpleasant and neutral background scenes. The results showed (a) that the distraction effect was greater with color than with grayscale images and (b) that it lasted longer with colored unpleasant distractor images. Furthermore, classical and mass-univariate ERP analyses indicated that, when presented in color, emotional scenes elicited more pronounced early negativities (N1-EPN) relative to neutral scenes, than when the scenes were presented in grayscale. Consistent with neural data, unpleasant scenes were rated as being more emotionally negative and received slightly higher arousal values when they were shown in color than when they were presented in grayscale. Taken together, these findings provide evidence for the modulatory role of picture color on a cascade of coordinated perceptual processes: by facilitating the higher-level extraction of emotional content, color influences the duration of the attentional bias to briefly presented affective scenes in lower-tier visual areas.

  3. Animating Preservice Teachers' Noticing

    ERIC Educational Resources Information Center

    de Araujo, Zandra; Amador, Julie; Estapa, Anne; Weston, Tracy; Aming-Attai, Rachael; Kosko, Karl W.

    2015-01-01

    The incorporation of animation in mathematics teacher education courses is one method for transforming practices and promoting practice-based education. Animation can be used as an approximation of practice that engages preservice teachers (PSTs) in creating classroom scenes in which they select characters, regulate movement, and construct…

  4. Preliminary Comparisons of the Information Content and Utility of TM Versus MSS Data

    NASA Technical Reports Server (NTRS)

    Markham, B. L.

    1984-01-01

    Comparisons were made between subscenes from the first TM scene acquired of the Washington, D.C. area and an MSS scene acquired approximately one year earlier. Three types of analyses were conducted to compare TM and MSS data: a water body analysis, a principal components analysis and a spectral clustering analysis. The water body analysis compared the capability of the TM to that of the MSS for detecting small uniform targets. Of the 59 ponds located on aerial photographs, 34 (58%) were detected by the TM with six commission errors (15%), and 13 (22%) were detected by the MSS with three commission errors (19%). The smallest water body detected by the TM was 16 meters; the smallest detected by the MSS was 40 meters. For the principal components analysis, means and covariance matrices were calculated for each subscene, and principal components images were generated and characterized. In the spectral clustering comparison, each scene was independently clustered and the clusters were assigned to informational classes. The preliminary comparison indicated that TM data provide enhancements over MSS in terms of (1) small target detection and (2) data dimensionality (even with 4-band data). The extra dimension, partially resulting from TM band 1, appears useful for built-up/non-built-up area separation.
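
    The principal components step described (per-subscene means and covariance matrices, followed by principal-component images) can be sketched as below on a synthetic band stack. The band count, image size, and pixel values are placeholders, not the original Washington, D.C. data.

```python
import numpy as np

# Synthetic stand-in for a multiband subscene: bands x rows x cols.
rng = np.random.default_rng(1)
bands = rng.normal(size=(7, 64, 64))            # 7 bands, illustrative values only
pixels = bands.reshape(7, -1).T                 # one row per pixel

mean = pixels.mean(axis=0)
cov = np.cov(pixels, rowvar=False)              # band-to-band covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
order = eigvals.argsort()[::-1]

# Principal-component images: project pixels onto eigenvectors, reshape to scene.
pc_images = ((pixels - mean) @ eigvecs[:, order]).T.reshape(7, 64, 64)
explained = eigvals[order] / eigvals.sum()
print("variance explained by first two components:", explained[:2].round(3))
```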

  5. Forensic entomology cases in Thailand: a review of cases from 2000 to 2006.

    PubMed

    Sukontason, Kom; Narongchai, Paitoon; Kanchai, Chaturong; Vichairat, Karnda; Sribanditmongkol, Pongruk; Bhoopat, Tanin; Kurahashi, Hiromu; Chockjamsai, Manoch; Piangjai, Somsak; Bunchu, Nophawan; Vongvivach, Somsak; Samai, Wirachai; Chaiwong, Tarinee; Methanitikorn, Rungkanta; Ngern-Klun, Rachadawan; Sripakdee, Duanghatai; Boonsriwong, Worachote; Siriwattanarungsee, Sirisuda; Srimuangwong, Chaowakit; Hanterdsith, Boonsak; Chaiwan, Khankam; Srisuwan, Chalard; Upakut, Surasak; Moopayak, Kittikhun; Vogtsberger, Roy C; Olson, Jimmy K; Sukontason, Kabkaew L

    2007-10-01

    This paper presents and discusses 30 cases of cadavers that had been transferred for forensic entomology investigations to the Department of Forensic Medicine, Faculty of Medicine, Chiang Mai University, northern Thailand, from 2000 to 2006. A variety of death scenes was documented, including forested areas and suburban and urban outdoor and indoor environments. The fly specimens obtained from the corpses were most commonly blow flies of the family Calliphoridae, and consisted of Chrysomya megacephala (F.), Chrysomya rufifacies (Macquart), Chrysomya villeneuvi Patton, Chrysomya nigripes Aubertin, Chrysomya bezziana Villeneuve, Chrysomya chani Kurahashi, Lucilia cuprina (Wiedemann), Hemipyrellia ligurriens (Wiedemann), and two unknown species. Flies of the families Muscidae [Hydrotaea spinigera Stein, Synthesiomyia nudiseta (Wulp)], Piophilidae [Piophila casei (L.)], Phoridae [Megaselia scalaris (Loew)], Sarcophagidae [Parasarcophaga ruficornis (F.) and three unknown species], and Stratiomyiidae (Sargus sp.) were also collected from these human remains. Larvae and adults of the beetle Dermestes maculatus DeGeer (Coleoptera: Dermestidae) were also found in some cases. Chrysomya megacephala and C. rufifacies were the most common species found in the ecologically varied death scene habitats associated with both urban and forested areas, while C. nigripes was commonly discovered in forested places. S. nudiseta was collected only from corpses found at indoor death scenes.

  6. Factors Influencing Quality of Pain Management in a Physician Staffed Helicopter Emergency Medical Service.

    PubMed

    Oberholzer, Nicole; Kaserer, Alexander; Albrecht, Roland; Seifert, Burkhardt; Tissi, Mario; Spahn, Donat R; Maurer, Konrad; Stein, Philipp

    2017-07-01

    Pain is frequently encountered in the prehospital setting and needs to be treated quickly and sufficiently. However, incidences of insufficient analgesia after prehospital treatment by emergency medical services are reported to be as high as 43%. The purpose of this analysis was to identify modifiable factors in a specific emergency patient cohort that influence the pain suffered by patients when admitted to the hospital. For that purpose, this retrospective observational study included all patients with significant pain treated by a Swiss physician-staffed helicopter emergency service between April and October 2011 with the following characteristics to limit selection bias: Age > 15 years, numerical rating scale (NRS) for pain documented at the scene and at hospital admission, NRS > 3 at the scene, initial Glasgow coma scale > 12, and National Advisory Committee for Aeronautics score < VI. Univariate and multivariable logistic regression analyses were performed to evaluate patient and mission characteristics of helicopter emergency service associated with insufficient pain management. A total of 778 patients were included in the analysis. Insufficient pain management (NRS > 3 at hospital admission) was identified in 298 patients (38%). Factors associated with insufficient pain management were higher National Advisory Committee for Aeronautics scores, high NRS at the scene, nontrauma patients, no analgesic administration, and treatment by a female physician. In 16% (128 patients), despite ongoing pain, no analgesics were administered. Factors associated with this untreated persisting pain were short time at the scene (below 10 minutes), secondary missions of helicopter emergency service, moderate pain at the scene, and nontrauma patients. Sufficient management of severe pain is significantly better if ketamine is combined with an opioid (65%), compared to a ketamine or opioid monotherapy (46%, P = .007). In the studied specific Swiss cohort, nontrauma patients, patients on secondary missions, patients treated only for a short time at the scene before transport, patients who receive no analgesic, and treatment by a female physician may be risk factors for insufficient pain management. Patients suffering pain at the scene (NRS > 3) should receive an analgesic whenever possible. Patients with severe pain at the scene (NRS ≥ 8) may benefit from the combination of ketamine with an opioid. The finding about sex differences concerning analgesic administration is intriguing and possibly worthy of further study.

  7. Perceptual load in different regions of the visual scene and its relevance for driving.

    PubMed

    Marciano, Hadas; Yeshurun, Yaffa

    2015-06-01

    The aim of this study was to better understand the role played by perceptual load, at both central and peripheral regions of the visual scene, in driving safety. Attention is a crucial factor in driving safety, and previous laboratory studies suggest that perceptual load is an important factor determining the efficiency of attentional selectivity. Yet, the effects of perceptual load on driving were never studied systematically. Using a driving simulator, we orthogonally manipulated the load levels at the road (central load) and its sides (peripheral load), while occasionally introducing critical events at one of these regions. Perceptual load affected driving performance at both regions of the visual scene. Critically, the effect was different for central versus peripheral load: Whereas load levels on the road mainly affected driving speed, load levels on its sides mainly affected the ability to detect critical events initiating from the roadsides. Moreover, higher levels of peripheral load impaired performance but mainly with low levels of central load, replicating findings with simple letter stimuli. Perceptual load has a considerable effect on driving, but the nature of this effect depends on the region of the visual scene at which the load is introduced. Given the observed importance of perceptual load, authors of future studies of driving safety should take it into account. Specifically, these findings suggest that our understanding of factors that may be relevant for driving safety would benefit from studying these factors under different levels of load at different regions of the visual scene. © 2014, Human Factors and Ergonomics Society.

  8. MAC Europe 1991: Evaluation of AVIRIS, GER imaging spectrometry data for the land application testsite Oberpfaffenhofen

    NASA Technical Reports Server (NTRS)

    Lehmann, F.; Richter, R.; Rothfuss, H.; Werner, K.; Hausknecht, P.; Mueller, A.; Strobl, P.

    1992-01-01

    During the MAC Europe 91 campaign, the area of Oberpfaffenhofen, including the land application test site Oberpfaffenhofen, was flown by the AVIRIS imaging spectrometer, the GER 2 imaging spectrometer (63-band scanner), and two SAR systems (NASA/JPL AIRSAR and DLR E-SAR). In parallel to the overflights, ground spectrometry (ASD, IRIS M IV) and atmospheric measurements were carried out in order to provide data for optical sensor calibration. Ground spectrometry measurements were carried out in the runway area of the DLR research center Oberpfaffenhofen. This area had also been used as a calibration target during the GER 2 European flight campaign EISAC 89. The land application test site Oberpfaffenhofen is located 3 km north of the DLR research center. During the MAC Europe 91 campaign, a ground survey was carried out for documentation in the ground information database (vegetation type, vegetation geometry, soil type, and soil mixture). Crop stands analyzed were corn, barley and rape. The DLR runway area and the land application test site Oberpfaffenhofen were flown with the AVIRIS on 29 July and with the GER 2 on 12 and 23 July and 3 September. AVIRIS and GER 2 scenes were processed and atmospherically corrected in preparation for an integrated analysis of the optical and radar data. For the AVIRIS and the GER 2 scenes, signal-to-noise ratio (SNR) estimates were calculated. An example of the reflectance of 6 calibration targets inside a GER 2 scene of Oberpfaffenhofen is given, as are SNR values for the GER 2 for a medium-albedo target. The integrated analysis of the optical and radar data was carried out in cooperation with the DLR Institute for Microwave Technologies.
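
    One common way to obtain the per-band SNR estimates mentioned above is a mean-to-standard-deviation ratio over a spatially homogeneous calibration target. The sketch below applies that generic estimator to a synthetic 63-band cube; only the band count echoes the GER 2 description, while the data, target window, and estimator choice are assumptions.

```python
import numpy as np

def target_snr(cube, row_slice, col_slice):
    """Mean/standard-deviation SNR per band over a homogeneous calibration target.

    A common estimator, shown for illustration; the campaign report may have
    used a different method.
    """
    patch = cube[:, row_slice, col_slice]            # bands x target pixels
    patch = patch.reshape(patch.shape[0], -1)
    return patch.mean(axis=1) / patch.std(axis=1)

rng = np.random.default_rng(2)
cube = 100 + 5 * rng.normal(size=(63, 128, 128))     # synthetic 63-band cube
print(target_snr(cube, slice(10, 20), slice(30, 40))[:5].round(1))
```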

  9. Choosing Your Poison: Optimizing Simulator Visual System Selection as a Function of Operational Tasks

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.; Kaiser, Mary K.

    2013-01-01

    Although current-technology simulator visual systems can achieve extremely realistic levels, they do not completely replicate the experience of a pilot sitting in the cockpit, looking at the outside world. Some differences in experience are due to visual artifacts, or perceptual features that would not be present in a naturally viewed scene. Others are due to features that are missing from the simulated scene. In this paper, these differences will be defined and discussed. The significance of these differences will be examined as a function of several particular operational tasks. A framework to facilitate the choice of visual system characteristics based on operational task requirements will be proposed.

  10. How visual attention is modified by disparities and textures changes?

    NASA Astrophysics Data System (ADS)

    Khaustova, Dar'ya; Fournier, Jérome; Wyckens, Emmanuel; Le Meur, Olivier

    2013-03-01

    The 3D image/video quality of experience is a multidimensional concept that depends on 2D image quality, depth quantity and visual comfort. The relationship between these parameters is not yet clearly defined. From this perspective, we aim to understand how texture complexity, depth quantity and visual comfort influence the way people observe 3D content in comparison with 2D. Six scenes with different structural parameters were generated using Blender software. For these six scenes, the following parameters were modified: the texture complexity and the amount of depth, the latter by changing the camera baseline and the convergence distance at the shooting side. Our study was conducted using an eye-tracker and a 3DTV display. During the eye-tracking experiment, each observer freely examined images with different depth levels and texture complexities. To avoid memory bias, we ensured that each observer saw each scene content only once. Collected fixation data were used to build saliency maps and to analyze differences between 2D and 3D conditions. Our results show that the introduction of disparity shortened saccade length; however, fixation durations remained unaffected. An analysis of the saliency maps did not reveal any differences between 2D and 3D conditions over the full viewing duration of 20 s. When the whole period was divided into smaller intervals, we found that for the first 4 s the introduced disparity was conducive to the selection of salient regions. However, this contribution is quite minimal if the correlation between saliency maps is analyzed. Nevertheless, we did not find that discomfort (or comfort) had any influence on visual attention. We believe that existing metrics and methods are depth-insensitive and do not reveal such differences. Based on the analysis of heat maps and paired t-tests of inter-observer visual congruency values, we deduced that the selected areas of interest depend on texture complexities.
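
    The saliency-map comparison described (fixation data accumulated into heat maps per condition, then correlated between 2D and 3D viewing) can be sketched as follows. The Gaussian width, map resolution, and random fixation coordinates are assumptions for illustration, not the study's parameters.

```python
import numpy as np

def fixation_map(fixations, shape=(72, 128), sigma=3.0):
    """Accumulate Gaussian blobs at fixation locations (a simple heat-map model)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    heat = np.zeros(shape)
    for fy, fx in fixations:
        heat += np.exp(-((ys - fy) ** 2 + (xs - fx) ** 2) / (2 * sigma ** 2))
    return heat / heat.sum()

# Random stand-ins for observed fixation coordinates in the 2D and 3D conditions.
rng = np.random.default_rng(3)
fix_2d = list(zip(rng.integers(0, 72, 40), rng.integers(0, 128, 40)))
fix_3d = list(zip(rng.integers(0, 72, 40), rng.integers(0, 128, 40)))

map_2d = fixation_map(fix_2d)
map_3d = fixation_map(fix_3d)
corr = np.corrcoef(map_2d.ravel(), map_3d.ravel())[0, 1]
print("2D vs 3D saliency-map correlation:", round(float(corr), 3))
```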

  11. Epidemiology and location of primary retrieval missions in a Scottish aeromedical service.

    PubMed

    Neagle, Gregg; Curatolo, Lisa; Ferris, John; Donald, Mike; Hearns, Stephen; Corfield, Alasdair R

    2017-07-25

    Prehospital critical care teams comprising an appropriately trained physician and paramedic or nurse have been associated with improved outcomes in selected trauma patients. These teams are a scarce and expensive resource, especially when delivered by rotary air assets. The optimal tasking of prehospital critical care teams is therefore vital and remains a subject of debate. Emergency Medical Retrieval Service (EMRS) provides a prehospital critical care response team to incidents over a large area of Scotland either by air or by road. A convenience sample of consecutive EMRS missions covering a period of 18 months from May 2013 to January 2015 was taken. These missions were matched with the ambulance service information on geographical location of the incident. In order to assess the appropriateness of tasking, interventions undertaken on each mission were analysed and divided into two subcategories: 'critical care interventions' and 'advanced medical interventions'. A tasking was deemed appropriate if it included either category of intervention or if a patient was pronounced life extinct at the scene. A total of 1279 primary missions were undertaken during the study period. Of these, 493 primary missions met the inclusion criteria and generated complete location data. The median distance to scene was calculated as 5.6 miles for land responses and 34.2 miles for air responses. Overall, critical care interventions were performed on 17% (84/493) of patients. A further 21% (102/493) of patients had an advanced medical intervention. Including those patients for whom life was pronounced extinct on scene by the EMRS team, a total of 42% (206/493) taskings were appropriate. Overall, our data show a wide geographical spread of tasking for our service, which is in keeping with other suburban/rural models of prehospital care. Tasking accuracy is also comparable to the accuracy shown by other similar services.

  12. Radar backscattering from snow facies of the Greenland ice sheet: Results from the AIRSAR 1991 campaign

    NASA Technical Reports Server (NTRS)

    Rignot, Eric; Jezek, K.; Vanzyl, J. J.; Drinkwater, Mark R.; Lou, Y. L.

    1993-01-01

    In June 1991, the NASA/JPL airborne SAR (AIRSAR) acquired C-band (lambda = 5.6 cm), L-band (lambda = 24 cm), and P-band (lambda = 68 cm) polarimetric SAR data over the Greenland ice sheet. These data are processed using version 3.55 of the AIRSAR processor, which provides radiometrically and polarimetrically calibrated images. The internal calibration of the AIRSAR data is cross-checked using the radar response from corner reflectors deployed prior to flight in one of the scenes. In addition, a quantitative assessment of the noise power level at various frequencies and polarizations is made in all the scenes. Synoptic SAR data corresponding to a swath about 12 km wide by 50 km long (compared to the standard 12 x 12 km size of high-resolution scenes) are also processed and calibrated to study transitions in radar backscatter as a function of snow facies at selected frequencies and polarizations. The snow facies on the Greenland ice sheet are traditionally categorized based on differences in melting regime during the summer months. The interior of Greenland corresponds to the dry snow zone, where terrain elevation is the highest and no snow melt occurs. The lowest-elevation boundary of the dry snow zone is traditionally known as the dry snow line. Beneath it is the percolation zone, where melting occurs in the summer and water percolates through the snow, freezing at depth to form massive ice lenses and ice pipes. At the downslope margin of this zone is the wet snow line. Below it, the wet snow zone corresponds to the lowest elevations where snow remains at the end of the summer. Ablation produces enough meltwater to create areas of snow saturated with water, together with ponds and lakes. The lowest-altitude ablation zone sees enough summer melt to remove all traces of seasonal snow accumulation, such that the surface comprises bare glacier ice.

  13. Computer vision for RGB-D sensors: Kinect and its applications.

    PubMed

    Shao, Ling; Han, Jungong; Xu, Dong; Shotton, Jamie

    2013-10-01

    With the Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use as an off-the-shelf technology. This special issue is specifically dedicated to new algorithms and/or new applications based on the Kinect (or similar RGB-D) sensors. In total, we received over ninety submissions from more than twenty countries all around the world. The submissions cover a wide range of areas including object and scene classification, 3-D pose estimation, visual tracking, data fusion, human action/activity recognition, 3-D reconstruction, mobile robotics, and so on. After two rounds of review by at least two (mostly three) expert reviewers for each paper, the Guest Editors have selected twelve high-quality papers to be included in this highly popular special issue. The papers that comprise this issue are briefly summarized.

  14. Recall versus familiarity when recall fails for words and scenes: the differential roles of the hippocampus, perirhinal cortex, and category-specific cortical regions.

    PubMed

    Ryals, Anthony J; Cleary, Anne M; Seger, Carol A

    2013-01-25

    This fMRI study examined recall and familiarity for words and scenes using the novel recognition without cued recall (RWCR) paradigm. Subjects performed a cued recall task in which half of the test cues resembled studied items (and thus were familiar) and half did not. Subjects also judged the familiarity of the cue itself. RWCR is the finding that, among cues for which recall fails, subjects generally rate cues that resemble studied items as more familiar than cues that do not. For words, left and right hippocampal activity increased when recall succeeded relative to when it failed. When recall failed, right hippocampal activity was decreased for familiar relative to unfamiliar cues. In contrast, right Prc activity increased for familiar cues for which recall failed relative to both familiar cues for which recall succeeded and to unfamiliar cues. For scenes, left hippocampal activity increased when recall succeeded relative to when it failed but did not differentiate familiar from unfamiliar cues when recall failed. In contrast, right Prc activity increased for familiar relative to unfamiliar cues when recall failed. Category-specific cortical regions showed effects unique to their respective stimulus types: The visual word form area (VWFA) showed effects for recall vs. familiarity specific to words, and the parahippocampal place area (PPA) showed effects for recall vs. familiarity specific to scenes. In both cases, these effects were such that there was increased activity occurring during recall relative to when recall failed, and decreased activity occurring for familiar relative to unfamiliar cues when recall failed. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Creating and Sustaining Online Professional Learning Communities. Technology, Education--Connections

    ERIC Educational Resources Information Center

    Falk, Joni K., Ed.; Drayton, Brian, Ed.

    2009-01-01

    This volume presents the work of trailblazing researchers and developers of electronic communities for professional learning. It illuminates the essential work behind the scenes in building successful online communities and scaffolding site interactions, including content selection, creation and management, administrative structures, tools and…

  16. Text Detection and Translation from Natural Scenes

    DTIC Science & Technology

    2001-06-01

    is no explicit tags around Chinese words. A module for Chinese word segmentation is included in the system. This segmentor uses a word-frequency list to make segmentation decisions. We tested the EBMT-based method using 50 randomly selected signs from our database, assuming perfect sign
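
    A word-frequency list can drive segmentation in several ways; a common baseline is greedy longest match, with frequency used to break ties, sketched below. The mini lexicon and the tie-breaking rule are invented for illustration and are not taken from the report's system.

```python
# Toy frequency lexicon (made up for illustration; the real system's list is not shown).
freq = {"北京": 5000, "大学": 4000, "北京大学": 1500, "欢迎": 3000, "你": 8000}

def segment(text, lexicon, max_len=4):
    """Greedy longest-match segmentation, preferring higher-frequency entries on ties."""
    out, i = [], 0
    while i < len(text):
        candidates = [text[i:i + n] for n in range(min(max_len, len(text) - i), 0, -1)
                      if text[i:i + n] in lexicon]
        # Fall back to a single character when nothing in the lexicon matches.
        word = max(candidates, key=lambda w: (len(w), lexicon[w])) if candidates else text[i]
        out.append(word)
        i += len(word)
    return out

print(segment("北京大学欢迎你", freq))   # ['北京大学', '欢迎', '你']
```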

  17. Cortical systems mediating visual attention to both objects and spatial locations

    PubMed Central

    Shomstein, Sarah; Behrmann, Marlene

    2006-01-01

    Natural visual scenes consist of many objects occupying a variety of spatial locations. Given that the plethora of information cannot be processed simultaneously, the multiplicity of inputs compete for representation. Using event-related functional MRI, we show that attention, the mechanism by which a subset of the input is selected, is mediated by the posterior parietal cortex (PPC). Of particular interest is that PPC activity is differentially sensitive to the object-based properties of the input, with enhanced activation for those locations bound by an attended object. Of great interest too is the ensuing modulation of activation in early cortical regions, reflected as differences in the temporal profile of the blood oxygenation level-dependent (BOLD) response for within-object versus between-object locations. These findings indicate that object-based selection results from an object-sensitive reorienting signal issued by the PPC. The dynamic circuit between the PPC and earlier sensory regions then enables observers to attend preferentially to objects of interest in complex scenes. PMID:16840559

  18. Developing collaborative classifiers using an expert-based model

    USGS Publications Warehouse

    Mountrakis, G.; Watts, R.; Luo, L.; Wang, Jingyuan

    2009-01-01

    This paper presents a hierarchical, multi-stage adaptive strategy for image classification. We iteratively apply various classification methods (e.g., decision trees, neural networks), identify regions of parametric and geographic space where accuracy is low, and in these regions test and apply alternate methods, repeating the process until the entire image is classified. Currently, classifiers are evaluated through human input using an expert-based system; therefore, this paper acts as the proof of concept for collaborative classifiers. Because we decompose the problem into smaller, more manageable sub-tasks, our classification exhibits increased flexibility compared to existing methods since classification methods are tailored to the idiosyncrasies of specific regions. A major benefit of our approach is its scalability and collaborative support, since selected low-accuracy classifiers can be easily replaced with others without affecting classification accuracy in high-accuracy areas. At each stage, we develop spatially explicit accuracy metrics that provide straightforward assessment of results by non-experts and point to areas that need algorithmic improvement or ancillary data. Our approach is demonstrated in the task of detecting impervious surface areas, an important indicator of human-induced alterations to the environment, using a 2001 Landsat scene from Las Vegas, Nevada. © 2009 American Society for Photogrammetry and Remote Sensing.
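
    The iterative strategy described (apply a classifier, find regions of parametric or geographic space where it performs poorly, and hand those regions to an alternate method) is sketched below. In the paper the low-accuracy regions are judged through an expert-based system; here a held-out validation split, quartile bins of one feature, and a 0.85 accuracy threshold stand in for that step, and the two classifiers are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Illustrative data: each sample plays the role of a pixel with 8 features.
X, y = make_classification(n_samples=3000, n_features=8, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.4, random_state=0)

stages = [DecisionTreeClassifier(max_depth=3, random_state=0),
          MLPClassifier(max_iter=1000, random_state=0)]
for s in stages:
    s.fit(X_tr, y_tr)

# Quartile bins of one feature act as "regions of parametric space".
edges = np.quantile(X_tr[:, 0], [0.25, 0.5, 0.75])
region = np.digitize(X_val[:, 0], edges)

assignment = {}                                  # region id -> index of chosen stage
for r in np.unique(region):
    mask = region == r
    for i, s in enumerate(stages):
        acc = (s.predict(X_val[mask]) == y_val[mask]).mean()
        if acc >= 0.85 or i == len(stages) - 1:  # last stage takes whatever is left
            assignment[r] = i
            break

print("stage assigned to each region:", assignment)
```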

  19. Vauxhall’s post-industrial pleasure gardens: "death wish" and hedonism in 21st-century London.

    PubMed

    Andersson, Johan

    2011-01-01

    In recent years Vauxhall in south London has been transformed and rebranded as an urban leisure zone for gay men. Disused railway arches and warehouses have been converted into nightclubs and a significant night-time economy has developed rivalling Soho's existing gay village. However, with its commodified forms of public sex and high levels of recreational drug use, Vauxhall's club scene looks rather different from the British gay villages of the 1990s. This article examines how the area's nightlife entrepreneurs have capitalised on the recent liberalisation of licensing laws while drawing on the historical associations with the Vauxhall Pleasure Gardens (1660-1859) in attempts to market the area as a site of embedded hedonism. Overall, the aesthetic and cultural themes of Vauxhall's club scene seem to contradict earlier assumptions about the desexualisation and sanitisation of contemporary gay culture.

  20. Wind vs. Dust Devil Streaks

    NASA Technical Reports Server (NTRS)

    2004-01-01

    22 February 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image presents a fine illustration of the difference between streaks made by dust devils and streaks made by wind gusts. Dust devils are usually solitary, spinning vortices. They resemble a tornado, or the swirling motion of a familiar, Tasmanian cartoon character. Wind gusts, on the other hand, can cover a larger area and affect more terrain at the same time. The dark, straight, and parallel features resembling scrape marks near the right/center of this image are thought to have been formed by a singular gust of wind, whereas the more haphazard dark streaks that crisscross the scene were formed by dozens of individual dust devils, acting at different times. This southern summer image is located in Noachis Terra near 67.0°S, 316.2°W. Sunlight illuminates the scene from the upper left; the picture covers an area 3 km (1.9 mi) wide.

  1. Atypical Exit Wound in High-Voltage Electrocution.

    PubMed

    Parakkattil, Jamshid; Kandasamy, Shanmugam; Das, Siddhartha; Devnath, Gerard Pradeep; Chaudhari, Vinod Ashok; Shaha, Kusa Kumar

    2017-12-01

    Electrocution fatality cases are difficult to investigate. High-voltage electrocution burns resemble burns caused by other sources, especially if the person survives for a few days. In such cases, circumstantial evidence, if correlated with the autopsy findings, helps in determining the cause and manner of death. In addition, the crime scene findings also help to explain the pattern of injuries observed at autopsy. A farmer came in contact with a high-voltage transmission wire and sustained superficial to deep burns over his body. A charred and deeply scorched area was seen over the face, which was suggestive of the electric entry wound. The exit wound was present over both feet and the lower legs and was atypical in the form of a burnt area of peeled blistered skin, charring, and deep scorching. The injuries were correlated with the crime scene findings, and the circumstances that led to his electrocution are discussed here.

  2. Proto-object categorisation and local gist vision using low-level spatial features.

    PubMed

    Martins, Jaime A; Rodrigues, J M F; du Buf, J M H

    2015-09-01

    Object categorisation is a research area with significant challenges, especially in conditions with bad lighting, occlusions, different poses and similar objects. This makes systems that rely on precise information unable to perform efficiently, like a robotic arm that needs to know which objects it can reach. We propose a biologically inspired object detection and categorisation framework that relies on robust low-level object shape. Using only edge conspicuity and disparity features for scene figure-ground segregation and object categorisation, a trained neural network classifier can quickly categorise broad object families and consequently bootstrap a low-level scene gist system. We argue that similar processing is possibly located in the parietal pathway leading to the LIP cortex and, via areas V5/MT and MST, providing useful information to the superior colliculus for eye and head control. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  3. Volcano Near Pavonis Mons

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-549, 19 November 2003

    The volcanic plains to the east, southeast, and south of the giant Tharsis volcano, Pavonis Mons, are dotted by dozens of small volcanoes. This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows an example located near 2.1°S, 109.1°W. The elongate depression in the lower left (southwest) quarter of the image is the collapsed vent area for this small, unnamed volcano. A slightly sinuous, leveed channel runs from the depression toward the upper right (north-northeast); this is the trace of a collapsed lava tube. The entire scene has been mantled by dust, such that none of the original volcanic rocks are exposed, except for minor occurrences on the steepest slopes in the vent area. The scene is 3 km (1.9 mi) wide and illuminated by sunlight from the left/upper left.

  4. Scene recognition following locomotion around a scene.

    PubMed

    Motes, Michael A; Finlay, Cory A; Kozhevnikov, Maria

    2006-01-01

    Effects of locomotion on scene-recognition reaction time (RT) and accuracy were studied. In experiment 1, observers memorized an 11-object scene and made scene-recognition judgments on subsequently presented scenes from the encoded view or different views (ie scenes were rotated or observers moved around the scene, both from 40 degrees to 360 degrees). In experiment 2, observers viewed different 5-object scenes on each trial and made scene-recognition judgments from the encoded view or after moving around the scene, from 36 degrees to 180 degrees. Across experiments, scene-recognition RT increased (in experiment 2 accuracy decreased) with angular distance between encoded and judged views, regardless of how the viewpoint changes occurred. The findings raise questions about conditions in which locomotion produces spatially updated representations of scenes.

  5. Smoking scenes in popular Japanese serial television dramas: descriptive analysis during the same 3-month period in two consecutive years.

    PubMed

    Kanda, Hideyuki; Okamura, Tomonori; Turin, Tanvir Chowdhury; Hayakawa, Takehito; Kadowaki, Takashi; Ueshima, Hirotsugu

    2006-06-01

    Japanese serial television dramas are becoming very popular overseas, particularly in other Asian countries. Exposure to smoking scenes in movies and television dramas has been known to trigger initiation of habitual smoking in young people. Smoking scenes in Japanese dramas may affect the smoking behavior of many young Asians. We examined smoking scenes and smoking-related items in serial television dramas targeting young audiences in Japan during the same season in two consecutive years. Fourteen television dramas targeting the young audience broadcast between July and September in 2001 and 2002 were analyzed. A total of 136 h 42 min of television programs were divided into unit scenes of 3 min (a total of 2734 unit scenes). All the unit scenes were reviewed for smoking scenes and smoking-related items. Of the 2734 3-min unit scenes, 205 (7.5%) were actual smoking scenes and 387 (14.2%) depicted smoking environments with the presence of smoking-related items, such as ash trays. In 185 unit scenes (90.2% of total smoking scenes), actors were shown smoking. Actresses were less frequently shown smoking (9.8% of total smoking scenes). Smoking characters in dramas were in the 20-49 age group in 193 unit scenes (94.1% of total smoking scenes). In 96 unit scenes (46.8% of total smoking scenes), at least one non-smoker was present in the smoking scenes. The smoking locations were mainly indoors, including offices, restaurants and homes (122 unit scenes, 59.6%). The most common smoking-related items shown were ash trays (in 45.5% of smoking-item-related scenes) and cigarettes (in 30.2% of smoking-item-related scenes). Only 3 unit scenes (0.1 % of all scenes) promoted smoking prohibition. This was a descriptive study to examine the nature of smoking scenes observed in Japanese television dramas from a public health perspective.

  6. Postponing the Encyclopedia: Children as Researchers.

    ERIC Educational Resources Information Center

    Pinsel, Marc I.; Pinsel, Jerry K.

    Research is the planned collection, selection, and processing of information that typically takes three forms--historical, descriptive, or experimental. Historical research seeks to uncover facts with respect to events that have already happened, descriptive research seeks to uncover facts with respect to the current scene of events, and…

  7. Higher Education Management. The Key Elements.

    ERIC Educational Resources Information Center

    Warner, David, Ed.; Palfreyman, David, Ed.

    This book presents the views of 15 individual authors on the principles of management in higher education from a British perspective. Preliminary material includes brief biographical sketches of each contributing author and a list of selected abbreviations. Individual chapters are: (1) "Setting the Scene" (David Palfreyman and David…

  8. The Language Testing Cycle: From Inception to Washback. Series S, Number 13.

    ERIC Educational Resources Information Center

    Wigglesworth, Gillian, Ed.; Elder, Catherine, Ed.

    A selection of essays on language testing includes: "Perspectives on the Testing Cycle: Setting the Scene" (Catherine Elder, Gillian Wigglesworth); "The Politicisation of English: The Case of the STEP Test and the Chinese Students" (Lesleyanne Hawthorne); "Developing Language Tests for Specific Populations" (Rosemary…

  9. Blur Detection is Unaffected by Cognitive Load.

    PubMed

    Loschky, Lester C; Ringer, Ryan V; Johnson, Aaron P; Larson, Adam M; Neider, Mark; Kramer, Arthur F

    2014-03-01

    Blur detection is affected by retinal eccentricity, but is it also affected by attentional resources? Research showing effects of selective attention on acuity and contrast sensitivity suggests that allocating attention should increase blur detection. However, research showing that blur affects selection of saccade targets suggests that blur detection may be pre-attentive. To investigate this question, we carried out experiments in which viewers detected blur in real-world scenes under varying levels of cognitive load manipulated by the N-back task. We used adaptive threshold estimation to measure blur detection thresholds at 0°, 3°, 6°, and 9° eccentricity. Participants carried out blur detection as a single task, a single task with to-be-ignored letters, or an N-back task with four levels of cognitive load (0, 1, 2, or 3-back). In Experiment 1, blur was presented gaze-contingently for occasional single eye fixations while participants viewed scenes in preparation for an easy picture recognition memory task, and the N-back stimuli were presented auditorily. The results for three participants showed a large effect of retinal eccentricity on blur thresholds, significant effects of N-back level on N-back performance, scene recognition memory, and gaze dispersion, but no effect of N-back level on blur thresholds. In Experiment 2, we replicated Experiment 1 but presented the images tachistoscopically for 200 ms (half with, half without blur), to determine whether gaze-contingent blur presentation in Experiment 1 had produced attentional capture by blur onset during a fixation, thus eliminating any effect of cognitive load on blur detection. The results with three new participants replicated those of Experiment 1, indicating that the use of gaze-contingent blur presentation could not explain the lack of effect of cognitive load on blur detection. Thus, apparently blur detection in real-world scene images is unaffected by attentional resources, as manipulated by the cognitive load produced by the N-back task.
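
    The abstract specifies only "adaptive threshold estimation"; one widely used form is a 1-up/2-down staircase, sketched below with a simulated observer. The step size, trial count, and observer model are assumptions, and the study may have used a different adaptive procedure (e.g., QUEST).

```python
import random

def two_down_one_up(detects, start=2.0, step=0.25, trials=40):
    """1-up/2-down staircase, converging near the 70.7%-correct blur level.

    `detects(level)` returns True when the (simulated) observer reports the blur.
    Generic illustration only; not necessarily the procedure used in the study.
    """
    level, streak, reversals, last_dir = start, 0, [], 0
    for _ in range(trials):
        if detects(level):
            streak += 1
            if streak < 2:
                continue
            level, streak, direction = max(level - step, 0.0), 0, -1
        else:
            level, streak, direction = level + step, 0, +1
        if last_dir and direction != last_dir:   # record staircase reversals
            reversals.append(level)
        last_dir = direction
    return sum(reversals[-6:]) / max(len(reversals[-6:]), 1)

# Simulated observer whose true blur-detection threshold is 1.2 (arbitrary units),
# with a 10% guess rate.
random.seed(0)
print(round(two_down_one_up(lambda lv: lv > 1.2 or random.random() < 0.1), 2))
```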

  10. Polygons in Seasonal Frost

    NASA Technical Reports Server (NTRS)

    2004-01-01

    8 February 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a summertime scene in the south polar region of the red planet. A patch of bright frost, possibly water ice, is seen in the lower third of the image. Polygon patterns that have developed in the ice as it sublimes away can be seen; these are not evident in the defrosted surfaces, so they are thought to have formed in the frost. This image is located near 82.6°S, 352.5°W. Sunlight illuminates this scene from the upper left; the image covers an area 3 km (1.9 mi) wide.

  11. Multidate Landsat lake quality monitoring program

    NASA Technical Reports Server (NTRS)

    Fisher, L. T.; Scarpace, F. L.; Thomsen, R. G.

    1979-01-01

    A unified package of files and programs has been developed to automate the multidate Landsat-derived analyses of water quality for about 3000 inland lakes throughout Wisconsin. A master lakes file which stores geographic information on the lakes, a file giving the latitudes and longitudes of control points for scene navigation, and a program to estimate control point locations and produce microfiche character maps for scene navigation are among the files and programs of the system. The use of ground coordinate systems to isolate irregular shaped areas which can be accessed at will appears to provide an economical means of restricting the size of the data set.

  12. "A cool little buzz": alcohol intoxication in the dance club scene.

    PubMed

    Hunt, Geoffrey; Moloney, Molly; Fazio, Adam

    2014-06-01

    In recent years, there has been increasing concern about youthful "binge" drinking and intoxication. Yet the meaning of intoxication remains under-theorized. This paper examines intoxication in a young adult nightlife scene, using data from a 2005-2008 National Institute on Drug Abuse-funded project on Asian American youth and nightlife. Analyzing in-depth qualitative interview data with 250 Asian American young adults in the San Francisco area, we examine their narratives about alcohol intoxication with respect to sociability, stress, and fun, and their navigation of the fine line between being "buzzed" and being "wasted." Finally, limitations of the study and directions for future research are noted.

  13. Bone fragments a body can make

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stout, S.D.; Ross, L.M. Jr.

    Data obtained from various analytical techniques applied to a number of small bone fragments recovered from a crime scene were used to provide evidence for the occurrence of a fatality. Microscopic and histomorphometric analyses confirmed that the fragments were from a human skull. X-ray microanalysis of darkened areas on the bone fragments revealed a chemical signature that matched the chemical signature of a shotgun pellet recovered at the scene of the crime. The above findings supported the deoxyribonucleic acid (DNA) fingerprint evidence which, along with other evidence, was used to convict a man for the murder of his wife, even though her body was never recovered.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Jiangye

    Up-to-date maps of installed solar photovoltaic panels are a critical input for policy and financial assessment of solar distributed generation. However, such maps for large areas are not available. With high coverage and low cost, aerial images enable large-scale mapping, but it is highly difficult to automatically identify solar panels from images, which are small objects with varying appearances dispersed in complex scenes. We introduce a new approach based on deep convolutional networks, which effectively learns to delineate solar panels in aerial scenes. The approach has successfully mapped solar panels in imagery covering 200 square kilometers in two cities, using only 12 square kilometers of training data that are manually labeled.
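
    A minimal example of the kind of convolutional, pixel-wise delineation described is sketched below in PyTorch. The network size, tile dimensions, and training data are placeholders; the abstract does not specify the actual architecture, so treat this only as an illustration of the wiring.

```python
import torch
import torch.nn as nn

class TinySolarSegNet(nn.Module):
    """Minimal fully convolutional sketch for pixel-wise solar panel delineation.

    Illustrative only: far smaller than the deep network the abstract describes.
    """
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 1),             # per-pixel panel / non-panel logit
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One training step on a fake aerial tile and mask, just to show the wiring.
model = TinySolarSegNet()
tile = torch.rand(1, 3, 128, 128)            # stand-in RGB aerial image tile
mask = (torch.rand(1, 1, 128, 128) > 0.95).float()
loss = nn.BCEWithLogitsLoss()(model(tile), mask)
loss.backward()
print("loss:", float(loss))
```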

  15. Designer's approach for scene selection in tests of preference and restoration along a continuum of natural to manmade environments.

    PubMed

    Hunter, MaryCarol R; Askarinejad, Ali

    2015-01-01

    It is well-established that the experience of nature produces an array of positive benefits to mental well-being. Much less is known about the specific attributes of green space which produce these effects. In the absence of translational research that links theory with application, it is challenging to design urban green space for its greatest restorative potential. This translational research provides a method for identifying which specific physical attributes of an environmental setting are most likely to influence preference and restoration responses. Attribute identification was based on a triangulation process invoking environmental psychology and aesthetics theories, principles of design founded in mathematics and aesthetics, and empirical research on the role of specific physical attributes of the environment in preference or restoration responses. From this integration emerged a list of physical attributes defining aspects of spatial structure and environmental content found to be most relevant to the perceptions involved with preference and restoration. The physical attribute list offers a starting point for deciphering which scene stimuli dominate or collaborate in preference and restoration responses. To support this, functional definitions and metrics-efficient methods for attribute quantification are presented. Use of these research products and the process for defining place-based metrics can provide (a) greater control in the selection and interpretation of the scenes/images used in tests of preference and restoration and (b) an expanded evidence base for well-being designers of the built environment.

  16. Designer's approach for scene selection in tests of preference and restoration along a continuum of natural to manmade environments

    PubMed Central

    Hunter, MaryCarol R.; Askarinejad, Ali

    2015-01-01

    It is well-established that the experience of nature produces an array of positive benefits to mental well-being. Much less is known about the specific attributes of green space which produce these effects. In the absence of translational research that links theory with application, it is challenging to design urban green space for its greatest restorative potential. This translational research provides a method for identifying which specific physical attributes of an environmental setting are most likely to influence preference and restoration responses. Attribute identification was based on a triangulation process invoking environmental psychology and aesthetics theories, principles of design founded in mathematics and aesthetics, and empirical research on the role of specific physical attributes of the environment in preference or restoration responses. From this integration emerged a list of physical attributes defining aspects of spatial structure and environmental content found to be most relevant to the perceptions involved with preference and restoration. The physical attribute list offers a starting point for deciphering which scene stimuli dominate or collaborate in preference and restoration responses. To support this, functional definitions and metrics—efficient methods for attribute quantification are presented. Use of these research products and the process for defining place-based metrics can provide (a) greater control in the selection and interpretation of the scenes/images used in tests of preference and restoration and (b) an expanded evidence base for well-being designers of the built environment. PMID:26347691

  17. UV 380 nm reflectivity of the Earth's surface, clouds and aerosols

    NASA Astrophysics Data System (ADS)

    Herman, J. R.; Celarier, E.; Larko, D.

    2001-03-01

    The 380 nm radiance measurements of the Total Ozone Mapping Spectrometer (TOMS) have been converted into a global data set of daily (1979-1992) Lambert equivalent reflectivities R of the Earth's surface and boundary layer (clouds, aerosols, surface haze, and snow/ice) and then corrected to RPC for the presence of partly clouded scenes. Since UV surface reflectivity is between 2 and 8% for both land and water during all seasons of the year (except for ice and snow cover), reflectivities larger than the surface value indicate the presence of clouds, haze, or aerosols in the satellite field of view. A statistical analysis of 14 years of daily reflectivity data shows that most snow-/ice-free scenes observed by TOMS have a reflectivity less than 10% for the majority of days during a year. The 380 nm reflectivity data show that the true surface reflectivity is 2-3% lower than the most frequently occurring reflectivity value for each TOMS scene as seen from space. Most likely the cause is a combination of frequently occurring boundary layer water and/or aerosol haze. For most regions the observation of extremely clear conditions needed to estimate the surface reflectivity from space is a comparatively rare occurrence. Certain areas (e.g., Australia, southern Africa, portions of northern Africa) are cloud-free more than 80% of the year, which exposes these regions to larger amounts of UV radiation than at comparable latitudes in the Northern Hemisphere. Regions over rain forests, jungle areas, Europe and Russia, the bands surrounding the Arctic and Antarctic regions, and many ocean areas have significant cloud cover (R>15%) more than half of each year. In the low to middle latitudes the areas with the heaviest cloud cover (highest reflectivity for most of the year) are the forest areas of northern South America, southern Central America, the jungle areas of equatorial Africa, and high mountain regions such as the Himalayas or the Andes. The TOMS reflectivity data show both the presence of large nearly clear ocean areas and the effects of the major ocean currents on cloud production.

  18. Best-next-view algorithm for three-dimensional scene reconstruction using range images

    NASA Astrophysics Data System (ADS)

    Banta, J. E.; Zhien, Yu; Wang, X. Z.; Zhang, G.; Smith, M. T.; Abidi, Mongi A.

    1995-10-01

    The primary focus of the research detailed in this paper is to develop an intelligent sensing module capable of automatically determining the optimal next sensor position and orientation during scene reconstruction. To facilitate a solution to this problem, we have assembled a system for reconstructing a 3D model of an object or scene from a sequence of range images. Candidates for the best-next-view position are determined by detecting and measuring occlusions to the range camera's view in an image. Ultimately, the candidate which will reveal the greatest amount of unknown scene information is selected as the best-next-view position. Our algorithm uses ray tracing to determine how much new information a given sensor perspective will reveal. We have tested our algorithm successfully on several synthetic range data streams, and found the system's results to be consistent with an intuitive human search. The models recovered by our system from range data compared well with the ideal models. Essentially, we have proven that range information of physical objects can be employed to automatically reconstruct a satisfactory dynamic 3D computer model at a minimal computational expense. This has obvious implications in the contexts of robot navigation, manufacturing, and hazardous materials handling. The algorithm we developed takes advantage of no a priori information in finding the best-next-view position.
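
    The scoring step described above (ray tracing to estimate how much unknown scene volume a candidate sensor pose would reveal) can be illustrated with a toy 2D occupancy grid; the grid, candidate poses, and ray casting below are simplified stand-ins for the paper's 3D range-image system:

      # Toy 2D stand-in for the best-next-view idea: score each candidate pose
      # by the number of still-unknown cells its rays would reveal, pick the max.
      import numpy as np

      UNKNOWN, EMPTY, OCCUPIED = 0, 1, 2              # occupancy-grid cell states
      grid = np.full((100, 100), UNKNOWN)
      grid[40:60, 40:60] = OCCUPIED                   # partially reconstructed object

      def score_view(grid, pos, n_rays=120, max_range=80):
          """Count distinct unknown cells a sensor at `pos` would reveal."""
          revealed = set()
          for ang in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
              direction = np.array([np.cos(ang), np.sin(ang)])
              for t in range(1, max_range):
                  r, c = (pos + t * direction).astype(int)
                  if not (0 <= r < grid.shape[0] and 0 <= c < grid.shape[1]):
                      break
                  if grid[r, c] == OCCUPIED:
                      break                           # ray blocked by a known surface
                  if grid[r, c] == UNKNOWN:
                      revealed.add((r, c))
          return len(revealed)

      candidates = [np.array([5.0, 5.0]), np.array([5.0, 95.0]), np.array([95.0, 50.0])]
      best_next_view = max(candidates, key=lambda p: score_view(grid, p))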

  19. Agricultural and Ranching area, Rio Sao Francisco, Brazil

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This agricultural and ranching area along the Rio Sao Francisco, Brazil (13.0S, 43.5W) has been under study for several years. See scene STS-31-92-045 for comparison. This area has many small single-family subsistence farms, large square and rectangular commercial farms, and pastures for livestock grazing. Over the several years of observation, the number and size of farms have increased and center-pivot, swing-arm irrigation systems have been installed.

  20. Where's Wally: the influence of visual salience on referring expression generation.

    PubMed

    Clarke, Alasdair D F; Elsner, Micha; Rohde, Hannah

    2013-01-01

    Referring expression generation (REG) presents the converse problem to visual search: given a scene and a specified target, how does one generate a description which would allow somebody else to quickly and accurately locate the target? Previous work in psycholinguistics and natural language processing has failed to find an important and integrated role for vision in this task. That previous work, which relies largely on simple scenes, tends to treat vision as a pre-process for extracting feature categories that are relevant to disambiguation. However, the visual search literature suggests that some descriptions are better than others at enabling listeners to search efficiently within complex stimuli. This paper presents a study testing whether participants are sensitive to visual features that allow them to compose such "good" descriptions. Our results show that visual properties (salience, clutter, area, and distance) influence REG for targets embedded in images from the Where's Wally? books. Referring expressions for large targets are shorter than those for smaller targets, and expressions about targets in highly cluttered scenes use more words. We also find that participants are more likely to mention non-target landmarks that are large, salient, and in close proximity to the target. These findings identify a key role for visual salience in language production decisions and highlight the importance of scene complexity for REG.

  1. Drivers' and non-drivers' performance in a change detection task with static driving scenes: is there a benefit of experience?

    PubMed

    Zhao, Nan; Chen, Wenfeng; Xuan, Yuming; Mehler, Bruce; Reimer, Bryan; Fu, Xiaolan

    2014-01-01

    The 'looked-but-failed-to-see' phenomenon is crucial to driving safety. Previous research utilising change detection tasks related to driving has reported inconsistent effects of driver experience on the ability to detect changes in static driving scenes. Reviewing these conflicting results, we suggest that drivers' increased ability to detect changes will only appear when the task requires a pattern of visual attention distribution typical of actual driving. By adding a distant fixation point on the road image, we developed a modified change blindness paradigm and measured detection performance of drivers and non-drivers. Drivers performed better than non-drivers only in scenes with a fixation point. Furthermore, experience effect interacted with the location of the change and the relevance of the change to driving. These results suggest that learning associated with driving experience reflects increased skill in the efficient distribution of visual attention across both the central focus area and peripheral objects. This article provides an explanation for the previously conflicting reports of driving experience effects in change detection tasks. We observed a measurable benefit of experience in static driving scenes, using a modified change blindness paradigm. These results have translational opportunities for picture-based training and testing tools to improve driver skill.

  2. Planarity constrained multi-view depth map reconstruction for urban scenes

    NASA Astrophysics Data System (ADS)

    Hou, Yaolin; Peng, Jianwei; Hu, Zhihua; Tao, Pengjie; Shan, Jie

    2018-05-01

    Multi-view depth map reconstruction is regarded as a suitable approach for 3D generation of large-scale scenes due to its flexibility and scalability. However, there are challenges when this technique is applied to urban scenes where apparent man-made regular shapes may present. To address this need, this paper proposes a planarity constrained multi-view depth (PMVD) map reconstruction method. Starting with image segmentation and feature matching for each input image, the main procedure is iterative optimization under the constraints of planar geometry and smoothness. A set of candidate local planes are first generated by an extended PatchMatch method. The image matching costs are then computed and aggregated by an adaptive-manifold filter (AMF), whereby the smoothness constraint is applied to adjacent pixels through belief propagation. Finally, multiple criteria are used to eliminate image matching outliers. (Vertical) aerial images, oblique (aerial) images and ground images are used for qualitative and quantitative evaluations. The experiments demonstrated that the PMVD outperforms the popular multi-view depth map reconstruction with an accuracy two times better for the aerial datasets and achieves an outcome comparable to the state-of-the-art for ground images. As expected, PMVD is able to preserve the planarity for piecewise flat structures in urban scenes and restore the edges in depth discontinuous areas.

  3. Infants’ Looking to Surprising Events: When Eye-Tracking Reveals More than Looking Time

    PubMed Central

    Yeung, H. Henny; Denison, Stephanie; Johnson, Scott P.

    2016-01-01

    Research on infants’ reasoning abilities often relies on looking times, which are longer to surprising and unexpected visual scenes than to unsurprising and expected ones. Few researchers have examined more precise visual scanning patterns in these scenes, and so, here, we recorded 8- to 11-month-olds’ gaze with an eye tracker as we presented a sampling event whose outcome was either surprising, neutral, or unsurprising: A red (or yellow) ball was drawn from one of three visible containers populated 0%, 50%, or 100% with identically colored balls. When measuring looking time to the whole scene, infants were insensitive to the likelihood of the sampling event, replicating failures in similar paradigms. Nevertheless, a new analysis of visual scanning showed that infants did spend more time fixating specific areas-of-interest as a function of the event likelihood. The drawn ball and its associated container attracted more looking than the other containers in the 0% condition, but this pattern was weaker in the 50% condition, and even less strong in the 100% condition. Results suggest that measuring where infants look may be more sensitive than simply how much looking there is to the whole scene. The advantages of eye tracking measures over traditional looking measures are discussed. PMID:27926920

  4. Forensic science information needs of patrol officers: The perceptions of the patrol officers, their supervisors and administrators, detectives, and crime scene technicians

    NASA Astrophysics Data System (ADS)

    Aydogdu, Eyup

    Thanks to the rapid developments in science and technology in recent decades, especially in the past two decades, forensic sciences have been making invaluable contributions to criminal justice systems. With scientific evaluation of physical evidence, policing has become more effective in fighting crime and criminals. On the other hand, law enforcement personnel have made mistakes during the detection, protection, collection, and evaluation of physical evidence. Law enforcement personnel, especially patrol officers, have been criticized for ignoring or overlooking physical evidence at crime scenes. This study, conducted in a large American police department, aimed to determine the perceptions of patrol officers, their supervisors and administrators, detectives, and crime scene technicians about the forensic science needs of patrol officers. The results showed no statistically significant difference among the perceptions of the said groups. More than half of the respondents perceived that 14 out of 16 areas of knowledge were important for patrol officers to have: crime scene documentation, evidence collection, interviewing techniques, firearm evidence, latent and fingerprint evidence, blood evidence, death investigation information, DNA evidence, document evidence, electronically recorded evidence, trace evidence, biological fluid evidence, arson and explosive evidence, and impression evidence. Less than half of the respondents perceived forensic entomology and plant evidence as important for patrol officers.

  5. Finding and recognizing objects in natural scenes: complementary computations in the dorsal and ventral visual systems

    PubMed Central

    Rolls, Edmund T.; Webb, Tristan J.

    2014-01-01

    Searching for and recognizing objects in complex natural scenes is implemented by multiple saccades until the eyes reach within the reduced receptive field sizes of inferior temporal cortex (IT) neurons. We analyze and model how the dorsal and ventral visual streams both contribute to this. Saliency detection in the dorsal visual system including area LIP is modeled by graph-based visual saliency, and allows the eyes to fixate potential objects within several degrees. Visual information at the fixated location subtending approximately 9° corresponding to the receptive fields of IT neurons is then passed through a four layer hierarchical model of the ventral cortical visual system, VisNet. We show that VisNet can be trained using a synaptic modification rule with a short-term memory trace of recent neuronal activity to capture both the required view and translation invariances, allowing the model to reach approximately 90% correct object recognition for 4 objects shown in any view across a range of 135° anywhere in a scene. The model was able to generalize correctly within the four trained views and the 25 trained translations. This approach analyses the principles by which complementary computations in the dorsal and ventral visual cortical streams enable objects to be located and recognized in complex natural scenes. PMID:25161619
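
    The synaptic modification rule referred to above is commonly written as a Hebbian update gated by a decaying trace of recent postsynaptic activity. A toy sketch under that assumed form (random inputs stand in for successive transformed views of an object; rates, constants and sizes are illustrative only):

      # Toy sketch of a trace learning rule: the weight change is driven by a
      # short-term memory trace of recent postsynaptic activity, not by the
      # instantaneous response alone.
      import numpy as np

      rng = np.random.default_rng(0)
      n_inputs, alpha, eta = 50, 0.01, 0.8      # eta: fraction of old trace retained
      w = rng.random(n_inputs)
      trace = 0.0

      for t in range(200):                       # successive views of one object
          x = rng.random(n_inputs)               # placeholder input firing rates
          y = float(w @ x)                       # postsynaptic rate (linear for brevity)
          trace = eta * trace + (1.0 - eta) * y  # short-term memory trace
          w += alpha * trace * x                 # Hebbian update gated by the trace
          w /= np.linalg.norm(w)                 # keep the weight vector bounded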

  6. Effect of Clouds on Apertures of Space-based Air Fluorescence Detectors

    NASA Technical Reports Server (NTRS)

    Sokolsky, P.; Krizmanic, J.

    2003-01-01

    Space-based ultra-high-energy cosmic ray detectors observe fluorescence light from extensive air showers produced by these particles in the troposphere. Clouds can scatter and absorb this light and produce systematic errors in energy determination and spectrum normalization. We study the possibility of using IR remote sensing data from MODIS and GOES satellites to delimit clear areas of the atmosphere. The efficiency for detecting ultra-high-energy cosmic rays whose showers do not intersect clouds is determined for real, night-time cloud scenes. We use the MODIS SST cloud mask product to define clear pixels for cloud scenes along the equator and use the OWL Monte Carlo to generate showers in the cloud scenes. We find that the efficiency for cloud-free showers with a closest approach of three pixels to a cloudy pixel is 6.5%, exclusive of other factors. We conclude that defining a totally cloud-free aperture reduces the sensitivity of space-based fluorescence detectors to unacceptably small levels.

  7. Narrative comprehension and production in children with SLI: An eye movement study

    PubMed Central

    ANDREU, LLORENÇ; SANZ-TORRENT, MONICA; OLMOS, JOAN GUÀRDIA; MACWHINNEY, BRIAN

    2014-01-01

    This study investigates narrative comprehension and production in children with specific language impairment (SLI). Twelve children with SLI (mean age 5;8 years) and 12 typically developing children (mean age 5;6 years) participated in an eye-tracking experiment designed to investigate online narrative comprehension and production in Catalan- and Spanish-speaking children with SLI. The comprehension task involved the recording of eye movements during the visual exploration of successive scenes in a story, while listening to the associated narrative. With regard to production, the children were asked to retell the story, while once again looking at the scenes, as their eye movements were monitored. During narrative production, children with SLI looked at the most semantically relevant areas of the scenes fewer times than their age-matched controls, but no differences were found in narrative comprehension. Moreover, the analyses of speech productions revealed that children with SLI retained less information and made more semantic and syntactic errors during retelling. Implications for theories that characterize SLI are discussed. PMID:21453036

  8. Face recognition for criminal identification: An implementation of principal component analysis for face recognition

    NASA Astrophysics Data System (ADS)

    Abdullah, Nurul Azma; Saidi, Md. Jamri; Rahman, Nurul Hidayah Ab; Wen, Chuah Chai; Hamid, Isredza Rahmi A.

    2017-10-01

    In practice, identification of criminals in Malaysia is done through thumbprint identification. However, this type of identification is limited, as criminals nowadays are increasingly careful not to leave thumbprints at the scene. With the advent of security technology, cameras, especially CCTV, have been installed in many public and private areas to provide surveillance activities. The CCTV footage can be used to identify suspects at the scene. However, because little software has been developed to automatically detect the similarity between a face in the footage and the recorded photographs of criminals, law enforcement still relies on thumbprint identification. In this paper, an automated facial recognition system for a criminal database is proposed using the well-known Principal Component Analysis (PCA) approach. This system is able to detect and recognize faces automatically, helping law enforcement to identify a suspect when no thumbprint is present at the scene. The results show that about 80% of input photos can be matched with the template data.
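
    The abstract names Principal Component Analysis but gives no implementation detail. A minimal eigenfaces-style sketch of the general idea, with random arrays standing in for the enrolled face database (all sizes and component counts below are illustrative assumptions, not the authors' system):

      # Sketch of a PCA ("eigenfaces") matcher, assuming a gallery of aligned
      # grayscale face images flattened into row vectors.
      import numpy as np

      rng = np.random.default_rng(1)
      gallery = rng.random((80, 64 * 64))            # 80 hypothetical enrolled faces
      mean_face = gallery.mean(axis=0)
      centered = gallery - mean_face

      # Principal components via SVD; keep the leading eigenfaces.
      _, _, vt = np.linalg.svd(centered, full_matrices=False)
      eigenfaces = vt[:40]                           # keep 40 components
      gallery_coords = centered @ eigenfaces.T

      def match(probe):
          """Return the index of the closest gallery face in eigenface space."""
          coords = (probe - mean_face) @ eigenfaces.T
          return int(np.argmin(np.linalg.norm(gallery_coords - coords, axis=1)))

      probe = gallery[3] + 0.05 * rng.random(64 * 64)  # noisy copy of face 3
      print(match(probe))                              # expected output: 3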

  9. 'Working behind the scenes'. An ethical view of mental health nursing and first-episode psychosis.

    PubMed

    Moe, Cathrine; Kvig, Erling I; Brinchmann, Beate; Brinchmann, Berit S

    2013-08-01

    The aim of this study was to explore and reflect upon mental health nursing and first-episode psychosis. Seven multidisciplinary focus group interviews were conducted, and data analysis was influenced by a grounded theory approach. The core category was found to be a process named 'working behind the scenes'. It is presented along with three subcategories: 'keeping the patient in mind', 'invisible care' and 'invisible network contact'. Findings are illuminated with the ethical principles of respect for autonomy and paternalism. Nursing care is dynamic, and clinical work moves along continuums between autonomy and paternalism and between ethical reflective and non-reflective practice. 'Working behind the scenes' is considered to be in a paternalistic area, containing an ethical reflection. Treating and caring for individuals experiencing first-episode psychosis demands an ethical awareness and great vigilance by nurses. The study is a contribution to reflection upon everyday nursing practice, and the conclusion concerns the importance of making invisible work visible.

  10. Extended census transform histogram for land-use scene classification

    NASA Astrophysics Data System (ADS)

    Yuan, Baohua; Li, Shijin

    2017-04-01

    With the popular use of high-resolution satellite images, more and more research efforts have been focused on land-use scene classification. In scene classification, effective visual features can significantly boost the final performance. As a typical texture descriptor, the census transform histogram (CENTRIST) has emerged as a very powerful tool due to its effective representation ability. However, the most prominent limitation of CENTRIST is its small spatial support area, which may not necessarily be adept at capturing the key texture characteristics. We propose an extended CENTRIST (eCENTRIST), which is made up of three subschemes in a greater neighborhood scale. The proposed eCENTRIST not only inherits the advantages of CENTRIST but also encodes the more useful information of local structures. Meanwhile, multichannel eCENTRIST, which can capture the interactions from multichannel images, is developed to obtain higher categorization accuracy rates. Experimental results demonstrate that the proposed method can achieve competitive performance when compared to state-of-the-art methods.
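
    For context, the base CENTRIST descriptor referred to above is a 256-bin histogram of census transform codes, where each pixel is encoded by comparing it with its eight immediate neighbours. The sketch below implements only that base descriptor in NumPy; the paper's eCENTRIST extension to a larger neighbourhood scale and multiple channels is not reproduced here:

      # Base census transform histogram (CENTRIST) on a grayscale patch.
      import numpy as np

      def centrist(image):
          """8-bit census code per interior pixel, then a normalized 256-bin histogram."""
          c = image[1:-1, 1:-1]
          code = np.zeros_like(c, dtype=np.uint8)
          shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                    (0, 1), (1, -1), (1, 0), (1, 1)]
          for bit, (dy, dx) in enumerate(shifts):
              neighbour = image[1 + dy:image.shape[0] - 1 + dy,
                                1 + dx:image.shape[1] - 1 + dx]
              code |= (neighbour >= c).astype(np.uint8) << bit   # one bit per neighbour
          hist, _ = np.histogram(code, bins=256, range=(0, 256))
          return hist / hist.sum()

      patch = (np.random.default_rng(2).random((64, 64)) * 255).astype(np.uint8)
      descriptor = centrist(patch)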

  11. Computer 3D site model generation based on aerial images

    NASA Astrophysics Data System (ADS)

    Zheltov, Sergey Y.; Blokhinov, Yuri B.; Stepanov, Alexander A.; Skryabin, Sergei V.; Sibiriakov, Alexandre V.

    1997-07-01

    The technology for 3D model design of real world scenes and its photorealistic rendering are current topics of investigation. Development of such technology is very attractive to implement in vast varieties of applications: military mission planning, crew training, civil engineering, architecture, virtual reality entertainments--just a few were mentioned. 3D photorealistic models of urban areas are often discussed now as upgrade from existing 2D geographic information systems. Possibility of site model generation with small details depends on two main factors: available source dataset and computer power resources. In this paper PC based technology is presented, so the scenes of middle resolution (scale of 1:1000) be constructed. Types of datasets are the gray level aerial stereo pairs of photographs (scale of 1:14000) and true color on ground photographs of buildings (scale ca.1:1000). True color terrestrial photographs are also necessary for photorealistic rendering, that in high extent improves human perception of the scene.

  12. Real time imaging and infrared background scene analysis using the Naval Postgraduate School infrared search and target designation (NPS-IRSTD) system

    NASA Astrophysics Data System (ADS)

    Bernier, Jean D.

    1991-09-01

    The imaging in real time of infrared background scenes with the Naval Postgraduate School Infrared Search and Target Designation (NPS-IRSTD) System was achieved through extensive software developments in protected mode assembly language on an Intel 80386 33 MHz computer. The new software processes the 512 by 480 pixel images directly in the extended memory area of the computer where the DT-2861 frame grabber memory buffers are mapped. Direct interfacing, through a JDR-PR10 prototype card, between the frame grabber and the host computer AT bus enables each load of the frame grabber memory buffers to be effected under software control. The protected mode assembly language program can refresh the display of a six degree pseudo-color sector in the scanner rotation within the two second period of the scanner. A study of the imaging properties of the NPS-IRSTD is presented with preliminary work on image analysis and contrast enhancement of infrared background scenes.

  13. Spatial frequency supports the emergence of categorical representations in visual cortex during natural scene perception.

    PubMed

    Dima, Diana C; Perry, Gavin; Singh, Krish D

    2018-06-11

    In navigating our environment, we rapidly process and extract meaning from visual cues. However, the relationship between visual features and categorical representations in natural scene perception is still not well understood. Here, we used natural scene stimuli from different categories and filtered at different spatial frequencies to address this question in a passive viewing paradigm. Using representational similarity analysis (RSA) and cross-decoding of magnetoencephalography (MEG) data, we show that categorical representations emerge in human visual cortex at ∼180 ms and are linked to spatial frequency processing. Furthermore, dorsal and ventral stream areas reveal temporally and spatially overlapping representations of low and high-level layer activations extracted from a feedforward neural network. Our results suggest that neural patterns from extrastriate visual cortex switch from low-level to categorical representations within 200 ms, highlighting the rapid cascade of processing stages essential in human visual perception. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
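
    Representational similarity analysis, as used above, reduces to comparing the pairwise dissimilarity structure of neural response patterns with that of a model feature space. A toy sketch with synthetic data (array shapes and the feature choice are assumptions for illustration only, not the study's pipeline):

      # Minimal RSA sketch: correlate the dissimilarity structure of (synthetic)
      # sensor patterns with a model representational dissimilarity matrix (RDM).
      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.stats import spearmanr

      rng = np.random.default_rng(8)
      n_scenes, n_sensors = 40, 100
      meg_patterns = rng.random((n_scenes, n_sensors))      # one pattern per scene
      model_features = rng.random((n_scenes, 10))           # e.g. category or SF features

      neural_rdm = pdist(meg_patterns, metric="correlation")   # condition-pair dissimilarities
      model_rdm = pdist(model_features, metric="correlation")
      rho, _ = spearmanr(neural_rdm, model_rdm)                # model fit to the neural geometry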

  14. Thematic mapper data quality and performance assessment in renewable resource/agricultural remote sensing

    NASA Technical Reports Server (NTRS)

    Erickson, J. D.; Macdonald, R. B. (Principal Investigator)

    1982-01-01

    A "quick look" investigation of the initial LANDSAT-4, thematic mapper (TM) scene received from Goddard Space Flight Center was performed to gain early insight into the characteristics of TM data. The initial scene, containing only the first four bands of the seven bands recorded by the TM, was acquired over the Detroit, Michigan, area on July 20, 1982. It yielded abundant information for scientific investigation. A wide variety of studies were conducted to assess all aspects of TM data. They ranged from manual analyses of image products to detect obvious optical, electronic, or mechanical defects to detailed machine analyses of the digital data content for evaluation of spectral separability of vegetative/nonvegetative classes. These studies were applied to several segments extracted from the full scene. No attempt was made to perform end-to-end statistical evaluations. However, the output of these studies do identify a degree of positive performance from the TM and its potential for advancing state-of-the-art crop inventory and condition assessment technology.

  15. Multiple Vehicle Detection and Segmentation in Malaysia Traffic Flow

    NASA Astrophysics Data System (ADS)

    Fariz Hasan, Ahmad; Fikri Che Husin, Mohd; Affendi Rosli, Khairul; Norhafiz Hashim, Mohd; Faiz Zainal Abidin, Amar

    2018-03-01

    Vision-based systems are widely used in the field of Intelligent Transportation Systems (ITS) to extract a large amount of information for analyzing traffic scenes. The rapidly growing number of vehicles on the road, together with the significant increase in cameras, has dictated the need for traffic surveillance systems that can take over the burdensome tasks performed by human operators in traffic monitoring centres. The main technique proposed in this paper concentrates on multiple vehicle detection and segmentation for monitoring through Closed Circuit Television (CCTV) video. The system is able to automatically segment vehicles extracted from heavy traffic scenes by optical flow estimation, used alongside a blob analysis technique to detect the moving vehicles. Prior to segmentation, the blob analysis technique computes the region of interest corresponding to each moving vehicle, which is then used to create a bounding box around that vehicle. Experimental validation of the proposed system was performed, and the algorithm is demonstrated on various sets of traffic scenes.
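
    A compact sketch of the detection pipeline described above (dense optical flow between consecutive frames, thresholded into a motion mask, then blob analysis to box each moving-vehicle candidate), assuming OpenCV and synthetic stand-in frames rather than real CCTV input:

      # Dense optical flow + blob analysis sketch for moving-vehicle candidates.
      import cv2
      import numpy as np

      rng = np.random.default_rng(3)
      prev = (rng.random((240, 320)) * 255).astype(np.uint8)   # stand-ins for CCTV frames
      curr = np.roll(prev, 4, axis=1)                          # simulated horizontal motion

      # Dense optical flow between the two consecutive frames.
      flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                          0.5, 3, 15, 3, 5, 1.2, 0)
      moving = (np.linalg.norm(flow, axis=2) > 1.0).astype(np.uint8)   # motion mask

      # Blob analysis: each sufficiently large connected component is a candidate,
      # and its stats row gives the bounding box (x, y, width, height).
      n, labels, stats, _ = cv2.connectedComponentsWithStats(moving)
      boxes = [tuple(stats[i, :4]) for i in range(1, n)
               if stats[i, cv2.CC_STAT_AREA] > 200]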

  16. Winds of Change: Teacher Education for the Open Area School. ATE Bulletin 36.

    ERIC Educational Resources Information Center

    Ross, Elinor; And Others

    The purpose of this bulletin is to present the state of the scene in teacher preparation for open area schools. It begins with two accounts of college students and their instructors struggling to get started in new programs. The third article cites skills of the British primary teacher as possible competencies toward which a program of teacher…

  17. Assessing change in large-scale forest area by visually interpreting Landsat images

    Treesearch

    Jerry D. Greer; Frederick P. Weber; Raymond L. Czaplewski

    2000-01-01

    As part of the Forest Resources Assessment 1990, the Food and Agriculture Organization of the United Nations visually interpreted a stratified random sample of 117 Landsat scenes to estimate global status and change in tropical forest area. Images from 1980 and 1990 were interpreted by a group of widely experienced technical people in many different tropical countries...

  18. Earth observation taken by the Expedition 43 crew

    NASA Image and Video Library

    2015-05-08

    ISS043E182380 (05/08/2015) --- NASA astronaut Scott Kelly aboard the International Space Station captured this desert scene in northern Africa on May 8th, 2015. The area shown is the Calanscio Sand Sea, in northeastern Libya.

  19. Hasty retreat of glaciers in the Palena province of Chile

    NASA Astrophysics Data System (ADS)

    Paul, F.; Mölg, N.; Bolch, T.

    2013-12-01

    Mapping glacier extent from optical satellite data has become a most efficient tool to create or update glacier inventories and determine glacier changes over time. A most valuable archive in this regard is the nearly 30-year time series of Landsat Thematic Mapper (TM) data that is freely available (already orthorectified) for most regions in the world from the USGS. One region with dramatic glacier shrinkage and no systematic assessment of changes is the Palena province in Chile, south of Puerto Montt. A major bottleneck for accurate determination of glacier changes in this region is the huge amount of snow falling in this very maritime region, hiding the perimeter of glaciers throughout the year. Consequently, we found only three years with Landsat scenes that can be used to map glacier extent through time. We here present the results of a glacier change analysis from six Landsat scenes (path-rows 232-89/90) acquired in 1985, 2000 and 2011 covering the Palena district in Chile. Clean glacier ice was mapped automatically with a standard technique (TM3/TM band ratio) and manual editing was applied to remove wrongly classified lakes and to add debris-covered glacier parts. The digital elevation model (DEM) from SRTM was used to derive drainage divides, determine glacier-specific topographic parameters, and analyse the area changes in regard to topography. The scene from 2000 has the best snow conditions and was used to eliminate seasonal snow in the other two scenes by digital combination of the binary glacier masks. The observed changes show a huge spatial variability with a strong dependence on elevation and glacier hypsometry. While small mountain glaciers at high elevations and on steep slopes show virtually no change over the 26-year period, ice at low elevations on large valley glaciers shows a dramatic decline (area and thickness loss). Some glaciers retreated more than 3 km over this time period or even disappeared completely. Typically, these glaciers lost contact with the accumulation areas of tributaries and now consist of an ablation area only. Furthermore, numerous pro-glacial lakes formed or expanded rapidly, increasing the local hazard potential. On the other hand, some glaciers located on or near (still active) volcanoes have advanced over the same period. Observed trends in temperature (decreasing) are in contrast to the observed strong glacier shrinkage.
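
    The automated mapping step mentioned above reduces to thresholding a per-pixel band ratio in which bright glacier ice stands out against the surroundings. The sketch below uses placeholder reflectance arrays, an illustrative red/shortwave-infrared band pair, and an illustrative threshold, not the authors' band choice or calibrated values:

      # Illustrative band-ratio glacier mask: ice has a high ratio of visible to
      # shortwave-infrared reflectance.  Bands and threshold are placeholders.
      import numpy as np

      rng = np.random.default_rng(4)
      red = rng.random((500, 500)) * 0.6 + 0.2        # stand-in red-band reflectance
      swir = rng.random((500, 500)) * 0.3 + 0.05      # stand-in shortwave-IR reflectance

      ratio = red / np.maximum(swir, 1e-6)            # avoid division by zero
      glacier_mask = ratio > 2.0                      # threshold chosen by inspection
      area_km2 = glacier_mask.sum() * (30 * 30) / 1e6 # 30 m Landsat TM pixels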

  20. Parametric Coding of the Size and Clutter of Natural Scenes in the Human Brain

    PubMed Central

    Park, Soojin; Konkle, Talia; Oliva, Aude

    2015-01-01

    Estimating the size of a space and its degree of clutter are effortless and ubiquitous tasks of moving agents in a natural environment. Here, we examine how regions along the occipital–temporal lobe respond to pictures of indoor real-world scenes that parametrically vary in their physical “size” (the spatial extent of a space bounded by walls) and functional “clutter” (the organization and quantity of objects that fill up the space). Using a linear regression model on multivoxel pattern activity across regions of interest, we find evidence that both properties of size and clutter are represented in the patterns of parahippocampal cortex, while the retrosplenial cortex activity patterns are predominantly sensitive to the size of a space, rather than the degree of clutter. Parametric whole-brain analyses confirmed these results. Importantly, this size and clutter information was represented in a way that generalized across different semantic categories. These data provide support for a property-based representation of spaces, distributed across multiple scene-selective regions of the cerebral cortex. PMID:24436318

  1. Transport-aware imaging

    NASA Astrophysics Data System (ADS)

    Kutulakos, Kyros N.; O'Toole, Matthew

    2015-03-01

    Conventional cameras record all light falling on their sensor regardless of the path that light followed to get there. In this paper we give an overview of a new family of computational cameras that offers many more degrees of freedom. These cameras record just a fraction of the light coming from a controllable source, based on the actual 3D light path followed. Photos and live video captured this way offer an unconventional view of everyday scenes in which the effects of scattering, refraction and other phenomena can be selectively blocked or enhanced, visual structures that are too subtle to notice with the naked eye can become apparent, and object appearance can depend on depth. We give an overview of the basic theory behind these cameras and their DMD-based implementation, and discuss three applications: (1) live indirect-only imaging of complex everyday scenes, (2) reconstructing the 3D shape of scenes whose geometry or material properties make them hard or impossible to scan with conventional methods, and (3) acquiring time-of-flight images that are free of multi-path interference.

  2. Evaluation methodology for query-based scene understanding systems

    NASA Astrophysics Data System (ADS)

    Huster, Todd P.; Ross, Timothy D.; Culbertson, Jared L.

    2015-05-01

    In this paper, we are proposing a method for the principled evaluation of scene understanding systems in a query-based framework. We can think of a query-based scene understanding system as a generalization of typical sensor exploitation systems where instead of performing a narrowly defined task (e.g., detect, track, classify, etc.), the system can perform general user-defined tasks specified in a query language. Examples of this type of system have been developed as part of DARPA's Mathematics of Sensing, Exploitation, and Execution (MSEE) program. There is a body of literature on the evaluation of typical sensor exploitation systems, but the open-ended nature of the query interface introduces new aspects to the evaluation problem that have not been widely considered before. In this paper, we state the evaluation problem and propose an approach to efficiently learn about the quality of the system under test. We consider the objective of the evaluation to be to build a performance model of the system under test, and we rely on the principles of Bayesian experiment design to help construct and select optimal queries for learning about the parameters of that model.

  3. Graphics to H.264 video encoding for 3D scene representation and interaction on mobile devices using region of interest

    NASA Astrophysics Data System (ADS)

    Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Jia, Jie; Kim, Hae-Kwang

    2007-12-01

    In this paper, we propose a method of 3D-graphics-to-video encoding and streaming that is embedded into a remote interactive 3D visualization system for rapidly representing a 3D scene on mobile devices without having to download it from the server. In particular, a 3D-graphics-to-video framework is presented that increases the visual quality of regions of interest (ROI) in the video by allocating more bits to the ROI during H.264 video encoding. The ROI are identified by projecting 3D objects onto a 2D plane during rasterization. The system allows users to navigate the 3D scene and interact with objects of interest to query their descriptions. We developed an adaptive media streaming server that can provide a video stream with adaptive, object-based quality to the client according to the user's preferences and the variation in network bandwidth. Results show that with ROI mode selection, the PSNR of the test samples changes only slightly while the visual quality of the objects of interest increases noticeably.
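
    The bit-allocation idea described above can be illustrated with a per-macroblock quantization parameter (QP) map: macroblocks overlapped by the projected regions of interest get a lower QP (more bits), the background a higher one. The sketch below only builds such a QP map; the frame size, QP values, and ROI mask are illustrative assumptions, and no actual H.264 encoder is invoked:

      # Build a per-macroblock QP map from a binary ROI mask (conceptual sketch).
      import numpy as np

      mb = 16                                        # H.264 macroblock size
      roi_mask = np.zeros((720, 1280), dtype=bool)   # hypothetical frame-sized ROI mask
      roi_mask[200:400, 500:800] = True              # region covered by a projected 3D object

      base_qp, roi_qp_offset = 32, -6
      h_mb, w_mb = roi_mask.shape[0] // mb, roi_mask.shape[1] // mb
      qp_map = np.full((h_mb, w_mb), base_qp)
      for r in range(h_mb):
          for c in range(w_mb):
              if roi_mask[r * mb:(r + 1) * mb, c * mb:(c + 1) * mb].any():
                  qp_map[r, c] = base_qp + roi_qp_offset   # spend more bits on the ROI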

  4. Exploring Direct 3D Interaction for Full Horizontal Parallax Light Field Displays Using Leap Motion Controller

    PubMed Central

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they were emitted from scene points. Each scene point is rendered individually, resulting in more realistic and accurate 3D visualization compared to other 3D display technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gestures tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup were also evaluated in a user study with test subjects. The results of the study revealed a high user preference for freehand interaction with the light field display as well as a relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. PMID:25875189

  5. Effects of distribution density and cell dimension of 3D vegetation model on canopy NDVI simulation base on DART

    NASA Astrophysics Data System (ADS)

    Tao, Zhu; Shi, Runhe; Zeng, Yuyan; Gao, Wei

    2017-09-01

    The 3D model is an important part of simulated remote sensing for Earth observation. At the small spatial scales handled by the DART software, both the detail of the model itself and the number of distributed model instances have an important impact on the scene canopy Normalized Difference Vegetation Index (NDVI). Taking Phragmites australis in the Yangtze Estuary as an example, and building on previous studies of model precision, this paper studied the effect of the P. australis model on canopy NDVI, mainly with respect to the cell dimension of the DART software and the density of the P. australis model in the scene, as well as the choice of model density given the cost of computing time in practical simulations. The DART cell dimensions and the density of the scene model were set using the optimal-precision model from existing research results. The NDVI simulation results for different model densities under different cell dimensions were examined by error analysis. By studying the relationship between relative error, absolute error, and time cost, we established a density selection method for the P. australis model in small-scale scene simulation. Experiments showed that, because the 3D model differs from the real plants, the number of P. australis in the simulated scene need not be the same as in the real environment; the best simulation results were obtained by keeping the density at about 40 trees per square meter while also taking the visual effect into account.
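
    The quantity being simulated, NDVI, is computed from red and near-infrared reflectance as (NIR - Red) / (NIR + Red). A minimal sketch with placeholder reflectance values standing in for DART-simulated canopy output:

      # NDVI from simulated red and near-infrared canopy reflectance (placeholder values).
      import numpy as np

      red = np.array([0.05, 0.08, 0.12])   # hypothetical red reflectance
      nir = np.array([0.45, 0.40, 0.30])   # hypothetical near-infrared reflectance
      ndvi = (nir - red) / (nir + red)
      print(ndvi.round(3))                  # [0.8   0.667 0.429]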

  6. A comparison of viewer reactions to outdoor scenes and photographs of those scenes

    Treesearch

    Elwood Shafer, Jr.; Thomas A. Richards

    1974-01-01

    A color-slide projection or photograph can be used to determine reactions to an actual scene if the presentation adequately includes most of the elements in the scene. Eight kinds of scenes were subjected to three different types of presentation: (A) viewing the actual scenes, (B) viewing color slides of the scenes, and (C) viewing color photographs of the scenes. For...

  7. Prevalence of Selective Serotonin Reuptake Inhibitors in Pilot Fatalities of Civil Aviation Accidents, 1990-2001

    DTIC Science & Technology

    2003-05-01

    these 10 pilot fatalities were analgesics, sympathomimetics, diphenhydramine, and/or tramadol. Ethanol was found in 3 cases wherein no other drugs...health care providers at accident scenes, or at hospitals, for resuscitation, pain reduction, and/or surgical procedures. Whereas, other drugs—such as

  8. The Thing's the Play: Doing "Hamlet."

    ERIC Educational Resources Information Center

    Sowder, Wilbur H., Jr.

    1993-01-01

    Argues for the use of film in the teaching of William Shakespeare's "Hamlet" because the play was meant to be seen and heard and not just read. Outlines a method of teaching the play by which students select a scene and perform it. Gives an example of a successful student performance. (HB)

  9. Australia and New Zealand Applied Linguistics (ANZAL): Taking Stock

    ERIC Educational Resources Information Center

    Kleinsasser, Robert C.

    2004-01-01

    This paper reviews some emerging trends in applied linguistics in both Australia and New Zealand. It sketches the current scene of (selected) postgraduate applied linguistics programs in higher education and considers how various university programs define applied linguistics through the classes (titles) they have postgraduate students complete to…

  10. Grasp Preparation Improves Change Detection for Congruent Objects

    ERIC Educational Resources Information Center

    Symes, Ed; Tucker, Mike; Ellis, Rob; Vainio, Lari; Ottoboni, Giovanni

    2008-01-01

    A series of experiments provided converging support for the hypothesis that action preparation biases selective attention to action-congruent object features. When visual transients are masked in so-called "change-blindness scenes," viewers are blind to substantial changes between 2 otherwise identical pictures that flick back and forth. The…

  11. Conducting a wildland visual resources inventory

    Treesearch

    James F. Palmer

    1979-01-01

    This paper describes a procedure for systematically inventorying the visual resources of wildland environments. Visual attributes are recorded photographically using two separate sampling methods: one based on professional judgment and the other on random selection. The location and description of each inventoried scene are recorded on U.S. Geological Survey...

  12. Report Card. Functional Models of Institutional Research and Other Selected Papers.

    ERIC Educational Resources Information Center

    Brown, Charles I., Ed.

    Presentations are on the five topics of functional models of institutional research: (1) and the political scene; (2) at public and private colleges and universities; (3) for improving communications between institutional researchers and data processors; (4) for deriving qualitative decisions from quantitative data; and (5) for special interest…

  13. The significance of a small, level-3 'semi evacuation' hospital in a terrorist attack in a nearby town.

    PubMed

    Pinkert, Moshe; Leiba, Adi; Zaltsman, Eilon; Erez, Onn; Blumenfeld, Amir; Avinoam, Shkolnick; Laor, Daniel; Schwartz, Dagan; Goldberg, Avishay; Levi, Yehezkel; Bar-Dayan, Yaron

    2007-09-01

    Terrorist attacks can occur in remote areas, causing mass-casualty incidents (MCIs) far away from level-1 trauma centres. This study draws lessons from an MCI pertaining to the management of primary and secondary evacuation and the operational mode practiced. Data were collected from formal debriefings during and after the event, and the medical response, interactions and main outcomes were analysed using Disastrous Incidents Systematic Analysis through Components, Interactions and Results (DISAST-CIR) methodology. A total of 112 people were evacuated from the scene; 66 went to the nearby level-3 Laniado hospital, including the eight critically and severely injured patients. Laniado hospital was instructed to act as an evacuation hospital, but the flow of patients ended rapidly and it was decided to admit the moderately injured victims. We introduce a novel concept of a 'semi-evacuation hospital'. This mode of operation should be selected for small-scale events in which the evacuation hospital has hospitalization capacity and is not geographically isolated. We suggest that level-3 hospitals in remote areas should be prepared and drilled to work in semi-evacuation mode during MCIs.

  14. Direct evidence for attention-dependent influences of the frontal eye-fields on feature-responsive visual cortex.

    PubMed

    Heinen, Klaartje; Feredoes, Eva; Weiskopf, Nikolaus; Ruff, Christian C; Driver, Jon

    2014-11-01

    Voluntary selective attention can prioritize different features in a visual scene. The frontal eye-fields (FEF) are one potential source of such feature-specific top-down signals, but causal evidence for influences on visual cortex (as was shown for "spatial" attention) has remained elusive. Here, we show that transcranial magnetic stimulation (TMS) applied to right FEF increased the blood oxygen level-dependent (BOLD) signals in visual areas processing the "target feature" but not in "distracter feature"-processing regions. TMS increased BOLD signals in motion-responsive visual cortex (MT+) when motion was attended in a display with moving dots superimposed on face stimuli, but in the face-responsive fusiform area (FFA) when faces were attended. These TMS effects on BOLD signal in both regions were negatively related to performance (on the motion task), supporting the behavioral relevance of this pathway. Our findings provide new causal evidence for the human FEF in the control of nonspatial "feature"-based attention, mediated by dynamic influences on feature-specific visual cortex that vary with the currently attended property. © The Author 2013. Published by Oxford University Press.

  15. The role of temporo-parietal junction (TPJ) in global Gestalt perception.

    PubMed

    Huberle, Elisabeth; Karnath, Hans-Otto

    2012-07-01

    Grouping processes enable the coherent perception of our environment. A number of brain areas have been suggested to be involved in the integration of elements into objects, including early and higher visual areas along the ventral visual pathway as well as motion-processing areas of the dorsal visual pathway. However, integration is required not only for the cortical representation of individual objects, but is also essential for the perception of more complex visual scenes consisting of several different objects and/or shapes. The present fMRI experiments aimed to address such integration processes. We investigated the neural correlates underlying the global Gestalt perception of hierarchically organized stimuli that allowed parametrical degrading of the object at the global level. The comparison of intact versus disturbed perception of the global Gestalt revealed a network of cortical areas including the temporo-parietal junction (TPJ), anterior cingulate cortex and the precuneus. The TPJ location corresponds well with the areas known to be typically lesioned in stroke patients with simultanagnosia following bilateral brain damage. These patients typically show a deficit in identifying the global Gestalt of a visual scene. Further, we found the closest relation between behavioral performance and fMRI activation for the TPJ. Our data thus argue for a significant role of the TPJ in human global Gestalt perception.

  16. Special effects used in creating 3D animated scenes-part 1

    NASA Astrophysics Data System (ADS)

    Avramescu, A. M.

    2015-11-01

    Today, with the help of computers, we can create special effects that look so real that we barely perceive them as different; they are hard to distinguish from the real elements on the screen. With the increasingly accessible 3D field, which has more and more areas of application, 3D technology moves easily from architecture to product design. Realistic 3D animations are used as a means of learning, for multimedia presentations of large global corporations, for special effects, and even for virtual actors in films. Technology, as part of the art of film, is considered a prerequisite, but cinematography is the first art that had to wait for the right intersection of technological development, innovation, and human vision to reach its full potential. Increasingly, most industries make use of 3D (three-dimensional) sequences: rendered graphics, commercials, and special effects for films are all designed in 3D. The key to attaining realistic visual effects is to successfully combine various distinct elements (characters, objects, images, and video scenes) so that they work together in perfect harmony. This article presents a contemporary game design. Given the advanced technology and futuristic vision of designers, we now have many different kinds of game models. Special effects contribute decisively to the creation of a realistic three-dimensional scene and are essential for conveying its emotional tone; creating them is a work of finesse required to achieve high-quality scenes, and they can be used to draw the onlooker's attention to a particular object in a scene. According to the study conducted, the best-selling game of 2010 was Call of Duty: Modern Warfare 2. Accordingly, the article aims for the presented scene to resemble many locations from this type of game, more precisely a place in the Middle East, a very popular subject among game developers.

  17. The remote sensing of algae

    NASA Technical Reports Server (NTRS)

    Thorne, J. F.

    1977-01-01

    State agencies need rapid, synoptic and inexpensive methods for lake assessment to comply with the 1972 Amendments to the Federal Water Pollution Control Act. Low altitude aerial photography may be useful in providing information on algal type and quantity. Photography must be calibrated properly to remove sources of error including airlight, surface reflectance and scene-to-scene illumination differences. A 550-nm narrow wavelength band black and white photographic exposure provided a better correlation to algal biomass than either red or infrared photographic exposure. Of all the biomass parameters tested, depth-integrated chlorophyll a concentration correlated best to remote sensing data. Laboratory-measured reflectance of selected algae indicate that different taxonomic classes of algae may be discriminated on the basis of their reflectance spectra.

  18. Enhancement of TIMS images for photointerpretation

    NASA Technical Reports Server (NTRS)

    Gillespie, A. R.

    1986-01-01

    The Thermal Infrared Multispectral Scanner (TIMS) images consist of six channels of data acquired in bands between 8 and 12 microns, thus they contain information about both temperature and emittance. Scene temperatures are controlled by reflectivity of the surface, but also by its geometry with respect to the Sun, time of day, and other factors unrelated to composition. Emittance is dependent upon composition alone. Thus the photointerpreter may wish to enhance emittance information selectively. Because thermal emittances in real scenes vary but little, image data tend to be highly correlated along channels. Special image processing is required to make this information available for the photointerpreter. Processing includes noise removal, construction of model emittance images, and construction of false-color pictures enhanced by decorrelation techniques.
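
    The decorrelation technique mentioned above is commonly implemented as a decorrelation stretch: rotate the correlated channels into principal components, equalize their variances, and rotate back. A NumPy sketch with random data standing in for the six TIMS channels (an illustration of the general technique, not of the specific processing used here):

      # Decorrelation stretch sketch: whiten highly correlated multichannel data
      # in principal-component space, then rotate back so colour contrasts are
      # exaggerated while hues remain interpretable.
      import numpy as np

      rng = np.random.default_rng(5)
      pixels = rng.random((10000, 6))                # 6 channels, flattened scene
      mean = pixels.mean(axis=0)
      cov = np.cov(pixels - mean, rowvar=False)
      eigvals, eigvecs = np.linalg.eigh(cov)

      # Scale each principal component to unit variance, then invert the rotation.
      stretch = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T
      decorrelated = (pixels - mean) @ stretch + mean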

  19. Automatic video segmentation and indexing

    NASA Astrophysics Data System (ADS)

    Chahir, Youssef; Chen, Liming

    1999-08-01

    Indexing is an important aspect of video database management. Video indexing involves the analysis of video sequences, which is a computationally intensive process; however, effective management of digital video requires robust indexing techniques. The purpose of our proposed video segmentation is twofold. First, we develop an algorithm that identifies camera shot boundaries, based on a combination of color histograms and a block-based technique. Next, each temporal segment is represented by a color reference frame, which is used to measure shot similarities and to group shots into scenes. Experimental results using a variety of videos selected from the corpus of the French National Audiovisual Institute are presented to demonstrate the effectiveness of shot detection, shot content characterization and scene constitution.
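
    The shot-boundary step described above can be reduced to thresholding a distance between colour histograms of consecutive frames (the block-based refinement is omitted). A toy sketch with synthetic frames containing a single cut; frame sizes, bin count and threshold are illustrative assumptions:

      # Histogram-based shot boundary detection: declare a cut when consecutive
      # frame histograms differ by more than a threshold.
      import numpy as np

      def histogram(frame, bins=16):
          h, _ = np.histogram(frame, bins=bins, range=(0, 256))
          return h / h.sum()

      rng = np.random.default_rng(6)
      shot_a = [(rng.random((120, 160)) * 100).astype(np.uint8) for _ in range(5)]
      shot_b = [(rng.random((120, 160)) * 100 + 150).astype(np.uint8) for _ in range(5)]
      frames = shot_a + shot_b                         # synthetic video with one cut

      cuts = [i for i in range(1, len(frames))
              if np.abs(histogram(frames[i]) - histogram(frames[i - 1])).sum() > 0.5]
      print(cuts)                                      # expected output: [5]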

  20. Compressive hyperspectral sensor for LWIR gas detection

    NASA Astrophysics Data System (ADS)

    Russell, Thomas A.; McMackin, Lenore; Bridge, Bob; Baraniuk, Richard

    2012-06-01

    Focal plane arrays with associated electronics and cooling are a substantial portion of the cost, complexity, size, weight, and power requirements of Long-Wave IR (LWIR) imagers. Hyperspectral LWIR imagers add a significant data-volume burden as they collect a high-resolution spectrum at each pixel. We report here on an LWIR hyperspectral sensor that applies Compressive Sensing (CS) in order to achieve benefits in these areas. The sensor applies the single-pixel detection technology demonstrated by Rice University. The single-pixel approach uses a Digital Micro-mirror Device (DMD) to reflect and multiplex the light from a random assortment of pixels onto the detector. This is repeated for a number of measurements much smaller than the total number of scene pixels. We have extended this architecture to hyperspectral LWIR sensing by inserting a Fabry-Perot spectrometer in the optical path. This compressive hyperspectral imager collects all three dimensions on a single detection element, greatly reducing the size, weight and power requirements of the system relative to traditional approaches, while also reducing data volume. The CS architecture also supports innovative adaptive approaches to sensing, as the DMD allows control over the selection of spatial scene pixels to be multiplexed on the detector. We are applying this advantage to the detection of plume gases, by adaptively locating and concentrating target energy. A key challenge in this system is the diffraction loss produced by the DMD in the LWIR. We report the results of testing DMD operation in the LWIR, as well as system spatial and spectral performance.
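
    The single-pixel measurement model behind this design is y = Φx, where each row of Φ is one DMD mirror pattern and x is the vectorized scene. The sketch below uses normalized ±1 patterns (an idealization; a physical DMD implements 0/1 masks) and a few iterations of a generic sparse solver (ISTA), purely to illustrate the measurement and recovery idea rather than the instrument's actual reconstruction:

      # Toy single-pixel compressive sensing: M random patterns measure an
      # N-pixel sparse scene, then ISTA recovers an estimate of the scene.
      import numpy as np

      rng = np.random.default_rng(7)
      n, m = 1024, 256                       # scene pixels vs. number of DMD patterns
      x = np.zeros(n)
      x[rng.choice(n, 20, replace=False)] = rng.random(20)   # sparse synthetic scene

      phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # idealized DMD patterns
      y = phi @ x                                              # single-pixel measurements

      # ISTA: gradient step on the data fit plus soft-thresholding for sparsity.
      x_hat = np.zeros(n)
      step = 1.0 / np.linalg.norm(phi, 2) ** 2
      lam = 0.01
      for _ in range(300):
          x_hat = x_hat + step * phi.T @ (y - phi @ x_hat)
          x_hat = np.sign(x_hat) * np.maximum(np.abs(x_hat) - step * lam, 0.0)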
