NASA Technical Reports Server (NTRS)
Franks, Shannon; Masek, Jeffrey G.; Headley, Rachel M.; Gasch, John; Arvidson, Terry
2009-01-01
The Global Land Survey (GLS) 2005 is a cloud-free, orthorectified collection of Landsat imagery acquired during the 2004-2007 epoch intended to support global land-cover and ecological monitoring. Due to the numerous complexities in selecting imagery for the GLS2005, NASA and the U.S. Geological Survey (USGS) sponsored the development of an automated scene selection tool, the Large Area Scene Selection Interface (LASSI), to aid in the selection of imagery for this data set. This innovative approach to scene selection applied a user-defined weighting system to various scene parameters: image cloud cover, image vegetation greenness, choice of sensor, and the ability of the Landsat 7 Scan Line Corrector (SLC)-off pair to completely fill image gaps, among others. The parameters considered in scene selection were weighted according to their relative importance to the data set, along with the algorithm's sensitivity to that weight. This paper describes the methodology and analysis that established the parameter weighting strategy, as well as the post-screening processes used in selecting the optimal data set for GLS2005.
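A weighted multi-criteria score of the kind LASSI applies can be sketched in a few lines. The sketch below is illustrative only: the parameter names, the normalizations, and the weight values are assumptions for this example, not the weighting actually used for GLS2005.

```python
# Minimal sketch of a LASSI-style weighted scene score (hypothetical
# weights and normalization; not the published GLS2005 weighting).
def scene_score(scene, weights):
    """Combine normalized scene parameters into a single selection score."""
    # Each term is scaled to [0, 1] so the weights control relative importance.
    cloud_term = 1.0 - scene["cloud_cover"] / 100.0          # less cloud is better
    green_term = scene["ndvi"]                               # greener is better
    sensor_term = 1.0 if scene["sensor"] == "L5_TM" else 0.8 # sensor preference
    gap_term = scene["gap_fill_fraction"]                    # SLC-off pair gap fill
    return (weights["cloud"] * cloud_term
            + weights["greenness"] * green_term
            + weights["sensor"] * sensor_term
            + weights["gap_fill"] * gap_term)

weights = {"cloud": 0.4, "greenness": 0.3, "sensor": 0.1, "gap_fill": 0.2}
candidates = [  # hypothetical candidate acquisitions for one path/row
    {"id": "scene_A", "cloud_cover": 5, "ndvi": 0.7,
     "sensor": "L7_ETM+", "gap_fill_fraction": 0.95},
    {"id": "scene_B", "cloud_cover": 12, "ndvi": 0.6,
     "sensor": "L5_TM", "gap_fill_fraction": 1.0},
]
best = max(candidates, key=lambda s: scene_score(s, weights))
print(best["id"])
```

In the paper the weights were additionally tuned by a sensitivity analysis; here they are simply fixed by hand.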
Guidance of visual attention by semantic information in real-world scenes
Wu, Chia-Chien; Wick, Farahnaz Ahmed; Pomplun, Marc
2014-01-01
Recent research on attentional guidance in real-world scenes has focused on object recognition within the context of a scene. This approach has been valuable for determining some factors that drive the allocation of visual attention and determine visual selection. This article provides a review of experimental work on how different components of context, especially semantic information, affect attentional deployment. We review work from the areas of object recognition, scene perception, and visual search, highlighting recent studies examining semantic structure in real-world scenes. A better understanding of how humans parse scene representations will not only improve current models of visual attention but also advance next-generation computer vision systems and human-computer interfaces. PMID:24567724
A model of proto-object based saliency
Russell, Alexander F.; Mihalaş, Stefan; von der Heydt, Rudiger; Niebur, Ernst; Etienne-Cummings, Ralph
2013-01-01
Organisms use the process of selective attention to optimally allocate their computational resources to the instantaneously most relevant subsets of a visual scene, ensuring that they can parse the scene in real time. Many models of bottom-up attentional selection assume that elementary image features, like intensity, color and orientation, attract attention. Gestalt psychologists, however, argue that humans perceive whole objects before they analyze individual features. This is supported by recent psychophysical studies that show that objects predict eye fixations better than features. In this report we present a neurally inspired algorithm of object-based, bottom-up attention. The model rivals the performance of state-of-the-art non-biologically-plausible feature-based algorithms (and outperforms biologically plausible feature-based algorithms) in its ability to predict perceptual saliency (eye fixations and subjective interest points) in natural scenes. The model achieves this by computing saliency as a function of proto-objects that establish the perceptual organization of the scene. All computational mechanisms of the algorithm have direct neural correlates, and our results provide evidence for the interface theory of attention. PMID:24184601
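The following minimal sketch conveys the idea of region-level (rather than pixel-feature) saliency. It is not the published model: connected components stand in for the border-ownership and grouping mechanisms, and the region-contrast measure is an assumption made for illustration.

```python
import numpy as np
from scipy import ndimage

# Illustrative sketch only: the published model derives proto-objects from
# border-ownership and grouping cells; here connected components serve as
# a crude proxy for perceptual grouping.
def proto_object_saliency(image, threshold=0.5):
    mask = image > threshold                       # crude figure/ground split
    labels, n = ndimage.label(mask)                # proto-object candidates
    saliency = np.zeros_like(image, dtype=float)
    background_mean = image[~mask].mean() if (~mask).any() else 0.0
    for k in range(1, n + 1):
        region = labels == k
        # Saliency of a proto-object: contrast of the region against background.
        saliency[region] = abs(image[region].mean() - background_mean)
    return saliency

rng = np.random.default_rng(0)
img = rng.random((64, 64))
img[20:30, 20:30] += 1.0                           # bright "object"
sal = proto_object_saliency(img / img.max())
print(sal.max())
```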
The new generation of OpenGL support in ROOT
NASA Astrophysics Data System (ADS)
Tadel, M.
2008-07-01
OpenGL has been promoted to become the main 3D rendering engine of the ROOT framework. This required a major re-modularization of OpenGL support on all levels, from the basic window-system-specific interface to medium-level object representation and top-level scene management. This new architecture allows seamless integration of external scene-graph libraries into the ROOT OpenGL viewer as well as inclusion of ROOT 3D scenes into external GUI and OpenGL-based 3D-rendering frameworks. Scene representation was moved out of the viewer, allowing scene data to be shared among several viewers and providing for a natural implementation of multi-view canvas layouts. The object-graph traversal infrastructure allows free mixing of 3D and 2D-pad graphics and makes an implementation of the ROOT canvas in pure OpenGL possible. Scene elements representing ROOT objects trigger automatic instantiation of user-provided rendering objects based on the dictionary information and a class-naming convention. Additionally, a finer, per-object control over scene updates is available to the user, allowing overhead-free maintenance of dynamic 3D scenes and creation of complex real-time animations. User-input handling was modularized as well, making it easy to support application-specific scene navigation, selection handling and tool management.
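The class-naming convention can be illustrated with a small registry sketch. ROOT implements this in C++ through its class dictionary; the Python below, and the class names in it, are only meant to mirror the dispatch idea ("TFoo" model rendered by "TFooGL"), not ROOT's actual API.

```python
# Sketch of renderer lookup by class-naming convention, in the spirit of
# the ROOT viewer. Names here are hypothetical stand-ins.
RENDERER_REGISTRY = {}

def renders(model_class_name):
    """Register a rendering class for a model class, by name."""
    def wrap(cls):
        RENDERER_REGISTRY[model_class_name] = cls
        return cls
    return wrap

@renders("TGeoVolume")
class TGeoVolumeGL:
    def draw(self, obj):
        print(f"rendering {obj!r} as geometry")

def make_renderer(obj):
    # Convention: a model class "TFoo" is rendered by "TFooGL".
    name = type(obj).__name__
    cls = RENDERER_REGISTRY.get(name)
    if cls is None:
        raise LookupError(f"no renderer registered for {name}")
    return cls()

class TGeoVolume:  # stand-in model object
    pass

volume = TGeoVolume()
make_renderer(volume).draw(volume)
```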
Integration of an open interface PC scene generator using COTS DVI converter hardware
NASA Astrophysics Data System (ADS)
Nordland, Todd; Lyles, Patrick; Schultz, Bret
2006-05-01
Commercial-Off-The-Shelf (COTS) personal computer (PC) hardware is increasingly capable of computing high dynamic range (HDR) scenes for military sensor testing at high frame rates. New electro-optical and infrared (EO/IR) scene projectors feature electrical interfaces that can accept the DVI output of these PC systems. However, military hardware-in-the-loop (HWIL) facilities such as those at the US Army Aviation and Missile Research Development and Engineering Center (AMRDEC) utilize a sizeable inventory of existing projection systems that were designed to use the Silicon Graphics Incorporated (SGI) digital video port (DVP, also known as DVP2 or DD02) interface. To mate the new DVI-based scene generation systems to these legacy projection systems, CG2 Inc., a Quantum3D Company (CG2), has developed a DVI-to-DVP converter called Delta DVP. This device takes progressive-scan DVI input, converts it to digital parallel data, and combines and routes color components to derive a 16-bit-wide luminance channel replicated on a DVP output interface. The HWIL Functional Area of AMRDEC has developed a suite of modular software to perform deterministic, real-time, waveband-specific rendering of sensor scenes, leveraging the features of commodity graphics hardware and open-source software. Together, these technologies enable sensor simulation and test facilities to integrate scene generation and projection components with diverse pedigrees.
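The luminance-derivation step can be sketched as bit-packing two 8-bit DVI color components into one 16-bit word. Which components the Delta DVP hardware actually combines is not stated above, so the red-high/green-low assignment below is an assumption for illustration.

```python
import numpy as np

# Sketch of deriving a 16-bit luminance word from 8-bit DVI color
# components. The channel assignment (red = high byte, green = low byte)
# is an assumption, not the documented Delta DVP mapping.
def pack_luminance(red8, green8):
    red16 = red8.astype(np.uint16)
    return (red16 << 8) | green8.astype(np.uint16)

frame_r = np.array([[0x12, 0xFF]], dtype=np.uint8)
frame_g = np.array([[0x34, 0x00]], dtype=np.uint8)
print(pack_luminance(frame_r, frame_g))   # [[4660 65280]], i.e. 0x1234, 0xFF00
```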
Programmable personality interface for the dynamic infrared scene generator (IRSG2)
NASA Astrophysics Data System (ADS)
Buford, James A., Jr.; Mobley, Scott B.; Mayhall, Anthony J.; Braselton, William J.
1998-07-01
As scene generator platforms come to rely on commercial off-the-shelf (COTS) hardware and software components, high-speed programmable personality interfaces (PPIs) are required for interfacing to infrared (IR) flight computers/processors and complex IR projectors in hardware-in-the-loop (HWIL) simulation facilities. Recent technological advances and innovative applications of established technologies are beginning to allow development of cost-effective PPIs to interface to COTS scene generators. Researchers at the U.S. Army Aviation and Missile Command (AMCOM) Missile Research, Development, and Engineering Center (MRDEC) have developed such a PPI to reside between the AMCOM MRDEC IR Scene Generator (IRSG) and either a missile flight computer or the dynamic Laser Diode Array Projector (LDAP). AMCOM MRDEC has developed several PPIs for the first and second generation IRSGs (IRSG1 and IRSG2), which are based on Silicon Graphics Incorporated (SGI) Onyx and Onyx2 computers with Reality Engine 2 (RE2) and Infinite Reality (IR/IR2) graphics engines. This paper provides an overview of PPIs designed, integrated, tested, and verified at AMCOM MRDEC, specifically the IRSG2's PPI.
1991-09-30
it appears no other thing to me than a foul and pestilent congregation of vapors." Hamlet, Act II, Scene 2. "We will not turn our backs or look the... (29) Shakespeare Selected Plays 210 (1981). (30) Remarks on Signing the Bill Amending the Clean Air Act, 26 Weekly Comp. Pres. Doc. 1823 (Nov. 15, 1990). (31
Multi-modal cockpit interface for improved airport surface operations
NASA Technical Reports Server (NTRS)
Arthur, Jarvis J. (Inventor); Bailey, Randall E. (Inventor); Prinzel, III, Lawrence J. (Inventor); Kramer, Lynda J. (Inventor); Williams, Steven P. (Inventor)
2010-01-01
A system for multi-modal cockpit interface during surface operation of an aircraft comprises a head tracking device, a processing element, and a full-color head-worn display. The processing element is configured to receive head position information from the head tracking device, to receive current location information of the aircraft, and to render a virtual airport scene corresponding to the head position information and the current aircraft location. The full-color head-worn display is configured to receive the virtual airport scene from the processing element and to display the virtual airport scene. The current location information may be received from one of a global positioning system or an inertial navigation system.
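A minimal sketch of the head-slaved view computation follows, assuming yaw-only rotations and a simple composition of aircraft heading with tracked head yaw; a real system would use full 6-DOF head pose and geodetic aircraft position.

```python
import numpy as np

# Sketch of composing a head-slaved view direction for a synthetic airport
# scene: the camera pose combines the aircraft's world pose with the
# tracked head orientation. Rotation conventions are assumptions.
def rot_z(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def view_direction(aircraft_heading, head_yaw):
    # World-frame gaze direction = aircraft heading plus head-yaw offset.
    forward = np.array([1.0, 0.0, 0.0])
    return rot_z(aircraft_heading) @ rot_z(head_yaw) @ forward

print(view_direction(np.radians(90), np.radians(-10)))
```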
Evaluation methodology for query-based scene understanding systems
NASA Astrophysics Data System (ADS)
Huster, Todd P.; Ross, Timothy D.; Culbertson, Jared L.
2015-05-01
In this paper, we propose a method for the principled evaluation of scene understanding systems in a query-based framework. We can think of a query-based scene understanding system as a generalization of typical sensor exploitation systems: instead of performing a narrowly defined task (e.g., detect, track, classify, etc.), the system can perform general user-defined tasks specified in a query language. Examples of this type of system have been developed as part of DARPA's Mathematics of Sensing, Exploitation, and Execution (MSEE) program. There is a body of literature on the evaluation of typical sensor exploitation systems, but the open-ended nature of the query interface introduces new aspects to the evaluation problem that have not been widely considered before. In this paper, we state the evaluation problem and propose an approach to efficiently learn about the quality of the system under test. We take the objective of the evaluation to be building a performance model of the system under test, and we rely on the principles of Bayesian experiment design to help construct and select optimal queries for learning about the parameters of that model.
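The Bayesian experiment-design idea can be sketched with a Beta-Bernoulli model of per-query-type success: choose the next query whose outcome carries the greatest mutual information with the model parameter. The query names and counts below are hypothetical, and the MSEE evaluation would use a richer performance model; this only shows the selection principle.

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_info_gain(a, b, n_samples=20000):
    """Monte Carlo estimate of the mutual information between the next
    binary query outcome and a Beta(a, b) success parameter."""
    theta = rng.beta(a, b, n_samples)
    p = theta.mean()                         # predictive success probability
    def h(q):                                # binary entropy
        q = np.clip(q, 1e-12, 1 - 1e-12)
        return -(q * np.log(q) + (1 - q) * np.log(1 - q))
    # I(outcome; theta) = H(E[theta]) - E[H(theta)]
    return h(p) - h(theta).mean()

# Beta posterior state per candidate query type, from responses observed
# so far (counts are made up for illustration).
queries = {"detect-person": (9, 3), "count-vehicles": (2, 2), "track-object": (30, 5)}
best = max(queries, key=lambda q: expected_info_gain(*queries[q]))
print(best)   # the query type we currently know least about is most informative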
NASA Astrophysics Data System (ADS)
Erickson, Ricky A.; Moren, Stephen E.; Skalka, Marion S.
1998-07-01
Providing a flexible and reliable source of IR target imagery is absolutely essential for operation of an IR scene projector in a hardware-in-the-loop simulation environment. The Kinetic Kill Vehicle Hardware-in-the-Loop Simulator (KHILS) at Eglin AFB provides the capability, and requisite interfaces, to supply target IR imagery to its Wideband IR Scene Projector (WISP) from three separate sources at frame rates ranging from 30 to 120 Hz. Video can be input from a VCR source at the conventional 30 Hz frame rate. Pre-canned digital imagery and test patterns can be downloaded into stored memory from the host processor and played back as individual still frames or movie sequences at frame rates up to 120 Hz. Dynamic real-time imagery can be provided to the KHILS WISP projector system at a 120 Hz frame rate from a Silicon Graphics Onyx computer system normally used for generation of digital IR imagery, through a custom CSA-built interface available for either the SGI/DVP or SGI/DD02 interface port. The primary focus of this paper is to describe our technical approach and experience in the development of this unique SGI computer and WISP projector interface.
Feature diagnosticity and task context shape activity in human scene-selective cortex.
Lowe, Matthew X; Gallivan, Jason P; Ferber, Susanne; Cant, Jonathan S
2016-01-15
Scenes are constructed from multiple visual features, yet previous research investigating scene processing has often focused on the contributions of single features in isolation. In the real world, features rarely exist independently of one another and likely converge to inform scene identity in unique ways. Here, we utilize fMRI and pattern classification techniques to examine the interactions between task context (i.e., attend to diagnostic global scene features: texture or layout) and high-level scene attributes (content and spatial boundary) to test the novel hypothesis that scene-selective cortex represents multiple visual features, the importance of which varies according to their diagnostic relevance across scene categories and task demands. Our results show for the first time that scene representations are driven by interactions between multiple visual features and high-level scene attributes. Specifically, univariate analysis of scene-selective cortex revealed that task context and feature diagnosticity shape activity differentially across scene categories. Examination using multivariate decoding methods revealed results consistent with univariate findings, but also evidence for an interaction between high-level scene attributes and diagnostic visual features within scene categories. Critically, these findings suggest visual feature representations are not distributed uniformly across scene categories but are shaped by task context and feature diagnosticity. Thus, we propose that scene-selective cortex constructs a flexible representation of the environment by integrating multiple diagnostically relevant visual features, the nature of which varies according to the particular scene being perceived and the goals of the observer. Copyright © 2015 Elsevier Inc. All rights reserved.
Selective scene perception deficits in a case of topographical disorientation.
Robin, Jessica; Lowe, Matthew X; Pishdadian, Sara; Rivest, Josée; Cant, Jonathan S; Moscovitch, Morris
2017-07-01
Topographical disorientation (TD) is a neuropsychological condition characterized by an inability to find one's way, even in familiar environments. One common contributing cause of TD is landmark agnosia, a visual recognition impairment specific to scenes and landmarks. Although many cases of TD with landmark agnosia have been documented, little is known about the perceptual mechanisms which lead to selective deficits in recognizing scenes. In the present study, we test LH, a man who exhibits TD and landmark agnosia, on measures of scene perception that require selectively attending to either the configural or surface properties of a scene. Compared to healthy controls, LH demonstrates perceptual impairments when attending to the configuration of a scene, but not when attending to its surface properties, such as the pattern of the walls or whether the ground is sand or grass. In contrast, when focusing on objects instead of scenes, LH demonstrates intact perception of both geometric and surface properties. This study demonstrates that in a case of TD and landmark agnosia, the perceptual impairments are selective to the layout of scenes, providing insight into the mechanism of landmark agnosia and scene-selective perceptual processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Adaptive attunement of selective covert attention to evolutionary-relevant emotional visual scenes.
Fernández-Martín, Andrés; Gutiérrez-García, Aída; Capafons, Juan; Calvo, Manuel G
2017-05-01
We investigated selective attention to emotional scenes in peripheral vision, as a function of the adaptive relevance of scene affective content for male and female observers. Pairs of emotional-neutral images appeared peripherally, with perceptual stimulus differences controlled, while viewers were fixating on a different stimulus in central vision. Early selective orienting was assessed by the probability of directing the first fixation towards either scene, and the time until first fixation. Emotional scenes selectively captured covert attention even when they were task-irrelevant, thus revealing involuntary, automatic processing. Sex of observers and specific emotional scene content (e.g., male-to-female aggression, families and babies) interactively modulated covert attention, depending on adaptive priorities and goals for each sex, both for pleasant and unpleasant content. The attentional system exhibits domain-specific and sex-specific biases and attunements, probably rooted in evolutionary pressures to enhance reproductive and protective success. Emotional cues selectively capture covert attention based on their bio-social significance. Copyright © 2017 Elsevier Inc. All rights reserved.
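The two orienting measures described above reduce to simple statistics over trials. A sketch, with made-up trial records standing in for real eye-tracking output:

```python
import numpy as np

# Sketch of the two early-orienting measures: probability that the first
# fixation lands on the emotional scene, and the latency of that fixation.
# Trial records are hypothetical.
trials = [
    {"first_fix": "emotional", "latency_ms": 242},
    {"first_fix": "neutral",   "latency_ms": 310},
    {"first_fix": "emotional", "latency_ms": 228},
    {"first_fix": "emotional", "latency_ms": 275},
]

p_first_emotional = np.mean([t["first_fix"] == "emotional" for t in trials])
latencies = [t["latency_ms"] for t in trials if t["first_fix"] == "emotional"]
print(p_first_emotional, np.mean(latencies))
```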
Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System
2015-03-26
camera model. Light reflected or projected from objects in the scene of the outside world is taken in by the aperture (or opening) shaped as a double... model's analog aspects with an analog-to-digital interface converting raw images of the outside world scene into digital information a computer can use to... [Figure 2.7: Digital Image Coordinate System.] Angular Field of View. The angular field of view is the angle of the world scene
Visual search in scenes involves selective and non-selective pathways
Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R
2010-01-01
How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: a “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global/statistical information. PMID:21227734
Jamin, Gaston; Luyten, Tom; Delsing, Rob; Braun, Susy
2017-10-17
Interactive art installations might engage nursing home residents with dementia. The main aim of this article was to describe the challenging design process of an interactive artwork for nursing home residents, in co-creation with all stakeholders, and to share the methods used and lessons learned. This process is illustrated by the design of the interface of VENSTER as a case. Nursing home residents from the psychogeriatric ward, informal caregivers, client representatives, health care professionals and members of the management team were involved in the design process, which consisted of three phases: (1) identify requirements, (2) develop a prototype and (3) conduct usability tests. Several methods were used (e.g. guided co-creation sessions, "Wizard of Oz"). Each phase generated "lessons learned", which were used as the departure point of the next phase. Participants hardly paid attention to the installation and interface. There, however, seemed to be an untapped potential for creating an immersive experience by focussing more on the content itself as an interface (e.g. creating specific scenes with cues for interaction, scenes based on existing knowledge or prior experiences). Fifteen "lessons learned", which can potentially assist the design of an interactive artwork for nursing home residents suffering from dementia, were derived from the design process. This description provides tools and best practices for stakeholders to make (better) informed choices during the creation of interactive artworks. It also illustrates how co-design can make the difference between designing a pleasurable experience and a meaningful one. Implications for rehabilitation: Co-design with all stakeholders can make the difference between designing a pleasurable experience and a meaningful one. There seems to be an untapped potential for creating an immersive experience by focussing more on the content itself as an interface (e.g. creating specific scenes with cues for interaction, scenes based on existing knowledge or prior experiences). Content as an interface proved to be a crucial part of the overall user experience. The case study provides tools and best practices (15 "lessons learned") for stakeholders to make (better) informed choices during the creation of interactive artworks.
The Neural Dynamics of Attentional Selection in Natural Scenes.
Kaiser, Daniel; Oosterhof, Nikolaas N; Peelen, Marius V
2016-10-12
The human visual system can only represent a small subset of the many objects present in cluttered scenes at any given time, such that objects compete for representation. Despite these processing limitations, the detection of object categories in cluttered natural scenes is remarkably rapid. How does the brain efficiently select goal-relevant objects from cluttered scenes? In the present study, we used multivariate decoding of magnetoencephalography (MEG) data to track the neural representation of within-scene objects as a function of top-down attentional set. Participants detected categorical targets (cars or people) in natural scenes. The presence of these categories within a scene was decoded from MEG sensor patterns by training linear classifiers on differentiating cars and people in isolation and testing these classifiers on scenes containing one of the two categories. The presence of a specific category in a scene could be reliably decoded from MEG response patterns as early as 160 ms, despite substantial scene clutter and variation in the visual appearance of each category. Strikingly, we find that these early categorical representations fully depend on the match between visual input and top-down attentional set: only objects that matched the current attentional set were processed to the category level within the first 200 ms after scene onset. A sensor-space searchlight analysis revealed that this early attention bias was localized to lateral occipitotemporal cortex, reflecting top-down modulation of visual processing. These results show that attention quickly resolves competition between objects in cluttered natural scenes, allowing for the rapid neural representation of goal-relevant objects. Efficient attentional selection is crucial in many everyday situations. For example, when driving a car, we need to quickly detect obstacles, such as pedestrians crossing the street, while ignoring irrelevant objects. How can humans efficiently perform such tasks, given the multitude of objects contained in real-world scenes? Here we used multivariate decoding of magnetoencephalography data to characterize the neural underpinnings of attentional selection in natural scenes with high temporal precision. We show that brain activity quickly tracks the presence of objects in scenes, but crucially only for those objects that were immediately relevant for the participant. These results provide evidence for fast and efficient attentional selection that mediates the rapid detection of goal-relevant objects in real-world environments. Copyright © 2016 the authors.
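The cross-decoding scheme (train on isolated objects, test on scenes, separately at each time point) can be sketched with a standard linear classifier. The data shapes and values below are random placeholders, not MEG recordings; with real data one would look for above-chance accuracy emerging around 160 ms.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of per-time-point cross-decoding: train on MEG sensor patterns
# evoked by isolated cars vs. people, test on patterns evoked by scenes.
rng = np.random.default_rng(0)
n_sensors, n_times = 32, 50
X_iso = rng.standard_normal((120, n_sensors, n_times))   # isolated objects
y_iso = rng.integers(0, 2, 120)                          # 0 = car, 1 = person
X_scene = rng.standard_normal((80, n_sensors, n_times))  # cluttered scenes
y_scene = rng.integers(0, 2, 80)                         # category present

accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_iso[:, :, t], y_iso)
    accuracy[t] = clf.score(X_scene[:, :, t], y_scene)

print(accuracy.max())   # chance-level here, since the data are random
```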
Greene, Michelle R; Baldassano, Christopher; Fei-Fei, Li; Beck, Diane M; Baker, Chris I
2018-01-01
Inherent correlations between visual and semantic features in real-world scenes make it difficult to determine how different scene properties contribute to neural representations. Here, we assessed the contributions of multiple properties to scene representation by partitioning the variance explained in human behavioral and brain measurements by three feature models whose inter-correlations were minimized a priori through stimulus preselection. Behavioral assessments of scene similarity reflected unique contributions from a functional feature model indicating potential actions in scenes as well as high-level visual features from a deep neural network (DNN). In contrast, similarity of cortical responses in scene-selective areas was uniquely explained by mid- and high-level DNN features only, while an object label model did not contribute uniquely to either domain. The striking dissociation between functional and DNN features in their contribution to behavioral and brain representations of scenes indicates that scene-selective cortex represents only a subset of behaviorally relevant scene information. PMID:29513219
Groen, Iris Ia; Greene, Michelle R; Baldassano, Christopher; Fei-Fei, Li; Beck, Diane M; Baker, Chris I
2018-03-07
Inherent correlations between visual and semantic features in real-world scenes make it difficult to determine how different scene properties contribute to neural representations. Here, we assessed the contributions of multiple properties to scene representation by partitioning the variance explained in human behavioral and brain measurements by three feature models whose inter-correlations were minimized a priori through stimulus preselection. Behavioral assessments of scene similarity reflected unique contributions from a functional feature model indicating potential actions in scenes as well as high-level visual features from a deep neural network (DNN). In contrast, similarity of cortical responses in scene-selective areas was uniquely explained by mid- and high-level DNN features only, while an object label model did not contribute uniquely to either domain. The striking dissociation between functional and DNN features in their contribution to behavioral and brain representations of scenes indicates that scene-selective cortex represents only a subset of behaviorally relevant scene information.
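Variance partitioning of this kind is often computed from R-squared differences between full and reduced regression models; whether the paper used exactly this scheme is not stated above, so treat the sketch as one standard way to do it. Features below are random placeholders.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Sketch of variance partitioning: the unique contribution of one feature
# model is the drop in R^2 when it is removed from the full regression.
rng = np.random.default_rng(2)
n_scenes = 100
F_func = rng.standard_normal((n_scenes, 5))   # functional feature model
F_dnn = rng.standard_normal((n_scenes, 5))    # DNN feature model
F_obj = rng.standard_normal((n_scenes, 5))    # object-label model
# Simulated similarity measure driven mostly by the DNN features.
y = F_dnn @ rng.standard_normal(5) + 0.5 * rng.standard_normal(n_scenes)

def r2(*feature_sets):
    X = np.column_stack(feature_sets)
    return LinearRegression().fit(X, y).score(X, y)

full = r2(F_func, F_dnn, F_obj)
unique_dnn = full - r2(F_func, F_obj)
unique_func = full - r2(F_dnn, F_obj)
print(round(unique_dnn, 3), round(unique_func, 3))
```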
Electrophysiological revelations of trial history effects in a color oddball search task.
Shin, Eunsam; Chong, Sang Chul
2016-12-01
In visual oddball search tasks, viewing a no-target scene (i.e., no-target selection trial) leads to the facilitation or delay of the search time for a target in a subsequent trial. Presumably, this selection failure leads to biasing attentional set and prioritizing stimulus features unseen in the no-target scene. We observed attention-related ERP components and tracked the course of attentional biasing as a function of trial history. Participants were instructed to identify color oddballs (i.e., targets) shown in varied trial sequences. The number of no-target scenes preceding a target scene was increased from zero to two to reinforce attentional biasing, and colors presented in two successive no-target scenes were repeated or changed to systematically bias attention to specific colors. For the no-target scenes, the presentation of a second no-target scene resulted in an early selection of, and sustained attention to, the changed colors (mirrored in the frontal selection positivity, the anterior N2, and the P3b). For the target scenes, the N2pc indicated an earlier allocation of attention to the targets with unseen or remotely seen colors. Inhibitory control of attention, shown in the anterior N2, was greatest when the target scene was followed by repeated no-target scenes with repeated colors. Finally, search times and the P3b were influenced by both color previewing and its history. The current results demonstrate that attentional biasing can occur on a trial-by-trial basis and be influenced by both feature previewing and its history. © 2016 Society for Psychophysiological Research.
Thake, Carol L; Bambling, Matthew; Edirippulige, Sisira; Marx, Eric
2017-10-01
Research supports therapeutic use of nature scenes in healthcare settings, particularly to reduce stress. However, limited literature is available to provide a cohesive guide for selecting scenes that may provide optimal therapeutic effect. This study produced and tested a replicable process for selecting nature scenes with therapeutic potential. Psychoevolutionary theory informed the construction of the Importance for Survival Scale (IFSS), and its usefulness for identifying scenes that people generally prefer to view and that hold potential to reduce stress was tested. Relationships between Importance for Survival (IFS), preference, and restoration were tested. General community participants ( N = 20 males, 20 females; M age = 48 years) Q-sorted sets of landscape photographs (preranked by the researcher in terms of IFS using the IFSS) from most to least preferred, and then completed the Short-Version Revised Restoration Scale in response to viewing a selection of the scenes. Results showed significant positive relationships between IFS and each of scene preference (large effect), and restoration potential (medium effect), as well as between scene preference and restoration potential across the levels of IFS (medium effect), and for individual participants and scenes (large effect). IFS was supported as a framework for identifying nature scenes that people will generally prefer to view and that hold potential for restoration from emotional distress; however, greater therapeutic potential may be expected when people can choose which of the scenes they would prefer to view. Evidence for the effectiveness of the IFSS was produced.
Analysis of Urban Terrain Data for Use in the Development of an Urban Camouflage Pattern
1990-02-01
the entire lightness gamut , but concentrated in the red, orange, yellow and neutral regions of color space. 20. DISTRIBUTION I AVAILABILITY OF...le·nents grouped by color. ) Summary of Scenes Filmed for Urban Camouflage Study. 01Jtirnum Number of Do·nains Separated by Type; Sele:::ted CIELAB ...Values for All Urban Scenes. Selected CIELAB Values for Type I Urban Scenes. Selected CIELAB Values for Type II Urban Scenes. v Page 3 6 7 8 9
Bag of Visual Words Model with Deep Spatial Features for Geographical Scene Classification
Wu, Lin
2017-01-01
With the popular use of geotagged images, more and more research effort has been placed on geographical scene classification. In geographical scene classification, valid spatial feature selection can significantly boost the final performance. The bag of visual words (BoVW) model performs well at feature selection in geographical scene classification; nevertheless, it works effectively only if the provided feature extractor is well-matched. In this paper, we use convolutional neural networks (CNNs) to optimize the proposed feature extractor, so that it can learn more suitable visual vocabularies from the geotagged images. Our approach achieves better performance than BoVW as a tool for geographical scene classification on three datasets which contain a variety of scene categories. PMID:28706534
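A minimal BoVW pipeline looks like the sketch below, where random vectors stand in for the learned per-patch CNN descriptors; the vocabulary size and data dimensions are arbitrary, and the encoding and classification steps are the generic ones rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

# Sketch of a BoVW pipeline: cluster patch descriptors into a visual
# vocabulary, encode each image as a word histogram, classify histograms.
rng = np.random.default_rng(3)
n_images, patches_per_image, dim, k = 40, 30, 64, 16

descriptors = rng.standard_normal((n_images, patches_per_image, dim))
labels = rng.integers(0, 2, n_images)          # two scene categories

vocab = KMeans(n_clusters=k, n_init=10, random_state=0)
vocab.fit(descriptors.reshape(-1, dim))        # visual vocabulary

def encode(image_desc):
    words = vocab.predict(image_desc)          # assign patches to visual words
    hist, _ = np.histogram(words, bins=np.arange(k + 1))
    return hist / hist.sum()                   # normalized word histogram

X = np.array([encode(d) for d in descriptors])
clf = LinearSVC().fit(X[:30], labels[:30])
print(clf.score(X[30:], labels[30:]))          # chance-level on random data
```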
NASA Technical Reports Server (NTRS)
Nguyen, Lac; Kenney, Patrick J.
1993-01-01
Development of interactive virtual environments (VE) has typically consisted of three primary activities: model (object) development, model relationship tree development, and environment behavior definition and coding. The model and relationship tree development activities are accomplished with a variety of well-established graphic library (GL) based programs - most utilizing graphical user interfaces (GUI) with point-and-click interactions. Because of this GUI format, little programming expertise on the part of the developer is necessary to create the 3D graphical models or to establish interrelationships between the models. However, the third VE development activity, environment behavior definition and coding, has generally required the greatest amount of time and programmer expertise. Behaviors, characteristics, and interactions between objects and the user within a VE must be defined via command-line C coding prior to rendering the environment scenes. In an effort to simplify this environment behavior definition phase for non-programmers, and to provide easy access to model and tree tools, a graphical interface and development tool has been created. The principal thrust of this research is to effect rapid development and prototyping of virtual environments. This presentation will discuss the 'Visual Interface for Virtual Interaction Development' (VIVID) tool: an X-Windows-based system employing drop-down menus for user selection of program access, models and trees, behavior editing, and code generation. Examples of these selections will be highlighted in this presentation, as will the currently available program interfaces. The functionality of this tool allows non-programming users access to all facets of VE development while providing experienced programmers with a collection of pre-coded behaviors. In conjunction with its existing interfaces and predefined suite of behaviors, future development plans for VIVID will be described. These include incorporation of dual-user virtual environment enhancements, tool expansion, and additional behaviors.
Castaldelli-Maia, João Mauricio; Oliveira, Hercílio Pereira; Andrade, Arthur Guerra; Lotufo-Neto, Francisco; Bhugra, Dinesh
2012-01-01
Themes like alcohol and drug abuse, relationship difficulties, psychoses, autism and personality dissociation disorders have been widely used in films. Psychiatry and psychiatric conditions in various cultural settings are increasingly taught using films. Many articles on cinema and psychiatry have been published but none have presented any methodology on how to select material. Here, the authors look at the portrayal of abusive use of alcohol and drugs during the Brazilian cinema revival period (1994 to 2008). Qualitative study at two universities in the state of São Paulo. Scenes were selected from films available at rental stores and were analyzed using a specifically designed protocol. We assessed how realistic these scenes were and their applicability for teaching. One author selected 70 scenes from 50 films (graded for realism and teaching applicability > 8). These were then rated by another two judges. Rating differences among the three judges were assessed using nonparametric tests (P < 0.001). Scenes with high scores (> 8) were defined as "quality scenes". Thirty-nine scenes from 27 films were identified as "quality scenes". Alcohol, cannabis, cocaine, hallucinogens and inhalants were included in these. Signs and symptoms of intoxication, abusive/harmful use and dependence were shown. We have produced rich teaching material for discussing psychopathology relating to alcohol and drug use that can be used both at undergraduate and at postgraduate level. Moreover, it could be seen that certain drug use behavioral patterns are deeply rooted in some Brazilian films and groups.
High-fidelity real-time maritime scene rendering
NASA Astrophysics Data System (ADS)
Shyu, Hawjye; Taczak, Thomas M.; Cox, Kevin; Gover, Robert; Maraviglia, Carlos; Cahill, Colin
2011-06-01
The ability to simulate authentic engagements using real-world hardware is increasingly important. For rendering maritime environments, scene generators must be capable of rendering radiometrically accurate scenes with correct temporal and spatial characteristics. When the simulation is used as input to real-world hardware or human observers, the scene generator must operate in real time. This paper introduces a novel, real-time scene generation capability for rendering radiometrically accurate scenes of backgrounds and targets in maritime environments. The new model is an optimized and parallelized version of the US Navy CRUISE_Missiles rendering engine. It was designed to accept environmental descriptions and engagement geometry data from external sources, render a scene, transform the radiometric scene using the electro-optical response functions of a sensor under test, and output the resulting signal to real-world hardware. This paper reviews components of the scene rendering algorithm, and details the modifications required to run this code in real time. A description of the simulation architecture and interfaces to external hardware and models is presented. Performance assessments of the frame rate and radiometric accuracy of the new code are summarized. This work was completed in FY10 under Office of the Secretary of Defense (OSD) Central Test and Evaluation Investment Program (CTEIP) funding and will undergo a validation process in FY11.
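The sensor-transform stage described above amounts to weighting the scene's spectral radiance by the sensor response, integrating over the band, and quantizing to counts. The response curve, gain, and bit depth in this sketch are assumptions for illustration, not the CRUISE_Missiles implementation.

```python
import numpy as np

# Sketch of a sensor transform: weight spectral radiance by a relative
# response curve, integrate over the band, and quantize to sensor counts.
wavelengths = np.linspace(3.0, 5.0, 100)                    # microns, MWIR band
response = np.exp(-0.5 * ((wavelengths - 4.0) / 0.5) ** 2)  # assumed response

def to_counts(spectral_radiance, gain=1e6, bits=14):
    in_band = np.trapz(spectral_radiance * response, wavelengths)
    counts = np.clip(gain * in_band, 0, 2 ** bits - 1)      # saturate at full well
    return int(counts)

scene_pixel = 1e-3 * np.ones_like(wavelengths)              # flat radiance stub
print(to_counts(scene_pixel))
```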
Figure-Ground Organization in Visual Cortex for Natural Scenes
2016-01-01
Figure-ground organization and border-ownership assignment are essential for understanding natural scenes. It has been shown that many neurons in the macaque visual cortex signal border-ownership in displays of simple geometric shapes such as squares, but how well these neurons resolve border-ownership in natural scenes is not known. We studied area V2 neurons in behaving macaques with static images of complex natural scenes. We found that about half of the neurons were border-ownership selective for contours in natural scenes, and this selectivity originated from the image context. The border-ownership signals emerged within 70 ms after stimulus onset, only ∼30 ms after response onset. A substantial fraction of neurons were highly consistent across scenes. Thus, the cortical mechanisms of figure-ground organization are fast and efficient even in images of complex natural scenes. Understanding how the brain performs this task so fast remains a challenge. PMID:28058269
Groen, Iris I A; Silson, Edward H; Baker, Chris I
2017-02-19
Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).
2017-01-01
Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044013
Bölte, Jens; Hofmann, Reinhild; Meier, Claudine C.; Dobel, Christian
2018-01-01
At the interface between scene perception and speech production, we investigated how rapidly action scenes can activate semantic and lexical information. Experiment 1 examined how complex action-scene primes, presented for 150 ms, 100 ms, or 50 ms and subsequently masked, influenced the speed with which immediately following action-picture targets are named. Prime and target actions were either identical, showed the same action with different actors and environments, or were unrelated. Relative to unrelated primes, identical and same-action primes facilitated naming the target action, even when presented for 50 ms. In Experiment 2, neutral primes assessed the direction of effects. Identical and same-action scenes induced facilitation but unrelated actions induced interference. In Experiment 3, written verbs were used as targets for naming, preceded by action primes. When target verbs denoted the prime action, clear facilitation was obtained. In contrast, interference was observed when target verbs were phonologically similar, but otherwise unrelated, to the names of prime actions. This is clear evidence for word-form activation by masked action scenes. Masked action pictures thus provide conceptual information that is detailed enough to facilitate apprehension and naming of immediately following scenes. Masked actions even activate their word-form information, as is evident when targets are words. We thus show how language production can be primed with briefly flashed masked action scenes, in answer to long-standing questions in scene processing. PMID:29652939
Effects of aging on neural connectivity underlying selective memory for emotional scenes
Waring, Jill D.; Addis, Donna Rose; Kensinger, Elizabeth A.
2012-01-01
Older adults show age-related reductions in memory for neutral items within complex visual scenes, but just like young adults, older adults exhibit a memory advantage for emotional items within scenes compared with the background scene information. The present study examined young and older adults’ encoding-stage effective connectivity for selective memory of emotional items versus memory for both the emotional item and its background. In a functional magnetic resonance imaging (fMRI) study, participants viewed scenes containing either positive or negative items within neutral backgrounds. Outside the scanner, participants completed a memory test for items and backgrounds. Irrespective of scene content being emotionally positive or negative, older adults had stronger positive connections among frontal regions and from frontal regions to medial temporal lobe structures than did young adults, especially when items and backgrounds were subsequently remembered. These results suggest there are differences between young and older adults’ connectivity accompanying the encoding of emotional scenes. Older adults may require more frontal connectivity to encode all elements of a scene rather than just encoding the emotional item. PMID:22542836
Effects of aging on neural connectivity underlying selective memory for emotional scenes.
Waring, Jill D; Addis, Donna Rose; Kensinger, Elizabeth A
2013-02-01
Older adults show age-related reductions in memory for neutral items within complex visual scenes, but just like young adults, older adults exhibit a memory advantage for emotional items within scenes compared with the background scene information. The present study examined young and older adults' encoding-stage effective connectivity for selective memory of emotional items versus memory for both the emotional item and its background. In a functional magnetic resonance imaging (fMRI) study, participants viewed scenes containing either positive or negative items within neutral backgrounds. Outside the scanner, participants completed a memory test for items and backgrounds. Irrespective of scene content being emotionally positive or negative, older adults had stronger positive connections among frontal regions and from frontal regions to medial temporal lobe structures than did young adults, especially when items and backgrounds were subsequently remembered. These results suggest there are differences between young and older adults' connectivity accompanying the encoding of emotional scenes. Older adults may require more frontal connectivity to encode all elements of a scene rather than just encoding the emotional item. Published by Elsevier Inc.
A new approach to modeling the influence of image features on fixation selection in scenes
Nuthmann, Antje; Einhäuser, Wolfgang
2015-01-01
Which image characteristics predict where people fixate when memorizing natural images? To answer this question, we introduce a new analysis approach that combines a novel scene-patch analysis with generalized linear mixed models (GLMMs). Our method allows for (1) directly describing the relationship between continuous feature value and fixation probability, and (2) assessing each feature's unique contribution to fixation selection. To demonstrate this method, we estimated the relative contribution of various image features to fixation selection: luminance and luminance contrast (low-level features); edge density (a mid-level feature); visual clutter and image segmentation to approximate local object density in the scene (higher-level features). An additional predictor captured the central bias of fixation. The GLMM results revealed that edge density, clutter, and the number of homogenous segments in a patch can independently predict whether image patches are fixated or not. Importantly, neither luminance nor contrast had an independent effect above and beyond what could be accounted for by the other predictors. Since the parcellation of the scene and the selection of features can be tailored to the specific research question, our approach allows for assessing the interplay of various factors relevant for fixation selection in scenes in a powerful and flexible manner. PMID:25752239
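A pared-down version of this analysis can be sketched with a plain logistic GLM predicting fixation from patch features; the paper's GLMMs add random effects for participants and scenes, which are omitted here for brevity. All data below are simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simplified sketch: logistic regression of patch fixation on patch
# features. Each coefficient estimates a feature's unique contribution
# with the other features held fixed. Random effects are omitted.
rng = np.random.default_rng(4)
n = 500
df = pd.DataFrame({
    "edge_density": rng.random(n),
    "clutter": rng.random(n),
    "luminance": rng.random(n),
})
# Simulate fixations driven by edge density and clutter, not luminance.
logit = 2.0 * df.edge_density + 1.5 * df.clutter - 1.5
df["fixated"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = smf.logit("fixated ~ edge_density + clutter + luminance", data=df).fit()
print(model.params)
```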
Oculomotor capture during real-world scene viewing depends on cognitive load.
Matsukura, Michi; Brockmole, James R; Boot, Walter R; Henderson, John M
2011-03-25
It has been claimed that gaze control during scene viewing is largely governed by stimulus-driven, bottom-up selection mechanisms. Recent research, however, has strongly suggested that observers' top-down control plays a dominant role in attentional prioritization in scenes. A notable exception to this strong top-down control is oculomotor capture, where visual transients in a scene draw the eyes. One way to test whether oculomotor capture during scene viewing is independent of an observer's top-down goal setting is to reduce observers' cognitive resource availability. In the present study, we examined whether increasing observers' cognitive load influences the frequency and speed of oculomotor capture during scene viewing. In Experiment 1, we tested whether increasing observers' cognitive load modulates the degree of oculomotor capture by a new object suddenly appeared in a scene. Similarly, in Experiment 2, we tested whether increasing observers' cognitive load modulates the degree of oculomotor capture by an object's color change. In both experiments, the degree of oculomotor capture decreased as observers' cognitive resources were reduced. These results suggest that oculomotor capture during scene viewing is dependent on observers' top-down selection mechanisms. Copyright © 2011 Elsevier Ltd. All rights reserved.
Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria
2016-04-01
The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with two types of engagement. Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological conditions. Participants viewed camera-engaged (i.e., human figure facing camera) and task-engaged (i.e., human figure looking at and touching an object) visual scenes. Participants with aphasia responded to engagement cues by focusing on objects of interest more for task-engaged scenes than camera-engaged scenes; however, the difference in their responses to these scenes was not as pronounced as that observed in adults without neurological conditions. In addition, people with aphasia spent more time looking at background areas of interest and less time looking at person areas of interest for camera-engaged scenes than did control participants. Results indicate people with aphasia visually attend to scenes differently than adults without neurological conditions. As a consequence, augmentative and alternative communication (AAC) facilitators may have different visual attention behaviors than the people with aphasia for whom they are constructing or selecting visual scenes. Further examination of the visual attention of people with aphasia may help optimize visual scene selection.
1989-08-01
paths for integration with the off-aperture and dual-mirror VPD designs. PREFACE: The goal of this work was to explore integration of an eye line-of-gaze... Relationship in one plane between point-of-gaze on a flat scene and relative eye, detector, and scene positions... and eye line-of-gaze measurement. As a first step towards the design of an appropriate eye tracking system for interface with the virtual cockpit
Deciding what is possible and impossible following hippocampal damage in humans.
McCormick, Cornelia; Rosenthal, Clive R; Miller, Thomas D; Maguire, Eleanor A
2017-03-01
There is currently much debate about whether the precise role of the hippocampus in scene processing is predominantly constructive, perceptual, or mnemonic. Here, we developed a novel experimental paradigm designed to control for general perceptual and mnemonic demands, thus enabling us to specifically vary the requirement for constructive processing. We tested the ability of patients with selective bilateral hippocampal damage and matched control participants to detect either semantic (e.g., an elephant with butterflies for ears) or constructive (e.g., an endless staircase) violations in realistic images of scenes. Thus, scenes could be semantically or constructively 'possible' or 'impossible'. Importantly, general perceptual and memory requirements were similar for both types of scene. We found that the patients performed comparably to control participants when deciding whether scenes were semantically possible or impossible, but were selectively impaired at judging if scenes were constructively possible or impossible. Post-task debriefing indicated that control participants constructed flexible mental representations of the scenes in order to make constructive judgements, whereas the patients were more constrained and typically focused on specific fragments of the scenes, with little indication of having constructed internal scene models. These results suggest that one contribution the hippocampus makes to scene processing is to construct internal representations of spatially coherent scenes, which may be vital for modelling the world during both perception and memory recall. © 2016 The Authors. Hippocampus published by Wiley Periodicals, Inc.
Emotional and neutral scenes in competition: orienting, efficiency, and identification.
Calvo, Manuel G; Nummenmaa, Lauri; Hyönä, Jukka
2007-12-01
To investigate preferential processing of emotional scenes competing for limited attentional resources with neutral scenes, prime pictures were presented briefly (450 ms), peripherally (5.2 degrees away from fixation), and simultaneously (one emotional and one neutral scene) versus singly. Primes were followed by a mask and a probe for recognition. Hit rate was higher for emotional than for neutral scenes in the dual- but not in the single-prime condition, and A' sensitivity decreased for neutral but not for emotional scenes in the dual-prime condition. This preferential processing involved both selective orienting and efficient encoding, as revealed, respectively, by a higher probability of first fixation on, and shorter saccade latencies to, emotional scenes, and by shorter fixation time needed to accurately identify emotional scenes, in comparison with neutral scenes.
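The A' sensitivity index used above is a nonparametric signal-detection measure computed from hit and false-alarm rates; the closed-form approximation below is the one commonly attributed to Pollack and Norman.

```python
# A' (A-prime) sensitivity from hit rate H and false-alarm rate F.
def a_prime(hit_rate, fa_rate):
    if hit_rate >= fa_rate:
        return 0.5 + ((hit_rate - fa_rate) * (1 + hit_rate - fa_rate)
                      / (4 * hit_rate * (1 - fa_rate)))
    # symmetric form when false alarms exceed hits
    return 0.5 - ((fa_rate - hit_rate) * (1 + fa_rate - hit_rate)
                  / (4 * fa_rate * (1 - hit_rate)))

print(a_prime(0.80, 0.20))   # 0.875, well above the 0.5 chance level
```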
Selective looking at natural scenes: Hedonic content and gender.
Bradley, Margaret M; Costa, Vincent D; Lang, Peter J
2015-10-01
Choice viewing behavior when looking at affective scenes was assessed to examine differences due to hedonic content and gender by monitoring eye movements in a selective looking paradigm. On each trial, participants viewed a pair of pictures that included a neutral picture together with an affective scene depicting either contamination, mutilation, threat, food, nude males, or nude females. The duration of time that gaze was directed to each picture in the pair was determined from eye fixations. Results indicated that viewing choices varied with both hedonic content and gender. Initially, gaze duration for both men and women was heightened when viewing all affective contents, but was subsequently followed by significant avoidance of scenes depicting contamination or nude males. Gender differences were most pronounced when viewing pictures of nude females, with men continuing to devote longer gaze time to pictures of nude females throughout viewing, whereas women avoided scenes of nude people, whether male or female, later in the viewing interval. For women, reported disgust of sexual activity was also inversely related to gaze duration for nude scenes. Taken together, selective looking as indexed by eye movements reveals differential perceptual intake as a function of specific content, gender, and individual differences. Copyright © 2015 Elsevier B.V. All rights reserved.
The elephant in the room: Inconsistency in scene viewing and representation.
Spotorno, Sara; Tatler, Benjamin W
2017-10-01
We examined the extent to which semantic informativeness, consistency with expectations and perceptual salience contribute to object prioritization in scene viewing and representation. In scene viewing (Experiments 1-2), semantic guidance overshadowed perceptual guidance in determining fixation order, with the greatest prioritization for objects that were diagnostic of the scene's depicted event. Perceptual properties affected selection of consistent objects (regardless of their informativeness) but not of inconsistent objects. Semantic and perceptual properties also interacted in influencing foveal inspection, as inconsistent objects were fixated longer than low but not high salience diagnostic objects. Although the two object types were not studied in direct competition with each other (each was studied in competition with diagnostic objects), inconsistent objects were fixated earlier and for longer than consistent but marginally informative objects. In change detection (Experiment 3), perceptual guidance overshadowed semantic guidance, promoting detection of highly salient changes. A residual advantage for diagnosticity over inconsistency emerged only when selection prioritization could not be based on low-level features. Overall these findings show that semantic inconsistency is not prioritized within a scene when competing with other relevant information that is essential to scene understanding and respects observers' expectations. Moreover, they reveal that the relative dominance of semantic or perceptual properties during selection depends on ongoing task requirements. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
1992-03-17
Approved for Public Release; Distribution Unlimited. Phillips Laboratory, Air Force Systems Command, Hanscom Air Force Base, Massachusetts 01731... the SWOE thermal models and the design of a new Command Interface System and User Interface System. Subject terms: BTI/SWOE... to the 3-D Tree Model. Operation Via the SWOE Command Interface System. Addition of Radiation Exchange to the Environment.
Optical to optical interface device
NASA Technical Reports Server (NTRS)
Oliver, D. S.; Vohl, P.; Nisenson, P.
1972-01-01
The development, fabrication, and testing of a preliminary model of an optical-to-optical (noncoherent-to-coherent) interface device for use in coherent optical parallel processing systems are described. The developed device demonstrates a capability for accepting as an input a scene illuminated by a noncoherent radiation source and providing as an output a coherent light beam spatially modulated to represent the original noncoherent scene. The converter device developed under this contract employs a Pockels readout optical modulator (PROM). This is a photosensitive electro-optic element which can sense and electrostatically store optical images. The stored images can be simultaneously or subsequently read out optically by utilizing the electrostatic storage pattern to control an electro-optic light-modulating property of the PROM. The readout process is parallel, as no scanning mechanism is required. The PROM provides the functions of optical image sensing, modulation, and storage in a single active material.
Re-engaging with the past: recapitulation of encoding operations during episodic retrieval
Morcom, Alexa M.
2014-01-01
Recollection of events is accompanied by selective reactivation of cortical regions which responded to specific sensory and cognitive dimensions of the original events. This reactivation is thought to reflect the reinstatement of stored memory representations and therefore to reflect memory content, but it may also reveal processes which support both encoding and retrieval. The present study used event-related functional magnetic resonance imaging to investigate whether regions selectively engaged in encoding face and scene context with studied words are also re-engaged when the context is later retrieved. As predicted, encoding face and scene context with visually presented words elicited activity in distinct, context-selective regions. Retrieval of face and scene context also re-engaged some of the regions which had shown successful encoding effects. However, this recapitulation of encoding activity did not show the same context selectivity observed at encoding. Successful retrieval of both face and scene context re-engaged regions which had been associated with encoding of the other type of context, as well as those associated with encoding the same type of context. This recapitulation may reflect retrieval attempts which are not context-selective, but use shared retrieval cues to re-engage encoding operations in service of recollection. PMID:24904386
Age Differences in Selective Memory of Goal-Relevant Stimuli Under Threat.
Durbin, Kelly A; Clewett, David; Huang, Ringo; Mather, Mara
2018-02-01
When faced with threat, people often selectively focus on and remember the most pertinent information while simultaneously ignoring any irrelevant information. Filtering distractors under arousal requires inhibitory mechanisms, which take time to recruit and often decline in older age. Despite the adaptive nature of this ability, relatively little research has examined how both threat and time spent preparing these inhibitory mechanisms affect selective memory for goal-relevant information across the life span. In this study, 32 younger and 31 older adults were asked to encode task-relevant scenes, while ignoring transparent task-irrelevant objects superimposed onto them. Threat levels were increased on some trials by threatening participants with monetary deductions if they later forgot scenes that followed threat cues. We also varied the time between threat induction and a to-be-encoded scene (i.e., 2 s, 4 s, 6 s) to determine whether both threat and timing effects on memory selectivity differ by age. We found that age differences in memory selectivity only emerged after participants spent a long time (i.e., 6 s) preparing for selective encoding. Critically, this time-dependent age difference occurred under threatening, but not neutral, conditions. Under threat, longer preparation time led to enhanced memory for task-relevant scenes and greater memory suppression of task-irrelevant objects in younger adults. In contrast, increased preparation time after threat induction had no effect on older adults' scene memory and actually worsened memory suppression of task-irrelevant objects. These findings suggest that increased time to prepare top-down encoding processes benefits younger, but not older, adults' selective memory for goal-relevant information under threat. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Lescroart, Mark D.; Stansbury, Dustin E.; Gallant, Jack L.
2015-01-01
Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue. PMID:26594164
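To make the voxel-wise modeling (VM) recipe concrete, the following minimal sketch fits one regularized linear regression per candidate feature space and scores each model by the variance it explains in withheld data. It is not the authors' code: the data are synthetic, and the three feature spaces and their sizes are invented stand-ins for the Fourier-power, subjective-distance, and object-category models.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_test, n_voxels = 1000, 386, 50   # 1386 "images" in total, as above

# Three hypothetical feature spaces standing in for the three hypotheses.
feature_spaces = {"fourier_power": 32, "subjective_distance": 8, "object_categories": 20}

# Simulated BOLD responses driven by a shared latent signal, so that the
# competing models can explain overlapping portions of the variance.
latent = rng.standard_normal((n_train + n_test, 4))
bold = latent @ rng.standard_normal((4, n_voxels)) \
       + rng.standard_normal((n_train + n_test, n_voxels))

for name, n_feat in feature_spaces.items():
    # Hypothetical stimulus features, partially correlated with the latent signal.
    X = latent @ rng.standard_normal((4, n_feat)) \
        + rng.standard_normal((n_train + n_test, n_feat))
    model = Ridge(alpha=10.0).fit(X[:n_train], bold[:n_train])
    resid = bold[n_train:] - model.predict(X[n_train:])
    # Variance explained per voxel in the withheld data, then averaged.
    r2 = 1 - resid.var(axis=0) / bold[n_train:].var(axis=0)
    print(f"{name}: mean withheld-set R^2 = {r2.mean():.3f}")
```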
Social Class and Racial Differences in Children's Perceptions of Television Violence.
ERIC Educational Resources Information Center
Greenberg, Bradley S.; Gordon, Thomas F.
Perceptions of media violence and comparisons of those perceptions for different viewer subgroups were examined in a study of fifth-grade boys' perceptions of selected television scenes which differed in kind and degree of violence. Two parallel videotapes were edited to contain scenes of different kinds of physical violence, a practice scene, and…
Foulsham, Tom; Alan, Rana; Kingstone, Alan
2011-10-01
Previous research has demonstrated that search and memory for items within natural scenes can be disrupted by "scrambling" the images. In the present study, we asked how disrupting the structure of a scene through scrambling might affect the control of eye fixations in either a search task (Experiment 1) or a memory task (Experiment 2). We found that the search decrement in scrambled scenes was associated with poorer guidance of the eyes to the target. Across both tasks, scrambling led to shorter fixations and longer saccades, and more distributed, less selective overt attention, perhaps corresponding to an ambient mode of processing. These results confirm that scene structure has widespread effects on the guidance of eye movements in scenes. Furthermore, the results demonstrate the trade-off between scene structure and visual saliency, with saliency having more of an effect on eye guidance in scrambled scenes.
Effects of chromatic image statistics on illumination induced color differences.
Lucassen, Marcel P; Gevers, Theo; Gijsenij, Arjan; Dekker, Niels
2013-09-01
We measure the color fidelity of visual scenes that are rendered under different (simulated) illuminants and shown on a calibrated LCD display. Observers make triad illuminant comparisons involving the renderings from two chromatic test illuminants and one achromatic reference illuminant shown simultaneously. Four chromatic test illuminants are used: two along the daylight locus (yellow and blue), and two perpendicular to it (red and green). The observers select the rendering having the best color fidelity, thereby indirectly judging which of the two test illuminants induces the smallest color differences compared to the reference. Both multicolor test scenes and natural scenes are studied. The multicolor scenes are synthesized and represent ellipsoidal distributions in CIELAB chromaticity space having the same mean chromaticity but different chromatic orientations. We show that, for those distributions, color fidelity is best when the vector of the illuminant change (pointing from neutral to chromatic) is parallel to the major axis of the scene's chromatic distribution. For our selection of natural scenes, which generally have much broader chromatic distributions, we measure a higher color fidelity for the yellow and blue illuminants than for red and green. Scrambled versions of the natural images are also studied to exclude possible semantic effects. We quantitatively predict the average observer response (i.e., the illuminant probability) with four types of models, differing in the extent to which they incorporate information processing by the visual system. Results show different levels of performance for the models, and different levels for the multicolor scenes and the natural scenes. Overall, models based on the scene averaged color difference have the best performance. We discuss how color constancy algorithms may be improved by exploiting knowledge of the chromatic distribution of the visual scene.
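The reported dependence of color fidelity on the orientation of the scene's chromatic distribution can be illustrated with a simple proxy. The sketch below is our own construction, not the authors' model, and all numbers are invented: it measures an illuminant shift in Mahalanobis units of the scene's (a*, b*) distribution, so that a same-magnitude shift registers as smaller, and thus plausibly less visible, along the major axis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ellipsoidal (a*, b*) chromaticity distribution of a multicolor scene.
ab = rng.multivariate_normal(mean=[10.0, 25.0],
                             cov=[[120.0, 80.0], [80.0, 90.0]], size=500)
centered = ab - ab.mean(axis=0)
cov = centered.T @ centered / len(ab)

# Principal axes of the chromatic distribution.
evals, evecs = np.linalg.eigh(cov)
major, minor = evecs[:, np.argmax(evals)], evecs[:, np.argmin(evals)]

def shift_visibility(shift, cov):
    """Mahalanobis length of a chromatic shift relative to the scene's own
    chromatic spread; larger values stand in for more visible change."""
    return float(np.sqrt(shift @ np.linalg.solve(cov, shift)))

for label, axis in (("along major axis", major), ("along minor axis", minor)):
    print(label, round(shift_visibility(5.0 * axis, cov), 2))
# The same 5-unit shift is smaller in Mahalanobis terms along the major axis,
# in line with better fidelity for illuminant changes parallel to that axis.
```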
ERIC Educational Resources Information Center
Amit, Elinor; Mehoudar, Eyal; Trope, Yaacov; Yovel, Galit
2012-01-01
It is well established that scenes and objects elicit a highly selective response in specific brain regions in the ventral visual cortex. An inherent difference between these categories that has not been explored yet is their perceived distance from the observer (i.e. scenes are distal whereas objects are proximal). The current study aimed to test…
Software for Partly Automated Recognition of Targets
NASA Technical Reports Server (NTRS)
Opitz, David; Blundell, Stuart; Bain, William; Morris, Matthew; Carlson, Ian; Mangrich, Mark; Selinsky, T.
2002-01-01
The Feature Analyst is a computer program for assisted (partially automated) recognition of targets in images. This program was developed to accelerate the processing of high-resolution satellite image data for incorporation into geographic information systems (GIS). This program creates an advanced user interface that embeds proprietary machine-learning algorithms in commercial image-processing and GIS software. A human analyst provides samples of target features from multiple sets of data, then the software develops a data-fusion model that automatically extracts the remaining features from selected sets of data. The program thus leverages the natural ability of humans to recognize objects in complex scenes, without requiring the user to explain the human visual recognition process by means of lengthy software. Two major subprograms are the reactive agent and the thinking agent. The reactive agent strives to quickly learn the user's tendencies while the user is selecting targets and to increase the user's productivity by immediately suggesting the next set of pixels that the user may wish to select. The thinking agent utilizes all available resources, taking as much time as needed, to produce the most accurate autonomous feature-extraction model possible.
Baumann, Oliver; Mattingley, Jason B
2016-02-24
The human parahippocampal cortex has been ascribed central roles in both visuospatial and mnemonic processes. More specifically, evidence suggests that the parahippocampal cortex subserves both the perceptual analysis of scene layouts as well as the retrieval of associative contextual memories. It remains unclear, however, whether these two functional roles can be dissociated within the parahippocampal cortex anatomically. Here, we provide evidence for a dissociation between neural activation patterns associated with visuospatial analysis of scenes and contextual mnemonic processing along the parahippocampal longitudinal axis. We used fMRI to measure parahippocampal responses while participants engaged in a task that required them to judge the contextual relatedness of scene and object pairs, which were presented either as words or pictures. Results from combined factorial and conjunction analyses indicated that the posterior section of parahippocampal cortex is driven predominantly by judgments associated with pictorial scene analysis, whereas its anterior section is more active during contextual judgments regardless of stimulus category (scenes vs objects) or modality (word vs picture). Activation maxima associated with visuospatial and mnemonic processes were spatially segregated, providing support for the existence of functionally distinct subregions along the parahippocampal longitudinal axis and suggesting that, in humans, the parahippocampal cortex serves as a functional interface between perception and memory systems. Copyright © 2016 the authors.
Selective attention during scene perception: evidence from negative priming.
Gordon, Robert D
2006-10-01
In two experiments, we examined the role of semantic scene content in guiding attention during scene viewing. In each experiment, performance on a lexical decision task was measured following the brief presentation of a scene. The lexical decision stimulus named an object that was either present or not present in the scene. The results of Experiment 1 revealed no priming from inconsistent objects (whose identities conflicted with the scene in which they appeared), but negative priming from consistent objects. The results of Experiment 2 indicated that negative priming from consistent objects occurs only when inconsistent objects are present in the scenes. Together, the results suggest that observers are likely to attend to inconsistent objects, and that representations of consistent objects are suppressed in the presence of an inconsistent object. Furthermore, the data suggest that inconsistent objects draw attention because they are relatively difficult to identify in an inappropriate context.
How emotion leads to selective memory: neuroimaging evidence.
Waring, Jill D; Kensinger, Elizabeth A
2011-06-01
Often memory for emotionally arousing items is enhanced relative to neutral items within complex visual scenes, but this enhancement can come at the expense of memory for peripheral background information. This 'trade-off' effect has been elicited by a range of stimulus valence and arousal levels, yet the magnitude of the effect has been shown to vary with these factors. Using fMRI, this study investigated the neural mechanisms underlying this selective memory for emotional scenes. Further, we examined how these processes are affected by stimulus dimensions of arousal and valence. The trade-off effect in memory occurred for low to high arousal positive and negative scenes. There was a core emotional memory network associated with the trade-off among all the emotional scene types, however, there were additional regions that were uniquely associated with the trade-off for each individual scene type. These results suggest that there is a common network of regions associated with the emotional memory trade-off effect, but that valence and arousal also independently affect the neural activity underlying the effect. Copyright © 2011 Elsevier Ltd. All rights reserved.
Global ensemble texture representations are critical to rapid scene perception.
Brady, Timothy F; Shafer-Skelton, Anna; Alvarez, George A
2017-06-01
Traditionally, recognizing the objects within a scene has been treated as a prerequisite to recognizing the scene itself. However, research now suggests that the ability to rapidly recognize visual scenes could be supported by global properties of the scene itself rather than the objects within the scene. Here, we argue for a particular instantiation of this view: that scenes are recognized by treating them as a global texture and processing the pattern of orientations and spatial frequencies across different areas of the scene without recognizing any objects. To test this model, we asked whether there is a link between how proficient individuals are at rapid scene perception and how proficiently they represent simple spatial patterns of orientation information (global ensemble texture). We find a significant and selective correlation between these tasks, suggesting a link between scene perception and spatial ensemble tasks but not nonspatial summary statistics. In a second and third experiment, we additionally show that global ensemble texture information is not only associated with scene recognition, but that preserving only global ensemble texture information from scenes is sufficient to support rapid scene perception; however, preserving the same information is not sufficient for object recognition. Thus, global ensemble texture alone is sufficient to allow activation of scene representations but not object representations. Together, these results provide evidence for a view of scene recognition based on global ensemble texture rather than a view based purely on objects or on nonspatially localized global properties. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
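A descriptor of the kind this account implies can be sketched directly: divide the image into a coarse grid and pool Fourier amplitude into orientation bins per cell, keeping the spatial pattern of orientations and spatial frequencies while discarding object structure. The implementation below is our own illustrative construction, not the authors' stimulus-generation code; the grid and bin counts are arbitrary.

```python
import numpy as np

def ensemble_texture(image, grid=4, n_orient=4):
    """Spatial grid of pooled orientation energy from local Fourier amplitude."""
    h, w = image.shape
    ch, cw = h // grid, w // grid
    descriptor = np.zeros((grid, grid, n_orient))
    for i in range(grid):
        for j in range(grid):
            patch = image[i*ch:(i+1)*ch, j*cw:(j+1)*cw]
            amp = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean())))
            # Orientation of each spatial-frequency component in the patch.
            fy, fx = np.indices(amp.shape) - np.array(amp.shape)[:, None, None] / 2
            theta = np.mod(np.arctan2(fy, fx), np.pi)
            bins = np.minimum((theta / np.pi * n_orient).astype(int), n_orient - 1)
            for b in range(n_orient):
                descriptor[i, j, b] = amp[bins == b].sum()  # pooled orientation energy
    return descriptor

# Hypothetical grayscale "scene" (random placeholder image).
img = np.random.default_rng(2).random((128, 128))
print(ensemble_texture(img).shape)   # (4, 4, 4): spatial grid x orientation bins
```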
A distributed code for color in natural scenes derived from center-surround filtered cone signals
Kellner, Christian J.; Wachtler, Thomas
2013-01-01
In the retina of trichromatic primates, chromatic information is encoded in an opponent fashion and transmitted to the lateral geniculate nucleus (LGN) and visual cortex via parallel pathways. Chromatic selectivities of neurons in the LGN form two separate clusters, corresponding to two classes of cone opponency. In the visual cortex, however, the chromatic selectivities are more distributed, which is in accordance with a population code for color. Previous studies of cone signals in natural scenes typically found opponent codes with chromatic selectivities corresponding to two directions in color space. Here we investigated how the non-linear spatio-chromatic filtering in the retina influences the encoding of color signals. Cone signals were derived from hyper-spectral images of natural scenes and preprocessed by center-surround filtering and rectification, resulting in parallel ON and OFF channels. Independent Component Analysis (ICA) on these signals yielded a highly sparse code with basis functions that showed spatio-chromatic selectivities. In contrast to previous analyses of linear transformations of cone signals, chromatic selectivities were not restricted to two main chromatic axes, but were more continuously distributed in color space, similar to the population code of color in the early visual cortex. Our results indicate that spatio-chromatic processing in the retina leads to a more distributed and more efficient code for natural scenes. PMID:24098289
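The preprocessing pipeline described above (center-surround filtering, rectification into ON/OFF channels, then ICA) can be sketched compactly. The version below substitutes random arrays for hyperspectral-derived cone signals and uses standard library routines; filter scales, patch sizes, and component counts are arbitrary choices, not the study's parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
cone = rng.random((64, 64, 3))                  # stand-in L, M, S cone "images"

# Center-surround filtering: difference of Gaussians applied per cone plane.
cs = gaussian_filter(cone, sigma=(1, 1, 0)) - gaussian_filter(cone, sigma=(3, 3, 0))

# Half-wave rectification into parallel ON and OFF channels (6 channels total).
signals = np.concatenate([np.maximum(cs, 0), np.maximum(-cs, 0)], axis=2)

# Sample small patches and unmix them with ICA to obtain basis functions.
patch, n_patches = 8, 2000
patches = np.empty((n_patches, patch * patch * 6))
for k in range(n_patches):
    y, x = rng.integers(0, 64 - patch, size=2)
    patches[k] = signals[y:y + patch, x:x + patch].ravel()

ica = FastICA(n_components=32, random_state=0, max_iter=500)
ica.fit(patches)
print(ica.mixing_.shape)   # (384, 32): columns are spatio-chromatic basis functions
```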
Dr TIM: Ray-tracer TIM, with additional specialist scientific capabilities
NASA Astrophysics Data System (ADS)
Oxburgh, Stephen; Tyc, Tomáš; Courtial, Johannes
2014-03-01
We describe several extensions to TIM, a raytracing program for ray-optics research. These include relativistic raytracing; simulation of the external appearance of Eaton lenses, Luneburg lenses and generalised focusing gradient-index lens (GGRIN) lenses, which are types of perfect imaging devices; raytracing through interfaces between spaces with different optical metrics; and refraction with generalised confocal lenslet arrays, which are particularly versatile METATOYs. Catalogue identifier: AEKY_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKY_v2_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licencing provisions: GNU General Public License No. of lines in distributed program, including test data, etc.: 106905 No. of bytes in distributed program, including test data, etc.: 6327715 Distribution format: tar.gz Programming language: Java. Computer: Any computer capable of running the Java Virtual Machine (JVM) 1.6. Operating system: Any, developed under Mac OS X Version 10.6 and 10.8.3. RAM: Typically 130 MB (interactive version running under Mac OS X Version 10.8.3) Classification: 14, 18. Catalogue identifier of previous version: AEKY_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183(2012)711 External routines: JAMA [1] (source code included) Does the new version supersede the previous version?: Yes Nature of problem: Visualisation of scenes that include scene objects that create wave-optically forbidden light-ray fields. Solution method: Ray tracing. Reasons for new version: Significant extension of the capabilities (see Summary of revisions), as demanded by our research. Summary of revisions: Added capabilities include the simulation of different types of camera moving at relativistic speeds relative to the scene; visualisation of the external appearance of generalised focusing gradient-index (GGRIN) lenses, including Maxwell fisheye, Eaton and Luneburg lenses; calculation of refraction at the interface between spaces with different optical metrics; and handling of generalised confocal lenslet arrays (gCLAs), a new type of METATOY. Unusual features: Specifically designed to visualise wave-optically forbidden light-ray fields; can visualise ray trajectories and geometric optic transformations; can simulate photos taken with different types of camera moving at relativistic speeds, interfaces between spaces with different optical metrics, the view through METATOYs and generalised focusing gradient-index lenses; can create anaglyphs (for viewing with coloured “3D glasses”), HDMI-1.4a standard 3D images, and random-dot autostereograms of the scene; integrable into web pages. Running time: Problem-dependent; typically seconds for a simple scene. References: [1] JAMA: A Java Matrix Package, http://math.nist.gov/javanumerics/jama/
Santangelo, Valerio; Di Francesco, Simona Arianna; Mastroberardino, Serena; Macaluso, Emiliano
2015-12-01
The brief presentation of a complex scene entails that only a few objects can be selected, processed in depth, and stored in memory. Both low-level sensory salience and high-level context-related factors (e.g., the conceptual match/mismatch between objects and scene context) contribute to this selection process, but how the interplay between these factors affects memory encoding is largely unexplored. Here, during fMRI we presented participants with pictures of everyday scenes. After a short retention interval, participants judged the position of a target object extracted from the initial scene. The target object could be either congruent or incongruent with the context of the scene, and could be located in a region of the image with maximal or minimal salience. Behaviourally, we found a reduced impact of saliency on visuospatial working memory performance when the target was out-of-context. Encoding-related fMRI results showed that context-congruent targets activated dorsoparietal regions, while context-incongruent targets de-activated the ventroparietal cortex. Saliency modulated activity both in dorsal and ventral regions, with larger context-related effects for salient targets. These findings demonstrate the joint contribution of knowledge-based and saliency-driven attention for memory encoding, highlighting a dissociation between dorsal and ventral parietal regions. © 2015 Wiley Periodicals, Inc.
Stainer, Matthew J.; Scott-Brown, Kenneth C.; Tatler, Benjamin W.
2013-01-01
Where people look when viewing a scene has been a much explored avenue of vision research (e.g., see Tatler, 2009). Current understanding of eye guidance suggests that a combination of high and low-level factors influence fixation selection (e.g., Torralba et al., 2006), but that there are also strong biases toward the center of an image (Tatler, 2007). However, situations where we view multiplexed scenes are becoming increasingly common, and it is unclear how visual inspection might be arranged when content lacks normal semantic or spatial structure. Here we use the central bias to examine how gaze behavior is organized in scenes that are presented in their normal format, or disrupted by scrambling the quadrants and separating them by space. In Experiment 1, scrambling scenes had the strongest influence on gaze allocation. Observers were highly biased by the quadrant center, although physical space did not enhance this bias. However, the center of the display still contributed to fixation selection above chance, and was most influential early in scene viewing. When the top left quadrant was held constant across all conditions in Experiment 2, fixation behavior was significantly influenced by the overall arrangement of the display, with fixations being biased toward the quadrant center when the other three quadrants were scrambled (despite the visual information in this quadrant being identical in all conditions). When scenes are scrambled into four quadrants and semantic contiguity is disrupted, observers no longer appear to view the content as a single scene (despite it consisting of the same visual information overall), but rather anchor visual inspection around the four separate “sub-scenes.” Moreover, the frame of reference that observers use when viewing the multiplex seems to change across viewing time: from an early bias toward the display center to a later bias toward quadrant centers. PMID:24069008
DspaceOgre 3D Graphics Visualization Tool
NASA Technical Reports Server (NTRS)
Jain, Abhinandan; Myin, Steven; Pomerantz, Marc I.
2011-01-01
This general-purpose 3D graphics visualization C++ tool is designed for visualization of simulation and analysis data for articulated mechanisms. Examples of such systems are vehicles, robotic arms, biomechanics models, and biomolecular structures. DspaceOgre builds upon the open-source Ogre3D graphics visualization library. It provides additional classes to support the management of complex scenes involving multiple viewpoints and different scene groups, and can be used as a remote graphics server. This software provides improved support for adding programs at the graphics processing unit (GPU) level for improved performance. It also improves upon the messaging interface it exposes for use as a visualization server.
2009-03-01
[Garbled report fragment. The recoverable content describes simulated scenes that varied model locations, time of day, and video size; the scenes contained three-dimensional models of common civilian automobiles, and example perception tasks included identifying automobiles as sedans or station wagons and identifying individual telephone/electric poles in residential neighborhoods.]
Research and applications: Artificial intelligence
NASA Technical Reports Server (NTRS)
Chaitin, L. J.; Duda, R. O.; Johanson, P. A.; Raphael, B.; Rosen, C. A.; Yates, R. A.
1970-01-01
A program for developing techniques in artificial intelligence and applying them to the control of mobile automatons that carry out tasks autonomously is reported. Visual scene analysis, short-term problem solving, and long-term problem solving are discussed, along with the PDP-15 simulator, the LISP-FORTRAN-MACRO interface, resolution strategies, and cost effectiveness.
High-resolution, continuous field-of-view (FOV), non-rotating imaging system
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance L. (Inventor); Stirbl, Robert C. (Inventor); Aghazarian, Hrand (Inventor); Padgett, Curtis W. (Inventor)
2010-01-01
A high resolution CMOS imaging system especially suitable for use in a periscope head. The imaging system includes a sensor head for scene acquisition, and a control apparatus inclusive of distributed processors and software for device-control, data handling, and display. The sensor head encloses a combination of wide field-of-view CMOS imagers and narrow field-of-view CMOS imagers. Each bank of imagers is controlled by a dedicated processing module in order to handle information flow and image analysis of the outputs of the camera system. The imaging system also includes automated or manually controlled display system and software for providing an interactive graphical user interface (GUI) that displays a full 360-degree field of view and allows the user or automated ATR system to select regions for higher resolution inspection.
SEISVIZ3D: Stereoscopic system for the representation of seismic data - Interpretation and Immersion
NASA Astrophysics Data System (ADS)
von Hartmann, Hartwig; Rilling, Stefan; Bogen, Manfred; Thomas, Rüdiger
2015-04-01
The seismic method is a valuable tool for getting 3D-images from the subsurface. Seismic data acquisition today is not only a topic for oil and gas exploration but is also used for geothermal exploration, inspections of nuclear waste sites and for scientific investigations. The system presented in this contribution may also have an impact on the visualization of 3D-data of other geophysical methods. 3D-seismic data can be displayed in different ways to give a spatial impression of the subsurface. These displays combine individual vertical cuts, possibly linked to a cubical portion of the data volume, with the stereoscopic view of the seismic data. By these methods, the spatial perception of the structures, and thus of the processes in the subsurface, should be increased. Stereoscopic techniques are implemented, e.g., in the CAVE and the WALL, both of which require a lot of space and high technical effort. The aim of the interpretation system shown here is stereoscopic visualization of seismic data at the workplace, i.e. at the personal workstation and monitor. The system was developed with the following criteria in mind: • fast rendering of large amounts of data, so that a continuous view of the data when changing the viewing angle and the data section is possible; • defining areas in stereoscopic view to translate the spatial impression directly into an interpretation; • the development of an appropriate user interface, including head-tracking, for handling the increased degrees of freedom; • the possibility of collaboration, i.e. teamwork and idea exchange with the simultaneous viewing of a scene at remote locations. The possibilities offered by the use of a stereoscopic system do not replace a conventional interpretation workflow. Rather, they have to be implemented into it as an additional step. The amplitude distribution of the seismic data is a challenge for the stereoscopic display because the opacity level and the scaling and selection of the data have to fit each other. Also, the data selection may depend on the visualization task. Not only can the amplitude data be used but also different seismic attribute transformations. The development is supplemented by interviews to analyse the efficiency and manageability of the stereoscopic workplace environment. Another point of investigation is the immersion, i.e. the increased concentration on the observed scene when passing through the data, triggered by the stereoscopic viewing. This effect is reinforced by a user interface which is so intuitive and simple that it does not draw attention away from the scene. For seismic interpretation, the stereoscopic view supports the pattern recognition of geological structures and the detection of their spatial heterogeneity. These are topics which are relevant for current geothermal exploration in Germany.
Visual encoding and fixation target selection in free viewing: presaccadic brain potentials
Nikolaev, Andrey R.; Jurica, Peter; Nakatani, Chie; Plomp, Gijs; van Leeuwen, Cees
2013-01-01
In scrutinizing a scene, the eyes alternate between fixations and saccades. During a fixation, two component processes can be distinguished: visual encoding and selection of the next fixation target. We aimed to distinguish the neural correlates of these processes in the electrical brain activity prior to a saccade onset. Participants viewed color photographs of natural scenes, in preparation for a change detection task. Then, for each participant and each scene we computed an image heat map, with temperature representing the duration and density of fixations. The temperature difference between the start and end points of saccades was taken as a measure of the expected task-relevance of the information concentrated in specific regions of a scene. Visual encoding was evaluated according to whether subsequent change was correctly detected. Saccades with larger temperature difference were more likely to be followed by correct detection than ones with smaller temperature differences. The amplitude of presaccadic activity over anterior brain areas was larger for correct detection than for detection failure. This difference was observed for short “scrutinizing” but not for long “explorative” saccades, suggesting that presaccadic activity reflects top-down saccade guidance. Thus, successful encoding requires local scanning of scene regions which are expected to be task-relevant. Next, we evaluated fixation target selection. Saccades “moving up” in temperature were preceded by presaccadic activity of higher amplitude than those “moving down”. This finding suggests that presaccadic activity reflects attention deployed to the following fixation location. Our findings illustrate how presaccadic activity can elucidate concurrent brain processes related to the immediate goal of planning the next saccade and the larger-scale goal of constructing a robust representation of the visual scene. PMID:23818877
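The heat-map construction lends itself to a short sketch: accumulate duration-weighted fixations into an image, smooth it, and score each saccade by the temperature difference between its endpoints. The coordinates, durations, and smoothing width below are hypothetical, and this is an illustration of the analysis idea rather than the study's code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def heat_map(fixations, shape, sigma=20.0):
    """fixations: iterable of (x, y, duration_ms); shape: (height, width)."""
    hm = np.zeros(shape)
    for x, y, dur in fixations:
        hm[int(y), int(x)] += dur           # density weighted by fixation duration
    return gaussian_filter(hm, sigma)       # smooth into a continuous "temperature"

# Hypothetical scanpath on a 600 x 800 scene.
fix = [(100, 80, 250), (420, 300, 400), (430, 310, 600), (700, 500, 180)]
hm = heat_map(fix, (600, 800))

# Temperature difference for each saccade (consecutive fixation pairs).
for (x0, y0, _), (x1, y1, _) in zip(fix, fix[1:]):
    dt = hm[int(y1), int(x1)] - hm[int(y0), int(x0)]
    print(f"saccade ({x0},{y0})->({x1},{y1}): moving "
          f"{'up' if dt > 0 else 'down'} in temperature")
```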
NASA Astrophysics Data System (ADS)
Weinmann, Martin; Jutzi, Boris; Hinz, Stefan; Mallet, Clément
2015-07-01
3D scene analysis in terms of automatically assigning 3D points a respective semantic label has become a topic of great importance in photogrammetry, remote sensing, computer vision and robotics. In this paper, we address the issue of how to increase the distinctiveness of geometric features and select the most relevant ones among these for 3D scene analysis. We present a new, fully automated and versatile framework composed of four components: (i) neighborhood selection, (ii) feature extraction, (iii) feature selection and (iv) classification. For each component, we consider a variety of approaches which allow applicability in terms of simplicity, efficiency and reproducibility, so that end-users can easily apply the different components and do not require expert knowledge in the respective domains. In a detailed evaluation involving 7 neighborhood definitions, 21 geometric features, 7 approaches for feature selection, 10 classifiers and 2 benchmark datasets, we demonstrate that the selection of optimal neighborhoods for individual 3D points significantly improves the results of 3D scene analysis. Additionally, we show that the selection of adequate feature subsets may even further increase the quality of the derived results while significantly reducing both processing time and memory consumption.
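The four-component framework can be illustrated end-to-end on toy data. The sketch below uses one fixed-k neighborhood definition, three classical eigenvalue-based features (linearity, planarity, scattering), univariate feature selection, and a random forest; the actual framework evaluates many variants of each component, so this is a minimal stand-in rather than a reimplementation.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(4)
pts = rng.random((2000, 3))
labels = (pts[:, 2] > 0.5).astype(int)          # hypothetical two-class labels

# (i) Neighborhood selection: k nearest neighbors per point (fixed k here;
# the framework above selects an optimal neighborhood per point).
tree = cKDTree(pts)
_, idx = tree.query(pts, k=16)

# (ii) Feature extraction: eigenvalue-based geometric features from the
# local covariance of each neighborhood.
feats = np.empty((len(pts), 3))
for i, nb in enumerate(idx):
    e = np.sort(np.linalg.eigvalsh(np.cov(pts[nb].T)))[::-1] + 1e-12
    feats[i] = [(e[0] - e[1]) / e[0],           # linearity
                (e[1] - e[2]) / e[0],           # planarity
                e[2] / e[0]]                    # scattering

# (iii) Feature selection and (iv) classification.
sel = SelectKBest(f_classif, k=2).fit(feats, labels)
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(sel.transform(feats), labels)
print("training accuracy:", clf.score(sel.transform(feats), labels))
```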
Investigation of scene identification algorithms for radiation budget measurements
NASA Technical Reports Server (NTRS)
Diekmann, F. J.
1986-01-01
The computation of Earth radiation budget from satellite measurements requires the identification of the scene in order to select spectral factors and bidirectional models. A scene identification procedure is developed for AVHRR SW and LW data by using two radiative transfer models. The AVHRR GAC pixels are then attached to corresponding ERBE pixels and the results are sorted into scene identification probability matrices. These scene intercomparisons show that there generally is a higher tendency for underestimation of cloudiness over ocean at high cloud amounts, e.g., mostly cloudy instead of overcast, partly cloudy instead of mostly cloudy, for the ERBE relative to the AVHRR results. Reasons for this are explained. Preliminary estimates of the errors of exitances due to scene misidentification demonstrate the high dependency on the probability matrices. While the longwave error can generally be neglected, the shortwave deviations have reached maximum values of more than 12% of the respective exitances.
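A scene identification probability matrix of the kind described can be assembled as a row-normalized co-occurrence table over matched pixels. In the sketch below, the class list, pixel counts, and the simulated low bias of one instrument are all invented for illustration.

```python
import numpy as np

classes = ["clear", "partly cloudy", "mostly cloudy", "overcast"]
rng = np.random.default_rng(5)

# Hypothetical per-pixel scene IDs from AVHRR and the matched ERBE pixels;
# the ERBE IDs are given a mild low bias to mimic cloudiness underestimation.
avhrr_id = rng.integers(0, 4, size=5000)
erbe_id = np.clip(avhrr_id - (rng.random(5000) < 0.2), 0, 3)

# Co-occurrence counts, then row normalization: P(ERBE class | AVHRR class).
counts = np.zeros((4, 4))
np.add.at(counts, (avhrr_id, erbe_id.astype(int)), 1)
prob = counts / counts.sum(axis=1, keepdims=True)
for name, row in zip(classes, prob):
    print(f"{name:>14}: " + "  ".join(f"{p:.2f}" for p in row))
```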
The occipital place area represents the local elements of scenes.
Kamps, Frederik S; Julian, Joshua B; Kubilius, Jonas; Kanwisher, Nancy; Dilks, Daniel D
2016-05-15
Neuroimaging studies have identified three scene-selective regions in human cortex: parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA). However, precisely what scene information each region represents is not clear, especially for the least studied, more posterior OPA. Here we hypothesized that OPA represents local elements of scenes within two independent, yet complementary scene descriptors: spatial boundary (i.e., the layout of external surfaces) and scene content (e.g., internal objects). If OPA processes the local elements of spatial boundary information, then it should respond to these local elements (e.g., walls) themselves, regardless of their spatial arrangement. Indeed, we found that OPA, but not PPA or RSC, responded similarly to images of intact rooms and these same rooms in which the surfaces were fractured and rearranged, disrupting the spatial boundary. Next, if OPA represents the local elements of scene content information, then it should respond more when more such local elements (e.g., furniture) are present. Indeed, we found that OPA, but not PPA or RSC, responded more to multiple than single pieces of furniture. Taken together, these findings reveal that OPA analyzes local scene elements - both in spatial boundary and scene content representation - while PPA and RSC represent global scene properties. Copyright © 2016 Elsevier Inc. All rights reserved.
Semantic guidance of eye movements in real-world scenes
Hwang, Alex D.; Wang, Hsueh-Cheng; Pomplun, Marc
2011-01-01
The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying Latent Semantic Analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects’ gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects’ eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. PMID:21426914
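The label-based semantic saliency idea reduces to a few steps: embed object labels with LSA and score each scene object by its similarity to the currently fixated object (or the search target). The sketch below uses a toy label corpus in place of LabelMe annotations and truncated SVD as the LSA step; the vocabulary and dimensionality are illustrative only.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical scenes described by their object labels.
scenes = ["stove pot pan sink", "pot kettle sink counter",
          "car road sign wheel", "road truck wheel sign"]
vectorizer = CountVectorizer().fit(scenes)
term_doc = vectorizer.transform(scenes).T.astype(float)   # terms x documents

# LSA: low-rank embedding of object labels from label/scene co-occurrence.
emb = TruncatedSVD(n_components=2, random_state=0).fit_transform(term_doc)
vec = dict(zip(vectorizer.get_feature_names_out(), emb))

def semantic_saliency(fixated, objects):
    """Cosine similarity of each scene object to the currently fixated object."""
    f = vec[fixated]
    return {o: float(vec[o] @ f /
                     (np.linalg.norm(vec[o]) * np.linalg.norm(f) + 1e-12))
            for o in objects}

print(semantic_saliency("pot", ["sink", "kettle", "wheel"]))
```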
Young, Leanne R; Yu, Weikei; Holloway, Michael; Rodgers, Barry N; Chapman, Sandra B; Krawczyk, Daniel C
2017-09-01
There has been great interest in characterizing the response of the amygdala to emotional faces, especially in the context of social cognition. Although amygdala activation is most often associated with fearful or angry stimuli, there is considerable evidence that the response of the amygdala to neutral faces is both robust and reliable. This characteristic of amygdala function is of particular interest in the context of assessing populations with executive function deficits, such as traumatic brain injuries, which can be evaluated using fMRI attention modulation tasks that evaluate prefrontal control over representations, notably faces. The current study tested the hypothesis that the amygdala may serve as a marker of selective attention to neutral faces. Using fMRI, we gathered data within a chronic traumatic brain injury population. Blood Oxygenation Level Dependent (BOLD) signal change within the left and right amygdalae and fusiform face areas was measured while participants viewed neutral faces and scenes, under conditions requiring participants to (1) categorize pictures of faces and scenes, (2) selectively attend to either faces or scenes, or (3) attend to both faces and scenes. Findings revealed that the amygdala is an effective marker for selective attention to neutral faces and, furthermore, it was more face-specific than the fusiform face area. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hybrid-mode read-in integrated circuit for infrared scene projectors
NASA Astrophysics Data System (ADS)
Cho, Min Ji; Shin, Uisub; Lee, Hee Chul
2017-05-01
The infrared scene projector (IRSP) is a tool for evaluating infrared sensors by producing infrared images. Because sensor testing with IRSPs is safer than field testing, the usefulness of IRSPs is widely recognized at present. The important performance characteristics of IRSPs are the thermal resolution and the thermal dynamic range. However, due to an existing trade-off between these requirements, it is often difficult to find a workable balance between them. The conventional read-in integrated circuit (RIIC) can be classified into two types: voltage-mode and current-mode types. An IR emitter driven by a voltage-mode RIIC offers a fine thermal resolution. On the other hand, an emitter driven by the current-mode RIIC has the advantage of a wide thermal dynamic range. In order to provide various scenes, i.e., from high-resolution scenes to high-temperature scenes, both of the aforementioned advantages are required. In this paper, a hybrid-mode RIIC which is selectively operated in two modes is proposed. The mode-selective characteristic of the proposed RIIC allows users to generate high-fidelity scenes regardless of the scene content. A prototype of the hybrid-mode RIIC was fabricated using a 0.18-μm 1-poly 6-metal CMOS process. The thermal range and the thermal resolution of the IR emitter driven by the proposed circuit were calculated based on measured data. The estimated thermal dynamic range of the current mode was from 261K to 790K, and the estimated thermal resolution of the voltage mode at 300K was 23 mK with a 12-bit gray-scale resolution.
Local Planning Considerations for the Wildland-Structural Intermix in the Year 2000
Robert L. Irwin
1987-01-01
California's foothill counties are the scene of rapid development. All types of construction in former wildlands are creating an intermix of wildland-structures-wildland that differs from the traditional "urban-wildland interface." The fire and structural environment for seven counties is described. Fire statistics are compared with growth patterns...
People, planners and policy: is there an interface?
Susan Kopka
1979-01-01
This research attempts to isolate some of the dimensions of human evaluations/perceptions of the built environment through the use of an Audience Response Machine and a video tape of environmental scenes. The results suggest that there are commonalities in peoples' evaluations/perceptions and that this type of inquiry has prescriptive value for design/planning....
The Allocation of Visual Attention in Multimedia Search Interfaces
ERIC Educational Resources Information Center
Hughes, Edith Allen
2017-01-01
Multimedia analysts are challenged by the massive numbers of unconstrained video clips generated daily. Such clips can include any possible scene and events, and generally have limited quality control. Analysts who must work with such data are overwhelmed by its volume and lack computational tools to probe it effectively. Even with advances…
Neural Codes for One's Own Position and Direction in a Real-World "Vista" Environment.
Sulpizio, Valentina; Boccia, Maddalena; Guariglia, Cecilia; Galati, Gaspare
2018-01-01
Humans, like animals, rely on accurate knowledge of their spatial position and facing direction to stay oriented in the surrounding space. Although previous neuroimaging studies demonstrated that scene-selective regions (the parahippocampal place area or PPA, the occipital place area or OPA and the retrosplenial complex or RSC), and the hippocampus (HC) are implicated in coding position and facing direction within small- (room-sized) and large-scale navigational environments, little is known about how these regions represent these spatial quantities in a large open-field environment. Here, we used functional magnetic resonance imaging (fMRI) in humans to explore the neural codes of this navigationally-relevant information while participants viewed images that varied in position and facing direction within a familiar, real-world circular square. We observed neural adaptation for repeated directions in the HC, even though no navigational task was required. Further, we found that the amount of knowledge of the environment interacts with the PPA selectivity in encoding positions: individuals who needed more time to memorize positions in the square during a preliminary training task showed less neural attenuation in this scene-selective region. We also observed adaptation effects, which reflect the real distances between consecutive positions, in scene-selective regions but not in the HC. When examining the multi-voxel patterns of activity, we observed that scene-responsive regions and the HC encoded both kinds of spatial information, and that the RSC classification accuracy for positions was higher in individuals scoring higher on a self-report questionnaire of spatial abilities. Our findings provide new insight into how the human brain represents a real, large-scale "vista" space, demonstrating the presence of neural codes for position and direction in both scene-selective and hippocampal regions, and revealing the existence, in the former regions, of a map-like spatial representation reflecting real-world distance between consecutive positions.
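The multi-voxel pattern analysis step can be illustrated with a generic decoding sketch: a linear classifier is trained to predict position (or facing direction) from voxel patterns, with accuracy estimated by cross-validation. The data dimensions and signal strength below are invented, and this is not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_trials, n_voxels, n_positions = 120, 200, 4

# Hypothetical single-trial voxel patterns: a weak position-specific signal
# embedded in noise.
positions = np.repeat(np.arange(n_positions), n_trials // n_positions)
prototypes = rng.standard_normal((n_positions, n_voxels))
patterns = prototypes[positions] * 0.4 + rng.standard_normal((n_trials, n_voxels))

# Cross-validated decoding of position from multi-voxel patterns.
acc = cross_val_score(LogisticRegression(max_iter=1000), patterns, positions, cv=5)
print(f"decoding accuracy: {acc.mean():.2f} (chance = {1 / n_positions:.2f})")
```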
Psychophysiological responses and restorative values of wilderness environments
Chun-Yen Chang; Ping-Kun Chen; William E. Hammitt; Lisa Machnik
2007-01-01
Scenes of natural areas were used as stimuli to analyze the psychological and physiological responses of subjects while viewing wildland scenes. Attention Restoration Theory (Kaplan 1995) and theorized components of restorative environments were used as an orientation for selection of the visual stimuli. Conducted in Taiwan, the studies recorded the psychophysiological...
NASA Astrophysics Data System (ADS)
Bernier, Jean D.
1991-09-01
The imaging in real time of infrared background scenes with the Naval Postgraduate School Infrared Search and Target Designation (NPS-IRSTD) System was achieved through extensive software development in protected-mode assembly language on an Intel 80386 33 MHz computer. The new software processes the 512 by 480 pixel images directly in the extended memory area of the computer where the DT-2861 frame grabber memory buffers are mapped. Direct interfacing, through a JDR-PR10 prototype card, between the frame grabber and the host computer AT bus enables each load of the frame grabber memory buffers to be effected under software control. The protected-mode assembly language program can refresh the display of a six-degree pseudo-color sector of the scanner rotation within the two-second period of the scanner. A study of the imaging properties of the NPS-IRSTD is presented with preliminary work on image analysis and contrast enhancement of infrared background scenes.
ROSE: the road simulation environment
NASA Astrophysics Data System (ADS)
Liatsis, Panos; Mitronikas, Panogiotis
1997-05-01
Evaluation of advanced sensing systems for autonomous vehicle navigation (AVN) is currently carried out off-line with prerecorded image sequences taken by physically attaching the sensors to the ego-vehicle. The data collection process is cumbersome and costly, as well as highly restricted to specific road environments and weather conditions. This work proposes the use of scientific animation in modeling and representation of real-world traffic scenes and aims to produce an efficient, reliable and cost-effective concept evaluation suite for AVN sensing algorithms. ROSE is organized in a modular fashion consisting of the route generator, the journey generator, the sequence description generator and the renderer. The application was developed in MATLAB, and POV-Ray was selected as the rendering module. User-friendly graphical user interfaces have been designed to allow easy selection of animation parameters and monitoring of the generation process. The system, in its current form, allows the generation of various traffic scenarios, providing for an adequate number of static/dynamic objects, road types and environmental conditions. Initial tests on the robustness of various image processing algorithms to varying lighting and weather conditions have already been carried out.
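The sequence-description-generator idea, in which scene parameters are turned into renderer input, can be sketched in a few lines. The example below (Python standing in for ROSE's MATLAB modules) writes a minimal POV-Ray scene file for a straight road with parameterized vehicle positions; the geometry, colors, and file name are invented for illustration.

```python
def road_scene(vehicle_xs, out_path="scene.pov"):
    """Write a minimal POV-Ray scene: a flat road and box-shaped vehicles."""
    lines = [
        'camera { location <0, 2, -8> look_at <0, 1, 10> }',
        'light_source { <20, 30, -20> color rgb <1, 1, 1> }',
        '// road: a long flat box',
        'box { <-4, -0.1, -20>, <4, 0, 200> pigment { color rgb <0.25, 0.25, 0.25> } }',
    ]
    for x in vehicle_xs:   # crude vehicles: one box per car at distance x
        lines.append(f'box {{ <-1, 0, {x}>, <1, 1.2, {x + 2.5}> '
                     'pigment { color rgb <0.7, 0.1, 0.1> } }')
    with open(out_path, "w") as f:
        f.write("\n".join(lines) + "\n")

road_scene([15, 40, 90])   # render with: povray scene.pov
```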
Emotion and attention: event-related brain potential studies.
Schupp, Harald T; Flaisch, Tobias; Stockburger, Jessica; Junghöfer, Markus
2006-01-01
Emotional pictures guide selective visual attention. A series of event-related brain potential (ERP) studies is reviewed demonstrating the consistent and robust modulation of specific ERP components by emotional images. Specifically, pictures depicting natural pleasant and unpleasant scenes are associated with an increased early posterior negativity, late positive potential, and sustained positive slow wave compared with neutral contents. These modulations are considered to index different stages of stimulus processing including perceptual encoding, stimulus representation in working memory, and elaborate stimulus evaluation. Furthermore, the review includes a discussion of studies exploring the interaction of motivated attention with passive and active forms of attentional control. Recent research is reviewed exploring the selective processing of emotional cues as a function of stimulus novelty, emotional prime pictures, learned stimulus significance, and in the context of explicit attention tasks. It is concluded that ERP measures are useful to assess the emotion-attention interface at the level of distinct processing stages. Results are discussed within the context of two-stage models of stimulus perception brought out by studies of attention, orienting, and learning.
Firearms in major motion pictures, 1995-2004.
Binswanger, Ingrid A; Cowan, John A
2009-03-01
Firearms are a major cause of injury and death. We sought to determine (1) the prevalence of movie scenes that depicted firearms and verbal firearm safety messages; (2) the context and health outcomes in firearm scenes; and (3) the association between the Motion Picture Association of America ratings and firearm scene characteristics. Ten top revenue-grossing motion pictures were selected for each year from 1995 to 2004 in descending order of gross revenues. Data on firearm scenes were collected by movie coders using dual-monitor computer workstations and real-time collection tools. Seventy of the 100 movies had scenes with firearms and the majority of movies with firearms were rated PG-13. Firearm scenes (N = 624) accounted for 17% of screen time in movies with firearms. Among firearm scenes, crime or illegal activity was involved in 45%, deaths occurred in 19%, and injuries occurred in 12%. A verbal reference to safety was made in 0.8%. Depictions of firearms in top revenue-grossing movies were common, but safety messages were exceedingly rare. Major motion pictures present an under-used opportunity for education about firearm safety.
Cybersickness in the presence of scene rotational movements along different axes.
Lo, W T; So, R H
2001-02-01
Compelling scene movements in a virtual reality (VR) system can cause symptoms of motion sickness (i.e., cybersickness). A within-subject experiment has been conducted to investigate the effects of scene oscillations along different axes on the level of cybersickness. Sixteen male participants were exposed to four 20-min VR simulation sessions. The four sessions used the same virtual environment but with scene oscillations along different axes, i.e., pitch, yaw, roll, or no oscillation (speed: 30 degrees/s, range: +/- 60 degrees). Verbal ratings of the level of nausea were taken at 5-min intervals during the sessions and sickness symptoms were also measured before and after the sessions using the Simulator Sickness Questionnaire (SSQ). In the presence of scene oscillation, both nausea ratings and SSQ scores increased at significantly higher rates than with no oscillation. While individual participants exhibited different susceptibilities to nausea associated with VR simulation containing scene oscillations along different rotational axes, the overall effects of axis among our group of 16 randomly selected participants were not significant. The main effects of, and interactions among, scene oscillation, duration, and participants are discussed in the paper.
Hausfeld, Lars; Riecke, Lars; Formisano, Elia
2018-06-01
Often, in everyday life, we encounter auditory scenes comprising multiple simultaneous sounds and succeed to selectively attend to only one sound, typically the most relevant for ongoing behavior. Studies using basic sounds and two-talker stimuli have shown that auditory selective attention aids this by enhancing the neural representations of the attended sound in auditory cortex. It remains unknown, however, whether and how this selective attention mechanism operates on representations of auditory scenes containing natural sounds of different categories. In this high-field fMRI study we presented participants with simultaneous voices and musical instruments while manipulating their focus of attention. We found an attentional enhancement of neural sound representations in temporal cortex - as defined by spatial activation patterns - at locations that depended on the attended category (i.e., voices or instruments). In contrast, we found that in frontal cortex the site of enhancement was independent of the attended category and the same regions could flexibly represent any attended sound regardless of its category. These results are relevant to elucidate the interacting mechanisms of bottom-up and top-down processing when listening to real-life scenes comprised of multiple sound categories. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
On validating remote sensing simulations using coincident real data
NASA Astrophysics Data System (ADS)
Wang, Mingming; Yao, Wei; Brown, Scott; Goodenough, Adam; van Aardt, Jan
2016-05-01
The remote sensing community often requires data simulation, either via spectral/spatial downsampling or through virtual, physics-based models, to assess systems and algorithms. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is one such first-principles, physics-based model for simulating imagery for a range of modalities. Complex simulation of vegetation environments has become possible as scene rendering technology and software have advanced. This in turn has created questions related to the validity of such complex models, with phenomena such as multiple scattering and bidirectional reflectance distribution function (BRDF) effects that could impact results in the case of complex vegetation scenes. We selected three sites, located in the Pacific Southwest domain (Fresno, CA) of the National Ecological Observatory Network (NEON). These sites represent oak savanna, hardwood forests, and conifer-manzanita-mixed forests. We constructed corresponding virtual scenes, using airborne LiDAR and imaging spectroscopy data from NEON, ground-based LiDAR data, and field-collected spectra to characterize the scenes. Imaging spectroscopy data for these virtual sites were then generated using the DIRSIG simulation environment. This simulated imagery was compared to real AVIRIS imagery (15m spatial resolution; 12 pixels/scene) and NEON Airborne Observation Platform (AOP) data (1m spatial resolution; 180 pixels/scene). These tests were performed using a distribution-comparison approach for select spectral statistics, e.g., statistics establishing the spectra's shape, for each simulated versus real distribution pair. The initial comparison results of the spectral distributions indicated that the shapes of spectra between the virtual and real sites were closely matched.
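A distribution-comparison test of the kind described can be illustrated briefly: compute a per-pixel spectral statistic for the simulated and the real image, then compare the two distributions with a two-sample test. The statistic and band indices below are illustrative assumptions, not DIRSIG's validation metrics.

```python
# Sketch: compare a spectral-shape statistic between simulated and real pixels.
# The band indices and the statistic itself are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def shape_statistic(spectra: np.ndarray, b1: int = 40, b2: int = 120) -> np.ndarray:
    """Per-pixel normalized difference between two bands (a stand-in for
    any statistic that characterizes the spectra's shape)."""
    a, b = spectra[:, b1], spectra[:, b2]
    return (a - b) / (a + b + 1e-9)

rng = np.random.default_rng(0)
simulated = rng.uniform(0.05, 0.6, size=(180, 224))  # pixels x bands
real = rng.uniform(0.05, 0.6, size=(180, 224))

stat, p = ks_2samp(shape_statistic(simulated), shape_statistic(real))
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")  # small statistic: close match
```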
Functional neuroanatomy of intuitive physical inference
Mikhael, John G.; Tenenbaum, Joshua B.; Kanwisher, Nancy
2016-01-01
To engage with the world—to understand the scene in front of us, plan actions, and predict what will happen next—we must have an intuitive grasp of the world’s physical structure and dynamics. How do the objects in front of us rest on and support each other, how much force would be required to move them, and how will they behave when they fall, roll, or collide? Despite the centrality of physical inferences in daily life, little is known about the brain mechanisms recruited to interpret the physical structure of a scene and predict how physical events will unfold. Here, in a series of fMRI experiments, we identified a set of cortical regions that are selectively engaged when people watch and predict the unfolding of physical events—a “physics engine” in the brain. These brain regions are selective to physical inferences relative to nonphysical but otherwise highly similar scenes and tasks. However, these regions are not exclusively engaged in physical inferences per se or, indeed, even in scene understanding; they overlap with the domain-general “multiple demand” system, especially the parts of that system involved in action planning and tool use, pointing to a close relationship between the cognitive and neural mechanisms involved in parsing the physical content of a scene and preparing an appropriate action. PMID:27503892
Iranian Audience Poll on Smoking Scenes in Persian Movies in 2011
Heydari, Gholamreza
2014-01-01
Background: Scenes depicting smoking are among the causes of smoking initiation in youth. The present study was the first in Iran to collect primary information regarding the presence of smoking scenes in movies and the propagation of tobacco use. Methods: This cross-sectional study was conducted by polling audiences about smoking scenes in Persian movies shown in theaters in 2011. Data were collected using a questionnaire. A total of 2000 subjects were selected for questioning. The questioning for each movie was carried out 2 weeks after the movie premiered, at 4 different times: twice during the week and twice at weekends. Results: A total of 39 movies were selected for further assessment. In general, 2,129 viewers participated in the study. Overall, 676 subjects (31.8%) thought that these movies could lead to initiation or continuation of smoking in viewers. Significantly more women held this view regarding smoking initiation (37.4% vs. 29%), and the belief was also significantly stronger among non-smokers (33.7% vs. 26%). Conclusions: Despite the prohibition of cigarette advertisements in the mass media and movies, we still witness scenes depicting smoking by the good or bad characters of the movies, so more oversight in this field is needed. PMID:24627742
Functional neuroanatomy of intuitive physical inference.
Fischer, Jason; Mikhael, John G; Tenenbaum, Joshua B; Kanwisher, Nancy
2016-08-23
To engage with the world-to understand the scene in front of us, plan actions, and predict what will happen next-we must have an intuitive grasp of the world's physical structure and dynamics. How do the objects in front of us rest on and support each other, how much force would be required to move them, and how will they behave when they fall, roll, or collide? Despite the centrality of physical inferences in daily life, little is known about the brain mechanisms recruited to interpret the physical structure of a scene and predict how physical events will unfold. Here, in a series of fMRI experiments, we identified a set of cortical regions that are selectively engaged when people watch and predict the unfolding of physical events-a "physics engine" in the brain. These brain regions are selective to physical inferences relative to nonphysical but otherwise highly similar scenes and tasks. However, these regions are not exclusively engaged in physical inferences per se or, indeed, even in scene understanding; they overlap with the domain-general "multiple demand" system, especially the parts of that system involved in action planning and tool use, pointing to a close relationship between the cognitive and neural mechanisms involved in parsing the physical content of a scene and preparing an appropriate action.
Wildfires and Forest Development in Tropical and Subtropical Asia: Outlook for the Year 2000
Johann G. Goldammer
1987-01-01
California's foothill counties are the scene of rapid development. All types of construction in former wildlands are creating an intermix of wildland-structures-wildland that differs from the traditional "urban-wildland interface." The fire and structural environment for seven counties is described. Fire statistics are compared with growth patterns...
Algodoo: A Tool for Encouraging Creativity in Physics Teaching and Learning
ERIC Educational Resources Information Center
Gregorcic, Bor; Bodin, Madelen
2017-01-01
Algodoo (http://www.algodoo.com) is a digital sandbox for 2D physics simulations. It allows students and teachers to easily create simulated "scenes" and explore physics through a user-friendly and visually attractive interface. In this paper, we present different ways in which students and teachers can use Algodoo to visualize and solve…
STS-35 ASTRO-1 telescopes documented in OV-102's payload bay (PLB)
1990-12-10
STS035-13-008 (2-10 Dec. 1990) --- The various components of the Astro-1 payload are seen backdropped against the blue and white Earth in this 35mm scene photographed through Columbia's aft flight deck windows. Parts of the Hopkins Ultraviolet Telescope (HUT), Ultraviolet Imaging Telescope (UIT) and the Wisconsin Ultraviolet Photo-Polarimeter Experiment (WUPPE) are visible on the Spacelab Pallet in the foreground. The Broad Band X-Ray Telescope (BBXRT) is behind this pallet and is not visible in this scene. The smaller cylinder in the foreground is the "Igloo," which is a pressurized container housing the Command and Data Management System, which interfaces with the in-cabin controllers to control the Instrument Pointing System (IPS) and the telescopes.
Coding of navigational affordances in the human visual system
Epstein, Russell A.
2017-01-01
A central component of spatial navigation is determining where one can and cannot go in the immediate environment. We used fMRI to test the hypothesis that the human visual system solves this problem by automatically identifying the navigational affordances of the local scene. Multivoxel pattern analyses showed that a scene-selective region of dorsal occipitoparietal cortex, known as the occipital place area, represents pathways for movement in scenes in a manner that is tolerant to variability in other visual features. These effects were found in two experiments: One using tightly controlled artificial environments as stimuli, the other using a diverse set of complex, natural scenes. A reconstruction analysis demonstrated that the population codes of the occipital place area could be used to predict the affordances of novel scenes. Taken together, these results reveal a previously unknown mechanism for perceiving the affordance structure of navigable space. PMID:28416669
The Forensic Confirmation Bias: A Comparison Between Experts and Novices.
van den Eeden, Claire A J; de Poot, Christianne J; van Koppen, Peter J
2018-05-17
A large body of research has described the influence of context information on forensic decision-making. In this study, we examined the effect of context information on the search for and selection of traces by students (N = 36) and crime scene investigators (N = 58). Participants investigated an ambiguous mock crime scene and received prior information indicating suicide, a violent death, or no information. Participants described their impression of the scene and wrote down which traces they wanted to secure. Results showed that context information impacted the first impression of the scene and crime scene behavior, namely the number of traces secured. Participants in the murder condition secured the most traces. Furthermore, the students secured more crime-related traces. Students were more confident in their first impression. This study does not indicate that experts outperform novices. We therefore argue for proper training on cognitive processes as an integral part of all forensic education. © 2018 American Academy of Forensic Sciences.
The neural bases of spatial frequency processing during scene perception
Kauffmann, Louise; Ramanoël, Stephen; Peyrin, Carole
2014-01-01
Theories on visual perception agree that scenes are processed in terms of spatial frequencies. Low spatial frequencies (LSF) carry coarse information whereas high spatial frequencies (HSF) carry fine details of the scene. However, how and where spatial frequencies are processed within the brain remain unresolved questions. The present review addresses these issues and aims to identify the cerebral regions differentially involved in low and high spatial frequency processing, and to clarify their attributes during scene perception. Results from a number of behavioral and neuroimaging studies suggest that spatial frequency processing is lateralized in both hemispheres, with the right and left hemispheres predominantly involved in the categorization of LSF and HSF scenes, respectively. There is also evidence that spatial frequency processing is retinotopically mapped in the visual cortex. HSF scenes (as opposed to LSF) activate occipital areas in relation to foveal representations, while categorization of LSF scenes (as opposed to HSF) activates occipital areas in relation to more peripheral representations. Concomitantly, a number of studies have demonstrated that LSF information may reach high-order areas rapidly, allowing an initial coarse parsing of the visual scene, which could then be sent back through feedback into the occipito-temporal cortex to guide finer HSF-based analysis. Finally, the review addresses spatial frequency processing within scene-selective regions of the occipito-temporal cortex. PMID:24847226
How affective information from faces and scenes interacts in the brain
Vandenbulcke, Mathieu; Sinke, Charlotte B. A.; Goebel, Rainer; de Gelder, Beatrice
2014-01-01
Facial expression perception can be influenced by the natural visual context in which the face is perceived. We performed an fMRI experiment presenting participants with fearful or neutral faces against threatening or neutral background scenes. Triangles and scrambled scenes served as control stimuli. The results showed that the valence of the background influences face selective activity in the right anterior parahippocampal place area (PPA) and subgenual anterior cingulate cortex (sgACC) with higher activation for neutral backgrounds compared to threatening backgrounds (controlled for isolated background effects) and that this effect correlated with trait empathy in the sgACC. In addition, the left fusiform gyrus (FG) responds to the affective congruence between face and background scene. The results show that valence of the background modulates face processing and support the hypothesis that empathic processing in sgACC is inhibited when affective information is present in the background. In addition, the findings reveal a pattern of complex scene perception showing a gradient of functional specialization along the posterior–anterior axis: from sensitivity to the affective content of scenes (extrastriate body area: EBA and posterior PPA), over scene emotion–face emotion interaction (left FG) via category–scene interaction (anterior PPA) to scene–category–personality interaction (sgACC). PMID:23956081
ERIC Educational Resources Information Center
Carlin, Michael T.; Soraci, Sal A.; Strawbridge, Christina P.
2005-01-01
Memory for scene changes that were identified immediately (passive encoding) or following systematic and effortful search (generative encoding) was compared across groups differing in age and intelligence. In the context of flicker methodology, generative search for the changing object involved selection and rejection of multiple potential…
Finding the Cause: Verbal Framing Helps Children Extract Causal Evidence Embedded in a Complex Scene
ERIC Educational Resources Information Center
Butler, Lucas P.; Markman, Ellen M.
2012-01-01
In making causal inferences, children must both identify a causal problem and selectively attend to meaningful evidence. Four experiments demonstrate that verbally framing an event ("Which animals make Lion laugh?") helps 4-year-olds extract evidence from a complex scene to make accurate causal inferences. Whereas framing was unnecessary when…
Effect of fixation positions on perception of lightness
NASA Astrophysics Data System (ADS)
Toscani, Matteo; Valsecchi, Matteo; Gegenfurtner, Karl R.
2015-03-01
Visual acuity, luminance sensitivity, contrast sensitivity, and color sensitivity are maximal in the fovea and decrease with retinal eccentricity. Therefore every scene is perceived by integrating the small, high resolution samples collected by moving the eyes around. Moreover, when viewing ambiguous figures the fixated position influences the dominance of the possible percepts. Therefore fixations could serve as a selection mechanism whose function is not confined to finely resolve the selected detail of the scene. Here this hypothesis is tested in the lightness perception domain. In a first series of experiments we demonstrated that when observers matched the color of natural objects they based their lightness judgments on objects' brightest parts. During this task the observers tended to fixate points with above average luminance, suggesting a relationship between perception and fixations that we causally proved using a gaze contingent display in a subsequent experiment. Simulations with rendered physical lighting show that higher values in an object's luminance distribution are particularly informative about reflectance. In a second series of experiments we considered a high level strategy that the visual system uses to segment the visual scene in a layered representation. We demonstrated that eye movement sampling mediates between the layer segregation and its effects on lightness perception. Together these studies show that eye fixations are partially responsible for the selection of information from a scene that allows the visual system to estimate the reflectance of a surface.
Procurement specification color graphic camera system
NASA Technical Reports Server (NTRS)
Prow, G. E.
1980-01-01
The performance and design requirements for a Color Graphic Camera System are presented. The system is a functional part of the Earth Observation Department Laboratory System (EODLS) and will be interfaced with Image Analysis Stations. It will convert the output of a raster-scan computer color terminal into permanent, high resolution photographic prints and transparencies. The images displayed will usually be remotely sensed LANDSAT scenes.
Spotorno, Sara; Malcolm, George L; Tatler, Benjamin W
2015-02-10
Previous research has suggested that correctly placed objects facilitate eye guidance, but also that objects violating spatial associations within scenes may be prioritized for selection and subsequent inspection. We analyzed the respective eye guidance of spatial expectations and target template (precise picture or verbal label) in visual search, while taking into account any impact of object spatial inconsistency on extrafoveal or foveal processing. Moreover, we isolated search disruption due to misleading spatial expectations about the target from the influence of spatial inconsistency within the scene upon search behavior. Reliable spatial expectations and precise target template improved oculomotor efficiency across all search phases. Spatial inconsistency resulted in preferential saccadic selection when guidance by template was insufficient to ensure effective search from the outset and the misplaced object was bigger than the objects consistently placed in the same scene region. This prioritization emerged principally during early inspection of the region, but the inconsistent object also tended to be preferentially fixated overall across region viewing. These results suggest that objects are first selected covertly on the basis of their relative size and that subsequent overt selection is made considering object-context associations processed in extrafoveal vision. Once the object was fixated, inconsistency resulted in longer first fixation duration and longer total dwell time. As a whole, our findings indicate that observed impairment of oculomotor behavior when searching for an implausibly placed target is the combined product of disruption due to unreliable spatial expectations and prioritization of inconsistent objects before and during object fixation. © 2015 ARVO.
Fixations on objects in natural scenes: dissociating importance from salience
't Hart, Bernard M.; Schmidt, Hannah C. E. F.; Roth, Christine; Einhäuser, Wolfgang
2013-01-01
The relation of selective attention to understanding of natural scenes has been subject to intense behavioral research and computational modeling, and gaze is often used as a proxy for such attention. The probability of an image region to be fixated typically correlates with its contrast. However, this relation does not imply a causal role of contrast. Rather, contrast may relate to an object's “importance” for a scene, which in turn drives attention. Here we operationalize importance by the probability that an observer names the object as characteristic for a scene. We modify luminance contrast of either a frequently named (“common”/“important”) or a rarely named (“rare”/“unimportant”) object, track the observers' eye movements during scene viewing and ask them to provide keywords describing the scene immediately after. When no object is modified relative to the background, important objects draw more fixations than unimportant ones. Increases of contrast make an object more likely to be fixated, irrespective of whether it was important for the original scene, while decreases in contrast have little effect on fixations. Any contrast modification makes originally unimportant objects more important for the scene. Finally, important objects are fixated more centrally than unimportant objects, irrespective of contrast. Our data suggest a dissociation between object importance (relevance for the scene) and salience (relevance for attention). If an object obeys natural scene statistics, important objects are also salient. However, when natural scene statistics are violated, importance and salience are differentially affected. Object salience is modulated by the expectation about object properties (e.g., formed by context or gist), and importance by the violation of such expectations. In addition, the dependence of fixated locations within an object on the object's importance suggests an analogy to the effects of word frequency on landing positions in reading. PMID:23882251
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1991-01-01
A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.
Feature Selection for Classification of Polar Regions Using a Fuzzy Expert System
NASA Technical Reports Server (NTRS)
Penaloza, Mauel A.; Welch, Ronald M.
1996-01-01
Labeling, feature selection, and the choice of classifier are critical elements for classification of scenes and for image understanding. This study examines several methods for feature selection in polar regions, including the use of a fuzzy logic-based expert system for further refinement of a set of selected features. Six Advanced Very High Resolution Radiometer (AVHRR) Local Area Coverage (LAC) arctic scenes are classified into nine classes: water, snow/ice, ice cloud, land, thin stratus, stratus over water, cumulus over water, textured snow over water, and snow-covered mountains. Sixty-seven spectral and textural features are computed and analyzed by the feature selection algorithms. The divergence, histogram analysis, and discriminant analysis approaches are intercompared for their effectiveness in feature selection. The fuzzy expert system method is used not only to determine the effectiveness of each approach in classifying polar scenes, but also to further reduce the features to a more optimal set. For each selection method, features are ranked from best to worst, and the best half of the features are selected. Then, rules using these selected features are defined. The results of running the fuzzy expert system with these rules show that the divergence method produces the best set of features: not only does it yield the highest classification accuracy, but it also has the lowest computation requirements. A reduction of the set of features produced by the divergence method using the fuzzy expert system results in an overall classification accuracy of over 95%. However, this increase in accuracy has a high computation cost.
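The selection rule described here (rank features by a divergence measure, keep the best half) can be sketched as follows. This is a minimal illustration assuming a per-feature Gaussian divergence averaged over all class pairs; the study's actual computation over the 67 AVHRR features is not reproduced.

```python
# Sketch: rank features by pairwise class divergence and keep the best half.
# A per-feature Gaussian divergence is assumed for illustration only.
import numpy as np
from itertools import combinations

def gaussian_divergence(x: np.ndarray, y: np.ndarray) -> float:
    """Symmetric divergence between two 1-D Gaussian class distributions."""
    m1, v1 = x.mean(), x.var() + 1e-9
    m2, v2 = y.mean(), y.var() + 1e-9
    return 0.5 * (v1 / v2 + v2 / v1 - 2) + 0.5 * (m1 - m2) ** 2 * (1 / v1 + 1 / v2)

def rank_features(X: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Score each feature by its mean divergence over all class pairs."""
    classes = np.unique(labels)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        pair_scores = [gaussian_divergence(X[labels == a, j], X[labels == b, j])
                       for a, b in combinations(classes, 2)]
        scores[j] = np.mean(pair_scores)
    return np.argsort(scores)[::-1]           # best feature first

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 67))                # 67 features, as in the study
labels = rng.integers(0, 9, size=300)         # 9 scene classes
ranked = rank_features(X, labels)
selected = ranked[: X.shape[1] // 2]          # keep the best half
print(selected[:10])
```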
A Model of Manual Control with Perspective Scene Viewing
NASA Technical Reports Server (NTRS)
Sweet, Barbara Townsend
2013-01-01
A model of manual control during perspective scene viewing is presented, which combines the Crossover Model with a simplified model of perspective-scene viewing and visual-cue selection. The model is developed for a particular example task: an idealized constant-altitude task in which the operator controls longitudinal position in the presence of both longitudinal and pitch disturbances. An experiment is performed to develop and validate the model. The model corresponds closely with the experimental measurements, and identified model parameters are highly consistent with the visual cues available in the perspective scene. The modeling results indicate that operators used one visual cue for position control, and another visual cue for velocity control (lead generation). Additionally, operators responded more quickly to rotation (pitch) than translation (longitudinal).
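For reference, the Crossover Model invoked in this report holds that, near the crossover frequency, the coupled human-operator and controlled-element dynamics approximate a delayed integrator. In standard notation (not drawn from this report):

```latex
% Crossover Model (McRuer): combined operator-plus-plant dynamics near crossover
Y_p(s)\,Y_c(s) \approx \frac{\omega_c \, e^{-\tau_e s}}{s}
```

Here Y_p is the human operator's describing function, Y_c the controlled element, omega_c the crossover frequency, and tau_e the effective time delay; the report's contribution is the added model of which perspective-scene cues feed this loop.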
Locus Coeruleus Activity Strengthens Prioritized Memories Under Arousal.
Clewett, David V; Huang, Ringo; Velasco, Rico; Lee, Tae-Ho; Mather, Mara
2018-02-07
Recent models posit that bursts of locus ceruleus (LC) activity amplify neural gain such that limited attention and encoding resources focus even more on prioritized mental representations under arousal. Here, we tested this hypothesis in human males and females using fMRI, neuromelanin MRI, and pupil dilation, a biomarker of arousal and LC activity. During scanning, participants performed a monetary incentive encoding task in which threat of punishment motivated them to prioritize encoding of scene images over superimposed objects. Threat of punishment elicited arousal and selectively enhanced memory for goal-relevant scenes. Furthermore, trial-level pupil dilations predicted better scene memory under threat, but were not related to object memory outcomes. fMRI analyses revealed that greater threat-evoked pupil dilations were positively associated with greater scene encoding activity in LC and parahippocampal cortex, a region specialized to process scene information. Across participants, this pattern of LC engagement for goal-relevant encoding was correlated with neuromelanin signal intensity, providing the first evidence that LC structure relates to its activation pattern during cognitive processing. Threat also reduced dynamic functional connectivity between high-priority (parahippocampal place area) and lower-priority (lateral occipital cortex) category-selective visual cortex in ways that predicted increased memory selectivity. Together, these findings support the idea that, under arousal, LC activity selectively strengthens prioritized memory representations by modulating local and functional network-level patterns of information processing. SIGNIFICANCE STATEMENT Adaptive behavior relies on the ability to select and store important information amid distraction. Prioritizing encoding of task-relevant inputs is especially critical in threatening or arousing situations, when forming these memories is essential for avoiding danger in the future. However, little is known about the arousal mechanisms that support such memory selectivity. Using fMRI, neuromelanin MRI, and pupil measures, we demonstrate that locus ceruleus (LC) activity amplifies neural gain such that limited encoding resources focus even more on prioritized mental representations under arousal. For the first time, we also show that LC structure relates to its involvement in threat-related encoding processes. These results shed new light on the brain mechanisms by which we process important information when it is most needed. Copyright © 2018 the authors.
Assessing Multiple Object Tracking in Young Children Using a Game
ERIC Educational Resources Information Center
Ryokai, Kimiko; Farzin, Faraz; Kaltman, Eric; Niemeyer, Greg
2013-01-01
Visual tracking of multiple objects in a complex scene is a critical survival skill. When we attempt to safely cross a busy street, follow a ball's position during a sporting event, or monitor children in a busy playground, we rely on our brain's capacity to selectively attend to and track the position of specific objects in a dynamic scene. This…
Long-Term Memories Bias Sensitivity and Target Selection in Complex Scenes
Patai, Eva Zita; Doallo, Sonia; Nobre, Anna Christina
2014-01-01
In everyday situations we often rely on our memories to find what we are looking for in our cluttered environment. Recently, we developed a new experimental paradigm to investigate how long-term memory (LTM) can guide attention, and showed how the pre-exposure to a complex scene in which a target location had been learned facilitated the detection of the transient appearance of the target at the remembered location (Summerfield, Lepsien, Gitelman, Mesulam, & Nobre, 2006; Summerfield, Rao, Garside, & Nobre, 2011). The present study extends these findings by investigating whether and how LTM can enhance perceptual sensitivity to identify targets occurring within their complex scene context. Behavioral measures showed superior perceptual sensitivity (d′) for targets located in remembered spatial contexts. We used the N2pc event-related potential to test whether LTM modulated the process of selecting the target from its scene context. Surprisingly, in contrast to effects of visual spatial cues or implicit contextual cueing, LTM for target locations significantly attenuated the N2pc potential. We propose that the mechanism by which these explicitly available LTMs facilitate perceptual identification of targets may differ from mechanisms triggered by other types of top-down sources of information. PMID:23016670
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.
1995-01-01
This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.
Computational mechanisms underlying cortical responses to the affordance properties of visual scenes
Epstein, Russell A.
2018-01-01
Biologically inspired deep convolutional neural networks (CNNs), trained for computer vision tasks, have been found to predict cortical responses with remarkable accuracy. However, the internal operations of these models remain poorly understood, and the factors that account for their success are unknown. Here we develop a set of techniques for using CNNs to gain insights into the computational mechanisms underlying cortical responses. We focused on responses in the occipital place area (OPA), a scene-selective region of dorsal occipitoparietal cortex. In a previous study, we showed that fMRI activation patterns in the OPA contain information about the navigational affordances of scenes; that is, information about where one can and cannot move within the immediate environment. We hypothesized that this affordance information could be extracted using a set of purely feedforward computations. To test this idea, we examined a deep CNN with a feedforward architecture that had been previously trained for scene classification. We found that responses in the CNN to scene images were highly predictive of fMRI responses in the OPA. Moreover, the CNN accounted for the portion of OPA variance relating to the navigational affordances of scenes. The CNN could thus serve as an image-computable candidate model of affordance-related responses in the OPA. We then ran a series of in silico experiments on this model to gain insights into its internal operations. These analyses showed that the computation of affordance-related features relied heavily on visual information at high-spatial frequencies and cardinal orientations, both of which have previously been identified as low-level stimulus preferences of scene-selective visual cortex. These computations also exhibited a strong preference for information in the lower visual field, which is consistent with known retinotopic biases in the OPA. Visualizations of feature selectivity within the CNN suggested that affordance-based responses encoded features that define the layout of the spatial environment, such as boundary-defining junctions and large extended surfaces. Together, these results map the sensory functions of the OPA onto a fully quantitative model that provides insights into its visual computations. More broadly, they advance integrative techniques for understanding visual cortex across multiple levels of analysis: from the identification of cortical sensory functions to the modeling of their underlying algorithms. PMID:29684011
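The encoding-model logic summarized above (CNN responses predicting fMRI responses in the OPA) is commonly implemented as a regularized linear mapping from unit activations to voxel responses. The sketch below illustrates that generic recipe with synthetic stand-ins; it is not the authors' pipeline, and the feature matrix here is a random placeholder for real CNN activations.

```python
# Sketch: predict voxel responses from CNN activations with ridge regression,
# the generic encoding-model recipe (not the authors' exact pipeline).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_images, n_features, n_voxels = 200, 512, 100

# Placeholder for CNN activations to scene images (e.g., a late conv layer).
cnn_features = rng.normal(size=(n_images, n_features))
# Placeholder for fMRI responses in a region of interest (e.g., the OPA).
voxel_responses = (cnn_features @ rng.normal(size=(n_features, n_voxels)) * 0.1
                   + rng.normal(size=(n_images, n_voxels)))

X_tr, X_te, y_tr, y_te = train_test_split(cnn_features, voxel_responses,
                                          test_size=0.25, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, y_tr)

# Per-voxel prediction accuracy on held-out images.
pred = model.predict(X_te)
r = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"median held-out correlation: {np.median(r):.2f}")
```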
Spectral feature characterization methods for blood stain detection in crime scene backgrounds
NASA Astrophysics Data System (ADS)
Yang, Jie; Mathew, Jobin J.; Dube, Roger R.; Messinger, David W.
2016-05-01
Blood stains are one of the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Blood spectral signatures containing unique reflectance or absorption features are important both for forensic on-site investigation and laboratory testing. They can be used for target detection and identification applied to crime scene hyperspectral imagery, and also be utilized to analyze the spectral variation of blood on various backgrounds. Non-blood stains often mislead the detection and can generate false alarms at a real crime scene, especially for dark and red backgrounds. This paper measured the reflectance of liquid blood and 9 kinds of non-blood samples in the range of 350 nm - 2500 nm in various crime scene backgrounds, such as pure samples contained in petri dishes with various thicknesses, mixed samples with fabrics of different colors and materials, and mixed samples with wood, all of which are examined to provide sub-visual evidence for detecting and recognizing blood from non-blood samples in a realistic crime scene. The spectral differences between blood and non-blood samples are examined and spectral features such as "peaks" and "depths" of reflectance are selected. Two blood stain detection methods are proposed in this paper. The first method uses an index defined as the ratio of "depth" minus "peak" over "depth" plus "peak" within a wavelength range of the reflectance spectrum. The second method uses the relative band depth of selected wavelength ranges of the reflectance spectrum. Results show that the index method is able to discriminate blood from non-blood samples in most tested crime scene backgrounds, but is not able to detect it on black felt, whereas the relative band depth method is able to discriminate blood from non-blood samples on all of the tested background material types and colors.
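The first detection measure is fully specified by the sentence above, so it can be written down directly; the sketch below does so, with the caveat that the wavelength windows and the synthetic spectrum are illustrative placeholders rather than the paper's values.

```python
# Sketch of the two detection measures described above.
# Wavelength windows are illustrative; the paper's exact bands are not given here.
import numpy as np

def band_value(wl, refl, lo, hi, mode):
    """Extremum of reflectance within a wavelength window [lo, hi] (nm)."""
    sel = (wl >= lo) & (wl <= hi)
    return refl[sel].max() if mode == "peak" else refl[sel].min()

def peak_depth_index(wl, refl, peak_win=(600, 700), depth_win=(500, 600)):
    """(depth - peak) / (depth + peak) over assumed windows."""
    peak = band_value(wl, refl, *peak_win, mode="peak")
    depth = band_value(wl, refl, *depth_win, mode="depth")
    return (depth - peak) / (depth + peak + 1e-9)

def relative_band_depth(wl, refl, center=550, shoulders=(500, 600)):
    """Depth of an absorption feature relative to its interpolated shoulders."""
    c = refl[np.argmin(np.abs(wl - center))]
    s = 0.5 * (refl[np.argmin(np.abs(wl - shoulders[0]))]
               + refl[np.argmin(np.abs(wl - shoulders[1]))])
    return 1.0 - c / (s + 1e-9)

wl = np.arange(350, 2500, 10.0)
refl = 0.3 + 0.1 * np.sin(wl / 200.0)      # synthetic spectrum for illustration
print(peak_depth_index(wl, refl), relative_band_depth(wl, refl))
```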
The Southampton-York Natural Scenes (SYNS) dataset: Statistics of surface attitude
Adams, Wendy J.; Elder, James H.; Graf, Erich W.; Leyland, Julian; Lugtigheid, Arthur J.; Muryy, Alexander
2016-01-01
Recovering 3D scenes from 2D images is an under-constrained task; optimal estimation depends upon knowledge of the underlying scene statistics. Here we introduce the Southampton-York Natural Scenes dataset (SYNS: https://syns.soton.ac.uk), which provides comprehensive scene statistics useful for understanding biological vision and for improving machine vision systems. In order to capture the diversity of environments that humans encounter, scenes were surveyed at random locations within 25 indoor and outdoor categories. Each survey includes (i) spherical LiDAR range data (ii) high-dynamic range spherical imagery and (iii) a panorama of stereo image pairs. We envisage many uses for the dataset and present one example: an analysis of surface attitude statistics, conditioned on scene category and viewing elevation. Surface normals were estimated using a novel adaptive scale selection algorithm. Across categories, surface attitude below the horizon is dominated by the ground plane (0° tilt). Near the horizon, probability density is elevated at 90°/270° tilt due to vertical surfaces (trees, walls). Above the horizon, probability density is elevated near 0° slant due to overhead structure such as ceilings and leaf canopies. These structural regularities represent potentially useful prior assumptions for human and machine observers, and may predict human biases in perceived surface attitude. PMID:27782103
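Surface-attitude statistics of this kind start from per-point normal estimation. A minimal sketch follows, using fixed-scale PCA over a local neighborhood in place of the dataset's adaptive scale-selection algorithm, and one common slant/tilt convention (conventions vary; the paper's viewer-centered convention is not reproduced here).

```python
# Sketch: estimate a local surface normal by PCA over a point neighborhood,
# then convert it to slant/tilt. The dataset's adaptive scale selection is
# replaced here by a fixed neighborhood, for illustration only.
import numpy as np

def normal_from_points(pts: np.ndarray) -> np.ndarray:
    """Normal = direction of least variance of the local point cloud."""
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    return n if n[2] >= 0 else -n        # orient upward (+z)

def slant_tilt(n: np.ndarray):
    """Slant: angle of the normal from vertical; tilt: its azimuth."""
    slant = np.degrees(np.arccos(np.clip(n[2], -1, 1)))
    tilt = np.degrees(np.arctan2(n[1], n[0])) % 360
    return slant, tilt

rng = np.random.default_rng(3)
# Synthetic ground-plane patch with noise: slant near 0 degrees expected.
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.01 * rng.normal(size=200)
patch = np.column_stack([xy, z])
print(slant_tilt(normal_from_points(patch)))
```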
A three-layer model of natural image statistics.
Gutmann, Michael U; Hyvärinen, Aapo
2013-11-01
An important property of visual systems is to be simultaneously both selective to specific patterns found in the sensory input and invariant to possible variations. Selectivity and invariance (tolerance) are opposing requirements. It has been suggested that they could be joined by iterating a sequence of elementary selectivity and tolerance computations. It is, however, unknown what should be selected or tolerated at each level of the hierarchy. We approach this issue by learning the computations from natural images. We propose and estimate a probabilistic model of natural images that consists of three processing layers. Two natural image data sets are considered: image patches, and complete visual scenes downsampled to the size of small patches. For both data sets, we find that in the first two layers, simple and complex cell-like computations are performed. In the third layer, we mainly find selectivity to longer contours; for patch data, we further find some selectivity to texture, while for the downsampled complete scenes, some selectivity to curvature is observed. Copyright © 2013 Elsevier Ltd. All rights reserved.
Riecke, Lars; Peters, Judith C; Valente, Giancarlo; Kemper, Valentin G; Formisano, Elia; Sorger, Bettina
2017-05-01
A sound of interest may be tracked amid other salient sounds by focusing attention on its characteristic features including its frequency. Functional magnetic resonance imaging findings have indicated that frequency representations in human primary auditory cortex (AC) contribute to this feat. However, attentional modulations were examined at relatively low spatial and spectral resolutions, and frequency-selective contributions outside the primary AC could not be established. To address these issues, we compared blood oxygenation level-dependent (BOLD) responses in the superior temporal cortex of human listeners while they identified single frequencies versus listened selectively for various frequencies within a multifrequency scene. Using best-frequency mapping, we observed that the detailed spatial layout of attention-induced BOLD response enhancements in primary AC follows the tonotopy of stimulus-driven frequency representations-analogous to the "spotlight" of attention enhancing visuospatial representations in retinotopic visual cortex. Moreover, using an algorithm trained to discriminate stimulus-driven frequency representations, we could successfully decode the focus of frequency-selective attention from listeners' BOLD response patterns in nonprimary AC. Our results indicate that the human brain facilitates selective listening to a frequency of interest in a scene by reinforcing the fine-grained activity pattern throughout the entire superior temporal cortex that would be evoked if that frequency was present alone. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
NASA Technical Reports Server (NTRS)
Walatka, Pamela P.; Clucas, Jean; McCabe, R. Kevin; Plessel, Todd; Potter, R.; Cooper, D. M. (Technical Monitor)
1994-01-01
The Flow Analysis Software Toolkit, FAST, is a software environment for visualizing data. FAST is a collection of separate programs (modules) that run simultaneously and allow the user to examine the results of numerical and experimental simulations. The user can load data files, perform calculations on the data, visualize the results of these calculations, construct scenes of 3D graphical objects, and plot, animate and record the scenes. Computational Fluid Dynamics (CFD) visualization is the primary intended use of FAST, but FAST can also assist in the analysis of other types of data. FAST combines the capabilities of such programs as PLOT3D, RIP, SURF, and GAS into one environment with modules that share data. Sharing data between modules eliminates the drudgery of transferring data between programs. All the modules in the FAST environment have a consistent, highly interactive graphical user interface. Most commands are entered by pointing and clicking. The modular construction of FAST makes it flexible and extensible. The environment can be custom configured and new modules can be developed and added as needed. The following modules have been developed for FAST: VIEWER, FILE IO, CALCULATOR, SURFER, TOPOLOGY, PLOTTER, TITLER, TRACER, ARCGRAPH, GQ, SURFERU, SHOTET, and ISOLEVU. A utility is also included to make the inclusion of user defined modules in the FAST environment easy. The VIEWER module is the central control for the FAST environment. From VIEWER, the user can change object attributes, interactively position objects in three-dimensional space, define and save scenes, create animations, spawn new FAST modules, add additional view windows, and save and execute command scripts. The FAST User Guide uses text and FAST MAPS (graphical representations of the entire user interface) to guide the user through the use of FAST. Chapters include: Maps, Overview, Tips, Getting Started Tutorial, a separate chapter for each module, file formats, and system administration.
Category search speeds up face-selective fMRI responses in a non-hierarchical cortical face network.
Jiang, Fang; Badler, Jeremy B; Righi, Giulia; Rossion, Bruno
2015-05-01
The human brain is extremely efficient at detecting faces in complex visual scenes, but the spatio-temporal dynamics of this remarkable ability, and how it is influenced by category-search, remain largely unknown. In the present study, human subjects were shown gradually-emerging images of faces or cars in visual scenes, while neural activity was recorded using functional magnetic resonance imaging (fMRI). Category search was manipulated by the instruction to indicate the presence of either a face or a car, in different blocks, as soon as an exemplar of the target category was detected in the visual scene. The category selectivity of most face-selective areas was enhanced when participants were instructed to report the presence of faces in gradually decreasing noise stimuli. Conversely, the same regions showed much less selectivity when participants were instructed instead to detect cars. When "face" was the target category, the fusiform face area (FFA) showed consistently earlier differentiation of face versus car stimuli than did the "occipital face area" (OFA). When "car" was the target category, only the FFA showed differentiation of face versus car stimuli. These observations provide further challenges for hierarchical models of cortical face processing and show that during gradual revealing of information, selective category-search may decrease the required amount of information, enhancing and speeding up category-selective responses in the human brain. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Loth, Eva; Gomez, Juan Carlos; Happe, Francesca
2011-01-01
This study combined an event schema approach with top-down processing perspectives to investigate whether high-functioning children and adults with autism spectrum disorder (ASD) spontaneously attend to and remember context-relevant aspects of scenes. Participants read one story of story-pairs (e.g., burglary or tea party). They then inspected a…
Atmospheric correction analysis on LANDSAT data over the Amazon region. [Manaus, Brazil
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Dias, L. A. V.; Dossantos, J. R.; Formaggio, A. R.
1983-01-01
The natural resources of the Amazon region were studied in two ways and the results compared. A LANDSAT scene and its attributes were selected, and a maximum likelihood classification was made. The scene was then atmospherically corrected, taking into account Amazonian peculiarities revealed by ground truth of the same area, and subsequently classified again. Comparison shows that the classification improves with the atmospherically corrected images.
Azzopardi, George; Petkov, Nicolai
2014-01-01
The remarkable abilities of the primate visual system have inspired the construction of computational models of some visual neurons. We propose a trainable hierarchical object recognition model, which we call S-COSFIRE (S stands for Shape and COSFIRE stands for Combination Of Shifted FIlter REsponses) and use it to localize and recognize objects of interest embedded in complex scenes. It is inspired by the visual processing in the ventral stream (V1/V2 → V4 → TEO). Recognition and localization of objects embedded in complex scenes is important for many computer vision applications. Most existing methods require prior segmentation of the objects from the background, which in turn requires recognition. An S-COSFIRE filter is automatically configured to be selective for an arrangement of contour-based features that belong to a prototype shape specified by an example. The configuration comprises selecting relevant vertex detectors and determining certain blur and shift parameters. The response is computed as the weighted geometric mean of the blurred and shifted responses of the selected vertex detectors. S-COSFIRE filters share similar properties with some neurons in inferotemporal cortex, which provided inspiration for this work. We demonstrate the effectiveness of S-COSFIRE filters in two applications: letter and keyword spotting in handwritten manuscripts and object spotting in complex scenes for the computer vision system of a domestic robot. S-COSFIRE filters are effective to recognize and localize (deformable) objects in images of complex scenes without requiring prior segmentation. They are versatile trainable shape detectors, conceptually simple and easy to implement. The presented hierarchical shape representation contributes to a better understanding of the brain and to more robust computer vision algorithms. PMID:25126068
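The response computation named above, a weighted geometric mean of blurred and shifted responses of selected feature detectors, can be sketched compactly. The configuration tuples below are invented for illustration; a real S-COSFIRE filter learns them automatically from a prototype shape.

```python
# Sketch of a COSFIRE-style response: weighted geometric mean of blurred,
# shifted responses of selected feature detectors. Tuples are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def cosfire_response(feature_maps, tuples):
    """feature_maps: dict name -> 2-D response map of a vertex/edge detector.
    tuples: list of (name, dx, dy, sigma, weight) describing the prototype.
    Each contributing response is blurred (positional tolerance) and shifted
    so that all parts of the prototype align at the filter's center."""
    logs, wsum = 0.0, 0.0
    for name, dx, dy, sigma, w in tuples:
        r = gaussian_filter(feature_maps[name], sigma)        # blur
        r = shift(r, (dy, dx), order=1, mode="nearest")       # shift into place
        logs = logs + w * np.log(r + 1e-9)
        wsum += w
    return np.exp(logs / wsum)   # weighted geometric mean

rng = np.random.default_rng(4)
maps = {"corner_90": rng.random((64, 64)), "corner_45": rng.random((64, 64))}
prototype = [("corner_90", 5, 0, 2.0, 1.0), ("corner_45", -5, 0, 2.0, 1.0)]
response = cosfire_response(maps, prototype)
print(response.shape, float(response.max()))
```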
Research on three-dimensional visualization based on virtual reality and Internet
NASA Astrophysics Data System (ADS)
Wang, Zongmin; Yang, Haibo; Zhao, Hongling; Li, Jiren; Zhu, Qiang; Zhang, Xiaohong; Sun, Kai
2007-06-01
To disclose and display water information, a three-dimensional visualization system based on Virtual Reality (VR) and the Internet was researched, both to demonstrate a "digital water conservancy" application and to support routine reservoir management. To explore and mine in-depth information, after completion of a high-resolution DEM of reliable quality, topographical analysis, visibility analysis and reservoir volume computation are studied. In addition, parameters including slope, water level and NDVI are selected to classify landslide-prone zones in the water-level-fluctuating zone of the reservoir area. To establish the virtual reservoir scene, two methods are used for experiencing immersion, interaction and imagination (3I). The first virtual scene contains more detailed textures to increase realism and runs on a graphical workstation with the virtual reality engine Open Scene Graph (OSG). The second virtual scene is for Internet users, with fewer details to assure fluent speed.
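A rule-based classification of the kind described (slope, water level and NDVI selecting landslide-prone cells) might look like the following sketch; all thresholds are hypothetical and would need calibration against the reservoir's actual drawdown band.

```python
# Sketch: flag landslide-prone cells in the water-level-fluctuating zone from
# slope, elevation and NDVI rasters. All thresholds are hypothetical.
import numpy as np

def landslide_prone(slope_deg, elevation_m, ndvi,
                    level_lo=145.0, level_hi=175.0,
                    slope_min=25.0, ndvi_max=0.3):
    """A cell is flagged when it lies in the drawdown band between the low
    and high reservoir levels, is steep, and is sparsely vegetated."""
    in_fluctuating_zone = (elevation_m >= level_lo) & (elevation_m <= level_hi)
    steep = slope_deg >= slope_min
    bare = ndvi <= ndvi_max
    return in_fluctuating_zone & steep & bare

rng = np.random.default_rng(6)
slope = rng.uniform(0, 60, size=(100, 100))
elev = rng.uniform(130, 200, size=(100, 100))
ndvi = rng.uniform(-0.1, 0.9, size=(100, 100))
mask = landslide_prone(slope, elev, ndvi)
print(f"{mask.mean():.1%} of cells flagged")
```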
Willems, Roel M; Clevis, Krien; Hagoort, Peter
2011-09-01
We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of an as such neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants' brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information.
Dietz, Aimee; Weissling, Kristy; Griffith, Julie; McKelvey, Miechelle; Macke, Devan
2014-12-01
The purpose of this collective case study was to describe the communication behaviors of five people with chronic aphasia when they retold personal narratives to an unfamiliar communication partner using four variants of a visual scene display (VSD) interface. The results revealed that spoken language comprised roughly 70% of expressive modality units; variable patterns of use for other modalities emerged. Although inconsistent across participants, several people with aphasia experienced no trouble sources during the retells using VSDs with personally relevant photographs and text boxes. Overall, participants perceived the personally relevant photographs and the text as helpful during the retells. These patterns may serve as a springboard for future experimental investigations regarding how interface design influences the communicative and linguistic performance of people with aphasia.
Processed Thematic Mapper Satellite Imagery for Selected Areas within the U.S.-Mexico Borderlands
Dohrenwend, John C.; Gray, Floyd; Miller, Robert J.
2000-01-01
The study is summarized in the Adobe Acrobat Portable Document Format (PDF) file OF00-309.PDF. This publication also contains satellite full-scene images of selected areas along the U.S.-Mexico border. These images are presented as high-resolution JPEG images (in the folder IMAGES). The folder LOCATIONS contains TIFF images showing exact positions of easily-identified reference locations for each of the Landsat TM scenes located at least partly within the U.S. A reference location table (BDRLOCS.DOC in MS Word format) lists the latitude and longitude of each reference location with a nominal precision of 0.001 minute of arc.
Hemrich, Günter
2005-06-01
This case study reviews the experience of the Somalia Food Security Assessment Unit (FSAU) of operating a food security information system in the context of a complex emergency. In particular, it explores the linkages between selected features of the protracted crisis environment in Somalia and conceptual and operational aspects of food security information work. The paper specifically examines the implications of context characteristics for the establishment and operations of the FSAU field monitoring component and for the interface with information users and their diverse information needs. It also analyses the scope for linking food security and nutrition analysis and looks at the role of conflict and gender analysis in food security assessment work. Background data on the food security situation in Somalia and an overview of some key features of the FSAU set the scene for the case study. The paper is targeted at those involved in designing, operating and funding food security information activities.
1990-12-02
Onboard the Space Shuttle Orbiter Columbia (STS-35), the various components of the Astro-1 payload are seen backdropped against dark space. Parts of the Hopkins Ultraviolet Telescope (HUT), Ultraviolet Imaging Telescope (UIT), and the Wisconsin Ultraviolet Photo-Polarimeter Experiment (WUPPE) are visible on the Spacelab pallet. The Broad-Band X-Ray Telescope (BBXRT) is behind the pallet and is not visible in this scene. The smaller cylinder in the foreground is the igloo. The igloo was a pressurized container housing the Command and Data Management System, which interfaced with the in-cabin controllers to control the Instrument Pointing System (IPS) and the telescopes. The Astro Observatory was designed to explore the universe by observing and measuring the ultraviolet radiation from celestial objects. Astronomical targets of observation selected for Astro missions included planets, stars, star clusters, galaxies, clusters of galaxies, quasars, remnants of exploded stars (supernovae), clouds of gas and dust (nebulae), and the interstellar medium. Managed by the Marshall Space Flight Center, the Astro-1 was launched aboard the Space Shuttle Orbiter Columbia (STS-35) on December 2, 1990.
Active sensing in the categorization of visual patterns
Yang, Scott Cheng-Hsin; Lengyel, Máté; Wolpert, Daniel M
2016-01-01
Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and that the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations. DOI: http://dx.doi.org/10.7554/eLife.12215.001 PMID:26880546
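To make the idea concrete, here is a minimal sketch of a greedy Bayesian active sensor of the kind the abstract describes: it scores each candidate fixation location by the expected reduction in entropy of the category posterior and fixates the best one. The discrete observation model and all variable names are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_information_gain(posterior, likelihoods):
    """Expected entropy reduction from one observation at a location.

    posterior   : (K,) current belief over K pattern categories
    likelihoods : (K, M) p(observation m | category k) at this location
    """
    h_now = entropy(posterior)
    predictive = posterior @ likelihoods          # (M,) p(observation m)
    h_after = 0.0
    for m in range(likelihoods.shape[1]):
        if predictive[m] > 0:
            updated = posterior * likelihoods[:, m] / predictive[m]  # Bayes rule
            h_after += predictive[m] * entropy(updated)
    return h_now - h_after

def select_fixation(posterior, likelihoods_by_location):
    """Greedily pick the most informative candidate fixation."""
    gains = [expected_information_gain(posterior, lik)
             for lik in likelihoods_by_location]
    return int(np.argmax(gains))
```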
A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors
Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner
2014-01-01
The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255
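A rough sketch of such a fusion pipeline follows, using Open3D's FPFH features as a stand-in for the paper's surface feature histograms and scikit-learn's SVM; the variables `training_objects`, `training_labels`, `david_scan`, and `kinect_objects`, along with the radii and ICP threshold, are hypothetical, not values from the paper.

```python
import numpy as np
import open3d as o3d
from sklearn.svm import SVC

def object_descriptor(pcd, radius=0.05):
    """Summarize a point cloud by the mean of its FPFH surface features,
    standing in for the paper's surface feature histograms."""
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=2 * radius, max_nn=100))
    return fpfh.data.mean(axis=1)                 # one (33,) vector per object

# 1) Train an SVM that maps descriptors to object labels (training data hypothetical).
clf = SVC(kernel="rbf")
clf.fit(np.stack([object_descriptor(p) for p in training_objects]), training_labels)

# 2) Assign the high-resolution David scan to its low-resolution Kinect counterpart.
label = clf.predict([object_descriptor(david_scan)])[0]
target = kinect_objects[label]

# 3) Register the pair with point-to-point ICP to build the multi-resolution map.
result = o3d.pipelines.registration.registration_icp(
    david_scan, target, 0.02, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
david_scan.transform(result.transformation)
```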
STS-35 ASTRO-1 telescopes documented in OV-102's payload bay (PLB)
1990-12-10
STS035-604-058 (2-10 Dec 1990) --- The various components of the Astro-1 payload are seen backdropped against the blue and white Earth in this scene photographed through Columbia's aft flight deck windows. Parts of the Hopkins Ultraviolet Telescope (HUT), Ultraviolet Imaging Telescope (UIT) and the Wisconsin Ultraviolet Photopolarimetry Experiment (WUPPE) are visible on the Spacelab pallet in the foreground. The Broad Band X-ray Telescope (BBXRT) is behind this pallet and is not visible in this scene. The smaller cylinder in the foreground is the "Igloo," which is a pressurized container housing the Command and Data Management System, which interfaces with the in-cabin controllers to control the Instrument Pointing System (IPS) and the telescopes. The photograph was made with a handheld Rolleiflex camera aimed through Columbia's aft flight deck windows.
Irma 5.1 multisensor signature prediction model
NASA Astrophysics Data System (ADS)
Savage, James; Coker, Charles; Thai, Bea; Aboutalib, Omar; Yamaoka, Neil; Kim, Charles
2005-05-01
The Irma synthetic signature prediction code is being developed to facilitate the research and development of multisensor systems. Irma was one of the first high-resolution infrared (IR) target and background signature models developed for tactical weapon applications. Originally developed in 1980 by the Munitions Directorate of the Air Force Research Laboratory (AFRL/MN), the Irma model was used exclusively to generate IR scenes. In 1988, a number of significant upgrades to Irma were initiated, including the addition of a laser (or active) channel. This two-channel version was released to the user community in 1990. In 1992, an improved scene generator was incorporated into the Irma model, which supported correlated frame-to-frame imagery. A passive IR/millimeter wave (MMW) code was completed in 1994. This served as the cornerstone for the development of the co-registered active/passive IR/MMW model, Irma 4.0. In 2000, Irma version 5.0 was released, which encompassed several upgrades to both the physical models and the software. Circular polarization was added to the passive channel and Doppler capability was added to the active MMW channel. In 2002, the multibounce technique was added to the Irma passive channel. In the ladar channel, a user-friendly Ladar Sensor Assistant (LSA) was incorporated, which provides capability and flexibility for sensor modeling. Irma 5.0 runs on several platforms including Windows, Linux, Solaris, and SGI Irix. Since 2000, additional capabilities and enhancements have been added to the ladar channel, including polarization and speckle effects. Work is still ongoing to add a time-jittering model to the ladar channel. A new user interface has been introduced to aid users in scene generation and in running the Irma code. The user interface provides a canvas where a user can add and remove objects with mouse clicks to construct a scene. The scene can then be visualized to find the desired sensor position. The synthetic ladar signatures have been validated twice and underwent a third validation test near the end of 04. These capabilities will be integrated into the next release, Irma 5.1, scheduled for completion in the summer of FY05. Irma is currently being used to support a number of civilian and military applications. The Irma user base includes over 130 agencies within the Air Force, Army, Navy, DARPA, NASA, Department of Transportation, academia, and industry. The purpose of this paper is to report the progress of the Irma 5.1 development effort.
Thunderstorm clouds over Western Africa
NASA Technical Reports Server (NTRS)
1989-01-01
The overshooting tops of a series of strong thunderstorms are seen in this late afternoon scene over the African Ivory Coast, exact location unknown. The low angle of the setting sun casts long shadows, accentuating the shapes and heights of the clouds. This seasonal thunderstorm activity is associated with an African intertropical front located along the land/sea breeze interface over the West African coastline and is a normal occurrence at this time of year.
NASA Astrophysics Data System (ADS)
Xi, Lei; Guo, Wei; Che, Yinchao; Zhang, Hao; Wang, Qiang; Ma, Xinming
To improve detection of the origin of agricultural products, this paper presents an embedded data-acquisition terminal that applies middleware concepts and provides a reusable, long-range, two-way data exchange module between business equipment and data acquisition systems. The system is constructed from data collection nodes and data center nodes. A data collection node, built around the embedded data terminal NetBoxII, consists of a data acquisition interface layer, a control information layer, and a data exchange layer; it reads data from different front-end acquisition devices and packs the data into TCP messages to exchange with data center nodes over the available physical link (GPRS/CDMA/Ethernet). A data center node consists of a data exchange layer, a data persistence layer, and a business interface layer, which persist the collected data and provide standardized data to business systems based on the mapping between collected data and business data. Relying on public communication networks, the system establishes a flow of information between the scene of origin certification and the management center, enabling real-time collection, storage, and processing of origin-certification scene data against the databases of the certification organization, and thereby meeting the need for long-range monitoring of agricultural origin.
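As a hedged illustration of the data exchange layer described above, the sketch below packs one reading into a length-prefixed TCP message and forwards it to a center node; the message layout, field names, and endpoint are assumptions for illustration, not the system's actual protocol.

```python
import json
import socket

def pack_reading(terminal_id: str, timestamp: float, payload: dict) -> bytes:
    """Pack one reading as a length-prefixed JSON message (layout hypothetical)."""
    body = json.dumps({"terminal": terminal_id,
                       "ts": timestamp,
                       "data": payload}).encode("utf-8")
    return len(body).to_bytes(4, "big") + body

def send_to_center(host: str, port: int, message: bytes) -> None:
    """Forward one packed reading to a data center node over TCP."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(message)

# Example: a collection node forwarding an RFID scan (endpoint illustrative).
msg = pack_reading("netbox-01", 1700000000.0, {"rfid": "A1B2C3", "temp_c": 4.2})
# send_to_center("datacenter.example.org", 9000, msg)
```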
Attention in the real world: toward understanding its neural basis
Peelen, Marius V.; Kastner, Sabine
2016-01-01
The efficient selection of behaviorally relevant objects from cluttered environments supports our everyday goals. Attentional selection has typically been studied in search tasks involving artificial and simplified displays. Although these studies have revealed important basic principles of attention, they do not explain how the brain efficiently selects familiar objects in complex and meaningful real-world scenes. Findings from recent neuroimaging studies indicate that real-world search is mediated by ‘what’ and ‘where’ attentional templates that are implemented in high-level visual cortex. These templates represent target-diagnostic properties and likely target locations, respectively, and are shaped by object familiarity, scene context, and memory. We propose a framework for real-world search that incorporates these recent findings and specifies directions for future study. PMID:24630872
Separate and Simultaneous Adjustment of Light Qualities in a Real Scene
Pont, Sylvia C.; Heynderickx, Ingrid
2017-01-01
Humans are able to estimate light field properties in a scene, in the sense that they have expectations of how objects should appear inside it. Previously, we probed such expectations in a real scene by asking whether a “probe object” fitted a real scene with regard to its lighting. But how well are observers able to interactively adjust the light properties on a “probe object” to match its surrounding real scene? Image ambiguities can result in perceptual interactions between light properties. Such interactions formed a major problem for the “readability” of the illumination direction and diffuseness on a matte smooth spherical probe. We found that light direction and diffuseness judgments using a rough sphere as probe were slightly more accurate than when using a smooth sphere, due to the three-dimensional (3D) texture. Here we extended the previous work by testing independent and simultaneous (i.e., the light field properties separated one by one or blended together) adjustments of light intensity, direction, and diffuseness using a rough probe. Independently inferred light intensities were close to the veridical values, while the simultaneously inferred light intensity interacted somewhat with the light direction and diffuseness. The independently inferred light directions showed no statistical difference from the simultaneously inferred directions. The light diffuseness inferences correlated with the veridical values but contracted around medium values. In summary, observers were able to adjust the basic light properties through both independent and simultaneous adjustments. The light intensity, direction, and diffuseness are well “readable” from our rough probe. Our method allows “tuning the light” (adjustment of its spatial distribution) in interfaces for lighting design or perception research. PMID:28203350
Algodoo: A Tool for Encouraging Creativity in Physics Teaching and Learning
NASA Astrophysics Data System (ADS)
Gregorcic, Bor; Bodin, Madelen
2017-01-01
Algodoo (http://www.algodoo.com) is a digital sandbox for 2D physics simulations. It allows students and teachers to easily create simulated "scenes" and explore physics through a user-friendly and visually attractive interface. In this paper, we present different ways in which students and teachers can use Algodoo to visualize and solve physics problems, investigate phenomena and processes, and engage in out-of-school activities and projects. Algodoo, with its approachable interface, inhabits a middle ground between computer games and "serious" computer modeling. It is suitable as an entry-level modeling tool for students of all ages and can facilitate discussions about the role of computer modeling in physics.
Space Shuttle Columbia views the world with imaging radar: The SIR-A experiment
NASA Technical Reports Server (NTRS)
Ford, J. P.; Cimino, J. B.; Elachi, C.
1983-01-01
Images acquired by the Shuttle Imaging Radar (SIR-A) in November 1981 demonstrate the capability of this microwave remote sensor system to perceive and map a wide range of different surface features around the Earth. A selection of 60 scenes displays this capability with respect to Earth resources - geology, hydrology, agriculture, forest cover, ocean surface features, and prominent man-made structures. The combined area covered by the scenes presented amounts to about 3% of the total acquired. Most of the SIR-A images are accompanied by a LANDSAT multispectral scanner (MSS) or SEASAT synthetic-aperture radar (SAR) image of the same scene for comparison. Differences between the SIR-A image and its companion LANDSAT or SEASAT image at each scene are related to the characteristics of the respective imaging systems, and to seasonal or other changes that occurred in the time interval between acquisition of the images.
Scene recognition based on integrating active learning with dictionary learning
NASA Astrophysics Data System (ADS)
Wang, Chengxi; Yin, Xueyan; Yang, Lin; Gong, Chengrong; Zheng, Caixia; Yi, Yugen
2018-04-01
Scene recognition is a significant topic in the field of computer vision. Most existing scene recognition models require a large number of labeled training samples to achieve good performance. However, labeling images manually is time-consuming and often unrealistic in practice. To obtain satisfying recognition results when labeled samples are insufficient, this paper proposes a scene recognition algorithm named Integrating Active Learning and Dictionary Learning (IALDL). IALDL adopts projective dictionary pair learning (DPL) as its classifier and introduces an active learning mechanism into DPL to improve its performance. When constructing the sampling criterion for active learning, IALDL considers both uncertainty and representativeness, so as to effectively select useful unlabeled samples from a given sample set for expanding the training dataset. Experimental results on three standard databases demonstrate the feasibility and validity of the proposed IALDL.
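A minimal sketch of an active-learning sampling criterion combining uncertainty and representativeness, in the spirit of the abstract above; the margin-based uncertainty, cosine-similarity representativeness, and the mixing weight `alpha` are common illustrative choices, not necessarily the paper's exact formulation.

```python
import numpy as np

def margin_uncertainty(scores):
    """Uncertainty as the negated margin between the two best class scores."""
    top2 = np.sort(scores, axis=1)[:, -2:]
    return -(top2[:, 1] - top2[:, 0])       # small margin -> high uncertainty

def representativeness(features):
    """Mean cosine similarity of each sample to the rest of the unlabeled pool."""
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T
    return (sim.sum(axis=1) - 1.0) / (len(features) - 1)

def select_queries(scores, features, k=10, alpha=0.5):
    """Rank unlabeled samples by a weighted mix of the two criteria."""
    u, r = margin_uncertainty(scores), representativeness(features)
    u = (u - u.min()) / (np.ptp(u) + 1e-12)   # normalize each criterion to [0, 1]
    r = (r - r.min()) / (np.ptp(r) + 1e-12)
    return np.argsort(-(alpha * u + (1 - alpha) * r))[:k]
```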
Synchronous contextual irregularities affect early scene processing: replication and extension.
Mudrik, Liad; Shalgi, Shani; Lamy, Dominique; Deouell, Leon Y
2014-04-01
Whether contextual regularities facilitate perceptual stages of scene processing is widely debated, and empirical evidence is still inconclusive. Specifically, it was recently suggested that contextual violations affect early processing of a scene only when the incongruent object and the scene are presented asynchronously, creating expectations. We compared event-related potentials (ERPs) evoked by scenes that depicted a person performing an action using either a congruent or an incongruent object (e.g., a man shaving with a razor or with a fork) when scene and object were presented simultaneously. We also explored the role of attention in contextual processing by using a pre-cue to direct subjects' attention towards or away from the congruent/incongruent object. Subjects' task was to determine how many hands the person in the picture used in order to perform the action. We replicated our previous findings of frontocentral negativity for incongruent scenes that started ~210 ms post stimulus presentation, even earlier than previously found. Surprisingly, this incongruency ERP effect was negatively correlated with the reaction time cost on incongruent scenes. The results did not allow us to draw conclusions about the role of attention in detecting the regularity, due to a weak attention manipulation. By replicating the 200-300 ms incongruity effect with a new group of subjects at even earlier latencies than previously reported, the results strengthen the evidence for contextual processing during this time window, even when simultaneous presentation of the scene and object prevents the formation of prior expectations. We discuss possible methodological limitations that may account for previous failures to find this effect, and conclude that contextual information affects object model selection processes prior to full object identification, with semantic knowledge activation stages unfolding only later on. Copyright © 2014 Elsevier Ltd. All rights reserved.
System for critical infrastructure security based on multispectral observation-detection module
NASA Astrophysics Data System (ADS)
Trzaskawka, Piotr; Kastek, Mariusz; Życzkowski, Marek; Dulski, Rafał; Szustakowski, Mieczysław; Ciurapiński, Wiesław; Bareła, Jarosław
2013-10-01
Recent terrorist attacks and the possibility of similar actions in the future have forced the development of security systems for critical infrastructures that embrace both sensor technologies and the technical organization of systems. The perimeter protection of stationary objects used until now, based on the construction of a ring with two-zone fencing and visual cameras with illumination, is being displaced by multisensor systems that consist of: visible technology, with day/night cameras registering the optical contrast of a scene; thermal technology, with inexpensive bolometric cameras recording the thermal contrast of a scene; and active ground radars at microwave and millimetre wavelengths that detect reflected radiation. Merging these three different technologies into one system requires a methodology for selecting the technical conditions of installation and the parameters of the sensors. This procedure enables the construction of a system with correlated range, resolution, field of view, and object identification. An important technical problem connected with the multispectral system is its software, which couples the radar with the cameras. This software can be used for automatic focusing of the cameras, automatic guiding of cameras to an object detected by the radar, tracking of the object, localization of the object on a digital map, target identification, and alerting. Based on a "plug and play" architecture, the system provides unmatched flexibility and simple integration of sensors and devices in TCP/IP networks. Using a graphical user interface it is possible to control sensors, monitor streaming video and other data over the network, visualize the results of the data fusion process, and obtain detailed information about detected intruders on a digital map. The system provides high-level applications and operator workload reduction with features such as sensor-to-sensor cueing from detection devices, automatic e-mail notification, and alarm triggering. The paper presents the structure and some elements of a critical infrastructure protection solution based on a modular multisensor security system. The system description focuses mainly on the methodology for selecting sensor parameters. The results of tests in real conditions are also presented.
Goal-Side Selection in Soccer Penalty Kicking When Viewing Natural Scenes
Weigelt, Matthias; Memmert, Daniel
2012-01-01
The present study investigates the influence of goalkeeper displacement on goal-side selection in soccer penalty kicking. Facing a penalty situation, participants viewed photo-realistic images of a goalkeeper and a soccer goal. In the action selection task, they were asked to kick to the greater goal-side, and in the perception task, they indicated the position of the goalkeeper on the goal line. To this end, the goalkeeper was depicted in a regular goalkeeping posture, standing either in the exact middle of the goal or displaced at different distances to the left or right of the goal’s center. Results showed that the goalkeeper’s position on the goal line systematically affected goal-side selection, even when participants were not aware of the displacement. These findings provide further support for the notion that the implicit processing of the stimulus layout in natural scenes can affect action selection in complex environments, such as soccer penalty shooting. PMID:22973246
Early top-down control of visual processing predicts working memory performance
Rutman, Aaron M.; Clapp, Wesley C.; Chadick, James Z.; Gazzaley, Adam
2009-01-01
Selective attention confers a behavioral benefit for both perceptual and working memory (WM) performance, often attributed to top-down modulation of sensory neural processing. However, the direct relationship between early activity modulation in sensory cortices during selective encoding and subsequent WM performance has not been established. To explore the influence of selective attention on WM recognition, we used electroencephalography (EEG) to study the temporal dynamics of top-down modulation in a selective, delayed-recognition paradigm. Participants were presented with overlapped, “double-exposed” images of faces and natural scenes, and were instructed to either remember the face or the scene while simultaneously ignoring the other stimulus. Here, we present evidence that the degree to which participants modulate the early P100 (97–129 ms) event-related potential (ERP) during selective stimulus encoding significantly correlates with their subsequent WM recognition. These results contribute to our evolving understanding of the mechanistic overlap between attention and memory. PMID:19413473
A Multi-Wavelength Thermal Infrared and Reflectance Scene Simulation Model
NASA Technical Reports Server (NTRS)
Ballard, J. R., Jr.; Smith, J. A.; Smith, David E. (Technical Monitor)
2002-01-01
Several theoretical calculations are presented, and our approach is discussed for simulating overall composite scene thermal infrared exitance and the canopy bidirectional reflectance of a forest canopy. Calculations are performed for selected wavelength bands of the DOE Multispectral Thermal Imager (MTI), and comparisons with atmospherically corrected MTI imagery are underway. NASA EO-1 Hyperion observations also are available, and the favorable comparison of our reflective model results with these data is reported elsewhere.
The 3D widgets for exploratory scientific visualization
NASA Technical Reports Server (NTRS)
Herndon, Kenneth P.; Meyer, Tom
1995-01-01
Computational fluid dynamics (CFD) techniques are used to simulate flows of fluids like air or water around such objects as airplanes and automobiles. These techniques usually generate very large amounts of numerical data which are difficult to understand without using graphical scientific visualization techniques. There are a number of commercial scientific visualization applications available today which allow scientists to control visualization tools via textual and/or 2D user interfaces. However, these user interfaces are often difficult to use. We believe that 3D direct-manipulation techniques for interactively controlling visualization tools will provide opportunities for powerful and useful interfaces with which scientists can more effectively explore their datasets. A few systems have been developed which use these techniques. In this paper, we will present a variety of 3D interaction techniques for manipulating parameters of visualization tools used to explore CFD datasets, and discuss in detail various techniques for positioning tools in a 3D scene.
Deconstructing Visual Scenes in Cortex: Gradients of Object and Spatial Layout Information
Kravitz, Dwight J.; Baker, Chris I.
2013-01-01
Real-world visual scenes are complex, cluttered, and heterogeneous stimuli engaging scene- and object-selective cortical regions, including the parahippocampal place area (PPA), retrosplenial complex (RSC), and lateral occipital complex (LOC). To understand the unique contribution of each region to distributed scene representations, we generated predictions based on a neuroanatomical framework adapted from monkey and tested them using minimal scenes in which we independently manipulated both spatial layout (open, closed, and gradient) and object content (furniture, e.g., bed, dresser). Commensurate with its strong connectivity with posterior parietal cortex, RSC evidenced strong spatial layout information but no object information, and its response was not even modulated by object presence. In contrast, LOC, which lies within the ventral visual pathway, contained strong object information but no background information. Finally, PPA, which is connected with both the dorsal and the ventral visual pathway, showed information about both objects and spatial backgrounds and was sensitive to the presence or absence of either. These results suggest that 1) LOC, PPA, and RSC have distinct representations, emphasizing different aspects of scenes, 2) the specific representations in each region are predictable from their patterns of connectivity, and 3) PPA combines both spatial layout and object information as predicted by connectivity. PMID:22473894
Werner, Annette
2014-11-01
Illumination in natural scenes changes at multiple temporal and spatial scales: slow changes in global illumination occur in the course of a day, and we encounter fast and localised illumination changes when visually exploring the non-uniform light field of three-dimensional scenes; in addition, very long-term chromatic variations may come from the environment, such as seasonal changes. In this context, I consider the temporal and spatial properties of chromatic adaptation and discuss their functional significance for colour constancy in three-dimensional scenes. A process of fast spatial tuning in chromatic adaptation is proposed as a possible sensory mechanism for linking colour constancy to the spatial structure of a scene. The observed middle-wavelength selectivity of this process is particularly suitable for adaptation to the mean chromaticity and the compensation of interreflections in natural scenes. Two types of sensory colour constancy are distinguished, based on the functional differences of their temporal and spatial scales: a slow type, operating at a global scale for the compensation of the ambient illumination; and a fast colour constancy, which is locally restricted and well suited to compensate region-specific variations in the light field of three-dimensional scenes. Copyright © 2014 Elsevier B.V. All rights reserved.
Clevis, Krien; Hagoort, Peter
2011-01-01
We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of an otherwise neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants’ brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information. PMID:20530540
Irdis: A Digital Scene Storage And Processing System For Hardware-In-The-Loop Missile Testing
NASA Astrophysics Data System (ADS)
Sedlar, Michael F.; Griffith, Jerry A.
1988-07-01
This paper describes the implementation of a Seeker Evaluation and Test Simulation (SETS) Facility at Eglin Air Force Base. This facility will be used to evaluate imaging infrared (IIR) guided weapon systems by performing various types of laboratory tests. One such test is termed Hardware-in-the-Loop (HIL) simulation (Figure 1), in which the actual flight of a weapon system is simulated as closely as possible in the laboratory. As shown in the figure, there are four major elements in the HIL test environment: the weapon/sensor combination, an aerodynamic simulator, an imagery controller, and an infrared imagery system. The paper concentrates on the approaches and methodologies used in the imagery controller and infrared imaging system elements for generating scene information. For procurement purposes, these two elements have been combined into an Infrared Digital Injection System (IRDIS), which provides scene storage, processing, and an output interface to drive a radiometric display device or to directly inject digital video into the weapon system (bypassing the sensor). The paper describes in detail how standard and custom image processing functions have been combined with off-the-shelf mass storage and computing devices to produce a system which provides high sample rates (greater than 90 Hz), a large terrain database, high weapon rates of change, and multiple independent targets. A photo-based approach has been used to maximize terrain and target fidelity, thus providing a rich and complex scene for weapon/tracker evaluation.
NASA Astrophysics Data System (ADS)
Yang, L.; Shi, L.; Li, P.; Yang, J.; Zhao, L.; Zhao, B.
2018-04-01
Due to forward scattering and blocking of the radar signal, water, bare soil, and shadow, collectively named low backscattering objects (LBOs), often present low backscattering intensity in polarimetric synthetic aperture radar (PolSAR) images. Because LBOs give rise to similar backscattering intensities and polarimetric responses, spectral-based classifiers such as the Wishart method handle LBO classification poorly. Although some polarimetric features have been exploited to relieve this confusion, the backscattering features remain unstable when the system noise floor varies in the range direction. This paper introduces a simple but effective scene classification method based on a Bag of Words (BoW) model with a Support Vector Machine (SVM) to discriminate the LBOs, without relying on any polarimetric features. In the proposed approach, square windows are first opened around the LBOs adaptively to determine the scene images, and then Scale-Invariant Feature Transform (SIFT) points are detected in the training and test scenes. The detected SIFT features are clustered using K-means to obtain cluster centers serving as the visual word list, and scene images are represented by word frequencies. Finally, the SVM is trained and used to predict new scenes as a given kind of LBO. The proposed method is executed over two AIRSAR data sets at C band and L band, including water, bare soil, and shadow scenes. The experimental results illustrate the effectiveness of the scene method in distinguishing LBOs.
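As a rough illustration of the pipeline just described (SIFT detection, K-means visual words, SVM prediction), here is a minimal sketch using OpenCV and scikit-learn; the variables `train_windows`, `train_labels`, and `test_window`, as well as the vocabulary size of 200, are hypothetical stand-ins, not values from the paper.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

sift = cv2.SIFT_create()

def sift_descriptors(image):
    """Detect SIFT keypoints in a scene window and return their descriptors."""
    _, desc = sift.detectAndCompute(image, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def bow_histogram(image, vocabulary):
    """Represent a scene window by its normalized visual-word frequencies."""
    desc = sift_descriptors(image)
    if len(desc) == 0:
        return np.zeros(vocabulary.n_clusters)
    words = vocabulary.predict(desc)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / hist.sum()

# Cluster all training descriptors into a visual word list.
all_desc = np.vstack([sift_descriptors(w) for w in train_windows])
vocabulary = KMeans(n_clusters=200, n_init=10).fit(all_desc)

# Train the SVM on word-frequency histograms, then label new scene windows.
X_train = np.stack([bow_histogram(w, vocabulary) for w in train_windows])
clf = SVC(kernel="rbf").fit(X_train, train_labels)
prediction = clf.predict([bow_histogram(test_window, vocabulary)])
```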
A Sensitive Measurement for Estimating Impressions of Image-Contents
NASA Astrophysics Data System (ADS)
Sato, Mie; Matouge, Shingo; Mori, Toshifumi; Suzuki, Noboru; Kasuga, Masao
We have investigated Kansei content, content that conveys the maker's intention to the viewer's kansei (sensibility). The semantic differential (SD) method is a very good way to evaluate the subjective impression of image content. However, because the SD method is administered after subjects view the image content, it is difficult to examine impressions of detailed scenes in real time. To measure viewers' impressions of image content in real time, we have developed a Taikan sensor. With the Taikan sensor, we investigate relations among the image content, grip strength, and body temperature. We also explore the interface of the Taikan sensor to make it easy to use. In our experiment, a horror movie was used that strongly affects the emotions of the subjects. Our results show that grip strength may increase when subjects view a tense scene, and that the Taikan sensor is easy to use without the circular base that is originally installed.
Anisotropic scene geometry resampling with occlusion filling for 3DTV applications
NASA Astrophysics Data System (ADS)
Kim, Jangheon; Sikora, Thomas
2006-02-01
Image- and video-based rendering technologies are receiving growing attention due to their photo-realistic rendering capability at free viewpoints. However, two major limitations are ghosting and blurring, which stem from their sampling-based mechanism. Scene geometry, which supports the selection of accurate sampling positions, can be obtained with a global method (i.e., an approximate depth plane) or a local method (i.e., disparity estimation). This paper focuses on the local method, since it can yield more accurate rendering quality without a large number of cameras. Local scene geometry has two difficulties: low geometrical density and uncovered areas containing hidden information. Both are serious drawbacks when reconstructing an arbitrary viewpoint without aliasing artifacts. To solve these problems, we propose an anisotropic diffusive resampling method based on tensor theory. Isotropic low-pass filtering accomplishes anti-aliasing in the scene geometry, while anisotropic diffusion prevents the filtering from blurring visual structures. Apertures in coarse samples are estimated following diffusion on the pre-filtered space, and nonlinear weighting of gradient directions suppresses the amount of diffusion. Aliasing artifacts from low sample density are efficiently removed by isotropic filtering, and edge blurring is solved by the anisotropic method in the same process. Because sampling gaps differ in size, the resampling condition is defined by considering the causality between filter scale and edges. Using a partial differential equation (PDE) employing Gaussian scale-space, we iteratively achieve coarse-to-fine resampling. At a large scale, apertures and uncovered holes can be overcome because only strong, meaningful boundaries are selected at that resolution. The coarse-level resampling at a large scale is then iteratively refined to recover detailed scene structure. Simulation results show marked improvements in rendering quality.
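The diffusion step the abstract builds on is in the spirit of Perona-Malik anisotropic diffusion, where a gradient-dependent conduction term suppresses smoothing across strong boundaries. Below is a minimal sketch of that classic scheme, not the authors' tensor-based formulation; the parameters are illustrative.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, lam=0.2):
    """Perona-Malik style diffusion: smooth within regions while a
    gradient-dependent conduction term suppresses diffusion across edges.
    Uses wrap-around boundaries via np.roll for brevity."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # conduction: ~0 across strong edges
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u       # forward differences to the
        ds = np.roll(u,  1, axis=0) - u       # four nearest neighbors
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```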
The effect of non-visual working memory load on top-down modulation of visual processing
Rissman, Jesse; Gazzaley, Adam; D'Esposito, Mark
2009-01-01
While a core function of the working memory (WM) system is the active maintenance of behaviorally relevant sensory representations, it is also critical that distracting stimuli are appropriately ignored. We used functional magnetic resonance imaging to examine the role of domain-general WM resources in the top-down attentional modulation of task-relevant and irrelevant visual representations. In our dual-task paradigm, each trial began with the auditory presentation of six random (high load) or sequentially-ordered (low load) digits. Next, two relevant visual stimuli (e.g., faces), presented amongst two temporally interspersed visual distractors (e.g., scenes), were to be encoded and maintained across a 7-sec delay interval, after which memory for the relevant images and digits was probed. When taxed by high load digit maintenance, participants exhibited impaired performance on the visual WM task and a selective failure to attenuate the neural processing of task-irrelevant scene stimuli. The over-processing of distractor scenes under high load was indexed by elevated encoding activity in a scene-selective region-of-interest relative to low load and passive viewing control conditions, as well as by improved long-term recognition memory for these items. In contrast, the load manipulation did not affect participants' ability to upregulate activity in this region when scenes were task-relevant. These results highlight the critical role of domain-general WM resources in the goal-directed regulation of distractor processing. Moreover, the consequences of increased WM load in young adults closely resemble the effects of cognitive aging on distractor filtering [Gazzaley et al., (2005) Nature Neuroscience 8, 1298-1300], suggesting the possibility of a common underlying mechanism. PMID:19397858
Grudzen, Corita R; Elliott, Marc N; Kerndt, Peter R; Schuster, Mark A; Brook, Robert H; Gelberg, Lillian
2009-04-01
We compared the prevalence of condom use during a variety of sexual acts portrayed in adult films produced for heterosexual and homosexual audiences to assess compliance with state Occupational Health and Safety Administration regulations. We analyzed 50 heterosexual and 50 male homosexual films released between August 1, 2005, and July 31, 2006, randomly selected from the distributor of 85% of the heterosexual adult films released each year in the United States. Penile-vaginal intercourse was protected with condoms in 3% of heterosexual scenes. Penile-anal intercourse, common in both heterosexual (42%) and homosexual (80%) scenes, was much less likely to be protected with condoms in heterosexual than in homosexual scenes (10% vs 78%; P < .001). No penile-oral acts were protected with condoms in any of the selected films. Heterosexual films were much less likely than were homosexual films to portray condom use, raising concerns about transmission of HIV and other sexually transmitted diseases, especially among performers in heterosexual adult films. In addition, the adult film industry, especially the heterosexual industry, is not adhering to state occupational safety regulations.
Programmable hyperspectral image mapper with on-array processing
NASA Technical Reports Server (NTRS)
Cutts, James A. (Inventor)
1995-01-01
A hyperspectral imager includes a focal plane having an array of spaced image recording pixels receiving light from a scene moving relative to the focal plane in a longitudinal direction, the recording pixels being transportable at a controllable rate in the focal plane in the longitudinal direction, an electronic shutter for adjusting an exposure time of the focal plane, whereby recording pixels in an active area of the focal plane are removed therefrom and stored upon expiration of the exposure time, an electronic spectral filter for selecting a spectral band of light received by the focal plane from the scene during each exposure time and an electronic controller connected to the focal plane, to the electronic shutter and to the electronic spectral filter for controlling (1) the controllable rate at which the recording pixels are transported in the longitudinal direction, (2) the exposure time, and (3) the spectral band so as to record a selected portion of the scene through M spectral bands with a respective exposure time t_q for each respective spectral band q.
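A speculative sketch of the acquisition loop the claim describes, stepping the electronic spectral filter through M bands with a per-band exposure time t_q; the `controller` object and all of its methods are entirely hypothetical stand-ins for the claimed hardware.

```python
# All names on `controller` are hypothetical, introduced only for illustration.
def acquire_scene(controller, bands, exposures, transport_rate):
    """Record one scene portion through M spectral bands, band q exposed for t_q."""
    controller.set_transport_rate(transport_rate)    # match pixel transport to scene motion
    frames = []
    for band, t_q in zip(bands, exposures):
        controller.select_band(band)                 # electronic spectral filter
        controller.set_exposure(t_q)                 # electronic shutter: exposure t_q
        frames.append(controller.read_active_area()) # remove and store active pixels
    return frames
```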
The I4 Online Query Tool for Earth Observations Data
NASA Technical Reports Server (NTRS)
Stefanov, William L.; Vanderbloemen, Lisa A.; Lawrence, Samuel J.
2015-01-01
The NASA Earth Observing System Data and Information System (EOSDIS) delivers an average of 22 terabytes per day of data collected by orbital and airborne sensor systems to end users through an integrated online search environment (the Reverb/ECHO system). Earth observations data collected by sensors on the International Space Station (ISS) are not currently included in the EOSDIS system, and are only accessible through various individual online locations. This increases the effort required by end users to query multiple datasets, and limits the opportunity for data discovery and innovations in analysis. The Earth Science and Remote Sensing Unit of the Exploration Integration and Science Directorate at NASA Johnson Space Center has collaborated with the School of Earth and Space Exploration at Arizona State University (ASU) to develop the ISS Instrument Integration Implementation (I4) data query tool to provide end users a clean, simple online interface for querying both current and historical ISS Earth observations data. The I4 interface is based on the Lunaserv and Lunaserv Global Explorer (LGE) open-source software packages developed at ASU for querying lunar datasets. In order to avoid mirroring existing databases - and the need to continually sync/update those mirrors - our design philosophy is for the I4 tool to be a pure query engine only. Once an end user identifies a specific scene or scenes of interest, I4 transparently takes the user to the appropriate online location to download the data. The tool consists of two public-facing web interfaces. The Map Tool provides a graphic geobrowser environment where the end user can navigate to an area of interest and select single or multiple datasets to query. The Map Tool displays active image footprints for the selected datasets (Figure 1). Selecting a footprint will open a pop-up window that includes a browse image and a link to available image metadata, along with a link to the online location to order or download the actual data. Search results are either delivered in the form of browse images linked to the appropriate online database, similar to the Map Tool, or they may be transferred within the I4 environment for display as footprints in the Map Tool. Datasets searchable through I4 (http://eol.jsc.nasa.gov/I4_tool) currently include: Crew Earth Observations (CEO) cataloged and uncataloged handheld astronaut photography; Sally Ride EarthKAM; Hyperspectral Imager for the Coastal Ocean (HICO); and the ISS SERVIR Environmental Research and Visualization System (ISERV). The ISS is a unique platform in that it will have multiple users over its lifetime, and that no single remote sensing system has a permanent internal or external berth. The open-source I4 tool is designed to enable straightforward addition of new datasets as they become available, such as ISS-RapidScat, the Cloud Aerosol Transport System (CATS), and the High Definition Earth Viewing (HDEV) system. Data from other sensor systems, such as those operated by the ISS International Partners or under the auspices of the US National Laboratory program, can also be added to I4 provided sufficient access to enable searching of data or metadata is available. Commercial providers of remotely sensed data from the ISS may be particularly interested in I4 as an additional means of directing potential customers and clients to their products.
Development of a portable multispectral thermal infrared camera
NASA Technical Reports Server (NTRS)
Osterwisch, Frederick G.
1991-01-01
The purpose of this research and development effort was to design and build a prototype instrument designated the 'Thermal Infrared Multispectral Camera' (TIRC). The Phase 2 effort was a continuation of the Phase 1 feasibility study and preliminary design for such an instrument. The completed instrument, designated AA465, has application in the field of geologic remote sensing and exploration. The AA465 Thermal Infrared Camera (TIRC) System is a field-portable multispectral thermal infrared camera operating over the 8.0-13.0 micron wavelength range. Its primary function is to acquire two-dimensional thermal infrared images of user-selected scenes. Thermal infrared energy emitted by the scene is collected, dispersed into ten 0.5-micron-wide channels, and then measured and recorded by the AA465 System. This multispectral information is presented in real time on a color display and is used by the operator to identify spectral and spatial variations in the scene's emissivity and/or irradiance. This fundamental instrument capability has a wide variety of commercial and research applications. While ideally suited for two-man operation in the field, the AA465 System can be transported and operated effectively by a single user. Functionally, the instrument operates as if it were a single-exposure camera. System measurement sensitivity requirements dictate relatively long (several minutes) instrument exposure times. As such, the instrument is not suited for recording time-variant information. The AA465 was fabricated, assembled, tested, and documented during this Phase 2 work period. The detailed design and fabrication of the instrument was performed during the period of June 1989 to July 1990. The software development effort and instrument integration/test extended from July 1990 to February 1991. Software development included an operator interface/menu structure, instrument internal control functions, DSP image processing code, and a display algorithm coding program. The instrument was delivered to NASA in March 1991. The potential commercial and research uses for this instrument center on its primary application as a field geologist's exploration tool. Other applications have been suggested but not investigated in depth. These include process control measurements in commercial materials processing and quality control functions that require information on surface heterogeneity.
DiVita, Joseph; Obermayer, Richard; Nugent, William; Linville, James M
2004-01-01
Change blindness occurs when humans are unable to detect significant changes in objects and scenes after their attention is momentarily diverted. Because change blindness is relevant in many applied settings, the current study investigated the phenomenon in the context of tasks performed by naval command and control system personnel. Operators of such systems are often heavily loaded with concurrent visual search, situation assessment, voice communications, and control-display manipulation tasks at large, physically dispersed tactical situation displays. As the operators' attention shifts from one display to another, it creates an opportunity for changes to occur on unattended screens with potentially negative consequences. Our results show that on a display containing 8 objects of interest, considerable change blindness was demonstrated in that participants required 2 or more selections to correctly identify a changed object on nearly 1/3 of the test trials. Further, operator performance on 15% of the trials was equivalent to randomly guessing with replacement after making 3 incorrect selections. This research underscores the need for developing effective countermeasures to the change blindness phenomenon. Actual or potential uses of this research include interface design of computer workstations for military, nuclear power industry, air traffic control, crisis response center, and hospital emergency room applications.
The tongue of the ocean as a remote sensing ocean color calibration range
NASA Technical Reports Server (NTRS)
Strees, L. V.
1972-01-01
In general, terrestrial scenes remain stable in content from both temporal and spatial considerations. Ocean scenes, on the other hand, are constantly changing in content and position. The solar energy that enters ocean waters undergoes a process of scattering and selective spectral absorption. Ocean scenes are thus characterized by low-level radiance with the major portion of the energy in the blue region of the spectrum. Terrestrial scenes are typically of high-level radiance with their spectral energies concentrated in the green-red regions of the visible spectrum. It appears that for the evaluation and calibration of ocean color remote sensing instrumentation, an ocean area whose optical ocean and atmospheric properties are known and remain seasonally stable over extended time periods is needed. The Tongue of the Ocean, a major submarine channel in the Bahama Banks, is one ocean area for which a large data base of oceanographic information and a limited amount of ocean optical data are available.
Landsat-7 long-term acquisition plan radiometry - evolution over time
Markham, Brian L; Goward, Samuel; Arvidson, Terry; Barsi, Julia A.; Scaramuzza, Pat
2006-01-01
The Landsat-7 Enhanced Thematic Mapper Plus instrument has two selectable gains for each spectral band. In the acquisition plan, the gains were initially set to maximize the entropy in each scene. One unintended consequence of this strategy was that, at times, dense vegetation saturated band 4 and deserts saturated all bands. A revised strategy, based on a land-cover classification and sun angle thresholds, reduced saturation, but resulted in gain changes occurring within the same scene on multiple overpasses. As the gain changes cause some loss of data and difficulties for some ground processing systems, a procedure was devised to shift the gain changes to the nearest predicted cloudy scenes. The results are still not totally satisfactory as gain changes still impact some scenes and saturation still occurs, particularly in ephemerally snow-covered regions. A primary conclusion of our experience with variable gain on Landsat-7 is that such an approach should not be employed on future global monitoring missions.
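To make the revised strategy concrete, here is a hedged sketch of a land-cover and sun-angle gain rule of the kind described above; the cover classes and threshold angles are illustrative inventions, not the actual Landsat-7 long-term acquisition plan parameters.

```python
# Illustrative only: classes and thresholds are invented, not actual LTAP values.
SATURATION_PRONE = {"desert", "snow_ice"}

def select_gain(land_cover: str, sun_elevation_deg: float) -> str:
    if land_cover in SATURATION_PRONE and sun_elevation_deg > 30.0:
        return "low"    # bright targets would saturate all bands at high gain
    if land_cover == "dense_vegetation" and sun_elevation_deg > 45.0:
        return "low"    # dense vegetation can saturate band 4
    return "high"       # elsewhere, favor radiometric resolution

print(select_gain("desert", 55.0))   # -> "low"
```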
Resnikoff, Tatiana; Ribaux, Olivier; Baylon, Amélie; Jendly, Manon; Rossy, Quentin
2015-12-01
A growing body of scientific literature recurrently indicates that crime and forensic intelligence influence how crime scene investigators make decisions in their practices. This study further scrutinises this intelligence-led view of crime scene examination. It analyses results obtained from two questionnaires. Data were collected from nine chiefs of Intelligence Units (IUs) and 73 Crime Scene Examiners (CSEs) working in forensic science units (FSUs) in the French-speaking part of Switzerland (six cantonal police agencies). Four salient elements emerged: (1) the actual existence of communication channels between IUs and FSUs across the police agencies under consideration; (2) most CSEs take into account the crime intelligence disseminated; (3) a differentiated, but significant, use by CSEs of this kind of intelligence in their daily practice; (4) a probable deep influence of this kind of intelligence on the most concerned CSEs, especially in the selection of the type of material/trace to detect, collect, analyse and exploit. These results contribute to deciphering the subtle dialectic articulating crime intelligence and crime scene investigation, and to expressing further the polymorphic role of CSEs, beyond their most recognised input to the justice system. Indeed, they appear to be central, but implicit, stakeholders in an intelligence-led style of policing. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Feature-based attentional modulations in the absence of direct visual stimulation.
Serences, John T; Boynton, Geoffrey M
2007-07-19
When faced with a crowded visual scene, observers must selectively attend to behaviorally relevant objects to avoid sensory overload. Often this selection process is guided by prior knowledge of a target-defining feature (e.g., the color red when looking for an apple), which enhances the firing rate of visual neurons that are selective for the attended feature. Here, we used functional magnetic resonance imaging and a pattern classification algorithm to predict the attentional state of human observers as they monitored a visual feature (one of two directions of motion). We find that feature-specific attention effects spread across the visual field, even to regions of the scene that do not contain a stimulus. This spread of feature-based attention to empty regions of space may facilitate the perception of behaviorally relevant stimuli by increasing sensitivity to attended features at all locations in the visual field.
Everyone knows what is interesting: Salient locations which should be fixated
Masciocchi, Christopher Michael; Mihalas, Stefan; Parkhurst, Derrick; Niebur, Ernst
2010-01-01
Most natural scenes are too complex to be perceived instantaneously in their entirety. Observers therefore have to select parts of them and process these parts sequentially. We study how this selection and prioritization process is performed by humans at two different levels. One is the overt attention mechanism of saccadic eye movements in a free-viewing paradigm. The second is a conscious decision process in which we asked observers which points in a scene they considered the most interesting. We find in a very large participant population (more than one thousand) that observers largely agree on which points they consider interesting. Their selections are also correlated with the eye movement pattern of different subjects. Both are correlated with predictions of a purely bottom–up saliency map model. Thus, bottom–up saliency influences cognitive processes as far removed from the sensory periphery as in the conscious choice of what an observer considers interesting. PMID:20053088
Active visual search in non-stationary scenes: coping with temporal variability and uncertainty
NASA Astrophysics Data System (ADS)
Ušćumlić, Marija; Blankertz, Benjamin
2016-02-01
Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study embraces two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamic of visual content can increase temporal uncertainty of the cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim to keep the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings gave a promising performance. Significance. Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamic of ocular behavior (i.e., dwell time and fixation duration) in an active search task. In addition, our method to improve single-trial detection performance in this adverse scenario is an important step in making brain-computer interfacing technology available for human-computer interaction applications.
Vegetation in transition: the Southwest's dynamic past century
Raymond M. Turner
2005-01-01
Monitoring that follows long-term vegetation changes often requires selection of a temporal baseline. Any such starting point is to some degree artificial, but in some instances there are aids that can be used as guides to baseline selection. Matched photographs duplicating scenes first recorded on film a century or more ago reveal changes that help select the starting...
Robotic vision techniques for space operations
NASA Technical Reports Server (NTRS)
Krishen, Kumar
1994-01-01
Automation and robotics for space applications are being pursued for increased productivity, enhanced reliability, increased flexibility, higher safety, and for the automation of time-consuming tasks and those activities which are beyond the capacity of the crew. One of the key functional elements of an automated robotic system is sensing and perception. As the robotics era dawns in space, vision systems will be required to provide the key sensory data needed for multifaceted intelligent operations. In general, the three-dimensional scene/object description, along with location, orientation, and motion parameters, will be needed. In space, the absence of diffused lighting due to a lack of atmosphere gives rise to: (a) a high dynamic range (10^8) of scattered sunlight intensities, resulting in very high contrast between shadowed and specular portions of the scene; (b) intense specular reflections causing target/scene bloom; and (c) loss of portions of the image due to shadowing and the presence of stars, Earth, Moon, and other space objects in the scene. In this work, developments for combating these adverse effects and for enhancing scene definition are discussed. Both active and passive sensors are used. The algorithm for selecting the appropriate wavelength, polarization, and look angle of vision sensors is based on environmental factors as well as the properties of the target/scene which are to be perceived. The environment is characterized on the basis of sunlight and other illumination incident on the target/scene and the temperature profiles estimated on the basis of the incident illumination. The unknown geometrical and physical parameters are then derived from the fusion of the active and passive microwave, infrared, laser, and optical data.
Color appearance and color rendering of HDR scenes: an experiment
NASA Astrophysics Data System (ADS)
Parraman, Carinna; Rizzi, Alessandro; McCann, John J.
2009-01-01
In order to gain a deeper understanding of the appearance of coloured objects in a three-dimensional scene, the research introduces a multidisciplinary experimental approach. The experiment employed two identical 3-D Mondrians, which were viewed and compared side by side. Each scene was subjected to different lighting conditions. First, we used an illumination cube to diffuse the light and illuminate all the objects from each direction. This produced a low-dynamic-range (LDR) image of the 3-D Mondrian scene. Second, in order to make a high-dynamic-range (HDR) image of the same objects, we used a directional 150 W spotlight and an array of WLEDs assembled in a flashlight. The scenes were significant as each contained exactly the same three-dimensional painted colour blocks that were arranged in the same position in the still life. The blocks comprised 6 hue colours and 5 tones from white to black. Participants from the CREATE project were asked to consider the change in the appearance of a selection of colours according to lightness, hue, and chroma, and to rate how the change in illumination affected appearance. We measured the light coming to the eye from still-life surfaces with a colorimeter (Yxy). We captured the scene radiance using multiple exposures with a number of different cameras. We have begun a programme of digital image processing of these scene capture methods. This multi-disciplinary programme continues until 2010, so this paper is an interim report on the initial phases and a description of the ongoing project.
New technologies for HWIL testing of WFOV, large-format FPA sensor systems
NASA Astrophysics Data System (ADS)
Fink, Christopher
2016-05-01
Advancements in FPA density and associated wide-field-of-view infrared sensors (>=4000x4000 detectors) have outpaced the current-art HWIL technology. Whether testing in optical projection or digital signal injection modes, current-art technologies for infrared scene projection, digital injection interfaces, and scene generation systems simply lack the required resolution and bandwidth. For example, the L3 Cincinnati Electronics ultra-high resolution MWIR Camera deployed in some UAV reconnaissance systems features 16MP resolution at 60Hz, while the current upper limit of IR emitter arrays is ~1MP, and single-channel dual-link DVI throughput of COTS graphics cards is limited to 2560x1600 pixels at 60Hz. Moreover, there are significant challenges in real-time, closed-loop, physics-based IR scene generation for large format FPAs, including the size and spatial detail required for very large area terrains, and multi-channel low-latency synchronization to achieve the required bandwidth. In this paper, the author's team presents some of their ongoing research and technical approaches toward HWIL testing of large-format FPAs with wide-FOV optics. One approach presented is a hybrid projection/injection design, where digital signal injection is used to augment the resolution of current-art IRSPs, utilizing a multi-channel, high-fidelity physics-based IR scene simulator in conjunction with a novel image composition hardware unit, to allow projection in the foveal region of the sensor, while non-foveal regions of the sensor array are simultaneously stimulated via direct injection into the post-detector electronics.
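The bandwidth mismatch quoted above is easy to verify with back-of-the-envelope arithmetic (assuming a 4096x4096 array for "16MP"; dual-link DVI tops out at 2560x1600 at 60 Hz):

```python
# Pixel-throughput check for the numbers quoted in the abstract.
sensor_px = 4096 * 4096                   # ~16 MP large-format FPA
required = sensor_px * 60                 # pixels/s the HWIL chain must deliver
dvi_limit = 2560 * 1600 * 60              # pixels/s through one dual-link DVI

print(f"required : {required / 1e9:.2f} Gpx/s")   # ~1.01 Gpx/s
print(f"per link : {dvi_limit / 1e9:.2f} Gpx/s")  # ~0.25 Gpx/s
print(f"links    : {required / dvi_limit:.1f}")   # ~4 synchronized channels needed
```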
3D Data Mapping and Real-Time Experiment Control and Visualization in Brain Slices.
Navarro, Marco A; Hibbard, Jaime V K; Miller, Michael E; Nivin, Tyler W; Milescu, Lorin S
2015-10-20
Here, we propose two basic concepts that can streamline electrophysiology and imaging experiments in brain slices and enhance data collection and analysis. The first idea is to interface the experiment with a software environment that provides a 3D scene viewer in which the experimental rig, the brain slice, and the recorded data are represented to scale. Within the 3D scene viewer, the user can visualize a live image of the sample and 3D renderings of the recording electrodes with real-time position feedback. Furthermore, the user can control the instruments and visualize their status in real time. The second idea is to integrate multiple types of experimental data into a spatial and temporal map of the brain slice. These data may include low-magnification maps of the entire brain slice, for spatial context, or any other type of high-resolution structural and functional image, together with time-resolved electrical and optical signals. The entire data collection can be visualized within the 3D scene viewer. These concepts can be applied to any other type of experiment in which high-resolution data are recorded within a larger sample at different spatial and temporal coordinates. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Conjoint representation of texture ensemble and location in the parahippocampal place area.
Park, Jeongho; Park, Soojin
2017-04-01
Texture provides crucial information about the category or identity of a scene. Nonetheless, not much is known about how the texture information in a scene is represented in the brain. Previous studies have shown that the parahippocampal place area (PPA), a scene-selective part of visual cortex, responds to simple patches of texture ensemble. However, in natural scenes textures exist in spatial context within a scene. Here we tested two hypotheses that make different predictions on how textures within a scene context are represented in the PPA. The Texture-Only hypothesis suggests that the PPA represents texture ensemble (i.e., the kind of texture) as is, irrespective of its location in the scene. On the other hand, the Texture and Location hypothesis suggests that the PPA represents texture and its location within a scene (e.g., ceiling or wall) conjointly. We tested these two hypotheses across two experiments, using different but complementary methods. In experiment 1, by using multivoxel pattern analysis (MVPA) and representational similarity analysis, we found that the representational similarity of the PPA activation patterns was significantly explained by the Texture-Only hypothesis but not by the Texture and Location hypothesis. In experiment 2, using a repetition suppression paradigm, we found no repetition suppression for scenes that had the same texture ensemble but differed in location (supporting the Texture and Location hypothesis). On the basis of these results, we propose a framework that reconciles contrasting results from MVPA and repetition suppression and draw conclusions about how texture is represented in the PPA. NEW & NOTEWORTHY This study investigates how the parahippocampal place area (PPA) represents texture information within a scene context. We claim that texture is represented in the PPA at multiple levels: the texture ensemble information at the across-voxel level and the conjoint information of texture and its location at the within-voxel level. The study proposes a working hypothesis that reconciles contrasting results from multivoxel pattern analysis and repetition suppression, suggesting that the methods are complementary to each other but not necessarily interchangeable. Copyright © 2017 the American Physiological Society.
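As an illustration of the MVPA/representational-similarity logic of experiment 1, here is a generic sketch under our own naming, not the authors' analysis code:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_fit(patterns, model_rdm):
    """Correlate a neural RDM with a hypothesis RDM (generic RSA step).

    patterns  : (n_conditions, n_voxels) activation patterns from the PPA
    model_rdm : (n_conditions, n_conditions) hypothesis matrix, e.g. 0 for
                pairs sharing a texture ensemble and 1 otherwise
                (the Texture-Only hypothesis ignores location)
    """
    neural = pdist(patterns, metric="correlation")   # condensed upper triangle
    iu = np.triu_indices(model_rdm.shape[0], k=1)    # same pair ordering as pdist
    return spearmanr(neural, model_rdm[iu])          # (rho, p-value)
```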
Tobacco imagery on New Zealand television 2002-2004.
McGee, Rob; Ketchel, Juanita
2006-10-01
Considerable emphasis has been placed on the importance of tobacco imagery in the movies as one of the "drivers" of smoking among young people. Findings are presented from a content analysis of 98 hours of prime-time programming on New Zealand television in 2004, identifying 152 scenes with tobacco imagery, and selected characteristics of those scenes. About one in four programmes contained tobacco imagery, most of which might be regarded as "neutral or positive". This amounted to about two scenes containing such imagery for every hour of programming. A comparison with our earlier content analysis of programming in 2002 indicated little change in the level of tobacco imagery. The effect of this imagery in contributing to young viewers taking up smoking, and sustaining the addiction among those already smoking, deserves more research attention.
Rapid discrimination of visual scene content in the human brain.
Anokhin, Andrey P; Golosheykin, Simon; Sirevaag, Erik; Kristjansson, Sean; Rohrbaugh, John W; Heath, Andrew C
2006-06-06
The rapid evaluation of complex visual environments is critical for an organism's adaptation and survival. Previous studies have shown that emotionally significant visual scenes, both pleasant and unpleasant, elicit a larger late positive wave in the event-related brain potential (ERP) than emotionally neutral pictures. The purpose of the present study was to examine whether neuroelectric responses elicited by complex pictures discriminate between specific, biologically relevant contents of the visual scene and to determine how early in the picture processing this discrimination occurs. Subjects (n = 264) viewed 55 color slides differing in both scene content and emotional significance. No categorical judgments or responses were required. Consistent with previous studies, we found that emotionally arousing pictures, regardless of their content, produce a larger late positive wave than neutral pictures. However, when pictures were further categorized by content, anterior ERP components in a time window between 200 and 600 ms following stimulus onset showed a high selectivity for pictures with erotic content compared to other pictures regardless of their emotional valence (pleasant, neutral, and unpleasant) or emotional arousal. The divergence of ERPs elicited by erotic and non-erotic contents started at 185 ms post-stimulus in the fronto-central midline region, with a later onset in parietal regions. This rapid, selective, and content-specific processing of erotic materials and its dissociation from other pictures (including emotionally positive pictures) suggests the existence of a specialized neural network for prioritized processing of a distinct category of biologically relevant stimuli with high adaptive and evolutionary significance.
Li, Rui; Zhang, Xiaodong; Li, Hanzhe; Zhang, Liming; Lu, Zhufeng; Chen, Jiangcheng
2018-08-01
Brain control technology can restore communication between the brain and a prosthesis, and choosing a Brain-Computer Interface (BCI) paradigm to evoke electroencephalogram (EEG) signals is an essential step for developing this technology. In this paper, the Scene Graph paradigm for controlling prostheses was proposed; this paradigm is based on Steady-State Visual Evoked Potentials (SSVEPs) elicited by a Scene Graph reflecting the subject's intention. A mathematical model was built to predict SSVEPs evoked by the proposed paradigm, and a sinusoidal stimulation method was used to present the Scene Graph stimulus to elicit SSVEPs from subjects. Then, a 2-degree-of-freedom (2-DOF) brain-controlled prosthesis system was constructed to validate the performance of the Scene Graph-SSVEP (SG-SSVEP)-based BCI. SG-SSVEPs were classified via the Canonical Correlation Analysis (CCA) approach. To assess the efficiency of the proposed BCI system, its performance was compared with that of a traditional SSVEP-BCI system. Experimental results from six subjects suggested that the proposed system effectively enhanced the SSVEP responses, decreased the degradation of SSVEP strength and reduced visual fatigue in comparison with the traditional SSVEP-BCI system. The average signal-to-noise ratio (SNR) of SG-SSVEP was 6.31 ± 2.64 dB, versus 3.38 ± 0.78 dB for traditional SSVEP. In addition, the proposed system achieved good performance in prosthesis control. The average accuracy was 94.58% ± 7.05%, and the corresponding information transfer rate (ITR) was 19.55 ± 3.07 bit/min. The experimental results revealed that the SG-SSVEP-based BCI system achieved good performance and improved stability relative to the conventional approach. Copyright © 2018 Elsevier B.V. All rights reserved.
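The CCA detection step the abstract mentions is standard in SSVEP work; a minimal sketch of that detector follows (our naming and parameters, not the authors' code):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_cca_detect(eeg, fs, freqs, n_harmonics=2):
    """Pick the stimulation frequency whose sine/cosine reference signals
    correlate best with a multi-channel EEG segment (standard CCA detector).

    eeg   : (n_samples, n_channels) segment
    freqs : candidate SSVEP frequencies in Hz
    """
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        # reference set: sin/cos at the fundamental and its harmonics
        ref = np.column_stack(
            [fn(2 * np.pi * f * h * t)
             for h in range(1, n_harmonics + 1)
             for fn in (np.sin, np.cos)]
        )
        cca = CCA(n_components=1)
        x, y = cca.fit_transform(eeg, ref)
        scores.append(np.corrcoef(x[:, 0], y[:, 0])[0, 1])
    return freqs[int(np.argmax(scores))], scores
```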
A Method for Assessing Auditory Spatial Analysis in Reverberant Multitalker Environments.
Weller, Tobias; Best, Virginia; Buchholz, Jörg M; Young, Taegan
2016-07-01
Deficits in spatial hearing can have a negative impact on listeners' ability to orient in their environment and follow conversations in noisy backgrounds and may exacerbate the experience of hearing loss as a handicap. However, there are no good tools available for reliably capturing the spatial hearing abilities of listeners in complex acoustic environments containing multiple sounds of interest. The purpose of this study was to explore a new method to measure auditory spatial analysis in a reverberant multitalker scenario. This study was a descriptive case control study. Ten listeners with normal hearing (NH) aged 20-31 yr and 16 listeners with hearing impairment (HI) aged 52-85 yr participated in the study. The latter group had symmetrical sensorineural hearing losses with a four-frequency average hearing loss of 29.7 dB HL. A large reverberant room was simulated using a loudspeaker array in an anechoic chamber. In this simulated room, 96 scenes comprising between one and six concurrent talkers at different locations were generated. Listeners were presented with 45-sec samples of each scene, and were required to count, locate, and identify the gender of all talkers, using a graphical user interface on an iPad. Performance was evaluated in terms of correctly counting the sources and accuracy in localizing their direction. Listeners with NH were able to reliably analyze scenes with up to four simultaneous talkers, while most listeners with hearing loss demonstrated errors even with two talkers at a time. Localization performance decreased in both groups with increasing number of talkers and was significantly poorer in listeners with HI. Overall performance was significantly correlated with hearing loss. This new method appears to be useful for estimating spatial abilities in realistic multitalker scenes. The method is sensitive to the number of sources in the scene, and to effects of sensorineural hearing loss. Further work will be needed to compare this method to more traditional single-source localization tests. American Academy of Audiology.
Vogelmann, James E.; Xian, George; Homer, Collin G.; Tolk, Brian
2012-01-01
The focus of the study was to assess gradual changes occurring throughout a range of natural ecosystems using decadal Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM +) time series data. Time series data stacks were generated for four study areas: (1) a four scene area dominated by forest and rangeland ecosystems in the southwestern United States, (2) a sagebrush-dominated rangeland in Wyoming, (3) woodland adjacent to prairie in northwestern Nebraska, and (4) a forested area in the White Mountains of New Hampshire. Through analyses of time series data, we found evidence of gradual systematic change in many of the natural vegetation communities in all four areas. Many of the conifer forests in the southwestern US are showing declines related to insects and drought, but very few are showing evidence of improving conditions or increased greenness. Sagebrush communities are showing decreases in greenness related to fire, mining, and probably drought, but very few of these communities are showing evidence of increased greenness or improving conditions. In Nebraska, forest communities are showing local expansion and increased canopy densification in the prairie–woodland interface, and in the White Mountains high elevation understory conifers are showing range increases towards lower elevations. The trends detected are not obvious through casual inspection of the Landsat images. Analyses of time series data using many scenes and covering multiple years are required in order to develop better impressions and representations of the changing ecosystem patterns and trends that are occurring. The approach described in this paper demonstrates that Landsat time series data can be used operationally for assessing gradual ecosystem change across large areas. Local knowledge and available ancillary data are required in order to fully understand the nature of these trends.
Onboard photo: Astro-1 in Cargo Bay
NASA Technical Reports Server (NTRS)
1990-01-01
Onboard the Space Shuttle Orbiter Columbia (STS-35), the various components of the Astro-1 payload are seen backdropped against dark space. Parts of the Hopkins Ultraviolet Telescope (HUT), Ultraviolet Imaging Telescope (UIT), and the Wisconsin Ultraviolet Photo-Polarimetry Experiment (WUPPE) are visible on the Spacelab pallet. The Broad-Band X-Ray Telescope (BBXRT) is behind the pallet and is not visible in this scene. The smaller cylinder in the foreground is the igloo, a pressurized container housing the Command Data Management System, which interfaced with the in-cabin controllers to control the Instrument Pointing System (IPS) and the telescopes. The Astro Observatory was designed to explore the universe by observing and measuring the ultraviolet radiation from celestial objects. Astronomical targets of observation selected for Astro missions included planets, stars, star clusters, galaxies, clusters of galaxies, quasars, remnants of exploded stars (supernovae), clouds of gas and dust (nebulae), and the interstellar medium. Managed by the Marshall Space Flight Center, the Astro-1 was launched aboard the Space Shuttle Orbiter Columbia (STS-35) on December 2, 1990.
Onboard Photo: Astro-1 Ultraviolet Telescope in Cargo Bay
NASA Technical Reports Server (NTRS)
1990-01-01
Onboard the Space Shuttle Orbiter Columbia (STS-35), the various components of the Astro-1 payload are seen backdropped against a blue and white Earth. Parts of the Hopkins Ultraviolet Telescope (HUT), the Ultraviolet Imaging Telescope (UIT), and the Wisconsin Ultraviolet Photo-Polarimetry Experiment (WUPPE) are visible on the Spacelab pallet. The Broad-Band X-Ray Telescope (BBXRT) is behind the pallet and is not visible in this scene. The smaller cylinder in the foreground is the igloo, a pressurized container housing the Command Data Management System, which interfaced with the in-cabin controllers to control the Instrument Pointing System (IPS) and the telescopes. The Astro Observatory was designed to explore the universe by observing and measuring the ultraviolet radiation from celestial objects. Astronomical targets of observation selected for Astro missions included planets, stars, star clusters, galaxies, clusters of galaxies, quasars, remnants of exploded stars (supernovae), clouds of gas and dust (nebulae), and the interstellar medium. Managed by the Marshall Space Flight Center, the Astro-1 was launched aboard the Space Shuttle Orbiter Columbia (STS-35) on December 2, 1990.
A walk through the planned CS building. M.S. Thesis
NASA Technical Reports Server (NTRS)
Khorramabadi, Delnaz
1991-01-01
Using the architectural plan views of our future computer science building as test objects, we have completed the first stage of a building walkthrough system. The inputs to our system are AutoCAD files. An AutoCAD converter translates the geometrical information in these files into a format suitable for 3D rendering. Major model errors, such as incorrect polygon intersections and random face orientations, are detected and fixed automatically. Interactive viewing and editing tools are provided to view the results, to modify and clean the model, and to change surface attributes. Our display system provides a simple-to-use user interface for interactive exploration of buildings. Using only the mouse buttons, the user can move inside and outside the building and change floors. Several viewing and rendering options are provided, such as restricting the viewing frustum, avoiding wall collisions, and selecting different rendering algorithms. A plan view of the current floor, with the position of the eye point and viewing direction on it, is displayed at all times. The scene illumination can be manipulated by interactively controlling intensity values for 5 light sources.
A Photo Album of Earth Scheduling Landsat 7 Mission Daily Activities
NASA Technical Reports Server (NTRS)
Potter, William; Gasch, John; Bauer, Cynthia
1998-01-01
Landsat 7 is a member of a new generation of Earth observation satellites. Landsat 7 will carry on the mission of the aging Landsat 5 spacecraft by acquiring high resolution, multi-spectral images of the Earth surface for strategic, environmental, commercial, agricultural and civil analysis and research. One of the primary mission goals of Landsat 7 is to accumulate and seasonally refresh an archive of global images with full coverage of Earth's landmass, less the central portion of Antarctica. This archive will enable further research into seasonal, annual and long-range trending analysis in such diverse research areas as crop yields, deforestation, population growth, and pollution control, to name just a few. A secondary goal of Landsat 7 is to fulfill imaging requests from our international partners in the mission. Landsat 7 will transmit raw image data from the spacecraft to 25 ground stations in 20 subscribing countries. Whereas earlier Landsat missions were scheduled manually (as are the majority of current low-orbit satellite missions), the task of manually planning and scheduling Landsat 7 mission activities would be overwhelmingly complex when considering the large volume of image requests, the limited resources available, spacecraft instrument limitations, and the limited ground image processing capacity, not to mention avoidance of foul weather systems. The Landsat 7 Mission Operation Center (MOC) includes an image scheduler subsystem that is designed to automate the majority of mission planning and scheduling, including selection of the images to be acquired, managing the recording and playback of the images by the spacecraft, scheduling ground station contacts for downlink of images, and generating the spacecraft commands for controlling the imager, recorder, transmitters and antennas. The image scheduler subsystem autonomously generates 90% of the spacecraft commanding with minimal manual intervention. The image scheduler produces a conflict-free schedule for acquiring images of the "best" 250 scenes daily for refreshing the global archive. It then equitably distributes the remaining resources for acquiring up to 430 scenes to satisfy requests by international subscribers. The image scheduler selects candidate scenes based on priority and age of the requests, and predicted cloud cover and sun angle at each scene. It also selects these scenes to avoid instrument constraint violations and maximizes efficiency of resource usage by encouraging acquisition of scenes in clusters. Of particular interest to the mission planners, it produces the resulting schedule in a reasonable time, typically within 15 minutes.
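A hypothetical sketch of the kind of weighted, greedy scene selection such a scheduler might perform (weights and field names are illustrative, not the MOC's actual rules):

```python
def score(scene, w_pri=0.4, w_age=0.2, w_cloud=0.3, w_sun=0.1):
    """Weighted figure of merit per candidate scene (all fields in 0..1)."""
    return (w_pri * scene["priority"]            # archive refresh vs subscriber
            + w_age * scene["request_age"]       # age of the pending request
            + w_cloud * (1.0 - scene["cloud"])   # predicted cloud cover
            + w_sun * scene["sun_angle"])        # predicted sun elevation

def select_daily(candidates, quota=250):
    """Greedy fill of the daily archive-refresh quota; the real scheduler
    also enforces instrument, recorder and ground-contact constraints."""
    return sorted(candidates, key=score, reverse=True)[:quota]
```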
New technique for simulation of microgravity and variable gravity conditions
NASA Astrophysics Data System (ADS)
de la Rosa, R.; Alonso, A.; Abasolo, D. E.; Hornero, R.
2005-08-01
This paper suggests a microgravity or variable gravity conditions simulator based on a Neuromuscular Control System (NCS), working as a man-machine interface. The subject under training lies on an active platform that counteracts his weight, while a Virtual Reality (VR) system displays a simulated environment in which the subject can interact with a number of settings: extravehicular activity (EVA), walking on the Moon, or training the limb response under variable acceleration scenes. Results related to real-time voluntary control have been achieved with neuromuscular interfaces at the Bioengineering Group in the University of Valladolid, where a custom real-time system has been employed to train arm movements. This paper outlines a more complex design that can complement other training facilities, like the buoyancy pool, in the task of microgravity simulation.
Chen, Xuexia; Vogelmann, James E.; Chander, Gyanesh; Ji, Lei; Tolk, Brian; Huang, Chengquan; Rollins, Matthew
2013-01-01
Routine acquisition of Landsat 5 Thematic Mapper (TM) data was discontinued recently, and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) has an ongoing problem with the scan line corrector (SLC), which creates spatial gaps in the images it acquires. Since temporal and spatial discontinuities of Landsat data are now imminent, it is important to investigate other potential satellite data that can be used to replace Landsat data. We thus cross-compared two near-simultaneous images obtained from Landsat 5 TM and the Indian Remote Sensing (IRS)-P6 Advanced Wide Field Sensor (AWiFS), both captured on 29 May 2007 over Los Angeles, CA. TM and AWiFS reflectances were compared for the green, red, near-infrared (NIR), and shortwave infrared (SWIR) bands, as well as the normalized difference vegetation index (NDVI), based on manually selected polygons in homogeneous areas. All R² values of the linear regressions were higher than 0.99. The temporally invariant cluster (TIC) method was used to calculate the NDVI correlation between the TM and AWiFS images. The NDVI regression line derived from selected polygons passed through several invariant cluster centres of the TIC density maps, demonstrating that both the scene-dependent polygon regression method and the TIC method can generate accurate radiometric normalization. A scene-independent normalization method was also used to normalize the AWiFS data. Image agreement assessment demonstrated that the scene-dependent normalization using homogeneous polygons provided slightly higher accuracy values than those obtained by the scene-independent method. Finally, the non-normalized and relatively normalized ‘Landsat-like’ AWiFS 2007 images were integrated into 1984 to 2010 Landsat time-series stacks (LTSS) for disturbance detection using the Vegetation Change Tracker (VCT) model. Both scene-dependent and scene-independent normalized AWiFS data sets could generate disturbance maps similar to those generated using the LTSS data set, and their kappa coefficients were higher than 0.97. These results indicate that AWiFS can be used instead of Landsat data to detect multitemporal disturbance in the event of Landsat data discontinuity.
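The scene-dependent normalization amounts to a per-band linear fit between the two sensors; a minimal sketch (our naming, assuming reflectances sampled from the homogeneous polygons):

```python
import numpy as np

def normalize_band(awifs_band, tm_band):
    """Scene-dependent relative normalization: fit gain/offset on pixels
    drawn from homogeneous polygons, then map AWiFS to 'Landsat-like' values.
    Inputs are 1-D arrays of reflectances sampled from the common polygons.
    """
    gain, offset = np.polyfit(awifs_band, tm_band, 1)   # least-squares line
    r = np.corrcoef(awifs_band, tm_band)[0, 1]
    return gain, offset, r**2   # the abstract reports R^2 > 0.99 per band

# applying the fit to a full image:
# normalized = gain * awifs_image + offset
```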
Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones.
Sohn, Bong-Soo
2017-03-11
This paper describes a new method to automatically generate digital bas-reliefs with depth-of-field effects from general scenes. Most previous methods for bas-relief generation take input in the form of 3D models. However, obtaining 3D models of real scenes or objects is often difficult, inaccurate, and time-consuming. From this motivation, we developed a method that takes as input a set of photographs that can be quickly and ubiquitously captured by ordinary smartphone cameras. A depth map is computed from the input photographs. The value range of the depth map is compressed and used as a base map representing the overall shape of the bas-relief. However, the resulting base map contains little information on details of the scene. Thus, we construct a detail map using pixel values of the input image to express the details. The base and detail maps are blended to generate a new depth map that reflects both overall depth and scene detail information. This map is selectively blurred to simulate the depth-of-field effects. The final depth map is converted to a bas-relief surface mesh. Experimental results show that our method generates a realistic bas-relief surface of general scenes with no expensive manual processing.
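A compact sketch of the described pipeline (ours, with illustrative parameters): compress the depth range into a base map, add image-derived detail, then blur selectively by distance from an assumed focus plane:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bas_relief_depth(depth, lum, base_range=0.1, detail_weight=0.05,
                     focus_depth=0.5, dof_sigma=3.0):
    """depth, lum : 2-D float arrays in [0, 1] (lum = image luminance)."""
    rng = np.ptp(depth) + 1e-9
    base = (depth - depth.min()) / rng * base_range        # compressed base map
    detail = lum - gaussian_filter(lum, 5)                 # high-pass detail map
    relief = base + detail_weight * detail                 # blended depth map
    w = np.abs(depth - focus_depth)
    w = w / (w.max() + 1e-9)                               # 0 in focus, 1 far away
    blurred = gaussian_filter(relief, dof_sigma)
    return (1 - w) * relief + w * blurred                  # selective DOF blur
```

The returned map would then be converted to a surface mesh, the step the paper performs last.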
Elliott, Marc N.; Kerndt, Peter R.; Schuster, Mark A.; Brook, Robert H.; Gelberg, Lillian
2009-01-01
Objectives. We compared the prevalence of condom use during a variety of sexual acts portrayed in adult films produced for heterosexual and homosexual audiences to assess compliance with state Occupational Health and Safety Administration regulations. Methods. We analyzed 50 heterosexual and 50 male homosexual films released between August 1, 2005, and July 31, 2006, randomly selected from the distributor of 85% of the heterosexual adult films released each year in the United States. Results. Penile–vaginal intercourse was protected with condoms in 3% of heterosexual scenes. Penile–anal intercourse, common in both heterosexual (42%) and homosexual (80%) scenes, was much less likely to be protected with condoms in heterosexual than in homosexual scenes (10% vs 78%; P < .001). No penile–oral acts were protected with condoms in any of the selected films. Conclusions. Heterosexual films were much less likely than were homosexual films to portray condom use, raising concerns about transmission of HIV and other sexually transmitted diseases, especially among performers in heterosexual adult films. In addition, the adult film industry, especially the heterosexual industry, is not adhering to state occupational safety regulations. PMID:19218178
Filippi, Massimo; Riccitelli, Gianna; Falini, Andrea; Di Salle, Francesco; Vuilleumier, Patrik; Comi, Giancarlo; Rocca, Maria A.
2010-01-01
Empathy and affective appraisals for conspecifics are among the hallmarks of social interaction. Using functional MRI, we hypothesized that vegetarians and vegans, who made their feeding choice for ethical reasons, might show brain responses to conditions of suffering involving humans or animals different from omnivores. We recruited 20 omnivore subjects, 19 vegetarians, and 21 vegans. The groups were matched for sex and age. Brain activation was investigated using fMRI and an event-related design during observation of negative affective pictures of human beings and animals (showing mutilations, murdered people, human/animal threat, tortures, wounds, etc.). Participants saw negative-valence scenes related to humans and animals, alternating with natural landscapes. During human negative-valence scenes, compared with omnivores, vegetarians and vegans had an increased recruitment of the anterior cingulate cortex (ACC) and inferior frontal gyrus (IFG). More critically, during animal negative-valence scenes, they had decreased amygdala activation and increased activation of the lingual gyri, the left cuneus, the posterior cingulate cortex and several areas mainly located in the frontal lobes, including the ACC, the IFG and the middle frontal gyrus. Nonetheless, substantial differences between vegetarians and vegans were also found in response to negative scenes. Vegetarians showed a selective recruitment of the right inferior parietal lobule during human negative scenes, and a prevailing activation of the ACC during animal negative scenes. Conversely, during animal negative scenes an increased activation of the inferior prefrontal cortex was observed in vegans. These results suggest that empathy toward non-conspecifics has a different neural representation among individuals with different feeding habits, perhaps reflecting different motivational factors and beliefs. PMID:20520767
Pedale, Tiziana; Santangelo, Valerio
2015-01-01
One of the most important issues in the study of cognition is to understand which are the factors determining internal representation of the external world. Previous literature has started to highlight the impact of low-level sensory features (indexed by saliency-maps) in driving attention selection, hence increasing the probability for objects presented in complex and natural scenes to be successfully encoded into working memory (WM) and then correctly remembered. Here we asked whether the probability of retrieving high-saliency objects modulates the overall contents of WM, by decreasing the probability of retrieving other, lower-saliency objects. We presented pictures of natural scenes for 4 s. After a retention period of 8 s, we asked participants to verbally report as many objects/details as possible of the previous scenes. We then computed how many times the objects located at either the peak of maximal or minimal saliency in the scene (as indexed by a saliency-map; Itti et al., 1998) were recollected by participants. Results showed that maximal-saliency objects were recollected more often and earlier in the stream of successfully reported items than minimal-saliency objects. This indicates that bottom-up sensory salience increases the recollection probability and facilitates the access to memory representation at retrieval, respectively. Moreover, recollection of the maximal- (but not the minimal-) saliency objects predicted the overall amount of successfully recollected objects: The higher the probability of having successfully reported the most-salient object in the scene, the lower the amount of recollected objects. These findings highlight that bottom-up sensory saliency modulates the current contents of WM during recollection of objects from natural scenes, most likely by reducing available resources to encode and then retrieve other (lower saliency) objects. PMID:25741266
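The saliency-map step above can be approximated without a full Itti et al. (1998) implementation; below is a sketch using the simpler spectral-residual model (Hou & Zhang, 2007) as a stand-in — it is not the model the authors used:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(gray):
    """Spectral-residual saliency: a cheap stand-in for the Itti et al.
    (1998) map referenced in the study above.
    gray : 2-D float array (grayscale image).
    """
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-9)
    residual = log_amp - gaussian_filter(log_amp, 3)   # deviation from smooth spectrum
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(f)))) ** 2
    sal = gaussian_filter(sal, 3)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-9)

# peaks of maximal/minimal saliency, analogous to the probed objects:
# y, x = np.unravel_index(np.argmax(saliency_map(img)), img.shape)
```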
The use of an image registration technique in the urban growth monitoring
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Foresti, C.; Deoliveira, M. D. L. N.; Niero, M.; Parreira, E. M. D. M. F.
1984-01-01
The use of an image registration program in studies of urban growth is described. This program permits quick identification of growing areas by overlaying the same scene from different periods, with the use of adequate filters. The city of Brasilia, Brazil, was selected as the test area. The dynamics of Brasilia's urban growth were analyzed by overlaying scenes dated June 1973, 1978 and 1983. The results demonstrated the utility of the image registration technique for monitoring dynamic urban growth.
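Translation-only registration of two acquisitions of the same scene can nowadays be done in a few lines; a sketch of the overlay-and-difference idea (ours, not the original 1984 program):

```python
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def coregister(scene_t0, scene_t1):
    """Estimate the translation between two co-located scenes and resample
    the later one onto the earlier grid so they can be overlaid/differenced.
    """
    offset, error, _ = phase_cross_correlation(scene_t0, scene_t1)
    registered = nd_shift(scene_t1, offset)
    change = np.abs(scene_t0 - registered)   # growing areas show up as change
    return registered, change
```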
Mining Very High Resolution INSAR Data Based On Complex-GMRF Cues And Relevance Feedback
NASA Astrophysics Data System (ADS)
Singh, Jagmal; Popescu, Anca; Soccorsi, Matteo; Datcu, Mihai
2012-01-01
With the increase in the number of remote sensing satellites, the number of image-data scenes in our repositories is also increasing, and a large proportion of these scenes is never retrieved or used. Thus automatic retrieval of desired image-data using query by image content, to fully utilize the huge repository volume, is becoming of great interest. Generally, different users are interested in scenes containing different kinds of objects and structures, so it is important to analyze the available image information mining (IIM) methods so that it is easier for a user to select a method depending upon his/her requirements. We concentrate our study on high-resolution SAR images, and we propose to use InSAR observations instead of single look complex (SLC) images alone for mining scenes containing coherent objects such as high-rise buildings. However, in the case of objects with less coherence, like areas with vegetation cover, SLC images exhibit better performance. We demonstrate an IIM performance comparison using complex Gauss-Markov Random Fields as texture descriptors for image patches and SVM relevance feedback.
A multiple-feature and multiple-kernel scene segmentation algorithm for humanoid robot.
Liu, Zhi; Xu, Shuqiong; Zhang, Yun; Chen, Chun Lung Philip
2014-11-01
This technical correspondence presents a multiple-feature and multiple-kernel support vector machine (MFMK-SVM) methodology to achieve a more reliable and robust segmentation performance for humanoid robots. The pixel-wise intensity, gradient, and C1 SMF features are extracted via the local homogeneity model and Gabor filter and used as inputs of the MFMK-SVM model. This provides multiple features of the samples for easier implementation and efficient computation of the MFMK-SVM model. A new clustering method, called the feature validity-interval type-2 fuzzy C-means (FV-IT2FCM) clustering algorithm, is proposed; it integrates a type-2 fuzzy criterion in the clustering optimization process to improve the robustness and reliability of clustering results through iterative optimization. Furthermore, the clustering validity is employed to select the training samples for the learning of the MFMK-SVM model. The MFMK-SVM scene segmentation method is able to take full advantage of the multiple features of the scene image and the ability of multiple kernels. Experiments on the BSDS dataset and real natural scene images demonstrate the superior performance of our proposed method.
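A simple fixed-weight stand-in for the multiple-kernel idea (not the MFMK-SVM learning procedure itself): build one kernel per feature group and feed their weighted sum to an SVM as a precomputed kernel:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def combined_kernel(feats_a, feats_b, gammas, weights):
    """Weighted sum of one RBF kernel per feature group (e.g., intensity,
    gradient, C1 SMF). feats_a/feats_b are lists of (n_samples, n_dims)
    arrays, one per feature group; weights are fixed, not learned.
    """
    return sum(w * rbf_kernel(a, b, gamma=g)
               for a, b, g, w in zip(feats_a, feats_b, gammas, weights))

# training / prediction with the precomputed kernel:
# K_train = combined_kernel(train_feats, train_feats, gammas, weights)
# clf = SVC(kernel="precomputed").fit(K_train, labels)
# K_test = combined_kernel(test_feats, train_feats, gammas, weights)
# pred = clf.predict(K_test)
```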
Landsat-8: Status and on-orbit performance
Markham, Brian L; Barsi, Julia A.; Morfitt, Ron; Choate, Michael J.; Montanaro, Matthew; Arvidson, Terry; Irons, James R.
2015-01-01
Landsat 8 and its two Earth imaging sensors, the Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) have been operating on-orbit for 2 ½ years. Landsat 8 has been acquiring substantially more images than initially planned, typically around 700 scenes per day versus a 400 scenes per day requirement, acquiring nearly all land scenes. Both the TIRS and OLI instruments are exceeding their SNR requirements by at least a factor of 2 and are very stable, degrading by at most 1% in responsivity over the mission to date. Both instruments have 100% operable detectors covering their cross track field of view using the redundant detectors as necessary. The geometric performance is excellent, meeting or exceeding all performance requirements. One anomaly occurred with the TIRS Scene Select Mirror (SSM) encoder that affected its operation, though by switching to the side B electronics, this was fully recovered. The one challenge is with the TIRS stray light, which affects the flat fielding and absolute calibration of the TIRS data. The error introduced is smaller in TIRS band 10. Band 11 should not currently be used in science applications.
Ward, Emma V; Maylor, Elizabeth A; Poirier, Marie; Korko, Malgorzata; Ruud, Jens C M
2017-11-01
Reinstatement of encoding context facilitates memory for targets in young and older individuals (e.g., a word studied on a particular background scene is more likely to be remembered later if it is presented on the same rather than a different scene or no scene), yet older adults are typically inferior at recalling and recognizing target-context pairings. This study examined the mechanisms of the context effect in normal aging. Age differences in word recognition by context condition (original, switched, none, new), and the ability to explicitly remember target-context pairings were investigated using word-scene pairs (Experiment 1) and word-word pairs (Experiment 2). Both age groups benefited from context reinstatement in item recognition, although older adults were significantly worse than young adults at identifying original pairings and at discriminating between original and switched pairings. In Experiment 3, participants were given a three-alternative forced-choice recognition task that allowed older individuals to draw upon intact familiarity processes in selecting original pairings. Performance was age equivalent. Findings suggest that heightened familiarity associated with context reinstatement is useful for boosting recognition memory in aging.
Efficient structure from motion on large scenes using UAV with position and pose information
NASA Astrophysics Data System (ADS)
Teng, Xichao; Yu, Qifeng; Shang, Yang; Luo, Jing; Wang, Gang
2018-04-01
In this paper, we exploit prior information from global positioning systems and inertial measurement units to speed up the process of large scene reconstruction from images acquired by Unmanned Aerial Vehicles. We utilize weak pose information and intrinsic parameters to obtain the projection matrix for each view. Since topographic relief can usually be ignored compared with the UAV's flight altitude, we assume that the scene is flat and use a weak perspective camera model to obtain projective transformations between two views. Furthermore, we propose an overlap criterion and select potentially matching pairs among the projectively transformed views. A robust global structure from motion method is used for image-based reconstruction. Our real-world experiments show that the approach is accurate, scalable and computationally efficient. Moreover, the projective transformations between views can also be used to eliminate false matches.
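The pose-prior step reduces to assembling a projection matrix from the GPS/IMU estimates; a minimal sketch (our naming):

```python
import numpy as np

def projection_from_pose(K, R, t):
    """Projection matrix P = K [R | t] from GPS/IMU pose priors.
    K : (3,3) intrinsics; R : (3,3) world-to-camera rotation;
    t : (3,) translation (t = -R @ C for camera centre C from GPS).
    """
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P, X_world):
    """Project homogeneous world points (n,4) to pixel coordinates (n,2)."""
    x = (P @ X_world.T).T
    return x[:, :2] / x[:, 2:3]
```

Projecting each view's image corners onto the assumed flat-scene plane yields footprints whose intersection can drive an overlap criterion of the kind the paper proposes.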
Domain Adaptation for Pedestrian Detection Based on Prediction Consistency
Huan-ling, Tang; Zhi-yong, An
2014-01-01
Pedestrian detection is an active area of research in computer vision. It remains a quite challenging problem in many applications where many factors cause a mismatch between the source dataset used to train the pedestrian detector and samples in the target scene. In this paper, we propose a novel domain adaptation model for merging plentiful source domain samples with scarce target domain samples to create a scene-specific pedestrian detector that performs as well as if rich target domain samples were present. Our approach combines the boosting-based learning algorithm with an entropy-based transferability measure, which is derived from the prediction consistency with the source classifications, to selectively choose the samples showing positive transferability from source domains to the target domain. Experimental results show that our approach can improve the detection rate, especially when labeled data in the target scene are insufficient. PMID:25013850
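The abstract does not give the transferability formula, so the following is a hypothetical rendering of an entropy-based weight consistent with its description (all names ours):

```python
import numpy as np

def transferability(p_target, p_source):
    """Hypothetical entropy-based transferability: source samples whose
    predictions agree confidently with the target-scene classifier get a
    weight near 1, ambiguous ones near 0.
    p_target, p_source : predicted positive-class probabilities per sample.
    """
    consistency = 1.0 - np.abs(p_target - p_source)   # prediction agreement
    eps = 1e-9
    h = -(consistency * np.log2(consistency + eps)
          + (1 - consistency) * np.log2(1 - consistency + eps))  # binary entropy
    return consistency * (1.0 - h)   # high, consistent confidence transfers best
```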
Attention to and Memory for Audio and Video Information in Television Scenes.
ERIC Educational Resources Information Center
Basil, Michael D.
A study investigated whether selective attention to a particular television modality resulted in different levels of attention to and memory for each modality. Two independent variables manipulated selective attention. These were the semantic channel (audio or video) and viewers' instructed focus (audio or video). These variables were fully…
Guest Editor's introduction: Special issue on distributed virtual environments
NASA Astrophysics Data System (ADS)
Lea, Rodger
1998-09-01
Distributed virtual environments (DVEs) combine technology from 3D graphics, virtual reality and distributed systems to provide an interactive 3D scene that supports multiple participants. Each participant has a representation in the scene, often known as an avatar, and is free to navigate through the scene and interact with both the scene and other viewers of the scene. Changes to the scene, for example, position changes of one avatar as the associated viewer navigates through the scene, or changes to objects in the scene via manipulation, are propagated in real time to all viewers. This ensures that all viewers of a shared scene 'see' the same representation of it, allowing sensible reasoning about the scene. Early work on such environments was restricted to their use in simulation, in particular in military simulation. However, over recent years a number of interesting and potentially far-reaching attempts have been made to exploit the technology for a range of other uses, including:
- Social spaces. Such spaces can be seen as logical extensions of the familiar text chat space. In 3D social spaces avatars, representing participants, can meet in shared 3D scenes and in addition to text chat can use visual cues and even in some cases spatial audio.
- Collaborative working. A number of recent projects have attempted to explore the use of DVEs to facilitate computer-supported collaborative working (CSCW), where the 3D space provides a context and work space for collaboration.
- Gaming. The shared 3D space is already familiar, albeit in a constrained manner, to the gaming community. DVEs are a logical superset of existing 3D games and can provide a rich framework for advanced gaming applications.
- e-commerce. The ability to navigate through a virtual shopping mall and to look at, and even interact with, 3D representations of articles has appealed to the e-commerce community as it searches for the best method of presenting merchandise to electronic consumers.
The technology needed to support these systems crosses a number of disciplines in computer science. These include, but are certainly not limited to, real-time graphics for the accurate and realistic representation of scenes, group communications for the efficient update of shared consistent scene data, user interface modelling to exploit the use of the 3D representation and multimedia systems technology for the delivery of streamed graphics and audio-visual data into the shared scene. It is this intersection of technologies and the overriding need to provide visual realism that places such high demands on the underlying distributed systems infrastructure and makes DVEs such fertile ground for distributed systems research. Two examples serve to show how DVE developers have exploited the unique aspects of their domain.
Communications. The usual tension between latency and throughput is particularly noticeable within DVEs. To ensure the timely update of multiple viewers of a particular scene requires that such updates be propagated quickly. However, the sheer volume of changes to any one scene calls for techniques that minimize the number of distinct updates that are sent to the network. Several techniques have been used to address this tension; these include the use of multicast communications, and in particular multicast in wide-area networks to reduce actual message traffic. Multicast has been combined with general group communications to partition updates to related objects or users of a scene.
A less traditional approach has been the use of dead reckoning, whereby a client application that visualizes the scene calculates position updates by extrapolating movement based on previous information. This allows the system to reduce the number of communications needed to update objects that move in a stable manner within the scene.
Scaling. DVEs, especially those used for social spaces, are required to support large numbers of simultaneous users in potentially large shared scenes. The desire for scalability has driven different architectural designs, for example, the use of fully distributed architectures which scale well but often suffer performance costs versus centralized and hierarchical architectures in which the inverse is true. However, DVEs have also exploited the spatial nature of their domain to address scalability and have pioneered techniques that exploit the semantics of the shared space to reduce data updates and so allow greater scalability. Several of the systems reported in this special issue apply a notion of area of interest to partition the scene and so reduce the participants in any data updates. The specification of area of interest differs between systems. One approach has been to exploit a geographical notion, i.e. a regular portion of a scene, or a semantic unit, such as a room or building. Another approach has been to define the area of interest as a spatial area associated with an avatar in the scene.
The five papers in this special issue have been chosen to highlight the distributed systems aspects of the DVE domain. The first paper, on the DIVE system, described by Emmanuel Frécon and Mårten Stenius, explores the use of multicast and group communication in a fully peer-to-peer architecture. The developers of DIVE have focused on its use as the basis for collaborative work environments and have explored the issues associated with maintaining and updating large complicated scenes. The second paper, by Hiroaki Harada et al, describes the AGORA system, a DVE concentrating on social spaces and employing a novel communication technique that incorporates position update and vector information to support dead reckoning. The paper by Simon Powers et al explores the application of DVEs to the gaming domain. They propose a novel architecture that separates out higher-level game semantics - the conceptual model - from the lower-level scene attributes - the dynamic model, both running on servers, from the actual visual representation - the visual model - running on the client. They claim a number of benefits from this approach, including better predictability and consistency. Wolfgang Broll discusses the SmallView system, which is an attempt to provide a toolkit for DVEs. One of the key features of SmallView is a sophisticated application-level protocol, DWTP, that provides support for a variety of communication models. The final paper, by Chris Greenhalgh, discusses the MASSIVE system, which has been used to explore the notion of awareness in the 3D space via the concept of 'auras'. These auras define an area of interest for users and support a mapping between what a user is aware of, and what data update rate the communications infrastructure can support. We hope that this selection of papers will serve to provide a clear introduction to the distributed system issues faced by the DVE community and the approaches they have taken in solving them.
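Since dead reckoning comes up both in the overview above and in the AGORA paper, a toy sketch may help (ours, not taken from any of the featured systems):

```python
class DeadReckonedAvatar:
    """Toy 2-D dead reckoning: between network updates, peers extrapolate
    an avatar's position from its last known velocity; a fresh update is
    sent only when the prediction drifts beyond an error bound.
    """
    def __init__(self, pos, vel, t):
        self.pos, self.vel, self.t = list(pos), list(vel), t

    def predict(self, now):
        """Extrapolated position at time `now` (linear model)."""
        dt = now - self.t
        return [p + v * dt for p, v in zip(self.pos, self.vel)]

    def needs_update(self, true_pos, now, threshold=0.5):
        """Owner-side check: has extrapolation error exceeded the bound?"""
        return max(abs(a - b)
                   for a, b in zip(self.predict(now), true_pos)) > threshold

    def apply_update(self, pos, vel, now):
        self.pos, self.vel, self.t = list(pos), list(vel), now
```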
Finally, we wish to thank Hubert Le Van Gong for his tireless efforts in pulling together all these papers and both the referees and the authors of the papers for the time and effort in ensuring that their contributions teased out the interesting distributed systems issues for this special issue.
The capture and recreation of 3D auditory scenes
NASA Astrophysics Data System (ADS)
Li, Zhiyun
The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research is concerned with sound capture via a spherical microphone array. The advantage of this array is that it can be steered into any 3D directions digitally with the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation and robustness constraint. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating them by exploring the reciprocity principle that is satisfied between the two processes. Our approach makes the system easy to build, and practical. Using this approach, we can capture the 3D sound field by a spherical microphone array and recreate it using a spherical loudspeaker array, and ensure that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted for other regular or semi-regular layouts of microphones. In addition, we extend this approach for headphone-based system. Design examples and simulation results are presented to verify our algorithms. Prototypes are built and tested in real-world auditory scenes.
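Matching recorded and recreated fields "up to a high order of spherical harmonics" rests on decomposing the measured field into spherical-harmonic coefficients; a least-squares sketch (our naming, one frequency bin):

```python
import numpy as np
from scipy.special import sph_harm

def sh_coefficients(pressures, thetas, phis, order):
    """Least-squares spherical-harmonic decomposition of the sound field
    sampled by a spherical microphone array.
    pressures : (n_mics,) complex pressures at one frequency bin
    thetas    : polar angles per microphone (radians)
    phis      : azimuths per microphone (radians)
    """
    basis = np.column_stack(
        [sph_harm(m, n, phis, thetas)   # scipy order: (m, n, azimuth, polar)
         for n in range(order + 1) for m in range(-n, n + 1)]
    )
    coeffs, *_ = np.linalg.lstsq(basis, pressures, rcond=None)
    return coeffs   # (order+1)**2 coefficients, matched on the playback side
```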
NASA Astrophysics Data System (ADS)
Mayr, Andreas; Rutzinger, Martin; Bremer, Magnus; Geitner, Clemens
2016-06-01
In the Alps, as well as in other mountain regions, steep grassland is frequently affected by shallow erosion. Often, small landslides or snow movements displace the vegetation together with soil and/or unconsolidated material. This results in patches of bare earth surface within the grass-covered slope. Close-range and remote sensing techniques are promising for both mapping and monitoring these eroded areas. This is essential for a better geomorphological process understanding, for assessing past and recent developments, and for planning mitigation measures. Recent developments in image matching techniques make it feasible to produce high-resolution orthophotos and digital elevation models from terrestrial oblique images. In this paper we propose to delineate the boundary of eroded areas for selected scenes of a study area, using close-range photogrammetric data. Striving for an efficient, objective, and reproducible workflow for this task, we developed an approach for automated classification of the scenes into the classes grass and eroded. We propose an object-based image analysis (OBIA) workflow which consists of image segmentation and automated threshold selection for classification using the Excess Green Vegetation Index (ExG). The automated workflow is tested with ten different scenes. Compared to a manual classification, grass and eroded areas are classified with an overall accuracy between 90.7% and 95.5%, depending on the scene. The methods proved to be insensitive to differences in illumination of the scenes and in the greenness of the grass. The proposed workflow reduces user interaction and is transferable to other study areas. We conclude that close-range photogrammetry is a valuable low-cost tool for mapping this type of eroded area in the field with a high level of detail and quality. In the future, the output will be used as ground truth for an area-wide mapping of eroded areas in coarser-resolution aerial orthophotos acquired at the same time.
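The classification step lends itself to a compact pixel-wise illustration. The sketch below computes the Excess Green index (ExG = 2G - R - B) and thresholds it; Otsu's method stands in for the paper's automated threshold selection, and the OBIA segmentation stage is omitted, so this is a simplified stand-in rather than the authors' workflow.

import numpy as np
from skimage.filters import threshold_otsu

def classify_grass_eroded(rgb):
    """Classify an orthophoto into grass vs. eroded pixels using ExG.

    rgb: float array of shape (H, W, 3), scaled to [0, 1].
    Returns a boolean mask, True where the pixel is classified as grass.
    """
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2.0 * g - r - b          # Excess Green Vegetation Index
    t = threshold_otsu(exg)        # automated threshold (Otsu assumed here)
    return exg > t                 # high ExG -> green vegetation (grass)

On a scene where grass is markedly greener than bare earth, this mask approximates the grass/eroded split that the full object-based workflow refines per segment.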
Napping and the Selective Consolidation of Negative Aspects of Scenes
Payne, Jessica D.; Kensinger, Elizabeth A.; Wamsley, Erin; Spreng, R. Nathan; Alger, Sara; Gibler, Kyle; Schacter, Daniel L.; Stickgold, Robert
2018-01-01
After information is encoded into memory, it undergoes an offline period of consolidation that occurs optimally during sleep. The consolidation process not only solidifies memories, but also selectively preserves aspects of experience that are emotionally salient and relevant for future use. Here, we provide evidence that an afternoon nap is sufficient to trigger preferential memory for emotional information contained in complex scenes. Selective memory for negative emotional information was enhanced after a nap compared to wakefulness in two control conditions designed to carefully address interference and time-of-day confounds. Although prior evidence has connected negative emotional memory formation to rapid eye movement (REM) sleep physiology, we found that non-REM delta activity and the amount of slow wave sleep (SWS) in the nap were robustly related to the selective consolidation of negative information. These findings suggest that the mechanisms underlying memory consolidation benefits associated with napping and nighttime sleep are not always the same. Finally, we provide preliminary evidence that the magnitude of the emotional memory benefit conferred by sleep is equivalent following a nap and a full night of sleep, suggesting that selective emotional remembering can be economically achieved by taking a nap. PMID:25706830
Schettino, Antonio; Keil, Andreas; Porcu, Emanuele; Müller, Matthias M
2016-06-01
The rapid extraction of affective cues from the visual environment is crucial for flexible behavior. Previous studies have reported emotion-dependent amplitude modulations of two event-related potential (ERP) components - the N1 and EPN - reflecting sensory gain control mechanisms in extrastriate visual areas. However, it is unclear whether both components are selective electrophysiological markers of attentional orienting toward emotional material or are also influenced by physical features of the visual stimuli. To address this question, electrical brain activity was recorded from seventeen male participants while viewing original and bright versions of neutral and erotic pictures. Bright neutral scenes were rated as more pleasant compared to their original counterpart, whereas erotic scenes were judged more positively when presented in their original version. Classical and mass univariate ERP analysis showed larger N1 amplitude for original relative to bright erotic pictures, with no differences for original and bright neutral scenes. Conversely, the EPN was only modulated by picture content and not by brightness, substantiating the idea that this component is a unique electrophysiological marker of attention allocation toward emotional material. Complementary topographic analysis revealed the early selective expression of a centro-parietal positivity following the presentation of original erotic scenes only, reflecting the recruitment of neural networks associated with sustained attention and facilitated memory encoding for motivationally relevant material. Overall, these results indicate that neural networks subtending the extraction of emotional information are differentially recruited depending on low-level perceptual features, which ultimately influence affective evaluations. Copyright © 2016 Elsevier Inc. All rights reserved.
Natural scene logo recognition by joint boosting feature selection in salient regions
NASA Astrophysics Data System (ADS)
Fan, Wei; Sun, Jun; Naoi, Satoshi; Minagawa, Akihiro; Hotta, Yoshinobu
2011-01-01
Logos are considered valuable intellectual properties and a key component of the goodwill of a business. In this paper, we propose a natural scene logo recognition method which is segmentation-free and capable of processing images extremely rapidly while achieving high recognition rates. The classifiers for each logo are trained jointly, rather than independently. In this way, common features can be shared across multiple classes for better generalization. To deal with the large range of aspect ratios of different logos, a set of salient regions of interest (ROIs) is extracted to describe each class. We ensure that the selected ROIs are both individually informative and pairwise weakly dependent through a Class Conditional Entropy Maximization criterion. Experimental results on a large logo database demonstrate the effectiveness and efficiency of our proposed method.
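The ROI selection idea (individually informative, pairwise weakly dependent) can be approximated with a greedy search. The sketch below is only a simplified proxy for the paper's Class Conditional Entropy Maximization criterion: it uses marginal rather than class-conditional entropy, and the penalty weight and all names are assumptions.

import numpy as np

def select_rois(features, n_select, lam=0.5, bins=16):
    """Greedily pick ROIs that are informative but weakly dependent.

    features: (n_rois, n_samples) responses of candidate ROIs over training
    images. Scores each ROI by histogram entropy and penalizes absolute
    correlation with ROIs already chosen.
    """
    def entropy(x):
        p, _ = np.histogram(x, bins=bins)
        p = p / p.sum()
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    h = np.array([entropy(f) for f in features])
    chosen = [int(np.argmax(h))]
    while len(chosen) < n_select:
        best, best_score = None, -np.inf
        for i in range(len(features)):
            if i in chosen:
                continue
            dep = max(abs(np.corrcoef(features[i], features[j])[0, 1])
                      for j in chosen)
            score = h[i] - lam * dep   # informative, yet weakly dependent
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen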
Atilola, Olayinka; Olayiwola, Funmilayo
2012-09-01
Media depiction of sufferers of mental illness is widely viewed as a source of stigmatization, and studies have found stigmatizing depictions of mental illness in Nigerian films. With the recent boom in the Nigerian home video industry, there is a need to know how often Nigerians are exposed to films that contain scenes depicting mental illness and how much premium they place on such portrayals as reflecting reality. To assess the popularity of Nigerian home videos among Nigerian community dwellers and the frequency of their exposure to scenes depicting mental illness, a semi-structured questionnaire was designed to obtain socio-demographic data and to find out how often respondents see scenes depicting 'madness' in home videos, as well as their views about the accuracy of such depictions from the orthodox psychiatry point of view. Current home videos available in video rental shops were selected for viewing and content review. All 676 respondents had seen a Nigerian home video in the preceding 30 days: 528 (78%) reported scenes depicting 'mad persons'; 472 (70%) reported that the scenes they saw agreed with their own initial understanding of the cause and treatment of 'madness'. About 20% of the films depicted mental illness. The most commonly depicted cause was sorcery and enchantment by witches and wizards, while the most commonly depicted treatment was magical and spiritual healing by diviners and religious priests. Nigerian home video is a popular electronic medium in Nigeria, and scenes depicting mental illness are not uncommon. The industry could be harnessed for promoting mental health literacy.
The depiction of protective eyewear use in popular television programs.
Glazier, Robert; Slade, Martin; Mayer, Hylton
2011-04-01
Media portrayal of health related activities may influence health related behaviors in adult and pediatric populations. This study characterizes the depiction of protective eyewear use in the scripted television programs most viewed by the age group that sustains the largest proportion of eye injuries. Viewership ratings data were acquired to assemble a list of the 24 most-watched scripted network broadcast programs for the 13-year-old to 45-year-old age group. The six highest average viewership programs that met the exclusion criteria were selected for analysis. Review of 30 episodes revealed a total of 258 exposure scenes in which an individual was engaged in an activity requiring eye protection (mean, 8.3 exposure scenes per episode; median, 5 exposure scenes per episode). Overall, 66 (26%) of exposure scenes depicted the use of any eye protection, while only 32 (12%) of exposure scenes depicted the use of adequate eye protection. No incidences of eye injuries or infectious exposures were depicted within the exposure scenes in the study set. The depiction of adequate protective eyewear use during eye-risk activities is rare in network scripted broadcast programs. Healthcare professionals and health advocacy groups should continue to work to improve public education about eye injury risks and prevention; these efforts could include working with the television industry to improve the accuracy of the depiction of eye injuries and the proper protective eyewear used for prevention of injuries in scripted programming. Future studies are needed to examine the relationship between media depiction of eye protection use and viewer compliance rates.
Odeleye, Olubunmi; Ajuwon, Ademola J
2015-01-01
Young people in secondary schools, who are prone to engage in risky sexual behaviors, spend considerable time watching television (TV), which often presents sex scenes. The influence of exposure to sex scenes on TV (SSTV) has been little researched in Nigeria. This study was therefore designed to determine the perceived influence of exposure to SSTV on the sexual behavior of secondary school students in Ibadan North Local Government Area. A total of 489 randomly selected students were surveyed. The mean age of respondents was 14.1 ± 1.9 years, and 53.8% were female. About 91% had ever been exposed to sex scenes. The type of TV program from which most respondents reported exposure to sexual scenes was movies (86.9%). The majority reported exposure to all forms of SSTV from secondary storage devices. Students whose TV watching was not monitored had heavier exposure to SSTV compared with those who were monitored. About 56.3% of females and 26.5% of males affirmed that watching SSTV had affected their sexual behavior. The predictor of sex-related activities was exposure to heavy sex scenes. Peer education and school-based programs should include topics that teach young people how to evaluate the presentations of TV programs.
Application of LC and LCoS in Multispectral Polarized Scene Projector (MPSP)
NASA Astrophysics Data System (ADS)
Yu, Haiping; Guo, Lei; Wang, Shenggang; Lippert, Jack; Li, Le
2017-02-01
A Multispectral Polarized Scene Projector (MPSP) has been developed in the short-wave infrared (SWIR) regime for the test and evaluation (T&E) of spectro-polarimetric imaging sensors. This MPSP generates multispectral and hyperspectral video images (up to 200 Hz) at 512×512 spatial resolution, with active spatial, spectral, and polarization modulation and controlled bandwidth. It projects input SWIR radiant intensity scenes from stored memory with user-selectable wavelength and bandwidth, as well as polarization states (six different states) controllable at the pixel level. The spectral content is implemented by a tunable filter with variable bandpass built from liquid crystal (LC) material, together with one passive visible and one passive SWIR cholesteric liquid crystal (CLC) notch filter, and one switchable CLC notch filter. The core of the MPSP hardware is the liquid-crystal-on-silicon (LCoS) spatial light modulators (SLMs) used for intensity control and polarization modulation.
NASA Astrophysics Data System (ADS)
Wajs, Jaroslaw
2018-01-01
The paper presents satellite imagery from the active SENTINEL-1A and passive SENTINEL-2A/2B sensors and their application in the monitoring of mining areas, focused on detecting land changes. Multispectral scenes from SENTINEL-2A/2B have allowed for detecting land-cover changes near the region of interest (ROI), i.e. the Szczercow dumping site in the Belchatow open-cast lignite mine, central Poland, Europe. Scenes from the SENTINEL-1A/1B satellites have also been used in the research. Processing of the SLC signal enabled the creation of a return-intensity map in VV polarization. The obtained SAR scene was reclassified and shows a strong return signal from the dumping site and the open pit. This may be used in the detection and monitoring of changes occurring within the analysed engineering objects.
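The return-intensity step is compact enough to sketch. The fragment below converts a complex SLC array to a dB intensity map ready for reclassification; it deliberately omits the radiometric calibration, speckle filtering, and terrain correction that a real SENTINEL-1 workflow requires, so treat it as a minimal illustration.

import numpy as np

def slc_to_intensity_db(slc, eps=1e-10):
    """Convert a complex SLC scene to a backscatter intensity map in dB.

    slc: complex-valued array, e.g. a SENTINEL-1 VV channel.
    """
    intensity = np.abs(slc) ** 2              # |SLC|^2 = return intensity
    return 10.0 * np.log10(intensity + eps)   # log scale for reclassification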
ERIC Educational Resources Information Center
Oh, Hwamee; Leung, Hoi-Chung
2010-01-01
In this fMRI study, we investigated prefrontal cortex (PFC) and visual association regions during selective information processing. We recorded behavioral responses and neural activity during a delayed recognition task with a cue presented during the delay period. A specific cue ("Face" or "Scene") was used to indicate which one of the two…
Adaptive fusion of infrared and visible images in dynamic scene
NASA Astrophysics Data System (ADS)
Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi
2011-11-01
Multiple-modality sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet, and HSV, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges for visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast and adaptive feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. First, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are considered potential target locations. After that, the region surrounding the target area is segmented as the background region. Image fusion is then applied locally on the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed on these local fusion images as the fusion feature set. The variance ratio, a measure based on Linear Discriminant Analysis (LDA), is employed to sort the feature set, and the most discriminative feature is selected for the whole-image fusion. As the feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments were conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that our proposed method achieves competitive performance compared with other fusion algorithms at a relatively low computational cost.
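The hotspot-highlighting step can be illustrated with a minimal fuzzy c-means implementation on pixel intensities. The two-cluster setup, the fuzzifier m = 2, and the iteration count are assumptions for illustration, not the authors' configuration.

import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means on flattened pixel intensities.

    x: (n,) array, e.g. infrared pixel values.
    Returns cluster centres and the (c, n) membership matrix.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                      # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)            # weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))             # inverse-distance rule
        u /= u.sum(axis=0)                             # renormalise
    return centers, u

The cluster with the larger centre then marks candidate hotspot pixels in the infrared image.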
Correlation signatures of wet soils and snows. [algorithm development and computer programming
NASA Technical Reports Server (NTRS)
Phillips, M. R.
1972-01-01
Interpretation, analysis, and development of algorithms have provided the necessary computational programming tools for soil data processing, data handling, and analysis. The algorithms developed thus far are adequate and have proven successful for several preliminary and fundamental applications such as software interfacing capabilities, probability distributions, grey-level print plotting, contour plotting, isometric data displays, joint probability distributions, boundary mapping, channel registration, and ground scene classification. A description of an Earth Resources Flight Data Processor (ERFDP), which handles and processes earth resources data under a user's control, is provided.
The Virtual Solar Observatory: Progress and Diversions
NASA Astrophysics Data System (ADS)
Gurman, Joseph B.; Bogart, R. S.; Amezcua, A.; Hill, Frank; Oien, Niles; Davey, Alisdair R.; Hourcle, Joseph; Mansky, E.; Spencer, Jennifer L.
2017-08-01
The Virtual Solar Observatory (VSO) is a known and useful method for identifying and accessing solar physics data online. We review current "behind the scenes" work on the VSO, including the addition of new data providers and the restoration of access to data sets for which service was temporarily interrupted. We also report on the effect on software development efforts when government IT "security" initiatives impinge on finite resources. As always, we invite SPD members to identify data sets, services, and interfaces they would like to see implemented in the VSO.
Cohen-Khait, Ruth; Schreiber, Gideon
2016-01-01
Protein–protein interactions occur via well-defined interfaces on the protein surface. Whereas the location of homologous interfaces is conserved, their composition varies, suggesting that multiple solutions may support high-affinity binding. In this study, we examined the plasticity of the interface of TEM1 β-lactamase with its protein inhibitor BLIP by low-stringency selection of a random TEM1 library using yeast surface display. Our results show that most interfacial residues could be mutated without a loss in binding affinity, protein stability, or enzymatic activity, suggesting plasticity in the interface composition supporting high-affinity binding. Interestingly, many of the selected mutations promoted faster association. Further selection for faster binders was achieved by drastically decreasing the library–ligand incubation time to 30 s. Preequilibrium selection as suggested here is a novel methodology for specifically selecting faster-associating protein complexes. PMID:27956635
NASA Astrophysics Data System (ADS)
Hodgkin, Van A.
2015-05-01
Most mass-produced, commercially available and fielded military reflective imaging systems operate across broad swaths of the visible, near infrared (NIR), and shortwave infrared (SWIR) wavebands without any spectral selectivity within those wavebands. In applications that employ these systems, it is not uncommon to be imaging a scene in which the image contrasts between the objects of interest, i.e., the targets, and the objects of little or no interest, i.e., the backgrounds, are sufficiently low to make target discrimination difficult or uncertain. This can occur even when the spectral distribution of the target and background reflectivity across the given waveband differ significantly from each other, because the fundamental components of broadband image contrast are the spectral integrals of the target and background signatures. Spectral integration by the detectors tends to smooth out any differences. Hyperspectral imaging is one approach to preserving, and thus highlighting, spectral differences across the scene, even when the waveband integrated signatures would be about the same, but it is an expensive, complex, noncompact, and untimely solution. This paper documents a study of how the capability to selectively customize the spectral width and center wavelength with a hypothetical tunable fore-optic filter would allow a broadband reflective imaging sensor to optimize image contrast as a function of scene content and ambient illumination.
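The paper's central idea, tuning centre wavelength and bandwidth to maximize in-band contrast, can be made concrete with a brute-force scan. In the sketch below the Michelson-style contrast measure and all names are assumptions, since the study's exact metric is not reproduced here.

import numpy as np

def best_band(wl, target_refl, backgd_refl, illum, widths):
    """Scan centre wavelength and bandwidth for maximum target contrast.

    wl: wavelength grid; target_refl / backgd_refl: spectral reflectances;
    illum: ambient spectral illumination (all arrays share wl's shape).
    Returns the (centre, width, contrast) whose in-band integrated signals
    differ the most, mimicking a tunable fore-optic filter.
    """
    best = (None, None, -1.0)
    for w in widths:
        for c in wl:
            band = (wl >= c - w / 2) & (wl <= c + w / 2)
            if not band.any():
                continue
            t = np.trapz(target_refl[band] * illum[band], wl[band])
            b = np.trapz(backgd_refl[band] * illum[band], wl[band])
            contrast = abs(t - b) / max(t + b, 1e-12)   # Michelson-style
            if contrast > best[2]:
                best = (c, w, contrast)
    return best

The broadband case corresponds to integrating over the whole waveband at once, which is exactly what washes out the spectral differences the abstract describes.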
Selection of optimal spectral sensitivity functions for color filter arrays.
Parmar, Manu; Reeves, Stanley J
2010-12-01
A color image meant for human consumption can be appropriately displayed only if at least three distinct color channels are present. Typical digital cameras acquire three-color images with only one sensor. A color filter array (CFA) is placed on the sensor such that only one color is sampled at a particular spatial location. This sparsely sampled signal is then reconstructed to form a color image with information about all three colors at each location. In this paper, we show that the wavelength sensitivity functions of the CFA color filters affect both the color reproduction ability and the spatial reconstruction quality of recovered images. We present a method to select perceptually optimal color filter sensitivity functions based upon a unified spatial-chromatic sampling framework. A cost function independent of particular scenes is defined that expresses the error between a scene viewed by the human visual system and the reconstructed image that represents the scene. A constrained minimization of the cost function is used to obtain optimal values of color-filter sensitivity functions for several periodic CFAs. The sensitivity functions are shown to perform better than typical RGB and CMY color filters in terms of both the s-CIELAB ∆E error metric and a qualitative assessment.
Video System for Viewing From a Remote or Windowless Cockpit
NASA Technical Reports Server (NTRS)
Banerjee, Amamath
2009-01-01
A system of electronic hardware and software synthesizes, in nearly real time, an image of a portion of a scene surveyed by as many as eight video cameras aimed, in different directions, at portions of the scene. This is a prototype of systems that would enable a pilot to view the scene outside a remote or windowless cockpit. The outputs of the cameras are digitized. Direct memory addressing is used to store the data of a few captured images in sequence, and the sequence is repeated in cycles. Cylindrical warping is used in merging adjacent images at their borders to construct a mosaic image of the scene. The mosaic-image data are written to a memory block from which they can be rendered on a head-mounted display (HMD) device. A subsystem in the HMD device tracks the direction of gaze of the wearer, providing data that are used to select, for display, the portion of the mosaic image corresponding to the direction of gaze. The basic functionality of the system has been demonstrated by mounting the cameras on the roof of a van and steering the van by use of the images presented on the HMD device.
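The cylindrical warping used to merge adjacent camera images follows a standard backward mapping, sketched below. The focal length is assumed known from camera calibration, and the formulas are the textbook cylindrical projection rather than the system's actual code.

import numpy as np

def cylindrical_warp_coords(h, w, f):
    """Backward-mapping grid for cylindrical warping of a pinhole image.

    For each output pixel, returns the source coordinates to sample, using
    x_src = f*tan(theta) and y_src = y / cos(theta) about the image centre;
    f is the focal length in pixels.
    """
    yc, xc = (h - 1) / 2.0, (w - 1) / 2.0
    theta = (np.arange(w) - xc) / f        # cylinder angle per output column
    x_src = np.broadcast_to(f * np.tan(theta) + xc, (h, w))
    ys = (np.arange(h) - yc)[:, None]
    y_src = ys / np.cos(theta)[None, :] + yc
    return x_src, y_src

The returned grids can be fed to an interpolator such as scipy.ndimage.map_coordinates to resample each camera image onto the cylinder before blending adjacent images at their borders.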
Miskovic, Vladimir; Martinovic, Jasna; Wieser, Matthias J; Petro, Nathan M; Bradley, Margaret M; Keil, Andreas
2015-03-01
Emotionally arousing scenes readily capture visual attention, prompting amplified neural activity in sensory regions of the brain. The physical stimulus features and related information channels in the human visual system that contribute to this modulation, however, are not known. Here, we manipulated low-level physical parameters of complex scenes varying in hedonic valence and emotional arousal in order to target the relative contributions of luminance based versus chromatic visual channels to emotional perception. Stimulus-evoked brain electrical activity was measured during picture viewing and used to quantify neural responses sensitive to lower-tier visual cortical involvement (steady-state visual evoked potentials) as well as the late positive potential, reflecting a more distributed cortical event. Results showed that the enhancement for emotional content was stimulus-selective when examining the steady-state segments of the evoked visual potentials. Response amplification was present only for low spatial frequency, grayscale stimuli, and not for high spatial frequency, red/green stimuli. In contrast, the late positive potential was modulated by emotion regardless of the scene's physical properties. Our findings are discussed in relation to neurophysiologically plausible constraints operating at distinct stages of the cortical processing stream. Copyright © 2015 Elsevier B.V. All rights reserved.
Adaptive controller for volumetric display of neuroimaging studies
NASA Astrophysics Data System (ADS)
Bleiberg, Ben; Senseney, Justin; Caban, Jesus
2014-03-01
Volumetric display of medical images is an increasingly relevant method for examining an imaging acquisition as the prevalence of thin-slice imaging increases in clinical studies. Current mouse and keyboard implementations for volumetric control provide neither the sensitivity nor specificity required to manipulate a volumetric display for efficient reading in a clinical setting. Solutions to efficient volumetric manipulation provide more sensitivity by removing the binary nature of actions controlled by keyboard clicks, but specificity is lost because a single action may change display in several directions. When specificity is then further addressed by re-implementing hardware binary functions through the introduction of mode control, the result is a cumbersome interface that fails to achieve the revolutionary benefit required for adoption of a new technology. We address the specificity versus sensitivity problem of volumetric interfaces by providing adaptive positional awareness to the volumetric control device by manipulating communication between hardware driver and existing software methods for volumetric display of medical images. This creates a tethered effect for volumetric display, providing a smooth interface that improves on existing hardware approaches to volumetric scene manipulation.
Detection of Nuclear Sources by UAV Teleoperation Using a Visuo-Haptic Augmented Reality Interface.
Aleotti, Jacopo; Micconi, Giorgio; Caselli, Stefano; Benassi, Giacomo; Zambelli, Nicola; Bettelli, Manuele; Zappettini, Andrea
2017-09-29
A visuo-haptic augmented reality (VHAR) interface is presented enabling an operator to teleoperate an unmanned aerial vehicle (UAV) equipped with a custom CdZnTe-based spectroscopic gamma-ray detector in outdoor environments. The task is to localize nuclear radiation sources, whose location is unknown to the user, without the close exposure of the operator. The developed detector also enables identification of the localized nuclear sources. The aim of the VHAR interface is to increase the situation awareness of the operator. The user teleoperates the UAV using a 3DOF haptic device that provides an attractive force feedback around the location of the most intense detected radiation source. Moreover, a fixed camera on the ground observes the environment where the UAV is flying. A 3D augmented reality scene is displayed on a computer screen accessible to the operator. Multiple types of graphical overlays are shown, including sensor data acquired by the nuclear radiation detector, a virtual cursor that tracks the UAV and geographical information, such as buildings. Experiments performed in a real environment are reported using an intense nuclear source.
NASA Technical Reports Server (NTRS)
Myers, W. L.
1981-01-01
The LANDSAT-geographic information system (GIS) interface must summarize the results of the LANDSAT classification over the same cells that serve as geographic referencing units for the GIS, and output these summaries on a cell-by-cell basis in a form that is readable by the input routines of the GIS. The ZONAL interface for cell-oriented systems consists of two primary programs. The PIXCEL program scans the grid of cells and outputs a channel of pixels. Each pixel contains not the reflectance values but the identifier of the cell in which the center of the pixel is located. This file of pixelized cells along with the results of a pixel-by-pixel classification of the scene produced by the LANDSAT analysis system are input to the CELSUM program which then outputs a cell-by-cell summary formatted according to the requirements of the host GIS. Cross-correlation of the LANDSAT layer with the other layers in the data base is accomplished with the analysis and display facilities of the GIS.
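The two-step design (pixelized cell identifiers, then per-cell summaries) maps naturally onto array operations. The following fragment is a modern stand-in for the CELSUM step; the function name, array shapes, and the use of pixel counts as the summary are assumptions.

import numpy as np

def cell_class_summary(cell_ids, classes, n_classes):
    """Per-cell summary of a pixel classification, in the spirit of CELSUM.

    cell_ids: (H, W) int raster giving, for each pixel, the GIS cell whose
    area contains the pixel centre (the PIXCEL step). classes: (H, W) int
    raster of per-pixel class labels from the LANDSAT analysis. Returns an
    array of shape (n_cells, n_classes) of pixel counts per cell and class.
    """
    ids, cls = cell_ids.ravel(), classes.ravel()
    n_cells = ids.max() + 1
    # Encode each (cell, class) pair as one index and count occurrences.
    counts = np.bincount(ids * n_classes + cls,
                         minlength=n_cells * n_classes)
    return counts.reshape(n_cells, n_classes)

Each row of the result is the cell-by-cell summary that would be formatted to the host GIS's input requirements.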
Enzyme-based logic gates and circuits-analytical applications and interfacing with electronics.
Katz, Evgeny; Poghossian, Arshak; Schöning, Michael J
2017-01-01
The paper is an overview of enzyme-based logic gates and circuits, with specific examples of Boolean AND and OR gates, and concatenated logic gates composed of multi-step enzyme-biocatalyzed reactions. Noise formation in the biocatalytic reactions and its reduction by adding a "filter" system, converting a convex response function to a sigmoid one, are discussed. Although enzyme-based logic gates are primarily considered components of future biomolecular computing systems, their biosensing applications are promising for immediate practical use. Analytical use of enzyme logic systems in biomedical and forensic applications is discussed and exemplified with the logic analysis of biomarkers of various injuries, e.g., liver injury, and with the analysis of biomarkers characteristic of different ethnicities found in blood samples at a crime scene. Interfacing of enzyme logic systems with modified electrodes and semiconductor devices is discussed, with particular attention to interfaces functionalized with signal-responsive materials. Future perspectives in the design of biomolecular logic systems and their applications are discussed in the conclusion.
NASA Technical Reports Server (NTRS)
Everett, J. R. (Principal Investigator)
1983-01-01
Improved delineation of known oil and gas fields in southern Ontario and a spectacularly high amount of structural information on the Owl Creek, Wyoming scene were obtained from analysis of TM data. The use of hue, saturation, and value image processing techniques on a Death Valley, California scene permitted direct comparison of TM processed imagery with existing 1:250,000 scale geological maps of the area and revealed small outcrops of Tertiary volcanic material overlying Paleozoic sections. Analysis of TM data over Lawton, Oklahoma suggests that the reducing chemical environment associated with hydrocarbon seepage changes ferric iron to soluble ferrous iron, allowing it to be leached. Results of the band selection algorithm show a surprising consistency, with the 1,4,5 combination selected as optimal in most cases.
NASA Astrophysics Data System (ADS)
Wang, DeLiang; Terman, David
1995-01-01
A novel class of locally excitatory, globally inhibitory oscillator networks (LEGION) is proposed and investigated analytically and by computer simulation. The model of each oscillator corresponds to a standard relaxation oscillator with two time scales. The network exhibits a mechanism of selective gating, whereby an oscillator jumping up to its active phase rapidly recruits the oscillators stimulated by the same pattern, while preventing other oscillators from jumping up. We show analytically that with the selective gating mechanism the network rapidly achieves both synchronization within blocks of oscillators that are stimulated by connected regions and desynchronization between different blocks. Computer simulations demonstrate LEGION's promising ability for segmenting multiple input patterns in real time. This model lays a physical foundation for the oscillatory correlation theory of feature binding, and may provide an effective computational framework for scene segmentation and figure/ground segregation.
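The behaviour of a single LEGION unit can be reproduced with a few lines of numerical integration. The sketch below uses the commonly cited Wang-Terman relaxation-oscillator form with two time scales; the parameter values and the simple Euler scheme are assumptions for illustration.

import numpy as np

def relaxation_oscillator(I=0.5, eps=0.02, gamma=6.0, beta=0.1,
                          dt=0.01, steps=20000):
    """Integrate one two-time-scale relaxation oscillator.

    Uses x' = 3x - x^3 + 2 - y + I (fast excitatory variable) and
    y' = eps * (gamma * (1 + tanh(x / beta)) - y) (slow recovery variable).
    """
    x, y = -2.0, 0.0
    xs = np.empty(steps)
    for k in range(steps):
        dx = 3.0 * x - x ** 3 + 2.0 - y + I
        dy = eps * (gamma * (1.0 + np.tanh(x / beta)) - y)
        x, y = x + dt * dx, y + dt * dy
        xs[k] = x
    return xs  # oscillates for I > 0; relaxes to rest for I < 0

A stimulated oscillator (I > 0) cycles between active and silent phases; the network's selective gating then synchronizes units driven by the same pattern while keeping different blocks desynchronized.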
Interactive imagery and colour in paired-associate learning.
Wilton, Richard N
2006-01-01
In four experiments participants were instructed to imagine scenes that described either an animal interacting with a coloured object or scenes in which the animal and coloured object were independent of each other. Participants were then given the name of the animal and required to select the name of the object and its colour. The results showed that the classic interactive imagery effect was greater for the selection of the name of the object than it was for colour. In Experiments 2, 3, and 4, additional measures were taken which suggest that the effect for colour is dependent upon the retrieval of other features of the object (e.g., its form). Thus it is argued that there is no primary interactive imagery effect for colour. The results were predicted by a version of the shared information hypothesis. The implications of the results for alternative theories are also considered.
Guilty by his fibers: suspect confession versus textile fibers reconstructed simulation.
Suzuki, Shinichi; Higashikawa, Yoshiyasu; Sugita, Ritsuko; Suzuki, Yasuhiro
2009-08-10
In one particular criminal case involving murder and theft, the arrested suspect admitted to the theft but denied responsibility for the murder of the inhabitant of the crime scene. In his confession, the suspect stated that he found the victim's body when he broke into the premises to commit theft. For this report, the actual crime scene was reconstructed in accordance with the confession obtained during the interrogation, and the suspect's behavior was simulated accordingly. The number of characteristic fibers retrieved from the simulated crime scene was compared with the number retrieved from the actual crime scene. By comparing the distribution and number of characteristic fibers collected in the simulation experiments and in the actual investigation, the reliability of the suspect's confession was evaluated. The characteristic dark yellowish-green woolen fibers of the garment that the suspect wore when he entered the crime scene were selected as the target fiber in the reconstruction. The experimental simulations were conducted four times. The distributed target fibers were retrieved using the same type of adhesive tape and the same protocol, by the same police officers who retrieved the fibers at the actual crime scene. The fibers were identified both through morphological observation and by color comparison of their ultraviolet-visible transmittance spectra measured with a microspectrophotometer. The fibers collected with the adhesive tape were counted for each area for comparison with those collected in the actual crime scene investigation. The numbers of fibers found at each area of the body, mattress, and blankets were compared between the simulated experiments and the actual investigation, and a significant difference was found. In particular, the numbers of fibers found near the victim's head differed significantly. As a result, the suspect's confession was not considered reliable, as our simulations demonstrated that stronger contact with the victim must have occurred. During the trial, traditional forensic traces such as DNA and fingerprints were silent regarding the suspect's account; in contrast, the fiber evidence was highly significant in explaining the suspect's behavior at the crime scene. The fiber results and simulations were presented in court, and the man was subsequently found guilty not only of theft and trespassing but also of murder.
Database improvements for motor vehicle/bicycle crash analysis
Lusk, Anne C; Asgarzadeh, Morteza; Farvid, Maryam S
2015-01-01
Background: Bicycling is healthy but needs to be safer for more people to bike. Police crash templates are designed for reporting crashes between motor vehicles, not between vehicles and bicycles. Where written or drawn bicycle-crash-scene details exist, they are not entered into spreadsheets. Objective: To assess which bicycle-crash-scene data might be added to spreadsheets for analysis. Methods: Police crash templates from 50 states were analysed. Reports for 3350 motor vehicle/bicycle crashes (2011) were obtained for the New York City area and 300 cases were selected (with drawings and on roads with sharrows, bike lanes, cycle tracks, and no bike provisions). Crashes were redrawn, and new bicycle-crash-scene details were coded and entered into the existing spreadsheet. The association between severity of injuries and bicycle-crash-scene codes was evaluated using multiple logistic regression. Results: Police templates only consistently include pedal-cyclist and helmet. Bicycle-crash-scene coded variables for templates could include: 4 bicycle environments, 18 vehicle impact points (including opened doors and mirrors), 4 bicycle impact points, motor vehicle/bicycle crash patterns, in/out of the bicycle environment, and bike-relevant motor vehicle categories. A test of including these variables suggested that, with bicyclists who had minor injuries as the control group, bicyclists on roads with bike lanes riding outside the lane had a lower likelihood of severe injuries (OR 0.40, 95% CI 0.16 to 0.98) compared with bicyclists riding on roads without bicycle facilities. Conclusions: Police templates should include additional bicycle-crash-scene codes for entry into spreadsheets. Crash analysis, including with big data, could then be conducted on bicycle environments, motor vehicle potential impact points/doors/mirrors, bicycle potential impact points, motor vehicle characteristics, location, and injury. PMID:25835304
Anticipatory scene representation in preschool children's recall and recognition memory.
Kreindel, Erica; Intraub, Helene
2017-09-01
Behavioral and neuroscience research on boundary extension (false memory beyond the edges of a view of a scene) has provided new insights into the constructive nature of scene representation, and motivates questions about development. Early research with children (as young as 6-7 years) was consistent with boundary extension, but relied on an analysis of spatial errors in drawings which are open to alternative explanations (e.g. drawing ability). Experiment 1 replicated and extended prior drawing results with 4-5-year-olds and adults. In Experiment 2, a new, forced-choice immediate recognition memory test was implemented with the same children. On each trial, a card (photograph of a simple scene) was immediately replaced by a test card (identical view and either a closer or more wide-angle view) and participants indicated which one matched the original view. Error patterns supported boundary extension; identical photographs were more frequently rejected when the closer view was the original view, than vice versa. This asymmetry was not attributable to a selection bias (guessing tasks; Experiments 3-5). In Experiment 4, working memory load was increased by presenting more expansive views of more complex scenes. Again, children exhibited boundary extension, but now adults did not, unless stimulus duration was reduced to 5 s (limiting time to implement strategies; Experiment 5). We propose that like adults, children interpret photographs as views of places in the world; they extrapolate the anticipated continuation of the scene beyond the view and misattribute it to having been seen. Developmental differences in source attribution decision processes provide an explanation for the age-related differences observed. © 2016 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
van der Linde, Ian; Rajashekar, Umesh; Cormack, Lawrence K.; Bovik, Alan C.
2005-03-01
Recent years have seen a resurgent interest in eye movements during natural scene viewing. Aspects of eye movements that are driven by low-level image properties are of particular interest due to their applicability to biologically motivated artificial vision and surveillance systems. In this paper, we report an experiment in which we recorded observers' eye movements while they viewed calibrated greyscale images of natural scenes. Immediately after viewing each image, observers were shown a test patch and asked to indicate if they thought it was part of the image they had just seen. The test patch was either randomly selected from a different image from the same database or, unbeknownst to the observer, selected from either the first or last location fixated on the image just viewed. We find that several low-level image properties differed significantly relative to the observers' ability to successfully designate each patch. We also find that the differences between patch statistics for first and last fixations are small compared to the differences between hit and miss responses. The goal of the paper was to, in a non-cognitive natural setting, measure the image properties that facilitate visual memory, additionally observing the role that temporal location (first or last fixation) of the test patch played. We propose that a memorability map of a complex natural scene may be constructed to represent the low-level memorability of local regions in a similar fashion to the familiar saliency map, which records bottom-up fixation attractors.
Robot Teleoperation and Perception Assistance with a Virtual Holographic Display
NASA Technical Reports Server (NTRS)
Goddard, Charles O.
2012-01-01
Teleoperation of robots in space from Earth has historically been difficult. Speed-of-light delays make direct joystick-type control infeasible, so it is desirable to command a robot in a very high-level fashion. However, in order to provide such an interface, knowledge of what objects are in the robot's environment and how they can be interacted with is required. In addition, many tasks that would be desirable to perform are highly spatial, requiring some form of six-degree-of-freedom input. These two issues can be combined, allowing the user to assist the robot's perception by identifying the locations of objects in the scene. The zSpace system, a virtual holographic environment, provides a virtual three-dimensional space superimposed over real space, with a stylus tracked for position and rotation inside it. Using this system, a possible interface for this sort of robot control is proposed.
Interactive MPEG-4 low-bit-rate speech/audio transmission over the Internet
NASA Astrophysics Data System (ADS)
Liu, Fang; Kim, JongWon; Kuo, C.-C. Jay
1999-11-01
The recently developed MPEG-4 technology enables the coding and transmission of natural and synthetic audio-visual data in the form of objects. In an effort to extend the object-based functionality of MPEG-4 to real-time Internet applications, architectural prototypes of the multiplex layer and transport layer tailored for transmission of MPEG-4 data over IP are under debate in the Internet Engineering Task Force (IETF) and the MPEG-4 Systems Ad Hoc group. In this paper, we present an architecture for an interactive MPEG-4 speech/audio transmission system over the Internet. It utilizes a framework of Real Time Streaming Protocol (RTSP) over Real-time Transport Protocol (RTP) to provide controlled, on-demand delivery of real-time speech/audio data. Based on a client-server model, a pair of low-bit-rate bit streams (real-time speech/audio and pre-encoded speech/audio) are multiplexed and transmitted via a single RTP channel to the receiver. The MPEG-4 Scene Description (SD) and Object Descriptor (OD) bit streams are securely sent through the RTSP control channel. Upon reception, an initial MPEG-4 audio-visual scene is constructed after de-multiplexing, decoding of bit streams, and scene composition. A receiver is allowed to manipulate the initial audio-visual scene presentation locally, or to interactively arrange scene changes by sending requests to the server. A server may also choose to update the client with new streams and a list of contents for user selection.
Overt attention in natural scenes: objects dominate features.
Stoll, Josef; Thrun, Michael; Nuthmann, Antje; Einhäuser, Wolfgang
2015-02-01
Whether overt attention in natural scenes is guided by object content or by low-level stimulus features has become a matter of intense debate. Experimental evidence seemed to indicate that once object locations in a scene are known, salience models provide little extra explanatory power. This approach has recently been criticized for using inadequate models of early salience, and indeed, state-of-the-art salience models outperform trivial object-based models that assume a uniform distribution of fixations on objects. Here we propose to use object-based models that take a preferred viewing location (PVL) close to the centre of objects into account. In experiment 1, we demonstrate that, when including this comparably subtle modification, object-based models are again on par with state-of-the-art salience models in predicting fixations in natural scenes. One possible interpretation of these results is that objects rather than early salience dominate attentional guidance. In this view, early-salience models predict fixations through the correlation of their features with object locations. To test this hypothesis directly, in two additional experiments we reduced low-level salience in image areas of high object content. For these modified stimuli, the object-based model predicted fixations significantly better than early salience. This finding held in an object-naming task (experiment 2) and a free-viewing task (experiment 3). These results provide further evidence for object-based fixation selection, and by inference object-based attentional guidance, in natural scenes. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Ratings for emotion film clips.
Gabert-Quillen, Crystal A; Bartolini, Ellen E; Abravanel, Benjamin T; Sanislow, Charles A
2015-09-01
Film clips are widely utilized to elicit emotion in a variety of research studies. Normative ratings for scenes selected for these purposes support the idea that selected clips correspond to the intended target emotion, but studies reporting normative ratings are limited. Using an ethnically diverse sample of college undergraduates, selected clips were rated for intensity, discreteness, valence, and arousal. Variables hypothesized to affect the perception of stimuli (i.e., gender, race-ethnicity, and familiarity) were also examined. Our analyses generally indicated that males reacted strongly to positively valenced film clips, whereas females reacted more strongly to negatively valenced film clips. Caucasian participants tended to react more strongly to the film clips, and we found some variation by race-ethnicity across target emotions. Finally, familiarity with the films tended to produce higher ratings for positively valenced film clips, and lower ratings for negatively valenced film clips. These findings provide normative ratings for a useful set of film clips for the study of emotion, and they underscore factors to be considered in research that utilizes scenes from film for emotion elicitation.
NASA Technical Reports Server (NTRS)
Lewis, Steven J.; Palacios, David M.
2013-01-01
This software can track multiple moving objects within a video stream simultaneously, use visual features to aid in the tracking, and initiate tracks based on object detection in a subregion. A simple programmatic interface allows plugging into larger image chain modeling suites. It extracts unique visual features for aid in tracking and later analysis, and includes sub-functionality for extracting visual features about an object identified within an image frame. Tracker Toolkit utilizes a feature extraction algorithm to tag each object with metadata features about its size, shape, color, and movement. Its functionality is independent of the scale of objects within a scene. The only assumption made on the tracked objects is that they move. There are no constraints on size within the scene, shape, or type of movement. The Tracker Toolkit is also capable of following an arbitrary number of objects in the same scene, identifying and propagating the track of each object from frame to frame. Target objects may be specified for tracking beforehand, or may be dynamically discovered within a tripwire region. Initialization of the Tracker Toolkit algorithm includes two steps: Initializing the data structures for tracked target objects, including targets preselected for tracking; and initializing the tripwire region. If no tripwire region is desired, this step is skipped. The tripwire region is an area within the frames that is always checked for new objects, and all new objects discovered within the region will be tracked until lost (by leaving the frame, stopping, or blending in to the background).
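Track initiation from a tripwire region can be shown at a toy level. The class below is hypothetical, not the Tracker Toolkit's interface; detections are reduced to (x, y) centroids, and association is a plain nearest-neighbour gate, which is a simplification of whatever the toolkit actually does.

import numpy as np

class TripwireTracker:
    """Toy multi-object tracker with tripwire-based track initiation.

    New detections falling inside the tripwire box start tracks; existing
    tracks are propagated to the nearest detection within a gating radius,
    else dropped as lost (by leaving the frame or stopping).
    """
    def __init__(self, tripwire, gate=20.0):
        self.tripwire = tripwire          # (xmin, ymin, xmax, ymax)
        self.gate, self.tracks, self.next_id = gate, {}, 0

    def step(self, detections):
        detections = [np.asarray(d, dtype=float) for d in detections]
        unused = set(range(len(detections)))
        for tid in list(self.tracks):
            if not unused:
                del self.tracks[tid]      # no detection left: track lost
                continue
            j = min(unused,
                    key=lambda i: np.linalg.norm(detections[i] - self.tracks[tid]))
            if np.linalg.norm(detections[j] - self.tracks[tid]) <= self.gate:
                self.tracks[tid] = detections[j]   # propagate frame to frame
                unused.discard(j)
            else:
                del self.tracks[tid]
        xmin, ymin, xmax, ymax = self.tripwire
        for i in unused:                  # initiate tracks inside the tripwire
            x, y = detections[i]
            if xmin <= x <= xmax and ymin <= y <= ymax:
                self.tracks[self.next_id] = detections[i]
                self.next_id += 1
        return dict(self.tracks)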
High resolution satellite observations of mesoscale oceanography in the Tasman Sea, 1978 - 1979
NASA Technical Reports Server (NTRS)
Nilsson, C. S.; Andrews, J. C.; Hornibrook, M.; Latham, A. R.; Speechley, G. C.; Scully-Power, P. (Principal Investigator)
1982-01-01
Of the nearly 1000 standard infrared photographic images received, 273 images were on computer-compatible tape. It proved necessary to digitally enhance the scene contrast to cover only a select few degrees K over the photographic grey scale appropriate to the scene-specific range of sea surface temperature (SST). Some 178 images were so enhanced. Comparison with sea truth shows that SST, as seen by satellite, provides a good guide to the ocean currents and eddies off East Australia, both in summer and winter. This is in contrast, particularly in summer, to SST mapped by surface survey, which usually lacks the necessary spatial resolution.
Interactive Scene Analysis Module - A sensor-database fusion system for telerobotic environments
NASA Technical Reports Server (NTRS)
Cooper, Eric G.; Vazquez, Sixto L.; Goode, Plesent W.
1992-01-01
Accomplishing a task with telerobotics typically involves a combination of operator control/supervision and a 'script' of preprogrammed commands. These commands usually assume that the locations of various objects in the task space conform to some internal representation (database) of that task space. The ability to quickly and accurately verify the task environment against the internal database would improve the robustness of these preprogrammed commands. In addition, the on-line initialization and maintenance of a task-space database is difficult for operators using Cartesian coordinates alone. This paper describes the Interactive Scene Analysis Module (ISAM), developed to provide task-space database initialization and verification utilizing 3-D graphic overlay modelling, video imaging, and laser-radar-based range imaging. Through the fusion of task-space database information and image sensor data, a verifiable task-space model is generated, providing location and orientation data for objects in a task space. This paper also describes applications of ISAM in the Intelligent Systems Research Laboratory (ISRL) at NASA Langley Research Center, and discusses its performance relative to representation accuracy and operator interface efficiency.
NASA Astrophysics Data System (ADS)
Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin
2006-02-01
A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments versus objects are reviewed. Samples of both types of virtual authoring, the actual CAVE environment and a sculpture, are shown. A series of software tools is incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. Traditional methods of controlling navigation through virtual environments include gloves, HUDs, and 3D mouse devices. By integrating a wireless network that includes both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated, allowing wireless devices to be used instead, including PDAs, smart phones, TabletPCs, portable gaming consoles, and PocketPCs.
Le, Thang M; Borghi, John A; Kujawa, Autumn J; Klein, Daniel N; Leung, Hoi-Chung
2017-01-01
The present study examined the impacts of major depressive disorder (MDD) on visual and prefrontal cortical activity as well as their connectivity during visual working memory updating and related them to the core clinical features of the disorder. Impairment in working memory updating is typically associated with the retention of irrelevant negative information which can lead to persistent depressive mood and abnormal affect. However, performance deficits have been observed in MDD on tasks involving little or no demand on emotion processing, suggesting dysfunctions may also occur at the more basic level of information processing. Yet, it is unclear how various regions in the visual working memory circuit contribute to behavioral changes in MDD. We acquired functional magnetic resonance imaging data from 18 unmedicated participants with MDD and 21 age-matched healthy controls (CTL) while they performed a visual delayed recognition task with neutral faces and scenes as task stimuli. Selective working memory updating was manipulated by inserting a cue in the delay period to indicate which one or both of the two memorized stimuli (a face and a scene) would remain relevant for the recognition test. Our results revealed several key findings. Relative to the CTL group, the MDD group showed weaker postcue activations in visual association areas during selective maintenance of face and scene working memory. Across the MDD subjects, greater rumination and depressive symptoms were associated with more persistent activation and connectivity related to no-longer-relevant task information. Classification of postcue spatial activation patterns of the scene-related areas was also less consistent in the MDD subjects compared to the healthy controls. Such abnormalities appeared to result from a lack of updating effects in postcue functional connectivity between prefrontal and scene-related areas in the MDD group. In sum, disrupted working memory updating in MDD was revealed by alterations in activity patterns of the visual association areas, their connectivity with the prefrontal cortex, and their relationship with core clinical characteristics. These results highlight the role of information updating deficits in the cognitive control and symptomatology of depression.
Saliency-Guided Detection of Unknown Objects in RGB-D Indoor Scenes.
Bao, Jiatong; Jia, Yunyi; Cheng, Yu; Xi, Ning
2015-08-27
This paper studies the problem of detecting unknown objects within indoor environments in an active and natural manner. A visual saliency scheme utilizing both color and depth cues is proposed to direct the machine system's attention to unknown objects at salient positions in a 3D scene. The 3D points at the salient positions are selected as seed points for generating object hypotheses using the 3D shape. We perform multi-class labeling on a Markov random field (MRF) over the voxels of the 3D scene, combining cues from the object hypotheses and 3D shape. The results from the MRF are further refined by merging labeled objects that are spatially connected and have highly correlated color histograms. Quantitative and qualitative evaluations on two benchmark RGB-D datasets illustrate the advantages of the proposed method. Experiments on object detection and manipulation performed with a mobile manipulator validate its effectiveness and practicability in robotic applications.
Some of the thousand words a picture is worth.
Mandler, J M; Johnson, N S
1976-09-01
The effects of real-world schemata on recognition of complex pictures were studied. Two kinds of pictures were used: pictures of objects forming real-world scenes and unorganized collections of the same objects. The recognition test employed distractors that varied four types of information: inventory, spatial location, descriptive, and spatial composition. Results emphasized the selective nature of schemata, since superior recognition of one kind of information was offset by loss of another. Spatial location information was better recognized in real-world scenes, and spatial composition information was better recognized in unorganized scenes. Organized and unorganized pictures did not differ with respect to inventory and descriptive information. The longer the pictures were studied, the longer subjects took to recognize them: reaction time for hits, misses, and false alarms increased dramatically as presentation time increased from 5 to 60 sec. It was suggested that detection of a difference in a distractor terminated search, but that when no difference was detected, an exhaustive search of the available information took place.
Photorealistic scene presentation: virtual video camera
NASA Astrophysics Data System (ADS)
Johnson, Michael J.; Rogers, Joel Clark W.
1994-07-01
This paper presents a low-cost alternative for presenting photorealistic imagery during the final approach, which is often a peak-workload phase of flight. The method capitalizes on a priori information. It accesses out-the-window "snapshots" from a mass storage device, selecting the snapshots that deliver the best match for a given aircraft position and runway scene. It then warps the snapshots to align them more closely with the current viewpoint. The individual snapshots, stored as highly compressed images, are decompressed and interpolated to produce a "clear-day" video stream. The paper shows how this warping, when combined with other compression methods, saves considerable amounts of storage; compression factors from 1000 to 3000 were achieved. Thus, a CD-ROM today can store reference snapshots for thousands of different runways. Dynamic scene elements not present in the snapshot database can be inserted as separate symbolic or pictorial images. When underpinned by an appropriate suite of sensor technologies, the methods discussed indicate that an all-weather virtual video camera is possible.
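The snapshot-warping step can be illustrated with a planar perspective warp, a reasonable stand-in for an approximately planar runway scene; the paper's actual warping method is not detailed in this abstract, and the four point correspondences below are assumed inputs:

```python
import cv2
import numpy as np

def warp_snapshot(snapshot, src_pts, dst_pts):
    """Warp a stored runway snapshot toward the current viewpoint.

    src_pts/dst_pts: four corresponding image points (e.g., runway
    corners) in the snapshot and in the desired current view. A planar
    homography is only an illustrative stand-in for the paper's
    warping step.
    """
    H, _ = cv2.findHomography(np.float32(src_pts), np.float32(dst_pts))
    h, w = snapshot.shape[:2]
    return cv2.warpPerspective(snapshot, H, (w, h))
```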
fMRI responses to pictures of mutilation and contamination.
Schienle, Anne; Schäfer, Axel; Hermann, Andrea; Walter, Bertram; Stark, Rudolf; Vaitl, Dieter
2006-01-30
Findings from several functional magnetic resonance imaging (fMRI) studies implicate the existence of a distinct neural disgust substrate, whereas others support the idea of distributed and integrative brain systems involved in emotional processing. In the present fMRI experiment 12 healthy females viewed pictures from four emotion categories. Two categories were disgust-relevant and depicted contamination or mutilation. The other scenes showed attacks (fear) or were affectively neutral. The two types of disgust elicitors received comparable ratings for disgust, fear and arousal. Both were associated with activation of the occipitotemporal cortex, the amygdala, and the orbitofrontal cortex; insula activity was nonsignificant in the two disgust conditions. Mutilation scenes induced greater inferior parietal activity than contamination scenes, which might mirror their greater capacity to capture attention. Our results are in disagreement with the idea of selective disgust processing at the insula. They point to a network of brain regions involved in the decoding of stimulus salience and the regulation of attention.
Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega
2015-04-14
This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they were emitted from scene points. Each scene point is rendered individually, resulting in more realistic and accurate 3D visualization compared to other 3D display technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gestures tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup were evaluated in a user study with test subjects. The results of the study revealed a high user preference for freehand interaction with the light field display, as well as the relatively low cognitive demand of this technique. Our results also revealed some limitations of the proposed setup and adjustments to be addressed in future work.
Sato, Naoyuki; Yamaguchi, Yoko
2009-06-01
The human cognitive map is known to be hierarchically organized, consisting of a set of perceptually clustered landmarks. Patient studies have demonstrated that these cognitive maps are maintained by the hippocampus, while the underlying neural dynamics are still poorly understood. The authors have shown that the neural dynamic "theta phase precession" observed in the rodent hippocampus may be capable of forming hierarchical cognitive maps in humans. In the model, a visual input sequence consisting of object and scene features in the central and peripheral visual fields, respectively, results in the formation of a hierarchical cognitive map for object-place associations. Surprisingly, it is possible for such a complex memory structure to be formed in a few seconds. In this paper, we evaluate the memory retrieval of object-place associations in the hierarchical network formed by theta phase precession. The results show that multiple object-place associations can be retrieved with the initial cue of a scene input. Importantly, owing to the wide-to-narrow unidirectional connections among scene units, the spatial area for object-place retrieval can be controlled by the spatial area of the initial cue input. These results indicate that hierarchical cognitive maps have computational advantages for spatial-area-selective retrieval of multiple object-place associations. Theta phase precession dynamics is suggested as a fundamental neural mechanism of the human cognitive map.
Rapid Assessment of Agility for Conceptual Design Synthesis
NASA Technical Reports Server (NTRS)
Biezad, Daniel J.
1996-01-01
This project consists of designing and implementing a real-time graphical interface for a workstation-based flight simulator. It is capable of creating a three-dimensional out-the-window scene of the aircraft's flying environment, with extensive information about the aircraft's state displayed in the form of a heads-up display (HUD) overlay. The code, written in the C programming language, makes calls to Silicon Graphics' Graphics Library (GL) to draw the graphics primitives. Included in this report is a detailed description of the capabilities of the code, including graphical examples, as well as a printout of the code itself.
Illicit drug use in the flemish nightlife scene between 2003 and 2009.
Van Havere, Tina; Lammertyn, Jan; Vanderplasschen, Wouter; Bellis, Mark; Rosiers, Johan; Broekaert, Eric
2012-01-01
Given the importance of party people as innovators and early adopters in the diffusion of substance use, and given the lack of longitudinal scope in studies of the nightlife scene, we explored changes in illicit drug use among young people participating in the nightlife scene in Flanders. A survey among party people selected at dance events, rock festivals and clubs was held in the summer of 2003 and repeated in 2005, 2007 and 2009. In total, 2,812 respondents filled in a questionnaire on the use of cannabis, ecstasy, cocaine, amphetamines, GHB and ketamine. The results of the multiple logistic regression analyses show that in the group of frequent pub visitors, the predicted probability of cannabis use increased over time, while the gap in drug use between dance music lovers and non-lovers of dance music narrowed. For cocaine use during the last year, an increase was found related to the housing situation of respondents (living alone or with parents). While the odds of using ecstasy decreased over the years, the odds of using GHB increased. We conclude that monitoring emerging trends, which can be quickly observed in the nightlife scene, provides meaningful information for anticipating possible trends. Copyright © 2012 S. Karger AG, Basel.
ViCoMo: visual context modeling for scene understanding in video surveillance
NASA Astrophysics Data System (ADS)
Creusen, Ivo M.; Javanbakhti, Solmaz; Loomans, Marijn J. H.; Hazelhoff, Lykele B.; Roubtsova, Nadejda; Zinger, Svitlana; de With, Peter H. N.
2013-10-01
The use of contextual information can significantly aid scene understanding of surveillance video. Just detecting people and tracking them does not provide sufficient information to detect situations that require operator attention. We propose a proof-of-concept system that uses several sources of contextual information to improve scene understanding in surveillance video. The focus is on two scenarios that represent common video surveillance situations, parking lot surveillance and crowd monitoring. In the first scenario, a pan-tilt-zoom (PTZ) camera tracking system is developed for parking lot surveillance. Context is provided by the traffic sign recognition system to localize regular and handicapped parking spot signs as well as license plates. The PTZ algorithm has the ability to selectively detect and track persons based on scene context. In the second scenario, a group analysis algorithm is introduced to detect groups of people. Contextual information is provided by traffic sign recognition and region labeling algorithms and exploited for behavior understanding. In both scenarios, decision engines are used to interpret and classify the output of the subsystems and if necessary raise operator alerts. We show that using context information enables the automated analysis of complicated scenarios that were previously not possible using conventional moving object classification techniques.
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2006-01-01
A computer program automatically builds large, full-resolution mosaics of multispectral images of Earth landmasses from images acquired by Landsat 7, complete with matching of colors and blending between adjacent scenes. While the code has been used extensively for Landsat, it could also be used for other data sources. A single mosaic of as many as 8,000 scenes, represented by more than 5 terabytes of data and the largest set produced in this work, demonstrated what the code could do to provide global coverage. The program first statistically analyzes input images to determine areas of coverage and data-value distributions. It then transforms the input images from their original universal transverse Mercator coordinates to other geographical coordinates, with scaling. It applies a first-order polynomial brightness correction to each band in each scene. It uses a data-mask image for selecting data and blending of input scenes. Under control by a user, the program can be made to operate on small parts of the output image space, with check-point and restart capabilities. The program runs on SGI IRIX computers. It is capable of parallel processing using shared-memory code, large memories, and tens of central processing units. It can retrieve input data and store output data at locations remote from the processors on which it is executed.
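Two of the steps described above lend themselves to a compact illustration: the per-band first-order (gain/offset) brightness correction and the data-mask-weighted blending of overlapping scenes. The sketch below is only a schematic of those two operations, not the SGI IRIX production code; in the real program the gain and offset would come from its statistical analysis of the input scenes.

```python
import numpy as np

def correct_and_blend(mosaic, weight, scene, mask, gain, offset):
    """Accumulate one scene into a mosaic with brightness correction.

    Schematic of two steps from the abstract: a first-order (gain and
    offset) brightness correction per band, and mask-weighted blending
    between overlapping scenes. Arrays are float, single band; `mask`
    is a 0..1 feathered data mask for the scene.
    """
    corrected = gain * scene + offset      # first-order polynomial correction
    mosaic += mask * corrected             # weighted accumulation
    weight += mask
    return mosaic, weight

# Final mosaic: np.where(weight > 0, mosaic / np.maximum(weight, 1e-6), 0)
```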
Unattended real-time re-establishment of visibility in high dynamic range video and stills
NASA Astrophysics Data System (ADS)
Abidi, B.
2014-05-01
We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions, where bright sunlight creates mixed-contrast effects, i.e., heavy shadows and washouts. These effects result in high dynamic range scenes, where illuminance can vary from a few lux to six-figure values. When using regular monitors and cameras, such a wide span of illumination can only be visualized if the actual range of values is compressed, leading to saturated and/or dark noisy areas and a loss of information in those areas. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because all the information is not present in the original data; active intervention in the acquisition process is required. A software package capable of integrating multiple types of COTS and custom cameras, ranging from Unmanned Aerial System (UAS) data links to digital single-lens reflex (DSLR) cameras, is described. Hardware and software are integrated via a novel smart data acquisition algorithm, which communicates to the camera the parameters that would maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene where information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce automatic, real-time, full-motion corrected video. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills; enhances visible, night vision, and infrared data; and applies successfully to nighttime and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive and easy-to-use graphical interfaces. The ensuing increase in visibility in surveillance video and intelligence imagery will expand the performance and timely decision making of the human analyst, as well as that of unmanned systems performing automatic data exploitation, such as target detection and identification.
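The abstract does not spell out the fusion mechanism; one widely used, exposure-time-free choice for merging a bracketed burst is Mertens exposure fusion, available in OpenCV. A minimal sketch follows (the file names are placeholders):

```python
import cv2

# Fuse a bracketed burst (8-bit BGR frames at different exposures) into
# one frame with detail in both shadows and highlights. Mertens fusion
# needs no exposure times and no tone mapping, which suits unattended
# use; it stands in here for the paper's unspecified fusion mechanism.
frames = [cv2.imread(p) for p in ("under.jpg", "mid.jpg", "over.jpg")]
fusion = cv2.createMergeMertens().process(frames)   # float32, roughly [0, 1]
cv2.imwrite("fused.png", (fusion * 255).clip(0, 255).astype("uint8"))
```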
LandEx - Fast, FOSS-Based Application for Query and Retrieval of Land Cover Patterns
NASA Astrophysics Data System (ADS)
Netzel, P.; Stepinski, T.
2012-12-01
The amount of satellite-based spatial data is continuously increasing, making the development of efficient data-search tools a priority. The bulk of existing research on searching satellite-gathered data concentrates on images and is based on the concept of Content-Based Image Retrieval (CBIR); however, available solutions are not efficient and robust enough to be put to use as deployable web-based search tools. Here we report on the development of a practical, deployable tool that searches classified imagery rather than raw images. LandEx (Landscape Explorer) is a GeoWeb-based tool for Content-Based Pattern Retrieval (CBPR) within the National Land Cover Dataset 2006 (NLCD2006). The USGS-developed NLCD2006 is derived from Landsat multispectral images; it covers the entire conterminous U.S. at a resolution of 30 meters/pixel and depicts 16 land cover classes. The size of NLCD2006 is about 10 Gpixels (161,000 x 100,000 pixels). LandEx is a multi-tier GeoWeb application based on Open Source Software; its main components are GeoExt/OpenLayers (user interface), GeoServer (OGC WMS, WCS and WPS server), and GRASS (calculation engine). LandEx performs search using a query-by-example approach: the user selects a reference scene (exhibiting a chosen pattern of land cover classes) and the tool produces, in real time, a map indicating the degree of similarity between the reference pattern and all local patterns across the U.S. The scene pattern is encapsulated by a 2D histogram of classes and sizes of single-class clumps, and pattern similarity is based on the notion of mutual information. The resulting similarity map can be viewed and navigated in a web browser, or downloaded as a GeoTIFF file for more in-depth analysis. LandEx is available at http://sil.uc.edu
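The scene signature just described, a 2D histogram of land-cover classes against single-class clump sizes, is straightforward to reproduce. The sketch below builds such a histogram with NumPy/SciPy; since the abstract only says similarity is "based on the notion of mutual information" without details, the comparison measure itself is not reproduced, and a standard divergence between normalized histograms could stand in for experimentation:

```python
import numpy as np
from scipy import ndimage

def pattern_histogram(land_cover, n_classes=16,
                      size_bins=(1, 10, 100, 1000, 10000)):
    """2D histogram of land-cover classes vs. single-class clump sizes.

    Sketch of the scene signature from the abstract: for each class,
    label its connected clumps, then bin clump areas (in pixels) into
    logarithmic size bins. Bin edges are illustrative assumptions.
    """
    hist = np.zeros((n_classes, len(size_bins)))
    for cls in range(n_classes):
        labels, n = ndimage.label(land_cover == cls)
        if n == 0:
            continue
        areas = np.bincount(labels.ravel())[1:]   # size of each clump
        hist[cls] = np.histogram(areas, bins=list(size_bins) + [np.inf])[0]
    return hist / max(hist.sum(), 1)              # normalized signature
```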
Animating Preservice Teachers' Noticing
ERIC Educational Resources Information Center
de Araujo, Zandra; Amador, Julie; Estapa, Anne; Weston, Tracy; Aming-Attai, Rachael; Kosko, Karl W.
2015-01-01
The incorporation of animation in mathematics teacher education courses is one method for transforming practices and promoting practice-based education. Animation can be used as an approximation of practice that engages preservice teachers (PSTs) in creating classroom scenes in which they select characters, regulate movement, and construct…
Oberholzer, Nicole; Kaserer, Alexander; Albrecht, Roland; Seifert, Burkhardt; Tissi, Mario; Spahn, Donat R; Maurer, Konrad; Stein, Philipp
2017-07-01
Pain is frequently encountered in the prehospital setting and needs to be treated quickly and sufficiently. However, incidences of insufficient analgesia after prehospital treatment by emergency medical services are reported to be as high as 43%. The purpose of this analysis was to identify modifiable factors in a specific emergency patient cohort that influence the pain suffered by patients when admitted to the hospital. For that purpose, this retrospective observational study included all patients with significant pain treated by a Swiss physician-staffed helicopter emergency service between April and October 2011 with the following characteristics to limit selection bias: Age > 15 years, numerical rating scale (NRS) for pain documented at the scene and at hospital admission, NRS > 3 at the scene, initial Glasgow coma scale > 12, and National Advisory Committee for Aeronautics score < VI. Univariate and multivariable logistic regression analyses were performed to evaluate patient and mission characteristics of helicopter emergency service associated with insufficient pain management. A total of 778 patients were included in the analysis. Insufficient pain management (NRS > 3 at hospital admission) was identified in 298 patients (38%). Factors associated with insufficient pain management were higher National Advisory Committee for Aeronautics scores, high NRS at the scene, nontrauma patients, no analgesic administration, and treatment by a female physician. In 16% (128 patients), despite ongoing pain, no analgesics were administered. Factors associated with this untreated persisting pain were short time at the scene (below 10 minutes), secondary missions of helicopter emergency service, moderate pain at the scene, and nontrauma patients. Sufficient management of severe pain is significantly better if ketamine is combined with an opioid (65%), compared to a ketamine or opioid monotherapy (46%, P = .007). In the studied specific Swiss cohort, nontrauma patients, patients on secondary missions, patients treated only for a short time at the scene before transport, patients who receive no analgesic, and treatment by a female physician may be risk factors for insufficient pain management. Patients suffering pain at the scene (NRS > 3) should receive an analgesic whenever possible. Patients with severe pain at the scene (NRS ≥ 8) may benefit from the combination of ketamine with an opioid. The finding about sex differences concerning analgesic administration is intriguing and possibly worthy of further study.
Perceptual load in different regions of the visual scene and its relevance for driving.
Marciano, Hadas; Yeshurun, Yaffa
2015-06-01
The aim of this study was to better understand the role played by perceptual load, at both central and peripheral regions of the visual scene, in driving safety. Attention is a crucial factor in driving safety, and previous laboratory studies suggest that perceptual load is an important factor determining the efficiency of attentional selectivity. Yet, the effects of perceptual load on driving were never studied systematically. Using a driving simulator, we orthogonally manipulated the load levels at the road (central load) and its sides (peripheral load), while occasionally introducing critical events at one of these regions. Perceptual load affected driving performance at both regions of the visual scene. Critically, the effect was different for central versus peripheral load: Whereas load levels on the road mainly affected driving speed, load levels on its sides mainly affected the ability to detect critical events initiating from the roadsides. Moreover, higher levels of peripheral load impaired performance but mainly with low levels of central load, replicating findings with simple letter stimuli. Perceptual load has a considerable effect on driving, but the nature of this effect depends on the region of the visual scene at which the load is introduced. Given the observed importance of perceptual load, authors of future studies of driving safety should take it into account. Specifically, these findings suggest that our understanding of factors that may be relevant for driving safety would benefit from studying these factors under different levels of load at different regions of the visual scene. © 2014, Human Factors and Ergonomics Society.
Kim, Sungho
2015-01-01
Sea-based infrared search and track (IRST) is important for homeland security by detecting missiles and asymmetric boats. This paper proposes a novel scheme to interpret various infrared scenes by classifying the infrared background types and detecting the coastal regions in omni-directional images. The background type or region-selective small infrared target detector should be deployed to maximize the detection rate and to minimize the number of false alarms. A spatial filter-based small target detector is suitable for identifying stationary incoming targets in remote sea areas with sky only. Many false detections can occur if there is an image sector containing a coastal region, due to ground clutter and the difficulty in finding true targets using the same spatial filter-based detector. A temporal filter-based detector was used to handle these problems. Therefore, the scene type and coastal region information is critical to the success of IRST in real-world applications. In this paper, the infrared scene type was determined using the relationships between the sensor line-of-sight (LOS) and a horizontal line in an image. The proposed coastal region detector can be activated if the background type of the probing sector is determined to be a coastal region. Coastal regions can be detected by fusing the region map and curve map. The experimental results on real infrared images highlight the feasibility of the proposed sea-based scene interpretation. In addition, the effects of the proposed scheme were analyzed further by applying region-adaptive small target detection. PMID:26404308
Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel
2016-01-01
When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global "priority map" that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
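The two model ingredients named above, target-specific modulation and divisive normalization, fit in a few lines. This sketch assumes precomputed shape-feature maps and illustrative pooling parameters; it mirrors the model's logic rather than reproducing the published implementation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def priority_map(feature_maps, target_weights, pool_size=9, sigma=1e-3):
    """Target-modulated, divisively normalized priority map.

    feature_maps: F x H x W responses of shape-selective units;
    target_weights: length-F top-down gains for the search target.
    Dividing by locally pooled activity keeps isolated salient
    features from monopolizing the map, as the model requires.
    """
    modulated = np.tensordot(target_weights, feature_maps, axes=1)  # H x W
    pooled = uniform_filter(np.abs(modulated), size=pool_size)
    priority = modulated / (pooled + sigma)
    locus = np.unravel_index(np.argmax(priority), priority.shape)
    return priority, locus   # locus = next attended location
```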
Selectivity and Longevity of Peripheral-Nerve and Machine Interfaces: A Review
Ghafoor, Usman; Kim, Sohee; Hong, Keum-Shik
2017-01-01
For those individuals with upper-extremity amputation, a daily normal living activity is no longer possible or it requires additional effort and time. With the aim of restoring their sensory and motor functions, theoretical and technological investigations have been carried out in the field of neuroprosthetic systems. For transmission of sensory feedback, several interfacing modalities including indirect (non-invasive), direct-to-peripheral-nerve (invasive), and cortical stimulation have been applied. Peripheral nerve interfaces demonstrate an edge over the cortical interfaces due to the sensitivity in attaining cortical brain signals. The peripheral nerve interfaces are highly dependent on interface designs and are required to be biocompatible with the nerves to achieve prolonged stability and longevity. Another criterion is the selection of nerves that allows minimal invasiveness and damages as well as high selectivity for a large number of nerve fascicles. In this paper, we review the nerve-machine interface modalities noted above with more focus on peripheral nerve interfaces, which are responsible for provision of sensory feedback. The invasive interfaces for recording and stimulation of electro-neurographic signals include intra-fascicular, regenerative-type interfaces that provide multiple contact channels to a group of axons inside the nerve and the extra-neural-cuff-type interfaces that enable interaction with many axons around the periphery of the nerve. Section Current Prosthetic Technology summarizes the advancements made to date in the field of neuroprosthetics toward the achievement of a bidirectional nerve-machine interface with more focus on sensory feedback. In the Discussion section, the authors propose a hybrid interface technique for achieving better selectivity and long-term stability using the available nerve interfacing techniques. PMID:29163122
NASA Technical Reports Server (NTRS)
Sweet, Barbara T.; Kaiser, Mary K.
2013-01-01
Although current-technology simulator visual systems can achieve extremely realistic levels, they do not completely replicate the experience of a pilot sitting in the cockpit looking at the outside world. Some differences in experience are due to visual artifacts, or perceptual features that would not be present in a naturally viewed scene; others are due to features that are missing from the simulated scene. In this paper, these differences will be defined and discussed. The significance of these differences will be examined as a function of several particular operational tasks. A framework to facilitate the choice of visual system characteristics based on operational task requirements will be proposed.
Creating and Sustaining Online Professional Learning Communities. Technology, Education--Connections
ERIC Educational Resources Information Center
Falk, Joni K., Ed.; Drayton, Brian, Ed.
2009-01-01
This volume presents the work of trailblazing researchers and developers of electronic communities for professional learning. It illuminates the essential work behind the scenes in building successful online communities and scaffolding site interactions, including content selection, creation and management, administrative structures, tools and…
Text Detection and Translation from Natural Scenes
2001-06-01
There are no explicit tags around Chinese words, so a module for Chinese word segmentation is included in the system. This segmentor uses a word-frequency list to make segmentation decisions. We tested the EBMT-based method using 50 randomly selected signs from our database, assuming perfect sign
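The abstract names only a word-frequency list as the basis for segmentation decisions; the classic baseline of that kind is greedy forward maximum matching against the known-word list. The sketch below is that baseline, offered as an illustration rather than the report's actual algorithm:

```python
def segment(sentence, freq, max_len=4):
    """Greedy forward maximum-matching word segmentation.

    `freq` maps known Chinese words to corpus frequencies. At each
    position, take the longest dictionary match, falling back to a
    single character. Illustrative baseline, not the report's code.
    """
    words, i = [], 0
    while i < len(sentence):
        for j in range(min(len(sentence), i + max_len), i, -1):
            if sentence[i:j] in freq or j == i + 1:
                words.append(sentence[i:j])
                i = j
                break
    return words
```

A fuller frequency-based segmentor would also compare alternative segmentations by their word frequencies instead of always taking the longest match, which is presumably where the frequency list enters.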
Cortical systems mediating visual attention to both objects and spatial locations
Shomstein, Sarah; Behrmann, Marlene
2006-01-01
Natural visual scenes consist of many objects occupying a variety of spatial locations. Given that the plethora of information cannot be processed simultaneously, the multiplicity of inputs compete for representation. Using event-related functional MRI, we show that attention, the mechanism by which a subset of the input is selected, is mediated by the posterior parietal cortex (PPC). Of particular interest is that PPC activity is differentially sensitive to the object-based properties of the input, with enhanced activation for those locations bound by an attended object. Of great interest too is the ensuing modulation of activation in early cortical regions, reflected as differences in the temporal profile of the blood oxygenation level-dependent (BOLD) response for within-object versus between-object locations. These findings indicate that object-based selection results from an object-sensitive reorienting signal issued by the PPC. The dynamic circuit between the PPC and earlier sensory regions then enables observers to attend preferentially to objects of interest in complex scenes. PMID:16840559
Regional information guidance system based on hypermedia concept
NASA Astrophysics Data System (ADS)
Matoba, Hiroshi; Hara, Yoshinori; Kasahara, Yutako
1990-08-01
A regional information guidance system has been developed on an image workstation. Its two main features are a hypermedia data structure and a friendly visual interface realized by a full-color frame memory system. Because the hypermedia data structure manages regional information such as maps, pictures and explanations of points of interest, users can retrieve this information item by item, following links as their interests change. For example, users can retrieve the explanation of a picture through the link between pictures and text explanations. Users can also traverse from one document to another by using keywords as cross-reference indices. The second feature is the use of a full-color, high-resolution, wide-space frame memory for visual interface design. This frame memory system enables real-time operation on image data and natural scene representation. The system also provides a halftone function that enables fade-in/out presentations. These fade-in/out functions, used when displaying and erasing menu and image data, make the visual interface easy on the eyes. The system we have developed is a typical example of a multimedia application. We expect the image workstation to play an important role as a platform for multimedia applications.
Imperatore, Pasquale; Iodice, Antonio; Riccio, Daniele
2017-12-27
A general, approximate perturbation method, able to provide closed-form expressions of scattering from a layered structure with an arbitrary number of rough interfaces, has been recently developed. Such a method provides a unique tool for the characterization of radar response patterns of natural rough multilayers. In order to show that, here, for the first time in a journal paper, we describe the application of the developed perturbation theory to fractal interfaces; we then employ the perturbative method solution to analyze the scattering from real-world layered structures of practical interest in remote sensing applications. We focus on the dependence of normalized radar cross section on geometrical and physical properties of the considered scenarios, and we choose two classes of natural stratifications: wet paleosoil covered by a low-loss dry sand layer and a sea-ice layer above water with dry snow cover. Results are in accordance with the experimental evidence available in the literature for the low-loss dry sand layer, and they may provide useful indications about the actual ability of remote sensing instruments to perform sub-surface sensing for different sensor and scene parameters.
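The abstract applies the perturbative solution to fractal interfaces but gives no formulas; for experimentation, a self-affine (fBm-like) interface profile can be synthesized spectrally. The sketch below generates a 1D profile whose spectrum follows the power law implied by a chosen Hurst exponent; all parameter values are illustrative assumptions, not the paper's parameterization:

```python
import numpy as np

def fbm_interface(n=1024, dx=0.01, hurst=0.7, amplitude=0.05, seed=0):
    """Synthesize a 1D fractal (fBm-like) interface profile.

    Spectral synthesis: random phases with amplitude spectrum
    ~ k^-(hurst + 0.5) yield a self-affine profile with the given
    Hurst exponent. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n, d=dx)
    spectrum = np.zeros(k.size, dtype=complex)
    spectrum[1:] = (k[1:] ** -(hurst + 0.5)
                    * np.exp(2j * np.pi * rng.random(k.size - 1)))
    profile = np.fft.irfft(spectrum, n=n)
    return amplitude * profile / np.abs(profile).max()
```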
Magnon Mode Selective Spin Transport in Compensated Ferrimagnets.
Cramer, Joel; Guo, Er-Jia; Geprägs, Stephan; Kehlberger, Andreas; Ivanov, Yurii P; Ganzhorn, Kathrin; Della Coletta, Francesco; Althammer, Matthias; Huebl, Hans; Gross, Rudolf; Kosel, Jürgen; Kläui, Mathias; Goennenwein, Sebastian T B
2017-06-14
We investigate the generation of magnonic thermal spin currents and their mode selective spin transport across interfaces in insulating, compensated ferrimagnet/normal metal bilayer systems. The spin Seebeck effect signal exhibits a nonmonotonic temperature dependence with two sign changes of the detected voltage signals. Using different ferrimagnetic garnets, we demonstrate the universality of the observed complex temperature dependence of the spin Seebeck effect. To understand its origin, we systematically vary the interface between the ferrimagnetic garnet and the metallic layer, and by using different metal layers we establish that interface effects play a dominating role. They do not only modify the magnitude of the spin Seebeck effect signal but in particular also alter its temperature dependence. By varying the temperature, we can select the dominating magnon mode and we analyze our results to reveal the mode selective interface transmission probabilities for different magnon modes and interfaces. The comparison of selected systems reveals semiquantitative details of the interfacial coupling depending on the materials involved, supported by the obtained field dependence of the signal.
Improvement of design of a surgical interface using an eye tracking device.
Erol Barkana, Duygun; Açık, Alper; Duru, Dilek Goksel; Duru, Adil Deniz
2014-05-07
Surgical interfaces are used to help surgeons interpret and quantify patient information, and to present an integrated workflow in which all available data are combined to enable optimal treatments. Human factors research provides a systematic approach to designing user interfaces with safety, accuracy, satisfaction and comfort. One human factors method, the user-centered design approach, is used here to develop a surgical interface for kidney tumor cryoablation, and an eye tracking device is used to obtain the best configuration of the developed surgical interface. The surgical interface has been developed considering the four phases of the user-centered design approach: analysis, design, implementation and deployment. Possible configurations of the surgical interface, comprising various combinations of menu-based command controls, visual displays of multi-modal medical images, 2D and 3D models of the surgical environment, graphical or tabulated information, visual alerts, etc., have been developed. Experiments on a simulated tumor cryoablation task have been performed with surgeons to evaluate the proposed surgical interface. Fixation durations and the number of fixations at informative regions of the surgical interface have been analyzed, and these data were used to modify the surgical interface. Eye movement data showed that participants concentrated their attention on informative regions more when the number of displayed Computed Tomography (CT) images was reduced. Additionally, the time required for participants to complete the kidney tumor cryoablation task decreased with the reduced number of CT images. Furthermore, the fixation durations obtained after the revision of the surgical interface are very close to those observed in visual search and natural scene perception studies, suggesting more efficient and comfortable interaction with the surgical interface. The National Aeronautics and Space Administration Task Load Index (NASA-TLX) and Short Post-Assessment Situational Awareness (SPASA) questionnaire results showed that the overall mental workload of surgeons related to the surgical interface was low, as intended, and that their overall situational awareness scores were considerably high. This preliminary study highlights the improvement of a surgical interface using eye tracking technology to obtain its best configuration. The results presented here reveal that a visual surgical interface designed according to eye movement characteristics may lead to improved usability.
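The gaze analysis described above, fixation durations and counts within informative regions, can be sketched compactly. The data layout below is a hypothetical simplification; commercial eye trackers export richer formats:

```python
import numpy as np

def aoi_dwell_times(fixations, aois):
    """Sum fixation durations inside areas of interest (AOIs).

    fixations: iterable of (x, y, duration_ms); aois: dict mapping a
    region name to its (x0, y0, x1, y1) screen rectangle. Hypothetical
    data layout for illustration.
    """
    dwell = {name: 0.0 for name in aois}
    counts = {name: 0 for name in aois}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[name] += dur
                counts[name] += 1
    return dwell, counts   # total dwell time and number of fixations
```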
Scene recognition following locomotion around a scene.
Motes, Michael A; Finlay, Cory A; Kozhevnikov, Maria
2006-01-01
Effects of locomotion on scene-recognition reaction time (RT) and accuracy were studied. In experiment 1, observers memorized an 11-object scene and made scene-recognition judgments on subsequently presented scenes from the encoded view or from different views (i.e., scenes were rotated or observers moved around the scene, in both cases from 40 degrees to 360 degrees). In experiment 2, observers viewed different 5-object scenes on each trial and made scene-recognition judgments from the encoded view or after moving around the scene, from 36 degrees to 180 degrees. Across experiments, scene-recognition RT increased (and in experiment 2 accuracy decreased) with angular distance between encoded and judged views, regardless of how the viewpoint changes occurred. The findings raise questions about the conditions in which locomotion produces spatially updated representations of scenes.
Kanda, Hideyuki; Okamura, Tomonori; Turin, Tanvir Chowdhury; Hayakawa, Takehito; Kadowaki, Takashi; Ueshima, Hirotsugu
2006-06-01
Japanese serial television dramas are becoming very popular overseas, particularly in other Asian countries. Exposure to smoking scenes in movies and television dramas is known to trigger the initiation of habitual smoking in young people, so smoking scenes in Japanese dramas may affect the smoking behavior of many young Asians. We examined smoking scenes and smoking-related items in serial television dramas targeting young audiences in Japan during the same season in two consecutive years. Fourteen television dramas targeting young audiences broadcast between July and September in 2001 and 2002 were analyzed. A total of 136 h 42 min of television programs was divided into unit scenes of 3 min (a total of 2734 unit scenes). All unit scenes were reviewed for smoking scenes and smoking-related items. Of the 2734 3-min unit scenes, 205 (7.5%) were actual smoking scenes and 387 (14.2%) depicted smoking environments through the presence of smoking-related items, such as ashtrays. In 185 unit scenes (90.2% of total smoking scenes), actors were shown smoking; actresses were shown smoking less frequently (9.8% of total smoking scenes). Smoking characters were in the 20-49 age group in 193 unit scenes (94.1% of total smoking scenes). In 96 unit scenes (46.8% of total smoking scenes), at least one non-smoker was present. The smoking locations were mainly indoors, including offices, restaurants and homes (122 unit scenes, 59.6%). The most common smoking-related items shown were ashtrays (45.5% of smoking-item-related scenes) and cigarettes (30.2% of smoking-item-related scenes). Only 3 unit scenes (0.1% of all scenes) promoted smoking prohibition. This was a descriptive study examining the nature of smoking scenes in Japanese television dramas from a public health perspective.
Postponing the Encyclopedia: Children as Researchers.
ERIC Educational Resources Information Center
Pinsel, Marc I.; Pinsel, Jerry K.
Research is the planned collection, selection, and processing of information that typically takes three forms--historical, descriptive, or experimental. Historical research seeks to uncover facts with respect to events that have already happened, descriptive research seeks to uncover facts with respect to the current scene of events, and…
Higher Education Management. The Key Elements.
ERIC Educational Resources Information Center
Warner, David, Ed.; Palfreyman, David, Ed.
This book presents the views of 15 individual authors on the principles of management in higher education from a British perspective. Preliminary material includes brief biographical sketches of each contributing author and a list of selected abbreviations. Individual chapters are: (1) "Setting the Scene" (David Palfreyman and David…
The Language Testing Cycle: From Inception to Washback. Series S, Number 13.
ERIC Educational Resources Information Center
Wigglesworth, Gillian, Ed.; Elder, Catherine, Ed.
A selection of essays on language testing includes: "Perspectives on the Testing Cycle: Setting the Scene" (Catherine Elder, Gillian Wigglesworth); "The Politicisation of English: The Case of the STEP Test and the Chinese Students" (Lesleyanne Hawthorne); "Developing Language Tests for Specific Populations" (Rosemary…
Blur Detection is Unaffected by Cognitive Load.
Loschky, Lester C; Ringer, Ryan V; Johnson, Aaron P; Larson, Adam M; Neider, Mark; Kramer, Arthur F
2014-03-01
Blur detection is affected by retinal eccentricity, but is it also affected by attentional resources? Research showing effects of selective attention on acuity and contrast sensitivity suggests that allocating attention should increase blur detection. However, research showing that blur affects selection of saccade targets suggests that blur detection may be pre-attentive. To investigate this question, we carried out experiments in which viewers detected blur in real-world scenes under varying levels of cognitive load manipulated by the N-back task. We used adaptive threshold estimation to measure blur detection thresholds at 0°, 3°, 6°, and 9° eccentricity. Participants carried out blur detection as a single task, a single task with to-be-ignored letters, or an N-back task with four levels of cognitive load (0, 1, 2, or 3-back). In Experiment 1, blur was presented gaze-contingently for occasional single eye fixations while participants viewed scenes in preparation for an easy picture recognition memory task, and the N-back stimuli were presented auditorily. The results for three participants showed a large effect of retinal eccentricity on blur thresholds, significant effects of N-back level on N-back performance, scene recognition memory, and gaze dispersion, but no effect of N-back level on blur thresholds. In Experiment 2, we replicated Experiment 1 but presented the images tachistoscopically for 200 ms (half with, half without blur), to determine whether gaze-contingent blur presentation in Experiment 1 had produced attentional capture by blur onset during a fixation, thus eliminating any effect of cognitive load on blur detection. The results with three new participants replicated those of Experiment 1, indicating that the use of gaze-contingent blur presentation could not explain the lack of effect of cognitive load on blur detection. Thus, apparently blur detection in real-world scene images is unaffected by attentional resources, as manipulated by the cognitive load produced by the N-back task.
Circling motion and screen edges as an alternative input method for on-screen target manipulation.
Ka, Hyun W; Simpson, Richard C
2017-04-01
To investigate a new alternative interaction method, called the circling interface, for manipulating on-screen objects. To specify a target, the user makes a circling motion around it; to specify a desired pointing command, each edge of the screen is used, with the user selecting a command before circling the target. To evaluate the circling interface, we conducted an experiment with 16 participants, comparing performance on pointing tasks with different combinations of selection method (circling interface, physical mouse and dwelling interface) and input device (normal computer mouse, head pointer and joystick mouse emulator). The circling interface is compatible with many types of pointing devices, does not require physical activation of mouse buttons, and is more efficient than dwell-clicking. Across all common pointing operations, the circling interface tended to produce faster performance with a head-mounted mouse emulator than with a joystick mouse, and its accuracy outperformed the dwelling interface. It was demonstrated that the circling interface has potential as an alternative pointing method for selecting and manipulating objects in a graphical user interface. Implications for Rehabilitation: A circling interface will improve clinical practice by providing an alternative pointing method that does not require physically activating mouse buttons and is more efficient than dwell-clicking. The circling interface can also work with AAC devices.
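Recognizing the circling motion itself is not specified in the abstract; one plausible detector is a winding-number test on the pointer trajectory, sketched here with an illustrative threshold:

```python
import numpy as np

def encircles(path, target):
    """Test whether a pointer path circles around a target point.

    Sums the signed angle swept by the vector from `target` to each
    path sample; a total near 2*pi means one full loop. This
    winding-number test is one plausible way to detect a circling
    motion, not necessarily the paper's recognizer.
    """
    p = np.asarray(path, dtype=float) - np.asarray(target, dtype=float)
    angles = np.arctan2(p[:, 1], p[:, 0])
    steps = np.diff(angles)
    steps = (steps + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
    return abs(steps.sum()) > 1.8 * np.pi           # illustrative threshold
```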
Hunter, MaryCarol R; Askarinejad, Ali
2015-01-01
It is well-established that the experience of nature produces an array of positive benefits to mental well-being. Much less is known about the specific attributes of green space that produce these effects. In the absence of translational research that links theory with application, it is challenging to design urban green space for its greatest restorative potential. This translational research provides a method for identifying which specific physical attributes of an environmental setting are most likely to influence preference and restoration responses. Attribute identification was based on a triangulation process invoking environmental psychology and aesthetics theories, principles of design founded in mathematics and aesthetics, and empirical research on the role of specific physical attributes of the environment in preference or restoration responses. From this integration emerged a list of physical attributes defining aspects of spatial structure and environmental content found to be most relevant to the perceptions involved in preference and restoration. The physical attribute list offers a starting point for deciphering which scene stimuli dominate or collaborate in preference and restoration responses. To support this, functional definitions and metrics (efficient methods for attribute quantification) are presented. Use of these research products and the process for defining place-based metrics can provide (a) greater control in the selection and interpretation of the scenes/images used in tests of preference and restoration and (b) an expanded evidence base for well-being designers of the built environment.
Oh, Hwamee; Leung, Hoi-Chung
2010-02-01
In this fMRI study, we investigated prefrontal cortex (PFC) and visual association regions during selective information processing. We recorded behavioral responses and neural activity during a delayed recognition task with a cue presented during the delay period. A specific cue ("Face" or "Scene") was used to indicate which one of the two initially viewed pictures of a face and a scene would be tested at the end of a trial, whereas a nonspecific cue ("Both") was used as control. As expected, the specific cues facilitated behavioral performance (faster response times) compared to the nonspecific cue. A postexperiment memory test showed that the items cued to remember were better recognized than those not cued. The fMRI results showed largely overlapped activations across the three cue conditions in dorsolateral and ventrolateral PFC, dorsomedial PFC, posterior parietal cortex, ventral occipito-temporal cortex, dorsal striatum, and pulvinar nucleus. Among those regions, dorsomedial PFC and inferior occipital gyrus remained active during the entire postcue delay period. Differential activity was mainly found in the association cortices. In particular, the parahippocampal area and posterior superior parietal lobe showed significantly enhanced activity during the postcue period of the scene condition relative to the Face and Both conditions. No regions showed differentially greater responses to the face cue. Our findings suggest that a better representation of visual information in working memory may depend on enhancing the more specialized visual association areas or their interaction with PFC.
NASA Astrophysics Data System (ADS)
Sansivero, Fabio; Vilardo, Giuseppe; Caputo, Teresa
2017-04-01
The permanent thermal infrared surveillance network of Osservatorio Vesuviano (INGV) is composed of 6 stations that acquire IR frames of fumarole fields in the Campi Flegrei caldera and inside the Vesuvius crater (Italy). The IR frames are uploaded to a dedicated server in the Surveillance Center of Osservatorio Vesuviano, where the infrared data are processed to extract all the information they contain. In a first phase the infrared data are processed by an automated system (A.S.I.R.A. Acq, Automated System of IR Analysis and Acquisition) developed in the Matlab environment with a user-friendly graphical user interface (GUI). ASIRA Acq generates daily time-series of residual temperature values of the maximum temperatures observed in the IR scenes after the removal of seasonal effects. These time-series are displayed in the Surveillance Room of Osservatorio Vesuviano and provide information about the evolution of the shallow temperature field of the observed areas. In particular, the features of ASIRA Acq include: a) efficient quality selection of IR scenes; b) co-registration of IR images with respect to a reference frame; c) seasonal correction using a background-removal methodology; d) filing of IR matrices and of the processed data in shared archives open to interrogation. The daily archived records can also be processed by ASIRA Plot (Matlab code with GUI) to visualize IR data time-series and to help evaluate input parameters for further data processing and analysis. Additional processing is accomplished in a second phase by ASIRA Tools, Matlab code with a GUI developed to extract further information from the dataset in an automated way. The main functions of ASIRA Tools are: a) analysis of the temperature variations of each pixel of the IR frame in a given time interval; b) removal of seasonal effects from the temperature of every pixel in the IR frames using an analytic approach (removal of the sinusoidal long-term seasonal component by a polynomial-fit Matlab function, LTFC_SCOREF); c) export of data in different raster formats (e.g., Surfer grd). An interesting example of the products of ASIRA Tools is the map of the temperature changing rate, which provides remarkable information about the potential migration of fumarole activity. The high efficiency of Matlab in processing matrix data from IR scenes and the flexibility of this code-developing tool proved very useful for producing applications for volcanic surveillance aimed at monitoring the evolution of the surface temperature field in diffuse degassing volcanic areas.
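The seasonal correction at the heart of both ASIRA stages amounts to fitting and subtracting an annual cycle plus a slow trend from each temperature series. A minimal Python sketch of that idea follows; the actual Matlab routine (LTFC_SCOREF) may differ in detail:

```python
import numpy as np

def remove_seasonal(t_days, temps, poly_deg=2):
    """Remove an annual sinusoid plus a polynomial trend from a series.

    Least-squares fit of temps ~ poly(t) + A*cos(w*t) + B*sin(w*t) with
    a one-year period, returning the residual (deseasonalized) values.
    A sketch of the kind of correction the pipeline applies, not the
    LTFC_SCOREF implementation itself.
    """
    w = 2 * np.pi / 365.25
    basis = [t_days ** d for d in range(poly_deg + 1)]
    basis += [np.cos(w * t_days), np.sin(w * t_days)]
    A = np.column_stack(basis)
    coeffs, *_ = np.linalg.lstsq(A, temps, rcond=None)
    return temps - A @ coeffs   # residual temperatures
```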
Best-next-view algorithm for three-dimensional scene reconstruction using range images
NASA Astrophysics Data System (ADS)
Banta, J. E.; Zhien, Yu; Wang, X. Z.; Zhang, G.; Smith, M. T.; Abidi, Mongi A.
1995-10-01
The primary focus of the research detailed in this paper is to develop an intelligent sensing module capable of automatically determining the optimal next sensor position and orientation during scene reconstruction. To facilitate a solution to this problem, we have assembled a system for reconstructing a 3D model of an object or scene from a sequence of range images. Candidates for the best-next-view position are determined by detecting and measuring occlusions to the range camera's view in an image. Ultimately, the candidate which will reveal the greatest amount of unknown scene information is selected as the best-next-view position. Our algorithm uses ray tracing to determine how much new information a given sensor perspective will reveal. We have tested our algorithm successfully on several synthetic range data streams, and found the system's results to be consistent with an intuitive human search. The models recovered by our system from range data compared well with the ideal models. Essentially, we have proven that range information of physical objects can be employed to automatically reconstruct a satisfactory dynamic 3D computer model at a minimal computational expense. This has obvious implications in the contexts of robot navigation, manufacturing, and hazardous materials handling. The algorithm requires no a priori information about the scene to find the best-next-view position.
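To make the ray-tracing criterion concrete, here is a toy Python sketch under assumed data structures: a voxel occupancy grid coded 0 = empty, 1 = occupied, -1 = unknown, and a candidate viewpoint scored by how many unknown voxels its rays would reach before being blocked. The layout and sampling are illustrative, not the paper's implementation.

```python
import numpy as np

def score_view(grid, origin, directions, step=0.5, max_dist=60.0):
    """Count unknown voxels a sensor at `origin` would reveal."""
    revealed = set()
    for d in directions:
        d = d / np.linalg.norm(d)
        for s in np.arange(0.0, max_dist, step):
            idx = tuple((origin + s * d).astype(int))
            if any(i < 0 or i >= n for i, n in zip(idx, grid.shape)):
                break                      # ray left the volume
            if grid[idx] == 1:
                break                      # blocked by a known surface
            if grid[idx] == -1:
                revealed.add(idx)          # unknown voxel would be observed
    return len(revealed)

# best-next-view = the candidate maximizing the revealed-unknown count:
# best = max(candidates, key=lambda c: score_view(grid, c.origin, c.rays))
```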
Constructing, Perceiving, and Maintaining Scenes: Hippocampal Activity and Connectivity
Zeidman, Peter; Mullally, Sinéad L.; Maguire, Eleanor A.
2015-01-01
In recent years, evidence has accumulated to suggest the hippocampus plays a role beyond memory. A strong hippocampal response to scenes has been noted, and patients with bilateral hippocampal damage cannot vividly recall scenes from their past or construct scenes in their imagination. There is debate about whether the hippocampus is involved in the online processing of scenes independent of memory. Here, we investigated the hippocampal response to visually perceiving scenes, constructing scenes in the imagination, and maintaining scenes in working memory. We found extensive hippocampal activation for perceiving scenes, and a circumscribed area of anterior medial hippocampus common to perception and construction. There was significantly less hippocampal activity for maintaining scenes in working memory. We also explored the functional connectivity of the anterior medial hippocampus and found significantly stronger connectivity with a distributed set of brain areas during scene construction compared with scene perception. These results increase our knowledge of the hippocampus by identifying a subregion commonly engaged by scenes, whether perceived or constructed, by separating scene construction from working memory, and by revealing the functional network underlying scene construction, offering new insights into why patients with hippocampal lesions cannot construct scenes. PMID:25405941
Parametric Coding of the Size and Clutter of Natural Scenes in the Human Brain
Park, Soojin; Konkle, Talia; Oliva, Aude
2015-01-01
Estimating the size of a space and its degree of clutter are effortless and ubiquitous tasks of moving agents in a natural environment. Here, we examine how regions along the occipital–temporal lobe respond to pictures of indoor real-world scenes that parametrically vary in their physical “size” (the spatial extent of a space bounded by walls) and functional “clutter” (the organization and quantity of objects that fill up the space). Using a linear regression model on multivoxel pattern activity across regions of interest, we find evidence that both properties of size and clutter are represented in the patterns of parahippocampal cortex, while the retrosplenial cortex activity patterns are predominantly sensitive to the size of a space, rather than the degree of clutter. Parametric whole-brain analyses confirmed these results. Importantly, this size and clutter information was represented in a way that generalized across different semantic categories. These data provide support for a property-based representation of spaces, distributed across multiple scene-selective regions of the cerebral cortex. PMID:24436318
NASA Astrophysics Data System (ADS)
Kutulakos, Kyros N.; O'Toole, Matthew
2015-03-01
Conventional cameras record all light falling on their sensor regardless of the path that light followed to get there. In this paper we give an overview of a new family of computational cameras that offers many more degrees of freedom. These cameras record just a fraction of the light coming from a controllable source, based on the actual 3D light path followed. Photos and live video captured this way offer an unconventional view of everyday scenes in which the effects of scattering, refraction and other phenomena can be selectively blocked or enhanced, visual structures that are too subtle to notice with the naked eye can become apparent, and object appearance can depend on depth. We give an overview of the basic theory behind these cameras and their DMD-based implementation, and discuss three applications: (1) live indirect-only imaging of complex everyday scenes, (2) reconstructing the 3D shape of scenes whose geometry or material properties make them hard or impossible to scan with conventional methods, and (3) acquiring time-of-flight images that are free of multi-path interference.
NASA Astrophysics Data System (ADS)
Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Jia, Jie; Kim, Hae-Kwang
2007-12-01
In this paper, we propose a method of 3D-graphics-to-video encoding and streaming that is embedded into a remote interactive 3D visualization system for rapidly presenting a 3D scene on mobile devices without having to download it from the server. In particular, a 3D-graphics-to-video framework is presented that increases the visual quality of regions of interest (ROI) of the video by allocating more bits to the ROI during H.264 video encoding. The ROI are identified by projecting 3D objects onto a 2D plane during rasterization. The system lets users navigate the 3D scene and interact with objects of interest to query their descriptions. We developed an adaptive media streaming server that can provide an adaptive video stream, in terms of object-based quality, to the client according to the user's preferences and the variation of network bandwidth. Results show that with ROI mode selection, the PSNR of the test samples changes only slightly while the visual quality of the objects of interest increases markedly.
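The ROI-weighted bit allocation reduces, at its simplest, to lowering the quantization parameter (QP) of macroblocks covered by a projected object of interest. The following Python sketch is a schematic stand-in for the encoder integration described above; block size, base QP, and offset are illustrative.

```python
import numpy as np

def qp_map(roi_mask, base_qp=32, roi_qp_offset=-6, mb=16):
    """Per-macroblock QP: lower (more bits) where the mask marks an ROI."""
    h, w = roi_mask.shape
    rows, cols = h // mb, w // mb
    qps = np.full((rows, cols), base_qp, dtype=int)
    for r in range(rows):
        for c in range(cols):
            if roi_mask[r * mb:(r + 1) * mb, c * mb:(c + 1) * mb].any():
                qps[r, c] = base_qp + roi_qp_offset   # macroblock touches ROI
    return qps
```

A real H.264 encoder would consume this map in its rate-control loop; the mask itself comes from rasterizing the 3D objects of interest into the frame.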
AIS-2 radiometry and a comparison of methods for the recovery of ground reflectance
NASA Technical Reports Server (NTRS)
Conel, James E.; Green, Robert O.; Vane, Gregg; Bruegge, Carol J.; Alley, Ronald E.; Curtiss, Brian J.
1987-01-01
A field experiment and its results involving Airborne Imaging Spectrometer-2 data are described. The radiometry and spectral calibration of the instrument are critically examined in light of laboratory and field measurements. Three methods of compensating for the atmosphere in the search for ground reflectance are compared. It was found that laboratory-determined responsivities are 30 to 50 percent less than expected for conditions of the flight for both short and long wavelength observations. The combined system atmosphere-surface signal to noise ratio, as indexed by the mean response divided by the standard deviation for selected areas, lies between 40 and 110, depending upon how scene averages are taken, and is 30 percent less for flight conditions than for laboratory conditions. Atmospheric and surface variations may contribute to this difference. It is not possible to isolate instrument performance from the present data. As for methods of data reduction, the so-called scene average or log-residual method fails to recover any feature present in the surface reflectance, probably because of the extreme homogeneity of the scene.
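The log-residual ("scene average") reduction evaluated here has a compact generic form: divide each spectrum by the pixel's own spectral mean and by the scene's per-band mean, working in log space. The sketch below is that textbook version in Python, not the paper's exact code; it also shows why an extremely homogeneous scene defeats the method (the per-band scene mean absorbs nearly all the per-band structure).

```python
import numpy as np

def log_residual(cube):
    """cube: radiance array of shape (pixels, bands), all values > 0."""
    logs = np.log(cube)
    pixel_mean = logs.mean(axis=1, keepdims=True)   # spectral mean per pixel
    band_mean = logs.mean(axis=0, keepdims=True)    # scene mean per band
    return np.exp(logs - pixel_mean - band_mean + logs.mean())
```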
Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega
2015-01-01
This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. PMID:25875189
NASA Astrophysics Data System (ADS)
Tao, Zhu; Shi, Runhe; Zeng, Yuyan; Gao, Wei
2017-09-01
The 3D model is an important part of simulated remote sensing for Earth observation. Given the small spatial scales handled by the DART software, both the detail of the model itself and the number and distribution of models have an important impact on the scene canopy Normalized Difference Vegetation Index (NDVI). Taking Phragmites australis in the Yangtze Estuary as an example, and building on previous studies of model precision, this paper studied the effect of the P. australis model on canopy NDVI, mainly as a function of the cell dimension of the DART software and the density distribution of the P. australis models in the scene, as well as the choice of model density under the computer-run-time costs of practical simulation. The DART cell dimensions and the density of the scene model were set using the optimal-precision model from existing research results. The NDVI simulation results for different model densities under different cell dimensions were examined by error analysis. By studying the relationship between relative error, absolute error, and time costs, we established a density selection method for the P. australis model in simulations of small-scale scenes. Experiments showed that, due to the difference between the 3D model and real scenarios, the number of P. australis plants in the simulated scene need not match that of the real environment; the best simulation results, in both accuracy and visual effect, were obtained at a density of about 40 plants per square meter.
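The canopy metric under study is the standard NDVI, and the paper's comparison reduces to error statistics between simulated scenes. A minimal Python sketch (array names are assumptions):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Standard NDVI from near-infrared and red reflectance arrays."""
    return (nir - red) / (nir + red + eps)

def errors(sim, ref):
    """Mean absolute and relative NDVI error between a test simulation and
    a reference (e.g., densest-model) scene, as in the error analysis."""
    abs_err = np.abs(sim - ref)
    rel_err = abs_err / np.maximum(np.abs(ref), 1e-9)
    return abs_err.mean(), rel_err.mean()
```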
Web Map Services (WMS) Global Mosaic
NASA Technical Reports Server (NTRS)
Percivall, George; Plesea, Lucian
2003-01-01
The WMS Global Mosaic provides access to imagery of the global landmass using an open standard for web mapping. The seamless image is a mosaic of Landsat 7 scenes; geographically-accurate with 30 and 15 meter resolutions. By using the OpenGIS Web Map Service (WMS) interface, any organization can use the global mosaic as a layer in their geospatial applications. Based on a trade study, an implementation approach was chosen that extends a previously developed WMS hosting a Landsat 5 CONUS mosaic developed by JPL. The WMS Global Mosaic supports the NASA Geospatial Interoperability Office goal of providing an integrated digital representation of the Earth, widely accessible for humanity's critical decisions.
Enabling model customization and integration
NASA Astrophysics Data System (ADS)
Park, Minho; Fishwick, Paul A.
2003-09-01
Until fairly recently, the ideas of dynamic model content and presentation were treated synonymously. For example, if one were to take a data flow network, which captures the dynamics of a target system in terms of the flow of data through nodal operators, then one would often standardize on rectangles and arrows for the model display. The increasing web emphasis on XML, however, suggests that the network model can have its content specified in an XML language, and then the model can be represented in a number of ways depending on the chosen style. We have developed a formal method, based on styles, that permits a model to be specified in XML and presented in 1D (text), 2D, and 3D. This method allows customization and personalization to exert their benefits beyond e-commerce, to the area of model structures used in computer simulation. This customization leads naturally to solving the bigger problem of model integration - the act of taking models of a scene and integrating them with that scene so that there is only one unified modeling interface. This work focuses mostly on customization, but we address the integration issue in the future work section.
A comparison of viewer reactions to outdoor scenes and photographs of those scenes
Elwood Shafer, Jr.; Thomas A. Richards
1974-01-01
A color-slide projection or photograph can be used to determine reactions to an actual scene if the presentation adequately includes most of the elements in the scene. Eight kinds of scenes were subjected to three different types of presentation: (A) viewing the actual scenes, (B) viewing color slides of the scenes, and (C) viewing color photographs of the scenes. For...
2003-05-01
these 10 pilot fatalities were analgesics, sympathomimetics, diphenhydramine, and/or tramadol. Ethanol was found in 3 cases wherein no other drugs...health care providers at accident scenes, or at hospitals, for resuscitation, pain reduction, and/or surgical procedures. Whereas, other drugs—such as
The Thing's the Play: Doing "Hamlet."
ERIC Educational Resources Information Center
Sowder, Wilbur H., Jr.
1993-01-01
Argues for the use of film in the teaching of William Shakespeare's "Hamlet" because the play was meant to be seen and heard and not just read. Outlines a method of teaching the play by which students select a scene and perform it. Gives an example of a successful student performance. (HB)
Australia and New Zealand Applied Linguistics (ANZAL): Taking Stock
ERIC Educational Resources Information Center
Kleinsasser, Robert C.
2004-01-01
This paper reviews some emerging trends in applied linguistics in both Australia and New Zealand. It sketches the current scene of (selected) postgraduate applied linguistics programs in higher education and considers how various university programs define applied linguistics through the classes (titles) they have postgraduate students complete to…
Grasp Preparation Improves Change Detection for Congruent Objects
ERIC Educational Resources Information Center
Symes, Ed; Tucker, Mike; Ellis, Rob; Vainio, Lari; Ottoboni, Giovanni
2008-01-01
A series of experiments provided converging support for the hypothesis that action preparation biases selective attention to action-congruent object features. When visual transients are masked in so-called "change-blindness scenes," viewers are blind to substantial changes between 2 otherwise identical pictures that flick back and forth. The…
Conducting a wildland visual resources inventory
James F. Palmer
1979-01-01
This paper describes a procedure for systematically inventorying the visual resources of wildland environments. Visual attributes are recorded photographically using two separate sampling methods: one based on professional judgment and the other on random selection. The location and description of each inventoried scene are recorded on U.S. Geological Survey...
Report Card. Functional Models of Institutional Research and Other Selected Papers.
ERIC Educational Resources Information Center
Brown, Charles I., Ed.
Presentations are on the five topics of functional models of institutional research: (1) and the political scene; (2) at public and private colleges and universities; (3) for improving communications between institutional researchers and data processors; (4) for deriving qualitative decisions from quantitative data; and (5) for special interest…
NASA Technical Reports Server (NTRS)
Thorne, J. F.
1977-01-01
State agencies need rapid, synoptic and inexpensive methods for lake assessment to comply with the 1972 Amendments to the Federal Water Pollution Control Act. Low altitude aerial photography may be useful in providing information on algal type and quantity. Photography must be calibrated properly to remove sources of error including airlight, surface reflectance and scene-to-scene illumination differences. A 550-nm narrow wavelength band black and white photographic exposure provided a better correlation to algal biomass than either red or infrared photographic exposure. Of all the biomass parameters tested, depth-integrated chlorophyll a concentration correlated best to remote sensing data. Laboratory-measured reflectance of selected algae indicates that different taxonomic classes of algae may be discriminated on the basis of their reflectance spectra.
Enhancement of TIMS images for photointerpretation
NASA Technical Reports Server (NTRS)
Gillespie, A. R.
1986-01-01
The Thermal Infrared Multispectral Scanner (TIMS) images consist of six channels of data acquired in bands between 8 and 12 microns, thus they contain information about both temperature and emittance. Scene temperatures are controlled by reflectivity of the surface, but also by its geometry with respect to the Sun, time of day, and other factors unrelated to composition. Emittance is dependent upon composition alone. Thus the photointerpreter may wish to enhance emittance information selectively. Because thermal emittances in real scenes vary but little, image data tend to be highly correlated across channels. Special image processing is required to make this information available for the photointerpreter. Processing includes noise removal, construction of model emittance images, and construction of false-color pictures enhanced by decorrelation techniques.
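Decorrelation stretch, the enhancement named in the closing sentence, has a standard principal-components form. The Python sketch below is that generic version (rotate to principal axes, equalize variances, rotate back, restore per-band scale), offered as an illustration rather than the TIMS processing chain itself.

```python
import numpy as np

def decorrelation_stretch(cube):
    """cube: (pixels, bands) float array; returns the stretched data."""
    mean = cube.mean(axis=0)
    X = cube - mean
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    # whiten along principal axes, then rotate back to band space
    W = evecs @ np.diag(1.0 / np.sqrt(evals + 1e-12)) @ evecs.T
    out = X @ W
    return out * cube.std(axis=0) + mean   # restore each band's scale
```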
Automatic video segmentation and indexing
NASA Astrophysics Data System (ADS)
Chahir, Youssef; Chen, Liming
1999-08-01
Indexing is an important aspect of video database management. Video indexing involves the analysis of video sequences, which is a computationally intensive process. However, effective management of digital video requires robust indexing techniques. The main purpose of our proposed video segmentation is twofold. First, we develop an algorithm that identifies camera shot boundaries. The approach is based on a combination of color histograms and a block-based technique. Next, each temporal segment is represented by a color reference frame, which specifies the shot similarities and is used in the constitution of scenes. Experimental results using a variety of videos selected from the corpus of the French Audiovisual National Institute are presented to demonstrate the effectiveness of shot detection, content characterization of shots, and scene constitution.
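A bare-bones Python version of the histogram test for shot boundaries; the bin count, distance metric, and threshold are illustrative choices, and the block-based refinement the abstract mentions is omitted.

```python
import numpy as np

def hist_diff(frame_a, frame_b, bins=8):
    """Normalized L1 distance between joint RGB histograms of two frames
    (each an (H, W, 3) uint8 array)."""
    ha, _ = np.histogramdd(frame_a.reshape(-1, 3), bins=(bins,) * 3,
                           range=((0, 256),) * 3)
    hb, _ = np.histogramdd(frame_b.reshape(-1, 3), bins=(bins,) * 3,
                           range=((0, 256),) * 3)
    return np.abs(ha - hb).sum() / frame_a[..., 0].size

def shot_boundaries(frames, thresh=0.5):
    """Indices where consecutive frames differ enough to declare a cut."""
    return [i for i in range(1, len(frames))
            if hist_diff(frames[i - 1], frames[i]) > thresh]
```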
Magnon mode selective spin transport in compensated ferrimagnets
Cramer, Joel; Guo, Er-Jia; Geprägs, Stephan; ...
2017-04-13
We investigate the generation of magnonic thermal spin currents and their mode-selective spin transport across interfaces in insulating, compensated ferrimagnet/normal metal bilayer systems. The spin Seebeck effect signal exhibits a nonmonotonic temperature dependence with two sign changes of the detected voltage signals. Using different ferrimagnetic garnets, we demonstrate the universality of the observed complex temperature dependence of the spin Seebeck effect. To understand its origin, we systematically vary the interface between the ferrimagnetic garnet and the metallic layer, and by using different metal layers we establish that interface effects play a dominating role. They not only modify the magnitude of the spin Seebeck effect signal but in particular also alter its temperature dependence. By varying the temperature, we can select the dominating magnon mode, and we analyze our results to reveal the mode-selective interface transmission probabilities for different magnon modes and interfaces. As a result, the comparison of selected systems reveals semiquantitative details of the interfacial coupling depending on the materials involved, supported by the obtained field dependence of the signal.
The effect of interface properties on nickel base alloy composites
NASA Technical Reports Server (NTRS)
Groves, M.; Grossman, T.; Senemeier, M.; Wright, K.
1995-01-01
This program was performed to assess the extent to which mechanical behavior models can predict the properties of sapphire fiber/nickel aluminide matrix composites and help guide their development by defining improved combinations of matrix and interface coating. The program consisted of four tasks: 1) selection of the matrices and interface coating constituents using a modeling-based approach; 2) fabrication of the selected materials; 3) testing and evaluation of the materials; and 4) evaluation of the behavior models to develop recommendations. Ni-50Al and Ni-20Al-30Fe (a/o) matrices were selected which gave brittle and ductile behavior, respectively, and an interface coating of PVD YSZ was selected which provided strong bonding to the sapphire fiber. Significant fiber damage and strength loss were observed in the composites, which made straightforward comparison of properties with models difficult. Nevertheless, the models selected generally provided property predictions which agreed well with results when fiber degradation was incorporated. The presence of a strong interface bond was felt to be detrimental in the NiAl MMC system, where low toughness and low strength were observed.
NASA Technical Reports Server (NTRS)
Chretien, Jean-Loup (Inventor); Lu, Edward T. (Inventor)
2005-01-01
A dynamic optical filtration system and method effectively blocks bright light sources without impairing view of the remainder of the scene. A sensor measures light intensity and position so that selected cells of a shading matrix may interrupt the view of the bright light source by a receptor. A beamsplitter may be used so that the sensor may be located away from the receptor. The shading matrix may also be replaced by a digital micromirror device, which selectively sends image data to the receptor.
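As a toy illustration of the sensor-to-matrix mapping, the Python sketch below converts a detected bright-source image position into a small block of shading-matrix cells to darken. The one-to-one image-to-matrix geometry and the square blocking window are assumptions for illustration, not the patented design.

```python
import numpy as np

def cells_to_block(src_x, src_y, img_w, img_h, mat_w, mat_h, radius=1):
    """Boolean mask of shading-matrix cells to switch opaque."""
    cx = int(src_x / img_w * mat_w)        # map image coords to matrix cell
    cy = int(src_y / img_h * mat_h)
    mask = np.zeros((mat_h, mat_w), dtype=bool)
    x0, x1 = max(cx - radius, 0), min(cx + radius + 1, mat_w)
    y0, y1 = max(cy - radius, 0), min(cy + radius + 1, mat_h)
    mask[y0:y1, x0:x1] = True              # opaque cells over the source
    return mask
```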
Ryals, Anthony J.; Wang, Jane X.; Polnaszek, Kelly L.; Voss, Joel L.
2015-01-01
Although the hippocampus unequivocally supports explicit/declarative memory, fewer findings have demonstrated its role in implicit expressions of memory. We tested for hippocampal contributions to an implicit expression of configural/relational memory for complex scenes using eye-movement tracking during functional magnetic resonance imaging (fMRI) scanning. Participants studied scenes and were later tested using scenes that resembled study scenes in their overall feature configuration but comprised different elements. These configurally similar scenes were used to limit explicit memory, and were intermixed with new scenes that did not resemble studied scenes. Scene configuration memory was expressed through eye movements reflecting exploration overlap (EO), which is the viewing of the same scene locations at both study and test. EO reliably discriminated similar study-test scene pairs from study-new scene pairs, was reliably greater for similarity-based recognition hits than for misses, and correlated with hippocampal fMRI activity. In contrast, subjects could not reliably discriminate similar from new scenes by overt judgments, although ratings of familiarity were slightly higher for similar than new scenes. Hippocampal fMRI correlates of this weak explicit memory were distinct from EO-related activity. These findings collectively suggest that EO was an implicit expression of scene configuration memory associated with hippocampal activity. Visual exploration can therefore reflect implicit hippocampal-related memory processing that can be observed in eye-movement behavior during naturalistic scene viewing. PMID:25620526
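One plausible way to compute the exploration overlap (EO) measure: bin fixations from the study and test phases into a coarse spatial grid and take the proportion of cells visited in both. Grid size and the exact overlap statistic are assumptions here, not the authors' published definition.

```python
import numpy as np

def exploration_overlap(study_fix, test_fix, img_w, img_h, cells=8):
    """study_fix/test_fix: iterables of (x, y) fixation coordinates."""
    def visited(fixations):
        mask = np.zeros((cells, cells), dtype=bool)
        for x, y in fixations:
            cx = min(int(x / img_w * cells), cells - 1)
            cy = min(int(y / img_h * cells), cells - 1)
            mask[cy, cx] = True
        return mask
    a, b = visited(study_fix), visited(test_fix)
    union = (a | b).sum()
    return (a & b).sum() / union if union else 0.0   # Jaccard-style overlap
```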
An Analysis of the Max-Min Texture Measure.
1982-01-01
[Garbled front matter from the report's list of tables: Appendix tables D1-D10 are confusion matrices for Scenes A, B, C, E, and H in the PANC and IR bands.]
Improving Nocturnal Fire Detection with the VIIRS Day-Night Band
NASA Technical Reports Server (NTRS)
Polivka, Thomas N.; Wang, Jun; Ellison, Luke T.; Hyer, Edward J.; Ichoku, Charles M.
2016-01-01
Building on existing techniques for satellite remote sensing of fires, this paper takes advantage of the day-night band (DNB) aboard the Visible Infrared Imaging Radiometer Suite (VIIRS) to develop the Firelight Detection Algorithm (FILDA), which characterizes fire pixels based on both visible-light and infrared (IR) signatures at night. By adjusting fire pixel selection criteria to include visible-light signatures, FILDA allows for significantly improved detection of pixels with smaller and/or cooler subpixel hotspots than the operational Interface Data Processing System (IDPS) algorithm. VIIRS scenes with near-coincident Advanced Spaceborne Thermal Emission and Reflection (ASTER) overpasses are examined after applying the operational VIIRS fire product algorithm and including a modified "candidate fire pixel selection" approach from FILDA that lowers the 4-µm brightness temperature (BT) threshold but includes a minimum DNB radiance. FILDA is shown to be effective in detecting gas flares and characterizing fire lines during large forest fires (such as the Rim Fire in California and the High Park Fire in Colorado). Compared with the operational VIIRS fire algorithm for the study period, FILDA shows a large increase (up to 90%) in the number of detected fire pixels that can be verified with the finer resolution ASTER data (90 m). Part (30%) of this increase is likely due to the combined use of DNB and lower 4-µm BT thresholds for fire detection in FILDA. Although further studies are needed, quantitative use of the DNB to improve fire detection could lead to reduced response times to wildfires and better estimates of fire characteristics (smoldering and flaming) at night.
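The modified candidate selection can be stated as a joint threshold test. The Python sketch below is schematic: the lowered 4-µm brightness-temperature cutoff and the minimum DNB radiance are placeholder values, not the published FILDA constants.

```python
import numpy as np

def candidate_fire_pixels(bt4, dnb, bt4_min=305.0, dnb_min=5e-9):
    """bt4: 4-um brightness temperature [K]; dnb: DNB radiance array.
    A pixel is a candidate only if it is both warm at 4 um AND visibly
    glowing in the day-night band."""
    return (bt4 > bt4_min) & (dnb > dnb_min)
```

Requiring the visible-light signature is what lets the thermal threshold drop without flooding the detector with false alarms from warm but non-luminous surfaces.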
Search of the Deep and Dark Web via DARPA Memex
NASA Astrophysics Data System (ADS)
Mattmann, C. A.
2015-12-01
Search has progressed through several stages due to the increasing size of the Web. Search engines first focused on text and its rate of occurrence; then on link analysis and citation; then on interactivity and guided search; and now on the use of social media - who we interact with, what we comment on, and who we follow (and who follows us). The next stage, referred to as "deep search," requires solutions that can bring together text, images, video, importance, interactivity, and social media to solve this challenging problem. The Apache Nutch project provides an open framework for large-scale, targeted, vertical search with capabilities to support all past and potential future search engine foci. Nutch is a flexible infrastructure allowing open access to ranking, to URL selection and filtering approaches, and to the link graph generated from search, and Nutch has spawned entire subcommunities including Apache Hadoop and Apache Tika. It addresses many current needs with the capability to support new technologies such as image and video. On the DARPA Memex project, we are creating specific extensions to Nutch that will directly improve its overall technological superiority for search and that will allow us to address complex search problems including human trafficking. We are integrating state-of-the-art algorithms developed by Kitware for IARPA Aladdin, combined with work by Harvard, to provide image and video understanding support, allowing automatic detection of people and things and massive deployment via Nutch. We are expanding Apache Tika for scene understanding and object/person detection and classification in images/video. We are delivering an interactive and visual interface for initiating Nutch crawls. The interface uses Python technologies to expose Nutch data and to provide a domain-specific language for crawls. Using the Bokeh visualization library, the interface delivers simple interactive crawl visualization and plotting techniques for exploring crawled information. The platform will classify, identify, and thwart predators, help to find victims, and identify buyers in human trafficking, delivering technological superiority in search engines for DARPA. We are already transitioning the technologies into Geo and Planetary Science and Bioinformatics.
NASA Astrophysics Data System (ADS)
Yoon, Jayoung; Kim, Gerard J.
2003-04-01
Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al., these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based-rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, a "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing distance and image-space criteria are used; however, the switching between the image and the 3D model occurs at a distance where the user starts to perceive the object's internal depth. Also, during interaction, regardless of the viewing distance, a 3D representation is used if it exists. Before rendering, objects are conservatively culled from the view frustum using the representation with the largest volume. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
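A toy Python version of the representation-selection rule described above: a 3D model at close range or during interaction, a billboard at intermediate range, and an environment map far away. The distance cutoffs are invented; the paper switches image/model at the distance where the object's internal depth becomes perceptible.

```python
def pick_representation(distance, interacting, has_model=True,
                        near=10.0, far=100.0):
    """Choose which LOD/representation to traverse in the scene graph."""
    if interacting and has_model:
        return "3d_model"          # interaction always prefers true geometry
    if distance < near and has_model:
        return "3d_model"
    if distance < far:
        return "billboard"
    return "environment_map"
```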
Crime scene investigation, reporting, and reconstuction (CSIRR)
NASA Astrophysics Data System (ADS)
Booth, John F.; Young, Jeffrey M.; Corrigan, Paul
1997-02-01
Graphic Data Systems Corporation (GDS Corp.) and Intelligent Graphics Solutions, Inc. (IGS) combined talents in 1995 to design and develop a MicroGDS™ application to support field investigations of crime scenes, such as homicides, bombings, and arsons. IGS and GDS Corp. prepared design documents under the guidance of federal, state, and local crime scene reconstruction experts and with information from the FBI's evidence response team field book. The application was then developed to encompass the key components of crime scene investigation: staff assigned to the incident, tasks occurring at the scene, visits to the scene location, photographs taken of the crime scene, related documents, involved persons, catalogued evidence, and two- or three-dimensional crime scene reconstruction. Crime scene investigation, reporting, and reconstruction (CSIRR) provides investigators with a single application for both capturing all tabular data about the crime scene and quickly rendering a sketch of the scene. Tabular data are captured through intuitive database forms, while MicroGDS™ has been modified to readily allow non-CAD users to sketch the scene.
Bar, Moshe; Aminoff, Elissa; Schacter, Daniel L.
2009-01-01
The parahippocampal cortex (PHC) has been implicated both in episodic memory and in place/scene processing. We proposed that this region should instead be seen as intrinsically mediating contextual associations, and not place/scene processing or episodic memory exclusively. Given that place/scene processing and episodic memory both rely on associations, this modified framework provides a platform for reconciling what seemed like different roles assigned to the same region. Comparing scenes with scenes, we show here that the PHC responds significantly more strongly to scenes with rich contextual associations compared with scenes of equal visual qualities but fewer associations. This result provides the strongest support to the view that the PHC mediates contextual associations in general, rather than places or scenes proper, and necessitates a revision of current views such as that the PHC contains a dedicated place/scene “module.” PMID:18716212
NASA VERVE: Interactive 3D Visualization Within Eclipse
NASA Technical Reports Server (NTRS)
Cohen, Tamar; Allan, Mark B.
2014-01-01
At NASA, we develop myriad Eclipse RCP applications to provide situational awareness for remote systems. The Intelligent Robotics Group at NASA Ames Research Center has developed VERVE - a high-performance robot user interface that provides scientists, robot operators, and mission planners with powerful, interactive 3D displays of remote environments. VERVE includes a 3D Eclipse view with an embedded Java Ardor3D scenario, including SWT and mouse controls which interact with the Ardor3D camera and objects in the scene. VERVE also includes Eclipse views for exploring and editing objects in the Ardor3D scene graph, and a HUD (Heads Up Display) framework allows Growl-style notifications and other textual information to be overlaid onto the 3D scene. We use VERVE to listen to telemetry from robots and display the robots and associated scientific data along the terrain they are exploring; VERVE can be used for any interactive 3D display of data. VERVE is now open source. VERVE derives from the prior Viz system, which was developed for Mars Polar Lander (2001) and used for the Mars Exploration Rover (2003) and the Phoenix Lander (2008). It has been used for ongoing research with IRG's K10 and KRex rovers in various locations. VERVE was used on the International Space Station during two experiments in 2013 - Surface Telerobotics, in which astronauts controlled robots on Earth from the ISS, and SPHERES, where astronauts controlled a free-flying robot on board the ISS. We will show in detail how to code with VERVE, how to interact from SWT controls to the Ardor3D scenario, and share example code.
Understanding Recovery from Object Substitution Masking
ERIC Educational Resources Information Center
Goodhew, Stephanie C.; Dux, Paul E.; Lipp, Ottmar V.; Visser, Troy A. W.
2012-01-01
When we look at a scene, we are conscious of only a small fraction of the available visual information at any given point in time. This raises profound questions regarding how information is selected, when awareness occurs, and the nature of the mechanisms underlying these processes. One tool that may be used to probe these issues is…
This study examined inter-analyst classification variability based on training site signature selection only for six classifications from a 10 km2 Landsat ETM+ image centered over a highly heterogeneous area in south-central Virginia. Six analysts classified the image...
Using Humorous Sitcom Clips in Teaching Federal Income Taxes
ERIC Educational Resources Information Center
Cecil, H. Wayne
2014-01-01
This article shares the motivation, process, and outcomes of using humorous scenes from television comedies to teach the real world of tax practice. The article advances the literature by reviewing the use of video clips in a previously unexplored discipline, discussing the process of identifying and selecting appropriate clips, and introducing…
Preservice Teachers Experience Reading Response Pedagogy in a Multi-User Virtual Environment
ERIC Educational Resources Information Center
Dooley, Caitlin McMunn; Calandra, Brendan; Harmon, Stephen
2014-01-01
This qualitative case study describes how 18 preservice teachers learned to nurture literary meaning-making via activities based on Louise Rosenblatt's Reader Response Theory within a multi-user virtual environment (MUVE). Participants re-created and responded to scenes from selected works of children's literature in Second Life as a way to…
NASA Astrophysics Data System (ADS)
Cudennec, Christophe
2016-04-01
The Anthropocene concept encapsulates the planetary-scale changes resulting from accelerating socio-ecological transformations, beyond the stratigraphic definition currently under debate. The emergence of multi-scale and proteiform complexity requires interdisciplinary and systems approaches. Yet, to reduce the cognitive challenge of tackling this complexity, the global Anthropocene syndrome must now be studied from various topical points of view, and grounded at regional and local levels. A systems approach should allow us to identify AnthropoScenes, i.e. settings where a socio-ecological transformation subsystem is clearly coherent within boundaries and displays explicit relationships with neighbouring/remote scenes and within a nesting architecture. Hydrology is a key topical point of view to explore, as it is important in many aspects of the Anthropocene, whether with water itself being a resource, hazard or transport force, or through the network, connectivity, interface, teleconnection, emergence and scaling issues it determines. We will schematically exemplify these aspects with three contrasted hydrological AnthropoScenes in Tunisia, France and Iceland, and reframe therein concepts of the hydrological change debate. Bai X., van der Leeuw S., O'Brien K., Berkhout F., Biermann F., Brondizio E., Cudennec C., Dearing J., Duraiappah A., Glaser M., Revkin A., Steffen W., Syvitski J., 2016. Plausible and desirable futures in the Anthropocene: A new research agenda. Global Environmental Change, in press, http://dx.doi.org/10.1016/j.gloenvcha.2015.09.017 Brondizio E., O'Brien K., Bai X., Biermann F., Steffen W., Berkhout F., Cudennec C., Lemos M.C., Wolfe A., Palma-Oliveira J., Chen A. C-T. Re-conceptualizing the Anthropocene: A call for collaboration. Global Environmental Change, in review. Montanari A., Young G., Savenije H., Hughes D., Wagener T., Ren L., Koutsoyiannis D., Cudennec C., Grimaldi S., Blöschl G., Sivapalan M., Beven K., Gupta H., Arheimer B., Huang Y., Schumann A., Post D., Taniguchi M., Boegh E., Hubert P., Harman C., Thompson S., Rogger M., Hipsey M., Toth E., Viglione A., Di Baldassarre G., Schaefli B., McMillan H., Schymanski S., Characklis G., Yu B., Pang Z., Belyaev V., 2013. "Panta Rhei - Everything Flows": Change in hydrology and society - The IAHS Scientific Decade 2013-2022. Hydrological Sciences Journal, 58, 6, 1256-1275, DOI: 10.1080/02626667.2013.809088
Higher-order scene statistics of breast images
NASA Astrophysics Data System (ADS)
Abbey, Craig K.; Sohl-Dickstein, Jascha N.; Olshausen, Bruno A.; Eckstein, Miguel P.; Boone, John M.
2009-02-01
Researchers studying human and computer vision have found description and construction of these systems greatly aided by analysis of the statistical properties of naturally occurring scenes. More specifically, it has been found that receptive fields with directional selectivity and bandwidth properties similar to mammalian visual systems are more closely matched to the statistics of natural scenes. It is argued that this allows for sparse representation of the independent components of natural images [Olshausen and Field, Nature, 1996]. These theories have important implications for medical image perception. For example, will a system that is designed to represent the independent components of natural scenes, where objects occlude one another and illumination is typically reflected, be appropriate for X-ray imaging, where features superimpose on one another and illumination is transmissive? In this research we begin to examine these issues by evaluating higher-order statistical properties of breast images from X-ray projection mammography (PM) and dedicated breast computed tomography (bCT). We evaluate kurtosis in responses of octave bandwidth Gabor filters applied to PM and to coronal slices of bCT scans. We find that kurtosis in PM rises and quickly saturates for filter center frequencies with an average value above 0.95. By contrast, kurtosis in bCT peaks near 0.20 cyc/mm with kurtosis of approximately 2. Our findings suggest that the human visual system may be tuned to represent breast tissue more effectively in bCT over a specific range of spatial frequencies.
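The reported statistic is simply the sample kurtosis of bandpass-filter outputs. The Python sketch below builds a generic octave-bandwidth Gabor kernel and computes the excess kurtosis of its response; kernel parameters are illustrative, not the authors' exact filters.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.stats import kurtosis

def gabor_kernel(freq, theta=0.0, size=31):
    """Even (cosine-phase) Gabor with center frequency `freq` (cycles/pixel)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    sigma = 0.56 / freq                      # roughly one octave bandwidth
    env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * freq * xr)

def response_kurtosis(image, freq):
    """Excess kurtosis of the filter response (0 for Gaussian statistics)."""
    resp = fftconvolve(image, gabor_kernel(freq), mode="valid")
    return kurtosis(resp.ravel())
```

High kurtosis signals a sparse, heavy-tailed response distribution, the property tied to efficient sparse coding in the natural-scene literature the authors cite.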
Predicting the Valence of a Scene from Observers’ Eye Movements
R.-Tavakoli, Hamed; Atyabi, Adham; Rantanen, Antti; Laukka, Seppo J.; Nefti-Meziani, Samia; Heikkilä, Janne
2015-01-01
Multimedia analysis benefits from understanding the emotional content of a scene in a variety of tasks such as video genre classification and content-based image retrieval. Recently, there has been an increasing interest in applying human bio-signals, particularly eye movements, to recognize the emotional gist of a scene such as its valence. In order to determine the emotional category of images using eye movements, the existing methods often learn a classifier using several features that are extracted from eye movements. Although it has been shown that eye movement is potentially useful for recognition of scene valence, the contribution of each feature is not well-studied. To address the issue, we study the contribution of features extracted from eye movements in the classification of images into pleasant, neutral, and unpleasant categories. We assess ten features and their fusion. The features are histogram of saccade orientation, histogram of saccade slope, histogram of saccade length, histogram of saccade duration, histogram of saccade velocity, histogram of fixation duration, fixation histogram, top-ten salient coordinates, and saliency map. We utilize machine learning approach to analyze the performance of features by learning a support vector machine and exploiting various feature fusion schemes. The experiments reveal that ‘saliency map’, ‘fixation histogram’, ‘histogram of fixation duration’, and ‘histogram of saccade slope’ are the most contributing features. The selected features signify the influence of fixation information and angular behavior of eye movements in the recognition of the valence of images. PMID:26407322
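The classification machinery described is a standard early-fusion pipeline: concatenate per-image eye-movement feature vectors and train a support vector machine. A minimal scikit-learn sketch, with the feature extraction itself assumed to be done elsewhere:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fuse(*feature_blocks):
    """Early fusion: column-wise concatenation of per-sample features."""
    return np.hstack(feature_blocks)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# X = fuse(saccade_slope_hist, fixation_duration_hist, saliency_feats)
# y: labels in {pleasant, neutral, unpleasant}
# clf.fit(X_train, y_train); accuracy = clf.score(X_test, y_test)
```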
Detecting temporal changes in acoustic scenes: The variable benefit of selective attention.
Demany, Laurent; Bayle, Yann; Puginier, Emilie; Semal, Catherine
2017-09-01
Four experiments investigated change detection in acoustic scenes consisting of a sum of five amplitude-modulated pure tones. As the tones were about 0.7 octave apart and were amplitude-modulated with different frequencies (in the range 2-32 Hz), they were perceived as separate streams. Listeners had to detect a change in the frequency (experiments 1 and 2) or the shape (experiments 3 and 4) of the modulation of one of the five tones, in the presence of an informative cue orienting selective attention either before the scene (pre-cue) or after it (post-cue). The changes left intensity unchanged and were not detectable in the spectral (tonotopic) domain. Performance was much better with pre-cues than with post-cues. Thus, change deafness was manifest in the absence of an appropriate focusing of attention when the change occurred, even though the streams and the changes to be detected were acoustically very simple (in contrast to the conditions used in previous demonstrations of change deafness). In one case, the results were consistent with a model based on the assumption that change detection was possible if and only if attention was endogenously focused on a single tone. However, it was also found that changes resulting in a steepening of amplitude rises were to some extent able to draw attention exogenously. Change detection was not markedly facilitated when the change produced a discontinuity in the modulation domain, contrary to what could be expected from the perspective of predictive coding.
Azizi, Elham; Abel, Larry A; Stainer, Matthew J
2017-02-01
Action game playing has been associated with several improvements in visual attention tasks. However, it is not clear how such changes might influence the way we overtly select information from our visual world (i.e. eye movements). We examined whether action-video-game training changed eye movement behaviour in a series of visual search tasks including conjunctive search (relatively abstracted from natural behaviour), game-related search, and more naturalistic scene search. Forty nongamers were trained in either an action first-person shooter game or a card game (control) for 10 hours. As a further control, we recorded eye movements of 20 experienced action gamers on the same tasks. The results did not show any change in duration of fixations or saccade amplitude either from before to after the training or between all nongamers (pretraining) and experienced action gamers. However, we observed a change in search strategy, reflected by a reduction in the vertical distribution of fixations for the game-related search task in the action-game-trained group. This might suggest learning the likely distribution of targets. In other words, game training only skilled participants to search game images for targets important to the game, with no indication of transfer to the more natural scene search. Taken together, these results suggest no modification in overt allocation of attention. Either the skills that can be trained with action gaming are not powerful enough to influence information selection through eye movements, or action-game-learned skills are not used when deciding where to move the eyes.
Automated Selection Of Pictures In Sequences
NASA Technical Reports Server (NTRS)
Rorvig, Mark E.; Shelton, Robert O.
1995-01-01
Method of automated selection of film or video motion-picture frames for storage or examination developed. Beneficial in situations in which quantity of visual information available exceeds amount stored or examined by humans in reasonable amount of time, and/or necessary to reduce large number of motion-picture frames to few conveying significantly different information in manner intermediate between movie and comic book or storyboard. For example, computerized vision system monitoring industrial process programmed to sound alarm when changes in scene exceed normal limits.
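Read literally, the selection rule is a change detector over the frame sequence: keep a frame only when it differs from the last kept frame by more than a tolerance. A minimal Python interpretation (the metric and threshold are illustrative, not the NASA system's):

```python
import numpy as np

def select_keyframes(frames, tol=12.0):
    """frames: sequence of grayscale uint8 arrays; returns kept indices."""
    kept = [0]
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) -
                      frames[kept[-1]].astype(float)).mean()
        if diff > tol:                 # change exceeds the normal limits
            kept.append(i)
    return kept
```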
NASA Astrophysics Data System (ADS)
Rengarajan, Rajagopalan; Goodenough, Adam A.; Schott, John R.
2016-10-01
Many remote sensing applications rely on simulated scenes to perform complex interaction and sensitivity studies that are not possible with real-world scenes. These applications include the development and validation of new and existing algorithms, understanding of the sensor's performance prior to launch, and trade studies to determine ideal sensor configurations. The accuracy of these applications is dependent on the realism of the modeled scenes and sensors. The Digital Image and Remote Sensing Image Generation (DIRSIG) tool has been used extensively to model the complex spectral and spatial texture variation expected in large city-scale scenes and natural biomes. In the past, material properties that were used to represent targets in the simulated scenes were often assumed to be Lambertian in the absence of hand-measured directional data. However, this assumption presents a limitation for new algorithms that need to recognize the anisotropic behavior of targets. We have developed a new method to model and simulate large-scale, high-resolution terrestrial scenes by combining bi-directional reflectance distribution function (BRDF) products from Moderate Resolution Imaging Spectroradiometer (MODIS) data, high spatial resolution data, and hyperspectral data. The high spatial resolution data are used to separate materials and add textural variation to the scene, and the directional hemispherical reflectance from the hyperspectral data is used to adjust the magnitude of the MODIS BRDF. In this method, the shape of the BRDF is preserved since it changes very slowly, but its magnitude is varied based on the high-resolution texture and hyperspectral data. In addition to the MODIS-derived BRDF, target- or class-specific BRDF values or functions can also be applied to features of specific interest. The purpose of this paper is to discuss the techniques and the methodology used to model a forest region at high resolution. The scenes simulated using this method for varying view angles show the expected variations in reflectance due to the BRDF effects of the Harvard forest. The effectiveness of this technique in simulating real sensor data is evaluated by comparing the simulated data with Landsat 8 Operational Land Imager (OLI) data over the Harvard forest. Regions of interest were selected from the simulated and the real data for different targets and their Top-of-Atmosphere (TOA) radiances were compared. After adjusting for a scaling correction due to the difference in atmospheric conditions between the simulated and the real data, the TOA radiance is found to agree within 5% in the NIR band and 10% in the visible bands for forest targets under similar illumination conditions. The technique presented in this paper can be extended to other biomes (e.g., desert regions and agricultural regions) by using the appropriate geographic regions. Since the entire scene is constructed in a simulated environment, parameters such as BRDF or its effects can be analyzed for general or target-specific algorithm improvements. Also, the modeling and simulation techniques can be used as a baseline for the development and comparison of new sensor designs and to investigate the operational and environmental factors that affect sensor constellations such as the Sentinel and Landsat missions.
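The magnitude adjustment has a simple schematic form: keep the MODIS BRDF's angular shape but rescale it so its directional-hemispherical reflectance (DHR) matches the hyperspectral measurement for that material class. A Python sketch under simple assumptions (scalar BRDF of incident/exitant angles, midpoint quadrature over the hemisphere); all names are illustrative.

```python
import numpy as np

def dhr(brdf, theta_i=0.0, n=64):
    """Integrate brdf(theta_i, theta_r, phi) * cos(theta_r) * sin(theta_r)
    over the exitant hemisphere (midpoint rule)."""
    th = (np.arange(n) + 0.5) * (np.pi / 2) / n
    ph = (np.arange(n) + 0.5) * (2 * np.pi) / n
    T, P = np.meshgrid(th, ph, indexing="ij")
    vals = brdf(theta_i, T, P) * np.cos(T) * np.sin(T)
    return vals.sum() * (np.pi / 2 / n) * (2 * np.pi / n)

def rescale_brdf(brdf, target_dhr):
    """Preserve the BRDF shape; set its magnitude to match target_dhr."""
    scale = target_dhr / dhr(brdf)
    return lambda ti, tr, phi: scale * brdf(ti, tr, phi)

# sanity check: a Lambertian BRDF rho/pi integrates to DHR = rho
lambertian = lambda ti, tr, phi: 0.3 / np.pi
assert abs(dhr(lambertian) - 0.3) < 1e-3
```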
Radiometric calibration of wide-field camera system with an application in astronomy
NASA Astrophysics Data System (ADS)
Vítek, Stanislav; Nasyrova, Maria; Stehlíková, Veronika
2017-09-01
The camera response function (CRF) is widely used for describing the relationship between scene radiance and image brightness. The most common application of the CRF is High Dynamic Range (HDR) reconstruction of the radiance maps of imaged scenes from a set of frames with different exposures. The main goal of this work is to provide an overview of CRF estimation algorithms and compare their outputs with results obtained under laboratory conditions. These algorithms, typically designed for multimedia content, are unfortunately of little use with astronomical image data, mostly due to their nature (blur, noise, and long exposures). We therefore propose an optimization of selected methods for use in astronomical imaging applications. Results are experimentally verified on a wide-field camera system using a Digital Single Lens Reflex (DSLR) camera.
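What CRF estimation buys is the radiance-map merge. Below is a condensed, Debevec-style weighted HDR merge in Python; the inverse response g_inv here is only a placeholder for illustration, since the fitted curve depends on the particular camera.

```python
import numpy as np

def merge_hdr(frames, exposures, g_inv):
    """frames: list of uint8 images; exposures: shutter times in seconds;
    g_inv maps pixel value -> log exposure (the estimated inverse CRF)."""
    acc = np.zeros(frames[0].shape, dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(frames, exposures):
        z = img.astype(float)
        w = 1.0 - np.abs(z - 127.5) / 127.5      # hat weight: trust mid-tones
        acc += w * (g_inv(z) - np.log(t))        # per-pixel log radiance
        wsum += w
    return np.exp(acc / np.maximum(wsum, 1e-6))

g_inv = lambda z: np.log(z / 255.0 + 1e-4)       # placeholder response only
```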
Identification, definition and mapping of terrestrial ecosystems in interior Alaska
NASA Technical Reports Server (NTRS)
Anderson, J. H. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Two new, as yet unfinished vegetation maps are presented. These tend further to substantiate the belief that ERTS-1 imagery is a valuable mapping tool. Newly selected scenes show that vegetation interpretations can be refined through use of non-growing season imagery, particularly through the different spectral characteristics of vegetation lacking foliage and through the effect of vegetation structure on apparent snow cover. Scenes are now available for all test areas north of the Alaska Range except Mt. McKinley National Park. No support was obtained for the hypothesis that similar interband ratios, from two areas apparently different spectrally because of different sun angles, would indicate similar surface features. However, attempts to test this hypothesis have so far been casual.
A view not to be missed: Salient scene content interferes with cognitive restoration
Van der Jagt, Alexander P. N.; Craig, Tony; Brewer, Mark J.; Pearson, David G.
2017-01-01
Attention Restoration Theory (ART) states that built scenes place greater load on attentional resources than natural scenes. This is explained in terms of "hard" and "soft" fascination of built and natural scenes. Given a lack of direct empirical evidence for this assumption we propose that perceptual saliency of scene content can function as an empirically derived indicator of fascination. Saliency levels were established by measuring speed of scene category detection using a Go/No-Go detection paradigm. Experiment 1 shows that built scenes are more salient than natural scenes. Experiment 2 replicates these findings using greyscale images, ruling out a colour-based response strategy, and additionally shows that built objects in natural scenes affect saliency to a greater extent than the reverse. Experiment 3 demonstrates that the saliency of scene content is directly linked to cognitive restoration using an established restoration paradigm. Overall, these findings demonstrate an important link between the saliency of scene content and related cognitive restoration. PMID:28723975
ERIC Educational Resources Information Center
Champoux, Joseph E.
2005-01-01
Live-action and animated film remake scenes can show many topics typically taught in organizational behaviour and management courses. This article discusses, analyses and compares such scenes to identify parallel film scenes useful for teaching. The analysis assesses the scenes to decide which scene type, animated or live-action, more effectively…
Shultz, Mary
2006-01-01
Introduction: Given the common use of acronyms and initialisms in the health sciences, searchers may be entering these abbreviated terms rather than full phrases when searching online systems. The purpose of this study is to evaluate how various MEDLINE Medical Subject Headings (MeSH) interfaces map acronyms and initialisms to the MeSH vocabulary. Methods: The interfaces used in this study were: the PubMed MeSH database, the PubMed Automatic Term Mapping feature, the NLM Gateway Term Finder, and Ovid MEDLINE. Acronyms and initialisms were randomly selected from 2 print sources. The test data set included 415 randomly selected acronyms and initialisms whose related meanings were found to be MeSH terms. Each acronym and initialism was entered into each MEDLINE MeSH interface to determine if it mapped to the corresponding MeSH term. Separately, 46 commonly used acronyms and initialisms were tested. Results: While performance differed widely, the success rates were low across all interfaces for the randomly selected terms. The commonly used acronyms and initialisms yielded higher success rates across the interfaces, but the differences between the interfaces remained. Conclusion: Online interfaces do not always map medical acronyms and initialisms to their corresponding MeSH phrases. This may lead to inaccurate results and missed information if acronyms and initialisms are used in search strategies. PMID:17082832
Does scene context always facilitate retrieval of visual object representations?
Nakashima, Ryoichi; Yokosawa, Kazuhiko
2011-04-01
An object-to-scene binding hypothesis maintains that visual object representations are stored as part of a larger scene representation or scene context, and that scene context facilitates retrieval of object representations (see, e.g., Hollingworth, Journal of Experimental Psychology: Learning, Memory and Cognition, 32, 58-69, 2006). Support for this hypothesis comes from data using an intentional memory task. In the present study, we examined whether scene context always facilitates retrieval of visual object representations. In two experiments, we investigated whether the scene context facilitates retrieval of object representations, using a new paradigm in which a memory task is appended to a repeated-flicker change detection task. Results indicated that in normal scene viewing, in which many simultaneous objects appear, scene context facilitation of the retrieval of object representations (henceforth termed object-to-scene binding) occurred only when the observer was required to retain much information for a task (i.e., an intentional memory task).
Research in interactive scene analysis
NASA Technical Reports Server (NTRS)
Tenenbaum, J. M.; Garvey, T. D.; Weyl, S. A.; Wolf, H. C.
1975-01-01
An interactive scene interpretation system (ISIS) was developed as a tool for constructing and experimenting with man-machine and automatic scene analysis methods tailored for particular image domains. A recently developed region analysis subsystem based on the paradigm of Brice and Fennema is described. Using this subsystem a series of experiments was conducted to determine good criteria for initially partitioning a scene into atomic regions and for merging these regions into a final partition of the scene along object boundaries. Semantic (problem-dependent) knowledge is essential for complete, correct partitions of complex real-world scenes. An interactive approach to semantic scene segmentation was developed and demonstrated on both landscape and indoor scenes. This approach provides a reasonable methodology for segmenting scenes that cannot be processed completely automatically, and is a promising basis for a future automatic system. A program is described that can automatically generate strategies for finding specific objects in a scene based on manually designated pictorial examples.
Monteverdi, B
2001-01-01
The explosive growth of handheld personal digital assistants (PDAs) in health care has been nothing short of amazing. What applications (business and clinical) do these devices have in medicine, and what is their potential? PDAs are simple and intuitive; their applications require minimal interaction time, so they have minimal impact on work flow; the investment is small; and the lightweight form is relatively nonintrusive during a patient encounter. The devices are being used to capture charges for medical services at the point of care. Encounter capture, online prescription writing and other applications will soon come on the scene. This article discusses current and possible future uses for PDAs in health care, interfaces with other technologies and security concerns.
Some trends in aircraft design: Structures
NASA Technical Reports Server (NTRS)
Brooks, G. W.
1975-01-01
Trends and programs currently underway on the national scene to improve the structural interface in the aircraft design process are discussed. The National Aeronautics and Space Administration shares a partnership with the educational and industrial community in the development of the tools, the criteria, and the data base essential to produce high-performance and cost-effective vehicles. Several thrusts to build the technology in materials, structural concepts, analytical programs, and integrated design procedures essential for performing the trade-offs required to fashion competitive vehicles are presented. The application of advanced fibrous composites, improved methods for structural analysis, and continued attention to important peripheral problems of aeroelastic and thermal stability are among the topics considered.
Is There a Chance for a Standardised User Interface?
ERIC Educational Resources Information Center
Fletcher, Liz
1993-01-01
Issues concerning the implementation of standard user interfaces for CD-ROMs are discussed, including differing perceptions of the ideal interface, graphical user interfaces, user needs, and the standard protocols. It is suggested users should be able to select from a variety of user interfaces on each CD-ROM. (EA)
Thesaurus-Enhanced Search Interfaces.
ERIC Educational Resources Information Center
Shiri, Ali Asghar; Revie, Crawford; Chowdhury, Gobinda
2002-01-01
Discussion of user interfaces to information retrieval systems focuses on interfaces that incorporate thesauri as part of their searching and browsing facilities. Discusses research literature related to information searching behavior, information retrieval interface evaluation, search term selection, and query expansion; and compares thesaurus…
Autofocus algorithm for synthetic aperture radar imaging with large curvilinear apertures
NASA Astrophysics Data System (ADS)
Bleszynski, E.; Bleszynski, M.; Jaroszewicz, T.
2013-05-01
An approach to autofocusing for large curved synthetic aperture radar (SAR) apertures is presented. Its essential feature is that phase corrections are extracted not directly from SAR images, but rather from reconstructed SAR phase-history data representing windowed patches of the scene, of sizes sufficiently small to allow the linearization of the forward- and back-projection formulae. The algorithm processes data associated with each patch independently and in two steps. The first step employs a phase-gradient-type method in which phase corrections compensating for (possibly rapid) trajectory perturbations are estimated from the reconstructed phase history for the dominant scattering point on the patch. The second step uses the phase-gradient-corrected data and extracts the absolute phase value, removing phase ambiguities, reducing possible imperfections of the first stage, and providing the distances between the sensor and the scattering point with accuracy comparable to the wavelength. The features of the proposed autofocusing method are illustrated in its applications to intentionally corrupted small-scene 2006 Gotcha data. The examples include the extraction of absolute phases (ranges) for selected prominent point targets. They are then used to focus the scene and determine relative target-target distances.
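A minimal sketch of the phase-gradient step described above, assuming the reconstructed phase history of a dominant scatterer is available as a 1-D complex array indexed by pulse; names and shapes are assumptions, not the authors' implementation.

```python
# Phase-gradient-type correction from a dominant scatterer's phase history.
import numpy as np

def phase_gradient_correction(g):
    """Estimate a phase-error profile from pulse-to-pulse phase differences
    of the dominant scatterer and return the compensating phasors."""
    # Conjugate product of neighbouring pulses gives the local phase gradient.
    grad = np.angle(g[1:] * np.conj(g[:-1]))
    # Integrate the gradient to recover the phase error (up to a constant).
    phase_err = np.concatenate(([0.0], np.cumsum(grad)))
    return np.exp(-1j * phase_err)   # multiply the data by this to compensate

# Usage: corrected = g * phase_gradient_correction(g)
```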
Differential Visual Processing of Animal Images, with and without Conscious Awareness
Zhu, Weina; Drewes, Jan; Peatfield, Nicholas A.; Melcher, David
2016-01-01
The human visual system can quickly and efficiently extract categorical information from a complex natural scene. The rapid detection of animals in a scene is one compelling example of this phenomenon, and it suggests the automatic processing of at least some types of categories with little or no attentional requirements (Li et al., 2002, 2005). The aim of this study is to investigate whether the remarkable capability to categorize complex natural scenes exists in the absence of awareness, based on recent reports that “invisible” stimuli, which do not reach conscious awareness, can still be processed by the human visual system (Pasley et al., 2004; Williams et al., 2004; Fang and He, 2005; Jiang et al., 2006, 2007; Kaunitz et al., 2011a). In two experiments, we recorded event-related potentials (ERPs) in response to animal and non-animal/vehicle stimuli in both aware and unaware conditions in a continuous flash suppression (CFS) paradigm. Our results indicate that even in the “unseen” condition, the brain responds differently to animal and non-animal/vehicle images, consistent with rapid activation of animal-selective feature detectors prior to, or outside of, suppression by the CFS mask. PMID:27790106
Neural codes of seeing architectural styles
Choo, Heeyoung; Nasar, Jack L.; Nikrahei, Bardia; Walther, Dirk B.
2017-01-01
Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people’s visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding suggests that the neural correlates of the visual perception of architectural styles stem from style-specific complex visual structure beyond the simple features computed in V1. Surprisingly, the network of brain regions representing architectural styles included the fusiform face area (FFA) in addition to several scene-selective regions. Hierarchical clustering of error patterns further revealed that the FFA participated to a much larger extent in the neural encoding of architectural styles than entry-level scene categories. We conclude that the FFA is involved in fine-grained neural encoding of scenes at a subordinate-level, in our case, architectural styles of buildings. This study for the first time shows how the human visual system encodes visual aspects of architecture, one of the predominant and longest-lasting artefacts of human culture. PMID:28071765
Miskovic, Vladimir; Martinovic, Jasna; Wieser, Matthias M.; Petro, Nathan M.; Bradley, Margaret M.; Keil, Andreas
2015-01-01
Emotionally arousing scenes readily capture visual attention, prompting amplified neural activity in sensory regions of the brain. The physical stimulus features and related information channels in the human visual system that contribute to this modulation, however, are not known. Here, we manipulated low-level physical parameters of complex scenes varying in hedonic valence and emotional arousal in order to target the relative contributions of luminance based versus chromatic visual channels to emotional perception. Stimulus-evoked brain electrical activity was measured during picture viewing and used to quantify neural responses sensitive to lower-tier visual cortical involvement (steady-state visual evoked potentials) as well as the late positive potential, reflecting a more distributed cortical event. Results showed that the enhancement for emotional content was stimulus-selective when examining the steady-state segments of the evoked visual potentials. Response amplification was present only for low spatial frequency, grayscale stimuli, and not for high spatial frequency, red/green stimuli. In contrast, the late positive potential was modulated by emotion regardless of the scene’s physical properties. Our findings are discussed in relation to neurophysiologically plausible constraints operating at distinct stages of the cortical processing stream. PMID:25640949
Cant, Jonathan S; Xu, Yaoda
2017-02-01
Our visual system can extract summary statistics from large collections of objects without forming detailed representations of the individual objects in the ensemble. In a region in ventral visual cortex encompassing the collateral sulcus and the parahippocampal gyrus and overlapping extensively with the scene-selective parahippocampal place area (PPA), we have previously reported fMRI adaptation to object ensembles when ensemble statistics repeated, even when local image features differed across images (e.g., two different images of the same strawberry pile). We additionally showed that this ensemble representation is similar to (but still distinct from) how visual texture patterns are processed in this region and is not explained by appealing to differences in the color of the elements that make up the ensemble. To further explore the nature of ensemble representation in this brain region, here we used PPA as our ROI and investigated in detail how the shape and surface properties (i.e., both texture and color) of the individual objects constituting an ensemble affect the ensemble representation in anterior-medial ventral visual cortex. We photographed object ensembles of stone beads that varied in shape and surface properties. A given ensemble always contained beads of the same shape and surface properties (e.g., an ensemble of star-shaped rose quartz beads). A change to the shape and/or surface properties of all the beads in an ensemble resulted in a significant release from adaptation in PPA compared with conditions in which no ensemble feature changed. In contrast, in the object-sensitive lateral occipital area (LO), we only observed a significant release from adaptation when the shape of the ensemble elements varied, and found no significant results in additional scene-sensitive regions, namely, the retrosplenial complex and occipital place area. Together, these results demonstrate that the shape and surface properties of the individual objects comprising an ensemble both contribute significantly to object ensemble representation in anterior-medial ventral visual cortex and further demonstrate a functional dissociation between object- (LO) and scene-selective (PPA) visual cortical regions and within the broader scene-processing network itself.
A first proposal for a general description model of forensic traces
NASA Astrophysics Data System (ADS)
Lindauer, Ina; Schäler, Martin; Vielhauer, Claus; Saake, Gunter; Hildebrandt, Mario
2012-06-01
In recent years, the amount of digitally captured traces at crime scenes has increased rapidly. There are various kinds of such traces, such as pick marks on locks, latent fingerprints on various surfaces, and different micro traces. These traces differ from each other not only in kind but also in the information they provide. Every kind of trace has its own properties (e.g., minutiae for fingerprints, or raking traces for locks), but there are also large amounts of metadata that all traces have in common, such as location, time and other additional information relating to the crime scene. For selected types of crime scene traces, type-specific databases already exist, such as ViCLAS for sexual offences, IBIS for ballistic forensics or AFIS for fingerprints. These existing forensic databases differ strongly in their trace description models. For forensic experts it would be beneficial to work with a single database capable of handling all possible forensic traces acquired at a crime scene. This is especially the case when different kinds of traces are interrelated (e.g., fingerprints and ballistic marks on a bullet casing). Unfortunately, current research on interrelated traces, as well as on general forensic data models and structures, is not mature enough to build such an encompassing forensic database. Nevertheless, recent advances in the field of contact-less scanning make it possible to acquire different kinds of traces with the same device. The data for these traces is therefore structured similarly, which simplifies the design of a general forensic data model for different kinds of traces. In this paper we introduce a first common description model for different forensic trace types. Furthermore, for selected trace types we apply the phases of the well-established database schema development process, transferring expert knowledge from the corresponding forensic fields into an extendible, database-driven, generalised forensic description model. The trace types considered here are fingerprint traces, traces at locks, micro traces and ballistic traces. Based on these basic trace types, combined traces (multiple or overlapping fingerprints, fingerprints on bullet casings, etc.) and partial traces are also considered.
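A sketch of what a generalised trace description could look like, assuming a split between shared crime-scene metadata and type-specific properties; the class and field names are illustrative, not the schema proposed in the paper.

```python
# Common metadata shared by all trace types, plus a type-specific property bag
# and explicit links between interrelated traces.
from dataclasses import dataclass, field

@dataclass
class TraceRecord:
    trace_id: str
    trace_type: str                 # "fingerprint", "lock", "micro", "ballistic"
    location: str                   # where at the crime scene it was acquired
    acquired_at: str                # acquisition timestamp
    related_traces: list[str] = field(default_factory=list)   # interrelations
    properties: dict[str, object] = field(default_factory=dict)  # type-specific

fp = TraceRecord("T-001", "fingerprint", "bullet casing, kitchen",
                 "2012-06-01T10:30",
                 properties={"minutiae_count": 23, "surface": "brass"})
ball = TraceRecord("T-002", "ballistic", "bullet casing, kitchen",
                   "2012-06-01T10:30", related_traces=["T-001"],
                   properties={"calibre": "9mm", "raking_marks": True})
```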
Colour agnosia impairs the recognition of natural but not of non-natural scenes.
Nijboer, Tanja C W; Van Der Smagt, Maarten J; Van Zandvoort, Martine J E; De Haan, Edward H F
2007-03-01
Scene recognition can be enhanced by appropriate colour information, yet the level of visual processing at which colour exerts its effects is still unclear. It has been suggested that colour supports low-level sensory processing, while others have claimed that colour information aids semantic categorization and recognition of objects and scenes. We investigated the effect of colour on scene recognition in a case of colour agnosia, M.A.H. In a scene identification task, participants had to name images of natural or non-natural scenes in six different formats. Irrespective of scene format, M.A.H. was much slower on the natural than on the non-natural scenes. As expected, neither M.A.H. nor control participants showed any difference in performance for the non-natural scenes. However, for the natural scenes, appropriate colour facilitated scene recognition in control participants (i.e., shorter reaction times), whereas M.A.H.'s performance did not differ across formats. Our data thus support the hypothesis that the effect of colour occurs at the level of learned associations.
ERIC Educational Resources Information Center
Van Brunt, Thomas
1975-01-01
Lists books into general categories of those useful for advanced technicians, and those useful for the hobbyist or sculptor-painter. Available from: Theatre Design and Technology, Journal of the U.S. Institute for Theatre Technology, 1 Hillside Road, Newark, Delaware 19711. Subscriptions: subscription to Theatre Design and Technology is a…
The Film. The Bobbs-Merrill Series in Composition and Rhetoric.
ERIC Educational Resources Information Center
Sarris, Andrew, Ed.
Prefaced by a brief discussion of early films and film criticism, 10 essays treat selected modern directors and their works. Essays on Stanley Kubrick's "Lolita," the early works of Elia Kazan, and the response of French critics to Jerry Lewis explore the American scene, while Francois Truffaut's "Jules and Jim," the early work of Robert Bresson,…
Image Enhancement for Astronomical Scenes
2013-09-01
…address this problem in the context of natural scenes. However, these techniques often misbehave when confronted with low-SNR scenes that are also mostly empty space. We compare two classes of…
Reducing Wrong Patient Selection Errors: Exploring the Design Space of User Interface Techniques
Sopan, Awalin; Plaisant, Catherine; Powsner, Seth; Shneiderman, Ben
2014-01-01
Wrong patient selection errors are a major issue for patient safety; from ordering medication to performing surgery, the stakes are high. Widespread adoption of Electronic Health Record (EHR) and Computerized Provider Order Entry (CPOE) systems makes patient selection using a computer screen a frequent task for clinicians. Careful design of the user interface can help mitigate the problem by helping providers recall their patients’ identities, accurately select their names, and spot errors before orders are submitted. We propose a catalog of twenty seven distinct user interface techniques, organized according to a task analysis. An associated video demonstrates eighteen of those techniques. EHR designers who consider a wider range of human-computer interaction techniques could reduce selection errors, but verification of efficacy is still needed. PMID:25954415
Suomi-NPP VIIRS Day-Night Band On-Orbit Calibration and Performance
NASA Technical Reports Server (NTRS)
Chen, Hongda; Xiong, Xiaoxiong; Sun, Chengbo; Chen, Xuexia; Chiang, Kwofu
2017-01-01
The Suomi National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite (VIIRS) instrument has successfully operated since its launch in October 2011. The VIIRS day-night band (DNB) is a panchromatic channel covering wavelengths from 0.5 to 0.9 microns that is capable of observing Earth scenes during both daytime and nighttime at a spatial resolution of 750 m. To cover the large dynamic range, the DNB operates at low-, middle-, and high-gain stages, and it uses an on-board solar diffuser (SD) for its low-gain stage calibration. The SD observations also provide a means to compute the gain ratios of low-to-middle and middle-to-high gain stages. This paper describes the DNB on-orbit calibration methodology used by the VIIRS characterization support team in supporting the NASA Earth science community with consistent VIIRS sensor data records made available by the land science investigator-led processing systems. It provides an assessment and update of the DNB on-orbit performance, including the SD degradation in the DNB spectral range, detector gain and gain ratio trending, and stray-light contamination and its correction. Also presented in this paper are performance validations based on Earth scenes and lunar observations, and comparisons to the calibration methodology used by the operational interface data processing segment.
Brockmole, James R; Henderson, John M
2006-07-01
When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.
Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude
2017-01-01
Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250 ms indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model and resolved the emerging neural representations of scene size. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain. PMID:27039703
Moors, Pieter; Boelens, David; van Overwalle, Jaana; Wagemans, Johan
2016-07-01
A recent study showed that scenes with an object-background relationship that is semantically incongruent break interocular suppression faster than scenes with a semantically congruent relationship. These results implied that semantic relations between the objects and the background of a scene could be extracted in the absence of visual awareness of the stimulus. In the current study, we assessed the replicability of this finding and tried to rule out an alternative explanation dependent on low-level differences between the stimuli. Furthermore, we used a Bayesian analysis to quantify the evidence in favor of the presence or absence of a scene-congruency effect. Across three experiments, we found no convincing evidence for a scene-congruency effect or a modulation of scene congruency by scene inversion. These findings question the generalizability of previous observations and cast doubt on whether genuine semantic processing of object-background relationships in scenes can manifest during interocular suppression. © The Author(s) 2016.
Remembering faces and scenes: The mixed-category advantage in visual working memory.
Jiang, Yuhong V; Remington, Roger W; Asaad, Anthony; Lee, Hyejin J; Mikkalson, Taylor C
2016-09-01
We examined the mixed-category memory advantage for faces and scenes to determine how domain-specific cortical resources constrain visual working memory. Consistent with previous findings, visual working memory for a display of 2 faces and 2 scenes was better than that for a display of 4 faces or 4 scenes. This pattern was unaffected by manipulations of encoding duration. However, the mixed-category advantage was carried solely by faces: Memory for scenes was not better when scenes were encoded with faces rather than with other scenes. The asymmetry between faces and scenes was found when items were presented simultaneously or sequentially, centrally, or peripherally, and when scenes were drawn from a narrow category. A further experiment showed a mixed-category advantage in memory for faces and bodies, but not in memory for scenes and objects. The results suggest that unique category-specific interactions contribute significantly to the mixed-category advantage in visual working memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Consolidation of a WSN and Minimax Method to Rapidly Neutralise Intruders in Strategic Installations
Conesa-Muñoz, Jesus; Ribeiro, Angela
2012-01-01
Due to the sensitive international situation caused by still-recent terrorist attacks, there is a common need to protect the safety of large spaces such as government buildings, airports and power stations. To address this problem, developments in several research fields, such as video and cognitive audio, decision support systems, human interface, computer architecture, communications networks and communications security, should be integrated with the goal of achieving advanced security systems capable of checking all of the specified requirements and spanning the gap that presently exists in the current market. This paper describes the implementation of a decision system for crisis management in infrastructural building security. Specifically, it describes the implementation of a decision system in the management of building intrusions. The positions of the unidentified persons are reported with the help of a Wireless Sensor Network (WSN). The goal is to achieve an intelligent system capable of making the best decision in real time in order to quickly neutralise one or more intruders who threaten strategic installations. It is assumed that the intruders’ behaviour is inferred through sequences of sensors’ activations and their fusion. This article presents a general approach to selecting the optimum operation from the available neutralisation strategies based on a Minimax algorithm. The distances among different scenario elements will be used to measure the risk of the scene, so a path planning technique will be integrated in order to attain a good performance. Different actions to be executed over the elements of the scene such as moving a guard, blocking a door or turning on an alarm will be used to neutralise the crisis. This set of actions executed to stop the crisis is known as the neutralisation strategy. Finally, the system has been tested in simulations of real situations, and the results have been evaluated according to the final state of the intruders. In 86.5% of the cases, the system achieved the capture of the intruders, and in 59.25% of the cases, they were intercepted before they reached their objective. PMID:22737008
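A toy sketch of selecting a neutralisation strategy by Minimax, assuming a scalar risk function derived from the scene (e.g., intruder-asset distances); the strategy names and the risk values are illustrative, not the paper's implementation.

```python
# Pick the strategy whose worst-case risk over possible intruder moves is lowest.
def minimax_strategy(strategies, intruder_moves, risk):
    return min(strategies,
               key=lambda s: max(risk(s, m) for m in intruder_moves))

# Hypothetical risk table: lower is better for the defender.
strategies = ["move_guard", "block_door", "sound_alarm"]
moves = ["advance", "retreat", "split"]
risk_table = {("move_guard", "advance"): 0.4, ("move_guard", "retreat"): 0.2,
              ("move_guard", "split"): 0.7, ("block_door", "advance"): 0.3,
              ("block_door", "retreat"): 0.5, ("block_door", "split"): 0.6,
              ("sound_alarm", "advance"): 0.8, ("sound_alarm", "retreat"): 0.1,
              ("sound_alarm", "split"): 0.5}
best = minimax_strategy(strategies, moves, lambda s, m: risk_table[(s, m)])
print(best)   # "block_door": worst case 0.6, versus 0.7 and 0.8 for the others
```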
Neural correlates of contextual cueing are modulated by explicit learning.
Westerberg, Carmen E; Miller, Brennan B; Reber, Paul J; Cohen, Neal J; Paller, Ken A
2011-10-01
Contextual cueing refers to the facilitated ability to locate a particular visual element in a scene due to prior exposure to the same scene. This facilitation is thought to reflect implicit learning, as it typically occurs without the observer's knowledge that scenes repeat. Unlike most other implicit learning effects, contextual cueing can be impaired following damage to the medial temporal lobe. Here we investigated neural correlates of contextual cueing and explicit scene memory in two participant groups. Only one group was explicitly instructed about scene repetition. Participants viewed a sequence of complex scenes that depicted a landscape with five abstract geometric objects. Superimposed on each object was a letter T or L rotated left or right by 90°. Participants responded according to the target letter (T) orientation. Responses were highly accurate for all scenes. Response speeds were faster for repeated versus novel scenes. The magnitude of this contextual cueing did not differ between the two groups. Also, in both groups repeated scenes yielded reduced hemodynamic activation compared with novel scenes in several regions involved in visual perception and attention, and reductions in some of these areas were correlated with response-time facilitation. In the group given instructions about scene repetition, recognition memory for scenes was superior and was accompanied by medial temporal and more anterior activation. Thus, strategic factors can promote explicit memorization of visual scene information, which appears to engage additional neural processing beyond what is required for implicit learning of object configurations and target locations in a scene. Copyright © 2011 Elsevier Ltd. All rights reserved.
Guillery-Girard, Bérengère; Clochon, Patrice; Giffard, Bénédicte; Viard, Armelle; Egler, Pierre-Jean; Baleyte, Jean-Marc; Eustache, Francis; Dayan, Jacques
2013-09-01
"Travelling in time," a central feature of episodic memory is severely affected among individuals with Post Traumatic Stress Disorder (PTSD) with two opposite effects: vivid traumatic memories are unorganized in temporality (bottom-up processes), non-traumatic personal memories tend to lack spatio-temporal details and false recognitions occur more frequently that in the general population (top-down processes). To test the effect of these two types of processes (i.e. bottom-up and top-down) on emotional memory, we conducted two studies in healthy and traumatized adolescents, a period of life in which vulnerability to emotion is particularly high. Using negative and neutral images selected from the international affective picture system (IAPS), stimuli were divided into perceptual images (emotion generated by perceptual details) and conceptual images (emotion generated by the general meaning of the material). Both categories of stimuli were then used, along with neutral pictures, in a memory task with two phases (encoding and recognition). In both populations, we reported a differential effect of the emotional material on encoding and recognition. Negative perceptual scenes induced an attentional capture effect during encoding and enhanced the recollective distinctiveness. Conversely, the encoding of conceptual scenes was similar to neutral ones, but the conceptual relatedness induced false memories at retrieval. However, among individuals with PTSD, two subgroups of patients were identified. The first subgroup processed the scenes faster than controls, except for the perceptual scenes, and obtained similar performances to controls in the recognition task. The second subgroup group desmonstrated an attentional deficit in the encoding task with no benefit from the distinctiveness associated with negative perceptual scenes on memory performances. These findings provide a new perspective on how negative emotional information may have opposite influences on memory in normal and traumatized individuals. It also gives clues to understand how intrusive memories and overgeneralization takes place in PTSD. Copyright © 2013 Elsevier Ltd. All rights reserved.
Initial Verification of GEOS-4 Aerosols Using CALIPSO and MODIS: Scene Classification
NASA Technical Reports Server (NTRS)
Welton, Ellsworth J.; Colarco, Peter R.; Hlavka, Dennis; Levy, Robert C.; Vaughan, Mark A.; daSilva, Arlindo
2007-01-01
A-train sensors such as MODIS and MISR provide column aerosol properties, and in the process a means of estimating aerosol type (e.g. smoke vs. dust). Correct classification of aerosol type is important because retrievals are often dependent upon selection of the right aerosol model. In addition, aerosol scene classification helps place the retrieved products in context for comparisons and analysis with aerosol transport models. The recent addition of CALIPSO to the A-train now provides a means of classifying aerosol distribution with altitude. CALIPSO level 1 products include profiles of attenuated backscatter at 532 and 1064 nm, and depolarization at 532 nm. Backscatter intensity, wavelength ratio, and depolarization provide information on the vertical profile of aerosol concentration, size, and shape. Thus similar estimates of aerosol type using MODIS or MISR are possible with CALIPSO, and the combination of data from all sensors provides a means of 3D aerosol scene classification. The NASA Goddard Earth Observing System general circulation model and data assimilation system (GEOS-4) provides global 3D aerosol mass for sulfate, sea salt, dust, and black and organic carbon. A GEOS-4 aerosol scene classification algorithm has been developed to provide estimates of aerosol mixtures along the flight track for NASA's Geoscience Laser Altimeter System (GLAS) satellite lidar. GLAS launched in 2003 and did not have the benefit of depolarization measurements or other sensors from the A-train. Aerosol typing from GLAS data alone was not possible, and the GEOS-4 aerosol classifier has been used to identify aerosol type and improve the retrieval of GLAS products. Here we compare 3D aerosol scene classification using CALIPSO and MODIS with the GEOS-4 aerosol classifier. Dust, smoke, and pollution examples will be discussed in the context of providing an initial verification of the 3D GEOS-4 aerosol products. Prior model verification has only been attempted with surface mass comparisons and column optical depth from AERONET and MODIS.
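A sketch of a mass-fraction-based aerosol type estimate along a flight track, assuming per-cell species masses from the model are available; the thresholds and field names are assumptions, not the GEOS-4 classifier's actual rules.

```python
# Classify a grid cell's dominant aerosol type from model species masses.
def classify_aerosol(masses):
    """masses: dict of species -> mass (e.g., ug/m3) for one grid cell/layer."""
    total = sum(masses.values())
    if total == 0:
        return "clean"
    frac = {k: v / total for k, v in masses.items()}
    if frac.get("dust", 0) > 0.5:
        return "dust"
    if frac.get("black_carbon", 0) + frac.get("organic_carbon", 0) > 0.5:
        return "smoke"
    if frac.get("sulfate", 0) > 0.5:
        return "pollution"
    return "mixed"

print(classify_aerosol({"dust": 8.0, "sulfate": 1.0, "sea_salt": 0.5,
                        "black_carbon": 0.2, "organic_carbon": 0.3}))  # "dust"
```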
Satellite image maps of Pakistan
1997-01-01
Georeferenced Landsat satellite image maps of Pakistan are now being made available for purchase from the U.S. Geological Survey (USGS). The first maps to be released are a series of Multi-Spectral Scanner (MSS) color image maps compiled from Landsat scenes taken before 1979. The Pakistan image maps were originally developed by USGS as an aid for geologic and general terrain mapping in support of the Coal Resource Exploration and Development Program in Pakistan (COALREAP). COALREAP, a cooperative program between the USGS, the United States Agency for International Development, and the Geological Survey of Pakistan, was in effect from 1985 through 1994. The Pakistan MSS image maps (bands 1, 2, and 4) are available as a full-country mosaic of 72 Landsat scenes at a scale of 1:2,000,000, and in 7 regional sheets covering various portions of the entire country at a scale of 1:500,000. The scenes used to compile the maps were selected from imagery available at the EROS Data Center (EDC), Sioux Falls, S. Dak. Where possible, preference was given to cloud-free and snow-free scenes that displayed similar stages of seasonal vegetation development. The data for the MSS scenes were resampled from the original 80-meter resolution to 50-meter picture elements (pixels) and digitally transformed to a geometrically corrected Lambert conformal conic projection. The cubic convolution algorithm was used during rotation and resampling. The 50-meter pixel size allows for such data to be imaged at a scale of 1:250,000 without degradation; for cost and convenience considerations, however, the maps were printed at 1:500,000 scale. The seven regional sheets have been named according to the main province or area covered. The 50-meter data were averaged to 150-meter pixels to generate the country image on a single sheet at 1:2,000,000 scale.
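A minimal sketch of the 50-meter to 150-meter averaging described above: each output pixel is the mean of a 3x3 block of input pixels. The array name and sizes are illustrative.

```python
# Average non-overlapping 3x3 blocks of a 2-D band to coarsen 50 m -> 150 m.
import numpy as np

def block_average(img, factor=3):
    """Average non-overlapping factor x factor blocks of a 2-D array."""
    h, w = (img.shape[0] // factor) * factor, (img.shape[1] // factor) * factor
    img = img[:h, :w]                        # trim edges not divisible by factor
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

pixels_50m = np.random.rand(900, 900)        # placeholder 50 m band
pixels_150m = block_average(pixels_50m)      # 300 x 300 array at 150 m
```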
NASA Astrophysics Data System (ADS)
den Hollander, Richard J. M.; Bouma, Henri; van Rest, Jeroen H. C.; ten Hove, Johan-Martijn; ter Haar, Frank B.; Burghouts, Gertjan J.
2017-10-01
Video analytics is essential for managing large quantities of raw data that are produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and in the optical chain, so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' developing needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing metadata describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene such as lighting conditions or measures for scene complexity (e.g. number of people). A second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. A third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. In order to support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large-scale ad-hoc networks.
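A sketch of the first part of the solution, the register of optical-chain and scene metadata, assuming a simple in-memory store with change signalling; all names are illustrative, not the paper's data model.

```python
# Register of per-camera optical-chain and scene parameters, with a callback
# that signals the VSS administrator whenever a stored assessment changes.
from dataclasses import dataclass, asdict

@dataclass
class CameraState:
    camera_id: str
    intrinsics: tuple          # e.g., (fx, fy, cx, cy)
    pose: tuple                # extrinsic calibration (position, orientation)
    lighting_lux: float        # estimated scene illumination
    people_count: int          # one possible measure of scene complexity

class Register:
    def __init__(self):
        self._states = {}

    def update(self, state, on_change):
        """Store the new assessment; signal on any change to the setup."""
        old = self._states.get(state.camera_id)
        if old is not None and asdict(old) != asdict(state):
            on_change(state.camera_id, old, state)
        self._states[state.camera_id] = state

reg = Register()
reg.update(CameraState("ptz-1", (1200, 1200, 640, 360), ((0, 0, 3), (0, 10, 0)),
                       450.0, 12),
           on_change=lambda cid, a, b: print(f"{cid}: setup changed"))
```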
Scene incongruity and attention.
Mack, Arien; Clarke, Jason; Erol, Muge; Bert, John
2017-02-01
Does scene incongruity, (a mismatch between scene gist and a semantically incongruent object), capture attention and lead to conscious perception? We explored this question using 4 different procedures: Inattention (Experiment 1), Scene description (Experiment 2), Change detection (Experiment 3), and Iconic Memory (Experiment 4). We found no differences between scene incongruity and scene congruity in Experiments 1, 2, and 4, although in Experiment 3 change detection was faster for scenes containing an incongruent object. We offer an explanation for why the change detection results differ from the results of the other three experiments. In all four experiments, participants invariably failed to report the incongruity and routinely mis-described it by normalizing the incongruent object. None of the results supports the claim that semantic incongruity within a scene invariably captures attention and provide strong evidence of the dominant role of scene gist in determining what is perceived. Copyright © 2016 Elsevier Inc. All rights reserved.
ERBE Geographic Scene and Monthly Snow Data
NASA Technical Reports Server (NTRS)
Coleman, Lisa H.; Flug, Beth T.; Gupta, Shalini; Kizer, Edward A.; Robbins, John L.
1997-01-01
The Earth Radiation Budget Experiment (ERBE) is a multisatellite system designed to measure the Earth's radiation budget. The ERBE data processing system consists of several software packages or sub-systems, each designed to perform a particular task. The primary task of the Inversion Subsystem is to reduce satellite altitude radiances to fluxes at the top of the Earth's atmosphere. To accomplish this, angular distribution models (ADM's) are required. These ADM's are a function of viewing and solar geometry and of the scene type as determined by the ERBE scene identification algorithm which is a part of the Inversion Subsystem. The Inversion Subsystem utilizes 12 scene types which are determined by the ERBE scene identification algorithm. The scene type is found by combining the most probable cloud cover, which is determined statistically by the scene identification algorithm, with the underlying geographic scene type. This Contractor Report describes how the geographic scene type is determined on a monthly basis.
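As a rough illustration of the combination step described above (most probable cloud cover merged with the geographic scene type to yield one of the 12 scene types), here is a hypothetical sketch; the geographic classes and cloud-fraction boundaries are assumptions, not the ERBE algorithm's actual definitions.

```python
# Combine an assumed cloud-cover class with a geographic type to name a scene
# type; four cloud classes over three surface types is one way to reach 12.
GEO_TYPES = ["ocean", "land", "desert"]
CLOUD_CLASSES = ["clear", "partly_cloudy", "mostly_cloudy", "overcast"]

def scene_type(geo, cloud_fraction):
    if geo not in GEO_TYPES:
        raise ValueError(f"unknown geographic type: {geo}")
    bounds = [0.05, 0.5, 0.95]      # assumed cloud-fraction class boundaries
    idx = sum(cloud_fraction > b for b in bounds)
    return f"{CLOUD_CLASSES[idx]}_{geo}"

print(scene_type("ocean", 0.3))     # "partly_cloudy_ocean"
```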
Bag of Lines (BoL) for Improved Aerial Scene Representation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sridharan, Harini; Cheriyadat, Anil M.
2014-09-22
Feature representation is a key step in automated visual content interpretation. In this letter, we present a robust feature representation technique, referred to as bag of lines (BoL), for high-resolution aerial scenes. The proposed technique involves extracting and compactly representing low-level line primitives from the scene. The compact scene representation is generated by counting the different types of lines representing various linear structures in the scene. Through extensive experiments, we show that the proposed scene representation is invariant to scale changes and scene conditions and can discriminate urban scene categories accurately. We compare the BoL representation with the popular scale-invariant feature transform (SIFT) and Gabor wavelets for their classification and clustering performance on an aerial scene database consisting of images acquired by sensors with different spatial resolutions. The proposed BoL representation outperforms the SIFT- and Gabor-based representations.
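For illustration only, a rough sketch of a line-count descriptor in the spirit of BoL, substituting a probabilistic Hough transform for the authors' line-primitive extraction; the orientation/length binning and all thresholds are assumptions, not the published method.

```python
# Histogram line segments by orientation and length to form a scene descriptor.
import numpy as np
import cv2

def line_histogram(gray, n_orient=4, n_len=3, max_len=200.0):
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                            minLineLength=20, maxLineGap=5)
    hist = np.zeros((n_orient, n_len))
    if lines is None:
        return hist.ravel()
    for x1, y1, x2, y2 in lines[:, 0]:
        theta = np.arctan2(y2 - y1, x2 - x1) % np.pi      # segment orientation
        length = np.hypot(x2 - x1, y2 - y1)               # segment length
        o = min(int(theta / (np.pi / n_orient)), n_orient - 1)
        b = min(int(length / (max_len / n_len)), n_len - 1)
        hist[o, b] += 1
    return hist.ravel() / max(hist.sum(), 1)              # normalised counts

# descriptor = line_histogram(cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE))
```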
Updating representations of learned scenes.
Finlay, Cory A; Motes, Michael A; Kozhevnikov, Maria
2007-05-01
Two experiments were designed to compare scene recognition reaction time (RT) and accuracy patterns following observer versus scene movement. In Experiment 1, participants memorized a scene from a single perspective. Then, either the scene was rotated or the participants moved (0° to 360°, in 36° increments) around the scene, and participants judged whether the objects' positions had changed. Regardless of whether the scene was rotated or the observer moved, RT increased with greater angular distance between judged and encoded views. In Experiment 2, we varied the delay (0, 6, or 12 s) between scene encoding and locomotion. Regardless of the delay, however, accuracy decreased and RT increased with angular distance. Thus, our data show that observer movement does not necessarily update representations of spatial layouts and raise questions about the effects of duration limitations and encoding points of view on the automatic spatial updating of representations of scenes.
Optic Flow Dominates Visual Scene Polarity in Causing Adaptive Modification of Locomotor Trajectory
NASA Technical Reports Server (NTRS)
Nomura, Y.; Mulavara, A. P.; Richards, J. T.; Brady, R.; Bloomberg, Jacob J.
2005-01-01
Locomotion and posture are influenced and controlled by vestibular, visual and somatosensory information. Optic flow and scene polarity are two characteristics of a visual scene that have been identified as being critical in how they affect perceived body orientation and self-motion. The goal of this study was to determine the role of optic flow and visual scene polarity on adaptive modification in locomotor trajectory. Two computer-generated virtual reality scenes were shown to subjects during 20 minutes of treadmill walking. One scene was a highly polarized scene while the other was composed of objects displayed in a non-polarized fashion. Both virtual scenes depicted constant rate self-motion equivalent to walking counterclockwise around the perimeter of a room. Subjects performed Stepping Tests blindfolded before and after scene exposure to assess adaptive changes in locomotor trajectory. Subjects showed a significant difference in heading direction between pre- and post-adaptation stepping tests when exposed to either scene during treadmill walking. However, there was no significant difference in the subjects' heading direction between the two visual scene polarity conditions. Therefore, it was inferred from these data that optic flow has a greater role than visual polarity in influencing adaptive locomotor function.
Cornelissen, Tim H W; Võ, Melissa L-H
2017-01-01
People have an amazing ability to identify objects and scenes with only a glimpse. How automatic is this scene and object identification? Are scene and object semantics (let alone their semantic congruity) processed to a degree that modulates ongoing gaze behavior even if they are irrelevant to the task at hand? Objects that do not fit the semantics of the scene (e.g., a toothbrush in an office) are typically fixated longer and more often than objects that are congruent with the scene context. In this study, we overlaid a letter T onto photographs of indoor scenes and instructed participants to search for it. Some of these background images contained scene-incongruent objects. Despite their lack of relevance to the search, we found that participants spent more time in total looking at semantically incongruent compared to congruent objects in the same position of the scene. Subsequent tests of explicit and implicit memory showed that participants did not remember many of the inconsistent objects and no more of the consistent objects. We argue that when we view natural environments, scene and object relationships are processed obligatorily, such that irrelevant semantic mismatches between scene and object identity can modulate ongoing eye-movement behavior.
Visual search for changes in scenes creates long-term, incidental memory traces.
Utochkin, Igor S; Wolfe, Jeremy M
2018-05-01
Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1-3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a "depth-of-processing" account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.
The Advanced Linked Extended Reconnaissance & Targeting Technology Demonstration project
NASA Astrophysics Data System (ADS)
Edwards, Mark
2008-04-01
The Advanced Linked Extended Reconnaissance & Targeting (ALERT) Technology Demonstration (TD) project is addressing many operational needs of the future Canadian Army's Surveillance and Reconnaissance forces. Using the surveillance system of the Coyote reconnaissance vehicle as an experimental platform, the ALERT TD project aims to significantly enhance situational awareness by fusing multi-sensor and tactical data, developing automated processes, and integrating beyond line-of-sight sensing. The project is exploiting important advances made in computer processing capability, displays technology, digital communications, and sensor technology since the design of the original surveillance system. As the major research area within the project, concepts are discussed for displaying and fusing multi-sensor and tactical data within an Enhanced Operator Control Station (EOCS). The sensor data can originate from the Coyote's own visible-band and IR cameras, laser rangefinder, and ground-surveillance radar, as well as from beyond line-of-sight systems such as mini-UAVs and unattended ground sensors. Video-rate image processing has been developed to assist the operator in detecting poorly visible targets. As a second major area of research, automatic target cueing capabilities have been added to the system. These include scene change detection, automatic target detection and aided target recognition algorithms processing both IR and visible-band images to draw the operator's attention to possible targets. The merits of incorporating scene change detection algorithms are also discussed. In the area of multi-sensor data fusion, capability up to Joint Defence Labs level 2 has been demonstrated. The human factors engineering aspects of the user interface in this complex environment are presented, drawing upon multiple user group sessions with military surveillance system operators. Lessons Learned from the project are also presented. The ALERT system has been used in a number of C4ISR field trials, most recently at Exercise Empire Challenge in China Lake, CA, and at Trial Quest in Norway. Those exercises provided further opportunities to investigate operator interactions. The paper concludes with recommendations for future work in operator interface design.
Evaluating Gaze-Based Interface Tools to Facilitate Point-and-Select Tasks with Small Targets
ERIC Educational Resources Information Center
Skovsgaard, Henrik; Mateo, Julio C.; Hansen, John Paulin
2011-01-01
Gaze interaction affords hands-free control of computers. Pointing to and selecting small targets using gaze alone is difficult because of the limited accuracy of gaze pointing. This is the first experimental comparison of gaze-based interface tools for small-target (e.g. less than 12 x 12 pixels) point-and-select tasks. We conducted two…
Nielsen, Thomas N; Sevcencu, Cristian; Struijk, Johannes J
2014-01-01
Previous studies have indicated that electrodes placed between fascicles can provide nerve recruitment with high topological selectivity if the areas of interest in the nerve are separated with passive elements. In this study, we investigated whether this separation of fascicles can also provide topologically selective nerve recordings and compared the performance of mono-, bi-, and tripolar configurations for stimulation and recording with an intra-neural interface. The interface was implanted in the sciatic nerve of 10 rabbits and achieved a median selectivity of Ŝ=0.98-0.99 for all stimulation configurations, while recording selectivity was in the range of Ŝ=0.70-0.80, with the monopolar configuration providing the lowest and the average reference configuration the highest recording selectivity. Interfascicular electrodes could provide an interesting addition to the bulk of peripheral nerve interfaces available for neural prosthetic devices. The separation of the nerve into chambers by the passive elements of the electrode could ensure a higher selectivity than comparable cuff electrodes, and the intra-neural location could provide an option of targeting mainly central fascicles. Further studies are, however, still required to develop biocompatible electrodes and test their stability and safety in chronic experiments.
Operator interface for vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bissontz, Jay E
2015-03-10
A control interface for drivetrain braking provided by a regenerative brake and a non-regenerative brake is implemented using a combination of switches and graphic interface elements. The control interface comprises a control system for allocating drivetrain braking effort between the regenerative brake and the non-regenerative brake, a first operator actuated control for enabling operation of the drivetrain braking, and a second operator actuated control for selecting a target braking effort for drivetrain braking. A graphic display displays to an operator the selected target braking effort and can be used to further display actual braking effort achieved by drivetrain braking.
3D reconstruction based on light field images
NASA Astrophysics Data System (ADS)
Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei
2018-04-01
This paper proposed a method of reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum. The work was carried out by first extracting the sub-aperture images from the light field images and using the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. A structure-from-motion (SfM) algorithm was then applied to the registered sub-aperture images to reconstruct the three-dimensional scene, yielding a sparse 3D point cloud. The method shows that 3D reconstruction can be implemented with only two light field camera captures, rather than the dozen or more captures required by traditional cameras. This can effectively address the time-consuming, laborious nature of 3D reconstruction based on traditional digital cameras, achieving a more rapid, convenient, and accurate reconstruction.
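The two-view pipeline the abstract outlines (SIFT registration of two sub-aperture views followed by structure from motion) can be sketched with standard OpenCV calls. This is a minimal illustration under stated assumptions, not the authors' implementation; the file names and the intrinsic matrix `K` are placeholders.

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],      # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Two sub-aperture views extracted from the light field (placeholder paths).
img1 = cv2.imread("subaperture_1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("subaperture_2.png", cv2.IMREAD_GRAYSCALE)

# SIFT feature registration with Lowe's ratio test.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = []
for pair in cv2.BFMatcher().knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        matches.append(pair[0])

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Two-view structure from motion: essential matrix, relative pose, triangulation.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T        # sparse 3D point cloud (N x 3)
```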
Intrinsic dimensionality predicts the saliency of natural dynamic scenes.
Vig, Eleonora; Dorr, Michael; Martinetz, Thomas; Barth, Erhardt
2012-06-01
Since visual attention-based computer vision applications have gained popularity, ever more complex, biologically inspired models seem to be needed to predict salient locations (or interest points) in naturalistic scenes. In this paper, we explore how far one can go in predicting eye movements by using only basic signal processing, such as image representations derived from efficient coding principles, and machine learning. To this end, we gradually increase the complexity of a model from simple single-scale saliency maps computed on grayscale videos to spatiotemporal multiscale and multispectral representations. Using a large collection of eye movements on high-resolution videos, supervised learning techniques fine-tune the free parameters whose addition is inevitable with increasing complexity. The proposed model, although very simple, demonstrates significant improvement in predicting salient locations in naturalistic videos over four selected baseline models and two distinct data labeling scenarios.
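In this line of work, intrinsic dimension is commonly estimated from the structure tensor. The sketch below is a simplified, purely spatial (2D) version on a single grayscale frame, whereas the model described here works on spatiotemporal, multiscale representations; the derivative and smoothing filters are illustrative choices.

```python
import numpy as np
from scipy import ndimage

def structure_tensor_saliency(frame, sigma=2.0):
    """Single-scale saliency: the smaller eigenvalue of the 2D structure tensor,
    which is large only where the signal varies in two directions (i2D regions)."""
    Ix = ndimage.sobel(frame.astype(float), axis=1)
    Iy = ndimage.sobel(frame.astype(float), axis=0)
    # Locally averaged tensor components.
    Jxx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Jxy = ndimage.gaussian_filter(Ix * Iy, sigma)
    Jyy = ndimage.gaussian_filter(Iy * Iy, sigma)
    # Smaller eigenvalue of [[Jxx, Jxy], [Jxy, Jyy]] via trace/determinant.
    tr, det = Jxx + Jyy, Jxx * Jyy - Jxy ** 2
    return tr / 2 - np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
```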
A Supplementary Clear-Sky Snow and Ice Recognition Technique for CERES Level 2 Products
NASA Technical Reports Server (NTRS)
Radkevich, Alexander; Khlopenkov, Konstantin; Rutan, David; Kato, Seiji
2013-01-01
Identification of clear-sky snow and ice is an important step in the production of cryosphere radiation budget products, which are used in the derivation of long-term data series for climate research. In this paper, a new method of clear-sky snow/ice identification for the Moderate Resolution Imaging Spectroradiometer (MODIS) is presented. The algorithm's goal is to enhance the identification of snow and ice within the Clouds and the Earth's Radiant Energy System (CERES) data after application of the standard CERES scene identification scheme. The algorithm takes as input spectral radiances from five MODIS bands and the surface skin temperature available in the CERES Single Scanner Footprint (SSF) product. It produces a cryosphere rating from an aggregated test: a higher rating corresponds to a more certain identification of a clear-sky snow/ice-covered scene. Empirical analysis of regions of interest representing distinctive targets such as snow, ice, ice and water clouds, open waters, and snow-free land selected from a number of MODIS images shows that the cryosphere rating of snow/ice targets falls into a 95% confidence interval lying above the corresponding confidence intervals of all other targets. This enables recognition of the clear-sky cryosphere with a single threshold applied to the rating, which distinguishes this technique from traditional branching techniques based on multiple thresholds. Limited tests show that the established threshold clearly separates the cryosphere rating values computed for the cryosphere from those computed for noncryosphere scenes, whereas individual tests applied sequentially cannot reliably identify the cryosphere for complex scenes.
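The aggregated-test idea, a weighted sum of individual indicators compared against one threshold rather than a branching chain of tests, can be illustrated with a toy sketch. The band names, test criteria, weights, and threshold below are invented for illustration and are not the published coefficients.

```python
def cryosphere_rating(obs, tests, weights):
    """Aggregate weighted scores of individual spectral/temperature tests."""
    return sum(w * test(obs) for test, w in zip(tests, weights))

# Each test maps a footprint's observables to a score in [0, 1] (illustrative).
tests = [
    lambda o: float(o["refl_vis"] > 0.4),        # bright in the visible
    lambda o: float(o["refl_1_6um"] < 0.2),      # dark near 1.6 um (NDSI-like behavior)
    lambda o: float(o["skin_temp_k"] < 275.0),   # cold surface skin temperature
]
weights = [1.0, 1.0, 1.0]

footprint = {"refl_vis": 0.62, "refl_1_6um": 0.08, "skin_temp_k": 263.0}
rating = cryosphere_rating(footprint, tests, weights)
is_clear_sky_snow_ice = rating > 2.5             # single threshold on the rating
```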
Navigating the auditory scene: an expert role for the hippocampus.
Teki, Sundeep; Kumar, Sukhbinder; von Kriegstein, Katharina; Stewart, Lauren; Lyness, C Rebecca; Moore, Brian C J; Capleton, Brian; Griffiths, Timothy D
2012-08-29
Over a typical career piano tuners spend tens of thousands of hours exploring a specialized acoustic environment. Tuning requires accurate perception and adjustment of beats in two-note chords that serve as a navigational device to move between points in previously learned acoustic scenes. It is a two-stage process that depends on the following: first, selective listening to beats within frequency windows, and, second, the subsequent use of those beats to navigate through a complex soundscape. The neuroanatomical substrates underlying brain specialization for such fundamental organization of sound scenes are unknown. Here, we demonstrate that professional piano tuners are significantly better than controls matched for age and musical ability on a psychophysical task simulating active listening to beats within frequency windows that is based on amplitude modulation rate discrimination. Tuners show a categorical increase in gray matter volume in the right frontal operculum and right superior temporal lobe. Tuners also show a striking enhancement of gray matter volume in the anterior hippocampus, parahippocampal gyrus, and superior temporal gyrus, and an increase in white matter volume in the posterior hippocampus as a function of years of tuning experience. The relationship with gray matter volume is sensitive to years of tuning experience and starting age but not actual age or level of musicality. Our findings support a role for a core set of regions in the hippocampus and superior temporal cortex in skilled exploration of complex sound scenes in which precise sound "templates" are encoded and consolidated into memory over time in an experience-dependent manner.
Image based performance analysis of thermal imagers
NASA Astrophysics Data System (ADS)
Wegner, D.; Repasi, E.
2016-05-01
Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image capturing capability of thermal cameras, enhancing the display presentation of the captured scene or of specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task (or at least a very time-consuming and expensive one, e.g., requiring the execution of a field trial and/or an observer trial). A thermal camera equipped with turbulence mitigation capability is an example of such a closed system. Fraunhofer IOSB has started to build up a system for testing thermal imagers by image-based methods in the lab environment, extending our capability of measuring the classical IR-system parameters (e.g., MTF, MTDP) in the lab. The system is set up around an IR scene projector, which is necessary for the thermal display (projection) of an image sequence to the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be, e.g., the same turbulence sequence. During system tests, gradual variation of input parameters (e.g., thermal contrast) can be applied. First ideas on test scene selection, and on how to assemble an imaging suite (a set of image sequences) for the analysis of thermal imaging systems containing such black boxes in the image-forming path, are discussed.
Contextual descriptors and neural networks for scene analysis in VHR SAR images
NASA Astrophysics Data System (ADS)
Del Frate, Fabio; Picchiani, Matteo; Falasco, Alessia; Schiavon, Giovanni
2016-10-01
The development of SAR technology during the last decade has made it possible to collect a huge amount of data over many regions of the world. In particular, the availability of SAR images from different sensors, with metric or sub-metric spatial resolution, offers novel opportunities in different fields such as land cover, urban monitoring, and soil consumption. On the other hand, automatic approaches become crucial for the exploitation of such a huge amount of information. In such a scenario, especially if single-polarization images are considered, the main issue is to select appropriate contextual descriptors, since the backscattering coefficient of a single pixel may not be sufficient to classify an object in the scene. In this paper a comparison among three different approaches for contextual feature definition is presented, so as to design optimum procedures for VHR SAR scene understanding. The first approach is based on the Gray Level Co-Occurrence Matrix, which is widely accepted and has been used in several studies for land cover classification with SAR data. The second approach is based on Fourier spectra and has already been proposed with positive results for this kind of problem. The third is based on auto-associative neural networks, which have already been proven effective for feature extraction from polarimetric SAR images. The three methods are evaluated in terms of the accuracy of the classified scene when the features extracted using each method are fed as input to a neural network classifier and applied to different Cosmo-SkyMed spotlight products.
Radiometric consistency assessment of hyperspectral infrared sounders
NASA Astrophysics Data System (ADS)
Wang, L.; Han, Y.; Jin, X.; Chen, Y.; Tremblay, D. A.
2015-07-01
The radiometric and spectral consistency among the Atmospheric Infrared Sounder (AIRS), the Infrared Atmospheric Sounding Interferometer (IASI), and the Cross-track Infrared Sounder (CrIS) is fundamental for the creation of long-term infrared (IR) hyperspectral radiance benchmark datasets for both inter-calibration and climate-related studies. In this study, the CrIS radiance measurements on the Suomi National Polar-orbiting Partnership (SNPP) satellite are directly compared with IASI on MetOp-A and -B at the finest spectral scale, and with AIRS on Aqua in 25 selected spectral regions, through one year of simultaneous nadir overpass (SNO) observations to evaluate the radiometric consistency of these four hyperspectral IR sounders. The spectra from different sounders are paired together through strict spatial and temporal collocation. Uniform scenes are selected by examining the collocated Visible Infrared Imaging Radiometer Suite (VIIRS) pixels. Their brightness temperature (BT) differences are then calculated by converting the spectra onto common spectral grids. The results indicate that CrIS agrees well with IASI on MetOp-A and IASI on MetOp-B at the longwave IR (LWIR) and middle-wave IR (MWIR) bands, with 0.1-0.2 K differences. There are no apparent scene-dependent patterns in the BT differences between CrIS and IASI for individual spectral channels. CrIS and AIRS are compared at the 25 spectral regions for both polar and tropical SNOs. The combined global SNO datasets indicate that the CrIS-AIRS BT differences are less than or around 0.1 K in 21 of the 25 comparison spectral regions and range from 0.15 to 0.21 K in the remaining 4 regions. CrIS-AIRS BT differences in some comparison spectral regions show weak scene-dependent features.
Radiometric consistency assessment of hyperspectral infrared sounders
NASA Astrophysics Data System (ADS)
Wang, L.; Han, Y.; Jin, X.; Chen, Y.; Tremblay, D. A.
2015-11-01
The radiometric and spectral consistency among the Atmospheric Infrared Sounder (AIRS), the Infrared Atmospheric Sounding Interferometer (IASI), and the Cross-track Infrared Sounder (CrIS) is fundamental for the creation of long-term infrared (IR) hyperspectral radiance benchmark data sets for both intercalibration and climate-related studies. In this study, the CrIS radiance measurements on the Suomi National Polar-orbiting Partnership (SNPP) satellite are directly compared with IASI on MetOp-A and MetOp-B at the finest spectral scale, and with AIRS on Aqua in 25 selected spectral regions, through simultaneous nadir overpass (SNO) observations in 2013, to evaluate the radiometric consistency of these four hyperspectral IR sounders. The spectra from different sounders are paired together through strict spatial and temporal collocation. Uniform scenes are selected by examining the collocated Visible Infrared Imaging Radiometer Suite (VIIRS) pixels. Their brightness temperature (BT) differences are then calculated by converting the spectra onto common spectral grids. The results indicate that CrIS agrees well with IASI on MetOp-A and IASI on MetOp-B at the long-wave IR (LWIR) and middle-wave IR (MWIR) bands, with 0.1-0.2 K differences. There are no apparent scene-dependent patterns in the BT differences between CrIS and IASI for individual spectral channels. CrIS and AIRS are compared at the 25 spectral regions for both polar and tropical SNOs. The combined global SNO data sets indicate that the CrIS-AIRS BT differences are less than or around 0.1 K in 21 of the 25 spectral regions and range from 0.15 to 0.21 K in the remaining four regions. CrIS-AIRS BT differences in some comparison spectral regions show weak scene-dependent features.
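Both versions of this study rest on the same comparison step: converting collocated radiance spectra to brightness temperature on a common grid and differencing them. A hedged sketch of the conversion via the inverse Planck function follows; the unit convention (mW per m² sr cm⁻¹) is a common choice for these instruments, and the example values are only a plausibility check.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def brightness_temperature(radiance, wavenumber_cm):
    """Inverse Planck function.

    radiance:      spectral radiance in mW / (m^2 sr cm^-1)
    wavenumber_cm: channel wavenumber in cm^-1
    returns:       brightness temperature in K
    """
    nu = wavenumber_cm * 100.0          # cm^-1 -> m^-1
    rad = radiance * 1e-5               # mW/(m^2 sr cm^-1) -> W/(m^2 sr m^-1)
    return (H * C * nu / KB) / np.log(1.0 + 2.0 * H * C**2 * nu**3 / rad)

# Plausibility check: ~70 mW/(m^2 sr cm^-1) at 1000 cm^-1 corresponds to ~280 K.
print(brightness_temperature(70.0, 1000.0))

# With two sounders resampled onto a common wavenumber grid (hypothetical
# arrays cris_rad, airs_rad, wn), the SNO analysis reduces to a per-channel difference:
# bt_diff = brightness_temperature(cris_rad, wn) - brightness_temperature(airs_rad, wn)
```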
2012-01-01
Background In research on event-related potentials (ERP) to emotional pictures, greater attention to emotional than neutral stimuli (i.e., motivated attention) is commonly indexed by two difference waves between emotional and neutral stimuli: the early posterior negativity (EPN) and the late positive potential (LPP). Evidence suggests that if attention is directed away from the pictures, then the emotional effects on EPN and LPP are eliminated. However, a few studies have found residual, emotional effects on EPN and LPP. In these studies, pictures were shown at fixation, and picture composition was that of simple figures rather than that of complex scenes. Because figures elicit larger LPP than do scenes, figures might capture and hold attention more strongly than do scenes. Here, we showed negative and neutral pictures of figures and scenes and tested first, whether emotional effects are larger to figures than scenes for both EPN and LPP, and second, whether emotional effects on EPN and LPP are reduced less for unattended figures than scenes. Results Emotional effects on EPN and LPP were larger for figures than scenes. When pictures were unattended, emotional effects on EPN increased for scenes but tended to decrease for figures, whereas emotional effects on LPP decreased similarly for figures and scenes. Conclusions Emotional effects on EPN and LPP were larger for figures than scenes, but these effects did not resist manipulations of attention more strongly for figures than scenes. These findings imply that the emotional content captures attention more strongly for figures than scenes, but that the emotional content does not hold attention more strongly for figures than scenes. PMID:22607397
Xia, Xinxing; Zheng, Zhenrong; Liu, Xu; Li, Haifeng; Yan, Caijie
2010-09-10
We utilized a high-frame-rate projector, a rotating mirror, and a cylindrical selective-diffusing screen to present a novel three-dimensional (3D) omnidirectional-view display system without the need for any special viewing aids. The display principle and image size are analyzed, and the common display zone is proposed. The viewing zone for one observation place is also studied. The experimental results verify this method, and a vivid color 3D scene with occlusion and smooth parallax is also demonstrated with the system.
Martin Cichy, Radoslaw; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude
2017-06-01
Human scene recognition is a rapid multistep process evolving over time from the single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e., spatial layout processing, at ~250 ms, indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared the MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model and predicted the emerging neural scene size representations. Together, our data provide a first description of an electrophysiological signal for layout processing in humans and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain.
Seek and you shall remember: Scene semantics interact with visual search to build better memories
Draschkow, Dejan; Wolfe, Jeremy M.; Võ, Melissa L.-H.
2014-01-01
Memorizing critical objects and their locations is an essential part of everyday life. In the present study, incidental encoding of objects in naturalistic scenes during search was compared to explicit memorization of those scenes. To investigate if prior knowledge of scene structure influences these two types of encoding differently, we used meaningless arrays of objects as well as objects in real-world, semantically meaningful images. Surprisingly, when participants were asked to recall scenes, their memory performance was markedly better for searched objects than for objects they had explicitly tried to memorize, even though participants in the search condition were not explicitly asked to memorize objects. This finding held true even when objects were observed for an equal amount of time in both conditions. Critically, the recall benefit for searched over memorized objects in scenes was eliminated when objects were presented on uniform, non-scene backgrounds rather than in a full scene context. Thus, scene semantics not only help us search for objects in naturalistic scenes, but appear to produce a representation that supports our memory for those objects beyond intentional memorization. PMID:25015385
A novel scene management technology for complex virtual battlefield environment
NASA Astrophysics Data System (ADS)
Sheng, Changchong; Jiang, Libing; Tang, Bo; Tang, Xiaoan
2018-04-01
The efficient scene management of a virtual environment is an important research topic in real-time computer visualization and has a decisive influence on rendering efficiency. However, traditional scene management methods are not suitable for complex virtual battlefield environments. This paper combines the advantages of traditional scene graph technology and spatial data structure methods, using the idea of separating management from rendering: a loose, object-oriented scene graph structure is established to manage the entity model data in the scene, and a performance-based quad-tree structure is created for traversal and rendering, as sketched below. In addition, a collaborative update relationship between the two structural trees is designed to achieve efficient scene management. Compared with previous scene management methods, this method is more efficient and meets the needs of real-time visualization.
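A minimal sketch of the render-side spatial structure described above: a quad-tree over the scene's ground plane used to cull entities against a square view region, assuming the scene-graph side supplies (x, y) positions and payloads. The capacity and splitting policy are illustrative choices, not the paper's parameters.

```python
from dataclasses import dataclass, field

@dataclass
class QuadTree:
    x: float
    y: float
    size: float                                     # square cell: origin and edge length
    capacity: int = 8
    entities: list = field(default_factory=list)    # (x, y, payload) tuples
    children: list = field(default_factory=list)    # empty until the cell splits

    def insert(self, ex, ey, payload):
        if not (self.x <= ex < self.x + self.size and self.y <= ey < self.y + self.size):
            return False                            # outside this cell
        if not self.children and len(self.entities) < self.capacity:
            self.entities.append((ex, ey, payload))
            return True
        if not self.children:                       # split and redistribute
            half = self.size / 2
            self.children = [QuadTree(self.x + dx, self.y + dy, half)
                             for dx in (0, half) for dy in (0, half)]
            for e in self.entities:
                self._push(e)
            self.entities.clear()
        return self._push((ex, ey, payload))

    def _push(self, e):
        return any(child.insert(*e) for child in self.children)

    def query(self, qx, qy, qsize, out=None):
        """Collect payloads of entities inside the square view region."""
        out = [] if out is None else out
        if (qx + qsize < self.x or qx > self.x + self.size or
                qy + qsize < self.y or qy > self.y + self.size):
            return out                              # view region misses this cell
        out.extend(p for ex, ey, p in self.entities
                   if qx <= ex <= qx + qsize and qy <= ey <= qy + qsize)
        for child in self.children:
            child.query(qx, qy, qsize, out)
        return out
```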
Attachment of second harmonic-active moiety to molecules for detection of molecules at interfaces
Salafsky, Joshua S.; Eisenthal, Kenneth B.
2005-10-11
This invention provides methods of detecting molecules at an interface, which comprise labeling the molecules with a second harmonic-active moiety and detecting the labeled molecules at the interface using a surface selective technique. The invention also provides methods for detecting a molecule in a medium and for determining the orientation of a molecular species within a planar surface using a second harmonic-active moiety and a surface selective technique.
Post Accident Procedures for Chemicals and Propellants.
1982-09-01
[Garbled table-of-contents excerpt] Methods and Procedures: overview of emergency response procedures and resources available; criteria for twelve critical operations, including on-scene methods for identifying the ingredients; establishing a protocol for selecting the hazards mitigation and cleanup methods for single-material spills and multiple-materials mixing.
Non-parametric analysis of LANDSAT maps using neural nets and parallel computers
NASA Technical Reports Server (NTRS)
Salu, Yehuda; Tilton, James
1991-01-01
Nearest neighbor approaches and a new neural network, the Binary Diamond, are used for the classification of images of ground pixels obtained by the LANDSAT satellite. The performances are evaluated by comparing classifications of a scene in the vicinity of Washington, DC. The problem of optimal selection of categories is addressed as a step in the classification process.
ERIC Educational Resources Information Center
Sun, Chyng; Bridges, Ana; Wosnitzer, Robert; Scharrer, Erica; Liberman, Rachael
2008-01-01
Pornography is a lucrative business. Increasingly, women have participated in its production, direction, and consumption. This study investigated how the content in popular pornographic videos created by female directors differs from that of their male counterparts. We conducted a quantitative analysis of 122 randomly selected scenes from 44…
ERIC Educational Resources Information Center
Matsuoka, Rieko; Poole, Gregory
2015-01-01
This paper examines the ways in which healthcare professionals interact with patients' family members, and/or colleagues. The data are from healthcare discourses at difficult times found in the manga series entitled Nurse AOI. As the first step, we selected several communication scenes for analysis in terms of politeness strategies. From these…
ERIC Educational Resources Information Center
Surlis, Mary
2012-01-01
This paper introduces Living Scenes, an intergenerational programme of learning which has been in operation in selected schools in Ireland for the last thirteen years. An overview of the programme is followed by a description of the hidden curriculum and the transmission of arbitrary culture in an educational context. This is followed by an…
IR characteristic simulation of city scenes based on radiosity model
NASA Astrophysics Data System (ADS)
Xiong, Xixian; Zhou, Fugen; Bai, Xiangzhi; Yu, Xiyu
2013-09-01
Reliable modeling of the thermal infrared (IR) signatures of real-world city scenes is required for signature management of civil and military platforms. Traditional modeling methods generally assume that scene objects are individual entities during the physical processes occurring in the infrared range. In reality, however, the physical scene involves convective and conductive interactions between objects as well as radiative interactions between them. A method based on a radiosity model, developed to enable an accurate simulation of the radiance distribution of city scenes, describes these complex effects. Firstly, the physical processes affecting the IR characteristics of city scenes were described. Secondly, heat balance equations were formed by combining the atmospheric conditions, shadow maps, and the geometry of the scene. Finally, a finite difference method was used to calculate the kinetic temperature of object surfaces, and a radiosity model was introduced to describe the scattering of radiation between surface elements in the scene (see the sketch below). By synthesizing the radiance distribution of objects in the infrared range, the IR characteristics of the scene were obtained. Real infrared images and model predictions were shown and compared. The results demonstrate that this method can realistically simulate the IR characteristics of city scenes; it effectively reproduces infrared shadow effects and the radiation interactions between objects.
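The radiosity step can be made concrete: with per-patch emission (from the finite-difference temperature solution), reflectivity, and form factors, the patch radiosities satisfy a linear system B = E + diag(ρ)FB. A toy sketch under assumed inputs; a real scene would compute the form factors from geometry and shadowing.

```python
import numpy as np

def solve_radiosity(E, rho, F):
    """Solve B = E + diag(rho) @ F @ B, i.e. (I - diag(rho) F) B = E."""
    n = len(E)
    return np.linalg.solve(np.eye(n) - np.diag(rho) @ F, E)

# Toy example: three patches, each seeing the other two equally.
E = np.array([10.0, 5.0, 1.0])        # emitted radiance per patch (e.g. W/m^2)
rho = np.array([0.2, 0.3, 0.5])       # IR reflectivity of each patch (assumed)
F = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])       # form factors (each row sums to <= 1)
B = solve_radiosity(E, rho, F)        # total leaving radiance per patch
```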
Scene-Based Contextual Cueing in Pigeons
Wasserman, Edward A.; Teng, Yuejia; Brooks, Daniel I.
2014-01-01
Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of such contextual cueing. Pigeons had to peck a target which could appear in one of four locations on color photographs of real-world scenes. On half of the trials, each of four scenes was consistently paired with one of four possible target locations; on the other half of the trials, each of four different scenes was randomly paired with the same four possible target locations. In Experiments 1 and 2, pigeons exhibited robust contextual cueing when the context preceded the target by 1 s to 8 s, with reaction times to the target being shorter on predictive-scene trials than on random-scene trials. Pigeons also responded more frequently during the delay on predictive-scene trials than on random-scene trials; indeed, during the delay on predictive-scene trials, pigeons predominately pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. In Experiment 3, involving left-right and top-bottom scene reversals, pigeons exhibited stronger control by global than by local scene cues. These results attest to the robustness and associative basis of contextual cueing in pigeons. PMID:25546098
Baber, Chris; Butler, Mark
2012-06-01
The strategies of novice and expert crime scene examiners were compared in searching crime scenes. Previous studies have demonstrated that experts frame a scene by reconstructing the likely actions of a criminal and use contextual cues to develop hypotheses that guide the subsequent search for evidence. Novices (first-year undergraduate students of forensic sciences) and experts (experienced crime scene examiners) examined two "simulated" crime scenes. Performance was captured through a combination of concurrent verbal protocol and own-point-of-view recording, using head-mounted cameras. Although both groups paid attention to the likely modus operandi of the perpetrator (in terms of possible actions taken), the novices paid more attention to individual objects, whereas the experts paid more attention to objects with "evidential value." Novices explore the scene in terms of the objects that it contains, whereas experts consider the evidence analysis that can be performed as a consequence of the examination. The suggestion is that novices put effort into detailing the scene in terms of its features, whereas experts put effort into anticipating the analyses that the examination makes possible. The findings have helped in developing the expertise of novice crime scene examiners and approaches to training expertise within this population.
Brand, John; Johnson, Aaron P
2014-01-01
In four experiments, we investigated how attention to local and global levels of hierarchical Navon figures affected the selection of diagnostic spatial scale information used in scene categorization. We explored this issue by asking observers to classify hybrid images (i.e., images that contain the low spatial frequency (LSF) content of one image and the high spatial frequency (HSF) content of a second image) immediately following global and local Navon tasks. Hybrid images can be classified according to either their LSF or HSF content, making them ideal for investigating diagnostic spatial scale preference. Although observers were sensitive to both spatial scales (Experiment 1), they overwhelmingly preferred to classify hybrids based on LSF content (Experiment 2). In Experiment 3, we demonstrated that LSF-based hybrid categorization was faster following global Navon tasks, suggesting that the LSF processing associated with global Navon tasks primed the selection of LSFs in hybrid images. Experiment 4 examined this hypothesis by replicating Experiment 3 while suppressing the LSF information in the Navon letters through contrast balancing of the stimuli. As in Experiment 3, observers preferred to classify hybrids based on LSF content; in contrast, however, LSF-based hybrid categorization was now slower following global than local Navon tasks.
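Hybrid images of the kind used here can be constructed by summing a low-pass filtered version of one scene with a high-pass filtered version of another. A short sketch follows, with an illustrative Gaussian cutoff rather than the authors' exact filter parameters.

```python
import numpy as np
from scipy import ndimage

def make_hybrid(scene_lsf, scene_hsf, sigma=6.0):
    """Keep the low spatial frequencies of scene_lsf and the high ones of scene_hsf."""
    low = ndimage.gaussian_filter(scene_lsf.astype(float), sigma)       # low-pass
    high = scene_hsf.astype(float) - ndimage.gaussian_filter(
        scene_hsf.astype(float), sigma)                                 # high-pass residual
    return low + high
```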
Individual predictions of eye-movements with dynamic scenes
NASA Astrophysics Data System (ADS)
Barth, Erhardt; Drewes, Jan; Martinetz, Thomas
2003-06-01
We present a model that predicts saccadic eye-movements and can be tuned to a particular human observer who is viewing a dynamic sequence of images. Our work is motivated by applications that involve gaze-contingent interactive displays on which information is displayed as a function of gaze direction. The approach therefore differs from standard approaches in two ways: (1) we deal with dynamic scenes, and (2) we provide means of adapting the model to a particular observer. As an indicator for the degree of saliency we evaluate the intrinsic dimension of the image sequence within a geometric approach implemented by using the structure tensor. Out of these candidate saliency-based locations, the currently attended location is selected according to a strategy found by supervised learning. The data are obtained with an eye-tracker and subjects who view video sequences. The selection algorithm receives candidate locations of current and past frames and a limited history of locations attended in the past. We use a linear mapping that is obtained by minimizing the quadratic difference between the predicted and the actually attended location by gradient descent. Being linear, the learned mapping can be quickly adapted to the individual observer.
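The learning step described, a linear mapping fit by gradient descent on the squared prediction error, can be sketched as follows. The feature layout (stacked candidate locations plus gaze history) and all dimensions are illustrative assumptions, and random arrays stand in for the eye-tracking data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 500, 24        # e.g. stacked (x, y) of candidates + history
X = rng.normal(size=(n_samples, n_features))   # stand-in for saliency candidates
Y = rng.normal(size=(n_samples, 2))            # attended (x, y) from the eye-tracker

W = np.zeros((n_features, 2))          # linear mapping to be learned
lr = 1e-3
for _ in range(2000):                  # gradient descent on mean ||XW - Y||^2
    grad = 2 * X.T @ (X @ W - Y) / n_samples
    W -= lr * grad

predicted = X @ W                      # predicted gaze positions for each sample
```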
Reflectance of vegetation, soil, and water
NASA Technical Reports Server (NTRS)
Wiegand, C. L. (Principal Investigator)
1973-01-01
The author has identified the following significant results. The ability to read the 24-channel MSS CCT tapes, select specified agricultural land use areas from the CCT, and perform multivariate statistical and pattern recognition analyses has been demonstrated. The five optimum channels chosen for classifying an agricultural scene were, in the order of their selection, the far-red visible, short reflective IR, visible blue, thermal infrared, and ultraviolet portions of the electromagnetic spectrum. Although chosen by a training set containing only vegetal categories, the optimum four channels discriminated pavement, water, bare soil, and building roofs, as well as the vegetal categories. Among the vegetal categories, sugar cane and cotton had distinctive signatures that distinguished them from grass and citrus. Acreages estimated spectrally by the computer for the test scene were acceptably close to acreages estimated from aerial photographs for cotton, sugar cane, and water. Many nonfarmable land resolution elements representing drainage ditches, field roads, and highway rights-of-way, as well as farm headquarters areas, fell into the grass, bare soil plus weeds, and citrus categories and lessened the accuracy of the farmable acreage estimates in these categories. The expertise developed using the 24-channel data will be applied to the ERTS-1 data.
ERIC Educational Resources Information Center
Henderson, John M.; Nuthmann, Antje; Luke, Steven G.
2013-01-01
Recent research on eye movements during scene viewing has primarily focused on where the eyes fixate. But eye fixations also differ in their durations. Here we investigated whether fixation durations in scene viewing are under the direct and immediate control of the current visual input. Subjects freely viewed photographs of scenes in preparation…
Initial Scene Representations Facilitate Eye Movement Guidance in Visual Search
ERIC Educational Resources Information Center
Castelhano, Monica S.; Henderson, John M.
2007-01-01
What role does the initial glimpse of a scene play in subsequent eye movement guidance? In 4 experiments, a brief scene preview was followed by object search through the scene via a small moving window that was tied to fixation position. Experiment 1 demonstrated that the scene preview resulted in more efficient eye movements compared with a…
Iconic memory for the gist of natural scenes.
Clarke, Jason; Mack, Arien
2014-11-01
Does iconic memory contain the gist of multiple scenes? Three experiments were conducted. In the first, four scenes from different basic-level categories were briefly presented in one of two conditions: a cue or a no-cue condition. The cue condition was designed to provide an index of the contents of iconic memory of the display. Subjects were more sensitive to scene gist in the cue condition than in the no-cue condition. In the second, the scenes came from the same basic-level category, and we found no difference in sensitivity between the two conditions. In the third, six scenes from different basic-level categories were presented in the visual periphery; subjects were again more sensitive to scene gist in the cue condition. These results suggest that scene gist is contained in iconic memory even in the visual periphery; however, iconic representations are not sufficiently detailed to distinguish between scenes coming from the same category.
Adolescent Characters and Alcohol Use Scenes in Brazilian Movies, 2000-2008.
Castaldelli-Maia, João Mauricio; de Andrade, Arthur Guerra; Lotufo-Neto, Francisco; Bhugra, Dinesh
2016-04-01
Quantitative structured assessment of 193 scenes depicting substance use from a convenience sample of 50 Brazilian movies was performed. Logistic regression and analysis of variance or multivariate analysis of variance models were employed to test two different types of outcome regarding alcohol appearance: the mean length of alcohol scenes in seconds and the prevalence of alcohol use scenes. The presence of adolescent characters was associated with a higher prevalence of alcohol use scenes compared to non-alcohol use scenes. The presence of adolescents was also associated with a higher than average length of alcohol use scenes compared to non-alcohol use scenes. Alcohol use was negatively associated with cannabis, cocaine, and other drug use. However, when the use of cannabis, cocaine, or other drugs was present in alcohol use scenes, a higher average length was found. This may mean that the most vulnerable group sees drinking as a more attractive option, potentially leading to higher alcohol use.
How many pixels make a memory? Picture memory for small pictures.
Wolfe, Jeremy M; Kuzmova, Yoana I
2011-06-01
Torralba (Visual Neuroscience, 26, 123-131, 2009) showed that, if the resolution of images of scenes was reduced to the information present in very small "thumbnail images," those scenes could still be recognized. The objects in those degraded scenes could be identified, even though it would be impossible to identify them if they were removed from the scene context. Can tiny and/or degraded scenes be remembered, or are they like brief presentations: identified but not remembered? We report that memory for tiny and degraded scenes parallels the recognizability of those scenes: you can remember a scene to approximately the degree to which you can classify it. Interestingly, there is a striking asymmetry in memory when scenes are not the same size on their initial appearance and subsequent test. Memory for a large, full-resolution stimulus can be tested with a small, degraded stimulus. However, memory for a small stimulus is not retrieved when it is tested with a large stimulus.
Viral genome analysis and knowledge management.
Kuiken, Carla; Yoon, Hyejin; Abfalterer, Werner; Gaschen, Brian; Lo, Chienchi; Korber, Bette
2013-01-01
One of the challenges of genetic data analysis is to combine information from sources that are distributed around the world and accessible through a wide array of different methods and interfaces. The HIV database and those that followed in its footsteps, the hepatitis C virus (HCV) and hemorrhagic fever virus (HFV) databases, have made it their mission to make different data types easily available to their users. This involves a large amount of behind-the-scenes processing, including quality control and analysis of the sequences and their annotation. Gene and protein sequences are distilled from the sequences stored in GenBank; to this end, both submitter annotation and script-generated sequences are used. Alignments of both nucleotide and amino acid sequences are generated, manually curated, distilled into an alignment model, and regenerated in an iterative cycle that results in ever better new alignments. Annotation of epidemiological and clinical information is parsed, checked, and added to the database. User interfaces are updated, and new interfaces are added based upon user requests. Vital to its success, the database staff are heavy users of the system, which enables them to fix bugs and find opportunities for improvement. In this chapter we describe some of the infrastructure that keeps these heavily used analysis platforms alive and vital after nearly 25 years of use. The database/analysis platforms described in this chapter can be accessed at http://hiv.lanl.gov, http://hcv.lanl.gov, and http://hfv.lanl.gov.
A lysinated thiophene-based semiconductor as a multifunctional neural bioorganic interface.
Bonetti, Simone; Pistone, Assunta; Brucale, Marco; Karges, Saskia; Favaretto, Laura; Zambianchi, Massimo; Posati, Tamara; Sagnella, Anna; Caprini, Marco; Toffanin, Stefano; Zamboni, Roberto; Camaioni, Nadia; Muccini, Michele; Melucci, Manuela; Benfenati, Valentina
2015-06-03
Lysinated molecular organic semiconductors are introduced as valuable multifunctional platforms for neural cell growth and interfacing. Cast films of the quaterthiophene (T4) semiconductor covalently modified with lysine end moieties (T4Lys) are fabricated, and their stability, morphology, optical/electrical, and biocompatibility properties are characterized. T4Lys films exhibit fluorescence and electronic transport, as generally observed for unsubstituted oligothiophenes, combined with humidity-activated ionic conduction promoted by the charged lysine end moieties. The Lys insertion in T4 enables adhesion of primary cultures of rat dorsal root ganglion (DRG) neurons, which is not achievable by plating cells on T4. Notably, on T4Lys the number of adhering neurons per unit area is higher, and neurons display a twofold longer neurite length than neurons plated on glass coated with poly-L-lysine. Finally, by whole-cell patch-clamp, it is shown that the biofunctionality of neurons cultured on T4Lys is preserved. The present study introduces an innovative concept for organic-material neural interfaces that combines optical and iono-electronic functionalities with improved biocompatibility and neuron affinity, promoted by the Lys linkage and the softness of organic semiconductors. Lysinated organic semiconductors could set the scene for the fabrication of simplified bioorganic device geometries for bidirectional communication with cells or optoelectronic control of neural cell biofunctionality.
Ndengu, Masimba; Matope, Gift; de Garine-Wichatitsky, Michel; Tivapasi, Musavengana; Scacchia, Massimo; Bonfini, Barbara; Pfukenyi, Davis Mubika
2017-10-01
A study was conducted to investigate the seroprevalence of and risk factors for Brucella species infection in cattle and some wildlife species in communities living at the periphery of the Great Limpopo Transfrontier Conservation Area in south-eastern Zimbabwe. Three study sites were selected based on the type of livestock-wildlife interface: a porous livestock-wildlife interface (unrestricted); a non-porous livestock-wildlife interface (restricted by fencing); and a livestock-wildlife non-interface (totally absent, serving as control). Sera were collected from cattle aged ≥2 years, representing both female and intact male animals. Sera were also collected from selected wild ungulates from the Mabalauta (porous interface) and Chipinda (non-interface) areas of the Gonarezhou National Park. Samples were screened for Brucella antibodies using the Rose Bengal plate test and confirmed by the complement fixation test. Data were analysed by descriptive statistics and multivariate logistic regression modelling. In cattle, brucellosis seroprevalence across all areas was 16.7% (169/1011; 95% CI: 14.5-19.2%). The porous interface recorded a significantly (p=0.03) higher seroprevalence (19.5%; 95% CI: 16.1-23.4%) compared to the non-interface area (13.0%; 95% CI: 9.2-19.9%). The odds of Brucella seropositivity increased progressively with the parity of the animals and were also three times higher (OR=3.0, 2.0
Multiple scene attitude estimator performance for LANDSAT-1
NASA Technical Reports Server (NTRS)
Rifman, S. S.; Monuki, A. T.; Shortwell, C. P.
1979-01-01
Initial results are presented to demonstrate the performance of a linear sequential estimator (Kalman filter) used to estimate a LANDSAT 1 spacecraft attitude time series defined for four scenes. With the revised estimator, a GCP-poor scene, i.e., a scene with no usable geodetic control points (GCPs), can be rectified to higher accuracy than otherwise, based on the use of GCPs in adjacent scenes. Attitude estimation errors were determined by the use of GCPs located in the GCP-poor test scene but not used to update the Kalman filter. Initial results indicate that errors of 500 m (rms) can be attained for GCP-poor scenes. Operational factors are related to various scenarios.
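A linear sequential estimator of this kind can be illustrated with a toy one-angle Kalman filter, in which GCP-derived angle measurements update a constant-rate attitude model; in a GCP-poor scene the update step is simply skipped and the prediction coasts through. This is a didactic sketch, not the flight algorithm, and all noise levels and measurement values are assumed.

```python
import numpy as np

dt = 1.0                                  # time between scene measurements (arbitrary units)
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-rate model for one attitude angle
Hm = np.array([[1.0, 0.0]])               # GCPs observe the angle only
Q = 1e-8 * np.eye(2)                      # process noise covariance (assumed)
R = np.array([[1e-6]])                    # measurement noise covariance (assumed)

x = np.zeros(2)                           # state: [angle (rad), angular rate (rad/s)]
P = np.eye(2)                             # state covariance

def step(x, P, z=None):
    # Predict through the dynamics; in a GCP-poor scene this is all we can do.
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:                     # update only when a GCP measurement exists
        S = Hm @ P @ Hm.T + R
        K = P @ Hm.T @ np.linalg.inv(S)
        x = x + K @ (z - Hm @ x)
        P = (np.eye(2) - K @ Hm) @ P
    return x, P

# Adjacent scenes with GCPs update the filter; the GCP-poor scene (None) coasts.
for z in [np.array([1.0e-3]), np.array([1.2e-3]), None, np.array([1.1e-3])]:
    x, P = step(x, P, z)
```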
FAST - A multiprocessed environment for visualization of computational fluid dynamics
NASA Technical Reports Server (NTRS)
Bancroft, Gordon V.; Merritt, Fergus J.; Plessel, Todd C.; Kelaita, Paul G.; Mccabe, R. Kevin
1991-01-01
The paper presents the Flow Analysis Software Toolset (FAST) to be used for fluid-mechanics analysis. The design criteria for FAST, including minimization of the data path in the computational fluid dynamics (CFD) process, a consistent user interface, an extensible software architecture, modularization, and the isolation of three-dimensional tasks from the application programmer, are outlined. Each separate process communicates through the FAST Hub, while other modules such as FAST Central, NAS file input, the CFD calculator, the surface extractor and renderer, the titler, the tracer, and isolev may work together to generate the scene. An interprocess communication package that allows FAST to operate as a modular environment, in which resources can be shared among different machines as well as on a single host, is also discussed.
NASA Astrophysics Data System (ADS)
Ikeda, Sei; Sato, Tomokazu; Kanbara, Masayuki; Yokoya, Naokazu
2004-05-01
Technology that enables users to experience a remote site virtually is called telepresence. A telepresence system using real-environment images is expected to be used in fields such as entertainment, medicine, and education. This paper describes a novel telepresence system which enables users to walk through a photorealistic virtualized environment by actually walking. To realize such a system, a wide-angle, high-resolution movie is projected on an immersive multi-screen display to present the virtualized environment to users, and a treadmill is controlled according to the user's detected locomotion. In this study, we use an omnidirectional multi-camera system to acquire images of a real outdoor scene. The proposed system provides users with a rich sense of walking in a remote site.
Increasing situation awareness of the CBRNE robot operators
NASA Astrophysics Data System (ADS)
Jasiobedzki, Piotr; Ng, Ho-Kong; Bondy, Michel; McDiarmid, Carl H.
2010-04-01
Situational awareness of CBRN robot operators is quite limited, as they rely on images and measurements from on-board detectors. This paper describes a novel framework that enables uniform and intuitive access to live and recent data via 2D and 3D representations of visited sites. These representations are created automatically and augmented with images, models, and CBRNE measurements. The framework has been developed for the CBRNE Crime Scene Modeler (C2SM), a mobile CBRNE mapping system. The system creates representations (2D floor plans and 3D photorealistic models) of the visited sites, which are then automatically augmented with CBRNE detector measurements. The data, stored in a database, are accessed using a variety of user interfaces providing different perspectives and increasing operators' situational awareness.
When Does Repeated Search in Scenes Involve Memory? Looking at versus Looking for Objects in Scenes
ERIC Educational Resources Information Center
Vo, Melissa L. -H.; Wolfe, Jeremy M.
2012-01-01
One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in the same scene, search performance remained…
Effects of memory colour on colour constancy for unknown coloured objects.
Granzier, Jeroen J M; Gegenfurtner, Karl R
2012-01-01
The perception of an object's colour remains constant despite large variations in the chromaticity of the illumination, a phenomenon known as colour constancy. Hering suggested that memory colours, the typical colours of objects, could help in estimating the illuminant's colour and could therefore be an important factor in establishing colour constancy. Here we test whether the presence of objects with diagnostic colours (fruits, vegetables, etc.) within a scene influences colour constancy for unknown coloured objects in the scene. Subjects matched one of four Munsell papers placed in a scene illuminated under either a reddish or a greenish lamp with the Munsell book of colour illuminated by a neutral lamp. The Munsell papers were embedded in four different scenes: one containing diagnostically coloured objects; one containing incongruently coloured objects; a third with geometrical objects of the same colour as the diagnostically coloured objects; and one containing non-diagnostically coloured objects (e.g., a yellow coffee mug). All objects were placed against a black background. Colour constancy was on average significantly higher for the scene containing the diagnostically coloured objects than for the other scenes tested. We conclude that the colours of familiar objects help in obtaining colour constancy for unknown objects.
Parallel programming of saccades during natural scene viewing: evidence from eye movement positions.
Wu, Esther X W; Gilani, Syed Omer; van Boxtel, Jeroen J A; Amihai, Ido; Chua, Fook Kee; Yen, Shih-Cheng
2013-10-24
Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms) moved to the center only one saccade later. Our data on eye movement positions provide novel evidence of parallel programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis.
Busettini, C; Miles, F A; Schwarz, U; Carl, J R
1994-01-01
Recent experiments on monkeys have indicated that the eye movements induced by brief translation of either the observer or the visual scene are a linear function of the inverse of the viewing distance. For the movements of the observer, the room was dark and responses were attributed to a translational vestibulo-ocular reflex (TVOR) that senses the motion through the otolith organs; for the movements of the scene, which elicit ocular following, the scene was projected and adjusted in size and speed so that the retinal stimulation was the same at all distances. The shared dependence on viewing distance was consistent with the hypothesis that the TVOR and ocular following are synergistic and share central pathways. The present experiments looked for such dependencies on viewing distance in human subjects. When briefly accelerated along the interaural axis in the dark, human subjects generated compensatory eye movements that were also a linear function of the inverse of the viewing distance to a previously fixated target. These responses, which were attributed to the TVOR, were somewhat weaker than those previously recorded from monkeys using similar methods. When human subjects faced a tangent screen onto which patterned images were projected, brief motion of those images evoked ocular following responses that showed statistically significant dependence on viewing distance only with low-speed stimuli (10 degrees/s). This dependence was at best weak and in the reverse direction of that seen with the TVOR, i.e., responses increased as viewing distance increased. We suggest that in generating an internal estimate of viewing distance subjects may have used a confounding cue in the ocular-following paradigm, the size of the projected scene, which was varied directly with the viewing distance in these experiments (in order to preserve the size of the retinal image). When movements of the subject were randomly interleaved with the movements of the scene (to encourage the expectation of ego-motion), the dependence of ocular following on viewing distance altered significantly: with higher speed stimuli (40 degrees/s) many responses (63%) now increased significantly as viewing distance decreased, though less vigorously than the TVOR. We suggest that the expectation of motion results in the subject placing greater weight on cues such as vergence and accommodation that provide veridical distance information in our experimental situation: cue selection is context specific.
Selective electrical interfaces with the nervous system.
Rutten, Wim L C
2002-01-01
To achieve selective electrical interfacing to the neural system it is necessary to approach neuronal elements on a scale of micrometers. This necessitates microtechnology fabrication and introduces the interdisciplinary field of neurotechnology, lying at the juncture of neuroscience with microtechnology. The neuroelectronic interface occurs where the membrane of a cell soma or axon meets a metal microelectrode surface. The seal between these may be narrow or may be leaky. In the latter case the surrounding volume conductor becomes part of the interface. Electrode design for successful interfacing, either for stimulation or recording, requires good understanding of membrane phenomena, natural and evoked action potential generation, volume conduction, and electrode behavior. Penetrating multimicroelectrodes have been produced as one-, two-, and three-dimensional arrays, mainly in silicon, glass, and metal microtechnology. Cuff electrodes circumvent a nerve; their selectivity aims at fascicles more than at nerve fibers. Other types of electrodes are regenerating sieves and cone-ingrowth electrodes. The latter may play a role in brain-computer interfaces. Planar substrate-embedded electrode arrays with cultured neural cells on top are used to study the activity and plasticity of developing neural networks. They also serve as substrates for future so-called cultured probes.
Smith, Tim J; Mital, Parag K
2013-07-17
Does viewing task influence gaze during dynamic scene viewing? Research into the factors influencing gaze allocation during free viewing of dynamic scenes has reported that the gaze of multiple viewers clusters around points of high motion (attentional synchrony), suggesting that gaze may be primarily under exogenous control. However, the influence of viewing task on gaze behavior in static scenes and during real-world interaction has been widely demonstrated. To dissociate exogenous from endogenous factors during dynamic scene viewing we tracked participants' eye movements while they (a) freely watched unedited videos of real-world scenes (free viewing) or (b) quickly identified where the video was filmed (spot-the-location). Static scenes were also presented as controls for scene dynamics. Free viewing of dynamic scenes showed greater attentional synchrony, longer fixations, and more gaze to people and areas of high flicker compared with static scenes. These differences were minimized by the viewing task. In comparison with the free viewing of dynamic scenes, during the spot-the-location task fixation durations were shorter, saccade amplitudes were longer, and gaze exhibited less attentional synchrony and was biased away from areas of flicker and people. These results suggest that the viewing task can have a significant influence on gaze during a dynamic scene but that endogenous control is slow to kick in as initial saccades default toward the screen center, areas of high motion and people before shifting to task-relevant features. This default-like viewing behavior returns after the viewing task is completed, confirming that gaze behavior is more predictable during free viewing of dynamic than static scenes but that this may be due to natural correlation between regions of interest (e.g., people) and motion.
Automatic event recognition and anomaly detection with attribute grammar by learning scene semantics
NASA Astrophysics Data System (ADS)
Qi, Lin; Yao, Zhenyu; Li, Li; Dong, Junyu
2007-11-01
In this paper we present a novel framework for automatic event recognition and abnormal behavior detection that learns scene semantics and represents events with an attribute grammar. The framework combines scene semantics learned through trajectory analysis with an attribute grammar-based event representation; both the scene and event information are learned automatically. Behaviors that disobey the learned scene semantics or the event grammar rules are detected as abnormal. This method yields an approach to understanding video scenes, and with this prior knowledge the accuracy of abnormal event detection is increased.
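The abstract does not reproduce the grammar itself; as a rough illustration of the idea, the Python sketch below (with hypothetical event symbols, a regular expression standing in for a production rule, and an illustrative attribute threshold) accepts trajectory-derived event sequences that parse under the rule and flags the rest as anomalous:

```python
import re

# Hypothetical terminal symbols produced by trajectory analysis:
# E = enter scene, W = walk along a learned path, S = stop, X = exit.
# One legal event ("visit"): enter, one or more walks, optional stop, exit.
# The regular expression stands in for a production rule of the grammar.
VISIT = re.compile(r"^EW+S?X$")

def attributes_ok(events):
    # Attribute constraint attached to the grammar: any stop (S)
    # must last under 60 s (the threshold is illustrative).
    return all(duration < 60 for symbol, duration in events if symbol == "S")

def classify(events):
    """events: list of (symbol, duration_seconds) pairs from tracking."""
    sentence = "".join(symbol for symbol, _ in events)
    if VISIT.match(sentence) and attributes_ok(events):
        return "normal visit"
    return "anomaly"   # disobeys learned scene semantics or grammar rules

print(classify([("E", 2), ("W", 10), ("S", 30), ("X", 2)]))  # normal visit
print(classify([("E", 2), ("S", 300), ("X", 1)]))            # anomaly
```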
Knoblauch, Andreas; Palm, Günther
2002-09-01
To investigate scene segmentation in the visual system we present a model of two reciprocally connected visual areas using spiking neurons. Area P corresponds to the orientation-selective subsystem of the primary visual cortex, while the central visual area C is modeled as associative memory representing stimulus objects according to Hebbian learning. Without feedback from area C, a single stimulus results in relatively slow and irregular activity, synchronized only for neighboring patches (slow state), while in the complete model activity is faster with an enlarged synchronization range (fast state). When presenting a superposition of several stimulus objects, scene segmentation happens on a time scale of hundreds of milliseconds by alternating epochs of the slow and fast states, where neurons representing the same object are simultaneously in the fast state. Correlation analysis reveals synchronization on different time scales as found in experiments (designated as tower, castle, and hill peaks). On the fast time scale (tower peaks, gamma frequency range), recordings from two sites coding either different or the same object lead to correlograms that are either flat or exhibit oscillatory modulations with a central peak. This is in agreement with experimental findings, whereas standard phase-coding models would predict shifted peaks in the case of different objects.
Effects of Aesthetic Chills on a Cardiac Signature of Emotionality.
Sumpf, Maria; Jentschke, Sebastian; Koelsch, Stefan
2015-01-01
Previous studies have shown that a cardiac signature of emotionality (referred to as EK, which can be computed from the standard 12 lead electrocardiogram, ECG), predicts inter-individual differences in the tendency to experience and express positive emotion. Here, we investigated whether EK values can be transiently modulated during stimulation with participant-selected music pieces and film scenes that elicit strongly positive emotion. The phenomenon of aesthetic chills, as indicated by measurable piloerection on the forearm, was used to accurately locate moments of peak emotional responses during stimulation. From 58 healthy participants, continuous EK values, heart rate, and respiratory frequency were recorded during stimulation with film scenes and music pieces, and were related to the aesthetic chills. EK values, as well as heart rate, increased significantly during moments of peak positive emotion accompanied by piloerection. These results are the first to provide evidence for an influence of momentary psychological state on a cardiac signature of emotional personality (as reflected in EK values). The possibility to modulate ECG amplitude signatures via stimulation with emotionally significant music pieces and film scenes opens up new perspectives for the use of emotional peak experiences in the therapy of disorders characterized by flattened emotionality, such as depression or schizoid personality disorder.
Einhäuser, Wolfgang; Nuthmann, Antje
2016-09-01
During natural scene viewing, humans typically attend and fixate selected locations for about 200-400 ms. Two variables characterize such "overt" attention: the probability of a location being fixated, and the fixation's duration. Both variables have been widely researched, but little is known about their relation. We use a two-step approach to investigate the relation between fixation probability and duration. In the first step, we use a large corpus of fixation data. We demonstrate that fixation probability (empirical salience) predicts fixation duration across different observers and tasks. Linear mixed-effects modeling shows that this relation is explained neither by joint dependencies on simple image features (luminance, contrast, edge density) nor by spatial biases (central bias). In the second step, we experimentally manipulate some of these features. We find that fixation probability from the corpus data still predicts fixation duration for this new set of experimental data. This holds even if stimuli are deprived of low-level image features, as long as higher-level scene structure remains intact. Together, this shows a robust relation between fixation duration and probability, which does not depend on simple image features. Moreover, the study exemplifies the combination of empirical research on a large corpus of data with targeted experimental manipulations.
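The first-step analysis can be illustrated with a linear mixed-effects model in Python's statsmodels (a minimal sketch on simulated data; the variable names and effect sizes are illustrative, not the authors' corpus):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "subject": rng.integers(0, 20, n),         # 20 simulated observers
    "salience": rng.uniform(0, 1, n),          # empirical fixation probability
    "luminance": rng.uniform(0, 1, n),         # simple image feature (control)
})
# Simulate fixation durations that rise with empirical salience,
# plus a random per-subject offset and trial noise.
subject_offset = rng.normal(0, 20, 20)[df["subject"]]
df["duration"] = 250 + 80 * df["salience"] + subject_offset + rng.normal(0, 30, n)

# Fixed effects: salience and a low-level control; random intercept per subject.
model = smf.mixedlm("duration ~ salience + luminance", df, groups=df["subject"])
print(model.fit().summary())
```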
Kaya, Emine Merve
2017-01-01
Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information—a phenomenon referred to as the ‘cocktail party problem’. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by ‘bottom-up’ sensory-driven factors, as well as ‘top-down’ task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape; with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044012
Pishyareh, Ebrahim; Tehrani-Doost, Mehdi; Mahmoodi-Gharaie, Javad; Khorrami, Anahita; Rahmdar, Saeid Reza
2015-01-01
Children with ADHD show anomalous and negative behavior, especially in emotionally charged situations, when compared to their peers. Evidence indicates that attention has an impact on emotional processing. The present study evaluates the effect of emotional processing on the sustained attention of children with ADHD, combined type. Sixty participants formed two equal groups of 30 normal and 30 ADHD children, with each subject meeting the selection criteria for the normal or the ADHD group; both groups were aged 6-11 years. All pictures were chosen from the International Affective Picture System (IAPS) and presented as paired emotional and neutral scenes in the following categories: pleasant-neutral, pleasant-unpleasant, unpleasant-neutral, and neutral-neutral. Sustained attention was evaluated from the number and duration of total fixations and compared between the groups with MANOVA. The group difference in the duration of sustained attention on pleasant scenes in pleasant-unpleasant pairs was significant, and the bias in the duration of sustained attention on pleasant scenes in pleasant-neutral pairs also differed significantly between the groups. These differences may indicate deficiencies in emotional processing in children with ADHD. The strong capture of ADHD children's attention by emotionally unpleasant scenes appears to contribute to impulsiveness and abnormal processing of emotional stimuli.
NASA Technical Reports Server (NTRS)
1978-01-01
Low energy conceptual stage designs and adaptations to existing/planned shuttle upper stages were developed and their performance established. Selected propulsion modes and subsystems were used as a basis to develop airborne support equipment (ASE) design concepts. Orbiter installation and integration (both physical and electrical interfaces) were defined. Low energy stages were adapted to the orbiter and ASE interfaces. Selected low energy stages were then used to define and describe typical ground and flight operations.
Interaction between scene-based and array-based contextual cueing.
Rosenbaum, Gail M; Jiang, Yuhong V
2013-07-01
Contextual cueing refers to the cueing of spatial attention by repeated spatial context. Previous studies have demonstrated distinctive properties of contextual cueing by background scenes and by an array of search items. Whereas scene-based contextual cueing reflects explicit learning of the scene-target association, array-based contextual cueing is supported primarily by implicit learning. In this study, we investigated the interaction between scene-based and array-based contextual cueing. Participants searched for a target that was predicted by both the background scene and the locations of distractor items. We tested three possible patterns of interaction: (1) The scene and the array could be learned independently, in which case cueing should be expressed even when only one cue was preserved; (2) the scene and array could be learned jointly, in which case cueing should occur only when both cues were preserved; (3) overshadowing might occur, in which case learning of the stronger cue should preclude learning of the weaker cue. In several experiments, we manipulated the nature of the contextual cues present during training and testing. We also tested explicit awareness of scenes, scene-target associations, and arrays. The results supported the overshadowing account: Specifically, scene-based contextual cueing precluded array-based contextual cueing when both were predictive of the location of a search target. We suggest that explicit, endogenous cues dominate over implicit cues in guiding spatial attention.
The roles of scene priming and location priming in object-scene consistency effects
Heise, Nils; Ansorge, Ulrich
2014-01-01
Presenting consistent objects in scenes facilitates object recognition as compared to inconsistent objects. Yet the mechanisms by which scenes influence object recognition are still not understood. According to one theory, consistent scenes facilitate visual search for objects at expected places. Here, we investigated two predictions following from this theory: If visual search is responsible for consistency effects, consistency effects could be weaker (1) with better-primed than less-primed object locations, and (2) with less-primed than better-primed scenes. In Experiments 1 and 2, locations of objects were varied within a scene to a different degree (one, two, or four possible locations). In addition, object-scene consistency was studied as a function of progressive numbers of repetitions of the backgrounds. Because repeating locations and backgrounds could facilitate visual search for objects, these repetitions might alter the object-scene consistency effect by lowering location uncertainty. Although we find evidence for a significant consistency effect, we find no clear support for an impact of scene priming or location priming on the size of the consistency effect. Additionally, we find evidence that the consistency effect depends on the eccentricity of the target objects. These results point to only small influences of priming on object-scene consistency effects, but all in all the findings can be reconciled with a visual-search explanation of the consistency effect. PMID:24910628
Runway Texture and Grid Pattern Effects on Rate-of-Descent Perception
NASA Technical Reports Server (NTRS)
Schroeder, J. A.; Dearing, M. G.; Sweet, B. T.; Kaiser, M. K.; Rutkowski, Mike (Technical Monitor)
2001-01-01
Perceptual errors occur when determining descent rate from a computer-generated image in flight simulation. Pilots tend to touch down twice as hard in simulation as in flight, and more training time is needed in simulation before reaching steady-state performance. Barnes suggested that recognition of range may be the culprit, citing problems such as collimated imagery, binocular vision, and poor resolution as leading to poor estimation of the velocity vector. Brown's study essentially ruled out the lack of binocular vision as the problem. Dorfel added specificity by showing that pilots underestimated range in simulated scenes by 50% when 800 ft from the runway threshold. Palmer and Petitt showed that pilots are able to distinguish between a 1.7 ft/sec and a 2.9 ft/sec sink rate when passively observing sink rates in a night scene. Platform motion also plays a role, as previous research has shown that the addition of substantial platform motion improves pilot estimates of vertical velocity and results in simulated touchdown rates more closely resembling flight. This experiment examined how specific variations in visual scene properties affect a pilot's perception of sink rate. It extended another experiment that focused on the visual and motion cues necessary for helicopter autorotations. In that experiment, pilots performed steep approaches to a runway. The visual content of the runway and its surroundings varied in two ways: texture and rectangular grid spacing. Four textures, including a no-texture case, were evaluated, along with three grid spacings, including a no-grid case. The results showed that pilots controlled their vertical descent rates better when good texture cues were present; no significant differences were found for the grid manipulation. Using those visual scenes, a simple psychophysics experiment was then performed to determine whether the variations in the visual scenes allowed pilots to perceive vertical velocity better. Pilots passively viewed a particular visual scene in which the vehicle was descending at two different rates and had to select which of the two rates they thought was faster. The difference between the two rates was adjusted with a staircase method, depending on whether or not the pilot was correct, until a minimum threshold between the two descent rates was reached. This process was repeated for all of the visual scenes to decide whether the visual scenes allowed pilots to perceive vertical velocity better. All of the data have yet to be fully analyzed; however, neither the grid nor the texture manipulation has revealed any statistically significant trends. On further examination of the staircase method employed, the lack of an evident trend may be due to the exit criterion used during the study. As such, the experiment will be repeated with an improved exit criterion in February. Results of this study will be presented in the submitted paper.
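The staircase procedure described here can be sketched generically (Python; a 2-down/1-up two-alternative staircase with a simulated observer; the step sizes, convergence rule, and exit criterion are illustrative assumptions, not the study's actual parameters):

```python
import random

def staircase_threshold(start_delta=1.2, min_step=0.05, reversals_needed=8):
    """2-down/1-up staircase for a two-interval task ("which descent rate
    is faster?"). Converges near the 71%-correct difference threshold.
    All parameters and the simulated observer are illustrative."""
    delta, step = start_delta, 0.4          # ft/s difference between rates
    streak, reversals, last_dir = 0, [], 0
    while len(reversals) < reversals_needed:
        # Simulated observer: more likely correct when the difference is big.
        p_correct = 0.5 + 0.5 * min(1.0, delta)
        if random.random() < p_correct:
            streak += 1
            if streak < 2:
                continue                    # need two correct before stepping down
            streak, direction = 0, -1       # two in a row: make it harder
        else:
            streak, direction = 0, +1       # miss: make it easier
        if last_dir and direction != last_dir:
            reversals.append(delta)         # record staircase reversals
            step = max(min_step, step / 2)  # and shrink the step size
        last_dir = direction
        delta = max(min_step, delta + direction * step)
    return sum(reversals) / len(reversals)  # threshold estimate (ft/s)

print(f"estimated discrimination threshold: {staircase_threshold():.2f} ft/s")
```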
Hierarchical video summarization based on context clustering
NASA Astrophysics Data System (ADS)
Tseng, Belle L.; Smith, John R.
2003-11-01
A personalized video summary is dynamically generated in our video personalization and summarization system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. Besides finding the desired contents, the objective is to present a coherent summary. There are diverse methods for creating summaries, and we focus on the challenges of generating a hierarchical video summary based on context information. In our summarization algorithm, three inputs are used to generate the hierarchical video summary output. These inputs are (1) MPEG-7 metadata descriptions of the contents in the server, (2) user preference and usage environment declarations from the user client, and (3) context information including MPEG-7 controlled term list and classification scheme. In a video sequence, descriptions and relevance scores are assigned to each shot. Based on these shot descriptions, context clustering is performed to collect consecutively similar shots to correspond to hierarchical scene representations. The context clustering is based on the available context information, and may be derived from domain knowledge or rules engines. Finally, the selection of structured video segments to generate the hierarchical summary efficiently balances between scene representation and shot selection.
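The context-clustering step, which groups consecutive similar shots into scene-level units, might look like the following sketch (Python; the Jaccard similarity over annotation terms and the threshold are illustrative stand-ins for the MPEG-7-based shot descriptions):

```python
def cluster_shots(shot_descriptions, threshold=0.5):
    """Greedy clustering of *consecutive* shots into scenes.
    shot_descriptions: list of sets of annotation terms, one per shot."""
    def similarity(a, b):
        # Jaccard overlap between annotation term sets (illustrative choice).
        return len(a & b) / len(a | b) if a | b else 0.0

    scenes, current = [], [0]
    for i in range(1, len(shot_descriptions)):
        if similarity(shot_descriptions[i - 1], shot_descriptions[i]) >= threshold:
            current.append(i)          # same scene: context carries over
        else:
            scenes.append(current)     # context break: start a new scene
            current = [i]
    scenes.append(current)
    return scenes                      # lists of shot indices per scene

shots = [{"beach", "day"}, {"beach", "crowd", "day"},
         {"studio", "anchor"}, {"studio", "anchor", "map"}]
print(cluster_shots(shots))            # [[0, 1], [2, 3]]
```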
Basic level scene understanding: categories, attributes and structures
Xiao, Jianxiong; Hays, James; Russell, Bryan C.; Patterson, Genevieve; Ehinger, Krista A.; Torralba, Antonio; Oliva, Aude
2013-01-01
A longstanding goal of computer vision is to build a system that can automatically understand a 3D scene from a single image. This requires extracting semantic concepts and 3D information from 2D images which can depict an enormous variety of environments that comprise our visual world. This paper summarizes our recent efforts toward these goals. First, we describe the richly annotated SUN database which is a collection of annotated images spanning 908 different scene categories with object, attribute, and geometric labels for many scenes. This database allows us to systematically study the space of scenes and to establish a benchmark for scene and object recognition. We augment the categorical SUN database with 102 scene attributes for every image and explore attribute recognition. Finally, we present an integrated system to extract the 3D structure of the scene and objects depicted in an image. PMID:24009590
Advanced display object selection methods for enhancing user-computer productivity
NASA Technical Reports Server (NTRS)
Osga, Glenn A.
1993-01-01
The User-Interface Technology Branch at NCCOSC RDT&E Division has been conducting a series of studies to address the suitability of commercial off-the-shelf (COTS) graphic user-interface (GUI) methods for efficiency and performance in critical naval combat systems. This paper presents an advanced selection algorithm and method developed to increase user performance when making selections on tactical displays. The method has also been applied with considerable success to a variety of cursor and pointing tasks. Typical GUIs allow user selection by (1) moving a cursor with a pointing device such as a mouse, trackball, joystick, or touchscreen, and (2) placing the cursor on the object. Examples of GUI objects are the buttons, icons, folders, and scroll bars used in many personal computer and workstation applications. This paper presents an improved method of selection and the theoretical basis for the significant performance gains achieved with the various input devices tested. The method is applicable to all GUI styles and display sizes, and is particularly useful for selections on small screens such as notebook computers. Considering the amount of work-hours spent pointing and clicking across all styles of available graphic user-interfaces, the cost/benefit of applying this method is substantial, with the potential for increasing productivity across thousands of users and applications.
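The abstract does not reproduce the algorithm itself; one family of techniques it evokes, relaxing exact cursor-on-object selection by assigning the selection to the nearest selectable object within a capture radius, can be sketched as follows (Python; purely illustrative, not necessarily the NCCOSC method):

```python
import math

def select_object(click, objects, capture_radius=40.0):
    """Return the object nearest the click point if it lies within
    capture_radius (pixels): this relaxes the usual requirement that the
    cursor sit exactly on the object's bounds, which helps on small or
    dense displays."""
    best, best_d = None, float("inf")
    for obj in objects:
        d = math.hypot(click[0] - obj["x"], click[1] - obj["y"])
        if d < best_d:
            best, best_d = obj, d
    return best if best_d <= capture_radius else None

tracks = [{"id": "track-1", "x": 100, "y": 120},
          {"id": "track-2", "x": 300, "y": 80}]
print(select_object((112, 131), tracks))   # track-1 (about 16 px away)
```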
ERIC Educational Resources Information Center
Henderson, John M.; Larson, Christine L.; Zhu, David C.
2008-01-01
We used fMRI to directly compare activation in two cortical regions previously identified as relevant to real-world scene processing: retrosplenial cortex and a region of posterior parahippocampal cortex functionally defined as the parahippocampal place area (PPA). We compared activation in these regions to full views of scenes from a global…
Origin and Function of Tuning Diversity in Macaque Visual Cortex
Goris, Robbe L.T.; Simoncelli, Eero P.; Movshon, J. Anthony
2016-01-01
Neurons in visual cortex vary in their orientation selectivity. We measured responses of V1 and V2 cells to orientation mixtures and fit them with a model whose stimulus selectivity arises from the combined effects of filtering, suppression, and response nonlinearity. The model explains the diversity of orientation selectivity with neuron-to-neuron variability in all three mechanisms, of which variability in the orientation bandwidth of linear filtering is the most important. The model also accounts for the cells' diversity of spatial frequency selectivity. Tuning diversity is matched to the needs of visual encoding. The orientation content found in natural scenes is diverse, and neurons with different selectivities are adapted to different stimulus configurations. Single orientations are better encoded by highly selective neurons, while orientation mixtures are better encoded by less selective neurons. A diverse population of neurons therefore provides better overall discrimination capabilities for natural images than any homogeneous population. PMID:26549331
Ball, Felix; Elzemann, Anne; Busch, Niko A
2014-09-01
The change blindness paradigm, in which participants often fail to notice substantial changes in a scene, is a popular tool for studying scene perception, visual memory, and the link between awareness and attention. Some of the most striking and popular examples of change blindness have been demonstrated with digital photographs of natural scenes; in most studies, however, much simpler displays, such as abstract stimuli or "free-floating" objects, are typically used. Although simple displays have undeniable advantages, natural scenes remain a very useful and attractive stimulus for change blindness research. To assist researchers interested in using natural-scene stimuli in change blindness experiments, we provide here a step-by-step tutorial on how to produce changes in natural-scene images with a freely available image-processing tool (GIMP). We explain how changes in a scene can be made by deleting objects or relocating them within the scene or by changing the color of an object, in just a few simple steps. We also explain how the physical properties of such changes can be analyzed using GIMP and MATLAB (a high-level scientific programming tool). Finally, we present an experiment confirming that scenes manipulated according to our guidelines are effective in inducing change blindness and demonstrating the relationship between change blindness and the physical properties of the change and inter-individual differences in performance measures. We expect that this tutorial will be useful for researchers interested in studying the mechanisms of change blindness, attention, or visual memory using natural scenes.
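The physical-property analysis can equally be done in Python rather than MATLAB; a minimal sketch (the file names are placeholders for images exported from GIMP, and the change threshold is an arbitrary assumption):

```python
import numpy as np
from PIL import Image

orig = np.asarray(Image.open("scene_original.png").convert("RGB"), dtype=float)
mod = np.asarray(Image.open("scene_modified.png").convert("RGB"), dtype=float)

diff = np.abs(orig - mod).mean(axis=2)   # per-pixel change magnitude
changed = diff > 10                      # tolerance for compression noise

print("changed area (% of image):", 100 * changed.mean())
if changed.any():
    # Mean luminance difference within the changed region.
    lum_change = (orig.mean(axis=2) - mod.mean(axis=2))[changed].mean()
    print("mean luminance change in region:", lum_change)
    ys, xs = np.nonzero(changed)
    print("change bounding box (x0, y0, x1, y1):",
          xs.min(), ys.min(), xs.max(), ys.max())
```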
Smith, Christine N.; Squire, Larry R.
2017-01-01
Eye movements can reflect memory. For example, participants make fewer fixations and sample fewer regions when viewing old versus new scenes (the repetition effect). It is unclear whether the repetition effect requires that participants have knowledge (awareness) of the old–new status of the scenes or if it can occur independent of knowledge about old–new status. It is also unclear whether the repetition effect is hippocampus-dependent or hippocampus-independent. A complication is that testing conscious memory for the scenes might interfere with the expression of unconscious (unaware), experience-dependent eye movements. In experiment 1, 75 volunteers freely viewed old and new scenes without knowledge that memory for the scenes would later be tested. Participants then made memory judgments and confidence judgments for each scene during a surprise recognition memory test. Participants exhibited the repetition effect regardless of the accuracy or confidence associated with their memory judgments (i.e., the repetition effect was independent of their awareness of the old–new status of each scene). In experiment 2, five memory-impaired patients with medial temporal lobe damage and six controls also viewed old and new scenes without expectation of memory testing. Both groups exhibited the repetition effect, even though the patients were impaired at recognizing which scenes were old and which were new. Thus, when participants viewed scenes without expectation of memory testing, eye movements associated with old and new scenes reflected unconscious, hippocampus-independent memory. These findings are consistent with the formulation that, when memory is expressed independent of awareness, memory is hippocampus-independent. PMID:28096499
Direct versus indirect processing changes the influence of color in natural scene categorization.
Otsuka, Sachio; Kawaguchi, Jun
2009-10-01
Using a negative priming (NP) paradigm, we examined whether participants would categorize color and grayscale images of natural scenes that were presented peripherally and ignored. We focused on (1) the attentional resources allocated to natural scenes and (2) direct versus indirect processing of them. We set up low and high attention-load conditions based on the set size of the searched stimuli in the prime display (one or five). Participants were required to detect and categorize the target objects in natural scenes in a central visual search task, ignoring peripheral natural images in both the prime and probe displays. The results showed that, irrespective of attention load, NP was observed for color scenes but not for grayscale scenes. We did not observe any effect of color information in central visual search, where participants responded directly to natural scenes. These results indicate that, in a situation in which participants indirectly process natural scenes, color information is critical to object categorization, but when the scenes are processed directly, color information does not contribute to categorization.
HYDICE postflight data processing
NASA Astrophysics Data System (ADS)
Aldrich, William S.; Kappus, Mary E.; Resmini, Ronald G.; Mitchell, Peter A.
1996-06-01
The hyperspectral digital imagery collection experiment (HYDICE) sensor records instrument counts for scene data, in-flight spectral and radiometric calibration sequences, and dark current levels onto an AMPEX DCRsi data tape. Following flight, the HYDICE ground data processing subsystem (GDPS) transforms selected scene data from digital numbers (DN) to calibrated radiance levels at the sensor aperture. This processing includes dark current correction, spectral and radiometric calibration, conversion to radiance, and replacement of bad detector elements. A description of the algorithms for post-flight data processing is presented, along with a brief analysis of the original radiometric calibration procedure and a description of the development of the modified procedure currently used. Example data collected during the 1995 flight season are shown both uncorrected and processed, to demonstrate the removal of apparent sensor artifacts (e.g., non-uniformities in detector response over the array) by this transformation.
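The listed steps follow a standard DN-to-radiance pattern; a minimal numpy sketch (the function and variable names are hypothetical, and the per-detector gain model and neighbor-mean bad-pixel replacement are generic assumptions, not HYDICE's actual calibration coefficients or algorithm):

```python
import numpy as np

def dn_to_radiance(dn, dark, gain, bad_mask):
    """dn, dark: (rows, cols, bands) instrument counts; gain: per-detector
    radiometric coefficients (radiance per count); bad_mask: True where a
    detector element is known to be bad."""
    radiance = (dn - dark) * gain              # dark correction + calibration
    repaired = radiance.copy()
    # Replace bad detector elements with the mean of good spatial neighbors.
    for r, c, b in zip(*np.nonzero(bad_mask)):
        r0, r1 = max(r - 1, 0), min(r + 2, radiance.shape[0])
        c0, c1 = max(c - 1, 0), min(c + 2, radiance.shape[1])
        window = radiance[r0:r1, c0:c1, b]
        good = window[~bad_mask[r0:r1, c0:c1, b]]
        if good.size:
            repaired[r, c, b] = good.mean()
    return repaired

dn = np.random.randint(0, 4096, (64, 64, 210)).astype(float)
dark = np.full_like(dn, 100.0)                 # dark-current frame
gain = np.full_like(dn, 0.01)                  # radiance units per count
bad = np.zeros(dn.shape, dtype=bool)
bad[10, 20, 5] = True                          # one known-bad element
cube = dn_to_radiance(dn, dark, gain, bad)
```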
An Evaluation of Pixel-Based Methods for the Detection of Floating Objects on the Sea Surface
NASA Astrophysics Data System (ADS)
Borghgraef, Alexander; Barnich, Olivier; Lapierre, Fabian; Van Droogenbroeck, Marc; Philips, Wilfried; Acheroy, Marc
2010-12-01
Ship-based automatic detection of small floating objects on an agitated sea surface remains a hard problem. Our main concern is the detection of floating mines, which proved a real threat to shipping in confined waterways during the first Gulf War, but applications include salvaging, search-and-rescue operations, and perimeter or harbour defense. Detection in the infrared (IR) is challenging because a rough sea is seen as a dynamic background of moving objects with size, shape, and temperature similar to those of the floating mine. In this paper we apply a selection of background subtraction algorithms to the problem, and we show that recent algorithms such as ViBe and behaviour subtraction, which take into account spatial and temporal correlations within the dynamic scene, significantly outperform the more conventional parametric techniques, with only few prior assumptions about the physical properties of the scene.
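A conventional parametric baseline of the kind the authors compare against, a Gaussian-mixture background model, can be run in a few lines with OpenCV (minimal sketch; the video path and the post-processing step are illustrative):

```python
import cv2

cap = cv2.VideoCapture("sea_sequence_ir.avi")   # hypothetical IR sequence
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)              # foreground = candidate objects
    # Suppress single-pixel sea clutter before detection.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
    cv2.imshow("detections", mask)
    if cv2.waitKey(1) == 27:                    # Esc to quit
        break
cap.release()
```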
Big Sky and Greenhorn Drilling Area on Mount Sharp
2015-12-17
This view from the Mast Camera (Mastcam) on NASA's Curiosity Mars rover covers an area in "Bridger Basin" that includes the locations where the rover drilled a target called "Big Sky" on the mission's Sol 1119 (Sept. 29, 2015) and a target called "Greenhorn" on Sol 1137 (Oct. 18, 2015). The scene combines portions of several observations taken from sols 1112 to 1126 (Sept. 22 to Oct. 6, 2015) while Curiosity was stationed at Big Sky drilling site. The Big Sky drill hole is visible in the lower part of the scene. The Greenhorn target, in a pale fracture zone near the center of the image, had not yet been drilled when the component images were taken. Researchers selected this pair of drilling sites to investigate the nature of silica enrichment in the fracture zones of the area. http://photojournal.jpl.nasa.gov/catalog/PIA20270
NASA Technical Reports Server (NTRS)
1982-01-01
The Model II Multispectral Camera is an advanced aerial camera that provides optimum enhancement of a scene by recording spectral signatures of ground objects only in narrow, preselected bands of the electromagnetic spectrum. Its photos have applications in such areas as agriculture, forestry, water pollution investigations, soil analysis, geologic exploration, water depth studies, and camouflage detection. The target scene is simultaneously photographed in four separate spectral bands. Using a multispectral viewer such as its Model 75, Spectral Data creates a color image from the black-and-white positives taken by the camera. With this optical image analysis unit, all four bands are superimposed in accurate registration and illuminated with combinations of blue, green, red, and white light. The best color combination for displaying the target object is selected and printed. Spectral Data Corporation produces several types of remote sensing equipment and also provides aerial surveys, image processing and analysis, and a number of other remote sensing services.
The Landsat Image Mosaic of Antarctica
Bindschadler, Robert; Vornberger, P.; Fleming, A.; Fox, A.; Mullins, J.; Binnie, D.; Paulsen, S.J.; Granneman, Brian J.; Gorodetzky, D.
2008-01-01
The Landsat Image Mosaic of Antarctica (LIMA) is the first true-color, high-spatial-resolution image of the seventh continent. It is constructed from nearly 1100 individually selected Landsat-7 ETM+ scenes. Each image was orthorectified and adjusted for geometric, sensor and illumination variations to a standardized, almost seamless surface reflectance product. Mosaicing to avoid clouds produced a high quality, nearly cloud-free benchmark data set of Antarctica for the International Polar Year from images collected primarily during 1999-2003. Multiple color composites and enhancements were generated to illustrate additional characteristics of the multispectral data including: the true appearance of the surface; discrimination between snow and bare ice; reflectance variations within bright snow; recovered reflectance values in regions of sensor saturation; and subtle topographic variations associated with ice flow. LIMA is viewable and individual scenes or user defined portions of the mosaic are downloadable at http://lima.usgs.gov. Educational materials associated with LIMA are available at http://lima.nasa.gov.
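One simple per-pixel compositing rule for a cloud-avoiding mosaic, taking each pixel from its clearest acquisition, can be sketched as follows (numpy; an illustration of the idea, not the LIMA production workflow):

```python
import numpy as np

def cloud_free_composite(scenes, cloud_scores):
    """scenes: (n, rows, cols) stack of co-registered reflectance images;
    cloud_scores: (n, rows, cols) per-pixel cloud likelihoods (0 = clear).
    Returns a mosaic taking each pixel from its clearest acquisition."""
    best = np.argmin(cloud_scores, axis=0)        # clearest scene per pixel
    rows, cols = np.indices(best.shape)
    return scenes[best, rows, cols]

scenes = np.random.rand(3, 4, 4)                  # three candidate acquisitions
clouds = np.random.rand(3, 4, 4)                  # their per-pixel cloud scores
mosaic = cloud_free_composite(scenes, clouds)
```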
Valero, Enrique; Adan, Antonio; Cerrada, Carlos
2012-01-01
This paper focuses on the automatic construction of 3D basic-semantic models of inhabited interiors using laser scanners with the help of RFID technologies. This is an innovative approach in a field where few publications exist. The general strategy consists of carrying out a selective and sequential segmentation of the point cloud by means of different algorithms that depend on the information provided by the RFID tags. Basic elements of the scene, such as walls, floor, ceiling, windows, doors, tables, chairs, and cabinets, can then be identified and their corresponding models positioned. The fusion of both technologies thus allows a simplified 3D semantic indoor model to be obtained. This method has been tested in real scenes under difficult clutter and occlusion conditions, and has yielded promising results. PMID:22778609
Programmable in vivo selection of arbitrary DNA sequences.
Ben Yehezkel, Tuval; Biezuner, Tamir; Linshiz, Gregory; Mazor, Yair; Shapiro, Ehud
2012-01-01
The extraordinary fidelity, sensory and regulatory capacity of natural intracellular machinery is generally confined to their endogenous environment. Nevertheless, synthetic bio-molecular components have been engineered to interface with the cellular transcription, splicing and translation machinery in vivo by embedding functional features such as promoters, introns and ribosome binding sites, respectively, into their design. Tapping and directing the power of intracellular molecular processing towards synthetic bio-molecular inputs is potentially a powerful approach, albeit limited by our ability to streamline the interface of synthetic components with the intracellular machinery in vivo. Here we show how a library of synthetic DNA devices, each bearing an input DNA sequence and a logical selection module, can be designed to direct its own probing and processing by interfacing with the bacterial DNA mismatch repair (MMR) system in vivo and selecting for the most abundant variant, regardless of its function. The device provides proof of concept for programmable, function-independent DNA selection in vivo and provides a unique example of a logical-functional interface of an engineered synthetic component with a complex endogenous cellular system. Further research into the design, construction and operation of synthetic devices in vivo may lead to other functional devices that interface with other complex cellular processes for both research and applied purposes.
Event selection services in ATLAS
NASA Astrophysics Data System (ADS)
Cranshaw, J.; Cuhadar-Donszelmann, T.; Gallas, E.; Hrivnac, J.; Kenyon, M.; McGlone, H.; Malon, D.; Mambelli, M.; Nowak, M.; Viegas, F.; Vinek, E.; Zhang, Q.
2010-04-01
ATLAS has developed and deployed event-level selection services based upon event metadata records ("TAGS") and supporting file and database technology. These services allow physicists to extract events that satisfy their selection predicates from any stage of data processing and use them as input to later analyses. One component of these services is a web-based Event-Level Selection Service Interface (ELSSI). ELSSI supports event selection by integrating run-level metadata, luminosity-block-level metadata (e.g., detector status and quality information), and event-by-event information (e.g., triggers passed and physics content). The list of events that survive after some selection criterion is returned in a form that can be used directly as input to local or distributed analysis; indeed, it is possible to submit a skimming job directly from the ELSSI interface using grid proxy credential delegation. ELSSI allows physicists to explore ATLAS event metadata as a means to understand, qualitatively and quantitatively, the distributional characteristics of ATLAS data. In fact, the ELSSI service provides an easy interface to see the highest missing ET events or the events with the most leptons, to count how many events passed a given set of triggers, or to find events that failed a given trigger but nonetheless look relevant to an analysis based upon the results of offline reconstruction, and more. This work provides an overview of ATLAS event-level selection services, with an emphasis upon the interactive Event-Level Selection Service Interface.
Brown, Daniel K; Barton, Jo L; Gladwell, Valerie F
2013-06-04
A randomized crossover study explored whether viewing different scenes prior to a stressor altered autonomic function during the recovery from the stressor. The two scenes were (a) nature (composed of trees, grass, fields) or (b) built (composed of man-made, urban scenes lacking natural characteristics) environments. Autonomic function was assessed using noninvasive techniques of heart rate variability; in particular, time domain analyses evaluated parasympathetic activity, using root-mean-square of successive differences (RMSSD). During stress, secondary cardiovascular markers (heart rate, systolic and diastolic blood pressure) showed significant increases from baseline which did not differ between the two viewing conditions. Parasympathetic activity, however, was significantly higher in recovery following the stressor in the viewing scenes of nature condition compared to viewing scenes depicting built environments (RMSSD; 50.0 ± 31.3 vs 34.8 ± 14.8 ms). Thus, viewing nature scenes prior to a stressor alters autonomic activity in the recovery period. The secondary aim was to examine autonomic function during viewing of the two scenes. Standard deviation of R-R intervals (SDRR), as change from baseline, during the first 5 min of viewing nature scenes was greater than during built scenes. Overall, this suggests that nature can elicit improvements in the recovery process following a stressor.
Henderson, John M; Choi, Wonil
2015-06-01
During active scene perception, our eyes move from one location to another via saccadic eye movements, with the eyes fixating objects and scene elements for varying amounts of time. Much of the variability in fixation duration is accounted for by attentional, perceptual, and cognitive processes associated with scene analysis and comprehension. For this reason, current theories of active scene viewing attempt to account for the influence of attention and cognition on fixation duration. Yet almost nothing is known about the neurocognitive systems associated with variation in fixation duration during scene viewing. We addressed this topic using fixation-related fMRI, which involves coregistering high-resolution eye tracking and magnetic resonance scanning to conduct event-related fMRI analysis based on characteristics of eye movements. We observed that activation in visual and prefrontal executive control areas was positively correlated with fixation duration, whereas activation in ventral areas associated with scene encoding and medial superior frontal and paracentral regions associated with changing action plans was negatively correlated with fixation duration. The results suggest that fixation duration in scene viewing is controlled by cognitive processes associated with real-time scene analysis interacting with motor planning, consistent with current computational models of active vision for scene perception.
NASA Astrophysics Data System (ADS)
Ludert, Erin Edkins
While evidence of non-baryonic dark matter has been accumulating for decades, its exact nature remains a mystery. Weakly Interacting Massive Particles (WIMPs) are well-motivated candidates which appear in certain extensions of the Standard Model, independently of dark matter theory. If such particles exist, they should occasionally interact with particles of normal matter, producing a signal which may be detected. The DarkSide-50 direct dark matter experiment aims to detect the energy of recoiling argon atoms due to the elastic scattering of postulated WIMPs. In order to make such a discovery, a clear understanding of both the background and the signal region is essential. This understanding requires a careful study of the detector's response to radioactive sources, which in turn requires that such sources can be safely introduced into or near the detector volume and reliably removed. The CALibration Insertion System (CALIS) was designed and built for this purpose in a joint effort between Fermi National Laboratory and the University of Hawaii. This work describes the design and testing of CALIS, its installation and commissioning at the Laboratori Nazionali del Gran Sasso (LNGS), and the multiple calibration campaigns which have successfully employed it. As nuclear recoils produced by WIMPs are indistinguishable from those produced by neutrons, radiogenic neutrons are both the most dangerous class of background and a vital calibration source for the study of the potential WIMP signal. Prior to the calibration of DarkSide-50 with radioactive neutron sources, the acceptance region was determined by extrapolation of nuclear recoil data from a separate, dedicated experiment, ScENE, which measured the distribution of the pulse shape discrimination parameter, f90, for nuclear recoils of known energies. This work demonstrates the validity of the extrapolation of ScENE values to DarkSide-50 by direct comparison of the f90 distributions of nuclear recoils from ScENE and from an AmBe calibration source. The combined acceptance as defined by ScENE and the in-situ AmBe calibration was used to establish the best WIMP exclusion limit on an argon target. Unfortunately, radioactive sources used for the calibration of DarkSide-50 are universally accompanied by gamma decays, which obscure the low-energy region where most WIMP interactions are expected to occur and seem to make continuing dependence on an external measurement such as ScENE inevitable. However, this work presents a novel method of nuclear recoil calibration employing event selection, unique to the design of DarkSide-50, which produces a nearly pure sample of nuclear recoils. Further, it describes the execution of a neutron calibration campaign, from planning to analysis, which yielded a valuable data set for defining the acceptance region. Together with the event selection techniques, this allows the acceptance region to be defined independently of ScENE values. Two analytical models of the f90 distribution are described and their results for nuclear recoils are compared. Finally, a detailed study of integrated noise in nuclear and electron recoil events is presented, which demonstrates a difference between these classes of events for the first time.
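For reference, f90 is conventionally defined in the liquid-argon literature as the prompt fraction of the scintillation pulse (the 90 ns window is standard; the total integration window T varies by analysis):

```latex
f_{90} = \frac{\int_{0}^{90\,\mathrm{ns}} S(t)\,dt}{\int_{0}^{T} S(t)\,dt}
```

where S(t) is the photodetector signal; nuclear recoils yield a larger prompt fraction than electron recoils, which is what makes f90 effective for pulse-shape discrimination.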
Nuthmann, Antje; Einhäuser, Wolfgang; Schütz, Immo
2017-01-01
Since the turn of the millennium, a large number of computational models of visual salience have been put forward. How best to evaluate a given model's ability to predict where human observers fixate in images of real-world scenes remains an open research question. Assessing the role of spatial biases is a challenging issue; this is particularly true when we consider the tendency for high-salience items to appear in the image center, combined with a tendency to look straight ahead ("central bias"). This problem is further exacerbated in the context of model comparisons, because some, but not all, models implicitly or explicitly incorporate a center preference to improve performance. To address this and other issues, we propose to combine a priori parcellation of scenes with generalized linear mixed models (GLMM), building upon previous work. With this method, we can explicitly model the central bias of fixation by including a central-bias predictor in the GLMM. A second predictor captures how well the saliency model predicts human fixations, above and beyond the central bias. By-subject and by-item random effects account for individual differences and differences across scene items, respectively. Moreover, we can directly assess whether a given saliency model performs significantly better than others. In this article, we describe the data processing steps required by our analysis approach. In addition, we demonstrate the GLMM analyses by evaluating the performance of different saliency models on a new eye-tracking corpus. To facilitate the application of our method, we make the open-source Python toolbox "GridFix" available.
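The GLMM step can be sketched in Python on simulated data (a minimal illustration; GridFix supplies the actual parcellation and predictors, and here a variational Bayesian binomial mixed model from statsmodels stands in for the GLMM, with by-item random effects omitted for brevity):

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(1)
n = 4000                                        # grid cells x scenes x subjects
df = pd.DataFrame({
    "subject": rng.integers(0, 15, n).astype(str),
    "dist_center": rng.uniform(0, 1, n),        # cell distance from screen center
    "saliency": rng.uniform(0, 1, n),           # mean model saliency in the cell
})
# Simulate a fixation indicator with a central bias and a saliency effect.
logit = 0.5 - 2.0 * df["dist_center"] + 1.5 * df["saliency"]
df["fixated"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Fixed effects: central-bias predictor plus saliency; random intercepts
# by subject (variational Bayes fit).
model = BinomialBayesMixedGLM.from_formula(
    "fixated ~ dist_center + saliency", {"subject": "0 + C(subject)"}, df)
print(model.fit_vb().summary())
```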
Development of an ultra-high temperature infrared scene projector at Santa Barbara Infrared Inc.
NASA Astrophysics Data System (ADS)
Franks, Greg; Laveigne, Joe; Danielson, Tom; McHugh, Steve; Lannon, John; Goodwin, Scott
2015-05-01
The rapid development of very-large format infrared detector arrays has challenged the IR scene projector community to develop correspondingly larger-format infrared emitter arrays to support the testing needs of systems incorporating these detectors. As with most integrated circuits, fabrication yields for the read-in integrated circuit (RIIC) that drives the emitter pixel array are expected to drop dramatically with increasing size, making monolithic RIICs larger than the current 1024x1024 format impractical and unaffordable. Additionally, many scene projector users require much higher simulated temperatures than current technology can generate to fully evaluate the performance of their systems and associated processing algorithms. Under the Ultra High Temperature (UHT) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>1024x1024) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During an earlier phase of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1000K. New emitter materials have subsequently been selected to produce pixels that achieve even higher apparent temperatures. Test results from pixels fabricated using the new material set will be presented and discussed. Also in development under the same UHT program is a 'scalable' RIIC that will be used to drive the high temperature pixels. This RIIC will utilize through-silicon vias (TSVs) and quilt packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the inherent yield limitations of very-large-scale integrated circuits. Current status of the RIIC development effort will also be presented.
Looking Toward Curiosity Study Areas, Spring 2015
2015-05-08
This detailed panorama from the Mast Camera (Mastcam) on NASA's Curiosity Mars rover shows a view toward two areas on lower Mount Sharp chosen for close-up inspection: "Mount Shields" and "Logan Pass." The scene is a mosaic of images taken with Mastcam's right-eye camera, which has a telephoto lens, on April 16, 2015, during the 957th Martian day, or sol, of Curiosity's work on Mars, before that sol's drive. The view spans from southwest, at left, to west-northwest. The color has been approximately white-balanced to resemble how the scene would appear under daytime lighting conditions on Earth. By 10 sols later, Curiosity had driven about 328 meters (1,076 feet) from the location where it made this observation to an outcrop at the base of "Mount Shields." A 5-meter scale bar has been superimposed near the center of this scene beside the outcrop that the rover then examined in detail. (Five meters is 16.4 feet.) This study location was chosen on the basis of Mount Shields displaying a feature that geologists recognized from images like this as likely to be a site where an ancient valley was incised into bedrock, then refilled with other sediment. After a few sols examining the outcrop at the base of Mount Shields, Curiosity resumed driving toward a study area at Logan Pass, near the 5-meter scale bar in the left half of this scene. That location was selected earlier, on the basis of images from orbit indicating contact there between two different geological units. The rover's route from Mount Shields to Logan Pass runs behind "Jocko Butte" from the viewpoint where this panorama was taken. http://photojournal.jpl.nasa.gov/catalog/PIA19398
Injury prevention practices as depicted in G-rated and PG-rated movies.
Pelletier, A R; Quinlan, K P; Sacks, J J; Van Gilder, T J; Gilchrist, J; Ahluwalia, H K
2000-03-01
Context: Previous studies on alcohol, tobacco, and violence suggest that children's behavior can be influenced by mass media; however, little is known about the effect of media on unintentional injuries, the leading cause of death among young persons in the United States. Objective: To determine how injury prevention practices are depicted in G-rated (general audience) and PG-rated (parental guidance recommended) movies. Design: Observational study. Sample: The 25 movies with the highest domestic box-office grosses and a rating of G or PG for each year from 1995 through 1997; movies that were predominantly animated or not set in the present day were excluded from analysis. Participants: Movie characters with speaking roles. Main outcome measures: Safety belt use by motor vehicle occupants, use of a crosswalk and looking both ways by pedestrians crossing a street, helmet use by bicyclists, personal flotation device use by boaters, and selected other injury prevention practices. Results: Fifty nonanimated movies set in the present day were included in the study. A total of 753 person-scenes involving riding in a motor vehicle, crossing the street, bicycling, and boating were shown (median, 13.5 person-scenes per movie). Forty-two person-scenes (6%) involved falls or crashes, which resulted in 4 injuries and 2 deaths. Overall, 119 (27%) of 447 motor vehicle occupants wore safety belts, 20 (18%) of 109 pedestrians looked both ways before crossing the street and 25 (16%) of 160 used a crosswalk, 4 (6%) of 64 bicyclists wore helmets, and 14 (17%) of 82 boaters wore personal flotation devices. Conclusions: In scenes depicting everyday life in popular movies likely to be seen by children, characters were infrequently portrayed practicing recommended safe behaviors, and the consequences of unsafe behaviors were rarely shown. The entertainment industry should improve its depiction of injury prevention practices in G-rated and PG-rated movies.
Integrating UAV Flight outputs in Esri's CityEngine for semi-urban areas
NASA Astrophysics Data System (ADS)
Anca, Paula; Vasile, Alexandru; Sandric, Ionut
2016-04-01
One of the most pervasive technologies of recent years, which has crossed over into consumer products due to its falling price, is the UAV (unmanned aerial vehicle), commonly known as the drone. Besides ever-more accessible prices and growing functionality, what is truly impressive is the drastic reduction in processing time, from days to hours, from the initial flight preparation to the final output. This paper presents such a workflow and goes further by integrating the outputs into another growing technology: 3D. The software used for this purpose is Esri's CityEngine, which was developed for modeling 3D urban environments using existing 2D GIS data and computer generated architecture (CGA) rules, instead of modeling each feature individually. A semi-urban area was selected for this study and captured using the E-Bee from Parrot. The point cloud elevation output from the E-Bee flight was transformed into a raster in order to be used as an elevation surface in CityEngine, and the mosaic raster dataset was draped over this surface. To model the buildings in this area, CGA rules were written using the building footprints, in the form of Feature Classes, as inputs. The extrusion heights for the buildings were also extracted from the point cloud, and realistic textures were draped over the 3D building models. Finally, the scene was shared as a 3D web-scene which can be accessed by anyone through a link, without any software besides an internet browser. This can serve as input for Smart City development through further analysis for urban ecology. Keywords: 3D, drone, CityEngine, E-Bee, Esri, scene, web-scene
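The point-cloud-to-raster step described above is a standard gridding operation; the abstract gives no implementation details, so the following is only a minimal sketch (the function name, cell size, and mean-per-cell rule are assumptions, not the authors' workflow):

```python
import numpy as np

def points_to_elevation_raster(points, cell_size):
    """Grid a point cloud of (x, y, z) rows into an elevation raster
    by averaging the z values of all points falling in each cell."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    cols = ((x - x.min()) / cell_size).astype(int)
    rows = ((y.max() - y) / cell_size).astype(int)   # north-up rows
    n_rows, n_cols = rows.max() + 1, cols.max() + 1
    total = np.zeros((n_rows, n_cols))
    count = np.zeros((n_rows, n_cols))
    np.add.at(total, (rows, cols), z)
    np.add.at(count, (rows, cols), 1)
    raster = np.full((n_rows, n_cols), np.nan)       # empty cells stay NaN
    mask = count > 0
    raster[mask] = total[mask] / count[mask]
    return raster

# toy usage: 1000 random points over a 100 m x 100 m area, 0.5 m cells
pts = np.random.rand(1000, 3) * [100, 100, 10]
dem = points_to_elevation_raster(pts, cell_size=0.5)
```

A real DEM workflow would follow this with an interpolation pass to fill the NaN cells before the raster is used as a CityEngine elevation surface.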
New approaches for the design and the fabrication of pixelated filters
NASA Astrophysics Data System (ADS)
Lumeau, J.; Lemarquis, F.; Begou, T.; Mathieu, K.; Savin De Larclause, I.; Berthon, J.
2017-09-01
Multispectral or hyperspectral images allow acquiring information that cannot be obtained from conventional color images, for example, identifying chemical species in an observed scene using specific, highly selective thin-film filters. Such images are commonly used in numerous fields, e.g. in agriculture or homeland security, and are of prime interest for imaging systems for onboard scientific applications (e.g. for planetology).
Selection and Storage of Perceptual Groups Is Constrained by a Discrete Resource in Working Memory
ERIC Educational Resources Information Center
Anderson, David E.; Vogel, Edward K.; Awh, Edward
2013-01-01
Perceptual grouping can lead observers to perceive a multielement scene as a smaller number of hierarchical units. Past work has shown that grouping enables more elements to be stored in visual working memory (WM). Although this may appear to contradict so-called discrete resource models that argue for fixed item limits in WM storage, it is also…
ERIC Educational Resources Information Center
Cave, Kyle R.; Bush, William S.; Taylor, Thalia G. G.
2010-01-01
Jans, Peters, and De Weerd (2010) examined the studies demonstrating that spatial attention can be split across 2 noncontiguous target locations. They find all these studies to be flawed and conclude that spatial attention only selects a single location at any given time. They do, however, suggest that there could be exceptional circumstances that…
Attention to Multiple Objects Facilitates Their Integration in Prefrontal and Parietal Cortex.
Kim, Yee-Joon; Tsai, Jeffrey J; Ojemann, Jeffrey; Verghese, Preeti
2017-05-10
Selective attention is known to interact with perceptual organization. In visual scenes, individual objects that are distinct and discriminable may occur on their own, or in groups such as a stack of books. The main objective of this study is to probe the neural interaction that occurs between individual objects when attention is directed toward one or more objects. Here we record steady-state visual evoked potentials via electrocorticography to directly assess the responses to individual stimuli and to their interaction. When human participants attend to two adjacent stimuli, prefrontal and parietal cortex shows a selective enhancement of only the neural interaction between stimuli, but not the responses to individual stimuli. When only one stimulus is attended, the neural response to that stimulus is selectively enhanced in prefrontal and parietal cortex. In contrast, early visual areas generally manifest responses to individual stimuli and to their interaction regardless of attentional task, although a subset of the responses is modulated similarly to prefrontal and parietal cortex. Thus, the neural representation of the visual scene as one progresses up the cortical hierarchy becomes more highly task-specific and represents either individual stimuli or their interaction, depending on the behavioral goal. Attention to multiple objects facilitates an integration of objects akin to perceptual grouping. SIGNIFICANCE STATEMENT Individual objects in a visual scene are seen as distinct entities or as parts of a whole. Here we examine how attention to multiple objects affects their neural representation. Previous studies measured single-cell or fMRI responses and obtained only aggregate measures that combined the activity to individual stimuli as well as their potential interaction. Here, we directly measure electrocorticographic steady-state responses corresponding to individual objects and to their interaction using a frequency-tagging technique. Attention to two stimuli increases the interaction component that is a hallmark for perceptual integration of stimuli. Furthermore, this stimulus-specific interaction is represented in prefrontal and parietal cortex in a task-dependent manner. Copyright © 2017 the authors 0270-6474/17/374942-12$15.00/0.
Scene construction in developmental amnesia: An fMRI study☆
Mullally, Sinéad L.; Vargha-Khadem, Faraneh; Maguire, Eleanor A.
2014-01-01
Amnesic patients with bilateral hippocampal damage sustained in adulthood are generally unable to construct scenes in their imagination. By contrast, patients with developmental amnesia (DA), where hippocampal damage was acquired early in life, have preserved performance on this task, although the reason for this sparing is unclear. One possibility is that residual function in remnant hippocampal tissue is sufficient to support basic scene construction in DA. Such a situation was found in the one amnesic patient with adult-acquired hippocampal damage (P01) who could also construct scenes. Alternatively, DA patients’ scene construction might not depend on the hippocampus, perhaps being instead reliant on non-hippocampal regions and mediated by semantic knowledge. To adjudicate between these two possibilities, we examined scene construction during functional MRI (fMRI) in Jon, a well-characterised patient with DA who has previously been shown to have preserved scene construction. We found that when Jon constructed scenes he activated many of the regions known to be associated with imagining scenes in control participants including ventromedial prefrontal cortex, posterior cingulate, retrosplenial and posterior parietal cortices. Critically, however, activity was not increased in Jon's remnant hippocampal tissue. Direct comparisons with a group of control participants and patient P01, confirmed that they activated their right hippocampus more than Jon. Our results show that a type of non-hippocampal dependent scene construction is possible and occurs in DA, perhaps mediated by semantic memory, which does not appear to involve the vivid visualisation of imagined scenes. PMID:24231038
Does object view influence the scene consistency effect?
Sastyin, Gergo; Niimi, Ryosuke; Yokosawa, Kazuhiko
2015-04-01
Traditional research on the scene consistency effect only used clearly recognizable object stimuli to show mutually interactive context effects for both the object and background components on scene perception (Davenport & Potter in Psychological Science, 15, 559-564, 2004). However, in real environments, objects are viewed from multiple viewpoints, including an accidental, hard-to-recognize one. When the observers named target objects in scenes (Experiments 1a and 1b, object recognition task), we replicated the scene consistency effect (i.e., there was higher accuracy for the objects with consistent backgrounds). However, there was a significant interaction effect between consistency and object viewpoint, which indicated that the scene consistency effect was more important for identifying objects in the accidental view condition than in the canonical view condition. Therefore, the object recognition system may rely more on the scene context when the object is difficult to recognize. In Experiment 2, the observers identified the background (background recognition task) while the scene consistency and object views were manipulated. The results showed that object viewpoint had no effect, while the scene consistency effect was observed. More specifically, the canonical and accidental views both equally provided contextual information for scene perception. These findings suggested that the mechanism for conscious recognition of objects could be dissociated from the mechanism for visual analysis of object images that were part of a scene. The "context" that the object images provided may have been derived from its view-invariant, relatively low-level visual features (e.g., color), rather than its semantic information.
NASA Technical Reports Server (NTRS)
Catalina, Adrian V.; Sen, S.; Rose, M. Franklin (Technical Monitor)
2001-01-01
The evolution of cellular solid/liquid interfaces from an initially unstable planar front was studied by means of a two-dimensional computer simulation. The numerical model makes use of an interface tracking procedure and has the capability to describe the dynamics of the interface morphology based on local changes of the thermodynamic conditions. The fundamental physics of this formulation was validated against experimental microgravity results and the predictions of the analytical linear stability theory. The simulations revealed that, under certain conditions, based on a competitive growth mechanism, an interface can become unstable to random perturbations of infinitesimal amplitude even at wavelengths smaller than the neutral wavelength, lambda(sub c), predicted by the linear stability theory. Furthermore, two main stages of spacing selection have been identified. In the first stage, at low perturbation amplitude, the selection mechanism is driven by the maximum growth rate of instabilities, while in the second stage the selection is influenced by nonlinear phenomena caused by interactions between neighboring cells. Comparison of these predictions with other existing theories of pattern formation and with experimental results will be discussed.
The influence of behavioral relevance on the processing of global scene properties: An ERP study.
Hansen, Natalie E; Noesen, Birken T; Nador, Jeffrey D; Harel, Assaf
2018-05-02
Recent work studying the temporal dynamics of visual scene processing (Harel et al., 2016) has found that global scene properties (GSPs) modulate the amplitude of early Event-Related Potentials (ERPs). It is still not clear, however, to what extent the processing of these GSPs is influenced by their behavioral relevance, determined by the goals of the observer. To address this question, we investigated how behavioral relevance, operationalized by the task context, impacts the electrophysiological responses to GSPs. In a set of two experiments we recorded ERPs while participants viewed images of real-world scenes, varying along two GSPs, naturalness (manmade/natural) and spatial expanse (open/closed). In Experiment 1, very little attention to scene content was required as participants viewed the scenes while performing an orthogonal fixation-cross task. In Experiment 2 participants saw the same scenes but now had to actively categorize them, based either on their naturalness or spatial expanse. We found that task context had very little impact on the early ERP responses to the naturalness and spatial expanse of the scenes: P1, N1, and P2 could distinguish between open and closed scenes and between manmade and natural scenes across both experiments. Further, the specific effects of naturalness and spatial expanse on the ERP components were largely unaffected by their relevance for the task. A task effect was found at the N1 and P2 level, but this effect was manifest across all scene dimensions, indicating a general effect rather than an interaction between task context and GSPs. Together, these findings suggest that the extraction of global scene information reflected in the early ERP components is rapid and little influenced by top-down observer-based goals. Copyright © 2018 Elsevier Ltd. All rights reserved.
Phase 1 Development Report for the SESSA Toolkit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knowlton, Robert G.; Melton, Brad J; Anderson, Robert J.
The Site Exploitation System for Situational Awareness (SESSA) toolkit, developed by Sandia National Laboratories (SNL), is a comprehensive decision support system for crime scene data acquisition and Sensitive Site Exploitation (SSE). SESSA is an outgrowth of another SNL-developed decision support system, the Building Restoration Operations Optimization Model (BROOM), a hardware/software solution for data acquisition, data management, and data analysis. SESSA was designed to meet forensic crime scene needs as defined by the DoD's Military Criminal Investigation Organization (MCIO). SESSA is a very comprehensive toolkit, with a considerable amount of database information managed through a Microsoft SQL (Structured Query Language) database engine, a Geographical Information System (GIS) engine that provides comprehensive mapping capabilities, as well as an intuitive Graphical User Interface (GUI). An electronic sketch pad module is included. The system also has the ability to efficiently generate necessary forms for forensic crime scene investigations (e.g., evidence submittal, laboratory requests, and scene notes). SESSA allows the user to capture photos on site, and can read and generate barcode labels that limit transcription errors. SESSA runs on PC computers running Windows 7, but is optimized for touch-screen tablet computers running Windows for ease of use at crime scenes and on SSE deployments. A prototype system for 3-dimensional (3D) mapping and measurements was also developed to complement the SESSA software. The mapping system employs a visual/depth sensor that captures data to create 3D visualizations of an interior space and to make distance measurements with centimeter-level accuracy. Output of this 3D Model Builder module provides a virtual 3D "walk-through" of a crime scene. The 3D mapping system is much less expensive and easier to use than competitive systems. This document covers the basic installation and operation of the SESSA toolkit in order to give the user enough information to start using the toolkit. SESSA is currently a prototype system and this documentation covers the initial release of the toolkit. Funding for SESSA was provided by the Department of Defense (DoD), Assistant Secretary of Defense for Research and Engineering (ASD(R&E)) Rapid Fielding (RF) organization. The project was managed by the Defense Forensic Science Center (DFSC), formerly known as the U.S. Army Criminal Investigation Laboratory (USACIL). ACKNOWLEDGEMENTS The authors wish to acknowledge the funding support for the development of the Site Exploitation System for Situational Awareness (SESSA) toolkit from the Department of Defense (DoD), Assistant Secretary of Defense for Research and Engineering (ASD(R&E)) Rapid Fielding (RF) organization. The project was managed by the Defense Forensic Science Center (DFSC), formerly known as the U.S. Army Criminal Investigation Laboratory (USACIL). Special thanks to Mr. Garold Warner, of DFSC, who served as the Project Manager. Individuals that worked on the design, functional attributes, algorithm development, system architecture, and software programming include: Robert Knowlton, Brad Melton, Robert Anderson, and Wendy Amai.
NASA Fundamental Remote Sensing Science Research Program
NASA Technical Reports Server (NTRS)
1984-01-01
The NASA Fundamental Remote Sensing Research Program is described. The program provides a dynamic scientific base which is continually broadened and from which future applied research and development can draw support. In particular, the overall objectives and current studies of the scene radiation and atmospheric effect characterization (SRAEC) project are reviewed. The SRAEC research can be generically structured into four types of activities including observation of phenomena, empirical characterization, analytical modeling, and scene radiation analysis and synthesis. The first three activities are the means by which the goal of scene radiation analysis and synthesis is achieved, and thus are considered priority activities during the early phases of the current project. Scene radiation analysis refers to the extraction of information describing the biogeophysical attributes of the scene from the spectral, spatial, and temporal radiance characteristics of the scene including the atmosphere. Scene radiation synthesis is the generation of realistic spectral, spatial, and temporal radiance values for a scene with a given set of biogeophysical attributes and atmospheric conditions.
Johnson, Matthew R; Johnson, Marcia K
2009-12-01
Recent research has demonstrated top-down attentional modulation of activity in extrastriate category-selective visual areas while stimuli are in view (perceptual attention) and after they are removed from view (reflective attention). Perceptual attention is capable of both enhancing and suppressing activity in category-selective areas relative to a passive viewing baseline. In this study, we demonstrate that a brief, simple act of reflective attention ("refreshing") is also capable of both enhancing and suppressing activity in some scene-selective areas (the parahippocampal place area [PPA]) but not others (refreshing resulted in enhancement but not in suppression in the middle occipital gyrus [MOG]). This suggests that different category-selective extrastriate areas preferring the same class of stimuli may contribute differentially to reflective processing of one's internal representations of such stimuli.
NASA Astrophysics Data System (ADS)
Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.
2013-05-01
Algorithm Development Library (ADL) is a framework that mimics the operational system IDPS (Interface Data Processing Segment), which is currently being used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), which will be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists that include fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results through qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.
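The abstract does not name the evaluation statistics used; a minimal sketch of one plausible quantitative comparison, per-profile bias and RMSE between ADL-emulated and operational temperature retrievals, on synthetic data:

```python
import numpy as np

def profile_stats(profile_a, profile_b):
    """Per-profile bias and RMSE (in K) between two retrievals."""
    diff = np.asarray(profile_a) - np.asarray(profile_b)
    return diff.mean(), np.sqrt((diff ** 2).mean())

# toy 100-level AVTP-like profiles; the noise levels are illustrative
rng = np.random.default_rng(0)
truth = np.linspace(200.0, 290.0, 100)
adl = truth + rng.normal(0.0, 0.4, 100)    # hypothetical ADL output
idps = truth + rng.normal(0.2, 0.5, 100)   # hypothetical IDPS output
bias, rmse = profile_stats(adl, idps)
print("ADL vs IDPS: bias = %.2f K, rmse = %.2f K" % (bias, rmse))
```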
Human machine interface by using stereo-based depth extraction
NASA Astrophysics Data System (ADS)
Liao, Chao-Kang; Wu, Chi-Hao; Lin, Hsueh-Yi; Chang, Ting-Ting; Lin, Tung-Yang; Huang, Po-Kuan
2014-03-01
The ongoing success of three-dimensional (3D) cinema fuels increasing efforts to spread the commercial success of 3D to new markets. The possibilities of a convincing 3D experience at home, such as three-dimensional television (3DTV), have generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Acquiring scene depth information is a fundamental task in computer vision, yet complex and error-prone. Dedicated range sensors, such as the Time-of-Flight (ToF) camera, can simplify the scene depth capture process and overcome shortcomings of traditional solutions, such as active or passive stereo analysis. Admittedly, currently available ToF sensors deliver only a limited spatial resolution. However, sophisticated depth upscaling approaches use texture information to match depth and video resolution. At Electronic Imaging 2012 we proposed an upscaling routine based on error energy minimization, weighted with edge information from an accompanying video source. In this article we develop our algorithm further. By adding temporal consistency constraints to the upscaling process, we reduce disturbing depth jumps and flickering artifacts in the final 3DTV content. Temporal consistency in depth maps enhances the 3D experience, leading to a wider acceptance of 3D media content. More content in better quality can boost the commercial success of 3DTV.
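As a sketch of edge-guided depth upscaling in general (this is a plain joint-bilateral-style filter, not the authors' error-energy-minimization routine, and it omits their temporal-consistency term, which in the simplest case could blend each upscaled frame with its predecessor):

```python
import numpy as np

def edge_weighted_upscale(depth_lo, guide, scale, sigma_i=0.1, radius=2):
    """Upscale a low-res depth map using a high-res guide image: each
    output pixel averages nearby low-res depth samples, down-weighting
    samples whose guide intensity differs, so depth edges follow
    image edges."""
    H, W = guide.shape
    depth_hi = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y // scale, x // scale
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ly = min(max(cy + dy, 0), depth_lo.shape[0] - 1)
                    lx = min(max(cx + dx, 0), depth_lo.shape[1] - 1)
                    gy = min(ly * scale, H - 1)   # guide pixel of sample
                    gx = min(lx * scale, W - 1)
                    w = np.exp(-((guide[y, x] - guide[gy, gx]) ** 2)
                               / (2 * sigma_i ** 2))
                    num += w * depth_lo[ly, lx]
                    den += w
            depth_hi[y, x] = num / den
    return depth_hi

# toy usage: a 16x16 depth map upscaled x4 against a 64x64 guide image
rng = np.random.default_rng(5)
guide = rng.random((64, 64))
depth = edge_weighted_upscale(rng.random((16, 16)), guide, scale=4)
```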
Fast cat-eye effect target recognition based on saliency extraction
NASA Astrophysics Data System (ADS)
Li, Li; Ren, Jianlin; Wang, Xingbin
2015-09-01
Background complexity is a main cause of false detections in cat-eye target recognition. Human vision has a selective-attention property that helps it find salient targets in complex, unknown scenes quickly and precisely. In this paper, we propose a novel cat-eye effect target recognition method named Multi-channel Saliency Processing before Fusion (MSPF). This method combines traditional cat-eye target recognition with the selective character of visual attention. Furthermore, parallel processing enables fast recognition. Experimental results show that the proposed method performs better in accuracy, robustness and speed compared to other methods.
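The MSPF pipeline itself is not detailed in the abstract; a toy sketch of the general per-channel-saliency-then-fuse idea (the normalization and the per-pixel max fusion rule are assumptions):

```python
import numpy as np

def normalize(m):
    """Rescale a map to [0, 1]; constant maps become all zeros."""
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def fuse_saliency(channel_maps):
    """Normalize each channel's saliency map independently, then fuse
    by taking the per-pixel maximum across channels."""
    return normalize(np.max([normalize(c) for c in channel_maps], axis=0))

# toy usage with three hypothetical channel saliency maps
h, w = 64, 64
maps = [np.random.rand(h, w) for _ in range(3)]
saliency = fuse_saliency(maps)
candidates = saliency > 0.8   # candidate cat-eye target regions
```

Because each channel is processed independently before fusion, the per-channel step parallelizes naturally, which is presumably what enables the fast recognition the abstract claims.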
NASA Technical Reports Server (NTRS)
1982-01-01
A gallery of what might be called the "Best of HCMM" imagery is presented. These 100 images, consisting mainly of Day-VIS, Day-IR, and Night-IR scenes plus a few thermal inertia images, were selected from the collection accrued in the Missions Utilization Office (Code 902) at the Goddard Space Flight Center. They were selected because of both their pictorial quality and their information or interest content. Nearly all the images are the computer processed and contrast stretched products routinely produced by the image processing facility at GSFC. Several LANDSAT images, special HCMM images made by HCMM investigators, and maps round out the input.
Delineation of soil temperature regimes from HCMM data
NASA Technical Reports Server (NTRS)
Day, R. L.; Petersen, G. W. (Principal Investigator)
1981-01-01
Supplementary data including photographs as well as topographic, geologic, and soil maps were obtained and evaluated for ground truth purposes and control point selection. A study area (approximately 450 by 450 pixels) was subset from LANDSAT scene No. 2477-17142. Geometric corrections and scaling were performed. Initial enhancement techniques were initiated to aid control point selection and soils interpretation. The SUBSET program was modified to read HCMM tapes, and HCMM data were reformatted so that they are compatible with the ORSER system. Initial NMAP products of geometrically corrected and scaled raw data tapes (unregistered) of the study area were produced.
Cheng, Timothy C; Bandyopadhyay, Biswajit; Mosley, Jonathan D; Duncan, Michael A
2012-08-08
The structure of ions in water at a hydrophobic interface influences important processes throughout chemistry and biology. However, experiments to measure these structures are limited by the distribution of configurations present and the inability to selectively probe the interfacial region. Here, protonated nanoclusters containing benzene and water are produced in the gas phase, size-selected, and investigated with infrared laser spectroscopy. Proton stretch, free OH, and hydrogen-bonding vibrations uniquely define protonation sites and hydrogen-bonding networks. The structures consist of protonated water clusters binding to the hydrophobic interface of neutral benzene via one or more π-hydrogen bonds. Comparison to the spectra of isolated hydronium, zundel, or eigen ions reveals the inductive effects and local ordering induced by the interface. The structures and interactions revealed here represent key features expected for aqueous hydrophobic interfaces.
Bulk silicon as photonic dynamic infrared scene projector
NASA Astrophysics Data System (ADS)
Malyutenko, V. K.; Bogatyrenko, V. V.; Malyutenko, O. Yu.
2013-04-01
A Si-based fast (frame rate >1 kHz), large-scale (scene area 100 cm2), broadband (3-12 μm), dynamic contactless infrared (IR) scene projector is demonstrated. An IR movie appears on the scene as a result of the conversion of a visible scenario projected onto the scene, which is kept at elevated temperature. Light down-conversion comes as a result of free carrier generation in a bulk Si scene followed by modulation of its thermal emission output in the spectral band of free carrier absorption. The experimental setup, an IR movie, figures of merit, and the process's advantages in comparison to other projector technologies are discussed.
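The radiance contrast available from this kind of thermal-emission modulation can be estimated from the Planck law; a small sketch integrating spectral radiance over the 3-12 μm band (the scene temperature and effective-emissivity swing below are illustrative values, not the paper's measurements):

```python
import numpy as np

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck(lam, T):
    """Blackbody spectral radiance (W / m^2 sr m) at wavelength lam (m)."""
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

def band_radiance(T, lam_lo=3e-6, lam_hi=12e-6, n=2000):
    """In-band radiance over [lam_lo, lam_hi] by trapezoidal integration."""
    lam = np.linspace(lam_lo, lam_hi, n)
    return np.trapz(planck(lam, T), lam)

# modulating the effective emissivity (via free-carrier absorption)
# between, say, 0.3 and 0.7 on a 400 K scene gives an in-band contrast:
L = band_radiance(400.0)
print("in-band radiance swing: %.1f -> %.1f W/m^2/sr" % (0.3 * L, 0.7 * L))
```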
Effects of memory colour on colour constancy for unknown coloured objects
Granzier, Jeroen J M; Gegenfurtner, Karl R
2012-01-01
The perception of an object's colour remains constant despite large variations in the chromaticity of the illumination—colour constancy. Hering suggested that memory colours, the typical colours of objects, could help in estimating the illuminant's colour and therefore be an important factor in establishing colour constancy. Here we test whether the presence of objects with diagnostical colours (fruits, vegetables, etc) within a scene influence colour constancy for unknown coloured objects in the scene. Subjects matched one of four Munsell papers placed in a scene illuminated under either a reddish or a greenish lamp with the Munsell book of colour illuminated by a neutral lamp. The Munsell papers were embedded in four different scenes—one scene containing diagnostically coloured objects, one scene containing incongruent coloured objects, a third scene with geometrical objects of the same colour as the diagnostically coloured objects, and one scene containing non-diagnostically coloured objects (eg, a yellow coffee mug). All objects were placed against a black background. Colour constancy was on average significantly higher for the scene containing the diagnostically coloured objects compared with the other scenes tested. We conclude that the colours of familiar objects help in obtaining colour constancy for unknown objects. PMID:23145282
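Matching experiments of this kind are commonly summarized with a Brunswik-ratio-style constancy index; a minimal sketch of that generic index (not necessarily the exact measure used in this study; the chromaticities below are made up):

```python
import numpy as np

def constancy_index(match_under_test, match_under_neutral, ideal_match):
    """1 = perfect constancy, 0 = no adjustment for the illuminant.
    ideal_match is where a perfectly colour-constant observer would
    match under the test illuminant; all points are chromaticities."""
    residual = np.linalg.norm(np.subtract(match_under_test, ideal_match))
    full_shift = np.linalg.norm(np.subtract(match_under_neutral, ideal_match))
    return 1.0 - residual / full_shift

# toy (u', v') coordinates: a value near 1 means high constancy
print(constancy_index((0.21, 0.48), (0.25, 0.52), (0.20, 0.47)))
```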
Cleary, Anne M; Ryals, Anthony J; Nomi, Jason S
2009-12-01
The strange feeling of having been somewhere or done something before--even though there is evidence to the contrary--is called déjà vu. Although déjà vu is beginning to receive attention among scientists (Brown, 2003, 2004), few studies have empirically investigated the phenomenon. We investigated the hypothesis that déjà vu is related to feelings of familiarity and that it can result from similarity between a novel scene and that of a scene experienced in one's past. We used a variation of the recognition-without-recall method of studying familiarity (Cleary, 2004) to examine instances in which participants failed to recall a studied scene in response to a configurally similar novel test scene. In such instances, resemblance to a previously viewed scene increased both feelings of familiarity and of déjà vu. Furthermore, in the absence of recall, resemblance of a novel scene to a previously viewed scene increased the probability of a reported déjà vu state for the novel scene, and feelings of familiarity with a novel scene were directly related to feelings of being in a déjà vu state.
Research and Technology Development for Construction of 3d Video Scenes
NASA Astrophysics Data System (ADS)
Khlebnikova, Tatyana A.
2016-06-01
For the last two decades, surface information in the form of conventional digital and analogue topographic maps has been supplemented by new digital geospatial products, also known as 3D models of real objects. It is shown that currently there are no defined standards for 3D scene construction technologies that could be used by Russian surveying and cartographic enterprises. Requirements on source data, their capture, and their transfer for creating 3D scenes have not yet been defined. The accuracy of 3D video scenes used for measuring purposes is rarely addressed in publications. The practicability of developing, researching, and implementing a technology for the construction of 3D video scenes is substantiated by the capability of 3D video scenes to expand the application of data analysis to environmental monitoring, urban planning, and managerial decision problems. A technology for the construction of 3D video scenes meeting specified metric requirements is offered. A technique and methodological background are recommended for this technology, used to construct 3D video scenes based on DTMs created from satellite and aerial survey data. The results of accuracy estimation of 3D video scenes are presented.
Bacterial diversity and composition of an alkaline uranium mine tailings-water interface.
Khan, Nurul H; Bondici, Viorica F; Medihala, Prabhakara G; Lawrence, John R; Wolfaardt, Gideon M; Warner, Jeff; Korber, Darren R
2013-10-01
The microbial diversity and biogeochemical potential associated with a northern Saskatchewan uranium mine water-tailings interface was examined using culture-dependent and -independent techniques. Morphologically-distinct colonies from uranium mine water-tailings and a reference lake (MC) obtained using selective and non-selective media were selected for 16S rRNA gene sequencing and identification, revealing that culturable organisms from the uranium tailings interface were dominated by Firmicutes and Betaproteobacteria; whereas, MC organisms mainly consisted of Bacteroidetes and Gammaproteobacteria. Ion Torrent (IT) 16S rRNA metagenomic analysis carried out on extracted DNA from tailings and MC interfaces demonstrated the dominance of Firmicutes in both of the systems. Overall, the tailings-water interface environment harbored a distinct bacterial community relative to the MC, reflective of the ambient conditions (i.e., total dissolved solids, pH, salinity, conductivity, heavy metals) dominating the uranium tailings system. Significant correlations among the physicochemical data and the major bacterial groups present in the tailings and MC were also observed. Presence of sulfate reducing bacteria demonstrated by culture-dependent analyses and the dominance of Desulfosporosinus spp. indicated by Ion Torrent analyses within the tailings-water interface suggests the existence of anaerobic microenvironments along with the potential for reductive metabolic processes.
NASA Astrophysics Data System (ADS)
Christie, Dane; Register, Richard; Priestley, Rodney
Interfaces play a determinant role in the size dependence of the glass transition temperature (Tg) of polymers confined to nanometric length scales. Interfaces are intrinsic in diblock copolymers, which, depending on their molecular weight and composition, are periodically nanostructured in the bulk. As a result, diblock copolymers are model systems for characterizing the effect of interfaces on Tg in bulk nanostructured materials. Investigating the effect of intrinsic interfaces on Tg in diblock copolymers has remained unexplored due to their small periodic length scale. By selectively incorporating trace amounts of a fluorescent probe into a diblock copolymer, Tg can be characterized relative to the diblock copolymer's intrinsic interface using fluorescence spectroscopy. Here, pyrene is selectively incorporated into the poly(methyl methacrylate) (PMMA) block of lamellar-forming diblock copolymers of poly(butyl-b-methyl methacrylate) (PBMA-PMMA). Preliminary results show a correlation of Tg as measured by fluorescence with the onset of Tg as measured by calorimetry in labeled homopolymers of PMMA. This result is consistent with previous characterizations of Tg using fluorescence spectroscopy. In selectively labeled diblock copolymers, Tg is found to vary systematically depending on the distance of the probe from the PBMA-PMMA interface. We acknowledge funding from the Princeton Center for Complex Materials, a MRSEC supported by NSF Grant DMR 1420541.
Measurements of scene spectral radiance variability
NASA Astrophysics Data System (ADS)
Seeley, Juliette A.; Wack, Edward C.; Mooney, Daniel L.; Muldoon, Michael; Shey, Shen; Upham, Carolyn A.; Harvey, John M.; Czerwinski, Richard N.; Jordan, Michael P.; Vallières, Alexandre; Chamberland, Martin
2006-05-01
Detection performance of LWIR passive standoff chemical agent sensors is strongly influenced by various scene parameters, such as atmospheric conditions, temperature contrast, concentration-path length product (CL), agent absorption coefficient, and scene spectral variability. Although temperature contrast, CL, and agent absorption coefficient affect the detected signal in a predictable manner, fluctuations in background scene spectral radiance have less intuitive consequences. The spectral nature of the scene is not problematic in and of itself; instead it is spatial and temporal fluctuations in the scene spectral radiance that cannot be entirely corrected for with data processing. In addition, the consequence of such variability is a function of the spectral signature of the agent that is being detected and is thus different for each agent. To bracket the performance of background-limited (low sensor NEDN), passive standoff chemical sensors in the range of relevant conditions, assessment of real scene data is necessary. Currently, such data is not widely available. To begin to span the range of relevant scene conditions, we have acquired high fidelity scene spectral radiance measurements with a Telops FTIR imaging spectrometer. We have acquired data in a variety of indoor and outdoor locations at different times of day and year. Some locations include indoor office environments, airports, urban and suburban scenes, waterways, and forest. We report agent-dependent clutter measurements for three of these backgrounds.
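One common way to make a clutter measure agent-dependent is to project the scene's spectral covariance onto the agent signature, in the spirit of a matched filter; a minimal sketch under that assumption (the scene spectra and signature below are synthetic, and this is not necessarily the metric the authors report):

```python
import numpy as np

def agent_clutter(scene_spectra, agent_signature):
    """Standard deviation of the scene's spectral radiance fluctuations
    along the agent's (unit-normalized) spectral signature.
    scene_spectra: (n_pixels, n_bands); agent_signature: (n_bands,)"""
    b = agent_signature / np.linalg.norm(agent_signature)
    cov = np.cov(scene_spectra, rowvar=False)   # band-by-band covariance
    return float(np.sqrt(b @ cov @ b))

# toy usage: 500 pixels x 64 spectral bands, fake Gaussian agent band
rng = np.random.default_rng(1)
scene = rng.normal(size=(500, 64)) * np.linspace(1.0, 2.0, 64)
sig = np.exp(-0.5 * ((np.arange(64) - 30) / 3.0) ** 2)
print("clutter along agent signature:", agent_clutter(scene, sig))
```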
Mickley Steinmetz, Katherine R; Sturkie, Charlee M; Rochester, Nina M; Liu, Xiaodong; Gutchess, Angela H
2018-07-01
After viewing a scene, individuals differ in what they prioritise and remember. Culture may be one factor that influences scene memory, as Westerners have been shown to be more item-focused than Easterners (see Masuda, T., & Nisbett, R. E. (2001). Attending holistically versus analytically: Comparing the context sensitivity of Japanese and Americans. Journal of Personality and Social Psychology, 81, 922-934). However, cultures may differ in their sensitivity to scene incongruences and emotion processing, which may account for cross-cultural differences in scene memory. The current study uses hierarchical linear modeling (HLM) to examine scene memory while controlling for scene congruency and the perceived emotional intensity of the images. American and East Asian participants encoded pictures that included a positive, negative, or neutral item placed on a neutral background. After a 20-min delay, participants were shown the item and background separately along with similar and new items and backgrounds to assess memory specificity. Results indicated that even when congruency and emotional intensity were controlled, there was evidence that Americans had better item memory than East Asians. Incongruent scenes were better remembered than congruent scenes. However, this effect did not differ by culture. This suggests that Americans' item focus may result in memory changes that are robust despite variations in scene congruency and perceived emotion.
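As a rough sketch of the hierarchical-linear-model setup described (the variable names, the continuous outcome, and random intercepts per participant are simplifying assumptions, not the study's actual specification), using statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical trial-level data: memory score per scene, with culture,
# congruency, and rated emotional intensity as fixed effects and
# participants as the HLM grouping factor
rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "memory": rng.random(n),
    "culture": rng.choice(["US", "EA"], n),
    "congruent": rng.integers(0, 2, n),
    "intensity": rng.normal(3.0, 1.0, n),
    "subject": rng.integers(0, 40, n),
})
model = smf.mixedlm("memory ~ culture + congruent + intensity",
                    df, groups=df["subject"]).fit()
print(model.summary())  # culture effect while controlling covariates
```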
Sanford, Michelle R
2017-01-01
The application of insect and arthropod information to medicolegal death investigations is one of the more exacting applications of entomology. Historically limited to homicide investigations, the integration of full-time forensic entomology services into the medical examiner's office in Harris County has opened up the opportunity to apply entomology to a wide variety of manner of death classifications and types of scenes, and to make observations on a number of different geographical and species-level trends in Harris County, Texas, USA. In this study, a retrospective analysis was made of 203 forensic entomology cases analyzed during the course of medicolegal death investigations performed by the Harris County Institute of Forensic Sciences in Houston, TX, USA from January 2013 through April 2016. These cases included all manner of death classifications, stages of decomposition and a variety of different scene types that were classified into decedents transported from the hospital (typically associated with myiasis or sting allergy; 3.0%), outdoor scenes (32.0%) or indoor scenes (65.0%). Ambient scene air temperature at the time of scene investigation was the only significantly different factor observed between indoor and outdoor scenes, with average indoor scene temperature being slightly cooler (25.2°C) than that observed outdoors (28.0°C). Relative humidity was not found to be significantly different between scene types. Most of the indoor scenes were classified as natural (43.3%) whereas most of the outdoor scenes were classified as homicides (12.3%). All other manner of death classifications came from both indoor and outdoor scenes. Several species were found to be significantly associated with indoor scenes as indicated by a binomial test, including Blaesoxipha plinthopyga (Wiedemann) (Diptera: Sarcophagidae), all Sarcophagidae (including B. plinthopyga), Megaselia scalaris Loew (Diptera: Phoridae), Synthesiomyia nudiseta Wulp (Diptera: Muscidae) and Lucilia cuprina (Wiedemann) (Diptera: Calliphoridae). The only species that was a significant indicator of an outdoor scene was Lucilia eximia (Wiedemann) (Diptera: Calliphoridae). All other insect species that were collected in five or more cases were collected from both indoor and outdoor scenes. A species list with month of collection and basic scene characteristics with the length of the estimated time of colonization is also presented. The data presented here provide valuable casework-related species data for Harris County, TX and nearby areas on the Gulf Coast that can be used to compare to other climate regions with other species assemblages and to assist in identifying new species introductions to the area. This study also highlights the importance of potential sources of uncertainty in preparation and interpretation of forensic entomology reports from different scene types.
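The indoor/outdoor association test described above can be reproduced in outline with a binomial test against the 65% indoor base rate reported in the study; the per-species counts below are illustrative, not the paper's data:

```python
from scipy.stats import binomtest

# Of the cases where a given species was collected, test whether the
# indoor fraction exceeds the 65% base rate of indoor scenes.
indoor_cases, total_cases = 18, 20   # hypothetical counts for one species
result = binomtest(indoor_cases, total_cases, p=0.65,
                   alternative="greater")
print(f"p = {result.pvalue:.4f}")    # small p -> indoor-associated species
```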
Protein interface classification by evolutionary analysis
2012-01-01
Background Distinguishing biologically relevant interfaces from lattice contacts in protein crystals is a fundamental problem in structural biology. Despite efforts towards the computational prediction of interface character, many issues are still unresolved. Results We present here a protein-protein interface classifier that relies on evolutionary data to detect the biological character of interfaces. The classifier uses a simple geometric measure, number of core residues, and two evolutionary indicators based on the sequence entropy of homolog sequences. Both aim at detecting differential selection pressure between interface core and rim or rest of surface. The core residues, defined as fully buried residues (>95% burial), appear to be fundamental determinants of biological interfaces: their number is in itself a powerful discriminator of interface character and together with the evolutionary measures it is able to clearly distinguish evolved biological contacts from crystal ones. We demonstrate that this definition of core residues leads to distinctively better results than earlier definitions from the literature. The stringent selection and quality filtering of structural and sequence data was key to the success of the method. Most importantly we demonstrate that a more conservative selection of homolog sequences - with relatively high sequence identities to the query - is able to produce a clearer signal than previous attempts. Conclusions An evolutionary approach like the one presented here is key to the advancement of the field, which so far was missing an effective method exploiting the evolutionary character of protein interfaces. Its coverage and performance will only improve over time thanks to the incessant growth of sequence databases. Currently our method reaches an accuracy of 89% in classifying interfaces of the Ponstingl 2003 datasets and it lends itself to a variety of useful applications in structural biology and bioinformatics. We made the corresponding software implementation available to the community as an easy-to-use graphical web interface at http://www.eppic-web.org. PMID:23259833
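A toy version of the core-versus-rim entropy indicator follows (the real EPPIC method defines core residues by >95% burial and applies stringent homolog filtering; the alignment, position sets, and the interpretation of the ratio here are illustrative only):

```python
import math
from collections import Counter

def column_entropy(column):
    """Shannon entropy (bits) of one alignment column of residues."""
    counts = Counter(column)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

def mean_entropy(alignment, positions):
    """Average entropy over the selected alignment positions."""
    return sum(column_entropy([seq[i] for seq in alignment])
               for i in positions) / len(positions)

# toy alignment of homolog sequences; positions 0-1 'core', 2-3 'rim'
aln = ["MKLV", "MKIV", "MKLA", "MRLV"]
core, rim = [0, 1], [2, 3]
ratio = mean_entropy(aln, core) / max(mean_entropy(aln, rim), 1e-9)
print("core/rim entropy ratio:", round(ratio, 2))
# a ratio well below 1 (core more conserved than rim) is the kind of
# differential-selection signal that points to a biological interface
```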
Raila, Hannah; Scholl, Brian J; Gruber, June
2015-08-01
Given the many benefits conferred by trait happiness and life satisfaction, a primary goal is to determine how these traits relate to underlying cognitive processes. For example, visual attention acts as a gateway to awareness, raising the question of whether happy and satisfied people attend to (and therefore see) the world differently. Previous work suggests that biases in selective attention are associated with both trait negativity and with positive affect states, but to our knowledge, no previous work has explored whether trait-happy individuals attend to the world differently. Here, we employed eye tracking as a continuous measure of sustained overt attention during passive viewing of displays containing positive and neutral photographs to determine whether selective attention to positive scenes is associated with measures of trait happiness and life satisfaction. Both trait measures were significantly correlated with selective attention for positive (vs. neutral) scenes; this general pattern was robust across several types of positive stimuli (achievement, social, and primary reward) and was not attributable to positive or negative state affect. Such effects were especially prominent during the later phases of sustained viewing. This suggests that people who are happy and satisfied with life may literally see the world in a more positive light, as if through rose-colored glasses. Future work should investigate the causal relationship between such attention biases and one's happiness and life satisfaction. (c) 2015 APA, all rights reserved.
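A minimal sketch of the kind of analysis described, correlating a per-participant dwell-time bias toward positive images with a trait score (the data and the bias definition are illustrative assumptions, not the study's measures):

```python
import numpy as np
from scipy.stats import pearsonr

def positivity_bias(dwell_positive, dwell_neutral):
    """Fraction of total dwell time spent on the positive image of a
    positive/neutral pair display."""
    return dwell_positive / (dwell_positive + dwell_neutral)

# hypothetical per-participant dwell times (s) and questionnaire scores
rng = np.random.default_rng(3)
trait = rng.normal(5.0, 1.0, 50)
dwell_pos = 4.0 + 0.4 * (trait - 5.0) + rng.normal(0, 0.3, 50)
dwell_neu = 4.0 + rng.normal(0, 0.3, 50)
r, p = pearsonr(trait, positivity_bias(dwell_pos, dwell_neu))
print(f"r = {r:.2f}, p = {p:.3f}")   # positive r: happier -> more bias
```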
Distributed and Dynamic Storage of Working Memory Stimulus Information in Extrastriate Cortex
Sreenivasan, Kartik K.; Vytlacil, Jason; D'Esposito, Mark
2015-01-01
The predominant neurobiological model of working memory (WM) posits that stimulus information is stored via stable elevated activity within highly selective neurons. Based on this model, which we refer to as the canonical model, the storage of stimulus information is largely associated with lateral prefrontal cortex (lPFC). A growing number of studies describe results that cannot be fully explained by the canonical model, suggesting that it is in need of revision. In the present study, we directly test key elements of the canonical model. We analyzed functional MRI data collected as participants performed a task requiring WM for faces and scenes. Multivariate decoding procedures identified patterns of activity containing information about the items maintained in WM (faces, scenes, or both). While information about WM items was identified in extrastriate visual cortex (EC) and lPFC, only EC exhibited a pattern of results consistent with a sensory representation. Information in both regions persisted even in the absence of elevated activity, suggesting that elevated population activity may not represent the storage of information in WM. Additionally, we observed that WM information was distributed across EC neural populations that exhibited a broad range of selectivity for the WM items rather than restricted to highly selective EC populations. Finally, we determined that activity patterns coding for WM information were not stable, but instead varied over the course of a trial, indicating that the neural code for WM information is dynamic rather than static. Together, these findings challenge the canonical model of WM. PMID:24392897
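The multivariate decoding procedure can be sketched as cross-validated classification of delay-period activity patterns (the data here are synthetic, and the study's actual classifier and cross-validation scheme are not specified in the abstract):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# hypothetical delay-period patterns: n_trials x n_voxels, labeled by
# the remembered category (0 = face, 1 = scene)
rng = np.random.default_rng(4)
X = rng.normal(size=(120, 500))
y = rng.integers(0, 2, 120)
X[y == 1, :50] += 0.3   # weak signal spread over many voxels, mimicking
                        # a distributed (not highly selective) code

scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
print("decoding accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
# above-chance accuracy indicates the region carries WM item information,
# even if no single feature (voxel) is strongly selective on its own
```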
Modulation of V1 Spike Response by Temporal Interval of Spatiotemporal Stimulus Sequence
Kim, Taekjun; Kim, HyungGoo R.; Kim, Kayeon; Lee, Choongkil
2012-01-01
The spike activity of single neurons of the primary visual cortex (V1) becomes more selective and reliable in response to wide-field natural scenes compared to smaller stimuli confined to the classical receptive field (RF). However, it is largely unknown what aspects of natural scenes increase the selectivity of V1 neurons. One hypothesis is that modulation by surround interaction is highly sensitive to small changes in spatiotemporal aspects of RF surround. Such a fine-tuned modulation would enable single neurons to hold information about spatiotemporal sequences of oriented stimuli, which extends the role of V1 neurons as a simple spatiotemporal filter confined to the RF. In the current study, we examined the hypothesis in the V1 of awake behaving monkeys, by testing whether the spike response of single V1 neurons is modulated by temporal interval of spatiotemporal stimulus sequence encompassing inside and outside the RF. We used two identical Gabor stimuli that were sequentially presented with a variable stimulus onset asynchrony (SOA): the preceding one (S1) outside the RF and the following one (S2) in the RF. This stimulus configuration enabled us to examine the spatiotemporal selectivity of response modulation from a focal surround region. Although S1 alone did not evoke spike responses, visual response to S2 was modulated for SOA in the range of tens of milliseconds. These results suggest that V1 neurons participate in processing spatiotemporal sequences of oriented stimuli extending outside the RF. PMID:23091631
NASA Astrophysics Data System (ADS)
Wang, Xicheng; Gao, Jiaobo; Wu, Jianghui; Li, Jianjun; Cheng, Hongliang
2017-02-01
Recently, hyperspectral image projectors (HIP) have been developed in the field of remote sensing. With its advantages for system-level validation, target detection, and hyperspectral image calibration, HIP has great potential in military, medical, commercial, and other applications. HIP is based on the digital micro-mirror device (DMD) and projection technology, and is capable of projecting arbitrary programmable spectra (controlled by a PC) into each pixel of the IUT (instrument under test), such that the projected image simulates realistic scenes that a hyperspectral imager would measure during its use, enabling system-level performance testing and validation. In this paper, we built a visible hyperspectral image projector, also called a visible target simulator, with double DMDs: the first DMD produces the selected monochromatic light in the wavelength range of 410 to 720 nm, and this light is relayed to the second DMD. A computer loads an image of a realistic scene onto the second DMD, so that the target and background are projected by the second DMD with the selected monochromatic light. The target condition can thus be simulated, and the experiment controlled and repeated in the lab, allowing detector instruments to be tested in the laboratory. For the moment, we focus on the spectral engine design, including the optical system, the DMD programmable spectrum, and the spectral resolution of the selected spectrum. Details are presented.
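The spectral engine's task of composing a programmable spectrum from DMD-selectable bands can be posed as a nonnegative least-squares problem; a minimal sketch (the Gaussian basis bands and target spectrum below are stand-ins, not the real engine's measured bands):

```python
import numpy as np
from scipy.optimize import nnls

# columns of A: narrowband spectra selectable by the first DMD
wavelengths = np.linspace(410, 720, 311)           # nm, 1 nm steps
centers = np.arange(415, 720, 10)                  # 31 band centers
A = np.exp(-0.5 * ((wavelengths[:, None] - centers[None, :]) / 6.0) ** 2)

# target: a broad spectrum the projector should reproduce
target = np.exp(-0.5 * ((wavelengths - 550.0) / 40.0) ** 2)

# nonnegative band weights (mirror duty cycles cannot be negative)
weights, residual = nnls(A, target)
print("bands used:", int((weights > 1e-3).sum()),
      "residual:", round(float(residual), 4))
```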
Anticipation in Real-World Scenes: The Role of Visual Context and Visual Memory.
Coco, Moreno I; Keller, Frank; Malcolm, George L
2016-11-01
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object-based visual indices. Copyright © 2015 Cognitive Science Society, Inc.
Interface Pattern Selection Criterion for Cellular Structures in Directional Solidification
NASA Technical Reports Server (NTRS)
Trivedi, R.; Tewari, S. N.; Kurtze, D.
1999-01-01
The aim of this investigation is to establish key scientific concepts that govern the selection of cellular and dendritic patterns during the directional solidification of alloys. We shall first address scientific concepts that are crucial in the selection of interface patterns. Next, the results of ground-based experimental studies in the Al-4.0 wt % Cu system will be described. Both experimental studies and theoretical calculations will be presented to establish the need for microgravity experiments.
Guaranteeing Failsafe Operation of Extended-Scene Shack-Hartmann Wavefront Sensor Algorithm
NASA Technical Reports Server (NTRS)
Sidick, Erikin
2009-01-01
A Shack-Hartmann sensor (SHS) is an optical instrument consisting of a lenslet array and a camera. It is widely used for wavefront sensing in optical testing and astronomical adaptive optics. The camera is placed at the focal point of the lenslet array and points at a star or any other point source. The image captured is an array of spot images. When the wavefront error at the lenslet array changes, the position of each spot measurably shifts from its original position. Determining the shifts of the spot images from their reference points shows the extent of the wavefront error. An adaptive cross-correlation (ACC) algorithm has been developed to use scenes as well as point sources for wavefront error detection. Qualifying an extended scene image is often not an easy task due to changing conditions in scene content, illumination level, background, Poisson noise, read-out noise, dark current, sampling format, and field of view. The proposed new technique, based on the ACC algorithm, analyzes the effects of these conditions on the performance of the ACC algorithm and determines the viability of an extended scene image. If it is viable, then it can be used for error correction; if it is not, the image fails and will not be further processed. By potentially testing for a wide variety of conditions, the algorithm's accuracy can be virtually guaranteed. In a typical application, the ACC algorithm finds image shifts of more than 500 Shack-Hartmann camera sub-images relative to a reference sub-image or cell when performing one wavefront sensing iteration. In the proposed new technique, a pair of test and reference cells is selected from the same frame, preferably from two well-separated locations. The test cell is shifted by an integer number of pixels, say from m = -5 to 5 along the x-direction, by choosing a different area on the same sub-image, and the shifts are estimated using the ACC algorithm. The same is done in the y-direction. If the resulting shift estimate errors are less than a pre-determined threshold (e.g., 0.03 pixel), the image is accepted. Otherwise, it is rejected.
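A minimal sketch of this self-test logic follows; the shift estimator here is a plain FFT cross-correlation stand-in for the ACC algorithm itself, and an exact integer-match check replaces the subpixel 0.03-pixel threshold:

```python
import numpy as np

def estimate_shift(ref, test):
    """Integer-pixel shift estimate via circular FFT cross-correlation
    (a stand-in for the subpixel ACC estimator)."""
    xc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(test))).real
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    # map peak indices to signed shifts
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, xc.shape))

def cell_is_viable(frame, y, x, size=32, shifts=range(-5, 6)):
    """Self-test: carve a reference window from the frame, displace the
    test window by known integer amounts along x, and require the
    estimated shift to match the applied one for every displacement."""
    ref = frame[y:y + size, x:x + size]
    return all(
        estimate_shift(ref, frame[y:y + size, x + m:x + m + size])[1] == m
        for m in shifts)

# toy usage on a random extended-scene frame
frame = np.random.rand(128, 128)
print(cell_is_viable(frame, 40, 40))   # True if the cell passes
```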
GeoCrystal: graphic-interactive access to geodata archives
NASA Astrophysics Data System (ADS)
Goebel, Stefan; Haist, Joerg; Jasnoch, Uwe
2002-03-01
Considerable effort has recently been spent on establishing information systems and global infrastructures that enable data suppliers to describe their data (eCommerce, metadata) and users to find appropriate data. Examples of this are metadata information systems, online shops, and portals for geodata. The main disadvantage of existing approaches is the lack of methods and mechanisms for leading users to (e.g., spatial) data archives. This concerns usability and personalization in general, as well as visual feedback techniques in the different steps of the information retrieval process. Several approaches aim at improving graphical user interfaces by using intuitive metaphors, but only some of them offer 3D interfaces in the form of information landscapes or geographic result scenes in the context of information systems for geodata. This paper presents GeoCrystal, whose basic idea is to adopt Venn diagrams to compose complex queries and to visualize search results in a 3D information and navigation space for geodata. These concepts are enhanced with spatial metaphors and 3D information landscapes (a library for geodata) in which users can specify searches for appropriate geodata and interact graphically with the search results (book metaphor).
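Computationally, the Venn-diagram idea at the heart of GeoCrystal reduces to set algebra over result sets, assuming each criterion has already been resolved to a set of record IDs. The sketch below is purely illustrative; the record IDs and criterion names are invented, and GeoCrystal's own data model is not reproduced.

```python
# Venn-style query composition as set algebra over a geodata catalogue.
by_region = {"rec-001", "rec-002", "rec-007"}   # datasets covering a region
by_theme  = {"rec-002", "rec-007", "rec-013"}   # land-cover datasets
by_epoch  = {"rec-002", "rec-013"}              # acquired in a given epoch

# Each region of the three-set Venn diagram is a set expression; the user
# composes it graphically and the system evaluates it.
all_three   = by_region & by_theme & by_epoch   # all criteria satisfied
wrong_epoch = (by_region & by_theme) - by_epoch # right place/theme, wrong epoch
any_match   = by_region | by_theme | by_epoch   # union of all criteria
print(sorted(all_three), sorted(wrong_epoch), sorted(any_match))
```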
Distributed and collaborative synthetic environments
NASA Technical Reports Server (NTRS)
Bajaj, Chandrajit L.; Bernardini, Fausto
1995-01-01
Fast graphics workstations and increased computing power, together with improved interface technologies, have created new and diverse possibilities for developing and interacting with synthetic environments. A synthetic environment system is generally characterized by input/output devices that constitute the interface between the human senses and the synthetic environment generated by the computer; and a computation system running a real-time simulation of the environment. A basic need of a synthetic environment system is that of giving the user a plausible reproduction of the visual aspect of the objects with which he is interacting. The goal of our Shastra research project is to provide a substrate of geometric data structures and algorithms which allow the distributed construction and modification of the environment, efficient querying of objects attributes, collaborative interaction with the environment, fast computation of collision detection and visibility information for efficient dynamic simulation and real-time scene display. In particular, we address the following issues: (1) A geometric framework for modeling and visualizing synthetic environments and interacting with them. We highlight the functions required for the geometric engine of a synthetic environment system. (2) A distribution and collaboration substrate that supports construction, modification, and interaction with synthetic environments on networked desktop machines.
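A hypothetical interface for the geometric engine described in point (1) might look as follows; the class and method names are illustrative assumptions and are not taken from the Shastra codebase.

```python
from abc import ABC, abstractmethod

class GeometricEngine(ABC):
    """Sketch of the functions a synthetic-environment geometric engine
    must expose, per the requirements listed in the abstract."""

    @abstractmethod
    def add_object(self, obj_id: str, mesh) -> None:
        """Distributed construction: insert an object into the shared scene."""

    @abstractmethod
    def modify_object(self, obj_id: str, transform) -> None:
        """Collaborative modification, broadcast to all participants."""

    @abstractmethod
    def query_attributes(self, obj_id: str) -> dict:
        """Efficient querying of an object's attributes."""

    @abstractmethod
    def collisions(self, obj_id: str) -> list[str]:
        """Fast collision detection for dynamic simulation."""

    @abstractmethod
    def visible_set(self, camera) -> list[str]:
        """Visibility information for real-time scene display."""
```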
Working research codes into fluid dynamics education: a science gateway approach
NASA Astrophysics Data System (ADS)
Mason, Lachlan; Hetherington, James; O'Reilly, Martin; Yong, May; Jersakova, Radka; Grieve, Stuart; Perez-Suarez, David; Klapaukh, Roman; Craster, Richard V.; Matar, Omar K.
2017-11-01
Research codes are effective for illustrating complex concepts in educational fluid dynamics courses: compared to textbook examples, an interactive three-dimensional visualisation can bring a problem to life! Various barriers, however, prevent the adoption of research codes in teaching: codes are typically created for highly-specific `once-off' calculations and, as such, have no user interface and a steep learning curve. Moreover, a code may require access to high-performance computing resources that are not readily available in the classroom. This project allows academics to rapidly work research codes into their teaching via a minimalist `science gateway' framework. The gateway is a simple, yet flexible, web interface allowing students to construct and run simulations, as well as view and share their output. Behind the scenes, the common operations of job configuration, submission, monitoring and post-processing are customisable at the level of shell scripting. In this talk, we demonstrate the creation of an example teaching gateway connected to the Code BLUE fluid dynamics software. Student simulations can be run via a third-party cloud computing provider or a local high-performance cluster. EPSRC, UK, MEMPHIS program Grant (EP/K003976/1), RAEng Research Chair (OKM).
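A minimal sketch of the job-lifecycle pattern described above, in which the gateway delegates each stage to a site-customisable shell script; the hook names, paths, and parameter keys are illustrative assumptions, not the actual gateway's API.

```python
import json
import pathlib
import subprocess

STAGES = ["configure", "submit", "monitor", "postprocess"]

def run_job(params: dict, workdir: str, hooks: str = "./hooks") -> None:
    """Write the student's settings, then run each lifecycle stage."""
    wd = pathlib.Path(workdir)
    wd.mkdir(parents=True, exist_ok=True)
    (wd / "params.json").write_text(json.dumps(params))
    for stage in STAGES:
        # Each stage is a plain shell script, so operators can retarget the
        # gateway at a cloud provider or a local HPC cluster by editing it.
        subprocess.run([f"{hooks}/{stage}.sh", str(wd)], check=True)

# Example (requires the four hook scripts to exist):
# run_job({"reynolds": 100, "mesh": "coarse"}, "runs/job-001")
```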
Crime scene units: a look to the future
NASA Astrophysics Data System (ADS)
Baldwin, Hayden B.
1999-02-01
The scientific examination of physical evidence is well recognized as a critical element in conducting successful criminal investigations and prosecutions. The forensic science field is an ever-changing discipline. With the arrival of DNA analysis, new processing techniques for latent prints, portable lasers, and electrostatic dust print lifters, the training of evidence technicians has become more important than ever. These scientific and technological breakthroughs have made it possible to collect and analyze physical evidence in ways that were never possible before. The problem arises with the collection of physical evidence from the crime scene, not with the analysis of the evidence. The need for specialized units to process all crime scenes is imperative. These specialized units, called crime scene units, should be trained and equipped to handle all forms of crime scenes. The crime scene units would have the capability to professionally evaluate and collect pertinent physical evidence from crime scenes.
Physics Based Modeling and Rendering of Vegetation in the Thermal Infrared
NASA Technical Reports Server (NTRS)
Smith, J. A.; Ballard, J. R., Jr.
1999-01-01
We outline a procedure for rendering physically-based thermal infrared images of simple vegetation scenes. Our approach incorporates the biophysical processes that affect the temperature distribution of the elements within a scene. Computer graphics plays a key role in two respects: first, in computing the distribution of shaded and sunlit facets in the scene and, second, in the final image rendering once the temperatures of all the elements in the scene have been computed. We illustrate our approach for a simple corn scene whose three-dimensional geometry is constructed from measured morphological attributes of the row crop. Statistical methods are used to construct a representation of the scene in agreement with the measured characteristics. The results are encouraging: the rendered images exhibit realistic directional behavior as a function of view and sun angle, and the root-mean-square error between measured and predicted brightness temperatures for the scene was 2.1 deg C.
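As a schematic sketch of the final rendering step only: once each scene facet has been assigned a temperature by the (here stubbed) energy-balance model, its thermal-infrared radiance follows from Planck's law. The temperatures and wavelength below are illustrative, not values from the paper.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann

def planck_radiance(wavelength_m: float, temp_k: float) -> float:
    """Spectral radiance (W m^-2 sr^-1 m^-1) of a blackbody."""
    a = 2 * H * C**2 / wavelength_m**5
    return a / (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

def facet_temperature(sunlit: bool) -> float:
    # Stub: the paper drives this with biophysical energy-balance modelling;
    # here sunlit facets are simply assumed warmer than shaded ones.
    return 305.0 if sunlit else 298.0  # kelvin

# Render a toy two-facet "scene" at 10 micrometres (thermal infrared).
for name, sunlit in [("sunlit leaf", True), ("shaded leaf", False)]:
    L = planck_radiance(10e-6, facet_temperature(sunlit))
    print(f"{name}: {L:.3e} W m^-2 sr^-1 m^-1")
```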
Visual memory for moving scenes.
DeLucia, Patricia R; Maldia, Maria M
2006-02-01
In the present study, memory for picture boundaries was measured with scenes that simulated self-motion along the depth axis. The results indicated that boundary extension (a distortion in memory for picture boundaries) occurred with moving scenes in the same manner as that reported previously for static scenes. Furthermore, motion affected memory for the boundaries, but this effect was not consistent with representational momentum of the self (memory being farther forward along the motion trajectory than actually shown). We also found that memory for the final position of the depicted self in a moving scene was influenced by properties of the optical expansion pattern. The results are consistent with a conceptual framework in which the mechanisms that underlie boundary extension and representational momentum (a) process different information and (b) both contribute to the integration of successive views of a scene while the scene is changing.
Short report: the effect of expertise in hiking on recognition memory for mountain scenes.
Kawamura, Satoru; Suzuki, Sae; Morikawa, Kazunori
2007-10-01
The nature of an expert memory advantage that does not depend on stimulus structure or chunking was examined, using stimuli that are more ecologically valid, and an activity that is more natural, than in previously studied domains. Do expert hikers and novice hikers see and remember mountain scenes differently? In the present experiment, 18 novice hikers and 17 expert hikers were presented with 60 photographs of scenes from hiking trails. These scenes differed in the degree to which functional aspects implied action possibilities or dangers. The recognition test revealed that the memory performance of experts was significantly superior to that of novices for scenes with highly functional aspects. Memory performance for scenes with few functional aspects did not differ between novices and experts. These results suggest that experts pay more attention to, and thus remember better, scenes with functional meanings than do novices.
NASA Astrophysics Data System (ADS)
Banon, J.-P.; Hetland, Ø. S.; Simonsen, I.
2018-02-01
By the use of both perturbative and non-perturbative solutions of the reduced Rayleigh equation, we present a detailed study of the scattering of light from two-dimensional weakly rough dielectric films. It is shown that for several rough film configurations, Selényi interference rings exist in the diffusely scattered light. For film systems supported by dielectric substrates where only one of the two interfaces of the film is weakly rough and the other planar, Selényi interference rings are observed at angular positions that can be determined from simple phase arguments. For such single-rough-interface films, we find, and explain with a single-scattering model, that the contrast in the interference patterns is better when the top interface of the film (the interface facing the incident light) is rough than when the bottom interface is rough. When both film interfaces are rough, Selényi interference rings exist, but a potential cross-correlation of the two rough interfaces of the film can be used to selectively enhance some of the interference rings while others are attenuated and might even disappear. This feature may in principle be used in determining the correlation properties of interfaces of films that otherwise would be difficult to access.
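The "simple phase arguments" invoked above are, in essence, thin-film interference conditions. As a schematic sketch (neglecting phase changes on reflection), light scattered diffusely inside a film of thickness $d$ and refractive index $n$ interferes constructively along rings where

$$ 2 n d \cos\theta_t = m\lambda, \qquad \sin\theta_s = n\,\sin\theta_t, $$

with $\theta_t$ the propagation angle inside the film, $\theta_s$ the polar scattering angle in vacuum, $\lambda$ the vacuum wavelength, and $m$ an integer. Because this condition involves only the scattering angle and not the angle of incidence, the rings survive in the diffusely scattered light.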
NASA Astrophysics Data System (ADS)
Sun, Yuxing
2018-05-01
In this paper, a grey prediction model is used to forecast carbon emissions in Hebei province, and an impact analysis model based on TermCo2 is established. We also survey the literature on computable general equilibrium (CGE) modelling and study how to construct scenarios, how to select the key parameters, and how to carry out sensitivity analysis of the application scenarios, providing a reference for industry.
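For context, the standard GM(1,1) grey prediction model, of the kind commonly applied to such emission series, can be sketched in a few lines. Whether the paper uses exactly this variant is an assumption, and the input figures below are made up.

```python
import numpy as np

def gm11_forecast(x0, steps: int) -> np.ndarray:
    """Fit GM(1,1) to the series x0 and forecast `steps` further values."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                 # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                      # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey parameters
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    x0_hat = np.empty_like(x1_hat)
    x0_hat[0], x0_hat[1:] = x0[0], np.diff(x1_hat)     # de-accumulate
    return x0_hat[len(x0):]                            # forecast steps only

# Illustrative (made-up) annual emission figures, forecast three years ahead.
print(gm11_forecast([6.8, 7.1, 7.5, 7.8, 8.0], steps=3))
```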
Mission Specialist (MS) Fabian sleeps on middeck
1983-06-24
STS007-26-1439 (18-24 June 1983) --- Astronaut John M. Fabian, STS-7 mission specialist, is captured with a 35mm camera at his sleep station in the middeck of the Earth-orbiting space shuttle Challenger. This scene was selected by the five-member astronaut crew for showing at its July 1, 1983 Post Flight Press Conference (PFPC) at the Johnson Space Center's (JSC) main auditorium. Photo credit: NASA
NASA Technical Reports Server (NTRS)
Mungas, Greg S.; Gursel, Yekta; Sepulveda, Cesar A.; Anderson, Mark; La Baw, Clayton; Johnson, Kenneth R.; Deans, Matthew; Beegle, Luther; Boynton, John
2008-01-01
Conducting high-resolution field microscopy with coupled laser spectroscopy that can be used to selectively analyze the surface chemistry of individual pixels in a scene is an enabling capability for next-generation robotic and manned spaceflight missions as well as civil and military applications. In the laboratory, we use a range of imaging and surface-preparation tools that provide us with in-focus images, context imaging for identifying features that we want to investigate at high magnification, and surface-optical coupling that allows us to apply optical spectroscopic techniques for analyzing surface chemistry, particularly at high magnifications. The camera, hand lens, and microscope probe with scannable laser spectroscopy (CHAMP-SLS) is an imaging/spectroscopy instrument capable of imaging continuously from infinity down to high-resolution microscopy (approximately 1 micron/pixel in the final camera format); the closer CHAMP-SLS is placed to a feature, the higher the resulting magnification. At hand-lens to microscopic magnifications, the imaged scene can be selectively interrogated with point spectroscopic techniques such as Raman spectroscopy, microscopic laser-induced breakdown spectroscopy (micro-LIBS), laser ablation mass spectrometry, fluorescence spectroscopy, and/or reflectance spectroscopy. This paper summarizes the optical design, development, and testing of the CHAMP-SLS optics.
Emergence of neural encoding of auditory objects while listening to competing speakers
Ding, Nai; Simon, Jonathan Z.
2012-01-01
A visual scene is perceived in terms of visual objects. Similar ideas have been proposed for the analogous case of auditory scene analysis, although their hypothesized neural underpinnings have not yet been established. Here, we address this question by recording from subjects selectively listening to one of two competing speakers, of either different or the same sex, using magnetoencephalography. Individual neural representations are seen for the speech of the two speakers, each selectively phase-locked to the rhythm of the corresponding speech stream, and from each the temporal envelope of that speech stream can be exclusively reconstructed. The neural representation of the attended speech dominates the responses (with latency near 100 ms) in posterior auditory cortex. Furthermore, when the intensity of the attended and background speakers is separately varied over an 8-dB range, the neural representation of the attended speech adapts only to the intensity of that speaker, not to the intensity of the background speaker, suggesting an object-level intensity gain control. In summary, these results indicate that concurrent auditory objects, even if spectrotemporally overlapping and not resolvable at the auditory periphery, are neurally encoded individually in auditory cortex and emerge as fundamental representational units for top-down attentional modulation and bottom-up neural adaptation. PMID:22753470
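The envelope-reconstruction result invites a sketch of the generic technique behind such statements: linear stimulus reconstruction by regularized regression from time-lagged sensor data. The lag count, regularization strength, and toy data below are illustrative assumptions, not the study's actual analysis parameters.

```python
import numpy as np

def build_lagged(X: np.ndarray, lags: int) -> np.ndarray:
    """Stack time-lagged copies of the channels: (T, n_ch) -> (T, n_ch*lags)."""
    T, n = X.shape
    Z = np.zeros((T, n * lags))
    for L in range(lags):
        Z[L:, L * n:(L + 1) * n] = X[:T - L]
    return Z

def fit_decoder(X, envelope, lags=25, ridge=1e3):
    """Ridge regression from lagged neural data to the speech envelope."""
    Z = build_lagged(X, lags)
    A = Z.T @ Z + ridge * np.eye(Z.shape[1])
    return np.linalg.solve(A, Z.T @ envelope)

def reconstruct(X, w, lags=25):
    return build_lagged(X, lags) @ w

# Toy usage: a 157-channel recording, 10 s at 100 Hz (synthetic numbers).
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 157))
env = rng.standard_normal(1000)
w = fit_decoder(X, env)
print(np.corrcoef(reconstruct(X, w), env)[0, 1])  # reconstruction accuracy
```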
Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization
NASA Technical Reports Server (NTRS)
Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.
2012-01-01
The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera pose for each image, and a table of features matched pair-wise across frames. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed in to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates the feature lists across all frames, manages the process of promoting selected features to landmarks, iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
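The alternating refinement loop described above can be sketched as follows; `triangulate` is a simple least-squares stand-in (the flight code's "optimal ray projection method" is not reproduced), `refine_poses` is stubbed, and the data layout is an illustrative assumption.

```python
import numpy as np

def triangulate(rays):
    """Least-squares 3D point closest to rays given as (origin, unit_dir)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for origin, direction in rays:
        P = np.eye(3) - np.outer(direction, direction)  # reject ray direction
        A += P
        b += P @ origin
    return np.linalg.solve(A, b)

def refine_poses(poses, points, observations):
    """Stub: a real implementation would minimise the reprojection error of
    the current landmark positions in each frame (e.g., a PnP solve)."""
    return poses

def build_landmarks(observations, poses, n_iters=10):
    """observations: {landmark_id: [(frame_id, world_ray_direction), ...]}
    poses: {frame_id: camera_position}, orientation folded into the rays."""
    points = {}
    for _ in range(n_iters):
        # 1. Fix the poses; solve each landmark's position from its rays.
        for lid, obs in observations.items():
            points[lid] = triangulate([(poses[f], d) for f, d in obs])
        # 2. Fix the landmarks; re-estimate the camera poses.
        poses = refine_poses(poses, points, observations)
    return points, poses
```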
The cognitive processing of film and musical soundtracks.
Boltz, Marilyn G
2004-10-01
Previous research has demonstrated that musical soundtracks can influence the interpretation, emotional impact, and remembering of film information. The intent here was to examine how music is encoded into the cognitive system and subsequently represented relative to its accompanying visual action. In Experiment 1, participants viewed a set of music/film clips that were either congruent or incongruent in their emotional affects. Selective attending was also systematically manipulated by instructing viewers to attend to and remember the music, film, or both in tandem. The results from tune recognition, film recall, and paired discrimination tasks collectively revealed that mood-congruent pairs lead to a joint encoding of music/film information as well as an integrated memory code. Incongruent pairs, on the other hand, result in an independent encoding in which a given dimension, music or film, is only remembered well if it was selectively attended to at the time of encoding. Experiment 2 extended these findings by showing that tunes from mood-congruent pairs are better recognized when cued by their original scenes, while those from incongruent pairs are better remembered in the absence of scene information. These findings both support and extend the "Congruence Associationist Model" (A. J. Cohen, 2001), which addresses those cognitive mechanisms involved in the processing of music/film information.