Science.gov

Sample records for actual scene note

  1. Exocentric direction judgements in computer-generated displays and actual scenes

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Smith, Stephen; Mcgreevy, Michael W.; Grunwald, Arthur J.

    1989-01-01

    One of the most remarkable perceptual properties of common experience is that the perceived shapes of known objects are constant despite movements about them which transform their projections on the retina. This perceptual ability is one aspect of shape constancy (Thouless, 1931; Metzger, 1953; Borresen and Lichte, 1962). It requires that the viewer be able to sense and discount his or her relative position and orientation with respect to a viewed object. This discounting of relative position may be derived directly from the ranging information provided from stereopsis, from motion parallax, from vestibularly sensed rotation and translation, or from corollary information associated with voluntary movement. It is argued that: (1) errors in exocentric judgements of the azimuth of a target generated on an electronic perspective display are not viewpoint-independent, but are influenced by the specific geometry of their perspective projection; (2) elimination of binocular conflict by replacing electronic displays with actual scenes eliminates a previously reported equidistance tendency in azimuth error, but the viewpoint dependence remains; (3) the pattern of exocentrically judged azimuth error in real scenes viewed with a viewing direction depressed 22 deg and rotated + or - 22 deg with respect to a reference direction could not be explained by overestimation of the depression angle, i.e., a slant overestimation.

  2. Noted

    ERIC Educational Resources Information Center

    Nunberg, Geoffrey

    2013-01-01

    Considering how much attention people lavish on the technologies of writing--scroll, codex, print, screen--it's striking how little they pay to the technologies for digesting and regurgitating it. One way or another, there's no sector of the modern world that is not saturated with note-taking--the bureaucracy, the liberal professions, the…

  3. Automatic pitching scene archiving system for video indexing support

    NASA Astrophysics Data System (ADS)

    Shono, Yuki; Aoki, Yoshimitsu

    2004-10-01

    Content-based scene indexing has become an important technique for effective handling of video content, such as scene retrieval and editing. The standard multimedia content descriptor (MPEG-7) has been proposed for key scene indexing. For automatic scene indexing, audio-visual features are the most important clues, and many methods based on such features have been proposed. In this paper, we propose an automatic key scene detection method for baseball video content using video features. We regard pitching scenes as key scenes because they are the starting points of all baseball play scenes. Once pitching scenes are detected, they provide effective hints for detecting other scenes. In addition, a pitching-scene digest video can easily be edited by gathering the automatically extracted scenes; such a digest is useful data for pitching analysis. We extract pitching scenes using color, domain, and motion templates created from manually selected pitching scene samples. These templates contain image features unique to pitching scenes. Template matching is applied to the video stream, and target scenes are detected by thresholding the calculated matching rate. We experimentally test our method on actual baseball video content; it can provide useful data for pitching analysis and for editing digest news broadcasts. We are also developing a video indexing support system in which users can attach text annotations to indexed scenes using MPEG-7 format descriptors.
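    The template-matching step described above can be sketched in Python. This is a hypothetical simplification, not the paper's implementation: grayscale frames as NumPy arrays, a single template, and a fixed matching-rate threshold stand in for the paper's color, domain, and motion templates.

    ```python
    import numpy as np

    def match_rate(frame, template):
        """Normalized cross-correlation between a frame and a template image.

        Values near 1 indicate a close match; 0 is returned for flat frames."""
        f = frame.astype(float) - frame.mean()
        t = template.astype(float) - template.mean()
        denom = np.sqrt((f * f).sum() * (t * t).sum())
        return float((f * t).sum() / denom) if denom else 0.0

    def detect_key_scenes(frames, template, threshold=0.8):
        """Return indices of frames whose matching rate reaches the threshold."""
        return [i for i, frame in enumerate(frames)
                if match_rate(frame, template) >= threshold]
    ```

    Thresholding the matching rate, as in the abstract, then reduces to keeping the frame indices returned by `detect_key_scenes`.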

  4. Three-dimensional imaging in crime scene investigations

    NASA Astrophysics Data System (ADS)

    Baldwin, Hayden B.

    1999-02-01

    Law enforcement is responsible for investigating crimes, identifying and arresting suspects, and presenting evidence to a judge and jury in court. In order to perform these duties objectively, police need to gather accurate information and clearly explain the crime scene and physical evidence in a court of law. Part of this information is the documentation of the incident, which has traditionally been divided into three categories: notes, sketches, and photographs. This method of recording crime scenes has been the standard for years. The major drawback, however, is that the visual documents, sketches and photographs, are two-dimensional. This greatly restricts the actual visualization of the incident, requiring careful cross-referencing of the details in order to understand it.

  5. Diacria Scene

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image provides a representative view of the vast martian northern plains in the Diacria region near 52.8°N, 184.7°W. This is what the plains looked like in late northern spring in August 2004, after the seasonal winter frost had sublimed away and dust devils began to leave dark streaks on the surface. Many of the dark dust devil streaks in this image are concentrated near a low mound -- the location of a shallowly-filled and buried impact crater. The picture covers an area about 3 km (1.9 mi) wide. Sunlight illuminates the scene from the lower left.

  6. Constructing, Perceiving, and Maintaining Scenes: Hippocampal Activity and Connectivity

    PubMed Central

    Zeidman, Peter; Mullally, Sinéad L.; Maguire, Eleanor A.

    2015-01-01

    In recent years, evidence has accumulated to suggest the hippocampus plays a role beyond memory. A strong hippocampal response to scenes has been noted, and patients with bilateral hippocampal damage cannot vividly recall scenes from their past or construct scenes in their imagination. There is debate about whether the hippocampus is involved in the online processing of scenes independent of memory. Here, we investigated the hippocampal response to visually perceiving scenes, constructing scenes in the imagination, and maintaining scenes in working memory. We found extensive hippocampal activation for perceiving scenes, and a circumscribed area of anterior medial hippocampus common to perception and construction. There was significantly less hippocampal activity for maintaining scenes in working memory. We also explored the functional connectivity of the anterior medial hippocampus and found significantly stronger connectivity with a distributed set of brain areas during scene construction compared with scene perception. These results increase our knowledge of the hippocampus by identifying a subregion commonly engaged by scenes, whether perceived or constructed, by separating scene construction from working memory, and by revealing the functional network underlying scene construction, offering new insights into why patients with hippocampal lesions cannot construct scenes. PMID:25405941

  7. Analyzing crime scene videos

    NASA Astrophysics Data System (ADS)

    Cunningham, Cindy C.; Peloquin, Tracy D.

    1999-02-01

    Since late 1996 the Forensic Identification Services Section of the Ontario Provincial Police has been actively involved in state-of-the-art image capture and the processing of video images extracted from crime scene videos. The benefits and problems of this technology for video analysis are discussed. All analysis is being conducted on SUN Microsystems UNIX computers, networked to a digital disk recorder that is used for video capture. The primary advantage of this system over traditional frame grabber technology is reviewed. Examples from actual cases are presented and the successes and limitations of this approach are explored. Suggestions to companies implementing security technology plans for various organizations (banks, stores, restaurants, etc.) will be made. Future directions for this work and new technologies are also discussed.

  8. Hydrological AnthropoScenes

    NASA Astrophysics Data System (ADS)

    Cudennec, Christophe

    2016-04-01

    The Anthropocene concept encapsulates the planetary-scale changes resulting from accelerating socio-ecological transformations, beyond the stratigraphic definition currently under debate. The emergence of multi-scale and proteiform complexity requires interdisciplinary and systems approaches. Yet, to reduce the cognitive challenge of tackling this complexity, the global Anthropocene syndrome must now be studied from various topical points of view and grounded at regional and local levels. A systems approach should make it possible to identify AnthropoScenes, i.e., settings where a socio-ecological transformation subsystem is clearly coherent within boundaries and displays explicit relationships with neighbouring or remote scenes and within a nesting architecture. Hydrology is a key topical point of view to explore, as it is important in many aspects of the Anthropocene, whether through water itself as a resource, hazard or transport force, or through the network, connectivity, interface, teleconnection, emergence and scaling issues it determines. We schematically exemplify these aspects with three contrasting hydrological AnthropoScenes in Tunisia, France and Iceland, and reframe therein concepts of the hydrological change debate.

  9. Infant death scene investigation.

    PubMed

    Tabor, Pamela D; Ragan, Krista

    2015-01-01

    The sudden unexpected death of an infant is a tragedy to the family, a concern to the community, and an indicator of national health. To accurately determine the cause and manner of the infant's death, a thorough and accurate death scene investigation by properly trained personnel is key. Funding and resources are directed based on autopsy reports, which are only as accurate as the scene investigation. The investigation should include a standardized format, body diagrams, and a photographed or videotaped scene recreation utilizing doll reenactment. Forensic nurses, with their basic nursing knowledge and additional forensic skills and abilities, are optimally suited to conduct infant death scene investigations as well as to train others to properly conduct them. Currently, 49 states have child death review teams, which provide an ideal avenue for forensic nurses to become involved in death scene investigations.

  10. Scene analysis without spectral analysis?

    NASA Astrophysics Data System (ADS)

    de Cheveigne, Alain

    2003-04-01

    Auditory scene analysis is often described in terms of grouping stimulus components. Components, once grouped, are assigned to one source or another [A. S. Bregman, Auditory Scene Analysis (MIT, Cambridge, MA, 2002)]. The actual grouping must operate on whatever representation is available within the auditory nervous system. An obvious hypothesis is that correlates of individual stimulus components are created by peripheral spectral analysis. However, peripheral frequency resolution is limited. The number of resolved partials is between 5 and 8 for a harmonic complex in isolation, but resolution must necessarily be poorer for the interleaved components of concurrent sources. Source amplitudes are rarely equal, and partials of a weaker source must be particularly hard to resolve. The question is thus: given the paucity of resolved elements to group, how does the auditory system perform the grouping? A number of possibilities will be reviewed. One is that partials not resolved peripherally are somehow resolved centrally (a modern version of the "second filter" hypothesis). Another is that scene analysis does not operate by grouping resolved elements, but instead by directly modifying unresolved entities, for example by time-domain processing.

  11. The etiological significance of the primal scene in perversions.

    PubMed

    Peto, A

    1975-01-01

    The etiological significance of the actually observed primal scene in fetishism and other perversions is discussed. The impact of the primal scene on the pathology of part object relationships, self and object image, and on the development of superego structures in perversion is stressed.

  12. Underwater Scene Composition

    ERIC Educational Resources Information Center

    Kim, Nanyoung

    2009-01-01

    In this article, the author describes an underwater scene composition for elementary-education majors. This project deals with watercolor with crayon or oil-pastel resist (medium); the beauty of nature represented by fish in the underwater scene (theme); texture and pattern (design elements); drawing simple forms (drawing skill); and composition…

  13. Navigating the auditory scene: an expert role for the hippocampus

    PubMed Central

    Teki, Sundeep; Kumar, Sukhbinder; von Kriegstein, Katharina; Stewart, Lauren; Lyness, C. Rebecca; Moore, Brian C. J.; Capleton, Brian; Griffiths, Timothy D.

    2012-01-01

    Over a typical career piano tuners spend tens of thousands of hours exploring a specialized acoustic environment. Tuning requires accurate perception and adjustment of beats in two-note chords that serve as a navigational device to move between points in previously learned acoustic scenes. It is a two-stage process that depends on: firstly, selective listening to beats within frequency windows and, secondly, the subsequent use of those beats to navigate through a complex soundscape. The neuroanatomical substrates underlying brain specialization for such fundamental organization of sound scenes are unknown. Here, we demonstrate that professional piano tuners are significantly better than controls matched for age and musical ability on a psychophysical task simulating active listening to beats within frequency windows that is based on amplitude modulation rate discrimination. Tuners show a categorical increase in grey matter volume in the right frontal operculum and right superior temporal lobe. Tuners also show a striking enhancement of grey matter volume in the anterior hippocampus, parahippocampal gyrus, and superior temporal gyrus, and an increase in white matter volume in the posterior hippocampus as a function of years of tuning experience. The relationship with GM volume is sensitive to years of tuning experience and starting age but not actual age or level of musicality. Our findings support a role for a core set of regions in the hippocampus and superior temporal cortex in skilled exploration of complex sound scenes in which precise sound ‘templates’ are encoded and consolidated into memory over time in an experience-dependent manner. PMID:22933806

  14. Navigating the auditory scene: an expert role for the hippocampus.

    PubMed

    Teki, Sundeep; Kumar, Sukhbinder; von Kriegstein, Katharina; Stewart, Lauren; Lyness, C Rebecca; Moore, Brian C J; Capleton, Brian; Griffiths, Timothy D

    2012-08-29

    Over a typical career piano tuners spend tens of thousands of hours exploring a specialized acoustic environment. Tuning requires accurate perception and adjustment of beats in two-note chords that serve as a navigational device to move between points in previously learned acoustic scenes. It is a two-stage process that depends on the following: first, selective listening to beats within frequency windows, and, second, the subsequent use of those beats to navigate through a complex soundscape. The neuroanatomical substrates underlying brain specialization for such fundamental organization of sound scenes are unknown. Here, we demonstrate that professional piano tuners are significantly better than controls matched for age and musical ability on a psychophysical task simulating active listening to beats within frequency windows that is based on amplitude modulation rate discrimination. Tuners show a categorical increase in gray matter volume in the right frontal operculum and right superior temporal lobe. Tuners also show a striking enhancement of gray matter volume in the anterior hippocampus, parahippocampal gyrus, and superior temporal gyrus, and an increase in white matter volume in the posterior hippocampus as a function of years of tuning experience. The relationship with gray matter volume is sensitive to years of tuning experience and starting age but not actual age or level of musicality. Our findings support a role for a core set of regions in the hippocampus and superior temporal cortex in skilled exploration of complex sound scenes in which precise sound "templates" are encoded and consolidated into memory over time in an experience-dependent manner.

  15. Reasoning about scene descriptions

    SciTech Connect

    DiManzo, M.; Adorni, G.; Giunchiglia, F.

    1986-07-01

    When a scene is described by means of natural language sentences, many details are usually omitted, because they are not in the focus of the conversation. Moreover, natural language is not the best tool to define precisely positions and spatial relationships. The process of interpreting ambiguous statements and inferring missing details involves many types of knowledge, from linguistics to physics. This paper is mainly concerned with the problem of modeling the process of understanding descriptions of static scenes. The specific topics covered by this work are the analysis of the meaning of spatial prepositions, the problem of the reference system and dimensionality, the activation of expectations about unmentioned objects, the role of default knowledge about object positions and its integration with contextual information sources, and the problem of space representation. The issue of understanding dynamic scenes descriptions is briefly approached in the last section.

  16. Rural Scene Perspective Transformations

    NASA Astrophysics Data System (ADS)

    Devich, Robert N.; Weinhaus, Frederick M.

    1982-06-01

    This paper presents a method for converting Landsat imagery of natural rural scenes to horizontal viewing perspectives in a digital image processing system. The technique uses digital terrain images for a three-dimensional representation of the scene. Full color pixel-by-pixel (as opposed to skeletal or graphical) images are synthesized, and hidden pixels are eliminated. A sequence of synthesized images of the Colorado River basin is shown. Examples of panoramic and orthographic projections are also shown. An appendix presents a method for converting a contour map into a digital terrain map in raster format.
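    The pixel-by-pixel synthesis with hidden-pixel elimination described above can be sketched with a z-buffer. This is a minimal hypothetical version, not the paper's system: one projected point per terrain cell, a pinhole camera, and no interpolation between cells.

    ```python
    import numpy as np

    def render_perspective(elevation, colors, cam, look_dir, f=1.0, size=(32, 32)):
        """Project a terrain grid into a perspective view, pixel by pixel,
        eliminating hidden points with a z-buffer (nearest point wins).

        elevation[i, j] is the height at ground position (i, j); colors[i, j]
        is that cell's RGB value taken from the co-registered source image."""
        h, w = size
        image = np.zeros((h, w, 3))
        zbuf = np.full((h, w), np.inf)      # depth buffer for hidden-pixel removal
        fwd = look_dir / np.linalg.norm(look_dir)
        right = np.cross(fwd, [0.0, 0.0, 1.0])
        right = right / np.linalg.norm(right)
        up = np.cross(right, fwd)
        for i in range(elevation.shape[0]):
            for j in range(elevation.shape[1]):
                p = np.array([i, j, elevation[i, j]], float) - cam
                depth = p @ fwd
                if depth <= 0:              # point is behind the camera
                    continue
                x = f * (p @ right) / depth  # perspective divide onto the
                y = f * (p @ up) / depth     # [-1, 1] x [-1, 1] viewport
                u = int((x + 1) / 2 * (w - 1))
                v = int((1 - (y + 1) / 2) * (h - 1))
                if 0 <= u < w and 0 <= v < h and depth < zbuf[v, u]:
                    zbuf[v, u] = depth      # nearer terrain point hides farther one
                    image[v, u] = colors[i, j]
        return image
    ```

    The z-buffer comparison is what eliminates hidden pixels: a terrain point only lands in the image if no previously projected point at that pixel was closer to the camera.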

  17. Research in interactive scene analysis

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J. M.; Garvey, T. D.; Weyl, S. A.; Wolf, H. C.

    1975-01-01

    An interactive scene interpretation system (ISIS) was developed as a tool for constructing and experimenting with man-machine and automatic scene analysis methods tailored for particular image domains. A recently developed region analysis subsystem based on the paradigm of Brice and Fennema is described. Using this subsystem a series of experiments was conducted to determine good criteria for initially partitioning a scene into atomic regions and for merging these regions into a final partition of the scene along object boundaries. Semantic (problem-dependent) knowledge is essential for complete, correct partitions of complex real-world scenes. An interactive approach to semantic scene segmentation was developed and demonstrated on both landscape and indoor scenes. This approach provides a reasonable methodology for segmenting scenes that cannot be processed completely automatically, and is a promising basis for a future automatic system. A program is described that can automatically generate strategies for finding specific objects in a scene based on manually designated pictorial examples.
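    The initial partition into atomic regions that the region analysis subsystem starts from can be sketched as connected-component labeling. This is a hypothetical simplification of the Brice-Fennema paradigm: 4-connected components of equal-valued pixels, before any merge criteria are applied.

    ```python
    import numpy as np
    from collections import deque

    def atomic_regions(img):
        """Initial partition into atomic regions: 4-connected components of
        equal-valued pixels. Returns a label image and the region count."""
        labels = -np.ones(img.shape, dtype=int)
        count = 0
        for start in np.ndindex(img.shape):
            if labels[start] >= 0:          # already assigned to a region
                continue
            labels[start] = count
            queue = deque([start])
            while queue:                    # breadth-first flood fill
                i, j = queue.popleft()
                for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if (0 <= ni < img.shape[0] and 0 <= nj < img.shape[1]
                            and labels[ni, nj] < 0 and img[ni, nj] == img[i, j]):
                        labels[ni, nj] = count
                        queue.append((ni, nj))
            count += 1
        return labels, count
    ```

    The merging phase would then repeatedly combine adjacent labeled regions whose boundaries satisfy a weakness criterion; that step is where the semantic, problem-dependent knowledge enters.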

  18. Canonical Views of Dynamic Scenes

    ERIC Educational Resources Information Center

    Garsoffky, Barbel; Schwan, Stephan; Huff, Markus

    2009-01-01

    The visual recognition of dynamic scenes was examined. The authors hypothesized that the notion of canonical views, which has received strong empirical support for static objects, also holds for dynamic scenes. In Experiment 1, viewpoints orthogonal to the main axis of movement in the scene were preferred over other viewpoints, whereas viewpoints…

  19. Capturing, processing, and rendering real-world scenes

    NASA Astrophysics Data System (ADS)

    Nyland, Lars S.; Lastra, Anselmo A.; McAllister, David K.; Popescu, Voicu; McCue, Chris; Fuchs, Henry

    2000-12-01

    While photographs vividly capture a scene from a single viewpoint, it is our goal to capture a scene in such a way that a viewer can freely move to any viewpoint, just as he or she would in an actual scene. We have built a prototype system to quickly digitize a scene using a laser rangefinder and a high-resolution digital camera that accurately captures a panorama of high-resolution range and color information. With real-world scenes, we have provided data to fuel research in many areas, including representation, registration, data fusion, polygonization, rendering, simplification, and reillumination. The real-world scene data can be used for many purposes, including immersive environments, immersive training, re-engineering and engineering verification, renovation, crime-scene and accident capture and reconstruction, archaeology and historic preservation, sports and entertainment, surveillance, remote tourism and remote sales. We describe our acquisition system and the processing necessary to merge data from the multiple input devices and positions. We also describe high quality rendering using the data we have collected; issues concerning specific rendering accelerators and algorithms are presented as well. We conclude by describing future uses and methods of collection for real-world scene data.

  20. Opportunity's Heat Shield Scene

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This image from NASA's Mars Exploration Rover Opportunity reveals the scene of the rover's heat shield impact. In this view, Opportunity is approximately 130 meters (427 feet) away from the device that protected it while hurtling through the martian atmosphere.

    The rover spent 36 sols investigating how the severe heating during entry through the atmosphere affected the heat shield. The most obvious effect is that the heat shield inverted upon impact.

    This is the panoramic camera team's best current attempt at generating a true-color view of what this scene would look like if viewed by a human on Mars. It was generated from a mathematical combination of six calibrated, left-eye panoramic camera images acquired around 1:50 p.m. local solar time on Opportunity's sol 322 (Dec. 19, 2004) using filters ranging in wavelengths from 430 to 750 nanometers.

  1. Automated Synthetic Scene Generation

    DTIC Science & Technology

    2014-07-01

    materials using measured emissivities or, if measured emissivities are not available, thermal modeling for mid- and long-wave infrared regions. … reflected irradiance to the directional incident irradiance. Using the Lambertian simplification of Equation (3.6) and recognizing that thermal … simplification is used here in that all materials in the autonomously generated scenes are assumed to be opaque and at thermal equilibrium. …

  2. South Polar Scene

    NASA Technical Reports Server (NTRS)

    2006-01-01

    19 January 2006 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a mid-summer scene in the south polar region of the red planet. The light-toned surface is covered with seasonal frost that, later in the season, would have sublimed away.

    Location near: 86.8°S, 322.8°W Image width: 3 km (1.9 mi) Illumination from: upper left Season: Southern Summer

  3. South Polar Scene

    NASA Technical Reports Server (NTRS)

    2004-01-01

    5 February 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a portion of the south polar residual cap. Sunlight illuminates this scene from the upper left, thus the somewhat kidney bean-shaped features are pits, not mounds. These pits and their neighboring polygonal cracks are formed in a material composed mostly of carbon dioxide ice. The image is located near 87.0°S, 5.7°W, and covers an area 3 km (1.9 mi) wide.

  4. Use of Data Mining Techniques to Model Crime Scene Investigator Performance

    NASA Astrophysics Data System (ADS)

    Adderley, Richard; Townsley, Michael; Bond, John

    This paper examines how data mining techniques can assist the monitoring of Crime Scene Investigator performance. The findings show that Investigators can be placed in one of four groups according to their ability to recover DNA and fingerprints from crime scenes. They also show that their ability to predict which crime scenes will yield the best opportunity of recovering forensic samples has no correlation to their actual ability to recover those samples.
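    The reported lack of correlation between predicted and actual recovery ability is the kind of claim a Pearson correlation check supports. A minimal sketch in plain Python (the data in the usage test below is hypothetical, not the paper's):

    ```python
    import statistics

    def pearson_r(xs, ys):
        """Pearson correlation coefficient between two equal-length samples.

        Returns a value in [-1, 1]; values near 0 indicate no linear relation."""
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)
    ```

    Feeding each investigator's predicted yield and actual recovery rate into `pearson_r` and finding a coefficient near zero would correspond to the paper's finding.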

  5. Research in interactive scene analysis

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J. M.; Barrow, H. G.; Weyl, S. A.

    1976-01-01

    Cooperative (man-machine) scene analysis techniques were developed whereby humans can provide a computer with guidance when completely automated processing is infeasible. An interactive approach promises significant near-term payoffs in analyzing various types of high volume satellite imagery, as well as vehicle-based imagery used in robot planetary exploration. This report summarizes the work accomplished over the duration of the project and describes in detail three major accomplishments: (1) the interactive design of texture classifiers; (2) a new approach for integrating the segmentation and interpretation phases of scene analysis; and (3) the application of interactive scene analysis techniques to cartography.

  6. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  7. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  8. Apparatus Notes.

    ERIC Educational Resources Information Center

    Eaton, Bruce G., Ed.

    1980-01-01

    Presents four notes that report new equipment and techniques of interest to physics teachers. These notes deal with collisions of atoms in solids, determining the viscosity of a liquid, measuring the speed of sound, and demonstrating the Doppler effect. (HM)

  9. Suicide notes.

    PubMed

    O'Donnell, I; Farmer, R; Catalan, J

    1993-07-01

    Detailed case reports of incidents of suicide and attempted suicide on the London Underground railway system between 1985 and 1989 were examined for the presence of suicide notes. The incidence of note-leaving was 15%. Notes provided little insight into the causes of suicide as subjectively perceived, or strategies for suicide prevention.

  10. Monocular visual scene understanding: understanding multi-object traffic scenes.

    PubMed

    Wojek, Christian; Walk, Stefan; Roth, Stefan; Schindler, Konrad; Schiele, Bernt

    2013-04-01

    Following recent advances in detection, context modeling, and tracking, scene understanding has been the focus of renewed interest in computer vision research. This paper presents a novel probabilistic 3D scene model that integrates state-of-the-art multiclass object detection, object tracking and scene labeling together with geometric 3D reasoning. Our model is able to represent complex object interactions such as inter-object occlusion, physical exclusion between objects, and geometric context. Inference in this model allows us to jointly recover the 3D scene context and perform 3D multi-object tracking from a mobile observer, for objects of multiple categories, using only monocular video as input. Contrary to many other approaches, our system performs explicit occlusion reasoning and is therefore capable of tracking objects that are partially occluded for extended periods of time, or objects that have never been observed to their full extent. In addition, we show that a joint scene tracklet model for the evidence collected over multiple frames substantially improves performance. The approach is evaluated for different types of challenging onboard sequences. We first show a substantial improvement to the state of the art in 3D multipeople tracking. Moreover, a similar performance gain is achieved for multiclass 3D tracking of cars and trucks on a challenging dataset.

  11. Beyond Scene Gist: Objects Guide Search More Than Scene Background.

    PubMed

    Koehler, Kathryn; Eckstein, Miguel P

    2017-03-13

    Although the facilitation of visual search by contextual information is well established, there is little understanding of the independent contributions of different types of contextual cues in scenes. Here we manipulated 3 types of contextual information: object co-occurrence, multiple object configurations, and background category. We isolated the benefits of each contextual cue to target detectability, its impact on decision bias, confidence, and the guidance of eye movements. We find that object-based information guides eye movements and facilitates perceptual judgments more than scene background. The degree of guidance and facilitation of each contextual cue can be related to its inherent informativeness about the target spatial location as measured by human explicit judgments about likely target locations. Our results improve the understanding of the contributions of distinct contextual scene components to search and suggest that the brain's utilization of cues to guide eye movements is linked to the cue's informativeness about the target's location.

  12. Multi- and hyperspectral scene modeling

    NASA Astrophysics Data System (ADS)

    Borel, Christoph C.; Tuttle, Ronald F.

    2011-06-01

    This paper shows how to use a public domain raytracer, POV-Ray (Persistence Of Vision Raytracer), to render multi- and hyper-spectral scenes. The scripting environment allows automatic changing of the reflectance and transmittance parameters. The radiosity rendering mode allows accurate simulation of multiple reflections between surfaces and also allows semi-transparent surfaces such as plant leaves. We show that POV-Ray computes occlusion accurately using a test scene with two blocks under a uniform sky. A complex scene representing a plant canopy is generated using a few lines of script. With appropriate rendering settings, shadows cast by leaves are rendered in many bands. Comparing single and multiple reflection renderings, the effect of multiple reflections is clearly visible and accounts for 25% of the overall apparent canopy reflectance in the near infrared.
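    The scripted per-band rendering workflow described above can be sketched as follows; the scene template, band wavelengths, and reflectance values are illustrative placeholders (not the paper's data), and the radiosity settings are only a plausible configuration:

    ```python
    # Sketch: generate one POV-Ray scene file per spectral band by
    # substituting a band-specific leaf reflectance into a scene template.
    # Template text and reflectance table are illustrative placeholders.

    SCENE_TEMPLATE = """\
    global_settings {{ radiosity {{ count 200 recursion_limit 2 }} }}
    #declare LeafReflectance = {reflectance:.3f};
    plane {{ y, 0 pigment {{ rgb LeafReflectance }} }}
    """

    # Hypothetical band-center wavelengths (nm) -> leaf reflectance
    BAND_REFLECTANCE = {550: 0.12, 650: 0.06, 850: 0.47}

    def render_script(wavelength_nm):
        """Return the POV-Ray scene text for one spectral band."""
        return SCENE_TEMPLATE.format(reflectance=BAND_REFLECTANCE[wavelength_nm])

    # One script per band; each could be written to disk and rendered.
    scripts = {wl: render_script(wl) for wl in BAND_REFLECTANCE}
    ```

    Each generated script would then be handed to the POV-Ray executable, producing one image per band that can be stacked into a multispectral cube.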

  13. Chemistry Notes.

    ERIC Educational Resources Information Center

    School Science Review, 1980

    1980-01-01

    Presents 12 chemistry notes for British secondary school teachers. Some of these notes are: (1) a simple device for testing pH-meters; (2) portable fume cupboard safety screen; and (3) Mass spectroscopy-analysis of a mass peak. (HM)

  14. Creating Three-Dimensional Scenes

    ERIC Educational Resources Information Center

    Krumpe, Norm

    2005-01-01

    This article discusses Persistence of Vision Raytracer (POV-Ray), a free computer program for creating photo-realistic, three-dimensional scenes, and a link for Mathematica users interested in generating POV-Ray files from within Mathematica. POV-Ray has great potential in secondary mathematics classrooms and helps in strengthening students' visualization…

  15. Scene-of-crime analysis by a 3-dimensional optical digitizer: a useful perspective for forensic science.

    PubMed

    Sansoni, Giovanna; Cattaneo, Cristina; Trebeschi, Marco; Gibelli, Daniele; Poppa, Pasquale; Porta, Davide; Maldarella, Monica; Picozzi, Massimo

    2011-09-01

    Analysis and detailed registration of the crime scene are of the utmost importance during investigations. However, this phase of activity is often affected by the risk of loss of evidence due to the limits of traditional scene of crime registration methods (ie, photos and videos). This technical note shows the utility of the application of a 3-dimensional optical digitizer on different crime scenes. This study aims in fact at verifying the importance and feasibility of contactless 3-dimensional reconstruction and modeling by optical digitization to achieve an optimal registration of the crime scene.

  16. Project Notes

    ERIC Educational Resources Information Center

    School Science Review, 1978

    1978-01-01

    Presents sixteen project notes developed by pupils of Chipping Norton School and Bristol Grammar School, in the United Kingdom. These Projects include eight biology A-level projects and eight Chemistry A-level projects. (HM)

  17. ERBE Geographic Scene and Monthly Snow Data

    NASA Technical Reports Server (NTRS)

    Coleman, Lisa H.; Flug, Beth T.; Gupta, Shalini; Kizer, Edward A.; Robbins, John L.

    1997-01-01

    The Earth Radiation Budget Experiment (ERBE) is a multisatellite system designed to measure the Earth's radiation budget. The ERBE data processing system consists of several software packages or sub-systems, each designed to perform a particular task. The primary task of the Inversion Subsystem is to reduce satellite altitude radiances to fluxes at the top of the Earth's atmosphere. To accomplish this, angular distribution models (ADM's) are required. These ADM's are a function of viewing and solar geometry and of the scene type as determined by the ERBE scene identification algorithm which is a part of the Inversion Subsystem. The Inversion Subsystem utilizes 12 scene types which are determined by the ERBE scene identification algorithm. The scene type is found by combining the most probable cloud cover, which is determined statistically by the scene identification algorithm, with the underlying geographic scene type. This Contractor Report describes how the geographic scene type is determined on a monthly basis.
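    The combination step described above (a statistically determined cloud-cover class merged with the underlying geographic type) can be sketched as a simple lookup; the category names and thresholds below are illustrative and are not the actual ERBE 12-type table:

    ```python
    # Toy sketch of combining a cloud-cover class with a geographic type
    # to obtain a scene type, in the spirit of the ERBE scene
    # identification step. Categories and thresholds are illustrative.

    GEO_TYPES = ("ocean", "land", "desert", "snow")

    def cloud_class(cloud_fraction):
        """Bin a cloud-cover fraction in [0, 1] into a coarse class."""
        if cloud_fraction < 0.05:
            return "clear"
        if cloud_fraction < 0.5:
            return "partly"
        if cloud_fraction < 0.95:
            return "mostly"
        return "overcast"

    def scene_type(cloud_fraction, geo_type):
        cc = cloud_class(cloud_fraction)
        # A fully overcast scene masks the surface, so geography is dropped.
        return "overcast" if cc == "overcast" else f"{cc}-{geo_type}"
    ```

    A lookup of this shape, with the ERBE-specific categories, yields the 12 scene types used to select the angular distribution models.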

  18. Standard Scenes Program for Establishing a Natural Scenes Data Base

    DTIC Science & Technology

    1989-12-01

    …portable weather station. Observations for many of the meteorological variables are recorded at five-minute intervals throughout the 24-hour period… standard scenes is recorded, an extensive set of physical and meteorological measurements is also recorded on-site by a portable weather station… of the KRC portable weather station and used to collect the data listed above. Additional weather data are obtained from NOAA (National Oceanic and Atmospheric Administration).

  19. Blue Note

    ScienceCinema

    Murray Gibson

    2016-07-12

    Argonne's Murray Gibson is a physicist whose life's work includes finding patterns among atoms. The love of distinguishing patterns also drives Gibson as a musician and Blues enthusiast. "Blue" notes are very harmonic notes that are missing from the equal temperament scale. The techniques of piano blues and jazz represent the melding of African and Western music into something totally new and exciting.

  20. Blue Note

    SciTech Connect

    Murray Gibson

    2007-04-27

    Argonne's Murray Gibson is a physicist whose life's work includes finding patterns among atoms. The love of distinguishing patterns also drives Gibson as a musician and Blues enthusiast. "Blue" notes are very harmonic notes that are missing from the equal temperament scale. The techniques of piano blues and jazz represent the melding of African and Western music into something totally new and exciting.

  1. Crime scene interpretation: back to basics

    NASA Astrophysics Data System (ADS)

    Baldwin, Hayden B.

    1999-02-01

    This presentation is a review of the basics involved in the interpretation of the crime scene based on facts derived from the physical and testimonial evidence obtained from the scene. It demonstrates the need to thoroughly document the scene to support the interpretation. Part of this documentation is based on photography and crime scene sketches. While the methodology is simple and well demonstrated in this presentation, this aspect is one of the tasks least often completed by law enforcement agencies.

  2. Scanning scene tunnel for city traversing.

    PubMed

    Zheng, Jiang Yu; Zhou, Yu; Milli, Panayiotis

    2006-01-01

    This paper proposes a visual representation named the scene tunnel for capturing urban scenes along routes and visualizing them on the Internet. We scan scenes with multiple cameras or a fish-eye camera on a moving vehicle, which generates a real scene archive along streets that is more complete than previously proposed route panoramas. Using a translating spherical eye, properly set planes of scanning, and a unique parallel-central projection, we explore the image acquisition of the scene tunnel from camera selection and alignment, slit calculation, scene scanning, to image integration. The scene tunnels cover high buildings, the ground, and various viewing directions, and have uniform resolution along the street. The sequentially organized scene tunnel benefits texture mapping onto urban models. We analyze the shape characteristics in the scene tunnels for designing visualization algorithms. After combining this with a global panorama and forward image caps, the capped scene tunnels can provide continuous views directly for virtual or real navigation in a city. We render the scene tunnel dynamically by view warping, fast transmission, and flexible interaction. The compact and continuous scene tunnel facilitates model construction, data streaming, and seamless route traversing on the Internet and mobile devices.
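    The core slit-scanning idea can be sketched as extracting one fixed pixel column per frame from a translating camera and stacking the columns into a continuous strip. The frame data and slit position below are illustrative; the actual system uses calibrated slit positions and the parallel-central projection described above:

    ```python
    import numpy as np

    # Toy sketch of slit scanning: take one fixed pixel column ("slit")
    # from each frame of a translating camera and stack the slits side by
    # side, yielding a continuous strip image along the route.

    def scan_tunnel(frames, slit_x):
        """frames: iterable of HxW arrays; returns an HxN strip (N = #frames)."""
        return np.stack([f[:, slit_x] for f in frames], axis=1)

    # Synthetic 8x16 frames standing in for video from the moving vehicle.
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, size=(8, 16)) for _ in range(10)]
    strip = scan_tunnel(frames, slit_x=8)
    ```

    With one slit per scanning plane, stacking several such strips gives the multi-directional tunnel the paper describes.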

  3. Simulator scene display evaluation device

    NASA Technical Reports Server (NTRS)

    Haines, R. F. (Inventor)

    1986-01-01

    An apparatus for aligning and calibrating scene displays in an aircraft simulator has a base on which all of the instruments for the aligning and calibrating are mounted. A laser directs its beam at a double right prism attached to a pivoting support on the base. The pivot point of the prism is located at the design eye point (DEP) of the simulator during alignment and calibration. The objective lens in the base is movable on a track to follow the laser beam at different angles within the field of vision at the DEP. An eyepiece and a precision diopter are movable into position behind the prism during the scene evaluation. A photometer or illuminometer is pivotable into and out of position behind the eyepiece.

  4. Crime scene investigation, reporting, and reconstruction (CSIRR)

    NASA Astrophysics Data System (ADS)

    Booth, John F.; Young, Jeffrey M.; Corrigan, Paul

    1997-02-01

    Graphic Data Systems Corporation (GDS Corp.) and Intelligent Graphics Solutions, Inc. (IGS) combined talents in 1995 to design and develop a MicroGDS™ application to support field investigations of crime scenes, such as homicides, bombings, and arsons. IGS and GDS Corp. prepared design documents under the guidance of federal, state, and local crime scene reconstruction experts and with information from the FBI's evidence response team field book. The application was then developed to encompass the key components of crime scene investigation: staff assigned to the incident, tasks occurring at the scene, visits to the scene location, photographs taken of the crime scene, related documents, involved persons, catalogued evidence, and two- or three-dimensional crime scene reconstruction. Crime scene investigation, reporting, and reconstruction (CSIRR©) provides investigators with a single application for both capturing all tabular data about the crime scene and quickly rendering a sketch of the scene. Tabular data is captured through intuitive database forms, while MicroGDS™ has been modified to readily allow non-CAD users to sketch the scene.

  5. Dynamic Scene Classification Using Redundant Spatial Scenelets.

    PubMed

    Du, Liang; Ling, Haibin

    2016-09-01

    Dynamic scene classification has recently drawn an increasing amount of research effort. While existing approaches mainly rely on low-level features, little work addresses the need to explore the rich spatial layout information in dynamic scenes. Motivated by the fact that dynamic scenes are characterized by both dynamic and static parts with spatial layout priors, we propose to use redundant spatial grouping of a large number of spatiotemporal patches, named scenelets, to represent a dynamic scene. Specifically, each scenelet is associated with a category-dependent scenelet model to encode the likelihood of a specific scene category. All scenelet models for a scene category are jointly learned to encode the spatial interactions and redundancies among them. Subsequently, a dynamic scene sequence is represented as a collection of category likelihoods estimated by these scenelet models. Such a representation effectively encodes the spatial layout prior together with associated semantic information, and can be used for classifying dynamic scenes in combination with a standard learning algorithm such as k-nearest neighbor or a linear support vector machine. The effectiveness of our approach is clearly demonstrated using two dynamic scene benchmarks and a related application for violence video classification. In the nearest neighbor classification framework, for dynamic scene classification, our method outperforms previous state-of-the-art methods on both the Maryland "in the wild" dataset and the "stabilized" dynamic scene dataset. For violence video classification on a benchmark dataset, our method achieves a promising classification rate of 87.08%, which significantly improves on the previous best result of 81.30%.
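    The final classification step described above (a collection of per-scenelet category likelihoods fed to a standard learner such as k-nearest neighbor) can be sketched as follows; the two-dimensional likelihood vectors and labels are invented for illustration, whereas real vectors come from the learned scenelet models:

    ```python
    import numpy as np

    # Minimal sketch: each video is summarized as a vector of scenelet
    # category likelihoods, and a query is labeled by majority vote among
    # its k nearest neighbors in that likelihood space.

    def knn_predict(train_X, train_y, query, k=3):
        dists = np.linalg.norm(train_X - query, axis=1)
        nearest = np.argsort(dists)[:k]
        labels, counts = np.unique(np.array(train_y)[nearest], return_counts=True)
        return labels[np.argmax(counts)]

    # Made-up 2-D likelihood vectors for two scene categories.
    train_X = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]])
    train_y = ["beach", "beach", "forest", "forest"]
    pred = knn_predict(train_X, train_y, np.array([0.85, 0.15]))
    ```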

  6. The Course of Actualization

    ERIC Educational Resources Information Center

    De Smet, Hendrik

    2012-01-01

    Actualization is traditionally seen as the process following syntactic reanalysis whereby an item's new syntactic status manifests itself in new syntactic behavior. The process is gradual in that some new uses of the reanalyzed item appear earlier or more readily than others. This article accounts for the order in which new uses appear during…

  7. Apparatus Notes.

    ERIC Educational Resources Information Center

    Eaton, Bruce G., Ed.

    1980-01-01

    This collection of notes describes (1) an optoelectronic apparatus for classroom demonstrations of mechanical laws, (2) a more efficient method for demonstrated nuclear chain reactions using electrically energized "traps" and ping-pong balls, and (3) an inexpensive demonstration for qualitative analysis of temperature-dependent resistance. (CS)

  8. Classroom Notes

    ERIC Educational Resources Information Center

    International Journal of Mathematical Education in Science and Technology, 2007

    2007-01-01

    In this issue's "Classroom Notes" section, the following papers are described: (1) "Sequences of Definite Integrals" by T. Dana-Picard; (2) "Structural Analysis of Pythagorean Monoids" by M.-Q Zhan and J. Tong; (3) "A Random Walk Phenomenon under an Interesting Stopping Rule" by S. Chakraborty; (4) "On Some Confidence Intervals for Estimating the…

  9. Biology Notes.

    ERIC Educational Resources Information Center

    School Science Review, 1984

    1984-01-01

    Presents information on the teaching of nutrition (including new information relating to many current O-level syllabi) and part 16 of a reading list for A- and S-level biology. Also includes a note on using earthworms as a source of material for teaching meiosis. (JN)

  10. Editor's note

    NASA Astrophysics Data System (ADS)

    Umapathy, Siva

    2017-01-01

    This is an editor's note related to the publication 'Biologically active and thermally stable polymeric Schiff base and its metal polychelates: Their synthesis and spectral aspects' by Raza Rasool and Sumaiya Hasnain, which appeared in Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy 148 (2015) 435-443.

  11. Classroom Notes

    ERIC Educational Resources Information Center

    International Journal of Mathematical Education in Science and Technology, 2007

    2007-01-01

    In this issue's "Classroom Notes" section, the following papers are discussed: (1) "Constructing a line segment whose length is equal to the measure of a given angle" (W. Jacob and T. J. Osler); (2) "Generating functions for the powers of Fibonacci sequences" (D. Terrana and H. Chen); (3) "Evaluation of mean and variance integrals without…

  12. Scenes, Spaces, and Memory Traces

    PubMed Central

    Maguire, Eleanor A.; Intraub, Helene; Mullally, Sinéad L.

    2015-01-01

    The hippocampus is one of the most closely scrutinized brain structures in neuroscience. While traditionally associated with memory and spatial cognition, in more recent years it has also been linked with other functions, including aspects of perception and imagining fictitious and future scenes. Efforts continue apace to understand how the hippocampus plays such an apparently wide-ranging role. Here we consider recent developments in the field and in particular studies of patients with bilateral hippocampal damage. We outline some key findings, how they have subsequently been challenged, and consider how to reconcile the disparities that are at the heart of current lively debates in the hippocampal literature. PMID:26276163

  13. Application note :

    SciTech Connect

    Russo, Thomas V.

    2013-08-01

    The development of the Xyce™ Parallel Electronic Simulator has focused entirely on the creation of a fast, scalable simulation tool, and has not included any schematic capture or data visualization tools. This application note describes how to use the open source schematic capture tool gschem and its associated netlist creation tool gnetlist to create basic circuit designs for Xyce, and how to access advanced features of Xyce that are not directly supported by either gschem or gnetlist.

  14. A system of infrared scene simulation

    NASA Astrophysics Data System (ADS)

    Hu, Haihe; Li, Yujian; Huo, Yi; Kuang, Wenqing; Zhang, Ting

    2016-10-01

    We propose an integrated infrared scene simulation system. Based on thermal physical property and optical property parameters, the proposed system computes the radiation distribution of the scenery on the focal plane of the camera according to the geometrical parameters of the scene, the position and intensity of the light source, the location and direction of the camera, and so on. The radiation distribution is then mapped to gray levels, and we finally obtain the virtual image of the scene. The proposed system includes eight modules: basic data maintenance, model importing, scene saving, geometry parameter setting, infrared property parameter setting, data pre-processing, infrared scene simulation, and scene loading. The system organizes all data in a database lookup table that stores the relevant parameters and computed results for different states, avoiding repeated computation. Experimental results show that the proposed system produces three-dimensional infrared images at near real-time rates, reaching 60 frames/second when drawing simple scenes and 20 frames/second for complex scenes. Experimental results also show that the simulated images can represent the infrared features of the scenery to a certain degree.
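    The gray-mapping step mentioned above (the computed radiation distribution mapped to the space of gray) can be sketched under the assumption of a simple linear scaling; the abstract does not specify the actual mapping, and the radiance values here are illustrative:

    ```python
    import numpy as np

    # Sketch: linearly map a radiance array onto 8-bit gray levels 0..255.

    def radiance_to_gray(radiance):
        r = np.asarray(radiance, dtype=float)
        lo, hi = r.min(), r.max()
        if hi == lo:                        # flat scene: mid-gray everywhere
            return np.full(r.shape, 128, dtype=np.uint8)
        return ((r - lo) / (hi - lo) * 255).astype(np.uint8)

    # Illustrative 2x2 radiance distribution on the focal plane.
    gray = radiance_to_gray([[0.0, 1.0], [2.0, 4.0]])
    ```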

  15. Editor's Note.

    PubMed

    Alberts, Bruce

    2011-05-27

    The Research Article "A bacterium that can grow by using arsenic instead of phosphorus" by F. Wolfe-Simon et al., published online 2 December 2010, was the subject of extensive discussion and criticism following its online publication. Science received a wide range of correspondence that raised specific concerns about the Research Article's methods and interpretations. Eight Technical Comments that represent the main concerns, as well as a Technical Response by Wolfe-Simon et al., are published online in Science Express at the addresses listed in this note. They have been peer-reviewed and revised according to Science's standard procedure.

  16. Teaching Notes

    NASA Astrophysics Data System (ADS)

    2001-07-01

    If you would like to contribute a teaching note for any of these sections please contact ped@iop.org. Contents: LET'S INVESTIGATE: Bows and arrows; STARTING OUT: A late start; ON THE MAP: A South African school making a world of difference; TECHNICAL TRIMMINGS: May the force be with you (an easily constructed force sensor), Modelling ultrasound A-scanning with the Pico Technology ADC-200 Virtual Instrument; PHYSICS ON A SHOESTRING: Sugar cube radioactivity models; CURIOSITY: Euler's disk; MY WAY: Why heavy things don't fall faster.

  17. Hierarchical, Three-Dimensional Measurement System for Crime Scene Scanning.

    PubMed

    Marcin, Adamczyk; Maciej, Sieniło; Robert, Sitnik; Adam, Woźniak

    2017-02-02

    We present a new generation of three-dimensional (3D) measuring systems, developed for the process of crime scene documentation. This measuring system facilitates the preparation of more insightful, complete, and objective documentation for crime scenes. Our system reflects the actual requirements for hierarchical documentation, and it consists of three independent 3D scanners: a laser scanner for overall measurements, a situational structured light scanner for more minute measurements, and a detailed structured light scanner for the most detailed parts of the scene. Each scanner has its own spatial resolution, of 2.0, 0.3, and 0.05 mm, respectively. The results of interviews we have conducted with technicians indicate that our developed 3D measuring system has significant potential to become a useful tool for forensic technicians. To ensure the maximum compatibility of our measuring system with the standards that regulate the documentation process, we have also performed a metrological validation and designated the maximum permissible length measurement error (E_MPE) for each structured light scanner. In this study, we present additional results regarding documentation processes conducted during crime scene inspections and a training session.

  18. Eye Movement Control during Scene Viewing: Immediate Effects of Scene Luminance on Fixation Durations

    ERIC Educational Resources Information Center

    Henderson, John M.; Nuthmann, Antje; Luke, Steven G.

    2013-01-01

    Recent research on eye movements during scene viewing has primarily focused on where the eyes fixate. But eye fixations also differ in their durations. Here we investigated whether fixation durations in scene viewing are under the direct and immediate control of the current visual input. Subjects freely viewed photographs of scenes in preparation…

  19. When Does Repeated Search in Scenes Involve Memory? Looking at versus Looking for Objects in Scenes

    ERIC Educational Resources Information Center

    Vo, Melissa L. -H.; Wolfe, Jeremy M.

    2012-01-01

    One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in the same scene, search performance remained…

  20. Surreal Scene Part of Lives.

    ERIC Educational Resources Information Center

    Freeman, Christina

    1999-01-01

    Describes a school newspaper editor's attempts to cover the devastating tornado that severely damaged her school--North Hall High School in Gainesville, Georgia. Notes that the 16-page special edition she and the staff produced included first-hand accounts, tributes to victims, tales of survival, and pictures of the tragedy. (RS)

  1. Auditory and visual scene analysis: an overview

    PubMed Central

    2017-01-01

    We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how ‘scene analysis’ is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the extraction of stimulus features and formation and selection of perceptual objects, but also with selective attention, perceptual binding and awareness. This special issue covers novel advances in scene-analysis research obtained using a combination of psychophysics, computational modelling, neuroimaging and neurophysiology, and presents new empirical and theoretical approaches. For integrative understanding of scene analysis beyond and across sensory modalities, we provide a collection of 15 articles that enable comparison and integration of recent findings in auditory and visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044011

  2. Auditory and visual scene analysis: an overview.

    PubMed

    Kondo, Hirohito M; van Loon, Anouk M; Kawahara, Jun-Ichiro; Moore, Brian C J

    2017-02-19

    We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how 'scene analysis' is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the extraction of stimulus features and formation and selection of perceptual objects, but also with selective attention, perceptual binding and awareness. This special issue covers novel advances in scene-analysis research obtained using a combination of psychophysics, computational modelling, neuroimaging and neurophysiology, and presents new empirical and theoretical approaches. For integrative understanding of scene analysis beyond and across sensory modalities, we provide a collection of 15 articles that enable comparison and integration of recent findings in auditory and visual scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.

  3. Illumination discrimination in real and simulated scenes

    PubMed Central

    Radonjić, Ana; Pearce, Bradley; Aston, Stacey; Krieger, Avery; Dubin, Hilary; Cottaris, Nicolas P.; Brainard, David H.; Hurlbert, Anya C.

    2016-01-01

    Characterizing humans' ability to discriminate changes in illumination provides information about the visual system's representation of the distal stimulus. We have previously shown that humans are able to discriminate illumination changes and that sensitivity to such changes depends on their chromatic direction. Probing illumination discrimination further would be facilitated by the use of computer-graphics simulations, which would, in practice, enable a wider range of stimulus manipulations. There is no a priori guarantee, however, that results obtained with simulated scenes generalize to real illuminated scenes. To investigate this question, we measured illumination discrimination in real and simulated scenes that were well-matched in mean chromaticity and scene geometry. Illumination discrimination thresholds were essentially identical for the two stimulus types. As in our previous work, these thresholds varied with illumination change direction. We exploited the flexibility offered by the use of graphics simulations to investigate whether the differences across direction are preserved when the surfaces in the scene are varied. We show that varying the scene's surface ensemble in a manner that also changes mean scene chromaticity modulates the relative sensitivity to illumination changes along different chromatic directions. Thus, any characterization of sensitivity to changes in illumination must be defined relative to the set of surfaces in the scene.

  4. History Scene Investigations: From Clues to Conclusions

    ERIC Educational Resources Information Center

    McIntyre, Beverly

    2011-01-01

    In this article, the author introduces a social studies lesson that allows students to learn history and practice reading skills, critical thinking, and writing. The activity is called History Scene Investigation or HSI, which derives its name from the popular television series based on crime scene investigations (CSI). HSI uses discovery learning…

  5. Editors' note

    NASA Astrophysics Data System (ADS)

    Denker, Carsten; Feller, Alex; Schmidt, Wolfgang; von der Lühe, Oskar

    2012-11-01

    This topical issue of Astronomische Nachrichten/Astronomical Notes is a collection of reference articles covering the GREGOR solar telescope, its science capabilities, its subsystems, and its dedicated suite of instruments for high-resolution observations of the Sun. Because ground-based telescopes have life spans of several decades, it is only natural that they continuously reinvent themselves. Literally, the GREGOR telescope builds on the foundations of the venerable Gregory-Coudé Telescope (GCT) at Observatorio del Teide, Tenerife, Spain. Acknowledging the fact that new discoveries in observational solar physics are driven by larger apertures to collect more photons and to scrutinize the Sun in finer detail, the GCT was decommissioned and the building was made available to the GREGOR project.

  6. Understanding natural scenes: Contributions of image statistics.

    PubMed

    De Cesarei, Andrea; Loftus, Geoffrey R; Mastria, Serena; Codispoti, Maurizio

    2017-03-01

    Visual processing of natural scenes is carried out in a hierarchical sequence of stages that involve the analysis of progressively more complex features of the visual input. Recent studies have suggested that the semantic content of natural stimuli (e.g., real world photos) can be categorized based on statistical regularities in their appearance, which can be detected early in the visual processing stream. Here we review the studies which have investigated the role of scene statistics in the perception of natural scenes, focusing on both basic visual processing and specific tasks (visual search, expert categorization, emotional picture viewing). Visual processing seems to be adapted to visual regularities in the visual input, such as the amplitude-frequency relationship. Moreover, scene statistics can aid performance in specific tasks such as distinguishing animals from artifactual scenes, possibly by modulating early visual processing stages.

  7. Typicality of objects in urban park scenes.

    PubMed

    Beltran, F S; Herrando, S; Miñano, M

    2000-06-01

    Typicality is a basic property of any categorization process, including the categorization of environmental scenes. We examined whether a scene, as a type of situation, influences judgements about the typicality of its elements, given the part-whole relationship between element and scene. The subjects were asked to rate typicality on a scale of 1 to 7 and to give a word that defined the urban park elements appearing in a park scene or on a blank background. Some elements (wastepaper basket, fountain, street lamp, and statue) were depicted in two styles (classical and contemporary). The results indicate that the scene or blank background had no effect on the typicality scores, but subjects had difficulty providing an appropriate word when elements with a contemporary style were shown on a blank background.

  8. Spatial Feature Evaluation for Aerial Scene Analysis

    SciTech Connect

    Swearingen, Thomas S; Cheriyadat, Anil M

    2013-01-01

    High-resolution aerial images are becoming more readily available, which drives the demand for robust, intelligent and efficient systems to process increasingly large amounts of image data. However, automated image interpretation still remains a challenging problem. Robust techniques to extract and represent features that uniquely characterize various aerial scene categories are key for automated image analysis. In this paper we examined the role of spatial features in uniquely characterizing various aerial scene categories. We studied low-level features such as colors, edge orientations, and textures, and examined their local spatial arrangements. We computed correlograms representing the spatial correlation of features at various distances, then measured the distance between correlograms to identify similar scenes. We evaluated the proposed technique on several aerial image databases containing challenging aerial scene categories. We report detailed evaluation of various low-level features by quantitatively measuring accuracy and parameter sensitivity. To demonstrate the feature performance, we present a simple query-based aerial scene retrieval system.
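    A minimal sketch of the correlogram idea described above, assuming a single distance and the horizontal direction only (real correlograms pool several distances and all directions over quantized color or texture features); the tiny quantized images are illustrative:

    ```python
    import numpy as np

    # Toy autocorrelogram: for each quantized feature level, estimate the
    # probability that a pixel at horizontal distance d has the same level;
    # two scenes are then compared by L1 distance between correlograms.

    def autocorrelogram(img, d=1):
        img = np.asarray(img)
        left, right = img[:, :-d], img[:, d:]
        feats = []
        for v in np.unique(img):
            mask = left == v
            feats.append((right[mask] == v).mean() if mask.any() else 0.0)
        return np.array(feats)

    # Two tiny 2x3 "scenes" quantized to levels {0, 1}.
    a = np.array([[0, 0, 1], [0, 0, 1]])
    b = np.array([[0, 1, 0], [1, 0, 1]])
    dist = np.abs(autocorrelogram(a) - autocorrelogram(b)).sum()
    ```

    In a retrieval setting, the query's correlogram is compared against those of all database scenes and the smallest distances are returned.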

  9. Actual use scene of Han-Character for proper name and coded character set

    NASA Astrophysics Data System (ADS)

    Kobayashi, Tatsuo

    This article discusses two issues. The first is an overview of the standardization of Han characters in coded character sets, including the Universal Coded Character Set (ISO/IEC 10646), in relation to the language policy of the Japanese government. The second is the distinctive and particular usage of Han characters in proper names, and the difficulty of supporting it in ICT systems.

  10. Out of Mind, Out of Sight: Unexpected Scene Elements Frequently Go Unnoticed Until Primed.

    PubMed

    Slavich, George M; Zimbardo, Philip G

    2013-12-01

    The human visual system employs a sophisticated set of strategies for scanning the environment and directing attention to stimuli that can be expected given the context and a person's past experience. Although these strategies enable us to navigate a very complex physical and social environment, they can also cause highly salient but unexpected stimuli to go completely unnoticed. To examine the generality of this phenomenon, we conducted eight studies that included 15 different experimental conditions and 1,577 participants in all. These studies revealed that a large majority of participants do not report having seen a woman in the center of an urban scene who was photographed in midair as she was committing suicide. Despite seeing the scene repeatedly, 46% of all participants failed to report seeing a central figure and only 4.8% reported seeing a falling person. Frequency of noticing the suicidal woman was highest for participants who read a narrative priming story that increased the extent to which she was schematically congruent with the scene. In contrast to this robust effect of inattentional blindness, a majority of participants reported seeing other peripheral objects in the visual scene that were equally difficult to detect, yet more consistent with the scene. Follow-up qualitative analyses revealed that participants reported seeing many elements that were not actually present, but which could have been expected given the overall context of the scene. Together, these findings demonstrate the robustness of inattentional blindness and highlight the specificity with which different visual primes may increase noticing behavior.

  11. Editorial Note

    NASA Astrophysics Data System (ADS)

    van der Meer, F.; Ommen Kloeke, E.

    2015-07-01

    With this editorial note we would like to update you on the performance of the International Journal of Applied Earth Observation and Geoinformation (JAG) and inform you about changes that have been made to the composition of the editorial team. Our Journal publishes original papers that apply earth observation data to the management of natural resources and the environment. Environmental issues include biodiversity, land degradation, industrial pollution, and natural hazards such as earthquakes, floods, and landslides. As such, the scope is broad, ranging from conceptual and more fundamental work on earth observation and geospatial sciences to more problem-solving work. When I took over the role of Editor-in-Chief in 2012, the Publisher and I set ourselves the mission of positioning JAG among the top three remote sensing and GIS journals. To do so, we strove to attract high-quality, high-impact papers and to reduce the review turnaround time, making JAG a more attractive medium for publication. What has been achieved? Have we reached our ambitions? We can say that submissions have increased over the years, by over 23% in the last 12 months. Naturally, not all submissions will lead to published papers, but at least a portion of the additional submissions should lead to growth in journal content and quality.

  12. Evaluation of temporal registration of Landsat scenes

    NASA Technical Reports Server (NTRS)

    Nelson, R.; Grebowsky, G.

    1982-01-01

    The registration of Landsat images is important for multitemporal classification and for detecting change. Landsat data are now rectified to a ground coordinate system during preprocessing, hence scenes obtained over the same area are registered. The machine responsible for preprocessing the Landsat multispectral scanner data is the master data processor (MDP). This paper describes the rectification approach taken by the MDP, reviews the accuracy standards of the resultant product, and provides an assessment of the accuracy of the scene to scene registration of two Landsat images.

  13. Combined influence of visual scene and body tilt on arm pointing movements: gravity matters!

    PubMed

    Scotto Di Cesare, Cécile; Sarlegna, Fabrice R; Bourdin, Christophe; Mestre, Daniel R; Bringoux, Lionel

    2014-01-01

    Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., 'combined' tilts equal to the sum of 'single' tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues.

  14. Behind the Scenes: Under the Shuttle

    NASA Video Gallery

    In this episode of "NASA Behind the Scenes," astronaut Mike Massimino takes you up to - and under - the space shuttle as it waits on launch pad 39A at the Kennedy Space Center for the start of a re...

  15. Behind the Scenes: Discovery Crew Practices Landing

    NASA Video Gallery

    In this episode of NASA Behind the Scenes, Astronaut Mike Massimino introduces you to Commander Steve Lindsey and the crewmembers of STS-133, space shuttle Discovery's last mission. Go inside one o...

  16. Cognition inspired framework for indoor scene annotation

    NASA Astrophysics Data System (ADS)

    Ye, Zhipeng; Liu, Peng; Zhao, Wei; Tang, Xianglong

    2015-09-01

    We present a simple yet effective scene annotation framework based on a combination of bag-of-visual-words (BoVW), three-dimensional scene structure estimation, scene context, and cognitive theory. From a macro perspective, the proposed cognition-based hybrid-motivation framework divides the annotation problem into empirical inference and real-time classification. Inspired by the inference ability of human beings, common objects of indoor scenes are defined for experience-based inference, while in the real-time classification stage an improved BoVW-based multilayer abstract semantics labeling method is proposed, introducing abstract semantic hierarchies to narrow the semantic gap and improve the performance of object categorization. The proposed framework was evaluated on a variety of common data sets, and experimental results proved its effectiveness.

  17. Evidence for scene-based motion correspondence.

    PubMed

    Hein, Elisabeth; Moore, Cathleen M

    2014-04-01

    To maintain stable object representations as our eyes or the objects themselves move, the visual system must determine how newly sampled information relates to existing object representations. To solve this correspondence problem, the visual system uses not only spatiotemporal information (e.g., the spatial and temporal proximity between elements), but also feature information (e.g., the similarity in size or luminance between elements). Here we asked whether motion correspondence relies solely on image-based feature information, or whether it is influenced by scene-based information (e.g., the perceived sizes of surfaces or the perceived illumination conditions). We manipulated scene-based information separately from image-based information in the Ternus display, an ambiguous apparent-motion display, and found that scene-based information influences how motion correspondence is resolved, indicating that theories of motion correspondence that are based on "scene-blind" mechanisms are insufficient.

  18. Behind the Scenes: 'Fishing' For Rockets

    NASA Video Gallery

    In this episode of NASA "Behind the Scenes," go on board the two ships -- Liberty Star and Freedom Star -- which retrieve the shuttle's solid rocket boosters after every launch. Astronaut Mike Mass...

  19. Behind the Scenes: Discovery Crew Performs Swimmingly

    NASA Video Gallery

    In this episode of NASA "Behind the Scenes," astronaut Mike Massimino visits the Johnson Space Center's Neutral Buoyancy Laboratory. The world's largest indoor pool is where Al Drew, Tim Kopra, Mik...

  20. The Double Scene of Televised AIDS Campaigns.

    ERIC Educational Resources Information Center

    Cummings, Kate

    1992-01-01

    Analyzes the double or mirrored scene of the Centers for Disease Control's AIDS education campaign and the responses to that campaign, basically, the dominant, heterosexual, televised discourses' defensive erasure of those semiotic objects that represent illicit and nonreproductive sex. (RS)

  1. Behind the Scenes: Astronauts Get Float Training

    NASA Video Gallery

    In this episode of "NASA Behind the Scenes," astronaut Mike Massimino continues his visit with safety divers and flight doctors at the Johnson Space Center's Neutral Buoyancy Laboratory as they com...

  2. Statistics of high-level scene context

    PubMed Central

    Greene, Michelle R.

    2013-01-01

    Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed “things” in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by
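
    The bag-of-words level of description can be illustrated with a minimal sketch: scenes become object-count vectors, and a nearest-centroid rule assigns a category. The vocabulary, object lists, and classifier choice below are hypothetical stand-ins, not the paper's actual setup, which used linear classifiers over tens of thousands of labeled objects:

```python
import numpy as np

def bow_vector(objects, vocab):
    """Count occurrences of each vocabulary object in a scene's label list."""
    v = np.zeros(len(vocab))
    for obj in objects:
        if obj in vocab:
            v[vocab.index(obj)] += 1
    return v

def nearest_centroid_predict(train_X, train_y, x):
    """Assign x to the category whose mean bag-of-words vector is closest."""
    labels = sorted(set(train_y))
    centroids = {c: train_X[[i for i, y in enumerate(train_y) if y == c]].mean(axis=0)
                 for c in labels}
    return min(labels, key=lambda c: np.linalg.norm(x - centroids[c]))
```

    With toy data (kitchens containing sinks and stoves, offices containing desks and monitors), a scene listing only a sink lands in the kitchen category.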

  3. Road Scene Analysis using Trinocular Stereo Vision

    NASA Astrophysics Data System (ADS)

    Matsushima, Kousuke; Matsuura, Hiroto; Kijima, Yoshitaka; Hu, Zhencheng; Uchimura, Keiichi

    Road scene analysis in a 3D driving environment, which aims to detect objects against a continuously changing background, is vital for driver assistance systems and Adaptive Cruise Control (ACC) applications. Laser and millimeter-wave radars have shown good performance in measuring relative speed and distance in highway driving environments. However, the accuracy of these systems decreases in urban traffic environments, where more confusion arises from factors such as parked vehicles, guardrails, poles, and motorcycles. A stereovision-based sensing system provides an effective supplement to radar-based road scene analysis, with a much wider field of view and more accurate lateral information. This paper presents an efficient solution for road scene analysis using a trinocular stereo vision based algorithm. In this algorithm, trinocular stereo vision detects all types of objects in the road scene, and the "U-V-disparity" concept is employed to analyze the 3D geometric features of the scene. The proposed algorithm has been tested on real road scenes, and experimental results verified its efficiency.
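
    The "U-V-disparity" idea is to project a disparity map into per-row and per-column disparity histograms, in which the road surface and upright obstacles show up as characteristic lines. A generic sketch, assuming an integer-valued disparity map and not reproducing the paper's implementation:

```python
import numpy as np

def uv_disparity(disp, n_bins):
    """Build U- and V-disparity histograms from an integer disparity map.
    V-disparity: for each image row, a histogram over disparity values
    (the ground plane appears as a slanted line, obstacles as verticals).
    U-disparity: the same accumulation per image column."""
    h, w = disp.shape
    v_disp = np.zeros((h, n_bins), dtype=int)
    u_disp = np.zeros((n_bins, w), dtype=int)
    for r in range(h):
        for c in range(w):
            d = disp[r, c]
            if 0 <= d < n_bins:
                v_disp[r, d] += 1
                u_disp[d, c] += 1
    return u_disp, v_disp
```

    Line fitting (e.g., a Hough transform) on the V-disparity image then separates the road profile from obstacle candidates.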

  4. Scene analysis in the natural environment.

    PubMed

    Lewicki, Michael S; Olshausen, Bruno A; Surlykke, Annemarie; Moss, Cynthia F

    2014-01-01

    The problem of scene analysis has been studied in a number of different fields over the past decades. These studies have led to important insights into problems of scene analysis, but not all of these insights are widely appreciated, and there remain critical shortcomings in current approaches that hinder further progress. Here we take the view that scene analysis is a universal problem solved by all animals, and that we can gain new insight by studying the problems that animals face in complex natural environments. In particular, the jumping spider, songbird, echolocating bat, and electric fish, all exhibit behaviors that require robust solutions to scene analysis problems encountered in the natural environment. By examining the behaviors of these seemingly disparate animals, we emerge with a framework for studying scene analysis comprising four essential properties: (1) the ability to solve ill-posed problems, (2) the ability to integrate and store information across time and modality, (3) efficient recovery and representation of 3D scene structure, and (4) the use of optimal motor actions for acquiring information to progress toward behavioral goals.

  5. A really complicated problem: Auditory scene analysis

    NASA Astrophysics Data System (ADS)

    Yost, William A.

    2004-05-01

    It has been more than a decade since Al Bregman and other authors brought the challenge of auditory scene analysis back to the attention of auditory science. While a lot of research has been done on and around this topic, an accepted theory of auditory scene analysis has not evolved. Auditory science has little, if any, information about how the nervous system solves this problem, and there have not been any major successes in developing computational methods that solve the problem for most real-world auditory scenes. I will argue that the major reason that more has not been accomplished is that auditory scene analysis is a really hard problem. If one starts with a single sound source and tries to understand how the auditory system determines this single source, the problem is already very complicated without adding other sources that occur at the same time, as is the typical depiction of the auditory scene. In this paper I will illustrate some of the challenges that exist for determining the auditory scene that have not received a lot of attention, as well as some of the more discussed aspects of the challenge. [Work supported by NIDCD.]

  6. Scene change detection based on multimodal integration

    NASA Astrophysics Data System (ADS)

    Zhu, Yingying; Zhou, Dongru

    2003-09-01

    Scene change detection is an essential step in automatic, content-based video indexing, retrieval, and browsing. In this paper, a robust scene change detection and classification approach is presented, which analyzes audio, visual, and textual sources and accounts for their inter-relations and coincidence to semantically identify and classify video scenes. Audio analysis focuses on segmenting the audio stream into four types of semantic data: silence, speech, music, and environmental sound. Further processing on speech segments aims at locating speaker changes. Video analysis partitions the visual stream into shots. Text analysis can provide a supplemental source of clues for scene classification and indexing. We integrate the video and audio analysis results to identify video scenes and use the text information detected by video OCR technology, or derived from available transcripts, to refine scene classification. Results from single-source segmentation are in some cases suboptimal. By combining visual and aural features with the supplementary text information, the scene extraction accuracy is enhanced and more semantic segmentations are obtained. Experimental results prove rather promising.
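
    As a minimal sketch of the visual-stream side, shot partitioning is commonly done by thresholding the histogram difference between consecutive frames. The bin count and threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def detect_shot_boundaries(frames, n_bins=16, threshold=0.5):
    """Flag a shot boundary at frame i whenever the total-variation
    distance between the intensity histograms of frames i-1 and i
    exceeds `threshold`. Assumes grayscale frames with values in [0, 1]."""
    boundaries = []
    prev = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=n_bins, range=(0.0, 1.0))
        hist = hist / hist.sum()  # normalize to a distribution
        if prev is not None and 0.5 * np.abs(hist - prev).sum() > threshold:
            boundaries.append(i)
        prev = hist
    return boundaries
```

    Real systems add temporal smoothing and adaptive thresholds to cope with gradual transitions such as fades and dissolves.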

  7. Scene analysis in the natural environment

    PubMed Central

    Lewicki, Michael S.; Olshausen, Bruno A.; Surlykke, Annemarie; Moss, Cynthia F.

    2014-01-01

    The problem of scene analysis has been studied in a number of different fields over the past decades. These studies have led to important insights into problems of scene analysis, but not all of these insights are widely appreciated, and there remain critical shortcomings in current approaches that hinder further progress. Here we take the view that scene analysis is a universal problem solved by all animals, and that we can gain new insight by studying the problems that animals face in complex natural environments. In particular, the jumping spider, songbird, echolocating bat, and electric fish, all exhibit behaviors that require robust solutions to scene analysis problems encountered in the natural environment. By examining the behaviors of these seemingly disparate animals, we emerge with a framework for studying scene analysis comprising four essential properties: (1) the ability to solve ill-posed problems, (2) the ability to integrate and store information across time and modality, (3) efficient recovery and representation of 3D scene structure, and (4) the use of optimal motor actions for acquiring information to progress toward behavioral goals. PMID:24744740

  8. Auditory scene analysis by echolocation in bats.

    PubMed

    Moss, C F; Surlykke, A

    2001-10-01

    Echolocating bats transmit ultrasonic vocalizations and use information contained in the reflected sounds to analyze the auditory scene. Auditory scene analysis, a phenomenon that applies broadly to all hearing vertebrates, involves the grouping and segregation of sounds to perceptually organize information about auditory objects. The perceptual organization of sound is influenced by the spectral and temporal characteristics of acoustic signals. In the case of the echolocating bat, its active control over the timing, duration, intensity, and bandwidth of sonar transmissions directly impacts its perception of the auditory objects that comprise the scene. Here, data are presented from perceptual experiments, laboratory insect capture studies, and field recordings of sonar behavior of different bat species, to illustrate principles of importance to auditory scene analysis by echolocation in bats. In the perceptual experiments, FM bats (Eptesicus fuscus) learned to discriminate between systematic and random delay sequences in echo playback sets. The results of these experiments demonstrate that the FM bat can assemble information about echo delay changes over time, a requirement for the analysis of a dynamic auditory scene. Laboratory insect capture experiments examined the vocal production patterns of flying E. fuscus taking tethered insects in a large room. In each trial, the bats consistently produced echolocation signal groups with a relatively stable repetition rate (within 5%). Similar temporal patterning of sonar vocalizations was also observed in the field recordings from E. fuscus, thus suggesting the importance of temporal control of vocal production for perceptually guided behavior. It is hypothesized that a stable sonar signal production rate facilitates the perceptual organization of echoes arriving from objects at different directions and distances as the bat flies through a dynamic auditory scene. Field recordings of E. fuscus, Noctilio albiventris, N

  9. Visual Scenes are Categorized by Function

    PubMed Central

    Greene, Michelle R.; Baldassano, Christopher; Esteva, Andre; Beck, Diane M.; Fei-Fei, Li

    2015-01-01

    How do we know that a kitchen is a kitchen by looking? Traditional models posit that scene categorization is achieved through recognizing necessary and sufficient features and objects, yet there is little consensus about what these may be. However, scene categories should reflect how we use visual information. We therefore test the hypothesis that scene categories reflect functions, or the possibilities for actions within a scene. Our approach is to compare human categorization patterns with predictions made by both functions and alternative models. We collected a large-scale scene category distance matrix (5 million trials) by asking observers to simply decide whether two images were from the same or different categories. Using the actions from the American Time Use Survey, we mapped actions onto each scene (1.4 million trials). We found a strong relationship between ranked category distance and functional distance (r=0.50, or 66% of the maximum possible correlation). The function model outperformed alternative models based on object-based distance (r=0.33), visual features from a convolutional neural network (r=0.39), and lexical distance (r=0.27). Using hierarchical linear regression, we found that functions captured 85.5% of overall explained variance, with nearly half of the explained variance captured only by functions, implying that the predictive power of alternative models was due to their shared variance with the function-based model. These results challenge the dominant school of thought that visual features and objects are sufficient for scene categorization, suggesting instead that a scene’s category may be determined by the scene’s function. PMID:26709590
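
    The comparison between ranked category distance and each model's distance can be illustrated with a rank correlation over the upper triangles of two distance matrices. This sketch ignores tie handling (unlike a proper Spearman implementation) and is not the authors' analysis code:

```python
import numpy as np

def rank(values):
    """Rank of each value in a 1D array (0 = smallest; ties broken by order)."""
    order = np.argsort(values)
    r = np.empty(len(values))
    r[order] = np.arange(len(values))
    return r

def spearman_upper(d1, d2):
    """Spearman-style rank correlation between the upper triangles of two
    symmetric category-distance matrices (diagonal excluded)."""
    iu = np.triu_indices_from(d1, k=1)
    a, b = rank(d1[iu]), rank(d2[iu])
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))
```

    A value near 1 means the two models order category pairs the same way; near -1, in reverse.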

  10. Infants detect changes in everyday scenes: the role of scene gist.

    PubMed

    Duh, Shinchieh; Wang, Su-hua

    2014-07-01

    When watching physical events, infants bring to bear prior knowledge about objects and readily detect changes that contradict physical rules. Here we investigate the possibility that scene gist may affect infants, as it affects adults, when detecting changes in everyday scenes. In Experiment 1, 15-month-old infants missed a perceptually salient change that preserved the gist of a generic outdoor scene; the same change was readily detected if infants had insufficient time to process the display and had to rely on perceptual information for change detection. In Experiment 2, 15-month-olds detected a perceptually subtle change that preserved the scene gist but violated the rule of object continuity, suggesting that physical rules may overpower scene gist in infants' change detection. Finally, Experiments 3 and 4 provided converging evidence for the effects of scene gist, showing that 15-month-olds missed a perceptually salient change that preserved the gist and detected a perceptually subtle change that disrupted the gist. Together, these results suggest that prior knowledge, including scene knowledge and physical knowledge, affects the process by which infants maintain their representations of everyday scenes.

  11. Scene Construction, Visual Foraging, and Active Inference

    PubMed Central

    Mirza, M. Berk; Adams, Rick A.; Mathys, Christoph D.; Friston, Karl J.

    2016-01-01

    This paper describes an active inference scheme for visual searches and the perceptual synthesis entailed by scene construction. Active inference assumes that perception and action minimize variational free energy, where actions are selected to minimize the free energy expected in the future. This assumption generalizes risk-sensitive control and expected utility theory to include epistemic value; namely, the value (or salience) of information inherent in resolving uncertainty about the causes of ambiguous cues or outcomes. Here, we apply active inference to saccadic searches of a visual scene. We consider the (difficult) problem of categorizing a scene, based on the spatial relationship among visual objects where, crucially, visual cues are sampled myopically through a sequence of saccadic eye movements. This means that evidence for competing hypotheses about the scene has to be accumulated sequentially, calling upon both prediction (planning) and postdiction (memory). Our aim is to highlight some simple but fundamental aspects of the requisite functional anatomy; namely, the link between approximate Bayesian inference under mean field assumptions and functional segregation in the visual cortex. This link rests upon the (neurobiologically plausible) process theory that accompanies the normative formulation of active inference for Markov decision processes. In future work, we hope to use this scheme to model empirical saccadic searches and identify the prior beliefs that underwrite intersubject variability in the way people forage for information in visual scenes (e.g., in schizophrenia). PMID:27378899

  12. Exploiting spatial descriptions in visual scene analysis.

    PubMed

    Ziegler, Leon; Johannsen, Katrin; Swadzba, Agnes; De Ruiter, Jan P; Wachsmuth, Sven

    2012-08-01

    The reliable automatic visual recognition of indoor scenes with complex object constellations using only sensor data is a nontrivial problem. In order to improve the construction of an accurate semantic 3D model of an indoor scene, we exploit human-produced verbal descriptions of the relative location of pairs of objects. This requires the ability to deal with different spatial reference frames (RF) that humans use interchangeably. In German, both the intrinsic and relative RF are used frequently, which often leads to ambiguities in referential communication. We assume that there are certain regularities that help in specific contexts. In a first experiment, we investigated how speakers of German describe spatial relationships between different pieces of furniture. This gave us important information about the distribution of the RFs used for furniture-predicate combinations, and by implication also about the preferred spatial predicate. The results of this experiment are compiled into a computational model that extracts partial orderings of spatial arrangements between furniture items from verbal descriptions. In the implemented system, the visual scene is initially scanned by a 3D camera system. From the 3D point cloud, we extract point clusters that suggest the presence of certain furniture objects. We then integrate the partial orderings extracted from the verbal utterances incrementally and cumulatively with the estimated probabilities about the identity and location of objects in the scene, and also estimate the probable orientation of the objects. This allows the system to significantly improve both the accuracy and richness of its visual scene representation.

  13. Moving through a multiplex holographic scene

    NASA Astrophysics Data System (ADS)

    Mrongovius, Martina

    2013-02-01

    This paper explores how movement can be used as a compositional element in installations of multiplex holograms. My holographic images are created from montages of hand-held video and photo-sequences. These spatially dynamic compositions are visually complex but anchored to landmarks and hints of the capturing process - such as the appearance of the photographer's shadow - to establish a sense of connection to the holographic scene. Moving around in front of the hologram, the viewer animates the holographic scene. A perception of motion then results from the viewer's bodily awareness of physical motion and the visual reading of dynamics within the scene or movement of perspective through a virtual suggestion of space. By linking and transforming the physical motion of the viewer with the visual animation, the viewer's bodily awareness - including proprioception, balance and orientation - play into the holographic composition. How multiplex holography can be a tool for exploring coupled, cross-referenced and transformed perceptions of movement is demonstrated with a number of holographic image installations. Through this process I expanded my creative composition practice to consider how dynamic and spatial scenes can be conveyed through the fragmented view of a multiplex hologram. This body of work was developed through an installation art practice and was the basis of my recently completed doctoral thesis: 'The Emergent Holographic Scene — compositions of movement and affect using multiplex holographic images'.

  14. Space and Time Scale Characterization of Image Data in Varying Environmental Conditions for Better Scene Understanding

    DTIC Science & Technology

    2015-09-01

    field of view, depth of view, image resolution, pixel size, pixel separation, color matrix size, scene color or shading variations as a function of...environmental and weather conditions, the field of view, depth of view, and image resolution, as noted above. Table 2 provides a list of several space...field of view, and depth of view. Together with the environmental effects, these data can be used as a basic building block for the analysis of

  15. Improving semantic scene understanding using prior information

    NASA Astrophysics Data System (ADS)

    Laddha, Ankit; Hebert, Martial

    2016-05-01

    Perception for ground robot mobility requires automatic generation of descriptions of the robot's surroundings from sensor input (cameras, LADARs, etc.). Effective techniques for scene understanding have been developed, but they are generally purely bottom-up in that they rely entirely on classifying features from the input data based on learned models. In fact, perception systems for ground robots have a lot of information at their disposal from knowledge about the domain and the task. For example, a robot in urban environments might have access to approximate maps that can guide the scene interpretation process. In this paper, we explore practical ways to combine such prior information with state of the art scene understanding approaches.
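
    One simple way to combine prior information with a bottom-up classifier, assuming the two sources are conditionally independent, is per-pixel Bayesian fusion of the classifier's class likelihoods with a map-derived prior. This is an illustrative sketch, not the method of the paper:

```python
import numpy as np

def fuse_with_prior(likelihood, prior):
    """Combine per-pixel class likelihoods from a bottom-up classifier with
    a prior label distribution (e.g., from an approximate map), assuming
    conditional independence: posterior proportional to likelihood * prior.
    Both arrays have shape (..., n_classes)."""
    post = likelihood * prior
    return post / post.sum(axis=-1, keepdims=True)
```

    When the classifier is uninformative (uniform likelihood), the posterior simply reproduces the map prior; a confident classifier can override a weak prior.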

  16. Global Ensemble Texture Representations are Critical to Rapid Scene Perception.

    PubMed

    Brady, Timothy F; Shafer-Skelton, Anna; Alvarez, George A

    2017-03-06

    Traditionally, recognizing the objects within a scene has been treated as a prerequisite to recognizing the scene itself. However, research now suggests that the ability to rapidly recognize visual scenes could be supported by global properties of the scene itself rather than the objects within the scene. Here, we argue for a particular instantiation of this view: That scenes are recognized by treating them as a global texture and processing the pattern of orientations and spatial frequencies across different areas of the scene without recognizing any objects. To test this model, we asked whether there is a link between how proficient individuals are at rapid scene perception and how proficiently they represent simple spatial patterns of orientation information (global ensemble texture). We find a significant and selective correlation between these tasks, suggesting a link between scene perception and spatial ensemble tasks but not nonspatial summary statistics. In a second and third experiment, we additionally show that global ensemble texture information is not only associated with scene recognition, but that preserving only global ensemble texture information from scenes is sufficient to support rapid scene perception; however, preserving the same information is not sufficient for object recognition. Thus, global ensemble texture alone is sufficient to allow activation of scene representations but not object representations. Together, these results provide evidence for a view of scene recognition based on global ensemble texture rather than a view based purely on objects or on nonspatially localized global properties.
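
    A crude stand-in for "global ensemble texture" is to pool gradient-orientation energy into a coarse spatial grid, capturing the pattern of orientations across scene regions without segmenting any objects. The grid size and orientation bin count below are arbitrary illustrative choices:

```python
import numpy as np

def ensemble_texture(img, grid=4, n_orient=8):
    """Pool gradient-orientation energy into a grid x grid spatial layout:
    for each cell, sum gradient magnitude per orientation bin. Assumes a
    2D grayscale image; returns a flat feature vector."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_orient).astype(int), n_orient - 1)
    h, w = img.shape
    feats = np.zeros((grid, grid, n_orient))
    for i in range(grid):
        for j in range(grid):
            rs = slice(i * h // grid, (i + 1) * h // grid)
            cs = slice(j * w // grid, (j + 1) * w // grid)
            for o in range(n_orient):
                feats[i, j, o] = mag[rs, cs][bins[rs, cs] == o].sum()
    return feats.ravel()
```

    Comparing such vectors across images gives a purely texture-based scene similarity, with no object recognition involved.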

  17. The Temporal Dynamics of Scene Processing: A Multifaceted EEG Investigation

    PubMed Central

    Kravitz, Dwight J.

    2016-01-01

    Our remarkable ability to process complex visual scenes is supported by a network of scene-selective cortical regions. Despite growing knowledge about the scene representation in these regions, much less is known about the temporal dynamics with which these representations emerge. We conducted two experiments aimed at identifying and characterizing the earliest markers of scene-specific processing. In the first experiment, human participants viewed images of scenes, faces, and everyday objects while event-related potentials (ERPs) were recorded. We found that the first ERP component to evince a significantly stronger response to scenes than the other categories was the P2, peaking ∼220 ms after stimulus onset. To establish that the P2 component reflects scene-specific processing, in the second experiment, we recorded ERPs while the participants viewed diverse real-world scenes spanning the following three global scene properties: spatial expanse (open/closed), relative distance (near/far), and naturalness (man-made/natural). We found that P2 amplitude was sensitive to these scene properties at both the categorical level, distinguishing between open and closed natural scenes, as well as at the single-image level, reflecting both computationally derived scene statistics and behavioral ratings of naturalness and spatial expanse. Together, these results establish the P2 as an ERP marker for scene processing, and demonstrate that scene-specific global information is available in the neural response as early as 220 ms. PMID:27699208

  18. The Temporal Dynamics of Scene Processing: A Multifaceted EEG Investigation.

    PubMed

    Harel, Assaf; Groen, Iris I A; Kravitz, Dwight J; Deouell, Leon Y; Baker, Chris I

    2016-01-01

    Our remarkable ability to process complex visual scenes is supported by a network of scene-selective cortical regions. Despite growing knowledge about the scene representation in these regions, much less is known about the temporal dynamics with which these representations emerge. We conducted two experiments aimed at identifying and characterizing the earliest markers of scene-specific processing. In the first experiment, human participants viewed images of scenes, faces, and everyday objects while event-related potentials (ERPs) were recorded. We found that the first ERP component to evince a significantly stronger response to scenes than the other categories was the P2, peaking ∼220 ms after stimulus onset. To establish that the P2 component reflects scene-specific processing, in the second experiment, we recorded ERPs while the participants viewed diverse real-world scenes spanning the following three global scene properties: spatial expanse (open/closed), relative distance (near/far), and naturalness (man-made/natural). We found that P2 amplitude was sensitive to these scene properties at both the categorical level, distinguishing between open and closed natural scenes, as well as at the single-image level, reflecting both computationally derived scene statistics and behavioral ratings of naturalness and spatial expanse. Together, these results establish the P2 as an ERP marker for scene processing, and demonstrate that scene-specific global information is available in the neural response as early as 220 ms.

  19. Emotional Television Scenes and Hemispheric Specialization.

    ERIC Educational Resources Information Center

    Reeves, Byron; And Others

    1989-01-01

    Examines hemispheric differences in cortical arousal as a function of positive and negative emotional television scenes. Finds that (1) the processing of emotional content is hemispherically asymmetric; and (2) negative material produced greater cortical arousal in the right hemisphere and positive material greater arousal in the left. (MS)

  20. Scene Parsing From an MAP Perspective.

    PubMed

    Li, Xuelong; Mou, Lichao; Lu, Xiaoqiang

    2015-09-01

    Scene parsing is an important problem in the field of computer vision. Though many existing scene parsing approaches have obtained encouraging results, they fail to overcome within-category inconsistency and intercategory similarity of superpixels. To reduce these problems, a novel method is proposed in this paper. The proposed approach consists of three main steps: 1) posterior category probability density function (PDF) is learned by an efficient low-rank representation classifier (LRRC); 2) prior contextual constraint PDF on the map of pixel categories is learned by Markov random fields; and 3) final parsing results are yielded up to the maximum a posterior process based on the two learned PDFs. In this case, the nature of being both dense for within-category affinities and almost zeros for intercategory affinities is integrated into our approach by using LRRC to model the posterior category PDF. Meanwhile, the contextual prior generated by modeling the prior contextual constraint PDF helps to improve the performance of scene parsing. Experiments on benchmark datasets show that the proposed approach outperforms the state-of-the-art approaches for scene parsing.
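
The general shape of such a MAP pipeline (a learned per-pixel posterior combined with a smoothness prior, then maximized) can be caricatured with a minimal iterated-conditional-modes sketch. This is not the authors' method: the unary scores below are arbitrary stand-ins for the LRRC posterior, and the Potts term stands in for the learned MRF prior.

```python
import numpy as np

def icm_map(unary, beta=1.0, iters=5):
    """Approximate MAP labeling by iterated conditional modes (ICM):
    unary[i, j, k] stands in for the log posterior of label k at pixel
    (i, j); a Potts smoothness term weighted by beta plays the role of
    the contextual MRF prior."""
    h, w, k = unary.shape
    labels = unary.argmax(axis=2)              # start from the posterior alone
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                best, best_score = labels[i, j], -np.inf
                for c in range(k):
                    score = unary[i, j, c]
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            score += beta * (labels[ni, nj] == c)
                    if score > best_score:
                        best, best_score = c, score
                labels[i, j] = best
    return labels

# One noisy pixel prefers class 1; the smoothness prior should flip it back.
unary = np.zeros((8, 8, 2))
unary[..., 0] = 1.0
unary[4, 4] = [0.0, 3.0]
labels = icm_map(unary, beta=1.0)
```

With `beta=0` the prior is switched off and the noisy pixel keeps its label, which is exactly the within-category inconsistency the prior is meant to suppress.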

  1. Nonparametric Scene Parsing via Label Transfer.

    PubMed

    Liu, Ce; Yuen, Jenny; Torralba, Antonio

    2011-12-01

    While there has been a lot of recent work on object recognition and image understanding, the focus has been on carefully establishing mathematical models for images, scenes, and objects. In this paper, we propose a novel, nonparametric approach for object recognition and scene parsing using a new technology we name label transfer. For an input image, our system first retrieves its nearest neighbors from a large database containing fully annotated images. Then, the system establishes dense correspondences between the input image and each of the nearest neighbors using the dense SIFT flow algorithm [28], which aligns two images based on local image structures. Finally, based on the dense scene correspondences obtained from SIFT flow, our system warps the existing annotations and integrates multiple cues in a Markov random field framework to segment and recognize the query image. Promising experimental results have been achieved by our nonparametric scene parsing system on challenging databases. Compared to existing object recognition approaches that require training classifiers or appearance models for each object category, our system is easy to implement, has few parameters, and embeds contextual information naturally in the retrieval/alignment procedure.

  2. Stacked Learning to Search for Scene Labeling.

    PubMed

    Cheng, Feiyang; He, Xuming; Zhang, Hong

    2017-02-13

    Search-based structured prediction methods have shown promising successes in both computer vision and natural language processing recently. However, most existing search-based approaches lead to a complex multi-stage learning process, which is ill-suited for scene labeling problems with a high-dimensional output space. In this paper, a stacked learning to search method is proposed to address scene labeling tasks. We design a simplified search process consisting of a sequence of ranking functions, which are learned based on a stacked learning strategy to prevent over-fitting. Our method is able to encode rich prior knowledge by incorporating a variety of local and global scene features. In addition, we estimate a labeling confidence map to further improve the search efficiency from two aspects: first, it constrains the search space more effectively by pruning out low-quality solutions based on confidence scores; second, we employ the confidence map as an additional ranking feature to improve its prediction performance and thus reduce the search steps. Our approach is evaluated on both semantic segmentation and geometric labeling tasks, including the Stanford Background, Sift Flow, Geometric Context and NYUv2 RGB-D dataset. The competitive results demonstrate that our stacked learning to search method provides an effective alternative paradigm for scene labeling.

  3. Processing of Unattended Emotional Visual Scenes

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Nummenmaa, Lauri

    2007-01-01

    Prime pictures of emotional scenes appeared in parafoveal vision, followed by probe pictures either congruent or incongruent in affective valence. Participants responded whether the probe was pleasant or unpleasant (or whether it portrayed people or animals). Shorter latencies for congruent than for incongruent prime-probe pairs revealed affective…

  4. Augustus De Morgan behind the Scenes

    ERIC Educational Resources Information Center

    Simmons, Charlotte

    2011-01-01

    Augustus De Morgan's support was crucial to the achievements of the four mathematicians whose work is considered greater than his own. This article explores the contributions he made to mathematics from behind the scenes by supporting the work of Hamilton, Boole, Gompertz, and Ramchundra.

  5. Parafoveal Semantic Processing of Emotional Visual Scenes

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Lang, Peter J.

    2005-01-01

    The authors investigated whether emotional pictorial stimuli are especially likely to be processed in parafoveal vision. Pairs of emotional and neutral visual scenes were presented parafoveally (2.1[degrees] or 2.5[degrees] of visual angle from a central fixation point) for 150-3,000 ms, followed by an immediate recognition test (500-ms delay).…

  6. Aerial Scene Recognition using Efficient Sparse Representation

    SciTech Connect

    Cheriyadat, Anil M

    2012-01-01

    Advanced scene recognition systems for processing large volumes of high-resolution aerial image data are in great demand today. However, automated scene recognition remains a challenging problem. Efficient encoding and representation of spatial and structural patterns in the imagery are key in developing automated scene recognition algorithms. We describe an image representation approach that uses simple and computationally efficient sparse code computation to generate accurate features capable of producing excellent classification performance using linear SVM kernels. Our method exploits unlabeled low-level image feature measurements to learn a set of basis vectors. We project the low-level features onto the basis vectors and use a simple soft-threshold activation function to derive the sparse features. The proposed technique generates sparse features at a significantly lower computational cost than other methods [Yang10, newsam11], yet it produces comparable or better classification accuracy. We apply our technique to high-resolution aerial image datasets to quantify the aerial scene classification performance. We demonstrate that the dense feature extraction and representation methods are highly effective for automatic large-facility detection on wide area high-resolution aerial imagery.
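
The projection-plus-soft-threshold step the abstract describes can be sketched as follows. Random data and a random unit-norm basis stand in for the learned basis vectors; the function name and threshold value are made up, and this is not the authors' code.

```python
import numpy as np

def sparse_features(X, B, theta=0.5):
    """Soft-threshold sparse coding sketch: project low-level features X
    (n x d) onto basis vectors B (k x d), then apply the soft-threshold
    activation sign(z) * max(|z| - theta, 0)."""
    Z = X @ B.T                                        # projections onto the basis
    return np.sign(Z) * np.maximum(np.abs(Z) - theta, 0.0)

rng = np.random.default_rng(1)
B = rng.normal(size=(16, 8))
B /= np.linalg.norm(B, axis=1, keepdims=True)          # unit-norm basis vectors
X = rng.normal(size=(10, 8))
F = sparse_features(X, B, theta=1.0)
```

The threshold is what buys the low cost: most small projections are zeroed outright, so the resulting feature matrix is sparse and cheap to feed into a linear SVM.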

  7. Partially sparse imaging of stationary indoor scenes

    NASA Astrophysics Data System (ADS)

    Ahmad, Fauzia; Amin, Moeness G.; Dogaru, Traian

    2014-12-01

    In this paper, we exploit the notion of partial sparsity for scene reconstruction associated with through-the-wall radar imaging of stationary targets under reduced data volume. Partial sparsity implies that the scene being imaged consists of a sparse part and a dense part, with the support of the latter assumed to be known. For the problem at hand, sparsity is represented by a few stationary indoor targets, whereas the high scene density is defined by exterior and interior walls. Prior knowledge of wall positions and extent may be available either through building blueprints or from prior surveillance operations. The contributions of the exterior and interior walls are removed from the data through the use of projection matrices, which are determined from wall- and corner-specific dictionaries. The projected data, with enhanced sparsity, is then processed using l 1 norm reconstruction techniques. Numerical electromagnetic data is used to demonstrate the effectiveness of the proposed approach for imaging stationary indoor scenes using a reduced set of measurements.
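
The project-out-then-recover idea can be sketched with a toy simulation. Everything here is invented for illustration: the paper's wall- and corner-specific dictionaries are replaced by a random matrix with known support, and ISTA is used as one standard choice of l1 solver.

```python
import numpy as np

# Toy model: y = A_wall @ x_wall + A_tgt @ x_tgt, with the wall support
# known a priori. Project the data onto the orthogonal complement of the
# wall dictionary, then recover the sparse target part by l1 minimization.
rng = np.random.default_rng(2)
m, n_wall, n_tgt = 64, 4, 40
A_wall = rng.normal(size=(m, n_wall))
A_tgt = rng.normal(size=(m, n_tgt)) / np.sqrt(m)
x_tgt = np.zeros(n_tgt)
x_tgt[[3, 17]] = [2.0, -1.5]                       # two stationary targets
y = A_wall @ rng.normal(size=n_wall) + A_tgt @ x_tgt

# Orthogonal projector that removes the (dense) wall contribution
P = np.eye(m) - A_wall @ np.linalg.pinv(A_wall)
yp, Ap = P @ y, P @ A_tgt

# ISTA: soft-thresholded gradient steps minimizing
# 0.5 * ||yp - Ap x||^2 + lam * ||x||_1
lam, L = 0.05, np.linalg.norm(Ap, 2) ** 2
x = np.zeros(n_tgt)
for _ in range(500):
    g = x + (Ap.T @ (yp - Ap @ x)) / L
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
```

After the projection the residual data is genuinely sparse, so the l1 step recovers the target support even though the wall returns dominated the raw measurements.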

  8. Extracting text from real-world scenes

    NASA Technical Reports Server (NTRS)

    Bixler, J. Patrick; Miller, David P.

    1989-01-01

    Many scenes contain significant textual information that can be extremely helpful for understanding and/or navigation. For example, text-based information can frequently be the primary cue used for navigating inside buildings. A subject might first read a marquee, then look for an appropriate hallway and walk along reading door signs and nameplates until the destination is found. Optical character recognition has been studied extensively in recent years, but has been applied almost exclusively to printed documents. As these techniques improve, it becomes reasonable to ask whether they can be applied to an arbitrary scene in an attempt to extract text-based information. Before an automated system can be expected to navigate by reading signs, however, the text must first be segmented from the rest of the scene. This paper discusses the feasibility of extracting text from an arbitrary scene and using that information to guide the navigation of a mobile robot. Considered are some simple techniques for first locating text components and then tracking the individual characters to form words and phrases. Results for some sample images are also presented.

  9. Optical Neural Nets for Scene Analysis

    DTIC Science & Technology

    1989-10-23

    Report DARPA/SEMI/1089: Optical Neural Nets for Scene Analysis. W. David Casasent (PI), Carnegie Mellon University.

  10. Remote Dynamic Three-Dimensional Scene Reconstruction

    PubMed Central

    Yang, You; Liu, Qiong; Ji, Rongrong; Gao, Yue

    2013-01-01

    Remote dynamic three-dimensional (3D) scene reconstruction renders the motion structure of a 3D scene remotely by means of both the color video and the corresponding depth maps. It has shown a great potential for telepresence applications like remote monitoring and remote medical imaging. Under this circumstance, video-rate and high resolution are two crucial characteristics for building a good depth map, which however mutually contradict during the depth sensor capturing. Therefore, recent works prefer to only transmit the high-resolution color video to the terminal side, and subsequently the scene depth is reconstructed by estimating the motion vectors from the video, typically using the propagation based methods towards a video-rate depth reconstruction. However, in most of the remote transmission systems, only the compressed color video stream is available. As a result, color video restored from the streams has quality losses, and thus the extracted motion vectors are inaccurate for depth reconstruction. In this paper, we propose a precise and robust scheme for dynamic 3D scene reconstruction by using the compressed color video stream and their inaccurate motion vectors. Our method rectifies the inaccurate motion vectors by analyzing and compensating their quality losses, motion vector absence in spatial prediction, and dislocation in near-boundary region. This rectification ensures the depth maps can be compensated in both video-rate and high resolution at the terminal side towards reducing the system consumption on both the compression and transmission. Our experiments validate that the proposed scheme is robust for depth map and dynamic scene reconstruction on long propagation distance, even with high compression ratio, outperforming the benchmark approaches with at least 3.3950 dB quality gains for remote applications. PMID:23667417

  11. A graph theoretic approach to scene matching

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1991-01-01

    The ability to match two scenes is a fundamental requirement in a variety of computer vision tasks. A graph theoretic approach to inexact scene matching is presented which is useful in dealing with problems due to imperfect image segmentation. A scene is described by a set of graphs, with nodes representing objects and arcs representing relationships between objects. Each node has a set of values representing the relations between pairs of objects, such as angle, adjacency, or distance. With this method of scene representation, the task in scene matching is to match two sets of graphs. Because of segmentation errors, variations in camera angle, illumination, and other conditions, an exact match between the sets of observed and stored graphs is usually not possible. In the developed approach, the problem is represented as an association graph, in which each node represents a possible mapping of an observed region to a stored object, and each arc represents the compatibility of two mappings. Nodes and arcs have weights indicating the merit of a region-object mapping and the degree of compatibility between two mappings. A match between the two graphs corresponds to a clique, or fully connected subgraph, in the association graph. The task is to find the clique that represents the best match. Fuzzy relaxation is used to update the node weights using the contextual information contained in the arcs and neighboring nodes. This simplifies the evaluation of cliques. A method of handling oversegmentation and undersegmentation problems is also presented. The approach is tested with a set of realistic images which exhibit many types of segmentation errors.
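
A toy sketch of the association-graph formulation: the regions, objects, merit weights, the binary compatibility rule, and the simple averaging relaxation below are all invented for illustration, and brute-force enumeration replaces a real clique-finding algorithm.

```python
import itertools

# Nodes are (region, object) mappings with merit weights; arcs carry
# pairwise compatibility in [0, 1]. A relaxation step pulls each node's
# weight toward the support it gets from its most compatible neighbor,
# then the best fully compatible set of nodes (a clique) is the match.
merit = {('r1', 'door'): 0.9, ('r1', 'window'): 0.4,
         ('r2', 'window'): 0.8, ('r2', 'door'): 0.3}

def compat(a, b):
    # Each region maps to one object and vice versa (0 = incompatible)
    return 0.0 if a[0] == b[0] or a[1] == b[1] else 1.0

def relax(weights, iters=3):
    w = dict(weights)
    for _ in range(iters):
        w = {n: 0.5 * w[n] + 0.5 * max(compat(n, m) * w[m]
             for m in w if m != n) for n in w}
    return w

def best_clique(weights):
    nodes = list(weights)
    best, best_score = (), 0.0
    for r in range(1, len(nodes) + 1):
        for cand in itertools.combinations(nodes, r):
            if all(compat(a, b) > 0 for a, b in itertools.combinations(cand, 2)):
                s = sum(weights[n] for n in cand)
                if s > best_score:
                    best, best_score = cand, s
    return set(best)

match = best_clique(relax(merit))
```

The relaxation reinforces mutually supporting mappings before the clique search, which is the mechanism the abstract credits with simplifying clique evaluation.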

  12. Out of Mind, Out of Sight: Unexpected Scene Elements Frequently Go Unnoticed Until Primed

    PubMed Central

    Zimbardo, Philip G.

    2013-01-01

    The human visual system employs a sophisticated set of strategies for scanning the environment and directing attention to stimuli that can be expected given the context and a person’s past experience. Although these strategies enable us to navigate a very complex physical and social environment, they can also cause highly salient, but unexpected stimuli to go completely unnoticed. To examine the generality of this phenomenon, we conducted eight studies that included 15 different experimental conditions and 1,577 participants in all. These studies revealed that a large majority of participants do not report having seen a woman in the center of an urban scene who was photographed in midair as she was committing suicide. Despite seeing the scene repeatedly, 46 % of all participants failed to report seeing a central figure and only 4.8 % reported seeing a falling person. Frequency of noticing the suicidal woman was highest for participants who read a narrative priming story that increased the extent to which she was schematically congruent with the scene. In contrast to this robust effect of inattentional blindness, a majority of participants reported seeing other peripheral objects in the visual scene that were equally difficult to detect, yet more consistent with the scene. Follow-up qualitative analyses revealed that participants reported seeing many elements that were not actually present, but which could have been expected given the overall context of the scene. Together, these findings demonstrate the robustness of inattentional blindness and highlight the specificity with which different visual primes may increase noticing behavior. PMID:24363542

  13. Scene identification probabilities for evaluating radiation flux errors due to scene misidentification

    NASA Technical Reports Server (NTRS)

    Manalo, Natividad D.; Smith, G. L.

    1991-01-01

    The scene identification probabilities (Pij) are fundamentally important in evaluations of the top-of-the-atmosphere (TOA) radiation-flux errors due to the scene misidentification. In this paper, the scene identification error probabilities were empirically derived from data collected in 1985 by the Earth Radiation Budget Experiment (ERBE) scanning radiometer when the ERBE satellite and the NOAA-9 spacecraft were rotated so as to scan alongside during brief periods in January and August 1985. Radiation-flux error computations utilizing these probabilities were performed, using orbit specifications for the ERBE, the Cloud and Earth's Radiant Energy System (CERES), and the SCARAB missions for a scene that was identified as partly cloudy over ocean. Typical values of the standard deviation of the random shortwave error were on the order of 1.5-5 W/sq m, but could reach values as high as 18.0 W/sq m as computed from NOAA-9.
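
The role of the Pij matrix can be illustrated with a small calculation. All scene categories, flux values, and probabilities below are hypothetical, chosen only to show the mechanics, not ERBE values.

```python
# Given scene identification probabilities P[i][j] = P(identified as j |
# actual scene i) and scene-dependent flux estimates F[j], the expected
# flux error (bias) for an actual scene i is sum_j P[i][j] * (F[j] - F[i]).
scenes = ['clear_ocean', 'partly_cloudy_ocean', 'overcast_ocean']
F = {'clear_ocean': 80.0, 'partly_cloudy_ocean': 120.0, 'overcast_ocean': 180.0}
P = {  # rows sum to 1; the off-diagonal mass is misidentification
    'clear_ocean':         {'clear_ocean': 0.90, 'partly_cloudy_ocean': 0.10, 'overcast_ocean': 0.00},
    'partly_cloudy_ocean': {'clear_ocean': 0.05, 'partly_cloudy_ocean': 0.85, 'overcast_ocean': 0.10},
    'overcast_ocean':      {'clear_ocean': 0.00, 'partly_cloudy_ocean': 0.15, 'overcast_ocean': 0.85},
}

def expected_flux_error(actual):
    return sum(P[actual][j] * (F[j] - F[actual]) for j in scenes)

bias = {s: expected_flux_error(s) for s in scenes}
```

The same probabilities feed a second moment, sum_j P[i][j] * (F[j] - F[i])**2, whose square root is the kind of random-error standard deviation the abstract quotes.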

  14. Detecting and representing predictable structure during auditory scene analysis.

    PubMed

    Sohoglu, Ediz; Chait, Maria

    2016-09-07

    We use psychophysics and MEG to test how sensitivity to input statistics facilitates auditory-scene-analysis (ASA). Human subjects listened to 'scenes' comprised of concurrent tone-pip streams (sources). On occasional trials a new source appeared partway. Listeners were more accurate and quicker to detect source appearance in scenes comprised of temporally-regular (REG), rather than random (RAND), sources. MEG in passive listeners and those actively detecting appearance events revealed increased sustained activity in auditory and parietal cortex in REG relative to RAND scenes, emerging ~400 ms after scene onset. Over and above this, appearance in REG scenes was associated with increased responses relative to RAND scenes. The effect of temporal structure on appearance-evoked responses was delayed when listeners were focused on the scenes relative to when listening passively, consistent with the notion that attention reduces 'surprise'. Overall, the results implicate a mechanism that tracks predictability of multiple concurrent sources to facilitate active and passive ASA.

  15. Iconic memory for the gist of natural scenes.

    PubMed

    Clarke, Jason; Mack, Arien

    2014-11-01

    Does iconic memory contain the gist of multiple scenes? Three experiments were conducted. In the first, four scenes from different basic-level categories were briefly presented in one of two conditions: a cue or a no-cue condition. The cue condition was designed to provide an index of the contents of iconic memory of the display. Subjects were more sensitive to scene gist in the cue condition than in the no-cue condition. In the second, the scenes came from the same basic-level category. We found no difference in sensitivity between the two conditions. In the third, six scenes from different basic level categories were presented in the visual periphery. Subjects were more sensitive to scene gist in the cue condition. These results suggest that scene gist is contained in iconic memory even in the visual periphery; however, iconic representations are not sufficiently detailed to distinguish between scenes coming from the same category.

  16. The Clutterpalette: An Interactive Tool for Detailing Indoor Scenes.

    PubMed

    Yu, Lap-Fai; Yeung, Sai-Kit; Terzopoulos, Demetri

    2016-02-01

    We introduce the Clutterpalette, an interactive tool for detailing indoor scenes with small-scale items. When the user points to a location in the scene, the Clutterpalette suggests detail items for that location. In order to present appropriate suggestions, the Clutterpalette is trained on a dataset of images of real-world scenes, annotated with support relations. Our experiments demonstrate that the adaptive suggestions presented by the Clutterpalette increase modeling speed and enhance the realism of indoor scenes.

  17. Crime Scene Intelligence. An Experiment in Forensic Entomology

    DTIC Science & Technology

    2006-11-01

    Crime Scene Intelligence: An Experiment in Forensic Entomology. Albert M. Cruz, Lieutenant, USN. Occasional Paper Number Twelve, National Defense Intelligence College.

  18. Scene and Position Specificity in Visual Memory for Objects

    ERIC Educational Resources Information Center

    Hollingworth, Andrew

    2006-01-01

    This study investigated whether and how visual representations of individual objects are bound in memory to scene context. Participants viewed a series of naturalistic scenes, and memory for the visual form of a target object in each scene was examined in a 2-alternative forced-choice test, with the distractor object either a different object…

  19. The influence of natural scene dynamics on auditory cortical activity.

    PubMed

    Chandrasekaran, Chandramouli; Turesson, Hjalmar K; Brown, Charles H; Ghazanfar, Asif A

    2010-10-20

    The efficient cortical encoding of natural scenes is essential for guiding adaptive behavior. Because natural scenes and network activity in cortical circuits share similar temporal scales, it is necessary to understand how the temporal structure of natural scenes influences network dynamics in cortical circuits and spiking output. We examined the relationship between the structure of natural acoustic scenes and its impact on network activity [as indexed by local field potentials (LFPs)] and spiking responses in macaque primary auditory cortex. Natural auditory scenes led to a change in the power of the LFP in the 2-9 and 16-30 Hz frequency ranges relative to the ongoing activity. In contrast, ongoing rhythmic activity in the 9-16 Hz range was essentially unaffected by the natural scene. Phase coherence analysis showed that scene-related changes in LFP power were at least partially attributable to the locking of the LFP and spiking activity to the temporal structure in the scene, with locking extending up to 25 Hz for some scenes and cortical sites. Consistent with distributed place and temporal coding schemes, a key predictor of phase locking and power changes was the overlap between the spectral selectivity of a cortical site and the spectral structure of the scene. Finally, during the processing of natural acoustic scenes, spikes were locked to LFP phase at frequencies up to 30 Hz. These results are consistent with an idea that the cortical representation of natural scenes emerges from an interaction between network activity and stimulus dynamics.

  20. High-Level Aftereffects to Global Scene Properties

    ERIC Educational Resources Information Center

    Greene, Michelle R.; Oliva, Aude

    2010-01-01

    Adaptation is ubiquitous in the human visual system, allowing recalibration to the statistical regularities of its input. Previous work has shown that global scene properties such as openness and mean depth are informative dimensions of natural scene variation useful for human and machine scene categorization (Greene & Oliva, 2009b; Oliva…

  1. Combining MMW radar and radiometer images for enhanced characterization of scenes

    NASA Astrophysics Data System (ADS)

    Peichl, Markus; Dill, Stephan

    2016-05-01

    For several years, the use of active (radar) and passive (radiometer) MMW remote sensing has been considered an appropriate tool for many security-related applications, such as personnel screening for the detection of objects concealed under clothing, or enhanced vision for vehicles or aircraft, to mention just a few examples. Radars, having a transmitter for scene illumination and a receiver for echo recording, are basically range-measuring devices which additionally deliver information about a target's reflectivity behavior. Radiometers, having only a receiver to record natural thermal radiation power, typically provide emission and reflection properties of a scene, using the environment and the cosmic background radiation as a natural illumination source. Consequently, the active and passive signatures of a scene and its objects are quite different, depending on the target, its scattering characteristics, and the actual illumination properties. Technology providers typically work either purely on radar or purely on radiometers for gathering information about a scene of interest. Only rarely are both information sources combined for enhanced information extraction, and even then the sensors' imaging geometries usually do not match well enough for the benefit of the combination to be fully exploited. Consequently, investigations on adequate combinations of MMW radar and radiometer data have been performed. A mechanical scanner used in earlier experiments on personnel screening was modified to provide a similar imaging geometry for a Ka-band radiometer and a K-band radar. First experimental results are shown and discussed.

  2. Worth a quick look? Initial scene previews can guide eye movements as a function of domain-specific expertise but can also have unforeseen costs.

    PubMed

    Litchfield, Damien; Donovan, Tim

    2016-07-01

    Rapid scene recognition is a global visual process we can all exploit to guide search. This ability is thought to underpin expertise in medical image perception yet there is no direct evidence that isolates the expertise-specific contribution of processing scene previews on subsequent eye movement performance. We used the flash-preview moving window paradigm (Castelhano & Henderson, 2007) to investigate this issue. Expert radiologists and novice observers underwent 2 experiments whereby participants viewed a 250-ms scene preview or a mask before searching for a target. Observers looked for everyday objects from real-world scenes (Experiment 1), and searched for lung nodules from medical images (Experiment 2). Both expertise groups exploited the brief preview of the upcoming scene to more efficiently guide windowed search in Experiment 1, but there was only a weak effect of domain-specific expertise in Experiment 2, with experts showing small improvements in search metrics with scene previews. Expert diagnostic performance was better than novices in all conditions but was not contingent on seeing the scene preview, and scene preview actually impaired novice diagnostic performance. Experiment 3 required novice and experienced observers to search for a variety of abnormalities from different medical images. Rather than maximizing the expertise-specific advantage of processing scene previews, both novices and experienced radiographers were worse at detecting abnormalities with scene previews. We discuss how restricting access to the initial glimpse can be compensated for by subsequent search and discovery processing, but there can still be costs in integrating a fleeting glimpse of a medical scene.

  3. Memory, scene construction, and the human hippocampus.

    PubMed

    Kim, Soyun; Dede, Adam J O; Hopkins, Ramona O; Squire, Larry R

    2015-04-14

    We evaluated two different perspectives about the function of the human hippocampus--one that emphasizes the importance of memory and another that emphasizes the importance of spatial processing and scene construction. We gave tests of boundary extension, scene construction, and memory to patients with lesions limited to the hippocampus or large lesions of the medial temporal lobe. The patients were intact on all of the spatial tasks and impaired on all of the memory tasks. We discuss earlier studies that associated performance on these spatial tasks to hippocampal function. Our results demonstrate the importance of medial temporal lobe structures for memory and raise doubts about the idea that these structures have a prominent role in spatial cognition.

  4. Additional Crime Scenes for Projectile Motion Unit

    NASA Astrophysics Data System (ADS)

    Fullerton, Dan; Bonner, David

    2011-12-01

    Building students' ability to transfer physics fundamentals to real-world applications establishes a deeper understanding of underlying concepts while enhancing student interest. Forensic science offers a great opportunity for students to apply physics to highly engaging, real-world contexts. Integrating these opportunities into inquiry-based problem solving in a team environment provides a terrific backdrop for fostering communication, analysis, and critical thinking skills. One such activity, inspired jointly by the museum exhibit "CSI: The Experience"2 and David Bonner's TPT article "Increasing Student Engagement and Enthusiasm: A Projectile Motion Crime Scene,"3 provides students with three different crime scenes, each requiring an analysis of projectile motion. In this lesson students socially engage in higher-order analysis of two-dimensional projectile motion problems by collecting information from 3-D scale models and collaborating with one another on its interpretation, in addition to diagramming and mathematical analysis typical to problem solving in physics.
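
    A minimal worked example of the kind of analysis such a crime scene calls for: recovering the launch speed of a horizontally launched projectile from its fall height and horizontal range (drag neglected; the numbers are invented for illustration):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def launch_speed(height_m, range_m):
    """Speed of a horizontally launched projectile that falls height_m
    while travelling range_m downrange (air resistance neglected)."""
    t = math.sqrt(2 * height_m / G)   # fall time from h = (1/2) g t^2
    return range_m / t                # horizontal speed v = x / t

# e.g. evidence found 2.4 m from the base of a 1.2 m high table
v = launch_speed(1.2, 2.4)
print(f"launch speed = {v:.2f} m/s")  # prints 4.85 m/s
```

    Students can cross-check the result by varying the measured range within its uncertainty and observing how the inferred speed changes.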

  5. Memory, scene construction, and the human hippocampus

    PubMed Central

    Kim, Soyun; Dede, Adam J. O.; Hopkins, Ramona O.; Squire, Larry R.

    2015-01-01

    We evaluated two different perspectives on the function of the human hippocampus: one that emphasizes the importance of memory and another that emphasizes the importance of spatial processing and scene construction. We gave tests of boundary extension, scene construction, and memory to patients with lesions limited to the hippocampus or with large lesions of the medial temporal lobe. The patients were intact on all of the spatial tasks and impaired on all of the memory tasks. We discuss earlier studies that linked performance on these spatial tasks to hippocampal function. Our results demonstrate the importance of medial temporal lobe structures for memory and raise doubts about the idea that these structures have a prominent role in spatial cognition. PMID:25825712

  6. Vision Based SLAM in Dynamic Scenes

    DTIC Science & Technology

    2012-12-20

    ...cameras, while conventional studies are limited to a single camera (or a multi-camera rig where the relative positions between cameras are fixed). ... Our flexible configuration of cameras makes this algorithm applicable to robot teams, which also makes this study the world's first vision-based SLAM ... algorithm for robot teams. Furthermore, the collaboration among multiple cameras allows us to deal with challenging dynamic scenes which make most ...

  7. Setting the Scene: Climate Risk and Resilience

    DTIC Science & Technology

    2011-10-01

    Briefing by RADM Dave Titley, Director, Task Force Climate Change, Oceanographer of the Navy. Topics include adaptation partnership opportunities, a potential increase in Humanitarian Assistance/Disaster Response, and wild-cards such as ocean acidification and abrupt climate change.

  8. Text Detection and Translation from Natural Scenes

    DTIC Science & Technology

    2001-06-01

    We present a system for automatic extraction and interpretation of signs from a natural scene. A traveler may miss a sign in a foreign country, and a visually handicapped person can be in danger if he or she misses signs that specify warnings or hazards. In this research ... sign detection with OCR. The confidence of the sign extraction system can be improved by incorporating the OCR engine at an early stage.

  9. BASINS Technical Notes

    EPA Pesticide Factsheets

    EPA has developed several technical notes that provide in depth information on a specific function in BASINS. Technical notes can be used to answer questions users may have, or to provide additional information on the application of features in BASINS.

  10. Linguistic Theory and Actual Language.

    ERIC Educational Resources Information Center

    Segerdahl, Par

    1995-01-01

    Examines Noam Chomsky's (1957) discussion of "grammaticalness" and the role of linguistics in the "correct" way of speaking and writing. It is argued that the concern of linguistics with the tools of grammar has resulted in confusion, with the tools becoming mixed up with the actual language, thereby becoming the central…

  11. Lecture Notes on Multigrid Methods

    SciTech Connect

    Vassilevski, P S

    2010-06-28

    These lecture notes are primarily based on a sequence of lectures given by the author while he was a Fulbright scholar at 'St. Kliment Ohridski' University of Sofia, Sofia, Bulgaria, during the winter semester of the 2009-2010 academic year. The notes are a somewhat expanded version of the actual one-semester class he taught there. The material covered is a slightly modified and adapted version of similar topics covered in the author's monograph 'Multilevel Block-Factorization Preconditioners', published in 2008 by Springer. The author tried to keep the notes as self-contained as possible. That is why the lecture notes begin with some basic introductory matrix-vector linear algebra and numerical PDE (finite element) facts, emphasizing the relations between functions in finite-dimensional spaces and their coefficient vectors and respective norms. Then, some additional facts on the implementation of finite elements based on relation tables using the popular compressed sparse row (CSR) format are given. Typical condition number estimates of stiffness and mass matrices and the global matrix assembly from local element matrices are given as well. Finally, some basic introductory facts about stationary iterative methods, such as Gauss-Seidel and its symmetrized version, are presented. The introductory material ends with the smoothing property of the classical iterative methods and the main definition of two-grid iterative methods. The second part of the notes then deals with the various aspects of the principal two-grid (TG) method and the numerous versions of the multigrid (MG) cycles. At the end, in part III, algebraic versions of MG, referred to as AMG, are briefly introduced, focusing on classes of AMG specialized for finite element matrices.
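
    The two-grid structure the notes build toward (pre-smoothing, restriction of the residual, an approximate coarse solve, interpolation of the correction, post-smoothing) can be sketched for the 1-D Poisson model problem. This is a toy illustration in plain Python, not code from the notes or the monograph:

```python
# Two-grid cycle for -u'' = f on (0,1), zero Dirichlet boundary conditions,
# discretized as tridiag(-1, 2, -1)/h^2 on n interior points.

def gauss_seidel(u, f, h, sweeps):
    """Lexicographic Gauss-Seidel sweeps for tridiag(-1, 2, -1)/h^2."""
    n = len(u)
    for _ in range(sweeps):
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            u[i] = (h * h * f[i] + left + right) / 2.0
    return u

def residual(u, f, h):
    n = len(u)
    r = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r.append(f[i] - (2.0 * u[i] - left - right) / (h * h))
    return r

def restrict(r):
    """Full weighting: coarse point j sits at fine point 2j+1."""
    return [0.25 * r[2 * j] + 0.5 * r[2 * j + 1] + 0.25 * r[2 * j + 2]
            for j in range((len(r) - 1) // 2)]

def prolong(e_c, n_fine):
    """Linear interpolation of the coarse correction to the fine grid."""
    e = [0.0] * n_fine
    for j, v in enumerate(e_c):
        e[2 * j + 1] += v
        e[2 * j] += 0.5 * v
        e[2 * j + 2] += 0.5 * v
    return e

def two_grid(u, f, h, nu1=2, nu2=2):
    u = gauss_seidel(u, f, h, nu1)       # pre-smoothing
    r_c = restrict(residual(u, f, h))    # restricted fine-grid residual
    # toy stand-in for an exact coarse solve: many smoothing sweeps
    e_c = gauss_seidel([0.0] * len(r_c), r_c, 2 * h, 200)
    e = prolong(e_c, len(u))             # coarse-grid correction
    u = [ui + ei for ui, ei in zip(u, e)]
    return gauss_seidel(u, f, h, nu2)    # post-smoothing
```

    For f = 1 the exact solution u(x) = x(1-x)/2 is quadratic, so the discrete solution matches it at the nodes and a handful of cycles reduce the error to rounding level.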

  12. Functional imaging of auditory scene analysis.

    PubMed

    Gutschalk, Alexander; Dykstra, Andrew R

    2014-01-01

    Our auditory system is constantly faced with the task of decomposing the complex mixture of sound arriving at the ears into perceptually independent streams constituting accurate representations of individual sound sources. This decomposition, termed auditory scene analysis, is critical for both survival and communication, and is thought to underlie both speech and music perception. The neural underpinnings of auditory scene analysis have been studied utilizing invasive experiments with animal models as well as non-invasive (MEG, EEG, and fMRI) and invasive (intracranial EEG) studies conducted with human listeners. The present article reviews human neurophysiological research investigating the neural basis of auditory scene analysis, with emphasis on two classical paradigms termed streaming and informational masking. Other paradigms - such as the continuity illusion, mistuned harmonics, and multi-speaker environments - are briefly addressed thereafter. We conclude by discussing the emerging evidence for the role of auditory cortex in remapping incoming acoustic signals into a perceptual representation of auditory streams, which are then available for selective attention and further conscious processing. This article is part of a Special Issue entitled Human Auditory Neuroimaging.

  13. The time course of natural scene perception with reduced attention.

    PubMed

    Groen, Iris I A; Ghebreab, Sennay; Lamme, Victor A F; Scholte, H Steven

    2016-02-01

    Attention is thought to impose an informational bottleneck on vision by selecting particular information from visual scenes for enhanced processing. Behavioral evidence suggests, however, that some scene information is extracted even when attention is directed elsewhere. Here, we investigated the neural correlates of this ability by examining how attention affects electrophysiological markers of scene perception. In two electro-encephalography (EEG) experiments, human subjects categorized real-world scenes as manmade or natural (full attention condition) or performed tasks on unrelated stimuli in the center or periphery of the scenes (reduced attention conditions). Scene processing was examined in two ways: traditional trial averaging was used to assess the presence of a categorical manmade/natural distinction in event-related potentials, whereas single-trial analyses assessed whether EEG activity was modulated by scene statistics that are diagnostic of naturalness of individual scenes. The results indicated that evoked activity up to 250 ms was unaffected by reduced attention, showing intact categorical differences between manmade and natural scenes and strong modulations of single-trial activity by scene statistics in all conditions. Thus initial processing of both categorical and individual scene information remained intact with reduced attention. Importantly, however, attention did have profound effects on later evoked activity; full attention on the scene resulted in prolonged manmade/natural differences, increased neural sensitivity to scene statistics, and enhanced scene memory. These results show that initial processing of real-world scene information is intact with diminished attention but that the depth of processing of this information does depend on attention.

  14. Rapid 3D video/laser sensing and digital archiving with immediate on-scene feedback for 3D crime scene/mass disaster data collection and reconstruction

    NASA Astrophysics Data System (ADS)

    Altschuler, Bruce R.; Oliver, William R.; Altschuler, Martin D.

    1996-02-01

    We describe a system for rapid and convenient video data acquisition and 3-D numerical coordinate data calculation able to provide precise 3-D topographical maps and 3-D archival data sufficient to reconstruct a 3-D virtual reality display of a crime scene or mass disaster area. Under a joint U.S. Army/U.S. Air Force project with collateral U.S. Navy support to create a 3-D surgical robotic inspection device (a mobile, multi-sensor robotic surgical assistant to aid the surgeon in diagnosis, continual surveillance of patient condition, and robotic surgical telemedicine of combat casualties), the technology is being perfected for remote, non-destructive, quantitative 3-D mapping of objects of varied sizes. This technology is being advanced with hyper-speed parallel video technology and compact, very fast laser electro-optics, such that 3-D surface map data will shortly be acquired within the time frame of conventional 2-D video. With simple field-capable calibration and mobile or portable platforms, the crime scene investigator could set up and survey the entire crime scene, or portions of it at high resolution, with almost the simplicity and speed of video or still photography. The survey apparatus would record relative position and location, and instantly archive thousands of artifacts at the site with 3-D data points capable of creating unbiased virtual reality reconstructions, or actual physical replicas, for the investigators, prosecutors, and jury.

  15. Collaboration on Scene Graph Based 3D Data

    NASA Astrophysics Data System (ADS)

    Ammon, Lorenz; Bieri, Hanspeter

    Professional 3D digital content creation tools, like Alias Maya or discreet 3ds max, offer only limited support for a team of artists working on a 3D model collaboratively. We present a scene graph repository system that enables fine-grained collaboration on scenes built with standard 3D DCC tools by applying the concept of collaborative versions to a general attributed scene graph. Artists can work on the same scene in parallel without locking each other out. The artists' changes to a scene are regularly merged to ensure that all artists can see each other's progress and collaborate on current data. We introduce the concepts of indirect changes and indirect conflicts to systematically inspect the effects that collaborative changes have on a scene. Inspecting indirect conflicts helps maintain scene consistency by systematically looking for inconsistencies at the right places.
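
    The distinction between direct and indirect conflicts can be illustrated with a toy merge check: a direct conflict is two artists editing the same attribute of the same node, while an indirect conflict arises when one artist edits a node and the other edits one of its ancestors (for example, a parent transform that moves the edited geometry). All names here are illustrative, not the paper's API:

```python
# Toy conflict classification on an attributed scene graph (illustrative
# sketch, assuming a simple parent-pointer tree; not the repository system
# described in the paper).

class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.attrs = name, parent, {}

def ancestors(node):
    """Yield the chain of parents above a node, nearest first."""
    while node.parent is not None:
        node = node.parent
        yield node

def classify_conflicts(changes_a, changes_b):
    """changes_*: dict mapping (node, attribute) -> new value.
    Returns (direct, indirect) conflict lists."""
    direct, indirect = [], []
    for (node_a, attr_a) in changes_a:
        for (node_b, attr_b) in changes_b:
            if node_a is node_b and attr_a == attr_b:
                direct.append((node_a.name, attr_a))      # same attribute edited twice
            elif node_b in ancestors(node_a) or node_a in ancestors(node_b):
                indirect.append((node_a.name, node_b.name))  # ancestor edit affects descendant
    return direct, indirect
```

    A merge tool built this way can auto-merge non-conflicting edits and surface only the direct and indirect conflicts for the artists to resolve.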

  16. High-level aftereffects to global scene properties.

    PubMed

    Greene, Michelle R; Oliva, Aude

    2010-12-01

    Adaptation is ubiquitous in the human visual system, allowing recalibration to the statistical regularities of its input. Previous work has shown that global scene properties such as openness and mean depth are informative dimensions of natural scene variation useful for human and machine scene categorization (Greene & Oliva, 2009b; Oliva & Torralba, 2001). A visual system that rapidly categorizes scenes using such statistical regularities should be continuously updated, and therefore is prone to adaptation along these dimensions. Using a rapid serial visual presentation paradigm, we show aftereffects to several global scene properties (magnitude 8-21%). In addition, aftereffects were preserved when the test image was presented 10 degrees away from the adapted location, suggesting that the origin of these aftereffects is not solely due to low-level adaptation. We show systematic modulation of observers' basic-level scene categorization performances after adapting to a global property, suggesting a strong representational role of global properties in rapid scene categorization.

  17. Basic level scene understanding: categories, attributes and structures

    PubMed Central

    Xiao, Jianxiong; Hays, James; Russell, Bryan C.; Patterson, Genevieve; Ehinger, Krista A.; Torralba, Antonio; Oliva, Aude

    2013-01-01

    A longstanding goal of computer vision is to build a system that can automatically understand a 3D scene from a single image. This requires extracting semantic concepts and 3D information from 2D images which can depict an enormous variety of environments that comprise our visual world. This paper summarizes our recent efforts toward these goals. First, we describe the richly annotated SUN database which is a collection of annotated images spanning 908 different scene categories with object, attribute, and geometric labels for many scenes. This database allows us to systematically study the space of scenes and to establish a benchmark for scene and object recognition. We augment the categorical SUN database with 102 scene attributes for every image and explore attribute recognition. Finally, we present an integrated system to extract the 3D structure of the scene and objects depicted in an image. PMID:24009590

  18. Scene recognition by manifold regularized deep learning architecture.

    PubMed

    Yuan, Yuan; Mou, Lichao; Lu, Xiaoqiang

    2015-10-01

    Scene recognition is an important problem in the field of computer vision because it helps to narrow the gap between computers and human beings in scene understanding. Semantic modeling is a popular technique used to fill the semantic gap in scene recognition. However, most semantic modeling approaches learn shallow, one-layer representations for scene recognition while ignoring the structural information relating images, often resulting in poor performance. Modeled after the human visual system, and intended to inherit humanlike judgment, a manifold regularized deep architecture is proposed for scene recognition. The proposed deep architecture exploits the structural information of the data, forming a mapping between the visible layer and the hidden layer. With the proposed approach, a deep architecture can be designed to learn high-level features for scene recognition in an unsupervised fashion. Experiments on standard data sets show that our method outperforms state-of-the-art methods for scene recognition.

  19. Applying artificial vision models to human scene understanding.

    PubMed

    Aminoff, Elissa M; Toneva, Mariya; Shrivastava, Abhinav; Chen, Xinlei; Misra, Ishan; Gupta, Abhinav; Tarr, Michael J

    2015-01-01

    How do we understand the complex patterns of neural responses that underlie scene understanding? Studies of the network of brain regions held to be scene-selective (the parahippocampal/lingual region (PPA), the retrosplenial complex (RSC), and the occipital place area (TOS)) have typically focused on single visual dimensions (e.g., size), rather than on the high-dimensional feature space in which scenes are likely to be neurally represented. Here we leverage well-specified artificial vision systems to explicate a more complex understanding of how scenes are encoded in this functional network. We correlated similarity matrices within three different scene-spaces arising from: (1) BOLD activity in scene-selective brain regions; (2) behaviorally measured judgments of visually perceived scene similarity; and (3) several different computer vision models. These correlations revealed: (1) models that relied on mid- and high-level scene attributes showed the highest correlations with the patterns of neural activity within the scene-selective network; (2) NEIL and SUN, the models that best accounted for the patterns obtained from PPA and TOS, were different from the GIST model that best accounted for the pattern obtained from RSC; and (3) the best performing models outperformed behaviorally measured judgments of scene similarity in accounting for neural data. One computer vision method, NEIL ("Never-Ending-Image-Learner"), which incorporates visual features learned as statistical regularities across web-scale numbers of scenes, showed significant correlations with neural activity in all three scene-selective regions and was one of the two models best able to account for variance in the PPA and TOS. We suggest that these results are a promising first step in explicating more fine-grained models of neural scene understanding, including developing a clearer picture of the division of labor among the components of the functional scene-selective brain network.

  20. Reconstruction of an inn fire scene using the Fire Dynamics Simulator (FDS) program.

    PubMed

    Chi, Jen-Hao

    2013-01-01

    An inn fire occurring in the middle of the night usually causes far more injuries and deaths. This article examines the case of an inn fire that resulted in the most serious casualties in Taiwan's history. Data based on the official fire investigation report and NFPA 921 are used, and the fire scenes are reconstructed using the latest Fire Dynamics Simulator (FDS) program from NIST. Reconstructive analysis of the personnel evacuation time and the time evolution of various fire hazard factors clarifies the reason for such a high number of casualties, and shows that the FDS program has come to play an essential role in fire investigation. The close comparison between the simulation results and the actual fire scene also offers fire prevention engineers a possible use of FDS for examining the effects of improved building fire safety schemes.

  1. Primal scene derivatives in the work of Yukio Mishima: the primal scene fantasy.

    PubMed

    Turco, Ronald N

    2002-01-01

    This article discusses the preoccupation with fire, revenge, crucifixion, and other fantasies as they relate to the primal scene. The manifestations of these fantasies are demonstrated in a work of fiction by Yukio Mishima, The Temple of the Golden Pavilion. As in other writings of Mishima, there is a fusion of aggressive and libidinal drives and a preoccupation with death. The primal scene is directly connected with pyromania and destructive "acting out" of fantasies. This article is timely with regard to understanding contemporary events of cultural and national destruction.

  2. TMS to object cortex affects both object and scene remote networks while TMS to scene cortex only affects scene networks.

    PubMed

    Rafique, Sara A; Solomon-Harris, Lily M; Steeves, Jennifer K E

    2015-12-01

    Viewing the world involves many computations across a great number of regions of the brain, all the while appearing seamless and effortless. We sought to determine the connectivity of object and scene processing regions of cortex through the influence of transient focal neural noise in discrete nodes within these networks. We consecutively paired repetitive transcranial magnetic stimulation (rTMS) with functional magnetic resonance-adaptation (fMR-A) to measure the effect of rTMS on functional response properties at the stimulation site and in remote regions. In separate sessions, rTMS was applied to the object preferential lateral occipital region (LO) and scene preferential transverse occipital sulcus (TOS). Pre- and post-stimulation responses were compared using fMR-A. In addition to modulating BOLD signal at the stimulation site, TMS affected remote regions revealing inter and intrahemispheric connections between LO, TOS, and the posterior parahippocampal place area (PPA). Moreover, we show remote effects from object preferential LO to outside the ventral perception network, in parietal and frontal areas, indicating an interaction of dorsal and ventral streams and possibly a shared common framework of perception and action.

  3. On-scene times for trauma patients in West Yorkshire.

    PubMed Central

    Goodacre, S W; Gray, A; McGowan, A

    1997-01-01

    OBJECTIVE: To assess whether length of time on-scene in patients with major injury was associated with severity of injury or with abnormal on-scene physiology. METHODS: A retrospective analysis of a convenience sample of patients in whom prehospital on-scene times were entered onto the regional major trauma database. On-scene times of patients were analysed to assess whether ultimate injury severity score or on scene physiology measurements affected times. This was undertaken by examining subgroups of patients with similar injury severity or physiological measurements by Wilcoxon-Mann-Whitney testing and comparing 95% confidence intervals of the mean on-scene times. RESULTS: The mean on-scene time for 111 non-entrapped patients was 26 minutes (95% confidence interval 23.5 to 28.6). Patients with injury severity score of > 15, with a Glasgow coma scale of < 13, and with an abnormal pulse spent significantly less time on-scene than less severely injured or physiologically deranged patients. CONCLUSIONS: Paramedics have the ability to recognise patients with severe injury and reduce on-scene times. On-scene times were consistently long throughout all subgroups of major trauma patients. PMID:9315926
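
    The subgroup comparison described above (Wilcoxon-Mann-Whitney testing plus 95% confidence intervals of mean on-scene times) can be sketched as follows. This is a hedged illustration using the normal approximation without tie correction; the on-scene times below are invented, not values from the trauma database:

```python
from statistics import NormalDist, mean, stdev

def ranks(values):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney(x, y):
    """Two-sided Mann-Whitney U, normal approximation, no tie correction."""
    n1, n2 = len(x), len(y)
    r = ranks(x + y)
    u = sum(r[:n1]) - n1 * (n1 + 1) / 2      # U statistic for sample x
    mu = n1 * n2 / 2
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (u - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return u, p

def ci95_mean(sample):
    """Normal-approximation 95% confidence interval for the mean."""
    half = 1.96 * stdev(sample) / len(sample) ** 0.5
    return mean(sample) - half, mean(sample) + half

severe = [14, 18, 20, 22, 17, 19, 16, 21]   # invented on-scene times, minutes
minor = [25, 30, 28, 24, 27, 33, 26, 29]
u, p = mann_whitney(severe, minor)           # every severe time is shorter, so U = 0
```

    In real analyses of small samples an exact U distribution (or a statistics package) would be preferable to the normal approximation used here.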

  4. Optimising crime scene temperature collection for forensic entomology casework.

    PubMed

    Hofer, Ines M J; Hart, Andrew J; Martín-Vega, Daniel; Hall, Martin J R

    2017-01-01

    The value of minimum post-mortem interval (minPMI) estimations in suspicious death investigations from insect evidence using temperature modelling is indisputable. In order to investigate the reliability of the collected temperature data used for modelling minPMI, it is necessary to study the effects of data logger location on the accuracy and precision of measurements. Digital data logging devices are the most commonly used temperature measuring devices in forensic entomology, however, the relationship between ambient temperatures (measured by loggers) and body temperatures has been little studied. The placement of loggers in this study in three locations (two outdoors, one indoors) had measurable effects when compared with actual body temperature measurements (simulated with pig heads), some more significant than others depending on season, exposure to the environment and logger location. Overall, the study demonstrated the complexity of the question of optimal logger placement at a crime scene and the potential impact of inaccurate temperature data on minPMI estimations, showing the importance of further research in this area and development of a standard protocol. Initial recommendations are provided for data logger placement (within a Stevenson Screen where practical), situations to avoid (e.g. placement of logger in front of windows when measuring indoor temperatures), and a baseline for further research into producing standard guidelines for logger placement, to increase the accuracy of minPMI estimations and, thereby, the reliability of forensic entomology evidence in court.
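
    A standard way such temperature records feed into minPMI estimation is through accumulated degree hours (ADH): development time is modeled as thermal energy accumulated above a species-specific threshold. The sketch below is illustrative only; the threshold and readings are invented, and real casework uses species-specific development data and logger temperatures corrected against scene measurements:

```python
BASE_TEMP = 10.0  # assumed lower developmental threshold, degrees C

def accumulated_degree_hours(hourly_temps, base=BASE_TEMP):
    """Sum of (T - base) over hourly readings; hours below base contribute 0."""
    return sum(max(t - base, 0.0) for t in hourly_temps)

def hours_to_reach(target_adh, hourly_temps, base=BASE_TEMP):
    """First hour at which the running ADH total reaches target_adh."""
    total = 0.0
    for hour, t in enumerate(hourly_temps, start=1):
        total += max(t - base, 0.0)
        if total >= target_adh:
            return hour
    return None  # development target not reached within the record

# A constant 20 C record contributes 10 degree-hours per hour:
print(accumulated_degree_hours([20.0] * 24))  # prints 240.0
```

    Because the ADH total is a running sum over the logger record, any systematic bias in logger placement propagates directly into the estimated development time, which is why placement accuracy matters for minPMI.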

  5. Integration and segregation in auditory scene analysis.

    PubMed

    Sussman, Elyse S

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements to perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  6. Tachistoscopic illumination and masking of real scenes

    PubMed Central

    Chichka, David; Philbeck, John W.; Gajewski, Daniel A.

    2014-01-01

    Tachistoscopic presentation of scenes has been valuable for studying the emerging properties of visual scene representations. The spatial aspects of this work have generally been focused on the conceptual locations (e.g., next to the refrigerator) and the directional locations of objects in 2D arrays and/or images. Less is known about how the perceived egocentric distance of objects develops. Here we describe a novel system for presenting brief glimpses of a real-world environment, followed by a mask. The system includes projectors with mechanical shutters for projecting the fixation and masking images, a set of LED floodlights for illuminating the environment, and computer-controlled electronics to set the timing and initiate the process. Because a real environment is used, most visual distance and depth cues may be manipulated using traditional methods. The system is inexpensive, robust, and its components are readily available in the marketplace. This paper describes the system and the timing characteristics of each component. Verification of the ability to control exposure to time scales as low as a few milliseconds is demonstrated. PMID:24519496

  7. Integration and segregation in auditory scene analysis

    NASA Astrophysics Data System (ADS)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements to perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  8. Recognition and memory for briefly presented scenes.

    PubMed

    Potter, Mary C

    2012-01-01

    Three times per second, our eyes make a new fixation that generates a new bottom-up analysis in the visual system. How much is extracted from each glimpse? For how long and in what form is that information remembered? To answer these questions, investigators have mimicked the effect of continual shifts of fixation by using rapid serial visual presentation of sequences of unrelated pictures. Experiments in which viewers detect specified target pictures show that detection on the basis of meaning is possible at presentation durations as brief as 13 ms, suggesting that understanding may be based on feedforward processing, without feedback. In contrast, memory for what was just seen is poor unless the viewer has about 500 ms to think about the scene: the scene does not need to remain in view. Initial memory loss after brief presentations occurs over several seconds, suggesting that at least some of the information from the previous few fixations persists long enough to support a coherent representation of the current environment. In contrast to marked memory loss shortly after brief presentations, memory for pictures viewed for 1 s or more is excellent. Although some specific visual information persists, the form and content of the perceptual and memory representations of pictures over time indicate that conceptual information is extracted early and determines most of what remains in longer-term memory.

  9. Interactive Display of Scenes with Annotations

    NASA Technical Reports Server (NTRS)

    Vona, Marsette; Powell, Mark; Backes, Paul; Norris, Jeffrey; Steinke, Robert

    2005-01-01

    ThreeDView is a computer program that enables high-performance interactive display of real-world scenes with annotations. ThreeDView was developed primarily as a component of the Science Activity Planner (SAP) software, wherein it is to be used to display annotated images of terrain acquired by exploratory robots on Mars and possibly other remote planets. The images can be generated from sets of multiple-texture image data in the Visible Scalable Terrain (ViSTa) format, which was described in "Format for Interchange and Display of 3D Terrain Data" (NPO-30600), NASA Tech Briefs, Vol. 28, No. 12 (December 2004), page 25. In ThreeDView, terrain data can be loaded rapidly, the geometric level of detail and texture resolution can be selected, false colors can be used to represent scientific data mapped onto terrain, and the user can select among navigation modes. ThreeDView consists largely of modular Java software components that can easily be reused and extended to produce new high-performance, application-specific software systems for displaying images of three-dimensional real-world scenes.

  10. Comprehensive Understanding for Vegetated Scene Radiance Relationships

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.; Deering, D. W.

    1984-01-01

    Directional reflectance distributions spanning the entire exitance hemisphere were measured in two field studies; one using a Mark III 3-band radiometer and one using the rapid scanning bidirectional field instrument called PARABOLA. Surfaces measured included corn, soybeans, bare soils, grass lawn, orchard grass, alfalfa, cotton row crops, plowed field, annual grassland, stipa grass, hard wheat, salt plain shrubland, and irrigated wheat. Analysis of field data showed unique reflectance distributions ranging from bare soil to complete vegetation canopies. Physical mechanisms causing these trends were proposed. A 3-D model was developed and is unique in that it predicts: (1) the directional spectral reflectance factors as a function of the sensor's azimuth and zenith angles and the sensor's position above the canopy; (2) the spectral absorption as a function of location within the scene; and (3) the directional spectral radiance as a function of the sensor's location within the scene. Initial verification of the model as applied to a soybean row crop showed that the simulated directional data corresponded relatively well in gross trends to the measured data. The model was expanded to include the anisotropic scattering properties of leaves as a function of the leaf orientation distribution in both the zenith and azimuth angle modes.

  11. The scene and the unseen: manipulating photographs for experiments on change blindness and scene memory: image manipulation for change blindness.

    PubMed

    Ball, Felix; Elzemann, Anne; Busch, Niko A

    2014-09-01

    The change blindness paradigm, in which participants often fail to notice substantial changes in a scene, is a popular tool for studying scene perception, visual memory, and the link between awareness and attention. Some of the most striking and popular examples of change blindness have been demonstrated with digital photographs of natural scenes; in most studies, however, much simpler displays, such as abstract stimuli or "free-floating" objects, are typically used. Although simple displays have undeniable advantages, natural scenes remain a very useful and attractive stimulus for change blindness research. To assist researchers interested in using natural-scene stimuli in change blindness experiments, we provide here a step-by-step tutorial on how to produce changes in natural-scene images with a freely available image-processing tool (GIMP). We explain how changes in a scene can be made by deleting objects or relocating them within the scene or by changing the color of an object, in just a few simple steps. We also explain how the physical properties of such changes can be analyzed using GIMP and MATLAB (a high-level scientific programming tool). Finally, we present an experiment confirming that scenes manipulated according to our guidelines are effective in inducing change blindness and demonstrating the relationship between change blindness and the physical properties of the change and inter-individual differences in performance measures. We expect that this tutorial will be useful for researchers interested in studying the mechanisms of change blindness, attention, or visual memory using natural scenes.
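
    The abstract mentions analyzing the physical properties of a change with GIMP and MATLAB. As a minimal sketch of that kind of measurement (not the authors' actual analysis scripts), the following Python computes two simple change metrics, the number of changed pixels and their mean absolute difference, for grayscale images represented as nested lists; the toy images and the threshold parameter are illustrative assumptions:

```python
# Hedged sketch: measure the magnitude of a change between an original and a
# modified grayscale image (nested lists of 0-255 values). The threshold and
# the toy 3x3 "scene" below are invented for illustration.

def change_metrics(original, modified, threshold=0):
    """Return (changed_pixel_count, mean_abs_difference_over_changed_pixels)."""
    diffs = [abs(a - b)
             for row_a, row_b in zip(original, modified)
             for a, b in zip(row_a, row_b)]
    changed = [d for d in diffs if d > threshold]
    mean_diff = sum(changed) / len(changed) if changed else 0.0
    return len(changed), mean_diff

# Toy scene in which one object (the bottom-right patch) was recolored.
orig = [[10, 10, 10], [10, 10, 10], [10, 10, 200]]
mod  = [[10, 10, 10], [10, 10, 10], [10, 10,  40]]
print(change_metrics(orig, mod))  # (1, 160.0)
```

    In a real experiment these metrics would be computed over full photographs (e.g., loaded with an imaging library) and correlated with detection performance, as the abstract describes.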

  12. How People Actually Use Thermostats

    SciTech Connect

    Meier, Alan; Aragon, Cecilia; Hurwitz, Becky; Mujumdar, Dhawal; Peffer, Therese; Perry, Daniel; Pritoni, Marco

    2010-08-15

    Residential thermostats have been a key element in controlling heating and cooling systems for over sixty years. However, today's modern programmable thermostats (PTs) are complicated and difficult for users to understand, leading to errors in operation and wasted energy. Four separate tests of usability were conducted in preparation for a larger study. These tests included personal interviews, an on-line survey, photographing actual thermostat settings, and measurements of ability to accomplish four tasks related to effective use of a PT. The interviews revealed that many occupants used the PT as an on-off switch and most demonstrated little knowledge of how to operate it. The on-line survey found that 89% of the respondents rarely or never used the PT to set a weekday or weekend program. The photographic survey (in low income homes) found that only 30% of the PTs were actually programmed. In the usability test, we found that we could quantify the difference in usability of two PTs as measured in time to accomplish tasks. Users accomplished the tasks in consistently shorter times with the touchscreen unit than with buttons. None of these studies are representative of the entire population of users but, together, they illustrate the importance of improving user interfaces in PTs.

  13. Figure-Ground Organization in Visual Cortex for Natural Scenes

    PubMed Central

    2016-01-01

    Abstract Figure-ground organization and border-ownership assignment are essential for understanding natural scenes. It has been shown that many neurons in the macaque visual cortex signal border-ownership in displays of simple geometric shapes such as squares, but how well these neurons resolve border-ownership in natural scenes is not known. We studied area V2 neurons in behaving macaques with static images of complex natural scenes. We found that about half of the neurons were border-ownership selective for contours in natural scenes, and this selectivity originated from the image context. The border-ownership signals emerged within 70 ms after stimulus onset, only ∼30 ms after response onset. A substantial fraction of neurons were highly consistent across scenes. Thus, the cortical mechanisms of figure-ground organization are fast and efficient even in images of complex natural scenes. Understanding how the brain performs this task so fast remains a challenge. PMID:28058269

  14. Crime scene units: a look to the future

    NASA Astrophysics Data System (ADS)

    Baldwin, Hayden B.

    1999-02-01

    The scientific examination of physical evidence is well recognized as a critical element in conducting successful criminal investigations and prosecutions. The forensic science field is an ever-changing discipline. With the arrival of DNA analysis, new processing techniques for latent prints, portable lasers, and electrostatic dust print lifters, the training of evidence technicians has become more important than ever. These scientific and technological breakthroughs have made it possible to collect and analyze physical evidence in ways that were never possible before. The problem arises with the collection of physical evidence from the crime scene, not from the analysis of the evidence. The need for specialized units to process all crime scenes is imperative. These specialized units, called crime scene units, should be trained and equipped to handle all forms of crime scenes. The crime scene units would have the capability to professionally evaluate and collect pertinent physical evidence from crime scenes.

  15. A note on notes: note taking and containment.

    PubMed

    Levine, Howard B

    2007-07-01

    In extreme situations of massive projective identification, both the analyst and the patient may come to share a fantasy or belief that his or her own psychic reality will be annihilated if the psychic reality of the other is accepted or adopted (Britton 1998). In the example of Dr. M and his patient, the paradoxical dilemma around note taking had highly specific transference meanings; it was not simply an instance of the generalized human response of distracted attention that Freud (1912) had spoken of, nor was it the destabilization of analytic functioning that I tried to describe in my work with Mr. L. Whether such meanings will always exist in these situations remains a matter to be determined by further clinical experience. In reopening a dialogue about note taking during sessions, I have attempted to move the discussion away from categorical injunctions about what analysts should or should not do, and instead to foster a more nuanced, dynamic, and pair-specific consideration of the analyst's functioning in the immediate context of the analytic relationship. There is, of course, a wide variety of listening styles among analysts, and each analyst's mental functioning may be affected differently by each patient whom the analyst sees. I have raised many questions in the hopes of stimulating an expanded discussion that will allow us to share our experiences and perhaps reach additional conclusions. Further consideration may lead us to decide whether note taking may have very different meanings for other analysts and analyst-patient pairs, and whether it may serve useful functions in addition to the one that I have described.

  16. A qualitative approach for recovering relative depths in dynamic scenes

    NASA Technical Reports Server (NTRS)

    Haynes, S. M.; Jain, R.

    1987-01-01

    This approach to dynamic scene analysis is a qualitative one. It computes relative depths using very general rules. The depths calculated are qualitative in the sense that the only information obtained is which object is in front of which others. The motion is qualitative in the sense that the only required motion data is whether objects are moving toward or away from the camera. Reasoning, which takes into account the temporal character of the data and the scene, is qualitative. This approach to dynamic scene analysis can tolerate imprecise data because in dynamic scenes the data are redundant.

  17. The occipital place area represents the local elements of scenes.

    PubMed

    Kamps, Frederik S; Julian, Joshua B; Kubilius, Jonas; Kanwisher, Nancy; Dilks, Daniel D

    2016-05-15

    Neuroimaging studies have identified three scene-selective regions in human cortex: parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA). However, precisely what scene information each region represents is not clear, especially for the least studied, more posterior OPA. Here we hypothesized that OPA represents local elements of scenes within two independent, yet complementary scene descriptors: spatial boundary (i.e., the layout of external surfaces) and scene content (e.g., internal objects). If OPA processes the local elements of spatial boundary information, then it should respond to these local elements (e.g., walls) themselves, regardless of their spatial arrangement. Indeed, we found that OPA, but not PPA or RSC, responded similarly to images of intact rooms and these same rooms in which the surfaces were fractured and rearranged, disrupting the spatial boundary. Next, if OPA represents the local elements of scene content information, then it should respond more when more such local elements (e.g., furniture) are present. Indeed, we found that OPA, but not PPA or RSC, responded more to multiple than single pieces of furniture. Taken together, these findings reveal that OPA analyzes local scene elements - both in spatial boundary and scene content representation - while PPA and RSC represent global scene properties.

  18. Sticky-Note Murals

    ERIC Educational Resources Information Center

    Sands, Ian

    2011-01-01

    In this article, the author describes a sticky-note mural project that originated from his desire to incorporate contemporary materials into his assignments as well as to inspire collaboration between students. The process takes much more than sticking sticky notes to the wall. It takes critical thinking skills and teamwork to design and complete…

  19. Imaging polarimetry in scene element discrimination

    NASA Astrophysics Data System (ADS)

    Duggin, Michael J.

    1999-10-01

    Recent work has shown that the use of a calibrated digital camera fitted with a rotating linear polarizer can facilitate the study of Stokes-parameter images across a wide dynamic range of scene radiance values. Here, we show images of MacBeth color chips, Spectralon gray-scale targets, and Kodak gray cards. We also consider a static aircraft mounted on a platform against a clear-sky background. We show that the contrast in polarization is greater than that in intensity, and that polarization contrast increases as intensity contrast decreases. We also show that there is great variation in the polarization within and between each of the bandpasses; this variation is comparable in magnitude to the variation in intensity.

  20. Real time moving scene holographic camera system

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1973-01-01

    A holographic motion picture camera system producing resolution of front surface detail is described. The system utilizes a beam of coherent light and means for dividing the beam into a reference beam for direct transmission to a conventional movie camera and two reflection signal beams for transmission to the movie camera by reflection from the front side of a moving scene. The system is arranged so that critical parts of the system are positioned on the foci of a pair of interrelated, mathematically derived ellipses. The camera has the theoretical capability of producing motion picture holograms of projectiles moving at speeds as high as 900,000 cm/sec (about 21,450 mph).

  1. Behind the scenes of auditory perception.

    PubMed

    Shamma, Shihab A; Micheyl, Christophe

    2010-06-01

    'Auditory scenes' often contain contributions from multiple acoustic sources. These are usually heard as separate auditory 'streams', which can be selectively followed over time. How and where these auditory streams are formed in the auditory system is one of the most fascinating questions facing auditory scientists today. Findings published within the past two years indicate that both cortical and subcortical processes contribute to the formation of auditory streams, and they raise important questions concerning the roles of primary and secondary areas of auditory cortex in this phenomenon. In addition, these findings underline the importance of taking into account the relative timing of neural responses, and the influence of selective attention, in the search for neural correlates of the perception of auditory streams.

  2. [Study on the modeling of earth-atmosphere coupling over rugged scenes for hyperspectral remote sensing].

    PubMed

    Zhao, Hui-Jie; Jiang, Cheng; Jia, Guo-Rui

    2014-01-01

    Adjacency effects may introduce errors in the quantitative applications of hyperspectral remote sensing, of which the most significant term is the earth-atmosphere coupling radiance. However, surrounding relief and shadow induce strong changes in hyperspectral images acquired over rugged terrain, so the spectral characteristics cannot be described accurately without accounting for topography. Furthermore, the radiative coupling process between the earth and the atmosphere is more complex over rugged scenes. In order to meet the requirements of real-time processing in data simulation, an equivalent reflectance of the background was developed that takes into account the topography and the geometry between surroundings and targets, based on the radiative transfer process. The contributions of the coupling to the signal at sensor level were then evaluated. This approach was integrated into the sensor-level radiance simulation model and validated by simulating a set of actual radiance data. The results show that the visual effect of the simulated images is consistent with that of the observed images, and that the spectral similarity is improved over rugged scenes. In addition, the model precision is maintained at the same level over flat scenes.

  3. Skeletal camera network embedded structure-from-motion for 3D scene reconstruction from UAV images

    NASA Astrophysics Data System (ADS)

    Xu, Zhihua; Wu, Lixin; Gerke, Markus; Wang, Ran; Yang, Huachao

    2016-11-01

    Structure-from-Motion (SfM) techniques have been widely used for 3D scene reconstruction from multi-view images. However, due to the large computational costs of SfM methods there is a major challenge in processing highly overlapping images, e.g. images from unmanned aerial vehicles (UAV). This paper embeds a novel skeletal camera network (SCN) into SfM to enable efficient 3D scene reconstruction from a large set of UAV images. First, the flight control data are used within a weighted graph to construct a topologically connected camera network (TCN) to determine the spatial connections between UAV images. Second, the TCN is refined using a novel hierarchical degree bounded maximum spanning tree to generate a SCN, which contains a subset of edges from the TCN and ensures that each image is involved in at least a 3-view configuration. Third, the SCN is embedded into the SfM to produce a novel SCN-SfM method, which allows performing tie-point matching only for the actually connected image pairs. The proposed method was applied in three experiments with images from two fixed-wing UAVs and an octocopter UAV, respectively. In addition, the SCN-SfM method was compared to three other methods for image connectivity determination. The comparison shows a significant reduction in the number of matched images if our method is used, which leads to less computational costs. At the same time the achieved scene completeness and geometric accuracy are comparable.
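
    The core idea of pruning a topologically connected camera network (TCN) down to a skeletal camera network (SCN) can be illustrated with a simplified greedy sketch. This is NOT the paper's hierarchical degree-bounded maximum spanning tree; it is a plain Kruskal-style variant that keeps the strongest edges while capping each camera's degree, with edge weights standing in for the image-overlap scores derived from flight-control data:

```python
# Hedged sketch of SCN construction: greedily keep the strongest acyclic
# connections between images while bounding each image's degree. A simplified
# stand-in for the paper's hierarchical degree-bounded maximum spanning tree.

def skeletal_network(edges, max_degree=3):
    """edges: list of (weight, cam_a, cam_b). Returns the kept (a, b, w) edges."""
    parent = {}

    def find(x):  # union-find root with path halving
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    degree = {}
    kept = []
    for w, a, b in sorted(edges, reverse=True):  # strongest overlaps first
        if degree.get(a, 0) >= max_degree or degree.get(b, 0) >= max_degree:
            continue  # degree bound reached
        ra, rb = find(a), find(b)
        if ra == rb:
            continue  # would create a cycle
        parent[ra] = rb
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
        kept.append((a, b, w))
    return kept

# Four overlapping UAV images; only the strongest acyclic links survive,
# so tie-point matching runs on 3 pairs instead of all 4.
tcn = [(0.9, "img1", "img2"), (0.8, "img2", "img3"),
       (0.3, "img1", "img3"), (0.7, "img3", "img4")]
print(skeletal_network(tcn, max_degree=2))
```

    The payoff named in the abstract, fewer matched image pairs and therefore lower computational cost, corresponds here to the dropped weak edge; the paper's actual method additionally guarantees each image participates in at least a 3-view configuration, which this sketch does not.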

  4. An analysis of LANDSAT MSS scene-to-scene registration accuracy

    NASA Technical Reports Server (NTRS)

    Seyfarth, B. R.; Cook, P. W. (Principal Investigator)

    1981-01-01

    Measurements were made for 12 registrations done by ERL and for 8 registrations done by SRS. The results indicate that the ERL method is significantly more accurate in five of the eight comparisons. The differences between the two methods are not significant in the other three cases. There are two possible reasons for the differences. First, the ERL model is a piecewise linear model and the EDITOR model is a cubic polynomial model. Second, the ERL program resamples using bilinear interpolation while the EDITOR software uses nearest neighbor resampling. This study did not indicate how much of the difference is attributable to each factor. The average of all merged scene error values for ERL was 31.6 meters and the average for the eight common areas was 32.6 meters. The average of the eight merged scene error values for SRS was 40.1 meters.
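
    The two resampling rules the abstract contrasts can be shown in minimal form: sampling a 2-D grid at a fractional pixel coordinate either by snapping to the nearest pixel or by bilinearly weighting the four neighbors. This is an illustrative sketch only; real registration software must also handle image borders, multiple bands, and the geometric transform itself:

```python
# Hedged sketch of the two resampling rules: nearest neighbor (EDITOR-style)
# vs. bilinear interpolation (ERL-style), on a tiny grid of pixel values.

def nearest_neighbor(img, x, y):
    """Snap the fractional coordinate to the nearest integer pixel."""
    return img[round(y)][round(x)]

def bilinear(img, x, y):
    """Weight the four surrounding pixels by their fractional distances."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    top = img[y0][x0] * (1 - dx) + img[y0][x0 + 1] * dx
    bot = img[y0 + 1][x0] * (1 - dx) + img[y0 + 1][x0 + 1] * dx
    return top * (1 - dy) + bot * dy

grid = [[0, 10],
        [20, 30]]
print(nearest_neighbor(grid, 0.4, 0.4))  # 0 (snaps to the top-left pixel)
print(bilinear(grid, 0.5, 0.5))          # 15.0 (weighted average of all four)
```

    Bilinear resampling produces smoother merged scenes at the cost of blending radiometry across pixels, which is one plausible contributor to the accuracy differences the study measured but could not attribute.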

  5. LUVOIR Tech Notes

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew R.; Shaklan, Stuart; Roberge, Aki; Rioux, Norman; Feinberg, Lee; Werner, Michael; Rauscher, Bernard; Mandell, Avi; France, Kevin; Schiminovich, David

    2016-01-01

    We present nine "tech notes" prepared by the Large UV/Optical/Infrared (LUVOIR) Science and Technology Definition Team (STDT), Study Office, and Technology Working Group. These tech notes are intended to highlight technical challenges that represent boundaries in the trade space for developing the LUVOIR architecture that may impact the science objectives being developed by the STDT. These tech notes are intended to be high-level discussions of the technical challenges and will serve as starting points for more in-depth analysis as the LUVOIR study progresses.

  6. Editing Emily Dickinson: Poetry "Behind the Scenes."

    ERIC Educational Resources Information Center

    Braswell, Mary Flowers

    1995-01-01

    Describes an activity in which students are given copies of a poem by Emily Dickinson in her own handwriting and are assigned the task, in groups, of preparing a version of the poem for publication. Notes that this activity makes students aware of an editor's work and the work that goes into preparing material for publication. (SR)

  7. Experiencing simultanagnosia through windowed viewing of complex social scenes.

    PubMed

    Dalrymple, Kirsten A; Birmingham, Elina; Bischof, Walter F; Barton, Jason J S; Kingstone, Alan

    2011-01-07

    Simultanagnosia is a disorder of visual attention, defined as an inability to see more than one object at once. It has been conceived as being due to a constriction of the visual "window" of attention, a metaphor that we examine in the present article. A simultanagnosic patient (SL) and two non-simultanagnosic control patients (KC and ES) described social scenes while their eye movements were monitored. These data were compared to a group of healthy subjects who described the same scenes under the same conditions as the patients, or through an aperture that restricted their vision to a small portion of the scene. Experiment 1 demonstrated that SL showed unusually low proportions of fixations to the eyes in social scenes, which contrasted with all other participants who demonstrated the standard preferential bias toward eyes. Experiments 2 and 3 revealed that when healthy participants viewed scenes through a window that was contingent on where they looked (Experiment 2) or where they moved a computer mouse (Experiment 3), their behavior closely mirrored that of patient SL. These findings suggest that a constricted window of visual processing has important consequences for how simultanagnosic patients explore their world. Our paradigm's capacity to mimic simultanagnosic behaviors while viewing complex scenes implies that it may be a valid way of modeling simultanagnosia in healthy individuals, providing a useful tool for future research. More broadly, our results support the thesis that people fixate the eyes in social scenes because they are informative to the meaning of the scene.

  8. CRISP: A Computational Model of Fixation Durations in Scene Viewing

    ERIC Educational Resources Information Center

    Nuthmann, Antje; Smith, Tim J.; Engbert, Ralf; Henderson, John M.

    2010-01-01

    Eye-movement control during scene viewing can be represented as a series of individual decisions about where and when to move the eyes. While substantial behavioral and computational research has been devoted to investigating the placement of fixations in scenes, relatively little is known about the mechanisms that control fixation durations.…

  9. Parametric Modeling of Visual Search Efficiency in Real Scenes

    PubMed Central

    Zhang, Xing; Li, Qingquan; Zou, Qin; Fang, Zhixiang; Zhou, Baoding

    2015-01-01

    How should the efficiency of searching for real objects in real scenes be measured? Traditionally, when searching for artificial targets, e.g., letters or rectangles, among distractors, efficiency is measured by a reaction time (RT) × Set Size function. However, it is not clear whether the set size of real scenes is as effective a parameter for measuring search efficiency as the set size of artificial scenes. The present study investigated search efficiency in real scenes based on a combination of low-level features, e.g., visible size and target-flanker separation factors, and high-level features, e.g., category effect and target template. Visible size refers to the pixel number of visible parts of an object in a scene, whereas separation is defined as the sum of the flank distances from a target to the nearest distractors. During the experiment, observers searched for targets in various urban scenes, using pictures as the target templates. The results indicated that the effect of the set size in real scenes decreased according to the variances of other factors, e.g., visible size and separation. Increasing visible size and separation factors increased search efficiency. Based on these results, an RT × Visible Size × Separation function was proposed. These results suggest that the proposed function is a practicable predictor of search efficiency in real scenes. PMID:26030908
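
    The proposed RT × Visible Size × Separation function can be read as a regression of reaction time on the two scene features. As a hedged sketch (the data points below are invented, not from the study, and the study's actual functional form may differ), an ordinary least-squares fit of RT = b0 + b1·visible_size + b2·separation via the normal equations:

```python
# Hedged sketch: fit RT = b0 + b1*visible_size + b2*separation by ordinary
# least squares, solving the 3x3 normal equations with Gaussian elimination.
# The synthetic trials below are invented for illustration.

def fit_ols(rows):
    """rows: list of (visible_size, separation, rt). Returns [b0, b1, b2]."""
    xs = [[1.0, vs, sep] for vs, sep, _ in rows]   # design matrix rows
    ys = [rt for _, _, rt in rows]
    n = 3
    # Normal equations: (X^T X) b = X^T y
    A = [[sum(x[i] * x[j] for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(x[i] * y for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in reversed(range(n)):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))) / A[r][r]
    return coef

# Synthetic trials generated from RT = 400 - 2*visible_size - 5*separation,
# i.e., larger visible size and separation make search faster.
data = [(vs, sep, 400 - 2 * vs - 5 * sep)
        for vs in (10, 20, 30) for sep in (1, 2, 4)]
b0, b1, b2 = fit_ols(data)
print(round(b0), round(b1), round(b2))  # 400 -2 -5
```

    Negative slopes on both predictors correspond to the abstract's finding that increasing visible size and separation increased search efficiency.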

  10. The Influence of Color on the Perception of Scene Gist

    ERIC Educational Resources Information Center

    Castelhano, Monica S.; Henderson, John M.

    2008-01-01

    In 3 experiments the authors used a new contextual bias paradigm to explore how quickly information is extracted from a scene to activate gist, whether color contributes to this activation, and how color contributes, if it does. Participants were shown a brief presentation of a scene followed by the name of a target object. The target object could…

  11. The Importance of Information Localization in Scene Gist Recognition

    ERIC Educational Resources Information Center

    Loschky, Lester C.; Sethi, Amit; Simons, Daniel J.; Pydimarri, Tejaswi N.; Ochs, Daniel; Corbeille, Jeremy L.

    2007-01-01

    People can recognize the meaning or gist of a scene from a single glance, and a few recent studies have begun to examine the sorts of information that contribute to scene gist recognition. The authors of the present study used visual masking coupled with image manipulations (randomizing phase while maintaining the Fourier amplitude spectrum;…

  12. Inattentional blindness with the same scene at different scales.

    PubMed

    Apfelbaum, Henry L; Gambacorta, Christina; Woods, Russell L; Peli, Eli

    2010-03-01

    People with severely restricted peripheral visual fields have difficulty walking confidently and safely in the physical environment. Augmented vision devices that we are developing for low-vision rehabilitation implement vision multiplexing, providing two views of the same scene at two different scales (sizes), with a cartooned minified wide view overlaying a natural see-through view. Inattentional blindness may partially limit the utility of these devices as low-vision aids. Inattentional blindness, the apparent inability to notice significant but unexpected events in an unattended scene when attention is fixed on another scene, has classically been demonstrated by overlaying two unrelated game scenes, with unexpected events occurring in one scene while attention is maintained on the other scene by a distractor task. We hypothesized that context like that provided by the related wide view in our devices might mitigate inattentional blindness in a study with two simultaneous views of the same scene shown at different scales. It did not, and unexpected event detection rates were remarkably consistent with our and other mixed-scene studies. Still, detecting about half of the unexpected events bodes well for our use of vision aids that employ vision multiplexing. Without the aids, it is likely that many more events would be missed.

  13. High-fidelity real-time maritime scene rendering

    NASA Astrophysics Data System (ADS)

    Shyu, Hawjye; Taczak, Thomas M.; Cox, Kevin; Gover, Robert; Maraviglia, Carlos; Cahill, Colin

    2011-06-01

    The ability to simulate authentic engagements using real-world hardware is an increasingly important tool. For rendering maritime environments, scene generators must be capable of rendering radiometrically accurate scenes with correct temporal and spatial characteristics. When the simulation is used as input to real-world hardware or human observers, the scene generator must operate in real-time. This paper introduces a novel, real-time scene generation capability for rendering radiometrically accurate scenes of backgrounds and targets in maritime environments. The new model is an optimized and parallelized version of the US Navy CRUISE_Missiles rendering engine. It was designed to accept environmental descriptions and engagement geometry data from external sources, render a scene, transform the radiometric scene using the electro-optical response functions of a sensor under test, and output the resulting signal to real-world hardware. This paper reviews components of the scene rendering algorithm, and details the modifications required to run this code in real-time. A description of the simulation architecture and interfaces to external hardware and models is presented. Performance assessments of the frame rate and radiometric accuracy of the new code are summarized. This work was completed in FY10 under Office of Secretary of Defense (OSD) Central Test and Evaluation Investment Program (CTEIP) funding and will undergo a validation process in FY11.

  14. Three-dimensional scene capturing for the virtual reality display

    NASA Astrophysics Data System (ADS)

    Dong, Jingsheng; Sang, Xinzhu; Guo, Nan; Chen, Duo; Yan, Binbin; Wang, Kuiru; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    A virtual reality shooting and display system based on a multiple-degrees-of-freedom camera mount is designed and demonstrated. Three-dimensional scene display and wide-angle display can be achieved easily and quickly with the proposed system. The range of the viewing scene can be broadened with the image-stitching process, and the demonstrated system can achieve a wide-angle effect for image-mosaic applications. The system can also realize 3D scene display, which effectively reduces the complexity of 3D scene generation and provides a foundation for adding interactive characteristics to the 3D scene in the future. The system includes an adjustable bracket, computer software, and a virtual reality device. Multiple degrees of freedom of the adjustable bracket are provided to obtain 3D-scene source images and mosaic source images easily; five degrees of freedom are realized, including rotation, lifting, translation, convergence, and pitching. To realize the generation and display of three-dimensional scenes, two cameras are adjusted into a parallel state. After distortion elimination and calibration, the images are transferred to the virtual reality device for display. To realize wide-angle display, the cameras are adjusted into a "V" configuration; the preprocessing includes image matching and fusion to realize image stitching. The mosaic image is then transferred to the virtual reality device for display. The wide-angle 3D scene display is realized by adjusting between these different states.

  15. Scenes: Social Context in an Age of Contingency

    ERIC Educational Resources Information Center

    Silver, Daniel; Clark, Terry Nichols; Yanez, Clemente Jesus Navarro

    2010-01-01

    This article builds on an important but underdeveloped social science concept--the "scene" as a cluster of urban amenities--to contribute to social science theory and subspecialties such as urban and rural, class, race and gender studies. Scenes grow more important in less industrial, more expressively-oriented and contingent societies where…

  16. Emotional Scene Content Drives the Saccade Generation System Reflexively

    ERIC Educational Resources Information Center

    Nummenmaa, Lauri; Hyona, Jukka; Calvo, Manuel G.

    2009-01-01

    The authors assessed whether parafoveal perception of emotional content influences saccade programming. In Experiment 1, paired emotional and neutral scenes were presented to parafoveal vision. Participants performed voluntary saccades toward either of the scenes according to an imperative signal (color cue). Saccadic reaction times were faster…

  17. Being There: (Re)Making the Assessment Scene

    ERIC Educational Resources Information Center

    Gallagher, Chris W.

    2011-01-01

    I use Burkean analysis to show how neoliberalism undermines faculty assessment expertise and underwrites testing industry expertise in the current assessment scene. Contending that we cannot extricate ourselves from our limited agency in this scene until we abandon the familiar "stakeholder" theory of power, I propose a rewriting of the…

  18. A note on migration with borrowing constraints.

    PubMed

    Ghatak, S; Levine, P

    1994-12-01

    "This note examines an important conflict between the theory and evidence on migration in LDCs. While the Harris-Todaro class of models explain the phenomenon of migration mainly by expected income differential between the economically advanced and the backward regions, the actual evidence in some cases suggests that migration could actually rise following a rise in income in backward areas. We resolve this puzzle by analysing migration in the context of the existence of imperfect credit markets in LDCs. We show that under certain plausible conditions, the rate of migration from the rural to the urban areas may actually rise when rural wages rise, as they ease the constraints on borrowing by potential migrants."

  19. Improving text recognition by distinguishing scene and overlay text

    NASA Astrophysics Data System (ADS)

    Quehl, Bernhard; Yang, Haojin; Sack, Harald

    2015-02-01

    Video texts are closely related to the content of a video. They provide a valuable source for indexing and interpretation of video data. Text detection and recognition tasks in images or videos typically distinguish between overlay and scene text. Overlay text is artificially superimposed on the image at the time of editing, while scene text is text captured by the recording system. Typically, OCR systems are specialized for one kind of text type. However, in video images both types of text can be found. In this paper, we propose a method to automatically distinguish between overlay and scene text in order to dynamically control and optimize the post-processing steps that follow text detection. Based on a combination of features, a Support Vector Machine (SVM) is trained to classify scene and overlay text. We show how this distinction between overlay and scene text improves the word recognition rate. The accuracy of the proposed methods has been evaluated using publicly available test data sets.
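
    The classification step described in this abstract can be sketched with a toy linear SVM trained by hinge-loss sub-gradient descent. The three features (edge contrast, stroke-width variance, alignment score) and all numeric values below are invented for illustration; they are not the paper's actual feature combination.

```python
# Minimal linear SVM (hinge loss, sub-gradient descent) illustrating an
# overlay-vs-scene text classifier. Feature vectors are hypothetical:
# [edge contrast, stroke-width variance, horizontal alignment score].
def train_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):           # yi in {+1 (overlay), -1 (scene)}
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:                 # margin violated: hinge gradient step
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:                          # only apply regularization
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    return "overlay" if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else "scene"

X = [[0.9, 0.1, 1.0], [0.8, 0.2, 0.9],   # overlay: sharp, uniform, aligned
     [0.4, 0.7, 0.3], [0.3, 0.8, 0.2]]   # scene: softer, varied, skewed
y = [1, 1, -1, -1]
w, b = train_svm(X, y)
print(predict(w, b, [0.85, 0.15, 0.95]))  # a crisp, uniform text region
```

    Once a region is labeled, a recognizer tuned for that text type could be selected, which is the "dynamic control of post-processing" the abstract describes.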

  20. Investigation of scene identification algorithms for radiation budget measurements

    NASA Technical Reports Server (NTRS)

    Diekmann, F. J.

    1986-01-01

    The computation of the Earth radiation budget from satellite measurements requires identification of the scene in order to select spectral factors and bidirectional models. A scene identification procedure is developed for AVHRR SW and LW data by using two radiative transfer models. The AVHRR GAC pixels are then attached to corresponding ERBE pixels, and the results are sorted into scene identification probability matrices. These scene intercomparisons show that the ERBE results generally tend to underestimate cloudiness over ocean at high cloud amounts relative to the AVHRR results, e.g., mostly cloudy instead of overcast, or partly cloudy instead of mostly cloudy. Reasons for this are explained. Preliminary estimates of the errors in exitances due to scene misidentification demonstrate a strong dependence on the probability matrices. While the longwave error can generally be neglected, the shortwave deviations reach maximum values of more than 12% of the respective exitances.
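
    The probability matrices mentioned here can be illustrated in miniature: matched pixel pairs (AVHRR class, ERBE class) are cross-tabulated, and each row is normalized to give the probability of each ERBE class conditioned on the AVHRR class. The class labels follow the abstract; the pixel pairs below are invented.

```python
# Toy scene-identification probability matrix: count co-occurrences of
# (AVHRR class, ERBE class) over matched pixels, then normalize per row.
from collections import Counter

classes = ["clear", "partly_cloudy", "mostly_cloudy", "overcast"]
matched_pairs = [                          # invented matched-pixel labels
    ("overcast", "mostly_cloudy"),         # ERBE underestimates cloudiness
    ("overcast", "mostly_cloudy"),
    ("overcast", "overcast"),
    ("mostly_cloudy", "partly_cloudy"),
    ("mostly_cloudy", "mostly_cloudy"),
    ("clear", "clear"),
]

counts = Counter(matched_pairs)
matrix = {}
for a in classes:
    row_total = sum(counts[(a, e)] for e in classes)
    if row_total:                          # keep only observed AVHRR classes
        matrix[a] = {e: counts[(a, e)] / row_total for e in classes}

# P(ERBE says mostly_cloudy | AVHRR says overcast) in this toy sample:
print(round(matrix["overcast"]["mostly_cloudy"], 2))  # → 0.67
```

    In this toy sample the off-diagonal mass sits below the diagonal for cloudy classes, mirroring the underestimation tendency the abstract reports.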

  1. Implementation of jump-diffusion algorithms for understanding FLIR scenes

    NASA Astrophysics Data System (ADS)

    Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.

    1995-07-01

    Our pattern theoretic approach to the automated understanding of forward-looking infrared (FLIR) images brings the traditionally separate endeavors of detection, tracking, and recognition together into a unified jump-diffusion process. New objects are detected and object types are recognized through discrete jump moves. Between jumps, the location and orientation of objects are estimated via continuous diffusions. A hypothesized scene, simulated from the emissive characteristics of the hypothesized scene elements, is compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. The jump-diffusion process empirically generates the posterior distribution. Both the diffusion and jump operations involve the simulation of a scene produced by a hypothesized configuration. Scene simulation is most effectively accomplished by pipelined rendering engines such as those from Silicon Graphics. We demonstrate the execution of our algorithm on a Silicon Graphics Onyx/RealityEngine.

  2. Indoor scene classification of robot vision based on cloud computing

    NASA Astrophysics Data System (ADS)

    Hu, Tao; Qi, Yuxiao; Li, Shipeng

    2016-07-01

    For intelligent service robots, indoor scene classification is an important issue. To overcome the weak real-time performance of conventional algorithms, a new method based on cloud computing is proposed for indoor scene classification using global image features. Using the MapReduce method, the global PHOG feature of each indoor scene image is extracted in parallel, and the feature vectors are used to train a decision classifier through an SVM concurrently. The indoor scene is then classified by the decision classifier. To verify the algorithm's performance, we carried out an experiment with 350 typical indoor scene images from the MIT LabelMe image library. Experimental results show that the proposed algorithm attains better real-time performance: it is generally 1.4-2.1 times faster than traditional classification methods that rely on a single machine, while keeping a stable classification accuracy of about 70%.
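
    The map/reduce pattern of this pipeline can be mimicked in miniature: a "map" step computes a stand-in PHOG-like gradient histogram per image in parallel, and a "reduce" step gathers the feature vectors for classifier training. The tiny images and the 4-bin histogram are invented for illustration and are far simpler than a real PHOG descriptor.

```python
# Map step: extract a toy orientation-like histogram per image in parallel
# (stands in for MapReduce workers). Reduce step: collect feature vectors.
from concurrent.futures import ThreadPoolExecutor

def phog_like_histogram(image, bins=4):
    """Toy stand-in for PHOG: bin successive-pixel intensity differences."""
    hist = [0] * bins
    flat = [p for row in image for p in row]
    for a, b in zip(flat, flat[1:]):
        hist[abs(a - b) % bins] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]       # normalized feature vector

images = [
    [[0, 1, 2], [3, 4, 5]],                # smooth gradient
    [[0, 3, 0], [3, 0, 3]],                # high-frequency texture
]

with ThreadPoolExecutor(max_workers=2) as pool:
    features = list(pool.map(phog_like_histogram, images))  # map step

# Reduce step: the gathered vectors would now feed SVM training.
print(len(features), len(features[0]))     # → 2 4
```

    In a real deployment the map step would run on cluster nodes rather than local threads, which is where the reported speedup over single-machine classification comes from.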

  3. The actual goals of geoethics

    NASA Astrophysics Data System (ADS)

    Nemec, Vaclav

    2014-05-01

    The most pressing goals of geoethics were formulated as results of the International Conference on Geoethics (October 2013), held at Pribram (Czech Republic), the birthplace of geoethics: In the sphere of education and public enlightenment, a needed minimum of Earth-science know-how should be intensively promoted, together with the cultivation of an ethical way of thinking and acting, for the sustainable well-being of society. The current activities of the Intergovernmental Panel on Climate Change are not sustainable given the existing knowledge of the Earth sciences (as presented in the results of the 33rd and 34th International Geological Congresses); this knowledge should be incorporated into any further work of the IPCC. In the sphere of legislation, the following steps are needed in broad international co-operation: - to re-formulate the term "false alarm" and its legal consequences, - to demand, very consistently, the needed evaluation of existing risks, - to solve problems of the rights of individuals and minorities in cases of the optimum use of mineral resources and of the optimum protection of the local population against emergency dangers and disasters; the common good (well-being) must be considered the priority when solving ethical dilemmas. The precautionary principle should be applied in any decision-making process. Earth scientists presenting their expert opinions are not exempted from civil, administrative or even criminal liability. Details must be established by national law and jurisprudence. The well-known case of the L'Aquila earthquake (2009) should serve as a serious warning because of the proven misuse of geoethics to protect the top Italian seismologists who were held responsible and sentenced for the inadequate, superficial behaviour that caused many human casualties. Another recent scandal, the Himalayan fossil fraud, is also documented.
Support is needed for any effort to analyze and disclose the problems of the deformation of the contemporary

  4. Impairments of auditory scene analysis in Alzheimer's disease.

    PubMed

    Goll, Johanna C; Kim, Lois G; Ridgway, Gerard R; Hailstone, Julia C; Lehmann, Manja; Buckley, Aisling H; Crutch, Sebastian J; Warren, Jason D

    2012-01-01

    Parsing of sound sources in the auditory environment or 'auditory scene analysis' is a computationally demanding cognitive operation that is likely to be vulnerable to the neurodegenerative process in Alzheimer's disease. However, little information is available concerning auditory scene analysis in Alzheimer's disease. Here we undertook a detailed neuropsychological and neuroanatomical characterization of auditory scene analysis in a cohort of 21 patients with clinically typical Alzheimer's disease versus age-matched healthy control subjects. We designed a novel auditory dual stream paradigm based on synthetic sound sequences to assess two key generic operations in auditory scene analysis (object segregation and grouping) in relation to simpler auditory perceptual, task and general neuropsychological factors. In order to assess neuroanatomical associations of performance on auditory scene analysis tasks, structural brain magnetic resonance imaging data from the patient cohort were analysed using voxel-based morphometry. Compared with healthy controls, patients with Alzheimer's disease had impairments of auditory scene analysis, and segregation and grouping operations were comparably affected. Auditory scene analysis impairments in Alzheimer's disease were not wholly attributable to simple auditory perceptual or task factors; however, the between-group difference relative to healthy controls was attenuated after accounting for non-verbal (visuospatial) working memory capacity. These findings demonstrate that clinically typical Alzheimer's disease is associated with a generic deficit of auditory scene analysis. Neuroanatomical associations of auditory scene analysis performance were identified in posterior cortical areas including the posterior superior temporal lobes and posterior cingulate. 
This work suggests a basis for understanding a class of clinical symptoms in Alzheimer's disease and for delineating cognitive mechanisms that mediate auditory scene analysis

  5. Hysteresis in the dynamic perception of scenes and objects.

    PubMed

    Poltoratski, Sonia; Tong, Frank

    2014-10-01

    Scenes and objects are effortlessly processed and integrated by the human visual system. Given the distinct neural and behavioral substrates of scene and object processing, it is likely that individuals sometimes preferentially rely on one process or the other when viewing canonical "scene" or "object" stimuli. This would allow the visual system to maximize the specific benefits of these 2 types of processing. It is less obvious which of these modes of perception would be invoked during naturalistic visual transition between a focused view of a single object and an expansive view of an entire scene, particularly at intermediate views that may not be assigned readily to either stimulus category. In the current study, we asked observers to report their online perception of such dynamic image sequences, which zoomed and panned between a canonical view of a single object and an entire scene. We found a large and consistent effect of prior perception, or hysteresis, on the classification of the sequence: observers classified the sequence as an object for several seconds longer if the trial started at the object view and zoomed out, whereas scenes were perceived for longer on trials beginning with a scene view. This hysteresis effect resisted several manipulations of the movie stimulus and of the task performed, but hinged on the perceptual history built by unidirectional progression through the image sequence. Multiple experiments confirmed that this hysteresis effect was not purely decisional and was more prominent for transitions between corresponding objects and scenes than between other high-level stimulus classes. This finding suggests that the competitive mechanisms underlying hysteresis may be especially prominent in the perception of objects and scenes. We propose that hysteresis aids in disambiguating perception during naturalistic visual transitions, which may facilitate a dynamic balance between scene and object processing to enhance processing efficiency.

  6. Does object view influence the scene consistency effect?

    PubMed

    Sastyin, Gergo; Niimi, Ryosuke; Yokosawa, Kazuhiko

    2015-04-01

    Traditional research on the scene consistency effect only used clearly recognizable object stimuli to show mutually interactive context effects for both the object and background components on scene perception (Davenport & Potter in Psychological Science, 15, 559-564, 2004). However, in real environments, objects are viewed from multiple viewpoints, including an accidental, hard-to-recognize one. When the observers named target objects in scenes (Experiments 1a and 1b, object recognition task), we replicated the scene consistency effect (i.e., there was higher accuracy for the objects with consistent backgrounds). However, there was a significant interaction effect between consistency and object viewpoint, which indicated that the scene consistency effect was more important for identifying objects in the accidental view condition than in the canonical view condition. Therefore, the object recognition system may rely more on the scene context when the object is difficult to recognize. In Experiment 2, the observers identified the background (background recognition task) while the scene consistency and object views were manipulated. The results showed that object viewpoint had no effect, while the scene consistency effect was observed. More specifically, the canonical and accidental views both equally provided contextual information for scene perception. These findings suggested that the mechanism for conscious recognition of objects could be dissociated from the mechanism for visual analysis of object images that were part of a scene. The "context" that the object images provided may have been derived from its view-invariant, relatively low-level visual features (e.g., color), rather than its semantic information.

  7. Fixations on objects in natural scenes: dissociating importance from salience

    PubMed Central

    't Hart, Bernard M.; Schmidt, Hannah C. E. F.; Roth, Christine; Einhäuser, Wolfgang

    2013-01-01

    The relation of selective attention to understanding of natural scenes has been subject to intense behavioral research and computational modeling, and gaze is often used as a proxy for such attention. The probability of an image region to be fixated typically correlates with its contrast. However, this relation does not imply a causal role of contrast. Rather, contrast may relate to an object's “importance” for a scene, which in turn drives attention. Here we operationalize importance by the probability that an observer names the object as characteristic for a scene. We modify luminance contrast of either a frequently named (“common”/“important”) or a rarely named (“rare”/“unimportant”) object, track the observers' eye movements during scene viewing and ask them to provide keywords describing the scene immediately after. When no object is modified relative to the background, important objects draw more fixations than unimportant ones. Increases of contrast make an object more likely to be fixated, irrespective of whether it was important for the original scene, while decreases in contrast have little effect on fixations. Any contrast modification makes originally unimportant objects more important for the scene. Finally, important objects are fixated more centrally than unimportant objects, irrespective of contrast. Our data suggest a dissociation between object importance (relevance for the scene) and salience (relevance for attention). If an object obeys natural scene statistics, important objects are also salient. However, when natural scene statistics are violated, importance and salience are differentially affected. Object salience is modulated by the expectation about object properties (e.g., formed by context or gist), and importance by the violation of such expectations. In addition, the dependence of fixated locations within an object on the object's importance suggests an analogy to the effects of word frequency on landing positions in

  8. Just Another Social Scene: Evidence for Decreased Attention to Negative Social Scenes in High-Functioning Autism

    ERIC Educational Resources Information Center

    Santos, Andreia; Chaminade, Thierry; Da Fonseca, David; Silva, Catarina; Rosset, Delphine; Deruelle, Christine

    2012-01-01

    The adaptive threat-detection advantage takes the form of a preferential orienting of attention to threatening scenes. In this study, we compared attention to social scenes in 15 high-functioning individuals with autism (ASD) and matched typically developing (TD) individuals. Eye-tracking was recorded while participants were presented with pairs…

  9. Memory efficient atmospheric effects modeling for infrared scene generators

    NASA Astrophysics Data System (ADS)

    Kavak, Çaǧlar; Özsaraç, Seçkin

    2015-05-01

    The infrared (IR) energy radiated from any source passes through the atmosphere before reaching the sensor. As a result, the total signature captured by the IR sensor is significantly modified by atmospheric effects. The dominant physical quantities that constitute these atmospheric effects are the atmospheric transmittance and the atmospheric path radiance: the incoming IR radiation is attenuated by the transmittance, and the path radiance is added on top of the attenuated radiation. In IR scene simulations, OpenGL is widely used for rendering purposes. In the literature there are studies which model the atmospheric effects in an IR band using OpenGL's exponential fog model, as suggested by Beer's law. In the standard OpenGL pipeline, the fog model needs single equivalent OpenGL variables for the transmittance and path radiance, which actually depend both on the distance between the source and the sensor and on the wavelength of interest. However, in conditions where the range dependency cannot be modeled as an exponential function, it is not accurate to replace the atmospheric quantities with a single parameter. The introduction of the OpenGL Shading Language (GLSL) has enabled developers to use the GPU more flexibly. In this paper, a novel method is proposed for atmospheric effects modeling using least squares estimation with polynomial fitting, implemented by programmable OpenGL shader programs built with GLSL. In this context, a radiative transfer model code is used to obtain the transmittance and path radiance data. Then, polynomial fits are computed for the range dependency of these variables. Hence, the atmospheric effects model data that must be uploaded to GPU memory is significantly reduced. Moreover, the error due to fitting is negligible as long as narrow IR bands are used.
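
    The fitting step can be sketched as follows: replace a per-range transmittance lookup with a low-order polynomial, so only the coefficients need to live in GPU memory. The Beer-law-style transmittance curve below is synthetic; a real system would fit data produced by a radiative transfer code, and the fitted coefficients would be evaluated inside a GLSL shader rather than in Python.

```python
# Least-squares polynomial fit of a synthetic range-dependent transmittance.
import numpy as np

ranges_km = np.linspace(0.1, 10.0, 50)
# Synthetic transmittance: exponential decay with a mild extra range term.
tau = np.exp(-0.12 * ranges_km) * (1.0 - 0.005 * ranges_km)

coeffs = np.polyfit(ranges_km, tau, deg=3)   # cubic least-squares fit
fit = np.polyval(coeffs, ranges_km)

# Four coefficients now stand in for the whole lookup table.
max_err = float(np.max(np.abs(fit - tau)))
print(len(coeffs), max_err < 0.01)           # → 4 True
```

    For a smooth, narrow-band curve like this one the cubic fit is accurate to well under a percent, which matches the abstract's claim that fitting error is negligible for narrow IR bands.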

  10. Crime scene investigation (as seen on TV).

    PubMed

    Durnal, Evan W

    2010-06-15

    A mysterious green ooze is injected into a brightly illuminated and humming machine; 10 seconds later, a printout containing a complete biography of the substance is at the fingertips of an attractive young investigator who exclaims "we found it!" We have all seen this event occur countless times on any and all of the three CSI dramas, Cold Case, Crossing Jordan, and many more. With this new style of "infotainment" (Surette, 2007) comes an increasingly blurred line between the hard facts of reality and the soft, quick solutions of entertainment. With these advances in technology, how can crime rates be anything but plummeting as would-be criminals cringe at the idea of leaving the smallest speck of themselves at a crime scene? Surely there are very few serious crimes that go unpunished in today's world of high-tech, fast-paced gadgetry. Science and technology have come a great distance since Sir Arthur Conan Doyle first described the first famous forensic scientist (Sherlock Holmes), but they still have light-years to go.

  11. A Crime Scene Fabricated as Suicide

    PubMed Central

    Amararatne, RRG Sriyantha

    2017-01-01

    When ascertaining the manner of death, the forensic pathologist should be careful, because in some instances attempts are made by criminals to conceal homicides as suicides. The case under discussion highlights the contribution of the forensic pathologist in ascertaining the manner of death in firearm fatalities. The deceased was a poacher whose dead body was found in a cashew land with his shotgun lying over him. The shirt had a roughly circular defect with a muzzle mark and a burnt, blackened margin. Beneath that, on the front of the left upper chest, a 2 cm diameter circular perforated laceration with a muzzle imprint and a burnt, blackened margin was found; shelving was found at the upper margin. A chest X-ray showed a downward pellet distribution. The cause of death was chest injuries due to pellets discharged from a smooth-bore weapon. The length of the upper-arm reach was 65 cm (25 inches) and the length from the muzzle to the trigger was 79 cm (31 inches). In conclusion, it was found to be a fabricated suicide scene and the manner of death was ascertained as homicide. This reiterates that the postmortem investigation of firearm deaths should be performed by, or conducted under the direct supervision of, a forensic specialist to deliver justice. PMID:28384886

  12. Scotopic hue percepts in natural scenes

    PubMed Central

    Elliott, Sarah L.; Cao, Dingcai

    2012-01-01

    Traditional trichromatic theories of color vision conclude that color perception is not possible under scotopic illumination in which only one type of photoreceptor, rods, is active. The current study demonstrates the existence of scotopic color perception and indicates that perceived hue is influenced by spatial context and top-down processes of color perception. Experiment 1 required observers to report the perceived hue in various natural scene images under purely rod-mediated vision. The results showed that when the test patch had low variation in the luminance distribution and was a decrement in luminance compared to the surrounding area, reddish or orangish percepts were more likely to be reported compared to all other percepts. In contrast, when the test patch had a high variation and was an increment in luminance, the probability of perceiving blue, green, or yellow hues increased. In addition, when observers had a strong, but singular, daylight hue association for the test patch, color percepts were reported more often and hues appeared more saturated compared to patches with no daylight hue association. This suggests that experience in daylight conditions modulates the bottom-up processing for rod-mediated color perception. In Experiment 2, observers reported changes in hue percepts for a test ring surrounded by inducing rings that varied in spatial context. In sum, the results challenge the classic view that rod vision is achromatic and suggest that scotopic hue perception is mediated by cortical mechanisms. PMID:24233245

  13. Visual conspicuity of objects in complex scenes

    NASA Astrophysics Data System (ADS)

    Boersema, Theo; Zwaga, Harm J. G.

    2000-06-01

    In many everyday situations people have to locate a particular object in a cluttered environment, for instance a routing sign among commercial signs in an airport terminal. The object's conspicuity determines the efficiency of the search. The literature on human visual search does not unequivocally answer the question how a cluttered environment affects target conspicuity. A method was developed and validated to measure target conspicuity and the effect of distractors, using the case of routing signs (blue with white lettering) and commercial signs (non-blue) in large public buildings as a vehicle. The stimulus fields are complex, computer-generated images, which mimic natural scenes in public buildings but have no apparent meaning. Target conspicuity is operationalized as the time a subject needs to locate the target; this search time is derived from the subject's eye movements. To avoid artifacts of perceptual learning, the number of trials per subject is limited. Experiments in which this method was used clearly demonstrated that conspicuity results from the combined action of the object's own physical properties in relation to those of its environment and of the observer's perceptual and cognitive properties and current intention. Thus, conspicuity never depends merely on the characteristics of the visual stimulus.

  14. The primal scene and Picasso's Guernica.

    PubMed

    Hartke, R

    2000-02-01

    The author examines a group of works by Picasso dating from the late 1930s in terms of the artist's experiences as documented by his biographers and of primal-scene fantasies as described in the field of psychoanalysis by, in particular, Freud and Klein. Pointing out that the artist himself is on record as inviting such a consideration, he contends that these fantasies constitute the latent motivating force behind one of Picasso's most famous paintings, the mural Guernica, and a number of other productions from the same period. Biographical accounts are drawn upon to show how aspects of his inner world are revealed in the specific works described and reproduced in this paper. The role of women is shown to have been particularly relevant. The author demonstrates how Picasso's constant pattern of triangular relationships culminated in his personal crisis of 1935, which, together with the Spanish Civil War, reflecting as it did the conflicts of his internal and external relations, contributed to the production of the works in this group. The artist is seen as attempting to work through and make reparation for envious attacks on the parental objects, but it is pointed out that art works should not be assessed by the criterion of therapeutic change.

  15. Dense Correspondences across Scenes and Scales.

    PubMed

    Tau, Moria; Hassner, Tal

    2016-05-01

    We seek a practical method for establishing dense correspondences between two images with similar content, but possibly different 3D scenes. One of the challenges in designing such a system is the local scale differences of objects appearing in the two images. Previous methods often considered only a few image pixels, matching only pixels for which stable scales could be reliably estimated. Recently, others have considered dense correspondences, but with substantial costs associated with generating, storing and matching scale-invariant descriptors. Our work is motivated by the observation that pixels in the image have contexts, the pixels around them, which may be exploited in order to reliably estimate local scales. We make the following contributions. (i) We show that scales estimated at sparse interest points may be propagated to neighboring pixels where this information cannot be reliably determined. Doing so allows scale-invariant descriptors to be extracted anywhere in the image. (ii) We explore three means of propagating this information: using the scales at detected interest points, using the underlying image information to guide scale propagation in each image separately, and using both images together. Finally, (iii) we provide extensive qualitative and quantitative results, demonstrating that scale propagation allows accurate dense correspondences to be obtained even between very different images, with little computational cost beyond that required by existing methods.
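
    Contribution (i) can be caricatured with the crudest possible propagation rule: every pixel inherits the scale of its nearest interest point. The paper explores more refined, image-guided propagation; the seed coordinates and scales below are invented.

```python
# Nearest-seed scale propagation: each pixel takes the scale detected at
# the closest sparse interest point, so a scale is defined everywhere.
seeds = [((2, 3), 1.5), ((8, 7), 3.0)]     # ((x, y), detected scale)

def propagated_scale(pixel):
    """Return the scale of the interest point nearest to `pixel`."""
    return min(
        seeds,
        key=lambda s: (s[0][0] - pixel[0]) ** 2 + (s[0][1] - pixel[1]) ** 2,
    )[1]

# Pixels far from any seed still receive a usable scale estimate,
# allowing a scale-invariant descriptor to be extracted there.
print(propagated_scale((1, 2)), propagated_scale((9, 9)))  # → 1.5 3.0
```

    With a scale defined at every pixel, a scale-invariant descriptor can then be computed densely rather than only at detected keypoints.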

  16. Adopting Abstract Images for Semantic Scene Understanding.

    PubMed

    Zitnick, C Lawrence; Vedantam, Ramakrishna; Parikh, Devi

    2016-04-01

    Relating visual information to its linguistic semantic meaning remains an open and challenging area of research. The semantic meaning of images depends on the presence of objects, their attributes and their relations to other objects. But precisely characterizing this dependence requires extracting complex visual information from an image, which is in general a difficult and yet unsolved problem. In this paper, we propose studying semantic information in abstract images created from collections of clip art. Abstract images provide several advantages over real images. They allow for the direct study of how to infer high-level semantic information, since they remove the reliance on noisy low-level object, attribute and relation detectors, or the tedious hand-labeling of real images. Importantly, abstract images also allow the ability to generate sets of semantically similar scenes. Finding analogous sets of real images that are semantically similar would be nearly impossible. We create 1,002 sets of 10 semantically similar abstract images with corresponding written descriptions. We thoroughly analyze this dataset to discover semantically important features, the relations of words to visual features and methods for measuring semantic similarity. Finally, we study the relation between the saliency and memorability of objects and their semantic importance.

  17. Analyzing visual signals as visual scenes.

    PubMed

    Allen, William L; Higham, James P

    2013-07-01

    The study of visual signal design is gaining momentum as techniques for studying signals become more sophisticated and more freely available. In this paper we discuss methods for analyzing the color and form of visual signals, for integrating signal components into visual scenes, and for producing visual signal stimuli for use in psychophysical experiments. Our recommended methods aim to be rigorous, detailed, quantitative, objective, and where possible based on the perceptual representation of the intended signal receiver(s). As methods for analyzing signal color and luminance have been outlined in previous publications we focus on analyzing form information by discussing how statistical shape analysis (SSA) methods can be used to analyze signal shape, and spatial filtering to analyze repetitive patterns. We also suggest the use of vector-based approaches for integrating multiple signal components. In our opinion elliptical Fourier analysis (EFA) is the most promising technique for shape quantification but we await the results of empirical comparison of techniques and the development of new shape analysis methods based on the cognitive and perceptual representations of receivers. Our manuscript should serve as an introductory guide to those interested in measuring visual signals, and while our examples focus on primate signals, the methods are applicable to quantifying visual signals in most taxa.

  18. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Technical Reports Server (NTRS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.
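
    The idea of recovering a scene from a small number of spatial frequency components can be illustrated in miniature: keep only the low-frequency coefficients of a smooth synthetic "scene" and invert the transform. The Gaussian scene and the frequency budget are invented; real aperture-synthesis reconstruction uses far more sophisticated algorithms than simple truncation.

```python
# Reconstruct a smooth scene from a sparse set of low spatial frequencies.
import numpy as np

n = 32
yy, xx = np.mgrid[0:n, 0:n]
scene = np.exp(-((xx - 16) ** 2 + (yy - 12) ** 2) / 40.0)  # smooth target

spectrum = np.fft.fft2(scene)
keep = 8                                   # low-frequency budget per axis
mask = np.zeros((n, n))
mask[:keep, :keep] = 1                     # FFT layout: low frequencies
mask[:keep, -keep:] = 1                    # live in the array corners
mask[-keep:, :keep] = 1
mask[-keep:, -keep:] = 1

recon = np.fft.ifft2(spectrum * mask).real
err = float(np.max(np.abs(recon - scene)))
print(err < 0.05)                          # smooth scenes survive truncation
```

    Because the smooth scene concentrates its energy at low spatial frequencies, the truncated spectrum reproduces it almost exactly, which is the property interferometric imaging exploits with a sparsely filled aperture.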

  19. An LED-based lighting system for acquiring multispectral scenes

    NASA Astrophysics Data System (ADS)

    Parmar, Manu; Lansel, Steven; Farrell, Joyce

    2012-01-01

    The availability of multispectral scene data makes it possible to simulate a complete imaging pipeline for digital cameras, beginning with a physically accurate radiometric description of the original scene followed by optical transformations to irradiance signals, models for sensor transduction, and image processing for display. Certain scenes with animate subjects, e.g., humans, pets, etc., are of particular interest to consumer camera manufacturers because of their ubiquity in common images, and the importance of maintaining colorimetric fidelity for skin. Typical multispectral acquisition methods rely on techniques that use multiple acquisitions of a scene with a number of different optical filters or illuminants. Such schemes require long acquisition times and are best suited for static scenes. In scenes where animate objects are present, movement leads to problems with registration and methods with shorter acquisition times are needed. To address the need for shorter image acquisition times, we developed a multispectral imaging system that captures multiple acquisitions during a rapid sequence of differently colored LED lights. In this paper, we describe the design of the LED-based lighting system and report results of our experiments capturing scenes with human subjects.

  20. Decoding Representations of Scenes in the Medial Temporal Lobes

    PubMed Central

    Bonnici, Heidi M; Kumaran, Dharshan; Chadwick, Martin J; Weiskopf, Nikolaus; Hassabis, Demis; Maguire, Eleanor A

    2012-01-01

    Recent theoretical perspectives have suggested that the function of the human hippocampus, like its rodent counterpart, may be best characterized in terms of its information processing capacities. In this study, we use a combination of high-resolution functional magnetic resonance imaging, multivariate pattern analysis, and a simple decision making task, to test specific hypotheses concerning the role of the medial temporal lobe (MTL) in scene processing. We observed that while information that enabled two highly similar scenes to be distinguished was widely distributed throughout the MTL, more distinct scene representations were present in the hippocampus, consistent with its role in performing pattern separation. As well as viewing the two similar scenes, during scanning participants also viewed morphed scenes that spanned a continuum between the original two scenes. We found that patterns of hippocampal activity during morph trials, even when perceptual inputs were held entirely constant (i.e., in 50% morph trials), showed a robust relationship with participants' choices in the decision task. Our findings provide evidence for a specific computational role for the hippocampus in sustaining detailed representations of complex scenes, and shed new light on how the information processing capacities of the hippocampus may influence the decision making process. © 2011 Wiley Periodicals, Inc. PMID:21656874

  1. Semantic categorization precedes affective evaluation of visual scenes.

    PubMed

    Nummenmaa, Lauri; Hyönä, Jukka; Calvo, Manuel G

    2010-05-01

    We compared the primacy of affective versus semantic categorization by using forced-choice saccadic and manual response tasks. Participants viewed paired emotional and neutral scenes involving humans or animals flashed rapidly in extrafoveal vision. Participants were instructed to categorize the targets by saccading toward the location occupied by a predefined target scene. The affective task involved saccading toward an unpleasant or pleasant scene, and the semantic task involved saccading toward a scene containing an animal. Both affective and semantic target scenes could be reliably categorized in less than 220 ms, but semantic categorization was always faster than affective categorization. This finding was replicated with singly, foveally presented scenes and manual responses. In comparison with foveal presentation, extrafoveal presentation slowed down the categorization of affective targets more than that of semantic targets. Exposure threshold for accurate categorization was lower for semantic information than for affective information. Superordinate-, basic-, and subordinate-level semantic categorizations were faster than affective evaluation. We conclude that affective analysis of scenes cannot bypass object recognition. Rather, semantic categorization precedes and is required for affective evaluation.

  2. Detecting and representing predictable structure during auditory scene analysis

    PubMed Central

    Sohoglu, Ediz; Chait, Maria

    2016-01-01

    We use psychophysics and MEG to test how sensitivity to input statistics facilitates auditory-scene-analysis (ASA). Human subjects listened to ‘scenes’ comprised of concurrent tone-pip streams (sources). On occasional trials a new source appeared partway. Listeners were more accurate and quicker to detect source appearance in scenes comprised of temporally-regular (REG), rather than random (RAND), sources. MEG in passive listeners and those actively detecting appearance events revealed increased sustained activity in auditory and parietal cortex in REG relative to RAND scenes, emerging ~400 ms after scene onset. Over and above this, appearance in REG scenes was associated with increased responses relative to RAND scenes. The effect of temporal structure on appearance-evoked responses was delayed when listeners were focused on the scenes relative to when listening passively, consistent with the notion that attention reduces ‘surprise’. Overall, the results implicate a mechanism that tracks predictability of multiple concurrent sources to facilitate active and passive ASA. DOI: http://dx.doi.org/10.7554/eLife.19113.001 PMID:27602577

  3. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Astrophysics Data System (ADS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-12-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.
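    The idea of recovering a scene from a relatively small number of spatial frequency components can be illustrated with a toy one-dimensional example. The reconstruction below is only the naive zero-filled inverse discrete Fourier transform (the TRW algorithms themselves are not described in the abstract); it assumes the measured components include the DC term and conjugate-symmetric pairs, so the result is real-valued.

```python
import cmath

def dft(x):
    """Discrete Fourier transform of a real sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def reconstruct(X, keep, N):
    """Inverse DFT using only the spatial-frequency indices in `keep`;
    unmeasured components are treated as zero (a zero-filled aperture)."""
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in keep).real / N
            for n in range(N)]

scene = [0.0, 0.0, 1.0, 4.0, 6.0, 4.0, 1.0, 0.0]   # toy 1-D brightness profile
X = dft(scene)

full = reconstruct(X, range(len(scene)), len(scene))    # all components kept
sparse = reconstruct(X, [0, 1, 2, 6, 7], len(scene))    # low frequencies only
```

    With every component kept the scene is recovered exactly; with only the low-frequency subset the reconstruction is a smoothed approximation that still preserves the total (DC) brightness, which is the trade-off a sparsely filled aperture makes.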

  4. Robust encoding of scene anticipation during human spatial navigation

    PubMed Central

    Shikauchi, Yumi; Ishii, Shin

    2016-01-01

    In a familiar city, people can recall scene views (e.g., a particular street corner scene) they could encounter again in the future. Complex objects with multiple features are represented by multiple neural units (channels) in the brain, but when anticipating a scene view, the kind of feature that is assigned to a specific channel is unknown. Here, we studied neural encoding of scene view anticipation during spatial navigation, using a novel data-driven analysis to evaluate encoding channels. Our encoding models, based on functional magnetic resonance imaging (fMRI) activity, provided channel error correction via redundant channel assignments that reflected the navigation environment. We also found that our encoding models strongly reflected brain activity in the inferior parietal gyrus and precuneus, and that details of future scenes were locally represented in the superior prefrontal gyrus and temporal pole. Furthermore, a decoder associated with the encoding models accurately predicted future scene views in both passive and active navigation. These results suggest that the human brain uses scene anticipation, mediated especially by parietal and medial prefrontal cortical areas, as a robust and effective navigation process. PMID:27874089

  5. Viewing Complex, Dynamic Scenes “Through the Eyes” of Another Person: The Gaze-Replay Paradigm

    PubMed Central

    Morin Duchesne, Xavier; Kagemann, Sebastian Alexander; Kennedy, Daniel Patrick

    2015-01-01

    We present a novel “Gaze-Replay” paradigm that allows the experimenter to directly test how particular patterns of visual input—generated from people’s actual gaze patterns—influence the interpretation of the visual scene. Although this paradigm can potentially be applied across domains, here we applied it specifically to social comprehension. Participants viewed complex, dynamic scenes through a small window displaying only the foveal gaze pattern of a gaze “donor.” This was intended to simulate the donor’s visual selection, such that a participant could effectively view scenes “through the eyes” of another person. Throughout the presentation of scenes presented in this manner, participants completed a social comprehension task, assessing their abilities to recognize complex emotions. The primary aim of the study was to assess the viability of this novel approach by examining whether these Gaze-Replay windowed stimuli contain sufficient and meaningful social information for the viewer to complete this social perceptual and cognitive task. The results of the study suggested this to be the case; participants performed better in the Gaze-Replay condition compared to a temporally disrupted control condition, and compared to when they were provided with no visual input. This approach has great future potential for the exploration of experimental questions aiming to unpack the relationship between visual selection, perception, and cognition. PMID:26252493

  6. Real-time IR/EO scene generation utilizing an optimized scene rendering subsystem

    NASA Astrophysics Data System (ADS)

    Makar, Robert J.; Howe, Daniel B.

    2000-07-01

    This paper describes advances in the development of IR/EO scene generation using the second generation Comptek Amherst Systems' Scene Rendering Subsystem (SRS). The SRS is a graphics rendering engine designed specifically to support real-time hardware-in-the-loop testing of IR/EO sensor systems. The SRS serves as an alternative to commercial rendering systems, such as the Silicon Graphics InfiniteReality, when IR/EO sensor fidelity requirements surpass the limits designed into COTS hardware that is optimized for visual rendering. The paper will discuss the need for such a system and will present examples of the kinds of sensor tests that can take advantage of the high radiometric fidelity provided by the SRS. Examples of situations where the high spatial fidelity of the InfiniteReality is more appropriate will also be presented. The paper will also review models and algorithms used in IR/EO scene rendering and show how the design of the SRS was driven by the requirements of these models and algorithms. This work has been done in support of the Infrared Sensor Stimulator system (IRSS) which will be used for installed system testing of avionics electronic combat systems. The IRSS will provide a high frame rate, real-time, reactive, hardware-in-the-loop test capability for the stimulation of current and future infrared and ultraviolet based sensor systems. The IRSS program is a joint development effort under the leadership of the Naval Air Warfare Center -- Aircraft Division, Air Combat Environment Test and Evaluation Facility (ACETEF) with close coordination and technical support from the Electronic Combat Integrated Test (ECIT) Program Office. The system will be used for testing of multiple sensor avionics systems to support the Development Test & Evaluation and Operational Test & Evaluation objectives of the U.S. Navy and Air Force.

  7. Notes on Piezoelectricity

    SciTech Connect

    Redondo, Antonio

    2016-02-03

    These notes provide a pedagogical discussion of the physics of piezoelectricity. The exposition starts with a brief analysis of the classical (continuum) theory of piezoelectric phenomena in solids. The main subject of the notes is, however, a quantum mechanical analysis. We first derive the Fröhlich Hamiltonian as part of the description of the electron-phonon interaction. The results of this analysis are then employed to derive the equations of piezoelectricity. A couple of examples with the zinc blende and wurtzite structures are presented at the end.

  8. Robotics Technical Note 102.

    DTIC Science & Technology

    1981-06-01

    Air Force Business Research Management Center, Wright-Patterson AFB. Robotics, Technical Note 102. Final report, June 1981. Keywords: robotics; manufacturing; industrial robots; robot technology; robotics applications.

  9. Discomfort Glare: What Do We Actually Know?

    SciTech Connect

    Clear, Robert D.

    2012-04-19

    We reviewed glare models with an eye for missing conditions or inconsistencies. We found ambiguities as to when to use small-source versus large-source models, and as to what constitutes a glare source in a complex scene. We also found surprisingly little information validating the assumed independence of the factors driving glare. A barrier to progress in glare research is the lack of a standardized dependent measure of glare. We inverted the glare models to predict luminance, and compared model predictions against the 1949 Luckiesh and Guth data that form the basis of many of them. The models perform surprisingly poorly, particularly with regard to the luminance-size relationship and additivity. Evaluating glare in complex scenes may require fundamental changes to the form of the glare models.
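    To make "inverting the glare models to predict luminance" concrete, consider the single-source form of the CIE Unified Glare Rating, UGR = 8 log10[(0.25/Lb) L²ω/p²], used here only as a stand-in for the models the paper reviews; solving for L gives the source luminance that produces a given glare rating. The viewing-geometry numbers below are hypothetical.

```python
import math

def ugr_single(L, Lb, omega, p):
    """CIE Unified Glare Rating for a single glare source.
    L: source luminance (cd/m^2), Lb: background luminance (cd/m^2),
    omega: solid angle subtended by the source (sr), p: Guth position index."""
    return 8.0 * math.log10((0.25 / Lb) * L * L * omega / (p * p))

def luminance_for_ugr(ugr, Lb, omega, p):
    """Inverted model: the source luminance that yields a given UGR."""
    return math.sqrt(10.0 ** (ugr / 8.0) * Lb * p * p / (0.25 * omega))

# Hypothetical geometry: dim background, small source seen off-axis.
Lb, omega, p = 30.0, 0.01, 1.5
L_limit = luminance_for_ugr(19.0, Lb, omega, p)  # luminance at UGR 19
```

    The round trip ugr_single(L_limit, ...) returns 19 by construction; the paper's point is that real glare data fit such inversions surprisingly poorly.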

  10. Syntactic Pattern Recognition Approach To Scene Matching

    NASA Astrophysics Data System (ADS)

    Gilmore, John F.

    1983-03-01

    This paper describes a technique for matching two images containing natural terrain and tactical objects using syntactic pattern recognition. A preprocessor analyzes each image to identify potential areas of interest. Points of interest in an image are classified and a graph possessing properties of invariance is created based on these points. Classification derived grammar strings are generated for each classified graph structure. A local match analysis is performed and the best global match is constructed. A probability-of-match metric is computed in order to evaluate the global match. Examples demonstrating these steps are provided and actual FLIR image results are shown.
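    The grammar-string comparison stage can be sketched with a toy similarity score: classify each point of interest into a symbol, concatenate the symbols into a string per image, and score the match by normalized longest-common-subsequence length (a common string-matching choice; the paper's actual grammar and probability-of-match metric are not reproduced here, and the class symbols below are made up).

```python
def lcs_length(a, b):
    """Length of the longest common subsequence via dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def match_probability(a, b):
    """Normalized similarity in [0, 1]: 1.0 for identical strings."""
    if not a and not b:
        return 1.0
    return 2 * lcs_length(a, b) / (len(a) + len(b))

# Hypothetical class symbols: T=tree, R=road, V=vehicle, B=building
reference = "TTRVB"   # string from the reference image's classified graph
sensed = "TRVVB"      # string from the sensed (e.g., FLIR) image
score = match_probability(reference, sensed)
```

    A local match would apply this score to subgraphs; the best global match is then assembled from the highest-scoring local correspondences.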

  11. Notes and Discussion

    ERIC Educational Resources Information Center

    American Journal of Physics, 1978

    1978-01-01

    Includes eleven short notes, comments and responses to comments on a variety of topics such as uncertainty in a least-squares fit, display of diffraction patterns, the dark night sky paradox, error in the dynamics of deformable bodies and relative velocities and the runner. (GA)

  12. Notes on Linguistics, 1990.

    ERIC Educational Resources Information Center

    Notes on Linguistics, 1990

    1990-01-01

    This document consists of the four issues of "Notes on Linguistics" published during 1990. Articles in the four issues include: "The Indians Do Say Ugh-Ugh" (Howard W. Law); "Constraints of Relevance, A Key to Particle Typology" (Regina Blass); "Whatever Happened to Me? (An Objective Case Study)" (Aretta…

  13. Notes on Linguistics, 1999.

    ERIC Educational Resources Information Center

    Payne, David, Ed.

    1999-01-01

    The 1999 issues of "Notes on Linguistics," published quarterly, include the following articles, review articles, reviews, book notices, and reports: "A New Program for Doing Morphology: Hermit Crab"; "Lingualinks CD-ROM: Field Guide to Recording Language Data"; "'Unruly' Phonology: An Introduction to Optimality Theory"; "Borrowing vs. Code…

  14. NCTM Student Math Notes.

    ERIC Educational Resources Information Center

    Maletsky, Evan, Ed.; Yunker, Lee E., Ed.

    1986-01-01

    Five sets of activities for students are included in this document. Each is designed for use in junior high and secondary school mathematics instruction. The first Note concerns mathematics on postage stamps. Historical procedures and mathematicians, metric conversion, geometric ideas, and formulas are among the topics considered. Successful…

  15. Sawtooth Functions. Classroom Notes

    ERIC Educational Resources Information Center

    Hirst, Keith

    2004-01-01

    Using MAPLE enables students to consider many examples which would be very tedious to work out by hand. This applies to graph plotting as well as to algebraic manipulation. The challenge is to use these observations to develop the students' understanding of mathematical concepts. In this note an interesting relationship arising from inverse…

  16. Programmable Logic Application Notes

    NASA Technical Reports Server (NTRS)

    Katz, Richard

    2000-01-01

    This column will be provided each quarter as a source for reliability, radiation results, NASA capabilities, and other information on programmable logic devices and related applications. This quarter will start a series of notes concentrating on analysis techniques with this issues section discussing worst-case analysis requirements.

  17. Programmable Logic Application Notes

    NASA Technical Reports Server (NTRS)

    Katz, Richard; Day, John H. (Technical Monitor)

    2001-01-01

    This report will be provided each quarter as a source for reliability, radiation results, NASA capabilities, and other information on programmable logic devices and related applications. This quarter will continue a series of notes concentrating on analysis techniques with this issue's section discussing the use of Root-Sum-Square calculations for digital delays.

  18. Student Math Notes.

    ERIC Educational Resources Information Center

    Maletsky, Evan, Ed.

    1985-01-01

    Five sets of activities for students are included in this document. Each is designed for use in junior high and secondary school mathematics instruction. The first "Note" concerns magic squares in which the numbers in every row, column, and diagonal add up to the same sum. An etching by Albrecht Durer is presented, with four questions followed by…

  19. NASA Social: Behind the Scenes at NASA Dryden

    NASA Video Gallery

    More than 50 followers of NASA's social media websites went behind the scenes at NASA's Dryden Flight Research Center during a "NASA Social" on May 4, 2012. The visitors were briefed on what Dryden...

  20. Cross-linguistic Differences in Talking About Scenes

    PubMed Central

    Sethuraman, Nitya; Smith, Linda B.

    2010-01-01

    Speakers of English and Tamil differ widely in which relational roles they overtly express with a verb. This study provides new information about how speakers of these languages differ in their descriptions of the same scenes and how explicit mention of roles and other scene elements vary with the properties of the scenes themselves. Specifically, we find that English speakers, who in normal speech rely more on explicit mention of verb arguments, in fact appear to be more affected by the pragmatic manipulations used in this study than Tamil speakers. Additionally, although the mention of scene items increases with development in both languages, Tamil-speaking children mention fewer items than do English-speaking children, showing that the children know the structure of the language to which they are exposed. PMID:20802845

  1. Behind the Scenes: Shuttle Crawls to Launch Pad

    NASA Video Gallery

    In this episode of NASA Behind the Scenes, take a look at what's needed to roll a space shuttle out of the Vehicle Assembly Building and out to the launch pad. Astronaut Mike Massimino talks to som...

  2. Behind the Scenes: Astronauts Keep Trainers in BBQ Bliss

    NASA Video Gallery

    In this episode of NASA Behind the Scenes, astronaut Mike Massimino talks with astronaut Terry Virts as well as Stephanie Turner, one of the people who keeps the astronaut corps in line. Mass also ...

  3. Behind the Scenes: Michoud Builder of Shuttle's External Tank

    NASA Video Gallery

    In this episode of NASA Behind the Scenes, astronaut Mike Massimino takes you on a tour of the Michoud Assembly Facility in New Orleans, La. This historic facility helped build the mighty Saturn V ...

  4. Reconstruction of indoor scene from a single image

    NASA Astrophysics Data System (ADS)

    Wu, Di; Li, Hongyu; Zhang, Lin

    2015-03-01

    Given a single image of an indoor scene without any prior knowledge, is it possible for a computer to automatically reconstruct the structure of the scene? This letter proposes a reconstruction method, called RISSIM, to recover the 3D modelling of an indoor scene from a single image. The proposed method is composed of three steps: the estimation of vanishing points, the detection and classification of lines, and the plane mapping. To find vanishing points, a new feature descriptor, named "OCR", is defined to describe the texture orientation. With Phase Congruency and the Harris Detector, the line segments can be detected exactly, which is a prerequisite. Perspective transform is a reliable method whereby the points on the image can be represented on a 3D model. Experimental results show that the 3D structure of an indoor scene can be well reconstructed from a single image although the available depth information is limited.
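    A core sub-step, estimating a vanishing point as the common intersection of projected parallel edges, has a compact homogeneous-coordinates form: each 2-D image line is the cross product of two homogeneous points, and two lines meet at the cross product of their line vectors. The segment coordinates below are made up for illustration and are not from the paper.

```python
def cross(a, b):
    """3-D cross product, used here on homogeneous points and lines."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return cross((p[0], p[1], 1.0), (q[0], q[1], 1.0))

def intersection(l1, l2):
    """Intersection of two homogeneous lines; returns (x, y)."""
    x, y, w = cross(l1, l2)
    return (x / w, y / w)

# Two image segments from edges that are parallel in 3-D:
# both lines pass through (100, 50), the vanishing point.
l1 = line_through((0.0, 0.0), (2.0, 1.0))
l2 = line_through((0.0, 100.0), (2.0, 99.0))
vp = intersection(l1, l2)   # -> (100.0, 50.0)
```

    With more than two noisy segments, the vanishing point is usually taken as a least-squares or RANSAC consensus of the pairwise intersections.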

  5. Behind the Scenes: Rolling Room Greets Returning Astronauts

    NASA Video Gallery

    Have you ever wondered what is the first thing the shuttle crews see after they land? In this episode of NASA Behind the Scenes, astronaut Mike Massimino takes you into the Crew Transport Vehicle, ...

  6. Behind the Scenes: Sarafin Goes from Farm to Flight Director

    NASA Video Gallery

    In this episode of NASA Behind the Scenes, astronaut Mike Massimino chats with flight director Mike Sarafin about when he joined NASA and moved from his family's farm in New York to Houston...with ...

  7. The influence of color on emotional perception of natural scenes.

    PubMed

    Codispoti, Maurizio; De Cesarei, Andrea; Ferrari, Vera

    2012-01-01

    Is color a critical factor when processing the emotional content of natural scenes? Under challenging perceptual conditions, such as when pictures are briefly presented, color might facilitate scene segmentation and/or function as a semantic cue via association with scene-relevant concepts (e.g., red and blood/injury). To clarify the influence of color on affective picture perception, we compared the late positive potentials (LPP) to color versus grayscale pictures, presented for very brief (24 ms) and longer (6 s) exposure durations. Results indicated that removing color information had no effect on the affective modulation of the LPP, regardless of exposure duration. These findings imply that the recognition of the emotional content of scenes, even when presented very briefly, does not critically rely on color information.

  8. Scene Categorization in Alzheimer's Disease: A Saccadic Choice Task

    PubMed Central

    Lenoble, Quentin; Bubbico, Giovanna; Szaffarczyk, Sébastien; Pasquier, Florence; Boucart, Muriel

    2015-01-01

    Aims: We investigated the performance in scene categorization of patients with Alzheimer's disease (AD) using a saccadic choice task. Method: 24 patients with mild AD, 28 age-matched controls and 26 young people participated in the study. The participants were presented pairs of coloured photographs and were asked to make a saccadic eye movement to the picture corresponding to the target scene (natural vs. urban, indoor vs. outdoor). Results: The patients' performance did not differ from chance for natural scenes. Differences between young and older controls and patients with AD were found in accuracy but not saccadic latency. Conclusions: The results are interpreted in terms of cerebral reorganization in the prefrontal and temporo-occipital cortex of patients with AD, but also in terms of impaired processing of visual global properties of scenes. PMID:25759714

  9. Behind the Scenes: Mission Control Practices Launching Discovery

    NASA Video Gallery

    Before every shuttle launch, the astronauts train with their ascent team in Mission Control Houston. In this episode of NASA Behind the Scenes, astronaut Mike Massimino introduces you to some of th...

  10. SpaceTime Environmental Image Information for Scene Understanding

    DTIC Science & Technology

    2016-04-01

    characterization of the measured data for better scene description, but can also help the end user (Soldier) develop improved course of action strategies based on scene understanding (algorithms and analysis) incorporating...activities; and wind speed and direction that can favor upwind forces in nuclear, biological, and chemical (NBC) attacks or decrease the effectiveness of

  11. TIFF Image Writer patch for OpenSceneGraph

    SciTech Connect

    Eldridge, Bryce

    2012-01-05

    This software consists of code modifications to the open-source OpenSceneGraph software package to enable the creation of TIFF images containing 16-bit unsigned data. They also allow the user to disable compression and set the DPI tags in the resulting TIFF images. Some image analysis programs require uncompressed, 16-bit unsigned input data. These code modifications allow programs based on OpenSceneGraph to write out such images, improving connectivity between applications.

  12. Two Distinct Scene-Processing Networks Connecting Vision and Memory

    PubMed Central

    Esteva, Andre; Fei-Fei, Li

    2016-01-01

    A number of regions in the human brain are known to be involved in processing natural scenes, but the field has lacked a unifying framework for understanding how these different regions are organized and interact. We provide evidence from functional connectivity and meta-analyses for a new organizational principle, in which scene processing relies upon two distinct networks that split the classically defined parahippocampal place area (PPA). The first network of strongly connected regions consists of the occipital place area/transverse occipital sulcus and posterior PPA, which contain retinotopic maps and are not strongly coupled to the hippocampus at rest. The second network consists of the caudal inferior parietal lobule, retrosplenial complex, and anterior PPA, which connect to the hippocampus (especially anterior hippocampus), and are implicated in both visual and nonvisual tasks, including episodic memory and navigation. We propose that these two distinct networks capture the primary functional division among scene-processing regions, between those that process visual features from the current view of a scene and those that connect information from a current scene view with a much broader temporal and spatial context. This new framework for understanding the neural substrates of scene processing bridges results from many lines of research, and makes specific functional predictions. PMID:27822493

  13. Emotional scene content drives the saccade generation system reflexively.

    PubMed

    Nummenmaa, Lauri; Hyönä, Jukka; Calvo, Manuel G

    2009-04-01

    The authors assessed whether parafoveal perception of emotional content influences saccade programming. In Experiment 1, paired emotional and neutral scenes were presented to parafoveal vision. Participants performed voluntary saccades toward either of the scenes according to an imperative signal (color cue). Saccadic reaction times were faster when the cue pointed toward the emotional picture rather than toward the neutral picture. Experiment 2 replicated these findings with a reflexive saccade task, in which abrupt luminosity changes were used as exogenous saccade cues. In Experiment 3, participants performed vertical reflexive saccades that were orthogonal to the emotional-neutral picture locations. Saccade endpoints and trajectories deviated away from the visual field in which the emotional scenes were presented. Experiment 4 showed that computationally modeled visual saliency does not vary as a function of scene content and that inversion abolishes the rapid orienting toward the emotional scenes. Visual confounds cannot thus explain the results. The authors conclude that early saccade target selection and execution processes are automatically influenced by emotional picture content. This reveals processing of meaningful scene content prior to overt attention to the stimulus.

  14. Political conservatism predicts asymmetries in emotional scene memory.

    PubMed

    Mills, Mark; Gonzalez, Frank J; Giuseffi, Karl; Sievert, Benjamin; Smith, Kevin B; Hibbing, John R; Dodd, Michael D

    2016-06-01

    Variation in political ideology has been linked to differences in attention to and processing of emotional stimuli, with stronger responses to negative versus positive stimuli (negativity bias) the more politically conservative one is. As memory is enhanced by attention, such findings predict that memory for negative versus positive stimuli should similarly be enhanced the more conservative one is. The present study tests this prediction by having participants study 120 positive, negative, and neutral scenes in preparation for a subsequent memory test. On the memory test, the same 120 scenes were presented along with 120 new scenes and participants were to respond whether a scene was old or new. Results on the memory test showed that negative scenes were more likely to be remembered than positive scenes, though this was true only for political conservatives. That is, a larger negativity bias was found the more conservative one was. The effect was sizeable, explaining 45% of the variance across subjects in the effect of emotion. These findings demonstrate that the relationship between political ideology and asymmetries in emotion processing extends to memory and, furthermore, suggest that exploring the extent to which subject variation in interactions among emotion, attention, and memory is predicted by conservatism may provide new insights into theories of political ideology.

  15. Registration Study. Research Note.

    ERIC Educational Resources Information Center

    Baratta, Mary Kathryne

    During spring 1977 registration, 3,255 or 45% of Moraine Valley Community College (MVCC) registering students responded to a scheduling preferences and problems questionnaire covering enrollment status, curriculum load, program preference, ability to obtain courses, schedule conflicts, preferred times for class offerings, actual scheduling of…

  16. Digital forensics: an analytical crime scene procedure model (ACSPM).

    PubMed

    Bulbul, Halil Ibrahim; Yavuzcan, H Guclu; Ozel, Mesut

    2013-12-10

    In order to ensure that digital evidence is collected, preserved, examined, or transferred in a manner safeguarding the accuracy and reliability of the evidence, law enforcement and digital forensic units must establish and maintain an effective quality assurance system. The very first part of this system is standard operating procedures (SOPs) and/or models conforming to chain-of-custody requirements, which rely on the digital forensics "process-phase-procedure-task-subtask" sequence. An acceptable and thorough Digital Forensics (DF) process depends on sequential DF phases, each phase depends on sequential DF procedures, and each procedure depends on tasks and subtasks. Numerous DF process models in the literature define DF phases, but no DF model has been identified that defines phase-based sequential procedures for the crime scene. The analytical crime scene procedure model (ACSPM) that we suggest in this paper is intended to fill this gap. The proposed analytical procedure model for digital investigations at a crime scene is developed and defined for crime scene practitioners, with the main focus on crime scene digital forensic procedures rather than the whole digital investigation process and phases that end up in court. When reviewing the relevant literature and interrogating the law enforcement agencies, only device-based charts specific to a particular device and/or more general approaches to digital evidence management models from crime scene to court were found. After analyzing the needs of law enforcement organizations and realizing the absence of a crime scene digital investigation procedure model for crime scene activities, we decided to inspect the relevant literature in an analytical way. The outcome of this inspection is our suggested model explained here, which is intended to provide guidance for thorough and secure implementation of digital forensic procedures at a crime scene. In digital forensic

  17. Early childhood exposure to parental nudity and scenes of parental sexuality ("primal scenes"): an 18-year longitudinal study of outcome.

    PubMed

    Okami, P; Olmstead, R; Abramson, P R; Pendleton, L

    1998-08-01

    As part of the UCLA Family Lifestyles Project (FLS), 200 male and female children participated in an 18-year longitudinal outcome study of early childhood exposure to parental nudity and scenes of parental sexuality ("primal scenes"). At age 17-18, participants were assessed for levels of self-acceptance; relations with peers, parents, and other adults; antisocial and criminal behavior; substance use; suicidal ideation; quality of sexual relationships; and problems associated with sexual relations. No harmful "main effect" correlates of the predictor variables were found. A significant crossover Sex of Participant X Primal Scenes interaction was found such that boys exposed to primal scenes before age 6 had reduced risk of STD transmission or having impregnated someone in adolescence. In contrast, girls exposed to primal scenes before age 6 had increased risk of STD transmission or having become pregnant. A number of main effect trends in the data (nonsignificant at p < 0.05, following the Bonferroni correction) linked exposure to nudity and exposure to primal scenes with beneficial outcomes. However, a number of these findings were mediated by sex of participant interactions showing that the effects were attenuated or absent for girls. All effects were independent of family stability, pathology, or child-rearing ideology; sex of participant; SES; and beliefs and attitudes toward sexuality. Limitations of the data and of long-term regression studies in general are discussed, and the sex of participant interactions are interpreted speculatively. It is suggested that pervasive beliefs in the harmfulness of the predictor variables are exaggerated.

  18. A comparison of actual and perceived residential proximity to toxic waste sites.

    PubMed

    Howe, H L

    1988-01-01

    Studies of Memphis and Three Mile Island have noted a positive association between actual residential distance and public concern about exposure to potential contamination, whereas none was found at Love Canal. In this study, concern about environmental contamination and exposure was examined in relation to both perceived and actual proximity to a toxic waste disposal site (TWDS). It was hypothesized that perceived residential proximity would better predict concern levels than would actual residential distance. The data were abstracted from a New York State survey (excluding New York City), using all respondents (N = 317) from one county known to have a large number of TWDSs. Using linear regression, the variance explained in concern scores was 22 times higher with perceived distance than with actual distance. Perceived residential distance was a significant predictor of concern scores, while actual distance was not. However, perceived distance explained less than 5% of the variance in concern scores.

  19. Trade-offs in designing a mobile infrared scene projector

    NASA Astrophysics Data System (ADS)

    Brown, Richard; Lastra, Henry M.; Vuong, Francisca R.; Brooks, Geoffrey W.

    2000-07-01

    Current test and evaluation methods are not adequate for fully assessing the operational performance of imaging infrared sensors while they are installed on the weapon system platform. The use of infrared (IR) scene projection in test and evaluation will augment and redefine the test methodologies currently used to test and evaluate forward-looking infrared (FLIR) and imaging IR sensors. The Mobile Infrared Scene Projector (MIRSP) projects accurate, dynamic, and realistic IR imagery into the entrance aperture of the sensor, such that the sensor perceives and responds to the imagery as it would to the real-world scenario. The MIRSP domain of application includes development, analysis, integration, exploitation, training, and test and evaluation of ground- and aviation-based imaging IR sensors, subsystems, and systems. This applies to FLIR systems and imaging IR missile seekers/guidance sections, as well as non-imaging thermal sensors. The MIRSP Phase I 'pathfinder' has evolved from other scene projector systems, such as the Flight Motion Simulator Infrared Scene Projector (FIRSP) and the Dynamic Infrared Scene Projector (DIRSP), both of which were designed for laboratory rather than field test and evaluation. This paper details the MIRSP design, including trade-off analyses performed at the system/subsystem levels. The MIRSP Phase II will provide the capability to test and evaluate various electro-optical sensors on the weapon platform. The MIRSP Phases I and II will advance current IR scene projector technologies by exploring areas such as mobility/transportability, packaging, sensors, and scene generation.

  20. Color constancy in natural scenes explained by global image statistics.

    PubMed

    Foster, David H; Amano, Kinjiro; Nascimento, Sérgio M C

    2006-01-01

    To what extent do observers' judgments of surface color with natural scenes depend on global image statistics? To address this question, a psychophysical experiment was performed in which images of natural scenes under two successive daylights were presented on a computer-controlled high-resolution color monitor. Observers reported whether there was a change in reflectance of a test surface in the scene. The scenes were obtained with a hyperspectral imaging system and included variously trees, shrubs, grasses, ferns, flowers, rocks, and buildings. Discrimination performance, quantified on a scale of 0 to 1 with a color-constancy index, varied from 0.69 to 0.97 over 21 scenes and two illuminant changes, from a correlated color temperature of 25,000 K to 6700 K and from 4000 K to 6700 K. The best account of these effects was provided by receptor-based rather than colorimetric properties of the images. Thus, in a linear regression, 43% of the variance in constancy index was explained by the log of the mean relative deviation in spatial cone-excitation ratios evaluated globally across the two images of a scene. A further 20% was explained by including the mean chroma of the first image and its difference from that of the second image and a further 7% by the mean difference in hue. Together, all four global color properties accounted for 70% of the variance and provided a good fit to the effects of scene and of illuminant change on color constancy, and, additionally, of changing test-surface position. By contrast, a spatial-frequency analysis of the images showed that the gradient of the luminance amplitude spectrum accounted for only 5% of the variance.
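The single-predictor part of the regression described above (constancy index against the log mean relative deviation in spatial cone-excitation ratios) can be illustrated on synthetic data; the numbers below are invented for the sketch and do not reproduce the paper's measurements:

```python
import numpy as np

# Illustrative only: synthetic constancy indices regressed on a synthetic
# global image statistic (log mean relative deviation in cone-excitation
# ratios), mimicking the single-predictor regression form in the abstract.
rng = np.random.default_rng(0)
log_dev = rng.uniform(-3.0, -1.0, size=42)   # one value per scene/illuminant pair
constancy = 0.9 + 0.08 * log_dev + rng.normal(0, 0.02, size=42)

# Ordinary least squares with an intercept term.
X = np.column_stack([np.ones_like(log_dev), log_dev])
beta, *_ = np.linalg.lstsq(X, constancy, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((constancy - pred) ** 2) / np.sum((constancy - constancy.mean()) ** 2)
print(f"slope={beta[1]:.3f}, R^2={r2:.2f}")
```

In the paper this single global statistic explained 43% of the variance; adding chroma and hue differences as further columns of `X` is the multi-predictor analogue.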

  1. High-dynamic-range scene compression in humans

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2006-02-01

    Single-pixel dynamic-range compression alters a particular input value to a unique output value - a look-up table. It is used in chemical and most digital photographic systems, which have S-shaped transforms to render high-range scenes onto low-range media. Post-receptor neural processing is spatial, as shown by the physiological experiments of Dowling, Barlow, Kuffler, and Hubel & Wiesel. Human vision does not render a particular receptor quanta catch as a unique response. Instead, because of spatial processing, the response to a particular quanta catch can be any color. Visual response is scene dependent. Stockham proposed an approach to model human range compression using low-spatial-frequency filters. Campbell, Ginsberg, Wilson, Watson, Daly, and many others have developed spatial-frequency channel models. This paper describes experiments measuring the properties of desirable spatial-frequency filters for a variety of scenes. Given the radiances of each pixel in the scene and the observed appearances of objects in the image, one can calculate the visual mask for that individual image. Here, the visual mask is the spatial pattern of changes made by the visual system in processing the input image; it is the spatial signature of human vision. Low-dynamic-range images with many white areas need no spatial filtering. High-dynamic-range images with many blacks, or deep shadows, require strong spatial filtering. Sun on the right and shade on the left requires directional filters. These experiments show that variable, scene-dependent filters are necessary to mimic human vision. Although spatial-frequency filters can model scene-dependent appearances, the problem remains that an analysis of the scene is needed to calculate the scene-dependent strength of each filter at each frequency.

  2. 4. Panama Mount. Note concrete ring and metal rail. Note ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. Panama Mount. Note concrete ring and metal rail. Note cliff erosion under foundation at left center. Looking 297° W. - Fort Funston, Panama Mounts for 155mm Guns, Skyline Boulevard & Great Highway, San Francisco, San Francisco County, CA

  3. REKRIATE: A Knowledge Representation System for Object Recognition and Scene Interpretation

    NASA Astrophysics Data System (ADS)

    Meystel, Alexander M.; Bhasin, Sanjay; Chen, X.

    1990-02-01

    What humans actually observe and how they comprehend this information is complex, due to Gestalt processes and the interaction of context in predicting the course of thinking and enforcing one idea while repressing another. How we extract knowledge from the scene, what we actually get from the scene, and what we bring from our mechanisms of perception are areas separated by a thin, ill-defined line. The purpose of this paper is to present a system for Representing Knowledge and Recognizing and Interpreting Attention Trailed Entities, dubbed REKRIATE. It will be used as a tool for discovering the underlying principles involved in the knowledge representation required for conceptual learning. REKRIATE has some inherited knowledge and is given a vocabulary which is used to form rules for identification of the object. It has various modalities of sensing and has the ability to measure the distance between objects in the image as well as the similarity between different images of presumably the same object. All sensations received from the matrix of different sensors are put into an adequate form. The proposed methodology is applicable not only to pictorial or visual world representation, but to any sensing modality. It is based upon two premises: a) the inseparability of all domains of the world representation, including the linguistic, as well as those formed by the various sensor modalities; and b) the representativity of the object at several levels of resolution simultaneously.

  4. Do Simultaneously Viewed Objects Influence Scene Recognition Individually or as Groups? Two Perceptual Studies

    PubMed Central

    Gagne, Christopher R.; MacEvoy, Sean P.

    2014-01-01

    The ability to quickly categorize visual scenes is critical to daily life, allowing us to identify our whereabouts and to navigate from one place to another. Rapid scene categorization relies heavily on the kinds of objects scenes contain; for instance, studies have shown that recognition is less accurate for scenes to which incongruent objects have been added, an effect usually interpreted as evidence of objects' general capacity to activate semantic networks for scene categories they are statistically associated with. Essentially all real-world scenes contain multiple objects, however, and it is unclear whether scene recognition draws on the scene associations of individual objects or of object groups. To test the hypothesis that scene recognition is steered, at least in part, by associations between object groups and scene categories, we asked observers to categorize briefly-viewed scenes appearing with object pairs that were semantically consistent or inconsistent with the scenes. In line with previous results, scenes were less accurately recognized when viewed with inconsistent versus consistent pairs. To understand whether this reflected individual or group-level object associations, we compared the impact of pairs composed of mutually related versus unrelated objects; i.e., pairs that, as groups, had clear associations to particular scene categories versus those that did not. Although related and unrelated object pairs equally reduced scene recognition accuracy, unrelated pairs were consistently less capable of drawing erroneous scene judgments towards scene categories associated with their individual objects. This suggests that scene judgments were influenced by the scene associations of object groups, beyond the influence of individual objects.
More generally, the fact that unrelated objects were as capable of degrading categorization accuracy as related objects, while less capable of generating specific alternative judgments, indicates that the process

  5. Note-Taking: Different Notes for Different Research Stages.

    ERIC Educational Resources Information Center

    Callison, Daniel

    2003-01-01

    Explains the need to teach students different strategies for taking notes for research, especially at the exploration and collecting information stages, based on Carol Kuhlthau's research process. Discusses format changes; using index cards; notes for live presentations or media presentations versus notes for printed sources; and forming focus…

  6. Interindividual variability in auditory scene analysis revealed by confidence judgements.

    PubMed

    Pelofi, C; de Gardelle, V; Egré, P; Pressnitzer, D

    2017-02-19

    Because musicians are trained to discern sounds within complex acoustic scenes, such as an orchestra playing, it has been hypothesized that musicianship improves general auditory scene analysis abilities. Here, we compared musicians and non-musicians in a behavioural paradigm using ambiguous stimuli, combining performance, reaction times and confidence measures. We used 'Shepard tones', for which listeners may report either an upward or a downward pitch shift for the same ambiguous tone pair. Musicians and non-musicians performed similarly on the pitch-shift direction task. In particular, both groups were at chance for the ambiguous case. However, groups differed in their reaction times and judgements of confidence. Musicians responded to the ambiguous case with long reaction times and low confidence, whereas non-musicians responded with fast reaction times and maximal confidence. In a subsequent experiment, non-musicians displayed reduced confidence for the ambiguous case when pure-tone components of the Shepard complex were made easier to discern. The results suggest an effect of musical training on scene analysis: we speculate that musicians were more likely to discern components within complex auditory scenes, perhaps because of enhanced attentional resolution, and thus discovered the ambiguity. For untrained listeners, stimulus ambiguity was not available to perceptual awareness. This article is part of the themed issue 'Auditory and visual scene analysis'.

  7. Selective looking at natural scenes: Hedonic content and gender☆

    PubMed Central

    Bradley, Margaret M.; Costa, Vincent D.; Lang, Peter J.

    2015-01-01

    Choice viewing behavior when looking at affective scenes was assessed to examine differences due to hedonic content and gender by monitoring eye movements in a selective looking paradigm. On each trial, participants viewed a pair of pictures that included a neutral picture together with an affective scene depicting either contamination, mutilation, threat, food, nude males, or nude females. The duration of time that gaze was directed to each picture in the pair was determined from eye fixations. Results indicated that viewing choices varied with both hedonic content and gender. Initially, gaze duration for both men and women was heightened when viewing all affective contents, but was subsequently followed by significant avoidance of scenes depicting contamination or nude males. Gender differences were most pronounced when viewing pictures of nude females, with men continuing to devote longer gaze time to pictures of nude females throughout viewing, whereas women avoided scenes of nude people, whether male or female, later in the viewing interval. For women, reported disgust of sexual activity was also inversely related to gaze duration for nude scenes. Taken together, selective looking as indexed by eye movements reveals differential perceptual intake as a function of specific content, gender, and individual differences. PMID:26156939

  8. Can cigarette warnings counterbalance effects of smoking scenes in movies?

    PubMed

    Golmier, Isabelle; Chebat, Jean-Charles; Gélinas-Chebat, Claire

    2007-02-01

    Scenes in movies where smoking occurs have been empirically shown to influence teenagers to smoke cigarettes. The capacity of a Canadian warning label on cigarette packages to decrease the effects of smoking scenes in popular movies has been investigated. A 2 x 3 factorial design was used to test the effects of the same movie scene with or without electronic manipulation of all elements related to smoking, and cigarette pack warnings, i.e., no warning, text-only warning, and text+picture warning. Smoking-related stereotypes and intent to smoke of teenagers were measured. It was found that, in the absence of warning, and in the presence of smoking scenes, teenagers showed positive smoking-related stereotypes. However, these effects were not observed if the teenagers were first exposed to a picture and text warning. Also, smoking-related stereotypes mediated the relationship of the combined presentation of a text and picture warning and a smoking scene on teenagers' intent to smoke. Effectiveness of Canadian warning labels to prevent or to decrease cigarette smoking among teenagers is discussed, and areas of research are proposed.

  9. Investigating cultural diversity for extrafoveal information use in visual scenes.

    PubMed

    Miellet, Sébastien; Zhou, Xinyue; He, Lingnan; Rodger, Helen; Caldara, Roberto

    2010-06-01

    Culture shapes how people gather information from the visual world. We recently showed that Western observers focus on the eyes region during face recognition, whereas Eastern observers fixate predominantly the center of faces, suggesting a more effective use of extrafoveal information for Easterners compared to Westerners. However, the cultural variation in eye movements during scene perception is a highly debated topic. Additionally, the extent to which those perceptual differences across observers from different cultures rely on modulations of extrafoveal information use remains to be clarified. We used a gaze-contingent technique designed to dynamically mask central vision, the Blindspot, during a visual search task of animals in natural scenes. We parametrically controlled the Blindspots and target animal sizes (0°, 2°, 5°, or 8°). We processed eye-tracking data using an unbiased data-driven approach based on fixation maps and we introduced novel spatiotemporal analyses in order to finely characterize the dynamics of scene exploration. Both groups of observers, Eastern and Western, showed comparable animal identification performance, which decreased as a function of the Blindspot sizes. Importantly, dynamic analysis of the exploration pathways revealed identical oculomotor strategies for both groups of observers during animal search in scenes. Culture does not impact extrafoveal information use during the ecologically valid visual search of animals in natural scenes.

  10. Research on hyperspectral dynamic scene and image sequence simulation

    NASA Astrophysics Data System (ADS)

    Sun, Dandan; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a simulation method for hyper-spectral dynamic scenes and image sequences for hyper-spectral equipment evaluation and target detection algorithms. Because of its high spectral resolution, strong band continuity, anti-interference, and other advantages, hyper-spectral imaging technology has developed rapidly in recent years and is widely used in areas such as optoelectronic target detection, military defense, and remote sensing systems. Digital imaging simulation, a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyper-spectral imaging equipment with lower development cost and a shorter development period. Meanwhile, visual simulation can produce a large amount of original image data under various conditions for hyper-spectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a generation method for digital scenes. By building multiple sensor models for different bands and bandwidths, hyper-spectral scenes in the visible, MWIR, and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm, and 0.1 μm, have been simulated. The final dynamic scenes are realistic and run in real time, at frame rates up to 100 Hz. By saving all the scene gray data from the same viewpoint, an image sequence is obtained. The analysis results show that, in both the infrared and visible bands, the grayscale variations of the simulated hyper-spectral images are consistent with the theoretical analysis.

  11. Research on hyperspectral dynamic scene and image sequence simulation

    NASA Astrophysics Data System (ADS)

    Sun, Dandan; Liu, Fang; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a simulation method for hyperspectral dynamic scenes and image sequences for hyperspectral equipment evaluation and target detection algorithms. Because of its high spectral resolution, strong band continuity, anti-interference, and other advantages, hyperspectral imaging technology has developed rapidly in recent years and is widely used in areas such as optoelectronic target detection, military defense, and remote sensing systems. Digital imaging simulation, a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyperspectral imaging equipment with lower development cost and a shorter development period. Meanwhile, visual simulation can produce a large amount of original image data under various conditions for hyperspectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a generation method for digital scenes. By building multiple sensor models for different bands and bandwidths, hyperspectral scenes in the visible, MWIR, and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm, and 0.1 μm, have been simulated. The final dynamic scenes are realistic and run in real time, at frame rates up to 100 Hz. By saving all the scene gray data from the same viewpoint, an image sequence is obtained. The analysis results show that, in both the infrared and visible bands, the grayscale variations of the simulated hyperspectral images are consistent with the theoretical analysis.

  12. False recognition of objects in visual scenes: findings from a combined direct and indirect memory test.

    PubMed

    Weinstein, Yana; Nash, Robert A

    2013-01-01

    We report an extension of the procedure devised by Weinstein and Shanks (Memory & Cognition 36:1415-1428, 2008) to study false recognition and priming of pictures. Participants viewed scenes with multiple embedded objects (seen items), then studied the names of these objects and the names of other objects (read items). Finally, participants completed a combined direct (recognition) and indirect (identification) memory test that included seen items, read items, and new items. In the direct test, participants recognized pictures of seen and read items more often than new pictures. In the indirect test, participants' speed at identifying those same pictures was improved for pictures that they had actually studied, and also for falsely recognized pictures whose names they had read. These data provide new evidence that a false-memory induction procedure can elicit memory-like representations that are difficult to distinguish from "true" memories of studied pictures.

  13. Constructing Virtual Forest Scenes for Assessment of Sub-pixel Vegetation Structure From Imaging Spectroscopy

    NASA Astrophysics Data System (ADS)

    Gerace, A. D.; Yao, W.; van Aardt, J. A.; Romanczyk, P.; Kelbe, D.; van Leeuwen, M.; Kampe, T. U.

    2015-12-01

    Assessment of vegetation structure via remote sensing modalities has a long history across a range of sensor platforms. Imaging spectroscopy, while often used for biochemical measurements, also applies to structural assessment in that the Hyperspectral Infrared Imager (HyspIRI), for instance, will provide an opportunity to monitor the global ecosystem. Establishing the linkage between HyspIRI data and sub-pixel vegetation structural variation is therefore of keen interest to the remote sensing and ecology communities. NASA's AVIRIS-C was used to collect airborne data during the 2013-2015 time frame, while ground truth data were limited to 2013 due to the time-consuming and labor-intensive nature of field data collection. We augmented the available field data with a first-principles, physics-based simulation approach to refine our field efforts and to maintain greater control over within-pixel variation and the associated assessments. Three virtual scenes were constructed for the study, corresponding to the actual vegetation structure of NEON's Pacific Southwest site (Fresno, CA). They represented three typical forest types: oak savanna, dense coniferous forest, and mixed conifer-manzanita forest. An airborne spectrometer and a field leaf area index sensor were simulated over these scenes using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) Model, a synthetic image generation model. After verifying the geometrical parameters and physical model against these replicated scenes, more scenes could be constructed by changing one or more vegetation structural parameters, such as forest density, tree species, size, location, and within-pixel distribution. We constructed regression models of leaf area index (LAI, R2=0.92) and forest density (R2=0.97) with narrow-band vegetation indices through simulation. These models can be used to improve HyspIRI's suitability for consistent global vegetation structural assessments. The virtual scene and model can also be used in

  14. Ray tracing a three dimensional scene using a grid

    DOEpatents

    Wald, Ingo; Ize, Santiago; Parker, Steven G; Knoll, Aaron

    2013-02-26

    Ray tracing a three-dimensional scene using a grid. One example embodiment is a method for ray tracing a three-dimensional scene using a grid. In this example method, the three-dimensional scene is made up of objects that are spatially partitioned into a plurality of cells that make up the grid. The method includes a first act of computing a bounding frustum of a packet of rays, and a second act of traversing the grid slice by slice along a major traversal axis. Each slice traversal includes a first act of determining one or more cells in the slice that are overlapped by the frustum and a second act of testing the rays in the packet for intersection with any objects at least partially bounded by the one or more cells overlapped by the frustum.
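A much-simplified, two-dimensional sketch of the packet/frustum traversal idea described above (not the patented method itself): rays share +x as their major traversal axis, the "frustum" within each slice is just the axis-aligned bound of the packet, and each circle is binned by its centre into a single cell. All names and the binning rule are illustrative simplifications:

```python
import math

def hit_circle(o, d, center, radius):
    # Standard quadratic ray-circle test; True on any forward (t >= 0) hit.
    oc = (o[0] - center[0], o[1] - center[1])
    a = d[0] * d[0] + d[1] * d[1]
    b = 2 * (oc[0] * d[0] + oc[1] * d[1])
    c = oc[0] * oc[0] + oc[1] * oc[1] - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return False
    root = math.sqrt(disc)
    return (-b - root) / (2 * a) >= 0 or (-b + root) / (2 * a) >= 0

def traverse(rays, circles, cell=1.0, nx=4):
    """Traverse x-slices; in each slice, intersect rays only with circles
    whose cell overlaps the packet's y-bounds over that slice."""
    hits = set()
    for ix in range(nx):
        x0, x1 = ix * cell, (ix + 1) * cell
        # Bounding "frustum" of the packet across this slice: min/max y of
        # every ray evaluated at the slice's entry and exit planes.
        ys = []
        for o, d in rays:
            for x in (x0, x1):
                t = (x - o[0]) / d[0]
                ys.append(o[1] + t * d[1])
        ymin, ymax = min(ys), max(ys)
        for i, (center, radius) in enumerate(circles):
            if int(center[0] // cell) != ix:     # circle binned by centre
                continue
            if int(center[1] // cell) * cell > ymax or \
               (int(center[1] // cell) + 1) * cell < ymin:
                continue                          # cell outside the frustum
            for o, d in rays:
                if hit_circle(o, d, center, radius):
                    hits.add(i)
                    break
    return hits

rays = [((0.0, 0.4), (1.0, 0.0)), ((0.0, 0.8), (1.0, 0.05))]
circles = [((2.5, 0.5), 0.3), ((2.5, 3.5), 0.3)]
print(traverse(rays, circles))  # the far-off circle is culled without ray tests
```

The payoff is the same as in the claimed method: cells outside the packet's frustum are rejected once per slice instead of once per ray.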

  15. Virtual environments for scene of crime reconstruction and analysis

    NASA Astrophysics Data System (ADS)

    Howard, Toby L. J.; Murta, Alan D.; Gibson, Simon

    2000-02-01

    This paper describes research conducted in collaboration with Greater Manchester Police (UK) to evaluate the utility of Virtual Environments for scene-of-crime analysis, forensic investigation, and law enforcement briefing and training. We present an illustrated case study of the construction of a high-fidelity virtual environment, intended to match a particular real-life crime scene as closely as possible. We describe and evaluate the combination of several approaches, including: the use of the Manchester Scene Description Language for constructing complex geometrical models; the application of a radiosity rendering algorithm with several novel features based on human perceptual considerations; texture extraction from forensic photography; and experiments with interactive walkthroughs and large-screen stereoscopic display of the virtual environment implemented using the MAVERIK system. We also discuss the potential applications of Virtual Environment techniques in the Law Enforcement and Forensic communities.

  16. A Grouped Threshold Approach for Scene Identification in AVHRR Imagery

    NASA Technical Reports Server (NTRS)

    Baum, Bryan A.; Trepte, Qing

    1999-01-01

    The authors propose a grouped threshold method for scene identification in Advanced Very High Resolution Radiometer imagery that may contain clouds, fire, smoke, or snow. The philosophy of the approach is to build modules that contain groups of spectral threshold tests that are applied concurrently, not sequentially, to each pixel in an image. The purpose of each group of tests is to identify uniquely a specific class in the image, such as smoke. A strength of this approach is that insight into the limits used in the threshold tests may be gained through the use of radiative transfer theory. Methodology and examples are provided for two different scenes, one containing clouds, forest fires, and smoke; and the other containing clouds over snow in the central United States. For both scenes, a limited amount of supporting information is provided by surface observers.
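The grouped-threshold idea, a group of spectral tests that must all pass, with the groups applied to every pixel rather than sequenced, can be sketched as follows; the band names and threshold values are invented for illustration and are not the authors' actual limits:

```python
import numpy as np

# Illustrative grouped-threshold classifier: each class has a group of
# spectral tests that must ALL pass; groups are evaluated for every pixel.
# Band names and thresholds are invented. In this toy version, later groups
# overwrite earlier ones where they overlap; a real scheme would resolve
# such conflicts explicitly.
def classify(bands):
    """bands: dict of same-shape 2-D arrays of reflectances/brightness temps."""
    shape = next(iter(bands.values())).shape
    groups = {
        "smoke": (bands["vis"] > 0.3) & (bands["bt11"] > 280),
        "cloud": (bands["vis"] > 0.5) & (bands["bt11"] < 265),
        "fire":  (bands["bt37"] - bands["bt11"] > 15),
    }
    labels = np.full(shape, "clear", dtype=object)
    for name, mask in groups.items():   # each group flags its own class
        labels[mask] = name
    return labels

vis  = np.array([[0.6, 0.1]])
bt11 = np.array([[260.0, 290.0]])
bt37 = np.array([[262.0, 310.0]])
print(classify({"vis": vis, "bt11": bt11, "bt37": bt37}))
```

As the abstract notes, the advantage of grouping is that each class's limits can be reasoned about (e.g. from radiative transfer theory) independently of test ordering.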

  17. Use of AFIS for linking scenes of crime.

    PubMed

    Hefetz, Ido; Liptz, Yakir; Vaturi, Shaul; Attias, David

    2016-05-01

    Forensic intelligence can provide critical information in criminal investigations - the linkage of crime scenes. The Automatic Fingerprint Identification System (AFIS) is an example of a technological improvement that has advanced the entire forensic identification field to strive for new goals and achievements. In one example using AFIS, a series of burglaries into private apartments enabled a fingerprint examiner to search latent prints from different burglary scenes against an unsolved latent print database. Latent finger and palm prints coming from the same source were associated with more than 20 cases. Then, by forensic intelligence and profile analysis, the offender's behavior could be anticipated. He was caught, identified, and arrested. It is recommended that an AFIS search of LT/UL prints against current crimes be performed automatically as part of laboratory protocol, rather than at an examiner's discretion. This approach may link different crime scenes.

  18. Abnormal events detection in crowded scenes by trajectory cluster

    NASA Astrophysics Data System (ADS)

    Zhou, Shifu; Zhang, Zhijiang; Zeng, Dan; Shen, Wei

    2015-02-01

    Abnormal event detection in crowded scenes has been a challenge due to the volatility of the definitions of both normality and abnormality, the small number of pixels on the target, appearance ambiguity resulting from dense packing, and severe inter-object occlusions. A novel framework is proposed for the detection of unusual events in crowded scenes using trajectories produced by moving pedestrians, based on the intuition that the motion patterns of usual behaviors are similar to those of group activity, whereas those of unusual behaviors are not. First, spectral clustering is used to group trajectories with similar spatial patterns; different trajectory clusters represent different activities. Unusual trajectories can then be detected using these patterns. Furthermore, the behavior of a moving pedestrian can be characterized by comparing its direction with these patterns, such as moving in the opposite direction of the group or traversing the group. Experimental results indicated that the proposed algorithm could reliably locate abnormal events in crowded scenes.
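The intuition that unusual trajectories are dissimilar to the group's can be sketched with the same Gaussian similarity matrix that spectral clustering would start from; for brevity this sketch scores outliers directly from row affinities instead of running the full clustering pipeline, and all parameters are illustrative:

```python
import numpy as np

# Sketch: resample trajectories to a fixed length, build a Gaussian
# similarity matrix, and flag the trajectory least similar to all others
# as "abnormal". The resampling length and sigma are illustrative.
def resample(traj, n=8):
    """Linearly resample a list of (x, y) points to n samples, flattened."""
    traj = np.asarray(traj, float)
    idx = np.linspace(0, len(traj) - 1, n)
    return np.stack([np.interp(idx, np.arange(len(traj)), traj[:, d])
                     for d in range(traj.shape[1])], axis=1).ravel()

def abnormal_index(trajs, sigma=2.0):
    X = np.stack([resample(t) for t in trajs])
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))     # pairwise similarity matrix
    affinity = W.sum(1) - 1.0              # similarity to all other trajectories
    return int(np.argmin(affinity))        # least group-like trajectory

# Five pedestrians walking left-to-right, one walking against the flow.
flow = [[(x, 0.1 * k) for x in range(4)] for k in range(5)]
against = [[(3 - x, 0.25) for x in range(4)]]
print(abnormal_index(flow + against))
```

A full implementation would cluster the rows of `W` (e.g. via the Laplacian eigenvectors) so that several coexisting group activities each get their own pattern.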

  19. Adaptively Combining Local with Global Information for Natural Scenes Categorization

    NASA Astrophysics Data System (ADS)

    Liu, Shuoyan; Xu, De; Yang, Xu

    This paper proposes the Extended Bag-of-Visterms (EBOV) representation for semantic scenes. In most previous methods the representation is a bag-of-visterms (BOV), where the visterms are quantized local texture descriptors. The new representation extends the standard bag-of-visterms by introducing global texture information; in particular, an adaptive weight fuses the local and global information into a better visterm representation. Given these representations, scene classification is performed with a pLSA (probabilistic Latent Semantic Analysis) model. The experimental results show that appropriate use of global information improves scene classification performance compared with the BOV representation, which takes only local information into account.
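
    The fusion step can be sketched as below. The weight `alpha` stands in for the paper's adaptive weight, whose computation is not given in the abstract; the function name and histograms are illustrative.

```python
import numpy as np

def extended_bov(local_hist, global_hist, alpha):
    """Fuse a local bag-of-visterms histogram with a global texture
    histogram into one extended, weighted representation."""
    local = local_hist / local_hist.sum()
    glob = global_hist / global_hist.sum()
    return np.concatenate([alpha * local, (1.0 - alpha) * glob])

# Toy example: 3 local visterms, 2 global texture bins, local weight 0.7.
rep = extended_bov(np.array([3.0, 1.0, 6.0]), np.array([2.0, 2.0]), alpha=0.7)
```

    The combined vector still sums to one, so it can be fed to pLSA like an ordinary BOV histogram.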

  20. An analysis of Korean homicide crime-scene actions.

    PubMed

    Salfati, C Gabrielle; Park, Jisun

    2007-11-01

    Recent studies have focused on how different styles of homicides will be reflected in the different types of behaviors committed by offenders at a crime scene. It is suggested that these different types of behaviors best be understood using two frameworks, expressive/instrumental aggression and planned/unplanned violence, to analyze the way the offender acts at the crime scene. Multidimensional analysis is carried out on the crime-scene actions of 70 Korean homicides. The proposed frameworks are found to be a useful way of classifying homicide offenses, assigning 80% of homicides to a dominant theme. Results also indicate that behavioral differences can be related to the differences in the offender-victim relationship. Finally, implications and suggestions for future research are discussed.

  1. A Model of Manual Control with Perspective Scene Viewing

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara Townsend

    2013-01-01

    A model of manual control during perspective scene viewing is presented, which combines the Crossover Model with a simplified model of perspective-scene viewing and visual-cue selection. The model is developed for a particular example task: an idealized constant-altitude task in which the operator controls longitudinal position in the presence of both longitudinal and pitch disturbances. An experiment is performed to develop and validate the model. The model corresponds closely with the experimental measurements, and identified model parameters are highly consistent with the visual cues available in the perspective scene. The modeling results indicate that operators used one visual cue for position control, and another visual cue for velocity control (lead generation). Additionally, operators responded more quickly to rotation (pitch) than translation (longitudinal).

  2. Scene interpretation and behavior planning for driver assistance

    NASA Astrophysics Data System (ADS)

    Handmann, Uwe; Leefken, Iris M.; von Seelen, W.

    2000-06-01

    Scene interpretation and behavior planning for a vehicle in real-world traffic is a difficult problem. If different hierarchies of tasks and purposes are built to structure the behavior of a driver, complex systems can be designed; ultimately, however, behavior planning in vehicles can only influence the controlled variables: steering angle and velocity. In this paper a scene interpretation and behavior planning scheme for a driver assistance system aimed at cruise control is proposed. In this system the controlled variables are determined by evaluating the dynamics of a two-dimensional neural field for scene interpretation and two one-dimensional neural fields controlling steering angle and velocity. The stimuli of the fields are determined according to the sensor information.

  3. Transcranial magnetic stimulation to the transverse occipital sulcus affects scene but not object processing.

    PubMed

    Ganaden, Rachel E; Mullin, Caitlin R; Steeves, Jennifer K E

    2013-06-01

    Traditionally, it has been theorized that the human visual system identifies and classifies scenes in an object-centered approach, such that scene recognition can only occur once key objects within a scene are identified. Recent research points toward an alternative approach, suggesting that the global image features of a scene are sufficient for the recognition and categorization of a scene. We have previously shown that disrupting object processing with repetitive TMS to object-selective cortex enhances scene processing possibly through a release of inhibitory mechanisms between object and scene pathways [Mullin, C. R., & Steeves, J. K. E. TMS to the lateral occipital cortex disrupts object processing but facilitates scene processing. Journal of Cognitive Neuroscience, 23, 4174-4184, 2011]. Here we show the effects of TMS to the transverse occipital sulcus (TOS), an area implicated in scene perception, on scene and object processing. TMS was delivered to the TOS or the vertex (control site) while participants performed an object and scene natural/nonnatural categorization task. Transiently interrupting the TOS resulted in significantly lower accuracies for scene categorization compared with control conditions. This demonstrates a causal role of the TOS in scene processing and indicates its importance, in addition to the parahippocampal place area and retrosplenial cortex, in the scene processing network. Unlike TMS to object-selective cortex, which facilitates scene categorization, disrupting scene processing through stimulation of the TOS did not affect object categorization. Further analysis revealed a higher proportion of errors for nonnatural scenes that led us to speculate that the TOS may be involved in processing the higher spatial frequency content of a scene. This supports a nonhierarchical model of scene recognition.

  4. Primate Visual Perception: Motivated Attention in Naturalistic Scenes

    PubMed Central

    Frank, David W.; Sabatinelli, Dean

    2017-01-01

    Research has consistently revealed enhanced neural activation corresponding to attended cues coupled with suppression to unattended cues. This attention effect depends both on the spatial features of stimuli and internal task goals. However, a large majority of research supporting this effect involves circumscribed tasks that possess few ecologically relevant characteristics. By comparison, natural scenes have the potential to engage an evolved attention system, which may be characterized by supplemental neural processing and integration compared to mechanisms engaged during reduced experimental paradigms. Here, we describe recent animal and human studies of naturalistic scene viewing to highlight the specific impact of social and affective processes on the neural mechanisms of attention modulation. PMID:28265250

  5. Scene classification for weak devices using spatial oriented gradient indexing

    NASA Astrophysics Data System (ADS)

    Phung, Minh Tung; Tu, Trung Hieu

    2017-02-01

    We propose a novel lightweight method for classifying scene images that fits well on weak machines, mobiles, or embedded devices. Our feature representation technique, which we call SOGI, or Spatial Oriented Gradient Indexing, requires little computation time and memory. We show that capturing the spatial co-occurrence of gradient pairs provides sufficient information for the scene classification task. Despite its simplicity, experimental results show that our method remains comparable to other, more complicated methods on their own datasets.
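
    One plausible reading of the co-occurrence idea is sketched below: quantize gradient orientations and index a 2-D histogram by orientation pairs at a fixed spatial offset. The bin count, the horizontal-neighbor offset, and the function name are illustrative choices, not the paper's values.

```python
import numpy as np

def sogi_descriptor(img, n_bins=8):
    """Histogram of co-occurring gradient-orientation pairs
    between horizontally adjacent pixels."""
    gy, gx = np.gradient(img.astype(float))
    ang = np.mod(np.arctan2(gy, gx), np.pi)              # orientation in [0, pi)
    b = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    left, right = b[:, :-1], b[:, 1:]                    # horizontal neighbor pairs
    hist = np.zeros((n_bins, n_bins))
    np.add.at(hist, (left.ravel(), right.ravel()), 1.0)  # count co-occurrences
    return hist.ravel() / hist.sum()                     # normalized descriptor

desc = sogi_descriptor(np.arange(64.0).reshape(8, 8))
```

    The descriptor is a fixed-length vector (n_bins squared), cheap enough for a linear classifier on an embedded device.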

  6. Randomized Probabilistic Latent Semantic Analysis for Scene Recognition

    NASA Astrophysics Data System (ADS)

    Rodner, Erik; Denzler, Joachim

    The concept of probabilistic Latent Semantic Analysis (pLSA) has gained much interest as a tool for feature transformation in image categorization and scene recognition scenarios. However, a major issue of this technique is overfitting. We therefore propose to use an ensemble of pLSA models, each trained on a random fraction of the training data. We analyze empirically the influence of the degree of randomization and the size of the ensemble on the overall classification performance of a scene recognition task. A thorough evaluation shows the benefits of this approach compared to a single pLSA model.
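
    The ensemble idea can be sketched as follows. scikit-learn has no pLSA, so LatentDirichletAllocation serves as a hedged stand-in topic model; the count matrix, the 0.7 fraction, and the ensemble size are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(60, 40))       # 60 images x 40 visual-word counts

models = []
for seed in range(5):
    # Each model sees only a random fraction of the training data.
    idx = rng.choice(len(X), size=int(0.7 * len(X)), replace=False)
    models.append(LatentDirichletAllocation(n_components=6,
                                            random_state=seed).fit(X[idx]))

# Averaging topic posteriors over the ensemble damps the overfitting
# of any single model.
Z = np.mean([m.transform(X) for m in models], axis=0)
```

    `Z` is the randomized topic representation that a downstream scene classifier would consume.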

  7. Robust pedestrian detection and tracking in crowded scenes

    NASA Astrophysics Data System (ADS)

    Lypetskyy, Yuriy

    2007-09-01

    This paper presents a vision-based tracking system developed for very crowded situations such as underground or railway stations. Our system consists of two main parts: searching for person candidates in single frames, and tracking them frame to frame across the scene. This paper concentrates mostly on the tracking part and describes its core components in detail: trajectory prediction using KLT vectors or a Kalman filter, adaptive active shape model adjustment, and texture matching. We show that the combination of the presented algorithms leads to robust people tracking even in complex scenes with permanent occlusions.

  8. Image Chunking: Defining Spatial Building Blocks for Scene Analysis.

    DTIC Science & Technology

    1987-04-01

    Image Chunking: Defining Spatial Building Blocks for Scene Analysis. James V. Mahoney, MIT Artificial Intelligence Laboratory, Technical Report 980.

  9. Improved canopy reflectance modeling and scene inference through improved understanding of scene pattern

    NASA Technical Reports Server (NTRS)

    Franklin, Janet; Simonett, David

    1988-01-01

    The Li-Strahler reflectance model, driven by LANDSAT Thematic Mapper (TM) data, provided regional estimates of tree size and density within 20 percent of sampled values in two bioclimatic zones in West Africa. This model exploits tree geometry in an inversion technique to predict average tree size and density from reflectance data using a few simple parameters measured in the field (spatial pattern, shape, and size distribution of trees) and in the imagery (spectral signatures of scene components). Trees are treated as simply shaped objects, and multispectral reflectance of a pixel is assumed to be related only to the proportions of tree crown, shadow, and understory in the pixel. These, in turn, are a direct function of the number and size of trees, the solar illumination angle, and the spectral signatures of crown, shadow and understory. Given the variance in reflectance from pixel to pixel within a homogeneous area of woodland, caused by the variation in the number and size of trees, the model can be inverted to give estimates of average tree size and density. Because the inversion is sensitive to correct determination of component signatures, predictions are not accurate for small areas.
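
    The mixture assumption in this abstract (pixel reflectance as component proportions times component signatures) invites a small worked inversion. The 3-band signatures and proportions below are invented for illustration; the actual model inverts for tree size and density, not raw fractions.

```python
import numpy as np

# Columns: crown, shadow, understory signatures; rows: spectral bands.
E = np.array([[0.05, 0.02, 0.10],
              [0.30, 0.05, 0.20],
              [0.45, 0.08, 0.35]])

f_true = np.array([0.40, 0.25, 0.35])   # component proportions, sum to 1
pixel = E @ f_true                      # observed multispectral reflectance

# Invert the linear mixture to recover the component proportions.
f_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```

    As the abstract notes, the recovered proportions are only as good as the component signatures: small errors in `E` propagate directly into `f_hat`.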

  10. Unsupervised semantic indoor scene classification for robot vision based on context of features using Gist and HSV-SIFT

    NASA Astrophysics Data System (ADS)

    Madokoro, H.; Yamanashi, A.; Sato, K.

    2013-08-01

    This paper presents an unsupervised scene classification method for actualizing semantic recognition of indoor scenes. Background and foreground features are extracted using Gist and color scale-invariant feature transform (SIFT), respectively, as context-based feature representations. We used hue, saturation, and value SIFT (HSV-SIFT) because of its simple algorithm with low calculation costs. Our method creates bags of features by voting the visual words obtained from both feature descriptors into a two-dimensional histogram. Moreover, our method generates labels as candidates of categories for time-series images while maintaining stability and plasticity together. Automatic labeling of category maps can be realized using labels created by adaptive resonance theory (ART) as teaching signals for counter propagation networks (CPNs). We evaluated our method for semantic scene classification using KTH's image database for robot localization (KTH-IDOL), which is widely used for robot localization and navigation. The mean classification accuracies of Gist, gray SIFT, one-class support vector machines (OC-SVM), position-invariant robust features (PIRF), and our method are, respectively, 39.7, 58.0, 56.0, 63.6, and 79.4%. Our method's result is 15.8 percentage points higher than that of PIRF. Moreover, we applied our method to fine classification using our original mobile robot, obtaining a mean classification accuracy of 83.2% for six zones.

  11. ASC Weekly News Notes

    SciTech Connect

    Womble, David E.

    2016-05-01

    Unified collision operator demonstrated for both radiation transport and PIC-DSMC. A side-by-side comparison between the DSMC method and the radiation transport method was conducted for photon attenuation in the atmosphere over 2 kilometers in physical distance with a reduction of photon density of six orders of magnitude. Both DSMC and traditional radiation transport agreed with theory to two digits. This indicates that PIC-DSMC operators can be unified with the radiation transport collision operators into a single code base and that physics kernels can remain unique to the actual collision pairs. This simulation example provides an initial validation of the unified collision theory approach that will later be implemented into EMPIRE.

  12. A note on antidata

    NASA Astrophysics Data System (ADS)

    Kaufmann, Nissim

    2012-05-01

    Antidata in Bayesian inference was claimed in [1]. The idea was that when an entropic 1-prior (as defined in [1]) encodes an estimate of the unknown distribution parameters (for example the parameters of a Gaussian), this estimate could reduce the degrees of freedom in the posterior pdf (probability density function) of the parameters, as if it annihilated information in the likelihood. This would be in contrast to the (natural conjugate) 0-prior, where typically the estimate with weight α>0 acts like α additional data points combined with the actual n data samples, thus increasing the degrees of freedom in the posterior by α and making the 0-posterior more informative. I became skeptical of antidata when I failed to see it in plots that I produced. I found that antidata, in the clearest scenario (when the sample statistic coincides with the parameter estimate), does not occur with the Bernoulli, Exponential, or Gaussian (when only ν is unknown or when μ and ν are both unknown) models. This was measured in terms of the parametrization-neutral information gain of the posteriors relative to a Uniform prior (or virtual Uniform prior). We correct some computations of the 1-prior published in [1]. These computations were not the source of the appearance of antidata; rather, an approximation to the 1-posterior and possibly the choice of parametrization were what allowed that. It remains to find antidata in other probability models, or to prove it does not occur at all. Our computations and reasoning suggest that usually (but not always) the 1-prior is actually the more informative entropic prior for inference.
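
    The 0-prior behavior the abstract contrasts against (prior weight acting as pseudo-observations) is easy to make concrete in the conjugate Beta-Bernoulli model. The numbers below are an invented worked example.

```python
# A prior estimate p0 carrying weight alpha acts like alpha extra
# pseudo-observations alongside the n real Bernoulli samples.
alpha, p0 = 4.0, 0.5                    # prior weight and prior estimate
data = [1, 1, 0, 1, 1, 1, 0, 1]         # n = 8 samples, k = 6 successes
n, k = len(data), sum(data)

a_post = alpha * p0 + k                 # Beta(a_post, b_post) posterior
b_post = alpha * (1.0 - p0) + (n - k)
eff_n = a_post + b_post                 # n + alpha: degrees of freedom grow
post_mean = a_post / eff_n              # (2 + 6) / 12
```

    The effective sample size is n + α = 12, which is exactly the "additional α data points" behavior; antidata would require the 1-prior to shrink this count instead.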

  13. Sexual Fundamentalism and Performances of Masculinity: An Ethnographic Scene Study

    ERIC Educational Resources Information Center

    Gallagher, Kathleen

    2006-01-01

    The study on which this paper is based examined the experiences of students in order to develop a theoretical and empirically grounded account of the dynamic social forces of inclusion and exclusion experienced by youth in their unique contexts of North American urban schooling. The ethnographic scenes, organized into four "beats,"…

  14. Publishing in '63: Looking for Relevance in a Changing Scene

    ERIC Educational Resources Information Center

    Reynolds, Thomas

    2008-01-01

    In this article, the author examines various publications published in 1963 in an attempt to look for relevance in a changing publication scene. The author considers Gordon Parks's reportorial photographs and accompanying personal essay, "What Their Cry Means to Me," as an act of publishing with implications for the teaching of written…

  15. An Analysis of Korean Homicide Crime-Scene Actions

    ERIC Educational Resources Information Center

    Salfati, C. Gabrielle; Park, Jisun

    2007-01-01

    Recent studies have focused on how different styles of homicides will be reflected in the different types of behaviors committed by offenders at a crime scene. It is suggested that these different types of behaviors best be understood using two frameworks, expressive/instrumental aggression and planned/unplanned violence, to analyze the way the…

  16. Effects of self-motion on auditory scene analysis.

    PubMed

    Kondo, Hirohito M; Pressnitzer, Daniel; Toshima, Iwaki; Kashino, Makio

    2012-04-24

    Auditory scene analysis requires the listener to parse the incoming flow of acoustic information into perceptual "streams," such as sentences from a single talker in the midst of background noise. Behavioral and neural data show that the formation of streams is not instantaneous; rather, streaming builds up over time and can be reset by sudden changes in the acoustics of the scene. Here, we investigated the effect of changes induced by voluntary head motion on streaming. We used a telepresence robot in a virtual reality setup to disentangle all potential consequences of head motion: changes in acoustic cues at the ears, changes in apparent source location, and changes in motor or attentional processes. The results showed that self-motion influenced streaming in at least two ways. Right after the onset of movement, self-motion always induced some resetting of perceptual organization to one stream, even when the acoustic scene itself had not changed. Then, after the motion, the prevalent organization was rapidly biased by the binaural cues discovered through motion. Auditory scene analysis thus appears to be a dynamic process that is affected by the active sensing of the environment.

  17. Improving Perceptual Skills with Interactive 3-D VRML Scenes.

    ERIC Educational Resources Information Center

    Johns, Janet Faye

    1998-01-01

    Describes techniques developed to improve the perceptual skills of maintenance technicians who align shafts on rotating equipment. A 3-D practice environment composed of animated mechanical components and tools was enhanced with 3-D VRML (Virtual Reality Modeling Language) scenes. (Author/AEF)

  18. Semantic Control of Feature Extraction from Natural Scenes

    PubMed Central

    2014-01-01

    In the early stages of image analysis, visual cortex represents scenes as spatially organized maps of locally defined features (e.g., edge orientation). As image reconstruction unfolds and features are assembled into larger constructs, cortex attempts to recover semantic content for object recognition. It is conceivable that higher level representations may feed back onto early processes and retune their properties to align with the semantic structure projected by the scene; however, there is no clear evidence to either support or discard the applicability of this notion to the human visual system. Obtaining such evidence is challenging because low and higher level processes must be probed simultaneously within the same experimental paradigm. We developed a methodology that targets both levels of analysis by embedding low-level probes within natural scenes. Human observers were required to discriminate probe orientation while semantic interpretation of the scene was selectively disrupted via stimulus inversion or reversed playback. We characterized the orientation tuning properties of the perceptual process supporting probe discrimination; tuning was substantially reshaped by semantic manipulation, demonstrating that low-level feature detectors operate under partial control from higher level modules. The manner in which such control was exerted may be interpreted as a top-down predictive strategy whereby global semantic content guides and refines local image reconstruction. We exploit the novel information gained from data to develop mechanistic accounts of unexplained phenomena such as the classic face inversion effect. PMID:24501376

  19. Audio scene segmentation for video with generic content

    NASA Astrophysics Data System (ADS)

    Niu, Feng; Goela, Naveen; Divakaran, Ajay; Abdel-Mottaleb, Mohamed

    2008-01-01

    In this paper, we present a content-adaptive audio-texture-based method to segment video into audio scenes. An audio scene is modeled as a semantically consistent chunk of audio data. Our algorithm is based on "semantic audio texture analysis." First, we train GMM models for basic audio classes such as speech, music, etc. Then we define the semantic audio texture based on those classes. We study and present two types of scene changes: those corresponding to an overall audio texture change, and those corresponding to a special "transition marker" used by the content creator, such as a short stretch of music in a sitcom or silence in dramatic content. Unlike prior work using genre-specific heuristics, such as some methods presented for detecting commercials, we adaptively determine whether such special transition markers are being used and, if so, which of the base classes are being used as markers, without any prior knowledge about the content. Our experimental results show that the proposed audio scene segmentation works well across a wide variety of broadcast content genres.
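
    The first stage (one GMM per base audio class, applied to frame-level features) can be sketched as follows. The synthetic "MFCC-like" features, class means, and component count are invented for the demo.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy frame-level features for two base classes.
frames = {"speech": rng.normal(0.0, 1.0, (200, 4)),
          "music":  rng.normal(3.0, 1.0, (200, 4))}

# Fit one GMM per base audio class.
gmms = {name: GaussianMixture(n_components=2, random_state=0).fit(X)
        for name, X in frames.items()}

def classify(frame):
    """Label a frame with the class whose GMM gives highest likelihood."""
    return max(gmms, key=lambda c: gmms[c].score(frame.reshape(1, -1)))
```

    The resulting per-frame class sequence is the "audio texture" whose changes (or marker classes) the segmentation stage then looks for.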

  20. The Rescue Mission: Assigning Guilt to a Chaotic Scene.

    ERIC Educational Resources Information Center

    Procter, David E.

    1987-01-01

    Seeks to identify rhetorical distinctiveness of the rescue mission as a form of belligerency--examining presidential discourse justifying the 1985 Lebanon intervention, the 1965 Dominican intervention, and the 1983 Grenada intervention. Argues that the distinction is in guilt narrowly assigned to a chaotic scene and the concomitant call for…

  1. Semantic control of feature extraction from natural scenes.

    PubMed

    Neri, Peter

    2014-02-05

    In the early stages of image analysis, visual cortex represents scenes as spatially organized maps of locally defined features (e.g., edge orientation). As image reconstruction unfolds and features are assembled into larger constructs, cortex attempts to recover semantic content for object recognition. It is conceivable that higher level representations may feed back onto early processes and retune their properties to align with the semantic structure projected by the scene; however, there is no clear evidence to either support or discard the applicability of this notion to the human visual system. Obtaining such evidence is challenging because low and higher level processes must be probed simultaneously within the same experimental paradigm. We developed a methodology that targets both levels of analysis by embedding low-level probes within natural scenes. Human observers were required to discriminate probe orientation while semantic interpretation of the scene was selectively disrupted via stimulus inversion or reversed playback. We characterized the orientation tuning properties of the perceptual process supporting probe discrimination; tuning was substantially reshaped by semantic manipulation, demonstrating that low-level feature detectors operate under partial control from higher level modules. The manner in which such control was exerted may be interpreted as a top-down predictive strategy whereby global semantic content guides and refines local image reconstruction. We exploit the novel information gained from data to develop mechanistic accounts of unexplained phenomena such as the classic face inversion effect.

  2. Dynamic Scene Stitching Driven by Visual Cognition Model

    PubMed Central

    2014-01-01

    Dynamic scene stitching still poses a great challenge: maintaining the global key information without loss or deformation when multiple motion interferences exist in the image acquisition system. Object clips, motion blurs, or other synthetic defects easily occur in the final stitched image. In our research work, we proceed from the human visual cognitive mechanism and construct a hybrid-saliency-based cognitive model to automatically guide the video volume stitching. The model consists of three elements of different visual stimuli, that is, intensity, edge contour, and scene depth saliencies. Combined with the manifold-based mosaicing framework, dynamic scene stitching is formulated as a cut-path optimization problem in a constructed space-time graph. The cutting energy function for column width selection is defined according to the proposed visual cognition model. The optimum cut path minimizes the cognitive saliency difference throughout the whole video volume. The experimental results show that the method effectively avoids synthetic defects caused by different motion interferences and summarizes the key contents of the scene without loss. The proposed method makes full use of the human visual cognitive mechanism for stitching, and is of high practical value for environmental surveillance and other applications.

  3. Number of perceptually distinct surface colors in natural scenes.

    PubMed

    Marín-Franch, Iván; Foster, David H

    2010-09-30

    The ability to perceptually identify distinct surfaces in natural scenes by virtue of their color depends not only on the relative frequency of surface colors but also on the probabilistic nature of observer judgments. Previous methods of estimating the number of discriminable surface colors, whether based on theoretical color gamuts or recorded from real scenes, have taken a deterministic approach. Thus, a three-dimensional representation of the gamut of colors is divided into elementary cells or points which are spaced at one discrimination-threshold unit intervals and which are then counted. In this study, information-theoretic methods were used to take into account both differing surface-color frequencies and observer response uncertainty. Spectral radiances were calculated from 50 hyperspectral images of natural scenes and were represented in a perceptually almost uniform color space. The average number of perceptually distinct surface colors was estimated as 7.3 × 10³, much smaller than that based on counting methods. This number is also much smaller than the number of distinct points in a scene that are, in principle, available for reliable identification under illuminant changes, suggesting that color constancy, or the lack of it, does not generally determine the limit on the use of color for surface identification.

  4. Memory, emotion, and pupil diameter: Repetition of natural scenes.

    PubMed

    Bradley, Margaret M; Lang, Peter J

    2015-09-01

    Recent studies have suggested that pupil diameter, like the "old-new" ERP, may be a measure of memory. Because the amplitude of the old-new ERP is enhanced for items encoded in the context of repetitions that are distributed (spaced), compared to massed (contiguous), we investigated whether pupil diameter is similarly sensitive to repetition. Emotional and neutral pictures of natural scenes were viewed once or repeated with massed (contiguous) or distributed (spaced) repetition during incidental free viewing and then tested on an explicit recognition test. Although an old-new difference in pupil diameter was found during successful recognition, pupil diameter was not enhanced for distributed, compared to massed, repetitions during either recognition or initial free viewing. Moreover, whereas a significant old-new difference was found for erotic scenes that had been seen only once during encoding, this difference was absent when erotic scenes were repeated. Taken together, the data suggest that pupil diameter is not a straightforward index of prior occurrence for natural scenes.

  5. Logical unit and scene detection: a comparative survey

    NASA Astrophysics Data System (ADS)

    Petersohn, Christian

    2008-01-01

    Logical units are semantic video segments above the shot level. Depending on the common semantics within the unit and data domain, different types of logical unit extraction algorithms have been presented in literature. Topic units are typically extracted for documentaries or news broadcasts while scenes are extracted for narrative-driven video such as feature films, sitcoms, or cartoons. Other types of logical units are extracted from home video and sports. Different algorithms in literature used for the extraction of logical units are reviewed in this paper based on the categories unit type, data domain, features used, segmentation method, and thresholds applied. A detailed comparative study is presented for the case of extracting scenes from narrative-driven video. While earlier comparative studies focused on scene segmentation methods only or on complete news-story segmentation algorithms, in this paper various visual features and segmentation methods with their thresholding mechanisms and their combination into complete scene detection algorithms are investigated. The performance of the resulting large set of algorithms is then evaluated on a set of video files including feature films, sitcoms, children's shows, a detective story, and cartoons.

  6. Scene Context Dependency of Pattern Constancy of Time Series Imagery

    NASA Technical Reports Server (NTRS)

    Woodell, Glenn A.; Jobson, Daniel J.; Rahman, Zia-ur

    2008-01-01

    A fundamental element of future generic pattern recognition technology is the ability to extract similar patterns for the same scene despite wide-ranging extraneous variables, including lighting, turbidity, sensor exposure variations, and signal noise. In the process of demonstrating pattern constancy of this kind for retinex/visual servo (RVS) image enhancement processing, we found that the pattern constancy performance depended somewhat on scene content. Most notably, the scene topography and, in particular, the scale and extent of the topography in an image, affects the pattern constancy the most. This paper explores these effects in more depth and presents experimental data from several time series tests. These results further quantify the impact of topography on pattern constancy. Despite this residual inconstancy, the results of overall pattern constancy testing support the idea that RVS image processing can be a universal front end for generic visual pattern recognition. While the effects on pattern constancy were significant, the RVS processing still achieves a high degree of pattern constancy over a wide spectrum of scene content diversity and wide-ranging extraneous variations in lighting, turbidity, and sensor exposure.

  7. Semantic Categorization Precedes Affective Evaluation of Visual Scenes

    ERIC Educational Resources Information Center

    Nummenmaa, Lauri; Hyona, Jukka; Calvo, Manuel G.

    2010-01-01

    We compared the primacy of affective versus semantic categorization by using forced-choice saccadic and manual response tasks. Participants viewed paired emotional and neutral scenes involving humans or animals flashed rapidly in extrafoveal vision. Participants were instructed to categorize the targets by saccading toward the location occupied by…

  8. Helicopter emergency medical service scene communications made easy.

    PubMed

    Koval, Michael

    2014-01-01

    Narrowbanding has caused numerous communication issues. The solution is to adopt a single mutual aid frequency, much as aviation uses 123.025 for air-to-air communications. For EMS scene operations that frequency is 155.3400 MHz: every helicopter emergency medical service operator and emergency medical service agency should name this frequency "EMS [Emergency Medical Services] Mutual Aid" and preset it for all helicopter emergency medical service scene operations.

  9. The role of memory for visual search in scenes.

    PubMed

    Le-Hoa Võ, Melissa; Wolfe, Jeremy M

    2015-03-01

    Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes.

  10. Coping with Perceived Ethnic Prejudice on the Gay Scene

    ERIC Educational Resources Information Center

    Jaspal, Rusi

    2017-01-01

    There has been only cursory research into the sociological and psychological aspects of ethnic/racial discrimination among ethnic minority gay and bisexual men, and none that focuses specifically upon British ethnic minority gay men. This article focuses on perceptions of intergroup relations on the gay scene among young British South Asian gay…

  11. Dimensionality of visual complexity in computer graphics scenes

    NASA Astrophysics Data System (ADS)

    Ramanarayanan, Ganesh; Bala, Kavita; Ferwerda, James A.; Walter, Bruce

    2008-02-01

    How do human observers perceive visual complexity in images? This problem is especially relevant for computer graphics, where a better understanding of visual complexity can aid in the development of more advanced rendering algorithms. In this paper, we describe a study of the dimensionality of visual complexity in computer graphics scenes. We conducted an experiment where subjects judged the relative complexity of 21 high-resolution scenes, rendered with photorealistic methods. Scenes were gathered from web archives and varied in theme, number and layout of objects, material properties, and lighting. We analyzed the subject responses using multidimensional scaling of pooled subject responses. This analysis embedded the stimulus images in a two-dimensional space, with axes that roughly corresponded to "numerosity" and "material/lighting complexity". In a follow-up analysis, we derived a one-dimensional complexity ordering of the stimulus images. We compared this ordering with several computable complexity metrics, such as scene polygon count and JPEG compression size, and found that they correlated only weakly with the perceptual ordering. Understanding the differences between these measures can lead to the design of more efficient rendering algorithms in computer graphics.
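    The embedding step above relies on multidimensional scaling of pooled pairwise judgments. As a dependency-light sketch of that analysis (classical Torgerson MDS, not necessarily the authors' exact procedure), assuming a symmetric dissimilarity matrix built from the subject responses:

```python
import numpy as np

def classical_mds(dissim, n_dims=2):
    """Classical (Torgerson) MDS: embed items so that pairwise
    Euclidean distances approximate the given dissimilarities."""
    d = np.asarray(dissim, dtype=float)
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ (d ** 2) @ j              # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:n_dims]  # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy check: items with a known 1-D "complexity" are recovered exactly,
# because the dissimilarities are genuinely Euclidean.
complexity = np.array([0.0, 1.0, 2.0, 4.0])
dissim = np.abs(complexity[:, None] - complexity[None, :])
coords = classical_mds(dissim, n_dims=1)
rec = np.abs(coords[:, 0][:, None] - coords[:, 0][None, :])
```

For real judgment data the recovered axes are only determined up to rotation and sign, which is why the paper's "numerosity" and "material/lighting" labels come from inspecting the stimuli, not from the algorithm.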

  12. LOFT-related semiscale test scene. Water has been dyed red. Hot ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LOFT-related semiscale test scene. Water has been dyed red. Hot steam blowdown exits semiscale at TAN-609 at A&M complex. Edge of building is along left edge of view. Date: 1971. INEEL negative no. 71-376 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  13. Behind the Scenes at Berkeley Lab - The Mechanical Fabrication Facility

    ScienceCinema

    Wells, Russell; Chavez, Pete; Davis, Curtis; Bentley, Brian

    2016-07-12

    Part of the Behind the Scenes series at Berkeley Lab, this video highlights the lab's mechanical fabrication facility and its exceptional ability to produce unique tools essential to the lab's scientific mission. Through a combination of skilled craftsmanship and precision equipment, machinists and engineers work with scientists to create exactly what's needed - whether it's measured in microns or meters.

  14. Forensic DNA Evidence at a Crime Scene: An Investigator's Commentary.

    PubMed

    Blozis, J

    2010-07-01

    The purpose of this article is twofold. The first is to present a law enforcement perspective of the importance of a crime scene, the value of probative evidence, and how to properly recognize, document, and collect evidence. The second purpose is to provide forensic scientists who primarily work in laboratories with the ability to gain insight on how law enforcement personnel process a crime scene. With all the technological advances in the various disciplines associated with forensic science, none have been more spectacular than those in the field of DNA. The development of sophisticated and sensitive instrumentation has led forensic scientists to be able to detect DNA profiles from minute samples of evidence in a much timelier manner. In forensic laboratories, safeguards and protocols associated with ASCLD/LAB International, Forensic Quality Services, and/or ISO/IEC 17020:1998 accreditation have been established and implemented to ensure proper case analysis. But no scientist, no instrumentation, and no laboratory could come to a successful conclusion about evidence if that evidence had been compromised or simply missed at a crime scene. Evidence collectors must be trained thoroughly to process a scene and to be able to distinguish between probative evidence and non-probative evidence. I am a firm believer in the phrase "garbage in is garbage out." One of the evidence collector's main goals is not only to recover enough DNA so that an eligible CODIS profile can be generated to identify an offender but also, more importantly, to recover sufficient DNA to exonerate the innocent.

  15. 3D Priors for Scene Learning from a Single View

    DTIC Science & Technology

    2008-05-01

    Conference on Articulated Motion and Deformable Objects, 2006. [13] Seemann, E., Leibe, B. and Schiele, B., "Multi-aspect detection of articulated objects..." Seemann, E. and Schiele, B., "Pedestrian detection in crowded scenes." CVPR, 2005. [16] Wang, J. J. L. and Singh, S., "Video analysis of human

  16. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    PubMed Central

    Cao, Jianfang; Chen, Lichao

    2015-01-01

    With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance. PMID:25838818
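    The abstract's central device is a fuzzy membership degree describing how strongly an image belongs to an emotion category. A minimal sketch of that idea, with hypothetical triangular membership functions and category names (the paper's actual membership functions and its Adaboost/BP pipeline are not reproduced here):

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical emotion categories over a [0, 1] "pleasantness" score
# that some upstream classifier is assumed to produce for a scene.
categories = {
    "gloomy":   (-0.5, 0.0, 0.5),
    "calm":     (0.0, 0.5, 1.0),
    "cheerful": (0.5, 1.0, 1.5),
}

score = 0.7
memberships = {name: triangular(score, *abc) for name, abc in categories.items()}
```

Because the categories overlap, a single image gets graded membership in several emotions at once, which is exactly the ambiguity the fuzzy formulation is meant to capture.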

  17. What's In a Note: Construction of a Suicide Note Corpus.

    PubMed

    Pestian, John P; Matykiewicz, Pawel; Linn-Gust, Michelle

    2012-01-01

    This paper reports on the results of an initiative to create and annotate a corpus of suicide notes that can be used for machine learning. Ultimately, the corpus included 1,278 notes that were written by someone who died by suicide. Each note was reviewed by at least three annotators who mapped words or sentences to a schema of emotions. This corpus has already been used for extensive scientific research.

  18. Comparative Analyses of Live-Action and Animated Film Remake Scenes: Finding Alternative Film-Based Teaching Resources

    ERIC Educational Resources Information Center

    Champoux, Joseph E.

    2005-01-01

    Live-action and animated film remake scenes can show many topics typically taught in organizational behaviour and management courses. This article discusses, analyses and compares such scenes to identify parallel film scenes useful for teaching. The analysis assesses the scenes to decide which scene type, animated or live-action, more effectively…

  19. Anticipatory scene representation in preschool children's recall and recognition memory.

    PubMed

    Kreindel, Erica; Intraub, Helene

    2016-09-01

    Behavioral and neuroscience research on boundary extension (false memory beyond the edges of a view of a scene) has provided new insights into the constructive nature of scene representation, and motivates questions about development. Early research with children (as young as 6-7 years) was consistent with boundary extension, but relied on an analysis of spatial errors in drawings which are open to alternative explanations (e.g. drawing ability). Experiment 1 replicated and extended prior drawing results with 4-5-year-olds and adults. In Experiment 2, a new, forced-choice immediate recognition memory test was implemented with the same children. On each trial, a card (photograph of a simple scene) was immediately replaced by a test card (identical view and either a closer or more wide-angle view) and participants indicated which one matched the original view. Error patterns supported boundary extension; identical photographs were more frequently rejected when the closer view was the original view, than vice versa. This asymmetry was not attributable to a selection bias (guessing tasks; Experiments 3-5). In Experiment 4, working memory load was increased by presenting more expansive views of more complex scenes. Again, children exhibited boundary extension, but now adults did not, unless stimulus duration was reduced to 5 s (limiting time to implement strategies; Experiment 5). We propose that like adults, children interpret photographs as views of places in the world; they extrapolate the anticipated continuation of the scene beyond the view and misattribute it to having been seen. Developmental differences in source attribution decision processes provide an explanation for the age-related differences observed.

  20. Realizing actual feedback control of complex network

    NASA Astrophysics Data System (ADS)

    Tu, Chengyi; Cheng, Yuhua

    2014-06-01

    In this paper, we present the concept of feedbackability and show how to identify the Minimum Feedbackability Set (MFS) of an arbitrary complex directed network. Furthermore, we design an estimator and a feedback controller accessing one MFS to realize actual feedback control, i.e., to drive the system to a desired state using the internal state estimated by the estimator from the system output. Finally, we perform numerical simulations of a small linear time-invariant network dynamics and a simple real food network to verify the theoretical results. The framework presented here could make an arbitrary complex directed network realize actual feedback control and deepen our understanding of complex systems.
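    The estimator-plus-controller scheme described here can be illustrated on a toy linear time-invariant system: a Luenberger observer reconstructs the hidden state from the measured output, and state feedback uses that estimate to steer the system to the origin. The matrices and gains below are hand-picked for the example, not taken from the paper:

```python
import numpy as np

# Hypothetical 2-state discrete-time dynamics x[k+1] = A x + B u,
# observed only through y = C x.
A = np.array([[1.1, 0.2],
              [0.0, 0.9]])          # open-loop unstable (eigenvalue 1.1)
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

K = np.array([[3.6, 1.5]])          # feedback gain: poles of A-BK at 0.2, 0.3
L = np.array([[1.7],
              [2.8]])               # observer gain: poles of A-LC at 0.1, 0.2

x = np.array([[2.0], [-1.0]])       # true (hidden) state
x_hat = np.zeros((2, 1))            # observer's estimate

for _ in range(60):
    y = C @ x                                        # measured output
    u = -K @ x_hat                                   # feedback uses the estimate only
    x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)  # Luenberger update
    x = A @ x + B @ u

final_error = np.linalg.norm(x)     # near zero: state driven to the target
```

By the separation principle, the estimation error evolves under A-LC independently of the feedback, so stabilizing both matrices separately is enough for the combined loop to converge.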

  1. Notes from Nepal: Is There a Better Way to Provide Search and Rescue?

    PubMed

    Peleg, Kobi

    2015-12-01

    This article discusses a possibility for overcoming the limited efficiency of international search and rescue teams in saving lives after earthquakes, which was emphasized by the recent disaster in Nepal and in other earthquakes all over the world. Because most lives are actually saved by the locals themselves long before the international teams arrive on scene, many more lives could be saved by teaching the basics of light rescue to local students and citizens in threatened countries.

  2. The influence of scene context on object recognition is independent of attentional focus.

    PubMed

    Munneke, Jaap; Brentari, Valentina; Peelen, Marius V

    2013-01-01

    Humans can quickly and accurately recognize objects within briefly presented natural scenes. Previous work has provided evidence that scene context contributes to this process, demonstrating improved naming of objects that were presented in semantically consistent scenes (e.g., a sandcastle on a beach) relative to semantically inconsistent scenes (e.g., a sandcastle on a football field). The current study was aimed at investigating which processes underlie the scene consistency effect. Specifically, we tested: (1) whether the effect is due to increased visual feature and/or shape overlap for consistent relative to inconsistent scene-object pairs; and (2) whether the effect is mediated by attention to the background scene. Experiment 1 replicated the scene consistency effect of a previous report (Davenport and Potter, 2004). Using a new, carefully controlled stimulus set, Experiment 2 showed that the scene consistency effect could not be explained by low-level feature or shape overlap between scenes and target objects. Experiments 3a and 3b investigated whether focused attention modulates the scene consistency effect. By using a location cueing manipulation, participants were correctly informed about the location of the target object on a proportion of trials, allowing focused attention to be deployed toward the target object. Importantly, the effect of scene consistency on target object recognition was independent of spatial attention, and was observed both when attention was focused on the target object and when attention was focused on the background scene. These results indicate that a semantically consistent scene context benefits object recognition independently of the focus of attention. We suggest that the scene consistency effect is primarily driven by global scene properties, or "scene gist", that can be processed with minimal attentional resources.

  3. Sensory Substitution: The Spatial Updating of Auditory Scenes “Mimics” the Spatial Updating of Visual Scenes

    PubMed Central

    Pasqualotto, Achille; Esenkaya, Tayfun

    2016-01-01

    Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or “soundscapes”. Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgment of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices (SSD). PMID:27148000

  4. Supervised and unsupervised MRF based 3D scene classification in multiple view airborne oblique images

    NASA Astrophysics Data System (ADS)

    Gerke, M.; Xiao, J.

    2013-10-01

    In this paper we develop and compare two methods for scene classification in 3D object space; that is, rather than classifying single image pixels, we classify voxels that carry geometric, textural, and color information collected from the airborne oblique images and from derived products such as point clouds from dense image matching. The first method is supervised, i.e., it relies on training data provided by an operator; we use Random Trees for the actual training and prediction tasks. The second method is unsupervised and thus requires no user interaction: we formulate the classification task as a Markov Random Field problem and employ graph cuts for the optimization procedure. Two test areas are used to test and evaluate both techniques. In the Haiti dataset we are confronted with largely destroyed built-up areas, since the images were taken after the earthquake in January 2010, while in the second case we use images taken over Enschede, a typical Central European city. For the Haiti case it is difficult to provide clear class definitions, and this is reflected in the overall classification accuracy: 73% for the supervised and only 59% for the unsupervised method. If classes are defined more unambiguously, as in the Enschede area, results are much better (85% vs. 78%). In conclusion, the results are acceptable, also taking into account that the point cloud used for geometric features is not of good quality and no infrared channel is available to support vegetation classification.
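    The unsupervised method casts labelling as minimizing a Markov Random Field energy with unary (per-voxel evidence) and pairwise (label-smoothness) terms. A self-contained sketch of that Potts-style energy, using simple iterated conditional modes (ICM) in place of graph cuts as a deliberate simplification, on a 2D grid standing in for the voxel space:

```python
import numpy as np

def icm(unary, beta=1.0, n_iters=5):
    """unary: (H, W, n_labels) cost array; returns an (H, W) label map
    that greedily minimizes unary cost + beta * neighbour disagreement."""
    labels = unary.argmin(axis=-1)
    h, w, n_labels = unary.shape
    for _ in range(n_iters):
        for i in range(h):
            for j in range(w):
                best, best_cost = labels[i, j], np.inf
                for lab in range(n_labels):
                    cost = unary[i, j, lab]
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            cost += beta * (labels[ni, nj] != lab)
                    if cost < best_cost:
                        best, best_cost = lab, cost
                labels[i, j] = best
    return labels

# Clean two-region pattern with one flipped "noise" voxel; the pairwise
# term should vote the outlier back to its surroundings.
truth = np.zeros((6, 6), dtype=int)
truth[:, 3:] = 1
noisy = truth.copy()
noisy[2, 1] = 1                      # corrupted observation
unary = np.stack([(noisy != 0) * 1.0, (noisy != 1) * 1.0], axis=-1)
labels_out = icm(unary, beta=1.0)
```

ICM only finds a local minimum; graph cuts, as used in the paper, give strong optimality guarantees for this class of pairwise energies, which is why they are the method of choice at scale.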

  5. Children's Rights and Self-Actualization Theory.

    ERIC Educational Resources Information Center

    Farmer, Rod

    1982-01-01

    Educators need to seriously reflect upon the concept of children's rights. Though the idea of children's rights has been debated numerous times, the idea remains vague and shapeless; however, Maslow's theory of self-actualization can provide the children's rights idea with a needed theoretical framework. (Author)

  6. Group Counseling for Self-Actualization.

    ERIC Educational Resources Information Center

    Streich, William H.; Keeler, Douglas J.

    Self-concept, creativity, growth orientation, an integrated value system, and receptiveness to new experiences are considered to be crucial variables to the self-actualization process. A regular, year-long group counseling program was conducted with 85 randomly selected gifted secondary students in the Farmington, Connecticut Public Schools. A…

  7. Culture Studies and Self-Actualization Theory.

    ERIC Educational Resources Information Center

    Farmer, Rod

    1983-01-01

    True citizenship education is impossible unless students develop the habit of intelligently evaluating cultures. Abraham Maslow's theory of self-actualization, a theory of innate human needs and of human motivation, is a nonethnocentric tool which can be used by teachers and students to help them understand other cultures. (SR)

  8. Humanistic Education and Self-Actualization Theory.

    ERIC Educational Resources Information Center

    Farmer, Rod

    1984-01-01

    Stresses the need for theoretical justification for the development of humanistic education programs in today's schools. Explores Abraham Maslow's hierarchy of needs and theory of self-actualization. Argues that Maslow's theory may be the best available for educators concerned with educating the whole child. (JHZ)

  9. Developing Human Resources through Actualizing Human Potential

    ERIC Educational Resources Information Center

    Clarken, Rodney H.

    2012-01-01

    The key to human resource development is in actualizing individual and collective thinking, feeling and choosing potentials related to our minds, hearts and wills respectively. These capacities and faculties must be balanced and regulated according to the standards of truth, love and justice for individual, community and institutional development,…

  10. 50 CFR 253.16 - Actual cost.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 9 2011-10-01 2011-10-01 false Actual cost. 253.16 Section 253.16 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE AID TO FISHERIES FISHERIES ASSISTANCE PROGRAMS Fisheries Finance Program §...

  11. Bag of Lines (BoL) for Improved Aerial Scene Representation

    DOE PAGES

    Sridharan, Harini; Cheriyadat, Anil M.

    2014-09-22

    Feature representation is a key step in automated visual content interpretation. In this letter, we present a robust feature representation technique, referred to as bag of lines (BoL), for high-resolution aerial scenes. The proposed technique involves extracting and compactly representing low-level line primitives from the scene. The compact scene representation is generated by counting the different types of lines representing various linear structures in the scene. Through extensive experiments, we show that the proposed scene representation is invariant to scale changes and scene conditions and can discriminate urban scene categories accurately. We compare the BoL representation with the popular scale invariant feature transform (SIFT) and Gabor wavelets for their classification and clustering performance on an aerial scene database consisting of images acquired by sensors with different spatial resolutions. The proposed BoL representation outperforms the SIFT- and Gabor-based representations.
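    The core of the BoL descriptor is counting line primitives by type and normalizing the counts into a histogram. A rough sketch of that idea, with a hypothetical orientation/length quantization standing in for the paper's line categories (line extraction itself is assumed to have been done upstream):

```python
import numpy as np

def bag_of_lines(segments, n_angle_bins=4, n_len_bins=3, max_len=100.0):
    """segments: iterable of (x1, y1, x2, y2) line segments.
    Returns a normalized histogram counting lines by quantized
    orientation and length -- a compact, order-free scene descriptor."""
    hist = np.zeros((n_angle_bins, n_len_bins))
    for x1, y1, x2, y2 in segments:
        angle = np.arctan2(y2 - y1, x2 - x1) % np.pi     # undirected lines
        length = np.hypot(x2 - x1, y2 - y1)
        ai = min(int(angle / np.pi * n_angle_bins), n_angle_bins - 1)
        li = min(int(length / max_len * n_len_bins), n_len_bins - 1)
        hist[ai, li] += 1
    return (hist / hist.sum()).ravel()

# A "grid-like" urban block: long horizontal and vertical segments.
urban = [(0, 0, 90, 0), (0, 10, 90, 10), (0, 0, 0, 90), (10, 0, 10, 90)]
desc = bag_of_lines(urban)
```

Because the histogram is normalized and counts relative line types rather than absolute positions, the descriptor tolerates changes in scale and scene extent, which is the property the letter exploits.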


  13. Contextual effects of scene on the visual perception of object orientation in depth.

    PubMed

    Niimi, Ryosuke; Watanabe, Katsumi

    2013-01-01

    We investigated the effect of background scene on the human visual perception of depth orientation (i.e., azimuth angle) of three-dimensional common objects. Participants evaluated the depth orientation of objects. The objects were surrounded by scenes with an apparent axis of the global reference frame, such as a sidewalk scene. When a scene axis was slightly misaligned with the gaze line, object orientation perception was biased, as if the gaze line had been assimilated into the scene axis (Experiment 1). When the scene axis was slightly misaligned with the object, evaluated object orientation was biased, as if it had been assimilated into the scene axis (Experiment 2). This assimilation may be due to confusion between the orientation of the scene and object axes (Experiment 3). Thus, the global reference frame may influence object orientation perception when its orientation is similar to that of the gaze-line or object.

  14. Online Class Size, Note Reading, Note Writing and Collaborative Discourse

    ERIC Educational Resources Information Center

    Qiu, Mingzhu; Hewitt, Jim; Brett, Clare

    2012-01-01

    Researchers have long recognized class size as affecting students' performance in face-to-face contexts. However, few studies have examined the effects of class size on exact reading and writing loads in online graduate-level courses. This mixed-methods study examined relationships among class size, note reading, note writing, and collaborative…

  15. Whiteheadian Actual Entities and String Theory

    NASA Astrophysics Data System (ADS)

    Bracken, Joseph A.

    2012-06-01

    In the philosophy of Alfred North Whitehead, the ultimate units of reality are actual entities, momentary self-constituting subjects of experience which are too small to be sensibly perceived. Their combination into "societies" with a "common element of form" produces the organisms and inanimate things of ordinary sense experience. According to the proponents of string theory, tiny vibrating strings are the ultimate constituents of physical reality which in harmonious combination yield perceptible entities at the macroscopic level of physical reality. Given that the number of Whiteheadian actual entities and of individual strings within string theory are beyond reckoning at any given moment, could they be two ways to describe the same non-verifiable foundational reality? For example, if one could establish that the "superject" or objective pattern of self-constitution of an actual entity vibrates at a specific frequency, its affinity with the individual strings of string theory would be striking. Likewise, if one were to claim that the size and complexity of Whiteheadian "societies" require different space-time parameters for the dynamic interrelationship of constituent actual entities, would that at least partially account for the assumption of 10 or even 26 instead of just 3 dimensions within string theory? The overall conclusion of this article is that, if a suitably revised understanding of Whiteheadian metaphysics were seen as compatible with the philosophical implications of string theory, their combination into a single world view would strengthen the plausibility of both schemes taken separately. Key words: actual entities, subject/superjects, vibrating strings, structured fields of activity, multi-dimensional physical reality.

  16. Scene-Selectivity and Retinotopy in Medial Parietal Cortex

    PubMed Central

    Silson, Edward H.; Steel, Adam D.; Baker, Chris I.

    2016-01-01

    Functional imaging studies in human reliably identify a trio of scene-selective regions, one on each of the lateral [occipital place area (OPA)], ventral [parahippocampal place area (PPA)], and medial [retrosplenial complex (RSC)] cortical surfaces. Recently, we demonstrated differential retinotopic biases for the contralateral lower and upper visual fields within OPA and PPA, respectively. Here, using functional magnetic resonance imaging, we combine detailed mapping of both population receptive fields (pRF) and category-selectivity, with independently acquired resting-state functional connectivity analyses, to examine scene and retinotopic processing within medial parietal cortex. We identified a medial scene-selective region, which was contained largely within the posterior and ventral bank of the parieto-occipital sulcus (POS). While this region is typically referred to as RSC, the spatial extent of our scene-selective region typically did not extend into retrosplenial cortex, and thus we adopt the term medial place area (MPA) to refer to this visually defined scene-selective region. Intriguingly MPA co-localized with a region identified solely on the basis of retinotopic sensitivity using pRF analyses. We found that MPA demonstrates a significant contralateral visual field bias, coupled with large pRF sizes. Unlike OPA and PPA, MPA did not show a consistent bias to a single visual quadrant. MPA also co-localized with a region identified by strong differential functional connectivity with PPA and the human face-selective fusiform face area (FFA), commensurate with its functional selectivity. Functional connectivity with OPA was much weaker than with PPA, and similar to that with face-selective occipital face area (OFA), suggesting a closer link with ventral than lateral cortex. Consistent with prior research, we also observed differential functional connectivity in medial parietal cortex for anterior over posterior PPA, as well as a region on the lateral

  17. Recognition of Natural Scenes from Global Properties: Seeing the Forest without Representing the Trees

    ERIC Educational Resources Information Center

    Greene, Michelle R.; Oliva, Aude

    2009-01-01

    Human observers are able to rapidly and accurately categorize natural scenes, but the representation mediating this feat is still unknown. Here we propose a framework of rapid scene categorization that does not segment a scene into objects and instead uses a vocabulary of global, ecological properties that describe spatial and functional aspects…

  18. Mirth and Murder: Crime Scene Investigation as a Work Context for Examining Humor Applications

    ERIC Educational Resources Information Center

    Roth, Gene L.; Vivona, Brian

    2010-01-01

    Within work settings, humor is used by workers for a wide variety of purposes. This study examines humor applications of a specific type of worker in a unique work context: crime scene investigation. Crime scene investigators examine death and its details. Members of crime scene units observe death much more frequently than other police officers…

  19. The Nesting of Search Contexts within Natural Scenes: Evidence from Contextual Cuing

    ERIC Educational Resources Information Center

    Brooks, Daniel I.; Rasmussen, Ian P.; Hollingworth, Andrew

    2010-01-01

    In a contextual cuing paradigm, we examined how memory for the spatial structure of a natural scene guides visual search. Participants searched through arrays of objects that were embedded within depictions of real-world scenes. If a repeated search array was associated with a single scene during study, then array repetition produced significant…

  20. Discrimination of features in natural scenes by a dragonfly neuron.

    PubMed

    Wiederman, Steven D; O'Carroll, David C

    2011-05-11

    Flying insects engage in spectacular high-speed pursuit of targets, requiring visual discrimination of moving objects against cluttered backgrounds. As a first step toward understanding the neural basis for this complex task, we used computational modeling of insect small target motion detector (STMD) neurons to predict responses to features within natural scenes and then compared this with responses recorded from an identified STMD neuron in the dragonfly brain (Hemicordulia tau). A surprising model prediction confirmed by our electrophysiological recordings is that even heavily cluttered scenes contain very few features that excite these neurons, due largely to their exquisite tuning for small features. We also show that very subtle manipulations of the image cause dramatic changes in the response of this neuron, because of the complex inhibitory and facilitatory interactions within the receptive field.

  1. Scene text detection based on probability map and hierarchical model

    NASA Astrophysics Data System (ADS)

    Zhou, Gang; Liu, Yuehu

    2012-06-01

    Scene text detection is an important step for the text-based information extraction system. This problem is challenging due to the variations of size, unknown colors, and background complexity. We present a novel algorithm to robustly detect text in scene images. To segment text candidate connected components (CC) from images, a text probability map consisting of the text position and scale information is estimated by a text region detector. To filter out the non-text CCs, a hierarchical model consisting of two classifiers in cascade is utilized. The first stage of the model estimates text probabilities with unary component features. The second stage classifier is trained with both probability features and similarity features. Since the proposed method is learning-based, there are very few manual parameters required. Experimental results on the public benchmark ICDAR dataset show that our algorithm outperforms other state-of-the-art methods.
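
    The two-stage cascade described above can be sketched as follows. The synthetic data, feature dimensions, and the choice of logistic-regression classifiers are illustrative assumptions, not the authors' actual design:

```python
# Hypothetical sketch of a two-stage cascade for filtering non-text connected
# components (CCs); all features and data here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stage 1: unary component features (e.g. aspect ratio, stroke-width variance).
X_unary = rng.normal(size=(200, 4))
y = (X_unary[:, 0] + 0.5 * X_unary[:, 1] > 0).astype(int)   # 1 = text CC

stage1 = LogisticRegression().fit(X_unary, y)
p_text = stage1.predict_proba(X_unary)[:, 1]      # text probability per CC

# Stage 2: trained on the stage-1 probability plus pairwise similarity
# features (e.g. colour/height similarity to neighbouring CCs).
X_sim = rng.normal(size=(200, 3))
X_stage2 = np.column_stack([p_text, X_sim])
stage2 = LogisticRegression().fit(X_stage2, y)

keep = stage2.predict(X_stage2) == 1              # CCs retained as text
print(f"{keep.sum()} of {len(keep)} components classified as text")
```

    The point of the cascade is that the second classifier consumes the first stage's probability output alongside the similarity features, mirroring the paper's hierarchical filtering of non-text components.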

  2. Optical slicing of large scenes by synthetic aperture integral imaging

    NASA Astrophysics Data System (ADS)

    Navarro, Héctor; Saavedra, Genaro; Molina, Ainhoa; Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Javidi, Bahram

    2010-04-01

    Integral imaging (InI) technology was created with the aim of providing the binocular observers of monitors, or matrix display devices, with auto-stereoscopic images of 3D scenes. However, over the last few years the inventiveness of researchers has led to many other interesting applications of integral imaging. Examples include the application of InI to object recognition, the mapping of 3D polarization distributions, and the elimination of occluding signals. One of the most interesting applications of integral imaging is the production of views focused at different depths of the 3D scene. This application is the natural result of the ability of InI to create focal stacks from a single input image. In this contribution we present a new algorithm for this optical slicing application and show that 3D reconstruction with improved lateral resolution is possible.
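
    The focal-stack computation behind optical slicing can be illustrated with a minimal shift-and-sum sketch: elemental images from a grid of viewpoints are back-shifted in proportion to the viewpoint offset and averaged, so content at the chosen depth adds coherently while everything else blurs. The single-plane synthetic scene and disparity values are assumptions for the demo:

```python
# Shift-and-sum refocusing sketch for synthetic aperture integral imaging.
# A plane at the true depth is simulated by shifting one texture; np.roll's
# wrap-around is a simplification a real implementation would avoid.
import numpy as np

rng = np.random.default_rng(1)
scene = rng.random((64, 64))          # texture of a plane at the true depth
true_disp = 3                         # pixels of parallax per viewpoint step

# Simulate a 5x5 grid of elemental images by shifting the plane per viewpoint.
views = {(i, j): np.roll(np.roll(scene, i * true_disp, 0), j * true_disp, 1)
         for i in range(-2, 3) for j in range(-2, 3)}

def refocus(disp):
    """Back-shift each elemental image by `disp` per viewpoint step, average."""
    acc = np.zeros_like(scene)
    for (i, j), img in views.items():
        acc += np.roll(np.roll(img, -i * disp, 0), -j * disp, 1)
    return acc / len(views)

sharp = refocus(true_disp)    # focused slice: reproduces the plane exactly
blurred = refocus(0)          # wrong depth: parallax-averaged, low contrast
print(np.allclose(sharp, scene), sharp.std() > blurred.std())  # prints: True True
```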

  3. An intercomparison of artificial intelligence approaches for polar scene identification

    NASA Technical Reports Server (NTRS)

    Tovinkere, V. R.; Penaloza, M.; Logar, A.; Lee, J.; Weger, R. C.; Berendes, T. A.; Welch, R. M.

    1993-01-01

    The following six different artificial-intelligence (AI) approaches to polar scene identification are examined: (1) a feed forward back propagation neural network, (2) a probabilistic neural network, (3) a hybrid neural network, (4) a 'don't care' feed forward perceptron model, (5) a 'don't care' feed forward back propagation neural network, and (6) a fuzzy logic based expert system. The ten classes into which six AVHRR local-coverage arctic scenes were classified were: water, solid sea ice, broken sea ice, snow-covered mountains, land, stratus over ice, stratus over water, cirrus over water, cumulus over water, and multilayer cloudiness. It was found that the 'don't care' back propagation neural network produced the highest accuracies. This approach also has low CPU requirements.

  4. 3D Scene Restoration Using One Active PTZ Camera

    NASA Astrophysics Data System (ADS)

    Alexiev, K. M.; Nikolova, I. N.; Zapryanov, G. S.

    2009-10-01

    The paper considers the task of recovering 3D information about a scene from single-camera images. The basic idea is to extract useful depth information from the images automatically and efficiently. Depth perception with a single standard video surveillance camera is a challenging problem. The difficulties in deriving the distance to the observed objects in the scene can be partially overcome using active PTZ cameras and suitable control of camera parameters. There are several techniques for depth recovery. Here, the task of depth estimation is considered in the context of the well-known depth-from-defocus approach. In this paper, the problem is formulated as a classical nonlinear line-fitting optimization problem. The characteristics of the approach are discussed. Experimental studies using test patterns and real objects are presented.
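
    As an illustration of the curve-fitting formulation (the blur model below is a generic stand-in, not the authors' exact objective), the in-focus setting can be recovered from measured blur widths by nonlinear least squares:

```python
# Illustrative depth-from-defocus fit: treat measured blur width as a
# nonlinear function of the lens focus setting and recover the in-focus
# setting (hence depth) by least-squares curve fitting.
import numpy as np
from scipy.optimize import curve_fit

def blur_model(focus, f_best, slope, floor):
    # Blur grows roughly linearly as the focus setting leaves the optimum.
    return floor + slope * np.abs(focus - f_best)

focus_settings = np.linspace(0.0, 10.0, 21)
true = dict(f_best=6.2, slope=0.8, floor=0.5)
rng = np.random.default_rng(2)
measured = blur_model(focus_settings, **true) + rng.normal(0, 0.05, 21)

popt, _ = curve_fit(blur_model, focus_settings, measured, p0=[5.0, 1.0, 0.3])
print(f"estimated in-focus setting: {popt[0]:.2f}")   # close to the true 6.2
```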

  5. Lateralization of Spatial Relation Processing in Natural Scenes

    PubMed Central

    van der Ham, Ineke J. M.; van Zandvoort, Martine J. E.; Postma, Albert

    2013-01-01

    Spatial relations between objects can be represented in a categorical and in a coordinate manner. Categorical representations reflect abstract relations, like ‘left of’ or ‘under’, whereas coordinate representations concern exact metric distances between objects. These two types of spatial relations are thought to be linked to a left hemisphere and a right hemisphere advantage, respectively. This lateralization pattern was examined in a visual search task, making use of natural scenes, in patients with unilateral brain damage and healthy controls. In addition, all participants performed a low-level spatial relation processing task. The results suggest that the lateralization pattern commonly found for spatial relation processing in low-level perceptual tasks is also applicable to the processing of complex visual scenes. PMID:22713416

  6. A combined feature latent semantic model for scene classification

    NASA Astrophysics Data System (ADS)

    Jiang, Yue; Wang, Runsheng

    2009-10-01

    Due to the vast growth of image databases, scene image classification methods have become increasingly important in computer vision. We propose a new scene image classification framework based on a combined feature and a latent semantic model built on Latent Dirichlet Allocation (LDA) from the statistical text literature. Here the model is applied to a visual-words representation of images. We use Gibbs sampling for parameter estimation and use several different numbers of topics at the same time to obtain the latent topic representation of images. We densely extract multi-scale patches from images and compute the combined feature on these patches. Our method is unsupervised and represents the semantic characteristics of images well. We demonstrate the effectiveness of our approach by comparing it to those used in previous work in this area. Experiments were conducted on three widely used image databases, and our method obtained better results than the others.
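
    The latent-topic step can be sketched as follows, with two stated assumptions: scikit-learn's LatentDirichletAllocation uses variational inference rather than the Gibbs sampling used in the paper, and the visual-word counts are synthetic:

```python
# Sketch of the latent-topic representation: fit LDA at several topic counts
# simultaneously and concatenate the per-image topic proportions.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(3)
# 40 images x 50 visual words: a bag-of-visual-words count matrix.
counts = rng.poisson(2.0, size=(40, 50))

reps = []
for n_topics in (5, 10):   # several topic numbers at the same time
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    reps.append(lda.fit_transform(counts))   # per-image topic proportions
features = np.hstack(reps)                   # concatenated latent representation
print(features.shape)                        # (40, 15)
```

    Each row of a single `fit_transform` output is a normalized topic distribution, so the concatenated vector can be fed directly to any downstream classifier.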

  7. A Corticothalamic Circuit Model for Sound Identification in Complex Scenes

    PubMed Central

    Otazu, Gonzalo H.; Leibold, Christian

    2011-01-01

    The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668
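
    The paper's key assumption, that some cortical activity encodes the difference between the observed signal and an internal estimate, can be caricatured with a toy error-driven estimator over a dictionary of candidate sources. The dictionary size, signals, and learning rate are all made up for illustration:

```python
# Toy error-signal model: iteratively refine an internal estimate of source
# activations so that the residual (observed minus predicted) shrinks.
import numpy as np

rng = np.random.default_rng(7)
D = rng.normal(size=(40, 10))
D /= np.linalg.norm(D, axis=0)        # dictionary of 10 candidate sources
a_true = np.zeros(10)
a_true[[2, 7]] = [1.5, 0.8]           # two concurrently active sources
signal = D @ a_true                   # observed mixture

a = np.zeros(10)                      # internal estimate of activations
eta = 0.1
for _ in range(500):
    error = signal - D @ a            # the putative cortical error signal
    a += eta * (D.T @ error)          # error-driven update of the estimate

print(np.linalg.norm(signal - D @ a))   # residual is near zero after learning
```

    After convergence the two truly active sources carry the largest estimated activations, which is the sense in which the circuit "identifies the objects present".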

  8. A Scene Text-Based Image Retrieval System

    DTIC Science & Technology

    2012-12-01

    problems need to be solved. Most of the previous studies in text detection can be classified into approaches based on edge, connected component, and...order to detect and to merge edges from letters in images [2][3]. Edge-based methods are fast and can have a high recall. However, they often produce...algorithms can be used in image retrieval applications. Keywords— text detection, text binarization, scene text, image retrieval, image indexing I

  9. Auditory scene analysis following unilateral inferior colliculus infarct.

    PubMed

    Champoux, François; Paiement, Philippe; Vannasing, Phetsamone; Mercier, Claude; Gagné, Jean-Pierre; Lepore, Franco; Lassonde, Maryse

    2007-11-19

    Event-related potentials in the form of mismatch negativity were recorded to investigate auditory scene analysis capabilities in a person with a very circumscribed haemorrhagic lesion at the level of the right inferior colliculus. The results provide the first objective evidence that processing at the level of the inferior colliculus plays an important role in human auditory frequency discrimination. Moreover, the electrophysiological data suggest that following this unilateral lesion, the auditory pathways fail to reorganize efficiently.

  10. Visual attention and target detection in cluttered natural scenes

    NASA Astrophysics Data System (ADS)

    Itti, Laurent; Gold, Carl; Koch, Christof

    2001-09-01

    Rather than attempting to fully interpret visual scenes in a parallel fashion, biological systems appear to employ a serial strategy by which an attentional spotlight rapidly selects circumscribed regions in the scene for further analysis. The spatiotemporal deployment of attention has been shown to be controlled by both bottom-up (image-based) and top-down (volitional) cues. We describe a detailed neuromimetic computer implementation of a bottom-up scheme for the control of visual attention, focusing on the problem of combining information across modalities (orientation, intensity, and color information) in a purely stimulus-driven manner. We have applied this model to a wide range of target detection tasks, using synthetic and natural stimuli. Performance has, however, remained difficult to evaluate objectively on natural scenes, because no objective reference was available for comparison. We present predicted search times for our model on the Search_2 database of rural scenes containing a military vehicle. Overall, we found a poor correlation between human and model search times. Further analysis, however, revealed that in 75% of the images, the model appeared to detect the target faster than humans (for comparison, we calibrated the model's arbitrary internal time frame such that 2 to 4 image locations were visited per second). It seems that this model, which had originally been designed not to find small, hidden military vehicles, but rather the few most obviously conspicuous objects in an image, performed as an efficient target detector on the Search_2 dataset. Further developments of the model are finally explored, in particular through a more formal treatment of the difficult problem of extracting suitable low-level features to be fed into the saliency map.
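
    A stripped-down version of the bottom-up saliency computation conveys the idea: centre-surround responses are taken as differences of Gaussian blurs and combined into one map. The full model uses image pyramids plus orientation and colour channels; this intensity-only, single-image version is an assumption made for brevity:

```python
# Intensity-only centre-surround saliency sketch (a simplification of the
# Itti-Koch architecture); the image and scale pairs are synthetic choices.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
image = rng.random((128, 128)) * 0.1
image[60:68, 60:68] += 0.9                 # one conspicuous bright patch

saliency = np.zeros_like(image)
for centre, surround in [(1, 4), (2, 8)]:  # two centre-surround scale pairs
    cs = np.abs(gaussian_filter(image, centre) - gaussian_filter(image, surround))
    saliency += cs / cs.max()              # normalise each map before summing

peak = np.unravel_index(saliency.argmax(), saliency.shape)
print(f"most salient location: {peak}")    # lands on or near the bright patch
```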

  11. Neural Correlates of Divided Attention in Natural Scenes.

    PubMed

    Fagioli, Sabrina; Macaluso, Emiliano

    2016-09-01

    Individuals are able to split attention between separate locations, but divided spatial attention incurs the additional requirement of monitoring multiple streams of information. Here, we investigated divided attention using photos of natural scenes, where the rapid categorization of familiar objects and prior knowledge about the likely positions of objects in the real world might affect the interplay between these spatial and nonspatial factors. Sixteen participants underwent fMRI during an object detection task. They were presented with scenes containing either a person or a car, located on the left or right side of the photo. Participants monitored either one or both object categories, in one or both visual hemifields. First, we investigated the interplay between spatial and nonspatial attention by comparing conditions of divided attention between categories and/or locations. We then assessed the contribution of top-down processes versus stimulus-driven signals by separately testing the effects of divided attention in target and nontarget trials. The results revealed activation of a bilateral frontoparietal network when dividing attention between the two object categories versus attending to a single category but no main effect of dividing attention between spatial locations. Within this network, the left dorsal premotor cortex and the left intraparietal sulcus were found to combine task- and stimulus-related signals. These regions showed maximal activation when participants monitored two categories at spatially separate locations and the scene included a nontarget object. We conclude that the dorsal frontoparietal cortex integrates top-down and bottom-up signals in the presence of distractors during divided attention in real-world scenes.

  12. Better Batteries for Transportation: Behind the Scenes @ Berkeley Lab

    ScienceCinema

    Battaglia, Vince

    2016-07-12

    Vince Battaglia leads a behind-the-scenes tour of Berkeley Lab's BATT, the Batteries for Advanced Transportation Technologies Program he leads, where researchers aim to improve batteries upon which the range, efficiency, and power of tomorrow's electric cars will depend. This is the first in a forthcoming series of videos taking viewers into the laboratories and research facilities that members of the public rarely get to see.

  13. Device for imaging scenes with very large ranges of intensity

    DOEpatents

    Deason, Vance Albert

    2011-11-15

    A device for imaging scenes with a very large range of intensity having a pair of polarizers, a primary lens, an attenuating mask, and an imaging device optically connected along an optical axis. Preferably, a secondary lens, positioned between the attenuating mask and the imaging device is used to focus light on the imaging device. The angle between the first polarization direction and the second polarization direction is adjustable.
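
    For ideal polarizers, the attenuation obtained by rotating one polarizer relative to the other follows Malus's law, I/I0 = cos²θ. This is an idealisation that ignores the patent's attenuating mask and the finite extinction ratio of real polarizers, but it shows why an adjustable angle gives a continuously variable attenuator:

```python
# Malus's law: fraction of already-polarized light transmitted through a
# second polarizer at angle theta. Idealised; real polarizers leak slightly.
import math

fracs = {deg: math.cos(math.radians(deg)) ** 2 for deg in (0, 30, 60, 85)}
for deg, frac in fracs.items():
    print(f"{deg:2d} deg -> {frac:.3f} transmitted")
# Aligned polarizers pass everything; at 85 deg under 1% of the light remains.
```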

  14. Better Batteries for Transportation: Behind the Scenes @ Berkeley Lab

    SciTech Connect

    Battaglia, Vince

    2011-01-01

    Vince Battaglia leads a behind-the-scenes tour of Berkeley Lab's BATT, the Batteries for Advanced Transportation Technologies Program he leads, where researchers aim to improve batteries upon which the range, efficiency, and power of tomorrow's electric cars will depend. This is the first in a forthcoming series of videos taking viewers into the laboratories and research facilities that members of the public rarely get to see.

  15. The Actual Apollo 13 Prime Crew

    NASA Technical Reports Server (NTRS)

    1970-01-01

    The actual Apollo 13 lunar landing mission prime crew from left to right are: Commander, James A. Lovell Jr.; Command Module pilot, John L. Swigert Jr.; and Lunar Module pilot, Fred W. Haise Jr. The original Command Module pilot for this mission was Thomas 'Ken' Mattingly Jr., but due to exposure to German measles he was replaced by his backup, Command Module pilot John L. 'Jack' Swigert Jr.

  16. The Anthropo-scene: A guide for the perplexed.

    PubMed

    Lorimer, Jamie

    2017-02-01

    The scientific proposal that the Earth has entered a new epoch as a result of human activities - the Anthropocene - has catalysed a flurry of intellectual activity. I introduce and review the rich, inchoate and multi-disciplinary diversity of this Anthropo-scene. I identify five ways in which the concept of the Anthropocene has been mobilized: scientific question, intellectual zeitgeist, ideological provocation, new ontologies and science fiction. This typology offers an analytical framework for parsing this diversity, for understanding the interactions between different ways of thinking in the Anthropo-scene, and thus for comprehending elements of its particular and peculiar sociabilities. Here I deploy this framework to situate Earth Systems Science within the Anthropo-scene, exploring both the status afforded science in discussions of this new epoch, and the various ways in which the other means of engaging with the concept come to shape the conduct, content and politics of this scientific enquiry. In conclusion the paper reflects on the potential of the Anthropocene for new modes of academic praxis.

  17. Auditory scene analysis: the sweet music of ambiguity.

    PubMed

    Pressnitzer, Daniel; Suied, Clara; Shamma, Shihab A

    2011-01-01

    In this review paper aimed at the non-specialist, we explore the use that neuroscientists and musicians have made of perceptual illusions based on ambiguity. The pivotal issue is auditory scene analysis (ASA), or what enables us to make sense of complex acoustic mixtures in order to follow, for instance, a single melody in the midst of an orchestra. In general, ASA uncovers the most likely physical causes that account for the waveform collected at the ears. However, the acoustical problem is ill-posed and it must be solved from noisy sensory input. Recently, the neural mechanisms implicated in the transformation of ambiguous sensory information into coherent auditory scenes have been investigated using so-called bistability illusions (where an unchanging ambiguous stimulus evokes a succession of distinct percepts in the mind of the listener). After reviewing some of those studies, we turn to music, which arguably provides some of the most complex acoustic scenes that a human listener will ever encounter. Interestingly, musicians will not always aim at making each physical source intelligible, but rather express one or more melodic lines with a small or large number of instruments. By means of a few musical illustrations and by using a computational model inspired by neuro-physiological principles, we suggest that this relies on a detailed (if perhaps implicit) knowledge of the rules of ASA and of its inherent ambiguity. We then put forward the opinion that some degree of perceptual ambiguity may participate in our appreciation of music.

  18. The Hip-Hop club scene: Gender, grinding and sex.

    PubMed

    Muñoz-Laboy, Miguel; Weinstein, Hannah; Parker, Richard

    2007-01-01

    Hip-Hop culture is a key social medium through which many young men and women from communities of colour in the USA construct their gender. In this study, we focused on the Hip-Hop club scene in New York City with the intention of unpacking narratives of gender dynamics from the perspective of young men and women, and how these relate to their sexual experiences. We conducted a three-year ethnographic study that included ethnographic observations of Hip-Hop clubs and their social scene, and in-depth interviews with young men and young women aged 15-21. This paper describes how young people negotiate gender relations on the dance floor of Hip-Hop clubs. The Hip-Hop club scene represents a context or setting where young men's masculinities are contested by the social environment, where women challenge hypermasculine privilege and where young people can set the stage for what happens next in their sexual and emotional interactions. Hip-Hop culture therefore provides a window into the gender and sexual scripts of many urban minority youth. A fuller understanding of these patterns can offer key insights into the social construction of sexual risk, as well as the possibilities for sexual health promotion, among young people in urban minority populations.

  19. Automated reconstruction of 3D scenes from sequences of images

    NASA Astrophysics Data System (ADS)

    Pollefeys, M.; Koch, R.; Vergauwen, M.; Van Gool, L.

    Modelling of 3D objects from image sequences is a challenging problem and has been an important research topic in the areas of photogrammetry and computer vision for many years. In this paper, a system is presented which automatically extracts a textured 3D surface model from a sequence of images of a scene. The system can deal with unknown camera settings. In addition, the parameters of this camera are allowed to change during acquisition (e.g., by zooming or focusing). No prior knowledge about the scene is necessary to build the 3D models. Therefore, this system offers a high degree of flexibility. The system is based on state-of-the-art algorithms recently developed in computer vision. The 3D modelling task is decomposed into a number of successive steps. Gradually, more knowledge of the scene and the camera setup is retrieved. At this point, the obtained accuracy is not yet at the level required for most metrology applications, but the visual quality is very convincing. This system has been applied to a number of applications in archaeology. The Roman site of Sagalassos (southwest Turkey) was used as a test case to illustrate the potential of this new approach.
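
    One core step of such a pipeline, sketched in isolation, is linear (DLT) triangulation of a 3D point once two camera projection matrices have been recovered. The camera matrices and point below are made up for the demo and assume calibrated, noiseless views:

```python
# Linear (DLT) triangulation: recover a 3D point from its projections in two
# views by stacking the cross-product constraints x ~ P X and solving by SVD.
import numpy as np

def triangulate(P1, P2, x1, x2):
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A, homogeneous 3D point
    return X[:3] / X[3]        # dehomogenise

def project(P, X):
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # translated camera
X_true = np.array([0.5, 0.2, 4.0])

X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_hat, X_true))    # True: exact in the noiseless case
```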

  20. Encoding natural scenes with neural circuits with random thresholds.

    PubMed

    Lazar, Aurel A; Pnevmatikakis, Eftychios A; Zhou, Yiyin

    2010-10-28

    We present a general framework for the reconstruction of natural video scenes encoded with a population of spiking neural circuits with random thresholds. The natural scenes are modeled as space-time functions that belong to a space of trigonometric polynomials. The visual encoding system consists of a bank of filters, modeling the visual receptive fields, in cascade with a population of neural circuits, modeling encoding in the early visual system. The neuron models considered include integrate-and-fire neurons and ON-OFF neuron pairs with threshold-and-fire spiking mechanisms. All thresholds are assumed to be random. We demonstrate that neural spiking is akin to taking noisy measurements on the stimulus, both for time-varying and space-time-varying stimuli. We formulate the reconstruction problem as the minimization of a suitable cost functional in a finite-dimensional vector space and provide an explicit algorithm for stimulus recovery. We also present a general solution using the theory of smoothing splines in Reproducing Kernel Hilbert Spaces. We provide examples for both synthetic video and natural scenes, and demonstrate that the quality of the reconstruction degrades gracefully as the threshold variability of the neurons increases.
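
    A schematic version of the recovery idea, with assumed dimensions and noise levels: a stimulus living in a space of trigonometric polynomials is observed through noisy linear measurements (standing in for the spike-derived measurements), and its coefficients are recovered by regularised least squares:

```python
# Recover a trigonometric-polynomial stimulus from noisy linear measurements.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 200)
# Basis: DC term plus three harmonics (sin and cos), a low-order
# trigonometric polynomial space.
basis = np.column_stack(
    [np.ones_like(t)]
    + [f(2 * np.pi * k * t) for k in (1, 2, 3) for f in (np.sin, np.cos)])
c_true = rng.normal(size=basis.shape[1])
stimulus = basis @ c_true

# Noisy linear measurements, standing in for the spike-derived measurements;
# the additive noise models the random thresholds.
Phi = rng.normal(size=(60, len(t))) / len(t)
q = Phi @ stimulus + rng.normal(0, 1e-3, size=60)

G = Phi @ basis                 # measurement matrix acting on coefficients
lam = 1e-6                      # small ridge term for numerical stability
c_hat = np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ q)

err = np.max(np.abs(basis @ c_hat - stimulus))
print(f"max reconstruction error: {err:.5f}")
```

    Increasing the measurement noise (the stand-in for threshold variability) degrades `err` smoothly, matching the graceful degradation the abstract describes.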

  1. Auditory Scene Analysis: The Sweet Music of Ambiguity

    PubMed Central

    Pressnitzer, Daniel; Suied, Clara; Shamma, Shihab A.

    2011-01-01

    In this review paper aimed at the non-specialist, we explore the use that neuroscientists and musicians have made of perceptual illusions based on ambiguity. The pivotal issue is auditory scene analysis (ASA), or what enables us to make sense of complex acoustic mixtures in order to follow, for instance, a single melody in the midst of an orchestra. In general, ASA uncovers the most likely physical causes that account for the waveform collected at the ears. However, the acoustical problem is ill-posed and it must be solved from noisy sensory input. Recently, the neural mechanisms implicated in the transformation of ambiguous sensory information into coherent auditory scenes have been investigated using so-called bistability illusions (where an unchanging ambiguous stimulus evokes a succession of distinct percepts in the mind of the listener). After reviewing some of those studies, we turn to music, which arguably provides some of the most complex acoustic scenes that a human listener will ever encounter. Interestingly, musicians will not always aim at making each physical source intelligible, but rather express one or more melodic lines with a small or large number of instruments. By means of a few musical illustrations and by using a computational model inspired by neuro-physiological principles, we suggest that this relies on a detailed (if perhaps implicit) knowledge of the rules of ASA and of its inherent ambiguity. We then put forward the opinion that some degree of perceptual ambiguity may participate in our appreciation of music. PMID:22174701

  2. Understanding the Radiant Scattering Behavior of Vegetated Scenes

    NASA Technical Reports Server (NTRS)

    Kimes, D. S. (Principal Investigator)

    1985-01-01

    Knowledge of the physics of the scattering behavior of vegetation will ultimately serve the remote sensing and earth science community in many ways. For example, it will provide: (1) insight and guidance in developing new extraction techniques of canopy characteristics, (2) a basis for better interpretation of off-nadir satellite and aircraft data, (3) a basis for defining specifications of future earth observing sensor systems, and (4) a basis for defining important aspects of physical and biological processes of the plant system. The overall objective of the three-year study is to improve our fundamental understanding of the dynamics of directional scattering properties of vegetation canopies through analysis of field data and model simulation data. The specific objectives are to: (1) collect directional reflectance data covering the entire exitance hemisphere for several common vegetation canopies with various geometric structure (both homogeneous and row crop structures), (2) develop a scene radiation model with a general mathematical framework which will treat 3-D variability in heterogeneous scenes and account for 3-D radiant interactions within the scene, (3) conduct validations of the model on collected data sets, and (4) test and expand proposed physical scattering mechanisms involved in reflectance distribution dynamics by analyzing both field and modeling data.

  3. Recovery of fingerprints from fire scenes and associated evidence.

    PubMed

    Deans, J

    2006-01-01

    A lack of information concerning the potential recovery of fingerprints from fire scenes and related evidence prompted several research projects. Latent prints from good secretors and visible prints (in blood) were placed on a variety of different surfaces and subsequently subjected to "real life" fires in fully furnished compartments used for fire investigation training purposes. The items were placed in various locations and at different heights within the compartments. After some initial success, further tests were undertaken using both latent and dirt/grease marks on different objects within the same types of fire compartments. Subsequent sets of tests involved the recovery of latent and visual fingerprints (in blood, dirt and grease) from different types of weapons, lighters, plastic bags, match boxes, tapers, plastic bottles and petrol bombs that had been subjected to the same fire conditions as previously. Throughout the entire series of projects one of the prime considerations was how the resultant findings could be put into practice by fire scene examiners in an attempt to assist the police in their investigations. This research demonstrates that almost one in five items recovered from fire scenes yielded fingerprint ridge detail following normal development treatments.

  4. Predicting the Valence of a Scene from Observers’ Eye Movements

    PubMed Central

    R.-Tavakoli, Hamed; Atyabi, Adham; Rantanen, Antti; Laukka, Seppo J.; Nefti-Meziani, Samia; Heikkilä, Janne

    2015-01-01

    Multimedia analysis benefits from understanding the emotional content of a scene in a variety of tasks such as video genre classification and content-based image retrieval. Recently, there has been an increasing interest in applying human bio-signals, particularly eye movements, to recognize the emotional gist of a scene such as its valence. In order to determine the emotional category of images using eye movements, the existing methods often learn a classifier using several features that are extracted from eye movements. Although it has been shown that eye movement is potentially useful for recognition of scene valence, the contribution of each feature is not well-studied. To address the issue, we study the contribution of features extracted from eye movements in the classification of images into pleasant, neutral, and unpleasant categories. We assess ten features and their fusion. The features are histogram of saccade orientation, histogram of saccade slope, histogram of saccade length, histogram of saccade duration, histogram of saccade velocity, histogram of fixation duration, fixation histogram, top-ten salient coordinates, and saliency map. We utilize machine learning approach to analyze the performance of features by learning a support vector machine and exploiting various feature fusion schemes. The experiments reveal that ‘saliency map’, ‘fixation histogram’, ‘histogram of fixation duration’, and ‘histogram of saccade slope’ are the most contributing features. The selected features signify the influence of fixation information and angular behavior of eye movements in the recognition of the valence of images. PMID:26407322
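
    The classification step can be sketched with concatenation ("early") fusion of two feature families and a support vector machine; the data, dimensions, and class structure below are synthetic stand-ins for the paper's eye-movement features:

```python
# Feature-fusion valence classification sketch: concatenate per-image
# eye-movement feature vectors and train an SVM. Data are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 150
labels = rng.integers(0, 3, n)            # pleasant / neutral / unpleasant

# Two illustrative "feature families", e.g. a fixation-duration histogram and
# a saccade-slope histogram; class-dependent means make the task learnable.
fix_hist = rng.normal(labels[:, None], 1.0, (n, 8))
slope_hist = rng.normal(0.5 * labels[:, None], 1.0, (n, 6))

fused = np.hstack([fix_hist, slope_hist])  # early (concatenation) fusion
scores = cross_val_score(SVC(kernel="rbf"), fused, labels, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")  # well above the 0.33 chance level
```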

  5. Imagery rescripting: Is incorporation of the most aversive scenes necessary?

    PubMed

    Dibbets, Pauline; Arntz, Arnoud

    2016-01-01

    During imagery rescripting (ImRs) an aversive memory is relived and transformed to have a more positive outcome. ImRs is frequently applied in psychological treatment and is known to reduce intrusions and distress of the memory. However, little is known about the necessity of incorporating the central aversive parts of the memory in ImRs. To examine this necessity, one hundred participants watched an aversive film and were subsequently randomly assigned to one of four experimental conditions: ImRs including the aversive scenes (Late ImRs), ImRs without the aversive scenes (Early ImRs), imaginal exposure (IE) or a control condition (Cont). Participants in the IE intervention reported the highest distress levels during the intervention; Cont resulted in the lowest levels of self-reported distress. For intrusion frequency, only Late ImRs resulted in fewer intrusions compared to the Cont condition; Early ImRs produced significantly more intrusions than the Late ImRs or IE conditions. Finally, the intrusions of the Late ImRs condition were reported as less vivid compared to the other conditions. To conclude, it seems beneficial to include the aversive scenes in ImRs after an analogue trauma induction.

  6. Touching and Hearing Unseen Objects: Multisensory Effects on Scene Recognition

    PubMed Central

    van Lier, Rob

    2016-01-01

    In three experiments, we investigated the influence of object-specific sounds on haptic scene recognition without vision. Blindfolded participants had to recognize, through touch, spatial scenes comprising six objects that were placed on a round platform. Critically, in half of the trials, object-specific sounds were played when objects were touched (bimodal condition), while sounds were turned off in the other half of the trials (unimodal condition). After first exploring the scene, two objects were swapped and the task was to report which of the objects swapped positions. In Experiment 1, geometrical objects and simple sounds were used, while in Experiment 2, the objects comprised toy animals that were matched with semantically compatible animal sounds. In Experiment 3, we replicated Experiment 1, but now a tactile-auditory object identification task preceded the experiment, in which participants learned to identify the objects based on tactile and auditory input. For each experiment, the results revealed a significant performance increase only after the switch from bimodal to unimodal. Thus, it appears that the release from bimodal identification (audio-tactile) to tactile-only produces a benefit that is not achieved in the reversed order, in which sound is added after experience with haptics alone. We conclude that task-related factors other than mere bimodal identification cause the facilitation when switching from bimodal to unimodal conditions. PMID:27698985

  7. Scene interpretation module for an active vision system

    NASA Astrophysics Data System (ADS)

    Remagnino, P.; Matas, J.; Illingworth, John; Kittler, Josef

    1993-08-01

    In this paper an implementation of a high level symbolic scene interpreter for an active vision system is considered. The scene interpretation module uses low level image processing and feature extraction results to achieve object recognition and to build up a 3D environment map. The module is structured to exploit spatio-temporal context provided by existing partial world interpretations and has spatial reasoning to direct gaze control and thereby achieve efficient and robust processing using spatial focus of attention. The system builds and maintains an awareness of an environment which is far larger than a single camera view. Experiments on image sequences have shown that the system can: establish its position and orientation in a partially known environment, track simple moving objects such as cups and boxes, temporally integrate recognition results to establish or forget object presence, and utilize spatial focus of attention to achieve efficient and robust object recognition. The system has been extensively tested using images from a single steerable camera viewing a simple table top scene containing box and cylinder-like objects. Work is currently progressing to further develop its competences and interface it with the Surrey active stereo vision head, GETAFIX.

  8. A note on image degradation, disability glare, and binocular vision

    NASA Astrophysics Data System (ADS)

    Rajaram, Vandana; Lakshminarayanan, Vasudevan

    2013-08-01

    Disability glare due to scattering of light causes a reduction in visual performance by casting a luminous veil over the scene, which impairs tasks such as contrast detection. In this note, we report a study of the effect of this veiling luminance on human stereoscopic vision. We measured the effect of glare on the horopter determined using the apparent fronto-parallel plane (AFPP) criterion. The empirical longitudinal horopter measured with the AFPP criterion was analyzed using the so-called analytic plot, whose parameters provide a quantitative measure of binocular vision. Image degradation has a major effect on binocular vision as measured by the horopter. Under the conditions tested, it appears that once vision is sufficiently degraded, the addition of disability glare does not significantly compromise depth perception, as measured by the horopter, any further.

  9. N400 brain responses to spoken phrases paired with photographs of scenes: implications for visual scene displays in AAC systems.

    PubMed

    Wilkinson, Krista M; Stutzman, Allyson; Seisler, Andrea

    2015-03-01

    Augmentative and alternative communication (AAC) systems are often implemented for individuals whose speech cannot meet their full communication needs. One type of aided display is called a Visual Scene Display (VSD). VSDs consist of integrated scenes (such as photographs) in which language concepts are embedded. Often, the representations of concepts on VSDs are perceptually similar to their referents. Given this physical resemblance, one may ask how well VSDs support development of symbolic functioning. We used brain imaging techniques to examine whether matches and mismatches between the content of spoken messages and photographic images of scenes evoke neural activity similar to activity that occurs to spoken or written words. Electroencephalography (EEG) was recorded from 15 college students who were shown photographs paired with spoken phrases that were either matched or mismatched to the concepts embedded within each photograph. Of interest was the N400 component, a negative deflecting wave 400 ms post-stimulus that is considered to be an index of semantic functioning. An N400 response in the mismatched condition (but not the matched) would replicate brain responses to traditional linguistic symbols. An N400 was found, exclusively in the mismatched condition, suggesting that mismatches between spoken messages and VSD-type representations set the stage for the N400 in ways similar to traditional linguistic symbols.

  10. Non-accidental properties underlie human categorization of complex natural scenes

    PubMed Central

    Shen, Dandan

    2013-01-01

    Humans can categorize complex natural scenes quickly and accurately. Which scene properties enable us to do this with such apparent ease? We extracted structural properties of contours (orientation, length, curvature) and contour junctions (types and angles) from line drawings of natural scenes. All of these properties contain information about scene category that can be exploited computationally. But, when comparing error patterns from computational scene categorization with those from a six-alternative forced-choice scene categorization experiment, we found that only junctions and curvature made significant contributions to human behavior. To further test the critical role of these properties we perturbed junctions in line drawings by randomly shifting contours. As predicted, we found a significant decrease in human categorization accuracy. We conclude that scene categorization by humans relies on curvature as well as the same non-accidental junction properties used for object recognition. These properties correspond to the visual features represented in area V2. PMID:24474725

  11. Exploring the role of gaze behavior and object detection in scene understanding

    PubMed Central

    Yun, Kiwon; Peng, Yifan; Samaras, Dimitris; Zelinsky, Gregory J.; Berg, Tamara L.

    2013-01-01

    We posit that a person's gaze behavior while freely viewing a scene contains an abundance of information, not only about their intent and what they consider to be important in the scene, but also about the scene's content. Experiments are reported, using two popular image datasets from computer vision, that explore the relationship between the fixations that people make during scene viewing, how they describe the scene, and automatic detection predictions of object categories in the scene. From these exploratory analyses, we then combine human behavior with the outputs of current visual recognition methods to build prototype human-in-the-loop applications for gaze-enabled object detection and scene annotation. PMID:24367348

  12. Explosive Percolation Transition is Actually Continuous

    NASA Astrophysics Data System (ADS)

    da Costa, R. A.; Dorogovtsev, S. N.; Goltsev, A. V.; Mendes, J. F. F.

    2010-12-01

    Recently a discontinuous percolation transition was reported in a new “explosive percolation” problem for irreversible systems [D. Achlioptas, R. M. D’Souza, and J. Spencer, Science 323, 1453 (2009)SCIEAS0036-807510.1126/science.1167782] in striking contrast to ordinary percolation. We consider a representative model which shows that the explosive percolation transition is actually a continuous, second order phase transition though with a uniquely small critical exponent of the percolation cluster size. We describe the unusual scaling properties of this transition and find its critical exponents and dimensions.
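
    The specific model analyzed by da Costa et al. differs in detail, but the flavor of such irreversible processes can be sketched with the Achlioptas product rule, the process in which the explosive transition was first reported. The following is an illustrative, minimal simulation (function names and parameters are our own), tracking the largest cluster with a union-find structure:

```python
import random

class DSU:
    """Union-find with component sizes and a running largest-cluster size."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.largest = 1

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        self.largest = max(self.largest, self.size[ra])

def product_rule_growth(n, n_edges, seed=1):
    """Achlioptas product rule: of two random candidate edges, keep the one
    with the smaller product of endpoint-component sizes. Returns the
    largest-cluster size after each added edge."""
    rng = random.Random(seed)
    dsu = DSU(n)
    history = []
    for _ in range(n_edges):
        a1, b1 = rng.randrange(n), rng.randrange(n)
        a2, b2 = rng.randrange(n), rng.randrange(n)
        p1 = dsu.size[dsu.find(a1)] * dsu.size[dsu.find(b1)]
        p2 = dsu.size[dsu.find(a2)] * dsu.size[dsu.find(b2)]
        a, b = (a1, b1) if p1 <= p2 else (a2, b2)
        dsu.union(a, b)
        history.append(dsu.largest)
    return history
```

    Plotting the returned history against t = edges/n shows the sharp, yet (per the paper) ultimately continuous, growth of the largest cluster near the transition.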

  13. Neoadjuvant Treatment in Rectal Cancer: Actual Status

    PubMed Central

    Garajová, Ingrid; Di Girolamo, Stefania; de Rosa, Francesco; Corbelli, Jody; Agostini, Valentina; Biasco, Guido; Brandi, Giovanni

    2011-01-01

    Neoadjuvant (preoperative) concomitant chemoradiotherapy (CRT) has become a standard treatment for locally advanced rectal adenocarcinomas, i.e., clinical stage II (cT3-4, N0, M0) and stage III (cT1-4, N+, M0) disease according to the International Union Against Cancer (UICC) classification. It can reduce tumor volume and thereby increase the rate of complete (R0) resections, shows less toxicity, and improves local control. The aim of this review is to summarize current approaches, main problems, and discrepancies in the treatment of locally advanced rectal adenocarcinomas. PMID:22295206

  14. Air resistance measurements on actual airplane parts

    NASA Technical Reports Server (NTRS)

    Weiselsberger, C

    1923-01-01

    For the calculation of the parasite resistance of an airplane, a knowledge of the resistance of the individual structural and accessory parts is necessary. The most reliable basis for this is given by tests with actual airplane parts at airspeeds which occur in practice. The data given here relate to the landing gear of a Siemens-Schuckert DI airplane; the landing gear of a 'Luftfahrzeug-Gesellschaft' airplane (type Roland DIIa); the landing gear of a 'Flugzeugbau Friedrichshafen' G airplane; a machine gun; and the exhaust manifold of a 269 HP engine.

  15. Validation of the ASTER instrument level 1A scene geometry

    USGS Publications Warehouse

    Kieffer, H.H.; Mullins, K.F.; MacKinnon, D.J.

    2008-01-01

    An independent assessment of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument geometry was undertaken by the U.S. ASTER Team, to confirm the geometric correction parameters developed and applied to Level 1A (radiometrically and geometrically raw with correction parameters appended) ASTER data. The goal was to evaluate the geometric quality of the ASTER system and the stability of the Terra spacecraft. ASTER is a 15-band system containing optical instruments with resolutions from 15 to 90 meters; all geometrically registered products are ultimately tied to the 15-meter Visible and Near Infrared (VNIR) sub-system. Our evaluation process first involved establishing a large database of Ground Control Points (GCPs) in the mid-western United States, an area with features of an appropriate size for spacecraft instrument resolutions. We used standard U.S. Geological Survey (USGS) Digital Orthophoto Quads (DOQs) of areas in the mid-west to locate accurate GCPs by systematically identifying road intersections and recording their coordinates. Elevations for these points were derived from USGS Digital Elevation Models (DEMs). Road intersections in a swath of nine contiguous ASTER scenes were then matched to the GCPs, including terrain correction. We found no significant distortion in the images; after a simple image offset to absolute position, the RMS residual of about 200 points per scene was less than one-half a VNIR pixel. Absolute locations were within 80 meters, with a slow drift of about 10 meters over the entire 530-kilometer swath. Using strictly simultaneous observations of scenes 370 kilometers apart, we determined a stereo angle correction of 0.00134 degree with an accuracy of one microradian. The mid-west GCP field and the techniques used here should be widely applicable in assessing other spacecraft instruments having resolutions from 5 to 50 meters. © 2008 American Society for Photogrammetry and Remote Sensing.
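
    The evaluation step described above, removing a constant image offset and reporting the RMS residual of the matched control points, can be sketched as follows (a hypothetical helper, not the ASTER Team's code):

```python
import numpy as np

def offset_and_rms(gcp_xy, img_xy):
    """Estimate the constant image offset (mean displacement between matched
    image points and ground control points) and the RMS residual that
    remains after that offset is removed."""
    d = np.asarray(img_xy, float) - np.asarray(gcp_xy, float)
    offset = d.mean(axis=0)                  # simple image offset
    resid = d - offset                       # per-point residual vectors
    rms = float(np.sqrt((resid ** 2).sum(axis=1).mean()))
    return offset, rms
```

    A residual RMS below half a pixel, as reported, would indicate no significant internal distortion beyond the constant shift.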

  16. The capture and recreation of 3D auditory scenes

    NASA Astrophysics Data System (ADS)

    Li, Zhiyun

    The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research is concerned with sound capture via a spherical microphone array. The advantage of this array is that it can be steered into any 3D direction digitally with the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation and robustness constraints. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating it by exploiting the reciprocity principle that is satisfied between the two processes. Our approach makes the system easy to build and practical. Using this approach, we can capture the 3D sound field with a spherical microphone array and recreate it using a spherical loudspeaker array, and ensure that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted for other regular or semi-regular layouts of microphones. In addition, we extend this approach to headphone-based systems. Design examples and simulation results are presented to verify our algorithms. Prototypes are built and tested in real-world auditory scenes.
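
    As an illustration of the digital steering idea, a far-field delay-and-sum (phase-alignment) beamformer for microphones on the unit sphere can be sketched as below. This is a simplified stand-in for the spherical-harmonic beamformers developed in the work, and all names and layouts are our own assumptions:

```python
import numpy as np

def fibonacci_sphere(n):
    """Quasi-uniform microphone positions on the unit sphere."""
    i = np.arange(n)
    phi = np.pi * (3 - np.sqrt(5)) * i          # golden-angle spiral
    z = 1 - 2 * (i + 0.5) / n
    r = np.sqrt(1 - z ** 2)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def delay_and_sum_response(mics, look_dir, src_dir, k):
    """Far-field response of a phase-aligned beamformer steered to look_dir,
    for a unit plane wave with wavenumber k arriving from src_dir."""
    look = np.asarray(look_dir) / np.linalg.norm(look_dir)
    src = np.asarray(src_dir) / np.linalg.norm(src_dir)
    steer = np.exp(1j * k * mics @ look)        # alignment weights
    wave = np.exp(1j * k * mics @ src)          # incident wave at each mic
    return abs(np.vdot(steer, wave)) / len(mics)
```

    The normalized response is unity in the look direction and attenuated elsewhere, and the same weight rule can be re-evaluated to steer the beam to any 3D direction digitally.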

  17. Functional neuroanatomy of auditory scene analysis in Alzheimer's disease.

    PubMed

    Golden, Hannah L; Agustus, Jennifer L; Goll, Johanna C; Downey, Laura E; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D

    2015-01-01

    Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known 'cocktail party effect' as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory 'foreground' and 'background'. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology.

  18. Adaptive optimal spectral range for dynamically changing scene

    NASA Astrophysics Data System (ADS)

    Pinsky, Ephi; Siman-tov, Avihay; Peles, David

    2012-06-01

    A novel multispectral video system that continuously optimizes both its spectral range channels and the exposure time of each channel autonomously, under dynamic scenes varying from short-range, clear conditions to long-range, poor visibility, is currently being developed. In a highly scattering medium, the transparency and contrast of channels with spectral ranges in the near infrared are superior to those of the visible channels, particularly the blue range. Longer-wavelength spectral ranges that yield higher contrast are therefore favored. Images of 3 spectral channels are fused and displayed for (pseudo) color visualization, as an integrated high contrast video stream. In addition to the dynamic optimization of the spectral channels, the optimal real-time exposure time is adjusted simultaneously and autonomously for each channel. A criterion of maximum average signal, derived dynamically from previous frames of the video stream, is used (Patent Application - International Publication Number: WO2009/093110 A2, 30.07.2009). This configuration enables dynamic compatibility with the optimal exposure time of a dynamically changing scene. It also maximizes the signal to noise ratio and compensates each channel for the specified value of daylight reflections and sensor response for each spectral range. A possible implementation is a color video camera based on 4 synchronized, highly responsive CCD imaging detectors, attached to a 4CCD dichroic prism and combined with a common, color corrected lens. Principal Components Analysis (PCA) is then applied for real-time "dimensional collapse" in color space, in order to select and fuse, for clear color visualization, the 3 most significant principal channels out of at least 4, characterized by high contrast and rich details in the image data.
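
    A per-channel exposure controller of this general kind, driving the mean frame signal toward a target level using measurements from previous frames, might be sketched as a simple multiplicative update (an illustrative reconstruction with made-up constants; the patented method's details are not reproduced here):

```python
def update_exposure(t_exp, mean_signal, target=0.6, gain=0.8,
                    t_min=1e-4, t_max=0.05):
    """Multiplicative exposure control: scale the exposure time so the mean
    frame signal approaches the target level. The signal is assumed to be
    roughly linear in exposure time, as for an unsaturated CCD."""
    if mean_signal <= 0:
        return min(t_exp * 2, t_max)          # no signal: open up quickly
    ratio = target / mean_signal
    t_new = t_exp * ratio ** gain             # damped multiplicative step
    return max(t_min, min(t_new, t_max))      # respect hardware limits
```

    Running the update once per frame lets each spectral channel settle on its own exposure as the scene brightness changes.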

  19. Optic flow aided navigation and 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Rollason, Malcolm

    2013-10-01

    An important enabler for low cost airborne systems is the ability to exploit low cost inertial instruments. An Inertial Navigation System (INS) can provide a navigation solution, when GPS is denied, by integrating measurements from inertial sensors. However, the gyrometer and accelerometer biases of low cost inertial sensors cause compound errors in the integrated navigation solution. This paper describes experiments to establish whether (and to what extent) the navigation solution can be aided by fusing measurements from an on-board video camera with measurements from the inertial sensors. The primary aim of the work was to establish whether optic flow aided navigation is beneficial even when the 3D structure within the observed scene is unknown. A further aim was to investigate whether an INS can help to infer 3D scene content from video. Experiments with both real and synthetic data have been conducted. Real data was collected using an AR Parrot quadrotor. Empirical results illustrate that optic flow provides a useful aid to navigation even when the 3D structure of the observed scene is not known. With optic flow aiding of the INS, the computed trajectory is consistent with the true camera motion, whereas the unaided INS yields a rapidly increasing position error (the data represents ~40 seconds, after which the unaided INS is ~50 metres in error and has passed through the ground). The results of the Monte Carlo simulation concur with the empirical result. Position errors, which grow as a quadratic function of time when unaided, are substantially checked by the availability of optic flow measurements.
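
    The benefit of fusing an optic-flow velocity measurement with a biased inertial sensor can be illustrated with a toy one-dimensional simulation (parameters and gains are arbitrary choices of ours, not those of the paper): the unaided position error grows quadratically with time, while the aided error stays bounded.

```python
def ins_position_error(aided, steps=400, dt=0.1, accel_bias=0.05, k=0.2):
    """1-D INS sketch. The accelerometer reads a constant bias (true
    acceleration is zero); when aided, the velocity estimate is nudged
    toward an optic-flow-derived velocity measurement each step."""
    true_v, true_p = 1.0, 0.0
    v, p = 1.0, 0.0                    # estimates start error-free
    for _ in range(steps):
        true_p += true_v * dt
        a_meas = 0.0 + accel_bias      # biased accelerometer reading
        v += a_meas * dt               # INS velocity propagation
        p += v * dt                    # INS position propagation
        if aided:
            v += k * (true_v - v)      # optic-flow velocity update
    return abs(p - true_p)
```

    With these numbers the unaided error compounds to tens of metres while the aided error remains near one metre, mirroring the qualitative behaviour reported for the quadrotor data.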

  20. Human matching performance of genuine crime scene latent fingerprints.

    PubMed

    Thompson, Matthew B; Tangen, Jason M; McCarthy, Duncan J

    2014-02-01

    There has been very little research into the nature and development of fingerprint matching expertise. Here we present the results of an experiment testing the claimed matching expertise of fingerprint examiners. Expert (n = 37), intermediate trainee (n = 8), new trainee (n = 9), and novice (n = 37) participants performed a fingerprint discrimination task involving genuine crime scene latent fingerprints, their matches, and highly similar distractors, in a signal detection paradigm. Results show that qualified, court-practicing fingerprint experts were exceedingly accurate compared with novices. Experts showed a conservative response bias, tending to err on the side of caution by making more errors of the sort that could allow a guilty person to escape detection than errors of the sort that could falsely incriminate an innocent person. The superior performance of experts was not simply a function of their ability to match prints, per se, but a result of their ability to identify the highly similar, but nonmatching, fingerprints as such. Comparing these results with previous experiments, experts were even more conservative in their decision making when dealing with these genuine crime scene prints than when dealing with simulated crime scene prints, and this conservatism made them relatively less accurate overall. Intermediate trainees, despite their lack of qualification and an average of 3.5 years' experience, performed about as accurately as qualified experts, who had an average of 17.5 years' experience. New trainees, despite their 5-week, full-time training course or their 6 months' experience, were no better than novices at discriminating matching from similar nonmatching prints; they were just more conservative. Further research is required to determine the precise nature of fingerprint matching expertise and the factors that influence performance.
The findings of this representative, lab-based experiment may have implications for the way fingerprint examiners testify in
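
    The signal detection measures underlying such results, sensitivity (d') and criterion (c, with positive values indicating the conservative bias reported here), can be computed from hit and false-alarm rates as follows (a standard textbook formulation, not code from the study):

```python
from statistics import NormalDist

def dprime_criterion(hit_rate, fa_rate):
    """Signal-detection sensitivity (d') and response bias (c) from
    hit and false-alarm proportions (both strictly between 0 and 1)."""
    z = NormalDist().inv_cdf            # inverse standard-normal CDF
    d = z(hit_rate) - z(fa_rate)        # sensitivity
    c = -0.5 * (z(hit_rate) + z(fa_rate))  # criterion; > 0 is conservative
    return d, c
```

    For example, equal hit and correct-rejection rates give c = 0, while a participant who rarely calls "match" shifts c above zero.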

  1. Blind subjects construct conscious mental images of visual scenes encoded in musical form.

    PubMed Central

    Cronly-Dillon, J; Persaud, K C; Blore, R

    2000-01-01

    Blind (previously sighted) subjects are able to analyse, describe and graphically represent a number of high-contrast visual images translated into musical form de novo. We presented musical transforms of a random assortment of photographic images of objects and urban scenes to such subjects, a few of which depicted architectural and other landmarks that may be useful in navigating a route to a particular destination. Our blind subjects were able to use the sound representation to construct a conscious mental image that was revealed by their ability to depict a visual target by drawing it. We noted the similarity between the way the visual system integrates information from successive fixations to form a representation that is stable across eye movements and the way a succession of image frames (encoded in sound) which depict different portions of the image are integrated to form a seamless mental image. Finally, we discuss the profound resemblance between the way a professional musician carries out a structural analysis of a musical composition in order to relate its structure to the perception of musical form and the strategies used by our blind subjects in isolating structural features that collectively reveal the identity of visual form. PMID:11413637

  2. Blind subjects construct conscious mental images of visual scenes encoded in musical form.

    PubMed

    Cronly-Dillon, J; Persaud, K C; Blore, R

    2000-11-07

    Blind (previously sighted) subjects are able to analyse, describe and graphically represent a number of high-contrast visual images translated into musical form de novo. We presented musical transforms of a random assortment of photographic images of objects and urban scenes to such subjects, a few of which depicted architectural and other landmarks that may be useful in navigating a route to a particular destination. Our blind subjects were able to use the sound representation to construct a conscious mental image that was revealed by their ability to depict a visual target by drawing it. We noted the similarity between the way the visual system integrates information from successive fixations to form a representation that is stable across eye movements and the way a succession of image frames (encoded in sound) which depict different portions of the image are integrated to form a seamless mental image. Finally, we discuss the profound resemblance between the way a professional musician carries out a structural analysis of a musical composition in order to relate its structure to the perception of musical form and the strategies used by our blind subjects in isolating structural features that collectively reveal the identity of visual form.

  3. Scene recognition and colorization for vehicle infrared images

    NASA Astrophysics Data System (ADS)

    Hou, Junjie; Sun, Shaoyuan; Shen, Zhenyi; Huang, Zhen; Zhao, Haitao

    2016-10-01

    In order to make better use of infrared technology in driving assistance systems, a scene recognition and colorization method is proposed in this paper. Various objects in a queried infrared image are detected and labelled with proper categories by a combination of SIFT Flow and an MRF model. The queried image is then colorized by assigning corresponding colors according to the categories of the objects that appear. The results show that this strategy emphasizes important information in the IR images for human vision and could broaden the application of IR images to vehicle driving.
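
    The final colorization step, assigning display colors by detected category, reduces to a per-pixel palette lookup. The sketch below is illustrative, with a made-up palette; the SIFT-Flow/MRF labelling stage that produces the label map is not reproduced:

```python
import numpy as np

# Hypothetical category -> display colour (RGB) lookup.
PALETTE = {0: (128, 128, 128),   # road
           1: ( 34, 139,  34),   # vegetation
           2: (255,   0,   0)}   # pedestrian / hazard

def colorize(label_map):
    """Assign a colour to each pixel of an IR image from its class label,
    the final step of a recognise-then-colorize pipeline."""
    h, w = label_map.shape
    out = np.zeros((h, w, 3), np.uint8)
    for label, rgb in PALETTE.items():
        out[label_map == label] = rgb
    return out
```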

  4. Photorealistic ray tracing to visualize automobile side mirror reflective scenes.

    PubMed

    Lee, Hocheol; Kim, Kyuman; Lee, Gang; Lee, Sungkoo; Kim, Jingu

    2014-10-20

    We describe an interactive visualization procedure for determining the optimal surface of a special automobile side mirror, thereby removing the blind spot, without the need for feedback from the error-prone manufacturing process. If the horizontally progressive curvature distributions are set to the semi-mathematical expression for a free-form surface, the surface point set can then be derived through numerical integration. This is then converted to a NURBS surface while retaining the surface curvature. Then, reflective scenes from the driving environment can be virtually realized using photorealistic ray tracing, in order to evaluate how these reflected images would appear to drivers.
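
    The surface-derivation step, numerically integrating a prescribed curvature distribution into geometry, can be illustrated in two dimensions: integrating curvature gives the tangent angle, and integrating the tangent gives the curve (an illustrative sketch of the principle, not the authors' free-form surface code):

```python
import numpy as np

def curve_from_curvature(kappa, ds):
    """Integrate a sampled curvature profile kappa(s) into a planar curve:
    theta(s) = integral of kappa ds; (x, y) = integral of (cos, sin) theta ds."""
    theta = np.concatenate([[0.0], np.cumsum(kappa) * ds])[:-1]
    x = np.cumsum(np.cos(theta)) * ds
    y = np.cumsum(np.sin(theta)) * ds
    return x, y
```

    As a sanity check, a constant curvature profile closes into a circle; a progressive profile, as in the mirror design, sweeps out a gradually tightening arc.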

  5. Increasing Student Engagement and Enthusiasm: A Projectile Motion Crime Scene

    NASA Astrophysics Data System (ADS)

    Bonner, David

    2010-05-01

    Connecting physics concepts with real-world events allows students to establish a strong conceptual foundation. When such events are particularly interesting to students, it can greatly impact their engagement and enthusiasm in an activity. Activities that involve studying real-world events of high interest can provide students a long-lasting understanding and positive memorable experiences, both of which heighten the learning experiences of those students. One such activity, described in depth in this paper, utilizes a murder mystery and crime scene investigation as an application of basic projectile motion.
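
    A typical calculation in such an activity, finding where a projectile lands given launch speed, angle, and height, follows directly from the constant-acceleration equations (a generic no-drag sketch, not tied to the specific crime scene used in the paper):

```python
import math

def impact_distance(v0, angle_deg, h0=0.0, g=9.81):
    """Horizontal distance travelled by a projectile launched at speed v0
    (m/s), at angle_deg above the horizontal, from height h0 (m). Drag is
    neglected; the positive root gives the time to reach y = 0."""
    th = math.radians(angle_deg)
    vx, vy = v0 * math.cos(th), v0 * math.sin(th)
    t = (vy + math.sqrt(vy ** 2 + 2 * g * h0)) / g
    return vx * t
```

    Students can invert the same relation to infer launch speed or angle from a measured landing point, which is the heart of the crime-scene reconstruction.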

  6. Portable X-ray Fluorescence Unit for Analyzing Crime Scenes

    NASA Astrophysics Data System (ADS)

    Visco, A.

    2003-12-01

    Goddard Space Flight Center and the National Institute of Justice have teamed up to apply NASA technology to the field of forensic science. NASA hardware that is under development for future planetary robotic missions, such as Mars exploration, is being engineered into a rugged, portable, non-destructive X-ray fluorescence system for identifying gunshot residue, blood, and semen at crime scenes. This project establishes the shielding requirements that will ensure that the exposure of a user to ionizing radiation is below the U.S. Nuclear Regulatory Commission's allowable limits, and also develops the benchtop model for testing the system in a controlled environment.

  7. The Poggendorff illusion explained by natural scene geometry.

    PubMed

    Howe, Catherine Q; Yang, Zhiyong; Purves, Dale

    2005-05-24

    One of the most intriguing of the many discrepancies between perceived spatial relationships and the physical structure of visual stimuli is the Poggendorff illusion, in which the two segments of an obliquely oriented line that is interrupted no longer appear collinear. Although many different theories have been proposed to explain this effect, there has been no consensus about its cause. Here, we use a database of range images (i.e., images that include the distance from the image plane of every pixel in the scene) to show that the probability distribution of the possible locations of line segments across an interval in natural environments can fully account for all of the behavior of this otherwise puzzling phenomenon.

  8. Probing the Natural Scene by Echolocation in Bats

    PubMed Central

    Moss, Cynthia F.; Surlykke, Annemarie

    2010-01-01

    Bats echolocating in the natural environment face the formidable task of sorting signals from multiple auditory objects, echoes from obstacles, prey, and the calls of conspecifics. Successful orientation in a complex environment depends on auditory information processing, along with adaptive vocal-motor behaviors and flight path control, which draw upon 3-D spatial perception, attention, and memory. This article reviews field and laboratory studies that document adaptive sonar behaviors of echolocating bats, and point to the fundamental signal parameters they use to track and sort auditory objects in a dynamic environment. We suggest that adaptive sonar behavior provides a window to bats’ perception of complex auditory scenes. PMID:20740076

  9. Discriminative genre-independent audio-visual scene change detection

    NASA Astrophysics Data System (ADS)

    Wilson, Kevin W.; Divakaran, Ajay

    2009-01-01

    We present a technique for genre-independent scene-change detection using audio and video features in a discriminative support vector machine (SVM) framework. This work builds on our previous work by adding a video feature based on the MPEG-7 "scalable color" descriptor. Adding this feature improves our detection rate across all genres by 5% to 15% for a fixed false positive rate of 10%. We also find that the genres that benefit the most are those for which the previous audio-only approach was least effective.
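
    A linear SVM over fused audio-visual feature vectors can be trained with the Pegasos sub-gradient method; the sketch below is a generic, self-contained stand-in (the paper's actual features, labels, and kernel are not specified here), where each row of X would hold concatenated audio and color-descriptor features for one candidate boundary:

```python
import numpy as np

def pegasos_train(X, y, lam=0.01, epochs=50, seed=0):
    """Train a linear SVM (hinge loss, L2 regularization) by Pegasos
    sub-gradient descent. Labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)              # decaying step size
            margin = y[i] * (X[i] @ w)
            w *= (1 - eta * lam)               # regularization shrink
            if margin < 1:                     # hinge-loss sub-gradient
                w += eta * y[i] * X[i]
    return w
```

    Scene changes are then flagged wherever the learned score X @ w crosses zero, with the decision threshold tunable to fix the false positive rate.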

  10. Addressing Problems with Scene-Based Wave Front Sensing

    SciTech Connect

    Chan, C

    2003-08-05

    Scene-Based Wave Front Sensing uses the correlation between successive subimages to determine phase aberrations which blur digital images. Adaptive Optics technology uses deformable mirrors to correct for these phase aberrations and make the images clearer. The correlation between temporal subimages gives tip-tilt information. If these images do not have identical image content, tip-tilt estimations may be incorrect. Motion detection is necessary to help avoid errors initiated by dynamic subimage content. In this document, I will discuss why edge detection fails as a motion detection method on low resolution images and how thresholding the normalized variance of individual pixels is successful for motion detection.
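
    The successful method described, thresholding the per-pixel temporal variance normalized by mean intensity across a stack of subimages, can be sketched as follows (an illustrative reconstruction; names and the threshold value are our own):

```python
import numpy as np

def motion_mask(frames, threshold=0.05):
    """Flag pixels whose temporal variance, normalized by mean intensity,
    exceeds a threshold. Static content yields a near-zero score, so only
    genuinely changing pixels are marked as motion."""
    stack = np.asarray(frames, float)        # shape: (n_frames, h, w)
    mean = stack.mean(axis=0)
    var = stack.var(axis=0)
    score = var / np.maximum(mean, 1e-12)    # guard against division by zero
    return score > threshold
```

    Regions flagged by the mask would then be excluded from the subimage correlations so that dynamic content does not corrupt the tip-tilt estimates.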

  11. Recognition of natural scenes from global properties: seeing the forest without representing the trees.

    PubMed

    Greene, Michelle R; Oliva, Aude

    2009-03-01

    Human observers are able to rapidly and accurately categorize natural scenes, but the representation mediating this feat is still unknown. Here we propose a framework of rapid scene categorization that does not segment a scene into objects and instead uses a vocabulary of global, ecological properties that describe spatial and functional aspects of scene space (such as navigability or mean depth). In Experiment 1, we obtained ground truth rankings on global properties for use in Experiments 2-4. To what extent do human observers use global property information when rapidly categorizing natural scenes? In Experiment 2, we found that global property resemblance was a strong predictor of both false alarm rates and reaction times in a rapid scene categorization experiment. To what extent is global property information alone a sufficient predictor of rapid natural scene categorization? In Experiment 3, we found that the performance of a classifier representing only these properties is indistinguishable from human performance in a rapid scene categorization task in terms of both accuracy and false alarms. To what extent is this high predictability unique to a global property representation? In Experiment 4, we compared two models that represent scene object information to human categorization performance and found that these models had lower fidelity at representing the patterns of performance than the global property model. These results support the hypothesis that rapid categorization of natural scenes may be mediated not primarily through objects and parts, but rather through global properties of structure and affordance.
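
    A classifier that represents scenes only by global property values, in the spirit of the Experiment 3 comparison, can be as simple as nearest-centroid in property space (an illustrative sketch with made-up dimensions, not the authors' classifier):

```python
import numpy as np

def nearest_centroid_predict(train_X, train_y, test_X):
    """Classify scenes by distance to per-category centroids in a
    global-property space (e.g. axes for openness, mean depth,
    navigability). Returns the predicted label for each test row."""
    labels = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in labels])
    d = ((test_X[:, None, :] - centroids[None]) ** 2).sum(-1)
    return labels[d.argmin(axis=1)]
```

    Misclassifications of such a model fall on categories whose property profiles overlap, which is how error patterns can be compared against human false alarms.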

  12. Optic Flow Dominates Visual Scene Polarity in Causing Adaptive Modification of Locomotor Trajectory

    NASA Technical Reports Server (NTRS)

    Nomura, Y.; Mulavara, A. P.; Richards, J. T.; Brady, R.; Bloomberg, Jacob J.

    2005-01-01

    Locomotion and posture are influenced and controlled by vestibular, visual and somatosensory information. Optic flow and scene polarity are two characteristics of a visual scene that have been identified as critical in how they affect perceived body orientation and self-motion. The goal of this study was to determine the role of optic flow and visual scene polarity in adaptive modification of locomotor trajectory. Two computer-generated virtual reality scenes were shown to subjects during 20 minutes of treadmill walking. One scene was highly polarized, while the other was composed of objects displayed in a non-polarized fashion. Both virtual scenes depicted constant-rate self-motion equivalent to walking counterclockwise around the perimeter of a room. Subjects performed stepping tests blindfolded before and after scene exposure to assess adaptive changes in locomotor trajectory. Subjects showed a significant difference in heading direction between pre- and post-adaptation stepping tests when exposed to either scene during treadmill walking. However, there was no significant difference in the subjects' heading direction between the two visual scene polarity conditions. It was therefore inferred from these data that optic flow has a greater role than visual polarity in influencing adaptive locomotor function.

  13. Colour agnosia impairs the recognition of natural but not of non-natural scenes.

    PubMed

    Nijboer, Tanja C W; Van Der Smagt, Maarten J; Van Zandvoort, Martine J E; De Haan, Edward H F

    2007-03-01

    Scene recognition can be enhanced by appropriate colour information, yet the level of visual processing at which colour exerts its effects is still unclear. It has been suggested that colour supports low-level sensory processing, while others have claimed that colour information aids semantic categorization and recognition of objects and scenes. We investigated the effect of colour on scene recognition in a case of colour agnosia, M.A.H. In a scene identification task, participants had to name images of natural or non-natural scenes in six different formats. Irrespective of scene format, M.A.H. was much slower on the natural than on the non-natural scenes. As expected, neither M.A.H. nor control participants showed any difference in performance for the non-natural scenes. However, for the natural scenes, appropriate colour facilitated scene recognition in control participants (i.e., shorter reaction times), whereas M.A.H.'s performance did not differ across formats. Our data thus support the hypothesis that the effect of colour occurs at the level of learned associations.

  14. IR scene image generation from visual image based on thermal database

    NASA Astrophysics Data System (ADS)

    Liao, Binbin; Wang, Zhangye; Ke, Xiaodi; Xia, Yibin; Peng, Qunsheng

    2007-11-01

    In this paper, we propose a new method to generate complex IR scene images directly from the corresponding visual scene image based on a material thermal database. For the input visual scene image, we implement an interactive tool, combining a global magic wand with intelligent scissors, to segment the object areas in the scene, and thermal attributes from the material thermal database are assigned to each object area. By adopting a scene infrared signature model based on infrared physics and heat transfer, the surface temperature distribution of the scene is calculated, and the corresponding grayscale of each area in the IR image is determined by our transformation rule. We also propose a pixel-based RGB spatial similarity model to determine the mixture grayscales of the residual areas in the scene image. To simulate the IR scene realistically, we develop an IR imager blur model that considers the different resolving powers of visual and thermal imagers, IR atmospheric noise and the modulation transfer function of the thermal imager. Finally, IR scene images at different intervals under different weather conditions are generated. Compared with real IR scene images, our simulated results are quite satisfactory and effective.

  15. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  16. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 10 2012-01-01 2012-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  17. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 10 2011-01-01 2011-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  18. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 10 2014-01-01 2014-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  19. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 10 2013-01-01 2013-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  20. The use of liquid latex for soot removal from fire scenes and attempted fingerprint development with Ninhydrin.

    PubMed

    Clutter, Susan Wright; Bailey, Robert; Everly, Jeff C; Mercer, Karl

    2009-11-01

    Throughout the United States, clearance rates for arson cases remain low due to fire's destructive nature, subsequent suppression, and a misconception by investigators that no forensic evidence remains. Recent research shows that fire scenes can yield fingerprints if soot layers are removed prior to using available fingerprinting processes. An experiment applying liquid latex to sooted surfaces was conducted to assess its potential to remove soot and yield fingerprints after the dried latex was peeled. Latent fingerprints were applied to glass and drywall surfaces, sooted in a controlled burn, and cooled. Liquid latex was sprayed on, dried, and peeled. Results yielded usable prints within the soot prior to removal techniques, but no further fingerprint enhancement was noted with Ninhydrin. Field studies using liquid latex will be continued by the (US) Virginia Fire Marshal Academy, but it appears that liquid latex application is a suitable soot removal method for forensic applications.

  1. The actual status of Astronomy in Moldova

    NASA Astrophysics Data System (ADS)

    Gaina, A.

    The astronomical research in the Republic of Moldova after Nicolae Donitch (Donici) (1874-1956(?)) was renewed in 1957, when a satellite observation station was opened in Chisinau. Photometric observations and the rotations of the first Soviet artificial satellites were investigated under the SPIN program put in action by the Academies of Sciences of the former socialist countries. The work was led by Assoc. Prof. Dr. V. Grigorevskij, who also conducted research on variable stars. Later, at the beginning of the 1960s, an astronomical observatory of the Chisinau State University named after Lenin (now the State University of Moldova), located near the villages of Lozovo and Ciuciuleni, was opened; its work was coordinated by Odessa State University (Prof. V.P. Tsesevich) and the Astrosovet of the USSR. Two main groups worked in this area: the first led by V. Grigorevskij (till 1971) and the second by L.I. Shakun (till 1988), both graduates of Odessa State University. Besides these research areas, other astronomical observations were made: comet observations, astroclimate and atmospheric optics, in collaboration with the Institute of Atmospheric Optics of the Siberian branch of the USSR (V. Chernobai, I. Nacu, C. Usov and A.F. Poiata). Comet observations were also made from 1988 by D.I. Gorodetskij, who came to Chisinau from Alma-Ata and collaborated with Ukrainian astronomers led by K.I. Churyumov. Another part of the space research was carried out at the State University of Tiraspol from the beginning of the 1970s by a group of the teaching staff of the Tiraspol State Pedagogical University: M.D. Polanuer and V.S. Sholokhov. No collaboration currently exists between Moldovan and Transdniestrian astronomers due to the 1992 war in Transdniestria. An important area of research concerned the radiophysics of the ionosphere, conducted in Beltsy at the Beltsy State Pedagogical Institute by a group of the institute's teaching staff since the beginning of the 1970s: N. D. Filip, E

  2. A simulation study of scene confusion factors in sensing soil moisture from orbital radar

    NASA Technical Reports Server (NTRS)

    Ulaby, F. T. (Principal Investigator); Dobson, M. C.; Moezzi, S.; Roth, F. T.

    1983-01-01

    Simulated C-band radar imagery for a 124-km by 108-km test site in eastern Kansas is used to classify soil moisture. Simulated radar resolutions are 100 m by 100 m, 1 km by 1 km, and 3 km by 3 km. Distributions of actual near-surface soil moisture are established daily for a 23-day accounting period using a water budget model. Within the 23-day period, three orbital radar overpasses are simulated, roughly corresponding to generally moist, wet, and dry soil moisture conditions. The radar simulations are performed by a target/sensor interaction model dependent upon a terrain model, land-use classification, and near-surface soil moisture distribution. The accuracy of soil-moisture classification is evaluated for each single-date radar observation and also for multi-date detection of relative soil moisture change. In general, the results for single-date moisture detection show that 70% to 90% of cropland can be correctly classified to within +/- 20% of the true percent of field capacity. For a given radar resolution, the expected classification accuracy is shown to be dependent upon both the general soil moisture condition and also the geographical distribution of land-use and topographic relief. An analysis of cropland, urban, pasture/rangeland, and woodland subregions within the test site indicates that multi-temporal detection of relative soil moisture change is least sensitive to classification error resulting from scene complexity and topographic effects.

  3. Discrete and continuous description of a three-dimensional scene for quality control of radiotherapy treatment planning systems

    NASA Astrophysics Data System (ADS)

    Denis, Eloise; Guédon, JeanPierre; Beaumont, Stéphane; Normand, Nicolas

    2006-03-01

    Quality Control (QC) procedures are mandatory to achieve accuracy in radiotherapy treatments. For that purpose, classical methods generally use physical phantoms that are acquired by the system in place of the patient. In this paper, digital test objects (DTOs) replace the actual acquisition. A DTO is a 3D scene description composed of simple and complex shapes from which discrete descriptions can be obtained. For QC needs, both the DICOM format (for Treatment Planning System (TPS) inputs) and continuous descriptions are required. The aim of this work is to define an equivalence model between a continuous description of the three-dimensional (3D) scene used to define the DTO and the DTO characteristics. The purpose is to have an XML DTO description in order to compute discrete calculations from a continuous description. The defined structure also makes it possible to obtain the three-dimensional matrix of the DTO and then the series of slices stored in the DICOM format. Thus, it is shown how DTOs can be designed for quality control in CT simulation and dosimetry.

  4. Super-Resolution of Dynamic Scenes Using Sampling Rate Diversity.

    PubMed

    Salem, Faisal; Yagle, Andrew E

    2016-08-01

    In earlier work, we proposed a super-resolution (SR) method that required the availability of two low-resolution (LR) sequences corresponding to two different sampling rates, where images from one sequence were used as a basis to represent the polyphase components (PPCs) of the high-resolution (HR) image, while the other LR sequence provided the reference LR image (to be super-resolved). The (simple) algorithm implemented by Salem and Yagle is only applicable when the scene is static. In this paper, we recast our approach to SR as a two-stage example-based algorithm to process dynamic scenes. We employ feature selection to create, from the LR frames, local LR dictionaries to represent the PPCs of HR patches. To enforce sparsity, we implement Gaussian generative models as an efficient alternative to L1-norm minimization. Estimation errors are further reduced using what we refer to as anchors, which are based on the relationship between PPCs corresponding to different sampling rates. In the second stage, we revert to simple single-frame SR (applied to each frame), using HR dictionaries extracted from the super-resolved sequence of the previous stage. The second stage is thus a reiteration of the sparsity coding scheme, using only one LR sequence and without involving PPCs. The ability of the modified algorithm to super-resolve challenging LR sequences reintroduces sampling rate diversity as a prerequisite of robust multiframe SR.
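
    The polyphase decomposition the method builds on can be illustrated generically. The downsampling factor of 2 below is an assumption for illustration, not taken from the paper.

```python
import numpy as np

def polyphase_components(image, factor=2):
    """Split an image into its factor*factor polyphase components (PPCs).

    Each PPC is the sub-grid of pixels sharing the same offset modulo
    `factor`. Downsampling the image by `factor` keeps exactly one PPC,
    so LR frames sample different PPCs of the underlying HR image.
    """
    img = np.asarray(image)
    return [img[i::factor, j::factor]
            for i in range(factor) for j in range(factor)]

# A tiny 4x4 "HR image" split into four 2x2 PPCs.
hr = np.arange(16).reshape(4, 4)
ppcs = polyphase_components(hr, factor=2)
```

    Super-resolving the reference LR image then amounts to estimating the missing PPCs and interleaving them back onto the HR grid.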

  5. Scene understanding based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2005-05-01

    New generations of smart weapons and unmanned vehicles must have reliable perceptual systems that are similar to human vision. Instead of precise computations of 3-dimensional models, a network-symbolic system converts image information into an "understandable" Network-Symbolic format, which is similar to relational knowledge models. The logic of visual scenes can be captured in the Network-Symbolic models and used for the disambiguation of visual information. It is hard to use geometric operations for the processing of natural images. Instead, the brain builds a relational network-symbolic structure of the visual scene, using different clues to set up the relational order of surfaces and objects. Feature, symbol, and predicate are equivalent in the biologically inspired Network-Symbolic systems. A linking mechanism binds these features/symbols into coherent structures, and the image is converted from a "raster" into a "vector" representation that can be better interpreted by higher-level knowledge structures. View-based object recognition is a hard problem for traditional algorithms that directly match a primary view of an object to a model. In Network-Symbolic Models, the derived structure, not the primary view, is the subject of recognition. Such recognition is not affected by local changes and appearances of the object as seen from a set of similar views.

  6. Reversed effects of spatial compatibility in natural scenes.

    PubMed

    Müsseler, Jochen; Aschersleben, Gisa; Arning, Katrin; Proctor, Robert W

    2009-01-01

    Effects of spatial stimulus-response compatibility are often attributed to automatic position-based activation of the response elicited by a stimulus. Three experiments examined this assumption in natural scenes. In Experiments 1 and 2, participants performed simulated driving, and a person appeared periodically on either side of the road. Participants were to turn toward a person calling a taxi and away from a person carelessly entering the street. The spatially incompatible response was faster than the compatible response, but neutral stimuli showed a typical benefit for spatially compatible responses. Placing the people further in the visual periphery eliminated the advantage for the incompatible response and showed an advantage for the compatible response. In Experiment 3, participants made left-right joystick responses to a vicious dog or puppy in a walking scenario. Instructions were to avoid the vicious dog and approach the puppy or vice versa. Results again showed an advantage for the spatially incompatible response. Thus, the typically observed advantage of spatially compatible responses was reversed for dangerous situations in natural scenes.

  7. Perceptual expertise improves category detection in natural scenes.

    PubMed

    Reeder, Reshanne R; Stein, Timo; Peelen, Marius V

    2016-02-01

    There is much debate about how detection, categorization, and within-category identification relate to one another during object recognition. Whether these tasks rely on partially shared perceptual mechanisms may be determined by testing whether training on one of these tasks facilitates performance on another. In the present study we asked whether expertise in discriminating objects improves the detection of these objects in naturalistic scenes. Self-proclaimed car experts (N = 34) performed a car discrimination task to establish their level of expertise, followed by a visual search task where they were asked to detect cars and people in hundreds of photographs of natural scenes. Results revealed that expertise in discriminating cars was strongly correlated with car detection accuracy. This effect was specific to objects of expertise, as there was no influence of car expertise on person detection. These results indicate a close link between object discrimination and object detection performance, which we interpret as reflecting partially shared perceptual mechanisms and neural representations underlying these tasks: the increased sensitivity of the visual system for objects of expertise - as a result of extensive discrimination training - may benefit both the discrimination and the detection of these objects. Alternative interpretations are also discussed.

  8. Effects of sex and age on auditory spatial scene analysis.

    PubMed

    Lewald, Jörg; Hausmann, Markus

    2013-05-01

    Recently, it has been demonstrated that men outperform women in the spatial analysis of complex auditory scenes (Zündorf et al., 2011). The present study investigated the relation between the effects of ageing and sex on the spatial segregation of concurrent sounds in younger and middle-aged adults. The experimental design allowed simultaneous presentation of target and distractor sound sources at different locations. The resulting spatial "pulling" effect (that is, the bias of target localization toward that of the distractor) was used as a measure of performance. The pulling effect was stronger in middle-aged than in younger subjects, and in female than in male subjects. This indicates that middle-aged women performed worse than both younger and male subjects in the sensory and attentional mechanisms extracting spatial information about the acoustic event of interest from the auditory scene. Moreover, age-specific differences were most prominent for conditions with targets in right hemispace and distractors in left hemispace, suggesting bilateral asymmetries underlying the effect of ageing.

  9. A scheme for automatic text rectification in real scene images

    NASA Astrophysics Data System (ADS)

    Wang, Baokang; Liu, Changsong; Ding, Xiaoqing

    2015-03-01

    As digital cameras gradually replace traditional flat-bed scanners as the main means of obtaining text information, thanks to their usability, low cost and high resolution, a large amount of research has been done on camera-based text understanding. Unfortunately, an arbitrary position of the camera lens relative to the text area can frequently cause perspective distortion, which most current OCR systems cannot manage, thus creating demand for automatic text rectification. Current rectification-related research has mainly focused on document images; distortion of natural scene text is seldom considered. In this paper, a scheme for automatic text rectification in natural scene images is proposed. It relies on geometric information extracted from the characters themselves as well as their surroundings. In the first step, linear segments are extracted from the region of interest, and a J-Linkage based clustering is performed, followed by some customized refinement, to estimate the primary vanishing points (VPs). To achieve a more comprehensive VP estimation, a second stage is performed by inspecting the internal structure of characters, which involves analysis of the pixels and connected components of text lines. Finally, the VPs are verified and used to implement perspective rectification. Experiments demonstrate an increase in recognition rate and an improvement over some related algorithms.
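
    Once the vanishing points have pinned down the orientation of a text region, perspective rectification amounts to applying a homography. Below is a generic sketch of estimating a 3x3 homography from four point correspondences with the direct linear transform (DLT); the corner coordinates are made up for illustration, and the paper's own estimation pipeline is more elaborate.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography mapping src -> dst via the DLT method.

    src, dst: (4, 2) arrays of corresponding points, e.g. the corners of
    a distorted text quadrilateral and of its upright rectangle.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H is the null vector of A: the right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Rectify a sheared text quadrilateral to an axis-aligned 100x40 box.
src = np.array([[10, 10], [120, 25], [125, 70], [5, 55]], dtype=float)
dst = np.array([[0, 0], [100, 0], [100, 40], [0, 40]], dtype=float)
H = homography_from_points(src, dst)
```

    Warping the image with H (e.g. by inverse mapping each destination pixel) then produces the rectified text region.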

  10. Building 3D scenes from 2D image sequences

    NASA Astrophysics Data System (ADS)

    Cristea, Paul D.

    2006-05-01

    Sequences of 2D images, taken by a single moving video receptor, can be fused to generate a 3D representation. This dynamic stereopsis exists in birds and reptiles, whereas the static binocular stereopsis is common in mammals, including humans. Most multimedia computer vision systems for stereo image capture, transmission, processing, storage and retrieval are based on the concept of binocularity. As a consequence, their main goal is to acquire, conserve and enhance pairs of 2D images able to generate a 3D visual perception in a human observer. Stereo vision in birds is based on the fusion of images captured by each eye, with previously acquired and memorized images from the same eye. The process goes on simultaneously and conjointly for both eyes and generates an almost complete all-around visual field. As a consequence, the baseline distance is no longer fixed, as in the case of binocular 3D view, but adjustable in accordance with the distance to the object of main interest, allowing a controllable depth effect. Moreover, the synthesized 3D scene can have a better resolution than each individual 2D image in the sequence. Compression of 3D scenes can be achieved, and stereo transmissions with lower bandwidth requirements can be developed.

  11. Using Bayesian neural networks to classify forest scenes

    NASA Astrophysics Data System (ADS)

    Vehtari, Aki; Heikkonen, Jukka; Lampinen, Jouko; Juujarvi, Jouni

    1998-10-01

    We present results that compare the performance of Bayesian learning methods for neural networks on the task of classifying forest scenes into trees and background. The classification task is demanding due to the texture richness of the trees, occlusions of the forest scene objects and the diverse lighting conditions under operation. This makes it difficult to determine which image features are optimal for the classification. A natural way to proceed is to extract many different types of potentially suitable features and to evaluate their usefulness in later processing stages. One approach to coping with a large number of features is to use Bayesian methods to control the model complexity. Bayesian learning uses a prior on the model parameters, combines it with evidence from the training data, and then integrates over the resulting posterior to make predictions. With this method, we can use large networks and many features without fear of overfitting. For this classification task we compare two Bayesian learning methods for multi-layer perceptron (MLP) neural networks: (1) the evidence framework of MacKay, which uses a Gaussian approximation to the posterior weight distribution and maximizes with respect to the hyperparameters; and (2) a Markov chain Monte Carlo (MCMC) method due to Neal, in which the posterior distribution of the network parameters is numerically integrated using MCMC sampling. As baseline classifiers for comparison we use (3) an MLP early-stop committee, (4) K-nearest-neighbor and (5) Classification And Regression Tree.

  12. Text Detection in Natural Scene Images by Stroke Gabor Words.

    PubMed

    Yi, Chucai; Tian, Yingli

    2011-01-01

    In this paper, we propose a novel algorithm, based on stroke components and descriptive Gabor filters, to detect text regions in natural scene images. Text characters and strings are constructed from stroke components as basic units. Gabor filters are used to describe and analyze the stroke components in text characters or strings. We define a suitability measurement to analyze the confidence of Gabor filters in describing stroke components and the suitability of Gabor filters on an image window. From the training set, we compute a set of Gabor filters that can describe the principal stroke components of text by their parameters. Then a K-means algorithm is applied to cluster the descriptive Gabor filters. The clustering centers are defined as Stroke Gabor Words (SGWs) to provide a universal description of stroke components. By suitability evaluation on positive and negative training samples respectively, each SGW generates a pair of characteristic distributions of suitability measurements. On a testing natural scene image, heuristic layout analysis is applied first to extract candidate image windows. Then we compute the principal SGWs for each image window to describe its principal stroke components. Characteristic distributions generated by the principal SGWs are used to classify text or nontext windows. Experimental results on benchmark datasets demonstrate that our algorithm can handle complex backgrounds and variant text patterns (font, color, scale, etc.).
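
    The two ingredients of an SGW, a parameterized Gabor filter and K-means clustering over filter parameters, can be sketched generically as below. The kernel form, parameter grid, and cluster count are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def gabor_kernel(size, theta, wavelength, sigma):
    """Real Gabor kernel: a sinusoid at orientation theta under a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means; the cluster centres play the role of the 'words'."""
    rng = np.random.default_rng(seed)
    centres = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centres[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centres[j] = points[labels == j].mean(axis=0)
    return centres, labels

# A small bank of (theta, wavelength) filter parameters, clustered
# into 2 representative "words".
params = np.array([[t, w] for t in (0.0, np.pi / 4, np.pi / 2)
                   for w in (4.0, 8.0)])
words, labels = kmeans(params, k=2)
```

    In the actual algorithm the clustering operates on filters selected for describing stroke components well, and the resulting centres serve as the universal stroke vocabulary.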

  13. Intelligence-led crime scene processing. Part I: Forensic intelligence.

    PubMed

    Ribaux, Olivier; Baylon, Amélie; Roux, Claude; Delémont, Olivier; Lock, Eric; Zingg, Christian; Margot, Pierre

    2010-02-25

    Forensic science is generally defined as the application of science to address questions related to the law. Too often, this view restricts the contribution of science to one single process which eventually aims at bringing individuals to court while minimising risk of miscarriage of justice. In order to go beyond this paradigm, we propose to refocus the attention towards traces themselves, as remnants of a criminal activity, and their information content. We postulate that traces contribute effectively to a wide variety of other informational processes that support decision making in many situations. In particular, they inform actors of new policing strategies who place the treatment of information and intelligence at the centre of their systems. This contribution of forensic science to these security oriented models is still not well identified and captured. In order to create the best condition for the development of forensic intelligence, we suggest a framework that connects forensic science to intelligence-led policing (part I). Crime scene attendance and processing can be envisaged within this view. This approach gives indications about how to structure knowledge used by crime scene examiners in their effective practice (part II).

  14. No emotional "pop-out" effect in natural scene viewing.

    PubMed

    Acunzo, David J; Henderson, John M

    2011-10-01

    It has been shown that attention is drawn toward emotional stimuli. In particular, eye movement research suggests that gaze is attracted toward emotional stimuli in an unconscious, automated manner. We addressed whether this effect remains when emotional targets are embedded within complex real-world scenes. Eye movements were recorded while participants memorized natural images. Each image contained an item that was either neutral, such as a bag, or emotional, such as a snake or a couple hugging. We found no latency difference for the first target fixation between the emotional and neutral conditions, suggesting no extrafoveal "pop-out" effect of emotional targets. However, once detected, emotional targets held attention for a longer time than neutral targets. The failure of emotional items to attract attention seems to contradict previous eye-movement research using emotional stimuli. However, our results are consistent with studies examining semantic drive of overt attention in natural scenes. Interpretations of the results in terms of perceptual and attentional load are provided.

  15. Evaluation methodology for query-based scene understanding systems

    NASA Astrophysics Data System (ADS)

    Huster, Todd P.; Ross, Timothy D.; Culbertson, Jared L.

    2015-05-01

    In this paper, we are proposing a method for the principled evaluation of scene understanding systems in a query-based framework. We can think of a query-based scene understanding system as a generalization of typical sensor exploitation systems where instead of performing a narrowly defined task (e.g., detect, track, classify, etc.), the system can perform general user-defined tasks specified in a query language. Examples of this type of system have been developed as part of DARPA's Mathematics of Sensing, Exploitation, and Execution (MSEE) program. There is a body of literature on the evaluation of typical sensor exploitation systems, but the open-ended nature of the query interface introduces new aspects to the evaluation problem that have not been widely considered before. In this paper, we state the evaluation problem and propose an approach to efficiently learn about the quality of the system under test. We consider the objective of the evaluation to be to build a performance model of the system under test, and we rely on the principles of Bayesian experiment design to help construct and select optimal queries for learning about the parameters of that model.

  16. The Influence of Familiarity on Affective Responses to Natural Scenes

    NASA Astrophysics Data System (ADS)

    Sanabria Z., Jorge C.; Cho, Youngil; Yamanaka, Toshimasa

    This kansei study explored how familiarity with image-word combinations influences affective states. Stimuli were obtained from Japanese print advertisements (ads), and consisted of images (e.g., natural-scene backgrounds) and their corresponding headlines (advertising copy). Initially, a group of subjects evaluated their level of familiarity with images and headlines independently, and stimuli were filtered based on the results. In the main experiment, a different group of subjects rated their pleasure and arousal to, and familiarity with, image-headline combinations. The Self-Assessment Manikin (SAM) scale was used to evaluate pleasure and arousal, and a bipolar scale was used to evaluate familiarity. The results showed a high correlation between familiarity and pleasure, but low correlation between familiarity and arousal. The characteristics of the stimuli, and their effect on the variables of pleasure, arousal and familiarity, were explored through ANOVA. It is suggested that, in the case of natural-scene ads, familiarity with image-headline combinations may increase the pleasure response to the ads, and that certain components in the images (e.g., water) may increase arousal levels.

  17. Scene-aware joint global and local homographic video coding

    NASA Astrophysics Data System (ADS)

    Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.

    2016-09-01

    Perspective motion is common in video content captured and compressed for various applications, including cloud gaming and vehicle and aerial monitoring. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, due either to low prediction accuracy or to excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (at equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed bit rate savings ranging from 3.7% to 9.1%.
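
    The camera-plus-plane motion model described above rests on the textbook plane-induced homography H = K (R + t n^T / d) K^(-1), where K holds the camera intrinsics, R and t the relative rotation and translation, and n, d the plane normal and offset. The sketch below (all numeric values invented, not the paper's coding scheme) verifies that a point on the plane projects consistently through H:

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """Homography induced by the plane n.X = d between two cameras.

    For a 3-D point X on the plane (camera-1 frame), the camera-2 frame
    point is X2 = R X + t, and image points satisfy x2 ~ H x1 with
    H = K (R + t n^T / d) K^{-1}.
    """
    return K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

# Toy camera and plane parameters (illustrative only).
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
theta = 0.05  # small rotation about the y-axis
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([0.1, 0.0, 0.02])
n = np.array([0.0, 0.0, 1.0])  # plane normal in the camera-1 frame
d = 5.0                        # plane offset: points satisfy n.X = d

H = plane_homography(K, R, t, n, d)

# A point on the plane projects consistently through H.
X = np.array([1.0, -0.5, 5.0])        # n.X = 5 = d, so X lies on the plane
x1 = K @ X
x2 = K @ (R @ X + t)
x2_h = H @ x1
print(np.allclose(x2_h / x2_h[2], x2 / x2[2]))  # → True
```

    Coding K, R, t once per frame and only (n, d)-style plane parameters per block is what keeps the bit cost low relative to sending a full eight-parameter homography for every block.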

  18. Selective visual attention in object recognition and scene analysis

    NASA Astrophysics Data System (ADS)

    Gonzaga, Adilson; de Almeida Neves, Evelina M.; Frere, Annie F.

    1998-10-01

    An important feature of the human visual system is its capacity for selective visual attention. The stimulus that reaches the primate retina is processed in two different cortical pathways: one specialized for object vision (`What') and the other for spatial vision (`Where'). In this way, the visual system is able to recognize objects independently of where they appear in the visual field. There are two major theories of human visual attention. According to the object-based theory, there is a limit on the number of isolated objects that can be perceived simultaneously; according to the space-based theory, there is a limit on the spatial areas from which information can be taken up. This paper adopts the object-based theory, in which analysis of the visual world occurs in two stages. In the pre-attentive stage, the scene is segmented into isolated objects by region-growing techniques. Invariant features (moments) are extracted and used as input to an artificial neural network that gives the probable object location (`Where'). In the focal stage, particular objects are analyzed in detail by another neural network that performs object recognition (`What'). The number of objects analyzed is governed by a top-down process that seeks a consistent scene interpretation. Visual attention thus makes possible the development of more efficient and flexible interfaces between low-level sensory information and high-level processes.
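
    Moment invariants of the kind mentioned above are commonly computed in the style of Hu's invariants. The sketch below is not necessarily the authors' exact feature set; it derives the first two invariants from scale-normalized central moments and checks that they are unchanged when the object is translated:

```python
import numpy as np

def hu_moments(img):
    """First two Hu invariant moments of a 2-D intensity array."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def mu(p, q):                      # central moments (translation-invariant)
        return ((x - xc) ** p * (y - yc) ** q * img).sum()

    def eta(p, q):                     # scale-normalized central moments
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

# A small blob and its translated copy yield identical invariants.
a = np.zeros((20, 20)); a[4:9, 3:10] = 1.0
b = np.zeros((20, 20)); b[10:15, 8:15] = 1.0
print(np.allclose(hu_moments(a), hu_moments(b)))  # → True
```

    Features like these let the location network report `Where' an object is while the recognition network judges `What' it is, regardless of the object's position in the field.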

  19. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications, and remote sensing applications. The experiment uses multi-source data fusion technology for 3D scene reconstruction based on the principles of 3D laser scanning, taking the laser point cloud data as the basis and a Digital Ortho-photo Map as an auxiliary source, with 3ds Max software as the basic tool for building the three-dimensional scene. The article covers data acquisition, data preprocessing, and 3D scene construction. The results show that the reconstructed 3D scene is realistic and that its accuracy meets the needs of 3D scene construction.

  20. Building the gist of a scene: the role of global image features in recognition.

    PubMed

    Oliva, Aude; Torralba, Antonio

    2006-01-01

    Humans can recognize the gist of a novel image in a single glance, independent of its complexity. How is this remarkable feat accomplished? On the basis of behavioral and computational evidence, this paper describes a formal approach to the representation and the mechanism of scene gist understanding, based on scene-centered, rather than object-centered, primitives. We show that the structure of a scene image can be estimated by means of global image features, providing a statistical summary of the spatial layout properties (the Spatial Envelope representation) of the scene. Global features are based on configurations of spatial scales and are estimated without invoking segmentation or grouping operations. The scene-centered approach is not an alternative to local image analysis but would serve as a feed-forward and parallel pathway of visual processing, able to quickly constrain local feature analysis and enhance object recognition in cluttered natural scenes.

  1. Short report: the effect of expertise in hiking on recognition memory for mountain scenes.

    PubMed

    Kawamura, Satoru; Suzuki, Sae; Morikawa, Kazunori

    2007-10-01

    The nature of an expert memory advantage that does not depend on stimulus structure or chunking was examined, using more ecologically valid stimuli in the context of a more natural activity than previously studied domains. Do expert hikers and novice hikers see and remember mountain scenes differently? In the present experiment, 18 novice hikers and 17 expert hikers were presented with 60 photographs of scenes from hiking trails. These scenes differed in the degree of functional aspects that implied some action possibilities or dangers. The recognition test revealed that the memory performance of experts was significantly superior to that of novices for scenes with highly functional aspects. The memory performance for the scenes with few functional aspects did not differ between novices and experts. These results suggest that experts pay more attention to, and thus remember better, scenes with functional meanings than do novices.

  2. Some Notes on a Functional Equation. Classroom Notes

    ERIC Educational Resources Information Center

    Ren, Zhong-Pu; Wu, Zhi-Qin; Zhou, Qi-Fa; Guo, Bai-Ni; Qi, Feng

    2004-01-01

    In this short note, a mathematical proposition on the functional equation f(xy) = xf(y) + yf(x) for x, y ≠ 0, which is encountered in calculus, is generalized step by step. These steps involve continuity, differentiability, a functional equation, a first-order linear ordinary differential equation, and relationships between…

  3. Caustic-Side Solvent Extraction: Prediction of Cesium Extraction for Actual Wastes and Actual Waste Simulants

    SciTech Connect

    Delmau, L.H.; Haverlock, T.J.; Sloop, F.V., Jr.; Moyer, B.A.

    2003-02-01

    This report presents the work that followed the CSSX model development completed in FY2002. The cesium and potassium extraction model developed there was based on extraction data obtained from simple aqueous media. It was tested to ensure the validity of its predictions for cesium extraction from actual waste. Compositions of the actual tank waste were obtained from Savannah River Site personnel and were used to prepare defined simulants and to predict cesium distribution ratios using the model. It was therefore possible to compare the cesium distribution ratios obtained from the actual waste, the simulant, and the predicted values. It was determined that the predicted values agree with the measured values for the simulants. Predicted values also agreed, with three exceptions, with measured values for the tank wastes. Discrepancies were attributed in part to the uncertainty in the cation/anion balance in the actual waste composition, but likely more so to the uncertainty in the potassium concentration in the waste, given the demonstrated large competing effect of this metal on cesium extraction. It was demonstrated that the upper limit for the potassium concentration in the feed ought not to exceed 0.05 M in order to maintain suitable cesium distribution ratios.

  4. Notes on the beating fantasy.

    PubMed

    Sirois, François J

    2010-06-01

    This theoretical paper revisits the beating fantasy, which constitutes a crossroads of the psychic economy in that it condenses three primal phantasies, namely the primal scene, castration and seduction. Two forms of the phantasy have been distinguished: a 'fixed' form, apparently associated with the masochistic perversion, and a 'transitory' form, probably bound up with libidinal development. In Freud's (1919) paper these two aspects are intertwined. The present contribution confines itself to the transitory form of the phantasy and its significance in the libidinal development of the girl, notably in the organization of passivity. With this in mind, particular attention is paid to the phantasy's third phase in this context, and an attempt is made to show how this phase epitomizes the transformation of the instinctual pressure and might therefore be looked upon in this connection as the intermediate phase of the phantasy.

  5. Viewing nature scenes positively affects recovery of autonomic function following acute-mental stress.

    PubMed

    Brown, Daniel K; Barton, Jo L; Gladwell, Valerie F

    2013-06-04

    A randomized crossover study explored whether viewing different scenes prior to a stressor altered autonomic function during the recovery from the stressor. The two scenes were (a) nature (composed of trees, grass, fields) or (b) built (composed of man-made, urban scenes lacking natural characteristics) environments. Autonomic function was assessed using noninvasive techniques of heart rate variability; in particular, time domain analyses evaluated parasympathetic activity, using the root-mean-square of successive differences (RMSSD). During stress, secondary cardiovascular markers (heart rate, systolic and diastolic blood pressure) showed significant increases from baseline which did not differ between the two viewing conditions. Parasympathetic activity, however, was significantly higher in recovery following the stressor in the nature-viewing condition than in the built-environment condition (RMSSD: 50.0 ± 31.3 vs. 34.8 ± 14.8 ms). Thus, viewing nature scenes prior to a stressor alters autonomic activity in the recovery period. The secondary aim was to examine autonomic function during viewing of the two scenes. The standard deviation of R-R intervals (SDRR), expressed as change from baseline, was greater during the first 5 min of viewing nature scenes than during built scenes. Overall, this suggests that nature can elicit improvements in the recovery process following a stressor.
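
    The time-domain HRV measures used in the study are straightforward to compute from a series of R-R intervals. A minimal sketch (the interval values below are illustrative, not data from the study):

```python
import math

def rmssd(rr_intervals):
    """Root-mean-square of successive differences of R-R intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_intervals, rr_intervals[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def sdrr(rr_intervals):
    """Sample standard deviation of R-R intervals (ms)."""
    n = len(rr_intervals)
    mean = sum(rr_intervals) / n
    return math.sqrt(sum((x - mean) ** 2 for x in rr_intervals) / (n - 1))

# Illustrative R-R series in milliseconds (made up, not from the study).
rr = [812, 830, 795, 840, 825, 810, 850]
print(round(rmssd(rr), 1))  # → 30.6
print(round(sdrr(rr), 1))   # → 18.9
```

    Higher RMSSD reflects greater beat-to-beat variability and is the standard short-term index of parasympathetic (vagal) activity, which is why it serves as the recovery marker here.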

  6. Scene perception in posterior cortical atrophy: categorization, description and fixation patterns.

    PubMed

    Shakespeare, Timothy J; Yong, Keir X X; Frost, Chris; Kim, Lois G; Warrington, Elizabeth K; Crutch, Sebastian J

    2013-01-01

    Partial or complete Balint's syndrome is a core feature of the clinico-radiological syndrome of posterior cortical atrophy (PCA), in which individuals experience a progressive deterioration of cortical vision. Although multi-object arrays are frequently used to detect simultanagnosia in the clinical assessment and diagnosis of PCA, to date there have been no group studies of scene perception in patients with the syndrome. The current study involved three linked experiments conducted in PCA patients and healthy controls. Experiment 1 evaluated the accuracy and latency of complex scene perception relative to individual faces and objects (color and grayscale) using a categorization paradigm. PCA patients were both less accurate (faces < scenes < objects) and slower (scenes < objects < faces) than controls on all categories, with performance strongly associated with their level of basic visual processing impairment; patients also showed a small advantage for color over grayscale stimuli. Experiment 2 involved free description of real world scenes. PCA patients generated fewer features and more misperceptions than controls, though perceptual errors were always consistent with the patient's global understanding of the scene (whether correct or not). Experiment 3 used eye tracking measures to compare patient and control eye movements over initial and subsequent fixations of scenes. Patients' fixation patterns were significantly different to those of young and age-matched controls, with comparable group differences for both initial and subsequent fixations. Overall, these findings describe the variability in everyday scene perception exhibited by individuals with PCA, and indicate the importance of exposure duration in the perception of complex scenes.

  7. Remembering faces and scenes: The mixed-category advantage in visual working memory.

    PubMed

    Jiang, Yuhong V; Remington, Roger W; Asaad, Anthony; Lee, Hyejin J; Mikkalson, Taylor C

    2016-09-01

    We examined the mixed-category memory advantage for faces and scenes to determine how domain-specific cortical resources constrain visual working memory. Consistent with previous findings, visual working memory for a display of 2 faces and 2 scenes was better than that for a display of 4 faces or 4 scenes. This pattern was unaffected by manipulations of encoding duration. However, the mixed-category advantage was carried solely by faces: Memory for scenes was not better when scenes were encoded with faces rather than with other scenes. The asymmetry between faces and scenes was found when items were presented simultaneously or sequentially, centrally, or peripherally, and when scenes were drawn from a narrow category. A further experiment showed a mixed-category advantage in memory for faces and bodies, but not in memory for scenes and objects. The results suggest that unique category-specific interactions contribute significantly to the mixed-category advantage in visual working memory.

  8. Parallel programming of saccades during natural scene viewing: evidence from eye movement positions.

    PubMed

    Wu, Esther X W; Gilani, Syed Omer; van Boxtel, Jeroen J A; Amihai, Ido; Chua, Fook Kee; Yen, Shih-Cheng

    2013-10-24

    Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms) moved to the center only one saccade later. Our data on eye movement positions provide novel evidence of parallel programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis.

  9. Notes.

    ERIC Educational Resources Information Center

    Physics Teacher, 1979

    1979-01-01

    Some topics included are: the relative merits of a programmable calculator and a microcomputer; the advantages of acquiring a sound-level meter for the laboratory; how to locate a virtual image in a plane mirror; center of gravity of a student; and how to demonstrate interference of light using two cords.

  10. Possibilities of lasers within NOTES.

    PubMed

    Stepp, Herbert; Sroka, Ronald

    2010-10-01

    Lasers possess unique properties that render them versatile light sources particularly for NOTES. Depending on the laser light sources used, diagnostic as well as therapeutic purposes can be achieved. The diagnostic potential offered by innovative concepts such as new types of ultra-thin endoscopes and optical probes supports the physician with optical information of ultra-high resolution, tissue discrimination and manifold types of fluorescence detection. In addition, the potential 3-D capability promises enhanced recognition of tissue type and pathological status. These diagnostic techniques might enable or at least contribute to accurate and safe procedures within the spatial restrictions inherent with NOTES. The therapeutic potential ranges from induction of phototoxic effects over tissue welding, coagulation and tissue cutting to stone fragmentation. As proven in many therapeutic laser endoscopic treatment concepts, laser surgery is potentially bloodless and transmits the energy without mechanical forces. Specialized NOTES endoscopes will likely incorporate suitable probes for improving diagnostic procedures, laser fibres with advantageous light delivery possibility or innovative laser beam manipulation systems. NOTES training centres may support the propagation of the complex handling and the safety aspects for clinical use to the benefit of the patient.

  11. EndNote at Lehigh.

    ERIC Educational Resources Information Center

    Siegler, Sharon; Simboli, Brian

    2002-01-01

    Describes the experiences of librarians at Lehigh University in implementing campus-wide use of EndNote, a citation management software package that allows users to create a searchable library of downloaded or manually entered references for any type of publication to be able to insert citations and format footnotes or endnotes within a…

  12. A brief note regarding randomization.

    PubMed

    Senn, Stephen

    2013-01-01

    This note argues, contrary to claims in this journal, that the possible existence of indefinitely many causal factors does not invalidate randomization. The effect of such factors has to be bounded by outcome, and since inference is based on a ratio of between-treatment-group to within-treatment-group variation, randomization remains valid.

  13. Applied Fluid Mechanics. Lecture Notes.

    ERIC Educational Resources Information Center

    Gregg, Newton D.

    This set of lecture notes is used as a supplemental text for the teaching of fluid dynamics, as one component of a thermodynamics course for engineering technologists. The major text for the course covered basic fluids concepts such as pressure, mass flow, and specific weight. The objective of this document was to present additional fluids…

  14. Lunar nomenclature: A dissenting note

    USGS Publications Warehouse

    Arthur, D.W.G.

    1976-01-01

    This note reviews the nature of the traditional (Mädler) lunar nomenclature and the recent developments based on the use of more than 2000 named provinces. It appears that the new nomenclature is less efficient than the old in many cases and may lead to an impossible publication situation. The unnecessary break with the past is especially criticized. © 1976.

  15. A Note on Hamiltonian Graphs

    ERIC Educational Resources Information Center

    Skurnick, Ronald; Davi, Charles; Skurnick, Mia

    2005-01-01

    Since 1952, several well-known graph theorists have proven numerous results regarding Hamiltonian graphs. In fact, many elementary graph theory textbooks contain the theorems of Ore, Bondy and Chvatal, Chvatal and Erdos, Posa, and Dirac, to name a few. In this note, the authors state and prove some propositions of their own concerning Hamiltonian…

  16. Notes for Serials Cataloging. Second Edition.

    ERIC Educational Resources Information Center

    Geer, Beverley, Ed.; Caraway, Beatrice L., Ed.

    Notes are indispensable to serials cataloging. Researchers, reference librarians, and catalogers regularly use notes on catalog records and, as the audience for these notes has expanded from the local library community to the global Internet community, the need for notes to be cogent, clear, and useful is greater than ever. This book is a…

  17. Consequences of Predicted or Actual Asteroid Impacts

    NASA Astrophysics Data System (ADS)

    Chapman, C. R.

    2003-12-01

    Earth impact by an asteroid could have enormous physical and environmental consequences. Impactors larger than 2 km diameter could be so destructive as to threaten civilization. Since such events greatly exceed any other natural or man-made catastrophe, much extrapolation is necessary just to understand environmental implications (e.g. sudden global cooling, tsunami magnitude, toxic effects). Responses of vital elements of the ecosystem (e.g. agriculture) and of human society to such an impact are conjectural. For instance, response to the Blackout of 2003 was restrained, but response to 9/11 terrorism was arguably exaggerated and dysfunctional; would society be fragile or robust in the face of global catastrophe? Even small impacts, or predictions of impacts (accurate or faulty), could generate disproportionate responses, especially if news media reports are hyped or inaccurate or if responsible entities (e.g. military organizations in regions of conflict) are inadequately aware of the phenomenology of small impacts. Asteroid impact is the one geophysical hazard of high potential consequence with which we, fortunately, have essentially no historical experience. It is thus important that decision makers familiarize themselves with the hazard and that society (perhaps using a formal procedure, like a National Academy of Sciences study) evaluate the priority of addressing the hazard by (a) further telescopic searches for dangerous but still-undiscovered asteroids and (b) development of mitigation strategies (including deflection of an oncoming asteroid and on-Earth civil defense). I exemplify these issues by discussing several representative cases that span the range of parameters. Many of the specific physical consequences of impact involve effects like those of other geophysical disasters (flood, fire, earthquake, etc.), but the psychological and sociological aspects of predicted and actual impacts are distinctive. Standard economic cost/benefit analyses may not…

  18. Infrared imaging of the crime scene: possibilities and pitfalls.

    PubMed

    Edelman, Gerda J; Hoveling, Richelle J M; Roos, Martin; van Leeuwen, Ton G; Aalders, Maurice C G

    2013-09-01

    All objects radiate infrared energy invisible to the human eye, which can be imaged by infrared cameras, visualizing differences in temperature and/or emissivity of objects. Infrared imaging is an emerging technique for forensic investigators. The rapid, nondestructive, and noncontact features of infrared imaging indicate its suitability for many forensic applications, ranging from the estimation of time of death to the detection of blood stains on dark backgrounds. This paper provides an overview of the principles and instrumentation involved in infrared imaging. Difficulties concerning the image interpretation due to different radiation sources and different emissivity values within a scene are addressed. Finally, reported forensic applications are reviewed and supported by practical illustrations. When introduced in forensic casework, infrared imaging can help investigators to detect, to visualize, and to identify useful evidence nondestructively.

  19. Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis

    NASA Astrophysics Data System (ADS)

    Büchler, Michael; Allegro, Silvia; Launer, Stefan; Dillier, Norbert

    2005-12-01

    A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.

  20. Extracting scene feature vectors through modeling, volume 3

    NASA Technical Reports Server (NTRS)

    Berry, J. K.; Smith, J. A.

    1976-01-01

    The remote estimation of the leaf area index of winter wheat at Finney County, Kansas was studied. The procedure developed consists of three activities: (1) field measurements; (2) model simulations; and (3) response classifications. The first activity is designed to identify model input parameters and develop a model evaluation data set. A stochastic plant canopy reflectance model is employed to simulate reflectance in the LANDSAT bands as a function of leaf area index for two phenological stages. An atmospheric model is used to translate these surface reflectances into simulated satellite radiance. A divergence classifier determines the relative similarity between model derived spectral responses and those of areas with unknown leaf area index. The unknown areas are assigned the index associated with the closest model response. This research demonstrated that the SRVC canopy reflectance model is appropriate for wheat scenes and that broad categories of leaf area index can be inferred from the procedure developed.

  1. Video Sensor-Based Complex Scene Analysis with Granger Causality

    PubMed Central

    Fan, Yawen; Yang, Hua; Zheng, Shibao; Su, Hang; Wu, Shuang

    2013-01-01

    In this report, we propose a novel framework to explore the activity interactions and temporal dependencies between activities in complex video surveillance scenes. Under our framework, a low-level codebook is generated by an adaptive quantization with respect to the activeness criterion. The Hierarchical Dirichlet Processes (HDP) model is then applied to automatically cluster low-level features into atomic activities. Afterwards, the dynamic behaviors of the activities are represented as a multivariate point-process. The pair-wise relationships between activities are explicitly captured by the non-parametric Granger causality analysis, from which the activity interactions and temporal dependencies are discovered. Then, each video clip is labeled by one of the activity interactions. The results of the real-world traffic datasets show that the proposed method can achieve a high quality classification performance. Compared with traditional K-means clustering, a maximum improvement of 19.19% is achieved by using the proposed causal grouping method. PMID:24152928
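
    Pairwise Granger causality of the kind applied here reduces to asking whether the past of one series improves prediction of another beyond the series' own past. The paper's analysis is non-parametric; the sketch below is instead the standard linear F-test on synthetic data (not the traffic datasets), shown for illustration:

```python
import numpy as np

def granger_f(x, y, lag=2):
    """F-statistic for 'x Granger-causes y' at the given lag order.

    Restricted model: y_t ~ lags of y.  Full model: y_t ~ lags of y and x.
    """
    n = len(y)
    rows = range(lag, n)
    Y = np.array([y[t] for t in rows])
    Zr = np.array([[1.0] + [y[t - k] for k in range(1, lag + 1)] for t in rows])
    Zf = np.array([[1.0] + [y[t - k] for k in range(1, lag + 1)]
                         + [x[t - k] for k in range(1, lag + 1)] for t in rows])

    def rss(Z):  # residual sum of squares of the least-squares fit
        beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
        return float(np.sum((Y - Z @ beta) ** 2))

    rss_r, rss_f = rss(Zr), rss(Zf)
    df = len(Y) - Zf.shape[1]
    return (rss_r - rss_f) / lag / (rss_f / df)

# Synthetic pair: y is driven by lagged x, but not vice versa.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
print(granger_f(x, y) > granger_f(y, x))  # → True
```

    Applied to the atomic-activity point processes, this directional asymmetry is what lets the method recover which activities drive which.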

  2. Inverting a dispersive scene's side-scanned image

    NASA Technical Reports Server (NTRS)

    Harger, R. O.

    1983-01-01

    Consideration is given to the problem of using a remotely sensed, side-scanned image of a time-variant scene, which changes according to a dispersion relation, to estimate the structure at a given moment. Additive thermal noise is neglected in the models considered in the formal treatment. It is shown that the dispersion relation is normalized by the scanning velocity, as is the group scanning velocity component. An inversion operation is defined for noise-free images generated by SAR. The method is extended to the inversion of noisy imagery, and a formulation is defined for spectral density estimation. Finally, the methods for a radar system are used for the case of sonar.

  3. When anticipation beats accuracy: Threat alters memory for dynamic scenes.

    PubMed

    Greenstein, Michael; Franklin, Nancy; Martins, Mariana; Sewack, Christine; Meier, Markus A

    2016-05-01

    Threat frequently leads to the prioritization of survival-relevant processes. Much of the work examining threat-related processing advantages has focused on the detection of static threats or long-term memory for details. In the present study, we examined immediate memory for dynamic threatening situations. We presented participants with visually neutral, dynamic stimuli using a representational momentum (RM) paradigm, and manipulated threat conceptually. Although the participants in both the threatening and nonthreatening conditions produced classic RM effects, RM was stronger for scenarios involving threat (Exps. 1 and 2). Experiments 2 and 3 showed that this effect does not generalize to the nonthreatening objects within a threatening scene, and that it does not extend to arousing happy situations. Although the increased RM effect for threatening objects by definition reflects reduced accuracy, we argue that this reduced accuracy may be offset by a superior ability to predict, and thereby evade, a moving threat.

  4. Complete Scene Recovery and Terrain Classification in Textured Terrain Meshes

    PubMed Central

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Terrain classification allows a mobile robot to create an annotated map of its local environment from the three-dimensional (3D) and two-dimensional (2D) datasets collected by its array of sensors, including a GPS receiver, gyroscope, video camera, and range sensor. However, parts of objects that are outside the measurement range of the range sensor will not be detected. To overcome this problem, this paper describes an edge estimation method for complete scene recovery and complete terrain reconstruction. Here, the Gibbs-Markov random field is used to segment the ground from 2D videos and 3D point clouds. Further, a masking method is proposed to classify buildings and trees in a terrain mesh. PMID:23112653

  5. Saliency-based abnormal event detection in crowded scenes

    NASA Astrophysics Data System (ADS)

    Shi, Yanjiao; Liu, Yunxiang; Zhang, Qing; Yi, Yugen; Li, Wenju

    2016-11-01

    Abnormal event detection plays a critical role for intelligent video surveillance, and detection in crowded scenes is a challenging but more practical task. We present an abnormal event detection method for crowded video. Region-wise modeling is proposed to address the inconsistent detected motion of the same object due to different depths of field. Comparing to traditional block-wise modeling, the region-wise method not only can reduce heavily the number of models to be built but also can enrich the samples for training the normal events model. In order to reduce the computational burden and make the region-based anomaly detection feasible, a saliency detection technique is adopted in this paper. By identifying the salient parts of the image sequences, the irrelevant blocks are ignored, which removes the disturbance and improves the detection performance further. Experiments on the benchmark dataset and comparisons with the state-of-the-art algorithms validate the advantages of the proposed method.

  6. Human supervisory approach to modeling industrial scenes using geometric primitives

    SciTech Connect

    Luck, J.P.; Little, C.Q.; Roberts, R.S.

    1997-11-19

    A three-dimensional world model is crucial for many robotic tasks. Modeling techniques tend to be either fully manual or autonomous. Manual methods are extremely time consuming but also highly accurate and flexible. Autonomous techniques are fast but inflexible and, with real-world data, often inaccurate. The method presented in this paper combines the two, yielding a highly efficient, flexible, and accurate mapping tool. The segmentation and modeling algorithms that compose the method are specifically designed for industrial environments, and are described in detail. A mapping system based on these algorithms has been designed. It enables a human supervisor to quickly construct a fully defined world model from unfiltered and unsegmented real-world range imagery. Examples of how industrial scenes are modeled with the mapping system are provided.

  7. Scene Depth Perception Based on Omnidirectional Structured Light.

    PubMed

    Jia, Tong; Wang, BingNan; Zhou, ZhongXuan; Meng, Haixiu

    2016-07-11

    A depth perception method combining omnidirectional images and encoded structured light was proposed. First, a new structured light pattern using monochromatic light was presented; its primitive is a "Four-Direction Sand Clock-like" (FDSC) image, which provides more robust and accurate positioning than conventional pattern primitives. Second, on the basis of multiple reference planes, a projector calibration method was proposed that significantly simplifies calibration in the constructed omnidirectional imaging system. Third, a depth point-cloud matching algorithm based on prior-constrained iterative closest point under mobile conditions was proposed to avoid the effect of occlusion. Experimental results demonstrated that the proposed method can acquire omnidirectional depth information about large-scale scenes. Error analysis of 16 groups of depth data reported a maximum measuring error of 0.53 mm and an average measuring error of 0.25 mm.

  8. Real-time and reliable human detection in clutter scene

    NASA Astrophysics Data System (ADS)

    Tan, Yumei; Luo, Xiaoshu; Xia, Haiying

    2013-10-01

    The traditional HOG approach to human detection cannot run in real time because its detection stage is too time-consuming. To address this, we propose an efficient segment-then-identify algorithm for real-time human detection in cluttered scenes. First, the ViBe algorithm quickly segments all candidate human regions, and shadows are eliminated in the YUV color space to obtain more accurate moving objects. Second, body-geometry knowledge is used to screen the regions of interest and retain valid human areas. Finally, HOG features and a linear support vector machine (SVM) are used to train a human-body classifier that accurately localizes human bodies. Comparative experiments demonstrate that the proposed approach achieves high accuracy, good real-time performance, and strong robustness.
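    The YUV shadow-elimination step can be sketched as below. This is a hedged illustration, not the paper's implementation: the thresholds are placeholder values, and the function assumes float YUV arrays with a per-pixel background model already estimated (e.g., by ViBe).

```python
import numpy as np

def shadow_mask(bg_yuv, frame_yuv, y_lo=0.4, y_hi=0.95, uv_tol=10.0):
    """Label a pixel as shadow when its luminance (Y) drops relative to
    the background while its chrominance (U, V) stays nearly unchanged.
    Arrays have shape (H, W, 3) in Y, U, V order."""
    y_ratio = frame_yuv[..., 0] / np.maximum(bg_yuv[..., 0], 1e-6)
    uv_diff = np.abs(frame_yuv[..., 1:] - bg_yuv[..., 1:]).max(axis=-1)
    return (y_ratio > y_lo) & (y_ratio < y_hi) & (uv_diff < uv_tol)
```

    Pixels flagged by this mask would be removed from the foreground before the geometry screening and HOG/SVM stages.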

  9. Projection collimator optics for DMD-based infrared scene simulator

    NASA Astrophysics Data System (ADS)

    Zheng, Yawei; Hu, Yu; Li, Junnan; Huang, Meili; Gao, Jiaobo; Wang, Jun; Sun, Kefeng; Li, Jianjun; Zhang, Fang

    2016-10-01

    The design of a collimator for dynamic infrared (IR) scene simulation based on digital micro-mirror devices (DMD) is presented in this paper. The collimator adopts a reimaging configuration to limit physical size and cost. An aspheric lens is used in the relay optics to improve image quality and simplify the optical configuration. A total internal reflection (TIR) prism is located between the last surface of the optics and the DMD to fold the ray paths of the IR light source. The optics collimates the output of a 1024×768-element DMD in the 8-10.3 μm waveband and enables an imaging system to be tested over an 8° field of view (FOV). The long pupil distance of 800 mm accommodates remotely located seekers under test.

  10. Contrast enhancement in natural scenes using multiband polarization methods

    NASA Astrophysics Data System (ADS)

    Duggin, Michael J.; Kinn, Gerald J.; Bohling, Edward H.

    1997-10-01

    Relatively little work has been performed to investigate the potential of polarization techniques to provide contrast enhancement in natural scenes, largely because film is less accurate radiometrically than digital CCD FPA sensing devices. Such enhancement is additional to that provided by between-band differences in multiband data. Recently, Kodak developed several digital imaging cameras intended for professional photographers. The variant we used obtained images in the green, red, and near infrared, simulating CIR film. However, the application of linear drivers to read the data from the camera into the computer has resulted in a device which can be used as a multiband imaging polarimeter. Here we examine digital image acquisition as a quantitative method to obtain new information additional to that obtained by multiband or even hyperspectral imaging methods, and present an example from an active, ongoing research program.

  11. Vegetative target enhancement in natural scenes using multiband polarization methods

    NASA Astrophysics Data System (ADS)

    Duggin, Michael J.; Kinn, Gerald J.; Bohling, Edward H.

    1997-10-01

    Relatively little work has been performed to investigate the potential of polarization techniques to provide contrast enhancement in natural scenes, largely because film is less accurate radiometrically than digital CCD FPA sensing devices. Such enhancement is additional to that provided by between-band differences in multiband data. Recently, Kodak developed several digital imaging cameras intended for professional photographers. The variant we used produced images in the green, red, and near IR, simulating CIR film. However, the application of linear drivers to read the data from the camera into the computer has resulted in a device which can be used as a multiband imaging polarimeter. Here we examine digital image acquisition as a quantitative method to obtain new information additional to that obtained by multiband or even hyperspectral imaging methods, and present an example from an active, ongoing research program.

  12. Vegetative target enhancement in natural scenes using multiband polarization methods

    NASA Astrophysics Data System (ADS)

    Duggin, Michael J.; Kinn, Gerald J.

    2002-01-01

    Relatively little work has been performed to investigate the potential of polarization techniques to provide contrast enhancement in natural scenes. Historically, this has been because film is less accurate radiometrically than digital CCD FPA sensing devices. Such enhancement is additional to that provided by between-band differences in multiband data. In the mid-1990s, Kodak developed several digital imaging cameras intended for professional photographers. The variant we used produced images in the green, red, and near IR, simulating CIR film. However, the application of linear drivers to read the data from the camera into the computer resulted in a device that can be used as a portable multiband imaging polarimeter. Here we present examples that examine the potential of digital image acquisition as a quantitative method to obtain new information on natural landscapes, additional to that obtained by multiband or even hyperspectral imaging methods.

  13. Parking lot process model incorporated into DIRSIG scene simulation

    NASA Astrophysics Data System (ADS)

    Sun, Jiangqin; Messinger, David

    2012-06-01

    The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool is a first-principles-based synthetic image generation model, developed at the Rochester Institute of Technology (RIT) over the past 20+ years. By calculating the sensor-reaching radiance within the 0.2 to 20 μm bandpass, it produces multi- or hyperspectral remote sensing images. By integrating independent first-principles-based sub-models such as MODTRAN, DIRSIG generates a representation of what a sensor would see with high radiometric fidelity. To detect temporal changes in a process within the scene, current effort is devoted to enhancing DIRSIG's capability by incorporating process models. A parking lot process model is of interest to many applications. This paper therefore builds a parking lot process model, PARKVIEW, based on a statistical description of the parking lot that includes occupancy, parking duration, and parking-spot preference. The output of PARKVIEW can then be fed into DIRSIG to enhance its scene simulation capability by including temporal information about the parking lot. To demonstrate an accurate and efficient way of extracting the statistical description of a parking lot, an experiment was set up to record the distribution of cars in several parking lots on the RIT campus during one weekday by taking photos every five minutes. The image data are processed to extract the parking-spot status for each frame, and this status information is then described statistically.

  14. The extension of endmember extraction to multispectral scenes

    NASA Astrophysics Data System (ADS)

    Gruninger, John H.; Ratkowski, Anthony J.; Hoke, Michael L.

    2004-08-01

    A multiple-simplex endmember extraction method has been developed. Unlike convex methods that rely on a single simplex, the number of endmembers is not restricted by the number of linearly independent spectral channels. The endmembers are identified as the extreme points in the data set, and the algorithm can simultaneously produce endmember abundance maps. Multispectral and hyperspectral scenes can be complex and contain many materials under a variety of illumination and environmental conditions, but individual pixels typically contain only a few materials in a small subset of those conditions. This forms the physical basis for an approach that restricts the number of endmembers combining to model a single pixel, while placing no restriction on the total number of endmembers. The algorithm for finding the endmembers and their abundance maps is sequential: extreme points are identified based on the angle they make with the existing set, and the point making the maximum angle is chosen as the next endmember. The maximum number of endmembers allowed in a subset model for an individual pixel is controlled by an input parameter. Subset selection is also sequential and takes place simultaneously with the overall endmember extraction; the algorithm updates the abundances of previous endmembers and ensures that the abundances of previous and current endmembers remain nonnegative. The method offers advantages in multispectral data sets, where the limited number of channels impairs material unmixing by standard techniques. The method is described herein and applied to real and synthetic hyperspectral and multispectral data sets.
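    The max-angle selection rule described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' algorithm: it seeds with the largest-norm spectrum, measures each pixel's angle to the subspace spanned by the chosen endmembers via an orthogonal projection, and omits the abundance-update and per-pixel subset logic entirely.

```python
import numpy as np

def max_angle_endmembers(X, k):
    """Greedily select k endmember indices from X (n_pixels x n_bands).
    Each step adds the pixel making the largest angle with the subspace
    spanned by the endmembers chosen so far."""
    idx = [int(np.argmax(np.linalg.norm(X, axis=1)))]
    for _ in range(k - 1):
        Q, _ = np.linalg.qr(X[idx].T)          # orthonormal basis of span
        coords = X @ Q                         # in-subspace coordinates
        resid = X - coords @ Q.T               # out-of-subspace component
        angles = np.arctan2(np.linalg.norm(resid, axis=1),
                            np.linalg.norm(coords, axis=1))
        idx.append(int(np.argmax(angles)))
    return idx
```

    On data containing a few pure spectra plus their mixtures, the pure spectra sit at the extreme angles and are selected before any mixture.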

  15. MISR empirical stray light corrections in high-contrast scenes

    NASA Astrophysics Data System (ADS)

    Limbacher, J. A.; Kahn, R. A.

    2015-07-01

    We diagnose the potential causes for the Multi-angle Imaging SpectroRadiometer's (MISR) persistent high aerosol optical depth (AOD) bias at low AOD with the aid of coincident MODerate-resolution Imaging Spectroradiometer (MODIS) imagery from NASA's Terra satellite. Stray light in the MISR instrument is responsible for a large portion of the high AOD bias in high-contrast scenes, such as broken-cloud scenes that are quite common over ocean. Discrepancies between MODIS and MISR nadir-viewing blue, green, red, and near-infrared images are used to optimize seven parameters individually for each wavelength, along with a background reflectance modulation term that is modeled separately, to represent the observed features. Independent surface-based AOD measurements from the AErosol RObotic NETwork (AERONET) and the Marine Aerosol Network (MAN) are compared with MISR research aerosol retrieval algorithm (RA) AOD retrievals for 1118 coincidences to validate the corrections when applied to the nadir and off-nadir cameras. With these corrections, plus the baseline RA corrections and enhanced cloud screening applied, the median AOD bias for all data in the mid-visible (green, 558 nm) band decreases from 0.006 (0.020 for the MISR standard algorithm (SA)) to 0.000, and the RMSE decreases by 5 % (27 % compared to the SA). For AOD558 nm < 0.10, which includes about half the validation data, 68th percentile absolute AOD558 nm errors for the RA have dropped from 0.022 (0.034 for the SA) to < 0.02 (~ 0.018).

  16. Adaptive fusion of infrared and visible images in dynamic scene

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi

    2011-11-01

    Multiple-modality sensor fusion has been widely employed in surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet, and HSV, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges for visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast, adaptive, feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. First, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are treated as potential target locations. The region surrounding each target area is then segmented as background. Image fusion is applied locally to the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the candidate fused images, histogram distributions are computed on these local fusion images as the feature set. A variance-ratio measure based on linear discriminant analysis (LDA) is employed to rank the features, and the most discriminative one is selected for fusing the whole image. Because feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments were conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that the proposed method achieves competitive performance compared with other fusion algorithms at a relatively low computational cost.
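    The variance-ratio ranking step can be sketched as follows. This is an illustrative reconstruction, not the paper's code: it scores a single 1-D feature by between-class over within-class variance, and a hypothetical fusion pipeline would pick the candidate fusion whose feature scores highest.

```python
import numpy as np

def variance_ratio(values, labels):
    """LDA-style separability score for a 1-D feature: between-class
    variance divided by mean within-class variance. Higher means the
    feature separates target pixels from background pixels better."""
    classes = np.unique(labels)
    means = np.array([values[labels == c].mean() for c in classes])
    within = np.mean([values[labels == c].var() for c in classes])
    return means.var() / max(within, 1e-12)
```

    Applied to the histograms of each candidate local fusion, the fusion with the largest ratio would be selected for the whole image.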

  17. Monocular 3-D gait tracking in surveillance scenes.

    PubMed

    Rogez, Grégory; Rihan, Jonathan; Guerrero, Jose J; Orrite, Carlos

    2014-06-01

    Gait recognition can potentially provide a noninvasive and effective biometric authentication from a distance. However, the performance of gait recognition systems will suffer in real surveillance scenarios with multiple interacting individuals and where the camera is usually placed at a significant angle and distance from the floor. We present a methodology for view-invariant monocular 3-D human pose tracking in man-made environments in which we assume that observed people move on a known ground plane. First, we model 3-D body poses and camera viewpoints with a low dimensional manifold and learn a generative model of the silhouette from this manifold to a reduced set of training views. During the online stage, 3-D body poses are tracked using recursive Bayesian sampling conducted jointly over the scene's ground plane and the pose-viewpoint manifold. For each sample, the homography that relates the corresponding training plane to the image points is calculated using the dominant 3-D directions of the scene, the sampled location on the ground plane and the sampled camera view. Each regressed silhouette shape is projected using this homographic transformation and is matched in the image to estimate its likelihood. Our framework is able to track 3-D human walking poses in a 3-D environment exploring only a 4-D state space with success. In our experimental evaluation, we demonstrate the significant improvements of the homographic alignment over a commonly used similarity transformation and provide quantitative pose tracking results for the monocular sequences with a high perspective effect from the CAVIAR dataset.

  18. The roles of scene gist and spatial dependency among objects in the semantic guidance of attention in real-world scenes.

    PubMed

    Wu, Chia-Chien; Wang, Hsueh-Cheng; Pomplun, Marc

    2014-12-01

    A previous study (Vision Research 51 (2011) 1192-1205) found evidence for semantic guidance of visual attention during the inspection of real-world scenes, i.e., an influence of semantic relationships among scene objects on overt shifts of attention. In particular, the results revealed an observer bias toward gaze transitions between semantically similar objects. However, this effect is not necessarily indicative of semantic processing of individual objects but may be mediated by knowledge of the scene gist, which does not require object recognition, or by known spatial dependency among objects. To examine the mechanisms underlying semantic guidance, in the present study, participants were asked to view a series of displays with the scene gist excluded and spatial dependency varied. Our results show that spatial dependency among objects seems to be sufficient to induce semantic guidance. Scene gist, on the other hand, does not seem to affect how observers use semantic information to guide attention while viewing natural scenes. Extracting semantic information mainly based on spatial dependency may be an efficient strategy of the visual system that only adds little cognitive load to the viewing task.

  19. The relative facts interpretation and Everett's note added in proof

    NASA Astrophysics Data System (ADS)

    Conroy, Christina

    2012-05-01

    In the published version of Hugh Everett III's doctoral dissertation, he inserted what has become a famous footnote, the "note added in proof". This footnote is often the strongest evidence given for any of various interpretations of Everett (the many worlds, many minds, many histories and many threads interpretations). In this paper I will propose a new interpretation of the footnote. One that is supported by evidence found in letters written to and by Everett; one that is suggested by a new interpretation of Everett, an interpretation that takes seriously the central position of relative states in Everett's pure wave mechanics: the relative facts interpretation. Of central interest in this paper is how to make sense of Everett's claim in the "note added in proof" that "all elements of a superposition (all "branches") are "actual," none any more "real" than the rest."

  20. The Interplay of Episodic and Semantic Memory in Guiding Repeated Search in Scenes

    ERIC Educational Resources Information Center

    Vo, Melissa L.-H.; Wolfe, Jeremy M.

    2013-01-01

    It seems intuitive to think that previous exposure or interaction with an environment should make it easier to search through it and, no doubt, this is true in many real-world situations. However, in a recent study, we demonstrated that previous exposure to a scene does not necessarily speed search within that scene. For instance, when observers…

  1. Distortion-invariant composite filter for detecting a target in nonoverlapping scene noise

    NASA Astrophysics Data System (ADS)

    Javidi, Bahram; Wang, Jun

    1995-02-01

    A composite filter is designed for distortion-invariant detection of a target in the presence of nonoverlapping scene noise. The performance of the filter is illustrated by the use of computer simulation for the in-plane rotation of a target in nonoverlapping scene noise.

  2. Research and Technology Development for Construction of 3d Video Scenes

    NASA Astrophysics Data System (ADS)

    Khlebnikova, Tatyana A.

    2016-06-01

    For the last two decades, surface information in the form of conventional digital and analogue topographic maps has been supplemented by new digital geospatial products, also known as 3D models of real objects. It is shown that there are currently no defined standards for 3D scene construction technologies that could be used by Russian surveying and cartographic enterprises. Requirements for source data, their capture, and their transfer for creating 3D scenes have not yet been defined, and the accuracy of 3D video scenes used for measuring purposes is hardly ever addressed in publications. The practicability of developing, researching, and implementing a technology for constructing 3D video scenes is substantiated by their capability to broaden data analysis for environmental monitoring, urban planning, and managerial decision-making. A technology for constructing 3D video scenes that meets specified metric requirements is offered, together with recommended techniques and methodological background for constructing 3D video scenes from DTMs created from satellite and aerial survey data. Results of the accuracy estimation of 3D video scenes are presented.

  3. Priming of Simple and Complex Scene Layout: Rapid Function from the Intermediate Level

    ERIC Educational Resources Information Center

    Sanocki, Thomas; Sulman, Noah

    2009-01-01

    Three experiments examined the time course of layout priming with photographic scenes varying in complexity (number of objects). Primes were presented for varying durations (800-50 ms) before a target scene with 2 spatial probes; observers indicated whether the left or right probe was closer to viewpoint. Reaction time was the main measure. Scene…

  4. How affective information from faces and scenes interacts in the brain.

    PubMed

    Van den Stock, Jan; Vandenbulcke, Mathieu; Sinke, Charlotte B A; Goebel, Rainer; de Gelder, Beatrice

    2014-10-01

    Facial expression perception can be influenced by the natural visual context in which the face is perceived. We performed an fMRI experiment presenting participants with fearful or neutral faces against threatening or neutral background scenes. Triangles and scrambled scenes served as control stimuli. The results showed that the valence of the background influences face selective activity in the right anterior parahippocampal place area (PPA) and subgenual anterior cingulate cortex (sgACC) with higher activation for neutral backgrounds compared to threatening backgrounds (controlled for isolated background effects) and that this effect correlated with trait empathy in the sgACC. In addition, the left fusiform gyrus (FG) responds to the affective congruence between face and background scene. The results show that valence of the background modulates face processing and support the hypothesis that empathic processing in sgACC is inhibited when affective information is present in the background. In addition, the findings reveal a pattern of complex scene perception showing a gradient of functional specialization along the posterior-anterior axis: from sensitivity to the affective content of scenes (extrastriate body area: EBA and posterior PPA), over scene emotion-face emotion interaction (left FG) via category-scene interaction (anterior PPA) to scene-category-personality interaction (sgACC).

  5. Temporal dynamics of eye movements are related to differences in scene complexity and clutter.

    PubMed

    Wu, David W-L; Anderson, Nicola C; Bischof, Walter F; Kingstone, Alan

    2014-08-11

    Recent research has begun to explore not just the spatial distribution of eye fixations but also the temporal dynamics of how we look at the world. In this investigation, we assess how scene characteristics contribute to these fixation dynamics. In a free-viewing task, participants viewed three scene types: fractal, landscape, and social scenes. We used a relatively new method, recurrence quantification analysis (RQA), to quantify eye movement dynamics. RQA revealed that eye movement dynamics were dependent on the scene type viewed. To understand the underlying cause for these differences we applied a technique known as fractal analysis and discovered that complexity and clutter are two scene characteristics that affect fixation dynamics, but only in scenes with meaningful content. Critically, scene primitives-revealed by saliency analysis-had no impact on performance. In addition, we explored how RQA differs from the first half of the trial to the second half, as well as the potential to investigate the precision of fixation targeting by changing RQA radius values. Collectively, our results suggest that eye movement dynamics result from top-down viewing strategies that vary according to the meaning of a scene and its associated visual complexity and clutter.
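    The recurrence quantification idea used above can be sketched with its most basic measure. This is a hedged illustration under stated assumptions, not the study's analysis pipeline: it computes only the recurrence rate (fraction of fixation pairs closer than a chosen radius), whereas full RQA also derives measures such as determinism and laminarity from the recurrence matrix.

```python
import numpy as np

def recurrence_rate(fixations, radius):
    """Fraction of ordered fixation pairs (i != j) whose locations fall
    within `radius` of each other. `fixations` is (n, 2) in pixels."""
    d = np.linalg.norm(fixations[:, None, :] - fixations[None, :, :],
                       axis=-1)
    rec = d < radius
    np.fill_diagonal(rec, False)      # a fixation never recurs with itself
    n = len(fixations)
    return rec.sum() / (n * (n - 1))
```

    Changing `radius`, as the study does, trades off how precisely two fixations must be co-located to count as a recurrence.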

  6. Was That Levity or Livor Mortis? Crime Scene Investigators' Perspectives on Humor and Work

    ERIC Educational Resources Information Center

    Vivona, Brian D.

    2012-01-01

    Humor is common and purposeful in most work settings. Although researchers have examined humor and joking behavior in various work settings, minimal research has been done on humor applications in the field of crime scene investigation. The crime scene investigator encounters death, trauma, and tragedy in a more intimate manner than any other…

  7. Speed Limits: Orientation and Semantic Context Interactions Constrain Natural Scene Discrimination Dynamics

    ERIC Educational Resources Information Center

    Rieger, Jochem W.; Kochy, Nick; Schalk, Franziska; Gruschow, Marcus; Heinze, Hans-Jochen

    2008-01-01

    The visual system rapidly extracts information about objects from the cluttered natural environment. In 5 experiments, the authors quantified the influence of orientation and semantics on the classification speed of objects in natural scenes, particularly with regard to object-context interactions. Natural scene photographs were presented in an…

  8. Eye Movements when Looking at Unusual/Weird Scenes: Are There Cultural Differences?

    ERIC Educational Resources Information Center

    Rayner, Keith; Castelhano, Monica S.; Yang, Jinmian

    2009-01-01

    Recent studies have suggested that eye movement patterns while viewing scenes differ for people from different cultural backgrounds and that these differences in how scenes are viewed are due to differences in the prioritization of information (background or foreground). The current study examined whether there are cultural differences in how…

  9. 22 CFR 102.10 - Rendering assistance at the scene of the accident.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... accident. 102.10 Section 102.10 Foreign Relations DEPARTMENT OF STATE ECONOMIC AND OTHER FUNCTIONS CIVIL AVIATION United States Aircraft Accidents Abroad § 102.10 Rendering assistance at the scene of the accident... the scene of the accident in order to insure that proper protection is afforded United States...

  10. 22 CFR 102.10 - Rendering assistance at the scene of the accident.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... accident. 102.10 Section 102.10 Foreign Relations DEPARTMENT OF STATE ECONOMIC AND OTHER FUNCTIONS CIVIL AVIATION United States Aircraft Accidents Abroad § 102.10 Rendering assistance at the scene of the accident... the scene of the accident in order to insure that proper protection is afforded United States...

  11. The Effect of Scene Variation on the Redundant Use of Color in Definite Reference

    ERIC Educational Resources Information Center

    Koolen, Ruud; Goudbeek, Martijn; Krahmer, Emiel

    2013-01-01

    This study investigates to what extent the amount of variation in a visual scene causes speakers to mention the attribute color in their definite target descriptions, focusing on scenes in which this attribute is not needed for identification of the target. The results of our three experiments show that speakers are more likely to redundantly…

  12. [Perception of objects and scenes in age-related macular degeneration].

    PubMed

    Tran, T H C; Boucart, M

    2012-01-01

    Vision related quality of life questionnaires suggest that patients with AMD exhibit difficulties in finding objects and in mobility. In the natural environment, objects seldom appear in isolation. They appear in a spatial context which may obscure them in part or place obstacles in the patient's path. Furthermore, the luminance of a natural scene varies as a function of the hour of the day and the light source, which can alter perception. This study aims to evaluate recognition of objects and natural scenes by patients with AMD, by using photographs of such scenes. Studies demonstrate that AMD patients are able to categorize scenes as nature scenes or urban scenes and to discriminate indoor from outdoor scenes with a high degree of precision. They detect objects better in isolation, in color, or against a white background than in their natural contexts. These patients encounter more difficulties than normally sighted individuals in detecting objects in a low-contrast, black-and-white scene. These results may have implications for rehabilitation, for layout of texts and magazines for the reading-impaired and for the rearrangement of the spatial environment of older AMD patients in order to facilitate mobility, finding objects and reducing the risk of falls.

  13. 77 FR 45378 - Guidelines for Cases Requiring On-Scene Death Investigation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-31

    ... of Justice Programs Guidelines for Cases Requiring On-Scene Death Investigation AGENCY: National... Institute of Justice, Scientific Working Group for Medicolegal Death Investigation will make available to the general public a draft document entitled, ``Guidelines for Cases Requiring On-Scene...

  14. Face, Body, and Center of Gravity Mediate Person Detection in Natural Scenes

    ERIC Educational Resources Information Center

    Bindemann, Markus; Scheepers, Christoph; Ferguson, Heather J.; Burton, A. Mike

    2010-01-01

    Person detection is an important prerequisite of social interaction, but is not well understood. Following suggestions that people in the visual field can capture a viewer's attention, this study examines the role of the face and the body for person detection in natural scenes. We observed that viewers tend first to look at the center of a scene,…

  15. Categorical implicit learning in real-world scenes: evidence from contextual cueing.

    PubMed

    Goujon, Annabelle

    2011-05-01

    The present study examined the extent to which learning mechanisms are deployed on semantic-categorical regularities during visual search within real-world scenes. The contextual cueing paradigm was used with photographs of indoor scenes in which the semantic category did or did not predict the target position on the screen. No facilitation was observed in the predictive condition relative to the nonpredictive condition when participants were merely instructed to search for a target T or L (Experiment 1). However, a rapid contextual cueing effect occurred when each display containing the search target was preceded by a preview of the scene on which participants had to make a decision regarding the scene's category (Experiment 2). A follow-up explicit memory task indicated that this benefit resulted from implicit learning. Similar implicit contextual cueing effects were also obtained when the scene to be categorized differed from the subsequent search scene (Experiment 3) and when a mere preview of the search scene preceded the visual search (Experiment 4). These results suggest that, although enhanced processing of the scene was required with the present material, such implicit semantic learning can take place even when the category is task-irrelevant.

  16. The Role of Visual Experience on the Representation and Updating of Novel Haptic Scenes

    ERIC Educational Resources Information Center

    Pasqualotto, Achille; Newell, Fiona N.

    2007-01-01

    We investigated the role of visual experience on the spatial representation and updating of haptic scenes by comparing recognition performance across sighted, congenitally and late blind participants. We first established that spatial updating occurs in sighted individuals to haptic scenes of novel objects. All participants were required to…

  17. "It's ELEMENTary, My Dear Watson": A Crime Scene Investigation with a Technological Twist

    ERIC Educational Resources Information Center

    Albert, Jennifer; Blanchard, Margaret; Grable, Lisa; Reed, Rebecca

    2010-01-01

    The Crime Scene Labs is a technology-enhanced unit with seven laboratory stations. Probes at many of the stations facilitate students collecting and analyzing their own data (some lessons are adapted from Volz and Sapatka 2000). The labs are designed to build 21st-century skills and model reform-based practices (NRC 1996). The crime scene allows…

  18. Auditory and Cognitive Effects of Aging on Perception of Environmental Sounds in Natural Auditory Scenes

    ERIC Educational Resources Information Center

    Gygi, Brian; Shafiro, Valeriy

    2013-01-01

    Purpose: Previously, Gygi and Shafiro (2011) found that when environmental sounds are semantically incongruent with the background scene (e.g., horse galloping in a restaurant), they can be identified more accurately by young normal-hearing listeners (YNH) than sounds congruent with the scene (e.g., horse galloping at a racetrack). This study…

  19. Fundamental remote sensing science research program. Part 1: Scene radiation and atmospheric effects characterization project

    NASA Technical Reports Server (NTRS)

    Murphy, R. E.; Deering, D. W.

    1984-01-01

    Brief articles summarizing the status of research in the scene radiation and atmospheric effect characterization (SRAEC) project are presented. Research conducted within the SRAEC program is focused on the development of empirical characterizations and mathematical process models which relate the electromagnetic energy reflected or emitted from a scene to the biophysical parameters of interest.

  20. The Incongruency Advantage for Environmental Sounds Presented in Natural Auditory Scenes

    ERIC Educational Resources Information Center

    Gygi, Brian; Shafiro, Valeriy

    2011-01-01

    The effect of context on the identification of common environmental sounds (e.g., dogs barking or cars honking) was tested by embedding them in familiar auditory background scenes (street ambience, restaurants). Initial results with subjects trained on both the scenes and the sounds to be identified showed a significant advantage of about five…

  1. Target detection in complex scene of SAR image based on existence probability

    NASA Astrophysics Data System (ADS)

    Liu, Shuo; Cao, Zongjie; Wu, Honggang; Pi, Yiming; Yang, Haiyi

    2016-12-01

This study proposes a target detection approach based on the target existence probability in complex scenes of a synthetic aperture radar image. Superpixels are the basic unit throughout the approach and are assigned to each classified scene using a texture feature. The original and predicted saliency depth values for each scene are derived from the self-information of all the labelled superpixels in that scene. Thereafter, the target existence probability is estimated by comparing the two saliency depth values. Lastly, an improved visual attention algorithm, in which the scenes of the saliency map are given different weights related to their existence probabilities, produces the target detection result. This algorithm enhances attention to the scene that contains the target. Hence, the proposed approach is self-adapting for complex scenes, and the algorithm is well suited to different detection missions (e.g. vehicle, ship or aircraft detection in the related scenes of road, harbour or airport, respectively). Experimental results on various data show the effectiveness of the proposed method.
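The scene-weighting step of the visual attention algorithm can be sketched as follows. This is a hypothetical simplification; the function and variable names are ours, not the authors':

```python
import numpy as np

def weighted_saliency(saliency, scene_labels, existence_prob):
    """Hypothetical sketch of the weighting step: scale each pixel's
    saliency by the target existence probability of the scene class its
    superpixel was labelled with, so attention is enhanced in scenes
    likely to contain the target."""
    weights = np.array([existence_prob[c] for c in scene_labels.ravel()])
    return saliency * weights.reshape(scene_labels.shape)

sal = np.ones((2, 2))                # uniform saliency map
labels = np.array([[0, 0], [1, 1]])  # e.g. 0 = road scene, 1 = field scene
out = weighted_saliency(sal, labels, {0: 0.9, 1: 0.1})
```

For a vehicle-detection mission, a road scene would carry a high existence probability and so dominate the weighted saliency map.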

  2. Kindergarten Quantum Mechanics: Lecture Notes

    SciTech Connect

    Coecke, Bob

    2006-01-04

These lecture notes survey some joint work with Samson Abramsky as I presented it at several conferences in the summer of 2005. It concerns 'doing quantum mechanics using only pictures of lines, squares, triangles and diamonds'. This picture calculus can be seen as a very substantial extension of Dirac's notation, and has a purely algebraic counterpart in terms of so-called Strongly Compact Closed Categories (introduced by Abramsky and me), which subsume my Logic of Entanglement. For a survey on the 'what', the 'why' and the 'how' I refer to a previous set of lecture notes. In a last section we provide some pointers to the body of technical literature on the subject.

  3. A note on "Kepler's equation".

    NASA Astrophysics Data System (ADS)

    Dutka, J.

    1997-07-01

This note briefly points out the formal similarity between Kepler's equation and equations developed in Hindu and Islamic astronomy for describing the lunar parallax. Specifically, an iterative method for calculating the lunar parallax was developed by the astronomer Habash al-Hasib al-Marwazi (about 850 A.D., Turkestan) that is surprisingly similar to the iterative method for solving Kepler's equation invented by Leonhard Euler (1707-1783).
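The fixed-point scheme shared by both methods can be sketched as follows, assuming the standard form E - e sin E = M of Kepler's equation (a minimal illustration, not the note's own notation):

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=200):
    """Solve Kepler's equation E - e*sin(E) = M for the eccentric anomaly E
    by fixed-point iteration E_{n+1} = M + e*sin(E_n), the style of scheme
    used for the lunar parallax by Habash al-Hasib and later by Euler."""
    E = M  # the mean anomaly is a reasonable starting guess
    for _ in range(max_iter):
        E_next = M + e * math.sin(E)
        if abs(E_next - E) < tol:
            break
        E = E_next
    return E_next

E = solve_kepler(M=1.0, e=0.3)  # converges quickly for eccentricity e < 1
```

The iteration is a contraction for e < 1, which is why it converges for all elliptical orbits.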

  4. Recognizing Exponential Growth. Classroom Notes

    ERIC Educational Resources Information Center

    Dobbs, David E.

    2004-01-01

Two heuristic and three rigorous arguments are given for the fact that functions of the form Ce^(kx), with C an arbitrary constant, are the only solutions of the equation dy/dx = ky, where k is constant. Several of the proofs in this self-contained note could find classroom use in a first-year calculus course, an introductory course on differential…
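One of the standard rigorous arguments fits in a few lines:

```latex
% Suppose y' = ky. Set g(x) = y(x)e^{-kx}. Then
g'(x) = y'(x)e^{-kx} - k\,y(x)e^{-kx} = \bigl(y'(x) - k\,y(x)\bigr)e^{-kx} = 0,
% so g is constant, g(x) = C, and hence y(x) = Ce^{kx} is the general solution.
```

The multiplication by e^{-kx} avoids dividing by y, so the argument covers solutions that vanish somewhere.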

  5. Rapid biologically-inspired scene classification using features shared with visual attention.

    PubMed

    Siagian, Christian; Itti, Laurent

    2007-02-01

We describe and validate a simple context-based scene recognition algorithm for mobile robotics applications. The system can differentiate outdoor scenes from various sites on a college campus using a multiscale set of early-visual features, which capture the "gist" of the scene into a low-dimensional signature vector. Distinct from previous approaches, the algorithm presents the advantage of being biologically plausible and of having low computational complexity, sharing its low-level features with a model for visual attention that may operate concurrently on a robot. We compare classification accuracy using scenes filmed at three outdoor sites on campus (13,965 to 34,711 frames per site). Dividing each site into nine segments, we obtain segment classification rates between 84.21 percent and 88.62 percent. Combining scenes from all sites (75,073 frames in total) yields 86.45 percent correct classification, demonstrating the generalization and scalability of the approach.
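The final classification over gist signature vectors can be sketched as a toy nearest-mean classifier. This is an illustrative stand-in only; the actual system trains a learned classifier on its multiscale features:

```python
import numpy as np

def classify_gist(signature, class_means):
    """Toy nearest-mean classifier over low-dimensional "gist" signature
    vectors: return the site whose mean signature is closest in
    Euclidean distance."""
    dists = {site: np.linalg.norm(signature - mean)
             for site, mean in class_means.items()}
    return min(dists, key=dists.get)

# Hypothetical 2-D signatures for two campus sites.
means = {"siteA": np.array([0.0, 0.0]), "siteB": np.array([1.0, 1.0])}
label = classify_gist(np.array([0.1, 0.2]), means)
```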

  6. Sexual sadism in the context of rape and sexual homicide: an examination of crime scene indicators.

    PubMed

    Healey, Jay; Lussier, Patrick; Beauregard, Eric

    2013-04-01

    This study investigates the convergent and predictive validity of behavioral crime scene indicators of sexual sadism in the context of rape and sexual homicide. The study is based on a sample of 268 adult males sentenced to a federal penitentiary in Canada. Information regarding crime scene behaviors was gathered from police records, a clinical interview with a psychologist, and a semistructured interview with the offender. A series of logistic regressions were performed to determine whether behavioral crime scene indicators of sexual sadism were associated with an official diagnosis of sexual sadism and were able to distinguish between sexual aggressors against women and sexual murderers. Findings suggest that several crime scene behaviors overlap with an official diagnosis of sexual sadism as well as being able to distinguish between sexual aggressors of women and sexual murderers. Importantly, the majority of crime scene behaviors associated with a clinical diagnosis of sexual sadism are not the same as those associated with sexual homicide.

  7. External Validity of Contingent Valuation: Comparing Hypothetical and Actual Payments.

    PubMed

    Ryan, Mandy; Mentzakis, Emmanouil; Jareinpituk, Suthi; Cairns, John

    2016-10-09

Whilst contingent valuation is increasingly used in economics to value benefits, questions remain concerning its external validity: that is, do hypothetical responses match actual responses? We present results from the first within-sample field test. Whilst a Hypothetical No is always an Actual No, Hypothetical Yes responses exceed Actual Yes responses. A constant rate of response reversals across bids/prices could suggest theoretically consistent option-value responses. Certainty calibrations (verbal and numerical response scales) minimise hypothetical-actual discrepancies, offering a useful solution. Helping respondents resolve uncertainty may reduce the discrepancy between hypothetical and actual payments and thus lead to more accurate policy recommendations. Copyright © 2016 John Wiley & Sons, Ltd.

  8. Residual abilities in age-related macular degeneration to process spatial frequencies during natural scene categorization.

    PubMed

    Musel, Benoit; Hera, Ruxandra; Chokron, Sylvie; Alleysson, David; Chiquet, Christophe; Romanet, Jean-Paul; Guyader, Nathalie; Peyrin, Carole

    2011-11-01

Age-related macular degeneration (AMD) is characterized by central vision loss. We explored the relationship between the retinal lesions in AMD patients and the processing of spatial frequencies in natural scene categorization. Since the lesion on the retina is central, we expected preservation of low spatial frequency (LSF) processing and impairment of high spatial frequency (HSF) processing. We conducted two experiments that differed in the set of scene stimuli used and their exposure duration. Twelve AMD patients and 12 healthy age-matched participants in Experiment 1, and 10 different AMD patients and 10 healthy age-matched participants in Experiment 2, performed categorization tasks of natural scenes (Indoors vs. Outdoors) filtered in LSF and HSF. Experiment 1 revealed that AMD patients made more no-responses when categorizing HSF than LSF scenes, irrespective of the scene category. In addition, AMD patients had longer reaction times when categorizing HSF than LSF scenes, but only for indoors. Healthy participants' performance was not differentially affected by the spatial frequency content of the scenes. In Experiment 2, AMD patients demonstrated the same pattern of errors as in Experiment 1. Furthermore, AMD patients had longer reaction times when categorizing HSF than LSF scenes, irrespective of the scene category. Again, spatial frequency processing was equivalent for healthy participants. The present findings point to a specific deficit in the processing of HSF information contained in photographs of natural scenes in AMD patients, while the processing of LSF information is relatively preserved. Moreover, the fact that the deficit is more pronounced when categorizing HSF indoors may lead to new perspectives for rehabilitation procedures in AMD.

  9. SCEGRAM: An image database for semantic and syntactic inconsistencies in scenes.

    PubMed

    Öhlschläger, Sabine; Võ, Melissa Le-Hoa

    2016-10-31

Our visual environment is not random, but follows compositional rules according to what objects are usually found where. Despite the growing interest in how such semantic and syntactic rules - a scene grammar - enable effective attentional guidance and object perception, no common image database containing highly controlled object-scene modifications has been publicly available. Such a database is essential in minimizing the risk that low-level features drive high-level effects of interest, which is being discussed as a possible source of controversial study results. To generate the first database of this kind - SCEGRAM - we took photographs of 62 real-world indoor scenes in six consistency conditions that contain semantic and syntactic (both mild and extreme) violations as well as their combinations. Importantly, scenes were always paired, so that an object was semantically consistent in one scene (e.g., ketchup in kitchen) and inconsistent in the other (e.g., ketchup in bathroom). Low-level salience did not differ between object-scene conditions and was generally moderate. Additionally, SCEGRAM contains consistency ratings for every object-scene condition, as well as object-absent scenes and object-only images. Finally, a cross-validation using eye-movements replicated previous results of longer dwell times for both semantic and syntactic inconsistencies compared to consistent controls. In sum, the SCEGRAM image database is the first to contain well-controlled semantic and syntactic object-scene inconsistencies that can be used in a broad range of cognitive paradigms (e.g., verbal and pictorial priming, change detection, object identification, etc.), including paradigms addressing developmental aspects of scene grammar. SCEGRAM can be retrieved for research purposes from http://www.scenegrammarlab.com/research/scegram-database/

  10. Application of multi-resolution 3D techniques in crime scene documentation with bloodstain pattern analysis.

    PubMed

    Hołowko, Elwira; Januszkiewicz, Kamil; Bolewicki, Paweł; Sitnik, Robert; Michoński, Jakub

    2016-10-01

In forensic documentation with bloodstain pattern analysis (BPA) it is highly desirable to obtain overall documentation of a crime scene non-invasively, but also to register single evidence objects, such as bloodstains, in high resolution. In this study, we propose a hierarchical 3D scanning platform designed according to the top-down approach known from traditional forensic photography. The overall 3D model of a scene is obtained via integration of laser scans registered from different positions. Particularly interesting parts of a scene are documented using a midrange scanner, and the smallest details are added in the highest resolution as close-up scans. The scanning devices are controlled using purpose-built software equipped with advanced algorithms for point cloud processing. To verify the feasibility and effectiveness of multi-resolution 3D scanning in crime scene documentation, our platform was applied to document a murder scene simulated by the BPA experts from the Central Forensic Laboratory of the Police R&D, Warsaw, Poland. Applying the 3D scanning platform proved beneficial in the documentation of a crime scene combined with BPA. The multi-resolution 3D model enables virtual exploration of a scene in a three-dimensional environment and distance measurement, and gives a more realistic preservation of the evidence together with its surroundings. Moreover, high-resolution close-up scans aligned in a 3D model can be used to analyze bloodstains revealed at the crime scene. The results of BPA, such as trajectories and the area of origin, are visualized and analyzed in an accurate model of the scene. At this stage, a simplified approach treating the trajectory of a blood drop as a straight line is applied. Although the 3D scanning platform offers a new quality of crime scene documentation with BPA, some limitations of the technique are also mentioned.
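Under the straight-line trajectory simplification, the impact angle of an individual stain is commonly recovered from the stain's ellipse. A minimal sketch of that standard BPA relation (an illustrative helper, not part of the authors' scanning platform):

```python
import math

def impact_angle(width, length):
    """Standard BPA relation: for an elliptical bloodstain, the impact
    angle alpha satisfies sin(alpha) = width / length.
    Returns the angle in degrees."""
    return math.degrees(math.asin(width / length))

alpha = impact_angle(2.0, 4.0)  # elongated stain -> shallow 30-degree impact
```

A circular stain (width equal to length) implies a near-perpendicular, 90-degree impact.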

  11. Scene Analysis Using Recursive Frequency Domain Correlation with Energy Normalization

    DTIC Science & Technology

    1984-12-01

applications of image pattern recognition lie primarily in four major areas: (Ref 8) 1. Document processing 2. Industrial automation 3. Medicine and...tput files. Option 16, FILL, is used to eliminate unwanted noise in a template video... Notes on library PICBUF use: Virtual Memory System for Digital... library of Fortran subroutines which implements a virtual memory system capable of storing and accessing two-dimensional digital data. The PICBUF

  12. Lecture notes for criticality safety

    SciTech Connect

    Fullwood, R.

    1992-03-01

These lecture notes for criticality safety are prepared for the training of Department of Energy supervisory, project management, and administrative staff. Technical training and basic mathematics are assumed. The notes are designed for a two-day course, taught by two lecturers. Video tapes may be used at the option of the instructors. The notes provide all the materials that are necessary, but outside reading will assist in the fullest understanding. The course begins with a nuclear physics overview. The reader is led from the macroscopic world into the microscopic world of atoms and the elementary particles that constitute atoms. The particles, their masses and sizes, and properties associated with radioactive decay and fission are introduced along with Einstein's mass-energy equivalence. Radioactive decay, nuclear reactions, radiation penetration, shielding, and health effects are discussed to understand protection in case of a criticality accident. Fission, the fission products, and the particles and energy released are presented to appreciate the dangers of criticality. Nuclear cross sections are introduced to understand the effectiveness of slow neutrons in producing fission. Chain reactors are presented as an economy: effective use of the neutrons from fission leads to more fission, resulting in a power reactor or a criticality excursion. The six-factor formula is presented for managing the neutron budget. This leads to the concepts of material and geometric buckling, which are used in simple calculations to assure safety from criticality. Experimental measurements and computer code calculations of criticality are discussed. To emphasize the reality, historical criticality accidents are presented in a table, with major ones discussed to provide lessons learned. Finally, standards, NRC guides and regulations, and DOE orders relating to criticality protection are presented.
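The six-factor formula mentioned above is a simple product of multiplication factors. A minimal sketch with illustrative, not measured, factor values:

```python
def k_effective(eta, f, p, epsilon, P_fnl, P_tnl):
    """Six-factor formula: k_eff = eta * f * p * epsilon * P_FNL * P_TNL.
    eta:     neutrons produced per neutron absorbed in fuel
    f:       thermal utilization factor
    p:       resonance escape probability
    epsilon: fast fission factor
    P_fnl, P_tnl: fast and thermal non-leakage probabilities."""
    return eta * f * p * epsilon * P_fnl * P_tnl

# Illustrative values only; the system is subcritical when k_eff < 1,
# critical at k_eff = 1, and supercritical (a criticality excursion
# hazard) when k_eff > 1.
k = k_effective(eta=1.65, f=0.71, p=0.87, epsilon=1.02, P_fnl=0.97, P_tnl=0.99)
```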

  13. STAC: a comprehensive sensor fusion model for scene characterization

    NASA Astrophysics Data System (ADS)

    Kira, Zsolt; Wagner, Alan R.; Kennedy, Chris; Zutty, Jason; Tuell, Grady

    2015-05-01

    We are interested in data fusion strategies for Intelligence, Surveillance, and Reconnaissance (ISR) missions. Advances in theory, algorithms, and computational power have made it possible to extract rich semantic information from a wide variety of sensors, but these advances have raised new challenges in fusing the data. For example, in developing fusion algorithms for moving target identification (MTI) applications, what is the best way to combine image data having different temporal frequencies, and how should we introduce contextual information acquired from monitoring cell phones or from human intelligence? In addressing these questions we have found that existing data fusion models do not readily facilitate comparison of fusion algorithms performing such complex information extraction, so we developed a new model that does. Here, we present the Spatial, Temporal, Algorithm, and Cognition (STAC) model. STAC allows for describing the progression of multi-sensor raw data through increasing levels of abstraction, and provides a way to easily compare fusion strategies. It provides for unambiguous description of how multi-sensor data are combined, the computational algorithms being used, and how scene understanding is ultimately achieved. In this paper, we describe and illustrate the STAC model, and compare it to other existing models.

  14. Autonomous tracking of designated persons in crowded scenes

    NASA Astrophysics Data System (ADS)

    Heidary, Kaveh; Johnson, R. Barry

    2013-09-01

    This paper develops an algorithm for autonomous tracking of a person (target) within a crowded and temporally dynamic scene using a multispectral imaging system. The camera is stationary, the field of view is static, and the sensor pixel footprint is on the order of one inch. The operator designates the target to be tracked by selecting a single target-pixel in the first image frame, preferably close to the center of mass of the observable portion of the target in that particular frame. Following the initial designation, the algorithm provides tracking of the target in real-time autonomously with minimal latency. The tracking algorithm is based on a novel temporally adaptive spatial-spectral filter bank used to detect target presence or lack thereof in the field-of-regard of the video frame produced by the multispectral camera. The theory of the temporally adaptive spatial-spectral filter is based on an extension of our earlier work on the enhanced matched filter bank (EMFB). The concept of EMFB is founded on the theory of spatial matched filters, which is the optimal correlation filter for detection of a known image corrupted by noise.
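The matched-filter correlation underlying the EMFB can be sketched as a normalized cross-correlation score. This is an illustrative simplification of the core idea only; the paper's filter bank adds spectral bands and temporal adaptation:

```python
import numpy as np

def matched_filter_score(patch, template):
    """Normalized cross-correlation of an image patch with a target
    template. The matched filter is the optimal linear detector for a
    known pattern in additive white noise; scores near 1 indicate
    target presence."""
    p = (patch - patch.mean()).ravel()
    t = (template - template.mean()).ravel()
    return float(p @ t / (np.linalg.norm(p) * np.linalg.norm(t)))

tmpl = np.array([[1.0, 2.0], [3.0, 4.0]])
score = matched_filter_score(tmpl, tmpl)  # identical patch -> score of 1.0
```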

  15. Sequential auditory scene analysis is preserved in normal aging adults.

    PubMed

    Snyder, Joel S; Alain, Claude

    2007-03-01

    Normal aging is accompanied by speech perception difficulties, especially in adverse listening situations such as a cocktail party. To assess whether such difficulties might be related to impairments in sequential auditory scene analysis, event-related brain potentials were recorded from normal-hearing young, middle-aged, and older adults during presentation of low (A) tones, high (B) tones, and silences (--) in repeating 3 tone triplets (ABA--). The likelihood of reporting hearing 2 streams increased as a function of the frequency difference between A and B tones (Delta f) to the same extent for all 3 age groups and was paralleled by enhanced sensory-evoked responses over the frontocentral scalp regions. In all 3 age groups, there was also a progressive buildup in brain activity from the beginning to the end of the sequence of triplets, which was characterized by an enhanced positivity that peaked at about 200 ms after the onset of each ABA--triplet. Similar Delta f- and buildup-related activity also occurred over the right temporal cortex, but only for young adults. We conclude that age-related difficulties in separating competing speakers are unlikely to arise from deficits in streaming and might instead reflect less efficient concurrent sound segregation.

  16. Computational Models of Auditory Scene Analysis: A Review.

    PubMed

    Szabó, Beáta T; Denham, Susan L; Winkler, István

    2016-01-01

Auditory scene analysis (ASA) refers to the process(es) of parsing the complex acoustic input into auditory perceptual objects representing either physical sources or temporal sound patterns, such as melodies, which contributed to the sound waves reaching the ears. A number of new computational models accounting for some of the perceptual phenomena of ASA have been published recently. Here we provide a theoretically motivated review of these computational models, aiming to relate their guiding principles to the central issues of the theoretical framework of ASA. Specifically, we ask how they achieve the grouping and separation of sound elements and whether they implement some form of competition between alternative interpretations of the sound input. We consider the extent to which they include predictive processes, as important current theories suggest that perception is inherently predictive, and also how they have been evaluated. We conclude that current computational models of ASA are fragmentary in the sense that rather than providing general competing interpretations of ASA, they focus on assessing the utility of specific processes (or algorithms) for finding the causes of the complex acoustic signal. This leaves open the possibility for integrating complementary aspects of the models into a more comprehensive theory of ASA.

  17. Attention effects on auditory scene analysis in children.

    PubMed

    Sussman, Elyse; Steinschneider, Mitchell

    2009-02-01

    Auditory scene analysis begins in infancy, making it possible for the baby to distinguish its mother's voice from other noises in the environment. Despite the importance of this process for human behavior, the question of how perceptual sound organization develops during childhood is not well understood. The current study investigated the role of attention for perceiving sound streams in a group of school-aged children and young adults. We behaviorally determined the frequency separation at which a set of sounds was detected as one integrated or two separated streams and compared these measures with passively and actively obtained electrophysiological indices (mismatch negativity (MMN) and P3b) of the same sounds. In adults, there was a high degree of concordance between passive and active electrophysiological indices of stream segregation that matched with perception. In contrast, there was a large disparity in children. Active electrophysiological indices of streaming were concordant with behavioral measures of perception, whereas passive indices were not. In addition, children required larger frequency separations to perceive two streams compared to adults. Our results suggest that differences in stream segregation between children and adults reflect an under-development of basic auditory processing mechanisms, and indicate a developmental role of attention for shaping physiological responses that optimize processes engaged during passive audition.

  18. Diverse cortical codes for scene segmentation in primate auditory cortex.

    PubMed

    Malone, Brian J; Scott, Brian H; Semple, Malcolm N

    2015-04-01

    The temporal coherence of amplitude fluctuations is a critical cue for segmentation of complex auditory scenes. The auditory system must accurately demarcate the onsets and offsets of acoustic signals. We explored how and how well the timing of onsets and offsets of gated tones are encoded by auditory cortical neurons in awake rhesus macaques. Temporal features of this representation were isolated by presenting otherwise identical pure tones of differing durations. Cortical response patterns were diverse, including selective encoding of onset and offset transients, tonic firing, and sustained suppression. Spike train classification methods revealed that many neurons robustly encoded tone duration despite substantial diversity in the encoding process. Excellent discrimination performance was achieved by neurons whose responses were primarily phasic at tone offset and by those that responded robustly while the tone persisted. Although diverse cortical response patterns converged on effective duration discrimination, this diversity significantly constrained the utility of decoding models referenced to a spiking pattern averaged across all responses or averaged within the same response category. Using maximum likelihood-based decoding models, we demonstrated that the spike train recorded in a single trial could support direct estimation of stimulus onset and offset. Comparisons between different decoding models established the substantial contribution of bursts of activity at sound onset and offset to demarcating the temporal boundaries of gated tones. Our results indicate that relatively few neurons suffice to provide temporally precise estimates of such auditory "edges," particularly for models that assume and exploit the heterogeneity of neural responses in awake cortex.

  19. Effects of attentional load on auditory scene analysis.

    PubMed

    Alain, Claude; Izenberg, Aaron

    2003-10-01

The effects of attention on the neural processes underlying auditory scene analysis were investigated through the manipulation of auditory task load. Participants were asked to focus their attention on tuned and mistuned stimuli presented to one ear and to ignore similar stimuli presented to the other ear. For both tuned and mistuned sounds, long (standard) and shorter (deviant) duration stimuli were presented in both ears. Auditory task load was manipulated by varying task instructions. In the easier condition, participants were asked to press a button for deviant sounds (targets) at the attended location, irrespective of tuning. In the harder condition, participants were further asked to identify whether the targets were tuned or mistuned. Participants were faster in detecting targets defined by duration only than by both duration and tuning. At the unattended location, deviant stimuli generated a mismatch negativity wave at frontocentral sites whose amplitude decreased with increasing task demand. In comparison, standard mistuned stimuli generated an object-related negativity at central sites whose amplitude was not affected by task difficulty. These results show that the processing of sound sequences is affected by attentional load differently than the processing of sounds that occur simultaneously (i.e., sequential vs. simultaneous grouping processes), and that the two recruit distinct neural networks.

  20. Ultrahigh-temperature emitter pixel development for scene projectors

    NASA Astrophysics Data System (ADS)

    Sparkman, Kevin; LaVeigne, Joe; McHugh, Steve; Lannon, John; Goodwin, Scott

    2014-05-01

To meet the needs of high fidelity infrared sensors, under the Ultra High Temperature (UHT) development program, Santa Barbara Infrared Inc. (SBIR) has developed new infrared emitter materials capable of achieving extremely high temperatures. The current state-of-the-art arrays, based on the MIRAGE-XL generation of scene projectors, are capable of producing imagery with mid-wave infrared (MWIR) apparent temperatures up to 700 K with response times of 5 ms. The Test Resource Management Center (TRMC) Test and Evaluation/Science and Technology (T&E/S&T) Program, through the U.S. Army Program Executive Office for Simulation, Training and Instrumentation (PEO STRI), has contracted with SBIR and its partners to develop a new resistive array based on these new materials, using a high-current Read-In Integrated Circuit (RIIC) capable of achieving higher temperatures as well as faster frame rates. The status of that development will be detailed within this paper, including performance data from prototype pixels.

  1. Computational Models of Auditory Scene Analysis: A Review

    PubMed Central

    Szabó, Beáta T.; Denham, Susan L.; Winkler, István

    2016-01-01

Auditory scene analysis (ASA) refers to the process(es) of parsing the complex acoustic input into auditory perceptual objects representing either physical sources or temporal sound patterns, such as melodies, which contributed to the sound waves reaching the ears. A number of new computational models accounting for some of the perceptual phenomena of ASA have been published recently. Here we provide a theoretically motivated review of these computational models, aiming to relate their guiding principles to the central issues of the theoretical framework of ASA. Specifically, we ask how they achieve the grouping and separation of sound elements and whether they implement some form of competition between alternative interpretations of the sound input. We consider the extent to which they include predictive processes, as important current theories suggest that perception is inherently predictive, and also how they have been evaluated. We conclude that current computational models of ASA are fragmentary in the sense that rather than providing general competing interpretations of ASA, they focus on assessing the utility of specific processes (or algorithms) for finding the causes of the complex acoustic signal. This leaves open the possibility for integrating complementary aspects of the models into a more comprehensive theory of ASA. PMID:27895552

  2. Mobile infrared scene projection for aviation applications: issues and experiences

    NASA Astrophysics Data System (ADS)

    Zabel, Kenneth W.; Stumpf, Richard; Casey, Mark A.; Martin, Larry

    2002-07-01

    The U.S. Army Aviation Technical Test Center (ATTC) provides developmental test support to the Army's aviation community. An increasing dependence on modeling and simulation activities has been required to obtain more data as funding decreases for traditional flight-testing. The Mobile Infrared Scene Projector (MIRSP) system, maintained and operated by ATTC, is being used to gather initial data to measure the progress of developmental Forward Looking IR (FLIR) system activities. The Army continues to upgrade and add new features and algorithms to their FLIR sensors. ATTC's history with MIRSP shows that it can benefit FLIR system development engineers by providing immediate feedback on algorithm changes. ATTC is also heavily involved with testing pilotage FLIR sensors that typically are less algorithm intensive. The more subjective nature of the pilotage sensor performance specifications requires a unique test approach when using IRSP technologies. This paper will highlight areas where IRSP capabilities have benefited the aviation community to date, describe lessons that ATTC has learned using a mobile system, and outline the areas being planned for upgrades and future support efforts to include pilotage sensors.

  3. Development of a high-definition IR LED scene projector

    NASA Astrophysics Data System (ADS)

    Norton, Dennis T.; LaVeigne, Joe; Franks, Greg; McHugh, Steve; Vengel, Tony; Oleson, Jim; MacDougal, Michael; Westerfeld, David

    2016-05-01

    Next-generation Infrared Focal Plane Arrays (IRFPAs) are demonstrating ever-increasing frame rates, dynamic range, and format size, while moving to smaller-pitch arrays. These improvements in IRFPA performance and array format have challenged the IRFPA test community to accurately and reliably test them in a Hardware-In-the-Loop environment utilizing Infrared Scene Projector (IRSP) systems. The rapidly evolving IR seeker and sensor technology has, in some cases, surpassed the capabilities of existing IRSP technology. To meet the demands of future IRFPA testing, Santa Barbara Infrared Inc. is developing an Infrared Light Emitting Diode IRSP system. Design goals of the system include a peak radiance >2.0 W/cm²/sr within the 3.0-5.0 μm waveband, maximum frame rates >240 Hz, and >4 million pixels within a form factor supporting pixel pitches ≤32 μm. This paper provides an overview of our current phase of development, system design considerations, and future development work.

  4. Effects of mild cognitive impairment on emotional scene memory.

    PubMed

    Waring, J D; Dimsdale-Zucker, H R; Flannery, S; Budson, A E; Kensinger, E A

    2017-02-01

    Young and older adults experience benefits in attention and memory for emotional compared to neutral information, but this memory benefit is greatly diminished in Alzheimer's disease (AD). Little is known about whether this impairment arises early or late in the time course between healthy aging and AD. This study compared memory for positive, negative, and neutral items with neutral backgrounds between patients with mild cognitive impairment (MCI) and healthy older adults. We also used a divided attention condition in older adults as a possible model for the deficits observed in MCI patients. Results showed a similar pattern of selective memory for emotional items while forgetting their backgrounds in older adults and MCI patients, but MCI patients had poorer memory overall. Dividing attention during encoding disproportionately reduced memory for backgrounds (versus items) relative to a full attention condition. Participants performing in the lower half on the divided attention task qualitatively and quantitatively mirrored the results in MCI patients. Exploratory analyses comparing lower- and higher-performing MCI patients showed that only higher-performing MCI patients had the characteristic scene memory pattern observed in healthy older adults. Together, these results suggest that the effects of emotion on memory are relatively well preserved for patients with MCI, although emotional memory patterns may start to be altered once memory deficits become more pronounced.

  5. Automatic detection and recognition of signs from natural scenes.

    PubMed

    Chen, Xilin; Yang, Jie; Zhang, Jing; Waibel, Alex

    2004-01-01

    In this paper, we present an approach to automatic detection and recognition of signs from natural scenes, and its application to a sign translation task. The proposed approach embeds multiresolution and multiscale edge detection, adaptive searching, color analysis, and affine rectification in a hierarchical framework for sign detection, with different emphases at each phase to handle the text in different sizes, orientations, color distributions and backgrounds. We use affine rectification to recover deformation of the text regions caused by an inappropriate camera view angle. The procedure can significantly improve text detection rate and optical character recognition (OCR) accuracy. Instead of using binary information for OCR, we extract features from an intensity image directly. We propose a local intensity normalization method to effectively handle lighting variations, followed by a Gabor transform to obtain local features, and finally a linear discriminant analysis (LDA) method for feature selection. We have applied the approach in developing a Chinese sign translation system, which can automatically detect and recognize Chinese signs as input from a camera, and translate the recognized text into English.
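
    The feature-extraction stage described above (local intensity normalization followed by Gabor features) can be sketched in a few lines of NumPy. The block size, filter parameters, and pooling into a per-orientation mean are our own illustrative choices, not the paper's settings, and the LDA step is omitted.

```python
import numpy as np

def local_intensity_normalize(img, block=16, eps=1e-6):
    """Zero-mean, unit-variance normalization over local blocks: a simple
    stand-in for the paper's local intensity normalization."""
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = img[y:y + block, x:x + block].astype(float)
            out[y:y + block, x:x + block] = (patch - patch.mean()) / (patch.std() + eps)
    return out

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, wavelength=6.0):
    """Real part of a Gabor filter at orientation theta."""
    half = ksize // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    yr = -xs * np.sin(theta) + ys * np.cos(theta)
    gauss = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return gauss * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(img, n_orientations=4):
    """Mean absolute filter response per orientation: a tiny feature vector."""
    feats = []
    for k in range(n_orientations):
        kern = gabor_kernel(theta=k * np.pi / n_orientations)
        kh, kw = kern.shape
        # 'valid' correlation via sliding windows (avoids a SciPy dependency)
        windows = np.lib.stride_tricks.sliding_window_view(img, (kh, kw))
        resp = np.einsum('ijkl,kl->ij', windows, kern)
        feats.append(np.abs(resp).mean())
    return np.array(feats)
```

A vertically striped patch responds most strongly to the matching orientation, which is the property the recognizer's feature selection builds on.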

  6. An intercomparison of artificial intelligence approaches for polar scene identification

    NASA Astrophysics Data System (ADS)

    Tovinkere, V. R.; Penaloza, M.; Logar, A.; Lee, J.; Weger, R. C.; Berendes, T. A.; Welch, R. M.

    1993-03-01

    Six advanced very high resolution radiometer local area coverage arctic scenes are classified into 10 classes. These include water, solid sea ice, broken sea ice, snow-covered mountains, land, stratus over ice, stratus over water, cirrus over ice, cumulus over water, and multilayer cloudiness. Eight spectral and textural features are computed. The textural features are based upon the gray level difference vector method. Six different artificial intelligence classifiers are examined: (1) the feed forward back propagation neural network; (2) the probabilistic neural network; (3) the hybrid back propagation neural network; (4) the "don't care" perceptron network; (5) the "don't care" back propagation neural network; and (6) a fuzzy logic-based expert system. Accuracies in excess of 95% are obtained for all but the hybrid neural network. The "don't care" back propagation neural network produces the highest accuracies and also has low CPU requirements. Thin fog/stratus over ice is consistently the class with the lowest accuracy, often misclassified as broken sea ice. Water, land, cirrus over ice, and snow-covered mountains are all classified with high accuracy (≥98%). The high accuracy achieved in the present study can be traced to (1) accurate classifiers; (2) an excellent choice for the feature vector; and (3) accurate labeling. A sophisticated new interactive visual image classification system is used for the labeling.
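
    The gray level difference vector underlying the textural features can be sketched as follows. The displacement, number of gray levels, and the particular four features (mean, contrast, entropy, angular second moment) are common GLDV choices assumed here for illustration, not necessarily the study's exact configuration.

```python
import numpy as np

def gldv(img, dx=1, dy=0, levels=16):
    """Gray level difference vector: normalized histogram of |I(x, y) - I(x+dx, y+dy)|.
    Assumes img is already quantized to integer values in [0, levels)."""
    h, w = img.shape
    a = img[0:h - dy, 0:w - dx]
    b = img[dy:h, dx:w]
    diff = np.abs(a.astype(int) - b.astype(int))
    hist = np.bincount(diff.ravel(), minlength=levels)[:levels].astype(float)
    return hist / hist.sum()

def gldv_features(p):
    """Four common GLDV texture features from the difference histogram p."""
    d = np.arange(len(p))
    mean = float((d * p).sum())            # average gray-level difference
    contrast = float((d ** 2 * p).sum())   # second moment of differences
    nz = p[p > 0]
    entropy = float(-(nz * np.log2(nz)).sum())
    asm = float((p ** 2).sum())            # angular second moment (uniformity)
    return mean, contrast, entropy, asm
```

A perfectly flat region puts all mass in the zero-difference bin (entropy 0, ASM 1), while noisy texture spreads the histogram out, which is what makes these features discriminative for cloud and surface classes.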

  7. An intercomparison of artificial intelligence approaches for polar scene identification

    SciTech Connect

    Tovinkere, V.R.; Penaloza, M.; Logar, A.; Lee, J.; Weger, R.C.; Berendes, T.A.; Welch, R.M.

    1993-03-20

    Six advanced very high resolution radiometer local area coverage arctic scenes are classified into 10 classes. These include water, solid sea ice, broken sea ice, snow-covered mountains, land, stratus over ice, stratus over water, cirrus over ice, cumulus over water, and multilayer cloudiness. Eight spectral and textural features are computed. The textural features are based upon the gray level difference vector method. Six different artificial intelligence classifiers are examined: (1) the feed forward back propagation neural network; (2) the probabilistic neural network; (3) the hybrid back propagation neural network; (4) the "don't care" perceptron network; (5) the "don't care" back propagation neural network; and (6) a fuzzy logic-based expert system. Accuracies in excess of 95% are obtained for all but the hybrid neural network. The "don't care" back propagation neural network produces the highest accuracies and also has low CPU requirements. Thin fog/stratus over ice is consistently the class with the lowest accuracy, often misclassified as broken sea ice. Water, land, cirrus over ice, and snow-covered mountains are all classified with high accuracy (≥98%). The high accuracy achieved in the present study can be traced to (1) accurate classifiers; (2) an excellent choice for the feature vector; and (3) accurate labeling. A sophisticated new interactive visual image classification system is used for the labeling. 33 refs., 8 figs., 7 tabs.

  8. A methodology for analyzing an acoustic scene in sensor arrays

    NASA Astrophysics Data System (ADS)

    Man, Hong; Hohil, Myron E.; Desai, Sachi

    2007-10-01

    Presented here is a novel clustering method for Hidden Markov Models (HMMs) and its application in acoustic scene analysis. In this method, HMMs are clustered based on a similarity measure for stochastic models defined as the generalized probability product kernel (GPPK), which can be efficiently evaluated with a fast algorithm introduced by Chen and Man (2005) [1]. Acoustic signals from various sources are partitioned into small frames. Frequency features are extracted from each frame to form observation vectors. These frames are further grouped into segments, and an HMM is trained from each such segment. An unknown segment is categorized with a known event if its HMM has the closest similarity to the HMM from the corresponding labeled segment. Experiments are conducted on an underwater acoustic dataset from the Stevens Maritime Security Laboratory. The dataset contains a swimmer signature, a noise signature from the Hudson River, and a test sequence with a swimmer in the Hudson River. Experimental results show that the proposed method can successfully associate the test sequence with the swimmer signature with very high confidence, despite their different time behaviors.
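
    The probability product kernel at the heart of the similarity measure is easiest to see on plain densities. The sketch below evaluates it numerically for two 1-D Gaussians; the full GPPK for HMMs additionally sums over hidden state sequences, which is what the fast evaluation algorithm addresses. With ρ = 0.5 this is the Bhattacharyya kernel.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def probability_product_kernel(mu1, s1, mu2, s2, rho=0.5):
    """K(p, q) = integral of p(x)^rho * q(x)^rho dx, by the trapezoid rule.
    rho = 0.5 gives the Bhattacharyya kernel (K = 1 for identical densities);
    rho = 1 gives the expected-likelihood kernel."""
    lo = min(mu1 - 8 * s1, mu2 - 8 * s2)
    hi = max(mu1 + 8 * s1, mu2 + 8 * s2)
    x = np.linspace(lo, hi, 20001)
    f = gaussian_pdf(x, mu1, s1) ** rho * gaussian_pdf(x, mu2, s2) ** rho
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))
```

The kernel equals 1 for coinciding densities and decays smoothly as the sources separate, which is what makes it usable as a model-to-model similarity for clustering.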

  9. Napping and the selective consolidation of negative aspects of scenes.

    PubMed

    Payne, Jessica D; Kensinger, Elizabeth A; Wamsley, Erin J; Spreng, R Nathan; Alger, Sara E; Gibler, Kyle; Schacter, Daniel L; Stickgold, Robert

    2015-04-01

    After information is encoded into memory, it undergoes an offline period of consolidation that occurs optimally during sleep. The consolidation process not only solidifies memories, but also selectively preserves aspects of experience that are emotionally salient and relevant for future use. Here, we provide evidence that an afternoon nap is sufficient to trigger preferential memory for emotional information contained in complex scenes. Selective memory for negative emotional information was enhanced after a nap compared with wakefulness in 2 control conditions designed to carefully address interference and time-of-day confounds. Although prior evidence has connected negative emotional memory formation to REM sleep physiology, we found that non-REM delta activity and the amount of slow wave sleep (SWS) in the nap were robustly related to the selective consolidation of negative information. These findings suggest that the mechanisms underlying memory consolidation benefits associated with napping and nighttime sleep are not always the same. Finally, we provide preliminary evidence that the magnitude of the emotional memory benefit conferred by sleep is equivalent following a nap and a full night of sleep, suggesting that selective emotional remembering can be economically achieved by taking a nap.

  10. Ultrafast scene detection and recognition with limited visual information.

    PubMed

    Hagmann, Carl Erick; Potter, Mary C

    2016-01-01

    Humans can detect target color pictures of scenes depicting concepts like picnic or harbor in sequences of six or twelve pictures presented as briefly as 13 ms, even when the target is named after the sequence (Potter, Wyble, Hagmann, & McCourt, 2014). Such rapid detection suggests that feedforward processing alone enabled detection without recurrent cortical feedback. There is debate about whether coarse, global, low spatial frequencies (LSFs) provide predictive information to high cortical levels through the rapid magnocellular (M) projection of the visual path, enabling top-down prediction of possible object identities. To test the "Fast M" hypothesis, we compared detection of a named target across five stimulus conditions: unaltered color, blurred color, grayscale, thresholded monochrome, and LSF pictures. The pictures were presented for 13-80 ms in six-picture rapid serial visual presentation (RSVP) sequences. Blurred, monochrome, and LSF pictures were detected less accurately than normal color or grayscale pictures. When the target was named before the sequence, all picture types except LSF resulted in above-chance detection at all durations. Crucially, when the name was given only after the sequence, performance dropped and the monochrome and LSF pictures (but not the blurred pictures) were at or near chance. Thus, without advance information, monochrome and LSF pictures were rarely understood. The results offer only limited support for the Fast M hypothesis, suggesting instead that feedforward processing is able to activate conceptual representations without complementary reentrant processing.
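
    The LSF stimulus condition is conventionally produced by low-pass filtering in the Fourier domain. A minimal sketch (the cutoff value here is an arbitrary illustration, not the study's parameter):

```python
import numpy as np

def low_spatial_frequency(img, cutoff_cycles=8):
    """Zero out all Fourier components above a radial cutoff (cycles per image),
    leaving only the coarse, low spatial frequency content of the picture."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    f[radius > cutoff_cycles] = 0.0
    return np.fft.ifft2(np.fft.ifftshift(f)).real
```

A constant image passes through unchanged (pure DC), while a pixel-level checkerboard, whose energy sits entirely at the highest frequencies, is removed almost completely.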

  11. DNA methylation: the future of crime scene investigation?

    PubMed

    Gršković, Branka; Zrnec, Dario; Vicković, Sanja; Popović, Maja; Mršić, Gordan

    2013-07-01

    Proper detection and subsequent analysis of biological evidence is crucial for crime scene reconstruction. The number of different criminal acts is increasing rapidly, and forensic geneticists are constantly trying to find ways to solve them. One of the essential lines of defense in this fight relies on DNA methylation. In this review, the role of DNA methylation in body fluid identification and other applications of DNA methylation are discussed. Among these applications, the most important are age determination of the donor of biological evidence, analysis of parent-of-origin-specific DNA methylation markers at imprinted loci for parentage testing and personal identification, differentiation between monozygotic twins based on their different DNA methylation patterns, artificial DNA detection, and analysis of DNA methylation patterns in the promoter regions of circadian clock genes. Nevertheless, there are still many open chapters in DNA methylation research that need to be closed before its final implementation in routine forensic casework.

  12. Radiative transfer solution for rugged and heterogeneous scene observations.

    PubMed

    Miesch, C; Briottet, X; Kerr, Y H; Cabot, F

    2000-12-20

    A physical algorithm is developed to solve the radiative transfer problem in the solar reflective spectral domain. This new code, Advanced Modeling of the Atmospheric Radiative Transfer for Inhomogeneous Surfaces (AMARTIS), takes into account the relief, the spatial heterogeneity, and the bidirectional reflectances of ground surfaces. The resolution method consists of first identifying the irradiance and radiance components at ground and sensor levels and then modeling these components separately, the rationale being to find the optimal trade-off between accuracy and computation time. The validity of the various assumptions introduced in the AMARTIS model is checked through comparisons with a reference Monte Carlo radiative transfer code for various ground scenes: flat ground with two surface types, a linear sand dune landscape, and an extreme mountainous configuration. The results show a divergence of less than 2% between the AMARTIS code and the Monte Carlo reference code for the total signals received at satellite level. In particular, it is demonstrated that the environmental and topographic effects are properly assessed by the AMARTIS model even for situations in which these effects become dominant.

  13. Task relevance predicts gaze in videos of real moving scenes.

    PubMed

    Howard, Christina J; Gilchrist, Iain D; Troscianko, Tom; Behera, Ardhendu; Hogg, David C

    2011-09-01

    Low-level stimulus salience and task relevance together determine the human fixation priority assigned to scene locations (Fecteau and Munoz in Trends Cogn Sci 10(8):382-390, 2006). However, surprisingly little is known about the contribution of task relevance to eye movements during real-world visual search where stimuli are in constant motion and where the 'target' for the visual search is abstract and semantic in nature. Here, we investigate this issue when participants continuously search an array of four closed-circuit television (CCTV) screens for suspicious events. We recorded eye movements whilst participants watched real CCTV footage and moved a joystick to continuously indicate perceived suspiciousness. We find that when multiple areas of a display compete for attention, gaze is allocated according to relative levels of reported suspiciousness. Furthermore, this measure of task relevance accounted for twice the amount of variance in gaze likelihood as the amount of low-level visual changes over time in the video stimuli.

  14. Online anomaly detection in crowd scenes via structure analysis.

    PubMed

    Yuan, Yuan; Fang, Jianwu; Wang, Qi

    2015-03-01

    Abnormal behavior detection in crowd scenes is a continuing challenge in the field of computer vision. To tackle this problem, this paper starts from a novel structural modeling of crowd behavior. We first propose an informative structural context descriptor (SCD) for describing a crowd individual, which introduces the potential energy function of interparticle forces from solid-state physics to intuitively conduct visual contextual cueing. To compute the crowd SCD variation effectively, we then design a robust multi-object tracker to associate the targets in different frames, which employs the incremental analytical ability of the 3-D discrete cosine transform (DCT). By online spatial-temporal analysis of the SCD variation of the crowd, the abnormality is finally localized. Our contribution mainly lies in three aspects: 1) the new exploration of abnormality detection via structure modeling, where the motion difference between individuals is computed by a novel selective histogram of optical flow that enables the proposed method to deal with more kinds of anomalies; 2) the SCD description, which can effectively represent the relationship among the individuals; and 3) the 3-D DCT multi-object tracker, which can robustly associate a limited number of (instead of all) targets, making tracking analysis feasible in high-density crowd situations. Experimental results on several publicly available crowd video datasets verify the effectiveness of the proposed method.
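
    One plausible reading of a "selective histogram of optical flow" is an orientation histogram restricted to flow vectors whose magnitude clears a threshold. The sketch below implements that reading; the selection rule, bin count, and magnitude weighting are our assumptions, not the paper's exact definition.

```python
import numpy as np

def selective_flow_histogram(u, v, n_bins=8, mag_thresh=0.5):
    """Magnitude-weighted orientation histogram of an optical flow field (u, v),
    restricted to vectors whose magnitude exceeds mag_thresh."""
    mag = np.hypot(u, v)
    keep = mag > mag_thresh
    ang = np.arctan2(v[keep], u[keep]) % (2.0 * np.pi)
    bins = np.minimum((ang / (2.0 * np.pi) * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins, weights=mag[keep], minlength=n_bins)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

Suppressing sub-threshold vectors keeps sensor noise and jitter out of the descriptor, so differences between individuals' histograms reflect genuine motion differences.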

  15. Ocfentanil overdose fatality in the recreational drug scene.

    PubMed

    Coopman, Vera; Cordonnier, Jan; De Leeuw, Marc; Cirimele, Vincent

    2016-09-01

    This paper describes the first reported death involving ocfentanil, a potent synthetic opioid and structure analogue of fentanyl abused as a new psychoactive substance in the recreational drug scene. A 17-year-old man with a history of illegal substance abuse was found dead in his home after snorting a brown powder purchased over the internet with bitcoins. Acetaminophen, caffeine and ocfentanil were identified in the powder by gas chromatography mass spectrometry and reversed-phase liquid chromatography with diode array detector. Quantitation of ocfentanil in biological samples was performed using a target analysis based on liquid-liquid extraction and ultra performance liquid chromatography tandem mass spectrometry. In the femoral blood taken at the external body examination, the following concentrations were measured: ocfentanil 15.3 μg/L, acetaminophen 45 mg/L and caffeine 0.23 mg/L. Tissues sampled at autopsy were analyzed to study the distribution of ocfentanil. The comprehensive systematic toxicological analysis on the post-mortem blood and tissue samples was negative for other compounds. Based on circumstantial evidence, autopsy findings and the results of the toxicological analysis, the medical examiner concluded that the cause of death was an acute intoxication with ocfentanil. The manner of death was assumed to be accidental after snorting the powder.

  16. Smart unattended sensor networks with scene understanding capabilities

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2006-05-01

    Unattended sensor systems are new technologies intended to provide enhanced situation awareness to military and law enforcement agencies. A network of such sensors cannot be very effective in field conditions if it can only transmit visual information to human operators or alert them to motion. In real field conditions, events may happen in many nodes of a network simultaneously, but the number of control personnel is always limited, and the attention of human operators may be drawn to particular network nodes while a more dangerous threat goes unnoticed at the same time in other nodes. Sensor networks would be more effective if equipped with a system similar to human vision in its ability to understand visual information. For this, human vision uses a rough but wide peripheral system that tracks motions and regions of interest, a narrow but precise foveal system that analyzes and recognizes objects in the center of the selected region of interest, and visual intelligence that provides scene and object contexts and resolves ambiguity and uncertainty in the visual information. Biologically inspired Network-Symbolic models convert image information into an 'understandable' Network-Symbolic format, which is similar to relational knowledge models. The equivalent of the interaction between peripheral and foveal systems in the network-symbolic system is achieved via interaction between the Visual and Object Buffers and the top-level knowledge system.

  17. Character segmentation and thresholding in low-contrast scene images

    NASA Astrophysics Data System (ADS)

    Winger, Lowell L.; Jernigan, M. Ed; Robinson, John A.

    1996-03-01

    We are developing a portable text-to-speech system for the vision impaired. The input image is acquired with a lightweight CCD camera that may be poorly focused and aimed, and perhaps taken under inadequate and uneven illumination. We therefore require efficient and effective thresholding and segmentation methods which are robust with respect to character contrast, font, size, and format. In this paper, we present a fast thresholding scheme which combines a local variance measure with a logical stroke-width method. An efficient post-thresholding segmentation scheme utilizing Fisher's linear discriminant to distinguish noise from character components functions as an effective pre-processing step for the application of commercial segmentation and character recognition methods. The performance of this fast new method compared favorably with other methods for the extraction of characters from omnifont scene images under uncontrolled illumination. We demonstrate the suitability of this method for use in an automated portable reader through a software implementation running on a laptop 486 computer in our prototype device.
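
    The local-variance thresholding idea can be sketched as follows: binarize only blocks whose intensity variance suggests the presence of character strokes, and leave flat background blocks untouched. The block size and variance floor below are illustrative values, not those of the paper, and the stroke-width logic is omitted.

```python
import numpy as np

def local_variance_map(img, block=8):
    """Per-block intensity variance; high-variance blocks likely contain strokes."""
    h, w = img.shape
    var = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            var[by, bx] = img[by * block:(by + 1) * block,
                              bx * block:(bx + 1) * block].var()
    return var

def variance_threshold(img, block=8, var_floor=10.0):
    """Binarize only blocks whose local variance clears a floor; flat
    (background) blocks stay white. Each active block thresholds at its mean."""
    out = np.full(img.shape, 255, dtype=np.uint8)
    var = local_variance_map(img, block)
    for by in range(var.shape[0]):
        for bx in range(var.shape[1]):
            if var[by, bx] > var_floor:
                sl = (slice(by * block, (by + 1) * block),
                      slice(bx * block, (bx + 1) * block))
                patch = img[sl]
                out[sl] = np.where(patch < patch.mean(), 0, 255)
    return out
```

Because flat blocks are never thresholded, uneven illumination across the page does not fill the background with spurious black, which is the failure mode of a single global threshold.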

  18. Note on new KLT relations

    NASA Astrophysics Data System (ADS)

    Feng, Bo; He, Song; Huang, Rijun; Jia, Yin

    2010-10-01

    In this short note, we present two results about the KLT relations discussed in several recent papers. Our first result is a re-derivation of the Mason-Skinner MHV amplitude by applying the S_{n-3}-permutation-symmetric KLT relations directly to the MHV amplitude. Our second result is a proof of the equivalence of the newly discovered S_{n-2}-permutation-symmetric KLT relations and the well-known S_{n-3}-permutation-symmetric KLT relations. Although both formulas have been shown to be correct by BCFW recursion relations, our result is the first direct check using the regularized definition of the new formula.
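
    For orientation, the well-known S_{n-3}-symmetric relation referred to above can be written schematically as follows; sign and normalization conventions vary between papers, so take this as a sketch rather than the note's exact formula.

```latex
% Gravity amplitude from two copies of color-ordered Yang-Mills amplitudes
M_n(1,\dots,n) \;=\; (-1)^{n+3} \sum_{\sigma,\tau \in S_{n-3}}
  A_n\bigl(1,\sigma(2,\dots,n-2),n-1,n\bigr)\,
  \mathcal{S}[\sigma|\tau]\,
  \tilde{A}_n\bigl(1,\tau(2,\dots,n-2),n,n-1\bigr),
% with the momentum kernel
\mathcal{S}[i_1,\dots,i_{n-3}\,|\,j_1,\dots,j_{n-3}] \;=\;
  \prod_{t=1}^{n-3} \Bigl( s_{1 i_t} \;+\; \sum_{q>t} \theta(i_t,i_q)\, s_{i_t i_q} \Bigr),
```

where s_{ij} = (k_i + k_j)^2 and θ(i_t, i_q) equals 1 when the relative order of i_t and i_q differs between the two permutations, and 0 otherwise.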

  19. The effects of scene content parameters, compression, and frame rate on the performance of analytics systems

    NASA Astrophysics Data System (ADS)

    Tsifouti, A.; Triantaphillidou, S.; Larabi, M. C.; Doré, G.; Bilissi, E.; Psarrou, A.

    2015-01-01

    In this investigation we study the effects of compression and frame-rate reduction on the performance of four video analytics (VA) systems utilizing a low-complexity scenario, the Sterile Zone (SZ). Additionally, we identify the most influential scene parameters affecting the performance of these systems. The SZ scenario is a scene consisting of a fence, not to be trespassed, and an area with grass. The VA system needs to alarm when there is an intruder (attack) entering the scene. The work includes testing of the systems with uncompressed and compressed footage (using H.264/MPEG-4 AVC at 25 and 5 frames per second), consisting of quantified scene parameters. The scene parameters include descriptions of scene contrast, camera-to-subject distance, and attack portrayal. Additional footage, including only distractions (no attacks), is also investigated. Results have shown that every system performed differently at each compression/frame-rate level, whilst overall, compression has not adversely affected the performance of the systems. Frame-rate reduction has decreased performance, and scene parameters have influenced the behavior of the systems differently. Most false alarms were triggered by a distraction clip including abrupt shadows through the fence. Findings could contribute to the improvement of VA systems.

  20. Large patch convolutional neural networks for the scene classification of high spatial resolution imagery

    NASA Astrophysics Data System (ADS)

    Zhong, Yanfei; Fei, Feng; Zhang, Liangpei

    2016-04-01

    The increase of the spatial resolution of remote-sensing sensors helps to capture the abundant details related to the semantics of surface objects. However, it is difficult for the popular object-oriented classification approaches to acquire higher-level semantics from high spatial resolution remote-sensing (HSR-RS) images, a problem often referred to as the "semantic gap." Instead of designing sophisticated operators, convolutional neural networks (CNNs), a typical deep learning method, can automatically discover intrinsic feature descriptors from a large number of input images to bridge the semantic gap. Because the available HSR-RS scene datasets are far smaller than natural scene datasets, there have been few reports of CNN approaches for HSR-RS image scene classification. We propose a practical CNN architecture for HSR-RS scene classification, named the large patch convolutional neural network (LPCNN). Large patch sampling is used to generate hundreds of possible scene patches for the feature learning, and a global average pooling layer is used to replace the fully connected network as the classifier, which can greatly reduce the total parameters. The experiments confirm that the proposed LPCNN can learn effective local features to form an effective representation for different land-use scenes, and can achieve a performance comparable to the state-of-the-art on public HSR-RS scene datasets.
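
    The parameter saving from replacing the fully connected classifier with global average pooling (GAP) is easy to quantify: GAP collapses a stack of C feature maps to a C-vector, so the classifier needs C × n_classes weights instead of C × H × W × n_classes. The sketch below uses illustrative sizes, not the LPCNN's actual ones.

```python
import numpy as np

def global_average_pooling(feature_maps):
    """Collapse each (H, W) feature map to its spatial mean: (C, H, W) -> (C,)."""
    return feature_maps.mean(axis=(1, 2))

# Illustrative classifier-head sizes (hypothetical numbers, not the LPCNN's):
C, H, W, n_classes = 256, 8, 8, 21
fc_params = C * H * W * n_classes   # dense layer on flattened activations (no bias)
gap_params = C * n_classes          # linear map on the GAP vector
print(fc_params // gap_params)      # -> 64: the GAP head is H*W times smaller
```

Beyond the parameter count, GAP also makes the head independent of the input patch size, which suits the large-patch sampling strategy.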