Science.gov

Sample records for actual scene note

  1. Exocentric direction judgements in computer-generated displays and actual scenes

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Smith, Stephen; Mcgreevy, Michael W.; Grunwald, Arthur J.

    1989-01-01

    One of the most remarkable perceptual properties of common experience is that the perceived shapes of known objects are constant despite movements about them which transform their projections on the retina. This perceptual ability is one aspect of shape constancy (Thouless, 1931; Metzger, 1953; Borresen and Lichte, 1962). It requires that the viewer be able to sense and discount his or her relative position and orientation with respect to a viewed object. This discounting of relative position may be derived directly from the ranging information provided from stereopsis, from motion parallax, from vestibularly sensed rotation and translation, or from corollary information associated with voluntary movement. It is argued that: (1) errors in exocentric judgements of the azimuth of a target generated on an electronic perspective display are not viewpoint-independent, but are influenced by the specific geometry of their perspective projection; (2) elimination of binocular conflict by replacing electronic displays with actual scenes eliminates a previously reported equidistance tendency in azimuth error, but the viewpoint dependence remains; (3) the pattern of exocentrically judged azimuth error in real scenes viewed with a viewing direction depressed 22 deg and rotated + or - 22 deg with respect to a reference direction could not be explained by overestimation of the depression angle, i.e., a slant overestimation.

  2. Considerations for the Composition of Visual Scene Displays: Potential Contributions of Information from Visual and Cognitive Sciences (Forum Note)

    PubMed Central

    Wilkinson, Krista M.; Light, Janice; Drager, Kathryn

    2013-01-01

Aided augmentative and alternative communication (AAC) interventions have been demonstrated to facilitate a variety of communication outcomes in persons with intellectual disabilities. Most aided AAC systems rely on a visual modality. When the medium for communication is visual, it seems likely that the effectiveness of intervention depends in part on the effectiveness and efficiency with which the information presented in the display can be perceived, identified, and extracted by communicators and their partners. Understanding of visual-cognitive processing – that is, how a user attends, perceives, and makes sense of the visual information on the display – therefore seems critical to designing effective aided AAC interventions. In this Forum Note, we discuss characteristics of one particular type of aided AAC display, that is, Visual Scene Displays (VSDs), as they may relate to user visual and cognitive processing. We consider three specific ways in which bodies of knowledge drawn from the visual and cognitive sciences may be relevant to the composition of VSDs, with the understanding that direct research with children with complex communication needs is necessary to verify or refute our speculations. PMID:22946989

  3. Noted

    ERIC Educational Resources Information Center

    Nunberg, Geoffrey

    2013-01-01

    Considering how much attention people lavish on the technologies of writing--scroll, codex, print, screen--it's striking how little they pay to the technologies for digesting and regurgitating it. One way or another, there's no sector of the modern world that is not saturated with note-taking--the bureaucracy, the liberal professions, the…

  4. LANDSAT Scene-to-scene Registration Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Anderson, J. E.

    1984-01-01

Initial results obtained from the registration of LANDSAT-4 data to LANDSAT-2 MSS data are documented and compared with results obtained from a LANDSAT-2 MSS-to-LANDSAT-2 scene-to-scene registration (using the same LANDSAT-2 MSS data as the base data set in both procedures). RMS errors calculated on the control points used in the establishment of scene-to-scene mapping equations are compared to errors computed from independently chosen verification points. Models developed to estimate actual scene-to-scene registration accuracy based on the use of electrostatic plots are also presented. Analysis of results indicates a statistically significant difference in the RMS errors for the element contribution. Scan line errors were not significantly different. It appears that a modification to the LANDSAT-4 MSS scan mirror coefficients is required to correct the situation.
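The RMS figures described above can be sketched numerically. The control-point coordinates below are invented for illustration (the report's actual data is not reproduced here); the code fits affine scene-to-scene mapping equations by least squares and reports separate element (sample) and scan-line RMS errors:

```python
import numpy as np

# Hypothetical control points: (x, y) locations in the base LANDSAT-2
# scene and their measured locations in the scene being registered.
# Values are illustrative only, not from the study.
base = np.array([[120.0, 340.0], [510.0, 95.0], [880.0, 610.0], [300.0, 720.0]])
warped = np.array([[122.1, 341.5], [512.4, 96.2], [881.9, 612.8], [301.6, 722.9]])

# Fit a 2-D affine mapping (a common form of scene-to-scene mapping
# equations) by least squares: warped ~ [x, y, 1] @ coeffs.
design = np.hstack([base, np.ones((len(base), 1))])
coeffs, *_ = np.linalg.lstsq(design, warped, rcond=None)

# Residuals of the fit, split into element and scan-line components.
residuals = warped - design @ coeffs
rms_element = np.sqrt(np.mean(residuals[:, 0] ** 2))
rms_line = np.sqrt(np.mean(residuals[:, 1] ** 2))
print(rms_element, rms_line)
```

In practice the same RMS computation is repeated on independently chosen verification points, which is the comparison the abstract describes.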

  5. Diacria Scene

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image provides a representative view of the vast martian northern plains in the Diacria region near 52.8oN, 184.7oW. This is what the plains looked like in late northern spring in August 2004, after the seasonal winter frost had sublimed away and dust devils began to leave dark streaks on the surface. Many of the dark dust devil streaks in this image are concentrated near a low mound -- the location of a shallowly-filled and buried impact crater. The picture covers an area about 3 km (1.9 mi) wide. Sunlight illuminates the scene from the lower left.

  6. Three-dimensional imaging in crime scene investigations

    NASA Astrophysics Data System (ADS)

    Baldwin, Hayden B.

    1999-02-01

Law enforcement is responsible for investigating crimes, identifying and arresting the suspects, and presenting evidence to a judge and jury in court. In order to objectively perform these duties, police need to gather accurate information and clearly explain the crime scene and physical evidence in a court of law. Part of this information includes the documentation of the incident. Documenting an incident has always been divided into three categories: notes, sketch, and photographs. This method of recording crime scenes has been the standard for years. The major drawback, however, is that the visual documents of sketches and photographs are two-dimensional. This greatly restricts the actual visualization of the incident, requiring careful cross-referencing of the details in order to understand it.

  7. Analyzing crime scene videos

    NASA Astrophysics Data System (ADS)

    Cunningham, Cindy C.; Peloquin, Tracy D.

    1999-02-01

    Since late 1996 the Forensic Identification Services Section of the Ontario Provincial Police has been actively involved in state-of-the-art image capture and the processing of video images extracted from crime scene videos. The benefits and problems of this technology for video analysis are discussed. All analysis is being conducted on SUN Microsystems UNIX computers, networked to a digital disk recorder that is used for video capture. The primary advantage of this system over traditional frame grabber technology is reviewed. Examples from actual cases are presented and the successes and limitations of this approach are explored. Suggestions to companies implementing security technology plans for various organizations (banks, stores, restaurants, etc.) will be made. Future directions for this work and new technologies are also discussed.

  8. Hydrological AnthropoScenes

    NASA Astrophysics Data System (ADS)

    Cudennec, Christophe

    2016-04-01

The Anthropocene concept encapsulates the planetary-scale changes resulting from accelerating socio-ecological transformations, beyond the stratigraphic definition currently under debate. The emergence of multi-scale and proteiform complexity requires interdisciplinary and systems approaches. Yet, to reduce the cognitive challenge of tackling this complexity, the global Anthropocene syndrome must now be studied from various topical points of view, and grounded at regional and local levels. A systems approach should make it possible to identify AnthropoScenes, i.e. settings where a socio-ecological transformation subsystem is clearly coherent within boundaries and displays explicit relationships with neighbouring/remote scenes and within a nesting architecture. Hydrology is a key topical point of view to be explored, as it is important in many aspects of the Anthropocene, whether with water itself as a resource, hazard or transport force, or through the network, connectivity, interface, teleconnection, emergence and scaling issues it determines. We will schematically exemplify these aspects with three contrasting hydrological AnthropoScenes in Tunisia, France and Iceland, and reframe concepts of the hydrological change debate therein. Bai X., van der Leeuw S., O'Brien K., Berkhout F., Biermann F., Brondizio E., Cudennec C., Dearing J., Duraiappah A., Glaser M., Revkin A., Steffen W., Syvitski J., 2016. Plausible and desirable futures in the Anthropocene: A new research agenda. Global Environmental Change, in press, http://dx.doi.org/10.1016/j.gloenvcha.2015.09.017 Brondizio E., O'Brien K., Bai X., Biermann F., Steffen W., Berkhout F., Cudennec C., Lemos M.C., Wolfe A., Palma-Oliveira J., Chen A. C-T. Re-conceptualizing the Anthropocene: A call for collaboration. Global Environmental Change, in review. Montanari A., Young G., Savenije H., Hughes D., Wagener T., Ren L., Koutsoyiannis D., Cudennec C., Grimaldi S., Blöschl G., Sivapalan M., Beven K., Gupta H., Arheimer B., Huang Y

  9. Infant death scene investigation.

    PubMed

    Tabor, Pamela D; Ragan, Krista

    2015-01-01

The sudden unexpected death of an infant is a tragedy to the family, a concern to the community, and an indicator of national health. To accurately determine the cause and manner of the infant's death, a thorough and accurate death scene investigation by properly trained personnel is key. Funding and resources are directed based on autopsy reports, which are only as accurate as the scene investigation. The investigation should include a standardized format, body diagrams, and a photographed or videotaped scene recreation utilizing doll reenactment. Forensic nurses, with their basic nursing knowledge and additional forensic skills and abilities, are optimally suited to conduct infant death scene investigations as well as train others to properly conduct death scene investigations. Currently, 49 states have child death review teams, which are an ideal avenue for a forensic nurse to become involved in death scene investigations. PMID:25642921

  10. Animal Detection Precedes Access to Scene Category

    PubMed Central

    Crouzet, Sébastien M.; Joubert, Olivier R.; Thorpe, Simon J.; Fabre-Thorpe, Michèle

    2012-01-01

The processes underlying object recognition are fundamental for the understanding of visual perception. Humans can recognize many objects rapidly even in complex scenes, a task that still presents major challenges for computer vision systems. A common experimental demonstration of this ability is the rapid animal detection protocol, where human participants' earliest responses to report the presence/absence of animals in natural scenes are observed at 250–270 ms latencies. One of the hypotheses to account for such speed is that people would not actually recognize an animal per se, but rather base their decision on global scene statistics. These global statistics (also referred to as spatial envelope or gist) have been shown to be computationally easy to process and could thus be used as a proxy for coarse object recognition. Here, using a saccadic choice task, which allows us to investigate a previously inaccessible temporal window of visual processing, we showed that animal – but not vehicle – detection clearly precedes scene categorization. This asynchrony is further validated by a late contextual modulation of animal detection, starting simultaneously with the availability of scene category. Interestingly, the advantage for animal over scene categorization is in opposition to the results of simulations using standard computational models. Taken together, these results challenge the idea that rapid animal detection might be based on early access to global scene statistics, and rather suggest a process based on the extraction of specific local complex features that might be hardwired in the visual system. PMID:23251545

  11. The etiological significance of the primal scene in perversions.

    PubMed

    Peto, A

    1975-01-01

    The etiological significance of the actually observed primal scene in fetishism and other perversions is discussed. The impact of the primal scene on the pathology of part object relationships, self and object image, and on the development of superego structures in perversion is stressed. PMID:1129388

  12. Underwater Scene Composition

    ERIC Educational Resources Information Center

    Kim, Nanyoung

    2009-01-01

    In this article, the author describes an underwater scene composition for elementary-education majors. This project deals with watercolor with crayon or oil-pastel resist (medium); the beauty of nature represented by fish in the underwater scene (theme); texture and pattern (design elements); drawing simple forms (drawing skill); and composition…

  13. Automated synthetic scene generation

    NASA Astrophysics Data System (ADS)

    Givens, Ryan N.

    Physics-based simulations generate synthetic imagery to help organizations anticipate system performance of proposed remote sensing systems. However, manually constructing synthetic scenes which are sophisticated enough to capture the complexity of real-world sites can take days to months depending on the size of the site and desired fidelity of the scene. This research, sponsored by the Air Force Research Laboratory's Sensors Directorate, successfully developed an automated approach to fuse high-resolution RGB imagery, lidar data, and hyperspectral imagery and then extract the necessary scene components. The method greatly reduces the time and money required to generate realistic synthetic scenes and developed new approaches to improve material identification using information from all three of the input datasets.

  14. Navigating the auditory scene: an expert role for the hippocampus.

    PubMed

    Teki, Sundeep; Kumar, Sukhbinder; von Kriegstein, Katharina; Stewart, Lauren; Lyness, C Rebecca; Moore, Brian C J; Capleton, Brian; Griffiths, Timothy D

    2012-08-29

Over a typical career, piano tuners spend tens of thousands of hours exploring a specialized acoustic environment. Tuning requires accurate perception and adjustment of beats in two-note chords that serve as a navigational device to move between points in previously learned acoustic scenes. It is a two-stage process that depends on the following: first, selective listening to beats within frequency windows, and, second, the subsequent use of those beats to navigate through a complex soundscape. The neuroanatomical substrates underlying brain specialization for such fundamental organization of sound scenes are unknown. Here, we demonstrate that professional piano tuners are significantly better than controls matched for age and musical ability on a psychophysical task simulating active listening to beats within frequency windows that is based on amplitude modulation rate discrimination. Tuners show a categorical increase in gray matter volume in the right frontal operculum and right superior temporal lobe. Tuners also show a striking enhancement of gray matter volume in the anterior hippocampus, parahippocampal gyrus, and superior temporal gyrus, and an increase in white matter volume in the posterior hippocampus as a function of years of tuning experience. The relationship with gray matter volume is sensitive to years of tuning experience and starting age but not actual age or level of musicality. Our findings support a role for a core set of regions in the hippocampus and superior temporal cortex in skilled exploration of complex sound scenes in which precise sound "templates" are encoded and consolidated into memory over time in an experience-dependent manner. PMID:22933806
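The beats the abstract refers to can be computed directly: two nearly equal tones produce audible amplitude beats at the difference of their frequencies. The note frequencies below are standard equal-temperament values; the amount of mistuning is illustrative.

```python
# Beat rates a tuner listens for: two nearly equal tones produce
# audible amplitude beats at the difference of their frequencies.
f_unison_a = 440.0        # reference A4 string, Hz
f_unison_b = 437.5        # slightly flat unison string, Hz (illustrative)
unison_beat = abs(f_unison_a - f_unison_b)   # beats per second

# For intervals, tuners compare near-coincident harmonics, e.g. the
# 3rd harmonic of C4 against the 2nd harmonic of G4 in a C4-G4 fifth.
f_c4, f_g4 = 261.63, 392.00
fifth_beat = abs(3 * f_c4 - 2 * f_g4)
print(unison_beat, round(fifth_beat, 2))  # 2.5 0.89
```

Counting these slow beats against learned targets is what lets a tuner navigate between points in a previously learned acoustic scene.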

  15. Research in interactive scene analysis

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J. M.; Garvey, T. D.; Weyl, S. A.; Wolf, H. C.

    1975-01-01

    An interactive scene interpretation system (ISIS) was developed as a tool for constructing and experimenting with man-machine and automatic scene analysis methods tailored for particular image domains. A recently developed region analysis subsystem based on the paradigm of Brice and Fennema is described. Using this subsystem a series of experiments was conducted to determine good criteria for initially partitioning a scene into atomic regions and for merging these regions into a final partition of the scene along object boundaries. Semantic (problem-dependent) knowledge is essential for complete, correct partitions of complex real-world scenes. An interactive approach to semantic scene segmentation was developed and demonstrated on both landscape and indoor scenes. This approach provides a reasonable methodology for segmenting scenes that cannot be processed completely automatically, and is a promising basis for a future automatic system. A program is described that can automatically generate strategies for finding specific objects in a scene based on manually designated pictorial examples.
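The merge phase described above can be sketched with a toy criterion. This is a simplification under an assumed rule (merge regions whose mean intensities differ by less than a threshold); the actual Brice and Fennema paradigm also weighs boundary strength between adjacent regions.

```python
# Toy region-merging pass: atomic regions are lists of pixel values;
# two regions merge when their mean intensities are within a threshold.
# (Assumed criterion for illustration; real merge rules also use
# boundary strength and adjacency.)
regions = [[10, 11], [12, 11], [50, 52], [51, 49]]

def merge_pass(regions, threshold=5.0):
    pending, result = regions[:], []
    while pending:
        r = pending.pop(0)
        i = 0
        while i < len(pending):
            other = pending[i]
            if abs(sum(r) / len(r) - sum(other) / len(other)) < threshold:
                r = r + pending.pop(i)   # absorb the similar region
            else:
                i += 1
        result.append(r)
    return result

final = merge_pass(regions)
print(len(final))  # 2 regions survive: a dark one and a bright one
```

The interactive approach in the paper lets a human adjust such criteria and supply the semantic knowledge that purely intensity-based merging lacks.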

  16. Crime Scene Investigation.

    ERIC Educational Resources Information Center

    Harris, Barbara; Kohlmeier, Kris; Kiel, Robert D.

    Casting students in grades 5 through 12 in the roles of reporters, lawyers, and detectives at the scene of a crime, this interdisciplinary activity involves participants in the intrigue and drama of crime investigation. Using a hands-on, step-by-step approach, students work in teams to investigate a crime and solve a mystery. Through role-playing…

  17. Color scene analysis

    NASA Astrophysics Data System (ADS)

    Celenk, Mehmet

    1994-05-01

    This paper describes a color scene analysis method for the object surfaces appearing in the noisy and imperfect images of natural scenes. It is developed based on the spatial and spectral grouping property of the human visual system. The uniformly colored surfaces are recognized by their monomodal 3-D color distributions and extracted in the spatial domain using the lightness and chromaticity network of the Munsell system. The textured image regions are identified by their irregular histogram distributions and isolated in the image plane using the Julesz connectivity detection rules. The method is applied to various color images corrupted by noise and degraded heavily by under-sampling and low color-contrast imperfections. The method was able to detect all the uniformly colored and heavily textured object areas in these images.

  18. The primal scene and symbol formation.

    PubMed

    Niedecken, Dietmut

    2016-06-01

This article discusses the meaning of the primal scene for symbol formation by exploring its way of processing in a child's play. The author questions the notion that a sadomasochistic way of processing is the only possible one. A model of an alternative mode of processing is presented. It is suggested that both ways of processing intertwine in the "fabric of life" (D. Laub). Two clinical vignettes, one from an analytic child psychotherapy and the other from the analysis of a 30-year-old female patient, illustrate how the primal scene is being played out in the form of a terzet. The author explores whether the sadomasochistic way of processing actually precedes the "primal scene as a terzet". She discusses whether it could even be regarded as a precondition for the formation of the latter or, alternatively, whether the "combined parent-figure" gives rise to ways of processing. The question is left open. Finally, it is shown how both modes of experiencing the primal scene underlie discursive and presentative symbol formation, respectively. PMID:27437623

  19. Sinus Sabaeus Scene

    NASA Technical Reports Server (NTRS)

    2004-01-01

    25 October 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows old, light-toned, large ripples on a smoothly mantled surface in the Sinus Sabaeus region, south of Schiaparelli Basin. This image is located near 6.4oS, 341.8oW. The image covers an area about 3 km (1.9 mi) wide. Sunlight illuminates the scene from the upper left.

  20. Video Scene Recognition System

    NASA Astrophysics Data System (ADS)

    Wong, Robert Y.; Sallak, Rashid M.

    1983-03-01

Microprocessors are used to show a possible implementation of a multiprocessor system for video scene recognition operations. The system was designed in the multiple instruction stream, multiple data stream (MIMD) configuration. "Autonomous cooperation" among the working processors is supervised by a global operating system, the heart of which is the scheduler. The design of the scheduler and the overall operations of the system are discussed.

  1. Capturing, processing, and rendering real-world scenes

    NASA Astrophysics Data System (ADS)

    Nyland, Lars S.; Lastra, Anselmo A.; McAllister, David K.; Popescu, Voicu; McCue, Chris; Fuchs, Henry

    2000-12-01

While photographs vividly capture a scene from a single viewpoint, it is our goal to capture a scene in such a way that a viewer can freely move to any viewpoint, just as he or she would in an actual scene. We have built a prototype system to quickly digitize a scene using a laser rangefinder and a high-resolution digital camera that accurately captures a panorama of high-resolution range and color information. With real-world scenes, we have provided data to fuel research in many areas, including representation, registration, data fusion, polygonization, rendering, simplification, and reillumination. The real-world scene data can be used for many purposes, including immersive environments, immersive training, re-engineering and engineering verification, renovation, crime-scene and accident capture and reconstruction, archaeology and historic preservation, sports and entertainment, surveillance, remote tourism and remote sales. We will describe our acquisition system and the necessary processing to merge data from the multiple input devices and positions. We will also describe high quality rendering using the data we have collected. Issues about specific rendering accelerators and algorithms will also be presented. We will conclude by describing future uses and methods of collection for real-world scene data.

  2. Thermal infrared scene simulation

    SciTech Connect

    Warnick, J.S.; Shor, E.; Schott, J.R.

    1990-01-01

    The complexity and interplay between the thermodynamic and radiometric phenomena associated with longwave infrared (LWIR) images make the analyses of these images quite difficult and the development of algorithms for image analysis quite complex. This image analysis process is further complicated when the algorithms are part of a real-time targeting, tracking, or positioning system because the sensor's electro-optical system can have a significant and variable impact on the image. As a result, it is often desirable to perform evaluations of fully packaged thermal infrared imaging systems against dynamic scenes. The high cost of field testing these systems prohibits this approach in all but the research and development and early engineering stages. Even in the research and development stage the scenarios required for full system testing are often difficult to acquire. These factors have led to the search for a capability to produce a synthetically generated, self-emitting thermal infrared scene which can be dynamically updated. Sensors or algorithms exposed to this simulator could then be tested in an end-to-end (buttoned up) configuration to evaluate system performance in as close to a real world scenario as practical. One major goal of this effort was to assemble and test the performance characteristics of a system for generating dynamic self-emitting scenes. The system consisted of an argon laser source, a spatial light modulator to generate a brightness image and a two-dimensional visible-to-infrared transducer to convert the monochromatic laser energy into a broad band self-emitting thermal infrared image. 61 refs., 32 figs., 4 tabs.

  3. South Polar Scene

    NASA Technical Reports Server (NTRS)

    2004-01-01

    5 February 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a portion of the south polar residual cap. Sunlight illuminates this scene from the upper left, thus the somewhat kidney bean-shaped features are pits, not mounds. These pits and their neighboring polygonal cracks are formed in a material composed mostly of carbon dioxide ice. The image is located near 87.0oS, 5.7oW, and covers an area 3 km (1.9 mi) wide.

  4. Benchmark on outdoor scenes

    NASA Astrophysics Data System (ADS)

    Zhang, Hairong; Wang, Cheng; Chen, Yiping; Jia, Fukai; Li, Jonathan

    2016-03-01

Depth super-resolution is becoming popular in computer vision, and most test data is based on indoor data sets with ground-truth measurements such as Middlebury. However, indoor data sets are mainly acquired with structured light techniques under ideal conditions, which cannot represent the objective world under natural light. Unlike indoor scenes, the uncontrolled outdoor environment is much more complicated and is rich in both visual and depth texture. For that reason, we develop a more challenging and meaningful outdoor benchmark for depth super-resolution using a state-of-the-art active laser scanning system.

  5. Use of Data Mining Techniques to Model Crime Scene Investigator Performance

    NASA Astrophysics Data System (ADS)

    Adderley, Richard; Townsley, Michael; Bond, John

    This paper examines how data mining techniques can assist the monitoring of Crime Scene Investigator performance. The findings show that Investigators can be placed in one of four groups according to their ability to recover DNA and fingerprints from crime scenes. They also show that their ability to predict which crime scenes will yield the best opportunity of recovering forensic samples has no correlation to their actual ability to recover those samples.
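The four-group placement described above could, for instance, be obtained by clustering each investigator's DNA and fingerprint recovery rates. The sketch below uses plain k-means with k=4 on fabricated rates; it is an assumed stand-in for illustration, not the paper's actual data-mining method.

```python
import random

# Fabricated (DNA rate, fingerprint rate) pairs for 40 investigators.
random.seed(0)
investigators = [(random.random(), random.random()) for _ in range(40)]

def kmeans(points, k=4, iters=50):
    """Toy k-means: assign points to nearest center, recompute means."""
    centers = points[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: (p[0] - centers[j][0]) ** 2
                                + (p[1] - centers[j][1]) ** 2)
            clusters[i].append(p)
        # Keep the old center if a cluster empties out.
        centers = [(sum(p[0] for p in c) / len(c),
                    sum(p[1] for p in c) / len(c)) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans(investigators)
print(len(centers))  # 4 performance groups
```

Each resulting cluster corresponds to one performance group; comparing group membership against predicted scene yields would then expose the lack of correlation the paper reports.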

  6. Northern Meridiani Scene

    NASA Technical Reports Server (NTRS)

    2004-01-01

    19 November 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows eroded remnants of layered sedimentary rock in northern Sinus Meridiani. The layering is best seen in the circular feature at the center/right, which is an old meteor impact crater that was once filled and buried beneath the sedimentary rocks, then later exhumed and eroded to its present state. All of the sedimentary rocks exposed in this portion of northern Sinus Meridiani are probably older than the rocks in central Sinus Meridiani that have been examined this year by the Mars Exploration Rover, Opportunity. Like the rocks visited by the rover, these, too, may contain detailed clues regarding a wetter Mars in the distant past. These landforms are located near 6.0oN, 2.0oW. The image covers an area approximately 3 km (1.9 mi) wide. Sunlight illuminates the scene from the left/lower left.

  7. A new approach to wideband scene projection

    NASA Astrophysics Data System (ADS)

    Kurtz, Russell M.; Parfenov, Alexander V.; Pradhan, Ranjit D.; Aye, Tin M.; Savant, Gajendra D.; Tun, Nay; Win, Tin M.; Holmstedt, Jason; Schindler, Axel

    2005-03-01

    Advances in the development of imaging sensors depend upon (among other things) the testing capabilities of research laboratories. Sensors and sensor suites need to be rigorously tested under laboratory and field conditions before being put to use. Real-time dynamic simulation of real targets is a key component of such testing, as actual full-scale tests with real targets are extremely expensive and time consuming and are not suitable for early stages of development. Dynamic projectors simulate tactical images and scenes. Several technologies exist for projecting IR and visible scenes to simulate tactical battlefield patterns - large format resistor arrays, liquid crystal light valves, Eidophor type projecting systems, and micromirror arrays, for example. These technologies are slow, or are restricted either in the modulator array size or in spectral bandwidth. In addition, many operate only in specific bandwidth regions. Physical Optics Corporation is developing an alternative to current scene projectors. This projector is designed to operate over the visible, near-IR, MWIR, and LWIR spectra simultaneously, from 300 nm to 20 μm. The resolution is 2 megapixels, and the designed frame rate is 120 Hz (40 Hz in color). To ensure high-resolution visible imagery and pixel-to-pixel apparent temperature difference of 100°C, the contrast between adjacent pixels is >100:1 in the visible to near-IR, MWIR, and LWIR. This scene projector is designed to produce a flickerless analog signal, suitable for staring and scanning arrays, and to be capable of operation in a hardware-in-the-loop test system. Tests performed on an initial prototype demonstrated contrast of 250:1 in the visible with non-optimized hardware.

  8. CAD programs: a tool for crime scene processing and reconstruction

    NASA Astrophysics Data System (ADS)

    Boggiano, Daniel; De Forest, Peter R.; Sheehan, Francis X.

    1997-02-01

Computer aided drafting (CAD) programs have great potential for helping the forensic scientist. One of their most direct and useful applications is crime scene documentation, as an aid in rendering neat, unambiguous line drawings of crime scenes. Once the data has been entered, it can easily be displayed, printed, or plotted in a variety of formats. Final renditions from this initial data entry can take multiple forms and can have multiple uses. As a demonstrative aid, a CAD program can produce two-dimensional (2-D) drawings of the scene, to scale, from one's notes. These 2-D renditions are of court display quality and help to make the forensic scientist's testimony easily understood. Another use for CAD is as an analytical tool for scene reconstruction. More than just a drawing aid, CAD can generate useful information from the data input. It can help reconstruct bullet paths or locations of furniture in a room when it is critical to the reconstruction. Data entry at the scene, on a notebook computer, can assist in framing and answering questions so that the forensic scientist can test hypotheses while actively documenting the scene. Further, three-dimensional (3-D) renditions of items can be viewed from many 'locations' by using the program to rotate the object and the observer's viewpoint.

  9. TOPMS Cockpit Scene

    NASA Technical Reports Server (NTRS)

    1987-01-01

Advanced technology to fly with pilots of the future: new concepts for aircraft controls and displays are tested in the NASA transport systems research vehicle (TRSV) simulator at NASA Langley Research Center, Hampton, Va. Information displayed on the bottom screen (left of the pilot's hand) and projected onto a simulated out-the-window runway scene as a head-up display is part of a new invention, the Takeoff Performance Monitoring System (TOPMS), intended to help aircraft take off and land more safely. Langley researchers developed the system to predict where on the runway important takeoff events (such as rotation or stopping) will occur and to provide advisory information on whether the takeoff should be continued or aborted. The event locations and performance information are displayed on electronic screens as symbols and numbers superimposed on a scaled graphic of the runway. Predictions and advisories are updated in real time based on sensed conditions and performance during the takeoff roll. Langley research in both ground and flight facilities indicates that the TOPMS would enhance takeoff safety by providing the pilot with previously unavailable information related to his takeoff/abort decision. The TRSV simulator is one of several facilities used at Langley for the development of automated piloting aids. The aids complement evolving ground-based air traffic control (ATC) concepts for improved safety, communications and traffic flow.

  10. One-step reconstruction of assembled 3D holographic scenes

    NASA Astrophysics Data System (ADS)

    Velez Zea, Alejandro; Barrera-Ramírez, John Fredy; Torroba, Roberto

    2015-12-01

    We present a new experimental approach for reconstructing in one step 3D scenes otherwise not feasible in a single snapshot from a standard off-axis digital hologram architecture, due to a lack of illuminating resources or a limited setup size. Consequently, whenever a scene cannot be wholly illuminated or the size of the scene surpasses the available setup disposition, this protocol can be implemented to solve these issues. We need neither to alter the original setup at every step nor to cover the whole scene with the illuminating source, thus saving resources. With this technique we multiplex the processed holograms of actual diffuse objects composing a scene using a two-beam off-axis holographic setup in a Fresnel approach. By registering the holograms of several objects individually and applying a spatial filtering technique, the filtered Fresnel holograms can then be added to produce a compound hologram. The simultaneous reconstruction of all objects is performed in one step using the same recovering procedure employed for single holograms. Using this technique, we were able to reconstruct, for the first time to our knowledge, a scene by multiplexing off-axis holograms of the 3D objects without cross talk. This technique is important for quantitative visualization of optically packaged multiple images and is useful for a wide range of applications. We present experimental results to support the method.

  11. Research in interactive scene analysis

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J. M.; Barrow, H. G.; Weyl, S. A.

    1976-01-01

    Cooperative (man-machine) scene analysis techniques were developed whereby humans can provide a computer with guidance when completely automated processing is infeasible. An interactive approach promises significant near-term payoffs in analyzing various types of high volume satellite imagery, as well as vehicle-based imagery used in robot planetary exploration. This report summarizes the work accomplished over the duration of the project and describes in detail three major accomplishments: (1) the interactive design of texture classifiers; (2) a new approach for integrating the segmentation and interpretation phases of scene analysis; and (3) the application of interactive scene analysis techniques to cartography.

  12. Raise two effects with one scene: scene contexts have two separate effects in visual working memory of target faces

    PubMed Central

    Tanabe-Ishibashi, Azumi; Ikeda, Takashi; Osaka, Naoyuki

    2014-01-01

    Many people have experienced the inability to recognize a familiar face in a changed context, a phenomenon known as the “butcher-on-the-bus” effect. Whether this context effect is a facilitation of memory by old contexts or a disturbance of memory by novel contexts is a matter of great debate. Here, we investigated how two types of contextual information associated with target faces influence recognition performance for the faces, using meaningful (scene) or meaningless (scrambled scene) backgrounds. The results showed two different effects of contexts: (1) disturbance of face recognition by changes of scene backgrounds and (2) weak facilitation of face recognition by the re-presentation of the same backgrounds, be it scene or scrambled. The results indicate that the facilitation and disturbance of context effects are actually caused by two different subcomponents of the background information: semantic information available from scene backgrounds and visual array information commonly included in a scene and its scrambled picture. This view suggests that the visual working memory system can control such context information, switching how the context is handled: inhibiting it as a distractor or activating it as a cue for recognizing the current target. PMID:24847299

  13. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  14. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.

  15. Simulating Scenes In Outer Space

    NASA Technical Reports Server (NTRS)

    Callahan, John D.

    1989-01-01

    Multimission Interactive Picture Planner (MIP) is a computer program for scientifically accurate and fast three-dimensional animation of scenes in deep space. It is versatile, reasonably comprehensive, and portable, and runs on microcomputers. New techniques were developed to rapidly perform the calculations and transformations necessary to animate scenes in scientifically accurate three-dimensional space. Written in FORTRAN 77 code. Primarily designed to handle Voyager, Galileo, and Space Telescope; adaptable to other missions.

  16. Sample size and scene identification (cloud) - Effect on albedo

    NASA Technical Reports Server (NTRS)

    Vemury, S. K.; Stowe, L.; Jacobowitz, H.

    1984-01-01

    Scan channels on the Nimbus 7 Earth Radiation Budget instrument sample radiances from underlying earth scenes at a number of incident and scattering angles. A sampling excess toward measurements at large satellite zenith angles is noted. Also, at large satellite zenith angles, the present scheme for scene selection causes many observations to be classified as cloud, resulting in higher flux averages. Thus the combined effect of sampling bias and scene identification errors is to overestimate the computed albedo. It is shown, using a process of successive thresholding, that observations with satellite zenith angles greater than 50-60 deg lead to incorrect cloud identification. Elimination of these observations has reduced the albedo from 32.2 to 28.8 percent. This reduction is in the right direction and very nearly the same magnitude as the discrepancy between the albedos derived from the scanner and the wide-field-of-view channels.

  17. Monocular visual scene understanding: understanding multi-object traffic scenes.

    PubMed

    Wojek, Christian; Walk, Stefan; Roth, Stefan; Schindler, Konrad; Schiele, Bernt

    2013-04-01

    Following recent advances in detection, context modeling, and tracking, scene understanding has been the focus of renewed interest in computer vision research. This paper presents a novel probabilistic 3D scene model that integrates state-of-the-art multiclass object detection, object tracking and scene labeling together with geometric 3D reasoning. Our model is able to represent complex object interactions such as inter-object occlusion, physical exclusion between objects, and geometric context. Inference in this model allows us to jointly recover the 3D scene context and perform 3D multi-object tracking from a mobile observer, for objects of multiple categories, using only monocular video as input. Contrary to many other approaches, our system performs explicit occlusion reasoning and is therefore capable of tracking objects that are partially occluded for extended periods of time, or objects that have never been observed to their full extent. In addition, we show that a joint scene tracklet model for the evidence collected over multiple frames substantially improves performance. The approach is evaluated for different types of challenging onboard sequences. We first show a substantial improvement to the state of the art in 3D multipeople tracking. Moreover, a similar performance gain is achieved for multiclass 3D tracking of cars and trucks on a challenging dataset. PMID:22889818

  18. Attentional Allocation During the Perception of Scenes

    ERIC Educational Resources Information Center

    Gordon, Robert D.

    2004-01-01

    Semantic influences on attention during the 1st fixation on a scene were explored in 3 experiments. Subjects viewed briefly presented scenes; following scene presentation, a spatial probe was presented at the location of an object whose identity was consistent or inconsistent with the scene category. Responses to the probe served as an index of…

  19. Optical neural nets for scene analysis

    NASA Astrophysics Data System (ADS)

    Casasent, David

    1991-04-01

    This project involves hybrid optical/digital neural nets (NNs) with attention to one of the most formidable NN problems: scene analysis and pattern recognition. Our research is unique in its attention to a hybrid optical/digital NN architecture that is very multifunctional. We describe the various novel uses for optics we employ within a NN and how our hybrid architecture can implement most major NNs (specifically: associative processors, optimization NNs, NNs to handle multiple objects in the field of view, and adaptive learning NNs). We also include new matrix inversion NN concepts. Our scene analysis algorithm work includes a new feature space, a new hybrid pattern recognition/neural net algorithm (the ACNN), our symbolic correlator production system NN (that handles multiple objects in the field of view in parallel), and an advanced piecewise quadratic NN (PQNN) concept. Our major thrust has been the optical laboratory realization of these NN algorithms. Our initial work in this area is noted and includes: new error source modeling simulations of our initial and new real time optical laboratory system, a description of our newest optical laboratory system, and initial test results obtained with it.

  20. On the psychology and psychopathology of primal-scene experience.

    PubMed

    Hoyt, M F

    1980-07-01

    The importance of primal-scene experience is suggested by the wide range of attention it has received, with a multitude of derivative phenomena being attributed to its influence. Emphasis has been on possible psychiatric problems, and almost all available reports are clinical and anecdotal. The classical psychoanalytic view has been that such stimulation, be it through actual witnessing or fantasy, results (especially in children) in experience of anxiety, intense eroticization, and sadomasochistic confusions about sexuality. It is suggested here that issues of affectional love and fears of aloneness and feelings of vulnerability may often be the focus of primal-scene reactions. A wide range of evidence has been presented here to support the view that primal-scene experience per se is not necessarily deleterious, and that traumatic or pathogenic effects usually occur only within a context of general brutality or disturbed family relationships. In contradistinction, some emphasis here has been placed on possible positive effects of primal-scene experience. There is a clear need for further study, especially among nonpsychiatrically selected persons, for understanding to be advanced regarding the vicissitudes of both normal and pathological primal-scene experience. PMID:7410144

  1. Apparatus Notes.

    ERIC Educational Resources Information Center

    Eaton, Bruce G., Ed.

    1980-01-01

    Presents four notes that report new equipment and techniques of interest to physics teachers. These notes deal with collisions of atoms in solids, determining the viscosity of a liquid, measuring the speed of sound, and demonstrating the Doppler effect. (HM)

  2. Toward integrated scene text reading.

    PubMed

    Weinman, Jerod J; Butler, Zachary; Knoll, Dugan; Feild, Jacqueline

    2014-02-01

    The growth in digital camera usage combined with a worldly abundance of text has translated to a rich new era for a classic problem of pattern recognition, reading. While traditional document processing often faces challenges such as unusual fonts, noise, and unconstrained lexicons, scene text reading amplifies these challenges and introduces new ones such as motion blur, curved layouts, perspective projection, and occlusion among others. Reading scene text is a complex problem involving many details that must be handled effectively for robust, accurate results. In this work, we describe and evaluate a reading system that combines several pieces, using probabilistic methods for coarsely binarizing a given text region, identifying baselines, and jointly performing word and character segmentation during the recognition process. By using scene context to recognize several words together in a line of text, our system gives state-of-the-art performance on three difficult benchmark data sets. PMID:24356356

  3. Multi- and hyperspectral scene modeling

    NASA Astrophysics Data System (ADS)

    Borel, Christoph C.; Tuttle, Ronald F.

    2011-06-01

    This paper shows how to use a public domain raytracer, POV-Ray (Persistence Of Vision Raytracer), to render multi- and hyper-spectral scenes. The scripting environment allows automatic changing of the reflectance and transmittance parameters. The radiosity rendering mode allows accurate simulation of multiple reflections between surfaces and also allows semi-transparent surfaces such as plant leaves. We show that POV-Ray computes occlusion accurately using a test scene with two blocks under a uniform sky. A complex scene representing a plant canopy is generated using a few lines of script. With appropriate rendering settings, shadows cast by leaves are rendered in many bands. Comparing single and multiple reflection renderings, the effect of multiple reflections is clearly visible and accounts for 25% of the overall apparent canopy reflectance in the near infrared.
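    The band-by-band scripting idea described above can be sketched as follows: a small Python driver writes one POV-Ray scene file per spectral band, substituting a band-specific reflectance each time. This is a minimal sketch, not the paper's actual canopy scripts; the geometry, radiosity settings, and band reflectance values here are hypothetical.

    ```python
    # Sketch: emit one POV-Ray scene file per spectral band, varying the
    # diffuse reflectance of a leaf-like surface (hypothetical geometry
    # and settings; rendering each file with povray yields one band image).

    POV_TEMPLATE = """\
    global_settings {{ radiosity {{ count 200 error_bound 0.5 }} }}
    camera {{ location <0, 5, -10> look_at <0, 0, 0> }}
    light_source {{ <10, 20, -10> color rgb 1 }}
    // Leaf-like square whose diffuse reflectance changes per band
    polygon {{
      4, <-1,0,-1>, <1,0,-1>, <1,0,1>, <-1,0,1>
      pigment {{ color rgb {reflectance} }}
      finish {{ diffuse 1 }}
    }}
    """

    def write_band_scenes(band_reflectances):
        """band_reflectances: dict mapping band name -> scalar reflectance in [0, 1].
        Writes one .pov file per band; returns the list of file names written."""
        names = []
        for band, rho in band_reflectances.items():
            name = f"canopy_{band}.pov"
            with open(name, "w") as f:
                f.write(POV_TEMPLATE.format(reflectance=rho))
            names.append(name)
        return names
    ```

    Each generated file would then be rendered separately (e.g. `povray canopy_nir.pov`), and the per-band images stacked into a spectral cube.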

  4. Suicide notes.

    PubMed

    O'Donnell, I; Farmer, R; Catalan, J

    1993-07-01

    Detailed case reports of incidents of suicide and attempted suicide on the London Underground railway system between 1985 and 1989 were examined for the presence of suicide notes. The incidence of note-leaving was 15%. Notes provided little insight into the causes of suicide as subjectively perceived, or strategies for suicide prevention. PMID:8353698

  5. Categorization of Natural Dynamic Audiovisual Scenes

    PubMed Central

    Rummukainen, Olli; Radun, Jenni; Virtanen, Toni; Pulkki, Ville

    2014-01-01

    This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move forward to understand better the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database. PMID:24788808

  6. Creating Three-Dimensional Scenes

    ERIC Educational Resources Information Center

    Krumpe, Norm

    2005-01-01

    Persistence of Vision Raytracer (POV-Ray), a free computer program for creating photo-realistic, three-dimensional scenes and a link for Mathematica users interested in generating POV-Ray files from within Mathematica, is discussed. POV-Ray has great potential in secondary mathematics classrooms and helps in strengthening students' visualization…

  7. How to Make a Scene

    ERIC Educational Resources Information Center

    Varian, Hal R.

    2004-01-01

    Each Thursday, the New York Times publishes a column called "Economic Scene" on page C2 of the Business Section. The authorship of the column rotates among four individuals: Alan Krueger, Virginia Postrel, Jeff Madrick, and the author. This essay is about how he came to be a columnist and how he goes about writing the columns.

  8. Improving AIRS radiance spectra in high contrast scenes using MODIS

    NASA Astrophysics Data System (ADS)

    Pagano, Thomas S.; Aumann, Hartmut H.; Manning, Evan M.; Elliott, Denis A.; Broberg, Steven E.

    2015-09-01

    The Atmospheric Infrared Sounder (AIRS) on the EOS Aqua spacecraft was launched on May 4, 2002. AIRS acquires hyperspectral infrared radiances in 2378 channels ranging in wavelength from 3.7-15.4 um, with a spectral resolution (lambda/delta-lambda) of better than 1200 and a spatial resolution of 13.5 km with global daily coverage. AIRS is designed to measure temperature and water vapor profiles for improvement in weather forecast accuracy and improved understanding of climate processes. As with most instruments, the AIRS Point Spread Functions (PSFs) are not the same for all detectors. When viewing a non-uniform scene, this causes a significant radiometric error in some channels that is scene dependent and cannot be removed without knowledge of the underlying scene. The magnitude of the error depends on the combination of the non-uniformity of the AIRS spatial response for a given channel and the non-uniformity of the scene, but is typically only noticeable in about 1% of the scenes and about 10% of the channels. The current solution is to avoid those channels when performing geophysical retrievals. In this effort we use data from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument to provide information on the scene uniformity that is used to correct the AIRS data. For the vast majority of channels and footprints the technique works extremely well when compared to a Principal Component (PC) reconstruction of the AIRS channels. In some cases where the scene has high inhomogeneity in an irregular pattern, and in some channels, the method can actually degrade the spectrum. Most of the degraded channels appear to be slightly affected by random noise introduced in the process, but those with larger degradation may be affected by alignment errors in the AIRS relative to MODIS or uncertainties in the PSF. Despite these errors, the methodology shows the ability to correct AIRS radiances in non-uniform scenes under some of the worst case conditions and improves the ability to match

  9. Scene-of-crime analysis by a 3-dimensional optical digitizer: a useful perspective for forensic science.

    PubMed

    Sansoni, Giovanna; Cattaneo, Cristina; Trebeschi, Marco; Gibelli, Daniele; Poppa, Pasquale; Porta, Davide; Maldarella, Monica; Picozzi, Massimo

    2011-09-01

    Analysis and detailed registration of the crime scene are of the utmost importance during investigations. However, this phase of activity is often affected by the risk of loss of evidence due to the limits of traditional scene-of-crime registration methods (i.e., photos and videos). This technical note shows the utility of applying a 3-dimensional optical digitizer to different crime scenes. This study aims at verifying the importance and feasibility of contactless 3-dimensional reconstruction and modeling by optical digitization to achieve an optimal registration of the crime scene. PMID:21811148

  10. ERBE Geographic Scene and Monthly Snow Data

    NASA Technical Reports Server (NTRS)

    Coleman, Lisa H.; Flug, Beth T.; Gupta, Shalini; Kizer, Edward A.; Robbins, John L.

    1997-01-01

    The Earth Radiation Budget Experiment (ERBE) is a multisatellite system designed to measure the Earth's radiation budget. The ERBE data processing system consists of several software packages or sub-systems, each designed to perform a particular task. The primary task of the Inversion Subsystem is to reduce satellite altitude radiances to fluxes at the top of the Earth's atmosphere. To accomplish this, angular distribution models (ADM's) are required. These ADM's are a function of viewing and solar geometry and of the scene type as determined by the ERBE scene identification algorithm which is a part of the Inversion Subsystem. The Inversion Subsystem utilizes 12 scene types which are determined by the ERBE scene identification algorithm. The scene type is found by combining the most probable cloud cover, which is determined statistically by the scene identification algorithm, with the underlying geographic scene type. This Contractor Report describes how the geographic scene type is determined on a monthly basis.

  11. Cultural Changes and the Drug Scene

    ERIC Educational Resources Information Center

    Gregory, Robert J.

    1975-01-01

    An anthropological perspective on the American society and the contemporary drug scene reveals shifts in value orientation, in social structure, and in activity. In this view, the drug scene appears to be in the vanguard of social change. (Author)

  12. Crime scene interpretation: back to basics

    NASA Astrophysics Data System (ADS)

    Baldwin, Hayden B.

    1999-02-01

    This presentation is a review of the basics involved in the interpretation of the crime scene based on facts derived from the physical and testimonial evidence obtained from the scene. It demonstrates the need to thoroughly document the scene to support the interpretation; part of this documentation is based on photography and crime scene sketches. While the methodology is simple and well demonstrated in this presentation, this aspect is one of the tasks least often completed by most law enforcement agencies.

  13. Scanning scene tunnel for city traversing.

    PubMed

    Zheng, Jiang Yu; Zhou, Yu; Milli, Panayiotis

    2006-01-01

    This paper proposes a visual representation named the scene tunnel for capturing urban scenes along routes and visualizing them on the Internet. We scan scenes with multiple cameras or a fish-eye camera on a moving vehicle, which generates a real scene archive along streets that is more complete than previously proposed route panoramas. Using a translating spherical eye, properly set planes of scanning, and a unique parallel-central projection, we explore the image acquisition of the scene tunnel from camera selection and alignment, slit calculation, scene scanning, to image integration. The scene tunnels cover high buildings, the ground, and various viewing directions, and have uniform resolution along the street. The sequentially organized scene tunnel benefits texture mapping onto urban models. We analyze the shape characteristics in the scene tunnels for designing visualization algorithms. After combining this with a global panorama and forward image caps, the capped scene tunnels can provide continuous views directly for virtual or real navigation in a city. We render the scene tunnel dynamically by view warping, fast transmission, and flexible interaction. The compact and continuous scene tunnel facilitates model construction, data streaming, and seamless route traversing on the Internet and mobile devices. PMID:16509375
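    The core slit-scanning idea — accumulating a one-pixel-wide vertical slit from each frame of a translating camera into one continuous route image — can be sketched as follows. This is a simplified sketch only; the paper's parallel-central projection, multi-camera alignment, and slit calculation are not modeled.

    ```python
    import numpy as np

    def slit_scan(frames, slit_col=None):
        """Build a route image by stacking one vertical pixel slit per frame.
        frames: iterable of HxWx3 uint8 arrays from a translating camera.
        slit_col: column to sample (defaults to the image center)."""
        slits = []
        for frame in frames:
            col = frame.shape[1] // 2 if slit_col is None else slit_col
            slits.append(frame[:, col:col + 1, :])  # keep as Hx1x3
        return np.concatenate(slits, axis=1)        # Hx(num_frames)x3
    ```

    For N input frames of height H, the output is an HxN strip: horizontal resolution is set by the vehicle's sampling rate along the street, as in the scene tunnel's uniform along-street resolution.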

  14. Where Do Objects Become Scenes?

    PubMed Central

    Biederman, Irving

    2011-01-01

    Regions tuned to individual visual categories, such as faces and objects, have been discovered in the later stages of the ventral visual pathway in the cortex. But most visual experience is composed of scenes, where multiple objects are interacting. Such interactions are readily described by prepositions or verb forms, for example, a bird perched on a birdhouse. At what stage in the pathway does sensitivity to such interactions arise? Here we report that object pairs shown as interacting, compared with their side-by-side depiction (e.g., a bird besides a birdhouse), elicit greater activity in the lateral occipital complex, the earliest cortical region where shape is distinguished from texture. Novelty of the interactions magnified this gain, an effect that was absent in the side-by-side depictions. Scene-like relations are thus likely achieved simultaneously with the specification of object shape. PMID:21148087

  15. Chemistry Notes.

    ERIC Educational Resources Information Center

    School Science Review, 1980

    1980-01-01

    Presents 12 chemistry notes for British secondary school teachers. Some of these notes are: (1) a simple device for testing pH-meters; (2) portable fume cupboard safety screen; and (3) Mass spectroscopy-analysis of a mass peak. (HM)

  16. Dynamosaicing: mosaicing of dynamic scenes.

    PubMed

    Rav-Acha, Alex; Pritch, Yael; Lischinski, Dani; Peleg, Shmuel

    2007-10-01

    This paper explores the manipulation of time in video editing, enabling control over the chronological time of events. These time manipulations include slowing down (or postponing) some dynamic events while speeding up (or advancing) others. When a video camera scans a scene, aligning all the events to a single time interval will result in a panoramic movie. Time manipulations are obtained by first constructing an aligned space-time volume from the input video, and then sweeping a continuous 2D slice (time front) through that volume, generating a new sequence of images. For dynamic scenes, aligning the input video frames poses an important challenge. We propose to align dynamic scenes using a new notion of "dynamics constancy", which is more appropriate for this task than the traditional assumption of "brightness constancy". Another challenge is to avoid visual seams inside moving objects and other visual artifacts resulting from sweeping the space-time volumes with time fronts of arbitrary geometry. To avoid such artifacts, we formulate the problem of finding optimal time front geometry as one of finding a minimal cut in a 4D graph, and solve it using max-flow methods. PMID:17699923

  17. Dynamic Scene Classification Using Redundant Spatial Scenelets.

    PubMed

    Du, Liang; Ling, Haibin

    2016-09-01

    Dynamic scene classification has recently started drawing an increasing amount of research effort. While existing approaches mainly rely on low-level features, little work addresses the need to exploit the rich spatial layout information in dynamic scenes. Motivated by the fact that dynamic scenes are characterized by both dynamic and static parts with spatial layout priors, we propose to use redundant spatial groupings of a large number of spatiotemporal patches, named scenelets, to represent a dynamic scene. Specifically, each scenelet is associated with a category-dependent scenelet model to encode the likelihood of a specific scene category. All scenelet models for a scene category are jointly learned to encode the spatial interactions and redundancies among them. Subsequently, a dynamic scene sequence is represented as a collection of category likelihoods estimated by these scenelet models. Such a representation effectively encodes the spatial layout prior together with associated semantic information, and can be used for classifying dynamic scenes in combination with a standard learning algorithm such as k-nearest neighbor or a linear support vector machine. The effectiveness of our approach is clearly demonstrated using two dynamic scene benchmarks and a related application for violence video classification. In the nearest neighbor classification framework, for dynamic scene classification, our method outperforms the previous state of the art on both the Maryland "in the wild" dataset and the "stabilized" dynamic scene dataset. For violence video classification on a benchmark dataset, our method achieves a promising classification rate of 87.08%, which significantly improves the previous best result of 81.30%. PMID:26302526

  18. Crime scene investigation, reporting, and reconstruction (CSIRR)

    NASA Astrophysics Data System (ADS)

    Booth, John F.; Young, Jeffrey M.; Corrigan, Paul

    1997-02-01

    Graphic Data Systems Corporation (GDS Corp.) and Intelligent Graphics Solutions, Inc. (IGS) combined talents in 1995 to design and develop a MicroGDSTM application to support field investigations of crime scenes, such as homicides, bombings, and arsons. IGS and GDS Corp. prepared design documents under the guidance of federal, state, and local crime scene reconstruction experts and with information from the FBI's evidence response team field book. The application was then developed to encompass the key components of crime scene investigation: staff assigned to the incident, tasks occurring at the scene, visits to the scene location, photographs taken of the crime scene, related documents, involved persons, catalogued evidence, and two- or three-dimensional crime scene reconstruction. Crime scene investigation, reporting, and reconstruction (CSIRR) provides investigators with a single application for both capturing all tabular data about the crime scene and quickly rendering a sketch of the scene. Tabular data is captured through intuitive database forms, while MicroGDSTM has been modified to readily allow non-CAD users to sketch the scene.

  19. Blue Note

    ScienceCinema

    Murray Gibson

    2010-01-08

    Argonne's Murray Gibson is a physicist whose life's work includes finding patterns among atoms. The love of distinguishing patterns also drives Gibson as a musician and Blues enthusiast. "Blue" notes are very harmonic notes that are missing from the equal temperament scale. The techniques of piano blues and jazz represent the melding of African and Western music into something totally new and exciting.

  20. Blue Note

    SciTech Connect

    Murray Gibson

    2007-04-27

    Argonne's Murray Gibson is a physicist whose life's work includes finding patterns among atoms. The love of distinguishing patterns also drives Gibson as a musician and Blues enthusiast. "Blue" notes are very harmonic notes that are missing from the equal temperament scale. The techniques of piano blues and jazz represent the melding of African and Western music into something totally new and exciting.

  1. Scene segmentation through region growing

    NASA Technical Reports Server (NTRS)

    Latty, R. S.

    1984-01-01

    A computer algorithm to segment Landsat Thematic Mapper (TM) images into areas representing surface features is described. The algorithm is based on a region growing approach and uses edge elements and edge element orientation to define the limits of the surface features. Adjacent regions which are not separated by edges are linked to form larger regions. Some of the advantages of scene segmentation over conventional TM image extraction algorithms are discussed, including surface feature analysis on a pixel-by-pixel basis, and faster identification of the pixels in each region. A detailed flow diagram of the region-growing algorithm is provided.
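    The region-growing idea described above can be sketched in a few lines. The Python sketch below is illustrative only: the tolerance threshold and 4-connected growth rule are assumptions, and the actual algorithm additionally uses edge elements and their orientation to bound regions.

```python
from collections import deque

import numpy as np

def region_grow(image, seed, tol=10):
    """Grow a region from `seed`, absorbing 4-connected pixels whose
    value lies within `tol` of the region's running mean.

    Illustrative sketch only; the described algorithm also uses edge
    elements to define region limits."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(image[nr, nc]) - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(image[nr, nc])
                    count += 1
                    frontier.append((nr, nc))
    return mask

# Tiny synthetic "scene": a bright square on a dark background.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200
region = region_grow(img, (3, 3), tol=10)
print(region.sum())  # 16 pixels in the grown region
```

    Linking adjacent regions not separated by edges would then be a second pass over the resulting masks.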

  2. Scene-based contextual cueing in pigeons.

    PubMed

    Wasserman, Edward A; Teng, Yuejia; Brooks, Daniel I

    2014-10-01

    Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of such contextual cueing. Pigeons had to peck a target, which could appear in 1 of 4 locations on color photographs of real-world scenes. On half of the trials, each of 4 scenes was consistently paired with 1 of 4 possible target locations; on the other half of the trials, each of 4 different scenes was randomly paired with the same 4 possible target locations. In Experiments 1 and 2, pigeons exhibited robust contextual cueing when the context preceded the target by 1 s to 8 s, with reaction times to the target being shorter on predictive-scene trials than on random-scene trials. Pigeons also responded more frequently during the delay on predictive-scene trials than on random-scene trials; indeed, during the delay on predictive-scene trials, pigeons predominately pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. In Experiment 3, involving left-right and top-bottom scene reversals, pigeons exhibited stronger control by global than by local scene cues. These results attest to the robustness and associative basis of contextual cueing in pigeons. PMID:25546098

  3. Associative Processing Is Inherent in Scene Perception

    PubMed Central

    Aminoff, Elissa M.; Tarr, Michael J.

    2015-01-01

    How are complex visual entities such as scenes represented in the human brain? More concretely, along what visual and semantic dimensions are scenes encoded in memory? One hypothesis is that global spatial properties provide a basis for categorizing the neural response patterns arising from scenes. In contrast, non-spatial properties, such as single objects, also account for variance in neural responses. The list of critical scene dimensions has continued to grow—sometimes in a contradictory manner—coming to encompass properties such as geometric layout, big/small, crowded/sparse, and three-dimensionality. We demonstrate that these dimensions may be better understood within the more general framework of associative properties. That is, across both the perceptual and semantic domains, features of scene representations are related to one another through learned associations. Critically, the components of such associations are consistent with the dimensions that are typically invoked to account for scene understanding and its neural bases. Using fMRI, we show that non-scene stimuli displaying novel associations across identities or locations recruit putatively scene-selective regions of the human brain (the parahippocampal/lingual region, the retrosplenial complex, and the transverse occipital sulcus/occipital place area). Moreover, we find that the voxel-wise neural patterns arising from these associations are significantly correlated with the neural patterns arising from everyday scenes, providing critical evidence as to whether the same encoding principles underlie both types of processing. These neuroimaging results provide evidence for the hypothesis that the neural representation of scenes is better understood within the broader theoretical framework of associative processing. In addition, the results demonstrate a division of labor that arises across scene-selective regions when processing associations and scenes, providing a better understanding of the functional

  4. The Self Actualized Reader.

    ERIC Educational Resources Information Center

    Marino, Michael; Moylan, Mary Elizabeth

    A study examined the commonalities that "voracious" readers share, and how their experiences can guide parents, teachers, and librarians in assisting children to become self-actualized readers. Subjects, 25 adults ranging in age from 20 to 67 years, completed a questionnaire concerning their reading histories and habits. Respondents varied in…

  5. Audiovisual integration facilitates unconscious visual scene processing.

    PubMed

    Tan, Jye-Sheng; Yeh, Su-Ling

    2015-10-01

    Meanings of masked complex scenes can be extracted without awareness; however, it remains unknown whether audiovisual integration occurs with an invisible complex visual scene. The authors examine whether a scenery soundtrack can facilitate unconscious processing of a subliminal visual scene. The continuous flash suppression paradigm was used to render a complex scene picture invisible, and the picture was paired with a semantically congruent or incongruent scenery soundtrack. Participants were asked to respond as quickly as possible if they detected any part of the scene. Release-from-suppression time was used as an index of unconscious processing of the complex scene, which was shorter in the audiovisual congruent condition than in the incongruent condition (Experiment 1). The possibility that participants adopted different detection criteria for the 2 conditions was excluded (Experiment 2). The audiovisual congruency effect did not occur for objects-only (Experiment 3) and background-only (Experiment 4) pictures, and it did not result from consciously mediated conceptual priming (Experiment 5). The congruency effect was replicated when catch trials without scene pictures were added to exclude participants with high false-alarm rates (Experiment 6). This is the first study demonstrating unconscious audiovisual integration with subliminal scene pictures, and it suggests expansions of scene-perception theories to include unconscious audiovisual integration. PMID:26076179

  6. Biology Notes.

    ERIC Educational Resources Information Center

    School Science Review, 1984

    1984-01-01

    Presents information on the teaching of nutrition (including new information relating to many current O-level syllabi) and part 16 of a reading list for A- and S-level biology. Also includes a note on using earthworms as a source of material for teaching meiosis. (JN)

  7. Classroom Notes

    ERIC Educational Resources Information Center

    International Journal of Mathematical Education in Science and Technology, 2007

    2007-01-01

    In this issue's "Classroom Notes" section, the following papers are discussed: (1) "Constructing a line segment whose length is equal to the measure of a given angle" (W. Jacob and T. J. Osler); (2) "Generating functions for the powers of Fibonacci sequences" (D. Terrana and H. Chen); (3) "Evaluation of mean and variance integrals without…

  8. Classroom Notes

    ERIC Educational Resources Information Center

    International Journal of Mathematical Education in Science and Technology, 2007

    2007-01-01

    In this issue's "Classroom Notes" section, the following papers are described: (1) "Sequences of Definite Integrals" by T. Dana-Picard; (2) "Structural Analysis of Pythagorean Monoids" by M.-Q Zhan and J. Tong; (3) "A Random Walk Phenomenon under an Interesting Stopping Rule" by S. Chakraborty; (4) "On Some Confidence Intervals for Estimating the…

  9. Apparatus Notes.

    ERIC Educational Resources Information Center

    Eaton, Bruce G., Ed.

    1980-01-01

    This collection of notes describes (1) an optoelectronic apparatus for classroom demonstrations of mechanical laws, (2) a more efficient method for demonstrating nuclear chain reactions using electrically energized "traps" and ping-pong balls, and (3) an inexpensive demonstration for qualitative analysis of temperature-dependent resistance. (CS)

  10. Eye Movement Control during Scene Viewing: Immediate Effects of Scene Luminance on Fixation Durations

    ERIC Educational Resources Information Center

    Henderson, John M.; Nuthmann, Antje; Luke, Steven G.

    2013-01-01

    Recent research on eye movements during scene viewing has primarily focused on where the eyes fixate. But eye fixations also differ in their durations. Here we investigated whether fixation durations in scene viewing are under the direct and immediate control of the current visual input. Subjects freely viewed photographs of scenes in preparation…

  11. When Does Repeated Search in Scenes Involve Memory? Looking at versus Looking for Objects in Scenes

    ERIC Educational Resources Information Center

    Vo, Melissa L. -H.; Wolfe, Jeremy M.

    2012-01-01

    One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in the same scene, search performance remained…

  12. Framework of passive millimeter-wave scene simulation based on material classification

    NASA Astrophysics Data System (ADS)

    Park, Hyuk; Kim, Sung-Hyun; Lee, Ho-Jin; Kim, Yong-Hoon; Ki, Jae-Sug; Yoon, In-Bok; Lee, Jung-Min; Park, Soon-Jun

    2006-05-01

    using actual PMMW sensors. With the reliable PMMW scene simulator, it will be more efficient to apply the PMMW sensor to various applications.

  13. Surreal Scene Part of Lives.

    ERIC Educational Resources Information Center

    Freeman, Christina

    1999-01-01

    Describes a school newspaper editor's attempts to cover the devastating tornado that severely damaged her school--North Hall High School in Gainesville, Georgia. Notes that the 16-page special edition she and the staff produced included first-hand accounts, tributes to victims, tales of survival, and pictures of the tragedy. (RS)

  14. The spatiotemporal dynamics of scene gist recognition.

    PubMed

    Larson, Adam M; Freeman, Tyler E; Ringer, Ryan V; Loschky, Lester C

    2014-04-01

    Viewers can rapidly extract a holistic semantic representation of a real-world scene within a single eye fixation, an ability called recognizing the gist of a scene, and operationally defined here as recognizing an image's basic-level scene category. However, it is unknown how scene gist recognition unfolds over both time and space: within a fixation and across the visual field. Thus, in 3 experiments, the current study investigated the spatiotemporal dynamics of basic-level scene categorization from central vision to peripheral vision over the time course of the critical first fixation on a novel scene. The method used a window/scotoma paradigm in which images were briefly presented and processing times were varied using visual masking. The results of Experiments 1 and 2 showed that during the first 100 ms of processing, there was an advantage for processing the scene category from central vision, with the relative contributions of peripheral vision increasing thereafter. Experiment 3 tested whether this pattern could be explained by spatiotemporal changes in selective attention. The results showed that manipulating the probability of information being presented centrally or peripherally selectively maintained or eliminated the early central vision advantage. Across the 3 experiments, the results are consistent with a zoom-out hypothesis, in which, during the first fixation on a scene, gist extraction extends from central vision to peripheral vision as covert attention expands outward. PMID:24245502

  15. History Scene Investigations: From Clues to Conclusions

    ERIC Educational Resources Information Center

    McIntyre, Beverly

    2011-01-01

    In this article, the author introduces a social studies lesson that allows students to learn history and practice reading skills, critical thinking, and writing. The activity is called History Scene Investigation or HSI, which derives its name from the popular television series based on crime scene investigations (CSI). HSI uses discovery learning…

  16. Teaching Notes

    NASA Astrophysics Data System (ADS)

    2001-05-01

    If you would like to contribute a teaching note for any of these sections please contact ped@iop.org. Contents: LET'S INVESTIGATE: Standing waves on strings; MY WAY: Physics slips, trips and falls; PHYSICS ON A SHOESTRING: The McOhm: using fast food to explain resistance; Eggs and a sheet; STARTING OUT: After a nervous start, I'm flying; ON THE MAP: Christ's Hospital; CURIOSITY: The Levitron; TECHNICAL TRIMMINGS: Brownian motion smoke cell.

  17. Look Closely: The Finer Points of Scene Analysis.

    ERIC Educational Resources Information Center

    Miller, Bruce

    1998-01-01

    Continues a discussion of script analysis for actors. Focuses on specific scenes and how an eventual scene-by-scene analysis will help students determine a "throughline" of a play's action. Uses a scene from "Romeo and Juliet" to illustrate scene analysis. Gives 13 script questions for students to answer. Presents six tips for scoring the action.…

  18. Stages as models of scene geometry.

    PubMed

    Nedović, Vladimir; Smeulders, Arnold W M; Redert, André; Geusebroek, Jan-Mark

    2010-09-01

    Reconstruction of 3D scene geometry is an important element for scene understanding, autonomous vehicle and robot navigation, image retrieval, and 3D television. We propose accounting for the inherent structure of the visual world when trying to solve the scene reconstruction problem. Consequently, we identify geometric scene categorization as the first step toward robust and efficient depth estimation from single images. We introduce 15 typical 3D scene geometries called stages, each with a unique depth profile, which roughly correspond to a large majority of broadcast video frames. Stage information serves as a first approximation of global depth, narrowing down the search space in depth estimation and object localization. We propose different sets of low-level features for depth estimation, and perform stage classification on two diverse data sets of television broadcasts. Classification results demonstrate that stages can often be efficiently learned from low-dimensional image representations. PMID:20634560

  19. Scene construction in amnesia: an FMRI study.

    PubMed

    Mullally, Sinéad L; Hassabis, Demis; Maguire, Eleanor A

    2012-04-18

    In recent years, there has been substantial interest in how the human hippocampus not only supports recollection of past experiences, but also the construction of fictitious and future events, and the leverage this might offer for understanding the operating mechanisms of the hippocampus. Evidence that patients with bilateral hippocampal damage and amnesia cannot construct novel or future scenes/events has been influential in driving this line of research forward. There are, however, some patients with hippocampal damage and amnesia who retain the ability to construct novel scenes. This dissociation may indicate that the hippocampus is not required for scene construction, or alternatively, there could be residual function in remnant hippocampal tissue sufficient to support the basic construction of scenes. Resolving this controversy is central to current theoretical debates about the hippocampus. To investigate, we used fMRI and a scene construction task to test patient P01, who has dense amnesia, ∼50% bilateral hippocampal volume loss, and intact scene construction. We found that scene construction in P01 was associated with increased activity in a set of brain areas, including medial temporal, retrosplenial, and posterior parietal cortices, that overlapped considerably with the regions engaged in control participants performing the same task. Most notably, the remnant of P01's right hippocampus exhibited increased activity during scene construction. This suggests that the intact scene construction observed in some hippocampal-damaged amnesic patients may be supported by residual function in their lesioned hippocampus, in accordance with theoretical frameworks that ascribe a vital role to the hippocampus in scene construction. PMID:22514326

  20. Brief Note: Response to Benatar.

    PubMed

    Kelland, Lindsay-Ann

    2015-11-01

    In his response to my article entitled 'The Harm of Male-on-Female Rape: A Response to David Benatar', Benatar argues that I take his claims out of context, misrepresent them, and set up a straw man, which means, he claims, that I fail to respond to anything he has actually said. In this brief note, I respond to these allegations. PMID:25592400

  1. Photonic crystal scene projector development

    NASA Astrophysics Data System (ADS)

    Wilson, J. A.; Burckel, B.; Caulfield, J.; Cogan, S.; Massie, M.; Lamott, R.; Snyder, D.; Rapp, R.

    2010-04-01

    This paper describes results from the Extremely High Temperature Photonic Crystal System Technology (XTEMPS) program. The XTEMPS program is developing projector technology based on photonic crystals capable of high dynamic range, multispectral emission from SWIR to LWIR, and realistic bandwidths. These photonic crystals (PhC) are fabricated from refractory materials to provide high radiance and long device lifetime. Cyan is teamed with Sandia National Laboratories to develop photonic crystals designed for realistic scene projection systems, and with Nova Sensors to utilize their advanced read-in integrated circuit (RIIC). PhC-based emitters show improved in-band output power efficiency when compared to broadband "graybody" emitters due to the absence of out-of-band emission. Less electrical power is required to achieve high operating temperature, and the potential for nonequilibrium pumping exists. Both effects boost effective radiance output. Cyan has demonstrated pixel designs compatible with Nova's medium format RIIC, ensuring high apparent output temperatures, modest drive currents, and low operating voltages of less than five volts. Unit cell pixel structures with high radiative efficiency have been demonstrated, and arrays using PhC optimized for up to four spectral bands have been successfully patterned.

  2. Actual use scene of Han-Character for proper name and coded character set

    NASA Astrophysics Data System (ADS)

    Kobayashi, Tatsuo

    This article discusses the following two issues. The first is an overview of the standardization of Han characters in coded character sets, including the Universal Coded Character Set (ISO/IEC 10646), in relation to the Japanese government's language policy. The second is the distinctive usage of Han characters in proper names and the difficulty of implementing that usage in ICT systems.

  3. Scene categorization at large visual eccentricities.

    PubMed

    Boucart, Muriel; Moroni, Christine; Thibaut, Miguel; Szaffarczyk, Sebastien; Greene, Michelle

    2013-06-28

    Studies of scene perception have shown that the visual system is particularly sensitive to global properties such as the overall layout of a scene. Such global properties cannot be computed locally, but rather require relational analysis over multiple regions. To what extent is observers' perception of scenes impaired in the far periphery? We examined the perception of global scene properties (Experiment 1) and basic-level categories (Experiment 2) presented in the periphery from 10° to 70°. Pairs of scene photographs were simultaneously presented left and right of fixation for 80 ms on a panoramic screen (5 m diameter) covering the whole visual field while central fixation was controlled. Observers were instructed to press a key corresponding to the spatial location left/right of a pre-defined target property or category. The results show that classification of global scene properties (e.g., naturalness, openness) as well as basic-level categorization (e.g., forests, highways), while better near the center, were accomplished with a performance highly above chance (around 70% correct) in the far periphery even at 70° eccentricity. The perception of some global properties (e.g., naturalness) was more robust in peripheral vision than others (e.g., indoor/outdoor) that required a more local analysis. The results are consistent with studies suggesting that scene gist recognition can be accomplished by the low resolution of peripheral vision. PMID:23597581

  4. Visual scenes are categorized by function.

    PubMed

    Greene, Michelle R; Baldassano, Christopher; Esteva, Andre; Beck, Diane M; Fei-Fei, Li

    2016-01-01

    How do we know that a kitchen is a kitchen by looking? Traditional models posit that scene categorization is achieved through recognizing necessary and sufficient features and objects, yet there is little consensus about what these may be. However, scene categories should reflect how we use visual information. Therefore, we test the hypothesis that scene categories reflect functions, or the possibilities for actions within a scene. Our approach is to compare human categorization patterns with predictions made by both functions and alternative models. We collected a large-scale scene category distance matrix (5 million trials) by asking observers to simply decide whether 2 images were from the same or different categories. Using the actions from the American Time Use Survey, we mapped actions onto each scene (1.4 million trials). We found a strong relationship between ranked category distance and functional distance (r = .50, or 66% of the maximum possible correlation). The function model outperformed alternative models of object-based distance (r = .33), visual features from a convolutional neural network (r = .39), lexical distance (r = .27), and models of visual features. Using hierarchical linear regression, we found that functions captured 85.5% of overall explained variance, with nearly half of the explained variance captured only by functions, implying that the predictive power of alternative models was because of their shared variance with the function-based model. These results challenge the dominant school of thought that visual features and objects are sufficient for scene categorization, suggesting instead that a scene's category may be determined by the scene's function. PMID:26709590
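    The model comparison described above comes down to correlating pairwise category-distance matrices against the human distance matrix. The sketch below uses made-up distance values and a hand-rolled rank-correlation helper (assuming no ties); both toy models are hypothetical stand-ins for the study's function-based and object-based models, not its data.

```python
import numpy as np

def spearman(a, b):
    """Rank correlation between two flattened distance matrices.
    Crude version for illustration: assumes no tied values."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

# Hypothetical pairwise category distances (flattened upper triangles).
human = np.array([0.9, 0.2, 0.8, 0.7, 0.3, 0.6])
function_model = np.array([0.8, 0.1, 0.9, 0.6, 0.2, 0.5])  # tracks human ranking closely
object_model = np.array([0.3, 0.9, 0.2, 0.4, 0.8, 0.1])    # tracks it poorly
print(spearman(human, function_model) > spearman(human, object_model))  # True
```

    The study's hierarchical regression step then asks how much variance each model explains beyond the others.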

  5. Combined Influence of Visual Scene and Body Tilt on Arm Pointing Movements: Gravity Matters!

    PubMed Central

    Scotto Di Cesare, Cécile; Sarlegna, Fabrice R.; Bourdin, Christophe; Mestre, Daniel R.; Bringoux, Lionel

    2014-01-01

    Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., ‘combined’ tilts equal to the sum of ‘single’ tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues. PMID:24925371

  6. Behind the Scenes: 'Fishing' For Rockets

    NASA Video Gallery

    In this episode of NASA "Behind the Scenes," go on board the two ships -- Liberty Star and Freedom Star -- which retrieve the shuttle's solid rocket boosters after every launch. Astronaut Mike Mass...

  7. Behind the Scenes: Astronauts Get Float Training

    NASA Video Gallery

    In this episode of "NASA Behind the Scenes," astronaut Mike Massimino continues his visit with safety divers and flight doctors at the Johnson Space Center's Neutral Buoyancy Laboratory as they com...

  8. Behind the Scenes: Under the Shuttle

    NASA Video Gallery

    In this episode of "NASA Behind the Scenes," astronaut Mike Massimino takes you up to - and under - the space shuttle as it waits on launch pad 39A at the Kennedy Space Center for the start of a re...

  9. Behind the Scenes: Discovery Crew Practices Landing

    NASA Video Gallery

    In this episode of NASA Behind the Scenes, Astronaut Mike Massimino introduces you to Commander Steve Lindsey and the crewmembers of STS-133, space shuttle Discovery's last mission. Go inside one o...

  10. Cognition inspired framework for indoor scene annotation

    NASA Astrophysics Data System (ADS)

    Ye, Zhipeng; Liu, Peng; Zhao, Wei; Tang, Xianglong

    2015-09-01

    We present a simple yet effective scene annotation framework based on a combination of bag-of-visual-words (BoVW), three-dimensional scene structure estimation, scene context, and cognitive theory. From a macro perspective, the proposed cognition-based hybrid motivation framework divides the annotation problem into empirical inference and real-time classification. Inspired by the inference ability of human beings, common objects of indoor scenes are defined for experience-based inference, while in the real-time classification stage, an improved BoVW-based multilayer abstract semantics labeling method is proposed by introducing abstract semantic hierarchies to narrow the semantic gap and improve the performance of object categorization. The proposed framework was evaluated on a variety of common data sets and experimental results proved its effectiveness.
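    The BoVW representation that the framework builds on can be sketched as follows. This is a minimal illustration assuming a precomputed codebook and random stand-in descriptors; the paper's abstract semantic hierarchies are layered on top of a representation like this.

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Quantize local feature descriptors against a visual-word codebook
    and return a normalized bag-of-visual-words histogram.

    Illustrative sketch only; real pipelines extract descriptors such as
    SIFT and learn the codebook by clustering."""
    # Assign each descriptor to its nearest codeword (Euclidean distance).
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))       # 8 visual words, 16-D descriptors
descriptors = rng.normal(size=(100, 16))  # stand-in local features from one image
h = bovw_histogram(descriptors, codebook)
print(h.shape, round(h.sum(), 6))
```

    The resulting histogram is what a downstream classifier, or a semantic hierarchy, would consume.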

  11. Statistics of high-level scene context

    PubMed Central

    Greene, Michelle R.

    2013-01-01

    Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed “things” in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by
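    As a concrete illustration of the "ensemble statistics" level of description, the sketch below computes two hypothetical ensemble features, object density and spatial spread, from labeled object centroids. The feature set is an assumption for illustration, not the study's exact statistics.

```python
import numpy as np

def ensemble_stats(centroids, scene_area=1.0):
    """Ensemble-level description of a labeled scene: how many 'things'
    it contains and how widely they spread, ignoring object identity.

    Hypothetical two-feature version for illustration."""
    pts = np.asarray(centroids, dtype=float)
    density = len(pts) / scene_area
    spread = float(pts.std(axis=0).mean()) if len(pts) > 1 else 0.0
    return {"density": density, "spread": spread}

# Stand-in centroids (normalized image coordinates) for one labeled scene.
kitchen = [(0.2, 0.3), (0.25, 0.35), (0.7, 0.4), (0.5, 0.8)]
print(ensemble_stats(kitchen)["density"])  # 4.0
```

    Feeding such per-scene feature vectors to a linear classifier mirrors the assessment described above.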

  12. Innovations in infrared scene simulator design

    NASA Astrophysics Data System (ADS)

    Lane, Richard; Heath, Jeffery L.

    1998-07-01

    The MIRAGE (Multispectral Infrared Animation Generation Equipment) Dynamic Infrared Scene Projector, is a joint project developed by Santa Barbara Infrared, Inc. and Indigo Systems Corporation. MIRAGE is a complete infrared scene projector, accepting 3-D rendered analog or digital scene data at its input, and providing all other electronics, collimated optics, calibration and thermal support subsystems needed to stimulate a unit under test with high-fidelity, dynamic infrared scenes. At the heart of MIRAGE is a 512 X 512 emitter array, with key innovations that solve several problems of existing designs. The read-in integrated circuit (RIIC) features 'snapshot' updating of the entire 512 X 512 resistive array, thus solving synchronization and latency problems inherent in 'rolling-update' type designs, where data is always changing somewhere on the emitter array at any given time. This custom mixed-signal RIIC also accepts digital scene information at its input, and uses on-board D/A converters and individual unit-cell buffer amplifiers to create analog scene levels, eliminating the complexity, noise, and limitations of speed and dynamic range associated with external generation of analog scene levels. The proprietary process used to create the advanced technology micro-membrane emitter elements allows a wide choice of resistor and structure materials while preserving the dissipation and providing a thermal time constant of the order of 5 ms. These innovations, along with a compact electronics subsystem based on a standard desktop PC, greatly reduce the complexity of the required external support electronics, resulting in a smaller, higher performance dynamic scene simulator system.

  13. Scene change detection based on multimodal integration

    NASA Astrophysics Data System (ADS)

    Zhu, Yingying; Zhou, Dongru

    2003-09-01

    Scene change detection is an essential step to automatic and content-based video indexing, retrieval and browsing. In this paper, a robust scene change detection and classification approach is presented, which analyzes audio, visual and textual sources and accounts for their inter-relations and coincidence to semantically identify and classify video scenes. Audio analysis focuses on the segmentation of the audio stream into four types of semantic data: silence, speech, music and environmental sound. Further processing on speech segments aims at locating speaker changes. Video analysis partitions the visual stream into shots. Text analysis can provide a supplemental source of clues for scene classification and indexing information. We integrate the video and audio analysis results to identify video scenes and use the text information detected by the video OCR technology or derived from available transcripts to refine scene classification. Results from single-source segmentation are in some cases suboptimal. By combining visual and aural features with the accessorial text information, the scene extraction accuracy is enhanced, and more semantically meaningful segmentations are developed. Experimental results are promising.
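    The shot-partitioning step of the visual stream can be illustrated with a classic histogram-difference cut detector. This is a minimal single-modality sketch (the 0.5 threshold is an arbitrary assumption); the approach described above fuses this kind of visual cue with audio segmentation and OCR-derived text.

```python
import numpy as np

def shot_boundaries(frames, threshold=0.5):
    """Flag shot boundaries where the normalized gray-level histogram
    difference between consecutive frames exceeds `threshold`.

    Minimal visual-only sketch of shot partitioning."""
    cuts = []
    prev = None
    for i, frame in enumerate(frames):
        hist = np.bincount(frame.ravel(), minlength=256).astype(float)
        hist /= hist.sum()
        # Half the L1 distance between histograms lies in [0, 1].
        if prev is not None and 0.5 * np.abs(hist - prev).sum() > threshold:
            cuts.append(i)
        prev = hist
    return cuts

# Two synthetic "shots": dark frames followed by bright frames.
dark = [np.full((4, 4), 10, dtype=np.uint8)] * 3
bright = [np.full((4, 4), 200, dtype=np.uint8)] * 3
print(shot_boundaries(dark + bright))  # [3]
```

    Grouping the resulting shots into scenes is where the audio and text cues come in.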

  14. Scene analysis in the natural environment

    PubMed Central

    Lewicki, Michael S.; Olshausen, Bruno A.; Surlykke, Annemarie; Moss, Cynthia F.

    2014-01-01

    The problem of scene analysis has been studied in a number of different fields over the past decades. These studies have led to important insights into problems of scene analysis, but not all of these insights are widely appreciated, and there remain critical shortcomings in current approaches that hinder further progress. Here we take the view that scene analysis is a universal problem solved by all animals, and that we can gain new insight by studying the problems that animals face in complex natural environments. In particular, the jumping spider, songbird, echolocating bat, and electric fish, all exhibit behaviors that require robust solutions to scene analysis problems encountered in the natural environment. By examining the behaviors of these seemingly disparate animals, we emerge with a framework for studying scene analysis comprising four essential properties: (1) the ability to solve ill-posed problems, (2) the ability to integrate and store information across time and modality, (3) efficient recovery and representation of 3D scene structure, and (4) the use of optimal motor actions for acquiring information to progress toward behavioral goals. PMID:24744740

  15. Thermal resolution specification in infrared scene projectors

    NASA Astrophysics Data System (ADS)

    LaVeigne, Joe; Franks, Greg; Danielson, Tom

    2015-05-01

Infrared scene projectors (IRSPs) are a key part of performing dynamic testing of infrared (IR) imaging systems. Two important properties of an IRSP system are apparent temperature and thermal resolution. Infrared scene projector technology continues to progress, with several systems capable of producing high apparent temperatures currently available or under development. These systems use different emitter pixel technologies, including resistive arrays, digital micro-mirror devices (DMDs), liquid crystals and LEDs, to produce dynamic infrared scenes. A common theme amongst these systems is the specification of the bit depth of the read-in integrated circuit (RIIC) or projector engine, as opposed to specifying the desired thermal resolution as a function of radiance (or apparent temperature). For IRSPs, producing an accurate simulation of a realistic scene or scenario may require simulating radiance values that range over multiple orders of magnitude. Under these conditions, the necessary resolution or "step size" at low temperature values may be much smaller than what is acceptable at very high temperature values. A single bit-depth value specified at the RIIC, especially when combined with variable transfer functions between commanded input and radiance output, may not offer the best representation of a customer's desired radiance resolution. In this paper, we discuss some of the various factors that affect the thermal resolution of a scene projector system, and propose some specification guidelines regarding thermal resolution to help better define the real needs of an IR scene projector system.
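The bit-depth-versus-thermal-resolution point can be made concrete with a toy model. Assuming the RIIC quantizes radiance uniformly over its range and a broadband L ∝ T^4 apparent-temperature relation (both simplifications; real IRSPs are band-limited and may use nonlinear transfer functions), one DAC step corresponds to a much coarser temperature step at the low end of the range:

```python
def temperature_step(bits, T_min, T_max):
    """Apparent-temperature step per LSB at both ends of the range when
    radiance is quantized uniformly.  Assumes L ∝ T**4 (broadband
    Stefan-Boltzmann), a simplification for illustration."""
    L_min, L_max = T_min**4, T_max**4
    dL = (L_max - L_min) / (2**bits - 1)          # one LSB in radiance units
    step_low = (L_min + dL)**0.25 - T_min         # K per LSB near T_min
    step_high = T_max - (L_max - dL)**0.25        # K per LSB near T_max
    return step_low, step_high

low, high = temperature_step(16, 300.0, 700.0)
print(low, high)  # the step near 300 K is roughly an order of magnitude coarser
```

With 16 bits over a 300-700 K range, the step near 300 K is about 0.03 K versus about 0.003 K near 700 K, which is why a single RIIC bit-depth number says little about the resolution actually delivered where it matters most.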

  16. The roles of scene priming and location priming in object-scene consistency effects

    PubMed Central

    Heise, Nils; Ansorge, Ulrich

    2014-01-01

Presenting consistent objects in scenes facilitates object recognition as compared to inconsistent objects. Yet the mechanisms by which scenes influence object recognition are still not understood. According to one theory, consistent scenes facilitate visual search for objects at expected places. Here, we investigated two predictions following from this theory: if visual search is responsible for consistency effects, consistency effects could be weaker (1) with better-primed than less-primed object locations, and (2) with less-primed than better-primed scenes. In Experiments 1 and 2, locations of objects were varied within a scene to a different degree (one, two, or four possible locations). In addition, object-scene consistency was studied as a function of progressive numbers of repetitions of the backgrounds. Because repeating locations and backgrounds could facilitate visual search for objects, these repetitions might alter the object-scene consistency effect by lowering location uncertainty. Although we find evidence for a significant consistency effect, we find no clear support for impacts of scene priming or location priming on the size of the consistency effect. Additionally, we find evidence that the consistency effect depends on the eccentricity of the target objects. These results point to only small influences of priming on object-scene consistency effects, but, all in all, the findings can be reconciled with a visual-search explanation of the consistency effect. PMID:24910628

  17. Editorial Note

    NASA Astrophysics Data System (ADS)

    van der Meer, F.; Ommen Kloeke, E.

    2015-07-01

With this editorial note we would like to update you on the performance of the International Journal of Applied Earth Observation and Geoinformation (JAG) and inform you about changes that have been made to the composition of the editorial team. Our journal publishes original papers that apply earth observation data to the management of natural resources and the environment. Environmental issues include biodiversity, land degradation, industrial pollution and natural hazards such as earthquakes, floods and landslides. As such, the scope is broad and ranges from conceptual and more fundamental work on earth observation and geospatial sciences to more problem-solving types of work. When I took over the role of Editor-in-Chief in 2012, the Publisher and I set ourselves the mission of positioning JAG in the top three remote sensing and GIS journals. To do so, we strove to attract high-quality, high-impact papers to the journal and to reduce the review turnaround time, making JAG a more attractive medium for publications. What has been achieved? Have we reached our ambitions? We can say that submissions have increased by over 23% over the last 12 months. Naturally, not all of these will lead to more papers, but at least a portion of the additional submissions should lead to growth in journal content and quality.

  18. [Suicidal single intraoral shooting by a shotgun--risk of misinterpretation at the crime scene].

    PubMed

    Woźniak, Krzysztof; Pohl, Jerzy

    2003-01-01

The authors present two cases of suicidal single intraoral shooting with a shotgun. The first case concerns a victim found near the peak of Swinica in the Tatra mountains. While the circumstances could have suggested a fatal fall from a height, and only minute, insignificant external injuries were found, the weapon found at the scene was the most important indicator of the actual cause of death. The second case concerns a 38-year-old male found in his family house in a village. Severe internal cranial injury (bone fragmentation) was diagnosed at the scene. A self-made weapon had been removed and hidden by a relative of the victim before the scene was examined. Before the regular forensic autopsy, an X-ray examination was conducted, which revealed multiple intracranial foreign bodies in the shape of shot. After the results of the autopsy, the relative of the deceased indicated the location of the weapon. PMID:14971300

  19. 47 CFR 80.1127 - On-scene communications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 5 2012-10-01 2012-10-01 false On-scene communications. 80.1127 Section 80... for Distress and Safety Communications § 80.1127 On-scene communications. (a) On-scene communications... unit coordinating search and rescue operations. (b) Control of on-scene communications is...

  20. Young drug addicts and the drug scene.

    PubMed

    Lucchini, R

    1985-01-01

    The drug scene generally comprises the following four distinct categories of young people: neophytes, addicts who enjoy a high status vis-à-vis other addicts, multiple drug addicts, and non-addicted drug dealers. It has its own evolution, hierarchy, structure and criteria of success and failure. The members are required to conform to the established criteria. The integration of the young addict into the drug scene is not voluntary in the real sense of the word, for he is caught between the culture that he rejects and the pseudo-culture of the drug scene. To be accepted into the drug scene, the neophyte must furnish proof of his reliability, which often includes certain forms of criminal activities. The addict who has achieved a position of importance in the drug world serves as a role model for behaviour to the neophyte. In a more advanced phase of addiction, the personality of the addict and the social functions of the drug scene are overwhelmed by the psychoactive effects of the drug, and this process results in the social withdrawal of the addict. The life-style of addicts and the subculture they develop are largely influenced by the type of drug consumed. For example, it is possible to speak of a heroin subculture and a cocaine subculture. In time, every drug scene deteriorates so that it becomes fragmented into small groups, which is often caused by legal interventions or a massive influx of new addicts. The fragmentation of the drug scene is followed by an increase in multiple drug abuse, which often aggravates the medical and social problems of drug addicts. PMID:4075000

  1. Visual Scenes are Categorized by Function

    PubMed Central

    Greene, Michelle R.; Baldassano, Christopher; Esteva, Andre; Beck, Diane M.; Fei-Fei, Li

    2015-01-01

How do we know that a kitchen is a kitchen by looking? Traditional models posit that scene categorization is achieved through recognizing necessary and sufficient features and objects, yet there is little consensus about what these may be. However, scene categories should reflect how we use visual information. We therefore test the hypothesis that scene categories reflect functions, or the possibilities for actions within a scene. Our approach is to compare human categorization patterns with predictions made by both functions and alternative models. We collected a large-scale scene category distance matrix (5 million trials) by asking observers to simply decide whether two images were from the same or different categories. Using the actions from the American Time Use Survey, we mapped actions onto each scene (1.4 million trials). We found a strong relationship between ranked category distance and functional distance (r=0.50, or 66% of the maximum possible correlation). The function model outperformed alternative models of object-based distance (r=0.33), visual features from a convolutional neural network (r=0.39), and lexical distance (r=0.27). Using hierarchical linear regression, we found that functions captured 85.5% of overall explained variance, with nearly half of the explained variance captured only by functions, implying that the predictive power of alternative models was due to their shared variance with the function-based model. These results challenge the dominant school of thought that visual features and objects are sufficient for scene categorization, suggesting instead that a scene’s category may be determined by the scene’s function. PMID:26709590

  2. Using articulated scene models for dynamic 3d scene analysis in vista spaces

    NASA Astrophysics Data System (ADS)

    Beuter, Niklas; Swadzba, Agnes; Kummert, Franz; Wachsmuth, Sven

    2010-09-01

In this paper we describe an efficient but detailed new approach to analyze complex dynamic scenes directly in 3D. The arising information is important for mobile robots solving tasks in the area of household robotics. In our work, a mobile robot builds an articulated scene model by observing the environment in the visual field, or rather in the so-called vista space. The articulated scene model consists of essential knowledge about the static background, about autonomously moving entities like humans or robots, and finally, in contrast to existing approaches, about articulated parts. These parts describe movable objects such as chairs, doors or other tangible entities which could be moved by an agent. The combination of the static scene, the self-moving entities and the movable objects in one articulated scene model enhances the calculation of each single part. The reconstruction process for parts of the static scene benefits from removal of the dynamic parts and, in turn, the moving parts can be extracted more easily through the knowledge about the background. In our experiments we show that the system simultaneously delivers an accurate static background model, moving persons and movable objects. This information in the articulated scene model enables a mobile robot to detect and keep track of interaction partners, to navigate safely through the environment and, finally, to strengthen the interaction with the user through the knowledge about the 3D articulated objects and 3D scene analysis.

  3. Infants detect changes in everyday scenes: the role of scene gist.

    PubMed

    Duh, Shinchieh; Wang, Su-hua

    2014-07-01

    When watching physical events, infants bring to bear prior knowledge about objects and readily detect changes that contradict physical rules. Here we investigate the possibility that scene gist may affect infants, as it affects adults, when detecting changes in everyday scenes. In Experiment 1, 15-month-old infants missed a perceptually salient change that preserved the gist of a generic outdoor scene; the same change was readily detected if infants had insufficient time to process the display and had to rely on perceptual information for change detection. In Experiment 2, 15-month-olds detected a perceptually subtle change that preserved the scene gist but violated the rule of object continuity, suggesting that physical rules may overpower scene gist in infants' change detection. Finally, Experiments 3 and 4 provided converging evidence for the effects of scene gist, showing that 15-month-olds missed a perceptually salient change that preserved the gist and detected a perceptually subtle change that disrupted the gist. Together, these results suggest that prior knowledge, including scene knowledge and physical knowledge, affects the process by which infants maintain their representations of everyday scenes. PMID:24751990

  4. Scene Construction, Visual Foraging, and Active Inference

    PubMed Central

    Mirza, M. Berk; Adams, Rick A.; Mathys, Christoph D.; Friston, Karl J.

    2016-01-01

    This paper describes an active inference scheme for visual searches and the perceptual synthesis entailed by scene construction. Active inference assumes that perception and action minimize variational free energy, where actions are selected to minimize the free energy expected in the future. This assumption generalizes risk-sensitive control and expected utility theory to include epistemic value; namely, the value (or salience) of information inherent in resolving uncertainty about the causes of ambiguous cues or outcomes. Here, we apply active inference to saccadic searches of a visual scene. We consider the (difficult) problem of categorizing a scene, based on the spatial relationship among visual objects where, crucially, visual cues are sampled myopically through a sequence of saccadic eye movements. This means that evidence for competing hypotheses about the scene has to be accumulated sequentially, calling upon both prediction (planning) and postdiction (memory). Our aim is to highlight some simple but fundamental aspects of the requisite functional anatomy; namely, the link between approximate Bayesian inference under mean field assumptions and functional segregation in the visual cortex. This link rests upon the (neurobiologically plausible) process theory that accompanies the normative formulation of active inference for Markov decision processes. In future work, we hope to use this scheme to model empirical saccadic searches and identify the prior beliefs that underwrite intersubject variability in the way people forage for information in visual scenes (e.g., in schizophrenia). PMID:27378899

  5. Moving through a multiplex holographic scene

    NASA Astrophysics Data System (ADS)

    Mrongovius, Martina

    2013-02-01

    This paper explores how movement can be used as a compositional element in installations of multiplex holograms. My holographic images are created from montages of hand-held video and photo-sequences. These spatially dynamic compositions are visually complex but anchored to landmarks and hints of the capturing process - such as the appearance of the photographer's shadow - to establish a sense of connection to the holographic scene. Moving around in front of the hologram, the viewer animates the holographic scene. A perception of motion then results from the viewer's bodily awareness of physical motion and the visual reading of dynamics within the scene or movement of perspective through a virtual suggestion of space. By linking and transforming the physical motion of the viewer with the visual animation, the viewer's bodily awareness - including proprioception, balance and orientation - play into the holographic composition. How multiplex holography can be a tool for exploring coupled, cross-referenced and transformed perceptions of movement is demonstrated with a number of holographic image installations. Through this process I expanded my creative composition practice to consider how dynamic and spatial scenes can be conveyed through the fragmented view of a multiplex hologram. This body of work was developed through an installation art practice and was the basis of my recently completed doctoral thesis: 'The Emergent Holographic Scene — compositions of movement and affect using multiplex holographic images'.

  6. Scene Construction, Visual Foraging, and Active Inference.

    PubMed

    Mirza, M Berk; Adams, Rick A; Mathys, Christoph D; Friston, Karl J

    2016-01-01

    This paper describes an active inference scheme for visual searches and the perceptual synthesis entailed by scene construction. Active inference assumes that perception and action minimize variational free energy, where actions are selected to minimize the free energy expected in the future. This assumption generalizes risk-sensitive control and expected utility theory to include epistemic value; namely, the value (or salience) of information inherent in resolving uncertainty about the causes of ambiguous cues or outcomes. Here, we apply active inference to saccadic searches of a visual scene. We consider the (difficult) problem of categorizing a scene, based on the spatial relationship among visual objects where, crucially, visual cues are sampled myopically through a sequence of saccadic eye movements. This means that evidence for competing hypotheses about the scene has to be accumulated sequentially, calling upon both prediction (planning) and postdiction (memory). Our aim is to highlight some simple but fundamental aspects of the requisite functional anatomy; namely, the link between approximate Bayesian inference under mean field assumptions and functional segregation in the visual cortex. This link rests upon the (neurobiologically plausible) process theory that accompanies the normative formulation of active inference for Markov decision processes. In future work, we hope to use this scheme to model empirical saccadic searches and identify the prior beliefs that underwrite intersubject variability in the way people forage for information in visual scenes (e.g., in schizophrenia). PMID:27378899

  7. Maxwellian Eye Fixation during Natural Scene Perception

    PubMed Central

    Duchesne, Jean; Bouvier, Vincent; Guillemé, Julien; Coubard, Olivier A.

    2012-01-01

    When we explore a visual scene, our eyes make saccades to jump rapidly from one area to another and fixate regions of interest to extract useful information. While the role of fixation eye movements in vision has been widely studied, their random nature has been a hitherto neglected issue. Here we conducted two experiments to examine the Maxwellian nature of eye movements during fixation. In Experiment 1, eight participants were asked to perform free viewing of natural scenes displayed on a computer screen while their eye movements were recorded. For each participant, the probability density function (PDF) of eye movement amplitude during fixation obeyed the law established by Maxwell for describing molecule velocity in gas. Only the mean amplitude of eye movements varied with expertise, which was lower in experts than novice participants. In Experiment 2, two participants underwent fixed time, free viewing of natural scenes and of their scrambled version while their eye movements were recorded. Again, the PDF of eye movement amplitude during fixation obeyed Maxwell's law for each participant and for each scene condition (normal or scrambled). The results suggest that eye fixation during natural scene perception describes a random motion regardless of top-down or of bottom-up processes. PMID:23226987

  8. Maxwellian eye fixation during natural scene perception.

    PubMed

    Duchesne, Jean; Bouvier, Vincent; Guillemé, Julien; Coubard, Olivier A

    2012-01-01

    When we explore a visual scene, our eyes make saccades to jump rapidly from one area to another and fixate regions of interest to extract useful information. While the role of fixation eye movements in vision has been widely studied, their random nature has been a hitherto neglected issue. Here we conducted two experiments to examine the Maxwellian nature of eye movements during fixation. In Experiment 1, eight participants were asked to perform free viewing of natural scenes displayed on a computer screen while their eye movements were recorded. For each participant, the probability density function (PDF) of eye movement amplitude during fixation obeyed the law established by Maxwell for describing molecule velocity in gas. Only the mean amplitude of eye movements varied with expertise, which was lower in experts than novice participants. In Experiment 2, two participants underwent fixed time, free viewing of natural scenes and of their scrambled version while their eye movements were recorded. Again, the PDF of eye movement amplitude during fixation obeyed Maxwell's law for each participant and for each scene condition (normal or scrambled). The results suggest that eye fixation during natural scene perception describes a random motion regardless of top-down or of bottom-up processes. PMID:23226987
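The Maxwell density the authors report for fixational eye-movement amplitudes can be written down and sanity-checked directly. The scale value below is hypothetical, not taken from the study:

```python
import math

def maxwell_pdf(x, a):
    """Maxwell-Boltzmann probability density with scale parameter `a`,
    the form reported for fixational eye-movement amplitudes."""
    return math.sqrt(2.0 / math.pi) * x**2 * math.exp(-x**2 / (2 * a**2)) / a**3

a = 0.3  # hypothetical scale, e.g. in degrees of visual angle
xs = [i * 0.001 for i in range(5000)]
integral = sum(maxwell_pdf(x, a) for x in xs) * 0.001  # crude Riemann sum
print(round(integral, 3))  # → 1.0, i.e. it is a proper density
```

The density peaks at x = sqrt(2)·a, so the mean amplitude scales directly with a, which is the single parameter the expertise effect (experts vs. novices) would shift.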

  9. Research on target scene generation for hardware-in-the-loop simulation of four-element infrared seeker

    NASA Astrophysics Data System (ADS)

    Yu, Jinsong; Xu, Bo; Hao, Wangsong; Li, Xingshan

    2006-11-01

    To satisfy the need of hardware-in-the-loop simulation of four-element infrared seeker, a method of dynamic infrared scene generation based on "direct signal inject" is proposed. Infrared scene signals generated by model calculation are composed of target movement, disturbers launching and complex background of sky or ground. The signals are directly injects into the electrical cabin of seeker for verification and modification of the algorithms of tracking and anti-jamming, thus the complicated target simulator consisting of black body, turntable, and optical system is not required. The dynamic infrared scene generation techniques based on the four-element infrared guidance principle and the modeling of infrared scene are investigated in detail. Moreover, the implementation of the actual system is given to prove the feasibility of the method in practice.

  10. Dynamic infrared scene projection: a review

    NASA Astrophysics Data System (ADS)

    Williams, Owen M.

    1998-12-01

    Since the early 1990s, there has been major progress in the developing field of dynamic infrared scene projection, driven principally by the need for hardware-in-the-loop simulation of the oncoming generation of imaging infrared missile seekers and more recently by the needs for realistic simulation of the new generation of thermal imagers and forward-looking infrared systems. In this paper the current status of the dynamic infrared projection field is reviewed, commencing with an outline of its history. The requirements for dynamic infrared scene projection are examined, allowing a set of validity criteria to be developed. Each class of infrared projector that has been investigated—emissive, transmissive, reflective, laser scanner and phosphor—together with the specific technology initiatives within the class is described and examined against the validity criteria. In this way the leading dynamic infrared scene projection technologies are identified.

  11. Perception of saturation in natural scenes.

    PubMed

    Schiller, Florian; Gegenfurtner, Karl R

    2016-03-01

    We measured how well perception of color saturation in natural scenes can be predicted by different measures that are available in the literature. We presented 80 color images of natural scenes or their gray-scale counterparts to our observers, who were asked to choose the pixel from each image that appeared to be the most saturated. We compared our observers' choices to the predictions of seven popular saturation measures. For the color images, all of the measures predicted perception of saturation quite well, with CIECAM02 performing best. Differences between the measures were small but systematic. When gray-scale images were viewed, observers still chose pixels whose counterparts in the color images were saturated above average. This indicates that image structure and prior knowledge can be relevant to perception of saturation. Nevertheless, our results also show that saturation in natural scenes can be specified quite well without taking these factors into account. PMID:26974924
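A minimal sketch of the observers' task: score each pixel with a saturation measure and report the argmax. HSV saturation is used here as a stand-in because it is trivial to compute; the study's best-performing measure, CIECAM02, is substantially more involved:

```python
def hsv_saturation(r, g, b):
    """HSV saturation S = (max - min) / max, one of the simple measures;
    a stand-in for the more elaborate CIECAM02 correlate."""
    mx, mn = max(r, g, b), min(r, g, b)
    return 0.0 if mx == 0 else (mx - mn) / mx

def most_saturated_pixel(pixels):
    """Index of the pixel a saturation measure would predict observers choose."""
    return max(range(len(pixels)), key=lambda i: hsv_saturation(*pixels[i]))

# tiny hypothetical 'image' of RGB triples in [0, 1]
pixels = [(0.5, 0.5, 0.5), (0.9, 0.1, 0.1), (0.6, 0.6, 0.2)]
print(most_saturated_pixel(pixels))  # → 1 (the near-pure red)
```

Comparing such per-measure argmax pixels against the pixels observers actually chose is, in essence, the evaluation the study performs across its seven measures.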

12. Knowledge-Based Segmentation of Road Scenes

    NASA Astrophysics Data System (ADS)

    Duane, G. S.

    1986-12-01

A rule-based expert system is being developed to segment and label the major elements in digitized images of simple road scenes. The system acts on a set of uniform regions, generated by the split-and-merge algorithm, which may be regarded as a non-standard primal sketch. Simple descriptors attached to regions and to boundaries between regions are referenced by the rule base, which incorporates general knowledge of the typical behavior of the split-and-merge algorithm, as well as specific knowledge of the scene domain. The rules merge oversegmented regions to form an essentially correct description of the scene. The system is intended to investigate the limits of applicability of methods which do not introduce an explicit three-dimensional (3D) representation.

  13. The polymorphism of crime scene investigation: An exploratory analysis of the influence of crime and forensic intelligence on decisions made by crime scene examiners.

    PubMed

    Resnikoff, Tatiana; Ribaux, Olivier; Baylon, Amélie; Jendly, Manon; Rossy, Quentin

    2015-12-01

A growing body of scientific literature recurrently indicates that crime and forensic intelligence influence how crime scene investigators make decisions in their practices. This study further scrutinises this intelligence-led view of crime scene examination. It analyses results obtained from two questionnaires. Data were collected from nine chiefs of Intelligence Units (IUs) and 73 Crime Scene Examiners (CSEs) working in forensic science units (FSUs) in the French-speaking part of Switzerland (six cantonal police agencies). Four salient elements emerged: (1) communication channels do exist between IUs and FSUs across the police agencies under consideration; (2) most CSEs take into account the crime intelligence disseminated; (3) CSEs make differentiated but significant use of this kind of intelligence in their daily practice; (4) this intelligence probably deeply influences the most concerned CSEs, especially in the selection of the type of material/trace to detect, collect, analyse and exploit. These results contribute to deciphering the subtle dialectic articulating crime intelligence and crime scene investigation, and to expressing further the polymorphic role of CSEs, beyond their most recognised input to the justice system. Indeed, they appear to be central, but implicit, stakeholders in an intelligence-led style of policing. PMID:26583959

  14. Coarse-to-fine categorization of visual scenes in scene-selective cortex.

    PubMed

    Musel, Benoit; Kauffmann, Louise; Ramanoël, Stephen; Giavarini, Coralie; Guyader, Nathalie; Chauvin, Alan; Peyrin, Carole

    2014-10-01

    Neurophysiological, behavioral, and computational data indicate that visual analysis may start with the parallel extraction of different elementary attributes at different spatial frequencies and follows a predominantly coarse-to-fine (CtF) processing sequence (low spatial frequencies [LSF] are extracted first, followed by high spatial frequencies [HSF]). Evidence for CtF processing within scene-selective cortical regions is, however, still lacking. In the present fMRI study, we tested whether such processing occurs in three scene-selective cortical regions: the parahippocampal place area (PPA), the retrosplenial cortex, and the occipital place area. Fourteen participants were subjected to functional scans during which they performed a categorization task of indoor versus outdoor scenes using dynamic scene stimuli. Dynamic scenes were composed of six filtered images of the same scene, from LSF to HSF or from HSF to LSF, allowing us to mimic a CtF or the reverse fine-to-coarse (FtC) sequence. Results showed that only the PPA was more activated for CtF than FtC sequences. Equivalent activations were observed for both sequences in the retrosplenial cortex and occipital place area. This study suggests for the first time that CtF sequence processing constitutes the predominant strategy for scene categorization in the PPA. PMID:24738768
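The LSF-to-HSF stimulus sequence can be approximated by low-passing one scene at increasing spatial-frequency cutoffs; reversing the list gives the fine-to-coarse (FtC) order. The ideal (hard) frequency mask below is a simplification of the filtering typically used for such stimuli:

```python
import numpy as np

def sf_filtered_sequence(img, cutoffs):
    """Low-pass a gray-scale image at each spatial-frequency cutoff
    (cycles/image) to mimic an LSF-to-HSF (coarse-to-fine) sequence.
    Uses an ideal circular mask in the Fourier domain, a simplification."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.hypot(yy, xx)            # spatial frequency of each FFT bin
    frames = []
    for c in cutoffs:
        mask = radius <= c
        frames.append(np.real(np.fft.ifft2(np.fft.ifftshift(f * mask))))
    return frames

rng = np.random.default_rng(0)
scene = rng.random((64, 64))             # hypothetical gray-scale scene
ctf = sf_filtered_sequence(scene, [2, 4, 8, 16, 32, 64])  # six frames, LSF→HSF
print(len(ctf), ctf[0].shape)
```

Six frames per dynamic stimulus matches the design described in the abstract; `ctf[::-1]` would give the FtC control sequence.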

  15. Optimal exposure sets for high dynamic range scenes

    NASA Astrophysics Data System (ADS)

    Valli Kumari, V.; RaviKiran, B.; Raju, K. V. S. V. N.; Shajahan Basha, S. A.

    2011-10-01

The dynamic range of many natural scenes is far greater than the dynamic range of imaging devices. These scenes present a challenge to consumer digital cameras. The well-known technique for capturing the full dynamic range of a scene is to fuse multiple images of the same scene, usually by combining three or five different exposures. Some cameras, such as the Pentax K-7, always combine fixed exposures to produce the output; however, the choice of exposures should adapt to the scene characteristics. We propose an optimal solution for dynamically selecting the exposure set.
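The abstract does not spell out the selection algorithm, but the core idea, covering the scene's dynamic range with as few shots as the sensor's per-exposure range allows rather than using a fixed bracket, can be sketched as a greedy tiling in EV units (all parameters here are hypothetical):

```python
def optimal_exposures(scene_min_ev, scene_max_ev, camera_range_ev=8.0, overlap_ev=1.0):
    """Greedy sketch: choose the fewest exposure midpoints (in EV) whose
    per-shot dynamic ranges, overlapped for fusion, tile the scene range.
    An assumed illustration; the paper's exact method is not given in the
    abstract."""
    step = camera_range_ev - overlap_ev
    exposures = []
    ev = scene_min_ev + camera_range_ev / 2.0
    while ev - camera_range_ev / 2.0 < scene_max_ev:
        exposures.append(ev)
        ev += step
    return exposures

# A 20-EV scene with a hypothetical 8-EV sensor needs 3 shots, not a fixed 5:
print(optimal_exposures(-10.0, 10.0))  # → [-6.0, 1.0, 8.0]
```

A low-contrast scene (say 6 EV) would collapse to a single exposure under the same rule, which is exactly the scene-adaptive behavior a fixed bracket cannot provide.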

  16. Relating Spatial Patterns in Image Data to Scene Characteristics

    NASA Technical Reports Server (NTRS)

    Strahler, A. H.; Woodcock, C. E.

    1983-01-01

    In remote sensing, the primary goal is accurate scene inference, in which characteristics of the scene are inferred from the image data. More effective inference of scene characteristics can be accomplished through the use of techniques that use explicit models of spatial pattern. Spatial patterns in image data are functionally related to the size and spacing of elements in the scene and to the spatial resolution of the image data. At resolutions where variance is high, scene inference techniques should rely heavily on data from the spatial domain. As variance decreases, effective scene inference will increasingly rely on spectral data.
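The resolution dependence of image variance is easy to demonstrate: block-average a scene at successively coarser cell sizes and watch the variance fall once cells exceed the size of the scene elements. The toy scene below (small bright elements on a dark background) is an assumption for illustration:

```python
import numpy as np

def variance_vs_resolution(image, factors):
    """Image variance after block-averaging by each factor: a simple proxy
    for how spatial variance falls as resolution coarsens past the size of
    the scene elements (the regime where spectral inference dominates)."""
    out = {}
    for f in factors:
        h, w = (image.shape[0] // f) * f, (image.shape[1] // f) * f
        coarse = image[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))
        out[f] = float(coarse.var())
    return out

# hypothetical scene: sparse bright 'elements' on a dark background
rng = np.random.default_rng(1)
scene = (rng.random((64, 64)) > 0.9).astype(float)
print(variance_vs_resolution(scene, [1, 4, 16]))
```

In the high-variance regime (fine cells relative to element size), spatial-domain techniques carry the inference; as the variance collapses at coarse resolution, spectral data must take over, which is the paper's central point.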

  17. Vocational Guidance Requests within the International Scene

    ERIC Educational Resources Information Center

    Goodman, Jane; Gillis, Sarah

    2009-01-01

    This article summarizes the work of a diverse group of researchers and practitioners from 5 continents on "Vocational Guidance Requests Within the International Scene" presented in the discussion group at a symposium of the International Association for Educational and Vocational Guidance, the Society for Vocational Psychology, and the National…

  18. Processing of Unattended Emotional Visual Scenes

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Nummenmaa, Lauri

    2007-01-01

    Prime pictures of emotional scenes appeared in parafoveal vision, followed by probe pictures either congruent or incongruent in affective valence. Participants responded whether the probe was pleasant or unpleasant (or whether it portrayed people or animals). Shorter latencies for congruent than for incongruent prime-probe pairs revealed affective…

  19. Partially sparse imaging of stationary indoor scenes

    NASA Astrophysics Data System (ADS)

    Ahmad, Fauzia; Amin, Moeness G.; Dogaru, Traian

    2014-12-01

    In this paper, we exploit the notion of partial sparsity for scene reconstruction associated with through-the-wall radar imaging of stationary targets under reduced data volume. Partial sparsity implies that the scene being imaged consists of a sparse part and a dense part, with the support of the latter assumed to be known. For the problem at hand, sparsity is represented by a few stationary indoor targets, whereas the high scene density is defined by exterior and interior walls. Prior knowledge of wall positions and extent may be available either through building blueprints or from prior surveillance operations. The contributions of the exterior and interior walls are removed from the data through the use of projection matrices, which are determined from wall- and corner-specific dictionaries. The projected data, with enhanced sparsity, is then processed using l1-norm reconstruction techniques. Numerical electromagnetic data is used to demonstrate the effectiveness of the proposed approach for imaging stationary indoor scenes using a reduced set of measurements.
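    The project-then-reconstruct idea can be sketched as follows. This is a toy stand-in, not the paper's method: the wall dictionary is a random placeholder rather than a wall- or corner-specific one, plain ISTA stands in for the l1-norm solver, and all names and parameters are illustrative.

```python
import numpy as np

def ista(A, y, lam, iters):
    """Plain iterative soft-thresholding for the l1-regularised problem
    min_x 0.5*||Ax - y||^2 + lam*||x||_1 (a generic l1 solver)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # shrink
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 80)) / np.sqrt(40)   # toy imaging operator
W = rng.standard_normal((40, 3))                  # placeholder wall dictionary
x_true = np.zeros(80)
x_true[[5, 17, 60]] = [1.0, -2.0, 1.5]            # a few stationary targets
wall_part = W @ np.array([4.0, -1.0, 2.0])
y = A @ x_true + wall_part                        # sparse targets + dense walls

# project the known wall contribution out of both data and operator, then solve
P = np.eye(40) - W @ np.linalg.pinv(W)
x_hat = ista(P @ A, P @ y, lam=0.02, iters=2000)
```

    The projection `P` annihilates the wall contribution exactly, leaving a purely sparse recovery problem.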

  20. Creating false memories for visual scenes.

    PubMed

    Miller, M B; Gazzaniga, M S

    1998-06-01

    Creating false memories has become an important tool to investigate the processes underlying true memories. In the course of investigating the constructive and/or reconstructive processes underlying the formation of false memories, it has become clear that paradigms are needed that can create false memories reliably in a variety of laboratory settings. In particular, neuroimaging techniques present certain constraints in terms of subject response and timing of stimuli that a false memory paradigm needs to comply with. We have developed a picture paradigm in which items that did not occur in a scene are falsely recognized almost as often as items that did occur are correctly recognized. It uses a single presentation of pictures with thematic, stereotypical scenes (e.g. a beach scene). Some of the exemplars from the scene were removed (e.g. a beach ball) and used as lures during an auditory recognition test. Subjects' performance on this paradigm was compared with their performance on the word paradigm reintroduced by Roediger and McDermott. The word paradigm has been useful in creating false memories in several neuroimaging studies because of the high frequency of false recognition for critical lures (words not presented but closely associated with lists of words that were presented) and the strong subjective sense of remembering accompanying these false recognitions. However, it has several limitations, including a small number of lures and a particular source confusion. The picture paradigm avoids these limitations and produces identical effects on normal subjects. PMID:9705061

  1. Parafoveal Semantic Processing of Emotional Visual Scenes

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Lang, Peter J.

    2005-01-01

    The authors investigated whether emotional pictorial stimuli are especially likely to be processed in parafoveal vision. Pairs of emotional and neutral visual scenes were presented parafoveally (2.1[degrees] or 2.5[degrees] of visual angle from a central fixation point) for 150-3,000 ms, followed by an immediate recognition test (500-ms delay).…

  2. Common high-resolution MMW scene generator

    NASA Astrophysics Data System (ADS)

    Saylor, Annie V.; McPherson, Dwight A.; Satterfield, H. DeWayne; Sholes, William J.; Mobley, Scott B.

    2001-08-01

    The development of a modularized millimeter wave (MMW) target and background high-resolution scene generator is reported. The scene generator's underlying algorithms are applicable to both digital and real-time hardware-in-the-loop (HWIL) simulations. The scene generator will be configurable for a variety of MMW and multi-mode sensors employing state-of-the-art signal processing techniques. At present, digital simulations for MMW and multi-mode sensor development and testing are custom-designed by the seeker vendor and are verified, validated, and operated by both the vendor and government in simulation-based acquisition. A typical competition may involve several vendors, each requiring high-resolution target and background models for proper exercise of seeker algorithms. There is a need and desire by both the government and sensor vendors to eliminate costly redesign and redevelopment of digital simulations. Additional efficiencies are realized by assuring commonality between digital and HWIL simulation MMW scene generators, eliminating duplication of verification and validation efforts.

  3. Large-scale infrared scene projectors

    NASA Astrophysics Data System (ADS)

    Murray, Darin A.

    1999-07-01

    Large-scale infrared scene projectors typically have unique opto-mechanical characteristics associated with their application. This paper outlines two large-scale zoom lens assemblies with different environmental and packaging constraints. Various challenges and their respective solutions are discussed and presented.

  4. Aerial Scene Recognition using Efficient Sparse Representation

    SciTech Connect

    Cheriyadat, Anil M

    2012-01-01

    Advanced scene recognition systems for processing large volumes of high-resolution aerial image data are in great demand today. However, automated scene recognition remains a challenging problem. Efficient encoding and representation of spatial and structural patterns in the imagery are key in developing automated scene recognition algorithms. We describe an image representation approach that uses simple and computationally efficient sparse code computation to generate accurate features capable of producing excellent classification performance using linear SVM kernels. Our method exploits unlabeled low-level image feature measurements to learn a set of basis vectors. We project the low-level features onto the basis vectors and use simple soft threshold activation function to derive the sparse features. The proposed technique generates sparse features at a significantly lower computational cost than other methods [Yang10, newsam11], yet it produces comparable or better classification accuracy. We apply our technique to high-resolution aerial image datasets to quantify the aerial scene classification performance. We demonstrate that the dense feature extraction and representation methods are highly effective for automatic large-facility detection on wide area high-resolution aerial imagery.
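    The project-and-soft-threshold feature computation described above can be sketched in a few lines. The variable names (X, D, alpha) and the random basis are illustrative assumptions, not the paper's learned dictionary.

```python
import numpy as np

def sparse_features(X, D, alpha):
    """Project low-level feature vectors X onto basis vectors D, then apply a
    soft-threshold activation: small projections are zeroed, large ones are
    shrunk toward zero, yielding sparse feature codes."""
    Z = X @ D.T                                              # projections onto basis
    return np.sign(Z) * np.maximum(np.abs(Z) - alpha, 0.0)   # soft threshold

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 8))     # 16 basis vectors over 8-dim features (toy)
X = rng.standard_normal((100, 8))    # 100 low-level feature measurements (toy)
F = sparse_features(X, D, alpha=2.0)
```

    The resulting codes F would then feed a linear SVM; the soft threshold is what makes them sparse and cheap to compute.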

  5. Light field constancy within natural scenes.

    PubMed

    Mury, Alexander A; Pont, Sylvia C; Koenderink, Jan J

    2007-10-10

    The structure of light fields of natural scenes is highly complex due to high frequencies in the radiance distribution function. However, it is the low-order properties of light that determine the appearance of common matte materials. We describe the local light field in terms of spherical harmonics and analyze the qualitative properties and physical meaning of the low-order components. We take a first step in the further development of Gershun's classical work on the light field by extending his description beyond the 3D vector field, toward a more complete description of the illumination using tensors. We show that the first three components, namely, the monopole (density of light), the dipole (light vector), and the quadrupole (squash tensor) suffice to describe a wide range of qualitatively different light fields. In this paper we address a related issue, namely, the spatial properties of light fields within natural scenes. We want to find out to what extent local light fields change from point to point and how different orders behave. We found experimentally that the low-order components of the light field are rather constant over the scenes whereas high-order components are not. Using very simple models, we found a strong relationship between the low-order components and the geometrical layouts of the scenes. PMID:17932545
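    The two lowest-order components named above have simple estimators from sampled radiance. This is a hedged sketch under the assumption of directions sampled uniformly on the sphere; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def low_order_light_field(directions, radiance):
    """Estimate the monopole (density of light) as the mean radiance over all
    directions, and the dipole (light vector) as the radiance-weighted mean
    direction, from uniformly sampled directions on the unit sphere."""
    monopole = radiance.mean()
    dipole = (radiance[:, None] * directions).mean(axis=0)
    return monopole, dipole

# uniform random directions on the unit sphere
rng = np.random.default_rng(1)
v = rng.standard_normal((20000, 3))
dirs = v / np.linalg.norm(v, axis=1, keepdims=True)
rad = 1.0 + dirs[:, 2]          # a "ceiling light": brighter from above (+z)
mono, dip = low_order_light_field(dirs, rad)
```

    For this toy radiance the monopole tends to 1 and the light vector points up along +z, matching the intuition that the dipole captures the net direction of light flow.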

  6. Extracting text from real-world scenes

    NASA Technical Reports Server (NTRS)

    Bixler, J. Patrick; Miller, David P.

    1989-01-01

    Many scenes contain significant textual information that can be extremely helpful for understanding and/or navigation. For example, text-based information can frequently be the primary cure used for navigating inside buildings. A subject might first read a marquee, then look for an appropriate hallway and walk along reading door signs and nameplates until the destination is found. Optical character recognition has been studied extensively in recent years, but has been applied almost exclusively to printed documents. As these techniques improve it becomes reasonable to ask whether they can be applied to an arbitrary scene in an attempt to extract text-based information. Before an automated system can be expected to navigate by reading signs, however, the text must first be segmented from the rest of the scene. This paper discusses the feasibility of extracting text from an arbitrary scene and using that information to guide the navigation of a mobile robot. Considered are some simple techniques for first locating text components and then tracking the individual characters to form words and phrases. Results for some sample images are also presented.

  7. Augustus De Morgan behind the Scenes

    ERIC Educational Resources Information Center

    Simmons, Charlotte

    2011-01-01

    Augustus De Morgan's support was crucial to the achievements of the four mathematicians whose work is considered greater than his own. This article explores the contributions he made to mathematics from behind the scenes by supporting the work of Hamilton, Boole, Gompertz, and Ramchundra.

  8. Scene reduction for subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Lewandowska (Tomaszewska), Anna

    2016-01-01

    Evaluation of image quality is important for many image processing systems, such as those used for acquisition, compression, restoration, enhancement, or reproduction. Its measurement is often accompanied by user studies, in which a group of observers rank or rate results of several algorithms. Such user studies, known as subjective image quality assessment experiments, can be very time consuming and do not guarantee conclusive results. This paper is intended to help design an efficient and rigorous quality assessment experiment. We propose a method of limiting the number of scenes that need to be tested, which can significantly reduce the experimental effort and still capture relevant scene-dependent effects. To achieve this, we employ a clustering technique and evaluate it on the basis of compactness and separation criteria. The correlation between the results obtained from a set of images in an initial database and the results received from the reduced experiment is analyzed. Finally, we propose a procedure for reducing the number of initial scenes. Four different assessment techniques were tested: single stimulus, double stimulus, forced choice, and similarity judgments. We conclude that in most cases, 9 to 12 judgments per evaluated algorithm for a large scene collection are sufficient to reduce the initial set of images.
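    The clustering-based scene reduction can be sketched with a minimal k-means over per-scene score vectors, keeping one representative scene per cluster. This is an assumption-laden toy (synthetic scores, deterministic initialisation for reproducibility), not the paper's actual procedure or criteria.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal k-means: cluster the scenes' quality-score vectors so that one
    representative per cluster can stand in for the whole set."""
    # deterministic init for this sketch: spread starting centers over the data
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):                      # update non-empty clusters only
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers

# toy data: per-scene scores for 4 algorithms, in two scene "difficulty" groups
rng = np.random.default_rng(42)
scenes = np.vstack([rng.normal(0.0, 0.1, (10, 4)),
                    rng.normal(1.0, 0.1, (10, 4))])
labels, centers = kmeans(scenes, k=2)
representatives = [int(np.flatnonzero(labels == c)[0]) for c in range(2)]
```

    Testing only the representative scenes, rather than all twenty, is the reduction the paper aims for.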

  9. Exploiting spatial descriptions in visual scene analysis.

    PubMed

    Ziegler, Leon; Johannsen, Katrin; Swadzba, Agnes; De Ruiter, Jan P; Wachsmuth, Sven

    2012-08-01

    The reliable automatic visual recognition of indoor scenes with complex object constellations using only sensor data is a nontrivial problem. In order to improve the construction of an accurate semantic 3D model of an indoor scene, we exploit human-produced verbal descriptions of the relative location of pairs of objects. This requires the ability to deal with different spatial reference frames (RF) that humans use interchangeably. In German, both the intrinsic and relative RF are used frequently, which often leads to ambiguities in referential communication. We assume that there are certain regularities that help in specific contexts. In a first experiment, we investigated how speakers of German describe spatial relationships between different pieces of furniture. This gave us important information about the distribution of the RFs used for furniture-predicate combinations, and by implication also about the preferred spatial predicate. The results of this experiment are compiled into a computational model that extracts partial orderings of spatial arrangements between furniture items from verbal descriptions. In the implemented system, the visual scene is initially scanned by a 3D camera system. From the 3D point cloud, we extract point clusters that suggest the presence of certain furniture objects. We then integrate the partial orderings extracted from the verbal utterances incrementally and cumulatively with the estimated probabilities about the identity and location of objects in the scene, and also estimate the probable orientation of the objects. This allows the system to significantly improve both the accuracy and richness of its visual scene representation. PMID:22806654

  10. Remote Dynamic Three-Dimensional Scene Reconstruction

    PubMed Central

    Yang, You; Liu, Qiong; Ji, Rongrong; Gao, Yue

    2013-01-01

    Remote dynamic three-dimensional (3D) scene reconstruction renders the motion structure of a 3D scene remotely by means of both the color video and the corresponding depth maps. It has shown a great potential for telepresence applications like remote monitoring and remote medical imaging. Under this circumstance, video-rate and high resolution are two crucial characteristics for building a good depth map, which however mutually contradict during the depth sensor capturing. Therefore, recent works prefer to only transmit the high-resolution color video to the terminal side, and subsequently the scene depth is reconstructed by estimating the motion vectors from the video, typically using the propagation based methods towards a video-rate depth reconstruction. However, in most of the remote transmission systems, only the compressed color video stream is available. As a result, color video restored from the streams has quality losses, and thus the extracted motion vectors are inaccurate for depth reconstruction. In this paper, we propose a precise and robust scheme for dynamic 3D scene reconstruction by using the compressed color video stream and their inaccurate motion vectors. Our method rectifies the inaccurate motion vectors by analyzing and compensating their quality losses, motion vector absence in spatial prediction, and dislocation in near-boundary region. This rectification ensures the depth maps can be compensated in both video-rate and high resolution at the terminal side towards reducing the system consumption on both the compression and transmission. Our experiments validate that the proposed scheme is robust for depth map and dynamic scene reconstruction on long propagation distance, even with high compression ratio, outperforming the benchmark approaches with at least 3.3950 dB quality gains for remote applications. PMID:23667417

  11. A graph theoretic approach to scene matching

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1991-01-01

    The ability to match two scenes is a fundamental requirement in a variety of computer vision tasks. A graph theoretic approach to inexact scene matching is presented which is useful in dealing with problems due to imperfect image segmentation. A scene is described by a set of graphs, with nodes representing objects and arcs representing relationships between objects. Each node has a set of values representing the relations between pairs of objects, such as angle, adjacency, or distance. With this method of scene representation, the task in scene matching is to match two sets of graphs. Because of segmentation errors, variations in camera angle, illumination, and other conditions, an exact match between the sets of observed and stored graphs is usually not possible. In the developed approach, the problem is represented as an association graph, in which each node represents a possible mapping of an observed region to a stored object, and each arc represents the compatibility of two mappings. Nodes and arcs have weights indicating the merit of a region-object mapping and the degree of compatibility between two mappings. A match between the two graphs corresponds to a clique, or fully connected subgraph, in the association graph. The task is to find the clique that represents the best match. Fuzzy relaxation is used to update the node weights using the contextual information contained in the arcs and neighboring nodes. This simplifies the evaluation of cliques. A method of handling oversegmentation and undersegmentation problems is also presented. The approach is tested with a set of realistic images which exhibit many types of segmentation errors.
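    The fuzzy relaxation step on the association graph can be sketched as an iterative reinforcement of mutually compatible mappings. This is a minimal illustration under assumed names and a toy compatibility matrix, not the authors' exact update rule.

```python
import numpy as np

def relax(node_weights, compat, iters=10):
    """Each association-graph node is a candidate region-to-object mapping;
    compat[i, j] scores how compatible mappings i and j are. Node weights are
    repeatedly reinforced by the support of compatible neighbours, then
    renormalised, so mutually consistent mappings (cliques) stand out."""
    w = node_weights.astype(float).copy()
    for _ in range(iters):
        support = compat @ w          # contextual support from neighbours
        w *= 1.0 + support
        w /= w.max()                  # keep weights in [0, 1]
    return w

# three candidate mappings: 0 and 1 are mutually compatible, 2 supports neither
compat = np.array([[0.0, 1.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
w = relax(np.array([0.5, 0.5, 0.5]), compat)   # weight of mapping 2 collapses
```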

  13. Out of Mind, Out of Sight: Unexpected Scene Elements Frequently Go Unnoticed Until Primed

    PubMed Central

    Zimbardo, Philip G.

    2013-01-01

    The human visual system employs a sophisticated set of strategies for scanning the environment and directing attention to stimuli that can be expected given the context and a person’s past experience. Although these strategies enable us to navigate a very complex physical and social environment, they can also cause highly salient, but unexpected stimuli to go completely unnoticed. To examine the generality of this phenomenon, we conducted eight studies that included 15 different experimental conditions and 1,577 participants in all. These studies revealed that a large majority of participants do not report having seen a woman in the center of an urban scene who was photographed in midair as she was committing suicide. Despite seeing the scene repeatedly, 46 % of all participants failed to report seeing a central figure and only 4.8 % reported seeing a falling person. Frequency of noticing the suicidal woman was highest for participants who read a narrative priming story that increased the extent to which she was schematically congruent with the scene. In contrast to this robust effect of inattentional blindness, a majority of participants reported seeing other peripheral objects in the visual scene that were equally difficult to detect, yet more consistent with the scene. Follow-up qualitative analyses revealed that participants reported seeing many elements that were not actually present, but which could have been expected given the overall context of the scene. Together, these findings demonstrate the robustness of inattentional blindness and highlight the specificity with which different visual primes may increase noticing behavior. PMID:24363542

  14. The role of transverse occipital sulcus in scene perception and its relationship to object individuation in inferior intraparietal sulcus

    PubMed Central

    Bettencourt, Katherine C.; Xu, Yaoda

    2013-01-01

    The parietal cortex has been functionally divided into various subregions; however, very little is known about how these areas relate to each other. Two such regions are the transverse occipital sulcus (TOS) scene area and inferior intraparietal sulcus (IPS). TOS exhibits similar activation patterns to the scene selective parahippocampal place area (PPA), suggesting its role in scene perception. Inferior IPS, in contrast, has been shown to participate in object individuation and selection via location. Interestingly, both regions have been localized to the same general area of the brain. If these two were actually the same brain region, it would have important implications regarding these regions’ role in cognition. To explore this, we first localized TOS and inferior IPS in individual participants and examined the degree of overlap between these regions in each participant. We found that TOS showed only a minor degree of overlap with inferior IPS (∼10%). We then directly explored the role of TOS and inferior IPS in object individuation and scene perception by examining their responses to furnished rooms, empty rooms, isolated furniture, and multiple isolated objects. If TOS and inferior IPS were the same region, we would expect to see similar response patterns in both. Instead, the response of TOS was predominantly scene selective, while activity in inferior IPS was primarily driven by the number of objects present in the display, regardless of scene context. These results show that TOS and inferior IPS are nearby, but distinct regions, with different functional roles in visual cognition. PMID:23662863

  15. High-Level Aftereffects to Global Scene Properties

    ERIC Educational Resources Information Center

    Greene, Michelle R.; Oliva, Aude

    2010-01-01

    Adaptation is ubiquitous in the human visual system, allowing recalibration to the statistical regularities of its input. Previous work has shown that global scene properties such as openness and mean depth are informative dimensions of natural scene variation useful for human and machine scene categorization (Greene & Oliva, 2009b; Oliva &…

  16. Scene and Position Specificity in Visual Memory for Objects

    ERIC Educational Resources Information Center

    Hollingworth, Andrew

    2006-01-01

    This study investigated whether and how visual representations of individual objects are bound in memory to scene context. Participants viewed a series of naturalistic scenes, and memory for the visual form of a target object in each scene was examined in a 2-alternative forced-choice test, with the distractor object either a different object…

  17. Additional Crime Scenes for Projectile Motion Unit

    NASA Astrophysics Data System (ADS)

    Fullerton, Dan; Bonner, David

    2011-12-01

    Building students' ability to transfer physics fundamentals to real-world applications establishes a deeper understanding of underlying concepts while enhancing student interest. Forensic science offers a great opportunity for students to apply physics to highly engaging, real-world contexts. Integrating these opportunities into inquiry-based problem solving in a team environment provides a terrific backdrop for fostering communication, analysis, and critical thinking skills. One such activity, inspired jointly by the museum exhibit "CSI: The Experience" and David Bonner's TPT article "Increasing Student Engagement and Enthusiasm: A Projectile Motion Crime Scene," provides students with three different crime scenes, each requiring an analysis of projectile motion. In this lesson students socially engage in higher-order analysis of two-dimensional projectile motion problems by collecting information from 3-D scale models and collaborating with one another on its interpretation, in addition to diagramming and mathematical analysis typical of problem solving in physics.

  18. Viewing Complex, Dynamic Scenes "Through the Eyes" of Another Person: The Gaze-Replay Paradigm.

    PubMed

    Bush, Jennifer Choe; Pantelis, Peter Christopher; Morin Duchesne, Xavier; Kagemann, Sebastian Alexander; Kennedy, Daniel Patrick

    2015-01-01

    We present a novel "Gaze-Replay" paradigm that allows the experimenter to directly test how particular patterns of visual input-generated from people's actual gaze patterns-influence the interpretation of the visual scene. Although this paradigm can potentially be applied across domains, here we applied it specifically to social comprehension. Participants viewed complex, dynamic scenes through a small window displaying only the foveal gaze pattern of a gaze "donor." This was intended to simulate the donor's visual selection, such that a participant could effectively view scenes "through the eyes" of another person. Throughout the presentation of scenes presented in this manner, participants completed a social comprehension task, assessing their abilities to recognize complex emotions. The primary aim of the study was to assess the viability of this novel approach by examining whether these Gaze-Replay windowed stimuli contain sufficient and meaningful social information for the viewer to complete this social perceptual and cognitive task. The results of the study suggested this to be the case; participants performed better in the Gaze-Replay condition compared to a temporally disrupted control condition, and compared to when they were provided with no visual input. This approach has great future potential for the exploration of experimental questions aiming to unpack the relationship between visual selection, perception, and cognition. PMID:26252493

  19. Photometric analysis as an aid to 3D reconstruction of indoor scenes

    NASA Astrophysics Data System (ADS)

    Serfaty, Veronique; Ackah-Miezan, Andrew; Lutton, Evelyne; Gagalowicz, Andre

    1993-06-01

    In an Image Understanding framework, our aim is to reconstruct an actual indoor scene from a (sequence of) color pair(s) of stereoscopic images. The desired (synthesis-oriented) description requires the analysis of both 3D geometric and photometric parameters in order to use the feedback provided by image synthesis to control the image analysis. The environment model is a hierarchy of polyhedral 3D objects (planar lambertian facets). Two main physical phenomena determine the image intensities: surface reflectance properties and light sources. From illumination models established in Computer Graphics, we derive the appropriate irradiance equations. Rather than use a point source located at infinity, we choose instead isotropic point sources with decreasing energy. This allows us to discriminate small irradiance gradients inside regions. For indoor scenes, such photometric models are more realistic, due to the presence of ceiling lights, desk lamps, and so on. Both a photometric reconstruction algorithm and a technique for localizing the 'dominant' light source are presented along with lighting simulations. For comparison purposes, corresponding artificial images are shown. Using this work, we wish to highlight the fruitful cooperation between the Vision and Graphics domains in order to perform a more accurate scene reconstruction, both photometrically and geometrically. The emphasis is on the illumination characterization which influences the scene interpretation.
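    The irradiance model described above, a Lambertian facet lit by an isotropic point source whose energy decreases with distance, can be sketched directly. All names and numeric values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def irradiance(point, normal, source_pos, intensity):
    """Irradiance of a Lambertian facet from an isotropic point source with
    squared-distance falloff: E = I * max(0, n . l) / r^2, where l is the
    unit direction from the facet point to the source."""
    to_src = source_pos - point
    r = np.linalg.norm(to_src)
    l = to_src / r                              # unit direction to the source
    n = normal / np.linalg.norm(normal)
    return intensity * max(0.0, float(n @ l)) / r ** 2

light = np.array([0.0, 0.0, 2.0])               # a ceiling light 2 m up
up = np.array([0.0, 0.0, 1.0])
E_near = irradiance(np.array([0.0, 0.0, 0.0]), up, light, 100.0)  # below lamp
E_far = irradiance(np.array([2.0, 0.0, 0.0]), up, light, 100.0)   # 2 m to the side
```

    Unlike a source at infinity, this model produces irradiance gradients across a room, which is what makes it suitable for indoor scenes with ceiling lights and desk lamps.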

  20. Combining MMW radar and radiometer images for enhanced characterization of scenes

    NASA Astrophysics Data System (ADS)

    Peichl, Markus; Dill, Stephan

    2016-05-01

    For several years, the use of active (radar) and passive (radiometer) MMW remote sensing has been considered an appropriate tool for many security-related applications, such as personnel screening for the detection of objects concealed under clothing, or enhanced vision for vehicles and aircraft, to mention just a few examples. Radars, having a transmitter for scene illumination and a receiver for echo recording, are basically range-measuring devices which additionally deliver information about a target's reflectivity behavior. Radiometers, having only a receiver to record natural thermal radiation power, provide the emission and reflection properties of a scene, using the environment and the cosmic background radiation as natural illumination sources. Consequently, the active and passive signatures of a scene and its objects are quite different, depending on the target, its scattering characteristics, and the actual illumination properties. Typically, technology providers work either purely on radar or purely on radiometers for gathering information about a scene of interest. Only rarely are both information sources combined for enhanced information extraction, and even then the sensors' imaging geometries usually do not match well enough for the benefit of doing so to be fully exploited. Consequently, investigations on adequate combinations of MMW radar and radiometer data have been performed. A mechanical scanner used in earlier experiments on personnel screening was modified to provide similar imaging geometries for a Ka-band radiometer and a K-band radar. First experimental results are shown and discussed.

  1. Worth a quick look? Initial scene previews can guide eye movements as a function of domain-specific expertise but can also have unforeseen costs.

    PubMed

    Litchfield, Damien; Donovan, Tim

    2016-07-01

    Rapid scene recognition is a global visual process we can all exploit to guide search. This ability is thought to underpin expertise in medical image perception yet there is no direct evidence that isolates the expertise-specific contribution of processing scene previews on subsequent eye movement performance. We used the flash-preview moving window paradigm (Castelhano & Henderson, 2007) to investigate this issue. Expert radiologists and novice observers underwent 2 experiments whereby participants viewed a 250-ms scene preview or a mask before searching for a target. Observers looked for everyday objects from real-world scenes (Experiment 1), and searched for lung nodules from medical images (Experiment 2). Both expertise groups exploited the brief preview of the upcoming scene to more efficiently guide windowed search in Experiment 1, but there was only a weak effect of domain-specific expertise in Experiment 2, with experts showing small improvements in search metrics with scene previews. Expert diagnostic performance was better than novices in all conditions but was not contingent on seeing the scene preview, and scene preview actually impaired novice diagnostic performance. Experiment 3 required novice and experienced observers to search for a variety of abnormalities from different medical images. Rather than maximizing the expertise-specific advantage of processing scene previews, both novices and experienced radiographers were worse at detecting abnormalities with scene previews. We discuss how restricting access to the initial glimpse can be compensated for by subsequent search and discovery processing, but there can still be costs in integrating a fleeting glimpse of a medical scene. PMID:26784003

  2. Functional imaging of auditory scene analysis.

    PubMed

    Gutschalk, Alexander; Dykstra, Andrew R

    2014-01-01

    Our auditory system is constantly faced with the task of decomposing the complex mixture of sound arriving at the ears into perceptually independent streams constituting accurate representations of individual sound sources. This decomposition, termed auditory scene analysis, is critical for both survival and communication, and is thought to underlie both speech and music perception. The neural underpinnings of auditory scene analysis have been studied utilizing invasive experiments with animal models as well as non-invasive (MEG, EEG, and fMRI) and invasive (intracranial EEG) studies conducted with human listeners. The present article reviews human neurophysiological research investigating the neural basis of auditory scene analysis, with emphasis on two classical paradigms termed streaming and informational masking. Other paradigms - such as the continuity illusion, mistuned harmonics, and multi-speaker environments - are briefly addressed thereafter. We conclude by discussing the emerging evidence for the role of auditory cortex in remapping incoming acoustic signals into a perceptual representation of auditory streams, which are then available for selective attention and further conscious processing. This article is part of a Special Issue entitled Human Auditory Neuroimaging. PMID:23968821

  3. Holography of incoherently illuminated 3D scenes

    NASA Astrophysics Data System (ADS)

    Shaked, Natan T.; Rosen, Joseph

    2008-04-01

    We review several methods of generating holograms of realistic 3D objects illuminated by incoherent white light. Using these methods, it is possible to obtain holograms with a simple digital camera operating in regular light conditions. Thus, most disadvantages characterizing conventional holography, namely the need for a powerful, highly coherent laser and the meticulous stability of the optical system, are avoided. These holograms can be reconstructed optically by illuminating them with a coherent plane wave, or alternatively by using a digital reconstruction technique. In order to generate the proposed hologram, the 3D scene is captured from multiple points of view by a simple digital camera. Then, the acquired projections are digitally processed to yield the final hologram of the 3D scene. Based on this principle, we can generate Fourier, Fresnel, image or other types of holograms. To obtain certain advantages over regular holograms, we also propose new digital holograms, such as modified Fresnel holograms and protected correlation holograms. Instead of shifting the camera mechanically to acquire a different projection of the 3D scene each time, it is possible to use a microlens array to acquire all the projections in a single camera shot. Alternatively, only the extreme projections can be acquired experimentally, while the middle projections are predicted digitally by using the view synthesis algorithm. The prospective goal of these methods is to facilitate the design of a simple, portable digital holographic camera which can be useful for a variety of practical applications.

  4. The time course of natural scene perception with reduced attention.

    PubMed

    Groen, Iris I A; Ghebreab, Sennay; Lamme, Victor A F; Scholte, H Steven

    2016-02-01

    Attention is thought to impose an informational bottleneck on vision by selecting particular information from visual scenes for enhanced processing. Behavioral evidence suggests, however, that some scene information is extracted even when attention is directed elsewhere. Here, we investigated the neural correlates of this ability by examining how attention affects electrophysiological markers of scene perception. In two electro-encephalography (EEG) experiments, human subjects categorized real-world scenes as manmade or natural (full attention condition) or performed tasks on unrelated stimuli in the center or periphery of the scenes (reduced attention conditions). Scene processing was examined in two ways: traditional trial averaging was used to assess the presence of a categorical manmade/natural distinction in event-related potentials, whereas single-trial analyses assessed whether EEG activity was modulated by scene statistics that are diagnostic of naturalness of individual scenes. The results indicated that evoked activity up to 250 ms was unaffected by reduced attention, showing intact categorical differences between manmade and natural scenes and strong modulations of single-trial activity by scene statistics in all conditions. Thus initial processing of both categorical and individual scene information remained intact with reduced attention. Importantly, however, attention did have profound effects on later evoked activity; full attention on the scene resulted in prolonged manmade/natural differences, increased neural sensitivity to scene statistics, and enhanced scene memory. These results show that initial processing of real-world scene information is intact with diminished attention but that the depth of processing of this information does depend on attention. PMID:26609116

  5. Human-Machine CRFs for Identifying Bottlenecks in Scene Understanding.

    PubMed

    Mottaghi, Roozbeh; Fidler, Sanja; Yuille, Alan; Urtasun, Raquel; Parikh, Devi

    2016-01-01

    Recent trends in image understanding have pushed for scene understanding models that jointly reason about various tasks such as object detection, scene recognition, shape analysis, contextual reasoning, and local appearance based classifiers. In this work, we are interested in understanding the roles of these different tasks in improved scene understanding, in particular semantic segmentation, object detection and scene recognition. Towards this goal, we "plug-in" human subjects for each of the various components in a conditional random field model. Comparisons among various hybrid human-machine CRFs give us indications of how much "head room" there is to improve scene understanding by focusing research efforts on various individual tasks. PMID:26656579

  6. Influence of a psychological perspective on scene viewing and memory for scenes.

    PubMed

    Kaakinen, Johanna K; Hyönä, Jukka; Viljanen, Minna

    2011-07-01

    In the study, 33 participants viewed photographs from either a potential homebuyer's or a burglar's perspective, or in preparation for a memory test, while their eye movements were recorded. A free recall and a picture recognition task were performed after viewing. The results showed that perspective had rapid effects, in that the second fixation after the scene onset was more likely to land on perspective-relevant than on perspective-irrelevant areas within the scene. Perspective-relevant areas also attracted longer total fixation time, more visits, and longer first-pass dwell times than did perspective-irrelevant areas. As for the effects of visual saliency, the first fixation was more likely to land on a salient than on a nonsalient area; salient areas also attracted more visits and longer total fixation time than did nonsalient areas. Recall and recognition performance reflected the eye fixation results: Both were overall higher for perspective-relevant than for perspective-irrelevant scene objects. The relatively low error rates in the recognition task suggest that participants had gained an accurate memory for scene objects. The findings suggest that the role of bottom-up versus top-down factors varies as a function of viewing task and the time-course of scene processing. PMID:21391155

  7. Note-Making with T-Notes.

    ERIC Educational Resources Information Center

    Clark, Elvis G.; Davis, Archie D.

    The T-Note system is an easy way for students to take notes, is organized for effective review, and is adaptable because it provides a system for recording five types of information typically presented in the classroom. The student first divides a single loose-leaf notebook page vertically down the middle, and horizontally about one or two inches…

  8. Basic level scene understanding: categories, attributes and structures

    PubMed Central

    Xiao, Jianxiong; Hays, James; Russell, Bryan C.; Patterson, Genevieve; Ehinger, Krista A.; Torralba, Antonio; Oliva, Aude

    2013-01-01

    A longstanding goal of computer vision is to build a system that can automatically understand a 3D scene from a single image. This requires extracting semantic concepts and 3D information from 2D images which can depict an enormous variety of environments that comprise our visual world. This paper summarizes our recent efforts toward these goals. First, we describe the richly annotated SUN database which is a collection of annotated images spanning 908 different scene categories with object, attribute, and geometric labels for many scenes. This database allows us to systematically study the space of scenes and to establish a benchmark for scene and object recognition. We augment the categorical SUN database with 102 scene attributes for every image and explore attribute recognition. Finally, we present an integrated system to extract the 3D structure of the scene and objects depicted in an image. PMID:24009590

  9. Collaboration on Scene Graph Based 3D Data

    NASA Astrophysics Data System (ADS)

    Ammon, Lorenz; Bieri, Hanspeter

    Professional 3D digital content creation tools, like Alias Maya or discreet 3ds max, offer only limited support for a team of artists to work on a 3D model collaboratively. We present a scene graph repository system that enables fine-grained collaboration on scenes built using standard 3D DCC tools by applying the concept of collaborative versions to a general attributed scene graph. Artists can work on the same scene in parallel without locking each other out. The artists' changes to a scene are regularly merged to ensure that all artists can see each other's progress and collaborate on current data. We introduce the concepts of indirect changes and indirect conflicts to systematically inspect the effects that collaborative changes have on a scene. Inspecting indirect conflicts helps maintain scene consistency by systematically looking for inconsistencies at the right places.

  10. Rapid 3D video/laser sensing and digital archiving with immediate on-scene feedback for 3D crime scene/mass disaster data collection and reconstruction

    NASA Astrophysics Data System (ADS)

    Altschuler, Bruce R.; Oliver, William R.; Altschuler, Martin D.

    1996-02-01

    We describe a system for rapid and convenient video data acquisition and 3-D numerical coordinate data calculation able to provide precise 3-D topographical maps and 3-D archival data sufficient to reconstruct a 3-D virtual reality display of a crime scene or mass disaster area. Under a joint U.S. Army/U.S. Air Force project with collateral U.S. Navy support to create a 3-D surgical robotic inspection device -- a mobile, multi-sensor robotic surgical assistant to aid the surgeon in diagnosis, continual surveillance of patient condition, and robotic surgical telemedicine of combat casualties -- the technology is being perfected for remote, non-destructive, quantitative 3-D mapping of objects of varied sizes. This technology is being advanced with hyper-speed parallel video technology and compact, very fast laser electro-optics, such that 3-D surface map data will shortly be acquired within the time frame of conventional 2-D video. With simple field-capable calibration and mobile or portable platforms, the crime scene investigator could set up and survey the entire crime scene, or portions of it at high resolution, with almost the simplicity and speed of video or still photography. The survey apparatus would record relative position and location, and instantly archive thousands of artifacts at the site with 3-D data points capable of creating unbiased virtual reality reconstructions, or actual physical replicas, for the investigators, prosecutors, and jury.

  11. Applying artificial vision models to human scene understanding.

    PubMed

    Aminoff, Elissa M; Toneva, Mariya; Shrivastava, Abhinav; Chen, Xinlei; Misra, Ishan; Gupta, Abhinav; Tarr, Michael J

    2015-01-01

    How do we understand the complex patterns of neural responses that underlie scene understanding? Studies of the network of brain regions held to be scene-selective-the parahippocampal/lingual region (PPA), the retrosplenial complex (RSC), and the occipital place area (TOS)-have typically focused on single visual dimensions (e.g., size), rather than the high-dimensional feature space in which scenes are likely to be neurally represented. Here we leverage well-specified artificial vision systems to explicate a more complex understanding of how scenes are encoded in this functional network. We correlated similarity matrices within three different scene-spaces arising from: (1) BOLD activity in scene-selective brain regions; (2) behaviorally measured judgments of visually-perceived scene similarity; and (3) several different computer vision models. These correlations revealed: (1) models that relied on mid- and high-level scene attributes showed the highest correlations with the patterns of neural activity within the scene-selective network; (2) NEIL and SUN-the models that best accounted for the patterns obtained from PPA and TOS-were different from the GIST model that best accounted for the pattern obtained from RSC; (3) The best performing models outperformed behaviorally-measured judgments of scene similarity in accounting for neural data. One computer vision method-NEIL ("Never-Ending-Image-Learner"), which incorporates visual features learned as statistical regularities across web-scale numbers of scenes-showed significant correlations with neural activity in all three scene-selective regions and was one of the two models best able to account for variance in the PPA and TOS. We suggest that these results are a promising first step in explicating more fine-grained models of neural scene understanding, including developing a clearer picture of the division of labor among the components of the functional scene-selective brain network. PMID:25698964

  12. Microcounseling Skill Discrimination Scale: A Methodological Note

    ERIC Educational Resources Information Center

    Stokes, Joseph; Romer, Daniel

    1977-01-01

    Absolute ratings on the Microcounseling Skill Discrimination Scale (MSDS) confound the individual's use of the rating scale and actual ability to discriminate effective and ineffective counselor behaviors. This note suggests methods of scoring the MSDS that will eliminate variability attributable to response language and improve the validity of…

  13. Linguistic Theory and Actual Language.

    ERIC Educational Resources Information Center

    Segerdahl, Par

    1995-01-01

    Examines Noam Chomsky's (1957) discussion of "grammaticalness" and the role of linguistics in the "correct" way of speaking and writing. It is argued that the concern of linguistics with the tools of grammar has resulted in confusion, with the tools becoming mixed up with the actual language, thereby becoming the central element in a metaphysical…

  14. Primal scene derivatives in the work of Yukio Mishima: the primal scene fantasy.

    PubMed

    Turco, Ronald N

    2002-01-01

    This article discusses the preoccupation with fire, revenge, crucifixion, and other fantasies as they relate to the primal scene. The manifestations of these fantasies are demonstrated in a work of fiction by Yukio Mishima, The Temple of the Golden Pavilion. As is the case in other writings of Mishima, there is a fusion of aggressive and libidinal drives and a preoccupation with death. The primal scene is directly connected with pyromania and the destructive "acting out" of fantasies. This article is timely with regard to understanding contemporary events of cultural and national destruction. PMID:12197253

  15. TMS to object cortex affects both object and scene remote networks while TMS to scene cortex only affects scene networks.

    PubMed

    Rafique, Sara A; Solomon-Harris, Lily M; Steeves, Jennifer K E

    2015-12-01

    Viewing the world involves many computations across a great number of regions of the brain, all the while appearing seamless and effortless. We sought to determine the connectivity of object and scene processing regions of cortex through the influence of transient focal neural noise in discrete nodes within these networks. We consecutively paired repetitive transcranial magnetic stimulation (rTMS) with functional magnetic resonance-adaptation (fMR-A) to measure the effect of rTMS on functional response properties at the stimulation site and in remote regions. In separate sessions, rTMS was applied to the object preferential lateral occipital region (LO) and scene preferential transverse occipital sulcus (TOS). Pre- and post-stimulation responses were compared using fMR-A. In addition to modulating BOLD signal at the stimulation site, TMS affected remote regions revealing inter and intrahemispheric connections between LO, TOS, and the posterior parahippocampal place area (PPA). Moreover, we show remote effects from object preferential LO to outside the ventral perception network, in parietal and frontal areas, indicating an interaction of dorsal and ventral streams and possibly a shared common framework of perception and action. PMID:26511624

  16. Recognizing dynamic scenes: influence of processing orientation.

    PubMed

    Huff, Markus; Schwan, Stephan; Garsoffky, Bärbel

    2011-04-01

    From face recognition studies, it is known that instructions can change the processing orientation of stimuli, leading to impaired recognition performance. The present study examined instructional influences on the visual recognition of dynamic scenes. A global processing orientation without any instruction was assumed to lead to the highest recognition performance, whereas instructions focusing participants' attention on certain characteristics of the event should lead to a local processing orientation, with impaired visual recognition performance as a direct consequence. The pattern of results provided evidence for this hypothesis, and its theoretical implications are discussed. PMID:21667754

  17. Lateralized discrimination of emotional scenes in peripheral vision.

    PubMed

    Calvo, Manuel G; Rodríguez-Chinea, Sandra; Fernández-Martín, Andrés

    2015-03-01

    This study investigates whether there is lateralized processing of emotional scenes in the visual periphery, in the absence of eye fixations; and whether this varies with emotional valence (pleasant vs. unpleasant), specific emotional scene content (babies, erotica, human attack, mutilation, etc.), and sex of the viewer. Pairs of emotional (positive or negative) and neutral photographs were presented for 150 ms peripherally (≥6.5° away from fixation). Observers judged on which side the emotional picture was located. Low-level image properties, scene visual saliency, and eye movements were controlled. Results showed that (a) correct identification of the emotional scene exceeded the chance level; (b) performance was more accurate and faster when the emotional scene appeared in the left than in the right visual field; (c) lateralization was equivalent for females and males for pleasant scenes, but was greater for females and unpleasant scenes; and (d) lateralization occurred similarly for different emotional scene categories. These findings reveal discrimination between emotional and neutral scenes, and right brain hemisphere dominance for emotional processing, which is modulated by sex of the viewer and scene valence, and suggest that coarse affective significance can be extracted in peripheral vision. PMID:25511169

  18. Simulation on polarization states of finite surface for infrared scenes

    NASA Astrophysics Data System (ADS)

    Gao, Ying; Wang, Lin; Shao, Xiaopeng; Liu, Fei

    2015-05-01

    A simulation method for analyzing the polarization states of infrared scenes is proposed in order to study the polarization features of infrared spontaneous emission in depth, since current infrared polarization devices cannot adequately show the polarization signature of the spontaneous infrared emission of a target or object. A preliminary analysis of the polarization characteristics of infrared spontaneous emission in the ideal case is carried out, and a corresponding ideal model is established through Kirchhoff's law and the Fresnel theorem. Based on this ideal model, three-dimensional (3D) scene modeling and simulation based on the OpenSceneGraph (OSG) rendering engine is used to obtain the polarization scene of infrared emission under ideal conditions. Through the corresponding software, different infrared scenes can be generated by adjusting the input parameters. By interacting with the scene, infrared polarization images can be acquired readily, and it can be confirmed that the degree of linear polarization (DoLP) of an object in the 3D scene varies with factors such as emission angle and complex refractive index. Moreover, the large difference in the polarization characteristics of infrared spontaneous emission between metals and nonmetals at the same temperature can be easily discerned in the 3D scene. The 3D scene simulation and modeling in the ideal case provides a direct understanding of infrared polarization properties, which is of great significance for the further study of infrared polarization characteristics in real scenes.
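    The abstract's observation that DoLP varies with emission angle and complex refractive index follows directly from Kirchhoff's law (emissivity = 1 - reflectance) combined with the Fresnel equations. The following is a minimal sketch of that ideal-case computation, not the authors' actual OSG-based model; the function name and the sample refractive indices are illustrative assumptions.

    ```python
    import numpy as np

    def emission_dolp(n_complex, theta_deg):
        """Degree of linear polarization of thermal emission from a smooth
        surface, via Kirchhoff's law and the Fresnel reflection coefficients.
        (Illustrative sketch; not the model from the abstract above.)"""
        theta = np.deg2rad(theta_deg)
        cos_i = np.cos(theta)
        sin_t = np.sin(theta) / n_complex       # Snell's law, complex form
        cos_t = np.sqrt(1 - sin_t**2)
        # Fresnel amplitude reflection coefficients (air -> medium)
        r_s = (cos_i - n_complex * cos_t) / (cos_i + n_complex * cos_t)
        r_p = (n_complex * cos_i - cos_t) / (n_complex * cos_i + cos_t)
        e_s = 1 - abs(r_s) ** 2                 # Kirchhoff: emissivity = 1 - R
        e_p = 1 - abs(r_p) ** 2
        return (e_p - e_s) / (e_p + e_s)

    # DoLP is zero at normal emission and grows toward grazing angles
    dolp_20 = emission_dolp(1.5 + 0j, 20.0)        # glass-like dielectric
    dolp_60 = emission_dolp(1.5 + 0j, 60.0)
    dolp_metal = emission_dolp(3.0 + 30.0j, 60.0)  # metal-like index (illustrative)
    ```

    For a dielectric the emission DoLP rises from zero at normal viewing toward grazing angles, while a metal's large complex index produces a markedly different curve, consistent with the metal/nonmetal contrast the abstract describes.
    
    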

  19. Spatial frequency processing in scene-selective cortical regions.

    PubMed

    Kauffmann, Louise; Ramanoël, Stephen; Guyader, Nathalie; Chauvin, Alan; Peyrin, Carole

    2015-05-15

    Visual analysis begins with the parallel extraction of different attributes at different spatial frequencies. Low spatial frequencies (LSF) convey coarse information and are characterized by high luminance contrast, while high spatial frequencies (HSF) convey fine details and are characterized by low luminance contrast. In the present fMRI study, we examined how scene-selective regions-the parahippocampal place area (PPA), the retrosplenial cortex (RSC) and the occipital place area (OPA)-responded to spatial frequencies when contrast was either equalized or not equalized across spatial frequencies. Participants performed a categorization task on LSF, HSF and non-filtered (NF) scenes belonging to two different categories (indoors and outdoors). We either left contrast across scenes untouched, or equalized it using root-mean-square contrast normalization. We found that when contrast remained unmodified, LSF and NF scenes elicited greater activation than HSF scenes in the PPA. However, when contrast was equalized across spatial frequencies, the PPA was selective to HSF. This suggests that PPA activity relies on an interaction between spatial frequency and contrast in scenes. In the RSC, LSF and NF scenes elicited a greater response than HSF scenes when contrast was not modified, while no effect of spatial frequency appeared when contrast was equalized across filtered scenes, suggesting that the RSC is sensitive to high-contrast information. Finally, we observed selective activation of the OPA in response to HSF, irrespective of contrast manipulation. These results provide new insights into how scene-selective areas operate during scene processing. PMID:25754068
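    The root-mean-square contrast normalization mentioned above is simple to state: RMS contrast is the standard deviation of luminance, so equalizing it across images amounts to rescaling each image about its mean. A minimal sketch follows; the function name and the target value are illustrative assumptions, not details taken from the study.

    ```python
    import numpy as np

    def rms_contrast_normalize(image, target_rms=0.2):
        """Rescale a grayscale image (values in [0, 1]) so its RMS contrast
        matches target_rms while keeping mean luminance unchanged.
        (Illustrative sketch; parameters are assumptions.)"""
        mean = image.mean()
        rms = image.std()  # RMS contrast = std of luminance about the mean
        if rms == 0:
            return image.copy()
        normalized = (image - mean) * (target_rms / rms) + mean
        # Clipping keeps values displayable; it slightly perturbs the RMS
        return np.clip(normalized, 0.0, 1.0)

    # Two images with very different contrast end up with matched RMS contrast
    rng = np.random.default_rng(0)
    low = 0.5 + 0.05 * rng.standard_normal((64, 64))
    high = 0.5 + 0.25 * rng.standard_normal((64, 64))
    low_n = rms_contrast_normalize(low)
    high_n = rms_contrast_normalize(high)
    ```

    After normalization both images have (approximately) the same RMS contrast, so any remaining difference between spatial-frequency conditions cannot be attributed to luminance contrast.
    
    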

  20. IR characteristic simulation of city scenes based on radiosity model

    NASA Astrophysics Data System (ADS)

    Xiong, Xixian; Zhou, Fugen; Bai, Xiangzhi; Yu, Xiyu

    2013-09-01

    Reliable modeling of the thermal infrared (IR) signatures of real-world city scenes is required for signature management of civil and military platforms. Traditional modeling methods generally assume that scene objects are individual entities during the physical processes occurring in the infrared range. In reality, however, the physical scene involves convective and conductive interactions between objects as well as radiative interactions between objects. A method based on a radiosity model, which describes these complex effects, has been developed to enable an accurate simulation of the radiance distribution of city scenes. Firstly, the physical processes affecting the IR characteristic of city scenes were described. Secondly, heat balance equations were formed by combining the atmospheric conditions, shadow maps and the geometry of the scene. Finally, a finite difference method was used to calculate the kinetic temperature of object surfaces. A radiosity model was introduced to describe the scattering effect of radiation between surface elements in the scene. By synthesizing the objects' radiance distribution in the infrared range, the IR characteristic of the scene could be obtained. Real infrared images and model predictions were shown and compared. The results demonstrate that this method can realistically simulate the IR characteristic of city scenes. It effectively displays infrared shadow effects and the radiation interactions between objects in city scenes.

  1. Suicide, accident? The importance of the scene investigation.

    PubMed

    Ermenc, B; Prijon, T

    2005-01-17

    We present the as yet unresolved case of the death by gunshot wound of a 21-year-old student, based on a recent scene inspection. It was reported that the daughter of the house had been shot through the window while she was washing the dishes. Slight discrepancies were noted in the statements of the family, who are very religious. The firearm, projectile and cartridge have not been found despite an intensive search. The daughter and the mother tested positive for traces of gunpowder on their hands, while in the case of the son traces were found on his hands and on his vest. That the trajectory of the projectile was from the kitchen outwards was established on the basis of a small hole in the inner pane of the kitchen window and a larger hole in the outer pane. The shot passed through the victim's cheek and the neck. The entrance wound (aditus) on the right cheek had features characteristic of a gunshot from a short-barrelled firearm at relatively close range. The shot passed through the left jugular vein and the left internal carotid artery. The exit wound (exitus) was slightly larger and of irregular shape. The family chose a traditional burial. The mother and son did not present themselves for polygraph testing. A charge was filed against the mother of the deceased. Emphasis was placed on the scene investigation. A covered-up suicide? An accident (a scuffle when trying to prevent suicide)? PMID:15694721

  2. Full Scenes Produce More Activation than Close-Up Scenes and Scene-Diagnostic Objects in Parahippocampal and Retrosplenial Cortex: An fMRI Study

    ERIC Educational Resources Information Center

    Henderson, John M.; Larson, Christine L.; Zhu, David C.

    2008-01-01

    We used fMRI to directly compare activation in two cortical regions previously identified as relevant to real-world scene processing: retrosplenial cortex and a region of posterior parahippocampal cortex functionally defined as the parahippocampal place area (PPA). We compared activation in these regions to full views of scenes from a global…

  3. Applying artificial vision models to human scene understanding

    PubMed Central

    Aminoff, Elissa M.; Toneva, Mariya; Shrivastava, Abhinav; Chen, Xinlei; Misra, Ishan; Gupta, Abhinav; Tarr, Michael J.

    2015-01-01

    How do we understand the complex patterns of neural responses that underlie scene understanding? Studies of the network of brain regions held to be scene-selective—the parahippocampal/lingual region (PPA), the retrosplenial complex (RSC), and the occipital place area (TOS)—have typically focused on single visual dimensions (e.g., size), rather than the high-dimensional feature space in which scenes are likely to be neurally represented. Here we leverage well-specified artificial vision systems to explicate a more complex understanding of how scenes are encoded in this functional network. We correlated similarity matrices within three different scene-spaces arising from: (1) BOLD activity in scene-selective brain regions; (2) behaviorally measured judgments of visually-perceived scene similarity; and (3) several different computer vision models. These correlations revealed: (1) models that relied on mid- and high-level scene attributes showed the highest correlations with the patterns of neural activity within the scene-selective network; (2) NEIL and SUN—the models that best accounted for the patterns obtained from PPA and TOS—were different from the GIST model that best accounted for the pattern obtained from RSC; (3) The best performing models outperformed behaviorally-measured judgments of scene similarity in accounting for neural data. One computer vision method—NEIL (“Never-Ending-Image-Learner”), which incorporates visual features learned as statistical regularities across web-scale numbers of scenes—showed significant correlations with neural activity in all three scene-selective regions and was one of the two models best able to account for variance in the PPA and TOS. We suggest that these results are a promising first step in explicating more fine-grained models of neural scene understanding, including developing a clearer picture of the division of labor among the components of the functional scene-selective brain network. PMID:25698964

  4. Solar Concepts: Teacher Notes.

    ERIC Educational Resources Information Center

    Gorham, Jonathan W.

    This volume of teacher notes describes teaching methods to support the material presented in the background text and to elaborate on basic solar concepts. Included are objectives and quizzes, teacher notes and bibliographies, and selected student projects. (Author/RE)

  5. Lecture Notes on Multigrid Methods

    SciTech Connect

    Vassilevski, P S

    2010-06-28

    The Lecture Notes are primarily based on a sequence of lectures given by the author while he was a Fulbright scholar at 'St. Kliment Ohridski' University of Sofia, Sofia, Bulgaria during the winter semester of the 2009-2010 academic year. The notes are a somewhat expanded version of the actual one-semester class he taught there. The material covered is a slightly modified and adapted version of similar topics covered in the author's monograph 'Multilevel Block-Factorization Preconditioners' published in 2008 by Springer. The author tried to keep the notes as self-contained as possible. That is why the lecture notes begin with some basic introductory matrix-vector linear algebra and numerical PDE (finite element) facts, emphasizing the relations between functions in finite-dimensional spaces and their coefficient vectors and respective norms. Then, some additional facts on the implementation of finite elements based on relation tables using the popular compressed sparse row (CSR) format are given. Also, typical condition number estimates of stiffness and mass matrices and the global matrix assembly from local element matrices are given as well. Finally, some basic introductory facts about stationary iterative methods, such as Gauss-Seidel and its symmetrized version, are presented. The introductory material ends with the smoothing property of the classical iterative methods and the main definition of two-grid iterative methods. From here on begins the second part of the notes, which deals with the various aspects of the principal TG and the numerous versions of the MG cycles. At the end, in part III, we briefly introduce algebraic versions of MG referred to as AMG, focusing on classes of AMG specialized for finite element matrices.
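    As a flavor of the introductory material described above, a Gauss-Seidel sweep for a stiffness matrix can be written in a few lines. The following is a hedged sketch using a dense matrix for the 1D Poisson model problem; the notes themselves work with CSR-format sparse matrices, and the variable names here are illustrative.

    ```python
    import numpy as np

    def gauss_seidel(A, b, x, sweeps=1):
        """Forward Gauss-Seidel sweeps for Ax = b, updating x in place.
        (Dense A for clarity; a real code would use a CSR sparse matrix.)"""
        n = len(b)
        for _ in range(sweeps):
            for i in range(n):
                # Already-updated entries x[:i], not-yet-updated entries x[i+1:]
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        return x

    # 1D Poisson model problem: tridiagonal stiffness matrix tridiag(-1, 2, -1)
    n = 32
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x = np.zeros(n)
    r0 = np.linalg.norm(b - A @ x)
    gauss_seidel(A, b, x, sweeps=10)
    r10 = np.linalg.norm(b - A @ x)  # residual after 10 sweeps
    ```

    The smoothing property the notes refer to is that such sweeps damp oscillatory error components rapidly while reducing smooth components only slowly, which is exactly what motivates the coarse-grid correction of two-grid and multigrid cycles.
    
    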

  6. Impacts of VIIRS polarization sensitivity on non-ocean scenes

    NASA Astrophysics Data System (ADS)

    Wilkinson, Timothy S.

    2015-09-01

    The Visible and Infrared Imaging Radiometer Suite (VIIRS) collects Earth science data continually in a sun-synchronous orbit. VIIRS raw data records (RDRs) are processed by ground software to generate a variety of environmental data records (EDRs). Over open ocean, ground software produces measurements of chlorophyll concentration based on subsurface reflectance estimates. Considering that about 90% of the top of the atmosphere (TOA) radiance reaching a sensor over open ocean can be attributed to atmospheric or surface reflectance, it is possible to introduce large chlorophyll estimate errors by ignoring ordinarily small contributions due to polarization sensitivity. For chlorophyll determination, instrument polarization sensitivity measurements are used in combination with atmospheric models to compensate for polarization effects. VIIRS ground software does not compensate for polarization when processing land scenes. It is therefore natural to consider the impact of ignoring VIIRS polarization sensitivity on land surface reflectance estimates. In this work, pre-flight polarization sensitivity characterization data are used in conjunction with a polarized atmospheric propagation model to analyze potential impacts on retrieved TOA reflectance. Impacts are analyzed across several collection conditions, including ground surface type, atmospheric visibility, general atmospheric profile, and collection geometry. Actual pre-flight characterization data are used for both NPP and J1 VIIRS.

  7. Individual predictions of eye-movements with dynamic scenes

    NASA Astrophysics Data System (ADS)

    Barth, Erhardt; Drewes, Jan; Martinetz, Thomas

    2003-06-01

    We present a model that predicts saccadic eye-movements and can be tuned to a particular human observer who is viewing a dynamic sequence of images. Our work is motivated by applications that involve gaze-contingent interactive displays on which information is displayed as a function of gaze direction. The approach therefore differs from standard approaches in two ways: (1) we deal with dynamic scenes, and (2) we provide means of adapting the model to a particular observer. As an indicator of the degree of saliency we evaluate the intrinsic dimension of the image sequence within a geometric approach implemented by using the structure tensor. Out of these candidate saliency-based locations, the currently attended location is selected according to a strategy found by supervised learning. The data are obtained with an eye-tracker and subjects who view video sequences. The selection algorithm receives candidate locations of current and past frames and a limited history of locations attended in the past. We use a linear mapping obtained by gradient descent, minimizing the quadratic difference between the predicted and the actually attended locations. Being linear, the learned mapping can be quickly adapted to the individual observer.
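    The learning step described in the abstract, a linear mapping fit by gradient descent on the squared prediction error, can be sketched as follows. All data here are synthetic stand-ins for the feature vectors and eye-tracker locations the authors use; the dimensions and learning rate are assumptions for illustration:

```python
import numpy as np

# Sketch of fitting a linear map from candidate/history features to the
# attended (x, y) location by gradient descent on squared error.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))      # 200 frames, 6 hypothetical features
W_true = rng.normal(size=(6, 2))   # unknown mapping to an (x, y) location
Y = X @ W_true                     # "actually attended" locations (synthetic)

W = np.zeros((6, 2))               # learned linear mapping
lr = 0.05
for _ in range(1000):              # gradient descent on mean squared error
    grad = 2.0 * X.T @ (X @ W - Y) / len(X)
    W -= lr * grad

print(np.allclose(W, W_true, atol=1e-4))  # -> True
```

    Because the model is linear, re-running a few such descent steps on a new observer's data is cheap, which is the adaptation property the abstract highlights.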

  8. Synthetic scene generation model (SSGM R7.0)

    NASA Astrophysics Data System (ADS)

    Wilcoxen, Bruce A.; Heckathorn, Harry M.

    1996-06-01

    BMDO must simulate the detection, acquisition, discrimination and tracking of anticipated targets and predict the effect of natural and man-made backgrounds and environmental phenomena on optical and radar sensor systems designed to perform these tasks. The SSGM is designed to integrate state-of-science knowledge, data bases and valid phenomenology models to simulate ballistic missile engagement scenarios for both passive and active sensors aboard surveillance system platforms and defensive interceptor missiles -- thereby serving as a traceable standard against which different BMDO concepts and designs can be evaluated. This paper concentrates on describing the current capabilities and planned development efforts for SSGM. The focus will be on the functionality of the SSGM Release 7.0 and the planned development effort for subsequent SSGM releases. We shall demonstrate the current SSGM capability (R7.0, January 1996) with sample multi-phenomenology output scenes and videos. New capabilities include realistic 6-DOF dynamics for targets, simulated target radar cross section correlated with IR information, and authoritative target model data sets based on actual flight experiments.

  9. Comprehensive Understanding for Vegetated Scene Radiance Relationships

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.; Deering, D. W.

    1984-01-01

    Directional reflectance distributions spanning the entire exitant hemisphere were measured in two field studies; one using a Mark III 3-band radiometer and one using the rapid scanning bidirectional field instrument called PARABOLA. Surfaces measured included corn, soybeans, bare soils, grass lawn, orchard grass, alfalfa, cotton row crops, plowed field, annual grassland, stipa grass, hard wheat, salt plain shrubland, and irrigated wheat. Analysis of field data showed unique reflectance distributions ranging from bare soil to complete vegetation canopies. Physical mechanisms causing these trends were proposed. A 3-D model was developed and is unique in that it predicts: (1) the directional spectral reflectance factors as a function of the sensor's azimuth and zenith angles and the sensor's position above the canopy; (2) the spectral absorption as a function of location within the scene; and (3) the directional spectral radiance as a function of the sensor's location within the scene. Initial verification of the model as applied to a soybean row crop showed that the simulated directional data corresponded relatively well in gross trends to the measured data. The model was expanded to include the anisotropic scattering properties of leaves as a function of the leaf orientation distribution in both the zenith and azimuth angle modes.

  10. Integration and segregation in auditory scene analysis

    NASA Astrophysics Data System (ADS)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements to perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  11. Tachistoscopic illumination and masking of real scenes

    PubMed Central

    Chichka, David; Philbeck, John W.; Gajewski, Daniel A.

    2014-01-01

    Tachistoscopic presentation of scenes has been valuable for studying the emerging properties of visual scene representations. The spatial aspects of this work have generally been focused on the conceptual locations (e.g., next to the refrigerator) and the directional locations of objects in 2D arrays and/or images. Less is known about how the perceived egocentric distance of objects develops. Here we describe a novel system for presenting brief glimpses of a real-world environment, followed by a mask. The system includes projectors with mechanical shutters for projecting the fixation and masking images, a set of LED floodlights for illuminating the environment, and computer-controlled electronics to set the timing and initiate the process. Because a real environment is used, most visual distance and depth cues may be manipulated using traditional methods. The system is inexpensive, robust, and its components are readily available in the marketplace. This paper describes the system and the timing characteristics of each component. Verification of the ability to control exposure to time scales as low as a few milliseconds is demonstrated. PMID:24519496

  12. Recognition and memory for briefly presented scenes.

    PubMed

    Potter, Mary C

    2012-01-01

    Three times per second, our eyes make a new fixation that generates a new bottom-up analysis in the visual system. How much is extracted from each glimpse? For how long and in what form is that information remembered? To answer these questions, investigators have mimicked the effect of continual shifts of fixation by using rapid serial visual presentation of sequences of unrelated pictures. Experiments in which viewers detect specified target pictures show that detection on the basis of meaning is possible at presentation durations as brief as 13 ms, suggesting that understanding may be based on feedforward processing, without feedback. In contrast, memory for what was just seen is poor unless the viewer has about 500 ms to think about the scene: the scene does not need to remain in view. Initial memory loss after brief presentations occurs over several seconds, suggesting that at least some of the information from the previous few fixations persists long enough to support a coherent representation of the current environment. In contrast to marked memory loss shortly after brief presentations, memory for pictures viewed for 1 s or more is excellent. Although some specific visual information persists, the form and content of the perceptual and memory representations of pictures over time indicate that conceptual information is extracted early and determines most of what remains in longer-term memory. PMID:22371707

  13. Characteristics of the Self-Actualized Person: Visions from the East and West.

    ERIC Educational Resources Information Center

    Chang, Raylene; Page, Richard C.

    1991-01-01

    Compares and contrasts the ways that Chinese Taoism and Zen Buddhism view the development of human potential with the ways that the self-actualization theories of Rogers and Maslow describe the human potential movement. Notes many similarities between the ways that Taoism, Zen Buddhism, and the self-actualization theories of Rogers and Maslow…

  14. "A cool little buzz": alcohol intoxication in the dance club scene.

    PubMed

    Hunt, Geoffrey; Moloney, Molly; Fazio, Adam

    2014-06-01

    In recent years, there has been increasing concern about youthful "binge" drinking and intoxication. Yet the meaning of intoxication remains under-theorized. This paper examines intoxication in a young adult nightlife scene, using data from a 2005-2008 National Institute on Drug Abuse-funded project on Asian American youth and nightlife. Analyzing in-depth qualitative interview data with 250 Asian American young adults in the San Francisco area, we examine their narratives about alcohol intoxication with respect to sociability, stress, and fun, and their navigation of the fine line between being "buzzed" and being "wasted." Finally, limitations of the study and directions for future research are noted. PMID:24779496

  15. Crime scene units: a look to the future

    NASA Astrophysics Data System (ADS)

    Baldwin, Hayden B.

    1999-02-01

    The scientific examination of physical evidence is well recognized as a critical element in conducting successful criminal investigations and prosecutions. The forensic science field is an ever-changing discipline. With the arrival of DNA analysis, new processing techniques for latent prints, portable lasers, and electrostatic dust print lifters, the training of evidence technicians has become more important than ever. These scientific and technological breakthroughs have increased the possibility of collecting and analyzing physical evidence in ways that were never possible before. The problem arises with the collection of physical evidence from the crime scene, not with the analysis of the evidence. The need for specialized units for the processing of all crime scenes is imperative. These specialized units, called crime scene units, should be trained and equipped to handle all forms of crime scenes, and would have the capability to professionally evaluate and collect pertinent physical evidence from them.

  16. The Spatial Representation of Dynamic Scenes - An Integrative Approach

    NASA Astrophysics Data System (ADS)

    Huff, Markus; Schwan, Stephan; Garsoffky, Bärbel

    This paper addresses the spatial representation of dynamic scenes, particularly the question of whether recognition performance is viewpoint dependent or viewpoint invariant. Beginning with the delimitation of static and dynamic scene recognition, the viewpoint dependency of visual recognition performance and the structure of the underlying mental representation are discussed. Two parameters (an easy-to-identify event model and salient static features) are then identified which appear to account for the viewpoint dependency or viewpoint invariance of visual recognition performance for dynamic scenes.

  17. Sensitivity to emotional scene content outside the focus of attention.

    PubMed

    Calvo, Manuel G; Gutiérrez-García, Aida; Del Líbano, Mario

    2015-10-01

    We investigated whether the emotional content of visual scenes depicting people is processed in peripheral vision. Emotional or neutral scene photographs were paired with a matched scrambled image for 150 ms in peripheral vision (≥5°). The pictures were immediately followed by a digit or letter in a discrimination task. Interference (i.e., slowed reaction times) with performance in this task indexed the processing resources drawn by the pictures. Twelve types of specific emotional scene contents (e.g., erotica or mutilation) were compared. Results showed, first, that emotional scenes caused greater interference than neutral scenes, in the absence of fixations. This suggests that emotional scenes are processed and draw covert attention outside the focus of overt attention. Second, interference was similar for female and male participants with pleasant scenes (except for erotica), but females were more affected by all types of unpleasant scenes than males. This reveals that sensitivity to emotional content in peripheral vision is modulated by sex and affective valence. Third, low-level image properties, visual saliency, and the size of bodies and faces were generally equivalent for emotional and neutral scenes. This rules out the alternative hypothesis of a contribution of non-emotional, purely perceptual factors. PMID:26301803

  18. Research on hyperspectral dynamic infrared scene simulation technology

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Hu, Yu; Ding, Na; Sun, Kefeng; Sun, Dandan; Xie, Junhu; Wu, Wenli; Gao, Jiaobo

    2015-02-01

    The paper presents a hardware-in-the-loop dynamic IR scene simulation technology for IR hyperspectral imaging systems. With the rapid development of new electro-optical detection, remote sensing, and hyperspectral imaging techniques, not only calibration of the static parameters of a hyperspectral IR imaging system but also testing and evaluation of its dynamic parameters are required; hyperspectral dynamic IR simulation and evaluation have therefore become more and more important. A hyperspectral dynamic IR scene projector controls spectrum and time synchronously, using spectral, spatial, and temporal features to realize hardware-in-the-loop simulation. Hyperspectral IR target and background images are generated through 3D modeling and IR characteristic rendering, and the hyperspectral dynamic IR scene is produced by an image-converting device. The main parameters of the developed hyperspectral dynamic IR scene projector are: waveband range 3~5 μm and 8~12 μm; field of view (FOV) 8°; spatial resolution 1024×768; spectral resolution 1%~2%. The IR source and simulated scene features should be consistent with the spectral characteristics of the target, and images for the different spectral channels can be obtained by calibration. A hyperspectral imaging system splits light with a dispersive grating, pushbrooms, and collects the output signal of the dynamic IR scene projector. With hyperspectral scene spectrum modeling, IR feature rendering, atmospheric transmission modeling, and IR scene projection, outdoor targets and scenes can be simulated well in the laboratory, accomplishing simulation and evaluation of the dynamic features of IR hyperspectral imaging systems.

  19. A Reconfigurable Tangram Model for Scene Representation and Categorization.

    PubMed

    Jun Zhu; Tianfu Wu; Song-Chun Zhu; Xiaokang Yang; Wenjun Zhang

    2016-01-01

    This paper presents a hierarchical and compositional scene layout (i.e., spatial configuration) representation and a method of learning a reconfigurable model for scene categorization. Three types of shape primitives (i.e., triangle, parallelogram, and trapezoid), called tans, are used to tile the scene image lattice in a hierarchical and compositional way, and a directed acyclic AND-OR graph (AOG) is proposed to organize the overcomplete dictionary of tan instances placed in the image lattice, exploring a very large number of scene layouts. With certain off-the-shelf appearance features used for grounding terminal-nodes (i.e., tan instances) in the AOG, a scene layout is represented by the globally optimal parse tree learned via a dynamic programming algorithm from the AOG, which we call the tangram model. A scene category is then represented by a mixture of tangram models discovered with an exemplar-based clustering method. On the basis of the tangram model, we address scene categorization in two aspects: 1) building a tangram bank representation for linear classifiers, which utilizes a collection of tangram models learned from all categories, and 2) building a tangram matching kernel for kernel-based classification, which accounts for all hidden spatial configurations in the AOG. In experiments, our methods are evaluated on three scene data sets for both configuration-level and semantic-level scene categorization, and they outperform the spatial pyramid model consistently. PMID:26561434

  20. The occipital place area represents the local elements of scenes.

    PubMed

    Kamps, Frederik S; Julian, Joshua B; Kubilius, Jonas; Kanwisher, Nancy; Dilks, Daniel D

    2016-05-15

    Neuroimaging studies have identified three scene-selective regions in human cortex: parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA). However, precisely what scene information each region represents is not clear, especially for the least studied, more posterior OPA. Here we hypothesized that OPA represents local elements of scenes within two independent, yet complementary scene descriptors: spatial boundary (i.e., the layout of external surfaces) and scene content (e.g., internal objects). If OPA processes the local elements of spatial boundary information, then it should respond to these local elements (e.g., walls) themselves, regardless of their spatial arrangement. Indeed, we found that OPA, but not PPA or RSC, responded similarly to images of intact rooms and these same rooms in which the surfaces were fractured and rearranged, disrupting the spatial boundary. Next, if OPA represents the local elements of scene content information, then it should respond more when more such local elements (e.g., furniture) are present. Indeed, we found that OPA, but not PPA or RSC, responded more to multiple than single pieces of furniture. Taken together, these findings reveal that OPA analyzes local scene elements - both in spatial boundary and scene content representation - while PPA and RSC represent global scene properties. PMID:26931815

  1. Imaging polarimetry in scene element discrimination

    NASA Astrophysics Data System (ADS)

    Duggin, Michael J.

    1999-10-01

    Recent work has shown that the use of a calibrated digital camera fitted with a rotating linear polarizer can facilitate the study of Stokes parameter images across a wide dynamic range of scene radiance values. Here, we show images of MacBeth color chips, Spectralon gray scale targets, and Kodak gray cards. We also consider a static aircraft mounted on a platform against a clear sky background. We show that the contrast in polarization is greater than for intensity, and that polarization contrast increases as intensity contrast decreases. We also show that there is great variation in the polarization within and between each of the bandpasses; this variation is comparable in magnitude to the variation in intensity.

  2. [A doctor's action within possible crime scene].

    PubMed

    Sowizdraniuk, Joanna

    2016-01-01

    Every doctor, regardless of specialization, may in his practice encounter the need to provide assistance to victims of crime. This article discusses the issues of informing the investigative authorities about a crime and of ensuring one's own safety and that of the surroundings at the scene. It also presents the specific elements of the procedures and practices needed to deal with victims while properly securing any evidence of a potential or committed crime. Special attention is given to medical procedures and to the additional steps necessary in the case of certain groups of crimes, among which we need to underline: offenses against sexual freedom and decency, bodily integrity, and human life and health, especially homicide, infanticide, and suicide. PMID:27164285

  3. Real time moving scene holographic camera system

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1973-01-01

    A holographic motion picture camera system producing resolution of front surface detail is described. The system utilizes a beam of coherent light and means for dividing the beam into a reference beam for direct transmission to a conventional movie camera and two reflection signal beams for transmission to the movie camera by reflection from the front side of a moving scene. The system is arranged so that critical parts of the system are positioned on the foci of a pair of interrelated, mathematically derived ellipses. The camera has the theoretical capability of producing motion picture holograms of projectiles moving at speeds as high as 900,000 cm/sec (about 21,450 mph).

  4. How People Actually Use Thermostats

    SciTech Connect

    Meier, Alan; Aragon, Cecilia; Hurwitz, Becky; Mujumdar, Dhawal; Peffer, Therese; Perry, Daniel; Pritoni, Marco

    2010-08-15

    Residential thermostats have been a key element in controlling heating and cooling systems for over sixty years. However, today's modern programmable thermostats (PTs) are complicated and difficult for users to understand, leading to errors in operation and wasted energy. Four separate tests of usability were conducted in preparation for a larger study. These tests included personal interviews, an on-line survey, photographing actual thermostat settings, and measurements of ability to accomplish four tasks related to effective use of a PT. The interviews revealed that many occupants used the PT as an on-off switch and most demonstrated little knowledge of how to operate it. The on-line survey found that 89% of the respondents rarely or never used the PT to set a weekday or weekend program. The photographic survey (in low income homes) found that only 30% of the PTs were actually programmed. In the usability test, we found that we could quantify the difference in usability of two PTs as measured in time to accomplish tasks. Users accomplished the tasks in consistently shorter times with the touchscreen unit than with buttons. None of these studies are representative of the entire population of users but, together, they illustrate the importance of improving user interfaces in PTs.

  5. Irdis: A Digital Scene Storage And Processing System For Hardware-In-The-Loop Missile Testing

    NASA Astrophysics Data System (ADS)

    Sedlar, Michael F.; Griffith, Jerry A.

    1988-07-01

    This paper describes the implementation of a Seeker Evaluation and Test Simulation (SETS) Facility at Eglin Air Force Base. This facility will be used to evaluate imaging infrared (IIR) guided weapon systems by performing various types of laboratory tests. One such test is termed Hardware-in-the-Loop (HIL) simulation (Figure 1), in which the actual flight of a weapon system is simulated as closely as possible in the laboratory. As shown in the figure, there are four major elements in the HIL test environment: the weapon/sensor combination, an aerodynamic simulator, an imagery controller, and an infrared imagery system. The paper concentrates on the approaches and methodologies used in the imagery controller and infrared imaging system elements for generating scene information. For procurement purposes, these two elements have been combined into an Infrared Digital Injection System (IRDIS), which provides scene storage, processing, and an output interface to drive a radiometric display device or to directly inject digital video into the weapon system (bypassing the sensor). The paper describes in detail how standard and custom image processing functions have been combined with off-the-shelf mass storage and computing devices to produce a system which provides high sample rates (greater than 90 Hz), a large terrain database, high weapon rates of change, and multiple independent targets. A photo-based approach has been used to maximize terrain and target fidelity, thus providing a rich and complex scene for weapon/tracker evaluation.

  6. [Study on the modeling of earth-atmosphere coupling over rugged scenes for hyperspectral remote sensing].

    PubMed

    Zhao, Hui-Jie; Jiang, Cheng; Jia, Guo-Rui

    2014-01-01

    Adjacency effects may introduce errors into quantitative applications of hyperspectral remote sensing, the most significant contribution being the earth-atmosphere coupling radiance. Moreover, surrounding relief and shadow induce strong changes in hyperspectral images acquired over rugged terrain, so the spectral characteristics cannot be described accurately without accounting for them, and the radiative coupling process between the earth and the atmosphere is more complex over rugged scenes. In order to meet the requirements of real-time processing in data simulation, an equivalent reflectance of the background was developed by taking into account the topography and the geometry between surroundings and targets, based on the radiative transfer process. The contributions of the coupling to the signal at sensor level were then evaluated. This approach was integrated into a sensor-level radiance simulation model and then validated by simulating a set of actual radiance data. The results show that the visual effect of the simulated images is consistent with that of the observed images. It was also shown that the spectral similarity is improved over rugged scenes. In addition, model precision is maintained at the same level over flat scenes. PMID:24783559

  7. An analysis of LANDSAT MSS scene-to-scene registration accuracy

    NASA Technical Reports Server (NTRS)

    Seyfarth, B. R.; Cook, P. W. (Principal Investigator)

    1981-01-01

    Measurements were made for 12 registrations done by ERL and for 8 registrations done by SRS. The results indicate that the ERL method is significantly more accurate in five of the eight comparisons. The differences between the two methods are not significant in the other three cases. There are two possible reasons for the differences. First, the ERL model is a piecewise linear model and the EDITOR model is a cubic polynomial model. Second, the ERL program resamples using bilinear interpolation while the EDITOR software uses nearest-neighbor resampling. This study did not indicate how much of the difference is attributable to each factor. The average of all merged scene error values for ERL was 31.6 meters and the average for the eight common areas was 32.6 meters. The average of the eight merged scene error values for SRS was 40.1 meters.
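    The two resampling schemes compared above differ as sketched below. This is a generic illustration of nearest-neighbor and bilinear sampling on a toy image, not the ERL or EDITOR code:

```python
def nearest_neighbor(img, x, y):
    """Sample img at fractional (x, y) by taking the nearest pixel."""
    return img[round(y)][round(x)]

def bilinear(img, x, y):
    """Sample img at fractional (x, y) by bilinear interpolation."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0][x0]
            + dx * (1 - dy) * img[y0][x0 + 1]
            + (1 - dx) * dy * img[y0 + 1][x0]
            + dx * dy * img[y0 + 1][x0 + 1])

# A 2x2 toy image: nearest neighbor snaps to a single input pixel and
# preserves original values, while bilinear blends all four neighbors,
# which smooths the resampled imagery.
img = [[0.0, 10.0],
       [20.0, 30.0]]
print(nearest_neighbor(img, 0.4, 0.4))    # -> 0.0
print(round(bilinear(img, 0.4, 0.4), 6))  # -> 12.0
```

    The trade-off is classical: bilinear interpolation yields smoother registered imagery but alters pixel values, whereas nearest neighbor preserves the original radiometry at the cost of geometric blockiness.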

  8. The Influence of Color on the Perception of Scene Gist

    ERIC Educational Resources Information Center

    Castelhano, Monica S.; Henderson, John M.

    2008-01-01

    In 3 experiments the authors used a new contextual bias paradigm to explore how quickly information is extracted from a scene to activate gist, whether color contributes to this activation, and how color contributes, if it does. Participants were shown a brief presentation of a scene followed by the name of a target object. The target object could…

  9. CRISP: A Computational Model of Fixation Durations in Scene Viewing

    ERIC Educational Resources Information Center

    Nuthmann, Antje; Smith, Tim J.; Engbert, Ralf; Henderson, John M.

    2010-01-01

    Eye-movement control during scene viewing can be represented as a series of individual decisions about where and when to move the eyes. While substantial behavioral and computational research has been devoted to investigating the placement of fixations in scenes, relatively little is known about the mechanisms that control fixation durations.…

  10. Emotional Scene Content Drives the Saccade Generation System Reflexively

    ERIC Educational Resources Information Center

    Nummenmaa, Lauri; Hyona, Jukka; Calvo, Manuel G.

    2009-01-01

    The authors assessed whether parafoveal perception of emotional content influences saccade programming. In Experiment 1, paired emotional and neutral scenes were presented to parafoveal vision. Participants performed voluntary saccades toward either of the scenes according to an imperative signal (color cue). Saccadic reaction times were faster…

  11. Binding actions and scenes in visual long-term memory.

    PubMed

    Urgolites, Zhisen Jiang; Wood, Justin N

    2013-12-01

    How does visual long-term memory store representations of different entities (e.g., objects, actions, and scenes) that are present in the same visual event? Are the different entities stored as an integrated representation in memory, or are they stored separately? To address this question, we asked observers to view a large number of events; in each event, an action was performed within a scene. Afterward, the participants were shown pairs of action-scene sets and indicated which of the two they had seen. When the task required recognizing the individual actions and scenes, performance was high (80%). Conversely, when the task required remembering which actions had occurred within which scenes, performance was significantly lower (59%). We observed this dissociation between memory for individual entities and memory for entity bindings across multiple testing conditions and presentation durations. These experiments indicate that visual long-term memory stores information about actions and information about scenes separately from one another, even when an action and scene were observed together in the same visual event. These findings also highlight an important limitation of human memory: Situations that require remembering actions and scenes as integrated events (e.g., eyewitness testimony) may be particularly vulnerable to memory errors. PMID:23653419

  12. Visual search for arbitrary objects in real scenes

    PubMed Central

    Alvarez, George A.; Rosenholtz, Ruth; Kuzmova, Yoana I.; Sherman, Ashley M.

    2011-01-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target. PMID:21671156
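    The RT × Set Size slope used above as the efficiency index is simply the slope of an ordinary least-squares line through mean reaction times. A minimal sketch of that computation, using hypothetical reaction-time data rather than the study's:

```python
# Estimate visual-search efficiency as the slope of the RT x Set Size
# function via ordinary least squares (data below are hypothetical).

def rt_set_size_slope(set_sizes, rts):
    """Return (slope in ms/item, intercept in ms) of the least-squares line."""
    n = len(set_sizes)
    mean_x = sum(set_sizes) / n
    mean_y = sum(rts) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(set_sizes, rts))
    var = sum((x - mean_x) ** 2 for x in set_sizes)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical mean RTs (ms) at four display set sizes:
sizes = [10, 20, 30, 40]
rts = [550, 600, 650, 700]  # a perfectly linear 5 ms/item search

slope, intercept = rt_set_size_slope(sizes, rts)
print(f"{slope:.1f} ms/item, intercept {intercept:.0f} ms")  # 5.0 ms/item, intercept 500 ms
```

    A shallow slope (as in the scene conditions above) indicates that few items are inspected per added "item"; steeper slopes indicate less guided search.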

  13. Being There: (Re)Making the Assessment Scene

    ERIC Educational Resources Information Center

    Gallagher, Chris W.

    2011-01-01

    I use Burkean analysis to show how neoliberalism undermines faculty assessment expertise and underwrites testing industry expertise in the current assessment scene. Contending that we cannot extricate ourselves from our limited agency in this scene until we abandon the familiar "stakeholder" theory of power, I propose a rewriting of the assessment…

  14. Parametric Modeling of Visual Search Efficiency in Real Scenes

    PubMed Central

    Zhang, Xing; Li, Qingquan; Zou, Qin; Fang, Zhixiang; Zhou, Baoding

    2015-01-01

    How should the efficiency of searching for real objects in real scenes be measured? Traditionally, when searching for artificial targets, e.g., letters or rectangles, among distractors, efficiency is measured by a reaction time (RT) × Set Size function. However, it is not clear whether the set size of real scenes is as effective a parameter for measuring search efficiency as the set size of artificial scenes. The present study investigated search efficiency in real scenes based on a combination of low-level features, e.g., visible size and target-flanker separation factors, and high-level features, e.g., category effect and target template. Visible size refers to the pixel number of visible parts of an object in a scene, whereas separation is defined as the sum of the flank distances from a target to the nearest distractors. During the experiment, observers searched for targets in various urban scenes, using pictures as the target templates. The results indicated that the effect of the set size in real scenes decreased according to the variances of other factors, e.g., visible size and separation. Increasing visible size and separation factors increased search efficiency. Based on these results, an RT × Visible Size × Separation function was proposed. These results suggest that the proposed function is a practicable predictor of search efficiency in real scenes. PMID:26030908

  15. High-fidelity real-time maritime scene rendering

    NASA Astrophysics Data System (ADS)

    Shyu, Hawjye; Taczak, Thomas M.; Cox, Kevin; Gover, Robert; Maraviglia, Carlos; Cahill, Colin

    2011-06-01

    The ability to simulate authentic engagements using real-world hardware is increasingly important. For rendering maritime environments, scene generators must be capable of rendering radiometrically accurate scenes with correct temporal and spatial characteristics. When the simulation is used as input to real-world hardware or human observers, the scene generator must operate in real-time. This paper introduces a novel, real-time scene generation capability for rendering radiometrically accurate scenes of backgrounds and targets in maritime environments. The new model is an optimized and parallelized version of the US Navy CRUISE_Missiles rendering engine. It was designed to accept environmental descriptions and engagement geometry data from external sources, render a scene, transform the radiometric scene using the electro-optical response functions of a sensor under test, and output the resulting signal to real-world hardware. This paper reviews components of the scene rendering algorithm, and details the modifications required to run this code in real-time. A description of the simulation architecture and interfaces to external hardware and models is presented. Performance assessments of the frame rate and radiometric accuracy of the new code are summarized. This work was completed in FY10 under Office of Secretary of Defense (OSD) Central Test and Evaluation Investment Program (CTEIP) funding and will undergo a validation process in FY11.

  16. The Importance of Information Localization in Scene Gist Recognition

    ERIC Educational Resources Information Center

    Loschky, Lester C.; Sethi, Amit; Simons, Daniel J.; Pydimarri, Tejaswi N.; Ochs, Daniel; Corbeille, Jeremy L.

    2007-01-01

    People can recognize the meaning or gist of a scene from a single glance, and a few recent studies have begun to examine the sorts of information that contribute to scene gist recognition. The authors of the present study used visual masking coupled with image manipulations (randomizing phase while maintaining the Fourier amplitude spectrum;…

  17. Mental Layout Extrapolations Prime Spatial Processing of Scenes

    ERIC Educational Resources Information Center

    Gottesman, Carmela V.

    2011-01-01

    Four experiments examined whether scene processing is facilitated by layout representation, including layout that was not perceived but could be predicted based on a previous partial view (boundary extension). In a priming paradigm (after Sanocki, 2003), participants judged objects' distances in photographs. In Experiment 1, full scenes (target),…

  18. Intrinsic Frames of Reference and Egocentric Viewpoints in Scene Recognition

    ERIC Educational Resources Information Center

    Mou, Weimin; Fan, Yanli; McNamara, Timothy P.; Owen, Charles B.

    2008-01-01

    Three experiments investigated the roles of intrinsic directions of a scene and observer's viewing direction in recognizing the scene. Participants learned the locations of seven objects along an intrinsic direction that was different from their viewing direction and then recognized spatial arrangements of three or six of these objects from…

  19. Detecting and representing predictable structure during auditory scene analysis.

    PubMed

    Sohoglu, Ediz; Chait, Maria

    2016-01-01

    We use psychophysics and MEG to test how sensitivity to input statistics facilitates auditory scene analysis (ASA). Human subjects listened to 'scenes' comprised of concurrent tone-pip streams (sources). On occasional trials a new source appeared partway through. Listeners were more accurate and quicker to detect source appearance in scenes comprised of temporally regular (REG), rather than random (RAND), sources. MEG in passive listeners and in those actively detecting appearance events revealed increased sustained activity in auditory and parietal cortex in REG relative to RAND scenes, emerging from ~400 ms after scene onset. Over and above this, appearance in REG scenes was associated with increased responses relative to RAND scenes. The effect of temporal structure on appearance-evoked responses was delayed when listeners were focused on the scenes relative to when listening passively, consistent with the notion that attention reduces 'surprise'. Overall, the results implicate a mechanism that tracks the predictability of multiple concurrent sources to facilitate active and passive ASA. PMID:27602577

  20. Simulation of partially obscured scenes using the radiosity method

    SciTech Connect

    Gerstl, S.A.W.; Borel, C.C.

    1990-01-01

    Using the extended radiosity (zonal) method, realistic synthetic images are constructed of visual scenes in the visible and infrared containing radiatively participating media such as smoke, fog, and clouds. Computational methods are discussed, as well as the rendering of various scenes using computer graphics methods.

  1. Investigation of scene identification algorithms for radiation budget measurements

    NASA Technical Reports Server (NTRS)

    Diekmann, F. J.

    1986-01-01

    The computation of Earth radiation budget from satellite measurements requires the identification of the scene in order to select spectral factors and bidirectional models. A scene identification procedure is developed for AVHRR SW and LW data by using two radiative transfer models. The AVHRR GAC pixels are then attached to corresponding ERBE pixels, and the results are sorted into scene identification probability matrices. These scene intercomparisons show that there is generally a tendency for the ERBE results to underestimate cloudiness over ocean at high cloud amounts relative to the AVHRR results, e.g., mostly cloudy instead of overcast, or partly cloudy instead of mostly cloudy. Reasons for this are explained. Preliminary estimates of the errors in exitances due to scene misidentification demonstrate a high dependency on the probability matrices. While the longwave error can generally be neglected, the shortwave deviations reach maximum values of more than 12% of the respective exitances.
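    A scene identification probability matrix of the kind described above tallies, for each scene class assigned by one instrument, how often the collocated pixels received each class from the other. A minimal sketch with hypothetical class labels and pixel pairings (not ERBE/AVHRR data):

```python
# Build a scene-identification probability matrix: rows are one instrument's
# scene class, columns the other's, entries the row-conditional probabilities.
# All labels and pixel pairings below are hypothetical.

CLASSES = ["clear", "partly cloudy", "mostly cloudy", "overcast"]

def probability_matrix(pairs):
    """pairs: list of (class_a, class_b) per collocated pixel."""
    counts = {a: {b: 0 for b in CLASSES} for a in CLASSES}
    for class_a, class_b in pairs:
        counts[class_a][class_b] += 1
    matrix = {}
    for a in CLASSES:
        total = sum(counts[a].values())
        matrix[a] = {b: (counts[a][b] / total if total else 0.0) for b in CLASSES}
    return matrix

# Hypothetical pairings showing underestimation of cloudiness at high
# cloud amounts by the second classification:
pairs = [("overcast", "mostly cloudy")] * 3 + [("overcast", "overcast")] * 7
m = probability_matrix(pairs)
print(m["overcast"])  # 30% of overcast pixels relabeled mostly cloudy
```

    Off-diagonal mass in such a matrix is what drives the exitance errors discussed in the abstract.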

  2. Implementation of jump-diffusion algorithms for understanding FLIR scenes

    NASA Astrophysics Data System (ADS)

    Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.

    1995-07-01

    Our pattern theoretic approach to the automated understanding of forward-looking infrared (FLIR) images brings the traditionally separate endeavors of detection, tracking, and recognition together into a unified jump-diffusion process. New objects are detected and object types are recognized through discrete jump moves. Between jumps, the location and orientation of objects are estimated via continuous diffusions. A hypothesized scene, simulated from the emissive characteristics of the hypothesized scene elements, is compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. The jump-diffusion process empirically generates the posterior distribution. Both the diffusion and jump operations involve the simulation of a scene produced by a hypothesized configuration. Scene simulation is most effectively accomplished by pipelined rendering engines such as those from Silicon Graphics. We demonstrate the execution of our algorithm on a Silicon Graphics Onyx/RealityEngine.

  3. Joint 3d Estimation of Vehicles and Scene Flow

    NASA Astrophysics Data System (ADS)

    Menze, M.; Heipke, C.; Geiger, A.

    2015-08-01

    While much progress has been made in recent years, imaging conditions in natural outdoor environments are still very challenging for current reconstruction and recognition methods. In this paper, we propose a novel unified approach which reasons jointly about 3D scene flow as well as the pose, shape, and motion of vehicles in the scene. Towards this goal, we incorporate a deformable CAD model into a slanted-plane conditional random field for scene flow estimation and enforce shape consistency between the rendered 3D models and the parameters of all superpixels in the image. The association of superpixels to objects is established by an index variable which implicitly enables model selection. We evaluate our approach on the challenging KITTI scene flow dataset in terms of object and scene flow estimation. Our results provide a proof of concept and demonstrate the usefulness of our method.

  4. Classification-based scene modeling for urban point clouds

    NASA Astrophysics Data System (ADS)

    Hao, Wen; Wang, Yinghui

    2014-03-01

    The three-dimensional modeling of urban scenes is an important topic that can be used for various applications. We present a comprehensive strategy to reconstruct a scene from urban point clouds. First, the urban point clouds are classified into the ground points, planar points on the ground, and nonplanar points on the ground by using the support vector machine algorithm which takes several differential geometry properties as features. Second, the planar points and nonplanar points on the ground are segmented into patches by using different segmentation methods. A collection of characteristics of point cloud segments like height, size, topological relationship, and ratio between the width and length are applied to extract different objects after removing the unwanted segments. Finally, the buildings, ground, and trees in the scene are reconstructed, resulting in a hybrid model representing the urban scene. Experimental results demonstrate that the proposed method can be used as a robust way to reconstruct the scene from the massive point clouds.
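    The segment characteristics listed above (height, size, ratio between width and length) are straightforward to compute from a segment's points. A minimal sketch, with a hypothetical segment and no claim to match the paper's exact feature definitions:

```python
# Compute simple geometric features of a point-cloud segment, of the kind
# used above to separate buildings, trees, and unwanted segments.
# The segment and any thresholds applied to these features are hypothetical.

def segment_features(points):
    """points: list of (x, y, z) tuples. Returns height, point count,
    and the ratio of the footprint's shorter to longer extent."""
    xs, ys, zs = zip(*points)
    dx, dy = max(xs) - min(xs), max(ys) - min(ys)
    width, length = sorted((dx, dy))
    return {
        "height": max(zs) - min(zs),
        "size": len(points),
        "wl_ratio": width / length if length else 1.0,
    }

# A hypothetical elongated, low segment (e.g., a wall fragment):
points = [(0, 0, 0), (10, 0.5, 0), (5, 0.2, 2.5), (10, 0, 2.4)]
features = segment_features(points)
print(features)
```

    A very small width/length ratio combined with moderate height, for instance, would suggest a wall-like patch rather than a tree crown.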

  5. Selective orienting to pleasant versus unpleasant visual scenes.

    PubMed

    Fernández-Martín, Andrés; Calvo, Manuel G

    2016-10-01

    We investigated the relative attentional capture by positive versus simultaneously presented negative images in extrafoveal vision for female observers. Pairs of task-irrelevant pleasant and unpleasant visual scenes were displayed peripherally (⩾5° away from fixation) during a task-relevant letter-discrimination task at fixation. Selective attentional orienting was assessed by the probability of first fixating each scene and the time until first fixation. Results revealed a higher first fixation probability and shorter entry times, followed by longer dwell times, for pleasant relative to unpleasant scenes. The attentional capture advantage of pleasant scenes occurred in the absence of differences in perceptual properties. Processing of affective scene significance thus occurs automatically and early, through covert attention in peripheral vision. At least in non-threatening conditions, the attentional system is tuned to initially orient to pleasant images when these compete with unpleasant ones. PMID:27371766

  6. 3D scene reconstruction from multi-aperture images

    NASA Astrophysics Data System (ADS)

    Mao, Miao; Qin, Kaihuai

    2014-04-01

    With the development of virtual reality, there is a growing demand for 3D modeling of real scenes. This paper proposes a novel 3D scene reconstruction framework based on multi-aperture images. Our framework consists of four parts. First, images with different apertures are captured via a programmable aperture. Second, we use the SIFT method for feature point matching. Then we exploit binocular stereo vision to calculate camera parameters and the 3D positions of matching points, forming a sparse 3D scene model. Finally, we apply patch-based multi-view stereo to obtain a dense 3D scene model. Experimental results show that our method is practical and effective for reconstructing dense 3D scenes.
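    The binocular-stereo step above recovers 3D positions of matched feature points; for a rectified camera pair the core computation reduces to depth from disparity, Z = f·B/d, followed by back-projection. A minimal sketch with hypothetical camera parameters (not the paper's calibration):

```python
# Triangulate a matched point from a rectified stereo pair:
# depth Z = f * baseline / disparity, then back-project to (X, Y, Z).
# Focal length, baseline, principal point, and the match are hypothetical.

def triangulate(x_left, x_right, y, f, baseline, cx, cy):
    """Return (X, Y, Z) in left-camera coordinates for one match."""
    disparity = x_left - x_right  # pixels; positive for finite depth
    if disparity <= 0:
        raise ValueError("point at infinity or bad match")
    Z = f * baseline / disparity
    X = (x_left - cx) * Z / f
    Y = (y - cy) * Z / f
    return X, Y, Z

f = 800.0        # focal length in pixels
baseline = 0.12  # metres between the two viewpoints
cx, cy = 320.0, 240.0

# A hypothetical SIFT match on the same row (rectified), 16 px disparity:
X, Y, Z = triangulate(336.0, 320.0, 240.0, f, baseline, cx, cy)
print(f"Z = {Z:.2f} m")  # 800 * 0.12 / 16 = 6.00 m
```

    Repeating this over all matches yields the sparse model that the patch-based multi-view stereo stage then densifies.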

  7. Improving text recognition by distinguishing scene and overlay text

    NASA Astrophysics Data System (ADS)

    Quehl, Bernhard; Yang, Haojin; Sack, Harald

    2015-02-01

    Video texts are closely related to the content of a video. They provide a valuable source for indexing and interpretation of video data. Text detection and recognition tasks in images or videos typically distinguish between overlay and scene text. Overlay text is artificially superimposed on the image at the time of editing, and scene text is text captured by the recording system. Typically, OCR systems are specialized for one kind of text type. However, in video images both types of text can be found. In this paper, we propose a method to automatically distinguish between overlay and scene text in order to dynamically control and optimize post-processing steps following text detection. Based on a feature combination, a Support Vector Machine (SVM) is trained to classify scene and overlay text. We show how this distinction between overlay and scene text improves the word recognition rate. The accuracy of the proposed methods has been evaluated using publicly available test data sets.

  8. Universal scene change detection on MPEG-coded data domain

    NASA Astrophysics Data System (ADS)

    Nakajima, Yasuyuki; Ujihara, Kiyono; Yoneyama, Akio

    1997-01-01

    In this paper, we propose a scene decomposition algorithm for MPEG compressed video data. As preprocessing for scene decomposition, DC images are partially reconstructed for P- and B-pictures as well as I-pictures directly from the MPEG bitstream. For detection, we exploit several methods for detecting abrupt scene changes, dissolves, and wipe transitions, using comparisons of DC images between frames and coding information such as motion vectors. We also propose a method for excluding undesired detections, such as flashlights, in order to improve scene change detection accuracy. More than 95 percent decomposition accuracy was obtained in an experiment using more than one hour of TV programming. We also found that with the proposed algorithm, scene change detection can be performed more than 5 times faster than normal playback speed on a 130-MIPS workstation.
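    An abrupt-cut detector of the kind described above can be reduced to thresholding a frame-to-frame distance between DC images. A minimal sketch on hypothetical DC data (the paper's actual comparison and flashlight-rejection logic are more elaborate):

```python
# Detect abrupt scene changes by thresholding the mean absolute difference
# between consecutive DC images. Frames and threshold are hypothetical.

def mean_abs_diff(dc_a, dc_b):
    """Mean absolute difference between two DC images (flat lists of DC values)."""
    return sum(abs(a - b) for a, b in zip(dc_a, dc_b)) / len(dc_a)

def detect_cuts(dc_frames, threshold):
    """Return indices i where an abrupt change occurs between frames i-1 and i."""
    return [i for i in range(1, len(dc_frames))
            if mean_abs_diff(dc_frames[i - 1], dc_frames[i]) > threshold]

# Hypothetical 2x2-block DC images with a cut before frame 2:
frames = [
    [100, 100, 100, 100],
    [102, 101, 99, 100],
    [30, 35, 32, 30],   # new shot
    [31, 34, 33, 30],
]
cuts = detect_cuts(frames, threshold=20)
print(cuts)  # [2]
```

    Dissolves and wipes require comparing longer windows of frames, and flashlight rejection amounts to discarding isolated single-frame spikes.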

  9. Music scene description: Toward audio-based real-time music understanding

    NASA Astrophysics Data System (ADS)

    Goto, Masataka

    2002-05-01

    Music understanding is an important component of audio-based interactive music systems. A real-time music scene description system for the computational modeling of music understanding is proposed. This research is based on the assumption that a listener understands music without deriving musical scores or even fully segregating signals. In keeping with this assumption, our music scene description system produces intuitive descriptions of music, such as the beat structure and the melody and bass lines. Two real-time subsystems have been developed, a beat-tracking subsystem and a melody-and-bass detection subsystem, which can deal with real-world monaural audio signals sampled from popular-music CDs. The beat-tracking subsystem recognizes a hierarchical beat structure comprising the quarter-note, half-note, and measure levels by using three kinds of musical knowledge: of onset times, of chord changes, and of drum patterns. The melody-and-bass detection subsystem estimates the F0 (fundamental frequency) of the melody and bass lines by using a predominant-F0 estimation method called PreFEst, which does not rely on the often-unreliable fundamental frequency component and instead obtains the most predominant F0 supported by harmonics within an intentionally limited frequency range. Several applications of music understanding are described, including a beat-driven, real-time computer graphics and lighting controller.
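    The idea of selecting the F0 best supported by harmonics within a limited frequency range can be illustrated with a crude harmonic-summation score. This is only a sketch of that general idea on a hypothetical spectrum, not the PreFEst algorithm itself, which fits probabilistic tone models:

```python
# Pick the candidate F0 whose harmonics collect the most spectral magnitude,
# restricting candidates to a limited range (e.g., a melody or bass register).
# The spectrum, bin width, and search range below are all hypothetical.

def predominant_f0(spectrum, bin_hz, f0_range, n_harmonics=5):
    """spectrum: magnitude per frequency bin; returns the best F0 in Hz."""
    lo, hi = f0_range
    best_f0, best_score = None, -1.0
    f0 = lo
    while f0 <= hi:
        score = 0.0
        for h in range(1, n_harmonics + 1):
            k = round(h * f0 / bin_hz)   # bin of the h-th harmonic
            if k < len(spectrum):
                score += spectrum[k]
        if score > best_score:
            best_f0, best_score = f0, score
        f0 += bin_hz
    return best_f0

# Hypothetical spectrum with 10 Hz bins and partials at 220, 440, 660 Hz:
spectrum = [0.0] * 100
for hz in (220, 440, 660):
    spectrum[hz // 10] = 1.0
best = predominant_f0(spectrum, bin_hz=10.0, f0_range=(100.0, 300.0))
print(best)  # 220.0
```

    Note that 220 Hz wins even though a missing fundamental would not change the outcome, which is the property the abstract emphasizes.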

  10. Does object view influence the scene consistency effect?

    PubMed

    Sastyin, Gergo; Niimi, Ryosuke; Yokosawa, Kazuhiko

    2015-04-01

    Traditional research on the scene consistency effect only used clearly recognizable object stimuli to show mutually interactive context effects for both the object and background components on scene perception (Davenport & Potter in Psychological Science, 15, 559-564, 2004). However, in real environments, objects are viewed from multiple viewpoints, including an accidental, hard-to-recognize one. When the observers named target objects in scenes (Experiments 1a and 1b, object recognition task), we replicated the scene consistency effect (i.e., there was higher accuracy for the objects with consistent backgrounds). However, there was a significant interaction effect between consistency and object viewpoint, which indicated that the scene consistency effect was more important for identifying objects in the accidental view condition than in the canonical view condition. Therefore, the object recognition system may rely more on the scene context when the object is difficult to recognize. In Experiment 2, the observers identified the background (background recognition task) while the scene consistency and object views were manipulated. The results showed that object viewpoint had no effect, while the scene consistency effect was observed. More specifically, the canonical and accidental views both equally provided contextual information for scene perception. These findings suggested that the mechanism for conscious recognition of objects could be dissociated from the mechanism for visual analysis of object images that were part of a scene. The "context" that the object images provided may have been derived from its view-invariant, relatively low-level visual features (e.g., color), rather than its semantic information. PMID:25522833

  11. One high performance technology of infrared scene projection

    NASA Astrophysics Data System (ADS)

    Wang, Hong-jie; Qian, Li-xun; Cao, Chun; Li, Zhuo

    2014-11-01

    Infrared scene generation technologies are used to simulate the infrared radiation characteristics of targets and backgrounds in the laboratory. They provide synthetic infrared imagery for thermal imager test and evaluation in infrared imaging systems. Many infrared scene generation technologies are already in wide use and have produced substantial achievements. In this paper, we design and manufacture a high-performance IR scene generation device whose key component, a thin-film transducer, is fabricated using micro-electro-mechanical systems (MEMS) technology. The specific MEMS process parameters were obtained from a large number of experiments. The properties of the infrared scene generation chip were investigated experimentally. It achieves high resolution, a high frame rate, and reliable performance, which can meet the requirements of most simulation systems. The radiation coefficient of the thin-film transducer is measured to be 0.86. The frame rate is 160 Hz. The emission spectrum spans the infrared band from 2 μm to 12 μm. Illuminated by visible light of different intensities, the equivalent blackbody temperature of the transducer can be varied in the range of 290 K to 440 K. The spatial resolution is more than 256 × 256. The geometric distortion and the uniformity of the generated infrared scene are within 5 percent. The infrared scene generator based on this chip includes three parts: a visual image projector, a visual-to-thermal transducer, and the infrared scene projector. The experimental results show that this thin-film infrared scene generation chip meets the requirements of most hardware-in-the-loop scene simulation systems for IR sensor testing.

  12. Just Another Social Scene: Evidence for Decreased Attention to Negative Social Scenes in High-Functioning Autism

    ERIC Educational Resources Information Center

    Santos, Andreia; Chaminade, Thierry; Da Fonseca, David; Silva, Catarina; Rosset, Delphine; Deruelle, Christine

    2012-01-01

    The adaptive threat-detection advantage takes the form of a preferential orienting of attention to threatening scenes. In this study, we compared attention to social scenes in 15 high-functioning individuals with autism (ASD) and matched typically developing (TD) individuals. Eye-tracking was recorded while participants were presented with pairs…

  13. Sticky-Note Murals

    ERIC Educational Resources Information Center

    Sands, Ian

    2011-01-01

    In this article, the author describes a sticky-note mural project that originated from his desire to incorporate contemporary materials into his assignments as well as to inspire collaboration between students. The process takes much more than sticking sticky notes to the wall. It takes critical thinking skills and teamwork to design and complete…

  14. Memory efficient atmospheric effects modeling for infrared scene generators

    NASA Astrophysics Data System (ADS)

    Kavak, Çaǧlar; Özsaraç, Seçkin

    2015-05-01

    The infrared (IR) energy radiated from any source passes through the atmosphere before reaching the sensor. As a result, the total signature captured by the IR sensor is significantly modified by atmospheric effects. The dominant physical quantities that constitute these atmospheric effects are the atmospheric transmittance and the atmospheric path radiance: the incoming IR radiation is attenuated by the transmittance, and the path radiance is added on top of the attenuated radiation. In IR scene simulations, OpenGL is widely used for rendering. In the literature there are studies which model the atmospheric effects in an IR band using OpenGL's exponential fog model, as suggested by Beer's law. In the standard OpenGL pipeline, this fog model needs single equivalent OpenGL variables for the transmittance and path radiance, which actually depend both on the distance between the source and the sensor and on the wavelength of interest. In conditions where the range dependency cannot be modeled as an exponential function, however, it is not accurate to replace the atmospheric quantities with a single parameter. The introduction of the OpenGL Shading Language (GLSL) has enabled developers to use the GPU more flexibly. In this paper, a novel method is proposed for atmospheric effects modeling using least squares estimation with polynomial fitting, implemented in programmable OpenGL shader programs built with GLSL. In this context, a radiative transfer model code is used to obtain the transmittance and path radiance data. Then, polynomial fits are computed for the range dependency of these variables. Hence, the atmospheric effects model data that must be uploaded to GPU memory is significantly reduced. Moreover, the error due to fitting is negligible as long as narrow IR bands are used.
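    The memory saving above comes from replacing a per-range table of transmittance values with a handful of polynomial coefficients fit by least squares. A minimal sketch of a quadratic fit via the normal equations; the range/transmittance samples are hypothetical, not radiative-transfer output:

```python
# Fit tau(r) ~ c0 + c1*r + c2*r^2 to sampled transmittance-vs-range data by
# least squares, so only three coefficients need be uploaded to the GPU.
# Sample data below are hypothetical.

def polyfit2(rs, taus):
    """Least-squares quadratic fit; returns (c0, c1, c2)."""
    # Normal equations (A^T A) c = A^T y for the design matrix A = [1, r, r^2].
    s = [sum(r ** k for r in rs) for k in range(5)]               # sums of r^0..r^4
    b = [sum(t * r ** k for r, t in zip(rs, taus)) for k in range(3)]
    M = [[s[0], s[1], s[2], b[0]],
         [s[1], s[2], s[3], b[1]],
         [s[2], s[3], s[4], b[2]]]
    # Gaussian elimination with partial pivoting on the 3x4 augmented matrix.
    for i in range(3):
        p = max(range(i, 3), key=lambda row: abs(M[row][i]))
        M[i], M[p] = M[p], M[i]
        for j in range(i + 1, 3):
            factor = M[j][i] / M[i][i]
            M[j] = [mj - factor * mi for mj, mi in zip(M[j], M[i])]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        c[i] = (M[i][3] - sum(M[i][k] * c[k] for k in range(i + 1, 3))) / M[i][i]
    return c

rs = [0.0, 1.0, 2.0, 3.0, 4.0]          # range samples (km)
taus = [1.0, 0.82, 0.68, 0.58, 0.52]    # hypothetical transmittance samples
c0, c1, c2 = polyfit2(rs, taus)
tau_est = c0 + c1 * 2.5 + c2 * 2.5 ** 2  # evaluate the fit at 2.5 km
print(f"tau(2.5 km) = {tau_est:.3f}")
```

    A fragment shader then evaluates the polynomial per fragment instead of sampling a large lookup texture.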

  15. The scene and the unseen: manipulating photographs for experiments on change blindness and scene memory: image manipulation for change blindness.

    PubMed

    Ball, Felix; Elzemann, Anne; Busch, Niko A

    2014-09-01

    The change blindness paradigm, in which participants often fail to notice substantial changes in a scene, is a popular tool for studying scene perception, visual memory, and the link between awareness and attention. Some of the most striking and popular examples of change blindness have been demonstrated with digital photographs of natural scenes; in most studies, however, much simpler displays, such as abstract stimuli or "free-floating" objects, are typically used. Although simple displays have undeniable advantages, natural scenes remain a very useful and attractive stimulus for change blindness research. To assist researchers interested in using natural-scene stimuli in change blindness experiments, we provide here a step-by-step tutorial on how to produce changes in natural-scene images with a freely available image-processing tool (GIMP). We explain how changes in a scene can be made by deleting objects or relocating them within the scene or by changing the color of an object, in just a few simple steps. We also explain how the physical properties of such changes can be analyzed using GIMP and MATLAB (a high-level scientific programming tool). Finally, we present an experiment confirming that scenes manipulated according to our guidelines are effective in inducing change blindness and demonstrating the relationship between change blindness and the physical properties of the change and inter-individual differences in performance measures. We expect that this tutorial will be useful for researchers interested in studying the mechanisms of change blindness, attention, or visual memory using natural scenes. PMID:24311058
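    The physical properties of a change (how large it is, and how much it alters luminance) can be quantified directly by comparing the original and manipulated images, in the spirit of the MATLAB analysis described above. A minimal sketch on hypothetical grayscale pixel arrays:

```python
# Quantify a scene change by comparing original and manipulated images:
# changed-pixel count, changed fraction of the image, and the mean absolute
# luminance difference over the changed region. Images are hypothetical.

def change_properties(original, modified, tol=0):
    """original/modified: equal-sized 2D lists of grayscale values."""
    flat = [(a, b) for row_a, row_b in zip(original, modified)
            for a, b in zip(row_a, row_b)]
    changed = [(a, b) for a, b in flat if abs(a - b) > tol]
    if not changed:
        return 0, 0.0, 0.0
    mean_diff = sum(abs(a - b) for a, b in changed) / len(changed)
    return len(changed), len(changed) / len(flat), mean_diff

# Hypothetical 4x4 images: a 2x2 object deleted (set to background value 50).
original = [[50, 50, 50, 50],
            [50, 200, 200, 50],
            [50, 200, 200, 50],
            [50, 50, 50, 50]]
modified = [[50, 50, 50, 50],
            [50, 50, 50, 50],
            [50, 50, 50, 50],
            [50, 50, 50, 50]]
count, fraction, mean_diff = change_properties(original, modified)
print(count, fraction, mean_diff)  # 4 0.25 150.0
```

    Measures like these let change-detection performance be related to the change's physical salience rather than treated as a binary property.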

  16. View Nine of Lunar Panoramic Scene

    NASA Technical Reports Server (NTRS)

    1969-01-01

    The second manned lunar landing mission, Apollo 12, launched from launch pad 39-A at Kennedy Space Center in Florida on November 14, 1969 via a Saturn V launch vehicle. The Saturn V vehicle was developed by the Marshall Space Flight Center (MSFC) under the direction of Dr. Wernher von Braun. Aboard Apollo 12 was a crew of three astronauts: Alan L. Bean, pilot of the Lunar Module (LM), Intrepid; Richard Gordon, pilot of the Command Module (CM), Yankee Clipper; and Spacecraft Commander Charles Conrad. The LM, Intrepid, landed astronauts Conrad and Bean on the lunar surface in what's known as the Ocean of Storms while astronaut Richard Gordon piloted the CM, Yankee Clipper, in a parking orbit around the Moon. Lunar soil activities included the deployment of the Apollo Lunar Surface Experiments Package (ALSEP), finding the unmanned Surveyor 3 that landed on the Moon on April 19, 1967, and collecting 75 pounds (34 kilograms) of rock samples. This is the ninth of 25 images captured by the crew in attempt to provide a 360 degree Lunar surface scene. Apollo 12 safely returned to Earth on November 24, 1969.

  17. Crime scene investigation (as seen on TV).

    PubMed

    Durnal, Evan W

    2010-06-15

    A mysterious green ooze is injected into a brightly illuminated and humming machine; ten seconds later, a printout containing a complete biography of the substance is at the fingertips of an attractive young investigator who exclaims "we found it!" We have all seen this event occur countless times on any of the three CSI dramas, Cold Case, Crossing Jordan, and many more. With this new style of "infotainment" (Surette, 2007) comes an increasingly blurred line between the hard facts of reality and the soft, quick solutions of entertainment. With these advances in technology, how can crime rates be anything but plummeting as would-be criminals cringe at the idea of leaving the smallest speck of themselves at a crime scene? Surely there are very few serious crimes that go unpunished in today's world of high-tech, fast-paced gadgetry. Science and technology have come a great distance since Sir Arthur Conan Doyle first described the famous fictional forensic scientist Sherlock Holmes, but they still have light-years to go. PMID:20227206

  18. Dense Correspondences across Scenes and Scales.

    PubMed

    Tau, Moria; Hassner, Tal

    2016-05-01

    We seek a practical method for establishing dense correspondences between two images with similar content, but possibly different 3D scenes. One of the challenges in designing such a system is the local scale differences of objects appearing in the two images. Previous methods often considered only a few image pixels, matching only pixels for which stable scales may be reliably estimated. Recently, others have considered dense correspondences, but with substantial costs associated with generating, storing, and matching scale-invariant descriptors. Our work is motivated by the observation that pixels in the image have contexts (the pixels around them) which may be exploited in order to reliably estimate local scales. We make the following contributions. (i) We show that scales estimated at sparse interest points may be propagated to neighboring pixels where this information cannot be reliably determined. Doing so allows scale-invariant descriptors to be extracted anywhere in the image. (ii) We explore three means for propagating this information: using the scales at detected interest points, using the underlying image information to guide scale propagation in each image separately, and using both images together. Finally, (iii) we provide extensive qualitative and quantitative results, demonstrating that scale propagation allows accurate dense correspondences to be obtained even between very different images, with little computational cost beyond that required by existing methods. PMID:26336115

  19. Scene recognition by manifold regularized deep learning architecture.

    PubMed

    Yuan, Yuan; Mou, Lichao; Lu, Xiaoqiang

    2015-10-01

    Scene recognition is an important problem in the field of computer vision because it helps to narrow the gap between computers and humans in scene understanding. Semantic modeling is a popular technique used to fill the semantic gap in scene recognition. However, most semantic modeling approaches learn shallow, one-layer representations for scene recognition, while ignoring the structural information relating images, often resulting in poor performance. Modeled after the human visual system, and intended to inherit humanlike judgment, a manifold regularized deep architecture is proposed for scene recognition. The proposed deep architecture exploits the structural information of the data, forming a mapping between the visible layer and the hidden layer. With the proposed approach, a deep architecture can be designed to learn high-level features for scene recognition in an unsupervised fashion. Experiments on standard data sets show that our method outperforms state-of-the-art methods for scene recognition. PMID:25622326
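    "Manifold regularization" of this kind is commonly implemented as a graph-Laplacian penalty that keeps the hidden representations of similar images close. The sketch below shows that standard penalty numerically, not the authors' architecture; the toy affinity matrix and hidden representations are made up for illustration:

```python
import numpy as np

def laplacian_penalty(H, W):
    """Graph-Laplacian manifold regularizer: equals
    sum_ij W_ij * ||h_i - h_j||^2, computed as 2 * tr(H^T L H)."""
    L = np.diag(W.sum(axis=1)) - W   # Laplacian L = D - W
    return 2.0 * float(np.trace(H.T @ L @ H))

# Toy affinity graph over 3 samples and their 2-D hidden representations.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.array([[0., 0.],
              [1., 0.],
              [1., 1.]])

# Direct pairwise computation agrees with the trace form.
direct = sum(W[i, j] * np.sum((H[i] - H[j]) ** 2)
             for i in range(3) for j in range(3))
print(laplacian_penalty(H, W), direct)
```

    In training, a weighted version of this term would be added to the reconstruction or classification loss so that images deemed similar by the affinity graph are mapped to nearby hidden codes.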

  20. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Technical Reports Server (NTRS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.
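    The final claim, recovering a scene from a relatively small number of spatial frequency components, can be illustrated with a toy Fourier-domain sketch; this stands in for, and is far simpler than, the radio-astronomy-derived reconstruction algorithms the paper describes:

```python
import numpy as np

rng = np.random.default_rng(2)
scene = rng.random((32, 32))
F = np.fft.fftshift(np.fft.fft2(scene))  # spatial-frequency components

# A sparsely filled aperture: keep only a 12x12 block of low frequencies.
mask = np.zeros(F.shape, dtype=bool)
mask[10:22, 10:22] = True
recon = np.fft.ifft2(np.fft.ifftshift(np.where(mask, F, 0))).real

# The full set of components recovers the scene exactly; the sparse
# subset yields an approximation whose error measures what was lost.
full = np.fft.ifft2(np.fft.ifftshift(F)).real
err = float(np.linalg.norm(recon - scene) / np.linalg.norm(scene))
print(round(err, 3))
```

    Real interferometric reconstruction must additionally cope with irregular sampling of the frequency plane and noise, which is where the specialized algorithms come in.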

  2. Decoding Representations of Scenes in the Medial Temporal Lobes

    PubMed Central

    Bonnici, Heidi M; Kumaran, Dharshan; Chadwick, Martin J; Weiskopf, Nikolaus; Hassabis, Demis; Maguire, Eleanor A

    2012-01-01

    Recent theoretical perspectives have suggested that the function of the human hippocampus, like its rodent counterpart, may be best characterized in terms of its information processing capacities. In this study, we use a combination of high-resolution functional magnetic resonance imaging, multivariate pattern analysis, and a simple decision making task, to test specific hypotheses concerning the role of the medial temporal lobe (MTL) in scene processing. We observed that while information that enabled two highly similar scenes to be distinguished was widely distributed throughout the MTL, more distinct scene representations were present in the hippocampus, consistent with its role in performing pattern separation. As well as viewing the two similar scenes, during scanning participants also viewed morphed scenes that spanned a continuum between the original two scenes. We found that patterns of hippocampal activity during morph trials, even when perceptual inputs were held entirely constant (i.e., in 50% morph trials), showed a robust relationship with participants' choices in the decision task. Our findings provide evidence for a specific computational role for the hippocampus in sustaining detailed representations of complex scenes, and shed new light on how the information processing capacities of the hippocampus may influence the decision making process. © 2011 Wiley Periodicals, Inc. PMID:21656874

  3. A Discriminative Representation of Convolutional Features for Indoor Scene Recognition

    NASA Astrophysics Data System (ADS)

    Khan, Salman H.; Hayat, Munawar; Bennamoun, Mohammed; Togneri, Roberto; Sohel, Ferdous A.

    2016-07-01

    Indoor scene recognition is a multi-faceted and challenging problem due to the diverse intra-class variations and the confusing inter-class similarities. This paper presents a novel approach which exploits rich mid-level convolutional features to categorize indoor scenes. Traditionally used convolutional features preserve the global spatial structure, which is a desirable property for general object recognition. However, we argue that this structure is not very helpful when there are large variations in scene layouts, e.g., in indoor scenes. We propose to transform the structured convolutional activations to another highly discriminative feature space. The representation in the transformed space not only incorporates the discriminative aspects of the target dataset, but also encodes the features in terms of the general object categories that are present in indoor scenes. To this end, we introduce a new large-scale dataset of 1300 object categories which are commonly present in indoor scenes. Our proposed approach achieves a significant performance boost over previous state-of-the-art approaches on five major scene classification datasets.

  4. Multiple object properties drive scene-selective regions.

    PubMed

    Troiani, Vanessa; Stigliani, Anthony; Smith, Mary E; Epstein, Russell A

    2014-04-01

    Neuroimaging studies have identified brain regions that respond preferentially to specific stimulus categories, including three areas that activate maximally during viewing of real-world scenes: the parahippocampal place area (PPA), retrosplenial complex (RSC), and transverse occipital sulcus (TOS). Although these findings suggest the existence of regions specialized for scene processing, this interpretation is challenged by recent reports that activity in scene-preferring regions is modulated by properties of isolated single objects. To understand the mechanisms underlying these object-related responses, we collected functional magnetic resonance imaging data while subjects viewed objects rated along seven dimensions, shown both in isolation and on a scenic background. Consistent with previous reports, we find that scene-preferring regions are sensitive to multiple object properties; however, results of an item analysis suggested that just two independent factors, visual size and the landmark suitability of the objects, sufficed to explain most of the response. This object-based modulation was found in PPA and RSC irrespective of the presence or absence of a scenic background, but was only observed in TOS for isolated objects. We hypothesize that scene-preferring regions might process both visual qualities unique to scenes and spatial qualities that can appertain to either scenes or objects. PMID:23211209

  5. Detecting and representing predictable structure during auditory scene analysis

    PubMed Central

    Sohoglu, Ediz; Chait, Maria

    2016-01-01

    We used psychophysics and MEG to test how sensitivity to input statistics facilitates auditory scene analysis (ASA). Human subjects listened to 'scenes' composed of concurrent tone-pip streams (sources). On occasional trials a new source appeared partway through. Listeners were more accurate and quicker to detect source appearance in scenes composed of temporally regular (REG), rather than random (RAND), sources. MEG in passive listeners, and in those actively detecting appearance events, revealed increased sustained activity in auditory and parietal cortex in REG relative to RAND scenes, emerging ~400 ms after scene onset. Over and above this, appearance events in REG scenes evoked larger responses than in RAND scenes. The effect of temporal structure on appearance-evoked responses was delayed when listeners were focused on the scenes relative to when listening passively, consistent with the notion that attention reduces 'surprise'. Overall, the results implicate a mechanism that tracks the predictability of multiple concurrent sources to facilitate active and passive ASA. DOI: http://dx.doi.org/10.7554/eLife.19113.001 PMID:27602577

  6. Motion parallax links visual motion areas and scene regions.

    PubMed

    Schindler, Andreas; Bartels, Andreas

    2016-01-15

    When we move, the retinal velocities of objects in our surroundings differ according to their relative distances, giving rise to a powerful three-dimensional visual cue referred to as motion parallax. Motion parallax allows us to infer the 3D structure of our surroundings, as well as self-motion, from 2D retinal information. However, the neural substrates mediating the link between visual motion and scene processing are largely unexplored. We used fMRI in human observers to study motion parallax by means of an ecologically relevant yet highly controlled stimulus that mimicked the observer's lateral motion past a depth-layered scene. We found parallax-selective responses in parietal regions IPS3 and IPS4, and in a region lateral to the scene-selective occipital place area (OPA). The traditionally defined scene-responsive regions OPA, the parahippocampal place area (PPA) and the retrosplenial cortex (RSC) did not respond to parallax. During parallax processing, the occipital parallax-selective region entertained highly specific functional connectivity with IPS3 and with scene-selective PPA. These results establish a network linking dorsal motion and ventral scene processing regions specifically during parallax processing, which may underlie the brain's ability to derive 3D scene information from motion parallax. PMID:26515906

  7. The actual goals of geoethics

    NASA Astrophysics Data System (ADS)

    Nemec, Vaclav

    2014-05-01

    The most pressing current goals of geoethics were formulated at the International Conference on Geoethics (October 2013), held at Pribram (Czech Republic), the birthplace of geoethics: In education and public outreach, a necessary minimum of Earth-science knowledge should be intensively promoted, together with the cultivation of ethical thinking and acting for the sustainable well-being of society. The activities of the Intergovernmental Panel on Climate Change are not sustainable given the existing knowledge of the Earth sciences (as presented in the results of the 33rd and 34th International Geological Congresses); this knowledge should be incorporated into any further work of the IPCC. In the legislative sphere, broad international co-operation is needed to: re-formulate the term "false alarm" and its legal consequences; demand consistent evaluation of existing risks; and resolve the rights of individuals and minorities in cases concerning the optimum use of mineral resources and the optimum protection of local populations against emergency dangers and disasters. The common good (well-being) must take priority when resolving ethical dilemmas, and the precautionary principle should be applied in any decision-making process. Earth scientists presenting expert opinions are not exempt from civil, administrative or even criminal liability; details must be established by national law and jurisprudence. The well-known case of the L'Aquila earthquake (2009) should serve as a serious warning because of the proven misuse of geoethics to protect the top Italian seismologists held responsible and sentenced for the inadequate, superficial behaviour that caused many human victims. Another recent scandal, the Himalayan fossil fraud, will also be documented.
    Support is needed for any effort to analyze and disclose the problems of the deformation of the contemporary

  8. Preference for luminance histogram regularities in natural scenes.

    PubMed

    Graham, Daniel; Schwarz, Bianca; Chatterjee, Anjan; Leder, Helmut

    2016-03-01

    Natural scene luminance distributions typically have positive skew, and for single objects, there is evidence that higher skew is a correlate (but not a guarantee) of glossiness. Skewness is also relevant to aesthetics: preference for glossy single objects (with high skew) has been shown even in infants, and skewness is a good predictor of fruit freshness. Given that primate vision appears to efficiently encode natural scene luminance variation, and given evidence that natural scene regularities may be a prerequisite for aesthetic perception in the spatial domain, here we ask whether humans in general prefer natural scenes with more positively skewed luminance distributions. If humans generally prefer images with the higher-order regularities typical of natural scenes and/or shiny objects, we would expect this to be the case. By manipulating luminance distribution skewness (holding mean and variance constant) for individual natural images, we show that in fact preference varies inversely with increasing positive skewness. This finding holds for: artistic landscape images and calibrated natural scenes; scenes with and without glossy surfaces; landscape scenes and close-up objects; and noise images with natural luminance histograms. Across conditions, humans prefer images with skew near zero over higher skew images, and they prefer skew lower than that of the unmodified scenes. These results suggest that humans prefer images with luminances that are distributed relatively evenly about the mean luminance, i.e., images with similar amounts of light and dark. We propose that our results reflect an efficient processing advantage of low-skew images over high-skew images, following evidence from prior brain imaging results. PMID:25872178
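    The central manipulation, changing luminance-distribution skewness while holding mean and variance constant, can be sketched as follows; the gamma-style remapping used here is an illustrative assumption, not the authors' exact procedure:

```python
import numpy as np

def skewness(x):
    """Sample skewness: the third standardized moment of the luminances."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))

def adjust_skew(lum, gamma):
    """Remap luminances nonlinearly, then restore the original mean and
    variance so that only higher moments (such as skew) change."""
    mu, sd = lum.mean(), lum.std()
    y = ((lum - lum.min()) / (np.ptp(lum) + 1e-12)) ** gamma
    y = (y - y.mean()) / (y.std() + 1e-12)  # standardize
    return y * sd + mu                      # hold mean and variance constant

rng = np.random.default_rng(0)
scene = rng.gamma(2.0, size=100_000)     # positively skewed "luminances"
low = adjust_skew(scene, 0.5)            # concave remap lowers the skew
print(round(skewness(scene), 2), round(skewness(low), 2))
```

    A concave remap (exponent below 1) compresses the bright tail and so lowers positive skew, which is the direction the study found observers preferred.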

  9. Real-time IR/EO scene generation utilizing an optimized scene rendering subsystem

    NASA Astrophysics Data System (ADS)

    Makar, Robert J.; Howe, Daniel B.

    2000-07-01

    This paper describes advances in the development of IR/EO scene generation using the second-generation Comptek Amherst Systems Scene Rendering Subsystem (SRS). The SRS is a graphics rendering engine designed specifically to support real-time hardware-in-the-loop testing of IR/EO sensor systems. The SRS serves as an alternative to commercial rendering systems, such as the Silicon Graphics® InfiniteReality, when IR/EO sensor fidelity requirements surpass the limits designed into COTS hardware that is optimized for visual rendering. The paper discusses the need for such a system and presents examples of the kinds of sensor tests that can take advantage of the high radiometric fidelity provided by the SRS. Examples of situations where the high spatial fidelity of the InfiniteReality is more appropriate are also presented. The paper also reviews models and algorithms used in IR/EO scene rendering and shows how the design of the SRS was driven by the requirements of these models and algorithms. This work has been done in support of the Infrared Sensor Stimulator (IRSS) system, which will be used for installed-system testing of avionics electronic combat systems. The IRSS will provide a high-frame-rate, real-time, reactive, hardware-in-the-loop test capability for the stimulation of current and future infrared- and ultraviolet-based sensor systems. The IRSS program is a joint development effort under the leadership of the Naval Air Warfare Center Aircraft Division, Air Combat Environment Test and Evaluation Facility (ACETEF), with close coordination and technical support from the Electronic Combat Integrated Test (ECIT) Program Office. The system will be used for testing of multiple sensor avionics systems to support the Development Test & Evaluation and Operational Test & Evaluation objectives of the U.S. Navy and Air Force.

  10. Thematic mapper radiometric variability on ostensibly uniform agricultural scenes

    NASA Technical Reports Server (NTRS)

    Duggin, M. J.

    1983-01-01

    The interaction of the sensor point spread function with a heterogeneous scene, consisting of elements giving rise to different spectral radiant intensities, causes errors in multitemporal signatures when pixels are repositioned by fractional amounts between acquisitions. In the case of a heterogeneous scene, the repositioning accuracy between acquisitions can affect the radiometric output in any band and can affect the spectral distribution of radiance between bands. Errors caused by within-band and between-band variations in radiance over time can be compounded by resampling along and between scan lines during processing. The magnitude of both error sources depends on the degree of heterogeneity of the scene.
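    A toy one-dimensional sketch (not from the paper) shows how the point spread function turns a sub-pixel repositioning into a radiometric change near a field boundary, while uniform areas are unaffected:

```python
import numpy as np

# Fine-grid 1-D "scene": a boundary between two fields of different radiance.
fine = np.where(np.arange(1000) < 500, 1.0, 3.0)

# Gaussian point spread function, convolved with the scene.
x = np.arange(-100, 101)
psf = np.exp(-0.5 * (x / 30.0) ** 2)
psf /= psf.sum()
blurred = np.convolve(fine, psf, mode="same")

PIXEL = 50  # fine-grid samples per sensor sample

def band_output(offset):
    """Sensor samples for a given sub-pixel pointing offset (fine-grid units)."""
    centers = np.arange(100, 900, PIXEL) + offset
    return blurred[centers]

a = band_output(0)
b = band_output(PIXEL // 2)  # half-pixel repositioning between acquisitions
print(round(float(np.max(np.abs(a - b))), 3))  # largest radiometric change
```

    Pixels far from the boundary return identical values under both pointings; only mixed pixels straddling the boundary change, which is exactly the heterogeneity dependence the abstract describes.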

  11. AgRISTARS. Supporting research: Algorithms for scene modelling

    NASA Technical Reports Server (NTRS)

    Rassbach, M. E. (Principal Investigator)

    1982-01-01

    The requirements for a comprehensive analysis of LANDSAT or other visual data scenes are defined. The development of a general model of a scene and a computer algorithm for finding the particular model for a given scene is discussed. The modelling system includes a boundary analysis subsystem, which detects all the boundaries and lines in the image and builds a boundary graph; a continuous variation analysis subsystem, which finds gradual variations not well approximated by a boundary structure; and a miscellaneous features analysis, which includes texture, line parallelism, etc. The noise reduction capabilities of this method and its use in image rectification and registration are discussed.

  12. Extended scene wavefront sensor for space application

    NASA Astrophysics Data System (ADS)

    Bomer, Thierry; Ravel, Karen; Corlay, Gilles

    2015-10-01

    The spatial resolution of optical monitoring satellites increases continuously, and it is increasingly difficult to satisfy the stability constraints of the instrument. The compactness requirements induce high sensitivity to drift during storage and launch. Implementing an active loop to control the performance of the telescope therefore becomes essential, as in ground-based astronomical telescopes. The active loop requires real-time information on the optical distortions of the wavefront caused by mirror deformations. This is the role of the Shack-Hartmann wavefront sensor studied by Sodern. It is located in the focal plane of the telescope, at the edge of the field of view, so as not to disturb acquisition by the main instrument. Its particular characteristic, compared to a traditional wavefront sensor, is that it works not only on point sources such as star images, but also on extended scenes such as those observed by the instrument. The exit pupil of the telescope is imaged onto a microlens array by relay optics. Each element of the microlens array generates a small image, shifted by the local wavefront slope. Correlation processing between the small images measures the local slopes and recovers the initial wavefront deformation via a Zernike decomposition. Sodern has dimensioned the sensor and compared various image-correlation algorithms for measuring the local slopes of the wavefront. Simulations taking into account several types of detectors made it possible to compare the performance of these solutions, and a detector was chosen. This article describes the state of progress of the work done so far. It presents the results of the comparisons bearing on the choice of the detector, the main features of the sensor definition, and the performance obtained.
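    The correlation processing described, measuring the shift of each microlens sub-image against a reference, can be sketched with a basic FFT cross-correlation peak search; this illustrates the principle only (integer shifts, no sub-pixel interpolation) and is not Sodern's chosen algorithm:

```python
import numpy as np

def shift_by_correlation(ref, img):
    """Estimate the integer (dy, dx) translation of `img` relative to
    `ref` from the peak of their circular cross-correlation (FFT-based)."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak positions to signed shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
sub_ref = rng.random((64, 64))                    # reference sub-image
sub_img = np.roll(sub_ref, (3, -2), axis=(0, 1))  # shifted copy
print(shift_by_correlation(sub_ref, sub_img))
```

    In a real extended-scene Shack-Hartmann sensor, each recovered shift is proportional to the local wavefront slope over that sub-aperture, and the slope map is then fitted with Zernike polynomials.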

  13. Modelling and simulation of virtual Mars scene

    NASA Astrophysics Data System (ADS)

    Sun, Si-liang; Chen, Ren; Sun, Li; Yan, Jie

    2011-08-01

    Human understanding of the universe remains limited. Aiming at the impending needs of Mars exploration in the near future, and starting from a three-dimensional (3D) model of Mars, a Mars texture based on several real photographs was drawn, and bump mapping was used to enhance rendering realism. To improve simulation fidelity, the composition of the Martian atmosphere is discussed, the causes of atmospheric scattering are investigated, and the scattering algorithm is studied and calculated. The reasons why "red storms" frequently appear on Mars are detailed; these factors inevitably change the planet's appearance. To address this, two methods are proposed, depending on the position of the viewpoint (in space or on the surface): in the first, the 3D model is divided into meshes to simulate the storm effect, and a formula allowing a mesh to rotate about an arbitrary axis is derived; to a certain extent this model guarantees the rendering result when Mars (with a "red storm") is viewed from space. In the second, a 3D Martian terrain scene is built from images downloaded from "Google Mars", a particle system is used to simulate the storm effect, and the billboard technique is used for colour correction and rendering compensation. Finally, a star-field simulation based on multiple texture blending is given. Experimental results show that these methods not only substantially increase fidelity but also guarantee real-time rendering. They can be widely used in simulations of space battlefields and exploration tasks.

  14. Macrostructure logic arrays. Volume 2. Task 2: Seeker scene emulator. Final report, 28 June 1985-2 November 1990

    SciTech Connect

    Henshaw, A.; Melton, R.; Gieseking, S.; Alford, C.O.

    1990-11-07

    Under direction from the U.S. Army Strategic Defense Command, the Computer Engineering Research Laboratory at the Georgia Institute of Technology and BDM Corporation have developed a real-time Focal Plane Array Seeker Scene Emulator. This unit enhances Georgia Tech's capabilities in kinetic energy weapon system testing and performance demonstration. The Strategic Defense Initiative Organization HWIL Simulation Structure contains three paths for exercising the Signal Processing (SP) and Data Processing (DP) algorithms and hardware. Two of these methods use actual Focal Plane Array (FPA) hardware to generate signals for presentation to the SP and DP subsystems. In many cases, the use of an FPA might be considered restrictive. The Georgia Tech Seeker Scene Emulator (SSE) is designed to provide the third path in this simulation structure. By emulating the FPA, the Georgia Tech SSE can provide test results that would be costly or difficult to achieve using an actual FPA. The SSE can be used to fill gaps in the testing of components in stressing simulation scenarios, such as nuclear environments and high object counts. The FPA Seeker Scene Emulator combines advanced hardware developed at Georgia Tech with a Ballistic Missile Defense-generated database to produce signals based upon target radiometric information, seeker optical characterization, FPA detector characterization, and simulated background environments.

  15. Behind the Scenes: Sarafin Goes from Farm to Flight Director

    NASA Video Gallery

    In this episode of NASA Behind the Scenes, astronaut Mike Massimino chats with flight director Mike Sarafin about when he joined NASA and moved from his family's farm in New York to Houston...with ...

  16. LADAR scene projector for hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    Cornell, Michael C.; Naumann, Charles B.; Stockbridge, Robert G.; Snyder, Donald R.

    2002-07-01

    Future types of direct detection LADAR seekers will employ focal plane arrays in their receivers. Existing LADAR scene projection technology cannot meet the needs of testing these types of seekers in a Hardware-in-the-Loop environment. It is desired that the simulated LADAR return signals generated by the projection hardware be representative of the complex targets and background of a real LADAR image. A LADAR scene projector has been developed that is capable of meeting these demanding test needs. It can project scenes of simulated 2D LADAR return signals without scanning. In addition, each pixel in the projection can be represented by a 'complex' optical waveform, which can be delivered with sub-nanosecond precision. Finally, the modular nature of the projector allows it to be configured to operate at different wavelengths. This paper describes the LADAR Scene Projector and its full capabilities.

  17. 3D scene modeling from multiple range views

    NASA Astrophysics Data System (ADS)

    Sequeira, Vitor; Goncalves, Joao G. M.; Ribeiro, M. Isabel

    1995-09-01

    This paper presents a new 3D scene analysis system that automatically reconstructs the 3D geometric model of real-world scenes from multiple range images acquired by a laser range finder on board of a mobile robot. The reconstruction is achieved through an integrated procedure including range data acquisition, geometrical feature extraction, registration, and integration of multiple views. Different descriptions of the final 3D scene model are obtained: a polygonal triangular mesh, a surface description in terms of planar and biquadratics surfaces, and a 3D boundary representation. Relevant experimental results from the complete 3D scene modeling are presented. Direct applications of this technique include 3D reconstruction and/or update of architectual or industrial plans into a CAD model, design verification of buildings, navigation of autonomous robots, and input to virtual reality systems.

  18. Behind the Scenes: Michoud Builder of Shuttle's External Tank

    NASA Video Gallery

    In this episode of NASA Behind the Scenes, astronaut Mike Massimino takes you on a tour of the Michoud Assembly Facility in New Orleans, La. This historic facility helped build the mighty Saturn V ...

  19. Behind the Scenes: Discovery Crew Performs Swimmingly

    NASA Video Gallery

    In this episode of NASA "Behind the Scenes," astronaut Mike Massimino visits the Johnson Space Center's Neutral Buoyancy Laboratory. The world's largest indoor pool is where Al Drew, Tim Kopra, Mik...

  20. NASA Social: Behind the Scenes at NASA Dryden

    NASA Video Gallery

    More than 50 followers of NASA's social media websites went behind the scenes at NASA's Dryden Flight Research Center during a "NASA Social" on May 4, 2012. The visitors were briefed on what Dryden...

  1. Behind the Scenes: Rolling Room Greets Returning Astronauts

    NASA Video Gallery

    Have you ever wondered what is the first thing the shuttle crews see after they land? In this episode of NASA Behind the Scenes, astronaut Mike Massimino takes you into the Crew Transport Vehicle, ...

  2. Mountain scene pencil drawing on north wall of sack room, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Mountain scene pencil drawing on north wall of sack room, northwestern corner, looking north. - Camp Tulelake, Shop-Storage Building, West Side of Hill Road, 2 miles South of State Highway 161, Tulelake, Siskiyou County, CA

  3. Simulation of 3D infrared scenes using random fields model

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Zhang, Jianqi

    2001-09-01

    Analysis and simulation of smart munitions require imagery for the munition's sensor to view. Traditional infrared background simulations have been limited to planar scene studies. A new method is described for synthesizing images in 3D with various terrain textures. We develop random-field models and temperature fields to simulate 3D infrared scenes. The generalized long-correlation (GLC) model, one of the random-field models, generates both the 3D terrain skeleton data and the terrain texture in this work. To build the terrain mesh from the random fields, digital elevation models (DEMs) are introduced. Texture mapping is then used to apply the texture to the uneven surfaces of the 3D scene. Simulation with the random-field model is an effective way to produce 3D infrared scenes with high randomness and realism.

  4. Reconstruction of indoor scene from a single image

    NASA Astrophysics Data System (ADS)

    Wu, Di; Li, Hongyu; Zhang, Lin

    2015-03-01

    Given a single image of an indoor scene without any prior knowledge, is it possible for a computer to automatically reconstruct the structure of the scene? This letter proposes a reconstruction method, called RISSIM, to recover the 3D model of an indoor scene from a single image. The proposed method is composed of three steps: the estimation of vanishing points, the detection and classification of lines, and plane mapping. To find vanishing points, a new feature descriptor, named "OCR", is defined to describe the texture orientation. With phase congruency and the Harris detector, line segments can be detected accurately, which is a prerequisite. The perspective transform is a reliable method whereby points on the image can be represented on a 3D model. Experimental results show that the 3D structure of an indoor scene can be well reconstructed from a single image, although the available depth information is limited.

  5. Scene Categorization in Alzheimer's Disease: A Saccadic Choice Task

    PubMed Central

    Lenoble, Quentin; Bubbico, Giovanna; Szaffarczyk, Sébastien; Pasquier, Florence; Boucart, Muriel

    2015-01-01

    Aims: We investigated the performance in scene categorization of patients with Alzheimer's disease (AD) using a saccadic choice task. Method: 24 patients with mild AD, 28 age-matched controls and 26 young people participated in the study. The participants were presented with pairs of coloured photographs and were asked to make a saccadic eye movement to the picture corresponding to the target scene (natural vs. urban, indoor vs. outdoor). Results: The patients' performance did not differ from chance for natural scenes. Differences between young and older controls and patients with AD were found in accuracy but not saccadic latency. Conclusions: The results are interpreted in terms of cerebral reorganization in the prefrontal and temporo-occipital cortex of patients with AD, but also in terms of impaired processing of global visual properties of scenes. PMID:25759714

  6. Behind the Scenes: Astronauts Keep Trainers in BBQ Bliss

    NASA Video Gallery

    In this episode of NASA Behind the Scenes, astronaut Mike Massimino talks with astronaut Terry Virts as well as Stephanie Turner, one of the people who keeps the astronaut corps in line. Mass also ...

  7. Cross-linguistic Differences in Talking About Scenes

    PubMed Central

    Sethuraman, Nitya; Smith, Linda B.

    2010-01-01

    Speakers of English and Tamil differ widely in which relational roles they overtly express with a verb. This study provides new information about how speakers of these languages differ in their descriptions of the same scenes and how explicit mention of roles and other scene elements vary with the properties of the scenes themselves. Specifically, we find that English speakers, who in normal speech rely more on explicit mention of verb arguments, in fact appear to be more affected by the pragmatic manipulations used in this study than Tamil speakers. Additionally, although the mention of scene items increases with development in both languages, Tamil-speaking children mention fewer items than do English-speaking children, showing that the children know the structure of the language to which they are exposed. PMID:20802845

  8. Behind the Scenes: Mission Control Practices Launching Discovery

    NASA Video Gallery

    Before every shuttle launch, the astronauts train with their ascent team in Mission Control Houston. In this episode of NASA Behind the Scenes, astronaut Mike Massimino introduces you to some of th...

  9. Crime scene ethics: souvenirs, teaching material, and artifacts.

    PubMed

    Rogers, Tracy L

    2004-03-01

    Police and forensic specialists are ethically obliged to preserve the integrity of their investigations and their agencies' reputations. The American Academy of Forensic Sciences and the Canadian Society of Forensic Science provide no guidelines for crime scene ethics, or the retention of items from former crime scenes. Guidelines are necessary to define acceptable behavior relating to removing, keeping, or selling artifacts, souvenirs, or teaching specimens from former crime scenes, where such activities are not illegal, to prevent potential conflicts of interest and the appearance of impropriety. Proposed guidelines permit the retention of objects with educational value, provided they are not of significance to the case, they are not removed until the scene is released, permission has been obtained from the property owner and police investigator, and the item has no significant monetary value. Permission is necessary even if objects appear discarded, or are not typically regarded as property, e.g., animal bones. PMID:15027551

  10. Behind the Scenes: Shuttle Crawls to Launch Pad

    NASA Video Gallery

    In this episode of NASA Behind the Scenes, take a look at what's needed to roll a space shuttle out of the Vehicle Assembly Building and out to the launch pad. Astronaut Mike Massimino talks to som...

  11. TIFF Image Writer patch for OpenSceneGraph

    2012-01-05

    This software consists of code modifications to the open-source OpenSceneGraph software package to enable the creation of TIFF images containing 16-bit unsigned data. They also allow the user to disable compression and set the DPI tags in the resulting TIFF images. Some image analysis programs require uncompressed, 16-bit unsigned input data. These code modifications allow programs based on OpenSceneGraph to write out such images, improving connectivity between applications.

  12. 3D Traffic Scene Understanding From Movable Platforms.

    PubMed

    Geiger, Andreas; Lauer, Martin; Wojek, Christian; Stiller, Christoph; Urtasun, Raquel

    2014-05-01

    In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry, and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar, or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow, and occupancy grids. For each of these cues, we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, experiments using different feature combinations are conducted. Furthermore, we show how by employing context derived from the proposed method we are able to improve over the state-of-the-art in terms of object detection and object orientation estimation in challenging and cluttered urban environments. PMID:26353233

  13. DRDC's approach to IR scene generation for IRCM simulation

    NASA Astrophysics Data System (ADS)

    Lepage, Jean-François; Labrie, Marc-André; Rouleau, Eric; Richard, Jonathan; Ross, Vincent; Dion, Denis; Harrison, Nathalie

    2011-06-01

    An object-oriented simulation framework, called KARMA, was developed over the last decade at Defence Research and Development Canada - Valcartier (DRDC Valcartier) to study infrared countermeasures (IRCM) methods and tactics. It provides a range of infrared (IR) guided weapon engagement services, from constructive to HWIL simulations. To support the increasing level of detail of its seeker models, DRDC Valcartier recently developed an IR scene generation (IRSG) capability for the KARMA framework. The approach relies on open-source-based rendering of scenes composed of 3D models, using commercial off-the-shelf (COTS) graphics processing units (GPUs) of standard PCs. The objective is to produce a high-frame-rate, medium-fidelity representation of the IR scene, making it possible to properly reproduce the spectral, spatial, and temporal characteristics of aircraft and flare signatures. In particular, the OpenSceneGraph library is used to manage the 3D models and to send high-level rendering commands. The atmospheric module allows for accurate, run-time computation of the radiative components using a spectrally correlated wide-band mode. Advanced effects, such as surface reflections and zoom anti-aliasing, are computed by the GPU through the use of shaders. In addition to the IR scene generation module, a signature modeling and analysis tool (SMAT) was developed to assist the modeler in building and validating signature models that are independent of a particular sensor type. Details of the IR scene generation module and the associated modeling tool will be presented.

  14. Political conservatism predicts asymmetries in emotional scene memory.

    PubMed

    Mills, Mark; Gonzalez, Frank J; Giuseffi, Karl; Sievert, Benjamin; Smith, Kevin B; Hibbing, John R; Dodd, Michael D

    2016-06-01

    Variation in political ideology has been linked to differences in attention to and processing of emotional stimuli, with stronger responses to negative versus positive stimuli (negativity bias) the more politically conservative one is. As memory is enhanced by attention, such findings predict that memory for negative versus positive stimuli should similarly be enhanced the more conservative one is. The present study tests this prediction by having participants study 120 positive, negative, and neutral scenes in preparation for a subsequent memory test. On the memory test, the same 120 scenes were presented along with 120 new scenes, and participants were to respond whether a scene was old or new. Results on the memory test showed that negative scenes were more likely to be remembered than positive scenes, though this was true only for political conservatives. That is, a larger negativity bias was found the more conservative one was. The effect was sizeable, explaining 45% of the variance across subjects in the effect of emotion. These findings demonstrate that the relationship between political ideology and asymmetries in emotion processing extends to memory and, furthermore, suggest that exploring the extent to which subject variation in the interactions among emotion, attention, and memory is predicted by conservatism may provide new insights into theories of political ideology. PMID:26992825

  15. Strategic Scene Generation Model: baseline and operational software

    NASA Astrophysics Data System (ADS)

    Heckathorn, Harry M.; Anding, David C.

    1993-08-01

    The Strategic Defense Initiative (SDI) must simulate the detection, acquisition, discrimination and tracking of anticipated targets and predict the effect of natural and man-made background phenomena on optical sensor systems designed to perform these tasks. NRL is developing such a capability using a computerized methodology to provide modeled data in the form of digital realizations of complex, dynamic scenes. The Strategic Scene Generation Model (SSGM) is designed to integrate state-of-science knowledge, data bases and computerized phenomenology models to simulate strategic engagement scenarios and to support the design, development and test of advanced surveillance systems. Multi-phenomenology scenes are produced from validated codes--thereby serving as a traceable standard against which different SDI concepts and designs can be tested. This paper describes the SSGM design architecture, the software modules and databases which are used to create scene elements, the synthesis of deterministic and/or stochastic structured scene elements into composite scenes, the software system to manage the various databases and digital image libraries, and verification and validation by comparison with empirical data. The focus will be on the functionality of the SSGM Phase II Baseline Model (SSGMB), whose implementation is complete. Recent enhancements for Theater Missile Defense will also be presented, as will the development plan for the SSGM Phase III Operational Model (SSGMO), whose development has just begun.

  16. Synthetic scene generation model (SSGM R6.0)

    NASA Astrophysics Data System (ADS)

    Wilcoxen, Bruce A.; Heckathorn, Harry M.

    1995-06-01

    The Ballistic Missile Defense Organization (BMDO) must simulate the detection, acquisition, discrimination, and tracking of anticipated targets and predict the effect of natural and man-made background phenomena on optical sensor systems designed to perform these tasks. NRL is developing such a capability using a computerized methodology to provide modeled data in the form of digital realizations of complex, dynamic scenes. The Synthetic Scene Generation Model (SSGM) is designed to integrate state-of-science knowledge, data bases, and computerized phenomenology models to simulate ballistic missile engagement scenarios and to support the design, development, and test of advanced electro-optical interceptor and surveillance systems. Multi-phenomenology scenes are produced from validated codes -- thereby serving as a traceable standard against which different BMDO concepts and designs can be tested. This paper describes the SSGM software architecture, the software modules and databases that are used to create scene elements, the synthesis of deterministic and/or stochastic structured scene elements into composite scenes, the software system to manage the various databases and digital image libraries, the ancillary software tool suite, and verification and validation by comparison with empirical data. The focus is on the functionality of the SSGM Release 6.0, and the planned development effort for subsequent SSGM releases.

  17. Spatial distributions of local illumination color in natural scenes.

    PubMed

    Nascimento, Sérgio M C; Amano, Kinjiro; Foster, David H

    2016-03-01

    In natural complex environments, the elevation of the sun and the presence of occluding objects and mutual reflections cause variations in the spectral composition of the local illumination across time and location. Unlike the changes in time and their consequences for color appearance and constancy, the spatial variations of local illumination color in natural scenes have received relatively little attention. The aim of the present work was to characterize these spatial variations by spectral imaging. Hyperspectral radiance images were obtained from 30 rural and urban scenes in which neutral probe spheres were embedded. The spectra of the local illumination at 17 sample points on each sphere in each scene were extracted and a total of 1904 chromaticity coordinates and correlated color temperatures (CCTs) derived. Maximum differences in chromaticities over spheres and over scenes were similar. When data were pooled over scenes, CCTs ranged from 3000 K to 20,000 K, a variation of the same order of magnitude as that occurring over the day. Any mechanisms that underlie stable surface color perception in natural scenes need to accommodate these large spatial variations in local illumination color. PMID:26291072
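    The correlated color temperatures in the abstract above are derived from chromaticity coordinates. One standard way to do this (a generic sketch; the paper does not state which CCT method the authors used) is McCamy's cubic approximation from CIE 1931 (x, y) chromaticity:

```python
def mccamy_cct(x, y):
    """McCamy's cubic approximation of correlated color temperature (K)
    from CIE 1931 (x, y) chromaticity. Useful for near-Planckian
    chromaticities roughly in the 2000-12500 K range."""
    n = (x - 0.3320) / (0.1858 - y)  # inverse slope toward the epicenter
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# The D65 white point (x ~ 0.3127, y ~ 0.3290) should come out near 6500 K.
cct_d65 = mccamy_cct(0.3127, 0.3290)
```

    Applied over the 1904 sampled illumination spectra described above, such a function would reproduce the reported 3000 K to 20,000 K spread, though at the high end the cubic fit becomes less accurate.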

  18. Discomfort Glare: What Do We Actually Know?

    SciTech Connect

    Clear, Robert D.

    2012-04-19

    We reviewed glare models with an eye for missing conditions or inconsistencies. We found ambiguities as to when to use small-source versus large-source models, and as to what constitutes a glare source in a complex scene. We also found surprisingly little information validating the assumed independence of the factors driving glare. A barrier to progress in glare research is the lack of a standardized dependent measure of glare. We inverted the glare models to predict luminance, and compared model predictions against the 1949 Luckiesh and Guth data that form the basis of many of them. The models perform surprisingly poorly, particularly with regard to the luminance-size relationship and additivity. Evaluating glare in complex scenes may require fundamental changes to the form of the glare models.

  19. Digital forensics: an analytical crime scene procedure model (ACSPM).

    PubMed

    Bulbul, Halil Ibrahim; Yavuzcan, H Guclu; Ozel, Mesut

    2013-12-10

    In order to ensure that digital evidence is collected, preserved, examined, or transferred in a manner safeguarding the accuracy and reliability of the evidence, law enforcement and digital forensic units must establish and maintain an effective quality assurance system. The very first part of this system is standard operating procedures (SOPs) and/or models conforming to chain-of-custody requirements, which rely on the digital forensics "process-phase-procedure-task-subtask" sequence. An acceptable and thorough Digital Forensics (DF) process depends on sequential DF phases, each phase depends on sequential DF procedures, and each procedure in turn depends on tasks and subtasks. Numerous DF process models in the literature define DF phases, but no DF model has been identified that defines phase-based sequential procedures for the crime scene. The analytical crime scene procedure model (ACSPM) that we suggest in this paper is intended to fill this gap. The proposed analytical procedure model for digital investigations at a crime scene is developed and defined for crime scene practitioners, with the main focus on crime scene digital forensic procedures rather than the whole digital investigation process and phases that end up in court. When reviewing the relevant literature and consulting with law enforcement agencies, we found only device-based charts specific to a particular device and/or more general approaches to digital evidence management models from crime scene to court. After analyzing the needs of law enforcement organizations and realizing the absence of a crime scene digital investigation procedure model for crime scene activities, we decided to inspect the relevant literature in an analytical way. The outcome of this inspection is the model suggested here, which is intended to provide guidance for thorough and secure implementation of digital forensic procedures at a crime scene.

  20. Just another social scene: evidence for decreased attention to negative social scenes in high-functioning autism.

    PubMed

    Santos, Andreia; Chaminade, Thierry; Da Fonseca, David; Silva, Catarina; Rosset, Delphine; Deruelle, Christine

    2012-09-01

    The adaptive threat-detection advantage takes the form of a preferential orienting of attention to threatening scenes. In this study, we compared attention to social scenes in 15 high-functioning individuals with autism (ASD) and matched typically developing (TD) individuals. Eye-tracking was recorded while participants were presented with pairs of scenes: emotional positive-neutral, emotional negative-neutral or neutral-neutral pairs. Early allocation of attention, indexed by the first image fixated in each pair, differed between groups: contrary to TD individuals, who showed the typical threat-detection advantage towards negative images, the ASD group failed to show a bias toward threat-related scenes. Later processing of stimuli, indexed by the total fixation to the images during the 3-s presentation, was unaffected in the ASD group. These results support the hypothesis of an early atypical allocation of attention towards natural social scenes in ASD, which is compensated for in later stages of visual processing. PMID:22160371

  1. [The future of "NOTES"].

    PubMed

    Maffei, Massimo; Dumonceau, Jean-Marc

    2008-09-01

    In 2003, the first peroral appendectomy was carried out in a human subject. In order to prevent the premature adoption in clinical practice of so-called "natural orifice transluminal endoscopic surgery" (NOTES), a rational framework for its development was proposed in 2005. After animal experimentation, further abdominal interventions were carried out in humans (e.g., cholecystectomy) through the mouth, the vagina, or via a combined approach. The main advantage of NOTES compared to laparoscopic surgery is, from the patient's viewpoint, the absence of a body scar, but other benefits (e.g., less pain and lower costs) could prove to be significant. It is impossible to predict whether or not NOTES will enter routine clinical practice, but it will generate significant improvements for digestive endoscopy. PMID:18831409

  2. Feature diagnosticity and task context shape activity in human scene-selective cortex.

    PubMed

    Lowe, Matthew X; Gallivan, Jason P; Ferber, Susanne; Cant, Jonathan S

    2016-01-15

    Scenes are constructed from multiple visual features, yet previous research investigating scene processing has often focused on the contributions of single features in isolation. In the real world, features rarely exist independently of one another and likely converge to inform scene identity in unique ways. Here, we utilize fMRI and pattern classification techniques to examine the interactions between task context (i.e., attend to diagnostic global scene features; texture or layout) and high-level scene attributes (content and spatial boundary) to test the novel hypothesis that scene-selective cortex represents multiple visual features, the importance of which varies according to their diagnostic relevance across scene categories and task demands. Our results show for the first time that scene representations are driven by interactions between multiple visual features and high-level scene attributes. Specifically, univariate analysis of scene-selective cortex revealed that task context and feature diagnosticity shape activity differentially across scene categories. Examination using multivariate decoding methods revealed results consistent with univariate findings, but also evidence for an interaction between high-level scene attributes and diagnostic visual features within scene categories. Critically, these findings suggest visual feature representations are not distributed uniformly across scene categories but are shaped by task context and feature diagnosticity. Thus, we propose that scene-selective cortex constructs a flexible representation of the environment by integrating multiple diagnostically relevant visual features, the nature of which varies according to the particular scene being perceived and the goals of the observer. PMID:26541082

  3. Exploring Eye Movements in Patients with Glaucoma When Viewing a Driving Scene

    PubMed Central

    Crabb, David P.; Smith, Nicholas D.; Rauscher, Franziska G.; Chisholm, Catharine M.; Barbur, John L.; Edgar, David F.; Garway-Heath, David F.

    2010-01-01

    Background: Glaucoma is a progressive eye disease and a leading cause of visual disability. Automated assessment of the visual field determines the different stages in the disease process: it would be desirable to link these measurements taken in the clinic with patients' actual function, or to establish whether patients compensate for their restricted field of view when performing everyday tasks. Hence, this study investigated eye movements in glaucomatous patients when viewing driving scenes in a hazard perception test (HPT). Methodology/Principal Findings: The HPT is a component of the UK driving licence test consisting of a series of short film clips of various traffic scenes viewed from the driver's perspective, each containing hazardous situations that require the camera car to change direction or slow down. Data from nine glaucomatous patients with binocular visual field defects and ten age-matched control subjects were considered (all experienced drivers). Each subject viewed 26 different films with eye movements simultaneously monitored by an eye tracker. Computer software was purpose-written to pre-process the data, co-register it to the film clips, and quantify eye movements and point-of-regard (using a dynamic bivariate contour ellipse analysis). On average, and across all HPT films, patients exhibited different eye movement characteristics from controls, making, for example, significantly more saccades (P<0.001; 95% confidence interval for mean increase: 9.2 to 22.4%). Whilst the average region of ‘point-of-regard’ of the patients did not differ significantly from the controls, there were revealing cases where patients failed to see a hazard in relation to their binocular visual field defect. Conclusions/Significance: Characteristics of eye movement patterns in patients with bilateral glaucoma can differ significantly from age-matched controls when viewing a traffic scene. Further studies of eye movements made by glaucomatous patients could provide useful
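    The bivariate contour ellipse analysis (BCEA) mentioned above quantifies the spatial spread of gaze samples as the area of an ellipse enclosing a chosen fraction of them. A minimal static sketch follows (the study's dynamic, film-registered implementation is not reproduced; the function name and the k convention are assumptions):

```python
import math

def bcea(xs, ys, k=1.0):
    """Bivariate contour ellipse area of 2-D gaze samples:
    BCEA = 2*k*pi*sx*sy*sqrt(1 - rho^2), where sx and sy are the sample
    standard deviations of the horizontal and vertical gaze positions,
    rho is their Pearson correlation, and k sets the enclosed
    probability (P = 1 - exp(-k); k = 1 covers about 63.2%)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    rho = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)
    return 2.0 * k * math.pi * sx * sy * math.sqrt(1.0 - rho ** 2)
```

    A tight cluster of fixations yields a small BCEA, while widely scattered gaze (as might occur when a patient searches to compensate for a field defect) yields a large one.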

  4. Object shape classification and scene shape representation for three-dimensional laser scanned outdoor data

    NASA Astrophysics Data System (ADS)

    Ning, Xiaojuan; Wang, Yinghui; Zhang, Xiaopeng

    2013-02-01

    Shape analysis of a three-dimensional (3-D) scene is an important issue that could be widely used in various applications: city planning, robot navigation, virtual tourism, etc. We introduce an approach for understanding the primitive shape of a scene in order to reveal its semantic shape structure and represent the scene using shape elements. The scene objects are labeled and recognized using geometric and semantic features for each cluster, based on knowledge of the scene. Furthermore, objects in the scene with different primitive shapes can also be classified and fitted using the Gaussian map of the segmented scene. We demonstrate the presented approach on several complex scenes from laser scanning. According to the experimental results, the proposed method can accurately represent the geometric structure of a 3-D scene.

  5. Student Math Notes.

    ERIC Educational Resources Information Center

    Maletsky, Evan, Ed.

    1985-01-01

    Five sets of activities for students are included in this document. Each is designed for use in junior high and secondary school mathematics instruction. The first "Note" concerns magic squares in which the numbers in every row, column, and diagonal add up to the same sum. An etching by Albrecht Durer is presented, with four questions followed by…

  6. Notes and Discussion

    ERIC Educational Resources Information Center

    American Journal of Physics, 1978

    1978-01-01

    Includes eleven short notes, comments and responses to comments on a variety of topics such as uncertainty in a least-squares fit, display of diffraction patterns, the dark night sky paradox, error in the dynamics of deformable bodies and relative velocities and the runner. (GA)

  7. Notes on Linguistics, 1999.

    ERIC Educational Resources Information Center

    Payne, David, Ed.

    1999-01-01

    The 1999 issues of "Notes on Linguistics," published quarterly, include the following articles, review articles, reviews, book notices, and reports: "A New Program for Doing Morphology: Hermit Crab"; "Lingualinks CD-ROM: Field Guide to Recording Language Data"; "'Unruly' Phonology: An Introduction to Optimality Theory"; "Borrowing vs. Code…

  8. Sawtooth Functions. Classroom Notes

    ERIC Educational Resources Information Center

    Hirst, Keith

    2004-01-01

    Using MAPLE enables students to consider many examples which would be very tedious to work out by hand. This applies to graph plotting as well as to algebraic manipulation. The challenge is to use these observations to develop the students' understanding of mathematical concepts. In this note an interesting relationship arising from inverse…

  9. Notes on Literacy, 1997.

    ERIC Educational Resources Information Center

    Notes on Literacy, 1997

    1997-01-01

    The 1997 volume of "Notes on Literacy," numbers 1-4, includes the following articles: "Community Based Literacy, Burkina Faso"; "The Acquisition of a Second Writing System"; "Appropriate Methodology and Social Context"; "Literacy Megacourse Offered"; "Fitting in with Local Assumptions about Literacy: Some Ethiopian Experiences"; "Gender in…

  10. NCTM Student Math Notes.

    ERIC Educational Resources Information Center

    Maletsky, Evan, Ed.; Yunker, Lee E., Ed.

    1986-01-01

    Five sets of activities for students are included in this document. Each is designed for use in junior high and secondary school mathematics instruction. The first Note concerns mathematics on postage stamps. Historical procedures and mathematicians, metric conversion, geometric ideas, and formulas are among the topics considered. Successful…

  11. Notes on Linguistics, 1990.

    ERIC Educational Resources Information Center

    Notes on Linguistics, 1990

    1990-01-01

    This document consists of the four issues of "Notes on Linguistics" published during 1990. Articles in the four issues include: "The Indians Do Say Ugh-Ugh" (Howard W. Law); "Constraints of Relevance, A Key to Particle Typology" (Regina Blass); "Whatever Happened to Me? (An Objective Case Study)" (Aretta Loving); "Stop Me and Buy One (For $5...)"…

  12. Notes on Linguistics, 1998.

    ERIC Educational Resources Information Center

    Notes on Linguistics, 1998

    1998-01-01

    The four issues of the journal of language research and linguistic theory include these articles: "Notes on Determiners in Chamicuro" (Steve Parker); Lingualinks Field Manual Development" (Larry Hayashi); "Comments from an International Linguistics Consultant: Thumbnail Sketch" (Austin Hale); "Carlalinks Workshop" (Andy Black); "Implications of…

  13. Programmable Logic Application Notes

    NASA Technical Reports Server (NTRS)

    Katz, Richard

    2000-01-01

    This column will be provided each quarter as a source for reliability, radiation results, NASA capabilities, and other information on programmable logic devices and related applications. This quarter will start a series of notes concentrating on analysis techniques, with this issue's section discussing worst-case analysis requirements.

  14. Programmable Logic Application Notes

    NASA Technical Reports Server (NTRS)

    Katz, Richard; Day, John H. (Technical Monitor)

    2001-01-01

    This report will be provided each quarter as a source for reliability, radiation results, NASA capabilities, and other information on programmable logic devices and related applications. This quarter will continue a series of notes concentrating on analysis techniques with this issue's section discussing the use of Root-Sum-Square calculations for digital delays.

  15. REKRIATE: A Knowledge Representation System for Object Recognition and Scene Interpretation

    NASA Astrophysics Data System (ADS)

    Meystel, Alexander M.; Bhasin, Sanjay; Chen, X.

    1990-02-01

    What humans actually observe and how they comprehend this information is complex, due to Gestalt processes and the interaction of context in predicting the course of thinking, enforcing one idea while repressing another. How we extract knowledge from the scene, what we actually get from the scene, and what we bring from our own mechanisms of perception are areas separated by a thin, ill-defined line. The purpose of this paper is to present a system for Representing Knowledge and Recognizing and Interpreting Attention Trailed Entities, dubbed REKRIATE. It will be used as a tool for discovering the underlying principles involved in knowledge representation required for conceptual learning. REKRIATE has some inherited knowledge and is given a vocabulary which is used to form rules for identification of the object. It has various modalities of sensing and has the ability to measure the distance between objects in the image as well as the similarity between different images of presumably the same object. All sensations received from the matrix of different sensors are put into an adequate form. The methodology proposed is applicable not only to pictorial or visual world representation, but to any sensing modality. It is based upon two premises: a) the inseparability of all domains of the world representation, including the linguistic domain as well as those formed by various sensor modalities; and b) the representativity of the object at several levels of resolution simultaneously.

  16. Influence of 3D Effects on 1D Aerosol Retrievals in Synthetic, Partially Clouded Scenes

    NASA Astrophysics Data System (ADS)

    Stap, F. A.; Hasekamp, O. P.; Emde, C.

    2014-12-01

    Most satellite measurements of the microphysical and radiative properties of aerosol near clouds are either strictly screened for, or hindered by, sub-pixel cloud contamination. This may change with the advent of a new generation of aerosol retrieval algorithms, intended for multi-angle, multi-wavelength photo-polarimetric instruments such as POLDER3 on board PARASOL, which show the ability to separate between aerosol and cloud particles. In order to obtain the required computational efficiency, these algorithms typically make use of 1D radiative transfer models and are thus unable to account for the 3D effects that occur in actual, partially clouded scenes. Here, we apply an aerosol retrieval algorithm, which employs a 1D radiative transfer code and the independent pixel approximation, to synthetic, 3D, partially clouded scenes calculated with the Monte Carlo radiative transfer code MYSTIC. The influence of the 3D effects due to clouds on the retrieved microphysical and optical aerosol properties is presented, and the ability of the algorithm to retrieve these properties in partially clouded scenes is discussed.
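    The independent pixel approximation (IPA) at the heart of such 1D algorithms can be stated in one line: the observed pixel radiance is modeled as a cloud-fraction-weighted mix of independently computed clear and cloudy columns, ignoring the horizontal photon transport that 3D codes like MYSTIC capture. A schematic sketch (function name and scalar radiances are illustrative; real retrievals operate on full spectral and angular Stokes vectors):

```python
def ipa_radiance(clear, cloudy, cloud_fraction):
    """Independent pixel approximation: linear mix of 1-D clear-sky and
    cloudy-column radiances weighted by the sub-pixel cloud fraction.
    Horizontal photon transport between columns (the 3-D effect the
    synthetic scenes are designed to expose) is neglected."""
    f = cloud_fraction
    return (1.0 - f) * clear + f * cloudy
```

    Comparing this linear mix against a full 3D Monte Carlo result for the same scene is precisely what isolates the 3D bias in the retrieved aerosol properties.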

  17. Hilots make the family planning scene.

    PubMed

    1974-10-01

    A hilot (birth attendant), Aling Melchora, of Roxas, Oriental Mindora, who does motivation work in family planning is typical of hilots who are found in every barrio throughout the Philippines. She is 58 years old and has been a hilot for more than 30 years. She learned birth attendance in a training course at the Pandacan Puericulture Center in 1940. She averages 3 deliveries a month and 8 IUD acceptances a month. The hilots are a possible strong force in family planning motivation because of their influence and the respect with which people in the community regard them. They are older, experienced, always available, and charge very reasonable rates for services highly trained clinic staff would balk at doing. The Institute of Maternal and Child Health (IMCH) has trained 400 such hilots to do motivation work in family planning. It is noted that in the Philippines, the hilot may yet provide the key to reach the people in the barrios, which is the most important and challenging task for the national program on family planning. PMID:12306912

  18. Improving Trauma Triage Using Basic Crash Scene Data

    PubMed Central

    Ryb, Gabriel E.; Dischinger, Patricia C.

    2011-01-01

    Objective: To analyze the occurrence of severe injuries and deaths among crash victims transported to hospitals in relation to occupant and scene characteristics, including on-scene patient mobility, and their potential use in triaging patients to the appropriate level of care. Methods: The occurrence of death and ISS>15 was studied in relation to occupant, crash, and mobility data readily available to EMS at the scene, using weighted NASS-CDS data. The data set was randomly split in two for model development and evaluation. Characteristics were combined to develop new triage schemes. Overtriage and undertriage rates were calculated for the NASS-CDS case trauma center allocation and for the newly developed triage schemes. Results: Compared to the NASS-CDS distribution, a scheme using patient mobility alone showed lower overtriage of those with ISS≤15 (38.8% vs. 55.5%) and lower undertriage of victims who died from their crash-related injuries (2.34% vs. 21.47%). Undertriage of injuries with ISS>15 was similar (16.0% vs. 16.9%). A scheme based on the presence of one of many scene risk factors (age>55, GCS<14, intrusion ≥18”, near lateral impact, far lateral impact with intrusion ≥12”, rollover, or lack of restraint use) resulted in an undertriage of 0.86% (death) and 10.5% (ISS>15) and an overtriage of 63.4%. The combination of at least one of the scene risk factors and mobility status greatly decreased overtriage of those with ISS≤15 (24.4%) with an increase in death undertriage (3.19%). Further combination of mobility and scene factors allowed for maintenance of a low undertriage (0.86%) as well as an acceptable overtriage (48%). Conclusion: Patient mobility data easily obtained at the scene of a crash allow triaging of injured patients to the appropriate facility with high sensitivity and specificity. The addition of crash scene data to scene mobility allows further reductions in undertriage and overtriage. PMID:22105408
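    The scene-risk-factor rule reported in the Results can be written down directly. The thresholds below come from the abstract; the function shape and the way mobility is combined with the risk factors are illustrative guesses, since the abstract does not spell out the final combined scheme.

```python
def scene_risk_factor(age, gcs, intrusion_in, near_lateral, far_lateral,
                      far_intrusion_in, rollover, restrained):
    """True if at least one of the abstract's scene risk factors is present.
    Intrusion values are in inches; gcs is the Glasgow Coma Scale score."""
    return (age > 55
            or gcs < 14
            or intrusion_in >= 18
            or near_lateral
            or (far_lateral and far_intrusion_in >= 12)
            or rollover
            or not restrained)

def triage_to_trauma_center(mobile_on_scene, **risk):
    """One plausible combined scheme (hypothetical): send non-mobile
    patients, or patients with any scene risk factor, to the trauma center."""
    return (not mobile_on_scene) or scene_risk_factor(**risk)
```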

  19. Do Simultaneously Viewed Objects Influence Scene Recognition Individually or as Groups? Two Perceptual Studies

    PubMed Central

    Gagne, Christopher R.; MacEvoy, Sean P.

    2014-01-01

    The ability to quickly categorize visual scenes is critical to daily life, allowing us to identify our whereabouts and to navigate from one place to another. Rapid scene categorization relies heavily on the kinds of objects scenes contain; for instance, studies have shown that recognition is less accurate for scenes to which incongruent objects have been added, an effect usually interpreted as evidence of objects' general capacity to activate semantic networks for scene categories they are statistically associated with. Essentially all real-world scenes contain multiple objects, however, and it is unclear whether scene recognition draws on the scene associations of individual objects or of object groups. To test the hypothesis that scene recognition is steered, at least in part, by associations between object groups and scene categories, we asked observers to categorize briefly-viewed scenes appearing with object pairs that were semantically consistent or inconsistent with the scenes. In line with previous results, scenes were less accurately recognized when viewed with inconsistent versus consistent pairs. To understand whether this reflected individual or group-level object associations, we compared the impact of pairs composed of mutually related versus unrelated objects; i.e., pairs, which, as groups, had clear associations to particular scene categories versus those that did not. Although related and unrelated object pairs equally reduced scene recognition accuracy, unrelated pairs were consistently less capable of drawing erroneous scene judgments towards scene categories associated with their individual objects. This suggests that scene judgments were influenced by the scene associations of object groups, beyond the influence of individual objects. 
More generally, the fact that unrelated objects were as capable of degrading categorization accuracy as related objects, while less capable of generating specific alternative judgments, indicates that the process

  20. Rendering energy-conservative scenes in real time

    NASA Astrophysics Data System (ADS)

    Olson, Eric M.; Garbo, Dennis L.; Crow, Dennis R.; Coker, Charles F.

    1997-07-01

    Real-time infrared (IR) scene generation for Hardware-in-the-Loop (HWIL) testing of IR seeker systems is a complex problem due to the required frame rates and image fidelity. High frame rates are required for current-generation seeker systems to perform designation, discrimination, identification, tracking, and aimpoint selection tasks. Computational requirements for IR signature phenomenology and sensor effects have been difficult to meet in real time to support HWIL testing. Commercial scene generation hardware is rapidly improving and is becoming a viable solution for HWIL testing activities being conducted at the Kinetic Kill Vehicle Hardware-in-the-Loop Simulator facility at Eglin AFB, Florida. This paper presents computational techniques used to overcome IR scene rendering errors incurred with commercially available hardware and software for real-time scene generation in support of HWIL testing. These techniques provide an acceptable solution to real-time IR scene generation that strikes a balance between physical accuracy and image framing rates. The results of these techniques are investigated as they pertain to rendering accuracy and speed for target objects which begin as a point source during acquisition and develop into an extended source representation during aimpoint selection.

  1. Probabilistic modeling of scene dynamics for applications in visual surveillance.

    PubMed

    Saleemi, Imran; Shafique, Khurram; Shah, Mubarak

    2009-08-01

    We propose a novel method to model and learn the scene activity, observed by a static camera. The proposed model is very general and can be applied for solution of a variety of problems. The motion patterns of objects in the scene are modeled in the form of a multivariate nonparametric probability density function of spatiotemporal variables (object locations and transition times between them). Kernel Density Estimation is used to learn this model in a completely unsupervised fashion. Learning is accomplished by observing the trajectories of objects by a static camera over extended periods of time. It encodes the probabilistic nature of the behavior of moving objects in the scene and is useful for activity analysis applications, such as persistent tracking and anomalous motion detection. In addition, the model also captures salient scene features, such as the areas of occlusion and most likely paths. Once the model is learned, we use a unified Markov Chain Monte Carlo (MCMC)-based framework for generating the most likely paths in the scene, improving foreground detection, persistent labeling of objects during tracking, and deciding whether a given trajectory represents an anomaly to the observed motion patterns. Experiments with real-world videos are reported which validate the proposed approach. PMID:19542580
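    The core density model described above, a nonparametric density over spatiotemporal transition variables learned with KDE, can be sketched with synthetic trajectories. The data and the scoring below are purely illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Toy version: learn a density over (x1, y1, x2, y2, dt) transitions and
# score new transitions against it. The synthetic "trajectories" simply
# drift rightward by about 5 units over roughly 10 frames.
rng = np.random.default_rng(0)
n = 500
x1 = rng.uniform(0, 100, n)
y1 = rng.uniform(0, 100, n)
transitions = np.column_stack([
    x1, y1,
    x1 + rng.normal(5, 1, n),    # x2: consistent rightward motion
    y1 + rng.normal(0, 1, n),    # y2
    rng.normal(10, 2, n),        # transit time between the two points
])

kde = gaussian_kde(transitions.T)   # KDE over the 5-D variables

# A transition matching the learned pattern vs. one moving backwards:
forward = kde([50, 50, 55, 50, 10])[0]
backward = kde([50, 50, 20, 50, 10])[0]
print(forward > backward)  # forward drift fits the learned motion model
```

A low density under the learned model is exactly how the paper's framework would flag a trajectory as anomalous.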

  2. Is OpenSceneGraph an option for ESVS displays?

    NASA Astrophysics Data System (ADS)

    Peinecke, Niklas

    2015-05-01

    Modern Enhanced and Synthetic Vision Systems (ESVS) usually incorporate complex 3D displays, for example, terrain visualizations with color-coded altitude, obstacle representations that change their level of detail based on distance, semi-transparent overlays, dynamic labels, etc. All of these elements can be conveniently implemented using a modern scene graph implementation. OpenSceneGraph offers such a data structure. Furthermore, OpenSceneGraph includes broad support for industry-standard file formats, so 3D data and models from other applications can be used. OpenSceneGraph has a large user community and is driven by open source development. Thus a selection of visualization techniques is available, and solutions for common problems can often be found easily in the community's discussion groups. On the other hand, documentation is sometimes outdated or nonexistent. We investigate which ESVS applications can be realized using OpenSceneGraph and on which platforms this is possible. Furthermore, we take a look at technical and license limitations.

  3. Can cigarette warnings counterbalance effects of smoking scenes in movies?

    PubMed

    Golmier, Isabelle; Chebat, Jean-Charles; Gélinas-Chebat, Claire

    2007-02-01

    Scenes in movies where smoking occurs have been empirically shown to influence teenagers to smoke cigarettes. The capacity of a Canadian warning label on cigarette packages to decrease the effects of smoking scenes in popular movies has been investigated. A 2 x 3 factorial design was used to test the effects of the same movie scene with or without electronic manipulation of all elements related to smoking, and cigarette pack warnings, i.e., no warning, text-only warning, and text+picture warning. Smoking-related stereotypes and intent to smoke of teenagers were measured. It was found that, in the absence of warning, and in the presence of smoking scenes, teenagers showed positive smoking-related stereotypes. However, these effects were not observed if the teenagers were first exposed to a picture and text warning. Also, smoking-related stereotypes mediated the relationship of the combined presentation of a text and picture warning and a smoking scene on teenagers' intent to smoke. Effectiveness of Canadian warning labels to prevent or to decrease cigarette smoking among teenagers is discussed, and areas of research are proposed. PMID:17450995

  4. Rank preserving sparse learning for Kinect based scene classification.

    PubMed

    Tao, Dapeng; Jin, Lianwen; Yang, Zhao; Li, Xuelong

    2013-10-01

    With the rapid development of RGB-D sensors and the promptly growing population of the low-cost Microsoft Kinect sensor, scene classification, which is a hard yet important problem in computer vision, has gained a resurgence of interest recently. That is because the depth information provided by the Kinect sensor opens an effective and innovative way for scene classification. In this paper, we propose a new scheme for scene classification, which applies locality-constrained linear coding (LLC) to local SIFT features for representing the RGB-D samples and classifies scenes through the cooperation between a new rank preserving sparse learning (RPSL) based dimension reduction and a simple classification method. RPSL considers four aspects: 1) it preserves the rank order information of the within-class samples in a local patch; 2) it maximizes the margin between the between-class samples on the local patch; 3) the L1-norm penalty is introduced to obtain the parsimony property; and 4) it models the classification error minimization by utilizing the least-squares error minimization. Experiments are conducted on the NYU Depth V1 dataset and demonstrate the robustness and effectiveness of RPSL for scene classification. PMID:23846511
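    The LLC step mentioned above is commonly implemented with the approximated algorithm of Wang et al.: code each descriptor over its k nearest codebook atoms under a sum-to-one constraint. A minimal sketch, with an invented codebook and parameters (not the paper's setup):

```python
import numpy as np

def llc_code(x, B, k=5, beta=1e-4):
    """Approximated LLC: reconstruct descriptor x from its k nearest
    codebook atoms (rows of B) under a sum-to-one constraint."""
    d2 = ((B - x) ** 2).sum(axis=1)     # squared distance to each atom
    nn = np.argsort(d2)[:k]             # indices of the k nearest atoms
    Z = B[nn] - x                       # shift the local atoms to the origin
    C = Z @ Z.T + beta * np.eye(k)      # regularized local covariance
    w = np.linalg.solve(C, np.ones(k))  # solve C w = 1
    w /= w.sum()                        # enforce the sum-to-one constraint
    code = np.zeros(len(B))
    code[nn] = w                        # sparse: nonzero only at the k atoms
    return code

rng = np.random.default_rng(1)
codebook = rng.normal(size=(256, 128))  # illustrative codebook of 256 atoms
feat = rng.normal(size=128)             # a SIFT-like descriptor
code = llc_code(feat, codebook)
```

The resulting sparse codes would then be pooled per image before the RPSL dimension-reduction stage.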

  5. Selective looking at natural scenes: Hedonic content and gender.

    PubMed

    Bradley, Margaret M; Costa, Vincent D; Lang, Peter J

    2015-10-01

    Choice viewing behavior when looking at affective scenes was assessed to examine differences due to hedonic content and gender by monitoring eye movements in a selective looking paradigm. On each trial, participants viewed a pair of pictures that included a neutral picture together with an affective scene depicting either contamination, mutilation, threat, food, nude males, or nude females. The duration of time that gaze was directed to each picture in the pair was determined from eye fixations. Results indicated that viewing choices varied with both hedonic content and gender. Initially, gaze duration for both men and women was heightened when viewing all affective contents, but was subsequently followed by significant avoidance of scenes depicting contamination or nude males. Gender differences were most pronounced when viewing pictures of nude females, with men continuing to devote longer gaze time to pictures of nude females throughout viewing, whereas women avoided scenes of nude people, whether male or female, later in the viewing interval. For women, reported disgust of sexual activity was also inversely related to gaze duration for nude scenes. Taken together, selective looking as indexed by eye movements reveals differential perceptual intake as a function of specific content, gender, and individual differences. PMID:26156939

  6. Registration Study. Research Note.

    ERIC Educational Resources Information Center

    Baratta, Mary Kathryne

    During spring 1977 registration, 3,255 or 45% of Moraine Valley Community College (MVCC) registering students responded to a scheduling preferences and problems questionnaire covering enrollment status, curriculum load, program preference, ability to obtain courses, schedule conflicts, preferred times for class offerings, actual scheduling of…

  7. False recognition of objects in visual scenes: findings from a combined direct and indirect memory test.

    PubMed

    Weinstein, Yana; Nash, Robert A

    2013-01-01

    We report an extension of the procedure devised by Weinstein and Shanks (Memory & Cognition 36:1415-1428, 2008) to study false recognition and priming of pictures. Participants viewed scenes with multiple embedded objects (seen items), then studied the names of these objects and the names of other objects (read items). Finally, participants completed a combined direct (recognition) and indirect (identification) memory test that included seen items, read items, and new items. In the direct test, participants recognized pictures of seen and read items more often than new pictures. In the indirect test, participants' speed at identifying those same pictures was improved for pictures that they had actually studied, and also for falsely recognized pictures whose names they had read. These data provide new evidence that a false-memory induction procedure can elicit memory-like representations that are difficult to distinguish from "true" memories of studied pictures. PMID:22976882

  8. Background gradient reduction of an infrared scene projector mounted on a flight motion simulator

    NASA Astrophysics Data System (ADS)

    Cantey, Thomas M.; Bowden, Mark H.; Ballard, Gary

    2008-04-01

    The U.S. Army Aviation and Missile Research, Development, and Engineering Center (AMRDEC) recently developed an infrared projector mounted on a flight motion simulator (FMS) that is used for hardware-in-the-loop (HWIL) testing. The initial application of this system within a HWIL environment required variations in the projected background radiance level to be very low. This paper describes the investigation into the causes of the variations in background radiance levels and the steps employed to reduce the background variance to an acceptable level. Test data collected before and after the corrective techniques are provided. The procedures discussed provide insight into the types of practical problems encountered when integrating infrared scene projector technologies into actual test facilities.

  9. Moral Reasoning in Hypothetical and Actual Situations.

    ERIC Educational Resources Information Center

    Sumprer, Gerard F.; Butter, Eliot J.

    1978-01-01

    Results of this investigation suggest that moral reasoning of college students, when assessed using the DIT format, is the same whether the dilemmas involve hypothetical or actual situations. Subjects, when presented with hypothetical situations, become deeply immersed in them and respond as if they were actual participants. (Author/BEF)

  10. Factors Related to Self-Actualization.

    ERIC Educational Resources Information Center

    Hogan, H. Wayne; McWilliams, Jettie M.

    1978-01-01

    Provides data to further support the notions that females score higher in self-actualization measures and that self-actualization scores correlate inversely to the degree of undesirability individuals assign to their heights and weights. Finds that, contrary to predictions, greater androgyny was related to lower, not higher, self-actualization…

  11. Content-adaptive ghost imaging of dynamic scenes.

    PubMed

    Li, Ziwei; Suo, Jinli; Hu, Xuemei; Dai, Qionghai

    2016-04-01

    Limited by the long acquisition time of 2D ghost imaging, current ghost imaging systems are so far inapplicable to dynamic scenes. However, it has been demonstrated that natural images are spatiotemporally redundant and that the redundancy is scene dependent. Inspired by that, we propose a content-adaptive computational ghost imaging approach to achieve high reconstruction quality under a small number of measurements, and thus achieve ghost imaging of dynamic scenes. To utilize content-adaptive inter-frame redundancy, we pose the reconstruction as an iteratively reweighted optimization, with non-uniform weights computed from temporally correlated frame sequences. The proposed approach can achieve dynamic imaging at 16 fps with 64×64-pixel resolution. PMID:27137022
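    For context, conventional computational ghost imaging reconstructs the scene by correlating single-pixel (bucket) detector readings with the illumination patterns. The sketch below shows only that baseline, on a synthetic scene with random patterns; the paper's content-adaptive reweighted optimization is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 16, 4000                          # 16x16 scene, 4000 random patterns
scene = np.zeros((n, n))
scene[4:12, 6:10] = 1.0                  # a bright rectangular object

patterns = rng.random((N, n, n))         # structured illumination patterns
bucket = (patterns * scene).sum(axis=(1, 2))  # bucket detector readings

# Correlate fluctuations of the bucket signal with the patterns:
# G(r) = <(y - <y>) I(r)>, the standard ghost-image estimate.
recon = ((bucket - bucket.mean())[:, None, None] * patterns).mean(axis=0)
print(recon[8, 8] > recon[0, 0])         # object pixel reconstructs brighter
```

The large N needed for an acceptable image is exactly the acquisition-time bottleneck the abstract targets.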

  12. Real-time generation of reality scene in flight simulator

    NASA Astrophysics Data System (ADS)

    Zhang, Limin; Zhang, Linlin

    2004-03-01

    Reality scene is one of the most basic and important technologies in the visual systems of flight simulators. It includes real terrain, terrain objects, and physiognomy. Nowadays, it is usually constructed from digital elevation model (DEM) and remote sensing satellite data. In spite of the fast development of computer hardware, it is very difficult to generate large-area reality scenes in real time. Therefore, model simplification, multi-resolution rendering, and level of detail (LOD) have become hotspots of recent research. Multi-resolution rendering is the development and extension of LOD; model simplification is the key to generating a lower-resolution model from a more complex higher-resolution one. Based on the manufacturing practice of several flight simulators, this paper discusses ways of generating and simplifying reality scenes, as well as viewpoint-based dynamic data partitioning and scheduling.

  13. Virtual environments for scene of crime reconstruction and analysis

    NASA Astrophysics Data System (ADS)

    Howard, Toby L. J.; Murta, Alan D.; Gibson, Simon

    2000-02-01

    This paper describes research conducted in collaboration with Greater Manchester Police (UK) to evaluate the utility of Virtual Environments for scene of crime analysis, forensic investigation, and law enforcement briefing and training. We present an illustrated case study of the construction of a high-fidelity virtual environment, intended to match a particular real-life crime scene as closely as possible. We describe and evaluate the combination of several approaches, including: the use of the Manchester Scene Description Language for constructing complex geometrical models; the application of a radiosity rendering algorithm with several novel features based on human perceptual considerations; texture extraction from forensic photography; and experiments with interactive walkthroughs and large-screen stereoscopic display of the virtual environment implemented using the MAVERIK system. We also discuss the potential applications of Virtual Environment techniques in the law enforcement and forensic communities.

  14. The contributions of color to recognition memory for natural scenes.

    PubMed

    Wichmann, Felix A; Sharpe, Lindsay T; Gegenfurtner, Karl R

    2002-05-01

    The authors used a recognition memory paradigm to assess the influence of color information on visual memory for images of natural scenes. Subjects performed 5%-10% better for colored than for black-and-white images independent of exposure duration. Experiment 2 indicated little influence of contrast once the images were suprathreshold, and Experiment 3 revealed that performance worsened when images were presented in color and tested in black and white, or vice versa, leading to the conclusion that the surface property color is part of the memory representation. Experiments 4 and 5 exclude the possibility that the superior recognition memory for colored images results solely from attentional factors or saliency. Finally, the recognition memory advantage disappears for falsely colored images of natural scenes: The improvement in recognition memory depends on the color congruence of presented images with learned knowledge about the color gamut found within natural scenes. The results can be accounted for within a multiple memory systems framework. PMID:12018503

  15. Advanced radiometric millimeter-wave scene simulation: ARMSS

    NASA Astrophysics Data System (ADS)

    Hauss, Bruce I.; Agravante, Hiroshi H.; Chaiken, Steven

    1997-06-01

    In order to predict the performance of a passive millimeter wave sensor under a variety of weather, terrain, and sensor operational conditions, TRW has developed the Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code. This code provides a comprehensive, end-to-end scene simulation capability based on rigorous, first-principles physics models of passive millimeter wave phenomenology and sensor characteristics. The ARMSS code has been extensively benchmarked both against data in the literature and against a wide array of millimeter-wave field-imaging data. The code has been used in support of numerous passive millimeter wave technology programs for interpreting millimeter wave data, establishing scene signatures, performing mission analyses, and developing system requirements for the design of millimeter wave sensor systems. In this paper, we present details of the ARMSS code and describe its current use in defining system requirements for the passive millimeter wave camera being developed under the Passive Millimeter Wave Camera Consortium led by TRW.

  16. Imaging radiometer overlay model for infrared scene synthesis

    NASA Astrophysics Data System (ADS)

    Jarvis, Donald E.; Gover, Robert E.

    2003-08-01

    A dynamic model of infrared missile engagements needs to integrate the output of signature models into a scene of given resolution with a changing viewpoint and moving targets against some background. Some signature prediction models are stand-alone software packages which currently cannot be dynamically interfaced to a running engagement model. They can be used to conveniently provide an image of an infrared target at high resolution at a single viewpoint. Using an imaging radiometer model, high-resolution, high-fidelity signatures can be quickly combined into a scene of desired configuration. This paper presents the derivation of such a model from physical and signal processing considerations, and its practical implementation. The derived methodology provides very high radiometric accuracy with a rigorously controlled error and smooth integration of objects moving through the scene.

  17. A Model of Manual Control with Perspective Scene Viewing

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara Townsend

    2013-01-01

    A model of manual control during perspective scene viewing is presented, which combines the Crossover Model with a simplified model of perspective-scene viewing and visual-cue selection. The model is developed for a particular example task: an idealized constant-altitude task in which the operator controls longitudinal position in the presence of both longitudinal and pitch disturbances. An experiment is performed to develop and validate the model. The model corresponds closely with the experimental measurements, and identified model parameters are highly consistent with the visual cues available in the perspective scene. The modeling results indicate that operators used one visual cue for position control, and another visual cue for velocity control (lead generation). Additionally, operators responded more quickly to rotation (pitch) than translation (longitudinal).
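    The Crossover Model referenced above is McRuer's classical description of combined operator-plus-vehicle dynamics: whatever the controlled element, the operator adapts so that near the crossover frequency the open loop behaves like an integrator with an effective time delay,

```latex
Y_p(j\omega)\, Y_c(j\omega) \;\approx\; \frac{\omega_c \, e^{-j\omega \tau_e}}{j\omega}
```

    where \(\omega_c\) is the crossover frequency and \(\tau_e\) the effective time delay. The model summarized in this abstract supplies the perspective-scene visual-cue front end that feeds such a loop.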

  18. Use of AFIS for linking scenes of crime.

    PubMed

    Hefetz, Ido; Liptz, Yakir; Vaturi, Shaul; Attias, David

    2016-05-01

    Forensic intelligence can provide critical information in criminal investigations: the linkage of crime scenes. The Automatic Fingerprint Identification System (AFIS) is an example of a technological improvement that has advanced the entire forensic identification field to strive for new goals and achievements. In one example using AFIS, a series of burglaries into private apartments enabled a fingerprint examiner to search latent prints from different burglary scenes against an unsolved latent print database. Latent finger and palm prints coming from the same source were associated with more than 20 cases. Then, through forensic intelligence and profile analysis, the offender's behavior could be anticipated. He was caught, identified, and arrested. It is recommended to perform an AFIS search of LT/UL prints against current crimes automatically as part of laboratory protocol, rather than at an examiner's discretion. This approach may link different crime scenes. PMID:26996923

  19. Ray tracing a three dimensional scene using a grid

    DOEpatents

    Wald, Ingo; Ize, Santiago; Parker, Steven G; Knoll, Aaron

    2013-02-26

    Ray tracing a three-dimensional scene using a grid. One example embodiment is a method for ray tracing a three-dimensional scene using a grid. In this example method, the three-dimensional scene is made up of objects that are spatially partitioned into a plurality of cells that make up the grid. The method includes a first act of computing a bounding frustum of a packet of rays, and a second act of traversing the grid slice by slice along a major traversal axis. Each slice traversal includes a first act of determining one or more cells in the slice that are overlapped by the frustum and a second act of testing the rays in the packet for intersection with any objects at least partially bounded by the one or more cells overlapped by the frustum.
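    The two per-slice acts described above can be sketched compactly. This simplified version assumes axis-aligned unit cells and a common ray origin for the packet; all names and the conservative bounding scheme are illustrative, not the patent's exact method.

```python
import math

def traverse_slices(origin, directions, grid_res):
    """Walk the grid slice by slice along the major traversal axis,
    yielding (slice_index, {axis: (lo_cell, hi_cell)}) for the cells
    overlapped by the packet's bounding frustum."""
    # Major traversal axis: dominant component of the mean ray direction.
    mean_dir = [sum(d[i] for d in directions) / len(directions) for i in range(3)]
    major = max(range(3), key=lambda i: abs(mean_dir[i]))
    minors = [i for i in range(3) if i != major]

    for k in range(grid_res[major]):
        bounds = {}
        for axis in minors:
            # Project every ray onto the slice's near and far planes and
            # take the extreme coordinates: a conservative frustum bound.
            coords = [origin[axis] + (plane - origin[major]) / d[major] * d[axis]
                      for d in directions if d[major] != 0
                      for plane in (k, k + 1)]
            lo = max(0, math.floor(min(coords)))
            hi = min(grid_res[axis] - 1, math.floor(max(coords)))
            bounds[axis] = (lo, hi)
        yield k, bounds
```

Per the claim, each yielded cell range would then be tested for ray-object intersections against the objects at least partially bounded by those cells.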

  20. Omnidirectional scene illuminant estimation using a multispectral imaging system

    NASA Astrophysics Data System (ADS)

    Tominaga, Shoji; Fukuda, Tsuyoshi

    2007-01-01

    A method is developed for estimating an omnidirectional distribution of the scene illuminant spectral distribution, including spiky fluorescent spectra. First, we describe a measuring apparatus consisting of a mirrored ball system and an imaging system using an LCT filter (or color filters), a monochrome CCD camera, and a personal computer. Second, the measuring system is calibrated and images representing the omnidirectional light distribution are created. Third, we present an algorithm for recovering the illuminant spectral-power distribution from the image data. Finally, the feasibility of the proposed method is demonstrated in an experiment on a classroom scene with different illuminant sources such as fluorescent light, incandescent light, and daylight. The accuracy of the estimated scene illuminants is shown for the cases of a 6-channel multi-band camera, a 31-channel spectral camera, and a 61-channel spectral camera.

  1. A comparison of actual and perceived residential proximity to toxic waste sites.

    PubMed

    Howe, H L

    1988-01-01

    Studies of Memphis and Three Mile Island have noted a positive association between actual residential distance and public concern about exposure to potential contamination, whereas none was found at Love Canal. In this study, concern about environmental contamination and exposure was examined in relation to both perceived and actual proximity to a toxic waste disposal site (TWDS). It was hypothesized that perceived residential proximity would better predict concern levels than would actual residential distance. The data were abstracted from a New York State survey (excluding New York City), using all respondents (N = 317) from one county known to have a large number of TWDSs. Using linear regression, the variance explained in concern scores was 22 times higher with perceived distance than with actual distance. Perceived residential distance was a significant predictor of concern scores, while actual distance was not. However, perceived distance explained less than 5% of the variance in concern scores. PMID:3196077

  2. Constructing Virtual Forest Scenes for Assessment of Sub-pixel Vegetation Structure From Imaging Spectroscopy

    NASA Astrophysics Data System (ADS)

    Gerace, A. D.; Yao, W.; van Aardt, J. A.; Romanczyk, P.; Kelbe, D.; van Leeuwen, M.; Kampe, T. U.

    2015-12-01

    Assessment of vegetation structure via remote sensing modalities has a long history for a range of sensor platforms. Imaging spectroscopy, while often used for biochemical measurements, also applies to structural assessment in that the Hyperspectral Infrared Imager (HyspIRI), for instance, will provide an opportunity to monitor the global ecosystem. Establishing the linkage between HyspIRI data and sub-pixel vegetation structural variation therefore is of keen interest to the remote sensing and ecology communities. NASA's AVIRIS-C was used to collect airborne data during the 2013-2015 time frame, while ground truth data were limited to 2013 due to the time-consuming and labor-intensive nature of field data collection. We augmented the available field data with a first-principles, physics-based simulation approach to refine our field efforts and to maintain larger control over within-pixel variation and associated assessments. Three virtual scenes were constructed for the study, corresponding to the actual vegetation structure of NEON's Pacific Southwest site (Fresno, CA). They represented three typical forest types: oak savanna, dense coniferous forest, and mixed conifer-manzanita forest. An airborne spectrometer and a field leaf area index sensor were simulated over these scenes using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) Model, a synthetic image generation model. After verifying the geometrical parameters and physical model against these replicated sensors, more scenes could be constructed by changing one or more vegetation structural parameters, such as forest density, tree species, size, location, and within-pixel distribution. We constructed regression models of leaf area index (LAI, R2=0.92) and forest density (R2=0.97) with narrow-band vegetation indices through simulation. Those models can be used to improve HyspIRI's suitability for consistent global vegetation structural assessments. The virtual scene and model can also be used in

  3. Improved content aware scene retargeting for retinitis pigmentosa patients

    PubMed Central

    2010-01-01

    Background In this paper we present a novel scene retargeting technique to reduce the visual scene while maintaining the size of the key features. The algorithm is scalable for implementation on portable devices, and thus has potential for augmented reality systems to provide visual support for those with tunnel vision. We therefore test the efficacy of our algorithm at shrinking the visual scene into the remaining field of view for those patients. Methods Simple spatial compression of visual scenes makes objects appear further away. We have therefore developed an algorithm which removes low-importance information while maintaining the size of the significant features. Previous approaches in this field have included seam carving, which removes low-importance seams from the scene, and shrinkability, which dynamically shrinks the scene according to a generated importance map. The former method causes significant artifacts and the latter is inefficient. In this work we have developed a new algorithm combining the best aspects of these two previous methods. In particular, our approach is to generate a shrinkability importance map using a seam-based approach. We then use it to dynamically shrink the scene in similar fashion to the shrinkability method. Importantly, we have implemented it so that it can be used in real time without prior knowledge of future frames. Results We have evaluated and compared our algorithm to the seam carving and image shrinkability approaches from a content preservation perspective and a compression quality perspective. Our technique has also been evaluated in a trial including 20 participants with simulated tunnel vision. Results show the robustness of our method at reducing scenes by up to 50% with minimal distortion. We also demonstrate efficacy of its use for those with simulated tunnel vision of 22 degrees of field of view or less. Conclusions Our approach allows us to perform content-aware video resizing in real time using
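    A toy blend of the two ideas the abstract combines: a seam-style gradient-energy importance map driving shrinkability-style non-uniform column shrinking, here for a single grayscale frame. This is an illustrative sketch under those assumptions, not the authors' algorithm.

```python
import numpy as np

def shrink_width(img, target_w):
    """Content-aware width reduction: low-gradient columns shrink more,
    high-gradient (important) columns keep more of the output width."""
    h, w = img.shape
    # Column importance: summed horizontal gradient magnitude.
    energy = np.abs(np.diff(img, axis=1, append=img[:, -1:])).sum(axis=0)
    importance = energy / (energy.sum() + 1e-9)
    # Give each source column a slice of the output width proportional
    # to its importance, then sample output columns from those slices.
    edges = np.concatenate([[0.0], np.cumsum(importance * target_w)])
    out_cols = np.searchsorted(edges, np.arange(target_w) + 0.5) - 1
    return img[:, np.clip(out_cols, 0, w - 1)]
```

Because each frame is processed independently, the same idea extends to causal real-time video as the abstract requires, at the cost of possible frame-to-frame flicker that the full method would need to damp.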

  4. Robust pedestrian detection and tracking in crowded scenes

    NASA Astrophysics Data System (ADS)

    Lypetskyy, Yuriy

    2007-09-01

    This paper presents a vision-based tracking system developed for very crowded situations such as underground or railway stations. Our system consists of two main parts: searching for person candidates in single frames, and tracking them frame to frame through the scene. This paper concentrates mostly on the tracking part and describes its core components in detail. These are trajectory prediction using KLT vectors or a Kalman filter, adaptive active shape model adjustment, and texture matching. We show that the combination of the presented algorithms leads to robust people tracking even in complex scenes with permanent occlusions.
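The Kalman-filter trajectory prediction mentioned above can be sketched with a standard constant-velocity model. The state layout and noise settings below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Constant-velocity Kalman filter for 2D pedestrian position tracking.
# State: [x, y, vx, vy]; only position is observed.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)   # measurement model
Q = 0.01 * np.eye(4)                  # process noise (assumed)
R = 0.5 * np.eye(2)                   # measurement noise (assumed)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for t in range(1, 6):                 # pedestrian moving +1 px/frame in x
    x, P = predict(x, P)
    x, P = update(x, P, np.array([t, 0.0]))
print(np.round(x[:2], 1))             # position estimate approaches [5, 0]
```

In a tracker like the one described, the predict step supplies the search location for shape-model adjustment and texture matching in the next frame.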

  5. Improved canopy reflectance modeling and scene inference through improved understanding of scene pattern

    NASA Technical Reports Server (NTRS)

    Franklin, Janet; Simonett, David

    1988-01-01

    The Li-Strahler reflectance model, driven by LANDSAT Thematic Mapper (TM) data, provided regional estimates of tree size and density within 20 percent of sampled values in two bioclimatic zones in West Africa. This model exploits tree geometry in an inversion technique to predict average tree size and density from reflectance data using a few simple parameters measured in the field (spatial pattern, shape, and size distribution of trees) and in the imagery (spectral signatures of scene components). Trees are treated as simply shaped objects, and multispectral reflectance of a pixel is assumed to be related only to the proportions of tree crown, shadow, and understory in the pixel. These, in turn, are a direct function of the number and size of trees, the solar illumination angle, and the spectral signatures of crown, shadow and understory. Given the variance in reflectance from pixel to pixel within a homogeneous area of woodland, caused by the variation in the number and size of trees, the model can be inverted to give estimates of average tree size and density. Because the inversion is sensitive to correct determination of component signatures, predictions are not accurate for small areas.

  6. The Effect of Speed Alterations on Tempo Note Selection.

    ERIC Educational Resources Information Center

    Madsen, Clifford K.; And Others

    1986-01-01

    Investigated the tempo note preferences of 100 randomly selected college-level musicians using familiar orchestral music as stimuli. Subjects heard selections at increased, decreased, and unaltered tempi. Results showed musicians were not accurate in estimating original tempo and showed consistent preference for faster than actual tempo.…

  7. Early childhood exposure to parental nudity and scenes of parental sexuality ("primal scenes"): an 18-year longitudinal study of outcome.

    PubMed

    Okami, P; Olmstead, R; Abramson, P R; Pendleton, L

    1998-08-01

    As part of the UCLA Family Lifestyles Project (FLS), 200 male and female children participated in an 18-year longitudinal outcome study of early childhood exposure to parental nudity and scenes of parental sexuality ("primal scenes"). At age 17-18, participants were assessed for levels of self-acceptance; relations with peers, parents, and other adults; antisocial and criminal behavior; substance use; suicidal ideation; quality of sexual relationships; and problems associated with sexual relations. No harmful "main effect" correlates of the predictor variables were found. A significant crossover Sex of Participant X Primal Scenes interaction was found such that boys exposed to primal scenes before age 6 had reduced risk of STD transmission or having impregnated someone in adolescence. In contrast, girls exposed to primal scenes before age 6 had increased risk of STD transmission or having become pregnant. A number of main effect trends in the data (nonsignificant at p < 0.05, following the Bonferroni correction) linked exposure to nudity and exposure to primal scenes with beneficial outcomes. However, a number of these findings were mediated by sex of participant interactions showing that the effects were attenuated or absent for girls. All effects were independent of family stability, pathology, or child-rearing ideology; sex of participant; SES; and beliefs and attitudes toward sexuality. Limitations of the data and of long-term regression studies in general are discussed, and the sex of participant interactions are interpreted speculatively. It is suggested that pervasive beliefs in the harmfulness of the predictor variables are exaggerated. PMID:9681119

  8. Unsupervised semantic indoor scene classification for robot vision based on context of features using Gist and HSV-SIFT

    NASA Astrophysics Data System (ADS)

    Madokoro, H.; Yamanashi, A.; Sato, K.

    2013-08-01

    This paper presents an unsupervised scene classification method for actualizing semantic recognition of indoor scenes. Background and foreground features are respectively extracted using Gist and color scale-invariant feature transform (SIFT) as feature representations based on context. We used hue, saturation, and value SIFT (HSV-SIFT) because of its simple algorithm with low calculation costs. Our method creates bags of features for voting visual words created from both feature descriptors to a two-dimensional histogram. Moreover, our method generates labels as candidates of categories for time-series images while maintaining stability and plasticity together. Automatic labeling of category maps can be realized using labels created using adaptive resonance theory (ART) as teaching signals for counter propagation networks (CPNs). We evaluated our method for semantic scene classification using KTH's image database for robot localization (KTH-IDOL), which is widely used for robot localization and navigation. The mean classification accuracies of Gist, gray SIFT, one class support vector machines (OC-SVM), position-invariant robust features (PIRF), and our method are, respectively, 39.7, 58.0, 56.0, 63.6, and 79.4%. The accuracy of our method is 15.8 percentage points higher than that of PIRF. Moreover, we applied our method for fine classification using our original mobile robot. We obtained a mean classification accuracy of 83.2% for six zones.
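The voting of visual words from two descriptor types into a two-dimensional histogram can be sketched as follows. This is one plausible reading of the step, with background (Gist-like) words on one axis and foreground (SIFT-like) words on the other; the vocabularies and descriptor dimensions are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_word(desc, vocab):
    # Assign each descriptor to its nearest visual word (codebook entry).
    d = ((desc[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def two_dim_bof(bg_desc, fg_desc, bg_vocab, fg_vocab):
    """Vote background and foreground word co-occurrences into a joint
    2D histogram, normalized to sum to one."""
    bg_w = nearest_word(bg_desc, bg_vocab)
    fg_w = nearest_word(fg_desc, fg_vocab)
    hist = np.zeros((len(bg_vocab), len(fg_vocab)))
    for b in bg_w:
        for f in fg_w:
            hist[b, f] += 1
    return hist / hist.sum()

bg = rng.normal(size=(10, 16))      # synthetic background descriptors
fg = rng.normal(size=(20, 16))      # synthetic foreground descriptors
hist = two_dim_bof(bg, fg, rng.normal(size=(4, 16)), rng.normal(size=(6, 16)))
print(hist.shape)
```

In the paper's pipeline, histograms of this kind would then feed the ART labeling and CPN mapping stages.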

  9. 4. Panama Mount. Note concrete ring and metal rail. Note ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. Panama Mount. Note concrete ring and metal rail. Note cliff erosion under foundation at left center. Looking 297° W. - Fort Funston, Panama Mounts for 155mm Guns, Skyline Boulevard & Great Highway, San Francisco, San Francisco County, CA

  10. Note-Taking: Different Notes for Different Research Stages.

    ERIC Educational Resources Information Center

    Callison, Daniel

    2003-01-01

    Explains the need to teach students different strategies for taking notes for research, especially at the exploration and collecting information stages, based on Carol Kuhlthau's research process. Discusses format changes; using index cards; notes for live presentations or media presentations versus notes for printed sources; and forming focus…

  11. Semantic control of feature extraction from natural scenes.

    PubMed

    Neri, Peter

    2014-02-01

    In the early stages of image analysis, visual cortex represents scenes as spatially organized maps of locally defined features (e.g., edge orientation). As image reconstruction unfolds and features are assembled into larger constructs, cortex attempts to recover semantic content for object recognition. It is conceivable that higher level representations may feed back onto early processes and retune their properties to align with the semantic structure projected by the scene; however, there is no clear evidence to either support or discard the applicability of this notion to the human visual system. Obtaining such evidence is challenging because low and higher level processes must be probed simultaneously within the same experimental paradigm. We developed a methodology that targets both levels of analysis by embedding low-level probes within natural scenes. Human observers were required to discriminate probe orientation while semantic interpretation of the scene was selectively disrupted via stimulus inversion or reversed playback. We characterized the orientation tuning properties of the perceptual process supporting probe discrimination; tuning was substantially reshaped by semantic manipulation, demonstrating that low-level feature detectors operate under partial control from higher level modules. The manner in which such control was exerted may be interpreted as a top-down predictive strategy whereby global semantic content guides and refines local image reconstruction. We exploit the novel information gained from data to develop mechanistic accounts of unexplained phenomena such as the classic face inversion effect. PMID:24501376

  12. Two-band DMD-based infrared scene simulator

    NASA Astrophysics Data System (ADS)

    Dupuis, Julia Renta; Mansur, David J.; Vaillancourt, Robert; Evans, Thomas; Carlson, David; Schundler, Elizabeth

    2009-05-01

    OPTRA is developing a two-band midwave infrared (MWIR) scene simulator based on digital micromirror device (DMD) technology; this simulator is intended for training various IR threat detection systems that exploit the relative intensities of two separate MWIR spectral bands. Our approach employs two DMDs, one for each spectral band, and an efficient optical design which overlays the scenes reflected by each through a common telecentric projector lens. Other key components include two miniature thermal sources, bandpass filters, and a dichroic beam combiner. Through the use of pulse width modulation, we are able to control the relative intensities of objects simulated by the two channels thereby enabling realistic scene simulations of various targets and projectiles approaching the threat detection system. Performance projections support radiant intensity levels, resolution, bandwidth, and scene durations that meet the requirements for a host of IR threat detection test scenarios. The feasibility of our concept has been demonstrated through the design, build, and test of a breadboard two-band simulator. In this paper we present the design of a prototype two-band simulator which builds on our experience from the breadboard build. We describe the system level, optical, mechanical, and software/electrical designs in detail as well as system characterization and future test plans.

  13. Fuzzy emotional semantic analysis and automated annotation of scene images.

    PubMed

    Cao, Jianfang; Chen, Lichao

    2015-01-01

    With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance. PMID:25838818

  14. Independence of color and luminance edges in natural scenes.

    PubMed

    Hansen, Thorsten; Gegenfurtner, Karl R

    2009-01-01

    Form vision is traditionally regarded as processing primarily achromatic information. Previous investigations into the statistics of color and luminance in natural scenes have claimed that luminance and chromatic edges are not independent of each other and that any chromatic edge most likely occurs together with a luminance edge of similar strength. Here we computed the joint statistics of luminance and chromatic edges in over 700 calibrated color images from natural scenes. We found that isoluminant edges exist in natural scenes and were not rarer than pure luminance edges. Most edges combined luminance and chromatic information but to varying degrees such that luminance and chromatic edges were statistically independent of each other. Independence increased along successive stages of visual processing from cones via postreceptoral color-opponent channels to edges. The results show that chromatic edge contrast is an independent source of information that can be linearly combined with other cues for the proper segmentation of objects in natural and artificial vision systems. Color vision may have evolved in response to the natural scene statistics to gain access to this independent information. PMID:19152717
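The joint statistics described above can be approximated in a few lines. This hedged sketch correlates luminance and red-green edge strengths, where near-zero correlation indicates independence; the L+M and red-green proxies below are crude simplifications of the calibrated cone-opponent channels the study uses:

```python
import numpy as np

def edge_strength(channel):
    # Gradient magnitude as a simple edge-strength measure.
    gy, gx = np.gradient(channel.astype(float))
    return np.hypot(gx, gy)

def lum_chrom_edge_correlation(rgb):
    """Correlate luminance and red-green edge strength across pixels;
    values near 0 suggest the two edge cues are independent."""
    r, g = rgb[..., 0], rgb[..., 1]
    lum = r + g                       # crude L+M luminance proxy
    rg = r - g                        # crude red-green opponent proxy
    e_lum = edge_strength(lum).ravel()
    e_rg = edge_strength(rg).ravel()
    return np.corrcoef(e_lum, e_rg)[0, 1]

rng = np.random.default_rng(1)
img = rng.random((32, 32, 3))         # synthetic stand-in for a scene
c = lum_chrom_edge_correlation(img)
print(c)
```

Run over a calibrated natural-scene database, the same measurement is what supports the paper's independence claim.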

  15. Memory, emotion, and pupil diameter: Repetition of natural scenes.

    PubMed

    Bradley, Margaret M; Lang, Peter J

    2015-09-01

    Recent studies have suggested that pupil diameter, like the "old-new" ERP, may be a measure of memory. Because the amplitude of the old-new ERP is enhanced for items encoded in the context of repetitions that are distributed (spaced), compared to massed (contiguous), we investigated whether pupil diameter is similarly sensitive to repetition. Emotional and neutral pictures of natural scenes were viewed once or repeated with massed (contiguous) or distributed (spaced) repetition during incidental free viewing and then tested on an explicit recognition test. Although an old-new difference in pupil diameter was found during successful recognition, pupil diameter was not enhanced for distributed, compared to massed, repetitions during either recognition or initial free viewing. Moreover, whereas a significant old-new difference was found for erotic scenes that had been seen only once during encoding, this difference was absent when erotic scenes were repeated. Taken together, the data suggest that pupil diameter is not a straightforward index of prior occurrence for natural scenes. PMID:25943211

  16. Publishing in '63: Looking for Relevance in a Changing Scene

    ERIC Educational Resources Information Center

    Reynolds, Thomas

    2008-01-01

    In this article, the author examines various publications published in 1963 in an attempt to look for relevance in a changing publication scene. The author considers Gordon Parks's reportorial photographs and accompanying personal essay, "What Their Cry Means to Me," as an act of publishing with implications for the teaching of written…

  17. Behind the Scenes at Berkeley Lab - The Mechanical Fabrication Facility

    ScienceCinema

    Wells, Russell; Chavez, Pete; Davis, Curtis; Bentley, Brian

    2014-09-15

    Part of the Behind the Scenes series at Berkeley Lab, this video highlights the lab's mechanical fabrication facility and its exceptional ability to produce unique tools essential to the lab's scientific mission. Through a combination of skilled craftsmanship and precision equipment, machinists and engineers work with scientists to create exactly what's needed - whether it's measured in microns or meters.

  18. Behind the Scenes at Berkeley Lab - The Mechanical Fabrication Facility

    SciTech Connect

    Wells, Russell; Chavez, Pete; Davis, Curtis; Bentley, Brian

    2013-05-17

    Part of the Behind the Scenes series at Berkeley Lab, this video highlights the lab's mechanical fabrication facility and its exceptional ability to produce unique tools essential to the lab's scientific mission. Through a combination of skilled craftsmanship and precision equipment, machinists and engineers work with scientists to create exactly what's needed - whether it's measured in microns or meters.

  19. The Hidden Agenda: The Behind-the-Scenes Employees.

    ERIC Educational Resources Information Center

    Deal, Terrence E.

    1994-01-01

    College and university personnel managers are urged to pay more attention to employees who operate behind the scenes by: finding a champion among them; linking work with institutional mission; hiring the best; encouraging customer service; soliciting ideas; fostering trust; enlarging responsibility; not upstaging; providing the best equipment; and…

  20. LOFT-related semiscale test scene. Water has been dyed red. Hot ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LOFT-related semiscale test scene. Water has been dyed red. Hot steam blowdown exits semiscale at TAN-609 at A&M complex. Edge of building is along left edge of view. Date: 1971. INEEL negative no. 71-376 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  1. Scene Context Dependency of Pattern Constancy of Time Series Imagery

    NASA Technical Reports Server (NTRS)

    Woodell, Glenn A.; Jobson, Daniel J.; Rahman, Zia-ur

    2008-01-01

    A fundamental element of future generic pattern recognition technology is the ability to extract similar patterns for the same scene despite wide-ranging extraneous variables, including lighting, turbidity, sensor exposure variations, and signal noise. In the process of demonstrating pattern constancy of this kind for retinex/visual servo (RVS) image enhancement processing, we found that the pattern constancy performance depended somewhat on scene content. Most notably, the scene topography and, in particular, the scale and extent of the topography in an image, affects the pattern constancy the most. This paper will explore these effects in more depth and present experimental data from several time series tests. These results further quantify the impact of topography on pattern constancy. Despite this residual inconstancy, the results of overall pattern constancy testing support the idea that RVS image processing can be a universal front-end for generic visual pattern recognition. While the effects on pattern constancy were significant, the RVS processing still does achieve a high degree of pattern constancy over a wide spectrum of scene content diversity, and wide-ranging extraneous variations in lighting, turbidity, and sensor exposure.

  2. Contextual Cueing in Naturalistic Scenes: Global and Local Contexts

    ERIC Educational Resources Information Center

    Brockmole, James R.; Castelhano, Monica S.; Henderson, John M.

    2006-01-01

    In contextual cueing, the position of a target within a group of distractors is learned over repeated exposure to a display with reference to a few nearby items rather than to the global pattern created by the elements. The authors contrasted the role of global and local contexts for contextual cueing in naturalistic scenes. Experiment 1 showed…

  3. Multistage neural network model for dynamic scene analysis

    SciTech Connect

    Ajjimarangsee, P.

    1989-01-01

    This research is concerned with dynamic scene analysis. The goal of scene analysis is to recognize objects and have a meaningful interpretation of the scene from which images are obtained. The task of the dynamic scene analysis process generally consists of region identification, motion analysis and object recognition. The objective of this research is to develop clustering algorithms using a neural network approach and to investigate a multi-stage neural network model for region identification and motion analysis. The research is divided into three parts. First, a clustering algorithm using Kohonen's self-organizing feature map network is developed to be capable of generating continuous membership-valued outputs. A newly developed version of the updating algorithm of the network is introduced to achieve a high degree of parallelism. A neural network model for the fuzzy c-means algorithm is proposed. In the second part, the parallel algorithms of a neural network model for clustering using the self-organizing feature maps approach and a neural network that models the fuzzy c-means algorithm are modified for implementation on a distributed memory parallel architecture. In the third part, supervised and unsupervised neural network models for motion analysis are investigated. For the supervised neural network, a three layer perceptron network is trained by a series of images to recognize the movement of the objects. For the unsupervised neural network, a self-organizing feature mapping network learns to recognize the movement of the objects without an explicit training phase.
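The fuzzy c-means algorithm that the thesis models with a neural network can be written directly in numpy. This is the standard batch formulation, not the network version the thesis develops; it returns continuous membership values like the ones the clustering network is designed to produce:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Standard batch fuzzy c-means: returns cluster centers and
    continuous membership values in [0, 1], one row per sample."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        # Centers are membership-weighted means of the data.
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Memberships fall off with distance to each center.
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated synthetic clusters.
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(3, 0.1, (20, 2))])
centers, U = fuzzy_c_means(X)
print(U.shape)
```

The membership matrix `U` plays the role of the "continuous membership valued outputs" generated by the modified self-organizing feature map.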

  4. Design of a wide field of view infrared scene projector

    NASA Astrophysics Data System (ADS)

    Jiang, Zhenyu; Li, Lin; Huang, YiFan

    2008-03-01

    In order to make the projected scene cover the seeker's field of view promptly, conventional projection optical systems used for hardware-in-the-loop simulation tests usually depend on a five-axis flight motion simulator. These flight-motion-simulator tables are controlled via servomechanisms, which require many axis position transducers and electromechanical devices. The structure and control procedure of such a system are complicated, and it is hard to avoid mechanical motion and control errors entirely. Target image jitter is induced by vibration of the mechanical platform, and the frequency response is limited by the structural performance. To overcome these defects, a new infrared image simulating projection system for hardware-in-the-loop simulation tests is presented in this paper. The system consists of multiple lenses joined side by side on a spherical surface. Each lens uses one IR image generator, such as a resistor array. Every IR image generator displays a specific IR image controlled by the scene simulation computer, which distributes the needed image to each generator, so the scene detected by the missile seeker is integrated and uninterrupted. The entrance pupil of the seeker lies at the centre of the sphere. Almost a hemispherical scene can be achieved by the projection system, and the total field of view can be extended by increasing the number of lenses. However, the luminance uniformity in the field of view is influenced by the joints between the lenses. The method of controlling the luminance uniformity of the field of view is studied in this paper, and the needed luminous exitance of each resistor array is analyzed. Experiments show that the new method is applicable for hardware-in-the-loop simulation tests.

  5. Sensory Substitution: The Spatial Updating of Auditory Scenes “Mimics” the Spatial Updating of Visual Scenes

    PubMed Central

    Pasqualotto, Achille; Esenkaya, Tayfun

    2016-01-01

    Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or “soundscapes”. Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgment of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices (SSD). PMID:27148000

  6. Supervised and unsupervised MRF based 3D scene classification in multiple view airborne oblique images

    NASA Astrophysics Data System (ADS)

    Gerke, M.; Xiao, J.

    2013-10-01

    In this paper we develop and compare two methods for scene classification in 3D object space; that is, we classify not single image pixels but voxels that carry geometric, textural and color information collected from the airborne oblique images and derived products such as point clouds from dense image matching. One method is supervised, i.e., it relies on training data provided by an operator. We use Random Trees for the actual training and prediction tasks. The second method is unsupervised and thus does not require any user interaction. We formulate this classification task as a Markov Random Field problem and employ graph cuts for the actual optimization procedure. Two test areas are used to test and evaluate both techniques. In the Haiti dataset we are confronted with largely destroyed built-up areas, since the images were taken after the earthquake in January 2010, while in the second case we use images taken over Enschede, a typical Central European city. For the Haiti case it is difficult to provide clear class definitions, and this is also reflected in the overall classification accuracy: it is 73% for the supervised and only 59% for the unsupervised method. If classes are defined less ambiguously, as in the Enschede area, results are much better (85% vs. 78%). In conclusion the results are acceptable, also taking into account that the point cloud used for geometric features is not of good quality and no infrared channel is available to support vegetation classification.
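The Markov-Random-Field formulation can be illustrated with a small sketch. The paper optimizes with graph cuts; iterated conditional modes (ICM), used here for brevity, minimizes the same kind of unary-plus-Potts energy and is not the authors' optimizer:

```python
import numpy as np

def icm_labels(unary, beta=1.0, iters=10):
    """Minimize an MRF energy (per-voxel unary cost + Potts smoothness on
    a 4-neighbour grid) with iterated conditional modes."""
    h, w, k = unary.shape
    labels = unary.argmin(axis=2)               # initialize from unaries
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                cost = unary[i, j].copy()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        # Potts penalty for disagreeing with a neighbour.
                        cost += beta * (np.arange(k) != labels[ni, nj])
                labels[i, j] = cost.argmin()
    return labels

rng = np.random.default_rng(0)
truth = np.zeros((8, 8), int); truth[:, 4:] = 1       # two-class ground truth
unary = np.stack([(truth == c) * -2.0 for c in (0, 1)], axis=2)
unary += rng.normal(0, 1.0, unary.shape)              # noisy class evidence
labels = icm_labels(unary, beta=1.5)
print((labels == truth).mean())   # smoothness recovers most of the split
```

In the paper the unary costs would come from per-voxel features (geometry, texture, color) rather than synthetic noise, and graph cuts would replace the ICM sweep.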

  7. Adolescent Characters and Alcohol Use Scenes in Brazilian Movies, 2000-2008.

    PubMed

    Castaldelli-Maia, João Mauricio; de Andrade, Arthur Guerra; Lotufo-Neto, Francisco; Bhugra, Dinesh

    2016-04-01

    Quantitative structured assessment of 193 scenes depicting substance use from a convenience sample of 50 Brazilian movies was performed. Logistic regression and analysis of variance or multivariate analysis of variance models were employed to test two different outcomes regarding alcohol appearance: the mean length of alcohol scenes in seconds and the prevalence of alcohol use scenes. The presence of adolescent characters was associated with a higher prevalence of alcohol use scenes compared to non-alcohol-use scenes. The presence of adolescents was also associated with a higher-than-average length of alcohol use scenes compared to the non-alcohol-use scenes. Alcohol use was negatively associated with cannabis, cocaine, and other drug use. However, when the use of cannabis, cocaine, or other drugs was present in the alcohol use scenes, a higher average length was found. This may mean that the most vulnerable group sees drinking as a more attractive option, leading to higher alcohol use. PMID:27166357

  8. Contextual Effects of Scene on the Visual Perception of Object Orientation in Depth

    PubMed Central

    Niimi, Ryosuke; Watanabe, Katsumi

    2013-01-01

    We investigated the effect of background scene on the human visual perception of depth orientation (i.e., azimuth angle) of three-dimensional common objects. Participants evaluated the depth orientation of objects. The objects were surrounded by scenes with an apparent axis of the global reference frame, such as a sidewalk scene. When a scene axis was slightly misaligned with the gaze line, object orientation perception was biased, as if the gaze line had been assimilated into the scene axis (Experiment 1). When the scene axis was slightly misaligned with the object, evaluated object orientation was biased, as if it had been assimilated into the scene axis (Experiment 2). This assimilation may be due to confusion between the orientation of the scene and object axes (Experiment 3). Thus, the global reference frame may influence object orientation perception when its orientation is similar to that of the gaze-line or object. PMID:24391947

  9. Bag of Lines (BoL) for Improved Aerial Scene Representation

    DOE PAGESBeta

    Sridharan, Harini; Cheriyadat, Anil M.

    2014-09-22

    Feature representation is a key step in automated visual content interpretation. In this letter, we present a robust feature representation technique, referred to as bag of lines (BoL), for high-resolution aerial scenes. The proposed technique involves extracting and compactly representing low-level line primitives from the scene. The compact scene representation is generated by counting the different types of lines representing various linear structures in the scene. Through extensive experiments, we show that the proposed scene representation is invariant to scale changes and scene conditions and can discriminate urban scene categories accurately. We compare the BoL representation with the popular scale invariant feature transform (SIFT) and Gabor wavelets for their classification and clustering performance on an aerial scene database consisting of images acquired by sensors with different spatial resolutions. The proposed BoL representation outperforms the SIFT- and Gabor-based representations.

  10. Bag of Lines (BoL) for Improved Aerial Scene Representation

    SciTech Connect

    Sridharan, Harini; Cheriyadat, Anil M.

    2014-09-22

    Feature representation is a key step in automated visual content interpretation. In this letter, we present a robust feature representation technique, referred to as bag of lines (BoL), for high-resolution aerial scenes. The proposed technique involves extracting and compactly representing low-level line primitives from the scene. The compact scene representation is generated by counting the different types of lines representing various linear structures in the scene. Through extensive experiments, we show that the proposed scene representation is invariant to scale changes and scene conditions and can discriminate urban scene categories accurately. We compare the BoL representation with the popular scale invariant feature transform (SIFT) and Gabor wavelets for their classification and clustering performance on an aerial scene database consisting of images acquired by sensors with different spatial resolutions. The proposed BoL representation outperforms the SIFT- and Gabor-based representations.
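One plausible reading of the BoL counting step is a 2D histogram over line length and orientation bins. The binning scheme below is an assumption for illustration, not the authors' exact "line type" taxonomy:

```python
import numpy as np

def bag_of_lines(segments, n_len=4, n_ori=6, max_len=100.0):
    """Count line segments into a (length-bin x orientation-bin)
    histogram, normalized so the representation is scale-invariant in
    the number of detected lines."""
    hist = np.zeros((n_len, n_ori))
    for (x1, y1, x2, y2) in segments:
        length = np.hypot(x2 - x1, y2 - y1)
        ori = np.arctan2(y2 - y1, x2 - x1) % np.pi   # undirected lines
        li = min(int(length / max_len * n_len), n_len - 1)
        oi = min(int(ori / np.pi * n_ori), n_ori - 1)
        hist[li, oi] += 1
    return hist / max(hist.sum(), 1)

# One short horizontal segment, two longer vertical ones.
segs = [(0, 0, 10, 0), (0, 0, 0, 50), (5, 5, 5, 95)]
h = bag_of_lines(segs)
print(h.sum())
```

In the letter's pipeline, line primitives extracted from the aerial image would feed such a counting step, and the normalized histogram would serve as the scene descriptor for classification or clustering.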

  11. Scene-Selectivity and Retinotopy in Medial Parietal Cortex.

    PubMed

    Silson, Edward H; Steel, Adam D; Baker, Chris I

    2016-01-01

    Functional imaging studies in humans reliably identify a trio of scene-selective regions, one on each of the lateral [occipital place area (OPA)], ventral [parahippocampal place area (PPA)], and medial [retrosplenial complex (RSC)] cortical surfaces. Recently, we demonstrated differential retinotopic biases for the contralateral lower and upper visual fields within OPA and PPA, respectively. Here, using functional magnetic resonance imaging, we combine detailed mapping of both population receptive fields (pRF) and category-selectivity, with independently acquired resting-state functional connectivity analyses, to examine scene and retinotopic processing within medial parietal cortex. We identified a medial scene-selective region, which was contained largely within the posterior and ventral bank of the parieto-occipital sulcus (POS). While this region is typically referred to as RSC, the spatial extent of our scene-selective region typically did not extend into retrosplenial cortex, and thus we adopt the term medial place area (MPA) to refer to this visually defined scene-selective region. Intriguingly, MPA co-localized with a region identified solely on the basis of retinotopic sensitivity using pRF analyses. We found that MPA demonstrates a significant contralateral visual field bias, coupled with large pRF sizes. Unlike OPA and PPA, MPA did not show a consistent bias to a single visual quadrant. MPA also co-localized with a region identified by strong differential functional connectivity with PPA and the human face-selective fusiform face area (FFA), commensurate with its functional selectivity. Functional connectivity with OPA was much weaker than with PPA, and similar to that with face-selective occipital face area (OFA), suggesting a closer link with ventral than lateral cortex. Consistent with prior research, we also observed differential functional connectivity in medial parietal cortex for anterior over posterior PPA, as well as a region on the lateral

  12. Scene-Selectivity and Retinotopy in Medial Parietal Cortex

    PubMed Central

    Silson, Edward H.; Steel, Adam D.; Baker, Chris I.

    2016-01-01

Functional imaging studies in humans reliably identify a trio of scene-selective regions, one on each of the lateral [occipital place area (OPA)], ventral [parahippocampal place area (PPA)], and medial [retrosplenial complex (RSC)] cortical surfaces. Recently, we demonstrated differential retinotopic biases for the contralateral lower and upper visual fields within OPA and PPA, respectively. Here, using functional magnetic resonance imaging, we combine detailed mapping of both population receptive fields (pRF) and category-selectivity with independently acquired resting-state functional connectivity analyses to examine scene and retinotopic processing within medial parietal cortex. We identified a medial scene-selective region, which was contained largely within the posterior and ventral bank of the parieto-occipital sulcus (POS). While this region is typically referred to as RSC, the spatial extent of our scene-selective region typically did not extend into retrosplenial cortex, and thus we adopt the term medial place area (MPA) to refer to this visually defined scene-selective region. Intriguingly, MPA co-localized with a region identified solely on the basis of retinotopic sensitivity using pRF analyses. We found that MPA demonstrates a significant contralateral visual field bias, coupled with large pRF sizes. Unlike OPA and PPA, MPA did not show a consistent bias to a single visual quadrant. MPA also co-localized with a region identified by strong differential functional connectivity with PPA and the human face-selective fusiform face area (FFA), commensurate with its functional selectivity. Functional connectivity with OPA was much weaker than with PPA, and similar to that with the face-selective occipital face area (OFA), suggesting a closer link with ventral than lateral cortex. Consistent with prior research, we also observed differential functional connectivity in medial parietal cortex for anterior over posterior PPA, as well as a region on the lateral

  13. Computerised cardiological case notes.

    PubMed Central

    Williams, K N; Brooksby, I A; Morrice, J; Houseago, S; Webb-Peploe, M M

    1982-01-01

Optical Mark Reader forms have been used by the Cardiac Department at St Thomas's Hospital for six years to store clinical and haemodynamic data by computer. Forms are completed by clinical staff in outpatients and also for those patients undergoing cardiac catheterisation. Three documents are used to record the symptoms and signs at the clinical consultation, the results of relevant investigations, and the important findings at cardiac catheterisation. These documents are fed into a computer and data from them, together with a limited quantity of typed information, are used to produce full clinical reports for our colleagues and the case notes. These reports have saved much secretarial and medical time. A variety of analyses is available for research and management purposes. PMID:7093086

  14. Realizing actual feedback control of complex network

    NASA Astrophysics Data System (ADS)

    Tu, Chengyi; Cheng, Yuhua

    2014-06-01

In this paper, we present the concept of feedbackability and how to identify the Minimum Feedbackability Set (MFS) of an arbitrary complex directed network. Furthermore, we design an estimator and a feedback controller accessing one MFS to realize actual feedback control, i.e. to drive the system to a desired state based on the internal state estimated from the output of the estimator. Last but not least, we perform numerical simulations of a small linear time-invariant dynamics network and a real simple food network to verify the theoretical results. The framework presented here could make an arbitrary complex directed network realize actual feedback control and deepen our understanding of complex systems.
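The abstract does not spell out the authors' construction, but the general idea of estimator-plus-feedback control of a linear time-invariant network can be sketched with a standard Luenberger observer and state feedback. The network, gains, and initial conditions below are illustrative assumptions, not the paper's example:

```python
import numpy as np

# Hypothetical 2-node linear time-invariant network (illustrative only):
# x[k+1] = A x[k] + B u[k], measured output y[k] = C x[k].
A = np.array([[1.1, 0.2],
              [0.0, 0.9]])          # open-loop unstable (eigenvalue 1.1)
B = np.array([[1.0], [0.0]])        # control enters at node 1 only
C = np.array([[1.0, 0.0]])          # only node 1 is measured

K = np.array([[0.9, 0.2]])          # feedback gain: eig(A - B K) = {0.2, 0.9}
L = np.array([[0.9], [0.0]])        # observer gain: eig(A - L C) = {0.2, 0.9}

x = np.array([1.0, 1.0])            # true (hidden) internal state
xh = np.zeros(2)                    # estimator's state

for _ in range(200):
    u = -K @ xh                     # feedback acts on the *estimated* state
    y = C @ x                       # only the output is observable
    x = A @ x + B @ u
    xh = A @ xh + B @ u + L @ (y - C @ xh)   # Luenberger output correction

final_state_norm = np.linalg.norm(x)       # system driven toward the origin
final_error_norm = np.linalg.norm(x - xh)  # estimate converges to true state
```

Although the open-loop system is unstable and only one node is measured, the observer reconstructs the hidden state from the scalar output and the feedback loop drives the network to the desired state (here the origin).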

  15. Notes from Nepal: Is There a Better Way to Provide Search and Rescue?

    PubMed

    Peleg, Kobi

    2015-12-01

    This article discusses a possibility for overcoming the limited efficiency of international search and rescue teams in saving lives after earthquakes, which was emphasized by the recent disaster in Nepal and in other earthquakes all over the world. Because most lives are actually saved by the locals themselves long before the international teams arrive on scene, many more lives could be saved by teaching the basics of light rescue to local students and citizens in threatened countries. PMID:26456243

  16. The Nesting of Search Contexts within Natural Scenes: Evidence from Contextual Cuing

    ERIC Educational Resources Information Center

    Brooks, Daniel I.; Rasmussen, Ian P.; Hollingworth, Andrew

    2010-01-01

    In a contextual cuing paradigm, we examined how memory for the spatial structure of a natural scene guides visual search. Participants searched through arrays of objects that were embedded within depictions of real-world scenes. If a repeated search array was associated with a single scene during study, then array repetition produced significant…

  17. Guidance of Attention to Objects and Locations by Long-Term Memory of Natural Scenes

    ERIC Educational Resources Information Center

    Becker, Mark W.; Rasmussen, Ian P.

    2008-01-01

    Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants…

  18. Mirth and Murder: Crime Scene Investigation as a Work Context for Examining Humor Applications

    ERIC Educational Resources Information Center

    Roth, Gene L.; Vivona, Brian

    2010-01-01

    Within work settings, humor is used by workers for a wide variety of purposes. This study examines humor applications of a specific type of worker in a unique work context: crime scene investigation. Crime scene investigators examine death and its details. Members of crime scene units observe death much more frequently than other police officers…

  19. Recognition of Natural Scenes from Global Properties: Seeing the Forest without Representing the Trees

    ERIC Educational Resources Information Center

    Greene, Michelle R.; Oliva, Aude

    2009-01-01

    Human observers are able to rapidly and accurately categorize natural scenes, but the representation mediating this feat is still unknown. Here we propose a framework of rapid scene categorization that does not segment a scene into objects and instead uses a vocabulary of global, ecological properties that describe spatial and functional aspects…

  20. A crops and soils data base for scene radiation research

    NASA Technical Reports Server (NTRS)

    Biehl, L. L.; Bauer, M. E.; Robinson, B. F.; Daughtry, C. S. T.; Silva, L. F.; Pitts, D. E.

    1982-01-01

Management and planning activities with respect to food production require accurate and timely information on crops and soils on a global basis. The needed information can be obtained with the aid of satellite-borne sensors, if the relations between the spectral properties and the important biological-physical parameters of crops and soils are known. In order to obtain this knowledge, the development of a crops and soils scene radiation research data base was initiated. Work related to the development of this data base is discussed, covering the experiments conducted, the measurements performed, the calibration of spectral data, questions of data base access, and the expansion of the crops and soils scene radiation data base for 1982.

  1. Radiometric calibration procedures for a wideband infrared scene projector (WISP)

    NASA Astrophysics Data System (ADS)

    Flynn, David S.; Marlow, Steven A.; Bergin, Thomas P.; Kircher, James R.

    1999-07-01

    The Wideband Infrared Scene Projector (WISP) has been undergoing development for the Kinetic-Kill Vehicle Hardware-in-the-Loop Simulator facility at Eglin AFB, Florida. In order to perform realistic tests of an infrared seeker, the radiometric output of the WISP system must produce the same response in the seeker as the real scene. In order to ensure this radiometric realism, calibration procedures must be established and followed. This paper describes calibration procedures that have been used in recent tests. The procedures require knowledge of the camera spectral response in the seeker under test. The camera is set up to operate over the desired range of observable radiances. The camera is then nonuniformity corrected (NUCed) and calibrated with an extended blackbody. The camera drift rates are characterized, and as necessary, the camera is reNUCed and recalibrated. The camera is then set up to observe the WISP system, and calibration measurements are made of the camera/WISP system.

  2. Scene Illumination as an Indicator of Image Manipulation

    NASA Astrophysics Data System (ADS)

    Riess, Christian; Angelopoulou, Elli

The goal of blind image forensics is to distinguish original and manipulated images. We propose illumination color as a new indicator for the assessment of image authenticity. Many images exhibit a combination of multiple illuminants (flash photography, mixture of indoor and outdoor lighting, etc.). In the proposed method, the user selects illuminated areas for further investigation. The illuminant colors are locally estimated, effectively decomposing the scene into a map of differently illuminated regions. Inconsistencies in such a map suggest possible image tampering. Our method is physics-based, which implies that the outcome of the estimation can be further constrained if additional knowledge of the scene is available. Experiments show that these illumination maps provide a useful and very general forensic tool for the analysis of color images.
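As a rough sketch of the idea (using a plain gray-world estimator rather than the authors' physics-based one), a per-block illuminant estimate already yields a map whose inconsistencies can flag a composite. The synthetic image and block size are assumptions for illustration:

```python
import numpy as np

def local_illuminant_map(img, block=16):
    """Estimate an illuminant color per block (gray-world: normalized mean RGB)."""
    h, w, _ = img.shape
    emap = np.zeros((h // block, w // block, 3))
    for i in range(h // block):
        for j in range(w // block):
            patch = img[i * block:(i + 1) * block, j * block:(j + 1) * block]
            mean_rgb = patch.reshape(-1, 3).mean(axis=0)
            emap[i, j] = mean_rgb / np.linalg.norm(mean_rgb)
    return emap

# Synthetic "spliced" image: left half under a warm light, right half under a cool one.
rng = np.random.default_rng(0)
reflectance = rng.uniform(0.2, 0.8, size=(64, 64, 3))
img = reflectance.copy()
img[:, :32] *= np.array([1.0, 0.8, 0.6])    # warm illuminant on the left
img[:, 32:] *= np.array([0.6, 0.8, 1.0])    # cool illuminant on the right

emap = local_illuminant_map(img)
left = emap[:, :2].mean(axis=(0, 1))        # mean illuminant estimate, left blocks
right = emap[:, 2:].mean(axis=(0, 1))       # mean illuminant estimate, right blocks
```

The two halves yield clearly different estimates (red-dominant versus blue-dominant), the kind of inconsistency the method looks for; the paper itself relies on locally estimated, physics-based illuminant colors over user-selected regions.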

  3. Hough-based recognition of complex 3-D road scenes

    NASA Astrophysics Data System (ADS)

    Foresti, Gian L.; Regazzoni, Carlo S.

    1992-02-01

In this paper, we address the problem of object recognition in a complex 3-D scene by detecting the 2-D object projection on the image plane for autonomous vehicle driving; in particular, the problems of road detection and obstacle avoidance in natural road scenes are investigated. A new implementation of the Hough Transform (HT), called the Labeled Hough Transform (LHT), to extract and group symbolic features is presented here; the novelty of this method, with respect to the traditional approach, consists in the capability of splitting a maximum in the parameter space into noncontiguous segments while performing voting. Results are presented on a road image containing obstacles which show the efficiency, good quality, and time performance of the algorithm.

  4. Unique scene description from radar and infrared images

    NASA Astrophysics Data System (ADS)

    Blanquart, Jacques G.; Orgiazzi, Philippe; Grenier, Gilles; Cothenet, A.

    1990-10-01

Two different visual descriptions provided by two image sensors (radar and infrared camera) contain information about the same scene. We want to associate them, using different methods of fusion, in order to improve our knowledge of the scene. Two approaches are described in this paper: navigation and recognition. In the first approach, the radar is the predominant sensor and we use cartographic information of the area to guide the fusion process. In the second approach, we find regions of interest in the radar image that are used to extract features in the infrared image. To test our algorithms, we use a PtSi infrared camera (3-5 μm) with a 512×512 matrix and a millimeter-wave radar, both looking at the same area from an airplane, to detect objects such as buildings, roads, and fields. This work is the basis of further developments within an expert system including more complex notions of image processing objects.

  5. Similarity-based global optimization of buildings in urban scene

    NASA Astrophysics Data System (ADS)

    Zhu, Quansheng; Zhang, Jing; Jiang, Wanshou

    2013-10-01

In this paper, an approach for the similarity-based global optimization of buildings in urban scenes is presented. In the past, most research concentrated on single-building reconstruction, making it difficult to obtain reliable models from noisy or incomplete point clouds. To obtain a better result, a new trend is to exploit the similarity among buildings. Therefore, a new similarity detection and global optimization strategy is adopted to correct local-fitting geometric errors. Firstly, a hierarchical structure consisting of geometric, topological and semantic features is constructed to represent complex roof models. Secondly, similar roof models are detected by combining primitive structure and connection similarities. Finally, the global optimization strategy is applied to preserve the consistency and precision of similar roof structures. Moreover, non-local consolidation is applied to detect small roof parts. The experiments reveal that the proposed method obtains convincing roof models and improves the reconstruction quality of 3D buildings in urban scenes.

  6. Natural auditory scene statistics shapes human spatial hearing

    PubMed Central

    Parise, Cesare V.; Knorre, Katharina; Ernst, Marc O.

    2014-01-01

    Human perception, cognition, and action are laced with seemingly arbitrary mappings. In particular, sound has a strong spatial connotation: Sounds are high and low, melodies rise and fall, and pitch systematically biases perceived sound elevation. The origins of such mappings are unknown. Are they the result of physiological constraints, do they reflect natural environmental statistics, or are they truly arbitrary? We recorded natural sounds from the environment, analyzed the elevation-dependent filtering of the outer ear, and measured frequency-dependent biases in human sound localization. We find that auditory scene statistics reveals a clear mapping between frequency and elevation. Perhaps more interestingly, this natural statistical mapping is tightly mirrored in both ear-filtering properties and in perceived sound location. This suggests that both sound localization behavior and ear anatomy are fine-tuned to the statistics of natural auditory scenes, likely providing the basis for the spatial connotation of human hearing. PMID:24711409

  7. Racial fantasies and the primal scene of miscegenation.

    PubMed

    Calvo, Luz

    2008-02-01

The primal scene, theorized by Freud in his case history of the Wolf Man, is a fantasy scenario thoroughly embedded in social relations. While pursuing his analysis of oedipal structures in the Wolf Man's case history, Freud overlooked social relations, downplaying the importance of racial and class difference in the Wolf Man's sexual etiology. In this essay, I trace the circulation of two fantasy structures: 'the primal scene of miscegenation' and 'A black man is being beaten,' both of which structure desire in both Freud's era and our own. I interpret Fanon's work on racial subjectivity alongside Freud's theory of fantasy to elucidate the interconnected nature of racial and sexual difference in both Freud's and Fanon's theories. The racial fantasies proposed in this essay have application to clinical settings, where they may structure transference and countertransference. PMID:18290791

  8. Nested-hierarchical scene models and image segmentation

    NASA Technical Reports Server (NTRS)

    Woodcock, C.; Harward, V. J.

    1992-01-01

    An improved model of scenes for image analysis purposes, a nested-hierarchical approach which explicitly acknowledges multiple scales of objects or categories of objects, is presented. A multiple-pass, region-based segmentation algorithm improves the segmentation of images from scenes better modeled as a nested hierarchy. A multiple-pass approach allows slow and careful growth of regions while interregion distances are below a global threshold. Past the global threshold, a minimum region size parameter forces development of regions in areas of high local variance. Maximum and viable region size parameters limit the development of undesirably large regions. Application of the segmentation algorithm for forest stand delineation in TM imagery yields regions corresponding to identifiable features in the landscape. The use of a local variance, adaptive-window texture channel in conjunction with spectral bands improves the ability to define regions corresponding to sparsely stocked forest stands which have high internal variance.

  9. Scene-based Wave-front Sensing for Remote Imaging

    SciTech Connect

    Poyneer, L A; LaFortune, K; Chan, C

    2003-07-30

Scene-based wave-front sensing (SBWFS) is a technique that allows an arbitrary scene to be used for wave-front sensing with adaptive optics (AO) instead of the normal point source. This makes AO feasible in a wide range of interesting scenarios. This paper first presents the basic concepts and properties of SBWFS. It then discusses the application of this technique with AO to remote imaging, for the specific case of correction of a lightweight optic. End-to-end simulation results establish that in this case SBWFS can perform as well as point-source AO. Design considerations such as noise propagation, number of subapertures, and tracking changing image content are analyzed.

  10. Codesign and high-performance computing: scenes and crisis

    NASA Astrophysics Data System (ADS)

    Hartenstein, Reiner W.; Becker, Juergen; Herz, Michael; Nageldinger, Ulrich

    1996-10-01

During the development of scientific disciplines, mainstream periods alternate with revolution periods, in which 'out of the way' disciplines can become the mainstream. Just at this moment, increasing turbulences announce a new revolution. The variety of 'high performance computing' scenes will be mixed up. Can an increasing application of structurally programmable hardware platforms (computing by the yard) break the monopoly of the von Neumann mainstream paradigm (computing in time) also in multipurpose hardware? From a co-design point of view, the paper tries to provide an overview of the turbulences and tendencies, and introduces a fundamentally new machine paradigm, which uses a field-programmable data path array (FPDPA) providing instruction-level parallelism. The paper drafts a structured design space for all kinds of parallel algorithm implementations and platforms: procedural programming versus structural programming, concurrent versus parallel, hardwired versus reconfigurable. A structured view obtained by rearranging the variety of computing science scenes seems to be feasible.

  11. Analysis of Agricultural Scenes Based on SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Nico, G.; Mascolo, L.; Pellegrinelli, A.; Giretti, D.; Soccodato, F. M.; Catalao, J.

    2015-05-01

    The aim of this work is to study the temporal behavior of interferometric coherence of natural scenes and use it to discriminate different classes of targets. The scattering properties of targets within a SAR resolution cell depend on their spatial distribution and dielectric constant. We focus on agriculture scenes. In case of bare soils, the radar cross section depends on surface roughness and soil moisture. Both quantities are strongly related to agriculture practices. The interferometric coherence can be modelled as the factorization of correlation terms due to spatial and temporal baselines, terrain roughness, soil moisture and residual noise. We use multivariate analysis methodologies to discriminate scattering classes exhibiting different temporal behaviors of the interferometric coherence. For each class, the temporal evolution of the interferometric phase and radar cross-section are studied.
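The coherence the abstract builds on is the standard sample estimate over two co-registered complex images; a minimal sketch with synthetic data (not the authors' SAR stacks or their factorization model) illustrates how stable and changing scattering classes separate:

```python
import numpy as np

def coherence(s1, s2):
    """Magnitude of the sample interferometric coherence of two complex patches."""
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return num / den

# Synthetic scattering classes: a stable target keeps high temporal coherence,
# while a rapidly changing one (e.g. a growing crop) decorrelates.
rng = np.random.default_rng(1)
n = 1000
base = rng.normal(size=n) + 1j * rng.normal(size=n)       # first acquisition
noise = rng.normal(size=n) + 1j * rng.normal(size=n)
stable = base + 0.1 * noise      # little change between acquisitions
changed = 0.3 * base + noise     # strong temporal decorrelation

g_stable = coherence(base, stable)
g_changed = coherence(base, changed)
```

Tracking such coherence values over a time series of interferograms, and factoring out the spatial-baseline and noise terms, yields the per-class temporal signatures the abstract describes.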

  12. Natural auditory scene statistics shapes human spatial hearing.

    PubMed

    Parise, Cesare V; Knorre, Katharina; Ernst, Marc O

    2014-04-22

    Human perception, cognition, and action are laced with seemingly arbitrary mappings. In particular, sound has a strong spatial connotation: Sounds are high and low, melodies rise and fall, and pitch systematically biases perceived sound elevation. The origins of such mappings are unknown. Are they the result of physiological constraints, do they reflect natural environmental statistics, or are they truly arbitrary? We recorded natural sounds from the environment, analyzed the elevation-dependent filtering of the outer ear, and measured frequency-dependent biases in human sound localization. We find that auditory scene statistics reveals a clear mapping between frequency and elevation. Perhaps more interestingly, this natural statistical mapping is tightly mirrored in both ear-filtering properties and in perceived sound location. This suggests that both sound localization behavior and ear anatomy are fine-tuned to the statistics of natural auditory scenes, likely providing the basis for the spatial connotation of human hearing. PMID:24711409

  13. Optical slicing of large scenes by synthetic aperture integral imaging

    NASA Astrophysics Data System (ADS)

    Navarro, Héctor; Saavedra, Genaro; Molina, Ainhoa; Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Javidi, Bahram

    2010-04-01

Integral imaging (InI) technology was created with the aim of providing the binocular observers of monitors, or matrix display devices, with auto-stereoscopic images of 3D scenes. However, over the last few years the inventiveness of researchers has uncovered many other interesting applications of integral imaging. Examples are the application of InI to object recognition, the mapping of 3D polarization distributions, and the elimination of occluding signals. One of the most interesting applications of integral imaging is the production of views focused at different depths of the 3D scene. This application is the natural result of the ability of InI to create focal stacks from a single input image. In this contribution we present a new algorithm for this optical slicing application and show that 3D reconstruction with improved lateral resolution is possible.

  14. An intercomparison of artificial intelligence approaches for polar scene identification

    NASA Technical Reports Server (NTRS)

    Tovinkere, V. R.; Penaloza, M.; Logar, A.; Lee, J.; Weger, R. C.; Berendes, T. A.; Welch, R. M.

    1993-01-01

The following six different artificial-intelligence (AI) approaches to polar scene identification are examined: (1) a feed-forward back-propagation neural network, (2) a probabilistic neural network, (3) a hybrid neural network, (4) a 'don't care' feed-forward perceptron model, (5) a 'don't care' feed-forward back-propagation neural network, and (6) a fuzzy-logic-based expert system. The ten classes into which six AVHRR local-coverage arctic scenes were classified were: water, solid sea ice, broken sea ice, snow-covered mountains, land, stratus over ice, stratus over water, cirrus over water, cumulus over water, and multilayer cloudiness. It was found that the 'don't care' back-propagation neural network produced the highest accuracies. This approach also has low CPU requirements.

  15. Knowledge-based interpretation of outdoor natural color scenes

    SciTech Connect

    Ohta, Y.

    1985-01-01

    One of the major targets in vision research is to develop a total vision system starting from images to a symbolic description, utilizing various knowledge sources. This book demonstrates a knowledge-based image interpretation system that analyzes natural color scenes. Topics covered include color information for region segmentation, preliminary segmentation of color images, and a bottom-up and top-down region analyzer.

  16. Better Batteries for Transportation: Behind the Scenes @ Berkeley Lab

    SciTech Connect

    Battaglia, Vince

    2011-01-01

    Vince Battaglia leads a behind-the-scenes tour of Berkeley Lab's BATT, the Batteries for Advanced Transportation Technologies Program he leads, where researchers aim to improve batteries upon which the range, efficiency, and power of tomorrow's electric cars will depend. This is the first in a forthcoming series of videos taking viewers into the laboratories and research facilities that members of the public rarely get to see.

  17. Acoustic simulation in realistic 3D virtual scenes

    NASA Astrophysics Data System (ADS)

    Gozard, Patrick; Le Goff, Alain; Naz, Pierre; Cathala, Thierry; Latger, Jean

    2003-09-01

The simulation workshop CHORALE, developed in collaboration with the OKTAL SE company for the French MoD, is used by government services and industrial companies for weapon system validation and qualification trials in the infrared domain. The main operational reference for CHORALE is the assessment of the infrared guidance system of the French version of the Storm Shadow missile, called Scalp. The use of the CHORALE workshop is now extended to the acoustic domain. The main objective is the simulation of the detection of moving vehicles in realistic 3D virtual scenes. This article briefly describes the acoustic model in CHORALE. The 3D scene is described by a set of polygons. Each polygon is characterized by its acoustic resistivity or its complex impedance. Sound sources are associated with moving vehicles and are characterized by their spectra and directivities. A microphone sensor is defined by its position, its frequency band and its sensitivity. The purpose of the acoustic simulation is to calculate the incoming acoustic pressure on microphone sensors. CHORALE is based on a generic ray tracing kernel. This kernel possesses original capabilities: computation time is nearly independent of the scene complexity, especially the number of polygons; databases are enhanced with precise physical data; and special antialiasing mechanisms have been developed that make it possible to manage very fine details. The ray tracer takes into account the wave's geometrical divergence and the atmospheric transmission. Sound wave refraction is simulated, and rays cast in the 3D scene are curved according to the air temperature gradient. Finally, sound diffraction by edges (hills, walls, etc.) is also taken into account.

  18. Research on real-time infrared simulation of ground scene

    NASA Astrophysics Data System (ADS)

    Dai, Ying; Chen, Bin; Ming, Delie

    2013-10-01

    The paper proposes a modularized infrared scene imaging simulation solution based on GPU and designs a corresponding implementation framework to contain all the key modules of infrared imaging simulation. After that, the paper presents the module of infrared radiation computation, which realizes the real-time parallel computation of infrared radiation by the technology of GPU shader and translates the simulation work from 3D graphics to 2D digital image by the technology of rendering to texture.

  19. Neural Correlates of Divided Attention in Natural Scenes.

    PubMed

    Fagioli, Sabrina; Macaluso, Emiliano

    2016-09-01

    Individuals are able to split attention between separate locations, but divided spatial attention incurs the additional requirement of monitoring multiple streams of information. Here, we investigated divided attention using photos of natural scenes, where the rapid categorization of familiar objects and prior knowledge about the likely positions of objects in the real world might affect the interplay between these spatial and nonspatial factors. Sixteen participants underwent fMRI during an object detection task. They were presented with scenes containing either a person or a car, located on the left or right side of the photo. Participants monitored either one or both object categories, in one or both visual hemifields. First, we investigated the interplay between spatial and nonspatial attention by comparing conditions of divided attention between categories and/or locations. We then assessed the contribution of top-down processes versus stimulus-driven signals by separately testing the effects of divided attention in target and nontarget trials. The results revealed activation of a bilateral frontoparietal network when dividing attention between the two object categories versus attending to a single category but no main effect of dividing attention between spatial locations. Within this network, the left dorsal premotor cortex and the left intraparietal sulcus were found to combine task- and stimulus-related signals. These regions showed maximal activation when participants monitored two categories at spatially separate locations and the scene included a nontarget object. We conclude that the dorsal frontoparietal cortex integrates top-down and bottom-up signals in the presence of distractors during divided attention in real-world scenes. PMID:27167404

  20. Better Batteries for Transportation: Behind the Scenes @ Berkeley Lab

    ScienceCinema

    Battaglia, Vince

    2013-05-29

    Vince Battaglia leads a behind-the-scenes tour of Berkeley Lab's BATT, the Batteries for Advanced Transportation Technologies Program he leads, where researchers aim to improve batteries upon which the range, efficiency, and power of tomorrow's electric cars will depend. This is the first in a forthcoming series of videos taking viewers into the laboratories and research facilities that members of the public rarely get to see.

  1. Device for imaging scenes with very large ranges of intensity

    DOEpatents

    Deason, Vance Albert

    2011-11-15

    A device for imaging scenes with a very large range of intensity having a pair of polarizers, a primary lens, an attenuating mask, and an imaging device optically connected along an optical axis. Preferably, a secondary lens, positioned between the attenuating mask and the imaging device is used to focus light on the imaging device. The angle between the first polarization direction and the second polarization direction is adjustable.

  2. 50 CFR 253.16 - Actual cost.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 9 2011-10-01 2011-10-01 false Actual cost. 253.16 Section 253.16 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE AID TO FISHERIES FISHERIES ASSISTANCE PROGRAMS Fisheries Finance Program §...

  3. 50 CFR 253.16 - Actual cost.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 11 2013-10-01 2013-10-01 false Actual cost. 253.16 Section 253.16 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE AID TO FISHERIES FISHERIES ASSISTANCE PROGRAMS Fisheries Finance Program §...

  4. Humanistic Education and Self-Actualization Theory.

    ERIC Educational Resources Information Center

    Farmer, Rod

    1984-01-01

    Stresses the need for theoretical justification for the development of humanistic education programs in today's schools. Explores Abraham Maslow's hierarchy of needs and theory of self-actualization. Argues that Maslow's theory may be the best available for educators concerned with educating the whole child. (JHZ)

  5. Children's Rights and Self-Actualization Theory.

    ERIC Educational Resources Information Center

    Farmer, Rod

    1982-01-01

    Educators need to seriously reflect upon the concept of children's rights. Though the idea of children's rights has been debated numerous times, the idea remains vague and shapeless; however, Maslow's theory of self-actualization can provide the children's rights idea with a needed theoretical framework. (Author)

  6. Culture Studies and Self-Actualization Theory.

    ERIC Educational Resources Information Center

    Farmer, Rod

    1983-01-01

    True citizenship education is impossible unless students develop the habit of intelligently evaluating cultures. Abraham Maslow's theory of self-actualization, a theory of innate human needs and of human motivation, is a nonethnocentric tool which can be used by teachers and students to help them understand other cultures. (SR)

  7. Group Counseling for Self-Actualization.

    ERIC Educational Resources Information Center

    Streich, William H.; Keeler, Douglas J.

    Self-concept, creativity, growth orientation, an integrated value system, and receptiveness to new experiences are considered to be crucial variables to the self-actualization process. A regular, year-long group counseling program was conducted with 85 randomly selected gifted secondary students in the Farmington, Connecticut Public Schools. A…

  8. Racial Discrimination in Occupations: Perceived and Actual.

    ERIC Educational Resources Information Center

    Turner, Castellano B.; Turner, Barbara F.

    The relationship between the actual representation of Blacks in certain occupations and individual perceptions of the occupational opportunity structure were examined. A scale which rated the degree of perceived discrimination against Blacks in 21 occupations was administered to 75 black male, 70 black female, 1,429 white male and 1,457 white…

  9. Developing Human Resources through Actualizing Human Potential

    ERIC Educational Resources Information Center

    Clarken, Rodney H.

    2012-01-01

    The key to human resource development is in actualizing individual and collective thinking, feeling and choosing potentials related to our minds, hearts and wills respectively. These capacities and faculties must be balanced and regulated according to the standards of truth, love and justice for individual, community and institutional development,…

  10. From image statistics to scene gist: evoked neural activity reveals transition from low-level natural image structure to scene category.

    PubMed

    Groen, Iris I A; Ghebreab, Sennay; Prins, Hielke; Lamme, Victor A F; Scholte, H Steven

    2013-11-27

    The visual system processes natural scenes in a split second. Part of this process is the extraction of "gist," a global first impression. It is unclear, however, how the human visual system computes this information. Here, we show that, when human observers categorize global information in real-world scenes, the brain exhibits strong sensitivity to low-level summary statistics. Subjects rated a specific instance of a global scene property, naturalness, for a large set of natural scenes while EEG was recorded. For each individual scene, we derived two physiologically plausible summary statistics by spatially pooling local contrast filter outputs: contrast energy (CE), indexing contrast strength, and spatial coherence (SC), indexing scene fragmentation. We show that behavioral performance is directly related to these statistics, with naturalness rating being influenced in particular by SC. At the neural level, both statistics parametrically modulated single-trial event-related potential amplitudes during an early, transient window (100-150 ms), but SC continued to influence activity levels later in time (up to 250 ms). In addition, the magnitude of neural activity that discriminated between man-made versus natural ratings of individual trials was related to SC, but not CE. These results suggest that global scene information may be computed by spatial pooling of responses from early visual areas (e.g., LGN or V1). The increased sensitivity over time to SC in particular, which reflects scene fragmentation, suggests that this statistic is actively exploited to estimate scene naturalness. PMID:24285888
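
The two pooled statistics can be illustrated with a toy computation. The sketch below uses illustrative formulas only (a simple center-surround difference and ad hoc pooling), not the paper's physiologically grounded contrast filter banks; it is meant only to show how spatial pooling of local contrast can reduce a whole scene to two scalars:

```python
import math

def local_contrast(img):
    """Local contrast via a center-surround difference (an assumed stand-in
    for a proper contrast filter bank)."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            surround = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]) / 4.0
            out.append(img[y][x] - surround)
    return out

def summary_statistics(img):
    """Spatially pool filter outputs into two scalars: a contrast-energy-like
    value (CE) and a coherence-like ratio (SC). Formulas are illustrative."""
    c = local_contrast(img)
    ce = math.sqrt(sum(v * v for v in c) / len(c))   # overall contrast strength
    mean_abs = sum(abs(v) for v in c) / len(c)
    sc = ce / mean_abs if mean_abs else 0.0          # shape of the contrast distribution
    return ce, sc
```

A uniform image pools to CE = 0, while a heavily fragmented pattern pools to high CE; the point is only that global scene properties can be summarized by such pooled scalars.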

  11. Scene-based nonuniformity correction using local constant statistics.

    PubMed

    Zhang, Chao; Zhao, Wenyi

    2008-06-01

    In scene-based nonuniformity correction, the statistical approach assumes that all possible values of the true-scene pixel are seen at each pixel location. This global-constant-statistics assumption does not distinguish fixed pattern noise from spatial variations in the average image, which often causes "ghosting" artifacts in the corrected images, since genuine spatial variations are treated as noise. We introduce a new statistical method to reduce these ghosting artifacts. Our method assumes local-constant statistics: the temporal signal distribution is not constant across the whole image, but is constant within a local region around each pixel while varying at larger scales. Under the assumption that the fixed pattern noise concentrates in a higher spatial-frequency band than the distribution variation, we apply a wavelet method to the gain and offset images of the noise and separate the pattern noise from the spatial variations in the temporal distribution of the scene. We compare the results to the global-constant-statistics method using a clean sequence with large artificial pattern noise. We also apply the method to a challenging CCD video sequence and an LWIR sequence to show how effective it is in reducing noise and ghosting artifacts. PMID:18516156
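
For reference, the global-constant-statistics baseline that this paper refines can be sketched directly: if every pixel sees the same true-scene distribution over time, each pixel's temporal mean differs from the global mean only by its fixed-pattern offset. A minimal sketch (offset-only noise model assumed; the paper's local statistics and wavelet separation are omitted):

```python
def constant_statistics_offsets(frames):
    """Estimate per-pixel fixed-pattern offsets from a frame sequence under
    the global-constant-statistics assumption: offset = temporal mean of the
    pixel minus the global mean over all pixels."""
    n, h, w = len(frames), len(frames[0]), len(frames[0][0])
    t_mean = [[sum(f[y][x] for f in frames) / n for x in range(w)] for y in range(h)]
    g = sum(sum(row) for row in t_mean) / (h * w)
    return [[m - g for m in row] for row in t_mean]

def correct(frame, offsets):
    """Subtract the estimated fixed-pattern offsets from one frame."""
    return [[p - o for p, o in zip(frow, orow)]
            for frow, orow in zip(frame, offsets)]
```

When the true scene really is identically distributed at every pixel, this recovers the offsets exactly; the "ghosting" the paper addresses arises precisely when that assumption fails.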

  12. Auditory Scene Analysis: The Sweet Music of Ambiguity

    PubMed Central

    Pressnitzer, Daniel; Suied, Clara; Shamma, Shihab A.

    2011-01-01

    In this review paper aimed at the non-specialist, we explore the use that neuroscientists and musicians have made of perceptual illusions based on ambiguity. The pivotal issue is auditory scene analysis (ASA), or what enables us to make sense of complex acoustic mixtures in order to follow, for instance, a single melody in the midst of an orchestra. In general, ASA uncovers the most likely physical causes that account for the waveform collected at the ears. However, the acoustical problem is ill-posed and it must be solved from noisy sensory input. Recently, the neural mechanisms implicated in the transformation of ambiguous sensory information into coherent auditory scenes have been investigated using so-called bistability illusions (where an unchanging ambiguous stimulus evokes a succession of distinct percepts in the mind of the listener). After reviewing some of those studies, we turn to music, which arguably provides some of the most complex acoustic scenes that a human listener will ever encounter. Interestingly, musicians will not always aim at making each physical source intelligible, but rather express one or more melodic lines with a small or large number of instruments. By means of a few musical illustrations and by using a computational model inspired by neuro-physiological principles, we suggest that this relies on a detailed (if perhaps implicit) knowledge of the rules of ASA and of its inherent ambiguity. We then put forward the opinion that some degree of perceptual ambiguity may participate in our appreciation of music. PMID:22174701

  13. Age differences in adults' scene memory: knowledge and strategy interactions.

    PubMed

    Azmitia, M; Perlmutter, M

    1988-08-01

    Three studies explored young and old adults' use of knowledge to support memory performance. Subjects viewed slides of familiar scenes containing high expectancy and low expectancy items and received free recall (Experiments 1, 2, and 3), cued recall (Experiments 1 and 2), and recognition (Experiments 1 and 2) tests. In Experiment 1 encoding intentionality was varied between subjects. Young adults performed better than old adults on all tests, but on all tests, both age groups produced a similar pattern of better memory of high expectancy than low expectancy items and showed an encoding intentionality effect for low expectancy items. In Experiments 2 and 3 all subjects were told to intentionally encode only one item from each scene; the remaining items could be encoded incidentally. Young adults performed better than old adults, although again, the pattern of performance of the two age groups was similar. High expectancy and low expectancy intentional items were recalled equally well, but high expectancy incidental items were recalled better than low expectancy incidental items. Low expectancy intentional items were recognized better than high expectancy intentional items, but incidental high expectancy items were recognized better than incidental low expectancy items. It was concluded that young and old adults use their knowledge in similar ways to guide scene memory. The effects of item expectancy and item intentionality were interpreted within Hasher & Zacks' (2) model of automatic and effortful processes. PMID:3228800

  14. Imagery rescripting: Is incorporation of the most aversive scenes necessary?

    PubMed

    Dibbets, Pauline; Arntz, Arnoud

    2016-05-01

    During imagery rescripting (ImRs) an aversive memory is relived and transformed to have a more positive outcome. ImRs is frequently applied in psychological treatment and is known to reduce intrusions and distress of the memory. However, little is known about the necessity of incorporating the central aversive parts of the memory in ImRs. To examine this necessity, one hundred participants watched an aversive film and were subsequently randomly assigned to one of four experimental conditions: ImRs including the aversive scenes (Late ImRs), ImRs without the aversive scenes (Early ImRs), imaginal exposure (IE), or a control condition (Cont). Participants in the IE intervention reported the highest distress levels during the intervention; Cont resulted in the lowest levels of self-reported distress. For intrusion frequency, only Late ImRs resulted in fewer intrusions compared to the Cont condition; Early ImRs produced significantly more intrusions than the Late ImRs or IE conditions. Finally, the intrusions in the Late ImRs condition were reported as less vivid compared to the other conditions. To conclude, it seems beneficial to include the aversive scenes in ImRs after an analogue trauma induction. PMID:26076101

  15. Efficient sliding spotlight SAR raw signal simulation of extended scenes

    NASA Astrophysics Data System (ADS)

    Xu, Wei; Huang, Pingping; Deng, Yunkai

    2011-12-01

    Sliding spotlight mode is a novel synthetic aperture radar (SAR) imaging scheme with an achieved azimuth resolution better than stripmap mode and ground coverage larger than spotlight configuration. However, its raw signal simulation of extended scenes may not be efficiently implemented in the two-dimensional (2D) Fourier transformed domain. This article presents a novel sliding spotlight raw signal simulation approach from the wide-beam SAR imaging modes. This approach can generate sliding spotlight raw signal not only from raw data evaluated by the simulators, but also from real data in the stripmap/spotlight mode. In order to obtain the desired raw data from conventional stripmap/spotlight mode, the azimuth time-varying filtering, which is implemented by de-rotation and low-pass filtering, is adopted. As raw signal of extended scenes in the stripmap/spotlight mode can efficiently be evaluated in the 2D Fourier domain, the proposed approach provides an efficient sliding spotlight SAR simulator of extended scenes. Simulation results validate this efficient simulator.
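
The azimuth time-varying filtering step rests on de-rotation: multiplying the slow-time signal by a conjugate linear-FM chirp so the steering-induced frequency ramp is removed before low-pass filtering. A minimal sketch (the rotation rate `k_rot`, the time origin at sample zero, and the omission of the low-pass filter are all simplifying assumptions; real processors usually center time on the aperture midpoint):

```python
import cmath
import math

def derotate(slow_time_signal, k_rot, dt):
    """Azimuth de-rotation: multiply each slow-time sample by
    exp(-j*pi*k_rot*t^2), the conjugate of a linear-FM chirp of rate k_rot.
    A pure chirp of the same rate is flattened to a constant."""
    out = []
    for n, s in enumerate(slow_time_signal):
        t = n * dt
        out.append(s * cmath.exp(-1j * math.pi * k_rot * t * t))
    return out
```

After de-rotation the azimuth spectrum is compressed, which is what makes the subsequent low-pass filtering and efficient 2D Fourier-domain evaluation possible.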

  16. Scene kinetics mitigation using factor analysis with derivative factors.

    SciTech Connect

    Larson, Kurt W.; Melgaard, David Kennett; Scholand, Andrew Joseph

    2010-07-01

    Line of sight jitter in staring sensor data combined with scene information can obscure critical information for change analysis or target detection. Consequently before the data analysis, the jitter effects must be significantly reduced. Conventional principal component analysis (PCA) has been used to obtain basis vectors for background estimation; however PCA requires image frames that contain the jitter variation that is to be modeled. Since jitter is usually chaotic and asymmetric, a data set containing all the variation without the changes to be detected is typically not available. An alternative approach, Scene Kinetics Mitigation, first obtains an image of the scene. Then it computes derivatives of that image in the horizontal and vertical directions. The basis set for estimation of the background and the jitter consists of the image and its derivative factors. This approach has several advantages including: (1) only a small number of images are required to develop the model, (2) the model can estimate backgrounds with jitter different from the input training images, (3) the method is particularly effective for sub-pixel jitter, and (4) the model can be developed from images before the change detection process. In addition the scores from projecting the factors on the background provide estimates of the jitter magnitude and direction for registration of the images. In this paper we will present a discussion of the theoretical basis for this technique, provide examples of its application, and discuss its limitations.
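
The use of an image and its derivatives as basis factors follows from a first-order Taylor expansion: a frame shifted by a sub-pixel amount s satisfies f(x - s) ≈ f(x) - s·f'(x), so projecting a jittered frame onto {image, derivative} recovers the shift from the derivative coefficient. A one-dimensional least-squares sketch (an assumed formulation for illustration, not the authors' exact factor-analysis machinery):

```python
import math

def central_diff(sig):
    """Central-difference derivative, standing in for the derivative factor."""
    return [(sig[i + 1] - sig[i - 1]) / 2.0 for i in range(1, len(sig) - 1)]

def fit_jitter(ref, frame):
    """Solve the 2x2 normal equations for frame ~ a*ref + b*d(ref)/dx.
    For a small sub-pixel shift s, Taylor expansion gives b ~ -s, so the
    projection score b estimates jitter magnitude and direction."""
    d = central_diff(ref)
    r, f = ref[1:-1], frame[1:-1]
    srr = sum(x * x for x in r)
    sdd = sum(x * x for x in d)
    srd = sum(x * y for x, y in zip(r, d))
    srf = sum(x * y for x, y in zip(r, f))
    sdf = sum(x * y for x, y in zip(d, f))
    det = srr * sdd - srd * srd
    a = (srf * sdd - sdf * srd) / det
    b = (sdf * srr - srf * srd) / det
    return a, b
```

Note that only the reference image is needed to build the basis, which mirrors the paper's point that the model can be developed from a small number of images before change detection.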

  17. Predicting the Valence of a Scene from Observers’ Eye Movements

    PubMed Central

    R.-Tavakoli, Hamed; Atyabi, Adham; Rantanen, Antti; Laukka, Seppo J.; Nefti-Meziani, Samia; Heikkilä, Janne

    2015-01-01

    Multimedia analysis benefits from understanding the emotional content of a scene in a variety of tasks such as video genre classification and content-based image retrieval. Recently, there has been an increasing interest in applying human bio-signals, particularly eye movements, to recognize the emotional gist of a scene such as its valence. In order to determine the emotional category of images using eye movements, the existing methods often learn a classifier using several features that are extracted from eye movements. Although it has been shown that eye movement is potentially useful for recognition of scene valence, the contribution of each feature is not well-studied. To address the issue, we study the contribution of features extracted from eye movements in the classification of images into pleasant, neutral, and unpleasant categories. We assess ten features and their fusion. The features are histogram of saccade orientation, histogram of saccade slope, histogram of saccade length, histogram of saccade duration, histogram of saccade velocity, histogram of fixation duration, fixation histogram, top-ten salient coordinates, and saliency map. We utilize a machine learning approach to analyze the performance of features by learning a support vector machine and exploiting various feature fusion schemes. The experiments reveal that ‘saliency map’, ‘fixation histogram’, ‘histogram of fixation duration’, and ‘histogram of saccade slope’ are the most contributing features. The selected features signify the influence of fixation information and angular behavior of eye movements in the recognition of the valence of images. PMID:26407322
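
The overall pipeline — extract per-trial eye-movement features, fuse them, classify — can be illustrated without the paper's SVM. Below, a nearest-centroid classifier deliberately stands in for the SVM purely to keep the sketch dependency-free, and `fuse` implements simple early fusion by concatenating feature histograms (names and values are made up):

```python
from collections import defaultdict

def fuse(feature_histograms):
    """Early fusion: concatenate per-feature histograms (sorted by name for
    a stable ordering) into one vector."""
    vec = []
    for name in sorted(feature_histograms):
        vec.extend(feature_histograms[name])
    return vec

def nearest_centroid(train_vectors, labels, query):
    """Assign `query` to the class whose mean training vector is closest in
    Euclidean distance (a stand-in for the paper's SVM)."""
    sums, counts = {}, defaultdict(int)
    for vec, lab in zip(train_vectors, labels):
        if lab not in sums:
            sums[lab] = [0.0] * len(vec)
        sums[lab] = [s + v for s, v in zip(sums[lab], vec)]
        counts[lab] += 1
    best_lab, best_d = None, float("inf")
    for lab, s in sums.items():
        centroid = [v / counts[lab] for v in s]
        d = sum((c - q) ** 2 for c, q in zip(centroid, query))
        if d < best_d:
            best_lab, best_d = lab, d
    return best_lab
```

Feature-contribution studies like the paper's then amount to repeating this with different subsets of the fused features and comparing accuracy.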

  18. Automated reconstruction of 3D scenes from sequences of images

    NASA Astrophysics Data System (ADS)

    Pollefeys, M.; Koch, R.; Vergauwen, M.; Van Gool, L.

    Modelling of 3D objects from image sequences is a challenging problem and has been an important research topic in the areas of photogrammetry and computer vision for many years. In this paper, a system is presented which automatically extracts a textured 3D surface model from a sequence of images of a scene. The system can deal with unknown camera settings. In addition, the parameters of this camera are allowed to change during acquisition (e.g., by zooming or focusing). No prior knowledge about the scene is necessary to build the 3D models. Therefore, this system offers a high degree of flexibility. The system is based on state-of-the-art algorithms recently developed in computer vision. The 3D modelling task is decomposed into a number of successive steps. Gradually, more knowledge of the scene and the camera setup is retrieved. At this point, the obtained accuracy is not yet at the level required for most metrology applications, but the visual quality is very convincing. This system has been applied to a number of applications in archaeology. The Roman site of Sagalassos (southwest Turkey) was used as a test case to illustrate the potential of this new approach.

  19. Recovery of fingerprints from fire scenes and associated evidence.

    PubMed

    Deans, J

    2006-01-01

    A lack of information concerning the potential recovery of fingerprints from fire scenes and related evidence prompted several research projects. Latent prints from good secretors and visible prints (in blood) were placed on a variety of different surfaces and subsequently subjected to "real life" fires in fully furnished compartments used for fire investigation training purposes. The items were placed in various locations and at different heights within the compartments. After some initial success, further tests were undertaken using both latent and dirt/grease marks on different objects within the same types of fire compartments. Subsequent sets of tests involved the recovery of latent and visual fingerprints (in blood, dirt and grease) from different types of weapons, lighters, plastic bags, match boxes, tapers, plastic bottles and petrol bombs that had been subjected to the same fire conditions as previously. Throughout the entire series of projects one of the prime considerations was how the resultant findings could be put into practice by fire scene examiners in an attempt to assist the police in their investigations. This research demonstrates that almost one in five items recovered from fire scenes yielded fingerprint ridge detail following normal development treatments. PMID:17388243

  20. High dynamic range imaging of non-static scenes

    NASA Astrophysics Data System (ADS)

    Hossain, Imtiaz; Gunturk, Bahadir K.

    2011-01-01

    A well-known technique in high dynamic range (HDR) imaging is to take multiple photographs, each one with a different exposure time, and then combine them to produce an HDR image. Unless the scene is static and the camera position is fixed, this process creates the so-called "ghosting" artifacts. In order to handle non-static scenes or a moving camera, images have to be spatially registered. This is a challenging problem because most optical flow estimation algorithms depend on the constant brightness assumption, which is obviously not the case in HDR imaging. In this paper, we present an algorithm to estimate the dense motion field in image sequences with photometric variations. In an alternating optimization scheme, the algorithm estimates both the dense motion field and the photometric mapping. As latent information, the occluded regions are extracted and excluded from the photometric mapping estimation. We include experiments with both synthetic and real imagery to demonstrate the efficacy of the proposed algorithm. We show that the ghosting artifacts are reduced significantly in HDR imaging of non-static scenes.
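
For context, the static-scene baseline that ghosting disturbs is a weighted multi-exposure merge. A minimal sketch for a linear camera (the hat-shaped confidence weight and the [0, 1] pixel scale are assumptions; the paper's actual contribution, registration with photometric mapping for non-static scenes, is not shown):

```python
def merge_hdr(images, exposures):
    """Naive multi-exposure HDR merge for a linear camera: each pixel's
    radiance estimate is a weighted average of value/exposure_time, with
    mid-range values weighted most, since they are least likely to be
    clipped or noise-dominated."""
    h, w = len(images[0]), len(images[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for img, t in zip(images, exposures):
                z = img[y][x]                 # pixel value in [0, 1]
                wt = z * (1.0 - z) + 1e-6     # hat-shaped confidence weight
                num += wt * z / t
                den += wt
            out[y][x] = num / den
    return out
```

If the scene or camera moves between exposures, the same pixel coordinate no longer sees the same scene point, and this per-pixel average produces the ghosting the paper sets out to remove.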

  3. Time-lapse ratios of cone excitations in natural scenes.

    PubMed

    Foster, David H; Amano, Kinjiro; Nascimento, Sérgio M C

    2016-03-01

    The illumination in natural environments varies through the day. Stable inferences about surface color might be supported by spatial ratios of cone excitations from the reflected light, but their invariance has been quantified only for global changes in illuminant spectrum. The aim here was to test their invariance under natural changes in both illumination spectrum and geometry, especially in the distribution of shadows. Time-lapse hyperspectral radiance images were acquired from five outdoor vegetated and nonvegetated scenes. From each scene, 10,000 pairs of points were sampled randomly and ratios measured across time. Mean relative deviations in ratios were generally large, but when sampling was limited to short distances or moderate time intervals, they fell below the level for detecting violations in ratio invariance. When illumination changes with uneven geometry were excluded, they fell further, to levels obtained with global changes in illuminant spectrum alone. Within sampling constraints, ratios of cone excitations, and also of opponent-color combinations, provide an approximately invariant signal for stable surface-color inferences, despite spectral and geometric variations in scene illumination. PMID:25847405
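
The invariant at the heart of the study is easy to state: a cone excitation is a sum over wavelength of reflectance × illuminant × sensitivity, and the ratio of excitations at two surface points cancels any common scaling of the illuminant exactly. A toy three-band sketch (the spectra are made-up numbers; under natural changes in illuminant spectrum and geometry the invariance is only approximate, which is what the paper quantifies):

```python
def cone_excitation(reflectance, illuminant, sensitivity):
    """Discrete version of e = sum over wavelength of R * E * S."""
    return sum(r * e * s for r, e, s in zip(reflectance, illuminant, sensitivity))

def spatial_ratio(refl_a, refl_b, illuminant, sensitivity):
    """Ratio of cone excitations at two scene points under one illuminant."""
    return (cone_excitation(refl_a, illuminant, sensitivity) /
            cone_excitation(refl_b, illuminant, sensitivity))
```

Because a uniform illuminant scale factor appears in both numerator and denominator, the ratio is unchanged; spectral or geometric (shadow) changes break this exact cancellation, hence the paper's sampling constraints.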

  4. Whiteheadian Actual Entities and String Theory

    NASA Astrophysics Data System (ADS)

    Bracken, Joseph A.

    2012-06-01

    In the philosophy of Alfred North Whitehead, the ultimate units of reality are actual entities, momentary self-constituting subjects of experience which are too small to be sensibly perceived. Their combination into "societies" with a "common element of form" produces the organisms and inanimate things of ordinary sense experience. According to the proponents of string theory, tiny vibrating strings are the ultimate constituents of physical reality which in harmonious combination yield perceptible entities at the macroscopic level of physical reality. Given that the number of Whiteheadian actual entities and of individual strings within string theory are beyond reckoning at any given moment, could they be two ways to describe the same non-verifiable foundational reality? For example, if one could establish that the "superject" or objective pattern of self-constitution of an actual entity vibrates at a specific frequency, its affinity with the individual strings of string theory would be striking. Likewise, if one were to claim that the size and complexity of Whiteheadian "societies" require different space-time parameters for the dynamic interrelationship of constituent actual entities, would that at least partially account for the assumption of 10 or even 26 instead of just 3 dimensions within string theory? The overall conclusion of this article is that, if a suitably revised understanding of Whiteheadian metaphysics were seen as compatible with the philosophical implications of string theory, their combination into a single world view would strengthen the plausibility of both schemes taken separately. Key words: actual entities, subject/superjects, vibrating strings, structured fields of activity, multi-dimensional physical reality.

  5. Online Class Size, Note Reading, Note Writing and Collaborative Discourse

    ERIC Educational Resources Information Center

    Qiu, Mingzhu; Hewitt, Jim; Brett, Clare

    2012-01-01

    Researchers have long recognized class size as affecting students' performance in face-to-face contexts. However, few studies have examined the effects of class size on exact reading and writing loads in online graduate-level courses. This mixed-methods study examined relationships among class size, note reading, note writing, and collaborative…

  6. Hangar no. 2 west door detail view. Note tracks. Note ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Hangar no. 2 west door detail view. Note tracks. Note box structures on doors for door opening mechanisms. Looking 4 N. - Marine Corps Air Station Tustin, Southern Lighter Than Air Ship Hangar, Near intersection of Windmill Road & Johnson Street, Tustin, Orange County, CA

  7. Color signals in natural scenes: characteristics of reflectance spectra and effects of natural illuminants.

    PubMed

    Chiao, C C; Cronin, T W; Osorio, D

    2000-02-01

    Multispectral images of natural scenes were collected from both forests and coral reefs to represent typical, complex scenes that might be viewed by modern animals. Both reflectance spectra and modeled visual color signals in these scenes were decorrelated spectrally by principal-component analysis. Nearly 98% of the variance of reflectance spectra and color signals can be described by the first three principal components for both forest and coral reef scenes, which implies that three well-designed visual channels can recover almost all of the spectral information of natural scenes. A variety of natural illuminants affects color signals of forest scenes only slightly, but the variation in ambient irradiance spectra that is due to the absorption of light by water has dramatic influences on the spectral characteristics of coral reef scenes. PMID:10680623
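
The decorrelation result can be mimicked on synthetic data: when spectra vary along only a few underlying dimensions, the leading principal components absorb nearly all the variance. A sketch using power iteration for the leading eigenvalue of the covariance matrix (pure Python for self-containment; a real analysis would run a full eigendecomposition over measured reflectance spectra):

```python
import math

def covariance(data):
    """Sample covariance matrix of row-vector observations."""
    n, dim = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(dim)]
    return [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in data) / n
             for j in range(dim)] for i in range(dim)]

def first_pc_variance_fraction(data, iters=200):
    """Power iteration for the leading eigenvector; returns the fraction of
    total variance explained by the first principal component."""
    cov = covariance(data)
    dim = len(cov)
    v = [1.0] * dim
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(dim)) for i in range(dim)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient of the converged vector = leading eigenvalue
    lam = sum(v[i] * sum(cov[i][j] * v[j] for j in range(dim)) for i in range(dim))
    total = sum(cov[i][i] for i in range(dim))
    return lam / total
```

With spectra that all scale one fixed basis shape, the first component explains essentially all the variance, a one-dimensional caricature of the paper's three-components-for-98% finding.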

  8. Scrambled eyes? Disrupting scene structure impedes focal processing and increases bottom-up guidance.

    PubMed

    Foulsham, Tom; Alan, Rana; Kingstone, Alan

    2011-10-01

    Previous research has demonstrated that search and memory for items within natural scenes can be disrupted by "scrambling" the images. In the present study, we asked how disrupting the structure of a scene through scrambling might affect the control of eye fixations in either a search task (Experiment 1) or a memory task (Experiment 2). We found that the search decrement in scrambled scenes was associated with poorer guidance of the eyes to the target. Across both tasks, scrambling led to shorter fixations and longer saccades, and more distributed, less selective overt attention, perhaps corresponding to an ambient mode of processing. These results confirm that scene structure has widespread effects on the guidance of eye movements in scenes. Furthermore, the results demonstrate the trade-off between scene structure and visual saliency, with saliency having more of an effect on eye guidance in scrambled scenes. PMID:21647804

  9. The Actual Apollo 13 Prime Crew

    NASA Technical Reports Server (NTRS)

    1970-01-01

    The prime crew of the actual Apollo 13 lunar landing mission, from left to right: Commander James A. Lovell Jr., Command Module pilot John L. Swigert Jr., and Lunar Module pilot Fred W. Haise Jr. The original Command Module pilot for this mission was Thomas 'Ken' Mattingly Jr., but due to exposure to German measles he was replaced by his backup, Command Module pilot John L. 'Jack' Swigert Jr.

  10. The Anatomy of a Note.

    ERIC Educational Resources Information Center

    Moore, Herb

    1986-01-01

    Suggests that students can learn the physics of a musical note by learning how to synthesize sounds on a computer. Discusses ADSR (attack, decay, sustain, and release of a note) and includes a program (with listing) which students can use to examine ADSR on a Commodore 64 microcomputer. (JN)
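
The ADSR envelope the note describes is straightforward to synthesize. A piecewise-linear sketch (the sample rate and strictly linear segments are simplifying assumptions; the Commodore 64's hardware envelopes were not strictly linear):

```python
def adsr_envelope(attack, decay, sustain_level, sustain_time, release, rate=1000):
    """Piecewise-linear ADSR amplitude envelope sampled at `rate` Hz.
    Segment durations are in seconds; sustain_level is in [0, 1]."""
    env = []
    n = int(attack * rate)                                   # attack: 0 -> 1
    env += [i / n for i in range(n)]
    n = int(decay * rate)                                    # decay: 1 -> sustain
    env += [1 - (1 - sustain_level) * i / n for i in range(n)]
    env += [sustain_level] * int(sustain_time * rate)        # sustain: hold
    n = int(release * rate)                                  # release: sustain -> 0
    env += [sustain_level * (1 - i / n) for i in range(n)]
    return env
```

Multiplying this envelope sample-by-sample against a waveform of the desired pitch yields the shaped note; varying the four parameters is exactly the experiment the article has students perform.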

  11. Research Notes and Information References

    SciTech Connect

    Hartley, III, Dean S.

    1994-12-01

    The RNS (Research Notes System) is a set of programs and databases designed to aid the research worker in gathering, maintaining, and using notes taken from the literature. The sources for the notes can be books, journal articles, reports, private conversations, conference papers, audiovisuals, etc. The system ties the databases together in a relational structure, thus eliminating data redundancy while providing full access to all the information. The programs provide the means for access and data entry in a way that reduces the key-entry burden for the user. Each note has several data fields. Included are the text of the note, the subject classification (for retrieval), and the reference identification data. These data are divided into four databases: Document data - title, author, publisher, etc., fields to identify the article within the document; Note data - text and page of the note; Subject data - subject categories to ensure uniform spelling for searches. Additionally, there are subsidiary files used by the system, including database index and temporary work files. The system provides multiple access routes to the notes, both structurally (access method) and topically (through cross-indexing). Output may be directed to a printer or saved as a file for input to word processing software.
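
The relational structure described — document, note, and subject tables tied by keys, with cross-indexed retrieval — maps naturally onto SQL. A sketch using Python's built-in sqlite3 (every table name, column name, and sample value here is an illustrative guess, not the RNS's actual schema):

```python
import sqlite3

# Illustrative three-table layout: documents hold reference data once, notes
# point at their document, and subjects cross-index notes for retrieval --
# so no bibliographic data is duplicated across notes.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE document (doc_id INTEGER PRIMARY KEY, title TEXT, author TEXT);
    CREATE TABLE note     (note_id INTEGER PRIMARY KEY, page INTEGER, text TEXT,
                           doc_id INTEGER REFERENCES document(doc_id));
    CREATE TABLE subject  (category TEXT, note_id INTEGER REFERENCES note(note_id));
""")
con.execute("INSERT INTO document VALUES (1, 'Combat Modeling', 'Hartley')")
con.execute("INSERT INTO note VALUES (1, 42, 'Attrition rates vary widely.', 1)")
con.execute("INSERT INTO subject VALUES ('attrition', 1)")

# Topical access route: every note filed under a subject category, joined
# back to its source document.
rows = con.execute("""
    SELECT d.title, n.page, n.text
    FROM subject s
    JOIN note n ON n.note_id = s.note_id
    JOIN document d ON d.doc_id = n.doc_id
    WHERE s.category = 'attrition'
""").fetchall()
```

The single-row join result carries the document title, page, and note text together, which is the kind of cross-indexed lookup the RNS description emphasizes.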

  13. Comparison of ASSESS neutralization module results with actual small force engagement outcomes

    SciTech Connect

    Gardner, B.H.; Snell, M.K.; Paulus, W.K.

    1991-01-01

    The ASSESS Neutralization module (Neutralization) is part of the Analytic System and Software for Evaluation of Safeguards and Security (ASSESS), a vulnerability assessment tool. Neutralization models a fire fight between security inspectors (SIs) and adversaries. This paper reports a comparison between actual outcomes of police and small military engagements and the results predicted by the Neutralization module for similar scenarios. The results of this comparison show a surprising correlation between predicted outcomes (based on numbers of combatants, weapon types, exposures, etc.) and the actual outcomes of the engagements analyzed. The importance of this analysis is that, given the defenders have intelligence on actual adversary characteristics or are protecting against a design basis threat, defense capabilities can be evaluated before an engagement. Results could then be used to improve the probability of a desired outcome. For example, law enforcement agencies are frequently able to compile the number of criminals, types of weaponry, willingness to use force, etc., from analysis of crime scenes.

  14. Spatial content-based scene matching using a relaxation method

    NASA Astrophysics Data System (ADS)

    Wang, Caixia

    Scene matching is a fundamental task for a variety of geospatial analysis applications. As we move towards multi-source data analysis, the constantly increasing volume of generated geospatial datasets and the diversification of data sources are the two major forces driving the need for novel and more efficient matching solutions. Despite great effort within the geospatial and computer science communities, automated scene matching remains challenging when vector data are involved, as in image-to-map registration for change detection. In this context, features extracted from vector data contain no intensity information, which is typically the key component of current promising approaches to registration. The problem becomes more complicated still because the two or more datasets usually differ in coverage, scale, or orientation, and accordingly corresponding objects in the datasets may also differ to a certain extent. This dissertation developed a novel methodology for automatic image-to-vector matching based on contextual information among salient spatial features (e.g. road networks and buildings) in a scene. In this work, we model the road networks extracted from the two to-be-matched datasets as attributed graphs. The developed attribute metric measures the geometric and topological properties of the road network, which are invariant to differences between the two datasets in scale, orientation, area of coverage, physical changes, and extraction errors. Road networks comprise line segments (or curves), intersections, and loops. This complex structure suggests versatile attributes derivable both from the components of the road networks themselves and from the relations between them. It is important to develop attributes that require little computational effort while having sufficient descriptive power. We extend the entropy concept to statistically measure the descriptive quality of the attributes under
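    The entropy idea for scoring attribute quality can be sketched in a few lines; the toy road network and the endpoint-degree attribute below are assumptions for illustration, not the dissertation's actual metric:

```python
import math
from collections import Counter

# Toy road network as an attributed graph: nodes are intersections,
# edges are road segments. The attribute chosen here (sorted pair of
# endpoint degrees) is topological, hence invariant to the scale and
# orientation of the map -- an assumed example, not the actual metric.
edges = {("A", "B"), ("B", "C"), ("C", "A"), ("C", "D"), ("D", "E")}
degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

attrs = [tuple(sorted((degree[u], degree[v]))) for u, v in edges]

# Shannon entropy of the attribute distribution: higher entropy means
# the attribute spreads edges over more distinct values, i.e. it has
# more descriptive power for matching.
counts = Counter(attrs)
n = len(attrs)
entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
print(round(entropy, 3))
```

    An attribute that assigned every edge the same value would have zero entropy and thus no power to discriminate candidate matches.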

  15. Functional neuroanatomy of auditory scene analysis in Alzheimer's disease.

    PubMed

    Golden, Hannah L; Agustus, Jennifer L; Goll, Johanna C; Downey, Laura E; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D

    2015-01-01

    Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but is vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known 'cocktail party effect' as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory 'foreground' and 'background'. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology. PMID:26029629

  16. Benzodiazepine dependence among multidrug users in the club scene

    PubMed Central

    Kurtz, Steven P.; Surratt, Hilary L.; Levi-Minzi, Maria A.; Mooss, Angela

    2011-01-01

    Background Benzodiazepines (BZs) are among the most frequently prescribed drugs with the potential for abuse. Young adults ages 18–29 report the highest rates of BZ misuse in the United States. The majority of club drug users are also in this age group, and BZ misuse is prevalent in the nightclub scene. BZ dependence, however, is not well documented. This paper examines BZ dependence and its correlates among multidrug users in South Florida’s nightclub scene. Methods Data were drawn from structured interviews with men and women (N=521) who reported regular attendance at large dance clubs and recent use of both club drugs and BZs. Results Prevalences of BZ-related problems were 7.9% for BZ dependence, 22.6% for BZ abuse, and 25% for BZ abuse and/or dependence. In bivariate logistic regression models, heavy cocaine use (OR 2.27; 95% CI 1.18, 4.38), severe mental distress (OR 2.63; 95% CI 1.33, 5.21), and childhood victimization history (OR 2.43; 95% CI 1.10, 5.38) were associated with BZ dependence. Heavy cocaine use (OR 2.14; 95% CI 1.10, 4.18) and severe mental distress (OR 2.16; 95% CI 1.07, 4.37) survived as predictors in the multivariate model. Discussion BZ misuse is widespread among multidrug users in the club scene, who also exhibit high levels of other health and social problems. BZ dependence appears to be more prevalent in this sample than in other populations described in the literature. Recommendations for intervention and additional research are described. PMID:21708434

  17. The capture and recreation of 3D auditory scenes

    NASA Astrophysics Data System (ADS)

    Li, Zhiyun

    The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research is concerned with sound capture via a spherical microphone array. The advantage of this array is that it can be steered digitally in any 3D direction with the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation, and robustness constraints. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating the scenes by exploiting the reciprocity principle between the capture and recreation processes. Our approach makes the system practical and easy to build. Using this approach, we can capture the 3D sound field with a spherical microphone array and recreate it using a spherical loudspeaker array, ensuring that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted for other regular or semi-regular microphone layouts. In addition, we extend this approach to a headphone-based system. Design examples and simulation results are presented to verify our algorithms. Prototypes have been built and tested in real-world auditory scenes.
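    The digital-steering idea can be illustrated with a minimal delay-and-sum sketch (not the spherical-harmonic design of the dissertation): a plane wave from direction u reaches a microphone at position p with delay (p·u)/c, so compensating that delay before summing yields coherent gain in the look direction. The array geometry and tone frequency below are assumed for illustration.

```python
import math

c, f = 343.0, 1000.0                          # speed of sound (m/s), tone (Hz)
mics = [(0.05, 0.0, 0.0), (-0.05, 0.0, 0.0),
        (0.0, 0.05, 0.0), (0.0, -0.05, 0.0)]  # 4 mics on a small sphere (assumed)
u = (1.0, 0.0, 0.0)                           # direction of the incoming plane wave

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def array_output(steer, t=0.0):
    # Each mic observes cos(2*pi*f*(t - delay)); steering adds back the
    # delay expected for direction `steer` before summing.
    out = 0.0
    for p in mics:
        arrival = dot(p, u) / c               # true propagation delay
        comp = dot(p, steer) / c              # compensation for the steer direction
        out += math.cos(2 * math.pi * f * (t - arrival + comp))
    return out

on_target = array_output(u)                   # steered at the source: coherent sum
off_target = array_output((0.0, 0.0, 1.0))    # steered elsewhere: partial cancellation
print(on_target, off_target)
```

    Because only the compensation delays change, the same array can be re-steered to any direction purely in software, which is the property the abstract highlights.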

  18. Rapid discrimination of visual scene content in the human brain.

    PubMed

    Anokhin, Andrey P; Golosheykin, Simon; Sirevaag, Erik; Kristjansson, Sean; Rohrbaugh, John W; Heath, Andrew C

    2006-06-01

    The rapid evaluation of complex visual environments is critical for an organism's adaptation and survival. Previous studies have shown that emotionally significant visual scenes, both pleasant and unpleasant, elicit a larger late positive wave in the event-related brain potential (ERP) than emotionally neutral pictures. The purpose of the present study was to examine whether neuroelectric responses elicited by complex pictures discriminate between specific, biologically relevant contents of the visual scene and to determine how early in the picture processing this discrimination occurs. Subjects (n = 264) viewed 55 color slides differing in both scene content and emotional significance. No categorical judgments or responses were required. Consistent with previous studies, we found that emotionally arousing pictures, regardless of their content, produce a larger late positive wave than neutral pictures. However, when pictures were further categorized by content, anterior ERP components in a time window between 200 and 600 ms following stimulus onset showed a high selectivity for pictures with erotic content compared to other pictures regardless of their emotional valence (pleasant, neutral, and unpleasant) or emotional arousal. The divergence of ERPs elicited by erotic and non-erotic contents started at 185 ms post-stimulus in the fronto-central midline region, with a later onset in parietal regions. This rapid, selective, and content-specific processing of erotic materials and its dissociation from other pictures (including emotionally positive pictures) suggests the existence of a specialized neural network for prioritized processing of a distinct category of biologically relevant stimuli with high adaptive and evolutionary significance. PMID:16712815
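    A divergence onset like the 185 ms figure is typically read off the condition-averaged difference wave. The sketch below illustrates the procedure on synthetic waveforms; the waveform shapes and the threshold are assumptions, not the study's data or criterion.

```python
import math

t_ms = range(0, 600)                    # 1 ms resolution over the epoch

# Synthetic averaged ERPs: both conditions share an early component;
# condition A adds an anterior positivity ramping up from ~180 ms.
erp_a = [math.exp(-((t - 100) ** 2) / 400.0) +
         (0.5 * (t - 180) / 100.0 if t >= 180 else 0.0) for t in t_ms]
erp_b = [math.exp(-((t - 100) ** 2) / 400.0) for t in t_ms]

# Onset = first sample where the difference wave exceeds a criterion.
threshold = 0.047                       # assumed amplitude criterion
onset_ms = next(t for t, a, b in zip(t_ms, erp_a, erp_b)
                if abs(a - b) > threshold)
print(onset_ms)
```

    Real analyses add statistical safeguards (e.g. requiring the difference to stay above threshold for several consecutive samples), but the scan-the-difference-wave logic is the same.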

  19. Rapid discrimination of visual scene content in the human brain

    PubMed Central

    Anokhin, Andrey P.; Golosheykin, Simon; Sirevaag, Erik; Kristjansson, Sean; Rohrbaugh, John W.; Heath, Andrew C.

    2007-01-01

    The rapid evaluation of complex visual environments is critical for an organism's adaptation and survival. Previous studies have shown that emotionally significant visual scenes, both pleasant and unpleasant, elicit a larger late positive wave in the event-related brain potential (ERP) than emotionally neutral pictures. The purpose of the present study was to examine whether neuroelectric responses elicited by complex pictures discriminate between specific, biologically relevant contents of the visual scene and to determine how early in the picture processing this discrimination occurs. Subjects (n=264) viewed 55 color slides differing in both scene content and emotional significance. No categorical judgments or responses were required. Consistent with previous studies, we found that emotionally arousing pictures, regardless of their content, produce a larger late positive wave than neutral pictures. However, when pictures were further categorized by content, anterior ERP components in a time window between 200–600 ms following stimulus onset showed a high selectivity for pictures with erotic content compared to other pictures regardless of their emotional valence (pleasant, neutral, and unpleasant) or emotional arousal. The divergence of ERPs elicited by erotic and non-erotic contents started at 185 ms post-stimulus in the fronto-central midline regions, with a later onset in parietal regions. This rapid, selective, and content-specific processing of erotic materials and its dissociation from other pictures (including emotionally positive pictures) suggests the existence of a specialized neural network for prioritized processing of a distinct category of biologically relevant stimuli with high adaptive and evolutionary significance. PMID:16712815

  20. Validation of the ASTER instrument level 1A scene geometry

    USGS Publications Warehouse

    Kieffer, H.H.; Mullins, K.F.; MacKinnon, D.J.

    2008-01-01

    An independent assessment of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument geometry was undertaken by the U.S. ASTER Team, to confirm the geometric correction parameters developed and applied to Level 1A (radiometrically and geometrically raw with correction parameters appended) ASTER data. The goal was to evaluate the geometric quality of the ASTER system and the stability of the Terra spacecraft. ASTER is a 15-band system containing optical instruments with resolutions from 15- to 90-meters; all geometrically registered products are ultimately tied to the 15-meter Visible and Near Infrared (VNIR) sub-system. Our evaluation process first involved establishing a large database of Ground Control Points (GCP) in the mid-western United States; an area with features of an appropriate size for spacecraft instrument resolutions. We used standard U.S. Geological Survey (USGS) Digital Orthophoto Quads (DOQs) of areas in the mid-west to locate accurate GCPs by systematically identifying road intersections and recording their coordinates. Elevations for these points were derived from USGS Digital Elevation Models (DEMs). Road intersections in a swath of nine contiguous ASTER scenes were then matched to the GCPs, including terrain correction. We found no significant distortion in the images; after a simple image offset to absolute position, the RMS residual of about 200 points per scene was less than one-half a VNIR pixel. Absolute locations were within 80 meters, with a slow drift of about 10 meters over the entire 530-kilometer swath. Using strictly simultaneous observations of scenes 370 kilometers apart, we determined a stereo angle correction of 0.00134 degree with an accuracy of one microradian. The mid-west GCP field and the techniques used here should be widely applicable in assessing other spacecraft instruments having resolutions from 5 to 50-meters. © 2008 American Society for Photogrammetry and Remote Sensing.
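    The "simple image offset, then RMS residual" evaluation step can be sketched directly; the point coordinates below are synthetic assumptions, not actual GCP data:

```python
import math

# Matched point pairs: surveyed ground control points vs. the same road
# intersections located in the image (synthetic example coordinates).
gcps    = [(100.0, 200.0), (340.0, 220.0), (120.0, 560.0), (400.0, 500.0)]
matched = [(104.2, 202.9), (343.7, 223.2), (124.1, 563.1), (404.0, 502.8)]
n = len(gcps)

# Step 1: estimate the constant image offset as the mean displacement.
off_x = sum(m[0] - g[0] for g, m in zip(gcps, matched)) / n
off_y = sum(m[1] - g[1] for g, m in zip(gcps, matched)) / n

# Step 2: RMS of the residuals after removing that offset.
rms = math.sqrt(sum((m[0] - g[0] - off_x) ** 2 + (m[1] - g[1] - off_y) ** 2
                    for g, m in zip(gcps, matched)) / n)
print(round(off_x, 2), round(off_y, 2), round(rms, 3))
```

    A small residual after the offset is removed is what justifies the paper's conclusion that the images contain no significant internal distortion.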

  1. Optic flow aided navigation and 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Rollason, Malcolm

    2013-10-01

    An important enabler for low cost airborne systems is the ability to exploit low cost inertial instruments. An Inertial Navigation System (INS) can provide a navigation solution, when GPS is denied, by integrating measurements from inertial sensors. However, the gyrometer and accelerometer biases of low cost inertial sensors cause compound errors in the integrated navigation solution. This paper describes experiments to establish whether (and to what extent) the navigation solution can be aided by fusing measurements from an on-board video camera with measurements from the inertial sensors. The primary aim of the work was to establish whether optic flow aided navigation is beneficial even when the 3D structure within the observed scene is unknown. A further aim was to investigate whether an INS can help to infer 3D scene content from video. Experiments with both real and synthetic data have been conducted. Real data was collected using a Parrot AR.Drone quadrotor. Empirical results illustrate that optic flow provides a useful aid to navigation even when the 3D structure of the observed scene is not known. With optic flow aiding of the INS, the computed trajectory is consistent with the true camera motion, whereas the unaided INS yields a rapidly increasing position error (the data represents ~40 seconds, after which the unaided INS is ~50 metres in error and has passed through the ground). The results of the Monte Carlo simulation concur with the empirical result. Position errors, which grow as a quadratic function of time when unaided, are substantially checked by the availability of optic flow measurements.
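    The quadratic error growth mentioned above follows from double integration of a constant accelerometer bias b, which produces a position error of b*t^2/2. A minimal sketch, with an assumed bias value rather than the paper's sensor parameters:

```python
# Dead-reckoning drift from an uncorrected accelerometer bias:
# integrating acceleration twice turns a constant bias into a
# quadratically growing position error. Bias value is assumed.
b = 0.06                       # accelerometer bias, m/s^2 (illustrative)
dt = 0.01                      # integration step, s
v_err, p_err = 0.0, 0.0
for _ in range(round(40.0 / dt)):   # ~40 s, as in the experiment above
    v_err += b * dt            # velocity error: first integration
    p_err += v_err * dt        # position error: second integration
print(round(p_err, 1))         # close to the analytic 0.5 * b * 40**2 = 48 m
```

    Optic-flow aiding works by bounding exactly this drift: each visual measurement constrains the velocity estimate, so the bias can no longer integrate unchecked into position.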

  2. Functional neuroanatomy of auditory scene analysis in Alzheimer's disease

    PubMed Central

    Golden, Hannah L.; Agustus, Jennifer L.; Goll, Johanna C.; Downey, Laura E.; Mummery, Catherine J.; Schott, Jonathan M.; Crutch, Sebastian J.; Warren, Jason D.

    2015-01-01

    Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known ‘cocktail party effect’ as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory ‘foreground’ and ‘background’. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology. PMID:26029629

  3. Concepts using optical MEMS array for ladar scene projection

    NASA Astrophysics Data System (ADS)

    Smith, J. Lynn

    2003-09-01

    Scene projection for HITL testing of LADAR seekers is unique because the 3rd dimension is time delay. Advancement at AFRL in electronic delay and pulse-shaping circuits, VCSEL emitters, fiber optics, and associated scene generation is underway, and technology hand-off to test facilities is expected eventually. However, the currently projected size and cost call for mitigation through further innovation in system design, incorporating new developments, cooperation, and leveraging of dual-purpose technology. Therefore a concept is offered which greatly reduces the number (thus cost) of pulse shaping circuits and enables the projector to be installed on the mobile arm of a flight motion simulator table without fiber optic cables. The concept calls for an optical MEMS (micro-electromechanical system) steerable micro-mirror array. IFOVs are clusters of four micro-mirrors, each of which steers through a unique angle to a selected light source with the appropriate delay and waveform basis. An array of such sources promotes angle-to-delay mapping. Separate pulse waveform basis circuits for each scene IFOV are not required because a single set of basis functions is broadcast to all MEMS elements simultaneously. Waveform delivery to spatial filtering and collimation optics is addressed by angular selection at the MEMS array. Emphasis is on technology in existence or under development by the government, its contractors and the telecommunications industry. Values for components are first assumed as those that are easily available. Concept adequacy and upgrades are then discussed. In conclusion an opto-mechanical scan option ranks as the best light source for near-term MEMS-based projector testing of both flash and scan LADAR seekers.

  4. Adaptive optimal spectral range for dynamically changing scene

    NASA Astrophysics Data System (ADS)

    Pinsky, Ephi; Siman-tov, Avihay; Peles, David

    2012-06-01

    A novel multispectral video system that continuously optimizes both its spectral range channels and the exposure time of each channel autonomously, under dynamic scenes, varying from short range-clear scene to long range-poor visibility, is currently being developed. Transparency and contrast of high scattering medium of channels with spectral ranges in the near infrared is superior to the visible channels, particularly to the blue range. Longer wavelength spectral ranges that induce higher contrast are therefore favored. Images of 3 spectral channels are fused and displayed for (pseudo) color visualization, as an integrated high contrast video stream. In addition to the dynamic optimization of the spectral channels, optimal real-time exposure time is adjusted simultaneously and autonomously for each channel. A criterion of maximum average signal, derived dynamically from previous frames of the video stream, is used (Patent Application - International Publication Number: WO2009/093110 A2, 30.07.2009). This configuration enables dynamic compatibility with the optimal exposure time of a dynamically changing scene. It also maximizes the signal to noise ratio and compensates each channel for the specified value of daylight reflections and sensors response for each spectral range. A possible implementation is a color video camera based on 4 synchronized, highly responsive, CCD imaging detectors, attached to a 4CCD dichroic prism and combined with a common, color corrected, lens. Principal Components Analysis (PCA) technique is then applied for real-time "dimensional collapse" in color space, in order to select and fuse, for clear color visualization, the 3 most significant principal channels out of at least 4 characterized by high contrast and rich details in the image data.
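    The PCA "dimensional collapse" step amounts to keeping the 3 most significant principal components of the 4-channel pixel data. A dependency-free sketch using power iteration with deflation on the 4x4 channel covariance; the synthetic pixel data (with one redundant channel) is an assumption for illustration:

```python
import random

random.seed(7)
n = 500
ch1 = [random.gauss(0, 3) for _ in range(n)]
ch2 = [random.gauss(0, 2) for _ in range(n)]
ch3 = [random.gauss(0, 1) for _ in range(n)]
ch4 = list(ch1)                        # 4th channel redundant with the 1st
data = list(zip(ch1, ch2, ch3, ch4))   # one 4-band "pixel" per row

# Channel means, then the 4x4 covariance matrix.
mean = [sum(col) / n for col in zip(*data)]
cov = [[sum((r[i] - mean[i]) * (r[j] - mean[j]) for r in data) / n
        for j in range(4)] for i in range(4)]

def top_eigen(m, iters=500):
    # Power iteration: converges to the dominant eigenpair of a
    # symmetric matrix.
    v = [1.0, 0.5, 0.25, 0.125]
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(m[i][j] * v[j] for j in range(4)) for i in range(4))
    return lam, v

total = sum(cov[i][i] for i in range(4))   # total variance (trace)
explained = 0.0
m = [row[:] for row in cov]
for _ in range(3):                         # extract the top 3 components
    lam, v = top_eigen(m)
    explained += lam
    # Deflate: subtract the found component before finding the next one.
    m = [[m[i][j] - lam * v[i] * v[j] for j in range(4)] for i in range(4)]
print(round(explained / total, 3))         # fraction of variance retained
```

    Because one channel is redundant here, three components capture essentially all the variance, which is the situation that makes collapsing 4 channels to a 3-channel color display nearly lossless.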

  5. Portable X-ray Fluorescence Unit for Analyzing Crime Scenes

    NASA Astrophysics Data System (ADS)

    Visco, A.

    2003-12-01

    Goddard Space Flight Center and the National Institute of Justice have teamed up to apply NASA technology to the field of forensic science. NASA hardware that is under development for future planetary robotic missions, such as Mars exploration, is being engineered into a rugged, portable, non-destructive X-ray fluorescence system for identifying gunshot residue, blood, and semen at crime scenes. This project establishes the shielding requirements that will ensure that the exposure of a user to ionizing radiation is below the U.S. Nuclear Regulatory Commission's allowable limits, and also develops the benchtop model for testing the system in a controlled environment.

  6. Cooperation of mobile robots for accident scene inspection

    NASA Astrophysics Data System (ADS)

    Byrne, R. H.; Harrington, J.

    A telerobotic system demonstration was developed for the Department of Energy's Accident Response group to highlight the applications of telerobotic vehicles to accident site inspection. The proof-of-principle system employs two mobile robots, Dixie and RAYBOT, to inspect a simulated accident site. Both robots are controlled serially from a single driving station, allowing an operator to take advantage of having multiple robots at the scene. The telerobotic system is described and some of the advantages of having more than one robot present are discussed. Future plans for the system are also presented.

  7. Video inpainting using scene model and object tracking

    NASA Astrophysics Data System (ADS)

    Frantc, V. A.; Voronin, V. V.; Marchuck, V. I.; Egiazarian, K. O.

    2013-02-01

    The problem of automatic video restoration and object removal attracts the attention of many researchers. In this paper we present a new framework for video inpainting. We consider the case when the camera motion is approximately parallel to the plane of image projection. The scene may consist of a stationary background with a moving foreground, both of which may require inpainting. Moving objects can move differently, but should not change their size. The framework presented in this paper contains the following steps: moving object identification, moving object tracking and background/foreground segmentation, inpainting and, finally, video rendering. Some results on test video sequence processing are presented.
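    The background side of such a pipeline can be sketched with a per-pixel temporal median: pixels covered by a moving object in one frame are usually unoccluded in others, so the median over frames recovers the static background, which can then fill the masked region. The tiny one-row "frames" below are synthetic assumptions:

```python
import statistics

# Three 4-pixel "frames"; a bright moving object (value 200) occupies a
# different pixel in each frame, over a static background of 10s.
frames = [
    [10, 10, 200, 10],   # frame 0: object at pixel 2
    [10, 200, 10, 10],   # frame 1: object at pixel 1
    [200, 10, 10, 10],   # frame 2: object at pixel 0
]

# Background model: per-pixel median over time rejects the transient
# object values.
background = [statistics.median(f[i] for f in frames)
              for i in range(len(frames[0]))]

# Inpaint frame 0 where the object-removal mask is set.
mask = [False, False, True, False]
inpainted = [background[i] if mask[i] else frames[0][i]
             for i in range(len(mask))]
print(background, inpainted)
```

    Real systems replace the median with motion-compensated background models and handle the moving foreground separately, but the fill-from-background principle is the same.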

  8. Probing the Natural Scene by Echolocation in Bats

    PubMed Central

    Moss, Cynthia F.; Surlykke, Annemarie

    2010-01-01

    Bats echolocating in the natural environment face the formidable task of sorting signals from multiple auditory objects, echoes from obstacles, prey, and the calls of conspecifics. Successful orientation in a complex environment depends on auditory information processing, along with adaptive vocal-motor behaviors and flight path control, which draw upon 3-D spatial perception, attention, and memory. This article reviews field and laboratory studies that document adaptive sonar behaviors of echolocating bats, and point to the fundamental signal parameters they use to track and sort auditory objects in a dynamic environment. We suggest that adaptive sonar behavior provides a window to bats’ perception of complex auditory scenes. PMID:20740076

  9. Photorealistic ray tracing to visualize automobile side mirror reflective scenes.

    PubMed

    Lee, Hocheol; Kim, Kyuman; Lee, Gang; Lee, Sungkoo; Kim, Jingu

    2014-10-20

    We describe an interactive visualization procedure for determining the optimal surface of a special automobile side mirror, thereby removing the blind spot, without the need for feedback from the error-prone manufacturing process. If the horizontally progressive curvature distributions are set to the semi-mathematical expression for a free-form surface, the surface point set can then be derived through numerical integration. This is then converted to a NURBS surface while retaining the surface curvature. Then, reflective scenes from the driving environment can be virtually realized using photorealistic ray tracing, in order to evaluate how these reflected images would appear to drivers. PMID:25401606

  10. Increasing Student Engagement and Enthusiasm: A Projectile Motion Crime Scene

    NASA Astrophysics Data System (ADS)

    Bonner, David

    2010-05-01

    Connecting physics concepts with real-world events allows students to establish a strong conceptual foundation. When such events are particularly interesting to students, it can greatly impact their engagement and enthusiasm in an activity. Activities that involve studying real-world events of high interest can provide students a long-lasting understanding and positive memorable experiences, both of which heighten the learning experiences of those students. One such activity, described in depth in this paper, utilizes a murder mystery and crime scene investigation as an application of basic projectile motion.
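    The physics behind such a crime-scene activity is a short kinematics calculation: for a horizontal launch from height h landing a distance d away, the fall time is t = sqrt(2h/g) and the launch speed is v = d/t. The numbers below are assumed for illustration:

```python
import math

g = 9.81          # gravitational acceleration, m/s^2
h = 4.905         # launch height (e.g. a second-story window), m -- assumed
d = 12.0          # horizontal distance to the point of impact, m -- assumed

t = math.sqrt(2 * h / g)   # time to fall height h
v = d / t                  # horizontal launch speed
print(round(t, 2), round(v, 1))
```

    In the classroom version, students measure h and d at the staged scene and work backwards to v, testing whether a suspect's account of the event is physically possible.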

  11. PRACE resources to study extreme natural events: the SCENE project

    NASA Astrophysics Data System (ADS)

    Fiori, Elisabetta; Galizia, Antonella; Danovaro, Emanuele; Clematis, Andrea; Bedrina, Tatiana; Parodi, Antonio

    2014-05-01

    Forecasting severe storms and floods is one of the main challenges of the 21st century. Floods are the most dangerous meteorological hazard in the Mediterranean basins, due both to the number of people affected and to the relatively high frequency with which human activities and goods suffer damages and losses. Numerical simulation of extreme events over small basins such as those of the Mediterranean requires very fine resolution in space and time, and consequently considerable memory and computational power. Since the resources provided by PRACE can satisfy such requirements, the Super Computing of Extreme Natural Events (SCENE) project has been proposed. SCENE aims to provide an advanced understanding of the intrinsic predictability of severe precipitation processes and the associated predictive ability of high-resolution meteorological models, with a special focus on flash flood-producing storms in regions of complex orography (e.g. the Mediterranean area), through assessment of the role of both convective and microphysical processes. The meteorological model considered in the project is the Weather Research and Forecasting (WRF) model, a state-of-the-art mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs. Among the parameterizations available in the WRF model, the WRF Single-Moment 6-Class scheme and the Thompson microphysics scheme will be adopted for the numerical simulations, in combination with three different approaches for the treatment of convective processes: an explicit method, the Betts-Miller-Janjic scheme, and the Kain-Fritsch scheme. As for flash-flood producing storms, the project considers the recent sequence of extreme events occurred in the north-western portion of the Mediterranean sea; some of these events are the so-called critical cases of the DRIHM project (www.drihm.eu), i.e. selected severe

  12. Contrast enhancing and adjusting advanced very high resolution radiometer scenes for solar illumination

    USGS Publications Warehouse

    Zokaites, David M.

    1993-01-01

    The AVHRR (Advanced Very High Resolution Radiometer) satellite sensors provide daily coverage of the entire Earth. As a result, individual scenes cover broad geographic areas (roughly 3000 km by 5000 km) and can contain varying levels of solar illumination. Mosaics of AVHRR scenes can be created for large (continental and global) study areas. As the north-south extent of such mosaics increases, the lightness variability within the mosaic increases. AVHRR channels one and two of multiple daytime scenes were histogrammed to find a relationship between solar zenith and scene lightness as described by brightness value distribution. This relationship was used to determine look-up tables (luts) which removed effects of varying solar illumination. These luts were combined with a contrast enhancing lut and stored online. For individual scenes, one precomputed composite lut was applied to the entire scene based on the solar zenith at scene center. For mosaicked scenes, each pixel was adjusted based on the solar zenith at that pixel location. These procedures reduce lightness variability within and between scenes and enhance scene contrast to provide visually pleasing imagery.
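    A look-up table of the kind described can be sketched as a brightness gain tied to the cosine of the solar zenith angle; the reference angle and the simple linear model below are assumptions for illustration, not the USGS procedure itself:

```python
import math

ref_zenith = 30.0   # degrees; assumed reference illumination condition

def build_lut(scene_zenith_deg):
    # Scale 8-bit brightness by the ratio of illumination at the
    # reference zenith to illumination at the scene's zenith, clipped
    # to the valid range. A simplifying linear model, assumed here.
    gain = (math.cos(math.radians(ref_zenith)) /
            math.cos(math.radians(scene_zenith_deg)))
    return [min(255, round(v * gain)) for v in range(256)]

lut = build_lut(60.0)        # low sun: the LUT brightens the scene
pixel_out = lut[100]
print(pixel_out)
```

    Applying one precomputed LUT per scene (at the scene-center zenith) or per pixel (at each pixel's zenith) corresponds to the two cases the abstract distinguishes for single scenes versus mosaics.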

  13. Optic Flow Dominates Visual Scene Polarity in Causing Adaptive Modification of Locomotor Trajectory

    NASA Technical Reports Server (NTRS)

    Nomura, Y.; Mulavara, A. P.; Richards, J. T.; Brady, R.; Bloomberg, Jacob J.

    2005-01-01

    Locomotion and posture are influenced and controlled by vestibular, visual and somatosensory information. Optic flow and scene polarity are two characteristics of a visual scene that have been identified as critical in how they affect perceived body orientation and self-motion. The goal of this study was to determine the role of optic flow and visual scene polarity in adaptive modification of locomotor trajectory. Two computer-generated virtual reality scenes were shown to subjects during 20 minutes of treadmill walking. One scene was highly polarized, while the other was composed of objects displayed in a non-polarized fashion. Both virtual scenes depicted constant-rate self-motion equivalent to walking counterclockwise around the perimeter of a room. Subjects performed Stepping Tests blindfolded before and after scene exposure to assess adaptive changes in locomotor trajectory. Subjects showed a significant difference in heading direction between pre- and post-adaptation Stepping Tests when exposed to either scene during treadmill walking. However, there was no significant difference in the subjects' heading direction between the two visual scene polarity conditions. Therefore, it was inferred from these data that optic flow has a greater role than visual polarity in influencing adaptive locomotor function.

  14. Simple line drawings suffice for functional MRI decoding of natural scene categories.

    PubMed

    Walther, Dirk B; Chai, Barry; Caddigan, Eamon; Beck, Diane M; Fei-Fei, Li

    2011-06-01

    Humans are remarkably efficient at categorizing natural scenes. In fact, scene categories can be decoded from functional MRI (fMRI) data throughout the ventral visual cortex, including the primary visual cortex, the parahippocampal place area (PPA), and the retrosplenial cortex (RSC). Here we ask whether, and where, we can still decode scene category if we reduce the scenes to mere lines. We collected fMRI data while participants viewed photographs and line drawings of beaches, city streets, forests, highways, mountains, and offices. Despite the marked difference in scene statistics, we were able to decode scene category from fMRI data for line drawings just as well as from activity for color photographs, in primary visual cortex through PPA and RSC. Even more remarkably, in PPA and RSC, error patterns for decoding from line drawings were very similar to those from color photographs. These data suggest that, in these regions, the information used to distinguish scene category is similar for line drawings and photographs. To determine the relative contributions of local and global structure to the human ability to categorize scenes, we selectively removed long or short contours from the line drawings. In a category-matching task, participants performed significantly worse when long contours were removed than when short contours were removed. We conclude that global scene structure, which is preserved in line drawings, plays an integral part in representing scene categories. PMID:21593417

  15. IR scene image generation from visual image based on thermal database

    NASA Astrophysics Data System (ADS)

    Liao, Binbin; Wang, Zhangye; Ke, Xiaodi; Xia, Yibin; Peng, Qunsheng

    2007-11-01

    In this paper, we propose a new method to generate complex IR scene images directly from the corresponding visual scene image based on a material thermal database. For the input visual scene image, we realize an interactive tool, based on a combination of global magic wand and intelligent scissors, to segment the object areas in the scene, and thermal attributes from the material thermal database are then assigned to each object area. By adopting a scene infrared signature model based on infrared physics and heat transfer, the surface temperature distribution of the scene is calculated, and the corresponding grayscale of each area in the IR image is determined by our transformation rule. We also propose a pixel-based RGB spatial similarity model to determine the mixture grayscales of the residual areas in the scene image. To realistically simulate the IR scene, we develop an IR imager blur model considering the effect of the different resolving power of visual and thermal imagers, IR atmospheric noise, and the modulation transfer function of the thermal imager. Finally, IR scene images at different intervals under different weather conditions are generated. Compared with real IR scene images, our simulated results are quite satisfactory and effective.

  16. Blind subjects construct conscious mental images of visual scenes encoded in musical form.

    PubMed

    Cronly-Dillon, J; Persaud, K C; Blore, R

    2000-11-01

    Blind (previously sighted) subjects are able to analyse, describe and graphically represent a number of high-contrast visual images translated into musical form de novo. We presented musical transforms of a random assortment of photographic images of objects and urban scenes to such subjects, a few of which depicted architectural and other landmarks that may be useful in navigating a route to a particular destination. Our blind subjects were able to use the sound representation to construct a conscious mental image that was revealed by their ability to depict a visual target by drawing it. We noted the similarity between the way the visual system integrates information from successive fixations to form a representation that is stable across eye movements and the way a succession of image frames (encoded in sound) which depict different portions of the image are integrated to form a seamless mental image. Finally, we discuss the profound resemblance between the way a professional musician carries out a structural analysis of a musical composition in order to relate its structure to the perception of musical form and the strategies used by our blind subjects in isolating structural features that collectively reveal the identity of visual form. PMID:11413637

  17. Air resistance measurements on actual airplane parts

    NASA Technical Reports Server (NTRS)

    Weiselsberger, C

    1923-01-01

    For the calculation of the parasite resistance of an airplane, a knowledge of the resistance of the individual structural and accessory parts is necessary. The most reliable basis for this is given by tests with actual airplane parts at airspeeds which occur in practice. The data given here relate to the landing gear of a Siemens-Schuckert DI airplane; the landing gear of a 'Luftfahrzeug-Gesellschaft' airplane (type Roland D.IIa); the landing gear of a 'Flugzeugbau Friedrichshafen' G airplane; a machine gun; and the exhaust manifold of a 269 HP engine.

  18. Detecting and collecting traces of semen and blood from outdoor crime scenes using crime scene dogs and presumptive tests.

    PubMed

    Skalleberg, A G; Bouzga, M M

    2016-07-01

    In 2009, the Norwegian police academy trained its first crime scene dogs, taught to locate traces of seminal fluid and blood at outdoor and indoor crime scenes. The Department of Forensic Biology was invited to take part in this project to educate the police in specimen collection and presumptive testing. We performed tests in which seminal fluid was deposited on different outdoor surfaces for between one hour and six days, and blood on coniferous ground for between one hour and two days. For both body fluids the tests were performed with three different volumes. The crime scene dogs located the stains, and acid phosphatase/tetrabase-barium peroxide were used as presumptive tests before collection for microscopy and DNA analysis. For seminal fluid the dogs were able to locate all stains for up to two days, and only the largest volume after four days. The presumptive tests confirmed the dogs' detections. By microscopy we were able to detect spermatozoa for the smallest volumes up to 32 h, and for the largest volume up to 4 days; the DNA results correlate with these findings. For blood, all the stains were detected by the dogs except the smallest volume after 32 h. The presumptive tests confirmed the dogs' detections. We were able to get DNA results for most stains in the timeframe of 1-48 h with the two largest volumes. The smallest volume showed variation between the parallels, with no DNA results after 24 h. These experiments show that it is critical that body fluids are collected within a certain timeframe to obtain a good DNA result, preferably within the first 24-48 h. Other parameters that should be taken into account are the weather conditions, the type of surface, and the specimen collection. PMID:27174517

  19. A simulation study of scene confusion factors in sensing soil moisture from orbital radar

    NASA Technical Reports Server (NTRS)

    Ulaby, F. T. (Principal Investigator); Dobson, M. C.; Moezzi, S.; Roth, F. T.

    1983-01-01

    Simulated C-band radar imagery for a 124-km by 108-km test site in eastern Kansas is used to classify soil moisture. Simulated radar resolutions are 100 m by 100 m, 1 km by 1 km, and 3 km by 3 km. Distributions of actual near-surface soil moisture are established daily for a 23-day accounting period using a water budget model. Within the 23-day period, three orbital radar overpasses are simulated, roughly corresponding to generally moist, wet, and dry soil moisture conditions. The radar simulations are performed by a target/sensor interaction model dependent upon a terrain model, land-use classification, and near-surface soil moisture distribution. The accuracy of soil-moisture classification is evaluated for each single-date radar observation and also for multi-date detection of relative soil moisture change. In general, the results for single-date moisture detection show that 70% to 90% of cropland can be correctly classified to within +/- 20% of the true percent of field capacity. For a given radar resolution, the expected classification accuracy is shown to be dependent upon both the general soil moisture condition and the geographical distribution of land-use and topographic relief. An analysis of cropland, urban, pasture/rangeland, and woodland subregions within the test site indicates that multi-temporal detection of relative soil moisture change is least sensitive to classification error resulting from scene complexity and topographic effects.

  20. Attention in natural scenes: contrast affects rapid visual processing and fixations alike.

    PubMed

    't Hart, Bernard Marius; Schmidt, Hannah Claudia Elfriede Fanny; Klein-Harmeyer, Ingo; Einhäuser, Wolfgang

    2013-10-19

    For natural scenes, attention is frequently quantified either by performance during rapid presentation or by gaze allocation during prolonged viewing. Both paradigms operate on different time scales, and tap into covert and overt attention, respectively. To compare these, we ask some observers to detect targets (animals/vehicles) in rapid sequences, and others to freely view the same target images for 3 s, while their gaze is tracked. In some stimuli, the target's contrast is modified (increased/decreased) and its background modified either in the same or in the opposite way. We find that increasing target contrast relative to the background increases fixations and detection alike, whereas decreasing target contrast and simultaneously increasing background contrast has little effect. Contrast increase for the whole image (target + background) improves detection, decrease worsens detection, whereas fixation probability remains unaffected by whole-image modifications. Object-unrelated local increase or decrease of contrast attracts gaze, but less than actual objects, supporting a precedence of objects over low-level features. Detection and fixation probability are correlated: the more likely a target is detected in one paradigm, the more likely it is fixated in the other. Hence, the link between overt and covert attention, which has been established in simple stimuli, transfers to more naturalistic scenarios. PMID:24018728

  1. 3-D model-based frame interpolation for distributed video coding of static scenes.

    PubMed

    Maitre, Matthieu; Guillemot, Christine; Morin, Luce

    2007-05-01

    This paper addresses the problem of side information extraction for distributed coding of videos captured by a camera moving in a 3-D static environment. Examples of targeted applications are augmented reality, remote-controlled robots operating in hazardous environments, or remote exploration by drones. It explores the benefits of the structure-from-motion paradigm for distributed coding of this type of video content. Two interpolation methods constrained by the scene geometry, based either on block matching along epipolar lines or on 3-D mesh fitting, are first developed. These techniques are based on a robust algorithm for sub-pel matching of feature points, which leads to semi-dense correspondences between key frames. However, their rate-distortion (RD) performances are limited by misalignments between the side information and the actual Wyner-Ziv (WZ) frames due to the assumption of linear motion between key frames. To cope with this problem, two feature point tracking techniques are introduced, which recover the camera parameters of the WZ frames. A first technique, in which the frames remain encoded separately, performs tracking at the decoder and leads to significant RD performance gains. A second technique further improves the RD performances by allowing a limited tracking at the encoder. As an additional benefit, statistics on tracks allow the encoder to adapt the key frame frequency to the video motion content. PMID:17491456

  2. Raytracing Dynamic Scenes on the GPU Using Grids.

    PubMed

    Guntury, S; Narayanan, P J

    2012-01-01

    Raytracing dynamic scenes at interactive rates has received a lot of attention recently. We present a few strategies for high-performance raytracing on a commodity GPU. The construction of grids needs sorting, which is fast on today's GPUs. The grid is thus the acceleration structure of choice for dynamic scenes, as per-frame rebuilding is required. We advocate the use of appropriate data structures for each stage of raytracing, resulting in multiple structure builds per frame. A perspective grid built for the camera achieves perfect coherence for primary rays. A perspective grid built with respect to each light source provides the best performance for shadow rays. Spherical grids handle lights positioned inside the model space as well as spotlights. Uniform grids are best for reflection and refraction rays with little coherence. We propose an Enforced Coherence method to bring coherence to them by rearranging the ray-to-voxel mapping using sorting. This gives the best performance on GPUs with only user-managed caches. We also propose a simple Independent Voxel Walk method, which performs best by taking advantage of the L1 and L2 caches on recent GPUs. We achieve over 10 fps of total rendering on the Conference model with one light source and one reflection bounce, while rebuilding the data structure for each stage. Ideas presented here are likely to give high performance on future GPUs as well as other manycore architectures. PMID:21383409
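    The per-ray uniform-grid traversal that methods like the Independent Voxel Walk build on can be sketched on the CPU with the classic grid-stepping scheme; the actual GPU kernels, sorting passes, and cache behaviour discussed in the abstract are not shown, and the function name is illustrative:

```python
import math

def voxel_walk(origin, direction, grid_size, cell=1.0):
    """March a ray through a uniform grid (3-D DDA), yielding the voxels it
    crosses in order -- a CPU sketch of a per-ray voxel walk, not the paper's
    GPU implementation."""
    pos = [int(origin[k] // cell) for k in range(3)]
    step, t_max, t_delta = [], [], []
    for k in range(3):
        d = direction[k]
        if d > 0:
            step.append(1)
            t_max.append(((pos[k] + 1) * cell - origin[k]) / d)
            t_delta.append(cell / d)
        elif d < 0:
            step.append(-1)
            t_max.append((pos[k] * cell - origin[k]) / d)
            t_delta.append(-cell / d)
        else:
            step.append(0)
            t_max.append(math.inf)
            t_delta.append(math.inf)
    while all(0 <= pos[k] < grid_size[k] for k in range(3)):
        yield tuple(pos)
        axis = min(range(3), key=lambda k: t_max[k])  # next cell boundary hit
        pos[axis] += step[axis]
        t_max[axis] += t_delta[axis]

# A ray along +x visits one row of voxels.
cells = list(voxel_walk((0.5, 0.5, 0.5), (1.0, 0.0, 0.0), (4, 4, 4)))
# cells == [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)]
```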

  3. Super-Resolution of Dynamic Scenes Using Sampling Rate Diversity.

    PubMed

    Salem, Faisal; Yagle, Andrew E

    2016-08-01

    In earlier work, we proposed a super-resolution (SR) method that required the availability of two low resolution (LR) sequences corresponding to two different sampling rates, where images from one sequence were used as a basis to represent the polyphase components (PPCs) of the high resolution (HR) image, while the other LR sequence provided the reference LR image (to be super-resolved). The (simple) algorithm implemented by Salem and Yagle is only applicable when the scene is static. In this paper, we recast our approach to SR as a two-stage example-based algorithm to process dynamic scenes. We employ feature selection to create, from the LR frames, local LR dictionaries to represent PPCs of HR patches. To enforce sparsity, we implement Gaussian generative models as an efficient alternative to L1-norm minimization. Estimation errors are further reduced using what we refer to as the anchors, which are based on the relationship between PPCs corresponding to different sampling rates. In the second stage, we revert to simple single-frame SR (applied to each frame), using HR dictionaries extracted from the super-resolved sequence of the previous stage. The second stage is thus a reiteration of the sparsity coding scheme, using only one LR sequence, and without involving PPCs. The ability of the modified algorithm to super-resolve challenging LR sequences reintroduces sampling rate diversity as a prerequisite of robust multiframe SR. PMID:27249832
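    The polyphase-component (PPC) decomposition the method builds on is easy to illustrate: down-sampling a high-resolution image by a factor r keeps exactly one of its r x r PPCs. A minimal sketch (not the authors' algorithm):

```python
import numpy as np

def polyphase_components(img, r):
    """Split an image into its r*r polyphase components: PPC (i, j) collects
    every r-th pixel starting at offset (i, j). Keeping one PPC is the same
    as down-sampling by r, which is why LR frames at a different sampling
    rate can serve as a basis for the HR image's PPCs."""
    return {(i, j): img[i::r, j::r] for i in range(r) for j in range(r)}

hr = np.arange(16).reshape(4, 4)
ppcs = polyphase_components(hr, 2)
# ppcs[(0, 0)] is the 2x down-sampled image [[0, 2], [8, 10]]
```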

  4. New Proposition for Redating of Mithraic Tauroctony Scene

    NASA Astrophysics Data System (ADS)

    Bon, Edi; Ćirković, Milan; Milosavljević, Ivana

    Considering the idea that the figures in the central icon of the Mithraic religion, the scene of tauroctony (bull slaying), represent equatorial constellations at the time when the spring equinox lay between Taurus and Aries (Ulansey, 1989), it was hard to explain why some constellations (Orion and Libra) were not included in the Mithraic icons, even though they were equatorial in those times. In simulations of the sky for the times in which the spring equinox was in the constellation of Taurus, only a small range of spring equinox positions allows these two constellations to be excluded while all the other represented equatorial constellations (Taurus, Canis Minor, Hydra, Crater, Corvus, Scorpio) are included. These positions mark the beginning of the age of Taurus, but they also include Gemini as an equatorial constellation. Two of the main figures in the icons of the Mithraic religion are two identical figures, usually represented on each side of the bull, wearing Phrygian caps and holding torches. Their names, Cautes and Cautopates, and their appearance could suggest that they represent the constellation of Gemini. In that case the main icon of the Mithraic religion could represent the event that happened around 4000 BC, when the spring equinox entered the constellation of Taurus. This position of the equator also contains Perseus as an equatorial constellation. In the work of Ulansey it was proposed that the god Mithras was the constellation of Perseus. In that case, all figures in the main scene would be equatorial constellations.

  5. A new proposition for redating the Mithraic tauroctony scene

    NASA Astrophysics Data System (ADS)

    Bon, E.; Ćirković, M. M.; Milosavljević, I.

    2002-07-01

    Assuming that the figures of the central icon of the Mithraic cult - the scene of tauroctony (bull slaying) - represent equatorial constellations at the time when the spring equinox was placed somewhere between Taurus and Aries, it is difficult to explain why some equatorial constellations (Orion and Libra) were not included in the Mithraic icons. In a simulation of the sky for the times in which the spring equinox was in the constellation of Taurus, only a small area of spring equinox positions permits the exclusion of these two constellations, with all other representations of equatorial constellations (Taurus, Canis Minor, Hydra, Crater, Corvus, Scorpio) included. These positions of the spring equinox occurred at the beginning of the age of Taurus, and included Gemini as an equatorial constellation. Two of the main figures in the Mithraic icons are two identical figures, usually represented on each side of the bull, wearing Phrygian caps and holding torches. Their names, Cautes and Cautopates, and their looks may indicate that they represent the constellation of Gemini. In that case the main icon of the Mithraic religion could represent an event that happened around 4000 BC, when the spring equinox entered the constellation of Taurus. Also, this position of the equator contains Perseus as an equatorial constellation. Ulansey suggested that the god Mithras is identified with the constellation Perseus. In that case, all figures in the main scene would be equatorial constellations.

  6. An enhanced MIML algorithm for natural scene image classification

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Zhang, Hui; Yang, Suyan

    2015-12-01

    The multi-instance multi-label (MIML) learning framework is one in which each example is described by a bag of instances and associated with a set of labels. In some studies, MIML algorithms have been applied to natural scene image classification and have achieved satisfactory performance. We design a MIML algorithm based on an RBF neural network for natural scene image classification. Within this framework, we compare classification accuracy using the existing definitions of bag distance: maximum Hausdorff, minimum Hausdorff, and average Hausdorff. Although the accuracy with the average Hausdorff bag distance is the highest, we find that the average Hausdorff bag distance weakens the role of the minimum distance between the instances in the two bags. We therefore redefine the average Hausdorff bag distance by introducing an adaptive adjustment coefficient, which changes according to the minimum distance between the instances in the two bags. Finally, the experimental results show that the enhanced algorithm outperforms the original one.
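    For reference, the three standard Hausdorff bag distances compared in the abstract can be computed as below. The paper's adaptive adjustment coefficient is not specified in the abstract, so it is deliberately omitted rather than guessed; this sketch shows only the maximum, minimum, and average variants:

```python
import numpy as np

def hausdorff_distances(A, B):
    """Maximum, minimum, and average Hausdorff distances between two bags
    A and B (rows = instances), as commonly defined in MIML work."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise
    h_ab, h_ba = D.min(axis=1), D.min(axis=0)   # nearest-neighbour distances
    max_h = max(h_ab.max(), h_ba.max())         # classic (maximum) Hausdorff
    min_h = D.min()                             # minimum Hausdorff
    avg_h = (h_ab.sum() + h_ba.sum()) / (len(A) + len(B))  # average Hausdorff
    return max_h, min_h, avg_h

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0], [4.0, 0.0]])
max_h, min_h, avg_h = hausdorff_distances(A, B)
```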

  7. Nonuniformity correction of a resistor array infrared scene projector

    NASA Astrophysics Data System (ADS)

    Olson, Eric M.; Murrer, Robert Lee, Jr.

    1999-07-01

    At the Kinetic-kill vehicle Hardware-in-the-Loop Simulator (KHILS) facility located at Eglin AFB, Florida, a technology has been developed for the projection of scenes to support hardware-in-the-loop testing of infrared seekers. The Wideband Infrared Scene Projector program is based on a 512 X 512 VLSI array of 2 mil pitch resistors. A characteristic of these projectors is that each resistor emits measurably different in-band radiance when the same voltage is applied. Since it is desirable to have each resistor emit identically for a commanded radiance, each resistor requires a Non-Uniformity Correction (NUC). Though this NUC task may seem simple to a casual observer, it is quite complicated. A high-quality infrared camera and a well-designed optical system are prerequisites to measuring each resistor's output accurately for correction. A technique for performing a NUC on a resistor array has been developed and implemented at KHILS that achieves a NUC (standard deviation of output/mean output) of less than 1 percent. This paper presents details pertaining to the NUC system, procedures, and results.
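    As a hedged illustration of the general idea (not the KHILS procedure itself), a textbook two-point NUC derives a per-element gain and offset from radiance measurements at two drive levels:

```python
import numpy as np

def two_point_nuc(low_meas, high_meas, low_cmd, high_cmd):
    """Per-element gain/offset from measurements at two command levels --
    a textbook two-point NUC sketch, not the KHILS technique. Returns
    (gain, offset) such that gain * measured + offset maps every element
    onto the commanded radiances at both levels."""
    gain = (high_cmd - low_cmd) / (high_meas - low_meas)
    offset = low_cmd - gain * low_meas
    return gain, offset

# Measured radiance of a 2x2 array at two drive levels (arbitrary units).
low = np.array([[1.0, 1.2], [0.9, 1.1]])
high = np.array([[5.0, 5.6], [4.8, 5.3]])
gain, offset = two_point_nuc(low, high, low_cmd=1.0, high_cmd=5.0)
# After correction, every element reads 1.0 at the low level and 5.0 at the high.
```

A real resistor array's response is nonlinear in drive voltage, which is one reason the actual NUC task is "quite complicated"; a two-point linear correction is only the simplest starting point.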

  8. Semantic description of drama scene by using SD-form

    NASA Astrophysics Data System (ADS)

    Niimi, Michiharu; Kawaguchi, Eiji

    1997-01-01

    Multimedia data processing is becoming more and more a central concern among people working on image processing, and multimedia database retrieval is one such problem. A foreign-language study assisting system is a good example of a multimedia database design, because each language depends on the conversational situation, such as the topic and speech intention as well as the place of conversation. In that case, we cannot neglect the semantic aspect of multimedia information. The authors' group has already proposed a semantic structure description form of language meaning, called the SD-form, and has studied the feasibility of its application to natural language generation, story understanding, and conversational text retrieval systems. This paper presents our new attempt to expand our previous system from a text database system to a multimedia database system which includes motion picture and speech sound as well as language text. The source of the data in this project is a series of bilingual TV dramas broadcast in Japan. The most important point of this attempt is that each video scene is described by a set of SD-forms, by which scenes can be retrieved semantically.

  9. A hybrid multiview stereo algorithm for modeling urban scenes.

    PubMed

    Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu; Vu, Hoang-Hiep

    2013-01-01

    We present an original multiview stereo reconstruction algorithm which allows the 3D-modeling of urban scenes as a combination of meshes and geometric primitives. The method provides a compact model while preserving details: Irregular elements such as statues and ornaments are described by meshes, whereas regular structures such as columns and walls are described by primitives (planes, spheres, cylinders, cones, and tori). We adopt a two-step strategy consisting first in segmenting the initial mesh-based surface using a multilabel Markov Random Field-based model and second in sampling primitive and mesh components simultaneously on the obtained partition by a Jump-Diffusion process. The quality of a reconstruction is measured by a multi-object energy model which takes into account both photo-consistency and semantic considerations (i.e., geometry and shape layout). The segmentation and sampling steps are embedded into an iterative refinement procedure which provides an increasingly accurate hybrid representation. Experimental results on complex urban structures and large scenes are presented and compared to state-of-the-art multiview stereo meshing algorithms. PMID:22487981

  10. Satellite modulation transfer function estimation from natural scenes

    NASA Astrophysics Data System (ADS)

    Xiyang, Zhi; Wei, Zhang; Xuan, Sun; Dawei, Wang

    2015-11-01

    We propose an in-orbit modulation transfer function (MTF) statistical estimation algorithm based on natural scenes, called SeMTF. The algorithm can estimate the in-orbit MTF of a sensor from an image without specialized targets. First, the power spectrum of a satellite image is analyzed, and a two-dimensional (2-D) fractional Brownian motion model is adopted to represent the natural scene. The in-orbit MTF is modeled by a parametric exponential function, and the statistical model of satellite imaging is then established. Second, the model is solved by an improved profile-likelihood method. To handle the nuisance parameters in the profile-likelihood function, we divide the estimation problem into two minimization problems, for the parameters of the MTF model and for the nuisance parameters, respectively. By alternating the two iterative minimizations, the result converges to the optimal MTF parameters, yielding the SeMTF algorithm. Finally, the algorithm is tested using real satellite images. Experimental results indicate that the estimation of the MTF is highly accurate.
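    The abstract says the in-orbit MTF is modeled by a parametric exponential function without giving its form. Assuming a common hypothetical choice, MTF(f) = exp(-(f/fc)^b), the parameters can be recovered from samples by a double log-linearization; this sketch is an assumption, not the SeMTF estimator:

```python
import numpy as np

def fit_exponential_mtf(freqs, mtf_samples):
    """Fit MTF(f) = exp(-(f / fc)**b) to measured samples by linearizing
    twice: ln(-ln MTF) = b*ln f - b*ln fc is linear in ln f, so an ordinary
    least-squares line fit recovers b (slope) and fc (from the intercept).
    The parametric form is a hypothetical stand-in for the paper's model."""
    y = np.log(-np.log(mtf_samples))          # requires 0 < MTF < 1
    b, c = np.polyfit(np.log(freqs), y, 1)    # slope b, intercept c = -b*ln fc
    fc = np.exp(-c / b)
    return fc, b

# Recover the parameters from noiseless synthetic data.
f = np.linspace(0.05, 0.5, 10)
true_fc, true_b = 0.3, 1.5
samples = np.exp(-(f / true_fc) ** true_b)
fc, b = fit_exponential_mtf(f, samples)
# fc ~= 0.3, b ~= 1.5
```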

  11. Conceptual priming for realistic auditory scenes and for auditory words.

    PubMed

    Frey, Aline; Aramaki, Mitsuko; Besson, Mireille

    2014-02-01

    Two experiments were conducted using both behavioral and Event-Related brain Potentials methods to examine conceptual priming effects for realistic auditory scenes and for auditory words. Prime and target sounds were presented in four stimulus combinations: Sound-Sound, Word-Sound, Sound-Word and Word-Word. Within each combination, targets were conceptually related to the prime, unrelated or ambiguous. In Experiment 1, participants were asked to judge whether the primes and targets fit together (explicit task) and in Experiment 2 they had to decide whether the target was typical or ambiguous (implicit task). In both experiments and in the four stimulus combinations, reaction times and/or error rates were longer/higher and the N400 component was larger to ambiguous targets than to conceptually related targets, thereby pointing to a common conceptual system for processing auditory scenes and linguistic stimuli in both explicit and implicit tasks. However, fine-grained analyses also revealed some differences between experiments and conditions in scalp topography and duration of the priming effects possibly reflecting differences in the integration of perceptual and cognitive attributes of linguistic and nonlinguistic sounds. These results have clear implications for the building-up of virtual environments that need to convey meaning without words. PMID:24378910

  12. No emotional "pop-out" effect in natural scene viewing.

    PubMed

    Acunzo, David J; Henderson, John M

    2011-10-01

    It has been shown that attention is drawn toward emotional stimuli. In particular, eye movement research suggests that gaze is attracted toward emotional stimuli in an unconscious, automated manner. We addressed whether this effect remains when emotional targets are embedded within complex real-world scenes. Eye movements were recorded while participants memorized natural images. Each image contained an item that was either neutral, such as a bag, or emotional, such as a snake or a couple hugging. We found no latency difference for the first target fixation between the emotional and neutral conditions, suggesting no extrafoveal "pop-out" effect of emotional targets. However, once detected, emotional targets held attention for a longer time than neutral targets. The failure of emotional items to attract attention seems to contradict previous eye-movement research using emotional stimuli. However, our results are consistent with studies examining semantic drive of overt attention in natural scenes. Interpretations of the results in terms of perceptual and attentional load are provided. PMID:21787079

  13. Neural dynamics of change detection in crowded acoustic scenes.

    PubMed

    Sohoglu, Ediz; Chait, Maria

    2016-02-01

    Two key questions concerning change detection in crowded acoustic environments are the extent to which cortical processing is specialized for different forms of acoustic change and when in the time-course of cortical processing neural activity becomes predictive of behavioral outcomes. Here, we address these issues by using magnetoencephalography (MEG) to probe the cortical dynamics of change detection in ongoing acoustic scenes containing as many as ten concurrent sources. Each source was formed of a sequence of tone pips with a unique carrier frequency and temporal modulation pattern, designed to mimic the spectrotemporal structure of natural sounds. Our results show that listeners are more accurate and quicker to detect the appearance (than disappearance) of an auditory source in the ongoing scene. Underpinning this behavioral asymmetry are change-evoked responses differing not only in magnitude and latency, but also in their spatial patterns. We find that even the earliest (~50 ms) cortical response to change is predictive of behavioral outcomes (detection times), consistent with the hypothesized role of local neural transients in supporting change detection. PMID:26631816

  14. Diode laser arrays for dynamic infrared scene projection

    NASA Astrophysics Data System (ADS)

    Beasley, D. Brett; Cooper, John B.

    1993-08-01

    A novel concept for dynamic IR scene projection using IR diode lasers has been developed. This technology offers significant cost and performance advantages over other currently available projector technologies. Performance advantages include high dynamic range, multiple wavebands, and high frame rates. A projector system which utilizes a 16-element linear array has been developed and integrated into the millimeter wave/infrared (MMW/IR) hardware-in-the-loop (HWIL) facility at the US Army Missile Command's (USAMICOM's) Research, Development, and Engineering Center (RDEC). This projector has demonstrated dynamic range in excess of 10(exp 5), apparent temperatures greater than 2500 C, and nanosecond response times. Performance characteristics for this projector system are presented in the paper. Designs for projectors to test other IR sensor configurations, including FPAs, have been developed and are presented as well. The FPA design consists of a linear array of diode lasers scanned by a polygon mirror. This low-cost projector offers high-resolution, high-contrast 2-D scenes at up to 10 kHz frame rates. Simulation of active IR countermeasures is another promising application of diode laser projector systems. The diode laser is capable of simulating flares or virtually any IR jammer waveform.

  15. Evaluation methodology for query-based scene understanding systems

    NASA Astrophysics Data System (ADS)

    Huster, Todd P.; Ross, Timothy D.; Culbertson, Jared L.

    2015-05-01

    In this paper, we are proposing a method for the principled evaluation of scene understanding systems in a query-based framework. We can think of a query-based scene understanding system as a generalization of typical sensor exploitation systems where instead of performing a narrowly defined task (e.g., detect, track, classify, etc.), the system can perform general user-defined tasks specified in a query language. Examples of this type of system have been developed as part of DARPA's Mathematics of Sensing, Exploitation, and Execution (MSEE) program. There is a body of literature on the evaluation of typical sensor exploitation systems, but the open-ended nature of the query interface introduces new aspects to the evaluation problem that have not been widely considered before. In this paper, we state the evaluation problem and propose an approach to efficiently learn about the quality of the system under test. We consider the objective of the evaluation to be to build a performance model of the system under test, and we rely on the principles of Bayesian experiment design to help construct and select optimal queries for learning about the parameters of that model.
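One simple instantiation of choosing informative evaluation queries is to keep a Beta posterior over the system's success rate on each query type and probe next where the predicted outcome is most uncertain. A toy sketch, assuming hypothetical query types and counts, and using predictive entropy as a stand-in for the full Bayesian experiment-design criterion described in the abstract:

```python
import math

def predictive_entropy(successes, failures):
    """Entropy (bits) of the next query outcome under a Beta(1+s, 1+f) posterior."""
    # Posterior mean of the success rate with a uniform Beta(1, 1) prior
    p = (successes + 1) / (successes + failures + 2)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def next_query(history):
    """Pick the query type whose outcome is least predictable (most informative)."""
    return max(history, key=lambda q: predictive_entropy(*history[q]))

# (successes, failures) observed so far for each hypothetical query type
history = {
    "detect-vehicle": (18, 2),    # already well characterized
    "count-pedestrians": (3, 3),  # still uncertain, so most worth probing
}
print(next_query(history))  # count-pedestrians
```

A full experiment-design treatment would maximize expected information gain about the parameters of the performance model rather than raw predictive entropy, but the selection logic has the same shape.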

  16. A scheme for automatic text rectification in real scene images

    NASA Astrophysics Data System (ADS)

    Wang, Baokang; Liu, Changsong; Ding, Xiaoqing

    2015-03-01

Digital cameras are gradually replacing traditional flat-bed scanners as the main means of acquiring text information, owing to their usability, low cost, and high resolution, and a large amount of research has been done on camera-based text understanding. Unfortunately, the arbitrary position of the camera lens relative to the text area frequently causes perspective distortion, which most current OCR systems cannot handle, creating demand for automatic text rectification. Rectification research to date has mainly focused on document images; distortion of natural-scene text is seldom considered. In this paper, a scheme for automatic text rectification in natural scene images is proposed. It relies on geometric information extracted from the characters themselves as well as from their surroundings. In the first step, linear segments are extracted from the region of interest, and J-Linkage-based clustering is performed, followed by customized refinement, to estimate the primary vanishing points (VPs). To achieve a more comprehensive VP estimation, a second stage inspects the internal structure of characters, involving analysis of pixels and connected components of text lines. Finally, the VPs are verified and used to perform perspective rectification. Experiments demonstrate an increase in recognition rate and an improvement over some related algorithms.
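Vanishing-point estimation of the kind described rests on intersecting extended line segments; in homogeneous coordinates, both the line through two points and the intersection of two lines are cross products. A minimal sketch, with illustrative segment endpoints (the paper clusters many segments with J-Linkage rather than intersecting a single pair):

```python
def cross(a, b):
    """Cross product of two homogeneous 3-vectors."""
    return (
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    )

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return cross((p[0], p[1], 1.0), (q[0], q[1], 1.0))

def intersect(l1, l2):
    """Intersection of two homogeneous lines; None if they are parallel."""
    x, y, w = cross(l1, l2)
    if abs(w) < 1e-12:
        return None  # parallel lines meet at infinity
    return (x / w, y / w)

# Two segments that converge toward a common vanishing point
vp = intersect(line_through((0, 0), (2, 1)), line_through((0, 2), (2, 2.5)))
print(vp)  # (8.0, 4.0)
```

Robust VP estimation then amounts to finding clusters of segments whose pairwise intersections agree, which is exactly what J-Linkage provides.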

  17. The Influence of Familiarity on Affective Responses to Natural Scenes

    NASA Astrophysics Data System (ADS)

    Sanabria Z., Jorge C.; Cho, Youngil; Yamanaka, Toshimasa

    This kansei study explored how familiarity with image-word combinations influences affective states. Stimuli were obtained from Japanese print advertisements (ads), and consisted of images (e.g., natural-scene backgrounds) and their corresponding headlines (advertising copy). Initially, a group of subjects evaluated their level of familiarity with images and headlines independently, and stimuli were filtered based on the results. In the main experiment, a different group of subjects rated their pleasure and arousal to, and familiarity with, image-headline combinations. The Self-Assessment Manikin (SAM) scale was used to evaluate pleasure and arousal, and a bipolar scale was used to evaluate familiarity. The results showed a high correlation between familiarity and pleasure, but low correlation between familiarity and arousal. The characteristics of the stimuli, and their effect on the variables of pleasure, arousal and familiarity, were explored through ANOVA. It is suggested that, in the case of natural-scene ads, familiarity with image-headline combinations may increase the pleasure response to the ads, and that certain components in the images (e.g., water) may increase arousal levels.

  18. Scene understanding based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2005-05-01

New generations of smart weapons and unmanned vehicles must have reliable perceptual systems similar to human vision. Instead of computing precise 3-dimensional models, a network-symbolic system converts image information into an "understandable" Network-Symbolic format, similar to relational knowledge models. The logic of visual scenes can be captured in the Network-Symbolic models and used for the disambiguation of visual information. It is hard to apply geometric operations to natural images; instead, the brain builds a relational network-symbolic structure of a visual scene, using different cues to establish the relational order of surfaces and objects. Feature, symbol, and predicate are equivalent in the biologically inspired Network-Symbolic systems. A linking mechanism binds these features/symbols into coherent structures, and the image is converted from a "raster" into a "vector" representation that can be better interpreted by higher-level knowledge structures. View-based object recognition is a hard problem for traditional algorithms that directly match a primary view of an object to a model. In Network-Symbolic Models, the derived structure, not the primary view, is the subject of recognition. Such recognition is not affected by local changes and by the appearance of the object as seen from a set of similar views.

  19. Reversed effects of spatial compatibility in natural scenes.

    PubMed

    Müsseler, Jochen; Aschersleben, Gisa; Arning, Katrin; Proctor, Robert W

    2009-01-01

    Effects of spatial stimulus-response compatibility are often attributed to automatic position-based activation of the response elicited by a stimulus. Three experiments examined this assumption in natural scenes. In Experiments 1 and 2, participants performed simulated driving, and a person appeared periodically on either side of the road. Participants were to turn toward a person calling a taxi and away from a person carelessly entering the street. The spatially incompatible response was faster than the compatible response, but neutral stimuli showed a typical benefit for spatially compatible responses. Placing the people further in the visual periphery eliminated the advantage for the incompatible response and showed an advantage for the compatible response. In Experiment 3, participants made left-right joystick responses to a vicious dog or puppy in a walking scenario. Instructions were to avoid the vicious dog and approach the puppy or vice versa. Results again showed an advantage for the spatially incompatible response. Thus, the typically observed advantage of spatially compatible responses was reversed for dangerous situations in natural scenes. PMID:19827702

  20. Neural dynamics of change detection in crowded acoustic scenes

    PubMed Central

    Sohoglu, Ediz; Chait, Maria

    2016-01-01

    Two key questions concerning change detection in crowded acoustic environments are the extent to which cortical processing is specialized for different forms of acoustic change and when in the time-course of cortical processing neural activity becomes predictive of behavioral outcomes. Here, we address these issues by using magnetoencephalography (MEG) to probe the cortical dynamics of change detection in ongoing acoustic scenes containing as many as ten concurrent sources. Each source was formed of a sequence of tone pips with a unique carrier frequency and temporal modulation pattern, designed to mimic the spectrotemporal structure of natural sounds. Our results show that listeners are more accurate and quicker to detect the appearance (than disappearance) of an auditory source in the ongoing scene. Underpinning this behavioral asymmetry are change-evoked responses differing not only in magnitude and latency, but also in their spatial patterns. We find that even the earliest (~ 50 ms) cortical response to change is predictive of behavioral outcomes (detection times), consistent with the hypothesized role of local neural transients in supporting change detection. PMID:26631816

  1. Bivariate statistical modeling of color and range in natural scenes

    NASA Astrophysics Data System (ADS)

    Su, Che-Chun; Cormack, Lawrence K.; Bovik, Alan C.

    2014-02-01

The statistical properties embedded in visual stimuli from the surrounding environment guide and affect the evolutionary processes of human vision systems. There are strong statistical relationships between co-located luminance/chrominance and disparity bandpass coefficients in natural scenes. However, these statistical relationships have so far been developed only into point-wise statistical models, although there exist spatial dependencies between adjacent pixels in both 2D color images and range maps. Here we study the bivariate statistics of the joint and conditional distributions of spatially adjacent bandpass responses on both luminance/chrominance and range data of naturalistic scenes. We deploy bivariate generalized Gaussian distributions to model the underlying statistics. The analysis and modeling results show that there exist important and useful statistical properties of both joint and conditional distributions, which can be reliably described by the corresponding bivariate generalized Gaussian models. Furthermore, by utilizing these robust bivariate models, we are able to incorporate measurements of bivariate statistics between spatially adjacent luminance/chrominance and range information into various 3D image/video and computer vision applications, e.g., quality assessment, 2D-to-3D conversion, etc.
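The bivariate generalized Gaussian density that such modeling relies on has a closed form. A minimal sketch of the zero-mean 2D case, using one common parameterization (the covariance and shape values below are illustrative, not fitted to scene data):

```python
import math

def bggd_pdf(x, y, sigma, beta):
    """Zero-mean bivariate generalized Gaussian density.

    sigma: 2x2 covariance-like matrix [[a, b], [b, c]]
    beta:  shape parameter; beta = 1 recovers the bivariate Gaussian, and
           beta < 1 gives the heavier tails typical of bandpass image data.
    """
    (a, b), (_, c) = sigma
    det = a * c - b * b
    # Mahalanobis quadratic form x^T Sigma^{-1} x, with the 2x2 inverse inlined
    q = (c * x * x - 2 * b * x * y + a * y * y) / det
    d = 2  # dimensionality
    norm = (math.gamma(d / 2) * beta) / (
        math.pi ** (d / 2) * math.gamma(d / (2 * beta))
        * 2 ** (d / (2 * beta)) * math.sqrt(det)
    )
    return norm * math.exp(-0.5 * q ** beta)

# Sanity check: at beta = 1 the peak equals the bivariate Gaussian peak,
# 1 / (2 * pi * sqrt(det))
sigma = [[1.0, 0.3], [0.3, 1.0]]
peak = bggd_pdf(0.0, 0.0, sigma, beta=1.0)
```

Fitting in practice would estimate sigma and beta from the joint histograms of adjacent bandpass responses, e.g. by moment matching or maximum likelihood.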

  2. Gender, smiling, and witness credibility in actual trials.

    PubMed

    Nagle, Jacklyn E; Brodsky, Stanley L; Weeter, Kaycee

    2014-01-01

    It has been acknowledged that females exhibit more smiling behaviors than males, but there has been little attention to this gender difference in the courtroom. Although both male and female witnesses exhibit smiling behaviors, there has been no research examining the subsequent effect of gender and smiling on witness credibility. This study used naturalistic observation to examine smiling behaviors and credibility in actual witnesses testifying in court. Raters assessed the smiling behaviors and credibility (as measured by the Witness Credibility Scale) of 32 male and female witnesses testifying in trials in a mid-sized Southern city. "Credibility raters" rated the perceived likeability, trustworthiness, confidence, knowledge, and overall credibility of the witnesses using the Witness Credibility Scale. "Smile raters" noted smiling frequency and types, including speaking/expressive and listening/receptive smiles. Gender was found to affect perceived trustworthiness ratings, in which male witnesses were seen as more trustworthy than female witnesses. No significant differences were found in the smiling frequency for male and female witnesses. However, the presence of smiling was found to contribute to perceived likeability of a witness. Smiling female witnesses were found to be more likeable than smiling male and non-smiling female witnesses. PMID:24634058

  3. Scene-Motion Thresholds During Head Yaw for Immersive Virtual Environments

    PubMed Central

    Jerald, Jason; Whitton, Mary; Brooks, Frederick P.

    2014-01-01

    In order to better understand how scene motion is perceived in immersive virtual environments, we measured scene-motion thresholds under different conditions across three experiments. Thresholds were measured during quasi-sinusoidal head yaw, single left-to-right or right-to-left head yaw, different phases of head yaw, slow to fast head yaw, scene motion relative to head yaw, and two scene illumination levels. We found that across various conditions 1) thresholds are greater when the scene moves with head yaw (corresponding to gain < 1:0) than when the scene moves against head yaw (corresponding to gain > 1:0), and 2) thresholds increase as head motion increases. PMID:25705137

  4. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

Capturing and modeling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetry, and remote sensing. This experiment uses multi-source data fusion for 3D scene reconstruction based on the principles of 3D laser scanning, taking laser point-cloud data as the basis and a digital ortho-photo map as an auxiliary source, with 3ds Max as the basic tool for building the three-dimensional scene. The article covers data acquisition, data preprocessing, and 3D scene construction. The results show that the reconstructed scene is visually faithful and that its accuracy meets the needs of 3D scene construction.

  5. The CZSaw notes case study

    NASA Astrophysics Data System (ADS)

    Lee, Eric; Gupta, Ankit; Darvill, David; Dill, John; Shaw, Chris D.; Woodbury, Robert

    2013-12-01

Analysts need to keep track of their analytic findings, observations, ideas, and hypotheses throughout the analysis process. While some visual analytics tools support such note-taking needs, these notes are often represented as objects separate from the data and in a workspace separate from the data visualizations. Representing notes the same way as the data and integrating them with data visualizations can enable analysts to build a more cohesive picture of the analytical process. We created a note-taking functionality called CZNotes within the visual analytics tool CZSaw for analyzing unstructured text documents. CZNotes are designed to use the same model as the data and can thus be visualized in CZSaw's existing data views. We conducted a preliminary case study to observe the use of CZNotes and found that CZNotes has the potential to support progressive analysis, to act as a shortcut to the data, and to support the creation of new data relationships.

  6. Emergence of Visual Saliency from Natural Scenes via Context-Mediated Probability Distributions Coding

    PubMed Central

    Xu, Jinhua; Yang, Zhiyong; Tsien, Joe Z.

    2010-01-01

    Visual saliency is the perceptual quality that makes some items in visual scenes stand out from their immediate contexts. Visual saliency plays important roles in natural vision in that saliency can direct eye movements, deploy attention, and facilitate tasks like object detection and scene understanding. A central unsolved issue is: What features should be encoded in the early visual cortex for detecting salient features in natural scenes? To explore this important issue, we propose a hypothesis that visual saliency is based on efficient encoding of the probability distributions (PDs) of visual variables in specific contexts in natural scenes, referred to as context-mediated PDs in natural scenes. In this concept, computational units in the model of the early visual system do not act as feature detectors but rather as estimators of the context-mediated PDs of a full range of visual variables in natural scenes, which directly give rise to a measure of visual saliency of any input stimulus. To test this hypothesis, we developed a model of the context-mediated PDs in natural scenes using a modified algorithm for independent component analysis (ICA) and derived a measure of visual saliency based on these PDs estimated from a set of natural scenes. We demonstrated that visual saliency based on the context-mediated PDs in natural scenes effectively predicts human gaze in free-viewing of both static and dynamic natural scenes. This study suggests that the computation based on the context-mediated PDs of visual variables in natural scenes may underlie the neural mechanism in the early visual cortex for detecting salient features in natural scenes. PMID:21209963
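The core idea, that saliency falls directly out of estimated probability distributions as the self-information of the input, can be sketched with a toy discrete estimator (the feature quantization and data below are illustrative; the paper's actual model estimates context-mediated PDs with a modified ICA over natural scenes):

```python
import math
from collections import Counter

def estimate_pd(feature_samples):
    """Estimate a discrete probability distribution of (quantized) feature
    values gathered from a set of scene patches (the 'context')."""
    counts = Counter(feature_samples)
    total = sum(counts.values())
    return {v: n / total for v, n in counts.items()}

def saliency(pd, value, floor=1e-6):
    """Self-information -log2 p(value): improbable features are salient."""
    return -math.log2(pd.get(value, floor))

# Toy context: mostly low-contrast patches, rarely high-contrast ones
context = ["low"] * 95 + ["high"] * 5
pd = estimate_pd(context)
print(saliency(pd, "high") > saliency(pd, "low"))  # True
```

The rare feature gets the high saliency score, mirroring the claim that units acting as PD estimators "directly give rise to a measure of visual saliency of any input stimulus."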

  7. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 10 2012-01-01 2012-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  8. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 10 2014-01-01 2014-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  9. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 10 2013-01-01 2013-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  10. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 10 2011-01-01 2011-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  11. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  12. The actual status of Astronomy in Moldova

    NASA Astrophysics Data System (ADS)

    Gaina, A.

Astronomical research in the Republic of Moldova after Nicolae Donitch (Donici) (1874-1956(?)) was renewed in 1957, when a satellite observation station was opened in Chisinau. Photometric observations and the rotation of the first Soviet artificial satellites were investigated under the SPIN program run by the Academies of Sciences of the former socialist countries. The work was led by Assoc. Prof. Dr. V. Grigorevskij, who also conducted research on variable stars. Later, at the beginning of the 1960s, an astronomical observatory of the Chisinau State University named after Lenin (now the State University of Moldova) was opened near the villages of Lozovo and Ciuciuleni; its work was coordinated by Odessa State University (Prof. V.P. Tsesevich) and the Astrosovet of the USSR. Two main groups worked in this area: the first led by V. Grigorevskij (until 1971) and the second by L.I. Shakun (until 1988), both graduates of Odessa State University. Other astronomical observations were also made: comet observations, astroclimate, and atmospheric optics, in collaboration with the Institute of Atmospheric Optics of the Siberian Branch of the USSR Academy of Sciences (V. Chernobai, I. Nacu, C. Usov, and A.F. Poiata). Comet observations were also made from 1988 by D.I. Gorodetskij, who came to Chisinau from Alma-Ata and collaborated with Ukrainian astronomers led by K.I. Churyumov. Another part of the space research was carried out at the Tiraspol State Pedagogical University from the beginning of the 1970s by a group of its teaching staff: M.D. Polanuer and V.S. Sholokhov. No collaboration currently exists between Moldovan and Transdniestrian astronomers because of the 1992 war in Transdniestria. An important area of research concerned the radiophysics of the ionosphere, conducted in Beltsy at the Beltsy State Pedagogical Institute by a group of the university's teaching staff since the beginning of the 1970s: N.D. 
Filip, E

  13. What Galvanic Vestibular Stimulation Actually Activates

    PubMed Central

    Curthoys, Ian S.; MacDougall, Hamish Gavin

    2012-01-01

In a recent paper in Frontiers, Cohen et al. (2012) asked "What does galvanic vestibular stimulation actually activate?" and concluded that galvanic vestibular stimulation (GVS) causes predominantly otolithic behavioral responses. In this Perspective paper we show that such a conclusion does not follow from the evidence. The evidence from neurophysiology is very clear: galvanic stimulation activates primary otolithic neurons as well as primary semicircular-canal neurons (Kim and Curthoys, 2004). Irregular neurons are activated at lower currents. The answer to which behavior is activated depends on what is measured and how it is measured, including not just technical details, such as the frame rate of the video, but the exact experimental context in which the measurement took place (visual fixation vs. total darkness). Both canal- and otolith-dependent responses are activated by GVS. PMID:22833733

  14. MODIS Solar Diffuser: Modelled and Actual Performance

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene; Xiong, Xiao-Xiong; Esposito, Joe; Wang, Xin-Dong; Krebs, Carolyn (Technical Monitor)

    2001-01-01

The Moderate Resolution Imaging Spectroradiometer (MODIS) instrument's solar diffuser is used in its radiometric calibration for the reflective solar bands (VIS, NIR, and SWIR), ranging from 0.41 to 2.1 microns. The sun illuminates the solar diffuser either directly or through an attenuation screen, which consists of a regular array of pinholes. The attenuated illumination pattern on the solar diffuser is not uniform but consists of a multitude of pinhole images of the sun. This non-uniform illumination produces small but noticeable radiometric effects. A description of the computer model used to simulate the effects of the attenuation screen is given, and the predictions of the model are compared with actual, on-orbit calibration measurements.
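To first order, the nominal attenuation of such a pinhole screen is set by its open-area fraction. A back-of-envelope sketch, with illustrative hole radius and pitch (not MODIS specifications), ignoring diffraction:

```python
import math

def open_area_fraction(hole_radius_mm, pitch_mm):
    """Fraction of light transmitted by a regular square grid of pinholes:
    (hole area) / (unit-cell area)."""
    return math.pi * hole_radius_mm ** 2 / pitch_mm ** 2

# e.g. 0.1 mm radius holes on a 2 mm pitch
t = open_area_fraction(0.1, 2.0)
print(round(1 / t))  # attenuation factor of roughly 127
```

The non-uniformity the abstract describes comes from the discreteness of the grid: each pinhole projects its own image of the sun onto the diffuser, so the illumination is a mosaic rather than the smooth average this fraction implies.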

  15. 12 CFR 561.33 - Note account.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 5 2011-01-01 2011-01-01 false Note account. 561.33 Section 561.33 Banks and... SAVINGS ASSOCIATIONS § 561.33 Note account. The term note account means a note, subject to the right of... States Treasury Department regulations. Note accounts are not savings accounts or savings deposits....

  16. 12 CFR 561.33 - Note account.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Note account. 561.33 Section 561.33 Banks and... SAVINGS ASSOCIATIONS § 561.33 Note account. The term note account means a note, subject to the right of... States Treasury Department regulations. Note accounts are not savings accounts or savings deposits....

  17. Seek and you shall remember: scene semantics interact with visual search to build better memories.

    PubMed

    Draschkow, Dejan; Wolfe, Jeremy M; Võ, Melissa L H

    2014-01-01

    Memorizing critical objects and their locations is an essential part of everyday life. In the present study, incidental encoding of objects in naturalistic scenes during search was compared to explicit memorization of those scenes. To investigate if prior knowledge of scene structure influences these two types of encoding differently, we used meaningless arrays of objects as well as objects in real-world, semantically meaningful images. Surprisingly, when participants were asked to recall scenes, their memory performance was markedly better for searched objects than for objects they had explicitly tried to memorize, even though participants in the search condition were not explicitly asked to memorize objects. This finding held true even when objects were observed for an equal amount of time in both conditions. Critically, the recall benefit for searched over memorized objects in scenes was eliminated when objects were presented on uniform, non-scene backgrounds rather than in a full scene context. Thus, scene semantics not only help us search for objects in naturalistic scenes, but appear to produce a representation that supports our memory for those objects beyond intentional memorization. PMID:25015385

  18. The Role of Semantic Interference in Limiting Memory for the Details of Visual Scenes

    PubMed Central

    Melcher, David; Murphy, Brian

    2011-01-01

    Many studies suggest a large capacity memory for briefly presented pictures of whole scenes. At the same time, visual working memory (WM) of scene elements is limited to only a few items. We examined the role of retroactive interference in limiting memory for visual details. Participants viewed a scene for 5 s and then, after a short delay containing either a blank screen or 10 distracter scenes, answered questions about the location, color, and identity of objects in the scene. We found that the influence of the distracters depended on whether they were from a similar semantic domain, such as “kitchen” or “airport.” Increasing the number of similar scenes reduced, and eventually eliminated, memory for scene details. Although scene memory was firmly established over the initial study period, this memory was fragile and susceptible to interference. This may help to explain the discrepancy in the literature between studies showing limited visual WM and those showing a large capacity memory for scenes. PMID:22016743

  19. The Neural Bases of the Semantic Interference of Spatial Frequency-based Information in Scenes.

    PubMed

    Kauffmann, Louise; Bourgin, Jessica; Guyader, Nathalie; Peyrin, Carole

    2015-12-01

    Current models of visual perception suggest that during scene categorization, low spatial frequencies (LSF) are processed rapidly and activate plausible interpretations of visual input. This coarse analysis would then be used to guide subsequent processing of high spatial frequencies (HSF). The present fMRI study examined how processing of LSF may influence that of HSF by investigating the neural bases of the semantic interference effect. We used hybrid scenes as stimuli by combining LSF and HSF from two different scenes, and participants had to categorize the HSF scene. Categorization was impaired when LSF and HSF scenes were semantically dissimilar, suggesting that the LSF scene was processed automatically and interfered with categorization of the HSF scene. fMRI results revealed that this semantic interference effect was associated with increased activation in the inferior frontal gyrus, the superior parietal lobules, and the fusiform and parahippocampal gyri. Furthermore, a connectivity analysis (psychophysiological interaction) revealed that the semantic interference effect resulted in increasing connectivity between the right fusiform and the right inferior frontal gyri. Results support influential models suggesting that, during scene categorization, LSF information is processed rapidly in the pFC and activates plausible interpretations of the scene category. These coarse predictions would then initiate top-down influences on recognition-related areas of the inferotemporal cortex, and these could interfere with the categorization of HSF information in case of semantic dissimilarity to LSF. PMID:26244724
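Hybrid stimuli of this kind are typically built by summing a low-pass-filtered version of one scene with the high-pass residual of another. A minimal grayscale sketch using a box-blur low-pass (the filter choice and images are illustrative; studies like this one usually apply Fourier-domain cutoffs):

```python
def box_lowpass(img, radius=1):
    """Box-blur low-pass filter on a 2D grayscale image (list of rows),
    with clamped borders."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [
                img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in range(-radius, radius + 1)
                for dx in range(-radius, radius + 1)
            ]
            out[y][x] = sum(vals) / len(vals)
    return out

def hybrid(scene_lsf, scene_hsf, radius=1):
    """LSF of one scene plus the HSF residual (original minus low-pass)
    of another, pixel by pixel."""
    low = box_lowpass(scene_lsf, radius)
    low_b = box_lowpass(scene_hsf, radius)
    return [
        [low[y][x] + (scene_hsf[y][x] - low_b[y][x]) for x in range(len(low[0]))]
        for y in range(len(low))
    ]
```

As a sanity check, if both inputs are constant images the high-pass residual vanishes and the hybrid reproduces the LSF scene exactly.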

  20. Viewing Nature Scenes Positively Affects Recovery of Autonomic Function Following Acute-Mental Stress

    PubMed Central

    2013-01-01

    A randomized crossover study explored whether viewing different scenes prior to a stressor altered autonomic function during the recovery from the stressor. The two scenes were (a) nature (composed of trees, grass, fields) or (b) built (composed of man-made, urban scenes lacking natural characteristics) environments. Autonomic function was assessed using noninvasive techniques of heart rate variability; in particular, time domain analyses evaluated parasympathetic activity, using root-mean-square of successive differences (RMSSD). During stress, secondary cardiovascular markers (heart rate, systolic and diastolic blood pressure) showed significant increases from baseline which did not differ between the two viewing conditions. Parasympathetic activity, however, was significantly higher in recovery following the stressor in the viewing scenes of nature condition compared to viewing scenes depicting built environments (RMSSD; 50.0 ± 31.3 vs 34.8 ± 14.8 ms). Thus, viewing nature scenes prior to a stressor alters autonomic activity in the recovery period. The secondary aim was to examine autonomic function during viewing of the two scenes. Standard deviation of R-R intervals (SDRR), as change from baseline, during the first 5 min of viewing nature scenes was greater than during built scenes. Overall, this suggests that nature can elicit improvements in the recovery process following a stressor. PMID:23590163
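The time-domain HRV measures reported here can be computed directly from an R-R interval series. A minimal sketch in Python (the function names and sample values are illustrative, not from the study):

```python
import math
import statistics

def rmssd(rr_ms):
    """Root-mean-square of successive differences between R-R intervals (ms).

    Higher RMSSD indicates greater parasympathetic (vagal) activity.
    """
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def sdrr(rr_ms):
    """Standard deviation of R-R intervals (ms): overall variability."""
    return statistics.stdev(rr_ms)

# Illustrative R-R series (ms) from a short ECG segment
rr = [800, 810, 790, 805]
print(round(rmssd(rr), 2))  # 15.55
print(round(sdrr(rr), 2))   # 8.54
```

The study's comparison (RMSSD of 50.0 vs 34.8 ms between viewing conditions) is a difference in exactly this quantity, computed over the recovery period.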

  1. Reconstruction of 3D scenes from sequences of images

    NASA Astrophysics Data System (ADS)

    Niu, Bei; Sang, Xinzhu; Chen, Duo; Cai, Yuanfa

    2013-08-01

Reconstruction of three-dimensional (3D) scenes is an active research topic in computer vision and 3D display. Modeling 3D objects rapidly and effectively is a challenge. A 3D model can be extracted from multiple images; the system requires only a sequence of images taken with a camera, without knowledge of the camera parameters, which provides a high degree of flexibility. We focus on quickly merging point clouds of the object from depth-map sequences. The system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point-cloud splicing, and surface reconstruction. The procedure of 3D reconstruction is decomposed into a number of successive steps. First, image sequences are captured by the camera moving freely around the object. Second, scene depth is obtained by a non-local stereo matching algorithm. Pairwise matching is realized with the Scale Invariant Feature Transform (SIFT) algorithm: an initial matching is made for the first two images of the sequence, and for each subsequent image the points of interest corresponding to those in previous images are refined or corrected, eliminating the vertical parallax between images. The next step is camera calibration, in which the intrinsic and extrinsic parameters of the camera are calculated, yielding the relative position and orientation of the camera. A sequence of depth maps is acquired using a non-local cost-aggregation method for stereo matching, and a point-cloud sequence is then derived from the scene depths together with the extrinsic camera parameters. The point-cloud model is then approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer-graphics visualization systems. 
Finally, the texture is mapped onto the wire-frame model, which can also be used for 3
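In the calibrated stereo stage of a pipeline like this, per-pixel depth follows directly from matched disparity. A minimal sketch of the standard rectified-pinhole relation (the focal length, baseline, and disparity values are illustrative; the paper obtains its disparities with a non-local cost-aggregation matcher):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth Z = f * B / d for a rectified stereo pair.

    disparity_px: horizontal pixel shift of a point between the two views
    focal_px:     focal length expressed in pixels
    baseline_m:   distance between the two camera centers, in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 10 cm baseline, 14 px disparity
print(depth_from_disparity(14, 700, 0.10))  # 5.0 (meters)
```

Back-projecting each pixel at its recovered depth through the calibrated camera then yields the per-frame point cloud that the splicing step merges.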

  2. A Note on Inflation Targeting.

    ERIC Educational Resources Information Center

    Lai, Ching-chong; Chang, Juin-jen

    2001-01-01

    Presents a pedagogical graphical exposition to illustrate the stabilizing effect of price target zones. Finds that authorities' commitment to defend a price target zone affects the public's inflation expectations and, in turn, reduces actual inflation. (RLH)

  3. Emergency prehospital on-scene thoracotomy: a novel method.

    PubMed

    Ashrafian, Hutan; Athanasiou, Thanos

    2010-12-01

    The necessity for prehospital thoracotomy is rare, but can be lifesaving. Occasionally an emergency practitioner or surgeon coincidentally arrives at a trauma scene before the arrival of emergency medical teams. In such a circumstance, even when thoracotomy may be indicated, it is not usually performed in view of the lack of equipment (e.g., dissecting tools or rib retractor). We present a novel technique of "L" shape thoracotomy, or Thoraco-sterno-costochondrotomy, whereby in a prehospital setting, and with minimal equipment (such as a penknife) a thoracotomy can be performed with adequate exposure of the heart and great vessels. The similarities of this pragmatic procedure are considered within the context of ancient Aztec and Mesoamerican thoracotomies. PMID:21874737

  4. Dynamic infrared scene projectors based upon the DMD

    NASA Astrophysics Data System (ADS)

    Beasley, D. Brett; Bender, Matt; Crosby, Jay; Messer, Tim

    2009-02-01

    The Micromirror Array Projector System (MAPS) is an advanced dynamic scene projector system developed by Optical Sciences Corporation (OSC) for Hardware-In-the-Loop (HWIL) simulation and sensor test applications. The MAPS is based upon the Texas Instruments Digital Micromirror Device (DMD) which has been modified to project high resolution, realistic imagery suitable for testing sensors and seekers operating in the UV, visible, NIR, and IR wavebands. Since the introduction of the first MAPS in 2001, OSC has continued to improve the technology and develop systems for new projection and Electro-Optical (E-O) test applications. This paper reviews the basic MAPS design and performance capabilities. We also present example projectors and E-O test sets designed and fabricated by OSC in the last 7 years. Finally, current research efforts and new applications of the MAPS technology are discussed.

  5. Complete scene recovery and terrain classification in textured terrain meshes.

    PubMed

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Terrain classification allows a mobile robot to create an annotated map of its local environment from the three-dimensional (3D) and two-dimensional (2D) datasets collected by its array of sensors, including a GPS receiver, gyroscope, video camera, and range sensor. However, parts of objects that are outside the measurement range of the range sensor will not be detected. To overcome this problem, this paper describes an edge estimation method for complete scene recovery and complete terrain reconstruction. Here, the Gibbs-Markov random field is used to segment the ground from 2D videos and 3D point clouds. Further, a masking method is proposed to classify buildings and trees in a terrain mesh. PMID:23112653

  6. High accuracy LADAR scene projector calibration sensor development

    NASA Astrophysics Data System (ADS)

    Kim, Hajin J.; Cornell, Michael C.; Naumann, Charles B.; Bowden, Mark H.

    2008-04-01

    A sensor system for the characterization of infrared laser radar scene projectors has been developed. Available sensor systems do not provide sufficient range resolution to evaluate the high-precision LADAR projector systems developed by the U.S. Army Research, Development and Engineering Command (RDECOM) Aviation and Missile Research, Development and Engineering Center (AMRDEC). With timing precision to a fraction of a nanosecond, the system can confirm the accuracy of simulated return pulses from a nominal range of up to 6.5 km to a resolution of 4 cm. Increased range can be achieved through firmware reconfiguration. Two independent amplitude triggers measure both rise and fall time, providing a judgment of pulse shape and allowing estimation of the contained energy. Each return channel can measure up to 32 returns per trigger, characterizing each return pulse independently. Current efforts include extending the capability to 8 channels. This paper outlines the development, testing, capabilities, and limitations of this new sensor system.
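The quoted figures follow from the round-trip relation R = ct/2; a quick sanity check of the timing scales involved (simple arithmetic only, not the sensor's actual firmware):

```python
C = 299_792_458.0  # speed of light, m/s

def round_trip_time(range_m):
    """Time for a LADAR pulse to travel to a target at range_m and back."""
    return 2.0 * range_m / C

def timing_for_resolution(delta_r):
    """Timing precision needed to resolve a range difference delta_r."""
    return 2.0 * delta_r / C

# A simulated return from 6.5 km arrives ~43.4 microseconds after the
# trigger, and 4 cm range resolution corresponds to ~0.27 ns timing
# precision -- consistent with the sub-nanosecond capability cited above.
t_max = round_trip_time(6500.0)
t_res = timing_for_resolution(0.04)
```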

  7. Closed-loop real-time infrared scene generator

    NASA Astrophysics Data System (ADS)

    Crow, Dennis R.; Coker, Charles F.; Garbo, Dennis L.; Olson, Eric M.

    1998-07-01

    A computer program has been developed to provide closed-loop infrared imagery of composite targets and backgrounds in real time. This program operates on parametric databases generated off-line by computationally intensive first-principles physics codes such as the Composite Hardbody and Missile Plume (CHAMP) program, the Synthetic Scene Generation Model (SSGM), and the Multi-Spectral Modeling and Analysis (MSMA)/Irma program. The parametric databases allow dynamic variations in flight and engagement scenarios to be modeled as closed-loop guidance and control algorithms modify the operational dynamics. The program is tightly coupled with the parametric databases to produce infrared radiation results in real time, and with OpenGL graphics libraries to interface with high-performance graphics hardware. Development of the program is sponsored by the Kinetic Kill Vehicle Hardware-in-the-Loop Simulator facility of the Air Force Research Laboratory Munitions Directorate located at Eglin AFB, Florida.

  8. Scene-based nonuniformity correction using sparse prior

    NASA Astrophysics Data System (ADS)

    Mou, Xingang; Zhang, Guilin; Hu, Ruolan; Zhou, Xiao

    2011-11-01

    The performance of an infrared focal plane array (IRFPA) is known to be affected by the presence of spatial fixed pattern noise (FPN) that is superimposed on the true image. Scene-based nonuniformity correction (NUC) algorithms have attracted wide interest because they need only the readout infrared data captured by the imaging system during its normal operation. A novel adaptive NUC algorithm is proposed using the sparse prior that, when derivative filters are applied to infrared images, the filter outputs tend to be sparse. A change detection module based on the derivative filter outputs is introduced to avoid stationary objects being learned into the background, so the ghosting artifact is eliminated effectively. The performance of the new algorithm is evaluated with both real and simulated imagery.
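The anti-ghosting idea of gating the adaptive update with a change-detection mask can be illustrated with a toy LMS-style correction step (the smoothing filter, motion threshold, and learning rate below are illustrative assumptions, not the paper's exact algorithm):

```python
import numpy as np

def nuc_update(frame, prev_frame, gain, offset, lr=0.05, motion_thresh=5.0):
    """One scene-based NUC step: correct the frame, then adapt the per-pixel
    gain/offset only where inter-frame change suggests scene motion, so
    stationary objects are not learned into the background (anti-ghosting).
    Illustrative sketch only."""
    corrected = gain * frame + offset
    # Desired output: a 4-neighbour smoothed version of the corrected frame;
    # the difference approximates the high-frequency fixed pattern noise.
    pad = np.pad(corrected, 1, mode='edge')
    smooth = (pad[:-2, 1:-1] + pad[2:, 1:-1]
              + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    error = corrected - smooth
    # Change-detection mask: update only where the scene actually changed.
    moving = np.abs(frame - prev_frame) > motion_thresh
    gain = gain - lr * error * frame * moving
    offset = offset - lr * error * moving
    return corrected, gain, offset
```

With a completely static scene the mask is everywhere false and the correction parameters are frozen, which is exactly what prevents ghosting.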

  9. Inverting a dispersive scene's side-scanned image

    NASA Technical Reports Server (NTRS)

    Harger, R. O.

    1983-01-01

    Consideration is given to the problem of using a remotely sensed, side-scanned image of a time-variant scene, which changes according to a dispersion relation, to estimate the structure at a given moment. Additive thermal noise is neglected in the models considered in the formal treatment. It is shown that the dispersion relation is normalized by the scanning velocity, as is the group scanning velocity component. An inversion operation is defined for noise-free images generated by SAR. The method is extended to the inversion of noisy imagery, and a formulation is defined for spectral density estimation. Finally, the methods for a radar system are used for the case of sonar.

  10. Infrared imaging of the crime scene: possibilities and pitfalls.

    PubMed

    Edelman, Gerda J; Hoveling, Richelle J M; Roos, Martin; van Leeuwen, Ton G; Aalders, Maurice C G

    2013-09-01

    All objects radiate infrared energy invisible to the human eye, which can be imaged by infrared cameras, visualizing differences in temperature and/or emissivity of objects. Infrared imaging is an emerging technique for forensic investigators. The rapid, nondestructive, and noncontact features of infrared imaging indicate its suitability for many forensic applications, ranging from the estimation of time of death to the detection of blood stains on dark backgrounds. This paper provides an overview of the principles and instrumentation involved in infrared imaging. Difficulties concerning the image interpretation due to different radiation sources and different emissivity values within a scene are addressed. Finally, reported forensic applications are reviewed and supported by practical illustrations. When introduced in forensic casework, infrared imaging can help investigators to detect, to visualize, and to identify useful evidence nondestructively. PMID:23919285

  11. Scene/object classification using multispectral data fusion algorithms

    NASA Astrophysics Data System (ADS)

    Kuzma, Thomas J.; Lazofson, Laurence E.; Choe, Howard C.; Chovan, John D.

    1994-06-01

    Near-simultaneous, multispectral, coregistered imagery of ground target and background signatures was collected over a full diurnal cycle in visible, infrared, and ultraviolet spectrally filtered wavebands using Battelle's portable sensor suite. The imagery was processed using classical statistical algorithms, artificial neural networks, and data clustering techniques to classify objects in the imaged scenes. Imagery collected at different times throughout the day was employed to verify algorithm robustness with respect to temporal variations of spectral signatures. In addition, several multispectral sensor-fusion medical imaging applications were explored, including imaging of subcutaneous vasculature, retinal angiography, and endoscopic cholecystectomy. Work is also being performed to advance the state of the art using differential absorption lidar as an active remote sensing technique for spectrally detecting, identifying, and tracking hazardous emissions. These investigations support a wide variety of multispectral signature discrimination applications, including automated target search, landing zone detection, enhanced medical imaging, and chemical/biological agent tracking.

  12. Collaboration behind-the-scenes: key to effective interprofessional education.

    PubMed

    MacKenzie, Diane E; Doucet, Shelley; Nasser, Susan; Godden-Webster, Anne L; Andrews, Cynthia; Kephart, George

    2014-07-01

    A variety of stakeholders, including students, faculty, educational institutions and the broader health care and social service communities, work behind-the-scenes to support interprofessional education initiatives. While program designers are faced with multiple challenges associated with implementing and sustaining such programs, little has been written about how program designers practice the interprofessional competencies that are expected of students. This brief report describes the backstage collaboration underpinning the Dalhousie Health Mentors Program, a large and complex pre-licensure interprofessional experience connecting student teams with community volunteer mentors who have chronic conditions to learn about interprofessional collaboration and patient/client-centered care. Based on our experiences, we suggest that just as students are required to reflect on collaborative processes, interprofessional program designers should examine the ways in which they work together and take into consideration the impact this has on the delivery of the educational experience. PMID:24593325

  13. Projection optical subsystem for STRICOM's dynamic infrared scene projector

    NASA Astrophysics Data System (ADS)

    Thomas, Matthew C.

    1999-07-01

    The design and preliminary performance characteristics of the projection optical subsystem (POS) of the US Army STRICOM's dynamic infrared scene projector (DIRSP) are presented in this paper. The DIRSP POS, made by Diversified Optical Products of Salem, NH and Mission Research Corporation, serves three purposes. The first is to combine the broadband images of three 544 x 672 pixel resistive emitter arrays using wedge-shaped mirrors to make a 1632 x 672 pixel mosaic image with minimal seams and aberrations. The second is to fold in long-wave infrared (LWIR) light from a blackbody for projecting backgrounds from 240 to 337 K. The third is to collimate the LWIR light from the mosaic image and blackbody with a 5:1 motorized zoom lens. Samples of the mosaic image as seen through the blackbody relay are presented along with the design characteristics of the zoom lens.

  14. Complete Scene Recovery and Terrain Classification in Textured Terrain Meshes

    PubMed Central

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Terrain classification allows a mobile robot to create an annotated map of its local environment from the three-dimensional (3D) and two-dimensional (2D) datasets collected by its array of sensors, including a GPS receiver, gyroscope, video camera, and range sensor. However, parts of objects that are outside the measurement range of the range sensor will not be detected. To overcome this problem, this paper describes an edge estimation method for complete scene recovery and complete terrain reconstruction. Here, the Gibbs-Markov random field is used to segment the ground from 2D videos and 3D point clouds. Further, a masking method is proposed to classify buildings and trees in a terrain mesh. PMID:23112653

  15. A hybrid approach for text detection in natural scenes

    NASA Astrophysics Data System (ADS)

    Wang, Runmin; Sang, Nong; Wang, Ruolin; Kuang, Xiaoqin

    2013-10-01

    In this paper, a hybrid approach is proposed to detect text in natural scenes. It proceeds in the following steps. First, the edge map and the text saliency region are obtained. Second, text candidate regions are detected by a connected component (CC) based method and verified by an off-line trained HOG classifier. The remaining CCs are then grouped into text lines with heuristic strategies to make up for false negatives. Finally, the text lines are broken into separate words. The performance of the proposed approach is evaluated on the text location database of the ICDAR 2003 robust reading competition. Experimental results demonstrate the validity of our approach, which is competitive with other state-of-the-art algorithms.
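The connected-component step at the heart of CC-based candidate detection can be sketched as a plain 4-connected labeling of a binary edge map (a generic illustration; the saliency computation and HOG verification stages are omitted):

```python
from collections import deque

def connected_components(binary):
    """Label 4-connected components in a binary grid (lists of 0/1).
    Returns a list of components, each a list of (row, col) pixels.
    Each component is a text candidate region to be verified later."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for r in range(h):
        for c in range(w):
            if binary[r][c] and not seen[r][c]:
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                comps.append(comp)
    return comps
```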

  16. Target scene generator (TSG) for infrared seeker evaluation

    NASA Astrophysics Data System (ADS)

    Sturlesi, Doron; Pinsky, Ephi

    1997-07-01

    The TSG is used for evaluating infrared missile seekers with dynamic targets in a realistic electro-optical countermeasures (EOCM) environment. The system generates multimode primary and secondary targets, plus up to three flares and jammers, combined with a thermal background image of 10 deg field of view. Each component is independently controlled to provide 2D trajectory, velocity, and acceleration, and four orders of magnitude in line-of-sight (LOS) angular velocity can be accommodated. The system allows for variation of source angular size, radiated intensity, and other spatial and temporal modulation. All sources are combined in a collimated output beam, which is projected through an optical relay and a field-of-regard assembly; this mechanism displays the whole scenario over a wide angular span onto the seeker aperture. Further system improvements involve combining a dynamic infrared scene projector with high-temperature sources under real-time high dynamics, for better performance with imaging seekers on maneuvering platforms.

  17. Conference scene: pharmacogenomics: from cell to clinic (part 2).

    PubMed

    Siest, Gérard; Medeiros, Rui; Melichar, Bohuslav; Stathopoulou, Maria; Van Schaik, Ron Hn; Cacabelos, Ramon; Abt, Peter Meier; Monteiro, Carolino; Gurwitz, David; Queiroz, Jao; Mota-Filipe, Helder; Ndiaye, Ndeye Coumba; Visvikis-Siest, Sophie

    2014-04-01

    Second International ESPT Meeting Lisbon, Portugal, 26-28 September 2013 The second European Society of Pharmacogenomics and Theranostics (ESPT) conference was organized in Lisbon, Portugal, and attracted 250 participants from 37 different countries. The participants could listen to 50 oral presentations, participate in five lunch symposia and were able to view 83 posters and an exhibition. Part 1 of this Conference Scene was presented in the previous issue of Pharmacogenomics. This second part will focus on: clinical implementation of pharmacogenomics tests; transporters and pharmacogenomics; stem cells and other new tools for pharmacogenomics and drug discovery; from system pharmacogenomics to personalized medicine; and, finally, we will discuss the Posters and Awards that were presented at the conference. PMID:24897282

  18. Extracting scene feature vectors through modeling, volume 3

    NASA Technical Reports Server (NTRS)

    Berry, J. K.; Smith, J. A.

    1976-01-01

    The remote estimation of the leaf area index of winter wheat at Finney County, Kansas was studied. The procedure developed consists of three activities: (1) field measurements; (2) model simulations; and (3) response classifications. The first activity is designed to identify model input parameters and develop a model evaluation data set. A stochastic plant canopy reflectance model is employed to simulate reflectance in the LANDSAT bands as a function of leaf area index for two phenological stages. An atmospheric model is used to translate these surface reflectances into simulated satellite radiance. A divergence classifier determines the relative similarity between model derived spectral responses and those of areas with unknown leaf area index. The unknown areas are assigned the index associated with the closest model response. This research demonstrated that the SRVC canopy reflectance model is appropriate for wheat scenes and that broad categories of leaf area index can be inferred from the procedure developed.

  19. Real-time and reliable human detection in clutter scene

    NASA Astrophysics Data System (ADS)

    Tan, Yumei; Luo, Xiaoshu; Xia, Haiying

    2013-10-01

    To overcome the problem that the traditional HOG approach cannot achieve real-time human detection because of its computational cost, an efficient segment-then-identify algorithm is proposed for real-time human detection in cluttered scenes. First, the ViBe algorithm is used to quickly segment all possible human target regions, and more accurate moving objects are obtained by using the YUV color space to eliminate shadows; second, body-geometry knowledge helps find valid human areas by screening the regions of interest; finally, HOG features and a linear support vector machine (SVM) classifier are used to train a human-body classifier and achieve accurate localization of human bodies. Our comparative experiments demonstrate that the proposed approach obtains high accuracy, good real-time performance, and strong robustness.
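The body-geometry screening step can be sketched as a simple filter on candidate bounding boxes (the area and aspect-ratio thresholds here are illustrative assumptions, not values from the paper):

```python
def screen_human_candidates(boxes, min_area=800, min_ratio=1.2, max_ratio=4.0):
    """Filter candidate foreground regions using simple body-geometry
    heuristics: an upright person is taller than wide and not tiny.
    boxes: list of (x, y, w, h) bounding boxes from segmentation.
    Surviving regions would then go to the HOG+SVM classifier."""
    keep = []
    for (x, y, w, h) in boxes:
        if w <= 0 or h <= 0:
            continue
        ratio = h / w
        if w * h >= min_area and min_ratio <= ratio <= max_ratio:
            keep.append((x, y, w, h))
    return keep
```

Screening out implausible regions before classification is what recovers real-time performance: the expensive HOG+SVM evaluation runs only on a handful of candidates instead of a dense sliding window.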

  20. Archiving of meaningful scenes for personal TV terminals

    NASA Astrophysics Data System (ADS)

    Jin, Sung Ho; Cho, Jun Ho; Ro, Yong Man; Lee, Han Kyu

    2006-01-01

    In this paper, we propose an archiving method of broadcasts for TV terminals, including set-top boxes (STBs) and personal video recorders (PVRs). Our goal is to effectively cluster and retrieve semantic video scenes obtained by real-time broadcasting content filtering for re-use or transmission. For TV terminals, we generate new video archiving formats which combine broadcasting media resources with the related metadata and auxiliary media data. In addition, we implement an archiving system to decode and retrieve the media resources and the metadata within the format. Experiments show that the proposed format makes it possible to retrieve or browse media data and metadata on the TV terminal effectively, and that it is compatible with portable devices.

  1. Human supervisory approach to modeling industrial scenes using geometric primitives

    SciTech Connect

    Luck, J.P.; Little, C.Q.; Roberts, R.S.

    1997-11-19

    A three-dimensional world model is crucial for many robotic tasks. Modeling techniques tend to be either fully manual or autonomous. Manual methods are extremely time consuming but also highly accurate and flexible. Autonomous techniques are fast but inflexible and, with real-world data, often inaccurate. The method presented in this paper combines the two, yielding a highly efficient, flexible, and accurate mapping tool. The segmentation and modeling algorithms that compose the method are specifically designed for industrial environments, and are described in detail. A mapping system based on these algorithms has been designed. It enables a human supervisor to quickly construct a fully defined world model from unfiltered and unsegmented real-world range imagery. Examples of how industrial scenes are modeled with the mapping system are provided.

  2. Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis

    NASA Astrophysics Data System (ADS)

    Büchler, Michael; Allegro, Silvia; Launer, Stefan; Dillier, Norbert

    2005-12-01

    A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.

  3. MISR empirical stray light corrections in high-contrast scenes

    NASA Astrophysics Data System (ADS)

    Limbacher, J. A.; Kahn, R. A.

    2015-07-01

    We diagnose the potential causes of the Multi-angle Imaging SpectroRadiometer (MISR) persistent high aerosol optical depth (AOD) bias at low AOD with the aid of coincident MODerate-resolution Imaging Spectroradiometer (MODIS) imagery from NASA's Terra satellite. Stray light in the MISR instrument is responsible for a large portion of the high AOD bias in high-contrast scenes, such as broken-cloud scenes that are quite common over ocean. Discrepancies between MODIS and MISR nadir-viewing blue, green, red, and near-infrared images are used to optimize seven parameters individually for each wavelength, along with a background reflectance modulation term that is modeled separately, to represent the observed features. Independent surface-based AOD measurements from the AErosol RObotic NETwork (AERONET) and the Marine Aerosol Network (MAN) are compared with MISR research aerosol retrieval algorithm (RA) AOD retrievals for 1118 coincidences to validate the corrections when applied to the nadir and off-nadir cameras. With these corrections, plus the baseline RA corrections and enhanced cloud screening applied, the median AOD bias for all data in the mid-visible (green, 558 nm) band decreases from 0.006 (0.020 for the MISR standard algorithm (SA)) to 0.000, and the RMSE decreases by 5% (27% compared to the SA). For 558 nm AOD < 0.10, which includes about half the validation data, the 68th-percentile absolute 558 nm AOD errors for the RA have dropped from 0.022 (0.034 for the SA) to < 0.02 (~0.018).

  4. On scene injury severity prediction (OSISP) algorithm for car occupants.

    PubMed

    Buendia, Ruben; Candefjord, Stefan; Fagerlind, Helen; Bálint, András; Sjöqvist, Bengt Arne

    2015-08-01

    Many victims of traffic accidents do not receive optimal care because the severity of their injuries is not recognized early on. Triage protocols are based on physiological and anatomical criteria and subsequently on mechanisms of injury in order to reduce undertriage. In this study the value of accident characteristics for field triage is evaluated by developing an on-scene injury severity prediction (OSISP) algorithm using only accident characteristics that are feasible to assess at the scene of the accident. A multivariate logistic regression model is constructed to assess the probability of a car occupant being severely injured following a crash, based on the Swedish Traffic Accident Data Acquisition (STRADA) database. Accidents involving adult occupants for calendar years 2003-2013, included in both police and hospital records, with no missing data for any of the model variables, were included. The total number of subjects was 29128, involved in 22607 accidents. Partition between severe and non-severe injury was made using the Injury Severity Score (ISS) with two thresholds: ISS>8 and ISS>15. The model variables are: belt use, airbag deployment, posted speed limit, type of accident, location of accident, elderly occupant (>55 years old), sex, and occupant seat position. The area under the receiver operating characteristic curve (AUC) is 0.78 and 0.83 for ISS>8 and ISS>15, respectively, as estimated by 10-fold cross-validation. Belt use is the strongest predictor, followed by type of accident. Posted speed limit, age, and accident location contribute substantially to model accuracy, whereas sex and airbag deployment contribute to a smaller extent and seat position is of limited value. These findings can be used to refine triage protocols used in Sweden and possibly other countries with similar traffic environments. PMID:26005884
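The modeling approach can be sketched with a minimal logistic regression fitted by gradient descent plus a rank-based AUC estimate (synthetic toy data, not STRADA; the two features and all hyperparameters are illustrative):

```python
import math

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit P(severe injury) = sigmoid(w.x + b) by batch gradient descent.
    A minimal stand-in for the multivariate logistic regression used in
    OSISP-style models; data here are synthetic."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            err = 1.0 / (1.0 + math.exp(-z)) - yi
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) form."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy features per crash: [no belt worn, high posted speed limit];
# in this synthetic set, severity follows belt use.
X = [[1, 1], [1, 0], [0, 1], [0, 0]] * 5
y = [1, 1, 0, 0] * 5
w, b = train_logistic(X, y)
```

In a real study the AUC would of course be estimated on held-out folds (the paper uses 10-fold cross-validation) rather than on the training data as in this sketch.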

  5. Adaptive fusion of infrared and visible images in dynamic scene

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Yin, Yafeng; Man, Hong; Desai, Sachi

    2011-11-01

    Multiple-modality sensor fusion has been widely employed in various surveillance and military applications, and a variety of image fusion techniques, including PCA, wavelet, curvelet, and HSV, have been proposed in recent years to improve human visual perception for object detection. One of the main challenges for visible and infrared image fusion is to automatically determine an optimal fusion strategy for different input scenes at an acceptable computational cost. In this paper, we propose a fast and adaptive feature-selection based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. First, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are considered potential target locations. The regions surrounding the target areas are then segmented as background regions. Image fusion is locally applied to the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed on these local fusion images as the fusion feature set. The variance ratio, based on a Linear Discriminant Analysis (LDA) measure, is employed to sort the feature set, and the most discriminative feature is selected for whole-image fusion. As feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments are conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that our proposed method achieves competitive performance compared with other fusion algorithms at a relatively low computational cost.
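The selection idea can be sketched as choosing, from a few candidate visible/IR mixing weights, the one that best separates target pixels from background pixels by a between/within variance ratio (a simplified stand-in for the paper's LDA-based measure; the function name, weight grid, and pixel values are illustrative):

```python
def best_fusion_weight(vis_t, ir_t, vis_b, ir_b,
                       weights=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Pick the linear visible/IR mixing weight w (fused = w*vis + (1-w)*ir)
    that best separates the target region from its background.
    vis_t/ir_t and vis_b/ir_b are pixel intensity lists for the target and
    background regions. Returns the winning weight."""
    def stats(vals):
        m = sum(vals) / len(vals)
        v = sum((x - m) ** 2 for x in vals) / len(vals)
        return m, v
    best_w, best_score = None, -1.0
    for w in weights:
        t = [w * a + (1 - w) * b for a, b in zip(vis_t, ir_t)]
        bkg = [w * a + (1 - w) * b for a, b in zip(vis_b, ir_b)]
        (mt, vt), (mb, vb) = stats(t), stats(bkg)
        # Between-class separation over within-class spread (variance ratio).
        score = (mt - mb) ** 2 / (vt + vb + 1e-9)
        if score > best_score:
            best_w, best_score = w, score
    return best_w
```

For a hot target that is bright in IR but indistinct in the visible band, this criterion drives the weight toward the IR channel, which is the adaptive behaviour the paper describes.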

  6. Evaluation of the ERBE scene identification algorithm

    NASA Technical Reports Server (NTRS)

    Vemury, S. K.

    1987-01-01

    The sensitivity of the radiation budget parameters and the scene selection process to different factors is evaluated. The use of ERB-7 CLE models provides instantaneous albedo and longwave flux values which are in essential agreement, in the 70 deg cutoff case, with those from the SAB method. The increase in albedo with increasing satellite zenith angle cutoff is not apparent at the target area level for different surface types. GOES models seem to show an increase in instantaneous albedo at the global level with satellite zenith angle of the same nature as the ERB-7 models investigated in a previous report and are therefore probably not an improvement. The use of recently derived NCLE models did not make a noticeable change in the budget parameters, but the cloud classification did show hemispherical differences and caused a day-night readjustment of cloudiness amounts. Preliminary results for an additional month (December 1979) indicate good agreement between the SAB and MLE methods; additional work is required to establish the agreement for all seasons. Comparison of derived cloud amounts with other data sets such as THIR/TOMS indicates good zonal agreement. Sampling adequacy investigations at different temporal averaging intervals with the SAB method indicate large uncertainties for short averaging periods: for a 6-day averaging period, sampling is very poor, with only 23% of the globe contributing toward the global mean albedo. Modifications to the MLE procedure seem to have little effect on the derived budget quantities in an averaged sense, although significant differences in the flux values due to differences in scene selection are apparent in individual target area studies. A modification of the MLE procedure to mimic the perpendicular bisector algorithm indicated no effect on the gross radiation budget quantities.

  7. Monocular 3-D gait tracking in surveillance scenes.

    PubMed

    Rogez, Grégory; Rihan, Jonathan; Guerrero, Jose J; Orrite, Carlos

    2014-06-01

    Gait recognition can potentially provide a noninvasive and effective biometric authentication from a distance. However, the performance of gait recognition systems will suffer in real surveillance scenarios with multiple interacting individuals and where the camera is usually placed at a significant angle and distance from the floor. We present a methodology for view-invariant monocular 3-D human pose tracking in man-made environments in which we assume that observed people move on a known ground plane. First, we model 3-D body poses and camera viewpoints with a low dimensional manifold and learn a generative model of the silhouette from this manifold to a reduced set of training views. During the online stage, 3-D body poses are tracked using recursive Bayesian sampling conducted jointly over the scene's ground plane and the pose-viewpoint manifold. For each sample, the homography that relates the corresponding training plane to the image points is calculated using the dominant 3-D directions of the scene, the sampled location on the ground plane and the sampled camera view. Each regressed silhouette shape is projected using this homographic transformation and is matched in the image to estimate its likelihood. Our framework is able to track 3-D human walking poses in a 3-D environment while exploring only a 4-D state space. In our experimental evaluation, we demonstrate the significant improvement of the homographic alignment over a commonly used similarity transformation and provide quantitative pose tracking results for monocular sequences with a high perspective effect from the CAVIAR dataset. PMID:23955796

  8. A unification framework for best-of-breed real-time scene generation

    NASA Astrophysics Data System (ADS)

    Morris, Joseph W.; Ballard, Gary H.; Trimble, Darian E.; Bunfield, Dennis H.; Mayhall, Anthony J.

    2010-04-01

    AMRDEC sought an improved framework for real-time hardware-in-the-loop (HWIL) scene generation to provide the flexibility needed to adapt to rapidly changing hardware advancements and the ability to more seamlessly integrate external third-party codes for best-of-breed real-time scene generation. To that end, AMRDEC has developed Continuum, a new software architecture foundation that allows the integration of these codes into a HWIL lab facility while enhancing existing AMRDEC HWIL scene generation codes such as the Joint Signature Image Generator (JSIG). This new real-time framework is a minimalistic modular approach based on the National Institute of Standards and Technology (NIST) Neutral Messaging Language (NML) that provides the basis for common HWIL scene generation. High-speed interconnects and protocols were examined to support distributed scene generation, whereby the scene graph, associated phenomenology, and resulting scene can be designed around the data rather than a framework, and the scene elements can be dynamically distributed across multiple high-performance computing assets. Because of this open-architecture approach, the framework facilitates scaling from a single-GPU "traditional" PC scene generation system to a multi-node distributed system requiring load distribution and scene compositing across multiple high-performance computing platforms. This takes advantage of the latest advancements in GPU hardware, such as NVIDIA's Tesla and Fermi architectures, providing an increased benefit in both fidelity and performance of the associated scene's phenomenology. Other features of Continuum easily extend the use of this framework to include visualization, diagnostic, analysis, configuration, and other HWIL and all-digital simulation tools.

  9. Caustic-Side Solvent Extraction: Prediction of Cesium Extraction for Actual Wastes and Actual Waste Simulants

    SciTech Connect

    Delmau, L.H.; Haverlock, T.J.; Sloop, F.V., Jr.; Moyer, B.A.

    2003-02-01

    This report presents the work that followed the CSSX model development completed in FY2002. The cesium and potassium extraction model had been developed from extraction data obtained in simple aqueous media; here it was tested to ensure the validity of its predictions of cesium extraction from actual waste. Compositions of the actual tank waste were obtained from Savannah River Site personnel and were used both to prepare defined simulants and to predict cesium distribution ratios with the model. It was therefore possible to compare the cesium distribution ratios obtained from the actual waste, from the simulants, and from the model predictions. The predicted values agree with the measured values for the simulants. Predicted values also agreed, with three exceptions, with measured values for the tank wastes. Discrepancies were attributed in part to uncertainty in the cation/anion balance of the actual waste composition, but likely more to uncertainty in the potassium concentration of the waste, given the demonstrated large competing effect of this metal on cesium extraction. It was demonstrated that the potassium concentration in the feed should not exceed 0.05 M in order to maintain suitable cesium distribution ratios.

  10. Retrieved actual ET using SEBS model from Landsat-5 TM data for irrigation area of Australia

    NASA Astrophysics Data System (ADS)

    Ma, Weiqiang; Hafeez, Mohsin; Rabbani, Umair; Ishikawa, Hirohiko; Ma, Yaoming

    2012-11-01

    Ground-based evapotranspiration (ET) is of great interest for land-atmosphere interaction studies, including water-saving irrigation, irrigation system performance, crop water deficit, drought mitigation strategies, and accurate initialization of climate prediction models, especially in arid and semiarid catchments where water shortage is a critical problem. Recent years' drought in Australia and concerns about climate change have highlighted the need to manage water resources more sustainably, particularly in the Murrumbidgee catchment, which supplies bulk water for food security and production. This paper discusses the application of the Surface Energy Balance System (SEBS) model, driven by Landsat-5 TM data and field observations, to derive ET over the Coleambally Irrigation Area (CIA) in the southwest of NSW, Australia. Sixteen Landsat-5 TM scenes covering 2009, 2010, and 2011 were selected for estimating actual ET in the CIA. To validate the methodology, the ground-measured ET was compared with the Landsat-5 TM retrieved actual ET for the CIA. The derived ET over the CIA agrees closely with the field measurements: against the observations, the root mean square error (RMSE) is 0.74 and the mean APD is 7.5%, so the satellite-derived values fall within a reasonable range.
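
    The two validation statistics quoted in this record can be computed as below. This is a minimal sketch with made-up sample ET values (mm/day), not the study's CIA data, and it assumes APD means the absolute percentage difference relative to the ground measurement:

    ```python
    # Validation metrics for retrieved vs. ground-measured ET.
    import math

    def rmse(observed, predicted):
        """Root mean square error between paired observations and predictions."""
        n = len(observed)
        return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

    def mean_apd(observed, predicted):
        """Mean absolute percentage difference, relative to the observed value."""
        n = len(observed)
        return 100.0 * sum(abs(o - p) / o for o, p in zip(observed, predicted)) / n

    # Hypothetical daily ET values (mm/day) for illustration only.
    ground_et = [4.2, 5.1, 3.8, 6.0]
    retrieved_et = [4.6, 4.7, 4.3, 5.5]
    print(round(rmse(ground_et, retrieved_et), 2))
    print(round(mean_apd(ground_et, retrieved_et), 1))
    ```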

  11. Providing Study Notes: Comparison of Three Types of Notes for Review.

    ERIC Educational Resources Information Center

    Kiewra, Kenneth A.; And Others

    1988-01-01

    Forty-four undergraduates received different types of notes for review of a lecture (complete text, linear outline, or matrix), or received no notes. Any form of notes increased performance over no notes, with matrix and outline notes producing higher recall and matrix notes producing greatest transfer. (SLD)

  12. Some Notes on a Functional Equation. Classroom Notes

    ERIC Educational Resources Information Center

    Ren, Zhong-Pu; Wu, Zhi-Qin; Zhou, Qi-Fa; Guo, Bai-Ni; Qi, Feng

    2004-01-01

    In this short note, a mathematical proposition on the functional equation f(xy) = xf(y) + yf(x) for x, y ≠ 0, which is encountered in calculus, is generalized step by step. These steps involve continuity, differentiability, a functional equation, a first-order linear ordinary differential equation, and relationships between…
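
    The abstract does not reproduce the solution, but the standard derivation for continuous solutions of this equation is a short substitution argument (this is the textbook route, not necessarily the generalization pursued in the note):

    ```latex
    % For x, y \neq 0, set g(x) = f(x)/x and divide the equation by xy:
    \frac{f(xy)}{xy} = \frac{f(y)}{y} + \frac{f(x)}{x}
    \quad\Longrightarrow\quad
    g(xy) = g(x) + g(y).
    % This is the logarithmic Cauchy equation. For continuous g, taking
    % x = y = 1 gives g(1) = 0, and x = y = -1 gives 2g(-1) = g(1) = 0,
    % so g(x) = c \ln|x| for some constant c. Hence
    f(x) = c\, x \ln|x|, \qquad x \neq 0.
    % Check: f(xy) = c\,xy(\ln|x| + \ln|y|) = x\,(c\,y\ln|y|) + y\,(c\,x\ln|x|)
    %             = x f(y) + y f(x).
    ```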

  13. Analysing Sexual Experiences through "Scenes": A Framework for the Evaluation of Sexuality Education

    ERIC Educational Resources Information Center

    Paiva, Vera

    2005-01-01

    This paper describes an alternative approach to undertaking and evaluating sexual health promotion, focusing on the notion of "sexual scenes" and "scenarios" and how these can provide a vantage point from which to examine sexual experience. The concept of sexual scenes provides a context within which to analyse many of the behavioural and…

  14. Fundamental remote sensing science research program. Part 1: Scene radiation and atmospheric effects characterization project

    NASA Technical Reports Server (NTRS)

    Murphy, R. E.; Deering, D. W.

    1984-01-01

    Brief articles summarizing the status of research in the scene radiation and atmospheric effect characterization (SRAEC) project are presented. Research conducted within the SRAEC program is focused on the development of empirical characterizations and mathematical process models which relate the electromagnetic energy reflected or emitted from a scene to the biophysical parameters of interest.

  15. Research and Technology Development for Construction of 3d Video Scenes

    NASA Astrophysics Data System (ADS)

    Khlebnikova, Tatyana A.

    2016-06-01

    For the last two decades, surface information in the form of conventional digital and analogue topographic maps has been supplemented by new digital geospatial products, also known as 3D models of real objects. It is shown that there are currently no defined standards for 3D scene construction technologies that could be used by Russian surveying and cartographic enterprises. Requirements for source data, and for their capture and transfer to create 3D scenes, have not yet been defined, and the accuracy of 3D video scenes used for measurement purposes is rarely addressed in publications. The practicability of developing, researching, and implementing a technology for constructing 3D video scenes is substantiated by their capability to broaden the application of data analysis in environmental monitoring, urban planning, and managerial decision-making. A technology for constructing 3D video scenes that meets specified metric requirements is offered, along with the technique and methodological background for applying it to 3D video scenes based on DTMs created from satellite and aerial survey data. The results of an accuracy estimation of the 3D video scenes are presented.

  16. Direct versus indirect processing changes the influence of color in natural scene categorization.

    PubMed

    Otsuka, Sachio; Kawaguchi, Jun

    2009-10-01

    Using a negative priming (NP) paradigm, we examined how participants categorize color and grayscale images of natural scenes that were presented peripherally and ignored. We focused on (1) the attentional resources allocated to natural scenes and (2) direct versus indirect processing of them. We set up low and high attention-load conditions based on the set size of the searched stimuli in the prime display (one or five). Participants were required to detect and categorize the target objects in natural scenes in a central visual search task while ignoring peripheral natural images in both the prime and probe displays. The results showed that, irrespective of attention load, NP was observed for color scenes but not for grayscale scenes. We did not observe any effect of color information in the central visual search, where participants responded directly to natural scenes. These results indicate that when participants process natural scenes indirectly, color information is critical to object categorization, but when the scenes are processed directly, color information does not contribute to categorization. PMID:19801618

  17. Stage Movement with Scripts and More Work with Scenes. TAP (Theatre Arts Package) 211 and 212.

    ERIC Educational Resources Information Center

    Engelsman, Alan; Thalden, Irene

    The purpose of these lessons is to provide learning experiences which facilitate junior high and senior high school actors' mastery of stage movements when working with scripts. Suggested exercises include practice in finding motivation for actors' stage movements, acting a scene (from "West Side Story"), and interpreting and acting scenes of…

  18. Assessment of Subtraction Scene Understanding Using a Story-Generation Task

    ERIC Educational Resources Information Center

    Kinda, Shigehiro

    2010-01-01

    The present study used a new assessment technique, the story-generation task, to examine students' understanding of subtraction scenes. The students from four grade levels (110 first-, 107 third-, 110 fourth- and 119 sixth-graders) generated stories under the constraints provided by a picture (representing Change, Combine or Compare scene) and a…

  19. SmartScene: An Immersive, Realtime, Assembly, Verification and Training Application

    NASA Technical Reports Server (NTRS)

    Homan, Ray

    1997-01-01

    There are four major components to SmartScene. First, it is shipped with everything necessary to quickly be able to do productive work. It is immersive in that when a user is working in SmartScene he or she cannot see anything except the world being manipulated.

  20. 22 CFR 102.10 - Rendering assistance at the scene of the accident.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    22 Foreign Relations 1 (2014-04-01). Section 102.10, Foreign Relations, DEPARTMENT OF STATE, ECONOMIC AND OTHER FUNCTIONS, CIVIL AVIATION, United States Aircraft Accidents Abroad, § 102.10: Rendering assistance at the scene of the accident.