Science.gov

Sample records for actual scene note

  1. Exocentric direction judgements in computer-generated displays and actual scenes

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Smith, Stephen; Mcgreevy, Michael W.; Grunwald, Arthur J.

    1989-01-01

    One of the most remarkable perceptual properties of common experience is that the perceived shapes of known objects are constant despite movements about them which transform their projections on the retina. This perceptual ability is one aspect of shape constancy (Thouless, 1931; Metzger, 1953; Borresen and Lichte, 1962). It requires that the viewer be able to sense and discount his or her relative position and orientation with respect to a viewed object. This discounting of relative position may be derived directly from the ranging information provided from stereopsis, from motion parallax, from vestibularly sensed rotation and translation, or from corollary information associated with voluntary movement. It is argued that: (1) errors in exocentric judgements of the azimuth of a target generated on an electronic perspective display are not viewpoint-independent, but are influenced by the specific geometry of their perspective projection; (2) elimination of binocular conflict by replacing electronic displays with actual scenes eliminates a previously reported equidistance tendency in azimuth error, but the viewpoint dependence remains; (3) the pattern of exocentrically judged azimuth error in real scenes viewed with a viewing direction depressed 22 deg and rotated + or - 22 deg with respect to a reference direction could not be explained by overestimation of the depression angle, i.e., a slant overestimation.

  2. Considerations for the Composition of Visual Scene Displays: Potential Contributions of Information from Visual and Cognitive Sciences (Forum Note)

    PubMed Central

    Wilkinson, Krista M.; Light, Janice; Drager, Kathryn

    2013-01-01

    Aided augmentative and alternative communication (AAC) interventions have been demonstrated to facilitate a variety of communication outcomes in persons with intellectual disabilities. Most aided AAC systems rely on a visual modality. When the medium for communication is visual, it seems likely that the effectiveness of intervention depends in part on the effectiveness and efficiency with which the information presented in the display can be perceived, identified, and extracted by communicators and their partners. Understanding of visual-cognitive processing – that is, how a user attends, perceives, and makes sense of the visual information on the display – therefore seems critical to designing effective aided AAC interventions. In this Forum Note, we discuss characteristics of one particular type of aided AAC display, Visual Scene Displays (VSDs), as they may relate to user visual and cognitive processing. We consider three specific ways in which bodies of knowledge drawn from the visual and cognitive sciences may be relevant to the composition of VSDs, with the understanding that direct research with children with complex communication needs is necessary to verify or refute our speculations. PMID:22946989

  3. Technical Note: Development of an automated lysimeter for the calculation of peat soil actual evapotranspiration

    NASA Astrophysics Data System (ADS)

    Proulx-McInnis, S.; St-Hilaire, A.; Rousseau, A. N.; Jutras, S.; Carrer, G.; Levrel, G.

    2011-05-01

    A limited number of publications in the literature deal with the measurement of actual evapotranspiration (AET) from a peat soil. AET is an important parameter in the description of water pathways of an ecosystem. In peatlands, where the water table is near the surface and the vegetation is composed of nonvascular plants without stomatal resistance, measuring AET is a challenge. This paper discusses the development of an automated lysimeter installed between 12 and 27 July 2010 at an 11-ha bog site near Pont-Rouge (42 km west of Quebec City, Canada). The system consisted of an isolated block of peat, maintained at the same water level as the surrounding water table by a system of submersible pressure transmitters and pumps. The change in water level (in millimetres) in the isolated block of peat was used to calculate the water lost through evapotranspiration (ET) while accounting for precipitation. AET rates were calculated for each day of the study period. Temperature fluctuated between 17.2 and 23.3 °C and total rainfall was 43.76 mm. AET rates from 0.6 to 6.9 mm day-1 were recorded, with a ΣAET/ΣP ratio of 1.38. The potential ET (PET) estimated with Thornthwaite's semi-empirical formula suggested values between 2.8 and 3.9 mm day-1. The average AET/PET ratio was 1.13. According to the literature, these results are plausible. This system, relatively inexpensive and simple to install, may eventually be used to calculate AET on peaty soils.
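    The day-by-day bookkeeping described above reduces to a simple water balance: each day's AET is the precipitation minus the change in stored water level in the peat block. A minimal sketch of that arithmetic, using made-up numbers rather than the study's data:

```python
# Water-balance bookkeeping for a lysimeter (illustrative sketch).
# Daily AET (mm) = precipitation (mm) - change in water level (mm):
# a falling level (negative change) adds to the evapotranspiration estimate.

def daily_aet(precip_mm, level_change_mm):
    """Actual evapotranspiration for one day, in mm."""
    return precip_mm - level_change_mm

# (precipitation, water-level change) per day -- hypothetical values
days = [
    (0.0, -2.1),   # dry day, level fell 2.1 mm   -> AET = 2.1 mm
    (5.0, 1.5),    # rainy day, level rose 1.5 mm -> AET = 3.5 mm
    (1.0, -1.0),
]
aet = [daily_aet(p, d) for p, d in days]
ratio = sum(aet) / sum(p for p, _ in days)  # analogous to the paper's ΣAET/ΣP
```

    A ratio above 1, as in the study (1.38), means more water left the block through evapotranspiration than arrived as rain over the period.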

  4. Noted

    ERIC Educational Resources Information Center

    Nunberg, Geoffrey

    2013-01-01

    Considering how much attention people lavish on the technologies of writing--scroll, codex, print, screen--it's striking how little they pay to the technologies for digesting and regurgitating it. One way or another, there's no sector of the modern world that is not saturated with note-taking--the bureaucracy, the liberal professions, the…

  5. Diacria Scene

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image provides a representative view of the vast martian northern plains in the Diacria region near 52.8°N, 184.7°W. This is what the plains looked like in late northern spring in August 2004, after the seasonal winter frost had sublimed away and dust devils began to leave dark streaks on the surface. Many of the dark dust devil streaks in this image are concentrated near a low mound -- the location of a shallowly-filled and buried impact crater. The picture covers an area about 3 km (1.9 mi) wide. Sunlight illuminates the scene from the lower left.

  6. Analyzing crime scene videos

    NASA Astrophysics Data System (ADS)

    Cunningham, Cindy C.; Peloquin, Tracy D.

    1999-02-01

    Since late 1996 the Forensic Identification Services Section of the Ontario Provincial Police has been actively involved in state-of-the-art image capture and the processing of video images extracted from crime scene videos. The benefits and problems of this technology for video analysis are discussed. All analysis is being conducted on SUN Microsystems UNIX computers, networked to a digital disk recorder that is used for video capture. The primary advantage of this system over traditional frame grabber technology is reviewed. Examples from actual cases are presented and the successes and limitations of this approach are explored. Suggestions to companies implementing security technology plans for various organizations (banks, stores, restaurants, etc.) will be made. Future directions for this work and new technologies are also discussed.

  7. Constructing, Perceiving, and Maintaining Scenes: Hippocampal Activity and Connectivity

    PubMed Central

    Zeidman, Peter; Mullally, Sinéad L.; Maguire, Eleanor A.

    2015-01-01

    In recent years, evidence has accumulated to suggest the hippocampus plays a role beyond memory. A strong hippocampal response to scenes has been noted, and patients with bilateral hippocampal damage cannot vividly recall scenes from their past or construct scenes in their imagination. There is debate about whether the hippocampus is involved in the online processing of scenes independent of memory. Here, we investigated the hippocampal response to visually perceiving scenes, constructing scenes in the imagination, and maintaining scenes in working memory. We found extensive hippocampal activation for perceiving scenes, and a circumscribed area of anterior medial hippocampus common to perception and construction. There was significantly less hippocampal activity for maintaining scenes in working memory. We also explored the functional connectivity of the anterior medial hippocampus and found significantly stronger connectivity with a distributed set of brain areas during scene construction compared with scene perception. These results increase our knowledge of the hippocampus by identifying a subregion commonly engaged by scenes, whether perceived or constructed, by separating scene construction from working memory, and by revealing the functional network underlying scene construction, offering new insights into why patients with hippocampal lesions cannot construct scenes. PMID:25405941

  8. Hydrological AnthropoScenes

    NASA Astrophysics Data System (ADS)

    Cudennec, Christophe

    2016-04-01

    The Anthropocene concept encapsulates the planetary-scale changes resulting from accelerating socio-ecological transformations, beyond the stratigraphic definition currently under debate. The emergence of multi-scale and proteiform complexity requires interdisciplinary, systems-based approaches. Yet, to reduce the cognitive challenge of tackling this complexity, the global Anthropocene syndrome must now be studied from various topical points of view and grounded at regional and local levels. A systems approach should make it possible to identify AnthropoScenes, i.e. settings where a socio-ecological transformation subsystem is clearly coherent within its boundaries and displays explicit relationships with neighbouring/remote scenes and within a nesting architecture. Hydrology is a key topical point of view to explore, as it is important to many aspects of the Anthropocene, whether through water itself as a resource, hazard or transport force, or through the network, connectivity, interface, teleconnection, emergence and scaling issues it determines. We schematically exemplify these aspects with three contrasting hydrological AnthropoScenes in Tunisia, France and Iceland, and reframe therein concepts of the hydrological-change debate.

  9. Animal Detection Precedes Access to Scene Category

    PubMed Central

    Crouzet, Sébastien M.; Joubert, Olivier R.; Thorpe, Simon J.; Fabre-Thorpe, Michèle

    2012-01-01

    The processes underlying object recognition are fundamental for the understanding of visual perception. Humans can recognize many objects rapidly even in complex scenes, a task that still presents major challenges for computer vision systems. A common experimental demonstration of this ability is the rapid animal detection protocol, where human participants' earliest responses to report the presence/absence of animals in natural scenes are observed at 250–270 ms latencies. One of the hypotheses to account for such speed is that people would not actually recognize an animal per se, but rather base their decision on global scene statistics. These global statistics (also referred to as spatial envelope or gist) have been shown to be computationally easy to process and could thus be used as a proxy for coarse object recognition. Here, using a saccadic choice task, which allows us to investigate a previously inaccessible temporal window of visual processing, we showed that animal – but not vehicle – detection clearly precedes scene categorization. This asynchrony is further validated by a late contextual modulation of animal detection, starting simultaneously with the availability of scene category. Interestingly, the advantage for animal over scene categorization is in opposition to the results of simulations using standard computational models. Taken together, these results challenge the idea that rapid animal detection might be based on early access to global scene statistics, and rather suggest a process based on the extraction of specific local complex features that might be hardwired in the visual system. PMID:23251545
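    The "global scene statistics" the authors argue against can be made concrete with a toy descriptor: summarize an image by coarsely pooled gradient energy rather than object-level analysis. This is a hedged illustration only — it is not the spatial-envelope/gist model used in the literature, just the simplest possible stand-in for the idea:

```python
import numpy as np

# Toy stand-in for a "global scene statistics" descriptor: pool gradient
# energy over a coarse grid. Not the actual gist / spatial-envelope model.

def toy_gist(image, grid=4):
    """Return a (grid*grid,) vector: mean gradient magnitude per cell."""
    gy, gx = np.gradient(image.astype(float))
    energy = np.hypot(gx, gy)
    h, w = energy.shape
    cells = []
    for i in range(grid):
        for j in range(grid):
            block = energy[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            cells.append(block.mean())
    return np.array(cells)

flat = np.ones((32, 32))      # featureless image: all-zero descriptor
edge = np.zeros((32, 32))
edge[:, 16:] = 1.0            # vertical boundary: energy in central cells
```

    Two scene categories with different layouts yield different pooled-energy vectors, which is why such statistics can act as a proxy for coarse categorization without recognizing any object.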

  10. Underwater Scene Composition

    ERIC Educational Resources Information Center

    Kim, Nanyoung

    2009-01-01

    In this article, the author describes an underwater scene composition for elementary-education majors. This project deals with watercolor with crayon or oil-pastel resist (medium); the beauty of nature represented by fish in the underwater scene (theme); texture and pattern (design elements); drawing simple forms (drawing skill); and composition…

  11. Strategic scene generation model

    NASA Astrophysics Data System (ADS)

    Heckathorn, Harry M.; Anding, David C.

    1992-09-01

    The Strategic Defense Initiative (SDI) must simulate the detection, acquisition, discrimination and tracking of anticipated targets and predict the effect of natural and man-made background phenomena on optical sensor systems designed to perform these tasks. NRL is developing such a capability using a computerized methodology to provide modeled data in the form of digital realizations of complex, dynamic scenes. The Strategic Scene Generation Model (SSGM) is designed to integrate state-of-science knowledge, data bases and computerized phenomenology models to simulate strategic engagement scenarios and to support the design, development and test of advanced surveillance systems. Multi-phenomenology scenes are produced from validated codes--thereby serving as a standard against which different SDI concepts and designs can be tested. This paper describes the SSGM design architecture, the software modules and databases which are used to create scene elements, the synthesis of deterministic and/or stochastic structured scene elements into composite scenes, the software system to manage the various databases and digital image libraries, and verification and validation by comparison with measured data. The focus will be on the functionality and development schedule of the Baseline Model (SSGMB) which is currently being implemented.

  12. Research in interactive scene analysis

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J. M.; Garvey, T. D.; Weyl, S. A.; Wolf, H. C.

    1975-01-01

    An interactive scene interpretation system (ISIS) was developed as a tool for constructing and experimenting with man-machine and automatic scene analysis methods tailored for particular image domains. A recently developed region analysis subsystem based on the paradigm of Brice and Fennema is described. Using this subsystem, a series of experiments was conducted to determine good criteria for initially partitioning a scene into atomic regions and for merging these regions into a final partition of the scene along object boundaries. Semantic (problem-dependent) knowledge is essential for complete, correct partitions of complex real-world scenes. An interactive approach to semantic scene segmentation was developed and demonstrated on both landscape and indoor scenes. This approach provides a reasonable methodology for segmenting scenes that cannot be processed completely automatically, and is a promising basis for a future automatic system. A program is also described that can automatically generate strategies for finding specific objects in a scene based on manually designated pictorial examples.
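    The partition-and-merge idea summarized above can be sketched in a few lines: start from single-pixel "atomic" regions and greedily merge 4-connected neighbours whose region means differ by less than a threshold. The actual Brice–Fennema paradigm is far richer (it reasons about boundary strength, not just means); this union-find reduction only illustrates the control flow:

```python
# Minimal partition-and-merge sketch: atomic pixel regions merged with
# 4-connected neighbours when region means differ by less than `thresh`.

def merge_regions(img, thresh):
    h, w = len(img), len(img[0])
    parent = list(range(h * w))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    total = {i: float(v) for i, v in enumerate(p for row in img for p in row)}
    count = {i: 1 for i in range(h * w)}

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb and abs(total[ra]/count[ra] - total[rb]/count[rb]) < thresh:
            parent[rb] = ra
            total[ra] += total[rb]
            count[ra] += count[rb]

    for y in range(h):
        for x in range(w):
            if x + 1 < w: union(y*w + x, y*w + x + 1)
            if y + 1 < h: union(y*w + x, (y+1)*w + x)
    return {find(i) for i in range(h * w)}   # one representative per region

img = [[0, 0, 9, 9],
       [0, 0, 9, 9]]
regions = merge_regions(img, thresh=2.0)     # the 0-block and the 9-block
```

    The threshold plays the role of the "good criteria" the experiments sought: too low and the partition stays fragmented, too high and object boundaries are merged away.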

  13. Crime Scene Investigation.

    ERIC Educational Resources Information Center

    Harris, Barbara; Kohlmeier, Kris; Kiel, Robert D.

    Casting students in grades 5 through 12 in the roles of reporters, lawyers, and detectives at the scene of a crime, this interdisciplinary activity involves participants in the intrigue and drama of crime investigation. Using a hands-on, step-by-step approach, students work in teams to investigate a crime and solve a mystery. Through role-playing…

  14. Assembling a holographic scene

    NASA Astrophysics Data System (ADS)

    Mrongovius, Martina

    2013-03-01

    A series of art projects that use multiplex holography as a medium to combine and spatially animate multiple photographic perspectives is presented. Through the process of collecting images and compiling them into holograms, several concepts are explored. The animate spatial qualities of multiplex holograms are used to express an urban gaze of moving through cities and the multiplicity of perceptual experience. A question of how we understand ourselves to be located, and the complexity of this sense, is also addressed. The ability to assemble multiple photographic views into a single scene is considered as a method of documenting the collective experience of an event. How these holographic scenes are viewed is compared with the compositional activity, showing both how the holographic medium inspired the compositions and how it is used as a means of expression.

  15. Opportunity's Heat Shield Scene

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This image from NASA's Mars Exploration Rover Opportunity reveals the scene of the rover's heat shield impact. In this view, Opportunity is approximately 130 meters (427 feet) away from the device that protected it while hurtling through the martian atmosphere.

    The rover spent 36 sols investigating how the severe heating during entry through the atmosphere affected the heat shield. The most obvious finding is that the heat shield inverted upon impact.

    This is the panoramic camera team's best current attempt at generating a true-color view of what this scene would look like if viewed by a human on Mars. It was generated from a mathematical combination of six calibrated, left-eye panoramic camera images acquired around 1:50 p.m. local solar time on Opportunity's sol 322 (Dec. 19, 2004) using filters ranging in wavelengths from 430 to 750 nanometers.
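    The "mathematical combination" of calibrated filter images amounts to mixing several narrow-band images into R, G, and B channels with per-band weights. A sketch of that operation follows; the wavelengths and weights here are hypothetical placeholders, not the Pancam team's actual calibration pipeline:

```python
import numpy as np

# Combine calibrated narrow-band images into an RGB composite.
# Wavelengths and weights below are invented for illustration.

def combine_bands(bands, weights):
    """bands: {wavelength_nm: 2D array}; weights: {channel: {wl: coeff}}."""
    channels = []
    for ch in ("R", "G", "B"):
        mix = sum(coeff * bands[wl] for wl, coeff in weights[ch].items())
        channels.append(np.clip(mix, 0.0, 1.0))
    return np.stack(channels, axis=-1)

h = w = 4
bands = {430: np.full((h, w), 0.2),
         530: np.full((h, w), 0.5),
         750: np.full((h, w), 0.8)}
weights = {"R": {750: 1.0}, "G": {530: 1.0}, "B": {430: 1.0}}
rgb = combine_bands(bands, weights)   # (4, 4, 3) true-color-style composite
```

    In a real pipeline each output channel would blend several bands with calibrated coefficients so the result approximates what a human eye would see.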

  16. Capturing, processing, and rendering real-world scenes

    NASA Astrophysics Data System (ADS)

    Nyland, Lars S.; Lastra, Anselmo A.; McAllister, David K.; Popescu, Voicu; McCue, Chris; Fuchs, Henry

    2000-12-01

    While photographs vividly capture a scene from a single viewpoint, our goal is to capture a scene in such a way that a viewer can freely move to any viewpoint, just as he or she would in an actual scene. We have built a prototype system to quickly digitize a scene using a laser rangefinder and a high-resolution digital camera that accurately captures a panorama of high-resolution range and color information. With real-world scenes, we have provided data to fuel research in many areas, including representation, registration, data fusion, polygonization, rendering, simplification, and reillumination. The real-world scene data can be used for many purposes, including immersive environments, immersive training, re-engineering and engineering verification, renovation, crime-scene and accident capture and reconstruction, archaeology and historic preservation, sports and entertainment, surveillance, remote tourism, and remote sales. We describe our acquisition system and the processing necessary to merge data from the multiple input devices and positions. We also describe high-quality rendering using the data we have collected, and discuss issues about specific rendering accelerators and algorithms. We conclude by describing future uses of, and collection methods for, real-world scene data.
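    Turning a panorama of range samples into 3D points is, at its core, a spherical-to-Cartesian conversion. A minimal sketch, assuming each sample is indexed by azimuth and elevation angles about the scanner (a common model; the actual rig's geometry may differ):

```python
import math

# Convert one rangefinder sample (range + viewing angles) to a 3D point.
# Assumes a simple spherical scan geometry centered on the scanner.

def to_cartesian(range_m, azimuth_deg, elevation_deg):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A return 2 m straight ahead at the horizon lands on the x-axis.
point = to_cartesian(2.0, 0.0, 0.0)
```

    Applying this per pixel of the range panorama, and attaching the co-registered color sample to each point, yields the colored point cloud that the later processing stages merge and render.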

  17. South Polar Scene

    NASA Technical Reports Server (NTRS)

    2004-01-01

    5 February 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a portion of the south polar residual cap. Sunlight illuminates this scene from the upper left, thus the somewhat kidney bean-shaped features are pits, not mounds. These pits and their neighboring polygonal cracks are formed in a material composed mostly of carbon dioxide ice. The image is located near 87.0°S, 5.7°W, and covers an area 3 km (1.9 mi) wide.

  18. Benchmark on outdoor scenes

    NASA Astrophysics Data System (ADS)

    Zhang, Hairong; Wang, Cheng; Chen, Yiping; Jia, Fukai; Li, Jonathan

    2016-03-01

    Depth super-resolution is becoming popular in computer vision, and most test data are drawn from indoor data sets with ground-truth measurements, such as Middlebury. However, indoor data sets are mainly acquired with structured-light techniques under ideal conditions, and so cannot represent the real world under natural light. Unlike indoor scenes, the uncontrolled outdoor environment is much more complicated and is rich in both visual and depth texture. For that reason, we develop a more challenging and meaningful outdoor benchmark for depth super-resolution using a state-of-the-art active laser scanning system.

  19. Use of Data Mining Techniques to Model Crime Scene Investigator Performance

    NASA Astrophysics Data System (ADS)

    Adderley, Richard; Townsley, Michael; Bond, John

    This paper examines how data mining techniques can assist the monitoring of Crime Scene Investigator performance. The findings show that Investigators can be placed in one of four groups according to their ability to recover DNA and fingerprints from crime scenes. They also show that Investigators' ability to predict which crime scenes will yield the best opportunity of recovering forensic samples has no correlation with their actual ability to recover those samples.
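    The correlation check described above can be sketched directly: compare investigators' predicted forensic yields with their actual recovery rates. The numbers below are invented for illustration; the paper's finding was that such predictions showed no correlation with actual recovery ability:

```python
# Pearson correlation between predicted and actual forensic-sample yields.
# All figures are hypothetical, not data from the paper.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

predicted = [0.9, 0.8, 0.3, 0.5]   # hypothetical predicted yield per CSI
actual    = [0.4, 0.2, 0.6, 0.4]   # hypothetical recovered-sample rate
r = pearson(predicted, actual)     # r near 0 would echo the paper's finding
```

    Grouping investigators "into one of four groups" could then be as simple as quartiles of the actual recovery rate, with r quantifying how little the self-predictions track that grouping.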

  20. CAD programs: a tool for crime scene processing and reconstruction

    NASA Astrophysics Data System (ADS)

    Boggiano, Daniel; De Forest, Peter R.; Sheehan, Francis X.

    1997-02-01

    Computer-aided drafting (CAD) programs have great potential for helping the forensic scientist. One of their most direct and useful applications is crime scene documentation, as an aid in rendering neat, unambiguous line drawings of crime scenes. Once the data have been entered, they can easily be displayed, printed, or plotted in a variety of formats. Final renditions from this initial data entry can take multiple forms and have multiple uses. As a demonstrative aid, a CAD program can produce two-dimensional (2-D) scale drawings of the scene from one's notes. These 2-D renditions are of court-display quality and help make the forensic scientist's testimony easily understood. Another use for CAD is as an analytical tool for scene reconstruction. More than just a drawing aid, CAD can generate useful information from the data input. It can help reconstruct bullet paths or locations of furniture in a room when these are critical to the reconstruction. Data entry at the scene, on a notebook computer, can assist in framing and answering questions so that the forensic scientist can test hypotheses while actively documenting the scene. Further, three-dimensional (3-D) renditions of items can be viewed from many 'locations' by using the program to rotate the object and the observer's viewpoint.

  1. One-step reconstruction of assembled 3D holographic scenes

    NASA Astrophysics Data System (ADS)

    Velez Zea, Alejandro; Barrera-Ramírez, John Fredy; Torroba, Roberto

    2015-12-01

    We present a new experimental approach for reconstructing, in one step, 3D scenes that could not otherwise be captured in a single snapshot with a standard off-axis digital hologram architecture, due to a lack of illuminating resources or a limited setup size. Consequently, whenever a scene cannot be wholly illuminated or its size surpasses the available setup disposition, this protocol can be implemented to solve these issues. We need neither to alter the original setup at every step nor to cover the whole scene with the illuminating source, thus saving resources. With this technique we multiplex the processed holograms of actual diffuse objects composing a scene using a two-beam off-axis holographic setup in a Fresnel approach. By registering the holograms of several objects individually and applying a spatial filtering technique, the filtered Fresnel holograms can then be added to produce a compound hologram. The simultaneous reconstruction of all objects is performed in one step using the same recovering procedure employed for single holograms. Using this technique, we were able to reconstruct, for the first time to our knowledge, a scene by multiplexing off-axis holograms of the 3D objects without cross talk. This technique is important for quantitative visualization of optically packaged multiple images and is useful for a wide range of applications. We present experimental results to support the method.
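    Why does adding individually recorded holograms allow a one-step reconstruction of everything? Because free-space propagation acts as a pure phase factor in the spatial-frequency domain, so it is linear and exactly invertible: back-propagating a sum of holograms returns the sum of the object fields. The toy model below demonstrates just that linearity; it deliberately omits the off-axis carrier and spatial filtering of the real setup, and the random phase merely stands in for a propagation kernel:

```python
import numpy as np

# Linearity of frequency-domain propagation: back-propagating a sum of
# holograms recovers the sum of object fields. Toy model, not the paper's
# full off-axis Fresnel setup.

def propagate(field, phase):
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * phase))

rng = np.random.default_rng(0)
n = 64
phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))   # stand-in transfer phase

obj_a = np.zeros((n, n), complex); obj_a[10:14, 10:14] = 1.0
obj_b = np.zeros((n, n), complex); obj_b[40:44, 40:44] = 1.0

holo_a = propagate(obj_a, phase)      # record each object separately
holo_b = propagate(obj_b, phase)
combined = holo_a + holo_b            # multiplex by simple addition

recovered = propagate(combined, -phase)   # single back-propagation step
# recovered equals obj_a + obj_b up to floating-point error: no cross talk
```

    In the actual experiment, the spatial filtering step ensures each hologram contributes only its object term before the addition, which is what keeps the one-step reconstruction free of cross talk.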

  2. Scene gist categorization by pigeons.

    PubMed

    Kirkpatrick, Kimberly; Bilton, Tannis; Hansen, Bruce C; Loschky, Lester C

    2014-04-01

    Scene gist categorization in humans is rapid, accurate, and tuned to the statistical regularities in the visual world. However, no studies have investigated whether scene gist categorization is a general process shared across species, or whether it may be influenced by species-specific adaptive specializations relying on specific low-level scene statistical regularities of the environment. Although pigeons form many types of categorical judgments, little research has examined pigeons' scene categorization, and no studies have examined pigeons' ability to do so rapidly. In Experiment 1, pigeons were trained to discriminate between either 2 basic-level categories (beach vs. mountain) or a superordinate-level natural versus a manmade scene category distinction (beach vs. street). The birds learned both tasks to a high degree of accuracy and transferred their discrimination to novel images. Furthermore, the pigeons successfully discriminated stimuli presented in the 0.2- to 0.35-s duration range. Therefore, pigeons, a highly divergent species from humans, are also capable of rapid scene categorization, but they require longer stimulus durations than humans. Experiment 2 examined whether pigeons make use of complex statistical regularities during scene gist categorization across multiple viewpoints. Pigeons were trained with the 2 natural categories from Experiment 1 (beach vs. mountain) with zenith (90°), bird's eye (45°), and terrestrial (0°) viewpoints. A sizable portion of the variability in pigeon categorization performance was explained by the systematic variation in scene category-specific statistical regularities, as with humans. Thus, rapid scene categorization is a process that is shared across pigeons and humans, but shows a degree of adaptive specialization.

  3. Research in interactive scene analysis

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J. M.; Barrow, H. G.; Weyl, S. A.

    1976-01-01

    Cooperative (man-machine) scene analysis techniques were developed whereby humans can provide a computer with guidance when completely automated processing is infeasible. An interactive approach promises significant near-term payoffs in analyzing various types of high volume satellite imagery, as well as vehicle-based imagery used in robot planetary exploration. This report summarizes the work accomplished over the duration of the project and describes in detail three major accomplishments: (1) the interactive design of texture classifiers; (2) a new approach for integrating the segmentation and interpretation phases of scene analysis; and (3) the application of interactive scene analysis techniques to cartography.

  4. Refreshing and integrating visual scenes in scene-selective cortex

    PubMed Central

    Park, Soojin; Chun, Marvin M.; Johnson, Marcia K.

    2010-01-01

    Constructing a rich and coherent visual experience involves maintaining visual information that is not perceptually available in the current view. Recent studies suggest that briefly thinking about a stimulus (refreshing, Johnson, 1992) can modulate activity in category-specific visual areas. Here, we tested the nature of such perceptually refreshed representations in the parahippocampal place area (PPA) and retrosplenial cortex (RSC) using fMRI. We asked whether a refreshed representation is specific to a restricted view of a scene, or more view-invariant. Participants saw a panoramic scene and were asked to think back to (refresh) a part of the scene after it disappeared. In some trials, the refresh cue appeared twice on the same side (e.g., refresh left - refresh left), and in other trials, the refresh cue appeared on different sides (e.g., refresh left - refresh right). A control condition presented halves of the scene twice on the same side (e.g., perceive left - perceive left) or on different sides (e.g., perceive left - perceive right). When scenes were physically repeated, both the PPA and RSC showed greater activation for the different-side repetition than the same-side repetition, suggesting view-specific representations. When participants refreshed scenes, the PPA showed view-specific activity just as in the physical repeat conditions, whereas the RSC showed an equal amount of activation for the different- and same-side conditions. This finding suggests that in the RSC, refreshed representations were not restricted to a specific view of a scene, but extended beyond the target half into the entire scene. Thus, RSC activity associated with refreshing may provide a mechanism for integrating multiple views in the mind. PMID:19929756

  5. Forensic 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Little, Charles Q.; Small, Daniel E.; Peters, Ralph R.; Rigdon, J. B.

    2000-05-01

    Traditionally, law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, to record a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging and 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would help law enforcement agents quickly document and accurately record a crime scene.

  6. Forensic 3D Scene Reconstruction

    SciTech Connect

    LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN; SMALL,DANIEL E.

    1999-10-12

    Traditionally, law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, to record a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging and 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a fieldable prototype of a fast, accurate, 3D measurement and imaging system that would help law enforcement agents quickly document and accurately record a crime scene.

  7. Simulating Scenes In Outer Space

    NASA Technical Reports Server (NTRS)

    Callahan, John D.

    1989-01-01

    Multimission Interactive Picture Planner, MIP, is a computer program for scientifically accurate, fast, three-dimensional animation of scenes in deep space. It is versatile, reasonably comprehensive, and portable, and runs on microcomputers. New techniques were developed to rapidly perform the calculations and transformations necessary to animate scenes in scientifically accurate three-dimensional space. MIP is written in FORTRAN 77 and was primarily designed to handle the Voyager, Galileo, and Space Telescope missions; it has been adapted to handle others.

  8. Monocular visual scene understanding: understanding multi-object traffic scenes.

    PubMed

    Wojek, Christian; Walk, Stefan; Roth, Stefan; Schindler, Konrad; Schiele, Bernt

    2013-04-01

    Following recent advances in detection, context modeling, and tracking, scene understanding has been the focus of renewed interest in computer vision research. This paper presents a novel probabilistic 3D scene model that integrates state-of-the-art multiclass object detection, object tracking and scene labeling together with geometric 3D reasoning. Our model is able to represent complex object interactions such as inter-object occlusion, physical exclusion between objects, and geometric context. Inference in this model allows us to jointly recover the 3D scene context and perform 3D multi-object tracking from a mobile observer, for objects of multiple categories, using only monocular video as input. Contrary to many other approaches, our system performs explicit occlusion reasoning and is therefore capable of tracking objects that are partially occluded for extended periods of time, or objects that have never been observed to their full extent. In addition, we show that a joint scene tracklet model for the evidence collected over multiple frames substantially improves performance. The approach is evaluated for different types of challenging onboard sequences. We first show a substantial improvement to the state of the art in 3D multipeople tracking. Moreover, a similar performance gain is achieved for multiclass 3D tracking of cars and trucks on a challenging dataset.

  9. Apparatus Notes.

    ERIC Educational Resources Information Center

    Eaton, Bruce G., Ed.

    1980-01-01

    Presents four notes that report new equipment and techniques of interest to physics teachers. These notes deal with collisions of atoms in solids, determining the viscosity of a liquid, measuring the speed of sound, and demonstrating the Doppler effect. (HM)

  10. Physics Notes.

    ERIC Educational Resources Information Center

    School Science Review, 1980

    1980-01-01

    Presents nine physics notes for British secondary school teachers. Some of these notes are: (1) speed of sound in a steel rod; (2) physics extracts-part four (1978); and (3) a graphical approach to acceleration. (HM)

  11. Suicide notes.

    PubMed

    O'Donnell, I; Farmer, R; Catalan, J

    1993-07-01

    Detailed case reports of incidents of suicide and attempted suicide on the London Underground railway system between 1985 and 1989 were examined for the presence of suicide notes. The incidence of note-leaving was 15%. Notes provided little insight into the causes of suicide as subjectively perceived, or strategies for suicide prevention. PMID:8353698

  12. On the psychology and psychopathology of primal-scene experience.

    PubMed

    Hoyt, M F

    1980-07-01

    The importance of primal-scene experience is suggested by the wide range of attention it has received, with a multitude of derivative phenomena being attributed to its influence. Emphasis has been on possible psychiatric problems, and almost all available reports are clinical and anecdotal. The classical psychoanalytic view has been that such stimulation, be it through actual witnessing or fantasy, results (especially in children) in experience of anxiety, intense eroticization, and sadomasochistic confusions about sexuality. It is suggested here that issues of affectional love and fears of aloneness and feelings of vulnerability may often be the focus of primal-scene reactions. A wide range of evidence has been presented here to support the view that primal-scene experience per se is not necessarily deleterious, and that traumatic or pathogenic effects usually occur only within a context of general brutality or disturbed family relationships. In contradistinction, some emphasis here has been placed on possible positive effects of primal-scene experience. There is a clear need for further study, especially among nonpsychiatrically selected persons, for understanding to be advanced regarding the vicissitudes of both normal and pathological primal-scene experience. PMID:7410144

  13. Multispectral polarized scene projector (MPSP)

    NASA Astrophysics Data System (ADS)

    Yu, Haiping; Wei, Hong; Guo, Lei; Wang, Shenggang; Li, Le; Lippert, Jack R.; Serati, Steve; Gupta, Neelam; Carlen, Frank R.

    2011-06-01

    This newly developed prototype Multispectral Polarized Scene Projector (MPSP), configured for the short wave infrared (SWIR) regime, can be used for the test & evaluation (T&E) of spectro-polarimetric imaging sensors. The MPSP system generates both static and video images (up to 200 Hz) with 512×512 spatial resolution with active spatial, spectral, and polarization modulation with controlled bandwidth. It projects input SWIR radiant intensity scenes from stored memory with user selectable wavelength (850-1650 nm) and bandwidth (12-100 nm), as well as polarization states (six different states) controllable on a pixel by pixel basis. The system consists of one spectrally tunable liquid crystal filter with variable bandpass, and multiple liquid crystal on silicon (LCoS) spatial light modulators (SLMs) for intensity control and polarization modulation. In addition to the spectro-polarimetric sensor test, the instrument also simulates polarized multispectral images of military scenes/targets for hardware-in-the loop (HIL) testing.

  14. Toward integrated scene text reading.

    PubMed

    Weinman, Jerod J; Butler, Zachary; Knoll, Dugan; Feild, Jacqueline

    2014-02-01

    The growth in digital camera usage combined with a worldly abundance of text has translated to a rich new era for a classic problem of pattern recognition, reading. While traditional document processing often faces challenges such as unusual fonts, noise, and unconstrained lexicons, scene text reading amplifies these challenges and introduces new ones such as motion blur, curved layouts, perspective projection, and occlusion among others. Reading scene text is a complex problem involving many details that must be handled effectively for robust, accurate results. In this work, we describe and evaluate a reading system that combines several pieces, using probabilistic methods for coarsely binarizing a given text region, identifying baselines, and jointly performing word and character segmentation during the recognition process. By using scene context to recognize several words together in a line of text, our system gives state-of-the-art performance on three difficult benchmark data sets. PMID:24356356

  15. Categorization of Natural Dynamic Audiovisual Scenes

    PubMed Central

    Rummukainen, Olli; Radun, Jenni; Virtanen, Toni; Pulkki, Ville

    2014-01-01

    This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move forward to understand better the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database. PMID:24788808

  16. How to Make a Scene

    ERIC Educational Resources Information Center

    Varian, Hal R.

    2004-01-01

    Each Thursday, the New York Times publishes a column called "Economic Scene" on page C2 of the Business Section. The authorship of the column rotates among four individuals: Alan Krueger, Virginia Postrel, Jeff Madrick, and the author. This essay is about how he came to be a columnist and how he goes about writing the columns.

  17. Creating Three-Dimensional Scenes

    ERIC Educational Resources Information Center

    Krumpe, Norm

    2005-01-01

    Persistence of Vision Raytracer (POV-Ray), a free computer program for creating photo-realistic, three-dimensional scenes and a link for Mathematica users interested in generating POV-Ray files from within Mathematica, is discussed. POV-Ray has great potential in secondary mathematics classrooms and helps in strengthening students' visualization…

  18. The Musical Scene in Hawaii

    ERIC Educational Resources Information Center

    Vaught, Raymond

    1975-01-01

    Music education does not exist in a vacuum; it must be viewed as part of a total musical scene. Article considered the wide spectrum of musical events in Hawaii that made music education there so rich and varied. (Author/RK)

  19. Comics Make the Job Scene

    ERIC Educational Resources Information Center

    Training Bus Ind, 1970

    1970-01-01

    "Making the Job Scene," a series of 11 short, full-color comic books for ghetto residents, tells about job opportunities, where to get training, and how to behave on the job. For single copies, write: Manpower Administration Information Office, Washington, D.C. 20210. (LY)

  20. When is scene identification just texture recognition?

    PubMed

    Renninger, Laura Walker; Malik, Jitendra

    2004-01-01

    Subjects were asked to identify scenes after very brief exposures (<70 ms). Their performance was always above chance and improved with exposure duration, confirming that subjects can get the gist of a scene with one fixation. We propose that a simple texture analysis of the image can provide a useful cue towards rapid scene identification. Our model learns texture features across scene categories and then uses this knowledge to identify new scenes. The texture analysis leads to similar identifications and confusions as subjects with limited processing time. We conclude that early scene identification can be explained with a simple texture recognition model. PMID:15208015
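    The texture-based identification idea can be sketched as follows; the gradient histogram and nearest-centroid matching below are simplified stand-ins for the learned texture features of the authors' actual model, and all data are toy values.

```python
# Minimal sketch of texture-based scene identification (illustrative only).
# Each "image" is reduced to a texture-energy histogram; categories are
# learned as mean histograms, and new scenes matched by nearest centroid.

def texture_histogram(pixels, bins=4, max_val=255):
    """Crude texture feature: histogram of absolute horizontal gradients."""
    grads = [abs(row[i + 1] - row[i]) for row in pixels for i in range(len(row) - 1)]
    hist = [0] * bins
    for g in grads:
        hist[min(g * bins // (max_val + 1), bins - 1)] += 1
    total = float(sum(hist)) or 1.0
    return [h / total for h in hist]

def learn_centroids(labelled_images):
    """Average the histograms of training images per scene category."""
    sums, counts = {}, {}
    for label, img in labelled_images:
        h = texture_histogram(img)
        acc = sums.setdefault(label, [0.0] * len(h))
        sums[label] = [a + b for a, b in zip(acc, h)]
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def identify(img, centroids):
    """Nearest-centroid classification in histogram space."""
    h = texture_histogram(img)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2 for a, b in zip(h, centroids[lab])))

# Toy data: "beach" scenes are smooth, "forest" scenes high-frequency.
smooth = [[10, 12, 11, 13, 12, 11]] * 4
busy = [[0, 200, 10, 190, 5, 210]] * 4
cents = learn_centroids([("beach", smooth), ("forest", busy)])
print(identify([[9, 11, 10, 12, 11, 10]] * 4, cents))  # a smooth test image
```

    The point of the sketch is the claim of the abstract itself: a very cheap global texture summary, with no object recognition, already separates scene categories.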

  1. Improving AIRS radiance spectra in high contrast scenes using MODIS

    NASA Astrophysics Data System (ADS)

    Pagano, Thomas S.; Aumann, Hartmut H.; Manning, Evan M.; Elliott, Denis A.; Broberg, Steven E.

    2015-09-01

    The Atmospheric Infrared Sounder (AIRS) on the EOS Aqua spacecraft was launched on May 4, 2002. AIRS acquires hyperspectral infrared radiances in 2378 channels ranging in wavelength from 3.7 to 15.4 um, with a spectral resolution (λ/Δλ) of better than 1200 and a spatial resolution of 13.5 km with global daily coverage. AIRS is designed to measure temperature and water vapor profiles for improved weather forecast accuracy and an improved understanding of climate processes. As with most instruments, the AIRS Point Spread Functions (PSFs) are not the same for all detectors. When viewing a non-uniform scene, this causes a significant radiometric error in some channels that is scene dependent and cannot be removed without knowledge of the underlying scene. The magnitude of the error depends on the combination of the non-uniformity of the AIRS spatial response for a given channel and the non-uniformity of the scene, but it is typically noticeable in only about 1% of scenes and about 10% of channels. The current solution is to avoid those channels when performing geophysical retrievals. In this effort we use data from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument to provide information on scene uniformity, which is used to correct the AIRS data. For the vast majority of channels and footprints the technique works extremely well when compared to a Principal Component (PC) reconstruction of the AIRS channels. In some cases where the scene has high inhomogeneity in an irregular pattern, and in some channels, the method can actually degrade the spectrum. Most of the degraded channels appear to be only slightly affected by random noise introduced in the process, but those with larger degradation may be affected by alignment errors of AIRS relative to MODIS or by uncertainties in the PSF. Despite these errors, the methodology shows the ability to correct AIRS radiances in non-uniform scenes under some of the worst-case conditions and improves the ability to match
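    The correction strategy can be sketched in simplified form: if a high-resolution scene estimate (standing in for MODIS) is available over an AIRS footprint, the radiance bias introduced by a non-uniform PSF can be predicted and removed. The weights and scene values below are hypothetical toy numbers; the operational method additionally involves spectral matching and geolocation alignment, which this sketch omits.

```python
# Hedged sketch of a PSF non-uniformity correction (illustrative only).
# If a channel's spatial response (PSF) weights the scene unevenly, the
# observed radiance differs from the footprint-mean radiance; a high-
# resolution scene estimate lets us predict and remove that error.

def correct_radiance(observed, psf_weights, scene_estimate):
    """Rescale the observed radiance to the uniform-weighting (ideal) value.
    psf_weights and scene_estimate are per-subpixel lists over the footprint."""
    total_w = sum(psf_weights)
    actual = sum(w * s for w, s in zip(psf_weights, scene_estimate)) / total_w
    ideal = sum(scene_estimate) / len(scene_estimate)
    return observed * (ideal / actual)

# Footprint half cold ocean (80) and half warm land (120); the PSF over-weights
# the warm half, biasing the measurement high.
scene = [80.0, 80.0, 120.0, 120.0]
psf = [0.5, 0.5, 1.5, 1.5]
biased = sum(w * s for w, s in zip(psf, scene)) / sum(psf)  # 110.0
print(correct_radiance(biased, psf, scene))  # ~100.0, the true footprint mean
```

    Note how the sketch also shows the failure mode the abstract describes: the correction is only as good as the scene estimate, so errors in the high-resolution scene (or in the assumed PSF) propagate directly into the corrected radiance.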

  2. Scene-of-crime analysis by a 3-dimensional optical digitizer: a useful perspective for forensic science.

    PubMed

    Sansoni, Giovanna; Cattaneo, Cristina; Trebeschi, Marco; Gibelli, Daniele; Poppa, Pasquale; Porta, Davide; Maldarella, Monica; Picozzi, Massimo

    2011-09-01

    Analysis and detailed registration of the crime scene are of the utmost importance during investigations. However, this phase of activity is often affected by the risk of loss of evidence due to the limits of traditional scene of crime registration methods (ie, photos and videos). This technical note shows the utility of the application of a 3-dimensional optical digitizer on different crime scenes. This study aims in fact at verifying the importance and feasibility of contactless 3-dimensional reconstruction and modeling by optical digitization to achieve an optimal registration of the crime scene. PMID:21811148

  3. Scene-of-crime analysis by a 3-dimensional optical digitizer: a useful perspective for forensic science.

    PubMed

    Sansoni, Giovanna; Cattaneo, Cristina; Trebeschi, Marco; Gibelli, Daniele; Poppa, Pasquale; Porta, Davide; Maldarella, Monica; Picozzi, Massimo

    2011-09-01

    Analysis and detailed registration of the crime scene are of the utmost importance during investigations. However, this phase of activity is often affected by the risk of loss of evidence due to the limits of traditional scene of crime registration methods (ie, photos and videos). This technical note shows the utility of the application of a 3-dimensional optical digitizer on different crime scenes. This study aims in fact at verifying the importance and feasibility of contactless 3-dimensional reconstruction and modeling by optical digitization to achieve an optimal registration of the crime scene.

  4. Chemistry Notes.

    ERIC Educational Resources Information Center

    School Science Review, 1980

    1980-01-01

    Presents 12 chemistry notes for British secondary school teachers. Some of these notes are: (1) a simple device for testing pH-meters; (2) portable fume cupboard safety screen; and (3) Mass spectroscopy-analysis of a mass peak. (HM)

  5. ERBE Geographic Scene and Monthly Snow Data

    NASA Technical Reports Server (NTRS)

    Coleman, Lisa H.; Flug, Beth T.; Gupta, Shalini; Kizer, Edward A.; Robbins, John L.

    1997-01-01

    The Earth Radiation Budget Experiment (ERBE) is a multisatellite system designed to measure the Earth's radiation budget. The ERBE data processing system consists of several software packages or sub-systems, each designed to perform a particular task. The primary task of the Inversion Subsystem is to reduce satellite altitude radiances to fluxes at the top of the Earth's atmosphere. To accomplish this, angular distribution models (ADM's) are required. These ADM's are a function of viewing and solar geometry and of the scene type as determined by the ERBE scene identification algorithm which is a part of the Inversion Subsystem. The Inversion Subsystem utilizes 12 scene types which are determined by the ERBE scene identification algorithm. The scene type is found by combining the most probable cloud cover, which is determined statistically by the scene identification algorithm, with the underlying geographic scene type. This Contractor Report describes how the geographic scene type is determined on a monthly basis.
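    The combination step described above can be sketched as a small lookup. The cloud-cover thresholds and category names below are illustrative placeholders, not the operational ERBE definitions (which yield 12 specific scene types).

```python
# Illustrative sketch only: combining the statistically most probable cloud
# cover with the underlying geographic scene type.  Bins and categories are
# simplified placeholders, not the actual ERBE scene identification tables.

GEO_TYPES = ("ocean", "land", "snow", "desert", "coast")

def cloud_class(cloud_fraction):
    """Bin the most probable cloud-cover fraction into four classes."""
    if cloud_fraction < 0.05:
        return "clear"
    if cloud_fraction < 0.50:
        return "partly-cloudy"
    if cloud_fraction < 0.95:
        return "mostly-cloudy"
    return "overcast"

def scene_type(cloud_fraction, geo_type):
    """Combine cloud class with the underlying geographic scene type."""
    if geo_type not in GEO_TYPES:
        raise ValueError("unknown geographic type: %r" % geo_type)
    cc = cloud_class(cloud_fraction)
    # In this toy version, overcast scenes ignore the surface type.
    return "overcast" if cc == "overcast" else "%s %s" % (cc, geo_type)

print(scene_type(0.02, "ocean"))   # clear ocean
print(scene_type(0.30, "desert"))  # partly-cloudy desert
```

    The resulting scene-type label is what selects the angular distribution model (ADM) used to convert a measured radiance into a top-of-atmosphere flux.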

  6. Scanning scene tunnel for city traversing.

    PubMed

    Zheng, Jiang Yu; Zhou, Yu; Milli, Panayiotis

    2006-01-01

    This paper proposes a visual representation named scene tunnel for capturing urban scenes along routes and visualizing them on the Internet. We scan scenes with multiple cameras or a fish-eye camera on a moving vehicle, which generates a real scene archive along streets that is more complete than previously proposed route panoramas. Using a translating spherical eye, properly set planes of scanning, and unique parallel-central projection, we explore the image acquisition of the scene tunnel from camera selection and alignment, slit calculation, scene scanning, to image integration. The scene tunnels cover high buildings, ground, and various viewing directions, and have uniform resolution along the street. The sequentially organized scene tunnel benefits texture mapping onto the urban models. We analyze the shape characteristics in the scene tunnels for designing visualization algorithms. After combining this with a global panorama and forward image caps, the capped scene tunnels can provide continuous views directly for virtual or real navigation in a city. We render the scene tunnel dynamically by view warping, fast transmission, and flexible interaction. The compact and continuous scene tunnel facilitates model construction, data streaming, and seamless route traversing on the Internet and mobile devices.

  7. Scanning scene tunnel for city traversing.

    PubMed

    Zheng, Jiang Yu; Zhou, Yu; Milli, Panayiotis

    2006-01-01

    This paper proposes a visual representation named scene tunnel for capturing urban scenes along routes and visualizing them on the Internet. We scan scenes with multiple cameras or a fish-eye camera on a moving vehicle, which generates a real scene archive along streets that is more complete than previously proposed route panoramas. Using a translating spherical eye, properly set planes of scanning, and unique parallel-central projection, we explore the image acquisition of the scene tunnel from camera selection and alignment, slit calculation, scene scanning, to image integration. The scene tunnels cover high buildings, ground, and various viewing directions, and have uniform resolution along the street. The sequentially organized scene tunnel benefits texture mapping onto the urban models. We analyze the shape characteristics in the scene tunnels for designing visualization algorithms. After combining this with a global panorama and forward image caps, the capped scene tunnels can provide continuous views directly for virtual or real navigation in a city. We render the scene tunnel dynamically by view warping, fast transmission, and flexible interaction. The compact and continuous scene tunnel facilitates model construction, data streaming, and seamless route traversing on the Internet and mobile devices. PMID:16509375
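    The slit-scanning idea behind the scene tunnel can be illustrated with a minimal sketch: as the camera translates, one fixed pixel column is sampled from each frame, and the columns are concatenated into a continuous strip. The calibration, parallel-central projection geometry, and multi-camera integration of the actual system are omitted; the frames below are toy data.

```python
# Minimal slit-scanning sketch (illustrative, not the paper's full pipeline).
# One fixed column is sampled from each video frame of a translating camera;
# concatenating the columns yields a continuous strip along the route.

def scan_scene_tunnel(frames, slit_x):
    """frames: list of 2D images (lists of pixel rows); slit_x: column index.
    Returns a strip whose k-th column is column slit_x of frame k."""
    height = len(frames[0])
    return [[frame[y][slit_x] for frame in frames] for y in range(height)]

# Toy video: three frames of a 2-row image; the scene shifts one pixel/frame.
f0 = [[1, 2, 3], [4, 5, 6]]
f1 = [[2, 3, 7], [5, 6, 8]]
f2 = [[3, 7, 9], [6, 8, 0]]
print(scan_scene_tunnel([f0, f1, f2], slit_x=1))  # [[2, 3, 7], [5, 6, 8]]
```

    Because each column is captured when the camera is directly abeam of it, the strip's horizontal sampling follows the vehicle's motion, which is why the resulting image has uniform resolution along the street rather than the perspective foreshortening of a single wide-angle photograph.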

  8. Simulator scene display evaluation device

    NASA Technical Reports Server (NTRS)

    Haines, R. F. (Inventor)

    1986-01-01

    An apparatus for aligning and calibrating scene displays in an aircraft simulator has a base on which all of the instruments for the aligning and calibrating are mounted. Laser directs beam at double right prism which is attached to pivoting support on base. The pivot point of the prism is located at the design eye point (DEP) of simulator during the aligning and calibrating. The objective lens in the base is movable on a track to follow the laser beam at different angles within the field of vision at the DEP. An eyepiece and a precision diopter are movable into a position behind the prism during the scene evaluation. A photometer or illuminometer is pivotable about the pivot into and out of position behind the eyepiece.

  9. Chemistry Notes.

    ERIC Educational Resources Information Center

    School Science Review, 1978

    1978-01-01

    Describes experiments, demonstrations, activities and ideas relating to various fields of chemistry to be used in chemistry courses of secondary schools. Three experiments concerning differential thermal analysis are among these notes presented. (HM)

  10. Blue Note

    ScienceCinema

    Murray Gibson

    2016-07-12

    Argonne's Murray Gibson is a physicist whose life's work includes finding patterns among atoms. The love of distinguishing patterns also drives Gibson as a musician and Blues enthusiast. "Blue" notes are very harmonic notes that are missing from the equal temperament scale. The techniques of piano blues and jazz represent the melding of African and Western music into something totally new and exciting.

  11. Blue Note

    SciTech Connect

    Murray Gibson

    2007-04-27

    Argonne's Murray Gibson is a physicist whose life's work includes finding patterns among atoms. The love of distinguishing patterns also drives Gibson as a musician and Blues enthusiast. "Blue" notes are very harmonic notes that are missing from the equal temperament scale. The techniques of piano blues and jazz represent the melding of African and Western music into something totally new and exciting.

  12. Crime scene investigation, reporting, and reconstruction (CSIRR)

    NASA Astrophysics Data System (ADS)

    Booth, John F.; Young, Jeffrey M.; Corrigan, Paul

    1997-02-01

    Graphic Data Systems Corporation (GDS Corp.) and Intelligent Graphics Solutions, Inc. (IGS) combined talents in 1995 to design and develop a MicroGDS™ application to support field investigations of crime scenes, such as homicides, bombings, and arsons. IGS and GDS Corp. prepared design documents under the guidance of federal, state, and local crime scene reconstruction experts and with information from the FBI's evidence response team field book. The application was then developed to encompass the key components of crime scene investigation: staff assigned to the incident, tasks occurring at the scene, visits to the scene location, photographs taken of the crime scene, related documents, involved persons, catalogued evidence, and two- or three-dimensional crime scene reconstruction. Crime scene investigation, reporting, and reconstruction (CSIRR) provides investigators with a single application for both capturing all tabular data about the crime scene and quickly rendering a sketch of the scene. Tabular data are captured through intuitive database forms, while MicroGDS™ has been modified to readily allow non-CAD users to sketch the scene.

  13. Form and Actuality

    NASA Astrophysics Data System (ADS)

    Bitbol, Michel

    A basic choice underlies physics. It consists of banishing actual situations from theoretical descriptions, in order to reach a universal formal construct. Actualities are then thought of as mere local appearances of a transcendent reality supposedly described by the formal construct. Despite its impressive success, this method has left major loopholes in the foundations of science. In this paper, I document two of these loopholes. One is the problem of time asymmetry in statistical thermodynamics, and the other is the measurement problem of quantum mechanics. Then, adopting a broader philosophical standpoint, I try to turn the whole picture upside down. Here, full priority is given to actuality (construed as a mode of the immanent reality self-reflectively being itself) over formal constructs. The characteristic aporias of this variety of "Copernican revolution" are discussed.

  14. Scenes, Spaces, and Memory Traces

    PubMed Central

    Maguire, Eleanor A.; Intraub, Helene; Mullally, Sinéad L.

    2015-01-01

    The hippocampus is one of the most closely scrutinized brain structures in neuroscience. While traditionally associated with memory and spatial cognition, in more recent years it has also been linked with other functions, including aspects of perception and imagining fictitious and future scenes. Efforts continue apace to understand how the hippocampus plays such an apparently wide-ranging role. Here we consider recent developments in the field and in particular studies of patients with bilateral hippocampal damage. We outline some key findings, how they have subsequently been challenged, and consider how to reconcile the disparities that are at the heart of current lively debates in the hippocampal literature. PMID:26276163

  15. Scene-Based Contextual Cueing in Pigeons

    PubMed Central

    Wasserman, Edward A.; Teng, Yuejia; Brooks, Daniel I.

    2014-01-01

    Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of such contextual cueing. Pigeons had to peck a target which could appear in one of four locations on color photographs of real-world scenes. On half of the trials, each of four scenes was consistently paired with one of four possible target locations; on the other half of the trials, each of four different scenes was randomly paired with the same four possible target locations. In Experiments 1 and 2, pigeons exhibited robust contextual cueing when the context preceded the target by 1 s to 8 s, with reaction times to the target being shorter on predictive-scene trials than on random-scene trials. Pigeons also responded more frequently during the delay on predictive-scene trials than on random-scene trials; indeed, during the delay on predictive-scene trials, pigeons predominately pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. In Experiment 3, involving left-right and top-bottom scene reversals, pigeons exhibited stronger control by global than by local scene cues. These results attest to the robustness and associative basis of contextual cueing in pigeons. PMID:25546098

  16. Scene-based contextual cueing in pigeons.

    PubMed

    Wasserman, Edward A; Teng, Yuejia; Brooks, Daniel I

    2014-10-01

    Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of such contextual cueing. Pigeons had to peck a target, which could appear in 1 of 4 locations on color photographs of real-world scenes. On half of the trials, each of 4 scenes was consistently paired with 1 of 4 possible target locations; on the other half of the trials, each of 4 different scenes was randomly paired with the same 4 possible target locations. In Experiments 1 and 2, pigeons exhibited robust contextual cueing when the context preceded the target by 1 s to 8 s, with reaction times to the target being shorter on predictive-scene trials than on random-scene trials. Pigeons also responded more frequently during the delay on predictive-scene trials than on random-scene trials; indeed, during the delay on predictive-scene trials, pigeons predominately pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. In Experiment 3, involving left-right and top-bottom scene reversals, pigeons exhibited stronger control by global than by local scene cues. These results attest to the robustness and associative basis of contextual cueing in pigeons.

  17. Associative Processing Is Inherent in Scene Perception

    PubMed Central

    Aminoff, Elissa M.; Tarr, Michael J.

    2015-01-01

    How are complex visual entities such as scenes represented in the human brain? More concretely, along what visual and semantic dimensions are scenes encoded in memory? One hypothesis is that global spatial properties provide a basis for categorizing the neural response patterns arising from scenes. In contrast, non-spatial properties, such as single objects, also account for variance in neural responses. The list of critical scene dimensions has continued to grow—sometimes in a contradictory manner—coming to encompass properties such as geometric layout, big/small, crowded/sparse, and three-dimensionality. We demonstrate that these dimensions may be better understood within the more general framework of associative properties. That is, across both the perceptual and semantic domains, features of scene representations are related to one another through learned associations. Critically, the components of such associations are consistent with the dimensions that are typically invoked to account for scene understanding and its neural bases. Using fMRI, we show that non-scene stimuli displaying novel associations across identities or locations recruit putatively scene-selective regions of the human brain (the parahippocampal/lingual region, the retrosplenial complex, and the transverse occipital sulcus/occipital place area). Moreover, we find that the voxel-wise neural patterns arising from these associations are significantly correlated with the neural patterns arising from everyday scenes, providing critical evidence as to whether the same encoding principles underlie both types of processing. These neuroimaging results provide evidence for the hypothesis that the neural representation of scenes is better understood within the broader theoretical framework of associative processing. In addition, the results demonstrate a division of labor that arises across scene-selective regions when processing associations and scenes providing better understanding of the functional

  18. Spatial and temporal scene analysis

    NASA Astrophysics Data System (ADS)

    Rollins, J. Michael; Chaapel, Charles; Bleiweiss, Max P.

    1994-06-01

    Current efforts to design reliable background scene generation programs require validation using real images for comparison. A crucial step in making objective comparisons is to parameterize the real and generated images into a common set of feature metrics. Such metrics can be derived from statistical and transform-based analyses and yield information about the structures and textures present in various image regions of interest. This paper presents the results of such a metrics-development process for the Smart Weapons Operability Enhancement (SWOE) Joint Test and Evaluation (JT&E) program. Statistical and transform-based techniques were applied to images obtained from two separate locations, Grayling, Michigan, and Yuma, Arizona, at various times of day and under a variety of environmental conditions. Statistical analyses of scene radiance distributions and 'clutter' content were performed both spatially and temporally. Fourier and wavelet transform methods were applied as well. Results and their interpretations are given for the image analyses. The metrics that provide the clearest and most reliable distinction between feature classes are recommended.
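    As a concrete example of such a statistical metric, a common formulation of spatial clutter is the RMS of block-wise standard deviations of scene radiance. This is a generic definition used for illustration, not necessarily the specific metric adopted by the SWOE JT&E program.

```python
# Generic spatial clutter metric (sketch): RMS of the per-block standard
# deviations of scene radiance.  A uniform scene scores zero; scenes with
# strong local structure score high.

import math

def block_std(block):
    n = len(block)
    mean = sum(block) / n
    return math.sqrt(sum((v - mean) ** 2 for v in block) / n)

def clutter(image, block_size):
    """image: 2D list of radiances; returns RMS of block standard deviations."""
    stds = []
    for by in range(0, len(image), block_size):
        for bx in range(0, len(image[0]), block_size):
            block = [image[y][x]
                     for y in range(by, min(by + block_size, len(image)))
                     for x in range(bx, min(bx + block_size, len(image[0])))]
            stds.append(block_std(block))
    return math.sqrt(sum(s * s for s in stds) / len(stds))

flat = [[5.0] * 4 for _ in range(4)]                  # uniform scene
busy = [[0.0, 10.0] * 2, [10.0, 0.0] * 2] * 2        # checkerboard scene
print(clutter(flat, 2))      # 0.0: no clutter in a uniform scene
print(clutter(busy, 2))      # 5.0: strong local variation registers
```

    Computing the same metric on a real image and on its synthetic counterpart, region by region and over time, gives exactly the kind of spatial and temporal comparison the paper describes.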

  19. Apparatus Notes.

    ERIC Educational Resources Information Center

    Eaton, Bruce G., Ed.

    1980-01-01

    This collection of notes describes (1) an optoelectronic apparatus for classroom demonstrations of mechanical laws, (2) a more efficient method for demonstrated nuclear chain reactions using electrically energized "traps" and ping-pong balls, and (3) an inexpensive demonstration for qualitative analysis of temperature-dependent resistance. (CS)

  20. Classroom Notes

    ERIC Educational Resources Information Center

    International Journal of Mathematical Education in Science and Technology, 2007

    2007-01-01

    In this issue's "Classroom Notes" section, the following papers are discussed: (1) "Constructing a line segment whose length is equal to the measure of a given angle" (W. Jacob and T. J. Osler); (2) "Generating functions for the powers of Fibonacci sequences" (D. Terrana and H. Chen); (3) "Evaluation of mean and variance integrals without…

  1. Biology Notes.

    ERIC Educational Resources Information Center

    School Science Review, 1984

    1984-01-01

    Presents information on the teaching of nutrition (including new information relating to many current O-level syllabi) and part 16 of a reading list for A- and S-level biology. Also includes a note on using earthworms as a source of material for teaching meiosis. (JN)

  2. Apparatus Notes.

    ERIC Educational Resources Information Center

    Eaton, Bruce G., Ed.

    1979-01-01

    Presents brief notes on new ideas, equipment, techniques, or materials of interest to teachers of physics. An apparatus that demonstrates the uniform acceleration of gravity, and a simple way to demonstrate nuclear blocking patterns of crystal lattices are among new ideas presented. (HM)

  3. Classroom Notes

    ERIC Educational Resources Information Center

    International Journal of Mathematical Education in Science and Technology, 2007

    2007-01-01

    In this issue's "Classroom Notes" section, the following papers are described: (1) "Sequences of Definite Integrals" by T. Dana-Picard; (2) "Structural Analysis of Pythagorean Monoids" by M.-Q Zhan and J. Tong; (3) "A Random Walk Phenomenon under an Interesting Stopping Rule" by S. Chakraborty; (4) "On Some Confidence Intervals for Estimating the…

  4. An Accurate Scene Segmentation Method Based on Graph Analysis Using Object Matching and Audio Feature

    NASA Astrophysics Data System (ADS)

    Yamamoto, Makoto; Haseyama, Miki

    A method for accurate scene segmentation using two kinds of directed graphs, obtained by object matching and audio features, is proposed. Generally, in audiovisual materials such as broadcast programs and movies, similar shots that include frames of the same background, object or place appear repeatedly, and such shots belong to a single scene. Many scene segmentation methods based on this idea have been proposed; however, since they use color information as visual features, they cannot provide accurate scene segmentation results if the color features change between shots whose frames include the same object, due to camera operations such as zooming and panning. To solve this problem, the proposed method realizes scene segmentation using two novel approaches. In the first approach, object matching is performed between two frames included in different shots. From these matching results, repeated appearances of shots whose frames include the same object can be found and represented as a directed graph. In the second approach, the proposed method generates another directed graph that represents repeated appearances of shots with similar audio features. By using these two directed graphs in combination, the degradation of scene segmentation accuracy that results from using only one kind of graph is avoided, and accurate scene segmentation is thereby realized. Experimental results obtained by applying the proposed method to actual broadcast programs verify the effectiveness of the proposed method.
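    The graph-based grouping step can be sketched roughly as follows. This is not the paper's algorithm: the object-matching and audio-similarity tests are abstracted into precomputed links, and scenes are taken as maximal spans not crossed by any repeated-appearance link.

```python
def segment_scenes(num_shots, links):
    """Group shots 0..num_shots-1 into scenes given links (i, j) marking a
    repeated appearance: shot j repeats content first seen in shot i."""
    # For each shot, the furthest later shot it is linked to
    reach = list(range(num_shots))
    for i, j in links:
        lo, hi = min(i, j), max(i, j)
        reach[lo] = max(reach[lo], hi)
    scenes, start, limit = [], 0, 0
    for s in range(num_shots):
        limit = max(limit, reach[s])
        if s == limit:            # no link crosses the boundary after shot s
            scenes.append((start, s))
            start = s + 1
    return scenes
```

    For example, links (0, 2) and (3, 5) over six shots yield the two scenes (0-2) and (3-5); with no links, every shot is its own scene.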

  5. Audiovisual integration facilitates unconscious visual scene processing.

    PubMed

    Tan, Jye-Sheng; Yeh, Su-Ling

    2015-10-01

    Meanings of masked complex scenes can be extracted without awareness; however, it remains unknown whether audiovisual integration occurs with an invisible complex visual scene. The authors examine whether a scenery soundtrack can facilitate unconscious processing of a subliminal visual scene. The continuous flash suppression paradigm was used to render a complex scene picture invisible, and the picture was paired with a semantically congruent or incongruent scenery soundtrack. Participants were asked to respond as quickly as possible if they detected any part of the scene. Release-from-suppression time was used as an index of unconscious processing of the complex scene, which was shorter in the audiovisual congruent condition than in the incongruent condition (Experiment 1). The possibility that participants adopted different detection criteria for the 2 conditions was excluded (Experiment 2). The audiovisual congruency effect did not occur for objects-only (Experiment 3) and background-only (Experiment 4) pictures, and it did not result from consciously mediated conceptual priming (Experiment 5). The congruency effect was replicated when catch trials without scene pictures were added to exclude participants with high false-alarm rates (Experiment 6). This is the first study demonstrating unconscious audiovisual integration with subliminal scene pictures, and it suggests expansions of scene-perception theories to include unconscious audiovisual integration.

  6. Teaching Notes

    NASA Astrophysics Data System (ADS)

    2001-05-01

    If you would like to contribute a teaching note for any of these sections please contact ped@iop.org. Contents: LET'S INVESTIGATE: Standing waves on strings; MY WAY: Physics slips, trips and falls; PHYSICS ON A SHOESTRING: The McOhm: using fast food to explain resistance; Eggs and a sheet; STARTING OUT: After a nervous start, I'm flying; ON THE MAP: Christ's Hospital; CURIOSITY: The Levitron; TECHNICAL TRIMMINGS: Brownian motion smoke cell

  7. Teaching Notes

    NASA Astrophysics Data System (ADS)

    2001-03-01

    If you would like to contribute a teaching note for any of these sections please contact ped@iop.org. Contents: PHYSICS ON A SHOESTRING: Demonstrating resolution; Magnetic tea patterns; LET'S INVESTIGATE: Conducting foam; TECHNICAL TRIMMINGS: Polarimeter; Old experiments on air-tracks gain new fans; MY WAY: Newton's laws; ON THE MAP: The International School of Lusaka; CURIOSITY: Inflation theory

  8. Application note

    SciTech Connect

    Russo, Thomas V.

    2013-08-01

    The development of the Xyce™ Parallel Electronic Simulator has focused entirely on the creation of a fast, scalable simulation tool, and has not included any schematic capture or data visualization tools. This application note will describe how to use the open source schematic capture tool gschem and its associated netlist creation tool gnetlist to create basic circuit designs for Xyce, and how to access advanced features of Xyce that are not directly supported by either gschem or gnetlist.

  9. Eye Movement Control during Scene Viewing: Immediate Effects of Scene Luminance on Fixation Durations

    ERIC Educational Resources Information Center

    Henderson, John M.; Nuthmann, Antje; Luke, Steven G.

    2013-01-01

    Recent research on eye movements during scene viewing has primarily focused on where the eyes fixate. But eye fixations also differ in their durations. Here we investigated whether fixation durations in scene viewing are under the direct and immediate control of the current visual input. Subjects freely viewed photographs of scenes in preparation…

  10. When Does Repeated Search in Scenes Involve Memory? Looking at versus Looking for Objects in Scenes

    ERIC Educational Resources Information Center

    Vo, Melissa L. -H.; Wolfe, Jeremy M.

    2012-01-01

    One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in the same scene, search performance remained…

  11. The Jordanian Library Scene, 1973

    ERIC Educational Resources Information Center

    International Library Review, 1974

    1974-01-01

    This discussion is made up of several individual papers by different authors: Introduction; Libraries in Jordan; The Royal Scientific Society and its Library; The Royal Scientific Society Course in Librarianship; Jordan Library Association Course in Librarianship; Secondary School Libraries in Jordan; and Notes on West Bank Libraries, 1972. (JB)

  12. Surreal Scene Part of Lives.

    ERIC Educational Resources Information Center

    Freeman, Christina

    1999-01-01

    Describes a school newspaper editor's attempts to cover the devastating tornado that severely damaged her school--North Hall High School in Gainesville, Georgia. Notes that the 16-page special edition she and the staff produced included first-hand accounts, tributes to victims, tales of survival, and pictures of the tragedy. (RS)

  13. Framework of passive millimeter-wave scene simulation based on material classification

    NASA Astrophysics Data System (ADS)

    Park, Hyuk; Kim, Sung-Hyun; Lee, Ho-Jin; Kim, Yong-Hoon; Ki, Jae-Sug; Yoon, In-Bok; Lee, Jung-Min; Park, Soon-Jun

    2006-05-01

    using actual PMMW sensors. With the reliable PMMW scene simulator, it will be more efficient to apply the PMMW sensor to various applications.

  14. History Scene Investigations: From Clues to Conclusions

    ERIC Educational Resources Information Center

    McIntyre, Beverly

    2011-01-01

    In this article, the author introduces a social studies lesson that allows students to learn history and practice reading skills, critical thinking, and writing. The activity is called History Scene Investigation or HSI, which derives its name from the popular television series based on crime scene investigations (CSI). HSI uses discovery learning…

  15. The visual light field in real scenes

    PubMed Central

    Xia, Ling; Pont, Sylvia C.; Heynderickx, Ingrid

    2014-01-01

    Human observers' ability to infer the light field in empty space is known as the “visual light field.” While most relevant studies were performed using images on computer screens, we investigate the visual light field in a real scene by using a novel experimental setup. A “probe” and a scene were mixed optically using a semitransparent mirror. Twenty participants were asked to judge whether the probe fitted the scene with regard to the illumination intensity, direction, and diffuseness. Both smooth and rough probes were used to test whether observers use the additional cues for the illumination direction and diffuseness provided by the 3D texture over the rough probe. The results confirmed that observers are sensitive to the intensity, direction, and diffuseness of the illumination also in real scenes. For some lighting combinations on scene and probe, the awareness of a mismatch between the probe and scene was found to depend on which lighting condition was on the scene and which on the probe, which we called the “swap effect.” For these cases, the observers judged the fit to be better if the average luminance of the visible parts of the probe was closer to the average luminance of the visible parts of the scene objects. The use of a rough instead of smooth probe was found to significantly improve observers' abilities to detect mismatches in lighting diffuseness and directions. PMID:25926970

  16. Illumination discrimination in real and simulated scenes

    PubMed Central

    Radonjić, Ana; Pearce, Bradley; Aston, Stacey; Krieger, Avery; Dubin, Hilary; Cottaris, Nicolas P.; Brainard, David H.; Hurlbert, Anya C.

    2016-01-01

    Characterizing humans' ability to discriminate changes in illumination provides information about the visual system's representation of the distal stimulus. We have previously shown that humans are able to discriminate illumination changes and that sensitivity to such changes depends on their chromatic direction. Probing illumination discrimination further would be facilitated by the use of computer-graphics simulations, which would, in practice, enable a wider range of stimulus manipulations. There is no a priori guarantee, however, that results obtained with simulated scenes generalize to real illuminated scenes. To investigate this question, we measured illumination discrimination in real and simulated scenes that were well-matched in mean chromaticity and scene geometry. Illumination discrimination thresholds were essentially identical for the two stimulus types. As in our previous work, these thresholds varied with illumination change direction. We exploited the flexibility offered by the use of graphics simulations to investigate whether the differences across direction are preserved when the surfaces in the scene are varied. We show that varying the scene's surface ensemble in a manner that also changes mean scene chromaticity modulates the relative sensitivity to illumination changes along different chromatic directions. Thus, any characterization of sensitivity to changes in illumination must be defined relative to the set of surfaces in the scene.

  17. Stages as models of scene geometry.

    PubMed

    Nedović, Vladimir; Smeulders, Arnold W M; Redert, André; Geusebroek, Jan-Mark

    2010-09-01

    Reconstruction of 3D scene geometry is an important element for scene understanding, autonomous vehicle and robot navigation, image retrieval, and 3D television. We propose accounting for the inherent structure of the visual world when trying to solve the scene reconstruction problem. Consequently, we identify geometric scene categorization as the first step toward robust and efficient depth estimation from single images. We introduce 15 typical 3D scene geometries called stages, each with a unique depth profile, which roughly correspond to a large majority of broadcast video frames. Stage information serves as a first approximation of global depth, narrowing down the search space in depth estimation and object localization. We propose different sets of low-level features for depth estimation, and perform stage classification on two diverse data sets of television broadcasts. Classification results demonstrate that stages can often be efficiently learned from low-dimensional image representations. PMID:20634560

  18. Look Closely: The Finer Points of Scene Analysis.

    ERIC Educational Resources Information Center

    Miller, Bruce

    1998-01-01

    Continues a discussion of script analysis for actors. Focuses on specific scenes and how an eventual scene-by-scene analysis will help students determine a "throughline" of a play's action. Uses a scene from "Romeo and Juliet" to illustrate scene analysis. Gives 13 script questions for students to answer. Presents six tips for scoring the action.…

  19. Adaptive enhancement method of infrared image based on scene feature

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao; Bai, Tingzhu; Shang, Fei

    2008-12-01

    All objects emit radiation in amounts related to their temperature and their ability to emit radiation. An infrared image directly shows this otherwise invisible infrared radiation. Because of these advantages, infrared imaging technology is applied in many fields. Compared with visible images, however, the disadvantages of infrared images are obvious: low luminance, low contrast, and an inconspicuous difference between target and background. The aim of infrared image enhancement is to improve the interpretability or perception of information in an infrared image for human viewers, or to provide 'better' input for other automated image processing techniques. Most adaptive algorithms for image enhancement are based mainly on the gray-scale distribution of the infrared image and are not tied to the actual features of the image scene. As a result, such enhancement is not well targeted, and the enhanced image is not well suited to infrared surveillance applications. In this paper we develop a scene-feature-based algorithm to enhance the contrast of infrared images adaptively. First, after analyzing the scene features of different infrared images, we choose feasible parameters to describe the infrared image. Second, we construct a new histogram distribution based on the chosen parameters using a Gaussian function. Finally, the infrared image is enhanced via this newly constructed histogram. Experimental results show that the algorithm performs better on infrared scene images than the other methods discussed in this paper.
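    One plausible reading of "constructing a new histogram using a Gaussian function" is histogram specification toward a Gaussian-shaped target distribution. The sketch below shows the mechanics with illustrative mu and sigma values; the paper instead derives its parameters from scene features.

```python
import numpy as np

def gaussian_hist_specification(img, mu=128.0, sigma=40.0, levels=256):
    """Map an 8-bit image so its gray-level histogram approximates a
    Gaussian centred at mu with width sigma (histogram specification)."""
    img = np.asarray(img, dtype=np.uint8)
    # CDF of the input image
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    src_cdf = np.cumsum(hist) / hist.sum()
    # CDF of the desired Gaussian-shaped histogram
    g = np.arange(levels, dtype=float)
    target = np.exp(-0.5 * ((g - mu) / sigma) ** 2)
    tgt_cdf = np.cumsum(target) / target.sum()
    # For each input level, find the target level with the nearest CDF value
    mapping = np.searchsorted(tgt_cdf, src_cdf).clip(0, levels - 1).astype(np.uint8)
    return mapping[img]
```

    Applied to a low-contrast image whose levels cluster in a narrow band, the mapping stretches them across a much wider range while preserving their order, which is the contrast-enhancement effect the abstract describes.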

  20. Teaching Notes

    NASA Astrophysics Data System (ADS)

    2001-07-01

    If you would like to contribute a teaching note for any of these sections please contact ped@iop.org. Contents: LET'S INVESTIGATE: Bows and arrows; STARTING OUT: A late start; ON THE MAP: A South African school making a world of difference; TECHNICAL TRIMMINGS: May the force be with you: an easily constructed force sensor; Modelling ultrasound A-scanning with the Pico Technology ADC-200 Virtual Instrument; PHYSICS ON A SHOESTRING: Sugar cube radioactivity models; CURIOSITY: Euler's disk; MY WAY: Why heavy things don't fall faster

  1. Real-time scene generator

    NASA Astrophysics Data System (ADS)

    Lord, Eric; Shand, David J.; Cantle, Allan J.

    1996-05-01

    This paper describes the techniques which have been developed for an infra-red (IR) target, countermeasure and background image generation system working in real time for HWIL and Trial Proving applications. Operation is in the 3 to 5 and 8 to 14 micron bands. The system may be used to drive a scene projector (otherwise known as a thermal picture synthesizer) or for direct injection into equipment under test. The provision of realistic IR target and countermeasure trajectories and signatures, within representative backgrounds, enables the full performance envelope of a missile system to be evaluated. It also enables an operational weapon system to be proven in a trials environment without compromising safety. The most significant technique developed has been that of line by line synthesis. This minimizes the processing delays to the equivalent of 1.5 frames from input of target and sightline positions to the completion of an output image scan. Using this technique a scene generator has been produced for full closed loop HWIL performance analysis for the development of an air to air missile system. Performance of the synthesis system is as follows: 256 * 256 pixels per frame; 350 target polygons per frame; 100 Hz frame rate; and Gouraud shading, simple reflections, variable geometry targets and atmospheric scaling. A system using a similar technique has also been used for direct insertion into the video path of a ground to air weapon system in live firing trials. This has provided realistic targets without degrading the closed loop performance. Delay of the modified video signal has been kept to less than 5 lines. The technique has been developed using a combination of 4 high speed Intel i860 RISC processors in parallel with the 4000 series XILINX field programmable gate arrays (FPGA). Start and end conditions for each line of target pixels are prepared and ordered in the i860 processors. The merging with background pixels and output shading and scaling is then carried out in the FPGAs.

  2. Actual use scene of Han-Character for proper name and coded character set

    NASA Astrophysics Data System (ADS)

    Kobayashi, Tatsuo

    This article discusses the following two issues. One is overview of standardization of Han-Character in coded character set including Universal coded character set (ISO/IEC 10646), with the relation to Japanese language policy of the government. The other is the difference and particularity of Han-Character usage for proper name and difficulty to implement in ICT systems.

  3. Out of Mind, Out of Sight: Unexpected Scene Elements Frequently Go Unnoticed Until Primed.

    PubMed

    Slavich, George M; Zimbardo, Philip G

    2013-12-01

    The human visual system employs a sophisticated set of strategies for scanning the environment and directing attention to stimuli that can be expected given the context and a person's past experience. Although these strategies enable us to navigate a very complex physical and social environment, they can also cause highly salient, but unexpected stimuli to go completely unnoticed. To examine the generality of this phenomenon, we conducted eight studies that included 15 different experimental conditions and 1,577 participants in all. These studies revealed that a large majority of participants do not report having seen a woman in the center of an urban scene who was photographed in midair as she was committing suicide. Despite seeing the scene repeatedly, 46 % of all participants failed to report seeing a central figure and only 4.8 % reported seeing a falling person. Frequency of noticing the suicidal woman was highest for participants who read a narrative priming story that increased the extent to which she was schematically congruent with the scene. In contrast to this robust effect of inattentional blindness, a majority of participants reported seeing other peripheral objects in the visual scene that were equally difficult to detect, yet more consistent with the scene. Follow-up qualitative analyses revealed that participants reported seeing many elements that were not actually present, but which could have been expected given the overall context of the scene. Together, these findings demonstrate the robustness of inattentional blindness and highlight the specificity with which different visual primes may increase noticing behavior.

  4. Visual scenes are categorized by function.

    PubMed

    Greene, Michelle R; Baldassano, Christopher; Esteva, Andre; Beck, Diane M; Fei-Fei, Li

    2016-01-01

    How do we know that a kitchen is a kitchen by looking? Traditional models posit that scene categorization is achieved through recognizing necessary and sufficient features and objects, yet there is little consensus about what these may be. However, scene categories should reflect how we use visual information. Therefore, we test the hypothesis that scene categories reflect functions, or the possibilities for actions within a scene. Our approach is to compare human categorization patterns with predictions made by both functions and alternative models. We collected a large-scale scene category distance matrix (5 million trials) by asking observers to simply decide whether 2 images were from the same or different categories. Using the actions from the American Time Use Survey, we mapped actions onto each scene (1.4 million trials). We found a strong relationship between ranked category distance and functional distance (r = .50, or 66% of the maximum possible correlation). The function model outperformed alternative models of object-based distance (r = .33), visual features from a convolutional neural network (r = .39), and lexical distance (r = .27). Using hierarchical linear regression, we found that functions captured 85.5% of overall explained variance, with nearly half of the explained variance captured only by functions, implying that the predictive power of alternative models was because of their shared variance with the function-based model. These results challenge the dominant school of thought that visual features and objects are sufficient for scene categorization, suggesting instead that a scene's category may be determined by the scene's function. PMID:26709590

  5. The number of discernible colors perceived by dichromats in natural scenes and the effects of colored lenses.

    PubMed

    Linhares, João M M; Pinto, Paulo D; Nascimento, Sérgio M C

    2008-01-01

    The number of discernible colors perceived by normal trichromats when viewing natural scenes can be estimated by analyzing idealized color volumes or hyperspectral data obtained from actual scenes. The purpose of the present work was to estimate the relative impairment in chromatic diversity experienced by dichromats when viewing natural scenes and to investigate the effects of colored lenses. The estimates were obtained computationally from the analysis of hyperspectral images of natural scenes and using a quantitative model of dichromats' vision. The color volume corresponding to each scene was represented in CIELAB color space and segmented into cubes of unitary side. For normal trichromats, the number of discernible colors was estimated by counting the number of non-empty cubes. For dichromats, an algorithm simulating for normal observers the appearance of the scenes for dichromats was used, and the number of discernible colors was then counted as for normal trichromats. The effects of colored lenses were estimated by prior filtering the spectral radiance from the scenes with the spectral transmittance function of the lenses. It was found that in dichromatic vision the number of discernible colors was about 7% of normal trichromatic vision. With some colored lenses considerable improvements in chromatic diversity were obtained for trichromats; for dichromats, however, only modest improvements could be obtained with efficiency levels dependent on the combination of scene, lens and type of deficiency. PMID:18598424
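    The counting step itself is simple to sketch: quantize each pixel's CIELAB triplet to the unit cube containing it and count distinct cubes. The conversion from scene radiance to CIELAB, and the dichromat-appearance simulation, are assumed to be done upstream.

```python
import numpy as np

def count_discernible_colors(lab_pixels):
    """Count non-empty unit cubes in CIELAB space as an estimate of the
    number of discernible colors in a scene (one unit cube ~ one just-
    noticeable difference)."""
    lab = np.asarray(lab_pixels, dtype=float).reshape(-1, 3)
    cubes = np.floor(lab).astype(int)   # assign each pixel to a unit cube
    return len({tuple(c) for c in cubes})
```

    Two pixels whose (L*, a*, b*) coordinates fall in the same unit cube count as one discernible color; applying the same count to the simulated dichromat rendering of a scene gives the ratio the abstract reports.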

  7. Combined influence of visual scene and body tilt on arm pointing movements: gravity matters!

    PubMed

    Scotto Di Cesare, Cécile; Sarlegna, Fabrice R; Bourdin, Christophe; Mestre, Daniel R; Bringoux, Lionel

    2014-01-01

    Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., 'combined' tilts equal to the sum of 'single' tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues.

  8. Behind the Scenes: 'Fishing' For Rockets

    NASA Video Gallery

    In this episode of NASA "Behind the Scenes," go on board the two ships -- Liberty Star and Freedom Star -- which retrieve the shuttle's solid rocket boosters after every launch. Astronaut Mike Mass...

  9. Cognition inspired framework for indoor scene annotation

    NASA Astrophysics Data System (ADS)

    Ye, Zhipeng; Liu, Peng; Zhao, Wei; Tang, Xianglong

    2015-09-01

    We present a simple yet effective scene annotation framework based on a combination of bag-of-visual-words (BoVW), three-dimensional scene structure estimation, scene context, and cognitive theory. From a macro perspective, the proposed cognition-based hybrid-motivation framework divides the annotation problem into empirical inference and real-time classification. Inspired by the inference ability of human beings, common objects of indoor scenes are defined for experience-based inference, while in the real-time classification stage an improved BoVW-based multilayer abstract-semantics labeling method is proposed, introducing abstract semantic hierarchies to narrow the semantic gap and improve the performance of object categorization. The proposed framework was evaluated on a variety of common data sets, and experimental results demonstrated its effectiveness.
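    The BoVW component can be sketched as nearest-codeword quantization followed by a normalized histogram. The codebook below is a placeholder (in practice it would be learned by clustering local descriptors), and this sketch is not the authors' exact labeling method.

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Quantize local descriptors to their nearest visual word and return
    the normalized bag-of-visual-words histogram for an image."""
    d = np.asarray(descriptors, dtype=float)   # (n, dim) local features
    c = np.asarray(codebook, dtype=float)      # (k, dim) visual words
    # Squared distances between every descriptor and every codeword
    dist = ((d[:, None, :] - c[None, :, :]) ** 2).sum(axis=2)
    words = dist.argmin(axis=1)                # nearest-codeword assignment
    hist = np.bincount(words, minlength=len(c)).astype(float)
    return hist / hist.sum()
```

    The resulting fixed-length histogram is what a classifier consumes; the multilayer abstract-semantics labeling in the paper builds additional hierarchy on top of this representation.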

  10. Behind the Scenes: Under the Shuttle

    NASA Video Gallery

    In this episode of "NASA Behind the Scenes," astronaut Mike Massimino takes you up to - and under - the space shuttle as it waits on launch pad 39A at the Kennedy Space Center for the start of a re...

  11. Behind the Scenes: Discovery Crew Practices Landing

    NASA Video Gallery

    In this episode of NASA Behind the Scenes, Astronaut Mike Massimino introduces you to Commander Steve Lindsey and the crewmembers of STS-133, space shuttle Discovery's last mission. Go inside one o...

  12. Behind the Scenes: Discovery Crew Performs Swimmingly

    NASA Video Gallery

    In this episode of NASA "Behind the Scenes," astronaut Mike Massimino visits the Johnson Space Center's Neutral Buoyancy Laboratory. The world's largest indoor pool is where Al Drew, Tim Kopra, Mik...

  13. Behind the Scenes: Astronauts Get Float Training

    NASA Video Gallery

    In this episode of "NASA Behind the Scenes," astronaut Mike Massimino continues his visit with safety divers and flight doctors at the Johnson Space Center's Neutral Buoyancy Laboratory as they com...

  14. Editorial Note

    NASA Astrophysics Data System (ADS)

    van der Meer, F.; Ommen Kloeke, E.

    2015-07-01

    With this editorial note we would like to update you on the performance of the International Journal of Applied Earth Observation and Geoinformation (JAG) and inform you about changes to the composition of the editorial team. Our journal publishes original papers that apply earth observation data to the management of natural resources and the environment. Environmental issues include biodiversity, land degradation, industrial pollution and natural hazards such as earthquakes, floods and landslides. As such, the scope is broad, ranging from conceptual and more fundamental work on earth observation and geospatial sciences to more problem-solving work. When I took over as Editor-in-Chief in 2012, the Publisher and I set ourselves the mission of positioning JAG among the top three remote sensing and GIS journals. To that end, we strove to attract high-quality, high-impact papers and to reduce review turnaround time, making JAG a more attractive medium for publication. What has been achieved? Have we reached our ambitions? Submissions have increased by over 23% in the last 12 months. Naturally, not all of these will lead to more papers, but at least a portion of the additional submissions should lead to growth in journal content and quality.

  15. Statistics of high-level scene context

    PubMed Central

    Greene, Michelle R.

    2013-01-01

    Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed “things” in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by
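
    The first two levels of description in this abstract can be made concrete with a small sketch. The scene annotations below are hypothetical, and the ensemble and bag-of-words computations are illustrative simplifications, not the paper's exact features:

```python
from collections import Counter

def ensemble_stats(objects):
    """Ensemble level: describe unnamed 'things' by count and mean area."""
    n = len(objects)
    mean_area = sum(o["area"] for o in objects) / n if n else 0.0
    return {"count": n, "mean_area": mean_area}

def bag_of_words(objects, vocabulary):
    """Bag-of-words level: a scene is described by the objects it contains."""
    counts = Counter(o["name"] for o in objects)
    return [counts.get(name, 0) for name in vocabulary]

# Hypothetical hand-labeled scene in the spirit of LabelMe annotations.
scene = [
    {"name": "car", "area": 1200.0},
    {"name": "tree", "area": 800.0},
    {"name": "tree", "area": 950.0},
]
vocab = ["car", "person", "tree"]
print(ensemble_stats(scene))       # {'count': 3, 'mean_area': 983.33...}
print(bag_of_words(scene, vocab))  # [1, 0, 2]
```

    Feature vectors like these could then be fed to the linear classifiers the abstract mentions; the structural level would additionally encode pairwise spatial relations.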

  16. Tablet Computing for Disaster Scene Managers

    PubMed Central

    Chan, Theodore C.; Buono, Colleen J.; Killeen, James P.; Griswold, William G.; Huang, Ricky; Lenert, Leslie

    2006-01-01

    WIISARD utilizes wireless technology to improve the care of victims following a mass casualty disaster. The WIISARD Scene Manager device (WSM) is designed to enhance the collection and accessibility of real-time data on victims, ambulances and hospitals for disaster supervisors and managers. We recently deployed WSM during a large-scale disaster exercise. The WSM performed well logging and tracking victims and ambulances. Scene managers had access to data and utilized the WSM to coordinate patient care and disposition. PMID:17238495

  17. Scene analysis in the natural environment

    PubMed Central

    Lewicki, Michael S.; Olshausen, Bruno A.; Surlykke, Annemarie; Moss, Cynthia F.

    2014-01-01

    The problem of scene analysis has been studied in a number of different fields over the past decades. These studies have led to important insights into problems of scene analysis, but not all of these insights are widely appreciated, and there remain critical shortcomings in current approaches that hinder further progress. Here we take the view that scene analysis is a universal problem solved by all animals, and that we can gain new insight by studying the problems that animals face in complex natural environments. In particular, the jumping spider, songbird, echolocating bat, and electric fish, all exhibit behaviors that require robust solutions to scene analysis problems encountered in the natural environment. By examining the behaviors of these seemingly disparate animals, we emerge with a framework for studying scene analysis comprising four essential properties: (1) the ability to solve ill-posed problems, (2) the ability to integrate and store information across time and modality, (3) efficient recovery and representation of 3D scene structure, and (4) the use of optimal motor actions for acquiring information to progress toward behavioral goals. PMID:24744740

  18. Scene change detection based on multimodal integration

    NASA Astrophysics Data System (ADS)

    Zhu, Yingying; Zhou, Dongru

    2003-09-01

    Scene change detection is an essential step in automatic and content-based video indexing, retrieval and browsing. In this paper, a robust scene change detection and classification approach is presented, which analyzes audio, visual and textual sources and accounts for their inter-relations and coincidence to semantically identify and classify video scenes. Audio analysis focuses on the segmentation of the audio stream into four types of semantic data: silence, speech, music and environmental sound. Further processing on speech segments aims at locating speaker changes. Video analysis partitions the visual stream into shots. Text analysis can provide a supplemental source of clues for scene classification and indexing information. We integrate the video and audio analysis results to identify video scenes and use the text information detected by video OCR technology or derived from available transcripts to refine scene classification. Results from single-source segmentation are in some cases suboptimal. By combining visual and aural features with the accessorial text information, the scene extraction accuracy is enhanced and more semantic segmentations are developed. Experimental results have proven rather promising.
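
    A minimal sketch of the coincidence-based fusion idea described above, assuming a histogram-difference visual cue and boolean audio/text cues. The threshold and voting rule are illustrative stand-ins, not the paper's actual method:

```python
def hist_diff(h1, h2):
    """L1 distance between normalized color histograms of adjacent frames,
    scaled to [0, 1]; a common bottom-up cue for shot boundaries."""
    return sum(abs(a - b) for a, b in zip(h1, h2)) / 2.0

def fuse_cues(visual_score, audio_changed, text_hint, v_thresh=0.4):
    """Declare a scene boundary only when the visual evidence is strong
    and at least one other modality agrees (coincidence of cues)."""
    votes = int(audio_changed) + int(text_hint)
    return visual_score > v_thresh and votes >= 1

# Strong visual change plus an audio-class change -> boundary.
print(fuse_cues(hist_diff([0.1, 0.9], [0.9, 0.1]), True, False))  # True
```

    Requiring agreement between modalities is what suppresses the false boundaries that single-source segmentation tends to produce.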

  19. [Suicidal single intraoral shooting by a shotgun--risk of misinterpretation at the crime scene].

    PubMed

    Woźniak, Krzysztof; Pohl, Jerzy

    2003-01-01

    The authors present two cases of suicidal single intraoral shooting with a shotgun. The first case relates to a victim found near the peak of Swinica in the Tatra mountains. Although the circumstances could have suggested a fatal fall from a height, and only minute, insignificant external injuries were found, the weapon found at the scene was the most important indicator leading to the actual cause of death. The second case relates to a 38-year-old male found in his family house in a village. Severe internal cranial injury (bone fragmentation) was diagnosed at the scene. A self-made weapon had previously been removed from the scene and hidden by a relative of the victim. Before the regular forensic autopsy, an X-ray examination was conducted, which revealed multiple intracranial foreign bodies in the shape of shot. After the results of the autopsy, the relative of the deceased indicated the location of the weapon.

  20. Auditory scene analysis by echolocation in bats.

    PubMed

    Moss, C F; Surlykke, A

    2001-10-01

    Echolocating bats transmit ultrasonic vocalizations and use information contained in the reflected sounds to analyze the auditory scene. Auditory scene analysis, a phenomenon that applies broadly to all hearing vertebrates, involves the grouping and segregation of sounds to perceptually organize information about auditory objects. The perceptual organization of sound is influenced by the spectral and temporal characteristics of acoustic signals. In the case of the echolocating bat, its active control over the timing, duration, intensity, and bandwidth of sonar transmissions directly impacts its perception of the auditory objects that comprise the scene. Here, data are presented from perceptual experiments, laboratory insect capture studies, and field recordings of sonar behavior of different bat species, to illustrate principles of importance to auditory scene analysis by echolocation in bats. In the perceptual experiments, FM bats (Eptesicus fuscus) learned to discriminate between systematic and random delay sequences in echo playback sets. The results of these experiments demonstrate that the FM bat can assemble information about echo delay changes over time, a requirement for the analysis of a dynamic auditory scene. Laboratory insect capture experiments examined the vocal production patterns of flying E. fuscus taking tethered insects in a large room. In each trial, the bats consistently produced echolocation signal groups with a relatively stable repetition rate (within 5%). Similar temporal patterning of sonar vocalizations was also observed in the field recordings from E. fuscus, thus suggesting the importance of temporal control of vocal production for perceptually guided behavior. It is hypothesized that a stable sonar signal production rate facilitates the perceptual organization of echoes arriving from objects at different directions and distances as the bat flies through a dynamic auditory scene. Field recordings of E. fuscus, Noctilio albiventris, N

  1. Visual Scenes are Categorized by Function

    PubMed Central

    Greene, Michelle R.; Baldassano, Christopher; Esteva, Andre; Beck, Diane M.; Fei-Fei, Li

    2015-01-01

    How do we know that a kitchen is a kitchen by looking? Traditional models posit that scene categorization is achieved through recognizing necessary and sufficient features and objects, yet there is little consensus about what these may be. However, scene categories should reflect how we use visual information. We therefore test the hypothesis that scene categories reflect functions, or the possibilities for actions within a scene. Our approach is to compare human categorization patterns with predictions made by both functions and alternative models. We collected a large-scale scene category distance matrix (5 million trials) by asking observers to simply decide whether two images were from the same or different categories. Using the actions from the American Time Use Survey, we mapped actions onto each scene (1.4 million trials). We found a strong relationship between ranked category distance and functional distance (r=0.50, or 66% of the maximum possible correlation). The function model outperformed alternative models of object-based distance (r=0.33), visual features from a convolutional neural network (r=0.39), lexical distance (r=0.27), and models of visual features. Using hierarchical linear regression, we found that functions captured 85.5% of overall explained variance, with nearly half of the explained variance captured only by functions, implying that the predictive power of alternative models was due to their shared variance with the function-based model. These results challenge the dominant school of thought that visual features and objects are sufficient for scene categorization, suggesting instead that a scene’s category may be determined by the scene’s function. PMID:26709590
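
    The reported relationship between ranked category distance and functional distance is a rank correlation. A minimal Spearman sketch (ties ignored for brevity; the data here are hypothetical, not the study's matrices):

```python
def rank(xs):
    """Replace each value with its rank (ties ignored for brevity)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = float(pos)
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    return pearson(rank(x), rank(y))

# Monotone data -> rank correlation ~1.0
print(spearman([1, 2, 3, 4], [9, 12, 30, 31]))
```

    In the study, flattened pairwise-distance matrices from the human data and from each model would play the roles of x and y.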

  2. The Relationship Between Online Visual Representation of a Scene and Long-Term Scene Memory

    ERIC Educational Resources Information Center

    Hollingworth, Andrew

    2005-01-01

    In 3 experiments the author investigated the relationship between the online visual representation of natural scenes and long-term visual memory. In a change detection task, a target object either changed or remained the same from an initial image of a natural scene to a test image. Two types of changes were possible: rotation in depth, or…

  3. Using articulated scene models for dynamic 3d scene analysis in vista spaces

    NASA Astrophysics Data System (ADS)

    Beuter, Niklas; Swadzba, Agnes; Kummert, Franz; Wachsmuth, Sven

    2010-09-01

    In this paper we describe an efficient but detailed new approach to analyzing complex dynamic scenes directly in 3D. The resulting information is important for mobile robots solving tasks in the area of household robotics. In our work a mobile robot builds an articulated scene model by observing the environment in the visual field, or rather in the so-called vista space. The articulated scene model consists of essential knowledge about the static background, about autonomously moving entities like humans or robots, and finally, in contrast to existing approaches, information about articulated parts. These parts describe movable objects like chairs, doors or other tangible entities which could be moved by an agent. The combination of the static scene, the self-moving entities and the movable objects in one articulated scene model enhances the calculation of each single part. The reconstruction process for parts of the static scene benefits from the removal of the dynamic parts and, in turn, the moving parts can be extracted more easily through knowledge about the background. In our experiments we show that the system simultaneously delivers an accurate static background model, moving persons and movable objects. This information from the articulated scene model enables a mobile robot to detect and keep track of interaction partners, to navigate safely through the environment and, finally, to strengthen the interaction with the user through knowledge about the 3D articulated objects and 3D scene analysis.

  4. Visual scene perception in navigating wood ants.

    PubMed

    Lent, David D; Graham, Paul; Collett, Thomas S

    2013-04-22

    Ants, like honeybees, can set their travel direction along foraging routes using just the surrounding visual panorama. This ability gives us a way to explore how visual scenes are perceived. By training wood ants to follow a path in an artificial scene and then examining their path within transformed scenes, we identify several perceptual operations that contribute to the ants' choice of direction. The first is a novel extension to the known ability of insects to compute the "center of mass" of large shapes: ants learn a desired heading toward a point on a distant shape as the proportion of the shape that lies to the left and right of the aiming point--the 'fractional position of mass' (FPM). The second operation, the extraction of local visual features like oriented edges, is familiar from studies of shape perception. Ants may use such features for guidance by keeping them in desired retinal locations. Third, ants exhibit segmentation. They compute the learned FPM over the whole of a simple scene, but over a segmented region of a complex scene. We suggest how the three operations may combine to provide efficient directional guidance.
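
    The "fractional position of mass" (FPM) described above can be sketched as follows, assuming the distant shape is reduced to a 1-D mass profile across the panorama (a simplification for illustration; column indices and profiles are hypothetical):

```python
def fractional_position_of_mass(shape_mass, aim_index):
    """FPM: the proportion of the shape's mass lying to the left of the
    aiming point (the remainder lies to its right)."""
    total = sum(shape_mass)
    return sum(shape_mass[:aim_index]) / total

def heading_for_fpm(shape_mass, fpm):
    """Recover a heading in a (possibly transformed) scene: choose the
    column whose cumulative mass fraction best matches the learned FPM."""
    total = sum(shape_mass)
    acc, best, best_err = 0.0, 0, float("inf")
    for i, m in enumerate(shape_mass):
        frac = acc / total
        if abs(frac - fpm) < best_err:
            best, best_err = i, abs(frac - fpm)
        acc += m
    return best

print(fractional_position_of_mass([1, 1, 1, 1], 1))  # 0.25
```

    Because the FPM is a proportion, uniformly rescaling the shape's mass leaves the recovered heading unchanged, which is the property that makes it a plausible guidance cue.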

  5. Scene Construction, Visual Foraging, and Active Inference

    PubMed Central

    Mirza, M. Berk; Adams, Rick A.; Mathys, Christoph D.; Friston, Karl J.

    2016-01-01

    This paper describes an active inference scheme for visual searches and the perceptual synthesis entailed by scene construction. Active inference assumes that perception and action minimize variational free energy, where actions are selected to minimize the free energy expected in the future. This assumption generalizes risk-sensitive control and expected utility theory to include epistemic value; namely, the value (or salience) of information inherent in resolving uncertainty about the causes of ambiguous cues or outcomes. Here, we apply active inference to saccadic searches of a visual scene. We consider the (difficult) problem of categorizing a scene, based on the spatial relationship among visual objects where, crucially, visual cues are sampled myopically through a sequence of saccadic eye movements. This means that evidence for competing hypotheses about the scene has to be accumulated sequentially, calling upon both prediction (planning) and postdiction (memory). Our aim is to highlight some simple but fundamental aspects of the requisite functional anatomy; namely, the link between approximate Bayesian inference under mean field assumptions and functional segregation in the visual cortex. This link rests upon the (neurobiologically plausible) process theory that accompanies the normative formulation of active inference for Markov decision processes. In future work, we hope to use this scheme to model empirical saccadic searches and identify the prior beliefs that underwrite intersubject variability in the way people forage for information in visual scenes (e.g., in schizophrenia). PMID:27378899

  6. High resolution animated scenes from stills.

    PubMed

    Lin, Zhouchen; Wang, Lifeng; Wang, Yunbo; Kang, Sing Bing; Fang, Tian

    2007-01-01

    Current techniques for generating animated scenes involve either videos (whose resolution is limited) or a single image (which requires a significant amount of user interaction). In this paper, we describe a system that allows the user to quickly and easily produce a compelling-looking animation from a small collection of high resolution stills. Our system has two unique features. First, it applies an automatic partial temporal order recovery algorithm to the stills in order to approximate the original scene dynamics. The output sequence is subsequently extracted using a second-order Markov Chain model. Second, a region with large motion variation can be automatically decomposed into semiautonomous regions such that their temporal orderings are softly constrained. This is to ensure motion smoothness throughout the original region. The final animation is obtained by frame interpolation and feathering. Our system also provides a simple-to-use interface to help the user to fine-tune the motion of the animated scene. Using our system, an animated scene can be generated in minutes. We show results for a variety of scenes. PMID:17356221
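
    The second-order Markov chain used above to extract the output sequence can be illustrated with a small sketch (the transition table here is hypothetical, not derived from actual stills):

```python
import random

def sample_sequence(transitions, start, length, seed=0):
    """Second-order Markov chain: the next still depends on the previous
    two, which preserves local motion direction when ordering stills."""
    rng = random.Random(seed)
    seq = list(start)  # seed with (frame[t-2], frame[t-1])
    while len(seq) < length:
        options = transitions.get((seq[-2], seq[-1]))
        if not options:
            break  # no recovered ordering continues from this state
        frames, weights = zip(*options.items())
        seq.append(rng.choices(frames, weights=weights)[0])
    return seq

# Hypothetical recovered partial temporal order over four stills.
transitions = {
    ("a", "b"): {"c": 1.0},
    ("b", "c"): {"d": 1.0},
}
print(sample_sequence(transitions, ("a", "b"), 4))  # ['a', 'b', 'c', 'd']
```

    In the actual system the transition weights would come from the recovered partial temporal order, and the sampled frame sequence would then be interpolated and feathered.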

  8. Maxwellian Eye Fixation during Natural Scene Perception

    PubMed Central

    Duchesne, Jean; Bouvier, Vincent; Guillemé, Julien; Coubard, Olivier A.

    2012-01-01

    When we explore a visual scene, our eyes make saccades to jump rapidly from one area to another and fixate regions of interest to extract useful information. While the role of fixation eye movements in vision has been widely studied, their random nature has been a hitherto neglected issue. Here we conducted two experiments to examine the Maxwellian nature of eye movements during fixation. In Experiment 1, eight participants were asked to perform free viewing of natural scenes displayed on a computer screen while their eye movements were recorded. For each participant, the probability density function (PDF) of eye movement amplitude during fixation obeyed the law established by Maxwell for describing molecule velocity in gas. Only the mean amplitude of eye movements varied with expertise, which was lower in experts than novice participants. In Experiment 2, two participants underwent fixed time, free viewing of natural scenes and of their scrambled version while their eye movements were recorded. Again, the PDF of eye movement amplitude during fixation obeyed Maxwell's law for each participant and for each scene condition (normal or scrambled). The results suggest that eye fixation during natural scene perception describes a random motion regardless of top-down or of bottom-up processes. PMID:23226987
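
    The Maxwell law referred to above has a simple closed form. A small sketch of the density and its mean, where the scale parameter a would be fitted per participant:

```python
import math

def maxwell_pdf(x, a):
    """Maxwell-Boltzmann density, originally for molecule speeds in a gas;
    here x is fixational eye-movement amplitude and a the scale parameter."""
    return math.sqrt(2.0 / math.pi) * x * x * math.exp(-x * x / (2 * a * a)) / a ** 3

def maxwell_mean(a):
    """Mean amplitude implied by the fit: E[x] = 2a * sqrt(2/pi)."""
    return 2.0 * a * math.sqrt(2.0 / math.pi)

print(round(maxwell_mean(1.0), 4))  # 1.5958
```

    The study's finding that only mean amplitude varied with expertise corresponds to experts and novices sharing this functional form but differing in a.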

  10. Moving through a multiplex holographic scene

    NASA Astrophysics Data System (ADS)

    Mrongovius, Martina

    2013-02-01

    This paper explores how movement can be used as a compositional element in installations of multiplex holograms. My holographic images are created from montages of hand-held video and photo-sequences. These spatially dynamic compositions are visually complex but anchored to landmarks and hints of the capturing process - such as the appearance of the photographer's shadow - to establish a sense of connection to the holographic scene. Moving around in front of the hologram, the viewer animates the holographic scene. A perception of motion then results from the viewer's bodily awareness of physical motion and the visual reading of dynamics within the scene or movement of perspective through a virtual suggestion of space. By linking and transforming the physical motion of the viewer with the visual animation, the viewer's bodily awareness - including proprioception, balance and orientation - play into the holographic composition. How multiplex holography can be a tool for exploring coupled, cross-referenced and transformed perceptions of movement is demonstrated with a number of holographic image installations. Through this process I expanded my creative composition practice to consider how dynamic and spatial scenes can be conveyed through the fragmented view of a multiplex hologram. This body of work was developed through an installation art practice and was the basis of my recently completed doctoral thesis: 'The Emergent Holographic Scene — compositions of movement and affect using multiplex holographic images'.

  11. Research on target scene generation for hardware-in-the-loop simulation of four-element infrared seeker

    NASA Astrophysics Data System (ADS)

    Yu, Jinsong; Xu, Bo; Hao, Wangsong; Li, Xingshan

    2006-11-01

    To satisfy the needs of hardware-in-the-loop simulation of a four-element infrared seeker, a method of dynamic infrared scene generation based on "direct signal injection" is proposed. The infrared scene signals generated by model calculation are composed of target movement, the launching of disturbers, and the complex background of sky or ground. The signals are injected directly into the electrical cabin of the seeker for verification and modification of the tracking and anti-jamming algorithms, so the complicated target simulator consisting of a black body, turntable, and optical system is not required. The dynamic infrared scene generation techniques based on the four-element infrared guidance principle and the modeling of the infrared scene are investigated in detail. Moreover, the implementation of the actual system is described to prove the feasibility of the method in practice.

  12. Defining event reconstruction of digital crime scenes.

    PubMed

    Carrier, Brian D; Spafford, Eugene H

    2004-11-01

    Event reconstruction plays a critical role in solving physical crimes by explaining why a piece of physical evidence has certain characteristics. With digital crimes, the current focus has been on the recognition and identification of digital evidence using an object's characteristics, but not on the identification of the events that caused the characteristics. This paper examines digital event reconstruction and proposes a process model and procedure that can be used for a digital crime scene. The model has been designed so that it can apply to physical crime scenes, can support the unique aspects of a digital crime scene, and can be implemented in software to automate part of the process. We also examine the differences between physical event reconstruction and digital event reconstruction. PMID:15568702

  13. Dynamic infrared scene projection: a review

    NASA Astrophysics Data System (ADS)

    Williams, Owen M.

    1998-12-01

    Since the early 1990s, there has been major progress in the developing field of dynamic infrared scene projection, driven principally by the need for hardware-in-the-loop simulation of the oncoming generation of imaging infrared missile seekers and more recently by the needs for realistic simulation of the new generation of thermal imagers and forward-looking infrared systems. In this paper the current status of the dynamic infrared projection field is reviewed, commencing with an outline of its history. The requirements for dynamic infrared scene projection are examined, allowing a set of validity criteria to be developed. Each class of infrared projector that has been investigated—emissive, transmissive, reflective, laser scanner and phosphor—together with the specific technology initiatives within the class is described and examined against the validity criteria. In this way the leading dynamic infrared scene projection technologies are identified.

  14. Perception of saturation in natural scenes.

    PubMed

    Schiller, Florian; Gegenfurtner, Karl R

    2016-03-01

    We measured how well perception of color saturation in natural scenes can be predicted by different measures available in the literature. We presented 80 color images of natural scenes or their gray-scale counterparts to our observers, who were asked to choose the pixel from each image that appeared to be the most saturated. We compared our observers' choices to the predictions of seven popular saturation measures. For the color images, all of the measures predicted perception of saturation quite well, with CIECAM02 performing best. Differences between the measures were small but systematic. When gray-scale images were viewed, observers still chose pixels whose counterparts in the color images were saturated above average. This indicates that image structure and prior knowledge can be relevant to the perception of saturation. Nevertheless, our results also show that saturation in natural scenes can be specified quite well without taking these factors into account. PMID:26974924
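
    As a toy counterpart to the measures compared above, here is the simple HSV-style saturation (max - min) / max applied per pixel. The study found CIECAM02, a far richer appearance model, to be the best predictor; this sketch only illustrates the general "pick the most saturated pixel" task:

```python
def saturation(rgb):
    """HSV-style saturation: (max - min) / max, in [0, 1]."""
    mx, mn = max(rgb), min(rgb)
    return 0.0 if mx == 0 else (mx - mn) / mx

def most_saturated_pixel(pixels):
    """The pixel an observer should pick under this particular measure."""
    return max(pixels, key=saturation)

pixels = [(128, 128, 128), (255, 0, 0), (200, 180, 160)]
print(most_saturated_pixel(pixels))  # (255, 0, 0)
```

    Comparing such per-measure picks against human choices over the 80 images is, in outline, how the seven measures were evaluated.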

  15. Improving semantic scene understanding using prior information

    NASA Astrophysics Data System (ADS)

    Laddha, Ankit; Hebert, Martial

    2016-05-01

    Perception for ground robot mobility requires automatic generation of descriptions of the robot's surroundings from sensor input (cameras, LADARs, etc.). Effective techniques for scene understanding have been developed, but they are generally purely bottom-up in that they rely entirely on classifying features from the input data based on learned models. In fact, perception systems for ground robots have a lot of information at their disposal from knowledge about the domain and the task. For example, a robot in urban environments might have access to approximate maps that can guide the scene interpretation process. In this paper, we explore practical ways to combine such prior information with state-of-the-art scene understanding approaches.
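
    One practical way to combine a prior map with bottom-up classification, in the spirit of the abstract above, is Bayesian reweighting of classifier scores. This is a minimal sketch with hypothetical labels and scores, not the paper's actual formulation:

```python
def fuse_with_prior(likelihoods, prior):
    """Posterior over semantic labels for one image region: bottom-up
    classifier scores reweighted by a prior from an approximate map."""
    post = {c: likelihoods[c] * prior.get(c, 0.0) for c in likelihoods}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

# The classifier slightly favors 'grass', but the approximate map says
# this region is probably road; the prior flips the decision.
likelihoods = {"road": 0.4, "grass": 0.6}
prior = {"road": 0.8, "grass": 0.2}
post = fuse_with_prior(likelihoods, prior)
print(max(post, key=post.get))  # road
```

    The same reweighting can be applied per pixel or per superpixel, with the map prior rasterized into label probabilities.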

  16. The polymorphism of crime scene investigation: An exploratory analysis of the influence of crime and forensic intelligence on decisions made by crime scene examiners.

    PubMed

    Resnikoff, Tatiana; Ribaux, Olivier; Baylon, Amélie; Jendly, Manon; Rossy, Quentin

    2015-12-01

    A growing body of scientific literature recurrently indicates that crime and forensic intelligence influence how crime scene investigators make decisions in their practices. This study further scrutinises this intelligence-led view of crime scene examination. It analyses results obtained from two questionnaires. Data have been collected from nine chiefs of Intelligence Units (IUs) and 73 Crime Scene Examiners (CSEs) working in forensic science units (FSUs) in the French-speaking part of Switzerland (six cantonal police agencies). Four salient elements emerged: (1) the actual existence of communication channels between IUs and FSUs across the police agencies under consideration; (2) most CSEs take into account the crime intelligence disseminated; (3) a differentiated, but significant, use by CSEs of this kind of intelligence in their daily practice; (4) a probable deep influence of this kind of intelligence on the most concerned CSEs, especially in the selection of the type of material/trace to detect, collect, analyse and exploit. These results contribute to deciphering the subtle dialectic articulating crime intelligence and crime scene investigation, and further express the polymorphic role of CSEs, beyond their most recognised input to the justice system. Indeed, they appear to be central, but implicit, stakeholders in an intelligence-led style of policing.

  18. The Temporal Dynamics of Scene Processing: A Multifaceted EEG Investigation

    PubMed Central

    Kravitz, Dwight J.

    2016-01-01

    Our remarkable ability to process complex visual scenes is supported by a network of scene-selective cortical regions. Despite growing knowledge about the scene representation in these regions, much less is known about the temporal dynamics with which these representations emerge. We conducted two experiments aimed at identifying and characterizing the earliest markers of scene-specific processing. In the first experiment, human participants viewed images of scenes, faces, and everyday objects while event-related potentials (ERPs) were recorded. We found that the first ERP component to evince a significantly stronger response to scenes than the other categories was the P2, peaking ∼220 ms after stimulus onset. To establish that the P2 component reflects scene-specific processing, in the second experiment, we recorded ERPs while the participants viewed diverse real-world scenes spanning the following three global scene properties: spatial expanse (open/closed), relative distance (near/far), and naturalness (man-made/natural). We found that P2 amplitude was sensitive to these scene properties at both the categorical level, distinguishing between open and closed natural scenes, as well as at the single-image level, reflecting both computationally derived scene statistics and behavioral ratings of naturalness and spatial expanse. Together, these results establish the P2 as an ERP marker for scene processing, and demonstrate that scene-specific global information is available in the neural response as early as 220 ms.

  20. Extracting text from real-world scenes

    NASA Technical Reports Server (NTRS)

    Bixler, J. Patrick; Miller, David P.

    1989-01-01

    Many scenes contain significant textual information that can be extremely helpful for understanding and/or navigation. For example, text-based information can frequently be the primary cue used for navigating inside buildings. A subject might first read a marquee, then look for an appropriate hallway and walk along reading door signs and nameplates until the destination is found. Optical character recognition has been studied extensively in recent years, but has been applied almost exclusively to printed documents. As these techniques improve it becomes reasonable to ask whether they can be applied to an arbitrary scene in an attempt to extract text-based information. Before an automated system can be expected to navigate by reading signs, however, the text must first be segmented from the rest of the scene. This paper discusses the feasibility of extracting text from an arbitrary scene and using that information to guide the navigation of a mobile robot. Considered are some simple techniques for first locating text components and then tracking the individual characters to form words and phrases. Results for some sample images are also presented.

  1. Common high-resolution MMW scene generator

    NASA Astrophysics Data System (ADS)

    Saylor, Annie V.; McPherson, Dwight A.; Satterfield, H. DeWayne; Sholes, William J.; Mobley, Scott B.

    2001-08-01

    The development of a modularized millimeter wave (MMW) target and background high-resolution scene generator is reported. The scene generator's underlying algorithms are applicable to both digital and real-time hardware-in-the-loop (HWIL) simulations. The scene generator will be configurable for a variety of MMW and multi-mode sensors employing state-of-the-art signal processing techniques. At present, digital simulations for MMW and multi-mode sensor development and testing are custom-designed by the seeker vendor and are verified, validated, and operated by both the vendor and government in simulation-based acquisition. A typical competition may involve several vendors, each requiring high-resolution target and background models for proper exercise of seeker algorithms. There is a need and desire by both the government and sensor vendors to eliminate costly re-design and re-development of digital simulations. Additional efficiencies are realized by assuring commonality between digital and HWIL simulation MMW scene generators, eliminating duplication of verification and validation efforts.

  2. Augustus De Morgan behind the Scenes

    ERIC Educational Resources Information Center

    Simmons, Charlotte

    2011-01-01

    Augustus De Morgan's support was crucial to the achievements of the four mathematicians whose work is considered greater than his own. This article explores the contributions he made to mathematics from behind the scenes by supporting the work of Hamilton, Boole, Gompertz, and Ramchundra.

  3. Scene reduction for subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Lewandowska (Tomaszewska), Anna

    2016-01-01

    Evaluation of image quality is important for many image processing systems, such as those used for acquisition, compression, restoration, enhancement, or reproduction. Its measurement is often accompanied by user studies, in which a group of observers rank or rate results of several algorithms. Such user studies, known as subjective image quality assessment experiments, can be very time-consuming and do not guarantee conclusive results. This paper is intended to help design an efficient and rigorous quality assessment experiment. We propose a method of limiting the number of scenes that need to be tested, which can significantly reduce the experimental effort and still capture relevant scene-dependent effects. To achieve this, we employ a clustering technique and evaluate it on the basis of compactness and separation criteria. The correlation between the results obtained from a set of images in an initial database and the results received from the reduced experiment is analyzed. Finally, we propose a procedure for reducing the number of scenes in the initial set. Four different assessment techniques were tested: single stimulus, double stimulus, forced choice, and similarity judgments. We conclude that in most cases, 9 to 12 judgments per evaluated algorithm for a large scene collection are sufficient to reduce the initial set of images.
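
    The clustering step described above can be sketched in a few lines: group per-scene feature vectors with plain k-means and keep only the scene nearest each centroid as that cluster's representative. This is a minimal sketch, not the paper's exact procedure; the feature vectors, the first-k initialization, and the nearest-to-centroid selection rule are illustrative assumptions.

```python
def dist2(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(vectors):
    """Component-wise mean of a non-empty list of vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def reduce_scenes(scene_vectors, k, iters=20):
    """Cluster scenes with plain k-means (first k scenes as initial
    centroids) and return the index of the scene closest to each centroid."""
    centroids = [scene_vectors[i] for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for idx, v in enumerate(scene_vectors):
            clusters[min(range(k), key=lambda j: dist2(v, centroids[j]))].append(idx)
        # recompute centroids; keep the old one if a cluster emptied out
        centroids = [mean([scene_vectors[i] for i in cl]) if cl else centroids[j]
                     for j, cl in enumerate(clusters)]
    return sorted(min(cl, key=lambda i: dist2(scene_vectors[i], centroids[j]))
                  for j, cl in enumerate(clusters) if cl)
```

    On two well-separated toy clusters, e.g. three scenes near (0, 0) and three near (5, 5), the function returns one representative index from each group.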

  4. Aerial Scene Recognition using Efficient Sparse Representation

    SciTech Connect

    Cheriyadat, Anil M

    2012-01-01

    Advanced scene recognition systems for processing large volumes of high-resolution aerial image data are in great demand today. However, automated scene recognition remains a challenging problem. Efficient encoding and representation of spatial and structural patterns in the imagery are key in developing automated scene recognition algorithms. We describe an image representation approach that uses simple and computationally efficient sparse code computation to generate accurate features capable of producing excellent classification performance using linear SVM kernels. Our method exploits unlabeled low-level image feature measurements to learn a set of basis vectors. We project the low-level features onto the basis vectors and use simple soft threshold activation function to derive the sparse features. The proposed technique generates sparse features at a significantly lower computational cost than other methods [Yang10, newsam11], yet it produces comparable or better classification accuracy. We apply our technique to high-resolution aerial image datasets to quantify the aerial scene classification performance. We demonstrate that the dense feature extraction and representation methods are highly effective for automatic large-facility detection on wide area high-resolution aerial imagery.
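
    The core of the feature computation, projecting low-level measurements onto learned basis vectors and applying a soft-threshold activation, can be sketched as follows. The toy basis, the threshold value, and the split into positive and negative half-responses are illustrative assumptions; in the paper the basis is learned from unlabeled feature measurements.

```python
def sparse_code(x, basis, alpha=0.5):
    """Project feature vector x onto each basis vector and soft-threshold
    the responses, keeping positive and negative parts separately so the
    resulting code stays non-negative and sparse."""
    code = []
    for b in basis:
        proj = sum(bi * xi for bi, xi in zip(b, x))
        code.append(max(0.0, proj - alpha))    # positive half-response
        code.append(max(0.0, -proj - alpha))   # negative half-response
    return code

# sparse_code((1.0, 0.0), [(1.0, 0.0), (0.0, 1.0)]) -> [0.5, 0.0, 0.0, 0.0]
```

    Responses whose magnitude falls below the threshold are zeroed outright, which is what makes the code sparse and cheap compared with solving a full sparse-coding optimization per image.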

  5. Light field constancy within natural scenes

    NASA Astrophysics Data System (ADS)

    Mury, Alexander A.; Pont, Sylvia C.; Koenderink, Jan J.

    2007-10-01

    The structure of light fields of natural scenes is highly complex due to high frequencies in the radiance distribution function. However, it is the low-order properties of light that determine the appearance of common matte materials. We describe the local light field in terms of spherical harmonics and analyze the qualitative properties and physical meaning of the low-order components. We take a first step in the further development of Gershun's classical work on the light field by extending his description beyond the 3D vector field, toward a more complete description of the illumination using tensors. We show that the first three components, namely, the monopole (density of light), the dipole (light vector), and the quadrupole (squash tensor), suffice to describe a wide range of qualitatively different light fields. In this paper we address a related issue, namely, the spatial properties of light fields within natural scenes. We want to find out to what extent local light fields change from point to point and how different orders behave. We found experimentally that the low-order components of the light field are rather constant over the scenes, whereas high-order components are not. Using very simple models, we found a strong relationship between the low-order components and the geometrical layouts of the scenes.
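
    As a toy illustration of the two lowest-order components, the monopole and dipole of a local light field can be estimated from discrete radiance samples over directions. This is a crude discretization for illustration only, assuming unit direction vectors and equal solid-angle weights; the paper works with a proper spherical-harmonic decomposition.

```python
def monopole_dipole(samples):
    """samples: list of (unit_direction, radiance) pairs over the sphere.
    Returns the mean radiance (monopole) and the mean radiance-weighted
    direction (dipole, the classical light vector)."""
    n = len(samples)
    monopole = sum(L for _, L in samples) / n
    dipole = tuple(sum(L * d[i] for d, L in samples) / n for i in range(3))
    return monopole, dipole
```

    For two opposed samples along z with radiances 2.0 and 1.0, the monopole is 1.5 and the light vector is (0, 0, 0.5), pointing toward the brighter direction.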

  6. Parafoveal Semantic Processing of Emotional Visual Scenes

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Lang, Peter J.

    2005-01-01

    The authors investigated whether emotional pictorial stimuli are especially likely to be processed in parafoveal vision. Pairs of emotional and neutral visual scenes were presented parafoveally (2.1° or 2.5° of visual angle from a central fixation point) for 150-3,000 ms, followed by an immediate recognition test (500-ms delay).…

  7. Vocational Guidance Requests within the International Scene

    ERIC Educational Resources Information Center

    Goodman, Jane; Gillis, Sarah

    2009-01-01

    This article summarizes the work of a diverse group of researchers and practitioners from 5 continents on "Vocational Guidance Requests Within the International Scene" presented in the discussion group at a symposium of the International Association for Educational and Vocational Guidance, the Society for Vocational Psychology, and the National…

  8. A graph theoretic approach to scene matching

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1991-01-01

    The ability to match two scenes is a fundamental requirement in a variety of computer vision tasks. A graph theoretic approach to inexact scene matching is presented which is useful in dealing with problems due to imperfect image segmentation. A scene is described by a set of graphs, with nodes representing objects and arcs representing relationships between objects. Each node has a set of values representing the relations between pairs of objects, such as angle, adjacency, or distance. With this method of scene representation, the task in scene matching is to match two sets of graphs. Because of segmentation errors, variations in camera angle, illumination, and other conditions, an exact match between the sets of observed and stored graphs is usually not possible. In the developed approach, the problem is represented as an association graph, in which each node represents a possible mapping of an observed region to a stored object, and each arc represents the compatibility of two mappings. Nodes and arcs have weights indicating the merit of a region-object mapping and the degree of compatibility between two mappings. A match between the two graphs corresponds to a clique, or fully connected subgraph, in the association graph. The task is to find the clique that represents the best match. Fuzzy relaxation is used to update the node weights using the contextual information contained in the arcs and neighboring nodes. This simplifies the evaluation of cliques. A method of handling oversegmentation and undersegmentation problems is also presented. The approach is tested with a set of realistic images which exhibit many types of segmentation errors.
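
    A brute-force version of the clique search can illustrate the data structures involved: node weights are region-object merits, arc weights are pairwise compatibilities, and any missing arc marks two mappings as incompatible. The exhaustive enumeration below is a sketch that stands in for the paper's fuzzy-relaxation updating and only scales to toy association graphs; the node names and weights are invented for illustration.

```python
from itertools import combinations

def best_clique(node_weights, arc_weights):
    """Find the max-score clique of an association graph by exhaustive
    enumeration. node_weights maps node id -> merit; arc_weights maps
    frozenset({u, v}) -> compatibility (absent pair = incompatible)."""
    nodes = list(node_weights)
    best, best_score = set(), 0.0
    for r in range(1, len(nodes) + 1):
        for subset in combinations(nodes, r):
            pairs = list(combinations(subset, 2))
            if any(frozenset(p) not in arc_weights for p in pairs):
                continue  # not fully connected -> not a clique
            score = (sum(node_weights[n] for n in subset)
                     + sum(arc_weights[frozenset(p)] for p in pairs))
            if score > best_score:
                best, best_score = set(subset), score
    return best, best_score
```

    With three candidate mappings of equal merit, the pair joined by the strongest compatibility arc wins over the weaker pair and over any single mapping.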

  9. Scene identification probabilities for evaluating radiation flux errors due to scene misidentification

    NASA Technical Reports Server (NTRS)

    Manalo, Natividad D.; Smith, G. L.

    1991-01-01

    The scene identification probabilities (Pij) are fundamentally important in evaluations of the top-of-the-atmosphere (TOA) radiation-flux errors due to scene misidentification. In this paper, the scene identification error probabilities were empirically derived from data collected in 1985 by the Earth Radiation Budget Experiment (ERBE) scanning radiometer when the ERBE satellite and the NOAA-9 spacecraft were rotated so as to scan alongside during brief periods in January and August 1985. Radiation-flux error computations utilizing these probabilities were performed, using orbit specifications for the ERBE, the Cloud and Earth's Radiant Energy System (CERES), and the SCARAB missions for a scene that was identified as partly cloudy over ocean. Typical values of the standard deviation of the random shortwave error were on the order of 1.5-5 W/sq m, but could reach values as high as 18.0 W/sq m as computed from NOAA-9.

  10. Out of Mind, Out of Sight: Unexpected Scene Elements Frequently Go Unnoticed Until Primed

    PubMed Central

    Zimbardo, Philip G.

    2013-01-01

    The human visual system employs a sophisticated set of strategies for scanning the environment and directing attention to stimuli that can be expected given the context and a person’s past experience. Although these strategies enable us to navigate a very complex physical and social environment, they can also cause highly salient, but unexpected stimuli to go completely unnoticed. To examine the generality of this phenomenon, we conducted eight studies that included 15 different experimental conditions and 1,577 participants in all. These studies revealed that a large majority of participants do not report having seen a woman in the center of an urban scene who was photographed in midair as she was committing suicide. Despite seeing the scene repeatedly, 46 % of all participants failed to report seeing a central figure and only 4.8 % reported seeing a falling person. Frequency of noticing the suicidal woman was highest for participants who read a narrative priming story that increased the extent to which she was schematically congruent with the scene. In contrast to this robust effect of inattentional blindness, a majority of participants reported seeing other peripheral objects in the visual scene that were equally difficult to detect, yet more consistent with the scene. Follow-up qualitative analyses revealed that participants reported seeing many elements that were not actually present, but which could have been expected given the overall context of the scene. Together, these findings demonstrate the robustness of inattentional blindness and highlight the specificity with which different visual primes may increase noticing behavior. PMID:24363542

  11. Detecting and representing predictable structure during auditory scene analysis.

    PubMed

    Sohoglu, Ediz; Chait, Maria

    2016-01-01

    We use psychophysics and MEG to test how sensitivity to input statistics facilitates auditory-scene-analysis (ASA). Human subjects listened to 'scenes' comprised of concurrent tone-pip streams (sources). On occasional trials a new source appeared partway. Listeners were more accurate and quicker to detect source appearance in scenes comprised of temporally-regular (REG), rather than random (RAND), sources. MEG in passive listeners and those actively detecting appearance events revealed increased sustained activity in auditory and parietal cortex in REG relative to RAND scenes, emerging ~400 ms after scene onset. Over and above this, appearance in REG scenes was associated with increased responses relative to RAND scenes. The effect of temporal structure on appearance-evoked responses was delayed when listeners were focused on the scenes relative to when listening passively, consistent with the notion that attention reduces 'surprise'. Overall, the results implicate a mechanism that tracks predictability of multiple concurrent sources to facilitate active and passive ASA.

  12. Scene and Position Specificity in Visual Memory for Objects

    ERIC Educational Resources Information Center

    Hollingworth, Andrew

    2006-01-01

    This study investigated whether and how visual representations of individual objects are bound in memory to scene context. Participants viewed a series of naturalistic scenes, and memory for the visual form of a target object in each scene was examined in a 2-alternative forced-choice test, with the distractor object either a different object…

  13. Plasma display technology for scene projector application

    NASA Astrophysics Data System (ADS)

    Solomon, Steve; Hawkins, Mikhel; Mastronardi, Nick

    2005-05-01

    Plasma display technology was investigated to determine its suitability for scene projection, particularly in the ultraviolet portion of the electromagnetic spectrum. This technology, in several guises, was found to hold considerable promise for projecting very high radiance, broadband or narrowband scenes across the spectrum, from the ultraviolet to the infrared. Performance metrics such as temporal response and dynamic range were also found to be promising for this technology. High manufacturing yields at relatively low display cost (e.g. cost/pixel) are expected due to the simplicity of the devices, the ability to leverage modern microelectronics-based deposition, pattern and etching techniques as well as the commercial plasma display community that continues to improve performance and drive manufacturing costs down.

  14. Additional Crime Scenes for Projectile Motion Unit

    NASA Astrophysics Data System (ADS)

    Fullerton, Dan; Bonner, David

    2011-12-01

    Building students' ability to transfer physics fundamentals to real-world applications establishes a deeper understanding of underlying concepts while enhancing student interest. Forensic science offers a great opportunity for students to apply physics to highly engaging, real-world contexts. Integrating these opportunities into inquiry-based problem solving in a team environment provides a terrific backdrop for fostering communication, analysis, and critical thinking skills. One such activity, inspired jointly by the museum exhibit "CSI: The Experience"2 and David Bonner's TPT article "Increasing Student Engagement and Enthusiasm: A Projectile Motion Crime Scene,"3 provides students with three different crime scenes, each requiring an analysis of projectile motion. In this lesson students socially engage in higher-order analysis of two-dimensional projectile motion problems by collecting information from 3-D scale models and collaborating with one another on its interpretation, in addition to diagramming and mathematical analysis typical to problem solving in physics.
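
    As a minimal example of the kind of analysis such a crime scene calls for, consider an object that slid horizontally off a ledge: the fall height fixes the flight time, and the measured landing distance then yields the launch speed. A short sketch, with made-up numbers and g rounded to 9.8 m/s²; none of these values come from the lesson itself.

```python
import math

G = 9.8  # gravitational acceleration, m/s^2

def launch_speed(height_m, range_m):
    """Speed of a horizontally launched projectile, recovered from the
    drop height and the horizontal distance to the impact point."""
    t = math.sqrt(2.0 * height_m / G)  # fall time from h = (1/2) g t^2
    return range_m / t                 # horizontal speed from d = v t

# launch_speed(4.9, 3.0) -> 3.0  (a 4.9 m drop takes exactly 1 s)
```

    Students can invert the same two equations to predict a landing point from a known launch speed, which is how the scale-model evidence gets checked against a suspect's account.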

  15. Combining MMW radar and radiometer images for enhanced characterization of scenes

    NASA Astrophysics Data System (ADS)

    Peichl, Markus; Dill, Stephan

    2016-05-01

    For several years, the use of active (radar) and passive (radiometer) MMW remote sensing has been considered an appropriate tool for many security-related applications. These include personnel screening for detection of objects concealed under clothing, and enhanced vision for vehicles or aircraft, to mention just a few examples. Radars, having a transmitter for scene illumination and a receiver for echo recording, are basically range-measuring devices which additionally deliver information about a target's reflectivity behavior. Radiometers, having only a receiver to record natural thermal radiation power, typically provide emission and reflection properties of a scene, using the environment and the cosmic background radiation as natural illumination sources. Consequently, the active and passive signatures of a scene and its objects are quite different, depending on the target and its scattering characteristics and on the actual illumination properties. Technology providers typically work either purely on radar or purely on radiometers for gathering information about a scene of interest. Only rarely are both information sources combined for enhanced information extraction, and even then the sensors' imaging geometries usually do not fit well enough for the benefit to be fully exploited. Consequently, investigations on adequate combinations of MMW radar and radiometer data have been performed. A mechanical scanner used in earlier experiments on personnel screening was modified to provide a similar imaging geometry for a Ka-band radiometer and a K-band radar. First experimental results are shown and discussed.

  16. Viewing Complex, Dynamic Scenes "Through the Eyes" of Another Person: The Gaze-Replay Paradigm.

    PubMed

    Bush, Jennifer Choe; Pantelis, Peter Christopher; Morin Duchesne, Xavier; Kagemann, Sebastian Alexander; Kennedy, Daniel Patrick

    2015-01-01

    We present a novel "Gaze-Replay" paradigm that allows the experimenter to directly test how particular patterns of visual input (generated from people's actual gaze patterns) influence the interpretation of the visual scene. Although this paradigm can potentially be applied across domains, here we applied it specifically to social comprehension. Participants viewed complex, dynamic scenes through a small window displaying only the foveal gaze pattern of a gaze "donor." This was intended to simulate the donor's visual selection, such that a participant could effectively view scenes "through the eyes" of another person. Throughout the presentation of scenes presented in this manner, participants completed a social comprehension task, assessing their abilities to recognize complex emotions. The primary aim of the study was to assess the viability of this novel approach by examining whether these Gaze-Replay windowed stimuli contain sufficient and meaningful social information for the viewer to complete this social perceptual and cognitive task. The results of the study suggested this to be the case; participants performed better in the Gaze-Replay condition compared to a temporally disrupted control condition, and compared to when they were provided with no visual input. This approach has great future potential for the exploration of experimental questions aiming to unpack the relationship between visual selection, perception, and cognition. PMID:26252493

  17. The time course of natural scene perception with reduced attention.

    PubMed

    Groen, Iris I A; Ghebreab, Sennay; Lamme, Victor A F; Scholte, H Steven

    2016-02-01

    Attention is thought to impose an informational bottleneck on vision by selecting particular information from visual scenes for enhanced processing. Behavioral evidence suggests, however, that some scene information is extracted even when attention is directed elsewhere. Here, we investigated the neural correlates of this ability by examining how attention affects electrophysiological markers of scene perception. In two electro-encephalography (EEG) experiments, human subjects categorized real-world scenes as manmade or natural (full attention condition) or performed tasks on unrelated stimuli in the center or periphery of the scenes (reduced attention conditions). Scene processing was examined in two ways: traditional trial averaging was used to assess the presence of a categorical manmade/natural distinction in event-related potentials, whereas single-trial analyses assessed whether EEG activity was modulated by scene statistics that are diagnostic of naturalness of individual scenes. The results indicated that evoked activity up to 250 ms was unaffected by reduced attention, showing intact categorical differences between manmade and natural scenes and strong modulations of single-trial activity by scene statistics in all conditions. Thus initial processing of both categorical and individual scene information remained intact with reduced attention. Importantly, however, attention did have profound effects on later evoked activity; full attention on the scene resulted in prolonged manmade/natural differences, increased neural sensitivity to scene statistics, and enhanced scene memory. These results show that initial processing of real-world scene information is intact with diminished attention but that the depth of processing of this information does depend on attention.

  19. Linguistic Theory and Actual Language.

    ERIC Educational Resources Information Center

    Segerdahl, Par

    1995-01-01

    Examines Noam Chomsky's (1957) discussion of "grammaticalness" and the role of linguistics in the "correct" way of speaking and writing. It is argued that the concern of linguistics with the tools of grammar has resulted in confusion, with the tools becoming mixed up with the actual language, thereby becoming the central element in a metaphysical…

  20. El Observatorio Gemini - Status actual

    NASA Astrophysics Data System (ADS)

    Levato, H.

    A brief description is given of the current status of the Gemini Observatory and of the Board's latest decisions to increase operational efficiency. Brief reference is also made to Argentine use of the observatory.

  1. Bulk silicon as photonic dynamic infrared scene projector

    NASA Astrophysics Data System (ADS)

    Malyutenko, V. K.; Bogatyrenko, V. V.; Malyutenko, O. Yu.

    2013-04-01

    A Si-based fast (frame rate >1 kHz), large-scale (scene area 100 cm2), broadband (3-12 μm), contactless dynamic infrared (IR) scene projector is demonstrated. An IR movie appears on the scene through conversion of a visible scenario projected onto a scene kept at an elevated temperature. Light down-conversion results from free-carrier generation in the bulk Si scene, followed by modulation of its thermal emission output in the spectral band of free-carrier absorption. The experimental setup, an IR movie, figures of merit, and the process's advantages over other projector technologies are discussed.

  2. Microcounseling Skill Discrimination Scale: A Methodological Note

    ERIC Educational Resources Information Center

    Stokes, Joseph; Romer, Daniel

    1977-01-01

    Absolute ratings on the Microcounseling Skill Discrimination Scale (MSDS) confound the individual's use of the rating scale with his or her actual ability to discriminate effective from ineffective counselor behaviors. This note suggests methods of scoring the MSDS that will eliminate variability attributable to response language and improve the validity of…

  3. Scene recognition by manifold regularized deep learning architecture.

    PubMed

    Yuan, Yuan; Mou, Lichao; Lu, Xiaoqiang

    2015-10-01

    Scene recognition is an important problem in the field of computer vision because it helps to narrow the gap between computers and human beings in scene understanding. Semantic modeling is a popular technique used to fill the semantic gap in scene recognition. However, most semantic modeling approaches learn shallow, one-layer representations for scene recognition, while ignoring the structural information relating images to one another, often resulting in poor performance. Modeled after our own human visual system, as it is intended to inherit humanlike judgment, a manifold regularized deep architecture is proposed for scene recognition. The proposed deep architecture exploits the structural information of the data, making for a mapping between the visible layer and the hidden layer. With the proposed approach, a deep architecture can be designed to learn high-level features for scene recognition in an unsupervised fashion. Experiments on standard data sets show that our method outperforms the state of the art in scene recognition.

  4. Basic level scene understanding: categories, attributes and structures

    PubMed Central

    Xiao, Jianxiong; Hays, James; Russell, Bryan C.; Patterson, Genevieve; Ehinger, Krista A.; Torralba, Antonio; Oliva, Aude

    2013-01-01

    A longstanding goal of computer vision is to build a system that can automatically understand a 3D scene from a single image. This requires extracting semantic concepts and 3D information from 2D images which can depict an enormous variety of environments that comprise our visual world. This paper summarizes our recent efforts toward these goals. First, we describe the richly annotated SUN database which is a collection of annotated images spanning 908 different scene categories with object, attribute, and geometric labels for many scenes. This database allows us to systematically study the space of scenes and to establish a benchmark for scene and object recognition. We augment the categorical SUN database with 102 scene attributes for every image and explore attribute recognition. Finally, we present an integrated system to extract the 3D structure of the scene and objects depicted in an image. PMID:24009590

  5. Rapid 3D video/laser sensing and digital archiving with immediate on-scene feedback for 3D crime scene/mass disaster data collection and reconstruction

    NASA Astrophysics Data System (ADS)

    Altschuler, Bruce R.; Oliver, William R.; Altschuler, Martin D.

    1996-02-01

    We describe a system for rapid and convenient video data acquisition and 3-D numerical coordinate data calculation able to provide precise 3-D topographical maps and 3-D archival data sufficient to reconstruct a 3-D virtual reality display of a crime scene or mass disaster area. Under a joint U.S. Army/U.S. Air Force project with collateral U.S. Navy support to create a 3-D surgical robotic inspection device -- a mobile, multi-sensor robotic surgical assistant to aid the surgeon in diagnosis, continual surveillance of patient condition, and robotic surgical telemedicine of combat casualties -- the technology for remote, non-destructive, quantitative 3-D mapping of objects of varied sizes is being perfected. This technology is being advanced with hyper-speed parallel video technology and compact, very fast laser electro-optics, such that 3-D surface map data will shortly be acquired within the time frame of conventional 2-D video. With simple field-capable calibration and mobile or portable platforms, the crime scene investigator could set up and survey the entire crime scene, or portions of it at high resolution, with almost the simplicity and speed of video or still photography. The survey apparatus would record relative position and location, and instantly archive thousands of artifacts at the site with 3-D data points capable of creating unbiased virtual reality reconstructions, or actual physical replicas, for the investigators, prosecutors, and jury.

  6. Lecture Notes on Multigrid Methods

    SciTech Connect

    Vassilevski, P S

    2010-06-28

    The Lecture Notes are primarily based on a sequence of lectures given by the author while he was a Fulbright Scholar at 'St. Kliment Ohridski' University of Sofia, Sofia, Bulgaria, during the winter semester of the 2009-2010 academic year. The notes are a somewhat expanded version of the actual one-semester class he taught there. The material covered is a slightly modified and adapted version of similar topics covered in the author's monograph 'Multilevel Block-Factorization Preconditioners', published in 2008 by Springer. The author tried to keep the notes as self-contained as possible. That is why the lecture notes begin with some basic introductory matrix-vector linear algebra and numerical PDE (finite element) facts, emphasizing the relations between functions in finite-dimensional spaces and their coefficient vectors and respective norms. Then, some additional facts on the implementation of finite elements based on relation tables using the popular compressed sparse row (CSR) format are given. Typical condition number estimates of stiffness and mass matrices and the global matrix assembly from local element matrices are given as well. Finally, some basic introductory facts about stationary iterative methods, such as Gauss-Seidel and its symmetrized version, are presented. The introductory material ends with the smoothing property of the classical iterative methods and the main definition of two-grid iterative methods. The second part of the notes deals with the various aspects of the principal two-grid (TG) method and the numerous versions of the multigrid (MG) cycles. At the end, in part III, algebraic versions of MG, referred to as AMG, are briefly introduced, focusing on classes of AMG specialized for finite element matrices.
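The two-grid method that closes the introductory part of these notes (smoothing plus coarse-grid correction) can be sketched in a few lines. This is a generic textbook illustration, not code from the notes; the 1D Poisson matrix and the linear-interpolation prolongation used below are standard choices.

```python
import numpy as np

def gauss_seidel(A, b, x, sweeps=2):
    """Forward Gauss-Seidel sweeps (the smoother)."""
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

def two_grid(A, b, x, P):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth.
    P is the prolongation (interpolation) matrix; the restriction is P^T."""
    x = gauss_seidel(A, b, x)
    r = b - A @ x
    Ac = P.T @ A @ P                      # Galerkin coarse-grid operator
    ec = np.linalg.solve(Ac, P.T @ r)     # exact coarse solve
    x = x + P @ ec                        # prolongate correction to fine grid
    return gauss_seidel(A, b, x)
```

Replacing the exact coarse solve with a recursive call on the coarse problem turns this two-grid cycle into a multigrid V-cycle.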

  7. Primal scene derivatives in the work of Yukio Mishima: the primal scene fantasy.

    PubMed

    Turco, Ronald N

    2002-01-01

    This article discusses the preoccupation with fire, revenge, crucifixion, and other fantasies as they relate to the primal scene. The manifestations of these fantasies are demonstrated in a work of fiction by Yukio Mishima, The Temple of the Golden Pavilion. As is the case in other writings of Mishima, there is a fusion of aggressive and libidinal drives and a preoccupation with death. The primal scene is directly connected with pyromania and destructive "acting out" of fantasies. This article is timely with regard to understanding contemporary events of cultural and national destruction.

  8. TMS to object cortex affects both object and scene remote networks while TMS to scene cortex only affects scene networks.

    PubMed

    Rafique, Sara A; Solomon-Harris, Lily M; Steeves, Jennifer K E

    2015-12-01

    Viewing the world involves many computations across a great number of regions of the brain, all the while appearing seamless and effortless. We sought to determine the connectivity of object and scene processing regions of cortex through the influence of transient focal neural noise in discrete nodes within these networks. We consecutively paired repetitive transcranial magnetic stimulation (rTMS) with functional magnetic resonance-adaptation (fMR-A) to measure the effect of rTMS on functional response properties at the stimulation site and in remote regions. In separate sessions, rTMS was applied to the object preferential lateral occipital region (LO) and scene preferential transverse occipital sulcus (TOS). Pre- and post-stimulation responses were compared using fMR-A. In addition to modulating BOLD signal at the stimulation site, TMS affected remote regions, revealing inter- and intrahemispheric connections between LO, TOS, and the posterior parahippocampal place area (PPA). Moreover, we show remote effects from object preferential LO to outside the ventral perception network, in parietal and frontal areas, indicating an interaction of dorsal and ventral streams and possibly a shared common framework of perception and action.

  9. Characteristics of the Self-Actualized Person: Visions from the East and West.

    ERIC Educational Resources Information Center

    Chang, Raylene; Page, Richard C.

    1991-01-01

    Compares and contrasts the ways that Chinese Taoism and Zen Buddhism view the development of human potential with the ways that the self-actualization theories of Rogers and Maslow describe the human potential movement. Notes many similarities between the ways that Taoism, Zen Buddhism, and the self-actualization theories of Rogers and Maslow…

  10. IR characteristic simulation of city scenes based on radiosity model

    NASA Astrophysics Data System (ADS)

    Xiong, Xixian; Zhou, Fugen; Bai, Xiangzhi; Yu, Xiyu

    2013-09-01

    Reliable modeling of the thermal infrared (IR) signatures of real-world city scenes is required for signature management of civil and military platforms. Traditional modeling methods generally assume that scene objects are individual entities during the physical processes occurring in the infrared range. In reality, however, the physical scene involves convective and conductive interactions between objects as well as radiative interactions between objects. A method based on a radiosity model, which describes these complex effects, has been developed to enable accurate simulation of the radiance distribution of city scenes. Firstly, the physical processes affecting the IR characteristics of city scenes were described. Secondly, heat balance equations were formed by combining the atmospheric conditions, shadow maps, and the geometry of the scene. Finally, the finite difference method was used to calculate the kinetic temperature of object surfaces, and a radiosity model was introduced to describe the scattering of radiation between surface elements in the scene. By synthesizing the radiance distribution of objects in the infrared range, the IR characteristics of the scene are obtained. Real infrared images and model predictions are shown and compared. The results demonstrate that this method can realistically simulate the IR characteristics of city scenes: it effectively reproduces infrared shadow effects and the radiative interactions between objects.
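At its core, a radiosity model of the kind this abstract describes reduces radiative exchange between surface patches to a linear system B = E + diag(rho) F B, where F is the form-factor matrix. A minimal sketch of that solve follows; the two-patch form factors in the test are illustrative numbers, not values from the paper.

```python
import numpy as np

def solve_radiosity(E, rho, F):
    """Solve B = E + diag(rho) @ F @ B for the radiosity B of each surface patch.
    E: emitted radiance per patch; rho: reflectivity; F: form-factor matrix
    (row i gives the fraction of patch i's radiation reaching each other patch)."""
    n = len(E)
    return np.linalg.solve(np.eye(n) - np.diag(rho) @ F, E)
```

The direct solve is fine for small scenes; large city models typically use iterative (e.g., progressive radiosity) schemes instead.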

  11. Semantic guidance of eye movements in real-world scenes

    PubMed Central

    Hwang, Alex D.; Wang, Hsueh-Cheng; Pomplun, Marc

    2011-01-01

    The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying Latent Semantic Analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects’ gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects’ eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. PMID:21426914
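The semantic saliency maps in this study score each scene object by the similarity of its label vector to the fixated object or search target. A minimal sketch of that scoring step is below; the toy label vectors are made up for illustration, whereas the study derived its vectors from LSA over the LabelMe annotations.

```python
import numpy as np

def semantic_saliency(label_vectors, target_label):
    """Cosine similarity of each object's label vector to the target's vector:
    higher values predict gaze transitions toward semantically similar objects."""
    t = label_vectors[target_label]
    sal = {}
    for label, v in label_vectors.items():
        sal[label] = float(v @ t / (np.linalg.norm(v) * np.linalg.norm(t)))
    return sal
```

Assigning each object's score to its pixels in the image yields the saliency map evaluated with ROC analysis in the study.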

  12. Learning a Probabilistic Topology Discovering Model for Scene Categorization.

    PubMed

    Zhang, Luming; Ji, Rongrong; Xia, Yingjie; Zhang, Ying; Li, Xuelong

    2015-08-01

    A recent advance in scene categorization prefers topology-based modeling to capture the existence of, and relationships among, different scene components. To that effect, local features are typically used to handle photographic variances such as occlusions and clutter. However, in many cases the local features alone cannot fully capture the scene semantics, since they are extracted from tiny regions (e.g., 4×4 patches) within an image. In this paper, we mine a discriminative topology and a low-redundancy topology from the local descriptors under a probabilistic perspective, which are further integrated into a boosting framework for scene categorization. In particular, by decomposing a scene image into basic components, a graphlet model is used to describe their spatial interactions. Accordingly, scene categorization is formulated as an intergraphlet matching problem. The above procedure is further accelerated by introducing a probabilistic representative-topology selection scheme that makes the pairwise graphlet comparison tractable despite the exponentially increasing number of graphlets. The selected graphlets are highly discriminative and independent, characterizing the topological characteristics of scene images. A weak learner is subsequently trained for each topology, and the learners are boosted together to jointly describe the scene image. In our experiments, the visualized graphlets demonstrate that the mined topological patterns are representative of scene categories, and our proposed method beats state-of-the-art models on five popular scene data sets.

  13. Full Scenes Produce More Activation than Close-Up Scenes and Scene-Diagnostic Objects in Parahippocampal and Retrosplenial Cortex: An fMRI Study

    ERIC Educational Resources Information Center

    Henderson, John M.; Larson, Christine L.; Zhu, David C.

    2008-01-01

    We used fMRI to directly compare activation in two cortical regions previously identified as relevant to real-world scene processing: retrosplenial cortex and a region of posterior parahippocampal cortex functionally defined as the parahippocampal place area (PPA). We compared activation in these regions to full views of scenes from a global…

  14. Applying artificial vision models to human scene understanding

    PubMed Central

    Aminoff, Elissa M.; Toneva, Mariya; Shrivastava, Abhinav; Chen, Xinlei; Misra, Ishan; Gupta, Abhinav; Tarr, Michael J.

    2015-01-01

    How do we understand the complex patterns of neural responses that underlie scene understanding? Studies of the network of brain regions held to be scene-selective—the parahippocampal/lingual region (PPA), the retrosplenial complex (RSC), and the occipital place area (TOS)—have typically focused on single visual dimensions (e.g., size), rather than the high-dimensional feature space in which scenes are likely to be neurally represented. Here we leverage well-specified artificial vision systems to explicate a more complex understanding of how scenes are encoded in this functional network. We correlated similarity matrices within three different scene-spaces arising from: (1) BOLD activity in scene-selective brain regions; (2) behaviorally measured judgments of visually perceived scene similarity; and (3) several different computer vision models. These correlations revealed: (1) models that relied on mid- and high-level scene attributes showed the highest correlations with the patterns of neural activity within the scene-selective network; (2) NEIL and SUN—the models that best accounted for the patterns obtained from PPA and TOS—were different from the GIST model that best accounted for the pattern obtained from RSC; (3) the best performing models outperformed behaviorally measured judgments of scene similarity in accounting for neural data. One computer vision method—NEIL (“Never-Ending-Image-Learner”), which incorporates visual features learned as statistical regularities across web-scale numbers of scenes—showed significant correlations with neural activity in all three scene-selective regions and was one of the two models best able to account for variance in the PPA and TOS. We suggest that these results are a promising first step in explicating more fine-grained models of neural scene understanding, including developing a clearer picture of the division of labor among the components of the functional scene-selective brain network. PMID:25698964
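Correlating similarity matrices across neural, behavioral, and model-derived scene-spaces is a representational similarity analysis. The comparison step can be sketched as below; Pearson r over the upper triangle is one common choice, and the study's exact correlation measure may differ.

```python
import numpy as np

def rsa_correlation(sim_a, sim_b):
    """Correlate two similarity matrices over their upper triangles
    (Pearson r; the diagonal and duplicate lower triangle are excluded)."""
    iu = np.triu_indices_from(sim_a, k=1)
    a, b = sim_a[iu], sim_b[iu]
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())
```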

  15. Sensory Substitution: The Spatial Updating of Auditory Scenes "Mimics" the Spatial Updating of Visual Scenes.

    PubMed

    Pasqualotto, Achille; Esenkaya, Tayfun

    2016-01-01

    Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or "soundscapes". Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgment of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices (SSD). PMID:27148000

  16. Tachistoscopic illumination and masking of real scenes

    PubMed Central

    Chichka, David; Philbeck, John W.; Gajewski, Daniel A.

    2014-01-01

    Tachistoscopic presentation of scenes has been valuable for studying the emerging properties of visual scene representations. The spatial aspects of this work have generally been focused on the conceptual locations (e.g., next to the refrigerator) and the directional locations of objects in 2D arrays and/or images. Less is known about how the perceived egocentric distance of objects develops. Here we describe a novel system for presenting brief glimpses of a real-world environment, followed by a mask. The system includes projectors with mechanical shutters for projecting the fixation and masking images, a set of LED floodlights for illuminating the environment, and computer-controlled electronics to set the timing and initiate the process. Because a real environment is used, most visual distance and depth cues may be manipulated using traditional methods. The system is inexpensive, robust, and its components are readily available in the marketplace. This paper describes the system and the timing characteristics of each component. Verification of the ability to control exposure to time scales as low as a few milliseconds is demonstrated. PMID:24519496

  17. Interactive Display of Scenes with Annotations

    NASA Technical Reports Server (NTRS)

    Vona, Marsette; Powell, Mark; Backes, Paul; Norris, Jeffrey; Steinke, Robert

    2005-01-01

    ThreeDView is a computer program that enables high-performance interactive display of real-world scenes with annotations. ThreeDView was developed primarily as a component of the Science Activity Planner (SAP) software, wherein it is to be used to display annotated images of terrain acquired by exploratory robots on Mars and possibly other remote planets. The images can be generated from sets of multiple-texture image data in the Visible Scalable Terrain (ViSTa) format, which was described in "Format for Interchange and Display of 3D Terrain Data" (NPO-30600), NASA Tech Briefs, Vol. 28, No. 12 (December 2004), page 25. In ThreeDView, terrain data can be loaded rapidly, the geometric level of detail and texture resolution can be selected, false colors can be used to represent scientific data mapped onto terrain, and the user can select among navigation modes. ThreeDView consists largely of modular Java software components that can easily be reused and extended to produce new high-performance, application-specific software systems for displaying images of three-dimensional real-world scenes.

  18. Integration and segregation in auditory scene analysis

    NASA Astrophysics Data System (ADS)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and that the integration of sequential elements into perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  19. Recognition and memory for briefly presented scenes.

    PubMed

    Potter, Mary C

    2012-01-01

    Three times per second, our eyes make a new fixation that generates a new bottom-up analysis in the visual system. How much is extracted from each glimpse? For how long and in what form is that information remembered? To answer these questions, investigators have mimicked the effect of continual shifts of fixation by using rapid serial visual presentation of sequences of unrelated pictures. Experiments in which viewers detect specified target pictures show that detection on the basis of meaning is possible at presentation durations as brief as 13 ms, suggesting that understanding may be based on feedforward processing, without feedback. In contrast, memory for what was just seen is poor unless the viewer has about 500 ms to think about the scene: the scene does not need to remain in view. Initial memory loss after brief presentations occurs over several seconds, suggesting that at least some of the information from the previous few fixations persists long enough to support a coherent representation of the current environment. In contrast to marked memory loss shortly after brief presentations, memory for pictures viewed for 1 s or more is excellent. Although some specific visual information persists, the form and content of the perceptual and memory representations of pictures over time indicate that conceptual information is extracted early and determines most of what remains in longer-term memory. PMID:22371707

  20. The scene and the unseen: manipulating photographs for experiments on change blindness and scene memory: image manipulation for change blindness.

    PubMed

    Ball, Felix; Elzemann, Anne; Busch, Niko A

    2014-09-01

    The change blindness paradigm, in which participants often fail to notice substantial changes in a scene, is a popular tool for studying scene perception, visual memory, and the link between awareness and attention. Some of the most striking and popular examples of change blindness have been demonstrated with digital photographs of natural scenes; in most studies, however, much simpler displays, such as abstract stimuli or "free-floating" objects, are typically used. Although simple displays have undeniable advantages, natural scenes remain a very useful and attractive stimulus for change blindness research. To assist researchers interested in using natural-scene stimuli in change blindness experiments, we provide here a step-by-step tutorial on how to produce changes in natural-scene images with a freely available image-processing tool (GIMP). We explain how changes in a scene can be made by deleting objects or relocating them within the scene or by changing the color of an object, in just a few simple steps. We also explain how the physical properties of such changes can be analyzed using GIMP and MATLAB (a high-level scientific programming tool). Finally, we present an experiment confirming that scenes manipulated according to our guidelines are effective in inducing change blindness and demonstrating the relationship between change blindness and the physical properties of the change and inter-individual differences in performance measures. We expect that this tutorial will be useful for researchers interested in studying the mechanisms of change blindness, attention, or visual memory using natural scenes.
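The physical properties of a change (its magnitude, extent, and location) can be quantified by simple image differencing, in the spirit of the MATLAB analysis this tutorial describes. The following is a generic sketch, not the authors' scripts; the threshold value is an arbitrary illustration.

```python
import numpy as np

def change_properties(img_a, img_b, threshold=0.1):
    """Quantify a scene change between two images: mean per-pixel magnitude,
    changed area (pixel count), and the centroid of the changed region."""
    diff = np.abs(img_a.astype(float) - img_b.astype(float))
    if diff.ndim == 3:                    # average over color channels
        diff = diff.mean(axis=2)
    mask = diff > threshold
    ys, xs = np.nonzero(mask)
    centroid = (ys.mean(), xs.mean()) if mask.any() else None
    return {"mean_magnitude": diff[mask].mean() if mask.any() else 0.0,
            "area_px": int(mask.sum()),
            "centroid": centroid}
```

Such measures make it possible to relate detection performance to the size and salience of the manipulated change.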

  1. Crime scene units: a look to the future

    NASA Astrophysics Data System (ADS)

    Baldwin, Hayden B.

    1999-02-01

    The scientific examination of physical evidence is well recognized as a critical element in conducting successful criminal investigations and prosecutions. The forensic science field is an ever-changing discipline. With the arrival of DNA analysis, new processing techniques for latent prints, portable lasers, and electrostatic dust print lifters, the training of evidence technicians has become more important than ever. These scientific and technological breakthroughs have increased the possibility of collecting and analyzing physical evidence that was never possible before. The problem arises with the collection of physical evidence from the crime scene, not with the analysis of the evidence. The need for specialized units in the processing of all crime scenes is imperative. These specialized units, called crime scene units, should be trained and equipped to handle all forms of crime scenes, with the capability to professionally evaluate and collect pertinent physical evidence from them.

  2. How People Actually Use Thermostats

    SciTech Connect

    Meier, Alan; Aragon, Cecilia; Hurwitz, Becky; Mujumdar, Dhawal; Peffer, Therese; Perry, Daniel; Pritoni, Marco

    2010-08-15

    Residential thermostats have been a key element in controlling heating and cooling systems for over sixty years. However, today's modern programmable thermostats (PTs) are complicated and difficult for users to understand, leading to errors in operation and wasted energy. Four separate tests of usability were conducted in preparation for a larger study. These tests included personal interviews, an on-line survey, photographing actual thermostat settings, and measurements of ability to accomplish four tasks related to effective use of a PT. The interviews revealed that many occupants used the PT as an on-off switch and most demonstrated little knowledge of how to operate it. The on-line survey found that 89% of the respondents rarely or never used the PT to set a weekday or weekend program. The photographic survey (in low income homes) found that only 30% of the PTs were actually programmed. In the usability test, we found that we could quantify the difference in usability of two PTs as measured in time to accomplish tasks. Users accomplished the tasks in consistently shorter times with the touchscreen unit than with buttons. None of these studies are representative of the entire population of users but, together, they illustrate the importance of improving user interfaces in PTs.

  3. Global scene layout modulates contextual learning in change detection.

    PubMed

    Conci, Markus; Müller, Hermann J

    2014-01-01

    Change in the visual scene often goes unnoticed - a phenomenon referred to as "change blindness." This study examined whether the hierarchical structure, i.e., the global-local layout of a scene can influence performance in a one-shot change detection paradigm. To this end, natural scenes of a laid breakfast table were presented, and observers were asked to locate the onset of a new local object. Importantly, the global structure of the scene was manipulated by varying the relations among objects in the scene layouts. The very same items were either presented as global-congruent (typical) layouts or as global-incongruent (random) arrangements. Change blindness was less severe for congruent than for incongruent displays, and this congruency benefit increased with the duration of the experiment. These findings show that global layouts are learned, supporting detection of local changes with enhanced efficiency. However, performance was not affected by scene congruency in a subsequent control experiment that required observers to localize a static discontinuity (i.e., an object that was missing from the repeated layouts). Our results thus show that learning of the global layout is particularly linked to the local objects. Taken together, our results reveal an effect of "global precedence" in natural scenes. We suggest that relational properties within the hierarchy of a natural scene are governed, in particular, by global image analysis, reducing change blindness for local objects through scene learning.

  4. A qualitative approach for recovering relative depths in dynamic scenes

    NASA Technical Reports Server (NTRS)

    Haynes, S. M.; Jain, R.

    1987-01-01

    This approach to dynamic scene analysis is a qualitative one. It computes relative depths using very general rules. The depths calculated are qualitative in the sense that the only information obtained is which object is in front of which others. The motion is qualitative in the sense that the only required motion data is whether objects are moving toward or away from the camera. Reasoning, which takes into account the temporal character of the data and the scene, is qualitative. This approach to dynamic scene analysis can tolerate imprecise data because in dynamic scenes the data are redundant.

  5. Research on hyperspectral dynamic infrared scene simulation technology

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Hu, Yu; Ding, Na; Sun, Kefeng; Sun, Dandan; Xie, Junhu; Wu, Wenli; Gao, Jiaobo

    2015-02-01

    The paper presents a hardware-in-the-loop dynamic IR scene simulation technology for IR hyperspectral imaging systems. With the rapid development of new electro-optical detection, remote sensing, and hyperspectral imaging techniques, not only the calibration of static parameters of hyperspectral IR imaging systems but also the testing and evaluation of their dynamic parameters are required; thus hyperspectral dynamic IR simulation and evaluation become more and more important. A hyperspectral dynamic IR scene projector controls spectrum and time synchronously, using hyperspectral spatial- and time-domain features, to realize hardware-in-the-loop simulation. Hyperspectral IR target and background images are obtained through 3D modeling and IR characteristic rendering, and the hyperspectral dynamic IR scene is produced by an image-converting device. The main parameters of a developed hyperspectral dynamic IR scene projector are: waveband ranges of 3-5 μm and 8-12 μm; field of view (FOV) of 8°; spatial resolution of 1024×768; and spectral resolution of 1%-2%. The IR source and simulated scene features should be consistent with the spectral characteristics of the target, and images for the different spectral channels can be obtained from calibration. A hyperspectral imaging system splits the light with a dispersive grating, push-broom scans, and collects the output signal of the dynamic IR scene projector. With hyperspectral scene spectrum modeling, IR feature rendering, atmospheric transmission modeling, and IR scene projection, targets and scenes in the field can be simulated in the laboratory, accomplishing the simulation and evaluation of the dynamic features of IR hyperspectral imaging systems.

  6. The occipital place area represents the local elements of scenes.

    PubMed

    Kamps, Frederik S; Julian, Joshua B; Kubilius, Jonas; Kanwisher, Nancy; Dilks, Daniel D

    2016-05-15

    Neuroimaging studies have identified three scene-selective regions in human cortex: parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA). However, precisely what scene information each region represents is not clear, especially for the least studied, more posterior OPA. Here we hypothesized that OPA represents local elements of scenes within two independent, yet complementary scene descriptors: spatial boundary (i.e., the layout of external surfaces) and scene content (e.g., internal objects). If OPA processes the local elements of spatial boundary information, then it should respond to these local elements (e.g., walls) themselves, regardless of their spatial arrangement. Indeed, we found that OPA, but not PPA or RSC, responded similarly to images of intact rooms and these same rooms in which the surfaces were fractured and rearranged, disrupting the spatial boundary. Next, if OPA represents the local elements of scene content information, then it should respond more when more such local elements (e.g., furniture) are present. Indeed, we found that OPA, but not PPA or RSC, responded more to multiple than single pieces of furniture. Taken together, these findings reveal that OPA analyzes local scene elements - both in spatial boundary and scene content representation - while PPA and RSC represent global scene properties. PMID:26931815

  7. Templates for rejection can specify semantic properties of nontargets in natural scenes.

    PubMed

    Daffron, Jennifer L; Davis, Greg

    2015-01-01

    In contrast to standard search templates that specify a target object's expected features, templates for rejection (TFR) may specify features of nontargets, biasing attention away from irrelevant objects. Little is known about TFR, and virtually nothing is known about their role in guiding search across natural scenes. In such scenes, targets and nontargets may not be easily distinguished on the basis of their visual features; it has been claimed that standard search templates may therefore specify target objects' semantic features to guide attention. Here, we ask whether TFR can do so. Noting a limitation of previous procedures used to study standard search templates, we trialed an alternative method to examine semantic templates for nontarget exclusion in natural scene search. We found that when nontargets belonged unpredictably to either of two physically distinct categories, search was less efficient than when targets belonged to one known category. This two-category cost, attributed to inefficient application of search templates, was absent for two physically dissimilar but semantically related categories. Adding a training phase to highlight semantic distinctiveness of two object categories reinstated the two-category cost, precluding stimulus-based accounts of the effect. These patterns were not observed for one-image displays or when observers searched for object categories rather than ignoring them, demonstrating their specificity to TFR, the inadequacy of search-and-destroy models to account for them, and likely basis in attentional guidance. TFR can specify semantic information to guide attention away from nontargets.

  8. Imaging polarimetry in scene element discrimination

    NASA Astrophysics Data System (ADS)

    Duggin, Michael J.

    1999-10-01

    Recent work has shown that the use of a calibrated digital camera fitted with a rotating linear polarizer can facilitate the study of Stokes-parameter images across a wide dynamic range of scene radiance values. Here, we show images of MacBeth color chips, Spectralon gray-scale targets, and Kodak gray cards. We also consider a static aircraft mounted on a platform against a clear-sky background. We show that the contrast in polarization is greater than the contrast in intensity, and that polarization contrast increases as intensity contrast decreases. We also show that there is great variation in polarization within and between each of the bandpasses; this variation is comparable in magnitude to the variation in intensity.

  9. Real time moving scene holographic camera system

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1973-01-01

    A holographic motion picture camera system producing resolution of front surface detail is described. The system utilizes a beam of coherent light and means for dividing the beam into a reference beam for direct transmission to a conventional movie camera and two reflection signal beams for transmission to the movie camera by reflection from the front side of a moving scene. The system is arranged so that critical parts of the system are positioned on the foci of a pair of interrelated, mathematically derived ellipses. The camera has the theoretical capability of producing motion picture holograms of projectiles moving at speeds as high as 900,000 cm/sec (about 21,450 mph).

  10. [A doctor's action within possible crime scene].

    PubMed

    Sowizdraniuk, Joanna

    2016-01-01

    Every doctor, regardless of specialization, may in practice need to provide assistance to victims of a crime. This article discusses informing the investigative authorities about a crime and ensuring the safety of oneself and of others at the scene. It also presents the specific procedures needed to deal with victims and to secure, in a proper manner, any evidence of a potential or committed crime. Special attention is given to the medical operations required in the case of certain groups of crimes, among which we should note: offences against sexual freedom and decency, against bodily integrity, and against human life and health, especially homicide, infanticide, and suicide.

  11. India' energy scene: Options for the future

    SciTech Connect

    Not Available

    1988-01-01

    This book provides a concise yet thorough and up-to-date survey of the Indian energy scene, focusing on its major features, current problems, and future prospects. India's renewable and nonrenewable energy reserves are described and compared with those of a number of other countries. Trends in energy consumption and production over the 1970-1985 period are examined. Energy conservation methods are discussed. A detailed review is given of renewable energy resources. India's nuclear resources are assessed, and it is recommended that nuclear energy be emphasized because of the nation's ample thorium ores. How R and D can help alleviate India's energy problems is discussed, and specific recommendations are made on which a national agenda for action can be based. Information is provided on the contribution of nuclear power to electricity production in developing industrialized countries.

  12. Scene simulation for passive IR systems

    NASA Technical Reports Server (NTRS)

    Holt, J. D.; Dawbarn, R.; Bailey, A. B.

    1986-01-01

    The development of large mosaic detector arrays will allow for the construction of staring long wave infrared (LWIR) sensors which can observe large fields of view instantaneously and continuously. In order to evaluate and exercise these new systems, it will be necessary to provide simulated scenes of many moving targets against an infrared clutter background. Researchers are currently developing a projector/screen system. This system is comprised of a mechanical scanner, a diffuse screen, and a miniature blackbody. A prototype of the mechanical scanner, which is comprised of four independently driven scanners, has been designed, fabricated, and evaluated under room and cryogenic vacuum conditions. A large diffuse screen has been constructed and tested for structural integrity under cryogenic/vacuum thermal cycling. Construction techniques have been developed for the fabrication of miniature high-temperature blackbody sources. Finally, a concept has been developed to use this miniature blackbody to produce a spectrally tailorable source.

  13. Decoding individual natural scene representations during perception and imagery

    PubMed Central

    Johnson, Matthew R.; Johnson, Marcia K.

    2014-01-01

    We used a multi-voxel classification analysis of functional magnetic resonance imaging (fMRI) data to determine to what extent item-specific information about complex natural scenes is represented in several category-selective areas of human extrastriate visual cortex during visual perception and visual mental imagery. Participants in the scanner either viewed or were instructed to visualize previously memorized natural scene exemplars, and the neuroimaging data were subsequently subjected to a multi-voxel pattern analysis (MVPA) using a support vector machine (SVM) classifier. We found that item-specific information was represented in multiple scene-selective areas: the occipital place area (OPA), parahippocampal place area (PPA), retrosplenial cortex (RSC), and a scene-selective portion of the precuneus/intraparietal sulcus region (PCu/IPS). Furthermore, item-specific information from perceived scenes was re-instantiated during mental imagery of the same scenes. These results support findings from previous decoding analyses for other types of visual information and/or brain areas during imagery or working memory, and extend them to the case of visual scenes (and scene-selective cortex). Taken together, such findings support models suggesting that reflective mental processes are subserved by the re-instantiation of perceptual information in high-level visual cortex. We also examined activity in the fusiform face area (FFA) and found that it, too, contained significant item-specific scene information during perception, but not during mental imagery. This suggests that although decodable scene-relevant activity occurs in FFA during perception, FFA activity may not be a necessary (or even relevant) component of one's mental representation of visual scenes. PMID:24574998

  14. A cardinal orientation bias in scene-selective visual cortex.

    PubMed

    Nasr, Shahin; Tootell, Roger B H

    2012-10-24

    It has long been known that human vision is more sensitive to contours at cardinal (horizontal and vertical) orientations, compared with oblique orientations; this is the "oblique effect." However, the real-world relevance of the oblique effect is not well understood. Experiments here suggest that this effect is linked to scene perception, via a common bias in the image statistics of scenes. This statistical bias for cardinal orientations is found in many "carpentered environments" such as buildings and indoor scenes, and some natural scenes. In Experiment 1, we confirmed the presence of a perceptual oblique effect in a specific set of scene stimuli. Using those scenes, we found that a well known "scene-selective" visual cortical area (the parahippocampal place area; PPA) showed distinctively higher functional magnetic resonance imaging (fMRI) activity to cardinal versus oblique orientations. This fMRI-based oblique effect was not observed in other cortical areas (including scene-selective areas transverse occipital sulcus and retrosplenial cortex), although all three scene-selective areas showed the expected inversion effect to scenes. Experiments 2 and 3 tested for an analogous selectivity for cardinal orientations using computer-generated arrays of simple squares and line segments, respectively. The results confirmed the preference for cardinal orientations in PPA, thus demonstrating that the oblique effect can also be produced in PPA by simple geometrical images, with statistics similar to those in scenes. Thus, PPA shows distinctive fMRI selectivity for cardinal orientations across a broad range of stimuli, which may reflect a perceptual oblique effect.

  15. [Study on the modeling of earth-atmosphere coupling over rugged scenes for hyperspectral remote sensing].

    PubMed

    Zhao, Hui-Jie; Jiang, Cheng; Jia, Guo-Rui

    2014-01-01

    Adjacency effects may introduce errors in the quantitative applications of hyperspectral remote sensing, of which the significant item is the earth-atmosphere coupling radiance. However, the surrounding relief and shadow induce strong changes in hyperspectral images acquired from rugged terrain, which is not accurate to describe the spectral characteristics. Furthermore, the radiative coupling process between the earth and the atmosphere is more complex over the rugged scenes. In order to meet the requirements of real-time processing in data simulation, an equivalent reflectance of background was developed by taking into account the topography and the geometry between surroundings and targets based on the radiative transfer process. The contributions of the coupling to the signal at sensor level were then evaluated. This approach was integrated to the sensor-level radiance simulation model and then validated through simulating a set of actual radiance data. The results show that the visual effect of simulated images is consistent with that of observed images. It was also shown that the spectral similarity is improved over rugged scenes. In addition, the model precision is maintained at the same level over flat scenes. PMID:24783559

  17. Sticky-Note Murals

    ERIC Educational Resources Information Center

    Sands, Ian

    2011-01-01

    In this article, the author describes a sticky-note mural project that originated from his desire to incorporate contemporary materials into his assignments as well as to inspire collaboration between students. The process takes much more than sticking sticky notes to the wall. It takes critical thinking skills and teamwork to design and complete…

  18. On that Note...

    ERIC Educational Resources Information Center

    Stein, Harry

    1988-01-01

    Provides suggestions for note-taking from books, lectures, visual presentations, and laboratory experiments to enhance student knowledge, memory, and length of attention span during instruction. Describes topical and structural outlines, visual mapping, charting, three-column note-taking, and concept mapping. Benefits and application of…

  19. Infection prevention in NOTES.

    PubMed

    Kantsevoy, Sergey V

    2008-04-01

    Prevention of infection during natural orifice translumenal endoscopic surgery (NOTES) was identified as one of the most important challenges for translumenal surgery. Does infection prevention during NOTES warrant such attention? This article summarizes the accumulated data about septic complications during translumenal surgery.

  20. High-fidelity real-time maritime scene rendering

    NASA Astrophysics Data System (ADS)

    Shyu, Hawjye; Taczak, Thomas M.; Cox, Kevin; Gover, Robert; Maraviglia, Carlos; Cahill, Colin

    2011-06-01

    The ability to simulate authentic engagements using real-world hardware is an increasingly important tool. For rendering maritime environments, scene generators must be capable of rendering radiometrically accurate scenes with correct temporal and spatial characteristics. When the simulation is used as input to real-world hardware or human observers, the scene generator must operate in real-time. This paper introduces a novel, real-time scene generation capability for rendering radiometrically accurate scenes of backgrounds and targets in maritime environments. The new model is an optimized and parallelized version of the US Navy CRUISE_Missiles rendering engine. It was designed to accept environmental descriptions and engagement geometry data from external sources, render a scene, transform the radiometric scene using the electro-optical response functions of a sensor under test, and output the resulting signal to real-world hardware. This paper reviews components of the scene rendering algorithm, and details the modifications required to run this code in real-time. A description of the simulation architecture and interfaces to external hardware and models is presented. Performance assessments of the frame rate and radiometric accuracy of the new code are summarized. This work was completed in FY10 under Office of Secretary of Defense (OSD) Central Test and Evaluation Investment Program (CTEIP) funding and will undergo a validation process in FY11.
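    The step of transforming a radiometric scene through a sensor's electro-optical response can be sketched as follows. Everything here is a hypothetical toy, not CRUISE_Missiles internals: the band responsivities, gain, offset, and 12-bit saturation are all assumptions for illustration:

```python
# Sketch: map per-pixel, per-band radiances through a sensor's spectral
# response, then to digital counts via a gain/offset model with saturation.

def apply_sensor_response(scene, response, gain=100.0, offset=5.0, max_count=4095):
    """scene: 2D list of pixels, each a list of band radiances;
    response: per-band relative responsivity (same length as a pixel's list)."""
    out = []
    for row in scene:
        out_row = []
        for bands in row:
            signal = sum(r * L for r, L in zip(response, bands))
            counts = min(max_count, int(offset + gain * signal))  # 12-bit clip
            out_row.append(counts)
        out.append(out_row)
    return out

# Toy scene: one row of two pixels, three spectral bands per pixel
scene = [[[1.0, 2.0, 0.5], [10.0, 12.0, 8.0]]]
response = [0.2, 1.0, 0.4]  # sensor most responsive in the middle band
counts = apply_sensor_response(scene, response)  # -> [[245, 1725]]
```

    A real-time implementation would vectorize this per frame, but the radiometric chain (scene radiance, response weighting, count conversion) is the same.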

  1. Dynamically Reconfigurable Multiprocessor System For Scene Segmentation In Histopathology

    NASA Astrophysics Data System (ADS)

    Shoemaker, Richard L.; Stucky, Oliver; Maenner, Reinhard; Thompson, Deborah B.; Griswold, W. G.; Bartels, Peter H.

    1989-06-01

    The Heidelberg Polyp multiprocessor and its application to scene segmentation problems in histopathology is discussed, including ways in which the architecture can be utilized to support expert system-guided scene segmentation software, the system's current performance, and some major improvements currently being made to the system.

  2. Binding actions and scenes in visual long-term memory.

    PubMed

    Urgolites, Zhisen Jiang; Wood, Justin N

    2013-12-01

    How does visual long-term memory store representations of different entities (e.g., objects, actions, and scenes) that are present in the same visual event? Are the different entities stored as an integrated representation in memory, or are they stored separately? To address this question, we asked observers to view a large number of events; in each event, an action was performed within a scene. Afterward, the participants were shown pairs of action-scene sets and indicated which of the two they had seen. When the task required recognizing the individual actions and scenes, performance was high (80%). Conversely, when the task required remembering which actions had occurred within which scenes, performance was significantly lower (59%). We observed this dissociation between memory for individual entities and memory for entity bindings across multiple testing conditions and presentation durations. These experiments indicate that visual long-term memory stores information about actions and information about scenes separately from one another, even when an action and scene were observed together in the same visual event. These findings also highlight an important limitation of human memory: Situations that require remembering actions and scenes as integrated events (e.g., eyewitness testimony) may be particularly vulnerable to memory errors. PMID:23653419

  3. Intrinsic Frames of Reference and Egocentric Viewpoints in Scene Recognition

    ERIC Educational Resources Information Center

    Mou, Weimin; Fan, Yanli; McNamara, Timothy P.; Owen, Charles B.

    2008-01-01

    Three experiments investigated the roles of intrinsic directions of a scene and observer's viewing direction in recognizing the scene. Participants learned the locations of seven objects along an intrinsic direction that was different from their viewing direction and then recognized spatial arrangements of three or six of these objects from…

  4. Scenes: Social Context in an Age of Contingency

    ERIC Educational Resources Information Center

    Silver, Daniel; Clark, Terry Nichols; Yanez, Clemente Jesus Navarro

    2010-01-01

    This article builds on an important but underdeveloped social science concept--the "scene" as a cluster of urban amenities--to contribute to social science theory and subspecialties such as urban and rural, class, race and gender studies. Scenes grow more important in less industrial, more expressively-oriented and contingent societies where…

  5. Simulation of partially obscured scenes using the radiosity method

    SciTech Connect

    Gerstl, S.A.W.; Borel, C.C.

    1990-01-01

    Using the extended radiosity method, or zonal method, realistic synthetic images are constructed of visual scenes in the visible and infrared containing radiatively participating media such as smoke, fog, and clouds. Computational methods will be discussed, as well as the rendering of various scenes using computer graphics methods.

  6. CRISP: A Computational Model of Fixation Durations in Scene Viewing

    ERIC Educational Resources Information Center

    Nuthmann, Antje; Smith, Tim J.; Engbert, Ralf; Henderson, John M.

    2010-01-01

    Eye-movement control during scene viewing can be represented as a series of individual decisions about where and when to move the eyes. While substantial behavioral and computational research has been devoted to investigating the placement of fixations in scenes, relatively little is known about the mechanisms that control fixation durations.…

  7. Detecting and representing predictable structure during auditory scene analysis.

    PubMed

    Sohoglu, Ediz; Chait, Maria

    2016-01-01

    We use psychophysics and MEG to test how sensitivity to input statistics facilitates auditory scene analysis (ASA). Human subjects listened to 'scenes' comprised of concurrent tone-pip streams (sources). On occasional trials a new source appeared partway through. Listeners were more accurate and quicker to detect source appearance in scenes comprised of temporally regular (REG), rather than random (RAND), sources. MEG in passive listeners and in those actively detecting appearance events revealed increased sustained activity in auditory and parietal cortex in REG relative to RAND scenes, emerging ~400 ms after scene onset. Over and above this, appearance in REG scenes was associated with increased responses relative to RAND scenes. The effect of temporal structure on appearance-evoked responses was delayed when listeners were focused on the scenes relative to when listening passively, consistent with the notion that attention reduces 'surprise'. Overall, the results implicate a mechanism that tracks the predictability of multiple concurrent sources to facilitate active and passive ASA. PMID:27602577

  8. Parametric Modeling of Visual Search Efficiency in Real Scenes

    PubMed Central

    Zhang, Xing; Li, Qingquan; Zou, Qin; Fang, Zhixiang; Zhou, Baoding

    2015-01-01

    How should the efficiency of searching for real objects in real scenes be measured? Traditionally, when searching for artificial targets, e.g., letters or rectangles, among distractors, efficiency is measured by a reaction time (RT) × Set Size function. However, it is not clear whether the set size of real scenes is as effective a parameter for measuring search efficiency as the set size of artificial scenes. The present study investigated search efficiency in real scenes based on a combination of low-level features, e.g., visible size and target-flanker separation factors, and high-level features, e.g., category effect and target template. Visible size refers to the pixel number of visible parts of an object in a scene, whereas separation is defined as the sum of the flank distances from a target to the nearest distractors. During the experiment, observers searched for targets in various urban scenes, using pictures as the target templates. The results indicated that the effect of the set size in real scenes decreased according to the variances of other factors, e.g., visible size and separation. Increasing visible size and separation factors increased search efficiency. Based on these results, an RT × Visible Size × Separation function was proposed. These results suggest that the proposed function is a practicable predictor of search efficiency in real scenes. PMID:26030908
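    The proposed RT × Visible Size × Separation function suggests fitting reaction time as a parametric function of the two scene factors. As a hedged sketch (the linear form and all coefficients below are illustrative, not the paper's fitted model), an ordinary least-squares fit could look like this:

```python
import random

def fit_linear(xs, ys):
    """Ordinary least squares for y ~ b0 + b1*x1 + b2*x2 via normal equations."""
    rows = [[1.0, x1, x2] for (x1, x2) in xs]  # design matrix with intercept
    n = 3
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting on the 3x3 normal equations.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, n):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, n):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0] * n
    for r in reversed(range(n)):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c]
                                for c in range(r + 1, n))) / xtx[r][r]
    return beta

# Synthetic data: RT (ms) falls as visible size (pixels) and separation grow.
random.seed(0)
data = [(random.uniform(50, 500), random.uniform(1, 40)) for _ in range(60)]
rts = [900.0 - 0.8 * size - 6.0 * sep for size, sep in data]
b0, b_size, b_sep = fit_linear(data, rts)  # recovers 900, -0.8, -6.0
```

    Negative fitted coefficients would correspond to the paper's finding that larger visible size and larger target-flanker separation both make search more efficient.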

  9. Being There: (Re)Making the Assessment Scene

    ERIC Educational Resources Information Center

    Gallagher, Chris W.

    2011-01-01

    I use Burkean analysis to show how neoliberalism undermines faculty assessment expertise and underwrites testing industry expertise in the current assessment scene. Contending that we cannot extricate ourselves from our limited agency in this scene until we abandon the familiar "stakeholder" theory of power, I propose a rewriting of the assessment…

  10. Mental Layout Extrapolations Prime Spatial Processing of Scenes

    ERIC Educational Resources Information Center

    Gottesman, Carmela V.

    2011-01-01

    Four experiments examined whether scene processing is facilitated by layout representation, including layout that was not perceived but could be predicted based on a previous partial view (boundary extension). In a priming paradigm (after Sanocki, 2003), participants judged objects' distances in photographs. In Experiment 1, full scenes (target),…

  11. The Influence of Color on the Perception of Scene Gist

    ERIC Educational Resources Information Center

    Castelhano, Monica S.; Henderson, John M.

    2008-01-01

    In 3 experiments the authors used a new contextual bias paradigm to explore how quickly information is extracted from a scene to activate gist, whether color contributes to this activation, and how color contributes, if it does. Participants were shown a brief presentation of a scene followed by the name of a target object. The target object could…

  12. Partial scene reconstruction using Time-of-Flight imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Yuchen; Xiong, Hongkai

    2014-11-01

    This paper is devoted to generating the coordinates of partial 3D points in scene reconstruction via time-of-flight (ToF) images. Assuming the camera does not move, only the coordinates of the points in images are accessible. The exposure time is two trillionths of a second, and the synthetic visualization shows the light propagating at half a trillion frames per second. In global light transport, direct components signify that the light is emitted from a light point and reflected from a scene point only once. Considering that the camera and the source light point can be regarded as the two foci of an ellipsoid with a constant focal sum at a given time, we take into account two constraints: (1) the measured distance is the sum of the distances the light travels between the two foci and the scene point; and (2) the focus of the camera, the scene point, and the corresponding image point are collinear. It is worth mentioning that calibration is necessary to obtain the coordinates of the light point. The calibration can be done in two steps: (1) choose a scene that contains some pairs of points at the same depth, whose positions are known; and (2) substitute these positions into the two constraints to obtain the coordinates of the light point. After calculating the coordinates of the scene points, MeshLab is used to build the partial scene model. The proposed approach is favorable for estimating the exact distance between two scene points.
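    Constraint (1) places the scene point on an ellipsoid with the source and camera as foci; constraint (2) restricts it to the camera ray for that pixel. Combining the two gives a closed-form distance along the ray: with camera C, source S, unit ray direction u, and measured path length d, squaring |S - P| = d - t for P = C + t·u yields t = (d² - |S - C|²) / (2(d - u·(S - C))). A minimal sketch (the coordinates are made up for the check):

```python
import math

def recover_point(camera, source, ray_dir, path_len):
    """Recover the scene point P on the camera ray with |S-P| + |P-C| = path_len.
    camera, source: 3D points; ray_dir: unit vector from the camera."""
    sc = [s - c for s, c in zip(source, camera)]   # vector C -> S
    sc2 = sum(v * v for v in sc)                   # |S - C|^2
    dot = sum(u * v for u, v in zip(ray_dir, sc))  # u . (S - C)
    t = (path_len ** 2 - sc2) / (2.0 * (path_len - dot))
    return [c + t * u for c, u in zip(camera, ray_dir)]

# Toy check: place a point at known coordinates, compute the total travel
# distance it would produce, and confirm the formula recovers it.
C = [0.0, 0.0, 0.0]
S = [1.0, 0.0, 0.0]
P_true = [0.6, 0.8, 2.0]
norm = math.sqrt(sum(v * v for v in P_true))
u = [v / norm for v in P_true]                     # camera ray through P_true
d = math.dist(S, P_true) + math.dist(C, P_true)    # constraint (1)
P = recover_point(C, S, u, d)
```

    The calibration step the authors describe amounts to running the same two constraints in reverse: with known scene points and measured path lengths, solve for the source coordinates.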

  13. Emotional Scene Content Drives the Saccade Generation System Reflexively

    ERIC Educational Resources Information Center

    Nummenmaa, Lauri; Hyona, Jukka; Calvo, Manuel G.

    2009-01-01

    The authors assessed whether parafoveal perception of emotional content influences saccade programming. In Experiment 1, paired emotional and neutral scenes were presented to parafoveal vision. Participants performed voluntary saccades toward either of the scenes according to an imperative signal (color cue). Saccadic reaction times were faster…

  14. Joint 3d Estimation of Vehicles and Scene Flow

    NASA Astrophysics Data System (ADS)

    Menze, M.; Heipke, C.; Geiger, A.

    2015-08-01

    While much progress has been made in recent years, imaging conditions in natural outdoor environments are still very challenging for current reconstruction and recognition methods. In this paper, we propose a novel unified approach which reasons jointly about 3D scene flow as well as the pose, shape and motion of vehicles in the scene. Towards this goal, we incorporate a deformable CAD model into a slanted-plane conditional random field for scene flow estimation and enforce shape consistency between the rendered 3D models and the parameters of all superpixels in the image. The association of superpixels to objects is established by an index variable which implicitly enables model selection. We evaluate our approach on the challenging KITTI scene flow dataset in terms of object and scene flow estimation. Our results provide a proof of concept and demonstrate the usefulness of our method.

  15. Improving text recognition by distinguishing scene and overlay text

    NASA Astrophysics Data System (ADS)

    Quehl, Bernhard; Yang, Haojin; Sack, Harald

    2015-02-01

    Video texts are closely related to the content of a video. They provide a valuable source for indexing and interpretation of video data. Text detection and recognition tasks in images or videos typically distinguish between overlay and scene text. Overlay text is artificially superimposed on the image at the time of editing, whereas scene text is text captured by the recording system. Typically, OCR systems are specialized for one kind of text type. However, in video images both types of text can be found. In this paper, we propose a method to automatically distinguish between overlay and scene text in order to dynamically control and optimize the post-processing steps following text detection. Based on a combination of features, a Support Vector Machine (SVM) is trained to classify scene and overlay text. We show how this distinction between overlay and scene text improves the word recognition rate. The accuracy of the proposed method has been evaluated using publicly available test data sets.
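    A linear SVM of the kind the paper trains can be sketched with a Pegasos-style sub-gradient solver on toy features. The two features and their distributions below are invented for illustration; the paper's actual feature combination is not reproduced here:

```python
import random

def train_linear_svm(xs, ys, lam=0.01, epochs=200):
    """Pegasos-style sub-gradient descent for a linear SVM (labels +/-1)."""
    w, b, t = [0.0, 0.0], 0.0, 0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            w = [(1 - eta * lam) * wi for wi in w]  # regularisation shrink
            if margin < 1:                          # step toward margin violators
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                b += eta * y
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

# Hypothetical per-text-box features: (colour uniformity, edge sharpness).
# Overlay text (+1) tends to score high on both; scene text (-1) is lower.
random.seed(1)
overlay = [(random.uniform(0.7, 1.0), random.uniform(0.7, 1.0)) for _ in range(30)]
scene_t = [(random.uniform(0.0, 0.4), random.uniform(0.0, 0.4)) for _ in range(30)]
xs, ys = overlay + scene_t, [1] * 30 + [-1] * 30
w, b = train_linear_svm(xs, ys)
acc = sum(predict(w, b, x) == y for x, y in zip(xs, ys)) / len(xs)
```

    In the paper's pipeline the classifier's output would then route each detected text box to the post-processing chain appropriate for its type.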

  16. Implementation of jump-diffusion algorithms for understanding FLIR scenes

    NASA Astrophysics Data System (ADS)

    Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.

    1995-07-01

    Our pattern-theoretic approach to the automated understanding of forward-looking infrared (FLIR) images brings the traditionally separate endeavors of detection, tracking, and recognition together into a unified jump-diffusion process. New objects are detected and object types are recognized through discrete jump moves. Between jumps, the location and orientation of objects are estimated via continuous diffusions. A hypothesized scene, simulated from the emissive characteristics of the hypothesized scene elements, is compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. The jump-diffusion process empirically generates the posterior distribution. Both the diffusion and jump operations involve the simulation of a scene produced by a hypothesized configuration. Scene simulation is most effectively accomplished by pipelined rendering engines such as those from Silicon Graphics. We demonstrate the execution of our algorithm on a Silicon Graphics Onyx/RealityEngine.

  17. Investigation of scene identification algorithms for radiation budget measurements

    NASA Technical Reports Server (NTRS)

    Diekmann, F. J.

    1986-01-01

    The computation of the Earth radiation budget from satellite measurements requires identification of the scene in order to select spectral factors and bidirectional models. A scene identification procedure is developed for AVHRR SW and LW data by using two radiative transfer models. The AVHRR GAC pixels are then attached to corresponding ERBE pixels and the results are sorted into scene identification probability matrices. These scene intercomparisons show that, at high cloud amounts over ocean, the ERBE results generally tend to underestimate cloudiness relative to the AVHRR results, e.g., mostly cloudy instead of overcast, or partly cloudy instead of mostly cloudy. Reasons for this are explained. Preliminary estimates of the errors in exitances due to scene misidentification demonstrate their high dependency on the probability matrices. While the longwave error can generally be neglected, the shortwave deviations have reached maximum values of more than 12% of the respective exitances.
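    A scene identification probability matrix of the kind described can be sketched as a row-normalised confusion matrix over paired pixel labels. The five-pixel example below is invented for illustration; only the four ERBE scene classes are taken from the abstract:

```python
def probability_matrix(reference, test, classes):
    """Row-normalised scene identification matrix: entry [i][j] is the
    probability that a pixel labelled classes[i] by the reference method
    is labelled classes[j] by the method under comparison."""
    idx = {c: k for k, c in enumerate(classes)}
    counts = [[0] * len(classes) for _ in classes]
    for r, t in zip(reference, test):
        counts[idx[r]][idx[t]] += 1
    probs = []
    for row in counts:
        n = sum(row)
        probs.append([c / n if n else 0.0 for c in row])
    return probs

# Hypothetical paired labels for five collocated pixels (AVHRR as reference).
classes = ["clear", "partly cloudy", "mostly cloudy", "overcast"]
avhrr = ["overcast", "overcast", "mostly cloudy", "partly cloudy", "clear"]
erbe  = ["mostly cloudy", "overcast", "partly cloudy", "partly cloudy", "clear"]
m = probability_matrix(avhrr, erbe, classes)
```

    Off-diagonal mass below the diagonal in such a matrix is exactly the "mostly cloudy instead of overcast" underestimation tendency the abstract reports, and the exitance error estimates inherit their sensitivity from these row probabilities.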

  18. Recognition of 3-D Scene with Partially Occluded Objects

    NASA Astrophysics Data System (ADS)

    Lu, Siwei; Wong, Andrew K. C.

    1987-03-01

    This paper presents a robot vision system which is capable of recognizing objects in a 3-D scene and interpreting their spatial relation even though some objects in the scene may be partially occluded by other objects. An algorithm is developed to transform the geometric information from the range data into an attributed hypergraph representation (AHR). A hypergraph monomorphism algorithm is then used to compare the AHR of objects in the scene with a set of complete AHR's of prototypes. The capability of identifying connected components and interpreting various types of edges in the 3-D scene enables us to distinguish objects which are partially blocking each other in the scene. Using structural information stored in the primitive area graph, a heuristic hypergraph monomorphism algorithm provides an effective way for recognizing, locating, and interpreting partially occluded objects in the range image.

  19. Does object view influence the scene consistency effect?

    PubMed

    Sastyin, Gergo; Niimi, Ryosuke; Yokosawa, Kazuhiko

    2015-04-01

    Traditional research on the scene consistency effect only used clearly recognizable object stimuli to show mutually interactive context effects for both the object and background components on scene perception (Davenport & Potter in Psychological Science, 15, 559-564, 2004). However, in real environments, objects are viewed from multiple viewpoints, including an accidental, hard-to-recognize one. When the observers named target objects in scenes (Experiments 1a and 1b, object recognition task), we replicated the scene consistency effect (i.e., there was higher accuracy for the objects with consistent backgrounds). However, there was a significant interaction effect between consistency and object viewpoint, which indicated that the scene consistency effect was more important for identifying objects in the accidental view condition than in the canonical view condition. Therefore, the object recognition system may rely more on the scene context when the object is difficult to recognize. In Experiment 2, the observers identified the background (background recognition task) while the scene consistency and object views were manipulated. The results showed that object viewpoint had no effect, while the scene consistency effect was observed. More specifically, the canonical and accidental views both equally provided contextual information for scene perception. These findings suggested that the mechanism for conscious recognition of objects could be dissociated from the mechanism for visual analysis of object images that were part of a scene. The "context" that the object images provided may have been derived from its view-invariant, relatively low-level visual features (e.g., color), rather than its semantic information. PMID:25522833

  20. Hysteresis in the dynamic perception of scenes and objects.

    PubMed

    Poltoratski, Sonia; Tong, Frank

    2014-10-01

    Scenes and objects are effortlessly processed and integrated by the human visual system. Given the distinct neural and behavioral substrates of scene and object processing, it is likely that individuals sometimes preferentially rely on one process or the other when viewing canonical "scene" or "object" stimuli. This would allow the visual system to maximize the specific benefits of these 2 types of processing. It is less obvious which of these modes of perception would be invoked during naturalistic visual transition between a focused view of a single object and an expansive view of an entire scene, particularly at intermediate views that may not be assigned readily to either stimulus category. In the current study, we asked observers to report their online perception of such dynamic image sequences, which zoomed and panned between a canonical view of a single object and an entire scene. We found a large and consistent effect of prior perception, or hysteresis, on the classification of the sequence: observers classified the sequence as an object for several seconds longer if the trial started at the object view and zoomed out, whereas scenes were perceived for longer on trials beginning with a scene view. This hysteresis effect resisted several manipulations of the movie stimulus and of the task performed, but hinged on the perceptual history built by unidirectional progression through the image sequence. Multiple experiments confirmed that this hysteresis effect was not purely decisional and was more prominent for transitions between corresponding objects and scenes than between other high-level stimulus classes. This finding suggests that the competitive mechanisms underlying hysteresis may be especially prominent in the perception of objects and scenes. We propose that hysteresis aids in disambiguating perception during naturalistic visual transitions, which may facilitate a dynamic balance between scene and object processing to enhance processing efficiency.

  1. One high performance technology of infrared scene projection

    NASA Astrophysics Data System (ADS)

    Wang, Hong-jie; Qian, Li-xun; Cao, Chun; Li, Zhuo

    2014-11-01

    Infrared scene generation technologies are used to simulate the infrared radiation characteristics of targets and backgrounds in the laboratory. They provide synthetic infrared imagery for thermal imager test and evaluation in infrared imaging systems. At present, many infrared scene generation technologies are in wide use and have produced substantial results. In this paper, we design and manufacture one high-performance IR scene generation technology whose key component is a whole thin-film transducer fabricated using micro-electro-mechanical systems (MEMS) technology. The specific MEMS process parameters were obtained from a large number of experiments. The properties of the infrared scene generation chip are investigated experimentally. It achieves high resolution, a high frame rate, and reliable performance, which can meet the requirements of most simulation systems. The radiation coefficient of the thin-film transducer is measured to be 0.86. The frame rate is 160 Hz. The emission spectrum spans the infrared band from 2 μm to 12 μm. Illuminated by visible light of different intensities, the equivalent blackbody temperature of the transducer can be varied in the range of 290 K to 440 K. The spatial resolution is more than 256×256. The geometric distortion and the uniformity of the generated infrared scene are both within 5 percent. The infrared scene generator based on this chip comprises three parts: the visual image projector, the visual-to-thermal transducer, and the infrared scene projector. The experimental results show that this thin-film infrared scene generation chip meets the requirements of most hardware-in-the-loop scene simulation systems for IR sensor testing.
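The chip's 290-440 K equivalent-blackbody range can be put in radiometric terms with a back-of-the-envelope Planck calculation. This sketch, not taken from the paper, assumes graybody emission at the reported 0.86 radiation coefficient over the stated 2-12 μm band.

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(wavelength_m, temp_k):
    """Blackbody spectral radiance, W / (m^2 * sr * m)."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    b = math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0
    return a / b

def band_radiance(temp_k, lo_um=2.0, hi_um=12.0, steps=1000, emissivity=0.86):
    """In-band graybody radiance over 2-12 um, trapezoid-rule integration."""
    lo, hi = lo_um * 1e-6, hi_um * 1e-6
    dw = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        w = lo + i * dw
        weight = 0.5 if i in (0, steps) else 1.0
        weight_radiance = weight * planck(w, temp_k)
        total += weight_radiance
    return emissivity * total * dw

low = band_radiance(290.0)    # coolest reported equivalent temperature
high = band_radiance(440.0)   # hottest reported equivalent temperature
```

The 440 K setting delivers several times the in-band radiance of the 290 K setting, which is what the transducer's visible-light drive must span.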

  2. The actual goals of geoethics

    NASA Astrophysics Data System (ADS)

    Nemec, Vaclav

    2014-05-01

    The most current goals of geoethics were formulated as results of the International Conference on Geoethics (October 2013), held at the birthplace of geoethics, Pribram (Czech Republic): In the sphere of education and public enlightenment, an appropriate minimum of Earth-science know-how should be intensively promoted, together with the cultivation of an ethical way of thinking and acting for the sustainable well-being of society. The current activities of the Intergovernmental Panel on Climate Change are not sustainable given the existing knowledge of the Earth sciences (as presented in the results of the 33rd and 34th International Geological Congresses); this knowledge should be incorporated into any further work of the IPCC. In the sphere of legislation, broad international co-operation is needed on the following steps: to re-formulate the term "false alarm" and its legal consequences; to demand consistently the needed evaluation of existing risks; and to solve problems of the rights of individuals and minorities in cases of the optimum use of mineral resources and of the optimum protection of the local population against emergency dangers and disasters. The common good (well-being) must be considered the priority when solving ethical dilemmas, and the precautionary principle should be applied in any decision-making process. Earth scientists presenting their expert opinions are not exempted from civil, administrative, or even criminal liability; details must be established by national law and jurisprudence. The well-known case of the L'Aquila earthquake (2009) should serve as a serious warning because of the proven misuse of geoethics to protect the top Italian seismologists held responsible, and sentenced, for the inadequate and superficial conduct that cost many human lives. Another recent scandal, the Himalayan fossil fraud, will also be documented.
Support is needed for any effort to analyze and to disclose the problems of the deformation of the contemporary

  3. Just Another Social Scene: Evidence for Decreased Attention to Negative Social Scenes in High-Functioning Autism

    ERIC Educational Resources Information Center

    Santos, Andreia; Chaminade, Thierry; Da Fonseca, David; Silva, Catarina; Rosset, Delphine; Deruelle, Christine

    2012-01-01

    The adaptive threat-detection advantage takes the form of a preferential orienting of attention to threatening scenes. In this study, we compared attention to social scenes in 15 high-functioning individuals with autism (ASD) and matched typically developing (TD) individuals. Eye-tracking was recorded while participants were presented with pairs…

  4. Memory efficient atmospheric effects modeling for infrared scene generators

    NASA Astrophysics Data System (ADS)

    Kavak, Çaǧlar; Özsaraç, Seçkin

    2015-05-01

    The infrared (IR) energy radiated from any source passes through the atmosphere before reaching the sensor. As a result, the total signature captured by the IR sensor is significantly modified by atmospheric effects. The dominant physical quantities constituting these atmospheric effects are the atmospheric transmittance and the atmospheric path radiance: the incoming IR radiation is attenuated by the transmittance, and the path radiance is added on top of the attenuated radiation. In IR scene simulations, OpenGL is widely used for rendering. In the literature there are studies that model the atmospheric effects in an IR band using OpenGL's exponential fog model, as suggested by Beer's law. In the standard OpenGL pipeline, this fog model needs single equivalent OpenGL variables for the transmittance and path radiance, which actually depend both on the distance between the source and the sensor and on the wavelength of interest. In conditions where the range dependency cannot be modeled as an exponential function, it is therefore not accurate to replace the atmospheric quantities with a single parameter. The introduction of the OpenGL Shading Language (GLSL) has enabled developers to use the GPU more flexibly. In this paper, a novel method is proposed for atmospheric effects modeling using least squares estimation with polynomial fitting, implemented in programmable OpenGL shader programs built with GLSL. In this context, a radiative transfer model code is used to obtain the transmittance and path radiance data. Then, polynomial fits are computed for the range dependency of these variables. Hence, the atmospheric effects model data that must be uploaded to GPU memory is significantly reduced. Moreover, the fitting error is negligible as long as narrow IR bands are used.
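The least-squares polynomial fit of a range-dependent atmospheric quantity can be sketched as follows. The transmittance samples below are a synthetic attenuation curve standing in for radiative-transfer-code output, and the fit is a plain normal-equations solve on the CPU rather than the paper's GLSL shader path.

```python
import math

# Synthetic transmittance vs. range samples (toy stand-in for model output).
ranges_km = [0.5 * i for i in range(1, 21)]                       # 0.5 .. 10 km
tau = [math.exp(-0.08 * r) * (1 - 0.01 * r) for r in ranges_km]

def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via normal equations + Gaussian elimination."""
    n = degree + 1
    # Normal equations A c = b, with A[i][j] = sum of x^(i+j).
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                      # forward elimination w/ pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        s = sum(A[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = (b[r] - s) / A[r][r]
    return coeffs                             # lowest-order coefficient first

coeffs = polyfit(ranges_km, tau, 3)

def eval_poly(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

max_err = max(abs(eval_poly(coeffs, r) - t) for r, t in zip(ranges_km, tau))
```

Only the four coefficients need to reach the GPU, instead of a per-range lookup table, which is the memory saving the paper targets.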

  5. LUVOIR Tech Notes

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew R.; Shaklan, Stuart; Roberge, Aki; Rioux, Norman; Feinberg, Lee; Werner, Michael; Rauscher, Bernard; Mandell, Avi; France, Kevin; Schiminovich, David

    2016-01-01

    We present nine "tech notes" prepared by the Large UV/Optical/Infrared (LUVOIR) Science and Technology Definition Team (STDT), Study Office, and Technology Working Group. These tech notes highlight technical challenges that represent boundaries in the trade space for developing the LUVOIR architecture and that may impact the science objectives being developed by the STDT. They are intended as high-level discussions of the technical challenges and will serve as starting points for more in-depth analysis as the LUVOIR study progresses.

  6. NOTES: the future.

    PubMed

    Giday, Samuel A; Magno, Priscilla; Kalloo, Anthony N

    2008-04-01

    The concept of natural orifice translumenal endoscopic surgery (NOTES) has grown in acceptance since the time of its introduction in 2000. Developments in techniques of peritoneal access and closure, surgical techniques, and equipment modification have already been published and intensive research is ongoing. Current and future endoscopists will reap the benefit of this research because many techniques and devices that are developed for NOTES will enhance the ability to perform luminal intervention, including polypectomy, endoluminal hemostasis, and submucosal resection. The authors attempt to predict the future of NOTES by describing potential applications for certain clinical scenarios and conditions.

  7. Fractal-based description of natural scenes.

    PubMed

    Pentland, A P

    1984-06-01

    This paper addresses the problems of 1) representing natural shapes such as mountains, trees, and clouds, and 2) computing their description from image data. To solve these problems, we must be able to relate natural surfaces to their images; this requires a good model of natural surface shapes. Fractal functions are a good choice for modeling 3-D natural surfaces because 1) many physical processes produce a fractal surface shape, 2) fractals are widely used as a graphics tool for generating natural-looking shapes, and 3) a survey of natural imagery has shown that the 3-D fractal surface model, transformed by the image formation process, furnishes an accurate description of both textured and shaded image regions. The 3-D fractal model provides a characterization of 3-D surfaces and their images for which the appropriateness of the model is verifiable. Furthermore, this characterization is stable over transformations of scale and linear transforms of intensity. The 3-D fractal model has been successfully applied to the problems of 1) texture segmentation and classification, 2) estimation of 3-D shape information, and 3) distinguishing between perceptually "smooth" and perceptually "textured" surfaces in the scene.
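Fractal shape generation of the kind surveyed above is commonly demonstrated with midpoint displacement. This is a minimal 1-D terrain-profile sketch (the paper itself works with 3-D fractal surface models); the roughness value is an arbitrary illustration.

```python
import random

random.seed(42)

def midpoint_displacement(levels, roughness=0.6):
    """1-D fractal terrain profile by recursive midpoint displacement.

    roughness in (0, 1): each level's random offsets shrink by this factor,
    which controls how jagged (fractal) the resulting profile looks.
    """
    profile = [0.0, 0.0]          # flat segment between two fixed endpoints
    scale = 1.0
    for _ in range(levels):
        nxt = []
        for a, b in zip(profile, profile[1:]):
            mid = (a + b) / 2.0 + random.uniform(-scale, scale)
            nxt.extend([a, mid])  # keep left endpoint, insert displaced midpoint
        nxt.append(profile[-1])
        profile = nxt
        scale *= roughness        # finer scales get smaller displacements
    return profile

terrain = midpoint_displacement(8)   # 2^8 intervals -> 257 height samples
```

Each subdivision doubles the resolution while shrinking the displacement scale, the statistical self-similarity that makes such profiles read as natural ridgelines.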

  8. Scotopic hue percepts in natural scenes.

    PubMed

    Elliott, Sarah L; Cao, Dingcai

    2013-01-01

    Traditional trichromatic theories of color vision conclude that color perception is not possible under scotopic illumination in which only one type of photoreceptor, rods, is active. The current study demonstrates the existence of scotopic color perception and indicates that perceived hue is influenced by spatial context and top-down processes of color perception. Experiment 1 required observers to report the perceived hue in various natural scene images under purely rod-mediated vision. The results showed that when the test patch had low variation in the luminance distribution and was a decrement in luminance compared to the surrounding area, reddish or orangish percepts were more likely to be reported compared to all other percepts. In contrast, when the test patch had a high variation and was an increment in luminance, the probability of perceiving blue, green, or yellow hues increased. In addition, when observers had a strong, but singular, daylight hue association for the test patch, color percepts were reported more often and hues appeared more saturated compared to patches with no daylight hue association. This suggests that experience in daylight conditions modulates the bottom-up processing for rod-mediated color perception. In Experiment 2, observers reported changes in hue percepts for a test ring surrounded by inducing rings that varied in spatial context. In sum, the results challenge the classic view that rod vision is achromatic and suggest that scotopic hue perception is mediated by cortical mechanisms. PMID:24233245

  9. Adopting Abstract Images for Semantic Scene Understanding.

    PubMed

    Zitnick, C Lawrence; Vedantam, Ramakrishna; Parikh, Devi

    2016-04-01

    Relating visual information to its linguistic semantic meaning remains an open and challenging area of research. The semantic meaning of images depends on the presence of objects, their attributes and their relations to other objects. But precisely characterizing this dependence requires extracting complex visual information from an image, which is in general a difficult and yet unsolved problem. In this paper, we propose studying semantic information in abstract images created from collections of clip art. Abstract images provide several advantages over real images. They allow for the direct study of how to infer high-level semantic information, since they remove the reliance on noisy low-level object, attribute and relation detectors, or the tedious hand-labeling of real images. Importantly, abstract images also allow the ability to generate sets of semantically similar scenes. Finding analogous sets of real images that are semantically similar would be nearly impossible. We create 1,002 sets of 10 semantically similar abstract images with corresponding written descriptions. We thoroughly analyze this dataset to discover semantically important features, the relations of words to visual features and methods for measuring semantic similarity. Finally, we study the relation between the saliency and memorability of objects and their semantic importance.

  10. Crime scene investigation (as seen on TV).

    PubMed

    Durnal, Evan W

    2010-06-15

    A mysterious green ooze is injected into a brightly illuminated and humming machine; ten seconds later, a printout containing a complete biography of the substance is at the fingertips of an attractive young investigator who exclaims "we found it!" We have all seen this event occur countless times on any and all of the three CSI dramas, Cold Case, Crossing Jordan, and many more. With this new style of "infotainment" (Surette, 2007) comes an increasingly blurred line between the hard facts of reality and the soft, quick solutions of entertainment. With these advances in technology, how can crime rates be anything but plummeting as would-be criminals cringe at the idea of leaving the smallest speck of themselves at a crime scene? Surely there are very few serious crimes that go unpunished in today's world of high-tech, fast-paced gadgetry. Science and technology have come a great distance since Sir Arthur Conan Doyle first described the first famous forensic scientist (Sherlock Holmes), but they still have light-years to go.

  11. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Technical Reports Server (NTRS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.

  12. Detecting and representing predictable structure during auditory scene analysis

    PubMed Central

    Sohoglu, Ediz; Chait, Maria

    2016-01-01

    We use psychophysics and MEG to test how sensitivity to input statistics facilitates auditory scene analysis (ASA). Human subjects listened to ‘scenes’ comprised of concurrent tone-pip streams (sources). On occasional trials a new source appeared partway through. Listeners were more accurate and quicker to detect source appearance in scenes comprised of temporally regular (REG), rather than random (RAND), sources. MEG in passive listeners, and in those actively detecting appearance events, revealed increased sustained activity in auditory and parietal cortex in REG relative to RAND scenes, emerging ~400 ms after scene onset. Over and above this, appearance in REG scenes was associated with increased responses relative to RAND scenes. The effect of temporal structure on appearance-evoked responses was delayed when listeners were focused on the scenes relative to when listening passively, consistent with the notion that attention reduces ‘surprise’. Overall, the results implicate a mechanism that tracks the predictability of multiple concurrent sources to facilitate active and passive ASA. DOI: http://dx.doi.org/10.7554/eLife.19113.001 PMID:27602577

  13. Decoding Representations of Scenes in the Medial Temporal Lobes

    PubMed Central

    Bonnici, Heidi M; Kumaran, Dharshan; Chadwick, Martin J; Weiskopf, Nikolaus; Hassabis, Demis; Maguire, Eleanor A

    2012-01-01

    Recent theoretical perspectives have suggested that the function of the human hippocampus, like its rodent counterpart, may be best characterized in terms of its information processing capacities. In this study, we use a combination of high-resolution functional magnetic resonance imaging, multivariate pattern analysis, and a simple decision making task, to test specific hypotheses concerning the role of the medial temporal lobe (MTL) in scene processing. We observed that while information that enabled two highly similar scenes to be distinguished was widely distributed throughout the MTL, more distinct scene representations were present in the hippocampus, consistent with its role in performing pattern separation. As well as viewing the two similar scenes, during scanning participants also viewed morphed scenes that spanned a continuum between the original two scenes. We found that patterns of hippocampal activity during morph trials, even when perceptual inputs were held entirely constant (i.e., in 50% morph trials), showed a robust relationship with participants' choices in the decision task. Our findings provide evidence for a specific computational role for the hippocampus in sustaining detailed representations of complex scenes, and shed new light on how the information processing capacities of the hippocampus may influence the decision making process. © 2011 Wiley Periodicals, Inc. PMID:21656874

  14. Motion parallax links visual motion areas and scene regions.

    PubMed

    Schindler, Andreas; Bartels, Andreas

    2016-01-15

    When we move, the retinal velocities of objects in our surrounding differ according to their relative distances and give rise to a powerful three-dimensional visual cue referred to as motion parallax. Motion parallax allows us to infer our surrounding's 3D structure as well as self-motion based on 2D retinal information. However, the neural substrates mediating the link between visual motion and scene processing are largely unexplored. We used fMRI in human observers to study motion parallax by means of an ecologically relevant yet highly controlled stimulus that mimicked the observer's lateral motion past a depth-layered scene. We found parallax selective responses in parietal regions IPS3 and IPS4, and in a region lateral to scene selective occipital place area (OPA). The traditionally defined scene responsive regions OPA, the para-hippocampal place area (PPA) and the retrosplenial cortex (RSC) did not respond to parallax. During parallax processing, the occipital parallax selective region entertained highly specific functional connectivity with IPS3 and with scene selective PPA. These results establish a network linking dorsal motion and ventral scene processing regions specifically during parallax processing, which may underlie the brain's ability to derive 3D scene information from motion parallax. PMID:26515906

  15. Perceptual effects of scene context on object identification.

    PubMed

    De Graef, P; Christiaens, D; d'Ydewalle, G

    1990-01-01

    In a number of studies the context provided by a real-world scene has been claimed to have a mandatory, perceptual effect on the identification of individual objects in such a scene. This claim has provided a basis for challenging widely accepted data-driven models of visual perception in order to advocate alternative models with an outspoken top-down character. The present paper offers a review of the evidence to demonstrate that the observed scene-context effects may be the product of post-perceptual and task-dependent guessing strategies. A new research paradigm providing an on-line measure of genuine perceptual effects of context on object identification is proposed. First-fixation durations for objects incidentally fixated during the free exploration of real-world scenes are shown to increase when the objects are improbable in the scene or violate certain aspects of their typical spatial appearance in it. These effects of contextual violations are shown to emerge only at later stages of scene exploration, contrary to the notion of schema-driven scene perception effective from the very first scene fixation. In addition, evidence is reported in support of the existence of a facilitatory component in scene-context effects. This is taken to indicate that the context directly affects the ease of perceptual object processing and does not merely serve as a framework for checking the plausibility of the output of perceptual processes. Finally, our findings are situated against other contrasting results. Some future research questions are highlighted.

  16. Preference for luminance histogram regularities in natural scenes.

    PubMed

    Graham, Daniel; Schwarz, Bianca; Chatterjee, Anjan; Leder, Helmut

    2016-03-01

    Natural scene luminance distributions typically have positive skew, and for single objects, there is evidence that higher skew is a correlate (but not a guarantee) of glossiness. Skewness is also relevant to aesthetics: preference for glossy single objects (with high skew) has been shown even in infants, and skewness is a good predictor of fruit freshness. Given that primate vision appears to efficiently encode natural scene luminance variation, and given evidence that natural scene regularities may be a prerequisite for aesthetic perception in the spatial domain, here we ask whether humans in general prefer natural scenes with more positively skewed luminance distributions. If humans generally prefer images with the higher-order regularities typical of natural scenes and/or shiny objects, we would expect this to be the case. By manipulating luminance distribution skewness (holding mean and variance constant) for individual natural images, we show that in fact preference varies inversely with increasing positive skewness. This finding holds for: artistic landscape images and calibrated natural scenes; scenes with and without glossy surfaces; landscape scenes and close-up objects; and noise images with natural luminance histograms. Across conditions, humans prefer images with skew near zero over higher skew images, and they prefer skew lower than that of the unmodified scenes. These results suggest that humans prefer images with luminances that are distributed relatively evenly about the mean luminance, i.e., images with similar amounts of light and dark. We propose that our results reflect an efficient processing advantage of low-skew images over high-skew images, following evidence from prior brain imaging results.
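Manipulating luminance skewness while holding mean and variance constant, as in the experiments above, can be sketched with a power transform followed by renormalization. The exponentially distributed "image" below is synthetic, and this is a generic sketch rather than the authors' stimulus-generation procedure.

```python
import random

random.seed(1)

def moments(xs):
    """Population mean, standard deviation, and skewness of a sample."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    sd = var ** 0.5
    skew = sum(((x - mean) / sd) ** 3 for x in xs) / n
    return mean, sd, skew

def adjust_skew(luminances, exponent):
    """Warp normalized luminances with a power law to shift skewness,
    then restore the original mean and standard deviation."""
    m0, s0, _ = moments(luminances)
    lo, hi = min(luminances), max(luminances)
    unit = [(x - lo) / (hi - lo) for x in luminances]     # map to [0, 1]
    warped = [u ** exponent for u in unit]                # exponent < 1 compresses highlights
    m1, s1, _ = moments(warped)
    return [(w - m1) / s1 * s0 + m0 for w in warped]      # match mean and sd

# Toy "image": positively skewed luminances, like a natural-scene histogram.
image = [random.expovariate(1.0) for _ in range(10000)]
lowered = adjust_skew(image, exponent=0.5)                # compress bright tail

m_before, s_before, skew_before = moments(image)
m_after, s_after, skew_after = moments(lowered)
```

Because the final step is a linear rescale, mean and variance are preserved exactly while only the third moment moves, which is the control the preference experiments require.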

  17. Scene change detection for video retrieval on MPEG streams

    NASA Astrophysics Data System (ADS)

    Kang, Eung-Kwan; Kim, Sung-Joo; Jahng, SurngGabb; Song, Ho-Keun; Choi, Jong S.

    2000-05-01

    In this paper, we propose a new scene change detection (SCD) algorithm and a novel video-indexing scheme for fast content-based browsing and retrieval in video databases. We detect scene changes in the MPEG video sequence and extract key frames to represent the contents of a shot. We then perform video indexing by applying a rosette pattern to the extracted key frames, and retrieve them. Our SCD method outperforms conventional methods in detection performance. Moreover, by applying the rosette pattern for indexing, we can remarkably reduce the number of pixels required for indexing and retrieve video scenes effectively.
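A minimal histogram-difference cut detector, in the spirit of SCD though not the authors' MPEG-domain method, can be sketched as follows. The frames and threshold are invented for illustration.

```python
def histogram(frame, bins=8, max_val=255):
    """Coarse luminance histogram of a frame given as a flat pixel list."""
    h = [0] * bins
    for p in frame:
        h[min(p * bins // (max_val + 1), bins - 1)] += 1
    return h

def hist_distance(a, b):
    """Sum of absolute bin differences, normalized to [0, 1]."""
    n = sum(a)
    return sum(abs(x - y) for x, y in zip(a, b)) / (2 * n)

def detect_scene_changes(frames, threshold=0.3):
    """Return indices i where a cut occurs between frame i-1 and frame i."""
    cuts = []
    for i in range(1, len(frames)):
        d = hist_distance(histogram(frames[i - 1]), histogram(frames[i]))
        if d > threshold:
            cuts.append(i)
    return cuts

# Two "shots": a dark scene, then an abrupt cut to a bright scene.
dark = [20] * 64
bright = [230] * 64
frames = [dark, dark, dark, bright, bright]
cuts = detect_scene_changes(frames)   # cut between frame 2 and frame 3
```

The frame at each detected cut would then serve as a key frame for indexing.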

  18. AgRISTARS. Supporting research: Algorithms for scene modelling

    NASA Technical Reports Server (NTRS)

    Rassbach, M. E. (Principal Investigator)

    1982-01-01

    The requirements for a comprehensive analysis of LANDSAT or other visual data scenes are defined. The development of a general model of a scene and a computer algorithm for finding the particular model for a given scene is discussed. The modelling system includes a boundary analysis subsystem, which detects all the boundaries and lines in the image and builds a boundary graph; a continuous variation analysis subsystem, which finds gradual variations not well approximated by a boundary structure; and a miscellaneous features analysis, which includes texture, line parallelism, etc. The noise reduction capabilities of this method and its use in image rectification and registration are discussed.

  19. Extended scene wavefront sensor for space application

    NASA Astrophysics Data System (ADS)

    Bomer, Thierry; Ravel, Karen; Corlay, Gilles

    2015-10-01

    The spatial resolution of optical monitoring satellites increases continuously, and it is more and more difficult to satisfy the stability constraints of the instrument. The compactness requirements induce high sensitivity to drift during storage and launch. The implementation of an active loop to control the performance of the telescope becomes essential, as with astronomical telescopes on the ground. The active loop requires real-time information on the optical distortions of the wavefront due to mirror deformations. This is the role of the Shack-Hartmann wavefront sensor studied by Sodern. It is located in the focal plane of the telescope, at the edge of the field of view, in order not to disturb acquisition by the main instrument. Its particular characteristic, compared to a traditional wavefront sensor, is that it works not only on point sources such as star images, but also on extended scenes such as those observed by the instrument. The exit pupil of the telescope is imaged onto a micro-lens array by relay optics. Each element of the micro-lens array generates a small image, shifted by the local wavefront slope. Processing by correlation between the small images allows measurement of the local slopes and recovery of the initial wavefront deformation according to a Zernike decomposition. Sodern has carried out the sensor dimensioning and has compared various image-correlation algorithms that make it possible to measure the local slopes of the wavefront. Simulations, taking into account several types of detectors, enabled comparison of the performance of these solutions, and a choice of detector was made. This article describes the state of progress of the work done so far. It presents the results of the comparisons behind the choice of detector, the main features of the sensor definition, and the performance obtained.
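The slope measurement by correlation of lenslet sub-images can be illustrated in 1-D: the displacement of each sub-image relative to a reference is the argmax of a cross-correlation over candidate shifts. This toy sketch is not Sodern's algorithm, and the signals are invented.

```python
def shift_by_correlation(ref, img, max_shift=3):
    """Estimate the integer shift of img relative to ref by maximizing
    the (count-normalized) cross-correlation over candidate offsets."""
    best_shift, best_score = 0, float("-inf")
    n = len(ref)
    for shift in range(-max_shift, max_shift + 1):
        score, count = 0.0, 0
        for i in range(n):
            j = i + shift
            if 0 <= j < n:
                score += ref[i] * img[j]
                count += 1
        score /= count          # normalize by overlap length
        if score > best_score:
            best_score, best_shift = score, shift
    return best_shift

# Reference lenslet image and the same feature displaced by a local
# wavefront slope (moved right by 2 samples).
ref = [0, 0, 1, 3, 1, 0, 0, 0]
shifted = [0, 0, 0, 0, 1, 3, 1, 0]
measured_shift = shift_by_correlation(ref, shifted)
```

Per-lenslet shifts measured this way give the local wavefront slopes, from which the full deformation is reconstructed via a Zernike decomposition.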

  20. Hybrid infrared scene projector (HIRSP): a high dynamic range infrared scene projector, part II

    NASA Astrophysics Data System (ADS)

    Cantey, Thomas M.; Bowden, Mark; Cosby, David; Ballard, Gary

    2008-04-01

    This paper continues the merging of two dynamic infrared scene projector technologies to provide a unique and innovative solution for simulating high dynamic temperature ranges when testing infrared imaging sensors. It presents some of the challenges and performance issues encountered in implementing this unique projector system in a Hardware-in-the-Loop (HWIL) simulation facility. The projection system combines the technologies of a Honeywell BRITE II extended-voltage-range emissive resistor array device and an optically scanned laser diode array projector (LDAP). The high apparent temperature simulations are produced from the luminescent infrared radiation emitted by the high-power laser diodes. The hybrid infrared projector system is being integrated into an existing HWIL simulation facility and is used to provide real-world high-radiance imagery to an imaging infrared unit under test. The performance and operation of the projector are presented, demonstrating the merit and success of the hybrid approach. The high dynamic range capability simulates apparent temperature signatures from a 250 Kelvin background to an 850 Kelvin maximum. This is a large increase in radiance projection over current infrared scene projection capabilities.

  1. Macrostructure logic arrays. Volume 2. Task 2: Seeker scene emulator. Final report, 28 June 1985-2 November 1990

    SciTech Connect

    Henshaw, A.; Melton, R.; Gieseking, S.; Alford, C.O.

    1990-11-07

    Under direction from the U.S. Army Strategic Defense Command, the Computer Engineering Research Laboratory at the Georgia Institute of Technology and BDM Corporation have developed a real-time Focal Plane Array Seeker Scene Emulator. This unit enhances Georgia Tech's capabilities in kinetic energy weapon system testing and performance demonstration. The Strategic Defense Initiative Organization HWIL Simulation Structure contains three paths for exercising the Signal Processing (SP) and Data Processing (DP) algorithms and hardware. Two of these methods use actual Focal Plane Array (FPA) hardware to generate signals for presentation to the SP and DP subsystems. In many cases, the use of an FPA might be considered restrictive. The Georgia Tech Seeker Scene Emulator (SSE) is designed to provide the third path in this simulation structure. By emulating the FPA, the Georgia Tech SSE can provide test results that would be costly or difficult to achieve using an actual FPA. The SSE can be used to fill gaps in the testing of components in stressing simulation scenarios, such as nuclear environments and high object counts. The FPA Seeker Scene Emulator combines advanced hardware developed at Georgia Tech with a Ballistic Missile Defense-generated database to produce signals based upon target radiometric information, seeker optical characterization, FPA detector characterization, and simulated background environments.

  2. Discomfort Glare: What Do We Actually Know?

    SciTech Connect

    Clear, Robert D.

    2012-04-19

    We reviewed glare models with an eye for missing conditions or inconsistencies. We found ambiguities as to when to use small-source versus large-source models, and as to what constitutes a glare source in a complex scene. We also found surprisingly little information validating the assumed independence of the factors driving glare. A barrier to progress in glare research is the lack of a standardized dependent measure of glare. We inverted the glare models to predict luminance, and compared model predictions against the 1949 Luckiesh and Guth data that form the basis of many of them. The models perform surprisingly poorly, particularly with regard to the luminance-size relationship and additivity. Evaluating glare in complex scenes may require fundamental changes to the form of the glare models.

  3. Behind the Scenes: Sarafin Goes from Farm to Flight Director

    NASA Video Gallery

    In this episode of NASA Behind the Scenes, astronaut Mike Massimino chats with flight director Mike Sarafin about when he joined NASA and moved from his family's farm in New York to Houston...with ...

  4. NASA Social: Behind the Scenes at NASA Dryden

    NASA Video Gallery

    More than 50 followers of NASA's social media websites went behind the scenes at NASA's Dryden Flight Research Center during a "NASA Social" on May 4, 2012. The visitors were briefed on what Dryden...

  5. LADAR scene projector for hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    Cornell, Michael C.; Naumann, Charles B.; Stockbridge, Robert G.; Snyder, Donald R.

    2002-07-01

    Future types of direct detection LADAR seekers will employ focal plane arrays in their receivers. Existing LADAR scene projection technology cannot meet the needs of testing these types of seekers in a Hardware-in-the-Loop environment. It is desired that the simulated LADAR return signals generated by the projection hardware be representative of the complex targets and background of a real LADAR image. A LADAR scene projector has been developed that is capable of meeting these demanding test needs. It can project scenes of simulated 2D LADAR return signals without scanning. In addition, each pixel in the projection can be represented by a 'complex' optical waveform, which can be delivered with sub-nanosecond precision. Finally, the modular nature of the projector allows it to be configured to operate at different wavelengths. This paper describes the LADAR Scene Projector and its full capabilities.

  6. Reconstruction of indoor scene from a single image

    NASA Astrophysics Data System (ADS)

    Wu, Di; Li, Hongyu; Zhang, Lin

    2015-03-01

    Given a single image of an indoor scene without any prior knowledge, is it possible for a computer to automatically reconstruct the structure of the scene? This letter proposes a reconstruction method, called RISSIM, to recover the 3D model of an indoor scene from a single image. The proposed method is composed of three steps: the estimation of vanishing points, the detection and classification of lines, and the plane mapping. To find vanishing points, a new feature descriptor, named "OCR", is defined to describe the texture orientation. With Phase Congruency and the Harris Detector, the line segments can be detected exactly, which is a prerequisite. A perspective transform then provides a reliable mapping by which points on the image can be represented on a 3D model. Experimental results show that the 3D structure of an indoor scene can be well reconstructed from a single image although the available depth information is limited.
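
    Vanishing-point estimation, the first step listed above, is commonly posed as finding the common intersection of a pencil of line segments. Here is a hedged sketch of one standard formulation (homogeneous coordinates and a least-squares null vector; not necessarily the method used by RISSIM, and the segment data are made up):

```python
import numpy as np

def line_through(p, q):
    # Homogeneous line through two image points (cross product)
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(segments):
    """Least-squares common intersection of the lines supporting the
    segments: the null vector (smallest right singular vector) of the
    stacked line coefficients, mapped back to image coordinates."""
    lines = np.array([line_through(p, q) for p, q in segments])
    _, _, vt = np.linalg.svd(lines)
    v = vt[-1]
    return v[:2] / v[2]

# Three segments lying on lines that all pass through (100, 50)
segs = [((0.0, 0.0), (50.0, 25.0)),
        ((0.0, 100.0), (50.0, 75.0)),
        ((0.0, 25.0), (40.0, 35.0))]
print(vanishing_point(segs))  # ≈ [100, 50]
```

    In practice the segments feeding each vanishing point are first clustered by orientation, since an indoor scene typically has three dominant, mutually orthogonal directions.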

  7. Behind the Scenes: Mission Control Practices Launching Discovery

    NASA Video Gallery

    Before every shuttle launch, the astronauts train with their ascent team in Mission Control Houston. In this episode of NASA Behind the Scenes, astronaut Mike Massimino introduces you to some of th...

  8. Behind the Scenes: Michoud Builder of Shuttle's External Tank

    NASA Video Gallery

    In this episode of NASA Behind the Scenes, astronaut Mike Massimino takes you on a tour of the Michoud Assembly Facility in New Orleans, La. This historic facility helped build the mighty Saturn V ...

  9. 8. PHOTOCOPY, 1880 STREET SCENE, called 'MULE TRAIN' (SHOWING INDIAN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. PHOTOCOPY, 1880 STREET SCENE, called 'MULE TRAIN' (SHOWING INDIAN FIREWOOD VENDER WITH HEAD BURDENS OF LIGHT FIREWOOD AND TWO BURROS WITH HEAVIER FIREWOOD LOADS.) - Barrio Libre, West Kennedy & West Seventeenth Streets, Meyer & Convent Avenues, Tucson, Pima County, AZ

  10. Behind the Scenes: Shuttle Crawls to Launch Pad

    NASA Video Gallery

    In this episode of NASA Behind the Scenes, take a look at what's needed to roll a space shuttle out of the Vehicle Assembly Building and out to the launch pad. Astronaut Mike Massimino talks to som...

  11. Near field 3D scene simulation for passive microwave imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Wu, Ji

    2006-10-01

    Scene simulation is necessary work in near-field passive microwave remote sensing. A 3-D scene simulation model of microwave radiometric imaging based on the ray tracing method is presented in this paper. The essential influencing factors and general requirements are considered in this model, such as rough surface radiation, the sky radiation which acts as the dominant illuminator in outdoor environments, the polarization rotation of the temperature rays caused by multiple reflections, and the antenna point spread function which determines the resolution of the model's final outputs. Using this model we simulated a virtual scene and analyzed the resulting microwave radiometric phenomenology; finally, two real scenes, a building and an airstrip, were simulated to validate the model. The comparison between the simulations and field measurements indicates that the model is feasible in practice. Furthermore, we analyzed the signatures of the model outputs and identified some underlying phenomenology of microwave radiation which differs from that in the optical and infrared bands.

  12. Behind the Scenes: Rolling Room Greets Returning Astronauts

    NASA Video Gallery

    Have you ever wondered what is the first thing the shuttle crews see after they land? In this episode of NASA Behind the Scenes, astronaut Mike Massimino takes you into the Crew Transport Vehicle, ...

  13. Behind the Scenes: Astronauts Keep Trainers in BBQ Bliss

    NASA Video Gallery

    In this episode of NASA Behind the Scenes, astronaut Mike Massimino talks with astronaut Terry Virts as well as Stephanie Turner, one of the people who keeps the astronaut corps in line. Mass also ...

  14. Crime scene ethics: souvenirs, teaching material, and artifacts.

    PubMed

    Rogers, Tracy L

    2004-03-01

    Police and forensic specialists are ethically obliged to preserve the integrity of their investigations and their agencies' reputations. The American Academy of Forensic Sciences and the Canadian Society of Forensic Science provide no guidelines for crime scene ethics, or the retention of items from former crime scenes. Guidelines are necessary to define acceptable behavior relating to removing, keeping, or selling artifacts, souvenirs, or teaching specimens from former crime scenes, where such activities are not illegal, to prevent potential conflicts of interest and the appearance of impropriety. Proposed guidelines permit the retention of objects with educational value, provided they are not of significance to the case, they are not removed until the scene is released, permission has been obtained from the property owner and police investigator, and the item has no significant monetary value. Permission is necessary even if objects appear discarded, or are not typically regarded as property, e.g., animal bones. PMID:15027551

  15. TIFF Image Writer patch for OpenSceneGraph

    SciTech Connect

    Eldridge, Bryce

    2012-01-05

    This software consists of code modifications to the open-source OpenSceneGraph software package to enable the creation of TIFF images containing 16-bit unsigned data. The modifications also allow the user to disable compression and set the DPI tags in the resulting TIFF images. Some image analysis programs require uncompressed, 16-bit unsigned input data. These code modifications allow programs based on OpenSceneGraph to write out such images, improving connectivity between applications.

  16. Crime Scene Reconstruction Using a Fully Geomatic Approach

    PubMed Central

    Agosto, Eros; Ajmar, Andrea; Boccardo, Piero; Tonolo, Fabio Giulio; Lingua, Andrea

    2008-01-01

    This paper is focused on two main topics: crime scene reconstruction, based on a geomatic approach, and crime scene analysis, through GIS-based procedures. Drawing on the authors' experience performing forensic analysis for real cases, these topics are examined with the specific goal of verifying the relationship between human walk paths at a crime scene and blood patterns on the floor. In order to perform such analyses, the availability of pictures taken by first aiders is mandatory, since they provide information about the crime scene before items are moved or interfered with. Generally, those pictures are affected by large geometric distortions; thus, after a brief description of the geomatic techniques suitable for the acquisition of reference data (total station surveying, photogrammetry and laser scanning), the developed methodology, based on photogrammetric algorithms, for calibrating, georeferencing and mosaicking the available images acquired on the scene is presented. The crime scene analysis is based on a collection of GIS functionalities for simulating human walk movements and creating a statistically significant sample. The developed GIS software component is described in detail, showing how the analysis of this statistical sample of simulated human walks makes it possible to rigorously define the probability of performing a certain walk path without touching the bloodstains on the floor.
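
    The probability analysis described above can be illustrated with a crude Monte Carlo stand-in (straight-line paths and circular stain clearances; the paper's GIS component simulates actual human gait over real floor geometry, which this sketch does not, and all names and coordinates are illustrative):

```python
import math
import random

def path_avoids(stains, start, end, clearance, steps=100):
    """Sample points along a straight walk from start to end and require
    each to keep `clearance` distance from every bloodstain centre."""
    for i in range(steps + 1):
        t = i / steps
        x = start[0] + t * (end[0] - start[0])
        y = start[1] + t * (end[1] - start[1])
        if any(math.hypot(x - sx, y - sy) < clearance for sx, sy in stains):
            return False
    return True

def avoidance_probability(stains, entries, exits, clearance,
                          trials=10000, seed=1):
    """Monte Carlo estimate of the probability that a walk between a
    randomly chosen entry and exit point never touches a stain."""
    rng = random.Random(seed)
    clear = sum(path_avoids(stains, rng.choice(entries), rng.choice(exits),
                            clearance)
                for _ in range(trials))
    return clear / trials
```

    With varied entry and exit points this returns the fraction of sampled paths that avoid the stains, which is the quantity the statistical sample of simulated walks is built to estimate.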

  17. DRDC's approach to IR scene generation for IRCM simulation

    NASA Astrophysics Data System (ADS)

    Lepage, Jean-François; Labrie, Marc-André; Rouleau, Eric; Richard, Jonathan; Ross, Vincent; Dion, Denis; Harrison, Nathalie

    2011-06-01

    An object-oriented simulation framework, called KARMA, was developed over the last decade at Defence Research and Development Canada - Valcartier (DRDC Valcartier) to study infrared countermeasures (IRCM) methods and tactics. It provides a range of infrared (IR) guided weapon engagement services, from constructive to HWIL simulations. To support the increasing level of detail of its seeker models, DRDC Valcartier recently developed an IR scene generation (IRSG) capability for the KARMA framework. The approach relies on open-source-based rendering of scenes composed of 3D models, using commercial off-the-shelf (COTS) graphics processing units (GPUs) in standard PCs. The objective is to produce a high-frame-rate, medium-fidelity representation of the IR scene, properly reproducing the spectral, spatial, and temporal characteristics of aircraft and flare signatures. In particular, the OpenSceneGraph library is used to manage the 3D models and to send high-level rendering commands. The atmospheric module allows for accurate, run-time computation of the radiative components using a spectrally correlated wide-band mode. Advanced effects, such as surface reflections and zoom anti-aliasing, are computed by the GPU through the use of shaders. In addition to the IR scene generation module, a signature modeling and analysis tool (SMAT) was developed to assist the modeler in building and validating signature models that are independent of a particular sensor type. Details of the IR scene generation module and the associated modeling tool are presented.

  18. Strategic Scene Generation Model: baseline and operational software

    NASA Astrophysics Data System (ADS)

    Heckathorn, Harry M.; Anding, David C.

    1993-08-01

    The Strategic Defense Initiative (SDI) must simulate the detection, acquisition, discrimination and tracking of anticipated targets and predict the effect of natural and man-made background phenomena on the optical sensor systems designed to perform these tasks. NRL is developing such a capability using a computerized methodology to provide modeled data in the form of digital realizations of complex, dynamic scenes. The Strategic Scene Generation Model (SSGM) is designed to integrate state-of-science knowledge, databases and computerized phenomenology models to simulate strategic engagement scenarios and to support the design, development and test of advanced surveillance systems. Multi-phenomenology scenes are produced from validated codes, thereby serving as a traceable standard against which different SDI concepts and designs can be tested. This paper describes the SSGM design architecture; the software modules and databases used to create scene elements; the synthesis of deterministic and/or stochastic structured scene elements into composite scenes; the software system that manages the various databases and digital image libraries; and verification and validation by comparison with empirical data. The focus is on the functionality of the SSGM Phase II Baseline Model (SSGMB), whose implementation is complete. Recent enhancements for Theater Missile Defense are also presented, as is the development plan for the SSGM Phase III Operational Model (SSGMO), whose development has just begun.

  19. Political conservatism predicts asymmetries in emotional scene memory.

    PubMed

    Mills, Mark; Gonzalez, Frank J; Giuseffi, Karl; Sievert, Benjamin; Smith, Kevin B; Hibbing, John R; Dodd, Michael D

    2016-06-01

    Variation in political ideology has been linked to differences in attention to and processing of emotional stimuli, with stronger responses to negative versus positive stimuli (negativity bias) the more politically conservative one is. As memory is enhanced by attention, such findings predict that memory for negative versus positive stimuli should likewise be enhanced the more conservative one is. The present study tests this prediction by having participants study 120 positive, negative, and neutral scenes in preparation for a subsequent memory test. On the memory test, the same 120 scenes were presented along with 120 new scenes, and participants responded whether each scene was old or new. Results on the memory test showed that negative scenes were more likely to be remembered than positive scenes, though this was true only for political conservatives. That is, a larger negativity bias was found the more conservative one was. The effect was sizeable, explaining 45% of the variance across subjects in the effect of emotion. These findings demonstrate that the relationship between political ideology and asymmetries in emotion processing extends to memory and, furthermore, suggest that exploring the extent to which subject variation in interactions among emotion, attention, and memory is predicted by conservatism may provide new insights into theories of political ideology.

  20. Digital forensics: an analytical crime scene procedure model (ACSPM).

    PubMed

    Bulbul, Halil Ibrahim; Yavuzcan, H Guclu; Ozel, Mesut

    2013-12-10

    In order to ensure that digital evidence is collected, preserved, examined, or transferred in a manner safeguarding the accuracy and reliability of the evidence, law enforcement and digital forensic units must establish and maintain an effective quality assurance system. The very first part of this system is standard operating procedures (SOPs) and/or models conforming to chain-of-custody requirements, which rely on the digital forensics "process-phase-procedure-task-subtask" sequence. An acceptable and thorough Digital Forensics (DF) process depends on sequential DF phases, each phase depends on sequential DF procedures, and each procedure in turn depends on tasks and subtasks. Numerous DF process models in the literature define DF phases, but no DF model has been identified that defines phase-based sequential procedures for the crime scene. The analytical crime scene procedure model (ACSPM) suggested in this paper is intended to fill this gap. The proposed analytical procedure model for digital investigations at a crime scene is developed and defined for crime scene practitioners, with the main focus on crime scene digital forensic procedures rather than the whole digital investigation process and phases that end up in court. When reviewing the relevant literature and consulting with law enforcement agencies, only device-based charts specific to a particular device and/or more general approaches to digital evidence management models from crime scene to court were found. After analyzing the needs of law enforcement organizations and realizing the absence of a crime scene digital investigation procedure model for crime scene activities, we inspected the relevant literature in an analytical way. The outcome of this inspection is the model suggested here, which is intended to provide guidance for the thorough and secure implementation of digital forensic procedures at a crime scene.

  1. Notes on Linguistics, 1990.

    ERIC Educational Resources Information Center

    Notes on Linguistics, 1990

    1990-01-01

    This document consists of the four issues of "Notes on Linguistics" published during 1990. Articles in the four issues include: "The Indians Do Say Ugh-Ugh" (Howard W. Law); "Constraints of Relevance, A Key to Particle Typology" (Regina Blass); "Whatever Happened to Me? (An Objective Case Study)" (Aretta Loving); "Stop Me and Buy One (For $5...)"…

  2. Notes and Discussion

    ERIC Educational Resources Information Center

    American Journal of Physics, 1978

    1978-01-01

    Includes eleven short notes, comments and responses to comments on a variety of topics such as uncertainty in a least-squares fit, display of diffraction patterns, the dark night sky paradox, error in the dynamics of deformable bodies and relative velocities and the runner. (GA)

  3. NCTM Student Math Notes.

    ERIC Educational Resources Information Center

    Maletsky, Evan, Ed.; Yunker, Lee E., Ed.

    1986-01-01

    Five sets of activities for students are included in this document. Each is designed for use in junior high and secondary school mathematics instruction. The first Note concerns mathematics on postage stamps. Historical procedures and mathematicians, metric conversion, geometric ideas, and formulas are among the topics considered. Successful…

  4. Notes on Linguistics, 1999.

    ERIC Educational Resources Information Center

    Payne, David, Ed.

    1999-01-01

    The 1999 issues of "Notes on Linguistics," published quarterly, include the following articles, review articles, reviews, book notices, and reports: "A New Program for Doing Morphology: Hermit Crab"; "Lingualinks CD-ROM: Field Guide to Recording Language Data"; "'Unruly' Phonology: An Introduction to Optimality Theory"; "Borrowing vs. Code…

  5. Programmable Logic Application Notes

    NASA Technical Reports Server (NTRS)

    Katz, Richard; Day, John H. (Technical Monitor)

    2001-01-01

    This report will be provided each quarter as a source for reliability, radiation results, NASA capabilities, and other information on programmable logic devices and related applications. This quarter will continue a series of notes concentrating on analysis techniques with this issue's section discussing the use of Root-Sum-Square calculations for digital delays.

  6. Sawtooth Functions. Classroom Notes

    ERIC Educational Resources Information Center

    Hirst, Keith

    2004-01-01

    Using MAPLE enables students to consider many examples which would be very tedious to work out by hand. This applies to graph plotting as well as to algebraic manipulation. The challenge is to use these observations to develop the students' understanding of mathematical concepts. In this note an interesting relationship arising from inverse…

  7. Student Math Notes.

    ERIC Educational Resources Information Center

    Maletsky, Evan, Ed.

    1985-01-01

    Five sets of activities for students are included in this document. Each is designed for use in junior high and secondary school mathematics instruction. The first "Note" concerns magic squares in which the numbers in every row, column, and diagonal add up to the same sum. An etching by Albrecht Durer is presented, with four questions followed by…

  8. Exploring Eye Movements in Patients with Glaucoma When Viewing a Driving Scene

    PubMed Central

    Crabb, David P.; Smith, Nicholas D.; Rauscher, Franziska G.; Chisholm, Catharine M.; Barbur, John L.; Edgar, David F.; Garway-Heath, David F.

    2010-01-01

    Background Glaucoma is a progressive eye disease and a leading cause of visual disability. Automated assessment of the visual field determines the different stages in the disease process: it would be desirable to link these measurements taken in the clinic with patient's actual function, or establish if patients compensate for their restricted field of view when performing everyday tasks. Hence, this study investigated eye movements in glaucomatous patients when viewing driving scenes in a hazard perception test (HPT). Methodology/Principal Findings The HPT is a component of the UK driving licence test consisting of a series of short film clips of various traffic scenes viewed from the driver's perspective each containing hazardous situations that require the camera car to change direction or slow down. Data from nine glaucomatous patients with binocular visual field defects and ten age-matched control subjects were considered (all experienced drivers). Each subject viewed 26 different films with eye movements simultaneously monitored by an eye tracker. Computer software was purpose written to pre-process the data, co-register it to the film clips and to quantify eye movements and point-of-regard (using a dynamic bivariate contour ellipse analysis). On average, and across all HPT films, patients exhibited different eye movement characteristics to controls making, for example, significantly more saccades (P<0.001; 95% confidence interval for mean increase: 9.2 to 22.4%). Whilst the average region of ‘point-of-regard’ of the patients did not differ significantly from the controls, there were revealing cases where patients failed to see a hazard in relation to their binocular visual field defect. Conclusions/Significance Characteristics of eye movement patterns in patients with bilateral glaucoma can differ significantly from age-matched controls when viewing a traffic scene. Further studies of eye movements made by glaucomatous patients could provide useful

  9. Registration Study. Research Note.

    ERIC Educational Resources Information Center

    Baratta, Mary Kathryne

    During spring 1977 registration, 3,255 or 45% of Moraine Valley Community College (MVCC) registering students responded to a scheduling preferences and problems questionnaire covering enrollment status, curriculum load, program preference, ability to obtain courses, schedule conflicts, preferred times for class offerings, actual scheduling of…

  10. Hilots make the family planning scene.

    PubMed

    1974-10-01

    A hilot (birth attendant), Aling Melchora, of Roxas, Oriental Mindora, who does motivation work in family planning is typical of hilots who are found in every barrio throughout the Philippines. She is 58 years old and has been a hilot for more than 30 years. She learned birth attendance in a training course at the Pandacan Puericulture Center in 1940. She averages 3 deliveries a month and 8 IUD acceptances a month. The hilots are a possible strong force in family planning motivation because of their influence and the respect with which people in the community regard them. They are older, experienced, always available, and charge very reasonable rates for services highly trained clinic staff would balk at doing. The Institute of Maternal and Child Health (IMCH) has trained 400 such hilots to do motivation work in family planning. It is noted that in the Philippines, the hilot may yet provide the key to reach the people in the barrios, which is the most important and challenging task for the national program on family planning. PMID:12306912

  12. Object shape classification and scene shape representation for three-dimensional laser scanned outdoor data

    NASA Astrophysics Data System (ADS)

    Ning, Xiaojuan; Wang, Yinghui; Zhang, Xiaopeng

    2013-02-01

    Shape analysis of a three-dimensional (3-D) scene is an important issue that could be widely used in various applications: city planning, robot navigation, virtual tourism, etc. We introduce an approach for understanding the primitive shapes of a scene in order to reveal its semantic shape structure and represent it using shape elements. Scene objects are labeled and recognized using geometric and semantic features for each cluster, based on knowledge of the scene. Furthermore, objects in the scene with different primitive shapes can also be classified and fitted using the Gaussian map of the segmented scene. We demonstrate the presented approach on several complex scenes from laser scanning. According to the experimental results, the proposed method can accurately represent the geometric structure of a 3-D scene.

  13. A comparison of actual and perceived residential proximity to toxic waste sites.

    PubMed

    Howe, H L

    1988-01-01

    Studies of Memphis and Three Mile Island have noted a positive association between actual residential distance and public concern about exposure to potential contamination, whereas none was found at Love Canal. In this study, concern about environmental contamination and exposure was examined in relation to both perceived and actual proximity to a toxic waste disposal site (TWDS). It was hypothesized that perceived residential proximity would better predict concern levels than would actual residential distance. The data were abstracted from a New York State survey (excluding New York City), using all respondents (N = 317) from one county known to have a large number of TWDSs. Using linear regression, the variance explained in concern scores was 22 times higher with perceived distance than with actual distance. Perceived residential distance was a significant predictor of concern scores, while actual distance was not. However, perceived distance explained less than 5% of the variance in concern scores. PMID:3196077
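
    The "22 times higher" comparison refers to the variance explained (R-squared) by each predictor in a linear regression. A minimal sketch with synthetic data (not the survey data; the variable names are illustrative) shows how such a comparison is computed:

```python
import numpy as np

def r_squared(x, y):
    """Fraction of the variance in y explained by a linear fit on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()

rng = np.random.default_rng(0)
perceived = rng.random(100)                            # hypothetical distances
concern = 2.0 * perceived + rng.normal(0.0, 0.5, 100)  # driven by perception
actual = rng.random(100)                               # unrelated to concern
print(r_squared(perceived, concern) > r_squared(actual, concern))  # → True
```

    The ratio of the two R-squared values is the kind of "times higher" figure the abstract reports.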

  14. Providers' Reported and Actual Use of Coaching Strategies in Natural Environments

    ERIC Educational Resources Information Center

    Salisbury, Christine; Cambray-Engstrom, Elizabeth; Woods, Juliann

    2012-01-01

    This case study examined the agreement between reported and actual use of coaching strategies based on home visit data collected on a diverse sample of providers and families. Paired videotape and contact note data of and from providers during home visits were collected over a six month period and analyzed using structured protocols. Results of…

  15. Rank preserving sparse learning for Kinect based scene classification.

    PubMed

    Tao, Dapeng; Jin, Lianwen; Yang, Zhao; Li, Xuelong

    2013-10-01

    With the rapid development of RGB-D sensors and the rapidly growing popularity of the low-cost Microsoft Kinect, scene classification, a hard yet important problem in computer vision, has seen a resurgence of interest. This is because the depth information provided by the Kinect sensor opens an effective and innovative route to scene classification. In this paper, we propose a new scheme for scene classification, which applies locality-constrained linear coding (LLC) to local SIFT features to represent the RGB-D samples, and classifies scenes through the cooperation between a new rank preserving sparse learning (RPSL) based dimension reduction and a simple classification method. RPSL considers four aspects: 1) it preserves the rank order information of the within-class samples in a local patch; 2) it maximizes the margin between the between-class samples on the local patch; 3) an L1-norm penalty is introduced to obtain the parsimony property; and 4) it minimizes the classification error using the least-squares criterion. Experiments conducted on the NYU Depth V1 dataset demonstrate the robustness and effectiveness of RPSL for scene classification. PMID:23846511
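
    The coding step turns a variable-size set of local descriptors into a fixed-length vector. As a hedged simplification of LLC (which distributes each descriptor over its k nearest codewords with a locality-constrained least-squares solve), the sketch below uses hard nearest-codeword assignment, i.e. plain bag-of-features; the function name and data are illustrative only:

```python
def bow_histogram(descriptors, codebook):
    """Bag-of-features by hard nearest-codeword assignment: each local
    descriptor votes for its closest codebook entry, and the normalized
    counts form a fixed-length scene representation. A simplification of
    LLC, which instead solves a small constrained least-squares problem
    over the k nearest codewords."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[min(range(len(codebook)), key=lambda i: dist2(d, codebook[i]))] += 1
    total = sum(hist)
    return [h / total for h in hist]
```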

  16. Selective looking at natural scenes: Hedonic content and gender☆

    PubMed Central

    Bradley, Margaret M.; Costa, Vincent D.; Lang, Peter J.

    2015-01-01

    Choice viewing behavior when looking at affective scenes was assessed to examine differences due to hedonic content and gender by monitoring eye movements in a selective looking paradigm. On each trial, participants viewed a pair of pictures that included a neutral picture together with an affective scene depicting either contamination, mutilation, threat, food, nude males, or nude females. The duration of time that gaze was directed to each picture in the pair was determined from eye fixations. Results indicated that viewing choices varied with both hedonic content and gender. Initially, gaze duration for both men and women was heightened when viewing all affective contents, but was subsequently followed by significant avoidance of scenes depicting contamination or nude males. Gender differences were most pronounced when viewing pictures of nude females, with men continuing to devote longer gaze time to pictures of nude females throughout viewing, whereas women avoided scenes of nude people, whether male or female, later in the viewing interval. For women, reported disgust of sexual activity was also inversely related to gaze duration for nude scenes. Taken together, selective looking as indexed by eye movements reveals differential perceptual intake as a function of specific content, gender, and individual differences. PMID:26156939

  17. Rendering energy-conservative scenes in real time

    NASA Astrophysics Data System (ADS)

    Olson, Eric M.; Garbo, Dennis L.; Crow, Dennis R.; Coker, Charles F.

    1997-07-01

    Real-time infrared (IR) scene generation for Hardware-in-the-Loop (HWIL) testing of IR seeker systems is a complex problem due to the required frame rates and image fidelity. High frame rates are required for current generation seeker systems to perform designation, discrimination, identification, tracking, and aimpoint selection tasks. Computational requirements for IR signature phenomenology and sensor effects have been difficult to meet in real time for HWIL testing. Commercial scene generation hardware is rapidly improving and is becoming a viable solution for HWIL testing activities being conducted at the Kinetic Kill Vehicle Hardware-in-the-Loop Simulator facility at Eglin AFB, Florida. This paper presents computational techniques performed to overcome IR scene rendering errors incurred with commercially available hardware and software for real-time scene generation in support of HWIL testing. These techniques provide an acceptable solution to real-time IR scene generation that strikes a balance between physical accuracy and image framing rates. The results of these techniques are investigated as they pertain to rendering accuracy and speed for target objects which begin as a point source during acquisition and develop into an extended source representation during aimpoint selection.

  18. Unconscious analyses of visual scenes based on feature conjunctions.

    PubMed

    Tachibana, Ryosuke; Noguchi, Yasuki

    2015-06-01

    To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain. PMID:25798783

  19. Can cigarette warnings counterbalance effects of smoking scenes in movies?

    PubMed

    Golmier, Isabelle; Chebat, Jean-Charles; Gélinas-Chebat, Claire

    2007-02-01

    Scenes in movies where smoking occurs have been empirically shown to influence teenagers to smoke cigarettes. The capacity of a Canadian warning label on cigarette packages to decrease the effects of smoking scenes in popular movies has been investigated. A 2 x 3 factorial design was used to test the effects of the same movie scene with or without electronic manipulation of all elements related to smoking, and of cigarette pack warnings, i.e., no warning, text-only warning, and text+picture warning. Smoking-related stereotypes and intent to smoke of teenagers were measured. It was found that, in the absence of warning, and in the presence of smoking scenes, teenagers showed positive smoking-related stereotypes. However, these effects were not observed if the teenagers were first exposed to a picture and text warning. Also, smoking-related stereotypes mediated the effect of the combined text-and-picture warning and smoking scene on teenagers' intent to smoke. The effectiveness of Canadian warning labels in preventing or decreasing cigarette smoking among teenagers is discussed, and areas for research are proposed. PMID:17450995

  1. Is OpenSceneGraph an option for ESVS displays?

    NASA Astrophysics Data System (ADS)

    Peinecke, Niklas

    2015-05-01

    Modern Enhanced and Synthetic Vision Systems (ESVS) usually incorporate complex 3D displays, for example, terrain visualizations with color-coded altitude, obstacle representations that change their level of detail based on distance, semi-transparent overlays, dynamic labels, etc. All of these elements can be conveniently implemented using a modern scene graph implementation. OpenSceneGraph offers such a data structure. Furthermore, OpenSceneGraph includes broad support for industry-standard file formats, so 3D data and models from other applications can be used. OpenSceneGraph has a large user community and is driven by open source development. Thus a selection of visualization techniques is available, and solutions for common problems can often be found in the community's discussion groups. On the other hand, documentation is sometimes outdated or nonexistent. We investigate which ESVS applications can be realized using OpenSceneGraph and on which platforms this is possible. Furthermore, we take a look at technical and license limitations.

  2. Statistics of colors in paintings and natural scenes.

    PubMed

    Montagner, Cristina; Linhares, João M M; Vilarigues, Márcia; Nascimento, Sérgio M C

    2016-03-01

    Painters reproduce some spatial statistical regularities of natural scenes. To what extent they replicate their color statistics is an open question. We investigated this question by analyzing the colors of 50 natural scenes of rural and urban environments and 44 paintings with abstract and figurative compositions. The analysis was carried out using hyperspectral imaging data from both sets and focused on the gamut and distribution of colors in the CIELAB space. The results showed that paintings, like natural scenes, have gamuts with elongated shapes in the yellow-blue direction but more tilted toward the red direction. It was also found that the fraction of discernible colors, expressed as a function of the number of occurrences in the scene or painting, is well described by power laws. These have similar distributions of slopes on a log-log scale for paintings and natural scenes. These features are observed in both abstract and figurative compositions. These results suggest that the underlying chromatic structure of artistic compositions generally follows the main statistical features of the natural environment.
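
    A power law appears as a straight line on a log-log plot, so its exponent (the "slope" compared above) can be estimated by ordinary least squares on the logged data. A minimal sketch with a synthetic distribution (the constant, exponent, and counts are invented for illustration):

```python
import math

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) versus log(x): for data following
    y = c * x**a, this recovers the power-law exponent a."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic "number of discernible colors with k occurrences": f(k) = c * k**-1.2
ks = list(range(1, 101))
fs = [5000.0 * k ** -1.2 for k in ks]
print(round(loglog_slope(ks, fs), 3))  # recovers the exponent, about -1.2
```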

  3. Do Simultaneously Viewed Objects Influence Scene Recognition Individually or as Groups? Two Perceptual Studies

    PubMed Central

    Gagne, Christopher R.; MacEvoy, Sean P.

    2014-01-01

    The ability to quickly categorize visual scenes is critical to daily life, allowing us to identify our whereabouts and to navigate from one place to another. Rapid scene categorization relies heavily on the kinds of objects scenes contain; for instance, studies have shown that recognition is less accurate for scenes to which incongruent objects have been added, an effect usually interpreted as evidence of objects' general capacity to activate semantic networks for scene categories they are statistically associated with. Essentially all real-world scenes contain multiple objects, however, and it is unclear whether scene recognition draws on the scene associations of individual objects or of object groups. To test the hypothesis that scene recognition is steered, at least in part, by associations between object groups and scene categories, we asked observers to categorize briefly-viewed scenes appearing with object pairs that were semantically consistent or inconsistent with the scenes. In line with previous results, scenes were less accurately recognized when viewed with inconsistent versus consistent pairs. To understand whether this reflected individual or group-level object associations, we compared the impact of pairs composed of mutually related versus unrelated objects; i.e., pairs which, as groups, had clear associations to particular scene categories versus those that did not. Although related and unrelated object pairs equally reduced scene recognition accuracy, unrelated pairs were consistently less capable of drawing erroneous scene judgments towards scene categories associated with their individual objects. This suggests that scene judgments were influenced by the scene associations of object groups, beyond the influence of individual objects. More generally, the fact that unrelated objects were as capable of degrading categorization accuracy as related objects, while less capable of generating specific alternative judgments, indicates that the process

  4. Background gradient reduction of an infrared scene projector mounted on a flight motion simulator

    NASA Astrophysics Data System (ADS)

    Cantey, Thomas M.; Bowden, Mark H.; Ballard, Gary

    2008-04-01

    The U.S. Army Aviation and Missile Research, Development, and Engineering Center (AMRDEC) recently developed an infrared projector mounted on a flight motion simulator (FMS) that is used for hardware-in-the-loop (HWIL) testing. The initial application of this system within a HWIL environment required variations in the projected background radiance level to be very low. This paper describes the investigation into the causes of the variations in background radiance levels and the steps employed to reduce the background variance to an acceptable level. Test data collected before and after the corrective techniques are provided. The procedures discussed provide insight into the types of practical problems encountered when integrating infrared scene projector technologies into actual test facilities.

  5. False recognition of objects in visual scenes: findings from a combined direct and indirect memory test.

    PubMed

    Weinstein, Yana; Nash, Robert A

    2013-01-01

    We report an extension of the procedure devised by Weinstein and Shanks (Memory & Cognition 36:1415-1428, 2008) to study false recognition and priming of pictures. Participants viewed scenes with multiple embedded objects (seen items), then studied the names of these objects and the names of other objects (read items). Finally, participants completed a combined direct (recognition) and indirect (identification) memory test that included seen items, read items, and new items. In the direct test, participants recognized pictures of seen and read items more often than new pictures. In the indirect test, participants' speed at identifying those same pictures was improved for pictures that they had actually studied, and also for falsely recognized pictures whose names they had read. These data provide new evidence that a false-memory induction procedure can elicit memory-like representations that are difficult to distinguish from "true" memories of studied pictures.

  6. Virtual environments for scene of crime reconstruction and analysis

    NASA Astrophysics Data System (ADS)

    Howard, Toby L. J.; Murta, Alan D.; Gibson, Simon

    2000-02-01

    This paper describes research conducted in collaboration with Greater Manchester Police (UK), to evaluate the utility of Virtual Environments for scene of crime analysis, forensic investigation, and law enforcement briefing and training. We present an illustrated case study of the construction of a high-fidelity virtual environment, intended to match a particular real-life crime scene as closely as possible. We describe and evaluate the combination of several approaches, including: the use of the Manchester Scene Description Language for constructing complex geometrical models; the application of a radiosity rendering algorithm with several novel features based on human perceptual considerations; texture extraction from forensic photography; and experiments with interactive walkthroughs and large-screen stereoscopic display of the virtual environment implemented using the MAVERIK system. We also discuss the potential applications of Virtual Environment techniques in the law enforcement and forensic communities.

  7. A Model of Manual Control with Perspective Scene Viewing

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara Townsend

    2013-01-01

    A model of manual control during perspective scene viewing is presented, which combines the Crossover Model with a simplified model of perspective-scene viewing and visual-cue selection. The model is developed for a particular example task: an idealized constant-altitude task in which the operator controls longitudinal position in the presence of both longitudinal and pitch disturbances. An experiment is performed to develop and validate the model. The model corresponds closely with the experimental measurements, and identified model parameters are highly consistent with the visual cues available in the perspective scene. The modeling results indicate that operators used one visual cue for position control, and another visual cue for velocity control (lead generation). Additionally, operators responded more quickly to rotation (pitch) than to translation (longitudinal).
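
    For reference, the Crossover Model (due to McRuer and colleagues) states that near the crossover frequency, the combined dynamics of the human operator $Y_p$ and the controlled element $Y_c$ approximate an integrator with an effective time delay. This is the standard textbook form; the report's own notation may differ:

```latex
Y_p(s)\,Y_c(s) \approx \frac{\omega_c \, e^{-\tau_e s}}{s}
```

Here $\omega_c$ is the crossover frequency and $\tau_e$ the operator's effective time delay; the operator adapts $Y_p$ so that this product holds regardless of the plant $Y_c$.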

  8. Scene-based nonuniformity correction in infrared videos

    NASA Astrophysics Data System (ADS)

    Bae, Yoonsung; Lee, Jongho; Lee, Jong Ho; Ra, Jong Beom

    2012-05-01

    Recent infrared (IR) sensors are mostly based on a focal-plane array (FPA) structure. However, IR images suffer from fixed-pattern noise (FPN) due to the non-uniform response of the FPA. Various nonuniformity correction (NUC) techniques have been developed to alleviate FPN; they can be categorized into reference-based and scene-based approaches. To deal with temporal drift, however, a scene-based approach is needed. Among scene-based algorithms, conventional methods compensate only for the offset non-uniformity of IR camera detectors, based on global motion information. Local motions in a video, however, can introduce inaccurate motion information for NUC. Considering global and local motions simultaneously, we propose a correction algorithm for both gain and offset. Experimental results using simulated and real IR videos show that the proposed algorithm improves FPN reduction.
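
    The simplest scene-based baseline is constant-statistics offset correction: if every detector sees similar scene statistics over time, differences between per-pixel temporal means expose the fixed-pattern offsets. This is a hedged sketch of that baseline only; the paper's method additionally estimates gain and uses global/local motion, which is not shown here.

```python
def offset_nuc(frames):
    """Constant-statistics offset-only NUC. Assumes each pixel sees the same
    scene statistics over time, so per-pixel temporal means should agree;
    each pixel's deviation from the global mean is taken as its fixed-pattern
    offset and subtracted. frames: list of 2-D lists (frames[t][row][col])."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    temporal_mean = [[sum(f[r][c] for f in frames) / n for c in range(w)]
                     for r in range(h)]
    global_mean = sum(map(sum, temporal_mean)) / (h * w)
    offsets = [[temporal_mean[r][c] - global_mean for c in range(w)]
               for r in range(h)]
    return [[[f[r][c] - offsets[r][c] for c in range(w)] for r in range(h)]
            for f in frames]
```

On a synthetic sequence of uniform frames plus a fixed per-pixel offset, the corrected frames become spatially flat, which is exactly the FPN removal this baseline targets.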

  9. Use of AFIS for linking scenes of crime.

    PubMed

    Hefetz, Ido; Liptz, Yakir; Vaturi, Shaul; Attias, David

    2016-05-01

    Forensic intelligence can provide critical information in criminal investigations - the linkage of crime scenes. The Automatic Fingerprint Identification System (AFIS) is an example of a technological improvement that has advanced the entire forensic identification field to strive for new goals and achievements. In one example using AFIS, a series of burglaries into private apartments enabled a fingerprint examiner to search latent prints from different burglary scenes against an unsolved latent print database. Latent finger and palm prints coming from the same source were associated with more than 20 cases. Then, by forensic intelligence and profile analysis, the offender's behavior could be anticipated. He was caught, identified, and arrested. It is recommended to perform an AFIS search of LT/UL prints against current crimes automatically as part of laboratory protocol, rather than at an examiner's discretion. This approach may link different crime scenes. PMID:26996923

  10. Ray tracing a three dimensional scene using a grid

    DOEpatents

    Wald, Ingo; Ize, Santiago; Parker, Steven G; Knoll, Aaron

    2013-02-26

    Ray tracing a three-dimensional scene using a grid. One example embodiment is a method for ray tracing a three-dimensional scene using a grid. In this example method, the three-dimensional scene is made up of objects that are spatially partitioned into a plurality of cells that make up the grid. The method includes a first act of computing a bounding frustum of a packet of rays, and a second act of traversing the grid slice by slice along a major traversal axis. Each slice traversal includes a first act of determining one or more cells in the slice that are overlapped by the frustum and a second act of testing the rays in the packet for intersection with any objects at least partially bounded by the one or more cells overlapped by the frustum.
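
    The slice-by-slice packet traversal builds on classic single-ray grid walking. As a hedged illustration (a single-ray 3-D DDA in the style of Amanatides and Woo, not the patented packet/frustum method itself, which bounds many rays at once and tests cell overlap against the frustum), the cell visit order for one ray can be computed as:

```python
import math

def traverse_grid(origin, direction, cell_size, grid_dims):
    """Visit, in order, the cells of a uniform grid pierced by a ray (3-D DDA).
    origin/direction are 3-tuples; grid_dims gives cell counts per axis.
    Single-ray simplification of packet traversal with a bounding frustum."""
    cell = [int(origin[i] // cell_size) for i in range(3)]
    step, t_max, t_delta = [], [], []
    for i in range(3):
        if direction[i] > 0:
            step.append(1)
            t_max.append(((cell[i] + 1) * cell_size - origin[i]) / direction[i])
            t_delta.append(cell_size / direction[i])
        elif direction[i] < 0:
            step.append(-1)
            t_max.append((cell[i] * cell_size - origin[i]) / direction[i])
            t_delta.append(cell_size / -direction[i])
        else:  # ray parallel to this axis: never crosses its cell walls
            step.append(0)
            t_max.append(math.inf)
            t_delta.append(math.inf)
    visited = []
    while all(0 <= cell[i] < grid_dims[i] for i in range(3)):
        visited.append(tuple(cell))
        axis = min(range(3), key=lambda i: t_max[i])  # nearest cell boundary
        cell[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return visited
```

A packet version amortizes this bookkeeping: instead of stepping each ray, the grid is walked one slice at a time along the major axis, and only cells overlapped by the packet's bounding frustum are tested against scene objects.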

  11. Smart weapons operability enhancement synthetic scene generation process

    NASA Astrophysics Data System (ADS)

    Koenig, George G.; Welsh, James P.; Wilson, Jerre W.

    1995-06-01

    The smart weapons operability enhancement (SWOE) program has developed a synthetic scene generation process that incorporates formal experimental design, random sampling procedures, data collection methods, physics models, and numerically repeatable validation procedures. The SWOE synthetic scene generation procedure uses an assemblage of measurements, static and dynamic information databases, thermal and radiance models, and rendering techniques to simulate a wide range of environmental conditions. The models provide a spatial and spectral agility that permits the simulation of a wide range of sensor systems for varied environmental conditions. Comprehensive validation efforts have been conducted for two locations, Grayling, Michigan and Yuma, Arizona, and for two spectral bands, shortwave (3-5 micrometers) and longwave (8-12 micrometers) IR. The intended use of the validated SWOE process is synthetic battlefield scene generation. The users of the SWOE process are the smart weapons system designers, developers, testers and evaluators, including developers of automatic target recognition algorithms and techniques.

  12. Constructing Virtual Forest Scenes for Assessment of Sub-pixel Vegetation Structure From Imaging Spectroscopy

    NASA Astrophysics Data System (ADS)

    Gerace, A. D.; Yao, W.; van Aardt, J. A.; Romanczyk, P.; Kelbe, D.; van Leeuwen, M.; Kampe, T. U.

    2015-12-01

    Assessment of vegetation structure via remote sensing has a long history across a range of sensor platforms. Imaging spectroscopy, while often used for biochemical measurements, also applies to structural assessment: the Hyperspectral Infrared Imager (HyspIRI), for instance, will provide an opportunity to monitor the global ecosystem. Establishing the linkage between HyspIRI data and sub-pixel vegetation structural variation is therefore of keen interest to the remote sensing and ecology communities. NASA's AVIRIS-C was used to collect airborne data during the 2013-2015 time frame, while ground truth data were limited to 2013 due to the time-consuming and labor-intensive nature of field data collection. We augmented the available field data with a first-principles, physics-based simulation approach to refine our field efforts and to maintain greater control over within-pixel variation and associated assessments. Three virtual scenes were constructed for the study, corresponding to the actual vegetation structure of NEON's Pacific Southwest site (Fresno, CA). They represented three typical forest types: oak savanna, dense coniferous forest, and mixed conifer-manzanita forest. An airborne spectrometer and a field leaf area index sensor were simulated over these scenes using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, a synthetic image generation model. After verifying the geometrical parameters and physical model against these replicated scenes, more scenes could be constructed by changing one or more vegetation structural parameters, such as forest density, tree species, size, location, and within-pixel distribution. Through simulation we constructed regression models of leaf area index (LAI, R2=0.92) and forest density (R2=0.97) with narrow-band vegetation indices. Those models can be used to improve HyspIRI's suitability for consistent global vegetation structural assessments. The virtual scene and model can also be used in

  13. 4. Panama Mount. Note concrete ring and metal rail. Note ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. Panama Mount. Note concrete ring and metal rail. Note cliff erosion under foundation at left center. Looking 297° W. - Fort Funston, Panama Mounts for 155mm Guns, Skyline Boulevard & Great Highway, San Francisco, San Francisco County, CA

  14. Scene segmentation in a machine vision system for histopathology

    NASA Astrophysics Data System (ADS)

    Thompson, Deborah B.; Bartels, H. G.; Haddad, J. W.; Bartels, Peter H.

    1990-07-01

    Algorithms and procedures employed to attain reliable and exhaustive segmentation in histopathologic imagery of colon and prostate sections are detailed. The algorithms are controlled and selectively called by a scene segmentation expert system as part of a machine vision system for the diagnostic interpretation of histopathologic sections. At this time, effective segmentation of scenes of glandular tissues is produced, with the system being conservative in the identification of glands; for the segmentation of overlapping glandular nuclei an overall success rate of approximately 90% has been achieved.

  15. Robust pedestrian detection and tracking in crowded scenes

    NASA Astrophysics Data System (ADS)

    Lypetskyy, Yuriy

    2007-09-01

    This paper presents a vision-based tracking system developed for very crowded situations such as underground or railway stations. Our system consists of two main parts: searching for person candidates in single frames, and tracking them from frame to frame over the scene. This paper concentrates mostly on the tracking part and describes its core components in detail: trajectory prediction using KLT vectors or a Kalman filter, adaptive active shape model adjustment, and texture matching. We show that the combination of the presented algorithms leads to robust people tracking even in complex scenes with permanent occlusions.
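
    The Kalman-filter component predicts where a tracked person will be in the next frame before measurements (detections) arrive. A minimal one-coordinate constant-velocity sketch of that predict/update cycle (the state layout and noise values are illustrative assumptions, not the paper's tuning; a real tracker would run this per target in both image axes):

```python
def kalman_1d(zs, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter in one coordinate.
    State: [position, velocity]; measurement: position only.
    zs: position measurements; q/r: process/measurement noise variances.
    Returns the filtered position estimates."""
    x = [zs[0], 0.0]                   # state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]      # estimate covariance
    out = []
    for z in zs:
        # Predict: position advances by velocity; covariance grows.
        x = [x[0] + x[1], x[1]]
        P = [[P[0][0] + P[1][0] + P[0][1] + P[1][1] + q, P[0][1] + P[1][1]],
             [P[1][0] + P[1][1], P[1][1] + q]]
        # Update with the position measurement z (H = [1, 0]).
        s = P[0][0] + r
        k = [P[0][0] / s, P[1][0] / s]      # Kalman gain
        y = z - x[0]                        # innovation
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
        out.append(x[0])
    return out
```

During occlusions a tracker would keep applying only the predict step, coasting the target along its estimated trajectory until detections resume.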

  16. Self-Actualization, Liberalism, and Humanistic Education.

    ERIC Educational Resources Information Center

    Porter, Charles Mack

    1979-01-01

    The relationship between personality factors and political orientation has long been of interest to psychologists. This study tests the hypothesis that there is no significant relationship between self-actualization and liberalism-conservatism. The hypothesis is supported. (Author)

  17. Improved canopy reflectance modeling and scene inference through improved understanding of scene pattern

    NASA Technical Reports Server (NTRS)

    Franklin, Janet; Simonett, David

    1988-01-01

    The Li-Strahler reflectance model, driven by LANDSAT Thematic Mapper (TM) data, provided regional estimates of tree size and density within 20 percent of sampled values in two bioclimatic zones in West Africa. This model exploits tree geometry in an inversion technique to predict average tree size and density from reflectance data using a few simple parameters measured in the field (spatial pattern, shape, and size distribution of trees) and in the imagery (spectral signatures of scene components). Trees are treated as simply shaped objects, and multispectral reflectance of a pixel is assumed to be related only to the proportions of tree crown, shadow, and understory in the pixel. These, in turn, are a direct function of the number and size of trees, the solar illumination angle, and the spectral signatures of crown, shadow and understory. Given the variance in reflectance from pixel to pixel within a homogeneous area of woodland, caused by the variation in the number and size of trees, the model can be inverted to give estimates of average tree size and density. Because the inversion is sensitive to correct determination of component signatures, predictions are not accurate for small areas.
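
    The inversion rests on treating a pixel's reflectance as a linear mixture of crown, shadow, and understory signatures, with the shadow fraction tied to the crown fraction through solar geometry. A one-band sketch (the function name, the shadow-to-crown ratio k, and the closed form are illustrative simplifications; the actual model works from multispectral component signatures and pixel-to-pixel variance):

```python
def crown_cover(r_pixel, r_crown, r_shadow, r_under, k):
    """Invert a one-band linear mixture
        R = fc*Rc + fs*Rs + fu*Ru,
    where solar geometry ties shadow to crown area (fs = k*fc) and the
    understory fills the rest (fu = 1 - fc - fs). Solving for the crown
    fraction fc gives a closed form."""
    denom = r_crown + k * r_shadow - (1 + k) * r_under
    return (r_pixel - r_under) / denom
```

Forward-simulating a pixel with a known crown fraction and inverting it recovers that fraction, which is the basic mechanism by which per-pixel reflectance maps to tree size and density estimates.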

  18. Note-Taking: Different Notes for Different Research Stages.

    ERIC Educational Resources Information Center

    Callison, Daniel

    2003-01-01

    Explains the need to teach students different strategies for taking notes for research, especially at the exploration and collecting information stages, based on Carol Kuhlthau's research process. Discusses format changes; using index cards; notes for live presentations or media presentations versus notes for printed sources; and forming focus…

  19. Emotional conflict in facial expression processing during scene viewing: an ERP study.

    PubMed

    Xu, Qiang; Yang, Yaping; Zhang, Entao; Qiao, Fuqiang; Lin, Wenyi; Liang, Ningjian

    2015-05-22

    Facial expressions are fundamental emotional stimuli, as they convey important information in social interaction. In everyday life, a face always appears in a complex context, and the scenes in which faces are embedded provide typical visual context. The aim of the present study was to investigate the processing of emotional conflict between facial expressions and emotional scenes by recording event-related potentials (ERPs). We found that when the scene was presented before the face-scene compound stimulus, the scene influenced facial expression processing. Specifically, emotionally incongruent (in-conflict) face-scene compound stimuli elicited larger fronto-central N2 amplitudes than emotionally congruent compound stimuli. The effect occurred in the post-perceptual stage of facial expression processing and reflected emotional conflict monitoring between emotional scenes and facial expressions. The present findings emphasize the importance of emotional scenes as a context factor in the study of facial expression processing. PMID:25747865

  1. Early childhood exposure to parental nudity and scenes of parental sexuality ("primal scenes"): an 18-year longitudinal study of outcome.

    PubMed

    Okami, P; Olmstead, R; Abramson, P R; Pendleton, L

    1998-08-01

    As part of the UCLA Family Lifestyles Project (FLS), 200 male and female children participated in an 18-year longitudinal outcome study of early childhood exposure to parental nudity and scenes of parental sexuality ("primal scenes"). At age 17-18, participants were assessed for levels of self-acceptance; relations with peers, parents, and other adults; antisocial and criminal behavior; substance use; suicidal ideation; quality of sexual relationships; and problems associated with sexual relations. No harmful "main effect" correlates of the predictor variables were found. A significant crossover Sex of Participant X Primal Scenes interaction was found such that boys exposed to primal scenes before age 6 had reduced risk of STD transmission or having impregnated someone in adolescence. In contrast, girls exposed to primal scenes before age 6 had increased risk of STD transmission or having become pregnant. A number of main effect trends in the data (nonsignificant at p < 0.05, following the Bonferroni correction) linked exposure to nudity and exposure to primal scenes with beneficial outcomes. However, a number of these findings were mediated by sex of participant interactions showing that the effects were attenuated or absent for girls. All effects were independent of family stability, pathology, or child-rearing ideology; sex of participant; SES; and beliefs and attitudes toward sexuality. Limitations of the data and of long-term regression studies in general are discussed, and the sex of participant interactions are interpreted speculatively. It is suggested that pervasive beliefs in the harmfulness of the predictor variables are exaggerated. PMID:9681119

  2. Unsupervised semantic indoor scene classification for robot vision based on context of features using Gist and HSV-SIFT

    NASA Astrophysics Data System (ADS)

    Madokoro, H.; Yamanashi, A.; Sato, K.

    2013-08-01

    This paper presents an unsupervised scene classification method for actualizing semantic recognition of indoor scenes. Background and foreground features are extracted using Gist and color scale-invariant feature transform (SIFT), respectively, as context-based feature representations. We used hue, saturation, and value SIFT (HSV-SIFT) because of its algorithmic simplicity and low computational cost. Our method creates bags of features by voting visual words created from both feature descriptors into a two-dimensional histogram. Moreover, our method generates labels as candidate categories for time-series images while maintaining stability and plasticity together. Automatic labeling of category maps can be realized by using labels created with adaptive resonance theory (ART) as teaching signals for counter-propagation networks (CPNs). We evaluated our method for semantic scene classification using the KTH image database for robot localization (KTH-IDOL), which is widely used for robot localization and navigation. The mean classification accuracies of Gist, gray SIFT, one-class support vector machines (OC-SVM), position-invariant robust features (PIRF), and our method are, respectively, 39.7%, 58.0%, 56.0%, 63.6%, and 79.4%; our method thus outperforms PIRF by 15.8 percentage points. Moreover, we applied our method to fine classification using our original mobile robot and obtained a mean classification accuracy of 83.2% for six zones.
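    The voting step in the abstract above — visual words from two descriptor streams accumulated into a two-dimensional histogram — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tiny k-means codebook, the codebook sizes, and the pairwise background/foreground voting rule are our assumptions.

    ```python
    import numpy as np

    def make_codebook(descriptors, k, iters=20, seed=0):
        """Tiny k-means to build a visual-word codebook (illustrative only)."""
        rng = np.random.default_rng(seed)
        centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
        for _ in range(iters):
            # assign each descriptor to its nearest center
            d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
            labels = d.argmin(axis=1)
            for j in range(k):
                pts = descriptors[labels == j]
                if len(pts):
                    centers[j] = pts.mean(axis=0)
        return centers

    def vote_2d_histogram(bg_desc, fg_desc, bg_book, fg_book):
        """Vote background (Gist-like) and foreground (SIFT-like) visual
        words into a joint 2D histogram, one axis per descriptor stream."""
        bg_words = np.linalg.norm(bg_desc[:, None] - bg_book[None], axis=2).argmin(1)
        fg_words = np.linalg.norm(fg_desc[:, None] - fg_book[None], axis=2).argmin(1)
        hist = np.zeros((len(bg_book), len(fg_book)))
        for b in bg_words:
            for f in fg_words:
                hist[b, f] += 1          # each word pair casts one vote
        return hist / hist.sum()         # normalized scene descriptor
    ```

    The normalized histogram could then serve as the input pattern for the ART labeling stage described in the abstract.
    
    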

  3. Disentangling scene content from spatial boundary: complementary roles for the parahippocampal place area and lateral occipital complex in representing real-world scenes.

    PubMed

    Park, Soojin; Brady, Timothy F; Greene, Michelle R; Oliva, Aude

    2011-01-26

    Behavioral and computational studies suggest that visual scene analysis rapidly produces a rich description of both the objects and the spatial layout of surfaces in a scene. However, there is still a large gap in our understanding of how the human brain accomplishes these diverse functions of scene understanding. Here we probe the nature of real-world scene representations using multivoxel functional magnetic resonance imaging pattern analysis. We show that natural scenes are analyzed in a distributed and complementary manner by the parahippocampal place area (PPA) and the lateral occipital complex (LOC) in particular, as well as other regions in the ventral stream. Specifically, we study the classification performance of different scene-selective regions using images that vary in spatial boundary and naturalness content. We discover that, whereas both the PPA and LOC can accurately classify scenes, they make different errors: the PPA more often confuses scenes that have the same spatial boundaries, whereas the LOC more often confuses scenes that have the same content. By demonstrating that visual scene analysis recruits distinct and complementary high-level representations, our results testify to distinct neural pathways for representing the spatial boundaries and content of a visual scene.

  4. Contextual Cueing in Naturalistic Scenes: Global and Local Contexts

    ERIC Educational Resources Information Center

    Brockmole, James R.; Castelhano, Monica S.; Henderson, John M.

    2006-01-01

    In contextual cueing, the position of a target within a group of distractors is learned over repeated exposure to a display with reference to a few nearby items rather than to the global pattern created by the elements. The authors contrasted the role of global and local contexts for contextual cueing in naturalistic scenes. Experiment 1 showed…

  5. Sexual Fundamentalism and Performances of Masculinity: An Ethnographic Scene Study

    ERIC Educational Resources Information Center

    Gallagher, Kathleen

    2006-01-01

    The study on which this paper is based examined the experiences of students in order to develop a theoretical and empirically grounded account of the dynamic social forces of inclusion and exclusion experienced by youth in their unique contexts of North American urban schooling. The ethnographic scenes, organized into four "beats," theatrically…

  6. A new modular optical system for large format scene projection

    NASA Astrophysics Data System (ADS)

    Alexay, Christopher C.; Palmer, Troy A.

    2006-05-01

    This work will present a new approach to large format projection optics suitable for HWIL testing. Aspects of the design's modular approach and its ability to accommodate widely varying spectral ranges, focal lengths, zoom capabilities and the ability to deliver multi-spectral scene data are presented.

  7. Dynamic scene stitching driven by visual cognition model.

    PubMed

    Zou, Li-hui; Zhang, Dezheng; Wulamu, Aziguli

    2014-01-01

    Dynamic scene stitching remains a great challenge when multiple motion interferences exist in the image acquisition system: global key information must be maintained without omission or deformation, yet object clipping, motion blur, and other synthetic defects easily occur in the final stitched image. In our research work, we proceed from the human visual cognitive mechanism and construct a hybrid-saliency-based cognitive model to automatically guide video volume stitching. The model consists of three elements of different visual stimuli, that is, intensity, edge contour, and scene depth saliencies. Combined with the manifold-based mosaicing framework, dynamic scene stitching is formulated as a cut-path optimization problem in a constructed space-time graph. The cutting energy function for column width selection is defined according to the proposed visual cognition model, and the optimum cut path minimizes the cognitive saliency difference throughout the whole video volume. The experimental results show that the method effectively avoids synthetic defects caused by different motion interferences and summarizes the key contents of the scene without loss. The proposed method makes full use of the human visual cognitive mechanism for stitching and is of high practical value for environmental surveillance and other applications.
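    The cut-path optimization described above can be illustrated with a much-simplified sketch: a dynamic-programming search for the minimum-cost connected path through a 2D cost map, standing in for the saliency-difference energy on the space-time graph. The actual method uses a richer hybrid-saliency energy and the manifold mosaicing framework; this toy version only shows the shape of the optimization.

    ```python
    import numpy as np

    def optimal_cut_path(cost):
        """Minimum-cost vertical cut path through a 2D cost map
        (rows = frames/time, cols = candidate cut columns), found by
        dynamic programming with 8-connected column moves."""
        rows, cols = cost.shape
        acc = cost.astype(float).copy()        # accumulated cost
        back = np.zeros((rows, cols), dtype=int)
        for r in range(1, rows):
            for c in range(cols):
                lo, hi = max(0, c - 1), min(cols, c + 2)
                j = int(np.argmin(acc[r - 1, lo:hi])) + lo
                back[r, c] = j
                acc[r, c] += acc[r - 1, j]
        # backtrack from the cheapest column in the last row
        path = [int(np.argmin(acc[-1]))]
        for r in range(rows - 1, 0, -1):
            path.append(int(back[r, path[-1]]))
        return path[::-1]
    ```

    With a cost map whose low-cost valley marks low saliency difference, the returned path is the seam along which frames can be stitched with the least visible disruption.
    
    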

  8. Semantic Control of Feature Extraction from Natural Scenes

    PubMed Central

    2014-01-01

    In the early stages of image analysis, visual cortex represents scenes as spatially organized maps of locally defined features (e.g., edge orientation). As image reconstruction unfolds and features are assembled into larger constructs, cortex attempts to recover semantic content for object recognition. It is conceivable that higher level representations may feed back onto early processes and retune their properties to align with the semantic structure projected by the scene; however, there is no clear evidence to either support or discard the applicability of this notion to the human visual system. Obtaining such evidence is challenging because low and higher level processes must be probed simultaneously within the same experimental paradigm. We developed a methodology that targets both levels of analysis by embedding low-level probes within natural scenes. Human observers were required to discriminate probe orientation while semantic interpretation of the scene was selectively disrupted via stimulus inversion or reversed playback. We characterized the orientation tuning properties of the perceptual process supporting probe discrimination; tuning was substantially reshaped by semantic manipulation, demonstrating that low-level feature detectors operate under partial control from higher level modules. The manner in which such control was exerted may be interpreted as a top-down predictive strategy whereby global semantic content guides and refines local image reconstruction. We exploit the novel information gained from data to develop mechanistic accounts of unexplained phenomena such as the classic face inversion effect. PMID:24501376

  9. Semantic Categorization Precedes Affective Evaluation of Visual Scenes

    ERIC Educational Resources Information Center

    Nummenmaa, Lauri; Hyona, Jukka; Calvo, Manuel G.

    2010-01-01

    We compared the primacy of affective versus semantic categorization by using forced-choice saccadic and manual response tasks. Participants viewed paired emotional and neutral scenes involving humans or animals flashed rapidly in extrafoveal vision. Participants were instructed to categorize the targets by saccading toward the location occupied by…

  10. The Hidden Agenda: The Behind-the-Scenes Employees.

    ERIC Educational Resources Information Center

    Deal, Terrence E.

    1994-01-01

    College and university personnel managers are urged to pay more attention to employees who operate behind the scenes by: finding a champion among them; linking work with institutional mission; hiring the best; encouraging customer service; soliciting ideas; fostering trust; enlarging responsibility; not upstaging; providing the best equipment; and…

  11. The Nature of Change Detection and Online Representations of Scenes

    ERIC Educational Resources Information Center

    Ryan, Jennifer D.; Cohen, Neal J.

    2004-01-01

    This article provides evidence for implicit change detection and for the contribution of multiple memory sources to online representations. Multiple eye-movement measures distinguished original from changed scenes, even when college students had no conscious awareness for the change. Patients with amnesia showed a systematic deficit on 1 class of…

  12. Memory, emotion, and pupil diameter: Repetition of natural scenes.

    PubMed

    Bradley, Margaret M; Lang, Peter J

    2015-09-01

    Recent studies have suggested that pupil diameter, like the "old-new" ERP, may be a measure of memory. Because the amplitude of the old-new ERP is enhanced for items encoded in the context of repetitions that are distributed (spaced), compared to massed (contiguous), we investigated whether pupil diameter is similarly sensitive to repetition. Emotional and neutral pictures of natural scenes were viewed once or repeated with massed (contiguous) or distributed (spaced) repetition during incidental free viewing and then tested on an explicit recognition test. Although an old-new difference in pupil diameter was found during successful recognition, pupil diameter was not enhanced for distributed, compared to massed, repetitions during either recognition or initial free viewing. Moreover, whereas a significant old-new difference was found for erotic scenes that had been seen only once during encoding, this difference was absent when erotic scenes were repeated. Taken together, the data suggest that pupil diameter is not a straightforward index of prior occurrence for natural scenes. PMID:25943211

  13. Independence of color and luminance edges in natural scenes.

    PubMed

    Hansen, Thorsten; Gegenfurtner, Karl R

    2009-01-01

    Form vision is traditionally regarded as processing primarily achromatic information. Previous investigations into the statistics of color and luminance in natural scenes have claimed that luminance and chromatic edges are not independent of each other and that any chromatic edge most likely occurs together with a luminance edge of similar strength. Here we computed the joint statistics of luminance and chromatic edges in over 700 calibrated color images from natural scenes. We found that isoluminant edges exist in natural scenes and were not rarer than pure luminance edges. Most edges combined luminance and chromatic information but to varying degrees such that luminance and chromatic edges were statistically independent of each other. Independence increased along successive stages of visual processing from cones via postreceptoral color-opponent channels to edges. The results show that chromatic edge contrast is an independent source of information that can be linearly combined with other cues for the proper segmentation of objects in natural and artificial vision systems. Color vision may have evolved in response to the natural scene statistics to gain access to this independent information. PMID:19152717

  14. LOFT-related semiscale test scene. Water has been dyed red. Hot ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LOFT-related semiscale test scene. Water has been dyed red. Hot steam blowdown exits semiscale at TAN-609 at A&M complex. Edge of building is along left edge of view. Date: 1971. INEEL negative no. 71-376 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  15. The Rescue Mission: Assigning Guilt to a Chaotic Scene.

    ERIC Educational Resources Information Center

    Procter, David E.

    1987-01-01

    Seeks to identify rhetorical distinctiveness of the rescue mission as a form of belligerency--examining presidential discourse justifying the 1985 Lebanon intervention, the 1965 Dominican intervention, and the 1983 Grenada intervention. Argues that the distinction is in guilt narrowly assigned to a chaotic scene and the concomitant call for…

  16. Semi-Supervised Multitask Learning for Scene Recognition.

    PubMed

    Lu, Xiaoqiang; Li, Xuelong; Mou, Lichao

    2015-09-01

    Scene recognition has been widely studied to understand visual information at the level of objects and their relationships. Many methods have been proposed for scene recognition; however, they have difficulty improving accuracy, mainly due to two limitations: 1) lack of analysis of intrinsic relationships across different scales, say, the initial input and its down-sampled versions, and 2) the existence of redundant features. This paper develops a semi-supervised learning mechanism to reduce these two limitations. To address the first limitation, we propose a multitask model to integrate scene images of different resolutions. For the second limitation, we build a model of sparse feature selection-based manifold regularization (SFSMR) to select the optimal information and preserve the underlying manifold structure of the data. SFSMR combines the advantages of sparse feature selection and manifold regularization. Finally, we link the multitask model and SFSMR and propose a semi-supervised learning method that reduces both limitations. Experimental results show improved accuracy in scene recognition. PMID:25423664

  17. Publishing in '63: Looking for Relevance in a Changing Scene

    ERIC Educational Resources Information Center

    Reynolds, Thomas

    2008-01-01

    In this article, the author examines various publications published in 1963 in an attempt to look for relevance in a changing publication scene. The author considers Gordon Parks's reportorial photographs and accompanying personal essay, "What Their Cry Means to Me," as an act of publishing with implications for the teaching of written…

  18. 47 CFR 80.1127 - On-scene communications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    47 Telecommunication 5 (2012-10-01): On-scene communications. Section 80.1127, FEDERAL COMMUNICATIONS COMMISSION (CONTINUED), SAFETY AND SPECIAL RADIO SERVICES, STATIONS IN THE MARITIME SERVICES, Global Maritime Distress and Safety System (GMDSS), Operating...

  19. Scene Context Dependency of Pattern Constancy of Time Series Imagery

    NASA Technical Reports Server (NTRS)

    Woodell, Glenn A.; Jobson, Daniel J.; Rahman, Zia-ur

    2008-01-01

    A fundamental element of future generic pattern recognition technology is the ability to extract similar patterns for the same scene despite wide-ranging extraneous variables, including lighting, turbidity, sensor exposure variations, and signal noise. In the process of demonstrating pattern constancy of this kind for retinex/visual servo (RVS) image enhancement processing, we found that the pattern constancy performance depended somewhat on scene content. Most notably, the scene topography and, in particular, the scale and extent of the topography in an image, affects the pattern constancy the most. This paper explores these effects in more depth and presents experimental data from several time-series tests. These results further quantify the impact of topography on pattern constancy. Despite this residual inconstancy, the results of overall pattern constancy testing support the idea that RVS image processing can be a universal front end for generic visual pattern recognition. While the effects on pattern constancy were significant, RVS processing still achieves a high degree of pattern constancy over a wide spectrum of scene content diversity and wide-ranging extraneous variations in lighting, turbidity, and sensor exposure.

  20. Behind the Scenes at Berkeley Lab - The Mechanical Fabrication Facility

    SciTech Connect

    Wells, Russell; Chavez, Pete; Davis, Curtis; Bentley, Brian

    2013-05-17

    Part of the Behind the Scenes series at Berkeley Lab, this video highlights the lab's mechanical fabrication facility and its exceptional ability to produce unique tools essential to the lab's scientific mission. Through a combination of skilled craftsmanship and precision equipment, machinists and engineers work with scientists to create exactly what's needed - whether it's measured in microns or meters.

  2. Forensic DNA Evidence at a Crime Scene: An Investigator's Commentary.

    PubMed

    Blozis, J

    2010-07-01

    The purpose of this article is twofold. The first is to present a law enforcement perspective of the importance of a crime scene, the value of probative evidence, and how to properly recognize, document, and collect evidence. The second purpose is to provide forensic scientists who primarily work in laboratories with insight into how law enforcement personnel process a crime scene. Of all the technological advances in the various disciplines associated with forensic science, none have been more spectacular than those in the field of DNA. The development of sophisticated and sensitive instrumentation has enabled forensic scientists to detect DNA profiles from minute samples of evidence in a much timelier manner. In forensic laboratories, safeguards and protocols associated with ASCLD/LAB International, Forensic Quality Services, and/or ISO/IEC 17020:1998 accreditation have been established and implemented to ensure proper case analysis. But no scientist, no instrumentation, and no laboratory can come to a successful conclusion about evidence if that evidence has been compromised or simply missed at a crime scene. Evidence collectors must be trained thoroughly to process a scene and to be able to distinguish between probative and non-probative evidence. I am a firm believer in the phrase "garbage in is garbage out." One of the evidence collector's main goals is not only to recover enough DNA so that a CODIS-eligible profile can be generated to identify an offender but also, more importantly, to recover sufficient DNA to exonerate the innocent.

  3. Logical unit and scene detection: a comparative survey

    NASA Astrophysics Data System (ADS)

    Petersohn, Christian

    2008-01-01

    Logical units are semantic video segments above the shot level. Depending on the common semantics within the unit and data domain, different types of logical unit extraction algorithms have been presented in literature. Topic units are typically extracted for documentaries or news broadcasts while scenes are extracted for narrative-driven video such as feature films, sitcoms, or cartoons. Other types of logical units are extracted from home video and sports. Different algorithms in literature used for the extraction of logical units are reviewed in this paper based on the categories unit type, data domain, features used, segmentation method, and thresholds applied. A detailed comparative study is presented for the case of extracting scenes from narrative-driven video. While earlier comparative studies focused on scene segmentation methods only or on complete news-story segmentation algorithms, in this paper various visual features and segmentation methods with their thresholding mechanisms and their combination into complete scene detection algorithms are investigated. The performance of the resulting large set of algorithms is then evaluated on a set of video files including feature films, sitcoms, children's shows, a detective story, and cartoons.

  4. Design of a wide field of view infrared scene projector

    NASA Astrophysics Data System (ADS)

    Jiang, Zhenyu; Li, Lin; Huang, YiFan

    2008-03-01

    In order to make the projected scene cover the seeker's field of view promptly, conventional projection optical systems for hardware-in-the-loop simulation tests usually depend on a five-axis flight motion simulator. Such flight-motion-simulator tables are controlled via servomechanisms, which require many axis position transducers and electromechanical devices; the structure and control procedure of the system are therefore complicated, and mechanical motion and control errors are hard to avoid entirely. Target image jitter is induced by vibration of the mechanical platform, and the frequency response is limited by the structural performance. To overcome these defects, a new infrared image projection system for hardware-in-the-loop simulation tests is presented in this paper. The system consists of multiple lenses joined side by side on a spherical surface. Each lens uses one IR image generator, such as a resistor array. Every IR image generator displays a specific IR image controlled by the scene simulation computer, which distributes the needed image to each generator, so the scene detected by the missile seeker is integrated and uninterrupted. The entrance pupil of the seeker lies at the center of the sphere. Almost a hemispherical scene range can be achieved by the projection system, and the total field of view can be extended by increasing the number of lenses. However, the luminance uniformity within the field of view is influenced by the joints between the lenses; a method of controlling this uniformity is studied in this paper, and the required luminous exitance of each resistor array is analyzed. Experiments show that the new method is applicable to hardware-in-the-loop simulation tests.

  5. Comparative Analyses of Live-Action and Animated Film Remake Scenes: Finding Alternative Film-Based Teaching Resources

    ERIC Educational Resources Information Center

    Champoux, Joseph E.

    2005-01-01

    Live-action and animated film remake scenes can show many topics typically taught in organizational behaviour and management courses. This article discusses, analyses and compares such scenes to identify parallel film scenes useful for teaching. The analysis assesses the scenes to decide which scene type, animated or live-action, more effectively…

  6. Sensory Substitution: The Spatial Updating of Auditory Scenes “Mimics” the Spatial Updating of Visual Scenes

    PubMed Central

    Pasqualotto, Achille; Esenkaya, Tayfun

    2016-01-01

    Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or “soundscapes”. Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgment of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices (SSD). PMID:27148000

  7. Realizing actual feedback control of complex network

    NASA Astrophysics Data System (ADS)

    Tu, Chengyi; Cheng, Yuhua

    2014-06-01

    In this paper, we present the concept of feedbackability and show how to identify the minimum feedbackability set (MFS) of an arbitrary complex directed network. Furthermore, we design an estimator and a feedback controller accessing one MFS to realize actual feedback control, i.e. to drive the system to a desired state using the internal state estimated from the estimator's output. Finally, we perform numerical simulations of a small linear time-invariant network and of a simple real food network to verify the theoretical results. The framework presented here enables actual feedback control of an arbitrary complex directed network and deepens our understanding of complex systems.
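    As a toy illustration of estimator-based feedback of the kind the abstract describes (not the paper's network construction), consider a two-state linear time-invariant system controlled through a Luenberger observer: the controller acts on the state estimate reconstructed from a scalar output. The matrices and gains below are hand-picked for this sketch.

    ```python
    import numpy as np

    def simulate(A, B, C, K, L, x0, steps=2000, dt=0.01):
        """Euler simulation of x' = Ax + Bu under observer-based feedback.
        The Luenberger observer xhat' = A xhat + B u + L (y - C xhat)
        estimates x from the output y = C x; the control is u = -K xhat."""
        x = np.array(x0, dtype=float)
        xhat = np.zeros_like(x)
        for _ in range(steps):
            u = -K @ xhat                  # feedback on the *estimated* state
            y = C @ x                      # measured output
            x = x + dt * (A @ x + B @ u)
            xhat = xhat + dt * (A @ xhat + B @ u + L @ (y - C @ xhat))
        return x

    # Double-integrator toy system; K places closed-loop poles at -1, -1,
    # and L places observer poles at -2, -2 (assumed values for this sketch).
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    K = np.array([[1.0, 2.0]])
    L = np.array([[4.0], [4.0]])
    x_final = simulate(A, B, C, K, L, x0=[1.0, 0.0])
    ```

    After 20 simulated seconds the true state has been driven to the origin even though the controller never observes it directly, which is the essence of "actual feedback control" via an estimator.
    
    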

  8. A Note on Divergences.

    PubMed

    Liang, Xiao

    2016-10-01

    In many areas of neural computation, like learning, optimization, estimation, and inference, suitable divergences play a key role. In this note, we study the conjecture presented by Amari ( 2009 ) and find a counterexample to show that the conjecture does not hold generally. Moreover, we investigate two classes of [Formula: see text]-divergence (Zhang, 2004 ), weighted f-divergence and weighted [Formula: see text]-divergence, and prove that if a divergence is a weighted f-divergence, as well as a Bregman divergence, then it is a weighted [Formula: see text]-divergence. This result reduces in form to the main theorem established by Amari ( 2009 ) when [Formula: see text] [Formula: see text].

  9. Supervised and unsupervised MRF based 3D scene classification in multiple view airborne oblique images

    NASA Astrophysics Data System (ADS)

    Gerke, M.; Xiao, J.

    2013-10-01

    In this paper we develop and compare two methods for scene classification in 3D object space; that is, rather than classifying single image pixels, we classify voxels that carry geometric, textural, and color information collected from airborne oblique images and derived products such as point clouds from dense image matching. The first method is supervised, i.e. it relies on training data provided by an operator; we use Random Trees for the actual training and prediction tasks. The second method is unsupervised and thus requires no user interaction: we formulate the classification task as a Markov random field problem and employ graph cuts for the optimization procedure. Two test areas are used to evaluate both techniques. In the Haiti dataset we are confronted with largely destroyed built-up areas, since the images were taken after the earthquake of January 2010, while in the second case we use images taken over Enschede, a typical Central European city. For the Haiti case it is difficult to provide clear class definitions, and this is reflected in the overall classification accuracy: 73% for the supervised and only 59% for the unsupervised method. When classes are defined less ambiguously, as in the Enschede area, results are much better (85% vs. 78%). Overall the results are acceptable, especially considering that the point cloud used for geometric features is of limited quality and that no infrared channel is available to support vegetation classification.
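    The unsupervised branch above minimizes a Markov-random-field energy with graph cuts. As a hedged stand-in (iterated conditional modes instead of graph cuts, a 2D grid of cells instead of 3D voxels), the sketch below shows the shape of that computation: per-cell unary label costs plus a Potts smoothness penalty between neighbors.

    ```python
    import numpy as np

    def icm_potts(unary, lam=1.0, iters=10):
        """Iterated conditional modes on a 2D grid MRF with a Potts
        smoothness term -- a simple substitute for the graph-cut optimizer.
        unary: (H, W, K) array of per-cell costs for each of K labels."""
        H, W, K = unary.shape
        labels = unary.argmin(axis=2)            # initialize from unary costs
        for _ in range(iters):
            for i in range(H):
                for j in range(W):
                    cost = unary[i, j].copy()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            # Potts penalty for disagreeing with a neighbor
                            cost += lam * (np.arange(K) != labels[ni, nj])
                    labels[i, j] = cost.argmin()
        return labels
    ```

    With a noisy unary term, the smoothness penalty lets neighboring cells override isolated misclassified cells, which is exactly the regularizing effect the MRF formulation buys over per-voxel prediction.
    
    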

  10. Bag of Lines (BoL) for Improved Aerial Scene Representation

    SciTech Connect

    Sridharan, Harini; Cheriyadat, Anil M.

    2014-09-22

    Feature representation is a key step in automated visual content interpretation. In this letter, we present a robust feature representation technique, referred to as bag of lines (BoL), for high-resolution aerial scenes. The proposed technique involves extracting and compactly representing low-level line primitives from the scene. The compact scene representation is generated by counting the different types of lines representing various linear structures in the scene. Through extensive experiments, we show that the proposed scene representation is invariant to scale changes and scene conditions and can discriminate urban scene categories accurately. We compare the BoL representation with the popular scale invariant feature transform (SIFT) and Gabor wavelets for their classification and clustering performance on an aerial scene database consisting of images acquired by sensors with different spatial resolutions. The proposed BoL representation outperforms the SIFT- and Gabor-based representations.
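    The counting scheme above — a compact descriptor built by tallying the different types of line primitives in a scene — can be sketched as follows. The orientation/length quantization used here as the "line type" is our simplifying assumption; the paper's actual categories of linear structures may differ.

    ```python
    import numpy as np

    def bag_of_lines(lines, n_orient=4, n_len=3, max_len=100.0):
        """Represent a scene by counting its line primitives.
        lines: iterable of (x1, y1, x2, y2) segments. Each segment votes
        into a bin indexed by quantized (undirected) orientation and
        quantized length; the normalized counts form a fixed-length
        descriptor, invariant to where in the image the lines appear."""
        lines = np.asarray(lines, dtype=float)
        dx, dy = lines[:, 2] - lines[:, 0], lines[:, 3] - lines[:, 1]
        theta = np.mod(np.arctan2(dy, dx), np.pi)   # fold to [0, pi)
        length = np.hypot(dx, dy)
        ob = np.minimum((theta / np.pi * n_orient).astype(int), n_orient - 1)
        lb = np.minimum((length / max_len * n_len).astype(int), n_len - 1)
        hist = np.zeros((n_orient, n_len))
        np.add.at(hist, (ob, lb), 1)                # count each line type
        return (hist / hist.sum()).ravel()          # normalized descriptor
    ```

    Two urban scenes dominated by long vertical and horizontal structures would yield similar histograms regardless of image scale, which is the invariance property the abstract claims for the BoL representation.
    
    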

  12. Contextual effects of scene on the visual perception of object orientation in depth.

    PubMed

    Niimi, Ryosuke; Watanabe, Katsumi

    2013-01-01

    We investigated the effect of background scene on the human visual perception of depth orientation (i.e., azimuth angle) of three-dimensional common objects. Participants evaluated the depth orientation of objects. The objects were surrounded by scenes with an apparent axis of the global reference frame, such as a sidewalk scene. When a scene axis was slightly misaligned with the gaze line, object orientation perception was biased, as if the gaze line had been assimilated into the scene axis (Experiment 1). When the scene axis was slightly misaligned with the object, evaluated object orientation was biased, as if it had been assimilated into the scene axis (Experiment 2). This assimilation may be due to confusion between the orientation of the scene and object axes (Experiment 3). Thus, the global reference frame may influence object orientation perception when its orientation is similar to that of the gaze-line or object.

  13. Developing Human Resources through Actualizing Human Potential

    ERIC Educational Resources Information Center

    Clarken, Rodney H.

    2012-01-01

    The key to human resource development is in actualizing individual and collective thinking, feeling and choosing potentials related to our minds, hearts and wills respectively. These capacities and faculties must be balanced and regulated according to the standards of truth, love and justice for individual, community and institutional development,…

  14. [Actual diet of patients with gastrointestinal diseases].

    PubMed

    Loranskaia, T I; Shakhovskaia, A K; Pavliuchkova, M S

    2000-01-01

    The study of the actual nutrition of patients with erosive-ulcerative lesions of the gastroduodenal zone and of patients with an operated ulcer revealed defects in these patients' intake of essential nutrients: overconsumption of animal fat and refined carbohydrates, and deficiency of oil, vitamins A, B2, C, and D, and dietary fiber.

  15. Humanistic Education and Self-Actualization Theory.

    ERIC Educational Resources Information Center

    Farmer, Rod

    1984-01-01

    Stresses the need for theoretical justification for the development of humanistic education programs in today's schools. Explores Abraham Maslow's hierarchy of needs and theory of self-actualization. Argues that Maslow's theory may be the best available for educators concerned with educating the whole child. (JHZ)

  16. Group Counseling for Self-Actualization.

    ERIC Educational Resources Information Center

    Streich, William H.; Keeler, Douglas J.

    Self-concept, creativity, growth orientation, an integrated value system, and receptiveness to new experiences are considered to be crucial variables to the self-actualization process. A regular, year-long group counseling program was conducted with 85 randomly selected gifted secondary students in the Farmington, Connecticut Public Schools. A…

  17. Teenagers' Perceived and Actual Probabilities of Pregnancy.

    ERIC Educational Resources Information Center

    Namerow, Pearila Brickner; And Others

    1987-01-01

    Explored adolescent females' (N=425) actual and perceived probabilities of pregnancy. Subjects estimated their likelihood of becoming pregnant the last time they had intercourse, and indicated the dates of last intercourse and last menstrual period. Found that the distributions of perceived probability of pregnancy were nearly identical for both…

  18. Scene-Selectivity and Retinotopy in Medial Parietal Cortex

    PubMed Central

    Silson, Edward H.; Steel, Adam D.; Baker, Chris I.

    2016-01-01

    Functional imaging studies in humans reliably identify a trio of scene-selective regions, one on each of the lateral [occipital place area (OPA)], ventral [parahippocampal place area (PPA)], and medial [retrosplenial complex (RSC)] cortical surfaces. Recently, we demonstrated differential retinotopic biases for the contralateral lower and upper visual fields within OPA and PPA, respectively. Here, using functional magnetic resonance imaging, we combine detailed mapping of both population receptive fields (pRF) and category-selectivity, with independently acquired resting-state functional connectivity analyses, to examine scene and retinotopic processing within medial parietal cortex. We identified a medial scene-selective region, which was contained largely within the posterior and ventral bank of the parieto-occipital sulcus (POS). While this region is typically referred to as RSC, the spatial extent of our scene-selective region typically did not extend into retrosplenial cortex, and thus we adopt the term medial place area (MPA) to refer to this visually defined scene-selective region. Intriguingly, MPA co-localized with a region identified solely on the basis of retinotopic sensitivity using pRF analyses. We found that MPA demonstrates a significant contralateral visual field bias, coupled with large pRF sizes. Unlike OPA and PPA, MPA did not show a consistent bias to a single visual quadrant. MPA also co-localized with a region identified by strong differential functional connectivity with PPA and the human face-selective fusiform face area (FFA), commensurate with its functional selectivity. Functional connectivity with OPA was much weaker than with PPA, and similar to that with face-selective occipital face area (OFA), suggesting a closer link with ventral than lateral cortex. Consistent with prior research, we also observed differential functional connectivity in medial parietal cortex for anterior over posterior PPA, as well as a region on the lateral

  19. Scene-Selectivity and Retinotopy in Medial Parietal Cortex.

    PubMed

    Silson, Edward H; Steel, Adam D; Baker, Chris I

    2016-01-01

    Functional imaging studies in humans reliably identify a trio of scene-selective regions, one on each of the lateral [occipital place area (OPA)], ventral [parahippocampal place area (PPA)], and medial [retrosplenial complex (RSC)] cortical surfaces. Recently, we demonstrated differential retinotopic biases for the contralateral lower and upper visual fields within OPA and PPA, respectively. Here, using functional magnetic resonance imaging, we combine detailed mapping of both population receptive fields (pRF) and category-selectivity, with independently acquired resting-state functional connectivity analyses, to examine scene and retinotopic processing within medial parietal cortex. We identified a medial scene-selective region, which was contained largely within the posterior and ventral bank of the parieto-occipital sulcus (POS). While this region is typically referred to as RSC, the spatial extent of our scene-selective region typically did not extend into retrosplenial cortex, and thus we adopt the term medial place area (MPA) to refer to this visually defined scene-selective region. Intriguingly, MPA co-localized with a region identified solely on the basis of retinotopic sensitivity using pRF analyses. We found that MPA demonstrates a significant contralateral visual field bias, coupled with large pRF sizes. Unlike OPA and PPA, MPA did not show a consistent bias to a single visual quadrant. MPA also co-localized with a region identified by strong differential functional connectivity with PPA and the human face-selective fusiform face area (FFA), commensurate with its functional selectivity. Functional connectivity with OPA was much weaker than with PPA, and similar to that with face-selective occipital face area (OFA), suggesting a closer link with ventral than lateral cortex. Consistent with prior research, we also observed differential functional connectivity in medial parietal cortex for anterior over posterior PPA, as well as a region on the lateral

  20. Recognition of Natural Scenes from Global Properties: Seeing the Forest without Representing the Trees

    ERIC Educational Resources Information Center

    Greene, Michelle R.; Oliva, Aude

    2009-01-01

    Human observers are able to rapidly and accurately categorize natural scenes, but the representation mediating this feat is still unknown. Here we propose a framework of rapid scene categorization that does not segment a scene into objects and instead uses a vocabulary of global, ecological properties that describe spatial and functional aspects…

  1. Mirth and Murder: Crime Scene Investigation as a Work Context for Examining Humor Applications

    ERIC Educational Resources Information Center

    Roth, Gene L.; Vivona, Brian

    2010-01-01

    Within work settings, humor is used by workers for a wide variety of purposes. This study examines humor applications of a specific type of worker in a unique work context: crime scene investigation. Crime scene investigators examine death and its details. Members of crime scene units observe death much more frequently than other police officers…

  2. The Nesting of Search Contexts within Natural Scenes: Evidence from Contextual Cuing

    ERIC Educational Resources Information Center

    Brooks, Daniel I.; Rasmussen, Ian P.; Hollingworth, Andrew

    2010-01-01

    In a contextual cuing paradigm, we examined how memory for the spatial structure of a natural scene guides visual search. Participants searched through arrays of objects that were embedded within depictions of real-world scenes. If a repeated search array was associated with a single scene during study, then array repetition produced significant…

  3. Radiometric calibration procedures for a wideband infrared scene projector (WISP)

    NASA Astrophysics Data System (ADS)

    Flynn, David S.; Marlow, Steven A.; Bergin, Thomas P.; Kircher, James R.

    1999-07-01

    The Wideband Infrared Scene Projector (WISP) has been undergoing development for the Kinetic-Kill Vehicle Hardware-in-the-Loop Simulator facility at Eglin AFB, Florida. In order to perform realistic tests of an infrared seeker, the radiometric output of the WISP system must produce the same response in the seeker as the real scene. In order to ensure this radiometric realism, calibration procedures must be established and followed. This paper describes calibration procedures that have been used in recent tests. The procedures require knowledge of the camera spectral response in the seeker under test. The camera is set up to operate over the desired range of observable radiances. The camera is then nonuniformity corrected (NUCed) and calibrated with an extended blackbody. The camera drift rates are characterized, and as necessary, the camera is reNUCed and recalibrated. The camera is then set up to observe the WISP system, and calibration measurements are made of the camera/WISP system.
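
At its core, calibrating camera counts against an extended blackbody is a linear fit. The sketch below is a hypothetical gain/offset model for illustration; the actual WISP procedures also cover nonuniformity correction and drift characterization, which are not shown here.

```python
def fit_linear_calibration(radiances, counts):
    """Least-squares fit of counts = gain * radiance + offset; returns (gain, offset).

    radiances: known blackbody radiances; counts: mean camera responses at each.
    """
    n = len(radiances)
    mx = sum(radiances) / n
    my = sum(counts) / n
    sxx = sum((x - mx) ** 2 for x in radiances)
    sxy = sum((x - mx) * (y - my) for x, y in zip(radiances, counts))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

def counts_to_radiance(c, gain, offset):
    """Invert the calibration to recover scene radiance from raw counts."""
    return (c - offset) / gain

# Synthetic blackbody measurements generated from counts = 200 * L + 50.
L_vals = [1.0, 2.0, 3.0, 4.0]
C_vals = [250.0, 450.0, 650.0, 850.0]
gain, offset = fit_linear_calibration(L_vals, C_vals)
```

Once gain and offset are known, projector output can be commanded so that the seeker under test reports the same radiance it would for the real scene.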

  4. Similarity-based global optimization of buildings in urban scene

    NASA Astrophysics Data System (ADS)

    Zhu, Quansheng; Zhang, Jing; Jiang, Wanshou

    2013-10-01

    In this paper, an approach for the similarity-based global optimization of buildings in urban scenes is presented. In the past, most research concentrated on single-building reconstruction, making it difficult to reconstruct reliable models from noisy or incomplete point clouds. To obtain a better result, a new trend is to utilize the similarity among buildings. Therefore, a new similarity detection and global optimization strategy is adopted to correct local-fitting geometric errors. First, a hierarchical structure that consists of geometric, topological, and semantic features is constructed to represent complex roof models. Second, similar roof models are detected by combining primitive structure and connection similarities. Finally, the global optimization strategy is applied to preserve the consistency and precision of similar roof structures. Moreover, non-local consolidation is adopted to detect small roof parts. The experiments reveal that the proposed method can obtain convincing roof models and improve the reconstruction quality of 3D buildings in urban scenes.

  5. Natural auditory scene statistics shapes human spatial hearing

    PubMed Central

    Parise, Cesare V.; Knorre, Katharina; Ernst, Marc O.

    2014-01-01

    Human perception, cognition, and action are laced with seemingly arbitrary mappings. In particular, sound has a strong spatial connotation: Sounds are high and low, melodies rise and fall, and pitch systematically biases perceived sound elevation. The origins of such mappings are unknown. Are they the result of physiological constraints, do they reflect natural environmental statistics, or are they truly arbitrary? We recorded natural sounds from the environment, analyzed the elevation-dependent filtering of the outer ear, and measured frequency-dependent biases in human sound localization. We find that auditory scene statistics reveals a clear mapping between frequency and elevation. Perhaps more interestingly, this natural statistical mapping is tightly mirrored in both ear-filtering properties and in perceived sound location. This suggests that both sound localization behavior and ear anatomy are fine-tuned to the statistics of natural auditory scenes, likely providing the basis for the spatial connotation of human hearing. PMID:24711409

  6. Analysis of Agricultural Scenes Based on SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Nico, G.; Mascolo, L.; Pellegrinelli, A.; Giretti, D.; Soccodato, F. M.; Catalao, J.

    2015-05-01

    The aim of this work is to study the temporal behavior of interferometric coherence of natural scenes and use it to discriminate different classes of targets. The scattering properties of targets within a SAR resolution cell depend on their spatial distribution and dielectric constant. We focus on agricultural scenes. In the case of bare soils, the radar cross section depends on surface roughness and soil moisture. Both quantities are strongly related to agricultural practices. The interferometric coherence can be modelled as the factorization of correlation terms due to spatial and temporal baselines, terrain roughness, soil moisture and residual noise. We use multivariate analysis methodologies to discriminate scattering classes exhibiting different temporal behaviors of the interferometric coherence. For each class, the temporal evolution of the interferometric phase and radar cross-section are studied.
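
The interferometric coherence itself has a standard sample estimator: the normalized complex cross-correlation of two co-registered SAR images over an estimation window. A minimal sketch of that definition (not the authors' processing chain):

```python
def coherence(s1, s2):
    """Sample coherence |<s1 s2*>| / sqrt(<|s1|^2> <|s2|^2>) for complex pixels."""
    num = sum(a * b.conjugate() for a, b in zip(s1, s2))
    p1 = sum(abs(a) ** 2 for a in s1)
    p2 = sum(abs(b) ** 2 for b in s2)
    return abs(num) / (p1 * p2) ** 0.5

# Identical acquisitions are fully coherent, and a constant interferometric
# phase offset (here a unit-modulus factor) does not reduce coherence.
s = [1 + 1j, 2 - 1j, -1 + 0.5j, 0.5 + 2j]
rotated = [z * complex(0.6, 0.8) for z in s]
```

Decorrelation from changing surface roughness or soil moisture shows up as coherence dropping toward zero, which is what makes its temporal behavior usable as a class signature.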

  7. Scene-based Wave-front Sensing for Remote Imaging

    SciTech Connect

    Poyneer, L A; LaFortune, K; Chan, C

    2003-07-30

    Scene-based wave-front sensing (SBWFS) is a technique that allows an arbitrary scene to be used for wave-front sensing with adaptive optics (AO) instead of the normal point source. This makes AO feasible in a wide range of interesting scenarios. This paper first presents the basic concepts and properties of SBWFS. It then discusses the application of this technique with AO to remote imaging, for the specific case of correction of a lightweight optic. End-to-end simulation results establish that in this case, SBWFS can perform as well as point-source AO. Design considerations such as noise propagation, number of subapertures, and tracking changing image content are analyzed.
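
SBWFS replaces the point-source spot-position measurement with an estimate of the image shift between subaperture images, typically obtained by cross-correlation. The toy 1-D version below illustrates that idea under simplifying assumptions (integer shifts, circular wrap); real systems interpolate the correlation peak to sub-pixel precision.

```python
def estimate_shift(ref, img):
    """Estimate how far img is shifted (rightward) relative to ref.

    Maximizes the circular cross-correlation over integer shifts.
    """
    n = len(ref)
    best_s, best_c = 0, float("-inf")
    for s in range(-(n // 2), n // 2 + 1):
        c = sum(ref[i] * img[(i + s) % n] for i in range(n))
        if c > best_c:
            best_s, best_c = s, c
    return best_s

# A scene feature seen 2 samples further right in one subaperture image.
ref = [0, 0, 1, 3, 1, 0, 0, 0]
img = [0, 0, 0, 0, 1, 3, 1, 0]
```

Per-subaperture shifts like this play the role of the wave-front slope measurements that a Shack-Hartmann sensor derives from spot displacements.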

  8. Scene Illumination as an Indicator of Image Manipulation

    NASA Astrophysics Data System (ADS)

    Riess, Christian; Angelopoulou, Elli

    The goal of blind image forensics is to distinguish original and manipulated images. We propose illumination color as a new indicator for the assessment of image authenticity. Many images exhibit a combination of multiple illuminants (flash photography, mixture of indoor and outdoor lighting, etc.). In the proposed method, the user selects illuminated areas for further investigation. The illuminant colors are locally estimated, effectively decomposing the scene into a map of differently illuminated regions. Inconsistencies in such a map suggest possible image tampering. Our method is physics-based, which implies that the outcome of the estimation can be further constrained if additional knowledge on the scene is available. Experiments show that these illumination maps provide a useful and very general forensics tool for the analysis of color images.
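
For illustration only, a simple gray-world estimate can stand in for the per-region illuminant estimation step. This is a deliberate simplification, not the paper's physics-based estimator: it just takes the normalized mean RGB of each user-selected region, so that regions lit by noticeably different illuminants produce noticeably different estimates.

```python
def grayworld_illuminant(region):
    """Gray-world illuminant estimate: normalized mean RGB of a pixel region.

    (A much simpler stand-in for the physics-based estimator in the paper.)
    region: list of (r, g, b) tuples.
    """
    n = len(region)
    means = [sum(px[c] for px in region) / n for c in range(3)]
    norm = sum(means) or 1.0
    return [m / norm for m in means]

# Hypothetical regions: one lit by neutral flash, one by warm tungsten light.
flash_region = [(200, 200, 200), (180, 190, 185)]
tungsten_region = [(220, 160, 90), (210, 150, 80)]
e_flash = grayworld_illuminant(flash_region)
e_tungsten = grayworld_illuminant(tungsten_region)
```

A large gap between two regions' estimates in an image that should be uniformly lit is the kind of inconsistency the illumination map is meant to surface.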

  9. An intercomparison of artificial intelligence approaches for polar scene identification

    NASA Technical Reports Server (NTRS)

    Tovinkere, V. R.; Penaloza, M.; Logar, A.; Lee, J.; Weger, R. C.; Berendes, T. A.; Welch, R. M.

    1993-01-01

    The following six different artificial-intelligence (AI) approaches to polar scene identification are examined: (1) a feed forward back propagation neural network, (2) a probabilistic neural network, (3) a hybrid neural network, (4) a 'don't care' feed forward perceptron model, (5) a 'don't care' feed forward back propagation neural network, and (6) a fuzzy logic based expert system. The ten classes into which six AVHRR local-coverage arctic scenes were classified were: water, solid sea ice, broken sea ice, snow-covered mountains, land, stratus over ice, stratus over water, cirrus over water, cumulus over water, and multilayer cloudiness. It was found that the 'don't care' back propagation neural network produced the highest accuracies. This approach also has low CPU requirements.

  10. Better Batteries for Transportation: Behind the Scenes @ Berkeley Lab

    SciTech Connect

    Battaglia, Vince

    2011-01-01

    Vince Battaglia leads a behind-the-scenes tour of Berkeley Lab's BATT, the Batteries for Advanced Transportation Technologies Program he leads, where researchers aim to improve batteries upon which the range, efficiency, and power of tomorrow's electric cars will depend. This is the first in a forthcoming series of videos taking viewers into the laboratories and research facilities that members of the public rarely get to see.

  11. Device for imaging scenes with very large ranges of intensity

    DOEpatents

    Deason, Vance Albert

    2011-11-15

    A device for imaging scenes with a very large range of intensity having a pair of polarizers, a primary lens, an attenuating mask, and an imaging device optically connected along an optical axis. Preferably, a secondary lens, positioned between the attenuating mask and the imaging device is used to focus light on the imaging device. The angle between the first polarization direction and the second polarization direction is adjustable.
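
For ideal polarizers, the adjustable angle between the two polarization directions controls transmission through Malus's law, I = I0 cos²θ, which is what allows continuous attenuation of bright scene regions. This is a textbook relation used for illustration, not text taken from the patent:

```python
import math

def transmitted_intensity(i0, theta_deg):
    """Malus's law for an ideal polarizer pair: I = I0 * cos^2(theta)."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

# Aligned polarizers (0 deg) pass everything; crossed polarizers (90 deg)
# block everything; intermediate angles give a smooth attenuation range.
```

Rotating from 0 to 90 degrees therefore sweeps the attenuation factor over the full range from 1 down to 0, letting one imaging device cover a very large range of scene intensities.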

  12. Better Batteries for Transportation: Behind the Scenes @ Berkeley Lab

    ScienceCinema

    Battaglia, Vince

    2016-07-12

    Vince Battaglia leads a behind-the-scenes tour of Berkeley Lab's BATT, the Batteries for Advanced Transportation Technologies Program he leads, where researchers aim to improve batteries upon which the range, efficiency, and power of tomorrow's electric cars will depend. This is the first in a forthcoming series of videos taking viewers into the laboratories and research facilities that members of the public rarely get to see.

  13. Predicting the Valence of a Scene from Observers’ Eye Movements

    PubMed Central

    R.-Tavakoli, Hamed; Atyabi, Adham; Rantanen, Antti; Laukka, Seppo J.; Nefti-Meziani, Samia; Heikkilä, Janne

    2015-01-01

    Multimedia analysis benefits from understanding the emotional content of a scene in a variety of tasks such as video genre classification and content-based image retrieval. Recently, there has been an increasing interest in applying human bio-signals, particularly eye movements, to recognize the emotional gist of a scene such as its valence. In order to determine the emotional category of images using eye movements, the existing methods often learn a classifier using several features that are extracted from eye movements. Although it has been shown that eye movement is potentially useful for recognition of scene valence, the contribution of each feature is not well-studied. To address the issue, we study the contribution of features extracted from eye movements in the classification of images into pleasant, neutral, and unpleasant categories. We assess ten features and their fusion. The features are histogram of saccade orientation, histogram of saccade slope, histogram of saccade length, histogram of saccade duration, histogram of saccade velocity, histogram of fixation duration, fixation histogram, top-ten salient coordinates, and saliency map. We utilize a machine learning approach to analyze the performance of features by learning a support vector machine and exploiting various feature fusion schemes. The experiments reveal that ‘saliency map’, ‘fixation histogram’, ‘histogram of fixation duration’, and ‘histogram of saccade slope’ are the most contributing features. The selected features signify the influence of fixation information and angular behavior of eye movements in the recognition of the valence of images. PMID:26407322
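
One of the listed features, the histogram of saccade orientation, can be sketched directly from raw fixation coordinates. The bin count and normalization below are assumptions for illustration, not the paper's exact parameterization:

```python
import math

def saccade_orientation_histogram(fixations, n_bins=8):
    """Normalized histogram of saccade directions from consecutive fixations.

    fixations: list of (x, y) gaze positions in screen coordinates.
    Each consecutive pair defines one saccade; its direction is binned
    over [0, 2*pi), then counts are normalized to sum to 1.
    """
    hist = [0] * n_bins
    for (x1, y1), (x2, y2) in zip(fixations, fixations[1:]):
        angle = math.atan2(y2 - y1, x2 - x1) % (2 * math.pi)
        hist[min(int(angle / (2 * math.pi) * n_bins), n_bins - 1)] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

# Three fixations: one rightward saccade, then one upward saccade.
feat = saccade_orientation_histogram([(0, 0), (10, 0), (10, 10)])
```

Feature vectors like this, one per image viewing, are what the support vector machine consumes, either alone or fused with the other features.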

  14. Age differences in adults' scene memory: knowledge and strategy interactions.

    PubMed

    Azmitia, M; Perlmutter, M

    1988-08-01

    Three studies explored young and old adults' use of knowledge to support memory performance. Subjects viewed slides of familiar scenes containing high expectancy and low expectancy items and received free recall (Experiments 1, 2, and 3), cued recall (Experiments 1 and 2), and recognition (Experiments 1 and 2) tests. In Experiment 1 encoding intentionality was varied between subjects. Young adults performed better than old adults on all tests, but on all tests, both age groups produced a similar pattern of better memory of high expectancy than low expectancy items and showed an encoding intentionality effect for low expectancy items. In Experiments 2 and 3 all subjects were told to intentionally encode only one item from each scene; the remaining items could be encoded incidentally. Young adults performed better than old adults, although again, the pattern of performance of the two age groups was similar. High expectancy and low expectancy intentional items were recalled equally well, but high expectancy incidental items were recalled better than low expectancy incidental items. Low expectancy intentional items were recognized better than high expectancy intentional items, but incidental high expectancy items were recognized better than incidental low expectancy items. It was concluded that young and old adults use their knowledge in similar ways to guide scene memory. The effects of item expectancy and item intentionality were interpreted within Hasher & Zacks' (2) model of automatic and effortful processes. PMID:3228800

  15. Touching and Hearing Unseen Objects: Multisensory Effects on Scene Recognition

    PubMed Central

    van Lier, Rob

    2016-01-01

    In three experiments, we investigated the influence of object-specific sounds on haptic scene recognition without vision. Blindfolded participants had to recognize, through touch, spatial scenes comprising six objects that were placed on a round platform. Critically, in half of the trials, object-specific sounds were played when objects were touched (bimodal condition), while sounds were turned off in the other half of the trials (unimodal condition). After first exploring the scene, two objects were swapped and the task was to report which of the objects had swapped positions. In Experiment 1, geometrical objects and simple sounds were used, while in Experiment 2, the objects comprised toy animals that were matched with semantically compatible animal sounds. In Experiment 3, we replicated Experiment 1, but now a tactile-auditory object identification task preceded the experiment, in which participants learned to identify the objects based on tactile and auditory input. For each experiment, the results revealed a significant performance increase only after the switch from bimodal to unimodal. Thus, it appears that the release from bimodal identification, from audio-tactile to tactile-only, produces a benefit that is not achieved in the reversed order, in which sound is added after experience with haptic-only exploration. We conclude that task-related factors other than mere bimodal identification cause the facilitation when switching from bimodal to unimodal conditions. PMID:27698985

  16. Auditory Scene Analysis: The Sweet Music of Ambiguity

    PubMed Central

    Pressnitzer, Daniel; Suied, Clara; Shamma, Shihab A.

    2011-01-01

    In this review paper aimed at the non-specialist, we explore the use that neuroscientists and musicians have made of perceptual illusions based on ambiguity. The pivotal issue is auditory scene analysis (ASA), or what enables us to make sense of complex acoustic mixtures in order to follow, for instance, a single melody in the midst of an orchestra. In general, ASA uncovers the most likely physical causes that account for the waveform collected at the ears. However, the acoustical problem is ill-posed and it must be solved from noisy sensory input. Recently, the neural mechanisms implicated in the transformation of ambiguous sensory information into coherent auditory scenes have been investigated using so-called bistability illusions (where an unchanging ambiguous stimulus evokes a succession of distinct percepts in the mind of the listener). After reviewing some of those studies, we turn to music, which arguably provides some of the most complex acoustic scenes that a human listener will ever encounter. Interestingly, musicians will not always aim at making each physical source intelligible, but rather express one or more melodic lines with a small or large number of instruments. By means of a few musical illustrations and by using a computational model inspired by neuro-physiological principles, we suggest that this relies on a detailed (if perhaps implicit) knowledge of the rules of ASA and of its inherent ambiguity. We then put forward the opinion that some degree of perceptual ambiguity may participate in our appreciation of music. PMID:22174701

  17. Predicting the Valence of a Scene from Observers' Eye Movements.

    PubMed

    R-Tavakoli, Hamed; Atyabi, Adham; Rantanen, Antti; Laukka, Seppo J; Nefti-Meziani, Samia; Heikkilä, Janne

    2015-01-01

    Multimedia analysis benefits from understanding the emotional content of a scene in a variety of tasks such as video genre classification and content-based image retrieval. Recently, there has been an increasing interest in applying human bio-signals, particularly eye movements, to recognize the emotional gist of a scene such as its valence. In order to determine the emotional category of images using eye movements, the existing methods often learn a classifier using several features that are extracted from eye movements. Although it has been shown that eye movement is potentially useful for recognition of scene valence, the contribution of each feature is not well-studied. To address the issue, we study the contribution of features extracted from eye movements in the classification of images into pleasant, neutral, and unpleasant categories. We assess ten features and their fusion. The features are histogram of saccade orientation, histogram of saccade slope, histogram of saccade length, histogram of saccade duration, histogram of saccade velocity, histogram of fixation duration, fixation histogram, top-ten salient coordinates, and saliency map. We utilize a machine learning approach to analyze the performance of features by learning a support vector machine and exploiting various feature fusion schemes. The experiments reveal that 'saliency map', 'fixation histogram', 'histogram of fixation duration', and 'histogram of saccade slope' are the most contributing features. The selected features signify the influence of fixation information and angular behavior of eye movements in the recognition of the valence of images. PMID:26407322

  19. The Hip-Hop club scene: Gender, grinding and sex.

    PubMed

    Muñoz-Laboy, Miguel; Weinstein, Hannah; Parker, Richard

    2007-01-01

    Hip-Hop culture is a key social medium through which many young men and women from communities of colour in the USA construct their gender. In this study, we focused on the Hip-Hop club scene in New York City with the intention of unpacking narratives of gender dynamics from the perspective of young men and women, and how these relate to their sexual experiences. We conducted a three-year ethnographic study that included ethnographic observations of Hip-Hop clubs and their social scene, and in-depth interviews with young men and young women aged 15-21. This paper describes how young people negotiate gender relations on the dance floor of Hip-Hop clubs. The Hip-Hop club scene represents a context or setting where young men's masculinities are contested by the social environment, where women challenge hypermasculine privilege and where young people can set the stage for what happens next in their sexual and emotional interactions. Hip-Hop culture therefore provides a window into the gender and sexual scripts of many urban minority youth. A fuller understanding of these patterns can offer key insights into the social construction of sexual risk, as well as the possibilities for sexual health promotion, among young people in urban minority populations.

  20. Understanding the Radiant Scattering Behavior of Vegetated Scenes

    NASA Technical Reports Server (NTRS)

    Kimes, D. S. (Principal Investigator)

    1985-01-01

    Knowledge of the physics of the scattering behavior of vegetation will ultimately serve the remote sensing and earth science community in many ways. For example, it will provide: (1) insight and guidance in developing new extraction techniques of canopy characteristics, (2) a basis for better interpretation of off-nadir satellite and aircraft data, (3) a basis for defining specifications of future earth observing sensor systems, and (4) a basis for defining important aspects of physical and biological processes of the plant system. The overall objective of the three-year study is to improve our fundamental understanding of the dynamics of directional scattering properties of vegetation canopies through analysis of field data and model simulation data. The specific objectives are to: (1) collect directional reflectance data covering the entire exitance hemisphere for several common vegetation canopies with various geometric structure (both homogeneous and row crop structures), (2) develop a scene radiation model with a general mathematical framework which will treat 3-D variability in heterogeneous scenes and account for 3-D radiant interactions within the scene, (3) conduct validations of the model on collected data sets, and (4) test and expand proposed physical scattering mechanisms involved in reflectance distribution dynamics by analyzing both field and modeling data.

  1. Scene kinetics mitigation using factor analysis with derivative factors.

    SciTech Connect

    Larson, Kurt W.; Melgaard, David Kennett; Scholand, Andrew Joseph

    2010-07-01

    Line of sight jitter in staring sensor data combined with scene information can obscure critical information for change analysis or target detection. Consequently, before the data analysis, the jitter effects must be significantly reduced. Conventional principal component analysis (PCA) has been used to obtain basis vectors for background estimation; however, PCA requires image frames that contain the jitter variation that is to be modeled. Since jitter is usually chaotic and asymmetric, a data set containing all the variation without the changes to be detected is typically not available. An alternative approach, Scene Kinetics Mitigation, first obtains an image of the scene. Then it computes derivatives of that image in the horizontal and vertical directions. The basis set for estimation of the background and the jitter consists of the image and its derivative factors. This approach has several advantages, including: (1) only a small number of images are required to develop the model, (2) the model can estimate backgrounds with jitter different from the input training images, (3) the method is particularly effective for sub-pixel jitter, and (4) the model can be developed from images before the change detection process. In addition, the scores from projecting the factors on the background provide estimates of the jitter magnitude and direction for registration of the images. In this paper we will present a discussion of the theoretical basis for this technique, provide examples of its application, and discuss its limitations.
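
The derivative-factor idea rests on a first-order Taylor expansion: a frame jittered by a sub-pixel shift d is approximately the base image minus d times its spatial derivative, so projecting the residual onto the derivative factor recovers d. A 1-D sketch of that projection (the paper works in 2-D with separate horizontal and vertical factors):

```python
import math

def estimate_jitter_1d(base, frame):
    """Estimate sub-pixel shift d assuming frame ~= base - d * base'.

    Uses a circular central-difference derivative as the factor and
    least-squares projects the residual (frame - base) onto it.
    """
    n = len(base)
    deriv = [(base[(i + 1) % n] - base[(i - 1) % n]) / 2.0 for i in range(n)]
    num = sum(d * (f - b) for d, f, b in zip(deriv, frame, base))
    den = sum(d * d for d in deriv)
    return -num / den

# A smooth periodic base image and a copy shifted by 0.3 samples.
base = [math.sin(2 * math.pi * i / 32) for i in range(32)]
shift = 0.3
frame = [math.sin(2 * math.pi * (i - shift) / 32) for i in range(32)]
d_hat = estimate_jitter_1d(base, frame)
```

Because the model is linear in the shift, small sub-pixel jitter is where the approximation is tightest, matching the paper's observation that the method is particularly effective in that regime.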

  2. Higher-order scene statistics of breast images

    NASA Astrophysics Data System (ADS)

    Abbey, Craig K.; Sohl-Dickstein, Jascha N.; Olshausen, Bruno A.; Eckstein, Miguel P.; Boone, John M.

    2009-02-01

    Researchers studying human and computer vision have found description and construction of these systems greatly aided by analysis of the statistical properties of naturally occurring scenes. More specifically, it has been found that receptive fields with directional selectivity and bandwidth properties similar to mammalian visual systems are more closely matched to the statistics of natural scenes. It is argued that this allows for sparse representation of the independent components of natural images [Olshausen and Field, Nature, 1996]. These theories have important implications for medical image perception. For example, will a system that is designed to represent the independent components of natural scenes, where objects occlude one another and illumination is typically reflected, be appropriate for X-ray imaging, where features superimpose on one another and illumination is transmissive? In this research we begin to examine these issues by evaluating higher-order statistical properties of breast images from X-ray projection mammography (PM) and dedicated breast computed tomography (bCT). We evaluate kurtosis in responses of octave bandwidth Gabor filters applied to PM and to coronal slices of bCT scans. We find that kurtosis in PM rises and quickly saturates for filter center frequencies with an average value above 0.95. By contrast, kurtosis in bCT peaks near 0.20 cyc/mm with kurtosis of approximately 2. Our findings suggest that the human visual system may be tuned to represent breast tissue more effectively in bCT over a specific range of spatial frequencies.
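    The kurtosis-of-Gabor-responses measurement can be sketched in a few lines (an odd-phase Gabor with circular FFT convolution; the filter frequency, bandwidth, and test images are illustrative, not the paper's mammography data). Sparse, heavy-tailed structure produces kurtosis well above the Gaussian value of 3:

```python
import numpy as np

def gabor_kernel(freq, theta, sigma, size=13):
    """Odd-phase Gabor: sine carrier at `freq` cycles/pixel under a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.sin(2 * np.pi * freq * xr)

def filter_kurtosis(img, kernel):
    """Sample kurtosis (4th moment / variance^2) of the filter response."""
    resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, img.shape)))
    v = resp - resp.mean()
    return (v**4).mean() / (v**2).mean()**2

rng = np.random.default_rng(1)
kern = gabor_kernel(freq=0.15, theta=0.0, sigma=2.0)
noise = rng.standard_normal((64, 64))        # Gaussian image: kurtosis near 3
sparse = np.zeros((64, 64))                  # a few isolated features: heavy tails
sparse[rng.integers(0, 64, 5), rng.integers(0, 64, 5)] = 1.0
k_noise = filter_kurtosis(noise, kern)
k_sparse = filter_kurtosis(sparse, kern)
```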

  3. An adaptive correspondence algorithm for modeling scenes with strong interreflections.

    PubMed

    Xu, Yi; Aliaga, Daniel G

    2009-01-01

    Modeling real-world scenes, beyond diffuse objects, plays an important role in computer graphics, virtual reality, and other commercial applications. One active approach is projecting binary patterns in order to obtain correspondence and reconstruct a densely sampled 3D model. In such structured-light systems, determining whether a pixel is directly illuminated by the projector is essential to decoding the patterns. When a scene has abundant indirect light, this process is especially difficult. In this paper, we present a robust pixel classification algorithm for this purpose. Our method correctly establishes the lower and upper bounds of the possible intensity values of an illuminated pixel and of a non-illuminated pixel. Based on the two intervals, our method classifies a pixel by determining whether its intensity is within one interval but not in the other. Our method performs better than the standard method because it avoids the gross decoding errors caused by strong inter-reflections. For the remaining uncertain pixels, we apply an iterative algorithm to reduce the inter-reflection within the scene, so that more points can be decoded and reconstructed after each iteration. Moreover, the iterative algorithm is carried out in an adaptive fashion for fast convergence.
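    The interval test described in the abstract reduces to a small three-way classifier. In the paper the interval bounds are derived from the scene's direct and indirect illumination; the bounds and intensities below are made up for illustration:

```python
def classify_pixel(intensity, on_interval, off_interval):
    """Classify a pixel as lit/unlit only when its intensity falls in
    exactly one of the two possible-intensity intervals."""
    in_on = on_interval[0] <= intensity <= on_interval[1]
    in_off = off_interval[0] <= intensity <= off_interval[1]
    if in_on and not in_off:
        return "lit"
    if in_off and not in_on:
        return "unlit"
    return "uncertain"  # left for the iterative interreflection-reduction pass
```

Pixels returned as "uncertain" are exactly those the paper revisits after each iteration of inter-reflection reduction.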

  4. Touching and Hearing Unseen Objects: Multisensory Effects on Scene Recognition

    PubMed Central

    van Lier, Rob

    2016-01-01

    In three experiments, we investigated the influence of object-specific sounds on haptic scene recognition without vision. Blindfolded participants had to recognize, through touch, spatial scenes comprising six objects that were placed on a round platform. Critically, in half of the trials, object-specific sounds were played when objects were touched (bimodal condition), while sounds were turned off in the other half of the trials (unimodal condition). After the scene was first explored, two objects were swapped, and the task was to report which of the objects had swapped positions. In Experiment 1, geometrical objects and simple sounds were used, while in Experiment 2, the objects comprised toy animals that were matched with semantically compatible animal sounds. In Experiment 3, we replicated Experiment 1, but a tactile-auditory object identification task preceded the experiment, in which participants learned to identify the objects based on tactile and auditory input. In each experiment, the results revealed a significant performance increase only after the switch from the bimodal to the unimodal condition. Thus, the release from bimodal (audio-tactile) to unimodal (tactile-only) identification appears to produce a benefit that is not achieved in the reversed order, in which sound is added after experience with haptics alone. We conclude that task-related factors other than mere bimodal identification cause the facilitation when switching from bimodal to unimodal conditions.

  5. Scene interpretation module for an active vision system

    NASA Astrophysics Data System (ADS)

    Remagnino, P.; Matas, J.; Illingworth, John; Kittler, Josef

    1993-08-01

    In this paper an implementation of a high level symbolic scene interpreter for an active vision system is considered. The scene interpretation module uses low level image processing and feature extraction results to achieve object recognition and to build up a 3D environment map. The module is structured to exploit spatio-temporal context provided by existing partial world interpretations and has spatial reasoning to direct gaze control and thereby achieve efficient and robust processing using spatial focus of attention. The system builds and maintains an awareness of an environment which is far larger than a single camera view. Experiments on image sequences have shown that the system can: establish its position and orientation in a partially known environment, track simple moving objects such as cups and boxes, temporally integrate recognition results to establish or forget object presence, and utilize spatial focus of attention to achieve efficient and robust object recognition. The system has been extensively tested using images from a single steerable camera viewing a simple table top scene containing box and cylinder-like objects. Work is currently progressing to further develop its competences and interface it with the Surrey active stereo vision head, GETAFIX.

  6. From image statistics to scene gist: evoked neural activity reveals transition from low-level natural image structure to scene category.

    PubMed

    Groen, Iris I A; Ghebreab, Sennay; Prins, Hielke; Lamme, Victor A F; Scholte, H Steven

    2013-11-27

    The visual system processes natural scenes in a split second. Part of this process is the extraction of "gist," a global first impression. It is unclear, however, how the human visual system computes this information. Here, we show that, when human observers categorize global information in real-world scenes, the brain exhibits strong sensitivity to low-level summary statistics. Subjects rated a specific instance of a global scene property, naturalness, for a large set of natural scenes while EEG was recorded. For each individual scene, we derived two physiologically plausible summary statistics by spatially pooling local contrast filter outputs: contrast energy (CE), indexing contrast strength, and spatial coherence (SC), indexing scene fragmentation. We show that behavioral performance is directly related to these statistics, with naturalness rating being influenced in particular by SC. At the neural level, both statistics parametrically modulated single-trial event-related potential amplitudes during an early, transient window (100-150 ms), but SC continued to influence activity levels later in time (up to 250 ms). In addition, the magnitude of neural activity that discriminated between man-made versus natural ratings of individual trials was related to SC, but not CE. These results suggest that global scene information may be computed by spatial pooling of responses from early visual areas (e.g., LGN or V1). The increased sensitivity over time to SC in particular, which reflects scene fragmentation, suggests that this statistic is actively exploited to estimate scene naturalness.
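    A rough operationalization of the two summary statistics: the paper spatially pools local contrast-filter outputs; as a simplification of the authors' Weibull-based statistics, the sketch below uses gradient magnitude as the local contrast, CE as its mean, and a coefficient-of-variation fragmentation proxy in place of SC. Names and test images are ours, not the paper's:

```python
import numpy as np

def ce_sc(img):
    """Contrast energy (mean local contrast) and a spatial-coherence proxy
    (dispersion of local contrast; higher = more fragmented scene)."""
    gy, gx = np.gradient(img.astype(float))
    contrast = np.hypot(gx, gy).ravel()
    ce = contrast.mean()
    sc = contrast.std() / (ce + 1e-12)
    return ce, sc

# A gradual sinusoidal scene versus a fragmented blocky scene.
x, y = np.meshgrid(np.linspace(0, 2 * np.pi, 64), np.linspace(0, 2 * np.pi, 64))
smooth = np.sin(x)
rng = np.random.default_rng(2)
blocks = rng.integers(0, 2, (8, 8)).repeat(8, 0).repeat(8, 1).astype(float)
ce_s, sc_s = ce_sc(smooth)
ce_b, sc_b = ce_sc(blocks)
```

The fragmented image yields the higher dispersion statistic, matching the abstract's reading of SC as an index of scene fragmentation.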

  8. Hangar no. 2 west door detail view. Note tracks. Note ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Hangar no. 2 west door detail view. Note tracks. Note box structures on doors for door opening mechanisms. Looking 4 N. - Marine Corps Air Station Tustin, Southern Lighter Than Air Ship Hangar, Near intersection of Windmill Road & Johnson Street, Tustin, Orange County, CA

  9. Online Class Size, Note Reading, Note Writing and Collaborative Discourse

    ERIC Educational Resources Information Center

    Qiu, Mingzhu; Hewitt, Jim; Brett, Clare

    2012-01-01

    Researchers have long recognized class size as affecting students' performance in face-to-face contexts. However, few studies have examined the effects of class size on exact reading and writing loads in online graduate-level courses. This mixed-methods study examined relationships among class size, note reading, note writing, and collaborative…

  10. Reproducing Actual Morphology of Planetary Lava Flows

    NASA Astrophysics Data System (ADS)

    Miyamoto, H.; Sasaki, S.

    1996-03-01

    Assuming that lava flows behave as non-isothermal laminar Bingham fluids, we developed a numerical code of lava flows. We take the self gravity effects and cooling mechanisms into account. The calculation method is a kind of cellular automata using a reduced random space method, which can eliminate the mesh shape dependence. We can calculate large scale lava flows precisely without numerical instability and reproduce morphology of actual lava flows.

  11. The Actual Apollo 13 Prime Crew

    NASA Technical Reports Server (NTRS)

    1970-01-01

    The actual Apollo 13 lunar landing mission prime crew, from left to right: Commander, James A. Lovell Jr.; Command Module pilot, John L. Swigert Jr.; and Lunar Module pilot, Fred W. Haise Jr. The original Command Module pilot for this mission was Thomas 'Ken' Mattingly Jr., but due to exposure to German measles he was replaced by his backup, Command Module pilot John L. 'Jack' Swigert Jr.

  12. Scene classification of infrared images based on texture feature

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao; Bai, Tingzhu; Shang, Fei

    2008-12-01

    Scene classification refers to assigning a physical scene to one of a set of predefined categories. Texture features provide a useful basis for classifying scenes. Texture can be considered to be repeating patterns of local variation in pixel intensities, and texture analysis is important in many applications of computer image analysis for classification or segmentation of images based on local spatial variations of intensity. Texture describes the structural information of images, so it provides data complementary to the spectrum for classification. Infrared thermal imagers are now used in many different fields. Since infrared images of objects reflect their own thermal radiation, infrared images have some shortcomings: poor contrast between objects and background, blurred edges, considerable noise, and so on. These shortcomings make it difficult to extract texture features from infrared images. In this paper we develop a texture-feature-based algorithm to classify scenes in infrared images. Texture is extracted using the Gabor wavelet transform, which has excellent capability for analyzing local frequency and orientation; Gabor wavelets are chosen for their biological relevance and technical properties. In the first place, after introducing the Gabor wavelet transform and texture analysis methods, texture features are extracted from the infrared images by the Gabor wavelet transform, exploiting the multi-scale property of the Gabor filter. In the second place, we take the means and standard deviations at multiple scales and orientations as texture parameters. The last stage is classification of the scene texture parameters with a least squares support vector machine (LS-SVM) algorithm. SVM is based on the principle of structural risk minimization (SRM). Compared with SVM, LS-SVM has overcome the shortcoming of
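    The final LS-SVM stage can be sketched as a single dense linear solve (Suykens' classification formulation replaces SVM's quadratic program with a linear system). This is a generic minimal implementation, not the paper's; the RBF width, regularization value, and toy data are illustrative:

```python
import numpy as np

def rbf(A, B, s=1.0):
    """RBF kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s**2))

def lssvm_train(X, y, gamma=10.0, s=1.0):
    """Solve [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1]."""
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * rbf(X, X, s)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], np.ones(n)]))
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvm_predict(X, Xtr, ytr, b, alpha, s=1.0):
    """sign(sum_i alpha_i y_i K(x, x_i) + b)."""
    return np.sign(rbf(X, Xtr, s) @ (alpha * ytr) + b)

# Toy two-class "texture parameter" vectors.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
b, alpha = lssvm_train(X, y)
pred = lssvm_predict(X, X, y, b, alpha)
```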

  13. N400 brain responses to spoken phrases paired with photographs of scenes: implications for visual scene displays in AAC systems.

    PubMed

    Wilkinson, Krista M; Stutzman, Allyson; Seisler, Andrea

    2015-03-01

    Augmentative and alternative communication (AAC) systems are often implemented for individuals whose speech cannot meet their full communication needs. One type of aided display is called a Visual Scene Display (VSD). VSDs consist of integrated scenes (such as photographs) in which language concepts are embedded. Often, the representations of concepts on VSDs are perceptually similar to their referents. Given this physical resemblance, one may ask how well VSDs support development of symbolic functioning. We used brain imaging techniques to examine whether matches and mismatches between the content of spoken messages and photographic images of scenes evoke neural activity similar to activity that occurs to spoken or written words. Electroencephalography (EEG) was recorded from 15 college students who were shown photographs paired with spoken phrases that were either matched or mismatched to the concepts embedded within each photograph. Of interest was the N400 component, a negative deflecting wave 400 ms post-stimulus that is considered to be an index of semantic functioning. An N400 response in the mismatched condition (but not the matched) would replicate brain responses to traditional linguistic symbols. An N400 was found, exclusively in the mismatched condition, suggesting that mismatches between spoken messages and VSD-type representations set the stage for the N400 in ways similar to traditional linguistic symbols.

  15. Research Notes and Information References

    SciTech Connect

    Hartley, III, Dean S.

    1994-12-01

    The RNS (Research Notes System) is a set of programs and databases designed to aid the research worker in gathering, maintaining, and using notes taken from the literature. The sources for the notes can be books, journal articles, reports, private conversations, conference papers, audiovisuals, etc. The system ties the databases together in a relational structure, thus eliminating data redundancy while providing full access to all the information. The programs provide the means for access and data entry in a way that reduces the key-entry burden for the user. Each note has several data fields. Included are the text of the note, the subject classification (for retrieval), and the reference identification data. These data are divided into four databases: Document data - title, author, publisher, etc., fields to identify the article within the document; Note data - text and page of the note; Subject data - subject categories to ensure uniform spelling for searches. Additionally, there are subsidiary files used by the system, including database index and temporary work files. The system provides multiple access routes to the notes, both structurally (access method) and topically (through cross-indexing). Output may be directed to a printer or saved as a file for input to word processing software.

  16. Air resistance measurements on actual airplane parts

    NASA Technical Reports Server (NTRS)

    Wieselsberger, C.

    1923-01-01

    For the calculation of the parasite resistance of an airplane, a knowledge of the resistance of the individual structural and accessory parts is necessary. The most reliable basis for this is given by tests with actual airplane parts at airspeeds which occur in practice. The data given here relate to the landing gear of a Siemens-Schuckert DI airplane; the landing gear of a 'Luftfahrzeug-Gesellschaft' airplane (type Roland Dlla); the landing gear of a 'Flugzeugbau Friedrichshafen' G airplane; a machine gun; and the exhaust manifold of a 269 HP engine.

  17. Explosive Percolation Transition is Actually Continuous

    NASA Astrophysics Data System (ADS)

    da Costa, R. A.; Dorogovtsev, S. N.; Goltsev, A. V.; Mendes, J. F. F.

    2010-12-01

    Recently a discontinuous percolation transition was reported in a new “explosive percolation” problem for irreversible systems [D. Achlioptas, R. M. D’Souza, and J. Spencer, Science 323, 1453 (2009), doi:10.1126/science.1167782] in striking contrast to ordinary percolation. We consider a representative model which shows that the explosive percolation transition is actually a continuous, second order phase transition though with a uniquely small critical exponent of the percolation cluster size. We describe the unusual scaling properties of this transition and find its critical exponents and dimensions.

  18. The capture and recreation of 3D auditory scenes

    NASA Astrophysics Data System (ADS)

    Li, Zhiyun

    The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research is concerned with sound capture via a spherical microphone array. The advantage of this array is that it can be steered digitally in any 3D direction with the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation, and robustness constraints. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating the scenes by exploiting the reciprocity principle that holds between the two processes. Our approach makes the system easy to build and practical. Using this approach, we can capture the 3D sound field with a spherical microphone array and recreate it using a spherical loudspeaker array, ensuring that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted for other regular or semi-regular layouts of microphones. In addition, we extend this approach to a headphone-based system. Design examples and simulation results are presented to verify our algorithms. Prototypes are built and tested in real-world auditory scenes.
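    The digitally steerable array can be illustrated with a minimal narrowband phase-shift (delay-and-sum) beamformer, far simpler than the spherical-harmonic design described above, and with made-up geometry and frequency. Steering toward the true arrival direction aligns the per-microphone phases and maximizes output power:

```python
import numpy as np

def steering_vector(mics, direction, freq, c=343.0):
    """Per-microphone phases of a narrowband plane wave from unit `direction`."""
    return np.exp(1j * 2 * np.pi * freq * (mics @ direction) / c)

def beam_power(snapshot, mics, direction, freq):
    """Normalized delay-and-sum output power when steered at `direction`."""
    v = steering_vector(mics, direction, freq)
    return np.abs(np.vdot(v, snapshot)) ** 2 / len(mics) ** 2

# 32 microphones on a 10 cm sphere; a noise-free 2 kHz plane wave from +z.
rng = np.random.default_rng(4)
mics = rng.standard_normal((32, 3))
mics = 0.1 * mics / np.linalg.norm(mics, axis=1, keepdims=True)
src = np.array([0.0, 0.0, 1.0])
snapshot = steering_vector(mics, src, 2000.0)
p_on = beam_power(snapshot, mics, src, 2000.0)                     # steered at source
p_off = beam_power(snapshot, mics, np.array([1.0, 0.0, 0.0]), 2000.0)
```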

  19. Human matching performance of genuine crime scene latent fingerprints.

    PubMed

    Thompson, Matthew B; Tangen, Jason M; McCarthy, Duncan J

    2014-02-01

    There has been very little research into the nature and development of fingerprint matching expertise. Here we present the results of an experiment testing the claimed matching expertise of fingerprint examiners. Expert (n = 37), intermediate trainee (n = 8), new trainee (n = 9), and novice (n = 37) participants performed a fingerprint discrimination task involving genuine crime scene latent fingerprints, their matches, and highly similar distractors, in a signal detection paradigm. Results show that qualified, court-practicing fingerprint experts were exceedingly accurate compared with novices. Experts showed a conservative response bias, tending to err on the side of caution by making more errors of the sort that could allow a guilty person to escape detection than errors of the sort that could falsely incriminate an innocent person. The superior performance of experts was not simply a function of their ability to match prints, per se, but a result of their ability to identify the highly similar, but nonmatching, fingerprints as such. Comparing these results with previous experiments, experts were even more conservative in their decision making when dealing with these genuine crime scene prints than when dealing with simulated crime scene prints, and this conservatism made them relatively less accurate overall. Intermediate trainees, despite their lack of qualification and an average of 3.5 years' experience, performed about as accurately as qualified experts, who had an average of 17.5 years' experience. New trainees, despite their 5-week full-time training course or their 6 months of experience, were no better than novices at discriminating matching and similar nonmatching prints; they were just more conservative. Further research is required to determine the precise nature of fingerprint matching expertise and the factors that influence performance.
The findings of this representative, lab-based experiment may have implications for the way fingerprint examiners testify in
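    The signal detection quantities behind these results, sensitivity d' and criterion c (where c > 0 indicates the conservative bias the abstract describes), follow directly from hit and false-alarm rates. The rates below are illustrative, not the study's data:

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """d' = z(H) - z(FA); criterion c = -(z(H) + z(FA)) / 2 (c > 0 = conservative)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -0.5 * (z(hit_rate) + z(fa_rate))

d_unbiased, c_unbiased = dprime_and_criterion(0.9, 0.1)   # symmetric errors: c = 0
d_conserv, c_conserv = dprime_and_criterion(0.7, 0.02)    # many misses, few false alarms
```

An examiner who misses matches far more often than they falsely incriminate produces c > 0, the pattern reported for the experts here.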

  20. Rapid discrimination of visual scene content in the human brain.

    PubMed

    Anokhin, Andrey P; Golosheykin, Simon; Sirevaag, Erik; Kristjansson, Sean; Rohrbaugh, John W; Heath, Andrew C

    2006-06-01

    The rapid evaluation of complex visual environments is critical for an organism's adaptation and survival. Previous studies have shown that emotionally significant visual scenes, both pleasant and unpleasant, elicit a larger late positive wave in the event-related brain potential (ERP) than emotionally neutral pictures. The purpose of the present study was to examine whether neuroelectric responses elicited by complex pictures discriminate between specific, biologically relevant contents of the visual scene and to determine how early in the picture processing this discrimination occurs. Subjects (n = 264) viewed 55 color slides differing in both scene content and emotional significance. No categorical judgments or responses were required. Consistent with previous studies, we found that emotionally arousing pictures, regardless of their content, produce a larger late positive wave than neutral pictures. However, when pictures were further categorized by content, anterior ERP components in a time window between 200 and 600 ms following stimulus onset showed a high selectivity for pictures with erotic content compared to other pictures regardless of their emotional valence (pleasant, neutral, and unpleasant) or emotional arousal. The divergence of ERPs elicited by erotic and non-erotic contents started at 185 ms post-stimulus in the fronto-central midline region, with a later onset in parietal regions. This rapid, selective, and content-specific processing of erotic materials and its dissociation from other pictures (including emotionally positive pictures) suggests the existence of a specialized neural network for prioritized processing of a distinct category of biologically relevant stimuli with high adaptive and evolutionary significance. PMID:16712815
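    The ERP component measures used here reduce to averaging the waveform within a post-stimulus window (e.g. the 200-600 ms window in which content selectivity appeared). The synthetic waveforms below stand in for real condition averages; names and values are ours:

```python
import numpy as np

def mean_amplitude(erp, times, t0, t1):
    """Mean amplitude of `erp` over the window t0 <= t < t1 (times in seconds)."""
    mask = (times >= t0) & (times < t1)
    return erp[mask].mean()

times = np.arange(-0.1, 0.8, 0.002)               # 500 Hz sampling, -100..798 ms
late_wave = np.exp(-((times - 0.4) / 0.15) ** 2)  # a late positive deflection
erp_arousing = 6.0 * late_wave                    # larger late positivity
erp_neutral = 3.0 * late_wave
amp_a = mean_amplitude(erp_arousing, times, 0.2, 0.6)
amp_n = mean_amplitude(erp_neutral, times, 0.2, 0.6)
```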

  1. Optic flow aided navigation and 3D scene reconstruction

    NASA Astrophysics Data System (ADS)

    Rollason, Malcolm

    2013-10-01

    An important enabler for low cost airborne systems is the ability to exploit low cost inertial instruments. An Inertial Navigation System (INS) can provide a navigation solution, when GPS is denied, by integrating measurements from inertial sensors. However, the gyrometer and accelerometer biases of low cost inertial sensors cause compound errors in the integrated navigation solution. This paper describes experiments to establish whether (and to what extent) the navigation solution can be aided by fusing measurements from an on-board video camera with measurements from the inertial sensors. The primary aim of the work was to establish whether optic flow aided navigation is beneficial even when the 3D structure within the observed scene is unknown. A further aim was to investigate whether an INS can help to infer 3D scene content from video. Experiments with both real and synthetic data have been conducted. Real data was collected using an AR Parrot quadrotor. Empirical results illustrate that optic flow provides a useful aid to navigation even when the 3D structure of the observed scene is not known. With optic flow aiding of the INS, the computed trajectory is consistent with the true camera motion, whereas the unaided INS yields a rapidly increasing position error (the data represents ~40 seconds, after which the unaided INS is ~50 metres in error and has passed through the ground). The results of the Monte Carlo simulation concur with the empirical result. Position errors, which grow as a quadratic function of time when unaided, are substantially checked by the availability of optic flow measurements.
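    The unaided error growth quoted above follows from double-integrating a constant accelerometer bias: the position error is (1/2)*b*t^2, a quadratic in time. A small numerical check; the 0.0625 m/s^2 bias is chosen only so the closed form reproduces the ~50 m at ~40 s figure, and is not a measured sensor value:

```python
def position_error(bias, duration, dt=0.01):
    """Euler double-integration of a constant accelerometer bias (m/s^2)."""
    v = p = 0.0
    for _ in range(int(duration / dt)):
        v += bias * dt  # velocity error grows linearly
        p += v * dt     # position error grows quadratically
    return p

bias = 0.0625                      # m/s^2, illustrative low-cost-sensor scale
numeric = position_error(bias, 40.0)
analytic = 0.5 * bias * 40.0**2    # closed form: 50.0 m after 40 s
```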

  2. Variability of eye movements when viewing dynamic natural scenes.

    PubMed

    Dorr, Michael; Martinetz, Thomas; Gegenfurtner, Karl R; Barth, Erhardt

    2010-01-01

    How similar are the eye movement patterns of different subjects when free viewing dynamic natural scenes? We collected a large database of eye movements from 54 subjects on 18 high-resolution videos of outdoor scenes and measured their variability using the Normalized Scanpath Saliency, which we extended to the temporal domain. Even though up to about 80% of subjects looked at the same image region in some video parts, variability usually was much greater. Eye movements on natural movies were then compared with eye movements in several control conditions. "Stop-motion" movies had almost identical semantic content as the original videos but lacked continuous motion. Hollywood action movie trailers were used to probe the upper limit of eye movement coherence that can be achieved by deliberate camera work, scene cuts, etc. In a "repetitive" condition, subjects viewed the same movies ten times each over the course of 2 days. Results show several systematic differences between conditions both for general eye movement parameters such as saccade amplitude and fixation duration and for eye movement variability. Most importantly, eye movements on static images are initially driven by stimulus onset effects and later, more so than on continuous videos, by subject-specific idiosyncrasies; eye movements on Hollywood movies are significantly more coherent than those on natural movies. We conclude that the stimuli types often used in laboratory experiments, static images and professionally cut material, are not very representative of natural viewing behavior. All stimuli and gaze data are publicly available at http://www.inb.uni-luebeck.de/tools-demos/gaze. PMID:20884493
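    The Normalized Scanpath Saliency used here z-scores a saliency (or fixation-density) map and averages it at fixated locations, so chance level is 0 and higher values mean more coherent viewing. A minimal spatial version (the map and fixation lists are synthetic):

```python
import numpy as np

def nss(saliency, fixations):
    """Mean z-scored saliency at (row, col) fixation locations."""
    z = (saliency - saliency.mean()) / saliency.std()
    return np.mean([z[r, c] for r, c in fixations])

# One salient hotspot; coherent viewers fixate it, scattered viewers do not.
yy, xx = np.mgrid[0:64, 0:64]
saliency = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 5.0**2))
coherent = [(32, 32), (33, 31), (30, 32)]
rng = np.random.default_rng(5)
scattered = list(zip(rng.integers(0, 64, 20), rng.integers(0, 64, 20)))
nss_coherent = nss(saliency, coherent)
nss_scattered = nss(saliency, scattered)
```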

  3. Concepts using optical MEMS array for ladar scene projection

    NASA Astrophysics Data System (ADS)

    Smith, J. Lynn

    2003-09-01

    Scene projection for HITL testing of LADAR seekers is unique because the third dimension is time delay. Advancement at AFRL of electronic delay and pulse shaping circuits, VCSEL emitters, fiber optics, and associated scene generation is underway, and technology hand-off to test facilities is expected eventually. However, the size and cost currently projected call for cost mitigation through further innovation in system design, incorporating new developments, cooperation, and leveraging of dual-purpose technology. Therefore a concept is offered which greatly reduces the number (and thus cost) of pulse shaping circuits and enables the projector to be installed on the mobile arm of a flight motion simulator table without fiber optic cables. The concept calls for an optical MEMS (micro-electromechanical system) steerable micro-mirror array. IFOVs are clusters of four micro-mirrors, each of which steers through a unique angle to a selected light source with the appropriate delay and waveform basis. An array of such sources promotes angle-to-delay mapping. Separate pulse waveform basis circuits for each scene IFOV are not required because a single set of basis functions is broadcast to all MEMS elements simultaneously. Waveform delivery to spatial filtering and collimation optics is addressed by angular selection at the MEMS array. Emphasis is on technology in existence or under development by the government, its contractors, and the telecommunications industry. Values for components are first assumed to be those that are easily available. Concept adequacy and upgrades are then discussed. In conclusion, an opto-mechanical scan option ranks as the best light source for near-term MEMS-based projector testing of both flash and scan LADAR seekers.

  4. Validation of the ASTER instrument level 1A scene geometry

    USGS Publications Warehouse

    Kieffer, H.H.; Mullins, K.F.; MacKinnon, D.J.

    2008-01-01

    An independent assessment of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument geometry was undertaken by the U.S. ASTER Team to confirm the geometric correction parameters developed and applied to Level 1A (radiometrically and geometrically raw with correction parameters appended) ASTER data. The goal was to evaluate the geometric quality of the ASTER system and the stability of the Terra spacecraft. ASTER is a 15-band system containing optical instruments with resolutions from 15 to 90 meters; all geometrically registered products are ultimately tied to the 15-meter Visible and Near Infrared (VNIR) sub-system. Our evaluation process first involved establishing a large database of Ground Control Points (GCP) in the mid-western United States, an area with features of an appropriate size for spacecraft instrument resolutions. We used standard U.S. Geological Survey (USGS) Digital Orthophoto Quads (DOQs) of areas in the mid-west to locate accurate GCPs by systematically identifying road intersections and recording their coordinates. Elevations for these points were derived from USGS Digital Elevation Models (DEMs). Road intersections in a swath of nine contiguous ASTER scenes were then matched to the GCPs, including terrain correction. We found no significant distortion in the images; after a simple image offset to absolute position, the RMS residual of about 200 points per scene was less than one-half a VNIR pixel. Absolute locations were within 80 meters, with a slow drift of about 10 meters over the entire 530-kilometer swath. Using strictly simultaneous observations of scenes 370 kilometers apart, we determined a stereo angle correction of 0.00134 degree with an accuracy of one microradian. The mid-west GCP field and the techniques used here should be widely applicable in assessing other spacecraft instruments having resolutions from 5 to 50 meters. © 2008 American Society for Photogrammetry and Remote Sensing.
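
    The evaluation described above amounts to fitting a single image offset to the matched intersection/GCP pairs and then measuring the scatter the offset cannot explain. A minimal sketch of that computation (array names are illustrative, not from the paper):

```python
import numpy as np

def offset_and_rms(image_xy, gcp_xy):
    """Fit a constant image offset (mean shift) between matched point
    pairs, then report the RMS of the remaining residuals."""
    d = np.asarray(gcp_xy, float) - np.asarray(image_xy, float)
    offset = d.mean(axis=0)        # best-fit translation
    resid = d - offset             # distortion the offset cannot explain
    rms = np.sqrt((resid ** 2).sum(axis=1).mean())
    return offset, rms

# Example: a pure (3, -2) pixel shift is absorbed entirely by the offset.
img_pts = np.array([[100.0, 200.0], [150.0, 250.0], [120.0, 260.0]])
offset, rms = offset_and_rms(img_pts, img_pts + [3.0, -2.0])
# offset ≈ (3, -2); rms ≈ 0, i.e., no distortion beyond the shift
```

    In the paper's terms, an RMS below half a VNIR pixel after removing the offset indicates no significant internal distortion.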

  5. Increasing Student Engagement and Enthusiasm: A Projectile Motion Crime Scene

    NASA Astrophysics Data System (ADS)

    Bonner, David

    2010-05-01

    Connecting physics concepts with real-world events allows students to establish a strong conceptual foundation. When such events are particularly interesting to students, it can greatly impact their engagement and enthusiasm in an activity. Activities that involve studying real-world events of high interest can provide students a long-lasting understanding and positive memorable experiences, both of which heighten the learning experiences of those students. One such activity, described in depth in this paper, utilizes a murder mystery and crime scene investigation as an application of basic projectile motion.

  6. Scene projection technology development for imaging sensor testing at AEDC

    NASA Astrophysics Data System (ADS)

    Lowry, H.; Fedde, M.; Crider, D.; Horne, H.; Bynum, K.; Steely, S.; Labello, J.

    2012-06-01

    Arnold Engineering Development Center (AEDC) is tasked with visible-to-LWIR imaging sensor calibration and characterization, as well as hardware-in-the-loop (HWIL) testing with high-fidelity complex scene projection to validate sensor mission performance. They are thus involved in the development of technologies and methodologies that are used in space simulation chambers for such testing. These activities support a variety of program needs such as space situational awareness (SSA). This paper provides an overview of pertinent technologies being investigated and implemented at AEDC.

  7. Portable X-ray Fluorescence Unit for Analyzing Crime Scenes

    NASA Astrophysics Data System (ADS)

    Visco, A.

    2003-12-01

    Goddard Space Flight Center and the National Institute of Justice have teamed up to apply NASA technology to the field of forensic science. NASA hardware that is under development for future planetary robotic missions, such as Mars exploration, is being engineered into a rugged, portable, non-destructive X-ray fluorescence system for identifying gunshot residue, blood, and semen at crime scenes. This project establishes the shielding requirements that will ensure that the exposure of a user to ionizing radiation is below the U.S. Nuclear Regulatory Commission's allowable limits, and also develops the benchtop model for testing the system in a controlled environment.

  8. Probing the Natural Scene by Echolocation in Bats

    PubMed Central

    Moss, Cynthia F.; Surlykke, Annemarie

    2010-01-01

    Bats echolocating in the natural environment face the formidable task of sorting signals from multiple auditory objects, echoes from obstacles, prey, and the calls of conspecifics. Successful orientation in a complex environment depends on auditory information processing, along with adaptive vocal-motor behaviors and flight path control, which draw upon 3-D spatial perception, attention, and memory. This article reviews field and laboratory studies that document adaptive sonar behaviors of echolocating bats, and point to the fundamental signal parameters they use to track and sort auditory objects in a dynamic environment. We suggest that adaptive sonar behavior provides a window to bats’ perception of complex auditory scenes. PMID:20740076

  9. Cooperation of mobile robots for accident scene inspection

    NASA Astrophysics Data System (ADS)

    Byrne, R. H.; Harrington, J.

    A telerobotic system demonstration was developed for the Department of Energy's Accident Response group to highlight the applications of telerobotic vehicles to accident site inspection. The proof-of-principle system employs two mobile robots, Dixie and RAYBOT, to inspect a simulated accident site. Both robots are controlled serially from a single driving station, allowing an operator to take advantage of having multiple robots at the scene. The telerobotic system is described and some of the advantages of having more than one robot present are discussed. Future plans for the system are also presented.

  10. Photorealistic ray tracing to visualize automobile side mirror reflective scenes.

    PubMed

    Lee, Hocheol; Kim, Kyuman; Lee, Gang; Lee, Sungkoo; Kim, Jingu

    2014-10-20

    We describe an interactive visualization procedure for determining the optimal surface of a special automobile side mirror, thereby removing the blind spot, without the need for feedback from the error-prone manufacturing process. If the horizontally progressive curvature distributions are set to the semi-mathematical expression for a free-form surface, the surface point set can then be derived through numerical integration. This is then converted to a NURBS surface while retaining the surface curvature. Then, reflective scenes from the driving environment can be virtually realized using photorealistic ray tracing, in order to evaluate how these reflected images would appear to drivers.
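
    The integration step mentioned above (deriving surface points from a prescribed curvature distribution) can be illustrated in 2-D: integrating curvature along arc length gives the tangent angle, and integrating the tangent gives the profile points. A toy sketch under that simplification; the paper works with a full free-form surface, not a single profile:

```python
import numpy as np

def profile_from_curvature(kappa, ds):
    """Reconstruct a 2-D profile from sampled curvature kappa(s):
    tangent angle theta = cumulative integral of kappa, and the points
    are the cumulative integral of (cos theta, sin theta). Euler steps."""
    theta = np.concatenate([[0.0], np.cumsum(kappa) * ds])[:-1]
    x = np.cumsum(np.cos(theta)) * ds
    y = np.cumsum(np.sin(theta)) * ds
    return x, y

# Sanity check: constant curvature 1 over arc length 2*pi traces a unit circle.
x, y = profile_from_curvature(np.full(1000, 1.0), 2 * np.pi / 1000)
```

    A horizontally progressive (non-constant) curvature distribution, as in the mirror design, simply changes kappa per sample.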

  11. Scenes, Spaces, and Memory Traces: What Does the Hippocampus Do?

    PubMed

    Maguire, Eleanor A; Intraub, Helene; Mullally, Sinéad L

    2016-10-01

    The hippocampus is one of the most closely scrutinized brain structures in neuroscience. While traditionally associated with memory and spatial cognition, in more recent years it has also been linked with other functions, including aspects of perception and imagining fictitious and future scenes. Efforts continue apace to understand how the hippocampus plays such an apparently wide-ranging role. Here we consider recent developments in the field and in particular studies of patients with bilateral hippocampal damage. We outline some key findings, how they have subsequently been challenged, and consider how to reconcile the disparities that are at the heart of current lively debates in the hippocampal literature.

  13. Gender, smiling, and witness credibility in actual trials.

    PubMed

    Nagle, Jacklyn E; Brodsky, Stanley L; Weeter, Kaycee

    2014-01-01

    It has been acknowledged that females exhibit more smiling behaviors than males, but there has been little attention to this gender difference in the courtroom. Although both male and female witnesses exhibit smiling behaviors, there has been no research examining the subsequent effect of gender and smiling on witness credibility. This study used naturalistic observation to examine smiling behaviors and credibility in actual witnesses testifying in court. Raters assessed the smiling behaviors and credibility (as measured by the Witness Credibility Scale) of 32 male and female witnesses testifying in trials in a mid-sized Southern city. "Credibility raters" rated the perceived likeability, trustworthiness, confidence, knowledge, and overall credibility of the witnesses using the Witness Credibility Scale. "Smile raters" noted smiling frequency and types, including speaking/expressive and listening/receptive smiles. Gender was found to affect perceived trustworthiness ratings, with male witnesses seen as more trustworthy than female witnesses. No significant differences were found in the smiling frequency of male and female witnesses. However, the presence of smiling was found to contribute to the perceived likeability of a witness. Smiling female witnesses were found to be more likeable than smiling male and non-smiling female witnesses.

  14. How to actualize potential: a bioecological approach to talent development.

    PubMed

    Ceci, Stephen J; Williams-Ceci, Sterling; Williams, Wendy M

    2016-08-01

    Bioecological theory posits three interacting principles to explain developmental outcomes such as fluctuating achievement levels and changing heritability coefficients. Here, we apply the theory to the domain of talent development, by reviewing short-term and long-term cognitive interventions. We argue that macro-level analyses of cultural practices (e.g., matrilineal inheritance and property ownership) and national systems of education are consistent with the bioecological theory; when the findings from these analyses are unpacked, the engines that drive them are so-called proximal processes. This finding has implications for the design and delivery of instruction and the development of talent. We argue that talent is fostered by the same three bioecological mechanisms that explain the actualization of genetic potential. We conclude by discussing several self-descriptions and personal narratives by gifted students in which they spontaneously refer to these bioecological mechanisms in their own talent-development processes. Similar testimonials have been documented by historic talent researchers such as Benjamin Bloom, who noted the importance of continual adjustments in feedback.

  16. Blind subjects construct conscious mental images of visual scenes encoded in musical form.

    PubMed

    Cronly-Dillon, J; Persaud, K C; Blore, R

    2000-11-01

    Blind (previously sighted) subjects are able to analyse, describe and graphically represent a number of high-contrast visual images translated into musical form de novo. We presented musical transforms of a random assortment of photographic images of objects and urban scenes to such subjects, a few of which depicted architectural and other landmarks that may be useful in navigating a route to a particular destination. Our blind subjects were able to use the sound representation to construct a conscious mental image that was revealed by their ability to depict a visual target by drawing it. We noted the similarity between the way the visual system integrates information from successive fixations to form a representation that is stable across eye movements and the way a succession of image frames (encoded in sound) which depict different portions of the image are integrated to form a seamless mental image. Finally, we discuss the profound resemblance between the way a professional musician carries out a structural analysis of a musical composition in order to relate its structure to the perception of musical form and the strategies used by our blind subjects in isolating structural features that collectively reveal the identity of visual form.

  17. Reward guides attention to object categories in real-world scenes.

    PubMed

    Hickey, Clayton; Kaiser, Daniel; Peelen, Marius V

    2015-04-01

    Reward is thought to motivate animal-approach behavior in part by automatically facilitating the perceptual processing of reward-associated visual stimuli. Studies have demonstrated this effect for low-level visual features such as color and orientation. However, outside of the laboratory, it is rare that low-level features uniquely characterize objects relevant for behavior. Here, we test whether reward can prime representations at the level of object category. Participants detected category exemplars (cars, trees, people) in briefly presented photographs of real-world scenes. On a subset of trials, successful target detection was rewarded and the effect of this reward was measured on the subsequent trial. Results show that rewarded selection of a category exemplar caused other members of this category to become visually salient, disrupting search when subsequently presented as distractors. It is important to note that this occurred even when there was little opportunity for the repetition of visual features between examples, with the rewarded selection of a human body increasing the salience of a subsequently presented face. Thus, selection of a category example appears to activate representations of prototypical category characteristics even when these are not present in the stimulus. In this way, reward can guide attention to categories of stimuli even when individual examples share no visual characteristics.

  18. Contrast enhancing and adjusting advanced very high resolution radiometer scenes for solar illumination

    USGS Publications Warehouse

    Zokaites, David M.

    1993-01-01

    The AVHRR (Advanced Very High Resolution Radiometer) satellite sensors provide daily coverage of the entire Earth. As a result, individual scenes cover broad geographic areas (roughly 3000 km by 5000 km) and can contain varying levels of solar illumination. Mosaics of AVHRR scenes can be created for large (continental and global) study areas. As the north-south extent of such mosaics increases, the lightness variability within the mosaic increases. AVHRR channels one and two of multiple daytime scenes were histogrammed to find a relationship between solar zenith and scene lightness as described by brightness value distribution. This relationship was used to determine look-up tables (LUTs) which removed effects of varying solar illumination. These LUTs were combined with a contrast-enhancing LUT and stored online. For individual scenes, one precomputed composite LUT was applied to the entire scene based on the solar zenith at scene center. For mosaicked scenes, each pixel was adjusted based on the solar zenith at that pixel location. These procedures reduce lightness variability within and between scenes and enhance scene contrast to provide visually pleasing imagery.
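
    The per-pixel mechanics described above can be sketched as follows. The gain model here (inverse cosine of the solar zenith) is a hypothetical stand-in for the empirically derived histogram relationship in the paper; the look-up-table machinery is the point:

```python
import numpy as np

def illumination_lut(solar_zenith_deg, n_levels=256):
    """Toy LUT: scale brightness values by the inverse cosine of the
    solar zenith (the paper derived its tables from scene histograms)."""
    gain = 1.0 / max(np.cos(np.radians(solar_zenith_deg)), 0.2)
    levels = np.arange(n_levels)
    return np.clip(np.rint(levels * gain), 0, n_levels - 1).astype(np.uint8)

def adjust_mosaic(image, zenith_map):
    """Per-pixel adjustment: push each pixel through the LUT for the
    solar zenith (in whole degrees) at that pixel's location."""
    out = np.empty_like(image)
    for z in np.unique(zenith_map):
        mask = zenith_map == z
        out[mask] = illumination_lut(z)[image[mask]]
    return out

# A pixel value of 100 is unchanged at zenith 0 and doubled at zenith 60.
img = np.full((2, 2), 100, dtype=np.uint8)
low = adjust_mosaic(img, np.zeros((2, 2), dtype=int))      # stays 100
high = adjust_mosaic(img, np.full((2, 2), 60, dtype=int))  # becomes 200
```

    For individual scenes, the paper applies one composite LUT for the zenith at scene center; the per-zenith loop above corresponds to the mosaicked case.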

  19. Colour agnosia impairs the recognition of natural but not of non-natural scenes.

    PubMed

    Nijboer, Tanja C W; Van Der Smagt, Maarten J; Van Zandvoort, Martine J E; De Haan, Edward H F

    2007-03-01

    Scene recognition can be enhanced by appropriate colour information, yet the level of visual processing at which colour exerts its effects is still unclear. It has been suggested that colour supports low-level sensory processing, while others have claimed that colour information aids semantic categorization and recognition of objects and scenes. We investigated the effect of colour on scene recognition in a case of colour agnosia, M.A.H. In a scene identification task, participants had to name images of natural or non-natural scenes in six different formats. Irrespective of scene format, M.A.H. was much slower on the natural than on the non-natural scenes. As expected, neither M.A.H. nor control participants showed any difference in performance for the non-natural scenes. However, for the natural scenes, appropriate colour facilitated scene recognition in control participants (i.e., shorter reaction times), whereas M.A.H.'s performance did not differ across formats. Our data thus support the hypothesis that the effect of colour occurs at the level of learned associations.

  20. Optic Flow Dominates Visual Scene Polarity in Causing Adaptive Modification of Locomotor Trajectory

    NASA Technical Reports Server (NTRS)

    Nomura, Y.; Mulavara, A. P.; Richards, J. T.; Brady, R.; Bloomberg, Jacob J.

    2005-01-01

    Locomotion and posture are influenced and controlled by vestibular, visual and somatosensory information. Optic flow and scene polarity are two characteristics of a visual scene that have been identified as being critical in how they affect perceived body orientation and self-motion. The goal of this study was to determine the role of optic flow and visual scene polarity on adaptive modification in locomotor trajectory. Two computer-generated virtual reality scenes were shown to subjects during 20 minutes of treadmill walking. One scene was a highly polarized scene while the other was composed of objects displayed in a non-polarized fashion. Both virtual scenes depicted constant rate self-motion equivalent to walking counterclockwise around the perimeter of a room. Subjects performed Stepping Tests blindfolded before and after scene exposure to assess adaptive changes in locomotor trajectory. Subjects showed a significant difference in heading direction, between pre- and post-adaptation Stepping Tests, when exposed to either scene during treadmill walking. However, there was no significant difference in the subjects' heading direction between the two visual scene polarity conditions. Therefore, it was inferred from these data that optic flow has a greater role than visual polarity in influencing adaptive locomotor function.

  1. The multispectral advanced volumetric real-time imaging compositor for real-time distributed scene generation

    NASA Astrophysics Data System (ADS)

    Morris, Joseph W.; Ballard, Gary H.; Bunfield, Dennis H.; Peddycoart, Thomas E.; Trimble, Darian E.

    2011-06-01

    AMRDEC has developed the Multi-spectral Advanced Volumetric Real-time Imaging Compositor (MAVRIC) prototype for distributed real-time hardware-in-the-loop (HWIL) scene generation. MAVRIC is a dynamic object-based energy-conserved scene compositor that can seamlessly convolve distributed scene elements into temporally aligned physics-based scenes for enhancing existing AMRDEC scene generation codes. The volumetric compositing process accepts input independent of depth order. This real-time compositor framework is built around AMRDEC's ContinuumCore API, which provides the common messaging interface leveraging the Neutral Messaging Language (NML) for local, shared memory, reflective memory, network, and remote direct memory access (RDMA) communications, and around the Joint Signature Image Generator (JSIG), which provides an energy-conserved scene-component interface at each render node. This structure allows for a highly scalable real-time environment capable of rendering individual objects at high fidelity while being considerate of real-time hardware-in-the-loop concerns, such as latency. As such, this system can be scaled to handle highly complex detailed scenes such as urban environments. This architecture provides the basis for common scene generation, as it allows disparate scene elements to be calculated by various phenomenology codes and integrated seamlessly into a unified composited environment. This advanced capability is the gateway to higher-fidelity scene generation such as ray tracing. High speed interconnects using PCI Express and InfiniBand were examined to support distributed scene generation, whereby the scene graph, associated phenomenology, and the scene elements can be dynamically distributed across multiple high performance computing assets to maximize system performance.

  2. Detecting and collecting traces of semen and blood from outdoor crime scenes using crime scene dogs and presumptive tests.

    PubMed

    Skalleberg, A G; Bouzga, M M

    2016-07-01

    In 2009, the Norwegian police academy educated their first crime scene dogs, trained to locate traces of seminal fluid and blood in outdoor and indoor crime scenes. The Department of Forensic Biology was invited to take part in this project to educate the police in specimen collection and presumptive testing. We performed tests where seminal fluid was deposited on different outdoor surfaces from between one hour to six days, and blood on coniferous ground from between one hour to two days. For both body fluids the tests were performed with three different volumes. The crime scene dogs located the stains, and acid phosphatase and tetrabase/barium peroxide were used as presumptive tests before collection for microscopy and DNA analysis. For seminal fluid the dogs were able to locate all stains for up to two days and only the largest volume after four days. The presumptive tests confirmed the dogs' detection. By microscopy we were able to detect spermatozoa for the smallest volumes up to 32 h, and for the largest volume up to 4 days, and the DNA results correlate with these findings. For blood all the stains were detected by the dogs, except the smallest volume of blood after 32 h. The presumptive tests confirmed the dogs' detection. We were able to get DNA results for most stains in the timeframe 1-48 h with the two largest volumes. The smallest volume showed variation between the parallels, with no DNA results after 24 h. These experiments show that it is critical that body fluids are collected promptly to obtain a good DNA result, preferably within the first 24-48 h. Other parameters that should be taken into account are the weather conditions, type of surfaces and specimen collection.

  4. The actual status of Astronomy in Moldova

    NASA Astrophysics Data System (ADS)

    Gaina, A.

    Astronomical research in the Republic of Moldova after Nicolae Donitch (Donici) (1874-1956(?)) was renewed in 1957, when a satellite observation station was opened in Chisinau. Photometric observations and the rotation of the first Soviet artificial satellites were investigated under the SPIN program of the Academies of Sciences of the former socialist countries. The work was led by Assoc. Prof. Dr. V. Grigorevskij, who also conducted research on variable stars. Later, at the beginning of the 1960s, an astronomical observatory of the Chisinau State University named after Lenin (now the State University of Moldova) was opened in the villages of Lozovo-Ciuciuleni; its work was coordinated by Odessa State University (Prof. V.P. Tsesevich) and the Astrosovet of the USSR. Two main groups worked in this area: the first led by V. Grigorevskij (until 1971) and the second by L.I. Shakun (until 1988), both graduates of Odessa State University. Other astronomical observations were also made: comet observations, and astroclimate and atmospheric optics studies in collaboration with the Institute of Atmospheric Optics of the Siberian Branch of the USSR Academy of Sciences (V. Chernobai, I. Nacu, C. Usov and A.F. Poiata). Comet observations were also made from 1988 by D.I. Gorodetskij, who came to Chisinau from Alma-Ata and collaborated with Ukrainian astronomers led by K.I. Churyumov. Another part of the space research was carried out at the State University of Tiraspol from the beginning of the 1970s by members of the teaching staff of the Tiraspol State Pedagogical University: M.D. Polanuer and V.S. Sholokhov. No collaboration currently exists between Moldovan and Transdniestrian astronomers due to the 1992 war in Transdniestria. An important area of research concerned the radiophysics of the ionosphere, conducted in Beltsy at the Beltsy State Pedagogical Institute from the beginning of the 1970s by a group of the institute's teaching staff: N. D. Filip, E

  5. MODIS Solar Diffuser: Modelled and Actual Performance

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene; Xiong, Xiao-Xiong; Esposito, Joe; Wang, Xin-Dong; Krebs, Carolyn (Technical Monitor)

    2001-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) instrument's solar diffuser is used in its radiometric calibration for the reflective solar bands (VIS, NIR, and SWIR) ranging from 0.41 to 2.1 micron. The sun illuminates the solar diffuser either directly or through an attenuation screen. The attenuation screen consists of a regular array of pin holes. The attenuated illumination pattern on the solar diffuser is not uniform, but consists of a multitude of pin-hole images of the sun. This non-uniform illumination produces small but noticeable radiometric effects. A description of the computer model used to simulate the effects of the attenuation screen is given, and the predictions of the model are compared with actual, on-orbit, calibration measurements.

  6. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 10 2014-01-01 2014-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  7. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 10 2012-01-01 2012-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  8. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 10 2011-01-01 2011-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  9. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 10 2013-01-01 2013-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  10. 7 CFR 1437.101 - Actual production history.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Actual production history. 1437.101 Section 1437.101... Determining Yield Coverage Using Actual Production History § 1437.101 Actual production history. Actual production history (APH) is the unit's record of crop yield by crop year for the APH base period. The...

  11. 3-D model-based frame interpolation for distributed video coding of static scenes.

    PubMed

    Maitre, Matthieu; Guillemot, Christine; Morin, Luce

    2007-05-01

This paper addresses the problem of side information extraction for distributed coding of videos captured by a camera moving in a 3-D static environment. Examples of targeted applications are augmented reality, remote-controlled robots operating in hazardous environments, or remote exploration by drones. It explores the benefits of the structure-from-motion paradigm for distributed coding of this type of video content. Two interpolation methods constrained by the scene geometry, based either on block matching along epipolar lines or on 3-D mesh fitting, are first developed. These techniques are based on a robust algorithm for sub-pel matching of feature points, which leads to semi-dense correspondences between key frames. However, their rate-distortion (RD) performance is limited by misalignments between the side information and the actual Wyner-Ziv (WZ) frames due to the assumption of linear motion between key frames. To cope with this problem, two feature point tracking techniques are introduced, which recover the camera parameters of the WZ frames. A first technique, in which the frames remain encoded separately, performs tracking at the decoder and leads to significant RD performance gains. A second technique further improves the RD performance by allowing limited tracking at the encoder. As an additional benefit, statistics on the tracks allow the encoder to adapt the key frame frequency to the video motion content.

  12. Frontal eye fields involved in shifting frame of reference within working memory for scenes.

    PubMed

    Wallentin, Mikkel; Roepstorff, Andreas; Burgess, Neil

    2008-01-31

    Working memory (WM) evoked by linguistic cues for allocentric spatial and egocentric spatial aspects of a visual scene was investigated by correlating fMRI BOLD signal (or "activation") with performance on a spatial-relations task. Subjects indicated the relative positions of a person or object (referenced by the personal pronouns "he/she/it") in a previously shown image relative to either themselves (egocentric reference frame) or shifted to a reference frame anchored in another person or object in the image (allocentric reference frame), e.g. "Was he in front of you/her?" Good performers had both shorter response time and more correct responses than poor performers in both tasks. These behavioural variables were entered into a principal component analysis. The first component reflected generalised performance level. We found that the frontal eye fields (FEF), bilaterally, had a higher BOLD response during recall involving allocentric compared to egocentric spatial reference frames, and that this difference was larger in good performers than in poor performers as measured by the first behavioural principal component. The frontal eye fields may be used when subjects move their internal gaze during shifting reference frames in representational space. Analysis of actual eye movements in three subjects revealed no difference between egocentric and allocentric recall tasks where visual stimuli were also absent. Thus, the FEF machinery for directing eye movements may also be involved in changing reference frames within WM. PMID:17915262

  13. A simulation study of scene confusion factors in sensing soil moisture from orbital radar

    NASA Technical Reports Server (NTRS)

    Ulaby, F. T. (Principal Investigator); Dobson, M. C.; Moezzi, S.; Roth, F. T.

    1983-01-01

Simulated C-band radar imagery for a 124-km by 108-km test site in eastern Kansas is used to classify soil moisture. Simulated radar resolutions are 100 m by 100 m, 1 km by 1 km, and 3 km by 3 km. Distributions of actual near-surface soil moisture are established daily for a 23-day accounting period using a water budget model. Within the 23-day period, three orbital radar overpasses are simulated, roughly corresponding to generally moist, wet, and dry soil moisture conditions. The radar simulations are performed by a target/sensor interaction model dependent upon a terrain model, land-use classification, and near-surface soil moisture distribution. The accuracy of soil-moisture classification is evaluated for each single-date radar observation and also for multi-date detection of relative soil moisture change. In general, the results for single-date moisture detection show that 70% to 90% of cropland can be correctly classified to within +/- 20% of the true percent of field capacity. For a given radar resolution, the expected classification accuracy is shown to depend on both the general soil moisture condition and the geographical distribution of land-use and topographic relief. An analysis of cropland, urban, pasture/rangeland, and woodland subregions within the test site indicates that multi-temporal detection of relative soil moisture change is least sensitive to classification error resulting from scene complexity and topographic effects.
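The ±20% accuracy criterion used in this record can be expressed compactly. The sketch below (with invented values; this is not the study's evaluation code) simply counts estimates that fall within the tolerance band around the true percent of field capacity:

```python
import numpy as np

def within_tolerance_accuracy(estimated_pfc, true_pfc, tolerance=20.0):
    """Fraction of pixels whose estimate lies within +/-tolerance
    (in percent-of-field-capacity units) of the true value."""
    estimated_pfc = np.asarray(estimated_pfc, dtype=float)
    true_pfc = np.asarray(true_pfc, dtype=float)
    hits = np.abs(estimated_pfc - true_pfc) <= tolerance
    return hits.mean()

# Example with made-up values (three of four estimates within +/-20):
true_vals = np.array([50.0, 80.0, 30.0, 60.0])
est_vals = np.array([65.0, 75.0, 55.0, 58.0])
print(within_tolerance_accuracy(est_vals, true_vals))  # 0.75
```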

  14. Online scene change detection of multicast (MBone) video

    NASA Astrophysics Data System (ADS)

    Zhou, Wensheng; Shen, Ye; Vellaikal, Asha; Kuo, C.-C. Jay

    1998-10-01

Many multimedia applications, such as multimedia data management and communication systems, require efficient representation of multimedia content, so semantic interpretation of video content has been a popular research area. Currently, most content-based video representation involves segmenting video into key frames, which are generated using scene change detection techniques as well as camera/object motion; video features can then be extracted from the key frames. However, most such research performs off-line video processing, in which the whole video is known a priori, allowing multiple scans of the stored video files during processing. In comparison, relatively little research has been done on on-line video processing, which is crucial in video communication applications such as on-line collaboration, news broadcasts, and so on. Our research investigates on-line real-time scene change detection of multicast video over the Internet. Our on-line processing system is designed to meet the requirements of real-time video multicasting over the Internet and to utilize the successful video parsing techniques available today. The proposed algorithms extract key frames from video bitstreams sent through the MBone network, and the extracted key frames are multicast as annotations or metadata over a separate channel to assist in content filtering, such as that anticipated to be in use by on-line filtering proxies in the Internet. The performance of the proposed algorithms is demonstrated and discussed in this paper.
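As an illustration of the key-frame idea, a generic histogram-difference detector (not the authors' MBone algorithm; bin count and threshold are invented) declares a scene change whenever consecutive frame histograms diverge:

```python
import numpy as np

def detect_scene_changes(frames, bins=16, threshold=0.4):
    """frames: iterable of 2-D grayscale arrays with values in [0, 255].
    Returns indices of frames that start a new scene."""
    changes = []
    prev_hist = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
        hist = hist / hist.sum()  # normalize to a probability distribution
        if prev_hist is not None:
            # L1 distance between normalized histograms lies in [0, 2]
            if np.abs(hist - prev_hist).sum() > threshold:
                changes.append(i)
        prev_hist = hist
    return changes

# Three dark frames followed by three bright frames -> one cut at index 3
frames = [np.zeros((8, 8))] * 3 + [np.full((8, 8), 200.0)] * 3
print(detect_scene_changes(frames))  # [3]
```

An on-line system would apply such a test incrementally as frames arrive, rather than scanning a stored file.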

  15. Nonuniformity correction of a resistor array infrared scene projector

    NASA Astrophysics Data System (ADS)

    Olson, Eric M.; Murrer, Robert Lee, Jr.

    1999-07-01

At the Kinetic-kill vehicle Hardware-in-the-Loop Simulator (KHILS) facility located at Eglin AFB, Florida, a technology has been developed for the projection of scenes to support hardware-in-the-loop testing of infrared seekers. The Wideband Infrared Scene Projector program is based on a 512 × 512 VLSI array of 2-mil-pitch resistors. A characteristic of these projectors is that each resistor emits a measurably different in-band radiance when the same voltage is applied. Since it is desirable to have each resistor emit the same radiance for a given commanded radiance, each resistor requires a Non-Uniformity Correction (NUC). Though this NUC task may seem simple to a casual observer, it is quite complicated: a high-quality infrared camera and a well-designed optical system are prerequisites to measuring each resistor's output accurately for correction. A technique for performing a NUC on a resistor array has been developed and implemented at KHILS that achieves a NUC (standard deviation of output/mean output) of less than 1 percent. This paper presents details pertaining to the NUC system, procedures, and results.
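The gain/offset style of correction behind a NUC can be sketched as follows. This is the generic two-point correction, not the KHILS procedure, and all measurement values are illustrative:

```python
import numpy as np

def two_point_nuc(low_meas, high_meas):
    """From each emitter's measured output at a low and a high drive level,
    derive a per-emitter gain and offset so that
    corrected = gain * raw + offset matches the array-mean response."""
    low_meas = np.asarray(low_meas, float)
    high_meas = np.asarray(high_meas, float)
    mean_low, mean_high = low_meas.mean(), high_meas.mean()
    gain = (mean_high - mean_low) / (high_meas - low_meas)
    offset = mean_low - gain * low_meas
    return gain, offset

def nonuniformity(image):
    """Residual nonuniformity metric cited above: std / mean."""
    return image.std() / image.mean()

# Two emitters with different responsivities, measured at two drive levels
low, high = np.array([1.0, 2.0]), np.array([5.0, 10.0])
gain, offset = two_point_nuc(low, high)
corrected = gain * high + offset
print(nonuniformity(corrected))  # 0.0 -- both emitters now read 7.5
```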

  16. Text Detection in Natural Scene Images by Stroke Gabor Words.

    PubMed

    Yi, Chucai; Tian, Yingli

    2011-01-01

In this paper, we propose a novel algorithm, based on stroke components and descriptive Gabor filters, to detect text regions in natural scene images. Text characters and strings are constructed from stroke components as basic units. Gabor filters are used to describe and analyze the stroke components in text characters or strings. We define a suitability measurement to analyze the confidence of Gabor filters in describing stroke components and the suitability of Gabor filters on an image window. From the training set, we compute a set of Gabor filters whose parameters describe the principal stroke components of text. A K-means algorithm is then applied to cluster the descriptive Gabor filters. The clustering centers are defined as Stroke Gabor Words (SGWs) to provide a universal description of stroke components. By suitability evaluation on positive and negative training samples respectively, each SGW generates a pair of characteristic distributions of suitability measurements. On a testing natural scene image, heuristic layout analysis is applied first to extract candidate image windows. Then we compute the principal SGWs for each image window to describe its principal stroke components. Characteristic distributions generated by principal SGWs are used to classify text or nontext windows. Experimental results on benchmark datasets demonstrate that our algorithm can handle complex backgrounds and variant text patterns (font, color, scale, etc.).
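The clustering step can be illustrated with a minimal K-means over filter-parameter vectors. This is a generic sketch; the feature vectors, k, and iteration count are invented, not taken from the paper:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Cluster parameter vectors; the cluster centers play the role of
    the 'Stroke Gabor Words' described above."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # assign each vector to its nearest center (Euclidean distance)
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # leave empty clusters at old center
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# Two well-separated blobs of hypothetical filter parameters
pts = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0)])
centers, labels = kmeans(pts, k=2)
```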

  17. A scheme for automatic text rectification in real scene images

    NASA Astrophysics Data System (ADS)

    Wang, Baokang; Liu, Changsong; Ding, Xiaoqing

    2015-03-01

The digital camera is gradually replacing the traditional flat-bed scanner as the main means of obtaining text information, owing to its usability, low cost, and high resolution, and a large amount of research has been done on camera-based text understanding. Unfortunately, arbitrary positioning of the camera lens relative to the text area frequently causes perspective distortion, which most current OCR systems cannot manage, creating demand for automatic text rectification. Current rectification-related research has mainly focused on document images; distortion of natural scene text is seldom considered. In this paper, a scheme for automatic text rectification in natural scene images is proposed. It relies on geometric information extracted from the characters themselves as well as their surroundings. In the first step, linear segments are extracted from the region of interest, and J-Linkage-based clustering is performed, followed by customized refinement, to estimate the primary vanishing points (VPs). To achieve a more comprehensive VP estimation, a second stage inspects the internal structure of characters, involving analysis of pixels and connected components of text lines. Finally, the VPs are verified and used to perform perspective rectification. Experiments demonstrate an increase in recognition rate and improvement compared with some related algorithms.

  18. Neural dynamics of change detection in crowded acoustic scenes.

    PubMed

    Sohoglu, Ediz; Chait, Maria

    2016-02-01

    Two key questions concerning change detection in crowded acoustic environments are the extent to which cortical processing is specialized for different forms of acoustic change and when in the time-course of cortical processing neural activity becomes predictive of behavioral outcomes. Here, we address these issues by using magnetoencephalography (MEG) to probe the cortical dynamics of change detection in ongoing acoustic scenes containing as many as ten concurrent sources. Each source was formed of a sequence of tone pips with a unique carrier frequency and temporal modulation pattern, designed to mimic the spectrotemporal structure of natural sounds. Our results show that listeners are more accurate and quicker to detect the appearance (than disappearance) of an auditory source in the ongoing scene. Underpinning this behavioral asymmetry are change-evoked responses differing not only in magnitude and latency, but also in their spatial patterns. We find that even the earliest (~50 ms) cortical response to change is predictive of behavioral outcomes (detection times), consistent with the hypothesized role of local neural transients in supporting change detection.

  19. Evaluation methodology for query-based scene understanding systems

    NASA Astrophysics Data System (ADS)

    Huster, Todd P.; Ross, Timothy D.; Culbertson, Jared L.

    2015-05-01

    In this paper, we are proposing a method for the principled evaluation of scene understanding systems in a query-based framework. We can think of a query-based scene understanding system as a generalization of typical sensor exploitation systems where instead of performing a narrowly defined task (e.g., detect, track, classify, etc.), the system can perform general user-defined tasks specified in a query language. Examples of this type of system have been developed as part of DARPA's Mathematics of Sensing, Exploitation, and Execution (MSEE) program. There is a body of literature on the evaluation of typical sensor exploitation systems, but the open-ended nature of the query interface introduces new aspects to the evaluation problem that have not been widely considered before. In this paper, we state the evaluation problem and propose an approach to efficiently learn about the quality of the system under test. We consider the objective of the evaluation to be to build a performance model of the system under test, and we rely on the principles of Bayesian experiment design to help construct and select optimal queries for learning about the parameters of that model.

  20. No emotional "pop-out" effect in natural scene viewing.

    PubMed

    Acunzo, David J; Henderson, John M

    2011-10-01

    It has been shown that attention is drawn toward emotional stimuli. In particular, eye movement research suggests that gaze is attracted toward emotional stimuli in an unconscious, automated manner. We addressed whether this effect remains when emotional targets are embedded within complex real-world scenes. Eye movements were recorded while participants memorized natural images. Each image contained an item that was either neutral, such as a bag, or emotional, such as a snake or a couple hugging. We found no latency difference for the first target fixation between the emotional and neutral conditions, suggesting no extrafoveal "pop-out" effect of emotional targets. However, once detected, emotional targets held attention for a longer time than neutral targets. The failure of emotional items to attract attention seems to contradict previous eye-movement research using emotional stimuli. However, our results are consistent with studies examining semantic drive of overt attention in natural scenes. Interpretations of the results in terms of perceptual and attentional load are provided. PMID:21787079

  1. Bivariate statistical modeling of color and range in natural scenes

    NASA Astrophysics Data System (ADS)

    Su, Che-Chun; Cormack, Lawrence K.; Bovik, Alan C.

    2014-02-01

The statistical properties embedded in visual stimuli from the surrounding environment guide and affect the evolutionary processes of human vision systems. There are strong statistical relationships between co-located luminance/chrominance and disparity bandpass coefficients in natural scenes. However, these statistical relationships have only been deeply developed to create point-wise statistical models, although there exist spatial dependencies between adjacent pixels in both 2D color images and range maps. Here we study the bivariate statistics of the joint and conditional distributions of spatially adjacent bandpass responses on both luminance/chrominance and range data of naturalistic scenes. We deploy bivariate generalized Gaussian distributions to model the underlying statistics. The analysis and modeling results show that there exist important and useful statistical properties of both joint and conditional distributions, which can be reliably described by the corresponding bivariate generalized Gaussian models. Furthermore, by utilizing these robust bivariate models, we are able to incorporate measurements of bivariate statistics between spatially adjacent luminance/chrominance and range information into various 3D image/video and computer vision applications, e.g., quality assessment, 2D-to-3D conversion, etc.
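The raw ingredient of such bivariate models, pairs of spatially adjacent bandpass responses, can be gathered as follows. This is an illustrative computation of the pairing and its correlation, not the paper's generalized-Gaussian model fit:

```python
import numpy as np

def adjacent_pairs(band):
    """Return an (N, 2) array of horizontally adjacent coefficient pairs
    from a 2-D array of bandpass responses."""
    band = np.asarray(band, float)
    return np.stack([band[:, :-1].ravel(), band[:, 1:].ravel()], axis=1)

def pair_correlation(band):
    """Pearson correlation between adjacent responses -- the spatial
    dependency that point-wise models ignore."""
    pairs = adjacent_pairs(band)
    return np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1]

# A smooth horizontal ramp: adjacent responses are perfectly correlated
band = np.tile(np.arange(10.0), (5, 1))
print(pair_correlation(band))  # 1.0
```

A model fit would then estimate the joint distribution of these pairs, e.g., with a bivariate generalized Gaussian.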

  2. A hybrid multiview stereo algorithm for modeling urban scenes.

    PubMed

    Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu; Vu, Hoang-Hiep

    2013-01-01

We present an original multiview stereo reconstruction algorithm which allows the 3D modeling of urban scenes as a combination of meshes and geometric primitives. The method provides a compact model while preserving details: irregular elements such as statues and ornaments are described by meshes, whereas regular structures such as columns and walls are described by primitives (planes, spheres, cylinders, cones, and tori). We adopt a two-step strategy consisting first of segmenting the initial mesh-based surface using a multi-label Markov Random Field-based model, and second of sampling primitive and mesh components simultaneously on the obtained partition by a Jump-Diffusion process. The quality of a reconstruction is measured by a multi-object energy model which takes into account both photo-consistency and semantic considerations (i.e., geometry and shape layout). The segmentation and sampling steps are embedded in an iterative refinement procedure which provides an increasingly accurate hybrid representation. Experimental results on complex urban structures and large scenes are presented and compared to state-of-the-art multiview stereo meshing algorithms.

  3. HART: A Hybrid Architecture for Ray Tracing Animated Scenes.

    PubMed

    Nah, Jae-Ho; Kim, Jin-Woo; Park, Junho; Lee, Won-Jong; Park, Jeong-Soo; Jung, Seok-Yoon; Park, Woo-Chan; Manocha, Dinesh; Han, Tack-Don

    2015-03-01

We present a hybrid architecture, inspired by asynchronous BVH construction [1], for ray tracing animated scenes. Our hybrid architecture utilizes heterogeneous hardware resources: dedicated ray-tracing hardware for BVH updates and ray traversal, and a CPU for BVH reconstruction. We also present a traversal scheme using a primitive's axis-aligned bounding box (PrimAABB). This scheme reduces ray-primitive intersection tests by reusing existing BVH traversal units and the PrimAABB data for tree updates; it enables the use of shallow trees to reduce tree build times, tree sizes, and bus bandwidth requirements. Furthermore, we present a cache scheme that exploits consecutive memory access by reusing data in an L1 cache block. We perform cycle-accurate simulations to verify our architecture, and the simulation results indicate that the proposed architecture can achieve real-time Whitted ray tracing of animated scenes at 1,920 × 1,200 resolution. This result comes from our high-performance hardware architecture and minimized resource requirements for tree updates.

  4. Raytracing Dynamic Scenes on the GPU Using Grids.

    PubMed

    Guntury, S; Narayanan, P J

    2012-01-01

Raytracing dynamic scenes at interactive rates has received a lot of attention recently. We present a few strategies for high-performance raytracing on a commodity GPU. The construction of grids needs sorting, which is fast on today's GPUs. The grid is thus the acceleration structure of choice for dynamic scenes, as per-frame rebuilding is required. We advocate the use of appropriate data structures for each stage of raytracing, resulting in multiple structure builds per frame. A perspective grid built for the camera achieves perfect coherence for primary rays. A perspective grid built with respect to each light source provides the best performance for shadow rays. Spherical grids handle lights positioned inside the model space as well as spotlights. Uniform grids are best for reflection and refraction rays with little coherence. We propose an Enforced Coherence method to bring coherence to them by rearranging the ray-to-voxel mapping using sorting. This gives the best performance on GPUs with only user-managed caches. We also propose a simple Independent Voxel Walk method, which performs best by taking advantage of the L1 and L2 caches on recent GPUs. We achieve over 10 fps of total rendering on the Conference model with one light source and one reflection bounce, while rebuilding the data structure for each stage. The ideas presented here are likely to give high performance on future GPUs as well as other manycore architectures. PMID:21383409
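The per-ray "voxel walk" that a uniform grid performs can be sketched with the classic 3-D DDA traversal (Amanatides-Woo style). This is the textbook CPU version of the idea, not the paper's GPU implementation:

```python
import math

def voxel_walk(origin, direction, grid_size, max_steps=64):
    """Return the integer voxel coordinates visited by a ray inside a
    grid_size^3 grid of unit voxels, starting at `origin`."""
    pos = [int(math.floor(c)) for c in origin]
    step = [1 if d > 0 else -1 for d in direction]
    t_max, t_delta = [], []
    for c, d, s in zip(origin, direction, step):
        if d == 0:
            t_max.append(math.inf)    # never step along this axis
            t_delta.append(math.inf)
        else:
            next_boundary = math.floor(c) + (1 if s > 0 else 0)
            t_max.append((next_boundary - c) / d)  # t of first boundary hit
            t_delta.append(abs(1.0 / d))           # t per voxel crossed
    voxels = []
    for _ in range(max_steps):
        if not all(0 <= p < grid_size for p in pos):
            break
        voxels.append(tuple(pos))
        axis = t_max.index(min(t_max))  # advance across the nearest boundary
        pos[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return voxels

print(voxel_walk((0.5, 0.5, 0.5), (1.0, 0.0, 0.0), 4))
# [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)]
```

Rays walking voxels independently like this favor hardware caches, which is the intuition behind the Independent Voxel Walk method described above.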

  5. HDR Imaging for Feature Detection on Detailed Architectural Scenes

    NASA Astrophysics Data System (ADS)

    Kontogianni, G.; Stathopoulou, E. K.; Georgopoulos, A.; Doulamis, A.

    2015-02-01

3D reconstruction relies on accurate detection, extraction, description, and matching of image features. This is even truer for complex architectural scenes, which require 3D models of high quality without any loss of detail in geometry or color. Illumination conditions influence the radiometric quality of images, as standard sensors cannot properly depict a wide range of intensities in the same scene. Indeed, overexposed or underexposed pixels cause irreplaceable information loss and degrade the digital representation. Images taken under extreme lighting environments may thus be prohibitive for feature detection/extraction and consequently for matching and 3D reconstruction. High Dynamic Range (HDR) images could be helpful for these operators because they broaden the limits of the illumination range that Standard or Low Dynamic Range (SDR/LDR) images can capture, and in this way increase the amount of detail contained in the image. Experimental results of this study support this assumption, examining state-of-the-art feature detectors applied both to standard dynamic range and HDR images.

  6. HART: A Hybrid Architecture for Ray Tracing Animated Scenes.

    PubMed

    Nah, Jae-Ho; Kim, Jin-Woo; Park, Junho; Lee, Won-Jong; Park, Jeong-Soo; Jung, Seok-Yoon; Park, Woo-Chan; Manocha, Dinesh; Han, Tack-Don

    2015-03-01

We present a hybrid architecture, inspired by asynchronous BVH construction [1], for ray tracing animated scenes. Our hybrid architecture utilizes heterogeneous hardware resources: dedicated ray-tracing hardware for BVH updates and ray traversal, and a CPU for BVH reconstruction. We also present a traversal scheme using a primitive's axis-aligned bounding box (PrimAABB). This scheme reduces ray-primitive intersection tests by reusing existing BVH traversal units and the PrimAABB data for tree updates; it enables the use of shallow trees to reduce tree build times, tree sizes, and bus bandwidth requirements. Furthermore, we present a cache scheme that exploits consecutive memory access by reusing data in an L1 cache block. We perform cycle-accurate simulations to verify our architecture, and the simulation results indicate that the proposed architecture can achieve real-time Whitted ray tracing of animated scenes at 1,920 × 1,200 resolution. This result comes from our high-performance hardware architecture and minimized resource requirements for tree updates. PMID:26357070

  7. The Influence of Familiarity on Affective Responses to Natural Scenes

    NASA Astrophysics Data System (ADS)

    Sanabria Z., Jorge C.; Cho, Youngil; Yamanaka, Toshimasa

    This kansei study explored how familiarity with image-word combinations influences affective states. Stimuli were obtained from Japanese print advertisements (ads), and consisted of images (e.g., natural-scene backgrounds) and their corresponding headlines (advertising copy). Initially, a group of subjects evaluated their level of familiarity with images and headlines independently, and stimuli were filtered based on the results. In the main experiment, a different group of subjects rated their pleasure and arousal to, and familiarity with, image-headline combinations. The Self-Assessment Manikin (SAM) scale was used to evaluate pleasure and arousal, and a bipolar scale was used to evaluate familiarity. The results showed a high correlation between familiarity and pleasure, but low correlation between familiarity and arousal. The characteristics of the stimuli, and their effect on the variables of pleasure, arousal and familiarity, were explored through ANOVA. It is suggested that, in the case of natural-scene ads, familiarity with image-headline combinations may increase the pleasure response to the ads, and that certain components in the images (e.g., water) may increase arousal levels.

  8. A new proposition for redating the Mithraic tauroctony scene

    NASA Astrophysics Data System (ADS)

    Bon, E.; Ćirković, M. M.; Milosavljević, I.

    2002-07-01

Assuming that the figures of the central icon of the Mithraic cult - the scene of tauroctony (bull slaying) - represent equatorial constellations at the time when the spring equinox was placed somewhere between Taurus and Aries, it is difficult to explain why some equatorial constellations (Orion and Libra) were not included in the Mithraic icons. In a simulation of the sky at the times when the spring equinox was in the constellation of Taurus, only a small area of spring equinox positions permits the exclusion of these two constellations while including all other representations of equatorial constellations (Taurus, Canis Minor, Hydra, Crater, Corvus, Scorpio). These positions of the spring equinox occurred at the beginning of the age of Taurus, and included Gemini as an equatorial constellation. Two of the main figures in the Mithraic icons are identical figures, usually represented on each side of the bull, wearing Phrygian caps and holding torches. Their names, Cautes and Cautopates, and their appearance may indicate that they represent the constellation of Gemini. In that case the main icon of the Mithraic religion could represent an event that happened around 4000 BC, when the spring equinox entered the constellation of Taurus. This position of the equator also contains Perseus as an equatorial constellation. Ulansey suggested that the god Mithras is identified with the constellation Perseus. In that case, all figures in the main scene would be equatorial constellations.

  9. New Proposition for Redating of Mithraic Tauroctony Scene

    NASA Astrophysics Data System (ADS)

    Bon, Edi; Ćirković, Milan; Milosavljević, Ivana

Considering the idea that the figures in the central icon of the Mithraic religion, the scene of tauroctony (bull slaying), represent equatorial constellations in the times when the spring equinox lay between Taurus and Aries (Ulansey, 1989), it was hard to explain why some equatorial constellations (Orion and Libra) were not included in the Mithraic icons, even though they were equatorial at the time. In simulations of the sky for the times when the spring equinox was in the constellation of Taurus, only a small area of spring equinox positions allows these two constellations to be excluded while including all other representations of equatorial constellations (Taurus, Canis Minor, Hydra, Crater, Corvus, Scorpio). These positions correspond to the beginning of the age of Taurus, but they also include Gemini as an equatorial constellation. Two of the main figures in the icons of the Mithraic religion are identical figures, usually represented on each side of the bull, wearing Phrygian caps and holding torches. Their names, Cautes and Cautopates, and their appearance could suggest that they represent the constellation of Gemini. In that case the main icon of the Mithraic religion could represent the event that happened around 4000 BC, when the spring equinox entered the constellation of Taurus. This position of the equator also contains Perseus as an equatorial constellation. Ulansey proposed that the god Mithras is the constellation of Perseus. In that case, all figures in the main scene would be equatorial constellations.

  10. 12 CFR 561.33 - Note account.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 6 2013-01-01 2012-01-01 true Note account. 561.33 Section 561.33 Banks and... SAVINGS ASSOCIATIONS § 561.33 Note account. The term note account means a note, subject to the right of... States Treasury Department regulations. Note accounts are not savings accounts or savings deposits....

  11. 12 CFR 561.33 - Note account.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 6 2012-01-01 2012-01-01 false Note account. 561.33 Section 561.33 Banks and... SAVINGS ASSOCIATIONS § 561.33 Note account. The term note account means a note, subject to the right of... States Treasury Department regulations. Note accounts are not savings accounts or savings deposits....

  12. 12 CFR 161.33 - Note account.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 1 2014-01-01 2014-01-01 false Note account. 161.33 Section 161.33 Banks and... SAVINGS ASSOCIATIONS § 161.33 Note account. The term note account means a note, subject to the right of... States Treasury Department regulations. Note accounts are not savings accounts or savings deposits....

  13. 12 CFR 561.33 - Note account.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Note account. 561.33 Section 561.33 Banks and... SAVINGS ASSOCIATIONS § 561.33 Note account. The term note account means a note, subject to the right of... States Treasury Department regulations. Note accounts are not savings accounts or savings deposits....

  14. 12 CFR 390.300 - Note account.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 5 2013-01-01 2013-01-01 false Note account. 390.300 Section 390.300 Banks and... Associations § 390.300 Note account. The term note account means a note, subject to the right of immediate call... Department regulations. Note accounts are not savings accounts or savings deposits....

  15. 12 CFR 561.33 - Note account.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 6 2014-01-01 2012-01-01 true Note account. 561.33 Section 561.33 Banks and... SAVINGS ASSOCIATIONS § 561.33 Note account. The term note account means a note, subject to the right of... States Treasury Department regulations. Note accounts are not savings accounts or savings deposits....

  16. 12 CFR 390.300 - Note account.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 5 2014-01-01 2014-01-01 false Note account. 390.300 Section 390.300 Banks and... Associations § 390.300 Note account. The term note account means a note, subject to the right of immediate call... Department regulations. Note accounts are not savings accounts or savings deposits....

  17. 12 CFR 561.33 - Note account.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 5 2011-01-01 2011-01-01 false Note account. 561.33 Section 561.33 Banks and... SAVINGS ASSOCIATIONS § 561.33 Note account. The term note account means a note, subject to the right of... States Treasury Department regulations. Note accounts are not savings accounts or savings deposits....

  18. 12 CFR 390.300 - Note account.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 5 2012-01-01 2012-01-01 false Note account. 390.300 Section 390.300 Banks and... Associations § 390.300 Note account. The term note account means a note, subject to the right of immediate call... Department regulations. Note accounts are not savings accounts or savings deposits....

  19. 12 CFR 161.33 - Note account.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 1 2013-01-01 2013-01-01 false Note account. 161.33 Section 161.33 Banks and... SAVINGS ASSOCIATIONS § 161.33 Note account. The term note account means a note, subject to the right of... States Treasury Department regulations. Note accounts are not savings accounts or savings deposits....

  20. 12 CFR 161.33 - Note account.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 1 2012-01-01 2012-01-01 false Note account. 161.33 Section 161.33 Banks and... SAVINGS ASSOCIATIONS § 161.33 Note account. The term note account means a note, subject to the right of... States Treasury Department regulations. Note accounts are not savings accounts or savings deposits....

  1. Modeling Search for People in 900 Scenes: A combined source model of eye guidance.

    PubMed

    Ehinger, Krista A; Hidalgo-Sotelo, Barbara; Torralba, Antonio; Oliva, Aude

    2009-08-01

How predictable are human eye movements during search in real world scenes? We recorded 14 observers' eye movements as they performed a search task (person detection) in 912 outdoor scenes. Observers were highly consistent in the regions fixated during search, even when the target was absent from the scene. These eye movements were used to evaluate computational models of search guidance from three sources: saliency, target features, and scene context. Each of these models independently outperformed a cross-image control in predicting human fixations. Models that combined sources of guidance ultimately predicted 94% of human agreement, with the scene context component providing the most explanatory power. None of the models, however, could reach the precision and fidelity of an attentional map defined by human fixations. This work puts forth a benchmark for computational models of search in real world scenes. Further improvements in modeling should capture mechanisms underlying the selectivity of observers' fixations during search.
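The combined-source model described above can be sketched as a fusion of normalized guidance maps. The multiplicative combination, the equal default weights, and the function name below are illustrative assumptions, not the exact scheme used by Ehinger et al.:

```python
import numpy as np

def combined_guidance_map(saliency, target_features, context, weights=(1.0, 1.0, 1.0)):
    """Fuse three sources of search guidance into one fixation-prediction map.

    Each input is a 2D map over image locations. The weighted product and
    equal default weights are assumptions for illustration only.
    """
    combined = np.ones_like(np.asarray(saliency, dtype=float))
    for m, w in zip((saliency, target_features, context), weights):
        m = np.asarray(m, dtype=float)
        m = (m - m.min()) / (m.max() - m.min() + 1e-12)  # rescale to [0, 1]
        combined *= m ** w                               # weighted product
    return combined / combined.sum()                     # probability map
```

High values in the returned map mark locations the model predicts observers will fixate.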

  2. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

Capturing and modeling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetry, and remote sensing. The experiment uses multi-source data fusion for 3D scene reconstruction based on the principles of 3D laser scanning, taking laser point cloud data as the basis and a Digital Ortho-photo Map as an auxiliary source, and using 3ds Max software as the basic tool for three-dimensional scene reconstruction. The article covers data acquisition, data preprocessing, and 3D scene construction. The results show that the 3D scene has good fidelity and that its accuracy meets the needs of 3D scene construction.

  3. A Note on Inflation Targeting.

    ERIC Educational Resources Information Center

    Lai, Ching-chong; Chang, Juin-jen

    2001-01-01

    Presents a pedagogical graphical exposition to illustrate the stabilizing effect of price target zones. Finds that authorities' commitment to defend a price target zone affects the public's inflation expectations and, in turn, reduces actual inflation. (RLH)

  4. Attentional synchrony and the influence of viewing task on gaze behavior in static and dynamic scenes.

    PubMed

    Smith, Tim J; Mital, Parag K

    2013-01-01

    Does viewing task influence gaze during dynamic scene viewing? Research into the factors influencing gaze allocation during free viewing of dynamic scenes has reported that the gaze of multiple viewers clusters around points of high motion (attentional synchrony), suggesting that gaze may be primarily under exogenous control. However, the influence of viewing task on gaze behavior in static scenes and during real-world interaction has been widely demonstrated. To dissociate exogenous from endogenous factors during dynamic scene viewing we tracked participants' eye movements while they (a) freely watched unedited videos of real-world scenes (free viewing) or (b) quickly identified where the video was filmed (spot-the-location). Static scenes were also presented as controls for scene dynamics. Free viewing of dynamic scenes showed greater attentional synchrony, longer fixations, and more gaze to people and areas of high flicker compared with static scenes. These differences were minimized by the viewing task. In comparison with the free viewing of dynamic scenes, during the spot-the-location task fixation durations were shorter, saccade amplitudes were longer, and gaze exhibited less attentional synchrony and was biased away from areas of flicker and people. These results suggest that the viewing task can have a significant influence on gaze during a dynamic scene but that endogenous control is slow to kick in as initial saccades default toward the screen center, areas of high motion and people before shifting to task-relevant features. This default-like viewing behavior returns after the viewing task is completed, confirming that gaze behavior is more predictable during free viewing of dynamic than static scenes but that this may be due to natural correlation between regions of interest (e.g., people) and motion. PMID:23863509

  5. Caustic-Side Solvent Extraction: Prediction of Cesium Extraction for Actual Wastes and Actual Waste Simulants

    SciTech Connect

    Delmau, L.H.; Haverlock, T.J.; Sloop, F.V., Jr.; Moyer, B.A.

    2003-02-01

This report presents the work that followed the CSSX model development completed in FY2002. The developed cesium and potassium extraction model was based on extraction data obtained from simple aqueous media. It was tested to ensure the validity of the prediction for the cesium extraction from actual waste. Compositions of the actual tank waste were obtained from Savannah River Site personnel and were used to prepare defined simulants and to predict cesium distribution ratios using the model. It was therefore possible to compare the cesium distribution ratios obtained from the actual waste, the simulant, and the predicted values. It was determined that the predicted values agree with the measured values for the simulants. Predicted values also agreed, with three exceptions, with measured values for the tank wastes. Discrepancies were attributed in part to the uncertainty in the cation/anion balance in the actual waste composition, but likely more so to the uncertainty in the potassium concentration in the waste, given the demonstrated large competing effect of this metal on cesium extraction. It was demonstrated that the upper limit for the potassium concentration in the feed should not exceed 0.05 M in order to maintain suitable cesium distribution ratios.

  6. Seek and you shall remember: scene semantics interact with visual search to build better memories.

    PubMed

    Draschkow, Dejan; Wolfe, Jeremy M; Võ, Melissa L H

    2014-01-01

    Memorizing critical objects and their locations is an essential part of everyday life. In the present study, incidental encoding of objects in naturalistic scenes during search was compared to explicit memorization of those scenes. To investigate if prior knowledge of scene structure influences these two types of encoding differently, we used meaningless arrays of objects as well as objects in real-world, semantically meaningful images. Surprisingly, when participants were asked to recall scenes, their memory performance was markedly better for searched objects than for objects they had explicitly tried to memorize, even though participants in the search condition were not explicitly asked to memorize objects. This finding held true even when objects were observed for an equal amount of time in both conditions. Critically, the recall benefit for searched over memorized objects in scenes was eliminated when objects were presented on uniform, non-scene backgrounds rather than in a full scene context. Thus, scene semantics not only help us search for objects in naturalistic scenes, but appear to produce a representation that supports our memory for those objects beyond intentional memorization. PMID:25015385

  7. Self-interference polarization holographic imaging of a three-dimensional incoherent scene

    NASA Astrophysics Data System (ADS)

    Zhu, Ziyi; Shi, Zhimin

    2016-08-01

    We present a self-interference polarization holographic imaging (Si-Phi) technique to capture the three-dimensional information of an incoherent scene in a single shot. The light from the scene is modulated by a polarization-dependent lens, and a complex-valued polarization hologram is obtained by measuring directly the polarization profile of the light at the detection plane. Using a backward-propagating Green's function, we can numerically retrieve the transverse intensity profile of the scene at any desired focus plane. We demonstrate experimentally our Si-Phi technique by imaging, in real time, three-dimensional mimicked incoherent scenes created by a fast spatial light modulator.
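The numerical refocusing step (applying a backward-propagating Green's function to the recovered complex hologram) can be realized with the standard angular-spectrum method. This is a generic sketch under assumed sampling parameters, not code from the paper; a negative `dz` propagates backward:

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Numerically propagate a sampled complex field by a distance dz.

    field: 2D complex array; dx: pixel pitch (m); dz < 0 refocuses backward.
    Evanescent components are suppressed by clamping kz to zero.
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)      # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # axial wavenumber
    H = np.exp(1j * kz * dz)                          # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Scanning `dz` over a range of values yields the transverse intensity profile at each candidate focal plane, as described in the abstract.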

  9. Viewing nature scenes positively affects recovery of autonomic function following acute-mental stress.

    PubMed

    Brown, Daniel K; Barton, Jo L; Gladwell, Valerie F

    2013-06-01

    A randomized crossover study explored whether viewing different scenes prior to a stressor altered autonomic function during the recovery from the stressor. The two scenes were (a) nature (composed of trees, grass, fields) or (b) built (composed of man-made, urban scenes lacking natural characteristics) environments. Autonomic function was assessed using noninvasive techniques of heart rate variability; in particular, time domain analyses evaluated parasympathetic activity, using root-mean-square of successive differences (RMSSD). During stress, secondary cardiovascular markers (heart rate, systolic and diastolic blood pressure) showed significant increases from baseline which did not differ between the two viewing conditions. Parasympathetic activity, however, was significantly higher in recovery following the stressor in the viewing scenes of nature condition compared to viewing scenes depicting built environments (RMSSD; 50.0 ± 31.3 vs 34.8 ± 14.8 ms). Thus, viewing nature scenes prior to a stressor alters autonomic activity in the recovery period. The secondary aim was to examine autonomic function during viewing of the two scenes. Standard deviation of R-R intervals (SDRR), as change from baseline, during the first 5 min of viewing nature scenes was greater than during built scenes. Overall, this suggests that nature can elicit improvements in the recovery process following a stressor.
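The two time-domain heart rate variability indices reported above are straightforward to compute from a sequence of R-R intervals. A minimal sketch (the interval values in the docstrings and tests are illustrative, not data from the study):

```python
import numpy as np

def rmssd(rr_ms):
    """Root-mean-square of successive differences of R-R intervals (ms),
    the time-domain marker of parasympathetic activity used in the study."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def sdrr(rr_ms):
    """Sample standard deviation of R-R intervals (SDRR), in ms."""
    return float(np.std(np.asarray(rr_ms, dtype=float), ddof=1))
```

Both functions take the beat-to-beat interval series (in milliseconds) extracted from an ECG recording.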

  10. Parallel programming of saccades during natural scene viewing: evidence from eye movement positions.

    PubMed

    Wu, Esther X W; Gilani, Syed Omer; van Boxtel, Jeroen J A; Amihai, Ido; Chua, Fook Kee; Yen, Shih-Cheng

    2013-10-24

    Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms) moved to the center only one saccade later. Our data on eye movement positions provide novel evidence of parallel programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis.

  11. Oscillating blood droplets--implications for crime scene reconstruction.

    PubMed

    Raymond, M A; Smith, E R; Liesegang, J

    1996-01-01

    Traditionally, the analysis of blood spatter on surfaces in the reconstruction of crime scenes relies on the assumption that blood droplets are spherical when they strike the surface. This paper explores the effects of their shape on the reconstruction of trajectories from their impact pattern, and reports a theoretical analysis of the lifetime of droplet oscillations. Oscillations damp quickly in blood droplets due to the viscosity. The analysis provides ranges of velocities and distances from the point of droplet projection within which it is unreliable to assume the droplets are spherical when they stain a surface. Non-spherical droplet stains predict incorrect positioning of the droplet projection point. Experimental data are presented to show that the estimates apply in practice. PMID:8789933
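The damping argument can be made concrete with a back-of-the-envelope sketch using two classical fluid-mechanics results: Lamb's viscous damping time for the fundamental (n = 2) shape mode, tau = rho * R^2 / (5 * mu), and Rayleigh's oscillation frequency for the same mode. The blood property values below are typical assumed figures, not numbers taken from the paper:

```python
import math

RHO = 1060.0    # blood density, kg/m^3 (assumed typical value)
MU = 4.0e-3     # blood dynamic viscosity, Pa*s (assumed)
SIGMA = 0.06    # blood surface tension, N/m (assumed)

def damping_time(radius_m):
    """Lamb's e-folding time (s) for the n = 2 shape-oscillation mode."""
    return RHO * radius_m ** 2 / (5.0 * MU)

def oscillation_freq(radius_m):
    """Rayleigh frequency (Hz) of the n = 2 mode of an inviscid drop."""
    omega = math.sqrt(8.0 * SIGMA / (RHO * radius_m ** 3))
    return omega / (2.0 * math.pi)
```

For a 2 mm radius droplet this gives a damping time on the order of 0.2 s, i.e., oscillations persist only over short flight times and distances, consistent with the paper's warning about when the spherical-droplet assumption fails.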

  12. Luminance cues constrain chromatic blur discrimination in natural scene stimuli.

    PubMed

    Sharman, Rebecca J; McGraw, Paul V; Peirce, Jonathan W

    2013-01-01

    Introducing blur into the color components of a natural scene has very little effect on its percept, whereas blur introduced into the luminance component is very noticeable. Here we quantify the dominance of luminance information in blur detection and examine a number of potential causes. We show that the interaction between chromatic and luminance information is not explained by reduced acuity or spatial resolution limitations for chromatic cues, the effective contrast of the luminance cue, or chromatic and achromatic statistical regularities in the images. Regardless of the quality of chromatic information, the visual system gives primacy to luminance signals when determining edge location. In natural viewing, luminance information appears to be specialized for detecting object boundaries while chromatic information may be used to determine surface properties.

  13. A model for conceptual processing of naturalistic scenes.

    PubMed

    Hanna, A; Loftus, G

    1993-09-01

Are there fundamental differences in the way in which a list of pictures and a list of words are processed? We report three experiments that examine serial position effects for rapidly presented naturalistic scenes. The experiments provide a basis for comparison with the U-shaped serial position curve and list-length effect that typically result from verbal learning experiments. In contrast to the U-shaped verbal serial position function, our results show a flat function at the early serial positions and a recency effect that is small and limited to the last serial position. There is also a set-size effect. The results suggest that the processing leading to a memory representation differs qualitatively between visual stimuli such as pictures and linguistic stimuli such as words. The findings can be accounted for by a serial processing model whose main parameter is the probability that the subject will switch attention from one picture to the next.

  14. Human supervisory approach to modeling industrial scenes using geometric primitives

    SciTech Connect

    Luck, J.P.; Little, C.Q.; Roberts, R.S.

    1997-11-19

    A three-dimensional world model is crucial for many robotic tasks. Modeling techniques tend to be either fully manual or autonomous. Manual methods are extremely time consuming but also highly accurate and flexible. Autonomous techniques are fast but inflexible and, with real-world data, often inaccurate. The method presented in this paper combines the two, yielding a highly efficient, flexible, and accurate mapping tool. The segmentation and modeling algorithms that compose the method are specifically designed for industrial environments, and are described in detail. A mapping system based on these algorithms has been designed. It enables a human supervisor to quickly construct a fully defined world model from unfiltered and unsegmented real-world range imagery. Examples of how industrial scenes are modeled with the mapping system are provided.

  15. Dynamic infrared scene projectors based upon the DMD

    NASA Astrophysics Data System (ADS)

    Beasley, D. Brett; Bender, Matt; Crosby, Jay; Messer, Tim

    2009-02-01

    The Micromirror Array Projector System (MAPS) is an advanced dynamic scene projector system developed by Optical Sciences Corporation (OSC) for Hardware-In-the-Loop (HWIL) simulation and sensor test applications. The MAPS is based upon the Texas Instruments Digital Micromirror Device (DMD) which has been modified to project high resolution, realistic imagery suitable for testing sensors and seekers operating in the UV, visible, NIR, and IR wavebands. Since the introduction of the first MAPS in 2001, OSC has continued to improve the technology and develop systems for new projection and Electro-Optical (E-O) test applications. This paper reviews the basic MAPS design and performance capabilities. We also present example projectors and E-O test sets designed and fabricated by OSC in the last 7 years. Finally, current research efforts and new applications of the MAPS technology are discussed.

  16. New scene projector developments at the AMRDEC's advanced simulation center

    NASA Astrophysics Data System (ADS)

    Saylor, Daniel A.; Bowden, Mark; Buford, James

    2006-05-01

    The Aviation and Missile Research, Engineering, and Development Center's (AMRDEC) System Simulation and Development Directorate (SS&DD) has an extensive history of applying all types of modeling and simulation (M&S) to weapon system development and has been a particularly strong advocate of hardware-in-the-loop (HWIL) simulation and test for many years. Key to the successful application of HWIL testing at AMRDEC has been the use of state-of-the-art Scene Projector technologies. This paper describes recent advancements over the past year within the AMRDEC Advanced Simulation Center (ASC) HWIL facilities with a specific emphasis on the state of the various IRSP technologies employed. Areas discussed include application of FMS-compatible IR projectors, advancements in hybrid and multi-spectral projectors, and characterization of existing and emerging technologies.

  18. Complete scene recovery and terrain classification in textured terrain meshes.

    PubMed

    Song, Wei; Cho, Kyungeun; Um, Kyhyun; Won, Chee Sun; Sim, Sungdae

    2012-01-01

    Terrain classification allows a mobile robot to create an annotated map of its local environment from the three-dimensional (3D) and two-dimensional (2D) datasets collected by its array of sensors, including a GPS receiver, gyroscope, video camera, and range sensor. However, parts of objects that are outside the measurement range of the range sensor will not be detected. To overcome this problem, this paper describes an edge estimation method for complete scene recovery and complete terrain reconstruction. Here, the Gibbs-Markov random field is used to segment the ground from 2D videos and 3D point clouds. Further, a masking method is proposed to classify buildings and trees in a terrain mesh.

  19. Scene-based nonuniformity correction using sparse prior

    NASA Astrophysics Data System (ADS)

    Mou, Xingang; Zhang, Guilin; Hu, Ruolan; Zhou, Xiao

    2011-11-01

The performance of infrared focal plane arrays (IRFPA) is known to be degraded by spatial fixed pattern noise (FPN) superimposed on the true image. Scene-based nonuniformity correction (NUC) algorithms have attracted wide interest because they need only the readout infrared data captured by the imaging system during normal operation. A novel adaptive NUC algorithm is proposed using the sparse prior that, when derivative filters are applied to infrared images, the filter outputs tend to be sparse. A change detection module based on the derivative filter outputs is introduced to prevent stationary objects from being learned into the background, so the ghosting artifact is eliminated effectively. The performance of the new algorithm is evaluated with both real and simulated imagery.
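The flavor of a derivative-sparsity NUC update can be sketched as sign-gradient descent on the L1 norm of the horizontal derivative of the corrected frame, estimating a per-pixel offset. This is a simplified single-frame illustration only; the actual algorithm operates over a video stream and uses its change-detection module to avoid absorbing true scene structure (the ghosting the paper addresses):

```python
import numpy as np

def nuc_sparse_step(frame, offset, lr=0.01):
    """One sign-gradient step that updates a per-pixel offset estimate to
    reduce the L1 norm of the horizontal derivative of the corrected frame.
    Gain correction and change detection are omitted for brevity."""
    corrected = frame - offset
    dx = np.diff(corrected, axis=1)        # horizontal derivative
    grad = np.zeros_like(offset)
    grad[:, :-1] += np.sign(dx)            # d|dx| / d(offset) contributions
    grad[:, 1:] -= np.sign(dx)
    return offset - lr * grad
```

Iterating this step drives the corrected frame toward derivative sparsity, absorbing column-to-column fixed pattern noise into the offset estimate.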

  20. Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis

    NASA Astrophysics Data System (ADS)

    Büchler, Michael; Allegro, Silvia; Launer, Stefan; Dillier, Norbert

    2005-12-01

    A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
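Of the classifiers compared, the minimum-distance (nearest class mean) classifier is the simplest to sketch. The feature vectors and class names below are placeholders for the auditory-scene-analysis features (modulation, spectral profile, harmonicity, onsets, rhythm) described above, which are assumed to be extracted upstream:

```python
import numpy as np

class MinimumDistanceClassifier:
    """Assign each feature vector to the class whose training mean is
    nearest in Euclidean distance -- the minimum-distance baseline
    among the pattern classifiers compared in the paper."""

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = sorted(set(y.tolist()))
        self.means_ = {c: X[y == c].mean(axis=0) for c in self.classes_}
        return self

    def predict(self, X):
        X = np.atleast_2d(np.asarray(X, dtype=float))
        return [min(self.classes_, key=lambda c: np.linalg.norm(x - self.means_[c]))
                for x in X]
```

More complex approaches in the comparison (Bayes classifier, neural network, hidden Markov model) replace the class-mean distance with richer decision rules over the same features.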

  2. Extracting scene feature vectors through modeling, volume 3

    NASA Technical Reports Server (NTRS)

    Berry, J. K.; Smith, J. A.

    1976-01-01

    The remote estimation of the leaf area index of winter wheat at Finney County, Kansas was studied. The procedure developed consists of three activities: (1) field measurements; (2) model simulations; and (3) response classifications. The first activity is designed to identify model input parameters and develop a model evaluation data set. A stochastic plant canopy reflectance model is employed to simulate reflectance in the LANDSAT bands as a function of leaf area index for two phenological stages. An atmospheric model is used to translate these surface reflectances into simulated satellite radiance. A divergence classifier determines the relative similarity between model derived spectral responses and those of areas with unknown leaf area index. The unknown areas are assigned the index associated with the closest model response. This research demonstrated that the SRVC canopy reflectance model is appropriate for wheat scenes and that broad categories of leaf area index can be inferred from the procedure developed.

  3. Conference scene: pharmacogenomics: from cell to clinic (part 2).

    PubMed

    Siest, Gérard; Medeiros, Rui; Melichar, Bohuslav; Stathopoulou, Maria; Van Schaik, Ron Hn; Cacabelos, Ramon; Abt, Peter Meier; Monteiro, Carolino; Gurwitz, David; Queiroz, Jao; Mota-Filipe, Helder; Ndiaye, Ndeye Coumba; Visvikis-Siest, Sophie

    2014-04-01

    Second International ESPT Meeting Lisbon, Portugal, 26-28 September 2013 The second European Society of Pharmacogenomics and Theranostics (ESPT) conference was organized in Lisbon, Portugal, and attracted 250 participants from 37 different countries. The participants could listen to 50 oral presentations, participate in five lunch symposia and were able to view 83 posters and an exhibition. Part 1 of this Conference Scene was presented in the previous issue of Pharmacogenomics. This second part will focus on: clinical implementation of pharmacogenomics tests; transporters and pharmacogenomics; stem cells and other new tools for pharmacogenomics and drug discovery; from system pharmacogenomics to personalized medicine; and, finally, we will discuss the Posters and Awards that were presented at the conference.

  4. When anticipation beats accuracy: Threat alters memory for dynamic scenes.

    PubMed

    Greenstein, Michael; Franklin, Nancy; Martins, Mariana; Sewack, Christine; Meier, Markus A

    2016-05-01

    Threat frequently leads to the prioritization of survival-relevant processes. Much of the work examining threat-related processing advantages has focused on the detection of static threats or long-term memory for details. In the present study, we examined immediate memory for dynamic threatening situations. We presented participants with visually neutral, dynamic stimuli using a representational momentum (RM) paradigm, and manipulated threat conceptually. Although the participants in both the threatening and nonthreatening conditions produced classic RM effects, RM was stronger for scenarios involving threat (Exps. 1 and 2). Experiments 2 and 3 showed that this effect does not generalize to the nonthreatening objects within a threatening scene, and that it does not extend to arousing happy situations. Although the increased RM effect for threatening objects by definition reflects reduced accuracy, we argue that this reduced accuracy may be offset by a superior ability to predict, and thereby evade, a moving threat.

  5. Attention, Awareness, and the Perception of Auditory Scenes

    PubMed Central

    Snyder, Joel S.; Gregg, Melissa K.; Weintraub, David M.; Alain, Claude

    2011-01-01

    Auditory perception and cognition entails both low-level and high-level processes, which are likely to interact with each other to create our rich conscious experience of soundscapes. Recent research that we review has revealed numerous influences of high-level factors, such as attention, intention, and prior experience, on conscious auditory perception. And recently, studies have shown that auditory scene analysis tasks can exhibit multistability in a manner very similar to ambiguous visual stimuli, presenting a unique opportunity to study neural correlates of auditory awareness and the extent to which mechanisms of perception are shared across sensory modalities. Research has also led to a growing number of techniques through which auditory perception can be manipulated and even completely suppressed. Such findings have important consequences for our understanding of the mechanisms of perception and also should allow scientists to precisely distinguish the influences of different higher-level influences. PMID:22347201

  7. Inverting a dispersive scene's side-scanned image

    NASA Technical Reports Server (NTRS)

    Harger, R. O.

    1983-01-01

    Consideration is given to the problem of using a remotely sensed, side-scanned image of a time-variant scene, which changes according to a dispersion relation, to estimate the structure at a given moment. Additive thermal noise is neglected in the models considered in the formal treatment. It is shown that the dispersion relation is normalized by the scanning velocity, as is the group scanning velocity component. An inversion operation is defined for noise-free images generated by SAR. The method is extended to the inversion of noisy imagery, and a formulation is defined for spectral density estimation. Finally, the methods for a radar system are used for the case of sonar.

  9. Conference scene: pharmacogenomics: from cell to clinic (part 2).

    PubMed

    Siest, Gérard; Medeiros, Rui; Melichar, Bohuslav; Stathopoulou, Maria; Van Schaik, Ron Hn; Cacabelos, Ramon; Abt, Peter Meier; Monteiro, Carolino; Gurwitz, David; Queiroz, Jao; Mota-Filipe, Helder; Ndiaye, Ndeye Coumba; Visvikis-Siest, Sophie

    2014-04-01

    Second International ESPT Meeting, Lisbon, Portugal, 26-28 September 2013. The second European Society of Pharmacogenomics and Theranostics (ESPT) conference was organized in Lisbon, Portugal, and attracted 250 participants from 37 different countries. Participants could listen to 50 oral presentations, take part in five lunch symposia, and view 83 posters and an exhibition. Part 1 of this Conference Scene was presented in the previous issue of Pharmacogenomics. This second part will focus on: clinical implementation of pharmacogenomics tests; transporters and pharmacogenomics; stem cells and other new tools for pharmacogenomics and drug discovery; and from system pharmacogenomics to personalized medicine. Finally, we will discuss the posters and awards that were presented at the conference. PMID:24897282

  10. Infrared imaging of the crime scene: possibilities and pitfalls.

    PubMed

    Edelman, Gerda J; Hoveling, Richelle J M; Roos, Martin; van Leeuwen, Ton G; Aalders, Maurice C G

    2013-09-01

    All objects radiate infrared energy invisible to the human eye, which can be imaged by infrared cameras, visualizing differences in temperature and/or emissivity of objects. Infrared imaging is an emerging technique for forensic investigators. The rapid, nondestructive, and noncontact features of infrared imaging indicate its suitability for many forensic applications, ranging from the estimation of time of death to the detection of blood stains on dark backgrounds. This paper provides an overview of the principles and instrumentation involved in infrared imaging. Difficulties concerning the image interpretation due to different radiation sources and different emissivity values within a scene are addressed. Finally, reported forensic applications are reviewed and supported by practical illustrations. When introduced in forensic casework, infrared imaging can help investigators to detect, to visualize, and to identify useful evidence nondestructively. PMID:23919285

  11. Providing Study Notes: Comparison of Three Types of Notes for Review.

    ERIC Educational Resources Information Center

    Kiewra, Kenneth A.; And Others

    1988-01-01

    Forty-four undergraduates received different types of notes for review of a lecture (complete text, linear outline, or matrix), or received no notes. Any form of notes increased performance over no notes, with matrix and outline notes producing higher recall and matrix notes producing greatest transfer. (SLD)

  12. Catchment areas of panoramic snapshots in outdoor scenes

    NASA Astrophysics Data System (ADS)

    Zeil, Jochen; Hofmann, Martin I.; Chahl, Javaan S.

    2003-03-01

    We took panoramic snapshots in outdoor scenes at regular intervals in two- or three-dimensional grids covering 1 m2 or 1 m3 and determined how the root mean square pixel differences between each of the images and a reference image acquired at one of the locations in the grid develop over distance from the reference position. We then asked whether the reference position can be pinpointed from a random starting position by moving the panoramic imaging device in such a way that the image differences relative to the reference image are minimized. We find that on time scales of minutes to hours, outdoor locations are accurately defined by a clear, sharp minimum in a smooth three-dimensional (3D) volume of image differences (the 3D difference function). 3D difference functions depend on the spatial-frequency content of natural scenes and on the spatial layout of objects therein. They become steeper in the vicinity of dominant objects. Their shape and smoothness, however, are affected by changes in illumination and shadows. The difference functions generated by rotation are similar in shape to those generated by translation, but their plateau values are higher. Rotational difference functions change little with distance from the reference location. Simple gradient descent methods are surprisingly successful in recovering a goal location, even if faced with transient changes in illumination. Our results show that view-based homing with panoramic images is in principle feasible in natural environments and does not require the identification of individual landmarks. We discuss the relevance of our findings to the study of robot and insect homing.
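    The view-based homing procedure described above (compute the root mean square pixel difference between the current panoramic image and the goal image, then descend that difference surface) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the grid layout, function names, and greedy neighbour search are our assumptions:

```python
import numpy as np

def rms_diff(img_a, img_b):
    """Root-mean-square pixel difference between two panoramic images."""
    return np.sqrt(np.mean((img_a.astype(float) - img_b.astype(float)) ** 2))

def home_by_gradient_descent(grid, start, ref):
    """Greedy descent on the image-difference surface.

    grid  -- dict mapping (x, y) grid positions to panoramic images
    start -- starting position, an (x, y) key of grid
    ref   -- reference (goal) image
    Returns the position reached when no neighbour lowers the difference.
    """
    pos = start
    while True:
        x, y = pos
        neighbours = [(x + dx, y + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0) and (x + dx, y + dy) in grid]
        best = min(neighbours, key=lambda p: rms_diff(grid[p], ref), default=None)
        if best is None or rms_diff(grid[best], ref) >= rms_diff(grid[pos], ref):
            return pos
        pos = best
```

    On a difference surface with the clear, sharp minimum the authors report, this greedy descent converges to the reference location; their finding is that real outdoor image-difference functions are smooth enough for such simple strategies to work.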

  13. MISR empirical stray light corrections in high-contrast scenes

    NASA Astrophysics Data System (ADS)

    Limbacher, J. A.; Kahn, R. A.

    2015-07-01

    We diagnose the potential causes for the Multi-angle Imaging SpectroRadiometer's (MISR) persistent high aerosol optical depth (AOD) bias at low AOD with the aid of coincident MODerate-resolution Imaging Spectroradiometer (MODIS) imagery from NASA's Terra satellite. Stray light in the MISR instrument is responsible for a large portion of the high AOD bias in high-contrast scenes, such as broken-cloud scenes that are quite common over ocean. Discrepancies among MODIS and MISR nadir-viewing blue, green, red, and near-infrared images are used to optimize seven parameters individually for each wavelength, along with a background reflectance modulation term that is modeled separately, to represent the observed features. Independent surface-based AOD measurements from the AErosol RObotic NETwork (AERONET) and the Marine Aerosol Network (MAN) are compared with MISR research aerosol retrieval algorithm (RA) AOD retrievals for 1118 coincidences to validate the corrections when applied to the nadir and off-nadir cameras. With these corrections, plus the baseline RA corrections and enhanced cloud screening applied, the median AOD bias for all data in the mid-visible (green, 558 nm) band decreases from 0.006 (0.020 for the MISR standard algorithm (SA)) to 0.000, and the RMSE decreases by 5 % (27 % compared to the SA). For AOD558 nm < 0.10, which includes about half the validation data, 68th percentile absolute AOD558 nm errors for the RA have dropped from 0.022 (0.034 for the SA) to < 0.02 (~ 0.018).
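    The validation statistics quoted above (median AOD bias, RMSE, and the 68th-percentile absolute error) are straightforward to reproduce from paired retrieved and ground-truth AOD values. A minimal sketch, assuming two equal-length arrays; the function name is ours, not from the paper:

```python
import numpy as np

def aod_validation_stats(retrieved, truth):
    """Summary statistics for comparing retrieved AOD against ground-truth
    (e.g. AERONET/MAN) AOD: median bias, RMSE, and 68th-percentile
    absolute error."""
    retrieved = np.asarray(retrieved, dtype=float)
    truth = np.asarray(truth, dtype=float)
    err = retrieved - truth
    return {
        "median_bias": float(np.median(err)),
        "rmse": float(np.sqrt(np.mean(err ** 2))),
        "abs_err_p68": float(np.percentile(np.abs(err), 68)),
    }
```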

  14. Monocular 3-D gait tracking in surveillance scenes.

    PubMed

    Rogez, Grégory; Rihan, Jonathan; Guerrero, Jose J; Orrite, Carlos

    2014-06-01

    Gait recognition can potentially provide a noninvasive and effective biometric authentication from a distance. However, the performance of gait recognition systems will suffer in real surveillance scenarios with multiple interacting individuals and where the camera is usually placed at a significant angle and distance from the floor. We present a methodology for view-invariant monocular 3-D human pose tracking in man-made environments in which we assume that observed people move on a known ground plane. First, we model 3-D body poses and camera viewpoints with a low dimensional manifold and learn a generative model of the silhouette from this manifold to a reduced set of training views. During the online stage, 3-D body poses are tracked using recursive Bayesian sampling conducted jointly over the scene's ground plane and the pose-viewpoint manifold. For each sample, the homography that relates the corresponding training plane to the image points is calculated using the dominant 3-D directions of the scene, the sampled location on the ground plane and the sampled camera view. Each regressed silhouette shape is projected using this homographic transformation and is matched in the image to estimate its likelihood. Our framework is able to track 3-D human walking poses in a 3-D environment exploring only a 4-D state space with success. In our experimental evaluation, we demonstrate the significant improvements of the homographic alignment over a commonly used similarity transformation and provide quantitative pose tracking results for the monocular sequences with a high perspective effect from the CAVIAR dataset. PMID:23955796
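    The homographic alignment step amounts to mapping 2-D silhouette points through a 3x3 homography in homogeneous coordinates. A minimal sketch of that projection (the function name and array layout are our own, not from the paper):

```python
import numpy as np

def apply_homography(H, points):
    """Map 2-D points through a 3x3 homography H.

    points: (N, 2) array; returns an (N, 2) array of projected points.
    """
    pts = np.asarray(points, dtype=float)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # lift to (x, y, 1)
    proj = homog @ H.T                                     # apply H
    return proj[:, :2] / proj[:, 2:3]                      # dehomogenize
```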

  15. Can IR scene projectors reduce total system cost?

    NASA Astrophysics Data System (ADS)

    Ginn, Robert; Solomon, Steven

    2006-05-01

    There is an incredible amount of system engineering involved in turning the typical infrared system needs of probability of detection, probability of identification, and probability of false alarm into focal plane array (FPA) requirements of noise equivalent irradiance (NEI), modulation transfer function (MTF), fixed pattern noise (FPN), and defective pixels. Unfortunately, there are no analytic solutions to this problem, so many approximations and plenty of "seat of the pants" engineering are employed. This leads to conservative specifications, which needlessly drive up system costs by increasing system engineering costs, reducing FPA yields, increasing test costs, increasing rework, and prompting never-ending renegotiation of requirements in an effort to rein in costs. These issues do not include the added complexity to the FPA factory manager of trying to meet varied, and changing, requirements for similar products because different customers have made different approximations and flowed down different specifications. Scene generation technology may well be mature and cost-effective enough to generate considerable overall savings for FPA-based systems. We will compare the costs and capabilities of various existing scene generation systems and estimate the potential savings if implemented at several locations in the IR system fabrication cycle. The costs of implementing this new testing methodology will be compared to the probable savings in systems engineering, test, rework, yield improvement and others. The diverse requirements and techniques required for testing missile warning systems, missile seekers, and FLIRs will be defined. Last, we will discuss both the hardware and software requirements necessary to meet the new test paradigm and discuss additional cost improvements related to the incorporation of these technologies.

  16. Consequences of Predicted or Actual Asteroid Impacts

    NASA Astrophysics Data System (ADS)

    Chapman, C. R.

    2003-12-01

    Earth impact by an asteroid could have enormous physical and environmental consequences. Impactors larger than 2 km diameter could be so destructive as to threaten civilization. Since such events greatly exceed any other natural or man-made catastrophe, much extrapolation is necessary just to understand environmental implications (e.g. sudden global cooling, tsunami magnitude, toxic effects). Responses of vital elements of the ecosystem (e.g. agriculture) and of human society to such an impact are conjectural. For instance, response to the Blackout of 2003 was restrained, but response to 9/11 terrorism was arguably exaggerated and dysfunctional; would society be fragile or robust in the face of global catastrophe? Even small impacts, or predictions of impacts (accurate or faulty), could generate disproportionate responses, especially if news media reports are hyped or inaccurate or if responsible entities (e.g. military organizations in regions of conflict) are inadequately aware of the phenomenology of small impacts. Asteroid impact is the one geophysical hazard of high potential consequence with which we, fortunately, have essentially no historical experience. It is thus important that decision makers familiarize themselves with the hazard and that society (perhaps using a formal procedure, like a National Academy of Sciences study) evaluate the priority of addressing the hazard by (a) further telescopic searches for dangerous but still-undiscovered asteroids and (b) development of mitigation strategies (including deflection of an oncoming asteroid and on-Earth civil defense). I exemplify these issues by discussing several representative cases that span the range of parameters. Many of the specific physical consequences of impact involve effects like those of other geophysical disasters (flood, fire, earthquake, etc.), but the psychological and sociological aspects of predicted and actual impacts are distinctive. Standard economic cost/benefit analyses may not

  17. A unification framework for best-of-breed real-time scene generation

    NASA Astrophysics Data System (ADS)

    Morris, Joseph W.; Ballard, Gary H.; Trimble, Darian E.; Bunfield, Dennis H.; Mayhall, Anthony J.

    2010-04-01

    AMRDEC sought out an improved framework for real-time hardware-in-the-loop (HWIL) scene generation to provide the flexibility needed to adapt to rapidly changing hardware advancements and provide the ability to more seamlessly integrate external third-party codes for Best-of-Breed real-time scene generation. As such, AMRDEC has developed Continuum, a new software architecture foundation to allow for the integration of these codes into a HWIL lab facility while enhancing existing AMRDEC HWIL scene generation codes such as the Joint Signature Image Generator (JSIG). This new real-time framework is a minimalistic modular approach based on the National Institute of Standards and Technology (NIST) Neutral Messaging Language (NML) that provides the basis for common HWIL scene generation. High-speed interconnects and protocols were examined to support distributed scene generation, whereby the scene graph, associated phenomenology, and resulting scene can be designed around the data rather than a framework, and the scene elements can be dynamically distributed across multiple high performance computing assets. Because of this open architecture approach, the framework facilitates scaling from a single-GPU "traditional" PC scene generation system to a multi-node distributed system requiring load distribution and scene compositing across multiple high performance computing platforms. This takes advantage of the latest advancements in GPU hardware, such as NVIDIA's Tesla and Fermi architectures, providing an increased benefit in both fidelity and performance of the associated scene's phenomenology. Other features of Continuum easily extend the use of this framework to include visualization, diagnostic, analysis, configuration, and other HWIL and all-digital simulation tools.

  18. Problem Areas in Student Teaching and the California Scene.

    ERIC Educational Resources Information Center

    Forer, Ruth K.

    The paper describes the practicum requirements of the Special Education Teacher Preparation program at California State University at Northridge, and it notes problem areas in the roles of the principal, cooperating teacher, and the university professor. Field work requirements involving observation, participation, and student teaching at both the…

  19. Notes.

    ERIC Educational Resources Information Center

    Physics Teacher, 1979

    1979-01-01

    Some topics included are: the relative merits of a programmable calculator and a microcomputer; the advantages of acquiring a sound-level meter for the laboratory; how to locate a virtual image in a plane mirror; center of gravity of a student; and how to demonstrate interference of light using two cords.

  20. A Note on Reverse Derivations

    ERIC Educational Resources Information Center

    Samman, M.

    2005-01-01

    In this note, the notion of reverse derivation is studied. It is shown that in the class of semiprime rings, this notion coincides with the usual derivation when it maps a semiprime ring into its centre. However, we provide some examples to show that it is not the case in general.

  1. Challenging the "Cliffs Notes" Syndrome.

    ERIC Educational Resources Information Center

    Karsten, Ernie

    1989-01-01

    Presents an approach to teaching Pierre Boulle's novella, "Face of a Hero," in which students produce their own "Cliffs Notes" for the text. Stresses the importance of using nontraditional literature, and shows how students can discover their own richer responses to literature instead of relying on study aids. (MM)

  2. Possibilities of lasers within NOTES.

    PubMed

    Stepp, Herbert; Sroka, Ronald

    2010-10-01

    Lasers possess unique properties that render them versatile light sources, particularly for NOTES. Depending on the laser light source used, both diagnostic and therapeutic purposes can be served. The diagnostic potential offered by innovative concepts such as new types of ultra-thin endoscopes and optical probes supports the physician with optical information of ultra-high resolution, tissue discrimination, and manifold types of fluorescence detection. In addition, the potential 3-D capability promises enhanced recognition of tissue type and pathological status. These diagnostic techniques might enable, or at least contribute to, accurate and safe procedures within the spatial restrictions inherent in NOTES. The therapeutic potential ranges from the induction of phototoxic effects, through tissue welding, coagulation, and tissue cutting, to stone fragmentation. As proven in many therapeutic laser endoscopic treatment concepts, laser surgery is potentially bloodless and transmits energy without mechanical force. Specialized NOTES endoscopes will likely incorporate suitable probes for improving diagnostic procedures, laser fibres with advantageous light-delivery properties, or innovative laser-beam manipulation systems. NOTES training centres may support the propagation of the complex handling and safety aspects of clinical use, to the benefit of the patient.

  3. 49 CFR Appendix - Editorial Note:

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 16084; 28 CFR § 0.66). The Assistant Attorney General, Land and Natural Resources Division, has further.... Editorial Note: For Federal Register citations affecting appendix A to part 1, see the List of CFR Sections... determinations as to the sufficiency of titles. The Chief Counsels of the Federal Aviation...

  4. A Note on Hamiltonian Graphs

    ERIC Educational Resources Information Center

    Skurnick, Ronald; Davi, Charles; Skurnick, Mia

    2005-01-01

    Since 1952, several well-known graph theorists have proven numerous results regarding Hamiltonian graphs. In fact, many elementary graph theory textbooks contain the theorems of Ore, Bondy and Chvatal, Chvatal and Erdos, Posa, and Dirac, to name a few. In this note, the authors state and prove some propositions of their own concerning Hamiltonian…

  5. Applied Fluid Mechanics. Lecture Notes.

    ERIC Educational Resources Information Center

    Gregg, Newton D.

    This set of lecture notes is used as a supplemental text for the teaching of fluid dynamics, as one component of a thermodynamics course for engineering technologists. The major text for the course covered basic fluids concepts such as pressure, mass flow, and specific weight. The objective of this document was to present additional fluids…

  6. Constructing Visual Representations of Natural Scenes: The Roles of Short- and Long-Term Visual Memory

    ERIC Educational Resources Information Center

    Hollingworth, Andrew

    2004-01-01

    A "follow-the-dot" method was used to investigate the visual memory systems supporting accumulation of object information in natural scenes. Participants fixated a series of objects in each scene, following a dot cue from object to object. Memory for the visual form of a target object was then tested. Object memory was consistently superior for…

  7. Real-time detection of moving objects in a dynamic scene from moving robotic vehicles

    NASA Technical Reports Server (NTRS)

    Ansar, A.; Talukder, S.; Goldberg, L.; Matthies, A.

    2003-01-01

    Dynamic scene perception is currently limited to detection of moving objects from a static platform or scenes with flat backgrounds. We discuss novel methods to segment moving objects in the motion field formed by a moving camera/robotic platform in real time.

  8. The Interplay of Episodic and Semantic Memory in Guiding Repeated Search in Scenes

    ERIC Educational Resources Information Center

    Vo, Melissa L.-H.; Wolfe, Jeremy M.

    2013-01-01

    It seems intuitive to think that previous exposure or interaction with an environment should make it easier to search through it and, no doubt, this is true in many real-world situations. However, in a recent study, we demonstrated that previous exposure to a scene does not necessarily speed search within that scene. For instance, when observers…

  9. Priming of Simple and Complex Scene Layout: Rapid Function from the Intermediate Level

    ERIC Educational Resources Information Center

    Sanocki, Thomas; Sulman, Noah

    2009-01-01

    Three experiments examined the time course of layout priming with photographic scenes varying in complexity (number of objects). Primes were presented for varying durations (800-50 ms) before a target scene with 2 spatial probes; observers indicated whether the left or right probe was closer to viewpoint. Reaction time was the main measure. Scene…

  10. SmartScene: An Immersive, Realtime, Assembly, Verification and Training Application

    NASA Technical Reports Server (NTRS)

    Homan, Ray

    1997-01-01

    There are four major components to SmartScene. First, it is shipped with everything necessary to quickly be able to do productive work. It is immersive in that when a user is working in SmartScene he or she cannot see anything except the world being manipulated.

  11. Representation of higher-order statistical structures in natural scenes via spatial phase distributions.

    PubMed

    MaBouDi, HaDi; Shimazaki, Hideaki; Amari, Shun-ichi; Soltanian-Zadeh, Hamid

    2016-03-01

    Natural scenes contain richer perceptual information in their spatial phase structure than in their amplitudes. Modeling the phase structure of natural scenes may explain higher-order structure inherent to the natural scenes, which is neglected in most classical models of redundancy reduction. Only recently have a few models represented images using a complex form of receptive fields (RFs) and analyzed their complex responses in terms of amplitude and phase. However, these complex representation models often tacitly assume a uniform phase distribution without empirical support. The structure of spatial phase distributions of natural scenes, in the form of relative contributions of paired responses of RFs in quadrature, has not been explored statistically until now. Here, we investigate the spatial phase structure of natural scenes using complex forms of various Gabor-like RFs. To analyze the distributions of the spatial phase responses, we constructed a mixture model that accounts for multimodal circular distributions and used the EM algorithm to estimate the model parameters. Based on the likelihood, we report the presence of both uniform and structured bimodal phase distributions in natural scenes. The latter bimodal distributions were symmetric, with two peaks separated by about 180°. Thus, the redundancy in natural scenes can be further removed by using the bimodal phase distributions obtained from these RFs in the complex representation models. These results predict that both phase-invariant and phase-sensitive complex cells are required to represent the regularities of natural scenes in visual systems.
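    The quadrature-pair RF responses analysed above reduce to an even (cosine-phase) and an odd (sine-phase) filter response, from which amplitude and spatial phase follow. A minimal sketch under that standard formulation; the function name and the dot-product form of the RF response are our assumptions, not the paper's code:

```python
import numpy as np

def quadrature_response(patch, even_rf, odd_rf):
    """Amplitude and spatial phase of a complex (quadrature-pair) receptive
    field applied to an image patch: the even and odd RFs are 90 degrees out
    of phase, like a cosine/sine Gabor pair."""
    e = float(np.sum(patch * even_rf))  # even (cosine-phase) response
    o = float(np.sum(patch * odd_rf))   # odd (sine-phase) response
    amplitude = np.hypot(e, o)          # phase-invariant energy
    phase = np.arctan2(o, e)            # spatial phase in (-pi, pi]
    return amplitude, phase
```

    A cosine grating matched to the pair yields phase 0, while the same grating shifted by a quarter cycle yields phase pi/2 at the same amplitude, which is the phase-invariant/phase-sensitive split the abstract alludes to.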

  12. Was That Levity or Livor Mortis? Crime Scene Investigators' Perspectives on Humor and Work

    ERIC Educational Resources Information Center

    Vivona, Brian D.

    2012-01-01

    Humor is common and purposeful in most work settings. Although researchers have examined humor and joking behavior in various work settings, minimal research has been done on humor applications in the field of crime scene investigation. The crime scene investigator encounters death, trauma, and tragedy in a more intimate manner than any other…

  13. Assessment of Subtraction Scene Understanding Using a Story-Generation Task

    ERIC Educational Resources Information Center

    Kinda, Shigehiro

    2010-01-01

    The present study used a new assessment technique, the story-generation task, to examine students' understanding of subtraction scenes. The students from four grade levels (110 first-, 107 third-, 110 fourth- and 119 sixth-graders) generated stories under the constraints provided by a picture (representing Change, Combine or Compare scene) and a…

  14. The Effect of Scene Variation on the Redundant Use of Color in Definite Reference

    ERIC Educational Resources Information Center

    Koolen, Ruud; Goudbeek, Martijn; Krahmer, Emiel

    2013-01-01

    This study investigates to what extent the amount of variation in a visual scene causes speakers to mention the attribute color in their definite target descriptions, focusing on scenes in which this attribute is not needed for identification of the target. The results of our three experiments show that speakers are more likely to redundantly…

  15. The Role of Visual Experience on the Representation and Updating of Novel Haptic Scenes

    ERIC Educational Resources Information Center

    Pasqualotto, Achille; Newell, Fiona N.

    2007-01-01

    We investigated the role of visual experience on the spatial representation and updating of haptic scenes by comparing recognition performance across sighted, congenitally and late blind participants. We first established that spatial updating occurs in sighted individuals to haptic scenes of novel objects. All participants were required to…

  16. Fundamental remote sensing science research program. Part 1: Scene radiation and atmospheric effects characterization project

    NASA Technical Reports Server (NTRS)

    Murphy, R. E.; Deering, D. W.

    1984-01-01

    Brief articles summarizing the status of research in the scene radiation and atmospheric effect characterization (SRAEC) project are presented. Research conducted within the SRAEC program is focused on the development of empirical characterizations and mathematical process models which relate the electromagnetic energy reflected or emitted from a scene to the biophysical parameters of interest.

  17. The Spiritual Potential of Otherness in Film: The Interplay of Scene and Narrative.

    ERIC Educational Resources Information Center

    Engnell, Richard A.

    1995-01-01

    Discusses the spiritual potential of scene, some spiritual implications of film narrative, and "Otherness" and the varieties of spirituality. Explores multiple ways in which film may manipulate scene and narrative to express Otherness by examining two films: "Places in the Heart" and "Tender Mercies." (SR)

  18. Auditory and Cognitive Effects of Aging on Perception of Environmental Sounds in Natural Auditory Scenes

    ERIC Educational Resources Information Center

    Gygi, Brian; Shafiro, Valeriy

    2013-01-01

    Purpose: Previously, Gygi and Shafiro (2011) found that when environmental sounds are semantically incongruent with the background scene (e.g., horse galloping in a restaurant), they can be identified more accurately by young normal-hearing listeners (YNH) than sounds congruent with the scene (e.g., horse galloping at a racetrack). This study…

  19. Speed Limits: Orientation and Semantic Context Interactions Constrain Natural Scene Discrimination Dynamics

    ERIC Educational Resources Information Center

    Rieger, Jochem W.; Kochy, Nick; Schalk, Franziska; Gruschow, Marcus; Heinze, Hans-Jochen

    2008-01-01

    The visual system rapidly extracts information about objects from the cluttered natural environment. In 5 experiments, the authors quantified the influence of orientation and semantics on the classification speed of objects in natural scenes, particularly with regard to object-context interactions. Natural scene photographs were presented in an…

  20. Eye Movement Control in Scene Viewing and Reading: Evidence from the Stimulus Onset Delay Paradigm

    ERIC Educational Resources Information Center

    Luke, Steven G.; Nuthmann, Antje; Henderson, John M.

    2013-01-01

    The present study used the stimulus onset delay paradigm to investigate eye movement control in reading and in scene viewing in a within-participants design. Short onset delays (0, 25, 50, 200, and 350 ms) were chosen to simulate the type of natural processing difficulty encountered in reading and scene viewing. Fixation duration increased…

  1. The Incongruency Advantage for Environmental Sounds Presented in Natural Auditory Scenes

    ERIC Educational Resources Information Center

    Gygi, Brian; Shafiro, Valeriy

    2011-01-01

    The effect of context on the identification of common environmental sounds (e.g., dogs barking or cars honking) was tested by embedding them in familiar auditory background scenes (street ambience, restaurants). Initial results with subjects trained on both the scenes and the sounds to be identified showed a significant advantage of about five…

  2. Research and Technology Development for Construction of 3d Video Scenes

    NASA Astrophysics Data System (ADS)

    Khlebnikova, Tatyana A.

    2016-06-01

    For the last two decades, surface information in the form of conventional digital and analogue topographic maps has been supplemented by new digital geospatial products, also known as 3D models of real objects. It is shown that there are currently no defined standards for 3D scene construction technologies that could be used by Russian surveying and cartographic enterprises. The issues regarding source data requirements, and their capture and transfer to create 3D scenes, have not yet been defined. Accuracy issues for 3D video scenes used for measurement purposes are rarely addressed in publications. The practicability of developing, researching, and implementing a technology for the construction of 3D video scenes is substantiated by the capability of 3D video scenes to expand the field of data analysis applications for environmental monitoring, urban planning, and managerial decision problems. A technology for the construction of 3D video scenes meeting the specified metric requirements is offered. Techniques and a methodological background are recommended for this technology, which is used to construct 3D video scenes based on DTMs created from satellite and aerial survey data. The results of the accuracy estimation of 3D video scenes are presented.

  3. 40 CFR 74.22 - Actual SO2 emissions rate.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Actual SO2 emissions rate. 74.22... (CONTINUED) SULFUR DIOXIDE OPT-INS Allowance Calculations for Combustion Sources § 74.22 Actual SO2 emissions... actual SO2 emissions rate shall be 1985. (2) For combustion sources that commenced operation...

  4. Actualization and the Fear of Death: Retesting an Existential Hypothesis.

    ERIC Educational Resources Information Center

    Wood, Keith; Robinson, Paul J.

    1982-01-01

    Demonstrates that within a group of highly actualized individuals, the degree to which "own death" is integrated into constructs of self is a far more powerful predictor of fear of death than actualization. Findings suggest that actualization and integration are independent in their overall effect on fear of death. (Author)

  5. Notes for Serials Cataloging. Second Edition.

    ERIC Educational Resources Information Center

    Geer, Beverley, Ed.; Caraway, Beatrice L., Ed.

    Notes are indispensable to serials cataloging. Researchers, reference librarians, and catalogers regularly use notes on catalog records and, as the audience for these notes has expanded from the local library community to the global Internet community, the need for notes to be cogent, clear, and useful is greater than ever. This book is a…

  6. The Influence of Content Meaningfulness on Eye Movements across Tasks: Evidence from Scene Viewing and Reading

    PubMed Central

    Luke, Steven G.; Henderson, John M.

    2016-01-01

    The present study investigated the influence of content meaningfulness on eye-movement control in reading and scene viewing. Texts and scenes were manipulated to make them uninterpretable, and then eye-movements in reading and scene-viewing were compared to those in pseudo-reading and pseudo-scene viewing. Fixation durations and saccade amplitudes were greater for pseudo-stimuli. The effect of the removal of meaning was seen exclusively in the tail of the fixation duration distribution in both tasks, and the size of this effect was the same across tasks. These findings suggest that eye movements are controlled by a common mechanism in reading and scene viewing. They also indicate that not all eye movements are responsive to the meaningfulness of stimulus content. Implications for models of eye movement control are discussed. PMID:26973561

  7. Visual search in scenes involves selective and non-selective pathways

    PubMed Central

    Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R

    2010-01-01

    How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: A “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global / statistical information. PMID:21227734

  8. Learning Scene Categories from High Resolution Satellite Image for Aerial Video Analysis

    SciTech Connect

    Cheriyadat, Anil M

    2011-01-01

    Automatic scene categorization can benefit various aerial video processing applications. This paper addresses the problem of predicting the scene category from aerial video frames using a prior model learned from satellite imagery. We show that local and global features, in the form of line statistics and 2-D power spectrum parameters respectively, can characterize the aerial scene well. The line feature statistics and spatial frequency parameters are useful cues to distinguish between different urban scene categories. We learn the scene prediction model from high-resolution satellite imagery and test the model on the Columbus Surrogate Unmanned Aerial Vehicle (CSUAV) dataset collected by a high-altitude wide area UAV sensor platform. We compare the proposed features with the popular Scale Invariant Feature Transform (SIFT) features. Our experimental results show that the proposed approach outperforms the SIFT model when the training and testing are conducted on disparate data sources.
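    The global descriptor the abstract mentions — 2-D power spectrum parameters — can be illustrated with a minimal sketch: a radially averaged power spectrum turns an image into a short, fixed-length feature vector. This is an assumption-laden illustration of the general technique, not the paper's actual CSUAV pipeline; the function name and bin count are invented for the example.

    ```python
    import numpy as np

    def power_spectrum_features(image, n_bins=8):
        """Radially averaged 2-D power spectrum of a grayscale image:
        a simple global descriptor of the kind the abstract describes."""
        f = np.fft.fftshift(np.fft.fft2(image))
        power = np.abs(f) ** 2
        h, w = image.shape
        y, x = np.indices((h, w))
        r = np.hypot(y - h / 2, x - w / 2)  # radial spatial frequency of each bin
        edges = np.linspace(0, r.max(), n_bins + 1)
        idx = np.clip(np.digitize(r.ravel(), edges) - 1, 0, n_bins - 1)
        # Mean power within each radial band -> fixed-length feature vector
        sums = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
        counts = np.bincount(idx, minlength=n_bins)
        return sums / np.maximum(counts, 1)

    rng = np.random.default_rng(0)
    scene = rng.standard_normal((64, 64))
    feats = power_spectrum_features(scene)
    print(feats.shape)  # (8,)
    ```

    A classifier trained on such vectors (plus local line statistics) could then predict the scene category of each video frame, which is the spirit of the approach the record summarizes.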

  9. Faces in Context: Does Face Perception Depend on the Orientation of the Visual Scene?

    PubMed

    Taubert, Jessica; van Golde, Celine; Verstraten, Frans A J

    2016-10-01

    The mechanisms held responsible for familiar face recognition are thought to be orientation dependent; inverted faces are more difficult to recognize than their upright counterparts. Although this effect of inversion has been investigated extensively, researchers have typically sliced faces from photographs and presented them in isolation. As such, it is not known whether the perceived orientation of a face is inherited from the visual scene in which it appears. Here, we address this question by measuring performance in a simultaneous same-different task while manipulating both the orientation of the faces and the scene. We found that the face inversion effect survived scene inversion. Nonetheless, an improvement in performance when the scene was upside down suggests that sensitivity to identity increased when the faces were more easily segmented from the scene. Thus, while these data identify congruency with the visual environment as a contributing factor in recognition performance, they imply different mechanisms operate on upright and inverted faces.

  10. The Influence of Content Meaningfulness on Eye Movements across Tasks: Evidence from Scene Viewing and Reading.

    PubMed

    Luke, Steven G; Henderson, John M

    2016-01-01

    The present study investigated the influence of content meaningfulness on eye-movement control in reading and scene viewing. Texts and scenes were manipulated to make them uninterpretable, and then eye-movements in reading and scene-viewing were compared to those in pseudo-reading and pseudo-scene viewing. Fixation durations and saccade amplitudes were greater for pseudo-stimuli. The effect of the removal of meaning was seen exclusively in the tail of the fixation duration distribution in both tasks, and the size of this effect was the same across tasks. These findings suggest that eye movements are controlled by a common mechanism in reading and scene viewing. They also indicate that not all eye movements are responsive to the meaningfulness of stimulus content. Implications for models of eye movement control are discussed. PMID:26973561

  11. Age-Related Differences in Spatial Frequency Processing during Scene Categorization.

    PubMed

    Ramanoël, Stephen; Kauffmann, Louise; Cousin, Emilie; Dojat, Michel; Peyrin, Carole

    2015-01-01

    Visual analysis of real-life scenes starts with the parallel extraction of different visual elementary features at different spatial frequencies. The global shape of the scene is mainly contained in low spatial frequencies (LSF), and the edges and borders of objects are mainly contained in high spatial frequencies (HSF). The present fMRI study investigates the effect of age on spatial frequency processing in scenes. Young and elderly participants performed a categorization task (indoor vs. outdoor) on LSF and HSF scenes. Behavioral results revealed performance degradation for elderly participants only when categorizing HSF scenes. At the cortical level, young participants exhibited retinotopic organization of spatial frequency processing, characterized by medial activation in the anterior part of the occipital lobe for LSF scenes (compared to HSF), and lateral activation in the posterior part of the occipital lobe for HSF scenes (compared to LSF). Elderly participants showed activation only in the anterior part of the occipital lobe for LSF scenes (compared to HSF), but no significant activation for HSF (compared to LSF). Furthermore, a ROI analysis revealed that the parahippocampal place area, a scene-selective region, was less activated for HSF than LSF for elderly participants only. Comparison between groups revealed greater activation of the right inferior occipital gyrus in young participants than in elderly participants for HSF. Activation of temporo-parietal regions was greater in elderly participants irrespective of spatial frequencies. The present findings indicate a specific low-contrasted HSF deficit for normal elderly people, in association with an occipito-temporal cortex dysfunction, and a functional reorganization of the categorization of filtered scenes. PMID:26288146

  12. Age-Related Differences in Spatial Frequency Processing during Scene Categorization

    PubMed Central

    2015-01-01

    Visual analysis of real-life scenes starts with the parallel extraction of different visual elementary features at different spatial frequencies. The global shape of the scene is mainly contained in low spatial frequencies (LSF), and the edges and borders of objects are mainly contained in high spatial frequencies (HSF). The present fMRI study investigates the effect of age on spatial frequency processing in scenes. Young and elderly participants performed a categorization task (indoor vs. outdoor) on LSF and HSF scenes. Behavioral results revealed performance degradation for elderly participants only when categorizing HSF scenes. At the cortical level, young participants exhibited retinotopic organization of spatial frequency processing, characterized by medial activation in the anterior part of the occipital lobe for LSF scenes (compared to HSF), and lateral activation in the posterior part of the occipital lobe for HSF scenes (compared to LSF). Elderly participants showed activation only in the anterior part of the occipital lobe for LSF scenes (compared to HSF), but no significant activation for HSF (compared to LSF). Furthermore, a ROI analysis revealed that the parahippocampal place area, a scene-selective region, was less activated for HSF than LSF for elderly participants only. Comparison between groups revealed greater activation of the right inferior occipital gyrus in young participants than in elderly participants for HSF. Activation of temporo-parietal regions was greater in elderly participants irrespective of spatial frequencies. The present findings indicate a specific low-contrasted HSF deficit for normal elderly people, in association with an occipito-temporal cortex dysfunction, and a functional reorganization of the categorization of filtered scenes. PMID:26288146

  13. Estimating trace deposition time with circadian biomarkers: a prospective and versatile tool for crime scene reconstruction.

    PubMed

    Ackermann, Katrin; Ballantyne, Kaye N; Kayser, Manfred

    2010-09-01

    Linking biological samples found at a crime scene with the actual crime event represents the most important aspect of forensic investigation, together with the identification of the sample donor. While DNA profiling is well established for donor identification, no reliable methods exist for timing forensic samples. Here, we provide for the first time a biochemical approach for determining deposition time of human traces. Using commercial enzyme-linked immunosorbent assays we showed that the characteristic 24-h profiles of two circadian hormones, melatonin (concentration peak at late night) and cortisol (peak in the morning) can be reproduced from small samples of whole blood and saliva. We further demonstrated by analyzing small stains dried and stored up to 4 weeks the in vitro stability of melatonin, whereas for cortisol a statistically significant decay with storage time was observed, although the hormone was still reliably detectable in 4-week-old samples. Finally, we showed that the total protein concentration, also assessed using a commercial assay, can be used for normalization of hormone signals in blood, but less so in saliva. Our data thus demonstrate that estimating normalized concentrations of melatonin and cortisol represents a prospective approach for determining deposition time of biological trace samples, at least from blood, with promising expectations for forensic applications. In the broader context, our study opens up a new field of circadian biomarkers for deposition timing of forensic traces; future studies using other circadian biomarkers may reveal if the time range offered by the two hormones studied here can be specified more exactly. PMID:20419380

  14. Kindergarten Quantum Mechanics: Lecture Notes

    SciTech Connect

    Coecke, Bob

    2006-01-04

    These lecture notes survey some joint work with Samson Abramsky as it was presented by me at several conferences in the summer of 2005. It concerns 'doing quantum mechanics using only pictures of lines, squares, triangles and diamonds'. This picture calculus can be seen as a very substantial extension of Dirac's notation, and has a purely algebraic counterpart in terms of so-called Strongly Compact Closed Categories (introduced by Abramsky and me), which subsumes my Logic of Entanglement. For a survey on the 'what', the 'why' and the 'how' I refer to a previous set of lecture notes. In a last section we provide some pointers to the body of technical literature on the subject.

  15. Recognizing Exponential Growth. Classroom Notes

    ERIC Educational Resources Information Center

    Dobbs, David E.

    2004-01-01

    Two heuristic and three rigorous arguments are given for the fact that functions of the form Ce[kx], with C an arbitrary constant, are the only solutions of the equation dy/dx=ky where k is constant. Various of the proofs in this self-contained note could find classroom use in a first-year calculus course, an introductory course on differential…
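    One of the rigorous arguments the note alludes to fits in two lines (a standard integrating-factor proof, not necessarily one of the five given in the article): if dy/dx = ky, then

    ```latex
    \frac{d}{dx}\left(y\,e^{-kx}\right)
      = \frac{dy}{dx}\,e^{-kx} - k\,y\,e^{-kx}
      = (ky - ky)\,e^{-kx} = 0,
    ```

    so y e^{-kx} is a constant C, and hence y = Ce^{kx} are the only solutions.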

  16. A note on "Kepler's equation".

    NASA Astrophysics Data System (ADS)

    Dutka, J.

    1997-07-01

    This note briefly points out the formal similarity between Kepler's equation and equations developed in Hindu and Islamic astronomy for describing the lunar parallax. Specifically, an iterative method for calculating the lunar parallax has been developed by the astronomer Habash al-Hasib al-Marwazi (about 850 A.D., Turkestan), which is surprisingly similar to the iterative method for solving Kepler's equation invented by Leonhard Euler (1707 - 1783).

  17. Training in LESS and NOTES.

    PubMed

    Liang, Xiao; Yang, Bo; Yinghao, Sun; Huiqin, Wang; Zhi, Cao; Chuanliang, Xu; Linhui, Wang

    2012-04-01

    LESS and NOTES represent a further step toward "scarless" surgery and have challenged the main principles of conventional multiport laparoscopy. To develop the surgical skills these novel techniques require, a training-program guideline is necessary for their clinical practice in order to reduce complications. In this paper, we summarize the challenges of these new techniques and introduce our experience with training courses.

  18. DNA methylation: the future of crime scene investigation?

    PubMed

    Gršković, Branka; Zrnec, Dario; Vicković, Sanja; Popović, Maja; Mršić, Gordan

    2013-07-01

    Proper detection and subsequent analysis of biological evidence is crucial for crime scene reconstruction. The number of different criminal acts is increasing rapidly. Therefore, forensic geneticists are constantly on the battlefield, trying hard to find solutions how to solve them. One of the essential defensive lines in the fight against the invasion of crime is relying on DNA methylation. In this review, the role of DNA methylation in body fluid identification and other DNA methylation applications are discussed. Among other applications of DNA methylation, age determination of the donor of biological evidence, analysis of the parent-of-origin specific DNA methylation markers at imprinted loci for parentage testing and personal identification, differentiation between monozygotic twins due to their different DNA methylation patterns, artificial DNA detection and analyses of DNA methylation patterns in the promoter regions of circadian clock genes are the most important ones. Nevertheless, there are still a lot of open chapters in DNA methylation research that need to be closed before its final implementation in routine forensic casework. PMID:23649761

  19. Visuomotor crowding: the resolution of grasping in cluttered scenes.

    PubMed

    Bulakowski, Paul F; Post, Robert B; Whitney, David

    2009-01-01

    Reaching toward a cup of coffee while reading the newspaper becomes exceedingly difficult when other objects are nearby. Although much is known about the precision of visual perception in cluttered scenes, relatively little is understood about acting within these environments - the spatial resolution of visuomotor behavior. When the number and density of objects overwhelm visual processing, crowding results, which serves as a bottleneck for object recognition. Despite crowding, featural information of the ensemble persists, thereby supporting texture perception. While texture is beneficial for visual perception, it is relatively uninformative for guiding the metrics of grasping. Therefore, it would be adaptive if the visual and visuomotor systems utilized the clutter differently. Using an orientation task, we measured the effect of crowding on vision and visually guided grasping and found that the density of clutter similarly limited discrimination performance. However, while vision integrates the surround to compute a texture, action discounts this global information. We propose that this dissociation reflects an optimal use of information by each system. PMID:19949462

  20. Ocfentanil overdose fatality in the recreational drug scene.

    PubMed

    Coopman, Vera; Cordonnier, Jan; De Leeuw, Marc; Cirimele, Vincent

    2016-09-01

    This paper describes the first reported death involving ocfentanil, a potent synthetic opioid and structure analogue of fentanyl abused as a new psychoactive substance in the recreational drug scene. A 17-year-old man with a history of illegal substance abuse was found dead in his home after snorting a brown powder purchased over the internet with bitcoins. Acetaminophen, caffeine and ocfentanil were identified in the powder by gas chromatography mass spectrometry and reversed-phase liquid chromatography with diode array detector. Quantitation of ocfentanil in biological samples was performed using a target analysis based on liquid-liquid extraction and ultra performance liquid chromatography tandem mass spectrometry. In the femoral blood taken at the external body examination, the following concentrations were measured: ocfentanil 15.3 μg/L, acetaminophen 45 mg/L and caffeine 0.23 mg/L. Tissues sampled at autopsy were analyzed to study the distribution of ocfentanil. The comprehensive systematic toxicological analysis on the post-mortem blood and tissue samples was negative for other compounds. Based on circumstantial evidence, autopsy findings and the results of the toxicological analysis, the medical examiner concluded that the cause of death was an acute intoxication with ocfentanil. The manner of death was assumed to be accidental after snorting the powder. PMID:27471990