Hillstrom, Anne P; Segabinazi, Joice D; Godwin, Hayward J; Liversedge, Simon P; Benson, Valerie
2017-02-19
We explored the influence of early scene analysis and visible object characteristics on eye movements when searching for objects in photographs of scenes. On each trial, participants were shown sequentially either a scene preview or a uniform grey screen (250 ms), a visual mask, the name of the target, and the scene, now including the target at a likely location. During the participant's first saccade of the search, the target location was changed to: (i) a different likely location, (ii) an unlikely but possible location or (iii) a very implausible location. The results showed that the first saccade landed more often on the likely location in which the target re-appeared than on unlikely or implausible locations, and overall the first saccade landed nearer the first target location with a preview than without. Hence, rapid scene analysis influenced initial eye movement planning, but availability of the target rapidly modified that plan. After the target moved, it was found more quickly when it appeared in a likely location than when it appeared in an unlikely or implausible location. The findings show that both scene gist and object properties are extracted rapidly, and are used in conjunction to guide saccadic eye movements during visual search. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).
Hird, H J; Brown, M K
2017-11-01
The identification of samples at a crime scene which require forensic DNA typing has been the focus of recent research interest. We propose a simple but sensitive analysis system which can be deployed at a crime scene to identify crime scene stains as human or non-human. The proposed system uses the isothermal amplification of DNA in a rapid assay format, which returns results in as little as 30 min from sampling. The assay system runs on the Genie II device, a proven in-field detection system which could be deployed at a crime scene. The results presented here demonstrate that the system was sufficiently specific and sensitive, and was able to detect the presence of human blood, semen and saliva on mock forensic samples. Copyright © 2017. Published by Elsevier B.V.
Global ensemble texture representations are critical to rapid scene perception.
Brady, Timothy F; Shafer-Skelton, Anna; Alvarez, George A
2017-06-01
Traditionally, recognizing the objects within a scene has been treated as a prerequisite to recognizing the scene itself. However, research now suggests that the ability to rapidly recognize visual scenes could be supported by global properties of the scene itself rather than the objects within the scene. Here, we argue for a particular instantiation of this view: That scenes are recognized by treating them as a global texture and processing the pattern of orientations and spatial frequencies across different areas of the scene without recognizing any objects. To test this model, we asked whether there is a link between how proficient individuals are at rapid scene perception and how proficiently they represent simple spatial patterns of orientation information (global ensemble texture). We find a significant and selective correlation between these tasks, suggesting a link between scene perception and spatial ensemble tasks but not nonspatial summary statistics. In a second and third experiment, we additionally show that global ensemble texture information is not only associated with scene recognition, but that preserving only global ensemble texture information from scenes is sufficient to support rapid scene perception; however, preserving the same information is not sufficient for object recognition. Thus, global ensemble texture alone is sufficient to allow activation of scene representations but not object representations. Together, these results provide evidence for a view of scene recognition based on global ensemble texture rather than a view based purely on objects or on nonspatially localized global properties. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
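As a concrete illustration of the "global ensemble texture" idea described above, the sketch below computes, for each cell of a coarse spatial grid, how much spectral energy falls into a handful of orientation bins. The grid size, orientation binning and FFT-based pooling are illustrative assumptions for this record, not the authors' actual model.

```python
import numpy as np

def ensemble_texture(img, grid=4, n_orient=4):
    """Crude global-ensemble-texture descriptor: for each cell of a
    grid x grid partition of the image, summarise the Fourier power
    falling into n_orient orientation bins. This is a stand-in for
    the orientation/spatial-frequency pooling the abstract describes;
    all parameter choices here are illustrative."""
    h, w = img.shape
    th, tw = h // grid, w // grid
    desc = np.zeros((grid, grid, n_orient))
    for i in range(grid):
        for j in range(grid):
            tile = img[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            f = np.fft.fftshift(np.fft.fft2(tile - tile.mean()))
            power = np.abs(f) ** 2
            fy, fx = np.indices(power.shape)
            fy = fy - th // 2
            fx = fx - tw // 2
            # orientation of each frequency component, folded to [0, pi)
            ang = np.mod(np.arctan2(fy, fx), np.pi)
            bins = np.minimum((ang / np.pi * n_orient).astype(int),
                              n_orient - 1)
            for b in range(n_orient):
                desc[i, j, b] = power[bins == b].sum()
    return desc
```

For a vertical grating, every cell of the descriptor concentrates its energy in the same orientation bin, which is the kind of spatially distributed orientation pattern the account relies on.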
Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude
2017-01-01
Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250 ms indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model, and resolved emerging neural scene size representation. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain. PMID:27039703
Clandestine laboratory scene investigation and processing using portable GC/MS
NASA Astrophysics Data System (ADS)
Matejczyk, Raymond J.
1997-02-01
This presentation describes the use of portable gas chromatography/mass spectrometry for on-scene investigation and processing of clandestine laboratories. Clandestine laboratory investigations present special problems to forensic investigators. These crime scenes contain many chemical hazards that must be detected, identified and collected as evidence. Gas chromatography/mass spectrometry performed on-scene with a rugged, portable unit is capable of analyzing a variety of matrices for drugs and chemicals used in the manufacture of illicit drugs, such as methamphetamine. Technologies used to detect various materials at a scene have particular applications but do not address the wide range of samples, chemicals, matrices and mixtures that exist in clan labs. Typical analyses performed by GC/MS are for the purpose of positively establishing the identity of starting materials, chemicals and end-product collected from clandestine laboratories. Concerns for the public and investigator safety and the environment are also important factors for rapid on-scene data generation. Here is described the implementation of a portable multiple-inlet GC/MS system designed for rapid deployment to a scene to perform forensic investigations of clandestine drug manufacturing laboratories. GC/MS has long been held as the 'gold standard' in performing forensic chemical analyses. With the capability of GC/MS to separate and produce a 'chemical fingerprint' of compounds, it is utilized as an essential technique for detecting and positively identifying chemical evidence. Rapid and conclusive on-scene analysis of evidence will assist the forensic investigators in collecting only pertinent evidence thereby reducing the amount of evidence to be transported, reducing chain of custody concerns, reducing costs and hazards, maintaining sample integrity and speeding the completion of the investigative process.
The Neural Dynamics of Attentional Selection in Natural Scenes.
Kaiser, Daniel; Oosterhof, Nikolaas N; Peelen, Marius V
2016-10-12
The human visual system can only represent a small subset of the many objects present in cluttered scenes at any given time, such that objects compete for representation. Despite these processing limitations, the detection of object categories in cluttered natural scenes is remarkably rapid. How does the brain efficiently select goal-relevant objects from cluttered scenes? In the present study, we used multivariate decoding of magneto-encephalography (MEG) data to track the neural representation of within-scene objects as a function of top-down attentional set. Participants detected categorical targets (cars or people) in natural scenes. The presence of these categories within a scene was decoded from MEG sensor patterns by training linear classifiers on differentiating cars and people in isolation and testing these classifiers on scenes containing one of the two categories. The presence of a specific category in a scene could be reliably decoded from MEG response patterns as early as 160 ms, despite substantial scene clutter and variation in the visual appearance of each category. Strikingly, we find that these early categorical representations fully depend on the match between visual input and top-down attentional set: only objects that matched the current attentional set were processed to the category level within the first 200 ms after scene onset. A sensor-space searchlight analysis revealed that this early attention bias was localized to lateral occipitotemporal cortex, reflecting top-down modulation of visual processing. These results show that attention quickly resolves competition between objects in cluttered natural scenes, allowing for the rapid neural representation of goal-relevant objects. Efficient attentional selection is crucial in many everyday situations. For example, when driving a car, we need to quickly detect obstacles, such as pedestrians crossing the street, while ignoring irrelevant objects. 
How can humans efficiently perform such tasks, given the multitude of objects contained in real-world scenes? Here we used multivariate decoding of magnetoencephalography data to characterize the neural underpinnings of attentional selection in natural scenes with high temporal precision. We show that brain activity quickly tracks the presence of objects in scenes, but crucially only for those objects that were immediately relevant for the participant. These results provide evidence for fast and efficient attentional selection that mediates the rapid detection of goal-relevant objects in real-world environments. Copyright © 2016 the authors.
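The cross-decoding logic in this record (train classifiers on categories shown in isolation, test them on cluttered scenes) can be sketched with a toy nearest-centroid classifier. The sensor patterns, dimensions and noise levels below are synthetic stand-ins, not the study's MEG data, and nearest-centroid is a simplification of the linear classifiers the authors used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor patterns: each category has a characteristic
# template; "isolated" trials are template + noise, "scene" trials
# embed the same template in additional clutter.
n_sensors, n_train, n_test = 50, 40, 40
templates = {"car": rng.normal(size=n_sensors),
             "person": rng.normal(size=n_sensors)}

def trials(cat, n, clutter=0.0):
    base = templates[cat]
    noise = rng.normal(scale=1.0, size=(n, n_sensors))
    scene = clutter * rng.normal(size=(n, n_sensors))
    return base + noise + scene

# Train a nearest-centroid classifier on isolated-object trials...
train = {c: trials(c, n_train, clutter=0.0) for c in templates}
centroids = {c: x.mean(axis=0) for c, x in train.items()}

def classify(x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# ...then test it on cluttered "scene" trials (the cross-decoding step).
correct = 0
for cat in templates:
    for x in trials(cat, n_test, clutter=1.0):
        correct += classify(x) == cat
accuracy = correct / (2 * n_test)
```

The point of the transfer test is that above-chance accuracy on the cluttered trials implies a category representation that generalizes from isolated objects to scenes.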
ERIC Educational Resources Information Center
Greene, Michelle R.; Oliva, Aude
2009-01-01
Human observers are able to rapidly and accurately categorize natural scenes, but the representation mediating this feat is still unknown. Here we propose a framework of rapid scene categorization that does not segment a scene into objects and instead uses a vocabulary of global, ecological properties that describe spatial and functional aspects…
Slater, E A; Weiss, S J; Ernst, A A; Haynes, M
1998-09-01
Maintenance of an airway in the air medically transported patient is of paramount importance. The purpose of this study is to compare preflight versus en route rapid sequence intubation (RSI)-assisted intubations and to determine the value of air medical use of RSI. This study is a 31-month retrospective review of all patients intubated and transported by a large city air medical service. Subgroup analysis was based on whether patients were transported from a hospital or a scene and whether they were intubated preflight or en route. Information on age, Glasgow Coma Scale score, type of scene, ground time, and previous attempts at intubation was recorded. Complications included failures, multiple attempts at intubation, arrhythmias, and need for repeated paralytic agents. Comparisons were made using a confidence interval analysis. An alpha of 0.05 was considered significant; Bonferroni correction was used for multiple comparisons. Three hundred twenty-five patients were intubated and transported by Lifeflight during the study period. Two hundred eighty-eight patients were intubated using RSI (89%). The success rate was 97%. Preflight intubations were performed on 100 hospital calls and 86 scene calls. En route intubations were performed on 40 hospital cases and 62 scene calls. Patients who underwent preflight intubations were significantly younger than those who underwent en route intubations for both the hospital group (34 +/- 11 vs. 44 +/- 24 years, p < 0.05) and the scene group (27 +/- 13 vs. 32 +/- 16 years, p < 0.05). Otherwise, the demographic characteristics of the four groups were similar. Trauma accounted for 60 to 70% of hospital transfers and almost 95 to 100% of scene calls. Compared with preflight intubations, there was a significant decrease in ground time for hospital patients who were intubated en route (26 +/- 10 vs. 34 +/- 11 minutes, p < 0.05) and for scene patients who were intubated en route (11 +/- 8 vs. 18 +/- 9 minutes, p < 0.05).
There were no significant differences between the groups for number of failures (9 of 288), arrhythmias (18 of 288), or necessity for repeated paralysis (8 of 288). Multiple intubation attempts were performed in more scene preflight patients (30 of 86, 35%) than scene en route patients (16 of 62, 26%), but this did not reach statistical significance. Even for patients having previous attempts at intubation, the success rate using RSI was 93% (62 of 67). Air medical intubations, both preflight and en route, for both scene calls and interhospital transports, can be done with a very high success rate. Rapid sequence intubation may improve the success rate. For scene calls, there was a significant decrease in ground time, and there was a trend toward fewer multiple intubation attempts when the patient was intubated en route instead of preflight.
Social relevance drives viewing behavior independent of low-level salience in rhesus macaques
Solyst, James A.; Buffalo, Elizabeth A.
2014-01-01
Quantifying attention to social stimuli during the viewing of complex social scenes with eye tracking has proven to be a sensitive method in the diagnosis of autism spectrum disorders years before average clinical diagnosis. Rhesus macaques provide an ideal model for understanding the mechanisms underlying social viewing behavior, but to date no comparable behavioral task has been developed for use in monkeys. Using a novel scene-viewing task, we monitored the gaze of three rhesus macaques while they freely viewed well-controlled composed social scenes and analyzed the time spent viewing objects and monkeys. In each of six behavioral sessions, monkeys viewed a set of 90 images (540 unique scenes) with each image presented twice. In two-thirds of the repeated scenes, either a monkey or an object was replaced with a novel item (manipulated scenes). When viewing a repeated scene, monkeys made longer fixations and shorter saccades, shifting from a rapid orienting to global scene contents to a more local analysis of fewer items. In addition to this repetition effect, in manipulated scenes, monkeys demonstrated robust memory by spending more time viewing the replaced items. By analyzing attention to specific scene content, we found that monkeys strongly preferred to view conspecifics and that this was not related to their salience in terms of low-level image features. A model-free analysis of viewing statistics found that monkeys that were viewed earlier and longer had direct gaze and redder sex skin around their face and rump, two important visual social cues. These data provide a quantification of viewing strategy, memory and social preferences in rhesus macaques viewing complex social scenes, and they provide an important baseline against which to compare the effects of therapeutics aimed at enhancing social cognition. PMID:25414633
A Low-Cost Panoramic Camera for the 3D Documentation of Contaminated Crime Scenes
NASA Astrophysics Data System (ADS)
Abate, D.; Toschi, I.; Sturdy-Colls, C.; Remondino, F.
2017-11-01
Crime scene documentation is a fundamental task which has to be undertaken in a fast, accurate and reliable way, highlighting evidence which can be further used for ensuring justice for victims and for guaranteeing the successful prosecution of perpetrators. The main focus of this paper is on the documentation of a typical crime scene and on the rapid recording of any possible contamination that could have influenced its original appearance. A 3D reconstruction of the environment is first generated by processing panoramas acquired with the low-cost Ricoh Theta 360 camera, and further analysed to highlight the potential and limits of this emerging and consumer-grade technology. Then, a methodology is proposed for the rapid recording of changes occurring between the original and the contaminated crime scene. The approach is based on an automatic 3D feature-based data registration, followed by a cloud-to-cloud distance computation, taking as input the 3D point clouds generated before and after, e.g., the misplacement of evidence. All the algorithms adopted for panorama pre-processing, photogrammetric 3D reconstruction, and 3D geometry registration and analysis are presented, and all are currently available in open-source or low-cost software solutions.
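The cloud-to-cloud distance step described above can be sketched in a few lines: for every point of the "contaminated" cloud, find the distance to its nearest neighbour in the original cloud. The brute-force search and the toy coordinates below are illustrative; a real pipeline on photogrammetric clouds would use a KD-tree or octree.

```python
import numpy as np

def cloud_to_cloud_distance(ref, comp):
    """For each point of `comp` (n, 3), the Euclidean distance to its
    nearest neighbour in `ref` (m, 3). Large distances flag geometry
    present in one epoch but not the other, e.g. a moved item of
    evidence. Brute force for clarity."""
    d = np.linalg.norm(comp[:, None, :] - ref[None, :, :], axis=2)
    return d.min(axis=1)

# Toy example with made-up coordinates: the "contaminated" cloud is a
# copy of the original in which one point has been displaced by 5 units.
rng = np.random.default_rng(1)
original = rng.uniform(0.0, 1.0, size=(200, 3))
contaminated = original.copy()
contaminated[0] += np.array([0.0, 0.0, 5.0])
dist = cloud_to_cloud_distance(original, contaminated)
```

Thresholding `dist` then isolates exactly the changed geometry, which is the basis for the rapid change-recording the paper proposes.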
Dima, Diana C; Perry, Gavin; Singh, Krish D
2018-06-11
In navigating our environment, we rapidly process and extract meaning from visual cues. However, the relationship between visual features and categorical representations in natural scene perception is still not well understood. Here, we used natural scene stimuli from different categories and filtered at different spatial frequencies to address this question in a passive viewing paradigm. Using representational similarity analysis (RSA) and cross-decoding of magnetoencephalography (MEG) data, we show that categorical representations emerge in human visual cortex at ∼180 ms and are linked to spatial frequency processing. Furthermore, dorsal and ventral stream areas reveal temporally and spatially overlapping representations of low- and high-level layer activations extracted from a feedforward neural network. Our results suggest that neural patterns from extrastriate visual cortex switch from low-level to categorical representations within 200 ms, highlighting the rapid cascade of processing stages essential in human visual perception. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
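The core of the representational similarity analysis (RSA) used in this record can be sketched compactly: build a representational dissimilarity matrix (RDM) per representation, then correlate the RDMs' upper triangles. The synthetic patterns and the Pearson-based comparison below are illustrative assumptions (Spearman is also common in this literature), not the study's data or exact pipeline.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix over conditions:
    1 - Pearson correlation between each pair of response patterns
    (rows = conditions, columns = channels/voxels/units)."""
    return 1.0 - np.corrcoef(patterns)

def rsa_similarity(p1, p2):
    """Second-order comparison of two representations: correlate the
    upper triangles of their RDMs."""
    iu = np.triu_indices(p1.shape[0], k=1)
    return np.corrcoef(rdm(p1)[iu], rdm(p2)[iu])[0, 1]

# Synthetic demo: a noisy copy of a representation should yield a much
# higher RDM correlation than an unrelated representation.
rng = np.random.default_rng(3)
rep = rng.normal(size=(12, 40))            # 12 conditions, 40 channels
noisy_copy = rep + 0.1 * rng.normal(size=rep.shape)
unrelated = rng.normal(size=(12, 40))
s_same = rsa_similarity(rep, noisy_copy)
s_diff = rsa_similarity(rep, unrelated)
```

Because RDMs abstract away from the measurement space, the same comparison works between MEG sensor patterns and network layer activations, which is how the study links the two.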
A Rapid and Efficient Method for Evaluation of Suspect Testimony: Palynological Scanning.
Wiltshire, Patricia E J; Hawksworth, David L; Edwards, Kevin J
2015-11-01
A rapid method for evaluating suspect testimony is valuable at any stage in an inquiry and can result in a change of direction in an investigation. Rape cases, in particular, can present problems where a defendant renders DNA analysis redundant by claiming that the claimant consented to have sexual relations. Forensic palynology is valuable in confirming or eliminating locations as being crime scenes, thus checking the testimony of both parties. In contrast to some forensic disciplines, forensic palynology can provide critical information without time-consuming full analysis. Two cases are described where the palynological assemblages from comparator samples of pertinent places were compared with those obtained from clothing of claimants and defendants. The results of rapid microscopical scanning of relevant preparations led to early confessions, thus obviating the need for costly analyses and protracted court proceedings. A third case demonstrates the unbiased nature of this technique where a man, although innocent of any offense, lied about having visited the crime scene for fear of prosecution. This highlights the need for sensitive policing in claims of rape. © 2015 American Academy of Forensic Sciences.
Virkler, Kelly; Lednev, Igor K
2009-07-01
Body fluid traces recovered at crime scenes are among the most important types of evidence to forensic investigators. They contain valuable DNA evidence which can identify a suspect or victim as well as exonerate an innocent individual. The first step of identifying a particular body fluid is highly important since the nature of the fluid is itself very informative to the investigation, and the destructive nature of a screening test must be considered when only a small amount of material is available. The ability to characterize an unknown stain at the scene of the crime without having to wait for results from a laboratory is another very critical step in the development of forensic body fluid analysis. Driven by the importance for forensic applications, body fluid identification methods have been extensively developed in recent years. The systematic analysis of these new developments is vital for forensic investigators to be continuously educated on possible superior techniques. Significant advances in laser technology and the development of novel light detectors have dramatically improved spectroscopic methods for molecular characterization over the last decade. The application of this novel biospectroscopy for forensic purposes opens new and exciting opportunities for the development of on-field, non-destructive, confirmatory methods for body fluid identification at a crime scene. In addition, the biospectroscopy methods are universally applicable to all body fluids unlike the majority of current techniques which are valid for individual fluids only. This article analyzes the current methods being used to identify body fluid stains including blood, semen, saliva, vaginal fluid, urine, and sweat, and also focuses on new techniques that have been developed in the last 5-6 years. 
In addition, the potential of new biospectroscopic techniques based on Raman and fluorescence spectroscopy is evaluated for rapid, confirmatory, non-destructive identification of a body fluid at a crime scene.
Coordinate references for the indoor/outdoor seamless positioning
NASA Astrophysics Data System (ADS)
Ruan, Ling; Zhang, Ling; Long, Yi; Cheng, Fei
2018-05-01
Indoor positioning technologies are developing rapidly, and seamless positioning that connects indoor and outdoor space is a new trend. Indoor and outdoor positioning do not use the same coordinate system, and different indoor positioning scenes use different local coordinate reference systems. A specific and unified coordinate reference frame is needed as the spatial basis and premise of seamless positioning applications. Trajectory analysis integrating indoor and outdoor movement also requires a uniform coordinate reference. However, a coordinate reference frame for seamless positioning that can be applied to various complex scenarios has long been lacking. In this paper, we propose a universal coordinate reference frame for indoor/outdoor seamless positioning. The research analyses and classifies indoor positioning scenes, and puts forward methods for establishing the coordinate reference system and for coordinate transformation in each scene. The feasibility of the calibration method was verified through experiments.
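One standard way to tie an indoor local frame to a global frame, as the coordinate transformation step above requires, is a Helmert (similarity) transform estimated from matched control points. The closed-form fit below and the control-point values are a minimal sketch under simplifying assumptions (2D, no reflections, no outlier handling), not the paper's specific method.

```python
import numpy as np

def fit_similarity_2d(local_pts, global_pts):
    """Closed-form (Umeyama-style) fit of the similarity transform
    global = s * R @ local + t from matched (n, 2) control points."""
    mu_l, mu_g = local_pts.mean(axis=0), global_pts.mean(axis=0)
    L, G = local_pts - mu_l, global_pts - mu_g
    U, S, Vt = np.linalg.svd(L.T @ G)
    R = Vt.T @ U.T                      # rotation (reflections ignored)
    s = S.sum() / (L ** 2).sum()        # isotropic scale
    t = mu_g - s * R @ mu_l             # translation
    return s, R, t

def to_global(pts, s, R, t):
    """Apply the fitted transform to (n, 2) local coordinates."""
    return s * (R @ pts.T).T + t

# Hypothetical control points: an indoor frame rotated 30 degrees,
# scaled by 2 and offset by (100, 200) relative to the global frame.
rng = np.random.default_rng(2)
local = rng.uniform(0.0, 50.0, size=(6, 2))
th = np.deg2rad(30.0)
R_true = np.array([[np.cos(th), -np.sin(th)],
                   [np.sin(th),  np.cos(th)]])
global_pts = 2.0 * (R_true @ local.T).T + np.array([100.0, 200.0])
s, R, t = fit_similarity_2d(local, global_pts)
```

With clean control points the fit is exact, so indoor trajectories can be mapped into the global frame and joined seamlessly with outdoor ones.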
The neural bases of spatial frequency processing during scene perception
Kauffmann, Louise; Ramanoël, Stephen; Peyrin, Carole
2014-01-01
Theories on visual perception agree that scenes are processed in terms of spatial frequencies. Low spatial frequencies (LSF) carry coarse information whereas high spatial frequencies (HSF) carry fine details of the scene. However, how and where spatial frequencies are processed within the brain remain unresolved questions. The present review addresses these issues and aims to identify the cerebral regions differentially involved in low and high spatial frequency processing, and to clarify their attributes during scene perception. Results from a number of behavioral and neuroimaging studies suggest that spatial frequency processing is lateralized in both hemispheres, with the right and left hemispheres predominantly involved in the categorization of LSF and HSF scenes, respectively. There is also evidence that spatial frequency processing is retinotopically mapped in the visual cortex. HSF scenes (as opposed to LSF) activate occipital areas in relation to foveal representations, while categorization of LSF scenes (as opposed to HSF) activates occipital areas in relation to more peripheral representations. Concomitantly, a number of studies have demonstrated that LSF information may reach high-order areas rapidly, allowing an initial coarse parsing of the visual scene, which could then be sent back through feedback into the occipito-temporal cortex to guide finer HSF-based analysis. Finally, the review addresses spatial frequency processing within scene-selective regions of the occipito-temporal cortex. PMID:24847226
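The LSF/HSF decomposition this literature relies on can be sketched with an ideal filter in the Fourier domain. Two simplifying assumptions here: the cutoff is expressed in cycles per image (studies typically quote cycles per degree, which depends on viewing distance), and an ideal rather than Gaussian filter is used.

```python
import numpy as np

def split_spatial_frequencies(img, cutoff):
    """Split an image into low- (LSF) and high- (HSF) spatial-frequency
    components with an ideal filter; `cutoff` is a radius in cycles
    per image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    fy, fx = np.indices((h, w))
    r = np.hypot(fy - h / 2, fx - w / 2)   # radial frequency
    lsf = np.fft.ifft2(np.fft.ifftshift(np.where(r <= cutoff, f, 0))).real
    hsf = np.fft.ifft2(np.fft.ifftshift(np.where(r > cutoff, f, 0))).real
    return lsf, hsf

# The two masks partition the spectrum, so LSF + HSF reconstructs the
# original image exactly.
rng = np.random.default_rng(4)
img = rng.normal(size=(64, 64))
lsf, hsf = split_spatial_frequencies(img, cutoff=8.0)
```

The LSF component carries the coarse layout (a small fraction of the spectrum) while the HSF component carries the fine detail, matching the coarse-to-fine account reviewed above.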
Research on hyperspectral dynamic scene and image sequence simulation
NASA Astrophysics Data System (ADS)
Sun, Dandan; Liu, Fang; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei
2016-10-01
This paper presents a simulation method for hyperspectral dynamic scenes and image sequences, intended for hyperspectral equipment evaluation and target detection algorithms. Because of its high spectral resolution, strong band continuity, anti-interference and other advantages, hyperspectral imaging technology has developed rapidly in recent years and is widely used in many areas such as optoelectronic target detection, military defense and remote sensing systems. Digital imaging simulation, as a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyperspectral imaging equipment with lower development cost and a shorter development period. Meanwhile, visual simulation can produce a large amount of original image data under various conditions for hyperspectral image feature extraction and classification algorithms. Based on a physical radiation model and material characteristic parameters, this paper proposes a generation method for digital scenes. By building multiple sensor models for different bands and bandwidths, hyperspectral scenes in the visible, MWIR and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm and 0.1 μm, have been simulated. The final dynamic scenes are realistic and run in real time, at frame rates up to 100 Hz. By saving all the scene grayscale data from the same viewpoint, an image sequence is obtained. The analysis results show that, in both the infrared and visible bands, the grayscale variations of the simulated hyperspectral images are consistent with the theoretical analysis.
Cichy, Radoslaw Martin; Teng, Santani
2017-02-19
In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research.This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.
Li, Ya-Pin; Gao, Hong-Wei; Fan, Hao-Jun; Wei, Wei; Xu, Bo; Dong, Wen-Long; Li, Qing-Feng; Song, Wen-Jing; Hou, Shi-Ke
2017-12-01
The objective of this study was to build a database to collect infectious disease information at the scene of a disaster through the use of 128 epidemiological questionnaires and 47 types of options, with rapid acquisition of information regarding infectious disease and rapid questionnaire customization at the scene of disaster relief by use of a personal digital assistant (PDA). SQL Server 2005 (Microsoft Corp, Redmond, WA) was used to create the option database for the infectious disease investigation, to develop a client application for the PDA, and to deploy the application on the server side. The users accessed the server for data collection and questionnaire customization with the PDA. A database with a set of comprehensive options was created and an application system was developed for the Android operating system (Google Inc, Mountain View, CA). On this basis, an infectious disease information collection system was built for use at the scene of disaster relief. The creation of an infectious disease information collection system and rapid questionnaire customization through the use of a PDA was achieved. This system integrated computer technology and mobile communication technology to develop an infectious disease information collection system and to allow for rapid questionnaire customization at the scene of disaster relief. (Disaster Med Public Health Preparedness. 2017;11:668-673).
Teng, Santani
2017-01-01
In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019
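Of the three pillars above, representational similarity analysis (pillar iii) is the easiest to sketch: each measurement space is summarised by a representational dissimilarity matrix (RDM) over conditions, and RDMs from different measurements (e.g. MEG vs. fMRI) are compared directly. A minimal pure-Python version, assuming 1 − Pearson correlation as the dissimilarity and Pearson (rather than the more common Spearman) correlation between RDM upper triangles:

```python
def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def rdm(patterns):
    """RDM: pairwise dissimilarity (1 - correlation) between the
    response patterns evoked by each experimental condition."""
    n = len(patterns)
    return [[1 - pearson(patterns[i], patterns[j]) for j in range(n)]
            for i in range(n)]

def compare_rdms(r1, r2):
    """Correlate the upper triangles of two RDMs from different
    measurement modalities or models."""
    n = len(r1)
    ut1 = [r1[i][j] for i in range(n) for j in range(i + 1, n)]
    ut2 = [r2[i][j] for i in range(n) for j in range(i + 1, n)]
    return pearson(ut1, ut2)

patterns = [[1, 2, 3], [3, 2, 1], [1, 3, 2]]  # 3 conditions
print(round(compare_rdms(rdm(patterns), rdm(patterns)), 2))  # → 1.0
```

Because the comparison happens in dissimilarity space, the two modalities need not share units, dimensionality or even recording technique, which is precisely why RSA serves as the integration glue between MEG/EEG, fMRI and models.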
NASA Astrophysics Data System (ADS)
den Hollander, Richard J. M.; Bouma, Henri; van Rest, Jeroen H. C.; ten Hove, Johan-Martijn; ter Haar, Frank B.; Burghouts, Gertjan J.
2017-10-01
Video analytics is essential for managing the large quantities of raw data that are produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and to changes in the optical chain, so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend away from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' developing needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing metadata describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene, such as lighting conditions or measures of scene complexity (e.g. the number of people). The second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. The third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. To support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large-scale ad-hoc networks.
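The first two parts of the solution described above — a register of optical-chain and scene metadata, plus a checker that signals relevant changes to the administrator — can be sketched as a small class. The parameter names and tolerances here are hypothetical, not taken from the paper:

```python
class OpticalChainRegister:
    """Store the last known parameters per camera and flag changes
    that exceed a per-parameter tolerance (parts one and two of the
    three-part solution, in miniature)."""

    def __init__(self, tolerances):
        self.tolerances = tolerances  # e.g. {"lux": 50, "people": 5}
        self.state = {}               # camera_id -> {param: value}

    def update(self, camera_id, params):
        """Record new measurements; return the parameters whose change
        exceeds tolerance and should be signalled to the administrator."""
        old = self.state.get(camera_id, {})
        changed = [p for p, v in params.items()
                   if p in old and abs(v - old[p]) > self.tolerances.get(p, 0)]
        self.state[camera_id] = dict(params)
        return changed

reg = OpticalChainRegister({"lux": 50, "people": 5})
reg.update("cam1", {"lux": 400, "people": 3})          # baseline
print(reg.update("cam1", {"lux": 250, "people": 4}))   # → ['lux']
```

Part three would then consult `reg.state` to reconfigure analytics tasks (e.g. lowering a detector's confidence threshold when illumination drops) rather than merely alerting.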
Re-presentations of space in Hollywood movies: an event-indexing analysis.
Cutting, James; Iricinschi, Catalina
2015-03-01
Popular movies present chunk-like events (scenes and subscenes) that promote episodic, serial updating of viewers' representations of the ongoing narrative. Event-indexing theory would suggest that the beginnings of new scenes trigger these updates, which in turn require more cognitive processing. Typically, a new movie event is signaled by an establishing shot, one providing more background information and a longer look than the average shot. Our analysis of 24 films reconfirms this. More important, we show that, when returning to a previously shown location, the re-establishing shot reduces both context and duration while remaining greater than the average shot. In general, location shifts dominate character and time shifts in event segmentation of movies. In addition, over the last 70 years re-establishing shots have become more like the noninitial shots of a scene. Establishing shots have also approached noninitial shot scales, but not their durations. Such results suggest that film form is evolving, perhaps to suit more rapid encoding of narrative events. Copyright © 2014 Cognitive Science Society, Inc.
Multiple Vehicle Detection and Segmentation in Malaysia Traffic Flow
NASA Astrophysics Data System (ADS)
Fariz Hasan, Ahmad; Fikri Che Husin, Mohd; Affendi Rosli, Khairul; Norhafiz Hashim, Mohd; Faiz Zainal Abidin, Amar
2018-03-01
Vision-based systems are widely used in the field of Intelligent Transportation Systems (ITS) to extract large amounts of information for analyzing traffic scenes. The rapid growth in the number of vehicles on the road, together with the significant increase in cameras, has dictated the need for traffic surveillance systems. Such systems can take over the burdensome tasks previously performed by human operators in traffic monitoring centres. The main technique proposed in this paper is a multiple vehicle detection and segmentation method focused on monitoring through Closed Circuit Television (CCTV) video. The system is able to automatically segment vehicles extracted from heavy traffic scenes, using optical flow estimation alongside a blob analysis technique to detect the moving vehicles. Prior to segmentation, the blob analysis step computes the region of interest corresponding to each moving vehicle, which is then used to create a bounding box around that vehicle. Experimental validation of the proposed system was performed, and the algorithm is demonstrated on various sets of traffic scenes.
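The blob-analysis step named above — finding connected regions of a binary motion mask and boxing them — can be sketched in plain Python. This is a minimal illustration, not the authors' implementation; in practice the mask would come from thresholded optical-flow magnitude:

```python
def blob_bounding_boxes(mask):
    """Bounding boxes (top, left, bottom, right) of 4-connected
    foreground blobs in a binary motion mask (list of lists of 0/1)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                # Flood-fill one blob, tracking its extent.
                stack = [(r, c)]
                seen[r][c] = True
                top, left, bottom, right = r, c, r, c
                while stack:
                    y, x = stack.pop()
                    top, bottom = min(top, y), max(bottom, y)
                    left, right = min(left, x), max(right, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((top, left, bottom, right))
    return boxes

mask = [[1, 1, 0, 0, 0],
        [1, 1, 0, 0, 1],
        [0, 0, 0, 1, 1]]
print(blob_bounding_boxes(mask))  # → [(0, 0, 1, 1), (1, 3, 2, 4)]
```

Each returned box is the "area of interest region" the abstract refers to; drawing it on the frame gives the per-vehicle bounding box.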
Rapid detection of person information in a naturalistic scene.
Fletcher-Watson, Sue; Findlay, John M; Leekam, Susan R; Benson, Valerie
2008-01-01
A preferential-looking paradigm was used to investigate how gaze is distributed in naturalistic scenes. Two scenes were presented side by side: one contained a single person (person-present) and one did not (person-absent). Eye movements were recorded, the principal measures being the time spent looking at each region of the scenes, and the latency and location of the first fixation within each trial. We studied gaze patterns during free viewing, and also in a task requiring gender discrimination of the human figure depicted. Results indicated a strong bias towards looking to the person-present scene. This bias was present on the first fixation after image presentation, confirming previous findings of ultra-rapid processing of complex information. Faces attracted disproportionately many fixations, the preference emerging in the first fixation and becoming stronger in the following ones. These biases were exaggerated in the gender-discrimination task. A tendency to look at the object being fixated by the person in the scene was shown to be strongest at a slightly later point in the gaze sequence. We conclude that human bodies and faces are subject to special perceptual processing when presented as part of a naturalistic scene.
ERIC Educational Resources Information Center
Rieger, Jochem W.; Kochy, Nick; Schalk, Franziska; Gruschow, Marcus; Heinze, Hans-Jochen
2008-01-01
The visual system rapidly extracts information about objects from the cluttered natural environment. In 5 experiments, the authors quantified the influence of orientation and semantics on the classification speed of objects in natural scenes, particularly with regard to object-context interactions. Natural scene photographs were presented in an…
Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images
Cao, Jianfang; Chen, Lichao
2015-01-01
With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance. PMID:25838818
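The fuzzy-membership idea above — an image belonging to several emotion classes to different degrees, rather than carrying one crisp label — can be illustrated with a toy normalisation. The min-max scaling is an assumption for illustration; in the paper the membership degrees come from an Adaboost-combined BP neural network:

```python
def fuzzy_memberships(scores):
    """Convert raw per-emotion scores into fuzzy membership degrees
    in [0, 1] that sum to 1 (min-max scaled, then normalised)."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:  # degenerate case: equal membership in every class
        return {k: 1.0 / len(scores) for k in scores}
    scaled = {k: (v - lo) / (hi - lo) for k, v in scores.items()}
    total = sum(scaled.values())
    return {k: round(v / total, 3) for k, v in scaled.items()}

print(fuzzy_memberships({"serene": 2.0, "gloomy": 0.5, "exciting": 1.0}))
# → {'serene': 0.75, 'gloomy': 0.0, 'exciting': 0.25}
```

Annotation then keeps every class whose membership exceeds some threshold, which is how a single scene can legitimately carry both a basic and an extended emotional value.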
Conceptual short-term memory (CSTM) supports core claims of Christiansen and Chater.
Potter, Mary C
2016-01-01
Rapid serial visual presentation (RSVP) of words or pictured scenes provides evidence for a large-capacity conceptual short-term memory (CSTM) that momentarily provides rich associated material from long-term memory, permitting rapid chunking (Potter 1993; 2009; 2012). In perception of scenes as well as language comprehension, we make use of knowledge that briefly exceeds the supposed limits of working memory.
What you see is what you expect: rapid scene understanding benefits from prior experience.
Greene, Michelle R; Botros, Abraham P; Beck, Diane M; Fei-Fei, Li
2015-05-01
Although we are able to rapidly understand novel scene images, little is known about the mechanisms that support this ability. Theories of optimal coding assert that prior visual experience can be used to ease the computational burden of visual processing. A consequence of this idea is that more probable visual inputs should be facilitated relative to more unlikely stimuli. In three experiments, we compared the perceptions of highly improbable real-world scenes (e.g., an underwater press conference) with common images matched for visual and semantic features. Although the two groups of images could not be distinguished by their low-level visual features, we found profound deficits related to the improbable images: Observers wrote poorer descriptions of these images (Exp. 1), had difficulties classifying the images as unusual (Exp. 2), and even had lower sensitivity to detect these images in noise than to detect their more probable counterparts (Exp. 3). Taken together, these results place a limit on our abilities for rapid scene perception and suggest that perception is facilitated by prior visual experience.
Fradcourt, B; Peyrin, C; Baciu, M; Campagne, A
2013-10-01
Previous studies performed on visual processing of emotional stimuli have revealed preference for a specific type of visual spatial frequencies (high spatial frequency, HSF; low spatial frequency, LSF) according to task demands. The majority of studies used a face and focused on the appraisal of the emotional state of others. The present behavioral study investigates the relative role of spatial frequencies on processing emotional natural scenes during two explicit cognitive appraisal tasks, one emotional, based on the self-emotional experience and one motivational, based on the tendency to action. Our results suggest that HSF information was the most relevant to rapidly identify the self-emotional experience (unpleasant, pleasant, and neutral) while LSF was required to rapidly identify the tendency to action (avoidance, approach, and no action). The tendency to action based on LSF analysis showed a priority for unpleasant stimuli whereas the identification of emotional experience based on HSF analysis showed a priority for pleasant stimuli. The present study confirms the interest of considering both emotional and motivational characteristics of visual stimuli. Copyright © 2013 Elsevier Inc. All rights reserved.
Bradley, Margaret M.; Lang, Peter J.
2013-01-01
During rapid serial visual presentation (RSVP), the perceptual system is confronted with a rapidly changing array of sensory information demanding resolution. At rapid rates of presentation, previous studies have found an early (e.g., 150–280 ms) negativity over occipital sensors that is enhanced when emotional, as compared with neutral, pictures are viewed, suggesting facilitated perception. In the present study, we explored how picture composition and the presence of people in the image affect perceptual processing of pictures of natural scenes. Using RSVP, pictures that differed in perceptual composition (figure–ground or scenes), content (presence of people or not), and emotional content (emotionally arousing or neutral) were presented in a continuous stream for 330 ms each with no intertrial interval. In both subject and picture analyses, all three variables affected the amplitude of occipital negativity, with the greatest enhancement for figure–ground compositions (as compared with scenes), irrespective of content and emotional arousal, supporting an interpretation that ease of perceptual processing is associated with enhanced occipital negativity. Viewing emotional pictures prompted enhanced negativity only for pictures that depicted people, suggesting that specific features of emotionally arousing images are associated with facilitated perceptual processing, rather than all emotional content. PMID:23780520
Integration and segregation in auditory scene analysis
NASA Astrophysics Data System (ADS)
Sussman, Elyse S.
2005-03-01
Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing suggesting that the auditory scene is rapidly organized into distinct streams, and that the integration of sequential elements into perceptual units takes place on the already-formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.
Simulating Scenes In Outer Space
NASA Technical Reports Server (NTRS)
Callahan, John D.
1989-01-01
Multimission Interactive Picture Planner, MIP, computer program for scientifically accurate and fast, three-dimensional animation of scenes in deep space. Versatile, reasonably comprehensive, and portable; runs on microcomputers. New techniques developed to rapidly perform the calculations and transformations necessary to animate scenes in scientifically accurate three-dimensional space. Written in FORTRAN 77 code. Primarily designed to handle Voyager, Galileo, and Space Telescope. Adapted to handle other missions.
ERIC Educational Resources Information Center
Bacon-Mace, Nadege; Kirchner, Holle; Fabre-Thorpe, Michele; Thorpe, Simon J.
2007-01-01
Using manual responses, human participants are remarkably fast and accurate at deciding if a natural scene contains an animal, but recent data show that they are even faster to indicate with saccadic eye movements which of 2 scenes contains an animal. How could it be that 2 images can apparently be processed faster than a single image? To better…
Ground-plane influences on size estimation in early visual processing.
Champion, Rebecca A; Warren, Paul A
2010-07-21
Ground-planes have an important influence on the perception of 3D space (Gibson, 1950) and it has been shown that the assumption that a ground-plane is present in the scene plays a role in the perception of object distance (Bruno & Cutting, 1988). Here, we investigate whether this influence is exerted at an early stage of processing, to affect the rapid estimation of 3D size. Participants performed a visual search task in which they searched for a target object that was larger or smaller than distracter objects. Objects were presented against a background that contained either a frontoparallel or slanted 3D surface, defined by texture gradient cues. We measured the effect on search performance of target location within the scene (near vs. far) and how this was influenced by scene orientation (which, e.g., might be consistent with a ground or ceiling plane, etc.). In addition, we investigated how scene orientation interacted with texture gradient information (indicating surface slant), to determine how these separate cues to scene layout were combined. We found that the difference in target detection performance between targets at the front and rear of the simulated scene was maximal when the scene was consistent with a ground-plane - consistent with the use of an elevation cue to object distance. In addition, we found a significant increase in the size of this effect when texture gradient information (indicating surface slant) was present, but no interaction between texture gradient and scene orientation information. We conclude that scene orientation plays an important role in the estimation of 3D size at an early stage of processing, and suggest that elevation information is linearly combined with texture gradient information for the rapid estimation of 3D size. Copyright 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Brown, C. David; Ih, Charles S.; Arce, Gonzalo R.; Fertell, David A.
1987-01-01
Vision systems for mobile robots or autonomous vehicles navigating in an unknown terrain environment must provide a rapid and accurate method of segmenting the scene ahead into regions of pathway and background. A major distinguishing feature between the pathway and background is the three dimensional texture of these two regions. Typical methods of textural image segmentation are very computationally intensive, often lack the required robustness, and are incapable of sensing the three dimensional texture of various regions of the scene. A method is presented where scanned laser projected lines of structured light, viewed by a stereoscopically located single video camera, resulted in an image in which the three dimensional characteristics of the scene were represented by the discontinuity of the projected lines. This image was conducive to processing with simple regional operators to classify regions as pathway or background. Design of some operators and application methods, and demonstration on sample images are presented. This method provides rapid and robust scene segmentation capability that has been implemented on a microcomputer in near real time, and should result in higher speed and more reliable robotic or autonomous navigation in unstructured environments.
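The core idea above — a 3D discontinuity shows up as a vertical jump in the imaged position of a projected light stripe, so flat pathway can be separated from raised background with a simple regional operator — can be sketched in a few lines. The jump threshold is an assumed parameter, and the input (the stripe's detected row position per image column) stands in for the camera processing described in the abstract:

```python
def segment_by_line_break(line_y, jump_thresh=3):
    """Label each image column as flat pathway (0) or 3D discontinuity
    (1), based on jumps in the imaged row position of a projected
    structured-light stripe viewed from an offset camera."""
    labels = [0] * len(line_y)
    for i in range(1, len(line_y)):
        if abs(line_y[i] - line_y[i - 1]) > jump_thresh:
            labels[i] = 1  # stripe broke: surface height changed abruptly
    return labels

# A stripe that is smooth, then jumps 9 rows at an obstacle edge.
print(segment_by_line_break([10, 10, 11, 20, 21, 21]))  # → [0, 0, 0, 1, 0, 0]
```

Because the operator looks only at local differences in one scan line, it is cheap enough for the near-real-time microcomputer implementation the abstract reports.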
Rapid natural scene categorization in the near absence of attention
Li, Fei Fei; VanRullen, Rufin; Koch, Christof; Perona, Pietro
2002-01-01
What can we see when we do not pay attention? It is well known that we can be “blind” even to major aspects of natural scenes when we attend elsewhere. The only tasks that do not need attention appear to be carried out in the early stages of the visual system. Contrary to this common belief, we report that subjects can rapidly detect animals or vehicles in briefly presented novel natural scenes while simultaneously performing another attentionally demanding task. By comparison, they are unable to discriminate large T's from L's, or bisected two-color disks from their mirror images under the same conditions. We conclude that some visual tasks associated with “high-level” cortical areas may proceed in the near absence of attention. PMID:12077298
Collet, Anne-Claire; Fize, Denis; VanRullen, Rufin
2015-01-01
Rapid visual categorization is a crucial ability for the survival of many animal species, including monkeys and humans. In real conditions, objects (either animate or inanimate) are never isolated but embedded in a complex background made of multiple elements. It has been shown in humans and monkeys that the contextual background can either enhance or impair object categorization, depending on context/object congruency (for example, an animal in a natural vs. man-made environment). Moreover, a scene is not only a collection of objects; it also has global physical features (i.e. the phase and amplitude of Fourier spatial frequencies) which help define its gist. In our experiment, we aimed to explore and compare the contribution of the amplitude spectrum of scenes to the context-object congruency effect in monkeys and humans. We designed a rapid visual categorization task, Animal versus Non-Animal, using as contexts both real scene photographs and noisy backgrounds built from the amplitude spectrum of real scenes but with randomized phase spectrum. We showed that even if the contextual congruency effect was comparable in both species when the context was a real scene, it differed when the foreground object was surrounded by a noisy background: in monkeys we found a similar congruency effect in both conditions, but in humans the congruency effect was absent (or even reversed) when the context was a noisy background. PMID:26207915
Forensic 3D Scene Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
LITTLE,CHARLES Q.; PETERS,RALPH R.; RIGDON,J. BRIAN
Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.
Semantic Categorization Precedes Affective Evaluation of Visual Scenes
ERIC Educational Resources Information Center
Nummenmaa, Lauri; Hyona, Jukka; Calvo, Manuel G.
2010-01-01
We compared the primacy of affective versus semantic categorization by using forced-choice saccadic and manual response tasks. Participants viewed paired emotional and neutral scenes involving humans or animals flashed rapidly in extrafoveal vision. Participants were instructed to categorize the targets by saccading toward the location occupied by…
Changes in nursing ethics education in Lithuania.
Toliusiene, Jolanta; Peicius, Eimantas
2007-11-01
The post-Soviet scene in Lithuania is one of rapid change in medical and nursing ethics. A short introduction to the current background sets the scene for a wider discussion of ethics in health care professionals' education. Lithuania had to adapt rapidly from a politicized nursing and ethics curriculum to European regulations, and from a paternalistic style of care to one of engagement with choices and dilemmas. The relationships between professionals, and between professionals and patients, are affected by this in particular. This short article highlights these issues and how they impact on all involved.
NASA Technical Reports Server (NTRS)
Muller, Richard E. (Inventor); Mouroulis, Pantazis Z. (Inventor); Maker, Paul D. (Inventor); Wilson, Daniel W. (Inventor)
2003-01-01
The optical system of this invention is a unique type of imaging spectrometer, i.e. an instrument that can determine the spectra of all points in a two-dimensional scene. The general type of imaging spectrometer under which this invention falls has been termed a computed-tomography imaging spectrometer (CTIS). CTISs have the ability to perform spectral imaging of scenes containing rapidly moving objects or evolving features, hereafter referred to as transient scenes. This invention, a reflective CTIS with a unique two-dimensional reflective grating, can operate in any wavelength band from the ultraviolet through the long-wave infrared. Although this spectrometer is especially useful for rapidly occurring events, it is also useful for the investigation of some slow-moving phenomena, as in the life sciences.
An Analysis of the Max-Min Texture Measure.
1982-01-01
[Front matter: list of appendix tables D2-D10, confusion matrices for Scenes A, B, C, E and H in the PANC and IR bands.]
Rapid microfluidic analysis of a Y-STR multiplex for screening of forensic samples.
Gibson-Daw, Georgiana; Albani, Patricia; Gassmann, Marcus; McCord, Bruce
2017-02-01
In this paper, we demonstrate a rapid analysis procedure for use with a small set of rapidly mutating Y chromosomal short tandem repeat (Y-STR) loci that combines both rapid polymerase chain reaction (PCR) and microfluidic separation elements. The procedure involves a high-speed polymerase and a rapid cycling protocol to permit PCR amplification in 16 min. The resultant amplified sample is next analysed using a short 1.8-cm microfluidic electrophoresis system that permits a four-locus Y-STR genotype to be produced in 80 s. The entire procedure takes less than 25 min from sample collection to result. This paper describes the rapid amplification protocol as well as studies of the reproducibility and sensitivity of the procedure and its optimisation. The amplification process utilises a small high-speed thermocycler, microfluidic device and compact laptop, making it portable and potentially useful for rapid, inexpensive on-site genotyping. The four loci used for the multiplex were selected due to their rapid mutation rates and should prove useful in preliminary screening of samples and suspects. Overall, this technique provides a method for rapid sample screening of suspect and crime scene samples in forensic casework.
Anticipation in Real-world Scenes: The Role of Visual Context and Visual Memory
ERIC Educational Resources Information Center
Coco, Moreno I.; Keller, Frank; Malcolm, George L.
2016-01-01
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically…
Metabolic Mapping of the Brain's Response to Visual Stimulation: Studies in Humans.
ERIC Educational Resources Information Center
Phelps, Michael E.; Kuhl, David E.
1981-01-01
Studies demonstrate increasing glucose metabolic rates in human primary (PVC) and association (AVC) visual cortex as complexity of visual scenes increase. AVC increased more rapidly with scene complexity than PVC and increased local metabolic activities above control subject with eyes closed; indicates wide range and metabolic reserve of visual…
Research in interactive scene analysis
NASA Technical Reports Server (NTRS)
Tenenbaum, J. M.; Garvey, T. D.; Weyl, S. A.; Wolf, H. C.
1975-01-01
An interactive scene interpretation system (ISIS) was developed as a tool for constructing and experimenting with man-machine and automatic scene analysis methods tailored for particular image domains. A recently developed region analysis subsystem based on the paradigm of Brice and Fennema is described. Using this subsystem a series of experiments was conducted to determine good criteria for initially partitioning a scene into atomic regions and for merging these regions into a final partition of the scene along object boundaries. Semantic (problem-dependent) knowledge is essential for complete, correct partitions of complex real-world scenes. An interactive approach to semantic scene segmentation was developed and demonstrated on both landscape and indoor scenes. This approach provides a reasonable methodology for segmenting scenes that cannot be processed completely automatically, and is a promising basis for a future automatic system. A program is described that can automatically generate strategies for finding specific objects in a scene based on manually designated pictorial examples.
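The Brice-Fennema-style merge step described above — start from atomic regions and repeatedly merge neighbours whose properties are similar enough — can be sketched with union-find over a region adjacency graph. The criterion here (difference in mean grey level, with means not recomputed after merging) is a deliberate simplification of the paper's criteria, and all names are illustrative:

```python
def merge_regions(means, adjacency, max_diff):
    """Union-find merge of adjacent atomic regions whose mean grey
    levels differ by at most max_diff; returns region -> merged label."""
    parent = {r: r for r in means}

    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]  # path halving
            r = parent[r]
        return r

    for a, b in adjacency:
        if abs(means[a] - means[b]) <= max_diff:
            parent[find(a)] = find(b)  # union the two regions
    return {r: find(r) for r in means}

# r1 and r2 are similar neighbours; r3 is much brighter.
labels = merge_regions({"r1": 10, "r2": 12, "r3": 80},
                       [("r1", "r2"), ("r2", "r3")], max_diff=5)
print(labels["r1"] == labels["r2"], labels["r2"] == labels["r3"])  # → True False
```

The interactive part of ISIS amounts to letting a human adjust `max_diff` (and supply semantic constraints) when a purely automatic partition crosses object boundaries.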
Scadding, Cameron J; Watling, R John; Thomas, Allen G
2005-08-15
The majority of crimes result in the generation of some form of physical evidence, which is available for collection by crime scene investigators or police. However, this debris is often limited in amount as modern criminals become more aware of its potential value to forensic scientists. The requirement to obtain robust evidence from increasingly smaller sized samples has required refinement and modification of old analytical techniques and the development of new ones. This paper describes a new method for the analysis of oxy-acetylene debris, left behind at a crime scene, and the establishment of its co-provenance with single particles of equivalent debris found on the clothing of persons of interest (POI). The ability to rapidly determine and match the elemental distribution patterns of debris collected from crime scenes to those recovered from persons of interest is essential in ensuring successful prosecution. Traditionally, relatively large amounts of sample (up to several milligrams) have been required to obtain a reliable elemental fingerprint of this type of material [R.J. Watling, B.F. Lynch, D. Herring, J. Anal. At. Spectrom. 12 (1997) 195]. However, this quantity of material is unlikely to be recovered from a POI. This paper describes the development and application of laser ablation inductively coupled plasma time of flight mass spectrometry (LA-ICP-TOF-MS) as an analytical protocol, which can be applied more appropriately to the analysis of micro-debris than conventional quadrupole-based mass spectrometry. The resulting data, for debris as small as 70 μm in diameter, were unambiguously matched between a single spherule recovered from a POI and a spherule recovered from the scene of crime, in an analytical procedure taking less than 5 min.
Scenes of Devastation: Chasing Hawaii's Deadly Ohia Fungus | Hawaii Public
Solomon, Molly
2016-03-25
Rapid Ohia Death has devastated native forests on Hawaii Island, especially in Lower Puna subdivisions like Leilani Estates. One of Hawai'i's oldest and most
Perception of Objects in Natural Scenes: Is It Really Attention Free?
ERIC Educational Resources Information Center
Evans, Karla K.; Treisman, Anne
2005-01-01
Studies have suggested attention-free semantic processing of natural scenes in which concurrent tasks leave category detection unimpaired (e.g., F. Li, R. VanRullen, C. Koch, & P. Perona, 2002). Could this ability reflect detection of disjunctive feature sets rather than high-level binding? Participants detected an animal target in a rapid serial…
Robust fusion-based processing for military polarimetric imaging systems
NASA Astrophysics Data System (ADS)
Hickman, Duncan L.; Smith, Moira I.; Kim, Kyung Su; Choi, Hyun-Jin
2017-05-01
Polarisation information within a scene can be exploited in military systems to give enhanced automatic target detection and recognition (ATD/R) performance. However, the performance gain achieved is highly dependent on factors such as the geometry, viewing conditions, and the surface finish of the target. Such performance sensitivities are highly undesirable in many tactical military systems, where operational conditions can vary significantly and rapidly during a mission. Within this paper, a range of processing architectures and fusion methods is considered in terms of their practical viability and operational robustness for systems requiring ATD/R. It is shown that polarisation information can give useful performance gains but, to retain system robustness, polarimetric processing should be introduced so as not to compromise other discriminatory scene information in the spectral and spatial domains. The analysis concludes that polarimetric data can be effectively integrated with conventional intensity-based ATD/R either by adapting the ATD/R processing function based on the scene polarisation or by detection-level fusion. Both of these approaches avoid the introduction of processing bottlenecks and limit the impact of processing on system latency.
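Detection-level fusion of the kind the abstract favours can be illustrated with a minimal sketch. Everything below is an assumption for illustration (the function name, the noisy-OR score combination, and the matching radius are ours, not from the paper): scored detections from an intensity channel and a polarimetric channel are merged, and detections that coincide spatially have their confidences combined.

```python
# Hypothetical sketch of detection-level fusion: each channel yields
# (x, y, score) detections; detections within `radius` of each other
# are treated as the same target and their scores are combined.

def fuse_detections(intensity_dets, polar_dets, radius=5.0):
    fused = []
    used = set()
    for (xi, yi, si) in intensity_dets:
        match = None
        for j, (xp, yp, sp) in enumerate(polar_dets):
            if j not in used and (xi - xp) ** 2 + (yi - yp) ** 2 <= radius ** 2:
                match = (j, sp)
                break
        if match is not None:
            j, sp = match
            used.add(j)
            # Noisy-OR combination: confidence rises when both channels agree.
            fused.append((xi, yi, 1 - (1 - si) * (1 - sp)))
        else:
            fused.append((xi, yi, si))
    # Keep unmatched polarimetric detections as well, so the polarimetric
    # channel can contribute targets the intensity channel missed.
    for j, (xp, yp, sp) in enumerate(polar_dets):
        if j not in used:
            fused.append((xp, yp, sp))
    return fused
```

Because each channel's detections pass through unchanged unless corroborated, a degraded polarimetric channel cannot suppress intensity-based detections, which is one way to preserve the robustness the paper emphasises.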
Guidance of attention to objects and locations by long-term memory of natural scenes.
Becker, Mark W; Rasmussen, Ian P
2008-11-01
Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants performed an unanticipated 2nd block of trials. When the same scene occurred in the 2nd block, the change within the scene was (a) identical to the original change, (b) a new object appearing in the original change location, (c) the same object appearing in a new location, or (d) a new object appearing in a new location. Results suggest that attention is rapidly allocated to previously relevant locations and then to previously relevant objects. This pattern of locations dominating objects remained when object identity information was made more salient. Eye tracking verified that scene memory results in more direct scan paths to previously relevant locations and objects. This contextual guidance suggests that a high-capacity long-term memory for scenes is used to ensure that limited attentional capacity is allocated efficiently rather than being squandered.
The robot's eyes - Stereo vision system for automated scene analysis
NASA Technical Reports Server (NTRS)
Williams, D. S.
1977-01-01
Attention is given to the robot stereo vision system, which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.
Schettino, Antonio; Keil, Andreas; Porcu, Emanuele; Müller, Matthias M
2016-06-01
The rapid extraction of affective cues from the visual environment is crucial for flexible behavior. Previous studies have reported emotion-dependent amplitude modulations of two event-related potential (ERP) components - the N1 and EPN - reflecting sensory gain control mechanisms in extrastriate visual areas. However, it is unclear whether both components are selective electrophysiological markers of attentional orienting toward emotional material or are also influenced by physical features of the visual stimuli. To address this question, electrical brain activity was recorded from seventeen male participants while viewing original and bright versions of neutral and erotic pictures. Bright neutral scenes were rated as more pleasant compared to their original counterpart, whereas erotic scenes were judged more positively when presented in their original version. Classical and mass univariate ERP analysis showed larger N1 amplitude for original relative to bright erotic pictures, with no differences for original and bright neutral scenes. Conversely, the EPN was only modulated by picture content and not by brightness, substantiating the idea that this component is a unique electrophysiological marker of attention allocation toward emotional material. Complementary topographic analysis revealed the early selective expression of a centro-parietal positivity following the presentation of original erotic scenes only, reflecting the recruitment of neural networks associated with sustained attention and facilitated memory encoding for motivationally relevant material. Overall, these results indicate that neural networks subtending the extraction of emotional information are differentially recruited depending on low-level perceptual features, which ultimately influence affective evaluations. Copyright © 2016 Elsevier Inc. All rights reserved.
DNA methylation: the future of crime scene investigation?
Gršković, Branka; Zrnec, Dario; Vicković, Sanja; Popović, Maja; Mršić, Gordan
2013-07-01
Proper detection and subsequent analysis of biological evidence is crucial for crime scene reconstruction. The number of different criminal acts is increasing rapidly, and forensic geneticists are constantly searching for ways to solve them. One essential line of defence in this effort relies on DNA methylation. In this review, the role of DNA methylation in body fluid identification and other DNA methylation applications are discussed. Among these applications, the most important are age determination of the donor of biological evidence, analysis of parent-of-origin-specific DNA methylation markers at imprinted loci for parentage testing and personal identification, differentiation between monozygotic twins based on their different DNA methylation patterns, artificial DNA detection, and analysis of DNA methylation patterns in the promoter regions of circadian clock genes. Nevertheless, many open questions in DNA methylation research remain to be answered before its final implementation in routine forensic casework.
Rapid discrimination of visual scene content in the human brain.
Anokhin, Andrey P; Golosheykin, Simon; Sirevaag, Erik; Kristjansson, Sean; Rohrbaugh, John W; Heath, Andrew C
2006-06-06
The rapid evaluation of complex visual environments is critical for an organism's adaptation and survival. Previous studies have shown that emotionally significant visual scenes, both pleasant and unpleasant, elicit a larger late positive wave in the event-related brain potential (ERP) than emotionally neutral pictures. The purpose of the present study was to examine whether neuroelectric responses elicited by complex pictures discriminate between specific, biologically relevant contents of the visual scene and to determine how early in the picture processing this discrimination occurs. Subjects (n = 264) viewed 55 color slides differing in both scene content and emotional significance. No categorical judgments or responses were required. Consistent with previous studies, we found that emotionally arousing pictures, regardless of their content, produce a larger late positive wave than neutral pictures. However, when pictures were further categorized by content, anterior ERP components in a time window between 200 and 600 ms following stimulus onset showed a high selectivity for pictures with erotic content compared to other pictures regardless of their emotional valence (pleasant, neutral, and unpleasant) or emotional arousal. The divergence of ERPs elicited by erotic and non-erotic contents started at 185 ms post-stimulus in the fronto-central midline region, with a later onset in parietal regions. This rapid, selective, and content-specific processing of erotic materials and its dissociation from other pictures (including emotionally positive pictures) suggests the existence of a specialized neural network for prioritized processing of a distinct category of biologically relevant stimuli with high adaptive and evolutionary significance.
Scene analysis in the natural environment
Lewicki, Michael S.; Olshausen, Bruno A.; Surlykke, Annemarie; Moss, Cynthia F.
2014-01-01
The problem of scene analysis has been studied in a number of different fields over the past decades. These studies have led to important insights into problems of scene analysis, but not all of these insights are widely appreciated, and there remain critical shortcomings in current approaches that hinder further progress. Here we take the view that scene analysis is a universal problem solved by all animals, and that we can gain new insight by studying the problems that animals face in complex natural environments. In particular, the jumping spider, songbird, echolocating bat, and electric fish, all exhibit behaviors that require robust solutions to scene analysis problems encountered in the natural environment. By examining the behaviors of these seemingly disparate animals, we emerge with a framework for studying scene analysis comprising four essential properties: (1) the ability to solve ill-posed problems, (2) the ability to integrate and store information across time and modality, (3) efficient recovery and representation of 3D scene structure, and (4) the use of optimal motor actions for acquiring information to progress toward behavioral goals. PMID:24744740
Groen, Iris I A; Silson, Edward H; Baker, Chris I
2017-02-19
Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis.This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).
Subliminal encoding and flexible retrieval of objects in scenes.
Wuethrich, Sergej; Hannula, Deborah E; Mast, Fred W; Henke, Katharina
2018-04-27
Our episodic memory stores what happened when and where in life. Episodic memory requires the rapid formation and flexible retrieval of where things are located in space. Consciousness of the encoding scene is considered crucial for episodic memory formation. Here, we question the necessity of consciousness and hypothesize that humans can form unconscious episodic memories. Participants were presented with subliminal scenes, i.e., scenes invisible to the conscious mind. The scenes displayed objects at certain locations for participants to form unconscious object-in-space memories. Later, the same scenes were presented supraliminally, i.e., visibly, for retrieval testing. Scenes were presented absent the objects and rotated by 90°-270° in perspective to assess the representational flexibility of unconsciously formed memories. During the test phase, participants performed a forced-choice task that required them to place an object in one of two highlighted scene locations and their eye movements were recorded. Evaluation of the eye tracking data revealed that participants remembered object locations unconsciously, irrespective of changes in viewing perspective. This effect of gaze was related to correct placements of objects in scenes, and an intuitive decision style was necessary for unconscious memories to influence intentional behavior to a significant degree. We conclude that conscious perception is not mandatory for spatial episodic memory formation. This article is protected by copyright. All rights reserved. © 2018 Wiley Periodicals, Inc.
Bölte, Jens; Hofmann, Reinhild; Meier, Claudine C.; Dobel, Christian
2018-01-01
At the interface between scene perception and speech production, we investigated how rapidly action scenes can activate semantic and lexical information. Experiment 1 examined how complex action-scene primes, presented for 150 ms, 100 ms, or 50 ms and subsequently masked, influenced the speed with which immediately following action-picture targets are named. Prime and target actions were either identical, showed the same action with different actors and environments, or were unrelated. Relative to unrelated primes, identical and same-action primes facilitated naming the target action, even when presented for 50 ms. In Experiment 2, neutral primes assessed the direction of effects. Identical and same-action scenes induced facilitation but unrelated actions induced interference. In Experiment 3, written verbs were used as targets for naming, preceded by action primes. When target verbs denoted the prime action, clear facilitation was obtained. In contrast, interference was observed when target verbs were phonologically similar, but otherwise unrelated, to the names of prime actions. This is clear evidence for word-form activation by masked action scenes. Masked action pictures thus provide conceptual information that is detailed enough to facilitate apprehension and naming of immediately following scenes. Masked actions even activate their word-form information–as is evident when targets are words. We thus show how language production can be primed with briefly flashed masked action scenes, in answer to long-standing questions in scene processing. PMID:29652939
Sofer, Imri; Crouzet, Sébastien M.; Serre, Thomas
2015-01-01
Observers can rapidly perform a variety of visual tasks such as categorizing a scene as open, as outdoor, or as a beach. Although we know that different tasks are typically associated with systematic differences in behavioral responses, to date, little is known about the underlying mechanisms. Here, we implemented a single integrated paradigm that links perceptual processes with categorization processes. Using a large image database of natural scenes, we trained machine-learning classifiers to derive quantitative measures of task-specific perceptual discriminability based on the distance between individual images and different categorization boundaries. We showed that the resulting discriminability measure accurately predicts variations in behavioral responses across categorization tasks and stimulus sets. We further used the model to design an experiment, which challenged previous interpretations of the so-called “superordinate advantage.” Overall, our study suggests that observed differences in behavioral responses across rapid categorization tasks reflect natural variations in perceptual discriminability. PMID:26335683
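The discriminability measure described above is based on the distance between an individual image and a categorization boundary. For a linear boundary this reduces to the standard point-to-hyperplane distance; the sketch below assumes such a linear boundary and uses illustrative names (the paper's actual classifiers and features are not specified here):

```python
import numpy as np

# Hypothetical sketch: given a linear categorization boundary w·x + b = 0
# learned by any classifier, the perceptual discriminability of an image
# is taken to be its distance to that boundary (larger = farther from the
# boundary = easier to categorize).

def discriminability(features, w, b):
    """Distance of each feature vector (row) to the hyperplane w·x + b = 0."""
    w = np.asarray(w, dtype=float)
    features = np.asarray(features, dtype=float)
    return np.abs(features @ w + b) / np.linalg.norm(w)
```

Under this reading, behavioral differences across categorization tasks fall out naturally: the same image sits at different distances from different task boundaries.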
NASA Fundamental Remote Sensing Science Research Program
NASA Technical Reports Server (NTRS)
1984-01-01
The NASA Fundamental Remote Sensing Research Program is described. The program provides a dynamic scientific base which is continually broadened and from which future applied research and development can draw support. In particular, the overall objectives and current studies of the scene radiation and atmospheric effect characterization (SRAEC) project are reviewed. The SRAEC research can be generically structured into four types of activities including observation of phenomena, empirical characterization, analytical modeling, and scene radiation analysis and synthesis. The first three activities are the means by which the goal of scene radiation analysis and synthesis is achieved, and thus are considered priority activities during the early phases of the current project. Scene radiation analysis refers to the extraction of information describing the biogeophysical attributes of the scene from the spectral, spatial, and temporal radiance characteristics of the scene including the atmosphere. Scene radiation synthesis is the generation of realistic spectral, spatial, and temporal radiance values for a scene with a given set of biogeophysical attributes and atmospheric conditions.
Anticipation in Real-World Scenes: The Role of Visual Context and Visual Memory.
Coco, Moreno I; Keller, Frank; Malcolm, George L
2016-11-01
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object-based visual indices. Copyright © 2015 Cognitive Science Society, Inc.
The influence of behavioral relevance on the processing of global scene properties: An ERP study.
Hansen, Natalie E; Noesen, Birken T; Nador, Jeffrey D; Harel, Assaf
2018-05-02
Recent work studying the temporal dynamics of visual scene processing (Harel et al., 2016) has found that global scene properties (GSPs) modulate the amplitude of early Event-Related Potentials (ERPs). It is still not clear, however, to what extent the processing of these GSPs is influenced by their behavioral relevance, determined by the goals of the observer. To address this question, we investigated how behavioral relevance, operationalized by task context, impacts the electrophysiological responses to GSPs. In a set of two experiments we recorded ERPs while participants viewed images of real-world scenes varying along two GSPs, naturalness (manmade/natural) and spatial expanse (open/closed). In Experiment 1, very little attention to scene content was required, as participants viewed the scenes while performing an orthogonal fixation-cross task. In Experiment 2, participants saw the same scenes but now had to actively categorize them, based either on their naturalness or spatial expanse. We found that task context had very little impact on the early ERP responses to the naturalness and spatial expanse of the scenes: P1, N1, and P2 could distinguish between open and closed scenes and between manmade and natural scenes across both experiments. Further, the specific effects of naturalness and spatial expanse on the ERP components were largely unaffected by their relevance for the task. A task effect was found at the N1 and P2 level, but this effect was manifest across all scene dimensions, indicating a general effect rather than an interaction between task context and GSPs. Together, these findings suggest that the extraction of global scene information reflected in the early ERP components is rapid and little influenced by top-down observer-based goals. Copyright © 2018 Elsevier Ltd. All rights reserved.
Comprehensive Understanding for Vegetated Scene Radiance Relationships
NASA Technical Reports Server (NTRS)
Kimes, D. S.; Deering, D. W.
1984-01-01
Directional reflectance distributions spanning the entire existent hemisphere were measured in two field studies; one using a Mark III 3-band radiometer and one using the rapid scanning bidirectional field instrument called PARABOLA. Surfaces measured included corn, soybeans, bare soils, grass lawn, orchard grass, alfalfa, cotton row crops, plowed field, annual grassland, stipa grass, hard wheat, salt plain shrubland, and irrigated wheat. Analysis of field data showed unique reflectance distributions ranging from bare soil to complete vegetation canopies. Physical mechanisms causing these trends were proposed. A 3-D model was developed and is unique in that it predicts: (1) the directional spectral reflectance factors as a function of the sensor's azimuth and zenith angles and the sensor's position above the canopy; (2) the spectral absorption as a function of location within the scene; and (3) the directional spectral radiance as a function of the sensor's location within the scene. Initial verification of the model as applied to a soybean row crop showed that the simulated directional data corresponded relatively well in gross trends to the measured data. The model was expanded to include the anisotropic scattering properties of leaves as a function of the leaf orientation distribution in both the zenith and azimuth angle modes.
NASA Astrophysics Data System (ADS)
Li, Wanjing; Schütze, Rainer; Böhler, Martin; Boochs, Frank; Marzani, Franck S.; Voisin, Yvon
2009-06-01
We present an approach to integrate a preprocessing step of region-of-interest (ROI) localization into 3-D scanners (laser or stereoscopic). The ultimate objective is to make the 3-D scanner intelligent enough to rapidly localize, during the preprocessing phase, the regions of the scene with high surface curvature, so that precise scanning is done only in these regions instead of the whole scene. In this way, the scanning time can be largely reduced, and the results contain only pertinent data. To test its feasibility and efficiency, we simulated the preprocessing process with an active stereoscopic system composed of two cameras and a video projector. The ROI localization is done iteratively. First, the video projector projects a regular point pattern onto the scene; the pattern is then modified iteratively according to the local surface curvature at each reconstructed 3-D point. Finally, the last pattern is used to determine the ROI. Our experiments showed that with this approach the system is capable of localizing all types of objects, including small objects with small depth.
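The curvature-driven ROI idea can be sketched under strong simplifying assumptions that are not from the paper: a dense depth map stands in for the sparse reconstructed 3-D points, and a discrete Laplacian serves as a curvature proxy. Cells whose curvature magnitude exceeds a threshold would receive a denser projected pattern in the next iteration.

```python
import numpy as np

# Hypothetical sketch: estimate local surface curvature on a depth map with
# a 5-point discrete Laplacian and flag high-curvature cells as ROI.

def roi_from_curvature(depth, threshold):
    depth = np.asarray(depth, dtype=float)
    lap = np.zeros_like(depth)
    # Interior cells only; borders stay zero (never flagged).
    lap[1:-1, 1:-1] = (depth[:-2, 1:-1] + depth[2:, 1:-1]
                       + depth[1:-1, :-2] + depth[1:-1, 2:]
                       - 4 * depth[1:-1, 1:-1])
    return np.abs(lap) > threshold
```

On a flat surface the Laplacian vanishes everywhere, so no ROI is flagged and the scanner would keep its coarse pattern; a bump or edge produces a localized ROI where scanning would be refined.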
ERIC Educational Resources Information Center
Champoux, Joseph E.
2005-01-01
Live-action and animated film remake scenes can show many topics typically taught in organizational behaviour and management courses. This article discusses, analyses and compares such scenes to identify parallel film scenes useful for teaching. The analysis assesses the scenes to decide which scene type, animated or live-action, more effectively…
Photonics: From target recognition to lesion detection
NASA Technical Reports Server (NTRS)
Henry, E. Michael
1994-01-01
Since 1989, Martin Marietta has invested in the development of an innovative concept for robust real-time pattern recognition for any two-dimensional sensor. This concept has been tested in simulation, and in laboratory and field hardware, for a number of DOD and commercial uses from automatic target recognition to manufacturing inspection. We have now joined Rose Health Care Systems in developing its use for medical diagnostics. The concept is based on determining regions of interest by using optical Fourier bandpassing as a scene segmentation technique, enhancing those regions using wavelet filters, passing the enhanced regions to a neural network for analysis and initial pattern identification, and following this initial identification with confirmation by optical correlation. The optical scene segmentation and pattern confirmation are performed by the same optical module. The neural network is a recursive error minimization network with a small number of connections and nodes that rapidly converges to a global minimum.
Dynamics of distribution and density of phreatophytes and other arid land plant communities
NASA Technical Reports Server (NTRS)
Turner, R. M. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Ground truth measurements of plant coverage on six satellite overflight dates reveal unique trends in coverage for the five desert or semi-desert communities selected. Densitometry and multispectral additive color viewing were used in a preliminary analysis of imagery using the electronic satellite image analyzer console at Stanford Research Institute. The densitometric analysis shows promise for mapping boundaries between plant communities. Color additive viewing of a chronologic sequence of the same scene shown in rapid order will provide a method for mapping phreatophyte communities.
Henderson, John M; Choi, Wonil
2015-06-01
During active scene perception, our eyes move from one location to another via saccadic eye movements, with the eyes fixating objects and scene elements for varying amounts of time. Much of the variability in fixation duration is accounted for by attentional, perceptual, and cognitive processes associated with scene analysis and comprehension. For this reason, current theories of active scene viewing attempt to account for the influence of attention and cognition on fixation duration. Yet almost nothing is known about the neurocognitive systems associated with variation in fixation duration during scene viewing. We addressed this topic using fixation-related fMRI, which involves coregistering high-resolution eye tracking and magnetic resonance scanning to conduct event-related fMRI analysis based on characteristics of eye movements. We observed that activation in visual and prefrontal executive control areas was positively correlated with fixation duration, whereas activation in ventral areas associated with scene encoding and medial superior frontal and paracentral regions associated with changing action plans was negatively correlated with fixation duration. The results suggest that fixation duration in scene viewing is controlled by cognitive processes associated with real-time scene analysis interacting with motor planning, consistent with current computational models of active vision for scene perception.
Research in interactive scene analysis
NASA Technical Reports Server (NTRS)
Tenenbaum, J. M.; Barrow, H. G.; Weyl, S. A.
1976-01-01
Cooperative (man-machine) scene analysis techniques were developed whereby humans can provide a computer with guidance when completely automated processing is infeasible. An interactive approach promises significant near-term payoffs in analyzing various types of high volume satellite imagery, as well as vehicle-based imagery used in robot planetary exploration. This report summarizes the work accomplished over the duration of the project and describes in detail three major accomplishments: (1) the interactive design of texture classifiers; (2) a new approach for integrating the segmentation and interpretation phases of scene analysis; and (3) the application of interactive scene analysis techniques to cartography.
Adolescent Characters and Alcohol Use Scenes in Brazilian Movies, 2000-2008.
Castaldelli-Maia, João Mauricio; de Andrade, Arthur Guerra; Lotufo-Neto, Francisco; Bhugra, Dinesh
2016-04-01
Quantitative structured assessment of 193 scenes depicting substance use from a convenience sample of 50 Brazilian movies was performed. Logistic regression and analysis of variance or multivariate analysis of variance models were employed to test two different types of outcome regarding alcohol appearance: the mean length of alcohol scenes in seconds and the prevalence of alcohol use scenes. The presence of adolescent characters was associated with a higher prevalence of alcohol use scenes compared to non-alcohol use scenes. The presence of adolescents was also associated with a higher-than-average length of alcohol use scenes compared to non-alcohol use scenes. Alcohol use was negatively associated with cannabis, cocaine, and other drug use. However, when the use of cannabis, cocaine, or other drugs was present in alcohol use scenes, a higher average length was found. This may mean that the most vulnerable group sees drinking as a more attractive option, leading to higher alcohol use. © The Author(s) 2016.
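As an illustration of the study's two outcome measures (the variable names and data layout below are hypothetical, not from the study), one could tabulate the prevalence of alcohol use scenes and the mean length of those scenes, split by whether adolescent characters are present:

```python
# Hypothetical sketch: per-scene records with flags for adolescent presence
# and alcohol use, plus scene length in seconds; summarize the two outcomes
# (prevalence and mean length of alcohol scenes) by group.

def summarize(scenes):
    out = {}
    for adolescents in (True, False):
        group = [s for s in scenes if s["adolescents"] == adolescents]
        alcohol = [s for s in group if s["alcohol"]]
        out[adolescents] = {
            "prevalence": len(alcohol) / len(group) if group else 0.0,
            "mean_length_s": (sum(s["length_s"] for s in alcohol) / len(alcohol)
                              if alcohol else 0.0),
        }
    return out
```

The study's inferential models (logistic regression, ANOVA/MANOVA) would then test whether these group differences are statistically reliable.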
Fires and Heavy Smoke in Alaska
NASA Technical Reports Server (NTRS)
2002-01-01
On May 28, 2002, the Moderate Resolution Imaging Spectroradiometer (MODIS) captured this image of fires that continue to burn in central Alaska. Alaska is very dry and warm for this time of year, and has experienced over 230 wildfires so far this season. Please note that the high-resolution scene provided here is 500 meters per pixel. For a copy of the scene at the sensor's fullest resolution, visit the MODIS Rapid Response Image Gallery.
Wagner, Dylan D; Kelley, William M; Heatherton, Todd F
2011-12-01
People are able to rapidly infer complex personality traits and mental states even from the most minimal person information. Research has shown that when observers view a natural scene containing people, they spend a disproportionate amount of their time looking at the social features (e.g., faces, bodies). Does this preference for social features merely reflect the biological salience of these features or are observers spontaneously attempting to make sense of complex social dynamics? Using functional neuroimaging, we investigated neural responses to social and nonsocial visual scenes in a large sample of participants (n = 48) who varied on an individual difference measure assessing empathy and mentalizing (i.e., empathizing). Compared with other scene categories, viewing natural social scenes activated regions associated with social cognition (e.g., dorsomedial prefrontal cortex and temporal poles). Moreover, activity in these regions during social scene viewing was strongly correlated with individual differences in empathizing. These findings offer neural evidence that observers spontaneously engage in social cognition when viewing complex social material but that the degree to which people do so is mediated by individual differences in trait empathizing.
Rank preserving sparse learning for Kinect based scene classification.
Tao, Dapeng; Jin, Lianwen; Yang, Zhao; Li, Xuelong
2013-10-01
With the rapid development of RGB-D sensors and the growing popularity of the low-cost Microsoft Kinect sensor, scene classification, a hard yet important problem in computer vision, has recently gained a resurgence of interest. That is because the depth information provided by the Kinect sensor opens an effective and innovative way for scene classification. In this paper, we propose a new scheme for scene classification, which applies locality-constrained linear coding (LLC) to local SIFT features for representing the RGB-D samples and classifies scenes through the cooperation between a new rank preserving sparse learning (RPSL) based dimension reduction and a simple classification method. RPSL considers four aspects: 1) it preserves the rank order information of the within-class samples in a local patch; 2) it maximizes the margin between the between-class samples on the local patch; 3) the L1-norm penalty is introduced to obtain the parsimony property; and 4) it models the classification error minimization by utilizing the least-squares error minimization. Experiments are conducted on the NYU Depth V1 dataset and demonstrate the robustness and effectiveness of RPSL for scene classification.
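The LLC coding step mentioned above admits an analytical k-nearest-neighbour solution. The following is a minimal sketch of that coding idea (codebook size, k and the regularisation weight are illustrative assumptions, not the paper's settings; the RPSL dimension-reduction stage is omitted):

```python
import numpy as np

def llc_code(x, codebook, k=5, beta=1e-4):
    """Locality-constrained linear coding for one descriptor.
    x: (d,) local descriptor; codebook: (M, d) visual words."""
    # 1. Select the k nearest codebook atoms (locality constraint).
    dists = np.linalg.norm(codebook - x, axis=1)
    idx = np.argsort(dists)[:k]
    B = codebook[idx]                      # (k, d) local base
    # 2. Solve min ||x - c^T B||^2 subject to sum(c) = 1 in closed form.
    z = B - x                              # shift basis to the descriptor
    C = z @ z.T                            # local covariance
    C += beta * np.trace(C) * np.eye(k)    # regularisation for stability
    c = np.linalg.solve(C, np.ones(k))
    c /= c.sum()                           # enforce the sum-to-one constraint
    code = np.zeros(len(codebook))
    code[idx] = c                          # sparse code over the full codebook
    return code

rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 128))      # 64 visual words, 128-D SIFT-like
x = rng.normal(size=128)
code = llc_code(x, codebook)
```

The resulting per-descriptor codes would then be pooled over an image before any dimension reduction and classification.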
2017-01-01
Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044013
A qualitative approach for recovering relative depths in dynamic scenes
NASA Technical Reports Server (NTRS)
Haynes, S. M.; Jain, R.
1987-01-01
This approach to dynamic scene analysis is a qualitative one. It computes relative depths using very general rules. The depths calculated are qualitative in the sense that the only information obtained is which object is in front of which others. The motion is qualitative in the sense that the only required motion data is whether objects are moving toward or away from the camera. Reasoning, which takes into account the temporal character of the data and the scene, is qualitative. This approach to dynamic scene analysis can tolerate imprecise data because in dynamic scenes the data are redundant.
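The qualitative "which object is in front of which" reasoning can be illustrated with a toy partial-order structure (this is an illustration of the general idea, not the authors' algorithm): occlusion observations accumulate as a directed graph, and relative depth queries are answered by graph reachability.

```python
from collections import defaultdict

class QualitativeDepth:
    """Accumulate 'A occludes B' observations over time and answer
    which of two objects is nearer via the in-front-of graph."""
    def __init__(self):
        self.in_front = defaultdict(set)   # object -> objects it is in front of

    def observe_occlusion(self, front, behind):
        self.in_front[front].add(behind)

    def is_nearer(self, a, b):
        # a is nearer than b if a directed path a -> b exists (transitivity).
        stack, seen = [a], set()
        while stack:
            cur = stack.pop()
            if cur == b:
                return True
            if cur in seen:
                continue
            seen.add(cur)
            stack.extend(self.in_front[cur])
        return False

scene = QualitativeDepth()
scene.observe_occlusion("car", "tree")     # the car occludes the tree
scene.observe_occlusion("tree", "house")   # the tree occludes the house
```

Because only ordinal relations are stored, imprecise or redundant frame-to-frame data cannot corrupt the conclusion, which mirrors the robustness argument in the abstract.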
AgRISTARS. Supporting research: Algorithms for scene modelling
NASA Technical Reports Server (NTRS)
Rassbach, M. E. (Principal Investigator)
1982-01-01
The requirements for a comprehensive analysis of LANDSAT or other visual data scenes are defined. The development of a general model of a scene and a computer algorithm for finding the particular model for a given scene is discussed. The modelling system includes a boundary analysis subsystem, which detects all the boundaries and lines in the image and builds a boundary graph; a continuous variation analysis subsystem, which finds gradual variations not well approximated by a boundary structure; and a miscellaneous features analysis, which includes texture, line parallelism, etc. The noise reduction capabilities of this method and its use in image rectification and registration are discussed.
A Novel Method to Increase LinLog CMOS Sensors’ Performance in High Dynamic Range Scenarios
Martínez-Sánchez, Antonio; Fernández, Carlos; Navarro, Pedro J.; Iborra, Andrés
2011-01-01
Images from high dynamic range (HDR) scenes must be obtained with minimum loss of information. For this purpose it is necessary to take full advantage of the quantification levels provided by the CCD/CMOS image sensor. LinLog CMOS sensors satisfy the above demand by offering an adjustable response curve that combines linear and logarithmic responses. This paper presents a novel method to quickly adjust the parameters that control the response curve of a LinLog CMOS image sensor. We propose to use an Adaptive Proportional-Integral-Derivative controller to adjust the exposure time of the sensor, together with control algorithms based on the saturation level and the entropy of the images. With this method the sensor’s maximum dynamic range (120 dB) can be used to acquire good quality images from HDR scenes with fast, automatic adaptation to scene conditions. Adaptation to a new scene is rapid, with a sensor response adjustment of less than eight frames when working in real time video mode. At least 67% of the scene entropy can be retained with this method. PMID:22164083
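A saturation-driven PID loop of the kind described can be sketched as follows (gains, the target saturation level and the multiplicative update are assumptions for illustration, not the published controller):

```python
class PIDExposure:
    """Steer sensor exposure time so the fraction of saturated
    pixels settles on a small target level."""
    def __init__(self, kp=0.8, ki=0.2, kd=0.05, target_sat=0.02):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target_sat
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, exposure_us, saturated_fraction, dt=1.0):
        err = self.target - saturated_fraction      # > 0 means image too dark
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        correction = self.kp * err + self.ki * self.integral + self.kd * deriv
        # Multiplicative correction keeps the exposure time positive.
        return max(1.0, exposure_us * (1.0 + correction))

ctl = PIDExposure()
next_exposure = ctl.update(exposure_us=1000.0, saturated_fraction=0.5)
```

Run once per frame, such a loop shortens exposure when too many pixels clip and lengthens it when the image is dark, which is consistent with the few-frame adaptation the abstract reports.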
Research on 3D virtual campus scene modeling based on 3ds Max and VRML
NASA Astrophysics Data System (ADS)
Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue
2015-12-01
With the rapid development of modern technology, digital information management and virtual reality simulation have become research hotspots. A 3D virtual campus model can not only depict real-world objects naturally and vividly, but also extend the campus across the dimensions of time and space, combining the school environment with its information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special land parcels and other objects. Dynamic interactive functions are then realized by programming the 3ds Max object models in VRML. This research focuses on virtual campus scene modeling technology and VRML scene design, including real-time processing and optimization strategies in the scene design process, which preserve texture map image quality while improving the running speed of texture mapping. Based on the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.
A Rapid Auto-Indexing Technology for Designing Readable E-Learning Content
ERIC Educational Resources Information Center
Yu, Pao-Ta; Liao, Yuan-Hsun; Su, Ming-Hsiang; Cheng, Po-Jen; Pai, Chun-Hsuan
2012-01-01
A rapid scene indexing method is proposed to improve retrieval performance for students accessing instructional videos. This indexing method is applied to anchor suitable indices to the instructional video so that students can obtain several small lesson units to gain learning mastery. The method also regulates online course progress. These…
Edge co-occurrences can account for rapid categorization of natural versus animal images
NASA Astrophysics Data System (ADS)
Perrinet, Laurent U.; Bednar, James A.
2015-06-01
Making a judgment about the semantic category of a visual scene, such as whether it contains an animal, is typically assumed to involve high-level associative brain areas. Previous explanations require progressively analyzing the scene hierarchically at increasing levels of abstraction, from edge extraction to mid-level object recognition and then object categorization. Here we show that the statistics of edge co-occurrences alone are sufficient to perform a rough yet robust (translation, scale, and rotation invariant) scene categorization. We first extracted the edges from images using a scale-space analysis coupled with a sparse coding algorithm. We then computed the “association field” for different categories (natural, man-made, or containing an animal) by computing the statistics of edge co-occurrences. These differed strongly, with animal images having more curved configurations. We show that this geometry alone is sufficient for categorization, and that the pattern of errors made by humans is consistent with this procedure. Because these statistics could be measured as early as the primary visual cortex, the results challenge widely held assumptions about the flow of computations in the visual system. The results also suggest new algorithms for image classification and signal processing that exploit correlations between low-level structure and the underlying semantic category.
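The second-order edge statistic at the heart of this approach can be sketched as a histogram of relative orientations between nearby edge pairs (bin count, neighbourhood radius and the (x, y, theta) edge format are illustrative assumptions, not the authors' exact "association field" computation):

```python
import numpy as np

def cooccurrence_histogram(edges, n_bins=8, max_dist=20.0):
    """Histogram the relative orientation between nearby edge pairs.
    edges: (N, 3) array of (x, y, theta) extracted edge elements."""
    edges = np.asarray(edges, dtype=float)
    hist = np.zeros(n_bins)
    for i in range(len(edges)):
        dx = edges[:, 0] - edges[i, 0]
        dy = edges[:, 1] - edges[i, 1]
        near = (np.hypot(dx, dy) < max_dist) & (np.arange(len(edges)) != i)
        # Fold the relative orientation into [0, pi): a curvature statistic.
        dtheta = np.mod(edges[near, 2] - edges[i, 2], np.pi)
        hist += np.histogram(dtheta, bins=n_bins, range=(0, np.pi))[0]
    total = hist.sum()
    return hist / total if total else hist

# Three collinear horizontal edge elements: all co-occurrences are straight.
h = cooccurrence_histogram([(0, 0, 0.0), (5, 0, 0.0), (10, 0, 0.0)])
```

Under the paper's finding, animal images would put more histogram mass in the curved (larger relative-angle) bins than man-made scenes, so comparing such histograms suffices for rough categorization.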
A bottom-up model of spatial attention predicts human error patterns in rapid scene recognition.
Einhäuser, Wolfgang; Mundhenk, T Nathan; Baldi, Pierre; Koch, Christof; Itti, Laurent
2007-07-20
Humans demonstrate a peculiar ability to detect complex targets in rapidly presented natural scenes. Recent studies suggest that (nearly) no focal attention is required for overall performance in such tasks. Little is known, however, of how detection performance varies from trial to trial and which stages in the processing hierarchy limit performance: bottom-up visual processing (attentional selection and/or recognition) or top-down factors (e.g., decision-making, memory, or alertness fluctuations)? To investigate the relative contribution of these factors, eight human observers performed an animal detection task in natural scenes presented at 20 Hz. Trial-by-trial performance was highly consistent across observers, far exceeding the prediction of independent errors. This consistency demonstrates that performance is not primarily limited by idiosyncratic factors but by visual processing. Two statistical stimulus properties, contrast variation in the target image and the information-theoretical measure of "surprise" in adjacent images, predict performance on a trial-by-trial basis. These measures are tightly related to spatial attention, demonstrating that spatial attention and rapid target detection share common mechanisms. To isolate the causal contribution of the surprise measure, eight additional observers performed the animal detection task in sequences that were reordered versions of those all subjects had correctly recognized in the first experiment. Reordering increased surprise before and/or after the target while keeping the target and distractors themselves unchanged. Surprise enhancement impaired target detection in all observers. Consequently, and contrary to several previously published findings, our results demonstrate that attentional limitations, rather than target recognition alone, affect the detection of targets in rapidly presented visual sequences.
Improved disparity map analysis through the fusion of monocular image segmentations
NASA Technical Reports Server (NTRS)
Perlant, Frederic P.; Mckeown, David M.
1991-01-01
The focus is to examine how estimates of three dimensional scene structure, as encoded in a scene disparity map, can be improved by the analysis of the original monocular imagery. The utilization of surface illumination information is provided by the segmentation of the monocular image into fine surface patches of nearly homogeneous intensity to remove mismatches generated during stereo matching. These patches are used to guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely with physical surfaces in the scene. Such a technique is quite independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented on a complex urban scene containing various man-made and natural features. This scene contains a variety of problems including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. The improvements are demonstrated due to monocular fusion with a set of different region-based image segmentations. The generality of this approach to stereo analysis and its utility in the development of general three dimensional scene interpretation systems are also discussed.
Visible-Infrared Hyperspectral Image Projector
NASA Technical Reports Server (NTRS)
Bolcar, Matthew
2013-01-01
The VisIR HIP generates spatially-spectrally complex scenes. The generated scenes simulate real-world targets viewed by various remote sensing instruments. The VisIR HIP consists of two subsystems: a spectral engine and a spatial engine. The spectral engine generates spectrally complex uniform illumination that spans the wavelength range between 380 nm and 1,600 nm. The spatial engine generates two-dimensional gray-scale scenes. When combined, the two engines are capable of producing two-dimensional scenes with a unique spectrum at each pixel. The VisIR HIP can be used to calibrate any spectrally sensitive remote-sensing instrument. Tests were conducted on the Wide-field Imaging Interferometer Testbed at NASA's Goddard Space Flight Center. The device is a variation of the calibrated hyperspectral image projector developed by the National Institute of Standards and Technology in Gaithersburg, MD. It uses Gooch & Housego Visible and Infrared OL490 Agile Light Sources to generate arbitrary spectra. The two light sources are coupled to a digital light processing (DLP™) digital mirror device (DMD) that serves as the spatial engine. Scenes are displayed on the DMD synchronously with the desired spectrum. Scene/spectrum combinations are displayed in rapid succession, over time intervals that are short compared to the integration time of the system under test.
Manipulating the content of dynamic natural scenes to characterize response in human MT/MST.
Durant, Szonya; Wall, Matthew B; Zanker, Johannes M
2011-09-09
Optic flow is one of the most important sources of information for enabling human navigation through the world. A striking finding from single-cell studies in monkeys is the rapid saturation of response of MT/MST areas with the density of optic flow type motion information. These results are reflected psychophysically in human perception in the saturation of motion aftereffects. We began by comparing responses to natural optic flow scenes in human visual brain areas to responses to the same scenes with inverted contrast (photo negative). This changes scene familiarity while preserving local motion signals. This manipulation had no effect; however, the response was only correlated with the density of local motion (calculated by a motion correlation model) in V1, not in MT/MST. To further investigate this, we manipulated the visible proportion of natural dynamic scenes and found that areas MT and MST did not increase in response over a 16-fold increase in the amount of information presented, i.e., response had saturated. This makes sense in light of the sparseness of motion information in natural scenes, suggesting that the human brain is well adapted to exploit a small amount of dynamic signal and extract information important for survival.
Semantic guidance of eye movements in real-world scenes
Hwang, Alex D.; Wang, Hsueh-Cheng; Pomplun, Marc
2011-01-01
The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying Latent Semantic Analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects’ gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects’ eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. PMID:21426914
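The LSA step behind the semantic saliency maps can be sketched with a truncated SVD over label-by-scene counts (the labels, counts and latent dimensionality below are hypothetical, not drawn from the LabelMe data):

```python
import numpy as np

def lsa_vectors(term_doc, k=2):
    """Factor a term-by-document count matrix with a truncated SVD
    and return a k-D latent vector per term (object label)."""
    U, S, _ = np.linalg.svd(term_doc, full_matrices=False)
    return U[:, :k] * S[:k]            # each row: one label in LSA space

def semantic_saliency(vecs, target_idx):
    """Cosine similarity of every object label to the target label:
    one saliency value per scene object, as in a semantic saliency map."""
    norms = np.linalg.norm(vecs, axis=1)
    return vecs @ vecs[target_idx] / (norms * norms[target_idx] + 1e-12)

# Hypothetical counts of 4 labels (cup, plate, car, wheel) across 5 scenes.
counts = np.array([[3, 2, 0, 0, 1],
                   [2, 3, 0, 1, 0],
                   [0, 0, 4, 2, 3],
                   [0, 1, 3, 2, 2]], dtype=float)
vecs = lsa_vectors(counts)
sal = semantic_saliency(vecs, target_idx=0)   # saliency relative to "cup"
```

Each object in a scene then inherits the similarity score of its label to the fixated object or search target, yielding the map that is tested against gaze transitions.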
Ultra-Rapid Categorization of Meaningful Real-Life Scenes in Adults with and without ASD
ERIC Educational Resources Information Center
Vanmarcke, Steven; Van Der Hallen, Ruth; Evers, Kris; Noens, Ilse; Steyaert, Jean; Wagemans, Johan
2016-01-01
In comparison to typically developing (TD) individuals, people with autism spectrum disorder (ASD) appear to be worse in the fast extraction of the global meaning of a situation or picture. Ultra-rapid categorization [paradigm developed by Thorpe et al. ("Nature" 381:520-522, 1996)] involves such global information processing. We…
The singular nature of auditory and visual scene analysis in autism
Lin, I.-Fan; Shirama, Aya; Kato, Nobumasa
2017-01-01
Individuals with autism spectrum disorder often have difficulty acquiring relevant auditory and visual information in daily environments, despite not being diagnosed as hearing impaired or having low vision. Recent psychophysical and neurophysiological studies have shown that autistic individuals have highly specific individual differences at various levels of information processing, including feature extraction, automatic grouping and top-down modulation in auditory and visual scene analysis. Comparison of the characteristics of scene analysis between auditory and visual modalities reveals some essential commonalities, which could provide clues about the underlying neural mechanisms. Further progress in this line of research may suggest effective methods for diagnosing and supporting autistic individuals. This article is part of the themed issue ‘Auditory and visual scene analysis'. PMID:28044025
Automatic event recognition and anomaly detection with attribute grammar by learning scene semantics
NASA Astrophysics Data System (ADS)
Qi, Lin; Yao, Zhenyu; Li, Li; Dong, Junyu
2007-11-01
In this paper we present a novel framework for automatic event recognition and abnormal behavior detection with attribute grammars by learning scene semantics. This framework combines learning scene semantics through trajectory analysis with constructing an attribute grammar-based event representation. The scene and event information is learned automatically, and abnormal behaviors that disobey scene semantics or event grammar rules are detected. By this method, an approach to understanding video scenes is achieved. Furthermore, with this prior knowledge, the accuracy of abnormal event detection is increased.
Brockmann, C.E.; Carter, William D.
1976-01-01
ERTS-1 digital data in the form of computer compatible tapes provide the geoscientist with an unusual opportunity to test the maximum flexibility of the satellite system using interactive computers, such as the General Electric Image 100 System. Approximately 9 hours of computer and operator time were used to analyze the Lake Titicaca image, 1443-14073, acquired 9 October 1973. The total area of the lake and associated wetlands was calculated and found to be within 3 percent of previous measurements. The area was subdivided by reflectance characteristics employing cluster analysis of all 4 bands and later compared with density values of band 4. Reflectance variations may be attributed to surface roughness, water depth and bottom characteristics, turbidity, and floating matter. Wetland marsh vegetation, vegetation related to ground-water effluents, natural grasses, and farm crops were separated by cluster analysis. Sandstone, limestone, sand dunes, and several volcanic rock types were similarly separated and displayed by assigned colors and extended through the entire scene. Waste dumps of the Matilde Zinc Mine and smaller mine workings were tentatively identified by signature analysis. Histograms of reflectance values and map printouts were automatically obtained as a record of each of the principal themes. These themes were also stored on a work tape for later display and photographic record as well as to serve in training. The Image 100 System is rapid, extremely flexible and very useful to the investigator in identifying subtle features that may not be noticed by conventional image analysis. The entire scene, which covers 34,225 km², was analyzed at a scale of 1:600,000, and portions at 1:98,000 and 1:25,000, during a 9-hour period at a rental cost of $250 per hour. Costs to the user can be reduced by restricting its uses to specific areas, objectives, and procedures, rather than undertaking a complete analysis of a total scene.
Differential Visual Processing of Animal Images, with and without Conscious Awareness
Zhu, Weina; Drewes, Jan; Peatfield, Nicholas A.; Melcher, David
2016-01-01
The human visual system can quickly and efficiently extract categorical information from a complex natural scene. The rapid detection of animals in a scene is one compelling example of this phenomenon, and it suggests the automatic processing of at least some types of categories with little or no attentional requirements (Li et al., 2002, 2005). The aim of this study is to investigate whether the remarkable capability to categorize complex natural scenes exist in the absence of awareness, based on recent reports that “invisible” stimuli, which do not reach conscious awareness, can still be processed by the human visual system (Pasley et al., 2004; Williams et al., 2004; Fang and He, 2005; Jiang et al., 2006, 2007; Kaunitz et al., 2011a). In two experiments, we recorded event-related potentials (ERPs) in response to animal and non-animal/vehicle stimuli in both aware and unaware conditions in a continuous flash suppression (CFS) paradigm. Our results indicate that even in the “unseen” condition, the brain responds differently to animal and non-animal/vehicle images, consistent with rapid activation of animal-selective feature detectors prior to, or outside of, suppression by the CFS mask. PMID:27790106
Real-time full-motion color Flash lidar for target detection and identification
NASA Astrophysics Data System (ADS)
Nelson, Roy; Coppock, Eric; Craig, Rex; Craner, Jeremy; Nicks, Dennis; von Niederhausern, Kurt
2015-05-01
Greatly improved understanding of areas and objects of interest can be gained when real time, full-motion Flash LiDAR is fused with inertial navigation data and multi-spectral context imagery. On its own, full-motion Flash LiDAR provides the opportunity to exploit the z dimension for improved intelligence vs. 2-D full-motion video (FMV). The intelligence value of this data is enhanced when it is combined with inertial navigation data to produce an extended, georegistered data set suitable for a variety of analysis. Further, when fused with multispectral context imagery the typical point cloud now becomes a rich 3-D scene which is intuitively obvious to the user and allows rapid cognitive analysis with little or no training. Ball Aerospace has developed and demonstrated a real-time, full-motion LIDAR system that fuses context imagery (VIS to MWIR demonstrated) and inertial navigation data in real time, and can stream these information-rich geolocated/fused 3-D scenes from an airborne platform. In addition, since the higher-resolution context camera is boresighted and frame synchronized to the LiDAR camera and the LiDAR camera is an array sensor, techniques have been developed to rapidly interpolate the LIDAR pixel values creating a point cloud that has the same resolution as the context camera, effectively creating a high definition (HD) LiDAR image. This paper presents a design overview of the Ball TotalSight™ LIDAR system along with typical results over urban and rural areas collected from both rotary and fixed-wing aircraft. We conclude with a discussion of future work.
Optical system design of dynamic infrared scene projector based on DMD
NASA Astrophysics Data System (ADS)
Lu, Jing; Fu, Yuegang; Liu, Zhiying; Li, Yandong
2014-09-01
Infrared scene simulators are now widely used to reproduce infrared scenes realistically in the laboratory, which can greatly reduce the research cost of electro-optical systems and offer an economical experiment environment. With the advantages of large dynamic range and high spatial resolution, dynamic infrared projection technology based on the digital micro-mirror device (DMD), which is the key part of the infrared scene simulator, has been rapidly developed and widely applied in recent years. In this paper, the principle of the digital micro-mirror device is briefly introduced and the characteristics of the DLP (Digital Light Processing) system based on the DMD are analyzed. A projection system working at 8-12 μm with a 1024×768-pixel DMD is designed in ZEMAX. The MTF curve is close to the diffraction-limited curve and the radius of the spot diagram is smaller than that of the Airy disk. The result indicates that the system meets the design requirements.
Testing Instrument for Flight-Simulator Displays
NASA Technical Reports Server (NTRS)
Haines, Richard F.
1987-01-01
Displays for flight-training simulators rapidly aligned with aid of integrated optical instrument. Calibrations and tests such as aligning boresight of display with respect to user's eyes, checking and adjusting display horizon, checking image sharpness, measuring illuminance of displayed scenes, and measuring distance of optical focus of scene performed with single unit. New instrument combines all measurement devices in single, compact, integrated unit. Requires just one initial setup. Employs laser and produces narrow, collimated beam for greater measurement accuracy. Uses only one moving part, double right prism, to position laser beam.
An application of cluster detection to scene analysis
NASA Technical Reports Server (NTRS)
Rosenfeld, A. H.; Lee, Y. H.
1971-01-01
Certain arrangements of local features in a scene tend to group together and to be seen as units. It is suggested that in some instances, this phenomenon might be interpretable as a process of cluster detection in a graph-structured space derived from the scene. This idea is illustrated using a class of scenes that contain only horizontal and vertical line segments.
An integratable microfluidic cartridge for forensic swab samples lysis.
Yang, Jianing; Brooks, Carla; Estes, Matthew D; Hurth, Cedric M; Zenhausern, Frederic
2014-01-01
Fully automated rapid forensic DNA analysis requires integrating several multistep processes onto a single microfluidic platform, including substrate lysis, extraction of DNA from the released lysate solution, multiplexed PCR amplification of STR loci, separation of PCR products by capillary electrophoresis, and analysis for allelic peak calling. Over the past several years, most of the rapid DNA analysis systems developed started with the reference swab sample lysate and involved an off-chip lysis of collected substrates. As a result of advancement in technology and chemistry, addition of a microfluidic module for swab sample lysis has been achieved in a few of the rapid DNA analysis systems. However, recent reports on integrated rapid DNA analysis systems with swab-in and answer-out capability lack any quantitative and qualitative characterization of the swab-in sample lysis module, which is important for downstream forensic sample processing. Maximal collection and subsequent recovery of the biological material from the crime scene is one of the first and critical steps in forensic DNA technology. Herein we present the design, fabrication and characterization of an integratable swab lysis cartridge module and the test results obtained from different types of commonly used forensic swab samples, including buccal, saliva, and blood swab samples, demonstrating the compatibility with different downstream DNA extraction chemistries. This swab lysis cartridge module is easy to operate, compatible with both forensic and microfluidic requirements, and ready to be integrated with our existing automated rapid forensic DNA analysis system. Following the characterization of the swab lysis module, an integrated run from buccal swab sample-in to the microchip CE electropherogram-out was demonstrated on the integrated prototype instrument. 
Therefore, in this study, we demonstrate that this swab lysis cartridge module is: (1) functionally, comparable with routine benchtop lysis, (2) compatible with various types of swab samples and chemistries, and (3) integratable to achieve a micro total analysis system (μTAS) for rapid DNA analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Using articulated scene models for dynamic 3d scene analysis in vista spaces
NASA Astrophysics Data System (ADS)
Beuter, Niklas; Swadzba, Agnes; Kummert, Franz; Wachsmuth, Sven
2010-09-01
In this paper we describe an efficient but detailed new approach to analyzing complex dynamic scenes directly in 3D. The resulting information is important for mobile robots solving tasks in the area of household robotics. In our work, a mobile robot builds an articulated scene model by observing the environment in its visual field, or rather in the so-called vista space. The articulated scene model consists of essential knowledge about the static background, about autonomously moving entities like humans or robots and finally, in contrast to existing approaches, about articulated parts. These parts describe movable objects like chairs, doors or other tangible entities which could be moved by an agent. The combination of the static scene, the self-moving entities and the movable objects in one articulated scene model enhances the calculation of each single part: the reconstruction process for the static scene benefits from removal of the dynamic parts and, in turn, the moving parts can be extracted more easily through knowledge about the background. In our experiments we show that the system simultaneously delivers an accurate static background model, moving persons and movable objects. This information in the articulated scene model enables a mobile robot to detect and keep track of interaction partners, to navigate safely through the environment and, finally, to strengthen interaction with the user through knowledge about the 3D articulated objects and 3D scene analysis.
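The static/dynamic separation described above can be caricatured with a temporal-median sketch; the median-over-frames background estimate and the 0.2 m threshold are illustrative assumptions, not the authors' reconstruction pipeline.

```python
import numpy as np

# Illustrative sketch: estimate the static background as the per-pixel
# temporal median over a stack of depth frames; pixels that deviate from
# it in the latest frame are flagged as moving/movable parts.
rng = np.random.default_rng(4)

depth = np.full((10, 16, 16), 3.0)           # 10 frames of a wall 3 m away
depth += rng.normal(0, 0.01, depth.shape)    # sensor noise
depth[7:, 4:8, 4:8] = 1.0                    # a chair moved into view at frame 7

background = np.median(depth, axis=0)        # robust static-scene estimate
moving = np.abs(depth[-1] - background) > 0.2

print(moving[5, 5], moving[0, 0])  # True False  (chair region vs. wall)
```

The median is robust here because the chair occupies fewer than half of the frames at those pixels, so the background estimate stays at the wall depth.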
New light field camera based on physical based rendering tracing
NASA Astrophysics Data System (ADS)
Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung
2014-03-01
Even though light field technology was first invented more than 50 years ago, it did not gain popularity due to the limitations imposed by the computation technology of the time. With the rapid advancement of computer technology over the last decade, those limitations have been lifted and light field technology has quickly returned to the spotlight of the research stage. In this paper, PBRT (Physical Based Rendering Tracing) was introduced to overcome the limitations of using the traditional optical simulation approach to study light field camera technology. More specifically, the traditional optical simulation approach can present light energy distribution but typically lacks the capability to present pictures of realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation for creating realistic photos. Furthermore, we integrated optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes. In other words, our approach provided us with a way to link the virtual scene with the real measurement results. Several images developed with the above-mentioned approaches were analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It will be shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensors. The detailed operational constraints, performance metrics, computation resources needed, etc. associated with this newly developed light field camera technique are presented in detail.
Moors, Pieter; Boelens, David; van Overwalle, Jaana; Wagemans, Johan
2016-07-01
A recent study showed that scenes with an object-background relationship that is semantically incongruent break interocular suppression faster than scenes with a semantically congruent relationship. These results implied that semantic relations between the objects and the background of a scene could be extracted in the absence of visual awareness of the stimulus. In the current study, we assessed the replicability of this finding and tried to rule out an alternative explanation dependent on low-level differences between the stimuli. Furthermore, we used a Bayesian analysis to quantify the evidence in favor of the presence or absence of a scene-congruency effect. Across three experiments, we found no convincing evidence for a scene-congruency effect or a modulation of scene congruency by scene inversion. These findings question the generalizability of previous observations and cast doubt on whether genuine semantic processing of object-background relationships in scenes can manifest during interocular suppression. © The Author(s) 2016.
Adaptive foveated single-pixel imaging with dynamic supersampling
Phillips, David B.; Sun, Ming-Jie; Taylor, Jonathan M.; Edgar, Matthew P.; Barnett, Stephen M.; Gibson, Graham M.; Padgett, Miles J.
2017-01-01
In contrast to conventional multipixel cameras, single-pixel cameras capture images using a single detector that measures the correlations between the scene and a set of patterns. However, these systems typically exhibit low frame rates, because to fully sample a scene in this way requires at least the same number of correlation measurements as the number of pixels in the reconstructed image. To mitigate this, a range of compressive sensing techniques have been developed which use a priori knowledge to reconstruct images from an undersampled measurement set. Here, we take a different approach and adopt a strategy inspired by the foveated vision found in the animal kingdom—a framework that exploits the spatiotemporal redundancy of many dynamic scenes. In our system, a high-resolution foveal region tracks motion within the scene, yet unlike a simple zoom, every frame delivers new spatial information from across the entire field of view. This strategy rapidly records the detail of quickly changing features in the scene while simultaneously accumulating detail of more slowly evolving regions over several consecutive frames. This architecture provides video streams in which both the resolution and exposure time spatially vary and adapt dynamically in response to the evolution of the scene. The degree of local frame rate enhancement is scene-dependent, but here, we demonstrate a factor of 4, thereby helping to mitigate one of the main drawbacks of single-pixel imaging techniques. The methods described here complement existing compressive sensing approaches and may be applied to enhance computational imagers that rely on sequential correlation measurements. PMID:28439538
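The measurement model described above (one correlation per displayed pattern, with as many measurements as pixels for full sampling) can be illustrated with a toy fully sampled reconstruction; the Hadamard pattern set and the 4×4 scene below are illustrative assumptions, not the paper's hardware.

```python
import numpy as np

# Build a complete set of 16 Hadamard sampling patterns for a 4x4-pixel scene.
def hadamard(n):
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])  # Sylvester construction
    return H

N = 16                             # number of pixels (4x4 image)
H = hadamard(N)                    # rows = +/-1 display patterns
scene = np.arange(N, dtype=float)  # toy scene, flattened

# Single-pixel measurements: one correlation (dot product) per pattern.
measurements = H @ scene

# Reconstruction: Sylvester-Hadamard matrices satisfy H @ H.T = N * I.
recovered = (H.T @ measurements) / N
assert np.allclose(recovered, scene)
```

Fully sampling the scene costs N measurements; the foveated strategy in the abstract spends that budget unevenly, refreshing fast-changing regions at high resolution while slowly accumulating detail elsewhere.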
Concave Surround Optics for Rapid Multi-View Imaging
2006-11-01
Large camera arrays offer flexibility but are typically expensive and require significant effort to calibrate temporally, geometrically and chromatically. In this paper we present an optical system capable of rapidly moving the viewpoint around a scene, and thus amenable to capturing dynamic events while avoiding the need to construct and calibrate an array of cameras. We demonstrate the system with a high...
NASA Astrophysics Data System (ADS)
Wang, DeLiang; Terman, David
1995-01-01
A novel class of locally excitatory, globally inhibitory oscillator networks (LEGION) is proposed and investigated analytically and by computer simulation. The model of each oscillator corresponds to a standard relaxation oscillator with two time scales. The network exhibits a mechanism of selective gating, whereby an oscillator jumping up to its active phase rapidly recruits the oscillators stimulated by the same pattern, while preventing other oscillators from jumping up. We show analytically that with the selective gating mechanism the network rapidly achieves both synchronization within blocks of oscillators that are stimulated by connected regions and desynchronization between different blocks. Computer simulations demonstrate LEGION's promising ability for segmenting multiple input patterns in real time. This model lays a physical foundation for the oscillatory correlation theory of feature binding, and may provide an effective computational framework for scene segmentation and figure/ground segregation.
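The two-timescale relaxation oscillator that serves as LEGION's building block can be sketched numerically. The equations below are a common form of the Terman-Wang oscillator (fast excitatory variable x, slow recovery variable y); the parameter values and Euler integration are illustrative assumptions.

```python
import numpy as np

# Euler integration of a single stimulated relaxation oscillator.
# I is the external input; eps sets the slow timescale.
def simulate(I=0.8, eps=0.02, gamma=6.0, beta=0.1, dt=0.01, steps=20000):
    x, y = -2.0, 0.0
    xs = []
    for _ in range(steps):
        dx = 3 * x - x**3 + 2 - y + I                      # fast variable (cubic nullcline)
        dy = eps * (gamma * (1 + np.tanh(x / beta)) - y)   # slow recovery variable
        x += dt * dx
        y += dt * dy
        xs.append(x)
    return np.array(xs)

xs = simulate()
# A stimulated oscillator alternates between an active phase (x on the
# right branch of the cubic) and a silent phase (x on the left branch).
print(xs.max() > 1.0 and xs.min() < -1.0)
```

In the full network, excitatory coupling lets an oscillator that jumps to its active phase rapidly recruit oscillators driven by the same pattern, while a global inhibitor keeps different blocks desynchronized.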
Aquila, Isabella; Gratteri, Santo; Sacco, Matteo A; Ricci, Pietrantonio
2018-05-01
Forensic botany can provide useful information for pathologists, particularly in crime scene investigation. We report the case of a man who arrived at the hospital and died shortly afterward. The body showed widespread electrical lesions. The statements of his brother and wife about the incident aroused considerable suspicion among the investigators. A crime scene investigation was carried out, along with a botanical morphological survey of small vegetation found on the corpse. An autopsy was also performed. Botanical analysis identified samples of Xanthium spinosum, revealing that the crime scene had been falsified, although the location of the true crime scene initially remained a mystery. The botanical analysis, along with circumstantial data and autopsy findings, ultimately led to the discovery of the real crime scene and became crucial as part of the legal evidence regarding the falsity of the statements made to investigators. © 2017 American Academy of Forensic Sciences.
ERIC Educational Resources Information Center
Clark, Caroline T.; Blackburn, Mollie V.
2016-01-01
This study examines LGBT-inclusive and queering discourses in five recent award-winning LGBT-themed young adult books. The analysis brought scenes of violence and sex/love scenes to the fore. Violent scenes offered readers messages that LGBT people are either the victims of violence-fueled hatred and fear, or, in some cases, showed a gay person…
Kanda, Hideyuki; Okamura, Tomonori; Turin, Tanvir Chowdhury; Hayakawa, Takehito; Kadowaki, Takashi; Ueshima, Hirotsugu
2006-06-01
Japanese serial television dramas are becoming very popular overseas, particularly in other Asian countries. Exposure to smoking scenes in movies and television dramas has been known to trigger initiation of habitual smoking in young people. Smoking scenes in Japanese dramas may affect the smoking behavior of many young Asians. We examined smoking scenes and smoking-related items in serial television dramas targeting young audiences in Japan during the same season in two consecutive years. Fourteen television dramas targeting young audiences broadcast between July and September in 2001 and 2002 were analyzed. A total of 136 h 42 min of television programs were divided into unit scenes of 3 min (a total of 2734 unit scenes). All the unit scenes were reviewed for smoking scenes and smoking-related items. Of the 2734 3-min unit scenes, 205 (7.5%) were actual smoking scenes and 387 (14.2%) depicted smoking environments with the presence of smoking-related items, such as ash trays. In 185 unit scenes (90.2% of total smoking scenes), actors were shown smoking. Actresses were less frequently shown smoking (9.8% of total smoking scenes). Smoking characters in dramas were in the 20-49 age group in 193 unit scenes (94.1% of total smoking scenes). In 96 unit scenes (46.8% of total smoking scenes), at least one non-smoker was present in the smoking scenes. The smoking locations were mainly indoors, including offices, restaurants and homes (122 unit scenes, 59.6%). The most common smoking-related items shown were ash trays (in 45.5% of smoking-item-related scenes) and cigarettes (in 30.2% of smoking-item-related scenes). Only 3 unit scenes (0.1% of all scenes) promoted smoking prohibition. This was a descriptive study to examine the nature of smoking scenes observed in Japanese television dramas from a public health perspective.
Advanced Weapon System (AWS) Sensor Prediction Techniques Study. Volume II
1981-09-01
models are suggested. Courant Computer Science Report #9, December 1975, "Scene Analysis: A Survey," Carl Weiman, Courant Institute of... some crucial differences. In the psychological model of mechanical vision, the aim of scene analysis is to perceive and understand 2-D images of 3-D scenes. The meaning of this analogy can be clarified using a rudimentary informational model; this yields a natural hierarchy from physical
Feature diagnosticity and task context shape activity in human scene-selective cortex.
Lowe, Matthew X; Gallivan, Jason P; Ferber, Susanne; Cant, Jonathan S
2016-01-15
Scenes are constructed from multiple visual features, yet previous research investigating scene processing has often focused on the contributions of single features in isolation. In the real world, features rarely exist independently of one another and likely converge to inform scene identity in unique ways. Here, we utilize fMRI and pattern classification techniques to examine the interactions between task context (i.e., attend to diagnostic global scene features; texture or layout) and high-level scene attributes (content and spatial boundary) to test the novel hypothesis that scene-selective cortex represents multiple visual features, the importance of which varies according to their diagnostic relevance across scene categories and task demands. Our results show for the first time that scene representations are driven by interactions between multiple visual features and high-level scene attributes. Specifically, univariate analysis of scene-selective cortex revealed that task context and feature diagnosticity shape activity differentially across scene categories. Examination using multivariate decoding methods revealed results consistent with univariate findings, but also evidence for an interaction between high-level scene attributes and diagnostic visual features within scene categories. Critically, these findings suggest visual feature representations are not distributed uniformly across scene categories but are shaped by task context and feature diagnosticity. Thus, we propose that scene-selective cortex constructs a flexible representation of the environment by integrating multiple diagnostically relevant visual features, the nature of which varies according to the particular scene being perceived and the goals of the observer. Copyright © 2015 Elsevier Inc. All rights reserved.
Towards a neural basis of music perception.
Koelsch, Stefan; Siebel, Walter A
2005-12-01
Music perception involves complex brain functions underlying acoustic analysis, auditory memory, auditory scene analysis, and processing of musical syntax and semantics. Moreover, music perception potentially affects emotion, influences the autonomic nervous system, the hormonal and immune systems, and activates (pre)motor representations. During the past few years, research activities on different aspects of music processing and their neural correlates have rapidly progressed. This article provides an overview of recent developments and a framework for the perceptual side of music processing. This framework lays out a model of the cognitive modules involved in music perception, and incorporates information about the time course of activity of some of these modules, as well as research findings about where in the brain these modules might be located.
Being There: (Re)Making the Assessment Scene
ERIC Educational Resources Information Center
Gallagher, Chris W.
2011-01-01
I use Burkean analysis to show how neoliberalism undermines faculty assessment expertise and underwrites testing industry expertise in the current assessment scene. Contending that we cannot extricate ourselves from our limited agency in this scene until we abandon the familiar "stakeholder" theory of power, I propose a rewriting of the…
Trainor, Laurel J.
2015-01-01
Whether music was an evolutionary adaptation that conferred survival advantages or a cultural creation has generated much debate. Consistent with an evolutionary hypothesis, music is unique to humans, emerges early in development and is universal across societies. However, the adaptive benefit of music is far from obvious. Music is highly flexible, generative and changes rapidly over time, consistent with a cultural creation hypothesis. In this paper, it is proposed that much of musical pitch and timing structure adapted to preexisting features of auditory processing that evolved for auditory scene analysis (ASA). Thus, music may have emerged initially as a cultural creation made possible by preexisting adaptations for ASA. However, some aspects of music, such as its emotional and social power, may have subsequently proved beneficial for survival and led to adaptations that enhanced musical behaviour. Ontogenetic and phylogenetic evidence is considered in this regard. In particular, enhanced auditory–motor pathways in humans that enable movement entrainment to music and consequent increases in social cohesion, and pathways enabling music to affect reward centres in the brain should be investigated as possible musical adaptations. It is concluded that the origins of music are complex and probably involved exaptation, cultural creation and evolutionary adaptation. PMID:25646512
NASA Astrophysics Data System (ADS)
Li, Jia; Tian, Yonghong; Gao, Wen
2008-01-01
In recent years, the amount of streaming video on the Web has grown rapidly. Retrieving these streaming videos often poses the challenge of indexing and analyzing the media in real time, because the streams must be treated as effectively infinite in length, thus precluding offline processing. Generally speaking, captions are important semantic clues for video indexing and retrieval. However, existing caption detection methods often have difficulty performing real-time detection on streaming video, and few of them address the differentiation of captions from scene text and scrolling text, even though these kinds of text play different roles in streaming video retrieval. To overcome these difficulties, this paper proposes a novel approach that uses inter-frame correlation analysis and wavelet-domain modeling for real-time caption detection in streaming video. In our approach, inter-frame correlation information is used to distinguish caption text from scene text and scrolling text. Moreover, wavelet-domain Generalized Gaussian Models (GGMs) are utilized to automatically remove non-text regions from each frame and keep only caption regions for further processing. Experimental results show that our approach offers real-time caption detection with high recall and a low false-alarm rate, and can effectively discern caption text from other text even at low resolutions.
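A minimal sketch of the inter-frame correlation cue described above (not the paper's full method): a static caption region is nearly identical across consecutive frames, so its normalized correlation stays near 1, while scrolling text shifts between frames and correlates poorly. The region sizes, noise level, and scroll offset are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Normalized (Pearson) correlation between two image regions.
def corr(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

caption = rng.random((8, 32))                    # region that stays fixed
frame1_caption = caption
frame2_caption = caption + rng.normal(0, 0.01, caption.shape)  # next frame + noise

scroll = rng.random((8, 32))
frame1_scroll = scroll
frame2_scroll = np.roll(scroll, 4, axis=1)       # text has moved sideways

print(corr(frame1_caption, frame2_caption))  # near 1.0 -> likely a caption
print(corr(frame1_scroll, frame2_scroll))    # much lower -> scrolling text
```

In a real detector this per-region statistic would be combined with the wavelet-domain GGM text/non-text filtering the abstract mentions.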
A review of visual perception mechanisms that regulate rapid adaptive camouflage in cuttlefish.
Chiao, Chuan-Chin; Chubb, Charles; Hanlon, Roger T
2015-09-01
We review recent research on the visual mechanisms of rapid adaptive camouflage in cuttlefish. These neurophysiologically complex marine invertebrates can camouflage themselves against almost any background, yet their ability to quickly (0.5-2 s) alter their body patterns on different visual backgrounds poses a vexing challenge: how to pick the correct body pattern amongst their repertoire. The ability of cuttlefish to change appropriately requires a visual system that can rapidly assess complex visual scenes and produce the motor responses (the neurally controlled body patterns) that achieve camouflage. Using specifically designed visual backgrounds and assessing the corresponding body patterns quantitatively, we and others have uncovered several aspects of scene variation that are important in regulating cuttlefish patterning responses. These include spatial scale of background pattern, background intensity, background contrast, object edge properties, object contrast polarity, object depth, and the presence of 3D objects. Moreover, arm postures and skin papillae are also regulated visually for additional aspects of concealment. By integrating these visual cues, cuttlefish are able to rapidly select appropriate body patterns for concealment throughout diverse natural environments. This sensorimotor approach of studying cuttlefish camouflage thus provides unique insights into the mechanisms of visual perception in an invertebrate image-forming eye.
NASA Technical Reports Server (NTRS)
Wrigley, R. C. (Principal Investigator)
1984-01-01
The Thematic Mapper scene of Sacramento, CA acquired during the TDRSS test was received in TIPS format. Quadrants for both scenes were tested for band-to-band registration using reimplemented block correlation techniques. Summary statistics for band-to-band registrations of TM band combinations for Quadrant 4 of the NE Arkansas scene in TIPS format are tabulated as well as those for Quadrant 1 of the Sacramento scene. The system MTF analysis for the San Francisco scene is completed. The thermal band did not have sufficient contrast for the targets used and was not analyzed.
Satellite image analysis using neural networks
NASA Technical Reports Server (NTRS)
Sheldon, Roger A.
1990-01-01
The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.
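The four-step flow described above (filtering/enhancement, feature extraction, training, classification) can be caricatured with toy data. The statistics-based features and nearest-centroid "classifier" below stand in for SIANN's actual feature extraction algorithms and neural network; all names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Steps 1-2: enhancement and feature extraction, reduced here to two
# simple image statistics (mean brightness and texture/std).
def features(img):
    return np.array([img.mean(), img.std()])

# Synthetic training scenes: dark "ocean-like" tiles vs. bright "land-like" tiles.
ocean = [rng.random((16, 16)) * 0.2 for _ in range(20)]
land = [0.5 + rng.random((16, 16)) * 0.5 for _ in range(20)]

# Step 3: "training", reduced to class centroids as a stand-in for
# configuring and training a neural network model.
c_ocean = np.mean([features(i) for i in ocean], axis=0)
c_land = np.mean([features(i) for i in land], axis=0)

# Step 4: classify new imagery to flag scenes of interest in an archive.
def classify(img):
    f = features(img)
    return "ocean" if np.linalg.norm(f - c_ocean) < np.linalg.norm(f - c_land) else "land"

print(classify(rng.random((16, 16)) * 0.2))  # -> ocean
```

The point of the sketch is the pipeline shape, not the model: once trained, the classifier can be swept over an archive to retrieve only images containing scenes of interest.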
LANDSAT-4 image data quality analysis for energy related applications. [nuclear power plant sites
NASA Technical Reports Server (NTRS)
Wukelic, G. E. (Principal Investigator)
1983-01-01
No usable LANDSAT 4 TM data were obtained for the Hanford site in the Columbia Plateau region, but TM simulator data for a Virginia Electric Company nuclear power plant were used to test image processing algorithms. Principal component analyses of this data set clearly indicated that thermal plumes in surface waters used for reactor cooling would be discernible. Image processing and analysis programs were successfully tested using the 7-band Arkansas test scene, and preliminary analysis of TM data for the Savannah River Plant shows that current interactive image enhancement, analysis and integration techniques can be effectively used for LANDSAT 4 data. Thermal band data appear adequate for gross estimates of thermal changes occurring near operating nuclear facilities, especially in surface water bodies being used for reactor cooling. Additional image processing software was written and tested which provides for more rapid and effective analysis of the 7-band TM data.
Scene analysis for a breadboard Mars robot functioning in an indoor environment
NASA Technical Reports Server (NTRS)
Levine, M. D.
1973-01-01
The problem dealt with is that of computer perception in an indoor laboratory environment containing rocks of various sizes. The sensory data processing is required for the NASA/JPL breadboard mobile robot, a test system for an adaptive, variably autonomous vehicle that will conduct scientific explorations on the surface of Mars. Scene analysis is discussed in terms of object segmentation followed by feature extraction, which results in a representation of the scene in the robot's world model.
Zelinsky, G J
2001-02-01
Search, memory, and strategy constraints on change detection were analyzed in terms of oculomotor variables. Observers viewed a repeating sequence of three displays (Scene 1-->Mask-->Scene 2-->Mask...) and indicated the presence-absence of a changing object between Scenes 1 and 2. Scenes depicted real-world objects arranged on a surface. Manipulations included set size (one, three, or nine items) and the orientation of the changing objects (similar or different). Eye movements increased with the number of potentially changing objects in the scene, with this set size effect suggesting a relationship between change detection and search. A preferential fixation analysis determined that memory constraints are better described by the operation comparing the pre- and postchange objects than as a capacity limitation, and a scanpath analysis revealed a change detection strategy relying on the peripheral encoding and comparison of display items. These findings support a signal-in-noise interpretation of change detection in which the signal varies with the similarity of the changing objects and the noise is determined by the distractor objects and scene background.
GeoPAT: A toolbox for pattern-based information retrieval from large geospatial databases
NASA Astrophysics Data System (ADS)
Jasiewicz, Jarosław; Netzel, Paweł; Stepinski, Tomasz
2015-07-01
Geospatial Pattern Analysis Toolbox (GeoPAT) is a collection of GRASS GIS modules for carrying out pattern-based geospatial analysis of images and other spatial datasets. The need for pattern-based analysis arises when images/rasters contain rich spatial information either because of their very high resolution or their very large spatial extent. Elementary units of pattern-based analysis are scenes - patches of surface consisting of a complex arrangement of individual pixels (patterns). GeoPAT modules implement popular GIS algorithms, such as query, overlay, and segmentation, to operate on the grid of scenes. To achieve these capabilities GeoPAT includes a library of scene signatures - compact numerical descriptors of patterns, and a library of distance functions - providing numerical means of assessing dissimilarity between scenes. Ancillary GeoPAT modules use these functions to construct a grid of scenes or to assign signatures to individual scenes having regular or irregular geometries. Thus GeoPAT combines knowledge retrieval from patterns with mapping tasks within a single integrated GIS environment. GeoPAT is designed to identify and analyze complex, highly generalized classes in spatial datasets. Examples include distinguishing between different styles of urban settlements using VHR images, delineating different landscape types in land cover maps, and mapping physiographic units from DEM. The concept of pattern-based spatial analysis is explained and the roles of all modules and functions are described. A case study example pertaining to delineation of landscape types in a subregion of NLCD is given. Performance evaluation is included to highlight GeoPAT's applicability to very large datasets. The GeoPAT toolbox is available for download from
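The signature/distance idea at GeoPAT's core can be sketched as follows. A category-histogram signature with a Jensen-Shannon dissimilarity is one plausible instantiation for illustration, not necessarily GeoPAT's actual defaults; the class maps below are synthetic.

```python
import numpy as np

# Signature: normalized histogram of class labels within a scene (tile).
def signature(tile, n_classes):
    h = np.bincount(tile.ravel(), minlength=n_classes).astype(float)
    return h / h.sum()

# Distance function: Jensen-Shannon divergence between two signatures.
def jensen_shannon(p, q):
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(2)
urban = rng.choice([0, 1], size=(32, 32), p=[0.2, 0.8])    # mostly class 1
forest = rng.choice([0, 1], size=(32, 32), p=[0.9, 0.1])   # mostly class 0
urban2 = rng.choice([0, 1], size=(32, 32), p=[0.25, 0.75])

s_u, s_f, s_u2 = (signature(t, 2) for t in (urban, forest, urban2))
# Scenes with similar landscape patterns get small distances.
print(jensen_shannon(s_u, s_u2) < jensen_shannon(s_u, s_f))  # True
```

Query and segmentation on the grid of scenes then reduce to nearest-signature search and merging of adjacent scenes whose mutual distance is small.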
Realistic Simulations of Coronagraphic Observations with WFIRST
NASA Astrophysics Data System (ADS)
Rizzo, Maxime; Zimmerman, Neil; Roberge, Aki; Lincowski, Andrew; Arney, Giada; Stark, Chris; Jansen, Tiffany; Turnbull, Margaret; WFIRST Science Investigation Team (Turnbull)
2018-01-01
We present a framework to simulate observing scenarios with the WFIRST Coronagraphic Instrument (CGI). The Coronagraph and Rapid Imaging Spectrograph in Python (crispy) is an open-source package that can be used to create CGI data products for analysis and development of post-processing routines. The software convolves time-varying coronagraphic PSFs with realistic astrophysical scenes which contain a planetary architecture, a consistent dust structure, and a background field composed of stars and galaxies. The focal plane can be read out by a WFIRST electron-multiplying CCD model directly, or passed through a WFIRST integral field spectrograph model first. Several elementary post-processing routines are provided as part of the package.
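The central operation, convolving a PSF with an astrophysical scene, can be sketched as follows. This is a simplified stand-in for crispy's pipeline: a Gaussian is assumed in place of a time-varying coronagraphic PSF, and `scipy.signal.fftconvolve` in place of the package's own machinery.

```python
import numpy as np
from scipy.signal import fftconvolve

def observe(scene, psf):
    """Convolve a scene with a PSF, normalizing the PSF so that total
    flux is conserved."""
    psf = psf / psf.sum()
    return fftconvolve(scene, psf, mode="same")

# Point source on a black sky; Gaussian stand-in for a coronagraphic PSF
n = 65
scene = np.zeros((n, n))
scene[32, 32] = 1.0                      # unit-flux point source at center
y, x = np.mgrid[:n, :n]
psf = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 3.0 ** 2))
image = observe(scene, psf)
```

In the real pipeline the planetary architecture, dust structure, and background field would all be rendered into `scene`, and the PSF would vary with time and wavelength before detector and spectrograph models are applied.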
LANDSAT inventory of surface-mined areas using extendible digital techniques
NASA Technical Reports Server (NTRS)
Anderson, A. T.; Schultz, D. T.; Buchman, N.
1975-01-01
Multispectral LANDSAT imagery was analyzed to provide a rapid and accurate means of identification, classification, and measurement of strip-mined surfaces in Western Maryland. Four band analysis allows distinction of a variety of strip-mine associated classes, but has limited extendibility. A method for surface area measurements of strip mines, which is both geographically and temporally extendible, has been developed using band-ratioed LANDSAT reflectance data. The accuracy of area measurement by this method, averaged over three LANDSAT scenes taken between September 1972 and July 1974, is greater than 93%. Total affected acreage of large (50 hectare/124 acre) mines can be measured to within 1.0%.
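A band-ratio classification of the kind described can be sketched as below. The specific bands, the ratio threshold of 2.0, and the per-pixel area are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

def stripmine_area(band_a, band_b, ratio_threshold, pixel_area_ha):
    """Flag pixels whose reflectance band ratio exceeds a threshold
    (a stand-in for the study's band-ratio rule) and convert the
    flagged-pixel count to hectares."""
    ratio = band_a / np.maximum(band_b, 1e-6)   # avoid divide-by-zero
    mined = ratio > ratio_threshold
    return mined.sum() * pixel_area_ha, mined

# Synthetic 100x100 scene: bare mined ground reflects more in band A
band_a = np.full((100, 100), 0.10)
band_a[20:40, 30:60] = 0.35
band_b = np.full((100, 100), 0.12)
# ~0.45 ha per LANDSAT MSS pixel (57 m x 79 m); threshold is invented
area_ha, mask = stripmine_area(band_a, band_b, 2.0, 0.45)
```

Because the decision is a ratio rather than an absolute reflectance, the rule is less sensitive to illumination differences between scenes, which is what makes it geographically and temporally extendible.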
NASA Technical Reports Server (NTRS)
Montgomery, H. E.; Ostrow, H.; Ressler, G. M.
1990-01-01
The theory of electro-optical sensor systems operating from the visible through the thermal infrared spectral regions is described, the equations required to design such systems are developed, and their performance is analyzed. Methods are developed to compute essential optical and detector parameters, signal-to-noise ratio, MTF, and figures of merit such as NE delta rho and NE delta T. A set of atmospheric tables is provided to determine scene radiance in the visible spectral region; the Planck function is used to determine radiance in the infrared. The equations developed were incorporated in a spreadsheet so that a wide variety of sensor studies can be conducted rapidly and efficiently.
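The infrared side of the radiance computation rests on the Planck function, and NE delta T can be derived from it. The sketch below is a generic illustration of both, not a reproduction of the report's spreadsheet; the noise-equivalent radiance value is an assumption.

```python
import math

H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temperature_k):
    """Blackbody spectral radiance L(lambda, T) in W m^-2 sr^-1 m^-1,
    the Planck function used for scene radiance in the infrared."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = math.exp(H * C / (wavelength_m * KB * temperature_k)) - 1.0
    return a / b

def ne_delta_t(noise_equiv_radiance, wavelength_m, temperature_k, dt=0.1):
    """NE delta T: the scene-temperature change whose radiance change
    equals the sensor's noise-equivalent radiance (central difference)."""
    dl_dt = (planck_radiance(wavelength_m, temperature_k + dt)
             - planck_radiance(wavelength_m, temperature_k - dt)) / (2.0 * dt)
    return noise_equiv_radiance / dl_dt

# A 300 K scene viewed at 10 micrometers (thermal infrared);
# the noise-equivalent radiance of 1e5 W m^-2 sr^-1 m^-1 is invented
L300 = planck_radiance(10e-6, 300.0)
nedt = ne_delta_t(1.0e5, 10e-6, 300.0)
```

The same two functions cover both figures of merit the abstract names for the infrared: radiance directly from the Planck function, and NE delta T as noise divided by the radiance-temperature slope.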
NASA Astrophysics Data System (ADS)
Kravitz, K.; Furuya, M.; Mueller, K. J.
2013-12-01
The Needles District, in Canyonlands National Park in Utah, exposes an array of actively creeping normal faults that accommodate gravity-driven extension above a plastically deforming substrate of evaporite deposits. Previous interferogram stacking and InSAR analysis of faults in the Needles District using 35 ERS satellite scenes from 1992 to 2002 showed line-of-sight deformation rates of ~1-2 mm/yr along active normal faults, with a wide strain gradient along the eastern margin of the deforming region. More rapid subsidence of ~2-2.5 mm/yr was also evident south of the main fault array across a broad platform bounded by the Colorado River and a single fault scarp to the south. In this study, time series analysis was performed on SAR scenes from Envisat, PALSAR, and ERS satellites ranging from 1992 to 2010 to expand upon previous results. Both persistent scatterer and small baseline methods were implemented using StaMPS. Preliminary results from Envisat data indicate equally distributed slip rates along the length of faults within the Needles District and very little subsidence in the broad region further southwest identified in previous work. A phase ramp that appears to be present within the initial interferograms creates uncertainty in the current analysis, and future work is aimed at removing this artifact. Our new results suggest, however, that a clear deformation signal is present along a number of large grabens in the northern part of the region at higher rates of up to 3-4 mm/yr. Little to no creep is evident along the single fault zone that bounds the southern Needles, in spite of the presence of a large and apparently active fault. This includes a segment of this fault that is instrumented by a creepmeter that yields slip rates on the order of ~1 mm/yr.
Further work using time series analysis and a larger sampling of SAR scenes will be used in an effort to determine why differences exist between previous and current work and to test mechanics-based modeling of extension in the region.
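The basic conversion underlying the reported line-of-sight rates is from unwrapped interferometric phase to displacement, where one 2*pi fringe corresponds to half a radar wavelength of two-way path change. A minimal sketch, with the sign convention assumed:

```python
import math

def los_displacement_mm(unwrapped_phase_rad, radar_wavelength_mm):
    """Unwrapped interferometric phase -> line-of-sight displacement.
    One 2*pi fringe is half a wavelength of two-way path change; the
    negative sign (motion toward the satellite positive) is an assumed
    convention."""
    return -unwrapped_phase_rad * radar_wavelength_mm / (4.0 * math.pi)

# One full fringe in a C-band (ERS/Envisat, ~56.6 mm) interferogram
d_mm = los_displacement_mm(2.0 * math.pi, 56.6)
```

At the reported ~2 mm/yr, a decade-long time series accumulates roughly two-thirds of one C-band fringe, which is why stacking or time-series methods such as StaMPS are needed to separate the signal from atmospheric noise and phase ramps.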
Analysis of Urban Terrain Data for Use in the Development of an Urban Camouflage Pattern
1990-02-01
… the entire lightness gamut, but concentrated in the red, orange, yellow and neutral regions of color space. … (elements grouped by color.) Summary of scenes filmed for the urban camouflage study; optimum number of domains separated by type; selected CIELAB values for all urban scenes, for Type I urban scenes, and for Type II urban scenes.
Automatic acquisition of motion trajectories: tracking hockey players
NASA Astrophysics Data System (ADS)
Okuma, Kenji; Little, James J.; Lowe, David
2003-12-01
Computer systems that have the capability of analyzing complex and dynamic scenes play an essential role in video annotation. Scenes can be complex in such a way that there are many cluttered objects with different colors, shapes and sizes, and can be dynamic with multiple interacting moving objects and a constantly changing background. In reality, there are many scenes that are complex, dynamic, and challenging enough for computers to describe. These scenes include games of sports, air traffic, car traffic, street intersections, and cloud transformations. Our research is about the challenge of inventing a descriptive computer system that analyzes scenes of hockey games where multiple moving players interact with each other on a constantly moving background due to camera motions. Ultimately, such a computer system should be able to acquire reliable data by extracting the players' motion as their trajectories, query them by analyzing the descriptive information of the data, and predict the motions of some hockey players based on the result of the query. Among these three major aspects of the system, we primarily focus on visual information of the scenes, that is, how to automatically acquire motion trajectories of hockey players from video. More accurately, we automatically analyze the hockey scenes by estimating parameters (i.e., pan, tilt, and zoom) of the broadcast cameras, tracking hockey players in those scenes, and constructing a visual description of the data by displaying trajectories of those players. Many technical problems in vision, such as fast and unpredictable player motions and rapid camera motions, make our challenge worth tackling. To the best of our knowledge, there have not been any automatic video annotation systems for hockey developed in the past.
Although there are many obstacles to overcome, our efforts and accomplishments would hopefully establish the infrastructure of the automatic hockey annotation system and become a milestone for research in automatic video annotation in this domain.
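As a rough illustration of the tracking step, a constant-velocity Kalman filter over noisy per-frame detections is sketched below. This is a generic building block commonly used in player tracking, not the authors' tracker, and all noise parameters are assumptions.

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal 2-D constant-velocity Kalman filter.
    State vector: [x, y, vx, vy] in image coordinates."""

    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4) * 10.0                 # initial uncertainty
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)  # constant-velocity motion
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)  # we only measure position
        self.Q = np.eye(4) * q                    # process noise (assumed)
        self.R = np.eye(2) * r                    # measurement noise (assumed)

    def step(self, z):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured image position z = (x, y)
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

# Track a player skating diagonally, with noisy per-frame detections
kf = ConstantVelocityKalman(0.0, 0.0)
rng = np.random.default_rng(1)
for t in range(1, 30):
    est = kf.step([t + rng.normal(0, 0.3), t + rng.normal(0, 0.3)])
```

In a broadcast setting, the measured positions would first be mapped through the estimated pan/tilt/zoom parameters so that trajectories are expressed in rink coordinates rather than image coordinates.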
Intelligent bandwidth compression
NASA Astrophysics Data System (ADS)
Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.
1980-02-01
The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented.
Conjoint representation of texture ensemble and location in the parahippocampal place area.
Park, Jeongho; Park, Soojin
2017-04-01
Texture provides crucial information about the category or identity of a scene. Nonetheless, not much is known about how the texture information in a scene is represented in the brain. Previous studies have shown that the parahippocampal place area (PPA), a scene-selective part of visual cortex, responds to simple patches of texture ensemble. However, in natural scenes textures exist in spatial context within a scene. Here we tested two hypotheses that make different predictions on how textures within a scene context are represented in the PPA. The Texture-Only hypothesis suggests that the PPA represents texture ensemble (i.e., the kind of texture) as is, irrespective of its location in the scene. On the other hand, the Texture and Location hypothesis suggests that the PPA represents texture and its location within a scene (e.g., ceiling or wall) conjointly. We tested these two hypotheses across two experiments, using different but complementary methods. In experiment 1, by using multivoxel pattern analysis (MVPA) and representational similarity analysis, we found that the representational similarity of the PPA activation patterns was significantly explained by the Texture-Only hypothesis but not by the Texture and Location hypothesis. In experiment 2, using a repetition suppression paradigm, we found no repetition suppression for scenes that had the same texture ensemble but differed in location (supporting the Texture and Location hypothesis). On the basis of these results, we propose a framework that reconciles contrasting results from MVPA and repetition suppression and draw conclusions about how texture is represented in the PPA. NEW & NOTEWORTHY This study investigates how the parahippocampal place area (PPA) represents texture information within a scene context.
We claim that texture is represented in the PPA at multiple levels: the texture ensemble information at the across-voxel level and the conjoint information of texture and its location at the within-voxel level. The study proposes a working hypothesis that reconciles contrasting results from multivoxel pattern analysis and repetition suppression, suggesting that the methods are complementary to each other but not necessarily interchangeable. Copyright © 2017 the American Physiological Society.
Baber, Chris; Butler, Mark
2012-06-01
The strategies of novice and expert crime scene examiners were compared in searching crime scenes. Previous studies have demonstrated that experts frame a scene through reconstructing the likely actions of a criminal and use contextual cues to develop hypotheses that guide subsequent search for evidence. Novice (first-year undergraduate students of forensic sciences) and expert (experienced crime scene examiners) examined two "simulated" crime scenes. Performance was captured through a combination of concurrent verbal protocol and own-point recording, using head-mounted cameras. Although both groups paid attention to the likely modus operandi of the perpetrator (in terms of possible actions taken), the novices paid more attention to individual objects, whereas the experts paid more attention to objects with "evidential value." Novices explore the scene in terms of the objects that it contains, whereas experts consider the evidence analysis that can be performed as a consequence of the examination. The suggestion is that the novices are putting effort into detailing the scene in terms of its features, whereas the experts are putting effort into the likely actions that can be performed as a consequence of the examination. The findings have helped in developing the expertise of novice crime scene examiners and approaches to training of expertise within this population.
Crime scene units: a look to the future
NASA Astrophysics Data System (ADS)
Baldwin, Hayden B.
1999-02-01
The scientific examination of physical evidence is well recognized as a critical element in conducting successful criminal investigations and prosecutions. The forensic science field is an ever-changing discipline. With the arrival of DNA analysis, new processing techniques for latent prints, portable lasers, and electro-static dust print lifters, the training of evidence technicians has become more important than ever. These scientific and technological breakthroughs have increased the possibility of collecting and analyzing physical evidence in ways that were never possible before. The problem arises with the collection of physical evidence from the crime scene, not with the analysis of the evidence. The need for specialized units in the processing of all crime scenes is imperative. These specialized units, called crime scene units, should be trained and equipped to handle all forms of crime scenes. The crime scene units would have the capability to professionally evaluate and collect pertinent physical evidence from crime scenes.
NASA Astrophysics Data System (ADS)
Altschuler, Bruce R.; Monson, Keith L.
1998-03-01
Representation of crime scenes as virtual reality 3D computer displays promises to become a useful and important tool for law enforcement evaluation and analysis, forensic identification and pathological study and archival presentation during court proceedings. Use of these methods for assessment of evidentiary materials demands complete accuracy of reproduction of the original scene, both in data collection and in its eventual virtual reality representation. The recording of spatially accurate information as soon as possible after first arrival of law enforcement personnel is advantageous for unstable or hazardous crime scenes and reduces the possibility that either inadvertent measurement error or deliberate falsification may occur or be alleged concerning processing of a scene. Detailed measurements and multimedia archiving of critical surface topographical details in a calibrated, uniform, consistent and standardized quantitative 3D coordinate method are needed. These methods would afford professional personnel in initial contact with a crime scene the means for remote, non-contacting, immediate, thorough and unequivocal documentation of the contents of the scene. Measurements of the relative and absolute global positions of objects and victims, and their dispositions within the scene before their relocation and detailed examination, could be made. Resolution must be sufficient to map both small and large objects. Equipment must be able to map regions at varied resolution as collected from different perspectives. Progress is presented in devising methods for collecting and archiving 3D spatial numerical data from crime scenes, sufficient for law enforcement needs, by remote laser structured light and video imagery. Two types of simulation studies were done. One study evaluated the potential of 3D topographic mapping and 3D telepresence using a robotic platform for explosive ordnance disassembly.
The second study involved using the laser mapping system on a fixed optical bench with simulated crime scene models of the people and furniture to assess feasibility, requirements and utility of such a system for crime scene documentation and analysis.
Photogrammetry and remote sensing for visualization of spatial data in a virtual reality environment
NASA Astrophysics Data System (ADS)
Bhagawati, Dwipen
2001-07-01
Researchers in many disciplines have started using the tool of Virtual Reality (VR) to gain new insights into problems in their respective disciplines. Recent advances in computer graphics, software and hardware technologies have created many opportunities for VR systems, advanced scientific and engineering applications being among them. In Geometronics, photogrammetry and remote sensing are generally used for management of spatial data inventory. VR technology can be suitably used for management of spatial data inventory. This research demonstrates the usefulness of VR technology for inventory management by taking roadside features as a case study. Management of roadside feature inventory involves positioning and visualization of the features. This research has developed a methodology to demonstrate how photogrammetric principles can be used to position the features using the video-logging images and GPS camera positioning, and how image analysis can help produce appropriate texture for building the VR, which then can be visualized in a Cave Augmented Virtual Environment (CAVE). VR modeling was implemented in two stages to demonstrate the different approaches for modeling the VR scene. A simulated highway scene was implemented with the brute force approach, while modeling software was used to model the real-world scene using feature positions produced in this research. The first approach demonstrates an implementation of the scene by writing C++ code to include a multi-level wand menu that enables the user to interact with the scene. The interactions include editing the features inside the CAVE display, navigating inside the scene, and performing limited geographic analysis. The second approach demonstrates creation of a VR scene for a real roadway environment using feature positions determined in this research. The scene looks realistic with textures from the real site mapped onto the geometry of the scene.
Remote sensing and digital image processing techniques were used for texturing the roadway features in this scene.
Hołowko, Elwira; Januszkiewicz, Kamil; Bolewicki, Paweł; Sitnik, Robert; Michoński, Jakub
2016-10-01
In forensic documentation with bloodstain pattern analysis (BPA) it is highly desirable to obtain non-invasive overall documentation of a crime scene, but also to register single evidence objects, such as bloodstains, in high resolution. In this study, we propose a hierarchical 3D scanning platform designed according to the top-down approach known from traditional forensic photography. The overall 3D model of a scene is obtained via integration of laser scans registered from different positions. Parts of a scene that are particularly interesting are documented using a midrange scanner, and the smallest details are added in the highest resolution as close-up scans. The scanning devices are controlled using developed software equipped with advanced algorithms for point cloud processing. To verify the feasibility and effectiveness of multi-resolution 3D scanning in crime scene documentation, our platform was applied to document a murder scene simulated by the BPA experts from the Central Forensic Laboratory of the Police R&D, Warsaw, Poland. Applying the 3D scanning platform proved beneficial in the documentation of a crime scene combined with BPA. The multi-resolution 3D model enables virtual exploration of a scene in a three-dimensional environment and distance measurement, and gives a more realistic preservation of the evidence together with its surroundings. Moreover, high-resolution close-up scans aligned in a 3D model can be used to analyze bloodstains revealed at the crime scene. The results of BPA, such as trajectories and the area of origin, are visualized and analyzed in an accurate model of a scene. At this stage, a simplified approach considering the trajectory of a blood drop as a straight line is applied. Although the 3D scanning platform offers a new quality of crime scene documentation with BPA, some limitations of the technique are also mentioned. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
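The straight-line trajectory approximation mentioned above builds on the classic BPA relation in which an elliptical stain's width-to-length ratio gives the drop's impact angle. A minimal sketch; the numerical inputs are invented:

```python
import math

def impact_angle_deg(width_mm, length_mm):
    """Classic BPA relation: the impact angle of a blood drop follows
    from the width-to-length ratio of the elliptical stain,
    alpha = arcsin(width / length)."""
    return math.degrees(math.asin(width_mm / length_mm))

def backtrack_height(distance_to_convergence_mm, angle_deg):
    """Straight-line approximation (as in the study's current stage):
    project the trajectory back over the horizontal distance to the
    area of convergence to estimate height above the surface."""
    return distance_to_convergence_mm * math.tan(math.radians(angle_deg))

angle = impact_angle_deg(4.0, 8.0)      # width half the length
height = backtrack_height(500.0, angle)
```

In the scanning platform, the stain dimensions and the distance to the area of convergence would be measured directly in the high-resolution close-up scans aligned within the overall 3D model.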
Getting the Gist of Events: Recognition of Two-Participant Actions from Brief Displays
Hafri, Alon; Papafragou, Anna; Trueswell, John C.
2013-01-01
Unlike rapid scene and object recognition from brief displays, little is known about recognition of event categories and event roles from minimal visual information. In three experiments, we displayed naturalistic photographs of a wide range of two-participant event scenes for 37 ms and 73 ms followed by a mask, and found that event categories (the event gist, e.g., ‘kicking’, ‘pushing’, etc.) and event roles (i.e., Agent and Patient) can be recognized rapidly, even with various actor pairs and backgrounds. Norming ratings from a subsequent experiment revealed that certain physical features (e.g., outstretched extremities) that correlate with Agent-hood could have contributed to rapid role recognition. In a final experiment, using identical twin actors, we then varied these features in two sets of stimuli, in which Patients had Agent-like features or not. Subjects recognized the roles of event participants less accurately when Patients possessed Agent-like features, with this difference being eliminated with two-second durations. Thus, given minimal visual input, typical Agent-like physical features are used in role recognition but, with sufficient input from multiple fixations, people categorically determine the relationship between event participants. PMID:22984951
The Characteristics and Limits of Rapid Visual Categorization
Fabre-Thorpe, Michèle
2011-01-01
Visual categorization appears both effortless and virtually instantaneous. The study by Thorpe et al. (1996) was the first to estimate the processing time necessary to perform fast visual categorization of animals in briefly flashed (20 ms) natural photographs. They observed a large differential EEG activity between target and distracter correct trials that developed from 150 ms after stimulus onset, a value that was later shown to be even shorter in monkeys! With such strong processing time constraints, it was difficult to escape the conclusion that rapid visual categorization was relying on massively parallel, essentially feed-forward processing of visual information. Since 1996, we have conducted a large number of studies to determine the characteristics and limits of fast visual categorization. The present chapter will review some of the main results obtained. I will argue that rapid object categorizations in natural scenes can be done without focused attention and are most likely based on coarse and unconscious visual representations activated with the first available (magnocellular) visual information. Fast visual processing proved efficient for the categorization of large superordinate object or scene categories, but shows its limits when more detailed basic representations are required. The representations for basic objects (dogs, cars) or scenes (mountain or sea landscapes) need additional processing time to be activated. This finding is at odds with the widely accepted idea that such basic representations are at the entry level of the system. Interestingly, focused attention is still not required to perform these time consuming basic categorizations. Finally we will show that object and context processing can interact very early in an ascending wave of visual information processing. We will discuss how such data could result from our experience with a highly structured and predictable surrounding world that shaped neuronal visual selectivity. PMID:22007180
From Image Analysis to Computer Vision: Motives, Methods, and Milestones.
1998-07-01
… Initially, work on digital image analysis dealt with specific classes of images such as text, photomicrographs, nuclear particle tracks, and aerial photographs; but by the 1960's, general algorithms and paradigms for image analysis began to be formulated. When the artificial intelligence … scene, but eventually from image sequences obtained by a moving camera; at this stage, image analysis had become scene analysis, or computer vision.
Auditory Scene Analysis: An Attention Perspective
ERIC Educational Resources Information Center
Sussman, Elyse S.
2017-01-01
Purpose: This review article provides a new perspective on the role of attention in auditory scene analysis. Method: A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal…
Animal spotting in Alzheimer's disease: an eye tracking study of object categorization.
Boucart, Muriel; Bubbico, Giovanna; Szaffarczyk, Sébastien; Pasquier, Florence
2014-01-01
We investigated rapid object categorization and, more specifically, the ability to detect a target object within a natural scene in people with mild Alzheimer's disease (AD) using a saccadic choice task. It has been suggested that the anatomical pathway likely used to initiate rapid oculomotor responses in the saccadic choice task could involve the Frontal Eye Field, a structure that is part of the dorsal attentional network, in which connectivity is disrupted in AD. Seventeen patients with mild AD and 23 healthy age-matched controls took part in the study. A group of 24 young healthy observers was included as it has been reported that normal aging affects eye movements. Participants were presented with pairs of colored photographs of natural scenes, one containing an animal (the target) and one containing various objects (distracter), displayed for 1 s left and right of fixation. They were asked to saccade to the scene containing an animal. Neither pathology nor age affected temporal (saccade latencies and durations) and spatial (saccade amplitude) parameters of eye movements. Patients with AD were significantly less accurate than age-matched controls, and older participants were less accurate than young observers. The results are interpreted in terms of noisier sensory information and increased uncertainty in relation to deficits in the magnocellular pathway. The results suggest that, even at a mild stage of the pathology, people exhibit difficulties in selecting relevant objects.
A knowledge-based machine vision system for space station automation
NASA Technical Reports Server (NTRS)
Chipman, Laure J.; Ranganath, H. S.
1989-01-01
A simple knowledge-based approach to the recognition of objects in man-made scenes is being developed. Specifically, the system under development is a proposed enhancement to a robot arm for use in the space station laboratory module. The system will take a request from a user to find a specific object, and locate that object by using its camera input and information from a knowledge base describing the scene layout and attributes of the object types included in the scene. In order to use realistic test images in developing the system, researchers are using photographs of actual NASA simulator panels, which provide similar types of scenes to those expected in the space station environment. Figure 1 shows one of these photographs. In traditional approaches to image analysis, the image is transformed step by step into a symbolic representation of the scene. Often the first steps of the transformation are done without any reference to knowledge of the scene or objects. Segmentation of an image into regions generally produces a counterintuitive result in which regions do not correspond to objects in the image. After segmentation, a merging procedure attempts to group regions into meaningful units that will more nearly correspond to objects. Here, researchers avoid segmenting the image as a whole, and instead use a knowledge-directed approach to locate objects in the scene. The knowledge-based approach to scene analysis is described and the categories of knowledge used in the system are discussed.
Phase 1 Development Report for the SESSA Toolkit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knowlton, Robert G.; Melton, Brad J; Anderson, Robert J.
The Site Exploitation System for Situational Awareness (SESSA) toolkit, developed by Sandia National Laboratories (SNL), is a comprehensive decision support system for crime scene data acquisition and Sensitive Site Exploitation (SSE). SESSA is an outgrowth of another SNL-developed decision support system, the Building Restoration Operations Optimization Model (BROOM), a hardware/software solution for data acquisition, data management, and data analysis. SESSA was designed to meet forensic crime scene needs as defined by the DoD's Military Criminal Investigation Organization (MCIO). SESSA is a very comprehensive toolkit with a considerable amount of database information managed through a Microsoft SQL (Structured Query Language) database engine, a Geographical Information System (GIS) engine that provides comprehensive mapping capabilities, and an intuitive Graphical User Interface (GUI). An electronic sketch pad module is included. The system can also efficiently generate the forms needed for forensic crime scene investigations (e.g., evidence submittal, laboratory requests, and scene notes). SESSA allows the user to capture photos on site, and can read and generate barcode labels that limit transcription errors. SESSA runs on PC computers running Windows 7, but is optimized for touch-screen tablet computers running Windows for ease of use at crime scenes and on SSE deployments. A prototype system for 3-dimensional (3D) mapping and measurements was also developed to complement the SESSA software. The mapping system employs a visual/depth sensor that captures data to create 3D visualizations of an interior space and to make distance measurements with centimeter-level accuracy. Output of this 3D Model Builder module provides a virtual 3D "walk-through" of a crime scene. The 3D mapping system is much less expensive and easier to use than competitive systems.
This document covers the basic installation and operation of the SESSA toolkit in order to give the user enough information to start using it. SESSA is currently a prototype system, and this documentation covers the initial release of the toolkit. Funding for SESSA was provided by the Department of Defense (DoD), Assistant Secretary of Defense for Research and Engineering (ASD(R&E)) Rapid Fielding (RF) organization. The project was managed by the Defense Forensic Science Center (DFSC), formerly known as the U.S. Army Criminal Investigation Laboratory (USACIL). ACKNOWLEDGEMENTS: The authors wish to acknowledge the funding support for the development of the SESSA toolkit from the DoD ASD(R&E) Rapid Fielding organization. Special thanks to Mr. Garold Warner, of DFSC, who served as the Project Manager. Individuals who worked on the design, functional attributes, algorithm development, system architecture, and software programming include Robert Knowlton, Brad Melton, Robert Anderson, and Wendy Amai.
Baumann, Oliver; Mattingley, Jason B
2016-02-24
The human parahippocampal cortex has been ascribed central roles in both visuospatial and mnemonic processes. More specifically, evidence suggests that the parahippocampal cortex subserves both the perceptual analysis of scene layouts as well as the retrieval of associative contextual memories. It remains unclear, however, whether these two functional roles can be dissociated within the parahippocampal cortex anatomically. Here, we provide evidence for a dissociation between neural activation patterns associated with visuospatial analysis of scenes and contextual mnemonic processing along the parahippocampal longitudinal axis. We used fMRI to measure parahippocampal responses while participants engaged in a task that required them to judge the contextual relatedness of scene and object pairs, which were presented either as words or pictures. Results from combined factorial and conjunction analyses indicated that the posterior section of parahippocampal cortex is driven predominantly by judgments associated with pictorial scene analysis, whereas its anterior section is more active during contextual judgments regardless of stimulus category (scenes vs objects) or modality (word vs picture). Activation maxima associated with visuospatial and mnemonic processes were spatially segregated, providing support for the existence of functionally distinct subregions along the parahippocampal longitudinal axis and suggesting that, in humans, the parahippocampal cortex serves as a functional interface between perception and memory systems. Copyright © 2016 the authors.
Wedel, Michel; Pieters, Rik; Liechty, John
2008-06-01
Eye movements across advertisements express a temporal pattern of bursts of respectively relatively short and long saccades, and this pattern is systematically influenced by activated scene perception goals. This was revealed by a continuous-time hidden Markov model applied to eye movements of 220 participants exposed to 17 ads under a free-viewing condition, and a scene-learning goal (ad memorization), a scene-evaluation goal (ad appreciation), a target-learning goal (product learning), or a target-evaluation goal (product evaluation). The model reflects how attention switches between two states--local and global--expressed in saccades of shorter and longer amplitude on a spatial grid with 48 cells overlaid on the ads. During the 5- to 6-s duration of self-controlled exposure to ads in the magazine context, attention predominantly started in the local state and ended in the global state, and rapidly switched about 5 times between states. The duration of the local attention state was much longer than the duration of the global state. Goals affected the frequency of switching between attention states and the duration of the local, but not of the global, state. (c) 2008 APA, all rights reserved
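The two-state switching behavior described above can be illustrated with a toy discrete-time Markov chain. The self-transition probabilities and step count below are invented for illustration; they are not the fitted parameters of the paper's continuous-time hidden Markov model.

```python
import random

def simulate_attention(n_steps, p_stay_local=0.95, p_stay_global=0.80, seed=0):
    """Simulate a two-state (local/global) attention chain.

    A higher self-transition probability for the local state yields the
    longer local dwell times reported in the abstract.
    """
    rng = random.Random(seed)
    state = "local"  # attention predominantly started in the local state
    states = []
    for _ in range(n_steps):
        states.append(state)
        p_stay = p_stay_local if state == "local" else p_stay_global
        if rng.random() > p_stay:
            state = "global" if state == "local" else "local"
    return states

def mean_dwell(states, which):
    """Average length of uninterrupted runs of a given state."""
    runs, length = [], 0
    for s in states:
        if s == which:
            length += 1
        elif length:
            runs.append(length)
            length = 0
    if length:
        runs.append(length)
    return sum(runs) / len(runs)

states = simulate_attention(10_000)
```

With these illustrative probabilities the expected local dwell (1/0.05 = 20 steps) is four times the global dwell (1/0.2 = 5 steps), mirroring the qualitative finding that the local attention state lasts much longer than the global one.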
Super Typhoon Halong off Taiwan
NASA Technical Reports Server (NTRS)
2002-01-01
On July 14, 2002, Super Typhoon Halong was east of Taiwan (left edge) in the western Pacific Ocean. At the time this image was taken the storm was a Category 4 hurricane, with maximum sustained winds of 115 knots (132 miles per hour), but as recently as July 12, winds were at 135 knots (155 miles per hour). Halong has moved northwards and pounded Okinawa, Japan, with heavy rain and high winds, just days after tropical Storm Chataan hit the country, creating flooding and killing several people. The storm is expected to be a continuing threat on Monday and Tuesday. This image was acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra satellite on July 14, 2002. Please note that the high-resolution scene provided here is 500 meters per pixel. For a copy of the scene at the sensor's fullest resolution, visit the MODIS Rapid Response Image Gallery. Image courtesy Jacques Descloitres, MODIS Land Rapid Response Team at NASA GSFC
ERIC Educational Resources Information Center
Marill, Thomas; And Others
The aim of the CYCLOPS Project research is the development of techniques for allowing computers to perform visual scene analysis, pre-processing of visual imagery, and perceptual learning. Work on scene analysis and learning has previously been described. The present report deals with research on pre-processing and with further work on scene…
A statistical model for radar images of agricultural scenes
NASA Technical Reports Server (NTRS)
Frost, V. S.; Shanmugan, K. S.; Holtzman, J. C.; Stiles, J. A.
1982-01-01
The presently derived and validated statistical model for radar images containing many different homogeneous fields predicts the probability density functions of radar images of entire agricultural scenes, thereby allowing histograms of large scenes composed of a variety of crops to be described. Seasat-A SAR images of agricultural scenes are accurately predicted by the model on the basis of three assumptions: each field has the same SNR, all target classes cover approximately the same area, and the true reflectivity characterizing each individual target class is a uniformly distributed random variable. The model is expected to be useful in the design of data processing algorithms and for scene analysis using radar images.
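The model's assumptions lend themselves to a quick Monte Carlo sketch: single-look radar intensity is commonly modeled as exponentially distributed speckle around each field's reflectivity, and the abstract's assumption of uniformly distributed reflectivities can be simulated directly. The field count, field size, and reflectivity range below are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate single-look SAR intensity over many homogeneous fields, under
# the model's assumption that each field's true reflectivity is an
# independent uniformly distributed random variable.
n_fields, pixels_per_field = 50, 400
reflectivity = rng.uniform(0.2, 1.0, n_fields)

# Speckle: intensity in a homogeneous field is exponentially distributed
# with mean equal to the field's reflectivity.
intensity = np.concatenate(
    [rng.exponential(scale=r, size=pixels_per_field) for r in reflectivity]
)
scene_mean = intensity.mean()
```

A histogram of `intensity` is then a mixture over fields, which is the kind of whole-scene density the model is designed to predict.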
Research and Technology Development for Construction of 3d Video Scenes
NASA Astrophysics Data System (ADS)
Khlebnikova, Tatyana A.
2016-06-01
For the last two decades, surface information in the form of conventional digital and analogue topographic maps has increasingly been supplemented by new digital geospatial products, also known as 3D models of real objects. It is shown that there are currently no defined standards for 3D scene construction technologies that could be used by Russian surveying and cartographic enterprises. Requirements for source data, their capture, and their transfer for creating 3D scenes have not yet been defined, and accuracy issues for 3D video scenes used for measurement purposes are rarely addressed in publications. The practicability of developing, researching, and implementing a technology for constructing 3D video scenes is substantiated by their capability to expand the application of data analysis to environmental monitoring, urban planning, and managerial decision problems. A technology for constructing 3D video scenes that meets specified metric requirements is offered. A technique and methodological background are recommended for this technology, which constructs 3D video scenes based on DTMs created from satellite and aerial survey data. The results of an accuracy estimation of the 3D video scenes are presented.
Rapid authentication of edible bird's nest by FTIR spectroscopy combined with chemometrics.
Guo, Lili; Wu, Yajun; Liu, Mingchang; Ge, Yiqiang; Chen, Ying
2018-06-01
Edible bird's nests (EBNs) have traditionally been regarded as a kind of medicinal and healthy food in China. For economic reasons, they are frequently subjected to adulteration with cheaper substitutes, such as Tremella fungus, agar, fried pigskin, and egg white. For such a precious and functional product, it is necessary to establish a robust method for the rapid authentication of EBNs that requires only small amounts of sample and simple processing. In this study, a Fourier transform infrared spectroscopy (FTIR) system was utilized and its feasibility for the identification of EBNs was verified. FTIR spectral data of authentic and adulterated EBNs were analyzed by chemometric methods including principal component analysis, linear discriminant analysis (LDA), support vector machine (SVM) and one-class partial least squares (OCPLS). The results showed that the established LDA and SVM models performed well and had satisfactory classification ability, with accuracies of 94.12% and 100%, respectively. The OCPLS model was developed with a prediction sensitivity of 0.937 and specificity of 0.886. Further detection of commercial EBN samples confirmed these results. FTIR is applicable to the rapid authentication of EBNs, especially for quality supervision departments, entry-exit inspection and quarantine, and customs administration. © 2017 Society of Chemical Industry.
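A simplified version of such a chemometric pipeline can be sketched with synthetic stand-in spectra: PCA via SVD for dimensionality reduction, followed by a nearest-centroid classifier in place of the paper's LDA/SVM/OCPLS models. All data and parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for FTIR spectra: 40 "authentic" and 40 "adulterated"
# samples over 200 wavenumber bins, with a class-dependent band shift.
x = np.linspace(0, 1, 200)
authentic = np.exp(-((x - 0.4) ** 2) / 0.01) + rng.normal(0, 0.05, (40, 200))
adulterated = np.exp(-((x - 0.6) ** 2) / 0.01) + rng.normal(0, 0.05, (40, 200))
spectra = np.vstack([authentic, adulterated])
labels = np.array([0] * 40 + [1] * 40)

# PCA via SVD: project the mean-centered spectra onto the top components.
centered = spectra - spectra.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:5].T

# Nearest-centroid classification in PCA space (a simplification of LDA).
centroids = np.array([scores[labels == k].mean(axis=0) for k in (0, 1)])
pred = np.argmin(((scores[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == labels).mean()
```

With real spectra the same projection step would precede the discriminant model; here the two synthetic classes are well separated, so even the centroid rule classifies them cleanly.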
ERIC Educational Resources Information Center
Simkus, Joyce
2010-01-01
Claude Monet and the Impressionists were the forward thinkers and painters of their time. They used quick brushstrokes and a rapid pace to capture lively outdoor scenes. Inspired by the colors and shadows revealed by sunlight, the Impressionists typically worked outside, without many preliminary sketches or drafts. This was in direct contrast to…
Bringing in the Bard: Shakespearean Plays as Context for Instrumental Analysis Projects
ERIC Educational Resources Information Center
Kloepper, Kathryn D.
2015-01-01
Scenes from the works of William Shakespeare were incorporated into individual and group projects for an upper-level chemistry class, instrumental analysis. Students read excerpts from different plays and then viewed a corresponding video clip from a stage or movie production. Guided-research assignments were developed based on these scenes. These…
2015-03-31
…analysis. For scene analysis, we use Temporal Data Crystallization (TDC), and for logical analysis, we use Speech Act theory and Toulmin argumentation… Each utterance in the discussion record is annotated with: (i) an utterance ID and a speaker ID; (ii) speech acts; (iii) argument structure. Speech act denotes… The mediator is expected to use more OQs than CQs. When the speech act of an utterance is an argument, furthermore, we recognize the conclusion part…
Intelligent bandwidth compression
NASA Astrophysics Data System (ADS)
Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.
1980-02-01
The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system, together with the adaptive priority controller, are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the database.
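The edge-map-plus-priority-window idea can be sketched in a toy form: transmit only a binary edge map of the background plus the full-resolution pixels inside a target window. The gradient threshold and window coordinates below are illustrative, not the system's actual auto-cueing output.

```python
import numpy as np

def compress_frame(frame, window, threshold=0.2):
    """Toy version of the scheme: keep a binary edge map of the whole
    frame plus the raw pixels of a priority window (row0, row1, col0, col1).
    One bit per background pixel replaces a full gray value."""
    gy, gx = np.gradient(frame.astype(float))
    edges = np.hypot(gx, gy) > threshold
    r0, r1, c0, c1 = window
    patch = frame[r0:r1, c0:c1].copy()
    return edges, patch

# A frame with a vertical step edge; the "target" sits in the top-left corner.
frame = np.zeros((8, 8))
frame[:, 4:] = 1.0
edges, patch = compress_frame(frame, window=(0, 2, 0, 2))
```

The real system updated the window at a higher rate than the edge map; this sketch only shows the content split, not the rate control.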
Virtual environments for scene of crime reconstruction and analysis
NASA Astrophysics Data System (ADS)
Howard, Toby L. J.; Murta, Alan D.; Gibson, Simon
2000-02-01
This paper describes research conducted in collaboration with Greater Manchester Police (UK) to evaluate the utility of Virtual Environments for scene of crime analysis, forensic investigation, and law enforcement briefing and training. We present an illustrated case study of the construction of a high-fidelity virtual environment, intended to match a particular real-life crime scene as closely as possible. We describe and evaluate the combination of several approaches, including: the use of the Manchester Scene Description Language for constructing complex geometrical models; the application of a radiosity rendering algorithm with several novel features based on human perceptual considerations; texture extraction from forensic photography; and experiments with interactive walkthroughs and large-screen stereoscopic display of the virtual environment implemented using the MAVERIK system. We also discuss the potential applications of Virtual Environment techniques in the Law Enforcement and Forensic communities.
Research on three-dimensional visualization based on virtual reality and Internet
NASA Astrophysics Data System (ADS)
Wang, Zongmin; Yang, Haibo; Zhao, Hongling; Li, Jiren; Zhu, Qiang; Zhang, Xiaohong; Sun, Kai
2007-06-01
To disclose and display water information, a three-dimensional visualization system based on Virtual Reality (VR) and the Internet is developed, both to demonstrate a "digital water conservancy" application and to support routine reservoir management. To explore and mine in-depth information, after building a high-resolution DEM of reliable quality, topographical analysis, visibility analysis, and reservoir volume computation are studied. In addition, parameters including slope, water level, and NDVI are selected to classify landslide-prone zones in the water-level-fluctuation zone of the reservoir area. To establish the virtual reservoir scene, two methods are used to provide immersion, interaction, and imagination (3I). The first virtual scene contains more detailed textures to increase realism and runs on a graphical workstation with the virtual reality engine Open Scene Graph (OSG). The second virtual scene, intended for Internet users, has fewer details to ensure fluent rendering speed.
Multispectral imaging system based on laser-induced fluorescence for security applications
NASA Astrophysics Data System (ADS)
Caneve, L.; Colao, F.; Del Franco, M.; Palucci, A.; Pistilli, M.; Spizzichino, V.
2016-10-01
The development of portable sensors for fast screening of crime scenes is required to reduce the number of evidence items to be collected, optimizing time and resources. Laser-based spectroscopic techniques are good candidates for this purpose due to their capability to operate in the field in a remote and rapid way. In this work, we present the prototype of a multispectral imaging LIF (Laser Induced Fluorescence) system able to detect evidence of different materials over large, crowded, and cluttered areas at distances up to some tens of meters. Data collected as both 2D fluorescence images and LIF spectra are suitable for the identification and localization of the materials of interest. A reduced scan time, preserving at the same time the accuracy of the results, was a main requirement in the system design. An excimer laser with high energy and repetition rate, coupled to a gated high-sensitivity ICCD, assures very good performance for this purpose. Effort has been devoted to speeding up the data processing. The system has been tested in outdoor and indoor real scenarios and some results are reported. Evidence of the plastics polypropylene (PP), polyethylene (PE), and polyester has been identified, and their localization on the examined scenes has been highlighted through the data processing. By choosing suitable emission bands, the instrument can be used for the rapid detection of other material classes (i.e., textiles, woods, varnishes). The activities of this work have been supported by the EU-FP7 FORLAB project (Forensic Laboratory for in-situ evidence analysis in a post blast scenario).
4D light-field sensing system for people counting
NASA Astrophysics Data System (ADS)
Hou, Guangqi; Zhang, Chi; Wang, Yunlong; Sun, Zhenan
2016-03-01
Counting the number of people is still an important task in social security applications, and a few methods based on video surveillance have been proposed in recent years. In this paper, we design a novel optical sensing system that directly acquires the depth map of the scene from one light-field camera. The light-field sensing system can count the number of people crossing a passageway, recording the direction and intensity of rays in a single snapshot without any auxiliary lighting. Depth maps are extracted from the raw light-ray sensing data. Our smart sensing system is equipped with a passive imaging sensor, which is able to naturally discern the depth difference between the head and shoulders of each person; a human model is then built. By detecting this human model in light-field images, the number of people passing through the scene can be counted rapidly. We verify the feasibility of the sensing system, as well as its accuracy, by capturing real-world scenes in which single and multiple people pass under natural illumination.
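The head/shoulder depth cue can be illustrated with a toy overhead depth map: heads are closer to the camera than shoulders or floor, so thresholding the depth and counting connected regions approximates a people count. The depth values, grid, and threshold below are invented for illustration; the actual system builds a richer human model.

```python
from collections import deque

def count_people(depth, head_thresh):
    """Count 4-connected regions closer than head_thresh, a stand-in for
    detecting heads in an overhead depth map."""
    rows, cols = len(depth), len(depth[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if depth[r][c] < head_thresh and not seen[r][c]:
                count += 1  # new region: flood-fill it so it is counted once
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and depth[ny][nx] < head_thresh
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

# Toy overhead depth map (metres from camera): two heads at ~1.5 m,
# floor at ~3.0 m.
scene = [
    [3.0, 3.0, 3.0, 3.0, 3.0, 3.0],
    [3.0, 1.5, 1.5, 3.0, 3.0, 3.0],
    [3.0, 1.5, 1.5, 3.0, 1.5, 3.0],
    [3.0, 3.0, 3.0, 3.0, 1.5, 3.0],
    [3.0, 3.0, 3.0, 3.0, 3.0, 3.0],
]
```

Thresholding between head and shoulder depth isolates one blob per person, which is why the passive depth map alone can separate adjacent people.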
2017-12-08
In this mostly cloud-free true-color scene, much of Scandinavia can be seen to be still covered by snow. From left to right across the top of this image are the countries of Norway, Sweden, Finland, and northwestern Russia. The Baltic Sea is located in the bottom center of this scene, with the Gulf of Bothnia to the north (in the center of this scene) and the Gulf of Finland to the northeast. This image was acquired on March 15, 2002, by the Moderate-resolution Imaging Spectroradiometer (MODIS), flying aboard NASA's Terra satellite. Image courtesy Jacques Descloitres, rapidfire.sci.gsfc.nasa.gov/ MODIS Land Rapid Response Team at NASA GSFC. Credit: NASA Earth Observatory
Usability of aerial video footage for 3-D scene reconstruction and structural damage assessment
NASA Astrophysics Data System (ADS)
Cusicanqui, Johnny; Kerle, Norman; Nex, Francesco
2018-06-01
Remote sensing has evolved into the most efficient approach to assess post-disaster structural damage in extensively affected areas through the use of spaceborne data. For smaller and, in particular, complex urban disaster scenes, multi-perspective aerial imagery obtained with unmanned aerial vehicles and derived dense color 3-D models are increasingly being used. These types of data allow the direct and automated recognition of damage-related features, supporting an effective post-disaster structural damage assessment. However, the rapid collection and sharing of multi-perspective aerial imagery is still limited due to tight or lacking regulations and legal frameworks. A potential alternative is aerial video footage, which is typically acquired and shared by civil protection institutions or news media and which tends to be the first type of airborne data available. Nevertheless, inherent artifacts and the lack of suitable processing means have long limited its potential use in structural damage assessment and other post-disaster activities. In this research, the usability of modern aerial video data was evaluated based on a comparative quality and application analysis of video data and multi-perspective imagery (photos), and their derivative 3-D point clouds created using current photogrammetric techniques. Additionally, the effects of external factors, such as topography and the presence of smoke and moving objects, were determined by analyzing two different earthquake-affected sites: Tainan (Taiwan) and Pescara del Tronto (Italy). Results demonstrated similar usabilities for video and photos, shown by a difference of only 2 cm between the accuracies of video- and photo-based 3-D point clouds. Despite the low video resolution, the usability of these data was compensated for by a small ground sampling distance.
Rather than from the video characteristics themselves, reduced quality and applicability resulted from non-data-related factors, such as changes in the scene, lack of texture, or moving objects. We conclude that not only are current video data more rapidly available than photos, but they also have a comparable ability to assist in image-based structural damage assessment and other post-disaster activities.
NASA Astrophysics Data System (ADS)
Sadat, Mojtaba T.; Viti, Francesco
2015-02-01
Machine vision is rapidly gaining popularity in the field of Intelligent Transportation Systems. In particular, advantages are foreseen from the exploitation of Aerial Vehicles (AVs) in delivering a superior view of traffic phenomena. However, vibration on AVs makes it difficult to extract moving objects on the ground. To partly overcome this issue, image stabilization/registration procedures are adopted to correct and stitch multiple frames taken of the same scene but from different positions, angles, or sensors. In this study, we examine the impact of multiple feature-based techniques for stabilization, and we show that the SURF detector outperforms the others in terms of time efficiency and output similarity.
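SURF keypoint matching requires a feature library (e.g. OpenCV contrib), so as a self-contained stand-in the same stabilization idea can be sketched for pure translation with phase correlation: estimate the inter-frame shift, then roll the frame back. The frames and shift below are synthetic; real registration also handles rotation and perspective, which is where feature-based methods like SURF come in.

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the integer (dy, dx) corrective shift between two frames by
    phase correlation; applying np.roll(moved, (dy, dx)) realigns `moved`
    with `ref`. Translation-only simplification of feature-based registration."""
    f1 = np.fft.fft2(ref)
    f2 = np.fft.fft2(moved)
    cross = f1 * np.conj(f2)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices to signed shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
frame = rng.random((64, 64))
shaken = np.roll(frame, shift=(3, -5), axis=(0, 1))  # simulated vibration
```

The normalized cross-power spectrum has a single sharp peak at the displacement, which makes the estimate robust to global intensity changes.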
Driving with indirect viewing sensors: understanding the visual perception issues
NASA Astrophysics Data System (ADS)
O'Kane, Barbara L.
1996-05-01
Visual perception is one of the most important elements of driving in that it enables the driver to understand and react appropriately to the situation along the path of the vehicle. The driver's visual perception is enabled to the greatest extent while driving during the day. Noticeable decrements in visual acuity, range of vision, depth of field and color perception occur at night and under certain weather conditions. Indirect viewing sensors, utilizing various technologies and spectral bands, may assist the driver's normal mode of driving. Critical applications in the military as well as other official activities may require driving at night without headlights. In these latter cases, it is critical that the device, being the only source of scene information, provide the required scene cues needed for driving on, and oftentimes off, road. One can speculate about the scene information that a driver needs, such as road edges, terrain orientation, and people and object detection in or near the path of the vehicle. But the perceptual qualities of the scene that give rise to these perceptions are little known and thus not quantified for evaluation of indirect viewing devices. This paper discusses driving with headlights and compares the scene content with that provided by a thermal system in the 8-12 μm spectral band, which may be used for driving at some time. The benefits and advantages of each are discussed, as well as their limitations in providing information useful for the driver, who must make rapid and critical decisions based upon the scene content available. General recommendations are made for potential avenues of development to overcome some of these limitations.
Learning Scene Categories from High Resolution Satellite Image for Aerial Video Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheriyadat, Anil M
2011-01-01
Automatic scene categorization can benefit various aerial video processing applications. This paper addresses the problem of predicting the scene category from aerial video frames using a prior model learned from satellite imagery. We show that local and global features, in the form of line statistics and 2-D power spectrum parameters respectively, can characterize the aerial scene well. The line feature statistics and spatial frequency parameters are useful cues to distinguish between different urban scene categories. We learn the scene prediction model from high-resolution satellite imagery and test the model on the Columbus Surrogate Unmanned Aerial Vehicle (CSUAV) dataset collected by a high-altitude wide-area UAV sensor platform. We compare the proposed features with the popular Scale Invariant Feature Transform (SIFT) features. Our experimental results show that the proposed approach outperforms the SIFT model when the training and testing are conducted on disparate data sources.
Palaeomicrobiology meets forensic medicine: time as a fourth-dimension for the crime scene.
Bazaj, A; Turrina, S; De Leo, D; Cornaglia, G
2015-03-01
The unrelenting progress of laboratory techniques is rapidly unleashing the huge potential of palaeomicrobiology. That bodies are often found in poor condition is common to both palaeomicrobiology and forensic medicine, and this might stimulate them towards a joint quest to extract reproducible data for reliable specimens.
Preadolescent Girls' and Boys' Virtual MUD Play
ERIC Educational Resources Information Center
Calvert, Sandra L.; Strouse, Gabrielle A.; Strong, Bonnie L.; Huffaker, David A.; Lai, Sean
2009-01-01
Same and opposite-sex pairs of preadolescents interacted twice in a MUD, a virtual domain where they created characters known as avatars and socially interacted with one another. Boys interacted primarily through rapid scene shifts and playful exchanges; girls interacted with one another through written dialogue. Opposite-sex pairs lagged behind…
Harvard Education Letter. Volume 26, Number 5, September-October 2010
ERIC Educational Resources Information Center
Walser, Nancy, Ed.
2010-01-01
"Harvard Education Letter" is published bimonthly at the Harvard Graduate School of Education. This issue of "Harvard Education Letter" contains the following articles: (1) Scenes from the School Turnaround Movement: Passion, Frustration, Mid-Course Corrections Mark Rapid Reforms (Laura Pappano); (2) The Media Savvy Educator:…
Does object view influence the scene consistency effect?
Sastyin, Gergo; Niimi, Ryosuke; Yokosawa, Kazuhiko
2015-04-01
Traditional research on the scene consistency effect only used clearly recognizable object stimuli to show mutually interactive context effects for both the object and background components on scene perception (Davenport & Potter in Psychological Science, 15, 559-564, 2004). However, in real environments, objects are viewed from multiple viewpoints, including an accidental, hard-to-recognize one. When the observers named target objects in scenes (Experiments 1a and 1b, object recognition task), we replicated the scene consistency effect (i.e., there was higher accuracy for the objects with consistent backgrounds). However, there was a significant interaction effect between consistency and object viewpoint, which indicated that the scene consistency effect was more important for identifying objects in the accidental view condition than in the canonical view condition. Therefore, the object recognition system may rely more on the scene context when the object is difficult to recognize. In Experiment 2, the observers identified the background (background recognition task) while the scene consistency and object views were manipulated. The results showed that object viewpoint had no effect, while the scene consistency effect was observed. More specifically, the canonical and accidental views both equally provided contextual information for scene perception. These findings suggested that the mechanism for conscious recognition of objects could be dissociated from the mechanism for visual analysis of object images that were part of a scene. The "context" that the object images provided may have been derived from its view-invariant, relatively low-level visual features (e.g., color), rather than its semantic information.
Factors influencing pre-hospital care time intervals in Iran: a qualitative study.
Khorasani-Zavareh, Davoud; Mohammadi, Reza; Bohm, Katarina
2018-06-23
Pre-hospital time management provides better access to victims of road traffic crashes (RTCs) and can help minimize preventable deaths, injuries and disabilities. While most studies have focused on measuring various time intervals in the pre-hospital phase, to the best of our knowledge there is no study qualitatively exploring the barriers and facilitators that affect these intervals. The present study aimed to explore factors affecting various time intervals relating to road traffic incidents in the pre-hospital phase and provides suggestions for improvements in Iran. The study was conducted during 2013-2014 at both the national and local level in Iran. Overall, 18 face-to-face interviews with emergency medical services (EMS) personnel were used for data collection. Qualitative content analysis was employed to analyze the data. The most important barriers in relation to pre-hospital intervals were related to the manner of cooperation by members of the public with the EMS and their involvement at the crash scene, as well as to pre-hospital system factors, including the number and location of EMS facilities, the type and number of ambulances, and manpower. These factors usually affect how rapidly the EMS can arrive at the scene of the crash and how quickly victims can be transferred to hospital. These two categories comprise six main themes: notification interval; activation interval; response interval; on-scene interval; transport interval; and delivery interval. Despite more focus on physical resources, cooperation from members of the public needs to be taken into account in order to achieve better pre-hospital management of the various intervals, possibly through the use of public education campaigns.
Design of 3D simulation engine for oilfield safety training
NASA Astrophysics Data System (ADS)
Li, Hua-Ming; Kang, Bao-Sheng
2015-03-01
Aiming at the demand for rapid custom development of 3D simulation systems for oilfield safety training, this paper designs and implements a 3D simulation engine based on a script-driven method, a multi-layer structure, pre-defined entity objects, and high-level tools such as a scene editor, script editor, and program loader. A scripting language is defined to control the system's progress, events, and operating results. A training instructor can use this engine to edit 3D virtual scenes, set the properties of entity objects, define the logic script of a task, and produce a 3D simulation training system without any programming skills. By extending entity classes, this engine can be quickly applied to other virtual training areas.
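The script-driven idea, a data file of commands interpreted by the engine rather than compiled logic, can be sketched with a minimal interpreter. The command names, scene name, and scoring rule below are hypothetical; the paper does not specify its scripting language.

```python
# Hypothetical training script: load a scene, trigger an event, require a
# trainee action, and award points if the action was taken.
SCRIPT = [
    ("load_scene", "wellhead_area"),
    ("trigger_event", "gas_leak"),
    ("require_action", "close_valve"),
    ("score", 10),
]

def run_script(script, actions_taken):
    """Walk the script in order; points are awarded when the most recently
    required action appears in the set of actions the trainee performed."""
    total, scene_name, pending = 0, None, None
    for cmd, arg in script:
        if cmd == "load_scene":
            scene_name = arg
        elif cmd == "trigger_event":
            pass  # a real engine would spawn the 3D event here
        elif cmd == "require_action":
            pending = arg
        elif cmd == "score":
            if pending in actions_taken:
                total += arg
            pending = None
    return scene_name, total

scene_name, points = run_script(SCRIPT, {"close_valve"})
```

Because the training logic lives in the script data, an instructor can change scenarios without touching engine code, which is the point of the script-driven design.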
Scene-Aware Adaptive Updating for Visual Tracking via Correlation Filters
Zhang, Sirou; Qiao, Xiaoya
2017-01-01
In recent years, visual object tracking has been widely used in military guidance, human-computer interaction, road traffic, scene monitoring and many other fields. Tracking algorithms based on correlation filters have shown good performance in terms of accuracy and tracking speed. However, their performance is not satisfactory in scenes with scale variation, deformation, and occlusion. In this paper, we propose a scene-aware adaptive updating mechanism for visual tracking via a kernel correlation filter (KCF). First, a low-complexity scale estimation method is presented, in which the corresponding weights at five scales are employed to determine the final target scale. Then, an adaptive updating mechanism is presented based on scene classification. We classify video scenes into four categories by video content analysis. According to the target scene, we exploit the adaptive updating mechanism to update the kernel correlation filter and improve the robustness of the tracker, especially in scenes with scale variation, deformation, and occlusion. We evaluate our tracker on the CVPR2013 benchmark. The experimental results obtained with the proposed algorithm are improved by 33.3%, 15%, 6%, 21.9% and 19.8% compared to those of the KCF tracker on scenes with scale variation, partial or long-term large-area occlusion, deformation, fast motion and out-of-view targets, respectively. PMID:29140311
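The core of an adaptive updating mechanism is to scale the filter's learning rate by how much the current scene can be trusted. The sketch below shows that interpolation step only; the confidence value standing in for the paper's four-way scene classification, and the base learning rate, are illustrative.

```python
import numpy as np

def update_template(template, new_obs, confidence, base_lr=0.02):
    """Linearly interpolate the correlation-filter template toward the new
    observation, scaling the learning rate by a scene-dependent confidence
    in [0, 1]: occlusion or deformation implies low confidence, hence a
    slower update that preserves the existing model."""
    lr = base_lr * confidence
    return (1 - lr) * template + lr * new_obs

template = np.zeros(4)
obs = np.ones(4)
frozen = update_template(template, obs, confidence=0.0)   # occluded: no update
adapted = update_template(template, obs, confidence=1.0)  # clean scene
```

Freezing the update during occlusion is what keeps the filter from learning the occluder, the failure mode the abstract highlights for plain KCF.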
Applying Image Matching to Video Analysis
2010-09-01
…image groups, classified by the background scene, are the flag, the kitchen, the telephone, the bookshelf, the title screen, the… Kitchen 136, Telephone 3, Bookshelf 81, Title Screen 10, Map 1 24, Map 2 16 …command line. This implementation of a Bloom filter uses two arbitrary… …with the Bookshelf images. This scene is a much closer shot than the Kitchen scene, so the host occupies much of the background. Algorithms for face…
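The excerpt mentions a Bloom filter without showing its construction; as background, a minimal sketch of the data structure is below. The hash scheme, bit-array size, and item names are invented; the report's actual implementation is not shown in the excerpt.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over an m-bit array.
    Membership tests can yield false positives but never false negatives."""

    def __init__(self, m_bits=1024, k_hashes=3):
        self.m = m_bits
        self.k = k_hashes
        self.bits = 0  # integer used as a bit array

    def _positions(self, item):
        # Derive k positions by salting a cryptographic hash (illustrative).
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

seen = BloomFilter()
seen.add("frame_0042")
```

For image matching, such a filter lets a pipeline cheaply skip frames whose signatures have certainly not been seen before, at the cost of occasional false positives.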
Unconscious analyses of visual scenes based on feature conjunctions.
Tachibana, Ryosuke; Noguchi, Yasuki
2015-06-01
To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain. (c) 2015 APA, all rights reserved.
Dimensionality of visual complexity in computer graphics scenes
NASA Astrophysics Data System (ADS)
Ramanarayanan, Ganesh; Bala, Kavita; Ferwerda, James A.; Walter, Bruce
2008-02-01
How do human observers perceive visual complexity in images? This problem is especially relevant for computer graphics, where a better understanding of visual complexity can aid in the development of more advanced rendering algorithms. In this paper, we describe a study of the dimensionality of visual complexity in computer graphics scenes. We conducted an experiment where subjects judged the relative complexity of 21 high-resolution scenes, rendered with photorealistic methods. Scenes were gathered from web archives and varied in theme, number and layout of objects, material properties, and lighting. We analyzed the subject responses using multidimensional scaling of pooled subject responses. This analysis embedded the stimulus images in a two-dimensional space, with axes that roughly corresponded to "numerosity" and "material / lighting complexity". In a follow-up analysis, we derived a one-dimensional complexity ordering of the stimulus images. We compared this ordering with several computable complexity metrics, such as scene polygon count and JPEG compression size, and did not find them to be very correlated. Understanding the differences between these measures can lead to the design of more efficient rendering algorithms in computer graphics.
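Multidimensional scaling of this kind embeds stimuli in a low-dimensional space so that embedded distances approximate judged dissimilarities. A self-contained sketch of classical (Torgerson) MDS is below; it is a simplified stand-in for whatever scaling variant was applied to the pooled subject responses, and the four-point toy distance matrix is invented.

```python
import numpy as np

def classical_mds(dist, n_dims=2):
    """Classical (Torgerson) MDS: embed points from a pairwise distance
    matrix via double centering and an eigendecomposition."""
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (dist ** 2) @ j          # double-centered squared distances
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:n_dims]  # keep the largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Four points on a unit square: a 2-D embedding should recover
# the pairwise distances exactly.
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
embedding = classical_mds(dist)
recovered = np.linalg.norm(embedding[:, None] - embedding[None, :], axis=-1)
```

In a perceptual study the "distances" come from dissimilarity judgments rather than geometry, and the embedding axes are then interpreted post hoc, as with the "numerosity" and "material / lighting complexity" axes above.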
A scheme for racquet sports video analysis with the combination of audio-visual information
NASA Astrophysics Data System (ADS)
Xing, Liyuan; Ye, Qixiang; Zhang, Weigang; Huang, Qingming; Yu, Hua
2005-07-01
As a very important category of sports video, racquet sports video, e.g. table tennis, tennis and badminton, has received little attention in the past years. Considering the characteristics of this kind of sports video, we propose a new scheme for structure indexing and highlight generation based on the combination of audio and visual information. First, a supervised classification method is employed to detect important audio symbols, including impact (ball hit), audience cheers, commentator speech, etc. Meanwhile, an unsupervised algorithm is proposed to group video shots into various clusters. Second, by taking advantage of the temporal relationship between audio and visual signals, we specify the scene clusters with semantic labels, including rally scenes and break scenes. Third, a refinement procedure is developed to reduce false rally scenes by further audio analysis. Finally, an excitement model is proposed to rank the detected rally scenes, from which many exciting video clips such as game (match) points can be correctly retrieved. Experiments on two representative types of racquet sports video, table tennis video and tennis video, demonstrate encouraging results.
Landscape preference assessment of Louisiana river landscapes: a methodological study
Michael S. Lee
1979-01-01
The study pertains to the development of an assessment system for the analysis of visual preference attributed to Louisiana river landscapes. The assessment system was utilized in the evaluation of 20 Louisiana river scenes. Individuals were tested for their free choice preference for the same scenes. A statistical analysis was conducted to examine the relationship...
Multi-Sensor Scene Synthesis and Analysis
1981-09-01
Contents excerpt: Quad Trees for Image Representation and Processing; Databases (Definitions and Basic Concepts); Use of Databases in Hierarchical Scene Analysis; Use of Relational Tables; Multisensor Image Database Systems (MIDAS); Relational Database System for Pictures; Relational Pictorial Database.
Using 3D Visualization to Communicate Scientific Results to Non-scientists
NASA Astrophysics Data System (ADS)
Whipple, S.; Mellors, R. J.; Sale, J.; Kilb, D.
2002-12-01
If "a picture is worth a thousand words" then an animation is worth millions. 3D animations and visualizations are useful for geoscientists but are perhaps even more valuable for rapidly illustrating standard geoscience ideas and concepts (such as faults, seismicity patterns, and topography) to non-specialists. This is useful not only for purely educational needs but also in rapidly briefing decision makers where time may be critical. As a demonstration of this we juxtapose large geophysical datasets (e.g., Southern California seismicity and topography) with other large societal datasets (such as highways and urban areas), which allows an instant understanding of the correlations. We intend to extend this methodology to additional datasets, such as hospitals and bridges, on an ongoing basis. The 3D scenes we create from the separate datasets can be "flown" through, and individual snapshots that emphasize the concepts of interest are quickly rendered and converted to formats accessible to all. Viewing the snapshots and scenes greatly aids non-specialists' comprehension of the problems and tasks at hand. For example, seismicity clusters (such as aftershocks) and faults near urban areas are clearly visible. A simple "fly-by" through our Southern California scene demonstrates simple concepts such as the topographic features due to plate motion along faults, and the demarcation of the North American/Pacific Plate boundary by the complex fault system (e.g., Elsinore, San Jacinto and San Andreas faults) in Southern California.
The use of higher-order statistics in rapid object categorization in natural scenes.
Banno, Hayaki; Saiki, Jun
2015-02-04
We can rapidly and efficiently recognize many types of objects embedded in complex scenes. What information supports this object recognition is a fundamental question for understanding our visual processing. We investigated the eccentricity-dependent role of shape and statistical information for ultrarapid object categorization, using the higher-order statistics proposed by Portilla and Simoncelli (2000). Synthesized textures computed by their algorithms have the same higher-order statistics as the originals, while the global shapes are destroyed. We used the synthesized textures to manipulate the availability of shape information separately from the statistics. We hypothesized that shape makes a greater contribution to central vision than to peripheral vision and that statistics show the opposite pattern. Results did not show contributions clearly biased by eccentricity. Statistical information demonstrated a robust contribution not only in peripheral but also in central vision. For shape, the results supported the contribution in both central and peripheral vision. Further experiments revealed some interesting properties of the statistics. They are available only for a limited time, are informative about the presence or absence of animals even without shape information, and predict how easily humans detect animals in original images. Our data suggest that when facing the time constraint of categorical processing, higher-order statistics underlie our performance in rapid categorization, irrespective of eccentricity. © 2015 ARVO.
An interactive display system for large-scale 3D models
NASA Astrophysics Data System (ADS)
Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman
2018-04-01
With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, the reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult to achieve real-time display and interaction with large-scale 3D models for some common 3D display software, such as MeshLab. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize the real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.
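The view-dependent part of such a scheme reduces, at its simplest, to picking a level of detail per scene node from its camera distance. A minimal sketch, with purely hypothetical thresholds not taken from the paper:

```python
def select_lod(distance, thresholds=(10.0, 50.0, 200.0)):
    """Pick a level of detail from camera distance.

    0 is the finest mesh; larger indices are coarser. The thresholds
    (in scene units) are illustrative values, not the system's.
    """
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)  # farthest range: coarsest proxy

# nearby geometry gets the finest mesh, distant geometry the coarsest
near, mid, far = select_lod(5.0), select_lod(120.0), select_lod(999.0)
```

A real out-of-core renderer would combine this with screen-space error metrics and asynchronous loading, but the distance-based switch captures the core idea.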
[Development of the helicopter-rescue concept in the Basel region].
Demartines, N; Castelli, I; Scheidegger, D; Harder, F
1992-03-24
A total of 1927 medical helicopter transports were performed in Basel between 1986 and 1989. Of the total flights, 173 transports without patients and 186 incubator transports were excluded from the study. Treatment and transportation were provided for 1085 victims of trauma (70.2%) and 461 medical-surgical patients (29.8%), mostly with life-threatening conditions. 589 trauma patients (54.3%) were treated at the scene of the accident and later transported by helicopter to a nearby medical center. The 4.3% rate of negative emergency flights is low. Since introduction of the helicopter rescue system at Basel in 1975, scene flights have increased from 29% in 1984 to 46% in 1989. 47.4% of all patients were categorized as seriously ill or severely injured. 36.4% of all patients required intubation and assisted ventilation. Of the trauma patients, 54.3% involved scene flights requiring in-field intensive therapy. Helicopter transport provides not only a rapid source of transportation, but also vital medical assistance at the scene of emergency. Transport generally occurs only after stabilization of vital functions. These factors contribute to the low mortality before return flights (3%) as well as during transport (0.3%). We conclude that early aggressive in-field intensive therapy can help to decrease both morbidity and mortality in emergency-care patients.
Wilkinson, Krista M; Light, Janice
2011-12-01
Many individuals with complex communication needs may benefit from visual aided augmentative and alternative communication systems. In visual scene displays (VSDs), language concepts are embedded into a photograph of a naturalistic event. Humans play a central role in communication development and might be important elements in VSDs. However, many VSDs omit human figures. In this study, the authors sought to describe the distribution of visual attention to humans in naturalistic scenes as compared with other elements. Nineteen college students observed 8 photographs in which a human figure appeared near 1 or more items that might be expected to compete for visual attention (such as a Christmas tree or a table loaded with food). Eye-tracking technology allowed precise recording of participants' gaze. The fixation duration over a 7-s viewing period and latency to view elements in the photograph were measured. Participants fixated on the human figures more rapidly and for longer than expected based on the size of these figures, regardless of the other elements in the scene. Human figures attract attention in a photograph even when presented alongside other attractive distracters. Results suggest that humans may be a powerful means to attract visual attention to key elements in VSDs.
Memory-guided attention during active viewing of edited dynamic scenes.
Valuch, Christian; König, Peter; Ansorge, Ulrich
2017-01-01
Films, TV shows, and other edited dynamic scenes contain many cuts, which are abrupt transitions from one video shot to the next. Cuts occur within or between scenes, and often join together visually and semantically related shots. Here, we tested to which degree memory for the visual features of the precut shot facilitates shifting attention to the postcut shot. We manipulated visual similarity across cuts, and measured how this affected covert attention (Experiment 1) and overt attention (Experiments 2 and 3). In Experiments 1 and 2, participants actively viewed a target movie that randomly switched locations with a second, distractor movie at the time of the cuts. In Experiments 1 and 2, participants were able to deploy attention more rapidly and accurately to the target movie's continuation when visual similarity was high than when it was low. Experiment 3 tested whether this could be explained by stimulus-driven (bottom-up) priming by feature similarity, using one clip at screen center that was followed by two alternative continuations to the left and right. Here, even the highest similarity across cuts did not capture attention. We conclude that following cuts of high visual similarity, memory-guided attention facilitates the deployment of attention, but this effect is (top-down) dependent on the viewer's active matching of scene content across cuts.
NASA Astrophysics Data System (ADS)
Aydogdu, Eyup
Thanks to the rapid developments in science and technology in recent decades, especially in the past two decades, forensic sciences have been making invaluable contributions to criminal justice systems. With scientific evaluation of physical evidence, policing has become more effective in fighting crime and criminals. On the other hand, law enforcement personnel have made mistakes during the detection, protection, collection, and evaluation of physical evidence. Law enforcement personnel, especially patrol officers, have been criticized for ignoring or overlooking physical evidence at crime scenes. This study, conducted in a large American police department, aimed to determine the perceptions of patrol officers, their supervisors and administrators, detectives, and crime scene technicians about the forensic science needs of patrol officers. The results showed no statistically significant difference among the perceptions of the said groups. More than half of the respondents perceived that 14 out of 16 areas of knowledge were important for patrol officers to have: crime scene documentation, evidence collection, interviewing techniques, firearm evidence, latent and fingerprint evidence, blood evidence, death investigation information, DNA evidence, document evidence, electronically recorded evidence, trace evidence, biological fluid evidence, arson and explosive evidence, and impression evidence. Less than half of the respondents perceived forensic entomology and plant evidence as important for patrol officers.
Stimuli eliciting sexual arousal in males who offend adult women: an experimental study.
Kolárský, A; Madlafousek, J; Novotná, V
1978-03-01
The sexually arousing effects of short film scenes showing a naked actress's seductive behavior were phalloplethysmographically measured in 14 sexual deviates. These were males who had offended adult women, predominantly exhibitionists. Controls were 14 normal men. Deviates responded positively to the scenes and differentiated strong and weak seduction scenes similarly to normals. Consequently, the question arises of why deviates avoid their victim's erotic cooperation and why they do not offend their regular sexual partners. Post hoc analysis of five scenes which elicited a strikingly higher response in deviates than in normals suggested that these scenes contained reduced seductive behavior but unrestrained presentation of the genitals. This finding further encourages the laboratory study of stimulus conditions for abnormal sexual arousal which occurs during the sexual offense.
NASA Technical Reports Server (NTRS)
Wrigley, R. C. (Principal Investigator)
1984-01-01
A second quadrant from the Sacramento, CA scene 44/33 acquired by LANDSAT-4 was tested for band-to-band registration. Results show that all measured misregistrations are within 0.03 pixels for similar band pairs. Two LANDSAT-5 scenes (one from Corpus Christi, TX and the other from Huntsville, AL) were also tested for band-to-band registration. All measured misregistrations in the Texas scene are less than 0.03 pixels. The across-scan misregistration for the Alabama scene is -0.66 pixels and thus needs correction. A 512 x 512 pixel area of the Pacific Ocean was corrected for the pixel offsets. Modulation transfer function analysis of the San Mateo Bridge using data from the San Francisco scene was accomplished.
NASA Astrophysics Data System (ADS)
Werner, C. L.; Wegmuller, U.; Strozzi, T.; Wiesmann, A.
2006-12-01
Principal contributors to the noise in differential SAR interferograms are temporal phase stability of the surface, geometry relating to baseline and surface slope, and propagation path delay variations due to tropospheric water vapor and the ionosphere. Time series analysis of multiple interferograms generated from a stack of SAR SLC images seeks to determine the deformation history of the surface while reducing errors. Only those scatterers within a resolution element that are stable and coherent for each interferometric pair contribute to the desired deformation signal. Interferograms with baselines exceeding 1/3 the critical baseline have substantial geometrical decorrelation for distributed targets. Short baseline pairs with multiple reference scenes can be combined using least-squares estimation to obtain a global deformation solution. Alternatively, point-like persistent scatterers can be identified in scenes that do not exhibit geometrical decorrelation associated with large baselines. In this approach interferograms are formed from a stack of SAR complex images using a single reference scene. Stable distributed scatterer pixels are excluded, however, due to the presence of large baselines. We apply both point-based and short-baseline methodologies and compare results for a stack of fine-beam Radarsat data acquired in 2002-2004 over a rapidly subsiding oil field near Lost Hills, CA. We also investigate the density of point-like scatterers with respect to image resolution. The primary difficulty encountered when applying time series methods is phase unwrapping errors due to spatial and temporal gaps. Phase unwrapping requires sufficient spatial and temporal sampling. Increasing the SAR range bandwidth increases the range resolution as well as increasing the critical interferometric baseline that defines the required satellite orbital tube diameter. Sufficient spatial sampling also permits unwrapping because of the reduced phase/pixel gradient.
Short time intervals further reduce the differential phase due to deformation when the deformation is continuous. Lower frequency systems (L- vs. C-Band) substantially improve the ability to unwrap the phase correctly by directly reducing both interferometric phase amplitude and temporal decorrelation.
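The short-baseline combination step described above amounts to solving a linear system: each interferogram observes the phase difference between two acquisition dates, and least squares recovers the per-date phase history. A toy sketch of that estimation; the date network and phase values are invented for illustration, not taken from the Radarsat stack:

```python
import numpy as np

# Each short-baseline pair (i, j) observes phase[j] - phase[i].
pairs = [(0, 1), (1, 2), (0, 2), (2, 3)]        # (reference, secondary) date indices
true_phase = np.array([0.0, 0.4, 1.0, 1.5])     # simulated deformation phase (rad)
obs = np.array([true_phase[j] - true_phase[i] for i, j in pairs])

# Design matrix: one row per interferogram, one column per date.
n_dates = 4
A = np.zeros((len(pairs), n_dates))
for row, (i, j) in enumerate(pairs):
    A[row, i], A[row, j] = -1.0, 1.0
A = A[:, 1:]                                    # fix date 0 as the zero-phase reference

# Least-squares solution recovers the phase history at dates 1..3.
history = np.linalg.lstsq(A, obs, rcond=None)[0]
```

With a connected network the solution is unique up to the reference date; temporal gaps in the network make the system rank-deficient, which is one source of the unwrapping difficulty the abstract mentions.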
Sensor fusion of range and reflectance data for outdoor scene analysis
NASA Technical Reports Server (NTRS)
Kweon, In So; Hebert, Martial; Kanade, Takeo
1988-01-01
In recognizing objects in an outdoor scene, range and reflectance (or color) data provide complementary information. Results of experiments in recognizing outdoor scenes containing roads, trees, and cars are presented. The recognition program uses range and reflectance data obtained by a scanning laser range finder, as well as color data from a color TV camera. After segmentation of each image into primitive regions, models of objects are matched using various properties.
Comparison of algorithms for blood stain detection applied to forensic hyperspectral imagery
NASA Astrophysics Data System (ADS)
Yang, Jie; Messinger, David W.; Mathew, Jobin J.; Dube, Roger R.
2016-05-01
Blood stains are among the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Early detection of blood stains is particularly important since the blood reacts physically and chemically with air and materials over time. Accurate identification of blood remnants, including regions that might have been intentionally cleaned, is an important aspect of forensic investigation. Hyperspectral imaging might be a potential method to detect blood stains because it is non-contact and provides substantial spectral information that can be used to identify regions in a scene with trace amounts of blood. Crime scenes can be highly complex, given the range of material types and conditions on which blood stains may appear. Some stains are hard to detect by the unaided eye, especially if a conscious effort to clean the scene has occurred (we refer to these as "latent" blood stains). In this paper we present the initial results of a study of the use of hyperspectral imaging algorithms for blood detection in complex scenes. We describe a hyperspectral imaging system which generates images covering the 400 nm - 700 nm visible range with a spectral resolution of 10 nm. Three image sets of 31 wavelength bands were generated using this camera for a simulated indoor crime scene in which blood stains were placed on a T-shirt and walls. To detect blood stains in the scene, Principal Component Analysis (PCA), Subspace Reed Xiaoli Detection (SRXD), and Topological Anomaly Detection (TAD) algorithms were used. Comparison of the three hyperspectral image analysis techniques shows that TAD is most suitable for detecting blood stains and discovering latent blood stains.
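Of the detectors compared, the Reed-Xiaoli family scores each pixel by its Mahalanobis distance from the scene's background statistics. A minimal global-RX sketch on a synthetic 31-band cube; the data are simulated, not from the paper's camera, and the real SRXD variant additionally projects out a background subspace:

```python
import numpy as np

def rx_scores(cube):
    """Global RX anomaly detector: Mahalanobis distance of each pixel
    spectrum from the scene mean, under the scene covariance."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(b)   # regularized covariance
    icov = np.linalg.inv(cov)
    scores = np.einsum('ij,jk,ik->i', Xc, icov, Xc)     # squared Mahalanobis distance
    return scores.reshape(h, w)

rng = np.random.default_rng(0)
cube = rng.normal(size=(16, 16, 31))   # simulated 31-band background clutter
cube[2, 3] += 8.0                      # one spectrally anomalous "stain" pixel
scores = rx_scores(cube)
hotspot = np.unravel_index(scores.argmax(), scores.shape)
```

The anomalous pixel gets by far the largest score, so `hotspot` recovers its location.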
Ultra Rapid Object Categorization: Effects of Level, Animacy and Context
Praß, Maren; Grimsen, Cathleen; König, Martina; Fahle, Manfred
2013-01-01
It is widely agreed that in object categorization bottom-up and top-down influences interact. How top-down processes affect categorization has been primarily investigated in isolation, with only one higher level process at a time being manipulated. Here, we investigate the combination of different top-down influences (by varying the level of category, the animacy and the background of the object) and their effect on rapid object categorization. Subjects participated in a two-alternative forced choice rapid categorization task, while we measured accuracy and reaction times. Subjects had to categorize objects on the superordinate, basic or subordinate level. Objects belonged to the category animal or vehicle and each object was presented on a gray, congruent (upright) or incongruent (inverted) background. The results show that each top-down manipulation impacts object categorization and that they interact strongly. The best categorization was achieved on the superordinate level, providing no advantage for basic level in rapid categorization. Categorization between vehicles was faster than between animals on the basic level and vice versa on the subordinate level. Objects in homogenous gray background (context) yielded better overall performance than objects embedded in complex scenes, an effect most prominent on the subordinate level. An inverted background had no negative effect on object categorization compared to upright scenes. These results show how different top-down manipulations, such as category level, category type and background information, are related. We discuss the implications of top-down interactions on the interpretation of categorization results. PMID:23840810
Local Planning Considerations for the Wildland-Structural Intermix in the Year 2000
Robert L. Irwin
1987-01-01
California's foothill counties are the scene of rapid development. All types of construction in former wildlands are creating an intermix of wildland-structures-wildland that differs from the traditional "urban-wildland interface." The fire and structural environment for seven counties is described. Fire statistics are compared with growth patterns...
Palaeomicrobiology meets forensic medicine: time as a fourth-dimension for the crime scene
Bazaj, A.; Turrina, S.; De Leo, D.; Cornaglia, G.
2015-01-01
The unrelenting progress of laboratory techniques is rapidly unleashing the huge potential of palaeomicrobiology. That bodies are often found in poor condition is common to both palaeomicrobiology and forensic medicine, and this might stimulate the two fields towards a joint quest to extract reproducible and reliable data from such specimens. PMID:25830027
Film Scenes in Interdisciplinary Education: Teaching the Internet of Things
ERIC Educational Resources Information Center
Hwang, Young-mee; Kim, Kwang-sun; Im, Tami
2017-01-01
The Internet of Things (IoT) is gaining importance in education owing to its rapid development. This study addresses the importance of interdisciplinary education between technology and the humanities. The use of films as a teaching resource is suitable for interdisciplinary education because films represent creative forecasts and predictions on…
A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model
Lin, Qing; Han, Youngjoon
2014-01-01
A wearable guidance system is designed to provide context-dependent guidance messages to blind people while they traverse local pathways. The system is composed of three parts: moving scene analysis, walking context estimation and audio message delivery. The combination of a downward-pointing laser scanner and a camera is used to solve the challenging problem of moving scene analysis. By integrating laser data profiles and image edge profiles, a multimodal profile model is constructed to estimate jointly the ground plane, object locations and object types, by using a Bayesian network. The outputs of the moving scene analysis are further employed to estimate the walking context, which is defined as a fuzzy safety level that is inferred through a fuzzy logic model. Depending on the estimated walking context, the audio messages that best suit the current context are delivered to the user in a flexible manner. The proposed system is tested under various local pathway scenes, and the results confirm its efficiency in assisting blind people to attain autonomous mobility. PMID:25302812
Investigation of several aspects of LANDSAT-4 data quality
NASA Technical Reports Server (NTRS)
Wrigley, R. C. (Principal Investigator)
1983-01-01
No insurmountable problems in change detection analysis were found when portions of scenes collected simultaneously by LANDSAT 4 MSS and either LANDSAT 2 or 3 were compared. The cause of the periodic noise in LANDSAT 4 MSS images, which had an RMS value of approximately 2 DN, should be corrected in the LANDSAT D instrument before its launch. Analysis of the P-tape of the Arkansas scene shows that bands within the same focal plane are very well registered, except for the thermal band, which was misregistered by approximately three 28.5-meter pixels in both directions. It is possible to derive tight confidence bounds for the registration errors. Preliminary analyses of the Sacramento and Arkansas scenes reveal a very high degree of consistency with earlier results for bands 3 vs 1, 3 vs 4, and 3 vs 5. Results are presented in table form. It is suggested that attention be given to the standard deviations of registration errors to judge whether or not they will be within specification once any known mean registration errors are corrected. Techniques used for MTF analysis of a Washington scene produced noisy results.
NASA Astrophysics Data System (ADS)
Sun, Z.; Xu, Y.; Hoegner, L.; Stilla, U.
2018-05-01
In this work, we propose a classification method designed for the labeling of MLS point clouds, with detrended geometric features extracted from the points of the supervoxel-based local context. To achieve the analysis of complex 3D urban scenes, acquired points of the scene should be tagged with individual labels of different classes. Thus, assigning a unique label to the points of an object that belong to the same category plays an essential role in the entire 3D scene analysis workflow. Although plenty of studies in this field have been reported, the task remains challenging. Specifically, in this work: 1) A novel geometric feature extraction method, detrending the redundant and non-salient information in the local context, is proposed, which is proved to be effective for extracting local geometric features from the 3D scene. 2) Instead of using individual points as basic elements, the supervoxel-based local context is designed to encapsulate geometric characteristics of points, providing a flexible and robust solution for feature extraction. 3) Experiments using a complex urban scene with manually labeled ground truth are conducted, and the performance of the proposed method is compared with that of other methods. With the testing dataset, we obtained an overall accuracy of 0.92 for assigning eight semantic classes.
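The overall-accuracy figure quoted above is simply the fraction of points whose predicted label matches the ground truth. A minimal sketch; the label arrays are toy values, not the MLS data:

```python
import numpy as np

def overall_accuracy(pred, truth):
    """Fraction of points labeled correctly, pooled over all classes."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    return float((pred == truth).mean())

# eight points, four example classes; two of eight are mislabeled
pred  = np.array([0, 1, 2, 2, 1, 0, 3, 3])
truth = np.array([0, 1, 2, 1, 1, 0, 3, 2])
acc = overall_accuracy(pred, truth)  # 6 of 8 correct -> 0.75
```

Overall accuracy pools all classes together; a per-class confusion matrix is the usual companion when class frequencies are imbalanced, as they typically are in urban scenes.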
NASA Technical Reports Server (NTRS)
Thorne, J. F.
1977-01-01
State agencies need rapid, synoptic and inexpensive methods for lake assessment to comply with the 1972 Amendments to the Federal Water Pollution Control Act. Low altitude aerial photography may be useful in providing information on algal type and quantity. Photography must be calibrated properly to remove sources of error including airlight, surface reflectance and scene-to-scene illumination differences. A 550-nm narrow wavelength band black and white photographic exposure provided a better correlation to algal biomass than either red or infrared photographic exposure. Of all the biomass parameters tested, depth-integrated chlorophyll a concentration correlated best to remote sensing data. Laboratory-measured reflectance of selected algae indicate that different taxonomic classes of algae may be discriminated on the basis of their reflectance spectra.
Additional Crime Scenes for Projectile Motion Unit
NASA Astrophysics Data System (ADS)
Fullerton, Dan; Bonner, David
2011-12-01
Building students' ability to transfer physics fundamentals to real-world applications establishes a deeper understanding of underlying concepts while enhancing student interest. Forensic science offers a great opportunity for students to apply physics to highly engaging, real-world contexts. Integrating these opportunities into inquiry-based problem solving in a team environment provides a terrific backdrop for fostering communication, analysis, and critical thinking skills. One such activity, inspired jointly by the museum exhibit "CSI: The Experience" and David Bonner's TPT article "Increasing Student Engagement and Enthusiasm: A Projectile Motion Crime Scene," provides students with three different crime scenes, each requiring an analysis of projectile motion. In this lesson students socially engage in higher-order analysis of two-dimensional projectile motion problems by collecting information from 3-D scale models and collaborating with one another on its interpretation, in addition to diagramming and mathematical analysis typical of problem solving in physics.
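The projectile-motion analysis in such a crime-scene activity typically runs the kinematics backwards: from a measured fall height and a horizontal landing distance, infer the launch speed. A sketch of that calculation; the numbers are illustrative, not from the lesson:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def fall_time(height, g=G):
    """Time to fall `height` metres starting with zero vertical velocity."""
    return math.sqrt(2.0 * height / g)

def launch_speed(height, distance, g=G):
    """Horizontal launch speed implied by fall height and landing distance."""
    return distance / fall_time(height, g)

# e.g. an object leaving a 1.2 m table and landing 2.4 m away
v = launch_speed(1.2, 2.4)
```

Students can check the formula against the scale model: for a fall height of 4.905 m the fall time is exactly 1 s, so the landing distance equals the launch speed numerically.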
Neurotoxic lesions of ventrolateral prefrontal cortex impair object-in-place scene memory
Wilson, Charles R E; Gaffan, David; Mitchell, Anna S; Baxter, Mark G
2007-01-01
Disconnection of the frontal lobe from the inferotemporal cortex produces deficits in a number of cognitive tasks that require the application of memory-dependent rules to visual stimuli. The specific regions of frontal cortex that interact with the temporal lobe in performance of these tasks remain undefined. One capacity that is impaired by frontal–temporal disconnection is rapid learning of new object-in-place scene problems, in which visual discriminations between two small typographic characters are learned in the context of different visually complex scenes. In the present study, we examined whether neurotoxic lesions of ventrolateral prefrontal cortex in one hemisphere, combined with ablation of inferior temporal cortex in the contralateral hemisphere, would impair learning of new object-in-place scene problems. Male macaque monkeys learned 10 or 20 new object-in-place problems in each daily test session. Unilateral neurotoxic lesions of ventrolateral prefrontal cortex produced by multiple injections of a mixture of ibotenate and N-methyl-d-aspartate did not affect performance. However, when disconnection from inferotemporal cortex was completed by ablating this region contralateral to the neurotoxic prefrontal lesion, new learning was substantially impaired. Sham disconnection (injecting saline instead of neurotoxin contralateral to the inferotemporal lesion) did not affect performance. These findings support two conclusions: first, that the ventrolateral prefrontal cortex is a critical area within the frontal lobe for scene memory; and second, the effects of ablations of prefrontal cortex can be confidently attributed to the loss of cell bodies within the prefrontal cortex rather than to interruption of fibres of passage through the lesioned area. PMID:17445247
Neural Correlates of Divided Attention in Natural Scenes.
Fagioli, Sabrina; Macaluso, Emiliano
2016-09-01
Individuals are able to split attention between separate locations, but divided spatial attention incurs the additional requirement of monitoring multiple streams of information. Here, we investigated divided attention using photos of natural scenes, where the rapid categorization of familiar objects and prior knowledge about the likely positions of objects in the real world might affect the interplay between these spatial and nonspatial factors. Sixteen participants underwent fMRI during an object detection task. They were presented with scenes containing either a person or a car, located on the left or right side of the photo. Participants monitored either one or both object categories, in one or both visual hemifields. First, we investigated the interplay between spatial and nonspatial attention by comparing conditions of divided attention between categories and/or locations. We then assessed the contribution of top-down processes versus stimulus-driven signals by separately testing the effects of divided attention in target and nontarget trials. The results revealed activation of a bilateral frontoparietal network when dividing attention between the two object categories versus attending to a single category but no main effect of dividing attention between spatial locations. Within this network, the left dorsal premotor cortex and the left intraparietal sulcus were found to combine task- and stimulus-related signals. These regions showed maximal activation when participants monitored two categories at spatially separate locations and the scene included a nontarget object. We conclude that the dorsal frontoparietal cortex integrates top-down and bottom-up signals in the presence of distractors during divided attention in real-world scenes.
Database improvements for motor vehicle/bicycle crash analysis
Lusk, Anne C; Asgarzadeh, Morteza; Farvid, Maryam S
2015-01-01
Background Bicycling is healthy but needs to be safer for more people to ride. Police crash templates are designed for reporting crashes between motor vehicles, but not between motor vehicles and bicycles. If written/drawn bicycle-crash-scene details exist, these are not entered into spreadsheets. Objective To assess which bicycle-crash-scene data might be added to spreadsheets for analysis. Methods Police crash templates from 50 states were analysed. Reports for 3350 motor vehicle/bicycle crashes (2011) were obtained for the New York City area and 300 cases selected (with drawings and on roads with sharrows, bike lanes, cycle tracks and no bike provisions). Crashes were redrawn and new bicycle-crash-scene details were coded and entered into the existing spreadsheet. The association between severity of injuries and bicycle-crash-scene codes was evaluated using multiple logistic regression. Results Police templates only consistently include pedal-cyclist and helmet. Bicycle-crash-scene coded variables for templates could include: 4 bicycle environments, 18 vehicle impact points (opened doors and mirrors), 4 bicycle impact points, motor vehicle/bicycle crash patterns, in/out of the bicycle environment and bike-relevant motor vehicle categories. A test of including these variables suggested that, with bicyclists who had minor injuries as the control group, bicyclists on roads with bike lanes riding outside the lane had a lower likelihood of severe injuries (OR 0.40, 95% CI 0.16 to 0.98) compared with bicyclists riding on roads without bicycle facilities. Conclusions Police templates should include additional bicycle-crash-scene codes for entry into spreadsheets. Crash analysis, including with big data, could then be conducted on bicycle environments, motor vehicle potential impact points/doors/mirrors, bicycle potential impact points, motor vehicle characteristics, location and injury. PMID:25835304
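The odds ratio reported above (OR 0.40, 95% CI 0.16 to 0.98) can be illustrated with a minimal unadjusted calculation. The sketch below computes an odds ratio and a Woolf (log-normal) 95% confidence interval from a 2x2 table; the counts are invented for illustration and are not the study's data, and the paper itself used multiple logistic regression rather than this crude method.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
    a = exposed/outcome, b = exposed/no outcome,
    c = unexposed/outcome, d = unexposed/no outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (severe vs. minor injuries, riding outside a bike
# lane vs. no bicycle facility) chosen only to illustrate the arithmetic.
or_, lo, hi = odds_ratio_ci(8, 50, 20, 50)
# → OR ≈ 0.40, CI ≈ (0.16, 0.99)
```

The confidence interval is computed on the log scale because the sampling distribution of the odds ratio is strongly skewed; exponentiating the symmetric log-scale bounds yields the familiar asymmetric interval.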
NASA Astrophysics Data System (ADS)
Lari, Z.; El-Sheimy, N.
2017-09-01
In recent years, the increasing incidence of climate-related disasters has tremendously affected our environment. In order to effectively manage and reduce the dramatic impacts of such events, the development of timely disaster management plans is essential. Since these disasters are spatial phenomena, timely provision of geospatial information is crucial for effective development of response and management plans. Due to the inaccessibility of affected areas and the limited budget of first responders, timely acquisition of the required geospatial data for these applications is usually possible only using low-cost imaging and georeferencing sensors mounted on unmanned platforms. Despite rapid collection of the required data using these systems, available processing techniques are not yet capable of delivering geospatial information to responders and decision makers in a timely manner. To address this issue, this paper introduces a new technique for dense 3D reconstruction of affected scenes which can deliver and improve the needed geospatial information incrementally. The approach builds on prior 3D knowledge of the scene and employs computationally efficient 2D triangulation, feature description, feature matching and point verification techniques to optimize and speed up the dense 3D scene reconstruction procedure. To verify the feasibility and computational efficiency of the proposed approach, an experiment using a set of consecutive images collected onboard a UAV platform and prior low-density airborne laser scanning over the same area is conducted, and step-by-step results are provided. A comparative analysis of the proposed approach and an available image-based dense reconstruction technique is also conducted to demonstrate the computational efficiency and competency of this technique for delivering geospatial information with pre-specified accuracy.
Enhancing the performance of regional land cover mapping
NASA Astrophysics Data System (ADS)
Wu, Weicheng; Zucca, Claudio; Karam, Fadi; Liu, Guangping
2016-10-01
Different pixel-based, object-based and subpixel-based methods such as time-series analysis, decision trees, and different supervised approaches have been proposed to conduct land use/cover classification. However, despite their proven advantages in small dataset tests, their performance is variable and less satisfactory when dealing with large datasets, particularly for regional-scale mapping with high resolution data, due to the complexity and diversity of landscapes and land cover patterns and the unacceptably long processing time. The objective of this paper is to demonstrate the comparatively highest performance of an operational approach based on the integration of multisource information, ensuring high mapping accuracy in large areas with acceptable processing time. The information used includes phenologically contrasted multiseasonal and multispectral bands, vegetation index, land surface temperature, and topographic features. The performance of different conventional and machine learning classifiers, namely Mahalanobis Distance (MD), Maximum Likelihood (ML), Artificial Neural Networks (ANNs), Support Vector Machines (SVMs) and Random Forests (RFs), was compared using the same datasets in the same IDL (Interactive Data Language) environment. An Eastern Mediterranean area with complex landscape and steep climate gradients was selected to test and develop the operational approach. The results showed that the SVM and RF classifiers produced the most accurate mapping at local scale (up to 96.85% Overall Accuracy) but were very time-consuming in whole-scene classification (more than five days per scene), whereas ML fulfilled the task rapidly (about 10 min per scene) with satisfying accuracy (94.2-96.4%). Thus, the approach composed of integration of seasonally contrasted multisource data and sampling at subclass level, followed by an ML classification, is a suitable candidate to become an operational and effective regional land cover mapping method.
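As a concrete reference point, the Mahalanobis Distance (MD) rule from the comparison above can be sketched as a minimum-distance-to-class-mean classifier over per-class covariances. The two-band reflectance samples below are invented for illustration; an operational classifier would be trained on the full multisource feature stack described in the abstract.

```python
def inv2(m):
    """Invert a 2x2 matrix given as nested lists."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

def mahalanobis2(x, mu, cov_inv):
    """Squared Mahalanobis distance of 2-vector x from mean mu."""
    dx = (x[0] - mu[0], x[1] - mu[1])
    return (dx[0] * (cov_inv[0][0] * dx[0] + cov_inv[0][1] * dx[1])
            + dx[1] * (cov_inv[1][0] * dx[0] + cov_inv[1][1] * dx[1]))

def class_stats(samples):
    """Sample mean and inverse covariance for a list of 2-band pixels."""
    n = len(samples)
    mu = (sum(s[0] for s in samples) / n, sum(s[1] for s in samples) / n)
    cxx = sum((s[0] - mu[0]) ** 2 for s in samples) / (n - 1)
    cyy = sum((s[1] - mu[1]) ** 2 for s in samples) / (n - 1)
    cxy = sum((s[0] - mu[0]) * (s[1] - mu[1]) for s in samples) / (n - 1)
    return mu, inv2([[cxx, cxy], [cxy, cyy]])

# Hypothetical two-band training samples per class (not real imagery).
water = [(0.10, 0.05), (0.12, 0.06), (0.09, 0.07), (0.11, 0.04)]
forest = [(0.45, 0.60), (0.50, 0.65), (0.47, 0.58), (0.44, 0.62)]
classes = {"water": class_stats(water), "forest": class_stats(forest)}

def classify(pixel):
    return min(classes, key=lambda c: mahalanobis2(pixel, *classes[c]))
```

Unlike plain Euclidean minimum distance, the Mahalanobis rule normalizes each class by its own covariance, so elongated spectral clusters are handled correctly.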
Seamon, Mark J; Doane, Stephen M; Gaughan, John P; Kulp, Heather; D'Andrea, Anthony P; Pathak, Abhijit S; Santora, Thomas A; Goldberg, Amy J; Wydro, Gerald C
2013-05-01
Advanced Life Support (ALS) providers may perform more invasive prehospital procedures, while Basic Life Support (BLS) providers offer stabilisation care and often "scoop and run". We hypothesised that prehospital interventions by urban ALS providers prolong prehospital time and decrease survival in penetrating trauma victims. We prospectively analysed 236 consecutive ambulance-transported, penetrating trauma patients at our urban Level-1 trauma centre (6/2008-12/2009). Inclusion criteria were ICU admission, length of stay ≥2 days, or in-hospital death. Demographics, clinical characteristics, and outcomes were compared between ALS and BLS patients. Single and multiple variable logistic regression analysis determined predictors of hospital survival. Of 236 patients, 71% were transported by ALS and 29% by BLS. When ALS and BLS patients were compared, no differences in age, penetrating mechanism, scene GCS score, Injury Severity Score, or need for emergency surgery were detected (p>0.05). Patients transported by ALS units more often underwent prehospital interventions (97% vs. 17%; p<0.01), including endotracheal intubation, needle thoracostomy, cervical collar, IV placement, and crystalloid resuscitation. While ALS ambulance on-scene time was significantly longer than that of BLS (p<0.01), total prehospital time was not (p=0.98) despite these prehospital interventions (1.8 ± 1.0 per ALS patient vs. 0.2 ± 0.5 per BLS patient; p<0.01). Overall, 69.5% of ALS patients and 88.4% of BLS patients (p<0.01) survived to hospital discharge. Prehospital resuscitative interventions by ALS units performed on penetrating trauma patients may lengthen on-scene time but do not significantly increase total prehospital time. Regardless, these interventions did not appear to benefit our rapidly transported, urban penetrating trauma patients. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Jia, Jie; Kim, Hae-Kwang
2007-12-01
In this paper, we propose a method of 3D-graphics-to-video encoding and streaming embedded into a remote interactive 3D visualization system for rapidly representing a 3D scene on mobile devices without having to download it from the server. In particular, a 3D-graphics-to-video framework is presented that increases the visual quality of regions of interest (ROI) in the video by allocating more bits to the ROI during H.264 video encoding. The ROI are identified by projecting 3D objects onto a 2D plane during rasterization. The system allows users to navigate the 3D scene and interact with objects of interest to query their descriptions. We developed an adaptive media streaming server that can provide an adaptive video stream, in terms of object-based quality, to the client according to the user's preferences and the variation of network bandwidth. Results show that with ROI mode selection the PSNR of the test sample changes only slightly while the visual quality of objects increases evidently.
Markman, Adam; Shen, Xin; Hua, Hong; Javidi, Bahram
2016-01-15
An augmented reality (AR) smartglass display combines real-world scenes with digital information enabling the rapid growth of AR-based applications. We present an augmented reality-based approach for three-dimensional (3D) optical visualization and object recognition using axially distributed sensing (ADS). For object recognition, the 3D scene is reconstructed, and feature extraction is performed by calculating the histogram of oriented gradients (HOG) of a sliding window. A support vector machine (SVM) is then used for classification. Once an object has been identified, the 3D reconstructed scene with the detected object is optically displayed in the smartglasses allowing the user to see the object, remove partial occlusions of the object, and provide critical information about the object such as 3D coordinates, which are not possible with conventional AR devices. To the best of our knowledge, this is the first report on combining axially distributed sensing with 3D object visualization and recognition for applications to augmented reality. The proposed approach can have benefits for many applications, including medical, military, transportation, and manufacturing.
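The recognition pipeline described above (HOG features over a sliding window, classified by an SVM) can be pictured with a heavily simplified, dependency-free sketch: a single normalized histogram of gradient orientations over a whole patch. Real HOG adds cell/block structure and contrast normalization, and the SVM stage is omitted here; both simplifications are assumptions of this illustration, not the authors' implementation.

```python
import math

def grad_orientation_hist(img, bins=8):
    """Normalized histogram of unsigned gradient orientations over a
    2D intensity grid -- a much-simplified stand-in for a HOG descriptor
    (no cells, no block normalization)."""
    h = [0.0] * bins
    rows, cols = len(img), len(img[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            ang = math.atan2(gy, gx) % math.pi   # fold to [0, pi)
            h[min(int(ang / math.pi * bins), bins - 1)] += mag
    total = sum(h) or 1.0
    return [v / total for v in h]

# Two synthetic 8x8 "patches": vertical stripes vs. horizontal stripes.
vert = [[(x // 2) % 2 for x in range(8)] for _ in range(8)]
horz = [[(y // 2) % 2 for _ in range(8)] for y in range(8)]
fv, fh = grad_orientation_hist(vert), grad_orientation_hist(horz)
```

The vertical-stripe patch concentrates its gradient energy in the horizontal-orientation bin and the horizontal-stripe patch in the vertical bin, which is exactly the property a downstream classifier exploits to separate the two patterns.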
How humans use visual optic flow to regulate stepping during walking.
Salinas, Mandy M; Wilken, Jason M; Dingwell, Jonathan B
2017-09-01
Humans use visual optic flow to regulate average walking speed. Among the many possible strategies available, healthy humans walking on motorized treadmills allow fluctuations in stride length (L_n) and stride time (T_n) to persist across multiple consecutive strides, but rapidly correct deviations in stride speed (S_n = L_n/T_n) at each successive stride, n. Several experiments verified this stepping strategy when participants walked with no optic flow. This study determined how removing or systematically altering optic flow influenced people's stride-to-stride stepping control strategies. Participants walked on a treadmill with a virtual reality (VR) scene projected onto a 3 m tall, 180° semi-cylindrical screen in front of the treadmill. Five conditions were tested: blank screen ("BLANK"), static scene ("STATIC"), or moving scene with optic flow speed slower than ("SLOW"), matched to ("MATCH"), or faster than ("FAST") walking speed. Participants took shorter and faster strides and demonstrated increased stepping variability during the BLANK condition compared to the other conditions. Thus, when visual information was removed, individuals appeared to walk more cautiously. Optic flow influenced both how quickly humans corrected stride speed deviations and how successful they were at enacting this strategy to try to maintain approximately constant speed at each stride. These results were consistent with Weber's law: healthy adults more rapidly corrected stride speed deviations in the no-optic-flow condition (the lower-intensity stimulus) compared to contexts with non-zero optic flow. These results demonstrate how the temporal characteristics of optic flow influence the ability to correct speed fluctuations during walking. Copyright © 2017 Elsevier B.V. All rights reserved.
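The stepping strategy above (stride speed S_n = L_n/T_n, with speed deviations corrected at each stride while length and time drift) can be sketched as a simple proportional correction. The gain and stride data below are illustrative assumptions, not parameters fitted to the study.

```python
def stride_speeds(lengths, times):
    # S_n = L_n / T_n for each stride n
    return [L / T for L, T in zip(lengths, times)]

def correct_times(lengths, times, target, gain=0.9):
    """Remove a fraction `gain` of each stride's speed deviation by
    adjusting stride time only (stride length is left to drift)."""
    new_times = []
    for L, T in zip(lengths, times):
        deviation = L / T - target
        new_times.append(L / (target + (1 - gain) * deviation))
    return new_times

lengths = [1.30, 1.35, 1.25, 1.32]   # m, hypothetical strides
times = [1.00, 1.08, 0.95, 1.01]     # s, hypothetical strides
before = stride_speeds(lengths, times)
after = stride_speeds(lengths, correct_times(lengths, times, target=1.30))
```

After one correction step, every stride speed lies closer to the target than before, while the individual stride lengths are untouched: a caricature of "tight control of speed, loose control of its components."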
Discourse Analysis of Encouragement in Healthcare Manga
ERIC Educational Resources Information Center
Matsuoka, Rieko; Smith, Ian; Uchimura, Mari
2011-01-01
This article examines how healthcare professionals use encouragement. Focusing on GAMBARU ["to try hard"], forty-one scenes were collected from healthcare manga. Each scene of encouragement was analyzed from three perspectives; the contextual background of the communication, the relationship with the patients and the patients' response…
Progress in high-level exploratory vision
NASA Astrophysics Data System (ADS)
Brand, Matthew
1993-08-01
We have been exploring the hypothesis that vision is an explanatory process, in which causal and functional reasoning about potential motion plays an intimate role in mediating the activity of low-level visual processes. In particular, we have explored two of the consequences of this view for the construction of purposeful vision systems: Causal and design knowledge can be used to (1) drive focus of attention, and (2) choose between ambiguous image interpretations. An important result of visual understanding is an explanation of the scene's causal structure: How action is originated, constrained, and prevented, and what will happen in the immediate future. In everyday visual experience, most action takes the form of motion, and most causal analysis takes the form of dynamical analysis. This is even true of static scenes, where much of a scene's interest lies in how possible motions are arrested. This paper describes our progress in developing domain theories and visual processes for the understanding of various kinds of structured scenes, including structures built out of children's constructive toys and simple mechanical devices.
Coding of navigational affordances in the human visual system
Epstein, Russell A.
2017-01-01
A central component of spatial navigation is determining where one can and cannot go in the immediate environment. We used fMRI to test the hypothesis that the human visual system solves this problem by automatically identifying the navigational affordances of the local scene. Multivoxel pattern analyses showed that a scene-selective region of dorsal occipitoparietal cortex, known as the occipital place area, represents pathways for movement in scenes in a manner that is tolerant to variability in other visual features. These effects were found in two experiments: One using tightly controlled artificial environments as stimuli, the other using a diverse set of complex, natural scenes. A reconstruction analysis demonstrated that the population codes of the occipital place area could be used to predict the affordances of novel scenes. Taken together, these results reveal a previously unknown mechanism for perceiving the affordance structure of navigable space. PMID:28416669
An Analysis of the High Frequency Vibrations in Early Thematic Mapper Scenes
NASA Technical Reports Server (NTRS)
Kogut, J.; Larduinat, E.
1984-01-01
The potential effects of high frequency vibrations on the final Thematic Mapper (TM) image are evaluated for 26 scenes. The angular displacements of the TM detectors from their nominal pointing directions, as measured by the TM Angular Displacement Sensor (ADS) and the spacecraft Dry Rotor Inertial Reference Unit (DRIRU), give data on the along-scan and cross-scan high frequency vibrations present in each scan of a scene. These measurements are used to find the maximum overlap and underlap between successive scans, and to analyze the spectrum of the high frequency vibrations acting on the detectors. The Fourier spectrum of the along-scan and cross-scan vibrations for each scene was also evaluated. The spectra of the scenes examined indicate that the high frequency vibrations arise primarily from the motion of the TM and MSS mirrors, and that their amplitudes are well within expected ranges.
Touroo, R; Fitch, A
2016-09-01
Although it is the obligation of the veterinary forensic pathologist to be competent in identifying, collecting, and preserving evidence from the body, it is also necessary for them to understand the relevance of conditions on the crime scene. The body is just one piece of the puzzle that needs to be considered when determining the cause of death. The information required for a complete postmortem analysis should also include details of the animal's environment and items of evidence present on the crime scene. These factors will assist the veterinary forensic pathologist in the interpretation of necropsy findings. Therefore, the veterinary forensic pathologist needs to have a basic understanding of how the crime scene is processed, as well as the role of the forensic veterinarian on scene. In addition, the veterinary forensic pathologist must remain unbiased, necessitating an understanding of evidence maintenance and authentication. © The Author(s) 2016.
Behind the Scenes of Music Education in China: A Survey of Historical Memory
ERIC Educational Resources Information Center
Ho, Wai-Chung
2013-01-01
This study explores how the government of mainland China values Chinese nationalism as a component of its historical memory and traces its relationship with music education from the twentieth century to the global age within broader social contexts. In a rapidly commercializing and modernizing China, nationalism remains the main driving force…
Multilingual Codeswitching in Quebec Rap: Poetry, Pragmatics and Performativity
ERIC Educational Resources Information Center
Sarkar, Mela; Winer, Lise
2006-01-01
Quebec rap lyrics stand out on the world Hip-Hop scene by virtue of the ease and rapidity with which performers in this multilingual, multiethnic youth community codeswitch, frequently among three or more languages or language varieties (usually over a French and/or English base) in the same song. We construct a framework for understanding…
Wildfires and Forest Development in Tropical and Subtropical Asia: Outlook for the Year 2000
Johann G. Goldammer
1987-01-01
California's foothill counties are the scene of rapid development. All types of construction in former wildlands are creating an intermix of wildland-structures-wildland that is different from the traditional "urban-wildland interface." The fire and structural environment for seven counties is described. Fire statistics are compared with growth patterns...
ERIC Educational Resources Information Center
Bassford, Marie L.; Crisp, Annette; O'Sullivan, Angela; Bacon, Joanne; Fowler, Mark
2016-01-01
Interactive experiences are rapidly becoming popular via the surge of "escape rooms"; part game and part theatre, the "escape" experience is exploding globally, having gone from zero offered at the outset of 2010 to at least 2800 different experiences available worldwide today. CrashEd is an interactive learning experience that…
Ghodrati, Masoud; Ghodousi, Mahrad; Yoonessi, Ali
2016-01-01
Humans are fast and accurate in categorizing complex natural images. It is, however, unclear what features of visual information are exploited by the brain to perceive images with such speed and accuracy. It has been shown that low-level contrast statistics of natural scenes can explain the variance of the amplitude of event-related potentials (ERPs) in response to rapidly presented images. In this study, we investigated the effect of these statistics on the frequency content of ERPs. We recorded ERPs from human subjects while they viewed natural images, each presented for 70 ms. Our results showed that Weibull contrast statistics, as a biologically plausible model, explained the variance of ERPs best, compared to the other image statistics that we assessed. Our time-frequency analysis revealed a significant correlation between these statistics and ERP power within the theta frequency band (~3-7 Hz). This is interesting, as the theta band is believed to be involved in context updating and semantic encoding. This correlation became significant at ~110 ms after stimulus onset and peaked at 138 ms. Our results show that not only the amplitude but also the frequency of neural responses can be modulated by low-level contrast statistics of natural images, and highlight their potential role in scene perception. PMID:28018197
Robust selectivity to two-object images in human visual cortex
Agam, Yigal; Liu, Hesheng; Papanastassiou, Alexander; Buia, Calin; Golby, Alexandra J.; Madsen, Joseph R.; Kreiman, Gabriel
2010-01-01
We can recognize objects in a fraction of a second in spite of the presence of other objects [1–3]. The responses in macaque areas V4 and inferior temporal cortex [4–15] to a neuron's preferred stimuli are typically suppressed by the addition of a second object within the receptive field (see however [16, 17]). How can this suppression be reconciled with rapid visual recognition in complex scenes? One option is that certain "special categories" are unaffected by other objects [18], but this leaves the problem unsolved for other categories. Another possibility is that serial attentional shifts help ameliorate the problem of distractor objects [19–21]. Yet psychophysical studies [1–3], scalp recordings [1] and neurophysiological recordings [14, 16, 22–24] suggest that the initial sweep of visual processing contains a significant amount of information. We recorded intracranial field potentials in human visual cortex during presentation of flashes of two-object images. Visual selectivity from temporal cortex during the initial ~200 ms was largely robust to the presence of other objects. We could train linear decoders on the responses to isolated objects and decode information in two-object images. These observations are compatible with parallel, hierarchical and feed-forward theories of rapid visual recognition [25] and may provide a neural substrate to begin to unravel rapid recognition in natural scenes. PMID:20417105
Kauffmann, Louise; Chauvin, Alan; Pichat, Cédric; Peyrin, Carole
2015-10-01
According to current models of visual perception, scenes are processed in terms of spatial frequencies following a predominantly coarse-to-fine processing sequence. Low spatial frequencies (LSF) reach high-order areas rapidly in order to activate plausible interpretations of the visual input. This triggers top-down facilitation that guides subsequent processing of high spatial frequencies (HSF) in lower-level areas such as the inferotemporal and occipital cortices. However, dynamic interactions underlying top-down influences on the occipital cortex have never been systematically investigated. The present fMRI study aimed to further explore the neural bases and effective connectivity underlying coarse-to-fine processing of scenes, particularly the role of the occipital cortex. We used sequences of six filtered scenes as stimuli, depicting coarse-to-fine or fine-to-coarse processing of scenes. Participants performed a categorization task on these stimuli (indoor vs. outdoor). Firstly, we showed that coarse-to-fine (compared to fine-to-coarse) sequences elicited stronger activation in the inferior frontal gyrus (in the orbitofrontal cortex), the inferotemporal cortex (in the fusiform and parahippocampal gyri), and the occipital cortex (in the cuneus). Dynamic causal modeling (DCM) was then used to infer effective connectivity between these regions. DCM results revealed that coarse-to-fine processing resulted in increased connectivity from the occipital cortex to the inferior frontal gyrus and from the inferior frontal gyrus to the inferotemporal cortex. Critically, we also observed an increase in connectivity strength from the inferior frontal gyrus to the occipital cortex, suggesting that top-down influences from frontal areas may guide processing of incoming signals. The present results support current models of visual perception and refine them by emphasizing the role of the occipital cortex as a cortical site for feedback projections in the neural network underlying coarse-to-fine processing of scenes. Copyright © 2015 Elsevier Inc. All rights reserved.
Neural representations of contextual guidance in visual search of real-world scenes.
Preston, Tim J; Guo, Fei; Das, Koel; Giesbrecht, Barry; Eckstein, Miguel P
2013-05-01
Exploiting scene context and object-object co-occurrence is critical in guiding eye movements and facilitating visual search, yet the mediating neural mechanisms are unknown. We used functional magnetic resonance imaging while observers searched for target objects in scenes and used multivariate pattern analyses (MVPA) to show that the lateral occipital complex (LOC) can predict the coarse spatial location of observers' expectations about the likely location of 213 different targets absent from the scenes. In addition, we found weaker but significant representations of context location in an area related to the orienting of attention (intraparietal sulcus, IPS) as well as a region related to scene processing (retrosplenial cortex, RSC). Importantly, the degree of agreement among 100 independent raters about the likely location to contain a target object in a scene correlated with LOC's ability to predict the contextual location while weaker but significant effects were found in IPS, RSC, the human motion area, and early visual areas (V1, V3v). When contextual information was made irrelevant to observers' behavioral task, the MVPA analysis of LOC and the other areas' activity ceased to predict the location of context. Thus, our findings suggest that the likely locations of targets in scenes are represented in various visual areas with LOC playing a key role in contextual guidance during visual search of objects in real scenes.
NASA Technical Reports Server (NTRS)
Franks, Shannon; Masek, Jeffrey G.; Headley, Rachel M.; Gasch, John; Arvidson, Terry
2009-01-01
The Global Land Survey (GLS) 2005 is a cloud-free, orthorectified collection of Landsat imagery acquired during the 2004-2007 epoch intended to support global land-cover and ecological monitoring. Due to the numerous complexities in selecting imagery for the GLS2005, NASA and the U.S. Geological Survey (USGS) sponsored the development of an automated scene selection tool, the Large Area Scene Selection Interface (LASSI), to aid in the selection of imagery for this data set. This innovative approach to scene selection applied a user-defined weighting system to various scene parameters: image cloud cover, image vegetation greenness, choice of sensor, and the ability of the Landsat 7 Scan Line Corrector (SLC)-off pair to completely fill image gaps, among others. The parameters considered in scene selection were weighted according to their relative importance to the data set, along with the algorithm's sensitivity to that weight. This paper describes the methodology and analysis that established the parameter weighting strategy, as well as the post-screening processes used in selecting the optimal data set for GLS2005.
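The user-defined weighting idea behind LASSI can be pictured as a weighted score summed over per-scene parameters, with the best-scoring candidate selected for each path/row. The parameter names, signs, and weights below are hypothetical stand-ins for illustration, not the actual GLS2005 configuration.

```python
# Hypothetical weights: cloud cover penalizes a scene, vegetation
# greenness and SLC-off gap-fill ability reward it.
WEIGHTS = {"cloud_cover": -0.5, "greenness": 0.3, "gap_fill": 0.2}

def scene_score(scene):
    """Weighted sum of the scene's parameter values."""
    return sum(w * scene[k] for k, w in WEIGHTS.items())

candidates = [
    {"id": "A", "cloud_cover": 0.10, "greenness": 0.80, "gap_fill": 0.95},
    {"id": "B", "cloud_cover": 0.40, "greenness": 0.90, "gap_fill": 0.99},
]
best = max(candidates, key=scene_score)
```

With these weights, the nearly cloud-free scene A outscores the greener but cloudier scene B, illustrating how the weighting (and the sensitivity analysis described above) shifts which imagery is selected.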
Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features.
Li, Linyi; Xu, Tingbao; Chen, Yun
2017-01-01
In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracted visual attention features through a multiscale process. And a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated by using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the others according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images.
Accuracy and usefulness of the AVOXimeter 4000 as routine analysis of carboxyhemoglobin.
Fujihara, Junko; Kinoshita, Hiroshi; Tanaka, Naoko; Yasuda, Toshihiro; Takeshita, Haruo
2013-07-01
The measurement of blood carboxyhemoglobin (CO-Hb) is important for determining the cause of death. The AVOXimeter 4000 (AVOX), a portable CO-oximeter, has the advantages of a low purchase price and operating cost, ease of operation, and rapid results. Little information is available on the usefulness of the AVOX for forensic samples, and the previous study investigated only six samples. Therefore, in this study, we confirmed the usefulness of the AVOX by comparing its results with data previously obtained using the double wavelength spectrophotometric method in autopsies. Regression analysis was performed between CO-Hb levels measured by the AVOX and those measured by the conventional double wavelength spectrophotometric method in postmortem blood samples: a significant correlation was observed. This study suggests that the AVOX is useful for analyzing postmortem blood, is suitable for routine forensic analysis, and can be applied at the crime scene. © 2013 American Academy of Forensic Sciences.
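The method comparison described above is an ordinary least-squares regression of one instrument's readings against the other's. A self-contained sketch with made-up CO-Hb values (not the study's data):

```python
import statistics

# Hypothetical paired CO-Hb measurements (%), invented for illustration.
spectro = [5.0, 12.0, 25.0, 40.0, 55.0, 72.0]   # double wavelength reference
avox = [5.4, 11.5, 26.0, 39.0, 56.5, 70.8]      # AVOXimeter 4000 readings

mx, my = statistics.mean(spectro), statistics.mean(avox)
sxx = sum((x - mx) ** 2 for x in spectro)
syy = sum((y - my) ** 2 for y in avox)
sxy = sum((x - mx) * (y - my) for x, y in zip(spectro, avox))
slope = sxy / sxx                      # least-squares slope
intercept = my - slope * mx
r = sxy / (sxx * syy) ** 0.5           # Pearson correlation coefficient
```

A slope near 1, an intercept near 0, and a high r are what "significant correlation between methods" amounts to numerically.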
A system for learning statistical motion patterns.
Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve
2006-09-01
Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast, accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.
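The clustering step can be illustrated with a stripped-down fuzzy K-means (fuzzy c-means) over 2-D pixel coordinates. This is a sketch of the standard algorithm on toy data, not the authors' implementation.

```python
import math

def fuzzy_kmeans(points, k, m=2.0, iters=50):
    """Standard fuzzy c-means over 2-D points (k >= 2, fuzzifier m > 1)."""
    # Deterministic spread initialization along the point list.
    centers = [points[round(i * (len(points) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        # Soft membership of each point in each cluster (standard FCM rule).
        u = []
        for p in points:
            d = [max(math.dist(p, c), 1e-12) for c in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(k)) for i in range(k)])
        # Update each center as the membership-weighted mean of all points.
        for i in range(k):
            w = [row[i] ** m for row in u]
            tw = sum(w)
            centers[i] = tuple(sum(wp * p[dim] for wp, p in zip(w, points)) / tw
                               for dim in range(len(points[0])))
    return centers

# Two well-separated blobs of "foreground pixel" coordinates:
pts = [(x, y) for x in range(3) for y in range(3)]
pts += [(x + 20.0, y + 20.0) for x in range(3) for y in range(3)]
centers = sorted(fuzzy_kmeans(pts, k=2))
```

Each recovered center sits at the centroid of one blob, mirroring how a cluster centroid tracks one moving object.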
A new approach to modeling the influence of image features on fixation selection in scenes
Nuthmann, Antje; Einhäuser, Wolfgang
2015-01-01
Which image characteristics predict where people fixate when memorizing natural images? To answer this question, we introduce a new analysis approach that combines a novel scene-patch analysis with generalized linear mixed models (GLMMs). Our method allows for (1) directly describing the relationship between continuous feature value and fixation probability, and (2) assessing each feature's unique contribution to fixation selection. To demonstrate this method, we estimated the relative contribution of various image features to fixation selection: luminance and luminance contrast (low-level features); edge density (a mid-level feature); visual clutter and image segmentation to approximate local object density in the scene (higher-level features). An additional predictor captured the central bias of fixation. The GLMM results revealed that edge density, clutter, and the number of homogeneous segments in a patch can independently predict whether image patches are fixated or not. Importantly, neither luminance nor contrast had an independent effect above and beyond what could be accounted for by the other predictors. Since the parcellation of the scene and the selection of features can be tailored to the specific research question, our approach allows for assessing the interplay of various factors relevant for fixation selection in scenes in a powerful and flexible manner. PMID:25752239
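The core of the analysis is modelling the fixation probability of a patch as a logistic function of continuous feature values. A minimal fixed-effects version on synthetic data follows; a real GLMM adds random effects for subjects and scenes, which are omitted here.

```python
import math, random

def fit_logistic(X, y, lr=0.1, epochs=300):
    """Fit a logistic regression by per-sample gradient ascent."""
    w = [0.0] * (len(X[0]) + 1)            # intercept + one weight per feature
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = yi - p                    # gradient of the log-likelihood
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

rng = random.Random(1)
# One feature ("edge density", standardized); in this synthetic data set,
# higher values make a patch more likely to be fixated (true slope = 3).
X = [[rng.uniform(-1.0, 1.0)] for _ in range(300)]
y = [1 if rng.random() < 1.0 / (1.0 + math.exp(-3.0 * x[0])) else 0 for x in X]
w = fit_logistic(X, y)
```

The fitted weight `w[1]` recovers a positive edge-density effect, the same quantity a GLMM would report as a fixed-effect slope.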
Forensic botany as a useful tool in the crime scene: Report of a case.
Margiotta, Gabriele; Bacaro, Giovanni; Carnevali, Eugenia; Severini, Simona; Bacci, Mauro; Gabbrielli, Mario
2015-08-01
The ubiquitous presence of plant species makes forensic botany useful in many criminal cases. Bryophytes in particular are useful for forensic investigations because many of them are clonal and widely distributed. Bryophyte shoots easily become attached to shoes and clothing and can be found on footwear, providing links between a crime scene and individuals. We report a case of suicide of a young girl that occurred in Siena, Tuscany, Italy. The traumatic injuries could have been ascribed to suicide, homicide, or accident. In the absence of eyewitnesses who could testify to the dynamics of the event, the crime scene investigation was fundamental to clarifying what happened. During the scene analysis, some fragments of Tortula muralis Hedw. and Bryum capillare Hedw. were found. The fragments were analyzed by a bryologist in order to compare them with the moss present on the stairs that the victim used immediately before her death. The analysis of these bryophytes found at the crime scene made it possible to reconstruct the accident. Although this evidence is, of course, circumstantial, it can be useful in forensic cases, together with other evidence, to reconstruct the dynamics of events. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Analysis of Vietnamization: Summary and Evaluation
1973-11-01
Ellsberg, Daniel. Some Lessons from Failure in Vietnam, P-4036. Santa Monica: The RAND Corp., July 1969. Fulbright, J. William (ed.). The Vietnam... "China and North Vietnam: Two Revolutionary Paths," Part I, Current Scene, Vol. IX, No. 11 (Nov 7, 1971); Part II, Current Scene, Vol. IX, No. 12 (Dec 7
NASA Astrophysics Data System (ADS)
Olson, Richard F.
2013-05-01
Rendering of point scatterer based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real-world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g., OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.
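The rendering kernel being vectorized is, at heart, a coherent sum of complex point-scatterer returns. A scalar Python reference of that inner loop follows; the paper maps this kind of loop onto 256-bit SIMD lanes in compiled code, and the wavelength and geometry here are illustrative only.

```python
import cmath, math

def scene_return(amplitudes, ranges, wavelength):
    """Coherent sum of point-scatterer echoes for one receiver sample.
    Each scatterer contributes amplitude * exp(-j * 2k * range),
    where k = 2*pi/wavelength and the factor 2 is the two-way path."""
    k = 2.0 * math.pi / wavelength
    return sum(a * cmath.exp(-1j * 2.0 * k * r)
               for a, r in zip(amplitudes, ranges))

lam = 0.0035   # an illustrative millimeter-wave wavelength, in meters
# Two equal scatterers offset by a quarter wavelength in range (half a
# wavelength of two-way path) interfere destructively:
s = scene_return([1.0, 1.0], [10.0, 10.0 + lam / 4.0], lam)
```

Because every scatterer's contribution is independent, the sum is exactly the kind of register-to-register arithmetic that SIMD lanes and OpenMP threads parallelize well.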
Kotabe, Hiroki P; Kardan, Omid; Berman, Marc G
2017-08-01
Natural environments have powerful aesthetic appeal linked to their capacity for psychological restoration. In contrast, disorderly environments are aesthetically aversive, and have various detrimental psychological effects. But in our research, we have repeatedly found that natural environments are perceptually disorderly. What could explain this paradox? We present 3 competing hypotheses: the aesthetic preference for naturalness is more powerful than the aesthetic aversion to disorder (the nature-trumps-disorder hypothesis); disorder is trivial to aesthetic preference in natural contexts (the harmless-disorder hypothesis); and disorder is aesthetically preferred in natural contexts (the beneficial-disorder hypothesis). Utilizing novel methods of perceptual study and diverse stimuli, we rule in the nature-trumps-disorder hypothesis and rule out the harmless-disorder and beneficial-disorder hypotheses. In examining perceptual mechanisms, we find evidence that high-level scene semantics are both necessary and sufficient for the nature-trumps-disorder effect. Necessity is evidenced by the effect disappearing in experiments utilizing only low-level visual stimuli (i.e., where scene semantics have been removed) and experiments utilizing a rapid-scene-presentation procedure that obscures scene semantics. Sufficiency is evidenced by the effect reappearing in experiments utilizing noun stimuli which remove low-level visual features. Furthermore, we present evidence that the interaction of scene semantics with low-level visual features amplifies the nature-trumps-disorder effect: the effect is weaker both when statistically adjusting for quantified low-level visual features and when using noun stimuli which remove low-level visual features. These results have implications for psychological theories bearing on the joint influence of low- and high-level perceptual inputs on affect and cognition, as well as for aesthetic design.
Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu
2017-01-01
Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important and common in many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges and the pixel-wise ground truth of foreground (FG) for each frame is also provided. A series of experiments were conducted to evaluate BS algorithms on this proposed dataset. The overall performance of BS algorithms and the processor/memory requirements were compared. Proper evaluation metrics or criteria were employed to evaluate the capability of each BS algorithm to handle different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide valid references for developing new BS algorithms for remote scene IR video sequences, and some of them are not limited to remote scene IR video but are generic to background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112
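A minimal example of the evaluation setup: a naive frame-differencing subtractor scored pixel-wise against a ground-truth foreground mask. The "frames" are toy 1-D intensity rows for brevity; real BS algorithms and metrics are more elaborate.

```python
def subtract(frame, background, thresh):
    """Mark a pixel foreground (1) if it differs from the background model
    by more than thresh."""
    return [int(abs(f - b) > thresh) for f, b in zip(frame, background)]

def precision_recall(pred, truth):
    """Pixel-wise precision/recall of a predicted foreground mask."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(not p and t for p, t in zip(pred, truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

background = [10, 10, 10, 12, 11, 10]
frame      = [10, 60, 58, 12, 11, 55]   # three "hot" moving-object pixels
truth      = [0, 1, 1, 0, 0, 1]         # ground-truth foreground mask
mask = subtract(frame, background, thresh=20)
prec, rec = precision_recall(mask, truth)
```

Dataset-style evaluation aggregates exactly these pixel-wise counts over all frames and challenge categories.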
The Southampton-York Natural Scenes (SYNS) dataset: Statistics of surface attitude
Adams, Wendy J.; Elder, James H.; Graf, Erich W.; Leyland, Julian; Lugtigheid, Arthur J.; Muryy, Alexander
2016-01-01
Recovering 3D scenes from 2D images is an under-constrained task; optimal estimation depends upon knowledge of the underlying scene statistics. Here we introduce the Southampton-York Natural Scenes dataset (SYNS: https://syns.soton.ac.uk), which provides comprehensive scene statistics useful for understanding biological vision and for improving machine vision systems. In order to capture the diversity of environments that humans encounter, scenes were surveyed at random locations within 25 indoor and outdoor categories. Each survey includes (i) spherical LiDAR range data, (ii) high-dynamic-range spherical imagery, and (iii) a panorama of stereo image pairs. We envisage many uses for the dataset and present one example: an analysis of surface attitude statistics, conditioned on scene category and viewing elevation. Surface normals were estimated using a novel adaptive scale selection algorithm. Across categories, surface attitude below the horizon is dominated by the ground plane (0° tilt). Near the horizon, probability density is elevated at 90°/270° tilt due to vertical surfaces (trees, walls). Above the horizon, probability density is elevated near 0° slant due to overhead structure such as ceilings and leaf canopies. These structural regularities represent potentially useful prior assumptions for human and machine observers, and may predict human biases in perceived surface attitude. PMID:27782103
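Surface attitude per patch comes down to fitting a local plane and reading off slant and tilt from its normal. A sketch using exactly planar toy patches; the paper's adaptive scale selection and real LiDAR handling are omitted.

```python
import math

def plane_normal(p0, p1, p2):
    """Unit normal of the plane through three 3-D points, oriented upward."""
    u = [b - a for a, b in zip(p0, p1)]
    v = [b - a for a, b in zip(p0, p2)]
    n = [u[1] * v[2] - u[2] * v[1],           # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    mag = math.sqrt(sum(c * c for c in n))
    n = [c / mag for c in n]
    return n if n[2] >= 0 else [-c for c in n]

def slant_tilt(n):
    """Slant: angle of the normal from vertical. Tilt: its azimuth."""
    slant = math.degrees(math.acos(max(-1.0, min(1.0, n[2]))))
    tilt = math.degrees(math.atan2(n[1], n[0])) % 360.0
    return slant, tilt

ground = plane_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))   # horizontal patch
wall = plane_normal((0, 0, 0), (0, 1, 0), (0, 0, 1))     # vertical patch
```

Histogramming slant/tilt over many patches yields exactly the attitude statistics the abstract describes (ground plane below the horizon, vertical surfaces near it).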
Use of the TM tasseled cap transform for interpretation of spectral contrasts in an urban scene
NASA Technical Reports Server (NTRS)
Goward, S. N.; Wharton, S. W.
1984-01-01
Investigations are being conducted with the objective of developing automated numerical image analysis procedures. In this context, physically-based multispectral data transforms are examined as a means to incorporate a priori knowledge of land radiance properties in the analysis process. A physically-based transform of TM observations was developed. This transform extends the Landsat MSS Tasseled Cap transform reported by Kauth and Thomas (1976) to TM data observations. The present study aims to examine the utility of the TM Tasseled Cap transform as applied to TM data from an urban landscape. The analysis conducted is based on a 512 x 512 subset of the Washington, DC November 2, 1982 TM scene, centered on Springfield, VA. It appears that the TM tasseled cap transformation provides a good means to explain the physical land attributes of the Washington scene. This result suggests a direction by which a priori knowledge of landscape spectral patterns may be incorporated into numerical image analysis.
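The Tasseled Cap transform itself is just a fixed linear map from the six reflective TM bands to interpretable axes (brightness, greenness, wetness). The sketch below uses the widely cited TM coefficients, quoted here from memory as illustrative values; verify them against a primary source before any real use.

```python
# Tasseled Cap as a per-pixel dot product. Coefficients are the commonly
# cited TM values (treat as illustrative; check a primary source).
TC = {
    "brightness": [0.3037, 0.2793, 0.4743, 0.5585, 0.5082, 0.1863],
    "greenness": [-0.2848, -0.2435, -0.5436, 0.7243, 0.0840, -0.1800],
    "wetness":   [0.1509, 0.1973, 0.3279, 0.3406, -0.7112, -0.4572],
}

def tasseled_cap(pixel):
    """pixel: reflectance in the six reflective TM bands (1-5 and 7)."""
    return {axis: sum(c * b for c, b in zip(coefs, pixel))
            for axis, coefs in TC.items()}

# A hypothetical vegetated pixel: low red (band 3), high near-IR (band 4).
veg = tasseled_cap([0.04, 0.06, 0.05, 0.45, 0.20, 0.10])
```

The red/near-infrared contrast drives the greenness axis, which is why the transform separates vegetated from built-up surfaces in an urban scene.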
ERIC Educational Resources Information Center
Lakes, Robert Maxwell
2016-01-01
Changes in climate and the corresponding environmental issues are major concerns facing the world today. Human consumption, which is leading the rapid depletion of the earth's finite resources and causing a dramatic loss of biodiversity, is largely to blame (Pearson, Lowry, Dorrian, & Litchfield, 2014). American zoos and aquariums are…
Vanmarcke, Steven; Wagemans, Johan
2015-01-01
In everyday life, we are generally able to dynamically understand and adapt to socially (ir)relevant encounters, and to make appropriate decisions about these. All of this requires an impressive ability to directly filter and obtain the most informative aspects of a complex visual scene. Such rapid gist perception can be assessed in multiple ways. In the ultrafast categorization paradigm developed by Simon Thorpe et al. (1996), participants get a clear categorization task in advance and succeed at detecting the target object of interest (animal) almost perfectly (even with 20 ms exposures). Since this pioneering work, follow-up studies consistently reported population-level reaction time differences on different categorization tasks, indicating a superordinate advantage (animal versus dog) and effects of perceptual similarity (animals versus vehicles) and object category size (natural versus animal versus dog). In this study, we replicated and extended these separate findings by using a systematic collection of different categorization tasks (varying in presentation time, task demands, and stimuli) and focusing on individual differences in terms of, e.g., gender and intelligence. In addition to replicating the main findings from the literature, we find subtle, yet consistent gender differences (women faster than men). PMID:26034569
A Web-Based Search Service to Support Imaging Spectrometer Instrument Operations
NASA Technical Reports Server (NTRS)
Smith, Alexander; Thompson, David R.; Sayfi, Elias; Xing, Zhangfan; Castano, Rebecca
2013-01-01
Imaging spectrometers yield rich and informative data products, but interpreting them demands time and expertise. There is a continual need for new algorithms and methods for rapid first-draft analyses to assist analysts during instrument operations. Intelligent data analyses can summarize scenes to draft geologic maps, searching images to direct operator attention to key features. This validates data quality while facilitating rapid tactical decision making to select follow-up targets. Ideally these algorithms would operate in seconds, never grow bored, and be free from observation bias about the kinds of mineralogy that will be found.
Passive IFF: Autonomous Nonintrusive Rapid Identification of Friendly Assets
NASA Technical Reports Server (NTRS)
Moynihan, Philip; Steenburg, Robert Van; Chao, Tien-Hsin
2004-01-01
A proposed optoelectronic instrument would identify targets rapidly, without need to radiate an interrogating signal, apply identifying marks to the targets, or equip the targets with transponders. The instrument was conceived as an identification, friend or foe (IFF) system in a battlefield setting, where it would be part of a targeting system for weapons, by providing rapid identification for aimed weapons to help in deciding whether and when to trigger them. The instrument could also be adapted to law-enforcement and industrial applications in which it is necessary to rapidly identify objects in view. The instrument would comprise mainly an optical correlator and a neural processor (see figure). The inherent parallel-processing speed and capability of the optical correlator would be exploited to obtain rapid identification of a set of probable targets within a scene of interest and to define regions within the scene for the neural processor to analyze. The neural processor would then concentrate on each region selected by the optical correlator in an effort to identify the target. Depending on whether or not a target was recognized by comparison of its image data with data in an internal database on which the neural processor was trained, the processor would generate an identifying signal (typically, "friend" or "foe"). The time taken for this identification process would be less than the time needed by a human or robotic gunner to acquire a view of, and aim at, a target. An optical correlator that has been under development for several years and that has been demonstrated to be capable of tracking a cruise missile might be considered a prototype of the optical correlator in the proposed IFF instrument. This optical correlator features a 512-by-512-pixel input image frame and operates at an input frame rate of 60 Hz.
It includes a spatial light modulator (SLM) for video-to-optical image conversion, a pair of precise lenses to effect Fourier transforms, a filter SLM for digital-to-optical correlation-filter data conversion, and a charge-coupled device (CCD) for detection of correlation peaks. In operation, the input scene grabbed by a video sensor is streamed into the input SLM. Precomputed correlation-filter data files representative of known targets are then downloaded and sequenced into the filter SLM at a rate of 1,000 Hz. When there occurs a match between the input target data and one of the known-target data files, the CCD detects a correlation peak at the location of the target. Distortion-invariant correlation filters from a bank of such filters are then sequenced through the optical correlator for each input frame. The net result is the rapid preliminary recognition of one or a few targets.
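Digitally, the correlator's job reduces to cross-correlating the input scene with a target template and locating the sharp peak at the target's position. A toy direct-correlation version is sketched below; the instrument performs the same operation optically, with Fourier-transform lenses, at video rates.

```python
def correlate_peak(scene, template):
    """Slide the template over the scene and return the (row, col) offset
    with the highest correlation score."""
    th, tw = len(template), len(template[0])
    best, best_pos = float("-inf"), None
    for r in range(len(scene) - th + 1):
        for c in range(len(scene[0]) - tw + 1):
            score = sum(scene[r + i][c + j] * template[i][j]
                        for i in range(th) for j in range(tw))
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

template = [[1, 1],
            [1, 1]]
scene = [[0, 0, 0, 0],
         [0, 0, 1, 1],     # a 2x2 "target" at row 1, column 2
         [0, 0, 1, 1],
         [0, 0, 0, 0]]
pos = correlate_peak(scene, template)
```

The correlation peak marks the target location, which is then handed to the (here omitted) neural stage for identification.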
Tobacco imagery on New Zealand television 2002-2004.
McGee, Rob; Ketchel, Juanita
2006-10-01
Considerable emphasis has been placed on the importance of tobacco imagery in the movies as one of the "drivers" of smoking among young people. Findings are presented from a content analysis of 98 hours of prime-time programming on New Zealand television in 2004, identifying 152 scenes with tobacco imagery, and selected characteristics of those scenes. About one in four programmes contained tobacco imagery, most of which might be regarded as "neutral or positive". This amounted to about two scenes containing such imagery for every hour of programming. A comparison with our earlier content analysis of programming in 2002 indicated little change in the level of tobacco imagery. The effect of this imagery in contributing to young viewers taking up smoking, and sustaining the addiction among those already smoking, deserves more research attention.
Space Radar Image of West Texas - SAR Scan
1999-04-15
This radar image of the Midland/Odessa region of West Texas, demonstrates an experimental technique, called ScanSAR, that allows scientists to rapidly image large areas of the Earth's surface. The large image covers an area 245 kilometers by 225 kilometers (152 miles by 139 miles). It was obtained by the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying aboard the space shuttle Endeavour on October 5, 1994. The smaller inset image is a standard SIR-C image showing a portion of the same area, 100 kilometers by 57 kilometers (62 miles by 35 miles) and was taken during the first flight of SIR-C on April 14, 1994. The bright spots on the right side of the image are the cities of Odessa (left) and Midland (right), Texas. The Pecos River runs from the top center to the bottom center of the image. Along the left side of the image are, from top to bottom, parts of the Guadalupe, Davis and Santiago Mountains. North is toward the upper right. Unlike conventional radar imaging, in which a radar continuously illuminates a single ground swath as the space shuttle passes over the terrain, a ScanSAR radar illuminates several adjacent ground swaths almost simultaneously, by "scanning" the radar beam across a large area in a rapid sequence. The adjacent swaths, typically about 50 km (31 miles) wide, are then merged during ground processing to produce a single large scene. Illumination for this L-band scene is from the top of the image. The beams were scanned from the top of the scene to the bottom, as the shuttle flew from left to right. This scene was acquired in about 30 seconds. A normal SIR-C image is acquired in about 13 seconds. The ScanSAR mode will likely be used on future radar sensors to construct regional and possibly global radar images and topographic maps.
The ScanSAR processor is being designed for 1996 implementation at NASA's Alaska SAR Facility, located at the University of Alaska Fairbanks, and will produce digital images from the forthcoming Canadian RADARSAT satellite. http://photojournal.jpl.nasa.gov/catalog/PIA01787
Sanford, Michelle R
2017-01-01
The application of insect and arthropod information to medicolegal death investigations is one of the more exacting applications of entomology. Historically limited to homicide investigations, the integration of full-time forensic entomology services to the medical examiner’s office in Harris County has opened up the opportunity to apply entomology to a wide variety of manner of death classifications and types of scenes to make observations on a number of different geographical and species-level trends in Harris County, Texas, USA. In this study, a retrospective analysis was made of 203 forensic entomology cases analyzed during the course of medicolegal death investigations performed by the Harris County Institute of Forensic Sciences in Houston, TX, USA from January 2013 through April 2016. These cases included all manner of death classifications, stages of decomposition and a variety of different scene types that were classified into decedents transported from the hospital (typically associated with myiasis or sting allergy; 3.0%), outdoor scenes (32.0%) or indoor scenes (65.0%). Ambient scene air temperature at the time of scene investigation was the only significantly different factor observed between indoor and outdoor scenes, with average indoor scene temperature being slightly cooler (25.2°C) than that observed outdoors (28.0°C). Relative humidity was not found to be significantly different between scene types. Most of the indoor scenes were classified as natural (43.3%) whereas most of the outdoor scenes were classified as homicides (12.3%). All other manner of death classifications came from both indoor and outdoor scenes. Several species were found to be significantly associated with indoor scenes as indicated by a binomial test, including Blaesoxipha plinthopyga (Wiedemann) (Diptera: Sarcophagidae), all Sarcophagidae (including B. 
plinthopyga), Megaselia scalaris Loew (Diptera: Phoridae), Synthesiomyia nudiseta Wulp (Diptera: Muscidae) and Lucilia cuprina (Wiedemann) (Diptera: Calliphoridae). The only species that was a significant indicator of an outdoor scene was Lucilia eximia (Wiedemann) (Diptera: Calliphoridae). All other insect species that were collected in five or more cases were collected from both indoor and outdoor scenes. A species list with month of collection and basic scene characteristics with the length of the estimated time of colonization is also presented. The data presented here provide valuable casework-related species data for Harris County, TX and nearby areas on the Gulf Coast that can be used to compare to other climate regions with other species assemblages and to assist in identifying new species introductions to the area. This study also highlights the importance of potential sources of uncertainty in preparation and interpretation of forensic entomology reports from different scene types. PMID:28604832
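The indoor/outdoor association test mentioned above can be sketched as an exact binomial tail probability: given n cases of a species, k of them from indoor scenes, and an overall indoor rate of about 65%, how surprising is k? The species counts below are hypothetical, not the study's data.

```python
from math import comb

def binom_sf(k, n, p):
    """Exact upper-tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical species collected in 12 cases, all 12 from indoor scenes,
# tested against the overall indoor-scene rate of 0.65:
p_value = binom_sf(12, 12, 0.65)
```

A small p-value flags the species as appearing indoors more often than the base rate alone would predict, which is the logic behind calling a species an indoor indicator.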
Adaptation of facial synthesis to parameter analysis in MPEG-4 visual communication
NASA Astrophysics Data System (ADS)
Yu, Lu; Zhang, Jingyu; Liu, Yunhai
2000-12-01
In MPEG-4, Facial Definition Parameters (FDPs) and Facial Animation Parameters (FAPs) are defined to animate a facial object. Most previous facial animation reconstruction systems focused on synthesizing animation from manually or automatically generated FAPs, not from FAPs extracted from natural video scenes. In this paper, an analysis-synthesis MPEG-4 visual communication system is established, in which facial animation is reconstructed from FAPs extracted from natural video scenes.
NASA Astrophysics Data System (ADS)
Madden, Christopher S.; Richards, Noel J.; Culpepper, Joanne B.
2016-10-01
This paper investigates the ability to develop synthetic scenes in an image generation tool, E-on Vue, and a gaming engine, Unity 3D, which can be used to generate synthetic imagery of target objects across a variety of conditions in land environments. Developments within these tools and gaming engines have allowed the computer gaming industry to dramatically enhance the realism of the games they develop; however, they utilise shortcuts to ensure that the games run smoothly in real-time to create an immersive effect. Whilst these shortcuts may have an impact upon the realism of the synthetic imagery, they do promise a much more time-efficient method of developing imagery of different environmental conditions and of investigating the dynamic aspect of military operations that is currently not evaluated in signature analysis. The results presented investigate how some of the common image metrics used in target acquisition modelling, namely the Δμ1, Δμ2, Δμ3, RSS, and Doyle metrics, perform on the synthetic scenes generated by E-on Vue and Unity 3D compared to real imagery of similar scenes. An exploration of the time required to develop the various aspects of the scene to enhance its realism is included, along with an overview of the difficulties associated with trying to recreate specific locations as a virtual scene. This work is an important start towards utilising virtual worlds for visible signature evaluation, and evaluating how equivalent synthetic imagery is to real photographs.
An optical systems analysis approach to image resampling
NASA Technical Reports Server (NTRS)
Lyon, Richard G.
1997-01-01
All types of image registration require some type of resampling, either during registration or as a final step in the registration process. Thus the image(s) must be regridded into a spatially uniform, or angularly uniform, coordinate system with some pre-defined resolution. Frequently the final resolution is not the resolution at which the data were observed. The registration algorithm designer and end product user are presented with a multitude of possible resampling methods, each of which modifies the spatial frequency content of the data in some way. The purpose of this paper is threefold: (1) to show how an imaging system modifies the scene, from an end-to-end optical systems analysis approach, (2) to develop a generalized resampling model, and (3) to empirically apply the model to simulated radiometric scene data and tabulate the results. A Hanning-windowed sinc interpolator method will be developed based upon the optical characterization of the system. It will be discussed in terms of the effects and limitations of sampling, aliasing, spectral leakage, and computational complexity. Simulated radiometric scene data will be used to demonstrate each of the algorithms. A high-resolution scene will be "grown" using a fractal growth algorithm based on mid-point recursion techniques. The resulting scene data will be convolved with a point spread function representing the optical response, then convolved with the detection system's response and subsampled to the desired resolution. The resulting data product will subsequently be resampled to the correct grid using the Hanning-windowed sinc interpolator, and the results and errors tabulated and discussed.
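The windowed-sinc resampler described in this abstract can be sketched in a few lines. The kernel half-width, Hann taper form, and test signal below are illustrative assumptions for the sketch, not the paper's exact parameters.

```python
import numpy as np

def hann_sinc_kernel(t, half_width=4):
    """Sinc interpolation kernel tapered by a Hann window.

    t: offset (in samples) from the kernel centre. The taper truncates
    the kernel at +/- half_width, suppressing the spectral leakage a
    rectangular truncation would cause.
    """
    t = np.asarray(t, dtype=float)
    window = np.where(np.abs(t) <= half_width,
                      0.5 * (1.0 + np.cos(np.pi * t / half_width)),
                      0.0)
    return np.sinc(t) * window

def resample(samples, new_positions, half_width=4):
    """Resample a uniformly sampled 1-D signal at arbitrary positions."""
    n = np.arange(len(samples))
    out = np.empty(len(new_positions))
    for i, x in enumerate(new_positions):
        weights = hann_sinc_kernel(x - n, half_width)
        out[i] = np.dot(weights, samples)
    return out

# Band-limited test signal sampled on an integer grid
x = np.arange(64, dtype=float)
signal = np.sin(2 * np.pi * 0.05 * x)

# Regrid to a half-sample-shifted grid (interior points only,
# to stay clear of edge truncation effects)
shifted = resample(signal, x[8:-8] + 0.5)
truth = np.sin(2 * np.pi * 0.05 * (x[8:-8] + 0.5))
err = float(np.max(np.abs(shifted - truth)))
```

The residual `err` measures how the kernel's non-ideal frequency response distorts a low-frequency signal; in a full 2-D image pipeline the same kernel would be applied separably along each axis.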
Brain mechanisms underlying cue-based memorizing during free viewing of movie Memento.
Kauttonen, Janne; Hlushchuk, Yevhen; Jääskeläinen, Iiro P; Tikka, Pia
2018-05-15
How does the human brain recall and connect relevant memories with unfolding events? To study this, we presented 25 healthy subjects, during functional magnetic resonance imaging, the movie 'Memento' (director C. Nolan). In this movie, scenes are presented in chronologically reverse order with certain scenes briefly overlapping previously presented scenes. Such overlapping "key-frames" serve as effective memory cues for the viewers, prompting recall of relevant memories of the previously seen scene and connecting them with the concurrent scene. We hypothesized that these repeating key-frames serve as immediate recall cues and would facilitate reconstruction of the story piece-by-piece. The chronological version of Memento, shown in a separate experiment for another group of subjects, served as a control condition. Using multivariate event-related pattern analysis method and representational similarity analysis, focal fingerprint patterns of hemodynamic activity were found to emerge during presentation of key-frame scenes. This effect was present in higher-order cortical network with regions including precuneus, angular gyrus, cingulate gyrus, as well as lateral, superior, and middle frontal gyri within frontal poles. This network was right hemispheric dominant. These distributed patterns of brain activity appear to underlie ability to recall relevant memories and connect them with ongoing events, i.e., "what goes with what" in a complex story. Given the real-life likeness of cinematic experience, these results provide new insight into how the human brain recalls, given proper cues, relevant memories to facilitate understanding and prediction of everyday life events. Copyright © 2018 Elsevier Inc. All rights reserved.
Using Science Fiction Movie Scenes to Support Critical Analysis of Science
ERIC Educational Resources Information Center
Barnett, Michael; Kafka, Alan
2007-01-01
This paper discusses pedagogical advantages and challenges of using science-fiction movies and television shows in an introductory science class for elementary teachers. The authors describe two instructional episodes in which scenes from the movies "Red Planet" and "The Core" were used to engage students in critiquing science as presented in…
An Analysis of Korean Homicide Crime-Scene Actions
ERIC Educational Resources Information Center
Salfati, C. Gabrielle; Park, Jisun
2007-01-01
Recent studies have focused on how different styles of homicides will be reflected in the different types of behaviors committed by offenders at a crime scene. It is suggested that these different types of behaviors best be understood using two frameworks, expressive/instrumental aggression and planned/unplanned violence, to analyze the way the…
Real-time detection of moving objects from moving vehicles using dense stereo and optical flow
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Matthies, Larry
2004-01-01
Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.
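The egomotion step the authors describe amounts to fitting a rigid motion to 3-D point correspondences obtained from dense stereo and flow. A minimal SVD-based (Kabsch) sketch of that core fit on synthetic data follows; in practice the fit would be wrapped in robust estimation (e.g. RANSAC) so that points on independently moving objects are rejected as outliers, which is not shown here.

```python
import numpy as np

def estimate_rigid_motion(P, Q):
    """Least-squares rigid motion (R, t) with Q ~= R @ P + t.

    P, Q: (N, 3) arrays of corresponding 3-D points, e.g. stereo points
    at frames k and k+1 matched by dense optical flow. Uses the
    SVD-based Kabsch/Procrustes solution.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: rotate a random cloud about z by 0.1 rad and translate
rng = np.random.default_rng(0)
P = rng.normal(size=(200, 3))
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true
R_est, t_est = estimate_rigid_motion(P, Q)
```

Once (R, t) is known, points whose observed motion disagrees with the predicted egomotion flow can be segmented as independently moving objects.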
World's First 24/7 Mobile Stroke Unit: Initial 6-Month Experience at Mercy Health in Toledo, Ohio.
Lin, Eugene; Calderon, Victoria; Goins-Whitmore, Julie; Bansal, Vibhav; Zaidat, Osama
2018-01-01
As the fourth mobile stroke unit (MSU) in the nation, and the first 24/7 unit worldwide, we review our initial experience with the Mercy Health MSU and institutional protocols implemented to facilitate rapid treatment of acute stroke patients and field triage for patients suffering other time-sensitive, acute neurologic emergencies in Lucas County, Ohio, and the greater Toledo metropolitan area. Data were prospectively collected for all patients transported and treated by the MSU during the first 6 months of service. Data were abstracted from documentation of on-scene emergency medical services (EMS) personnel, critical care nurses, and onboard physicians, who participated through telemedicine. The MSU was dispatched 248 times and transported 105 patients after on-scene examination with imaging. Intravenous (IV) tissue plasminogen activator (tPA) was administered to 10 patients; 8 patients underwent successful endovascular therapy, without post-treatment symptomatic hemorrhage, after a large vessel occlusion was identified using CT performed within the MSU. Moreover, 14 patients were treated with IV anti-epileptics for status epilepticus, and 19 patients received IV anti-hypertensive agents for malignant hypertension. MSU alarm to on-scene times and treatment times were 34.7 min (25-49) and 50.6 min (44.4-56.8), respectively. The world's first 24/7 MSU has been successfully implemented with IV-tPA administration rates and times comparable to those of other MSUs nationwide, while demonstrating rapid triage and treatment in the field for neurologic emergencies, including status epilepticus. With the rising number of MSUs worldwide, further data will drive standardized protocols that can be adopted nationwide by EMS.
Is moral beauty different from facial beauty? Evidence from an fMRI study
Wang, Tingting; Mo, Ce; Tan, Li Hai; Cant, Jonathan S.; Zhong, Luojin; Cupchik, Gerald
2015-01-01
Is moral beauty different from facial beauty? Two functional magnetic resonance imaging experiments were performed to answer this question. Experiment 1 investigated the network of moral aesthetic judgments and facial aesthetic judgments. Participants performed aesthetic judgments and gender judgments on both faces and scenes containing moral acts. The conjunction analysis of the contrasts ‘facial aesthetic judgment > facial gender judgment’ and ‘scene moral aesthetic judgment > scene gender judgment’ identified the common involvement of the orbitofrontal cortex (OFC), inferior temporal gyrus and medial superior frontal gyrus, suggesting that both types of aesthetic judgments are based on the orchestration of perceptual, emotional and cognitive components. Experiment 2 examined the network of facial beauty and moral beauty during implicit perception. Participants performed a non-aesthetic judgment task on both faces (beautiful vs common) and scenes (containing morally beautiful vs neutral information). We observed that facial beauty (beautiful faces > common faces) involved both the cortical reward region OFC and the subcortical reward region putamen, whereas moral beauty (moral beauty scenes > moral neutral scenes) only involved the OFC. Moreover, compared with facial beauty, moral beauty spanned a larger-scale cortical network, indicating more advanced and complex cerebral representations characterizing moral beauty. PMID:25298010
Common and Innovative Visuals: A sparsity modeling framework for video.
Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder
2014-05-02
Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depict the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework as CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.
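The paper estimates the common and innovative frames jointly via sparse recovery. As a loose stand-in for that joint estimation, a pixelwise median captures the same decomposition idea (static content in the common frame, dynamics in the innovations), and the innovation energy can flag scene-change boundaries:

```python
import numpy as np

def decompose_segment(frames):
    """Split a stack of frames into a common frame plus innovations.

    frames: (T, H, W) array from one scene/segment. The paper solves a
    joint sparse recovery problem; the pixelwise median used here is a
    crude stand-in that conveys the same idea: static content goes into
    the common frame, dynamics into the innovative residuals.
    """
    common = np.median(frames, axis=0)
    innovations = frames - common
    return common, innovations

def scene_change_scores(frames):
    """Per-frame innovation energy; a spike suggests a scene boundary."""
    common, innov = decompose_segment(frames)
    return np.sqrt((innov ** 2).mean(axis=(1, 2)))

# Toy video: 6 flat grey frames, then 2 frames from a different "scene"
video = np.concatenate([np.full((6, 8, 8), 0.2),
                        np.full((2, 8, 8), 0.9)])
scores = scene_change_scores(video)
```

Frames belonging to the segment that produced the common frame have near-zero innovation energy, while frames from a new scene spike, which is the cue for placing a scene-change boundary.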
Alcohol imagery on New Zealand television
McGee, Rob; Ketchel, Juanita; Reeder, Anthony I
2007-01-01
Background To examine the extent and nature of alcohol imagery on New Zealand (NZ) television, a content analysis of 98 hours of prime-time television programs and advertising was carried out over 7 consecutive days' viewing in June/July 2004. The main outcome measures were number of scenes in programs, trailers and advertisements depicting alcohol imagery; the extent of critical versus neutral and promotional imagery; and the mean number of scenes with alcohol per hour, and characteristics of scenes in which alcohol featured. Results There were 648 separate depictions of alcohol imagery across the week, with an average of one scene every nine minutes. Scenes depicting uncritical imagery outnumbered scenes showing possible adverse health consequences of drinking by 12 to 1. Conclusion The evidence points to a large amount of alcohol imagery incidental to storylines in programming on NZ television. Alcohol is also used in many advertisements to market non-alcohol goods and services. More attention needs to be paid to the extent of alcohol imagery on television from the industry, the government and public health practitioners. Health education with young people could raise critical awareness of the way alcohol imagery is presented on television. PMID:17270053
Carbon-monoxide poisoning in young drug addicts due to indoor use of a gasoline-powered generator.
Marc, B; Bouchez-Buvry, A; Wepierre, J L; Boniol, L; Vaquero, P; Garnier, M
2001-06-01
We report six fatal cases of unintentional carbon-monoxide poisoning which occurred in a house occupied by young people. The source of carbon monoxide was a gasoline-powered generator. For all victims, an external body examination was carried out and blood and urine samples collected. Blood carboxyhaemoglobin (COHb) analysis was performed using an automated visible spectrophotometric method. Blood-alcohol level quantification was performed using gas chromatography, and drug screening in urine was performed by a one-step manual qualitative immunochromatography (Syva Rapid test, Behring Diagnostics Inc.) for benzoylecgonine (the main metabolite of cocaine in urine), morphine, 11-nor-Delta(9)-THC-9-COOH (cannabinoids) and d-methamphetamine. In all victims the COHb value was 65% or higher. No alcohol was found in blood samples, but urine samples were positive for methamphetamine, cocaine and cannabis in five cases and for opiates in one case. In four victims, the urine sample was positive for at least three drugs. The availability and accuracy of rapid toxicological screening make it an important tool for the medical examiner at the scene during a clinical forensic examination.
Statistics of high-level scene context.
Greene, Michelle R
2013-01-01
Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed "things" in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics rather than intuition.
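The bag-of-words level of description can be illustrated with a toy nearest-centroid classifier. The vocabulary, category names, and scene lists below are invented for illustration and are not drawn from the LabelMe annotations themselves.

```python
import numpy as np

# Toy object vocabulary and labelled "scenes" as object lists
VOCAB = ["sink", "stove", "fridge", "bed", "lamp", "pillow"]

def bag_of_words(objects):
    """Count-vector representation of a scene's object list."""
    v = np.zeros(len(VOCAB))
    for obj in objects:
        if obj in VOCAB:
            v[VOCAB.index(obj)] += 1
    return v

train = {
    "kitchen": [["sink", "stove"], ["stove", "fridge", "sink"]],
    "bedroom": [["bed", "lamp"], ["bed", "pillow", "lamp"]],
}
# One centroid per category in bag-of-words space
centroids = {cat: np.mean([bag_of_words(s) for s in scenes], axis=0)
             for cat, scenes in train.items()}

def classify(objects):
    """Assign a scene to the category with the nearest centroid."""
    v = bag_of_words(objects)
    return min(centroids, key=lambda c: np.linalg.norm(v - centroids[c]))

label = classify(["fridge", "sink"])
```

At scale, the same representation over thousands of labeled objects is what the linear classifiers in the study operate on; the "64 most informative objects" result corresponds to pruning this vocabulary.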
Mitigation of image artifacts in LWIR microgrid polarimeter images
NASA Astrophysics Data System (ADS)
Ratliff, Bradley M.; Tyo, J. Scott; Boger, James K.; Black, Wiley T.; Bowers, David M.; Kumar, Rakesh
2007-09-01
Microgrid polarimeters, also known as division of focal plane (DoFP) polarimeters, are composed of an integrated array of micropolarizing elements that immediately precedes the FPA. The result of the DoFP device is that neighboring pixels sense different polarization states. The measurements made at each pixel can be combined to estimate the Stokes vector at every reconstruction point in a scene. DoFP devices have the advantage that they are mechanically rugged and inherently optically aligned. However, they suffer from the severe disadvantage that the neighboring pixels that make up the Stokes vector estimates have different instantaneous fields of view (IFOV). This IFOV error leads to spatial differencing that causes false polarization signatures, especially in regions of the image where the scene changes rapidly in space. Furthermore, when the polarimeter is operating in the LWIR, the FPA has inherent response problems such as nonuniformity and dead pixels that make the false polarization problem that much worse. In this paper, we present methods that use spatial information from the scene to mitigate two of the biggest problems that confront DoFP devices. The first is a polarimetric dead pixel replacement (DPR) scheme, and the second is a reconstruction method that chooses the most appropriate polarimetric interpolation scheme for each particular pixel in the image based on the scene properties. We have found that these two methods can greatly improve both the visual appearance of polarization products as well as the accuracy of the polarization estimates, and can be implemented with minimal computational cost.
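Assuming the common 0°/45°/90°/135° micropolarizer superpixel layout (a standard arrangement, asserted here as an assumption rather than taken from the paper), the per-superpixel Stokes estimate that the IFOV errors corrupt can be sketched as:

```python
def stokes_from_superpixel(i0, i45, i90, i135):
    """Stokes parameters from one 2x2 microgrid superpixel.

    Assumes an ideal polarizer model
        I(theta) = 0.5 * (S0 + S1*cos(2*theta) + S2*sin(2*theta)).
    Because the four pixels have slightly different instantaneous
    fields of view, sharp scene gradients leak into s1/s2 as false
    polarization, which is the artifact the paper's interpolation
    schemes mitigate.
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # average of the two S0 estimates
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

# Unpolarized light: all four polarizer orientations see equal intensity,
# so the difference terms s1 and s2 should vanish
s0, s1, s2 = stokes_from_superpixel(1.0, 1.0, 1.0, 1.0)
```

Interpolating each polarizer channel to a common grid before forming the differences, rather than differencing raw neighboring pixels, is the essence of the reconstruction schemes the paper evaluates.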
3D visualization of numeric planetary data using JMARS
NASA Astrophysics Data System (ADS)
Dickenshied, S.; Christensen, P. R.; Anwar, S.; Carter, S.; Hagee, W.; Noss, D.
2013-12-01
JMARS (Java Mission-planning and Analysis for Remote Sensing) is a free geospatial application developed by the Mars Space Flight Facility at Arizona State University. Originally written as a mission planning tool for the THEMIS instrument on board the Mars Odyssey spacecraft, it was released as an analysis tool to the general public in 2003. Since then it has expanded to be used for mission planning and scientific data analysis by additional NASA missions to Mars, the Moon, and Vesta, and it has come to be used by scientists, researchers and students of all ages from more than 40 countries around the world. The public version of JMARS now also includes remote sensing data for Mercury, Venus, Earth, the Moon, Mars, and a number of the moons of Jupiter and Saturn. Additional datasets for asteroids and other smaller bodies are being added as they become available and time permits. In addition to visualizing multiple datasets in context with one another, significant effort has been put into on-the-fly projection of georegistered data over surface topography. This functionality allows a user to easily create and modify 3D visualizations of any regional scene where elevation data is available in JMARS. This can be accomplished through the use of global topographic maps or regional numeric data such as HiRISE or HRSC DTMs. Users can also upload their own regional or global topographic dataset and use it as an elevation source for 3D rendering of their scene. The 3D Layer in JMARS allows the user to exaggerate the z-scale of any elevation source to emphasize the vertical variance throughout a scene. In addition, the user can rotate, tilt, and zoom the scene to any desired angle and then illuminate it with an artificial light source. This scene can be easily overlain with additional JMARS datasets such as maps, images, shapefiles, contour lines, or scale bars, and the scene can be easily saved as a graphic image for use in presentations or publications.
Yi, Minhan; Chen, Feng; Luo, Majing; Cheng, Yibin; Zhao, Huabin; Cheng, Hanhua; Zhou, Rongjia
2014-01-01
The Piwi-interacting RNA (piRNA) pathway is responsible for germline specification, gametogenesis, transposon silencing, and genome integrity. Transposable elements can disrupt the genome and its functions. However, piRNA pathway evolution and its adaptation to transposon diversity in teleost fish remain unknown. This article unveils the evolutionary scene of the piRNA pathway and its association with diverse transposons through systematic comparative analysis of diverse teleost fish genomes. Selective pressure analysis on piRNA pathway and miRNA/siRNA (microRNA/small interfering RNA) pathway genes between teleosts and mammals showed an accelerated evolution of piRNA pathway genes in the teleost lineages, and positive selection on functional PAZ (Piwi/Ago/Zwille) and Tudor domains involved in the Piwi–piRNA/Tudor interaction, suggesting that the amino acid substitutions are adaptive to their functions in the piRNA pathway in the teleost fish species. Notably, five piRNA pathway genes evolved faster in the swamp eel, a kind of protogynous hermaphrodite fish, than in the other teleosts, indicating a differential evolution of the piRNA pathway between the swamp eel and other gonochoristic fishes. In addition, genome-wide analysis showed higher diversity of transposons in the teleost fish species compared with mammals. Our results suggest that the rapidly evolved piRNA pathway in the teleost fish is likely to be involved in the adaptation to transposon diversity. PMID:24846630
Autofocus algorithm for synthetic aperture radar imaging with large curvilinear apertures
NASA Astrophysics Data System (ADS)
Bleszynski, E.; Bleszynski, M.; Jaroszewicz, T.
2013-05-01
An approach to autofocusing for large curved synthetic aperture radar (SAR) apertures is presented. Its essential feature is that phase corrections are being extracted not directly from SAR images, but rather from reconstructed SAR phase-history data representing windowed patches of the scene, of sizes sufficiently small to allow the linearization of the forward- and back-projection formulae. The algorithm processes data associated with each patch independently and in two steps. The first step employs a phase-gradient-type method in which phase corrections compensating for (possibly rapid) trajectory perturbations are estimated from the reconstructed phase history of the dominant scattering point on the patch. The second step uses phase-gradient-corrected data and extracts the absolute phase value, removing in this way phase ambiguities and reducing possible imperfections of the first step, and providing the distances between the sensor and the scattering point with accuracy comparable to the wavelength. The features of the proposed autofocusing method are illustrated in its applications to intentionally corrupted small-scene 2006 Gotcha data. The examples include the extraction of absolute phases (ranges) for selected prominent point targets. They are then used to focus the scene and determine relative target-target distances.
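A highly simplified sketch of only the first, phase-gradient-like step: if the dominant scatterer's complex phase history can be isolated, conjugating its phase removes a phase error common to the whole patch. The quadratic error model below is an assumption made for the demonstration, not taken from the paper.

```python
import numpy as np

def phase_correction_from_dominant(scatterer_history):
    """Phase correction derived from a dominant scatterer's history.

    scatterer_history: complex samples of one strong point target
    across the aperture (pulses). Trajectory perturbations appear as a
    phase error common to every scatterer in the (small) patch, so
    conjugating the dominant scatterer's phase cancels that error.
    """
    return np.exp(-1j * np.angle(scatterer_history))

# Simulate a unit point target corrupted by a quadratic phase error
m = np.arange(128)
phase_error = 2 * np.pi * 1e-4 * (m - 64) ** 2
corrupted = np.exp(1j * phase_error)
corrected = corrupted * phase_correction_from_dominant(corrupted)
```

The second step in the paper, recovering the absolute phase to wavelength accuracy, resolves the 2π ambiguities that this per-pulse correction leaves behind and is not sketched here.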
Out-of-hospital and emergency department management of epidemic scombroid poisoning.
Eckstein, M; Serna, M; DelaCruz, P; Mallon, W K
1999-09-01
To report two epidemic outbreaks of scombroid food poisoning and their emergency medical services (EMS) response and emergency department (ED) treatment, analyzing the impact of early physician involvement and on-line medical control. Retrospective case series of two multiple-casualty incidents (MCIs) involving scombroid food poisoning. A total of 57 patients were treated from two separate incidents, with 30 patients transported to area hospitals. One patient required treatment with a cardiac medication in the field and another patient eventually required hospital admission. On-scene medical control (incident 1) and early identification of the index case (incident 2) were instrumental to out-of-hospital care interventions and conservation of resources. Patient triage, field treatment, and hospital transport were expedited, with some patients treated and released from the scene. Immediate diagnosis of a food-borne illness in the out-of-hospital setting allows rapid treatment at the scene and efficient transport of multiple patients to a single receiving facility. EMS medical directors should be able to immediately respond to such incidents to make presumptive diagnoses and accurately direct patient care. When this is not possible, early identification of the index case facilitates early diagnosis and treatment.
Vos, Leia; Whitman, Douglas
2014-01-01
A considerable literature suggests that the right hemisphere is dominant in vigilance for novel and survival-related stimuli, such as predators, across a wide range of species. In contrast to vigilance for change, change blindness is a failure to detect obvious changes in a visual scene when they are obscured by a disruption in scene presentation. We studied lateralised change detection using a series of scenes with salient changes in either the left or right visual fields. In Study 1 left visual field changes were detected more rapidly than right visual field changes, confirming a right hemisphere advantage for change detection. Increasing stimulus difficulty resulted in greater right visual field detections and left hemisphere detection was more likely when change occurred in the right visual field on a prior trial. In Study 2 an intervening distractor task disrupted the influence of prior trials. Again, faster detection speeds were observed for the left visual field changes with a shift to a right visual field advantage with increasing time-to-detection. This suggests that a right hemisphere role for vigilance, or catching attention, and a left hemisphere role for target evaluation, or maintaining attention, is present at the earliest stage of change detection.
Trained Eyes: Experience Promotes Adaptive Gaze Control in Dynamic and Uncertain Visual Environments
Taya, Shuichiro; Windridge, David; Osman, Magda
2013-01-01
Current eye-tracking research suggests that our eyes make anticipatory movements to a location that is relevant for a forthcoming task. Moreover, there is evidence to suggest that with more practice anticipatory gaze control can improve. However, these findings are largely limited to situations where participants are actively engaged in a task. We ask: does experience modulate anticipative gaze control while passively observing a visual scene? To tackle this we tested people with varying degrees of experience of tennis, in order to uncover potential associations between experience and eye movement behaviour while they watched tennis videos. The number, size, and accuracy of saccades (rapid eye movements) made around 'events' critical to the scene context (i.e. hits and bounces) were analysed. Overall, we found that experience improved anticipatory eye-movements while watching tennis clips. In general, those with extensive experience showed greater accuracy of saccades to upcoming event locations; this was particularly prevalent for events in the scene that carried high uncertainty (i.e. ball bounces). The results indicate that, even when passively observing, our gaze control system utilizes prior relevant knowledge in order to anticipate upcoming uncertain event locations. PMID:23951147
A fuzzy measure approach to motion frame analysis for scene detection. M.S. Thesis - Houston Univ.
NASA Technical Reports Server (NTRS)
Leigh, Albert B.; Pal, Sankar K.
1992-01-01
This paper addresses a solution to the problem of scene estimation of motion video data in the fuzzy set theoretic framework. Using fuzzy image feature extractors, a new algorithm is developed to compute the change of information in each of two successive frames to classify scenes. This classification process of raw input visual data can be used to establish structure for correlation. The algorithm attempts to fulfill the need for nonlinear, frame-accurate access to video data for applications such as video editing and visual document archival/retrieval systems in multimedia environments.
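A hedged sketch of the idea: a per-frame-pair change measure mapped through a fuzzy membership function. The histogram-difference measure and the membership knee points below are assumptions for illustration, not the paper's actual fuzzy feature extractors.

```python
import numpy as np

def fuzzy_change(frame_a, frame_b, bins=16):
    """Degree of change between two frames as a fuzzy membership in [0, 1].

    The raw measure here is a normalised grey-level histogram
    difference (0 = identical distributions, 1 = disjoint), mapped
    through a piecewise-linear membership function. The knee points
    (lo, hi) are assumed values, not the paper's parameters.
    """
    ha, _ = np.histogram(frame_a, bins=bins, range=(0.0, 1.0))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0.0, 1.0))
    n = frame_a.size
    d = 0.5 * np.abs(ha - hb).sum() / n
    lo, hi = 0.2, 0.6  # assumed membership knee points
    return float(np.clip((d - lo) / (hi - lo), 0.0, 1.0))

rng = np.random.default_rng(1)
# Two frames drawn from the same intensity distribution (same "scene")
same_scene = fuzzy_change(rng.uniform(0, 0.5, (32, 32)),
                          rng.uniform(0, 0.5, (32, 32)))
# Two frames from disjoint intensity ranges (a hard cut)
cut = fuzzy_change(rng.uniform(0, 0.5, (32, 32)),
                   rng.uniform(0.5, 1.0, (32, 32)))
```

Thresholding such a membership trace over successive frame pairs yields the frame-accurate scene boundaries needed for video editing and archival/retrieval indexing.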
Ultrafast scene detection and recognition with limited visual information
Hagmann, Carl Erick; Potter, Mary C.
2016-01-01
Humans can detect target color pictures of scenes depicting concepts like picnic or harbor in sequences of six or twelve pictures presented as briefly as 13 ms, even when the target is named after the sequence (Potter, Wyble, Hagmann, & McCourt, 2014). Such rapid detection suggests that feedforward processing alone enabled detection without recurrent cortical feedback. There is debate about whether coarse, global, low spatial frequencies (LSFs) provide predictive information to high cortical levels through the rapid magnocellular (M) projection of the visual path, enabling top-down prediction of possible object identities. To test the “Fast M” hypothesis, we compared detection of a named target across five stimulus conditions: unaltered color, blurred color, grayscale, thresholded monochrome, and LSF pictures. The pictures were presented for 13–80 ms in six-picture rapid serial visual presentation (RSVP) sequences. Blurred, monochrome, and LSF pictures were detected less accurately than normal color or grayscale pictures. When the target was named before the sequence, all picture types except LSF resulted in above-chance detection at all durations. Crucially, when the name was given only after the sequence, performance dropped and the monochrome and LSF pictures (but not the blurred pictures) were at or near chance. Thus, without advance information, monochrome and LSF pictures were rarely understood. The results offer only limited support for the Fast M hypothesis, suggesting instead that feedforward processing is able to activate conceptual representations without complementary reentrant processing. PMID:28255263
Infrared imaging of the crime scene: possibilities and pitfalls.
Edelman, Gerda J; Hoveling, Richelle J M; Roos, Martin; van Leeuwen, Ton G; Aalders, Maurice C G
2013-09-01
All objects radiate infrared energy invisible to the human eye, which can be imaged by infrared cameras, visualizing differences in temperature and/or emissivity of objects. Infrared imaging is an emerging technique for forensic investigators. The rapid, nondestructive, and noncontact features of infrared imaging indicate its suitability for many forensic applications, ranging from the estimation of time of death to the detection of blood stains on dark backgrounds. This paper provides an overview of the principles and instrumentation involved in infrared imaging. Difficulties concerning the image interpretation due to different radiation sources and different emissivity values within a scene are addressed. Finally, reported forensic applications are reviewed and supported by practical illustrations. When introduced in forensic casework, infrared imaging can help investigators to detect, to visualize, and to identify useful evidence nondestructively. © 2013 American Academy of Forensic Sciences.
3D reconstruction based on light field images
NASA Astrophysics Data System (ADS)
Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei
2018-04-01
This paper proposes a method for reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum. Sub-aperture images are first extracted from the light field images, and the scale-invariant feature transform (SIFT) is used for feature registration on selected sub-aperture images. A structure-from-motion (SFM) algorithm is then applied to the registered sub-aperture images to reconstruct the three-dimensional scene, yielding a sparse 3D point cloud. The method shows that 3D reconstruction can be achieved with only two light field captures, rather than the dozen or more captures required by traditional cameras. This avoids the time-consuming, laborious acquisition needed for 3D reconstruction with traditional digital cameras, achieving a more rapid, convenient and accurate reconstruction.
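The geometric core of the SFM step is triangulation: once features are matched across two sub-aperture views, each 3D point is recovered from its pair of projections. A minimal numpy sketch of linear (DLT) triangulation, with toy camera matrices standing in for the calibrated sub-aperture views:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2-D image points."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]            # null vector of A, the homogeneous 3-D point
    return X[:3] / X[3]   # dehomogenise

# Toy example: two axis-aligned cameras observing the point (1, 2, 10).
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # shifted 1 unit in x
X_true = np.array([1.0, 2.0, 10.0])
x1 = (P1 @ np.append(X_true, 1))[:2] / (P1 @ np.append(X_true, 1))[2]
x2 = (P2 @ np.append(X_true, 1))[:2] / (P2 @ np.append(X_true, 1))[2]
print(triangulate(P1, P2, x1, x2))  # ≈ [1. 2. 10.]
```

Repeating this for every matched SIFT feature is what produces the sparse point cloud described above.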
Natural scene logo recognition by joint boosting feature selection in salient regions
NASA Astrophysics Data System (ADS)
Fan, Wei; Sun, Jun; Naoi, Satoshi; Minagawa, Akihiro; Hotta, Yoshinobu
2011-01-01
Logos are considered valuable intellectual property and a key component of the goodwill of a business. In this paper, we propose a natural scene logo recognition method that is segmentation-free, processes images extremely rapidly and achieves high recognition rates. The classifiers for each logo are trained jointly, rather than independently, so that common features can be shared across multiple classes for better generalization. To deal with the large range of aspect ratios across different logos, a set of salient regions of interest (ROIs) is extracted to describe each class. We ensure that the selected ROIs are both individually informative and pairwise weakly dependent using a Class Conditional Entropy Maximization criterion. Experimental results on a large logo database demonstrate the effectiveness and efficiency of the proposed method.
Analysis of suspicious powders following the post 9/11 anthrax scare.
Wills, Brandon; Leikin, Jerrold; Rhee, James; Saeedi, Bijan
2008-06-01
Following the 9/11 terrorist attacks, SET Environmental, Inc., a Chicago-based environmental and hazardous materials management company, received a large number of suspicious powders for analysis. Samples of powders were submitted to SET for anthrax screening and/or unknown identification (UI). Anthrax screening was performed on-site using a ruggedized analytical pathogen identification device (R.A.P.I.D.) (Idaho Technologies, Salt Lake City, UT). UI was performed at SET headquarters (Wheeling, IL) using a combination of wet chemistry techniques, infrared spectroscopy, and gas chromatography/mass spectrometry. Turnaround time was approximately 2-3 hours for either anthrax screening or UI. Between October 10, 2001 and October 11, 2002, 161 samples were analyzed: 57 for anthrax screening only, 78 for anthrax and UI, and 26 for UI only. Sources of suspicious powders included industries (66%), the U.S. Postal Service (19%), law enforcement (9%), and municipalities (7%). None of the 135 anthrax screens was positive; SET recorded no positive anthrax screens in the Chicago area following the post-9/11 anthrax scare. The only potential biological or chemical warfare agent identified (cyanide) was provided by law enforcement. Rapid anthrax screening and identification of unknown substances at the scene are useful to prevent costly interruption of services and potential referral for medical evaluation.
1981 Image II Conference Proceedings.
1981-11-01
rapid motion of terrain detail across the display requires fast display processors. Other difficulties are perceptual: the visual displays must convey...has been a continuing effort by Vought in the last decade. Early systems were restricted by the unavailability of video bulk storage with fast random...each photograph. The calculations aided in the proper sequencing of the scanned scenes on the tape recorder and eventually facilitated fast random
ERIC Educational Resources Information Center
Winters, Marcus A.
2009-01-01
Charter schools have recently emerged as popular and effective alternatives to traditional public schools. Less than two decades since charter schools first came on the scene, the nation has 4,578 charter schools dispersed across forty-one states and the District of Columbia. These schools enroll 1.4 million students, and their rapid growth shows…
Language translation, domain specific languages and ANTLR
NASA Technical Reports Server (NTRS)
Craymer, Loring; Parr, Terence
2002-01-01
We will discuss the features of ANTLR that make it an attractive tool for rapid development of domain-specific language translators and present some practical examples of its use: extraction of information from the Cassini Command Language specification, the processing of structured binary data, and IVL--an English-like language for generating VRML scene graphs--which is used in configuring the jGuru.com server.
ERIC Educational Resources Information Center
Vanmarcke, Steven; Mullin, Caitlin; Van der Hallen, Ruth; Evers, Kris; Noens, Ilse; Steyaert, Jean; Wagemans, Johan
2016-01-01
Typically developing (TD) adults are able to extract global information from natural images and to categorize them within a single glance. This study aimed at extending these findings to individuals with autism spectrum disorder (ASD) using a free description open-encoding paradigm. Participants were asked to freely describe what they saw when…
Li, Ming; Zhang, Jingjing; Jiang, Jie; Zhang, Jing; Gao, Jing; Qiao, Xiaolin
2014-04-07
In this paper, a novel approach based on paper spray ionization coupled with ion mobility spectrometry (PSI-IMS) was developed for rapid, in situ detection of cocaine residues in liquid samples and on various surfaces (e.g. glass, marble, skin, wood, fingernails), without tedious sample pretreatment. The obvious advantages of PSI are its low cost, easy operation and simple configuration, with no need for nebulizing gas or discharge gas. Compared with mass spectrometry, ion mobility spectrometry (IMS) likewise offers low cost, easy operation and a simple configuration without requiring a vacuum system, making IMS a more suitable detection method for PSI for rapid, in situ analysis. For the analysis of cocaine residues in liquid samples, a dynamic response from 5 μg mL⁻¹ to 200 μg mL⁻¹ with a linear coefficient (R²) of 0.992 was obtained. In this case, the limit of detection (LOD) was calculated to be 2 μg mL⁻¹ at a signal-to-noise ratio (S/N) of 3, with a relative standard deviation (RSD) of 6.5% for 11 measurements (n = 11). Cocaine residues on various surfaces such as metal, glass, marble, wood, skin and fingernails were also directly analyzed by wiping the surfaces with a piece of paper. The LOD was calculated to be as low as 5 ng (S/N = 3, RSD = 6.3%, n = 11). This demonstrates the capability of the PSI-IMS method for direct detection of cocaine residues at scenes of cocaine administration. Our results show that PSI-IMS is a simple, sensitive, rapid and economical method for in situ detection of this illicit drug, which could help governments combat drug abuse.
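The reported figures (linearity over the dynamic range and an LOD at S/N = 3) follow from a standard calibration-curve computation. A sketch with hypothetical calibration data and an assumed blank-noise level, since the actual measurements are not given in the abstract:

```python
import numpy as np

# Hypothetical calibration data (concentration in ug/mL vs. detector signal);
# the real values are not reported in the abstract.
conc   = np.array([5, 10, 25, 50, 100, 200], dtype=float)
signal = np.array([52, 98, 251, 498, 1003, 1995], dtype=float)

slope, intercept = np.polyfit(conc, signal, 1)

# R^2 of the linear fit
pred = slope * conc + intercept
ss_res = np.sum((signal - pred) ** 2)
ss_tot = np.sum((signal - signal.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# LOD at S/N = 3: the concentration whose signal equals 3x the baseline noise
sigma_noise = 6.5   # assumed standard deviation of blank measurements
lod = 3 * sigma_noise / slope
print(round(r2, 4), round(lod, 1))
```

With these invented numbers the fit is strongly linear and the LOD lands near the 2 μg mL⁻¹ reported in the abstract.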
Radiometrically accurate scene-based nonuniformity correction for array sensors.
Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott
2003-10-01
A novel radiometrically accurate scene-based nonuniformity correction (NUC) algorithm is described. The technique combines absolute calibration with a recently reported algebraic scene-based NUC algorithm. The technique is based on the following principle: First, detectors that are along the perimeter of the focal-plane array are absolutely calibrated; then the calibration is transported to the remaining uncalibrated interior detectors through the application of the algebraic scene-based algorithm, which utilizes pairs of image frames exhibiting arbitrary global motion. The key advantage of this technique is that it can obtain radiometric accuracy during NUC without disrupting camera operation. Accurate estimates of the bias nonuniformity can be achieved with relatively few frames, which can be fewer than ten frame pairs. Advantages of this technique are discussed, and a thorough performance analysis is presented with use of simulated and real infrared imagery.
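The calibration-transport idea can be illustrated in one dimension: when two frames are related by a known global shift, pairs of detectors that observed the same scene sample reveal their bias difference, so one absolutely calibrated "perimeter" detector anchors a chain of bias estimates. A simplified numpy sketch on synthetic data; the published algorithm handles arbitrary 2-D motion and noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
bias = rng.normal(0, 5, n)          # unknown per-detector offsets
scene = rng.uniform(20, 80, n + 1)  # true irradiance along a 1-D strip

# Two frames of the same scene with a global one-pixel shift:
y1 = scene[0:n] + bias              # frame 1: detector i sees scene[i]
y2 = scene[1:n + 1] + bias          # frame 2: detector i sees scene[i+1]

# Detector i (frame 2) and detector i+1 (frame 1) observed the same scene
# sample, so their bias difference is directly observable:
#   y2[i] - y1[i+1] = bias[i] - bias[i+1]
est = np.empty(n)
est[0] = bias[0]                    # "perimeter" detector, absolutely calibrated
for i in range(n - 1):
    est[i + 1] = est[i] - (y2[i] - y1[i + 1])

print(np.allclose(est, bias))       # True: calibration transported inward
```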
Using 3D range cameras for crime scene documentation and legal medicine
NASA Astrophysics Data System (ADS)
Cavagnini, Gianluca; Sansoni, Giovanna; Trebeschi, Marco
2009-01-01
Crime scene documentation and legal medicine analysis are part of a very complex process aimed at identifying the offender, starting from the collection of the evidence at the scene. This part of the investigation is critical, since the crime scene is extremely volatile and, once it is removed, it cannot be precisely recreated. For this reason, the documentation process should be as complete as possible, with minimum invasiveness. The use of optical 3D imaging sensors has been considered as a possible aid in the documentation step, since (i) the measurement is contactless and (ii) the process required to edit and model the 3D data is quite similar to the reverse engineering procedures originally developed for manufacturing. In this paper we show the most important results obtained in our experiments.
Forensic Comparison of Soil Samples Using Nondestructive Elemental Analysis.
Uitdehaag, Stefan; Wiarda, Wim; Donders, Timme; Kuiper, Irene
2017-07-01
Soil can play an important role in forensic cases in linking suspects or objects to a crime scene by comparing samples from the crime scene with samples derived from items. This study uses an adapted ED-XRF analysis (sieving instead of grinding to prevent destruction of microfossils) to produce elemental composition data of 20 elements. Different data processing techniques and statistical distances were evaluated using data from 50 samples and the log-LR cost (Cllr). The best performing combination, Canberra distance, relative data, and square root values, is used to construct a discriminative model. Examples of the spatial resolution of the method in crime scenes are shown for three locations, and sampling strategy is discussed. Twelve test cases were analyzed, and results showed that the method is applicable. The study shows how the combination of an analysis technique, a database, and a discriminative model can be used to compare multiple soil samples quickly. © 2016 American Academy of Forensic Sciences.
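The best-performing comparison pipeline named above (relative data, square-root values, Canberra distance) is straightforward to sketch. The element counts below are hypothetical:

```python
import math

def soil_distance(a, b):
    """Canberra distance between two elemental profiles after the
    preprocessing the study found best: relative data, square-root values."""
    rel_a = [x / sum(a) for x in a]           # relative composition
    rel_b = [x / sum(b) for x in b]
    sa = [math.sqrt(x) for x in rel_a]        # square-root transform
    sb = [math.sqrt(x) for x in rel_b]
    # Canberra distance: sum of |x - y| / (x + y) over elements
    return sum(abs(x - y) / (x + y) for x, y in zip(sa, sb) if x + y > 0)

# Hypothetical counts for three elements in two soil samples and a control:
crime_scene  = [120.0, 30.0, 850.0]
suspect_shoe = [115.0, 33.0, 840.0]
control      = [400.0, 200.0, 400.0]
print(soil_distance(crime_scene, suspect_shoe) <
      soil_distance(crime_scene, control))    # True: similar profiles score closer
```

In the study, such distances feed a discriminative model calibrated against a database rather than being thresholded directly.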
Sadeghi, Zahra; McClelland, James L; Hoffman, Paul
2015-09-01
An influential position in lexical semantics holds that semantic representations for words can be derived through analysis of patterns of lexical co-occurrence in large language corpora. Firth (1957) famously summarised this principle as "you shall know a word by the company it keeps". We explored whether the same principle could be applied to non-verbal patterns of object co-occurrence in natural scenes. We performed latent semantic analysis (LSA) on a set of photographed scenes in which all of the objects present had been manually labelled. This resulted in a representation of objects in a high-dimensional space in which similarity between two objects indicated the degree to which they appeared in similar scenes. These representations revealed similarities among objects belonging to the same taxonomic category (e.g., items of clothing) as well as cross-category associations (e.g., between fruits and kitchen utensils). We also compared representations generated from this scene dataset with two established methods for elucidating semantic representations: (a) a published database of semantic features generated verbally by participants and (b) LSA applied to a linguistic corpus in the usual fashion. Statistical comparisons of the three methods indicated significant association between the structures revealed by each method, with the scene dataset displaying greater convergence with feature-based representations than did LSA applied to linguistic data. The results indicate that information about the conceptual significance of objects can be extracted from their patterns of co-occurrence in natural environments, opening the possibility for such data to be incorporated into existing models of conceptual representation. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.
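The scene-based LSA procedure can be sketched end to end: build an object-by-scene co-occurrence matrix, take a truncated SVD, and compare objects by cosine similarity in the latent space. A toy numpy example in which the object labels and counts are invented for illustration:

```python
import numpy as np

# Toy object-by-scene co-occurrence matrix (rows: objects, columns: scenes).
# Objects that appear in similar scenes should end up close in the latent space.
objects = ["apple", "banana", "knife", "car", "bus"]
counts = np.array([
    [3, 2, 0, 1, 0, 0],   # apple: kitchen/market scenes
    [2, 3, 0, 1, 0, 0],   # banana
    [1, 1, 0, 2, 0, 0],   # knife
    [0, 0, 3, 0, 2, 1],   # car: street scenes
    [0, 0, 2, 0, 3, 1],   # bus
], dtype=float)

# LSA: truncated SVD of the co-occurrence matrix
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2
emb = U[:, :k] * S[:k]    # k-dimensional object representations

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

i = {name: idx for idx, name in enumerate(objects)}
print(cos(emb[i["apple"]], emb[i["banana"]]) >
      cos(emb[i["apple"]], emb[i["bus"]]))   # True: apple lands nearer banana
```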
Campbell, J P; Gratton, M C; Salomone, J A; Lindholm, D J; Watson, W A
1994-01-01
In some emergency medical services (EMS) system designs, response time intervals are mandated with monetary penalties for noncompliance. These times are set with the goal of providing rapid, definitive patient care. The time interval of vehicle at scene-to-patient access (VSPA) has been measured, but its effect on response time interval compliance has not been determined. To determine the effect of the VSPA interval on the mandated code 1 (< 9 min) and code 2 (< 13 min) response time interval compliance in an urban, public-utility model system. A prospective, observational study used independent third-party riders to collect the VSPA interval for emergency life-threatening (code 1) and emergency nonlife-threatening (code 2) calls. The VSPA interval was added to the 9-1-1 call-to-dispatch and vehicle dispatch-to-scene intervals to determine the total time interval from call received until paramedic access to the patient (9-1-1 call-to-patient access). Compliance with the mandated response time intervals was determined using the traditional time intervals (9-1-1 call-to-scene) plus the VSPA time intervals (9-1-1 call-to-patient access). Chi-square was used to determine statistical significance. Of the 216 observed calls, 198 were matched to the traditional time intervals. Sixty-three were code 1, and 135 were code 2. Of the code 1 calls, 90.5% were compliant using 9-1-1 call-to-scene intervals dropping to 63.5% using 9-1-1 call-to-patient access intervals (p < 0.0005). Of the code 2 calls, 94.1% were compliant using 9-1-1 call-to-scene intervals. Compliance decreased to 83.7% using 9-1-1 call-to-patient access intervals (p = 0.012). The addition of the VSPA interval to the traditional time intervals impacts system response time compliance. Using 9-1-1 call-to-scene compliance as a basis for measuring system performance underestimates the time for the delivery of definitive care. This must be considered when response time interval compliances are defined.
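The compliance arithmetic is easy to make concrete: adding the VSPA interval to the traditional call-to-scene interval can push calls past the mandated limit. A sketch with hypothetical call records, not the study's data:

```python
# Hypothetical call records: (code, 9-1-1 call-to-scene minutes, VSPA minutes).
calls = [
    (1, 7.5, 0.8), (1, 8.6, 1.2), (1, 8.9, 0.3),
    (2, 11.0, 1.5), (2, 12.8, 0.4), (2, 12.5, 1.0),
]
LIMIT = {1: 9.0, 2: 13.0}   # mandated response limits in minutes

def compliance(calls, include_vspa):
    """Fraction of calls under the mandated limit, with or without
    the vehicle at scene-to-patient access (VSPA) interval added."""
    ok = 0
    for code, to_scene, vspa in calls:
        elapsed = to_scene + vspa if include_vspa else to_scene
        ok += elapsed < LIMIT[code]
    return ok / len(calls)

print(compliance(calls, include_vspa=False))  # call-to-scene basis: all compliant
print(compliance(calls, include_vspa=True))   # call-to-patient-access basis: lower
```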
Purification of crime scene DNA extracts using centrifugal filter devices
2013-01-01
Background: The success of forensic DNA analysis is limited by the size, quality and purity of biological evidence found at crime scenes. Sample impurities can inhibit PCR, resulting in partial or negative DNA profiles. Various DNA purification methods are applied to remove impurities, for example, employing centrifugal filter devices. However, irrespective of method, DNA purification leads to DNA loss. Here we evaluate the filter devices Amicon Ultra 30 K and Microsep 30 K with respect to recovery rate and general performance for various types of PCR-inhibitory crime scene samples. Methods: Recovery rates for DNA purification using Amicon Ultra 30 K and Microsep 30 K were gathered using quantitative PCR. Mock crime scene DNA extracts were analyzed using quantitative PCR and short tandem repeat (STR) profiling to test the general performance and inhibitor-removal properties of the two filter devices. Additionally, the outcome of long-term routine casework DNA analysis applying each of the devices was evaluated. Results: Applying Microsep 30 K, 14 to 32% of the input DNA was recovered, whereas Amicon Ultra 30 K retained 62 to 70% of the DNA. The improved purity following filter purification counteracted some of this DNA loss, leading to slightly increased electropherogram peak heights for blood on denim (Amicon Ultra 30 K and Microsep 30 K) and saliva on envelope (Amicon Ultra 30 K). Comparing Amicon Ultra 30 K and Microsep 30 K for purification of DNA extracts from mock crime scene samples, the former generated significantly higher peak heights for rape case samples (P-values <0.01) and for hairs (P-values <0.036). In long-term routine use of the two filter devices, DNA extracts purified with Amicon Ultra 30 K were considerably less PCR-inhibitory in Quantifiler Human qPCR analysis compared to Microsep 30 K. Conclusions: Amicon Ultra 30 K performed better than Microsep 30 K due to higher DNA recovery and more efficient removal of PCR-inhibitory substances.
The different performances of the filter devices are likely caused by the quality of the filters and plastic wares, for example, their DNA-binding properties. DNA purification using centrifugal filter devices can be necessary for successful DNA profiling of impure crime scene samples and for consistency between different PCR-based analysis systems, such as quantification and STR analysis. In order to maximize the possibility of obtaining complete STR DNA profiles and to create an efficient workflow, the level of DNA purification applied should be matched to the inhibitor tolerance of the STR analysis system used. PMID:23618387
Disbergen, Niels R.; Valente, Giancarlo; Formisano, Elia; Zatorre, Robert J.
2018-01-01
Polyphonic music listening well exemplifies processes typically involved in daily auditory scene analysis situations, relying on an interactive interplay between bottom-up and top-down processes. Most studies investigating scene analysis have used elementary auditory scenes, however real-world scene analysis is far more complex. In particular, music, contrary to most other natural auditory scenes, can be perceived by either integrating or, under attentive control, segregating sound streams, often carried by different instruments. One of the prominent bottom-up cues contributing to multi-instrument music perception is their timbre difference. In this work, we introduce and validate a novel paradigm designed to investigate, within naturalistic musical auditory scenes, attentive modulation as well as its interaction with bottom-up processes. Two psychophysical experiments are described, employing custom-composed two-voice polyphonic music pieces within a framework implementing a behavioral performance metric to validate listener instructions requiring either integration or segregation of scene elements. In Experiment 1, the listeners' locus of attention was switched between individual instruments or the aggregate (i.e., both instruments together), via a task requiring the detection of temporal modulations (i.e., triplets) incorporated within or across instruments. Subjects responded post-stimulus whether triplets were present in the to-be-attended instrument(s). Experiment 2 introduced the bottom-up manipulation by adding a three-level morphing of instrument timbre distance to the attentional framework. The task was designed to be used within neuroimaging paradigms; Experiment 2 was additionally validated behaviorally in the functional Magnetic Resonance Imaging (fMRI) environment. Experiment 1 subjects (N = 29, non-musicians) completed the task at high levels of accuracy, showing no group differences between any experimental conditions. 
Nineteen listeners also participated in Experiment 2, which showed a main effect of instrument timbre distance, although within-attention-condition timbre-distance contrasts did not demonstrate a timbre effect. Correlating overall scores with morph-distance effects, computed by subtracting the largest from the smallest timbre distance scores, showed an influence of general task difficulty on the timbre distance effect. Comparison of laboratory and fMRI data showed that scanner noise had no adverse effect on task performance. These experimental paradigms enable the study of both bottom-up and top-down contributions to auditory stream segregation and integration within psychophysical and neuroimaging experiments. PMID:29563861
Is moral beauty different from facial beauty? Evidence from an fMRI study.
Wang, Tingting; Mo, Lei; Mo, Ce; Tan, Li Hai; Cant, Jonathan S; Zhong, Luojin; Cupchik, Gerald
2015-06-01
Is moral beauty different from facial beauty? Two functional magnetic resonance imaging experiments were performed to answer this question. Experiment 1 investigated the network of moral aesthetic judgments and facial aesthetic judgments. Participants performed aesthetic judgments and gender judgments on both faces and scenes containing moral acts. The conjunction analysis of the contrasts 'facial aesthetic judgment > facial gender judgment' and 'scene moral aesthetic judgment > scene gender judgment' identified the common involvement of the orbitofrontal cortex (OFC), inferior temporal gyrus and medial superior frontal gyrus, suggesting that both types of aesthetic judgments are based on the orchestration of perceptual, emotional and cognitive components. Experiment 2 examined the network of facial beauty and moral beauty during implicit perception. Participants performed a non-aesthetic judgment task on both faces (beautiful vs common) and scenes (containing morally beautiful vs neutral information). We observed that facial beauty (beautiful faces > common faces) involved both the cortical reward region OFC and the subcortical reward region putamen, whereas moral beauty (moral beauty scenes > moral neutral scenes) only involved the OFC. Moreover, compared with facial beauty, moral beauty spanned a larger-scale cortical network, indicating more advanced and complex cerebral representations characterizing moral beauty. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Modeling Of Object- And Scene-Prototypes With Hierarchically Structured Classes
NASA Astrophysics Data System (ADS)
Ren, Z.; Jensch, P.; Ameling, W.
1989-03-01
The success of knowledge-based image analysis methodology and implementation tools depends largely on an appropriately and efficiently built model in which the domain-specific context and the inherent structure of the observed image scene have been encoded. To identify an object in an application environment, a computer vision system needs to know, first, the description of the object to be found in an image or image sequence and, second, the corresponding relationships between object descriptions within the image sequence. This paper presents models of image objects and scenes by means of hierarchically structured classes. Using the topovisual formalism of graphs and higraphs, we are currently studying principally the relational aspects and data abstraction of the modeling, in order to visualize the structural nature resident in image objects and scenes and to formalize their descriptions. The goal is to expose the structure of the image scene and the correspondence of image objects in the low-level image interpretation process. The object-based system design approach has been applied to build the model base. We utilize the object-oriented programming language C++ for designing, testing and implementing the abstracted entity classes and the operation structures which have been modeled topovisually. The reference images used for modeling prototypes of objects and scenes are from industrial environments as well as medical applications.
High-speed switching of biphoton delays through electro-optic pump frequency modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odele, Ogaga D.; Lukens, Joseph M.; Jaramillo-Villegas, Jose A.
2016-12-08
The realization of high-speed tunable delay control has received significant attention in the scene of classical photonics. In quantum optics, however, such rapid delay control systems for entangled photons have remained undeveloped. Here, for the first time, we demonstrate rapid (2.5 MHz) modulation of signal-idler arrival times through electro-optic pump frequency modulation. Our technique applies the quantum phenomenon of nonlocal dispersion cancellation along with pump frequency tuning to control the relative delay between photon pairs. Chirped fiber Bragg gratings are employed to provide large amounts of dispersion, resulting in biphoton delays exceeding 30 ns. This rapid delay modulation scheme could be useful for on-demand single-photon distribution, in addition to quantum versions of pulse position modulation.
The Perception of Concurrent Sound Objects in Harmonic Complexes Impairs Gap Detection
ERIC Educational Resources Information Center
Leung, Ada W. S.; Jolicoeur, Pierre; Vachon, Francois; Alain, Claude
2011-01-01
Since the introduction of the concept of auditory scene analysis, there has been a paucity of work focusing on the theoretical explanation of how attention is allocated within a complex auditory scene. Here we examined signal detection in situations that promote either the fusion of tonal elements into a single sound object or the segregation of a…
ERIC Educational Resources Information Center
Lenoble, Martine; And Others
1991-01-01
Four ideas for French language classroom activities include creation of a parody horoscope, reenactment of household scenes from a comic strip, an exercise in memorizing grammatical rules through children's chants, and analysis of a videotape's content, aural, and visual components. (MSE)
A Model of Auditory-Cognitive Processing and Relevance to Clinical Applicability.
Edwards, Brent
2016-01-01
Hearing loss and cognitive function interact in both a bottom-up and top-down relationship. Listening effort is tied to these interactions, and models have been developed to explain their relationship. The Ease of Language Understanding model in particular has gained considerable attention in its explanation of the effect of signal distortion on speech understanding. Signal distortion can also affect auditory scene analysis ability, however, resulting in a distorted auditory scene that can affect cognitive function, listening effort, and the allocation of cognitive resources. These effects are explained through an addition to the Ease of Language Understanding model. This model can be generalized to apply to all sounds, not only speech, representing the increased effort required for auditory environmental awareness and other nonspeech auditory tasks. While the authors have measures of speech understanding and cognitive load to quantify these interactions, they are lacking measures of the effect of hearing aid technology on auditory scene analysis ability and how effort and attention varies with the quality of an auditory scene. Additionally, the clinical relevance of hearing aid technology on cognitive function and the application of cognitive measures in hearing aid fittings will be limited until effectiveness is demonstrated in real-world situations.
Depth estimation using a lightfield camera
NASA Astrophysics Data System (ADS)
Roper, Carissa
The latest innovation to camera design has come in the form of the lightfield, or plenoptic, camera that captures 4-D radiance data rather than just the 2-D scene image via microlens arrays. With the spatial and angular light ray data now recorded on the camera sensor, it is feasible to construct algorithms that can estimate depth of field in different portions of a given scene. There are limitations to the precision due to hardware structure and the sheer number of scene variations that can occur. In this thesis, the potential of digital image analysis and spatial filtering to extract depth information is tested on the commercially available plenoptic camera.
Onboard Classifiers for Science Event Detection on a Remote Sensing Spacecraft
NASA Technical Reports Server (NTRS)
Castano, Rebecca; Mazzoni, Dominic; Tang, Nghia; Greeley, Ron; Doggett, Thomas; Cichy, Ben; Chien, Steve; Davies, Ashley
2006-01-01
Typically, data collected by a spacecraft is downlinked to Earth and pre-processed before any analysis is performed. We have developed classifiers that can be used onboard a spacecraft to identify high priority data for downlink to Earth, providing a method for maximizing the use of a potentially bandwidth limited downlink channel. Onboard analysis can also enable rapid reaction to dynamic events, such as flooding, volcanic eruptions or sea ice break-up. Four classifiers were developed to identify cryosphere events using hyperspectral images. These classifiers include a manually constructed classifier, a Support Vector Machine (SVM), a Decision Tree and a classifier derived by searching over combinations of thresholded band ratios. Each of the classifiers was designed to run in the computationally constrained operating environment of the spacecraft. A set of scenes was hand-labeled to provide training and testing data. Performance results on the test data indicate that the SVM and manual classifiers outperformed the Decision Tree and band-ratio classifiers with the SVM yielding slightly better classifications than the manual classifier.
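A thresholded band-ratio classifier of the kind searched over here can be very small. The sketch below uses a normalized-difference snow index, a common band ratio for cryosphere scenes; the specific bands and the 0.4 cutoff are illustrative assumptions, not the classifiers from the paper:

```python
def ndsi(green, swir):
    """Normalized-difference snow index from two band radiances."""
    return (green - swir) / (green + swir)

def classify_pixel(green, swir, threshold=0.4):
    """Label a pixel 'snow/ice' when its NDSI exceeds the threshold.
    The 0.4 cutoff is a commonly used illustrative value."""
    return "snow/ice" if ndsi(green, swir) > threshold else "other"

# Snow and ice are bright in visible/green bands but dark in the SWIR band:
print(classify_pixel(green=0.80, swir=0.10))  # snow/ice
print(classify_pixel(green=0.30, swir=0.25))  # other
```

Such per-pixel rules are attractive onboard because they need only a few arithmetic operations per pixel, fitting the computationally constrained spacecraft environment described above.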
Parallel architecture for rapid image generation and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nerheim, R.J.
1987-01-01
A multiprocessor architecture inspired by the Disney multiplane camera is proposed. For many applications, this approach produces a natural mapping of processors to objects in a scene. Such a mapping promotes parallelism and reduces the hidden-surface work, with minimal interprocessor communication and low overhead cost. Existing graphics architectures store the final picture as a monolithic entity; the architecture here stores each object's image separately and assembles the final composite picture from the component images only when the video display needs to be refreshed. This organization simplifies the work required to animate moving objects that occlude other objects. In addition, the architecture has multiple processors that generate the component images in parallel, further shortening the time needed to create a composite picture. Beyond generating images for animation, the architecture also has the ability to decompose images.
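The compositing step described above, keeping each object's image separate and assembling the frame only at refresh, can be sketched in a few lines. This is a toy sequential pure-Python version; in the proposed hardware the component images are generated by parallel processors:

```python
# Each object is stored as its own small image plus an offset and a depth,
# echoing the multiplane idea: the composite is assembled only at refresh.
def composite(width, height, layers):
    """Back-to-front paint of per-object layers into a final frame.
    layers: list of (depth, x, y, rows) with rows a 2-D list of
    pixel values, None meaning transparent."""
    frame = [[0] * width for _ in range(height)]
    # Paint the deepest layers first so nearer objects occlude them.
    for depth, ox, oy, rows in sorted(layers, key=lambda l: l[0], reverse=True):
        for dy, row in enumerate(rows):
            for dx, val in enumerate(row):
                if val is not None:
                    frame[oy + dy][ox + dx] = val
    return frame

background = (10, 0, 0, [[1, 1, 1], [1, 1, 1], [1, 1, 1]])
sprite     = (5, 1, 1, [[9, 9], [9, None]])   # nearer object, partly transparent
print(composite(3, 3, [background, sprite]))  # [[1, 1, 1], [1, 9, 9], [1, 9, 1]]
```

Animating a moving object then only requires re-rendering that object's layer and recompositing, which is the work-saving the abstract describes.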
Medico legal investigations into sudden sniffing deaths linked with trichloroethylene.
Da Broi, Ugo; Colatutto, Antonio; Sala, Pierguido; Desinan, Lorenzo
2015-08-01
Sudden deaths attributed to sniffing trichloroethylene are caused by the abuse of this solvent which produces pleasant inebriating effects with rapid dissipation. In the event of repeated cycles of inhalation, a dangerous and uncontrolled systemic accumulation of trichloroethylene may occur, followed by central nervous system depression, coma and lethal cardiorespiratory arrest. Sometimes death occurs outside the hospital environment, without medical intervention or witnesses and without specific necroscopic signs. Medico legal investigations into sudden sniffing deaths associated with trichloroethylene demand careful analysis of the death scene and related circumstances, a detailed understanding of the deceased's medical history and background of substance abuse and an accurate evaluation of all autopsy and laboratory data, with close cooperation between the judiciary, coroners and toxicologists. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Parallel detection of violations of color constancy
Foster, David H.; Nascimento, Sérgio M. C.; Amano, Kinjiro; Arend, Larry; Linnell, Karina J.; Nieves, Juan Luis; Plet, Sabrina; Foster, Jeffrey S.
2001-01-01
The perceived colors of reflecting surfaces generally remain stable despite changes in the spectrum of the illuminating light. This color constancy can be measured operationally by asking observers to distinguish illuminant changes on a scene from changes in the reflecting properties of the surfaces comprising it. It is shown here that during fast illuminant changes, simultaneous changes in spectral reflectance of one or more surfaces in an array of other surfaces can be readily detected almost independent of the numbers of surfaces, suggesting a preattentive, spatially parallel process. This process, which is perfect over a spatial window delimited by the anatomical fovea, may form an early input to a multistage analysis of surface color, providing the visual system with information about a rapidly changing world in advance of the generation of a more elaborate and stable perceptual representation. PMID:11438751
Sikirzhytskaya, Aliaksandra; Sikirzhytski, Vitali; Lednev, Igor K
2012-03-10
Traces of human body fluids, such as blood, saliva, sweat, semen and vaginal fluid, play an increasingly important role in forensic investigations. However, a nondestructive, easy and rapid identification of body fluid traces at the scene of a crime has not yet been developed. The obstacles have recently been addressed in our studies, which demonstrated the considerable potential of Raman spectroscopy. In this study, we continued to build a full library of body fluid spectroscopic signatures. The problems concerning vaginal fluid stain identification were addressed using Raman spectroscopy coupled with advanced statistical analysis. Calculated characteristic Raman and fluorescent spectral components were used to build a multidimensional spectroscopic signature of vaginal fluid, which demonstrated good specificity and was able to handle heterogeneous samples from different donors. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Impact of LANDSAT MSS sensor differences on change detection analysis
NASA Technical Reports Server (NTRS)
Likens, W. C.; Wrigley, R. C.
1983-01-01
Some 512 by 512 pixel subwindows for simultaneously acquired scene pairs obtained by the LANDSAT 2, 3 and 4 multispectral band scanners were coregistered, using the LANDSAT 4 scenes as the base to which the other images were registered. Scattergrams between the coregistered scenes (a form of contingency analysis) were used to radiometrically compare data from the various sensors. Mode values were derived and used to visually fit a linear regression. Root mean square errors of the registration varied between 0.1 and 1.5 pixels. There appear to be no major problems preventing the use of LANDSAT 4 MSS with previous MSS sensors for change detection, provided the noise interference can be removed or minimized. Data normalizations for change detection should be based on the data rather than solely on calibration information. This allows simultaneous normalization of the atmosphere as well as the radiometry.
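The data-driven normalization described above amounts to fitting a linear gain and offset between paired sensor values. A sketch, assuming the scattergram mode values have already been extracted (the numbers are invented for the demo):

```python
import numpy as np

def normalize_to_reference(src_modes, ref_modes):
    """Fit DN_ref ~ gain * DN_src + offset from paired scattergram modes.

    Applying (gain, offset) to the source scene normalizes atmosphere
    and radiometry together, since the fit is derived from the data
    rather than from calibration information alone.
    """
    gain, offset = np.polyfit(src_modes, ref_modes, 1)
    return gain, offset

# Illustrative paired mode values from two coregistered sensors
src = np.array([10.0, 50.0, 90.0, 130.0])
ref = 1.2 * src + 5.0  # a known linear relation, for the demo only
gain, offset = normalize_to_reference(src, ref)
```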
Morrison, Jack; Watts, Giles; Hobbs, Glyn; Dawnay, Nick
2018-04-01
Field based forensic tests commonly provide information on the presence and identity of biological stains and can also support the identification of species. Such information can support downstream processing of forensic samples and generate rapid intelligence. These approaches have traditionally used chemical and immunological techniques to elicit the result but some are known to suffer from a lack of specificity and sensitivity. The last 10 years has seen the development of field-based genetic profiling systems, with specific focus on moving the mainstay of forensic genetic analysis, namely STR profiling, out of the laboratory and into the hands of the non-laboratory user. In doing so it is now possible for enforcement officers to generate a crime scene DNA profile which can then be matched to a reference or database profile. The introduction of these novel genetic platforms also allows for further development of new molecular assays aimed at answering the more traditional questions relating to body fluid identity and species detection. The current drive for field-based molecular tools is in response to the needs of the criminal justice system and enforcement agencies, and promises a step-change in how forensic evidence is processed. However, the adoption of such systems by the law enforcement community does not represent a new strategy in the way forensic science has integrated previous novel approaches. Nor do they automatically represent a threat to the quality control and assurance practices that are central to the field. 
This review examines the historical need and subsequent research and developmental breakthroughs in field-based forensic analysis over the past two decades, with particular focus on genetic methods. Emerging technologies from a range of scientific fields that have potential applications in forensic analysis at the crime scene are identified, and associated issues that arise from the shift from laboratory into operational field use are discussed. Copyright © 2018 Elsevier B.V. All rights reserved.
DeBeck, Kora; Wood, Evan; Qi, Jiezhi; Fu, Eric; McArthur, Doug; Montaner, Julio; Kerr, Thomas
2012-01-01
Limited attention has been given to the potential role that the structure of housing available to people who are entrenched in street-based drug scenes may play in influencing the amount of time injection drug users (IDU) spend on public streets. We sought to examine the relationship between time spent socializing in Vancouver's drug scene and access to private space. Using multivariate logistic regression we evaluated factors associated with socializing (three+ hours each day) in Vancouver's open drug scene among a prospective cohort of IDU. We also assessed attitudes towards relocating socializing activities if greater access to private indoor space was provided. Among our sample of 1114 IDU, 43% fit our criteria for socializing in the open drug scene. In multivariate analysis, having limited access to private space was independently associated with socializing (adjusted odds ratio: 1.80, 95% confidence interval: 1.28-2.55). In further analysis, 65% of 'socializers' reported positive attitudes towards relocating socializing if they had greater access to private space. These findings suggest that providing IDU with greater access to private indoor space may reduce one component of drug-related street disorder. Low-threshold supportive housing based on the 'housing first' model that include safeguards to manage behaviors associated with illicit drug use appear to offer important opportunities to create the types of private spaces that could support a reduction in street disorder. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Microbial soil community analyses for forensic science: Application to a blind test.
Demanèche, Sandrine; Schauser, Leif; Dawson, Lorna; Franqueville, Laure; Simonet, Pascal
2017-01-01
Soil complexity, heterogeneity and transferability make it valuable in forensic investigations to help obtain clues as to the origin of an unknown sample, or to compare samples from a suspect or object with samples collected at a crime scene. In a few countries, soil analysis is used in matters from site verification to estimates of time after death. However, to date, the application or use of soil information in criminal investigations has been limited. In particular, comparing bacterial communities in soil samples could be a useful tool for forensic science. To evaluate the relevance of this approach, a blind test was performed to determine the origin of two questioned samples (one from the mock crime scene and the other from a 50:50 mixture of the crime scene and the alibi site) compared to three control samples (soil samples from the crime scene, from a context site 25 m away from the crime scene, and from the alibi site, which was the suspect's home). Two biological methods, Ribosomal Intergenic Spacer Analysis (RISA) and 16S rRNA gene sequencing with Illumina MiSeq, were used to evaluate the discriminating power of soil bacterial communities. Both techniques discriminated well between soils from a single source, but a combination of both techniques was necessary to show that the origin was a mixture of soils. This study illustrates the potential of applying microbial ecology methodologies in soil as an evaluative forensic tool. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Callow, Jon; Zammit, Katina
2012-01-01
From the original performances of Shakespeare's plays and the illustrated title page of his First Folios, to the current day use of YouTube, mobile phones and tablets, multimodal texts have been apparent across Western cultures. The rapid development over the last 20 years of technology has markedly increased access to a much broader array of…
A Heterogeneous Multiprocessor Graphics System Using Processor-Enhanced Memories
1989-02-01
frames per second, font generation directly from conic spline descriptions, and rapid calculation of radiosity form factors. The hardware consists of... generality for rendering curved surfaces, volume data, objects described with Constructive Solid Geometry, for rendering scenes using the radiosity ... faces and for computing a spherical radiosity lighting model (see Section 7.6). Custom memory chips (208 bits x 128 pixels) on the renderer board.
Use of AFIS for linking scenes of crime.
Hefetz, Ido; Liptz, Yakir; Vaturi, Shaul; Attias, David
2016-05-01
Forensic intelligence can provide critical information in criminal investigations - the linkage of crime scenes. The Automatic Fingerprint Identification System (AFIS) is an example of a technological improvement that has advanced the entire forensic identification field to strive for new goals and achievements. In one example using AFIS, a series of burglaries into private apartments enabled a fingerprint examiner to search latent prints from different burglary scenes against an unsolved latent print database. Latent finger and palm prints coming from the same source were associated with more than 20 cases. Forensic intelligence and profile analysis then made it possible to anticipate the offender's behavior. He was caught, identified, and arrested. It is recommended to perform an AFIS search of LT/UL prints against current crimes automatically as part of laboratory protocol, and not at an examiner's discretion. This approach may link different crime scenes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Scene analysis for effective visual search in rough three-dimensional-modeling scenes
NASA Astrophysics Data System (ADS)
Wang, Qi; Hu, Xiaopeng
2016-11-01
Visual search is a fundamental technology in the computer vision community. It is difficult to find an object in complex scenes when similar distracters exist in the background. We propose a target search method in rough three-dimensional-modeling scenes based on a vision salience theory and a camera imaging model. We define the salience of objects (or features) and explain how object salience measurements are calculated. We also present a type of search path that guides the search to the target through salient objects. Along the search path, as each preceding object is localized, the search region of each subsequent object decreases; this region is calculated through the imaging model and an optimization method. The experimental results indicate that the proposed method is capable of resolving the ambiguities resulting from distracters containing visual features similar to the target's, leading to an improvement in search speed of over 50%.
Application of composite small calibration objects in traffic accident scene photogrammetry.
Chen, Qiang; Xu, Hongguo; Tan, Lidong
2015-01-01
In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.
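The two-dimensional direct linear transformation (DLT) at the core of this method estimates a plane-to-image homography from point correspondences. A minimal sketch of the linear step only (the paper's improvement, jointly minimizing reprojection error over the calibration points of all objects, is not reproduced here; the coordinates are invented):

```python
import numpy as np

def dlt_homography(world, image):
    """Estimate the 3x3 plane-to-image homography by the 2D DLT.

    world, image: (N, 2) arrays of corresponding points, N >= 4,
    e.g. corners of the small calibration objects expressed in the
    composite calibration object's coordinate system.
    """
    A = []
    for (X, Y), (x, y) in zip(world, image):
        # Each correspondence contributes two rows of the DLT system A h = 0
        A.append([-X, -Y, -1, 0, 0, 0, x * X, x * Y, x])
        A.append([0, 0, 0, -X, -Y, -1, y * X, y * Y, y])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)     # null-space vector = homography up to scale
    return H / H[2, 2]

world = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
image = np.array([[2, 3], [4, 3], [4, 5], [2, 5]], float)  # scale 2, shift (2, 3)
H = dlt_homography(world, image)
```

Rectifying the accident-scene photograph then amounts to warping the image by the inverse of this homography.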
Spatial and temporal variability of hyperspectral signatures of terrain
NASA Astrophysics Data System (ADS)
Jones, K. F.; Perovich, D. K.; Koenig, G. G.
2008-04-01
Electromagnetic signatures of terrain exhibit significant spatial heterogeneity on a range of scales as well as considerable temporal variability. A statistical characterization of the spatial heterogeneity and spatial scaling algorithms of terrain electromagnetic signatures are required to extrapolate measurements to larger scales. Basic terrain elements including bare soil, grass, deciduous, and coniferous trees were studied in a quasi-laboratory setting using instrumented test sites in Hanover, NH and Yuma, AZ. Observations were made using a visible and near infrared spectroradiometer (350 - 2500 nm) and hyperspectral camera (400 - 1100 nm). Results are reported illustrating: i) several different scenes; ii) a terrain scene time series sampled over an annual cycle; and iii) the detection of artifacts in scenes. A principal component analysis indicated that the first three principal components typically explained between 90 and 99% of the variance of the 30 to 40-channel hyperspectral images. Higher order principal components of hyperspectral images are useful for detecting artifacts in scenes.
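The reported principal component analysis reduces to eigen-decomposing the band covariance matrix and summing the leading eigenvalues. A sketch on synthetic spectra constructed to lie near a 3-dimensional subspace, mimicking the 90-99% figure above:

```python
import numpy as np

def pca_explained_variance(pixels, k=3):
    """Fraction of total variance captured by the first k principal
    components of a set of spectra (n_pixels x n_bands)."""
    X = pixels - pixels.mean(axis=0)
    # Eigenvalues of the band covariance matrix are the PC variances
    eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]  # descending
    return eigvals[:k].sum() / eigvals.sum()

# Synthetic 30-band image whose spectra live near a rank-3 subspace
rng = np.random.default_rng(0)
spectra = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 30))
spectra += 0.01 * rng.normal(size=(500, 30))  # small sensor noise
frac = pca_explained_variance(spectra, k=3)
```

The residue left in higher-order components, the complement of this fraction, is where scene artifacts become conspicuous.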
Color constancy in natural scenes explained by global image statistics
Foster, David H.; Amano, Kinjiro; Nascimento, Sérgio M. C.
2007-01-01
To what extent do observers' judgments of surface color with natural scenes depend on global image statistics? To address this question, a psychophysical experiment was performed in which images of natural scenes under two successive daylights were presented on a computer-controlled high-resolution color monitor. Observers reported whether there was a change in reflectance of a test surface in the scene. The scenes were obtained with a hyperspectral imaging system and included variously trees, shrubs, grasses, ferns, flowers, rocks, and buildings. Discrimination performance, quantified on a scale of 0 to 1 with a color-constancy index, varied from 0.69 to 0.97 over 21 scenes and two illuminant changes, from a correlated color temperature of 25,000 K to 6700 K and from 4000 K to 6700 K. The best account of these effects was provided by receptor-based rather than colorimetric properties of the images. Thus, in a linear regression, 43% of the variance in constancy index was explained by the log of the mean relative deviation in spatial cone-excitation ratios evaluated globally across the two images of a scene. A further 20% was explained by including the mean chroma of the first image and its difference from that of the second image and a further 7% by the mean difference in hue. Together, all four global color properties accounted for 70% of the variance and provided a good fit to the effects of scene and of illuminant change on color constancy, and, additionally, of changing test-surface position. By contrast, a spatial-frequency analysis of the images showed that the gradient of the luminance amplitude spectrum accounted for only 5% of the variance. PMID:16961965
High-dynamic-range scene compression in humans
NASA Astrophysics Data System (ADS)
McCann, John J.
2006-02-01
Single pixel dynamic-range compression alters a particular input value to a unique output value - a look-up table. It is used in chemical and most digital photographic systems having S-shaped transforms to render high-range scenes onto low-range media. Post-receptor neural processing is spatial, as shown by the physiological experiments of Dowling, Barlow, Kuffler, and Hubel & Wiesel. Human vision does not render a particular receptor-quanta catch as a unique response. Instead, because of spatial processing, the response to a particular quanta catch can be any color. Visual response is scene dependent. Stockham proposed an approach to model human range compression using low-spatial frequency filters. Campbell, Ginsberg, Wilson, Watson, Daly and many others have developed spatial-frequency channel models. This paper describes experiments measuring the properties of desirable spatial-frequency filters for a variety of scenes. Given the radiances of each pixel in the scene and the observed appearances of objects in the image, one can calculate the visual mask for that individual image. Here, the visual mask is the spatial pattern of changes made by the visual system in processing the input image. It is the spatial signature of human vision. Low-dynamic-range images with many white areas need no spatial filtering. High-dynamic-range images with many blacks, or deep shadows, require strong spatial filtering. Sun on the right and shade on the left requires directional filters. These experiments show that variable scene-dependent filters are necessary to mimic human vision. Although spatial-frequency filters can model these scene-dependent appearances, the problem remains that an analysis of the scene is still needed to calculate the scene-dependent strengths of each of the filters for each frequency.
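The single-pixel, look-up-table compression that the abstract contrasts with spatial processing can be illustrated with a toy S-shaped transform. The curve here, a smoothstep raised to a gamma, is chosen purely for illustration; real film and camera curves differ:

```python
import numpy as np

def s_curve_lut(levels=256, gamma=0.6):
    """Single-pixel dynamic-range compression: each input value maps to
    one output value through a lookup table, independent of its spatial
    neighbours (the property human vision does NOT share)."""
    x = np.linspace(0.0, 1.0, levels)
    s = x * x * (3.0 - 2.0 * x)  # smoothstep: S-shaped around mid-tones
    return s ** gamma            # gamma lifts shadows, compressing range

lut = s_curve_lut()
# Apply the LUT to a tiny normalized "scene"
scene = np.array([[0.0, 0.5], [0.25, 1.0]])
compressed = lut[(scene * 255).astype(int)]
```

A spatial, scene-dependent model of the kind the paper argues for would instead compute each output from a neighbourhood of input pixels, so the same input value could map to different outputs in different parts of the image.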
NASA Astrophysics Data System (ADS)
Anwer, Rao Muhammad; Khan, Fahad Shahbaz; van de Weijer, Joost; Molinier, Matthieu; Laaksonen, Jorma
2018-04-01
Designing discriminative powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distribution of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP based texture information provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification.
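The LBP coding that TEX-Nets consume can be sketched with the textbook 8-neighbour operator; the exact mapping of coded images used in the paper may differ:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour Local Binary Patterns on a 2-D grayscale array.

    Each interior pixel gets an 8-bit code: one bit per neighbour, set
    when the neighbour is >= the centre value. Images of such codes are
    the kind of explicit texture input fed to the CNN alongside RGB.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    h, w = gray.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = gray[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes

gray = np.array([[9, 9, 9],
                 [0, 5, 9],
                 [0, 0, 0]], dtype=np.int32)
code = lbp_image(gray)  # single interior pixel
```

Early fusion would stack such coded maps with the RGB channels at the network input; late fusion would run separate RGB and texture streams and merge their features.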
Mu, Tingkui; Pacheco, Shaun; Chen, Zeyu; Zhang, Chunmin; Liang, Rongguang
2017-02-13
In this paper, the design and experimental demonstration of a snapshot linear-Stokes imaging spectropolarimeter (SLSIS) is presented. The SLSIS, which is based on division-of-focal-plane polarimetry with four parallel linear polarization channels and integral field spectroscopy with numerous slit dispersive paths, has no moving parts and provides video-rate Stokes-vector hyperspectral datacubes. It does not need any scanning in the spectral, spatial or polarization dimension and offers significant advantages of rapid reconstruction without heavy computation during post-processing. The principle and the experimental setup of the SLSIS are described in detail. The image registration, Stokes spectral reconstruction and calibration procedures are included, and the system is validated using measurements of tungsten light and a static scene. The SLSIS's snapshot ability to resolve polarization spectral signatures is demonstrated using measurements of a dynamic scene.
Classification of Mobile Laser Scanning Point Clouds from Height Features
NASA Astrophysics Data System (ADS)
Zheng, M.; Lemmens, M.; van Oosterom, P.
2017-09-01
The demand for 3D maps of cities and road networks is steadily growing, and mobile laser scanning (MLS) systems are often the preferred geo-data acquisition method for capturing such scenes. Because MLS systems are mounted on cars or vans, they can acquire billions of points of road scenes within a few hours of survey. Manual processing of point clouds is labour intensive and thus time consuming and expensive. Hence, the need for rapid and automated methods for 3D mapping of dense point clouds is growing exponentially. Over the last five years, research on automated 3D mapping of MLS data has intensified tremendously. In this paper, we present our work on automated classification of MLS point clouds. In the present stage of the research we exploited three features - two height components and one reflectance value - and achieved an overall accuracy of 73%, which is really encouraging for further refining our approach.
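As a toy illustration of labelling points from a few such features, consider a rule-based classifier over two height components and reflectance. The thresholds, class names and rules are invented for illustration; the paper's approach is evaluated against hand-labelled data rather than hard-coded rules:

```python
def classify_point(height_above_ground, relative_height, reflectance):
    """Assign a toy semantic label to one MLS point from three features:
    height above local ground, height relative to the neighbourhood,
    and return-signal reflectance. All thresholds are hypothetical."""
    if height_above_ground < 0.2:
        # Near ground level: split by reflectance (road paint is bright)
        return "road" if reflectance > 0.5 else "ground"
    if relative_height > 2.0:
        return "building"   # tall, rising well above neighbours
    return "vegetation"     # elevated but irregular, moderate height
```

A learned classifier would replace these hand-set thresholds with decision boundaries fitted to labelled points, but the feature vector per point is the same.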
From the outside looking in: developing snapshot imaging spectro-polarimeters
NASA Astrophysics Data System (ADS)
Dereniak, E. L.
2014-09-01
The information from a scene is critical in autonomous optical systems, and the variety of information that can be extracted is determined by the application. To characterize a target, the information of interest captured is spectral (λ), polarization (S) and distance (Z). There are many technologies that capture this information in different ways to identify the target. In many fields, such as mining and military reconnaissance, there is a need for rapid data acquisition and, for this reason, a relatively new method has been devised that can obtain all this information simultaneously. The need for snapshot acquisition of data without moving parts was the goal of the research. This paper reviews the chain of novel research instruments that were sequentially developed to capture spectral and polarization information of a scene in a snapshot or flash. The distance (Z) is yet to be integrated.
Igarashi, Yutaka; Yokobori, Shoji; Yoshino, Yudai; Masuno, Tomohiko; Miyauchi, Masato; Yokota, Hiroyuki
2017-10-01
In Japan, the number of patients with foreign body airway obstruction caused by food is rapidly increasing as the population ages, and such obstruction is a leading cause of unexpected death. This study aimed to determine the factors that influence the prognosis of these patients. This is a retrospective single-institution study. A total of 155 patients were included. We collected variables from the medical records and analyzed them to determine the factors associated with patient outcome. Patient outcomes were evaluated using cerebral performance categories (CPCs) when patients were discharged or transferred to other hospitals. A favorable outcome was defined as CPC 1 or 2, and an unfavorable outcome was defined as CPC 3, 4, or 5. A higher proportion of patients with favorable outcomes than unfavorable outcomes had a witness present at the accident scene (68.8% vs. 44.7%, P=0.0154). Patients whose foreign bodies were removed by a bystander at the accident scene had a significantly higher rate of favorable outcomes than those whose foreign bodies were removed by emergency medical technicians or an emergency physician at the scene (73.7% vs. 31.8%, P<0.0075) or at the hospital after transfer (73.7% vs. 9.6%, P<0.0001). The presence of a witness to the aspiration, and removal of the airway obstruction by bystanders at the accident scene, improve outcomes in patients with foreign body airway obstruction. When airway obstruction occurs, bystanders should remove foreign bodies immediately. Copyright © 2017 Elsevier Inc. All rights reserved.
Pilot Task Profiles, Human Factors, And Image Realism
NASA Astrophysics Data System (ADS)
McCormick, Dennis
1982-06-01
Computer Image Generation (CIG) visual systems provide real time scenes for state-of-the-art flight training simulators. The visual system requires a greater understanding of training tasks, human factors, and the concept of image realism to produce an effective and efficient training scene than is required by other types of visual systems. Image realism must be defined in terms of pilot visual information requirements. Human factors analysis of training and perception is necessary to determine the pilot's information requirements. System analysis then determines how the CIG and display device can best provide essential information to the pilot. This analysis procedure ensures optimum training effectiveness and system performance.
Statistics of high-level scene context
Greene, Michelle R.
2013-01-01
Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed “things” in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information.
Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics rather than intuition. PMID:24194723
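The bag-of-words level of description can be illustrated with a minimal sketch (all scene data, labels, and object names below are invented for illustration): each scene is reduced to a count vector over an object vocabulary, and a simple nearest-centroid classifier, one basic kind of linear classifier, assigns categories.

```python
from collections import Counter

# Toy scenes: each is the list of labeled objects it contains (invented data).
scenes = {
    "kitchen_1": ["sink", "cabinet", "stove", "cabinet"],
    "kitchen_2": ["stove", "fridge", "sink"],
    "street_1":  ["car", "road", "building", "car"],
    "street_2":  ["road", "car", "sign"],
}
labels = {"kitchen_1": "kitchen", "kitchen_2": "kitchen",
          "street_1": "street", "street_2": "street"}

vocab = sorted({obj for objs in scenes.values() for obj in objs})

def bag_of_words(objects):
    """Fixed-length count vector over the object vocabulary."""
    counts = Counter(objects)
    return [counts[w] for w in vocab]

def centroids(train):
    """Average the bag-of-words vectors per category."""
    sums, ns = {}, {}
    for name, objs in train.items():
        v, cat = bag_of_words(objs), labels[name]
        ns[cat] = ns.get(cat, 0) + 1
        sums[cat] = [a + b for a, b in zip(sums.get(cat, [0] * len(vocab)), v)]
    return {cat: [x / ns[cat] for x in sums[cat]] for cat in sums}

def classify(objects, cents):
    """Assign the category whose centroid is nearest in count space."""
    v = bag_of_words(objects)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(v, c))
    return min(cents, key=lambda cat: dist(cents[cat]))

cents = centroids(scenes)
print(classify(["sink", "stove"], cents))  # prints "kitchen"
```

The real analysis used far larger vocabularies and trained classifiers; this only shows why a list of contained objects already carries category information.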
NASA Astrophysics Data System (ADS)
Jarvis, Jan; Haertelt, Marko; Hugger, Stefan; Butschek, Lorenz; Fuchs, Frank; Ostendorf, Ralf; Wagner, Joachim; Beyerer, Juergen
2017-04-01
In this work we present data analysis algorithms for the detection of hazardous substances in hyperspectral observations acquired using active mid-infrared (MIR) backscattering spectroscopy. We present a novel background extraction algorithm, the adaptive background generation process (ABGP), based on the adaptive target generation process proposed by Ren and Chang; it generates a robust and physically meaningful set of background spectra for operation of the well-known adaptive matched subspace detection (AMSD) algorithm. It is shown that the resulting AMSD-ABGP detection algorithm competes well with other widely used detection algorithms. The method is demonstrated on measurement data obtained by two fundamentally different active MIR hyperspectral data acquisition devices. A hyperspectral image sensor applicable to static scenes takes a wavelength-sequential approach to hyperspectral data acquisition, whereas a rapid wavelength-scanning single-element detector variant of the same principle uses spatial scanning to generate the hyperspectral observation. It is shown that the measurement timescale of the latter is sufficient for the application of the data analysis algorithms even in dynamic scenarios.
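The AMSD detector underlying the method is a generalized likelihood ratio test: it compares how well a measured spectrum is explained by the background subspace alone versus the target-plus-background subspace. A minimal sketch of this matched-subspace idea follows (the 4-band spectra and one-dimensional subspaces are invented; the ABGP background extraction itself is not reproduced):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project_out(x, basis):
    """Residual of x after removing its projection onto span(basis),
    via Gram-Schmidt orthogonalisation of the basis vectors."""
    ortho = []
    for b in basis:
        r = list(b)
        for q in ortho:
            c = dot(r, q) / dot(q, q)
            r = [a - c * qi for a, qi in zip(r, q)]
        ortho.append(r)
    res = list(x)
    for q in ortho:
        c = dot(res, q) / dot(q, q)
        res = [a - c * qi for a, qi in zip(res, q)]
    return res

def amsd_like_score(x, target_basis, background_basis):
    """GLRT-style ratio: residual energy under the background-only model
    over residual energy under the target-plus-background model."""
    rb = project_out(x, background_basis)
    rf = project_out(x, background_basis + target_basis)
    return dot(rb, rb) / (dot(rf, rf) + 1e-12)

# Invented 4-band spectra for illustration.
background = [[1.0, 1.0, 1.0, 1.0]]   # flat background spectrum
target     = [[0.0, 1.0, 0.0, 0.0]]   # spectral feature in band 2
pixel_with    = [2.0, 3.5, 2.0, 2.0]  # background plus target signature
pixel_without = [2.0, 2.1, 1.9, 2.0]  # background plus slight noise

s1 = amsd_like_score(pixel_with, target, background)
s0 = amsd_like_score(pixel_without, target, background)
print(s1 > s0)  # the target-bearing pixel scores higher
```

In the paper's setting the background basis comes from ABGP and the subspaces are higher-dimensional, but the detection statistic has this general structure.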
Hasty retreat of glaciers in the Palena province of Chile
NASA Astrophysics Data System (ADS)
Paul, F.; Mölg, N.; Bolch, T.
2013-12-01
Mapping glacier extent from optical satellite data has become one of the most efficient tools for creating or updating glacier inventories and determining glacier changes over time. A particularly valuable archive in this regard is the nearly 30-year time series of Landsat Thematic Mapper (TM) data that is freely available (already orthorectified) for most regions of the world from the USGS. One region with dramatic glacier shrinkage but no systematic assessment of changes is the Palena province in Chile, south of Puerto Montt. A major bottleneck for accurate determination of glacier changes in this region is the huge amount of snow falling in this very maritime region, which hides the perimeters of glaciers throughout the year. Consequently, we found only three years with Landsat scenes that can be used to map glacier extent through time. We here present the results of a glacier change analysis from six Landsat scenes (path-rows 232-89/90) acquired in 1985, 2000 and 2011, covering the Palena district in Chile. Clean glacier ice was mapped automatically with a standard technique (TM3/TM5 band ratio), and manual editing was applied to remove misclassified lakes and to add debris-covered glacier parts. The digital elevation model (DEM) from SRTM was used to derive drainage divides, determine glacier-specific topographic parameters, and analyse the area changes with regard to topography. The scene from 2000 has the best snow conditions and was used to eliminate seasonal snow in the other two scenes by digital combination of the binary glacier masks. The observed changes show huge spatial variability with a strong dependence on elevation and glacier hypsometry. While small mountain glaciers at high elevations and on steep slopes show virtually no change over the 26-year period, ice at low elevations on large valley glaciers shows a dramatic decline (area and thickness loss). Some glaciers retreated more than 3 km over this time period or even disappeared completely.
Typically, these glaciers lost contact with the accumulation areas of tributaries and now consist of an ablation area only. Furthermore, numerous pro-glacial lakes formed or expanded rapidly, increasing the local hazard potential. On the other hand, some glaciers located on or near (still active) volcanoes advanced over the same period. Observed temperature trends (decreasing) contrast with the strong glacier shrinkage observed.
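The standard band-ratio technique used above can be sketched as follows (the reflectance grid and threshold are invented; operational thresholds are tuned per scene): glacier ice is bright in the red band and very dark in the shortwave infrared, so thresholding the red/SWIR ratio yields a clean-ice mask.

```python
# Toy 3x3 red (TM3) and SWIR (TM5) reflectance grids (invented values).
tm3 = [[0.80, 0.75, 0.20],
       [0.70, 0.60, 0.15],
       [0.25, 0.20, 0.10]]
tm5 = [[0.05, 0.06, 0.18],
       [0.05, 0.08, 0.14],
       [0.20, 0.22, 0.09]]

THRESHOLD = 2.0  # illustrative; chosen per scene in practice

def glacier_mask(red, swir, threshold=THRESHOLD):
    """Binary ice mask from the red/SWIR band ratio.
    The small epsilon guards against division by zero over dark pixels."""
    return [[1 if r / max(s, 1e-6) > threshold else 0
             for r, s in zip(row_r, row_s)]
            for row_r, row_s in zip(red, swir)]

mask = glacier_mask(tm3, tm5)
for row in mask:
    print(row)
# High-ratio pixels are classified as clean glacier ice; debris-covered
# parts are missed by the ratio and require the manual editing step.
```

Combining the resulting binary masks across acquisition years, as done with the snow-free 2000 scene, is then a per-pixel logical operation on such masks.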
Oberholzer, Nicole; Kaserer, Alexander; Albrecht, Roland; Seifert, Burkhardt; Tissi, Mario; Spahn, Donat R; Maurer, Konrad; Stein, Philipp
2017-07-01
Pain is frequently encountered in the prehospital setting and needs to be treated quickly and sufficiently. However, incidences of insufficient analgesia after prehospital treatment by emergency medical services are reported to be as high as 43%. The purpose of this analysis was to identify modifiable factors in a specific emergency patient cohort that influence the pain suffered by patients when admitted to the hospital. For that purpose, this retrospective observational study included all patients with significant pain treated by a Swiss physician-staffed helicopter emergency service between April and October 2011 with the following characteristics to limit selection bias: age > 15 years, numerical rating scale (NRS) for pain documented at the scene and at hospital admission, NRS > 3 at the scene, initial Glasgow coma scale > 12, and National Advisory Committee for Aeronautics score < VI. Univariate and multivariable logistic regression analyses were performed to evaluate patient and mission characteristics of the helicopter emergency service associated with insufficient pain management. A total of 778 patients were included in the analysis. Insufficient pain management (NRS > 3 at hospital admission) was identified in 298 patients (38%). Factors associated with insufficient pain management were higher National Advisory Committee for Aeronautics scores, high NRS at the scene, nontrauma patients, no analgesic administration, and treatment by a female physician. In 16% (128 patients), despite ongoing pain, no analgesics were administered. Factors associated with this untreated persisting pain were short time at the scene (below 10 minutes), secondary missions of the helicopter emergency service, moderate pain at the scene, and nontrauma patients. Management of severe pain was significantly more often sufficient when ketamine was combined with an opioid (65%) than with ketamine or opioid monotherapy (46%, P = .007).
In the studied specific Swiss cohort, nontrauma patients, patients on secondary missions, patients treated only for a short time at the scene before transport, patients who receive no analgesic, and treatment by a female physician may be risk factors for insufficient pain management. Patients suffering pain at the scene (NRS > 3) should receive an analgesic whenever possible. Patients with severe pain at the scene (NRS ≥ 8) may benefit from the combination of ketamine with an opioid. The finding about sex differences concerning analgesic administration is intriguing and possibly worthy of further study.
Time Series Analysis of Vegetation Change using Hyperspectral and Multispectral Data
2012-09-01
“…rivers clogged with sediment” (Hartman, 2008). In addition, backpackers, campers, and skiers are in danger of being hit by falling trees. … “…information from hyperspectral data without a priori knowledge or requiring ground observations” (Kruse & Perry, 2009). … known endmembers and the scene spectra (Boardman & Kruse, 2011). Known endmembers come from analysts' knowledge of an area in a scene, or from …
Atmospheric correction analysis on LANDSAT data over the Amazon region. [Manaus, Brazil]
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Dias, L. A. V.; Dossantos, J. R.; Formaggio, A. R.
1983-01-01
The natural resources of the Amazon Region were studied in two ways and the results compared. A LANDSAT scene and its attributes were selected, and a maximum likelihood classification was made. The scene was then atmospherically corrected, taking into account Amazonian peculiarities revealed by ground truth of the same area, and classified again. Comparison shows that the classification improves with the atmospherically corrected images.
Eye movements, visual search and scene memory, in an immersive virtual environment.
Kit, Dmitry; Katz, Leor; Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary
2014-01-01
Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. By contrast, natural experience entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.
Guided exploration in virtual environments
NASA Astrophysics Data System (ADS)
Beckhaus, Steffi; Eckel, Gerhard; Strothotte, Thomas
2001-06-01
We describe an application supporting alternating interaction and animation for the purpose of exploration in a surround-screen projection-based virtual reality system. The exploration of an environment is a highly interactive and dynamic process in which the presentation of objects of interest can give the user guidance while exploring the scene. Previous systems for automatic presentation of models or scenes need either cinematographic rules, direct human interaction, framesets or precalculation (e.g. precalculation of paths to a predefined goal). We report on the development of a system that can deal with rapidly changing user interest in objects of a scene or model as well as with dynamic models and changes of the camera position introduced interactively by the user. It is implemented as a potential-field based camera data generating system. In this paper we describe the implementation of our approach in a virtual art museum on the CyberStage, our surround-screen projection-based stereoscopic display. The paradigm of guided exploration is introduced describing the freedom of the user to explore the museum autonomously. At the same time, if requested by the user, guided exploration provides just-in-time navigational support. The user controls this support by specifying the current field of interest in high-level search criteria. We also present an informal user study evaluating this approach.
Functional neuroanatomy of auditory scene analysis in Alzheimer's disease
Golden, Hannah L.; Agustus, Jennifer L.; Goll, Johanna C.; Downey, Laura E.; Mummery, Catherine J.; Schott, Jonathan M.; Crutch, Sebastian J.; Warren, Jason D.
2015-01-01
Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known ‘cocktail party effect’ as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory ‘foreground’ and ‘background’. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology. PMID:26029629
Yi, Minhan; Chen, Feng; Luo, Majing; Cheng, Yibin; Zhao, Huabin; Cheng, Hanhua; Zhou, Rongjia
2014-05-19
The Piwi-interacting RNA (piRNA) pathway is responsible for germline specification, gametogenesis, transposon silencing, and genome integrity. Transposable elements can disrupt the genome and its functions. However, piRNA pathway evolution and its adaptation to transposon diversity in teleost fish remain unknown. This article unveils the evolutionary scene of the piRNA pathway and its association with diverse transposons through systematic comparative analysis of diverse teleost fish genomes. Selective pressure analysis of piRNA pathway and miRNA/siRNA (microRNA/small interfering RNA) pathway genes between teleosts and mammals showed an accelerated evolution of piRNA pathway genes in the teleost lineages, and positive selection on functional PAZ (Piwi/Ago/Zwille) and Tudor domains involved in the Piwi-piRNA/Tudor interaction, suggesting that the amino acid substitutions are adaptive to their functions in the piRNA pathway in the teleost fish species. Notably, five piRNA pathway genes evolved faster in the swamp eel, a protogynous hermaphrodite fish, than in the other teleosts, indicating a differential evolution of the piRNA pathway between the swamp eel and other gonochoristic fishes. In addition, genome-wide analysis showed higher diversity of transposons in the teleost fish species compared with mammals. Our results suggest that the rapidly evolving piRNA pathway in teleost fish is likely involved in adaptation to transposon diversity. © The Author(s) 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
MTF analysis of LANDSAT-4 Thematic Mapper
NASA Technical Reports Server (NTRS)
Schowengerdt, R.
1983-01-01
The spatial radiance distribution of a ground target must be known to a resolution at least four to five times greater than that of the system under test when measuring a satellite sensor's modulation transfer function. Calibration of the target requires either the use of man-made special purpose targets with known properties, e.g., a small reflective mirror or a dark-light linear pattern such as a line or edge, or the use of relatively high resolution underflight imagery to calibrate an arbitrary ground scene. Both approaches are to be used; in addition, a technique that utilizes an analytical model of the scene spatial frequency power spectrum is being investigated as an alternative to calibration of the scene.
MTF Analysis of LANDSAT-4 Thematic Mapper
NASA Technical Reports Server (NTRS)
Schowengerdt, R.
1985-01-01
The spatial radiance distribution of a ground target must be known to a resolution at least four to five times greater than that of the system under test when measuring a satellite sensor's modulation transfer function. Calibration of the target requires either the use of man-made special purpose targets with known properties, e.g., a small reflective mirror or a dark-light linear pattern such as a line or edge, or the use of relatively high resolution underflight imagery to calibrate an arbitrary ground scene. Both approaches are to be used; in addition, a technique that utilizes an analytical model of the scene spatial frequency power spectrum is being investigated as an alternative to calibration of the scene.
Acquaintance Rape: Applying Crime Scene Analysis to the Prediction of Sexual Recidivism.
Lehmann, Robert J B; Goodwill, Alasdair M; Hanson, R Karl; Dahle, Klaus-Peter
2016-10-01
The aim of the current study was to enhance the assessment and predictive accuracy of risk assessments for sexual offenders by utilizing detailed crime scene analysis (CSA). CSA was conducted on a sample of 247 male acquaintance rapists from Berlin (Germany) using a nonmetric, multidimensional scaling (MDS) Behavioral Thematic Analysis (BTA) approach. The age of the offenders at the time of the index offense ranged from 14 to 64 years (M = 32.3; SD = 11.4). The BTA procedure revealed three behavioral themes of hostility, criminality, and pseudo-intimacy, consistent with previous CSA research on stranger rape. The construct validity of the three themes was demonstrated through correlational analyses with known sexual offending measures and criminal histories. The themes of hostility and pseudo-intimacy were significant predictors of sexual recidivism. In addition, the pseudo-intimacy theme led to a significant increase in the incremental validity of the Static-99 actuarial risk assessment instrument for the prediction of sexual recidivism. The results indicate the potential utility and validity of crime scene behaviors in the applied risk assessment of sexual offenders. © The Author(s) 2015.
Fractal dimension and the navigational information provided by natural scenes.
Shamsyeh Zahedi, Moosarreza; Zeil, Jochen
2018-01-01
Recent work on virtual reality navigation in humans has suggested that navigational success is inversely correlated with the fractal dimension (FD) of artificial scenes. Here we investigate the generality of this claim by analysing the relationship between the fractal dimension of natural insect navigation environments and a quantitative measure of the navigational information content of natural scenes. We show that the fractal dimension of natural scenes is in general inversely proportional to the information they provide to navigating agents on heading direction, as measured by the rotational image difference function (rotIDF). The rotIDF determines the precision and accuracy with which the orientation of a reference image can be recovered or maintained, and the range over which a gradient descent in image differences will find the minimum of the rotIDF, that is, the reference orientation. However, scenes with similar fractal dimension can differ significantly in the depth of the rotIDF, because FD does not discriminate between the orientations of edges, while the rotIDF is mainly affected by edge orientation parallel to the axis of rotation. We present a new equation for the rotIDF relating navigational information to quantifiable image properties such as contrast to show (1) that for any given scene the maximum value of the rotIDF (its depth) is proportional to pixel variance and (2) that FD is inversely proportional to pixel variance. This contrast dependence, together with scene differences in orientation statistics, explains why there is no strict relationship between FD and navigational information. Our experimental data and their numerical analysis corroborate these results.
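A minimal sketch of the rotational image difference function, here reduced to a one-dimensional panoramic intensity profile rather than a full panoramic image (the 8-pixel panorama is invented): the rotIDF is the RMS pixel difference between a reference view and the scene rotated through every azimuth; its minimum marks the reference orientation and its depth (maximum minus minimum) reflects the heading information available.

```python
import math

def rot_idf(reference, scene):
    """Rotational image difference function for 1-D panoramic views:
    RMS pixel difference between the reference view and the scene
    rotated by every possible pixel-wise azimuth shift."""
    n = len(reference)
    idf = []
    for shift in range(n):
        rotated = scene[shift:] + scene[:shift]
        rms = math.sqrt(sum((a - b) ** 2
                            for a, b in zip(reference, rotated)) / n)
        idf.append(rms)
    return idf

# Toy 8-pixel panorama (invented): a single bright landmark.
panorama = [0.1, 0.1, 0.9, 0.8, 0.1, 0.1, 0.1, 0.1]
idf = rot_idf(panorama, panorama)

best = min(range(len(idf)), key=idf.__getitem__)
depth = max(idf) - min(idf)
print(best)  # the minimum lies at zero rotation
```

In the paper's formulation the depth of this curve scales with pixel variance (contrast), which is why two scenes of equal FD can still differ in navigational information.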
Resnikoff, Tatiana; Ribaux, Olivier; Baylon, Amélie; Jendly, Manon; Rossy, Quentin
2015-12-01
A growing body of scientific literature recurrently indicates that crime and forensic intelligence influence how crime scene investigators make decisions in their practices. This study further scrutinises this intelligence-led view of crime scene examination. It analyses results obtained from two questionnaires. Data have been collected from nine chiefs of Intelligence Units (IUs) and 73 Crime Scene Examiners (CSEs) working in forensic science units (FSUs) in the French-speaking part of Switzerland (six cantonal police agencies). Four salient elements emerged: (1) the actual existence of communication channels between IUs and FSUs across the police agencies under consideration; (2) most CSEs take into account the crime intelligence disseminated; (3) a differentiated, but significant, use by CSEs of this kind of intelligence in their daily practice; (4) a probable deep influence of this kind of intelligence on the most concerned CSEs, especially in the selection of the type of material/trace to detect, collect, analyse and exploit. These results contribute to deciphering the subtle dialectic articulating crime intelligence and crime scene investigation, and to expressing further the polymorphous role of CSEs, beyond their most recognised input to the justice system. Indeed, they appear to be central, but implicit, stakeholders in an intelligence-led style of policing. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Crime event 3D reconstruction based on incomplete or fragmentary evidence material--case report.
Maksymowicz, Krzysztof; Tunikowski, Wojciech; Kościuk, Jacek
2014-09-01
Using our own experience in 3D analysis, the authors demonstrate the possibilities of 3D crime scene and event reconstruction in cases where the originally collected material evidence is largely insufficient. The necessity to repeat forensic evaluation often stems from the emergence of new facts in the course of case proceedings. Even in cases when a crime scene and its surroundings have undergone partial or complete transformation with regard to elements significant to the course of the case, or when the scene was not satisfactorily secured, it is still possible to reconstruct it in a 3D environment based on the originally collected, even incomplete, material evidence. In particular cases when no image of the crime scene is available, its partial or even full reconstruction is still potentially feasible. The credibility of such a reconstruction can still satisfy the evidentiary requirements in court. Reconstruction of the missing elements of the crime scene is possible with the use of information obtained from current publicly available databases. In the study, we demonstrate that these can include Google Maps®, Google Street View® and available construction and architecture archives. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Fixation and saliency during search of natural scenes: the case of visual agnosia.
Foulsham, Tom; Barton, Jason J S; Kingstone, Alan; Dewhurst, Richard; Underwood, Geoffrey
2009-07-01
Models of eye movement control in natural scenes often distinguish between stimulus-driven processes (which guide the eyes to visually salient regions) and those based on task and object knowledge (which depend on expectations or identification of objects and scene gist). In the present investigation, the eye movements of a patient with visual agnosia were recorded while she searched for objects within photographs of natural scenes and compared to those made by students and age-matched controls. Agnosia is assumed to disrupt the top-down knowledge available in this task, and so may increase the reliance on bottom-up cues. The patient's deficit in object recognition was seen in poor search performance and inefficient scanning. The low-level saliency of target objects had an effect on responses in visual agnosia, and the most salient region in the scene was more likely to be fixated by the patient than by controls. An analysis of model-predicted saliency at fixation locations indicated a closer match between fixations and low-level saliency in agnosia than in controls. These findings are discussed in relation to saliency-map models and the balance between high and low-level factors in eye guidance.
Fusion of monocular cues to detect man-made structures in aerial imagery
NASA Technical Reports Server (NTRS)
Shufelt, Jefferey; Mckeown, David M.
1991-01-01
The extraction of buildings from aerial imagery is a complex problem for automated computer vision. It requires locating regions in a scene that possess properties distinguishing them as man-made objects as opposed to naturally occurring terrain features. It is reasonable to assume that no single detection method can correctly delineate or verify buildings in every scene. A cooperative-methods paradigm is useful in approaching the building extraction problem. Using this paradigm, each extraction technique provides information which can be added or assimilated into an overall interpretation of the scene. Thus, the main objective is to explore the development of a computer vision system that integrates the results of various scene analysis techniques into an accurate and robust interpretation of the underlying three-dimensional scene. The problem of building hypothesis fusion in aerial imagery is discussed. Building extraction techniques are briefly surveyed, including four building extraction, verification, and clustering systems. A method for fusing the symbolic data generated by these systems is described, and applied to monocular image and stereo image data sets. Evaluation methods for the fusion results are described, and the fusion results are analyzed using these methods.
Illumination discrimination in the absence of a fixed surface-reflectance layout
Radonjić, Ana; Ding, Xiaomao; Krieger, Avery; Aston, Stacey; Hurlbert, Anya C.; Brainard, David H.
2018-01-01
Previous studies have shown that humans can discriminate spectral changes in illumination and that this sensitivity depends both on the chromatic direction of the illumination change and on the ensemble of surfaces in the scene. These studies, however, always used stimulus scenes with a fixed surface-reflectance layout. Here we compared illumination discrimination for scenes in which the surface reflectance layout remains fixed (fixed-surfaces condition) to those in which surface reflectances were shuffled randomly across scenes, but with the mean scene reflectance held approximately constant (shuffled-surfaces condition). Illumination discrimination thresholds in the fixed-surfaces condition were commensurate with previous reports. Thresholds in the shuffled-surfaces condition, however, were considerably elevated. Nonetheless, performance in the shuffled-surfaces condition exceeded that attainable through random guessing. Analysis of eye fixations revealed that in the fixed-surfaces condition, low illumination discrimination thresholds (across observers) were predicted by low overall fixation spread and high consistency of fixation location and fixated surface reflectances across trial intervals. Performance in the shuffled-surfaces condition was not systematically related to any of the eye-fixation characteristics we examined for that condition, but was correlated with performance in the fixed-surfaces condition. PMID:29904786
Lee, So-Yeon; Ha, Eun-Ju; Woo, Seung-Kyun; Lee, So-Min; Lim, Kyung-Hee; Eom, Yong-Bin
2017-07-01
Telogen hairs present at a crime scene are commonly encountered as trace evidence. However, short tandem repeat (STR) profiling of such hairs currently has limited use due to a poor success rate. To increase the success rate of STR profiling of telogen hairs, we developed a rapid and cost-effective method to estimate the number of nuclei in the hair roots. Five cationic dyes, Methyl green (MG), Harris hematoxylin (HH), Methylene blue (MB), Toluidine blue (TB), and Safranin O (SO), were evaluated in this study. We conducted a screening test, based on microscopy and the percentage of nuclear DNA loss, in order to select the best dye. MG was selected based on its specific nuclear staining and low adverse effect on hair-associated nuclear DNA. We examined 330 scalp and 100 pubic telogen hairs with MG. Stained hairs were classified into five groups and analyzed by STR. The fast staining method yielded full (30 alleles) or high-partial (18-29 alleles) STR profiles for 70% of head hairs and 33.4% of pubic hairs in the lowest nuclei-count group (one to ten nuclei). The results of this study demonstrate a rapid, specific, nondestructive, high-yield DNA profiling method applicable for screening telogen hairs. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
ViCoMo: visual context modeling for scene understanding in video surveillance
NASA Astrophysics Data System (ADS)
Creusen, Ivo M.; Javanbakhti, Solmaz; Loomans, Marijn J. H.; Hazelhoff, Lykele B.; Roubtsova, Nadejda; Zinger, Svitlana; de With, Peter H. N.
2013-10-01
The use of contextual information can significantly aid scene understanding of surveillance video. Just detecting people and tracking them does not provide sufficient information to detect situations that require operator attention. We propose a proof-of-concept system that uses several sources of contextual information to improve scene understanding in surveillance video. The focus is on two scenarios that represent common video surveillance situations: parking lot surveillance and crowd monitoring. In the first scenario, a pan-tilt-zoom (PTZ) camera tracking system is developed for parking lot surveillance. Context is provided by the traffic sign recognition system to localize regular and handicapped parking spot signs as well as license plates. The PTZ algorithm has the ability to selectively detect and track persons based on scene context. In the second scenario, a group analysis algorithm is introduced to detect groups of people. Contextual information is provided by traffic sign recognition and region labeling algorithms and exploited for behavior understanding. In both scenarios, decision engines are used to interpret and classify the output of the subsystems and, if necessary, raise operator alerts. We show that using context information enables the automated analysis of complicated scenarios that were previously not possible using conventional moving object classification techniques.
Preliminary Comparisons of the Information Content and Utility of TM Versus MSS Data
NASA Technical Reports Server (NTRS)
Markham, B. L.
1984-01-01
Comparisons were made between subscenes from the first TM scene acquired of the Washington, D.C. area and a MSS scene acquired approximately one year earlier. Three types of analyses were conducted to compare TM and MSS data: a water body analysis, a principal components analysis and a spectral clustering analysis. The water body analysis compared the capability of the TM to that of the MSS for detecting small uniform targets. Of the 59 ponds located on aerial photographs, 34 (58%) were detected by the TM with six commission errors (15%), and 13 (22%) were detected by the MSS with three commission errors (19%). The smallest water body detected by the TM was 16 meters across; the smallest detected by the MSS was 40 meters. For the principal components analysis, means and covariance matrices were calculated for each subscene, and principal components images were generated and characterized. In the spectral clustering comparison, each scene was independently clustered and the clusters were assigned to informational classes. The preliminary comparison indicated that TM data provide enhancements over MSS in terms of (1) small target detection and (2) data dimensionality (even with 4-band data). The extra dimension, partially resulting from TM band 1, appears useful for built-up/non-built-up area separation.
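The principal components step can be illustrated with a toy two-band example (the digital numbers below are invented): for a 2x2 covariance matrix the eigenvalues have a closed form, and the share of variance captured by the first component indicates how redundant the bands are, which is the dimensionality question the comparison addresses.

```python
import math

# Toy two-band pixel sample (invented DN values): the bands are highly
# correlated, so the first principal component should dominate.
band1 = [10, 12, 14, 20, 22, 24, 30, 32]
band2 = [11, 13, 15, 19, 23, 25, 29, 33]

def covariance_2bands(x, y):
    """Entries of the 2x2 covariance matrix: var(x), var(y), cov(x, y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x) / n
    syy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return sxx, syy, sxy

def principal_variances(sxx, syy, sxy):
    """Closed-form eigenvalues of the 2x2 covariance matrix."""
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    root = math.sqrt(tr ** 2 / 4 - det)
    return tr / 2 + root, tr / 2 - root

sxx, syy, sxy = covariance_2bands(band1, band2)
l1, l2 = principal_variances(sxx, syy, sxy)
share = l1 / (l1 + l2)
print(share > 0.99)  # nearly all variance lies on the first component
```

With the TM's extra bands the same analysis runs on larger covariance matrices (via numerical eigendecomposition), and a slowly decaying eigenvalue spectrum signals the extra data dimensionality reported for TM.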
[Forensic entomology exemplified by a homicide. A combined stain and postmortem time analysis].
Benecke, M; Seifert, B
1999-01-01
The combined analysis of ant and blow fly evidence recovered from a corpse and from the boot of a suspect suggested that an assumed scenario in a high-profile murder case was likely to be true. The ants (Lasius fuliginosus) were used as classical crime scene stains that linked the suspect to the scene. Blow fly maggots (Calliphora sp.) helped to determine the post mortem interval (PMI), with the calculated PMI overlapping the assumed time of the killing. In the trial, the results of the medico-legal analysis of the insects were understood to be crucial scientific evidence, and the suspect was sentenced to 8 years in prison.
Electron microscopy and forensic practice
NASA Astrophysics Data System (ADS)
Kotrlý, Marek; Turková, Ivana
2013-05-01
Electron microanalysis ranks among the basic applications of forensic practice in the investigation of traces (latents, stains, etc.) from crime scenes. Electron microscopy allows rapid screening and provides initial information for a wide range of trace evidence. SEM with EDS/WDS makes it possible to observe sample surface topography and morphology and to examine chemical composition. The physical laboratory of the Institute of Criminalistics Prague uses SEM primarily for the examination of inorganic samples, and more rarely for biological and other materials. Recently, the possibilities of electron microscopy have been extended considerably by dual systems with a focused ion beam. These systems are applied mainly to the study of the interiors of micro- and nanoparticles, of thin layers (intersecting lines in questioned-document examinations, layers of functional glass, etc.), of microdefects in alloys, and to the creation of 3D models of particles and aggregates. Automated mineralogical analyses are a great asset for the analysis of mineral phases, particularly soils; the same holds for cathodoluminescence, predominantly colour cathodoluminescence with precise quantitative measurement of its spectral characteristics. Among the latest innovations beginning to appear in ordinary laboratories are TOF-SIMS systems and micro-Raman spectroscopy with a spatial resolution comparable to that of EDS/WDS analysis.
Search by photo methodology for signature properties assessment by human observers
NASA Astrophysics Data System (ADS)
Selj, Gorm K.; Heinrich, Daniela H.
2015-05-01
Reliable, low-cost and simple methods for the assessment of signature properties for military purposes are very important. In this paper we present such an approach, which uses human observers in a search-by-photo assessment of the signature properties of generic test targets. The method logs a large number of detection times for targets recorded against relevant terrain backgrounds. The detection times were harvested from human observers searching for targets in scene images shown on a high-definition PC screen. All targets were identically located in each "search image", allowing relative comparisons (and not just rank ordering) of targets. To avoid biased detections, each observer searched for only one target per scene. Statistical analyses were carried out on the detection-time data: analysis of variance when the detection-time distributions for all targets satisfied normality, and non-parametric tests, such as Wilcoxon's rank test, otherwise. The new methodology allows assessment of signature properties in a reproducible, rapid and reliable setting. Such assessments are complex, as they must isolate what is relevant in a signature test without losing information of value. We believe that choosing detection time as the primary variable for comparing signature properties allows a careful and necessary inspection of observer data, as the variable is continuous rather than discrete. Our method thus stands in opposition to approaches based on detections at subsequent, stepwise reductions in distance to the target, or based on probability of detection.
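The statistical choice described above, a rank-based test when detection-time distributions fail normality, can be sketched in a few lines. The following is a minimal pure-Python illustration of the Wilcoxon rank-sum (Mann-Whitney U) statistic with average ranks for ties and a normal approximation without tie correction; the function name and simplifications are ours, not the authors' implementation.

```python
def rank_sum_test(a, b):
    """Two-sample Wilcoxon rank-sum (Mann-Whitney U) statistic.

    Returns the U statistic for sample `a` and a normal-approximation
    z-score (no tie correction), suitable for comparing two sets of
    detection times whose distributions are not normal.
    """
    combined = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        # group tied values and assign them their average rank (1-based)
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg
        i = j + 1
    n1, n2 = len(a), len(b)
    r1 = sum(ranks[:n1])                # rank sum of sample a
    u1 = r1 - n1 * (n1 + 1) / 2         # Mann-Whitney U for sample a
    mu = n1 * n2 / 2
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    return u1, (u1 - mu) / sigma
```

In practice one would use a library routine with exact p-values for small samples; this sketch only shows the mechanics of the rank comparison.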
Briefly Cuing Memories Leads to Suppression of Their Neural Representations
Norman, Kenneth A.
2014-01-01
Previous studies have linked partial memory activation with impaired subsequent memory retrieval (e.g., Detre et al., 2013) but have not provided an account of this phenomenon at the level of memory representations: How does partial activation change the neural pattern subsequently elicited when the memory is cued? To address this question, we conducted a functional magnetic resonance imaging (fMRI) experiment in which participants studied word-scene paired associates. Later, we weakly reactivated some memories by briefly presenting the cue word during a rapid serial visual presentation (RSVP) task; other memories were more strongly reactivated or not reactivated at all. We tested participants' memory for the paired associates before and after RSVP. Cues that were briefly presented during RSVP triggered reduced levels of scene activity on the post-RSVP memory test, relative to the other conditions. We used pattern similarity analysis to assess how representations changed as a function of the RSVP manipulation. For briefly cued pairs, we found that neural patterns elicited by the same cue on the pre- and post-RSVP tests (preA–postA; preB–postB) were less similar than neural patterns elicited by different cues (preA–postB; preB–postA). These similarity reductions were predicted by neural measures of memory activation during RSVP. Through simulation, we show that our pattern similarity results are consistent with a model in which partial memory activation triggers selective weakening of the strongest parts of the memory. PMID:24899722
Kinect Fusion improvement using depth camera calibration
NASA Astrophysics Data System (ADS)
Pagliari, D.; Menna, F.; Roncella, R.; Remondino, F.; Pinto, L.
2014-06-01
3D scene modelling, gesture recognition and motion tracking are fields in rapid and continuous development, driven by growing demand for interactivity in the video-game and e-entertainment market. The Microsoft Kinect device was created from the idea of a sensor that lets users play without having to hold any remote controller. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for the Kinect in order to use it not only as a game device but as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow the device to be used as a 3D scanner, producing meshed polygonal models of a static scene simply by moving the Kinect around. A drawback of this sensor is the geometric quality of the delivered data and its low repeatability. For this reason the authors carried out investigations to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented, and the 3D data are then corrected accordingly. By integrating the depth-correction algorithm and correcting the interior and exterior orientation parameters of the IR camera, the Fusion libraries are corrected and new reconstruction software is created to produce more accurate models.
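The abstract does not specify the form of the depth-correction model. As a loosely analogous sketch only, a simple linear correction d_corr = a*d + b could be fit by least squares against reference depths from a calibrated target; the model, function name, and sample values below are entirely our assumption, not the authors' method.

```python
def fit_depth_correction(measured, reference):
    """Least-squares fit of a linear depth correction d_corr = a*d + b.

    measured:  raw sensor depth readings (hypothetical units)
    reference: ground-truth depths for the same points
    Returns the slope a and offset b of the correction.
    """
    n = len(measured)
    sx = sum(measured)
    sy = sum(reference)
    sxx = sum(d * d for d in measured)
    sxy = sum(d * r for d, r in zip(measured, reference))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def correct_depth(d, a, b):
    """Apply the fitted correction to a raw depth reading."""
    return a * d + b
```

A real calibration of this kind would be per-pixel and likely nonlinear in depth; the linear form is only meant to show where a fitted correction slots into the pipeline.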
Scene recognition following locomotion around a scene.
Motes, Michael A; Finlay, Cory A; Kozhevnikov, Maria
2006-01-01
Effects of locomotion on scene-recognition reaction time (RT) and accuracy were studied. In experiment 1, observers memorized an 11-object scene and made scene-recognition judgments on subsequently presented scenes from the encoded view or from different views (i.e., scenes were rotated or observers moved around the scene, both from 40 degrees to 360 degrees). In experiment 2, observers viewed a different 5-object scene on each trial and made scene-recognition judgments from the encoded view or after moving around the scene, from 36 degrees to 180 degrees. Across experiments, scene-recognition RT increased (and in experiment 2 accuracy decreased) with angular distance between encoded and judged views, regardless of how the viewpoint changes occurred. The findings raise questions about the conditions in which locomotion produces spatially updated representations of scenes.
Application scenario analysis of Power Grid Marketing Large Data
NASA Astrophysics Data System (ADS)
Li, Xin; Zhang, Yuan; Zhang, Qianyu
2018-01-01
In recent years, big data has become an important strategic asset in the commercial economy, and its efficient management and application have become a focus of government, enterprise and academia. Power-grid marketing data cover real data on electricity and other energy consumption, consumption costs and so on, and are closely related to each customer and to overall economic operation. Fully tapping the inherent value of marketing data is of great significance for a power grid company seeking to respond rapidly and efficiently to market demand and to improve service levels. The development of big-data technology provides a new technical scheme for developing the marketing business under the new situation. Based on a study of the current state of the marketing business, the marketing information system and marketing data, this paper puts forward application directions for marketing data and designs typical scenarios for internal and external applications.
Sit Down to Float: The Cultural Meaning of Ketamine Use in Hong Kong
Joe-Laidler, Karen; Hunt, Geoffrey
2009-01-01
From the late 1990s onward, ketamine use among young persons in Hong Kong grew rapidly, becoming the drug of choice. This article examines ketamine's attraction in Hong Kong and, in so doing, uncovers the cultural meaning of ketamine use. The analysis is organized around the emergence of, and shifts in, the meanings and experiences of those who initiate and continue to use ketamine. The data stem from a comparative study of the social setting of club drug use in Hong Kong, San Francisco, and Rotterdam. Here we draw on 100 in-depth interviews to examine the experiences of young persons who have used drugs in dance venues in Hong Kong. Our findings indicate that ketamine has become embedded in a distinctively working-class youth dance scene, is accessible in terms of supply and cost, is shared among groups of friends, and produces a stimulating yet liberating experience beyond that of ecstasy. PMID:19759834
Application of Composite Small Calibration Objects in Traffic Accident Scene Photogrammetry
Chen, Qiang; Xu, Hongguo; Tan, Lidong
2015-01-01
In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies. PMID:26011052
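The improved method above builds on the standard two-dimensional direct linear transformation, in which each point correspondence between a calibration object and the image contributes two linear equations in the eight homography parameters (with the ninth fixed to 1). As a minimal sketch, the code below solves the exact four-point case; the authors' method additionally minimizes reprojection error over the calibration points of all objects, which is not reproduced here.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a square system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """2D DLT: homography h mapping plane points src to image points dst.

    Each correspondence (x, y) -> (u, v) gives two rows of the 8x8 system;
    h[8] is fixed to 1 to remove the scale ambiguity.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    return solve(A, b) + [1.0]

def apply_h(h, p):
    """Map a point through the homography (with perspective division)."""
    x, y = p
    w = h[6] * x + h[7] * y + h[8]
    return ((h[0] * x + h[1] * y + h[2]) / w,
            (h[3] * x + h[4] * y + h[5]) / w)
```

With more than four calibration points, the same two-rows-per-point construction yields an overdetermined system, solved in a least-squares or reprojection-error-minimizing sense as in the paper.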
Structure preserving clustering-object tracking via subgroup motion pattern segmentation
NASA Astrophysics Data System (ADS)
Fan, Zheyi; Zhu, Yixuan; Jiang, Jiao; Weng, Shuqin; Liu, Zhiwen
2018-01-01
Tracking clustering objects with similar appearances simultaneously in collective scenes is a challenging task in the field of collective motion analysis. Recent work on clustering-object tracking often suffers from poor tracking accuracy and terrible real-time performance due to the neglect or the misjudgment of the motion differences among objects. To address this problem, we propose a subgroup motion pattern segmentation framework based on a multilayer clustering structure and establish spatial constraints only among objects in the same subgroup, which entails having consistent motion direction and close spatial position. In addition, the subgroup segmentation results are updated dynamically because crowd motion patterns are changeable and affected by objects' destinations and scene structures. The spatial structure information combined with the appearance similarity information is used in the structure preserving object tracking framework to track objects. Extensive experiments conducted on several datasets containing multiple real-world crowd scenes validate the accuracy and the robustness of the presented algorithm for tracking objects in collective scenes.
NASA Astrophysics Data System (ADS)
Shin, Jaewook; Bosworth, Bryan T.; Foster, Mark A.
2017-02-01
The process of multiple scattering has inherent characteristics that are attractive for high-speed imaging with high spatial resolution and a wide field-of-view. A coherent source passing through a multiple-scattering medium naturally generates speckle patterns with diffraction-limited features over an arbitrarily large field-of-view. In addition, the process of multiple scattering is deterministic allowing a given speckle pattern to be reliably reproduced with identical illumination conditions. Here, by exploiting wavelength dependent multiple scattering and compressed sensing, we develop a high-speed 2D time-stretch microscope. Highly chirped pulses from a 90-MHz mode-locked laser are sent through a 2D grating and a ground-glass diffuser to produce 2D speckle patterns that rapidly evolve with the instantaneous frequency of the chirped pulse. To image a scene, we first characterize the high-speed evolution of the generated speckle patterns. Subsequently we project the patterns onto the microscopic region of interest and collect the total light from the scene using a single high-speed photodetector. Thus the wavelength dependent speckle patterns serve as high-speed pseudorandom structured illumination of the scene. An image sequence is then recovered using the time-dependent signal received by the photodetector, the known speckle pattern evolution, and compressed sensing algorithms. Notably, the use of compressed sensing allows for reconstruction of a time-dependent scene using a highly sub-Nyquist number of measurements, which both increases the speed of the imager and reduces the amount of data that must be collected and stored. We will discuss our experimental demonstration of this approach and the theoretical limits on imaging speed.
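The reconstruction step above, recovering a scene from far fewer structured-illumination measurements than pixels, relies on sparse recovery. As a hedged stand-in for the unspecified compressed-sensing solver, the sketch below uses iterative hard thresholding on a tiny system y = Ax with a known sparsity level; the matrix, sizes, and solver choice are illustrative only.

```python
def matvec(A, x):
    """y = A x for a dense matrix stored as a list of rows."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def rmatvec(A, y):
    """A^T y, the gradient direction for the least-squares residual."""
    n = len(A[0])
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(n)]

def iht(A, y, s, iters=50, step=1.0):
    """Iterative hard thresholding: recover an s-sparse x from y = A x.

    Each iteration takes a gradient step toward fitting the measurements,
    then keeps only the s largest-magnitude entries (the sparsity prior
    that makes sub-Nyquist recovery possible).
    """
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [yi - ri for yi, ri in zip(y, matvec(A, x))]
        g = rmatvec(A, r)
        z = [xi + step * gi for xi, gi in zip(x, g)]
        keep = set(sorted(range(n), key=lambda j: -abs(z[j]))[:s])
        x = [z[j] if j in keep else 0.0 for j in range(n)]
    return x
```

In the microscope, the rows of A would be the pre-characterized speckle patterns and y the photodetector time series; practical reconstructions use far larger systems and more robust solvers.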
Development of an ultra-high temperature infrared scene projector at Santa Barbara Infrared Inc.
NASA Astrophysics Data System (ADS)
Franks, Greg; Laveigne, Joe; Danielson, Tom; McHugh, Steve; Lannon, John; Goodwin, Scott
2015-05-01
The rapid development of very-large format infrared detector arrays has challenged the IR scene projector community to develop correspondingly larger-format infrared emitter arrays to support the testing needs of systems incorporating these detectors. As with most integrated circuits, fabrication yields for the read-in integrated circuit (RIIC) that drives the emitter pixel array are expected to drop dramatically with increasing size, making monolithic RIICs larger than the current 1024x1024 format impractical and unaffordable. Additionally, many scene projector users require much higher simulated temperatures than current technology can generate to fully evaluate the performance of their systems and associated processing algorithms. Under the Ultra High Temperature (UHT) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>1024x1024) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During an earlier phase of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1000K. New emitter materials have subsequently been selected to produce pixels that achieve even higher apparent temperatures. Test results from pixels fabricated using the new material set will be presented and discussed. Also in development under the same UHT program is a 'scalable' RIIC that will be used to drive the high temperature pixels. This RIIC will utilize through-silicon vias (TSVs) and quilt packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the inherent yield limitations of very-large-scale integrated circuits. Current status of the RIIC development effort will also be presented.
Fast, Danya; Small, Will; Wood, Evan; Kerr, Thomas
2009-01-01
Recent research has highlighted the ways in which social structural processes and physical environments operate to push young drug users towards risk. We undertook this study in order to explore how young people who were currently street-entrenched characterized and understood their initiation into the local drug scene in downtown Vancouver, Canada. Semi-structured qualitative interviews were conducted with 38 individuals recruited from a cohort of young drug users known as the At-Risk Youth Study (ARYS). Participant narratives reflected an understanding among young people that they are simultaneously pulled and pushed towards the local scene. Push factors were understood as circumstances that propelled young people towards this setting, in some cases because of proximity to it from a very early age, and in other cases because of adverse situations experienced elsewhere and the need to find a new place to live that was both affordable and safe. Interwoven with accounts of how youth were pushed towards the local scene were stories that emphasized a high degree of autonomy and the factors that initially attracted them to this scene, including a desire for excitement, independence and belonging. Once young people were more permanently based in downtown Vancouver, participants identified several factors that accelerated their entrenchment in this locale, including increasingly ‘problematic’ drug use, an intensified need to generate income, experiences of chronic homelessness, and unstable social relationships. Our findings stress the need for early intervention with youth, before they are initiated into the social networks and processes that rapidly propel young people towards risk within these contexts. Once initiation has occurred, the boundary between safety and risk quickly becomes difficult to navigate, and young people become highly vulnerable to numerous harms. PMID:19700232
Mass chemical casualties: treatment of 41 patients with burns by anhydrous ammonia.
Zhang, Fang; Zheng, Xing-Feng; Ma, Bing; Fan, Xiao-Ming; Wang, Guang-Yi; Xia, Zhao-Fan
2015-09-01
This article reports a chemical burn incident that occurred on 31 August 2013 in Shanghai. We describe the situation at the scene, emergency management, triage, evacuation, and follow-up of the victims. The scene of the incident and information on the 41 victims of this industrial chemical incident were investigated. The emergency management, triage, evacuation, and hospitalization data of the patients were summarized. At the time of the incident, 58 employees were working in a closed refrigerator workshop, 41 of whom sustained burns following the leakage of anhydrous ammonia. Ten victims died of severe inhalation injury at the scene, and another five victims died during evacuation to the nearest hospital. After receiving information on the incident, a contingency plan for the burn disaster was launched immediately, and a first-aid group and an emergency and triage group were dispatched by the Changhai Hospital to the scene to aid the medical organization, emergency management, triage, and evacuation. All casualties were first rushed to the nearest hospital by ambulance. The six most serious patients with inhalation injuries were evacuated to the Changhai Hospital and admitted to the burn intensive care unit (BICU) for further treatment; one of them died of respiratory failure and pulmonary infection. This mass casualty incident of anhydrous ammonia leakage had potentially devastating effects on society, especially on the victims and their families. Early first-aid organization, emergency management, triage, and evacuation were of paramount importance, especially rapid evaluation of the severity of inhalation injury and subsequent corresponding medical treatment. The prognosis of ammonia burns was poor and the sequelae were severe. Management and treatment lessons were drawn from this mass casualty chemical burn incident. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.
NASA Astrophysics Data System (ADS)
Abdullah, Nurul Azma; Saidi, Md. Jamri; Rahman, Nurul Hidayah Ab; Wen, Chuah Chai; Hamid, Isredza Rahmi A.
2017-10-01
In practice, identification of criminals in Malaysia is done through thumbprint identification. However, this type of identification is limited, as most criminals nowadays are clever enough not to leave thumbprints at the scene. With the advent of security technology, cameras, especially CCTV, have been installed in many public and private areas to provide surveillance. CCTV footage can be used to identify suspects at the scene. However, because little software has been developed to automatically match faces in the footage against recorded photographs of criminals, law enforcement still relies on thumbprint identification. In this paper, an automated facial recognition system for a criminal database is proposed using the well-known Principal Component Analysis approach. The system is able to detect and recognize faces automatically, which will help law enforcement identify suspects when no thumbprint is present at the scene. The results show that about 80% of input photos can be matched with the template data.
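The pipeline implied above (eigenfaces via Principal Component Analysis, then nearest-neighbor matching in the projected space) can be sketched compactly. This is a minimal single-component illustration using the Turk-Pentland Gram-matrix trick and power iteration; it is not the authors' implementation, and real systems retain many components and much larger images.

```python
def dominant_eigvec(M, iters=200):
    """Power iteration for the leading eigenvector of a square matrix M."""
    v = [0.0] * len(M)
    v[0] = 1.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(len(M))) for i in range(len(M))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def train(faces):
    """faces: list of flattened grayscale images. Returns (mean, eigenface).

    Uses the Turk-Pentland trick: for n images of d pixels (n << d), the
    leading eigenface comes from the small n x n Gram matrix A A^T rather
    than the huge d x d covariance.
    """
    n, d = len(faces), len(faces[0])
    mean = [sum(f[j] for f in faces) / n for j in range(d)]
    A = [[f[j] - mean[j] for j in range(d)] for f in faces]  # centered
    L = [[sum(A[i][k] * A[j][k] for k in range(d)) for j in range(n)]
         for i in range(n)]
    u = dominant_eigvec(L)
    ef = [sum(u[i] * A[i][j] for i in range(n)) for j in range(d)]
    norm = sum(x * x for x in ef) ** 0.5
    return mean, [x / norm for x in ef]

def project(mean, ef, face):
    """Coefficient of a face along the eigenface."""
    return sum((face[j] - mean[j]) * ef[j] for j in range(len(face)))

def match(mean, ef, gallery, probe):
    """Index of the gallery face whose projection is nearest the probe's."""
    p = project(mean, ef, probe)
    return min(range(len(gallery)),
               key=lambda i: abs(project(mean, ef, gallery[i]) - p))
```

With four-pixel toy "faces", a noisy copy of a gallery face still matches its original, which is the essence of the CCTV-to-template comparison described in the abstract.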
NASA Astrophysics Data System (ADS)
Bernier, Jean D.
1991-09-01
The imaging in real time of infrared background scenes with the Naval Postgraduate School Infrared Search and Target Designation (NPS-IRSTD) System was achieved through extensive software development in protected-mode assembly language on an Intel 80386 33 MHz computer. The new software processes the 512 by 480 pixel images directly in the extended memory area of the computer, where the DT-2861 frame grabber memory buffers are mapped. Direct interfacing, through a JDR-PR10 prototype card, between the frame grabber and the host computer AT bus enables each load of the frame grabber memory buffers to be effected under software control. The protected-mode assembly language program can refresh the display of a six-degree pseudo-color sector of the scanner rotation within the scanner's two-second period. A study of the imaging properties of the NPS-IRSTD is presented, with preliminary work on image analysis and contrast enhancement of infrared background scenes.
Emergence of forensic podiatry--A novel sub-discipline of forensic sciences.
Krishan, Kewal; Kanchan, Tanuj; DiMaggio, John A
2015-10-01
"Forensic podiatry is defined as the application of sound and researched podiatric knowledge and experience in forensic investigations; to show the association of an individual with a scene of crime, or to answer any other legal question concerned with the foot or footwear that requires knowledge of the functioning foot". Forensic podiatrists can contribute to forensic identification by associating the pedal evidence with the criminal or crime scene. The most common pedal evidence collected from the crime scene is in the form of footprints, shoeprints and their tracks and trails. Forensic podiatrists can establish identity of the individuals from the footprints in many ways. The analysis of bare footprints involves the identification based on the individualistic features like flat footedness, ridges, humps, creases, an extra toe, missing toe, corns, cuts, cracks, pits, deformities, and various features of the toe and heel region. All these individualistic features can link the criminal with the crime. In addition to these, parameters of body size like stature and body weight as well as sex can also be estimated by using anthropometric methods. If a series of footprints are recovered from the crime scene, then parameters of the gait analysis such as stride/step length and general movement of the criminal can be traced. Apart from these, a newly established biometric parameter of the footprints i.e. footprint ridge density can also be evaluated for personal identification. Careful analysis of the footprint ridge density can give an idea about the sex of the criminal whose footprints are recovered at the scene which can further help to reduce the burden of the investigating officer as the investigations then may be directed toward either a male suspect or a female suspect accordingly. This paper highlights various aspects of Forensic Podiatry and discusses the different methods of personal identification related to pedal evidence. 
Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Effect of direct and indirect transfer status on trauma mortality in sub Saharan Africa.
Boschini, Laura P; Lu-Myers, Yemeng; Msiska, Nelson; Cairns, Bruce; Charles, Anthony G
2016-05-01
Traumatic injuries account for the greatest portion of the global surgical burden, particularly in low- and middle-income countries (LMICs). To assess the effectiveness of a developing trauma system, we hypothesized that there are survival differences between direct and indirect transfer of trauma patients to a tertiary hospital in sub-Saharan Africa. A retrospective analysis of 51,361 trauma patients within the Kamuzu Central Hospital (KCH) trauma registry from 2008 to 2012 was performed, including analysis of patient characteristics and logistic regression modelling of in-hospital mortality. The primary study outcome was in-hospital mortality in the direct and indirect transfer groups. A total of 50,059 trauma patients were included in this study: 6,578 transferred from referring facilities and 43,481 transported directly from the scene. The indirect and direct transfer cohorts were similar in age and sex. The mechanism of injury for transferred patients was 78.1% blunt, 14.5% penetrating, and 7.4% other, whereas for the scene group it was 70.7% blunt, 24.0% penetrating, and 5.2% other. Median times to presentation were 13 (4-30) and 3 (1-14) h for transferred and scene patients, respectively. The mortality rate was 4.2% and 1.6% for the indirect and direct transfer cohorts, respectively. A total of 8,816 patients were admitted, of whom 3,636 and 5,963 were in the transfer and scene cohorts, respectively. After logistic regression analysis, the adjusted in-hospital mortality odds ratio was 2.09 (1.24-3.54); P=0.006 for the indirect versus the direct transfer cohort, after controlling for significant covariates. Direct transfer of trauma patients from the scene to the tertiary care centre is associated with a survival benefit. Our findings suggest that trauma education and efforts directed at regionalization of trauma care, strengthening pre-hospital care and timely transfer from district hospitals could mitigate trauma-related mortality in a resource-poor setting.
Copyright © 2016 Elsevier Ltd. All rights reserved.
Mapping the Dynamics of Surface Water Extent 1999-2015 with Landsat 5, 7, and 8 Archives
NASA Astrophysics Data System (ADS)
Pickens, A. H.; Hansen, M.; Hancher, M.; Potapov, P.
2016-12-01
Surface water extent fluctuates through both seasons and years due to changes in climatic conditions and human extraction and impoundments. This study maps the presence of surface water every month since January 1999, evaluates the detection reliability, visualizes the trends, and explores future applications. The Global Land Analysis and Discovery group at the University of Maryland developed a 30-m mask of persistent water during the growing seasons of 2000-2012 in conjunction with the Global Forest Change product published by Hansen et al. in 2013. A total of 654,178 Landsat 7 scenes were used for the study. Persistent water was defined as all pixels classified as water in more than 50% of observations over the study period. We validated this mask by stratifying and comparing against a random sample of 135 single-date RapidEye images at 5-m resolution. It was found to have estimated user's and producer's accuracies of 94% and 88%, respectively. This estimated error is due primarily to temporal differences, such as dam construction, and to mixed water-land pixels along water body edges and narrow rivers. In order to investigate temporal extent dynamics, we expanded our analysis of surface water to classify every Landsat 5, 7, and 8 scene since 1999, augmented with elevation data from SRTM and ASTER, via a series of decision trees applied using Google Earth Engine. The water and land observations are aggregated per month of each year. We developed a model to visualize the dynamic trend in surface water presence since 1999, either per month or annually. This model can be used directly to assess seasonal and inter-annual trends globally or regionally, or the raw monthly counts can be used for more intensive hydrological analysis and as inputs for other related studies such as wetland mapping.
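The persistence rule stated above, a pixel counts as persistent water if it is classified as water in more than 50% of its valid observations, is straightforward to express per pixel. A minimal sketch follows; the grid layout and value codes ('w', 'l', None for cloud/no-data) are our own illustration, not the study's data format.

```python
def persistent_water(observations, threshold=0.5):
    """Flag pixels classified as water in > threshold of valid observations.

    observations: list of per-date classification grids (rows x cols),
    with values 'w' (water), 'l' (land), or None (cloud / no data).
    Returns a boolean grid of the same shape.
    """
    rows, cols = len(observations[0]), len(observations[0][0])
    out = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # only cloud-free, valid observations enter the denominator
            vals = [g[r][c] for g in observations if g[r][c] is not None]
            if vals and sum(v == 'w' for v in vals) / len(vals) > threshold:
                out[r][c] = True
    return out
```

At Landsat scale the same logic runs as a per-pixel reduction over the image stack (e.g., in Google Earth Engine) rather than as Python loops.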
Debeck, Kora; Wood, Evan; Zhang, Ruth; Buxton, Jane; Montaner, Julio; Kerr, Thomas
2011-08-01
While the community impacts of drug-related street disorder have been well described, lesser attention has been given to the potential health and social implications of drug scene exposure on street-involved people who use illicit drugs. Therefore, we sought to assess the impacts of exposure to a street-based drug scene among injection drug users (IDU) in a Canadian setting. Data were derived from a prospective cohort study known as the Vancouver Injection Drug Users Study. Four categories of drug scene exposure were defined based on the numbers of hours spent on the street each day. Three generalized estimating equation (GEE) logistic regression models were constructed to identify factors associated with varying levels of drug scene exposure (2-6, 6-15, over 15 hours) during the period of December 2005 to March 2009. Among our sample of 1,486 IDU, at baseline, a total of 314 (21%) fit the criteria for high drug scene exposure (>15 hours per day). In multivariate GEE analysis, factors significantly and independently associated with high exposure included: unstable housing (adjusted odds ratio [AOR] = 9.50; 95% confidence interval [CI], 6.36-14.20); daily crack use (AOR = 2.70; 95% CI, 2.07-3.52); encounters with police (AOR = 2.11; 95% CI, 1.62-2.75); and being a victim of violence (AOR = 1.49; 95% CI, 1.14-1.95). Regular employment (AOR = 0.50; 95% CI, 0.38-0.65), and engagement with addiction treatment (AOR = 0.58; 95% CI, 0.45-0.75) were negatively associated with high exposure. Our findings indicate that drug scene exposure is associated with markers of vulnerability and higher intensity addiction. Intensity of drug scene exposure was associated with indicators of vulnerability to harm in a dose-dependent fashion. These findings highlight opportunities for policy interventions to address exposure to street disorder in the areas of employment, housing, and addiction treatment.
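The adjusted odds ratios above come from multivariate GEE logistic regression, which accounts for covariates and repeated measures. As a much simpler illustration of the odds-ratio arithmetic only, the sketch below computes an unadjusted odds ratio with a Woolf 95% confidence interval from a 2x2 table; it is not the authors' model and would not reproduce their adjusted estimates.

```python
import math

def odds_ratio(exposed_cases, exposed_noncases,
               unexposed_cases, unexposed_noncases):
    """Unadjusted odds ratio with a Woolf 95% CI from a 2x2 table.

    Rows: exposure status; columns: outcome status. The CI is computed
    on the log-odds scale, where the standard error is
    sqrt(1/a + 1/b + 1/c + 1/d).
    """
    a, b = exposed_cases, exposed_noncases
    c, d = unexposed_cases, unexposed_noncases
    orr = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(orr) - 1.96 * se)
    hi = math.exp(math.log(orr) + 1.96 * se)
    return orr, lo, hi
```

GEE logistic models (e.g., via a statistics package) generalize this by adjusting for covariates and within-subject correlation across repeated study visits.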
Multispectral system analysis through modeling and simulation
NASA Technical Reports Server (NTRS)
Malila, W. A.; Gleason, J. M.; Cicone, R. C.
1977-01-01
The design and development of multispectral remote sensor systems and associated information extraction techniques should be optimized under the physical and economic constraints encountered and yet be effective over a wide range of scene and environmental conditions. Direct measurement of the full range of conditions to be encountered can be difficult, time consuming, and costly. Simulation of multispectral data by modeling scene, atmosphere, sensor, and data classifier characteristics is set forth as a viable alternative, particularly when coupled with limited sets of empirical measurements. A multispectral system modeling capability is described. Use of the model is illustrated for several applications - interpretation of remotely sensed data from agricultural and forest scenes, evaluating atmospheric effects in Landsat data, examining system design and operational configuration, and development of information extraction techniques.
NASA Technical Reports Server (NTRS)
Harwood, P. (Principal Investigator); Malin, P.; Finley, R.; Mcculloch, S.; Murphy, D.; Hupp, B.; Schell, J. A.
1977-01-01
The author has identified the following significant results. Four LANDSAT scenes were analyzed for the Harbor Island area test sites to produce land cover and land use maps using both image interpretation and computer-assisted techniques. When evaluated against aerial photography, the mean accuracy for three scenes was 84% for the image interpretation product and 62% for the computer-assisted classification maps. Analysis of the fourth scene was not completed using the image interpretation technique because of a poor-quality false color composite, but was available from the computer-assisted technique. Preliminary results indicate that these LANDSAT products can be applied to a variety of planning and management activities in the Texas coastal zone.
Delcasso, Sébastien; Huh, Namjung; Byeon, Jung Seop; Lee, Jihyun; Jung, Min Whan; Lee, Inah
2014-11-19
The hippocampus is important for contextual behavior, and the striatum plays key roles in decision making. When studying the functional relationships with the hippocampus, prior studies have focused mostly on the dorsolateral striatum (DLS), emphasizing the antagonistic relationships between the hippocampus and DLS in spatial versus response learning. By contrast, the functional relationships between the dorsomedial striatum (DMS) and hippocampus are relatively unknown. The current study reports that lesions to both the hippocampus and DMS profoundly impaired performance of rats in a visual scene-based memory task in which the animals were required to make a choice response by using visual scenes displayed in the background. Analysis of simultaneous recordings of local field potentials revealed that the gamma oscillatory power was higher in the DMS, but not in CA1, when the rat performed the task using familiar scenes than when using novel ones. In addition, the CA1-DMS networks increased coherence at γ, but not at θ, rhythm as the rat mastered the task. At the single-unit level, the neuronal populations in CA1 and DMS showed differential firing patterns when responses were made using familiar visual scenes compared with novel ones. Such learning-dependent firing patterns were observed earlier in the DMS than in CA1 before the rat made choice responses. The present findings suggest that both the hippocampus and DMS process memory representations for visual scenes in parallel with different time courses and that flexible choice action using background visual scenes requires coordinated operations of the hippocampus and DMS at γ frequencies. Copyright © 2014 the authors.
Torralbo, Ana; Walther, Dirk B.; Chai, Barry; Caddigan, Eamon; Fei-Fei, Li; Beck, Diane M.
2013-01-01
Within the range of images that we might categorize as a “beach”, for example, some will be more representative of that category than others. Here we first confirmed that humans could categorize “good” exemplars better than “bad” exemplars of six scene categories and then explored whether brain regions previously implicated in natural scene categorization showed a similar sensitivity to how well an image exemplifies a category. In a behavioral experiment participants were more accurate and faster at categorizing good than bad exemplars of natural scenes. In an fMRI experiment participants passively viewed blocks of good or bad exemplars from the same six categories. A multi-voxel pattern classifier trained to discriminate among category blocks showed higher decoding accuracy for good than bad exemplars in the PPA, RSC and V1. This difference in decoding accuracy cannot be explained by differences in overall BOLD signal, as average BOLD activity was either equivalent or higher for bad than good scenes in these areas. These results provide further evidence that V1, RSC and the PPA not only contain information relevant for natural scene categorization, but their activity patterns mirror the fundamentally graded nature of human categories. Analysis of the image statistics of our good and bad exemplars shows that variability in low-level features and image structure is higher among bad than good exemplars. A simulation of our neuroimaging experiment suggests that such a difference in variance could account for the observed differences in decoding accuracy. These results are consistent with both low-level models of scene categorization and models that build categories around a prototype. PMID:23555588
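The multi-voxel pattern classification logic described in the abstract above can be sketched with scikit-learn. Voxel patterns are simulated here, with "good" exemplars given lower within-category variability than "bad" ones, the property the authors link to higher decoding accuracy; nothing below uses real fMRI data:

```python
# A linear classifier is trained to discriminate six scene categories
# from simulated voxel patterns. Lower within-category variability
# ("good" exemplars) yields higher cross-validated decoding accuracy.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_categories, n_blocks, n_voxels = 6, 20, 100
prototypes = rng.normal(size=(n_categories, n_voxels))

def decoding_accuracy(noise_sd):
    # Each category contributes n_blocks noisy copies of its prototype
    X = np.vstack([proto + rng.normal(scale=noise_sd,
                                      size=(n_blocks, n_voxels))
                   for proto in prototypes])
    y = np.repeat(np.arange(n_categories), n_blocks)
    return cross_val_score(LinearSVC(), X, y, cv=5).mean()

acc_good = decoding_accuracy(noise_sd=0.5)  # low variability ("good")
acc_bad = decoding_accuracy(noise_sd=8.0)   # high variability ("bad")
print(acc_good, acc_bad)
```

This reproduces, in miniature, the simulation argument in the abstract: higher variance among exemplars alone can depress decoding accuracy.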
NASA Astrophysics Data System (ADS)
Vestrand, W. T.; Theiler, J.; Wozniak, P. R.
2004-10-01
The existence of rapidly slewing robotic telescopes and fast alert distribution via the Internet is revolutionizing our capability to study the physics of fast astrophysical transients. But the salient challenge that optical time domain surveys must conquer is mining the torrent of data to recognize important transients in a scene full of normal variations. Humans simply do not have the attention span, memory, or reaction time required to recognize fast transients and rapidly respond. Autonomous robotic instrumentation with the ability to extract pertinent information from the data stream in real time will therefore be essential for recognizing transients and commanding rapid follow-up observations while the ephemeral behavior is still present. Here we discuss how the development and integration of three technologies: (1) robotic telescope networks; (2) machine learning; and (3) advanced database technology, can enable the construction of smart robotic telescopes, which we loosely call ``thinking'' telescopes, capable of mining the sky in real time.
Constructing, Perceiving, and Maintaining Scenes: Hippocampal Activity and Connectivity
Zeidman, Peter; Mullally, Sinéad L.; Maguire, Eleanor A.
2015-01-01
In recent years, evidence has accumulated to suggest the hippocampus plays a role beyond memory. A strong hippocampal response to scenes has been noted, and patients with bilateral hippocampal damage cannot vividly recall scenes from their past or construct scenes in their imagination. There is debate about whether the hippocampus is involved in the online processing of scenes independent of memory. Here, we investigated the hippocampal response to visually perceiving scenes, constructing scenes in the imagination, and maintaining scenes in working memory. We found extensive hippocampal activation for perceiving scenes, and a circumscribed area of anterior medial hippocampus common to perception and construction. There was significantly less hippocampal activity for maintaining scenes in working memory. We also explored the functional connectivity of the anterior medial hippocampus and found significantly stronger connectivity with a distributed set of brain areas during scene construction compared with scene perception. These results increase our knowledge of the hippocampus by identifying a subregion commonly engaged by scenes, whether perceived or constructed, by separating scene construction from working memory, and by revealing the functional network underlying scene construction, offering new insights into why patients with hippocampal lesions cannot construct scenes. PMID:25405941
Tomographic Processing of Synthetic Aperture Radar Signals for Enhanced Resolution
1989-11-01
to image larger scenes, this problem becomes more important. A byproduct of this investigation is a duality theorem which is a generalization of the ... well-known Projection-Slice Theorem. The second problem proposed is that of imaging a rapidly-spinning object, for example in inverse SAR mode ... slices is absent. There is a possible connection of the word to the Projection-Slice Theorem, but, as seen in Chapter 4, even this is absent in the
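The Projection-Slice Theorem referenced in the excerpt above can be checked numerically: the 1-D Fourier transform of a projection of an image equals the corresponding central slice of the image's 2-D Fourier transform. A minimal NumPy verification:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))          # arbitrary test "scene"

projection = img.sum(axis=1)             # project by summing along y
slice_1d = np.fft.fft(projection)        # 1-D FFT of the projection
central_slice = np.fft.fft2(img)[:, 0]   # v = 0 slice of the 2-D FFT

print(np.allclose(slice_1d, central_slice))  # → True
```

The equality holds because setting v = 0 in the 2-D transform collapses the inner sum over y into exactly the projection.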
NASA Astrophysics Data System (ADS)
Florio, Christopher J.; Cota, Steve A.; Gaffney, Stephanie K.
2010-08-01
In a companion paper presented at this conference we described how The Aerospace Corporation's Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) may be used in conjunction with a limited number of runs of AFRL's MODTRAN4 radiative transfer code, to quickly predict the top-of-atmosphere (TOA) radiance received in the visible through midwave IR (MWIR) by an earth viewing sensor, for any arbitrary combination of solar and sensor elevation angles. The method is particularly useful for large-scale scene simulations where each pixel could have a unique value of reflectance/emissivity and temperature, making the run-time required for direct prediction via MODTRAN4 prohibitive. In order to be self-consistent, the method described requires an atmospheric model (defined, at a minimum, as a set of vertical temperature, pressure and water vapor profiles) that is consistent with the average scene temperature. MODTRAN4 provides only six model atmospheres, ranging from sub-arctic winter to tropical conditions - too few to cover with sufficient temperature resolution the full range of average scene temperatures that might be of interest. Model atmospheres consistent with intermediate temperature values can be difficult to come by, and in any event, their use would be too cumbersome for use in trade studies involving a large number of average scene temperatures. In this paper we describe and assess a method for predicting TOA radiance for any arbitrary average scene temperature, starting from only a limited number of model atmospheres.
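The interpolation strategy described above (a small number of radiative-transfer runs anchoring predictions at arbitrary average scene temperatures) can be sketched in one function. The anchor temperatures and radiance values below are invented for illustration and are not MODTRAN4 output:

```python
import numpy as np

# Hypothetical anchor points: (average scene temperature in K,
# band-integrated TOA radiance). In practice these would come from a
# limited set of radiative transfer runs at the model atmospheres.
anchor_T = np.array([257.0, 272.0, 288.0, 294.0, 300.0])
anchor_L = np.array([2.1, 3.0, 4.4, 5.1, 5.9])  # made-up values

def toa_radiance(scene_T):
    """Estimate TOA radiance at an intermediate scene temperature
    by piecewise-linear interpolation between anchor runs."""
    return np.interp(scene_T, anchor_T, anchor_L)

print(toa_radiance(291.0))  # → 4.75, halfway between the 288 K and 294 K anchors
```

A per-pixel scene simulation can then call this lookup instead of the full radiative transfer code, which is the run-time saving the paper targets.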
The perception of naturalness correlates with low-level visual features of environmental scenes.
Berman, Marc G; Hout, Michael C; Kardan, Omid; Hunter, MaryCarol R; Yourganov, Grigori; Henderson, John M; Hanayik, Taylor; Karimi, Hossein; Jonides, John
2014-01-01
Previous research has shown that interacting with natural environments vs. more urban or built environments can have salubrious psychological effects, such as improvements in attention and memory. Even viewing pictures of nature vs. pictures of built environments can produce similar effects. A major question is: What is it about natural environments that produces these benefits? Problematically, there are many differing qualities between natural and urban environments, making it difficult to narrow down the dimensions of nature that may lead to these benefits. In this study, we set out to uncover visual features that related to individuals' perceptions of naturalness in images. We quantified naturalness in two ways: first, implicitly using a multidimensional scaling analysis and second, explicitly with direct naturalness ratings. Features that seemed most related to perceptions of naturalness were related to the density of contrast changes in the scene, the density of straight lines in the scene, the average color saturation in the scene and the average hue diversity in the scene. We then trained a machine-learning algorithm to predict whether a scene was perceived as being natural or not based on these low-level visual features and we could do so with 81% accuracy. As such we were able to reliably predict subjective perceptions of naturalness with objective low-level visual features. Our results can be used in future studies to determine if these features, which are related to naturalness, may also lead to the benefits attained from interacting with nature.
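The low-level features the abstract above links to perceived naturalness can be approximated in a few lines. This sketch (contrast-change density, mean saturation, hue diversity) uses a random image and simplified feature definitions, and omits the straight-line feature; it is illustrative, not the authors' pipeline:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))            # stand-in for a photograph

# Density of contrast changes: fraction of pixels whose luminance
# gradient magnitude exceeds the image's mean gradient magnitude.
lum = img.mean(axis=2)
gy, gx = np.gradient(lum)
grad_mag = np.hypot(gx, gy)
edge_density = (grad_mag > grad_mag.mean()).mean()

hsv = rgb_to_hsv(img)
mean_saturation = hsv[..., 1].mean()

# Hue diversity: Shannon entropy of a coarse hue histogram.
counts, _ = np.histogram(hsv[..., 0], bins=16, range=(0, 1))
p = counts / counts.sum()
hue_diversity = -np.sum(p[p > 0] * np.log2(p[p > 0]))

print(edge_density, mean_saturation, hue_diversity)
```

Per-image feature vectors of this kind could then feed a classifier such as the one the abstract reports at 81% accuracy.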
Putting lexical constraints in context into the visual-world paradigm.
Novick, Jared M; Thompson-Schill, Sharon L; Trueswell, John C
2008-06-01
Prior eye-tracking studies of spoken sentence comprehension have found that the presence of two potential referents, e.g., two frogs, can guide listeners toward a Modifier interpretation of Put the frog on the napkin... despite strong lexical biases associated with Put that support a Goal interpretation of the temporary ambiguity (Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M. & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632-1634; Trueswell, J. C., Sekerina, I., Hill, N. M. & Logrip, M. L. (1999). The kindergarten-path effect: Studying on-line sentence processing in young children. Cognition, 73, 89-134). This pattern is not expected under constraint-based parsing theories: cue conflict between the lexical evidence (which supports the Goal analysis) and the visuo-contextual evidence (which supports the Modifier analysis) should result in uncertainty about the intended analysis and partial consideration of the Goal analysis. We reexamined these put studies (Experiment 1) by introducing a response time-constraint and a spatial contrast between competing referents (a frog on a napkin vs. a frog in a bowl). If listeners immediately interpret on the... as the start of a restrictive modifier, then their eye movements should rapidly converge on the intended referent (the frog on something). However, listeners showed this pattern only when the phrase was unambiguously a Modifier (Put the frog that's on the...). Syntactically ambiguous trials resulted in transient consideration of the Competitor animal (the frog in something). A reading study was also run on the same individuals (Experiment 2) and performance was compared between the two experiments. Those individuals who relied heavily on lexical biases to resolve a complement ambiguity in reading (The man heard/realized the story had been...) 
showed increased sensitivity to both lexical and contextual constraints in the put-task; i.e., increased consideration of the Goal analysis in 1-Referent Scenes, but also adeptness at using spatial constraints of prepositions (in vs. on) to restrict referential alternatives in 2-Referent Scenes. These findings cross-validate visual world and reading methods and support multiple-constraint theories of sentence processing in which individuals differ in their sensitivity to lexical contingencies.
Behavior analysis of video object in complicated background
NASA Astrophysics Data System (ADS)
Zhao, Wenting; Wang, Shigang; Liang, Chao; Wu, Wei; Lu, Yang
2016-10-01
This paper aims to achieve robust behavior recognition of video objects against complicated backgrounds. Features of the video object are described and modeled according to the depth information of three-dimensional video. Multi-dimensional eigenvectors are constructed and used to process high-dimensional data. Stable object tracking in complex scenes can be achieved with multi-feature-based behavior analysis, so as to obtain the motion trail. Subsequently, effective behavior recognition of the video object is obtained according to the decision criteria. Moreover, both the real-time performance of the algorithms and the accuracy of the analysis are greatly improved. The theory and methods for behavior analysis of video objects in real scenes put forward by this work have broad application prospects and practical significance in security, counter-terrorism, military, and many other fields.
A comparison of viewer reactions to outdoor scenes and photographs of those scenes
Shafer, Elwood, Jr.; Richards, Thomas A.
1974-01-01
A color-slide projection or photograph can be used to determine reactions to an actual scene if the presentation adequately includes most of the elements in the scene. Eight kinds of scenes were subjected to three different types of presentation: (A) viewing the actual scenes, (B) viewing color slides of the scenes, and (C) viewing color photographs of the scenes. For...
Conceptual short term memory in perception and thought.
Potter, Mary C
2012-01-01
Conceptual short term memory (CSTM) is a theoretical construct that provides one answer to the question of how perceptual and conceptual processes are related. CSTM is a mental buffer and processor in which current perceptual stimuli and their associated concepts from long term memory (LTM) are represented briefly, allowing meaningful patterns or structures to be identified (Potter, 1993, 1999, 2009). CSTM is different from and complementary to other proposed forms of working memory: it is engaged extremely rapidly, has a large but ill-defined capacity, is largely unconscious, and is the basis for the unreflective understanding that is characteristic of everyday experience. The key idea behind CSTM is that most cognitive processing occurs without review or rehearsal of material in standard working memory and with little or no conscious reasoning. When one perceives a meaningful stimulus such as a word, picture, or object, it is rapidly identified at a conceptual level and in turn activates associated information from LTM. New links among concurrently active concepts are formed in CSTM, shaped by parsing mechanisms of language or grouping principles in scene perception and by higher-level knowledge and current goals. The resulting structure represents the gist of a picture or the meaning of a sentence, and it is this structure that we are conscious of and that can be maintained in standard working memory and consolidated into LTM. Momentarily activated information that is not incorporated into such structures either never becomes conscious or is rapidly forgotten. This whole cycle - identification of perceptual stimuli, memory recruitment, structuring, consolidation in LTM, and forgetting of non-structured material - may occur in less than 1 s when viewing a pictured scene or reading a sentence. The evidence for such a process is reviewed and its implications for the relation of perception and cognition are discussed.
Prediction beyond the borders: ERP indices of boundary extension-related error.
Czigler, István; Intraub, Helene; Stefanics, Gábor
2013-01-01
Boundary extension (BE) is a rapidly occurring memory error in which participants incorrectly remember having seen beyond the boundaries of a view. However, behavioral data has provided no insight into how quickly after the onset of a test picture the effect is detected. To determine the time course of BE from neural responses we conducted a BE experiment while recording EEG. We exploited a diagnostic response asymmetry to mismatched views (a closer and wider view of the same scene) in which the same pair of views is rated as more similar when the closer item is shown first than vice versa. On each trial, a closer or wider view was presented for 250 ms followed by a 250-ms mask and either the identical view or a mismatched view. Boundary ratings replicated the typical asymmetry. We found a similar asymmetry in ERP responses in the 265-285 ms interval where the second member of the close-then-wide pairs evoked less negative responses at left parieto-temporal sites compared to the wide-then-close condition. We also found diagnostic ERP effects in the 500-560 ms range, where ERPs to wide-then-close pairs were more positive at centro-parietal sites than in the other three conditions, which is thought to be related to participants' confidence in their perceptual decision. The ERP effect in the 265-285 ms range suggests the falsely remembered region beyond the view-boundaries of S1 is rapidly available and impacts assessment of the test picture within the first 265 ms of viewing, suggesting that extrapolated scene structure may be computed rapidly enough to play a role in the integration of successive views during visual scanning.
The Role of Prediction In Perception: Evidence From Interrupted Visual Search
Mereu, Stefania; Zacks, Jeffrey M.; Kurby, Christopher A.; Lleras, Alejandro
2014-01-01
Recent studies of rapid resumption—an observer’s ability to quickly resume a visual search after an interruption—suggest that predictions underlie visual perception. Previous studies showed that when the search display changes unpredictably after the interruption, rapid resumption disappears. This conclusion is at odds with our everyday experience, where the visual system seems to be quite efficient despite continuous changes of the visual scene; however, in the real world, changes can typically be anticipated based on previous knowledge. The present study aimed to evaluate whether changes to the visual display can be incorporated into the perceptual hypotheses, if observers are allowed to anticipate such changes. Results strongly suggest that an interrupted visual search can be rapidly resumed even when information in the display has changed after the interruption, so long as participants not only can anticipate them, but also are aware that such changes might occur. PMID:24820440
HemoVision: An automated and virtual approach to bloodstain pattern analysis.
Joris, Philip; Develter, Wim; Jenar, Els; Suetens, Paul; Vandermeulen, Dirk; Van de Voorde, Wim; Claes, Peter
2015-06-01
Bloodstain pattern analysis (BPA) is a subspecialty of forensic sciences, dealing with the analysis and interpretation of bloodstain patterns in crime scenes. The aim of BPA is uncovering new information about the actions that took place in a crime scene, potentially leading to a confirmation or refutation of a suspect's statement. A typical goal of BPA is to estimate the flight paths for a set of stains, followed by a directional analysis in order to estimate the area of origin for the stains. The traditional approach, referred to as stringing, consists of attaching a piece of string to each stain, and letting the string represent an approximation of the stain's flight path. Even though stringing has been used extensively, many (practical) downsides exist. We propose an automated and virtual approach, employing fiducial markers and digital images. By automatically reconstructing a single coordinate frame from several images, limited user input is required. Synthetic crime scenes were created and analysed in order to evaluate the approach. Results demonstrate the correct operation and practical advantages, suggesting that the proposed approach may become a valuable asset for practically analysing bloodstain spatter patterns. Accompanying software called HemoVision is currently provided as a demonstrator and will be further developed for practical use in forensic investigations. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
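The directional analysis that stringing approximates can be sketched numerically, assuming the classic ellipse relation for impact angle and straight-line back-projection of stain travel directions; all stain measurements below are hypothetical:

```python
import numpy as np

def impact_angle(width_mm, length_mm):
    """Impact angle (degrees) from an elliptical stain's dimensions,
    using the classic relation alpha = arcsin(width / length)."""
    return np.degrees(np.arcsin(width_mm / length_mm))

print(impact_angle(4.0, 8.0))  # → 30.0 (width/length = 0.5)

# 2-D area of convergence: intersect the back-projected travel lines
# of two stains (position p, unit direction d pointing back toward
# the origin of the spatter).
p1, d1 = np.array([0.0, 0.0]), np.array([1.0, 1.0]) / np.sqrt(2)
p2, d2 = np.array([2.0, 0.0]), np.array([-1.0, 1.0]) / np.sqrt(2)

# Solve p1 + t1*d1 = p2 + t2*d2 for the intersection point
A = np.column_stack([d1, -d2])
t1, t2 = np.linalg.solve(A, p2 - p1)
convergence = p1 + t1 * d1
print(convergence)  # the two lines meet at (1, 1)
```

Combining each stain's impact angle with its 2-D convergence point is what lifts the estimate from an area of convergence to an area of origin in 3-D.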
Mass balance investigation of alpine glaciers through LANDSAT TM data
NASA Technical Reports Server (NTRS)
Bayr, Klaus J.
1989-01-01
An analysis of LANDSAT Thematic Mapper (TM) data of the Pasterze Glacier and the Kleines Fleisskees in the Austrian Alps was undertaken and compared with meteorological data from nearby weather stations. Alpine or valley glaciers can be used to study regional and worldwide climate changes. Alpine glaciers respond relatively quickly to a warming or cooling trend in temperature through an advance or a retreat of the terminus. In addition, the mass balance of the glacier is affected. Last year, two TM scenes of the Pasterze Glacier from Aug. 1984 and Aug. 1986 were used to study the difference in reflectance. This year, in addition to the scenes from last year, one MSS scene from Aug. 1976 and a TM scene from 1988 were examined for both the Pasterze Glacier and the Kleines Fleisskees. During the LANDSAT overpass on 6 Aug. 1988, ground truthing on the Pasterze Glacier was undertaken. The results indicate that there was considerably more reflectance in 1976 and 1984 than in 1986 and 1988. The climatological data of the weather stations Sonnblick and Rudolfshuette were examined and compared with the results found through the LANDSAT data. There were relations between the meteorological and LANDSAT data: the average temperature over the last 100 years showed an increase of 0.4 C, snowfall declined during the same time period, but overall precipitation did not reveal any significant change over the same period. With the use of an interactive image analysis computer, the LANDSAT scenes were studied. The terminus of the Pasterze Glacier has retreated 348 m and the terminus of the Kleines Fleisskees 121 m since 1965. This approach, using LANDSAT MSS and TM digital data in conjunction with meteorological data, can be effectively used to monitor regional and worldwide climate changes.
Banno, Hayaki; Saiki, Jun
2015-03-01
Recent studies have sought to determine which levels of categories are processed first in visual scene categorization and have shown that the natural and man-made superordinate-level categories are understood faster than are basic-level categories. The current study examined the robustness of the superordinate-level advantage in a visual scene categorization task. A go/no-go categorization task was evaluated with response time distribution analysis using an ex-Gaussian template. A visual scene was categorized as either superordinate or basic level, and two basic-level categories forming a superordinate category were judged as either similar or dissimilar to each other. First, outdoor/indoor and natural/man-made groups were used as superordinate categories to investigate whether the advantage could be generalized beyond the natural/man-made boundary. Second, a set of images forming a superordinate category was manipulated. We predicted that decreasing image set similarity within the superordinate-level category would work against the speed advantage. We found that basic-level categorization was faster than outdoor/indoor categorization when the outdoor category comprised dissimilar basic-level categories. Our results indicate that the superordinate-level advantage in visual scene categorization is labile across different categories and category structures. © 2015 SAGE Publications.
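An ex-Gaussian analysis of the kind mentioned in the abstract above models each response-time distribution as a Gaussian component (mu, sigma) plus an exponential tail (tau). A minimal sketch with synthetic data, using moment-based parameter recovery (SciPy's `exponnorm` is the same distribution, available for a full maximum-likelihood fit):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, tau = 400.0, 40.0, 100.0      # hypothetical RT parameters (ms)
rts = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

# Moment-based recovery: tau from the skewness, then mu and sigma
# from the mean and variance. Often used as a starting point before
# maximum-likelihood fitting (e.g. with scipy.stats.exponnorm).
m, s, g = rts.mean(), rts.std(ddof=1), stats.skew(rts)
tau_hat = s * (g / 2.0) ** (1.0 / 3.0)
mu_hat = m - tau_hat
sigma_hat = np.sqrt(max(s**2 - tau_hat**2, 0.0))
print(mu_hat, sigma_hat, tau_hat)
```

Shifts in tau versus mu are what an ex-Gaussian template analysis uses to separate slow-tail effects from whole-distribution shifts.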
Ganesh, Attigodu Chandrashekara; Berthommier, Frédéric; Schwartz, Jean-Luc
2016-01-01
We introduce "Audio-Visual Speech Scene Analysis" (AVSSA) as an extension of the two-stage Auditory Scene Analysis model towards audiovisual scenes made of mixtures of speakers. AVSSA assumes that a coherence index between the auditory and the visual input is computed prior to audiovisual fusion, making it possible to determine whether the sensory inputs should be bound together. Previous experiments on the modulation of the McGurk effect by audiovisual coherent vs. incoherent contexts presented before the McGurk target have provided experimental evidence supporting AVSSA. Indeed, incoherent contexts appear to decrease the McGurk effect, suggesting that they produce lower audiovisual coherence and hence less audiovisual fusion. The present experiments extend the AVSSA paradigm by creating contexts made of competing audiovisual sources and measuring their effect on McGurk targets. The competing audiovisual sources have, respectively, a high and a low audiovisual coherence (that is, large vs. small audiovisual comodulations in time). The first experiment involves contexts made of two auditory sources and one video source associated with either the first or the second audio source. It appears that the McGurk effect is smaller after the context made of the visual source associated with the auditory source with less audiovisual coherence. In the second experiment with the same stimuli, the participants are asked to attend to either one or the other source. The data show that the modulation of fusion depends on the attentional focus. Altogether, these two experiments shed light on audiovisual binding, the AVSSA process and the role of attention.
Eye Movements, Visual Search and Scene Memory, in an Immersive Virtual Environment
Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary
2014-01-01
Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, however, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency. PMID:24759905
Achieving ultra-high temperatures with a resistive emitter array
NASA Astrophysics Data System (ADS)
Danielson, Tom; Franks, Greg; Holmes, Nicholas; LaVeigne, Joe; Matis, Greg; McHugh, Steve; Norton, Dennis; Vengel, Tony; Lannon, John; Goodwin, Scott
2016-05-01
The rapid development of very-large format infrared detector arrays has challenged the IR scene projector community to also develop larger-format infrared emitter arrays to support the testing of systems incorporating these detectors. In addition to larger formats, many scene projector users require much higher simulated temperatures than can be generated with current technology in order to fully evaluate the performance of their systems and associated processing algorithms. Under the Ultra High Temperature (UHT) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>1024 x 1024) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During earlier phases of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1400 K. New emitter materials have subsequently been selected to produce pixels that achieve even higher apparent temperatures. Test results from pixels fabricated using the new material set will be presented and discussed. A 'scalable' Read In Integrated Circuit (RIIC) is also being developed under the same UHT program to drive the high temperature pixels. This RIIC will utilize through-silicon via (TSV) and Quilt Packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the yield limitations inherent in large-scale integrated circuits. Results of design verification testing of the completed RIIC will be presented and discussed.
NASA Astrophysics Data System (ADS)
Weinmann, Martin; Jutzi, Boris; Hinz, Stefan; Mallet, Clément
2015-07-01
3D scene analysis in terms of automatically assigning 3D points a respective semantic label has become a topic of great importance in photogrammetry, remote sensing, computer vision and robotics. In this paper, we address the issue of how to increase the distinctiveness of geometric features and select the most relevant ones among these for 3D scene analysis. We present a new, fully automated and versatile framework composed of four components: (i) neighborhood selection, (ii) feature extraction, (iii) feature selection and (iv) classification. For each component, we consider a variety of approaches which allow applicability in terms of simplicity, efficiency and reproducibility, so that end-users can easily apply the different components and do not require expert knowledge in the respective domains. In a detailed evaluation involving 7 neighborhood definitions, 21 geometric features, 7 approaches for feature selection, 10 classifiers and 2 benchmark datasets, we demonstrate that the selection of optimal neighborhoods for individual 3D points significantly improves the results of 3D scene analysis. Additionally, we show that the selection of adequate feature subsets may even further increase the quality of the derived results while significantly reducing both processing time and memory consumption.
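The geometric features in such frameworks are commonly derived from the eigenvalues of the local 3D covariance matrix. As an illustrative sketch only (the paper's full 21-feature set, neighborhood selection, and classifiers are not reproduced here), the standard linearity/planarity/scattering descriptors for one point neighborhood can be computed as:

```python
import numpy as np

def geometric_features(points):
    """Eigenvalue-based shape features of a 3D point neighborhood.

    `points` is an (N, 3) array of neighbors of a query point. The
    features follow the common linearity/planarity/scattering
    definitions based on the eigenvalues of the covariance matrix
    (sorted in descending order and normalised to sum to one).
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    evals = np.linalg.eigvalsh(cov)[::-1]      # l1 >= l2 >= l3 >= 0
    l1, l2, l3 = evals / evals.sum()           # normalised eigenvalues
    return {
        "linearity":  (l1 - l2) / l1,
        "planarity":  (l2 - l3) / l1,
        "scattering": l3 / l1,
    }

# A flat (planar) neighborhood: planarity should dominate.
rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(-1, 1, 200),
                         rng.uniform(-1, 1, 200),
                         np.zeros(200)])
feats = geometric_features(plane)
```

Varying the neighborhood size changes these eigenvalues, which is why per-point optimal neighborhood selection, as the paper shows, affects classification quality.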
Modifications to Improve Data Acquisition and Analysis for Camouflage Design
1983-01-01
terrains into facsimiles of the original scenes in 3, 4, or 5 colors in CIELAB notation. Tasks that were addressed included optimization of the... a histogram algorithm (HIST) was used as a first step in the clustering of the CIELAB values of the scene pixels. This algorithm is highly efficient... however, an optimal process, and the CIELAB coordinates of the final color domains can be influenced by the color coordinate increments used in the...
Bloodstain pattern analysis--casework experience.
Karger, B; Rand, S; Fracasso, T; Pfeiffer, H
2008-10-25
The morphology of bloodstain distribution patterns at the crime scene carries vital information for a reconstruction of the events. In contrast to experimental work, case reports in which the reconstruction has been verified have rarely been published. We therefore present a series of four illustrative cases in which bloodstain pattern analysis at the crime scene made a reconstruction of the events possible and in which this reconstruction was later verified by a confession of the offender. The cases include various types of bloodstains such as contact and smear stains, drop stains, arterial blood spatter and splash stains from both impact and cast-off patterns. Problems frequently encountered in practical casework are addressed, such as unfavourable environmental conditions or combinations of different bloodstain patterns. It is also demonstrated that the analysis of bloodstain morphology can support individualisation of stains by directing the selection of a limited number of stains from a complex pattern for DNA analysis. The complexity of real situations suggests a step-by-step approach starting with a comprehensive view of the overall picture. This is followed by a differentiation and analysis of single bloodstain patterns and a search for informative details. It is ideal when the expert inspecting the crime scene has also performed the autopsy, but he definitely must have detailed knowledge of the injuries of the deceased/injured and of the possible mechanisms of production.
Trends in international health development.
Lien, Lars
2002-01-01
"... Good population health is a crucial input into poverty reduction, economic growth and long-term economic development... This point is widely recognised by analysts and policy makers, but is greatly underestimated in its qualitative and quantitative significance, and in the investment allocations of many developing country and donor governments."--Commission on Macroeconomics and Health The international health development scene has changed rapidly during the past five years. From being a largely bilateral effort involving a few multilateral organisations and many NGOs, the field now includes new global partnerships that have entered the scene and become major funding agencies. The provision of aid has also shifted from small-scale projects to financial support of large programmes. The purpose of this article is to describe some of the major transformations that have taken place in the organisation, delivery and objectives of international health development. Before presenting the new international health development agenda, a brief introduction is given to the challenges that prompted renewed thinking about international aid.
Development of a high-definition IR LED scene projector
NASA Astrophysics Data System (ADS)
Norton, Dennis T.; LaVeigne, Joe; Franks, Greg; McHugh, Steve; Vengel, Tony; Oleson, Jim; MacDougal, Michael; Westerfeld, David
2016-05-01
Next-generation Infrared Focal Plane Arrays (IRFPAs) are demonstrating ever increasing frame rates, dynamic range, and format size, while moving to smaller pitch arrays. These improvements in IRFPA performance and array format have challenged the IRFPA test community to accurately and reliably test them in a Hardware-In-the-Loop environment utilizing Infrared Scene Projector (IRSP) systems. The rapidly-evolving IR seeker and sensor technology has, in some cases, surpassed the capabilities of existing IRSP technology. To meet the demands of future IRFPA testing, Santa Barbara Infrared Inc. is developing an Infrared Light Emitting Diode IRSP system. Design goals of the system include a peak radiance >2.0 W/cm2/sr within the 3.0-5.0 μm waveband, maximum frame rates >240 Hz, and >4 million pixels within a form factor supported by pixel pitches <=32 μm. This paper provides an overview of our current phase of development, system design considerations, and future development work.
Ryals, Anthony J.; Wang, Jane X.; Polnaszek, Kelly L.; Voss, Joel L.
2015-01-01
Although the hippocampus unequivocally supports explicit/declarative memory, fewer findings have demonstrated its role in implicit expressions of memory. We tested for hippocampal contributions to an implicit expression of configural/relational memory for complex scenes using eye-movement tracking during functional magnetic resonance imaging (fMRI) scanning. Participants studied scenes and were later tested using scenes that resembled study scenes in their overall feature configuration but comprised different elements. These configurally similar scenes were used to limit explicit memory, and were intermixed with new scenes that did not resemble studied scenes. Scene configuration memory was expressed through eye movements reflecting exploration overlap (EO), which is the viewing of the same scene locations at both study and test. EO reliably discriminated similar study-test scene pairs from study-new scene pairs, was reliably greater for similarity-based recognition hits than for misses, and correlated with hippocampal fMRI activity. In contrast, subjects could not reliably discriminate similar from new scenes by overt judgments, although ratings of familiarity were slightly higher for similar than new scenes. Hippocampal fMRI correlates of this weak explicit memory were distinct from EO-related activity. These findings collectively suggest that EO was an implicit expression of scene configuration memory associated with hippocampal activity. Visual exploration can therefore reflect implicit hippocampal-related memory processing that can be observed in eye-movement behavior during naturalistic scene viewing. PMID:25620526
Situated sentence processing: the coordinated interplay account and a neurobehavioral model.
Crocker, Matthew W; Knoeferle, Pia; Mayberry, Marshall R
2010-03-01
Empirical evidence demonstrating that sentence meaning is rapidly reconciled with the visual environment has been broadly construed as supporting the seamless interaction of visual and linguistic representations during situated comprehension. Based on recent behavioral and neuroscientific findings, however, we argue for the more deeply rooted coordination of the mechanisms underlying visual and linguistic processing, and for jointly considering the behavioral and neural correlates of scene-sentence reconciliation during situated comprehension. The Coordinated Interplay Account (CIA; Knoeferle, P., & Crocker, M. W. (2007). The influence of recent scene events on spoken comprehension: Evidence from eye movements. Journal of Memory and Language, 57(4), 519-543) asserts that incremental linguistic interpretation actively directs attention in the visual environment, thereby increasing the salience of attended scene information for comprehension. We review behavioral and neuroscientific findings in support of the CIA's three processing stages: (i) incremental sentence interpretation, (ii) language-mediated visual attention, and (iii) the on-line influence of non-linguistic visual context. We then describe a recently developed connectionist model which both embodies the central CIA proposals and has been successfully applied in modeling a range of behavioral findings from the visual world paradigm (Mayberry, M. R., Crocker, M. W., & Knoeferle, P. (2009). Learning to attend: A connectionist model of situated language comprehension. Cognitive Science). Results from a new simulation suggest the model also correlates with event-related brain potentials elicited by the immediate use of visual context for linguistic disambiguation (Knoeferle, P., Habets, B., Crocker, M. W., & Münte, T. F. (2008). Visual scenes trigger immediate syntactic reanalysis: Evidence from ERPs during situated spoken comprehension. Cerebral Cortex, 18(4), 789-795). 
Finally, we argue that the mechanisms underlying interpretation, visual attention, and scene apprehension are not only in close temporal synchronization, but have co-adapted to optimize real-time visual grounding of situated spoken language, thus facilitating the association of linguistic, visual and motor representations that emerge during the course of our embodied linguistic experience in the world. Copyright 2009 Elsevier Inc. All rights reserved.
Visualisation of urban airborne laser scanning data with occlusion images
NASA Astrophysics Data System (ADS)
Hinks, Tommy; Carr, Hamish; Gharibi, Hamid; Laefer, Debra F.
2015-06-01
Airborne Laser Scanning (ALS) was introduced to provide rapid, high resolution scans of landforms for computational processing. More recently, ALS has been adapted for scanning urban areas. The greater complexity of urban scenes necessitates the development of novel methods to exploit urban ALS to best advantage. This paper presents occlusion images: a novel technique that exploits the geometric complexity of the urban environment to improve visualisation of small details for better feature recognition. The algorithm is based on an inversion of traditional occlusion techniques.
1973-06-22
SL2-81-157 (22 June 1973) --- This view of the Black Hills Region, SD (44.0N, 104.0W) shows the scenic Black Hills where Mt. Rushmore and other monuments are located. Cities and towns in this view include: Rapid City, Deadwood, and Belle Fourche with the nearby Belle Fourche Reservoir. Notable in this scene are the recovering burn scars (seen as irregular shaped light toned patches) from a 1959 forest fire in the Black Hills National Forest near the edge of the photo. Photo credit: NASA
Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis
NASA Astrophysics Data System (ADS)
Che, E.; Olsen, M. J.
2017-09-01
Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common post-processing step that groups the point cloud into a number of clusters, simplifying the data for the subsequent modelling and analysis needed in most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern utilized during acquisition of TLS data from most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimating the normal at each point, which prevents errors in normal estimation from propagating into the segmentation. Both an indoor and an outdoor scene are used in experiments to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
Crime scene investigation, reporting, and reconstruction (CSIRR)
NASA Astrophysics Data System (ADS)
Booth, John F.; Young, Jeffrey M.; Corrigan, Paul
1997-02-01
Graphic Data Systems Corporation (GDS Corp.) and Intelligent Graphics Solutions, Inc. (IGS) combined talents in 1995 to design and develop a MicroGDS™ application to support field investigations of crime scenes, such as homicides, bombings, and arsons. IGS and GDS Corp. prepared design documents under the guidance of federal, state, and local crime scene reconstruction experts and with information from the FBI's evidence response team field book. The application was then developed to encompass the key components of crime scene investigation: staff assigned to the incident, tasks occurring at the scene, visits to the scene location, photographs taken of the crime scene, related documents, involved persons, catalogued evidence, and two- or three-dimensional crime scene reconstruction. Crime scene investigation, reporting, and reconstruction (CSIRR) provides investigators with a single application for both capturing all tabular data about the crime scene and quickly rendering a sketch of the scene. Tabular data is captured through intuitive database forms, while MicroGDS™ has been modified to readily allow non-CAD users to sketch the scene.
Bar, Moshe; Aminoff, Elissa; Schacter, Daniel L.
2009-01-01
The parahippocampal cortex (PHC) has been implicated both in episodic memory and in place/scene processing. We proposed that this region should instead be seen as intrinsically mediating contextual associations, and not place/scene processing or episodic memory exclusively. Given that place/scene processing and episodic memory both rely on associations, this modified framework provides a platform for reconciling what seemed like different roles assigned to the same region. Comparing scenes with scenes, we show here that the PHC responds significantly more strongly to scenes with rich contextual associations compared with scenes of equal visual qualities but less associations. This result provides the strongest support to the view that the PHC mediates contextual associations in general, rather than places or scenes proper, and necessitates a revision of current views such as that the PHC contains a dedicated place/scenes “module.” PMID:18716212
Did limits on payments for tobacco placements in US movies affect how movies are made?
Morgenstern, Matthis; Stoolmiller, Mike; Bergamini, Elaina; Sargent, James D
2017-01-01
Objective To compare how smoking was depicted in Hollywood movies before and after an intervention limiting paid product placement for cigarette brands. Design Correlational analysis. Setting/Participants Top box office hits released in the USA primarily between 1988 and 2011 (n=2134). Intervention The Master Settlement Agreement (MSA), implemented in 1998. Main outcome measures This study analyses trends for whether or not movies depicted smoking, and among movies with smoking, counts for character smoking scenes and average smoking scene duration. Results There was no detectable trend for any measure prior to the MSA. In 1999, 79% of movies contained smoking, and movies with smoking contained 8 scenes of character smoking, with the average duration of a character smoking scene being 81 s. After the MSA, there were significant negative post-MSA changes (p<0.05) for linear trends in proportion of movies with any smoking (which declined to 41% by 2011) and, in movies with smoking, counts of character smoking scenes (which declined to 4 by 2011). Between 1999 and 2000, there was an immediate and dramatic drop in average length of a character smoking scene, which decreased to 19 s, and remained there for the duration of the study. The probability that the drop of −62.5 (95% CI −55.1 to −70.0) seconds was due to chance was p<10−16. Conclusions This study's correlational data suggest that restricting payments for tobacco product placement coincided with profound changes in the duration of smoking depictions in movies. PMID:26822189
Salgado, María V; Pérez, Adriana; Abad-Vivero, Erika N; Thrasher, James F; Sargent, James D; Mejía, Raúl
2016-04-01
Smoking scenes in movies promote adolescent smoking onset; thus, analysis of the number of images of smoking in movies actually reaching adolescents has become a subject of increasing interest. The aim of this study was to estimate the level of exposure to images of smoking in movies watched by adolescents in Argentina and Mexico. First-year secondary school students from Argentina and Mexico were surveyed. The 100 highest-grossing films from each year of the period 2009-2013 (Argentina) and 2010-2014 (Mexico) were analyzed. Each participant was assigned a random sample of 50 of these movies and was asked whether he/she had watched them. The total number of adolescents who had watched each movie in each country was estimated and multiplied by the number of smoking scenes (occurrences) in each movie to obtain the number of gross smoking impressions seen by secondary school adolescents in each country. Four hundred and twenty-two movies were analyzed in Argentina and 433 in Mexico. Exposure to more than 500 million smoking impressions was estimated for adolescents in each country, averaging 128 and 121 minutes of smoking scenes seen by each Argentine and Mexican adolescent, respectively. Although movies rated 15, 16 and 18 had more smoking scenes on average, movies rated for younger teenagers were responsible for the highest number of smoking scenes watched by the students (67.3% in Argentina and 54.4% in Mexico) due to their larger audience. At the population level, movies aimed at children are responsible for the highest tobacco burden seen by adolescents.
The depiction of protective eyewear use in popular television programs.
Glazier, Robert; Slade, Martin; Mayer, Hylton
2011-04-01
Media portrayal of health related activities may influence health related behaviors in adult and pediatric populations. This study characterizes the depiction of protective eyewear use in the scripted television programs most viewed by the age group that sustains the largest proportion of eye injuries. Viewership ratings data were acquired to assemble a list of the 24 most-watched scripted network broadcast programs for the 13-year-old to 45-year-old age group. The six highest average viewership programs that met the exclusion criteria were selected for analysis. Review of 30 episodes revealed a total of 258 exposure scenes in which an individual was engaged in an activity requiring eye protection (mean, 8.3 exposure scenes per episode; median, 5 exposure scenes per episode). Overall, 66 (26%) of exposure scenes depicted the use of any eye protection, while only 32 (12%) of exposure scenes depicted the use of adequate eye protection. No incidences of eye injuries or infectious exposures were depicted within the exposure scenes in the study set. The depiction of adequate protective eyewear use during eye-risk activities is rare in network scripted broadcast programs. Healthcare professionals and health advocacy groups should continue to work to improve public education about eye injury risks and prevention; these efforts could include working with the television industry to improve the accuracy of the depiction of eye injuries and the proper protective eyewear used for prevention of injuries in scripted programming. Future studies are needed to examine the relationship between media depiction of eye protection use and viewer compliance rates.
A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images.
Vázquez, David; Bernal, Jorge; Sánchez, F Javier; Fernández-Esparrach, Gloria; López, Antonio M; Romero, Adriana; Drozdzal, Michal; Courville, Aaron
2017-01-01
Colorectal cancer (CRC) is the third leading cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search of polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS) that help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset and taking advantage of advances in the semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study to show that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.
Combined optimization of image-gathering and image-processing systems for scene feature detection
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Arduini, Robert F.; Samms, Richard W.
1987-01-01
The relationship between the image gathering and image processing systems for minimum mean squared error estimation of scene characteristics is investigated. A stochastic optimization problem is formulated where the objective is to determine a spatial characteristic of the scene rather than a feature of the already blurred, sampled and noisy image data. An analytical solution for the optimal characteristic image processor is developed. The Wiener filter for the sampled image case is obtained as a special case, where the desired characteristic is scene restoration. Optimal edge detection is investigated using the Laplacian-of-Gaussian operator ∇²G as the desired characteristic, where G is a two-dimensional Gaussian distribution function. It is shown that the optimal edge detector compensates for the blurring introduced by the image gathering optics, and notably, that it is not circularly symmetric. The lack of circular symmetry is largely due to the geometric effects of the sampling lattice used in image acquisition. The optimal image gathering optical transfer function is also investigated and the results of a sensitivity analysis are shown.
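The ∇²G characteristic mentioned above can be sketched numerically. The following is a minimal NumPy illustration of a zero-DC Laplacian-of-Gaussian kernel applied to a step edge (kernel size and σ are arbitrary choices here; the paper's optimal, non-circularly-symmetric detector and sampling-lattice effects are not modeled):

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Discrete Laplacian-of-Gaussian kernel, forced to zero mean so
    that flat image regions produce zero response."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    r2 = x**2 + y**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()

def convolve2d(img, kernel):
    """Naive same-size convolution with edge padding (kernel is symmetric,
    so correlation and convolution coincide)."""
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (pad[i:i + kh, j:j + kw] * kernel).sum()
    return out

# A vertical step edge: the LoG response changes sign across the edge
# (a zero crossing), and is zero over the flat regions.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
resp = convolve2d(img, log_kernel())
```

Detecting the zero crossings of `resp` yields the edge locations, which is the classical use of the ∇²G operator.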
Recent advances in exploring the neural underpinnings of auditory scene perception
Snyder, Joel S.; Elhilali, Mounya
2017-01-01
Studies of auditory scene analysis have traditionally relied on paradigms using artificial sounds—and conventional behavioral techniques—to elucidate how we perceptually segregate auditory objects or streams from each other. In the past few decades, however, there has been growing interest in uncovering the neural underpinnings of auditory segregation using human and animal neuroscience techniques, as well as computational modeling. This largely reflects the growth in the fields of cognitive neuroscience and computational neuroscience and has led to new theories of how the auditory system segregates sounds in complex arrays. The current review focuses on neural and computational studies of auditory scene perception published in the past few years. Following the progress that has been made in these studies, we describe (1) theoretical advances in our understanding of the most well-studied aspects of auditory scene perception, namely segregation of sequential patterns of sounds and concurrently presented sounds; (2) the diversification of topics and paradigms that have been investigated; and (3) how new neuroscience techniques (including invasive neurophysiology in awake humans, genotyping, and brain stimulation) have been used in this field. PMID:28199022
Colorimetric consideration of transparencies for a typical LACIE scene
NASA Technical Reports Server (NTRS)
Juday, R. D. (Principal Investigator)
1979-01-01
The production film converter used to produce LACIE imagery is described as well as schemes designed to provide the analyst with operational film products. Two of these products are discussed from the standpoint of color theory. Colorimetric terminology is defined and the mathematical calculations are given. Topics covered include (1) history of product 1 and 3 algorithm development; (2) colorimetric assumptions for product 1 and 3 algorithms; (3) qualitative results from a colorimetric analysis of a typical LACIE scene; and (4) image-to-image color stability.
Kumar, Manoj; Federmeier, Kara D; Fei-Fei, Li; Beck, Diane M
2017-07-15
A long-standing core question in cognitive science is whether different modalities and representation types (pictures, words, sounds, etc.) access a common store of semantic information. Although different input types have been shown to activate a shared network of brain regions, this does not necessitate that there is a common representation, as the neurons in these regions could still differentially process the different modalities. However, multi-voxel pattern analysis can be used to assess whether, e.g., pictures and words evoke a similar pattern of activity, such that the patterns that separate categories in one modality transfer to the other. Prior work using this method has found support for a common code, but has two limitations: they have either only examined disparate categories (e.g. animals vs. tools) that are known to activate different brain regions, raising the possibility that the pattern separation and inferred similarity reflects only large scale differences between the categories or they have been limited to individual object representations. By using natural scene categories, we not only extend the current literature on cross-modal representations beyond objects, but also, because natural scene categories activate a common set of brain regions, we identify a more fine-grained (i.e. higher spatial resolution) common representation. Specifically, we studied picture- and word-based representations of natural scene stimuli from four different categories: beaches, cities, highways, and mountains. Participants passively viewed blocks of either phrases (e.g. "sandy beach") describing scenes or photographs from those same scene categories. To determine whether the phrases and pictures evoke a common code, we asked whether a classifier trained on one stimulus type (e.g. phrase stimuli) would transfer (i.e. cross-decode) to the other stimulus type (e.g. picture stimuli). 
The analysis revealed cross-decoding in the occipitotemporal, posterior parietal and frontal cortices. This similarity of neural activity patterns across the two input types, for categories that co-activate local brain regions, provides strong evidence of a common semantic code for pictures and words in the brain. Copyright © 2017 Elsevier Inc. All rights reserved.
Fire flame detection based on GICA and target tracking
NASA Astrophysics Data System (ADS)
Rong, Jianzhong; Zhou, Dechuang; Yao, Wei; Gao, Wei; Chen, Juan; Wang, Jian
2013-04-01
To improve the video fire detection rate, a robust fire detection algorithm based on the color, motion and pattern characteristics of fire targets was proposed, which achieved a satisfactory detection rate across different fire scenes. In this fire detection algorithm: (a) a rule-based generic color model was developed based on analysis of a large quantity of flame pixels; (b) from the traditional GICA (Geometrical Independent Component Analysis) model, a Cumulative Geometrical Independent Component Analysis (C-GICA) model was developed for motion detection without a static background; and (c) a BP neural network fire recognition model based on multiple features of the fire pattern was developed. Fire detection tests on benchmark fire video clips of different scenes have shown the robustness, accuracy and fast response of the algorithm.
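The color stage of such detectors is often a simple per-pixel rule set. The sketch below uses the classic "red-dominant" rule family as an illustration; the threshold value and the exact rules are assumptions, not the paper's actual generic color model:

```python
import numpy as np

def flame_mask(rgb, r_thresh=190):
    """Candidate flame-pixel mask from a rule-based color test.

    Implements the common 'R above a threshold, with R >= G > B'
    style rule used as the color stage of video fire detectors.
    The threshold of 190 is illustrative only.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > r_thresh) & (r >= g) & (g > b)

frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = (255, 160, 40)   # flame-like pixel: red-dominant
frame[1, 1] = (40, 80, 200)    # sky-like pixel: blue-dominant
mask = flame_mask(frame)
```

In a full pipeline such as the one described, this mask would only gate candidate regions, which are then filtered by the motion (C-GICA) and pattern (BP network) stages.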
ANALYSIS AND REDUCTION OF LANDSAT DATA FOR USE IN A HIGH PLAINS GROUND-WATER FLOW MODEL.
Thelin, Gail; Gaydas, Leonard; Donovan, Walter; Mladinich, Carol
1984-01-01
Data obtained from 59 Landsat scenes were used to estimate the areal extent of irrigated agriculture over the High Plains region of the United States for a ground-water flow model. This model provides information on current trends in the amount and distribution of water used for irrigation. The analysis and reduction process required that each Landsat scene be ratioed, interpreted, and aggregated. Data reduction by aggregation was an efficient technique for handling the volume of data analyzed. This process bypassed problems inherent in geometrically correcting and mosaicking the data at pixel resolution and combined the individual Landsat classification into one comprehensive data set.
NASA Astrophysics Data System (ADS)
Guan, Wen; Li, Li; Jin, Weiqi; Qiu, Su; Zou, Yan
2015-10-01
Extreme-low-light CMOS sensors, a new type of solid-state image sensor, have been widely applied in the field of night vision. But when the illumination in the scene changes drastically or is too strong, an extreme-low-light CMOS sensor cannot clearly render both the highlight and low-light regions of the scene. To address this partial-saturation problem in night vision, an HDR image fusion algorithm based on the Laplacian pyramid was investigated. The overall gray value and contrast of a low-light image are very low. For the top layer of the long-exposure and short-exposure images, which carries rich brightness and textural features, we choose a fusion strategy based on regional average gradient; the remaining layers, which represent the edge feature information of the target, are fused using a strategy based on regional energy. In reconstructing the source image from the Laplacian pyramid, we compare the fusion results against four kinds of basal images. The algorithm is tested in Matlab and compared with different fusion strategies, using three objective evaluation parameters (information entropy, average gradient and standard deviation) for further analysis of the fusion results. Experiments in different low-illumination environments show that the algorithm can rapidly achieve a wide dynamic range while maintaining high entropy. The verified characteristics of the algorithm suggest a further application prospect for the optimized algorithm. Keywords: high dynamic range imaging, image fusion, multi-exposure image, weight coefficient, information fusion, Laplacian pyramid transform.
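The Laplacian-pyramid decomposition underlying this fusion scheme can be sketched in a few lines. This is a minimal NumPy illustration only: a box filter stands in for the usual Gaussian resampling kernel, and the gradient/energy fusion weighting described in the abstract is omitted:

```python
import numpy as np

def downsample(img):
    # 2x2 box-filter decimation (stand-in for Gaussian blur + subsample).
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    # Nearest-neighbour expansion back to the target shape.
    return np.kron(img, np.ones((2, 2)))[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Band-pass residual levels plus a low-pass top level."""
    pyr = []
    for _ in range(levels - 1):
        small = downsample(img)
        pyr.append(img - upsample(small, img.shape))  # detail residual
        img = small
    pyr.append(img)                                    # low-pass top
    return pyr

def reconstruct(pyr):
    """Invert the decomposition exactly: upsample and add residuals."""
    img = pyr[-1]
    for lap in reversed(pyr[:-1]):
        img = upsample(img, lap.shape) + lap
    return img

img = np.arange(64, dtype=float).reshape(8, 8)
pyr = laplacian_pyramid(img, 3)
rec = reconstruct(pyr)
```

In a multi-exposure fusion scheme like the one described, the pyramids of the long and short exposures would be merged level by level (top layer by regional average gradient, other layers by regional energy) before this reconstruction step.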
A FPGA implementation for linearly unmixing a hyperspectral image using OpenCL
NASA Astrophysics Data System (ADS)
Guerra, Raúl; López, Sebastián.; Sarmiento, Roberto
2017-10-01
Hyperspectral imaging systems provide images in which single pixels carry information from across the electromagnetic spectrum of the scene under analysis. These systems divide the spectrum into many contiguous channels, which may even lie outside the visible part of the spectrum. The main advantage of hyperspectral imaging technology is that certain objects leave unique fingerprints in the electromagnetic spectrum, known as spectral signatures, which make it possible to distinguish between different materials that may look the same in a traditional RGB image. Accordingly, the most important hyperspectral imaging applications involve distinguishing or identifying materials in a particular scene. In hyperspectral imaging applications under real-time constraints, the huge amount of information provided by the hyperspectral sensors has to be rapidly processed and analysed. For this purpose, parallel hardware devices such as Field Programmable Gate Arrays (FPGAs) are typically used. However, developing hardware applications typically requires expertise in the specific targeted device, as well as in the tools and methodologies that can be used to implement the desired algorithms on that device. In this scenario, the Open Computing Language (OpenCL) emerges as a very interesting solution, in which a single high-level synthesis design language can be used to efficiently develop applications for multiple and different hardware devices. In this work, the Fast Algorithm for Linearly Unmixing Hyperspectral Images (FUN) has been implemented on a Bitware Stratix V Altera FPGA using OpenCL. The obtained results demonstrate the suitability of OpenCL as a viable design methodology for quickly creating efficient FPGA designs for real-time hyperspectral imaging applications.
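At its core, linear unmixing solves a small least-squares problem per pixel. The sketch below assumes the endmember signatures are already known (the FUN algorithm also extracts them from the image) and makes no claim about the paper's FPGA implementation; the toy signatures are invented for illustration:

```python
import numpy as np

def unmix(pixel, endmembers):
    """Unconstrained linear unmixing of one hyperspectral pixel.

    `endmembers` is a (bands, materials) matrix of pure spectral
    signatures; the pixel is modelled as their linear mixture, and
    least squares recovers the abundance of each material.
    """
    abundances, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return abundances

# Two toy 4-band signatures and a 30/70 mixture of them.
E = np.array([[1.0, 0.0],
              [0.8, 0.2],
              [0.2, 0.8],
              [0.0, 1.0]])
pixel = 0.3 * E[:, 0] + 0.7 * E[:, 1]
a = unmix(pixel, E)
```

Because this per-pixel solve is identical and independent across millions of pixels, it maps naturally onto the massively parallel OpenCL/FPGA pipelines the paper targets.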
A view not to be missed: Salient scene content interferes with cognitive restoration
Van der Jagt, Alexander P. N.; Craig, Tony; Brewer, Mark J.; Pearson, David G.
2017-01-01
Attention Restoration Theory (ART) states that built scenes place greater load on attentional resources than natural scenes. This is explained in terms of "hard" and "soft" fascination of built and natural scenes. Given a lack of direct empirical evidence for this assumption we propose that perceptual saliency of scene content can function as an empirically derived indicator of fascination. Saliency levels were established by measuring speed of scene category detection using a Go/No-Go detection paradigm. Experiment 1 shows that built scenes are more salient than natural scenes. Experiment 2 replicates these findings using greyscale images, ruling out a colour-based response strategy, and additionally shows that built objects in natural scenes affect saliency to a greater extent than the reverse. Experiment 3 demonstrates that the saliency of scene content is directly linked to cognitive restoration using an established restoration paradigm. Overall, these findings demonstrate an important link between the saliency of scene content and related cognitive restoration. PMID:28723975
Chromatic information and feature detection in fast visual analysis
Del Viva, Maria M.; Punzi, Giovanni; Shevell, Steven K.; ...
2016-08-01
The visual system is able to recognize a scene based on a sketch made of very simple features. This ability is likely crucial for survival, when fast image recognition is necessary, and it is believed that a primal sketch is extracted very early in the visual processing. Such highly simplified representations can be sufficient for accurate object discrimination, but an open question is the role played by color in this process. Rich color information is available in natural scenes, yet artists' sketches are usually monochromatic; and black-and-white movies provide compelling representations of real world scenes. Also, the contrast sensitivity of color is low at fine spatial scales. We approach the question from the perspective of optimal information processing by a system endowed with limited computational resources. We show that when such limitations are taken into account, the intrinsic statistical properties of natural scenes imply that the most effective strategy is to ignore fine-scale color features and devote most of the bandwidth to gray-scale information. We find confirmation of these information-based predictions from psychophysical measurements of fast-viewing discrimination of natural scenes. As a result, we conclude that the lack of colored features in our visual representation, and our overall low sensitivity to high-frequency color components, are a consequence of an adaptation process, optimizing the size and power consumption of our brain for the visual world we live in.
NASA Astrophysics Data System (ADS)
Appel, Marius; Lahn, Florian; Buytaert, Wouter; Pebesma, Edzer
2018-04-01
Earth observation (EO) datasets are commonly provided as collections of scenes, where individual scenes represent a temporal snapshot and cover a particular region on the Earth's surface. Using these data in complex spatiotemporal modeling becomes difficult as soon as data volumes exceed a certain capacity or analyses include many scenes, which may spatially overlap and may have been recorded at different dates. In order to facilitate analytics on large EO datasets, we combine and extend the geospatial data abstraction library (GDAL) and the array-based data management and analytics system SciDB. We present an approach to automatically convert collections of scenes to multidimensional arrays and use SciDB to scale computationally intensive analytics. We evaluate the approach in three case studies: national-scale land use change monitoring with Landsat imagery, global empirical orthogonal function analysis of daily precipitation, and combining historical climate model projections with satellite-based observations. Results indicate that the approach can be used to represent various EO datasets and that analyses in SciDB scale well with available computational resources. To simplify analyses of higher-dimensional datasets such as climate model output, however, a generalization of the GDAL data model might be needed. All parts of this work have been implemented as open-source software and we discuss how this may facilitate open and reproducible EO analyses.
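The scene-collection-to-array conversion this abstract describes can be caricatured in a few lines. This is a toy stand-in for the GDAL/SciDB pipeline, not the authors' software: the input format (date, grid offset, tile) and the NaN nodata convention are illustrative assumptions.

```python
import numpy as np

def scenes_to_cube(scenes, grid_shape, nodata=np.nan):
    # scenes: list of (date, (row_off, col_off), 2-D array) covering parts of
    # a common spatial grid. Scenes sharing a date are mosaicked into one
    # temporal slice; the result is a (time, height, width) data cube.
    dates = sorted({d for d, _, _ in scenes})
    index = {d: t for t, d in enumerate(dates)}
    cube = np.full((len(dates),) + grid_shape, nodata)
    for date, (r, c), tile in scenes:
        h, w = tile.shape
        cube[index[date], r:r + h, c:c + w] = tile
    return dates, cube
```

Cells never covered by any scene keep the nodata value, mirroring how overlapping, partially covering scenes become a regular array.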
Cortical feedback signals generalise across different spatial frequencies of feedforward inputs.
Revina, Yulia; Petro, Lucy S; Muckli, Lars
2017-09-22
Visual processing in cortex relies on feedback projections contextualising feedforward information flow. Primary visual cortex (V1) has small receptive fields and processes feedforward information at a fine-grained spatial scale, whereas higher visual areas have larger, spatially invariant receptive fields. Therefore, feedback could provide coarse information about the global scene structure or alternatively recover fine-grained structure by targeting small receptive fields in V1. We tested if feedback signals generalise across different spatial frequencies of feedforward inputs, or if they are tuned to the spatial scale of the visual scene. Using a partial occlusion paradigm, functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) we investigated whether feedback to V1 contains coarse or fine-grained information by manipulating the spatial frequency of the scene surround outside an occluded image portion. We show that feedback transmits both coarse and fine-grained information as it carries information about both low (LSF) and high spatial frequencies (HSF). Further, feedback signals containing LSF information are similar to feedback signals containing HSF information, even without a large overlap in spatial frequency bands of the HSF and LSF scenes. Lastly, we found that feedback carries similar information about the spatial frequency band across different scenes. We conclude that cortical feedback signals contain information which generalises across different spatial frequencies of feedforward inputs. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
One to One: Interpersonal Skills for Managers.
ERIC Educational Resources Information Center
Turner, Colin; Andrews, Philippa
This book explores interpersonal skills for college administrators through analysis of fictional, but typical, scenes and dialogues set at a fictional "Elmdale College". The analysis and discussion use transactional analysis, gestalt psychology, and neuro-linguistic programming theories to help the reader understand the underlying…
A comparison of autonomous techniques for multispectral image analysis and classification
NASA Astrophysics Data System (ADS)
Valdiviezo-N., Juan C.; Urcid, Gonzalo; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso
2012-10-01
Multispectral imaging has given rise to important applications related to the classification and identification of objects in a scene. Because multispectral instruments can be used to estimate the reflectance of materials in the scene, these techniques constitute fundamental tools for materials analysis and quality control. In recent years, a variety of algorithms have been developed to work with multispectral data, whose main purpose is the correct classification of the objects in the scene. The present study introduces a brief review of some classical techniques, as well as a novel one, that have been used for such purposes. The use of principal component analysis and K-means clustering as important classification algorithms is discussed here. Moreover, a recent method based on the min-W and max-M lattice auto-associative memories, originally proposed for endmember determination in hyperspectral imagery, is introduced as a classification method. Besides a discussion of their mathematical foundations, we emphasize their main characteristics and the results achieved for two exemplar images composed of objects similar in appearance but spectrally different. The classification results show that the first components computed by principal component analysis can be used to highlight areas with different spectral characteristics. In addition, the use of lattice auto-associative memories provides good results for materials classification even in cases where their spectral responses are partly similar.
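The two classical techniques reviewed here, principal component analysis followed by K-means clustering of pixel spectra, can be sketched in plain NumPy. This is an illustrative simplification, not the study's implementation: PCA via SVD and Lloyd's algorithm with random initialisation are assumptions, and a production pipeline would use a tested library.

```python
import numpy as np

def pca(pixels, n_components):
    # Project pixel spectra (n_pixels, n_bands) onto the top principal
    # components, computed from the SVD of the mean-centred data.
    centred = pixels - pixels.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_components].T

def kmeans(points, k, iters=50, seed=0):
    # Plain Lloyd's algorithm: assign each point to its nearest centre,
    # then move each centre to the mean of its assigned points.
    rng = np.random.default_rng(seed)
    centres = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = points[labels == j].mean(axis=0)
    return labels
```

On well-separated spectra, the first components already separate the materials, so clustering the PCA scores recovers the two groups.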
Concurrent-scene/alternate-pattern analysis for robust video-based docking systems
NASA Technical Reports Server (NTRS)
Udomkesmalee, Suraphol
1991-01-01
A typical docking target employs a three-point design of retroreflective tape: one at each endpoint of the center-line and one on the tip of the central post. Scenes sensed via laser-diode illumination produce pictures with spots corresponding to the desired reflections from the retroreflectors, along with other reflections. Control corrections for each axis of the vehicle can then be properly applied if the desired spots are accurately tracked. However, initial acquisition of these three spots (the detection and identification problem) is non-trivial in a severe noise environment. Signal-to-noise enhancement, accomplished by subtracting the non-illuminated scene from the target scene illuminated by laser diodes, cannot eliminate every false spot. Hence, minimizing docking failures due to target mistracking suggests including additional processing features pertaining to target locations. In this paper, we present a concurrent processing scheme for a modified docking target scene which could lead to a perfect docking system. Since the non-illuminated target scene is already available, adding another feature to the three-point design by marking two non-reflective lines, one between the two end-points and one from the tip of the central post to the center-line, would allow this line feature to be picked up only when capturing the background scene (sensor data without laser illumination). Therefore, instead of performing image subtraction to generate a picture with a high signal-to-noise ratio, a processed line image based on the robust line-detection technique (the Hough transform) can be fused with the actively sensed three-point target image to deduce the true locations of the docking target. This dual-channel confirmation scheme is necessary if a fail-safe system is to be realized from both the sensing and processing points of view. Detailed algorithms and preliminary results are presented.
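The Hough-transform step can be illustrated with a small accumulator-array sketch. This toy is not the paper's algorithm: it votes over a discretised (rho, theta) space for a set of edge pixels and returns only the single strongest line, with the discretisation choices as assumptions.

```python
import numpy as np

def hough_peak(points, img_shape, n_theta=180):
    # points: (row, col) coordinates of edge pixels. Each point votes for all
    # lines through it, parameterised as rho = col*cos(theta) + row*sin(theta);
    # the accumulator cell with the most votes is the dominant line.
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*img_shape)))   # bound on |rho|
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for r, c in points:
        rho = np.round(c * np.cos(thetas) + r * np.sin(thetas)).astype(int) + diag
        acc[rho, np.arange(n_theta)] += 1
    rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
    return rho_idx - diag, thetas[theta_idx]
```

A horizontal row of pixels, for example, produces a peak at theta near pi/2 with rho equal to the row index.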
NASA Astrophysics Data System (ADS)
Hudson, Douglas J.; Torres, Manuel; Dougherty, Catherine; Rajendran, Natesan; Thompson, Rhoe A.
2003-09-01
The Air Force Research Laboratory (AFRL) Aerothermal Targets Analysis Program (ATAP) is a user-friendly, engineering-level computational tool that features integrated aerodynamics, six-degree-of-freedom (6-DoF) trajectory/motion, convective and radiative heat transfer, and thermal/material response to provide an optimal blend of accuracy and speed for design and analysis applications. ATAP is sponsored by the Kinetic Kill Vehicle Hardware-in-the-Loop Simulator (KHILS) facility at Eglin AFB, where it is used with the CHAMP (Composite Hardbody and Missile Plume) technique for rapid infrared (IR) signature and imagery predictions. ATAP capabilities include an integrated 1-D conduction model for up to 5 in-depth material layers (with options for gaps/voids with radiative heat transfer), fin modeling, several surface ablation modeling options, a materials library with over 250 materials, options for user-defined materials, selectable/definable atmosphere and earth models, multiple trajectory options, and an array of aerodynamic prediction methods. All major code modeling features have been validated with ground-test data from wind tunnels, shock tubes, and ballistics ranges, and flight-test data for both U.S. and foreign strategic and theater systems. Numerous applications include the design and analysis of interceptors, booster and shroud configurations, window environments, tactical missiles, and reentry vehicles.
Does scene context always facilitate retrieval of visual object representations?
Nakashima, Ryoichi; Yokosawa, Kazuhiko
2011-04-01
An object-to-scene binding hypothesis maintains that visual object representations are stored as part of a larger scene representation or scene context, and that scene context facilitates retrieval of object representations (see, e.g., Hollingworth, Journal of Experimental Psychology: Learning, Memory and Cognition, 32, 58-69, 2006). Support for this hypothesis comes from data using an intentional memory task. In the present study, we examined whether scene context always facilitates retrieval of visual object representations. In two experiments, we investigated whether the scene context facilitates retrieval of object representations, using a new paradigm in which a memory task is appended to a repeated-flicker change detection task. Results indicated that in normal scene viewing, in which many simultaneous objects appear, scene context facilitation of the retrieval of object representations-henceforth termed object-to-scene binding-occurred only when the observer was required to retain much information for a task (i.e., an intentional memory task).
Krogh-Jespersen, Sheila; Woodward, Amanda L
2014-01-01
Previous research has shown that young infants perceive others' actions as structured by goals. One open question is whether the recruitment of this understanding when predicting others' actions imposes a cognitive challenge for young infants. The current study explored infants' ability to utilize their knowledge of others' goals to rapidly predict future behavior in complex social environments and distinguish goal-directed actions from other kinds of movements. Fifteen-month-olds (N = 40) viewed videos of an actor engaged in either a goal-directed (grasping) or an ambiguous (brushing the back of her hand) action on a Tobii eye-tracker. At test, critical elements of the scene were changed and infants' predictive fixations were examined to determine whether they relied on goal information to anticipate the actor's future behavior. Results revealed that infants reliably generated goal-based visual predictions for the grasping action, but not for the back-of-hand behavior. Moreover, response latencies were longer for goal-based predictions than for location-based predictions, suggesting that goal-based predictions are cognitively taxing. Analyses of areas of interest indicated that heightened attention to the overall scene, as opposed to specific patterns of attention, was the critical indicator of successful judgments regarding an actor's future goal-directed behavior. These findings shed light on the processes that support "smart" social behavior in infants, as it may be a challenge for young infants to use information about others' intentions to inform rapid predictions.
Portelli, Geoffrey; Barrett, John M; Hilgen, Gerrit; Masquelier, Timothée; Maccione, Alessandro; Di Marco, Stefano; Berdondini, Luca; Kornprobst, Pierre; Sernagor, Evelyne
2016-01-01
How a population of retinal ganglion cells (RGCs) encodes the visual scene remains an open question. Going beyond individual RGC coding strategies, results in salamander suggest that the relative latencies of a RGC pair encode spatial information. Thus, a population code based on this concerted spiking could be a powerful mechanism to transmit visual information rapidly and efficiently. Here, we tested this hypothesis in mouse by recording simultaneous light-evoked responses from hundreds of RGCs, at pan-retinal level, using a new generation of large-scale, high-density multielectrode array consisting of 4096 electrodes. Interestingly, we did not find any RGCs exhibiting a clear latency tuning to the stimuli, suggesting that in mouse, individual RGC pairs may not provide sufficient information. We show that a significant amount of information is encoded synergistically in the concerted spiking of large RGC populations. Thus, the RGC population response described with relative activities, or ranks, provides more relevant information than classical independent spike-count- or latency-based codes. In particular, we report for the first time that when considering the relative activities across the whole population, the wave of first stimulus-evoked spikes is an accurate indicator of stimulus content. We show that this coding strategy coexists with classical neural codes, and that it is more efficient and faster. Overall, these novel observations suggest that already at the level of the retina, concerted spiking provides a reliable and fast strategy to rapidly transmit new visual scenes.
Characterizing the LANDSAT Global Long-Term Data Record
NASA Technical Reports Server (NTRS)
Arvidson, T.; Goward, S. N.; Williams, D. L.
2006-01-01
The effects of global climate change are fast becoming politically, sociologically, and personally important: increasing storm frequency and intensity, lengthening cycles of drought and flood, expanding desertification and soil salinization. A vital asset in the analysis of climate change on a global basis is the 34-year record of Landsat imagery. In recognition of its increasing importance, a detailed analysis of the Landsat observation coverage within the US archive was commissioned. Results to date indicate some unexpected gaps in the US-held archive. Fortunately, throughout the Landsat program, data have been downlinked routinely to International Cooperator (IC) ground stations for archival, processing, and distribution. These IC data could be combined with the current US holdings to build a nearly global, annual observation record over this 34-year period. Today, we have inadequate information as to which scenes are available from which IC archives. Our best estimate is that there are over four million digital scenes in the IC archives, compared with the nearly two million scenes held in the US archive. This vast pool of Landsat observations needs to be accurately documented, via metadata, to determine the existence of complementary scenes and to characterize the potential scope of the global Landsat observation record. Of course, knowing the extent and completeness of the data record is but the first step. It will be necessary to assure that the data record is easy to use, internally consistent in terms of calibration and data format, and fully accessible in order to fully realize its potential.
Tomaszewski, Michał; Ruszczak, Bogdan; Michalski, Paweł
2018-06-01
Electrical insulators are elements of power lines that require periodical diagnostics. Due to their location on the components of high-voltage power lines, their imaging can be cumbersome and time-consuming, especially under varying lighting conditions. Insulator diagnostics with the use of visual methods may require localizing insulators in the scene. Studies focused on insulator localization in the scene apply a number of methods, including: texture analysis, MRF (Markov Random Field), Gabor filters or GLCM (Gray Level Co-Occurrence Matrix) [1], [2]. Some methods, e.g. those which localize insulators based on colour analysis [3], rely on object and scene illumination, which is why the images from the dataset are taken under varying lighting conditions. The dataset may also be used to compare the effectiveness of different methods of localizing insulators in images. This article presents high-resolution images depicting a long rod electrical insulator under varying lighting conditions and against different backgrounds: crops, forest and grass. The dataset contains images with visible laser spots (generated by a device emitting light at the wavelength of 532 nm) and images without such spots, as well as complementary data concerning the illumination level and insulator position in the scene, the number of registered laser spots, and their coordinates in the image. The laser spots may be used to support object-localizing algorithms, while the images without spots may serve as a source of information for those algorithms which do not need spots to localize an insulator.
Auditory Scene Analysis: The Sweet Music of Ambiguity
Pressnitzer, Daniel; Suied, Clara; Shamma, Shihab A.
2011-01-01
In this review paper aimed at the non-specialist, we explore the use that neuroscientists and musicians have made of perceptual illusions based on ambiguity. The pivotal issue is auditory scene analysis (ASA), or what enables us to make sense of complex acoustic mixtures in order to follow, for instance, a single melody in the midst of an orchestra. In general, ASA uncovers the most likely physical causes that account for the waveform collected at the ears. However, the acoustical problem is ill-posed and it must be solved from noisy sensory input. Recently, the neural mechanisms implicated in the transformation of ambiguous sensory information into coherent auditory scenes have been investigated using so-called bistability illusions (where an unchanging ambiguous stimulus evokes a succession of distinct percepts in the mind of the listener). After reviewing some of those studies, we turn to music, which arguably provides some of the most complex acoustic scenes that a human listener will ever encounter. Interestingly, musicians will not always aim at making each physical source intelligible, but rather express one or more melodic lines with a small or large number of instruments. By means of a few musical illustrations and by using a computational model inspired by neuro-physiological principles, we suggest that this relies on a detailed (if perhaps implicit) knowledge of the rules of ASA and of its inherent ambiguity. We then put forward the opinion that some degree of perceptual ambiguity may participate in our appreciation of music. PMID:22174701
Informational analysis for compressive sampling in radar imaging.
Zhang, Jingxiong; Yang, Ke
2015-03-24
Compressive sampling, or compressed sensing (CS), rests on the assumption that the underlying signal is sparse or compressible. It relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, and operates with optimization-based algorithms for signal reconstruction; it is thus able to compress data while acquiring them, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements, and by determining the sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretic oriented CS-radar system analysis and performance evaluation.
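A common CS reconstruction routine of the kind such analyses presuppose is Orthogonal Matching Pursuit. The abstract does not name a specific solver, so the following sketch is illustrative only; the matrix and function names are ours.

```python
import numpy as np

def omp(A, y, sparsity):
    # Orthogonal Matching Pursuit: greedily select the atom (column of A)
    # most correlated with the current residual, then re-fit all selected
    # atoms by least squares. Recovers a sparse scene x from sub-Nyquist
    # measurements y = A @ x when A is well conditioned on sparse supports.
    residual = y.astype(float)
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

With 4 measurements of a 6-dimensional scene having 2 nonzero entries, the sparse scene is recovered exactly in this noiseless setting.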
Emotional and neutral scenes in competition: orienting, efficiency, and identification.
Calvo, Manuel G; Nummenmaa, Lauri; Hyönä, Jukka
2007-12-01
To investigate preferential processing of emotional scenes competing for limited attentional resources with neutral scenes, prime pictures were presented briefly (450 ms), peripherally (5.2 degrees away from fixation), and simultaneously (one emotional and one neutral scene) versus singly. Primes were followed by a mask and a probe for recognition. Hit rate was higher for emotional than for neutral scenes in the dual- but not in the single-prime condition, and A' sensitivity decreased for neutral but not for emotional scenes in the dual-prime condition. This preferential processing involved both selective orienting and efficient encoding, as revealed, respectively, by a higher probability of first fixation on--and shorter saccade latencies to--emotional scenes and by shorter fixation time needed to accurately identify emotional scenes, in comparison with neutral scenes.
Colour agnosia impairs the recognition of natural but not of non-natural scenes.
Nijboer, Tanja C W; Van Der Smagt, Maarten J; Van Zandvoort, Martine J E; De Haan, Edward H F
2007-03-01
Scene recognition can be enhanced by appropriate colour information, yet the level of visual processing at which colour exerts its effects is still unclear. It has been suggested that colour supports low-level sensory processing, while others have claimed that colour information aids semantic categorization and recognition of objects and scenes. We investigated the effect of colour on scene recognition in a case of colour agnosia, M.A.H. In a scene identification task, participants had to name images of natural or non-natural scenes in six different formats. Irrespective of scene format, M.A.H. was much slower on the natural than on the non-natural scenes. As expected, neither M.A.H. nor control participants showed any difference in performance for the non-natural scenes. However, for the natural scenes, appropriate colour facilitated scene recognition in control participants (i.e., shorter reaction times), whereas M.A.H.'s performance did not differ across formats. Our data thus support the hypothesis that the effect of colour occurs at the level of learned associations.
SALGADO, MARÍA V.; PÉREZ, ADRIANA; ABAD-VIVERO, ERIKA N.; THRASHER, JAMES F.; SARGENT, JAMES D.; MEJÍA, RAÚL
2016-01-01
Background Smoking scenes in movies promote adolescent smoking onset; thus, analysis of how many smoking images in movies actually reach adolescents has become a subject of increasing interest. Objective The aim of this study was to estimate the level of exposure to images of smoking in movies watched by adolescents in Argentina and Mexico. Methods First-year secondary school students from Argentina and Mexico were surveyed. The 100 highest-grossing films from each year of the period 2009-2013 (Argentina) and 2010-2014 (Mexico) were analyzed. Each participant was assigned a random sample of 50 of these movies and was asked if he/she had watched them. The total number of adolescents who had watched each movie in each country was estimated and was multiplied by the number of smoking scenes (occurrences) in each movie to obtain the number of gross smoking impressions seen by secondary school adolescents from each country. Results Four hundred and twenty-two movies were analyzed in Argentina and 433 in Mexico. Exposure to more than 500 million smoking impressions was estimated for adolescents in each country, averaging 128 and 121 minutes of smoking scenes seen by each Argentine and Mexican adolescent, respectively. Although movies rated 15, 16 and 18 had more smoking scenes on average, movies rated for younger teenagers were responsible for the highest number of smoking scenes watched by the students (67.3% in Argentina and 54.4% in Mexico) due to their larger audience. Conclusion At the population level, movies aimed at children are responsible for the highest tobacco burden seen by adolescents. PMID:27354756
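The gross-impressions calculation described in the Methods is simple enough to state as code. A minimal sketch; the function and argument names are ours, not the study's.

```python
def gross_impressions(viewers_per_movie, occurrences_per_movie):
    # For each movie, multiply the estimated number of adolescents who
    # watched it by its count of smoking scenes (occurrences), then sum
    # over movies to get total gross smoking impressions.
    return sum(v * o for v, o in zip(viewers_per_movie, occurrences_per_movie))
```

For example, two movies seen by 1000 and 500 adolescents, with 8 and 2 smoking scenes respectively, yield 9000 gross impressions.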
Did limits on payments for tobacco placements in US movies affect how movies are made?
Morgenstern, Matthis; Stoolmiller, Mike; Bergamini, Elaina; Sargent, James D
2017-01-01
To compare how smoking was depicted in Hollywood movies before and after an intervention limiting paid product placement for cigarette brands. Correlational analysis. Top box office hits released in the USA primarily between 1988 and 2011 (n=2134). The Master Settlement Agreement (MSA), implemented in 1998. This study analyses trends for whether or not movies depicted smoking, and among movies with smoking, counts for character smoking scenes and average smoking scene duration. There was no detectable trend for any measure prior to the MSA. In 1999, 79% of movies contained smoking, and movies with smoking contained 8 scenes of character smoking, with the average duration of a character smoking scene being 81 s. After the MSA, there were significant negative post-MSA changes (p<0.05) for linear trends in proportion of movies with any smoking (which declined to 41% by 2011) and, in movies with smoking, counts of character smoking scenes (which declined to 4 by 2011). Between 1999 and 2000, there was an immediate and dramatic drop in average length of a character smoking scene, which decreased to 19 s, and remained there for the duration of the study. The probability that the drop of -62.5 (95% CI -55.1 to -70.0) seconds was due to chance was p<10^-16. This study's correlational data suggest that restricting payments for tobacco product placement coincided with profound changes in the duration of smoking depictions in movies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Anticipatory scene representation in preschool children's recall and recognition memory.
Kreindel, Erica; Intraub, Helene
2017-09-01
Behavioral and neuroscience research on boundary extension (false memory beyond the edges of a view of a scene) has provided new insights into the constructive nature of scene representation, and motivates questions about development. Early research with children (as young as 6-7 years) was consistent with boundary extension, but relied on an analysis of spatial errors in drawings which are open to alternative explanations (e.g. drawing ability). Experiment 1 replicated and extended prior drawing results with 4-5-year-olds and adults. In Experiment 2, a new, forced-choice immediate recognition memory test was implemented with the same children. On each trial, a card (photograph of a simple scene) was immediately replaced by a test card (identical view and either a closer or more wide-angle view) and participants indicated which one matched the original view. Error patterns supported boundary extension; identical photographs were more frequently rejected when the closer view was the original view, than vice versa. This asymmetry was not attributable to a selection bias (guessing tasks; Experiments 3-5). In Experiment 4, working memory load was increased by presenting more expansive views of more complex scenes. Again, children exhibited boundary extension, but now adults did not, unless stimulus duration was reduced to 5 s (limiting time to implement strategies; Experiment 5). We propose that like adults, children interpret photographs as views of places in the world; they extrapolate the anticipated continuation of the scene beyond the view and misattribute it to having been seen. Developmental differences in source attribution decision processes provide an explanation for the age-related differences observed. © 2016 John Wiley & Sons Ltd.
Image Enhancement for Astronomical Scenes
2013-09-01
address this problem in the context of natural scenes. However, these techniques often misbehave when confronted with low-SNR scenes that are also mostly empty space. We compare two classes of
Brockmole, James R; Henderson, John M
2006-07-01
When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.
Greene, Michelle R; Baldassano, Christopher; Fei-Fei, Li; Beck, Diane M; Baker, Chris I
2018-01-01
Inherent correlations between visual and semantic features in real-world scenes make it difficult to determine how different scene properties contribute to neural representations. Here, we assessed the contributions of multiple properties to scene representation by partitioning the variance explained in human behavioral and brain measurements by three feature models whose inter-correlations were minimized a priori through stimulus preselection. Behavioral assessments of scene similarity reflected unique contributions from a functional feature model indicating potential actions in scenes as well as high-level visual features from a deep neural network (DNN). In contrast, similarity of cortical responses in scene-selective areas was uniquely explained by mid- and high-level DNN features only, while an object label model did not contribute uniquely to either domain. The striking dissociation between functional and DNN features in their contribution to behavioral and brain representations of scenes indicates that scene-selective cortex represents only a subset of behaviorally relevant scene information. PMID:29513219
Remembering faces and scenes: The mixed-category advantage in visual working memory.
Jiang, Yuhong V; Remington, Roger W; Asaad, Anthony; Lee, Hyejin J; Mikkalson, Taylor C
2016-09-01
We examined the mixed-category memory advantage for faces and scenes to determine how domain-specific cortical resources constrain visual working memory. Consistent with previous findings, visual working memory for a display of 2 faces and 2 scenes was better than that for a display of 4 faces or 4 scenes. This pattern was unaffected by manipulations of encoding duration. However, the mixed-category advantage was carried solely by faces: Memory for scenes was not better when scenes were encoded with faces rather than with other scenes. The asymmetry between faces and scenes was found when items were presented simultaneously or sequentially, centrally, or peripherally, and when scenes were drawn from a narrow category. A further experiment showed a mixed-category advantage in memory for faces and bodies, but not in memory for scenes and objects. The results suggest that unique category-specific interactions contribute significantly to the mixed-category advantage in visual working memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Neural correlates of contextual cueing are modulated by explicit learning.
Westerberg, Carmen E; Miller, Brennan B; Reber, Paul J; Cohen, Neal J; Paller, Ken A
2011-10-01
Contextual cueing refers to the facilitated ability to locate a particular visual element in a scene due to prior exposure to the same scene. This facilitation is thought to reflect implicit learning, as it typically occurs without the observer's knowledge that scenes repeat. Unlike most other implicit learning effects, contextual cueing can be impaired following damage to the medial temporal lobe. Here we investigated neural correlates of contextual cueing and explicit scene memory in two participant groups. Only one group was explicitly instructed about scene repetition. Participants viewed a sequence of complex scenes that depicted a landscape with five abstract geometric objects. Superimposed on each object was a letter T or L rotated left or right by 90°. Participants responded according to the target letter (T) orientation. Responses were highly accurate for all scenes. Response speeds were faster for repeated versus novel scenes. The magnitude of this contextual cueing did not differ between the two groups. Also, in both groups repeated scenes yielded reduced hemodynamic activation compared with novel scenes in several regions involved in visual perception and attention, and reductions in some of these areas were correlated with response-time facilitation. In the group given instructions about scene repetition, recognition memory for scenes was superior and was accompanied by medial temporal and more anterior activation. Thus, strategic factors can promote explicit memorization of visual scene information, which appears to engage additional neural processing beyond what is required for implicit learning of object configurations and target locations in a scene. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Mayr, Andreas; Rutzinger, Martin; Bremer, Magnus; Geitner, Clemens
2016-06-01
In the Alps, as well as in other mountain regions, steep grassland is frequently affected by shallow erosion. Often, small landslides or snow movements displace the vegetation together with soil and/or unconsolidated material. This results in bare earth surface patches within the grass-covered slope. Close-range and remote sensing techniques are promising for both mapping and monitoring these eroded areas. This is essential for a better geomorphological process understanding, to assess past and recent developments, and to plan mitigation measures. Recent developments in image matching techniques make it feasible to produce high resolution orthophotos and digital elevation models from terrestrial oblique images. In this paper we propose to delineate the boundary of eroded areas for selected scenes of a study area, using close-range photogrammetric data. Striving for an efficient, objective and reproducible workflow for this task, we developed an approach for automated classification of the scenes into the classes grass and eroded. We propose an object-based image analysis (OBIA) workflow which consists of image segmentation and automated threshold selection for classification using the Excess Green Vegetation Index (ExG). The automated workflow is tested with ten different scenes. Compared to a manual classification, grass and eroded areas are classified with an overall accuracy between 90.7% and 95.5%, depending on the scene. The methods proved to be insensitive to differences in illumination of the scenes and greenness of the grass. The proposed workflow reduces user interaction and is transferable to other study areas. We conclude that close-range photogrammetry is a valuable low-cost tool for mapping this type of eroded area in the field with a high level of detail and quality. In future, the output will be used as ground truth for an area-wide mapping of eroded areas in coarser-resolution aerial orthophotos acquired at the same time.
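The classification step described above pairs a vegetation index with an automated threshold. A minimal sketch of that idea, assuming Otsu's method as the automated threshold rule and pixel-level rather than segment-level labelling (the abstract does not specify either detail):

```python
# Illustrative sketch (not the authors' implementation): compute
# ExG = 2G - R - B per pixel, pick a threshold automatically (Otsu's
# method, one common automated choice), and label pixels above it as
# 'grass' and below it as 'eroded'.
import numpy as np

def excess_green(rgb):
    """Excess Green index for an (H, W, 3) RGB array."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    return 2 * g - r - b

def otsu_threshold(values, bins=256):
    """Automated threshold that maximizes between-class variance."""
    hist, edges = np.histogram(values.ravel(), bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                       # pixel count below each cut
    w1 = w0[-1] - w0                           # pixel count above each cut
    cum_mean = np.cumsum(hist * centers)
    m0 = cum_mean / np.maximum(w0, 1)          # mean below cut
    m1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1)  # mean above cut
    return centers[np.argmax(w0 * w1 * (m0 - m1) ** 2)]

# Synthetic scene: green (grass-like) upper half, brownish (bare-earth) lower half.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[:5] = (60, 160, 60)    # ExG = 2*160 - 60 - 60 = 200
img[5:] = (120, 100, 80)   # ExG = 2*100 - 120 - 80 = 0
exg = excess_green(img)
grass_mask = exg > otsu_threshold(exg)
```

In the paper's OBIA workflow the threshold would be applied to segment-level ExG statistics rather than raw pixels, but the index-plus-automated-threshold logic is the same.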
A Method for Assessing Auditory Spatial Analysis in Reverberant Multitalker Environments.
Weller, Tobias; Best, Virginia; Buchholz, Jörg M; Young, Taegan
2016-07-01
Deficits in spatial hearing can have a negative impact on listeners' ability to orient in their environment and follow conversations in noisy backgrounds and may exacerbate the experience of hearing loss as a handicap. However, there are no good tools available for reliably capturing the spatial hearing abilities of listeners in complex acoustic environments containing multiple sounds of interest. The purpose of this study was to explore a new method to measure auditory spatial analysis in a reverberant multitalker scenario. This study was a descriptive case control study. Ten listeners with normal hearing (NH) aged 20-31 yr and 16 listeners with hearing impairment (HI) aged 52-85 yr participated in the study. The latter group had symmetrical sensorineural hearing losses with a four-frequency average hearing loss of 29.7 dB HL. A large reverberant room was simulated using a loudspeaker array in an anechoic chamber. In this simulated room, 96 scenes comprising between one and six concurrent talkers at different locations were generated. Listeners were presented with 45-sec samples of each scene, and were required to count, locate, and identify the gender of all talkers, using a graphical user interface on an iPad. Performance was evaluated in terms of correctly counting the sources and accuracy in localizing their direction. Listeners with NH were able to reliably analyze scenes with up to four simultaneous talkers, while most listeners with hearing loss demonstrated errors even with two talkers at a time. Localization performance decreased in both groups with increasing number of talkers and was significantly poorer in listeners with HI. Overall performance was significantly correlated with hearing loss. This new method appears to be useful for estimating spatial abilities in realistic multitalker scenes. The method is sensitive to the number of sources in the scene, and to effects of sensorineural hearing loss. Further work will be needed to compare this method to more traditional single-source localization tests. American Academy of Audiology.
NASA Technical Reports Server (NTRS)
Cao, Chang-Yong; Blonski, Slawomir; Ryan, Robert; Gasser, Jerry; Zanoni, Vicki
1999-01-01
The verification and validation (V&V) target range developed at Stennis Space Center is a useful test site for the calibration of remote sensing systems. In this paper, we present a simple algorithm for generating synthetic radiance scenes, or digital models, of this target range. The radiation propagation for the target in the solar reflective and thermal infrared spectral regions is modeled using the atmospheric radiative transfer code MODTRAN 4. The at-sensor, in-band radiance and spectral radiance for a given sensor at a given altitude are predicted. Software is developed to generate scenes with different spatial and spectral resolutions using the simulated at-sensor radiance values. The radiometric accuracy of the simulation is evaluated by comparing simulated with AVIRIS-acquired radiance values. The results show that, in general, there is a good match between AVIRIS-measured and MODTRAN-predicted radiance values for the target, although some anomalies exist. Synthetic scenes provide a cost-effective way for in-flight validation of the spatial and radiometric accuracy of the data. Other applications include mission planning, sensor simulation, and trade-off analysis in sensor design.
Situational awareness for unmanned ground vehicles in semi-structured environments
NASA Astrophysics Data System (ADS)
Goodsell, Thomas G.; Snorrason, Magnus; Stevens, Mark R.
2002-07-01
Situational Awareness (SA) is a critical component of effective autonomous vehicles, reducing operator workload and allowing an operator to command multiple vehicles or simultaneously perform other tasks. Our Scene Estimation & Situational Awareness Mapping Engine (SESAME) provides SA for mobile robots in semi-structured scenes, such as parking lots and city streets. SESAME autonomously builds volumetric models for scene analysis. For example, a SESAME-equipped robot can build a low-resolution 3-D model of a row of cars, then approach a specific car and build a high-resolution model from a few stereo snapshots. The model can be used onboard to determine the type of car and locate its license plate, or the model can be segmented out and sent back to an operator who can view it from different viewpoints. As new views of the scene are obtained, the model is updated and changes are tracked (such as cars arriving or departing). Since the robot's position must be accurately known, SESAME also has automated techniques for determining the position and orientation of the camera (and hence, robot) with respect to existing maps. This paper presents an overview of the SESAME architecture and algorithms, including our model generation algorithm.
Color constancy in a scene with bright colors that do not have a fully natural surface appearance.
Fukuda, Kazuho; Uchikawa, Keiji
2014-04-01
Theoretical and experimental approaches have proposed that color constancy involves a correction related to some average of stimulation over the scene, and some of the studies showed that the average gives greater weight to surrounding bright colors. However, in a natural scene, high-luminance elements do not necessarily carry information about the scene illuminant when the luminance is too high for it to appear as a natural object color. The question is how a surrounding color's appearance mode influences its contribution to the degree of color constancy. Here the stimuli were simple geometric patterns, and the luminance of surrounding colors was tested over the range beyond the luminosity threshold. Observers performed perceptual achromatic setting on the test patch in order to measure the degree of color constancy and evaluated the surrounding bright colors' appearance mode. Broadly, our results support the assumption that the visual system counts only the colors in the object-color appearance for color constancy. However, detailed analysis indicated that surrounding colors without a fully natural object-color appearance had some sort of influence on color constancy. Consideration of this contribution of unnatural object color might be important for precise modeling of human color constancy.
MTF Analysis of LANDSAT-4 Thematic Mapper
NASA Technical Reports Server (NTRS)
Schowengerdt, R.
1984-01-01
A research program to measure the LANDSAT 4 Thematic Mapper (TM) modulation transfer function (MTF) is described. Measurement of a satellite sensor's MTF requires the use of a calibrated ground target, i.e., the spatial radiance distribution of the target must be known to a resolution at least four to five times greater than that of the system under test. A small reflective mirror or a dark-light linear pattern such as a line or edge, together with relatively high resolution underflight imagery, is used to calibrate the target. A technique that utilizes an analytical model for the scene spatial frequency power spectrum will be investigated as an alternative to calibration of the scene. The test sites and analysis techniques are also described.
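One standard way an edge target like the one mentioned above is exploited (a sketch of the general edge-based technique, not necessarily this report's exact procedure) is to differentiate the measured edge-spread function into a line-spread function and take its Fourier magnitude:

```python
# Illustrative sketch of 1-D MTF estimation from a high-contrast edge:
# ESF -> (derivative) -> LSF -> (FFT magnitude, normalized) -> MTF.
import numpy as np

def mtf_from_edge(edge_profile):
    lsf = np.diff(edge_profile.astype(float))  # edge-spread -> line-spread
    lsf /= lsf.sum()                           # normalize LSF area to 1
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                        # so that MTF(0) = 1

# Sanity check: an ideal step edge has an impulse LSF, hence a flat MTF.
step = np.concatenate([np.zeros(32), np.ones(32)])
mtf = mtf_from_edge(step)
```

A real sensor blurs the edge, so the measured profile yields an MTF that rolls off with spatial frequency; the roll-off characterizes the system's resolution.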
Smelling time: a neural basis for olfactory scene analysis
Ache, Barry W.; Hein, Andrew M.; Bobkov, Yuriy V.; Principe, Jose C.
2016-01-01
Behavioral evidence from phylogenetically diverse animals and humans suggests that olfaction could be much more involved in interpreting space and time than heretofore imagined by extracting temporal information inherent in the olfactory signal. If this is the case, the olfactory system must have neural mechanisms capable of encoding time at intervals relevant to the turbulent odor world in which many animals live. We review evidence that animals can use populations of rhythmically active or ‘bursting’ olfactory receptor neurons (bORNs) to extract and encode temporal information inherent in natural olfactory signals. We postulate that bORNs represent an unsuspected neural mechanism through which time can be accurately measured, and that ‘smelling time’ completes the requirements for true olfactory scene analysis. PMID:27594700
Landsat-4 MSS and Thematic Mapper data quality and information content analysis
NASA Technical Reports Server (NTRS)
Anuta, P. E.; Bartolucci, L. A.; Dean, M. E.; Lozano, D. F.; Malaret, E.; Mcgillem, C. D.; Valdes, J. A.; Valenzuela, C. R.
1984-01-01
Landsat-4 Thematic Mapper and Multispectral Scanner data were analyzed to obtain information on data quality and information content. Geometric evaluations were performed to test band-to-band registration accuracy. Thematic Mapper overall system resolution was evaluated using scene objects which demonstrated sharp high contrast edge responses. Radiometric evaluation included detector relative calibration, effects of resampling, and coherent noise effects. Information content evaluation was carried out using clustering, principal components, transformed divergence separability measure, and numerous supervised classifiers on data from Iowa and Illinois. A detailed spectral class analysis (multispectral classification) was carried out on data from the Des Moines, IA area to compare the information content of the MSS and TM for a large number of scene classes.
NASA Astrophysics Data System (ADS)
Cunningham, Cindy C.; Peloquin, Tracy D.
1999-02-01
Since late 1996 the Forensic Identification Services Section of the Ontario Provincial Police has been actively involved in state-of-the-art image capture and the processing of video images extracted from crime scene videos. The benefits and problems of this technology for video analysis are discussed. All analysis is being conducted on SUN Microsystems UNIX computers, networked to a digital disk recorder that is used for video capture. The primary advantage of this system over traditional frame grabber technology is reviewed. Examples from actual cases are presented and the successes and limitations of this approach are explored. Suggestions to companies implementing security technology plans for various organizations (banks, stores, restaurants, etc.) will be made. Future directions for this work and new technologies are also discussed.
Scene incongruity and attention.
Mack, Arien; Clarke, Jason; Erol, Muge; Bert, John
2017-02-01
Does scene incongruity (a mismatch between scene gist and a semantically incongruent object) capture attention and lead to conscious perception? We explored this question using 4 different procedures: Inattention (Experiment 1), Scene description (Experiment 2), Change detection (Experiment 3), and Iconic Memory (Experiment 4). We found no differences between scene incongruity and scene congruity in Experiments 1, 2, and 4, although in Experiment 3 change detection was faster for scenes containing an incongruent object. We offer an explanation for why the change detection results differ from the results of the other three experiments. In all four experiments, participants invariably failed to report the incongruity and routinely mis-described it by normalizing the incongruent object. None of the results supports the claim that semantic incongruity within a scene invariably captures attention, and together they provide strong evidence of the dominant role of scene gist in determining what is perceived. Copyright © 2016 Elsevier Inc. All rights reserved.
ERBE Geographic Scene and Monthly Snow Data
NASA Technical Reports Server (NTRS)
Coleman, Lisa H.; Flug, Beth T.; Gupta, Shalini; Kizer, Edward A.; Robbins, John L.
1997-01-01
The Earth Radiation Budget Experiment (ERBE) is a multisatellite system designed to measure the Earth's radiation budget. The ERBE data processing system consists of several software packages or sub-systems, each designed to perform a particular task. The primary task of the Inversion Subsystem is to reduce satellite altitude radiances to fluxes at the top of the Earth's atmosphere. To accomplish this, angular distribution models (ADM's) are required. These ADM's are a function of viewing and solar geometry and of the scene type as determined by the ERBE scene identification algorithm which is a part of the Inversion Subsystem. The Inversion Subsystem utilizes 12 scene types which are determined by the ERBE scene identification algorithm. The scene type is found by combining the most probable cloud cover, which is determined statistically by the scene identification algorithm, with the underlying geographic scene type. This Contractor Report describes how the geographic scene type is determined on a monthly basis.
Bag of Lines (BoL) for Improved Aerial Scene Representation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sridharan, Harini; Cheriyadat, Anil M.
2014-09-22
Feature representation is a key step in automated visual content interpretation. In this letter, we present a robust feature representation technique, referred to as bag of lines (BoL), for high-resolution aerial scenes. The proposed technique involves extracting and compactly representing low-level line primitives from the scene. The compact scene representation is generated by counting the different types of lines representing various linear structures in the scene. Through extensive experiments, we show that the proposed scene representation is invariant to scale changes and scene conditions and can discriminate urban scene categories accurately. We compare the BoL representation with the popular scale-invariant feature transform (SIFT) and Gabor wavelets for their classification and clustering performance on an aerial scene database consisting of images acquired by sensors with different spatial resolutions. The proposed BoL representation outperforms the SIFT- and Gabor-based representations.
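The counting idea behind a bag-of-lines descriptor can be sketched as follows; the orientation/length binning scheme here is an assumption for illustration, not the authors' exact quantization:

```python
# Hypothetical bag-of-lines style descriptor: bin each detected line
# segment by (undirected) orientation and length, and use the bin counts
# as a fixed-length scene feature vector.
import math

def line_histogram(segments, n_orient=8, n_len=4, max_len=100.0):
    """segments: iterable of (x1, y1, x2, y2) line endpoints."""
    hist = [0] * (n_orient * n_len)
    for (x1, y1, x2, y2) in segments:
        angle = math.atan2(y2 - y1, x2 - x1) % math.pi  # fold to [0, pi)
        length = math.hypot(x2 - x1, y2 - y1)
        o = min(int(angle / math.pi * n_orient), n_orient - 1)
        l = min(int(length / max_len * n_len), n_len - 1)
        hist[o * n_len + l] += 1
    return hist

# Two horizontal segments of different lengths and one vertical one.
segs = [(0, 0, 10, 0), (0, 0, 0, 50), (5, 5, 95, 5)]
h = line_histogram(segs)
print(sum(h))  # 3
```

In practice the segments would come from a line detector run on the aerial image, and the histogram would be normalized before being fed to a classifier; the key property is that counts of line types, unlike raw coordinates, are insensitive to scale and scene conditions.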
Updating representations of learned scenes.
Finlay, Cory A; Motes, Michael A; Kozhevnikov, Maria
2007-05-01
Two experiments were designed to compare scene recognition reaction time (RT) and accuracy patterns following observer versus scene movement. In Experiment 1, participants memorized a scene from a single perspective. Then, either the scene was rotated or the participants moved (0°-360°, in 36° increments) around the scene, and participants judged whether the objects' positions had changed. Regardless of whether the scene was rotated or the observer moved, RT increased with greater angular distance between judged and encoded views. In Experiment 2, we varied the delay (0, 6, or 12 s) between scene encoding and locomotion. Regardless of the delay, however, accuracy decreased and RT increased with angular distance. Thus, our data show that observer movement does not necessarily update representations of spatial layouts and raise questions about the effects of duration limitations and encoding points of view on the automatic spatial updating of representations of scenes.
Optic Flow Dominates Visual Scene Polarity in Causing Adaptive Modification of Locomotor Trajectory
NASA Technical Reports Server (NTRS)
Nomura, Y.; Mulavara, A. P.; Richards, J. T.; Brady, R.; Bloomberg, Jacob J.
2005-01-01
Locomotion and posture are influenced and controlled by vestibular, visual and somatosensory information. Optic flow and scene polarity are two characteristics of a visual scene that have been identified as critical in how they affect perceived body orientation and self-motion. The goal of this study was to determine the role of optic flow and visual scene polarity in adaptive modification of locomotor trajectory. Two computer-generated virtual reality scenes were shown to subjects during 20 minutes of treadmill walking. One scene was highly polarized, while the other was composed of objects displayed in a non-polarized fashion. Both virtual scenes depicted constant-rate self-motion equivalent to walking counterclockwise around the perimeter of a room. Subjects performed Stepping Tests blindfolded before and after scene exposure to assess adaptive changes in locomotor trajectory. Subjects showed a significant difference in heading direction, between pre- and post-adaptation stepping tests, when exposed to either scene during treadmill walking. However, there was no significant difference in the subjects' heading direction between the two visual scene polarity conditions. Therefore, it was inferred from these data that optic flow has a greater role than visual polarity in influencing adaptive locomotor function.
Cornelissen, Tim H W; Võ, Melissa L-H
2017-01-01
People have an amazing ability to identify objects and scenes with only a glimpse. How automatic is this scene and object identification? Are scene and object semantics (let alone their semantic congruity) processed to a degree that modulates ongoing gaze behavior, even if they are irrelevant to the task at hand? Objects that do not fit the semantics of the scene (e.g., a toothbrush in an office) are typically fixated longer and more often than objects that are congruent with the scene context. In this study, we overlaid a letter T onto photographs of indoor scenes and instructed participants to search for it. Some of these background images contained scene-incongruent objects. Despite their lack of relevance to the search, we found that participants spent more time in total looking at semantically incongruent compared to congruent objects in the same position of the scene. Subsequent tests of explicit and implicit memory showed that participants did not remember many of the inconsistent objects, and no more of the consistent objects. We argue that when we view natural environments, scene and object relationships are processed obligatorily, such that irrelevant semantic mismatches between scene and object identity can modulate ongoing eye-movement behavior.
Visual search for changes in scenes creates long-term, incidental memory traces.
Utochkin, Igor S; Wolfe, Jeremy M
2018-05-01
Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1-3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a "depth-of-processing" account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.
Neural mechanism for sensing fast motion in dim light.
Li, Ran; Wang, Yi
2013-11-07
Luminance is a fundamental property of visual scenes. A population of neurons in primary visual cortex (V1) is sensitive to uniform luminance. In natural vision, however, the retinal image often changes rapidly, so the luminance signals that visual cells receive are transiently varying. How V1 neurons respond to such luminance changes is unknown. By applying large static uniform stimuli, or grating stimuli alternating at 25 Hz that resemble the rapid luminance changes in the environment, we show that approximately 40% of V1 cells responded to rapid luminance changes of uniform stimuli. Most of them strongly preferred luminance decrements. Importantly, when tested with drifting gratings, the preferred speeds of these cells were significantly higher than those of cells responsive to static grating stimuli but not to uniform stimuli. This responsiveness can be accounted for by preferences for low spatial frequencies and high temporal frequencies. These luminance-sensitive cells subserve the detection of fast motion under conditions of dim illumination.
Kang, Yahui; Cappella, Joseph N; Fishbein, Martin
2009-09-01
This study explored the possible negative impact of a specific ad feature (marijuana scenes) on adolescents' perceptions of ad effectiveness. A secondary data analysis was conducted on adolescents' evaluations of 60 anti-marijuana public service announcements that were part of national and state anti-drug campaigns directed at adolescents. The major finding of the study was that marijuana scenes in anti-marijuana public service announcements negatively affected ad liking and thought valence toward the ads among adolescents who were at higher levels of risk for marijuana use. This negative impact was not reversed in the presence of strong anti-marijuana arguments. The results may be used to partially explain the lack of effectiveness of the anti-drug media campaign. They may also help researchers design more effective anti-marijuana ads by isolating adverse elements in the ads that may elicit boomerang effects in the target population.
Nature and place of crime scene management within forensic sciences.
Crispino, Frank
2008-03-01
This short paper presents the preliminary results of a recent study aimed at appreciating, through an epistemological analysis, the parameters relevant to qualifying forensic science as a science. The reader is invited to reflect upon references within a historical and logical framework which assert that forensic science is based upon two fundamental principles (those of Locard and Kirk). The basis of the assertion that forensic science is indeed a science should be appreciated not only against a single epistemological criterion (such as Popper's falsifiability, raised in the Daubert hearing), but also against the logical frameworks used by the individuals involved (investigator, expert witness and trier of fact) from the crime scene examination to the final interpretation of the evidence. Hence, it can be argued that the management of the crime scene should be integrated into the scientific way of thinking rather than remain a technical discipline, as recently suggested by Harrison.
Short-term Power Load Forecasting Based on Balanced KNN
NASA Astrophysics Data System (ADS)
Lv, Xianlong; Cheng, Xingong; YanShuang; Tang, Yan-mei
2018-03-01
To improve the accuracy of load forecasting, a short-term load forecasting model based on a balanced KNN algorithm is proposed. According to the load characteristics, the massive historical power-load data are divided into scenes by the K-means algorithm. To handle unbalanced load scenes, the balanced KNN algorithm is proposed to classify the scenes accurately, and a locally weighted linear regression algorithm is used to fit and predict the load. Adopting the Apache Hadoop programming framework for cloud computing, the proposed model is parallelized to enhance its ability to deal with massive, high-dimensional data. Analysis of household electricity-consumption data for a residential district was carried out on a 23-node cloud computing cluster, and experimental results show that the forecasting accuracy and execution time of the proposed model are better than those of traditional forecasting algorithms.
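The pipeline this abstract describes (K-means to partition historical load into scenes, a KNN vote to assign a new day to a scene, then locally weighted linear regression within that scene) can be sketched in plain NumPy. This is a minimal single-machine illustration on invented data, not the authors' balanced-KNN or Hadoop implementation:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Crude K-means: partition historical load profiles into k 'scenes'."""
    centers = X[np.linspace(0, len(X) - 1, k, dtype=int)].copy()  # deterministic init
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def knn_scene(x, X, labels, k=3):
    """Plain (unbalanced) KNN vote assigning a new day to the nearest scene."""
    nearest = np.argsort(((X - x) ** 2).sum(-1))[:k]
    return np.bincount(labels[nearest]).argmax()

def loess_predict(x, X, y, tau=1.0):
    """Locally weighted linear regression: fit a line, weighting by distance."""
    w = np.exp(-((X - x) ** 2).sum(-1) / (2 * tau ** 2))
    sw = np.sqrt(w)                       # sqrt-weights for weighted least squares
    A = np.c_[np.ones(len(X)), X]         # design matrix with intercept
    theta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return np.r_[1.0, x] @ theta
```

A forecast for a new day clusters the history once, classifies the day into a scene, and regresses only against that scene's members. The paper's "balanced" refinement reweights the vote when scene sizes are unequal; the plain vote above stands in for it.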
NASA Technical Reports Server (NTRS)
1982-01-01
A project to develop an effective mobility aid for blind pedestrians which acquires consecutive images of the scenes before a moving pedestrian, which locates and identifies the pedestrian's path and potential obstacles in the path, which presents path and obstacle information to the pedestrian, and which operates in real-time is discussed. The mobility aid has three principal components: an image acquisition system, an image interpretation system, and an information presentation system. The image acquisition system consists of a miniature, solid-state TV camera which transforms the scene before the blind pedestrian into an image which can be received by the image interpretation system. The image interpretation system is implemented on a microprocessor which has been programmed to execute real-time feature extraction and scene analysis algorithms for locating and identifying the pedestrian's path and potential obstacles. Identity and location information is presented to the pedestrian by means of tactile coding and machine-generated speech.
Comprehensive Understanding for Vegetated Scene Radiance Relationships
NASA Technical Reports Server (NTRS)
Kimes, D. S.; Deering, D. W.
1984-01-01
The improvement of our fundamental understanding of the dynamics of directional scattering properties of vegetation canopies through analysis of field data and model simulation data is discussed. Directional reflectance distributions spanning the entire exitance hemisphere were measured in two field studies: one using a Mark III 3-band radiometer and one using a rapid-scanning bidirectional field instrument called PARABOLA. Surfaces measured included corn, soybeans, bare soils, grass lawn, orchard grass, alfalfa, cotton row crops, plowed field, annual grassland, stipa grass, hard wheat, salt plain shrubland, and irrigated wheat. Some structural and optical measurements were taken. Field data show unique reflectance distributions ranging from bare soil to complete vegetation canopies. Physical mechanisms causing these trends are proposed based on the scattering properties of soil and vegetation. Soil exhibited a strong backscattering peak toward the Sun. Complete vegetation exhibited a bowl-shaped distribution with the minimum reflectance near nadir. Incomplete vegetation canopies show a shift of the minimum reflectance off nadir in the forward-scattering direction because the scattering properties of both the vegetation and the soil are observed.
Watch your step! A frustrated total internal reflection approach to forensic footwear imaging
NASA Astrophysics Data System (ADS)
Needham, J. A.; Sharp, J. S.
2016-02-01
Forensic image retrieval and processing are vital tools in the fight against crime e.g. during fingerprint capture. However, despite recent advances in machine vision technology and image processing techniques (and contrary to the claims of popular fiction) forensic image retrieval is still widely being performed using outdated practices involving inkpads and paper. Ongoing changes in government policy, increasing crime rates and the reduction of forensic service budgets increasingly require that evidence be gathered and processed more rapidly and efficiently. A consequence of this is that new, low-cost imaging technologies are required to simultaneously increase the quality and throughput of the processing of evidence. This is particularly true in the burgeoning field of forensic footwear analysis, where images of shoe prints are being used to link individuals to crime scenes. Here we describe one such approach based upon frustrated total internal reflection imaging that can be used to acquire images of regions where shoes contact rigid surfaces.
Security Event Recognition for Visual Surveillance
NASA Astrophysics Data System (ADS)
Liao, W.; Yang, C.; Yang, M. Ying; Rosenhahn, B.
2017-05-01
With the rapidly increasing deployment of surveillance cameras, reliable methods for automatically analyzing surveillance video and recognizing special events are demanded by many practical applications. This paper proposes a novel, effective framework for security event analysis in surveillance videos. First, a convolutional neural network (CNN) framework is used to detect objects of interest in the given videos. Second, the owners of the objects are recognized and monitored in real time. If anyone moves an object, that person is verified as being its owner or not. If not, the event is further analyzed and classified into one of two scenes: moving the object away or stealing it. To validate the proposed approach, a new video dataset consisting of various scenarios is constructed for these more complex tasks. For comparison purposes, experiments are also carried out on benchmark databases for abandoned-luggage detection. The experimental results show that the proposed approach outperforms state-of-the-art methods and is effective in recognizing complex security events.
Comparison of Four Saliva Detection Methods to Identify Expectorated Blood Spatter.
Park, Hee-Yeon; Son, Bu-Nam; Seo, Young-Il; Lim, Si-Keun
2015-11-01
Blood spatter analysis is an important step for crime scene reconstruction. The presence of saliva in blood spatter could indicate expectorated blood, which is difficult to distinguish from impact spatter. In this study, four saliva test methods (SALIgAE®, Phadebas® sheet, RSID™-Saliva kit, and starch gel diffusion) were compared to identify the best method for detecting expectorated blood spatter. The RSID™-Saliva kit showed the highest sensitivity even when saliva was mixed with blood, and was not inhibited by the presence of blood. The SALIgAE® test provided easy and rapid results, but the yellow color of a positive reaction was overwhelmed by the red color of the blood. The starch gel diffusion method and the Phadebas® sheet exhibited relatively low sensitivity and the assay took a long time. When using the RSID™-Saliva kit for identifying saliva in blood, results should be read within 10 min. © 2015 American Academy of Forensic Sciences.
16th iWoRiD scientific summary and personal impressions
NASA Astrophysics Data System (ADS)
Vacchi, Andrea
2015-08-01
The development of radiation imaging detectors dedicated to specific leading-edge uses is a rapidly expanding field, drawing strength from a healthy mixture of growing ambitions in basic science and applications in many contiguous fields. The growth of the market for high-technology imaging tools is a relevant motivating element. The flow of very accurate and informative presentations at this conference allowed us to gather a vivid picture of the almost frantic, target-oriented work; the results shown often surpassed the most visionary expectations.
A cost effective hydrogel test kit for pre and post blast trinitrotoluene.
Choodum, Aree; Malathong, Khanitta; NicDaeid, Niamh; Limsakul, Wadcharawadee; Wongniramaikul, Worawit
2016-09-01
A cost effective hydrogel test kit was successfully developed for the detection of pre- and post-blast trinitrotoluene (TNT). A polyvinyl alcohol (PVA) hydrogel matrix was used to entrap the potassium hydroxide (KOH) colourimetric reagent. The easily portable test kit was fabricated in situ in a small tube to which the sample could be added directly. The test kit was used in conjunction with digital image colourimetry (DIC) to demonstrate the rapid quantitative analysis of TNT in a test soil sample. The built-in digital camera of an iPhone was used to capture digital images of the colourimetric products from the test kit. Red-Green-Blue (RGB) colour data from the digital images of TNT standard solutions were used to establish a calibration graph. The validation of the DIC method indicated excellent inter day precision (0.12-3.60%RSD) and accuracy (93-108% relative accuracy). Post-blast soil samples containing TNT were analysed using the test kit and were in good agreement with spectrophotometric analysis. The intensity of the RGB data from the TNT complex deviated by +6.3%, +5.1%, and -4.9% after storage of the test kits in a freezer for 3 months. The test kit was also reusable for up to 12 times with only -5.4%, +0.3%, and +4.0% deviations. The hydrogel test kit was applied in the detection of trace explosive residues at the scene of the recent Bangkok bombing at the Ratchaprasong intersection and produced positive results for TNT demonstrating its operational field application as a rapid and cost effective quantitative tool for explosive residue analysis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
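The calibration procedure described here (RGB intensities of standard solutions regressed against concentration, then inverted to quantify unknowns) amounts to an ordinary least-squares line. The numbers below are invented for illustration and are not the paper's data:

```python
import numpy as np

# Hypothetical standards: TNT concentration (mg/L) vs. red-channel intensity
conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
red = np.array([200.0, 180.0, 160.0, 120.0, 40.0])

# Straight-line calibration: intensity = slope * concentration + intercept
slope, intercept = np.polyfit(conc, red, 1)
r = np.corrcoef(conc, red)[0, 1]  # linearity check (R^2 = r**2 in such work)

def predict_conc(intensity):
    """Invert the calibration to estimate concentration from a measured pixel."""
    return (intensity - intercept) / slope
```

With real samples, `intensity` would come from averaging the channel over a region of the digital image; precision figures like the paper's 0.12-3.60% RSD come from repeating that measurement.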
NASA Astrophysics Data System (ADS)
Gorelick, Noel
2013-04-01
The Google Earth Engine platform is a system designed to enable petabyte-scale, scientific analysis and visualization of geospatial datasets. Earth Engine provides a consolidated environment including a massive data catalog co-located with thousands of computers for analysis. The user-friendly front-end provides a workbench environment to allow interactive data and algorithm development and exploration, and provides a convenient mechanism for scientists to share data, visualizations and analytic algorithms via URLs. The Earth Engine data catalog contains a wide variety of popular, curated datasets, including the world's largest online collection of Landsat scenes (> 2.0M), numerous MODIS collections, and many vector-based data sets. The platform provides a uniform access mechanism to a variety of data types, independent of their bands, projection, bit-depth, resolution, etc., facilitating easy multi-sensor analysis. Additionally, a user is able to add and curate their own data and collections. Using a just-in-time, distributed computation model, Earth Engine can rapidly process enormous quantities of geospatial data. All computation is performed lazily; nothing is computed until it is required, either for output or as input to another step. This model allows real-time feedback and preview during algorithm development, supporting a rapid algorithm development, test, and improvement cycle that scales seamlessly to large-scale production data processing.
Through integration with a variety of other services, Earth Engine is able to bring to bear considerable analytic and technical firepower in a transparent fashion, including: AI-based classification via integration with Google's machine learning infrastructure, publishing and distribution at Google scale through integration with the Google Maps API, Maps Engine and Google Earth, and support for in-the-field activities such as validation, ground-truthing, crowd-sourcing and citizen science through the Android Open Data Kit.
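The lazy evaluation model described above (nothing computed until it is required for output or as input to another step) can be illustrated with a toy deferred-computation graph. The class below is a hypothetical sketch of the idea, not the Earth Engine API:

```python
class Lazy:
    """Deferred-computation node: arithmetic builds a graph; nothing runs
    until compute() is called, and each node is evaluated at most once."""
    def __init__(self, fn, deps=()):
        self.fn, self.deps, self._cache = fn, deps, None
    def compute(self):
        if self._cache is None:  # memoize: evaluate once, reuse thereafter
            self._cache = self.fn(*[d.compute() for d in self.deps])
        return self._cache
    def __add__(self, other):
        return Lazy(lambda a, b: a + b, (self, other))
    def __sub__(self, other):
        return Lazy(lambda a, b: a - b, (self, other))

loaded = []  # records which "bands" were actually read

def band(name, value):
    """Stand-in for loading a band of pixels; logs so laziness is visible."""
    def load():
        loaded.append(name)
        return value
    return Lazy(load)

# Building the expression performs no I/O at all:
diff = band("nir", 0.8) - band("red", 0.1)
```

Only calling `diff.compute()` triggers the two loads; computing it again reuses the cached result, which is the behavior that makes interactive preview of large expressions cheap.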
2012-01-01
Background In research on event-related potentials (ERP) to emotional pictures, greater attention to emotional than neutral stimuli (i.e., motivated attention) is commonly indexed by two difference waves between emotional and neutral stimuli: the early posterior negativity (EPN) and the late positive potential (LPP). Evidence suggests that if attention is directed away from the pictures, then the emotional effects on EPN and LPP are eliminated. However, a few studies have found residual, emotional effects on EPN and LPP. In these studies, pictures were shown at fixation, and picture composition was that of simple figures rather than that of complex scenes. Because figures elicit larger LPP than do scenes, figures might capture and hold attention more strongly than do scenes. Here, we showed negative and neutral pictures of figures and scenes and tested first, whether emotional effects are larger to figures than scenes for both EPN and LPP, and second, whether emotional effects on EPN and LPP are reduced less for unattended figures than scenes. Results Emotional effects on EPN and LPP were larger for figures than scenes. When pictures were unattended, emotional effects on EPN increased for scenes but tended to decrease for figures, whereas emotional effects on LPP decreased similarly for figures and scenes. Conclusions Emotional effects on EPN and LPP were larger for figures than scenes, but these effects did not resist manipulations of attention more strongly for figures than scenes. These findings imply that the emotional content captures attention more strongly for figures than scenes, but that the emotional content does not hold attention more strongly for figures than scenes. PMID:22607397
Are car daytime running lights detrimental to motorcycle conspicuity?
Cavallo, Viola; Pinto, Maria
2012-11-01
For a long time, motorcycles were the only vehicles with daytime running lights (DRLs), but this conspicuity advantage has been questioned due to the rapidly increasing introduction of DRLs on cars as well. The present experiment was designed to assess effects of car DRLs on motorcycle perception in a situation that specifically brought attentional conspicuity to bear. Photographs representing complex urban traffic scenes were displayed briefly (250 ms) to 24 participants who had to detect vulnerable road users (motorcyclists, cyclists, pedestrians) appearing at different locations and distances. Car DRLs hampered motorcycle perception compared to conditions where car lights were not on, especially when the motorcycle was at a greater distance from the observer and when it was located in the central part of the visual scene. Car DRLs also hampered the perception of cyclists and pedestrians. Although the globally positive safety effect of car DRLs is generally acknowledged, our study suggests that more attention should be paid to motorcyclists and other vulnerable road users when introducing car DRLs. Several means of improving motorcycle conspicuity in car DRL environments are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
Agricultural mapping using Support Vector Machine-Based Endmember Extraction (SVM-BEE)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archibald, Richard K; Filippi, Anthony M; Bhaduri, Budhendra L
Extracting endmembers from remotely sensed images of vegetated areas can present difficulties. In this research, we applied a recently developed endmember-extraction algorithm based on Support Vector Machines (SVMs) to the problem of semi-autonomous estimation of vegetation endmembers from a hyperspectral image. This algorithm, referred to as Support Vector Machine-Based Endmember Extraction (SVM-BEE), accurately and rapidly yields a computed representation of hyperspectral data that can accommodate multiple distributions. The number of distributions is identified without prior knowledge, based upon this representation. Prior work established that SVM-BEE is robustly noise-tolerant and can semi-automatically and effectively estimate endmembers; synthetic data and a geologic scene were previously analyzed. Here we compared the efficacies of the SVM-BEE and N-FINDR algorithms in extracting endmembers from a predominantly agricultural scene. SVM-BEE was able to estimate vegetation and other endmembers for all classes in the image, which N-FINDR failed to do. Classifications based on SVM-BEE endmembers were markedly more accurate compared with those based on N-FINDR endmembers.
Li, Xiaohua; Zhang, Zhujun; Tao, Liang
2013-09-15
Triacetone triperoxide (TATP) is relatively easy to make and has been used in various terrorist acts. Early but easy detection of TATP is highly desired. We designed a new type of sensor array for H2O2. The unique CL sensor array was based on membranes of CeO2 nanoparticles, which have an excellent catalytic effect on the luminol-H2O2 CL reaction in alkaline medium. It exhibits a linear range for the detection of H2O2 from 1.0×10⁻⁸ to 5.0×10⁻⁵ M (R² = 0.9991) with a 1 s response time. The detection limit is 1.0×10⁻⁹ M. Notably, the present approach allows CL sensor array assays to be designed in a simpler, more time-saving, longer-lived, higher-throughput, and more economical way than conventional CL sensors, and it is conceptually different from conventional CL sensor assays. The novel sensor array has been successfully applied to the detection of TATP at the scene. Copyright © 2013 Elsevier B.V. All rights reserved.
Computer image generation: Reconfigurability as a strategy in high fidelity space applications
NASA Technical Reports Server (NTRS)
Bartholomew, Michael J.
1989-01-01
The demand for realistic, high fidelity, computer image generation systems to support space simulation is well established. However, as the number and diversity of space applications increase, the complexity and cost of computer image generation systems also increase. One strategy used to harmonize cost with varied requirements is establishment of a reconfigurable image generation system that can be adapted rapidly and easily to meet new and changing requirements. The reconfigurability strategy through the life cycle of system conception, specification, design, implementation, operation, and support for high fidelity computer image generation systems is discussed. The discussion is limited to those issues directly associated with reconfigurability and adaptability of a specialized scene generation system in a multi-faceted space applications environment. Examples and insights gained through the recent development and installation of the Improved Multi-function Scene Generation System at the Johnson Space Center Systems Engineering Simulator are reviewed and compared with current simulator industry practices. The results are clear; the strategy of reconfigurability applied to space simulation requirements provides a viable path to supporting diverse applications with an adaptable computer image generation system.
Analysis of the Influence of Construction Insulation Systems on Public Safety in China
Zhang, Guowei; Zhu, Guoqing; Zhao, Guoxiang
2016-01-01
With the Government of China’s proposed Energy Efficiency Regulations (GB40411-2007), the implementation of external insulation systems will be mandatory in China. The frequent external insulation system fires cause huge numbers of casualties and extensive property damage and have rapidly become a new hot issue in construction evacuation safety in China. This study attempts to reconstruct an actual fire scene and propose a quantitative risk assessment method for upward insulation system fires using thermal analysis tests and large eddy simulations (using the Fire Dynamics Simulator (FDS) software). Firstly, the pyrolysis and combustion characteristics of Extruded polystyrene board (XPS panel), such as ignition temperature, combustion heat, limiting oxygen index, thermogravimetric analysis and thermal radiation analysis were studied experimentally. Based on these experimental data, large eddy simulation was then applied to reconstruct insulation system fires. The results show that upward insulation system fires could be accurately reconstructed by using thermal analysis test and large eddy simulation. The spread of insulation material system fires in the vertical direction is faster than that in the horizontal direction. Moreover, we also find that there is a possibility of flashover in enclosures caused by insulation system fires as the smoke temperature exceeds 600 °C. The simulation methods and experimental results obtained in this paper could provide valuable references for fire evacuation, hazard assessment and fire resistant construction design studies. PMID:27589774
Seek and you shall remember: Scene semantics interact with visual search to build better memories
Draschkow, Dejan; Wolfe, Jeremy M.; Võ, Melissa L.-H.
2014-01-01
Memorizing critical objects and their locations is an essential part of everyday life. In the present study, incidental encoding of objects in naturalistic scenes during search was compared to explicit memorization of those scenes. To investigate if prior knowledge of scene structure influences these two types of encoding differently, we used meaningless arrays of objects as well as objects in real-world, semantically meaningful images. Surprisingly, when participants were asked to recall scenes, their memory performance was markedly better for searched objects than for objects they had explicitly tried to memorize, even though participants in the search condition were not explicitly asked to memorize objects. This finding held true even when objects were observed for an equal amount of time in both conditions. Critically, the recall benefit for searched over memorized objects in scenes was eliminated when objects were presented on uniform, non-scene backgrounds rather than in a full scene context. Thus, scene semantics not only help us search for objects in naturalistic scenes, but appear to produce a representation that supports our memory for those objects beyond intentional memorization. PMID:25015385
An analysis of Landsat-4 Thematic Mapper geometric properties
NASA Technical Reports Server (NTRS)
Walker, R. E.; Zobrist, A. L.; Bryant, N. A.; Gohkman, B.; Friedman, S. Z.; Logan, T. L.
1984-01-01
Landsat-4 Thematic Mapper data of Washington, DC, Harrisburg, PA, and Salton Sea, CA were analyzed to determine geometric integrity and conformity of the data to known earth surface geometry. Several tests were performed. Intraband correlation and interband registration were investigated. No problems were observed in the intraband analysis, and aside from indications of slight misregistration between bands of the primary versus bands of the secondary focal planes, interband registration was well within the specified tolerances. A substantial number of ground control points were found and used to check the images' conformity to the Space Oblique Mercator (SOM) projection of their respective areas. The means of the residual offsets, which included nonprocessing-related measurement errors, were close to the one-pixel level in the two scenes examined. The Harrisburg scene residual mean was 28.38 m (0.95 pixels) with a standard deviation of 19.82 m (0.66 pixels), while the mean and standard deviation for the Salton Sea scene were 40.46 m (1.35 pixels) and 30.57 m (1.02 pixels), respectively. Overall, the data were judged to be of high geometric quality, with errors close to those targeted by the TM sensor design specifications.
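The paired metre/pixel figures above are consistent with the Thematic Mapper's nominal 30 m pixel (e.g. 28.38 m / 30 m ≈ 0.95 pixels). A small helper, with the 30 m figure taken as an assumption, shows how such residual statistics are produced from per-control-point offsets:

```python
import numpy as np

PIXEL_SIZE_M = 30.0  # nominal TM ground sample distance (assumed here)

def residual_stats(dx_m, dy_m, pixel_size=PIXEL_SIZE_M):
    """Mean and standard deviation of control-point residual magnitudes,
    reported both in metres and in pixels."""
    r = np.hypot(dx_m, dy_m)  # residual magnitude per control point
    return {
        "mean_m": r.mean(),
        "std_m": r.std(ddof=1),
        "mean_px": r.mean() / pixel_size,
        "std_px": r.std(ddof=1) / pixel_size,
    }
```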
A novel scene management technology for complex virtual battlefield environment
NASA Astrophysics Data System (ADS)
Sheng, Changchong; Jiang, Libing; Tang, Bo; Tang, Xiaoan
2018-04-01
Efficient scene management in virtual environments is an important topic in real-time computer visualization and has a decisive influence on rendering efficiency. Traditional scene management methods, however, are not suitable for complex virtual battlefield environments. This paper combines the advantages of traditional scene graph technology and spatial data structure methods. Using the idea of separating management from rendering, a loose, object-oriented scene graph structure is established to manage the entity model data in the scene, and a performance-based quad-tree structure is created for traversal and rendering. In addition, a collaborative update relationship between the two structural trees is designed to achieve efficient scene management. Compared with previous scene management methods, this method is more efficient and meets the needs of real-time visualization.
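The rendering-side structure the abstract names (a quad-tree traversed so that entities outside the view are culled) can be sketched briefly. This toy implementation is our own illustration of the idea, not the paper's system:

```python
class QuadTree:
    """Toy quad-tree over a square region: each object is stored in the
    smallest quadrant that fully contains it; queries skip whole quadrants
    that do not overlap the view rectangle (view-frustum culling in 2D)."""
    MAX_DEPTH = 6

    def __init__(self, x, y, size, depth=0):
        self.x, self.y, self.size, self.depth = x, y, size, depth
        self.objects = []      # entities stored at this node
        self.children = None   # four sub-quadrants, created on demand

    def _quadrant(self, ox, oy, osize):
        half = self.size / 2
        corners = [(self.x, self.y), (self.x + half, self.y),
                   (self.x, self.y + half), (self.x + half, self.y + half)]
        for i, (qx, qy) in enumerate(corners):
            if qx <= ox and ox + osize <= qx + half and \
               qy <= oy and oy + osize <= qy + half:
                return i, qx, qy, half
        return None  # object straddles a boundary: keep it at this node

    def insert(self, obj, ox, oy, osize):
        q = None if self.depth >= self.MAX_DEPTH else self._quadrant(ox, oy, osize)
        if q is None:
            self.objects.append((obj, ox, oy, osize))
            return
        i, qx, qy, half = q
        if self.children is None:
            self.children = [None] * 4
        if self.children[i] is None:
            self.children[i] = QuadTree(qx, qy, half, self.depth + 1)
        self.children[i].insert(obj, ox, oy, osize)

    def query(self, vx, vy, vsize, out=None):
        """Collect objects overlapping the view rectangle, culling quadrants."""
        if out is None:
            out = []
        if vx + vsize < self.x or self.x + self.size < vx or \
           vy + vsize < self.y or self.y + self.size < vy:
            return out  # whole quadrant outside the view: rejected in one test
        for obj, ox, oy, osize in self.objects:
            if not (ox + osize < vx or vx + vsize < ox or
                    oy + osize < vy or vy + vsize < oy):
                out.append(obj)
        for c in self.children or []:
            if c is not None:
                c.query(vx, vy, vsize, out)
        return out
```

Each frame, `query` would be called with the camera's footprint; rejecting whole quadrants in a single test is what makes the structure cheap to traverse for large battlefield scenes.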
Getzmann, Stephan; Lewald, Jörg; Falkenstein, Michael
2014-01-01
Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called "cocktail-party" problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments.
PMID:25540608
NASA Technical Reports Server (NTRS)
Farrand, W. H.; Bell, J. F., III; Johnson, J. R.; Squyres, S. W.; Soderblom, J.; Ming, D. W.
2006-01-01
Visible and Near Infrared (VNIR) multispectral observations of rocks made by the Mars Exploration Rover Spirit's Panoramic camera (Pancam) have been analysed using a spectral mixture analysis (SMA) methodology. Scenes have been examined from the Gusev crater plains into the Columbia Hills. Most scenes on the plains and in the Columbia Hills could be modeled as three endmember mixtures of a bright material, rock, and shade. Scenes of rocks disturbed by the rover's Rock Abrasion Tool (RAT) required additional endmembers. In the Columbia Hills there were a number of scenes in which additional rock endmembers were required. The SMA methodology identified relatively dust-free areas on undisturbed rock surfaces, as well as spectrally unique areas on RAT abraded rocks. Spectral parameters from these areas were examined and six spectral classes were identified. These classes are named after a type rock or area and are: Adirondack, Lower West Spur, Clovis, Wishstone, Peace, and Watchtower. These classes are discriminable based, primarily, on near-infrared (NIR) spectral parameters. Clovis and Watchtower class rocks appear more oxidized than Wishstone class rocks and Adirondack basalts based on their having higher 535 nm band depths. Comparison of the spectral parameters of these Gusev crater rocks to parameters of glass-dominated basaltic tuffs indicates correspondence between measurements of Clovis and Watchtower classes, but divergence for the Wishstone class rocks which appear to have a higher fraction of crystalline ferrous iron bearing phases. Despite a high sulfur content, the rock Peace has NIR properties resembling plains basalts.
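The core of linear spectral mixture analysis is modeling each pixel spectrum as a least-squares combination of endmember spectra (here bright material, rock, and shade). The sketch below uses invented endmember values over four hypothetical bands; Pancam's actual filter set is not reproduced:

```python
# Linear spectral unmixing sketch: solve min ||E f - p|| for the endmember
# fractions f of a pixel spectrum p. Endmember spectra are illustrative only.
import numpy as np

# Columns: bright material, rock, shade, over 4 hypothetical bands.
endmembers = np.array([
    [0.55, 0.30, 0.02],
    [0.60, 0.28, 0.02],
    [0.62, 0.35, 0.03],
    [0.65, 0.40, 0.03],
])

def unmix(pixel_spectrum):
    """Return endmember fractions and the RMS fit residual for one pixel."""
    fractions, *_ = np.linalg.lstsq(endmembers, pixel_spectrum, rcond=None)
    residual = pixel_spectrum - endmembers @ fractions
    return fractions, float(np.sqrt(np.mean(residual ** 2)))

# A synthetic pixel that is 30% bright material, 50% rock, 20% shade:
true_f = np.array([0.3, 0.5, 0.2])
pixel = endmembers @ true_f
fractions, rms = unmix(pixel)
```

A large residual after fitting the three standard endmembers is the signal that an additional rock endmember is needed, as in the RAT-abraded and Columbia Hills scenes.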
Image based performance analysis of thermal imagers
NASA Astrophysics Data System (ADS)
Wegner, D.; Repasi, E.
2016-05-01
Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image capturing capability of thermal cameras, in order to enhance the display presentation of the captured scene or of specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially those from different companies, a difficult task (or at least a very time-consuming and expensive one, e.g. requiring a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence mitigation capability is such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around the IR-scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR-camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be, for example, the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test scene selection, and on how to assemble an imaging suite (a set of image sequences) for the analysis of imaging thermal systems containing such black boxes in the image-forming path, are discussed.
Visual wetness perception based on image color statistics.
Sawayama, Masataka; Adelson, Edward H; Nishida, Shin'ya
2017-05-01
Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for the images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with the ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.
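The hue-entropy statistic mentioned above can be sketched as a Shannon entropy over a histogram of per-pixel hues; scenes with many distinct hues score high, near-monochrome scenes score low. The bin count and the toy "scenes" below are illustrative assumptions, not the authors' parameters:

```python
# Hue entropy sketch: Shannon entropy (in bits) of a pixel hue histogram.
import colorsys
import math

def hue_entropy(rgb_pixels, bins=16):
    """Entropy of the hue distribution of a list of (r, g, b) pixels in [0, 1]."""
    counts = [0] * bins
    for r, g, b in rgb_pixels:
        h, _, _ = colorsys.rgb_to_hsv(r, g, b)
        counts[min(int(h * bins), bins - 1)] += 1
    n = len(rgb_pixels)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

# A colourful "scene" (six distinct hues) vs. a nearly monochrome red one:
colourful = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (0, 1, 1), (1, 0, 1)]
monochrome = [(0.8, 0.1, 0.1), (0.9, 0.15, 0.1), (0.85, 0.1, 0.12)] * 2
```

With six pixels in six distinct hue bins, the colourful scene reaches the maximum entropy log2(6) ≈ 2.58 bits, while the red scene stays below 1 bit; this is the kind of contrast the authors exploit to predict where the wetness-enhancing transformation works.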
Hashida, Masahiro; Kamezaki, Ryousuke; Goto, Makoto; Shiraishi, Junji
2017-03-01
The ability to predict hazards in possible situations in a general X-ray examination room, created for Kiken-Yochi training (KYT), is quantified by use of free-response receiver-operating characteristic (FROC) analysis to determine whether the total number of years of clinical experience, involvement in general X-ray examinations, occupation, and training each have an impact on hazard prediction ability. Twenty-three radiological technologists (RTs) (years of experience: 2-28), four nurses (years of experience: 15-19), and six RT students observed 53 scenes of KYT: 26 scenes with hazardous points (points that might cause injury to patients) and 27 scenes without such points. Based on the results of these observations, we calculated the alternative free-response receiver-operating characteristic (AFROC) curve and the figure of merit (FOM) to quantify hazard prediction ability. The results showed that the total number of years of clinical experience did not have any impact on hazard prediction ability, whereas recent experience with general X-ray examinations greatly influenced this ability. In addition, the hazard prediction ability varied depending on the occupations of the observers while they were observing the same scenes in KYT. The hazard prediction ability of the radiologic technology students improved after they had undergone patient safety training. This proposed method with an FROC observer study enabled the quantification and evaluation of hazard prediction capability, and the application of this approach to clinical practice may help to ensure the safety of examinations and treatment in the radiology department.
Talving, Peep; Pålstedt, Joakim; Riddez, Louis
2005-01-01
Few previous studies have been conducted on the prehospital management of hypotensive trauma patients in Stockholm County. The aim of this study was to describe the prehospital management of hypotensive trauma patients admitted to the largest trauma center in Sweden, and to assess whether prehospital trauma life support (PHTLS) guidelines have been implemented regarding prehospital time intervals and fluid therapy. In addition, the effects of age, type of injury, injury severity, prehospital time interval, blood pressure, and fluid therapy on outcome were investigated. This is a retrospective, descriptive study on consecutive, hypotensive trauma patients (systolic blood pressure ≤ 90 mmHg on the scene of injury) admitted to Karolinska University Hospital in Stockholm, Sweden, during 2001-2003. The reported values are medians with interquartile ranges. Basic demographics, prehospital time intervals and interventions, injury severity scores (ISS), type and volumes of prehospital fluid resuscitation, and 30-day mortality were abstracted. The effects of the patient's age, gender, prehospital time interval, type of injury, injury severity, on-scene and emergency department blood pressure, and resuscitation fluid volumes on mortality were analyzed using the exact logistic regression model. In 102 (71 male) adult patients (age ≥ 15 years) recruited, the median age was 35.5 years (range: 27-55 years) and 77 patients (75%) had suffered blunt injury. The predominant trauma mechanisms were falls between levels (24%) and motor vehicle crashes (22%) with an ISS of 28.5 (range: 16-50). The on-scene time interval was 19 minutes (range: 12-24 minutes). Fluid therapy was initiated at the scene of injury in the majority of patients (73%) regardless of the type of injury (77 blunt [75%] / 25 penetrating [25%]) or injury severity (ISS: 0-20; 21-40; 41-75).
Age (odds ratio (OR) = 1.04), male gender (OR = 3.2), ISS 21-40 (OR = 13.6), and ISS >40 (OR = 43.6) were the significant factors affecting outcome in the exact logistic regression analysis. The time interval at the scene of injury exceeded PHTLS guidelines. The vast majority of the hypotensive trauma patients were fluid-resuscitated on-scene regardless of the type, mechanism, or severity of injury. A predefined fluid resuscitation regimen is not employed in hypotensive trauma victims with different types of injuries. The outcome was worsened by male gender, progressive age, and ISS > 20 in the exact multiple regression analysis.
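For readers less familiar with the reported effect sizes, an odds ratio such as OR = 3.2 for male gender is the cross-product ratio of a 2x2 outcome table, and a per-unit OR such as 1.04 for age compounds multiplicatively. The counts below are invented for illustration; the study reports only the fitted ORs:

```python
# Odds ratio sketch with a hypothetical 2x2 exposure-by-outcome table.
def odds_ratio(exposed_dead, exposed_alive, unexposed_dead, unexposed_alive):
    """Cross-product ratio (a*d)/(b*c) of a 2x2 table."""
    return (exposed_dead * unexposed_alive) / (exposed_alive * unexposed_dead)

# Hypothetical counts: 16 of 71 male patients died vs. 4 of 31 female patients.
or_male = odds_ratio(16, 55, 4, 27)

# A per-year OR (e.g. the reported 1.04 for age) compounds over time:
decade_effect = 1.04 ** 10   # odds multiplier over 10 years of age
```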
IR characteristic simulation of city scenes based on radiosity model
NASA Astrophysics Data System (ADS)
Xiong, Xixian; Zhou, Fugen; Bai, Xiangzhi; Yu, Xiyu
2013-09-01
Reliable modeling of thermal infrared (IR) signatures of real-world city scenes is required for signature management of civil and military platforms. Traditional modeling methods generally assume that scene objects are individual entities during the physical processes occurring in the infrared range. In reality, however, the physical scene involves convective and conductive interactions between objects as well as radiative interactions between objects. A method based on a radiosity model, which describes these complex effects, has been developed to enable an accurate simulation of the radiance distribution of city scenes. Firstly, the physical processes affecting the IR characteristic of city scenes were described. Secondly, heat balance equations were formed by combining the atmospheric conditions, shadow maps, and the geometry of the scene. Finally, a finite difference method was used to calculate the kinetic temperature of object surfaces. A radiosity model was introduced to describe the scattering of radiation between surface elements in the scene. By synthesizing the radiance distribution of objects in the infrared range, the IR characteristic of the scene is obtained. Real infrared images and model predictions are shown and compared. The results demonstrate that this method can realistically simulate the IR characteristic of city scenes; it effectively displays infrared shadow effects and the radiative interactions between objects in city scenes.
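The radiosity balance underlying such a model says that each surface element's radiosity equals its own emission plus reflected contributions from all other elements, B = E + diag(rho) F B, where F holds the form factors. A minimal iterative solve is sketched below; the form factors, emissions, and reflectivities are toy values, not the paper's:

```python
# Radiosity sketch: solve B = E + diag(rho) F B by fixed-point iteration.
import numpy as np

def solve_radiosity(E, rho, F, iters=200):
    """Iterate B <- E + rho * (F @ B); converges when the reflected fraction < 1."""
    B = E.copy()
    for _ in range(iters):
        B = E + rho * (F @ B)
    return B

E = np.array([10.0, 0.0, 0.0])       # only element 0 emits (toy values)
rho = np.array([0.5, 0.8, 0.3])      # surface reflectivities
F = np.array([[0.0, 0.4, 0.2],       # form factors; each row sums to <= 1
              [0.4, 0.0, 0.3],
              [0.2, 0.3, 0.0]])
B = solve_radiosity(E, rho, F)
```

Because every surface both receives and re-radiates, even the non-emitting elements end up with positive radiosity, which is exactly the interaction effect the abstract says object-by-object models miss.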
Scene-Based Contextual Cueing in Pigeons
Wasserman, Edward A.; Teng, Yuejia; Brooks, Daniel I.
2014-01-01
Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of such contextual cueing. Pigeons had to peck a target which could appear in one of four locations on color photographs of real-world scenes. On half of the trials, each of four scenes was consistently paired with one of four possible target locations; on the other half of the trials, each of four different scenes was randomly paired with the same four possible target locations. In Experiments 1 and 2, pigeons exhibited robust contextual cueing when the context preceded the target by 1 s to 8 s, with reaction times to the target being shorter on predictive-scene trials than on random-scene trials. Pigeons also responded more frequently during the delay on predictive-scene trials than on random-scene trials; indeed, during the delay on predictive-scene trials, pigeons predominately pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. In Experiment 3, involving left-right and top-bottom scene reversals, pigeons exhibited stronger control by global than by local scene cues. These results attest to the robustness and associative basis of contextual cueing in pigeons. PMID:25546098
ERIC Educational Resources Information Center
Henderson, John M.; Nuthmann, Antje; Luke, Steven G.
2013-01-01
Recent research on eye movements during scene viewing has primarily focused on where the eyes fixate. But eye fixations also differ in their durations. Here we investigated whether fixation durations in scene viewing are under the direct and immediate control of the current visual input. Subjects freely viewed photographs of scenes in preparation…
Initial Scene Representations Facilitate Eye Movement Guidance in Visual Search
ERIC Educational Resources Information Center
Castelhano, Monica S.; Henderson, John M.
2007-01-01
What role does the initial glimpse of a scene play in subsequent eye movement guidance? In 4 experiments, a brief scene preview was followed by object search through the scene via a small moving window that was tied to fixation position. Experiment 1 demonstrated that the scene preview resulted in more efficient eye movements compared with a…
Iconic memory for the gist of natural scenes.
Clarke, Jason; Mack, Arien
2014-11-01
Does iconic memory contain the gist of multiple scenes? Three experiments were conducted. In the first, four scenes from different basic-level categories were briefly presented in one of two conditions: a cue or a no-cue condition. The cue condition was designed to provide an index of the contents of iconic memory of the display. Subjects were more sensitive to scene gist in the cue condition than in the no-cue condition. In the second, the scenes came from the same basic-level category. We found no difference in sensitivity between the two conditions. In the third, six scenes from different basic level categories were presented in the visual periphery. Subjects were more sensitive to scene gist in the cue condition. These results suggest that scene gist is contained in iconic memory even in the visual periphery; however, iconic representations are not sufficiently detailed to distinguish between scenes coming from the same category. Copyright © 2014 Elsevier Inc. All rights reserved.
How many pixels make a memory? Picture memory for small pictures.
Wolfe, Jeremy M; Kuzmova, Yoana I
2011-06-01
Torralba (Visual Neuroscience, 26, 123-131, 2009) showed that, if the resolution of images of scenes were reduced to the information present in very small "thumbnail images," those scenes could still be recognized. The objects in those degraded scenes could be identified, even though it would be impossible to identify them if they were removed from the scene context. Can tiny and/or degraded scenes be remembered, or are they like brief presentations, identified but not remembered? We report that memory for tiny and degraded scenes parallels the recognizability of those scenes. You can remember a scene to approximately the degree to which you can classify it. Interestingly, there is a striking asymmetry in memory when scenes are not the same size on their initial appearance and subsequent test. Memory for a large, full-resolution stimulus can be tested with a small, degraded stimulus. However, memory for a small stimulus is not retrieved when it is tested with a large stimulus.
A multi-temporal analysis approach for land cover mapping in support of nuclear incident response
NASA Astrophysics Data System (ADS)
Sah, Shagan; van Aardt, Jan A. N.; McKeown, Donald M.; Messinger, David W.
2012-06-01
Remote sensing can be used to rapidly generate land use maps for assisting emergency response personnel with resource deployment decisions and impact assessments. In this study we focus on constructing accurate land cover maps to map the impacted area in the case of a nuclear material release. The proposed methodology involves integration of results from two different approaches to increase classification accuracy. The data used included RapidEye scenes over Nine Mile Point Nuclear Power Station (Oswego, NY). The first step was building a coarse-scale land cover map from freely available, high temporal resolution, MODIS data using a time-series approach. In the case of a nuclear accident, high spatial resolution commercial satellites such as RapidEye or IKONOS can acquire images of the affected area. Land use maps from the two image sources were integrated using a probability-based approach. Classification results were obtained for four land classes - forest, urban, water and vegetation - using Euclidean and Mahalanobis distances as metrics. Despite the coarse resolution of MODIS pixels, acceptable accuracies were obtained using time series features. The overall accuracies using the fusion-based approach were in the neighborhood of 80%, when compared with GIS data sets from New York State. The classifications were improved by this fused approach, which offers supplementary advantages such as correction for cloud cover and independence from time of year. We concluded that this method can generate highly accurate land cover maps from coarse spatial resolution time series satellite imagery and a single date, high spatial resolution, multi-spectral image.
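Minimum-distance classification with the two metrics named above can be sketched as follows. The class statistics are invented; the study's four classes were forest, urban, water, and vegetation, but only two are shown here:

```python
# Minimum-distance classification sketch: Euclidean vs. Mahalanobis metrics.
import numpy as np

def mahalanobis(x, mean, cov):
    """Distance that whitens by the class covariance."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def euclidean(x, mean, cov=None):
    """Plain distance to the class mean (covariance ignored)."""
    return float(np.linalg.norm(x - mean))

def classify(x, classes, metric):
    """Assign x to the class whose mean is nearest under the given metric."""
    return min(classes, key=lambda c: metric(x, classes[c]["mean"], classes[c]["cov"]))

classes = {   # invented 2-band class statistics
    "water":  {"mean": np.array([0.05, 0.02]),
               "cov":  np.array([[0.01, 0.0], [0.0, 0.01]])},
    "forest": {"mean": np.array([0.10, 0.40]),
               "cov":  np.array([[0.01, 0.0], [0.0, 0.09]])},
}
pixel = np.array([0.08, 0.20])
label_e = classify(pixel, classes, euclidean)
label_m = classify(pixel, classes, mahalanobis)
```

The example pixel is deliberately chosen so the two metrics disagree: Euclidean distance favours the tight water cluster, while Mahalanobis distance, which accounts for forest's large band-2 variance, assigns the pixel to forest.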
Background Characterization Techniques For Pattern Recognition Applications
NASA Astrophysics Data System (ADS)
Noah, Meg A.; Noah, Paul V.; Schroeder, John W.; Kessler, Bernard V.; Chernick, Julian A.
1989-08-01
The Department of Defense has a requirement to investigate technologies for the detection of air and ground vehicles in a clutter environment. The use of autonomous systems using infrared, visible, and millimeter wave detectors has the potential to meet DOD's needs. In general, however, the hardware technology (large detector arrays with high sensitivity) has outpaced the development of processing techniques and software. In a complex background scene the "problem" is as much one of clutter rejection as it is target detection. The work described in this paper has investigated a new, and innovative, methodology for background clutter characterization, target detection and target identification. The approach uses multivariate statistical analysis to evaluate a set of image metrics applied to infrared cloud imagery and terrain clutter scenes. The techniques are applied to two distinct problems: the characterization of atmospheric water vapor cloud scenes for the Navy's Infrared Search and Track (IRST) applications to support the Infrared Modeling Measurement and Analysis Program (IRAMMP); and the detection of ground vehicles for the Army's Autonomous Homing Munitions (AHM) problems. This work was sponsored under two separate Small Business Innovative Research (SBIR) programs by the Naval Surface Warfare Center (NSWC), White Oak, MD, and the Army Materiel Systems Analysis Activity at Aberdeen Proving Ground, MD. The software described in this paper will be available from the respective contract technical representatives.
NASA Technical Reports Server (NTRS)
1982-01-01
Model II Multispectral Camera is an advanced aerial camera that provides optimum enhancement of a scene by recording spectral signatures of ground objects only in narrow, preselected bands of the electromagnetic spectrum. Its photos have applications in such areas as agriculture, forestry, water pollution investigations, soil analysis, geologic exploration, water depth studies and camouflage detection. The target scene is simultaneously photographed in four separate spectral bands. Using a multispectral viewer such as their Model 75, Spectral Data creates a color image from the black and white positives taken by the camera. With this optical image analysis unit, all four bands are superimposed in accurate registration and illuminated with combinations of blue, green, red, and white light. The best color combination for displaying the target object is selected and printed. Spectral Data Corporation produces several types of remote sensing equipment and also provides aerial survey, image processing and analysis, and a number of other remote sensing services.
Multiple scene attitude estimator performance for LANDSAT-1
NASA Technical Reports Server (NTRS)
Rifman, S. S.; Monuki, A. T.; Shortwell, C. P.
1979-01-01
Initial results are presented to demonstrate the performance of a linear sequential estimator (Kalman filter) used to estimate a LANDSAT 1 spacecraft attitude time series defined for four scenes. With the revised estimator, a GCP-poor scene - a scene with no usable geodetic control points (GCPs) - can be rectified to higher accuracies than otherwise possible, based on the use of GCPs in adjacent scenes. Attitude estimation errors were determined by the use of GCPs located in the GCP-poor test scene, but which were not used to update the Kalman filter. Initial results indicate that errors of 500 m (rms) can be attained for GCP-poor scenes. Operational factors are related to various scenarios.
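The idea of carrying a filtered estimate through a scene that contributes no measurements can be sketched with a scalar random-walk Kalman filter. All noise parameters and "measurements" below are illustrative, not the LANDSAT estimator's actual model:

```python
# Scalar Kalman filter sketch: GCP measurements in neighbouring scenes update
# a slowly drifting attitude angle; None marks a GCP-poor scene.
def kalman_1d(measurements, q=1e-4, r=0.01, x0=0.0, p0=1.0):
    """Random-walk Kalman filter; returns the per-scene state estimates."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                       # predict: process noise inflates variance
        if z is not None:            # update only when a GCP measurement exists
            k = p / (p + r)          # Kalman gain
            x += k * (z - x)
            p *= (1 - k)
        estimates.append(x)
    return estimates

# Four scenes; the third is "GCP-poor" and contributes no measurement:
est = kalman_1d([0.10, 0.12, None, 0.11])
```

The estimate for the GCP-poor third scene is simply the prediction carried over from the second; its variance grows by the process noise, and the next scene's GCPs pull the state back.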
Tulsa Oklahoma Oktoberfest Tent Collapse Report
Deal, Kelly E.; Synovitz, Carolyn K.; Goodloe, Jeffrey M.; King, Brandi; Stewart, Charles E.
2012-01-01
Background. On October 17, 2007, a severe weather event collapsed two large tents and several smaller tents causing 23 injuries requiring evacuation to emergency departments in Tulsa, OK. Methods. This paper is a retrospective analysis of the regional health system's response to this event. Data from the Tulsa Fire Department, The Emergency Medical Services Authority (EMSA), receiving hospitals and coordinating services were reviewed and analyzed. EMS patient care reports were reviewed and analyzed using triage designators assigned in the field, injury severity scores, and critical mortality. Results. EMTs and paramedics from Tulsa Fire Department and EMSA provided care at the scene under unified incident command. Of the 23 patients transported by EMS, four were hospitalized, one with critical spinal injury and one with critical head injury. One patient is still in ongoing rehabilitation. Discussion. Analysis of the 2007 Tulsa Oktoberfest mass casualty incident revealed rapid police/fire/EMS response despite the challenges of operating in the dark under severe weather conditions and the need to treat a significant number of injured victims. There were no fatalities. Of the patients transported by EMS, a minority sustained critical injuries, with most sustaining injuries amenable to discharge after emergency department care. PMID:22649732
NASA Technical Reports Server (NTRS)
Wharton, S. W.
1980-01-01
An Interactive Cluster Analysis Procedure (ICAP) was developed to derive classifier training statistics from remotely sensed data. The algorithm interfaces the rapid numerical processing capacity of a computer with the human ability to integrate qualitative information. Control of the clustering process alternates between the algorithm, which creates new centroids and forms clusters, and the analyst, who evaluates the clusters and may elect to modify the cluster structure. Clusters can be deleted or lumped pairwise, or new centroids can be added. A summary of the cluster statistics can be requested to facilitate cluster manipulation. The ICAP was implemented in APL (A Programming Language), an interactive computer language. The flexibility of the algorithm was evaluated using data from different LANDSAT scenes to simulate two situations: one in which the analyst is assumed to have no prior knowledge about the data and wishes to have the clusters formed more or less automatically, and the other in which the analyst is assumed to have some knowledge about the data structure and wishes to use that information to closely supervise the clustering process. For comparison, an existing clustering method was also applied to the two data sets.
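The alternation the abstract describes, machine assignment followed by analyst actions such as lumping a pair of clusters, can be sketched as below. Names and data are illustrative, and the original was written in APL, not Python:

```python
# ICAP-style loop sketch: the machine assigns points to nearest centroids;
# the analyst may then lump two clusters before the next assignment pass.
import math

def assign(points, centroids):
    """Map each centroid index to the list of points nearest to it."""
    clusters = {i: [] for i in range(len(centroids))}
    for p in points:
        nearest = min(clusters, key=lambda i: math.dist(p, centroids[i]))
        clusters[nearest].append(p)
    return clusters

def lump(centroids, i, j):
    """Analyst action: merge centroids i and j into their midpoint."""
    merged = tuple((a + b) / 2 for a, b in zip(centroids[i], centroids[j]))
    return [c for k, c in enumerate(centroids) if k not in (i, j)] + [merged]

points = [(0, 0), (1, 0), (0, 1), (9, 9), (10, 9), (5, 5)]
centroids = [(0, 0), (10, 9), (5, 5)]
clusters = assign(points, centroids)          # machine pass
centroids2 = lump(centroids, 0, 2)            # analyst lumps two left clusters
clusters2 = assign(points, centroids2)        # machine pass after the edit
```

Deleting a cluster or adding a centroid would be analogous one-line list edits between assignment passes, which is what makes the interactive scheme simple to drive.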
Application of Visual Attention in Seismic Attribute Analysis
NASA Astrophysics Data System (ADS)
He, M.; Gu, H.; Wang, F.
2016-12-01
It has been shown that seismic attributes can be used to predict reservoir properties. The combination of multi-attribute analysis with geological statistics, data mining, and artificial intelligence has further promoted the development of seismic attribute analysis. However, existing methods tend to suffer from non-unique solutions and insufficient generalization ability, mainly due to the complex relationship between seismic data and geological information, and partly due to the methods applied. Visual attention is a mechanism model of the human visual system that can rapidly concentrate on a few significant visual objects, even in a cluttered scene; the model exhibits good ability in target detection and recognition. In our study, the targets to be predicted are treated as visual objects, and an object representation based on well data is built in the attribute dimensions. Then, in the same attribute space, this representation serves as a criterion for searching for potential targets away from the wells. This method need not predict properties by building a complicated relation between attributes and reservoir properties, but instead works with reference to the standard determined beforehand. It therefore has good generalization ability, and the problem of multiple solutions can be mitigated by defining a threshold of similarity.
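The search-by-similarity step can be sketched as thresholded matching in attribute space: build a reference vector from well data, then flag traces whose attribute vectors are sufficiently similar. The attribute vectors, the cosine metric, and the threshold are all illustrative assumptions; the abstract does not specify the similarity measure used:

```python
# Attribute-space similarity search sketch: flag traces within a similarity
# threshold of a well-based reference vector.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def find_candidates(reference, traces, threshold=0.95):
    """Indices of traces at least `threshold`-similar to the reference."""
    return [i for i, t in enumerate(traces)
            if cosine_similarity(reference, t) >= threshold]

reference = [0.8, 0.3, 0.5]            # attribute vector at the well
traces = [[0.82, 0.28, 0.52],          # similar: candidate target
          [0.10, 0.90, 0.20],          # dissimilar: rejected
          [1.60, 0.60, 1.00]]          # scaled copy of the reference
hits = find_candidates(reference, traces)
```

Raising or lowering `threshold` is the knob that trades off missed targets against false positives, which is how a similarity threshold narrows the space of admissible solutions.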
Color visual simulation applications at the Defense Mapping Agency
NASA Astrophysics Data System (ADS)
Simley, J. D.
1984-09-01
The Defense Mapping Agency (DMA) produces the Digital Landmass System data base to provide culture and terrain data in support of numerous aircraft simulators. In order to conduct data base and simulation quality control and requirements analysis, DMA has developed the Sensor Image Simulator which can rapidly generate visual and radar static scene digital simulations. The use of color in visual simulation allows the clear portrayal of both landcover and terrain data, whereas the initial black and white capabilities were restricted in this role and thus found limited use. Color visual simulation has many uses in analysis to help determine the applicability of current and prototype data structures to better meet user requirements. Color visual simulation is also significant in quality control since anomalies can be more easily detected in natural appearing forms of the data. The realism and efficiency possible with advanced processing and display technology, along with accurate data, make color visual simulation a highly effective medium in the presentation of geographic information. As a result, digital visual simulation is finding increased potential as a special purpose cartographic product. These applications are discussed and related simulation examples are presented.
Grant, Ashleigh; Wilkinson, T J; Holman, Derek R; Martin, Michael C
2005-09-01
Analysis of fingerprints has predominantly focused on matching the pattern of ridges to a specific person as a form of identification. The present work focuses on identifying extrinsic materials that are left within a person's fingerprint after recent handling of such materials. Specifically, we employed infrared spectromicroscopy to locate and positively identify microscopic particles from a mixture of common materials in the latent human fingerprints of volunteer subjects. We were able to find and correctly identify all test substances based on their unique infrared spectral signatures. Spectral imaging is demonstrated as a method for automating recognition of specific substances in a fingerprint. We also demonstrate the use of attenuated total reflectance (ATR) and synchrotron-based infrared spectromicroscopy for obtaining high-quality spectra from particles that were too thick or too small, respectively, for reflection/absorption measurements. We believe the application of this rapid, nondestructive analytical technique to the forensic study of latent human fingerprints has the potential to add a new layer of information available to investigators. Using fingerprints to not only identify who was present at a crime scene, but also to link who was handling key materials, will be a powerful investigative tool.
When Does Repeated Search in Scenes Involve Memory? Looking at versus Looking for Objects in Scenes
ERIC Educational Resources Information Center
Vo, Melissa L. -H.; Wolfe, Jeremy M.
2012-01-01
One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in the same scene, search performance remained…
Effects of memory colour on colour constancy for unknown coloured objects.
Granzier, Jeroen J M; Gegenfurtner, Karl R
2012-01-01
The perception of an object's colour remains constant despite large variations in the chromaticity of the illumination; this is known as colour constancy. Hering suggested that memory colours, the typical colours of objects, could help in estimating the illuminant's colour and could therefore be an important factor in establishing colour constancy. Here we test whether the presence of objects with diagnostic colours (fruits, vegetables, etc.) within a scene influences colour constancy for unknown coloured objects in the scene. Subjects matched one of four Munsell papers placed in a scene illuminated by either a reddish or a greenish lamp with the Munsell book of colour illuminated by a neutral lamp. The Munsell papers were embedded in four different scenes: one containing diagnostically coloured objects, one containing incongruently coloured objects, a third containing geometrical objects of the same colours as the diagnostically coloured objects, and one containing non-diagnostically coloured objects (e.g., a yellow coffee mug). All objects were placed against a black background. Colour constancy was on average significantly higher for the scene containing the diagnostically coloured objects than for the other scenes tested. We conclude that the colours of familiar objects help in obtaining colour constancy for unknown objects.
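The abstract reports colour constancy being "higher" for some scenes, which implies a quantitative index. A minimal sketch of one standard way such an index is computed (the specific coordinates and the assumption of a 2-D chromaticity space are illustrative; the study's own metric is not given in the abstract): the observer's match is compared against the match predicted under perfect constancy and the match predicted with no illuminant correction.

```python
import math

def constancy_index(match, perfect, no_constancy):
    """Colour constancy index in a 2-D chromaticity space:
    1.0 = perfect constancy, 0.0 = no correction for the illuminant."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    denom = dist(no_constancy, perfect)
    if denom == 0:
        raise ValueError("degenerate reference points")
    return 1.0 - dist(match, perfect) / denom

# Hypothetical chromaticity coordinates (not from the study)
perfect  = (0.30, 0.32)  # match predicted under full constancy
no_const = (0.38, 0.36)  # match predicted with no illuminant correction
observer = (0.32, 0.33)  # observer's actual match
print(round(constancy_index(observer, perfect, no_const), 2))  # 0.75
```

An index near 1 means the observer discounted the illuminant almost completely; comparing mean indices across the four scene types is then an ordinary between-condition comparison.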
Parallel programming of saccades during natural scene viewing: evidence from eye movement positions.
Wu, Esther X W; Gilani, Syed Omer; van Boxtel, Jeroen J A; Amihai, Ido; Chua, Fook Kee; Yen, Shih-Cheng
2013-10-24
Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms) moved to the center only one saccade later. Our data on eye movement positions provide novel evidence of parallel programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis.
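The analysis above hinges on splitting post-transition saccades by their onset time relative to the transition, with 100 ms as the boundary. A minimal sketch of that grouping step (the function name and the sample onset times are illustrative assumptions, not the study's data):

```python
def classify_saccades(onsets_ms, cutoff=100):
    """Split saccade onset times (ms after the scene transition) into the two
    groups described in the study: 'early' saccades (< cutoff), presumed to
    have been programmed in parallel before the transition, and 'late'
    saccades (>= cutoff), which went straight to the scene centre."""
    early = [t for t in onsets_ms if t < cutoff]
    late = [t for t in onsets_ms if t >= cutoff]
    return early, late

early, late = classify_saccades([42, 87, 120, 250, 99, 101])
print(early)  # [42, 87, 99]
print(late)   # [120, 250, 101]
```

The study's inference then compares landing positions between the two groups: if early saccades were already programmed before the transition, their targets should reflect the old scene rather than the new one.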
Smith, Tim J; Mital, Parag K
2013-07-17
Does viewing task influence gaze during dynamic scene viewing? Research into the factors influencing gaze allocation during free viewing of dynamic scenes has reported that the gaze of multiple viewers clusters around points of high motion (attentional synchrony), suggesting that gaze may be primarily under exogenous control. However, the influence of viewing task on gaze behavior in static scenes and during real-world interaction has been widely demonstrated. To dissociate exogenous from endogenous factors during dynamic scene viewing, we tracked participants' eye movements while they (a) freely watched unedited videos of real-world scenes (free viewing) or (b) quickly identified where the video was filmed (spot-the-location). Static scenes were also presented as controls for scene dynamics. Free viewing of dynamic scenes showed greater attentional synchrony, longer fixations, and more gaze to people and to areas of high flicker compared with static scenes. These differences were minimized by the viewing task. In comparison with free viewing of dynamic scenes, during the spot-the-location task fixation durations were shorter, saccade amplitudes were longer, and gaze exhibited less attentional synchrony and was biased away from areas of flicker and from people. These results suggest that the viewing task can have a significant influence on gaze during a dynamic scene, but that endogenous control is slow to take hold: initial saccades default toward the screen center, areas of high motion, and people before shifting to task-relevant features. This default-like viewing behavior returns after the viewing task is completed, confirming that gaze behavior is more predictable during free viewing of dynamic scenes than of static scenes, though this may be due to the natural correlation between regions of interest (e.g., people) and motion.
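The attentional synchrony compared across conditions above is a measure of how tightly viewers' gaze positions cluster on a given frame. One simple, commonly used operationalization is the mean pairwise distance between viewers' gaze points, where smaller values indicate tighter clustering (the function name and toy coordinates are illustrative assumptions; the study's exact metric is not given in the abstract):

```python
import itertools
import math

def mean_pairwise_distance(gaze_points):
    """Mean Euclidean distance between every pair of viewers' gaze
    positions on one frame. Smaller = tighter clustering = higher
    attentional synchrony."""
    pairs = list(itertools.combinations(gaze_points, 2))
    if not pairs:
        return 0.0
    return sum(math.dist(p, q) for p, q in pairs) / len(pairs)

# Hypothetical gaze positions (pixels) for three viewers on one frame
clustered = [(400, 300), (405, 295), (398, 310)]  # all near one spot
scattered = [(100, 100), (500, 120), (300, 450)]  # spread across the screen
print(mean_pairwise_distance(clustered) < mean_pairwise_distance(scattered))  # True
```

Computing this per frame and averaging over a clip yields one synchrony value per viewing condition, which is the kind of quantity the free-viewing and spot-the-location conditions can then be compared on.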