Sample records for actual scene note

  1. A comparison of viewer reactions to outdoor scenes and photographs of those scenes

    Treesearch

    Shafer, Elwood, Jr.; Richards, Thomas A.

    1974-01-01

    A color-slide projection or photograph can be used to determine reactions to an actual scene if the presentation adequately includes most of the elements in the scene. Eight kinds of scenes were subjected to three different types of presentation: (A) viewing the actual scenes, (B) viewing color slides of the scenes, and (C) viewing color photographs of the scenes. For...

  2. iss01e5107

    NASA Image and Video Library

    2000-12-01

    ISS01-E-5107 (December 2000) --- This nadir view of a Chilean glaciated area was provided by one of the early December digital still camera images downlinked from the International Space Station (ISS) to ground controllers in Houston. The remote headwaters of the Rio de la Colonia are located on the eastern flank of the Cerro Pared Norte, a high, coastal range of the Andes in southern Chile. This is but a portion of a larger glaciated region of the Chilean coast located at only 47 degrees south latitude. The river actually begins its flow just off the top of this scene at the foot of the two large, converging, valley glaciers near the center. Some of the numerous lakes visible are tinted by the fine glacial sediments suspended in their waters. Note the shards of ice that have calved from the glaciers into the lakes on the left. Also note the shadows of the crest of the over 14,000-foot mountains (lower center).

  3. Guilty by his fibers: suspect confession versus textile fibers reconstructed simulation.

    PubMed

    Suzuki, Shinichi; Higashikawa, Yoshiyasu; Sugita, Ritsuko; Suzuki, Yasuhiro

    2009-08-10

    In one particular criminal case involving murder and theft, the arrested suspect admitted to the theft, but denied responsibility for the murder of the inhabitant of the crime scene. In his confession, the suspect stated that he found the victim's body when he broke into the crime scene to commit theft. For this report, the actual crime scene was reconstructed in accordance with the confession obtained during the interrogation of the suspect, and the suspect's behavior was simulated accordingly. The number of characteristic fibers retrieved from the simulated crime scene was compared with those retrieved from the actual crime scene. By comparing the distribution and number of characteristic fibers collected in the simulation experiments and the actual investigation, the reliability of the suspect's confession was evaluated. The characteristic dark yellowish-green woolen fibers of the garment that the suspect wore when he entered the crime scene were selected as the target fiber in the reconstruction. The experimental simulations were conducted four times. The distributed target fibers were retrieved using the same type of adhesive tape and the same protocol by the same police officers who conducted the retrieval of the fibers at the actual crime scene. The fibers were identified both through morphological observation and by color comparisons of their ultraviolet-visible transmittance spectra measured with a microspectrophotometer. The fibers collected with the adhesive tape were counted for each area to compare with those collected in the actual crime scene investigation. The numbers of fibers found at each area of the body, mattress and blankets were compared between the simulated experiments and the actual investigation, and a significant difference was found. In particular, the numbers of fibers found near the victim's head were significantly different. As a result, the suspect's confession was not considered to be reliable, as a stronger contact with the victim was demonstrated by our simulations. During the control trial, traditional forensic traces such as DNA and fingerprints provided no information regarding the suspect's statements. In contrast, the fiber evidence was highly informative in explaining the suspect's behavior at the crime scene. The fiber results and simulations were presented in court, and the man was subsequently found guilty not only of theft and trespassing but also of murder.

  4. Navigating the auditory scene: an expert role for the hippocampus.

    PubMed

    Teki, Sundeep; Kumar, Sukhbinder; von Kriegstein, Katharina; Stewart, Lauren; Lyness, C Rebecca; Moore, Brian C J; Capleton, Brian; Griffiths, Timothy D

    2012-08-29

    Over a typical career piano tuners spend tens of thousands of hours exploring a specialized acoustic environment. Tuning requires accurate perception and adjustment of beats in two-note chords that serve as a navigational device to move between points in previously learned acoustic scenes. It is a two-stage process that depends on the following: first, selective listening to beats within frequency windows, and, second, the subsequent use of those beats to navigate through a complex soundscape. The neuroanatomical substrates underlying brain specialization for such fundamental organization of sound scenes are unknown. Here, we demonstrate that professional piano tuners are significantly better than controls matched for age and musical ability on a psychophysical task simulating active listening to beats within frequency windows that is based on amplitude modulation rate discrimination. Tuners show a categorical increase in gray matter volume in the right frontal operculum and right superior temporal lobe. Tuners also show a striking enhancement of gray matter volume in the anterior hippocampus, parahippocampal gyrus, and superior temporal gyrus, and an increase in white matter volume in the posterior hippocampus as a function of years of tuning experience. The relationship with gray matter volume is sensitive to years of tuning experience and starting age but not actual age or level of musicality. Our findings support a role for a core set of regions in the hippocampus and superior temporal cortex in skilled exploration of complex sound scenes in which precise sound "templates" are encoded and consolidated into memory over time in an experience-dependent manner.

  5. Constructing, Perceiving, and Maintaining Scenes: Hippocampal Activity and Connectivity

    PubMed Central

    Zeidman, Peter; Mullally, Sinéad L.; Maguire, Eleanor A.

    2015-01-01

    In recent years, evidence has accumulated to suggest the hippocampus plays a role beyond memory. A strong hippocampal response to scenes has been noted, and patients with bilateral hippocampal damage cannot vividly recall scenes from their past or construct scenes in their imagination. There is debate about whether the hippocampus is involved in the online processing of scenes independent of memory. Here, we investigated the hippocampal response to visually perceiving scenes, constructing scenes in the imagination, and maintaining scenes in working memory. We found extensive hippocampal activation for perceiving scenes, and a circumscribed area of anterior medial hippocampus common to perception and construction. There was significantly less hippocampal activity for maintaining scenes in working memory. We also explored the functional connectivity of the anterior medial hippocampus and found significantly stronger connectivity with a distributed set of brain areas during scene construction compared with scene perception. These results increase our knowledge of the hippocampus by identifying a subregion commonly engaged by scenes, whether perceived or constructed, by separating scene construction from working memory, and by revealing the functional network underlying scene construction, offering new insights into why patients with hippocampal lesions cannot construct scenes. PMID:25405941

  6. A validation of ground ambulance pre-hospital times modeled using geographic information systems.

    PubMed

    Patel, Alka B; Waters, Nigel M; Blanchard, Ian E; Doig, Christopher J; Ghali, William A

    2012-10-03

    Evaluating geographic access to health services often requires determining the patient travel time to a specified service. For urgent care, many research studies have modeled patient pre-hospital time by ground emergency medical services (EMS) using geographic information systems (GIS). The purpose of this study was to determine if the modeling assumptions proposed through prior United States (US) studies are valid in a non-US context, and to use the resulting information to provide revised recommendations for modeling travel time using GIS in the absence of actual EMS trip data. The study sample contained all emergency adult patient trips within the Calgary area for 2006. Each record included four components of pre-hospital time (activation, response, on-scene and transport interval). The actual activation and on-scene intervals were compared with those used in published models. The transport interval was calculated within GIS using the Network Analyst extension of Esri ArcGIS 10.0 and the response interval was derived using previously established methods. These GIS derived transport and response intervals were compared with the actual times using descriptive methods. We used the information acquired through the analysis of the EMS trip data to create an updated model that could be used to estimate travel time in the absence of actual EMS trip records. There were 29,765 complete EMS records for scene locations inside the city and 529 outside. The actual median on-scene intervals were longer than the average previously reported by 7-8 minutes. Actual EMS pre-hospital times across our study area were significantly higher than the estimated times modeled using GIS and the original travel time assumptions. Our updated model, although still underestimating the total pre-hospital time, more accurately represents the true pre-hospital time in our study area. The widespread use of generalized EMS pre-hospital time assumptions based on US data may not be appropriate in a non-US context. The preference for researchers should be to use actual EMS trip records from the proposed research study area. In the absence of EMS trip data researchers should determine which modeling assumptions more accurately reflect the EMS protocols across their study area.
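
    A minimal sketch of the time model described above, purely for illustration and not taken from the study: total pre-hospital (alarm-to-door) time is the sum of the four intervals named in the abstract, with the transport interval standing in for a GIS-derived network travel time. The function name and the example values are hypothetical placeholders.

      # Illustrative only: the interval names come from the abstract, the numbers do not.
      def prehospital_time_minutes(activation, response, on_scene, transport):
          """Total pre-hospital (alarm-to-door) time in minutes.

          activation -- call receipt to ambulance dispatch
          response   -- dispatch to arrival on scene
          on_scene   -- time spent at the scene
          transport  -- scene to hospital (e.g., a GIS network travel time)
          """
          return activation + response + on_scene + transport

      # Example with made-up numbers:
      print(prehospital_time_minutes(activation=2.0, response=7.0,
                                     on_scene=20.0, transport=12.0))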

  7. Visual memory for moving scenes.

    PubMed

    DeLucia, Patricia R; Maldia, Maria M

    2006-02-01

    In the present study, memory for picture boundaries was measured with scenes that simulated self-motion along the depth axis. The results indicated that boundary extension (a distortion in memory for picture boundaries) occurred with moving scenes in the same manner as that reported previously for static scenes. Furthermore, motion affected memory for the boundaries but this effect of motion was not consistent with representational momentum of the self (memory being further forward in a motion trajectory than actually shown). We also found that memory for the final position of the depicted self in a moving scene was influenced by properties of the optical expansion pattern. The results are consistent with a conceptual framework in which the mechanisms that underlie boundary extension and representational momentum (a) process different information and (b) both contribute to the integration of successive views of a scene while the scene is changing.

  8. Voluntary Complications

    ERIC Educational Resources Information Center

    Tribbensee, Nancy

    2008-01-01

    In a student production of "Dracula" at Texas A&M University some years ago, the final scene was exceptionally dramatic. One actor stabbed another, who was playing the vampire, in the chest with a real knife. A volunteer director from the community, who was assisting the drama club, had decided that the scene required the actual weapon, not the…

  9. How do visual and postural cues combine for self-tilt perception during slow pitch rotations?

    PubMed

    Scotto Di Cesare, C; Buloup, F; Mestre, D R; Bringoux, L

    2014-11-01

    Self-orientation perception relies on the integration of multiple sensory inputs which convey spatially-related visual and postural cues. In the present study, an experimental set-up was used to tilt the body and/or the visual scene to investigate how these postural and visual cues are integrated for self-tilt perception (the subjective sensation of being tilted). Participants were required to repeatedly rate a confidence level for self-tilt perception during slow (0.05°·s⁻¹) body and/or visual scene pitch tilts up to 19° relative to vertical. Concurrently, subjects also had to perform arm reaching movements toward a body-fixed target at certain specific angles of tilt. While performance of a concurrent motor task did not influence the main perceptual task, self-tilt detection did vary according to the visuo-postural stimuli. Slow forward or backward tilts of the visual scene alone did not induce a marked sensation of self-tilt, in contrast to actual body tilt. However, combined body and visual scene tilt influenced self-tilt perception more strongly, although this effect was dependent on the direction of visual scene tilt: only a forward visual scene tilt combined with a forward body tilt facilitated self-tilt detection. In such a case, visual scene tilt did not seem to induce vection but rather may have produced a deviation of the perceived orientation of the longitudinal body axis in the forward direction, which may have lowered the self-tilt detection threshold during actual forward body tilt. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Writing a Movie.

    ERIC Educational Resources Information Center

    Hoffner, Helen

    2003-01-01

    Explains a reading and writing assignment called "Writing a Movie" in which students view a short film segment and write a script in which they describe the scene. Notes that this assignment uses films to develop fluency and helps students understand the reading and writing connections. Concludes that students learn to summarize a scene from film,…

  11. A validation of ground ambulance pre-hospital times modeled using geographic information systems

    PubMed Central

    2012-01-01

    Background Evaluating geographic access to health services often requires determining the patient travel time to a specified service. For urgent care, many research studies have modeled patient pre-hospital time by ground emergency medical services (EMS) using geographic information systems (GIS). The purpose of this study was to determine if the modeling assumptions proposed through prior United States (US) studies are valid in a non-US context, and to use the resulting information to provide revised recommendations for modeling travel time using GIS in the absence of actual EMS trip data. Methods The study sample contained all emergency adult patient trips within the Calgary area for 2006. Each record included four components of pre-hospital time (activation, response, on-scene and transport interval). The actual activation and on-scene intervals were compared with those used in published models. The transport interval was calculated within GIS using the Network Analyst extension of Esri ArcGIS 10.0 and the response interval was derived using previously established methods. These GIS derived transport and response intervals were compared with the actual times using descriptive methods. We used the information acquired through the analysis of the EMS trip data to create an updated model that could be used to estimate travel time in the absence of actual EMS trip records. Results There were 29,765 complete EMS records for scene locations inside the city and 529 outside. The actual median on-scene intervals were longer than the average previously reported by 7–8 minutes. Actual EMS pre-hospital times across our study area were significantly higher than the estimated times modeled using GIS and the original travel time assumptions. Our updated model, although still underestimating the total pre-hospital time, more accurately represents the true pre-hospital time in our study area. Conclusions The widespread use of generalized EMS pre-hospital time assumptions based on US data may not be appropriate in a non-US context. The preference for researchers should be to use actual EMS trip records from the proposed research study area. In the absence of EMS trip data researchers should determine which modeling assumptions more accurately reflect the EMS protocols across their study area. PMID:23033894

  12. The cognitive structural approach for image restoration

    NASA Astrophysics Data System (ADS)

    Mardare, Igor; Perju, Veacheslav; Casasent, David

    2008-03-01

    The important and topical problem of restoring defective images of scenes is analyzed. The proposed approach restores scenes with a system that reproduces human-intelligence phenomena for the restoration and recognition of images. Cognitive models of the restoration process are elaborated. The models are realized by intellectual processors constructed on the basis of neural networks and associative memory, using the NNToolbox neural network simulator from MATLAB 7.0. The models provide restoration and semantic construction of scene images from defective images of the separate objects.

  13. Color Constancy in Two-Dimensional and Three-Dimensional Scenes: Effects of Viewing Methods and Surface Texture.

    PubMed

    Morimoto, Takuma; Mizokami, Yoko; Yaguchi, Hirohisa; Buck, Steven L

    2017-01-01

    There has been debate about how and why color constancy may be better in three-dimensional (3-D) scenes than in two-dimensional (2-D) scenes. Although some studies have shown better color constancy for 3-D conditions, the role of specific cues remains unclear. In this study, we compared color constancy for a 3-D miniature room (a real scene consisting of actual objects) and 2-D still images of that room presented on a monitor using three viewing methods: binocular viewing, monocular viewing, and head movement. We found that color constancy was better for the 3-D room; however, color constancy for the 2-D image improved when the viewing method caused the scene to be perceived more like a 3-D scene. Separate measurements of the perceptual 3-D effect of each viewing method also supported these results. An additional experiment comparing a miniature room and its image with and without texture suggested that surface texture of scene objects contributes to color constancy.

  14. Astronauts Gardner and Allen during loading of Palapa B-2 in payload bay

    NASA Image and Video Library

    1984-11-12

    51A-41-058 (12 November 1984) --- Astronaut Joseph P. Allen IV appears to be lifting weights. Astronaut Dale A. Gardner holding on. Actually, Dr. Allen is the sole anchor for the top portion (and most of) the captured Palapa B-2 communications satellite during the Nov. 12 retrieval extravehicular activity (EVA) of the two mission specialists. This scene came near the end of the long-duration task. Gardner used a torque wrench to tighten clamps on an adapter used to secure the Palapa to its "parking place" in Discovery's cargo bay. Note the difference between the two stinger devices stowed on Challenger's port side (right side of frame). The one nearer the spacecraft's vertical stabilizer is spent, having been inserted by Allen earlier in the day to stabilize the communications satellite. The one nearer the camera awaited duty in two days when it would aid in the capture of the Westar VI satellite.

  15. Fires in Philippines

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Roughly a dozen fires (red pixels) dotted the landscape on the main Philippine island of Luzon on April 1, 2002. This true-color image was acquired by the Moderate-resolution Imaging Spectroradiometer (MODIS), flying aboard NASA's Terra spacecraft. Please note that the high-resolution scene provided here is 500 meters per pixel. For a copy of this scene at the sensor's fullest resolution, visit the MODIS Rapidfire site.

  16. Strait of Gibraltar

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The Strait of Gibraltar as seen from the south (36.0N, 5.5W). This scene shows the actual Rock of Gibraltar under cloud cover, but most of the Strait of Gibraltar, at the mouth of the Mediterranean Sea and the Atlantic Ocean, can be seen in good detail. Despite the obliquity of the scene, much of the beauty of the Spanish and Moroccan countryside can still be appreciated.

  17. How children remember neutral and emotional pictures: boundary extension in children's scene memories.

    PubMed

    Candel, Ingrid; Merckelbach, Harald; Houben, Katrijn; Vandyck, Inne

    2004-01-01

    Boundary extension is the tendency to remember more of a scene than was actually shown. The dominant interpretation of this memory illusion is that it originates from schemata that people construct when viewing a scene. Evidence of boundary extension has been obtained primarily with adult participants who remember neutral pictures. The current study addressed the developmental stability of this phenomenon. Therefore, we investigated whether children aged 10-12 years display boundary extension for neutral pictures. Moreover, we examined emotional scene memory. Eighty-seven children drew pictures from memory after they had seen either neutral or emotional pictures. Both their neutral and emotional drawings revealed boundary extension. Apparently, the schema construction that underlies boundary extension is a robust and ubiquitous process.

  18. Smoking scenes in popular Japanese serial television dramas: descriptive analysis during the same 3-month period in two consecutive years.

    PubMed

    Kanda, Hideyuki; Okamura, Tomonori; Turin, Tanvir Chowdhury; Hayakawa, Takehito; Kadowaki, Takashi; Ueshima, Hirotsugu

    2006-06-01

    Japanese serial television dramas are becoming very popular overseas, particularly in other Asian countries. Exposure to smoking scenes in movies and television dramas has been known to trigger initiation of habitual smoking in young people. Smoking scenes in Japanese dramas may affect the smoking behavior of many young Asians. We examined smoking scenes and smoking-related items in serial television dramas targeting young audiences in Japan during the same season in two consecutive years. Fourteen television dramas targeting the young audience broadcast between July and September in 2001 and 2002 were analyzed. A total of 136 h 42 min of television programs were divided into unit scenes of 3 min (a total of 2734 unit scenes). All the unit scenes were reviewed for smoking scenes and smoking-related items. Of the 2734 3-min unit scenes, 205 (7.5%) were actual smoking scenes and 387 (14.2%) depicted smoking environments with the presence of smoking-related items, such as ash trays. In 185 unit scenes (90.2% of total smoking scenes), actors were shown smoking. Actresses were less frequently shown smoking (9.8% of total smoking scenes). Smoking characters in dramas were in the 20-49 age group in 193 unit scenes (94.1% of total smoking scenes). In 96 unit scenes (46.8% of total smoking scenes), at least one non-smoker was present in the smoking scenes. The smoking locations were mainly indoors, including offices, restaurants and homes (122 unit scenes, 59.6%). The most common smoking-related items shown were ash trays (in 45.5% of smoking-item-related scenes) and cigarettes (in 30.2% of smoking-item-related scenes). Only 3 unit scenes (0.1 % of all scenes) promoted smoking prohibition. This was a descriptive study to examine the nature of smoking scenes observed in Japanese television dramas from a public health perspective.

  19. Characterization techniques for incorporating backgrounds into DIRSIG

    NASA Astrophysics Data System (ADS)

    Brown, Scott D.; Schott, John R.

    2000-07-01

    The appearance of operational hyperspectral imaging spectrometers in both the solar and thermal regions has led to the development of a variety of spectral detection algorithms. The development and testing of these algorithms requires well characterized field collection campaigns that can be time and cost prohibitive. Radiometrically robust synthetic image generation (SIG) environments that can generate appropriate images under a variety of atmospheric conditions and with a variety of sensors offer an excellent supplement to reduce the scope of the expensive field collections. In addition, SIG image products provide the algorithm developer with per-pixel truth, allowing for improved characterization of algorithm performance. To meet the needs of the algorithm development community, the image modeling community needs to supply synthetic image products that contain all the spatial and spectral variability present in real world scenes, and that provide the large area coverage typically acquired with actual sensors. This places a heavy burden on synthetic scene builders to construct well characterized scenes that span large areas. Several SIG models have demonstrated the ability to accurately model targets (vehicles, buildings, etc.) using well constructed target geometry (from CAD packages) and robust thermal and radiometry models. However, background objects (vegetation, infrastructure, etc.) dominate the percentage of real world scene pixels, and utilizing target building techniques for them is time and resource prohibitive. This paper discusses new methods that have been integrated into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model to characterize backgrounds. The new suite of scene construct types allows the user to incorporate both terrain and surface properties to obtain wide area coverage. The terrain can be incorporated using a triangular irregular network (TIN) derived from elevation data or digital elevation model (DEM) data from actual sensors, temperature maps, spectral reflectance cubes (possibly derived from actual sensors), and/or material and mixture maps. Descriptions and examples of each new technique are presented, as well as hybrid methods to demonstrate target embedding in real world imagery.
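
    As a rough, hypothetical illustration of the terrain-ingestion idea mentioned in this record (not DIRSIG code), the Python sketch below converts a gridded digital elevation model into a triangulated surface by splitting each grid cell into two triangles; a true TIN places vertices irregularly, and the grid spacing here is a made-up parameter.

      import numpy as np

      def dem_to_triangles(dem, spacing=30.0):
          """dem: 2-D array of elevations; returns a list of triangles,
          each a 3-tuple of (x, y, z) vertices, two per grid cell."""
          rows, cols = dem.shape
          tris = []
          for r in range(rows - 1):
              for c in range(cols - 1):
                  p00 = (c * spacing, r * spacing, dem[r, c])
                  p10 = ((c + 1) * spacing, r * spacing, dem[r, c + 1])
                  p01 = (c * spacing, (r + 1) * spacing, dem[r + 1, c])
                  p11 = ((c + 1) * spacing, (r + 1) * spacing, dem[r + 1, c + 1])
                  tris.append((p00, p10, p11))  # upper triangle of the cell
                  tris.append((p00, p11, p01))  # lower triangle of the cell
          return tris

      # Tiny synthetic 3x3 elevation grid (arbitrary values): 4 cells, 8 triangles.
      print(len(dem_to_triangles(np.array([[10., 12., 11.],
                                           [9., 10., 10.],
                                           [8., 9., 9.]]))))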

  20. Color Constancy in Two-Dimensional and Three-Dimensional Scenes: Effects of Viewing Methods and Surface Texture

    PubMed Central

    Morimoto, Takuma; Mizokami, Yoko; Yaguchi, Hirohisa; Buck, Steven L.

    2017-01-01

    There has been debate about how and why color constancy may be better in three-dimensional (3-D) scenes than in two-dimensional (2-D) scenes. Although some studies have shown better color constancy for 3-D conditions, the role of specific cues remains unclear. In this study, we compared color constancy for a 3-D miniature room (a real scene consisting of actual objects) and 2-D still images of that room presented on a monitor using three viewing methods: binocular viewing, monocular viewing, and head movement. We found that color constancy was better for the 3-D room; however, color constancy for the 2-D image improved when the viewing method caused the scene to be perceived more like a 3-D scene. Separate measurements of the perceptual 3-D effect of each viewing method also supported these results. An additional experiment comparing a miniature room and its image with and without texture suggested that surface texture of scene objects contributes to color constancy. PMID:29238513

  1. Matching optical flow to motor speed in virtual reality while running on a treadmill

    PubMed Central

    Lafortuna, Claudio L.; Mugellini, Elena; Abou Khaled, Omar

    2018-01-01

    We investigated how visual and kinaesthetic/efferent information is integrated for speed perception in running. Twelve moderately trained to trained subjects ran on a treadmill at three different speeds (8, 10, 12 km/h) in front of a moving virtual scene. They were asked to match the visual speed of the scene to their running speed–i.e., treadmill’s speed. For each trial, participants indicated whether the scene was moving slower or faster than they were running. Visual speed was adjusted according to their response using a staircase until the Point of Subjective Equality (PSE) was reached, i.e., until visual and running speed were perceived as equivalent. For all three running speeds, participants systematically underestimated the visual speed relative to their actual running speed. Indeed, the speed of the visual scene had to exceed the actual running speed in order to be perceived as equivalent to the treadmill speed. The underestimation of visual speed was speed-dependent, and percentage of underestimation relative to running speed ranged from 15% at 8km/h to 31% at 12km/h. We suggest that this fact should be taken into consideration to improve the design of attractive treadmill-mediated virtual environments enhancing engagement into physical activity for healthier lifestyles and disease prevention and care. PMID:29641564

  2. Matching optical flow to motor speed in virtual reality while running on a treadmill.

    PubMed

    Caramenti, Martina; Lafortuna, Claudio L; Mugellini, Elena; Abou Khaled, Omar; Bresciani, Jean-Pierre; Dubois, Amandine

    2018-01-01

    We investigated how visual and kinaesthetic/efferent information is integrated for speed perception in running. Twelve moderately trained to trained subjects ran on a treadmill at three different speeds (8, 10, 12 km/h) in front of a moving virtual scene. They were asked to match the visual speed of the scene to their running speed-i.e., treadmill's speed. For each trial, participants indicated whether the scene was moving slower or faster than they were running. Visual speed was adjusted according to their response using a staircase until the Point of Subjective Equality (PSE) was reached, i.e., until visual and running speed were perceived as equivalent. For all three running speeds, participants systematically underestimated the visual speed relative to their actual running speed. Indeed, the speed of the visual scene had to exceed the actual running speed in order to be perceived as equivalent to the treadmill speed. The underestimation of visual speed was speed-dependent, and percentage of underestimation relative to running speed ranged from 15% at 8km/h to 31% at 12km/h. We suggest that this fact should be taken into consideration to improve the design of attractive treadmill-mediated virtual environments enhancing engagement into physical activity for healthier lifestyles and disease prevention and care.
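
    The adaptive procedure described in these two records can be illustrated with a small simulation. The Python sketch below runs a hypothetical one-up/one-down staircase against an artificial observer whose perceived visual speed is attenuated by 20%, so the estimated PSE ends up above the running speed, matching the direction of the reported effect; the step size, reversal criterion and attenuation factor are assumptions chosen for illustration, not the experiment's actual settings.

      import random

      def staircase_pse(running_speed, start_visual_speed, step=0.5, reversals_needed=8):
          """Estimate the point of subjective equality (PSE) for visual speed."""
          visual_speed = start_visual_speed
          last_response = None
          reversals = []
          while len(reversals) < reversals_needed:
              # Artificial observer: perceived visual speed is attenuated by 20%,
              # plus a little response noise (both are assumptions).
              perceived = visual_speed * 0.8 + random.gauss(0, 0.3)
              response = "faster" if perceived > running_speed else "slower"
              if last_response is not None and response != last_response:
                  reversals.append(visual_speed)
              # One-up/one-down rule: speed the scene up if it looked too slow.
              visual_speed += step if response == "slower" else -step
              last_response = response
          return sum(reversals) / len(reversals)

      # Example: at a running speed of 10 km/h the simulated PSE converges near 12.5 km/h.
      print(staircase_pse(running_speed=10.0, start_visual_speed=10.0))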

  3. Are fixations in static natural scenes a useful predictor of attention in the real world?

    PubMed

    Foulsham, Tom; Kingstone, Alan

    2017-06-01

    Research investigating scene perception normally involves laboratory experiments using static images. Much has been learned about how observers look at pictures of the real world and the attentional mechanisms underlying this behaviour. However, the use of static, isolated pictures as a proxy for studying everyday attention in real environments has led to the criticism that such experiments are artificial. We report a new study that tests the extent to which the real world can be reduced to simpler laboratory stimuli. We recorded the gaze of participants walking on a university campus with a mobile eye tracker, and then showed static frames from this walk to new participants, in either a random or sequential order. The aim was to compare the gaze of participants walking in the real environment with fixations on pictures of the same scene. The data show that picture order affects interobserver fixation consistency and changes looking patterns. Critically, while fixations on the static images overlapped significantly with the actual real-world eye movements, they did so no more than a model that assumed a general bias to the centre. Remarkably, a model that simply takes into account where the eyes are normally positioned in the head, independent of what is actually in the scene, does far better than any other model. These data reveal that viewing patterns to static scenes are a relatively poor proxy for predicting real world eye movement behaviour, while raising intriguing possibilities for how to best measure attention in everyday life. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Fires and Heavy Smoke in Alaska

    NASA Technical Reports Server (NTRS)

    2002-01-01

    On May 28, 2002, the Moderate Resolution Imaging Spectroradiometer (MODIS) captured this image of fires that continue to burn in central Alaska. Alaska is very dry and warm for this time of year, and has experienced over 230 wildfires so far this season. Please note that the high-resolution scene provided here is 500 meters per pixel. For a copy of the scene at the sensor's fullest resolution, visit the MODIS Rapid Response Image Gallery.

  5. Historical note: Drumine--a new Australian local anaesthetic.

    PubMed

    Bailey, R J

    1977-02-01

    An article in the Australasian Medical Gazette of October, 1886 indicates the method of extraction, experimentation and therapeutic application of an active principle, prepared from Euphorbia Drummondii. Further correspondence is noted, refining the method of extraction, reporting cases, answering criticisms, and eventually announcing drumine's commercial preparation. Despite enthusiastic support, the drug soon disappeared from the therapeutic scene.

  6. Landsat 4 results and their implications for agricultural surveys

    NASA Technical Reports Server (NTRS)

    Erickson, J. D.; Bizzell, R. M.; Pitts, D. E.; Thompson, D. R.

    1983-01-01

    Progress on defining the minimum Landsat-4 data characteristics needed for agricultural information in the U.S. and assessing the value-added capability of current technology to extract that level of information is reported. Emphasis is laid on the thematic mapper (TM) data and the ground processing facilities. TM data from all 7 bands for a rural Arkansas scene were examined in terms of radiometric, spatial, and geometric fidelity characteristics. Another scene sensed over Iowa was analyzed using three two-channel data sets. Although the TM data were an improvement over MSS data, no value differential was perceived. However, the development of further analysis techniques is still necessary to determine the actual worth of the improved sensor capabilities available with the TM, which actually has an MSS within itself.

  7. The Effect of Consistency on Short-Term Memory for Scenes.

    PubMed

    Gong, Mingliang; Xuan, Yuming; Xu, Xinwen; Fu, Xiaolan

    2017-01-01

    Which is more detectable, the change of a consistent or an inconsistent object in a scene? This question has been debated for decades. We noted that the change of objects in scenes might simultaneously be accompanied with gist changes. In the present study we aimed to examine how the alteration of gist, as well as the consistency of the changed objects, modulated change detection. In Experiment 1, we manipulated the semantic content by either keeping or changing the consistency of the scene. Results showed that the changes of consistent and inconsistent scenes were equally detected. More importantly, the changes were more accurately detected when scene consistency changed than when the consistency remained unchanged, regardless of the consistency of the memory scenes. A phase-scrambled version of stimuli was adopted in Experiment 2 to decouple the possible confounding effect of low-level factors. The results of Experiment 2 demonstrated that the effect found in Experiment 1 was indeed due to the change of high-level semantic consistency rather than the change of low-level physical features. Together, the study suggests that the change of consistency plays an important role in scene short-term memory, which might be attributed to the sensitivity to the change of semantic content.

  8. The Effect of Consistency on Short-Term Memory for Scenes

    PubMed Central

    Gong, Mingliang; Xuan, Yuming; Xu, Xinwen; Fu, Xiaolan

    2017-01-01

    Which is more detectable, the change of a consistent or an inconsistent object in a scene? This question has been debated for decades. We noted that the change of objects in scenes might simultaneously be accompanied with gist changes. In the present study we aimed to examine how the alteration of gist, as well as the consistency of the changed objects, modulated change detection. In Experiment 1, we manipulated the semantic content by either keeping or changing the consistency of the scene. Results showed that the changes of consistent and inconsistent scenes were equally detected. More importantly, the changes were more accurately detected when scene consistency changed than when the consistency remained unchanged, regardless of the consistency of the memory scenes. A phase-scrambled version of stimuli was adopted in Experiment 2 to decouple the possible confounding effect of low-level factors. The results of Experiment 2 demonstrated that the effect found in Experiment 1 was indeed due to the change of high-level semantic consistency rather than the change of low-level physical features. Together, the study suggests that the change of consistency plays an important role in scene short-term memory, which might be attributed to the sensitivity to the change of semantic content. PMID:29046654

  9. [Suicidal single intraoral shooting by a shotgun--risk of misinterpretation at the crime scene].

    PubMed

    Woźniak, Krzysztof; Pohl, Jerzy

    2003-01-01

    The authors presented two cases of suicidal single intraoral shooting with a shotgun. The first case relates to a victim found near the peak of Swinica in the Tatra mountains. Although the circumstances could have suggested a fatal fall from a height, and only minute, insignificant external injuries were found, the pistol found at the scene was the most important indicator leading to the actual cause of death. The second case relates to a 38-year-old male found in his family house in a village. A severe internal cranial injury (bone fragmentation) was diagnosed at the scene. A self-made weapon had previously been removed and hidden from the scene by a relative of the victim. Before the regular forensic autopsy, an X-ray examination was conducted, which revealed multiple intracranial foreign bodies in the shape of shot. After the results of the autopsy, the relative of the deceased indicated the location of the weapon.

  10. Phytoplankton Bloom Off Portugal

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Turquoise and greenish swirls marked the presence of a large phytoplankton bloom off the coast of Portugal on April 23, 2002. This true-color image was acquired by the Moderate-resolution Imaging Spectroradiometer (MODIS), flying aboard NASA's Terra satellite. There are also several fires burning in northwest Spain, near the port city of A Coruna. Please note that the high-resolution scene provided here is 500 meters per pixel. For a copy of this scene at the sensor's fullest resolution, visit the MODIS Rapidfire site.

  11. A Method of Sky Ripple Residual Nonuniformity Reduction for a Cooled Infrared Imager and Hardware Implementation.

    PubMed

    Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin

    2017-05-08

    Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky scene observations. The ripple residual nonuniformity seriously affects the imaging quality, especially for small target detection. It is difficult to eliminate it using the calibration-based techniques and the current scene-based nonuniformity algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm using fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading the scene details. We test the algorithm on a real infrared sequence by comparing it to several well-established methods. The result shows that the algorithm has obvious advantages compared with the tested methods in terms of detail conservation and convergence speed for ripple RNU correction. Furthermore, we display our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which has two advantages: (1) low resources consumption; and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system.
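
    For orientation only, the Python sketch below gives a loose, simplified illustration of a temporal high-pass correction whose update is gated by a crude scene weight, echoing the idea of using scene classification to control the correction; the membership function, time constant and threshold are assumptions, and this is not the algorithm published in this record.

      import numpy as np

      def temporal_highpass_nuc(frames, time_constant=32, sky_threshold=0.05):
          """Yield corrected frames; `frames` is an iterable of 2-D arrays."""
          low_pass = None
          for frame in frames:
              frame = frame.astype(float)
              if low_pass is None:
                  low_pass = frame.copy()
              # Crude "fuzzy" weight: smooth, low-contrast (sky-like) frames update
              # the per-pixel low-pass estimate strongly; detailed scenes barely
              # update it, which protects scene detail from being absorbed.
              detail = np.abs(frame - frame.mean()).mean() / (frame.mean() + 1e-6)
              weight = max(0.0, 1.0 - detail / sky_threshold)
              low_pass += weight * (frame - low_pass) / time_constant
              # Subtract the estimated (zero-mean) fixed-pattern offset.
              yield frame - (low_pass - low_pass.mean())

      # Example on synthetic frames: uniform scene plus a fixed column ramp.
      rng = np.random.default_rng(0)
      pattern = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
      frames = (rng.normal(100.0, 0.2, (64, 64)) + pattern for _ in range(200))
      corrected = list(temporal_highpass_nuc(frames))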

  12. Lack of Privileged Access to Awareness for Rewarding Social Scenes in Autism Spectrum Disorder.

    PubMed

    Gray, Katie L H; Haffey, Anthony; Mihaylova, Hristina L; Chakrabarti, Bhismadev

    2018-05-04

    Reduced social motivation is hypothesised to underlie social behavioural symptoms of Autism Spectrum Disorder (ASD). The extent to which rewarding social stimuli are granted privileged access to awareness in ASD is currently unknown. We use continuous flash suppression to investigate whether individuals with and without ASD show privileged access to awareness for social over nonsocial rewarding scenes that are closely matched for stimulus features. Strong evidence for a privileged access to awareness for rewarding social over nonsocial scenes was observed in neurotypical adults. No such privileged access was seen in ASD individuals, and moderate support for the null model was noted. These results suggest that the purported deficits in social motivation in ASD may extend to early processing mechanisms.

  13. Combined influence of visual scene and body tilt on arm pointing movements: gravity matters!

    PubMed

    Scotto Di Cesare, Cécile; Sarlegna, Fabrice R; Bourdin, Christophe; Mestre, Daniel R; Bringoux, Lionel

    2014-01-01

    Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., 'combined' tilts equal to the sum of 'single' tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues.

  14. Combined Influence of Visual Scene and Body Tilt on Arm Pointing Movements: Gravity Matters!

    PubMed Central

    Scotto Di Cesare, Cécile; Sarlegna, Fabrice R.; Bourdin, Christophe; Mestre, Daniel R.; Bringoux, Lionel

    2014-01-01

    Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., ‘combined’ tilts equal to the sum of ‘single’ tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues. PMID:24925371

  15. A knowledge-based machine vision system for space station automation

    NASA Technical Reports Server (NTRS)

    Chipman, Laure J.; Ranganath, H. S.

    1989-01-01

    A simple knowledge-based approach to the recognition of objects in man-made scenes is being developed. Specifically, the system under development is a proposed enhancement to a robot arm for use in the space station laboratory module. The system will take a request from a user to find a specific object, and locate that object by using its camera input and information from a knowledge base describing the scene layout and attributes of the object types included in the scene. In order to use realistic test images in developing the system, researchers are using photographs of actual NASA simulator panels, which provide similar types of scenes to those expected in the space station environment. Figure 1 shows one of these photographs. In traditional approaches to image analysis, the image is transformed step by step into a symbolic representation of the scene. Often the first steps of the transformation are done without any reference to knowledge of the scene or objects. Segmentation of an image into regions generally produces a counterintuitive result in which regions do not correspond to objects in the image. After segmentation, a merging procedure attempts to group regions into meaningful units that will more nearly correspond to objects. Here, researchers avoid segmenting the image as a whole, and instead use a knowledge-directed approach to locate objects in the scene. The knowledge-based approach to scene analysis is described and the categories of knowledge used in the system are discussed.

  16. Three-dimensional measurement system for crime scene documentation

    NASA Astrophysics Data System (ADS)

    Adamczyk, Marcin; Hołowko, Elwira; Lech, Krzysztof; Michoński, Jakub; Mączkowski, Grzegorz; Bolewicki, Paweł; Januszkiewicz, Kamil; Sitnik, Robert

    2017-10-01

    Three-dimensional measurements (such as photogrammetry, Time of Flight, Structure from Motion or Structured Light techniques) are becoming a standard in the crime scene documentation process. The use of 3D measurement techniques provides an opportunity to prepare a more insightful investigation and helps to show every trace in the context of the entire crime scene. In this paper we present a hierarchical, three-dimensional measurement system designed for the crime scene documentation process. Our system reflects current standards in crime scene documentation: it performs measurement in two stages. The first stage of documentation, the most general, is prepared with a scanner with relatively low spatial resolution but a large measuring volume, and is used to document the whole scene. The second stage is much more detailed: high resolution but a smaller measuring volume for areas that require a more detailed approach. The documentation process is supervised by a specialised application, CrimeView3D, a software platform for measurement management (connecting with scanners and carrying out measurements, with automatic or semi-automatic data registration in real time) and data visualisation (3D visualisation of documented scenes). It also provides a series of useful tools for forensic technicians: a virtual measuring tape, searching for sources of blood spatter, a virtual walk through the crime scene and many others. In this paper we present our measuring system and the developed software, together with the outcome of research on metrological validation of the scanners performed according to the VDI/VDE standard, and results from measurement sessions conducted on real crime scenes in cooperation with technicians from the Central Forensic Laboratory of the Police.

  17. Real-time synchronized multiple-sensor IR/EO scene generation utilizing the SGI Onyx2

    NASA Astrophysics Data System (ADS)

    Makar, Robert J.; O'Toole, Brian E.

    1998-07-01

    An approach to utilize the symmetric multiprocessing environment of the Silicon Graphics Inc. (SGI) Onyx2 has been developed to support the generation of IR/EO scenes in real-time. This development, supported by the Naval Air Warfare Center Aircraft Division (NAWC/AD), focuses on high frame rate hardware-in-the-loop testing of multiple sensor avionics systems. In the past, real-time IR/EO scene generators have been developed as custom architectures that were often expensive and difficult to maintain. Previous COTS scene generation systems, designed and optimized for visual simulation, could not be adapted for accurate IR/EO sensor stimulation. The new Onyx2 connection mesh architecture made it possible to develop a more economical system while maintaining the fidelity needed to stimulate actual sensors. An SGI based Real-time IR/EO Scene Simulator (RISS) system was developed to utilize the Onyx2's fast multiprocessing hardware to perform real-time IR/EO scene radiance calculations. During real-time scene simulation, the multiprocessors are used to update polygon vertex locations and compute radiometrically accurate floating point radiance values. The output of this process can be utilized to drive a variety of scene rendering engines. Recent advancements in COTS graphics systems, such as the Silicon Graphics InfiniteReality, make a total COTS solution possible for some classes of sensors. This paper will discuss the critical technologies that apply to infrared scene generation and hardware-in-the-loop testing using SGI compatible hardware. Specifically, the application of RISS high-fidelity real-time radiance algorithms on the SGI Onyx2's multiprocessing hardware will be discussed. Also, issues relating to external real-time control of multiple synchronized scene generation channels will be addressed.

  18. Cyclone Chris Hits Australia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    This false-color image shows Cyclone Chris shortly after it hit Australia's northwestern coast on February 6, 2002. This scene was acquired by the Moderate-resolution Imaging Spectroradiometer (MODIS), flying aboard NASA's Terra satellite. (Please note that this scene has not been reprojected.) Cyclone Chris is one of the most powerful storms ever to hit Australia. Initially, the storm contained wind gusts of up to 200 km per hour (125 mph), but shortly after making landfall it weakened to a Category 4 storm. Meteorologists expect the cyclone to weaken quickly as it moves further inland.

  19. Spatial detection of tv channel logos as outliers from the content

    NASA Astrophysics Data System (ADS)

    Ekin, Ahmet; Braspenning, Ralph

    2006-01-01

    This paper proposes a purely image-based TV channel logo detection algorithm that can detect logos independently from their motion and transparency features. The proposed algorithm can robustly detect any type of logo, such as transparent and animated logos, without requiring any temporal constraints, whereas known methods have to wait for the occurrence of large motion in the scene and assume stationary logos. The algorithm models logo pixels as outliers from the actual scene content, which is represented by multiple 3-D histograms in the YCbCr space. We use four scene histograms corresponding to each of the four corners because the content characteristics change from one image corner to another. A further novelty of the proposed algorithm is that we define the image corners and the areas where we compute the scene histograms by a cinematic technique called the Golden Section Rule that is used by professionals. The robustness of the proposed algorithm is demonstrated over a dataset of representative TV content.
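
    As a toy illustration of the outlier idea in this record (not the authors' implementation), the Python sketch below builds a coarse 3-D YCbCr histogram for a corner region and flags pixels falling in sparsely populated bins as candidate logo pixels; the bin count and outlier threshold are arbitrary assumptions.

      import numpy as np

      def logo_outlier_mask(ycbcr_corner, bins=8, outlier_fraction=0.01):
          """ycbcr_corner: H x W x 3 uint8 corner region in YCbCr."""
          h, w, _ = ycbcr_corner.shape
          idx = (ycbcr_corner // (256 // bins)).reshape(-1, 3)
          hist = np.zeros((bins, bins, bins), dtype=int)
          np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
          counts = hist[idx[:, 0], idx[:, 1], idx[:, 2]]
          # Pixels whose colour bin holds only a tiny share of the region
          # are treated as outliers from the scene content.
          return (counts < outlier_fraction * h * w).reshape(h, w)

      # Example: a mostly uniform patch with a small bright overlay blob.
      patch = np.full((120, 160, 3), 120, dtype=np.uint8)
      patch[5:15, 5:20] = (235, 128, 128)
      mask = logo_outlier_mask(patch)
      print(mask[5:15, 5:20].all(), mask[60:, 60:].any())  # True False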

  20. Exocentric direction judgements in computer-generated displays and actual scenes

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Smith, Stephen; Mcgreevy, Michael W.; Grunwald, Arthur J.

    1989-01-01

    One of the most remarkable perceptual properties of common experience is that the perceived shapes of known objects are constant despite movements about them which transform their projections on the retina. This perceptual ability is one aspect of shape constancy (Thouless, 1931; Metzger, 1953; Borresen and Lichte, 1962). It requires that the viewer be able to sense and discount his or her relative position and orientation with respect to a viewed object. This discounting of relative position may be derived directly from the ranging information provided from stereopsis, from motion parallax, from vestibularly sensed rotation and translation, or from corollary information associated with voluntary movement. It is argued that: (1) errors in exocentric judgements of the azimuth of a target generated on an electronic perspective display are not viewpoint-independent, but are influenced by the specific geometry of their perspective projection; (2) elimination of binocular conflict by replacing electronic displays with actual scenes eliminates a previously reported equidistance tendency in azimuth error, but the viewpoint dependence remains; (3) the pattern of exocentrically judged azimuth error in real scenes viewed with a viewing direction depressed 22 deg and rotated + or - 22 deg with respect to a reference direction could not be explained by overestimation of the depression angle, i.e., a slant overestimation.

  1. PLT Polansky on aft flight deck

    NASA Image and Video Library

    2001-02-10

    STS98-E-5084 (10 February 2001) --- Astronaut Mark L. Polansky, STS-98 pilot, takes notes on the aft flight deck of the Space Shuttle Atlantis. The scene was recorded with a digital still camera during Flight Day 4 activities.

  2. A Method of Sky Ripple Residual Nonuniformity Reduction for a Cooled Infrared Imager and Hardware Implementation

    PubMed Central

    Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin

    2017-01-01

    Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky scene observations. The ripple residual nonuniformity seriously affects the imaging quality, especially for small target detection. It is difficult to eliminate it using the calibration-based techniques and the current scene-based nonuniformity algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm using fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading the scene details. We test the algorithm on a real infrared sequence by comparing it to several well-established methods. The result shows that the algorithm has obvious advantages compared with the tested methods in terms of detail conservation and convergence speed for ripple RNU correction. Furthermore, we display our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which has two advantages: (1) low resources consumption; and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system. PMID:28481320

  3. Evaluation of pre-hospital transport time of stroke patients to thrombolytic treatment.

    PubMed

    Simonsen, Sofie Amalie; Andresen, Morten; Michelsen, Lene; Viereck, Søren; Lippert, Freddy K; Iversen, Helle Klingenberg

    2014-11-13

    Effective treatment of stroke is time dependent. Pre-hospital management is an important link in reducing the time from occurrence of stroke symptoms to effective treatment. The aim of this study was to evaluate the time used by emergency medical services (EMS) for stroke patients during a five-year period in order to identify potential delays and evaluate the reorganization of EMS in Copenhagen in 2009. We performed a retrospective analysis of ambulance records from stroke patients suitable for thrombolysis from 1 January 2006 to 7 July 2011. We noted the response time from dispatch of the ambulance to arrival at the scene, the on-scene time, and the transport time to the hospital; together these make up the total alarm-to-door time. In addition, we noted baseline characteristics. We reviewed 481 records (58% male, median age 66 years). The median (IQR) alarm-to-door time in minutes was 41 (33-52), of which 18 (12-24) minutes were spent on scene. Response time was reduced from the period before to the period after reorganization (7 vs. 5 minutes, p <0.001). In a linear multiple regression model, higher patient age and longer distance to the hospital correlated with significantly longer transportation time (p <0.001). This study shows an unchanged alarm-to-door time of 41 minutes over a five-year period. Response time, but not total alarm-to-door time, was reduced during the five years. On-scene time constituted nearly half of the total alarm-to-door time and is thus a point of focus for improvement.

  4. Paramedic Checklists do not Accurately Identify Post-ictal or Hypoglycaemic Patients Suitable for Discharge at the Scene.

    PubMed

    Tohira, Hideo; Fatovich, Daniel; Williams, Teresa A; Bremner, Alexandra; Arendts, Glenn; Rogers, Ian R; Celenza, Antonio; Mountain, David; Cameron, Peter; Sprivulis, Peter; Ahern, Tony; Finn, Judith

    2016-06-01

    The objective of this study was to assess the accuracy and safety of two pre-defined checklists to identify prehospital post-ictal or hypoglycemic patients who could be discharged at the scene. A retrospective cohort study of lower acuity, adult patients attended by paramedics in 2013, and who were either post-ictal or hypoglycemic, was conducted. Two self-care pathway assessment checklists (one each for post-ictal and hypoglycemia) designed as clinical decision tools for paramedics to identify patients suitable for discharge at the scene were used. The intention of the checklists was to provide paramedics with justification to not transport a patient if all checklist criteria were met. Actual patient destination (emergency department [ED] or discharge at the scene) and subsequent events (eg, ambulance requests) were compared between patients who did and did not fulfill the checklists. The performance of the checklists against the destination determined by paramedics was also assessed. Totals of 629 post-ictal and 609 hypoglycemic patients were identified. Of these, 91 (14.5%) and 37 (6.1%) patients fulfilled the respective checklist. Among those who fulfilled the checklist, 25 (27.5%) post-ictal and 18 (48.6%) hypoglycemic patients were discharged at the scene, and 21 (23.1%) and seven (18.9%) were admitted to hospital after ED assessment. Amongst post-ictal patients, those fulfilling the checklist had more subsequent ambulance requests (P=.01) and ED attendances with seizure-related conditions (P=.04) within three days than those who did not. Amongst hypoglycemic patients, there were no significant differences in subsequent events between those who did and did not meet the criteria. Paramedics discharged five times more hypoglycemic patients at the scene than the checklist predicted with no significant differences in the rate of subsequent events. Four deaths (0.66%) occurred within seven days in the hypoglycemic cohort, and none of them were attributed directly to hypoglycemia. The checklists did not accurately identify patients suitable for discharge at the scene within the Emergency Medical Service. Patients who fulfilled the post-ictal checklist made more subsequent health care service requests within three days than those who did not. Both checklists showed similar occurrence of subsequent events to paramedics' decision, but the hypoglycemia checklist identified fewer patients who could be discharged at the scene than paramedics actually discharged. Reliance on these checklists may increase transportations to ED and delay initiation of appropriate treatment at a hospital. Tohira H , Fatovich D , Williams TA , Bremner A , Arendts G , Rogers IR , Celenza A , Mountain D , Cameron P , Sprivulis P , Ahern T , Finn J . Paramedic checklists do not accurately identify post-ictal or hypoglycaemic patients suitable for discharge at the scene. Prehosp Disaster Med. 2016;31(3):282-293.

  5. An examination of driver distraction as recorded in NHTSA databases

    DOT National Transportation Integrated Search

    2009-09-01

    The purpose of this research note is to provide fatality, injury, on-scene crash investigation, and survey data associated with distracted driving and to summarize recent data from NHTSA and other DOT modes pertaining to distracted-driving crashes.

  6. Two Perspectives on Psychoactive Drugs: Commentary on Wolfensberger (1994).

    ERIC Educational Resources Information Center

    Levitas, Andrew; And Others

    1994-01-01

    This commentary on a 1994 article by Wolfensberger on the current mental retardation scene, in which he describes prescription psychoactive drugs as health destroying and life destroying, criticizes Wolfensberger's comments on "psychoactive medications," noting "elementary errors,""apparently concocted figures," and…

  7. Effect of a concurrent auditory task on visual search performance in a driving-related image-flicker task.

    PubMed

    Richard, Christian M; Wright, Richard D; Ee, Cheryl; Prime, Steven L; Shimizu, Yujiro; Vavrik, John

    2002-01-01

    The effect of a concurrent auditory task on visual search was investigated using an image-flicker technique. Participants were undergraduate university students with normal or corrected-to-normal vision who searched for changes in images of driving scenes that involved either driving-related (e.g., traffic light) or driving-unrelated (e.g., mailbox) scene elements. The results indicated that response times were significantly slower if the search was accompanied by a concurrent auditory task. In addition, slower overall responses to scenes involving driving-unrelated changes suggest that the underlying process affected by the concurrent auditory task is strategic in nature. These results were interpreted in terms of their implications for using a cellular telephone while driving. Actual or potential applications of this research include the development of safer in-vehicle communication devices.

  8. Pattern association--a key to recognition of shark attacks.

    PubMed

    Cirillo, G; James, H

    2004-12-01

    Investigation of a number of shark attacks in South Australian waters has led to recognition of pattern similarities on equipment recovered from the scenes of such attacks. Six cases are presented in which a common pattern of striations has been noted.

  9. The polymorphism of crime scene investigation: An exploratory analysis of the influence of crime and forensic intelligence on decisions made by crime scene examiners.

    PubMed

    Resnikoff, Tatiana; Ribaux, Olivier; Baylon, Amélie; Jendly, Manon; Rossy, Quentin

    2015-12-01

    A growing body of scientific literature recurrently indicates that crime and forensic intelligence influence how crime scene investigators make decisions in their practices. This study further scrutinises this intelligence-led crime scene examination view. It analyses results obtained from two questionnaires. Data have been collected from nine chiefs of Intelligence Units (IUs) and 73 Crime Scene Examiners (CSEs) working in forensic science units (FSUs) in the French-speaking part of Switzerland (six cantonal police agencies). Four salient elements emerged: (1) the actual existence of communication channels between IUs and FSUs across the police agencies under consideration; (2) most CSEs take into account the crime intelligence disseminated; (3) a differentiated, but significant, use by CSEs of this kind of intelligence in their daily practice; (4) a probable deep influence of this kind of intelligence on the most concerned CSEs, especially in the selection of the type of material/trace to detect, collect, analyse and exploit. These results contribute to deciphering the subtle dialectic articulating crime intelligence and crime scene investigation, and to further expressing the polymorphic role of CSEs, beyond their most recognised input to the justice system. Indeed, they appear to be central, but implicit, stakeholders in an intelligence-led style of policing. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. Improving AIRS Radiance Spectra in High Contrast Scenes Using MODIS

    NASA Technical Reports Server (NTRS)

    Pagano, Thomas S.; Aumann, Hartmut H.; Manning, Evan M.; Elliott, Denis A.; Broberg, Steven E.

    2015-01-01

    The Atmospheric Infrared Sounder (AIRS) on the EOS Aqua Spacecraft was launched on May 4, 2002. AIRS acquires hyperspectral infrared radiances in 2378 channels ranging in wavelength from 3.7 to 15.4 microns, with a spectral resolving power of better than 1200 and a spatial resolution of 13.5 km with global daily coverage. AIRS is designed to measure temperature and water vapor profiles for improvement in weather forecast accuracy and improved understanding of climate processes. As with most instruments, the AIRS Point Spread Functions (PSFs) are not the same for all detectors. When viewing a non-uniform scene, this causes a significant radiometric error in some channels that is scene dependent and cannot be removed without knowledge of the underlying scene. The magnitude of the error depends on the combination of the non-uniformity of the AIRS spatial response for a given channel and the non-uniformity of the scene, but is typically only noticeable in about 1% of the scenes and about 10% of the channels. The current solution is to avoid those channels when performing geophysical retrievals. In this effort we use data from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument to provide information on the scene uniformity that is used to correct the AIRS data. For the vast majority of channels and footprints the technique works extremely well when compared to a Principal Component (PC) reconstruction of the AIRS channels. In some cases where the scene has high inhomogeneity in an irregular pattern, and in some channels, the method can actually degrade the spectrum. Most of the degraded channels appear to be slightly affected by random noise introduced in the process, but those with larger degradation may be affected by alignment errors of AIRS relative to MODIS or by uncertainties in the PSF. Despite these errors, the methodology shows the ability to correct AIRS radiances in non-uniform scenes under some of the worst-case conditions and improves the ability to match AIRS and MODIS radiances in non-uniform scenes.
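
    The following toy sketch (not the AIRS team's production code) illustrates the correction concept: a high-resolution proxy for the scene (standing in for MODIS radiance) is weighted both by the channel's point spread function and by an ideal uniform response, and the ratio of the two gives a multiplicative correction for that channel and footprint. The PSF and radiance values are synthetic.

    import numpy as np

    def nonuniformity_correction(modis_patch, channel_psf):
        """Return the multiplicative correction for one channel and one footprint.

        modis_patch : high-res radiance proxy over the AIRS footprint (2-D array)
        channel_psf : the channel's spatial response sampled on the same grid,
                      normalized to sum to 1
        """
        ideal_psf = np.full_like(channel_psf, 1.0 / channel_psf.size)  # uniform response
        measured = np.sum(channel_psf * modis_patch)   # what the channel would see
        ideal = np.sum(ideal_psf * modis_patch)        # what a uniform response would see
        return ideal / measured                        # factor to apply to the AIRS radiance

    # toy usage: a half-bright / half-dark footprint and a PSF skewed to the bright side
    patch = np.hstack([np.full((8, 4), 10.0), np.full((8, 4), 2.0)])
    psf = np.ones((8, 8)); psf[:, :4] *= 3.0; psf /= psf.sum()
    print(nonuniformity_correction(patch, psf))  # < 1: this channel over-reads the scene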

  11. Framework of passive millimeter-wave scene simulation based on material classification

    NASA Astrophysics Data System (ADS)

    Park, Hyuk; Kim, Sung-Hyun; Lee, Ho-Jin; Kim, Yong-Hoon; Ki, Jae-Sug; Yoon, In-Bok; Lee, Jung-Min; Park, Soon-Jun

    2006-05-01

    Over the past few decades, passive millimeter-wave (PMMW) sensors have emerged as useful instruments in transportation and military applications such as autonomous flight-landing systems, smart weapons, and night- and all-weather vision systems. As an efficient way to predict the performance of a PMMW sensor and apply it to a system, it should be tested in a SoftWare-In-the-Loop (SWIL) environment. The PMMW scene simulation is a key component in implementing this simulator. However, no commercial off-the-shelf tool is available for constructing the PMMW scene simulation, and only a few studies have addressed this technology. We have studied the PMMW scene simulation method to develop the PMMW sensor SWIL simulator. This paper describes the framework of the PMMW scene simulation and tentative results. The purpose of the PMMW scene simulation is to generate sensor outputs (or images) from a visible image and environmental conditions. We organize it into four parts: material classification mapping, PMMW environmental setting, PMMW scene forming, and millimeter-wave (MMW) sensor modeling. The background and the objects in the scene are classified based on properties related to MMW radiation and reflectivity. The environmental setting part calculates the following PMMW phenomenology: atmospheric propagation and emission including sky temperature, weather conditions, and physical temperature. Then, PMMW raw images are formed with surface geometry. Finally, PMMW sensor outputs are generated from the PMMW raw images by applying sensor characteristics such as aperture size and noise level. Through the simulation process, PMMW phenomenology and sensor characteristics are simulated on the output scene. We have finished the design of the simulator framework and are working on the detailed implementation. As a tentative result, a flight observation was simulated under specific conditions. After completing the implementation, we plan to increase the reliability of the simulation by collecting data with actual PMMW sensors. With a reliable PMMW scene simulator, it will be more efficient to apply the PMMW sensor to various applications.
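
    A highly simplified sketch of the four-stage pipeline described above (material map, environment, scene forming, sensor), under assumed material properties and a trivial sensor model; it is not the authors' simulator.

    import numpy as np

    # assumed per-material (emissivity, physical temperature in K)
    MATERIALS = {0: (0.95, 290.0),   # vegetation
                 1: (0.40, 295.0),   # metal (highly reflective at MMW)
                 2: (0.90, 285.0)}   # soil

    def form_scene(material_map, sky_temperature=60.0):
        """Brightness temperature: emitted part plus reflected sky illumination."""
        bt = np.zeros(material_map.shape)
        for mid, (emis, tphys) in MATERIALS.items():
            sel = material_map == mid
            bt[sel] = emis * tphys + (1.0 - emis) * sky_temperature
        return bt

    def sensor_model(bt, kernel=3, noise_k=1.5, seed=0):
        """Crude aperture blur (box average) plus additive receiver noise."""
        rng = np.random.default_rng(seed)
        pad = kernel // 2
        padded = np.pad(bt, pad, mode="edge")
        blurred = np.zeros_like(bt)
        for dy in range(kernel):
            for dx in range(kernel):
                blurred += padded[dy:dy + bt.shape[0], dx:dx + bt.shape[1]]
        blurred /= kernel * kernel
        return blurred + rng.normal(0.0, noise_k, bt.shape)

    # a metal object against a vegetation background
    material_map = np.zeros((64, 64), dtype=int)
    material_map[20:40, 20:40] = 1
    image = sensor_model(form_scene(material_map))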

  12. Colours in black and white: the depiction of lightness and brightness in achromatic engravings before the invention of photography.

    PubMed

    Zavagno, Daniele; Massironi, Manfredo

    2006-01-01

    What is it like to see the world in black and white? In the pioneer days of cinema, when movies displayed grey worlds, was it true that no 'colours' were actually seen? Did every object seen in those projections appear grey in the same way? The answer is obviously no--people in those glorious days were seeing a world full of light, shadows, and objects in which colours were expressed in terms of lightness. But the marvels of grey worlds have not always been so richly displayed. Before the invention of photography, the depiction of scenes in black-and-white had to face some technical and perceptual challenges. We have studied the technical and perceptual constraints that XV-XVIII century engravers had to face in order to translate actual colours into shades of grey. An indeterminacy principle is considered, according to which artists had to prefer the representation of some object or scene features over others (for example brightness over lightness). The reasons for this lay between the kind of grey scale technically available and the kind of information used in the construction of 3-D scenes. With the invention of photography, photomechanical reproductions, and new printing solutions, artists had at their disposal a continuous grey scale that greatly reduces the constraints of the indeterminacy principle.

  13. Trauma Simulation Training Increases Confidence Levels in Prehospital Personnel Performing Life-Saving Interventions in Trauma Patients

    PubMed Central

    Patel, Archita D.; Meurer, David A.; Shuster, Jonathan J.

    2016-01-01

    Introduction. Limited evidence is available on simulation training of prehospital care providers, specifically the use of tourniquets and needle decompression. This study focused on whether the confidence level of prehospital personnel performing these skills improved through simulation training. Methods. Prehospital personnel from Alachua County Fire Rescue were enrolled in the study over a 2- to 3-week period based on their availability. Two scenarios were presented to them: a motorcycle crash resulting in a leg amputation requiring a tourniquet and an intoxicated patient with a stab wound, who experienced tension pneumothorax requiring needle decompression. Crews were asked to rate their confidence levels before and after exposure to the scenarios. Timing of the simulation interventions was compared with actual scene times to determine applicability of simulation in measuring the efficiency of prehospital personnel. Results. Results were collected from 129 participants. Pre- and postexposure scores increased by a mean of 1.15 (SD 1.32; 95% CI, 0.88–1.42; P < 0.001). Comparison of actual scene times with simulated scene times yielded a 1.39-fold difference (95% CI, 1.25–1.55) for Scenario 1 and 1.59 times longer for Scenario 2 (95% CI, 1.43–1.77). Conclusion. Simulation training improved prehospital care providers' confidence level in performing two life-saving procedures. PMID:27563467

  14. CHAMP: a locally adaptive unmixing-based hyperspectral anomaly detection algorithm

    NASA Astrophysics Data System (ADS)

    Crist, Eric P.; Thelen, Brian J.; Carrara, David A.

    1998-10-01

    Anomaly detection offers a means by which to identify potentially important objects in a scene without prior knowledge of their spectral signatures. As such, this approach is less sensitive to variations in target class composition, atmospheric and illumination conditions, and sensor gain settings than would be a spectral matched filter or similar algorithm. The best existing anomaly detectors generally fall into one of two categories: those based on local Gaussian statistics, and those based on linear mixing models. Unmixing-based approaches better represent the real distribution of data in a scene, but are typically derived and applied on a global or scene-wide basis. Locally adaptive approaches allow detection of more subtle anomalies by accommodating the spatial non-homogeneity of background classes in a typical scene, but provide a poorer representation of the true underlying background distribution. The CHAMP algorithm combines the best attributes of both approaches, applying a linear-mixing-model approach in a spatially adaptive manner. The algorithm itself, and test results on simulated and actual hyperspectral image data, are presented in this paper.
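
    An illustrative sketch only (not the CHAMP algorithm itself): each pixel is scored by how poorly its spectrum is reconstructed by a low-dimensional linear model fitted to its local spatial neighborhood; the window size and the number of background components are assumed parameters.

    import numpy as np

    def local_unmixing_anomaly(cube, window=15, n_components=3):
        """cube: H x W x B hyperspectral array; returns an H x W anomaly score."""
        h, w, b = cube.shape
        half = window // 2
        scores = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                r0, r1 = max(0, i - half), min(h, i + half + 1)
                c0, c1 = max(0, j - half), min(w, j + half + 1)
                local = cube[r0:r1, c0:c1].reshape(-1, b)
                mean = local.mean(axis=0)
                # background subspace fitted to the local window (linear-mixing proxy)
                _, _, vt = np.linalg.svd(local - mean, full_matrices=False)
                basis = vt[:n_components]
                centered = cube[i, j] - mean
                resid = centered - basis.T @ (basis @ centered)
                scores[i, j] = np.linalg.norm(resid)   # large residual = anomalous
        return scores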

  15. An Overview of American Publishing for Librarians.

    ERIC Educational Resources Information Center

    Facente, Gary

    1986-01-01

    A financial survey of the American publishing scene (estimated net book sales, expenses for publishing and marketing professional books) is followed by descriptions of the editorial and marketing processes. Practices relating to contracts, imprints, distribution arrangements, and remainders are described noting changes in contemporary publishing…

  16. Scheme for Terminal Guidance Utilizing Acousto-Optic Correlator.

    DTIC Science & Technology

    longitudinally extending acousto-optic device as index-of-refraction variation pattern signals. Real-time signals corresponding to the scene actually being viewed ... by the vehicle are propagated across the stored signals, and the results of an acousto-optic correlation are utilized to determine X and Y error

  17. Analysis of the Operational Test and Evaluation of the CBRNE Crime Scene Modeller (C2SM)

    DTIC Science & Technology

    2014-07-09

    International Society for Optical Engineering, vol. 7305, 730509 (10 pp.), 2009. It should be noted that these two projects are somewhat unique in ... effectiveness of the capability, as well as an opportunity to receive an arm's-length peer evaluation by an audience of international expert LE personnel with ... operators. Proceedings of the SPIE - The International Society for Optical Engineering, vol. 7666, 76660N (8 pp.), 2010. Note: the scenario is ...

  18. The simulation of automatic ladar sensor control during flight operations using USU LadarSIM software

    NASA Astrophysics Data System (ADS)

    Pack, Robert T.; Saunders, David; Fullmer, Rees; Budge, Scott

    2006-05-01

    USU LadarSIM Release 2.0 is a ladar simulator that has the ability to feed high-level mission scripts into a processor that automatically generates scan commands during flight simulations. The scan generation depends on specified flight trajectories and scenes consisting of terrain and targets. The scenes and trajectories can consist of either simulated or actual data. The first modeling step produces an outline of scan footprints in xyz space. Once mission goals have been analyzed and it is determined that the scan footprints are appropriately distributed or placed, specific scans can then be chosen for the generation of complete radiometry-based range images and point clouds. The simulation is capable of quickly modeling ray-trace geometry associated with (1) various focal plane arrays and scanner configurations and (2) various scenes and trajectories associated with particular maneuvers or missions.
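
    A simple sketch of the first modeling step under assumed flat terrain and an idealized scan geometry (not the USU LadarSIM internals): a sensor position and a grid of azimuth/elevation scan commands are turned into footprint points on the ground in xyz space.

    import numpy as np

    def scan_footprint(position, heading_deg, az_grid_deg, el_grid_deg):
        """Intersect scan rays with the z = 0 ground plane.

        position : (x, y, z) sensor position, z > 0
        heading_deg : platform heading, degrees from the +x axis
        az_grid_deg, el_grid_deg : 1-D arrays of scan angles (elevations below the horizon)
        """
        x0, y0, z0 = position
        pts = []
        for el in np.deg2rad(el_grid_deg):
            for az in np.deg2rad(np.asarray(az_grid_deg) + heading_deg):
                # unit ray in the world frame, depressed below the horizon by `el`
                d = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), -np.sin(el)])
                if d[2] >= 0:          # ray never reaches the ground
                    continue
                t = -z0 / d[2]
                pts.append((x0 + t * d[0], y0 + t * d[1], 0.0))
        return np.array(pts)

    # footprint outline for one frame of a simulated flight at 300 m altitude
    footprints = scan_footprint((0.0, 0.0, 300.0), heading_deg=0.0,
                                az_grid_deg=np.linspace(-10, 10, 9),
                                el_grid_deg=np.linspace(20, 40, 5))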

  19. Out of Mind, Out of Sight: Unexpected Scene Elements Frequently Go Unnoticed Until Primed.

    PubMed

    Slavich, George M; Zimbardo, Philip G

    2013-12-01

    The human visual system employs a sophisticated set of strategies for scanning the environment and directing attention to stimuli that can be expected given the context and a person's past experience. Although these strategies enable us to navigate a very complex physical and social environment, they can also cause highly salient but unexpected stimuli to go completely unnoticed. To examine the generality of this phenomenon, we conducted eight studies that included 15 different experimental conditions and 1,577 participants in all. These studies revealed that a large majority of participants do not report having seen a woman in the center of an urban scene who was photographed in midair as she was committing suicide. Despite seeing the scene repeatedly, 46% of all participants failed to report seeing a central figure and only 4.8% reported seeing a falling person. Frequency of noticing the suicidal woman was highest for participants who read a narrative priming story that increased the extent to which she was schematically congruent with the scene. In contrast to this robust effect of inattentional blindness, a majority of participants reported seeing other peripheral objects in the visual scene that were equally difficult to detect, yet more consistent with the scene. Follow-up qualitative analyses revealed that participants reported seeing many elements that were not actually present, but which could have been expected given the overall context of the scene. Together, these findings demonstrate the robustness of inattentional blindness and highlight the specificity with which different visual primes may increase noticing behavior.

  20. Modern Approaches to the Computation of the Probability of Target Detection in Cluttered Environments

    NASA Astrophysics Data System (ADS)

    Meitzler, Thomas J.

    The field of computer vision interacts with fields such as psychology, vision research, machine vision, psychophysics, mathematics, physics, and computer science. The focus of this thesis is new algorithms and methods for the computation of the probability of detection (Pd) of a target in a cluttered scene. The scene can be either a natural visual scene, such as one sees with the naked eye (visual), or a scene displayed on a monitor with the help of infrared sensors. The relative clutter and the temperature difference between the target and background (ΔT) are defined and then used to calculate a relative signal-to-clutter ratio (SCR), from which the Pd is calculated for a target in a cluttered scene. It is shown how this definition can subsume many previous definitions of clutter and ΔT. Next, fuzzy and neural-fuzzy techniques are used to calculate the Pd, and it is shown how these methods can give results that correlate well with experiment. The experimental design for actually measuring the Pd of a target by observers is described. Finally, wavelets are applied to the calculation of clutter, and it is shown how this new wavelet-based definition of clutter can be used to compute the Pd of a target.
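
    The following sketch illustrates the ingredients named above using assumed, illustrative definitions: clutter as the root-mean-square of block standard deviations, ΔT as the target/background mean difference, and a logistic mapping from the resulting SCR to Pd. The mapping constants are placeholders, not the thesis' fitted values.

    import numpy as np

    def clutter_metric(image, block=8):
        """RMS of the standard deviations of non-overlapping blocks."""
        h, w = image.shape
        stds = [image[r:r + block, c:c + block].std()
                for r in range(0, h - block + 1, block)
                for c in range(0, w - block + 1, block)]
        return float(np.sqrt(np.mean(np.square(stds))))

    def probability_of_detection(image, target_mask, k=1.0, scr50=2.0):
        """Logistic Pd as a function of the signal-to-clutter ratio (illustrative)."""
        delta_t = image[target_mask].mean() - image[~target_mask].mean()
        scr = abs(delta_t) / clutter_metric(image)
        return 1.0 / (1.0 + np.exp(-k * (scr - scr50)))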

  1. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  2. Image Manipulation: Then and Now.

    ERIC Educational Resources Information Center

    Sutton, Ronald E.

    The images of photography have been manipulated almost from the moment of their discovery. The blending together in the studio and darkroom of images not found in actual scenes from life has been a regular feature of modern photography in both art and advertising. Techniques of photograph manipulation include retouching; blocking out figures or…

  3. The signature of undetected change: an exploratory electrotomographic investigation of gradual change blindness.

    PubMed

    Kiat, John E; Dodd, Michael D; Belli, Robert F; Cheadle, Jacob E

    2018-05-01

    Neuroimaging-based investigations of change blindness, a phenomenon in which seemingly obvious changes in visual scenes fail to be detected, have significantly advanced our understanding of visual awareness. The vast majority of prior investigations, however, utilize paradigms involving visual disruptions (e.g., intervening blank screens, saccadic movements, "mudsplashes"), making it difficult to isolate neural responses toward visual changes cleanly. To address this issue in this present study, high-density EEG data (256 channel) were collected from 25 participants using a paradigm in which visual changes were progressively introduced into detailed real-world scenes without the use of visual disruption. Oscillatory activity associated with undetected changes was contrasted with activity linked to their absence using standardized low-resolution brain electromagnetic tomography (sLORETA). Although an insufficient number of detections were present to allow for analysis of actual change detection, increased beta-2 activity in the right inferior parietal lobule (rIPL), a region repeatedly associated with change blindness in disruption paradigms, followed by increased theta activity in the right superior temporal gyrus (rSTG) was noted in undetected visual change responses relative to the absence of change. We propose the rIPL beta-2 activity to be associated with orienting attention toward visual changes, with the subsequent rise in rSTG theta activity being potentially linked with updating preconscious perceptual memory representations. NEW & NOTEWORTHY This study represents the first neuroimaging-based investigation of gradual change blindness, a visual phenomenon that has significant potential to shed light on the processes underlying visual detection and conscious perception. The use of gradual change materials is reflective of real-world visual phenomena and allows for cleaner isolation of signals associated with the neural registration of change relative to the use of abrupt change transients.

  4. Computer-generated, calligraphic, full-spectrum color system for visual simulation landing approach maneuvers

    NASA Technical Reports Server (NTRS)

    Chase, W. D.

    1975-01-01

    The calligraphic chromatic projector described was developed to improve the perceived realism of visual scene simulation ('out-the-window visuals'). The optical arrangement of the projector is illustrated and discussed. The device permits drawing 2000 vectors in as many as 500 colors, all above critical flicker frequencies, and use of high scene resolution and brightness at an acceptable level to the pilot, with the maximum system capabilities of 1000 lines and 1000 fL. The device for generating the colors is discussed, along with an experiment conducted to demonstrate potential improvements in performance and pilot opinion. Current research work and future research plans are noted.

  5. "A cool little buzz": alcohol intoxication in the dance club scene.

    PubMed

    Hunt, Geoffrey; Moloney, Molly; Fazio, Adam

    2014-06-01

    In recent years, there has been increasing concern about youthful "binge" drinking and intoxication. Yet the meaning of intoxication remains under-theorized. This paper examines intoxication in a young adult nightlife scene, using data from a 2005-2008 National Institute on Drug Abuse-funded project on Asian American youth and nightlife. Analyzing in-depth qualitative interview data with 250 Asian American young adults in the San Francisco area, we examine their narratives about alcohol intoxication with respect to sociability, stress, and fun, and their navigation of the fine line between being "buzzed" and being "wasted." Finally, limitations of the study and directions for future research are noted.

  6. Surreal Scene Part of Lives.

    ERIC Educational Resources Information Center

    Freeman, Christina

    1999-01-01

    Describes a school newspaper editor's attempts to cover the devastating tornado that severely damaged her school--North Hall High School in Gainesville, Georgia. Notes that the 16-page special edition she and the staff produced included first-hand accounts, tributes to victims, tales of survival, and pictures of the tragedy. (RS)

  7. The Unheroic Side of Leadership: Notes from the Swamp.

    ERIC Educational Resources Information Center

    Murphy, Jerome T.

    1988-01-01

    Administrators who "lionize" leadership miss important behind-the-scenes aspects of daily management. They depict the grand design without niggling problems and assume that leadership belongs exclusively to the heroic boss. Leaders often achieve the best results by being skilled listeners, acting like followers, and depending on followers…

  8. Advanced interactive display formats for terminal area traffic control

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.

    1995-01-01

    The basic design considerations for perspective Air Traffic Control displays are described. A software framework has been developed for manual viewing parameter setting (MVPS) in preparation for continued, ongoing development of automated viewing parameter setting (AVPS) schemes. The MVPS system is based on indirect manipulation of the viewing parameters. Requests for changes in viewing parameter setting are entered manually by the operator by moving viewing parameter manipulation pointers on the screen. The motion of these pointers, which are an integral part of the 3-D scene, is limited to the boundaries of the screen. This arrangement was chosen in order to preserve the correspondence between the new and the old viewing parameter settings, a feature which helps prevent spatial disorientation of the operator. For all viewing operations, e.g. rotation, translation and ranging, the actual change is executed automatically by the system through gradual transitions with an exponentially damped, sinusoidal velocity profile, referred to in this work as 'slewing' motions. The slewing functions, which eliminate discontinuities in the viewing parameter changes, are designed primarily to enhance the operator's impression that he or she is dealing with an actually existing physical system, rather than an abstract computer-generated scene. Current, ongoing efforts deal with the development of automated viewing parameter setting schemes. These schemes employ an optimization strategy aimed at identifying the best possible vantage point from which the Air Traffic Control scene can be viewed for a given traffic situation.
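
    A minimal sketch of such a slewing transition, assuming an exponentially damped sinusoidal velocity profile normalized so the viewing parameter ends exactly at its requested value; the damping constant, frequency, and duration are illustrative, not values from the paper.

    import numpy as np

    def slew(old, new, duration=1.5, fps=60, tau=0.4, freq_hz=0.5):
        """Return time samples of a viewing parameter moving from `old` to `new`."""
        t = np.linspace(0.0, duration, int(duration * fps))
        v = np.exp(-t / tau) * np.sin(2 * np.pi * freq_hz * t)  # damped sinusoidal velocity
        x = np.cumsum(v)
        x = x / x[-1]                       # normalize so the profile reaches exactly 1.0
        return old + (new - old) * x        # gradual, discontinuity-free transition

    azimuths = slew(old=30.0, new=75.0)     # e.g. a smooth display-rotation request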

  9. Hierarchical, Three-Dimensional Measurement System for Crime Scene Scanning.

    PubMed

    Marcin, Adamczyk; Maciej, Sieniło; Robert, Sitnik; Adam, Woźniak

    2017-07-01

    We present a new generation of three-dimensional (3D) measuring systems, developed for the process of crime scene documentation. This measuring system facilitates the preparation of more insightful, complete, and objective documentation for crime scenes. Our system reflects the actual requirements for hierarchical documentation, and it consists of three independent 3D scanners: a laser scanner for overall measurements, a situational structured light scanner for more minute measurements, and a detailed structured light scanner for the most detailed parts of the scene. Each scanner has its own spatial resolution: 2.0, 0.3, and 0.05 mm, respectively. The results of interviews we have conducted with technicians indicate that our developed 3D measuring system has significant potential to become a useful tool for forensic technicians. To ensure the maximum compatibility of our measuring system with the standards that regulate the documentation process, we have also performed a metrological validation and designated the maximum permissible length measurement error (E_MPE) for each structured light scanner. In this study, we present additional results regarding documentation processes conducted during crime scene inspections and a training session. © 2017 American Academy of Forensic Sciences.

  10. [Study on the modeling of earth-atmosphere coupling over rugged scenes for hyperspectral remote sensing].

    PubMed

    Zhao, Hui-Jie; Jiang, Cheng; Jia, Guo-Rui

    2014-01-01

    Adjacency effects may introduce errors in quantitative applications of hyperspectral remote sensing, and a significant component of these effects is the earth-atmosphere coupling radiance. Surrounding relief and shadow, however, induce strong changes in hyperspectral images acquired over rugged terrain, so the spectral characteristics cannot be described accurately. Furthermore, the radiative coupling process between the earth and the atmosphere is more complex over rugged scenes. In order to meet the requirements of real-time processing in data simulation, an equivalent background reflectance was developed, based on the radiative transfer process, that takes into account the topography and the geometry between the surroundings and the targets. The contributions of the coupling to the signal at sensor level were then evaluated. This approach was integrated into the sensor-level radiance simulation model and then validated by simulating a set of actual radiance data. The results show that the visual appearance of the simulated images is consistent with that of the observed images. It was also shown that the spectral similarity is improved over rugged scenes. In addition, the model precision is maintained at the same level over flat scenes.
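
    A schematic sketch of the equivalent background reflectance idea, under an assumed weighting model (inverse-square distance and how each facet of a digital elevation model is oriented toward the target); this is an illustration, not the paper's formulation.

    import numpy as np

    def equivalent_background_reflectance(reflectance, dem, target_rc,
                                          pixel_size=30.0, max_dist_px=50):
        """Weighted average of surrounding reflectance; weights come from terrain geometry.

        reflectance, dem : H x W arrays (surface reflectance and elevation in metres)
        target_rc        : (row, col) of the target pixel
        """
        h, w = reflectance.shape
        tr, tc = target_rc
        # facet normals from the DEM gradient
        gy, gx = np.gradient(dem, pixel_size)
        normals = np.dstack([-gx, -gy, np.ones_like(dem)])
        normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
        rows, cols = np.mgrid[0:h, 0:w]
        # 3-D vector from each background facet to the target point
        vec = np.dstack([(tc - cols) * pixel_size, (tr - rows) * pixel_size,
                         dem[tr, tc] - dem])
        dist = np.linalg.norm(vec, axis=-1)
        valid = (dist > 0) & (dist <= max_dist_px * pixel_size)
        direction = vec / np.maximum(dist[..., None], 1e-9)
        facing = np.clip(np.sum(normals * direction, axis=-1), 0.0, None)
        weights = np.where(valid, facing / np.maximum(dist, pixel_size) ** 2, 0.0)
        total = weights.sum()
        # fall back to a plain mean if no facet faces the target (e.g. flat terrain)
        return float((weights * reflectance).sum() / total) if total > 0 else float(reflectance.mean())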

  11. Remotely Sensed Thermal Anomalies in Western Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This layer contains the areas identified as having anomalous surface temperature from Landsat satellite imagery in Western Colorado. Data were obtained for two different dates. The digital numbers of each Landsat scene were converted to radiance, and the temperature was calculated in kelvin and then converted to degrees Celsius for each land cover type using the emissivity of that cover type. This process was repeated for each of the land cover types (open water, barren, deciduous forest, evergreen forest, mixed forest, shrub/scrub, grassland/herbaceous, pasture hay, and cultivated crops). The temperature of each pixel within each scene was calculated using the thermal band. In order to calculate the temperature, an average emissivity value was used for each land cover type within each scene. The NLCD 2001 land cover classification raster data for the zones that cover Colorado were downloaded from the USGS site and used to identify the land cover types within each scene. Areas with temperature residuals greater than 2σ, and areas with residuals between 1σ and 2σ, were considered Landsat-modeled very warm and warm surface exposures (thermal anomalies), respectively.
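
    A hedged worked example of the processing chain described above (digital number to radiance to brightness temperature to an emissivity-adjusted temperature in degrees Celsius). The gain/offset and K1/K2 constants below are the values commonly cited for Landsat 5 TM band 6, and the emissivity adjustment is the standard single-channel approximation; the actual calibration coefficients of the scenes used here may differ.

    import numpy as np

    def dn_to_radiance(dn, gain=0.055376, offset=1.18):
        """Digital number to at-sensor spectral radiance, W / (m^2 sr um) (illustrative constants)."""
        return gain * dn + offset

    def radiance_to_brightness_temp(radiance, k1=607.76, k2=1260.56):
        """At-sensor brightness temperature in kelvin (Landsat 5 TM band-6 constants)."""
        return k2 / np.log(k1 / radiance + 1.0)

    def surface_temperature_celsius(dn, emissivity, wavelength_um=11.45):
        """Emissivity-adjusted surface temperature (single-channel approximation)."""
        bt = radiance_to_brightness_temp(dn_to_radiance(np.asarray(dn, dtype=float)))
        rho = 1.438e-2                      # h*c / k_B in m*K
        ts = bt / (1.0 + (wavelength_um * 1e-6 * bt / rho) * np.log(emissivity))
        return ts - 273.15

    print(surface_temperature_celsius(dn=140, emissivity=0.97))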

  12. Language-guided visual processing affects reasoning: the role of referential and spatial anchoring.

    PubMed

    Dumitru, Magda L; Joergensen, Gitte H; Cruickshank, Alice G; Altmann, Gerry T M

    2013-06-01

    Language is more than a source of information for accessing higher-order conceptual knowledge. Indeed, language may determine how people perceive and interpret visual stimuli. Visual processing in linguistic contexts, for instance, mirrors language processing and happens incrementally, rather than through variously-oriented fixations over a particular scene. The consequences of this atypical visual processing are yet to be determined. Here, we investigated the integration of visual and linguistic input during a reasoning task. Participants listened to sentences containing conjunctions or disjunctions (Nancy examined an ant and/or a cloud) and looked at visual scenes containing two pictures that either matched or mismatched the nouns. Degree of match between nouns and pictures (referential anchoring) and between their expected and actual spatial positions (spatial anchoring) affected fixations as well as judgments. We conclude that language induces incremental processing of visual scenes, which in turn becomes susceptible to reasoning errors during the language-meaning verification process. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Slow changing postural cues cancel visual field dependence on self-tilt detection.

    PubMed

    Scotto Di Cesare, C; Macaluso, T; Mestre, D R; Bringoux, L

    2015-01-01

    Interindividual differences influence the multisensory integration process involved in spatial perception. Here, we assessed the effect of visual field dependence on self-tilt detection relative to upright, as a function of static vs. slow-changing visual or postural cues. To that aim, we manipulated slow rotations (i.e., 0.05°/s) of the body and/or the visual scene in pitch. Participants had to indicate whether they felt being tilted forward at successive angles. Results show that thresholds for self-tilt detection substantially differed between visual field dependent/independent subjects, when only the visual scene was rotated. This difference was no longer present when the body was actually rotated, whatever the visual scene condition (i.e., absent, static or rotated relative to the observer). These results suggest that the cancellation of visual field dependence by dynamic postural cues may rely on a multisensory reweighting process, where slow-changing vestibular/somatosensory inputs may prevail over visual inputs. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. ERTS-1 anomalous dark patches

    NASA Technical Reports Server (NTRS)

    Strong, A. E. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Through combined use of imagery from the ERTS-1 and NOAA-2 satellites, it was found that when the sun elevation exceeds 55 degrees, the ERTS-1 imagery is subject to considerable contamination by sunlight even though the actual specular point is nearly 300 nautical miles from nadir. Based on sea surface wave slope information, a wind speed of 10 knots will theoretically provide approximately 0.5 percent incident solar reflectance as observed by the ERTS multispectral scanner detectors. This reflectance nearly doubles under the influence of a 20 knot wind. The most pronounced effect occurs in areas of calm water, where anomalous dark patches are observed. Calm water at the distances from the specular point found in ERTS scenes will reflect no solar energy to the multispectral scanner, making these regions stand out as dark areas in all bands in an ocean scene otherwise composed of general diffuse sunlight from rougher ocean surfaces. Anomalous dark patches in the outer parts of the glitter zones may explain the unusual appearance of some scenes.

  15. Colored Chaos

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 7 May 2004 This daytime visible color image was collected on May 30, 2002 during the Southern Fall season in Atlantis Chaos.

    The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation.

    Image information: VIS instrument. Latitude -34.5, Longitude 183.6 East (176.4 West). 38 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  16. Continuing Through Iani Chaos

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site]

    The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation.

    This false color image continues the northward trend through the Iani Chaos region. Compare this image to Monday's and Tuesday's. This image was collected during the Southern Fall season.

    Image information: VIS instrument. Latitude -0.1 Longitude 342.6 East (17.4 West). 19 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  17. Aureum Chaos: Another View

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site]

    The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation.

    This false color image is located in a different part of Aureum Chaos. Compare the surface textures with yesterday's image. This image was collected during the Southern Fall season.

    Image information: VIS instrument. Latitude -4.1, Longitude 333.9 East (26.1 West). 35 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  18. Iani Chaos in False Color

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site]

    The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation.

    This false color image of a portion of the Iani Chaos region was collected during the Southern Fall season.

    Image information: VIS instrument. Latitude -2.6 Longitude 342.4 East (17.6 West). 36 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  19. Aureum Chaos

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site]

    The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation.

    This false color image was collected during Southern Fall and shows part of the Aureum Chaos.

    Image information: VIS instrument. Latitude -3.6, Longitude 332.9 East (27.1 West). 35 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  20. Mawrth Valles

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site]

    The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation.

    This false color image of an old channel floor and surrounding highlands is located in the lower reach of Mawrth Valles. This image was collected during the Northern Spring season.

    Image information: VIS instrument. Latitude 25.7, Longitude 341.2 East (18.8 West). 35 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  1. Northern Polar Cap

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 13 May 2004 This nighttime visible color image was collected on November 26, 2002 during the Northern Summer season near the North Polar Cap Edge.

    The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation.

    Image information: VIS instrument. Latitude 80, Longitude 43.2 East (316.8 West). 38 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  2. Polar Cap Colors

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 12 May 2004. This daytime visible color image was collected on June 6, 2003, during the Southern Spring season near the South Polar Cap Edge.

    The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation.

    Image information: VIS instrument. Latitude -77.8, Longitude 195 East (165 West). 38 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  3. White Rock in False Color

    NASA Technical Reports Server (NTRS)

    2005-01-01

    [figure removed for brevity, see original site]

    The THEMIS VIS camera is capable of capturing color images of the Martian surface using five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from using multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation.

    This false color image shows the wind eroded deposit in Pollack Crater called 'White Rock'. This image was collected during the Southern Fall Season.

    Image information: VIS instrument. Latitude -8, Longitude 25.2 East (334.8 West). 0 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  4. The structure of red-infrared scattergrams of semivegetated landscapes

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Eagleson, Peter S.

    1988-01-01

    A physically based linear stochastic geometric canopy soil reflectance model is presented for characterizing spatial variability of semivegetated landscapes at subpixel and regional scales. Landscapes are conceptualized as stochastic geometric surfaces, incorporating not only the variability in geometric elements, but also the variability in vegetation and soil background reflectance which can be important in some scenes. The model is used to investigate several possible mechanisms which contribute to the often observed characteristic triangular shape of red-infrared scattergrams of semivegetated landscapes. Scattergrams of simulated and semivegetated scenes are analyzed with respect to the scales of the satellite pixel and subpixel components. Analysis of actual aerial radiometric data of a pecan orchard is presented in comparison with ground observations as preliminary confirmation of the theoretical results.

  5. The structure of red-infrared scattergrams of semivegetated landscapes

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Eagleson, Peter S.

    1989-01-01

    A physically based linear stochastic geometric canopy soil reflectance model is presented for characterizing spatial variability of semivegetated landscapes at subpixel and regional scales. Landscapes are conceptualized as stochastic geometric surfaces, incorporating not only the variability in geometric elements, but also the variability in vegetation and soil background reflectance which can be important in some scenes. The model is used to investigate several possible mechanisms which contribute to the often observed characteristic triangular shape of red-infrared scattergrams of semivegetated landscapes. Scattergrams of simulated semivegetated scenes are analyzed with respect to the scales of the satellite pixel and subpixel components. Analysis of actual aerial radiometric data of a pecan orchard is presented in comparison with ground observations as preliminary confirmation of the theoretical results.

  6. Analyzing crime scene videos

    NASA Astrophysics Data System (ADS)

    Cunningham, Cindy C.; Peloquin, Tracy D.

    1999-02-01

    Since late 1996 the Forensic Identification Services Section of the Ontario Provincial Police has been actively involved in state-of-the-art image capture and the processing of video images extracted from crime scene videos. The benefits and problems of this technology for video analysis are discussed. All analysis is being conducted on SUN Microsystems UNIX computers, networked to a digital disk recorder that is used for video capture. The primary advantage of this system over traditional frame grabber technology is reviewed. Examples from actual cases are presented and the successes and limitations of this approach are explored. Suggestions to companies implementing security technology plans for various organizations (banks, stores, restaurants, etc.) will be made. Future directions for this work and new technologies are also discussed.

  7. Drivers' and non-drivers' performance in a change detection task with static driving scenes: is there a benefit of experience?

    PubMed

    Zhao, Nan; Chen, Wenfeng; Xuan, Yuming; Mehler, Bruce; Reimer, Bryan; Fu, Xiaolan

    2014-01-01

    The 'looked-but-failed-to-see' phenomenon is crucial to driving safety. Previous research utilising change detection tasks related to driving has reported inconsistent effects of driver experience on the ability to detect changes in static driving scenes. Reviewing these conflicting results, we suggest that drivers' increased ability to detect changes will only appear when the task requires a pattern of visual attention distribution typical of actual driving. By adding a distant fixation point on the road image, we developed a modified change blindness paradigm and measured detection performance of drivers and non-drivers. Drivers performed better than non-drivers only in scenes with a fixation point. Furthermore, experience effect interacted with the location of the change and the relevance of the change to driving. These results suggest that learning associated with driving experience reflects increased skill in the efficient distribution of visual attention across both the central focus area and peripheral objects. This article provides an explanation for the previously conflicting reports of driving experience effects in change detection tasks. We observed a measurable benefit of experience in static driving scenes, using a modified change blindness paradigm. These results have translational opportunities for picture-based training and testing tools to improve driver skill.

  8. Steering and positioning targets for HWIL IR testing at cryogenic conditions

    NASA Astrophysics Data System (ADS)

    Perkes, D. W.; Jensen, G. L.; Higham, D. L.; Lowry, H. S.; Simpson, W. R.

    2006-05-01

    In order to increase the fidelity of hardware-in-the-loop ground-truth testing, it is desirable to create a dynamic scene of multiple, independently controlled IR point sources. ATK-Mission Research has developed and supplied the steering mirror systems for the 7V and 10V Space Simulation Test Chambers at the Arnold Engineering Development Center (AEDC), Air Force Materiel Command (AFMC). A portion of the 10V system incorporates multiple target sources beam-combined at the focal point of a 20K cryogenic collimator. Each IR source consists of a precision blackbody with cryogenic aperture and filter wheels mounted on a cryogenic two-axis translation stage. This point source target scene is steered by a high-speed steering mirror to produce further complex motion. The scene changes dynamically in order to simulate an actual operational scene as viewed by the System Under Test (SUT) as it executes various dynamic look-direction changes during its flight to a target. Synchronization and real-time hardware-in-the-loop control is accomplished using reflective memory for each subsystem control and feedback loop. This paper focuses on the steering mirror system and the required tradeoffs of optical performance, precision, repeatability and high-speed motion as well as the complications of encoder feedback calibration and operation at 20K.

  9. Using Stories to Reframe the Social Construction of Reality: A Trio of Activities

    ERIC Educational Resources Information Center

    Morgan, Sandra; Dennehy, Robert F.

    2004-01-01

    This article first presents the theoretical grounding for both storytelling and the social construction of reality. A sequence of classroom-tested tools for combining stories with reality construction is then described. Two tools for framing reality are offered: One is an actual frame that students take out of the classroom to frame a scene in…

  10. Geometric correction of satellite data using curvilinear features and virtual control points

    NASA Technical Reports Server (NTRS)

    Algazi, V. R.; Ford, G. E.; Meyer, D. I.

    1979-01-01

    A simple, yet effective procedure for the geometric correction of partial Landsat scenes is described. The procedure is based on the acquisition of actual and virtual control points from the line printer output of enhanced curvilinear features. The accuracy of this method compares favorably with that of the conventional approach in which an interactive image display system is employed.
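
    A common first-order implementation of such a correction is to fit an affine transform between the image coordinates of the (actual or virtual) control points and their map coordinates by least squares. The abstract does not state which transform model was used, so the affine choice, the NumPy code, and the sample points below are illustrative assumptions.

    ```python
    import numpy as np

    def fit_affine(image_xy, map_xy):
        """Least-squares affine transform mapping image (col, row) points to map coordinates."""
        image_xy = np.asarray(image_xy, dtype=float)
        map_xy = np.asarray(map_xy, dtype=float)
        A = np.hstack([image_xy, np.ones((len(image_xy), 1))])  # rows of [x, y, 1]
        coeffs, *_ = np.linalg.lstsq(A, map_xy, rcond=None)     # 3x2 coefficient matrix
        return coeffs

    def apply_affine(coeffs, image_xy):
        image_xy = np.asarray(image_xy, dtype=float)
        return np.hstack([image_xy, np.ones((len(image_xy), 1))]) @ coeffs

    # Four matched control points (image pixel -> map metres), purely illustrative.
    img_pts = [(10, 12), (200, 15), (190, 180), (20, 170)]
    map_pts = [(500100, 4200050), (505800, 4200000), (505500, 4195050), (500400, 4195100)]
    T = fit_affine(img_pts, map_pts)
    print(apply_affine(T, [(100, 100)]))
    ```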

  11. Photojournalism Issues for the 1990s: Concerns for All Teachers of Journalism Courses.

    ERIC Educational Resources Information Center

    Lester, Paul

    Journalism instructors are concerned that the credibility of images and consequently of words will suffer if the image content, as the photographer took the picture at the time, is altered by a computer operator far removed from the actual scene. Any discussion of picture manipulation ethics must take into account where and why a picture was…

  12. Scenes from Day Care: How Teachers Teach and What Children Learn.

    ERIC Educational Resources Information Center

    Platt, Elizabeth Balliett

    This book describes the results of film study of every day events in day care. It focuses on teacher and child behavior as they interact at meals, naps, and play, and proposes that minute examination of what actually happens to children in specific situations is necessary to identify the kinds of positive behaviors caregivers want to build on, as…

  13. Injury prevention practices as depicted in G and PG rated movies: the sequel

    PubMed Central

    Ramsey, L; Ballesteros, M; Pelletier, A; Wolf, J

    2005-01-01

    Objective: To determine whether the depiction of injury prevention practices in children's movies released during 1998–2002 is different from an earlier study, which found that characters were infrequently depicted practicing recommended safety behaviors. Methods: The top 25 G (general audience) and PG (parental guidance suggested) rated movies per year from 1998–2002 comprised the study sample. Movies or scenes not set in the present day, animated, documentary, or not in English were excluded; fantasy scenes were also excluded. Injury prevention practices of motor vehicle occupants, pedestrians, bicyclists, and boaters were recorded for characters with speaking roles. Results: Compared with the first study, the proportion of scenes with characters wearing safety belts increased (27% v 35%, p<0.01), the proportion of scenes with characters wearing personal flotation devices decreased (17% v 0%, p<0.05), and no improvement was noted in pedestrian behavior or use of bicycle helmets. Conclusions: Despite a modest increase in safety belt usage, appropriate injury prevention practices are still infrequently shown in top grossing G and PG rated movies. The authors recommend that the entertainment industry incorporate safe practices into children's movies. Parents should call attention to the depiction of unsafe behaviors in movies and educate children to follow recommended safety practices. PMID:16326770

  14. The Social History of Open Education: Austrian and Soviet Schools in the 1920s

    ERIC Educational Resources Information Center

    Hein, George E.

    1975-01-01

    Discusses how open education arose in the United States, what its relations are to the society around it, and what it has to offer to the American scene by examining past attempts to institute it in two other countries, noting that the two examples each present graphic examples of the interrelationship between education and politics. (Author/JM)

  15. Considerations for the Composition of Visual Scene Displays: Potential Contributions of Information from Visual and Cognitive Sciences (Forum Note)

    PubMed Central

    Wilkinson, Krista M.; Light, Janice; Drager, Kathryn

    2013-01-01

    Aided augmentative and alternative communication (AAC) interventions have been demonstrated to facilitate a variety of communication outcomes in persons with intellectual disabilities. Most aided AAC systems rely on a visual modality. When the medium for communication is visual, it seems likely that the effectiveness of intervention depends in part on the effectiveness and efficiency with which the information presented in the display can be perceived, identified, and extracted by communicators and their partners. Understanding of visual-cognitive processing – that is, how a user attends, perceives, and makes sense of the visual information on the display – therefore seems critical to designing effective aided AAC interventions. In this Forum Note, we discuss characteristics of one particular type of aided AAC display, that is, Visual Scene Displays (VSDs), as they may relate to user visual and cognitive processing. We consider three specific ways in which bodies of knowledge drawn from the visual cognitive sciences may be relevant to the composition of VSDs, with the understanding that direct research with children with complex communication needs is necessary to verify or refute our speculations. PMID:22946989

  16. Transport-aware imaging

    NASA Astrophysics Data System (ADS)

    Kutulakos, Kyros N.; O'Toole, Matthew

    2015-03-01

    Conventional cameras record all light falling on their sensor regardless of the path that light followed to get there. In this paper we give an overview of a new family of computational cameras that offers many more degrees of freedom. These cameras record just a fraction of the light coming from a controllable source, based on the actual 3D light path followed. Photos and live video captured this way offer an unconventional view of everyday scenes in which the effects of scattering, refraction and other phenomena can be selectively blocked or enhanced, visual structures that are too subtle to notice with the naked eye can become apparent, and object appearance can depend on depth. We give an overview of the basic theory behind these cameras and their DMD-based implementation, and discuss three applications: (1) live indirect-only imaging of complex everyday scenes, (2) reconstructing the 3D shape of scenes whose geometry or material properties make them hard or impossible to scan with conventional methods, and (3) acquiring time-of-flight images that are free of multi-path interference.

  17. Yellow River, China

    NASA Image and Video Library

    1994-09-30

    STS068-220-033 (30 September-11 October 1994) --- Photographed through the Space Shuttle Endeavour's flight deck windows, this 70mm frame shows a small section of China's Yellow River (Huang Ho) highlighted by sunglint reflection off the surface of the water. The river flows northeastward toward the village of Tung-lin-tzu. The low dissected mountains that cover more than half of this scene rise some 2,000 feet (on the average) above the valley floor. A major east-west transportation corridor (both railway and automobile) is observed traversing the landscape north of the river. This entire region is considered to be part of the Ordos Desert, actually part of the greater Gobi located just north of this area. Approximate center coordinates of this scene are 37.5 degrees north latitude and 105.0 degrees east longitude.

  18. Video im Anfaengerunterricht. Modell: Vorgabe und Einuebung von Dialogsituationen und Sprechintentionen (Video in Teaching Beginners. Model: Example and Practice in Dialog Situations and Topics for Oral Practice)

    ERIC Educational Resources Information Center

    Bauer, Hans L.

    1977-01-01

    Describes the production, at the Goethe Institute in Osaka, of video programs for teaching beginners in German. Learning goals, actualization, sample topics and variation scenes are presented; the teaching process (in ten points) is discussed, theoretically and on the basis of experience. (Text is in German.) (IFS/WGA)

  19. Shutterless non-uniformity correction for the long-term stability of an uncooled long-wave infrared camera

    NASA Astrophysics Data System (ADS)

    Liu, Chengwei; Sui, Xiubao; Gu, Guohua; Chen, Qian

    2018-02-01

    For the uncooled long-wave infrared (LWIR) camera, the infrared (IR) irradiation the focal plane array (FPA) receives is a crucial factor that affects the image quality. Ambient temperature fluctuation as well as system power consumption can result in changes of FPA temperature and radiation characteristics inside the IR camera; these will further degrade the imaging performance. In this paper, we present a novel shutterless non-uniformity correction method to compensate for non-uniformity derived from the variation of ambient temperature. Our method combines a calibration-based method and the properties of a scene-based method to obtain correction parameters at different ambient temperature conditions, so that the IR camera performance can be less influenced by ambient temperature fluctuation or system power consumption. The calibration process is carried out in a temperature chamber with slowly changing ambient temperature and a black body as uniform radiation source. Enough uniform images are captured and the gain coefficients are calculated during this period. Then in practical application, the offset parameters are calculated via the least squares method based on the gain coefficients, the captured uniform images and the actual scene. Thus we can get a corrected output through the gain coefficients and offset parameters. The performance of our proposed method is evaluated on realistic IR images and compared with two existing methods. The images we used in experiments are obtained by a 384× 288 pixels uncooled LWIR camera. Results show that our proposed method can adaptively update correction parameters as the actual target scene changes and is more stable to temperature fluctuation than the other two methods.
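
    To make the gain/offset bookkeeping concrete, the sketch below applies the standard two-parameter correction (corrected = gain × raw + offset), with gains taken from blackbody calibration frames and offsets re-estimated against a smoothed scene estimate. This is a heavily simplified stand-in for the paper's method, assuming fixed-pattern noise is high-frequency relative to the scene; the smoothing-based scene estimate and all names are assumptions, not the authors' actual least-squares formulation.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def calibrate_gains(uniform_frames):
        """Per-pixel gains from blackbody frames: scale each pixel's mean response
        to the array-wide mean so a uniform source maps to a flat image."""
        stack = np.asarray(uniform_frames, dtype=float)   # (n_frames, H, W)
        per_pixel = stack.mean(axis=0)
        return per_pixel.mean() / per_pixel

    def update_offsets(raw_scene, gain, smooth_size=9):
        """Re-estimate per-pixel offsets against a smoothed scene estimate
        (assumes fixed-pattern noise varies faster spatially than the scene)."""
        scene_estimate = uniform_filter(gain * raw_scene, size=smooth_size)
        return scene_estimate - gain * raw_scene

    def correct(raw_scene, gain, offset):
        """Apply the two-parameter non-uniformity correction."""
        return gain * raw_scene + offset
    ```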

  20. Advanced interactive display formats for terminal area traffic control

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.

    1996-01-01

    This report describes the basic design considerations for perspective air traffic control displays. A software framework has been developed for manual viewing parameter setting (MVPS) in preparation for continued, ongoing developments on automated viewing parameter setting (AVPS) schemes. Two distinct modes of MVPS operation are considered, both of which utilize manipulation pointers embedded in the three-dimensional scene: (1) Direct manipulation of the viewing parameters -- in this mode the manipulation pointers act like the control-input device through which the viewing parameter changes are made. Some of the parameters are rate controlled, and others are position controlled. This mode is intended for making fast, iterative small changes in the parameters. (2) Indirect manipulation of the viewing parameters -- this mode is intended primarily for introducing large, predetermined changes in the parameters. Requests for changes in viewing parameter setting are entered manually by the operator by moving viewing parameter manipulation pointers on the screen. The motion of these pointers, which are an integral part of the 3-D scene, is limited to the boundaries of the screen. This arrangement has been chosen in order to preserve the correspondence between the spatial layouts of the new and the old viewing parameter setting, a feature which contributes to preventing spatial disorientation of the operator. For all viewing operations, e.g. rotation, translation and ranging, the actual change is executed automatically by the system, through gradual transitions with an exponentially damped, sinusoidal velocity profile, in this work referred to as 'slewing' motions. The slewing functions, which eliminate discontinuities in the viewing parameter changes, are designed primarily for enhancing the operator's impression that he or she is dealing with an actually existing physical system, rather than an abstract computer-generated scene. The proposed, continued research efforts will deal with the development of automated viewing parameter setting schemes. These schemes employ an optimization strategy, aimed at identifying the best possible vantage point from which the air traffic control scene can be viewed for a given traffic situation. They determine whether a change in viewing parameter setting is required and determine the dynamic path along which the change to the new viewing parameter setting should take place.
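
    One plausible reading of the 'slewing' transition described above is a velocity profile shaped as an exponentially damped half-sine, rescaled so the motion ends exactly at the requested viewing-parameter value. The profile shape, parameter names, and defaults in the sketch below are illustrative assumptions rather than the report's actual implementation.

    ```python
    import numpy as np

    def slew(x0, x1, duration=2.0, damping=2.0, n=200):
        """Smooth transition from x0 to x1 with an exponentially damped half-sine
        velocity profile (zero velocity at both ends, no discontinuities)."""
        t = np.linspace(0.0, duration, n)
        v = np.exp(-damping * t / duration) * np.sin(np.pi * t / duration)
        x = np.cumsum(v)
        return t, x0 + (x1 - x0) * x / x[-1]   # rescale so the slew ends exactly at x1

    # e.g. slew an azimuth viewing angle from 30 to 120 degrees over 2 seconds
    t, az = slew(30.0, 120.0)
    print(az[0], az[-1])  # 30.0 120.0
    ```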

  1. Unsupervised semantic indoor scene classification for robot vision based on context of features using Gist and HSV-SIFT

    NASA Astrophysics Data System (ADS)

    Madokoro, H.; Yamanashi, A.; Sato, K.

    2013-08-01

    This paper presents an unsupervised scene classification method for actualizing semantic recognition of indoor scenes. Background and foreground features are respectively extracted using Gist and color scale-invariant feature transform (SIFT) as feature representations based on context. We used hue, saturation, and value SIFT (HSV-SIFT) because of its simple algorithm with low calculation costs. Our method creates bags of features for voting visual words created from both feature descriptors to a two-dimensional histogram. Moreover, our method generates labels as candidates of categories for time-series images while maintaining stability and plasticity together. Automatic labeling of category maps can be realized using labels created using adaptive resonance theory (ART) as teaching signals for counter propagation networks (CPNs). We evaluated our method for semantic scene classification using KTH's image database for robot localization (KTH-IDOL), which is popularly used for robot localization and navigation. The mean classification accuracies of Gist, gray SIFT, one class support vector machines (OC-SVM), position-invariant robust features (PIRF), and our method are, respectively, 39.7, 58.0, 56.0, 63.6, and 79.4%. The result of our method is 15.8% higher than that of PIRF. Moreover, we applied our method for fine classification using our original mobile robot. We obtained mean classification accuracy of 83.2% for six zones.

  2. Apollo 16 Lunar Module 'Orion' at the Descartes landing site

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Apollo 16 Lunar Module 'Orion' is part of the lunar scene at the Descartes landing site, as seen in the reproduction taken from a color television transmission made by the color TV camera mounted on the Lunar Roving Vehicle. Note the U.S. flag deployed on the left. This picture was made during the second Apollo 16 extravehicular activity (EVA-2).

  3. Lunar Roving Vehicle parked in lunar depression on slope of Stone Mountain

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The Lunar Roving Vehicle appears to be parked in a deep lunar depression on the slope of Stone Mountain in this photograph of the lunar scene at Station no. 4, taken during the second Apollo 16 extravehicular activity (EVA-2) at the Descartes landing site. A sample collection bag is in the right foreground. Note field of small boulders at upper right.

  4. Characteristics of the Self-Actualized Person: Visions from the East and West.

    ERIC Educational Resources Information Center

    Chang, Raylene; Page, Richard C.

    1991-01-01

    Compares and contrasts the ways that Chinese Taoism and Zen Buddhism view the development of human potential with the ways that the self-actualization theories of Rogers and Maslow describe the human potential movement. Notes many similarities between the ways that Taoism, Zen Buddhism, and the self-actualization theories of Rogers and Maslow…

  5. Describing, using 'recognition cones'. [parallel-series model with English-like computer program]

    NASA Technical Reports Server (NTRS)

    Uhr, L.

    1973-01-01

    A parallel-serial 'recognition cone' model is examined, taking into account the model's ability to describe scenes of objects. An actual program is presented in an English-like language. The concept of a 'description' is discussed together with possible types of descriptive information. Questions regarding the level and the variety of detail are considered along with approaches for improving the serial representations of parallel systems.

  6. Chicago, Illinois, USA

    NASA Image and Video Library

    1990-03-04

    In this late winter scene of Chicago, Illinois, USA (41.5N, 87.0W), the light dusting of snow has actually enhanced the determination of the city's street pattern, parks, and other cultural features. Sited at the south end of Lake Michigan, Chicago has long served as an industrial, transportation and communications center for the midwest. The obvious snowline on the ground enables meteorologists to trace the regional groundtracks of winter storms.

  7. Devolution and Choice in Education: The School, the State and the Market. Australian Education Review No. 41.

    ERIC Educational Resources Information Center

    Whitty, Geoff; Power, Sally; Halpin, David

    This book examines recent school reforms in England and Wales, the U.S.A., Australia, New Zealand and Sweden. It suggests that, at the same time as appearing to devolve power to individual schools and parents, governments have actually been increasing their own capacity to "steer" the system at a distance. Section 1 sets the scene by outlining and…

  8. South Polar Cap

    NASA Technical Reports Server (NTRS)

    2004-01-01

    [figure removed for brevity, see original site]

    Released 28 May 2004. This image was collected on February 29, 2004, near the end of the southern summer season. The local time at the location of the image was about 2 p.m. The image shows an area in the South Polar region.

    The THEMIS VIS camera is capable of capturing color images of the martian surface using its five different color filters. In this mode of operation, the spatial resolution and coverage of the image must be reduced to accommodate the additional data volume produced from the use of multiple filters. To make a color image, three of the five filter images (each in grayscale) are selected. Each is contrast enhanced and then converted to a red, green, or blue intensity image. These three images are then combined to produce a full color, single image. Because the THEMIS color filters don't span the full range of colors seen by the human eye, a color THEMIS image does not represent true color. Also, because each single-filter image is contrast enhanced before inclusion in the three-color image, the apparent color variation of the scene is exaggerated. Nevertheless, the color variation that does appear is representative of some change in color, however subtle, in the actual scene. Note that the long edges of THEMIS color images typically contain color artifacts that do not represent surface variation.

    Image information: VIS instrument. Latitude -84.7, Longitude 9.3 East (350.7 West). 38 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  9. Mission control activity during STS-61 EVA-2

    NASA Image and Video Library

    1993-12-05

    Harry Black, at the Integrated Communications Officer's console in the Mission Control Center (MCC), monitors the second extravehicular activity (EVA-2) of the STS-61 Hubble Space Telescope (HST) servicing mission. Others pictured, left to right, are Judy Alexander, Kathy Morrison and Linda Thomas. Note monitor scene of one of HST's original solar array panels floating in space moments after being tossed away by Astronaut Kathryn C. Thornton.

  10. Himalayan Foothills, Bangladesh

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This remarkably clear, pre-monsoon view of the Himalayan foothills of Bangladesh (26.0N, 89.5E) shows the deforestation of the lower slopes for agriculture and pasture lands. The cleared lower slopes are generally used for tea cultivation. The intensity of agricultural land use, mostly in the form of small, family subsistence farms on the Ganges Plain, is evident over most of the scene. Note also the aircraft contrail and Tista River.

  11. Secondary Forest Age and Tropical Forest Biomass Estimation Using TM

    NASA Technical Reports Server (NTRS)

    Nelson, R. F.; Kimes, D. S.; Salas, W. A.; Routhier, M.

    1999-01-01

    The age of secondary forests in the Amazon will become more critical with respect to the estimation of biomass and carbon budgets as tropical forest conversion continues. Multitemporal Thematic Mapper data were used to develop land cover histories for a 33,000 square km area near Ariquemes, Rondonia over a 7-year period from 1989 to 1995. The age of the secondary forest, a surrogate for the amount of biomass (or carbon) stored above-ground, was found to be unimportant in terms of biomass budget error rates in a forested TM scene which had undergone a 20% conversion to nonforest/agricultural cover types. In such a situation, the 80% of the scene still covered by primary forest accounted for over 98% of the scene biomass. The differences between secondary forest biomass estimates developed with and without age information were inconsequential relative to the estimate of biomass for the entire scene. However, in future scenarios where all of the primary forest has been converted to agriculture and secondary forest (55% and 42% respectively), the ability to age secondary forest becomes critical. Depending on biomass accumulation rate assumptions, scene biomass budget errors on the order of -10% to +30% are likely if the age of the secondary forests is not taken into account. Single-date TM imagery cannot be used to accurately age secondary forests into single-year classes. A neural network utilizing TM band 2 and three TM spectral-texture measures (bands 3 and 5) predicted secondary forest age over a range of 0-7 years with an RMSE of 1.59 years and an R² (actual vs. predicted) of 0.37. A proposal is made, based on a literature review, to use satellite imagery to identify general secondary forest age groups which, within group, exhibit relatively constant biomass accumulation rates.

  12. Effects of distribution density and cell dimension of 3D vegetation model on canopy NDVI simulation based on DART

    NASA Astrophysics Data System (ADS)

    Tao, Zhu; Shi, Runhe; Zeng, Yuyan; Gao, Wei

    2017-09-01

    The 3D model is an important part of simulated remote sensing for earth observation. Given the small spatial extent of scenes handled by the DART software, both the detail of the model itself and the number of models in the distribution have an important impact on the scene canopy Normalized Difference Vegetation Index (NDVI). Taking Phragmites australis in the Yangtze Estuary as an example, and building on previous studies of model precision, this paper examined the effect of the P. australis model on canopy NDVI, mainly with respect to the cell dimension of the DART software and the distribution density of the P. australis model in the scene, as well as the choice of model density given the cost of computer running time in actual simulations. The DART cell dimensions and the density of the scene model were set using the optimal-precision model from existing research results. The NDVI simulation results for different model densities under different cell dimensions were compared by error analysis. By studying the relationship between relative error, absolute error, and time cost, we established a method for selecting the density of the P. australis model when simulating scenes at small spatial scales. Experiments showed that the number of P. australis plants in the simulated scene need not be the same as in the real environment, because of differences between the 3D model and the real scenario. The best simulation results, while preserving the visual effect, were obtained by keeping the density at about 40 plants per square meter.

  13. Age Differences in Selective Memory of Goal-Relevant Stimuli Under Threat.

    PubMed

    Durbin, Kelly A; Clewett, David; Huang, Ringo; Mather, Mara

    2018-02-01

    When faced with threat, people often selectively focus on and remember the most pertinent information while simultaneously ignoring any irrelevant information. Filtering distractors under arousal requires inhibitory mechanisms, which take time to recruit and often decline in older age. Despite the adaptive nature of this ability, relatively little research has examined how both threat and time spent preparing these inhibitory mechanisms affect selective memory for goal-relevant information across the life span. In this study, 32 younger and 31 older adults were asked to encode task-relevant scenes, while ignoring transparent task-irrelevant objects superimposed onto them. Threat levels were increased on some trials by threatening participants with monetary deductions if they later forgot scenes that followed threat cues. We also varied the time between threat induction and a to-be-encoded scene (i.e., 2 s, 4 s, 6 s) to determine whether both threat and timing effects on memory selectivity differ by age. We found that age differences in memory selectivity only emerged after participants spent a long time (i.e., 6 s) preparing for selective encoding. Critically, this time-dependent age difference occurred under threatening, but not neutral, conditions. Under threat, longer preparation time led to enhanced memory for task-relevant scenes and greater memory suppression of task-irrelevant objects in younger adults. In contrast, increased preparation time after threat induction had no effect on older adults' scene memory and actually worsened memory suppression of task-irrelevant objects. These findings suggest that increased time to prepare top-down encoding processes benefits younger, but not older, adults' selective memory for goal-relevant information under threat. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  14. Mosaic of Apollo 16 Descartes landing site taken from TV transmission

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A 360 degree field of view of the Apollo 16 Descartes landing site area composed of individual scenes taken from a color transmission made by the color RCA TV camera mounted on the Lunar Roving Vehicle. This panorama was made while the LRV was parked at the rim of North Ray crater (Stations 11 and 12) during the third Apollo 16 lunar surface extravehicular activity (EVA-3) by Astronauts John W. Young and Charles M. Duke Jr. The overlay identifies the directions and the key lunar terrain features. The camera panned across the rear portion of the LRV in its 360 degree sweep. Note Young and Duke walking along the edge of the crater in one of the scenes. The TV camera was remotely controlled from a console in the Mission Control Center.

  15. Optical system for object detection and delineation in space

    NASA Astrophysics Data System (ADS)

    Handelman, Amir; Shwartz, Shoam; Donitza, Liad; Chaplanov, Loran

    2018-01-01

    Object recognition and delineation is an important task in many environments, such as in crime scenes and operating rooms. Marking evidence or surgical tools and attracting the attention of the surrounding staff to the marked objects can affect people's lives. We present an optical system comprising a camera, computer, and small laser projector that can detect and delineate objects in the environment. To prove the optical system's concept, we show that it can operate in a hypothetical crime scene in which a pistol is present and automatically recognize and segment it by various computer-vision algorithms. Based on such segmentation, the laser projector illuminates the actual boundaries of the pistol and thus allows the persons in the scene to comfortably locate and measure the pistol without holding any intermediary device, such as an augmented reality handheld device, glasses, or screens. Using additional optical devices, such as a diffraction grating and a cylinder lens, the pistol size can be estimated. The exact location of the pistol in space remains static, even after its removal. Our optical system can be fixed or dynamically moved, making it suitable for various applications that require marking of objects in space.

  16. Self-adaptive calibration for staring infrared sensors

    NASA Astrophysics Data System (ADS)

    Kendall, William B.; Stocker, Alan D.

    1993-10-01

    This paper presents a new, self-adaptive technique for the correction of non-uniformities (fixed-pattern noise) in high-density infrared focal-plane detector arrays. We have developed a new approach to non-uniformity correction in which we use multiple image frames of the scene itself, and take advantage of the aim-point wander caused by jitter, residual tracking errors, or deliberately induced motion. Such wander causes each detector in the array to view multiple scene elements, and each scene element to be viewed by multiple detectors. It is therefore possible to formulate (and solve) a set of simultaneous equations from which correction parameters can be computed for the detectors. We have tested our approach with actual images collected by the ARPA-sponsored MUSIC infrared sensor. For these tests we employed a 60-frame (0.75-second) sequence of terrain images for which an out-of-date calibration was deliberately used. The sensor was aimed at a point on the ground via an operator-assisted tracking system having a maximum aim point wander on the order of ten pixels. With these data, we were able to improve the calibration accuracy by a factor of approximately 100.
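
    The 'simultaneous equations' idea can be sketched in one dimension: if the same scene is observed in several frames with known integer aim-point shifts, every sample constrains one unknown scene value plus one unknown detector offset, and the resulting linear system can be solved by least squares. The sketch below handles offsets only, with shifts assumed known; the actual system also treats gains, two-dimensional motion, and unknown jitter, so everything here is an illustrative assumption.

    ```python
    import numpy as np

    def solve_offsets(frames, shifts):
        """Estimate per-detector offsets from jittered frames of one 1-D scene.
        Model: frames[k][i] = scene[i + shifts[k]] + offset[i], with the mean
        offset pinned to zero to remove the arbitrary constant."""
        frames = np.asarray(frames, dtype=float)
        n_frames, n_det = frames.shape
        base = -min(shifts)
        n_scene = n_det + max(shifts) + base        # number of unknown scene samples
        n_unk = n_scene + n_det
        rows = n_frames * n_det + 1
        A = np.zeros((rows, n_unk))
        b = np.zeros(rows)
        r = 0
        for k in range(n_frames):
            for i in range(n_det):
                A[r, i + shifts[k] + base] = 1.0    # scene sample seen by detector i
                A[r, n_scene + i] = 1.0             # detector i's offset
                b[r] = frames[k, i]
                r += 1
        A[r, n_scene:] = 1.0                        # gauge constraint: offsets sum to zero
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        return sol[n_scene:]                        # estimated detector offsets
    ```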

  17. Seatbelt and helmet depiction on the big screen: blockbuster injury prevention messages?

    PubMed

    Cowan, John A; Dubosh, Nicole; Hadley, Craig

    2009-03-01

    Injuries from vehicle crashes are a major cause of death among American youth. Many of these injuries are worsened because of noncompliant safety practices. Messages delivered by mass media are omnipresent in young peoples' lives and influence their behavior patterns. In this investigation, we analyzed seat belt and helmet messages from a sample of top-grossing motion pictures with emphasis on scene context and character demographics. Content analysis of 50 top-grossing motion pictures for years 2000 to 2004, with coding for seat belt and helmet usage by trained media coders. In 48 of 50 movies (53% PG-13; 33% R; 10% PG; 4% G) with vehicle scenes, 518 scenes (82% car/truck; 7% taxi/limo; 7% motorcycle; 4% bicycle/skateboard) were coded. Overall, seat belt and helmet usage rates were 15.4% and 33.3%, respectively, with verbal indications for seat belt or helmet use found in 1.0% of scenes. Safety compliance rates varied by character race (18.3% white; 6.5% black; p = 0.036). No differences in compliance rates were noted for high-speed or unsafe vehicle operation. The injury rate for noncompliant characters involved in crashes was 10.7%. A regression model demonstrated black character race and escape scenes most predictive of noncompliant safety behavior. Safety compliance messages and images are starkly absent in top-grossing motion pictures resulting in, at worst, a deleterious effect on vulnerable populations and public health initiatives, and, at minimum, a lost opportunity to prevent injury and death. Healthcare providers should call on the motion picture industry to improve safety compliance messages and images in their products delivered for mass consumption.

  18. Discrepancies Between Planned and Actual Operating Room Turnaround Times at a Large Rural Hospital in Germany

    PubMed Central

    Morgenegg, Regula; Heinze, Franziska; Wieferich, Katharina; Schiffer, Ralf; Stueber, Frank; Luedi, Markus M.; Doll, Dietrich

    2017-01-01

    Objectives While several factors have been shown to influence operating room (OR) turnaround times, few comparisons of planned and actual OR turnaround times have been performed. This study aimed to compare planned and actual OR turnaround times at a large rural hospital in Northern Germany. Methods This retrospective study examined the OR turnaround data of 875 elective surgery cases scheduled at the Marienhospital, Vechta, Germany, between July and October 2014. The frequency distributions of planned and actual OR turnaround times were compared and correlations between turnaround times and various factors were established, including the time of day of the procedure, patient age and the planned duration of the surgery. Results There was a significant difference between mean planned and actual OR turnaround times (0.32 versus 0.64 hours; P <0.001). In addition, significant correlations were noted between actual OR turnaround times and the time of day of the surgery, patient age, actual duration of the procedure and staffing changes affecting the surgeon or the medical specialty of the surgery (P <0.001 each). The quotient of actual/planned OR turnaround times ranged from 1.733–3.000. Conclusion Significant discrepancies between planned and actual OR turnaround times were noted during the study period. Such findings may be potentially used in future studies to establish a tool to improve OR planning, measure OR management performance and enable benchmarking. PMID:29372083

  19. IRDIS: A Digital Scene Storage And Processing System For Hardware-In-The-Loop Missile Testing

    NASA Astrophysics Data System (ADS)

    Sedlar, Michael F.; Griffith, Jerry A.

    1988-07-01

    This paper describes the implementation of a Seeker Evaluation and Test Simulation (SETS) Facility at Eglin Air Force Base. This facility will be used to evaluate imaging infrared (IIR) guided weapon systems by performing various types of laboratory tests. One such test is termed Hardware-in-the-Loop (HIL) simulation (Figure 1) in which the actual flight of a weapon system is simulated as closely as possible in the laboratory. As shown in the figure, there are four major elements in the HIL test environment: the weapon/sensor combination, an aerodynamic simulator, an imagery controller, and an infrared imagery system. The paper concentrates on the approaches and methodologies used in the imagery controller and infrared imaging system elements for generating scene information. For procurement purposes, these two elements have been combined into an Infrared Digital Injection System (IRDIS) which provides scene storage, processing, and output interface to drive a radiometric display device or to directly inject digital video into the weapon system (bypassing the sensor). The paper describes in detail how standard and custom image processing functions have been combined with off-the-shelf mass storage and computing devices to produce a system which provides high sample rates (greater than 90 Hz), a large terrain database, high weapon rates of change, and multiple independent targets. A photo-based approach has been used to maximize terrain and target fidelity, thus providing a rich and complex scene for weapon/tracker evaluation.

  20. Skeletal camera network embedded structure-from-motion for 3D scene reconstruction from UAV images

    NASA Astrophysics Data System (ADS)

    Xu, Zhihua; Wu, Lixin; Gerke, Markus; Wang, Ran; Yang, Huachao

    2016-11-01

    Structure-from-Motion (SfM) techniques have been widely used for 3D scene reconstruction from multi-view images. However, due to the large computational costs of SfM methods there is a major challenge in processing highly overlapping images, e.g. images from unmanned aerial vehicles (UAV). This paper embeds a novel skeletal camera network (SCN) into SfM to enable efficient 3D scene reconstruction from a large set of UAV images. First, the flight control data are used within a weighted graph to construct a topologically connected camera network (TCN) to determine the spatial connections between UAV images. Second, the TCN is refined using a novel hierarchical degree bounded maximum spanning tree to generate a SCN, which contains a subset of edges from the TCN and ensures that each image is involved in at least a 3-view configuration. Third, the SCN is embedded into the SfM to produce a novel SCN-SfM method, which allows performing tie-point matching only for the actually connected image pairs. The proposed method was applied in three experiments with images from two fixed-wing UAVs and an octocopter UAV, respectively. In addition, the SCN-SfM method was compared to three other methods for image connectivity determination. The comparison shows a significant reduction in the number of matched images if our method is used, which leads to less computational costs. At the same time the achieved scene completeness and geometric accuracy are comparable.
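
    The core graph step, reducing a weighted camera-connectivity graph to a spanning structure that keeps only the strongest links, can be approximated with an off-the-shelf maximum spanning tree, as in the hedged sketch below. The paper's SCN additionally bounds node degrees hierarchically and guarantees that every image sits in at least a 3-view configuration, which this simplified stand-in does not reproduce.

    ```python
    import networkx as nx

    def skeletal_camera_network(pairwise_overlap):
        """pairwise_overlap: dict {(image_a, image_b): connection weight}, e.g. from
        flight-control-predicted overlap. Returns a maximum spanning tree that keeps
        the strongest connections (a simplified stand-in for the paper's SCN)."""
        g = nx.Graph()
        for (a, b), w in pairwise_overlap.items():
            g.add_edge(a, b, weight=w)
        return nx.maximum_spanning_tree(g, weight="weight")

    # Toy example with four images; only the three strongest links survive.
    overlaps = {("A", "B"): 0.9, ("B", "C"): 0.8, ("A", "C"): 0.4,
                ("C", "D"): 0.7, ("B", "D"): 0.3}
    scn = skeletal_camera_network(overlaps)
    print(sorted(scn.edges(data="weight")))
    ```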

  1. Summary of the Validation of the Second Version of the Aster Gdem

    NASA Astrophysics Data System (ADS)

    Meyer, D. J.; Tachikawa, T.; Abrams, M.; Crippen, R.; Krieger, T.; Gesch, D.; Carabajal, C.

    2012-07-01

    On October 17, 2011, NASA and the Ministry of Economy, Trade and Industry (METI) of Japan released the second version of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Digital Elevation Model (GDEM) to users worldwide at no charge as a contribution to the Global Earth Observing System of Systems (GEOSS). The first version of the ASTER GDEM, released on June 29, 2009, was compiled from over 1.2 million scene-based DEMs covering land surfaces between 83°N and 83°S latitudes. The second version (GDEM2) incorporates 260,000 additional scenes to improve coverage, a smaller correlation kernel to yield higher spatial resolution, and improved water masking. As with GDEM1, US and Japanese partners collaborated to validate GDEM2. Compared against 18,000 geodetic control points over the conterminous US (CONUS), it showed a mean bias of -0.20 meters and an absolute vertical accuracy of 17 meters at the 95% confidence level. The Japan study noted the GDEM2 differed from the 10-meter national elevation grid by -0.7 meters over bare areas, and by 7.4 meters over forested areas. The CONUS study noted a similar result, with the GDEM2 determined to be about 8 meters above the 1 arc-second US National Elevation Dataset (NED) over most forested areas, and more than a meter below NED over bare areas. A global ICESat study found the GDEM2 to be on average within 3 meters of altimeter-derived control. The Japan study noted a horizontal displacement of 0.23 pixels in GDEM2. A study from the US National Geospatial Intelligence Agency also determined horizontal displacement and vertical accuracy as compared to the 1 arc-second Shuttle Radar Topography Mission DEM. US and Japanese studies estimated the horizontal resolution of the GDEM2 to be between 71 and 82 meters. Finally, the number of voids and artifacts noted in GDEM1 was substantially reduced in GDEM2.

  2. To what extent do clinical notes by general practitioners reflect actual medical performance? A study using simulated patients.

    PubMed Central

    Rethans, J J; Martin, E; Metsemakers, J

    1994-01-01

    BACKGROUND. Review of clinical notes is used extensively as an indirect method of assessing doctors' performance. However, to be acceptable it must be valid. AIM. This study set out to examine the extent to which clinical notes in medical records of general practice consultations reflected doctors' actual performance during consultations. METHOD. Thirty nine general practitioners in the Netherlands were consulted by four simulated patients who were indistinguishable from real patients and who reported on the consultations. The complaints presented by the simulated patients were tension headache, acute diarrhoea and pain in the shoulder, and one presented for a check up for non-insulin dependent diabetes. Later, the doctors forwarded their medical records of these patients to the researchers. Content of consultations was measured against accepted standards for general practice and then compared with content of clinical notes. An index, or content score, was calculated as the measure of agreement between actions which had actually been recorded and actions which could have been recorded in the clinical notes. A high content score reflected a consultation which had been recorded well in the medical record. The correlation between number of actions across the four complaints recorded in the clinical notes and number of actions taken during the consultations was also calculated. RESULTS. The mean content score (interquartile range) for the four types of complaint was 0.32 (0.27-0.37), indicating that of all actions undertaken, only 32% had been recorded. However, mean content scores for the categories 'medication and therapy' and 'laboratory examination' were much higher than for the categories 'history' and 'guidance and advice' (0.68 and 0.64, respectively versus 0.29 and 0.22, respectively). The correlation between number of actions across the four complaints recorded in the clinical notes and number of actions taken during the consultations was 0.54 (P < 0.05). CONCLUSION. The use of clinical notes to audit doctors' performance in Dutch general practice is invalid. However, the use of clinical notes to rank doctors according to those who perform many or a few actions in a consultation may be justified. PMID:8185988
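
    Read literally, the content score defined above is simply the proportion of actions taken in the consultation that also appear in the clinical note. The tiny sketch below encodes that reading; the set-based matching of actions is an assumption about how agreement was scored, not the study's exact procedure.

    ```python
    def content_score(actions_recorded, actions_performed):
        """Fraction of performed actions that were also recorded in the note."""
        performed = set(actions_performed)
        recorded = set(actions_recorded) & performed
        return len(recorded) / len(performed) if performed else float("nan")

    # e.g. 8 of 25 performed actions recorded -> 0.32, the mean reported above
    print(content_score(range(8), range(25)))
    ```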

  3. Foggy perception slows us down.

    PubMed

    Pretto, Paolo; Bresciani, Jean-Pierre; Rainer, Gregor; Bülthoff, Heinrich H

    2012-10-30

    Visual speed is believed to be underestimated at low contrast, which has been proposed as an explanation of excessive driving speed in fog. Combining psychophysics measurements and driving simulation, we confirm that speed is underestimated when contrast is reduced uniformly for all objects of the visual scene independently of their distance from the viewer. However, we show that when contrast is reduced more for distant objects, as is the case in real fog, visual speed is actually overestimated, prompting drivers to decelerate. Using an artificial anti-fog (that is, fog characterized by better visibility for distant than for close objects), we demonstrate for the first time that perceived speed depends on the spatial distribution of contrast over the visual scene rather than the global level of contrast per se. Our results cast new light on how reduced visibility conditions affect perceived speed, providing important insight into the human visual system. DOI: http://dx.doi.org/10.7554/eLife.00031.001.

  4. Transient cardio-respiratory responses to visually induced tilt illusions

    NASA Technical Reports Server (NTRS)

    Wood, S. J.; Ramsdell, C. D.; Mullen, T. J.; Oman, C. M.; Harm, D. L.; Paloski, W. H.

    2000-01-01

    Although the orthostatic cardio-respiratory response is primarily mediated by the baroreflex, studies have shown that vestibular cues also contribute in both humans and animals. We have demonstrated a visually mediated response to illusory tilt in some human subjects. Blood pressure, heart and respiration rate, and lung volume were monitored in 16 supine human subjects during two types of visual stimulation, and compared with responses to real passive whole body tilt from supine to head 80 degrees upright. Visual tilt stimuli consisted of either a static scene from an overhead mirror or constant velocity scene motion along different body axes generated by an ultra-wide dome projection system. Visual vertical cues were initially aligned with the longitudinal body axis. Subjective tilt and self-motion were reported verbally. Although significant changes in cardio-respiratory parameters to illusory tilts could not be demonstrated for the entire group, several subjects showed significant transient decreases in mean blood pressure resembling their initial response to passive head-up tilt. Changes in pulse pressure and a slight elevation in heart rate were noted. These transient responses are consistent with the hypothesis that visual-vestibular input contributes to the initial cardiovascular adjustment to a change in posture in humans. On average the static scene elicited perceived tilt without rotation. Dome scene pitch and yaw elicited perceived tilt and rotation, and dome roll motion elicited perceived rotation without tilt. A significant correlation between the magnitude of physiological and subjective reports could not be demonstrated.

  5. Polarimetric Interferometry and Differential Interferometry

    DTIC Science & Technology

    2005-02-01

    example of the entropy or phase stability of a mixed scene, being the Oberpfaffenhofen area as collected by the DLR L-Band ESAR system. We note that...robust ratios of scattering elements as shown for example in table I. [10,11,12,13,14,15] The urban areas (upper right corner) in figure 2 show...height and biomass estimation, but there are many other application areas where this technology is being considered. Table I provides a selective

  6. The use of liquid latex for soot removal from fire scenes and attempted fingerprint development with Ninhydrin.

    PubMed

    Clutter, Susan Wright; Bailey, Robert; Everly, Jeff C; Mercer, Karl

    2009-11-01

    Throughout the United States, clearance rates for arson cases remain low due to fire's destructive nature, subsequent suppression, and a misconception by investigators that no forensic evidence remains. Recent research shows that fire scenes can yield fingerprints if soot layers are removed prior to using available fingerprinting processes. An experiment applying liquid latex to sooted surfaces was conducted to assess its potential to remove soot and yield fingerprints after the dried latex was peeled. Latent fingerprints were applied to glass and drywall surfaces, sooted in a controlled burn, and cooled. Liquid latex was sprayed on, dried, and peeled. Results yielded usable prints within the soot prior to removal techniques, but no further fingerprint enhancement was noted with Ninhydrin. Field studies using liquid latex will be continued by the (US) Virginia Fire Marshal Academy but it appears that liquid latex application is a suitable soot removal method for forensic applications.

  7. Microcounseling Skill Discrimination Scale: A Methodological Note

    ERIC Educational Resources Information Center

    Stokes, Joseph; Romer, Daniel

    1977-01-01

    Absolute ratings on the Microcounseling Skill Discrimination Scale (MSDS) confound the individual's use of the rating scale and actual ability to discriminate effective and ineffective counselor behaviors. This note suggests methods of scoring the MSDS that will eliminate variability attributable to response language and improve the validity of…

  8. Earth Observation taken during the Expedition 37 mission

    NASA Image and Video Library

    2013-10-30

    ISS037-E-022828 (30 Oct. 2013) --- This isn't someone's frame grab of a decorative Halloween scene, although it was photographed on Halloween eve. It is actually a picture of the Aurora Australis or Southern Lights, photographed by one of the Expedition 37 crew members on the International Space Station as the orbital complex flew over Tasmania on Oct. 30. The human-produced hardware in the picture is part of the outpost's robotic arm system.

  9. Actual Readers versus Implied Readers: Role Conflicts in Office 97.

    ERIC Educational Resources Information Center

    Shroyer, Roberta

    2000-01-01

    Explains the controversy surrounding the Office Assistant ("Paper-Clip") in Microsoft's Office 97. Discusses why actual readers rejected the default Office Assistant's role as implied writer and rebelled against the reader role implied for them. Notes users resented its intrusive behavior, rejected its implied writer role, and refused to…

  10. Shape Matching and Image Segmentation Using Stochastic Labeling

    DTIC Science & Technology

    1981-08-01

    hierarchique d’Etiquetage Probabiliste," To be presented at AFCET, 3 eme Congres, Reconnaissance Des Formes et Intelligence Artificielle , Sept. 16-18...Tenenbaum, "MSYS: A System for Reasoning About Scenes," Tech. Note 121, Artificial Intelligence Center, SRI Intl., Menlo Park, CA, 1976. [1-6] D. Marr, T...Analysis and Machine Intelligence . [1-10] O.D. Faugeras and M. Berthod, "Using Context in the Global Recognition of a Set of Objects: An Optimization

  11. Correlates of Prescription Drug Market Involvement among Young Adults

    PubMed Central

    Vuolo, Mike; Kelly, Brian C.; Wells, Brooke E.; Parsons, Jeffrey T.

    2014-01-01

    Background While a significant minority of prescription drug misusers report purchasing prescription drugs, little is known about prescription drug selling. We build upon past research on illicit drug markets, which increasingly recognizes networks and nightlife as influential, by examining prescription drug market involvement. Methods We use data from 404 young adult prescription drug misusers sampled from nightlife scenes. Using logistic regression, we examine recent selling of and being approached to sell prescription drugs, predicted using demographics, misuse, prescription access, and nightlife scene involvement. Results Those from the wealthiest parental class and heterosexuals had higher odds (OR=6.8) of selling. Higher sedative and stimulant misuse (ORs=1.03), having a stimulant prescription (OR=4.14), and having sold other illegal drugs (OR=6.73) increased the odds of selling. College bar scene involvement increased the odds of selling (OR=2.73) and being approached to sell (OR=2.09). Males (OR=1.93), stimulant users (OR=1.03), and sedative prescription holders (OR=2.11) had higher odds of being approached. Discussion College bar scene involvement was the only site associated with selling and being approached; such participation may provide a network for prescription drug markets. There were also differences between actual selling and being approached. Males were more likely to be approached, but not more likely to sell than females, while the opposite held for those in the wealthiest parental class relative to lower socioeconomic statuses. Given that misuse and prescriptions of sedatives and stimulants were associated with prescription drug market involvement, painkiller misusers may be less likely to sell their drugs given the associated physiological dependence. PMID:25175544
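    The odds ratios quoted here are the usual exponentiated coefficients from a logistic regression. A minimal sketch of how such estimates are produced (the data are synthetic and the variable names are assumptions for illustration, not the study's actual measures):

```python
# Sketch: odds ratios for "sold prescription drugs" from a logistic regression.
# Synthetic data; variable names are illustrative, not the study's measures.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 404
X = np.column_stack([
    rng.integers(0, 2, n),     # college bar scene involvement (0/1)
    rng.integers(0, 2, n),     # has a stimulant prescription (0/1)
    rng.normal(10, 5, n),      # stimulant misuse frequency
])
logit_p = -2.0 + 1.0 * X[:, 0] + 1.4 * X[:, 1] + 0.03 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
odds_ratios = np.exp(model.params)   # exponentiated coefficients = odds ratios
print(odds_ratios)
```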

  12. Relative spectral response corrected calibration inter-comparison of S-NPP VIIRS and Aqua MODIS thermal emissive bands

    NASA Astrophysics Data System (ADS)

    Efremova, Boryana; Wu, Aisheng; Xiong, Xiaoxiong

    2014-09-01

    The S-NPP Visible Infrared Imaging Radiometer Suite (VIIRS) instrument is built with strong heritage from EOS MODIS, and has a very similar thermal emissive bands (TEB) calibration algorithm and on-board calibrating source - a V-grooved blackbody. The calibration of the two instruments can be assessed by comparing the brightness temperatures retrieved from VIIRS and Aqua MODIS simultaneous nadir observations (SNO) from their spectrally matched TEB. However, even though the VIIRS and MODIS bands are similar, there are still relative spectral response (RSR) differences, and thus some differences in the retrieved brightness temperatures are expected. The differences depend on both the type and the temperature of the observed scene, and contribute to the bias and the scatter of the comparison. In this paper we use S-NPP Cross-track Infrared Sounder (CrIS) data taken simultaneously with the VIIRS data to derive a correction for the slightly different spectral coverage of VIIRS and MODIS TEB bands. An attempt to correct for RSR differences is also made using MODTRAN models, computed with physical parameters appropriate for each scene, and compared to the value derived from actual CrIS spectra. After applying the CrIS-based correction for RSR differences we see an excellent agreement between the VIIRS and Aqua MODIS measurements in the studied band pairs M13-B23, M15-B31, and M16-B32. The agreement is better than the VIIRS uncertainty at cold scenes, and improves with increasing scene temperature up to about 290 K.
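    The CrIS-based correction rests on a standard step: weight a hyperspectral radiance spectrum by each band's relative spectral response (RSR), then convert the band-averaged radiance to brightness temperature with the inverse Planck function, so that the two sensors' slightly different spectral coverage can be compared on equal terms. A minimal sketch of that step (the wavenumber grid, boxcar RSR, and radiance values below are placeholders, not the actual VIIRS/MODIS/CrIS processing):

```python
# Sketch: band-average a hyperspectral radiance spectrum with a sensor RSR and
# convert to brightness temperature via the inverse Planck function.
# Inputs (wavenumber grid, radiance spectrum, RSR) are placeholders.
import numpy as np

H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]
KB = 1.381e-23  # Boltzmann constant [J/K]

def band_radiance(wn_cm, spectrum, rsr):
    """RSR-weighted mean radiance over the band (wn_cm in cm^-1)."""
    return np.trapz(rsr * spectrum, wn_cm) / np.trapz(rsr, wn_cm)

def brightness_temperature(wn_cm, radiance_si):
    """Inverse Planck; radiance in W m^-2 sr^-1 per m^-1, wavenumber in cm^-1."""
    nu = wn_cm * 100.0  # cm^-1 -> m^-1
    return (H * C * nu / KB) / np.log(1.0 + 2.0 * H * C**2 * nu**3 / radiance_si)

# Hypothetical example: a flat spectrum weighted by an idealized boxcar RSR
# in the 11 um window region.
wn = np.linspace(850, 900, 200)                    # cm^-1
rsr = np.where((wn > 860) & (wn < 890), 1.0, 0.0)  # idealized boxcar response
spectrum = np.full_like(wn, 1.2e-3)                # placeholder radiance, roughly a 300 K scene
tb = brightness_temperature(870.0, band_radiance(wn, spectrum, rsr))
print(f"simulated band brightness temperature: {tb:.1f} K")
```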

  13. Visual wetness perception based on image color statistics.

    PubMed

    Sawayama, Masataka; Adelson, Edward H; Nishida, Shin'ya

    2017-05-01

    Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for the images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with the ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.
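    The described transformation, stronger chromatic saturation combined with a darkening luminance tone change, can be illustrated with a few lines of image code. This is only a rough sketch of the idea; the gain and gamma values are arbitrary assumptions, not the published wetness enhancing operator:

```python
# Sketch of a "wetness enhancing" style transform: raise chromatic saturation
# and darken the luminance with a compressive tone curve.
# Gain/gamma values are illustrative guesses, not the published parameters.
import colorsys
import numpy as np

def wetness_like_transform(rgb, sat_gain=1.6, tone_gamma=1.8):
    """rgb: float array in [0, 1] with shape (H, W, 3)."""
    out = np.empty_like(rgb)
    for i in range(rgb.shape[0]):
        for j in range(rgb.shape[1]):
            h, s, v = colorsys.rgb_to_hsv(*rgb[i, j])
            s = min(1.0, s * sat_gain)   # boost chromatic saturation
            v = v ** tone_gamma          # gamma > 1 darkens values in [0, 1]
            out[i, j] = colorsys.hsv_to_rgb(h, s, v)
    return out

dry = np.random.default_rng(1).random((4, 4, 3))  # stand-in for a dry-scene image
wet_looking = wetness_like_transform(dry)
print(wet_looking.shape)
```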

  14. Realism and Perspectivism: a Reevaluation of Rival Theories of Spatial Vision.

    NASA Astrophysics Data System (ADS)

    Thro, E. Broydrick

    1990-01-01

    My study reevaluates two theories of human space perception, a trigonometric surveying theory I call perspectivism and a "scene recognition" theory I call realism. Realists believe that retinal image geometry can supply no unambiguous information about an object's size and distance--and that, as a result, viewers can locate objects in space only by making discretionary interpretations based on familiar experience of object types. Perspectivists, in contrast, think viewers can disambiguate object sizes/distances on the basis of retinal image information alone. More specifically, they believe the eye responds to perspective image geometry with an automatic trigonometric calculation that not only fixes the directions and shapes, but also roughly fixes the sizes and distances of scene elements in space. Today this surveyor theory has been largely superseded by the realist approach, because most vision scientists believe retinal image geometry is ambiguous about the scale of space. However, I show that there is a considerable body of neglected evidence, both past and present, tending to call this scale ambiguity claim into question. I maintain that this evidence against scale ambiguity could hardly be more important, if one considers its subversive implications for the scene recognition theory that is not only today's reigning approach to spatial vision, but also the foundation for computer scientists' efforts to create space-perceiving robots. If viewers were deemed to be capable of automatic surveying calculations, the discretionary scene recognition theory would lose its main justification. Clearly, it would be difficult for realists to maintain that we viewers rely on scene recognition for space perception in spite of our ability to survey. And in reality, as I show, the surveyor theory does a much better job of describing the everyday space we viewers actually see--a space featuring stable, unambiguous relationships among scene elements, and a single horizon and vanishing point for (meter-scale) receding objects. In addition, I argue, the surveyor theory raises fewer philosophical difficulties, because it is more in harmony with our everyday concepts of material objects, human agency and the self.

  15. The representation of visual depth perception based on the plenoptic function in the retina and its neural computation in visual cortex V1.

    PubMed

    Songnian, Zhao; Qi, Zou; Chang, Liu; Xuemin, Liu; Shousi, Sun; Jun, Qiu

    2014-04-23

    How it is possible to "faithfully" represent a three-dimensional stereoscopic scene using Cartesian coordinates on a plane, and how three-dimensional perceptions differ between an actual scene and an image of the same scene are questions that have not yet been explored in depth. They seem like commonplace phenomena, but in fact, they are important and difficult issues for visual information processing, neural computation, physics, psychology, cognitive psychology, and neuroscience. The results of this study show that the use of plenoptic (or all-optical) functions and their dual plane parameterizations can not only explain the nature of information processing from the retina to the primary visual cortex and, in particular, the characteristics of the visual pathway's optical system and its affine transformation, but they can also clarify the reason why the vanishing point and line exist in a visual image. In addition, they can better explain the reasons why a three-dimensional Cartesian coordinate system can be introduced into the two-dimensional plane to express a real three-dimensional scene. 1. We introduce two different mathematical expressions of the plenoptic functions, Pw and Pv that can describe the objective world. We also analyze the differences between these two functions when describing visual depth perception, that is, the difference between how these two functions obtain the depth information of an external scene.2. The main results include a basic method for introducing a three-dimensional Cartesian coordinate system into a two-dimensional plane to express the depth of a scene, its constraints, and algorithmic implementation. In particular, we include a method to separate the plenoptic function and proceed with the corresponding transformation in the retina and visual cortex.3. We propose that size constancy, the vanishing point, and vanishing line form the basis of visual perception of the outside world, and that the introduction of a three-dimensional Cartesian coordinate system into a two dimensional plane reveals a corresponding mapping between a retinal image and the vanishing point and line.
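    For reference, the plenoptic function and its common two-plane (dual-plane) parameterization can be written as follows. This is the standard textbook form and is not necessarily identical to the authors' Pw and Pv notation:

```latex
% 7D plenoptic function: radiance observed at viewpoint (x, y, z),
% in direction (theta, phi), at wavelength lambda and time t.
P = P(x, y, z, \theta, \phi, \lambda, t)

% Two-plane (dual-plane) parameterization of the static, monochromatic case:
% a ray is indexed by its intersections (u, v) and (s, t) with two parallel planes.
L = L(u, v, s, t)
```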

  16. Physical Mechanism, Spectral Detection, and Potential Mitigation of 3D Cloud Effects on OCO-2 Radiances and Retrievals

    NASA Astrophysics Data System (ADS)

    Cochrane, S.; Schmidt, S.; Massie, S. T.; Iwabuchi, H.; Chen, H.

    2017-12-01

    Analysis of multiple partially cloudy scenes as observed by OCO-2 in nadir and target mode (published previously and reviewed here) revealed that XCO2 retrievals are systematically biased in presence of scattered clouds. The bias can only partially be removed by applying more stringent filtering, and it depends on the degree of scene inhomogeneity as quantified with collocated MODIS/Aqua imagery. The physical reason behind this effect was so far not well understood because in contrast to cloud-mediated biases in imagery-derived aerosol retrievals, passive gas absorption spectroscopy products do not depend on the absolute radiance level and should therefore be less sensitive to 3D cloud effects and surface albedo variability. However, preliminary evidence from 3D radiative transfer calculations suggested that clouds in the vicinity of an OCO-2 footprint not only offset the reflected radiance spectrum, but introduce a spectrally dependent perturbation that affects absorbing channels disproportionately, and therefore bias the spectroscopy products. To understand the nature of this effect for a variety of scenes, we developed the OCO-2 radiance simulator, which uses the available information on a scene (e.g., MODIS-derived surface albedo, cloud distribution, and other parameters) as the basis for 3D radiative transfer calculations that can predict the radiances observed by OCO-2. We present this new tool and show examples of its utility for a few specific scenes. More importantly, we draw conclusions about the physical mechanism behind this 3D cloud effect on radiances and ultimately OCO-2 retrievals, which involves not only the clouds themselves but also the surface. Harnessed with this understanding, we can now detect cloud vicinity effects in the OCO-2 spectra directly, without actually running the 3D radiance simulator. Potentially, it is even possible to mitigate these effects and thus increase data harvest in regions with ubiquitous cloud cover such as the Amazon. We will discuss some of the hurdles one faces when using only OCO-2 spectra to accomplish this goal, but also that scene context from the other A-Train instruments and the new radiance simulator tool can help overcome some of them.

  17. The representation of visual depth perception based on the plenoptic function in the retina and its neural computation in visual cortex V1

    PubMed Central

    2014-01-01

    Background How it is possible to “faithfully” represent a three-dimensional stereoscopic scene using Cartesian coordinates on a plane, and how three-dimensional perceptions differ between an actual scene and an image of the same scene are questions that have not yet been explored in depth. They seem like commonplace phenomena, but in fact, they are important and difficult issues for visual information processing, neural computation, physics, psychology, cognitive psychology, and neuroscience. Results The results of this study show that the use of plenoptic (or all-optical) functions and their dual plane parameterizations can not only explain the nature of information processing from the retina to the primary visual cortex and, in particular, the characteristics of the visual pathway’s optical system and its affine transformation, but they can also clarify the reason why the vanishing point and line exist in a visual image. In addition, they can better explain the reasons why a three-dimensional Cartesian coordinate system can be introduced into the two-dimensional plane to express a real three-dimensional scene. Conclusions 1. We introduce two different mathematical expressions of the plenoptic functions, P w and P v that can describe the objective world. We also analyze the differences between these two functions when describing visual depth perception, that is, the difference between how these two functions obtain the depth information of an external scene. 2. The main results include a basic method for introducing a three-dimensional Cartesian coordinate system into a two-dimensional plane to express the depth of a scene, its constraints, and algorithmic implementation. In particular, we include a method to separate the plenoptic function and proceed with the corresponding transformation in the retina and visual cortex. 3. We propose that size constancy, the vanishing point, and vanishing line form the basis of visual perception of the outside world, and that the introduction of a three-dimensional Cartesian coordinate system into a two dimensional plane reveals a corresponding mapping between a retinal image and the vanishing point and line. PMID:24755246

  18. Development of Simulated Disturbing Source for Isolation Switch

    NASA Astrophysics Data System (ADS)

    Cheng, Lin; Liu, Xiang; Deng, Xiaoping; Pan, Zhezhe; Zhou, Hang; Zhu, Yong

    2018-01-01

    To reproduce the harsh electromagnetic environment of an actual substation and support research on electromagnetic compatibility testing of electronic instrument transformers, an isolation switch simulated disturbance source system was developed. It replaces the original test arrangement, in which an actual isolation switch served as the disturbance source for the electronic instrument transformer electromagnetic compatibility test system, and helps standardize the original test. In this paper, a circuit breaker is used to control the opening and closing of a gap arc so as to simulate the operation of an isolation switch, and the simulated disturbance source system is designed accordingly. Comparison with actual isolation switch test results shows that the system meets the test requirements and that the simulated disturbance source system has good stability and high reliability.

  19. Overview of the EarthCARE simulator and its applications

    NASA Astrophysics Data System (ADS)

    van Zadelhoff, G.; Donovan, D. P.; Lajas, D.

    2011-12-01

    The EarthCARE Simulator (ECSIM) was initially developed in 2004 as a scientific tool to simulate atmospheric scenes, radiative transfer and instrument models for the four instruments of the EarthCARE mission. ECSIM has subsequently been significantly further enhanced and is evolving into a tool for both mission performance assessment and L2 retrieval development. It is an ESA requirement that all L2 retrieval algorithms foreseen for the ground segment will be integrated and tested in ECSIM. It is furthermore envisaged that the (retrieval part of) ECSIM will be the tool for scientists to work with on updates and new L2 algorithms during the EarthCARE Commissioning phase and beyond. ECSIM is capable of performing 'end to end' simulations of single, or any combination of, the EarthCARE instruments. That is, ECSIM starts with an input atmospheric "scene", then uses various radiative transfer and instrument models in order to generate synthetic observations which can be subsequently inverted. The results of the inversions may then be compared to the input "truth". ECSIM consists of a modular general framework populated by various models. The models within ECSIM are grouped according to the following scheme: 1) Scene creation models (3D atmospheric scene definition) 2) Orbit models (orbit and orientation of the platform as it overflies the scene) 3) Forward models (calculate the signal impinging on the telescope/antenna of the instrument(s) in question) 4) Instrument models (calculate the instrument response to the signals calculated by the Forward models) 5) Retrieval models (invert the instrument signals to recover relevant geophysical information) Within the default ECSIM models, crude instrument-specific parameterizations (i.e. empirically based radar reflectivity vs. IWC relationships) are avoided. Instead, the radiative transfer forward models are kept as separate as possible from the instrument models. In order to accomplish this, the atmospheric scenes are specified in high detail (i.e. bin-resolved [cloud] size distributions) and the relevant wavelength-dependent optical properties are specified in a separate database. This helps ensure that all the instruments involved in the simulation are treated consistently and that the physical relationships between the various measurements are realistically captured. ECSIM is mainly used as an algorithm development platform for EarthCARE. However, it has also been used for simulating Calipso, CloudSAT, future multi-wavelength HSRL satellite missions and airborne HSRL data, showing the versatility of the tool. Validating L2 retrieval algorithms requires the creation of atmospheric scenes ranging in complexity from very simple (blocky) to 'realistic' (high resolution) scenes. Recent work on the evaluation of aerosol retrieval algorithms from satellite lidar data (e.g. ATLID) required these latter scenes, which were created based on HSRL and in-situ measurements from the DLR FALCON aircraft. The synthetic signals were subsequently evaluated by comparing to the original measured signals. In this presentation an overview of the EarthCARE Simulator, its philosophy and the construction of realistic "scenes" based on actual campaign observations is presented.
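    The scheme above is, in effect, a pipeline of interchangeable modules: scene creation, orbit, forward model, instrument model, retrieval, with the retrieval finally compared against the input truth. A toy sketch of that end-to-end structure (all names and the trivial physics are invented for illustration; they are not ECSIM's actual interfaces):

```python
# Sketch of an ECSIM-like end-to-end chain: scene -> forward model ->
# instrument model -> retrieval, with the retrieval compared to the input
# "truth". All names and the toy physics are invented for illustration.
from dataclasses import dataclass

@dataclass
class Scene:
    ice_water_content: float      # stands in for a full 3D scene definition

def forward_model(scene: Scene) -> float:
    """Toy forward model: signal reaching the instrument."""
    return 10.0 * scene.ice_water_content

def instrument_model(signal: float, offset: float = 0.1) -> float:
    """Toy instrument model: a fixed offset standing in for noise/response."""
    return signal + offset

def retrieval_model(measurement: float) -> float:
    """Toy retrieval: invert the toy forward model."""
    return measurement / 10.0

scene = Scene(ice_water_content=0.5)
truth = scene.ice_water_content
retrieved = retrieval_model(instrument_model(forward_model(scene)))
print(f"truth={truth:.3f}, retrieved={retrieved:.3f}")  # compare retrieval to truth
```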

  20. Space Shuttle food tray

    NASA Image and Video Library

    1983-11-28

    STS009-05-0153 (28 Nov. - 8 Dec. 1983) --- Though STS-9 was the space shuttle Columbia's sixth spaceflight, it was the first opportunity for an onboard galley, some of the results of which are shown in this 35mm scene on the flight deck. The metal tray makes for easy preparation and serving of in-space meals for crew members. This crewman is seated at the pilot's station on the flight deck. The actual galley is located in the middeck. Photo credit: NASA

  1. Multispectral scanner system for ERTS: Four band scanner system. Volume 2: Engineering model panoramic pictures and engineering tests

    NASA Technical Reports Server (NTRS)

    1972-01-01

    This document is Volume 2 of the three volumes of the Final Report for the four band Multispectral Scanner System (MSS). It contains the results of an analysis of pictures of actual outdoor scenes imaged by the engineering model MSS, covering spectral response, resolution, noise, and video correction. Also included are the results of engineering tests on the MSS for reflectance and saturation from clouds. Finally, two panoramic pictures of Yosemite National Park are provided.

  2. Diode Lasers and Light Emitting Diodes Operating at Room Temperature with Wavelengths Above 3 Micrometers

    DTIC Science & Technology

    2011-11-29

    as an active region of mid-infrared LEDs. It should be noted that active region based on interband transition is equally useful for both laser and...IR LED technology for infrared scene projectors”, Dr. E. Golden, Air Force Research Laboratory, Eglin Air Force Base. “A stable mid-IR, GaSb...multimode lasers. Single spatial mode 3-3.2 µm diode lasers were developed. LEDs operate at wavelength above 4 µm at RT. Dual color mid-infrared

  3. Vegetation Versus Man-Made Object Detection from Imagery for Unmanned Vehicles in Off-Road Environments

    DTIC Science & Technology

    2013-05-01

    saliency, natural scene statistics. 1. INTRODUCTION: Research into the area of autonomous navigation for unmanned ground vehicles (UGV) has accelerated in...recent years. This is partly due to the success of programs such as the DARPA Grand Challenge and the dream of driverless cars, but is also due to the... There have been several major advances in autonomous navigation for unmanned ground vehicles in controlled urban environments in

  4. Semiannual Report to the Congress. October 1, 2012 to March 31, 2013

    DTIC Science & Technology

    2013-03-01

    were matched to fibers from the specialist’s blanket, pillowcase and shirt. Handwriting analysis of a note found at the scene of the suicide...by promising not to report him. Wilt released her, saying she was “lucky” he forgot his hatchet, and instructed her to meet him again in 20 min...child pushed Rosales-Lopez away. Rosales-Lopez instructed the child not to say anything or he would lose his job with the Air Force. AFOSI

  5. Guest Editor's introduction: Special issue on distributed virtual environments

    NASA Astrophysics Data System (ADS)

    Lea, Rodger

    1998-09-01

    Distributed virtual environments (DVEs) combine technology from 3D graphics, virtual reality and distributed systems to provide an interactive 3D scene that supports multiple participants. Each participant has a representation in the scene, often known as an avatar, and is free to navigate through the scene and interact with both the scene and other viewers of the scene. Changes to the scene, for example, position changes of one avatar as the associated viewer navigates through the scene, or changes to objects in the scene via manipulation, are propagated in real time to all viewers. This ensures that all viewers of a shared scene `see' the same representation of it, allowing sensible reasoning about the scene. Early work on such environments was restricted to their use in simulation, in particular in military simulation. However, over recent years a number of interesting and potentially far-reaching attempts have been made to exploit the technology for a range of other uses, including: Social spaces. Such spaces can be seen as logical extensions of the familiar text chat space. In 3D social spaces avatars, representing participants, can meet in shared 3D scenes and in addition to text chat can use visual cues and even in some cases spatial audio. Collaborative working. A number of recent projects have attempted to explore the use of DVEs to facilitate computer-supported collaborative working (CSCW), where the 3D space provides a context and work space for collaboration. Gaming. The shared 3D space is already familiar, albeit in a constrained manner, to the gaming community. DVEs are a logical superset of existing 3D games and can provide a rich framework for advanced gaming applications. e-commerce. The ability to navigate through a virtual shopping mall and to look at, and even interact with, 3D representations of articles has appealed to the e-commerce community as it searches for the best method of presenting merchandise to electronic consumers. The technology needed to support these systems crosses a number of disciplines in computer science. These include, but are certainly not limited to, real-time graphics for the accurate and realistic representation of scenes, group communications for the efficient update of shared consistent scene data, user interface modelling to exploit the use of the 3D representation and multimedia systems technology for the delivery of streamed graphics and audio-visual data into the shared scene. It is this intersection of technologies and the overriding need to provide visual realism that places such high demands on the underlying distributed systems infrastructure and makes DVEs such fertile ground for distributed systems research. Two examples serve to show how DVE developers have exploited the unique aspects of their domain. Communications. The usual tension between latency and throughput is particularly noticeable within DVEs. To ensure the timely update of multiple viewers of a particular scene requires that such updates be propagated quickly. However, the sheer volume of changes to any one scene calls for techniques that minimize the number of distinct updates that are sent to the network. Several techniques have been used to address this tension; these include the use of multicast communications, and in particular multicast in wide-area networks to reduce actual message traffic. Multicast has been combined with general group communications to partition updates to related objects or users of a scene. 
A less traditional approach has been the use of dead reckoning whereby a client application that visualizes the scene calculates position updates by extrapolating movement based on previous information. This allows the system to reduce the number of communications needed to update objects that move in a stable manner within the scene. Scaling. DVEs, especially those used for social spaces, are required to support large numbers of simultaneous users in potentially large shared scenes. The desire for scalability has driven different architectural designs, for example, the use of fully distributed architectures which scale well but often suffer performance costs versus centralized and hierarchical architectures in which the inverse is true. However, DVEs have also exploited the spatial nature of their domain to address scalability and have pioneered techniques that exploit the semantics of the shared space to reduce data updates and so allow greater scalability. Several of the systems reported in this special issue apply a notion of area of interest to partition the scene and so reduce the participants in any data updates. The specification of area of interest differs between systems. One approach has been to exploit a geographical notion, i.e. a regular portion of a scene, or a semantic unit, such as a room or building. Another approach has been to define the area of interest as a spatial area associated with an avatar in the scene. The five papers in this special issue have been chosen to highlight the distributed systems aspects of the DVE domain. The first paper, on the DIVE system, described by Emmanuel Frécon and Mårten Stenius explores the use of multicast and group communication in a fully peer-to-peer architecture. The developers of DIVE have focused on its use as the basis for collaborative work environments and have explored the issues associated with maintaining and updating large complicated scenes. The second paper, by Hiroaki Harada et al, describes the AGORA system, a DVE concentrating on social spaces and employing a novel communication technique that incorporates position update and vector information to support dead reckoning. The paper by Simon Powers et al explores the application of DVEs to the gaming domain. They propose a novel architecture that separates out higher-level game semantics - the conceptual model - from the lower-level scene attributes - the dynamic model, both running on servers, from the actual visual representation - the visual model - running on the client. They claim a number of benefits from this approach, including better predictability and consistency. Wolfgang Broll discusses the SmallView system which is an attempt to provide a toolkit for DVEs. One of the key features of SmallView is a sophisticated application level protocol, DWTP, that provides support for a variety of communication models. The final paper, by Chris Greenhalgh, discusses the MASSIVE system which has been used to explore the notion of awareness in the 3D space via the concept of `auras'. These auras define an area of interest for users and support a mapping between what a user is aware of, and what data update rate the communications infrastructure can support. We hope that this selection of papers will serve to provide a clear introduction to the distributed system issues faced by the DVE community and the approaches they have taken in solving them. 
Finally, we wish to thank Hubert Le Van Gong for his tireless efforts in pulling together all these papers and both the referees and the authors of the papers for the time and effort in ensuring that their contributions teased out the interesting distributed systems issues for this special issue. † E-mail address: rodger@arch.sel.sony.com
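    The dead reckoning mentioned above for DVE communications amounts to remote clients extrapolating an object's last broadcast state, with the owning client sending a fresh update only when the extrapolation error grows too large. A small sketch of that idea (the threshold and the one-dimensional state are illustrative assumptions):

```python
# Sketch of dead reckoning for a shared virtual scene: clients extrapolate an
# avatar's position from the last broadcast state and only send a new update
# when the extrapolation error exceeds a threshold. Values are illustrative.
from dataclasses import dataclass

@dataclass
class State:
    t: float      # timestamp of the last update [s]
    pos: float    # 1D position for simplicity
    vel: float    # velocity at the last update

def extrapolate(last: State, now: float) -> float:
    """Remote clients predict position by linear extrapolation."""
    return last.pos + last.vel * (now - last.t)

def needs_update(last: State, actual_pos: float, now: float, threshold: float = 0.5) -> bool:
    """The owning client sends an update only when prediction error is too large."""
    return abs(extrapolate(last, now) - actual_pos) > threshold

last_sent = State(t=0.0, pos=0.0, vel=2.0)
print(extrapolate(last_sent, now=1.5))                    # remote view predicts 3.0
print(needs_update(last_sent, actual_pos=3.2, now=1.5))   # small error -> no update
print(needs_update(last_sent, actual_pos=4.0, now=1.5))   # large error -> send update
```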

  6. Providers' Reported and Actual Use of Coaching Strategies in Natural Environments

    ERIC Educational Resources Information Center

    Salisbury, Christine; Cambray-Engstrom, Elizabeth; Woods, Juliann

    2012-01-01

    This case study examined the agreement between reported and actual use of coaching strategies based on home visit data collected on a diverse sample of providers and families. Paired videotape and contact note data of and from providers during home visits were collected over a six month period and analyzed using structured protocols. Results of…

  7. 78 FR 46425 - Sale and Issue of Marketable Book-Entry Treasury Bills, Notes, and Bonds

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-31

    ... a simple-interest money market yield computed on an actual/360 basis, subject to an appropriate... 13-week Treasury bill auction High Rate (stop out rate) converted into a simple actual/360 interest...-week Treasury bill auction High Rate that has been translated into a simple-interest money market yield...
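    The actual/360 simple-interest conversion this notice refers to is the standard one for Treasury bills: the auction High (discount) Rate fixes the price, and the money market yield is the simple interest earned over the holding period, annualized on an actual/360 day count. A hedged sketch of that arithmetic (the 5.25% rate and 91-day term are made-up inputs):

```python
# Sketch: convert a Treasury bill discount rate into a simple-interest
# money market yield on an actual/360 basis. Inputs are illustrative.
def bill_price(discount_rate: float, days: int, face: float = 100.0) -> float:
    """Price per 100 face value under the bank-discount convention."""
    return face * (1.0 - discount_rate * days / 360.0)

def money_market_yield(discount_rate: float, days: int) -> float:
    """Simple interest earned over the period, annualized on actual/360."""
    p = bill_price(discount_rate, days)
    return (100.0 - p) / p * 360.0 / days

d, days = 0.0525, 91            # hypothetical 13-week bill at a 5.25% High Rate
print(round(bill_price(d, days), 4))          # ~98.6729
print(round(money_market_yield(d, days), 6))  # ~0.053206, i.e. about 5.32%
```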

  8. Teachers' Beliefs, Perceived Practice and Actual Classroom Practice in Relation to Traditional (Teacher-Centered) and Constructivist (Learner-Centered) Teaching (Note 1)

    ERIC Educational Resources Information Center

    Kaymakamoglu, Sibel Ersel

    2018-01-01

    This study explored the EFL teachers' beliefs, perceived practice and actual classroom practice in relation to Traditional (teacher-centered) and Constructivist (learner-centered) teaching in Cyprus Turkish State Secondary Schools context. For this purpose, semi-structured interviews and structured observations were employed with purposively…

  9. Position coding effects in a 2D scenario: the case of musical notation.

    PubMed

    Perea, Manuel; García-Chamorro, Cristina; Centelles, Arnau; Jiménez, María

    2013-07-01

    How does the cognitive system encode the location of objects in a visual scene? In the past decade, this question has attracted much attention in the field of visual-word recognition (e.g., "jugde" is perceptually very close to "judge"). Letter transposition effects have been explained in terms of perceptual uncertainty or shared "open bigrams". In the present study, we focus on note position coding in music reading (i.e., a 2D scenario). The usual way to display music is the staff (i.e., a set of 5 horizontal lines and their resultant 4 spaces). When reading musical notation, it is critical to identify not only each note (temporal duration), but also its pitch (y-axis) and its temporal sequence (x-axis). To examine note position coding, we employed a same-different task in which two briefly and consecutively presented staves contained four notes. The experiment was conducted with experts (musicians) and non-experts (non-musicians). For the "different" trials, the critical conditions involved staves in which two internal notes were switched vertically, switched horizontally, or fully transposed, together with the appropriate control conditions. Results revealed that note position coding was only approximate at the early stages of processing and that this encoding process was modulated by expertise. We examine the implications of these findings for models of object position encoding. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Modeling human perception and estimation of kinematic responses during aircraft landing

    NASA Technical Reports Server (NTRS)

    Schmidt, David K.; Silk, Anthony B.

    1988-01-01

    The thrust of this research is to determine the estimation accuracy of aircraft responses based on observed cues. By developing the geometric relationships between the outside visual scene and the kinematics during landing, the visual and kinesthetic cues available to the pilot were modeled. Both foveal and peripheral vision were examined. The objective was first to determine estimation accuracy in a variety of flight conditions, and second to ascertain which parameters are most important and lead to the best achievable accuracy in estimating the actual vehicle response. It was found that altitude estimation was very sensitive to the field of view (FOV). For this model, the motion cue of perceived vertical acceleration was shown to be less important than the visual cues. The inclusion of runway geometry in the visual scene increased estimation accuracy in most cases. Finally, it was shown that, for this model, if the pilot has an incorrect internal model of the system kinematics, the choice of observations thought to be 'optimal' may in fact be suboptimal.

  11. The Role of the Technical Specialist in Disaster Response and Recovery

    NASA Astrophysics Data System (ADS)

    Curtis, J. C.

    2017-12-01

    Technical Specialists provide scientific expertise for making operational decisions during natural hazard emergencies. Technical Specialists are important members of any Incident Management Team (IMT), as described in the National Incident Management System (NIMS) that has been designed to respond to emergencies. Safety for the responders and the threatened population is the foremost consideration in command decisions and objectives, and the Technical Specialist is on scene and in the command post to support and promote safety while aiding decisions on incident objectives. The Technical Specialist's expertise can also support plans, logistics, and even finance, as well as operations. This presentation will provide actual examples of the value of on-scene Technical Specialists, using National Weather Service "Decision Support Meteorologists" and "Incident Meteorologists". These examples will demonstrate the critical role of scientists who are trained in advising and presenting life-critical analysis and forecasts during emergencies. A case will be made for a local, state, and/or national registry of trained and deployment-ready scientists who can support emergency response.

  12. Eye movements to audiovisual scenes reveal expectations of a just world.

    PubMed

    Callan, Mitchell J; Ferguson, Heather J; Bindemann, Markus

    2013-02-01

    When confronted with bad things happening to good people, observers often engage reactive strategies, such as victim derogation, to maintain a belief in a just world. Although such reasoning is usually made retrospectively, we investigated the extent to which knowledge of another person's good or bad behavior can also bias people's online expectations for subsequent good or bad outcomes. Using a fully crossed design, participants listened to auditory scenarios that varied in terms of whether the characters engaged in morally good or bad behavior while their eye movements were tracked around concurrent visual scenes depicting good and bad outcomes. We found that the good (bad) behavior of the characters influenced gaze preferences for good (bad) outcomes just prior to the actual outcomes being revealed. These findings suggest that beliefs about a person's moral worth encourage observers to foresee a preferred deserved outcome as the event unfolds. We include evidence to show that this effect cannot be explained in terms of affective priming or matching strategies. 2013 APA, all rights reserved

  13. An improved contrast enhancement algorithm for infrared images based on adaptive double plateaus histogram equalization

    NASA Astrophysics Data System (ADS)

    Li, Shuo; Jin, Weiqi; Li, Li; Li, Yiyang

    2018-05-01

    Infrared thermal images can reflect the thermal-radiation distribution of a particular scene. However, the contrast of the infrared images is usually low. Hence, it is generally necessary to enhance the contrast of infrared images in advance to facilitate subsequent recognition and analysis. Based on the adaptive double plateaus histogram equalization, this paper presents an improved contrast enhancement algorithm for infrared thermal images. In the proposed algorithm, the normalized coefficient of variation of the histogram, which characterizes the level of contrast enhancement, is introduced as feedback information to adjust the upper and lower plateau thresholds. The experiments on actual infrared images show that compared to the three typical contrast-enhancement algorithms, the proposed algorithm has better scene adaptability and yields better contrast-enhancement results for infrared images with more dark areas or a higher dynamic range. Hence, it has high application value in contrast enhancement, dynamic range compression, and digital detail enhancement for infrared thermal images.
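    The core of double-plateau histogram equalization is to clip the grey-level histogram from above, so that large uniform backgrounds cannot dominate the mapping, and to lift it from below, so that weak details are not lost, before computing the equalization lookup table. A compact sketch under those assumptions follows; the adaptive threshold-selection feedback described in the paper is omitted and the plateau values are fixed constants here:

```python
# Sketch of double-plateau histogram equalization for a high-dynamic-range
# infrared frame: clip histogram bins to an upper plateau, raise non-empty
# bins to a lower plateau, then equalize with the modified histogram.
# Thresholds are fixed here; the paper adapts them with feedback based on the
# histogram's coefficient of variation.
import numpy as np

def double_plateau_equalize(img, upper=200, lower=5, out_levels=256):
    hist = np.bincount(img.ravel(), minlength=int(img.max()) + 1).astype(float)
    hist = np.minimum(hist, upper)                       # upper plateau: limit dominant bins
    hist[hist > 0] = np.maximum(hist[hist > 0], lower)   # lower plateau: boost weak bins
    cdf = np.cumsum(hist)
    cdf /= cdf[-1]
    lut = np.round(cdf * (out_levels - 1)).astype(np.uint8)
    return lut[img]

frame = np.random.default_rng(2).integers(0, 4000, size=(128, 128))  # fake 12-bit IR data
enhanced = double_plateau_equalize(frame)
print(enhanced.dtype, enhanced.min(), enhanced.max())
```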

  14. Foggy perception slows us down

    PubMed Central

    Pretto, Paolo; Bresciani, Jean-Pierre; Rainer, Gregor; Bülthoff, Heinrich H

    2012-01-01

    Visual speed is believed to be underestimated at low contrast, which has been proposed as an explanation of excessive driving speed in fog. Combining psychophysics measurements and driving simulation, we confirm that speed is underestimated when contrast is reduced uniformly for all objects of the visual scene independently of their distance from the viewer. However, we show that when contrast is reduced more for distant objects, as is the case in real fog, visual speed is actually overestimated, prompting drivers to decelerate. Using an artificial anti-fog—that is, fog characterized by better visibility for distant than for close objects, we demonstrate for the first time that perceived speed depends on the spatial distribution of contrast over the visual scene rather than the global level of contrast per se. Our results cast new light on how reduced visibility conditions affect perceived speed, providing important insight into the human visual system. DOI: http://dx.doi.org/10.7554/eLife.00031.001 PMID:23110253

  15. Mesopause Horizontal wind estimates based on AIM CIPS polar mesospheric cloud pattern matching

    NASA Astrophysics Data System (ADS)

    Rong, P.; Yue, J.; Russell, J. M.; Gong, J.; Wu, D. L.; Randall, C. E.

    2013-12-01

    A cloud pattern matching approach is used to estimate horizontal winds in the mesopause region using Polar Mesospheric Cloud (PMC) albedo data measured by the Cloud Imaging and Particle Size instrument on the AIM satellite. Measurements for all 15 orbits per day throughout July 2007 are used to achieve statistical significance. For each orbit, eighteen out of the twenty-seven scenes are used for the pattern matching operation. Some scenes at the lower latitudes are not included because there is barely any cloud coverage for these scenes. The frame-size chosen is about 12 degrees in longitude and 3 degrees in latitude. There is no strict criterion in choosing the frame size since PMCs are widespread in the polar region and most local patterns do not have a clearly defined boundary. The frame moves at a step of 1/6th of the frame size in both the longitudinal and latitudinal directions to achieve as many 'snap-shots' as possible. A 70% correlation is used as a criterion to define an acceptable match between two patterns at two time frames; in this case the time difference is about 3.6 minutes that spans every 5 'bowtie' scenes. A 70% criterion appears weak if the chosen pattern is expected to act like a tracer. It is known that PMC brightness varies rapidly with a changing temperature and water vapor environment or changing nucleation conditions, especially on smaller spatial scales; therefore PMC patterns are not ideal tracers. Nevertheless, within a short time span such as 3.6 minutes a 70% correlation is sufficient to identify two cloud patterns that come from the same source region, although the two patterns may exhibit a significant difference in the actual brightness. Analysis of a large number of matched cloud patterns indicates that over the 3.6-minute time span about 70% of the patterns remain in the same locations. Given the 25-km2 horizontal resolution of CIPS data, this suggests that the overall magnitude of horizontal wind at PMC altitudes (~80-87 km) in the polar summer cannot exceed 25 m/s. In other words, the wind detection resolution is no better than 25 m/s. There are about 10% of cases in which it appears that an easterly prevails, with a peak value at about 80-100m/s. In another 5% of cases a westerly appears to prevail. The remaining 15% cases are related to either invalid cloud features with poor background correction or the situation in which the matching occurs at the corners of the bowties. The AIM CIPS cloud pattern matching results overall suggest that higher wind speed (25-200 m/s) can be reached occasionally, while in a majority of cases the wind advection caused albedo change is much smaller than the in-situ albedo change. However, we must note that this analysis was a feasibility study and the short period analyzed may not be representative of the winds over a seasonal time scale or the multiple-year average.
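    The matching step itself is a standard normalized cross-correlation: a cloud frame from one scene is correlated against shifted versions of the corresponding frame in a later scene, and the best-matching shift divided by the time separation gives an apparent drift speed. A small sketch of that logic (the 70% acceptance criterion and the roughly 3.6-minute time gap follow the description above; the 5 km pixel scale is inferred from the quoted 25 km² resolution, and the data are synthetic):

```python
# Sketch: estimate cloud-pattern drift between two scenes by maximizing the
# normalized cross-correlation over horizontal shifts, then convert the best
# shift to a speed. Data and search range are illustrative.
import numpy as np

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def drift_speed(frame_t0, frame_t1, pixel_km=5.0, dt_s=216.0, max_shift=6, min_corr=0.7):
    """Return (speed in m/s, correlation) for the best zonal shift, or None if no match."""
    best_dx, best_c = 0, -1.0
    for dx in range(-max_shift, max_shift + 1):
        c = ncc(frame_t0, np.roll(frame_t1, dx, axis=1))
        if c > best_c:
            best_dx, best_c = dx, c
    if best_c < min_corr:
        return None                       # below the 70% acceptance criterion
    return abs(best_dx) * pixel_km * 1000.0 / dt_s, best_c

rng = np.random.default_rng(3)
scene0 = rng.random((20, 60))
scene1 = np.roll(scene0, 3, axis=1) + 0.05 * rng.random((20, 60))  # pattern moved 3 pixels
print(drift_speed(scene0, scene1))
```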

  16. Correlates of prescription drug market involvement among young adults.

    PubMed

    Vuolo, Mike; Kelly, Brian C; Wells, Brooke E; Parsons, Jeffrey T

    2014-10-01

    While a significant minority of prescription drug misusers report purchasing prescription drugs, little is known about prescription drug selling. We build upon past research on illicit drug markets, which increasingly recognizes networks and nightlife as influential, by examining prescription drug market involvement. We use data from 404 young adult prescription drug misusers sampled from nightlife scenes. Using logistic regression, we examine recent selling of and being approached to sell prescription drugs, predicted using demographics, misuse, prescription access, and nightlife scene involvement. Those from the wealthiest parental class and heterosexuals had higher odds (OR=6.8) of selling. Higher sedative and stimulant misuse (OR=1.03), having a stimulant prescription (OR=4.14), and having sold other illegal drugs (OR=6.73) increased the odds of selling. College bar scene involvement increased the odds of selling (OR=2.73) and being approached to sell (OR=2.09). Males (OR=1.93), stimulant users (OR=1.03), and sedative prescription holders (OR=2.11) had higher odds of being approached. College bar scene involvement was the only site associated with selling and being approached; such participation may provide a network for prescription drug markets. There were also differences between actual selling and being approached. Males were more likely to be approached, but not more likely to sell than females, while the opposite held for those in the wealthiest parental class relative to lower socioeconomic statuses. Given that misuse and prescriptions of sedatives and stimulants were associated with prescription drug market involvement, painkiller misusers may be less likely to sell their drugs given the associated physiological dependence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  17. Super Typhoon Halong off Taiwan

    NASA Technical Reports Server (NTRS)

    2002-01-01

    On July 14, 2002, Super Typhoon Halong was east of Taiwan (left edge) in the western Pacific Ocean. At the time this image was taken the storm was a Category 4 hurricane, with maximum sustained winds of 115 knots (132 miles per hour), but as recently as July 12, winds were at 135 knots (155 miles per hour). Halong has moved northwards and pounded Okinawa, Japan, with heavy rain and high winds, just days after tropical Storm Chataan hit the country, creating flooding and killing several people. The storm is expected to be a continuing threat on Monday and Tuesday. This image was acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra satellite on July 14, 2002. Please note that the high-resolution scene provided here is 500 meters per pixel. For a copy of the scene at the sensor's fullest resolution, visit the MODIS Rapid Response Image Gallery. Image courtesy Jacques Descloitres, MODIS Land Rapid Response Team at NASA GSFC

  18. Location of colorectal cancer: colonoscopy versus surgery. Yield of colonoscopy in predicting actual location.

    PubMed

    Blum-Guzman, Juan Pablo; Wanderley de Melo, Silvio

    2017-07-01

    Recent studies suggest that differences in biological characteristics and risk factors across cancer sites within the colon and rectum may translate to differences in survival. It can be challenging at times to determine the precise anatomical location of a lesion with a luminal view during colonoscopy. The aim of this study is to determine if there is a significant difference between the location of colorectal cancers described by gastroenterologists in colonoscopies and the actual anatomical location noted on operative and pathology reports after colon surgery. A single-center retrospective analysis of colonoscopies of patients with reported colonic masses from January 2005 to April 2014 (n = 380) was carried out. Assessed data included demography and operative and pathology reports. Findings were compared between the location of colorectal cancers described by gastroenterologists in colonoscopies and the actual anatomical location noted on operative reports or pathology samples. We identified 380 colonic masses, of which 158 were confirmed adenocarcinomas. Of these, 123 underwent surgical resection; 27 had to be excluded since no specific location was reported on their operative or pathology report. An absolute difference between endoscopic and surgical location was found in 32 cases (33%). Of these, 22 (23%) differed by 1 colonic segment, 8 (8%) differed by 2 colonic segments, and 2 (2%) differed by 3 colonic segments. There is a significant difference between the location of colorectal cancers reported by gastroenterologists during endoscopy and the actual anatomical location noted on operative or pathology reports after colon surgery. Endoscopic tattooing should be used when faced with any luminal lesions of interest.

  19. The Economics of the Duration of the Baseball World Series

    ERIC Educational Resources Information Center

    Cassuto, Alexander E.; Lowenthal, Franklin

    2007-01-01

    This note examines some statistical features of the major league baseball World Series. We show that, based upon actual historical data, we cannot reject the hypothesis that the two World Series teams are evenly matched. Yet, we can also calculate the relative strengths of the teams that would best match the actual outcomes, and we find that those…
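    The even-match benchmark the note tests against is easy to reproduce: if each game is an independent toss with p = 0.5, the probability that a best-of-seven series ends in exactly k games follows from requiring the eventual winner to take game k and exactly three of the first k - 1 games. A short check of those probabilities (the comparison with the historical frequencies themselves is not attempted here):

```python
# Probability that a best-of-seven series between evenly matched teams
# (p = 0.5 per game, games independent) ends in exactly k games:
# the eventual winner takes game k and exactly 3 of the first k - 1 games;
# the factor 2 allows either team to be that winner.
from math import comb

p = 0.5
for k in range(4, 8):
    prob = 2 * comb(k - 1, 3) * p**4 * (1 - p)**(k - 4)
    print(k, "games:", prob)
# Expected output: 4 -> 0.125, 5 -> 0.25, 6 -> 0.3125, 7 -> 0.3125
```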

  20. REKRIATE: A Knowledge Representation System for Object Recognition and Scene Interpretation

    NASA Astrophysics Data System (ADS)

    Meystel, Alexander M.; Bhasin, Sanjay; Chen, X.

    1990-02-01

    What humans actually observe and how they comprehend this information is complex due to Gestalt processes and the interaction of context in predicting the course of thinking and enforcing one idea while repressing another. How we extract knowledge from the scene, what we actually get from the scene, and what we bring from our mechanisms of perception are areas separated by a thin, ill-defined line. The purpose of this paper is to present a system for Representing Knowledge and Recognizing and Interpreting Attention Trailed Entities, dubbed REKRIATE. It will be used as a tool for discovering the underlying principles involved in knowledge representation required for conceptual learning. REKRIATE has some inherited knowledge and is given a vocabulary which is used to form rules for identification of the object. It has various modalities of sensing and has the ability to measure the distance between the objects in the image as well as the similarity between different images of presumably the same object. All sensations received from the matrix of different sensors are put into an adequate form. The methodology proposed is applicable not only to the pictorial or visual world representation, but to any sensing modality. It is based upon two premises: a) the inseparability of all domains of the world representation, including the linguistic one as well as those formed by various sensor modalities, and b) the representativity of the object at several levels of resolution simultaneously.

  1. Training, Drills Pivotal in Mounting Response to Orlando Shooting.

    PubMed

    Albert, Eric; Bullard, Timothy

    2016-08-01

    Emergency providers at Orlando Regional Medical Center in Orlando, FL, faced multiple challenges in responding to the worst mass shooting in U.S. history. As the scene of the shooting was only three blocks away from the hospital, there was little time to prepare when staff were notified that victims would begin arriving shortly after 2 a.m. on June 12. Also, fears of a gunman near the hospital briefly put the ED on lockdown. However, using the incident command system, the hospital was able to mobilize quickly, receiving 44 patients, nine of whom died shortly after arrival. Administrators note that recent training exercises geared toward a mass shooting event facilitated the response and probably saved lives. Patients arrived at the hospital in two waves, with the initial surge occurring right after the shooting took place around 2 a.m., and the second surge occurring about three hours later. At one point, more than 90 patients were in the ED, more than half for reasons unrelated to the shooting. Clinicians contended with a much higher than usual noise level while treating patients, making it hard to hear reports from EMS personnel. Also, treatment had to commence prior to identification for some patients who arrived unconscious or unable to speak. While surgeons and other key specialists were called into the hospital to address identified needs, administrators actually called hospital personnel to tell them not to come in unless they were notified. This prevented added management hurdles.

  2. Some aspects of steam-water flow simulation in geothermal wells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shulyupin, Alexander N.

    1996-01-24

    Current aspects of steam-water flow simulation in geothermal wells are considered: the required quality of a simulator, flow regimes, the mass conservation equation, the momentum conservation equation, the energy conservation equation and the condition (closure) equations. Shortcomings of the traditional hydraulic approach are noted, the main questions of simulator development under the hydraulic approach are considered, and new possibilities offered by the structure approach are noted.
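    As a point of reference (not taken from the paper), the hydraulic approach is usually built on one-dimensional mixture balances of mass, momentum and energy plus closure ("condition") relations such as steam-table properties and a slip or void-fraction correlation. A minimal steady-state sketch, assuming the homogeneous-mixture form with the coordinate z measured along the well:

    ```latex
    \begin{aligned}
    &\text{Mass:}     && \frac{d}{dz}\bigl(\rho_m w A\bigr) = 0,\\[2pt]
    &\text{Momentum:} && -\frac{dp}{dz} = \rho_m w \frac{dw}{dz} + \rho_m g\cos\theta + \left(\frac{dp}{dz}\right)_{\mathrm{fr}},\\[2pt]
    &\text{Energy:}   && \frac{d}{dz}\!\left(h_m + \frac{w^2}{2} + g z\cos\theta\right) = \frac{q}{\dot m},
    \end{aligned}
    ```

    where ρ_m, w and h_m are the mixture density, velocity and enthalpy, θ the well inclination, (dp/dz)_fr a friction-correlation term, and q the lateral heat exchange per unit length. The structure approach discussed in the paper departs from these mixture equations.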

  3. Users Do the Darndest Things: True Stories from the CyLab Usable Privacy and Security Laboratory

    NASA Astrophysics Data System (ADS)

    Cranor, Lorrie Faith

    How can we make security and privacy software more usable? The first step is to study our users. Ideally, we would watch them interacting with security or privacy software in situations where they face actual risk. But everyday computer users don't sit around fiddling with security software, and subjecting users to actual security attacks raises ethical and legal concerns. Thus, it can be difficult to observe users interacting with security and privacy software in their natural habitat. At the CyLab Usable Privacy and Security Laboratory, we've conducted a wide variety of studies aimed at understanding how users think about security and privacy and how they interact with security and privacy software. In this talk I'll give a behind-the-scenes tour of some of the techniques we've used to study users both in the laboratory and in the wild. I'll discuss the trials and tribulations of designing and carrying out security and privacy user studies, and highlight some of our surprising observations. Find out what privacy-sensitive items you can actually get study participants to purchase, how you can observe users' responses to a man-in-the-middle attack without actually conducting such an attack, why it's hard to get people to use high-tech cell phones even when you give them away, and what's actually in that box behind the couch in my office.

  4. Earth Observation

    NASA Image and Video Library

    2014-07-19

    ISS040-E-070412 (19 July 2014) --- One of the Expedition 40 crew members aboard the Earth-orbiting International Space Station recorded this July 19 panorama featuring wildfires which are plaguing the Northwest and causing widespread destruction. (Note: south is at the top of the frame). The orbital outpost was flying 223 nautical miles above Earth at the time of the photo. Parts of Oregon and Washington are included in the scene. Mt. Jefferson, Three Sisters and Mt. St. Helens are all snow-capped and visible in the photo, and the Columbia River can also be delineated.

  5. Suicidal carbon monoxide inhalation of exhaust fumes. Investigation of cases.

    PubMed

    Tsunenari, S; Yonemitsu, K; Kanda, M; Yoshida, S

    1985-09-01

    The inhalation of automobile exhaust gases is a relatively frequent suicide method. Two such cases of special interest to forensic pathology and toxicology are presented. In case 1, a suicide note disclosed the victim's mental state, the conditions inside the car, and the toxic effects of automobile exhaust. In case 2, a reconstruction experiment revealed factors important to the scene investigation, such as the size of the vinyl hose and the condition of the connection between the hose and the exhaust pipe.

  6. s95-16445

    NASA Image and Video Library

    2014-08-07

    S95-16445 (13-22 July 1995) --- A wide angle view from the rear shows activity in the new Mission Control Center (MCC), opened for operation and dedicated during the STS-70 mission. The Space Shuttle Discovery was just passing over Florida at the time this photo was taken (note the Mercator map and TV scene on the screens). The new MCC, developed at a cost of about $50 million, replaces the mainframe-based, NASA-unique design of the old Mission Control with a standard workstation-based, local area network system commonly in use today.

  7. Heat Shield Construction for NASA InSight Mission

    NASA Image and Video Library

    2015-05-27

    In this February 2015 scene from a clean room at Lockheed Martin Space Systems, Denver, specialists are building the heat shield to protect NASA's InSight spacecraft when it is speeding through the Martian atmosphere. Note: After thorough examination, NASA managers have decided to suspend the planned March 2016 launch of the Interior Exploration using Seismic Investigations, Geodesy and Heat Transport (InSight) mission. The decision follows unsuccessful attempts to repair a leak in a section of the prime instrument in the science payload. http://photojournal.jpl.nasa.gov/catalog/PIA19404

  8. An Updated AP2 Beamline TURTLE Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gormley, M.; O'Day, S.

    1991-08-23

    This note describes a TURTLE model of the AP2 beamline. This model was created by D. Johnson and improved by J. Hangst. The authors of this note have made additional improvements which reflect recent element and magnet setting changes. The magnet characteristics measurements and survey data compiled to update the model will be presented. A printout of the actual TURTLE deck may be found in appendix A.

  9. Thermospheric Mass Density Specification: Synthesis of Observations and Models

    DTIC Science & Technology

    2013-10-21

    Simulation Experiments (OSSEs) of the column-integrated ratio of atomic oxygen and molecular nitrogen. Note that OSSEs assimilate, for a given ... realistic observing system, synthetically generated observational data often sampled from model simulation results, in place of actually observed values ... and molecular oxygen mass mixing ratio). Note that in the TIEGCM the molecular nitrogen mass mixing ratio is specified so that the sum of mixing

  10. Modeling the Performance of Direct-Detection Doppler Lidar Systems in Real Atmospheres

    NASA Technical Reports Server (NTRS)

    McGill, Matthew J.; Hart, William D.; McKay, Jack A.; Spinhirne, James D.

    1999-01-01

    Previous modeling of the performance of spaceborne direct-detection Doppler lidar systems has assumed extremely idealized atmospheric models. Here we develop a technique for modeling the performance of these systems in a more realistic atmosphere, based on actual airborne lidar observations. The resulting atmospheric model contains cloud and aerosol variability that is absent in other simulations of spaceborne Doppler lidar instruments. To produce a realistic simulation of daytime performance, we include solar radiance values that are based on actual measurements and are allowed to vary as the viewing scene changes. Simulations are performed for two types of direct-detection Doppler lidar systems: the double-edge and the multi-channel techniques. Both systems were optimized to measure winds from Rayleigh backscatter at 355 nm. Simulations show that the measurement uncertainty during daytime is degraded by only about 10-20% compared to nighttime performance, provided a proper solar filter is included in the instrument design.
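    To illustrate why daytime degradation stays modest once a solar filter is included, a shot-noise-limited error budget can be written as below. This is a simplified sketch, not the authors' instrument model; the `sensitivity` figure and photon counts are illustrative only.

    ```python
    import numpy as np

    def velocity_uncertainty(n_signal, n_background, n_dark, sensitivity):
        """Shot-noise-limited wind error for a direct-detection Doppler lidar.

        n_signal     : detected signal photoelectrons in a range bin
        n_background : solar-background photoelectrons passing the solar filter
        n_dark       : detector dark counts in the same bin
        sensitivity  : fractional response change per m/s of the edge or
                       multi-channel discriminator (instrument-specific)
        """
        snr = n_signal / np.sqrt(n_signal + n_background + n_dark)
        return 1.0 / (sensitivity * snr)

    night = velocity_uncertainty(2.0e4, 0.0, 50.0, sensitivity=5e-3)
    day = velocity_uncertainty(2.0e4, 4.0e3, 50.0, sensitivity=5e-3)
    print(f"daytime degradation: {100 * (day / night - 1):.1f}%")  # roughly 10%
    ```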

  11. Modeling the performance of direct-detection Doppler lidar systems including cloud and solar background variability.

    PubMed

    McGill, M J; Hart, W D; McKay, J A; Spinhirne, J D

    1999-10-20

    Previous modeling of the performance of spaceborne direct-detection Doppler lidar systems assumed extremely idealized atmospheric models. Here we develop a technique for modeling the performance of these systems in a more realistic atmosphere, based on actual airborne lidar observations. The resulting atmospheric model contains cloud and aerosol variability that is absent in other simulations of spaceborne Doppler lidar instruments. To produce a realistic simulation of daytime performance, we include solar radiance values that are based on actual measurements and are allowed to vary as the viewing scene changes. Simulations are performed for two types of direct-detection Doppler lidar system: the double-edge and the multichannel techniques. Both systems were optimized to measure winds from Rayleigh backscatter at 355 nm. Simulations show that the measurement uncertainty during daytime is degraded by only approximately 10-20% compared with nighttime performance, provided that a proper solar filter is included in the instrument design.

  12. Separating pitch chroma and pitch height in the human brain

    PubMed Central

    Warren, J. D.; Uppenkamp, S.; Patterson, R. D.; Griffiths, T. D.

    2003-01-01

    Musicians recognize pitch as having two dimensions. On the keyboard, these are illustrated by the octave and the cycle of notes within the octave. In perception, these dimensions are referred to as pitch height and pitch chroma, respectively. Pitch chroma provides a basis for presenting acoustic patterns (melodies) that do not depend on the particular sound source. In contrast, pitch height provides a basis for segregation of notes into streams to separate sound sources. This paper reports a functional magnetic resonance experiment designed to search for distinct mappings of these two types of pitch change in the human brain. The results show that chroma change is specifically represented anterior to primary auditory cortex, whereas height change is specifically represented posterior to primary auditory cortex. We propose that tracking of acoustic information streams occurs in anterior auditory areas, whereas the segregation of sound objects (a crucial aspect of auditory scene analysis) depends on posterior areas. PMID:12909719

  13. Separating pitch chroma and pitch height in the human brain.

    PubMed

    Warren, J D; Uppenkamp, S; Patterson, R D; Griffiths, T D

    2003-08-19

    Musicians recognize pitch as having two dimensions. On the keyboard, these are illustrated by the octave and the cycle of notes within the octave. In perception, these dimensions are referred to as pitch height and pitch chroma, respectively. Pitch chroma provides a basis for presenting acoustic patterns (melodies) that do not depend on the particular sound source. In contrast, pitch height provides a basis for segregation of notes into streams to separate sound sources. This paper reports a functional magnetic resonance experiment designed to search for distinct mappings of these two types of pitch change in the human brain. The results show that chroma change is specifically represented anterior to primary auditory cortex, whereas height change is specifically represented posterior to primary auditory cortex. We propose that tracking of acoustic information streams occurs in anterior auditory areas, whereas the segregation of sound objects (a crucial aspect of auditory scene analysis) depends on posterior areas.

  14. A note on image degradation, disability glare, and binocular vision

    NASA Astrophysics Data System (ADS)

    Rajaram, Vandana; Lakshminarayanan, Vasudevan

    2013-08-01

    Disability glare due to scattering of light causes a reduction in visual performance through a luminous veil over the scene, which impairs tasks such as contrast detection. In this note, we report a study of the effect of this veiling luminance on human stereoscopic vision. We measured the effect of glare on the horopter determined using the apparent fronto-parallel plane (AFPP) criterion. The empirical longitudinal horopter measured with the AFPP criterion was analyzed using the so-called analytic plot, whose parameters were used for quantitative measurement of binocular vision. Image degradation has a major effect on binocular vision as measured by the horopter. Under the conditions tested, it appears that if vision is sufficiently degraded, the addition of disability glare does not cause any significant further compromise in depth perception as measured by the horopter.

  15. Remote sensing depth invariant index parameters in shallow benthic habitats for bottom type classification.

    NASA Astrophysics Data System (ADS)

    Gapper, J.; El-Askary, H. M.; Linstead, E.

    2017-12-01

    Ground cover prediction of benthic habitats using remote sensing imagery requires substantial feature engineering. Artifacts that confound the ground cover characteristics must be severely reduced or eliminated while the distinguishing features are exposed. In particular, because of wavelength attenuation in the water column, a machine learning algorithm will primarily detect depth, yet per-pixel depths are difficult to know at a grand scale. Previous research has taken an in situ approach, applying a depth invariant index to a small area of interest within a Landsat 8 scene. We aim to abstract this process for application to an entire Landsat scene, as well as to other locations, in order to study change detection in shallow benthic zones on a global scale. We have developed a methodology and applied it to more than 25 different Landsat 8 scenes. The images were first preprocessed to mask land, clouds, and other distortions; then atmospheric correction via dark pixel subtraction was applied. Finally, depth invariant indices were calculated for each location and the associated parameters recorded. Findings showed how robust the resulting parameters (deep-water radiance, depth invariant constant, band radiance variance/covariance, and ratio of attenuation) were across all scenes. We then created false color composite images of the depth invariant indices for each location. We noted several artifacts within some sites in the form of patterns or striations that did not appear to be aligned with variations in subsurface ground cover types. Further research into depth surveys for these sites revealed depths consistent with one or more wavelengths fully attenuating. This result showed that our model framework generalizes well but is limited to the penetration depths imposed by wavelength attenuation. Finally, we compared the parameters associated with the depth invariant calculation, which were consistent across most scenes, and explained any outliers observed. We concluded that the depth invariant index framework can be deployed on a large scale for ground cover detection in shallow waters (less than 16.8 m, or 5.2 m for three DII measurements).
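    For readers unfamiliar with the underlying transform, a common (Lyzenga-style) form of the depth-invariant index for one band pair is sketched below; the function name, clipping constants and the uniform-bottom mask are placeholders, and the paper's exact preprocessing differs.

    ```python
    import numpy as np

    def depth_invariant_index(band_i, band_j, deep_i, deep_j, sand_mask):
        """Depth-invariant index for one band pair (Lyzenga-style sketch).

        band_i, band_j : atmospherically corrected radiance/reflectance arrays
        deep_i, deep_j : deep-water values used for dark-pixel subtraction
        sand_mask      : boolean mask of a uniform bottom type over varying depth
        """
        xi = np.log(np.clip(band_i - deep_i, 1e-6, None))
        xj = np.log(np.clip(band_j - deep_j, 1e-6, None))
        # Ratio of attenuation coefficients estimated from the uniform-bottom pixels
        var_i, var_j = np.var(xi[sand_mask]), np.var(xj[sand_mask])
        cov_ij = np.cov(xi[sand_mask], xj[sand_mask])[0, 1]
        a = (var_i - var_j) / (2.0 * cov_ij)
        k_ratio = a + np.sqrt(a * a + 1.0)
        return xi - k_ratio * xj, k_ratio
    ```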

  16. 3D workflow for HDR image capture of projection systems and objects for CAVE virtual environments authoring with wireless touch-sensitive devices

    NASA Astrophysics Data System (ADS)

    Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin

    2006-02-01

    A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments vs. objects are reviewed. Samples of both types of virtual authoring, the actual CAVE environment and a sculpture, are shown. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. Traditional methods of controlling navigation through virtual environments include gloves, HUDs and 3D mouse devices. By integrating a wireless network that includes both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated. Wireless devices can then be added, including PDAs, smart phones, Tablet PCs, portable gaming consoles, and Pocket PCs.
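    As a rough stand-in for the HDR capture step (the paper's CAVEPIPE tools are its own; this is a generic OpenCV sketch with placeholder file names and exposure times):

    ```python
    import cv2
    import numpy as np

    # Bracketed exposures of the same scene or object and their exposure times (s)
    files = ["scene_ev-2.jpg", "scene_ev0.jpg", "scene_ev+2.jpg"]
    times = np.array([1 / 500.0, 1 / 125.0, 1 / 30.0], dtype=np.float32)
    images = [cv2.imread(f) for f in files]

    # Recover the camera response, merge to an HDR radiance map, and tone map
    # for preview on a low-dynamic-range display
    response = cv2.createCalibrateDebevec().process(images, times)
    hdr = cv2.createMergeDebevec().process(images, times, response)
    ldr = cv2.createTonemapDrago(gamma=2.2).process(hdr)
    cv2.imwrite("scene_hdr_preview.jpg", np.clip(ldr * 255, 0, 255).astype("uint8"))
    ```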

  17. Low-Visibility Visual Simulation with Real Fog

    NASA Technical Reports Server (NTRS)

    Chase, Wendell D.

    1982-01-01

    An environmental fog simulation (EFS) attachment was developed to aid in the study of natural low-visibility visual cues and subsequently used to examine the realism effect upon the aircraft simulator visual scene. A review of the basic fog equations indicated that two major factors must be accounted for in the simulation of low visibility: one due to atmospheric attenuation and one due to veiling luminance. These factors are compared systematically by comparing actual measurements to those computed from the fog equations, and by comparing runway-visual-range-related visual-scene contrast values with the calculated values. These values are also compared with the simulated equivalent equations and with contrast measurements obtained from a current electronic fog synthesizer to help identify areas in which improvements are needed. These differences in technique, the measured values, the features of both systems, a pilot opinion survey of the EFS fog, and improvements (by combining features of both systems) that are expected to significantly increase the potential as well as flexibility for producing a very high-fidelity, low-visibility visual simulation are discussed.
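    The report's own equations are not reproduced here, but the two factors it names are conventionally written (Koschmieder-style) as an exponential contrast attenuation plus a veiling-luminance reduction:

    ```latex
    \begin{aligned}
    C(R)  &= C_0\, e^{-\sigma R} &&\text{(atmospheric attenuation over range } R\text{, extinction coefficient } \sigma)\\
    C'(R) &= C(R)\,\frac{L_b}{L_b + L_v} &&\text{(further loss from veiling luminance } L_v)
    \end{aligned}
    ```

    with C_0 the inherent target/background contrast and L_b the apparent background luminance at the observer; the runway visual range then follows from the range at which the contrast falls to the detection threshold.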

  18. Low-visibility visual simulation with real fog

    NASA Technical Reports Server (NTRS)

    Chase, W. D.

    1981-01-01

    An environmental fog simulation (EFS) attachment was developed to aid in the study of natural low-visibility visual cues and subsequently used to examine the realism effect upon the aircraft simulator visual scene. A review of the basic fog equations indicated that two major factors must be accounted for in the simulation of low visibility - one due to atmospheric attenuation and one due to veiling luminance. These factors are compared systematically by (1) comparing actual measurements to those computed from the fog equations, and (2) comparing runway-visual-range-related visual-scene contrast values with the calculated values. These values are also compared with the simulated equivalent equations and with contrast measurements obtained from a current electronic fog synthesizer to help identify areas in which improvements are needed. These differences in technique, the measured values, the features of both systems, a pilot opinion survey of the EFS fog, and improvements (by combining features of both systems) that are expected to significantly increase the potential as well as flexibility for producing a very high-fidelity low-visibility visual simulation are discussed.

  19. What Is Actually Affected by the Scrambling of Objects When Localizing the Lateral Occipital Complex?

    PubMed

    Margalit, Eshed; Biederman, Irving; Tjan, Bosco S; Shah, Manan P

    2017-09-01

    The lateral occipital complex (LOC), the cortical region critical for shape perception, is localized with fMRI by its greater BOLD activity when viewing intact objects compared with their scrambled versions (resembling texture). Despite hundreds of studies investigating LOC, what the LOC localizer accomplishes (beyond distinguishing shape from texture) has never been resolved. By independently scattering the intact parts of objects, the axis structure defining the relations between parts was no longer defined. This led to a diminished BOLD response, despite the increase in the number of independent entities (the parts) produced by the scattering, thus indicating that LOC specifies interpart relations, in addition to specifying the shape of the parts themselves. LOC's sensitivity to relations is not confined to those between parts but is also readily apparent between objects, rendering it (and not subsequent "place" areas) the critical region for the representation of scenes. Moreover, that these effects are witnessed with novel as well as familiar intact objects and scenes suggests that the relations are computed on the fly, rather than being retrieved from memory.

  20. Causal Inference for Spatial Constancy across Saccades

    PubMed Central

    Atsma, Jeroen; Maij, Femke; Koppen, Mathieu; Irwin, David E.; Medendorp, W. Pieter

    2016-01-01

    Our ability to interact with the environment hinges on creating a stable visual world despite the continuous changes in retinal input. To achieve visual stability, the brain must distinguish the retinal image shifts caused by eye movements and shifts due to movements of the visual scene. This process appears not to be flawless: during saccades, we often fail to detect whether visual objects remain stable or move, which is called saccadic suppression of displacement (SSD). How does the brain evaluate the memorized information of the presaccadic scene and the actual visual feedback of the postsaccadic visual scene in the computations for visual stability? Using an SSD task, we test how participants localize the presaccadic position of the fixation target, the saccade target or a peripheral non-foveated target that was displaced parallel or orthogonal to the saccade direction during a horizontal saccade, and subsequently viewed for three different durations. Results showed different localization errors of the three targets, depending on the viewing time of the postsaccadic stimulus and its spatial separation from the presaccadic location. We modeled the data through a Bayesian causal inference mechanism, in which at the trial level an optimal mixing of two possible strategies, integration vs. separation of the presaccadic memory and the postsaccadic sensory signals, is applied. Fits of this model generally outperformed other plausible decision strategies for producing SSD. Our findings suggest that humans exploit a Bayesian inference process with two causal structures to mediate visual stability. PMID:26967730
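    A trial-level sketch of such a two-structure inference is given below. This is a generic causal-inference formulation, not the authors' fitted model; all parameter values and the model-averaging readout are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import norm

    def causal_inference_estimate(x_mem, x_vis, sig_mem, sig_vis,
                                  p_common=0.7, move_sd=20.0):
        """Minimal sketch of trial-level causal inference across a saccade.

        x_mem, x_vis : presaccadic (memory) and postsaccadic (visual) position samples
        sig_mem/vis  : their assumed Gaussian noise SDs (deg)
        p_common     : prior probability that the object did not move
        move_sd      : SD of the assumed displacement prior if it did move
        """
        # Likelihood of the memory-vision discrepancy under each causal structure
        like_common = norm.pdf(x_vis - x_mem, 0.0, np.hypot(sig_mem, sig_vis))
        like_moved = norm.pdf(x_vis - x_mem, 0.0,
                              np.hypot(np.hypot(sig_mem, sig_vis), move_sd))
        post_common = (like_common * p_common) / (
            like_common * p_common + like_moved * (1 - p_common))

        # Estimate of the presaccadic location under each structure
        w = sig_vis**-2 / (sig_vis**-2 + sig_mem**-2)
        est_integrated = w * x_vis + (1 - w) * x_mem   # object assumed stable
        est_segregated = x_mem                         # object assumed displaced
        # Model averaging across the two structures
        return post_common * est_integrated + (1 - post_common) * est_segregated
    ```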

  1. Modeling visual clutter perception using proto-object segmentation

    PubMed Central

    Yu, Chen-Ping; Samaras, Dimitris; Zelinsky, Gregory J.

    2014-01-01

    We introduce the proto-object model of visual clutter perception. This unsupervised model segments an image into superpixels, then merges neighboring superpixels that share a common color cluster to obtain proto-objects—defined here as spatially extended regions of coherent features. Clutter is estimated by simply counting the number of proto-objects. We tested this model using 90 images of realistic scenes that were ranked by observers from least to most cluttered. Comparing this behaviorally obtained ranking to a ranking based on the model clutter estimates, we found a significant correlation between the two (Spearman's ρ = 0.814, p < 0.001). We also found that the proto-object model was highly robust to changes in its parameters and was generalizable to unseen images. We compared the proto-object model to six other models of clutter perception and demonstrated that it outperformed each, in some cases dramatically. Importantly, we also showed that the proto-object model was a better predictor of clutter perception than an actual count of the number of objects in the scenes, suggesting that the set size of a scene may be better described by proto-objects than objects. We conclude that the success of the proto-object model is due in part to its use of an intermediate level of visual representation—one between features and objects—and that this is evidence for the potential importance of a proto-object representation in many common visual percepts and tasks. PMID:24904121
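    A simplified re-implementation sketch of the counting idea is shown below (superpixel and cluster counts are illustrative; the authors' segmentation, merging rule and parameters differ):

    ```python
    import numpy as np
    from scipy import ndimage
    from skimage import color, segmentation
    from sklearn.cluster import KMeans

    def proto_object_count(image_rgb, n_segments=600, n_color_clusters=12):
        """Proto-object-style clutter estimate: superpixels merged by shared
        color cluster, then counted."""
        lab = color.rgb2lab(image_rgb)
        superpixels = segmentation.slic(image_rgb, n_segments=n_segments,
                                        compactness=10, start_label=0)
        n_sp = superpixels.max() + 1
        # Mean Lab color of each superpixel
        means = np.array([lab[superpixels == i].mean(axis=0) for i in range(n_sp)])
        # Group superpixel colors into a small set of color clusters
        labels = KMeans(n_clusters=n_color_clusters, n_init=10).fit_predict(means)
        cluster_map = labels[superpixels]
        # Neighbouring superpixels sharing a cluster merge into one proto-object
        protos = 0
        for c in range(n_color_clusters):
            _, n = ndimage.label(cluster_map == c)
            protos += n
        return protos
    ```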

  2. Buffalo, Toronto and Niagara Falls at night as seen from STS-60

    NASA Image and Video Library

    1994-02-09

    STS060-06-037 (3-11 Feb 1994) --- The city lights of Buffalo and Toronto outline the shores of the east end of Lake Erie and the west end of Lake Ontario in this night scene of western New York and southern Ontario. Between the two major cities are the cities of Niagara Falls, New York and Niagara Falls, Canada, which straddle the Niagara River just north of the actual falls. This photograph was taken with a special ASA-1600 film that is normally used for night-time photography of aurora, noctilucent clouds, biomass burning, and city lights.

  3. [Radiation effect on cosmonauts during extravehicular activities in 2008-2009].

    PubMed

    Mitrikas, V G

    2010-01-01

    A geometrical model of a suited cosmonaut phantom was used in mathematical modeling of EVAs performed by cosmonauts, taking into consideration changes in the ISS Russian segment configuration during 2008-2009 and the dependence of the space radiation absorbed dose on the EVA scene. The influence of the cosmonaut's spatial position on the absorbed dose was evaluated with an EVA dosimeter model reproducing the actual weight and dimensions. Calculated absorbed dose values are in good agreement with experimental data. Absorbed doses imparted to body organs (skin, lens, hemopoietic system, gastrointestinal tract, central nervous system, gonads) were determined for specific EVA events.

  4. Feature binding, attention and object perception.

    PubMed Central

    Treisman, A

    1998-01-01

    The seemingly effortless ability to perceive meaningful objects in an integrated scene actually depends on complex visual processes. The 'binding problem' concerns the way in which we select and integrate the separate features of objects in the correct combinations. Experiments suggest that attention plays a central role in solving this problem. Some neurological patients show a dramatic breakdown in the ability to see several objects; their deficits suggest a role for the parietal cortex in the binding process. However, indirect measures of priming and interference suggest that more information may be implicitly available than we can consciously access. PMID:9770223

  5. INFLIGHT - APOLLO XVI (CREW)

    NASA Image and Video Library

    1972-04-07

    S72-35971 (21 April 1972) --- A 360-degree field of view of the Apollo 16 Descartes landing site area composed of individual scenes taken from color transmission made by the color RCA TV camera mounted on the Lunar Roving Vehicle (LRV). This panorama was made while the LRV was parked at the rim of North Ray Crater (Stations 11 & 12) during the third Apollo 16 lunar surface extravehicular activity (EVA) by astronauts John W. Young and Charles M. Duke Jr. The overlay identifies the directions and the key lunar terrain features. The camera panned across the rear portion of the LRV in its 360-degree sweep. Note Young and Duke walking along the edge of the crater in one of the scenes. The TV camera was remotely controlled from a console in the Mission Control Center (MCC). Astronauts Young, commander; and Duke, lunar module pilot; descended in the Apollo 16 Lunar Module (LM) "Orion" to explore the Descartes highlands landing site on the moon. Astronaut Thomas K. Mattingly II, command module pilot, remained with the Command and Service Modules (CSM) "Casper" in lunar orbit.

  6. Considerations for the composition of visual scene displays: potential contributions of information from visual and cognitive sciences.

    PubMed

    Wilkinson, Krista M; Light, Janice; Drager, Kathryn

    2012-09-01

    Aided augmentative and alternative communication (AAC) interventions have been demonstrated to facilitate a variety of communication outcomes in persons with intellectual disabilities. Most aided AAC systems rely on a visual modality. When the medium for communication is visual, it seems likely that the effectiveness of intervention depends in part on the effectiveness and efficiency with which the information presented in the display can be perceived, identified, and extracted by communicators and their partners. Understanding of visual-cognitive processing - that is, how a user attends, perceives, and makes sense of the visual information on the display - therefore seems critical to designing effective aided AAC interventions. In this Forum Note, we discuss characteristics of one particular type of aided AAC display, Visual Scene Displays (VSDs), as they may relate to users' visual and cognitive processing. We consider three specific ways in which bodies of knowledge drawn from the visual cognitive sciences may be relevant to the composition of VSDs, with the understanding that direct research with children with complex communication needs is necessary to verify or refute our speculations.

  7. Agronomic characterization of the Argentina Indicator Region. [U.S. corn belt and Argentine pampas

    NASA Technical Reports Server (NTRS)

    Hicks, D. R. (Principal Investigator)

    1982-01-01

    An overview of the Argentina indicator region including information on topography, climate, soils and vegetation is presented, followed by a regionalization of crop-livestock land use. Corn/soybean production and exports as well as agricultural practices are discussed. Similarities and differences in the physical agronomic scene, crop-livestock land use and agricultural practices between the U.S. corn belt and the Argentine pampa are considered. The Argentine agricultural economy is described. Crop calendars for the Argentina indicator region, an accompanying description, notes on crop-livestock zones, wheat production, field size, and agricultural problems and practices are included.

  8. Bioterrorism versus radiological terrorism: notes from a bio/nuclear epidemiologist.

    PubMed

    Goffman, Thomas E

    2009-01-01

    The antiterrorism and disaster planning communities often speak of a high potential for bioterrorism and a possible potential for radiological terrorism, specifically the explosion of a fission device on US soil. Information gained from an epidemiologist's work on the national and international scene, which inevitably involves intelligence regarding the cultures and subcultures being studied, suggests that bioterrorism is far less likely to be a major threat, that it has been over-emphasized at the state level due to warnings from Homeland Security, and that Homeland Security itself appears biased toward bioterrorism of late with very little available rational basis.

  9. Effect of stimulation by foliage plant display images on prefrontal cortex activity: a comparison with stimulation using actual foliage plants.

    PubMed

    Igarashi, Miho; Song, Chorong; Ikei, Harumi; Miyazaki, Yoshifumi

    2015-01-01

    Natural scenes like forests and flowers evoke neurophysiological responses that can suppress anxiety and relieve stress. We examined whether images of natural objects can elicit neural responses similar to those evoked by real objects by comparing the activation of the prefrontal cortex during presentation of real foliage plants with a projected image of the same foliage plants. Oxy-hemoglobin concentrations in the prefrontal cortex were measured using time-resolved near-infrared spectroscopy while the subjects viewed the real plants or a projected image of the same plants. Compared with a projected image of foliage plants, viewing the actual foliage plants significantly increased oxy-hemoglobin concentrations in the prefrontal cortex. However, using the modified semantic differential method, subjective emotional response ratings ("comfortable vs. uncomfortable" and "relaxed vs. awakening") were similar for both stimuli. The frontal cortex responded differently to presentation of actual plants compared with images of these plants even when the subjective emotional response was similar. These results may help explain the physical and mental health benefits of urban, domestic, and workplace foliage. © 2014 The Authors. Journal of Neuroimaging published by the American Society of Neuroimaging.

  10. Unattended real-time re-establishment of visibility in high dynamic range video and stills

    NASA Astrophysics Data System (ADS)

    Abidi, B.

    2014-05-01

    We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions, where bright sunlight creates mixed contrast effects, i.e., heavy shadows and washouts. These effects result in high dynamic range scenes, where illuminance can vary from a few lux to six-figure values. When using regular monitors and cameras, such a wide span of illumination can only be visualized if the actual range of values is compressed, leading to saturated and/or dark noisy areas and a loss of information in those areas. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because all the information is not present in the original data; active intervention in the acquisition process is required. A software package, capable of integrating multiple types of COTS and custom cameras, ranging from Unmanned Aerial Systems (UAS) data links to digital single-lens reflex cameras (DSLR), is described. Hardware and software are integrated via a novel smart data acquisition algorithm, which communicates to the camera the parameters that would maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene where information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce automatic, real-time, full-motion corrected video. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills; enhances visible, night vision, and infrared data; and applies successfully to nighttime and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive and easy-to-use graphical interfaces. The ensuing increase in visibility in surveillance video and intelligence imagery will improve the performance and timely decision making of the human analyst, as well as that of unmanned systems performing automatic data exploitation, such as target detection and identification.
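    The system's fusion algorithm is its own; as a generic illustration of fusing smartly acquired exposures into a single well-exposed frame, OpenCV's exposure fusion can be used (file names below are placeholders):

    ```python
    import cv2
    import numpy as np

    # Frames of the same scene captured at exposures chosen by the smart
    # acquisition step
    frames = [cv2.imread(f) for f in ("short_exp.png", "mid_exp.png", "long_exp.png")]

    # Mertens exposure fusion: needs no exposure times or response curve,
    # which suits unattended operation with mixed camera types
    fused = cv2.createMergeMertens().process(frames)
    cv2.imwrite("fused.png", np.clip(fused * 255, 0, 255).astype("uint8"))
    ```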

  11. Exploring eye movements in patients with glaucoma when viewing a driving scene.

    PubMed

    Crabb, David P; Smith, Nicholas D; Rauscher, Franziska G; Chisholm, Catharine M; Barbur, John L; Edgar, David F; Garway-Heath, David F

    2010-03-16

    Glaucoma is a progressive eye disease and a leading cause of visual disability. Automated assessment of the visual field determines the different stages in the disease process: it would be desirable to link these measurements taken in the clinic with patient's actual function, or establish if patients compensate for their restricted field of view when performing everyday tasks. Hence, this study investigated eye movements in glaucomatous patients when viewing driving scenes in a hazard perception test (HPT). The HPT is a component of the UK driving licence test consisting of a series of short film clips of various traffic scenes viewed from the driver's perspective each containing hazardous situations that require the camera car to change direction or slow down. Data from nine glaucomatous patients with binocular visual field defects and ten age-matched control subjects were considered (all experienced drivers). Each subject viewed 26 different films with eye movements simultaneously monitored by an eye tracker. Computer software was purpose written to pre-process the data, co-register it to the film clips and to quantify eye movements and point-of-regard (using a dynamic bivariate contour ellipse analysis). On average, and across all HPT films, patients exhibited different eye movement characteristics to controls making, for example, significantly more saccades (P<0.001; 95% confidence interval for mean increase: 9.2 to 22.4%). Whilst the average region of 'point-of-regard' of the patients did not differ significantly from the controls, there were revealing cases where patients failed to see a hazard in relation to their binocular visual field defect. Characteristics of eye movement patterns in patients with bilateral glaucoma can differ significantly from age-matched controls when viewing a traffic scene. Further studies of eye movements made by glaucomatous patients could provide useful information about the definition of the visual field component required for fitness to drive.
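    For reference, one common definition of the bivariate contour ellipse area used to summarize point-of-regard spread is sketched below; the study applies a dynamic version computed over film frames, and the enclosed proportion here is only an assumed default.

    ```python
    import numpy as np

    def bcea(x, y, proportion=0.682):
        """Bivariate contour ellipse area of point-of-regard samples.

        One common definition: BCEA = 2*k*pi*sx*sy*sqrt(1 - rho^2), where the
        ellipse encloses `proportion` of samples and k = -ln(1 - proportion).
        """
        k = -np.log(1.0 - proportion)
        sx, sy = np.std(x, ddof=1), np.std(y, ddof=1)
        rho = np.corrcoef(x, y)[0, 1]
        return 2.0 * k * np.pi * sx * sy * np.sqrt(1.0 - rho**2)
    ```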

  12. Exploring Eye Movements in Patients with Glaucoma When Viewing a Driving Scene

    PubMed Central

    Crabb, David P.; Smith, Nicholas D.; Rauscher, Franziska G.; Chisholm, Catharine M.; Barbur, John L.; Edgar, David F.; Garway-Heath, David F.

    2010-01-01

    Background Glaucoma is a progressive eye disease and a leading cause of visual disability. Automated assessment of the visual field determines the different stages in the disease process: it would be desirable to link these measurements taken in the clinic with patient's actual function, or establish if patients compensate for their restricted field of view when performing everyday tasks. Hence, this study investigated eye movements in glaucomatous patients when viewing driving scenes in a hazard perception test (HPT). Methodology/Principal Findings The HPT is a component of the UK driving licence test consisting of a series of short film clips of various traffic scenes viewed from the driver's perspective each containing hazardous situations that require the camera car to change direction or slow down. Data from nine glaucomatous patients with binocular visual field defects and ten age-matched control subjects were considered (all experienced drivers). Each subject viewed 26 different films with eye movements simultaneously monitored by an eye tracker. Computer software was purpose written to pre-process the data, co-register it to the film clips and to quantify eye movements and point-of-regard (using a dynamic bivariate contour ellipse analysis). On average, and across all HPT films, patients exhibited different eye movement characteristics to controls making, for example, significantly more saccades (P<0.001; 95% confidence interval for mean increase: 9.2 to 22.4%). Whilst the average region of ‘point-of-regard’ of the patients did not differ significantly from the controls, there were revealing cases where patients failed to see a hazard in relation to their binocular visual field defect. Conclusions/Significance Characteristics of eye movement patterns in patients with bilateral glaucoma can differ significantly from age-matched controls when viewing a traffic scene. Further studies of eye movements made by glaucomatous patients could provide useful information about the definition of the visual field component required for fitness to drive. PMID:20300522

  13. Mid-infrared hyperspectral imaging for the detection of explosive compounds

    NASA Astrophysics Data System (ADS)

    Ruxton, K.; Robertson, G.; Miller, W.; Malcolm, G. P. A.; Maker, G. T.

    2012-10-01

    Active hyperspectral imaging is a valuable tool in a wide range of applications. A developing market is the detection and identification of energetic compounds through analysis of the resulting absorption spectrum. This work presents a selection of results from a prototype mid-infrared (MWIR) hyperspectral imaging instrument that has successfully been used for compound detection at a range of standoff distances. Active hyperspectral imaging utilises a broadly tunable laser source to illuminate the scene with light over a range of wavelengths. While there are a number of illumination methods, this work illuminates the scene by raster scanning the laser beam using a pair of galvanometric mirrors. The resulting backscattered light from the scene is collected by the same mirrors and directed and focussed onto a suitable single-point detector, where the image is constructed pixel by pixel. The imaging instrument developed in this work is based around a MWIR optical parametric oscillator (OPO) source with broad tunability, operating from 2.6 μm to 3.7 μm. Because of the material handling procedures associated with explosive compounds, experimental work was undertaken initially using simulant compounds; a second set of confusion compounds was tested alongside them. The broad wavelength tunability of the OPO allowed extended absorption spectra of the compounds to be obtained to aid compound identification. The prototype imager has successfully been used to record the absorption spectra for a range of compounds from the simulant and confusion sets, and current work is now investigating actual explosive compounds. The authors see a very promising outlook for the MWIR hyperspectral imager. From an applications point of view, this format of imaging instrument could be used for a range of standoff improvised explosive device (IED) detection applications and potential incident scene forensic investigation.
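    The instrument's identification method is not detailed in the abstract; one generic way to match measured absorption spectra against a compound library is the spectral angle, sketched below (function names, the cube layout and the threshold are illustrative assumptions):

    ```python
    import numpy as np

    def spectral_angle(spectrum, reference):
        """Spectral angle (radians) between a measured pixel spectrum and a
        library reference; smaller angles mean a closer spectral match."""
        s = np.asarray(spectrum, dtype=float)
        r = np.asarray(reference, dtype=float)
        cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def identify(cube, library, threshold=0.1):
        """Per-pixel best-matching compound for a (rows, cols, bands) cube.
        `library` maps compound names to reference absorption spectra."""
        names = list(library)
        angles = np.stack([
            np.apply_along_axis(spectral_angle, 2, cube, library[n]) for n in names
        ])
        best = angles.argmin(axis=0)
        best_angle = angles.min(axis=0)
        return np.where(best_angle < threshold, best, -1), names
    ```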

  14. Optimal Grid Size for Inter-Comparability of MODIS And VIIRS Vegetation Indices at Level 2G or Higher

    NASA Astrophysics Data System (ADS)

    Campagnolo, M.; Schaaf, C.

    2016-12-01

    Due to the necessity of time compositing and other user requirements, vegetation indices, as well as many other EOS derived products, are distributed in a gridded format (level L2G or higher) using an equal area sinusoidal grid, at grid sizes of 232 m, 463 m or 926 m. In this process, the actual surface signal suffers some degradation, caused both by the sensor's point spread function and by the resampling from swath to the regular grid. The magnitude of that degradation depends on a number of factors, such as surface heterogeneity, band nominal resolution, observation geometry and grid size. In this research, the effect of grid size is quantified for MODIS and VIIRS (at five EOS validation sites with distinct land covers), for the full range of view zenith angles, and at grid sizes of 232 m, 253 m, 309 m, 371 m, 397 m and 463 m. This allows us to compare MODIS and VIIRS gridded products for the same scenes, and to determine the grid size at which these products are most similar. Toward that end, simulated MODIS and VIIRS bands are generated from Landsat 8 surface reflectance images at each site, and gridded products are then derived using maximum obscov resampling. Then, for every grid size, the original Landsat 8 NDVI and the derived MODIS and VIIRS NDVI products are compared. This methodology can be applied to other bands and products to determine which spatial aggregation is best suited overall for EOS to S-NPP product continuity. Results for the MODIS (250 m bands) and VIIRS (375 m bands) NDVI products show that finer grid sizes tend to be better at preserving the original signal. Significant degradation of gridded NDVI occurs when the grid size is larger than 253 m (MODIS) or 371 m (VIIRS). Our results suggest that the current MODIS "500 m" (actually 463 m) grid size is best for product continuity. Note, however, that up to that grid size, MODIS gridded products are somewhat better at preserving the surface signal than VIIRS, except at very high VZA.
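    For reference, the NDVI compared here is the usual red/near-infrared band ratio; a minimal sketch (the band assignments in the comment are the standard ones for these sensors):

    ```python
    import numpy as np

    def ndvi(red, nir):
        """Normalized difference vegetation index from red and NIR reflectance.
        Typical band pairs: Landsat 8 bands 4 (red) and 5 (NIR); MODIS bands
        1 (red) and 2 (NIR); VIIRS imagery bands I1 (red) and I2 (NIR)."""
        red = np.asarray(red, dtype=np.float64)
        nir = np.asarray(nir, dtype=np.float64)
        return (nir - red) / np.clip(nir + red, 1e-9, None)
    ```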

  15. Managing Motivational Needs of the Gifted and Talented.

    ERIC Educational Resources Information Center

    Schilling, Deanna E.

    1986-01-01

    A. Maslow's theory of motivation is described, five levels of needs are identified (physiological, safety, love, esteem, and self-actualization), and implications of each level for parents and teachers of gifted students are noted. (CL)

  16. 29 CFR 2203.7 - Transcripts, recordings and minutes of closed meetings.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... transcription of the recording disclosing the identity of each speaker, with the deletions noted in the preceding sentence, will be furnished to any person at the actual cost of duplication or transcription...

  17. 29 CFR 2203.7 - Transcripts, recordings and minutes of closed meetings.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... transcription of the recording disclosing the identity of each speaker, with the deletions noted in the preceding sentence, will be furnished to any person at the actual cost of duplication or transcription...

  18. Virtual viewpoint generation for three-dimensional display based on the compressive light field

    NASA Astrophysics Data System (ADS)

    Meng, Qiao; Sang, Xinzhu; Chen, Duo; Guo, Nan; Yan, Binbin; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    Virtual viewpoint generation is one of the key technologies for three-dimensional (3D) display; it renders new scene perspectives from the existing viewpoints. The three-dimensional scene information can be effectively recovered at different viewing angles, allowing users to switch between views. However, in the process of matching multiple viewpoints, when N free viewpoints are received, each pair of viewpoints must be matched, namely N(N-1)/2 matchings, and errors can also arise when matching across different baselines. To address the great complexity of the traditional virtual viewpoint generation process, a novel and rapid virtual viewpoint generation algorithm is presented in this paper, using the actual light field information rather than geometric information. Moreover, so that the data retain physical meaning, we mainly use nonnegative tensor factorization (NTF). A tensor representation is introduced for virtual multilayer displays. The light field emitted by an N-layer, M-frame display is represented by a sparse set of non-zero elements restricted to a plane within an Nth-order, rank-M tensor. The tensor representation allows for optimal decomposition of a light field into time-multiplexed, light-attenuating layers using NTF. Finally, the compressive light field information of the multilayer display is synthesized to obtain virtual viewpoints by repeated multiplication. Experimental results show that the approach not only restores the original light field with high image quality (PSNR of 25.6 dB), but also overcomes the deficiencies of traditional matching, so that any viewpoint can be obtained from the N free viewpoints.
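    As a simplified illustration of the decomposition idea: the paper factorizes the full light field as an Nth-order tensor with NTF, but the two-layer, time-multiplexed case reduces to a nonnegative matrix factorization, sketched below with toy sizes and random data.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # Toy two-layer light field: each ray is indexed by the front-layer pixel it
    # crosses (rows) and the rear-layer pixel it crosses (columns); values are
    # non-negative transmittances in [0, 1].
    rng = np.random.default_rng(0)
    light_field = rng.random((32, 256))

    # Rank-M nonnegative factorization: each of the M time-multiplexed frames is
    # the outer product of a front-layer and a rear-layer attenuation pattern.
    M = 6
    model = NMF(n_components=M, init="nndsvda", max_iter=500)
    front = model.fit_transform(light_field)   # (front pixels, M)
    rear = model.components_                   # (M, rear pixels)
    reconstruction = front @ rear

    mse = np.mean((light_field - reconstruction) ** 2)
    psnr = 10 * np.log10(light_field.max() ** 2 / mse)
    print(f"reconstruction PSNR: {psnr:.1f} dB")
    ```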

  19. Capability Development in Support of Comprehensive Approaches: Transforming International Civil-Military Interactions

    DTIC Science & Technology

    2011-12-01

    People Off-line (TV, Radio, Newspaper, Landline, Highway Signs, Word of Mouth ) Crisis Mgmt Groups & Apps Peer-Production Information Community, Content...Note that it was a Joint Discussion Note (JDN), that is very important. They used the word “discussion” because they wanted to indicate to the...clear that the UK Government is openly embracing the concept of the CA, it does not actually use the words but simply pres- ents its strategic intent

  20. The appropriateness of emergency medical service responses in the eThekwini district of KwaZulu-Natal, South Africa.

    PubMed

    Newton, P R; Naidoo, R; Brysiewicz, P

    2015-09-19

    Emergency medical services (EMS) are sometimes required to respond to cases that are later found not to be emergencies, resulting in high levels of inappropriate responses. This study evaluated the extent to which this occurs. All cases dispatched over 72 hours by the eThekwini EMS in Durban, South Africa, were prospectively enrolled in a quantitative descriptive study. Vehicle control forms containing dispatch data were matched and compared with patient report forms containing epidemiological and clinical data to describe the nature and extent of inappropriate responses based on patient need. Data were subjected to simple descriptive analysis, correlations and χ2 testing. A total of 1385 cases met the study inclusion criteria. Marked variations existed between dispatch and on-scene priority settings, most notably in the highest priority 'red-code' category, which constituted >56% of cases dispatched yet accounted for <2% at the scene (p<0.001). Conversely, >80% of 'red-code' dispatches required a lower priority response. When comparing resource allocation according to patient interventional needs, >58% of cases required either no intervention or transport only and almost 36% required basic life support intervention only (p<0.001). Moreover, <12% of advanced life support dispatches were for patients found to be 'red code' at the scene. There is a significant mismatch between the dispatch of EMS resources and actual patient need in the eThekwini district, with significantly high levels of inappropriate emergency responses.

  1. Gray-world-assumption-based illuminant color estimation using color gamuts with high and low chroma

    NASA Astrophysics Data System (ADS)

    Kawamura, Harumi; Yonemura, Shunichi; Ohya, Jun; Kojima, Akira

    2013-02-01

    A new approach is proposed for estimating illuminant colors from color images under an unknown scene illuminant. The approach is based on a combination of a gray-world-assumption-based illuminant color estimation method and a method using color gamuts. The former method, which is one we had previously proposed, improved on the original method that hypothesizes that the average of all the object colors in a scene is achromatic. Since the original method estimates scene illuminant colors by calculating the average of all the image pixel values, its estimations are incorrect when certain image colors are dominant. Our previous method improves on it by choosing several colors on the basis of an opponent-color property, which is that the average color of opponent colors is achromatic, instead of using all colors. However, it cannot estimate illuminant colors when there are only a few image colors or when the image colors are unevenly distributed in local areas in the color space. The approach we propose in this paper combines our previous method and one using high chroma and low chroma gamuts, which makes it possible to find colors that satisfy the gray world assumption. High chroma gamuts are used for adding appropriate colors to the original image and low chroma gamuts are used for narrowing down illuminant color possibilities. Experimental results obtained using actual images show that even if the image colors are localized in a certain area in the color space, the illuminant colors are accurately estimated, with smaller estimation error average than that generated in the conventional method.
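    For context, the baseline gray-world estimate that both methods build on can be sketched as follows (the paper's contribution lies in choosing which pixels enter this average, which is not reproduced here):

    ```python
    import numpy as np

    def gray_world_illuminant(image_rgb):
        """Classic gray-world estimate: take the per-channel mean of the image
        as the illuminant color, normalized to unit intensity."""
        rgb = image_rgb.reshape(-1, 3).astype(np.float64)
        mean = rgb.mean(axis=0)
        return mean / mean.sum()

    def white_balance(image_rgb):
        """Von Kries-style correction using the gray-world estimate."""
        est = gray_world_illuminant(image_rgb)
        gain = est.mean() / est
        out = image_rgb.astype(np.float64) * gain
        return np.clip(out, 0, 255).astype(np.uint8)
    ```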

  2. An Automated Algorithm to Screen Massive Training Samples for a Global Impervious Surface Classification

    NASA Technical Reports Server (NTRS)

    Tan, Bin; Brown de Colstoun, Eric; Wolfe, Robert E.; Tilton, James C.; Huang, Chengquan; Smith, Sarah E.

    2012-01-01

    An algorithm is developed to automatically screen the outliers from massive training samples for the Global Land Survey - Imperviousness Mapping Project (GLS-IMP). GLS-IMP will produce a global 30 m spatial resolution impervious cover data set for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. This unprecedented high resolution impervious cover data set is not only significant for urbanization studies but also needed by global carbon, hydrology, and energy balance research. A supervised classification method, regression tree, is applied in this project. A set of accurate training samples is the key to supervised classification. Here we develop global-scale training samples from fine resolution (about 1 m) satellite data (Quickbird and WorldView-2) and then aggregate the fine resolution impervious cover map to 30 m resolution. In order to improve the classification accuracy, the training samples should be screened before being used to train the regression tree. It is impossible to manually screen 30 m resolution training samples collected globally. In Europe alone, for example, there are 174 training sites, ranging in size from 4.5 km by 4.5 km to 8.1 km by 3.6 km, with more than six million training samples. We therefore developed this automated, statistics-based algorithm to screen the training samples at two levels: site and scene. At the site level, all training samples are divided into 10 groups according to the percentage of impervious surface within a sample pixel; the samples falling in each 10% interval form one group. For each group, both univariate and multivariate outliers are detected and removed. The screening then escalates to the scene level, where a similar process with a looser threshold is applied to account for possible variance due to site differences. We do not screen across scenes because scenes may vary due to phenology, solar-view geometry, atmospheric conditions and other factors rather than actual land cover differences. Finally, we will compare the classification results from screened and unscreened training samples to assess the improvement achieved by cleaning up the training samples.
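    A minimal sketch of the per-group screening idea follows; the z-score and chi-square cutoffs are illustrative, not the project's actual thresholds.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def screen_group(samples, z_max=3.0, md_quantile=0.999):
        """Drop univariate and multivariate outliers from one training group
        (samples: n x n_bands array of predictor values for pixels whose
        impervious fraction falls in the same 10% bin)."""
        # Univariate screen: per-band z-scores
        z = np.abs((samples - samples.mean(axis=0)) / samples.std(axis=0, ddof=1))
        keep = (z < z_max).all(axis=1)
        x = samples[keep]

        # Multivariate screen: Mahalanobis distance vs. a chi-square cutoff
        diff = x - x.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(x, rowvar=False))
        md2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
        cutoff = chi2.ppf(md_quantile, df=x.shape[1])
        return x[md2 < cutoff]
    ```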

  3. The Pluto Affair: When Professionals talk to Professionals with the Public Watching

    NASA Astrophysics Data System (ADS)

    Lindberg Christensen, Lars

    This paper gives a first-hand look behind the scenes of the Press Room at the International Astronomical Union (IAU) XXVIth General Assembly in Prague, the setting of one of the most discussed stories in 2006: the much hated and loved IAU resolution defining a planet. The vote passing the resolution, as a side effect, changed Pluto's status to a "dwarf planet" and resulted in an unprecedented emotional argument about our Solar System. What actually happened in Prague? What were the negative and positive outcomes of the Pluto Affair? What can science communicators learn from this experience?

  4. Smartphone-Based Escalator Recognition for the Visually Impaired

    PubMed Central

    Nakamura, Daiki; Takizawa, Hotaka; Aoyagi, Mayumi; Ezaki, Nobuo; Mizuno, Shinji

    2017-01-01

    It is difficult for visually impaired individuals to recognize escalators in everyday environments. If the individuals ride on escalators in the wrong direction, they will stumble on the steps. This paper proposes a novel method to assist visually impaired individuals in finding available escalators by the use of smartphone cameras. Escalators are recognized by analyzing optical flows in video frames captured by the cameras, and auditory feedback is provided to the individuals. The proposed method was implemented on an Android smartphone and applied to actual escalator scenes. The experimental results demonstrate that the proposed method is promising for helping visually impaired individuals use escalators. PMID:28481270
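    A minimal sketch of the optical-flow step is shown below using OpenCV's Farnebäck dense flow; the region of interest handling, thresholds and the mapping of image motion to escalator direction are assumptions for illustration, not the authors' implementation.

    ```python
    import cv2

    def escalator_direction(prev_gray, curr_gray, roi):
        """Estimate escalator motion inside `roi` (x, y, w, h) from dense
        optical flow between two consecutive grayscale frames."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        x, y, w, h = roi
        vy = flow[y:y + h, x:x + w, 1].mean()   # mean vertical image motion
        if abs(vy) < 0.2:                        # little motion: not usable
            return "none"
        return "moving down in the image" if vy > 0 else "moving up in the image"
    ```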

  5. Computational Virtual Reality (VR) as a human-computer interface in the operation of telerobotic systems

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.

    1995-01-01

    This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.

  6. Toward Self-Referential Autonomous Learning of Object and Situation Models.

    PubMed

    Damerow, Florian; Knoblauch, Andreas; Körner, Ursula; Eggert, Julian; Körner, Edgar

    2016-01-01

    Most current approaches to scene understanding lack the capability to adapt object and situation models to behavioral needs not anticipated by the human system designer. Here, we give a detailed description of a system architecture for self-referential autonomous learning which enables the refinement of object and situation models during operation in order to optimize behavior. This includes structural learning of hierarchical models for situations and behaviors that is triggered by a mismatch between expected and actual action outcome. Besides proposing architectural concepts, we also describe a first implementation of our system within a simulated traffic scenario to demonstrate the feasibility of our approach.

  7. Evaluation of a Passive Nature Viewing Program Set to Music.

    PubMed

    Cadman, Sally J

    2014-09-01

    Research has revealed that passive nature viewing (viewing nature scenes without actually being in nature) has many health benefits, but little is known about the best method of offering this complementary modality. The purpose of this pilot program was to evaluate the impact of a passive nature viewing program set to music on stress reduction in adults living in the community. A pre- and post-survey design, along with weekly recordings of stress and relaxation levels, was used to evaluate the effect of this passive nature viewing program on stress reduction. Participants watched one of three preselected nature scenes for 5 minutes a day over 1 month and rated their stress and relaxation levels weekly on a 100-mm Visual Analogue Scale before and after viewing the nature DVD. Quantitative analyses were not performed because of the small number of subjects (n = 10) completing the study. Qualitative analysis found five key categories that have an impact on program use: (a) technology, (b) personal preferences, (c) time, (d) immersion, and (e) use of the program. Holistic nurses may consider integrating patient preferences and immersion strategies in the design of future passive nature viewing programs to reduce attrition and improve success. © The Author(s) 2013.

  8. Achieving thermography with a thermal security camera using uncooled amorphous silicon microbolometer image sensors

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David

    2012-06-01

    Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications that had previously been cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of low-NETD thermal sensor available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto tonemapping (contrast enhancing) the image, which yields the best contrast for the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits to being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras, such as process monitoring. Example applications of thermography [2] with a thermal camera include monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, and power substations [3][5]. This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures are shown.
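
    At its simplest, turning a microbolometer's raw digital counts into estimated scene temperatures amounts to fitting a calibration curve against blackbody references at a known camera operating temperature. The sketch below is a generic polynomial-fit illustration, not the calibration procedure used in the paper; all data values are invented.

```python
import numpy as np

# Hypothetical calibration data: blackbody temperatures (deg C) and the mean
# raw counts the sensor reported for each, at one camera operating temperature.
blackbody_temps_c = np.array([10.0, 20.0, 35.0, 50.0, 70.0, 90.0])
raw_counts = np.array([7900.0, 8150.0, 8540.0, 8960.0, 9530.0, 10140.0])

# Fit counts -> temperature with a low-order polynomial (radiance is roughly
# linear in counts over a limited range, so degree 2 is usually sufficient).
coeffs = np.polyfit(raw_counts, blackbody_temps_c, deg=2)
counts_to_temp = np.poly1d(coeffs)

# Estimate the temperature of an object whose pixels average 9200 counts.
print("estimated object temperature: %.1f degC" % counts_to_temp(9200.0))
```

    A full thermographic camera repeats this characterization across its operating-temperature range, which is what the paper's performance plots describe, and interpolates between the resulting curves.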

  9. Honeybee Odometry: Performance in Varying Natural Terrain

    PubMed Central

    Tautz, Juergen; Zhang, Shaowu; Spaethe, Johannes; Brockmann, Axel; Si, Aung

    2004-01-01

    Recent studies have shown that honeybees flying through short, narrow tunnels with visually textured walls perform waggle dances that indicate a much greater flight distance than that actually flown. These studies suggest that the bee's “odometer” is driven by the optic flow (image motion) that is experienced during flight. One might therefore expect that, when bees fly to a food source through a varying outdoor landscape, their waggle dances would depend upon the nature of the terrain experienced en route. We trained honeybees to visit feeders positioned along two routes, each 580 m long. One route was exclusively over land. The other was initially over land, then over water and, finally, again over land. Flight over water resulted in a significantly flatter slope of the waggle-duration versus distance regression, compared to flight over land. The mean visual contrast of the scenes was significantly greater over land than over water. The results reveal that, in outdoor flight, the honeybee's odometer does not run at a constant rate; rather, the rate depends upon the properties of the terrain. The bee's perception of distance flown is therefore not absolute, but scene-dependent. These findings raise important and interesting questions about how these animals navigate reliably. PMID:15252454

  10. Effect of television programming and advertising on alcohol consumption in normal drinkers.

    PubMed

    Sobell, L C; Sobell, M B; Riley, D M; Klajner, F; Leo, G I; Pavan, D; Cancilla, A

    1986-07-01

    The drinking behavior of 96 male normal drinking college students was assessed after they viewed a videotape of a popular prime-time television program complete with advertisements. Different versions of the videotape were used to evaluate the effects of a television program with and without alcohol scenes as crossed with the effects of three different types of advertisements (i.e., beer, nonalcoholic beverages and food). After viewing the videotape, the subjects, who were led to believe that they were participating in two separate and unrelated sets of experimental procedures, were asked to perform a taste rating of light beers, which actually provided an unobtrusive measure of their alcohol consumption. The results provided no support for the widely held assumption that drinking scenes in television programs or televised advertisements for alcoholic beverages precipitate increased drinking by viewers. This finding, however, must be considered in the context of the laboratory setting of the study, and thus may not generalize to real-life television viewing. Further research in this area is clearly needed, including an evaluation of the effects of television program content and advertisements on other populations (e.g., alcohol abusers).

  11. Decoding the future from past experience: learning shapes predictions in early visual cortex.

    PubMed

    Luft, Caroline D B; Meeson, Alan; Welchman, Andrew E; Kourtzi, Zoe

    2015-05-01

    Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex. Copyright © 2015 the American Physiological Society.
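
    The decoding analysis described here reduces to training a pattern classifier on voxel activity labelled by stimulus orientation and testing whether the same patterns reappear when only a prediction is present. The sketch below shows the general cross-validated linear-decoding recipe with scikit-learn; the array shapes and random data are placeholders, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: n_trials x n_voxels activity patterns from early visual
# cortex, and the orientation label of each trial (0 = leftward, 1 = rightward).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 500))
y = rng.integers(0, 2, size=120)

# Linear classifier with voxel-wise standardization, evaluated by
# cross-validation; above-chance accuracy indicates decodable orientation info.
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(decoder, X, y, cv=5)
print("mean decoding accuracy: %.2f" % scores.mean())
```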

  12. Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation

    NASA Astrophysics Data System (ADS)

    Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei

    2016-11-01

    Cooled infrared detector arrays always suffer from undesired ripple fixed-pattern noise (FPN) when observing sky scenes. This ripple FPN seriously degrades the imaging quality of thermal imagers, especially for small-target detection and tracking, and it is hard to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified spatial low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared sequences and compare it with several previously published methods. The algorithm not only corrects common FPN such as stripe noise effectively, but also has a clear advantage over current methods in terms of detail preservation and convergence speed, especially for ripple FPN correction. Furthermore, we demonstrate the architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The FPGA-based hardware implementation of the algorithm has two advantages: (1) low resource consumption and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.
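
    The correction principle is to estimate the spatially high-frequency residual of each frame and low-pass it in time, so that the temporally constant fixed-pattern component is isolated while moving scene content averages out; subtracting that estimate is equivalent to temporal high-pass filtering. The sketch below is a generic spatial-low-pass/temporal-high-pass correction with an adaptive update gate, assuming NumPy and SciPy; it is not the authors' THP&GM algorithm, and the parameter values are arbitrary.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def thp_nuc(frames, time_constant=32, update_threshold=8.0):
    """Scene-based nonuniformity correction via temporal high-pass filtering.

    frames: iterable of 2-D float arrays (raw infrared frames).
    Yields corrected frames; pixels that change strongly between frames
    (likely real scene motion) are excluded from the FPN update to limit
    ghosting artifacts.
    """
    fpn_estimate = None
    prev = None
    alpha = 1.0 / time_constant
    for frame in frames:
        # Spatial high-frequency residual: the FPN lives mostly here.
        low_pass = uniform_filter(frame, size=9)
        residual = frame - low_pass
        if fpn_estimate is None:
            fpn_estimate = residual.copy()
        else:
            # Adaptive gate: only update where the temporal change is small.
            stable = np.abs(frame - prev) < update_threshold
            fpn_estimate[stable] += alpha * (residual[stable] -
                                             fpn_estimate[stable])
        prev = frame
        yield frame - fpn_estimate
```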

  13. Individual predictions of eye-movements with dynamic scenes

    NASA Astrophysics Data System (ADS)

    Barth, Erhardt; Drewes, Jan; Martinetz, Thomas

    2003-06-01

    We present a model that predicts saccadic eye-movements and can be tuned to a particular human observer who is viewing a dynamic sequence of images. Our work is motivated by applications that involve gaze-contingent interactive displays on which information is displayed as a function of gaze direction. The approach therefore differs from standard approaches in two ways: (1) we deal with dynamic scenes, and (2) we provide means of adapting the model to a particular observer. As an indicator for the degree of saliency we evaluate the intrinsic dimension of the image sequence within a geometric approach implemented by using the structure tensor. Out of these candidate saliency-based locations, the currently attended location is selected according to a strategy found by supervised learning. The data are obtained with an eye-tracker and subjects who view video sequences. The selection algorithm receives candidate locations of current and past frames and a limited history of locations attended in the past. We use a linear mapping that is obtained by minimizing the quadratic difference between the predicted and the actually attended location by gradient descent. Being linear, the learned mapping can be quickly adapted to the individual observer.
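
    The final step of the model, a linear mapping from candidate salient locations (plus a short history of attended locations) to the predicted gaze position, fitted by minimizing the squared prediction error with gradient descent, is easy to make concrete. The sketch below assumes a fixed-length feature vector per frame; it illustrates the stated learning rule rather than reproducing the authors' code.

```python
import numpy as np

def fit_linear_gaze_map(features, gaze, lr=1e-3, epochs=200):
    """Learn W, b such that features @ W + b approximates gaze positions.

    features: (n_frames, n_features) candidate locations + history, flattened.
    gaze:     (n_frames, 2) actually attended (x, y) positions from the tracker.
    """
    n, d = features.shape
    W = np.zeros((d, 2))
    b = np.zeros(2)
    for _ in range(epochs):
        pred = features @ W + b
        err = pred - gaze                        # (n, 2) prediction error
        # Gradient steps on the mean squared error with respect to W and b.
        W -= lr * (features.T @ err) / n
        b -= lr * err.mean(axis=0)
    return W, b
```

    Because the mapping is linear, refitting it on a new observer's eye-tracking data is fast, which is what makes per-observer adaptation practical.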

  14. 3D Reconstruction of Cultural Tourism Attractions from Indoor to Outdoor Based on Portable Four-Camera Stereo Vision System

    NASA Astrophysics Data System (ADS)

    Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.

    2015-05-01

    Building fine 3D models that extend from outdoor to indoor spaces is becoming a necessity for protecting cultural tourism resources. However, existing 3D modelling technologies mainly focus on outdoor areas. In fact, a 3D model should contain detailed descriptions of both a building's appearance and its internal structure, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, which provides a professional solution for fast 3D data acquisition, processing, integration, reconstruction and visualization. Given a specific scene or object, it can directly collect geometric information such as positions, sizes and shapes, as well as physical property information such as materials and textures. On the basis of this information, a 3D model can be constructed automatically. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture in two typical cultural tourism zones, namely the Tibetan and Qiang ethnic minority villages in the Sichuan Jiuzhaigou Scenic Area and the Tujia ethnic minority villages in the Hubei Shennongjia Nature Reserve, providing a new method and platform for the protection of minority cultural characteristics, 3D reconstruction and cultural tourism.
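
    The measurement principle behind such a stereo photographic system is standard two-view triangulation: matching points between a rectified camera pair yields disparities, which convert to depth through the focal length and baseline. A minimal sketch using OpenCV block matching is shown below; the file names, focal length and baseline are placeholders, not parameters of the four-camera rig described in the paper.

```python
import cv2
import numpy as np

# Rectified grayscale images from one calibrated camera pair (placeholder files).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching disparity; numDisparities must be a multiple of 16.
matcher = cv2.StereoBM_create(numDisparities=96, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth from disparity: Z = f * B / d (placeholder focal length and baseline).
focal_px = 1200.0     # focal length in pixels
baseline_m = 0.12     # distance between the two cameras, metres
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]
```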

  15. Earth observations during Space Shuttle flight STS-27: A high latitude observations opportunity - 2-6 December 1988

    NASA Technical Reports Server (NTRS)

    Whitehead, Victor S.; Helfert, Michael R.; Lulla, Kamlesh P.; Wood, Charles A.; Amsbury, David L.; Gibson, Robert; Gardner, Guy; Mullane, Mike; Ross, Jerry; Shepherd, Bill

    1989-01-01

    The earth observations from the STS-27 mission on December 2-6, 1988 are reported. The film and generic scene characteristics chosen for the mission are given. Results are given from geological observations of the Ruwenzori Mountains between Uganda and Zaire, four rift valley systems in Africa and Asia, and several volcanoes and impact craters. Environmental observations of Africa, the Middle East, South Asia, North America and the Soviet Union are presented. Also, meteorological and oceanographic observations are discussed. The uniqueness of the high-inclination winter launch of the STS-27 mission for obtaining observations of specific features is noted.

  16. The robot's eyes - Stereo vision system for automated scene analysis

    NASA Technical Reports Server (NTRS)

    Williams, D. S.

    1977-01-01

    Attention is given to the robot stereo vision system, which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.

  17. Upper Texas Gulf Coast, USA

    NASA Image and Video Library

    1989-05-08

    STS030-152-066 (4-8 May 1989) --- The upper Texas and Louisiana Gulf Coast area was clearly represented in this large format frame photographed by the astronaut crew of the Earth-orbiting Space Shuttle Atlantis. The area covered stretches almost 300 miles from Aransas Pass, Texas to Cameron, Louisiana. The sharp detail of both the natural and cultural features noted throughout the scene is especially evident in the Houston area, where highways, major streets, airport runways and even some neighborhood lanes are easily seen. Other major areas seen are Austin, San Antonio and the Golden Triangle. An Aero Linhof camera was used to expose the frame.

  18. Blind subjects construct conscious mental images of visual scenes encoded in musical form.

    PubMed Central

    Cronly-Dillon, J; Persaud, K C; Blore, R

    2000-01-01

    Blind (previously sighted) subjects are able to analyse, describe and graphically represent a number of high-contrast visual images translated into musical form de novo. We presented musical transforms of a random assortment of photographic images of objects and urban scenes to such subjects, a few of which depicted architectural and other landmarks that may be useful in navigating a route to a particular destination. Our blind subjects were able to use the sound representation to construct a conscious mental image that was revealed by their ability to depict a visual target by drawing it. We noted the similarity between the way the visual system integrates information from successive fixations to form a representation that is stable across eye movements and the way a succession of image frames (encoded in sound) which depict different portions of the image are integrated to form a seamless mental image. Finally, we discuss the profound resemblance between the way a professional musician carries out a structural analysis of a musical composition in order to relate its structure to the perception of musical form and the strategies used by our blind subjects in isolating structural features that collectively reveal the identity of visual form. PMID:11413637

  19. Urea, sugar, nonesterified fatty acid and cholesterol content of the blood in prolonged weightlessness

    NASA Technical Reports Server (NTRS)

    Balakhovskiy, I. S.; Orlova, T. A.

    1975-01-01

    Biochemical blood composition studies on astronauts during weightlessness flight simulation tests and during actual space flights showed some disturbances of metabolic processes. Increases in blood sugar, fatty acid and cholesterol, and urea content are noted.

  20. Scene recognition following locomotion around a scene.

    PubMed

    Motes, Michael A; Finlay, Cory A; Kozhevnikov, Maria

    2006-01-01

    Effects of locomotion on scene-recognition reaction time (RT) and accuracy were studied. In experiment 1, observers memorized an 11-object scene and made scene-recognition judgments on subsequently presented scenes from the encoded view or different views (ie scenes were rotated or observers moved around the scene, both from 40 degrees to 360 degrees). In experiment 2, observers viewed different 5-object scenes on each trial and made scene-recognition judgments from the encoded view or after moving around the scene, from 36 degrees to 180 degrees. Across experiments, scene-recognition RT increased (in experiment 2 accuracy decreased) with angular distance between encoded and judged views, regardless of how the viewpoint changes occurred. The findings raise questions about conditions in which locomotion produces spatially updated representations of scenes.

  1. 77 FR 16165 - United States Savings Bonds and Notes; Payments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-20

    ... security by imprinting the word ``PAID'' on its face and entering the amount and date of the actual payment... imprint of a payment stamp. The stamp may not exceed 1 1/8 inches in any dimension and must include the...

  2. A note on a phrase in Shakespeare's play King Lear: 'a plague upon your epileptic visage'.

    PubMed

    Betts, T; Betts, H

    1998-10-01

    In Shakespeare's play King Lear the word 'epileptic' appears (used in a derogatory manner). This is held to be the first appearance of the word in the English language (although we have found earlier English references to the word which Shakespeare may have read). Textual analysis of the lines following the use of 'epileptic' suggests that it is actually a reference to the pock-marks of syphilis, endemic in Elizabethan England, and is not actually a reference to epilepsy itself.

  3. Public engagement as a means of restoring public trust in science--hitting the notes, but missing the music?

    PubMed

    Wynne, Brian

    2006-01-01

    This paper analyses the recent widespread moves to 'restore' public trust in science by developing an avowedly two-way, public dialogue with science initiatives. Noting how previously discredited and supposedly abandoned public deficit explanations of 'mistrust' have actually been continually reinvented, it argues that this is a symptom of a continuing failure of scientific and policy institutions to place their own science-policy institutional culture into the frame of dialogue, as possible contributory cause of the public mistrust problem. Copyright 2006 S. Karger AG, Basel.

  4. Hydrological AnthropoScenes

    NASA Astrophysics Data System (ADS)

    Cudennec, Christophe

    2016-04-01

    The Anthropocene concept encapsulates the planetary-scale changes resulting from accelerating socio-ecological transformations, beyond the stratigraphic definition currently under debate. The emergence of multi-scale and proteiform complexity requires interdisciplinary and systems approaches. Yet, to reduce the cognitive challenge of tackling this complexity, the global Anthropocene syndrome must now be studied from various topical points of view and grounded at regional and local levels. A systems approach should make it possible to identify AnthropoScenes, i.e. settings where a socio-ecological transformation subsystem is clearly coherent within boundaries and displays explicit relationships with neighbouring/remote scenes and within a nesting architecture. Hydrology is a key topical point of view to be explored, as it is important in many aspects of the Anthropocene, whether with water itself acting as a resource, hazard or transport force, or through the network, connectivity, interface, teleconnection, emergence and scaling issues it determines. We schematically exemplify these aspects with three contrasting hydrological AnthropoScenes in Tunisia, France and Iceland, and reframe within them concepts from the hydrological-change debate.
    Bai X., van der Leeuw S., O'Brien K., Berkhout F., Biermann F., Brondizio E., Cudennec C., Dearing J., Duraiappah A., Glaser M., Revkin A., Steffen W., Syvitski J., 2016. Plausible and desirable futures in the Anthropocene: A new research agenda. Global Environmental Change, in press, http://dx.doi.org/10.1016/j.gloenvcha.2015.09.017
    Brondizio E., O'Brien K., Bai X., Biermann F., Steffen W., Berkhout F., Cudennec C., Lemos M.C., Wolfe A., Palma-Oliveira J., Chen A. C-T. Re-conceptualizing the Anthropocene: A call for collaboration. Global Environmental Change, in review.
    Montanari A., Young G., Savenije H., Hughes D., Wagener T., Ren L., Koutsoyiannis D., Cudennec C., Grimaldi S., Blöschl G., Sivapalan M., Beven K., Gupta H., Arheimer B., Huang Y., Schumann A., Post D., Taniguchi M., Boegh E., Hubert P., Harman C., Thompson S., Rogger M., Hipsey M., Toth E., Viglione A., Di Baldassarre G., Schaefli B., McMillan H., Schymanski S., Characklis G., Yu B., Pang Z., Belyaev V., 2013. "Panta Rhei - Everything Flows": Change in hydrology and society - The IAHS Scientific Decade 2013-2022. Hydrological Sciences Journal, 58, 6, 1256-1275, DOI: 10.1080/02626667.2013.809088

  5. 40 CFR 1068.501 - How do I report emission-related defects?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... methods for tracking, investigating, reporting, and correcting emission-related defects. In your request... aggregate in tracking, identifying, investigating, evaluating, reporting, and correcting potential and... it is actually defective. Note that this paragraph (b)(2) does not require data-tracking or recording...

  6. The Text's the Thing: Using (Neglected) Issues of Textual Scholarship to Help Students Reimagine Shakespeare

    ERIC Educational Resources Information Center

    Parsons, Scott

    2009-01-01

    Do individuals know what words Shakespeare actually wrote? Exploring these issues can yield dramatic interest. With references to Shakespeare's Quartos and Folios, the author examines key textual issues and discrepancies in classroom studies of "Hamlet." (Contains 8 notes.)

  7. 31 CFR 560.215 - Prohibitions on foreign entities owned or controlled by U.S. persons.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... knowingly means that the person engages in the transaction with actual knowledge or reason to know. (3) For... intelligence activities of the United States Government. Note to § 560.215: A U.S. person is subject to the...

  8. 31 CFR 560.215 - Prohibitions on foreign entities owned or controlled by U.S. persons.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... knowingly means that the person engages in the transaction with actual knowledge or reason to know. (3) For... intelligence activities of the United States Government. Note to § 560.215: A U.S. person is subject to the...

  9. Development in Early Childhood.

    ERIC Educational Resources Information Center

    Elkind, David

    1991-01-01

    Reviews some of the major cognitive, social, and emotional achievements of young children and discusses some of their limitations. Divides description of development into intellectual, language, social, and emotional development. Notes that this division represents adult categories of thought and does not represent young children's actual modes of…

  10. Principles of management in injuries to the cervical spine.

    PubMed

    Krasuski, M; Kiwerski, J E

    2000-03-30

    Thorough and prompt diagnostics and selection of the proper course of treatment are often crucial for a desirable outcome in patients with cervical spine trauma complicated by SCI. This is indicated by the fact that even among patients with an initial presentation of a complete cord lesion, a certain percentage (ca. 15 percent) of appropriately managed patients can achieve significant improvement in their neurological status. It frequently happens, however, that poor radiological documentation, careless preliminary examination, improper transport from the accident scene, or treatment ill-suited to the actual lesion renders neurological improvement impossible, and at times even brings about a deterioration of neurological status in comparison to the initial examination.

  11. Making Time for Nature: Visual Exposure to Natural Environments Lengthens Subjective Time Perception and Reduces Impulsivity

    PubMed Central

    Berry, Meredith S.; Repke, Meredith A.; Nickerson, Norma P.; Conway, Lucian G.; Odum, Amy L.; Jordan, Kerry E.

    2015-01-01

    Impulsivity in delay discounting is associated with maladaptive behaviors such as overeating and drug and alcohol abuse. Researchers have recently noted that delay discounting, even when measured by a brief laboratory task, may be the best predictor of human health related behaviors (e.g., exercise) currently available. Identifying techniques to decrease impulsivity in delay discounting, therefore, could help improve decision-making on a global scale. Visual exposure to natural environments is one recent approach shown to decrease impulsive decision-making in a delay discounting task, although the mechanism driving this result is currently unknown. The present experiment was thus designed to evaluate not only whether visual exposure to natural (mountains, lakes) relative to built (buildings, cities) environments resulted in less impulsivity, but also whether this exposure influenced time perception. Participants were randomly assigned to either a natural environment condition or a built environment condition. Participants viewed photographs of either natural scenes or built scenes before and during a delay discounting task in which they made choices about receiving immediate or delayed hypothetical monetary outcomes. Participants also completed an interval bisection task in which natural or built stimuli were judged as relatively longer or shorter presentation durations. Following the delay discounting and interval bisection tasks, additional measures of time perception were administered, including how many minutes participants thought had passed during the session and a scale measurement of whether time "flew" or "dragged" during the session. Participants exposed to natural as opposed to built scenes were less impulsive and also reported longer subjective session times, although no differences across groups were revealed with the interval bisection task. These results are the first to suggest that decreased impulsivity from exposure to natural as opposed to built environments may be related to lengthened time perception. PMID:26558610

  12. Fatal intravenous fentanyl abuse: four cases involving extraction of fentanyl from transdermal patches.

    PubMed

    Tharp, Amy M; Winecker, Ruth E; Winston, David C

    2004-06-01

    The transdermal fentanyl system delivers a specific dose at a constant rate. Even after the prescribed application time has elapsed, enough fentanyl remains within a patch to provide a potentially lethal dose. Death due to the intravenous injection of fentanyl extracted from transdermal patches has not been previously reported. We present 4 cases in which the source of fentanyl was transdermal patches and was injected. In all of these cases, the victim was a white male who died at home. Case 1 was a 35-year-old with no known history of drug use, who was found by his wife on the floor of his workshop. Police recovered a fentanyl patch, needle, and syringe at the scene. Case 2 was a 38-year-old with a known history of drug use whose family claimed that he was in a treatment program that used fentanyl patches for unknown reasons. His brother found him dead in bed, and law enforcement officers found a hypodermic needle beside the body; a ligature around his left hand, and apparent needle marks between his first and second digits were also noted. Case 3 was a 42-year-old with a recent attempted suicide via overdose who was found dead at his home. An empty box of fentanyl patches, Valium, Ritalin, and 2 syringes were found at the scene. Case 4 was a 39-year-old found by his mother, who admitted to removing a needle with attached syringe from the decedent's arm. Medications at the scene included hydrocodone, alprazolam, zolpidem, and fentanyl patches. All reported deaths were attributed to fentanyl intoxication, with blood concentrations ranging from 5 to 27 microg/L.

  13. Carney v Newton: expert evidence about the standard of clinical notes.

    PubMed

    Faunce, Thomas; Hammer, Ingrid; Jefferys, Susannah

    2007-12-01

    In Carney v Newton [2006] TASSC 4 the Tasmanian Supreme Court heard a claim that the defendant breached his duty of care by failing to properly diagnose and treat a node positive carcinoma in the plaintiff's breast tissue. At trial, argument turned on the actual dialogue that took place during the initial consultation, with significant reliance on the clinical notes of the defendant. The court gave considerable weight to "expert" witnesses in ascertaining the acceptability of the defendant's conduct concerning the maintenance and interpretation of his clinical notes. This raises important questions in relation to proof of quality of medical records as part of the current professional standard of care, as modified by recent legislation in most jurisdictions.

  14. Hippocampal Contribution to Implicit Configuration Memory Expressed via Eye Movements During Scene Exploration

    PubMed Central

    Ryals, Anthony J.; Wang, Jane X.; Polnaszek, Kelly L.; Voss, Joel L.

    2015-01-01

    Although the hippocampus unequivocally supports explicit/declarative memory, fewer findings have demonstrated its role in implicit expressions of memory. We tested for hippocampal contributions to an implicit expression of configural/relational memory for complex scenes using eye-movement tracking during functional magnetic resonance imaging (fMRI) scanning. Participants studied scenes and were later tested using scenes that resembled study scenes in their overall feature configuration but comprised different elements. These configurally similar scenes were used to limit explicit memory, and were intermixed with new scenes that did not resemble studied scenes. Scene configuration memory was expressed through eye movements reflecting exploration overlap (EO), which is the viewing of the same scene locations at both study and test. EO reliably discriminated similar study-test scene pairs from study-new scene pairs, was reliably greater for similarity-based recognition hits than for misses, and correlated with hippocampal fMRI activity. In contrast, subjects could not reliably discriminate similar from new scenes by overt judgments, although ratings of familiarity were slightly higher for similar than new scenes. Hippocampal fMRI correlates of this weak explicit memory were distinct from EO-related activity. These findings collectively suggest that EO was an implicit expression of scene configuration memory associated with hippocampal activity. Visual exploration can therefore reflect implicit hippocampal-related memory processing that can be observed in eye-movement behavior during naturalistic scene viewing. PMID:25620526

  15. An Analysis of the Max-Min Texture Measure.

    DTIC Science & Technology

    1982-01-01

    List of tables (recovered): confusion matrices for Scenes A, B, C, E, and H, each in PANC and IR imagery (Tables D1-D10).

  16. Analysis of Actual Soil Degradation by Erosion Using Satellite Imagery and Terrain Attributes in the Czech Republic

    NASA Astrophysics Data System (ADS)

    Zizala, Daniel

    2015-04-01

    Water and wind erosion of soil (and possibly tillage erosion) is the most significant soil degradation factor in the Czech Republic. Moreover, this phenomenon also seriously affects the quality of water sources. About 50% of arable land in the Czech Republic is endangered by water erosion and about 10% by wind erosion, and these processes have been accelerated by human activity. The specific conditions of agricultural land in the Czech Republic, including highland relief and, in particular, large parcel sizes and the intensification of agriculture, do not allow runoff to be reduced. Insufficient protection against accelerated erosion is related to the lack of landscape and hydrographic elements and the large area of agricultural plots. Currently, this issue is addressed at the plot scale by field investigation or at the regional scale using numerical and empirical erosion models. Nevertheless, these models only predict the potential for soil erosion, and large-scale assessment of the actual degradation level of soils still relies on expert knowledge, with many remaining uncertainties. Characterization of the actual degradation level of soil is therefore required, especially for assessing the long-term impact of soil erosion on soil fertility. Soil degradation by erosion can be effectively monitored or quantified with modern remote sensing tools at variable levels of detail. The aim of our study is to analyse the applicability of remote sensing for monitoring actual soil degradation by erosion. Satellite and aerial image data (multispectral and hyperspectral), terrain attributes and data from field investigation are the main sources for this analysis. The first step was the delimitation of bare soils using supervised classification of a set of Landsat scenes from 2000-2014. The most suitable period for obtaining spectral image data with the lowest vegetation cover of soil was determined, and the results were verified against statistics of areas under farm crops from the Czech Statistical Office. For each land parcel, the number of scenes in which bare soil is identified is available; some land parcels were found without vegetation cover up to 40 times. This set of bare-soil images is used to assess the soil degradation stage. The analysis was performed on 5 test sites in the Czech Republic and also using data from the database of Soil Erosion Monitoring of Agricultural Land, in which more than 500 erosion events are currently registered. Additional remote sensing data (Hyperion data and aerial hyperspectral data) were used for detailed analysis of the test sites. The results reveal that the satellite imagery set, soil maps, terrain attributes and erosion modelling can be successfully applied to the assessment of actual soil degradation by erosion. The research has been supported by project no. QJ330118 "Using Remote Sensing for Monitoring of Soil Degradation by Erosion and Erosion Effects", funded by the Ministry of Agriculture.
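
    The first processing step described, supervised classification of Landsat scenes to delimit bare-soil pixels, follows a familiar remote-sensing recipe: assemble band values for labelled training pixels, fit a classifier, and apply it per scene. The sketch below uses a random forest from scikit-learn as a stand-in; the band layout and labels are hypothetical, and the study's actual classifier and class scheme may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_bare_soil(scene_bands, train_pixels, train_labels):
    """Per-pixel bare-soil classification of one Landsat scene.

    scene_bands:  (n_bands, rows, cols) reflectance array for the scene.
    train_pixels: (n_samples, n_bands) band values of labelled training pixels.
    train_labels: (n_samples,) 1 = bare soil, 0 = other cover.
    Returns a boolean (rows, cols) bare-soil mask.
    """
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(train_pixels, train_labels)

    n_bands, rows, cols = scene_bands.shape
    flat = scene_bands.reshape(n_bands, -1).T          # (rows*cols, n_bands)
    mask = clf.predict(flat).reshape(rows, cols).astype(bool)
    return mask
```

    Counting, per land parcel, in how many of the 2000-2014 scenes the mask is true yields the per-parcel bare-soil frequency the abstract refers to.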

  17. Feature diagnosticity and task context shape activity in human scene-selective cortex.

    PubMed

    Lowe, Matthew X; Gallivan, Jason P; Ferber, Susanne; Cant, Jonathan S

    2016-01-15

    Scenes are constructed from multiple visual features, yet previous research investigating scene processing has often focused on the contributions of single features in isolation. In the real world, features rarely exist independently of one another and likely converge to inform scene identity in unique ways. Here, we utilize fMRI and pattern classification techniques to examine the interactions between task context (i.e., attend to diagnostic global scene features; texture or layout) and high-level scene attributes (content and spatial boundary) to test the novel hypothesis that scene-selective cortex represents multiple visual features, the importance of which varies according to their diagnostic relevance across scene categories and task demands. Our results show for the first time that scene representations are driven by interactions between multiple visual features and high-level scene attributes. Specifically, univariate analysis of scene-selective cortex revealed that task context and feature diagnosticity shape activity differentially across scene categories. Examination using multivariate decoding methods revealed results consistent with univariate findings, but also evidence for an interaction between high-level scene attributes and diagnostic visual features within scene categories. Critically, these findings suggest visual feature representations are not distributed uniformly across scene categories but are shaped by task context and feature diagnosticity. Thus, we propose that scene-selective cortex constructs a flexible representation of the environment by integrating multiple diagnostically relevant visual features, the nature of which varies according to the particular scene being perceived and the goals of the observer. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Crime scene investigation, reporting, and reconstruction (CSIRR)

    NASA Astrophysics Data System (ADS)

    Booth, John F.; Young, Jeffrey M.; Corrigan, Paul

    1997-02-01

    Graphic Data Systems Corporation (GDS Corp.) and Intelligent Graphics Solutions, Inc. (IGS) combined talents in 1995 to design and develop a MicroGDS™ application to support field investigations of crime scenes, such as homicides, bombings, and arsons. IGS and GDS Corp. prepared design documents under the guidance of federal, state, and local crime scene reconstruction experts and with information from the FBI's evidence response team field book. The application was then developed to encompass the key components of crime scene investigation: staff assigned to the incident, tasks occurring at the scene, visits to the scene location, photographs taken of the crime scene, related documents, involved persons, catalogued evidence, and two- or three-dimensional crime scene reconstruction. Crime scene investigation, reporting, and reconstruction (CSIRR) provides investigators with a single application for both capturing all tabular data about the crime scene and quickly rendering a sketch of the scene. Tabular data is captured through intuitive database forms, while MicroGDS™ has been modified to readily allow non-CAD users to sketch the scene.

  19. Predicting Ambulance Time of Arrival to the Emergency Department Using Global Positioning System and Google Maps

    PubMed Central

    Fleischman, Ross J.; Lundquist, Mark; Jui, Jonathan; Newgard, Craig D.; Warden, Craig

    2014-01-01

    Objective To derive and validate a model that accurately predicts ambulance arrival time that could be implemented as a Google Maps web application. Methods This was a retrospective study of all scene transports in Multnomah County, Oregon, from January 1 through December 31, 2008. Scene and destination hospital addresses were converted to coordinates. ArcGIS Network Analyst was used to estimate transport times based on street network speed limits. We then created a linear regression model to improve the accuracy of these street network estimates using weather, patient characteristics, use of lights and sirens, daylight, and rush-hour intervals. The model was derived from a 50% sample and validated on the remainder. Significance of the covariates was determined by p < 0.05 for a t-test of the model coefficients. Accuracy was quantified by the proportion of estimates that were within 5 minutes of the actual transport times recorded by computer-aided dispatch. We then built a Google Maps-based web application to demonstrate application in real-world EMS operations. Results There were 48,308 included transports. Street network estimates of transport time were accurate within 5 minutes of actual transport time less than 16% of the time. Actual transport times were longer during daylight and rush-hour intervals and shorter with use of lights and sirens. Age under 18 years, gender, wet weather, and trauma system entry were not significant predictors of transport time. Our model predicted arrival time within 5 minutes 73% of the time. For lights and sirens transports, accuracy was within 5 minutes 77% of the time. Accuracy was identical in the validation dataset. Lights and sirens saved an average of 3.1 minutes for transports under 8.8 minutes, and 5.3 minutes for longer transports. Conclusions An estimate of transport time based only on a street network significantly underestimated transport times. A simple model incorporating few variables can predict ambulance time of arrival to the emergency department with good accuracy. This model could be linked to global positioning system data and an automated Google Maps web application to optimize emergency department resource use. Use of lights and sirens had a significant effect on transport times. PMID:23865736
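
    The prediction model is a straightforward linear regression that corrects the street-network estimate with a handful of covariates. A sketch of the fitting step with scikit-learn is shown below; the feature names mirror those mentioned in the abstract, but the data values and column layout are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical transport records: one row per ambulance transport.
df = pd.DataFrame({
    "network_estimate_min": [6.2, 11.5, 4.8, 9.9],   # street-network travel time
    "lights_sirens":        [1, 0, 1, 0],
    "daylight":             [1, 1, 0, 1],
    "rush_hour":            [0, 1, 0, 1],
    "actual_min":           [5.4, 14.1, 4.1, 12.3],  # from computer-aided dispatch
})

X = df[["network_estimate_min", "lights_sirens", "daylight", "rush_hour"]]
y = df["actual_min"]

model = LinearRegression().fit(X, y)
predicted = model.predict(X)

# Accuracy criterion used in the study: within 5 minutes of the actual time.
within_5 = np.mean(np.abs(predicted - y) <= 5.0)
print("coefficients:", dict(zip(X.columns, model.coef_)))
print("share of predictions within 5 min: %.2f" % within_5)
```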

  20. Job-Seeking Behavior and Vocational Development.

    ERIC Educational Resources Information Center

    Stevens, Nancy D.

    Noting that job-seeking behavior, as contrasted with the processes of vocational choice and work adjustment, has been neglected in theories of vocational development, the author identifies three job seeking behavior patterns: (1) individuals exhibiting specific goals and self actualized behavior obtain desired jobs most successfully; (2) those…

  1. An Organizational Analysis of Special Education Reform.

    ERIC Educational Resources Information Center

    Skrtic, Thomas M.

    The paper identifies current special education practice and the current organization of schools as instrumental in actually creating the category of mildly handicapped students. A dichotomy between departments of special education and educational administration is noted. Only replacement of the system with an entirely different configuration and…

  2. Hurricane Bonnie, Northeast of Bermuda, Atlantic Ocean

    NASA Image and Video Library

    1992-09-20

    STS047-151-618 (19 Sept 1992) --- A large format Earth observation camera captured this scene of Hurricane Bonnie during the late phase of the mission. Bonnie was located about 500 miles from Bermuda near a point centered at 35.4 degrees north latitude and 56.8 degrees west longitude. The Linhof camera was aimed through one of Space Shuttle Endeavour's aft flight deck windows (note slight reflection at right). The crew members noticed the well defined eye in this hurricane, compared to an almost non-existent eye in the case of Hurricane Iniki, which was relatively broken up by the mission's beginning. Six NASA astronauts and a Japanese payload specialist conducted eight days of in-space research.

  3. Rocks Exposed on Slope in Aram Chaos

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-550, 20 November 2003

    This spectacular vista of sedimentary rocks outcropping on a slope in Aram Chaos was acquired by the Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) on 14 November 2003. Dark piles of coarse talus have come down the slopes as these materials continue to erode over time. Note that there are no small meteor impact craters in this image, indicating that erosion of these outcrops has been recent, if not on-going. This area is located near 2.8°S, 20.5°W. The 200 meter scale bar is about 656 feet across. Sunlight illuminates the scene from the lower right.

  4. Medicine and music: a note on John Hunter (1728-93) and Joseph Haydn (1732-1809).

    PubMed

    Fu, Louis

    2010-05-01

    Joseph Haydn was a central figure in the development and growth of the European classical musical tradition in its transition from the Baroque period. John Hunter as the Founder of Scientific Surgery was a dominant figure in 18th-century British medical science. Anne Hunter née Home (1742-1821) was in her own right a figure of some eminence in the literary circles of 18th-century London. Attracted to the burgeoning medical and musical scenes of London, John Hunter married Anne Home and became a famous surgeon; Haydn became acquainted with the Hunters. The people, the opportunities and the circumstances had coincided.

  5. What makes gambling cool? Images of agency and self-control in fiction films.

    PubMed

    Egerer, Michael; Rantala, Varpu

    2015-03-01

    The study is a qualitative film analysis. It seeks to determine the semiotic and cinematic structures that make gambling appealing in films based on analysis of 72 film scenes from 28 narrative fiction films made from 1922 to 2003 about gambling in North American and West European mainstream cinema. The main game types include card games, casino games, and slot machines. The theme of self-control and competence was identified as being central to gambling's appeal. These images are strongly defined by gender. The study was funded by ELOMEDIA, financed by the Finnish Ministry of Education and Culture as well as the Finnish Foundation for Alcohol Studies. The limitations of the study are noted.

  6. Lunar Roving Vehicle parked in lunar depression on slope of Stone Mountain

    NASA Image and Video Library

    1972-04-22

    AS16-107-17473 (22 April 1972) --- The Lunar Roving Vehicle (LRV) appears to be parked in a deep lunar depression, on the slope of Stone Mountain. This photograph of the lunar scene at Station No. 4 was taken during the second Apollo 16 extravehicular activity (EVA) at the Descartes landing site. A sample collection bag is in the right foreground. Note field of small boulders at upper right. While astronauts John W. Young, commander, and Charles M. Duke Jr., lunar module pilot, descended in the Lunar Module (LM) "Orion" to explore the moon, astronaut Thomas K. Mattingly II, command module pilot, remained with the Command and Service Modules (CSM) in lunar orbit.

  7. Scenes unseen: The parahippocampal cortex intrinsically subserves contextual associations, not scenes or places per se

    PubMed Central

    Bar, Moshe; Aminoff, Elissa; Schacter, Daniel L.

    2009-01-01

    The parahippocampal cortex (PHC) has been implicated both in episodic memory and in place/scene processing. We proposed that this region should instead be seen as intrinsically mediating contextual associations, and not place/scene processing or episodic memory exclusively. Given that place/scene processing and episodic memory both rely on associations, this modified framework provides a platform for reconciling what seemed like different roles assigned to the same region. Comparing scenes with scenes, we show here that the PHC responds significantly more strongly to scenes with rich contextual associations compared with scenes of equal visual qualities but less associations. This result provides the strongest support to the view that the PHC mediates contextual associations in general, rather than places or scenes proper, and necessitates a revision of current views such as that the PHC contains a dedicated place/scenes “module.” PMID:18716212

  8. Violence and its injury consequences in American movies

    PubMed Central

    McArthur, David L; Peek-Asa, Corinne; Webb, Theresa; Fisher, Kevin; Cook, Bernard; Browne, Nick; Kraus, Jess

    2000-01-01

    Objectives To evaluate the seriousness and frequency of violence and the degree of associated injury depicted in the 100 top-grossing American films of 1994. Methods Each scene in each film was examined for the presentation of violent actions on persons and coded by a systematic context-sensitive analytic scheme. Specific degrees of violence and indices of injury severity were abstracted. Only actually depicted, not implied, actions were coded, although both explicit and implied consequences were examined. Results The median number of violent actions per film was 16 (range, 0-110). Intentional violence outnumbered unintentional violence by a factor of 10. Almost 90% of violent actions showed no consequences to the recipient's body, although more than 80% of the violent actions were executed with lethal or moderate force. Fewer than 1% of violent actions were accompanied by injuries that were then medically attended. Conclusions Violent force in American films of 1994 was overwhelmingly intentional and in 4 of 5 cases was executed at levels likely to cause significant bodily injury. Not only action films but movies of all genres contained scenes in which the intensity of the action was not matched by correspondingly severe injury consequences. Many American films, regardless of genre, tend to minimize the consequences of violence to human beings. PMID:10986175

  9. Holographic three-dimensional telepresence using large-area photorefractive polymer.

    PubMed

    Blanche, P-A; Bablumian, A; Voorakaranam, R; Christenson, C; Lin, W; Gu, T; Flores, D; Wang, P; Hsieh, W-Y; Kathaperumal, M; Rachwal, B; Siddiqui, O; Thomas, J; Norwood, R A; Yamamoto, M; Peyghambarian, N

    2010-11-04

    Holography is a technique that is used to display objects or scenes in three dimensions. Such three-dimensional (3D) images, or holograms, can be seen with the unassisted eye and are very similar to how humans see the actual environment surrounding them. The concept of 3D telepresence, a real-time dynamic hologram depicting a scene occurring in a different location, has attracted considerable public interest since it was depicted in the original Star Wars film in 1977. However, the lack of sufficient computational power to produce realistic computer-generated holograms and the absence of large-area and dynamically updatable holographic recording media have prevented realization of the concept. Here we use a holographic stereographic technique and a photorefractive polymer material as the recording medium to demonstrate a holographic display that can refresh images every two seconds. A 50 Hz nanosecond pulsed laser is used to write the holographic pixels. Multicoloured holographic 3D images are produced by using angular multiplexing, and the full parallax display employs spatial multiplexing. 3D telepresence is demonstrated by taking multiple images from one location and transmitting the information via Ethernet to another location where the hologram is printed with the quasi-real-time dynamic 3D display. Further improvements could bring applications in telemedicine, prototyping, advertising, updatable 3D maps and entertainment.

  10. Moral imagination: Facilitating prosocial decision-making through scene imagery and theory of mind.

    PubMed

    Gaesser, Brendan; Keeler, Kerri; Young, Liane

    2018-02-01

    How we imagine and subjectively experience the future can inform how we make decisions in the present. Here, we examined a prosocial effect of imagining future episodes in motivating moral decisions about helping others in need, as well as the underlying cognitive mechanisms. Across three experiments we found that people are more willing to help others in specific situations after imagining helping them in those situations. Manipulating the spatial representation of imagined future episodes in particular was effective at increasing intentions to help others, suggesting that scene imagery plays an important role in the prosocial effect of episodic simulation. Path modeling analyses revealed that episodic simulation interacts with theory of mind in facilitating prosocial responses but can also operate independently. Moreover, we found that our manipulations of the imagined helping episode increased actual prosocial behavior, which also correlated with changes in reported willingness to help. Based on these findings, we propose a new model that begins to capture the multifaceted mechanisms by which episodic simulation contributes to prosocial decision-making, highlighting boundaries and promising future directions to explore. Implications for research in moral cognition, imagination, and patients with impairments in episodic simulation are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Deployment of spatial attention towards locations in memory representations. An EEG study.

    PubMed

    Leszczyński, Marcin; Wykowska, Agnieszka; Perez-Osorio, Jairo; Müller, Hermann J

    2013-01-01

    Recalling information from visual short-term memory (VSTM) involves the same neural mechanisms as attending to an actually perceived scene. In particular, retrieval from VSTM has been associated with orienting of visual attention towards a location within a spatially-organized memory representation. However, an open question concerns whether spatial attention is also recruited during VSTM retrieval even when performing the task does not require access to spatial coordinates of items in the memorized scene. The present study combined a visual search task with a modified, delayed central probe protocol, together with EEG analysis, to answer this question. We found a temporal contralateral negativity (TCN) elicited by a centrally presented go-signal which was spatially uninformative and featurally unrelated to the search target and informed participants only about a response key that they had to press to indicate a prepared target-present vs. -absent decision. This lateralization during VSTM retrieval (TCN) provides strong evidence of a shift of attention towards the target location in the memory representation, which occurred despite the fact that the present task required no spatial (or featural) information from the search to be encoded, maintained, and retrieved to produce the correct response and that the go-signal did not itself specify any information relating to the location and defining feature of the target.

  12. Motion of glossy objects does not promote separation of lighting and surface colour

    PubMed Central

    2017-01-01

    The surface properties of an object, such as texture, glossiness or colour, provide important cues to its identity. However, the actual visual stimulus received by the eye is determined by both the properties of the object and the illumination. We tested whether operational colour constancy for glossy objects (the ability to distinguish changes in spectral reflectance of the object, from changes in the spectrum of the illumination) was affected by rotational motion of either the object or the light source. The different chromatic and geometric properties of the specular and diffuse reflections provide the basis for this discrimination, and we systematically varied specularity to control the available information. Observers viewed animations of isolated objects undergoing either lighting or surface-based spectral transformations accompanied by motion. By varying the axis of rotation, and surface patterning or geometry, we manipulated: (i) motion-related information about the scene, (ii) relative motion between the surface patterning and the specular reflection of the lighting, and (iii) image disruption caused by this motion. Despite large individual differences in performance with static stimuli, motion manipulations neither improved nor degraded performance. As motion significantly disrupts frame-by-frame low-level image statistics, we infer that operational constancy depends on a high-level scene interpretation, which is maintained in all conditions. PMID:29291113

  13. Fuzzy logic system able to detect interesting areas of a video sequence

    NASA Astrophysics Data System (ADS)

    De Vleeschouwer, Christophe; Marichal, Xavier; Delmot, Thierry; Macq, Benoit M. M.

    1997-06-01

    This paper introduces an automatic tool able to analyze a picture according to the semantic interest an observer attributes to its content. Its aim is to assign a 'level of interest' to the distinct areas of the picture extracted by any segmentation tool. For the purpose of semantic interpretation of images, a single criterion is clearly insufficient, because the human brain, drawing on its a priori knowledge and its huge memory of real-world scenes, combines different subjective criteria to reach its final decision. The developed method permits such a combination through a model that uses assumptions to express some general subjective criteria. Fuzzy logic enables the user to encode knowledge in a form that is very close to the way experts think about the decision process. This fuzzy modeling is also well suited to representing the opinions of multiple collaborating or even conflicting experts. The assumptions are verified through a non-hierarchical strategy that considers them in a random order, with each partial result contributing to the final one. The results presented show that the tool is effective for a wide range of natural pictures. It is versatile and flexible in that it can be used stand-alone or can take into account any a priori knowledge about the scene.
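
    The fuzzy combination of subjective criteria can be illustrated without a dedicated library: each criterion is mapped to a membership degree in [0, 1], and the degrees are aggregated into a level of interest for each segmented region. The sketch below is a deliberately simple stand-in for the paper's rule base; the three criteria and the aggregation operator are assumptions.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 1 between b and c, 0 outside [a, d]."""
    if b <= x <= c:
        return 1.0
    if x <= a or x >= d:
        return 0.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def region_interest(region):
    """Fuzzy 'level of interest' for one segmented region (features in [0, 1])."""
    # Membership degrees for three illustrative criteria.
    central = trapezoid(region["distance_to_center"], 0.0, 0.0, 0.2, 0.6)
    moving = trapezoid(region["motion_magnitude"], 0.05, 0.2, 1.0, 1.0)
    skin_like = trapezoid(region["skin_color_ratio"], 0.1, 0.4, 1.0, 1.0)

    # Aggregate the "expert opinions" with a compensatory mean rather than a
    # strict AND, so one weak criterion does not veto the others.
    return (central + moving + skin_like) / 3.0

print(region_interest({"distance_to_center": 0.15,
                       "motion_magnitude": 0.5,
                       "skin_color_ratio": 0.3}))
```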

  14. Violence and its injury consequences in American movies: a public health perspective.

    PubMed

    McArthur, D L; Peek-Asa, C; Webb, T; Fisher, K; Cook, B; Browne, N; Kraus, J

    2000-09-01

    To evaluate the seriousness and frequency of violence and the degree of associated injury depicted in the 100 top-grossing American films of 1994. Each scene in each film was examined for the presentation of violent actions on persons and coded by a systematic context-sensitive analytic scheme. Specific degrees of violence and indices of injury severity were abstracted. Only actually depicted, not implied, actions were coded, although both explicit and implied consequences were examined. The median number of violent actions per film was 16 (range, 0-110). Intentional violence outnumbered unintentional violence by a factor of 10. Almost 90% of violent actions showed no consequences to the recipient's body, although more than 80% of the violent actions were executed with lethal or moderate force. Fewer than 1% of violent actions were accompanied by injuries that were then medically attended. Violent force in American films of 1994 was overwhelmingly intentional and in 4 of 5 cases was executed at levels likely to cause significant bodily injury. Not only action films but movies of all genres contained scenes in which the intensity of the action was not matched by correspondingly severe injury consequences. Many American films, regardless of genre, tend to minimize the consequences of violence to human beings.

  15. Global ensemble texture representations are critical to rapid scene perception.

    PubMed

    Brady, Timothy F; Shafer-Skelton, Anna; Alvarez, George A

    2017-06-01

    Traditionally, recognizing the objects within a scene has been treated as a prerequisite to recognizing the scene itself. However, research now suggests that the ability to rapidly recognize visual scenes could be supported by global properties of the scene itself rather than the objects within the scene. Here, we argue for a particular instantiation of this view: that scenes are recognized by treating them as a global texture and processing the pattern of orientations and spatial frequencies across different areas of the scene without recognizing any objects. To test this model, we asked whether there is a link between how proficient individuals are at rapid scene perception and how proficiently they represent simple spatial patterns of orientation information (global ensemble texture). We find a significant and selective correlation between these tasks, suggesting a link between scene perception and spatial ensemble tasks but not nonspatial summary statistics. In a second and third experiment, we additionally show that global ensemble texture information is not only associated with scene recognition, but that preserving only global ensemble texture information from scenes is sufficient to support rapid scene perception; however, preserving the same information is not sufficient for object recognition. Thus, global ensemble texture alone is sufficient to allow activation of scene representations but not object representations. Together, these results provide evidence for a view of scene recognition based on global ensemble texture rather than a view based purely on objects or on nonspatially localized global properties. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
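
    To make the notion of a "global ensemble texture" more concrete, the sketch below computes an illustrative spatial-ensemble descriptor: a coarse grid of gradient-orientation histograms. This is an assumption about what such a representation could look like, not the authors' stimulus-generation or analysis code.

```python
import numpy as np

def ensemble_texture(image, grid=(4, 4), n_orientations=8):
    """Illustrative 'spatial ensemble' descriptor: a per-cell histogram of
    gradient orientation energy, discarding object identity but keeping the
    coarse spatial pattern of orientations (an assumption, not the authors'
    implementation)."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.mod(np.arctan2(gy, gx), np.pi)          # range [0, pi)
    bins = np.floor(orientation / np.pi * n_orientations).astype(int) % n_orientations

    h_cells, w_cells = grid
    H, W = image.shape
    descriptor = np.zeros((h_cells, w_cells, n_orientations))
    for i in range(h_cells):
        for j in range(w_cells):
            ys = slice(i * H // h_cells, (i + 1) * H // h_cells)
            xs = slice(j * W // w_cells, (j + 1) * W // w_cells)
            for b in range(n_orientations):
                descriptor[i, j, b] = magnitude[ys, xs][bins[ys, xs] == b].sum()
    return descriptor / (descriptor.sum() + 1e-9)

# Example on a random "scene"
print(ensemble_texture(np.random.rand(128, 128)).shape)   # (4, 4, 8)
```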

  16. 46 CFR 72.05-20 - Stairways, ladders, and elevators.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... factor of safety of 4 based on the ultimate strength. (j) The stringers, treads, and all platforms and... means of an intermediate landing of rectangular or nearly rectangular shape based on the actual...) Except as further noted the provisions of this section apply to all vessels. (2) For small vessels...

  17. 46 CFR 72.05-20 - Stairways, ladders, and elevators.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... factor of safety of 4 based on the ultimate strength. (j) The stringers, treads, and all platforms and... means of an intermediate landing of rectangular or nearly rectangular shape based on the actual...) Except as further noted the provisions of this section apply to all vessels. (2) For small vessels...

  18. Research on Women of Color: From Ignorance to Awareness.

    ERIC Educational Resources Information Center

    Reid, Pamela Trotman; Kelly, Elizabeth

    1994-01-01

    Discusses the issue that women of color are dealt with as anomalies in psychological research, noting that research paradigms are actually focused on White, middle-class populations. It examines the methodological and theoretical transformations that have occurred in the literature and evaluates the extent to which researchers have successfully…

  19. Soviet Women Respond to Glasnost and Perestroika.

    ERIC Educational Resources Information Center

    Merrill, Martha C.

    1990-01-01

    Notes that Westerners tend to think of glasnost and perestroika in global, abstract terms when in actuality, they affect individual people in many ways. Profiles five Soviet women (Moscow Intourist guide, editor of women's magazine, concert pianist, college graduate, and worker at Chernobyl) and their differing responses to the changes sweeping…

  20. Clickers in the Classroom: A Review and a Replication

    ERIC Educational Resources Information Center

    Keough, Shawn M.

    2012-01-01

    This article reviews 66 clicker technology-based studies focusing on student perceptions/outcomes. Eight major perceptions/outcomes are noted, including high levels of performance (actual and perceived), student attention span, attendance, and participation, as well as student perceptions of satisfaction, feedback, and ease of use. Because the…

  1. Perceived Reachability in Hemispace

    ERIC Educational Resources Information Center

    Gabbard, C.; Ammar, D.; Rodrigues, L.

    2005-01-01

    A common observation in studies of perceived (imagined) compared to actual movement in a reaching paradigm is the tendency to overestimate. Of the studies noted, reaching tasks have been presented in the general midline range. In the present study, strong right-handers were asked to judge the reachability of visual targets projected onto a table…

  2. Required High School Internships

    ERIC Educational Resources Information Center

    Graham, Kate; Morrow, Jennifer

    2013-01-01

    Through a literature review, and in the words of internees, this article describes the value of required internship for career growth. It notes that an internship experience ensures that students have a mentor who can be a professional reference, having actually witnessed what Mojkowski and Washor call the students' "non-academic"…

  3. 7 CFR 4287.156 - Protective advances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... preserves collateral and recovery is actually enhanced by making the advance. Protective advances will not...) Protective advances and interest thereon at the note rate will be guaranteed at the same percentage of loss... 7 Agriculture 15 2010-01-01 2010-01-01 false Protective advances. 4287.156 Section 4287.156...

  4. A Note on Economic Content and Test Validity.

    ERIC Educational Resources Information Center

    Soper, John C.; Brenneke, Judith Staley

    1987-01-01

    Offers practical tips on how teachers can determine whether classroom tests are actually measuring what they are designed to measure. Discusses criterion-related validity, construct validity, and content validity. Demonstrates how to determine the degree of content validity a particular test may have for a particular course or unit. (Author/DH)

  5. A view not to be missed: Salient scene content interferes with cognitive restoration

    PubMed Central

    Van der Jagt, Alexander P. N.; Craig, Tony; Brewer, Mark J.; Pearson, David G.

    2017-01-01

    Attention Restoration Theory (ART) states that built scenes place greater load on attentional resources than natural scenes. This is explained in terms of "hard" and "soft" fascination of built and natural scenes. Given a lack of direct empirical evidence for this assumption we propose that perceptual saliency of scene content can function as an empirically derived indicator of fascination. Saliency levels were established by measuring speed of scene category detection using a Go/No-Go detection paradigm. Experiment 1 shows that built scenes are more salient than natural scenes. Experiment 2 replicates these findings using greyscale images, ruling out a colour-based response strategy, and additionally shows that built objects in natural scenes affect saliency to a greater extent than the reverse. Experiment 3 demonstrates that the saliency of scene content is directly linked to cognitive restoration using an established restoration paradigm. Overall, these findings demonstrate an important link between the saliency of scene content and related cognitive restoration. PMID:28723975

  6. A view not to be missed: Salient scene content interferes with cognitive restoration.

    PubMed

    Van der Jagt, Alexander P N; Craig, Tony; Brewer, Mark J; Pearson, David G

    2017-01-01

    Attention Restoration Theory (ART) states that built scenes place greater load on attentional resources than natural scenes. This is explained in terms of "hard" and "soft" fascination of built and natural scenes. Given a lack of direct empirical evidence for this assumption we propose that perceptual saliency of scene content can function as an empirically derived indicator of fascination. Saliency levels were established by measuring speed of scene category detection using a Go/No-Go detection paradigm. Experiment 1 shows that built scenes are more salient than natural scenes. Experiment 2 replicates these findings using greyscale images, ruling out a colour-based response strategy, and additionally shows that built objects in natural scenes affect saliency to a greater extent than the reverse. Experiment 3 demonstrates that the saliency of scene content is directly linked to cognitive restoration using an established restoration paradigm. Overall, these findings demonstrate an important link between the saliency of scene content and related cognitive restoration.

  7. Comparative Analyses of Live-Action and Animated Film Remake Scenes: Finding Alternative Film-Based Teaching Resources

    ERIC Educational Resources Information Center

    Champoux, Joseph E.

    2005-01-01

    Live-action and animated film remake scenes can show many topics typically taught in organizational behaviour and management courses. This article discusses, analyses and compares such scenes to identify parallel film scenes useful for teaching. The analysis assesses the scenes to decide which scene type, animated or live-action, more effectively…

  8. A mixed-method research to investigate the adoption of mobile devices and Web2.0 technologies among medical students and educators.

    PubMed

    Fan, Si; Radford, Jan; Fabian, Debbie

    2016-04-19

    The past decade has witnessed the increasing adoption of Web 2.0 technologies in medical education. Recently, the notion of digital habitats, Web 2.0 supported learning environments, has also come onto the scene. While there has been initial research on the use of digital habitats for educational purposes, very limited research has examined the adoption of digital habitats by medical students and educators on mobile devices. This paper reports the Stage 1 findings of a two-staged study. The whole study aimed to develop and implement a personal digital habitat, namely digiMe, for medical students and educators at an Australian university. The first stage, however, examined the types of Web 2.0 tools and mobile devices that are being used by potential digiMe users, and reasons for their adoption. In this first stage of research, data were collected through a questionnaire and semi-structured interviews. Questionnaire data collected from 104 participants were analysed using the Predictive Analytics SoftWare (PASW). Frequencies, medians and means were computed. Kruskal-Wallis tests were then performed to examine variations between the views of different participant groups. Notes from the 6 interviews, together with responses to the open-ended section of the questionnaire, were analysed using the constructivist grounded theory approach, to generate key themes relevant to the adoption of Web 2.0 tools and mobile devices. The findings reflected the wide use of mobile devices, including both smart phones and computing tablets, by medical students and educators for learning, teaching and professional development purposes. Among the 22 types of Web 2.0 tools investigated, fewer than half were frequently used by the participants; this reflects a mismatch between users' desires and their actual practice. Age and occupation appeared to be influential factors in adoption, and easy access to information and improved communication were the main purposes. This paper highlights the desire of medical students and educators for a more effective use of Web 2.0 technologies and mobile devices, and the observed mismatch between that desire and their actual practice. It also recognises the critical role of medical education institutions in facilitating this practice to respond to the mismatch.
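
    Purely as an illustration of the quantitative step described above, a Kruskal-Wallis comparison of adoption scores across participant groups could be run as follows; the data and group labels are made up, and the original analysis used PASW rather than Python.

```python
from scipy.stats import kruskal

# Hypothetical adoption-frequency scores (e.g., 1-5 Likert ratings) for three
# illustrative groups; the real study analysed questionnaire data from 104
# participants in PASW, not these numbers.
students   = [4, 5, 4, 3, 5, 4]
educators  = [3, 2, 3, 4, 2, 3]
clinicians = [2, 3, 2, 2, 3, 1]

h_statistic, p_value = kruskal(students, educators, clinicians)
print(f"H = {h_statistic:.2f}, p = {p_value:.3f}")
```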

  9. Observations at the Mars Pathfinder site: Do they provide "unequivocal" evidence of catastrophic flooding?

    USGS Publications Warehouse

    Chapman, M.G.; Kargel, J.S.

    1999-01-01

    After Mars Pathfinder landed at the mouth of Ares Vallis, a large channel that drains into the Chryse Planitia basin, the mission reports unanimously supported the interpretation that the lander site is the locus of catastrophic flooding by noting that all aspects of the scene are consistent with this interpretation. However, alternatives cannot be ruled out by any site observations, as all aspects of the scene are equally consistent with other interpretations of origin, namely, ice and mass-flow processes subsequently modified by wind erosion. The authors discuss alternative explanations for the geologic history of the channel based on a regional view of the circum-Chryse channels from Viking images (our best broad-scale information to date) and the local view from the recent Pathfinder landing site. Mega-indicators of channel origin, the regional geomorphology, geology, and planetary climatic conditions, taken together suggest some combination of flood, mass flow, glacial, and eolian processes. The macro-indicators of channel origin (sedimentologic) are also not indicative of one process of emplacement, either as single criteria or taken cumulatively. Finally, the micro-indicators of channel origin (geochemical and mineralogic composition) do not provide very tight constraints on the deposits' possible origins other than that water was in some way involved.

  10. Does scene context always facilitate retrieval of visual object representations?

    PubMed

    Nakashima, Ryoichi; Yokosawa, Kazuhiko

    2011-04-01

    An object-to-scene binding hypothesis maintains that visual object representations are stored as part of a larger scene representation or scene context, and that scene context facilitates retrieval of object representations (see, e.g., Hollingworth, Journal of Experimental Psychology: Learning, Memory and Cognition, 32, 58-69, 2006). Support for this hypothesis comes from data using an intentional memory task. In the present study, we examined whether scene context always facilitates retrieval of visual object representations. In two experiments, we investigated whether the scene context facilitates retrieval of object representations, using a new paradigm in which a memory task is appended to a repeated-flicker change detection task. Results indicated that in normal scene viewing, in which many simultaneous objects appear, scene context facilitation of the retrieval of object representations-henceforth termed object-to-scene binding-occurred only when the observer was required to retain much information for a task (i.e., an intentional memory task).

  11. Research in interactive scene analysis

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J. M.; Garvey, T. D.; Weyl, S. A.; Wolf, H. C.

    1975-01-01

    An interactive scene interpretation system (ISIS) was developed as a tool for constructing and experimenting with man-machine and automatic scene analysis methods tailored for particular image domains. A recently developed region analysis subsystem based on the paradigm of Brice and Fennema is described. Using this subsystem a series of experiments was conducted to determine good criteria for initially partitioning a scene into atomic regions and for merging these regions into a final partition of the scene along object boundaries. Semantic (problem-dependent) knowledge is essential for complete, correct partitions of complex real-world scenes. An interactive approach to semantic scene segmentation was developed and demonstrated on both landscape and indoor scenes. This approach provides a reasonable methodology for segmenting scenes that cannot be processed completely automatically, and is a promising basis for a future automatic system. A program is described that can automatically generate strategies for finding specific objects in a scene based on manually designated pictorial examples.
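
    As a concrete, greatly simplified illustration of the merging step described above, the sketch below greedily merges adjacent regions whose shared boundary is weakest; the merge criterion and data structures are assumptions, not the ISIS implementation.

```python
# Toy sketch of iterative region merging: repeatedly merge the pair of adjacent
# regions whose boundary is "weakest" (smallest mean-intensity difference) until
# no pair falls below a threshold. An illustrative stand-in for Brice-Fennema-style
# criteria, not the ISIS code.

def merge_regions(regions, adjacency, threshold=10.0):
    """regions: {id: mean_intensity}; adjacency: set of frozenset({a, b}) pairs."""
    regions = dict(regions)
    adjacency = set(adjacency)
    while True:
        candidates = [(abs(regions[a] - regions[b]), a, b)
                      for a, b in (tuple(p) for p in adjacency)]
        if not candidates:
            break
        diff, a, b = min(candidates)
        if diff > threshold:
            break
        regions[a] = (regions[a] + regions[b]) / 2.0       # crude merged statistic
        del regions[b]
        adjacency = {frozenset({a if r == b else r for r in pair})
                     for pair in adjacency if pair != frozenset({a, b})}
        adjacency = {p for p in adjacency if len(p) == 2}
    return regions

print(merge_regions({1: 10, 2: 12, 3: 80}, {frozenset({1, 2}), frozenset({2, 3})}))
```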

  12. A new method for combining live action and computer graphics in stereoscopic 3D

    NASA Astrophysics Data System (ADS)

    Rupkalvis, John A.; Gillen, Ron

    2008-02-01

    A primary requirement when elements are to be combined stereoscopically is that homologous points in each eye view of each element have identical parallax separation at any point of interaction. If this is not done, the image parts on one element will appear to be at a different distance from the corresponding or associated parts on the other element. This results in a visual discontinuity that appears very unnatural. For example, if a live actor were to appear to "shake hands" with a cartoon character, a very natural-appearing juncture may result when seen in 2-D, but their hands may appear to miss when seen in 3-D. Previous efforts to compensate for or correct these errors have involved painstaking, time-consuming trial-and-error tests. In the area of pure animation, a "motion tracking" technique was developed to make cartoon characters appear more realistic. This involves an actor wearing a special suit with indicator marks at various points on their body. The actor walks through the scene, and then the animator tracks the points using motion capture software. Because live action and CG elements can interact or change at several different points and levels within a scene, additional requirements must also be addressed. "Occlusions" occur when one object passes in front of another. A particular tracking point may appear in one eye-view and not the other. When Z-axis differentials are to be considered in the live action as well as the CG elements, and both are to interact with each other, both eye-views must be tracked, especially at points of occlusion. A new approach would be to generate a three-dimensional grid within which the action is to take place. This grid can be projected onto the stage where the live action part is to take place. When differential occlusions occur, the grid may be seen and CG elements plotted in reference to it. Because of the capability of precisely locating points in a digital image, a pixel-accurate virtual model of both the actual and the virtual scene may be matched with extreme accuracy. The metrology of the grid may also be easily changed at any time, not only as to the pitch of the lines, but also the introduction of intentional distortions, such as when a forced perspective is desired. This approach would also include using a special parallax indicator, which may be a physical generator, such as a bar-generator light, actually carried in the scene. Parallax indicators can provide instantaneous "readouts" of the parallax at any point on the animator's monitor. Customized software would compute the parallax so that, as the cursor is moved around the screen, the exact parallax at the indicated pixel would appear on the screen immediately adjacent to that point. Preferences would allow the choice of either keying the point to the left-eye image, the right-eye image, or a point midway in-between.
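
    To make the parallax-matching requirement concrete, here is a minimal sketch, with hypothetical point coordinates and tolerance, that compares the horizontal disparity of a live-action point with that of the CG element at the same interaction point.

```python
# Minimal sketch of the parallax-matching requirement: at a point where a live
# actor and a CG character interact, the horizontal disparity (parallax) of the
# homologous points in the left- and right-eye images must agree. The names,
# coordinates, and tolerance are illustrative assumptions.

def parallax(left_xy, right_xy):
    """Horizontal disparity in pixels between homologous points (left minus right)."""
    return left_xy[0] - right_xy[0]

def parallax_mismatch(live_left, live_right, cg_left, cg_right, tolerance_px=1.0):
    """Return (mismatch_in_pixels, ok_flag) for one point of interaction."""
    mismatch = parallax(live_left, live_right) - parallax(cg_left, cg_right)
    return mismatch, abs(mismatch) <= tolerance_px

# Example: the live hand and the CG hand at the on-screen "handshake" point.
print(parallax_mismatch((812.0, 410.0), (800.0, 410.0),
                        (815.0, 409.0), (801.0, 409.0)))
```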

  13. Emotional and neutral scenes in competition: orienting, efficiency, and identification.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri; Hyönä, Jukka

    2007-12-01

    To investigate preferential processing of emotional scenes competing for limited attentional resources with neutral scenes, prime pictures were presented briefly (450 ms), peripherally (5.2 degrees away from fixation), and simultaneously (one emotional and one neutral scene) versus singly. Primes were followed by a mask and a probe for recognition. Hit rate was higher for emotional than for neutral scenes in the dual- but not in the single-prime condition, and A' sensitivity decreased for neutral but not for emotional scenes in the dual-prime condition. This preferential processing involved both selective orienting and efficient encoding, as revealed, respectively, by a higher probability of first fixation on--and shorter saccade latencies to--emotional scenes and by shorter fixation time needed to accurately identify emotional scenes, in comparison with neutral scenes.

  14. Colour agnosia impairs the recognition of natural but not of non-natural scenes.

    PubMed

    Nijboer, Tanja C W; Van Der Smagt, Maarten J; Van Zandvoort, Martine J E; De Haan, Edward H F

    2007-03-01

    Scene recognition can be enhanced by appropriate colour information, yet the level of visual processing at which colour exerts its effects is still unclear. It has been suggested that colour supports low-level sensory processing, while others have claimed that colour information aids semantic categorization and recognition of objects and scenes. We investigated the effect of colour on scene recognition in a case of colour agnosia, M.A.H. In a scene identification task, participants had to name images of natural or non-natural scenes in six different formats. Irrespective of scene format, M.A.H. was much slower on the natural than on the non-natural scenes. As expected, neither M.A.H. nor control participants showed any difference in performance for the non-natural scenes. However, for the natural scenes, appropriate colour facilitated scene recognition in control participants (i.e., shorter reaction times), whereas M.A.H.'s performance did not differ across formats. Our data thus support the hypothesis that the effect of colour occurs at the level of learned associations.

  15. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain.

    PubMed

    Groen, Iris I A; Silson, Edward H; Baker, Chris I

    2017-02-19

    Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis.This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).

  16. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain

    PubMed Central

    2017-01-01

    Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044013

  17. Image Enhancement for Astronomical Scenes

    DTIC Science & Technology

    2013-09-01

    address this problem in the context of natural scenes. However, these techniques often misbehave when confronted with low-SNR scenes that are also mostly empty space. We compare two classes of

  18. Understanding and utilization of Thematic Mapper and other remotely sensed data for vegetation monitoring

    NASA Technical Reports Server (NTRS)

    Crist, E. P.; Cicone, R. C.; Metzler, M. D.; Parris, T. M.; Rice, D. P.; Sampson, R. E.

    1983-01-01

    The TM Tasseled Cap transformation, which provides both a 50% reduction in data volume with little or no loss of important information and spectral features with direct physical association, is presented and discussed. Using both simulated and actual TM data, some important characteristics of vegetation and soils in this feature space are described, as are the effects of solar elevation angle and atmospheric haze. A preliminary spectral haze diagnostic feature, based on only simulated data, is also examined. The characteristics of the TM thermal band are discussed, as is a demonstration of the use of TM data in energy balance studies. Some characteristics of AVHRR data are described, as are the sensitivities to scene content of several LANDSAT-MSS preprocessing techniques.
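
    Since the Tasseled Cap transformation is a fixed linear combination of the six reflective TM bands, applying it amounts to a small matrix product; a sketch follows, using coefficient values commonly quoted for TM reflectance data (treat them as an assumption and verify against the original publication before any real use).

```python
import numpy as np

# Sketch of applying a Tasseled Cap-style linear transformation to the six
# reflective TM bands (1-5, 7). The coefficients below are the values commonly
# quoted for TM reflectance data (assumption -- verify against Crist & Cicone);
# the point is only that the transform is a matrix product.
TASSELED_CAP = np.array([
    [ 0.3037,  0.2793,  0.4743,  0.5585,  0.5082,  0.1863],   # brightness
    [-0.2848, -0.2435, -0.5436,  0.7243,  0.0840, -0.1800],   # greenness
    [ 0.1509,  0.1973,  0.3279,  0.3406, -0.7112, -0.4572],   # wetness
])

def tasseled_cap(pixels_bands):
    """pixels_bands: (n_pixels, 6) array of TM band values -> (n_pixels, 3) features."""
    return pixels_bands @ TASSELED_CAP.T

# One hypothetical pixel (reflectance factors in the six bands):
print(tasseled_cap(np.array([[0.08, 0.10, 0.12, 0.35, 0.20, 0.10]])))
```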

  19. An automated approach to the design of decision tree classifiers

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Chin, R.; Beaudet, P.

    1982-01-01

    An automated technique is presented for designing effective decision tree classifiers predicated only on a priori class statistics. The procedure relies on linear feature extractions and Bayes table look-up decision rules. Associated error matrices are computed and utilized to provide an optimal design of the decision tree at each so-called 'node'. A by-product of this procedure is a simple algorithm for computing the global probability of correct classification assuming the statistical independence of the decision rules. Attention is given to a more precise definition of decision tree classification, the mathematical details on the technique for automated decision tree design, and an example of a simple application of the procedure using class statistics acquired from an actual Landsat scene.
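
    As a toy illustration of the by-product mentioned above, the sketch below computes a global probability of correct classification from per-node correct-decision probabilities under the independence assumption; the tree structure, priors, and accuracies are hypothetical.

```python
# Illustrative sketch of the "global probability of correct classification" idea:
# if the decision rules applied along the path to each class are statistically
# independent, the probability of correctly classifying a sample of class c is the
# product of the per-node correct-decision probabilities on that path, and the
# global figure is the prior-weighted sum. Structure and numbers are hypothetical.

def global_p_correct(priors, node_accuracy_along_path):
    """priors: {class: prior}; node_accuracy_along_path: {class: [p1, p2, ...]}."""
    total = 0.0
    for cls, prior in priors.items():
        p_path = 1.0
        for p_node in node_accuracy_along_path[cls]:
            p_path *= p_node
        total += prior * p_path
    return total

priors = {"water": 0.2, "forest": 0.5, "urban": 0.3}
paths  = {"water": [0.99], "forest": [0.95, 0.92], "urban": [0.95, 0.88]}
print(global_p_correct(priors, paths))   # approximately 0.886
```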

  20. Immersive telepresence system using high-resolution omnidirectional movies and a locomotion interface

    NASA Astrophysics Data System (ADS)

    Ikeda, Sei; Sato, Tomokazu; Kanbara, Masayuki; Yokoya, Naokazu

    2004-05-01

    Technology that enables users to experience a remote site virtually is called telepresence. A telepresence system using real environment images is expected to be used in the fields of entertainment, medicine, education and so on. This paper describes a novel telepresence system which enables users to walk through a photorealistic virtualized environment by actual walking. To realize such a system, a wide-angle high-resolution movie is projected on an immersive multi-screen display to present the virtualized environment to users, and a treadmill is controlled according to the user's detected locomotion. In this study, we use an omnidirectional multi-camera system to acquire images of a real outdoor scene. The proposed system provides users with a rich sense of walking in a remote site.

  1. [The characteristics of computer simulation of traffic accidents].

    PubMed

    Zou, Dong-Hua; Liu, Ning-Guo; Chen, Jian-Guo; Jin, Xian-Long; Zhang, Xiao-Yun; Zhang, Jian-Hua; Chen, Yi-Jiu

    2008-12-01

    To reconstruct the collision process of a traffic accident and the injury mode of the victim by computer simulation technology in the forensic assessment of traffic accidents. Forty actual accidents were reconstructed with simulation software and a high-performance computer, based on analysis of the trace evidence at the scene, damage to the vehicles and injury of the victims, with 2 cases discussed in detail. With the above parameters, the reconstruction correlated very well in 28 cases, well in 9 cases, and suboptimally in 3 cases. Accurate reconstruction of the accident would be helpful for assessment of the injury mechanism of the victims. Reconstruction of the collision process of a traffic accident and the injury mechanism of the victim by computer simulation is useful in traffic accident assessment.

  2. Recognition and attention guidance during contextual cueing in real-world scenes: evidence from eye movements.

    PubMed

    Brockmole, James R; Henderson, John M

    2006-07-01

    When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.

  3. Dynamics of scene representations in the human brain revealed by magnetoencephalography and deep neural networks

    PubMed Central

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude

    2017-01-01

    Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250 ms indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model and resolved the emerging neural scene size representations. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain. PMID:27039703

  4. Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior

    PubMed Central

    Greene, Michelle R; Baldassano, Christopher; Fei-Fei, Li; Beck, Diane M; Baker, Chris I

    2018-01-01

    Inherent correlations between visual and semantic features in real-world scenes make it difficult to determine how different scene properties contribute to neural representations. Here, we assessed the contributions of multiple properties to scene representation by partitioning the variance explained in human behavioral and brain measurements by three feature models whose inter-correlations were minimized a priori through stimulus preselection. Behavioral assessments of scene similarity reflected unique contributions from a functional feature model indicating potential actions in scenes as well as high-level visual features from a deep neural network (DNN). In contrast, similarity of cortical responses in scene-selective areas was uniquely explained by mid- and high-level DNN features only, while an object label model did not contribute uniquely to either domain. The striking dissociation between functional and DNN features in their contribution to behavioral and brain representations of scenes indicates that scene-selective cortex represents only a subset of behaviorally relevant scene information. PMID:29513219

  5. Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior.

    PubMed

    Groen, Iris Ia; Greene, Michelle R; Baldassano, Christopher; Fei-Fei, Li; Beck, Diane M; Baker, Chris I

    2018-03-07

    Inherent correlations between visual and semantic features in real-world scenes make it difficult to determine how different scene properties contribute to neural representations. Here, we assessed the contributions of multiple properties to scene representation by partitioning the variance explained in human behavioral and brain measurements by three feature models whose inter-correlations were minimized a priori through stimulus preselection. Behavioral assessments of scene similarity reflected unique contributions from a functional feature model indicating potential actions in scenes as well as high-level visual features from a deep neural network (DNN). In contrast, similarity of cortical responses in scene-selective areas was uniquely explained by mid- and high-level DNN features only, while an object label model did not contribute uniquely to either domain. The striking dissociation between functional and DNN features in their contribution to behavioral and brain representations of scenes indicates that scene-selective cortex represents only a subset of behaviorally relevant scene information.

  6. Scene Integration Without Awareness: No Conclusive Evidence for Processing Scene Congruency During Continuous Flash Suppression.

    PubMed

    Moors, Pieter; Boelens, David; van Overwalle, Jaana; Wagemans, Johan

    2016-07-01

    A recent study showed that scenes with an object-background relationship that is semantically incongruent break interocular suppression faster than scenes with a semantically congruent relationship. These results implied that semantic relations between the objects and the background of a scene could be extracted in the absence of visual awareness of the stimulus. In the current study, we assessed the replicability of this finding and tried to rule out an alternative explanation dependent on low-level differences between the stimuli. Furthermore, we used a Bayesian analysis to quantify the evidence in favor of the presence or absence of a scene-congruency effect. Across three experiments, we found no convincing evidence for a scene-congruency effect or a modulation of scene congruency by scene inversion. These findings question the generalizability of previous observations and cast doubt on whether genuine semantic processing of object-background relationships in scenes can manifest during interocular suppression. © The Author(s) 2016.

  7. Remembering faces and scenes: The mixed-category advantage in visual working memory.

    PubMed

    Jiang, Yuhong V; Remington, Roger W; Asaad, Anthony; Lee, Hyejin J; Mikkalson, Taylor C

    2016-09-01

    We examined the mixed-category memory advantage for faces and scenes to determine how domain-specific cortical resources constrain visual working memory. Consistent with previous findings, visual working memory for a display of 2 faces and 2 scenes was better than that for a display of 4 faces or 4 scenes. This pattern was unaffected by manipulations of encoding duration. However, the mixed-category advantage was carried solely by faces: Memory for scenes was not better when scenes were encoded with faces rather than with other scenes. The asymmetry between faces and scenes was found when items were presented simultaneously or sequentially, centrally, or peripherally, and when scenes were drawn from a narrow category. A further experiment showed a mixed-category advantage in memory for faces and bodies, but not in memory for scenes and objects. The results suggest that unique category-specific interactions contribute significantly to the mixed-category advantage in visual working memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. Neural correlates of contextual cueing are modulated by explicit learning.

    PubMed

    Westerberg, Carmen E; Miller, Brennan B; Reber, Paul J; Cohen, Neal J; Paller, Ken A

    2011-10-01

    Contextual cueing refers to the facilitated ability to locate a particular visual element in a scene due to prior exposure to the same scene. This facilitation is thought to reflect implicit learning, as it typically occurs without the observer's knowledge that scenes repeat. Unlike most other implicit learning effects, contextual cueing can be impaired following damage to the medial temporal lobe. Here we investigated neural correlates of contextual cueing and explicit scene memory in two participant groups. Only one group was explicitly instructed about scene repetition. Participants viewed a sequence of complex scenes that depicted a landscape with five abstract geometric objects. Superimposed on each object was a letter T or L rotated left or right by 90°. Participants responded according to the target letter (T) orientation. Responses were highly accurate for all scenes. Response speeds were faster for repeated versus novel scenes. The magnitude of this contextual cueing did not differ between the two groups. Also, in both groups repeated scenes yielded reduced hemodynamic activation compared with novel scenes in several regions involved in visual perception and attention, and reductions in some of these areas were correlated with response-time facilitation. In the group given instructions about scene repetition, recognition memory for scenes was superior and was accompanied by medial temporal and more anterior activation. Thus, strategic factors can promote explicit memorization of visual scene information, which appears to engage additional neural processing beyond what is required for implicit learning of object configurations and target locations in a scene. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Neural correlates of contextual cueing are modulated by explicit learning

    PubMed Central

    Westerberg, Carmen E.; Miller, Brennan B.; Reber, Paul J.; Cohen, Neal J.; Paller, Ken A.

    2011-01-01

    Contextual cueing refers to the facilitated ability to locate a particular visual element in a scene due to prior exposure to the same scene. This facilitation is thought to reflect implicit learning, as it typically occurs without the observer’s knowledge that scenes repeat. Unlike most other implicit learning effects, contextual cueing can be impaired following damage to the medial temporal lobe. Here we investigated neural correlates of contextual cueing and explicit scene memory in two participant groups. Only one group was explicitly instructed about scene repetition. Participants viewed a sequence of complex scenes that depicted a landscape with five abstract geometric objects. Superimposed on each object was a letter T or L rotated left or right by 90°. Participants responded according to the target letter (T) orientation. Responses were highly accurate for all scenes. Response speeds were faster for repeated versus novel scenes. The magnitude of this contextual cueing did not differ between the two groups. Also, in both groups repeated scenes yielded reduced hemodynamic activation compared with novel scenes in several regions involved in visual perception and attention, and reductions in some of these areas were correlated with response-time facilitation. In the group given instructions about scene repetition, recognition memory for scenes was superior and was accompanied by medial temporal and more anterior activation. Thus, strategic factors can promote explicit memorization of visual scene information, which appears to engage additional neural processing beyond what is required for implicit learning of object configurations and target locations in a scene. PMID:21889947

  10. Reform of Higher Education in Russia: Habitus Conflict

    ERIC Educational Resources Information Center

    Babintsev, Valentin P.; Sapryka, Viktor ?.; Serkina, Yana I.; Ushamirskaya, Galina F.

    2016-01-01

    This article discusses changes that actually occur in Russian higher education in the process of reform. Its thesis is that the functioning of the educational system increasingly reveals formal rationality, oriented not toward meanings but toward their imitation. It is noted that the Russian system of higher education belongs to a specific type, which can…

  11. The Practice of Designing Qualitative Research on Educational Leadership: Notes for Emerging Scholars and Practitioner-Scholars

    ERIC Educational Resources Information Center

    Knapp, Michael S.

    2017-01-01

    This article addresses a gap in methodological writing, concerning typical practice in designing qualitative inquiry, especially in research on educational leadership. The article focuses on how qualitative research designs are actually developed and explores implications for scholars' work, especially for new scholars and for methods teachers.…

  12. Management and Supervisory Training: A Review and Annotated Bibliography.

    DTIC Science & Technology

    1980-09-01

    presented by sound films. It should be noted that no measures of actual performance as leaders were obtained. Moffie, Calhoon, and O'Brien (1964) reported a...Attitudes of Research and Development Supervisors." Journal of Applied Psychology, 1960, 44, 224-232. Moffie, D.J., Calhoon, R., and O'Brien, J.K

  13. The Writing Laboratory: Organization, Management, and Methods.

    ERIC Educational Resources Information Center

    Steward, Joyce S.; Croft, Mary K.

    The four chapters of this book move from the history, philosophy, and approaches that writing laboratories encompass to a look at the many facets of their organization before treating in detail the actual teaching process and the practical elements of writing laboratory management. Chapter one notes the growth of writing labs and discusses…

  14. NOTES ON MATHEMATICS IN PRIMARY SCHOOLS.

    ERIC Educational Resources Information Center

    WHEELER, D.H.; AND OTHERS

    THIS BOOK IS A COLLECTION OF MATERIALS AND IDEAS ABOUT THE NEWER METHODS OF MATHEMATICS TEACHING BY A GROUP OF TEACHERS AND STUDENT-TEACHER LECTURERS. REPORTS FROM ACTUAL LESSONS AND VARIED ILLUSTRATIONS OF CHILDREN'S WORK MAKE UP A SIGNIFICANT PORTION OF THIS BOOK. THE MATHEMATICS IS PRESENTED IN A STYLE AND MANNER WHICH ENCOURAGES THE READER TO…

  15. Exploring Juror's Listening Processes: The Effect of Listening Style Preference on Juror Decision Making.

    ERIC Educational Resources Information Center

    Worthington, Debra L.

    2001-01-01

    Examines the relationship between listening style preference and jurors' assignment of negligence and damages. Notes that 90 men and 84 women drawn from introductory communication courses viewed videotaped attorney presentations and the judge's instructions from an actual court case. Indicates that participants with a people-oriented listening…

  16. NEW APPROACHES TO THE STUDY OF HUMAN COMMUNICATION.

    ERIC Educational Resources Information Center

    SARLES, HARVEY B.

    IN VARIOUS STUDIES, BRIEFLY DESCRIBED IN THIS PAPER, SOUND FILMS WERE MADE OF PEOPLE ENGAGED IN VERBAL COMMUNICATION. THE FILMS WERE ANALYZED TO NOTE RELATIONSHIPS BETWEEN PHYSICAL MOVEMENT AND THE ACTUAL CONTENT OF THE CONVERSATION. THE FRAMES OF THE FILM WERE SEQUENTIALLY NUMBERED TO CORRELATE THEM TO THE NEAREST FRAME WITH THE SOUND RECORDING.…

  17. Clueless Newbies in the MUDs: An Introduction to Multiple-User Environments.

    ERIC Educational Resources Information Center

    LeNoir, W. David

    1998-01-01

    Describes Multiple-User Dungeons (MUDs), multiple-user computer programs that allow participants to interact with others in "real time" exchanges. Discusses their potential in the writing classroom and beyond, and notes their potential for faculty development activities. Offers a list of Internet resources, some actual MUD addresses, and other…

  18. The Problem-Solving Power of Teachers

    ERIC Educational Resources Information Center

    Sacks, Ariel

    2013-01-01

    Risk takers of all kinds have joined the effort to find new and better ways to structure nearly every aspect of teaching and learning. But as teacher leader and blogger Ariel Sacks notes, "Sadly, most of the experiments in education reform come from the imaginations of people who don't actually teach children." Top-down experiments…

  19. Students' Perceptions of the Role of Assessments at Higher Education

    ERIC Educational Resources Information Center

    Lynam, Siobhan; Cachia, Moira

    2018-01-01

    The Quality Assurance Agency's higher education review noted that assessment and feedback in higher education remain an area of concern for students. Despite this, very little research has been carried out to assess students' experience of assessments. The evidence for what factors within assessments actually contribute to student engagement…

  20. A Special Education Systems Simulation Model: Teacher Training Emphasis.

    ERIC Educational Resources Information Center

    Jones, Wayne; And Others

    The authors illustrate the application of a systems approach for educational decision-makers through utilization of a special education systems simulation model with emphasis on teacher training. It is noted that the model provides a procedure to answer "what if" type questions before actually implementing a proposed program. Discussed are the…

  1. Implementing Multiage Education: A Practical Guide.

    ERIC Educational Resources Information Center

    Kasten, Wendy C.; Lolli, Elizabeth Monce

    Noting that multiage education continues to receive a great deal of interest as educators, legislators, and parents seek to find ways to improve educational experiences for all children, this book takes readers by the hand and guides them as they move from exploring the concept of multiage to the actual stages of implementation. As is consistent…

  2. Proficiency Verification Systems (PVS): Skills Indices for Language Arts. Technical Note.

    ERIC Educational Resources Information Center

    Humes, Ann

    The procedures undertaken in developing and organizing skills indexes for use in coding elementary school language arts textbooks to determine what is actually taught are presented in this paper. The outlined procedures included performing a preliminary analysis on four language arts textbooks to compile an extensive list of skills and performance…

  3. Scene incongruity and attention.

    PubMed

    Mack, Arien; Clarke, Jason; Erol, Muge; Bert, John

    2017-02-01

    Does scene incongruity, (a mismatch between scene gist and a semantically incongruent object), capture attention and lead to conscious perception? We explored this question using 4 different procedures: Inattention (Experiment 1), Scene description (Experiment 2), Change detection (Experiment 3), and Iconic Memory (Experiment 4). We found no differences between scene incongruity and scene congruity in Experiments 1, 2, and 4, although in Experiment 3 change detection was faster for scenes containing an incongruent object. We offer an explanation for why the change detection results differ from the results of the other three experiments. In all four experiments, participants invariably failed to report the incongruity and routinely mis-described it by normalizing the incongruent object. None of the results supports the claim that semantic incongruity within a scene invariably captures attention and provide strong evidence of the dominant role of scene gist in determining what is perceived. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. ERBE Geographic Scene and Monthly Snow Data

    NASA Technical Reports Server (NTRS)

    Coleman, Lisa H.; Flug, Beth T.; Gupta, Shalini; Kizer, Edward A.; Robbins, John L.

    1997-01-01

    The Earth Radiation Budget Experiment (ERBE) is a multisatellite system designed to measure the Earth's radiation budget. The ERBE data processing system consists of several software packages or sub-systems, each designed to perform a particular task. The primary task of the Inversion Subsystem is to reduce satellite altitude radiances to fluxes at the top of the Earth's atmosphere. To accomplish this, angular distribution models (ADM's) are required. These ADM's are a function of viewing and solar geometry and of the scene type as determined by the ERBE scene identification algorithm which is a part of the Inversion Subsystem. The Inversion Subsystem utilizes 12 scene types which are determined by the ERBE scene identification algorithm. The scene type is found by combining the most probable cloud cover, which is determined statistically by the scene identification algorithm, with the underlying geographic scene type. This Contractor Report describes how the geographic scene type is determined on a monthly basis.
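
    A minimal sketch of the final lookup step, combining the statistically most probable cloud cover with the underlying geographic type, might look like the following; the table entries are placeholders, not the actual ERBE scene-type definitions.

```python
# Sketch of the final step described above: combining the statistically most
# probable cloud-cover category with the underlying geographic scene type to
# pick an ERBE scene type. The table entries are placeholders for illustration,
# not the actual 12 ERBE scene-type definitions.

SCENE_TABLE = {
    ("clear",         "ocean"): "clear ocean",
    ("clear",         "land"):  "clear land",
    ("partly cloudy", "ocean"): "partly cloudy over ocean",
    ("partly cloudy", "land"):  "partly cloudy over land",
    ("mostly cloudy", "ocean"): "mostly cloudy over ocean",
    ("mostly cloudy", "land"):  "mostly cloudy over land",
    ("overcast",      "ocean"): "overcast",
    ("overcast",      "land"):  "overcast",
}

def erbe_scene_type(most_probable_cloud_cover, geographic_type):
    """Look up the scene type used to select the angular distribution model."""
    return SCENE_TABLE[(most_probable_cloud_cover, geographic_type)]

print(erbe_scene_type("partly cloudy", "ocean"))
```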

  5. Bag of Lines (BoL) for Improved Aerial Scene Representation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sridharan, Harini; Cheriyadat, Anil M.

    2014-09-22

    Feature representation is a key step in automated visual content interpretation. In this letter, we present a robust feature representation technique, referred to as bag of lines (BoL), for high-resolution aerial scenes. The proposed technique involves extracting and compactly representing low-level line primitives from the scene. The compact scene representation is generated by counting the different types of lines representing various linear structures in the scene. Through extensive experiments, we show that the proposed scene representation is invariant to scale changes and scene conditions and can discriminate urban scene categories accurately. We compare the BoL representation with the popular scale-invariant feature transform (SIFT) and Gabor wavelets for their classification and clustering performance on an aerial scene database consisting of images acquired by sensors with different spatial resolutions. The proposed BoL representation outperforms the SIFT- and Gabor-based representations.
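
    A rough sketch of a bag-of-lines style descriptor is given below, using OpenCV line detection as a stand-in for the authors' line-primitive extraction; the detector, bin edges, and normalisation are assumptions for illustration, not the published pipeline.

```python
import cv2
import numpy as np

# Rough sketch of a "bag of lines" style descriptor: detect line segments and
# count them in bins defined by orientation and length. The detector, bin edges,
# and normalisation are assumptions, not the authors' exact pipeline.

def bag_of_lines(gray, n_orient_bins=4, length_edges=(0, 20, 60, 1e9)):
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=10, maxLineGap=3)
    hist = np.zeros((n_orient_bins, len(length_edges) - 1))
    if lines is None:
        return hist.ravel()
    for x1, y1, x2, y2 in lines[:, 0, :]:
        angle = np.mod(np.arctan2(y2 - y1, x2 - x1), np.pi)
        length = np.hypot(x2 - x1, y2 - y1)
        ob = min(int(angle / np.pi * n_orient_bins), n_orient_bins - 1)
        lb = np.searchsorted(length_edges, length, side="right") - 1
        hist[ob, lb] += 1
    return hist.ravel() / max(hist.sum(), 1)   # count-normalised for scale robustness

gray = cv2.imread("aerial_tile.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
if gray is not None:
    print(bag_of_lines(gray))
```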

  6. Updating representations of learned scenes.

    PubMed

    Finlay, Cory A; Motes, Michael A; Kozhevnikov, Maria

    2007-05-01

    Two experiments were designed to compare scene recognition reaction time (RT) and accuracy patterns following observer versus scene movement. In Experiment 1, participants memorized a scene from a single perspective. Then, either the scene was rotated or the participants moved (0° to 360° in 36° increments) around the scene, and participants judged whether the objects' positions had changed. Regardless of whether the scene was rotated or the observer moved, RT increased with greater angular distance between judged and encoded views. In Experiment 2, we varied the delay (0, 6, or 12 s) between scene encoding and locomotion. Regardless of the delay, however, accuracy decreased and RT increased with angular distance. Thus, our data show that observer movement does not necessarily update representations of spatial layouts and raise questions about the effects of duration limitations and encoding points of view on the automatic spatial updating of representations of scenes.

  7. Optic Flow Dominates Visual Scene Polarity in Causing Adaptive Modification of Locomotor Trajectory

    NASA Technical Reports Server (NTRS)

    Nomura, Y.; Mulavara, A. P.; Richards, J. T.; Brady, R.; Bloomberg, Jacob J.

    2005-01-01

    Locomotion and posture are influenced and controlled by vestibular, visual and somatosensory information. Optic flow and scene polarity are two characteristics of a visual scene that have been identified as critical in how they affect perceived body orientation and self-motion. The goal of this study was to determine the role of optic flow and visual scene polarity in adaptive modification of locomotor trajectory. Two computer-generated virtual reality scenes were shown to subjects during 20 minutes of treadmill walking. One scene was a highly polarized scene while the other was composed of objects displayed in a non-polarized fashion. Both virtual scenes depicted constant-rate self-motion equivalent to walking counterclockwise around the perimeter of a room. Subjects performed Stepping Tests blindfolded before and after scene exposure to assess adaptive changes in locomotor trajectory. Subjects showed a significant difference in heading direction between pre- and post-adaptation stepping tests when exposed to either scene during treadmill walking. However, there was no significant difference in the subjects' heading direction between the two visual scene polarity conditions. Therefore, it was inferred from these data that optic flow has a greater role than visual polarity in influencing adaptive locomotor function.

  8. Stuck on semantics: Processing of irrelevant object-scene inconsistencies modulates ongoing gaze behavior.

    PubMed

    Cornelissen, Tim H W; Võ, Melissa L-H

    2017-01-01

    People have an amazing ability to identify objects and scenes with only a glimpse. How automatic is this scene and object identification? Are scene and object semantics-let alone their semantic congruity-processed to a degree that modulates ongoing gaze behavior even if they are irrelevant to the task at hand? Objects that do not fit the semantics of the scene (e.g., a toothbrush in an office) are typically fixated longer and more often than objects that are congruent with the scene context. In this study, we overlaid a letter T onto photographs of indoor scenes and instructed participants to search for it. Some of these background images contained scene-incongruent objects. Despite their lack of relevance to the search, we found that participants spent more time in total looking at semantically incongruent compared to congruent objects in the same position of the scene. Subsequent tests of explicit and implicit memory showed that participants did not remember many of the inconsistent objects and no more of the consistent objects. We argue that when we view natural environments, scene and object relationships are processed obligatorily, such that irrelevant semantic mismatches between scene and object identity can modulate ongoing eye-movement behavior.

  9. Visual search for changes in scenes creates long-term, incidental memory traces.

    PubMed

    Utochkin, Igor S; Wolfe, Jeremy M

    2018-05-01

    Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1-3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a "depth-of-processing" account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.

  10. Earth observations taken during STS-83 mission

    NASA Image and Video Library

    2016-08-12

    STS083-748-006 (4-8 April 1997) --- This type of scene is seen about every 45 minutes as the astronauts travel around the world. Sunrises and sunsets differ in structure, since the tropopause altitude and atmospheric lamina temperatures vary with time of day, season, and latitude. Close analysis of these terminator photographs provides counts of the number and spacing of atmospheric laminae. In the photographs, as many as 4 laminae have been noted in the normally red-to-orange troposphere, and up to 12 laminae have been counted in the blue upper atmosphere. However, true replication of human vision is not possible using present films. For instance, while on orbit, one astronaut counted 22 layers. The photograph of that event recorded only 8 such layers.

  11. Strychnine overdose following ingestion of gopher bait.

    PubMed

    Lindsey, Tania; O'Hara, Joseph; Irvine, Rebecca; Kerrigan, Sarah

    2004-03-01

    A 52-year-old male was discovered supine on his bed in a state of early decomposition. Commercial strychnine-treated gopher pellets were found in the home, and suicide notes were present at the scene. Biological fluids and tissues were tested for basic, acidic, and neutral drugs using gas chromatography-mass spectrometry. Concentrations of strychnine in heart and femoral blood were 0.96 and 0.31 mg/L, respectively. Vitreous fluid, bile, urine, liver, and brain specimens contained 0.36 mg/L, 1.17 mg/L, 2.92 mg/L, 4.59 mg/kg, and 0.86 mg/kg strychnine, respectively. No other drugs were detected in any of the samples. The cause of death was attributed to rodenticide poisoning, and the manner of death was suicide.

  12. InSight Lander in Assembly

    NASA Image and Video Library

    2015-05-27

    The Mars lander that NASA's InSight mission will use for investigating how rocky planets formed and evolved is being assembled by Lockheed Martin Space Systems, Denver. In this scene from January 2015, Lockheed Martin spacecraft specialists are working on the lander in a clean room. InSight, for Interior Exploration Using Seismic Investigations, Geodesy and Heat Transport, is scheduled for launch in March 2016 and landing in September 2016. Note: After thorough examination, NASA managers have decided to suspend the planned March 2016 launch of the Interior Exploration using Seismic Investigations Geodesy and Heat Transport (InSight) mission. The decision follows unsuccessful attempts to repair a leak in a section of the prime instrument in the science payload. http://photojournal.jpl.nasa.gov/catalog/PIA19402

  13. Water Ice on Pluto

    NASA Image and Video Library

    2015-10-08

    Regions with exposed water ice are highlighted in blue in this composite image from New Horizons' Ralph instrument, combining visible imagery from the Multispectral Visible Imaging Camera (MVIC) with infrared spectroscopy from the Linear Etalon Imaging Spectral Array (LEISA). The strongest signatures of water ice occur along Virgil Fossa, just west of Elliot crater on the left side of the inset image, and also in Viking Terra near the top of the frame. A major outcrop also occurs in Baré Montes towards the right of the image, along with numerous much smaller outcrops, mostly associated with impact craters and valleys between mountains. The scene is approximately 280 miles (450 kilometers) across. Note that all surface feature names are informal. http://photojournal.jpl.nasa.gov/catalog/PIA19963

  14. [Violence and sexism in television cartoons for children. Analysis of the contents].

    PubMed

    Prieto Rodríguez, M A; March Cerdá, J C; Argente del Castillo, A

    1996-04-15

    To detect features of violence and sexism in cartoons in the children's programmes of Spanish television companies. Analysis of the content of cartoons broadcast by TV-1, TV-2, Canal Sur, Antena 3 and Tele 5 during one week. The programmes recorded were viewed by two independent observers, first separately and then together. All scenes with violent content or sexist messages were noted. The main findings were: a) violent content was very common; b) roles and jobs linked to gender were found; c) advertising accompanied children's programming and was inserted within it. The points identified show the need for both school and family to encourage children to develop a critical attitude to the messages they receive.

  15. [Old and new tasks in social pediatrics].

    PubMed

    Hellbrügge, T

    1980-07-03

    Social pediatrics has been highly successful in carrying basic pediatric knowledge into even the last family, so that measures of prevention and prophylaxis have made it possible to reduce the infant mortality rate from 30% to less than 2% and to eradicate some infectious diseases completely. The importance of social pediatrics is evident in the plight of children in developing countries, where childhood morbidity and mortality remain as high as ever even though knowledge in clinical pediatrics is at the same level. In the industrial nations, psychosocial diseases have replaced infectious diseases. Fighting these requires special social pediatric efforts, together with pedopsychology and pedagogics, to meet threats such as juvenile delinquency and the drug scene.

  16. Emotional event-related potentials are larger to figures than scenes but are similarly reduced by inattention

    PubMed Central

    2012-01-01

    Background In research on event-related potentials (ERP) to emotional pictures, greater attention to emotional than neutral stimuli (i.e., motivated attention) is commonly indexed by two difference waves between emotional and neutral stimuli: the early posterior negativity (EPN) and the late positive potential (LPP). Evidence suggests that if attention is directed away from the pictures, then the emotional effects on EPN and LPP are eliminated. However, a few studies have found residual, emotional effects on EPN and LPP. In these studies, pictures were shown at fixation, and picture composition was that of simple figures rather than that of complex scenes. Because figures elicit larger LPP than do scenes, figures might capture and hold attention more strongly than do scenes. Here, we showed negative and neutral pictures of figures and scenes and tested first, whether emotional effects are larger to figures than scenes for both EPN and LPP, and second, whether emotional effects on EPN and LPP are reduced less for unattended figures than scenes. Results Emotional effects on EPN and LPP were larger for figures than scenes. When pictures were unattended, emotional effects on EPN increased for scenes but tended to decrease for figures, whereas emotional effects on LPP decreased similarly for figures and scenes. Conclusions Emotional effects on EPN and LPP were larger for figures than scenes, but these effects did not resist manipulations of attention more strongly for figures than scenes. These findings imply that the emotional content captures attention more strongly for figures than scenes, but that the emotional content does not hold attention more strongly for figures than scenes. PMID:22607397

  17. Dynamics of scene representations in the human brain revealed by magnetoencephalography and deep neural networks.

    PubMed

    Martin Cichy, Radoslaw; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude

    2017-06-01

    Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250 ms indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model, mirroring the emerging neural scene size representations. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
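
    One common way such model-to-brain comparisons are made is representational similarity analysis; this is a hedged sketch of that generic approach, not necessarily the authors' exact pipeline. Condition-by-condition dissimilarity matrices (RDMs) are built from MEG sensor patterns at a given latency and from a network layer's activations, and their upper triangles are correlated. All data, dimensions and names below are placeholders.

      import numpy as np

      # Representational similarity sketch: correlate the upper triangles of two
      # condition-by-condition dissimilarity matrices (RDMs), one from MEG sensor
      # patterns at one latency and one from a deep layer's activations.

      def rdm(patterns):
          # patterns: (n_conditions, n_features) -> (n_conditions, n_conditions) RDM
          return 1.0 - np.corrcoef(patterns)

      def rdm_similarity(rdm_a, rdm_b):
          iu = np.triu_indices_from(rdm_a, k=1)
          return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]   # Pearson, for simplicity

      rng = np.random.default_rng(0)
      meg_at_250ms = rng.normal(size=(48, 306))    # 48 scene conditions x 306 MEG sensors
      layer_acts   = rng.normal(size=(48, 4096))   # same 48 scenes through one deep layer

      print(rdm_similarity(rdm(meg_at_250ms), rdm(layer_acts)))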

  18. Seek and you shall remember: Scene semantics interact with visual search to build better memories

    PubMed Central

    Draschkow, Dejan; Wolfe, Jeremy M.; Võ, Melissa L.-H.

    2014-01-01

    Memorizing critical objects and their locations is an essential part of everyday life. In the present study, incidental encoding of objects in naturalistic scenes during search was compared to explicit memorization of those scenes. To investigate if prior knowledge of scene structure influences these two types of encoding differently, we used meaningless arrays of objects as well as objects in real-world, semantically meaningful images. Surprisingly, when participants were asked to recall scenes, their memory performance was markedly better for searched objects than for objects they had explicitly tried to memorize, even though participants in the search condition were not explicitly asked to memorize objects. This finding held true even when objects were observed for an equal amount of time in both conditions. Critically, the recall benefit for searched over memorized objects in scenes was eliminated when objects were presented on uniform, non-scene backgrounds rather than in a full scene context. Thus, scene semantics not only help us search for objects in naturalistic scenes, but appear to produce a representation that supports our memory for those objects beyond intentional memorization. PMID:25015385

  19. A novel scene management technology for complex virtual battlefield environment

    NASA Astrophysics Data System (ADS)

    Sheng, Changchong; Jiang, Libing; Tang, Bo; Tang, Xiaoan

    2018-04-01

    The efficient scene management of a virtual environment is an important topic in real-time computer visualization and has a decisive influence on rendering efficiency. However, traditional scene management methods are not suitable for complex virtual battlefield environments. This paper combines the advantages of traditional scene graph technology and spatial data structure methods: using the idea of separating management from rendering, a loose object-oriented scene graph structure is established to manage the entity model data in the scene, and a performance-based quad-tree structure is created for traversal and rendering. In addition, a collaborative update relationship between the two structural trees is designed to achieve efficient scene management. Compared with previous scene management methods, this method is more efficient and meets the needs of real-time visualization.
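
    The quad-tree half of such a scheme can be illustrated with a minimal sketch (class and method names such as QuadTree and Entity are illustrative, not from the paper): entities are inserted into a recursively subdivided square region, and the renderer queries only the cells that overlap the current view.

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Entity:
          name: str
          x: float
          y: float

      @dataclass
      class QuadTree:
          x: float                      # origin of this square node
          y: float
          size: float                   # edge length of this square node
          capacity: int = 4
          entities: List[Entity] = field(default_factory=list)
          children: List["QuadTree"] = field(default_factory=list)

          def insert(self, e: Entity) -> bool:
              if not (self.x <= e.x < self.x + self.size and self.y <= e.y < self.y + self.size):
                  return False                        # entity lies outside this node
              if len(self.entities) < self.capacity and not self.children:
                  self.entities.append(e)
                  return True
              if not self.children:                   # split into four quadrants
                  h = self.size / 2
                  self.children = [QuadTree(self.x + dx, self.y + dy, h)
                                   for dx in (0, h) for dy in (0, h)]
              return any(c.insert(e) for c in self.children)

          def query(self, qx: float, qy: float, qsize: float) -> List[Entity]:
              # Return entities in nodes overlapping the square query (view) region.
              if qx + qsize < self.x or self.x + self.size < qx or \
                 qy + qsize < self.y or self.y + self.size < qy:
                  return []
              found = [e for e in self.entities
                       if qx <= e.x <= qx + qsize and qy <= e.y <= qy + qsize]
              for c in self.children:
                  found += c.query(qx, qy, qsize)
              return found

      # Usage: index a terrain-sized scene, then cull to the camera's view square.
      scene = QuadTree(0, 0, 1024)
      for i in range(100):
          scene.insert(Entity(f"tank_{i}", (i * 37) % 1024, (i * 91) % 1024))
      visible = scene.query(200, 200, 256)

    In a full system the scene graph would own the entity data, and the quad-tree would only hold references used for culling and traversal, reflecting the management/rendering separation described above.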

  20. Microbial growth and physiology in space - A review

    NASA Technical Reports Server (NTRS)

    Cioletti, Louis A.; Mishra, S. K.; Pierson, Duane L.

    1991-01-01

    An overview of microbial behavior in closed environments is given with attention to data related to simulated microgravity and actual space flight. Microbes are described in terms of antibiotic sensitivity, subcellular structure, and physiology, and the combined effects are considered of weightlessness and cosmic radiation on human immunity to such microorganisms. Space flight results report such effects as increased phage induction, accelerated microbial growth rates, and the increased risk of disease communication and microbial exchange aboard confining spacecraft. Ultrastructural changes are also noted in the nuclei, cell membranes, and cytoplasmic streaming, and it appears that antibiotic sensitivity is reduced under both actual and simulated conditions of spaceflight.

  1. IR characteristic simulation of city scenes based on radiosity model

    NASA Astrophysics Data System (ADS)

    Xiong, Xixian; Zhou, Fugen; Bai, Xiangzhi; Yu, Xiyu

    2013-09-01

    Reliable modeling of thermal infrared (IR) signatures of real-world city scenes is required for signature management of civil and military platforms. Traditional modeling methods generally assume that scene objects are individual entities during the physical processes occurring in the infrared range. In reality, however, the physical scene involves convective and conductive interactions between objects as well as radiative interactions between them. A method based on a radiosity model describes these complex effects and has been developed to enable an accurate simulation of the radiance distribution of city scenes. Firstly, the physical processes affecting the IR characteristics of city scenes were described. Secondly, heat balance equations were formed by combining the atmospheric conditions, shadow maps and the geometry of the scene. Finally, a finite difference method was used to calculate the kinetic temperature of object surfaces. A radiosity model was introduced to describe the scattering of radiation between surface elements in the scene. By synthesizing the radiance distribution of objects in the infrared range, the IR characteristics of the scene were obtained. Real infrared images and model predictions were shown and compared. The results demonstrate that this method can realistically simulate the IR characteristics of city scenes. It effectively displays infrared shadow effects and the radiation interactions between objects in city scenes.
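
    The radiosity step alone can be sketched as a toy fixed-point solve of B = E + rho * F * B; the emission, reflectivity and form-factor values below are invented, and this is not the paper's full thermal model.

      import numpy as np

      # Radiosity sketch: B_i = E_i + rho_i * sum_j F_ij * B_j, solved by
      # fixed-point (Jacobi) iteration. E: self-emitted radiance per patch,
      # rho: patch reflectivity, F: form-factor matrix (rows sum to <= 1).

      def solve_radiosity(E, rho, F, iters=100):
          B = E.copy()
          for _ in range(iters):
              B = E + rho * (F @ B)
          return B

      E   = np.array([5.0, 1.0, 0.5])                 # emission per patch (arbitrary units)
      rho = np.array([0.3, 0.6, 0.5])                 # patch reflectivities
      F   = np.array([[0.0, 0.4, 0.3],                # form factors between patches
                      [0.5, 0.0, 0.2],
                      [0.4, 0.3, 0.0]])

      print(solve_radiosity(E, rho, F))               # total outgoing radiance per patch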

  2. Scene-Based Contextual Cueing in Pigeons

    PubMed Central

    Wasserman, Edward A.; Teng, Yuejia; Brooks, Daniel I.

    2014-01-01

    Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of such contextual cueing. Pigeons had to peck a target which could appear in one of four locations on color photographs of real-world scenes. On half of the trials, each of four scenes was consistently paired with one of four possible target locations; on the other half of the trials, each of four different scenes was randomly paired with the same four possible target locations. In Experiments 1 and 2, pigeons exhibited robust contextual cueing when the context preceded the target by 1 s to 8 s, with reaction times to the target being shorter on predictive-scene trials than on random-scene trials. Pigeons also responded more frequently during the delay on predictive-scene trials than on random-scene trials; indeed, during the delay on predictive-scene trials, pigeons predominately pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. In Experiment 3, involving left-right and top-bottom scene reversals, pigeons exhibited stronger control by global than by local scene cues. These results attest to the robustness and associative basis of contextual cueing in pigeons. PMID:25546098

  3. Expertise in crime scene examination: comparing search strategies of expert and novice crime scene examiners in simulated crime scenes.

    PubMed

    Baber, Chris; Butler, Mark

    2012-06-01

    The strategies of novice and expert crime scene examiners were compared in searching crime scenes. Previous studies have demonstrated that experts frame a scene by reconstructing the likely actions of a criminal and use contextual cues to develop hypotheses that guide subsequent search for evidence. Novices (first-year undergraduate students of forensic sciences) and experts (experienced crime scene examiners) examined two "simulated" crime scenes. Performance was captured through a combination of concurrent verbal protocol and point-of-view recording, using head-mounted cameras. Although both groups paid attention to the likely modus operandi of the perpetrator (in terms of possible actions taken), the novices paid more attention to individual objects, whereas the experts paid more attention to objects with "evidential value." Novices explore the scene in terms of the objects that it contains, whereas experts consider the evidence analysis that can be performed as a consequence of the examination. The suggestion is that the novices put effort into detailing the scene in terms of its features, whereas the experts put effort into anticipating the analyses that can be performed as a consequence of the examination. The findings have helped in developing the expertise of novice crime scene examiners and approaches to training expertise within this population.

  4. Cloud Classification in Polar and Desert Regions and Smoke Classification from Biomass Burning Using a Hierarchical Neural Network

    NASA Technical Reports Server (NTRS)

    Alexander, June; Corwin, Edward; Lloyd, David; Logar, Antonette; Welch, Ronald

    1996-01-01

    This research focuses on a new neural network scene classification technique. The task is to identify scene elements in Advanced Very High Resolution Radiometer (AVHRR) data from three scene types: polar, desert and smoke from biomass burning in South America (smoke). The ultimate goal of this research is to design and implement a computer system which will identify the clouds present on a whole-Earth satellite view as a means of tracking global climate changes. Previous research has reported results for rule-based systems (Tovinkere et al., 1992, 1993), for standard back propagation (Watters et al., 1993) and for a hierarchical approach (Corwin et al., 1994) for polar data. This research uses a hierarchical neural network with don't care conditions and applies this technique to complex scenes. A hierarchical neural network consists of a switching network and a collection of leaf networks. The idea of the hierarchical neural network is that it is a simpler task to classify a certain pattern from a subset of patterns than it is to classify a pattern from the entire set. Therefore, the first task is to cluster the classes into groups. The switching, or decision, network performs an initial classification by selecting a leaf network. The leaf networks contain a reduced set of similar classes, and it is in the various leaf networks that the actual classification takes place. The grouping of classes in the various leaf networks is determined by applying an iterative clustering algorithm. Several clustering algorithms were investigated, but due to the size of the data sets, the exhaustive search algorithms were eliminated. A heuristic approach using a confusion matrix from a lightly trained neural network provided the basis for the clustering algorithm. Once the clusters have been identified, the hierarchical network can be trained. The approach of using don't care nodes results from the difficulty in generating extremely complex surfaces in order to separate one class from all of the others. This approach finds pairwise separating surfaces and forms the more complex separating surface from combinations of simpler surfaces. This technique both reduces training time and improves accuracy over the previously reported results. Accuracies of 97.47%, 95.70%, and 99.05% were achieved for the polar, desert and smoke data sets, respectively.
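
    The switching-plus-leaf-network structure can be sketched as follows; nearest-centroid models stand in for the neural networks, and the class clusters are assumed here rather than derived from a confusion matrix as in the paper. All names and data are illustrative.

      import numpy as np

      class NearestCentroid:
          # Stand-in classifier: each class is represented by its feature centroid.
          def fit(self, X, y):
              self.labels = np.unique(y)
              self.centroids = np.array([X[y == c].mean(axis=0) for c in self.labels])
              return self
          def predict(self, x):
              return self.labels[np.argmin(np.linalg.norm(self.centroids - x, axis=1))]

      def train_hierarchy(X, y, clusters):
          # clusters: dict cluster_id -> list of class labels grouped as "confusable"
          cluster_of = {c: k for k, cs in clusters.items() for c in cs}
          y_cluster = np.array([cluster_of[c] for c in y])
          switch = NearestCentroid().fit(X, y_cluster)            # switching network
          leaves = {k: NearestCentroid().fit(X[np.isin(y, cs)], y[np.isin(y, cs)])
                    for k, cs in clusters.items()}                # leaf networks
          return switch, leaves

      def classify(x, switch, leaves):
          # Route to a leaf via the switching model, then resolve the final class.
          return leaves[switch.predict(x)].predict(x)

      # Toy usage: 4 classes grouped into 2 clusters of similar classes.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 5)) + np.repeat(np.arange(4), 50)[:, None]
      y = np.repeat(np.array(["ice", "snow", "sand", "smoke"]), 50)
      switch, leaves = train_hierarchy(X, y, {0: ["ice", "snow"], 1: ["sand", "smoke"]})
      print(classify(X[0], switch, leaves))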

  5. Eye Movement Control during Scene Viewing: Immediate Effects of Scene Luminance on Fixation Durations

    ERIC Educational Resources Information Center

    Henderson, John M.; Nuthmann, Antje; Luke, Steven G.

    2013-01-01

    Recent research on eye movements during scene viewing has primarily focused on where the eyes fixate. But eye fixations also differ in their durations. Here we investigated whether fixation durations in scene viewing are under the direct and immediate control of the current visual input. Subjects freely viewed photographs of scenes in preparation…

  6. Initial Scene Representations Facilitate Eye Movement Guidance in Visual Search

    ERIC Educational Resources Information Center

    Castelhano, Monica S.; Henderson, John M.

    2007-01-01

    What role does the initial glimpse of a scene play in subsequent eye movement guidance? In 4 experiments, a brief scene preview was followed by object search through the scene via a small moving window that was tied to fixation position. Experiment 1 demonstrated that the scene preview resulted in more efficient eye movements compared with a…

  7. Iconic memory for the gist of natural scenes.

    PubMed

    Clarke, Jason; Mack, Arien

    2014-11-01

    Does iconic memory contain the gist of multiple scenes? Three experiments were conducted. In the first, four scenes from different basic-level categories were briefly presented in one of two conditions: a cue or a no-cue condition. The cue condition was designed to provide an index of the contents of iconic memory of the display. Subjects were more sensitive to scene gist in the cue condition than in the no-cue condition. In the second, the scenes came from the same basic-level category. We found no difference in sensitivity between the two conditions. In the third, six scenes from different basic level categories were presented in the visual periphery. Subjects were more sensitive to scene gist in the cue condition. These results suggest that scene gist is contained in iconic memory even in the visual periphery; however, iconic representations are not sufficiently detailed to distinguish between scenes coming from the same category. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Adolescent Characters and Alcohol Use Scenes in Brazilian Movies, 2000-2008.

    PubMed

    Castaldelli-Maia, João Mauricio; de Andrade, Arthur Guerra; Lotufo-Neto, Francisco; Bhugra, Dinesh

    2016-04-01

    Quantitative structured assessment of 193 scenes depicting substance use from a convenience sample of 50 Brazilian movies was performed. Logistic regression and analysis of variance or multivariate analysis of variance models were employed to test two different types of outcome regarding alcohol appearance: the mean length of alcohol scenes in seconds and the prevalence of alcohol use scenes. The presence of adolescent characters was associated with a higher prevalence of alcohol use scenes compared to nonalcohol use scenes. The presence of adolescents was also associated with a higher-than-average length of alcohol use scenes compared to the nonalcohol use scenes. Alcohol use was negatively associated with cannabis, cocaine, and other drug use. However, when the use of cannabis, cocaine, or other drugs was present in the alcohol use scenes, a higher average length was found. This may mean that the most vulnerable group sees drinking as a more attractive option, potentially leading to higher alcohol use. © The Author(s) 2016.

  9. How many pixels make a memory? Picture memory for small pictures.

    PubMed

    Wolfe, Jeremy M; Kuzmova, Yoana I

    2011-06-01

    Torralba (Visual Neuroscience, 26, 123-131, 2009) showed that, if the resolution of images of scenes were reduced to the information present in very small "thumbnail images," those scenes could still be recognized. The objects in those degraded scenes could be identified, even though it would be impossible to identify them if they were removed from the scene context. Can tiny and/or degraded scenes be remembered, or are they like brief presentations, identified but not remembered? We report that memory for tiny and degraded scenes parallels the recognizability of those scenes. You can remember a scene to approximately the degree to which you can classify it. Interestingly, there is a striking asymmetry in memory when scenes are not the same size on their initial appearance and subsequent test. Memory for a large, full-resolution stimulus can be tested with a small, degraded stimulus. However, memory for a small stimulus is not retrieved when it is tested with a large stimulus.

  10. Multiple scene attitude estimator performance for LANDSAT-1

    NASA Technical Reports Server (NTRS)

    Rifman, S. S.; Monuki, A. T.; Shortwell, C. P.

    1979-01-01

    Initial results are presented to demonstrate the performance of a linear sequential estimator (Kalman filter) used to estimate a LANDSAT 1 spacecraft attitude time series defined for four scenes. With the revised estimator, a GCP-poor scene (a scene with no usable geodetic control points, GCPs) can be rectified to higher accuracies than otherwise, based on the use of GCPs in adjacent scenes. Attitude estimation errors were determined by the use of GCPs located in the GCP-poor test scene but not used to update the Kalman filter. Initial results indicate that errors of 500 m (rms) can be attained for the GCP-poor scenes. Operational factors are related to various scenarios.
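
    A one-dimensional sketch of the sequential estimation idea: a slowly drifting attitude angle modeled as a random walk is tracked from noisy GCP-derived measurements. The process and measurement noise values are assumptions for illustration, not the parameters of the actual LANDSAT-1 estimator.

      import numpy as np

      def kalman_1d(measurements, q=1e-6, r=1e-3, x0=0.0, p0=1.0):
          # Scalar Kalman filter for a random-walk state observed directly with noise.
          x, p, estimates = x0, p0, []
          for z in measurements:
              p = p + q                       # predict: add process (drift) noise
              k = p / (p + r)                 # Kalman gain
              x = x + k * (z - x)             # update with measurement residual
              p = (1 - k) * p
              estimates.append(x)
          return estimates

      # Usage: simulate a drifting roll angle (radians) observed through GCP noise.
      rng = np.random.default_rng(1)
      truth = np.cumsum(rng.normal(0, 1e-3, 50))
      obs = truth + rng.normal(0, 0.03, 50)
      est = kalman_1d(obs, q=1e-6, r=0.03**2)
      print(f"rms error: {np.sqrt(np.mean((np.array(est) - truth) ** 2)):.4f}")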

  11. Understanding what is visible in a mirror or through a window before and after updating the position of an object.

    PubMed

    Bertamini, Marco

    2014-01-01

    In the Venus effect observers assume that Venus is admiring her own reflection in the mirror (Bertamini et al., 2003a). However, since the observer sees her face in the mirror, Venus is actually looking at the reflection of the painter. This effect is general because it is not specific to paintings or to images of people. This study tests whether people have difficulties in estimating what is visible from a given viewpoint using a paper and pencil task. Participants (N = 80) judged what is visible in a scene that could include a mirror or an aperture. The object in the scene (a train) was already located in front of the mirror or behind the aperture, or the same object had to be imagined to move to that location. The hypothesis was that this extra step (spatial transformation) is always part of how people reason about mirrors because they have to imagine the location of the reflection based on the location of the physical object. If so, this manipulation would equate the difficulty of the mirror and of the aperture conditions. Results show that performance on the paper and pencil task was better than expected, probably because of the asymmetric nature of the object used. However, an additional cost in reasoning about mirrors was confirmed.

  12. Violence and its injury consequences in American movies: a public health perspective

    PubMed Central

    McArthur, D.; Peek-Asa, C.; Webb, T.; Fisher, K.; Cook, B.; Browne, N.; Kraus, J.

    2000-01-01

    Objectives—The purpose of this study was to evaluate the seriousness and frequency of violence and the degree of associated injury depicted in the 100 top grossing American films of 1994. Methods—Each scene in each film was examined for the presentation of violent actions upon persons and coded by means of a systematic context sensitive analytic scheme. Specific degrees of violence and indices of injury severity were abstracted. Only actually depicted, not implied, actions were coded, although both explicit and implied consequences were examined. Results—The median number of violent actions per film was 16, with a range from 1 to 110. Intentional violence outnumbered unintentional violence by a factor of 10. Almost 90% of violent actions showed no consequences to the recipient's body, although more than 80% of the violent actions were executed with lethal or moderate force. Fewer than 1% of violent actions were accompanied by injuries that were then medically attended. Conclusions—Violent force in American films of 1994 was overwhelmingly intentional and in four of five cases was executed at levels likely to cause significant bodily injury. Not only action films but movies of all genres contained scenes in which the intensity of the action was not matched by correspondingly severe injury consequences. Many American films, regardless of genre, tend to minimize the consequences of violence to human beings. PMID:10875668

  13. Design and Imaging of Ground-Based Multiple-Input Multiple-Output Synthetic Aperture Radar (MIMO SAR) with Non-Collinear Arrays.

    PubMed

    Hu, Cheng; Wang, Jingyang; Tian, Weiming; Zeng, Tao; Wang, Rui

    2017-03-15

    Multiple-Input Multiple-Output (MIMO) radar provides much more flexibility than the traditional radar thanks to its ability to realize far more observation channels than the actual number of transmit and receive (T/R) elements. In designing the MIMO imaging radar arrays, the commonly used virtual array theory generally assumes that all elements are on the same line. However, due to the physical size of the antennas and coupling effect between T/R elements, a certain height difference between T/R arrays is essential, which will result in the defocusing of edge points of the scene. On the other hand, the virtual array theory implies far-field approximation. Therefore, with a MIMO array designed by this theory, there will exist inevitable high grating lobes in the imaging results of near-field edge points of the scene. To tackle these problems, this paper derives the relationship between target's point spread function (PSF) and pattern of T/R arrays, by which the design criterion is presented for near-field imaging MIMO arrays. Firstly, the proper height between T/R arrays is designed to focus the near-field edge points well. Secondly, the far-field array is modified to suppress the grating lobes in the near-field area. Finally, the validity of the proposed methods is verified by two simulations and an experiment.
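
    The virtual-array bookkeeping that the design starts from can be sketched as follows: under the equivalent phase-center approximation, each transmit/receive pair behaves roughly like a single element located midway between the two, so M transmitters and N receivers can yield up to M*N observation channels. The element spacings below are illustrative, not taken from the paper.

      import numpy as np

      tx = np.array([0.0, 2.0, 4.0])                  # transmit element positions (m)
      rx = np.array([0.0, 0.5, 1.0, 1.5])             # receive element positions (m)

      # Equivalent phase center of each T/R pair sits midway between the two elements.
      virtual = np.array([(t + r) / 2 for t in tx for r in rx])

      print(np.sort(virtual))            # 12 observation channels from 7 physical elements
      print(len(np.unique(virtual)))     # number of distinct phase centers

    The height offset between the T/R arrays and the near-field effects addressed in the paper are exactly the factors this simple far-field approximation ignores.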

  14. Design and Imaging of Ground-Based Multiple-Input Multiple-Output Synthetic Aperture Radar (MIMO SAR) with Non-Collinear Arrays

    PubMed Central

    Hu, Cheng; Wang, Jingyang; Tian, Weiming; Zeng, Tao; Wang, Rui

    2017-01-01

    Multiple-Input Multiple-Output (MIMO) radar provides much more flexibility than the traditional radar thanks to its ability to realize far more observation channels than the actual number of transmit and receive (T/R) elements. In designing the MIMO imaging radar arrays, the commonly used virtual array theory generally assumes that all elements are on the same line. However, due to the physical size of the antennas and coupling effect between T/R elements, a certain height difference between T/R arrays is essential, which will result in the defocusing of edge points of the scene. On the other hand, the virtual array theory implies far-field approximation. Therefore, with a MIMO array designed by this theory, there will exist inevitable high grating lobes in the imaging results of near-field edge points of the scene. To tackle these problems, this paper derives the relationship between target’s point spread function (PSF) and pattern of T/R arrays, by which the design criterion is presented for near-field imaging MIMO arrays. Firstly, the proper height between T/R arrays is designed to focus the near-field edge points well. Secondly, the far-field array is modified to suppress the grating lobes in the near-field area. Finally, the validity of the proposed methods is verified by two simulations and an experiment. PMID:28294996

  15. Violence and its injury consequences in American movies: a public health perspective.

    PubMed

    McArthur, D; Peek-Asa, C; Webb, T; Fisher, K; Cook, B; Browne, N; Kraus, J

    2000-06-01

    The purpose of this study was to evaluate the seriousness and frequency of violence and the degree of associated injury depicted in the 100 top grossing American films of 1994. Each scene in each film was examined for the presentation of violent actions upon persons and coded by means of a systematic context sensitive analytic scheme. Specific degrees of violence and indices of injury severity were abstracted. Only actually depicted, not implied, actions were coded, although both explicit and implied consequences were examined. The median number of violent actions per film was 16, with a range from 1 to 110. Intentional violence outnumbered unintentional violence by a factor of 10. Almost 90% of violent actions showed no consequences to the recipient's body, although more than 80% of the violent actions were executed with lethal or moderate force. Fewer than 1% of violent actions were accompanied by injuries that were then medically attended. Violent force in American films of 1994 was overwhelmingly intentional and in four of five cases was executed at levels likely to cause significant bodily injury. Not only action films but movies of all genres contained scenes in which the intensity of the action was not matched by correspondingly severe injury consequences. Many American films, regardless of genre, tend to minimize the consequences of violence to human beings.

  16. Mackay campus of environmental education and digital cultural construction: the application of 3D virtual reality

    NASA Astrophysics Data System (ADS)

    Chien, Shao-Chi; Chung, Yu-Wei; Lin, Yi-Hsuan; Huang, Jun-Yi; Chang, Jhih-Ting; He, Cai-Ying; Cheng, Yi-Wen

    2012-04-01

    This study uses 3D virtual reality technology to create the "Mackay campus of the environmental education and digital cultural 3D navigation system" for local historical sites in the Tamsui (Hoba) area, in hopes of providing tourism information and navigation through historical sites using a 3D navigation system. We used Auto CAD, Sketch Up, and SpaceEyes 3D software to construct the virtual reality scenes and to model the school's historical sites, such as the House of Reverends, the House of Maidens, the Residence of Mackay, and the Education Hall. With this technology, we completed the Mackay campus environmental education and digital cultural platform. The platform we established can indeed achieve the desired function of providing tourism information and historical site navigation. The interactive multimedia presentation of the information allows users to obtain a direct information response. In addition to showing the external appearance of buildings, the navigation platform also allows users to enter the buildings to view lifelike scenes and textual information related to the historical sites. The historical sites are modeled according to their actual size, which gives users a more realistic feel. In terms of the navigation route, the navigation system does not force users along a fixed route, but instead allows users to freely control the route they take to view the historical sites on the platform.

  17. When Does Repeated Search in Scenes Involve Memory? Looking at versus Looking for Objects in Scenes

    ERIC Educational Resources Information Center

    Vo, Melissa L. -H.; Wolfe, Jeremy M.

    2012-01-01

    One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in the same scene, search performance remained…

  18. Effects of memory colour on colour constancy for unknown coloured objects.

    PubMed

    Granzier, Jeroen J M; Gegenfurtner, Karl R

    2012-01-01

    The perception of an object's colour remains constant despite large variations in the chromaticity of the illumination (colour constancy). Hering suggested that memory colours, the typical colours of objects, could help in estimating the illuminant's colour and could therefore be an important factor in establishing colour constancy. Here we test whether the presence of objects with diagnostic colours (fruits, vegetables, etc) within a scene influences colour constancy for unknown coloured objects in the scene. Subjects matched one of four Munsell papers placed in a scene illuminated under either a reddish or a greenish lamp with the Munsell book of colour illuminated by a neutral lamp. The Munsell papers were embedded in four different scenes: one containing diagnostically coloured objects, one containing incongruently coloured objects, a third with geometrical objects of the same colour as the diagnostically coloured objects, and one containing non-diagnostically coloured objects (e.g., a yellow coffee mug). All objects were placed against a black background. Colour constancy was on average significantly higher for the scene containing the diagnostically coloured objects than for the other scenes tested. We conclude that the colours of familiar objects help in obtaining colour constancy for unknown objects.

  19. Parallel programming of saccades during natural scene viewing: evidence from eye movement positions.

    PubMed

    Wu, Esther X W; Gilani, Syed Omer; van Boxtel, Jeroen J A; Amihai, Ido; Chua, Fook Kee; Yen, Shih-Cheng

    2013-10-24

    Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms) moved to the center only one saccade later. Our data on eye movement positions provide novel evidence of parallel programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis.

  20. Attentional synchrony and the influence of viewing task on gaze behavior in static and dynamic scenes.

    PubMed

    Smith, Tim J; Mital, Parag K

    2013-07-17

    Does viewing task influence gaze during dynamic scene viewing? Research into the factors influencing gaze allocation during free viewing of dynamic scenes has reported that the gaze of multiple viewers clusters around points of high motion (attentional synchrony), suggesting that gaze may be primarily under exogenous control. However, the influence of viewing task on gaze behavior in static scenes and during real-world interaction has been widely demonstrated. To dissociate exogenous from endogenous factors during dynamic scene viewing we tracked participants' eye movements while they (a) freely watched unedited videos of real-world scenes (free viewing) or (b) quickly identified where the video was filmed (spot-the-location). Static scenes were also presented as controls for scene dynamics. Free viewing of dynamic scenes showed greater attentional synchrony, longer fixations, and more gaze to people and areas of high flicker compared with static scenes. These differences were minimized by the viewing task. In comparison with the free viewing of dynamic scenes, during the spot-the-location task fixation durations were shorter, saccade amplitudes were longer, and gaze exhibited less attentional synchrony and was biased away from areas of flicker and people. These results suggest that the viewing task can have a significant influence on gaze during a dynamic scene but that endogenous control is slow to kick in as initial saccades default toward the screen center, areas of high motion and people before shifting to task-relevant features. This default-like viewing behavior returns after the viewing task is completed, confirming that gaze behavior is more predictable during free viewing of dynamic than static scenes but that this may be due to natural correlation between regions of interest (e.g., people) and motion.

  1. Learning from Mistakes

    PubMed Central

    Fischer, Melissa A; Mazor, Kathleen M; Baril, Joann; Alper, Eric; DeMarco, Deborah; Pugnaire, Michele

    2006-01-01

    CONTEXT Trainees are exposed to medical errors throughout medical school and residency. Little is known about what facilitates and limits learning from these experiences. OBJECTIVE To identify major factors and areas of tension in trainees' learning from medical errors. DESIGN, SETTING, AND PARTICIPANTS Structured telephone interviews with 59 trainees (medical students and residents) from 1 academic medical center. Five authors reviewed transcripts of audiotaped interviews using content analysis. RESULTS Trainees were aware that medical errors occur from early in medical school. Many had an intense emotional response to the idea of committing errors in patient care. Students and residents noted variation and conflict in institutional recommendations and individual actions. Many expressed role confusion regarding whether and how to initiate discussion after errors occurred. Some noted the conflict inherent in reporting errors to seniors who were responsible for their evaluation. Learners requested more open discussion of actual errors and faculty disclosure. No students or residents felt that they learned better from near misses than from actual errors, and many believed that they learned the most when harm was caused. CONCLUSIONS Trainees are aware of medical errors, but remaining tensions may limit learning. Institutions can immediately address variability in faculty response and local culture by disseminating clear, accessible algorithms to guide behavior when errors occur. Educators should develop longitudinal curricula that integrate actual cases and faculty disclosure. Future multi-institutional work should focus on identified themes such as teaching and learning in emotionally charged situations, learning from errors and near misses, and the balance between individual and systems responsibility. PMID:16704381

  2. Note on Inverse Bremsstrahlung in a Strong Electromagnetic Field

    DOE R&D Accomplishments Database

    Bethe, H. A.

    1972-09-01

    The collisional energy loss of an electron undergoing forced oscillation in an electromagnetic field behaves quite differently in the low and high intensity limits. ... It is shown that in the case of an electromagnetic field with v_o >> v_t, the rate of transfer is much slower, and actually decreases with the strength of the field.

  3. An Unfamiliar Minority: Vietnam Veterans on Campus.

    ERIC Educational Resources Information Center

    Drake, David

    This essay, based on the personal experiences of a professional librarian, who is also a Vietnam veteran, looks at why Vietnam veterans are a minority in higher education and the misconceptions that surround them. The lack of contact with actual veterans by administrators in many institutions of higher education is noted, and the paper goes on to…

  4. Deterring Drinking and Driving: The Australian Approach.

    ERIC Educational Resources Information Center

    Berger, Dale E.; Berger, Peggy M.

    This paper begins by noting that recent efforts in the United States to reduce the incidence of alcohol-impaired driving have not been very effective and suggests that for efforts to be effective, they must raise the actual risk of punishment to a level that cannot be ignored by potential offenders. It then describes an effective system of…

  5. Laboratory Test of the Galilean Universality of the Free Fall Experiment

    ERIC Educational Resources Information Center

    Christensen, Rasmus S.; Teiwes, Ricky; Petersen, Steffen V.; Uggerhøj, Ulrik I.; Jacoby, Bo

    2014-01-01

    There is a popular myth that Galileo dropped two objects of the same shape but different mass, noted their equal fall time, and concluded that gravitational motion is independent of the mass of the object. This paper demonstrates that this experiment--if actually performed--most likely would have yielded a different result and thus with modern…

  6. Does Performance Related Pay for Teachers Improve Student Performance? Some Evidence from India.

    ERIC Educational Resources Information Center

    Kingdon, Geeta; Teal, Francis

    This study examined whether teacher pay was responsive to measures of student performance, noting whether higher pay actually raised student learning outcomes. Data came from a survey of students and schools in India, where public and private school sectors have developed in parallel. The survey collected data on 902 students, 172 teachers, and…

  7. A Strategy for Playing in the Majors When You're in the Minors.

    ERIC Educational Resources Information Center

    O'Keefe, Heather C.

    Noting that advertising students in smaller colleges have little opportunity for exposure to major advertising companies as role models, the advertising class at a North Dakota university was given the opportunity to design an actual magazine layout to promote the university's Aerospace Science Center. The winning ad, as judged by the dean of the…

  8. Particle Size Control for PIV Seeding Using Dry Ice

    DTIC Science & Technology

    2010-03-01

    in flight actually being carried out, the observations, drawings and notes of Leonardo da Vinci showed an analytical process to develop a way for... theoretical particle response: dv_p/dt = -C(v_p - U), with C = 18µ/(ρ_p d_p²). Bibliography: Linscott, R. N. and Da Vinci, L., The Notebooks of Leonardo Da Vinci.
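
    A worked example of the particle response relation quoted in the excerpt: dv_p/dt = -C(v_p - U) with C = 18µ/(ρ_p d_p²), so the response time constant is tau = 1/C. The dry-ice property values below are illustrative assumptions, not figures from the report.

      # Particle response time for PIV seeding: tau = rho_p * d_p**2 / (18 * mu).
      mu    = 1.8e-5      # dynamic viscosity of air, Pa*s
      rho_p = 1560.0      # assumed dry-ice particle density, kg/m^3 (illustrative)
      d_p   = 2e-6        # assumed particle diameter, m (illustrative)

      C   = 18 * mu / (rho_p * d_p ** 2)
      tau = 1 / C
      print(f"tau = {tau * 1e6:.2f} microseconds")   # smaller d_p -> faster flow tracking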

  9. The forensic value of X-linked markers in mixed-male DNA analysis.

    PubMed

    He, HaiJun; Zha, Lagabaiyila; Cai, JinHong; Huang, Jian

    2018-05-04

    Autosomal genetic markers and Y chromosome markers have been widely applied in analysis of mixed stains at crime scenes by forensic scientists. However, true genotype combinations are often difficult to distinguish using autosomal markers when similar amounts of DNA are contributed by multiple donors. In addition, specific individuals cannot be determined by Y chromosomal markers because male relatives share the same Y chromosome. X-linked markers, possessing characteristics somewhere intermediate between autosomes and the Y chromosome, are less universally applied in criminal casework. In this paper, X-linked markers are proposed for the analysis of male mixtures because their true genotypes can be recognized more easily and accurately than genotypes can be resolved with autosomal (AS) markers. In this study, an actual two-man mixed stain from a forensic case file and simulated male-mixed DNA were examined simultaneously with the X markers and autosomal markers. Finally, the actual mixture was separated successfully by the X markers, although it was unresolved by AS-STRs, and the separation ratio of the simulated mixture was much higher using Chr X tools than with AS methods. We believe X-linked markers provide significant advantages in individual discrimination of male mixtures that should be further applied to forensic work.

  10. Automatic event recognition and anomaly detection with attribute grammar by learning scene semantics

    NASA Astrophysics Data System (ADS)

    Qi, Lin; Yao, Zhenyu; Li, Li; Dong, Junyu

    2007-11-01

    In this paper we present a novel framework for automatic event recognition and abnormal behavior detection with an attribute grammar by learning scene semantics. This framework combines learning scene semantics through trajectory analysis with constructing an attribute grammar-based event representation. The scene and event information is learned automatically. Abnormal behaviors that disobey scene semantics or event grammar rules are detected. By this method, an approach to understanding video scenes is achieved. Furthermore, with this prior knowledge, the accuracy of abnormal event detection is increased.
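
    A much-simplified stand-in for the grammar-based checking described above (plain rule matching rather than a full attribute grammar; the action symbols and rules are invented): primitive actions extracted from trajectories are compared against allowed event rules, and sequences that no rule derives are flagged as anomalies.

      # Toy rule set: each "event" is an allowed sequence of primitive actions.
      EVENT_RULES = {
          "enter_and_park":    ["appear", "move", "stop"],
          "pass_through":      ["appear", "move", "exit"],
          "pick_up_passenger": ["appear", "move", "stop", "move", "exit"],
      }

      def recognize(actions):
          """Return the matching event name, or None if the sequence is anomalous."""
          for name, rule in EVENT_RULES.items():
              if actions == rule:
                  return name
          return None

      observed = [
          ["appear", "move", "stop"],             # normal: matches enter_and_park
          ["appear", "stop", "stop", "exit"],     # no rule derives this sequence
      ]
      for seq in observed:
          event = recognize(seq)
          print(seq, "->", event if event else "ANOMALY")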

  11. Gaze Behavior in a Natural Environment with a Task-Relevant Distractor: How the Presence of a Goalkeeper Distracts the Penalty Taker

    PubMed Central

    Kurz, Johannes; Hegele, Mathias; Munzert, Jörn

    2018-01-01

    Gaze behavior in natural scenes has been shown to be influenced not only by top–down factors such as task demands and action goals but also by bottom–up factors such as stimulus salience and scene context. Whereas gaze behavior in the context of static pictures emphasizes spatial accuracy, gazing in natural scenes seems to rely more on where to direct the gaze involving both anticipative components and an evaluation of ongoing actions. Not much is known about gaze behavior in far-aiming tasks in which multiple task-relevant targets and distractors compete for the allocation of visual attention via gaze. In the present study, we examined gaze behavior in the far-aiming task of taking a soccer penalty. This task contains a proximal target, the ball; a distal target, an empty location within the goal; and a salient distractor, the goalkeeper. Our aim was to investigate where participants direct their gaze in a natural environment with multiple potential fixation targets that differ in task relevance and salience. Results showed that the early phase of the run-up seems to be driven by both the salience of the stimulus setting and the need to perform a spatial calibration of the environment. The late run-up, in contrast, seems to be controlled by attentional demands of the task with penalty takers having habitualized a visual routine that is not disrupted by external influences (e.g., the goalkeeper). In addition, when trying to shoot a ball as accurately as possible, penalty takers directed their gaze toward the ball in order to achieve optimal foot-ball contact. These results indicate that whether gaze is driven by salience of the stimulus setting or by attentional demands depends on the phase of the actual task. PMID:29434560

  12. Interaction between scene-based and array-based contextual cueing.

    PubMed

    Rosenbaum, Gail M; Jiang, Yuhong V

    2013-07-01

    Contextual cueing refers to the cueing of spatial attention by repeated spatial context. Previous studies have demonstrated distinctive properties of contextual cueing by background scenes and by an array of search items. Whereas scene-based contextual cueing reflects explicit learning of the scene-target association, array-based contextual cueing is supported primarily by implicit learning. In this study, we investigated the interaction between scene-based and array-based contextual cueing. Participants searched for a target that was predicted by both the background scene and the locations of distractor items. We tested three possible patterns of interaction: (1) The scene and the array could be learned independently, in which case cueing should be expressed even when only one cue was preserved; (2) the scene and array could be learned jointly, in which case cueing should occur only when both cues were preserved; (3) overshadowing might occur, in which case learning of the stronger cue should preclude learning of the weaker cue. In several experiments, we manipulated the nature of the contextual cues present during training and testing. We also tested explicit awareness of scenes, scene-target associations, and arrays. The results supported the overshadowing account: Specifically, scene-based contextual cueing precluded array-based contextual cueing when both were predictive of the location of a search target. We suggest that explicit, endogenous cues dominate over implicit cues in guiding spatial attention.

  13. The roles of scene priming and location priming in object-scene consistency effects

    PubMed Central

    Heise, Nils; Ansorge, Ulrich

    2014-01-01

    Presenting consistent objects in scenes facilitates object recognition as compared to inconsistent objects. Yet the mechanisms by which scenes influence object recognition are still not understood. According to one theory, consistent scenes facilitate visual search for objects at expected places. Here, we investigated two predictions following from this theory: If visual search is responsible for consistency effects, consistency effects could be weaker (1) with better-primed than less-primed object locations, and (2) with less-primed than better-primed scenes. In Experiments 1 and 2, locations of objects were varied within a scene to a different degree (one, two, or four possible locations). In addition, object-scene consistency was studied as a function of progressive numbers of repetitions of the backgrounds. Because repeating locations and backgrounds could facilitate visual search for objects, these repetitions might alter the object-scene consistency effect by lowering location uncertainty. Although we find evidence for a significant consistency effect, we find no clear support for impacts of scene priming or location priming on the size of the consistency effect. Additionally, we find evidence that the consistency effect is dependent on the eccentricity of the target objects. These results point to only small influences of priming on object-scene consistency effects but, all in all, the findings can be reconciled with a visual-search explanation of the consistency effect. PMID:24910628

  14. Semantic guidance of eye movements in real-world scenes

    PubMed Central

    Hwang, Alex D.; Wang, Hsueh-Cheng; Pomplun, Marc

    2011-01-01

    The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying Latent Semantic Analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects’ gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects’ eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. PMID:21426914
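
    The core computation can be sketched as follows: each labeled scene object is weighted by the cosine similarity between its LSA vector and that of the currently fixated object (or the search target). The 4-dimensional vectors below are stand-ins for real LSA vectors derived from object labels; object names are illustrative.

      import numpy as np

      # Stand-in "LSA" vectors for a few object labels (real LSA vectors would be
      # derived from a large text corpus and have hundreds of dimensions).
      lsa = {
          "stove":  np.array([0.9, 0.1, 0.0, 0.2]),
          "kettle": np.array([0.8, 0.2, 0.1, 0.1]),
          "sofa":   np.array([0.1, 0.9, 0.3, 0.0]),
          "laptop": np.array([0.0, 0.3, 0.9, 0.4]),
      }

      def cosine(a, b):
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      def semantic_saliency(fixated, objects):
          # Weight each other object by its similarity to the fixated object,
          # then normalize so the weights sum to 1 over the scene objects.
          sims = {o: max(cosine(lsa[fixated], lsa[o]), 0.0) for o in objects if o != fixated}
          total = sum(sims.values())
          return {o: s / total for o, s in sims.items()}

      print(semantic_saliency("kettle", list(lsa)))   # "stove" should score highest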

  15. Semantic guidance of eye movements in real-world scenes.

    PubMed

    Hwang, Alex D; Wang, Hsueh-Cheng; Pomplun, Marc

    2011-05-25

    The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying latent semantic analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects' gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects' eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Basic level scene understanding: categories, attributes and structures

    PubMed Central

    Xiao, Jianxiong; Hays, James; Russell, Bryan C.; Patterson, Genevieve; Ehinger, Krista A.; Torralba, Antonio; Oliva, Aude

    2013-01-01

    A longstanding goal of computer vision is to build a system that can automatically understand a 3D scene from a single image. This requires extracting semantic concepts and 3D information from 2D images which can depict an enormous variety of environments that comprise our visual world. This paper summarizes our recent efforts toward these goals. First, we describe the richly annotated SUN database which is a collection of annotated images spanning 908 different scene categories with object, attribute, and geometric labels for many scenes. This database allows us to systematically study the space of scenes and to establish a benchmark for scene and object recognition. We augment the categorical SUN database with 102 scene attributes for every image and explore attribute recognition. Finally, we present an integrated system to extract the 3D structure of the scene and objects depicted in an image. PMID:24009590

  17. Scrambled eyes? Disrupting scene structure impedes focal processing and increases bottom-up guidance.

    PubMed

    Foulsham, Tom; Alan, Rana; Kingstone, Alan

    2011-10-01

    Previous research has demonstrated that search and memory for items within natural scenes can be disrupted by "scrambling" the images. In the present study, we asked how disrupting the structure of a scene through scrambling might affect the control of eye fixations in either a search task (Experiment 1) or a memory task (Experiment 2). We found that the search decrement in scrambled scenes was associated with poorer guidance of the eyes to the target. Across both tasks, scrambling led to shorter fixations and longer saccades, and more distributed, less selective overt attention, perhaps corresponding to an ambient mode of processing. These results confirm that scene structure has widespread effects on the guidance of eye movements in scenes. Furthermore, the results demonstrate the trade-off between scene structure and visual saliency, with saliency having more of an effect on eye guidance in scrambled scenes.

  18. Characteristics of modern pollen rain and the relationship to vegetation in sagebrush-steppe environments of Montana, USA

    NASA Astrophysics Data System (ADS)

    Briles, C.; Bryant, V.

    2010-12-01

    Variations in pollen production and dispersal characteristics among plant species complicate our ability to determine direct relationships between deposited pollen and actual vegetation. In order to better understand modern pollen-vegetation relationships, we analyzed pollen from 61 samples taken from sagebrush-steppe environments across Montana and compared them with the actual vegetation composition at each site. We also determined to what degree sagebrush-steppe communities can be geographically distinguished from one another based on their pollen signature. Pollen preservation was good, especially in wetter environments, with pollen degradation ranging from 4% to 15%. Diploxylon Pinus was the primary contributor to the pollen rain, even in plots where pine trees did not occur or were several kilometers from the plot. Artemisia and grass pollen are underrepresented in the soil samples, while Chenopodiaceae and Juniperus pollen are overrepresented when compared to actual vegetation composition. Insect-pollinated species are present only in very minor amounts in the soil samples, even though some (e.g., Brassica) are abundant in the plots. In general, pollen spectra show significant differences between regions; within each region, however, the individual spectra do not differ significantly from one another. An understanding of modern pollen-vegetation relationships and the palynological “fingerprint” of sagebrush-steppe communities aid in climatic and ecological interpretations of fossil pollen assemblages. The data also provide important control samples for forensic studies that use pollen to geolocate an object or person to a crime scene.

  19. Full Scenes Produce More Activation than Close-Up Scenes and Scene-Diagnostic Objects in Parahippocampal and Retrosplenial Cortex: An fMRI Study

    ERIC Educational Resources Information Center

    Henderson, John M.; Larson, Christine L.; Zhu, David C.

    2008-01-01

    We used fMRI to directly compare activation in two cortical regions previously identified as relevant to real-world scene processing: retrosplenial cortex and a region of posterior parahippocampal cortex functionally defined as the parahippocampal place area (PPA). We compared activation in these regions to full views of scenes from a global…

  20. The scene and the unseen: manipulating photographs for experiments on change blindness and scene memory: image manipulation for change blindness.

    PubMed

    Ball, Felix; Elzemann, Anne; Busch, Niko A

    2014-09-01

    The change blindness paradigm, in which participants often fail to notice substantial changes in a scene, is a popular tool for studying scene perception, visual memory, and the link between awareness and attention. Some of the most striking and popular examples of change blindness have been demonstrated with digital photographs of natural scenes; in most studies, however, much simpler displays, such as abstract stimuli or "free-floating" objects, are typically used. Although simple displays have undeniable advantages, natural scenes remain a very useful and attractive stimulus for change blindness research. To assist researchers interested in using natural-scene stimuli in change blindness experiments, we provide here a step-by-step tutorial on how to produce changes in natural-scene images with a freely available image-processing tool (GIMP). We explain how changes in a scene can be made by deleting objects or relocating them within the scene or by changing the color of an object, in just a few simple steps. We also explain how the physical properties of such changes can be analyzed using GIMP and MATLAB (a high-level scientific programming tool). Finally, we present an experiment confirming that scenes manipulated according to our guidelines are effective in inducing change blindness and demonstrating the relationship between change blindness and the physical properties of the change and inter-individual differences in performance measures. We expect that this tutorial will be useful for researchers interested in studying the mechanisms of change blindness, attention, or visual memory using natural scenes.
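
    The GIMP editing steps are manual, but the physical-property analysis described for MATLAB can be approximated in a few lines. The sketch below uses Python rather than MATLAB and hypothetical file names; it measures the area, mean luminance difference, and centroid of the changed region between an original and a modified scene image.

```python
# Minimal sketch: quantify the physical properties of a scene change.
# Assumptions: 'original.png' and 'modified.png' are same-sized images;
# this is an illustration, not the authors' MATLAB analysis.
import numpy as np
from PIL import Image

orig = np.asarray(Image.open("original.png").convert("L"), dtype=float)
mod = np.asarray(Image.open("modified.png").convert("L"), dtype=float)

diff = np.abs(orig - mod)
changed = diff > 10  # threshold in gray levels; tune per image set

change_area_pct = 100.0 * changed.mean()
mean_lum_change = diff[changed].mean() if changed.any() else 0.0
ys, xs = np.nonzero(changed)
centroid = (xs.mean(), ys.mean()) if changed.any() else (None, None)

print(f"changed area: {change_area_pct:.2f}% of pixels")
print(f"mean luminance change in region: {mean_lum_change:.1f} gray levels")
print(f"change centroid (x, y): {centroid}")
```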

  1. When eye movements express memory for old and new scenes in the absence of awareness and independent of hippocampus

    PubMed Central

    Smith, Christine N.; Squire, Larry R.

    2017-01-01

    Eye movements can reflect memory. For example, participants make fewer fixations and sample fewer regions when viewing old versus new scenes (the repetition effect). It is unclear whether the repetition effect requires that participants have knowledge (awareness) of the old–new status of the scenes or if it can occur independent of knowledge about old–new status. It is also unclear whether the repetition effect is hippocampus-dependent or hippocampus-independent. A complication is that testing conscious memory for the scenes might interfere with the expression of unconscious (unaware), experience-dependent eye movements. In experiment 1, 75 volunteers freely viewed old and new scenes without knowledge that memory for the scenes would later be tested. Participants then made memory judgments and confidence judgments for each scene during a surprise recognition memory test. Participants exhibited the repetition effect regardless of the accuracy or confidence associated with their memory judgments (i.e., the repetition effect was independent of their awareness of the old–new status of each scene). In experiment 2, five memory-impaired patients with medial temporal lobe damage and six controls also viewed old and new scenes without expectation of memory testing. Both groups exhibited the repetition effect, even though the patients were impaired at recognizing which scenes were old and which were new. Thus, when participants viewed scenes without expectation of memory testing, eye movements associated with old and new scenes reflected unconscious, hippocampus-independent memory. These findings are consistent with the formulation that, when memory is expressed independent of awareness, memory is hippocampus-independent. PMID:28096499

  2. Study on mechanism of amplitude fluctuation of dual-frequency beat in microchip Nd:YAG laser

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Tan, Yidong; Zhang, Shulian; Sun, Liqun

    2017-01-01

    In laser heterodyne interferometry based on a microchip Nd:YAG dual-frequency laser, the amplitude of the beat note fluctuates periodically in the time domain, which degrades the stability of the measurement. On the frequency spectra of the two mono-frequency components of the laser and of their beat note, several weak sideband signals are observed on both sides of the beat note. It is shown that the sideband frequencies are associated with the relaxation oscillation frequencies of the laser. The mechanism by which the relaxation oscillations induce the sideband signals is analyzed theoretically, and the quantitative relationship between the intensity ratio of the beat note to the sideband signal and the level of the amplitude fluctuation is simulated with the derived mathematical model. The results demonstrate that the periodic amplitude fluctuation of the beat note is indeed induced by the relaxation oscillation, and that the level of the amplitude fluctuation is lower than 10% when the intensity ratio is greater than 32 dB. These conclusions are useful for reducing the amplitude fluctuation of the microchip Nd:YAG dual-frequency laser and improving the stability of heterodyne interferometry.
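
    As a rough plausibility check on the reported 32 dB / 10% relationship, one can treat a relaxation-oscillation sideband as an ordinary amplitude-modulation sideband, in which case the fluctuation level is about twice the sideband-to-carrier amplitude ratio. The sketch below encodes that textbook AM relation as an assumption for illustration; it is not the authors' derived model.

```python
# Rough AM-sideband estimate of beat-note amplitude fluctuation.
# Assumption: each sideband behaves like an AM sideband of the beat note,
# so the modulation depth m satisfies  ratio_dB = 20*log10(2/m).

def fluctuation_level(intensity_ratio_db: float) -> float:
    """Peak amplitude fluctuation (fraction) for a given carrier-to-sideband ratio."""
    return 2.0 * 10.0 ** (-intensity_ratio_db / 20.0)

for ratio_db in (26, 32, 40):
    print(f"{ratio_db} dB -> ~{100 * fluctuation_level(ratio_db):.1f}% fluctuation")
```

    Under this simplified assumption, a 32 dB ratio corresponds to roughly a 5% fluctuation, which is at least consistent with the sub-10% level reported in the abstract.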

  3. Threat of Punishment Motivates Memory Encoding via Amygdala, Not Midbrain, Interactions with the Medial Temporal Lobe

    PubMed Central

    Murty, Vishnu P.; LaBar, Kevin S.; Adcock, R. Alison

    2012-01-01

    Neural circuits associated with motivated declarative encoding and active threat avoidance have both been described, but the relative contribution of these systems to punishment-motivated encoding remains unknown. The current study used functional magnetic resonance imaging in humans to examine mechanisms of declarative memory enhancement when subjects were motivated to avoid punishments that were contingent on forgetting. A motivational cue on each trial informed participants whether they would be punished or not for forgetting an upcoming scene image. Items associated with the threat of shock were better recognized 24 h later. Punishment-motivated enhancements in subsequent memory were associated with anticipatory activation of right amygdala and increases in its functional connectivity with parahippocampal and orbitofrontal cortices. On a trial-by-trial basis, right amygdala activation during the motivational cue predicted hippocampal activation during encoding of the subsequent scene; across participants, the strength of this interaction predicted memory advantages due to motivation. Of note, punishment-motivated learning was not associated with activation of dopaminergic midbrain, as would be predicted by valence-independent models of motivation to learn. These data are consistent with the view that motivation by punishment activates the amygdala, which in turn prepares the medial temporal lobe for memory formation. The findings further suggest a brain system for declarative learning motivated by punishment that is distinct from that for learning motivated by reward. PMID:22745496

  4. Threat of punishment motivates memory encoding via amygdala, not midbrain, interactions with the medial temporal lobe.

    PubMed

    Murty, Vishnu P; Labar, Kevin S; Adcock, R Alison

    2012-06-27

    Neural circuits associated with motivated declarative encoding and active threat avoidance have both been described, but the relative contribution of these systems to punishment-motivated encoding remains unknown. The current study used functional magnetic resonance imaging in humans to examine mechanisms of declarative memory enhancement when subjects were motivated to avoid punishments that were contingent on forgetting. A motivational cue on each trial informed participants whether they would be punished or not for forgetting an upcoming scene image. Items associated with the threat of shock were better recognized 24 h later. Punishment-motivated enhancements in subsequent memory were associated with anticipatory activation of right amygdala and increases in its functional connectivity with parahippocampal and orbitofrontal cortices. On a trial-by-trial basis, right amygdala activation during the motivational cue predicted hippocampal activation during encoding of the subsequent scene; across participants, the strength of this interaction predicted memory advantages due to motivation. Of note, punishment-motivated learning was not associated with activation of dopaminergic midbrain, as would be predicted by valence-independent models of motivation to learn. These data are consistent with the view that motivation by punishment activates the amygdala, which in turn prepares the medial temporal lobe for memory formation. The findings further suggest a brain system for declarative learning motivated by punishment that is distinct from that for learning motivated by reward.

  5. Direct versus indirect processing changes the influence of color in natural scene categorization.

    PubMed

    Otsuka, Sachio; Kawaguchi, Jun

    2009-10-01

    Using a negative priming (NP) paradigm, we examined whether participants would categorize color and grayscale images of natural scenes that were presented peripherally and ignored. We focused on (1) attentional resources allocated to natural scenes and (2) direct versus indirect processing of them. We set up low and high attention-load conditions, based on the set size of the searched stimuli in the prime display (one and five). Participants were required to detect and categorize the target objects in natural scenes in a central visual search task, ignoring peripheral natural images in both the prime and probe displays. The results showed that, irrespective of attention load, NP was observed for color scenes but not for grayscale scenes. We did not observe any effect of color information in central visual search, where participants responded directly to natural scenes. These results indicate that, in a situation in which participants indirectly process natural scenes, color information is critical to object categorization, but when the scenes are processed directly, color information does not contribute to categorization.

  6. Giving voice to African thought in medical research ethics.

    PubMed

    Tangwa, Godfrey B

    2017-04-01

    In this article, I consider the virtual absence of an African voice and perspective in global discourses of medical research ethics against the backdrop of the high burden of diseases and epidemics on the continent and the fact that the continent is actually the scene of numerous and sundry medical research studies. I consider some reasons for this state of affairs as well as how the situation might be redressed. Using examples from the HIV/AIDS and Ebola epidemics, I attempt to show that the marginalization of Africa in medical research and medical research ethics is deliberate rather than accidental. It is causally related, in general terms, to a Eurocentric hegemony derived from colonialism and colonial indoctrination cum proselytization. I end by proposing seven theses for the critical reflection and appraisal of the reader.

  7. [The Briochés: a family of marionette operators].

    PubMed

    Baron, Pierre; Cony, Gérard

    2006-01-01

    In the 17th and 18th centuries, empirics travelled across Europe, most of them coming from what is now Italy. To attract crowds, they set up boards in the street and performed pantomimes, parades, or improvised scenes. In the 17th century this street show, rarely if ever printed, was an ordinary part of daily life. In the 18th century the street show was still alive, and whole families such as the Brioché, the Contugi, the Toscano, the Ricci, and the Borsari transmitted both the art of the stage and the empiric practice. A few members of these families moved from empirics to become qualified dentists. In other families of dentists, such as the Talma or the Fauchard, it was the stage that drew some of their members.

  8. Dioptric defocus maps across the visual field for different indoor environments.

    PubMed

    García, Miguel García; Ohlendorf, Arne; Schaeffel, Frank; Wahl, Siegfried

    2018-01-01

    One of the factors proposed to regulate eye growth is the error signal derived from retinal defocus, and this signal might arise from defocus not only in the fovea but across the whole visual field. Therefore, myopia could be better predicted by spatio-temporally mapping the 'environmental defocus' over the visual field. At present, no devices are available that could provide this information. A 'Kinect sensor v1' camera (Microsoft Corp.) and a portable eye tracker were used to develop a system for quantifying 'indoor defocus error signals' across the central 58° of the visual field. Dioptric differences relative to the fovea (assumed to be in focus) were recorded over the visual field, and 'defocus maps' were generated for various scenes and tasks.
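
    Given a per-pixel depth map and the distance of the currently fixated point, the dioptric defocus of every other point relative to the fovea is simply a difference of reciprocal distances. The sketch below shows that conversion for a synthetic depth map; the Kinect and eye-tracker capture, registration, and calibration are assumed to have already produced the hypothetical `depth_m` (in metres) and `fixation_px`.

```python
# Minimal sketch: dioptric defocus map relative to the fixated point.
# Assumptions: `depth_m` is a per-pixel depth map in metres (synthetic here)
# and `fixation_px` is the fixated pixel; sensor calibration is omitted.
import numpy as np

h, w = 120, 160
yy, xx = np.mgrid[0:h, 0:w]
depth_m = 1.0 + 0.02 * xx + 0.01 * yy            # synthetic indoor depth ramp

fixation_px = (60, 80)                           # (row, col) of assumed fixation
d_fix = depth_m[fixation_px]                     # fixated distance, assumed in focus

# Defocus in dioptres: vergence of each point minus vergence of the fixated point.
defocus_D = 1.0 / depth_m - 1.0 / d_fix

print(f"fixated distance: {d_fix:.2f} m")
print(f"defocus range: {defocus_D.min():.2f} to {defocus_D.max():.2f} D")
```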

  9. Reprint of: Cling film plastic wrap: An innovation for dead body packaging, preservation and transportation by first responders as a replacement for cadaver body bag in large scale disasters.

    PubMed

    Khoo, Lay See; Lai, Poh Soon; Saidin, Mohd Hilmi; Noor, Zahari; Mahmood, Mohd Shah

    2018-07-01

    Cadaver body bags are the conventional means of containing a human body or human remains, including for storage and transportation of the deceased at any crime scene or disaster scene. During disasters, more often than not, first responders, including the police, will be equipped with cadaver body bags for scene processing of human remains and collection of personal belongings at the disaster site. However, in unanticipated large-scale disasters involving hundreds or thousands of fatalities, cadaver body bag supplies may be scarce. The authors have therefore adopted cling film plastic wrap as an alternative to the cadaver body bag at the disaster site. The plastic wrap was tested on six different experimental subjects, i.e. adult and child mannequins; body parts of a mannequin figure (arm and hand); a human adult subject; and an unknown dead body. The strengths of the cling film plastic wrap are discussed in comparison with the cadaver body bag with respect to cost, weight, duration of the wrap, water and body fluid resistance, visibility, and other advantages. Average savings of more than 5000% are noted for both the adult body wrap and the child body wrap compared to the cadaver body bag. In practical terms, 25 adult bodies or 80 child bodies can be wrapped for the cost of one cadaver body bag. The cling film plastic wrap has proven to have a significant innovation impact on dead body management, particularly by first responders in large-scale disasters. With proper handling of dead bodies, first responders can manage the dead with dignity and respect in an overwhelmed situation and facilitate the subsequent humanitarian victim identification process. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Cling film plastic wrap: An innovation for dead body packaging, preservation and transportation by first responders as a replacement for cadaver body bag in large scale disasters.

    PubMed

    Khoo, Lay See; Lai, Poh Soon; Saidin, Mohd Hilmi; Noor, Zahari; Mahmood, Mohd Shah

    2018-04-01

    Cadaver body bags are the conventional means of containing a human body or human remains, including for storage and transportation of the deceased at any crime scene or disaster scene. During disasters, more often than not, first responders, including the police, will be equipped with cadaver body bags for scene processing of human remains and collection of personal belongings at the disaster site. However, in unanticipated large-scale disasters involving hundreds or thousands of fatalities, cadaver body bag supplies may be scarce. The authors have therefore adopted cling film plastic wrap as an alternative to the cadaver body bag at the disaster site. The plastic wrap was tested on six different experimental subjects, i.e. adult and child mannequins; body parts of a mannequin figure (arm and hand); a human adult subject; and an unknown dead body. The strengths of the cling film plastic wrap are discussed in comparison with the cadaver body bag with respect to cost, weight, duration of the wrap, water and body fluid resistance, visibility, and other advantages. Average savings of more than 5000% are noted for both the adult body wrap and the child body wrap compared to the cadaver body bag. In practical terms, 25 adult bodies or 80 child bodies can be wrapped for the cost of one cadaver body bag. The cling film plastic wrap has proven to have a significant innovation impact on dead body management, particularly by first responders in large-scale disasters. With proper handling of dead bodies, first responders can manage the dead with dignity and respect in an overwhelmed situation and facilitate the subsequent humanitarian victim identification process. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Phase 1 Development Report for the SESSA Toolkit.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knowlton, Robert G.; Melton, Brad J; Anderson, Robert J.

    The Site Exploitation System for Situational Awareness (SESSA) toolkit, developed by Sandia National Laboratories (SNL), is a comprehensive decision support system for crime scene data acquisition and Sensitive Site Exploitation (SSE). SESSA is an outgrowth of another SNL-developed decision support system, the Building Restoration Operations Optimization Model (BROOM), a hardware/software solution for data acquisition, data management, and data analysis. SESSA was designed to meet forensic crime scene needs as defined by the DoD's Military Criminal Investigation Organization (MCIO). SESSA is a very comprehensive toolkit, with a considerable amount of database information managed through a Microsoft SQL (Structured Query Language) database engine, a Geographical Information System (GIS) engine that provides comprehensive mapping capabilities, and an intuitive Graphical User Interface (GUI). An electronic sketch pad module is included. The system can also efficiently generate the necessary forms for forensic crime scene investigations (e.g., evidence submittal, laboratory requests, and scene notes). SESSA allows the user to capture photos on site, and can read and generate barcode labels that limit transcription errors. SESSA runs on PC computers running Windows 7, but is optimized for touch-screen tablet computers running Windows for ease of use at crime scenes and on SSE deployments. A prototype system for 3-dimensional (3D) mapping and measurements was also developed to complement the SESSA software. The mapping system employs a visual/depth sensor that captures data to create 3D visualizations of an interior space and to make distance measurements with centimeter-level accuracy. Output of this 3D Model Builder module provides a virtual 3D "walk-through" of a crime scene. The 3D mapping system is much less expensive and easier to use than competitive systems. This document covers the basic installation and operation of the SESSA toolkit in order to give the user enough information to start using it. SESSA is currently a prototype system, and this documentation covers the initial release of the toolkit. Funding for SESSA was provided by the Department of Defense (DoD), Assistant Secretary of Defense for Research and Engineering (ASD(R&E)) Rapid Fielding (RF) organization, and the project was managed by the Defense Forensic Science Center (DFSC), formerly known as the U.S. Army Criminal Investigation Laboratory (USACIL). ACKNOWLEDGEMENTS: The authors thank Mr. Garold Warner of DFSC, who served as the Project Manager. Individuals who worked on the design, functional attributes, algorithm development, system architecture, and software programming include Robert Knowlton, Brad Melton, Robert Anderson, and Wendy Amai.

  12. Viewing nature scenes positively affects recovery of autonomic function following acute-mental stress.

    PubMed

    Brown, Daniel K; Barton, Jo L; Gladwell, Valerie F

    2013-06-04

    A randomized crossover study explored whether viewing different scenes prior to a stressor altered autonomic function during the recovery from the stressor. The two scenes were (a) nature (composed of trees, grass, fields) or (b) built (composed of man-made, urban scenes lacking natural characteristics) environments. Autonomic function was assessed using noninvasive techniques of heart rate variability; in particular, time domain analyses evaluated parasympathetic activity, using root-mean-square of successive differences (RMSSD). During stress, secondary cardiovascular markers (heart rate, systolic and diastolic blood pressure) showed significant increases from baseline which did not differ between the two viewing conditions. Parasympathetic activity, however, was significantly higher in recovery following the stressor in the viewing scenes of nature condition compared to viewing scenes depicting built environments (RMSSD; 50.0 ± 31.3 vs 34.8 ± 14.8 ms). Thus, viewing nature scenes prior to a stressor alters autonomic activity in the recovery period. The secondary aim was to examine autonomic function during viewing of the two scenes. Standard deviation of R-R intervals (SDRR), as change from baseline, during the first 5 min of viewing nature scenes was greater than during built scenes. Overall, this suggests that nature can elicit improvements in the recovery process following a stressor.
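
    The two heart-rate-variability indices used here are simple functions of the R-R interval series. The sketch below computes RMSSD and SDRR from a toy series of R-R intervals in milliseconds; it illustrates the standard definitions, not the authors' processing pipeline.

```python
# Standard time-domain HRV indices from an R-R interval series (ms).
# `rr_ms` is a toy series; real data would come from an ECG or pulse recording.
import numpy as np

rr_ms = np.array([812, 845, 790, 860, 825, 800, 870, 815], dtype=float)

def rmssd(rr):
    """Root mean square of successive R-R differences (parasympathetic marker)."""
    return np.sqrt(np.mean(np.diff(rr) ** 2))

def sdrr(rr):
    """Standard deviation of R-R intervals (overall variability)."""
    return np.std(rr, ddof=1)

print(f"RMSSD: {rmssd(rr_ms):.1f} ms")
print(f"SDRR:  {sdrr(rr_ms):.1f} ms")
```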

  13. The occipital place area represents the local elements of scenes

    PubMed Central

    Kamps, Frederik S.; Julian, Joshua B.; Kubilius, Jonas; Kanwisher, Nancy; Dilks, Daniel D.

    2016-01-01

    Neuroimaging studies have identified three scene-selective regions in human cortex: parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA). However, precisely what scene information each region represents is not clear, especially for the least studied, more posterior OPA. Here we hypothesized that OPA represents local elements of scenes within two independent, yet complementary scene descriptors: spatial boundary (i.e., the layout of external surfaces) and scene content (e.g., internal objects). If OPA processes the local elements of spatial boundary information, then it should respond to these local elements (e.g., walls) themselves, regardless of their spatial arrangement. Indeed, we found that OPA, but not PPA or RSC, responded similarly to images of intact rooms and these same rooms in which the surfaces were fractured and rearranged, disrupting the spatial boundary. Next, if OPA represents the local elements of scene content information, then it should respond more when more such local elements (e.g., furniture) are present. Indeed, we found that OPA, but not PPA or RSC, responded more to multiple than single pieces of furniture. Taken together, these findings reveal that OPA analyzes local scene elements – both in spatial boundary and scene content representation – while PPA and RSC represent global scene properties. PMID:26931815

  14. Viewing Nature Scenes Positively Affects Recovery of Autonomic Function Following Acute-Mental Stress

    PubMed Central

    2013-01-01

    A randomized crossover study explored whether viewing different scenes prior to a stressor altered autonomic function during the recovery from the stressor. The two scenes were (a) nature (composed of trees, grass, fields) or (b) built (composed of man-made, urban scenes lacking natural characteristics) environments. Autonomic function was assessed using noninvasive techniques of heart rate variability; in particular, time domain analyses evaluated parasympathetic activity, using root-mean-square of successive differences (RMSSD). During stress, secondary cardiovascular markers (heart rate, systolic and diastolic blood pressure) showed significant increases from baseline which did not differ between the two viewing conditions. Parasympathetic activity, however, was significantly higher in recovery following the stressor in the viewing scenes of nature condition compared to viewing scenes depicting built environments (RMSSD; 50.0 ± 31.3 vs 34.8 ± 14.8 ms). Thus, viewing nature scenes prior to a stressor alters autonomic activity in the recovery period. The secondary aim was to examine autonomic function during viewing of the two scenes. Standard deviation of R-R intervals (SDRR), as change from baseline, during the first 5 min of viewing nature scenes was greater than during built scenes. Overall, this suggests that nature can elicit improvements in the recovery process following a stressor. PMID:23590163

  15. Neural Correlates of Fixation Duration during Real-world Scene Viewing: Evidence from Fixation-related (FIRE) fMRI.

    PubMed

    Henderson, John M; Choi, Wonil

    2015-06-01

    During active scene perception, our eyes move from one location to another via saccadic eye movements, with the eyes fixating objects and scene elements for varying amounts of time. Much of the variability in fixation duration is accounted for by attentional, perceptual, and cognitive processes associated with scene analysis and comprehension. For this reason, current theories of active scene viewing attempt to account for the influence of attention and cognition on fixation duration. Yet almost nothing is known about the neurocognitive systems associated with variation in fixation duration during scene viewing. We addressed this topic using fixation-related fMRI, which involves coregistering high-resolution eye tracking and magnetic resonance scanning to conduct event-related fMRI analysis based on characteristics of eye movements. We observed that activation in visual and prefrontal executive control areas was positively correlated with fixation duration, whereas activation in ventral areas associated with scene encoding and medial superior frontal and paracentral regions associated with changing action plans was negatively correlated with fixation duration. The results suggest that fixation duration in scene viewing is controlled by cognitive processes associated with real-time scene analysis interacting with motor planning, consistent with current computational models of active vision for scene perception.

  16. The occipital place area represents the local elements of scenes.

    PubMed

    Kamps, Frederik S; Julian, Joshua B; Kubilius, Jonas; Kanwisher, Nancy; Dilks, Daniel D

    2016-05-15

    Neuroimaging studies have identified three scene-selective regions in human cortex: parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA). However, precisely what scene information each region represents is not clear, especially for the least studied, more posterior OPA. Here we hypothesized that OPA represents local elements of scenes within two independent, yet complementary scene descriptors: spatial boundary (i.e., the layout of external surfaces) and scene content (e.g., internal objects). If OPA processes the local elements of spatial boundary information, then it should respond to these local elements (e.g., walls) themselves, regardless of their spatial arrangement. Indeed, we found that OPA, but not PPA or RSC, responded similarly to images of intact rooms and these same rooms in which the surfaces were fractured and rearranged, disrupting the spatial boundary. Next, if OPA represents the local elements of scene content information, then it should respond more when more such local elements (e.g., furniture) are present. Indeed, we found that OPA, but not PPA or RSC, responded more to multiple than single pieces of furniture. Taken together, these findings reveal that OPA analyzes local scene elements - both in spatial boundary and scene content representation - while PPA and RSC represent global scene properties. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Vision-based Detection of Acoustic Timed Events: a Case Study on Clarinet Note Onsets

    NASA Astrophysics Data System (ADS)

    Bazzica, A.; van Gemert, J. C.; Liem, C. C. S.; Hanjalic, A.

    2017-05-01

    Acoustic events often have a visual counterpart. Knowledge of visual information can aid the understanding of complex auditory scenes, even when only a stereo mixdown is available in the audio domain, e.g., identifying which musicians are playing in large musical ensembles. In this paper, we consider a vision-based approach to note onset detection. As a case study we focus on challenging, real-world clarinetist videos and carry out preliminary experiments on a 3D convolutional neural network based on multiple streams and purposely avoiding temporal pooling. We release an audiovisual dataset with 4.5 hours of clarinetist videos together with cleaned annotations which include about 36,000 onsets and the coordinates for a number of salient points and regions of interest. By performing several training trials on our dataset, we learned that the problem is challenging. We found that the CNN model is highly sensitive to the optimization algorithm and hyper-parameters, and that treating the problem as binary classification may prevent the joint optimization of precision and recall. To encourage further research, we publicly share our dataset, annotations and all models and detail which issues we came across during our preliminary experiments.
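
    The described architecture, a 3D CNN over short video clips that pools over space but deliberately not over time, can be sketched compactly. The toy single-stream model below, written in PyTorch, is for illustration only; the layer sizes, 16-frame clip length, and per-frame onset output are assumptions rather than the authors' configuration.

```python
# Toy 3D-CNN sketch for per-frame onset scoring of a video clip.
# Pooling is spatial only (kernel (1, 2, 2)), so temporal resolution is kept,
# mirroring the stated choice to avoid temporal pooling. Sizes are illustrative.
import torch
import torch.nn as nn

class OnsetNet3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),         # pool space, keep time
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
        )
        self.head = nn.Conv3d(32, 1, kernel_size=1)      # one onset score per frame

    def forward(self, clip):                             # clip: (B, 3, T, H, W)
        x = self.features(clip)
        x = self.head(x)                                 # (B, 1, T, H/4, W/4)
        return x.mean(dim=(3, 4)).squeeze(1)             # (B, T) frame-wise scores

model = OnsetNet3D()
clip = torch.randn(2, 3, 16, 64, 64)                     # 2 clips of 16 frames
print(model(clip).shape)                                 # torch.Size([2, 16])
```

    Scoring every frame with a sigmoid output and a per-frame loss, rather than a single clip-level binary label, is one way to optimize precision and recall jointly, which is the difficulty the authors note with a plain binary-classification framing.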

  18. Deciding what is possible and impossible following hippocampal damage in humans.

    PubMed

    McCormick, Cornelia; Rosenthal, Clive R; Miller, Thomas D; Maguire, Eleanor A

    2017-03-01

    There is currently much debate about whether the precise role of the hippocampus in scene processing is predominantly constructive, perceptual, or mnemonic. Here, we developed a novel experimental paradigm designed to control for general perceptual and mnemonic demands, thus enabling us to specifically vary the requirement for constructive processing. We tested the ability of patients with selective bilateral hippocampal damage and matched control participants to detect either semantic (e.g., an elephant with butterflies for ears) or constructive (e.g., an endless staircase) violations in realistic images of scenes. Thus, scenes could be semantically or constructively 'possible' or 'impossible'. Importantly, general perceptual and memory requirements were similar for both types of scene. We found that the patients performed comparably to control participants when deciding whether scenes were semantically possible or impossible, but were selectively impaired at judging if scenes were constructively possible or impossible. Post-task debriefing indicated that control participants constructed flexible mental representations of the scenes in order to make constructive judgements, whereas the patients were more constrained and typically focused on specific fragments of the scenes, with little indication of having constructed internal scene models. These results suggest that one contribution the hippocampus makes to scene processing is to construct internal representations of spatially coherent scenes, which may be vital for modelling the world during both perception and memory recall. © 2016 The Authors. Hippocampus Published by Wiley Periodicals, Inc. © 2016 The Authors. Hippocampus Published by Wiley Periodicals, Inc.

  19. Scene construction in developmental amnesia: An fMRI study☆

    PubMed Central

    Mullally, Sinéad L.; Vargha-Khadem, Faraneh; Maguire, Eleanor A.

    2014-01-01

    Amnesic patients with bilateral hippocampal damage sustained in adulthood are generally unable to construct scenes in their imagination. By contrast, patients with developmental amnesia (DA), where hippocampal damage was acquired early in life, have preserved performance on this task, although the reason for this sparing is unclear. One possibility is that residual function in remnant hippocampal tissue is sufficient to support basic scene construction in DA. Such a situation was found in the one amnesic patient with adult-acquired hippocampal damage (P01) who could also construct scenes. Alternatively, DA patients’ scene construction might not depend on the hippocampus, perhaps being instead reliant on non-hippocampal regions and mediated by semantic knowledge. To adjudicate between these two possibilities, we examined scene construction during functional MRI (fMRI) in Jon, a well-characterised patient with DA who has previously been shown to have preserved scene construction. We found that when Jon constructed scenes he activated many of the regions known to be associated with imagining scenes in control participants including ventromedial prefrontal cortex, posterior cingulate, retrosplenial and posterior parietal cortices. Critically, however, activity was not increased in Jon's remnant hippocampal tissue. Direct comparisons with a group of control participants and patient P01, confirmed that they activated their right hippocampus more than Jon. Our results show that a type of non-hippocampal dependent scene construction is possible and occurs in DA, perhaps mediated by semantic memory, which does not appear to involve the vivid visualisation of imagined scenes. PMID:24231038

  20. A Psychoevolutionary Approach to Identifying Preferred Nature Scenes With Potential to Provide Restoration From Stress.

    PubMed

    Thake, Carol L; Bambling, Matthew; Edirippulige, Sisira; Marx, Eric

    2017-10-01

    Research supports therapeutic use of nature scenes in healthcare settings, particularly to reduce stress. However, limited literature is available to provide a cohesive guide for selecting scenes that may provide optimal therapeutic effect. This study produced and tested a replicable process for selecting nature scenes with therapeutic potential. Psychoevolutionary theory informed the construction of the Importance for Survival Scale (IFSS), and its usefulness for identifying scenes that people generally prefer to view and that hold potential to reduce stress was tested. Relationships between Importance for Survival (IFS), preference, and restoration were tested. General community participants (N = 20 males, 20 females; mean age = 48 years) Q-sorted sets of landscape photographs (preranked by the researcher in terms of IFS using the IFSS) from most to least preferred, and then completed the Short-Version Revised Restoration Scale in response to viewing a selection of the scenes. Results showed significant positive relationships between IFS and both scene preference (large effect) and restoration potential (medium effect), as well as between scene preference and restoration potential across the levels of IFS (medium effect) and for individual participants and scenes (large effect). IFS was supported as a framework for identifying nature scenes that people will generally prefer to view and that hold potential for restoration from emotional distress; however, greater therapeutic potential may be expected when people can choose which of the scenes they would prefer to view. Evidence for the effectiveness of the IFSS was produced.

  1. Scene construction in developmental amnesia: an fMRI study.

    PubMed

    Mullally, Sinéad L; Vargha-Khadem, Faraneh; Maguire, Eleanor A

    2014-01-01

    Amnesic patients with bilateral hippocampal damage sustained in adulthood are generally unable to construct scenes in their imagination. By contrast, patients with developmental amnesia (DA), where hippocampal damage was acquired early in life, have preserved performance on this task, although the reason for this sparing is unclear. One possibility is that residual function in remnant hippocampal tissue is sufficient to support basic scene construction in DA. Such a situation was found in the one amnesic patient with adult-acquired hippocampal damage (P01) who could also construct scenes. Alternatively, DA patients' scene construction might not depend on the hippocampus, perhaps being instead reliant on non-hippocampal regions and mediated by semantic knowledge. To adjudicate between these two possibilities, we examined scene construction during functional MRI (fMRI) in Jon, a well-characterised patient with DA who has previously been shown to have preserved scene construction. We found that when Jon constructed scenes he activated many of the regions known to be associated with imagining scenes in control participants including ventromedial prefrontal cortex, posterior cingulate, retrosplenial and posterior parietal cortices. Critically, however, activity was not increased in Jon's remnant hippocampal tissue. Direct comparisons with a group of control participants and patient P01, confirmed that they activated their right hippocampus more than Jon. Our results show that a type of non-hippocampal dependent scene construction is possible and occurs in DA, perhaps mediated by semantic memory, which does not appear to involve the vivid visualisation of imagined scenes. © 2013 Published by Elsevier Ltd.

  2. Does object view influence the scene consistency effect?

    PubMed

    Sastyin, Gergo; Niimi, Ryosuke; Yokosawa, Kazuhiko

    2015-04-01

    Traditional research on the scene consistency effect only used clearly recognizable object stimuli to show mutually interactive context effects for both the object and background components on scene perception (Davenport & Potter in Psychological Science, 15, 559-564, 2004). However, in real environments, objects are viewed from multiple viewpoints, including an accidental, hard-to-recognize one. When the observers named target objects in scenes (Experiments 1a and 1b, object recognition task), we replicated the scene consistency effect (i.e., there was higher accuracy for the objects with consistent backgrounds). However, there was a significant interaction effect between consistency and object viewpoint, which indicated that the scene consistency effect was more important for identifying objects in the accidental view condition than in the canonical view condition. Therefore, the object recognition system may rely more on the scene context when the object is difficult to recognize. In Experiment 2, the observers identified the background (background recognition task) while the scene consistency and object views were manipulated. The results showed that object viewpoint had no effect, while the scene consistency effect was observed. More specifically, the canonical and accidental views both equally provided contextual information for scene perception. These findings suggested that the mechanism for conscious recognition of objects could be dissociated from the mechanism for visual analysis of object images that were part of a scene. The "context" that the object images provided may have been derived from its view-invariant, relatively low-level visual features (e.g., color), rather than its semantic information.

  3. The influence of behavioral relevance on the processing of global scene properties: An ERP study.

    PubMed

    Hansen, Natalie E; Noesen, Birken T; Nador, Jeffrey D; Harel, Assaf

    2018-05-02

    Recent work studying the temporal dynamics of visual scene processing (Harel et al., 2016) has found that global scene properties (GSPs) modulate the amplitude of early Event-Related Potentials (ERPs). It is still not clear, however, to what extent the processing of these GSPs is influenced by their behavioral relevance, determined by the goals of the observer. To address this question, we investigated how behavioral relevance, operationalized by the task context, impacts the electrophysiological responses to GSPs. In a set of two experiments we recorded ERPs while participants viewed images of real-world scenes, varying along two GSPs, naturalness (manmade/natural) and spatial expanse (open/closed). In Experiment 1, very little attention to scene content was required as participants viewed the scenes while performing an orthogonal fixation-cross task. In Experiment 2 participants saw the same scenes but now had to actively categorize them, based either on their naturalness or spatial expanse. We found that task context had very little impact on the early ERP responses to the naturalness and spatial expanse of the scenes: P1, N1, and P2 could distinguish between open and closed scenes and between manmade and natural scenes across both experiments. Further, the specific effects of naturalness and spatial expanse on the ERP components were largely unaffected by their relevance for the task. A task effect was found at the N1 and P2 level, but this effect was manifest across all scene dimensions, indicating a general effect rather than an interaction between task context and GSPs. Together, these findings suggest that the extraction of global scene information reflected in the early ERP components is rapid and only weakly influenced by top-down, observer-based goals. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. [Effect of the alimentary factor on the immunobiologic reactivity of children's bodies].

    PubMed

    Voznesenskaia, F M; Panshinskaia, N M

    1976-01-01

    Observations covered 66 healthy six-year-old children of a children's home. The children's actual diet was studied for one year on the basis of tabulated food values and 112 ration apportionments. An imbalanced ratio of proteins, fats, and carbohydrates was noted in the rations actually consumed. Against the background of this actual diet, seasonal variations in salivary lysozyme activity were revealed. The lowest antimicrobial activity of lysozyme was recorded in the winter and spring seasons. The low salivary lysozyme activity in spring may be explained by a deficiency of animal protein in the ration. In winter, the insufficient animal protein content was compounded by features of the daily routine typical of that season. Adding animal protein to the children's actual rations, in the form of eggs and nonfat dry milk, and correcting the proportions of proteins, fats, and carbohydrates in the rations led to a statistically significant rise in salivary lysozyme activity during all the months of observation.

  5. Show that you care.

    PubMed

    Wesolowski, C E

    1990-01-01

    Are you an Ebenezer Scrooge when it comes to reward and recognition for your staff? How many times last week did you phone a member of your staff, or better yet, visit in person to say that you appreciated something they did? When was the last time you wrote a note of thanks? Do you routinely recognize special efforts during staff meetings? When was the last time you awarded a certificate of appreciation to an employee for a job well done? Are employees working "behind the scenes" recognized? Do you have a system in place to recognize groups who work well as a team? If you answered "no" to most of these questions, don't fret. Establishing a reward and recognition program is relatively easy to do. And, it won't break the budget either.

  6. Identification, definition and mapping of terrestrial ecosystems in interior Alaska

    NASA Technical Reports Server (NTRS)

    Anderson, J. H. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. A transect of the Tanana River Flats to Murphy Dome, Alaska was accomplished. The transect includes an experimental forest and information on the range of vegetation-land form types. Multispectral black and white prints of the Eagle Summit Research Area, Alaska, were studied in conjunction with aerial photography and field notes to determine the characteristics of the vegetation. Black and white MSS prints were compared with aerial photographs of the village of Wiseman, Alaska. No positive identifications could be made without reference to aerial photographs or ground truth data. Color coded density slice scenes of the Eagle Summit Research Area were produced from black and white NASA aerial photographs. Infestations of the spruce beetle in the Cook Inlet, Alaska, were studied using aerial photographs.

  7. Anti-aliasing algorithm development

    NASA Astrophysics Data System (ADS)

    Bodrucki, F.; Davis, J.; Becker, J.; Cordell, J.

    2017-10-01

    In this paper, we discuss the testing of image processing algorithms for mitigating aliasing artifacts under pulsed illumination. Previously, two sensors were tested, one with a fixed frame rate and one with an adjustable frame rate; the results showed different degrees of operability when the sensors were subjected to a Quantum Cascade Laser (QCL) pulsed at the frame rate of the fixed-rate sensor. We implemented algorithms to allow the adjustable-frame-rate sensor to detect the presence of aliasing artifacts and, in response, to alter the frame rate of the sensor. The result was that the sensor output showed a varying laser intensity (beat note) as opposed to a fixed signal level. A MIRAGE Infrared Scene Projector (IRSP) was used to explore the efficiency of the new algorithms, introducing secondary elements into the sensor's field of view.
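
    The control loop implied by the abstract, detecting a beat note in the sensor output and then shifting the frame rate, can be sketched as follows. This is a hypothetical illustration of such an algorithm, not the implementation evaluated on the MIRAGE IRSP: it flags aliasing when the frame-to-frame mean-intensity series is dominated by a single low-frequency oscillation, then nudges the frame rate away from the pulse rate.

```python
# Hypothetical sketch of beat-note (aliasing) detection and frame-rate dithering.
# `frame_means` would be the mean intensity of each captured frame.
import numpy as np

def beat_note_strength(frame_means):
    """Fraction of AC power carried by the strongest non-DC frequency bin."""
    x = np.asarray(frame_means, dtype=float)
    x = x - x.mean()
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    ac = spectrum[1:]
    return ac.max() / ac.sum() if ac.sum() > 0 else 0.0

def adjust_frame_rate(frame_rate_hz, frame_means, threshold=0.6, step_hz=3.0):
    """Nudge the frame rate off the laser pulse rate when a beat note dominates."""
    if beat_note_strength(frame_means) > threshold:
        return frame_rate_hz + step_hz   # detune to break the near-synchronous beat
    return frame_rate_hz

# Simulated output: a 2 Hz beat riding on the frame-mean series at 60 fps.
t = np.arange(120) / 60.0
simulated_means = 1000 + 80 * np.sin(2 * np.pi * 2.0 * t)
print(adjust_frame_rate(60.0, simulated_means))  # 63.0, i.e. beat detected
```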

  8. NASA Fundamental Remote Sensing Science Research Program

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The NASA Fundamental Remote Sensing Research Program is described. The program provides a dynamic scientific base which is continually broadened and from which future applied research and development can draw support. In particular, the overall objectives and current studies of the scene radiation and atmospheric effect characterization (SRAEC) project are reviewed. The SRAEC research can be generically structured into four types of activities including observation of phenomena, empirical characterization, analytical modeling, and scene radiation analysis and synthesis. The first three activities are the means by which the goal of scene radiation analysis and synthesis is achieved, and thus are considered priority activities during the early phases of the current project. Scene radiation analysis refers to the extraction of information describing the biogeophysical attributes of the scene from the spectral, spatial, and temporal radiance characteristics of the scene including the atmosphere. Scene radiation synthesis is the generation of realistic spectral, spatial, and temporal radiance values for a scene with a given set of biogeophysical attributes and atmospheric conditions.

  9. Bulk silicon as photonic dynamic infrared scene projector

    NASA Astrophysics Data System (ADS)

    Malyutenko, V. K.; Bogatyrenko, V. V.; Malyutenko, O. Yu.

    2013-04-01

    A Si-based fast (frame rate >1 kHz), large-scale (scene area 100 cm2), broadband (3-12 μm), dynamic contactless infrared (IR) scene projector is demonstrated. An IR movie appears on a scene because of the conversion of a visible scenario projected at a scene kept at elevated temperature. Light down conversion comes as a result of free carrier generation in a bulk Si scene followed by modulation of its thermal emission output in the spectral band of free carrier absorption. The experimental setup, an IR movie, figures of merit, and the process's advantages in comparison to other projector technologies are discussed.

  10. Effects of memory colour on colour constancy for unknown coloured objects

    PubMed Central

    Granzier, Jeroen J M; Gegenfurtner, Karl R

    2012-01-01

    The perception of an object's colour remains constant despite large variations in the chromaticity of the illumination—colour constancy. Hering suggested that memory colours, the typical colours of objects, could help in estimating the illuminant's colour and therefore be an important factor in establishing colour constancy. Here we test whether the presence of objects with diagnostic colours (fruits, vegetables, etc.) within a scene influences colour constancy for unknown coloured objects in the scene. Subjects matched one of four Munsell papers placed in a scene illuminated under either a reddish or a greenish lamp with the Munsell book of colour illuminated by a neutral lamp. The Munsell papers were embedded in four different scenes—one scene containing diagnostically coloured objects, one scene containing incongruent coloured objects, a third scene with geometrical objects of the same colour as the diagnostically coloured objects, and one scene containing non-diagnostically coloured objects (e.g., a yellow coffee mug). All objects were placed against a black background. Colour constancy was on average significantly higher for the scene containing the diagnostically coloured objects compared with the other scenes tested. We conclude that the colours of familiar objects help in obtaining colour constancy for unknown objects. PMID:23145282

  11. Can deja vu result from similarity to a prior experience? Support for the similarity hypothesis of deja vu.

    PubMed

    Cleary, Anne M; Ryals, Anthony J; Nomi, Jason S

    2009-12-01

    The strange feeling of having been somewhere or done something before--even though there is evidence to the contrary--is called déjà vu. Although déjà vu is beginning to receive attention among scientists (Brown, 2003, 2004), few studies have empirically investigated the phenomenon. We investigated the hypothesis that déjà vu is related to feelings of familiarity and that it can result from similarity between a novel scene and that of a scene experienced in one's past. We used a variation of the recognition-without-recall method of studying familiarity (Cleary, 2004) to examine instances in which participants failed to recall a studied scene in response to a configurally similar novel test scene. In such instances, resemblance to a previously viewed scene increased both feelings of familiarity and of déjà vu. Furthermore, in the absence of recall, resemblance of a novel scene to a previously viewed scene increased the probability of a reported déjà vu state for the novel scene, and feelings of familiarity with a novel scene were directly related to feelings of being in a déjà vu state.

  12. Selective scene perception deficits in a case of topographical disorientation.

    PubMed

    Robin, Jessica; Lowe, Matthew X; Pishdadian, Sara; Rivest, Josée; Cant, Jonathan S; Moscovitch, Morris

    2017-07-01

    Topographical disorientation (TD) is a neuropsychological condition characterized by an inability to find one's way, even in familiar environments. One common contributing cause of TD is landmark agnosia, a visual recognition impairment specific to scenes and landmarks. Although many cases of TD with landmark agnosia have been documented, little is known about the perceptual mechanisms which lead to selective deficits in recognizing scenes. In the present study, we test LH, a man who exhibits TD and landmark agnosia, on measures of scene perception that require selectively attending to either the configural or surface properties of a scene. Compared to healthy controls, LH demonstrates perceptual impairments when attending to the configuration of a scene, but not when attending to its surface properties, such as the pattern of the walls or whether the ground is sand or grass. In contrast, when focusing on objects instead of scenes, LH demonstrates intact perception of both geometric and surface properties. This study demonstrates that in a case of TD and landmark agnosia, the perceptual impairments are selective to the layout of scenes, providing insight into the mechanism of landmark agnosia and scene-selective perceptual processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Research and Technology Development for Construction of 3d Video Scenes

    NASA Astrophysics Data System (ADS)

    Khlebnikova, Tatyana A.

    2016-06-01

For the last two decades, surface information in the form of conventional digital and analogue topographic maps has been supplemented by new digital geospatial products, also known as 3D models of real objects. It is shown that there are currently no defined standards for 3D scene construction technologies that could be used by Russian surveying and cartographic enterprises. Requirements for source data, and for their capture and transfer to create 3D scenes, have not yet been defined. Accuracy issues for 3D video scenes used for measuring purposes are rarely addressed in publications. The practicability of developing, researching, and implementing a technology for constructing 3D video scenes is substantiated by the capability of 3D video scenes to expand the field of data analysis for environmental monitoring, urban planning, and managerial decision problems. A technology for construction of 3D video scenes that meets specified metric requirements is offered. Techniques and a methodological background are recommended for this technology, which constructs 3D video scenes based on DTMs created from satellite and aerial survey data. The results of accuracy estimation of 3D video scenes are presented.

  14. The Neural Dynamics of Attentional Selection in Natural Scenes.

    PubMed

    Kaiser, Daniel; Oosterhof, Nikolaas N; Peelen, Marius V

    2016-10-12

The human visual system can only represent a small subset of the many objects present in cluttered scenes at any given time, such that objects compete for representation. Despite these processing limitations, the detection of object categories in cluttered natural scenes is remarkably rapid. How does the brain efficiently select goal-relevant objects from cluttered scenes? In the present study, we used multivariate decoding of magneto-encephalography (MEG) data to track the neural representation of within-scene objects as a function of top-down attentional set. Participants detected categorical targets (cars or people) in natural scenes. The presence of these categories within a scene was decoded from MEG sensor patterns by training linear classifiers on differentiating cars and people in isolation and testing these classifiers on scenes containing one of the two categories. The presence of a specific category in a scene could be reliably decoded from MEG response patterns as early as 160 ms, despite substantial scene clutter and variation in the visual appearance of each category. Strikingly, we find that these early categorical representations fully depend on the match between visual input and top-down attentional set: only objects that matched the current attentional set were processed to the category level within the first 200 ms after scene onset. A sensor-space searchlight analysis revealed that this early attention bias was localized to lateral occipitotemporal cortex, reflecting top-down modulation of visual processing. These results show that attention quickly resolves competition between objects in cluttered natural scenes, allowing for the rapid neural representation of goal-relevant objects. Efficient attentional selection is crucial in many everyday situations. For example, when driving a car, we need to quickly detect obstacles, such as pedestrians crossing the street, while ignoring irrelevant objects. How can humans efficiently perform such tasks, given the multitude of objects contained in real-world scenes? Here we used multivariate decoding of magnetoencephalography data to characterize the neural underpinnings of attentional selection in natural scenes with high temporal precision. We show that brain activity quickly tracks the presence of objects in scenes, but crucially only for those objects that were immediately relevant for the participant. These results provide evidence for fast and efficient attentional selection that mediates the rapid detection of goal-relevant objects in real-world environments. Copyright © 2016 the authors.
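
    The cross-decoding scheme described above (train linear classifiers on isolated cars and people, then test them on cluttered scenes) can be illustrated with a minimal sketch. The array names, shapes, and random data below are hypothetical placeholders for MEG sensor patterns at one time point; this is not the authors' pipeline.

    # Minimal cross-condition decoding sketch at one time point (hypothetical data shapes):
    # X_isolated: trials x sensors, responses to isolated cars/people
    # X_scenes:   trials x sensors, responses to scenes containing a car or a person
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_isolated = rng.normal(size=(200, 306)); y_isolated = rng.integers(0, 2, 200)
    X_scenes = rng.normal(size=(200, 306));   y_scenes = rng.integers(0, 2, 200)

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X_isolated, y_isolated)        # train on isolated objects
    acc = clf.score(X_scenes, y_scenes)    # test on cluttered scenes (cross-decoding)
    print(f"cross-decoding accuracy: {acc:.2f}")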

  15. Effects of Note-Taking and Extended Writing on Expository Text Comprehension: Who Benefits?

    ERIC Educational Resources Information Center

    Hebert, Michael; Graham, Steve; Rigby-Wills, Hope; Ganson, Katie

    2014-01-01

Writing may be an especially useful tool for improving the reading comprehension of lower-performing readers and students with disabilities. However, it is reasonable to expect that students with poor writing skills, in particular, may actually be less adept at using writing to improve their reading skills, and may not be able to do so without…

  16. Vegetable oils for tractors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moroney, M.

    1981-11-14

Preliminary tests by the Agricultural Institute show that tractors can be run on a 50:50 rape oil-diesel mixture or on pure rape oil. In fact, engine power actually increased slightly with the 50:50 blend but decreased fractionally with pure rape oil. Research at North Dakota State University on using sunflower oil as an alternative to diesel fuel is also noted.

  17. Literacy and Education: Understanding the New Literacy Studies in the Classroom

    ERIC Educational Resources Information Center

    Pahl, Kate; Rowsell, Jennifer

    2005-01-01

    In this book the authors note that for too long teachers have been at the mercy of government programmes, which have emphasized the acquisition of literacy as a set of skills. They suggest that an exciting new theory coming out of the New Literacy Studies actually helps students to access literacy skills. They attempt to bridge the gap between…

  18. Lessons from Linnea: "Linnea in Monet's Garden" as a Prototype of Radical Change in Informational Books for Children.

    ERIC Educational Resources Information Center

    Hendrickson, Linnea

    1999-01-01

Identifies and analyzes the elements of the informational book "Linnea in Monet's Garden" that have made it so successful. Notes (1) its use of the device of a fictional journey; (2) a character who is real and engaging; and (3) information that parallels the actual growth of Linnea's interest in Monet. (RS)

  19. Guide to Your Child's Nutrition: Making Peace at the Table and Building Healthy Eating Habits for Life.

    ERIC Educational Resources Information Center

    Dietz, William H., Ed.; Stern, Loraine, Ed.

    Noting that the real challenge for parents is not being aware of what to feed their children, but rather getting children to actually eat those foods, this guide provides advice for parents of infants through adolescents regarding children's dietary needs while recognizing the role of children's emotions, tastes, and preferences. Following the…

  20. The Way We Never Were: American Families and the Nostalgia Trap.

    ERIC Educational Resources Information Center

    Coontz, Stephanie

The pessimists' view is that the U.S. family is collapsing; on the other hand, optimists view it as merely diversifying. Too often, both camps begin with an ahistorical, static notion of what the family was like before the contemporary period. Noting that the actual complexity of our history gets buried under the weight of an idealized image,…

  1. When Disaster Strikes Is Logistics and Contracting Support Ready?

    DTIC Science & Technology

    2011-09-27

improve response in the event of an actual crisis. The Defense Contingency Contracting Handbook (Christianson, A., Coombs, J., Harbin, S., Ingram...). Approved for public release; distribution unlimited. Recent crisis responses, including the... and management of the DoD's logistics and contracting support for contingency, expeditionary, and crisis response, and provide specific recommendations

  2. Changes in Occupational Employment in the Food and Kindred Products Industry, 1977-1980. Technical Note No. 1.

    ERIC Educational Resources Information Center

    Lewis, Gary

    The extent to which occupational staffing patterns change over time was examined in a study focusing on the Food and Kindred Products industry--Standard Industrial Classification (SIC) 20. Data were taken from the 1977 and 1980 Occupational Employment Statistics program coordinated by the United States Department of Labor Statistics. Actual 1980…

  3. Teaching Business French through Case Studies: Presentation of a Marketing Case.

    ERIC Educational Resources Information Center

    Federico, Salvatore; Moore, Catherine

    The use of case studies as a means for teaching business French is discussed. The approach is advocated because of the realism of case studies, which are based on actual occurrences. Characteristics of a good case are noted: it tells a story, focuses on interest-arousing issues, is set in the past 10 years, permits empathy with the main…

  4. Note from North America: The Road Not Taken, the Data Not Recorded

    ERIC Educational Resources Information Center

    Alper, Paul

    2014-01-01

    In 1916 Robert Frost published his famous poem, "The Road Not Taken," in which he muses about what might have been had he chosen a different path, made a different choice. While counterfactual arguments in general can often lead to vacuous nowheres, frequently in statistics the data that are not presented actually exist, in a sense,…

  5. The fitness to practise hearing. 1. Where's your evidence? Keeping good records.

    PubMed

    Solon, Mark

    2011-04-01

To some midwives, record keeping may be a chore. However, getting your records right will provide you with the best possible defence if you have the misfortune to be called to a 'Fitness to Practise Hearing'. In this article I shall consider what makes good records and, next month, examine the hearing itself. It is important to note that a hearing may take place several years after the event in question, and you must be able to verify what actually happened and why. Considering that a midwife delivers hundreds of babies each year, it would be very difficult to prove your fitness to practise from memory alone; in cross-examination you will need some concrete evidence of what actually happened. You should keep your notes for at least six years, as breach of contract cases can be brought for up to six years after the breach is discovered (whereas negligence cases need to be brought within three years of the negligent act). In the case of a minor, a case can be brought up to three years after the child reaches adulthood and may therefore be up to 21 years after the event.

  6. Measurements of scene spectral radiance variability

    NASA Astrophysics Data System (ADS)

    Seeley, Juliette A.; Wack, Edward C.; Mooney, Daniel L.; Muldoon, Michael; Shey, Shen; Upham, Carolyn A.; Harvey, John M.; Czerwinski, Richard N.; Jordan, Michael P.; Vallières, Alexandre; Chamberland, Martin

    2006-05-01

Detection performance of LWIR passive standoff chemical agent sensors is strongly influenced by various scene parameters, such as atmospheric conditions, temperature contrast, concentration-path length product (CL), agent absorption coefficient, and scene spectral variability. Although temperature contrast, CL, and agent absorption coefficient affect the detected signal in a predictable manner, fluctuations in background scene spectral radiance have less intuitive consequences. The spectral nature of the scene is not problematic in and of itself; instead it is spatial and temporal fluctuations in the scene spectral radiance that cannot be entirely corrected for with data processing. In addition, the consequence of such variability is a function of the spectral signature of the agent that is being detected and is thus different for each agent. To bracket the performance of background-limited (low sensor NEDN), passive standoff chemical sensors in the range of relevant conditions, assessment of real scene data is necessary [1]. Currently, such data is not widely available [2]. To begin to span the range of relevant scene conditions, we have acquired high fidelity scene spectral radiance measurements with a Telops FTIR imaging spectrometer [3]. We have acquired data in a variety of indoor and outdoor locations at different times of day and year. Some locations include indoor office environments, airports, urban and suburban scenes, waterways, and forest. We report agent-dependent clutter measurements for three of these backgrounds.

  7. Cross-cultural differences in item and background memory: examining the influence of emotional intensity and scene congruency.

    PubMed

    Mickley Steinmetz, Katherine R; Sturkie, Charlee M; Rochester, Nina M; Liu, Xiaodong; Gutchess, Angela H

    2018-07-01

    After viewing a scene, individuals differ in what they prioritise and remember. Culture may be one factor that influences scene memory, as Westerners have been shown to be more item-focused than Easterners (see Masuda, T., & Nisbett, R. E. (2001). Attending holistically versus analytically: Comparing the context sensitivity of Japanese and Americans. Journal of Personality and Social Psychology, 81, 922-934). However, cultures may differ in their sensitivity to scene incongruences and emotion processing, which may account for cross-cultural differences in scene memory. The current study uses hierarchical linear modeling (HLM) to examine scene memory while controlling for scene congruency and the perceived emotional intensity of the images. American and East Asian participants encoded pictures that included a positive, negative, or neutral item placed on a neutral background. After a 20-min delay, participants were shown the item and background separately along with similar and new items and backgrounds to assess memory specificity. Results indicated that even when congruency and emotional intensity were controlled, there was evidence that Americans had better item memory than East Asians. Incongruent scenes were better remembered than congruent scenes. However, this effect did not differ by culture. This suggests that Americans' item focus may result in memory changes that are robust despite variations in scene congruency and perceived emotion.
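
    As a rough illustration of the kind of hierarchical linear model described above (memory scores modeled as a function of culture while controlling for congruency and rated emotional intensity, with participants as a grouping factor), the sketch below uses statsmodels. All column names and the data frame are invented for illustration; this is not the study's actual model specification.

    # Minimal mixed-effects sketch (hypothetical column names), not the study's actual model.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 400
    df = pd.DataFrame({
        "item_memory": rng.normal(size=n),                     # memory specificity score
        "culture": rng.choice(["American", "EastAsian"], n),   # between-subjects factor
        "congruency": rng.choice(["congruent", "incongruent"], n),
        "intensity": rng.normal(size=n),                        # perceived emotional intensity
        "subject": rng.integers(0, 40, n),                      # grouping factor
    })
    model = smf.mixedlm("item_memory ~ culture + congruency + intensity",
                        df, groups=df["subject"]).fit()
    print(model.summary())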

  8. A Comparison of the Visual Attention Patterns of People With Aphasia and Adults Without Neurological Conditions for Camera-Engaged and Task-Engaged Visual Scenes.

    PubMed

    Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria

    2016-04-01

The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological conditions. Participants viewed camera-engaged (i.e., human figure facing camera) and task-engaged (i.e., human figure looking at and touching an object) visual scenes. Participants with aphasia responded to engagement cues by focusing on objects of interest more for task-engaged scenes than camera-engaged scenes; however, the difference in their responses to these scenes was not as pronounced as that observed in adults without neurological conditions. In addition, people with aphasia spent more time looking at background areas of interest and less time looking at person areas of interest for camera-engaged scenes than did control participants. Results indicate people with aphasia visually attend to scenes differently than adults without neurological conditions. As a consequence, augmentative and alternative communication (AAC) facilitators may have different visual attention behaviors than the people with aphasia for whom they are constructing or selecting visual scenes. Further examination of the visual attention of people with aphasia may help optimize visual scene selection.

  9. Insects and associated arthropods analyzed during medicolegal death investigations in Harris County, Texas, USA: January 2013- April 2016

    PubMed Central

    2017-01-01

The application of insect and arthropod information to medicolegal death investigations is one of the more exacting applications of entomology. Historically limited to homicide investigations, the integration of full-time forensic entomology services to the medical examiner’s office in Harris County has opened up the opportunity to apply entomology to a wide variety of manner of death classifications and types of scenes to make observations on a number of different geographical and species-level trends in Harris County, Texas, USA. In this study, a retrospective analysis was made of 203 forensic entomology cases analyzed during the course of medicolegal death investigations performed by the Harris County Institute of Forensic Sciences in Houston, TX, USA from January 2013 through April 2016. These cases included all manner of death classifications, stages of decomposition and a variety of different scene types that were classified into decedents transported from the hospital (typically associated with myiasis or sting allergy; 3.0%), outdoor scenes (32.0%) or indoor scenes (65.0%). Ambient scene air temperature at the time of scene investigation was the only significantly different factor observed between indoor and outdoor scenes, with average indoor scene temperature being slightly cooler (25.2°C) than that observed outdoors (28.0°C). Relative humidity was not found to be significantly different between scene types. Most of the indoor scenes were classified as natural (43.3%) whereas most of the outdoor scenes were classified as homicides (12.3%). All other manner of death classifications came from both indoor and outdoor scenes. Several species were found to be significantly associated with indoor scenes as indicated by a binomial test, including Blaesoxipha plinthopyga (Wiedemann) (Diptera: Sarcophagidae), all Sarcophagidae (including B. plinthopyga), Megaselia scalaris Loew (Diptera: Phoridae), Synthesiomyia nudiseta Wulp (Diptera: Muscidae) and Lucilia cuprina (Wiedemann) (Diptera: Calliphoridae). The only species that was a significant indicator of an outdoor scene was Lucilia eximia (Wiedemann) (Diptera: Calliphoridae). All other insect species that were collected in five or more cases were collected from both indoor and outdoor scenes. A species list with month of collection and basic scene characteristics with the length of the estimated time of colonization is also presented. The data presented here provide valuable casework-related species data for Harris County, TX and nearby areas on the Gulf Coast that can be used to compare to other climate regions with other species assemblages and to assist in identifying new species introductions to the area. This study also highlights the importance of potential sources of uncertainty in preparation and interpretation of forensic entomology reports from different scene types. PMID:28604832

  10. Insects and associated arthropods analyzed during medicolegal death investigations in Harris County, Texas, USA: January 2013- April 2016.

    PubMed

    Sanford, Michelle R

    2017-01-01

The application of insect and arthropod information to medicolegal death investigations is one of the more exacting applications of entomology. Historically limited to homicide investigations, the integration of full-time forensic entomology services to the medical examiner's office in Harris County has opened up the opportunity to apply entomology to a wide variety of manner of death classifications and types of scenes to make observations on a number of different geographical and species-level trends in Harris County, Texas, USA. In this study, a retrospective analysis was made of 203 forensic entomology cases analyzed during the course of medicolegal death investigations performed by the Harris County Institute of Forensic Sciences in Houston, TX, USA from January 2013 through April 2016. These cases included all manner of death classifications, stages of decomposition and a variety of different scene types that were classified into decedents transported from the hospital (typically associated with myiasis or sting allergy; 3.0%), outdoor scenes (32.0%) or indoor scenes (65.0%). Ambient scene air temperature at the time of scene investigation was the only significantly different factor observed between indoor and outdoor scenes, with average indoor scene temperature being slightly cooler (25.2°C) than that observed outdoors (28.0°C). Relative humidity was not found to be significantly different between scene types. Most of the indoor scenes were classified as natural (43.3%) whereas most of the outdoor scenes were classified as homicides (12.3%). All other manner of death classifications came from both indoor and outdoor scenes. Several species were found to be significantly associated with indoor scenes as indicated by a binomial test, including Blaesoxipha plinthopyga (Wiedemann) (Diptera: Sarcophagidae), all Sarcophagidae (including B. plinthopyga), Megaselia scalaris Loew (Diptera: Phoridae), Synthesiomyia nudiseta Wulp (Diptera: Muscidae) and Lucilia cuprina (Wiedemann) (Diptera: Calliphoridae). The only species that was a significant indicator of an outdoor scene was Lucilia eximia (Wiedemann) (Diptera: Calliphoridae). All other insect species that were collected in five or more cases were collected from both indoor and outdoor scenes. A species list with month of collection and basic scene characteristics with the length of the estimated time of colonization is also presented. The data presented here provide valuable casework-related species data for Harris County, TX and nearby areas on the Gulf Coast that can be used to compare to other climate regions with other species assemblages and to assist in identifying new species introductions to the area. This study also highlights the importance of potential sources of uncertainty in preparation and interpretation of forensic entomology reports from different scene types.
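
    The binomial test mentioned above asks whether a species turns up at indoor scenes more often than the overall indoor rate (65% of cases, per the abstract) would predict. A minimal sketch follows; the case counts are invented for illustration and do not come from the Harris County data.

    # Illustrative binomial test (invented counts, not the Harris County data):
    # does a species occur indoors more often than the 0.65 baseline indoor rate?
    from scipy.stats import binomtest

    indoor_cases = 18   # hypothetical: cases where the species was found at an indoor scene
    total_cases = 20    # hypothetical: all cases in which the species was collected
    result = binomtest(indoor_cases, total_cases, p=0.65, alternative="greater")
    print(f"p-value = {result.pvalue:.4f}")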

  11. Anticipation in Real-World Scenes: The Role of Visual Context and Visual Memory.

    PubMed

    Coco, Moreno I; Keller, Frank; Malcolm, George L

    2016-11-01

    The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object-based visual indices. Copyright © 2015 Cognitive Science Society, Inc.

  12. Richat Structure, Mauritania, Perspective View, Landsat Image over SRTM Elevation

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This prominent circular feature, known as the Richat Structure, in the Sahara desert of Mauritania is often noted by astronauts because it forms a conspicuous 50-kilometer-wide (30-mile-wide) bull's-eye on the otherwise rather featureless expanse of the desert. Initially mistaken for a possible impact crater, it is now known to be an eroded circular anticline (structural dome) of layered sedimentary rocks.

    Extensive sand dunes occur in this region and the interaction of bedrock topography, wind, and moving sand is evident in this scene. Note especially how the dune field ends abruptly short of the cliffs at the far right as wind from the northeast (lower right) apparently funnels around the cliff point, sweeping clean areas near the base of the cliff. Note also the small isolated peak within the dune field. That peak captures some sand on its windward side, but mostly deflects the wind and sand around its sides, creating a sand-barren streak that continues far downwind.

    This view was generated from a Landsat satellite image draped over an elevation model produced by the Shuttle Radar Topography Mission (SRTM). The view uses a 6-times vertical exaggeration to greatly enhance topographic expression. For vertical scale, note that the height of the mesa ridge in the back center of the view is about 285 meters (about 935 feet) tall. Colors of the scene were enhanced by use of a combination of visible and infrared bands, which helps to differentiate bedrock (browns), sand (yellow, some white), minor vegetation in drainage channels (green), and salty sediments (bluish whites). Some shading of the elevation model was included to further highlight the topographic features.

    Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Geospatial-Intelligence Agency (NGA) of the U.S. Department of Defense (DoD), and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.

    View Size: 68 kilometers (42 miles) wide by 112 kilometers (69 miles) distance
    Location: 21.2 degrees North latitude, 11.7 degrees West longitude
    Orientation: View toward west-northwest
    Image Data: Landsat Bands 1, 4, 7 in B.G.R.
    Date Acquired: February 2000 (SRTM), January 13, 1987 (Landsat)

  13. Guidance of visual attention by semantic information in real-world scenes

    PubMed Central

    Wu, Chia-Chien; Wick, Farahnaz Ahmed; Pomplun, Marc

    2014-01-01

Recent research on attentional guidance in real-world scenes has focused on object recognition within the context of a scene. This approach has been valuable for determining some factors that drive the allocation of visual attention and determine visual selection. This article provides a review of experimental work on how different components of context, especially semantic information, affect attentional deployment. We review work from the areas of object recognition, scene perception, and visual search, highlighting recent studies examining semantic structure in real-world scenes. A better understanding of how humans parse scene representations will not only improve current models of visual attention but also advance next-generation computer vision systems and human-computer interfaces. PMID:24567724

  14. Crime scene units: a look to the future

    NASA Astrophysics Data System (ADS)

    Baldwin, Hayden B.

    1999-02-01

The scientific examination of physical evidence is well recognized as a critical element in conducting successful criminal investigations and prosecutions. The forensic science field is an ever-changing discipline. With the arrival of DNA analysis, new processing techniques for latent prints, portable lasers, and electrostatic dust print lifters, the training of evidence technicians has become more important than ever. These scientific and technological breakthroughs have increased the possibility of collecting and analyzing physical evidence that was never possible before. The problem arises with the collection of physical evidence from the crime scene, not from the analysis of the evidence. The need for specialized units in the processing of all crime scenes is imperative. These specialized units, called crime scene units, should be trained and equipped to handle all forms of crime scenes. The crime scene units would have the capability to professionally evaluate and collect pertinent physical evidence from crime scenes.

  15. Physics Based Modeling and Rendering of Vegetation in the Thermal Infrared

    NASA Technical Reports Server (NTRS)

    Smith, J. A.; Ballard, J. R., Jr.

    1999-01-01

We outline a procedure for rendering physically based thermal infrared images of simple vegetation scenes. Our approach incorporates the biophysical processes that affect the temperature distribution of the elements within a scene. Computer graphics plays a key role in two respects: first, in computing the distribution of shaded and sunlit facets in the scene and, second, in the final image rendering once the temperatures of all the elements in the scene have been computed. We illustrate our approach for a simple corn scene in which the three-dimensional geometry is constructed from measured morphological attributes of the row crop. Statistical methods are used to construct a representation of the scene in agreement with the measured characteristics. Our results are quite good. The rendered images exhibit realistic behavior in directional properties as a function of view and sun angle. The root-mean-square error in measured versus predicted brightness temperatures for the scene was 2.1 deg C.

  16. Selective attention during scene perception: evidence from negative priming.

    PubMed

    Gordon, Robert D

    2006-10-01

    In two experiments, we examined the role of semantic scene content in guiding attention during scene viewing. In each experiment, performance on a lexical decision task was measured following the brief presentation of a scene. The lexical decision stimulus named an object that was either present or not present in the scene. The results of Experiment 1 revealed no priming from inconsistent objects (whose identities conflicted with the scene in which they appeared), but negative priming from consistent objects. The results of Experiment 2 indicated that negative priming from consistent objects occurs only when inconsistent objects are present in the scenes. Together, the results suggest that observers are likely to attend to inconsistent objects, and that representations of consistent objects are suppressed in the presence of an inconsistent object. Furthermore, the data suggest that inconsistent objects draw attention because they are relatively difficult to identify in an inappropriate context.

  17. Figure-Ground Organization in Visual Cortex for Natural Scenes

    PubMed Central

    2016-01-01

    Abstract Figure-ground organization and border-ownership assignment are essential for understanding natural scenes. It has been shown that many neurons in the macaque visual cortex signal border-ownership in displays of simple geometric shapes such as squares, but how well these neurons resolve border-ownership in natural scenes is not known. We studied area V2 neurons in behaving macaques with static images of complex natural scenes. We found that about half of the neurons were border-ownership selective for contours in natural scenes, and this selectivity originated from the image context. The border-ownership signals emerged within 70 ms after stimulus onset, only ∼30 ms after response onset. A substantial fraction of neurons were highly consistent across scenes. Thus, the cortical mechanisms of figure-ground organization are fast and efficient even in images of complex natural scenes. Understanding how the brain performs this task so fast remains a challenge. PMID:28058269

  18. Short report: the effect of expertise in hiking on recognition memory for mountain scenes.

    PubMed

    Kawamura, Satoru; Suzuki, Sae; Morikawa, Kazunori

    2007-10-01

    The nature of an expert memory advantage that does not depend on stimulus structure or chunking was examined, using more ecologically valid stimuli in the context of a more natural activity than previously studied domains. Do expert hikers and novice hikers see and remember mountain scenes differently? In the present experiment, 18 novice hikers and 17 expert hikers were presented with 60 photographs of scenes from hiking trails. These scenes differed in the degree of functional aspects that implied some action possibilities or dangers. The recognition test revealed that the memory performance of experts was significantly superior to that of novices for scenes with highly functional aspects. The memory performance for the scenes with few functional aspects did not differ between novices and experts. These results suggest that experts pay more attention to, and thus remember better, scenes with functional meanings than do novices.

  19. Scene text recognition in mobile applications by character descriptor and structure configuration.

    PubMed

    Yi, Chucai; Tian, Yingli

    2014-07-01

Text characters and strings in natural scenes can provide valuable information for many applications. Extracting text directly from natural scene images or videos is a challenging task because of diverse text patterns and varied background interference. This paper proposes a method of scene text recognition from detected text regions. In text detection, our previously proposed algorithms are applied to obtain text regions from a scene image. First, we design a discriminative character descriptor by combining several state-of-the-art feature detectors and descriptors. Second, we model the character structure of each character class by designing stroke configuration maps. Our algorithm design is compatible with the application of scene text extraction on smart mobile devices. An Android-based demo system is developed to show the effectiveness of our proposed method on scene text information extraction from nearby objects. The demo system also provides us some insight into algorithm design and performance improvement of scene text extraction. The evaluation results on benchmark data sets demonstrate that our proposed scheme of text recognition is comparable with the best existing methods.
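
    As a loose stand-in for the recipe described above (compute a discriminative descriptor for each character patch, then classify it), the sketch below uses generic HOG features and a linear SVM. It is only an illustration of the overall pattern, not the character descriptor or the stroke configuration maps proposed in the paper; the patches and labels are random placeholders.

    # Generic character-classification sketch (HOG + linear SVM); not the paper's descriptor.
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(2)
    patches = rng.random((300, 32, 32))    # hypothetical 32x32 character patches
    labels = rng.integers(0, 10, 300)      # hypothetical character-class labels

    feats = np.array([hog(p, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                      for p in patches])   # one descriptor per patch
    clf = LinearSVC().fit(feats, labels)   # train the character classifier
    print("training accuracy:", clf.score(feats, labels))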

  20. Effects of aging on neural connectivity underlying selective memory for emotional scenes

    PubMed Central

    Waring, Jill D.; Addis, Donna Rose; Kensinger, Elizabeth A.

    2012-01-01

    Older adults show age-related reductions in memory for neutral items within complex visual scenes, but just like young adults, older adults exhibit a memory advantage for emotional items within scenes compared with the background scene information. The present study examined young and older adults’ encoding-stage effective connectivity for selective memory of emotional items versus memory for both the emotional item and its background. In a functional magnetic resonance imaging (fMRI) study, participants viewed scenes containing either positive or negative items within neutral backgrounds. Outside the scanner, participants completed a memory test for items and backgrounds. Irrespective of scene content being emotionally positive or negative, older adults had stronger positive connections among frontal regions and from frontal regions to medial temporal lobe structures than did young adults, especially when items and backgrounds were subsequently remembered. These results suggest there are differences between young and older adults’ connectivity accompanying the encoding of emotional scenes. Older adults may require more frontal connectivity to encode all elements of a scene rather than just encoding the emotional item. PMID:22542836

  1. Effects of aging on neural connectivity underlying selective memory for emotional scenes.

    PubMed

    Waring, Jill D; Addis, Donna Rose; Kensinger, Elizabeth A

    2013-02-01

    Older adults show age-related reductions in memory for neutral items within complex visual scenes, but just like young adults, older adults exhibit a memory advantage for emotional items within scenes compared with the background scene information. The present study examined young and older adults' encoding-stage effective connectivity for selective memory of emotional items versus memory for both the emotional item and its background. In a functional magnetic resonance imaging (fMRI) study, participants viewed scenes containing either positive or negative items within neutral backgrounds. Outside the scanner, participants completed a memory test for items and backgrounds. Irrespective of scene content being emotionally positive or negative, older adults had stronger positive connections among frontal regions and from frontal regions to medial temporal lobe structures than did young adults, especially when items and backgrounds were subsequently remembered. These results suggest there are differences between young and older adults' connectivity accompanying the encoding of emotional scenes. Older adults may require more frontal connectivity to encode all elements of a scene rather than just encoding the emotional item. Published by Elsevier Inc.

  2. Scene-Aware Adaptive Updating for Visual Tracking via Correlation Filters

    PubMed Central

    Zhang, Sirou; Qiao, Xiaoya

    2017-01-01

In recent years, visual object tracking has been widely used in military guidance, human-computer interaction, road traffic, scene monitoring and many other fields. Tracking algorithms based on correlation filters have shown good performance in terms of accuracy and tracking speed. However, their performance is not satisfactory in scenes with scale variation, deformation, and occlusion. In this paper, we propose a scene-aware adaptive updating mechanism for visual tracking via a kernel correlation filter (KCF). First, a low-complexity scale estimation method is presented, in which the weighted responses at five candidate scales are used to determine the final target scale. Then, an adaptive updating mechanism is presented based on scene classification. We classify video scenes into four categories by video content analysis. According to the target scene, we exploit the adaptive updating mechanism to update the kernel correlation filter to improve the robustness of the tracker, especially in scenes with scale variation, deformation, and occlusion. We evaluate our tracker on the CVPR2013 benchmark. The results obtained with the proposed algorithm improve on those of the KCF tracker by 33.3%, 15%, 6%, 21.9% and 19.8% on scenes with scale variation, partial or long-time large-area occlusion, deformation, fast motion and out-of-view targets, respectively. PMID:29140311
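
    The scale-estimation idea summarized above, evaluating the correlation response at five candidate scales and choosing the scale with the best weighted response, might look roughly like the following. The response function and the weights are placeholders rather than the authors' implementation.

    # Rough sketch of weighted multi-scale selection for a correlation-filter tracker.
    # correlation_response() is a placeholder for the KCF response at a given patch scale.
    import numpy as np

    def correlation_response(scale):
        # stand-in: in a real tracker this would be the peak of the filter response
        # computed on the image patch resampled by `scale`
        return np.exp(-((scale - 1.05) ** 2) / 0.01)

    scales = np.array([0.90, 0.95, 1.00, 1.05, 1.10])   # five candidate scales
    weights = np.array([0.8, 0.9, 1.0, 0.9, 0.8])        # penalize large scale jumps
    scores = weights * np.array([correlation_response(s) for s in scales])
    best_scale = scales[np.argmax(scores)]               # final target scale
    print("estimated scale factor:", best_scale)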

  3. The Identification and Modeling of Visual Cue Usage in Manual Control Task Experiments

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara Townsend; Trejo, Leonard J. (Technical Monitor)

    1999-01-01

    Many fields of endeavor require humans to conduct manual control tasks while viewing a perspective scene. Manual control refers to tasks in which continuous, or nearly continuous, control adjustments are required. Examples include flying an aircraft, driving a car, and riding a bicycle. Perspective scenes can arise through natural viewing of the world, simulation of a scene (as in flight simulators), or through imaging devices (such as the cameras on an unmanned aerospace vehicle). Designers frequently have some degree of control over the content and characteristics of a perspective scene; airport designers can choose runway markings, vehicle designers can influence the size and shape of windows, as well as the location of the pilot, and simulator database designers can choose scene complexity and content. Little theoretical framework exists to help designers determine the answers to questions related to perspective scene content. An empirical approach is most commonly used to determine optimum perspective scene configurations. The goal of the research effort described in this dissertation has been to provide a tool for modeling the characteristics of human operators conducting manual control tasks with perspective-scene viewing. This is done for the purpose of providing an algorithmic, as opposed to empirical, method for analyzing the effects of changing perspective scene content for closed-loop manual control tasks.

  4. High-fidelity real-time maritime scene rendering

    NASA Astrophysics Data System (ADS)

    Shyu, Hawjye; Taczak, Thomas M.; Cox, Kevin; Gover, Robert; Maraviglia, Carlos; Cahill, Colin

    2011-06-01

    The ability to simulate authentic engagements using real-world hardware is an increasingly important tool. For rendering maritime environments, scene generators must be capable of rendering radiometrically accurate scenes with correct temporal and spatial characteristics. When the simulation is used as input to real-world hardware or human observers, the scene generator must operate in real-time. This paper introduces a novel, real-time scene generation capability for rendering radiometrically accurate scenes of backgrounds and targets in maritime environments. The new model is an optimized and parallelized version of the US Navy CRUISE_Missiles rendering engine. It was designed to accept environmental descriptions and engagement geometry data from external sources, render a scene, transform the radiometric scene using the electro-optical response functions of a sensor under test, and output the resulting signal to real-world hardware. This paper reviews components of the scene rendering algorithm, and details the modifications required to run this code in real-time. A description of the simulation architecture and interfaces to external hardware and models is presented. Performance assessments of the frame rate and radiometric accuracy of the new code are summarized. This work was completed in FY10 under Office of Secretary of Defense (OSD) Central Test and Evaluation Investment Program (CTEIP) funding and will undergo a validation process in FY11.
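
    One step described above, transforming a rendered radiometric scene through a sensor's electro-optical response before passing the signal to hardware, can be sketched as a band integration of spectral radiance against a response curve. The sizes, band limits, and response shape below are illustrative assumptions, not the CRUISE_Missiles implementation.

    # Sketch: convert a rendered spectral radiance cube into an in-band sensor signal
    # by integrating radiance against an assumed sensor spectral response.
    import numpy as np

    wavelengths_um = np.linspace(8.0, 12.0, 41)                # assumed LWIR band, microns
    radiance = np.random.rand(64, 64, wavelengths_um.size)     # hypothetical scene cube
    response = np.exp(-((wavelengths_um - 10.0) ** 2) / 2.0)   # assumed sensor response

    # Band-integrated signal per pixel (rectangle-rule integration over wavelength)
    step_um = wavelengths_um[1] - wavelengths_um[0]
    signal = (radiance * response).sum(axis=-1) * step_um
    print(signal.shape)  # (64, 64) image to feed to hardware-in-the-loop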

  5. The new generation of OpenGL support in ROOT

    NASA Astrophysics Data System (ADS)

    Tadel, M.

    2008-07-01

    OpenGL has been promoted to become the main 3D rendering engine of the ROOT framework. This required a major re-modularization of OpenGL support on all levels, from basic window-system specific interface to medium-level object-representation and top-level scene management. This new architecture allows seamless integration of external scene-graph libraries into the ROOT OpenGL viewer as well as inclusion of ROOT 3D scenes into external GUI and OpenGL-based 3D-rendering frameworks. Scene representation was removed from inside of the viewer, allowing scene-data to be shared among several viewers and providing for a natural implementation of multi-view canvas layouts. The object-graph traversal infrastructure allows free mixing of 3D and 2D-pad graphics and makes implementation of ROOT canvas in pure OpenGL possible. Scene-elements representing ROOT objects trigger automatic instantiation of user-provided rendering-objects based on the dictionary information and class-naming convention. Additionally, a finer, per-object control over scene-updates is available to the user, allowing overhead-free maintenance of dynamic 3D scenes and creation of complex real-time animations. User-input handling was modularized as well, making it easy to support application-specific scene navigation, selection handling and tool management.

  6. [Perception of objects and scenes in age-related macular degeneration].

    PubMed

    Tran, T H C; Boucart, M

    2012-01-01

    Vision related quality of life questionnaires suggest that patients with AMD exhibit difficulties in finding objects and in mobility. In the natural environment, objects seldom appear in isolation. They appear in a spatial context which may obscure them in part or place obstacles in the patient's path. Furthermore, the luminance of a natural scene varies as a function of the hour of the day and the light source, which can alter perception. This study aims to evaluate recognition of objects and natural scenes by patients with AMD, by using photographs of such scenes. Studies demonstrate that AMD patients are able to categorize scenes as nature scenes or urban scenes and to discriminate indoor from outdoor scenes with a high degree of precision. They detect objects better in isolation, in color, or against a white background than in their natural contexts. These patients encounter more difficulties than normally sighted individuals in detecting objects in a low-contrast, black-and-white scene. These results may have implications for rehabilitation, for layout of texts and magazines for the reading-impaired and for the rearrangement of the spatial environment of older AMD patients in order to facilitate mobility, finding objects and reducing the risk of falls. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  7. Visual search for arbitrary objects in real scenes

    PubMed Central

    Alvarez, George A.; Rosenholtz, Ruth; Kuzmova, Yoana I.; Sherman, Ashley M.

    2011-01-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target. PMID:21671156

  8. Illumination discrimination in real and simulated scenes

    PubMed Central

    Radonjić, Ana; Pearce, Bradley; Aston, Stacey; Krieger, Avery; Dubin, Hilary; Cottaris, Nicolas P.; Brainard, David H.; Hurlbert, Anya C.

    2016-01-01

    Characterizing humans' ability to discriminate changes in illumination provides information about the visual system's representation of the distal stimulus. We have previously shown that humans are able to discriminate illumination changes and that sensitivity to such changes depends on their chromatic direction. Probing illumination discrimination further would be facilitated by the use of computer-graphics simulations, which would, in practice, enable a wider range of stimulus manipulations. There is no a priori guarantee, however, that results obtained with simulated scenes generalize to real illuminated scenes. To investigate this question, we measured illumination discrimination in real and simulated scenes that were well-matched in mean chromaticity and scene geometry. Illumination discrimination thresholds were essentially identical for the two stimulus types. As in our previous work, these thresholds varied with illumination change direction. We exploited the flexibility offered by the use of graphics simulations to investigate whether the differences across direction are preserved when the surfaces in the scene are varied. We show that varying the scene's surface ensemble in a manner that also changes mean scene chromaticity modulates the relative sensitivity to illumination changes along different chromatic directions. Thus, any characterization of sensitivity to changes in illumination must be defined relative to the set of surfaces in the scene. PMID:28558392

  9. Visual search for arbitrary objects in real scenes.

    PubMed

    Wolfe, Jeremy M; Alvarez, George A; Rosenholtz, Ruth; Kuzmova, Yoana I; Sherman, Ashley M

    2011-08-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4-6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the "functional set size" of items that could possibly be the target.
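
    Search efficiency as indexed above is simply the slope of a linear fit of reaction time on set size (here, the number of labeled scene regions). A minimal computation with invented data:

    # Slope of the RT x set size function (invented data), i.e. search efficiency in ms/item.
    import numpy as np
    from scipy.stats import linregress

    set_size = np.array([10, 20, 30, 40, 50])    # e.g. number of labeled scene regions
    rt_ms = np.array([620, 680, 745, 800, 860])  # hypothetical mean reaction times (ms)
    fit = linregress(set_size, rt_ms)
    print(f"search slope: {fit.slope:.1f} ms/item, intercept: {fit.intercept:.0f} ms")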

  10. Using articulated scene models for dynamic 3d scene analysis in vista spaces

    NASA Astrophysics Data System (ADS)

    Beuter, Niklas; Swadzba, Agnes; Kummert, Franz; Wachsmuth, Sven

    2010-09-01

In this paper we describe an efficient but detailed new approach for analyzing complex dynamic scenes directly in 3D. The resulting information is important for mobile robots solving tasks in the area of household robotics. In our work, a mobile robot builds an articulated scene model by observing the environment in its visual field, or rather in the so-called vista space. The articulated scene model consists of essential knowledge about the static background, about autonomously moving entities such as humans or robots and, in contrast to existing approaches, information about articulated parts. These parts describe movable objects such as chairs, doors or other tangible entities that could be moved by an agent. The combination of the static scene, the self-moving entities and the movable objects in one articulated scene model enhances the calculation of each single part. The reconstruction process for parts of the static scene benefits from removal of the dynamic parts and, in turn, the moving parts can be extracted more easily through the knowledge about the background. In our experiments we show that the system simultaneously delivers an accurate static background model, moving persons and movable objects. This information in the articulated scene model enables a mobile robot to detect and keep track of interaction partners, to navigate safely through the environment and, finally, to strengthen the interaction with the user through the knowledge about the 3D articulated objects and 3D scene analysis.

  11. Scene perception in posterior cortical atrophy: categorization, description and fixation patterns.

    PubMed

    Shakespeare, Timothy J; Yong, Keir X X; Frost, Chris; Kim, Lois G; Warrington, Elizabeth K; Crutch, Sebastian J

    2013-01-01

    Partial or complete Balint's syndrome is a core feature of the clinico-radiological syndrome of posterior cortical atrophy (PCA), in which individuals experience a progressive deterioration of cortical vision. Although multi-object arrays are frequently used to detect simultanagnosia in the clinical assessment and diagnosis of PCA, to date there have been no group studies of scene perception in patients with the syndrome. The current study involved three linked experiments conducted in PCA patients and healthy controls. Experiment 1 evaluated the accuracy and latency of complex scene perception relative to individual faces and objects (color and grayscale) using a categorization paradigm. PCA patients were both less accurate (faces < scenes < objects) and slower (scenes < objects < faces) than controls on all categories, with performance strongly associated with their level of basic visual processing impairment; patients also showed a small advantage for color over grayscale stimuli. Experiment 2 involved free description of real world scenes. PCA patients generated fewer features and more misperceptions than controls, though perceptual errors were always consistent with the patient's global understanding of the scene (whether correct or not). Experiment 3 used eye tracking measures to compare patient and control eye movements over initial and subsequent fixations of scenes. Patients' fixation patterns were significantly different to those of young and age-matched controls, with comparable group differences for both initial and subsequent fixations. Overall, these findings describe the variability in everyday scene perception exhibited by individuals with PCA, and indicate the importance of exposure duration in the perception of complex scenes.

  12. Object Scene Flow

    NASA Astrophysics Data System (ADS)

    Menze, Moritz; Heipke, Christian; Geiger, Andreas

    2018-06-01

    This work investigates the estimation of dense three-dimensional motion fields, commonly referred to as scene flow. While great progress has been made in recent years, large displacements and adverse imaging conditions as observed in natural outdoor environments are still very challenging for current approaches to reconstruction and motion estimation. In this paper, we propose a unified random field model which reasons jointly about 3D scene flow as well as the location, shape and motion of vehicles in the observed scene. We formulate the problem as the task of decomposing the scene into a small number of rigidly moving objects sharing the same motion parameters. Thus, our formulation effectively introduces long-range spatial dependencies which commonly employed local rigidity priors are lacking. Our inference algorithm then estimates the association of image segments and object hypotheses together with their three-dimensional shape and motion. We demonstrate the potential of the proposed approach by introducing a novel challenging scene flow benchmark which allows for a thorough comparison of the proposed scene flow approach with respect to various baseline models. In contrast to previous benchmarks, our evaluation is the first to provide stereo and optical flow ground truth for dynamic real-world urban scenes at large scale. Our experiments reveal that rigid motion segmentation can be utilized as an effective regularizer for the scene flow problem, improving upon existing two-frame scene flow methods. At the same time, our method yields plausible object segmentations without requiring an explicitly trained recognition model for a specific object class.

  13. The Sport Expert's Attention Superiority on Skill-related Scene Dynamic by the Activation of left Medial Frontal Gyrus: An ERP and LORETA Study.

    PubMed

    He, Mengyang; Qi, Changzhu; Lu, Yang; Song, Amanda; Hayat, Saba Z; Xu, Xia

    2018-05-21

Extensive studies have shown that sports experts are superior to sports novices in the visual perceptual-cognitive processing of sports scene information; however, the attentional and neural basis of this superiority has not been thoroughly explored. The present study examined whether a sport expert has attentional superiority for scene information relevant to his/her sport skill, and explored what factor drives this superiority. To address this problem, EEGs were recorded as participants passively viewed sport scenes (tennis vs. non-tennis) and negative emotional faces in the context of a visual attention task, where the pictures of sport scenes or of negative emotional faces randomly followed pictures with overlapping sport scenes and negative emotional faces. ERP results showed that for experts, the evoked potential of attentional competition elicited by the overlapping tennis scene was significantly larger than that evoked by the overlapping non-tennis scene, while this effect was absent for novices. The LORETA analysis showed that the experts' left medial frontal gyrus (MFG) was significantly more active than the right MFG when processing the overlapping tennis scene, but this lateralization effect was not significant in novices. These results indicate that experts have attentional superiority for skill-related scene information, even when the scene is intruded upon by negative emotional faces, which tend to cause a negativity bias and act as strong distractors. This superiority is driven by activation of the left MFG and is probably due to self-reference. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. SCEGRAM: An image database for semantic and syntactic inconsistencies in scenes.

    PubMed

    Öhlschläger, Sabine; Võ, Melissa Le-Hoa

    2017-10-01

    Our visual environment is not random, but follows compositional rules according to what objects are usually found where. Despite the growing interest in how such semantic and syntactic rules - a scene grammar - enable effective attentional guidance and object perception, no common image database containing highly controlled object-scene modifications has been publicly available. Such a database is essential in minimizing the risk that low-level features drive high-level effects of interest, which has been discussed as a possible source of controversial study results. To generate the first database of this kind - SCEGRAM - we took photographs of 62 real-world indoor scenes in six consistency conditions that contain semantic and syntactic (both mild and extreme) violations as well as their combinations. Importantly, scenes were always paired, so that an object was semantically consistent in one scene (e.g., ketchup in kitchen) and inconsistent in the other (e.g., ketchup in bathroom). Low-level salience did not differ between object-scene conditions and was generally moderate. Additionally, SCEGRAM contains consistency ratings for every object-scene condition, as well as object-absent scenes and object-only images. Finally, a cross-validation using eye-movements replicated previous results of longer dwell times for both semantic and syntactic inconsistencies compared to consistent controls. In sum, the SCEGRAM image database is the first to contain well-controlled semantic and syntactic object-scene inconsistencies that can be used in a broad range of cognitive paradigms (e.g., verbal and pictorial priming, change detection, object identification, etc.) including paradigms addressing developmental aspects of scene grammar. SCEGRAM can be retrieved for research purposes from http://www.scenegrammarlab.com/research/scegram-database/.

  15. A prototype molecular interactive collaborative environment (MICE).

    PubMed

    Bourne, P; Gribskov, M; Johnson, G; Moreland, J; Wavra, S; Weissig, H

    1998-01-01

    Illustrations of macromolecular structure in the scientific literature contain a high level of semantic content through which the authors convey, among other features, the biological function of that macromolecule. We refer to these illustrations as molecular scenes. Such scenes, if available electronically, are not readily accessible for further interactive interrogation. The basic PDB format does not retain features of the scene; formats like PostScript retain the scene but are not interactive; and the many formats used by individual graphics programs, while capable of reproducing the scene, are neither interchangeable nor can they be stored in a database and queried for features of the scene. MICE defines a Molecular Scene Description Language (MSDL) which allows scenes to be stored in a relational database (a molecular scene gallery) and queried. Scenes retrieved from the gallery are rendered in Virtual Reality Modeling Language (VRML) and currently displayed in WebView, a VRML browser modified to support the Virtual Reality Behavior System (VRBS) protocol. VRBS provides communication between multiple client browsers, each capable of manipulating the scene. This level of collaboration works well over standard Internet connections and holds promise for collaborative research at a distance and distance learning. Further, via VRBS, the VRML world can be used as a visual cue to trigger an application such as a remote MEME search. MICE is very much work in progress. Current work seeks to replace WebView with Netscape, Cosmoplayer, a standard VRML plug-in, and a Java-based console. The console consists of a generic kernel suitable for multiple collaborative applications and additional application-specific controls. Further details of the MICE project are available at http://mice.sdsc.edu.

  16. Effect of Viewing Smoking Scenes in Motion Pictures on Subsequent Smoking Desire in Audiences in South Korea

    PubMed Central

    Sohn, Minsung

    2017-01-01

    Background: In the modern era of heightened awareness of public health, smoking scenes in movies remain relatively free from public monitoring. The effect of smoking scenes in movies on the promotion of viewers’ smoking desire remains unknown. Objective: The study aimed to explore whether exposure of adolescent smokers to images of smoking in films could stimulate smoking behavior. Methods: Data were derived from a national Web-based sample survey of 748 Korean high-school students. Participants aged 16-18 years were randomly assigned to watch three short video clips with or without smoking scenes. After adjusting covariates using propensity score matching, paired sample t test and logistic regression analyses compared the difference in smoking desire before and after exposure of participants to smoking scenes. Results: For male adolescents, cigarette craving was significantly higher in those who watched movies with smoking scenes than in the control group who did not view smoking scenes (t(307.96)=2.066, P<.05). In the experimental group, too, cigarette cravings of adolescents after viewing smoking scenes were significantly higher than they were before watching smoking scenes (t(161.00)=2.867, P<.01). After adjusting for covariates, more impulsive adolescents, particularly males, had significantly higher cigarette cravings: adjusted odds ratio (aOR) 3.40 (95% CI 1.40-8.23). However, those who actively sought health information had considerably lower cigarette cravings than those who did not engage in information-seeking: aOR 0.08 (95% CI 0.01-0.88). Conclusions: Smoking scenes in motion pictures may increase male adolescent smoking desire. Establishing a standard that restricts the frequency of smoking scenes in films and assigning a smoking-related screening grade to films is warranted. PMID:28716768

  17. Fixations on objects in natural scenes: dissociating importance from salience

    PubMed Central

    't Hart, Bernard M.; Schmidt, Hannah C. E. F.; Roth, Christine; Einhäuser, Wolfgang

    2013-01-01

    The relation of selective attention to understanding of natural scenes has been subject to intense behavioral research and computational modeling, and gaze is often used as a proxy for such attention. The probability of an image region to be fixated typically correlates with its contrast. However, this relation does not imply a causal role of contrast. Rather, contrast may relate to an object's “importance” for a scene, which in turn drives attention. Here we operationalize importance by the probability that an observer names the object as characteristic for a scene. We modify luminance contrast of either a frequently named (“common”/“important”) or a rarely named (“rare”/“unimportant”) object, track the observers' eye movements during scene viewing and ask them to provide keywords describing the scene immediately after. When no object is modified relative to the background, important objects draw more fixations than unimportant ones. Increases of contrast make an object more likely to be fixated, irrespective of whether it was important for the original scene, while decreases in contrast have little effect on fixations. Any contrast modification makes originally unimportant objects more important for the scene. Finally, important objects are fixated more centrally than unimportant objects, irrespective of contrast. Our data suggest a dissociation between object importance (relevance for the scene) and salience (relevance for attention). If an object obeys natural scene statistics, important objects are also salient. However, when natural scene statistics are violated, importance and salience are differentially affected. Object salience is modulated by the expectation about object properties (e.g., formed by context or gist), and importance by the violation of such expectations. In addition, the dependence of fixated locations within an object on the object's importance suggests an analogy to the effects of word frequency on landing positions in reading. PMID:23882251

  18. Social relevance drives viewing behavior independent of low-level salience in rhesus macaques

    PubMed Central

    Solyst, James A.; Buffalo, Elizabeth A.

    2014-01-01

    Quantifying attention to social stimuli during the viewing of complex social scenes with eye tracking has proven to be a sensitive method in the diagnosis of autism spectrum disorders years before average clinical diagnosis. Rhesus macaques provide an ideal model for understanding the mechanisms underlying social viewing behavior, but to date no comparable behavioral task has been developed for use in monkeys. Using a novel scene-viewing task, we monitored the gaze of three rhesus macaques while they freely viewed well-controlled composed social scenes and analyzed the time spent viewing objects and monkeys. In each of six behavioral sessions, monkeys viewed a set of 90 images (540 unique scenes) with each image presented twice. In two-thirds of the repeated scenes, either a monkey or an object was replaced with a novel item (manipulated scenes). When viewing a repeated scene, monkeys made longer fixations and shorter saccades, shifting from a rapid orienting to global scene contents to a more local analysis of fewer items. In addition to this repetition effect, in manipulated scenes, monkeys demonstrated robust memory by spending more time viewing the replaced items. By analyzing attention to specific scene content, we found that monkeys strongly preferred to view conspecifics and that this was not related to their salience in terms of low-level image features. A model-free analysis of viewing statistics found that monkeys that were viewed earlier and longer had direct gaze and redder sex skin around their face and rump, two important visual social cues. These data provide a quantification of viewing strategy, memory and social preferences in rhesus macaques viewing complex social scenes, and they provide an important baseline against which to compare the effects of therapeutics aimed at enhancing social cognition. PMID:25414633

  19. Prehospital Blood Product Administration Opportunities in Ground Transport ALS EMS - A Descriptive Study.

    PubMed

    Mix, Felicia M; Zielinski, Martin D; Myers, Lucas A; Berns, Kathy S; Luke, Anurahda; Stubbs, James R; Zietlow, Scott P; Jenkins, Donald H; Sztajnkrycer, Matthew D

    2018-06-01

    Introduction: Hemorrhage remains the major cause of preventable death after trauma. Recent data suggest that earlier blood product administration may improve outcomes. The purpose of this study was to determine whether opportunities exist for blood product transfusion by ground Emergency Medical Services (EMS). This was a single EMS agency retrospective study of ground and helicopter responses from January 1, 2011 through December 31, 2015 for adult trauma patients transported from the scene of injury who met predetermined hemodynamic (HD) parameters for potential transfusion (heart rate [HR]≥120 and/or systolic blood pressure [SBP]≤90). A total of 7,900 scene trauma ground transports occurred during the study period. Of 420 patients meeting HD criteria for transfusion, 53 (12.6%) had a significant mechanism of injury (MOI). Outcome data were available for 51 patients; 17 received blood products during their emergency department (ED) resuscitation. The percentage of patients receiving blood products based upon HD criteria ranged from 1.0% (HR) to 5.9% (SBP) to 38.1% (HR+SBP). In all, 74 Helicopter EMS (HEMS) transports met HD criteria for blood transfusion, of which 28 patients received prehospital blood transfusion. Statistically significant total patient care time differences were noted for both the HR and the SBP cohorts, with HEMS having longer time intervals; no statistically significant difference in mean total patient care time was noted in the HR+SBP cohort. In this study population, HD parameters alone did not predict need for ED blood product administration. Despite longer transport times, only one-third of HEMS patients meeting HD criteria for blood administration received prehospital transfusion. While one-third of ground Advanced Life Support (ALS) transport patients manifesting HD compromise received blood products in the ED, this represented 0.2% of total trauma transports over the study period. Given complex logistical issues involved in prehospital blood product administration, opportunities for ground administration appear limited within the described system. Mix FM, Zielinski MD, Myers LA, Berns KS, Luke A, Stubbs JR, Zietlow SP, Jenkins DH, Sztajnkrycer MD. Prehospital blood product administration opportunities in ground transport ALS EMS - a descriptive study. Prehosp Disaster Med. 2018;33(3):230-236.

  20. Recognition of Natural Scenes from Global Properties: Seeing the Forest without Representing the Trees

    ERIC Educational Resources Information Center

    Greene, Michelle R.; Oliva, Aude

    2009-01-01

    Human observers are able to rapidly and accurately categorize natural scenes, but the representation mediating this feat is still unknown. Here we propose a framework of rapid scene categorization that does not segment a scene into objects and instead uses a vocabulary of global, ecological properties that describe spatial and functional aspects…

  1. Scene and Position Specificity in Visual Memory for Objects

    ERIC Educational Resources Information Center

    Hollingworth, Andrew

    2006-01-01

    This study investigated whether and how visual representations of individual objects are bound in memory to scene context. Participants viewed a series of naturalistic scenes, and memory for the visual form of a target object in each scene was examined in a 2-alternative forced-choice test, with the distractor object either a different object…

  2. Just Another Social Scene: Evidence for Decreased Attention to Negative Social Scenes in High-Functioning Autism

    ERIC Educational Resources Information Center

    Santos, Andreia; Chaminade, Thierry; Da Fonseca, David; Silva, Catarina; Rosset, Delphine; Deruelle, Christine

    2012-01-01

    The adaptive threat-detection advantage takes the form of a preferential orienting of attention to threatening scenes. In this study, we compared attention to social scenes in 15 high-functioning individuals with autism (ASD) and matched typically developing (TD) individuals. Eye-tracking was recorded while participants were presented with pairs…

  3. Guidance of Attention to Objects and Locations by Long-Term Memory of Natural Scenes

    ERIC Educational Resources Information Center

    Becker, Mark W.; Rasmussen, Ian P.

    2008-01-01

    Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants…

  4. PC Scene Generation

    NASA Astrophysics Data System (ADS)

    Buford, James A., Jr.; Cosby, David; Bunfield, Dennis H.; Mayhall, Anthony J.; Trimble, Darian E.

    2007-04-01

    AMRDEC has successfully tested hardware and software for Real-Time Scene Generation for IR and SAL Sensors on COTS PC based hardware and video cards. AMRDEC personnel worked with nVidia and Concurrent Computer Corporation to develop a Scene Generation system capable of frame rates of at least 120Hz while frame locked to an external source (such as a missile seeker) with no dropped frames. Latency measurements and image validation were performed using COTS and in-house developed hardware and software. Software for the Scene Generation system was developed using OpenSceneGraph.

  5. Adaptive Cross-correlation Algorithm and Experiment of Extended Scene Shack-Hartmann Wavefront Sensing

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Morgan, Rhonda M.; Green, Joseph J.; Ohara, Catherine M.; Redding, David C.

    2007-01-01

    We have developed a new, adaptive cross-correlation (ACC) algorithm to estimate with high accuracy shifts as large as several pixels between two extended-scene images captured by a Shack-Hartmann wavefront sensor (SH-WFS). It determines the positions of all of the extended-scene image cells relative to a reference cell using an FFT-based iterative image shifting algorithm. It works with both point-source spot images and extended-scene images. We have also set up a testbed for extended-scene SH-WFS, and tested the ACC algorithm with the measured data of both point-source and extended-scene images. In this paper we describe our algorithm and present our experimental results.
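
    For intuition, the fragment below sketches only the coarse, integer-pixel stage of an FFT-based shift estimate between an image cell and a reference cell (phase correlation); it is not the full iterative, subpixel ACC algorithm described above, and the function name and toy data are hypothetical.

      import numpy as np

      def coarse_shift_fft(cell, reference):
          """Estimate the integer-pixel shift of `cell` relative to `reference`
          via FFT-based (phase) cross-correlation -- a rough stand-in for the
          coarse step of an adaptive cross-correlation scheme."""
          F = np.fft.fft2(cell)
          G = np.fft.fft2(reference)
          cross_power = F * np.conj(G)
          corr = np.fft.ifft2(cross_power / (np.abs(cross_power) + 1e-12))
          peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
          shifts = np.array(peak, dtype=float)
          # Wrap shifts larger than half the cell size to negative values.
          for axis, size in enumerate(corr.shape):
              if shifts[axis] > size // 2:
                  shifts[axis] -= size
          return shifts  # (row_shift, col_shift)

      # Usage: shift a synthetic cell by (2, -3) pixels and recover it.
      rng = np.random.default_rng(0)
      ref = rng.random((32, 32))
      cell = np.roll(ref, shift=(2, -3), axis=(0, 1))
      print(coarse_shift_fft(cell, ref))   # approximately [ 2. -3.]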

  6. Bag of Visual Words Model with Deep Spatial Features for Geographical Scene Classification

    PubMed Central

    Wu, Lin

    2017-01-01

    With the popular use of geotagged images, more and more research effort has been devoted to geographical scene classification. In geographical scene classification, valid spatial feature selection can significantly boost the final performance. The bag of visual words (BoVW) model performs well at feature selection for geographical scene classification; nevertheless, it works effectively only if the provided feature extractor is well-matched. In this paper, we use convolutional neural networks (CNNs) to optimize the proposed feature extractor, so that it can learn more suitable visual vocabularies from the geotagged images. Our approach achieves better performance than BoVW for geographical scene classification on three datasets containing a variety of scene categories. PMID:28706534
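
    As a rough illustration of the BoVW pipeline (with random arrays standing in for the CNN-derived local descriptors the paper uses), the sketch below clusters descriptors into a visual vocabulary and encodes each image as a normalized word histogram; all names and parameters are hypothetical.

      import numpy as np
      from sklearn.cluster import KMeans

      def build_vocabulary(descriptor_sets, n_words=64, seed=0):
          """Cluster all local descriptors into a visual vocabulary."""
          all_desc = np.vstack(descriptor_sets)
          return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(all_desc)

      def bovw_histogram(descriptors, vocabulary):
          """Encode one image as a normalized histogram of visual words."""
          words = vocabulary.predict(descriptors)
          hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
          return hist / max(hist.sum(), 1.0)

      # Toy usage with random "descriptors" standing in for CNN local features.
      rng = np.random.default_rng(0)
      images = [rng.random((200, 128)) for _ in range(5)]   # 5 images, 200 descriptors each
      vocab = build_vocabulary(images, n_words=16)
      features = np.stack([bovw_histogram(d, vocab) for d in images])
      print(features.shape)   # (5, 16) -> input to any standard classifier (e.g., an SVM)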

  7. CCD imaging technology and the war on crime

    NASA Astrophysics Data System (ADS)

    McNeill, Glenn E.

    1992-08-01

    Linear array based CCD technology has been successfully used in the development of an Automatic Currency Reader/Comparator (ACR/C) system. The ACR/C system is designed to provide a method for tracking US currency in the organized crime and drug trafficking environments where large amounts of cash are involved in illegal transactions and money laundering activities. United States currency notes can be uniquely identified by the combination of denomination, serial number, and series year. The ACR/C system processes notes at five notes per second, using a custom transport, a stationary linear array, and optical character recognition (OCR) techniques to make such identifications. In this way large sums of money can be "marked" (using the system to read and store their identifiers) and then circulated within various crime networks. The system can later be used to read and compare confiscated notes to the known sets of identifiers from the "marked" set to document a trail of criminal activities. With the ACR/C, law enforcement agencies can efficiently identify currency without actually marking it. This provides an undetectable means for making each note individually traceable and facilitates record keeping for providing evidence in a court of law. In addition, when multiple systems are used in conjunction with a central database, the system can be used to track currency geographically.

  8. Assessing QuADEM: Preliminary Notes on a New Method for Evaluating Online Language Learning Courseware

    ERIC Educational Resources Information Center

    Strobl, Carola; Jacobs, Geert

    2011-01-01

    In this article, we set out to assess QuADEM (Quality Assessment of Digital Educational Material), one of the latest methods for evaluating online language learning courseware. What is special about QuADEM is that the evaluation is based on observing the actual usage of the online courseware and that, from a checklist of 12 different components,…

  9. A New Way to Demonstrate the Radiometer as a Heat Engine

    ERIC Educational Resources Information Center

    Hladkouski, V. I.; Pinchuk, A. I.

    2015-01-01

    While the radiometer is readily available as a toy, A. E. Woodruff notes that it is also a very useful tool to help us understand how to resolve certain scientific problems. Many physicists think they know how the radiometer works, but only a few actually understand it. Here we present a demonstration that shows that a radiometer can be thought of…

  10. Requirements Analysis for Effective Management Information Systems Design: A Framework and Case Study.

    DTIC Science & Technology

    1981-12-01

    1. Product Policy; 2. Price Policy; 3. Policy toward Rivals ... compete with its rivals; the aim of these policies is to achieve product differentiation; b) Pricing Policies - price structures that are generally ... actions which a firm takes to minimize both its actual and potential competition. It should be noted that product and pricing policies are greatly...

  11. Simulation and Calculation of the APEX Attitude

    DTIC Science & Technology

    1992-07-29

    attitude computation. As a by-product, several interesting features that may be present in the APEX attitude behavior are noted. The APEX satellite ... DEFINITION OF THE ATTITUDE: Generally speaking, it is possible to define the spacecraft attitude in several ways, so long as the process of computation and ... actual APEX attitude behavior. However, it is not the purpose of this work to assess the probable degree of attitude...

  12. Characterizing Health Disparities in the Age of Autism Diagnosis in a Study of 8-Year-Old Children

    ERIC Educational Resources Information Center

    Parikh, Chandni; Kurzius-Spencer, Margaret; Mastergeorge, Ann M.; Pettygrove, Sydney

    2018-01-01

    The diagnosis of autism spectrum disorder (ASD) is often delayed from the time of noted concerns to the actual diagnosis. The current study used child- and family-level factors to identify homogeneous classes in a surveillance-based sample (n = 2303) of 8-year-old children with ASD. Using latent class analysis, a 5-class model emerged and the…

  13. Policy Implications for Continuous Employment Decisions of High School Principals: An Alternative Methodological Approach for Using High-Stakes Testing Outcomes

    ERIC Educational Resources Information Center

    Young, I. Phillip; Fawcett, Paul

    2013-01-01

    Several teacher models exist for using high-stakes testing outcomes to make continuous employment decisions for principals. These models are reviewed, and specific flaws are noted if these models are retrofitted for principals. To address these flaws, a different methodology is proposed on the basis of actual field data. Specially addressed are…

  14. Dioptric defocus maps across the visual field for different indoor environments

    PubMed Central

    García, Miguel García; Ohlendorf, Arne; Schaeffel, Frank; Wahl, Siegfried

    2017-01-01

    One of the factors proposed to regulate eye growth is the error signal derived from defocus in the retina; this signal might arise from defocus not only in the fovea but across the whole visual field. Therefore, myopia could be better predicted by spatio-temporally mapping the ‘environmental defocus’ over the visual field. At present, no devices are available that could provide this information. A ‘Kinect sensor v1’ camera (Microsoft Corp.) and a portable eye tracker were used for developing a system for quantifying ‘indoor defocus error signals’ across the central 58° of the visual field. Dioptric differences relative to the fovea (assumed to be in focus) were recorded over the visual field and ‘defocus maps’ were generated for various scenes and tasks. PMID:29359108

  15. Characteristics of Behavior of Robots with Emotion Model

    NASA Astrophysics Data System (ADS)

    Sato, Shigehiko; Nozawa, Akio; Ide, Hideto

    A cooperative multi-robot system has many advantages over a single-robot system: it can adapt to various circumstances and is flexible with respect to varying tasks. However, controlling each robot remains a problem, even though methods for controlling multi-robot systems have been studied. Recently, robots have been entering real-world scenes, and the emotion and sensitivity of robots have been widely studied. In this study, a human emotion model based on psychological interaction was adapted to a multi-robot system in order to develop methods for organizing multiple robots. The behavioral characteristics of the multi-robot system, obtained through computer simulation, were analyzed. As a result, very complex and interesting behavior emerged even though the configuration is rather simple, and the system showed flexibility in various circumstances. An additional experiment with actual robots will be conducted based on the emotion model.

  16. Selecting band combinations with thematic mapper data

    NASA Technical Reports Server (NTRS)

    Sheffield, C. A.

    1983-01-01

    A problem arises in making color composite images because there are 210 different possible color presentations of TM three-band images. A method is given for reducing that 210 to a single choice, decided by the statistics of a scene or subscene, and taking into full account any correlations that exist between different bands. Instead of using total variance as the measure for information content of the band triplets, the ellipsoid of maximum volume is selected which discourages selection of bands with high correlation. The band triplet is obtained by computing and ranking in order the determinants of each 3 x 3 principal submatrix of the original matrix M. After selection of the best triplet, the assignment of colors is made by using the actual variances (the diagonal elements of M): green (maximum variance), red (second largest variance), blue (smallest variance).
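
    For reference, seven TM bands give C(7,3) = 35 band triplets and 3! = 6 ways to assign a triplet to red, green, and blue, hence the 210 possible color presentations. The sketch below is a hedged reading of the described procedure rather than the paper's exact implementation: it ranks triplets by the determinant of the corresponding 3x3 principal submatrix of the band covariance matrix and then assigns colors by variance. All names and the synthetic data are hypothetical.

      import numpy as np
      from itertools import combinations

      def best_band_triplet(bands):
          """Pick the three-band combination whose 3x3 principal submatrix of
          the band covariance matrix has the largest determinant (largest
          ellipsoid volume, which penalizes highly correlated bands).

          bands: (n_bands, n_pixels) array of flattened band data.
          Returns (triplet, color_assignment) where colors follow the rule
          green = largest variance, red = second largest, blue = smallest.
          """
          M = np.cov(bands)                       # n_bands x n_bands covariance
          best = max(combinations(range(M.shape[0]), 3),
                     key=lambda idx: np.linalg.det(M[np.ix_(idx, idx)]))
          variances = M[best, best]               # diagonal elements for the triplet
          order = np.argsort(variances)[::-1]     # descending variance
          colors = {"green": best[order[0]], "red": best[order[1]], "blue": best[order[2]]}
          return best, colors

      # Toy usage with 7 synthetic bands, three of which are highly correlated copies.
      rng = np.random.default_rng(1)
      base = rng.random((3, 10000))
      correlated = base * 0.9 + 0.1 * rng.random((3, 10000))
      bands = np.vstack([base, correlated, rng.random((1, 10000))])
      print(best_band_triplet(bands))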

  17. Optimising crime scene temperature collection for forensic entomology casework.

    PubMed

    Hofer, Ines M J; Hart, Andrew J; Martín-Vega, Daniel; Hall, Martin J R

    2017-01-01

    The value of minimum post-mortem interval (minPMI) estimations in suspicious death investigations from insect evidence using temperature modelling is indisputable. In order to investigate the reliability of the collected temperature data used for modelling minPMI, it is necessary to study the effects of data logger location on the accuracy and precision of measurements. Digital data logging devices are the most commonly used temperature measuring devices in forensic entomology; however, the relationship between ambient temperatures (measured by loggers) and body temperatures has been little studied. The placement of loggers in this study in three locations (two outdoors, one indoors) had measurable effects when compared with actual body temperature measurements (simulated with pig heads), some more significant than others depending on season, exposure to the environment and logger location. Overall, the study demonstrated the complexity of the question of optimal logger placement at a crime scene and the potential impact of inaccurate temperature data on minPMI estimations, showing the importance of further research in this area and development of a standard protocol. Initial recommendations are provided for data logger placement (within a Stevenson Screen where practical), situations to avoid (e.g. placement of logger in front of windows when measuring indoor temperatures), and a baseline for further research into producing standard guidelines for logger placement, to increase the accuracy of minPMI estimations and, thereby, the reliability of forensic entomology evidence in court. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. A simulation study of scene confusion factors in sensing soil moisture from orbital radar

    NASA Technical Reports Server (NTRS)

    Ulaby, F. T. (Principal Investigator); Dobson, M. C.; Moezzi, S.; Roth, F. T.

    1983-01-01

    Simulated C-band radar imagery for a 124-km by 108-km test site in eastern Kansas is used to classify soil moisture. Simulated radar resolutions are 100 m by 100 m, 1 km by 1 km, and 3 km by 3 km. Distributions of actual near-surface soil moisture are established daily for a 23-day accounting period using a water budget model. Within the 23-day period, three orbital radar overpasses are simulated roughly corresponding to generally moist, wet, and dry soil moisture conditions. The radar simulations are performed by a target/sensor interaction model dependent upon a terrain model, land-use classification, and near-surface soil moisture distribution. The accuracy of soil-moisture classification is evaluated for each single-date radar observation and also for multi-date detection of relative soil moisture change. In general, the results for single-date moisture detection show that 70% to 90% of cropland can be correctly classified to within +/- 20% of the true percent of field capacity. For a given radar resolution, the expected classification accuracy is shown to be dependent upon both the general soil moisture condition and also the geographical distribution of land-use and topographic relief. An analysis of cropland, urban, pasture/rangeland, and woodland subregions within the test site indicates that multi-temporal detection of relative soil moisture change is least sensitive to classification error resulting from scene complexity and topographic effects.

  19. Electrophysiological revelations of trial history effects in a color oddball search task.

    PubMed

    Shin, Eunsam; Chong, Sang Chul

    2016-12-01

    In visual oddball search tasks, viewing a no-target scene (i.e., no-target selection trial) leads to the facilitation or delay of the search time for a target in a subsequent trial. Presumably, this selection failure leads to biasing attentional set and prioritizing stimulus features unseen in the no-target scene. We observed attention-related ERP components and tracked the course of attentional biasing as a function of trial history. Participants were instructed to identify color oddballs (i.e., targets) shown in varied trial sequences. The number of no-target scenes preceding a target scene was increased from zero to two to reinforce attentional biasing, and colors presented in two successive no-target scenes were repeated or changed to systematically bias attention to specific colors. For the no-target scenes, the presentation of a second no-target scene resulted in an early selection of, and sustained attention to, the changed colors (mirrored in the frontal selection positivity, the anterior N2, and the P3b). For the target scenes, the N2pc indicated an earlier allocation of attention to the targets with unseen or remotely seen colors. Inhibitory control of attention, shown in the anterior N2, was greatest when the target scene was followed by repeated no-target scenes with repeated colors. Finally, search times and the P3b were influenced by both color previewing and its history. The current results demonstrate that attentional biasing can occur on a trial-by-trial basis and be influenced by both feature previewing and its history. © 2016 Society for Psychophysiological Research.

  20. Method for separating video camera motion from scene motion for constrained 3D displacement measurements

    NASA Astrophysics Data System (ADS)

    Gauthier, L. R.; Jansen, M. E.; Meyer, J. R.

    2014-09-01

    Camera motion is a potential problem when a video camera is used to perform dynamic displacement measurements. If the scene camera moves at the wrong time, the apparent motion of the object under study can easily be confused with the real motion of the object. In some cases, it is practically impossible to prevent camera motion, as for instance, when a camera is used outdoors in windy conditions. A method to address this challenge is described that provides an objective means to measure the displacement of an object of interest in the scene, even when the camera itself is moving in an unpredictable fashion at the same time. The main idea is to synchronously measure the motion of the camera and to use those data ex post facto to subtract out the apparent motion in the scene that is caused by the camera motion. The motion of the scene camera is measured by using a reference camera that is rigidly attached to the scene camera and oriented towards a stationary reference object. For instance, this reference object may be on the ground, which is known to be stationary. It is necessary to calibrate the reference camera by simultaneously measuring the scene images and the reference images at times when it is known that the scene object is stationary and the camera is moving. These data are used to map camera movement data to apparent scene movement data in pixel space and subsequently used to remove the camera movement from the scene measurements.
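
    The following is a minimal numerical sketch of this kind of correction, under the assumption that camera-induced apparent motion is well approximated by a linear map from reference-camera pixel motion, fitted on calibration frames in which the scene object is known to be stationary; all names and the toy data are hypothetical.

      import numpy as np

      def fit_camera_to_scene_map(ref_motion_cal, scene_motion_cal):
          """Least-squares linear map A (2x2) plus offset b from reference-camera
          pixel motion to apparent scene motion, fitted on calibration frames
          in which the scene object is known to be stationary."""
          X = np.hstack([ref_motion_cal, np.ones((len(ref_motion_cal), 1))])  # (n, 3)
          coeffs, *_ = np.linalg.lstsq(X, scene_motion_cal, rcond=None)       # (3, 2)
          A, b = coeffs[:2].T, coeffs[2]
          return A, b

      def corrected_displacement(scene_motion, ref_motion, A, b):
          """Subtract the predicted camera-induced apparent motion from the
          measured scene motion to recover the object's own displacement."""
          return scene_motion - (ref_motion @ A.T + b)

      # Toy usage: the apparent motion is a (noisy) linear image of the camera motion.
      rng = np.random.default_rng(0)
      ref_cal = rng.normal(size=(100, 2))
      true_A = np.array([[1.2, 0.0], [0.1, 0.9]])
      scene_cal = ref_cal @ true_A.T + 0.01 * rng.normal(size=(100, 2))
      A, b = fit_camera_to_scene_map(ref_cal, scene_cal)
      obj_motion = corrected_displacement(np.array([[3.0, 1.0]]),
                                          np.array([[2.0, 1.0]]), A, b)
      print(obj_motion)   # residual motion attributable to the object itself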

  1. Emotional facial expressions evoke faster orienting responses, but weaker emotional responses at neural and behavioural levels compared to scenes: A simultaneous EEG and facial EMG study.

    PubMed

    Mavratzakis, Aimee; Herbert, Cornelia; Walla, Peter

    2016-01-01

    In the current study, electroencephalography (EEG) was recorded simultaneously with facial electromyography (fEMG) to determine whether emotional faces and emotional scenes are processed differently at the neural level. In addition, it was investigated whether these differences can be observed at the behavioural level via spontaneous facial muscle activity. Emotional content of the stimuli did not affect early P1 activity. Emotional faces elicited enhanced amplitudes of the face-sensitive N170 component, while its counterpart, the scene-related N100, was not sensitive to emotional content of scenes. At 220-280 ms, the early posterior negativity (EPN) was enhanced only slightly for fearful as compared to neutral or happy faces. However, its amplitudes were significantly enhanced during processing of scenes with positive content, particularly over the right hemisphere. Scenes of positive content also elicited enhanced spontaneous zygomatic activity from 500-750 ms onwards, while happy faces elicited no such changes. Contrastingly, both fearful faces and negative scenes elicited enhanced spontaneous corrugator activity at 500-750 ms after stimulus onset. However, relative to baseline, EMG changes occurred earlier for faces (250 ms) than for scenes (500 ms), whereas for scenes the activity changes were more pronounced over the whole viewing period. Taking into account all effects, the data suggest that emotional facial expressions evoke faster attentional orienting, but weaker affective neural activity and emotional behavioural responses compared to emotional scenes. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Scene perception in posterior cortical atrophy: categorization, description and fixation patterns

    PubMed Central

    Shakespeare, Timothy J.; Yong, Keir X. X.; Frost, Chris; Kim, Lois G.; Warrington, Elizabeth K.; Crutch, Sebastian J.

    2013-01-01

    Partial or complete Balint's syndrome is a core feature of the clinico-radiological syndrome of posterior cortical atrophy (PCA), in which individuals experience a progressive deterioration of cortical vision. Although multi-object arrays are frequently used to detect simultanagnosia in the clinical assessment and diagnosis of PCA, to date there have been no group studies of scene perception in patients with the syndrome. The current study involved three linked experiments conducted in PCA patients and healthy controls. Experiment 1 evaluated the accuracy and latency of complex scene perception relative to individual faces and objects (color and grayscale) using a categorization paradigm. PCA patients were both less accurate (faces < scenes < objects) and slower (scenes < objects < faces) than controls on all categories, with performance strongly associated with their level of basic visual processing impairment; patients also showed a small advantage for color over grayscale stimuli. Experiment 2 involved free description of real world scenes. PCA patients generated fewer features and more misperceptions than controls, though perceptual errors were always consistent with the patient's global understanding of the scene (whether correct or not). Experiment 3 used eye tracking measures to compare patient and control eye movements over initial and subsequent fixations of scenes. Patients' fixation patterns were significantly different to those of young and age-matched controls, with comparable group differences for both initial and subsequent fixations. Overall, these findings describe the variability in everyday scene perception exhibited by individuals with PCA, and indicate the importance of exposure duration in the perception of complex scenes. PMID:24106469

  3. Canceled to Be Called Back: A Retrospective Cohort Study of Canceled Helicopter Emergency Medical Service Scene Calls That Are Later Transferred to a Trauma Center.

    PubMed

    Nolan, Brodie; Ackery, Alun; Nathens, Avery; Sawadsky, Bruce; Tien, Homer

    In our trauma system, helicopter emergency medical services (HEMS) can be requested to attend a scene call for an injured patient before arrival by land paramedics. Land paramedics can cancel this response if they deem it unnecessary. The purpose of this study is to describe the frequency of canceled HEMS scene calls that were subsequently transferred to 2 trauma centers and to assess for any impact on morbidity and mortality. Probabilistic matching was used to identify canceled HEMS scene call patients who were later transported to 2 trauma centers over a 48-month period. Registry data were used to compare canceled scene call patients with direct from scene patients. There were 290 requests for HEMS scene calls, of which 35.2% were canceled. Of those canceled, 24.5% were later transported to our trauma centers. Canceled scene call patients were more likely to be older and to be discharged home from the trauma center without being admitted. There is a significant amount of undertriage of patients for whom an HEMS response was canceled and later transported to a trauma center. These patients face similar morbidity and mortality as patients who are brought directly from scene to a trauma center. Copyright © 2018 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.

  4. The neural bases of spatial frequency processing during scene perception

    PubMed Central

    Kauffmann, Louise; Ramanoël, Stephen; Peyrin, Carole

    2014-01-01

    Theories on visual perception agree that scenes are processed in terms of spatial frequencies. Low spatial frequencies (LSF) carry coarse information whereas high spatial frequencies (HSF) carry fine details of the scene. However, how and where spatial frequencies are processed within the brain remain unresolved questions. The present review addresses these issues and aims to identify the cerebral regions differentially involved in low and high spatial frequency processing, and to clarify their attributes during scene perception. Results from a number of behavioral and neuroimaging studies suggest that spatial frequency processing is lateralized in both hemispheres, with the right and left hemispheres predominantly involved in the categorization of LSF and HSF scenes, respectively. There is also evidence that spatial frequency processing is retinotopically mapped in the visual cortex. HSF scenes (as opposed to LSF) activate occipital areas in relation to foveal representations, while categorization of LSF scenes (as opposed to HSF) activates occipital areas in relation to more peripheral representations. Concomitantly, a number of studies have demonstrated that LSF information may reach high-order areas rapidly, allowing an initial coarse parsing of the visual scene, which could then be sent back through feedback into the occipito-temporal cortex to guide finer HSF-based analysis. Finally, the review addresses spatial frequency processing within scene-selective regions of the occipito-temporal cortex. PMID:24847226

  5. Subliminal encoding and flexible retrieval of objects in scenes.

    PubMed

    Wuethrich, Sergej; Hannula, Deborah E; Mast, Fred W; Henke, Katharina

    2018-04-27

    Our episodic memory stores what happened when and where in life. Episodic memory requires the rapid formation and flexible retrieval of where things are located in space. Consciousness of the encoding scene is considered crucial for episodic memory formation. Here, we question the necessity of consciousness and hypothesize that humans can form unconscious episodic memories. Participants were presented with subliminal scenes, i.e., scenes invisible to the conscious mind. The scenes displayed objects at certain locations for participants to form unconscious object-in-space memories. Later, the same scenes were presented supraliminally, i.e., visibly, for retrieval testing. Scenes were presented absent the objects and rotated by 90°-270° in perspective to assess the representational flexibility of unconsciously formed memories. During the test phase, participants performed a forced-choice task that required them to place an object in one of two highlighted scene locations and their eye movements were recorded. Evaluation of the eye tracking data revealed that participants remembered object locations unconsciously, irrespective of changes in viewing perspective. This effect of gaze was related to correct placements of objects in scenes, and an intuitive decision style was necessary for unconscious memories to influence intentional behavior to a significant degree. We conclude that conscious perception is not mandatory for spatial episodic memory formation. This article is protected by copyright. All rights reserved. © 2018 Wiley Periodicals, Inc.

  6. Motivational Objects in Natural Scenes (MONS): A Database of >800 Objects.

    PubMed

    Schomaker, Judith; Rau, Elias M; Einhäuser, Wolfgang; Wittmann, Bianca C

    2017-01-01

    In daily life, we are surrounded by objects with pre-existing motivational associations. However, these are rarely controlled for in experiments with natural stimuli. Research on natural stimuli would therefore benefit from stimuli with well-defined motivational properties; in turn, such stimuli also open new paths in research on motivation. Here we introduce a database of Motivational Objects in Natural Scenes (MONS). The database consists of 107 scenes. Each scene contains 2 to 7 objects placed at approximately equal distance from the scene center. Each scene was photographed creating 3 versions, with one object ("critical object") being replaced to vary the overall motivational value of the scene (appetitive, aversive, and neutral), while maintaining high visual similarity between the three versions. Ratings on motivation, valence, arousal and recognizability were obtained using internet-based questionnaires. Since the main objective was to provide stimuli of well-defined motivational value, three motivation scales were used: (1) Desire to own the object; (2) Approach/Avoid; (3) Desire to interact with the object. Three sets of ratings were obtained in independent sets of observers: for all 805 objects presented on a neutral background, for 321 critical objects presented in their scene context, and for the entire scenes. On the basis of the motivational ratings, objects were subdivided into aversive, neutral, and appetitive categories. The MONS database will provide a standardized basis for future studies on motivational value under realistic conditions.

  7. The visual light field in real scenes

    PubMed Central

    Xia, Ling; Pont, Sylvia C.; Heynderickx, Ingrid

    2014-01-01

    Human observers' ability to infer the light field in empty space is known as the “visual light field.” While most relevant studies were performed using images on computer screens, we investigate the visual light field in a real scene by using a novel experimental setup. A “probe” and a scene were mixed optically using a semitransparent mirror. Twenty participants were asked to judge whether the probe fitted the scene with regard to the illumination intensity, direction, and diffuseness. Both smooth and rough probes were used to test whether observers use the additional cues for the illumination direction and diffuseness provided by the 3D texture over the rough probe. The results confirmed that observers are sensitive to the intensity, direction, and diffuseness of the illumination also in real scenes. For some lighting combinations on scene and probe, the awareness of a mismatch between the probe and scene was found to depend on which lighting condition was on the scene and which on the probe, which we called the “swap effect.” For these cases, the observers judged the fit to be better if the average luminance of the visible parts of the probe was closer to the average luminance of the visible parts of the scene objects. The use of a rough instead of smooth probe was found to significantly improve observers' abilities to detect mismatches in lighting diffuseness and directions. PMID:25926970

  8. Seeing for speaking: Semantic and lexical information provided by briefly presented, naturalistic action scenes

    PubMed Central

    Bölte, Jens; Hofmann, Reinhild; Meier, Claudine C.; Dobel, Christian

    2018-01-01

    At the interface between scene perception and speech production, we investigated how rapidly action scenes can activate semantic and lexical information. Experiment 1 examined how complex action-scene primes, presented for 150 ms, 100 ms, or 50 ms and subsequently masked, influenced the speed with which immediately following action-picture targets are named. Prime and target actions were either identical, showed the same action with different actors and environments, or were unrelated. Relative to unrelated primes, identical and same-action primes facilitated naming the target action, even when presented for 50 ms. In Experiment 2, neutral primes assessed the direction of effects. Identical and same-action scenes induced facilitation but unrelated actions induced interference. In Experiment 3, written verbs were used as targets for naming, preceded by action primes. When target verbs denoted the prime action, clear facilitation was obtained. In contrast, interference was observed when target verbs were phonologically similar, but otherwise unrelated, to the names of prime actions. This is clear evidence for word-form activation by masked action scenes. Masked action pictures thus provide conceptual information that is detailed enough to facilitate apprehension and naming of immediately following scenes. Masked actions even activate their word-form information–as is evident when targets are words. We thus show how language production can be primed with briefly flashed masked action scenes, in answer to long-standing questions in scene processing. PMID:29652939

  9. Mirth and Murder: Crime Scene Investigation as a Work Context for Examining Humor Applications

    ERIC Educational Resources Information Center

    Roth, Gene L.; Vivona, Brian

    2010-01-01

    Within work settings, humor is used by workers for a wide variety of purposes. This study examines humor applications of a specific type of worker in a unique work context: crime scene investigation. Crime scene investigators examine death and its details. Members of crime scene units observe death much more frequently than other police officers…

  10. An application of cluster detection to scene analysis

    NASA Technical Reports Server (NTRS)

    Rosenfeld, A. H.; Lee, Y. H.

    1971-01-01

    Certain arrangements of local features in a scene tend to group together and to be seen as units. It is suggested that in some instances, this phenomenon might be interpretable as a process of cluster detection in a graph-structured space derived from the scene. This idea is illustrated using a class of scenes that contain only horizontal and vertical line segments.
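
    One simplified way to make this idea concrete (not the authors' procedure) is to treat each line segment as a graph node, link segments whose endpoints lie within a small gap, and read perceptual groups off as connected components; the sketch below does exactly that, with hypothetical names, thresholds, and toy data.

      import numpy as np
      from scipy.sparse import csr_matrix
      from scipy.sparse.csgraph import connected_components

      def group_segments(segments, max_gap=5.0):
          """Group axis-aligned line segments into clusters by linking any two
          segments whose closest endpoints lie within `max_gap` pixels.

          segments: (N, 4) array of (x0, y0, x1, y1) endpoints.
          Returns an (N,) array of cluster labels.
          """
          ends = segments.reshape(-1, 2, 2)                    # (N, 2 endpoints, xy)
          n = len(segments)
          rows, cols = [], []
          for i in range(n):
              for j in range(i + 1, n):
                  # Minimum distance between any endpoint pair of segments i and j.
                  d = np.linalg.norm(ends[i][:, None, :] - ends[j][None, :, :], axis=-1).min()
                  if d <= max_gap:
                      rows.append(i)
                      cols.append(j)
          adj = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
          _, labels = connected_components(adj, directed=False)
          return labels

      # Toy usage: two L-shaped groups far apart form two clusters.
      segs = np.array([[0, 0, 0, 10], [0, 10, 10, 10],          # group 1
                       [100, 0, 100, 10], [100, 10, 110, 10]])  # group 2
      print(group_segments(segs))   # e.g. [0 0 1 1]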

  11. The Relationship Between Online Visual Representation of a Scene and Long-Term Scene Memory

    ERIC Educational Resources Information Center

    Hollingworth, Andrew

    2005-01-01

    In 3 experiments the author investigated the relationship between the online visual representation of natural scenes and long-term visual memory. In a change detection task, a target object either changed or remained the same from an initial image of a natural scene to a test image. Two types of changes were possible: rotation in depth, or…

  12. Hierarchy-associated semantic-rule inference framework for classifying indoor scenes

    NASA Astrophysics Data System (ADS)

    Yu, Dan; Liu, Peng; Ye, Zhipeng; Tang, Xianglong; Zhao, Wei

    2016-03-01

    Typically, the initial task of classifying indoor scenes is challenging, because the spatial layout and decoration of a scene can vary considerably. Recent efforts at classifying object relationships commonly depend on the results of scene annotation and predefined rules, making classification inflexible. Furthermore, annotation results are easily affected by external factors. Inspired by human cognition, a scene-classification framework was proposed using the empirically based annotation (EBA) and a match-over rule-based (MRB) inference system. The semantic hierarchy of images is exploited by EBA to construct rules empirically for MRB classification. The problem of scene classification is divided into low-level annotation and high-level inference from a macro perspective. Low-level annotation involves detecting the semantic hierarchy and annotating the scene with a deformable-parts model and a bag-of-visual-words model. In high-level inference, hierarchical rules are extracted to train the decision tree for classification. The categories of testing samples are generated from the parts to the whole. Compared with traditional classification strategies, the proposed semantic hierarchy and corresponding rules reduce the effect of a variable background and improve the classification performance. The proposed framework was evaluated on a popular indoor scene dataset, and the experimental results demonstrate its effectiveness.
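
    Below is a toy sketch of the rule-extraction step under strong assumptions: each scene is represented by a count vector over annotated object categories, and a shallow decision tree trained on those vectors yields hierarchical if-then rules. The vocabulary, labels, and data are hypothetical and do not reproduce the paper's EBA/MRB pipeline.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      # Hypothetical annotation vocabulary and toy annotated scenes:
      # each scene is a count vector over detected object categories.
      vocabulary = ["bed", "sink", "monitor", "sofa"]
      X = np.array([[1, 0, 0, 0],    # bedroom
                    [1, 0, 1, 0],    # bedroom
                    [0, 1, 0, 0],    # bathroom
                    [0, 2, 0, 0],    # bathroom
                    [0, 0, 2, 0],    # office
                    [0, 0, 1, 1]])   # office
      y = ["bedroom", "bedroom", "bathroom", "bathroom", "office", "office"]

      # Train a shallow tree; its splits read directly as hierarchical
      # if-then rules over the annotation results.
      tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
      print(export_text(tree, feature_names=vocabulary))
      print(tree.predict([[1, 0, 2, 0]]))   # classify a new annotated scene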

  13. Skidmore Clips of Neutral and Expressive Scenarios (SCENES): Novel dynamic stimuli for social cognition research.

    PubMed

    Schofield, Casey A; Weeks, Justin W; Taylor, Lea; Karnedy, Colten

    2015-12-30

    Social cognition research has relied primarily on photographic emotional stimuli. Such stimuli likely have limited ecological validity in terms of representing real world social interactions. The current study presents evidence for the validity of a new stimuli set of dynamic social SCENES (Skidmore Clips of Emotional and Neutral Expressive Scenarios). To develop these stimuli, ten undergraduate theater students were recruited to portray members of an audience. This audience was configured to display (seven) varying configurations of social feedback, ranging from unequivocally approving to unequivocally disapproving (including three different versions of balanced/neutral scenes). Validity data were obtained from 383 adult participants recruited from Amazon's Mechanical Turk. Each participant viewed three randomly assigned scenes and provided a rating of the perceived criticalness of each scene. Results indicate that the SCENES reflect the intended range of emotionality, and pairwise comparisons suggest that the SCENES capture distinct levels of critical feedback. Overall, the SCENES stimuli set represents a publicly available (www.scenesstimuli.com) resource for researchers interested in measuring social cognition in the presence of dynamic and naturalistic social stimuli. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. Medial Temporal Lobe Contributions to Episodic Future Thinking: Scene Construction or Future Projection?

    PubMed

    Palombo, D J; Hayes, S M; Peterson, K M; Keane, M M; Verfaellie, M

    2018-02-01

    Previous research has shown that the medial temporal lobes (MTL) are more strongly engaged when individuals think about the future than about the present, leading to the suggestion that future projection drives MTL engagement. However, future thinking tasks often involve scene processing, leaving open the alternative possibility that scene-construction demands, rather than future projection, are responsible for the MTL differences observed in prior work. This study explores this alternative account. Using functional magnetic resonance imaging, we directly contrasted MTL activity in 1) high scene-construction and low scene-construction imagination conditions matched in future thinking demands and 2) future-oriented and present-oriented imagination conditions matched in scene-construction demands. Consistent with the alternative account, the MTL was more active for the high versus low scene-construction condition. By contrast, MTL differences were not observed when comparing the future versus present conditions. Moreover, the magnitude of MTL activation was associated with the extent to which participants imagined a scene but was not associated with the extent to which participants thought about the future. These findings help disambiguate which component processes of imagination specifically involve the MTL. Published by Oxford University Press 2016.

  15. Familiarity from the configuration of objects in 3-dimensional space and its relation to déjà vu: a virtual reality investigation.

    PubMed

    Cleary, Anne M; Brown, Alan S; Sawyer, Benjamin D; Nomi, Jason S; Ajoku, Adaeze C; Ryals, Anthony J

    2012-06-01

    Déjà vu is the striking sense that the present situation feels familiar, alongside the realization that it has to be new. According to the Gestalt familiarity hypothesis, déjà vu results when the configuration of elements within a scene maps onto a configuration previously seen, but the previous scene fails to come to mind. We examined this using virtual reality (VR) technology. When a new immersive VR scene resembled a previously-viewed scene in its configuration but people failed to recall the previously-viewed scene, familiarity ratings and reports of déjà vu were indeed higher than for completely novel scenes. People also exhibited the contrasting sense of newness and of familiarity that is characteristic of déjà vu. Familiarity ratings and déjà vu reports among scenes recognized as new increased with increasing feature-match of a scene to one stored in memory, suggesting that feature-matching can produce familiarity and déjà vu when recall fails. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. When does repeated search in scenes involve memory? Looking AT versus looking FOR objects in scenes

    PubMed Central

    Võ, Melissa L.-H.; Wolfe, Jeremy M.

    2014-01-01

    One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in the same scene, search performance remained essentially unchanged over the course of searches despite increasing scene familiarity. Similarly, looking at target objects during previews, which included letter search, 30 seconds of free viewing, or even 30 seconds of memorizing a scene, also did not benefit search for the same objects later on. However, when the same object was searched for again memory for the previous search was capable of producing very substantial speeding of search despite many different intervening searches. This was especially the case when the previous search engagement had been active rather than supported by a cue. While these search benefits speak to the strength of memory-guided search when the same search target is repeated, the lack of memory guidance during initial object searches – despite previous encounters with the target objects - demonstrates the dominance of guidance by generic scene knowledge in real-world search. PMID:21688939

  17. The roles of scene gist and spatial dependency among objects in the semantic guidance of attention in real-world scenes.

    PubMed

    Wu, Chia-Chien; Wang, Hsueh-Cheng; Pomplun, Marc

    2014-12-01

    A previous study (Vision Research 51 (2011) 1192-1205) found evidence for semantic guidance of visual attention during the inspection of real-world scenes, i.e., an influence of semantic relationships among scene objects on overt shifts of attention. In particular, the results revealed an observer bias toward gaze transitions between semantically similar objects. However, this effect is not necessarily indicative of semantic processing of individual objects but may be mediated by knowledge of the scene gist, which does not require object recognition, or by known spatial dependency among objects. To examine the mechanisms underlying semantic guidance, in the present study, participants were asked to view a series of displays with the scene gist excluded and spatial dependency varied. Our results show that spatial dependency among objects seems to be sufficient to induce semantic guidance. Scene gist, on the other hand, does not seem to affect how observers use semantic information to guide attention while viewing natural scenes. Extracting semantic information mainly based on spatial dependency may be an efficient strategy of the visual system that only adds little cognitive load to the viewing task. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Guidance of attention to objects and locations by long-term memory of natural scenes.

    PubMed

    Becker, Mark W; Rasmussen, Ian P

    2008-11-01

    Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants performed an unanticipated 2nd block of trials. When the same scene occurred in the 2nd block, the change within the scene was (a) identical to the original change, (b) a new object appearing in the original change location, (c) the same object appearing in a new location, or (d) a new object appearing in a new location. Results suggest that attention is rapidly allocated to previously relevant locations and then to previously relevant objects. This pattern of locations dominating objects remained when object identity information was made more salient. Eye tracking verified that scene memory results in more direct scan paths to previously relevant locations and objects. This contextual guidance suggests that a high-capacity long-term memory for scenes is used to ensure that limited attentional capacity is allocated efficiently rather than being squandered.

  19. When does repeated search in scenes involve memory? Looking at versus looking for objects in scenes.

    PubMed

    Võ, Melissa L-H; Wolfe, Jeremy M

    2012-02-01

    One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in the same scene, search performance remained essentially unchanged over the course of searches despite increasing scene familiarity. Similarly, looking at target objects during previews, which included letter search, 30 seconds of free viewing, or even 30 seconds of memorizing a scene, also did not benefit search for the same objects later on. However, when the same object was searched for again, memory for the previous search was capable of producing very substantial speeding of search despite many different intervening searches. This was especially the case when the previous search engagement had been active rather than supported by a cue. While these search benefits speak to the strength of memory-guided search when the same search target is repeated, the lack of memory guidance during initial object searches, despite previous encounters with the target objects, demonstrates the dominance of guidance by generic scene knowledge in real-world search.

  20. Cultural differences in the lateral occipital complex while viewing incongruent scenes

    PubMed Central

    Yang, Yung-Jui; Goh, Joshua; Hong, Ying-Yi; Park, Denise C.

    2010-01-01

    Converging behavioral and neuroimaging evidence indicates that culture influences the processing of complex visual scenes. Whereas Westerners focus on central objects and tend to ignore context, East Asians process scenes more holistically, attending to the context in which objects are embedded. We investigated cultural differences in contextual processing by manipulating the congruence of visual scenes presented in an fMR-adaptation paradigm. We hypothesized that East Asians would show greater adaptation to incongruent scenes, consistent with their tendency to process contextual relationships more extensively than Westerners. Sixteen Americans and 16 native Chinese were scanned while viewing sets of pictures consisting of a focal object superimposed upon a background scene. In half of the pictures objects were paired with congruent backgrounds, and in the other half objects were paired with incongruent backgrounds. We found that within both the right and left lateral occipital complexes, Chinese participants showed significantly greater adaptation to incongruent scenes than to congruent scenes relative to American participants. These results suggest that Chinese were more sensitive to contextual incongruity than were Americans and that they reacted to incongruent object/background pairings by focusing greater attention on the object. PMID:20083532

  1. Oculomotor capture during real-world scene viewing depends on cognitive load.

    PubMed

    Matsukura, Michi; Brockmole, James R; Boot, Walter R; Henderson, John M

    2011-03-25

    It has been claimed that gaze control during scene viewing is largely governed by stimulus-driven, bottom-up selection mechanisms. Recent research, however, has strongly suggested that observers' top-down control plays a dominant role in attentional prioritization in scenes. A notable exception to this strong top-down control is oculomotor capture, where visual transients in a scene draw the eyes. One way to test whether oculomotor capture during scene viewing is independent of an observer's top-down goal setting is to reduce observers' cognitive resource availability. In the present study, we examined whether increasing observers' cognitive load influences the frequency and speed of oculomotor capture during scene viewing. In Experiment 1, we tested whether increasing observers' cognitive load modulates the degree of oculomotor capture by a new object suddenly appearing in a scene. Similarly, in Experiment 2, we tested whether increasing observers' cognitive load modulates the degree of oculomotor capture by an object's color change. In both experiments, the degree of oculomotor capture decreased as observers' cognitive resources were reduced. These results suggest that oculomotor capture during scene viewing is dependent on observers' top-down selection mechanisms. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Notes on aerodynamic forces on airship hulls

    NASA Technical Reports Server (NTRS)

    Tuckerman, L B

    1923-01-01

    For a first approximation, the air flow around the airship hull is assumed to obey the laws of a perfect (i.e., free from viscosity) incompressible fluid. The flow is further assumed to be free from vortices (i.e., from rotational motion of the fluid). These assumptions lead to very great simplifications of the formulae used, but necessarily imply an imperfect picture of the actual conditions. The value of the results therefore depends upon the magnitude of the forces produced by the disturbances in the flow caused by viscosity, with the consequent production of vortices in the fluid. If these are small in comparison with the forces due to the assumed irrotational perfect-fluid flow, the results will give a good picture of the actual conditions of an airship in flight.
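    As a point of reference only, the governing relations implied by these assumptions (incompressible, inviscid, irrotational flow) are those of classical potential-flow theory; the following is a textbook summary rather than a reproduction of the report's own formulae:

        \nabla \cdot \mathbf{u} = 0, \qquad \nabla \times \mathbf{u} = \mathbf{0}
        \;\;\Rightarrow\;\; \mathbf{u} = \nabla \phi, \qquad \nabla^{2}\phi = 0,

        p + \tfrac{1}{2}\,\rho\,\lvert \mathbf{u} \rvert^{2} = \text{const.} \qquad \text{(Bernoulli, steady flow)}

    In such a model the aerodynamic force on the hull follows from integrating the Bernoulli pressure over the hull surface.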

  3. Functional double dissociation within the entorhinal cortex for visual scene-dependent choice behavior

    PubMed Central

    Yoo, Seung-Woo; Lee, Inah

    2017-01-01

    How visual scene memory is processed differentially by the upstream structures of the hippocampus is largely unknown. We sought to dissociate functionally the lateral and medial subdivisions of the entorhinal cortex (LEC and MEC, respectively) in visual scene-dependent tasks by temporarily inactivating the LEC and MEC in the same rat. When the rat made spatial choices in a T-maze using visual scenes displayed on LCD screens, the inactivation of the MEC but not the LEC produced severe deficits in performance. However, when the task required the animal to push a jar or to dig in the sand in the jar using the same scene stimuli, the LEC but not the MEC became important. Our findings suggest that the entorhinal cortex is critical for scene-dependent mnemonic behavior, and the response modality may interact with a sensory modality to determine the involvement of the LEC and MEC in scene-based memory tasks. DOI: http://dx.doi.org/10.7554/eLife.21543.001 PMID:28169828

  4. [How five-year-old children distribute rewards: effects of the amount of reward and a crying face].

    PubMed

    Tsutsu, Kiyomi

    2013-10-01

    Five-year-old children were presented with two scenes in which one character made three stars and the other made nine stars. In one scene, both characters' facial expressions were neutral (neutral-face scene); in the other, the character who produced three stars had a crying face (crying-face scene). Children distributed rewards to the two characters, with the total number of rewards equal to (Middle-N), less than (Small-N), or more than (Large-N) the total number of stars in each scene. The children were then asked to give their reasons after distributing the rewards. It was found that (a) the participants' distributions depended on the total number of rewards, (b) they did not depend on the characters' facial expressions, and (c) the justifications of the distributions in the Middle-N condition differed between the scenes. These results suggest that the total number of rewards triggers an automatic distribution process, and that an ex post facto justification takes place when needed.

  5. Visual search in scenes involves selective and non-selective pathways

    PubMed Central

    Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R

    2010-01-01

    How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: A “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global / statistical information. PMID:21227734

  6. Study on Earthquake Emergency Evacuation Drill Trainer Development

    NASA Astrophysics Data System (ADS)

    ChangJiang, L.

    2016-12-01

    With the advance of urbanization in China, helping people survive earthquakes requires scientific, routine emergency evacuation drills. Drawing on cellular automata, shortest-path algorithms and collision avoidance, we designed a model of earthquake emergency evacuation drills for school scenes. Based on this model, we developed simulation software for earthquake emergency evacuation drills. The software performs the simulation by building a spatial structural model and selecting people's location information according to the actual conditions of the buildings. Based on the simulation data, a drill can then be conducted in the same building. RFID technology can be used for drill data collection: it reads personal information and sends it to the evacuation simulation software via Wi-Fi. The simulation software then compares the simulated data with information from the actual evacuation process, such as evacuation time, evacuation paths and congestion nodes, and provides a comparative analysis report containing the assessment results and an optimized proposal. We hope the earthquake emergency evacuation drill software and trainer can provide a whole-process concept for earthquake emergency evacuation drills in assembly occupancies. The trainer can make earthquake emergency evacuation more orderly, efficient, reasonable and scientific, and thus improve the hazard-coping capacity of cities.
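    As an illustration of the kind of grid-based shortest-path step such an evacuation model can rest on, the sketch below computes a distance-to-exit field by breadth-first search; the grid encoding, exit positions and function name are assumptions for this example, not the authors' actual software.

        from collections import deque

        def evacuation_distance_field(grid, exits):
            """Breadth-first search from all exits over a walkable grid.

            grid  -- 2D list, 0 = walkable cell, 1 = wall (illustrative encoding)
            exits -- list of (row, col) exit cells
            Returns a dict mapping each reachable cell to its step distance to the
            nearest exit; simulated agents can then move to the neighbouring cell
            with the smallest distance on each tick.
            """
            dist = {e: 0 for e in exits}
            queue = deque(exits)
            rows, cols = len(grid), len(grid[0])
            while queue:
                r, c = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and grid[nr][nc] == 0 and (nr, nc) not in dist):
                        dist[(nr, nc)] = dist[(r, c)] + 1
                        queue.append((nr, nc))
            return dist

        # Example: a 4x5 room with two interior walls and one exit at (0, 0).
        room = [[0, 0, 0, 0, 0],
                [0, 1, 1, 0, 0],
                [0, 0, 0, 0, 0],
                [0, 0, 1, 0, 0]]
        field = evacuation_distance_field(room, exits=[(0, 0)])
        print(field[(3, 4)])  # steps from the far corner to the exit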

  7. Post-Colonization Interval Estimates Using Multi-Species Calliphoridae Larval Masses and Spatially Distinct Temperature Data Sets: A Case Study

    PubMed Central

    Weatherbee, Courtney R.; Pechal, Jennifer L.; Stamper, Trevor; Benbow, M. Eric

    2017-01-01

    Common forensic entomology practice has been to collect the largest Diptera larvae from a scene and use published developmental data, with temperature data from the nearest weather station, to estimate larval development time and post-colonization intervals (PCIs). To evaluate the accuracy of PCI estimates among Calliphoridae species and spatially distinct temperature sources, larval communities and ambient air temperature were collected at replicate swine carcasses (N = 6) throughout decomposition. Expected accumulated degree hours (ADH) associated with Cochliomyia macellaria and Phormia regina third instars (presence and length) were calculated using published developmental data sets. Actual ADH ranges were calculated using temperatures recorded from multiple sources at varying distances (0.90 m–7.61 km) from the study carcasses: individual temperature loggers at each carcass, a local weather station, and a regional weather station. Third instars greatly varied in length and abundance. The expected ADH range for each species successfully encompassed the average actual ADH for each temperature source, but overall under-represented the range. For both calliphorid species, weather station data were associated with more accurate PCI estimates than temperature loggers associated with each carcass. These results provide an important step towards improving entomological evidence collection and analysis techniques, and developing forensic error rates. PMID:28375172
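    For readers unfamiliar with the metric, accumulated degree hours (ADH) are conventionally computed by summing the hourly excess of temperature over the species' developmental base threshold; the sketch below illustrates that convention with placeholder values (the base temperature and readings are not data from this case study).

        def accumulated_degree_hours(hourly_temps_c, base_temp_c):
            """Sum the hourly excess of temperature over the developmental base
            threshold; hours at or below the threshold contribute zero."""
            return sum(max(t - base_temp_c, 0.0) for t in hourly_temps_c)

        # Illustrative only: 24 hourly readings (degC) and a 10 degC base threshold.
        temps = [12.0, 11.5, 11.0, 10.5, 10.0, 11.0, 13.0, 15.5,
                 18.0, 20.5, 22.0, 23.5, 24.0, 24.5, 24.0, 23.0,
                 21.5, 19.5, 17.5, 16.0, 14.5, 13.5, 13.0, 12.5]
        print(accumulated_degree_hours(temps, base_temp_c=10.0))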

  8. A comparison of moving object detection methods for real-time moving object detection

    NASA Astrophysics Data System (ADS)

    Roshan, Aditya; Zhang, Yun

    2014-06-01

    Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification and face detection to military surveillance. Many methods have been developed for moving object detection, but it is very difficult to find one that works in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods that can be implemented in software on a desktop or laptop for real-time object detection. Several moving object detection methods are noted in the literature, but few of them are suitable for real-time moving object detection, and most of the methods that do run in real time are further limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: the background subtraction technique, the Gaussian mixture model, and wavelet-based and optical-flow-based methods. The work is based on evaluating these four methods using two different sets of cameras and two different scenes. The methods were implemented in MATLAB, and the results are compared based on completeness of detected objects, noise, sensitivity to lighting changes, processing time, etc. The comparison shows that the optical-flow-based method took the least processing time and successfully detected the boundaries of moving objects, which implies that it can be implemented for real-time moving object detection.
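    To make the simplest of the compared approaches concrete, the sketch below implements background subtraction against a running-average background on synthetic frames; the update rate, threshold and test data are illustrative assumptions, not the MATLAB implementation evaluated in the paper.

        import numpy as np

        def detect_moving_pixels(frames, alpha=0.05, threshold=25):
            """Running-average background subtraction.

            frames    -- iterable of grayscale frames (2D uint8 arrays)
            alpha     -- background update rate (illustrative value)
            threshold -- absolute intensity difference counted as motion
            Yields a boolean motion mask for each frame.
            """
            background = None
            for frame in frames:
                f = frame.astype(np.float32)
                if background is None:
                    background = f
                mask = np.abs(f - background) > threshold
                # Update the background model slowly toward the current frame.
                background = (1 - alpha) * background + alpha * f
                yield mask

        # Synthetic example: a bright 5x5 block moving across a dark noisy scene.
        rng = np.random.default_rng(0)
        frames = []
        for t in range(10):
            img = rng.integers(0, 20, size=(60, 80), dtype=np.uint8)
            img[20:25, 5 + 6 * t:10 + 6 * t] = 200
            frames.append(img)
        for t, mask in enumerate(detect_moving_pixels(frames)):
            print(t, int(mask.sum()))  # number of pixels flagged as moving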

  9. Suicide, accident? The importance of the scene investigation.

    PubMed

    Ermenc, B; Prijon, T

    2005-01-17

    We present the as yet unresolved case of the death by gunshot wound of a 21-year-old student, based on a recent on-site inspection. It was reported that the daughter of the house had been shot through the window while she was washing the dishes. Slight discrepancies were noted in the statements of the family, who are very religious. The firearm, projectile and cartridge have not been found despite an intensive search. The daughter and the mother tested positive for traces of gunpowder on their hands, while in the case of the son, traces were found on his hands and on his vest. That the trajectory of the projectile was from the kitchen outwards was established on the basis of a small hole in the inner pane of the kitchen window and a larger hole in the outer pane. The shot passed through the victim's cheek and the neck. The entrance wound (aditus) on the right cheek had complementary features characteristic of a gunshot from a short-barrelled firearm at relatively close range. The shot passed through the left jugular vein and the left internal carotid artery. The exit wound (exitus) was slightly larger and of irregular shape. The family chose a traditional burial. The mother and son did not present themselves for polygraph testing. A charge was filed against the mother of the deceased. Emphasis was placed on the scene investigation. A covered-up suicide? An accident (a scuffle when trying to prevent suicide)?

  10. Investigation of several aspects of LANDSAT-4 data quality. [Sacramento, San Francisco, and NE Arkansas

    NASA Technical Reports Server (NTRS)

    Wrigley, R. C. (Principal Investigator)

    1984-01-01

    The Thematic Mapper scene of Sacramento, CA, acquired during the TDRSS test was received in TIPS format. Quadrants of both scenes were tested for band-to-band registration using reimplemented block-correlation techniques. Summary statistics for band-to-band registrations of TM band combinations are tabulated for Quadrant 4 of the NE Arkansas scene in TIPS format, as well as for Quadrant 1 of the Sacramento scene. The system MTF analysis for the San Francisco scene was completed. The thermal band did not have sufficient contrast for the targets used and was not analyzed.
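    As an illustration of the block-correlation idea used for band-to-band registration, the sketch below finds the integer shift that maximizes the normalized cross-correlation between a chip from one band and a search window in another; the chip size, search radius and synthetic data are assumptions for this example, not the reimplemented technique of the report.

        import numpy as np

        def best_offset(band_a, band_b, top, left, size=32, search=4):
            """Return the (dy, dx) shift of a chip from band_a that best matches
            band_b, by maximizing normalized cross-correlation over a small
            search window."""
            chip = band_a[top:top + size, left:left + size].astype(float)
            chip = (chip - chip.mean()) / (chip.std() + 1e-9)
            best, best_score = (0, 0), -np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = band_b[top + dy:top + dy + size,
                                  left + dx:left + dx + size].astype(float)
                    cand = (cand - cand.mean()) / (cand.std() + 1e-9)
                    score = float((chip * cand).mean())
                    if score > best_score:
                        best, best_score = (dy, dx), score
            return best, best_score

        # Synthetic test: band_b is band_a shifted down 1 row and right 2 columns.
        rng = np.random.default_rng(1)
        band_a = rng.normal(size=(128, 128))
        band_b = np.roll(np.roll(band_a, 1, axis=0), 2, axis=1)
        print(best_offset(band_a, band_b, top=48, left=48))  # expect ((1, 2), ~1.0)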

  11. A qualitative approach for recovering relative depths in dynamic scenes

    NASA Technical Reports Server (NTRS)

    Haynes, S. M.; Jain, R.

    1987-01-01

    This approach to dynamic scene analysis is a qualitative one. It computes relative depths using very general rules. The depths calculated are qualitative in the sense that the only information obtained is which object is in front of which others. The motion is qualitative in the sense that the only required motion data is whether objects are moving toward or away from the camera. Reasoning, which takes into account the temporal character of the data and the scene, is qualitative. This approach to dynamic scene analysis can tolerate imprecise data because in dynamic scenes the data are redundant.
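    To give a flavour of such qualitative reasoning, the sketch below combines pairwise "in front of" facts (for example, derived from observed occlusions) into their transitive closure; the rule set and event encoding are illustrative assumptions, not the authors' actual system.

        def relative_depth_order(in_front_pairs):
            """Given facts (a, b) meaning 'a occludes, hence is in front of, b',
            return the transitive closure as a set of qualitative relations."""
            closed = set(in_front_pairs)
            changed = True
            while changed:
                changed = False
                for a, b in list(closed):
                    for c, d in list(closed):
                        if b == c and (a, d) not in closed:
                            closed.add((a, d))
                            changed = True
            return closed

        # Illustrative occlusion events from a short image sequence.
        events = [("car", "tree"), ("tree", "building")]
        print(relative_depth_order(events))
        # adds ('car', 'building') to the two observed relations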

  12. Robotic vision. [process control applications

    NASA Technical Reports Server (NTRS)

    Williams, D. S.; Wilf, J. M.; Cunningham, R. T.; Eskenazi, R.

    1979-01-01

    Robotic vision, involving the use of a vision system to control a process, is discussed. The design and selection of active sensors, which employ radio waves, sound waves, or laser light, respectively, to light up otherwise unobservable features in the scene, are considered, as are the design and selection of passive sensors, which rely on external sources of illumination. The segmentation technique, by which an image is separated into different collections of contiguous picture elements sharing common characteristics such as color, brightness, or texture, is examined, with emphasis on edge detection. The IMFEX (image feature extractor) system, which performs edge detection and thresholding at 30 frames/sec television frame rates, is described. The template matching and discrimination approaches to object recognition are noted. Applications of robotic vision in industry, for tasks too monotonous or too dangerous for human workers, are mentioned.
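    A minimal sketch of the edge-detection-plus-thresholding step described above, using Sobel gradients on a synthetic scene; the kernel choice and threshold are illustrative assumptions, and this is not the IMFEX hardware pipeline.

        import numpy as np
        from scipy import ndimage

        def edge_mask(image, threshold=0.5):
            """Sobel gradient magnitude followed by a fixed threshold
            (illustrative value), returning a boolean edge map."""
            img = image.astype(float)
            gx = ndimage.sobel(img, axis=1)
            gy = ndimage.sobel(img, axis=0)
            magnitude = np.hypot(gx, gy)
            magnitude /= magnitude.max() + 1e-9
            return magnitude > threshold

        # Synthetic scene: a bright square on a dark background.
        scene = np.zeros((64, 64))
        scene[20:44, 20:44] = 1.0
        edges = edge_mask(scene)
        print(int(edges.sum()))  # number of pixels flagged as edges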

  13. GOVERNING GENETIC DATABASES: COLLECTION, STORAGE AND USE

    PubMed Central

    Gibbons, Susan M.C.; Kaye, Jane

    2008-01-01

    This paper provides an introduction to a collection of five papers, published as a special symposium journal issue, under the title: “Governing Genetic Databases: Collection, Storage and Use”. It begins by setting the scene, to provide a backdrop and context for the papers. It describes the evolving scientific landscape around genetic databases and genomic research, particularly within the biomedical and criminal forensic investigation fields. It notes the lack of any clear, coherent or coordinated legal governance regime, either at the national or international level. It then identifies and reflects on key cross-cutting issues and themes that emerge from the five papers, in particular: terminology and definitions; consent; special concerns around population genetic databases (biobanks) and forensic databases; international harmonisation; data protection; data access; boundary-setting; governance; and issues around balancing individual interests against public good values. PMID:18841252

  14. Homicide-suicide and duty to warn.

    PubMed

    Burgess, Ann W; Sekula, L Kathleen; Carretta, Carrie M

    2015-03-01

    This retrospective study of medical examiner records from three counties reported on 252 persons who killed 302 victims before killing themselves, and reviewed the Tarasoff ruling that set the standard for the duty to warn and/or protect third parties whose lives are threatened by a patient. The three sites varied significantly for the perpetrator in terms of race, employment, cause of death, and motive. Female offenders killed more children under the age of 10 and more adolescents than did male offenders. Evidence of premeditation included suicide notes and a weapon brought to the crime scene, while strangulation indicated a spontaneous domestic homicide. Implications for practice are discussed, including the importance of evaluating violent thoughts, fantasies, and behaviors in acute emergency settings; recommendations include second-opinion consultation for Tarasoff-type cases and psychological autopsy review for completed homicide-suicide cases.

  15. Validity and reliability of naturalistic driving scene categorization Judgments from crowdsourcing.

    PubMed

    Cabrall, Christopher D D; Lu, Zhenji; Kyriakidis, Miltos; Manca, Laura; Dijksterhuis, Chris; Happee, Riender; de Winter, Joost

    2018-05-01

    A common challenge with processing naturalistic driving data is that humans may need to categorize great volumes of recorded visual information. By means of the online platform CrowdFlower, we investigated the potential of crowdsourcing to categorize driving scene features (i.e., presence of other road users, straight road segments, etc.) at greater scale than a single person or a small team of researchers would be capable of. In total, 200 workers from 46 different countries participated in 1.5 days. Validity and reliability were examined, both with and without embedding researcher-generated control questions via the CrowdFlower mechanism known as Gold Test Questions (GTQs). By employing GTQs, we found significantly more valid (accurate) and reliable (consistent) identification of driving scene items from external workers. Specifically, in a small-scale CrowdFlower Job of 48 three-second video segments, an accuracy (i.e., relative to the ratings of a confederate researcher) of 91% on items was found with GTQs compared to 78% without. A difference in bias was found: without GTQs, external workers returned more false positives than with GTQs. In a larger-scale CrowdFlower Job making exclusive use of GTQs, 12,862 three-second video segments were released for annotation. Because it was infeasible (and self-defeating) to check the accuracy of each categorization at this scale, a random subset of 1012 categorizations was validated and returned similar levels of accuracy (95%). In the small-scale Job, where full video segments were repeated in triplicate, the percentage of unanimous agreement on the items was significantly more consistent when using GTQs (90%) than without them (65%). Additionally, in the larger-scale Job (where a single second of a video segment was overlapped by ratings of three sequentially neighboring segments), a mean unanimity of 94% was obtained with validated-as-correct ratings and 91% with non-validated ratings. Because the video segments overlapped in full for the small-scale Job and only in part for the larger-scale Job, the reliability figures reported here may not be directly comparable; nonetheless, both results indicate high levels of rating reliability. Overall, our results provide compelling evidence that CrowdFlower, via the use of GTQs, can yield more accurate and consistent crowdsourced categorizations of naturalistic driving scene contents than when used without such a control mechanism. Such annotations obtained in such short periods of time present a potentially powerful resource in driving research and driving automation development. Copyright © 2017 Elsevier Ltd. All rights reserved.
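    The two statistics discussed above are simple to compute; the sketch below shows accuracy against researcher-generated gold answers and the share of items rated unanimously by three workers, on toy data that is purely illustrative of the structure, not the study's records.

        def accuracy_against_gold(worker_answers, gold_answers):
            """Fraction of gold-test items the worker answered correctly."""
            hits = sum(worker_answers[item] == truth
                       for item, truth in gold_answers.items())
            return hits / len(gold_answers)

        def unanimity_rate(triplicate_ratings):
            """Fraction of items on which all three raters agreed."""
            unanimous = sum(len(set(ratings)) == 1 for ratings in triplicate_ratings)
            return unanimous / len(triplicate_ratings)

        # Illustrative toy data only.
        gold = {"seg01": "road_user", "seg02": "no_road_user", "seg03": "road_user"}
        worker = {"seg01": "road_user", "seg02": "road_user", "seg03": "road_user"}
        print(accuracy_against_gold(worker, gold))                  # 2 of 3 correct
        print(unanimity_rate([("a", "a", "a"), ("a", "b", "a")]))   # 1 of 2 unanimous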

  16. Eye Movements and Visual Memory for Scenes

    DTIC Science & Technology

    2005-01-01

    Scene memory research has demonstrated that the memory representation of a semantically inconsistent object in a scene is more detailed and/or complete... memory during scene viewing, then changes to semantically inconsistent objects (which should be represented more completely) should be detected more... semantic description. Due to the surprise nature of the visual memory test, any learning that occurred during the search portion of the experiment was...

  17. Boundary layer and fundamental problems of hydrodynamics (compatibility of a logarithmic velocity profile in a turbulent boundary layer with the experience values)

    NASA Astrophysics Data System (ADS)

    Zaryankin, A. E.

    2017-11-01

    The compatibility of L. Prandtl's semiempirical turbulence theory with the actual flow pattern in a turbulent boundary layer is considered in this article, and the final boundary-layer calculation results based on that theory are analyzed. It is shown that the additional conditions and relationships adopted in order to integrate Prandtl's differential equation, which relates the turbulent stresses in the boundary layer to the transverse velocity gradient, are satisfied only in the near-wall region, where that equation loses its meaning, and are physically inconsistent over the main part of the integration domain. It is noted that the concept of a laminar sublayer between the wall and the turbulent boundary layer was introduced as a way of giving physical meaning to the logarithmic velocity profile, and can be regarded as fitting the actual flow to a formula that is inconsistent with the actual boundary conditions. It is shown that the agreement of the experimental data with the logarithmic profile is obtained because the argument used is not a particular physical quantity itself but a function of that quantity.
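    For reference, the logarithmic velocity profile at issue is commonly written in wall units as follows (a textbook form with the customary constants, not a formula quoted from the article):

        u^{+} = \frac{1}{\kappa}\,\ln y^{+} + B, \qquad
        u^{+} = \frac{u}{u_{\tau}}, \quad
        y^{+} = \frac{y\,u_{\tau}}{\nu}, \quad
        u_{\tau} = \sqrt{\tau_{w}/\rho},

    with the von Karman constant \kappa \approx 0.41 and B \approx 5.0 for smooth walls.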

  18. Cerebral Correlates of Emotional and Action Appraisals During Visual Processing of Emotional Scenes Depending on Spatial Frequency: A Pilot Study.

    PubMed

    Campagne, Aurélie; Fradcourt, Benoit; Pichat, Cédric; Baciu, Monica; Kauffmann, Louise; Peyrin, Carole

    2016-01-01

    Visual processing of emotional stimuli critically depends on the type of cognitive appraisal involved. The present fMRI pilot study aimed to investigate the cerebral correlates involved in the visual processing of emotional scenes in two tasks, one emotional, based on the appraisal of personal emotional experience, and the other motivational, based on the appraisal of the tendency to action. Given that the use of spatial frequency information is relatively flexible during the visual processing of emotional stimuli depending on the task's demands, we also explored the effect of the type of spatial frequency in visual stimuli in each task by using emotional scenes filtered in low spatial frequency (LSF) and high spatial frequencies (HSF). Activation was observed in the visual areas of the fusiform gyrus for all emotional scenes in both tasks, and in the amygdala for unpleasant scenes only. The motivational task induced additional activation in frontal motor-related areas (e.g. premotor cortex, SMA) and parietal regions (e.g. superior and inferior parietal lobules). Parietal regions were recruited particularly during the motivational appraisal of approach in response to pleasant scenes. These frontal and parietal activations, respectively, suggest that motor and navigation processes play a specific role in the identification of the tendency to action in the motivational task. Furthermore, activity observed in the motivational task, in response to both pleasant and unpleasant scenes, was significantly greater for HSF than for LSF scenes, suggesting that the tendency to action is driven mainly by the detailed information contained in scenes. Results for the emotional task suggest that spatial frequencies play only a small role in the evaluation of unpleasant and pleasant emotions. Our preliminary study revealed a partial distinction between visual processing of emotional scenes during identification of the tendency to action, and during identification of personal emotional experiences. It also illustrates flexible use of the spatial frequencies contained in scenes depending on their emotional valence and on task demands.
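    As an illustration of how LSF and HSF stimuli of this kind are typically produced, the sketch below applies a Gaussian low-pass or high-pass filter in the Fourier domain; the cutoff value and parameterization are assumptions for this example, not the settings used in the study.

        import numpy as np

        def spatial_frequency_filter(image, cutoff_cycles, keep="low"):
            """Gaussian low-pass or high-pass filtering in the Fourier domain.

            cutoff_cycles -- Gaussian width of the filter in cycles per image
                             (illustrative parameterization)
            keep          -- "low" returns the LSF version, "high" the HSF version
            """
            img = image.astype(float)
            fy = np.fft.fftfreq(img.shape[0]) * img.shape[0]
            fx = np.fft.fftfreq(img.shape[1]) * img.shape[1]
            radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
            lowpass = np.exp(-(radius ** 2) / (2.0 * cutoff_cycles ** 2))
            mask = lowpass if keep == "low" else 1.0 - lowpass
            return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

        # Illustrative use on a random "scene".
        scene = np.random.default_rng(2).normal(size=(128, 128))
        lsf = spatial_frequency_filter(scene, cutoff_cycles=8, keep="low")
        hsf = spatial_frequency_filter(scene, cutoff_cycles=8, keep="high")
        print(round(lsf.std(), 3), round(hsf.std(), 3))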

  19. Effect of Viewing Smoking Scenes in Motion Pictures on Subsequent Smoking Desire in Audiences in South Korea.

    PubMed

    Sohn, Minsung; Jung, Minsoo

    2017-07-17

    In the modern era of heightened awareness of public health, smoking scenes in movies remain relatively free from public monitoring. The effect of smoking scenes in movies on the promotion of viewers' smoking desire remains unknown. The study aimed to explore whether exposure of adolescent smokers to images of smoking in films could stimulate smoking behavior. Data were derived from a national Web-based sample survey of 748 Korean high-school students. Participants aged 16-18 years were randomly assigned to watch three short video clips with or without smoking scenes. After adjusting covariates using propensity score matching, paired-sample t test and logistic regression analyses compared the difference in smoking desire before and after exposure of participants to smoking scenes. For male adolescents, cigarette craving was significantly higher in those who watched movies with smoking scenes than in the control group who did not view smoking scenes (t(307.96)=2.066, P<.05). In the experimental group, too, cigarette cravings of adolescents after viewing smoking scenes were significantly higher than they were before watching smoking scenes (t(161.00)=2.867, P<.01). After adjusting for covariates, more impulsive adolescents, particularly males, had significantly higher cigarette cravings: adjusted odds ratio (aOR) 3.40 (95% CI 1.40-8.23). However, those who actively sought health information had considerably lower cigarette cravings than those who did not engage in information-seeking: aOR 0.08 (95% CI 0.01-0.88). Smoking scenes in motion pictures may increase male adolescent smoking desire. Establishing a standard that restricts the frequency of smoking scenes in films and assigning a smoking-related screening grade to films is warranted. ©Minsung Sohn, Minsoo Jung. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 17.07.2017.
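    As a sketch of how an adjusted odds ratio of the kind reported above can be obtained from a logistic regression (assuming the statsmodels package is available), the example below fits a model on synthetic data; the variable names, covariates and generated values are illustrative and bear no relation to the survey's actual dataset.

        import numpy as np
        import statsmodels.api as sm

        # Synthetic, illustrative data only.
        rng = np.random.default_rng(3)
        n = 500
        impulsive = rng.integers(0, 2, n)      # 1 = high impulsivity (assumed coding)
        info_seek = rng.integers(0, 2, n)      # 1 = actively seeks health information
        logit = -0.5 + 1.2 * impulsive - 1.5 * info_seek
        craving = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

        X = sm.add_constant(np.column_stack([impulsive, info_seek]))
        fit = sm.Logit(craving, X).fit(disp=False)
        odds_ratios = np.exp(fit.params)       # adjusted odds ratio per predictor
        conf_int = np.exp(fit.conf_int())      # 95% confidence intervals
        print(odds_ratios)
        print(conf_int)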

  20. "Getting out of downtown": a longitudinal study of how street-entrenched youth attempt to exit an inner city drug scene.

    PubMed

    Knight, Rod; Fast, Danya; DeBeck, Kora; Shoveller, Jean; Small, Will

    2017-05-02

    Urban drug "scenes" have been identified as important risk environments that shape the health of street-entrenched youth. New knowledge is needed to inform policy and programming interventions to help reduce youths' drug scene involvement and related health risks. The aim of this study was to identify how young people envisioned exiting a local, inner-city drug scene in Vancouver, Canada, as well as the individual, social and structural factors that shaped their experiences. Between 2008 and 2016, we drew on 150 semi-structured interviews with 75 street-entrenched youth. We also drew on data generated through ethnographic fieldwork conducted with a subgroup of 25 of these youth. Youth described that, in order to successfully exit Vancouver's inner-city drug scene, they would need to: (a) secure legitimate employment and/or obtain education or occupational training; (b) distance themselves, both physically and socially, from the urban drug scene; and (c) reduce their drug consumption. As youth attempted to leave the scene, most experienced substantial social and structural barriers (e.g., cycling in and out of jail, the need to access services that are centralized within a place they are trying to avoid), in addition to managing complex individual health issues (e.g., substance dependence). Factors that increased youths' capacity to successfully exit the drug scene included access to various forms of social and cultural capital operating outside of the scene, including supportive networks of friends and/or family, as well as engagement with addiction treatment services (e.g., low-threshold access to methadone) to support cessation or reduction of harmful forms of drug consumption. Policies and programming interventions that can facilitate young people's efforts to reduce engagement with Vancouver's inner-city drug scene are critically needed, including meaningful educational and/or occupational training opportunities, 'low threshold' addiction treatment services, as well as access to supportive housing outside of the scene.
